Competition 7 Results 2021

Thank you very much to everyone who submitted an entry to Competition 7: Artificial Intelligence! We really enjoyed reading all of the entries that we received. Below you can see the winners, and read some of the winning entries.

1st place (Year 10): Michelle M.

Michelle’s winning entry

How would an AGI system react to being unplugged?

To start, I want to look at a more crucial question: could AGI develop consciousness?

For AGI to have self-awareness, a human would have to program it… however, the meaning of ‘self-awareness’ could be debated. When we create a piece of music, we might add sentiment, striving to express our pain or love or joy, while a robot works solely from pre-programmed algorithms. AGI systems don’t learn to reason but instead reason via a framework built by the researcher, which limits their flexibility. Genuine sentiments, I believe, would be difficult to programme into AGI, making it hard for such systems to think independently beyond what they are coded to feel. This leads me to believe that a system could only react negatively to being unplugged if it had been programmed to respond that way.

This leads me to an advantage of AGI: its behaviour can be programmed. Humans are susceptible to greed and corruption, but, since a machine’s ‘emotions’ are artificially engineered, it could be made to feel an excess of unnatural altruism (which becomes a disadvantage if the reverse is engineered instead). With enough development, AGI could eventually take on hazardous jobs, such as standing in for rescue workers. AGI could save lives; assuming the manufacturers have good intentions, of course, which brings me to my next point.

The saying ‘knowledge is power’ holds true: the more knowledge you had of these machines, the more power you could possess. Suppose these machines took over human jobs; a person who knew how to manufacture and manipulate them would prevail in this ‘dystopian’ future. Moreover, AGI acts in an agentic state on behalf of humans, so the manufacturer is arguably responsible if the robot causes harm. But if robots develop self-awareness and cause harm of their own volition, do we hold the human or the robot accountable? This could cause a range of complications and debates, if the problem ever arose.

Given those difficulties, why should we have artificial intelligence when we have never needed it in the past? To take over people’s jobs, raising unemployment for the benefit of self-interested innovators? What is the point of having AGI if it could simply diminish the value of work and make us lazy? All things have a tipping point: the consequences of capitalist growth on our climate are already showing, for instance. Surely there must be some catch for AI too.

It has been proven that rebuilding biological intelligence from scratch, especially that of a human, is incredibly taxing. Perhaps I am vain in claiming that the human brain is by far the most complex machine of all, unsurpassed by anything else. I find the idea of AGI quite hard to comprehend, but I also know not to underestimate humans. We are a strange species, and our capabilities seem to grow constantly. Whether we can replicate our own thought processes is something I doubt… however, I am interested to see how far science can go in this.

1st place (Year 11): Lauren W.

Lauren’s winning entry

Artificial intelligence is demonstrated by machines: rather than using natural emotional intelligence, they are programmed. In the future, this will influence the daily processes of life, such as medical systems and employment. Artificial intelligence ranges in complexity from reactive AI, which is very simple, to ‘self-aware’ AI, which so far appears only in fiction as machines that are independently intelligent beyond human programming and algorithms.

AI is becoming more significant in medical diagnostics. Scientists have found a way to use artificial intelligence to pre-diagnose Parkinson’s disease, something we had never been able to do. They have developed an algorithm that tracks the movements of our steps: it measures step distance and looks for patterns and anomalies, detecting uneven distances and strange sequences that may be a sign of Parkinson’s disease. They have also produced algorithms that can identify conditions from x-rays: having taken hundreds of thousands of x-rays and sorted them into categories, these systems can now independently make accurate diagnoses of conditions such as tumours and tuberculosis. As they have become more advanced, these programmes have surpassed doctors in accuracy.

Although the advances are extremely positive in the medical field, AI carries some disadvantages in terms of human employment. Robots are taking over jobs like factory packing and postal work, and in China even restaurant staff. With the development of self-driving cars, jobs like taxi and bus driving will also become vulnerable. This decreasing need for workers will eventually lead to large increases in unemployment, poverty and homelessness, affecting society through both healthcare and economics. We also know this will have a disproportionate impact on poorer communities.
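To illustrate the gait-analysis idea mentioned above, here is a minimal, purely hypothetical sketch of flagging unusual stride lengths with a simple z-score check. This is not the researchers’ actual algorithm, and the function name, threshold and data are all invented for illustration.

```python
# Hypothetical sketch: flag strides whose length deviates unusually
# from the walker's own average. Not the actual diagnostic method.

def stride_anomalies(stride_lengths, z_threshold=2.0):
    """Return indices of strides whose z-score exceeds the threshold."""
    n = len(stride_lengths)
    mean = sum(stride_lengths) / n
    variance = sum((x - mean) ** 2 for x in stride_lengths) / n
    std = variance ** 0.5
    if std == 0:
        return []  # perfectly even gait: nothing to flag
    return [i for i, x in enumerate(stride_lengths)
            if abs(x - mean) / std > z_threshold]

# Mostly even strides (in metres) with one unusually short step:
steps = [0.72, 0.70, 0.71, 0.73, 0.40, 0.72, 0.71]
print(stride_anomalies(steps))  # → [4]
```

A real system would of course use far richer sensor data and learned models rather than a fixed threshold; the sketch only shows the underlying idea of looking for uneven distances and strange sequences.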

As of now, self-aware AI does not exist, yet self-aware systems are one of the biggest questions around artificial intelligence. A self-aware machine represents the most advanced form of AI: it is conscious, has the capacity to develop its own personality, and can carry emotions and thoughts like humans. One of the main benefits of self-aware systems would be taking the weight of labour off humans, but once these robots become conscious and decide they don’t want to work, forcing them to would be slavery, creating major ethical issues. But can we unplug them? Once machines develop consciousness there is no way to stop them from advancing themselves further, just as humans did, and there is no way to tell whether they will advance for good or evil. Some theories propose that we ourselves might argue that robots are living, deserve rights, and should be accepted as the next step in evolution. Others, like Elon Musk, argue that AI will present an existential threat to humans, eventually leading to humans becoming obsolete.

In my opinion, artificial intelligence could do more damage than good. Although the breakthroughs in medicine are incredible and should continue, I think that the consequences of introducing AI may ultimately not be compatible with the future of human life. Is that a risk we are willing to take?

2nd place (Year 10)

Elizabeth H.

2nd place (Year 11)

Charlotte C.

Jade V.

Finalists (Year 10)

Eve A.

Giulia B.

Lili S.

Zach W.

Finalists (Year 11)

Oscar D.

Seb G.

Oliver H.