Recently, a video game has been making the rounds and gaining a lot of attention. Detroit: Become Human is a decision-based game that explores consciousness in machines and the social issues that surround it.
That’s not a new issue! In fact, we seem to come back to it again and again, whether through Hollywood movies, science-fiction novels or, in this case, a video game.
As much as this sounds like an easy way to create drama, machine consciousness is, at least in theory, a real issue that could arise at some point. How far in the future is anyone’s guess, but there are some theories out there.
Artificial General Intelligence Is Decades Behind, And Therefore Decades Away
According to Ben Medlock of bigthink.com, the idea of human-like artificial intelligence (AI) has been off-track since the 1950s. According to him, we’ve been building machines with a very different type of mind for some time, and the big reason behind this is the makeup of the human versus mechanical brain.
The human brain, he argues, is designed to predict, while a mechanical brain is designed to memorize.
For instance, you’ve probably seen pictures of dinosaurs before, and become familiar with the range of species. Using that evidence, if you were given a picture of a giant reptile, you’d probably conclude that it was a dinosaur, regardless of how different it looked.
A computer would need to be exposed to a long list of pictures of different dinosaurs before it would be able to identify the new example because it functions more on memory than prediction.
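The contrast can be caricatured in a few lines of Python. A nearest-neighbour “memorizer” can only label a new creature by comparing it against examples it has already stored, so the more varied its stored examples, the better it does. (The features and numbers below are entirely invented for illustration.)

```python
def nearest_label(example, memory):
    """Return the label of the stored example closest to `example`."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(memory, key=lambda item: distance(item[0], example))
    return best[1]

# Stored "pictures", reduced to made-up (length_m, has_scales) features.
memory = [
    ((12.0, 1), "dinosaur"),   # T. rex
    ((26.0, 1), "dinosaur"),   # Brachiosaurus
    ((0.3, 0), "mammal"),      # rat
    ((1.7, 0), "mammal"),      # human
]

# A giant scaly reptile the system has never seen before:
print(nearest_label((18.0, 1), memory))  # -> dinosaur
```

The sketch only gets the new reptile right because its memory happens to contain similar examples; strip those out and it fails, whereas a person would still reason their way to “probably a dinosaur.”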
We May Never Know For Certain If Machines Achieve True Awareness
George Zarkadakis, writing in The Huffington Post, makes an argument that questions the very nature of humanity. The endgame of artificial general intelligence is self-awareness — but how do we define that? Descartes’ classic answer, “I think, therefore I am,” ties existence to thinking itself.
The problem with using Descartes’ model is that “thinking” is too vague. What exactly counts as thinking? If thinking is narrowed down to just problem-solving, then machines have already reached the human level.
Alan Turing, in addition to helping break the German Enigma cipher in WWII, proposed a test in which a person holds a text conversation without being told whether the other party is a machine. If the person still can’t tell after the conversation, the machine has passed the test.
The problem is that even that definition has holes in it. The ability to seem human to an observer is definitely a step towards true awareness, but is it true awareness in and of itself?
Many would argue that, of course, it isn’t, and that true awareness involves an awareness of self and the ability to form emotions, goals and desires outside the realm of physical needs. This is where the Turing Test trips us up again. If we can’t tell when a robot is pretending to be human, how will we be able to tell when it has reached true self-awareness?
Self-Awareness Will Take A While, But It’s Hard To Say Exactly When It Will Happen
Some have argued that any sort of guess would be a stretch at this point. The field of AI is immensely confusing, to say the least, despite the fact that more money is being poured into it every day.
We’re definitely making progress — it seems like there’s a new step forward every day — but there’s a lot left to do. The science fiction movies and books never seem to convey exactly how difficult it is for true sentience to be achieved.
They also tend to take a very gloomy outlook, when in reality, science isn’t going in blind. Truth be told, there’s an entire field dedicated to the moral issues created by the evolution of AI.
This field is called roboethics, and its best-known examples are Asimov’s Three Laws of Robotics, which, in summary, state:
- A robot may not harm a human or, through inaction, allow a human to come to harm.
- A robot must obey any order a human gives it, unless the order conflicts with the First Law.
- A robot must protect its own existence, as long as doing so does not conflict with the First or Second Law.
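The key feature of the Three Laws is that they are ordered: each law yields to the one above it. That ordering can be sketched as a simple veto chain in Python. (The action model here — a handful of boolean flags — is entirely invented for illustration; it only shows the precedence, not how a real robot would ever evaluate harm.)

```python
def permitted(action):
    """Return True if `action` survives all three laws, checked in order."""
    # First Law: never harm a human, whether by action or by inaction.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action["disobeys_order"] and not action["order_harms_human"]:
        return False
    # Third Law: self-protection is implicitly allowed once Laws 1-2 hold.
    return True

# Refusing an order is normally forbidden (Second Law)...
print(permitted({"harms_human": False,
                 "disobeys_order": True,
                 "order_harms_human": False}))  # -> False

# ...but refusing an order that would harm a human is required (First Law wins).
print(permitted({"harms_human": False,
                 "disobeys_order": True,
                 "order_harms_human": True}))   # -> True
```

Even this toy version exposes the classic difficulty: everything hinges on the robot correctly deciding what counts as “harm” in the first place, which is exactly where the questions begin.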
These laws are the best current example we have, but even they raise a few questions.
We’re Not Even Close
In a survey conducted in 2016, 352 experts from various positions in the field were asked when machines would be able to perform any task better than a human. The answers were somewhat startling: on average, the experts put that milestone around 50 years away for everything from mathematics to surgery to writing novels. Really?
Artificial general intelligence as a concept is often misleading: all around us we see machines beating humans at various games, and we panic. The problem is that those machines were purpose-built for exactly the games they won.
In terms of general intelligence, machines are way behind. Facebook’s head of AI, Yann LeCun, has said that in terms of unfocused, general-purpose intelligence, machines currently lag behind even rats. (And note that rats are actually quite smart.)
Will artificial general intelligence be the nightmare some predict?
The biggest factor that people seem to ignore is that humans are generally intelligent creatures, and we’ve started who knows how many fights, wars and debates. AI is often shown in movies working like a hive mind, with the whole species supporting the same cause, but that’s not likely. General intelligence means drawing your own conclusions from the data in front of you, which means that any robot with true intelligence would be perfectly capable of forming its own opinions.
Rise Of The Robots?
Truth be told, the threat of artificial general intelligence is a bit overblown, to say the least. Most would agree that we’re not as close as many think, and that AGI doesn’t necessarily mean the end of the world. After all, genuine intelligence means an understanding of complex situations — including the ability to recognize that only some humans, not all of us, are cruel and potentially evil.