The Future of AI and the Rising Ethical Questions


Artificial intelligence (AI) is at the very cutting edge of industry, but how much do we really understand about it? There are several questions we should be asking, considering that AI could one day become a thinking and, more importantly, a feeling reality.

If strong AI, the effort to develop artificial intelligence to the point where a machine’s intellectual capability is functionally equal to a human’s, is ever realised, then the machine would in effect be able to think intelligently and could potentially feel. What questions should we be asking? If thinking and feeling machines become a reality, the dynamic between human and machine would become that of slave and owner. Based on history, would this dynamic be acceptable?

Would we be able to separate intelligence from emotion? A real ethical question is whether the machines of the future should be allowed to become sentient. Even if it lies in the very distant future, it is widely accepted that such AI is coming. The conversation has moved from the 1950s, when science fiction led the way with Asimov’s idea of the positronic brain, to 2018, with attempts to apply reinforcement learning to new problems, enabling machines to model human psychology in order to make better predictions, and with generative adversarial networks, in which two neural networks contest each other, requiring less human supervision and enabling computers to learn from unlabelled data, thereby making them more intelligent.
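
To make the idea of contesting networks more concrete, here is a minimal, hypothetical sketch of adversarial training on a toy problem, assuming PyTorch is available. Everything in it (the tiny networks, the one-dimensional data, the hyperparameters) is illustrative only and does not describe any particular system mentioned above.

import torch
import torch.nn as nn

# Generator: maps random noise to synthetic one-dimensional samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Discriminator: estimates the probability that a sample came from the real data.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Unlabelled "real" data: samples drawn from an unknown distribution (here N(3, 1)).
    real = torch.randn(64, 1) + 3.0
    fake = generator(torch.randn(64, 8))

    # Train the discriminator to separate real samples from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator; no human-provided labels are needed.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

The only supervision signal in this sketch is the contest between the two networks, which is why the approach is described as needing less human supervision than systems trained on hand-labelled data.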

We spoke to Dr. Eduardo Castello Ferrer, and here is what he had to say:

I am a postdoctoral fellow at the MIT Media Lab, and before coming here, I did my master’s and PhD degrees in robotics engineering at Osaka University (Japan). I worked there for seven years in the lab of Prof. Ishiguro, a famous humanoid roboticist, and specialised in swarm robotic systems. This brought me to the Media Lab. At the beginning of my stay, I worked with a lab focused on agricultural robotics. During that time, I co-developed a robot called the Personal Food Computer. What I do right now is conduct research on all sorts of distributed robotic systems. For instance, I am currently exploring the synergy between blockchain and swarm robotic systems.

What we call “weak AI” is a set of mathematical equations and algorithms that try to analyse and optimise certain inputs to match or predict certain outputs. These algorithms have been with us for a really long time; chess-playing systems are one of the main examples. The main difference between those “old school” algorithms and current ones is that today’s systems rely on large amounts of input data to be trained and improved. In the last decade, we have witnessed exponential growth in this input data coming from cell phones and online services. In contrast, “strong AI” is a theory which states that, in the future, machines will be able to “generalise”: in other words, that a chess-playing algorithm could take the knowledge it acquired playing chess and apply it to a completely different problem, for instance, understanding people. Generalisation is a concept that has not been achieved so far, and it is not a realistic outcome with the current state of AI technology. Strong AI is very hard to imagine right now.

In order to apply the knowledge gained from playing chess to understanding people, you need a set of tools and methods that do not exist right now. New knowledge in neuroscience, biology, and computer science is required to break the limitations of current AI systems. Without this collection of important breakthroughs, it is hard to envision strong AI, simply because the tools that are required do not exist.

A plausible future for AI, in my opinion, is not a world where this technology has consumed us, but a scenario where AI has extended us. For instance, at the Media Lab we prefer to talk about EI (Extended Intelligence) rather than AI (Artificial Intelligence). Along those lines, I think that we are going to use these very precise optimisation algorithms and techniques with the correct data, maybe even with our personal data, in order to get insights and knowledge to support our decisions. For example, I envision a doctor who has to review a large number of cases and make very important decisions in a short amount of time. In those situations, AI will be extremely helpful, not for deciding whether a patient needs to go to the operating room or not (that should perhaps always remain under the authority of a physician), but for helping the doctor search extensive amounts of literature, predict the odds of recovery based on previous cases, and so on. In other words, it will extend the doctor’s capabilities in terms of memory, processing power, and prediction.

It is very difficult for me to foresee a situation where strong AI co-exists with us. Indeed, the world would be a very different place. However, if we assume (and it is a very big assumption) that the convergence of neuroscience, biology, and computer science occurs, I think that we should start a debate as a society about whether a machine (by definition a more deterministic and rigorous entity than a homo sapiens) can decide our fate as humans, even though it lacks empathy. We will definitely have to ask questions like “can a machine dictate or enforce law?”, even though law is something we traditionally interpret, not something that we compile and execute, as currently happens with code. Do we need a new “social contract” between us and the machines in order to sustain our way of living, not in the most efficient way but in the most liveable way? Do we need programmers to swear a Hippocratic oath? Since AI is becoming increasingly important in our lives, the programmers who code these algorithms must promise that they will not code anything that is biased or that violates human rights. We need to decide as a society what will be acceptable and implement this together. I think those will be the most pressing questions in our future with AI.

Robotics News, November 21