Neurala: Interview with Max Versace

Neurala is the company behind The Neurala Brain, deep learning neural network software that makes smart products like cameras, robots, drones, toys and self-driving cars more autonomous and useful.

Unlike most AI software, which is designed for supercomputers connected to the internet, Neurala’s first project was for NASA, for use in autonomous planetary exploration. NASA wanted a “mini-brain” able to drive a Mars Rover autonomously in an uncharted environment. The task: explore, perceive, navigate, and go back home. Supercomputers were not available. Battery life was limited. Fast internet access was impossible, and GPS was unavailable. The deep learning neural networks had to be lightweight and perform in real time without ground intervention. With these constraints, Neurala modelled “The Neurala Brain” on animal brains—highly sophisticated “computers” that perform all the functions needed for autonomous exploration much more efficiently than traditional robotic approaches.

This approach worked, and today the company is bringing The Neurala Brain to market. Neurala’s smart, fast brain works on systems ranging from single-board computers and large servers to toys and enterprise drones.

We spoke to Massimiliano Versace, CEO of Neurala, and here is what he had to say:

Neurala is a Boston-based company, started back in 2006 as a container for intellectual property that we three co-founders—an American, a Russian and an Italian—came up with while working on our Ph.D.s at Boston University. Back then, we were working on machine learning, neural networks and the emulation of brain functioning in software when we hit on an idea for how to accelerate our algorithms so that they could run at a scale and speed useful for real-time deployment in robotics. We filed a patent on using graphics processing units (GPUs), which had just started to become programmable. The basic idea was this: instead of computing “pixels,” the GPUs computed “neurons”—millions of them, all in parallel. It was prophetic: today, GPUs are the main fuel of the hardware revolution in robotics.

After incorporating, we worked on several side projects while completing our studies, until Neurala became too big to merely be a side project. We worked with NASA, the United States Air Force and some private customers back in 2010, and in 2013 we joined the TechStars start-up accelerator in Boston. Since then we have raised about $16M in venture funding. Today we are engaging in bigger and bigger deals for large-scale deployment of our technology to different devices, from small toy robots and consumer devices to industrial drones and automotive applications.
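As a concrete illustration of that idea, here is a minimal NumPy sketch of what computing “neurons” instead of “pixels” means: a whole layer of simulated neurons updated in one data-parallel operation, exactly the workload a GPU spreads across thousands of cores. All names and sizes below are illustrative, and NumPy stands in for the GPU.

```python
import numpy as np

# A sketch of "computing neurons instead of pixels": one layer of a
# million simulated neurons updated in a single data-parallel matrix
# operation, the workload GPUs excel at. NumPy stands in for the GPU
# here (a drop-in array library such as CuPy would run the identical
# line on graphics hardware); all sizes are illustrative.

rng = np.random.default_rng(0)

n_inputs = 64          # size of the input signal, e.g. a sensor patch
n_neurons = 1_000_000  # one million neurons, all updated at once

weights = (0.01 * rng.standard_normal((n_neurons, n_inputs))).astype(np.float32)
signal = rng.standard_normal(n_inputs).astype(np.float32)

# Every neuron's activation falls out of one parallel operation:
activations = np.maximum(weights @ signal, 0.0)  # rectified firing rates

print(activations.shape)  # (1000000,)
```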

Can you take a moment to explain the applications of AI?

Where there is software, AI can be injected. So, the sky is the limit! Applications where human-like perception and decision-making can be applied to large volumes of data are key. Examples include driving a car, flying a drone, driving a train, flying an airplane, looking at security videos, looking at emails, judging the beauty of a picture or song—these are all activities where a human needs to process a large amount of information in real time to make informed decisions. Automating these activities will be a quantum leap in our ability to design smart devices that do not need a human at the wheel (or joystick).

Can you just briefly explain the difference between strong AI and weak AI?

I like to call it traditional AI versus “brain-based” AI. Traditional AI relies on the intelligence of the researcher or the programmer to come up with a solution to a particular problem, whereas brain-based AI—of which neural networks and deep learning are part—relies on replicating human or animal brain functioning so that the program itself can figure out a solution. Imagine that a traditional and a brain-based AI scientist are given the same problem: create software that can classify apples and bananas. The traditional AI scientist will come up with heuristics that detect shapes, colours, and various combinations of them to partition the “fruit space” into two camps: roundish, reddish apple objects and elongated, yellowish banana objects. This is probably the very first thing that many scientists and programmers would do. It will most likely be a quick task, and it will work some of the time.
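A toy rendition of that traditional approach might look like the following sketch: hand-written rules over invented colour and shape features, with every threshold chosen by the programmer.

```python
# A toy version of the traditional-AI approach described above:
# hand-written heuristics that partition the "fruit space" by colour
# and shape. The features and thresholds are invented for illustration.

def classify_fruit(redness: float, yellowness: float, elongation: float) -> str:
    """Rule-based classifier. elongation = length / width of the object."""
    if elongation > 2.0 and yellowness > 0.6:
        return "banana"
    if elongation < 1.3 and redness > 0.5:
        return "apple"
    return "unknown"  # the rules break down outside the cases foreseen

print(classify_fruit(redness=0.8, yellowness=0.1, elongation=1.1))  # apple
print(classify_fruit(redness=0.1, yellowness=0.9, elongation=3.0))  # banana
print(classify_fruit(redness=0.4, yellowness=0.5, elongation=1.6))  # unknown
```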

The brain-based AI scientist will tackle this differently by asking herself what basic processes underlie humans’ and animals’ ability to learn and perceive these objects. By cracking this code, which is a much harder and longer-term task, the brain-based scientist will discover fundamental principles and algorithms that may be applicable to other objects as well. The brain-based AI scientist will realise that the brain does not have a bunch of specialised decision-making routines for apples and bananas but rather learns to recognise these objects by sheer exposure to, and manipulation of, these items over time.
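The brain-based counterpart, sketched minimally, could be a single sigmoid neuron that learns the same apple/banana boundary purely from repeated exposure to labelled examples. The features and data here are invented for illustration; no classification rule is written by hand.

```python
import numpy as np

# The brain-based counterpart: no hand-written rules. A single sigmoid
# neuron learns the apple/banana boundary purely from exposure to
# labelled examples. Features and data are invented for illustration.

rng = np.random.default_rng(1)

# Each example: [redness, yellowness, elongation]; label 0 = apple, 1 = banana.
apples = rng.normal([0.8, 0.2, 1.1], 0.1, size=(200, 3))
bananas = rng.normal([0.1, 0.8, 3.0], 0.2, size=(200, 3))
x = np.vstack([apples, bananas])
y = np.array([0] * 200 + [1] * 200)

w = np.zeros(3)
b = 0.0
for _ in range(500):                      # repeated "exposure" to the fruit
    p = 1 / (1 + np.exp(-(x @ w + b)))    # sigmoid neuron
    grad = p - y                          # error signal
    w -= 0.1 * (x.T @ grad) / len(y)      # adjust connection weights
    b -= 0.1 * grad.mean()

# The learned weights now separate the two fruits; handling new cases
# (bitten apples, green bananas) needs more examples, not new rules.
test = np.array([[0.7, 0.3, 1.2], [0.2, 0.9, 2.8]])
print(1 / (1 + np.exp(-(test @ w + b))))  # low for apple, high for banana
```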

For many years, brain-based AI was the butt of traditional AI’s jokes. While traditional AI was spitting out thousands of narrowly focused algorithms each day, brain-based AI took many decades to develop. But the reward was worth the wait. While traditional AI may be programmed and deployed more quickly, it is virtually unusable in real-world, “messy” applications, where even an apple may dramatically change its appearance depending on lighting conditions, obstruction, colour variation, or even appetite: take a bite from the apple, and there goes the round shape. Traditional AI simply does not work. Brain-based AI does. And this is a statement supported by evidence: all major pattern recognition competitions today are won by neural networks, which outscore competing traditional AI approaches. Brain-based scientists are laughing now.

We saw this coming and started building tech, patents and know-how while others were asleep at the wheel. Today, we have a head start on the competition, even within the brain-based camp.

What are the difficulties of introducing AI into everyday objects?

The difficulties have changed over the years. Ten years ago, when we were working with NASA, the Air Force and other companies, the difficulty was convincing them that investing in brain-based approaches was a good idea and was going to pay off. Traditional AI’s ridicule of neural networks was still a blocking factor. Getting venture funding was out of the question.

Today, many of the objections we had ten years ago have been addressed. Nobody wants traditional AI. The same guys who kicked us out of their offices in 2007 now want to fund us and talk to us as if they supported brain-based AI from the get-go. But there are many more competitors, as so many companies have jumped on our bandwagon. Another challenge is disentangling the reality of what AI can deliver today from the “AI hype.”

As AI gets more and more press coverage, the perception is that this tech is omnipotent. It is not, and neither are the humans this technology is helping. For instance, it is unrealistic to expect 100 percent accident-free driving from autonomous vehicles when humans still get into accidents. The same argument holds for visual and auditory classification: perfection is not the standard. There is always an error rate associated with human perception. AI, still a nascent technology, should not be held to unreasonable and unreachable standards.

The other challenge is trust and transparency. Artificial neural networks enjoy the same advantages—but also suffer the same shortcomings—as their biological counterparts. Their performance is state-of-the-art, but they are opaque in their reasoning. Unlike simpler-minded traditional AI approaches, the decision process in brain-based technology is not described by verbally explicit “if-then” statements. This is particularly true for complex perception, navigation and decision-making tasks: deciding whether something is a sports car or a sedan, whether the drone is going to hit the tree or narrowly miss it, or whether you should marry somebody. These are usually complex, multi-stage neural processes that are not easily verbalised as a simple chain of “if-then” statements. One of today’s challenges is to make our customers comfortable with the fuzziness of neural network decision-making, and to educate them that human decision-making is equally fuzzy and imperfect, without detracting from its power and precision. In fact, the imperfection and fuzziness are what make brain-based processing so powerful and robust.

The issue of how AI is thinking, and why, is particularly acute in mission-critical applications. It is one thing for a toy to misclassify an apple as a peach; it is something entirely different when a drone misclassifies a tree as a cloud, or a car misclassifies a pedestrian as “all clear ahead, proceed!”

However, there are several initiatives that will soon mitigate this lack of transparency. AI researchers have total control over, and access to, each and every value being computed in the program. This means that building explainability—why an AI is making the decision it is making—into AI systems is orders of magnitude simpler than achieving it in humans or animals. It’s writing code versus sticking electrodes in brains. A future in which AI can verbalise its decision-making processes is within reach.
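One simple flavour of such explainability work is occlusion analysis: since every computed value is accessible, we can probe a model by masking parts of its input and watching how the output score moves. The toy scoring function below is an assumption standing in for a real network.

```python
import numpy as np

# A minimal sketch of one common explainability technique (occlusion
# analysis). We probe a model by blanking patches of the input and
# measuring how the output score drops -- "sticking electrodes" into
# code rather than into a brain. The toy model is a stand-in; any
# scoring function could replace it.

def toy_model(image: np.ndarray) -> float:
    """Stand-in 'classifier': scores how strongly the centre is lit."""
    h, w = image.shape
    return float(image[h // 3: 2 * h // 3, w // 3: 2 * w // 3].mean())

def occlusion_map(image: np.ndarray, patch: int = 4) -> np.ndarray:
    """Score drop caused by blanking each patch: higher = more decisive."""
    base = toy_model(image)
    saliency = np.zeros_like(image)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            masked = image.copy()
            masked[r:r + patch, c:c + patch] = 0.0
            saliency[r:r + patch, c:c + patch] = base - toy_model(masked)
    return saliency

image = np.zeros((12, 12))
image[4:8, 4:8] = 1.0                  # a bright object in the centre
print(occlusion_map(image).round(3))   # large values mark what mattered
```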

Can you tell us a little bit more about your work with NASA?

NASA engaged Neurala back in 2010. The goal was to change the paradigm for how missions in unexplored environments might occur in the future: transitioning from step-by-step human control of expensive robots from Earth to machines that operate mostly on local intelligence, without relying on remote control. The way to that future: brain-based technology, whose advantages were clear to NASA.

In just a few grams of brain and “inexpensive” sensors, animals as small as rats can execute autonomous exploration and foraging tasks very similar in scope to what a Rover should, in principle, be doing: plan a route, find targets (food for rats, minerals of specific types for Rovers), remember where those targets were found, avoid collisions, and go back to base. These are all tasks negotiated by small-footprint nervous systems, and they were completely out of reach of NASA’s technology.
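As a toy rendition of that task list (explore, remember where targets were found, return to base), the sketch below runs a breadth-first sweep over an invented grid map. A real Rover must close this loop from camera input rather than a known map, which is what makes the problem hard.

```python
from collections import deque

# Explore, log where targets are found, then head home -- on an
# invented grid. Map and search strategy are for illustration only.

GRID = [
    "....#",
    ".#.m.",
    ".#...",
    "B..#m",      # B = base, m = mineral target, # = obstacle
]

def neighbours(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
            yield nr, nc

def explore(start):
    """Breadth-first sweep: visit every reachable cell, log targets."""
    seen, frontier, targets = {start}, deque([start]), []
    while frontier:
        cell = frontier.popleft()
        if GRID[cell[0]][cell[1]] == "m":
            targets.append(cell)           # remember where it was found
        for nxt in neighbours(*cell):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return targets

base = (3, 0)
print("targets found at:", explore(base))  # then navigate back to base
```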

NASA’s demand was simple: make the Rover completely autonomous, with local computation, using a simple monocular camera as input. No GPS. No Wi-Fi. And, more importantly, no server support. All we could use was the onboard compute power. This is where the true moment of innovation at Neurala happened. We not only needed to perceive and avoid collisions, but we also needed to continuously learn new information from the environment while performing the task. We needed to learn the appearance of unknown objects found along the way, remember where we encountered them, and do all of this quickly and with little power. At the time, this was a major departure from how neural networks were trained: the dominant paradigm was lengthy training on large servers over big data sets, lasting from several hours to weeks. We had to figure out how to learn quickly, on the device, what normally took orders of magnitude more time and resources.

We turned to neurobiology, systems neuroscience, and math to solve this problem, and we knew that if we succeeded, we would have a paradigm-changing technology. Just as in 2006 we identified GPUs as the enabling hardware tech for AI deployment, we see continuous learning at the edge (on-device) as the enabling software tech—the algorithmic innovation, if you will, that is going to unlock a new era for AI. At Neurala, we like to call it AI 2.0: a world where machines can learn after deployment, at the compute edge, how to perform completely new tasks or refine already known ones, without relying on an internet connection or massive computing resources. We call this Lifelong Deep Neural Networks (L-DNN for short). L-DNN will change AI once and for all, starting in 2018.
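Neurala has not published L-DNN’s internals, but the general shape of learning at the edge can be sketched under assumptions: freeze a feature extractor trained once offline, and let a lightweight head absorb brand-new classes on-device, one example at a time, with no server round-trip. Every name and size below is illustrative.

```python
import numpy as np

# A rough sketch of the *idea* behind learning at the edge -- not
# Neurala's proprietary L-DNN, whose details are not public. A common
# pattern: keep a fixed feature extractor (trained once, offline) and
# let a lightweight head learn new classes on-device from a handful
# of examples, with no server and no lengthy retraining.

rng = np.random.default_rng(2)

# Stand-in for a pretrained feature extractor (a frozen random
# projection here; a real system would use a trained network).
projection = rng.standard_normal((16, 64))

def features(x):
    return np.maximum(x @ projection, 0.0)   # frozen, never retrained

class EdgeLearner:
    """Nearest-class-mean head: one running prototype per class."""

    def __init__(self):
        self.prototypes = {}   # label -> running mean feature vector
        self.counts = {}       # label -> number of examples seen

    def learn(self, x, label):
        """Fold one example into the class prototype -- O(1) per sample."""
        f = features(x)
        n = self.counts.get(label, 0)
        old = self.prototypes.get(label, np.zeros_like(f))
        self.prototypes[label] = (old * n + f) / (n + 1)
        self.counts[label] = n + 1

    def predict(self, x):
        f = features(x)
        return min(self.prototypes,
                   key=lambda c: np.linalg.norm(f - self.prototypes[c]))

# On-device: a few examples of a never-before-seen object are enough.
learner = EdgeLearner()
rock, crater = rng.standard_normal(16), rng.standard_normal(16)
for _ in range(5):
    learner.learn(rock + 0.1 * rng.standard_normal(16), "rock")
    learner.learn(crater + 0.1 * rng.standard_normal(16), "crater")
print(learner.predict(rock + 0.1 * rng.standard_normal(16)))   # "rock"
```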

How have you managed to model your AI on animals, and what are the key differences between animals and AI?

The training of Neurala’s co-founders centred on reproducing brain-like decision-making and learning in software, paying close attention to how animals and humans perform and to their anatomy and physiology. This was radically different from the dominant AI of the time, which had no interest at all in replicating the fine architecture and functioning of nervous systems; it was seen as a lack of pragmatism. We saw it instead as the hard way to solve AI, but the one with the biggest return on investment. I am happy to say that it paid off, and today the gap between AI and animal or human anatomy and physiology is shrinking.

Is there a favourite project that you have worked on? What is it and why?

Our favourite projects are ongoing! We are working with multiple world-leading consumer electronics companies in different verticals. It is an exciting time, transitioning from technology creation to fielding real-world applications. We are looking forward to saying that Neurala has deployed the largest number of AI brains on planet Earth. After working on Mars problems, we can’t ignore the irony of that statement.

As an established player what advice would you give to new market entrants?

Don’t be a copycat. Don’t follow the trends, but be bold and establish them. Don’t care too much about what other people say. To be ahead of the curve means to be controversial and sometimes a bit crazy, but if you are right and have the resilience to withstand the effort, adversities and closed doors, eventually you will succeed. I see many AI companies today popping out of nowhere claiming to be the “AI for X” and the “AI for Y,” but sometimes there is little substance behind the claims. Downloading a package from the web and loading a particular data set does not build a durable AI company. Technological differentiation and creative business models do.

Also, the time is ripe for robotics and AI to be “married.” For so many years, they have been running on parallel tracks, with brain-based AI and robotics rarely meeting in a real-world application. Today, that chasm has been crossed, and brain-based AI is a prime technology to propel the robotic industry forward, where machines will slowly require less and less from humans to operate, leaving our minds free to perform tasks that are still outside of the realm of AI.

Real-time learning in the robot is the next big revolution: customers will receive machines that slowly but steadily tailor themselves to their owners’ needs, day after day, learning the way humans do. As humans, we never turn off learning. Neither should machines, if they want to be really useful.

Robotics News, January 29