The limitations of AI today

By Mohamed Akrout

Many companies are investing heavily in artificial intelligence (AI) and automation to reduce their costs and increase their productivity. For example, Deutsche Telekom and Royal Bank of Scotland chose to replace some call center staff with chatbots, a strategy that could save them billions of euros over the next five years. Similarly, BNP Paribas and Wolters Kluwer seek to increase their revenues by using machines to analyze financial markets and customer databases in real time and trigger alerts automatically.

Siemens uses AI to manage its gas turbines more efficiently than humans, and in the logistics sector, Deutsche Post DHL is deploying IBM AI tools to replace employees with robots in its warehouses, as Amazon already does. Some companies are even using AI with the goal of helping people rather than just making money; Facebook, for instance, deploys AI tools to detect patterns of suicidal thoughts and, when deemed necessary, send mental health resources to the at-risk user or their friends.

Despite all this, AI still has many limitations that need to be taken into consideration. In this article, we describe the major limitations of today’s state-of-the-art AI technology.

AI can fail too:

Machines can fail. For example, Microsoft was forced to disable its Tay chatbot, which learned from its conversations with users, after Internet users managed to make it racist and sexist in less than a day. Similarly, Facebook had to admit failure when its assistant bots could not handle roughly 70% of requests without human help, and its researchers shut down an experiment in which two bots drifted into a strange shorthand language only they understood.

Errors are acceptable, but the tolerable error rate for critical applications like autonomous vehicles or computer-controlled turbines must be minimal, because when things go wrong, human lives are at stake. The recent Tesla, Waymo, and Uber accidents involving cars in autonomous or driver-assistance modes illustrate why many still view AI with skepticism.

AI needs big data:

Machines are not suitable for all tasks. AI is very effective in rules-based environments with lots of data to analyze. Its use is therefore relevant for things such as autonomous cars, which drive in dense traffic governed by specific laws, or finding the best price at which to resell a batch of shares.

On the other hand, when it comes to choosing what to invest in, or recommending products to new customers for whom there is no data to exploit, AI is less effective than humans. A lack of rules or data prevents AI from performing well. Existing AI models require large amounts of task-specific training data, such as the ImageNet and CIFAR-10 image databases, which contain 1.2 million and 60,000 labeled images, respectively. Labeling this data is often tedious, slow, and expensive, undermining the central purpose of AI.
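
To make the scale of "task-specific training data" concrete, here is a minimal sketch (not from the original article, and assuming PyTorch and torchvision are installed) that downloads CIFAR-10 and inspects it; even this comparatively small benchmark contains 60,000 hand-labeled images.

```python
# Minimal sketch: download CIFAR-10 and see how much labeled data
# even a "small" benchmark contains (assumes torchvision is installed).
from torchvision import datasets, transforms

train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())

image, label = train_set[0]                      # each sample is (image tensor, class index)
print(f"training images: {len(train_set)}")      # 50,000 labeled images
print(f"test images:     {len(test_set)}")       # 10,000 labeled images
print(f"image shape:     {tuple(image.shape)}")  # (3, 32, 32)
print(f"first label:     {train_set.classes[label]}")
```

Every one of those labels was assigned by a human annotator, which is exactly the tedious, slow, and expensive step the paragraph above describes.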

AI needs a dedicated computational infrastructure:

All of AI's recent successes rely on hardware infrastructure dedicated to the task being solved. For instance, successive versions of Google DeepMind's AlphaGo system defeated 18-time world champion Lee Sedol and the reigning world number one Go player, Ke Jie; the AlphaGo Zero version was reportedly trained using 64 GPUs (graphics processing units) and 19 CPUs (central processing units).

The OpenAI Five system, which defeated amateur human teams at Dota 2, was trained on 256 GPUs and 128,000 CPU cores. In June 2017, Facebook Research published a paper showing how they trained an image-recognition model on 1.3 million images in under an hour. To achieve this impressive result, which would previously have taken days on a single system, they used 256 Tesla P100 GPUs.
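
All of these systems rely on some form of data parallelism: the same model is replicated across many processors, each replica processes a different slice of the data, and the gradients are combined. The sketch below is a simplified, single-machine illustration of that idea using PyTorch's DataParallel wrapper; it is not the actual setup used by DeepMind, OpenAI, or Facebook.

```python
# Simplified illustration of data-parallel training on one machine
# (assumes PyTorch is installed). Large-scale systems apply the same idea
# across hundreds of GPUs with DistributedDataParallel instead.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10))
if torch.cuda.device_count() > 1:
    # Replicate the model on every visible GPU; each replica gets a slice of the batch.
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One training step on random data, standing in for a real dataset.
inputs = torch.randn(512, 1024, device=device)
targets = torch.randint(0, 10, (512,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)  # the forward pass is split across the GPUs
loss.backward()                         # gradients from all replicas are combined automatically
optimizer.step()
print(f"loss after one data-parallel step: {loss.item():.3f}")
```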

While the human brain, the smartest system we know in the universe, is remarkably low in power consumption, computers are still far from matching its energy efficiency. A typical adult human brain runs on only about 12 watts, a mere fifth of the power drawn by a standard 60-watt light bulb.

AI does not understand causal reasoning:

AI algorithms, as currently designed, do not take into account the relationship between cause and effect. They replace causal reasoning with reasoning by association. For instance, instead of reasoning that dry and cracked skin is caused by psoriasis, AI only has the capacity to reason that psoriasis correlates with dry and cracked skin.
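
A toy simulation (our own illustration with hypothetical numbers, not an example from the article) makes the gap concrete: when a hidden condition causes a symptom, an associative model assigns a high probability to the condition whenever it observes the symptom, yet forcing the symptom to appear, an intervention, leaves the condition exactly as rare as before.

```python
# Toy illustration (hypothetical numbers): association vs. intervention.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

condition = rng.random(n) < 0.10            # e.g. psoriasis, the actual cause
symptom = np.where(condition,
                   rng.random(n) < 0.90,    # the condition usually produces the symptom
                   rng.random(n) < 0.05)    # the symptom is rare otherwise

# Associative reasoning: observe the symptom, update belief about the condition.
p_observe = condition[symptom].mean()
print(f"P(condition | symptom observed) ~ {p_observe:.2f}")   # ~0.67

# Causal reasoning: forcing the symptom (an intervention) does not touch the cause.
p_intervene = condition.mean()
print(f"P(condition | do(symptom))      ~ {p_intervene:.2f}")  # ~0.10
```

An associative model cannot tell these two questions apart, which is precisely the limitation described above.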

In an interview with Quanta Magazine about his book The Book of Why, published on May 15, 2018, Judea Pearl, the 2011 Turing Award winner and the AI pioneer who introduced the probabilistic approach to AI, said that

“All the impressive achievements of deep learning amount to just curve fitting”.

He emphasized that the next generation of AI algorithms should be designed around a causal framework.

AI is vulnerable to adversarial attacks:

Adversarial attacks are like optical illusions for machines. They are intentionally designed to cause a model to make a mistake. These attacks add small-amplitude noise to the data submitted as input to an AI algorithm in order to mislead it into predicting a wrong answer.
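
One widely used recipe for crafting such noise is the fast gradient sign method (FGSM): compute the gradient of the model's loss with respect to the input pixels, then nudge every pixel a tiny step in the direction that increases the loss. The sketch below is a generic PyTorch illustration of that recipe, assuming a pretrained classifier and a correctly classified input; it is not tied to any of the specific attacks described here.

```python
# Fast gradient sign method (FGSM), a minimal sketch assuming a pretrained
# PyTorch classifier `model`, an input batch `image` of shape (1, 3, H, W)
# with pixel values in [0, 1], and its true class index `label`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` designed to be misclassified."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixels in a valid range

# Hypothetical usage with an off-the-shelf classifier:
# from torchvision import models
# model = models.resnet18(pretrained=True).eval()
# adv = fgsm_attack(model, image, label)  # looks identical, predicts a different class
```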

The Wall Street Journal applied this kind of attack to dupe the face recognition feature of the iPhone X, as shown in their demo, by creating fake masks to unlock the phone. Such results are disturbing. Google researchers have shown that changes almost invisible to the naked eye on something as innocent-seeming as the image of a panda can disrupt an AI to the point that it identifies a gibbon instead.

In another example, researchers at UC Berkeley targeted voice-transcription AIs. They succeeded in generating background noise that, when added to a recording, is undetectable to the human ear yet causes AI systems to transcribe entirely different commands, potentially giving an attacker access to the user's system for nefarious purposes.

Adversarial attacks are not limited to actions on digital data; it is possible to carry out such attacks in the real world. A research team comprised of members from Samsung Research, University of Michigan, University of Washington, and the University of California, have placed simple white stickers, specifically built, on road signs and managed to have an AI identify a stop sign as a speed limit sign. We can immediately see the unfortunate consequences that such malfunctions could have on an autonomous car.

Conclusion:

Though AI is seen by many as the next big thing, it still has severe limitations that may have unforeseen and potentially disastrous consequences if it is not implemented carefully. Given the intensity of research and development currently under way in this sector, we will likely see advances that counteract many of these limitations and significantly expand the potential applications of AI.

If you have any questions or would like to know if we can help your business with its innovation challenges, please contact us here or email us at solutions@prescouter.com.
