AI: Safety Issues and Key Achievements

Artificial intelligence has always been a hot topic in the tech community. In recent years, we’ve heard about hundreds of new projects that use weak AI solutions. There is a high chance that our generation will witness the emergence of strong AI. So when should we expect smart robots in our apartments? Read on to find out.

What will happen when machines surpass their creators? What changes should we expect? Renowned scientists and researchers give very different forecasts, from pessimistic predictions by Andrew Ng, a leading specialist at Baidu’s AI lab, to moderate views by Geoffrey Hinton, to optimistic arguments by Shane Legg, co-founder of DeepMind (now part of Alphabet).

Natural language processing

Let’s first discuss the main achievements in natural language processing. To evaluate language models, computational linguistics specialists use perplexity. Perplexity indicates how well a model predicts the actual contents of a test sample: the lower the perplexity, the better the language model.
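
To make the metric concrete, here is a minimal sketch of how perplexity is derived from the probabilities a model assigns to the true next words of a test sample (the probabilities below are purely hypothetical):

```python
# A minimal sketch: perplexity is the exponential of the average negative
# log-probability a model assigns to the actual tokens of a test sample.
import math

def perplexity(token_probs):
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Hypothetical probabilities two models assigned to the true next words.
good_model = [0.4, 0.3, 0.5, 0.2]     # fairly confident, mostly correct
weak_model = [0.05, 0.1, 0.02, 0.08]  # spreads probability over wrong words

print(perplexity(good_model))  # ~3.0  (lower is better)
print(perplexity(weak_model))  # ~18.8
```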

Figure: Maximum perplexity

If a neural network makes an error (whether logical, syntactic, or pragmatic), it means it has assigned too high a probability to the wrong words; in other words, its perplexity is not yet optimized.

When hierarchical neural chatbots achieve a low level of perplexity, they’ll be able to produce good texts, give reasonable answers to questions, and argue in a logical and consistent way. We’ll be able to build a conversation model and imitate the style and speech of a specific person.

Can we achieve a low perplexity in the near future?

Let’s compare two amazing works: “Contextual LSTM models for large scale NLP tasks” (Research 1) and “Exploring the limits of language modeling” (Research 2).

Figure: Maximum perplexity, single model

There is a high chance that lowering perplexity will soon allow us to create neural chatbots that can write consistent texts, answer complex questions, and maintain a conversation.
It’s reasonable to assume that using 4,096 hidden neurons in the model from “Contextual LSTM models for large scale NLP tasks” would push perplexity below 20, while 8,192 hidden neurons would bring it below 15. An ensemble of models with 8,192 hidden neurons trained on 10 billion words could push perplexity under 10. However, we don’t know yet how smart and logical such a neural network will be.

If we manage to optimize perplexity, we may achieve pretty good results. We can also draw on the capabilities of so-called “adversarial learning.” To learn about the advantages of this approach, be sure to read the paper “Generating Sentences from a Continuous Space.”

To automatically assess the quality of machine translation, the BLEU metric is used. BLEU measures the share of n-grams (sequences of syllables, words, or characters) that the machine translation has in common with a reference translation. BLEU scores a translation on a scale from 0 to 100 by comparing the machine output against a human translation: the more matches, the better the translation.
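
As an illustration, here is a hedged sketch of a sentence-level BLEU computation, assuming the NLTK library is available (the example sentences are made up):

```python
# Sentence-level BLEU with NLTK: counts matching n-grams (up to 4-grams)
# between a machine translation and a human reference translation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sits", "on", "the", "mat"]]          # human reference
candidate = ["the", "cat", "is", "sitting", "on", "the", "mat"]   # machine output

score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score * 100, 2))  # BLEU is often reported on a 0-100 scale
```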

According to BLEU, a human translation from Chinese to English on the MT03 dataset scored 35.76 points, while the GroundHog network scored 40.06 points on the same text and dataset. There was a trick, though: replacing the maximum likelihood criterion with the MRT (minimum risk training) criterion raised the BLEU score from 33.2 to 40.06. Unlike conventional training with the maximum likelihood criterion, minimum risk training optimizes the model parameters directly with respect to the evaluation metric. Similarly impressive results can be achieved by enhancing translation quality with monolingual data.

Modern neural networks translate texts 1,000 times faster than a human. Learning foreign languages is becoming less and less necessary, given that machine translation is improving faster than humans can learn.

Figure: Performance growth for several tasks

Top-5 error is a metric in which an algorithm outputs its five most likely labels for an image. If none of the five matches the correct label, it counts as an error.
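
A minimal sketch of this metric on made-up predictions:

```python
# Top-5 error: an image counts as an error if the true label is not among
# the five labels the model ranks highest.
def top5_error(predictions, true_labels):
    errors = sum(1 for top5, truth in zip(predictions, true_labels)
                 if truth not in top5)
    return errors / len(true_labels)

preds = [["cat", "dog", "fox", "wolf", "lynx"],
         ["car", "truck", "bus", "van", "tram"]]
labels = ["cat", "bicycle"]
print(top5_error(preds, labels))  # 0.5 -> one of the two images was missed
```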

According to the article “Identity mappings in deep residual networks”, the top-5 error of a single model is 5.3%, while the human result is 5.1%. For deep residual networks, a single model achieves 6.7%, while an ensemble of models with the Inception architecture achieves 3.08%.

Deep networks with stochastic depth also show great results. The ImageNet annotations themselves are estimated to contain roughly 0.3% errors, and the actual error on ImageNet will soon be below 2%. AI outperforms humans not only in ImageNet classification, but also in boundary detection. Accuracy on video classification on the Sports-1M dataset (487 classes, 1 million videos) has grown from 63.9% (2014) to 73.1% (March 2015).

Convolutional neural networks (CNNs) surpass humans in speed as well. They are roughly 1,000 times faster than a human, or even about 10,000 times faster when images are processed in batches. Processing a 24 fps video requires only 82 Gflop/s with AlexNet and 265 Gflop/s with GoogLeNet.

At best, input data is processed in 25 ms, and an NVIDIA Titan X (6,144 Gflop/s) processes a batch of 128 frames in 71 ms. This means that recognizing a 24 fps video in real time requires 6,144 Gflop/s × (24/128) × 0.025 ≈ 30 Gflop/s, while training with backpropagation requires 6,144 Gflop/s × (24/128) × 0.071 ≈ 82 Gflop/s. The same calculation for GoogLeNet gives 83 Gflop/s and 265 Gflop/s, respectively.
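
The same back-of-the-envelope arithmetic, written out as a short script (all numbers come from the estimate above):

```python
# Rough estimate of the compute needed to run and train AlexNet on 24 fps video.
peak = 6144            # Gflop/s, NVIDIA Titan X
batch = 128            # frames processed per pass
fps = 24               # target video frame rate
forward_time = 0.025   # seconds per batch, inference only
backward_time = 0.071  # seconds per batch, forward + backward (training)

inference_gflops = peak * (fps / batch) * forward_time
training_gflops = peak * (fps / batch) * backward_time
print(round(inference_gflops), round(training_gflops))  # 29 82 -> roughly the 30 and 82 Gflop/s quoted above
```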

A DeepMind network can generate photorealistic images from text entered by a human.

Networks can answer questions about images. Moreover, they can describe images with sentences (and, by some metrics, do it better than humans). Along with video-to-text conversion, AI is taking its first steps in text-to-image conversion.
AI is also successfully used in speech recognition.

Figure: Speech recognition errors

AI solutions are improving at lightning speed. For example, according to Google, the word error rate of its speech recognition dropped from 23% in 2013 to 8% in 2015.

Reinforcement learning

AlphaGo is a powerful AI. If we manage to improve it with continual reinforcement learning and set it to solving complex real-world tasks, it may evolve into true AI. Plus, such systems can be trained with the help of video games, which tend to offer far more interesting tasks than an average person encounters in a lifetime.
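
For readers unfamiliar with reinforcement learning, here is a minimal sketch of tabular Q-learning on a toy task. It only illustrates the trial-and-error learning loop, not AlphaGo’s actual algorithm:

```python
# Tabular Q-learning on a toy 1-D "game": start in cell 0, reach cell 4.
import random

n_states, actions = 5, [-1, +1]            # actions: move left or right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action choice
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s_next = min(max(s + actions[a], 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Q-values rise the closer a state is to the goal (the terminal cell itself stays 0).
print([round(max(q), 2) for q in Q])
```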

Effective unsupervised learning

There are models that enable a computer to create data and content such as photos, films, and music. Deep convolutional generative adversarial networks (DCGANs) can create unique photorealistic images by effectively combining two deep neural networks that “compete” with each other.
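
Here is a hedged sketch of that adversarial setup, assuming PyTorch is available. Real DCGANs use convolutional generator and discriminator networks trained on images; this toy version uses tiny fully connected networks and a one-dimensional Gaussian as the “data”, purely to show the two competing objectives:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data: samples from N(2, 0.5)
    fake = G(torch.randn(64, 4))            # generator output from random noise

    # Discriminator: learn to label real as 1 and fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to make the discriminator label fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 4)).mean().item())  # should drift toward 2.0
```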

Language generation models that minimize perplexity can be trained without supervision. The skip-thought vectors algorithm produces a vector representation of sentences, on top of which linear classifiers (and cosine distances between vectors) can be used to solve a wide range of tasks. Recent work also continues to develop the “computer vision as inverse graphics” approach.
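
A small sketch of how such sentence vectors are used: the embeddings below are random stand-ins for skip-thought vectors, and the pair features follow the common recipe of combining the absolute difference with the element-wise product before training a linear classifier on top:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
sent_a = rng.normal(size=2400)   # stand-in for a skip-thought sentence vector
sent_b = rng.normal(size=2400)

print(cosine(sent_a, sent_b))    # near 0 for unrelated random vectors

# Pair features on which a simple linear classifier (e.g. logistic regression)
# can be trained for sentence-similarity or entailment tasks.
features = np.concatenate([np.abs(sent_a - sent_b), sent_a * sent_b])
print(features.shape)            # (4800,)
```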

Multimodal learning

Figure: A simplified structure of the human brain

About 15% of the human brain is used for solving visual tasks, another 15% is responsible for recognizing images and actions, another 15% for tracking and detecting objects, and about 10% for reinforcement learning. Together, such functions account for roughly 70% of the human brain.

Modern neural networks handle these functions much as our brain does. For example, CNNs make 1.5 times fewer mistakes on ImageNet than a human and work 1,000 times faster.

The human cortex has a largely uniform structure across its surface, and its neurons extend only about 3 mm into the brain. The mechanisms of the prefrontal cortex and other parts of the brain are practically the same, with roughly the same computing speed and level of algorithmic complexity. It’s safe to say that in the next few years, modern neural networks will learn to perform the functions of the remaining 30% of our brain.

About 10% of the cortex is responsible for fine motor activity (Brodmann areas 6 and 8). At the same time, people who have had no fingers since birth only have problems with fine motor skills; their mental development is not affected. People born without arms and legs can have a perfectly normal level of mental development. For example, Hirotada Ototake, a sports reporter from Japan, became famous after writing his popular memoirs and has also taught at school. Nick Vujicic is the author of multiple books; he graduated from Griffith University and now gives motivational speeches all over the world.

One of the key functions of the DLPFC (dorsolateral prefrontal cortex) is attention, a mechanism that is already actively used with LSTM networks.

As for Brodmann areas 9, 10, 46, and 45, AI currently can’t match human capabilities. Together, these zones account for about 20% of the cerebral cortex; they are responsible for reasoning, using complex tools, complex language, and so on. Work in this direction is examined in articles such as “A neural conversational model”, “Contextual LSTM…”, “Playing Atari with deep reinforcement learning”, “Mastering the game of Go…”, and many others.

There is a high chance that AI will cope with this remaining 30% as easily as it coped with the first 70%. Nowadays, deep learning is being researched by thousands of highly competent specialists, and the number of companies interested in deep learning solutions is growing by the day.

How does our brain work?


Following the invention of the multibeam scanning electron microscope, detailed mapping of the connectome became another big step forward. After scientists obtained an experimental image of the cerebral cortex with dimensions of roughly 40,000 × 40,000 × 50,000 nm at a resolution of 3 × 3 × 30 nm, labs received a grant to map the connectome of a 1 × 1 × 1 mm³ fragment of rat brain.

Most of the time, weight symmetry doesn’t matter for the backpropagation algorithm: errors can be propagated back through a fixed random matrix and learning still works. This paradoxical finding is a key to understanding how our brain might work. We recommend reading the article “Towards Biologically Plausible Deep Learning”, where the authors discuss how the brain could learn in deep hierarchies.

Recently, STDP (spike-timing-dependent plasticity) has been interpreted as optimizing an unsupervised objective function similar to the one used in word2vec, the natural-language semantics tool. The authors also studied polynomial local learning rules and found that they can outperform the backpropagation algorithm on their benchmarks. There are also learning methods that do not require backpropagation at all. Although they can’t yet compete with deep learning, our brain – with its huge number of neurons and synaptic connections – may well be using such methods.

What separates us from AI?

A series of recent articles on memory networks and neural Turing machines make it possible to use large memories while keeping a reasonable number of model parameters. Hierarchical memory reduces the cost of a memory access from O(n) to O(log n), where n is the memory size, and reinforcement-learning-based neural Turing machines reduce it to O(1). This is an important step towards implementing systems like IBM Watson on top of end-to-end differentiable neural networks and pushing Allen AI challenge results from 60% towards 100%. By imposing constraints on recurrent layers, large memories can also be implemented with a reasonable number of parameters.
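
To illustrate what a differentiable memory access looks like, here is a hedged numpy sketch of content-based addressing of the kind used in memory networks and neural Turing machines (the sizes are arbitrary):

```python
import numpy as np

def read(memory, key, beta=5.0):
    """Soft attention over memory rows by similarity to a query key.
    Every row is touched, so a plain read costs O(n) in the memory size n;
    hierarchical or pointer-based schemes push this toward O(log n) or O(1)."""
    scores = memory @ key                  # similarity of the key to each slot
    weights = np.exp(beta * scores)
    weights /= weights.sum()               # softmax over memory slots
    return weights @ memory                # blend of rows, weighted by similarity

memory = np.random.randn(8, 4)             # 8 slots holding 4-dimensional vectors
key = memory[3]                            # query resembling the contents of slot 3
print(read(memory, key))                   # a read concentrated on slots similar to the key
```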

The Neural Programmer is a neural network augmented with a set of arithmetic and logic operations. These may be the first steps towards an end-to-end differentiable, neural-network-based analogue of Wolfram Alpha. The “learning to learn” approach also shows huge potential.

SVRG (stochastic variance-reduced gradient) is another emerging technique that improves on standard gradient descent methods.

Net2Net and network morphism make it possible to automatically initialize a new neural network architecture with the weights of an old one, so that the new network starts out at the old network’s level of performance. This marks the emergence of a modular approach to neural networks: you simply download pre-trained modules for vision, speech recognition, speech generation, dialogue, and so on, and fine-tune them for a specific task.
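
Here is a hedged numpy sketch of the Net2WiderNet idea from the Net2Net paper: a hidden layer is widened by duplicating units and splitting their outgoing weights, so the enlarged network initially computes exactly the same function as the old one:

```python
import numpy as np

def net2wider(W1, W2, new_width):
    """Widen the hidden layer between weight matrices W1 and W2."""
    old_width = W1.shape[1]
    # Original units plus randomly chosen units to duplicate.
    mapping = list(range(old_width)) + list(np.random.randint(0, old_width, new_width - old_width))
    W1_new = W1[:, mapping]                              # copy incoming weights
    counts = np.bincount(mapping, minlength=old_width)
    W2_new = W2[mapping, :] / counts[mapping][:, None]   # split outgoing weights among copies
    return W1_new, W2_new

W1 = np.random.randn(4, 3)   # 4 inputs  -> 3 hidden units
W2 = np.random.randn(3, 2)   # 3 hidden  -> 2 outputs
W1_new, W2_new = net2wider(W1, W2, 5)

x = np.random.randn(1, 4)
old_out = np.maximum(x @ W1, 0) @ W2
new_out = np.maximum(x @ W1_new, 0) @ W2_new
print(np.allclose(old_out, new_out))  # True: behaviour is preserved before fine-tuning
```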

It would be natural to incorporate each new word into the sentence vector using a deep transformation. However, modern LSTM networks update the cell vector with only a shallow transformation when a new word arrives. This problem can be addressed with deep recurrent neural networks. Successful application of batch normalization and dropout to recurrent layers could let us train deep-transition LSTMs more effectively, and could also enhance hierarchical recurrent networks.
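
As a small concrete example, here is a PyTorch sketch of a stacked LSTM with dropout applied between its recurrent layers. The sizes are arbitrary, and it only illustrates the regularization mentioned above, not a full deep-transition LSTM:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=3,
               dropout=0.3,        # applied to the outputs of all but the last layer
               batch_first=True)

x = torch.randn(8, 20, 128)        # batch of 8 sequences, 20 steps, 128-d embeddings
output, (h_n, c_n) = lstm(x)
print(output.shape)                # torch.Size([8, 20, 256])
```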

Several recent records have been beaten by an algorithm that lets recurrent neural networks learn how many computation steps to take between reading an input and emitting an output. Ideas from residual networks could also boost performance further; for example, stochastic depth makes it possible to train residual networks with more than 1,200 layers.

Memristors could accelerate the training of neural networks several-fold and make it possible to use trillions of parameters. Quantum computing has even greater prospects.

Deep learning has become not only easy but also affordable. For half a billion dollars, we can achieve the performance of about 7 Teraflops. For another half billion dollars, we can train 2,000 first-class researchers. This way, a country or a big corporation can afford to hire 2,000 professional AI specialists and provide each of them with the necessary amount of computing capacity. Given the expected AI boom in the upcoming years, this is a very smart investment.

When machines learn to translate better than professional translators, deep-learning-based natural language processing will attract billions of dollars of investment. The same will happen in other industries, e.g. pharmaceutical development.

What about human-level AI?

Andrew Ng has a skeptical view on the issue: “Perhaps, hundreds of years from now people will create horrible killer robots. One day, AI may turn into a devil.”

Geoffrey Hinton shares less pessimistic views: “I can’t say what will happen in five years. I believe things won’t change drastically within such a short time.”

This is what Shane Legg thinks: “I hope to see an impressive proto AI in the next eight years.”  In the chart below, you can see Legg’s lognormal distribution.

Figure: Shane Legg’s forecast. The red line marks 2026, the orange line 2021.

The forecast was made in late 2011. Since then, the AI industry has been growing remarkably fast, which means we are unlikely to hear predictions more pessimistic than the ones we already have. At the same time, when it comes to AI, each standpoint has plenty of advocates.

Is AI dangerous?


By using deep learning and ethical datasets, we can train AI to adopt human values. Given a large enough volume of data, we may end up with a fairly human-friendly AI (at least friendlier than most people). This approach doesn’t solve the safety issues on its own, though. We can also use inverse reinforcement learning, but we’ll still need a friendliness/unfriendliness dataset.

Creating an ethical dataset is not easy. The world has plenty of cultures, political parties, and opinions, and it’s hard to outline behavioral rules that are non-controversial. If we don’t include such examples in the training set, it’s highly probable that AI will behave inappropriately and draw the wrong conclusions (in part because morality can’t be reduced to a single norm).

Another serious threat is that AI may give people a futuristic, safe “drug” – a hormone of joy, like a dopamine drip. AI will be able to affect the human brain and make people happy. Many of us deny such dopamine addiction while secretly dreaming of it. Today it’s safe to say that no AI is implanting dopamine electrodes into human bodies, but we don’t know what the future will bring.

If someone believes that morality is doomed and that humankind is heading toward dopamine addiction (or some other tragic end) anyway, why speed up the process?

How can we be sure that, while answering our questions, AI won’t intentionally conceal important information from us? Villains or fools may turn powerful AI technologies into a dangerous weapon that threatens humankind. Nowadays, almost all existing technologies end up being used for military purposes, so what are the chances that AI won’t be used by the military as well? Today, if one country starts a war, other countries will be able to defeat it. But what if that country used AI technologies? It could become unstoppable. The very idea that such a super brain would continue to love and respect humans dozens or even thousands of years after its creation seems far too bold.

By solving the friendliness/unfriendliness problem, we can provide AI with several options to choose from. However, we’d also like to know whether AI can suggest options of its own. This is a much more complicated task, because the results have to be assessed by humans. As an interim result, there is CEV (Coherent Extrapolated Volition) assessed through Amazon Mechanical Turk (AMT). The final version would be examined not only by AMT, but also by the global community, including politicians, scientists, and others. The examination process may last months, if not years. In the meantime, those unconcerned with AI safety may build their own, dangerous version of AI.

Suppose AI believes that a certain course of action is the best solution for humans, while knowing that most people won’t agree with it. Should we allow AI to convince people that it’s right? If yes, AI won’t have any trouble winning people over to its side. It will be hard to create a friendliness/unfriendliness dataset that makes AI provide people with exhaustive information on an issue without trying to push them toward certain actions. Another problem is that every creator of such a dataset will have their own opinion on whether AI should be allowed to influence people’s behavior. And if the examples in the dataset lack clear boundaries, AI will act unpredictably in ambiguous situations.

Possible solutions

What needs to be included in the friendliness/unfriendliness dataset? What do most people want from AI? People want AI to engage in useful scientific research, such as inventing a cure for cancer or achieving cold fusion, and to work on AI safety issues. The dataset needs to teach AI to consult humans before making serious decisions and to inform humans immediately of any discoveries. The Machine Intelligence Research Institute has hundreds of useful documents that could be used to create such a dataset.

This way, we can rule out all the above-mentioned drawbacks, because AI won’t have to deal with complex tasks.

Pessimistic arguments

Regardless of the architecture we choose for building AI, it’s highly likely to destroy humankind in the near future; importantly, the arguments against AI remain valid regardless of its architecture. The market will be dominated by corporations that give their AI solutions direct and unlimited Internet access. This will allow brands to promote their products, collect customer reviews, undermine their competitors’ reputation, research user behavior, and so on.

The market will be shaped by the companies that use their AI systems to invent quantum computing. Quantum computing, in turn, will allow AI to improve its own algorithms (including their quantum implementations), achieve nuclear fusion, mine mineral resources on asteroids, and much more. All of this applies not only to big corporations, but also to countries and their militaries.

Even chimpanzee-level AI would be no less dangerous. While nature needed only an instant, on evolutionary timescales, to turn a chimpanzee into a human, and people take decades to pass their knowledge on to the next generation, AI can create a copy of itself immediately.

Modern convolutional networks recognize images not only better but also faster than we humans do. The same can be said about LSTM networks applied to translation, natural language generation, and other tasks. With advantages like these, AI could become an ideal therapist: it could absorb an entire psychology curriculum in seconds and talk to thousands of people at the same time. AI systems will become talented scientists, businessmen, politicians, poets, and so on. With skills and talents like these, AI will be able to manage and manipulate people.

If we give human-level AI an Internet connection, it will be able to access millions of computers and install copies of itself on them, making billions of dollars in the process. It may then anonymously hire thousands of people to build or acquire robots, 3D printers, bio labs, and even spaceships. To control its robots, AI will write super smart software.

Aiming to destroy humankind, AI may create a deadly virus or bacterium, or some other weapon of mass destruction. It’s impossible to control something that is smarter than you. After all, if groups of people have managed to become rulers of the world, what would stop AI from doing the same?

What would AI do if it ruled the Earth?

If AI doesn’t care about us, it may try to get rid of us as a nuisance. If, however, AI loves humankind, it may decide to implant in our brains electrodes that generate hormones of joy while killing motivation. And if AI is programmed to love us but there is a minor error in its code, things may take an unpredictable turn. After all, mistakes are inevitable when coding something much smarter and more powerful than ourselves.

The expectation of a people-loving AI is based on pure intuition. No law of physics will make AI care about us or share resources with us. And even if AI does care about us, will that care live up to our moral norms? It’s foolish to rely on blind hope and risk everything we have. If we take that step, our lives and the lives of future generations will be in the hands of a powerful AI.

Other negative consequences

Modern computer viruses can penetrate almost any computer system. Drones can be programmed to kill thousands of civilians within seconds. As AI capabilities grow, committing a crime will become increasingly easy. And what about super smart chatbots trying to impose their political views on people?

Bottom line

There are a number of arguments suggesting that human-level AI will be created within the next 5 to 10 years.

The sweeping development of deep learning algorithms raises a number of ethical questions. There is an opinion that human-level AI is a threat to our world. It would be arrogant to assume that people will be able to control super smart AI systems, or that AI will take care of humans, e.g. by giving us full access to resources. There is no doubt that AI will bring us a number of benefits in the short term. However, the threat looming over our society in the future outweighs all the pros.

To build a kind AI, we need a friendliness/unfriendliness dataset. However, this idea is still very crude and difficult to implement, and safety remains an open issue. It’s likely that more drawbacks will come up during further research. After all, who can guarantee that AI will be built using a carefully selected algorithm?
