Siri and Alexa are pretty good at answering your questions. Google often shows you products you are actually interested in buying. But how do these technologies work?
Apple, Amazon, and Google, the leading technology companies of our time, have heavily invested in Siri, Alexa, and AdSense, respectively. Each of these technologies is powered by Artificial Intelligence (AI).
I have previously written about AI and how it evolves from machine learning algorithms. In this article, I will focus more on the history, categories, and applications of AI.
To recap briefly: AI is the phenomenon of computers simulating human intelligence, for example by comprehending and solving a complex problem, and correcting course as necessary. A computer that can solve a problem generally considered to require human reasoning or skill (for example, learning, planning, reasoning, perceiving, solving problems, moving, or manipulating objects) is using AI.
History of AI
During the Second World War, the noted British mathematician and computer scientist Alan Turing worked to crack the ‘Enigma’ code, which German forces used to send messages securely. Turing and his team created the Bombe machine, which was used to decipher Enigma’s messages. The Enigma and Bombe machines laid the foundations for Machine Learning. According to Turing, a machine that could converse with humans without the humans realising it was a machine would win the “imitation game” and could be said to be “intelligent”.
In 1956, American computer scientist John McCarthy organised the Dartmouth Conference, at which the term ‘Artificial Intelligence’ was first adopted. Research centres popped up across the United States to explore the potential of the new technology. Researchers Allen Newell and Herbert Simon were instrumental in promoting AI as a field of computer science that could transform the world.
Getting Serious About AI Research
In 1951, a machine known as the Ferranti Mark 1 successfully used an algorithm to play checkers. Subsequently, Newell and Simon developed the General Problem Solver algorithm to solve mathematical problems. Also in the 1950s, John McCarthy, often known as the father of AI, developed the LISP programming language, which became important in machine learning.
In the 1960s, researchers emphasized developing algorithms to solve mathematical problems and prove geometrical theorems. In the late 1960s, computer scientists worked on machine vision and on machine learning in robots. WABOT-1, the first ‘intelligent’ humanoid robot, was built in Japan in 1972.
However, despite this well-funded global effort over several decades, computer scientists found it incredibly difficult to create intelligence in machines. To be successful, AI applications (such as vision learning) required the processing of enormous amounts of data, and the computers of the era were simply not powerful enough to process data at that magnitude. Governments and corporations were losing faith in AI.
Therefore, from the mid-1970s to the mid-1990s, computer scientists faced an acute shortage of funding for AI research. These years became known as the ‘AI Winters’.
New Millennium, New Opportunities
In the late 1990s, American corporations once again became interested in AI. The Japanese government, for its part, had unveiled plans back in the 1980s to develop a fifth-generation computer to advance machine learning. AI enthusiasts believed that computers would soon be able to carry on conversations, translate languages, interpret pictures, and reason like people. In 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion when it defeated Garry Kasparov.
Some AI funding dried up when the dotcom bubble burst in the early 2000s. Yet machine learning continued its march, largely thanks to improvements in computer hardware. Corporations and governments successfully used machine learning methods in narrow domains.
Exponential gains in computer processing power and storage capacity allowed companies to store and crunch vast quantities of data for the first time. In the past 15 years, Amazon, Google, Baidu, and others leveraged machine learning to their huge commercial advantage. Other than processing user data to understand consumer behaviour, these companies have continued to work on computer vision, natural language processing, and a whole host of other AI applications. Machine learning is now embedded in many of the online services we use. As a result, today, the technology sector drives the American stock market.
Four Types of AI
As I mentioned in my previous article, there are many ways to classify different kinds of AI algorithms. Here, I will first categorize them in terms of how advanced they are, and then discuss their applications.
Reactive Machines
Reactive machines are basic in that they do not store ‘memories’ or use past experiences to determine future actions. They simply perceive the world and react to it. IBM’s Deep Blue, which defeated chess grandmaster Kasparov, is a reactive machine: it sees the pieces on a chess board and reacts to them. It cannot refer to any of its prior experiences, and cannot improve with practice.
Limited Memory
Limited Memory machines can retain data for a short period of time and use it to make decisions, but they cannot add that data to a library of their experiences. Many self-driving cars use Limited Memory technology: they store data such as the recent speed of nearby cars, the distance of those cars, the speed limit, and other information that helps them navigate roads, as in the sketch below.
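To make the distinction between these first two types concrete, here is a minimal Python sketch; the controllers, thresholds, and the notion of a ‘gap’ to the car ahead are all invented for illustration, not taken from any real self-driving system.

```python
from collections import deque

def reactive_brake(gap_m: float) -> bool:
    """A reactive machine: decides using only the present observation."""
    return gap_m < 10.0  # brake if the car ahead is closer than 10 m

class LimitedMemoryController:
    """Keeps a short rolling window of recent gaps: data is retained
    briefly but never added to a permanent library of experience."""

    def __init__(self, window: int = 5):
        self.recent_gaps = deque(maxlen=window)  # oldest entries fall off

    def observe(self, gap_m: float) -> None:
        self.recent_gaps.append(gap_m)

    def should_brake(self) -> bool:
        if len(self.recent_gaps) < 2:
            return False
        closing = self.recent_gaps[0] - self.recent_gaps[-1]  # gap shrinking?
        return self.recent_gaps[-1] < 10.0 or closing > 5.0
```

The reactive function can never do better than its fixed rule, while the limited-memory version can notice a trend, such as a rapidly closing gap; note that neither one learns from past drives.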
Theory of Mind
Psychology tells us that people have thoughts, emotions, memories, and mental models that drive their behavior. Theory of Mind researchers hope to build computers that imitate our mental models, by forming representations about the world, and about other agents and entities in it. One goal of these researchers is to build computers that relate to humans and perceive human intelligence and how people’s emotions are impacted by events and the environment. While plenty of computers use models, a computer with a ‘mind’ does not yet exist.
Self-Aware Machines
Self-aware machines are the stuff of science fiction, though many AI enthusiasts believe them to be the ultimate goal of AI development. Even if a machine could operate as a person does, for example by preserving itself, predicting its own needs and demands, and relating to others as an equal, the question of whether a machine can become truly self-aware, or ‘conscious’, is best left to philosophers.
Functions of AI
Though I briefly discussed some of these earlier, the phased development of AI over the past six decades has yielded a variety of applications. Here are the most common ones:
Industry has often sought to leverage technology to drive productivity. To cut production costs, industries have automated many repetitive activities and processes, reducing the amount of human intervention required. Machines and computers use automation to perform repetitive tasks and adapt to changes in circumstances. Automation has been widely adopted in both blue-collar and white-collar workplaces.
Machine learning is a revolutionary idea: feed a machine a large amount of data, and it will use the experience gained from that data to improve its own algorithm and process future data better. The most significant branch of machine learning is Neural Networks: interconnected networks of nodes, called neurons or perceptrons, loosely modeled on the way the human brain processes information.
Neural Networks store data, learn from it, and improve their abilities to sort new data. For example, a Neural Network tasked with identifying dogs can be fed various images of dogs tagged with the type of dog. Over time, it will learn what kind of image corresponds to what kind of dog. The machine therefore learns from experience and improves itself.
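To make ‘learning from experience’ concrete, here is a minimal perceptron sketch in Python, a single artificial neuron trained on a tiny invented dataset (classifying dog images would need far more machinery, so a toy numeric task stands in for it):

```python
# A single perceptron: the simplest building block of a Neural Network.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = y - prediction  # zero when the guess was right
            # Nudge the weights toward the correct answer: this is
            # the 'improving from experience' step.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy task (made-up data): label is 1 when the two features sum past 1.
data = [(0.2, 0.1), (0.9, 0.8), (0.4, 0.3), (0.7, 0.9)]
labels = [0, 1, 0, 1]
print(train_perceptron(data, labels))
```

Each pass over the data leaves the weights a little better than before: the same feedback loop described above, just at the scale of one neuron instead of millions.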
Deep Learning is a subset of Machine Learning. In Deep Learning, Neural Networks are arranged into sprawling networks with a large number of layers, trained using massive amounts of data. It differs from most other kinds of Machine Learning, which generally rely on carefully prepared inputs (for example, a picture of a dog with a tag identifying the breed, plus hand-crafted instructions on which features to examine). A Deep Learning network is instead fed raw data and works out the important characteristics of that data itself, storing what it learns as experience. Returning to our dog example: when images of dogs are fed to a Deep Learning network, the machine itself determines the important characteristics of each breed from the images, and can then use these to identify a given dog’s breed.
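For a sense of what those layers look like in code, here is a hedged PyTorch sketch of a small convolutional network; the layer sizes, the 224x224 input assumption, and the five-breed output are arbitrary choices for illustration:

```python
import torch.nn as nn

# A small convolutional network for, say, five dog breeds.
# Each Conv2d layer learns its own image features (edges, textures,
# ear shapes, ...) directly from the pixels, with no hand-written rules.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB in, 16 feature maps out
    nn.ReLU(),
    nn.MaxPool2d(2),                              # halve the resolution
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper, more abstract features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 5),                   # assumes 224x224 input images
)
```

A production network would be far deeper, but the principle is the same: the early layers discover the useful characteristics of the images so that no human has to specify them.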
Machine Vision seeks to allow computers to see. A computer captures images from a mounted camera and converts them from analog to digital form, which can be analyzed far more easily. Machine Vision methods often seek to simulate the human eye. Machine Vision has various potential uses, such as signature identification and medical image analysis.
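As a small illustration of that capture-and-convert pipeline, here is a hedged sketch using OpenCV, a widely used machine-vision library (the file name is a placeholder and the thresholds are arbitrary):

```python
import cv2  # OpenCV

# Load a captured frame; "frame.jpg" is a placeholder file name.
frame = cv2.imread("frame.jpg")                 # a digital HxWx3 array of numbers
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # reduce to intensity values
edges = cv2.Canny(gray, 100, 200)               # classic edge detection

print(frame.shape, edges.shape)  # once digital, the image is just data
```

Everything downstream, from signature identification to medical image analysis, starts from arrays of numbers like these.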
Natural Language Processing (NLP)
NLP techniques (including voice recognition, text translation, and sentiment analysis) allow computers to comprehend human language and speech. While Siri and Alexa are examples of commercially available products using NLP algorithms, the major technology companies have developed far more advanced NLP techniques than the ones Siri and Alexa use.
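To give one concrete flavor of NLP, here is a hedged scikit-learn sketch of sentiment analysis on a tiny invented dataset; real systems train on vastly more text and far richer models:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = positive sentiment, 0 = negative.
texts = ["loved this phone", "great battery life",
         "terrible screen", "awful customer service"]
labels = [1, 1, 0, 0]

# Bag-of-words features plus logistic regression: a classic NLP baseline.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["great phone", "awful screen"]))  # a 0/1 label per sentence
```

The model never ‘understands’ the sentences; it learns which words correlate with which label, which is why modern NLP needs enormous datasets to approach human-level comprehension.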
Enterprise Applications of AI
Below, I list just a few applications of AI in each industry. These are merely examples – they do not come anywhere close to being exhaustive.
In healthcare, AI can help improve patient outcomes and reduce costs. Machine Vision can already help diagnose issues in X-rays and other such images, in some cases as well as or better than human doctors. AI can also be used to create medical chatbots and other applications that provide medical answers on the internet, or to schedule doctor appointments more easily.
In the corporate world, consumer preferences are constantly shifting. AI, after digesting enough information about consumer preferences, can help understand or even project these trends. It can also be used in virtual customer service agents or chatbots.
By observing students, AI can determine how they best learn. It could also provide personalized virtual tutors tailored to the student’s skill level and personality.
From trading securities and commodities to powering customer-facing robot investment advisers, AI has many uses on Wall Street and in the financial services industry.
The outcomes of potential or real legal cases depend on rules established in previous such cases, known as precedents. Machine Learning alone is not enough to process precedents and derive rules, because the reasoning in precedents is very fact-heavy. However, if AI could truly understand the words written in legal judgments, it could have a transformative impact on the practice of law.
AI-powered robots are replacing segments of the human workforce. This cuts both ways for humanity: it could reduce the number of low-skilled jobs available, but also make products cheaper for all customers. AI could also help tailor creative solutions to global problems, ranging from care for aging populations, to combating extreme weather.
AI’s march has not been slow and steady. Rather, it has been characterized by decades of investment and hype, followed by periods of disappointment and lack of investment. AI has made great progress in the past decade. Yet today’s most prominent AI method, Deep Learning, is reaching the boundaries of its capabilities.
A new AI paradigm will soon emerge. Companies and governments are currently investing heavily in AI, and competition among the American, Japanese, Chinese, and other governments will spur further advances in AI algorithms.
As for conversational AI, Siri and Alexa are passable but not great conversational partners. My guess is that by 2030, we will have conversational machines that are indistinguishable from humans, and that can therefore win Turing’s imitation game.