By Dr. Tanmoy Chakraborty, Assistant Professor (Department of CSE), IIIT Delhi, India
Let’s start the discussion with a popular joke: "If it is written in PowerPoint, it is probably AI; if it is written in Python, it is probably Machine Learning".
The world is abuzz with news about ‘Artificial Intelligence’ (AI), but relatively few people talk about ‘Machine Learning’ (ML). These two terms, although not the same, are used interchangeably quite often. It is important to understand the distinction between them.
Artificial Intelligence is seen as a big future paradigm shift that, for some, will put millions of people out of jobs, and, for others, will take over control and end up being the downfall of humanity. It is human nature to be scared of things we don’t understand. Hence, people tend to be scared of AI, especially because they don’t understand it that well. Let us try to demystify what Artificial Intelligence and Machine Learning actually are.
Fig: AI has a broader scope. It has many sub-areas within it, and ML is one of them. People talk less about the other areas because ML has had more impact on our lives so far than any other sub-area.
Artificial Intelligence
Artificial Intelligence (AI) is often defined as the ability of a computer system to perform tasks commonly associated with intelligent living beings. These tasks include visual perception, speech recognition and decision making. AI exists when a machine has cognitive capabilities such as problem-solving and learning from examples. In general, AI has three different levels: (i) Narrow AI, when a machine is better than us at a specific task; (ii) General AI, when a machine is like us at any intellectual task; and (iii) Strong AI, when a machine is better than us at many tasks. We are currently in the era of Narrow AI. We have made a lot of progress on these tasks over the past decade, but most experts tend to view what they are doing as ‘Machine Learning’ and ‘Pattern Recognition’, not as AI. The root of the problem with calling current systems ‘AI’ is that such a system might be good at recognizing a ‘cat’ in an image, but it does not intrinsically understand what a cat actually is, the way a human does.
But what we need to realize is that Artificial Intelligence isn’t something new. When calculators were first invented, people considered them ‘intelligent’. But they soon became just ‘machines’. When the first rule-based chess programs started beating humans, they too were considered intelligent. But they soon lost their charm. AI is always seen as the frontier; in other words, AI is always a moving target. As soon as we figure out how to solve a difficult problem previously thought impossible for a machine, the solution stops being ‘intelligent’, and the target moves to a new frontier.
Machine Learning
Machine Learning is defined as the study of algorithms and statistical models that computer systems use to progressively improve their performance on a specific task. These techniques rely heavily on large amounts of data, from which they extract useful patterns. Thanks to the increasing availability of data across domains, these techniques have gained a lot of attention by delivering useful results.
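To make the idea of ‘learning from data’ concrete, here is a minimal sketch in Python using scikit-learn (the library choice is an assumption for illustration; the definition above prescribes none). It trains a simple classifier on progressively larger slices of a standard handwritten-digit dataset, showing how performance on unseen examples tends to improve as more data is available:

```python
# A minimal sketch of "learning from data" (scikit-learn is assumed here;
# the article itself names no library). The model is not given hand-written
# rules for recognising digits; it extracts patterns from labelled examples.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 labelled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=42
)

# Train on progressively larger slices of the data: accuracy on the
# held-out test images generally improves as more examples are seen.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))
```

The particular classifier is beside the point; virtually every ML technique follows the same recipe: show the system examples, let it fit its internal parameters, and measure how well it generalizes to data it has not seen.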
We have made a lot of progress in solving newer types of problems that were considered very hard for computers, with translating spoken language and identifying objects in images being among the most popular examples. This progress has been possible largely thanks to Deep Learning, an approach built on neural networks. Deep Learning is a subset of Machine Learning that allows us to model and recognize much more complex patterns in data.
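As a toy illustration of why stacking layers helps, here is a hedged sketch in PyTorch (again, the framework is an assumption, not something the article specifies): a network with one non-linear hidden layer learns the XOR relation, a pattern no single linear model can capture:

```python
# A toy deep network learning XOR (PyTorch assumed for illustration).
# The non-linear hidden layer is what lets the model capture a pattern
# that is not linearly separable.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])   # XOR targets

model = nn.Sequential(
    nn.Linear(2, 8), nn.ReLU(),   # hidden layer provides the non-linearity
    nn.Linear(8, 1), nn.Sigmoid()
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

for _ in range(2000):             # gradient descent on the four examples
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(model(X).detach().round())  # typically recovers the targets 0, 1, 1, 0
```

Real deep networks differ from this toy only in scale: many more layers and parameters, trained on millions of images or sentences rather than four points, which is what lets them pick up the far more complex patterns in speech and vision.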
It may serve us well to think of Machine Learning as an enabler for newer kinds of automation, just as calculators and computers once were. However, the potential scale of this automation is much larger than before, because it includes many tasks that were performed exclusively by humans - tasks in the domains of speech and vision. Computers were already good at processing data at a large scale. Now, they will gain the ability to process newer kinds of data - data originating from speech and visual signals. They won’t replace expert designers, architects, engineers or doctors anytime soon. But what they will do is enable these professionals to become much more productive at their jobs, by automating some of the mundane tasks and letting the human expert focus on doing what they do best - applying ‘intelligence’ to solve problems.