A tour through the past and present of Artificial Intelligence (AI)

Published on June 8, 2016

The History of Artificial Intelligence (AI)

Not so long ago, Artificial Intelligence (AI) seemed a distant dream. Today, it is a ubiquitous phenomenon. We carry it in our pockets; it is in our cars and in many of the web services we use throughout the day.

Artificial Intelligence (AI) is defined as the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Put more simply, AI is “the intelligence exhibited by machines or software”. The term also refers to the academic field that studies how to create computers and computer software capable of intelligent behavior. Many people struggle to identify the difference between AI and ordinary software, as there is no clear line between the two. A useful rule of thumb is to think of AI as the kind of technology used to perform tasks that require some level of intelligence to accomplish.

The long-term goal of AI research ever since its inception has been what is called Strong AI: a computer capable of performing every intelligent task that a human can. However, we are now at a point where the disruptive power of AI innovation no longer depends on whether we achieve Strong AI. The technology has progressed to a level where, regardless of future advances in the field, it is poised to disrupt any innovation-based industry.

The origins of AI

Before going further, let us trace the origins of the insatiable quest to create AI.

Throughout human history, people have used technology to model themselves. Each new technology has, in its turn, served as the basis for building intelligent agents or models of mind. About 400 years ago, people began to write about the nature of thought and reason. Hobbes (1588-1679), described by Haugeland (1985, p. 85) as the “Grandfather of AI”, espoused the position that thinking was symbolic reasoning, like talking out loud or working out an answer with pen and paper. Descartes (1596-1650), Pascal (1623-1662), Spinoza (1632-1677), Leibniz (1646-1716), and other pioneers in the philosophy of mind further developed the idea of symbolic reasoning.

The idea of symbolic operations became more concrete with the development of computers. The first general-purpose computer to be designed was the Analytical Engine by Charles Babbage (1791-1871), though it was never built in his lifetime; the Science Museum of London did not construct a working Babbage engine (his Difference Engine No. 2) until 1991.

In the early part of the 20th century, researchers laid the foundations of modern computation. Several models of computation were proposed, including the Turing machine by Alan Turing (1912-1954), a theoretical machine that reads and writes symbols on an infinitely long tape.
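To make that idea concrete, here is a minimal, illustrative sketch (not part of the original article) of how such a machine can be simulated: a transition table maps the current state and tape symbol to a new state, a symbol to write, and a head movement. The function name and the bit-flipping example machine below are hypothetical, chosen purely for illustration.

```python
# Minimal Turing machine simulator (illustrative sketch, not historical code).
# A finite control reads/writes symbols on an unbounded tape and moves the head
# one cell left (-1) or right (+1) per step, until no rule applies.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """transitions: (state, symbol) -> (new_state, write_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:  # no applicable rule: halt
            break
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    # Read back the non-blank portion of the tape in order
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Hypothetical example machine: flip every bit, halting at the first blank cell.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_turing_machine(flip_bits, "10110"))  # -> "01001"
```

Simple as it looks, this read-write-move loop captures the model of computation on which all later programmable computers, and hence all AI programs, rest.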

Early AI programs

Once real computers were built, some of their first applications were AI programs. There was also much work on low-level learning inspired by how neurons work.

These early programs concentrated on learning and search as the foundations of the field. It became apparent early on that one of the main problems was representing the knowledge needed to solve a problem. Before learning, an agent must have an appropriate target language for the learned knowledge. In the following years, the AI community progressed through three stages:

  • 1960s and 1970s – Scientists succeed in building natural language understanding systems in limited domains.
  • 1970s and 1980s – Researchers work on expert systems, with the aim of training computers to perform expert tasks.
  • 1990s and 2000s – AI subdisciplines such as perception, probabilistic and decision-theoretic reasoning, planning, embodied systems, and machine learning progress significantly.

However, alongside all these developments, AI has seen its share of ups and downs. Yet it has always survived its “winters”.

Renewed interest in AI technology

AI appears to be entering a new phase, in which interest is surging again. One example is the sharp increase in the commercial use of AI, also known as machine intelligence, as seen with IBM’s Watson. Watson is a distributed cognitive system, meaning that it is spread out in the cloud and collects information every time it is used, anywhere. The more people use Watson, the better the system becomes at its job.

Although several things have changed to promote this surging interest in AI, the following are the essential drivers for this upswing:

  • The cost of computing power has declined dramatically
  • Computing power has improved exponentially
  • Data availability has increased manifold
  • The core algorithms within AI systems have evolved substantially

The combination of these four elements is making AI applications, which tend to consume massive amounts of computational power and data, far more practical.

AI’s place in today’s business world

In practical terms, this means that computations that used to take weeks now take less than a day, and that time keeps shrinking. Social networks, mobile phones, and the emergence of wearable consumer devices have created an explosion of data to feed the data-hungry AI engines and, in turn, enable them to operate at peak performance. Moreover, this data explosion is so vast and overpowering that it has become difficult to make sense of it without smart computerized support.

Advances in analytics, especially progress in machine learning backed by the computational power now available to support it, make AI systems more versatile and easier to create and deploy. Finally, despite its “winters”, the AI innovation base has kept evolving and growing exponentially, if discreetly, with each development building on and surpassing the last, and its effects are now plain to see. Technology companies have already developed algorithms that track the online habits of users, creating deeply personal online experiences. Users are increasingly exposed to customized, context-sensitive information and advice derived by systems that collect and analyze their past actions.

Ritwik Dey
About the Author

Ritwik Dey is a director of ICT Research at SG Analytics, with over 10 years of experience supporting clients across financial advisory and the technology and communication sectors. He has worked on long-term projects in Japan and Australia and holds an MBA in marketing and sales.

Write to Us for More Information or No-obligation Consultation