
The history of Artificial Intelligence

In the first half of the 20th century, science fiction novels and movies introduced the world to the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the middle of the century, a generation of scientists, mathematicians, and philosophers had the idea of artificial intelligence (or AI) culturally assimilated into their thinking. One of them was Alan Turing, a young and innovative British polymath who explored the mathematical possibility of artificial intelligence. Often called the father of modern computer science, Turing suggested that human beings use available information and reasoning to solve problems and make decisions, so why couldn’t machines do the same? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

There is a saying that talk is cheap. So what stopped Turing from getting to work right then and there? To start with, computers needed to change at a fundamental level. Before 1949, computers lacked a critical prerequisite for intelligence: they could not store commands and instructions, only execute them. In other words, computers could be told what to do but could not remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran as high as $200,000 a month. Only the big academic institutions and leading technology companies could afford to wade into these uncharted waters. A proof of concept, along with advocacy from high-profile individuals, was needed to persuade funding sources that machine intelligence was a worthwhile pursuit.

Half a decade on, the proof of concept arrived with Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by the RAND (Research and Development) Corporation. It is considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by Marvin Minsky and John McCarthy in 1956. At this ground-breaking conference, McCarthy, envisioning a great collaborative initiative, brought together leading researchers from several fields for an open discussion on artificial intelligence, the term he coined at that very event. Unfortunately, the conference fell short of McCarthy’s expectations: attendees came and went as they pleased, and there was no agreement on standard methods for the field. Despite this, everyone was in complete alignment with the notion that AI was achievable. The significance of the event cannot be overstated, as it set the stage for the next two decades of artificial intelligence research.

From 1957 to 1974, artificial intelligence rose to prominence. Computers could store more information and became faster, more affordable, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, along with the advocacy of prominent researchers, persuaded government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund artificial intelligence research at various institutions. The government was especially interested in a machine that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was at an all-time high and expectations were at a fever pitch. In 1970, Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.”

Although the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Breaking through the initial fog of artificial intelligence revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply could not store enough information or process it quickly enough. To communicate, for instance, one needs to know the meanings of many words and terms and understand them in many combinations. Hans Moravec, a Ph.D. student of McCarthy at the time, stated that “computers were still millions of times too weak to exhibit intelligence.” As patience wore thin, so did the funding, and research slowed to a crawl for about a decade.

During the 1980s, artificial intelligence was reignited by two factors: an expansion of the algorithmic toolkit and a boost in funding. John Hopfield and David Rumelhart popularized “deep learning” techniques that allowed computers to learn from experience. Elsewhere, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a given field how to respond in a given situation, and once this had been learned for practically every scenario, non-experts could obtain advice from the program. Expert systems were widely used in industry. The Japanese government invested heavily in expert systems and other AI-related initiatives as part of its Fifth Generation Computer Project (FGCP).
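To make the expert-system idea concrete, here is a minimal, hypothetical sketch of the approach described above: an expert’s knowledge is captured as explicit if-then rules, and a non-expert simply queries them. The rules and conditions are invented for illustration and are not taken from any real system.

```python
# Toy rule-based "expert system": each rule pairs a set of observed conditions
# with the advice an expert would give when all of those conditions hold.
RULES = [
    ({"no_power", "cable_unplugged"}, "Plug the power cable back in."),
    ({"no_power"}, "Check the fuse and the wall outlet."),
    ({"slow_performance", "disk_full"}, "Free up disk space, then restart."),
]

def advise(observations: set) -> str:
    """Return the advice of the first rule whose conditions are all observed."""
    for conditions, action in RULES:
        if conditions <= observations:  # every condition in the rule is present
            return action
    return "No matching rule; refer the case to a human expert."

print(advise({"no_power", "cable_unplugged"}))  # -> Plug the power cable back in.
```

Real expert systems of the era used far larger rule bases and inference engines, but the core pattern of encoding expert judgment as reusable rules is the same.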

From 1982 to 1990, Japan invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of these ambitious goals were never met. It could be argued, however, that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding for the FGCP ceased, and artificial intelligence fell out of the spotlight.

Ironically, in the absence of government funding and public hype, AI continued to thrive. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program driven by artificial intelligence. This highly publicized match was the first time a reigning world chess champion lost to a computer and was a huge step toward an artificially intelligent decision-making program. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time toward the goal of spoken-language interpretation. It seemed there wasn’t a problem machines could not handle. Even human emotion was fair game, as demonstrated by Kismet, a robot built by Cynthia Breazeal that could recognize and display emotions.

We have not gotten any smarter about how we code artificial intelligence, so what’s different today? It turns out that the fundamental limit of computer storage that held us back three decades ago is no longer a problem. Moore’s Law, which estimates that the memory and speed of computers roughly doubles every two years, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months prior. It offers a partial explanation for the roller coaster of AI research: we saturate the capabilities of artificial intelligence to the level of our current computational power (computer storage and processing speed) and then wait for Moore’s Law to catch up again.
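To get a feel for why hardware eventually catches up, here is a rough back-of-the-envelope sketch. It assumes a doubling period of about two years, an approximation of Moore’s Law used only for illustration, and the thirty-year span is a round figure rather than an exact date range.

```python
# Compound growth under a "capacity doubles every `doubling_period` years" assumption.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """How many times capacity multiplies over the given number of years."""
    return 2 ** (years / doubling_period)

# Roughly three decades of doubling yields on the order of tens of thousands
# of times more storage and processing power.
print(f"{growth_factor(30):,.0f}x")  # -> 32,768x
```

Even under this coarse approximation, it is easy to see how a problem that was hopeless on 1970s hardware becomes tractable by brute force a few decades later.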

We are currently in the era of “big data,” an era in which we have the capacity to collect enormous quantities of information, far too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries, including technology, banking, marketing, and entertainment. We have seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore’s Law is slowing down a bit, but the growth of data certainly hasn’t lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential routes through the ceiling of Moore’s Law.

What’s to come 

So what can we look forward to in the future? In the immediate future, AI language processing looks like the next big thing. In fact, it’s already underway. When was the last time you called a company and spoke directly to a human? One can imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road within the next two decades. Keep in mind that these are conservative estimates. In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities across all tasks. This is along the lines of the sentient robots we are used to seeing in science fiction movies.

It is arguable that this is a stretch, especially as mainstream technology within the next five decades. Even if the capability is there, questions of ethics and morality would serve as a strong barrier against its realization. When that time comes (and preferably well before it), we will need to have a serious conversation about machine policy and ethics; but for now, we’ll allow artificial intelligence to steadily improve and run amok among us.
