A short history of AI
The concept of artificial mechanisms that emulate life goes back to Plato and Aristotle, and possibly even earlier. From the 1800s onward, people began to actually design and build such devices. Early efforts amounted to mechanical calculators, yet they already solved a few problems that normally require sustained thought. Officially, however, AI emerged as a field of research only in 1956, sixty-five years ago.
Prior to 1956, Alan Turing, John von Neumann, and several others had proposed that many cognitive functions could be simulated on computers. They concentrated mainly on abstract automata and the Turing machine, which is now the theoretical foundation of every computer. Cybernetics provided common ground for those who wanted to understand biological intelligence and control systems in general. Computers were rare, expensive, slow, and difficult to use, but the interest and enthusiasm of a few scientists ultimately prevailed.
In 1956, McCarthy organized the Dartmouth Summer Research Project and invited Simon, Shannon, Newell, Minsky, and others. For roughly two months they discussed the possibility of building thinking machines. Although some expressed doubts, artificial intelligence began to attract growing public attention. Meanwhile, a new wave of cognitive approaches in psychology, linguistics, and other fields came to the forefront.
Following that, several scientists worked on chess programs and automatic theorem proving. Despite the initial optimism, those tasks proved far harder than anticipated. Some also attempted to build robots, but none of these efforts produced useful systems.
The next year, inspired by biological neurons, Rosenblatt developed the Perceptron, the first trainable artificial neural network (ANN). It was a major milestone for the adherents of connectionism. But the critique in Marvin Minsky and Seymour Papert's 1969 book Perceptrons wiped out nearly all funding and research interest in neural networks for about a decade.
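The core idea of the Perceptron can be sketched in a few lines: a weighted sum of the inputs is passed through a step function, and the weights are nudged toward the target whenever the output is wrong. The sketch below is illustrative, not Rosenblatt's original implementation; the learning rate, epoch count, and the AND-function example are assumptions chosen for demonstration.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single perceptron. samples: list of (inputs, target), target in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n  # one weight per input
    b = 0.0        # bias (threshold)
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire if the weighted sum exceeds the threshold.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            # Perceptron learning rule: move the weights toward the target.
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Logical AND is linearly separable, so the perceptron converges on it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Minsky and Papert's critique hinged precisely on this limitation: a single perceptron can only learn linearly separable functions, so it can master AND but never XOR.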
By 1974, people had become disillusioned by the lack of tangible progress. The first AI winter set in and lasted until 1980.
The primary focus then shifted to knowledge-based systems. These were highly specialized for particular tasks and solved a few real-world problems. The Japanese government invested heavily in AI research and development, and other nations responded in kind. Douglas Lenat started Project Cyc, whose goal was to collect all commonsense human knowledge. Chess-playing systems came close to matching the strength of international players.
Artificial neural networks also saw critical advances. Hopfield networks have fascinating theoretical properties and remain popular in the AI literature, while backpropagation is still the standard algorithm for computing error gradients in multi-layer neural networks during training.
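Backpropagation is just the chain rule applied layer by layer: the error signal at the output is propagated backwards to obtain the gradient of the loss with respect to every weight. The sketch below shows this for a deliberately tiny network (two inputs, one sigmoid hidden unit, one sigmoid output, squared-error loss); the network shape and loss are assumptions for illustration only.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    """Two inputs -> one hidden sigmoid unit -> one sigmoid output."""
    h = sigmoid(w1[0] * x[0] + w1[1] * x[1])
    y = sigmoid(w2 * h)
    return h, y

def backprop(x, target, w1, w2):
    """Gradients of the squared error 0.5 * (y - target)**2 w.r.t. all weights."""
    h, y = forward(x, w1, w2)
    # Output layer: chain rule through the sigmoid, sigma'(z) = y * (1 - y).
    delta_out = (y - target) * y * (1 - y)
    grad_w2 = delta_out * h
    # Hidden layer: propagate the error signal backwards through w2.
    delta_h = delta_out * w2 * h * (1 - h)
    grad_w1 = [delta_h * x[0], delta_h * x[1]]
    return grad_w1, grad_w2
```

A standard sanity check is to compare these analytic gradients against finite differences of the loss; in training, the weights would then be updated by gradient descent using the returned gradients.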
But the objectives proved too ambitious. No AI system could sustain a simple conversation, translate text, or recognize images. The second winter began in 1987 and lasted until 1993.
The rise of Neural Networks
During the 1990s, we witnessed the first commercial applications of neural networks in character and speech recognition. A quarter century ago, in 1997, IBM's purpose-built computer Deep Blue defeated world chess champion Garry Kasparov. The 2010s brought the proliferation of the first successful autonomous cars and industrial robots.
A critical paradigm shift occurred: a growing number of researchers began to regard intelligence as the capability to act effectively. Internal mechanisms became less important.
Since then, neural networks have gradually surpassed humans at several tasks such as recognition, prediction, and classification. Deep learning became mainstream in artificial intelligence research and proved highly profitable. While debates about the possibility of emulating human thinking continue, there are as yet no conclusive arguments on either side.
Meanwhile, AI systems are spreading across many industries. Still, we must manage our expectations carefully to avoid a third winter.