
An Introduction to AI

The main objective of this blog post by AICoreSpot is to provide intuition about the concepts, strategies, and applications of artificial intelligence for individuals who want to learn more about the field. It contains concise descriptions and is based on the rich research being carried out in the domain.

Where did AI come from? 

Artificial intelligence was born out of efforts to mimic human intelligence. It is not obvious whether emulating the human mind is possible, but we can certainly emulate some of its functions. At the same time, the entire domain of cognitive science has gained widespread attention and popularity.

It has also given rise to various concerns in our global society. Fears of computers conquering the planet are hyperbole, to say the least, but job displacement is a real issue facing us. We will be pitting artificial intelligence against natural intelligence in an upcoming blog post, so stay tuned for that. It will give you a better understanding of the aspects in which computers are better than us.

Broadly speaking, all computers, devices, and even calculators are forms of artificial intelligence. As one of the founders of the field put it: "As soon as it functions, nobody calls it AI anymore."

As computers become faster and methods progress, artificial intelligence is bound to get, well, more intelligent. Recent research indicates that up to half of today's jobs are at risk of being automated in the next decade. Whether we manage to emulate our minds or not, artificial intelligence will have a dominant influence on our day-to-day lives.

Prerequisite mathematics 

Strictly speaking, math is not a hard requirement, but it is needed to gain an in-depth understanding of the methods. It is recommended to build at least basic intuition in every subsection before proceeding.

To start with, it is worth noting that classical logic, which we are all familiar with, does not underlie most modern strategies. Therefore, you should also understand the ideas of fuzzy logic, where truth comes in degrees rather than strict true/false values.
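
To make this concrete, here is a minimal sketch of fuzzy logic in Python; the `warm` membership function, its thresholds, and the min/max combinators are illustrative choices, not a fixed standard:

```python
# A minimal sketch of fuzzy logic: membership is a degree in [0, 1]
# rather than a strict True/False. Names and thresholds are illustrative.

def warm(temperature_c: float) -> float:
    """Degree to which a temperature counts as 'warm' (piecewise linear)."""
    if temperature_c <= 15:
        return 0.0
    if temperature_c >= 30:
        return 1.0
    return (temperature_c - 15) / 15  # linear ramp between 15 and 30 degrees

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)   # a common choice for fuzzy conjunction

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)   # a common choice for fuzzy disjunction

print(warm(22))                   # ~0.47: partially warm
print(fuzzy_and(warm(22), 0.8))   # combine degrees instead of booleans
```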

The second critical topic is graphs.  

Linear Algebra 

Linear algebra extends the ideas and operations of ordinary algebra to collections of numbers such as vectors and matrices. These typically represent the inputs and the parameters of our methods.

Next up are tensors. A tensor is a generalization of scalars, vectors, and matrices to higher-dimensional objects.
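
As a quick illustration (assuming NumPy is installed), scalars, vectors, matrices, and higher-rank tensors can all be represented as arrays:

```python
import numpy as np

scalar = np.array(3.0)                 # rank-0 tensor
vector = np.array([1.0, 2.0, 3.0])     # rank-1 tensor
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # rank-2 tensor
tensor = np.zeros((2, 3, 4))           # rank-3 tensor: a 2 x 3 x 4 block of numbers

# Typical linear-algebra operations on these objects:
print(matrix @ np.array([1.0, 1.0]))   # matrix-vector product -> [3. 7.]
print(vector.dot(vector))              # inner product -> 14.0
print(tensor.shape, matrix.T.shape)    # shapes: (2, 3, 4) and (2, 2)
```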

Probabilities 

As we typically do not have precise data about anything, we have to manage probabilities. Probabilities, statistics, and Bayesian reasoning need to be understood, and we will be exploring these as well in an upcoming blog post, so stay tuned.
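
A small worked example of Bayes' rule, with made-up numbers for a diagnostic test, shows why these tools matter:

```python
# Bayes' rule with illustrative (made-up) numbers:
# P(disease | positive test) from the prior and the test's error rates.

p_disease = 0.01            # prior probability of the disease
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))      # total probability of a positive test

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161: still fairly unlikely despite a positive test
```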

Target functions 

These are also referred to as objective, cost, or loss functions. They represent the primary purpose of our methods: a target function quantifies how well our algorithm performs its task. By optimizing this function, we can improve our algorithms.

The most important ones are mean squared error and cross-entropy. At times we are unable to compute the objective function in a straightforward fashion and must instead assess the algorithm's performance in practical scenarios; such assessments serve the same purpose.
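
Here is a minimal sketch of both losses in Python (using NumPy; the labels and predictions are illustrative):

```python
import numpy as np

def mean_squared_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Average squared difference between targets and predictions."""
    return float(np.mean((y_true - y_pred) ** 2))

def binary_cross_entropy(y_true: np.ndarray, p_pred: np.ndarray) -> float:
    """Cross-entropy for binary labels; p_pred are predicted probabilities."""
    eps = 1e-12                                  # avoid log(0)
    p = np.clip(p_pred, eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y = np.array([1.0, 0.0, 1.0])
print(mean_squared_error(y, np.array([0.9, 0.2, 0.8])))    # small errors -> small loss
print(binary_cross_entropy(y, np.array([0.9, 0.2, 0.8])))  # lower is better
```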

Optimization 

Once we have constructed the target function, we need to optimize the parameters with respect to it to improve the performance of our algorithm. The typical approach to optimization is gradient descent.

There are several variants of GD. One is stochastic gradient descent, which needs only a subset of the training data to compute the loss and gradients at every iteration. Another important class of GD algorithms uses momentum, which encourages the parameters to keep moving in a consistent direction.
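
Here is a toy sketch of plain gradient descent and gradient descent with momentum on a one-dimensional quadratic; the function, learning rate, and momentum coefficient are illustrative:

```python
# Minimizing f(w) = (w - 3)^2. The constants below are illustrative.

def grad(w: float) -> float:
    return 2 * (w - 3)          # derivative of (w - 3)^2

# Plain gradient descent: step against the gradient.
w = 0.0
for _ in range(100):
    w -= 0.1 * grad(w)
print(round(w, 4))              # close to the minimum at w = 3

# Gradient descent with momentum: the velocity accumulates past gradients,
# so updates keep moving in a consistent direction.
w, velocity = 0.0, 0.0
for _ in range(100):
    velocity = 0.9 * velocity - 0.1 * grad(w)
    w += velocity
print(round(w, 4))
```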

No Free Lunch 

The no free lunch theorem illustrates an extremely important idea: there is no universally efficient AI methodology. In summary, it states that every problem-solving procedure incurs some computational cost for every task, and none of these procedures is superior on average to the others. Note that the theorem has not been proven in the most general case, but practice has shown why it is noteworthy.

Therefore, we must opt for the correct methodologies for each issue. 

Typical methodologies and algorithms 

The primary objective of each method is to build a good input-to-output mapping for a particular problem. Combining methods often yields better solutions: strategies such as stacking, parallel search, and boosting help build improved models out of mixes of simpler ones.

While search methodologies typically need only a problem specification, most deep learning algorithms require massive amounts of data. Hence, the available data plays a critical role in choosing a method.

Classic programming 

Even though programming is not viewed as AI technology these days, it was viewed as such several years ago. A single program may simply add its inputs, which may not appear to be an intellectual task, yet it may also control a robot's behavior and execute complicated operations.

The interpretable and strict specification of this approach makes it possible to bring together hundreds or even thousands of different programs in a single structure. However, the approach fails in most complicated, practical situations: it is very difficult to anticipate all potential input-output combinations in complex systems.

Rule-based and expert systems 

A typical expert system consists of a knowledge base, expressed as a set of if-then rules, and an inference mechanism. Modern knowledge graphs are typically leveraged for question answering and NLP in general. While these strategies are still in use today, the popularity of expert systems is gradually declining.
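
Here is a minimal sketch of the idea in Python; the facts and if-then rules below are purely illustrative:

```python
# A tiny rule-based system: a knowledge base of if-then rules
# and a simple forward-chaining inference loop.

facts = {"has_fever", "has_cough"}

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"},           "recommend_rest"),
]

# Forward chaining: keep applying rules until no new facts are derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes the derived facts 'possible_flu' and 'recommend_rest'
```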

Search 

In scenarios where you can define the space of potential solutions, search will help you identify a good one. Despite their apparent simplicity, these strategies can achieve brilliant results in several fields when used correctly.
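
For instance, a minimal breadth-first search over an explicitly defined state space (the toy graph below is illustrative) might look like this:

```python
from collections import deque

# Uninformed search: breadth-first search over a small, explicit state space.
graph = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["a", "c"],
    "c": [],
    "goal": [],
}

def bfs(start: str, goal: str) -> list[str] | None:
    """Return the shortest path from start to goal, or None if unreachable."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

print(bfs("start", "goal"))  # ['start', 'a', 'goal']
```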

Genetic algorithms 

Genetic or evolutionary algorithms are a variant of search based on biological evolution. 
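
A minimal genetic-algorithm sketch in Python; the toy fitness function (count the ones in a bit string), population size, and mutation rate are illustrative:

```python
import random

# Evolve bit strings toward the all-ones string by selection, crossover, and mutation.

def fitness(individual: list[int]) -> int:
    return sum(individual)                      # more ones = fitter

def mutate(individual: list[int], rate: float = 0.05) -> list[int]:
    return [1 - bit if random.random() < rate else bit for bit in individual]

def crossover(a: list[int], b: list[int]) -> list[int]:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):                             # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                   # selection: keep the fittest
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(len(population))]

print(max(fitness(ind) for ind in population))  # should be close to 20
```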

Machine learning 

Typically, ML strategies also leverage a type of search, commonly gradient descent, to find a solution. To put it differently, they use training instances to learn (fit) their parameters. There are dozens of ML algorithms, but most of them rely on the same principles.

Support Vector Machines, Regression, Naïve Bayes, and Decision Trees are some of the most widely leveraged. 
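
A quick comparison sketch, assuming scikit-learn is installed; the dataset and hyperparameters are just for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

# Each model exposes the same fit/predict interface, which makes comparison easy.
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```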

The following sections will detail the most noteworthy domains of machine learning. 

Probabilistic graphical models 

These models learn statistical dependencies among variables in the form of graphs. They are increasingly being replaced by neural networks in practical applications, but they remain useful for analyzing complicated systems.

Deep Learning 

In summary, deep learning is a subset of ML methodologies that consist of several layers of representations. One of the most useful attributes of neural networks is that you can stack different layers in any combination. A high-level description of the layers that make up a network is typically referred to as its architecture.

Basic variants of neural networks: 

  • Feedforward 
  • Recurrent 
  • Convolutional 
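
To illustrate the first of these variants, here is a minimal feedforward network sketched with PyTorch (assuming it is installed); the layer sizes are illustrative:

```python
import torch
from torch import nn

# Layers are stacked into an architecture; here, a small feedforward
# (fully connected) network for 10-class classification.
model = nn.Sequential(
    nn.Linear(784, 128),   # input layer: e.g. a flattened 28x28 image
    nn.ReLU(),
    nn.Linear(128, 10),    # output layer: one score per class
)

x = torch.randn(32, 784)   # a batch of 32 random "images"
logits = model(x)
print(logits.shape)        # torch.Size([32, 10])
```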

Among more specialized blocks, the neural attention mechanism is demonstrating brilliant results in several applications. Other plug-and-play blocks, such as long short-term memory, provide increased flexibility in architecture development. In this fashion, you can readily build a recurrent convolutional network with attention and other components.

Restricted Boltzmann Machines are a famous instance of unsupervised learning networks.

Other strategies are designed to improve the generalization of neural networks and other ML frameworks, which in turn has a positive impact on accuracy. The most famous among them are Batch Normalization and Dropout.
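
Continuing the PyTorch sketch above, both blocks can simply be inserted between layers; the placement and rates shown are illustrative:

```python
from torch import nn

# The same feedforward idea with Batch Normalization and Dropout layers
# inserted between the linear blocks to improve generalization.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.BatchNorm1d(128),   # normalizes activations across the batch
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zeroes activations during training
    nn.Linear(128, 10),
)

model.train()   # dropout and batch-norm statistics are active in training mode
model.eval()    # and switched to inference behavior at evaluation time
```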

Another successful category of networks is the autoencoder. Their most popular application is Word2Vec. Additionally, they are leveraged to develop representations for documents, knowledge graph entities, images, genes, and several other things.

Another fascinating instance is the Generative Adversarial Network, which can learn to generate convincing imagery, videos, and other kinds of data.

Several other variants of neural networks are well known in the literature but have comparatively few practical applications: self-organizing maps, spiking neural networks, Boltzmann machines, adaptive resonance networks, and others.

Reinforcement Learning 

The intuition behind RL comes from behavioral psychologists, who observed that animals learn behaviors on the basis of rewards. This led to the creation of methodologies that search for a policy that maximizes reward. Many RL methodologies have been developed over the years. Cutting-edge techniques include Deep Reinforcement Learning, Evolution Strategies, Asynchronous Advantage Actor-Critic (A3C), and others.
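
A minimal tabular Q-learning sketch on a toy one-dimensional corridor (the environment and constants are illustrative) shows the reward-driven update at work:

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, reward 1 for reaching state 4.
n_states, actions = 5, [-1, +1]            # move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2      # learning rate, discount, exploration rate

for _ in range(500):                       # episodes
    state = 0
    while state != n_states - 1:
        if random.random() < epsilon:      # explore
            action = random.choice(actions)
        else:                              # exploit the current value estimates
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print(max(actions, key=lambda a: q[(0, a)]))  # learned best action from state 0: +1
```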

Development 

As most advanced systems leverage more or less the same hardware (GPUs and TPUs), in this portion we will concentrate on software development.

Fundamentals 

Python is possibly the best programming language for beginners. It is fairly universal across present AI methodologies and shares many ideas with the first successful AI language, Lisp. Python has intuitive syntax and a large community with a huge number of packages, tutorials, and training content.

Data Science 

As AI methodologies are largely reliant on data, you need to be able to analyze and manipulate it. Learning numerical linear algebra aimed at coders will help you learn the critical operations.
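
A small illustration with pandas (assuming it is installed); the column names and values are made up:

```python
import pandas as pd

# Basic data analysis and manipulation on a tiny made-up table.
df = pd.DataFrame({
    "age":    [25, 32, 47, 51],
    "income": [30_000, 42_000, 58_000, None],
})

print(df.describe())                                       # summary statistics per column
df["income"] = df["income"].fillna(df["income"].mean())    # handle the missing value
df["income_per_year_of_age"] = df["income"] / df["age"]    # derive a new feature
print(df.head())
```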

Machine Learning 

The underlying math and basic methodologies behind machine learning are: Support Vector Machines, Linear and Logistic Regression, simple neural networks, and Principal Component Analysis. The only critical thing absent from this list is Decision Trees.

We can now delve deeper. The Deep Reinforcement Learning course from Berkeley will ease you into modern RL methodologies. 

There are also methodologies for automating the design of machine learning models. However, to obtain good results, AutoML requires far more resources than manually built models, so it is not commonplace as of now. Additionally, while working on AI projects, you should think about potential safety problems.

Reading courses and guides online is great, but to really comprehend the entire process, you ought to take some practical, real-world data and work with it.

Real-world applications 

This subsection is primarily intended to furnish inspirational illustrations for developers. You can look at how AI systems are changing the world today and which directions will be especially relevant in the not-too-distant future: Medicine, Military, Education, Science, Physics, Economics, and a lot more.
