
What is Artificial Intelligence?
“Any fool can know. The point is to understand.” — Albert Einstein

John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as “the science and engineering of making intelligent machines.” Currently, the term “artificial intelligence” refers to the field of computer science that creates systems capable of collecting data, making decisions, and/or solving problems that would typically require human intelligence.

But how to approach something as broad as intelligence?
John McCarthy adds: “All aspects of learning, or any other characteristic of intelligence, can in principle be described so precisely that a machine will be able to simulate them.” This is the approach AI follows. The human being is the most intelligent being we know, so machines that employ AI seek to imitate the human being. The human being can see. This is the field of Computer Vision. The human being can perceive the environment and move around in it. This is the field of Robotics. The human being can write, read, speak, and listen. These are the fields of Natural Language Processing and Speech Recognition. The human being can recognize patterns. This is the field of Pattern Recognition. Machines can be even better than humans at recognizing patterns, because they can use more data and more data dimensions. This is the field of Machine Learning. The human being has a brain, composed of a network of neurons that allows it to learn new things. If we can replicate the structure and functions of the human brain, we can achieve similar cognitive abilities. This is the field of Neural Networks. And when these neural networks are deeper and more complex, and we use them to learn more complex things, we are in the field of Deep Learning.

What are the real applications of AI?
Although AI was introduced in the 1950s, the world is only now beginning to understand the impact it can have on how we deal with data analysis and decision making. LinkedIn’s 2020 “Emerging Jobs Report” states that “Artificial Intelligence will require the entire workforce to learn new skills, whether to stay up to date in their current role or to pursue a new career as a result of automation.” The results of the growing application of AI are visible on social networks, in the media, and throughout the world we interact with.

Some examples of artificial intelligence applications
- Artificial Creativity:
- MuseNet: a deep neural network that can generate 4-minute musical compositions with 10 different instruments.
- Wordsmith: a natural language generation platform that transforms data into an insightful narrative.
- Social Networks:
- Identification of friends through facial recognition
- Identification of hate speech
- Chatbots:
- Virtual personal assistants (Siri, Cortana, Alexa, etc.): speech recognition and natural language processing to answer requests and questions.
- Autonomous vehicles
- Vehicles parking autonomously.
- Vehicles with autonomous driving.
- Gaming Industry
- AlphaGo: the first AI program to defeat a professional human player at the board game Go.
- Banking and Finance
- AI systems can analyze patterns in large amounts of data to produce market predictions, identify fraud, etc.
- Agriculture
- AI systems can monitor and identify plant and soil status and act on the results.
- Health
- IBM Watson: analyzes medical data and makes medical diagnoses.
- DeepMind Health: analyzes retinal images and diagnoses eye problems.
- Marketing
- Targeted advertising (product recommendations related to a purchased product).
- Recommendation of movies based on the movies viewed.
- Recommendation of songs based on the songs heard.
But how does it work in practice?
Where do chatbots, Machine Learning, Data Mining, neural networks, and algorithms come in? AI is a field in constant evolution, and its terminology is evolving at a very fast pace, without any consensus on how to organize all of its concepts into a well-defined taxonomy. With all its success, Artificial Intelligence ends up being a victim of that: many people talk about AI well beyond their education and knowledge of the subject, generating misconceptions and unrealistic expectations. It is not necessary to be an AI expert, but AI literacy, as part of digital literacy, will be a key requirement in the 21st century. This literacy begins with naming and understanding things. Since there are already numerous AI glossaries, here is instead a brief taxonomy that dissects Artificial Intelligence along several dimensions.

1. Breadth of intelligence: Narrow vs. General AI
An artificial intelligence can be called weak/narrow AI or strong/general AI. A narrow AI operates within a strict scope, such as an AI that recommends a film, recognizes faces, identifies tumors, or drives a car. An Artificial General Intelligence would demonstrate human-level intelligence across the whole set of cognitive activities that a human being performs in life, and would be able to move easily from one subject to another and relate them. There are still no strong AI systems, and forecasts suggest that we will only see this type of system around 2050-2060.

2. Learning ability: Symbolic AI vs Machine Learning
3. Machine Learning by application type: Classification, Regression, Clustering, etc.
The tasks performed by Machine Learning can be grouped into several application types, of which we will present three of the most popular. At the top of the list is classification. Most image processing and computer vision is based on classification, from the automatic tagging of friends on Facebook to the detection of tumors on an MRI, from quality control on a manufacturing line to the identification of obstacles by autonomous vehicles. Regression is used to predict continuous values. Determining the likely price of a house or the annual sales of a product, predicting the demand for electricity or the number of years an employee will stay in a certain position are all continuous estimation problems, and they generally benefit from the use of many input variables. The third, clustering, boils down to sorting and grouping a population based on common characteristics. It is one of the main tasks of exploratory data analysis and a common technique for statistical analysis. It has many applications, such as identifying market segments of consumers, students with similar competencies and challenges, or words that belong to similar semantic groups. In a broader sense, it also includes recommendation systems, which prescribe the next product to be offered to a customer.
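To make these three application types concrete, here is a minimal sketch using scikit-learn. The datasets, models, and numbers are illustrative assumptions for this article, not prescribed tools or real data.

```python
# A minimal sketch of the three Machine Learning application types discussed above.
# Library (scikit-learn), datasets, and models are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris, make_blobs
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# 1. Classification: predict a discrete label (here, the iris species).
X, y = load_iris(return_X_y=True)
classifier = DecisionTreeClassifier().fit(X, y)
print("Predicted class:", classifier.predict(X[:1]))

# 2. Regression: predict a continuous value (here, a toy house price from its size).
sizes = np.array([[50], [80], [120], [200]])              # input variable (m^2)
prices = np.array([150_000, 240_000, 330_000, 540_000])   # continuous target
regressor = LinearRegression().fit(sizes, prices)
print("Predicted price for 100 m^2:", regressor.predict([[100]])[0])

# 3. Clustering: group unlabeled points by similarity (no target values at all).
points, _ = make_blobs(n_samples=150, centers=3, random_state=0)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)
print("Cluster assignments for the first 5 points:", clusters[:5])
```

Note how only the first two tasks use a target (`y`, `prices`); clustering works on the inputs alone, which is why it is a typical tool of exploratory data analysis.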
4. Machine Learning by learning paradigm and data use: Supervised, Unsupervised, Reinforcement, etc.
5. Artificial neural networks by depth: Simple vs Deep

6. Artificial neural networks by algorithm type: Simple Feed-forward, CNN, RNN, GAN, etc…
The basic artificial neural network algorithm is called “feedforward”. The algorithm “travels” along the neural network in a single direction, from the input layer to the output layer, without ever going back or looping. However, the training of the network, which determines the weights associated with each node (“neuron”), uses a “back-propagation” calculation that flows in the opposite direction. There are numerous variations of the feedforward algorithm, and two of the most popular neural network architectures are CNNs and RNNs. In Convolutional Neural Networks (CNNs), the patterns of connectivity between the nodes resemble those of the visual cortex of animals, which makes CNNs particularly suitable for image recognition. Recurrent Neural Networks (RNNs) capture the notion of sequence and are very useful in the context of Natural Language Processing.
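To make the forward and backward flows concrete, here is a minimal sketch of a small feedforward network trained with back-propagation, written in plain NumPy on a toy XOR problem. The architecture, learning rate, and data are illustrative assumptions, not anything prescribed by the article.

```python
# A minimal feedforward network with back-propagation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR, which a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units; these weights are what back-propagation adjusts.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for epoch in range(10_000):
    # Forward pass: information flows input -> hidden -> output, never backwards.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (back-propagation): gradients of the squared error flow
    # output -> hidden, and each weight is nudged against its gradient.
    error = output - y
    grad_output = error * output * (1 - output)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)

    W2 -= learning_rate * hidden.T @ grad_output
    b2 -= learning_rate * grad_output.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

print("Predictions after training:", output.round(3).ravel())  # ~ [0, 1, 1, 0]
```

CNNs and RNNs build on this same forward/backward scheme, changing only how the nodes are connected (local, shared filters for CNNs; connections across time steps for RNNs).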
Conclusion

This article presents a brief introduction to the goals and main concepts of Artificial Intelligence. As AI technology becomes more influential, reaching many parts of our professional and social lives, it is important that people understand some of the AI vocabulary and achieve a foundational level of understanding.
Tags: Artificial Intelligence