Artificial Intelligence: A Brief Introduction

This article briefly introduces artificial intelligence by discussing its history, its definition, and its main subfields.


Youssef Kusibati | Omar Arafah

9/2/2023 | 5 min read

Table of Contents:
  • Introduction

  • What is AI?

  • Different fields in Artificial Intelligence: machine learning, deep learning, neural networks, natural language processing (NLP), computer vision.

  • Conclusion

  • References

Introduction:

You might be surprised to learn that the concept of artificial intelligence dates back to the mid-20th century. In 1956, a proof of concept was developed by Allen Newell, Cliff Shaw, and Herbert Simon and showcased at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) (Anyoha, 2020). This event kicked off the following decades of research in the field. However, according to Anyoha (2020), the development of AI at that time faced multiple setbacks due to a lack of resources such as computational power and storage. As technology advanced, artificial intelligence continued to develop, and it thrives today.

What is AI?

As McCarthy (2007) put it, artificial intelligence “is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.” In essence, artificial intelligence aims to imitate the human ability to solve problems through various computational procedures or algorithms. The field is expected to keep growing, with the AI market projected to reach a value of approximately two trillion dollars by 2030 (Thormundsson, 2023). This statistic, along with many other indicators, suggests that the current state of artificial intelligence is only the beginning of a field with a promising future.

Different fields in AI:

Artificial intelligence is a broad field with several subfields, including machine learning, deep learning, neural networks, natural language processing, and computer vision.

-Machine Learning:

As Arthur Samuel, an AI pioneer, put it in 1959, machine learning is “the field of study that gives computers the ability to learn without explicitly being programmed.” Machine learning starts with collecting data, which is then prepared for use as training data. More data is usually better, but its quality must be kept in mind; the data should be free of bias, for example. Programmers then feed this data into machine learning models and tweak the models over time, producing trained machine learning systems (Brown, 2021).
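To make that workflow concrete, here is a minimal sketch of the collect-prepare-train-evaluate loop. It assumes the scikit-learn library and its bundled Iris dataset, which are illustrative choices rather than tools named in the sources.

```python
# A minimal sketch of the machine learning workflow described above,
# assuming scikit-learn is installed; the Iris dataset stands in for real collected data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect data (a toy dataset here).
X, y = load_iris(return_X_y=True)

# 2. Prepare it: split into training and held-out test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3. Fit a model on the training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# 4. Evaluate, then iterate on the data and model settings as needed.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```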

-Deep learning:

According to Sam (2023), deep learning can be thought of as machine self-learning. Deep learning is a subset of machine learning that imitates the way humans gain knowledge, learning by example (Gillis, Burns and Brush, 2023); it works by analyzing patterns in the input data it is given. Deep learning algorithms are inspired by the human brain; that is, neural networks form the basis of deep learning algorithms, and in deep learning these networks are more complex, with multiple connected hidden layers that help analyze the input data (Madan and Madhavan, 2020).
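As an illustration of the stacked hidden layers described above, here is a minimal sketch of a small deep network. It assumes TensorFlow/Keras is installed; the input size and layer widths are arbitrary values chosen for the example, not figures from the sources.

```python
# A minimal sketch of a network with several hidden layers, assuming TensorFlow/Keras.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),              # e.g. a flattened 28x28 image
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the layer structure; training would call model.fit(...)
```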

-Neural networks:

As Jones (2017) states, neural networks, which are a subset of machine learning and the backbone of deep learning algorithms, are adaptive statistical models based on the structure of the biological brain. These quantitative models link inputs and outputs adaptively, in a process analogous to that of the human brain. Artificial neural networks are created by stacking several perceptrons; a perceptron is the artificial equivalent of a biological neuron (IBM Data and AI Team, 2023). A neural network gathers knowledge by processing many training examples before producing the desired output. Training data teaches neural networks and improves their accuracy over time. Once the learning algorithms are fine-tuned, they become powerful computer science and AI tools; Google’s search algorithm is a well-known example of a neural network.
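To show what a single perceptron, the building block mentioned above, actually does, here is a minimal sketch in plain Python/NumPy that learns the logical AND function from examples. The learning rate and number of passes are illustrative assumptions.

```python
# A minimal sketch of one perceptron learning the logical AND function, assuming NumPy.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # training inputs
y = np.array([0, 0, 0, 1])                      # AND targets

w = np.zeros(2)   # weights
b = 0.0           # bias
lr = 0.1          # learning rate (illustrative)

for _ in range(20):                               # repeat over the training examples
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b > 0)         # step activation
        error = target - pred
        w += lr * error * xi                      # nudge weights toward the target
        b += lr * error

print([int(np.dot(w, xi) + b > 0) for xi in X])   # expected output: [0, 0, 0, 1]
```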

-Natural language processing (NLP):

Nadkarni, Chapman, and Ohno-Machado (2011) note that natural language processing, or NLP, started in the 1950s at the intersection of artificial intelligence and linguistics. As defined by Liddy (2001), “Natural Language Processing is a theoretically motivated range of computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis for the purpose of achieving human-like language processing for a range of tasks or applications.” NLP operates at different levels depending on the scenario: phonological, morphological, lexical, syntactic, semantic, discourse, and pragmatic analysis are all levels at which it can be applied (Liddy, 2001). Additionally, natural language processing is approached differently depending on the use case.
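As a small illustration of the lexical and syntactic levels mentioned above, the sketch below tokenizes a sentence and tags each word’s part of speech. It assumes the NLTK library is installed and that its tokenizer and tagger resources can be downloaded (resource names can vary between NLTK versions); the example sentence is invented for illustration.

```python
# A minimal sketch of lexical- and syntactic-level analysis, assuming NLTK is installed.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer models
nltk.download("averaged_perceptron_tagger", quiet=True)  # part-of-speech tagger

sentence = "Natural language processing helps computers understand human language."

tokens = nltk.word_tokenize(sentence)  # lexical level: split the text into words
tags = nltk.pos_tag(tokens)            # syntactic level: label each word's part of speech

print(tags)  # e.g. [('Natural', 'JJ'), ('language', 'NN'), ...]
```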

-Computer vision:

Computer vision is the field of artificial intelligence that trains and enables computers to understand the visual world. Over the last few decades, computer vision has made significant progress, allowing computers to use digital images and deep learning models to accurately identify and classify objects and to react to them. Computer vision requires massive amounts of data: the system analyzes that data repeatedly until it can differentiate between objects and identify visuals. The field is an active area of research, but it is not limited to research; real-world applications highlight the importance of computer vision in healthcare, sports, transportation, education, entertainment, construction, security, software engineering, and daily life.
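To give a concrete flavor of object classification, here is a minimal sketch that runs a pretrained image classifier over a local photo. TensorFlow/Keras and the file name "cat.jpg" are assumptions made for the example, not details from the sources.

```python
# A minimal sketch of image classification with a pretrained network, assuming
# TensorFlow/Keras is installed and a local image file "cat.jpg" (hypothetical) exists.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # network pretrained on the ImageNet dataset

img = tf.keras.utils.load_img("cat.jpg", target_size=(224, 224))   # load and resize
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])  # top-3 predicted object labels
```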

Conclusion:

To summarize, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956 marked the beginning of artificial intelligence in the middle of the 20th century. Through a variety of computational techniques and algorithms, AI seeks to mimic how humans solve problems. Machine learning, deep learning, neural networks, natural language processing, and computer vision are just a few of the many subfields that make up this broad field of study. As Thormundsson (2023) suggests, this is only the beginning of a revolutionary field: with the advancement of technology, the global artificial intelligence software market is forecast to grow rapidly in the coming years.

References:

Anyoha, R. (2020) The History of Artificial Intelligence. Available at: https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/.

McCarthy, J. (2007) What is artificial intelligence?

Thormundsson, B. (2023) Artificial Intelligence market size 2030. Statista. Available at: https://www.statista.com/statistics/1365145/artificial-intelligence-market-size/#:~:text=According%20to%20Next%20Move%20Strategy,a%20vast%20number%20of%20industries.

Brown, S. (2021) Machine learning, explained. MIT Sloan. Available at: https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained (Accessed: September 1, 2023).

Sam (2023) 6 Fields of AI. TechEmergent, 6 July. Available at: https://techemergent.com/fields-of-ai/ (Accessed: September 2, 2023).

Gillis, A.S., Burns, E. and Brush, K. (2023) Deep learning. Enterprise AI. Available at: https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network (Accessed: September 2, 2023).

Madan, P. and Madhavan, S. (2020) An introduction to deep learning. Available at: https://developer.ibm.com/articles/an-introduction-to-deep-learning/ (Accessed: September 2, 2023).

Nadkarni, P.M., Ohno-Machado, L. and Chapman, W.W. (2011) Natural language processing: an introduction. Journal of the American Medical Informatics Association, 18(5), pp. 544-551.

Liddy, E.D. (2001) Natural Language Processing.

Jones, M.T. (2017) A neural networks deep dive, 23 July. Available at: https://developer.ibm.com/articles/cc-cognitive-neural-networks-deep-dive/?mhsrc=ibmsearch_a&mhq=Neural%20Networks (Accessed: September 2, 2023).

Abdi, H., Valentin, D. and Edelman, B. (1994) Neural Networks (Vol. 124). Sage Publications.

IBM Data and AI Team (2023) AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the difference? IBM Blog. Available at: https://www.ibm.com/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks (Accessed: September 2, 2023).

Muthumali, H. (2022) A Study on Computer Vision. Available at: https://www.researchgate.net/publication/362002273_A_Study_on_Computer_Vision/citation/download (Accessed: September 2, 2023).

Simplilearn (2023) What is computer vision: applications, benefits and how to learn it. Simplilearn.com. Available at: https://www.simplilearn.com/computer-vision-article (Accessed: September 2, 2023).

IBM Data and AI Team. What is Computer Vision? IBM. Available at: https://www.ibm.com/topics/computer-vision (Accessed: September 2, 2023).

Image by rawpixel.com on Freepik: https://www.freepik.com/free-photo/ai-technology-microchip-background-digital-transformation-concept_17122527.htm