
From Turing to Today: The History and Terminology You Need to Understand AI

The use of “AI,” both as a technology and as a term, is inescapable right now. With the field developing so rapidly, it can be hard to keep up or even know what people are referring to when they suggest incorporating AI into business.

In order to better understand what it means to “use AI” and to effectively reap its benefits, it is helpful to understand the history and terminology of the field. Let’s look at where this all came from, how it evolved, and what it means now.


History of Artificial Intelligence

Origin

In 1950, one of the fathers of computer science, Alan Turing, published “Computing Machinery and Intelligence,” in which he proposed the famous “Turing Test,” or “Imitation Game” – a human evaluation of a machine’s “intelligence” based on whether its written answers are indistinguishable from those of a human. At the time, this was an impossible task and the test served more as a thought experiment or philosophical conundrum. A few years later, John McCarthy created Lisp – a symbolic expression-based programming language that laid the groundwork for symbolic computing and flexible data representation, i.e., the groundwork for artificial intelligence. For years, serious “AI” work was done on Lisp machines, specially built to evaluate Lisp expressions. From this we see that computer science, programming, and artificial intelligence have evolved alongside each other from the very beginning, but computing power has historically limited the reach of AI.


Development

Early artificial intelligence programs from the 1960s relied heavily on symbolic manipulation, knowledge representation, search algorithms, and rule-based reasoning. This allowed them to perform well in tasks related to decision making, planning and scheduling, language processing, and numerical computation. These programs were impressive for their time, but still far from what one might consider “intelligent.”

Some examples of successful applications of artificial intelligence in the 60s and 70s include Expert Systems that mimicked the decision-making abilities of experts in specific fields like infectious disease diagnosis (MYCIN) and mass spectrometry (DENDRAL). Natural Language Processing (NLP) and Semantic Networks also grew during this time, enabling systems like ELIZA that could simulate conversation via pattern matching and substitution.

While the 60s were a time of formalization of AI concepts, part of the ‘70s is sometimes referred to as an “AI winter,” as the technology fell short of the expectations laid out in the previous decade. Public perception soured as skepticism grew, but research continued, laying the groundwork for even deeper machine learning.


Neural Networks

In the 1980s, advances in computing power and computational techniques made some of those earlier expectations attainable, and focus and energy were renewed. While research progressed in classical areas like rule-based systems and expert systems, a new frontier arose: neural networks.

Rather than relying on explicit, developer-written logic rules, neural networks are modeled loosely on the human brain: nodes (analogous to neurons) perform simple individual computations and pass their results along weighted connections. The strength of a connection changes with use and with its association with correct outcomes, and by adjusting the strengths of the connections between nodes, a network can be taught to encode knowledge. Rather than a human choosing the appropriate values by hand, a neural network is trained by feeding it problems along with correct (and incorrect) responses and letting it adjust its own connection weights to maximize the likelihood of producing the correct solution.
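
To make that idea concrete, here is a rough sketch (in Python, with an invented toy dataset) of a single artificial “neuron” that learns the logical OR function by nudging its connection weights toward the correct answer on each example – a deliberately simplified illustration, not how production networks are built:

    # A toy artificial "neuron": it fires (outputs 1) when the weighted
    # sum of its inputs plus a bias crosses zero.
    def predict(weights, bias, inputs):
        total = bias + sum(w * x for w, x in zip(weights, inputs))
        return 1 if total > 0 else 0

    # Invented training data: learn the logical OR of two inputs.
    examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

    weights = [0.0, 0.0]      # connection strengths, adjusted automatically
    bias = 0.0
    learning_rate = 0.1

    for _ in range(20):       # several passes over the examples
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            # Strengthen or weaken each connection in proportion to its input.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error

    print([predict(weights, bias, inp) for inp, _ in examples])   # [0, 1, 1, 1]

After a few passes over the data, the weights settle on values that reproduce the correct outputs, even though no one ever wrote an explicit rule for OR.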


Machine Learning

Neural networks are one form of “machine learning” (or “ML”), a broad term that covers many of the techniques associated with “artificial intelligence.” Overall, the goal of machine learning is to allow computers to learn from data and make predictions and decisions independently, without explicit, task-specific programming.

Neural networks are one way to get a computer to recognize patterns, relationships, and insights, but other models such as decision trees and support vector machines can also be effective approaches to machine learning. These types of models were popular through the ‘90s and early 2000s and laid the groundwork for much of the ML used today.
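
As a minimal, hedged example of what using such a model can look like in practice, the sketch below trains a decision tree with the open-source scikit-learn library on a tiny, invented dataset (the feature values, labels, and pass/fail framing are made up purely for illustration):

    # A tiny decision tree: the library learns split rules from labeled examples.
    from sklearn.tree import DecisionTreeClassifier

    # Invented data: [hours_studied, hours_slept] -> passed the exam (1) or not (0)
    X = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 3], [9, 8]]
    y = [0, 0, 1, 1, 0, 1]

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)                    # learn split rules from the data

    print(model.predict([[7, 6]]))     # classify a new, unseen example

The tree infers its own decision rules (e.g. “hours studied above some threshold”) from the examples rather than having them hand-coded by a developer.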


Modern Solutions

As computing power and technology advanced, the capabilities of neural networks surged. Deep learning expanded, giving rise to techniques like convolutional neural networks (CNNs), recurrent neural networks (RNNs), and attention mechanisms, which enabled breakthroughs in tasks like image recognition, natural language processing, and speech recognition.

On top of this, Big Data – the availability and prominence of large-scale datasets – allowed researchers and programmers to train their models on massive amounts of data, dramatically refining their learning and capabilities. Using Big Data, ML models of all types achieved breakthroughs in natural language processing, image recognition, decision making, and gameplay, mastering specific tasks, systems, and challenges.

These advancements in technologies and techniques drove the adoption of deep learning by industry leaders and individuals alike. Not only was it heavily invested in by major tech companies like Google, Facebook, and Amazon, but open-source tools and frameworks also arose, making the technology and methodologies available to anyone who was interested. This opened up the world of AI and was a major contributor to the field’s accelerated rate of development.


Generative AI

Neural networks remain a prominent technique in AI models, but until recently they were not often used to create original content. Programs that use machine learning and artificial intelligence to create new content fall under the relatively new category of “Generative AI,” or GenAI.

This explosion of generative capability was made possible by an architectural innovation known as the transformer. Transformers are a neural network architecture that uses self-attention mechanisms rather than purely sequential processing, allowing models to capture long-range dependencies and contextual information more effectively. This significantly improved natural language processing (NLP), and tools like ChatGPT, Perplexity, Claude, and Gemini leverage Large Language Models (LLMs) built on transformers to understand and generate human-like text. They can now generate original conversations, blog posts, technical documents, and even poems.
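
To give a rough sense of the core mechanism, here is a minimal sketch of single-head self-attention using NumPy; real transformers add learned projection matrices, multiple heads, masking, positional encodings, and far more, so treat this only as an illustration of the idea:

    # Minimal single-head self-attention: each token builds its output as a
    # weighted mix of every token's value vector, so context flows freely.
    import numpy as np

    def self_attention(Q, K, V):
        # Similarity score between every query and every key.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        # Softmax turns each row of scores into attention weights that sum to 1.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        # Weighted combination of the value vectors.
        return weights @ V

    # Invented toy input: a "sentence" of 4 tokens, each a 3-dimensional vector.
    tokens = np.random.rand(4, 3)
    output = self_attention(tokens, tokens, tokens)
    print(output.shape)    # (4, 3): one context-aware vector per token

Because every token attends to every other token in a single step, the model can relate words that are far apart in the input – the long-range dependencies mentioned above.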

Transformer-based models can also turn textual input into original images or even videos. Applications like DALL-E rely on transformer-based architectures to process textual descriptions and translate them into corresponding images, while projects like Google’s Magenta apply related techniques to music and other creative content. With the push of a button, a user’s prompt can be brought to life in a vivid picture.


Using AI Today

Today’s generative AI models, capable of instantly bringing our imaginations to life in pictures, writing original content, or engaging in conversations about anything, would certainly have surprised Turing—some might even pass his test. Yet, the field of Generative AI (GenAI) is still in its infancy and developing rapidly.

Consequently, GenAI models are prone to growing pains; they are not infallible. Most of the time, you reap what you sow: you won’t get what you’re looking for without the right prompt. Being detailed and specific helps, but even so, programs can still become “confused.” Alternatively, you may receive only exactly what you asked for, which can lead to confirmation bias, especially in research settings. Programs like ChatGPT, while immensely powerful, are not all-knowing, and critically evaluating the output they generate is key to using them effectively. Knowing what to ask and how to ask it will also go a long way.

But despite its limitations, GenAI has the potential to revolutionize the way humans work and create. When humans and GenAI collaborate, they can achieve remarkable results. For instance, artists can use AI tools to create intricate digital art in minutes, researchers can analyze massive datasets quickly, and writers can generate imaginative narratives with ease. By combining human creativity and intuition with the speed and analytical power of AI, we’re witnessing a new era of innovation where the line between human and machine creativity blurs.

Ultimately, understanding the nuances of how to work with GenAI will be crucial for harnessing its potential. While we are still in the early stages of fully integrating GenAI into everyday tasks, learning to navigate this landscape effectively opens up a world of possibilities. Whether in art, writing, research, or problem-solving, the future of GenAI lies in fostering a symbiotic relationship between humans and machines.

At Calavista, we have enthusiastically followed the development of these models and are working out the most effective ways to integrate them into our workflow as well as how to use them to create new solutions for our customers. If you are curious about how GenAI can help your business practices, reach out to us at info@calavista.com.
