A journey through time

The Evolution of Artificial Intelligence
From saving a contact on your phone to Gmail suggesting the end of a sentence as you type, AI is everywhere. But when exactly did it all start?
Yolanda Nel
If you thought Artificial Intelligence (AI) was the latest technology on the block, think again. The relentless pursuit of creating machines capable of emulating human intelligence is already 68 years old!

According to Harvard University, the roots of AI can be traced back to the 1950s, when computer scientists began to explore the concept of machines mimicking human thought processes. The term artificial intelligence was coined by John McCarthy in 1955, marking the birth of the discipline. Early pioneers, such as Alan Turing, had already laid the groundwork: Turing formulated his famous Turing Test in 1950, which aimed to determine a machine's ability to exhibit intelligent behaviour indistinguishable from that of a human.

A pivotal moment arrived in the form of the Dartmouth Workshop, held in 1956, which served as the birthplace of AI as a field of study. “McCarthy and his colleagues aimed to develop machines capable of simulating human intelligence through reasoning, learning, and problem-solving. This event laid the groundwork for AI research, sparking interest and investment from various sectors,” according to Harvard.

Tableau published an article outlining the significant growth during the 1960s and 70s, when computers used symbols and rules to solve complex problems. “Expert systems emerged, designed to replicate human decision-making in specialised domains. However, these early efforts were soon met with scepticism due to limited computational power and the inability to fulfil grand promises, leading to what is known as the AI Winter.”

The late 20th century witnessed the resurgence of AI with the emergence of neural networks. Inspired by the human brain's structure, neural networks aimed to enable machines to learn from data, paving the way for machine learning. Tableau reported that the introduction of backpropagation algorithms in the 1980s improved training methods for neural networks, but progress was hampered by hardware limitations.

Transformation
According to TechTarget, the advent of the internet in the 1990s transformed the AI landscape by providing access to vast amounts of data. Machine learning techniques began to shine, as algorithms like decision trees and support vector machines enabled machines to make sense of this data. “AI found applications in fields like speech recognition, natural language processing, and image analysis.”

The 21st century marked the dawn of the deep learning era, characterised by the development of convolutional neural networks (CNNs) and recurrent neural networks (RNNs). “These architectures revolutionised image and speech recognition, propelling AI into new dimensions of performance. Breakthroughs like AlphaGo's victory over a human Go champion highlighted AI's potential in complex problem-solving,” TechTarget reported.

Today, AI has become an integral part of our lives, powering virtual assistants, recommendation systems, autonomous vehicles, and medical diagnostics. Yet this progress has raised ethical questions about bias in AI algorithms, privacy, and the potential impact on jobs and society at large.

Ensuring responsible AI development and deployment has become paramount. Whatever your stance on the current technological advancements, AI has traversed a path marked by breakthroughs, challenges, and paradigm shifts.