Key Milestones in the History of AI


Artificial Intelligence (AI) is the study and simulation of human intelligence processes by machines and computer systems. The field uses Machine Learning (ML) and Deep Learning (DL) methods to give machines the ability to learn, reason, and self-correct.

Over the past few decades, AI has come a long way from science fiction to a practical technology that is quickly becoming ubiquitous in everyday products. To fully understand AI's impact on modern society, we must trace its history and the key milestones in its development.

The Birth of AI (1950s-1960s)

The term "Artificial Intelligence" was coined by John McCarthy, and the field didn't officially take shape until the 1956 Dartmouth Conference. This event marks the beginning of AI as a scientific and academic discipline within computer science.

However, some developments in AI go back slightly further than 1956. A year earlier, in 1955, Allen Newell was already experimenting with AI concepts, developing the first AI program, known as the Logic Theorist, with Herbert A. Simon. The program was designed to mimic human problem-solving and prove mathematical theorems.

A decade later, AI technology reached new heights with the creation of Eliza, the world's first chatbot, built at MIT by Joseph Weizenbaum. Eliza used early Natural Language Processing to play the role of a therapist: simple pattern-matching rules let it hold a text-based conversation by reflecting a patient's statements back as questions.
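To make the idea concrete, here is a minimal Python sketch of Eliza-style pattern matching. The rules and pronoun reflections below are illustrative stand-ins, not Weizenbaum's original script.

```python
import re

# A few illustrative pattern/response pairs in the spirit of Eliza's
# therapist script (not the original rules).
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]

# Swap first- and second-person words so reflections read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Rewrite 'my work' as 'your work', and so on."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```

There is no understanding here at all, which is exactly the point: a small set of surface-level rules was enough to give many users the impression of a conversation.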

The First AI Winter (1970s)

During the early 1970s, funding and interest in AI development began to slow down, causing the first AI Winter. Although researchers had successfully put many theories into practice, hardware development was lagging behind, preventing more ambitious experiments.

While chatbots like Eliza were possible, they were also expensive to build and maintain, which further discouraged government and corporate backers. The science was there, but there was no practical application yet to justify the funding. For the next decade, AI would play a smaller role as it waited for hardware advancements to catch up.

AI’s Renaissance (1980s-1990s)

By the 1980s, component manufacturing had finally caught up with the demands of AI development. Hardware was also becoming more advanced, with nations like Japan putting significant resources into the industry. Many of Machine Learning's core concepts were developed during this time:

  • Expert systems: AI programs that capture the knowledge of human experts to advise specialists within specific domains such as finance or law. These systems are rule-based, applying a list of if-then rules to analyze information for decision-making.

  • Backpropagation: An algorithm that propagates errors backward through a network's layers. This gives the model the ability to learn from its mistakes, using the gap between its output and the expected result to adjust its weights and biases.

  • Neural Networks: An AI model loosely inspired by the human brain, with hidden layers of nodes between the input and output layers. These nodes act like neurons, processing information and passing it on to the next layer. A toy example combining neural networks and backpropagation appears after this list.
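The sketch below is a minimal, assumed illustration of these two ideas working together: a one-hidden-layer network trained with backpropagation on the XOR problem. The layer sizes, learning rate, and dataset are arbitrary choices for demonstration, not anything from the historical systems.

```python
import numpy as np

# Toy dataset: XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0  # learning rate

for _ in range(10000):
    # Forward pass: data flows input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the output error is propagated back through the
    # layers to work out how much each weight contributed to it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust weights and biases in proportion to their share of the error.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(2).ravel())  # predictions should move toward [0, 1, 1, 0]
```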

The Second AI Winter (Late 1990s-Early 2000s)

Despite these advancements, AI development hit another snag at the turn of the century. After the dawn of the internet, ambitions quickly outgrew reality, inflating the dot-com bubble, which popped in the early 2000s and forced significant cutbacks across the IT industry.

With more funding setbacks, the AI community saw a shift in perspective, taking a more pragmatic approach to development that focused on small, measured successes over large, revolutionary breakthroughs. This change in scope would allow the next era of innovation to be truly remarkable. 

AI’s Modern Era (2010s-Present)

The second AI winter came to an end in the 2010s with the emergence of Big Data and Deep Learning. By adding more layers to neural networks, engineers could build more complex, "deep" models and feed them data from across the internet.
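In modern frameworks, "adding more layers" is quite literally a matter of stacking them. Here is a minimal, hypothetical PyTorch sketch; the layer sizes are arbitrary and chosen only to show the shape of a deeper model.

```python
import torch.nn as nn

# Illustrative only: a "deep" network is a neural network with several
# hidden layers stacked between the input and output layers.
deep_net = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 128), nn.ReLU(),   # hidden layer 2
    nn.Linear(128, 64),  nn.ReLU(),   # hidden layer 3
    nn.Linear(64, 10),                # output layer (e.g. 10 classes)
)
```

Each extra layer lets the model represent more abstract features of its input, but also demands more data and compute to train, which is why deep learning only took off once both became cheap and plentiful.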

Online activity had exploded: social media, video games, and smartphones attracted huge numbers of users who generated enormous amounts of data every day, and that data was used to train deep learning models.

In 2016, Google DeepMind drew on this wealth of data to train its AlphaGo program, which defeated world champion Go player Lee Sedol, echoing Garry Kasparov's loss to IBM's Deep Blue in 1997.

Since then, AI development has only continued to accelerate following the release of GPT-3 and its successors. The generative AI models saw massive adoption, and OpenAI's ChatGPT application became one of the fastest-adopted consumer programs in history, reaching 100 million users months faster than TikTok had.

Keegan King

Keegan is an avid user and advocate for blockchain technology and its implementation in everyday life. He writes a variety of content related to cryptocurrencies while also creating marketing materials for law firms in the greater Los Angeles area. He was a part of the curriculum writing team for the bitcoin coursework at Emile Learning. Before being a writer, Keegan King was a business English Teacher in Busan, South Korea. His students included local businessmen, engineers, and doctors who all enjoyed discussions about bitcoin and blockchains. Keegan King’s favorite altcoin is Polygon.

https://www.linkedin.com/in/keeganking/