Timeline of Major Breakthroughs in AI


Despite AI’s seemingly sudden rise in popularity, and despite decades of Hollywood horror stories imagining the worst from artificial intelligence, the study of the field began decades ago, when computers were just exiting their infancy.

While the internet and digital communications existed only as concepts at the time, real plans to develop artificial intelligence slowly took shape within a small bubble of experts. More than half a century of that work has now culminated in everyday products such as ChatGPT and Google Bard.

Origins of AI (1950s-1960s)

The term “artificial intelligence” was first coined by Dr. John McCarthy, later a computer science professor at Stanford University, while he was an assistant professor at Dartmouth College.

During his time at Dartmouth, McCarthy suggested hosting a summer workshop for a small number of experts in computer science to brainstorm and discuss the development of artificial intelligence - a term he chose to be neutral and distinct from the separate, but related, field of cybernetics.

The result was the Dartmouth Conference of 1956, which is widely regarded as the beginning of artificial intelligence as a field of study. Many of the concepts discussed at the workshop, such as neural networks and language processing, became integral to the advancement of AI and are still taught today. Early concepts of AI include:

  • Symbolic methods: Representing knowledge as symbols and manipulating those symbols with explicit, hand-written rules - the approach behind early AI programs that could respond to inquiries and requests.

  • Systems focused on limited domains: Restricting the scope of knowledge and parameters given to a program so it can mimic expertise in a specific field.

  • Deductive systems versus inductive systems: Applying general rules to reach conclusions about specific cases, versus teaching machines to learn general principles from specific examples (a minimal sketch of this contrast follows the list).
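To make that last contrast concrete, here is a minimal sketch in Python. The loan-approval scenario, the threshold rule, and the toy data are invented purely for illustration.

    # Deduction: a human writes a general rule, and the program applies it to cases.
    def deductive_approve(income: int, debt: int) -> bool:
        return income - debt > 20_000  # general principle, hand-written by an expert

    # Induction: the program derives a general rule (here, a threshold) from examples.
    examples = [  # (income minus debt, was the loan repaid?) - toy data
        (5_000, False), (15_000, False), (25_000, True), (40_000, True),
    ]

    def induce_threshold(pairs):
        repaid = [margin for margin, ok in pairs if ok]
        defaulted = [margin for margin, ok in pairs if not ok]
        # Generalize: split halfway between the best defaulted case and the worst repaid case.
        return (max(defaulted) + min(repaid)) / 2

    threshold = induce_threshold(examples)    # 20000.0, learned from the examples
    print(deductive_approve(50_000, 10_000))  # True, by the hand-written rule
    print(50_000 - 10_000 > threshold)        # True, by the induced rule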

Early AI Programs and AI Winter (1960s-1980s) 

A decade after the Dartmouth Conference, advancements in AI began to accelerate following the development of new programming languages such as LISP and MAD-SLIP, created by John McCarthy and Joseph Weizenbaum respectively.

  • ELIZA: Developed by Weizenbaum between 1964 and 1966, ELIZA was an early natural language program that explored how humans could communicate with machines. The program famously imitated Rogerian psychotherapy, which was feasible at the time because the machine mostly had to reflect the user’s own phrases back at them in the form of a question.

    While ELIZA was not meant to impersonate therapists in any serious way, it did achieve its goal of creating a dialogue between human and machine, and it is widely considered the first example of a chatbot (a minimal sketch of its reflection technique follows this list).

  • SHRDLU: Developed by Terry Winograd at MIT starting in 1968, SHRDLU was a breakthrough computer program that connected natural language understanding with the execution of commands.

    The program simulated a world of geometric shapes, known as a blocks world, that could be manipulated through natural language inputs such as “place the green pyramid on top of the blue block.”
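ELIZA’s reflection trick can be sketched in a few lines of modern Python. The patterns, pronoun swaps, and canned responses below are simplified stand-ins for the original program’s much larger script, included only to illustrate the technique.

    import re

    # Pronoun swaps used to turn a user's statement back into a question.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    def reflect(phrase: str) -> str:
        return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

    # A few ELIZA-style patterns: capture part of the input, reflect it back.
    PATTERNS = [
        (r"i feel (.*)", "Why do you feel {}?"),
        (r"i am (.*)", "How long have you been {}?"),
        (r"my (.*)", "Tell me more about your {}."),
    ]

    def respond(user_input: str) -> str:
        for pattern, template in PATTERNS:
            match = re.match(pattern, user_input.lower().strip())
            if match:
                return template.format(reflect(match.group(1)))
        return "Please, go on."  # fallback when nothing matches

    print(respond("I feel anxious about my exams"))
    # -> Why do you feel anxious about your exams?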

Despite these major advancements coming little more than a decade after the Dartmouth Conference, the development of AI stagnated over the following years - a period often called the AI winter - because exploring the field required more advanced computer systems than were available at the time.

Revival of AI and Machine Learning (1980s-1990s)

By the 1980s, advances in technology allowed AI to resurface as a major academic field, specifically through expert systems, which were derived from the concept of limited domains explored during the Dartmouth Conference.

Suddenly, demanding tasks in high-end fields such as credit scoring and medical diagnosis were seeing the benefits of artificial intelligence, helping to popularize AI research and setting the stage for the introduction of machine learning.

Machine learning had a massive impact because it took results one step further than its predecessors. Instead of programs merely producing the same output whether it was correct or not, scientists could now build systems that learned from their mistakes, leading to more accurate outputs through trial and error and paving the way for deep learning.

One example of machine learning in the 1980s came with the popularization of backpropagation, an algorithm that propagates a network’s output error backward through its layers and adjusts the connection weights, allowing networks to recognize complex patterns and make better decisions.
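The idea can be sketched with a tiny network trained on the XOR problem, a classic test case from that era. The layer sizes, learning rate, and iteration count below are illustrative choices, not a reconstruction of any historical system.

    import numpy as np

    # A tiny two-layer network trained with backpropagation to learn XOR.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input  -> hidden weights
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10_000):
        # Forward pass: compute the network's current predictions.
        hidden = sigmoid(X @ W1 + b1)
        output = sigmoid(hidden @ W2 + b2)

        # Backward pass: propagate the output error back through each layer.
        d_out = (output - y) * output * (1 - output)
        d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

        # Nudge every weight in the direction that reduces the error.
        W2 -= 0.5 * hidden.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_hid
        b1 -= 0.5 * d_hid.sum(axis=0)

    print(np.round(output, 2))  # approaches [0, 1, 1, 0] as the network learns XOR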

The Rise of Deep Learning (2000s)

With the advent of the internet and the World Wide Web, machine learning systems could be trained on an exponentially larger pool of information, fed by the data collection of companies like Google, IBM, and Facebook.

This explosion of information led to the development of deep learning, a subset of machine learning that uses “feature learning” to automatically extract useful representations from raw data and teach itself, loosely mimicking the human brain’s own neural networks. Traditional machine learning, by contrast, still requires humans to hand-select and highlight the features it is fed.
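As a rough illustration of that difference, the sketch below contrasts a hand-engineered feature function with a small neural network (written with PyTorch here) whose convolutional layer learns its own feature detectors from raw pixels. The specific features, layer sizes, and fake image are invented for this example.

    import torch
    import torch.nn as nn

    # Classic machine learning: a human decides which features matter.
    def hand_engineered_features(image: torch.Tensor) -> torch.Tensor:
        brightness = image.mean()                            # overall brightness
        edges = (image[:, 1:] - image[:, :-1]).abs().mean()  # crude edge strength
        return torch.stack([brightness, edges])

    # Deep learning: the model learns its own features from raw pixels.
    deep_model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),  # learned feature detectors
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(8 * 14 * 14, 10),                 # classifier on learned features
    )

    image = torch.rand(1, 1, 28, 28)              # a fake 28x28 grayscale image
    print(hand_engineered_features(image[0, 0]))  # two features chosen by a human
    print(deep_model(image).shape)                # ten class scores from learned features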

Hardware advancements were also creating major impacts at the time, as computer manufacturing took off and more of the general public gained access to desktop PCs connected to the internet.

AI Integration and Breakthroughs (2010s-Present) 

Since the rise of deep learning, our world has become filled with more AI technology than many might realize. While ChatGPT is the trendiest example of AI today, the technology was already being marketed to the masses during the 2010s with the ascension of Tesla and its self-driving features.

Google also began taking larger strides into the realm of AI when its DeepMind team developed AlphaGo, which defeated world champion Go player Lee Sedol in 2016, echoing IBM’s Deep Blue chess victory over grandmaster Garry Kasparov in 1997.

Now, with the release of GPT-4 and access to its API, we are seeing a plethora of AI products reach customers, as new generative tools showcase how entire careers and industries will change because of massive innovations in workflow and productivity.

The Future of AI (2020s-Beyond)

As exciting as AI is becoming, there is no doubt that the next era of AI development needs to include legal frameworks and responsible, scientific oversight. In less than a century, AI technology has gone from a small classroom in Hanover, New Hampshire to one of the most rapidly adopted computer programs ever (ChatGPT), with some concerning implications.

The power to generate compelling, realistic content in mere seconds poses an issue for both man and machine precisely because it is fast, capable, and easy to use. Left unchecked, the ability to create significant harm is within arm’s reach, and deepfakes are already being used by political candidates.

However, the pioneers who established the field of AI are some of the brightest minds to ever work in computing, and they have been shaping the discourse on the fair and proper use of AI for decades now; or, better yet, we can just ask ChatGPT what to do about that.

Keegan King

Keegan is an avid user and advocate for blockchain technology and its implementation in everyday life. He writes a variety of content related to cryptocurrencies while also creating marketing materials for law firms in the greater Los Angeles area. He was part of the curriculum writing team for the bitcoin coursework at Emile Learning. Before becoming a writer, Keegan King was a business English teacher in Busan, South Korea. His students included local businessmen, engineers, and doctors who all enjoyed discussions about bitcoin and blockchains. Keegan King’s favorite altcoin is Polygon.

https://www.linkedin.com/in/keeganking/