Introduction to Reinforcement Learning in AI

[Image: a game of chess on a tablet]

Reinforcement Learning (RL) is a subset of Machine Learning (ML) and artificial intelligence in which an agent is placed in an unfamiliar environment and must learn through trial and error. The agent learns by taking actions, either exploiting information gained from past experiences or trying new choices (exploration), in order to earn rewards and shape its policies. 

Reinforcement Learning plays a critical role in Deep Learning (DL) because, like unsupervised methods, it does not depend on labeled training data. By continuously interacting with the stream of data an environment produces, DL agents and models can observe a wide range of patterns in large datasets and adapt to changes. 

Understanding the Basics of Reinforcement Learning

RL is a learning framework that attempts to recreate how humans learn: an agent is placed into an environment developed by programmers and left to learn on its own. The agent is free to explore and interact with the world it’s been placed in, using the feedback its actions generate to adapt to its surroundings. 

As this process repeats, the agent starts to develop strategies, known as policies, that maximize rewards and reduce penalties. This incentive structure lets the agent gauge its own performance, so it can learn without constant supervision from humans. 

Supervision within machine learning is a nuanced topic, with strengths and weaknesses when applied to neural networks. In supervised learning, human programmers systematically guide models toward the correct outputs using labeled input data. 

Unsupervised learning models, on the other hand, rely much more heavily on the machine to build its own understanding by observing responses and exploring the environment. However, it’s important to note that the term “unsupervised” is used loosely in ML: as Lex Fridman explained during an MIT lecture on RL, human intervention in the agent’s learning process is always present to some degree. If a machine were to learn entirely on its own, that would be a scientific breakthrough.

Components of Reinforcement Learning

Within Reinforcement Learning, it’s important to distinguish between the agent and the environment: they are not two parts of the same artificial intelligence program. Rather, the agent is the AI system, and the environment is the simulated world that encourages it to develop.

An agent is an entity, an AI or DL model, created by developers to learn within its environment. The agent can take many forms, ranging from a simple computer program to an advanced robot, and it is responsible for making decisions, taking actions, and receiving feedback. 

An environment is the world provided by programmers for the agent to reside in. It can be a digital world full of raw data, it can resemble a real-world setting such as street traffic, or it can be a much simpler realm like a chessboard. In every case, it is where the agent interacts and trains itself on the feedback the environment produces. 

Other core components of reinforcement learning include the following (a short code sketch after the list shows how they fit together):

  • Action: The choice that an agent makes from a set of options in an environment. Every action will alter the state of the environment. 

  • Reward: A feedback signal from the environment to the agent, indicating whether the agent is performing well or not. 

  • Policy: A strategy created by the agent to determine its actions and behavior to optimize rewards. 

  • Value function: A prediction of future rewards that lets the agent choose actions and policies yielding the highest expected return. 
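
To make these terms concrete, here is a minimal Python sketch that maps each component onto code. The coin-flip environment, its states, and the reward values are hypothetical choices made for illustration, not part of any standard library.

import random

class CoinFlipEnv:
    """A toy environment: guess the outcome of a coin flip."""
    def __init__(self):
        self.state = "start"

    def step(self, action):
        # Action: the agent's choice from a set of options.
        outcome = random.choice(["heads", "tails"])
        # Reward: a feedback signal indicating how well the agent performed.
        reward = 1.0 if action == outcome else -1.0
        self.state = outcome  # every action alters the state of the environment
        return self.state, reward

# Policy: a strategy mapping each state to an action (here, a simple lookup).
policy = {"start": "heads", "heads": "heads", "tails": "tails"}

# Value function: a prediction of future reward for each state.
value = {"start": 0.0, "heads": 0.0, "tails": 0.0}

env = CoinFlipEnv()
state = env.state
action = policy[state]
next_state, reward = env.step(action)
print(f"action={action}, reward={reward}, next state={next_state}")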

The Process of Reinforcement Learning

RL typically follows a three-step process that drives the agent to interact with its surrounding environment, maximizing rewards while reducing penalties.  

  1. Observation: The RL agent begins the learning process by taking note of the world it’s been placed in, registering the environment’s current state. 

  2. Decision-making: After observing, the agent takes action, choosing among its options according to its current policy and interacting with the environment. 

  3. Learning: Feedback arrives in the form of a reward or penalty and is used to reinforce the behaviors that earned rewards, as in the sketch below.
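
As an illustration of this loop, here is a hedged sketch of an epsilon-greedy agent on a hypothetical two-armed bandit. The arm probabilities, the exploration rate, and all variable names are assumptions made for the example.

import random

arms = {"A": 0.3, "B": 0.7}       # hypothetical payout probability per arm
estimates = {"A": 0.0, "B": 0.0}  # the agent's learned value estimates
counts = {"A": 0, "B": 0}
epsilon = 0.1                     # exploration rate

for step in range(1000):
    # 1. Observation: read the current state (trivial for a one-state bandit).
    # 2. Decision-making: explore with probability epsilon, otherwise exploit.
    if random.random() < epsilon:
        action = random.choice(list(arms))
    else:
        action = max(estimates, key=estimates.get)
    # 3. Learning: receive a reward and update the estimate for that action.
    reward = 1.0 if random.random() < arms[action] else 0.0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the estimate for "B" should approach 0.7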

Games are among the easiest ways to analogize reinforcement learning because they demand exactly the kind of strategy an RL agent must capture in its policies. When an agent is introduced to a chess or Go board, it begins by assessing the situation, observing how the state of the environment changes after every piece is moved. 

Rewards can be understood as feedback for plays that lead toward winning, such as a checkmate, while penalties are the opposite. Reinforcement comes naturally as well, because games are repeatable challenges that a machine can replay over and over. 
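
One way to picture this is a hypothetical reward scheme for a chess-like game, where a winning result earns a reward and a losing one incurs a penalty:

def game_reward(result):
    # Hypothetical scheme: checkmate-style wins are rewarded,
    # losses are penalized, and draws are neutral.
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]

print(game_reward("win"))   # 1.0
print(game_reward("loss"))  # -1.0
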
Types of Reinforcement Learning

Reinforcement Learning can be characterized in two ways based on whether the agent uses a model of the environment. Model-based RL can be imagined as handing the agent a map of the environment; the agent then makes observations and decisions based on this “map.” 

Model-free RL takes a simpler approach, placing the agent directly into an environment with no preconceived notion of what surrounds it. This forces the agent to rely on trial and error more than model-based methods do, because it starts with no given knowledge. 

Model-free methods are flexible, and a family of related algorithms has grown up around them (the first two below are classical dynamic-programming procedures that technically assume a known model, but they laid the groundwork for the fully model-free approaches that follow):

  • Value iteration: An algorithm that repeatedly updates the estimated value of each state from the values of its neighboring (successor) states until the value function converges. 

  • Policy iteration: A method that alternates between evaluating the current policy against a value function and improving the policy based on that evaluation. 

  • Q-learning: Rather than learning a model, the agent learns Q-values (an action-value function) that estimate the long-term reward of taking a particular action in a particular state, and derives its policy from them (see the sketch after this list).

  • Deep Q-Networks (DQN): A deep learning extension of Q-learning that uses a neural network to estimate Q-values over high-dimensional state spaces. 
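
To ground the Q-learning entry above, here is a minimal sketch of tabular Q-learning on a hypothetical one-dimensional corridor, where the agent earns a reward for reaching the rightmost cell. The environment, the reward of 1.0, and the hyperparameters are assumptions made for the example, not a standard benchmark.

import random

n_states = 5                            # cells 0..4; the goal is cell 4
actions = [+1, -1]                      # move right or left
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy selection: explore sometimes, exploit otherwise.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print(max(actions, key=lambda act: Q[(0, act)]))  # learned best move at the start: +1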

Use Cases of Reinforcement Learning in AI

Reinforcement Learning has appeared in many applications since the early 1990s, when Gerald Tesauro’s TD-Gammon, a program that taught itself backgammon, first demonstrated its power. Later game-playing milestones such as IBM’s Deep Blue (which relied chiefly on search) and Google DeepMind’s AlphaGo show how prominent games have been as a proving ground for RL. 

The relationship between games and RL has been mutually beneficial. Games provide RL systems with the environments they need to learn strategies and earn rewards effectively, helping to advance critical aspects of AI. In turn, these improvements help developers create more engaging video games, an industry that has grown to be worth many billions of dollars since the 1990s. 

The Future of Reinforcement Learning in AI

With AI continuing to expand at a rapid pace, advancements in RL hold promising potential. Much integration with deep learning remains to be done, pointing toward more advanced methods such as multi-agent reinforcement learning, and toward innovations in robotics, where the physical world itself becomes the agent’s learning environment. 

RL could also help spur the next wave of climate technology, helping meteorologists and geologists make sense of the data collected from weather models to protect vulnerable communities from harm, or assisting astronomers as they piece together the vast amounts of data about our universe collected by the James Webb Space Telescope. 

Keegan King

Keegan is an avid user and advocate of blockchain technology and its implementation in everyday life. He writes a variety of content related to cryptocurrencies while also creating marketing materials for law firms in the greater Los Angeles area. He was part of the curriculum writing team for the bitcoin coursework at Emile Learning. Before becoming a writer, Keegan King was a business English teacher in Busan, South Korea. His students included local businessmen, engineers, and doctors who all enjoyed discussions about bitcoin and blockchains. Keegan King’s favorite altcoin is Polygon.

https://www.linkedin.com/in/keeganking/