BCCN3
OpenAI to Provide Grants for AI Cybersecurity

As AI continues to develop at a rapid pace, the need for stronger monitoring of the technology only becomes more important. While Hollywood has depicted worst-case AI scenarios for decades in films like Terminator 2 and 2001: A Space Odyssey, now that the technology is fully available to the public, we're starting to see new potential for bad actors to use AI in innovative ways to harm people.

In response, OpenAI, the company behind ChatGPT, has created a new grant program to help establish stronger cybersecurity measures and keep artificial intelligence from spiraling out of control before criminals and black-hat hackers find new ways to use it to sow political chaos and financial ruin.

Grants for cybersecurity

OpenAI's cybersecurity grant program begins with a $1 million fund that will be disbursed to cybersecurity experts and organizations so that they can focus on creating effective safety protocols to thwart emerging dangers.

The company plans to use the grant funding to encourage three areas of cybersecurity, aiming to provide a stronger safety net for the general public as well as greater awareness of the power of AI among users and potential victims.

  1. Empower - Empowerment is OpenAI’s first goal. By providing cybersecurity defenders with the necessary tools, knowledge, and capabilities to develop efficient safeguards, the company believes that it can reduce the risk of major scams and privacy concerns. 

  2. Measure - The company also wishes to work with defenders to measure their capabilities, in order to better understand how much effort and progress is required to maintain effective defenses.

  3. Discourse - Intelligent discussion is another major aspect that OpenAI wants to strengthen. While large language models (LLMs) like ChatGPT are seeing massive adoption, large swaths of the general public still have difficulty understanding the implications of AI and must be informed before they become victims.

Project proposals

OpenAI's grant announcement page includes a list of example proposals that illustrate the kinds of AI cybersecurity projects the company hopes to see when deciding whom to fund.

One notable example is a project to “Detect and mitigate social engineering tactics,” which would help prevent the use of deepfake images and audio designed to mislead people into believing that public figures have committed some sort of wrongdoing that could jeopardize their careers or endanger themselves and others.

These types of concerns are some of the most prevalent as social media figures, politicians, celebrities, and podcasters all have a large amount of content online that can be used to train AI to create convincing fake evidence. 

This use case for AI can already be seen on platforms like YouTube, where more harmless examples of social engineering can be observed. New channels have popped up all over the video-hosting site with content using AI to mimic popular figures like Elon Musk, Donald Trump, and Joe Biden discussing random topics such as their favorite video games or playing Dungeons and Dragons.

Although these examples are meant to entertain, they show how convincing AI can be when mimicking well-known figures, and how dangerous the technology could become if used for more deceptive purposes that could invite major libel lawsuits.