Policies Guiding Ethical AI Development


Artificial intelligence (AI) has been in development for decades, but recent advances have brought the technology under new scrutiny as more people become aware of how it is designed and used. This growing awareness has produced real tension, driven by public concern over misuse and the fear that AI could become uncontrollable.

However, many legal experts and politicians are already discussing the topic at length, searching for ways to prevent machine learning and deep learning systems from being misused and to keep them aligned with human morals and values, mitigating harms that are not yet fully understood.

Historical Backdrop

Debates about AI began with discussions of the philosophical implications of machine intelligence and its potential place in modern society. Early concerns centered on AI becoming too powerful, displacing jobs, and even threatening humanity. As the technology became more widespread, however, these concerns shifted toward practical measures around privacy law and data collection. Incidents like the Cambridge Analytica scandal led many people to scrutinize how companies were mining personal data and using it to manipulate public opinion for political ends.

This has led to the emergence of many AI-focused policies around the world, most notably in the EU. Some examples include:

  • EU Ethics Guidelines for Trustworthy AI: These guidelines rest on three components: (1) AI must be lawful, complying with all applicable laws and regulations; (2) AI must be ethical, adhering to ethical principles and values; and (3) AI must be robust, from both a technical and a social standpoint.

  • General Data Protection Regulation (GDPR): This EU regulation governs how personal data is collected and used, requiring companies to meet obligations that protect consumers.

  • OECD Principles on AI: These intergovernmental principles aim to ensure that AI is used both ethically and innovatively, in recreational and commercial settings alike, without eroding public trust.

Current Landscape

The EU has enacted many of the most comprehensive AI laws to date, but the technology remains under heavy scrutiny around the world, including in the United States. California in particular has moved ahead, reflecting its central role in the tech industry: San Francisco has banned the use of facial recognition by law enforcement, and the state passed the California Consumer Privacy Act. At the federal level, however, there are still no major laws concerning AI.

Most AI guidelines in the United States instead come from the corporations developing AI, such as Google and Microsoft, each of which upholds its own set of AI principles. Others, like OpenAI, have gone as far as testifying before Congress about the need for stricter regulation of AI software, suggesting that any company selling products built on deep learning frameworks should be licensed.

Challenges with Bias

Bias in AI continues to be a major issue. In 2020, IBM announced that it would end its research into facial recognition technology, citing racial profiling and discrimination as its reasons. Critics worry that facial recognition can be misused in law enforcement, since algorithms trained on unrepresentative data can disproportionately misidentify members of minority communities.

Bias was also observed when Amazon built an algorithm to sort through resumes and job applications for roles at the company. The company eventually discovered that the tool was penalizing applications from women: its training data placed heavy emphasis on male applicants, reflecting the male dominance of the tech industry. The sketch below illustrates the underlying mechanism.
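To make that mechanism concrete, here is a minimal Python sketch, assuming scikit-learn and NumPy, that trains a classifier on synthetic "historical hiring" data. The feature names and numbers are hypothetical, and this is not Amazon's actual system; it only shows how a model trained on biased outcomes reproduces that bias.

    # A minimal, hypothetical sketch of how biased historical hiring data
    # teaches a model to penalize a gender-correlated resume feature.
    # Not Amazon's system; all data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Two features: years of experience, and a hypothetical proxy flag
    # (e.g., a "women's" organization listed on the resume).
    years_experience = rng.normal(5, 2, n)
    womens_org_flag = rng.binomial(1, 0.3, n)

    # Historical labels encode the bias: past hiring rewarded experience
    # but penalized the proxy flag, regardless of actual skill.
    score = 0.8 * years_experience - 2.0 * womens_org_flag
    hired = (score + rng.normal(0, 1, n)) > 3.0

    X = np.column_stack([years_experience, womens_org_flag])
    model = LogisticRegression().fit(X, hired)

    # The trained model faithfully reproduces the historical pattern:
    # a strongly negative weight on a feature unrelated to ability.
    print("experience weight:  %.2f" % model.coef_[0][0])
    print("women's-org weight: %.2f" % model.coef_[0][1])

Running this prints a positive weight for experience and a clearly negative one for the proxy flag. The model is never told anyone's gender, yet it learns to discriminate because the outcomes it imitates were discriminatory.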

Future Trajectories

Calls for AI regulation have only grown stronger since the release of ChatGPT and the public's realization that generative AI can automate many high-end creative jobs. Beyond that, generative adversarial networks can produce deepfake images, which have already appeared in US political ads and could erode public trust during election seasons.

However, effective AI policy cannot be made by one country or region alone. Because the technology spreads freely online, bad actors anywhere in the world can cause harm with AI systems. An international effort is therefore most likely needed to mitigate the worst harms created by AI.

Keegan King

Keegan is an avid user and advocate of blockchain technology and its implementation in everyday life. He writes a variety of content related to cryptocurrencies while also creating marketing materials for law firms in the greater Los Angeles area. He was part of the curriculum writing team for the bitcoin coursework at Emile Learning. Before becoming a writer, Keegan King was a business English teacher in Busan, South Korea. His students included local businessmen, engineers, and doctors who all enjoyed discussions about bitcoin and blockchains. Keegan King's favorite altcoin is Polygon.

https://www.linkedin.com/in/keeganking/