Case Studies of Ethical Issues in AI


With artificial intelligence (AI) becoming more common every day, we must begin analyzing the ethical considerations of its use and determine how AI can be used properly. This groundbreaking technology holds a lot of potential, but it also creates risks that must be mitigated before they cause harm.

AI has already been deployed worldwide, and it hasn’t always been perfect: time and again, poorly trained algorithms applied at scale have amplified bias and discrimination. From hiring practices gone wrong to large-scale police monitoring systems that erode community trust, let's dive into some of the most important case studies in the ethical use of AI.

The Moral Code: Defining Ethics in AI

Ethical standards vary from culture to culture, making it difficult to find a common denominator for how computers should operate, but ethical AI can still be boiled down to a few key points that help keep algorithms safe.

  • Bias: Machine learning models are susceptible to bias depending on the training data they receive. If the initial data is unbalanced or faulty, the resulting algorithm can produce inaccurate or skewed results, as illustrated in the sketch after this list.

  • Fairness: AI models must be trained to respect human morals and values, refusing commands from bad actors that could disrupt the societal fabric and avoiding discrimination of any kind.

  • Transparency: AI systems often operate as black boxes, making decisions in an environment that is difficult to observe. This causes problems when developers cannot identify which inputs led to a given output.

  • Accountability: Responsibility for proper AI use involves many actors: the developer, the user, and the system itself. Mechanisms are needed to ensure that all parties use the model properly, the way it was intended.
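
To make the bias point concrete, here is a minimal sketch in Python, using synthetic data and scikit-learn rather than any system discussed in this article, of how a model trained mostly on one group can end up noticeably less accurate for an underrepresented group.

```python
# Hedged illustration with synthetic data: group A dominates the training set,
# while group B is underrepresented and follows a slightly different pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_a, n_b = 950, 50
X_a = rng.normal(loc=0.0, scale=1.0, size=(n_a, 3))
X_b = rng.normal(loc=0.5, scale=1.5, size=(n_b, 3))
y_a = (X_a[:, 0] > 0).astype(int)        # group A's label depends on feature 0
y_b = (X_b[:, 1] > 0).astype(int)        # group B's label depends on feature 1

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Fresh samples from each group: accuracy for the minority group typically
# lags because the model has mostly learned group A's pattern.
X_a_test = rng.normal(0.0, 1.0, size=(500, 3))
X_b_test = rng.normal(0.5, 1.5, size=(500, 3))
print("group A accuracy:", model.score(X_a_test, (X_a_test[:, 0] > 0).astype(int)))
print("group B accuracy:", model.score(X_b_test, (X_b_test[:, 1] > 0).astype(int)))
```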

It is crucial that AI development is done correctly if we are to rely on this new form of technology. Misused algorithms can harm societies, even unintentionally, and damage public trust over time, causing the development of innovative technology to stagnate.

Case Study 1: Racial Bias in Facial Recognition Technologies

In 2021, Robert Williams sued the Detroit Police Department (DPD) after it wrongfully arrested him on theft charges the year before. Williams was falsely identified by a facial recognition system that was unable to reliably distinguish between African-American faces.

The incident left Williams detained for 30 hours before his release. Two weeks later, the charges were dropped after the DPD admitted its mistake, and Police Chief James Craig called the investigative work “shoddy” and apologized.

The lawsuit was one of the first of its kind, citing the wrongful use of AI in law enforcement, and many cities have since banned the practice for this reason. However, facial recognition is still being used at the federal level, including to locate rioters from the January 6, 2021 Capitol riot.

Case Study 2: Autonomous Vehicles and Decision Making

In August 2023, two autonomous taxis were involved in collisions in San Francisco, raising concerns that fully automated driving is still not ready for public use. The first car was driving through an intersection when it failed to recognize the emergency sirens of a passing fire truck, and the two vehicles collided.

The second collision occurred only a few hours later in the same city, when a human driver struck a self-driving car as it passed through an intersection. In this case, however, the human driver was at fault for speeding through a red light.

Although human drivers are not perfect, many fear that AI is not ready for the road, and there is some validity to that belief. Autonomous systems are prone to mistakes just as human drivers are, and they will need to demonstrate far greater reliability before they are fully accepted.

Case Study 3: Privacy Concerns in AI-driven Surveillance Systems

Surveillance systems continue to be a major concern, given AI’s ability to track people using computer vision in CCTV cameras. These programs can be used not only for law enforcement but for workplace monitoring as well. The practice has become so controversial that the city of San Francisco outlawed the use of AI and facial recognition for public surveillance in 2019 in an 8-1 vote.

Predictive analytics has also caused concern, especially in Los Angeles, where police relied on an AI model that mapped out and predicted where crimes were most likely to occur, singling out minority groups and causing tension between police and citizens. The program was ultimately discontinued in 2020, with the LAPD concluding that the AI did not lead to a reduction in crime.

Case Study 4: AI in Hiring - Fairness and Discrimination

Unethical use of AI can also be found in the workplace. In 2018, it was discovered that Amazon had been using an AI model to filter applications and resumes. Developers realized that the model was unfairly dismissing applications from women because its training data was imbalanced, reflecting real-world bias in the tech industry.

In this example, it is clear that the AI was not creating new forms of discrimination, but simply replicating patterns that were already present in society. Without transparency and accountability, this fault in the system could easily have gone unnoticed before corrections were made.
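
One simple way such a fault can be caught is by comparing outcomes across groups. The sketch below is a hedged illustration with entirely hypothetical data and column names, not a reconstruction of Amazon's system: it measures selection rates by group and applies the "four-fifths rule" heuristic commonly used in employment-discrimination analysis.

```python
# Hypothetical screening results: 1 = advanced to interview, 0 = rejected.
import pandas as pd

results = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "M", "F", "M"],
    "advanced": [0,    1,   0,   1,   1,   1,   1,   0,   0,   1],
})

# Selection rate per group.
rates = results.groupby("gender")["advanced"].mean()
print(rates)

# Four-fifths rule heuristic: flag the model if any group's selection rate
# falls below 80% of the highest group's rate.
ratio = rates.min() / rates.max()
print("selection-rate ratio:", round(ratio, 2), "-> flagged:", ratio < 0.8)
```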

Lessons Learned: Adapting and Improving Ethical Practices in AI

Despite its successes, AI poses many challenges that need to be addressed before it can be fully integrated into society. AI is not perfect, and its faults can be hard to detect. Whether AI is used in the workplace or in law enforcement, it is a new and emerging technology, and we still need to monitor it to guarantee that algorithms are working as intended and without harm.

Fortunately, techniques like explainable AI can be used to improve transparency and open up the black boxes in decision-making. While these techniques may require reducing a model's complexity, they are worth the trade-off, helping us ensure that AI use remains ethical.
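
As one example of what such a technique can look like in practice, the sketch below uses permutation importance from scikit-learn on a synthetic dataset; the model and features are illustrative assumptions, not tied to any system discussed above.

```python
# Permutation importance: shuffle each feature in turn and measure how much
# the model's test accuracy drops; large drops mark features the model relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```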

Keegan King

Keegan is an avid user and advocate for blockchain technology and its implementation in everyday life. He writes a variety of content related to cryptocurrencies while also creating marketing materials for law firms in the greater Los Angeles area. He was a part of the curriculum writing team for the bitcoin coursework at Emile Learning. Before becoming a writer, Keegan King was a business English teacher in Busan, South Korea. His students included local businessmen, engineers, and doctors who all enjoyed discussions about bitcoin and blockchains. Keegan King’s favorite altcoin is Polygon.

https://www.linkedin.com/in/keeganking/