How New Regulations on Bias in AI are Changing the Game

by Ashley Bratton & Sacha Taylor

In an era increasingly dominated by artificial intelligence (AI), concerns about bias and discrimination have come to the forefront of discussions in technology and ethics. While AI holds immense potential to revolutionize industries and improve lives, it is not without its challenges. Bias in AI, whether unintentional or systemic, can have profound consequences, impacting not only companies and their products but also the very communities they serve.

The European Union (EU) and other countries have been at the forefront of addressing this critical issue, and it’s now clear that the United States is following suit, ushering in a wave of new regulations that demand attention and action from companies. In this thought leadership article, we’ll delve into the regulations aimed at addressing bias in AI, explore the harm to communities, and emphasize the need for companies to get ready for change.

The Regulatory Landscape: Europe’s Pioneering Role

  • General Data Protection Regulation (GDPR)

The European Union took a monumental step in addressing data and AI ethics with the implementation of the GDPR in May 2018. Enforced by the data protection authority of each EU member state, the GDPR has set the global standard for data privacy. While not explicitly an AI regulation, the GDPR has clear implications for AI systems, requiring that the data they use and generate adhere to strict privacy rules. This regulation puts significant onus on companies to ensure their AI systems are transparent, accountable, and respectful of individuals’ rights.

  • AI Act (Draft)

Building on the foundation of the GDPR, the European Parliament and the Council of the European Union are actively negotiating the AI Act. While it is not yet clear exactly how enforcement will be divided among EU and national authorities, the act’s potential impact cannot be overstated. The AI Act aims to establish a comprehensive framework for the development and use of AI, with a focus on addressing bias, discrimination, and transparency in AI systems. Companies operating in or serving the European market should closely monitor the progress of this legislation.

  • Non-Discrimination Principle (NDP)

The European Commission’s Ethics Guidelines for Trustworthy AI include the Non-Discrimination Principle, further emphasizing Europe’s commitment to ethical AI. While not legally binding, these guidelines provide a clear ethical framework for AI development, pushing companies to prioritize fairness and non-discrimination in their AI systems.

 

The United States: Catching Up to Global Standards

  • Algorithmic Accountability Act (H.R. 1693) and Algorithmic Fairness Act (S. 2991)

In 2023, the United States took a significant step toward addressing bias in AI with the introduction of these bills, which would be enforced by the Federal Trade Commission (FTC) if enacted. Though neither has yet passed into law, they signal a growing recognition of the need for AI regulation. These bills aim to hold companies developing AI systems accountable for any bias and discrimination that may arise from their use.

  • Joint Statement on Combating Bias and Discrimination in Automated Systems and Artificial Intelligence

Issued in April 2023 by the Federal Trade Commission, the Consumer Financial Protection Bureau, the Department of Justice, and the Equal Employment Opportunity Commission, this joint statement, while not legally binding, underscores the government’s commitment to combating bias and discrimination in AI systems. It sends a clear message that regulatory action is on the horizon, and companies must proactively address bias and discrimination in their AI systems.

  • National Artificial Intelligence Initiative Act (Division E of P.L. 116-283)

Enacted in January 2021, this law created the National Artificial Intelligence Initiative to coordinate federal AI research, standards, and policy, including work on trustworthy AI. It demonstrates the U.S. government’s recognition of the importance of AI ethics and bias mitigation.

  • State-Level Initiatives

States and cities have also taken action. The California Consumer Privacy Act (CCPA) gives consumers rights over the personal data that feeds many AI systems, and New York City’s Automated Employment Decision Tools Law (Local Law 144) requires annual bias audits of hiring algorithms. Together they set precedents for transparency and accountability in AI systems.

 

The Harms to Communities: Real-Life Examples

  • Criminal Justice: Facial Recognition Bias

A 2019 evaluation by the National Institute of Standards and Technology (NIST) found that many facial recognition algorithms misidentify Black faces at substantially higher rates than white faces. Such misidentifications by police facial recognition systems have already contributed to documented wrongful arrests, including that of Robert Williams in Detroit in 2020. This example illustrates how bias in AI can have dire consequences for individuals and entire communities.

  • Fintech: Discriminatory Lending Practices

In 2020, a study by the Consumer Financial Protection Bureau found that Black and Hispanic borrowers were more likely than white borrowers to be denied loans from online lenders, even when they had similar credit scores. These gaps are likely driven in part by bias in the algorithms these lenders use, perpetuating financial disparities among communities.

  • Healthcare: Racial Disparities in Diagnoses

In 2021, a study by the National Institutes of Health found that an AI-powered tool for diagnosing skin cancer was less accurate for Black patients than for white patients, likely because the tool was trained on a dataset composed predominantly of images of white skin. Gaps like this exacerbate healthcare disparities in already vulnerable communities.

  • Employment: AI-Powered Discrimination

In 2022, a study by the University of Chicago found that AI-powered resume-screening tools were more likely to screen out Black and female applicants than white and male applicants, likely because the tools were trained on datasets dominated by white, male candidates, perpetuating systemic employment biases.

 

Mitigating Bias: Solutions that Matter

  • Collect and Use More Diverse Data

One powerful way to mitigate bias in AI is to collect data that is representative of the population the system will serve. By ensuring diversity in training data, AI systems can better avoid discriminatory outcomes.
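As a rough illustration of what this can look like in practice, a team might compare the group composition of a training set against known population shares before training. The function below is a hypothetical sketch, not a standard tool; the group labels and the 5% tolerance are assumptions chosen for the example.

```python
# Hypothetical sketch: flag groups whose share of the training data
# deviates from the target population share by more than a tolerance.
from collections import Counter

def representation_gaps(samples, population_shares, tolerance=0.05):
    """Return {group: actual_share - target_share} for groups whose
    share of the training data is off by more than `tolerance`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, target in population_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - target) > tolerance:
            gaps[group] = round(actual - target, 3)
    return gaps

# Example: a dataset that over-represents group "A" and
# under-represents group "B" relative to a 60/40 population.
data = ["A"] * 80 + ["B"] * 20
gaps = representation_gaps(data, {"A": 0.6, "B": 0.4})
```

A non-empty result is not proof of bias, but it is a cheap early signal that the training data may not reflect the population the system will serve.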

  • Use Fair Machine Learning Algorithms

Fair machine learning algorithms are designed to account for potential bias and make decisions that are fair to all groups of people. These algorithms should be a cornerstone of responsible AI development.
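One widely used fairness metric behind such algorithms is demographic parity: groups should receive positive predictions at similar rates. The sketch below is illustrative only (the function name and data are assumptions, not any particular library’s API), but it shows how simple the core measurement is.

```python
# Illustrative sketch: demographic-parity difference for a binary
# classifier's predictions across demographic groups.
def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate
    across groups. Near 0 suggests similar treatment; larger values
    flag a disparity worth investigating."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Example: group "x" receives positive outcomes 75% of the time,
# group "y" only 25% of the time.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["x", "x", "x", "x", "y", "y", "y", "y"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

Demographic parity is only one of several fairness definitions, and the right one depends on the application; the point is that these properties are measurable, not matters of opinion.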

  • Monitor and Audit AI Systems

Continuously monitoring and auditing AI systems is crucial to identify and rectify bias. By tracking decisions and patterns, companies can proactively address and correct any harmful biases.
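One simple audit heuristic comes from U.S. employment guidance: under the "four-fifths rule," a selection rate for a protected group below 80% of the reference group’s rate is commonly treated as a signal of adverse impact. Below is a minimal sketch of applying it to logged decisions; the log field names (`group`, `approved`) are assumptions for the example.

```python
# Minimal sketch of a periodic audit: compute the disparate-impact
# ratio over a decision log and compare it to the 0.8 threshold
# from the "four-fifths rule" heuristic.
def disparate_impact_ratio(log, protected_group, reference_group):
    """Ratio of approval rates (protected / reference). Values below
    0.8 are commonly treated as a signal worth reviewing."""
    def rate(group):
        rows = [r for r in log if r["group"] == group]
        return sum(r["approved"] for r in rows) / len(rows)
    return rate(protected_group) / rate(reference_group)

decision_log = [
    {"group": "ref",  "approved": 1}, {"group": "ref",  "approved": 1},
    {"group": "ref",  "approved": 1}, {"group": "ref",  "approved": 0},
    {"group": "prot", "approved": 1}, {"group": "prot", "approved": 0},
    {"group": "prot", "approved": 0}, {"group": "prot", "approved": 0},
]
ratio = disparate_impact_ratio(decision_log, "prot", "ref")  # 0.25 / 0.75 ≈ 0.33
```

Running a check like this on a schedule, and alerting when the ratio drops below the threshold, turns auditing from a one-time review into an ongoing safeguard.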

  • Educate the Public about AI Bias

Raising awareness about the potential for AI bias is essential. Public education can empower individuals to understand the risks and advocate for equitable AI practices.

 

Conclusion

Bias in AI has real-life consequences, as illustrated by these alarming examples. It perpetuates discrimination, reinforces inequality, and harms vulnerable communities. The EU’s proactive stance on regulation has set a global precedent, and the United States is following suit with a wave of new regulations.

Companies must not only comply with these regulations but also embrace the ethical imperative of mitigating bias in AI. By doing so, they can protect their interests, uphold their social responsibility, and contribute to a more equitable and just society. As we navigate this evolving landscape, the convergence of ethics, technology, and regulation will define the future of AI—a future in which bias has no place, and fairness is the norm.

