
AI Weekly: Algorithmic discrimination highlights the need for regulation


The Transform Technology Summits start October 13th with Low-Code/No Code: Enabling Enterprise Agility. Register now!


This week, a piece from The Markup uncovered bias in U.S. mortgage approval algorithms that leads lenders to turn down applicants of color more often than white applicants. A decision-making model called Classic FICO fails to account for everyday payments, like on-time rent and utility payments, and instead rewards traditional credit, to which Black, Native American, Asian, and Latino Americans have less access than white Americans.

The findings aren’t new: in 2018, researchers at the University of California, Berkeley found that mortgage lenders charged higher interest rates to these borrowers than to white borrowers with comparable credit scores. But they highlight the challenges of regulating companies rushing to adopt AI for decision-making, especially in industries with the potential to inflict real-world harm.

The stakes are high. Economists at Stanford and the University of Chicago showed in a June paper that because underrepresented minorities and low-income groups have less data in their credit histories, their credit scores tend to be less precise. Credit scores factor into a range of application decisions, including those for credit cards, apartment rentals, car purchases, and even utility service.

In the case of mortgage approval algorithms, Fannie Mae and Freddie Mac, the government-sponsored mortgage companies, told The Markup that Classic FICO is regularly assessed for compliance with fair lending laws, both internally and by the Federal Housing Finance Agency and the Department of Housing and Urban Development. But Fannie and Freddie have for the past seven years resisted efforts by advocates, the mortgage and housing industries, and Congress to allow a newer model.

Algorithmic discrimination

The financial industry isn’t the only culprit in algorithmic discrimination, equality and fairness laws notwithstanding. Last year, a Carnegie Mellon University study found that Facebook’s ad platform behaved prejudicially against certain demographics, serving ads related to credit cards, loans, and insurance disproportionately to men over women. The study also showed that Facebook rarely displayed credit ads of any type to users who chose not to identify their gender or who labeled themselves nonbinary or transgender.

Laws on the books, including the U.S. Equal Credit Opportunity Act and the Civil Rights Act of 1964, were written to prevent this. Indeed, in March 2019, the U.S. Department of Housing and Urban Development filed a complaint against Facebook for allegedly “discriminating against people based upon who they are and where they live,” in violation of the Fair Housing Act. But the discrimination continues, a sign that the algorithms at fault, and the power centers that create them, continue to outpace regulators.

The European Union’s proposed standards for AI systems, released in April, are perhaps the closest thing yet to reining in decision-making algorithms run amok. If passed, the rules would subject “high-risk” algorithms used in recruiting, critical infrastructure, credit scoring, migration, and law enforcement to strict safeguards, and would outright ban social scoring, child exploitation, and some surveillance technologies. Companies that violate the framework face fines of up to 6% of their global revenue or 30 million euros ($36 million), whichever is greater.
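To make the “whichever is greater” cap concrete, here is a minimal sketch of how the penalty ceiling would be computed; the function name is illustrative and not part of the EU text:

    def eu_ai_fine_cap(global_revenue_eur: float) -> float:
        """Illustrative cap on fines under the proposed EU AI rules:
        6% of global revenue or 30 million euros, whichever is greater."""
        return max(0.06 * global_revenue_eur, 30_000_000)

    # A firm with 100 million euros in revenue hits the 30M-euro floor,
    # while a firm with 1 billion euros faces a 60M-euro ceiling.
    print(eu_ai_fine_cap(100_000_000))    # 30000000.0
    print(eu_ai_fine_cap(1_000_000_000))  # 60000000.0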

The United States has so far taken a piecemeal approach, such as a bill in New York to regulate algorithms used in recruiting and hiring. Cities like Boston, Minneapolis, San Francisco, and Portland have imposed bans on facial recognition, and members of Congress including Ed Markey (D-Mass.) and Doris Matsui (D-Calif.) have introduced legislation to increase transparency into companies’ development and deployment of algorithms.

In September, Amsterdam and Helsinki launched “algorithm registries” to bring transparency to public deployments of AI. Each algorithm cited in the registries lists the datasets used to train the model, a description of how the algorithm is used, how humans use its predictions, and how the algorithm was assessed for potential bias or risks. The registries also give citizens a way to weigh in on the algorithms their local government uses, and they list the name, city department, and contact details of the person responsible for a particular deployment.
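As a rough sketch of what one of these registry entries captures, the fields described above could be modeled as follows; the class, field names, and example values here are hypothetical, not Amsterdam’s or Helsinki’s actual schema:

    from dataclasses import dataclass, field

    @dataclass
    class RegistryEntry:
        """One public entry in a city algorithm registry (illustrative only)."""
        name: str                  # name of the deployed algorithm
        department: str            # city department operating it
        contact: str               # person responsible for the deployment
        description_of_use: str    # what the algorithm does and where it is used
        training_datasets: list = field(default_factory=list)  # datasets used to train the model
        human_oversight: str = ""  # how humans use the model's predictions
        bias_assessment: str = ""  # how the algorithm was evaluated for bias or risk
        citizen_feedback: list = field(default_factory=list)   # comments submitted by residents

    entry = RegistryEntry(
        name="Parking permit triage",
        department="Transportation",
        contact="Jane Doe, Transportation",
        description_of_use="Ranks permit applications for manual review.",
        training_datasets=["2018-2020 permit application records"],
        human_oversight="Staff review every flagged application before any decision.",
        bias_assessment="Audited annually for disparate flag rates across districts.",
    )
    print(entry.name, "|", entry.department, "|", entry.contact)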

This week, China became the latest country to step up oversight of the algorithms companies use to drive their businesses. The Cyberspace Administration of China said in draft guidelines that companies must adhere to principles of ethics and fairness and must not use algorithms that entice users into “spending large amounts of money or spending money in ways that may disrupt public order,” according to Reuters. The guidelines also require that users be able to turn off algorithm-based recommendations and that Chinese authorities have access to the algorithms, with the option of requesting “rectifications” if problems are found.

In any case, it’s becoming clear, if it wasn’t already, that industries are poor self-regulators when it comes to AI. According to a March Deloitte analysis, 38% of organizations had an insufficient governance structure, or none at all, for managing AI data and models. And in a recent KPMG report, 94% of IT decision-makers said they believe companies need to focus more on corporate responsibility and ethics when developing their AI solutions.

A recent study found that few major AI projects properly address the ways the technology could negatively impact the world. The findings, published by researchers at Stanford, UC Berkeley, the University of Washington, and University College Dublin & Lero, showed that dominant values were “operationalized in ways that centralize power, disproportionately benefiting corporations while neglecting society’s least advantaged.”

A survey from Pegasystems predicts that if the current trend continues, a lack of accountability in the private sector will lead governments to take on responsibility for regulating AI within the next five years. Already, that prediction seems prescient.

For AI coverage, send news tips to Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact. Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn more
  • networking features, and more

Become a member
