Other parts of the world are accelerating laws designed to protect consumers from advanced artificial intelligence tools, including chatbots that can replicate human tasks and biometric surveillance of faces in public spaces.
But federal legislation has stalled in the US, leaving the job of regulating OpenAI's ChatGPT and other generative AI tools to local governments. How much protection consumers have in this country at the moment depends on where they live.
Six states — California, Colorado, Connecticut, Illinois, Maryland, and Virginia — have or will have laws on their books by the end of 2023 to prevent businesses from using AI to discriminate against or deceive consumers and job applicants.
One city, New York, has joined those efforts, passing an ordinance that regulates the use of AI in the hiring process.
"We are absolutely at an inflection point," data privacy attorney Goli Mahdavi of Bryan Cave Leighton Paisner told Yahoo Finance. "We have mass adoption of AI tools across enterprises."
On Thursday, the European Union also took a big step toward passing legislation to regulate AI tools, including ChatGPT. Its lawmakers agreed to strengthen draft legislation to include a ban on facial recognition in public spaces and greater transparency requirements for generative AI systems.
In the US, Mahdavi says, the states with legislation in place target similar protections. California, Colorado, Connecticut, and Virginia outlaw AI "profiling" unless a consumer consents. New York City does the same.
The practice involves collecting or sharing personal data, such as work, health, and financial records, and relying on a computer algorithm to evaluate the data to make decisions that come with legal consequences, such as granting or denying applications for loans, insurance coverage, or housing.
To the extent that AI tools make automated decisions in these categories, Mahdavi explains, businesses must explain to customers the logic used to program the AI's decision-making capabilities.
The laws also require that businesses using the tools offer consumers an opt-out, and that businesses undergo risk assessments detailing for consumers the benefits and risks of an AI tool.
Laws in Illinois and New York City put different guardrails around how employers can use AI tools, preventing employers from collecting a job applicant’s personal data to make hiring decisions.
In Maryland, employers are prohibited from using facial recognition technology to identify job candidates, unless the candidates consent.
California's laws offer the most robust protections for the state's consumers. In addition to blocking profiling, the state makes it unlawful to use online bots to promote sales of goods and services to a person within its borders.
An exception in the law permits the use of bots so long as the AI tool is clearly disclosed. Bots are also prohibited when used to influence election votes.
Mahdavi expects companies to face compliance challenges as these laws are applied.
"Any time you're grappling with a patchwork of state laws it kind of brings the compliance risk up, because there is no universal standard to which companies can tie their compliance programs," she says.
For now, she adds, the most heavily affected industries are housing, employment, and insurance.
Mahdavi explains that California is best equipped to enforce its laws because the state has allocated enforcement resources. Whether other states have the muscle to enforce their laws remains to be seen.
"We know that in California there is a dedicated consumer privacy agency that has its own budget, and they are currently staffing up," she said. "So we can expect that in July, when the enforcement date comes, there will be an enforcement sweep."
(Source: Yahoo Finance. All rights reserved by the original source.)