Lawmakers from three states that recently passed artificial intelligence laws discussed how to protect people’s privacy in future legislation being drafted and debated in New Mexico and across the country.
Senator James Maroney of Connecticut is leading a working group to develop definitions and doctrines that can serve as a basis for drafting privacy laws for AI. The group has members from all but three states, he told the New Mexico Courts, Corrections & Justice Committee on August 13.
In total, about 19 states have passed laws similar to Connecticut’s, which typically address the rights of citizens and the responsibilities of data controllers and processors, Minnesota state Rep. Steve Elkins said during the committee meeting.
New Mexico enacted its own AI-related law this year, but it deals with political ads and does not address other issues such as data protection. Another bill, discussed in the 2024 regular session, would have provided basic guidelines for the government’s use of AI, but it did not come to a vote in either chamber.
Maroney and Elkins said that while no two AI bills in state legislatures would be exactly the same, it was important to make them similar so that national companies could more easily comply with them.
“So if you take Senator Maroney’s bill, Maryland’s bill or my own bill as a starting point, you’re almost there, because all of the provisions of those bills have been thoroughly vetted and broadly accepted by the business community,” Elkins told the committee in New Mexico.
Maryland Senator Sara Love said such AI laws should be implementable in every state. “But that doesn’t mean I’ll accept everything (the companies) want.”
In most other states, Love said, laws say the data controller must limit its data collection to what is “adequate, relevant, and reasonably necessary in light of the disclosed purposes for which the data is processed.” In other words, the company only has to disclose that it collected the data and provide a reason for doing so.
Love said she wanted something stronger in her own bill, which requires data collectors to limit their collection to what is “reasonably necessary to provide or maintain the product or service.”
For example, a company that operates a mobile game should not be allowed to record the user’s location because it is not necessary to provide the service, Love said.
AI decision making
AI has the potential to make decisions better, fairer and more transparent, said Christopher Moore, professor at the Santa Fe Institute.
On the other hand, AI algorithms are often trained on historical data, which carries its own biases, and they often assume that past patterns will continue into the future, Moore said later on August 13 during another presentation to the Committee on Courts, Corrections and Justice.
“AI treats people in some ways like statistics,” he said. “It doesn’t look at a person’s individual facts the way a human decision maker would.”
Maroney said the working group was concerned that biased algorithms could lead to profiles of people and negatively impact their applications for jobs or housing.
Landlords often use AI to run a background check on a prospective tenant, Moore said. Even if a landlord doesn’t intend to discriminate, the AI can still have a discriminatory effect, he said. There’s a lot of inaccurate data floating around, including eviction records for someone with a similar name or dropped criminal charges, he said.
It is important that the law require companies that conduct this kind of algorithmic profiling to examine all possible disparate impacts, Maroney said.
“We need to make sure we test these algorithms before they make important life decisions,” he said.
It is also important that companies subject to the law cannot retaliate against consumers who exercise their rights, Love said, for example by charging more for a product when a consumer refuses to hand over rights to their data.
Many AI systems are so-called “black boxes,” meaning you can’t see how or why they make a particular assessment or recommendation, Moore said.
The people affected by AI and the decision-makers it advises need to understand the logic behind it, know what kinds of mistakes it can make and independently assess its accuracy and fairness, he said.
“AI is a great tool, but like any tool, we need to figure out when and where it’s the right tool for the job,” Moore said. “Or do we just have to take the vendor’s word that it works great and we should pay for it?”
Data protection
Elkins said his bill adopted language from Oregon and Delaware that requires companies to disclose where they may have sold personal information and prohibits them from selling or using sensitive information, such as precise location data, for targeted advertising.
He said he also included a provision in his Minnesota bill that explicitly prohibits the re-identification of data that was intentionally anonymized.
Minnesota state law also requires companies to submit an annual report on their data minimization efforts. This means that companies cannot collect data they do not need to provide goods or services, nor can they collect data just to sell it to someone else.
Love said Maryland’s law allows people to access the data collected and processed about them, correct inaccuracies, and obtain copies. It also allows people to opt out of the processing of their data for targeted advertising, the sale of their data, or its use in profiling for high-impact decisions, she said.
New Mexico Senator Antoinette Sedillo-Lopez (D-Albuquerque) asked what “opt-out” means.
Elkins said that if you don’t want companies to use your information when you buy car insurance or apply for an apartment, many state laws say you can opt out of having your information used for that purpose.
Elkins said he didn’t think that standard was adequate, because if you have reason to complain about profiling, “it’s probably already happened.” So he added a provision to his bill granting additional rights once profiling has occurred.