Google, Microsoft seem OK with Colorado’s controversial AI law. Local tech not so much.

This story was updated on Oct. 22 at 12:52 a.m. to add additional comments from large tech companies that presented at the task force meeting.

Representatives from Google, Microsoft, IBM and other massive technology companies showed up at the State Capitol on Monday to support and make suggestions for Colorado’s controversial artificial intelligence law, the first of its kind in the nation when it passed last spring.

Big Tech seemed more or less OK with the new law, which aims to put guardrails on machines that make major decisions that could alter the fate of any Coloradan. That specifically includes AI used in decisions about jobs, lending and financial services, health care services, insurance, housing, government or legal services, or a spot in college. 

From left: Soona founder Liz Giorgi, Range Ventures co-founder Adam Burrows and Jon Nordmark, founder of Iterate.ai, shared their concerns and hopes for the revision of Senate Bill 205, the new Colorado law that regulates AI harm to consumers but that some worry will hurt innovation. They spoke during a meeting of the Artificial Intelligence Impact Task Force on Oct. 21, 2024. (Provided by Colorado Technology Association)

But that wasn’t specific enough for a group of local founders who were involved in the most heated discussion during Monday’s Artificial Intelligence Impact Task Force meeting. Their concerns echoed a letter signed by a group of 300 technology executives sent to Gov. Jared Polis a few weeks after he signed Senate Bill 205. Polis pledged to revise the new law and is relying on the task force to figure those things out. A report is due in February. 

Luke Swanson, chief technology officer at popular Denver shopping app Ibotta, wondered if cash-rebate offers made to its shoppers are considered “financial services.” Jon Nordmark, a local entrepreneur who founded eBags in the 1990s, said his current company, Iterate.ai, develops technology that customers use to build and train their own private AI systems. For Iterate to be in compliance, it would have to know everything a customer does with that technology, which it doesn’t. And it couldn’t afford to, even with 100 employees, most of them tech experts. The company just hired its first staff accountant.

And Liz Giorgi, cofounder and CEO of Soona, a Denver company that provides visuals for ecommerce companies, said she’s struggled to figure out if taking photos of a health care device that Soona enhances with AI tools qualifies as a health care service. 

“When you’re running a 110-person organization, we do not have a lawyer on staff who is guiding us on these decisions,” Giorgi said. “Are we giving health care advice and are we deploying health care services in some inconsequential way? I’ve asked three attorneys. I’ve gotten three different answers.”

After some intense debate, one task force member responded to Giorgi with, “No.”

At least one task force member pointed out the disconnect between big tech and local companies. Seth Sternberg, CEO of Boulder-based home-care company Honor, said that the locals were just sharing their experience. 

“The little tech people got the harder questions than the big tech people. And they got the stronger reactions than the big tech people,” Sternberg said. “And the big tech people get to, frankly, have 1,000 compliance people on their staff that let them then come and say very generic things because they know that if 205 does occur, that it won’t have a substantial impact on them. They can handle it. 

“But the little tech people, if 205 in its current form, the way they’re interpreting it happens, they’re existentially scared for their lives,” he said. “That is their reality.”

Members of the Artificial Intelligence Impact Task Force listen to speakers from TechNet, Amazon, Google and Salesforce talk about how to revise the state’s new AI law during a committee meeting on Oct. 21, 2024 at the State Capitol. (Tamara Chuang, The Colorado Sun)

Others on the task force asked questions about what needed to change. The law requires certain companies to disclose when AI is used or interacts with consumers. How should companies assess AI risks or disclose AI usage to customers? asked Tatiana Rice, deputy director of the Future of Privacy Forum, which works with organizations and governments to shape policies. 

Another task force member, Matt Scherer, senior policy counsel at the Center for Democracy and Technology, said he felt a little frustrated that when he tried to drill down on what local technology founders wanted, nobody was on the same page.

“They kept talking about things that the law does not cover or things that the law does not require,” Scherer said after the meeting. “They talked very generally about that they didn’t want proactive disclosure and that the outside world generally does not know right now when automated systems are being used. But if you don’t have proactive disclosures, just to be blunt, you might as well not do regulation at all.”

What Big Tech said

The global tech companies spoke about the need for regulation. Microsoft’s Ryan Harkins noted that cofounder Bill Gates pushed for a federal privacy law more than two decades ago. They also talked about their companies’ commitments to responsible AI development.

“There’s a growing recognition around the world that it’s all well and good for companies to take it upon themselves to do what they think is right,” Harkins said. But, he added, “We also need laws and there’s a growing conversation around the world about what those laws should look like. From our point of view, we want to see laws put in place that both facilitate and enable innovation … but will also address serious and real risks of harm. And that is a hard thing.”

Alice Friend, Google’s head of AI and emerging tech policy, applauded the state’s risk-based approach to regulation but suggested tailoring the rules by industry. “Targeted revisions to the law will improve its effectiveness by focusing on truly high risk use cases,” she said. 

“This is because AI is a general purpose technology. It can help you plan a party or help you manage your retirement savings. Merely using AI should not automatically trigger harmful and onerous regulatory applications,” said Friend, who attended virtually. “We share the goal of protecting Coloradans while leveraging this once in a generation technology.”

Friend also suggested the state consider how its law could work in harmony with future national or global rules, such as the International Organization for Standardization’s AI standard or the White House’s executive order on AI. The task force could also do something similar to the guidance the Massachusetts Attorney General gave AI developers and users on how existing laws dealing with consumer protection, discrimination and data security apply.

Microsoft, one of the largest corporate investors in OpenAI, doesn’t support the part of the law that requires companies to proactively notify the attorney general if their AI “is reasonably likely” to cause discrimination. Harkins said that standard can be unclear and could result in unnecessary notifications “flooding the AG’s office.”

But Colorado’s regulation is a good start, Harkins said, because “the law attempts to address a long-standing problem in tech regulation: How do you ensure that laws keep pace with technology? How do you ensure that as soon as you pass a law to regulate tech, it’s not already out of date?”

For one, he said, it requires technology companies to “take reasonable care to, as this law would, protect consumers from known or reasonably foreseeable risks that your system will discriminate against them.” 

And quibbling is allowed — from the small companies that feel the regulation is too burdensome to the consumer advocates who feel it doesn’t go far enough. But overall, he said, “We think that general approach is interesting, it’s creative, it’s innovative and it’s not a bad place to start.”

Both Harkins at Microsoft and IBM’s Ryan Hagemann, the company’s global AI policy issue lead, said their companies supported a similar AI bill in Connecticut that was scuttled after the state’s governor said he would veto it. 

And IBM, the century-old tech firm, is still supportive of Colorado’s law. 

“We’re optimistic that the bill is in relatively good shape,” said Hagemann, who had flown into Denver to present additional suggestions to the task force. Regulations should be specific to AI risks, not the technology itself, he said. And they should, as the Colorado law does, focus on AI used to make “consequential decisions” like hiring. 

Task force members had few questions for IBM and Microsoft representatives.

But Hagemann, in an earlier interview, said the concerns of local tech companies and startups weren’t unreasonable. The law requires developers of high-risk AI systems that make consequential decisions to create and make public a general statement of “reasonably foreseeable uses and known harmful or inappropriate uses,” as well as documentation of how the system was evaluated and the measures taken to mitigate discrimination.

“I think part of the issue here is that baked into this law there are certain expectations regarding the provisioning of things like documentation and transparency requirements. Those are obligations that the day before the bill passed did not exist,” Hagemann said. “For a lot of smaller companies, figuring out internal governance processes to make that sort of compliance regime manageable from their perspective requires a lot more resources than they perhaps previously had put into it.”

It’s a cultural mind shift, he added. Colorado’s law could help small companies learn and adapt to the new rules now since the state won’t be the last to pass AI legislation. 

“If you are looking to essentially develop or deploy models in the American marketplace, at a certain point you’re going to have to provide a certain amount of transparency and documentation to regulators. That’s almost certainly going to be a near term obligation that everyone’s small and large businesses are going to have to comply with,” he said. “Some of the small businesses will have to be investing a little bit more in internal governance, the compliance process and figuring out how they’re going to showcase transparency with regard to their systems and models.”

Keeping Colorado on track

State Sen. Robert Rodriguez, a Denver Democrat and the bill’s sponsor, acknowledged the concerns of small and local businesses, but he also has the task of balancing the side that feels the law doesn’t go far enough to protect consumers in this new world of generative AI where services like OpenAI’s ChatGPT can conjure up photos, legal research and podcasts as if they were produced by humans.

“With the inception of ChatGPT and the growth of this technology in the last three, four years, do we go the route of where we were behind the ball on data privacy? … Do we wait till it’s too long and then just not pass anything?” Rodriguez asked. “I struggle with trying to figure out where the balance is (with) consumers and innovation.”

He’s hopeful that the task force will find general agreement between big tech, local companies and consumers because if not, the situation could become worse. 

“We’re committed,” Rodriguez said. “When California passed their privacy bill, it was the end of the world. That was a ballot measure. So if I put something like this on the ballot, I bet you I could get a lot stronger policy than this as fearful as people are.”

The task force plans to meet again on Nov. 13. A report of its recommendations is due to the Joint Technology Committee and Governor’s office by Feb. 1.
