Pittsburgh’s AI future: CMU experts warn of risks, discuss jobs and ethics

Standing at the podium of Carnegie Mellon University’s Simmons Auditorium, Zico Kolter asks an audience of AI researchers, policymakers and students to imagine an AI system that could successfully complete a colleague’s workload.

“When do you think this is going to happen?” Kolter asks. “Could I get a show of hands by people who think this will happen by 2030?”

About a third of the audience raises their hands.

“What about 2040?”

Approximately another third raises their hands.

“What about not in our lifetime?”

A couple of hands go up — not nearly enough to account for the rest of the audience.

“If this is true — if you’re going to have systems that are functionally identical to your colleagues, but let’s be honest, any colleague … — that opens up a mind-boggling array of possibilities in terms of the risks,” Kolter says.

“And, by the way, if you talk to people at OpenAI, they would all have raised their hands [for] by 2030. Maybe in the next two years. Maybe in the next year.”

Kolter is a professor of computer science and director of the machine learning department at CMU. Last November, Kolter was appointed the chair of OpenAI’s Safety and Security Committee. OpenAI is the maker of ChatGPT, text-to-video AI generator Sora and several other AI products. 

On March 10 and 11, he was one of 15 experts who spoke on the ethics, governance and potential “catastrophic risks” of AI at the K&L Gates Initiative in Ethics and Computational Technologies.

Computer science professor Zico Kolter leads a lecture on Convolutional Network Implementation at the Rashid Auditorium in Gates-Hillman Center at Carnegie Mellon University on Oct. 22, 2024. Photo by Alexis Wary.

Despite the uncertainty, Kolter believes AI can do a lot of good for humanity.

“Imagine an effectively unlimited number of scientists working on the hardest problems we have in a very open-ended, adaptive manner. That’s amazing,” Kolter said in an interview after a panel discussion on different AI governance structures for private entities. 

But his poll of audience members reflected a theme that developed over the day’s seven hours of sessions: However imminently people expect AI to dramatically alter the human experience, its associated costs aren’t well understood.

“The point that often strikes me is that in my little poll, the vast majority of the audience said that this is going to happen within 15 years, which is nothing from the standpoint of, I mean, the iPhone was around for longer than [AI] — this is nothing from a technological, let alone, humanity perspective,” Kolter says. “If you believe that, we’re not acting like it.”

AI is the new steel

Pittsburgh has been an AI hub since long before CMU and K&L Gates hosted their conference, or local private and public leaders formed an AI Strike Team earlier this year, or even before ChatGPT sprang into public consciousness in late 2022. 

If you were to seek out a conversation on local AI in the late 2010s, odds are you’d run into Kenny Chen.

While his recent work has led him away from AI, the Las Vegas native landed in Pittsburgh in 2014 and, until he left in 2020 for a master’s program at Harvard University’s Belfer Center for Science and International Affairs, was involved in many AI initiatives: PGH.AI, the Partnership to Advance Responsible Technology and Community Forge.

His work with these organizations and at his tech startup incubator, Ascender, earned him a spot on NEXT’s “25 Essential Pittsburghers” list in 2019.

“That period from 2017 through 2020, … my biggest passion was seeing Pittsburgh fulfill some of that potential as not just a hub for AI and robotics innovation, but as a place where it could do so responsibly,” Chen says. 

Kenny Chen. Photo courtesy of Partnership to Advance Responsible Technology.

With postindustrial decline and subsequent technology-based growth so tightly woven into Pittsburgh’s story, Chen assumed the Steel City would be better prepared for once again losing jobs.

“I think there’s a lot of ways that it’ll be complementary relationships — augmenting human capabilities — but there’s no denying that, already, millions of jobs have been disrupted, and there’s many million more [disruptions to come].”

Mike Doyle, formerly the representative for Pennsylvania’s 18th Congressional District, is now a government affairs counselor at K&L Gates and a member of the aforementioned AI Strike Team. He attended the conference to moderate a panel on governmental policies on generative AI.

Doyle told NEXTpittsburgh that while some jobs might not exist in the future, tech-centric development will result in the creation of new jobs.

“One of the other things that’s really big here is workforce development programs to make sure we’re training people in our community colleges and other programs for these jobs that we see coming into Pittsburgh,” Doyle says, though he did not elaborate.

The AI Strike Team mentions the creation of jobs on its website — often near language that states its goal is to position “Pittsburgh as the epicenter of the global AI economy” — but does not provide specifics.

Four days ahead of the event, the Strike Team announced that Hellbender, an AI-driven perception systems company, had signed a lease for 40,000 square feet of office space on Bakery Square’s so-called “AI Avenue.” The expansion will add 100 new jobs within the next year “with plans to scale to over 300 employees in the near future,” according to a press release from Hellbender.

The Strike Team calls AI the new steel, but Doyle says Pittsburghers shouldn’t expect an economic downturn this time around.

“We’re a very diverse economy — we’re not just doing AI,” Doyle says. “We’re eds and meds, we’re autonomous vehicles.

“There’s never going to be one company that can send Pittsburgh into a recession, so it’s not accurate to compare it to the steel industry when it became global. Those days are over for us.”

Former U.S. Rep. Mike Doyle, currently a government affairs counselor at K&L Gates and a member of Pittsburgh’s AI Strike Team, addresses the AI conference’s attendees ahead of a panel titled, “Governmental Policies on Generative AI,” which he moderated on Monday, March 10. Photo by Roman Hladio.

Still, Kolter and his co-panelist Carol J. Smith — principal research scientist in human-machine interaction at the Carnegie Mellon University Software Engineering Institute — were not shy about saying that jobs as we know them will change.

Smith says this isn’t a new problem; AI signals another work shift, just as the laptop, cellphone or even automobile did. 

“I was a photojournalist in a previous life … I was trained on film in the dark room,” she says.

Then, Photoshop came out and all that work went digital.

“Through using it, I was like, ‘Actually, this is a lot better and I’m not exposed to chemicals.’”

“There will be jobs lost, but I do think there’ll also be a lot more jobs created because these systems require people keeping an eye on them,” Smith says. “Particularly if it’s a generative AI system or any type of nondeterministic [system], you’re not always sure what’s going to come out, so you need people making sure that it’s working as intended, that they’re managing it when it’s not — either reverting to another version or retraining. People need to be able to do that.”

Kolter says that the types of jobs that will exist in the future would be considered “not real work” in the same way that current jobs might seem “not real” to people from the Middle Ages.

“People will have a purpose, people will have aims, they’ll have goals,” he adds. “Ultimately, a more abundant and fulfilled society is the goal of all this.”

Energy concerns

Beyond AI’s economic and cultural challenges, it raises issues of energy consumption and sustainability. In the past year, reporting from the Yale School of the Environment, the Harvard Business Review and MIT News has warned about generative AI’s electricity demand and water consumption.

Even amid those concerns, Doyle says Pittsburgh is a prime spot for AI growth.

“We’re an energy center, too; we do nuclear, we’re sitting on the Saudi Arabia of natural gas,” he says.

“These things are very energy intensive. How do you power a data center? Well, Westinghouse makes the AP300 Small Modular Reactor — perfectly sized to power a data center. We’ve got the natural gas, we’ve got the rivers. We’ve got a lot of things in place to do this.”

Data centers are the physical infrastructure that provides the computing power, storage and other resources enabling AI training and operation.

At a different CMU-hosted event more than two weeks later, though, sustainability expert Eric Masanet called for more caution when considering the effects of AI’s energy use. Masanet is a professor and chair of sustainability science for emerging technologies at the University of California, Santa Barbara.

“When we zoom out to the global level, or even the national level, the power and energy use of data centers may look kind of small — 1 to 2%, 2 to 4%,” Masanet says. “It’s a very different view at the local level.”

Masanet spoke at a panel on AI’s energy requirements and impacts on climate goals during CMU’s annual Energy Week, hosted from Tuesday, March 25, through Thursday, March 27.

Eric Masanet. Photo courtesy of University of California, Santa Barbara.

Masanet pointed to “Data Center Alley” in Ashburn, Va., which is home to about 60 data centers. To generate enough power for them, new fossil fuel capacity is being built at the cost of local air quality.

“A lot of the commitments by large data center operators are for renewable and [Small Modular Reactors] — those are coming down the pike in five years, maybe 10 years,” Masanet says. “In the meantime, we’re adding more fossil fuel capacity, mostly natural gas, but more fossil fuel capacity in very local contexts where people are feeling the effects today.”

“We have to be careful to distinguish between the big picture and what that number may tell us about the overall sector and the local effects.”

Chen agrees with Doyle that Western Pennsylvania’s surplus of natural resources and potential renewable energy generation (even though renewable growth is relatively stagnant) position the region well to be an energy hub. Small Modular Reactors, like the AP300, are also cost-effective energy options without the meltdown risks associated with conventional nuclear plants.

“But it’s still going to be such an uphill battle with public sentiment and some of these timelines as well,” Chen says. “Even for building relatively small-scale ones, it’s in the ‘few-to-several years if not decades’ timelines for these, and AI is not going to wait that long.”

“People need to be very realistic about the timelines and costs and political risks.”

“You should be able to just turn it off”

Since ChatGPT’s release in late 2022, AI innovation has been nothing short of rapid, and with it have come ever-shifting researcher estimates of how soon we’ll reach a science-fiction future.

“People’s projections for super-intelligence, AGI [artificial general intelligence], have gone from 2047 … [to] by the end of the decade,” Chen says.

AGI is the type of artificial intelligence common in science-fiction stories — machines that are intellectually equivalent to humans. The generative AI and large language models we use today merely learn the “pattern” of human language and stitch together responses that fit the mold.

Concern about these “loss of control” scenarios is just one of the “catastrophic risks” Kolter and OpenAI’s Safety and Security Committee discuss.

From left: panel moderator Lorrie Cranor, professor of engineering and public policy at CMU’s School of Computer Science; Trevor Hughes, president and CEO of the International Association of Privacy Professionals; Zico Kolter, professor and director of CMU’s Machine Learning Department; and Carol J. Smith, principal research scientist in human-machine interaction at the CMU Software Engineering Institute. The three panelists discussed AI governance at the K&L Gates Initiative in Ethics and Computational Technologies on Monday, March 10. Photo by Roman Hladio.

To Smith, the idea that a sci-fi future is quickly approaching is frustrating.

“These tools aren’t inherently good or bad, and they have the power to really help us do some amazing things — I hope they cure cancer,” Smith says.

“They can be an enabler of great good, but they can also raise the inherent risks that are already in the situation and, particularly in the work I do, we don’t want to raise that risk very much.”

Her obvious solution? “It should have an off button. You should be able to just turn it off.”

Smith says we should have confidence and trust in any system we use. From a human-computer interaction perspective, trust comes from understanding that AI systems are tools and knowing when AI is the right tool for the job and when it is not.

Kolter made clear that he is not advocating for the constant consideration of catastrophic risks associated with AI, “but I do think that every community — including current governments — needs to be thinking very actively about these risks … that are buried just below the surface, which we don’t think about very often, but which will become very, very pressing very, very soon.”

“It’s more a social question of, ‘What do we want humans to do? What do we want machines to do?’” Smith says. “Not trying to mash it all together, but rather focusing on ‘humans are good at some things, machines are good at other things’ and figuring out what that ecosystem is like when we have more powerful systems. Which is great, but also not forgetting that they’re for us. We don’t serve the machines.”

A “circuitscape” mural titled “URBAN,” by Sandy Kessler Kaminski and fourth and fifth grade students, on display in Bakery Square. Photo by Roman Hladio.

To Chen, part of why conversations about AI are so difficult is that they exist on a spectrum. At one end, he says, are grandmothers interacting with automated chats or getting scammed online. On the other are questions like, “Will AGI kill everybody by 2040?” 

The dynamic range of conversations to be had makes it hard for any one person to recognize how they fit into the equation, but Pittsburgh is primed to host those conversations, since its population represents most of that spectrum, says Chen.

“There’s so much going on in the region, and many of those impacts and implications are close and personal enough for people where I can practically guarantee that any person living in the city nowadays — unless they literally live under a rock — knows people whose lives or work have all been changing really, really drastically,” Chen says. “For most people, that’s the place to start.”

But adding to the topic’s difficulty is the juncture at which society finds itself, with financial, national security, humanitarian and ideological issues all tied into AI. Ten years after he first came to the Steel City, Chen had hoped Pittsburgh would be better prepared to counter these problems.

“Maybe the core institutions didn’t jump on it early when they had the chance, but the opportunity still exists for Pittsburgh to emerge in that leadership capability,” he says. “But what would that look like, and to whose interests would that be?”