How AI Intersects with Politics, Jobs, and How We Get Information | Tufts Now


My favorite example is employment in bank branches. When the ATM was first invented, there were widespread predictions that banks would need fewer employees, because they wouldn’t be handing out cash. Today, there are more people working in banks, because they’re selling mortgages and other types of products. 

The concern I have is that if we see much more rapid progress in AI than prior trends, the labor market impact will be harder to digest. But that could be a good scenario as well as a bad one.

If we reach an age of rapid productivity growth, that will create much more wealth. And then the question is just, can it be allocated in a way that addresses the labor market impact that’s created? But right now I don’t think we see evidence that we’re having this extraordinary acceleration in productivity growth. It’s certainly not present in economic data. 

For now, I think the most reasonable assumption to make is that technology will continue to improve. We’ve reached a new phase of computing technology, which is AI. It’s going to be one of the key drivers of productivity improvements—but not a revolutionary one, or at least not any more revolutionary than smartphones or PCs or mainframe computers were in their day.

There’s also the deeper concern that in a world with pervasive AI, there could be apocalyptic possibilities. As a historian, what do you think about that? 

I think it’s impossible to fully reject or disprove apocalyptic scenarios. But I’m struck by our ability over time to develop institutions to manage risks in other technologies. The fact that we’ve managed to live in the world for 70 years without nuclear weapons use, even though we have nuclear weapons, I think is a positive sign. 

I’m much more worried that we slow down AI progress by wrongly fixating on the worst-case scenario. I think that there’s so much concern in the AI community about safety—and safety is important—that we aren’t looking enough at quality, which is even more important. 

We should be focusing just as much on ways to make sure we’re using AI in ways that are going to improve outcomes in health care and the provision of consumer and government services, for example. I worry much more about our slow adoption than I worry about AI displacing or replacing or causing humans to go extinct.

With the Taiwan Semiconductor Manufacturing Company making 99% of the advanced chips that are used for AI, how does the fact that the Chinese Communist Party views Taiwan as a renegade province and threatens its current independence affect AI going forward? 

There’s a really direct relationship between peace in East Asia and progress in AI, because of the way supply chains are structured today—tech companies couldn’t get the chips they need without access to Taiwan. It is at the absolute epicenter of AI supply chains. We often think of AI models as being disembodied, something that exists on the internet. But in fact, the physical products that make AI possible depend on supply chains that, to a very large extent, trace back to a single country in East Asia: Taiwan.

This is a key challenge to the future of the AI industry. It’s already the case that over the past couple of years, leading AI labs have struggled to get access to enough of the GPU chips and the high bandwidth memory needed to build AI servers and train and deploy AI systems. 

How about shifting the industry to the United States? The 2022 CHIPS and Science Act was meant to bring microchip manufacturing to the United States. How successful has it been so far, and what are its prospects for success? 

The goals of the CHIPS Act were to boost investment in chip-making facilities in the United States, and to encourage cutting-edge R&D in chip-making technologies to guarantee the U.S. technological lead in as many segments of the chip supply chain as possible.

We have seen a dramatic increase in investment in chip-making facilities in the U.S. over the past couple of years, relative to prior trends. That’s directly due to the subsidies and tax credits that Congress offered via the CHIPS Act.

But there are some challenges in terms of ensuring that these dollars invested are resulting in the types of output that you’d optimally want to guarantee your economic security. One of the challenges is cost. It’s some 30% more expensive to make chips in the U.S. than in Taiwan. So even if you’re investing more, you’re getting fewer chips per dollar of investment than you would if you’d invested a dollar in Taiwan.

Plus, there are many different types of chips. And the most cutting-edge, most advanced chips are still made only in Taiwan. Companies like Apple and Nvidia are sourcing some chips from the U.S., but they’re still sourcing the most important chips from Taiwan.

In terms of cutting-edge R&D, I think we’re still in the early stages of the CHIPS Act beginning to set up the pipelines of talent from academia to industry. But I’m relatively optimistic that these institutions will have a major impact in solving the key challenge in the chip industry, which is moving ideas from the lab and universities towards the fabrication facilities, where they’re brought into production.