Key takeaways
- The deployment of artificial intelligence in the workplace has grown rapidly in the United States.
- Labor unions have been at the forefront of efforts to encourage more productive and sustainable workplace AI and digitalization strategies.
- Case studies from Germany and the United States show the importance of public policy for supporting worker voice in new technology adoption and deployment.
Policy options that could be considered at the state and local levels in the United States to boost worker power and voice in the era of AI innovation include:
- Legislation regulating employer AI usage: laws that prohibit excessive surveillance, data collection, and automated decision-making by employers; that regulate employers’ use of electronic monitoring and automated decision systems; and that require employers to conduct impact assessments of AI usage
- Legislation implementing human oversight of AI tools: laws that put guardrails on replacing humans with digital technologies by requiring human oversight in sensitive sectors
- Protections for workers exposed to AI: requiring retraining and severance for displaced workers, state and local procurement of AI tools that focus on their equity and impacts on workers, and agency actions to take on deceptive and unfair practices by employers
Overview
Employers are rapidly deploying digital tools that apply algorithms and artificial intelligence to restructure operations and manage workers.
Economists have observed that this corporate deployment of AI embeds existing power relations in the workplace and is biased toward automation applications focused on short-term cost savings. These trends risk increasing unemployment and inequality, as well as stagnating productivity in the United States.
Studies also have identified risks to workers, customers, and citizens from the widespread application of algorithmic tools to automate surveillance and management decision-making, based on often biased and faulty models.
But AI does not have to be deployed as such and indeed can have positive impacts on workers and the workplace. Research shows that the quality of AI models improves when the people who use them participate in developing these models, selecting and maintaining data, and interpreting and verifying results. Management research likewise finds that workplaces with high trust, worker autonomy, and investment in workers’ skills experience better performance outcomes and innovations associated with AI investments.
Public policy can help support this alternative approach to AI adoption and deployment that benefits firms, workers, customers, and our broader society. Legislated minimum standards and collective bargaining institutions that support worker voice are needed to rebalance power in the workplace and to place productive constraints on employers that encourage investments in expanded worker discretion and skills. These institutions have long supported quality-focused competitive strategies during technology-related restructurings, and they are just as crucial today.
In this policy brief, we argue that policies strengthening worker rights and power are crucial to encouraging a high-road approach to AI that complements rather than replaces workforce skills. A case study of Deutsche Telekom AG in Germany shows the potential of collective bargaining to support this high-road approach in a context where strong worker rights support more balanced workplace power. We then turn to the United States to examine how unions are responding to similar challenges. We conclude with recommendations that follow from our analysis.
Let’s turn first to Germany.
Worker voice and mutual gains in Germany: The case of Deutsche Telekom
Institutional support for worker voice
Germany has among the strongest worker rights in the areas of data protection and participation in management decision-making. Two areas in particular stand out: protections against unauthorized worker data collection and strong collective bargaining rights.
German law has long prohibited the unauthorized collection, processing, and storage of workers’ personal data. Indeed, Germany has had a federal data protection act since 1978, which has been updated over the years, including to conform with EU data protection regulations.
Worker representatives in Germany also have very strong legal bargaining rights. In Germany, labor unions typically negotiate collective agreements at the company or sector level, including over areas such as worker pay, working time, and job security. Meanwhile, works councils made up of elected employees within companies negotiate separate “works agreements” at the company and workplace levels over a range of management practices, including scheduling, variable pay, health and safety, and performance monitoring.
Strong co-determination rights, which give employees a voice in how their company is run, give works councils different possibilities to negotiate binding rules relevant to the usage of AI and algorithms. Many works agreements, for example, ban the collection of individual performance data, which can limit the use of speech analytics tools or workforce management software.
In addition, a revision to Germany’s Works Constitution Act in 2021 extended works councils’ consultation rights on new technologies to include AI-based tools. It also extended co-determination rights over selection guidelines for hiring, transfers, and terminations to include situations in which AI is used, and it requires companies to fund an expert (engaged by the works council) to consult on proposed changes or policies involving AI.
Worker voice at Deutsche Telekom
The case of Deutsche Telekom AG shows how this different institutional framework of rights supports strong worker voice in AI-based technology adoption and deployment. Deutsche Telekom’s works councils negotiate works agreements, and the labor union Vereinte Dienstleistungsgewerkschaft, or Ver.di, negotiates separate but coordinated collective agreements.
Works agreements restrict supervisors’ access to individual performance data and require that this information is used to develop, rather than to discipline, workers. In addition, an agreement from 2010 states that automation should first be used to reduce subcontracting. This provision gave workers a baseline of job security to encourage joint labor-management efforts to improve efficiency.
In the mid-2010s, Deutsche Telekom’s works council organized an 8-month project to analyze the workforce impact of new digital, algorithmic, and AI-enabled tools. Based on that project’s findings, a series of works agreements established the following rules and processes:
- Management must consult with the works council before purchasing new technology. After evaluating the risk to workers, the two parties decide jointly whether to prohibit or negotiate over the tool.
- Management draws up a “digi-road map” laying out planned digitalization measures. Management then meets with the works council to discuss, and eventually negotiate over, the impacts on employment, service quality, and work content.
- A labor-management Workforce Analytics Expert Group reviews how employee data and AI-enabled analytics tools are used. It holds regular evaluation workshops and provides training for employees to use workforce analytics responsibly.
- A labor-management AI Ethics Committee reviews AI-based tools and systems for compliance with agreed-upon ethics provisions.
Additionally, in 2024, a Working Time Agreement was negotiated to deploy an intelligent predictive shift-planning tool. With this tool, workers are able to choose their own shifts, and the AI-enabled system then creates an optimal “duty roster.”
Impacts on job quality and service quality
Together, these collective agreements placed productive constraints on management, supporting a high-road approach to AI adoption at Deutsche Telekom in two main ways: using AI tools that enhance skills rather than discipline workers and giving workers a choice in how they use AI tools.
First, job security and limits on individual monitoring encouraged management to invest in AI tools that enhanced skills and service quality rather than increasing worker discipline and control. Workers are protected from invasive monitoring and privacy abuses, and workforce analytics and coaching tools are only used where they comply with clear rules; the fairness and use of these tools are evaluated by joint worker-management committees.
This process reduced the risk to management that works councils would oppose expensive IT systems after they had already been purchased. It also improved worker trust in how managers were using controversial tools, such as speech analytics software, which uses AI and natural language processing to analyze recorded customer conversations and identify recurring issues to more quickly address network or service problems.
Second, these agreements gave workers more choice and control over how they use AI-based tools, encouraging creative applications that improve productivity, scheduling flexibility, and service quality. Call-center workers, for example, could choose whether and how to use Deutsche Telekom’s agent-assistant tool to look up information relevant to customer calls. A majority did choose to use it and also were involved in correcting its mistakes to improve the information it provided. Similarly, the intelligent predictive shift-planning tool mentioned above was broadly welcomed by workers, who were able to more closely tailor their schedules to balance work with their families and lives outside of work—all while meeting management’s goals of more flexible staffing.
Managers reported that the benefits of this deliberative approach to AI adoption could be most clearly seen in the company’s increasing service quality scores and so-called first-call resolution (meaning fewer customers had to call back due to unresolved problems). Meanwhile, worker representatives secured in-house jobs at good pay, with employment security and worker control over how they did their jobs, drawing on their experience and skills. This allowed employees to focus on providing good-quality customer and technician service.
The high rates of customer satisfaction and first-call resolution also added significant value to job quality, as stress and burnout in service jobs very often result from frequent interactions with dissatisfied or abusive customers.
How workers are using collective voice in the United States
The United States has weaker data protection and labor laws than Germany. While a growing number of U.S. states have passed data privacy laws, most of these are targeted to consumer data, and some even explicitly exclude workers from protections.
Yet there also are many examples where unions are using or building institutions that support worker voice to address similar challenges as those seen in Germany, including to prohibit certain uses of AI, to improve job security and reduce workplace monitoring, and to strengthen worker voice in AI decision-making.
Prohibiting certain uses of AI
The Deutsche Telekom example shows the benefits of clear, bright-line rules prohibiting certain uses of AI.
In the United States, recent hard-fought collective agreements negotiated by the Writers Guild of America, or WGA, and by the actors’ union SAG-AFTRA similarly place strict limits on how generative AI is used in these creative jobs. The WGA, for example, won provisions in its 2023 contract restricting the use of AI-generated scripts, requiring disclosure of AI-generated material, and giving writers control over whether and how they use AI software. SAG-AFTRA’s agreement, meanwhile, restricts the use of digital replicas of actors, requiring consent for creating and using digital replicas and regulating compensation for the use of these replicas.
Likewise, members of The NewsGuild-Communications Workers of America, or CWA, have negotiated contracts that prevent any use of generative AI except by working journalists themselves, prohibit job cuts driven by AI, and make clear that only journalists do the work of journalism.
Improving job security and reducing work intensification and monitoring from AI-based tools
Similar to the unions and works councils at Deutsche Telekom, the CWA has long responded to threats from automation at U.S. telecom employers, such as AT&T Inc. and Verizon Communications Inc., with agreements improving job security and retraining. Agreements also provide protections against disciplining employees if they do not meet certain time-based measures and specify that monitoring technologies should be used primarily for training purposes.
Past research finds that these kinds of negotiated supports for worker skills and voice create benefits for sales, service quality, and employee retention. These agreements continue to protect workers from unfair discipline as new AI monitoring technology evaluates tone of voice and adherence to scripts, and new automation tools speed up work.
Other unions have organized similar efforts in other service industries to adapt past agreements to new threats from AI- and algorithm-based tools. Unionized workers’ 2023 contract with United Parcel Service Inc. includes language prohibiting the implementation of new technologies that would eliminate significant parts of the workforce until 2028, including drones, driverless vehicles, platooning of semi-trucks, and other AI.
Similarly, UNITE HERE Local 226 in Las Vegas negotiated new contracts in 2023 for 40,000 hotel and casino workers that strengthened existing technology protections, including advance notice, training, severance pay, privacy rights, and expanded bargaining rights. In one case, housekeepers won back control over the sequence of rooms they clean through analyzing data records from the software applications that workers were required to use for cleaning rooms and filling orders.
Strengthening worker voice in AI decisions
At Deutsche Telekom, the works councils strengthened their own capacity through studying AI’s uses and employment impacts and then establishing clear principles and joint committees to steer those uses and impacts. Similar initiatives can be seen in the United States.
U.S. National Nurses United, or NNU, found that the use of generative AI for shift scheduling and remote patient monitoring was widely perceived by their members as harming patient care by undercutting nurses’ skills and nuanced understanding of patients’ needs. The union then developed an AI bill of rights for nurses and patients, drawing on these findings and experiences. These principles, in turn, have been deployed by NNU members in many hospitals through established technology committees.
The CWA also has organized Technology Change committees since the 1950s, and organizational support for these committees is included in many of its telecom collective agreements. A priority of worker representatives on these committees is to focus on technology applications that improve the quality of service, taking into account a broader range of stakeholders.
The CWA also has studied the use and worker impacts of AI. The union then used this research both to educate local representatives and to support collective bargaining. The union published a set of AI principles in 2023 based on the deliberations of a committee of members from the telecom, media, and technology sectors.
Similar initiatives can be seen across other unions, including:
- The WGA West’s Board, covering film, TV, radio, and new media writers, has appointed an AI advisory committee that is documenting writers’ experiences with and developments in AI.
- IATSE, a union for theatrical stage workers, published a set of AI principles in July 2023 that includes a demand for “transparency from employers regarding their use of AI.” In 2024, IATSE members ratified a new contract that establishes ground rules for the use of AI, a committee to facilitate AI skills training, and requirements that AI use cannot be outsourced to nonunion labor, among other provisions.
- The United Auto Workers-Ford’s Letters of Understanding, which outline terms not covered in their union contract, include provisions under which a joint committee “will research AI technology for worker safety and how it applies to facility operations.” The Letters of Understanding also establish that management will provide advance notice on new technology, with investment in training programs.
These examples show that U.S. workers and their unions have been creative in adapting existing agreements and developing new joint initiatives. Yet they also are limited in their ability to extend and deepen worker voice due to lower bargaining coverage, weaker bargaining rights, and an overall weaker framework of baseline data protection rules. Agreements protecting workers (and customers) from the worst abuses of algorithmic management and de-skilling cover only a minority of U.S. workers, even in the sectors where these agreements are present.
Policy recommendations
How can the lessons learned from these case studies from Germany and the United States be extended more broadly across the U.S. workforce? Most importantly, to support worker voice in AI decisions, we need policies and strategies that strengthen worker power. The German experience suggests that a longer-term goal should be labor law reforms that remove the steep obstacles to organizing unions in U.S. workplaces and strengthen collective bargaining rights. U.S. firms with union-represented employees can also pursue a high-road path by bargaining constructively over digital technology implementation.
Despite weak institutional supports, many U.S. unions have won contract provisions related to technology and, through decades of case law, established the right to negotiate over technology changes that impact working conditions and freedom of association. U.S. firms that aim to use digital technology to complement their workforces should look to the German model of co-determination, which harnesses the collective wisdom of front-line workers alongside management to guide changes to work processes that maximize both productivity and job quality.
Worker power also can be buttressed against the most harmful and invasive uses of digital technologies with baseline legal protections at local, state, and federal levels. The EU’s recent AI Act is a potential model in its explicit prohibition on using AI to measure or emulate human emotions in workplaces and educational settings—a red line that would rein in abuses in call centers and other service-sector jobs. To protect workers from exploitative use of digital metrics and monitoring, policymakers can prohibit excessive surveillance, data collection, and automated decision-making.
Some examples include:
- State bills, such as California’s 2022 Workplace Technology Accountability Act, as well as New York State’s 2024 Bossware and Oppressive Technologies Act, lay out a broad framework for regulating employers’ use of electronic monitoring and automated decision systems, and also require employers to conduct impact assessments.
- Warehouse workers have called for limits on algorithmic metrics, such as Amazon’s “time off task,” that undermine worker health and safety, with legislation passed in California and proposed in several other states.
- So-called just cause bills in Illinois and New York City limit the use of electronic monitoring data in dismissing workers.
- Bills that address sector-specific risks by requiring worker oversight on AI decisions can provide baseline protections requiring a human-in-command approach, for example, in healthcare and publishing.
To protect against employers rushing to cut costs by substituting AI tools for human workers, unions also are pursuing contractual protections for job security. Many unions are exploring legislative strategies, too, including requiring notice, retraining, and severance for workers who experience technology-driven job loss, as proposed in New Jersey, and prohibiting replacement of workers with digital technology in specific industries that would harm the public, such as in community colleges, call centers, and healthcare.
At the federal level, the Biden administration asserted that agencies have some existing authority to take on deceptive and unfair employer practices around AI tools, though the Trump administration has since rescinded these actions.
Government also has significant influence on technology development through its procurement of goods and services. In March 2024, the White House Office of Management and Budget issued a memo on AI governance that encouraged agencies to consult with federal employee unions, among other impacted groups, on the design, development, and use of AI. Though the Trump administration has since rescinded this guidance as well, states and localities have begun to adopt their own guidance on procurement of AI tools that foregrounds equity and impacts on workers.
Conclusion
Workers and their unions have been at the forefront of efforts to regulate the use of AI and other digital technologies in a variety of U.S. workplaces. Yet collective agreements cover only a minority of workers. Most employers lack much-needed productive constraints on low-road strategies relying on automation, de-skilling, and intensified surveillance.
Frontline workers are well-positioned to steer AI investments toward the high-road alternative, but only if the broader public and policymakers have their backs. Policies that strengthen worker rights and worker power are critical for securing broadly shared prosperity in the era of AI innovation.
About the authors
Virginia Doellgast is the Anne Evans Estabrook Professor of Employment Relations and Dispute Resolution in the ILR School at Cornell University. Her research focuses on the comparative political economy of labor markets and labor unions, inequality, precarity, and democracy at work. She is currently studying the impact of digitalization and AI on job quality in the information and communication technology services industry, based on comparative research in North America and Europe. She is author of Exit, Voice, and Solidarity (Oxford University Press, 2022) and Disintegrating Democracy at Work (Cornell University Press, 2012); and co-editor of International and Comparative Employment Relations (Sage, 2021) and Reconstructing Solidarity (Oxford University Press, 2018).
Nell Geiser is director of research for the Communications Workers of America, a labor union representing workers in telecommunications, media, technology, public service, airlines, manufacturing, and other sectors. Geiser and the research department support CWA initiatives across collective bargaining, policy, and organizing. She has worked as a researcher for labor unions since 2006. Geiser has a B.A. from Columbia University and in 2014, she fulfilled the requirements to become a Chartered Financial Analyst.