As artificial intelligence (AI) reshapes society, ethical questions are increasingly raised about its moral compass, biases, and societal impact. AI innovation is not just about products; it is also about the concept itself, and about how AI aligns with human rights, sustainability, and democratic principles.
In this conversation with Euractiv, Paula Gürtler, Associate Researcher in the Global Governance, Regulation, Innovation and Digital Economy (GRID) unit at the Centre for European Policy Studies (CEPS), explores AI’s inherited biases, market concentration, and ethical dilemmas, highlighting the need for transparency, accountability, and regulation to ensure AI serves humanity responsibly.
XZ: Should AI systems have a moral compass, and if so, who gets to decide what ethical framework they follow?
PG: The question of a moral compass in AI systems is a tricky one. When I hear ‘moral compass’, I think of an agent who acts according to certain norms and moral principles.
However, I also assume that this agent is able to make well-considered judgements about the appropriateness of actions in a specific context. AI systems are currently unable to make such complex judgements.
But that does not mean that AI systems are morally agnostic. Instead, their “compass” for decision-making derives from historical decisions represented in the training dataset. These historical data serve as a moral compass insofar as they reflect certain norms and moral attitudes of the past.
Of course, this historical compass causes some issues. For one, historical data are not always aligned with our values today: we have, fortunately, made considerable progress towards minority inclusion and gender equality, progress that older data rarely reflect.
For example, AI developers often implement different guardrails and protocols to correct historical biases. This is the focus of the field of Explainable AI (XAI). Ideally, such guardrails should be established collectively through deliberative processes.
Another instance where a moral compass is clearly lacking can be seen in chatbots that provide poor mental health advice to users. In these cases, developers have failed to implement the correct guardrails. Regulations such as the AI Act can help establish governance frameworks that prevent harm.
XZ: How can we prevent AI from amplifying societal inequalities while ensuring it remains innovative and effective?
PG: AI amplifies social inequalities in at least two ways: through biases and through market concentration and supply chain inequalities. Regarding biases, I mentioned the field of XAI and AI fairness, which address these concerns.
However, philosopher Shannon Vallor points out in her 2024 book The AI Mirror: “The fundamentally correct explanation always offered for unfair machine learning bias is that the trained model is simply mirroring the unfair biases we already have.” This means that if we do not tackle societal inequalities, AI will always mirror these back to us. The current dominant practices of AI exacerbate these inequalities.
The issue of market concentration and supply chains highlights global inequality. We must remember that AI is not a disembodied, god-like power; it has a material reality of rare earths, microchips, data centres, energy, water consumption, and human labour. This material infrastructure enmeshes AI in violent conflicts over cobalt mining and in droughts in the US.
Globally, we see inequalities in AI production (click workers in Kenya versus US tech giants), in its deployment by big corporations, governments and military forces, and in the subjugation it enables: vulnerable migrants at national borders, political dissidents in China and Russia, Ukrainians and Palestinians on the battlefield, and citizens online.
These inequalities cannot be solved solely through XAI or fairness metrics. Instead, we need to deliberate on when and where it is appropriate to develop and deploy AI at all.
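To make concrete what such a fairness metric measures, and how narrow it is, here is a minimal, purely illustrative Python sketch of one common check, demographic parity. The loan-approval predictions, group labels, and function name are invented for the example and do not come from the interview.

```python
# Illustrative only: a toy check of one common fairness metric
# (demographic parity difference) on hypothetical model outputs.
# The data below are invented for the example.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates between groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 (approve) or 0 (reject)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval predictions for two demographic groups.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"approval rates by group: {rates}")
print(f"demographic parity gap: {gap:.2f}")  # a large gap flags unequal treatment
```

A check like this can flag unequal outcomes, but, as Gürtler argues, it says nothing about whether the system should have been built or deployed in the first place.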
In my opinion, innovation and competitiveness in AI, or any other sector, should be pursued within the bounds of human rights and sustainability. To ensure AI does not amplify inequalities, we must scrutinise the system that produces it and make that more sustainable, just, and equitable.
XZ: As AI increasingly makes decisions affecting human lives, how do we ensure transparency and accountability in its recommendations?
PG: Transparency is an essential pathway towards accountability, but it is not sufficient on its own. OpenAI, for example, could hand me the entire code underlying GPT-4, and I would still be unable to make sense of it.
This is why explainability and interpretability are emphasised in AI research. We need a specific kind of transparency, one that provides data on how an AI system functions so that ordinary users can interpret why and how AI models behave the way they do.
Interpretable and explainable transparency is crucial for accountability because it enables the average user to recognise when an AI system has made an incorrect or discriminatory decision.
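To illustrate the difference between raw transparency and interpretable transparency, here is a small, hypothetical Python sketch: a toy scoring model whose per-feature contributions a non-expert could read and contest. The weights, applicant data, and decision rule are invented for the example.

```python
# Illustrative only: a toy "explanation" of a single automated decision,
# showing the kind of output interpretable transparency aims for.
# The model weights and applicant data are invented for the example.

weights = {"income": 0.4, "existing_debt": -0.6, "years_employed": 0.3}
bias = -0.1

applicant = {"income": 0.8, "existing_debt": 0.9, "years_employed": 0.2}

# Contribution of each input to the final score.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = bias + sum(contributions.values())
decision = "approve" if score > 0 else "reject"

print(f"decision: {decision} (score {score:.2f})")
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name:>15}: {value:+.2f}")
# A user can see that 'existing_debt' drove the rejection and can contest
# the decision if that input is wrong or the weighting is discriminatory.
```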
When discussing AI accountability, we must remember that we are ultimately trying to hold humans and corporations accountable – the ones responsible for designing AI systems and any resulting harm. This means ensuring that existing liability frameworks for defective products are adapted for the AI age.
XZ: How can we balance the economic benefits of AI-driven automation with the ethical responsibility to protect human jobs and livelihoods?
PG: Among researchers, there is reasonable doubt about whether automation will have the radical impact on employment that many fear.
A cautionary tale in that regard is the economist John Maynard Keynes, who in 1930 wrote ‘Economic Possibilities for our Grandchildren’. He was deeply concerned that rapid technological progress would lead to mass unemployment.
Nearly a century later, we have possibly even more economic opportunities because new job profiles have emerged that Keynes could not have imagined.
Some argue that a similar phenomenon may occur with AI.
The mass job losses we fear may never materialise. However, efficiency gains are expected in most sectors, and certain tasks will be automated.
It’s an interesting question whether we have an ethical responsibility to protect jobs. I would argue that we have a moral obligation to ensure people can live healthy and fulfilling lives, which does not necessarily equate to preserving employment.
In response to the risks of AI-related job losses, some propose implementing a Universal Basic Income to safeguard livelihoods and provide individuals with the freedom to pursue meaningful lives. This could be one way to balance automation with ethical responsibility.
XZ: In an era of AI-driven journalism, who should be held responsible when an AI system produces misleading or harmful content?
PG: Most journalism ethics policies and media codes of conduct assign full editorial responsibility to the human editor who publishes AI-generated or AI-enhanced content. This aligns with existing liability regimes.
I believe this allocation of responsibility should remain because journalism is a vital pillar of democracy. Since democracy is governance by the people, for the people, we should not allow one of its fundamental pillars to be overtaken by machines.
However, individual journalists and editors cannot bear sole responsibility for AI-generated content. We need a comprehensive accountability system. An essential aspect of such a system would be ensuring that AI models used in journalism are reliable and transparent in an interpretable and explainable way.
AI developers must be responsible for the technical robustness and legality of their systems. A well-functioning legal framework is crucial, and regulators must oversee its enforcement.
XZ: How can news organisations balance the public’s right to information with ethical concerns about privacy as AI enhances data collection and analysis?
PG: The pressure on journalists to be timely and efficient is ever increasing. With the rise of AI-generated content and misinformation, spread by both unwitting social media users and malicious actors, the sheer volume of information that journalists must sift through is overwhelming. AI tools, such as those used for synthetic content detection, help journalists manage this workload.
However, this raises ethical concerns.
For example, well-documented issues exist in image recognition databases such as ImageNet, as examined by Kate Crawford in her book Atlas of AI. This analysis suggests that many of these systems contain deeply problematic (racist, sexist, and stigmatising) categorisations. Journalists, bound by ethical codes, have a duty to protect vulnerable groups.
Using large language models (LLMs) for news generation presents further ethical challenges. For instance, private corporations that developed state-of-the-art models, OpenAI in particular, used copyrighted material to train their systems.
Researchers have also found that ChatGPT’s training data include personal information, such as email communications, as well as copyrighted content from newspapers like The New York Times.
These concerns underscore the importance of journalists carefully selecting AI tools that align with professional ethical standards. Transparency and accountability mechanisms are essential, making initiatives like the EU’s AI-CODE project valuable in developing trustworthy AI tools.
[Edited by Brian Maguire | Euractiv’s Advocacy Lab]