Editors' note: This piece is from Nonprofit Quarterly Magazine's winter 2024 issue, "Health Justice in the Digital Age: Can We Harness AI for Good?"
A symbiotic relationship between love and boundaries is essential for creating healthy, sustainable connections in all types of relationships: romantic, familial, or platonic. While love is often associated with openness, generosity, and care, boundaries are crucial for love to remain respectful, mutual, and emotionally safe.1
As we enter a new era shaped by artificial intelligence, we face both unique opportunities and profound risks. AI is a powerful tool for advancing human potential, but only if it is designed with intentional boundaries that protect and uplift the most vulnerable among us.
At its core, AI must be a force that serves humanity, not the other way around. It must enhance our collective capacity to build more just and equitable societies. AI tools are not neutral; they carry the biases and assumptions of the systems and individuals who create them. When we build AI, we must ask: Who benefits from this technology? Who is harmed? This manifesto advocates for the creation of parameters rooted in love, equity, and justice to guide AI's continued development and deployment. It calls for AI that is designed explicitly to dismantle systemic inequities and address the social ills caused by historical and present-day injustices.
In order to do so…
We Must Build Parameters to Protect Our Most Vulnerable Populations and Precious Resources
The relationship between AI and its environmental impact is both intricate and far-reaching. Ironically, the technology that promises a more efficient future is also placing great strain on the planet's resources. AI's ecological footprint extends through water-intensive demands, energy consumption, carbon emissions, and resource extraction, all of which pose serious risks to human health and fragile ecosystems.
The data centers that power AI systems consume vast amounts of water for cooling, exacerbating water insecurity in vulnerable regions. Large AI models require immense energy, contributing to air pollution and climate change; and mining for the rare earth metals used in AI hardware degrades soil, contaminates water sources, displaces local communities, and leaves behind hazardous electronic waste that pollutes both land and water.2
AI's insatiable energy consumption has negative consequences for human health, too, manifesting in respiratory illnesses due to increased air pollution,3 as well as (indirectly) in heat-related illnesses, due to AI's role in greenhouse gas emissions, which drive global warming and intensify the frequency and severity of extreme heat events.4 The invisible weight of data centers hangs heavy in the atmosphere, diminishing the quality of life for nearby communities through increased noise, strain on local resources, and environmental disruptions.5 In Granbury, TX, for example, residents living near a Bitcoin mining facility reported migraines, vertigo, hearing loss, heart palpitations, hypertension, panic attacks, and chest pain due to constant noise pollution, demonstrating the profound physical toll such data centers can impose on those in their proximity.6
These examples remind us that the unimpeded development of AI has tangible consequences, emphasizing the need for thoughtful parameters that prioritize human and environmental wellbeing.
We Must Build Parameters to Protect People from AI Creators and AI Creators from Themselves
In the rush to lead the global AI race, it can be tempting to prioritize innovation, speed, and profit without pausing to consider the profound ethical, societal, and human consequences. But unchecked ambition can leave those who create AI, and those impacted by it, vulnerable to the risks of a world increasingly shaped by unregulated technological advancements. To prevent injury, we must establish metaphorical boundaries of love through ethical guardrails that guide AI development with compassion, care, and foresight.
AI creators, driven by a desire to innovate and lead, may not always foresee the long-term repercussions of their work. These developers (engineers, data scientists, and tech leaders) can fall victim to the pressure to be first and fastest, pushed by profit motives and competitive market forces. Without thoughtful parameters in place, they risk creating systems that perpetuate harm, exacerbate inequality, and destabilize societal norms. In this sense, establishing policies and ethical frameworks acts as a boundary of love, not only safeguarding society at large but also protecting creators from the unintended consequences of their own innovations.
For those impacted by AI (communities, workers, everyday people), such policies serve as essential protective barriers. Without oversight, AI systems can deepen social divides, automate biases, and destabilize labor markets. AI policy must, therefore, act as a boundary that prioritizes the wellbeing of all people, so that technological progress is guided by empathy and justice. By embedding values of equity and fairness into AI systems, we ensure that the development of AI is an act of love, offering tools that elevate humanity rather than exploit it.
Through thoughtful, intentional AI policies, we can build a future where boundaries are not seen as barriers to progress but rather as defenses designed to support both creators and those affected by their creations. These boundaries of love provide the space for responsible innovation, protecting individuals from the unintended wounds of a rapidly advancing digital age.
We Must Protect the Economic and Financial Security of Workers
As AI is integrated into industries, workers face growing fears about job security.7 Automation threatens not only manual labor but also complex white-collar jobs.8 This concern is valid, as AI has already begun reshaping such sectors as manufacturing, healthcare, and legal services.9 Anxiety over potential job displacement affects worker morale, financial stability, and mental health.10
Protecting workers' economic security requires policies that guarantee they aren't left behind in the technological shift. This includes promoting upskilling programs, financial support during industry transitions, and strong safety nets like unemployment benefits and retraining opportunities.
AI must not become a tool of exploitation or a means of cutting costs at the expense of human dignity. Workers deserve to benefit from the productivity gains AI offers. By establishing protective policies that prioritize workers' rights, financial stability, and long-term career development, we can make certain that AI serves as a partner in human progress rather than a force that diminishes livelihoods.
We Must Prevent AI Technology from Encroaching upon the Quality of Life and Wellbeing of Black and Brown People
The use of AI in criminal justice practices, especially through facial recognition technology, poses a serious threat to the social determinants of health, particularly by infringing on social and civic engagement for communities of color. Facial recognition algorithms have been shown to inaccurately identify people with darker skin tones at significantly higher rates than their lighter-skinned counterparts.11 This technological bias leads to wrongful detentions, arrests, and surveillance, mirroring the overpolicing and excessive scrutiny historically imposed on Black and Brown communities. Such AI applications replicate problematic policing practices, triggering trauma linked to decades of discriminatory justice systems and reinforcing community distrust.12
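One way to make this disparity concrete is a disaggregated error audit: rather than reporting a single overall accuracy, compare error rates subgroup by subgroup, as the Gender Shades study did.11 The sketch below is a minimal, hypothetical illustration in Python; the match records and subgroup labels are invented for demonstration and are not drawn from any real system.

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, predicted_match, actual_match).
# In a real audit these would come from a labeled evaluation set.
records = [
    ("darker-skinned", True, False), ("darker-skinned", True, True),
    ("darker-skinned", True, False), ("darker-skinned", False, False),
    ("lighter-skinned", True, True), ("lighter-skinned", False, False),
    ("lighter-skinned", True, False), ("lighter-skinned", False, False),
]

# False match rate per subgroup: the share of true non-matches that the
# system wrongly flagged as matches (the error behind wrongful arrests).
errors, totals = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    if not actual:        # only true non-matches can be falsely matched
        totals[group] += 1
        if predicted:     # the system claimed a match that was not there
            errors[group] += 1

for group in sorted(totals):
    print(f"{group}: false match rate = {errors[group] / totals[group]:.0%}")
```

A single headline accuracy figure would hide exactly the gap this loop surfaces, which is why disaggregated reporting is a baseline requirement for any audit of these systems.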
Additionally, AI systems in criminal justice often operate in "black boxes," with opaque decision-making processes that lack transparency and accountability.13 This secrecy prevents public oversight, leaving communities vulnerable to unchecked biases that reinforce systemic inequities. AI tools trained on biased historical data can exacerbate discriminatory practices, and predictive policing algorithms, which target specific areas based on flawed datasets, drive over-surveillance and privacy violations, particularly in lower-income and minority neighborhoods.14
The deployment of these technologies without community consent disregards the voices of those most affected, stripping communities of autonomy and reinforcing a top-down approach to safety. Moreover, the reliance on AI-driven tools has contributed to the militarization of police forces, which further alienates communities and distances law enforcement from community-based approaches.15 Continuous surveillance also has a profound psychological impact, fostering an atmosphere of fear and hyper-vigilance that undermines mental health and wellbeing.16
Beyond criminal justice, AI systems reinforce inequities in such critical areas of life as housing and employment by relying on biased data proxies, such as eviction histories, criminal records, and ethnic names. These algorithms often disfavor marginalized communities, resulting in unfair denials of housing, loans, or jobs, which compromises economic stability, housing security, and broader social determinants of health. Instead of alleviating systemic discrimination, AI frequently amplifies it, obstructing opportunities for stability and wellbeing in communities already impacted by inequality.17
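The mechanics of proxy bias deserve a moment of attention: a model that never sees a protected attribute can still reconstruct it from a correlated feature, so a nominally "group-blind" rule reproduces the underlying disparity. The sketch below uses entirely invented numbers to show how an unevenly distributed eviction record can stand in for group membership.

```python
# Invented applicant records: (group, has_eviction_record). Historical
# discrimination shows up as an uneven distribution of eviction records.
applicants = [("A", True)] * 6 + [("A", False)] * 4 + \
             [("B", True)] * 2 + [("B", False)] * 8

def deny(has_eviction_record: bool) -> bool:
    # A "group-blind" rule: deny anyone with an eviction record.
    return has_eviction_record

for group in ("A", "B"):
    flags = [flag for g, flag in applicants if g == group]
    rate = sum(map(deny, flags)) / len(flags)
    print(f"group {group}: denial rate = {rate:.0%}")  # A: 60%, B: 20%
```

Nothing in the rule mentions group membership, yet the denial rates diverge threefold, which is why simply deleting a protected attribute from a dataset does not make the resulting decisions fair.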
Addressing these injustices in AI-driven criminal justice and related systems that influence quality of life is essential to advancing health equity and ensuring that technology fosters, rather than obstructs, opportunities for community wellbeing. The use of biased proxy data in AI must be carefully scrutinized and removed; transparent, community-led oversight and rigorous auditing of AI datasets are necessary; we need more representative datasets to offset historically biased ones and eliminate data proxies; and it is imperative that we ban harmful AI applications in policing and housing, so as to challenge systems that have long marginalized Black and Brown communities. Only by demanding accountability in AI's design and implementation can we begin to shift its role from a tool of exclusion to, at the very least, a respecter of humanity.
We Must Create Opportunities for People to Safely Opt Out of These Innovations
AI systems are becoming integrated into daily life. Facial recognition technologies, in particular, present significant concerns around data privacy and surveillance. These systems can collect and store personal data without individuals' explicit consent, raising critical ethical issues around informed participation, discrimination, and potential misuse.18 This technology is frequently implemented in outdoor public spaces, retail environments, airports, and even within digital platforms, yet many people are unaware that their faces are being scanned, analyzed, and often stored in databases, sometimes indefinitely. This poses a serious risk to privacy, particularly as data breaches or improper use of this technology can lead to identity theft, wrongful arrests, or surveillance abuses that disproportionately affect marginalized communities. For instance, research shows that facial recognition systems are often less accurate when identifying people of color (as noted earlier) and women, increasing the potential for biased outcomes and social harm.19
To counter these risks, we must create clear and accessible pathways for individuals to opt out of facial recognition and other AI-driven data-collection processes. This can involve implementing legislation that mandates transparency around where and how such technologies are used and providing users with real-time notifications when their data are being collected. Furthermore, ensuring the availability of alternatives for those who wish to avoid these systems altogether is crucial, especially in settings like airports or workplaces, where participation might otherwise feel compulsory.20
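In practice, a meaningful opt-out means consent is checked before any processing happens, and an unknown or unverifiable identity defaults to no collection. The sketch below is a hypothetical illustration of that default-deny design; the ConsentRegistry class, the identifiers, and the capture function are all invented for this example rather than drawn from any real deployment.

```python
from typing import Optional

class ConsentRegistry:
    """Hypothetical registry of people who have opted out of biometric capture."""

    def __init__(self) -> None:
        self._opted_out = set()

    def opt_out(self, person_id: str) -> None:
        self._opted_out.add(person_id)

    def may_process(self, person_id: Optional[str]) -> bool:
        # Default-deny: if identity is unknown, consent cannot be confirmed.
        return person_id is not None and person_id not in self._opted_out

def capture_face_data(person_id: Optional[str], registry: ConsentRegistry) -> bool:
    """Gate every collection step behind the registry check."""
    if not registry.may_process(person_id):
        print("Collection skipped: no confirmed consent.")
        return False
    print(f"Processing data for {person_id}, with real-time notice to the individual.")
    return True

registry = ConsentRegistry()
registry.opt_out("traveler-42")
capture_face_data("traveler-42", registry)  # skipped: opted out
capture_face_data(None, registry)           # skipped: identity unverified
```

The detail worth noting is the handling of unknown identity: because consent cannot be confirmed, the gate refuses to process, rather than collecting first and resolving objections later.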
The right to opt out is not merely about privacy; it is about allowing individuals control over their digital footprint and the ways their personal data are utilized. Establishing robust opt-out mechanisms is essential to respecting individuals' rights to privacy and autonomy; it also bolsters an important truth: that, currently, participation in AI-driven technologies remains a choice, not a mandate.
We Must Invite Diverse Groups of Thinkers and Doers Behind the AI Curtain
The future of AI cannot be shaped in silos; we must bring diverse voices into the rooms where AI is created, where decisions are made, and where systems are designed. These thinkers and doers, from a wide range of lived experiences, industries, and cultures, have an essential role to play in making sure that the technology we build reflects the values of justice, equity, and love.
Along with these thinkers and doers, we must invite those committed to warning us, so that we never forget the histories of oppression, the dictators, and the authoritarian systems that have eroded our shared humanity. Historians, social justice advocates, and ethical scholars are essential for reminding us of the devastating impacts of sexism, racism, capitalism, and authoritarianism on human lives and natural environments. Their insights can help us design and refine AI systems that actively avoid perpetuating these violations, so that new technologies do not silently reinforce the worst aspects of our past.
We ask those who nobly answer the call to provide cautious oversight, so that inequality does not become permanently enmeshed in algorithms, replicating biases that can be scaled indefinitely. These oversight actors should include policymakers, ethicists, technologists, and community leaders who understand that the unencumbered automation of societal practices, many of which are already questionable or outright destructive, would bake in existing disparities, making it nearly impossible to reverse systemic injustices without significant intervention.
And we seek those who call out discriminatory biases already at work in AI: in criminal justice, hiring practices, and public service decision-making. These are the data scientists, civil rights organizations, and legal professionals who have demonstrated time and again how biased data lead to biased outcomes: AI that disproportionately incarcerates Black and Brown people, denies job opportunities to marginalized populations, and limits access to essential public services.21
We Must Protect People's Minds, Especially Young People, from Overdependence on Generative AI
Generative AI, in which a model produces content in response to a user's prompts, offers unprecedented creative opportunities but also a risk of overreliance. AI should serve as a creative partner, enhancing human ingenuity, rather than becoming a crutch that stifles original thought.
Young people in particular are at risk of losing their innate capacity for critical thinking, problem-solving, and imaginative exploration when they overrely on generative AI tools. As AI systems offer instant solutions, answers, and even art, the need for human-driven experimentation, curiosity, and struggle diminishes. Without proper boundaries, we risk raising generations that bypass the deep, sometimes challenging process of learning, growing, and creating.
To protect minds from the passive consumption and regurgitation of AI-generated outputs, we must reframe AI as a collaborative tool: a partner that amplifies human creativity rather than replacing it. AI should be integrated into learning and creative environments in a way that encourages users to remain engaged, questioning, and involved in every step of the creative process. Whether it's generating ideas, providing inspiration, or assisting with tasks, AI's role should be complementary, not directive. We can teach young minds that the value of creativity lies in the journey: in the act of thinking, experimenting, and iterating. We must be intentional in guiding young people to see AI as a powerful assistant, not a substitute for their unique brilliance.22
We Must Use AI to Make Equity Investments in Systems Where Inequity Currently Thrives
AI offers powerful opportunities to address entrenched inequities in sectors like healthcare, education, criminal justice, and employment. These systems, often biased by design, disproportionately affect marginalized communities. AI can reveal and correct these disparities by analyzing large datasets and identifying patterns of inequity.
In education, AI could personalize learning and bridge achievement gaps, offering tailored support to students from disadvantaged backgrounds.23 Additionally, AI could audit hiring, promotion, and sentencing decisions in employment and criminal justice, helping to remove bias and lead to fairer outcomes.24 And in healthcare, AI could detect and address racial and economic biases in diagnosis and treatment and improve access to care for underserved populations.25
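For hiring audits specifically, one long-standing check an AI auditor could automate is the "four-fifths rule" from US equal employment guidelines: if any group's selection rate falls below 80 percent of the highest group's rate, the process is flagged for possible adverse impact. The sketch below applies that check to invented screening counts; the group names and numbers are illustrative only.

```python
# Invented outcomes of an AI-assisted screening step: (selected, applicants).
outcomes = {"group_a": (45, 100), "group_b": (28, 100)}

rates = {group: sel / total for group, (sel, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the most-selected group
    status = "ADVERSE IMPACT FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({status})")
```

Such a check is deliberately coarse; it flags a disparity for human review rather than proving discrimination, which is the right division of labor between an automated audit and the people accountable for the decision.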
By using AI to make equity-driven investments in these systems and others, we can build the boundaries of love and care needed to dismantle structural inequities. But it is essential to remain vigilant in our commitment to these calls to action.
***
The boundaries proposed here are not limitations but rather acts of care designed to secure a future where AI is in service of equity, acts that we all must be a part of bringing into being. This manifesto, therefore, is not meant to be comprehensive but rather a draft: a living document meant to be expanded by all who care about creating a just world through technology.
This manifesto emphasizes the critical intersection of AI, equity, and justice, building upon the foundational themes of the Narrative Convening on AI, Equity, and Storytelling, held in 2024 by RTI International's Transformative Research Unit for Equity (TRUE), and inspired by our esteemed convening keynote speaker, Ruha Benjamin. It draws from key principles in narrative change and technology ethics to create a vision for AI development that serves the common good.
Notes
1. Kendra Cunov, "The Connection Between Love & Boundaries," Kendra Cunov, September 22, 2017, kendarcunov.com/2017/09/22/the-connection-between-love-boundaries/.
2. April Anson et al., Water Justice and Technology: The COVID-19 Crisis, Computational Resource Control, and Water Relief Policy (New York: AI Now Institute at New York University, 2022); Guangqi Liang et al., "Balancing sustainability and innovation: The role of artificial intelligence in shaping mining practices for sustainable mining development," Resources Policy 90 (March 2024): 104793; Josh Cowls et al., "The AI gambit: leveraging artificial intelligence to combat climate change–opportunities, challenges, and recommendations," AI & Society 38, no. 1 (February 2023): 283–307; and Jie Chen et al., "Artificial intelligence based e-waste management for environmental planning," Environmental Impact Assessment Review 87 (March 2021): 106498.
3. Yuan Yao, "Can We Mitigate AI's Environmental Impacts?," interview by YSE News, Yale School of the Environment, October 10, 2024, yale.edu/news/article/can-we-mitigate-ais-environmental-impacts.
4. "Climate Change Impacts on Health," United States Environmental Protection Agency, last updated August 21, 2024, epa.gov/climateimpacts/climate-change-impacts-health; "Human Health Impacts of Climate Change," National Institute of Environmental Health Sciences, accessed November 22, 2024, www.niehs.nih.gov/research/programs/climatechange/health_impacts; and "Climate Change," World Health Organization, October 12, 2023, www.who.int/news-room/fact-sheets/detail/climate-change-and-health.
5. Naomi Slagowski and Christopher DesAutels, "Environmental and Community Impacts of Large Data Centers," Trends, Fall 2024, gradientcorp.com/trend_articles/impacts-of-large-data-centers/.
6. Andrew Chow, "'We're Living in a Nightmare': Inside the Health Crisis of a Texas Bitcoin Town," TIME, last modified July 16, 2024, time.com/6982015/bitcoin-mining-texas-health/.
7. See Kate Whiting, "Is AI making you suffer from FOBO? Here's what can help," World Economic Forum, December 20, 2023, weforum.org/stories/2023/12/ai-fobo-jobs-anxiety/.
8. Ray Smith, "AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.," Wall Street Journal, February 12, 2024, www.wsj.com/lifestyle/careers/ai-is-starting-to-threaten-white-collar-jobs-few-industries-are-immune-9cdbcb90; and Aurelia Glass, "Unions Give Workers a Voice Over How AI Affects Their Jobs," Center for American Progress, May 16, 2024, www.americanprogress.org/article/unions-give-workers-a-voice-over-how-ai-affects-their-jobs/.
9. MxD, "How Artificial Intelligence Is Reshaping the Manufacturing Workforce," interview with Daniel Griffin, Department of Defense Manufacturing Technology Program, October 8, 2024, dodmantech.mil/News/News-Display/Article/3936325/how-artificial-intelligence-is-reshaping-the-manufacturing-workforce/; Sandeep Reddy, "The Impact of AI on the Healthcare Workforce: Balancing Opportunities and Challenges," HIMSS, April 11, 2024, gkc.himss.org/resources/impact-ai-healthcare-workforce-balancing-opportunities-and-challenges; and Matthew Stepka, "Law Bots: How AI Is Reshaping the Legal Profession," Business Law Today, American Bar Association, February 21, 2022, businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.
10. Garen Staglin, "Confronting Anxiety About AI: Workplace Strategies For Employee Mental Health," Forbes, December 18, 2023, forbes.com/sites/onemind/2023/12/18/confronting-anxiety-about-ai-workplace-strategies-for-employee-mental-health/.
11. Joy Buolamwini and Timnit Gebru, "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification," Proceedings of Machine Learning Research 81 (2018): 1–15.
12. Thaddeus Johnson and Natasha N. Johnson, "Police Facial Recognition Technology Can't Tell Black People Apart," Scientific American, May 18, 2023, www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/; and Kristin Nicole Dukes and Kimberly Barsamian Kahn, "What Social Science Research Says about Police Violence against Racial and Ethnic Minorities: Understanding the Antecedents and Consequences–An Introduction," Journal of Social Issues 73, no. 4 (December 2017): 690–700.
13. Rebecca Heilweil, "Why algorithms can be racist and sexist," Vox, February 18, 2020, vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency.
14. Tim Lau, "Predictive Policing Explained," Brennan Center for Justice, April 1, 2020, brennancenter.org/our-work/research-reports/predictive-policing-explained; and Dhruv Mehrotra et al., "How We Determined Crime Prediction Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods," The Markup, December 2, 2021, themarkup.org/show-your-work/2021/12/02/how-we-determined-crime-prediction-software-disproportionately-targeted-low-income-black-and-latino-neighborhoods.
15. Sofia Gomez, "The Dangers of Militarizing Racist Facial Recognition Technology," Georgetown Security Studies Review, September 30, 2020, org/2020/09/30/the-dangers-of-militarizing-racist-facial-recognition-technology/; and Christi M. Smith and Jillian Snider, "To restore community trust, we must demilitarize our police," R Street Institute, August 31, 2021, www.rstreet.org/commentary/to-restore-community-trust-we-must-demilitarize-our-police/.
16. Kayleigh Rogers, "What Constant Surveillance Does to Your Brain," VICE, November 14, 2018, vice.com/en/article/what-constant-surveillance-does-to-your-brain/.
17. Olga Akselrod, "How Artificial Intelligence Can Deepen Racial and Economic Inequities," ACLU, July 13, 2021, aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.
18. Clare Garvie, "Garbage In, Garbage Out: Face Recognition on Flawed Data," Georgetown Law Center on Privacy & Technology, May 16, 2019, flawedfacedata.com/; and Algorithmic Justice League, "TSA Is Expanding Its Facial Recognition Program. You Can Opt Out," accessed November 13, 2024, www.ajl.org/campaigns/fly.
19. See Kashmir Hill, "The Secretive Company That Might End Privacy as We Know It," New York Times, last modified November 2, 2021, nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html; Meredith Whittaker, "The Steep Cost of Capture," Interactions 28, no. 6 (November–December 2021): 50–55; Larry Hardesty, "Study finds gender and skin-type bias in commercial artificial-intelligence systems," MIT News, February 11, 2018, news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212; and Sidney Perkowitz, "The Bias in the Machine: Facial Recognition Technology and Racial Disparities," MIT Case Studies in Social and Ethical Responsibilities of Computing, February 5, 2021, mit-serc.pubpub.org/pub/bias-in-machine/release/1.
20. Alison Lawlor Russell, "Emerging Laws and Norms for AI Facial Recognition Technology," Æther: A Journal of Strategic Airpower & Spacepower 3, no. 2 (Summer 2024): 26–42.
21. Olga Akselrod and Cody Venzke, "How Artificial Intelligence Might Prevent You from Getting Hired," ACLU, August 23, 2023, aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired; Will Dobbs-Allsopp et al., Taking Further Agency Action on AI: How Agencies Can Deploy Existing Statutory Authorities To Regulate Artificial Intelligence (Washington, DC: Center for American Progress, 2024); and Molly Callahan, "Algorithms Were Supposed to Reduce Bias in Criminal Justice–Do They?," The Brink, Boston University, February 23, 2023, www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice/.
22. Bakhtawar Amjad, "Over-Reliance of Students on Artificial Intelligence," Medium, April 21, 2024, medium.com/over-reliance-of-students-on-artificial-intelligence-709a931bdc79.
23. See Denise Turley, "Leveling the Field: How AI can empower Disadvantaged Students," AI Journal, February 27, 2024, com/levelling-the-field-how-ai-can-empower-disadvantaged-students/; Thomas Davenport and Ravi Kalakota, "The potential for artificial intelligence in healthcare," Future Healthcare Journal 6, no. 2 (June 2019): 94–98; and "The role of AI in modern education," University of Iowa Education Blog, University of Iowa, August 27, 2024, onlineprograms.education.uiowa.edu/blog/role-of-ai-in-modern-education.
24. Frida Polli, "Using AI to Eliminate Bias from Hiring," Harvard Business Review, October 29, 2019, org/2019/10/using-ai-to-eliminate-bias-from-hiring; and Kieran Newcomb, "The Place of Artificial Intelligence in Sentencing Decisions," Inquiry Journal (blog), spring 2024, University of New Hampshire, www.unh.edu/inquiryjournal/blog/2024/03/place-artificial-intelligence-sentencing-decisions.
25. Isabella Backman, "Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines," Yale School of Medicine, December 21, 2023, yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/.