Building Boundaries in Love for Equity and Justice: An AI Manifesto

“Manifesting Love” by DALL-E 3, openai.com/dalle

Editors’ note: This piece is from Nonprofit Quarterly Magazine’s winter 2024 issue, “Health Justice in the Digital Age: Can We Harness AI for Good?”

A symbiotic relationship between love and boundaries is essential for creating healthy, sustainable connections in all types of relationships—romantic, familial, or platonic. While love is often associated with openness, generosity, and care, boundaries are crucial for love to remain respectful, mutual, and emotionally safe.1

As we enter a new era shaped by artificial intelligence, we face both unique opportunities and profound risks. AI is a powerful tool for advancing human potential—but only if it is designed with intentional boundaries that protect and uplift the most vulnerable among us.

At its core, AI must be a force that serves humanity, not the other way around. It must enhance our collective capacity to build more just and equitable societies. AI tools are not neutral; they carry the biases and assumptions of the systems and individuals who create them. When we build AI, we must ask: Who benefits from this technology? Who is harmed? This manifesto advocates for the creation of parameters rooted in love, equity, and justice to guide AI’s continued development and deployment. It calls for AI that is designed explicitly to dismantle systemic inequities and address the social ills caused by historical and present-day injustices.

In order to do so, we must commit to the following.


We Must Build Parameters to Protect Our Most Vulnerable Populations and Precious Resources

The relationship between AI and its environmental impact is both intricate and far-reaching. Ironically, the technology that promises a more efficient future is also contributing to a great strain on the planet’s resources. AI’s ecological footprint extends through water-intensive demands, energy consumption, carbon emissions, and resource extraction, all of which pose serious risks to human health and fragile ecosystems.

The data centers that power AI systems consume vast amounts of water for cooling, exacerbating water insecurity in vulnerable regions. Large AI models require immense energy, contributing to air pollution and climate change; and mining for the rare earth metals used in AI hardware degrades soil, contaminates water sources, displaces local communities, and leaves behind hazardous electronic waste that pollutes both land and water.2

AI’s insatiable energy consumption has negative consequences for human health, too, manifesting in respiratory illnesses due to increased air pollution,3 as well as (indirectly) in heat-related illnesses, due to AI’s role in greenhouse gas emissions—which drive global warming and intensify the frequency and severity of extreme heat events.4 The invisible weight of data centers hangs heavy in the atmosphere, diminishing the quality of life for nearby communities through increased noise, strain on local resources, and environmental disruptions.5 In Granbury, TX, for example, residents living near a Bitcoin mining facility reported migraines, vertigo, hearing loss, heart palpitations, hypertension, panic attacks, and chest pain due to constant noise pollution—showcasing the profound physical toll such data centers can impose on those in their proximity.6

These examples remind us that the unimpeded development of AI has tangible consequences, emphasizing the need for thoughtful parameters that prioritize human and environmental wellbeing.

We Must Build Parameters to Protect People from AI Creators and AI Creators from Themselves

In the rush to lead the global AI race, it can be tempting to prioritize innovation, speed, and profit without pausing to consider the profound ethical, societal, and human consequences. But unchecked ambition can leave those who create AI—and those impacted by it—vulnerable to the risks of a world increasingly shaped by unregulated technological advancements. To prevent injury, we must establish metaphorical boundaries of love through ethical guardrails that guide AI development with compassion, care, and foresight.

AI creators, driven by a desire to innovate and lead, may not always foresee the long-term repercussions of their work. These developers—engineers, data scientists, and tech leaders—can fall victim to the pressure to be the first and fastest, driven by profit motives and competitive market forces. Without thoughtful parameters in place, they risk creating systems that perpetuate harm, exacerbate inequality, and destabilize societal norms. In this sense, establishing policies and ethical frameworks acts as a boundary of love, not only safeguarding society at large but also protecting creators from the unintended consequences of their own innovations.

For those impacted by AI—communities, workers, everyday people—such policies serve as essential protective barriers. Without oversight, AI systems can deepen social divides, automate biases, and destabilize labor markets. AI policy must, therefore, act as a boundary that prioritizes the wellbeing of all people, ensuring that technological progress is guided by empathy and justice. By embedding values of equity and fairness into AI systems, we ensure that the development of AI is an act of love, offering tools that elevate humanity rather than exploit it.

Through thoughtful, intentional AI policies, we can build a future where boundaries are not seen as barriers to progress but rather as defenses designed to support both creators and those affected by their creations. These boundaries of love provide the space for responsible innovation, protecting individuals from the unintended wounds of a rapidly advancing digital age.

We Must Protect the Economic and Financial Security of Workers

As AI is integrated into industries, workers face growing fears about job security.7 Automation threatens not only manual labor but also complex white-collar jobs.8 This concern is valid, as AI has already begun reshaping such sectors as manufacturing, healthcare, and legal services.9 Anxiety over potential job displacement affects worker morale, financial stability, and mental health.10

Protecting workers’ economic security requires policies that guarantee they aren’t left behind in the technological shift. This includes promoting upskilling programs, financial support during industry transitions, and strong safety nets like unemployment benefits and retraining opportunities.

AI must not become a tool of exploitation or a means of cutting costs at the expense of human dignity. Workers deserve to benefit from the productivity gains AI offers. By establishing protective policies that prioritize workers’ rights, financial stability, and long-term career development, we can make certain that AI serves as a partner in human progress rather than a force that diminishes livelihoods.

We Must Prevent AI Technology from Encroaching upon the Quality of Life and Wellbeing of Black and Brown People

The use of AI in criminal justice practices, especially through facial recognition technology, poses a serious threat to the social determinants of health, particularly by infringing on social and civic engagement for communities of color. Facial recognition algorithms have been shown to inaccurately identify people with darker skin tones at significantly higher rates than their lighter-skinned counterparts.11 This technological bias leads to wrongful detentions, arrests, and surveillance, mirroring the overpolicing and excessive scrutiny historically imposed on Black and Brown communities. Such AI applications replicate problematic policing practices, triggering trauma linked to decades of discriminatory justice systems and reinforcing community distrust.12

Additionally, AI systems in criminal justice often operate in “black boxes,” with opaque decision-making processes that lack transparency and accountability.13 This secrecy prevents public oversight, leaving communities vulnerable to unchecked biases that reinforce systemic inequities. AI tools trained on biased historical data can exacerbate discriminatory practices—and predictive policing algorithms, which target specific areas based on flawed datasets, drive over-surveillance and privacy violations, particularly in lower-income and minority neighborhoods.14

The deployment of these technologies without community consent disregards the voices of those most affected, stripping communities of autonomy and reinforcing a top-down approach to safety. Moreover, the reliance on AI-driven tools has contributed to the militarization of police forces, which further alienates communities and distances law enforcement from community-based approaches.15 Continuous surveillance also has a profound psychological impact, fostering an atmosphere of fear and hyper-vigilance that undermines mental health and wellbeing.16

Beyond criminal justice, AI systems reinforce inequities in such critical areas of life as housing and employment by relying on biased data proxies—such as eviction histories, criminal records, and ethnic names. These algorithms often disfavor marginalized communities, resulting in unfair denials for housing, loans, or jobs, which compromises economic stability, housing security, and broader social determinants of health. Instead of alleviating systemic discrimination, AI can frequently amplify it, obstructing opportunities for stability and wellbeing in communities already impacted by inequality.17

Addressing these injustices in AI-driven criminal justice and related systems that influence quality of life is essential to advancing health equity and ensuring that technology fosters, rather than obstructs, opportunities for community wellbeing: The use of biased proxy data in AI must be carefully scrutinized and removed; transparent, community-led oversight and rigorous auditing of AI datasets are necessary; we need more representative datasets to offset historically biased ones and eliminate data proxies; and it’s imperative that we ban harmful AI applications in policing and housing, so as to challenge systems that have long marginalized Black and Brown communities. Only by demanding accountability in AI’s design and implementation can we begin to shift its role from a tool of exclusion to—at the very least—a respecter of humanity.
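
To make the call for auditing concrete, the sketch below shows one simple form such a community-led audit might take: comparing how often an automated system wrongly flags people in different demographic groups. It is a minimal illustration only; the record format, group labels, and counts are invented for the example and do not describe any real system.

```python
# Minimal sketch of a demographic error-rate audit (illustrative data only).
from collections import defaultdict

# Each record: (group, model_flagged, actually_relevant)
# e.g., whether a match was flagged and whether the flag was warranted.
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

def false_positive_rates(records):
    """Return the false-positive rate (wrongly flagged / all negatives) per group."""
    flagged_negatives = defaultdict(int)
    total_negatives = defaultdict(int)
    for group, flagged, relevant in records:
        if not relevant:  # this person should not have been flagged
            total_negatives[group] += 1
            if flagged:
                flagged_negatives[group] += 1
    return {g: flagged_negatives[g] / total_negatives[g] for g in total_negatives}

print(false_positive_rates(records))  # e.g., {'group_a': 0.5, 'group_b': 1.0}
# A large gap between groups is exactly the kind of disparity auditors would
# surface and demand be corrected before deployment.
```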

We Must Create Opportunities for People to Safely Opt Out of These Innovations

AI systems are becoming integrated into daily life. Facial recognition technologies, in particular, present significant concerns around data privacy and surveillance. These systems can collect and store personal data without individuals’ explicit consent, raising critical ethical issues around informed participation, discrimination, and potential misuse.18 This technology is frequently implemented in outdoor public spaces, retail environments, airports, and even within digital platforms—yet many people are unaware that their faces are being scanned, analyzed, and often stored in databases, sometimes indefinitely. This poses a serious risk to privacy, particularly as data breaches or improper use of this technology can lead to identity theft, wrongful arrests, or surveillance abuses that disproportionately affect marginalized communities. For instance, research shows that facial recognition systems are often less accurate when identifying people of color (as noted earlier) and women, increasing the potential for biased outcomes and social harm.19

To counter these risks, we must create clear and accessible pathways for individuals to opt out of facial recognition and other AI-driven data-collection processes. This can involve implementing legislation that mandates transparency around where and how such technologies are used and providing users with real-time notifications when their data are being collected. Furthermore, ensuring the availability of alternatives for those who wish to avoid these systems altogether is crucial, especially in settings like airports or workplaces, where participation might otherwise feel compulsory.20

The right to opt out is not merely about privacy; it is about allowing individuals control over their digital footprint and the ways their personal data are utilized. Establishing robust opt-out mechanisms is essential to respect individuals’ rights to privacy and autonomy; it also bolsters an important truth—that currently, participation in AI-driven technologies remains a choice, not a mandate.

We Must Invite Diverse Groups of Thinkers and Doers Behind the AI Curtain

The future of AI cannot be shaped in silos; we must bring diverse voices into the rooms where AI is created, where decisions are made, and where systems are designed. These thinkers and doers, from a wide range of lived experiences, industries, and cultures, have an essential role to play in making sure that the technology we build reflects the values of justice, equity, and love.

Along with these thinkers and doers, we must invite those committed to warning us, so that we never forget the histories of oppression, the dictators, and the authoritarian systems that have eroded our shared humanity. Historians, social justice advocates, and ethical scholars are essential for reminding us of the devastating impacts of sexism, racism, capitalism, and authoritarianism on human lives and natural environments. Their insights can help us design and refine AI systems that actively avoid perpetuating these violations, so that new technologies do not silently reinforce the worst aspects of our past.

We ask those who nobly answer the call to provide cautious oversight so that inequality does not become permanently enmeshed in algorithms, replicating biases that can be scaled indefinitely. These oversight actors should include policymakers, ethicists, technologists, and community leaders who understand that an unencumbered automation of societal practices, many of which are already questionable or outright destructive, would bake in existing disparities—making it nearly impossible to reverse systemic injustices without significant intervention.

And we seek those who call out discriminatory biases already at work in AI—in criminal justice, hiring practices, and public service decision-making. These are the data scientists, civil rights organizations, and legal professionals who have demonstrated time and again how biased data lead to biased outcomes: AI that disproportionately incarcerates Black and Brown people, denies job opportunities to marginalized populations, and limits access to essential public services.21

We Must Protect People’s Minds, Especially Young People, from Overdependence on Generative AI

Generative AI, in which a system produces content in response to a user’s prompts, offers unprecedented creative opportunities but also a risk of overreliance. AI should serve as a creative partner, enhancing human ingenuity, rather than becoming a crutch that stifles original thought.

Young people in particular are at risk of losing their innate capacity for critical thinking, problem-solving, and imaginative exploration when overrelying on generative AI tools. As AI systems offer instant solutions, answers, and even art, the need for human-driven experimentation, curiosity, and struggle diminishes. Without proper boundaries, we risk generations that bypass the deep, sometimes challenging process of learning, growing, and creating.

To protect minds from the passive consumption and regurgitation of AI-generated outputs, we must reframe AI as a collaborative tool—a partner that amplifies human creativity rather than replacing it. AI should be integrated into learning and creative environments in a way that encourages users to remain engaged, questioning, and involved in every step of the creative process. Whether it’s generating ideas, providing inspiration, or assisting with tasks, AI’s role should be complementary, not directive. We can teach young minds that the value of creativity lies in the journey—in the act of thinking, experimenting, and iterating. We must be intentional in guiding young people to see AI as a powerful assistant, not a substitute for their unique brilliance.22

We Must Use AI to Make Equity Investments in Systems Where Inequity Currently Thrives

AI offers powerful opportunities to address entrenched inequities in sectors like healthcare, education, criminal justice, and employment. These systems, often biased by design, disproportionately affect marginalized communities. AI can reveal and correct these disparities by analyzing large datasets and identifying patterns of inequity.

In education, AI could personalize learning and bridge achievement gaps, offering tailored support to students from disadvantaged backgrounds.23 Additionally, AI could audit hiring, promotion, and sentencing decisions in employment and criminal justice, helping to remove bias and lead to fairer outcomes.24 And in healthcare, AI could detect and address racial and economic biases in diagnosis and treatment and improve access to care for underserved populations.25
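
As one illustration of what an equity audit of hiring decisions might look like, the brief sketch below applies the widely used “four-fifths rule” heuristic, under which any group’s selection rate should be at least 80 percent of the highest group’s rate. The group names and counts are hypothetical, and this is a minimal sketch rather than a complete auditing method.

```python
# Minimal sketch of an adverse-impact check on hiring outcomes
# using the four-fifths rule heuristic (illustrative counts only).

def selection_rates(outcomes):
    """outcomes: {group: (selected, applicants)} -> {group: selection rate}."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` x the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

hiring = {"group_a": (50, 100), "group_b": (20, 100)}  # hypothetical decision logs
print(adverse_impact(hiring))  # {'group_b': 0.4} -> well below the 0.8 benchmark
```

A flagged ratio does not by itself prove discrimination, but it is the kind of signal that should trigger human review of the underlying system and data.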

By using AI to make equity-driven investments in these systems and others, we can build the boundaries of love and care needed to dismantle structural inequities. But it is essential to remain vigilant in our commitment to these calls to action.

***

The boundaries proposed here are not limitations but rather acts of care designed to secure a future where AI is in service of equity—acts that we all must be a part of bringing into being. This manifesto, therefore, is not meant to be comprehensive but rather a draft—a living document meant to be expanded by all who care about creating a just world through technology.

This manifesto emphasizes the critical intersection of AI, equity, and justice, building upon the foundational themes from RTI International’s Transformative Research Unit for Equity’s (TRUE) Narrative Convening on AI, Equity, and Storytelling, held in 2024 and inspired by our esteemed convening keynote speaker, Ruha Benjamin. It draws from key principles in narrative change and technology ethics to create a vision for AI development that serves the common good.

Notes

  1. Kendra Cunov, “The Connection Between Love & Boundaries,” Kendra Cunov, September 22, 2017, kendracunov.com/2017/09/22/the-connection-between-love-boundaries/.
  2. April Anson et al., Water Justice and Technology: The COVID-19 Crisis, Computational Resource Control, and Water Relief Policy (New York: AI Now Institute at New York University, 2022); Guangqi Liang et al., “Balancing sustainability and innovation: The role of artificial intelligence in shaping mining practices for sustainable mining development,” Resources Policy 90 (March 2024): 104793; Josh Cowls et al., “The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations,” AI & Society 38, no. 1 (February 2023): 283–307; and Jie Chen et al., “Artificial intelligence based e-waste management for environmental planning,” Environmental Impact Assessment Review 87 (March 2021): 106498.
  3. Yuan Yao, “Can We Mitigate AI’s Environmental Impacts?,” interview by YSE News, Yale School of the Environment, October 10, 2024, yale.edu/news/article/can-we-mitigate-ais-environmental-impacts.
  4. “Climate Change Impacts on Health,” United States Environmental Protection Agency, last updated August 2021, 2024, epa.gov/climateimpacts/climate-change-impacts-health; “Human Health Impacts of Climate Change,” National Institute of Environmental Health Sciences, accessed November 22, 2024, www.niehs.nih.gov/research/programs/climatechange/health_impacts; and “Climate Change,” World Health Organization, October 12, 2023, www.who.int/news-room/fact-sheets/detail/climate-change-and-health.
  5. Naomi Slagowski and Christopher DesAutels, “Environmental and Community Impacts of Large Data Centers,” Trends, Fall 2024, gradientcorp.com/trend_articles/impacts-of-large-data-centers/.
  6. Andrew Chow, “‘We’re Living in a Nightmare’: Inside the Health Crisis of a Texas Bitcoin Town,” TIME, last modified July 16, 2024, time.com/6982015/bitcoin-mining-texas-health/.
  7. See Kate Whiting, “Is AI making you suffer from FOBO? Here’s what can help,” World Economic Forum, December 20, 2023, weforum.org/stories/2023/12/ai-fobo-jobs-anxiety/.
  8. Ray Smith, “AI Is Starting to Threaten White-Collar Jobs. Few Industries Are Immune.,” Wall Street Journal, February 12, 2024, www.wsj.com/lifestyle/careers/ai-is-starting-to-threaten-white-collar-jobs-few-industries-are-immune-9cdbcb90; and Aurelia Glass, “Unions Give Workers a Voice Over How AI Affects Their Jobs,” Center for American Progress, May 16, 2024, www.americanprogress.org/article/unions-give-workers-a-voice-over-how-ai-affects-their-jobs/.
  9. MxD, “How Artificial Intelligence Is Reshaping the Manufacturing Workforce,” interview with Daniel Griffin, Department of Defense Manufacturing Technology Program, October 8, 2024, dodmantech.mil/News/News-Display/Article/3936325/how-artificial-intelligence-is-reshaping-the-manufacturing-workforce/; Sandeep Reddy, “The Impact of AI on the Healthcare Workforce: Balancing Opportunities and Challenges,” HIMSS, April 11, 2024, gkc.himss.org/resources/impact-ai-healthcare-workforce-balancing-opportunities-and-challenges; and Matthew Stepka, “Law Bots: How AI Is Reshaping the Legal Profession,” Business Law Today, American Bar Association, February 21, 2022, businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.
  10. Garen Staglin, “Confronting Anxiety About AI: Workplace Strategies For Employee Mental Health,” Forbes, December 18, 2023, forbes.com/sites/onemind/2023/12/18/confronting-anxiety-about-ai-workplace-strategies-for-employee-mental-health/.
  11. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1–15.
  12. Thaddeus Johnson and Natasha N. Johnson, “Police Facial Recognition Technology Can’t Tell Black People Apart,” Scientific American, May 18, 2023, www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/; and Kristin Nicole Dukes and Kimberly Barsamian Kahn, “What Social Science Research Says about Police Violence against Racial and Ethnic Minorities: Understanding the Antecedents and Consequences—An Introduction,” Journal of Social Issues 73, no. 4 (December 2017): 690–700.
  13. Rebecca Heilweil, “Why algorithms can be racist and sexist,” Vox, February 18, 2020, vox.com/recode/2020/2/18/21121286/algorithms-bias-discrimination-facial-recognition-transparency.
  14. Tim Lau, “Predictive Policing Explained,” Brennan Center for Justice, April 1, 2020, brennancenter.org/our-work/research-reports/predictive-policing-explained; and Dhruv Mehrotra et al., “How We Determined Crime Prediction Software Disproportionately Targeted Low-Income, Black, and Latino Neighborhoods,” The Markup, December 2, 2021, themarkup.org/show-your-work/2021/12/02/how-we-determined-crime-prediction-software-disproportionately-targeted-low-income-black-and-latino-neighborhoods.
  15. Sofia Gomez, “The Dangers of Militarizing Racist Facial Recognition Technology,” Georgetown Security Studies Review, September 30, 2020, georgetownsecuritystudiesreview.org/2020/09/30/the-dangers-of-militarizing-racist-facial-recognition-technology/; and Christi M. Smith and Jillian Snider, “To restore community trust, we must demilitarize our police,” R Street Institute, August 31, 2021, www.rstreet.org/commentary/to-restore-community-trust-we-must-demilitarize-our-police/.
  16. Kayleigh Rogers, “What Constant Surveillance Does to Your Brain,” VICE, November 14, 2018, vice.com/en/article/what-constant-surveillance-does-to-your-brain/.
  17. Olga Akselrod, “How Artificial Intelligence Can Deepen Racial and Economic Inequities,” ACLU, July 13, 2021, aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.
  18. Clare Garvie, “Garbage In, Garbage Out: Face Recognition on Flawed Data,” Georgetown Law Center on Privacy & Technology, May 16, 2019, flawedfacedata.com/; and Algorithmic Justice League, “TSA Is Expanding Its Facial Recognition Program. You Can Opt Out,” accessed November 13, 2024, www.ajl.org/campaigns/fly.
  19. See Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It,” New York Times, last modified November 2, 2021, nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html; Meredith Whittaker, “The Steep Cost of Capture,” Interactions 28, no. 6 (November–December 2021): 50–55; Larry Hardesty, “Study finds gender and skin-type bias in commercial artificial-intelligence systems,” MIT News, February 11, 2018, news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212; and Sidney Perkowitz, “The Bias in the Machine: Facial Recognition Technology and Racial Disparities,” MIT Case Studies in Social and Ethical Responsibilities of Computing, February 5, 2021, mit-serc.pubpub.org/pub/bias-in-machine/release/1.
  20. Alison Lawlor Russell, “Emerging Laws and Norms for AI Facial Recognition Technology,” Æther: A Journal of Strategic Airpower & Spacepower 3, no. 2 (Summer 2024): 26–42.
  21. Olga Akselrod and Cody Venzke, “How Artificial Intelligence Might Prevent You from Getting Hired,” ACLU, August 23, 2023, aclu.org/news/racial-justice/how-artificial-intelligence-might-prevent-you-from-getting-hired; Will Dobbs-Allsopp et al., Taking Further Agency Action on AI: How Agencies Can Deploy Existing Statutory Authorities To Regulate Artificial Intelligence (Washington, DC: Center for American Progress, 2024); and Molly Callahan, “Algorithms Were Supposed to Reduce Bias in Criminal Justice—Do They?,” The Brink, Boston University, February 23, 2023, www.bu.edu/articles/2023/do-algorithms-reduce-bias-in-criminal-justice/.
  22. Bakhtawar Amjad, “Over-Reliance of Students on Artificial Intelligence,” Medium, April 21, 2024, medium.com/over-reliance-of-students-on-artificial-intelligence-709a931bdc79.
  23. See Denise Turley, “Leveling the Field: How AI can empower Disadvantaged Students,” AI Journal, February 27, 2024, com/levelling-the-field-how-ai-can-empower-disadvantaged-students/; Thomas Davenport and Ravi Kalakota, “The potential for artificial intelligence in healthcare,” Future Healthcare Journal 6, no. 2 (June 2019): 94–98; and “The role of AI in modern education,” University of Iowa Education Blog, University of Iowa, August 27, 2024, onlineprograms.education.uiowa.edu/blog/role-of-ai-in-modern-education.
  24. Frida Polli, “Using AI to Eliminate Bias from Hiring,” Harvard Business Review, October 29, 2019, hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring; and Kieran Newcomb, “The Place of Artificial Intelligence in Sentencing Decisions,” Inquiry Journal (blog), spring 2024, University of New Hampshire, www.unh.edu/inquiryjournal/blog/2024/03/place-artificial-intelligence-sentencing-decisions.
  25. Isabella Backman, “Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines,” Yale School of Medicine, December 21, 2023, yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/.