Spirituality is Essential to Ensuring AI Helps Humanity Flourish
Photo courtesy of Marek Piwnicki via Unsplash.
January 18, 2024
We already know that biased training datasets are one of the ways AI could harm our societies. Around 74% of digital content related to faith treats faith negatively, indicating a worrying bias that must be confronted as AI becomes central to life, writes Angela Redding.
AI promises unparalleled technological breakthroughs on today's most pressing problems. At the same time, we recognize that bias in AI can harm humans, and that humans can unwittingly internalize that bias as fact long after they've stopped using the algorithm. AI's potential for good or harm hinges on our ability to root out biased data, along with the human and systemic biases those datasets are embedded within.
One critically under- and misrepresented group in that data is people of faith, along with the importance of religion and spirituality in most humans' lives.
Most of humanity (84%) affiliates with a religion. That percentage is expected to increase over time. Meanwhile, according to an AI-enabled study of 30+ million documents, 63% of faith-related digital content treats faith negatively. Furthermore, 11% of digital content mentioning faith is negative in extreme ways, including hate speech, for a total of 74% of all digital faith-related content being negative to some extent. Across multiple global studies, a majority of individuals say representation of their faith is stereotypical, sensationalized, negative, or simply absent.
The digital landscape provides training data for AI, so these negative portrayals of faith could have harmful effects on human flourishing. Religion and spirituality have been linked to better wellbeing, mental health, civic engagement, prosocial behavior, and longevity. Religious stereotypes can encourage violence against marginalized groups. Training datasets filled with negative and even hateful representations of religion risk exacerbating that violence and diminishing the desperately needed positive impacts faith can have on personal wellbeing and society.
If we want AI to be truly inclusive and to support human flourishing, training datasets should be audited to protect against unfair, inaccurate bias against religion and spirituality.
Experts and stakeholders urgently need to press world leaders, the media, the public and private sectors, and other influential people to proactively support accurate and representative stories of faith in our collective cultural narratives. By encouraging representative stories and guarding against bias and discrimination—for all faiths and types of spirituality—AI systems can be trained on better datasets.
In the face of rapidly accelerating and often incomprehensible technological change, AI can accelerate us toward catastrophic harm or unimaginable thriving. The power of AI to elevate the beauty of what makes us uniquely human depends on the representation of our most ennobling and transcendent virtues in what powers AI.
Spirituality is not tangential to the impact of AI on human flourishing; it is essential.