Saving Representative Democracy from Online Trolls
Women remain underrepresented even in many Western political systems, despite a long tradition of political activism for equitable representation. Photo by Library of Congress on Unsplash
January 4, 2024
More than 70 countries will hold national elections in 2024. Women remain underrepresented among candidates, due largely to the disproportionate abuse female candidates receive online, including threats of rape and violence, a problem that AI threatens to worsen, writes Ngaire Woods.
More than 70 national elections are scheduled for 2024, including in eight of the ten most populous countries. But one group is likely to be significantly underrepresented: women. A major reason is the disproportionate amount of abuse female politicians and candidates receive online, including threats of rape and violence. The rise of artificial intelligence, which can be used to create sexually explicit deepfakes, is only compounding the problem.
And yet, over the past year, platforms such as Meta, X (formerly Twitter), and YouTube have de-emphasized content moderation and rolled back policies that previously kept hate, harassment, and lies in check. According to a new report, this has fueled a “toxic online environment that is vulnerable to exploitation from anti-democracy forces, white supremacists, and other bad actors.”
Online attacks against women in politics are already on the rise. Four out of five female parliamentarians have been subjected to psychological violence such as bullying, intimidation, verbal abuse, or harassment, while more than 40% have been threatened with assault, sexual violence, or death.
The 2020 election in the United States was particularly revealing. A recent analysis of congressional candidates found that female Democrats received ten times more abusive comments on Facebook than their male counterparts. And immediately after presidential candidate Joe Biden named Kamala Harris as his running mate, false claims about Harris were being shared at least 3,000 times per hour on Twitter.
Similar trends have been documented in India, the United Kingdom, Ukraine, and Zimbabwe. Minority women face the worst abuse, together with those who are highly visible in the media or speak out on feminist issues. In India, one in every seven tweets about female politicians is problematic or abusive, with Muslim women and women belonging to marginalized castes bearing the brunt of the vitriol.
The disproportionate targeting of women discourages them from running for office, drives them out of politics, or leads them to disengage from online discourse in ways that harm their political effectiveness—all of which weaken democracy. In Italy, “threats of rape are used to intimidate women politicians and push them out of the public sphere,” says Laura Boldrini, an Italian politician who served as president of the country’s Chamber of Deputies, adding that political leaders themselves often issue these menacing remarks. This creates a vicious cycle, as a dearth of women in government has been shown to result in policies that are less effective in reducing violence against women.
Tech companies should take four steps to counter this trend. For starters, they should publish clear guidelines on what constitutes hate speech, threats, and intimidating harassment on their platforms. Some tech giants have included, and even provided examples of, gendered hate speech in their policies; Google’s YouTube policy is one example.
Second, platforms need to reinvest in effective content moderation for all countries, not just the U.S. and Europe. That means combining human moderators with improved automated systems: during the COVID-19 pandemic, when tech companies relied more heavily on algorithms, campaigners in France found that hate speech on Twitter increased by more than 40%. Equally important is training human moderators to recognize online violence against women in politics, and distributing that investment more equitably across regions. Until now, the unpleasant job of finding and deleting offensive content has typically been outsourced to the regions where labor is cheapest.
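To make that division of labor concrete, here is a minimal sketch in Python of how a hybrid pipeline might route content: an automated classifier resolves clear-cut cases, while ambiguous posts are escalated to trained human reviewers. The `toxicity_score` heuristic and both thresholds are illustrative assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these per language and market.
AUTO_REMOVE = 0.95   # near-certain violations are removed automatically
HUMAN_REVIEW = 0.60  # ambiguous cases go to a trained human moderator

@dataclass
class Post:
    text: str
    language: str

def toxicity_score(post: Post) -> float:
    """Stand-in for a trained classifier; here, a crude keyword heuristic."""
    flagged = ("slur", "threat")  # illustrative markers only
    hits = sum(word in post.text.lower() for word in flagged)
    return min(1.0, 0.6 * hits)

def moderate(post: Post) -> str:
    """Route a post: remove it, escalate it to a human, or allow it."""
    score = toxicity_score(post)
    if score >= AUTO_REMOVE:
        return "remove"            # automated removal, logged for audit
    if score >= HUMAN_REVIEW:
        return "queue_for_review"  # escalate to a moderator trained to spot
                                   # violence against women in politics
    return "allow"

print(moderate(Post("this reads like a threat", "en")))  # queue_for_review
```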
Third, “safety by design” principles should be embedded in new products and tools. That could mean building mechanisms that “increase friction” for users, making it harder for gendered hate speech and disinformation to spread in the first place. Companies should improve their risk-assessment practices before launching products and tools or introducing them in a new market. Investing in innovations such as ParityBOT, a monitoring and counterbalancing tool that detects problematic tweets about women candidates and responds with positive messages, will also be important.
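ParityBOT’s actual implementation is not described here, so the sketch below is purely hypothetical: it shows the counterbalancing pattern the tool illustrates, flagging abusive tweets that mention tracked candidates and publishing a positive message rather than engaging the abuser. The scoring heuristic, threshold, and messages are all assumptions.

```python
import random

# Illustrative counter-messages; a real deployment would curate these.
POSITIVE_MESSAGES = [
    "Women's voices make our politics stronger. #MoreWomenInPolitics",
    "Thank you to every woman who stands for office despite the abuse.",
]

ABUSE_THRESHOLD = 0.8  # assumed cutoff for "problematic"

def abuse_score(text: str) -> float:
    """Stand-in for a real toxicity model or scoring API."""
    markers = ("go back to", "you people", "deserve")  # illustrative only
    return min(1.0, 0.4 * sum(m in text.lower() for m in markers))

def handle_tweet(text: str, mentions_tracked_candidate: bool) -> str | None:
    """Return a positive message to post, or None if no action is needed."""
    if mentions_tracked_candidate and abuse_score(text) >= ABUSE_THRESHOLD:
        # Counterbalance with positivity; never reply to or amplify the abuser.
        return random.choice(POSITIVE_MESSAGES)
    return None
```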
Lastly, independent monitoring by researchers or citizen groups would help societies keep track of the problem and of how well tech platforms are handling it. Such monitoring would require companies to provide access to data on the number and nature of complaints received, disaggregated by gender and country, along with the responses taken.
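As a hypothetical illustration of what such disaggregated data could look like, the sketch below tallies complaints by gender and country alongside the responses taken. The field names are assumptions for illustration, not any platform’s actual schema.

```python
from collections import Counter
from typing import NamedTuple

class Complaint(NamedTuple):
    country: str
    target_gender: str
    action_taken: str  # e.g., "removed", "warned", "no_action"

def transparency_report(complaints: list[Complaint]) -> dict:
    """Aggregate complaint data the way independent monitors would need it."""
    by_group = Counter((c.country, c.target_gender) for c in complaints)
    by_response = Counter(c.action_taken for c in complaints)
    return {
        "complaints_by_country_and_gender": dict(by_group),
        "responses": dict(by_response),
    }

# Example: auditing a small, fabricated sample of platform data
sample = [
    Complaint("IN", "female", "no_action"),
    Complaint("IN", "female", "removed"),
    Complaint("UK", "male", "no_action"),
]
print(transparency_report(sample))
```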
Given social-media companies’ rollback of content policies and reduced investment in moderation, it is worth noting that women currently hold just 28% of tech leadership roles, and that share is falling. If, as in politics, female tech leaders are more likely to address violence against women, this trend could create a similar vicious cycle.
Crucially, governments must also take steps to prevent gendered online abuse from undermining democracy. Tunisia and Bolivia have outlawed political violence and harassment against women, while Mexico recently enacted a law that punishes, with up to nine years in prison, those who create or disseminate intimate images or videos of women or attack women on social networks.
In the UK, legal guidelines issued in 2016 and 2018 enable the prosecution of internet trolls who create derogatory hashtags, engage in virtual mobbing (inciting people to harass others), or circulate doctored images. In 2017, Germany introduced a law that requires platforms to remove hate speech or illegal content within 24 hours or risk fines of up to €50 million (a similar measure was struck down in France over censorship concerns).
But even when laws exist, female politicians speak of “virtually constant” abuse and report that law-enforcement officials do not take online threats and abuse seriously. In the UK, for example, less than 1% of cases reported to Scotland Yard’s online-hate-crime unit have resulted in charges. Police officers and judges need better training to understand how existing laws can be applied to online violence against female politicians; too many think that it is simply “part of the job.”
Tech companies and governments must act now to ensure that both men and women can participate equally in the 2024 elections. Unless they do, representative democracies will become less representative and less democratic.
Copyright: Project Syndicate, 2023.