NOW is fighting back against the exploitation of women and girls through AI-generated imagery. Our efforts are exposing the surreptitious use of artificial intelligence to create harmful, non-consensual images, often sexualized or degrading in nature. The damage inflicted by these images is not only immediate and deeply personal but also enduring, as such content can persist indefinitely on the internet and on the dark web. NOW's advocacy highlights the urgent need for legal protections and ethical standards to safeguard the dignity and rights of women and girls in the digital age.

There is currently no comprehensive legislation in the U.S. that directly regulates AI. However, proposed legislation at the federal and state levels generally seeks to address safety and security, responsible innovation and development, equity and unlawful discrimination, protection of privacy, and civil liberties. The National AI Initiative Act of 2020, one of the few federal laws that relates to AI, focused on expanding AI research and development and created the National Artificial Intelligence Initiative Office, which is responsible for "overseeing and implementing the US national AI strategy." Here at NOW, we recognize how AI is being used, and we fully support rules and guidelines governing its use. Below is some of the proposed federal AI legislation that relates to NOW's six core issues.

  • The American Privacy Rights Act would create a comprehensive consumer privacy framework. The draft bill includes provisions on algorithms, including a right to opt out of covered algorithms used to make or facilitate consequential decisions.
    • Privacy relates to reproductive rights and justice.
  • The draft No FAKES Act would protect individuals' voice and visual likenesses from unauthorized recreation by generative AI.
    • California and Tennessee have enacted similar laws to prevent the unauthorized use of AI to mimic a person's name, photograph, voice, or likeness without a license.
    • This Act relates to ending violence against women, including online sexual violence: deepfakes are a form of violence against women.
  • The Tech to Save Moms Act would require a study of the use of innovative technology (including artificial intelligence) in maternal health care, including the extent to which such technology has affected racial or ethnic biases in maternal health care.


The connection to NOW’s six Core Issues:

Artificial intelligence is developed using data generated by humans, and that data often reflects deep-rooted societal biases. As a result, AI systems frequently reproduce and even amplify discrimination against women, people of color, individuals with disabilities, members of the LGBTQ+ community, and other marginalized groups. Without intentional anti-bias safeguards and consistent human oversight, these technologies risk becoming powerful tools of harm rather than instruments of progress. Here is how AI affects each of our Core Issues.


Reproductive Rights and Justice

AI is increasingly being used in sexual and reproductive healthcare, aiding in areas like sperm and embryo selection for IVF, infertility and pregnancy screening, contraception provision, maternal and newborn care, STI/HIV treatment, and even efforts to reduce unsafe abortions. However, the integration of AI also raises critical ethical and human rights concerns. Privacy risks, algorithmic opacity, and the potential for embedding existing biases can undermine equitable access. Without transparency in how these tools are designed and deployed, they may inadvertently exacerbate disparities, especially among historically underserved populations.

Economic Justice

AI has the potential to worsen economic inequality in the U.S., particularly through its use in hiring and the labor market. Because AI systems are trained on existing data, they often replicate and reinforce societal biases, especially when used in processes like screening job applications. This can lead to unfair outcomes that disproportionately harm marginalized groups, such as people of color and low-income individuals, by limiting their access to employment opportunities. As AI becomes more integrated into economic systems, it risks deepening income inequality unless these biases are actively addressed.

Ending Violence Against Women

AI is intensifying violence against women by enabling the rapid and anonymous spread of targeted disinformation and deepfake pornography, which pose serious threats to women's safety and agency. As noted by Penn Carey Law, deepfakes are being weaponized to silence women, often journalists and human rights defenders, through fabricated videos and false audio. One study found that 98 percent of deepfake videos online are pornographic and that 99 percent of those targeted are women or girls. AI's ability to produce convincing fabricated content rapidly and anonymously creates new threats to women's integrity and safety.


Digital gender-based violence is on the rise, and because these technologies have developed so quickly, legislation protecting women and girls has not kept pace. Survivors are often left with limited resources and little legal recourse.

Racial Justice

Artificial intelligence systems frequently reflect and perpetuate systemic racism due to biased data and discriminatory design. These technologies often inherit the structural inequalities present in the societies that create them (ohchr.org). Whether through facial recognition, predictive policing, or automated hiring tools, AI often replicates or even amplifies racial disparities under the guise of objectivity.
Civil rights organizations, including the ACLU, have consistently raised concerns about the unchecked spread of racially biased AI. The ACLU has warned that without enforceable protections, AI will continue to produce discriminatory outcomes in critical sectors such as policing, employment, housing, and healthcare.

LGBTQIA+ rights

AI technologies often reproduce and reinforce cis/heteronormative norms, perpetuating the systemic discrimination faced by LGBTQIA+ individuals. Generative AI tools trained on stereotyped or narrow datasets tend to misrepresent queer identities, for example by producing hypersexualized or whitewashed portrayals of transgender people or by reinforcing tropes about gay men and lesbians. This limits the diversity of queer representation and erases marginalized subgroups. In healthcare and social services settings, this bias can exclude or harm LGBTQIA+ people, particularly transgender individuals, by misgendering users, flagging gender-diverse bodies as anomalies, or reinforcing barriers to affirming care.

Constitutional Equality

AI technologies pose a threat to constitutional equality by reinforcing systemic biases that undermine the promise of equal protection under the law. These systems mirror and magnify existing discrimination based on race, gender, and socioeconomic status. The Equal Rights Amendment (ERA), first introduced in 1923 to enshrine gender equality in the Constitution, makes no mention of AI or digital technologies.


In the News

Resources