
Regulating AI Before It's Too Dangerous



The AI-driven amplification of lies, misinformation, and fake news is a major concern that could have catastrophic consequences if left unregulated. The spread of lies and misinformation has already contributed to the Rohingya massacre, to preventable deaths from COVID-19 vaccine misinformation, and to the weakening of democracy in the US. AI systems that generate realistic images and text, with no controls on who generates what, could make it far easier to produce fake news, incitements to violence, extremist propaganda, and non-consensual fake nude imagery.

Regulations are needed to address the most critical problems, including how specific types of AI, such as facial recognition technology, are trained and used. Social media companies should be held responsible for the content posted on their platforms, and existing laws against monopolistic practices should be enforced. Companies should be required to remove all child abuse content, and interpretable models should be used for high-stakes decisions. Finally, any new and potentially dangerous technology should be regulated before it causes harm at scale, and a government agency for AI should be created to ensure that AI technology is used safely and responsibly.


Author: CYNTHIA RUDIN

Originally published in The Hill on Feb 8, 2023


