SEATTLE – As Microsoft Corp, OpenAI, Google and other technology companies accelerate the release of chatbots and other artificial intelligence-based tools to the public, a US senator is demanding answers about how they intend to protect kids from harm.
Michael Bennet, a Democrat from Colorado, wrote to the chief executive officers of OpenAI, Microsoft, Alphabet Inc’s Google, Facebook parent Meta Platforms Inc and Snapchat owner Snap Inc – all of which are building and distributing AI technology that lets users ask questions, get advice and generate text in various forms. Calling some of the output of these software programmes “alarming”, Bennet asked the CEOs to respond by April 28 to questions about how the companies assess, mitigate and audit their AI services and the models behind them, focusing mainly on how they are keeping young users safe.
Chatbots, once relegated to barely useful customer-service web applications, have become an increasingly competitive area since OpenAI’s ChatGPT went viral in November. Microsoft is building OpenAI’s technology into its Bing internet search engine; Google is offering a test version of a chatbot-based search tool called Bard; and Snap has its own ChatGPT-powered bot called My AI. Bennet detailed several examples of the programmes offering inappropriate responses, including a report that showed Snap’s bot providing disturbing advice to researchers posing as children.
“Few recent technologies have captured the public’s attention like generative AI. The technology is a testament to American innovation, and we should welcome its potential benefits to our economy and society,” Bennet wrote in the letter, according to a copy provided by the senator’s office. “But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk, and mitigate harm.”
He also asked the companies to tell him about data collection and retention policies related to content put into chatbots by younger users, and how many staff each company has working on AI safety, particularly those focused on younger users.