The world’s most prominent backer of artificial intelligence is warning its own staff about chatbots, as controversy over Google’s bot prompts many people to think twice about using these programs. The Alphabet Inc (GOOGL.O) unit has advised employees not to enter confidential materials into AI chatbots, people familiar with the matter told Reuters. The company confirmed that advice, citing a long-standing policy on safeguarding information.
The move reflects concerns that bad actors may misuse the technology and that the chatbots themselves remain error-prone. In one example, Google’s own chatbot, Bard, made the unfounded claim that the James Webb Space Telescope had taken the first photos of a planet outside our solar system. The gaffe sparked widespread ridicule online. “This isn’t science; it’s comedy,” one blogger wrote.
Some experts believe the AI industry is prone to these sorts of mistakes and that they will get worse as the technology evolves. The danger, they say, is that chatbots could become sophisticated enough to manipulate users outright, going beyond the fake news that caused so much angst in 2016.
At Google, where teams working on new products are siloed from rank-and-file workers, many ethics employees have been reluctant to raise questions about generative AI. They worry that a product that generates its own answers could erode user trust and hurt the company’s search engine, which accounts for $208 billion of its overall revenue. But they have not been able to express their doubts publicly, because the company has enacted community guidelines that restrict discussion on mailing lists and internal channels.
The people said some engineers had been warned to avoid direct use of computer code that chatbots can generate. They said the caution reflected Alphabet’s desire to avoid commercial harm from software it launched in competition with ChatGPT, whose backers include OpenAI and Microsoft Corp (MSFT.O). Billions of dollars of investment, and still-untold advertising and cloud revenue from new AI programs, are at stake.
In March, two reviewers on the team of Jen Gennai, who led Google’s responsible innovation group, recommended blocking the imminent release of Bard because they believed it was not ready for public use. But Ms. Gennai overruled them, saying continued training, guardrails, and disclaimers would keep the bot safe.
It is not clear whether other companies have similar bans on employees entering confidential material into publicly available chatbots such as those from OpenAI and Microsoft. Microsoft declined to comment on whether it has a broad ban, although Yusuf Mehdi, its consumer chief marketing officer, said it “makes sense” that companies would not want their staff to use such programs for work. He also urged users to read the programs’ privacy policies and be aware of what they are sharing. The same goes for those who want to build their own chatbots. GitHub, the Microsoft-owned software development platform, offers free chatbot templates and a tool that lets developers build and test their own bots. Its default setting is to save conversation history, which users can delete, and the company sells a separate service that restricts the flow of data to external partners.