Why Bill Gates Isn't Worried About AI Models Making Stuff Up


Making AI models more “self-aware” and “conscious” of their biases and factual errors could help prevent them from producing false information in the future, the Microsoft cofounder wrote on his blog, GatesNotes.

Bill Gates
PHOTO: © Depositphotos/ChinaImages

Generative AI tools are here, and so is the endless potential for their misuse. They could fabricate misinformation during elections. They routinely concoct biased and incorrect information. Plus, they make it extremely easy to cheat on essay assignments in school.

Billionaire Bill Gates, who told Forbes earlier this year that he thinks the shift to AI is “every bit as important as the PC,” is worried about all of these challenges. But as articulated in a recent blog post, he believes that AI can be used to tackle the problems it has created.

One of the most well-known issues with large language models is their tendency to “hallucinate,” or produce factually incorrect, biased or harmful information. That’s because the models are trained on vast amounts of data collected from the internet, which is mired in bias and misinformation. But Gates believes it’s possible to build AI tools that are conscious of the faulty data they are trained on and the biased assumptions they make.

“AI models inherit whatever prejudices are baked into the text they’re trained on,” he wrote. “I’m optimistic that, over time, AI models can be taught to distinguish fact from fiction. One approach is to build human values and higher-level reasoning into AI.”

In that vein, he highlighted ChatGPT creator OpenAI’s attempts to make its models more accurate, representative and safe through human feedback. But the viral chatbot remains riddled with biases and inaccuracies even when running on GPT-4, an advanced version of its large language model. AI researchers have found that ChatGPT reinforces gender stereotypes about the jobs of women and men. (Newer chatbots like Claude 2.0, Anthropic’s rival to ChatGPT, are also trying to improve accuracy and mitigate harmful content, but they haven’t been as widely tested by users yet.)

Gates has a reason to talk up ChatGPT: Microsoft, which he cofounded, has invested billions of dollars in OpenAI. In late April, his wealth increased by $2 billion after Microsoft’s earnings call mentioned AI more than 50 times. His net worth currently stands at about $118 billion.

One example Gates discussed in his blog is how hackers and cybercriminals are using generative AI tools to write code or create AI-generated voices to run phone scams. These out-of-control impacts of the tools led some AI leaders and experts, including Apple cofounder Steve Wozniak, Tesla, SpaceX and Twitter CEO Elon Musk and Center for Humane Technology cofounder Tristan Harris, to call for a pause in the deployment of powerful AI tools in an open letter published in late March. Gates pushed back against the letter, stressing that he doesn’t think a pause on development will solve any challenges. “We should not try to temporarily keep people from implementing new developments in AI, as some have proposed,” he wrote.

Instead, he said these consequences are further reason to keep developing advanced AI tools, along with regulations, so that governments and corporations can use AI to detect, restrict and counter its misuse. “Cyber-criminals won’t stop making new tools…The effort to stop them needs to continue at the same pace,” he wrote.

But Gates’ claim that AI tools can be used to combat the deficiencies of other AI tools may not hold up in practice, at least not yet. For instance, while a range of AI detectors and deepfake detectors have launched, not all of them can reliably flag synthetic or manipulated content. Some incorrectly label real images as AI-generated, according to a New York Times report. Still, Gates said generative AI, a nascent technology, needs to be monitored and regulated by government agencies and companies to control its unintended effects on society.

“We’re now in… the age of AI. It’s analogous to uncertain times before speed limits and seat belts,” Gates wrote.

Rashi Shrivastava, Forbes Staff
