Tech News

ChatGPT Is Under Investigation Over False Information Risk

US regulators are investigating OpenAI, the company behind ChatGPT, due to concerns about the risks of false information and harm to consumers.

Artificial intelligence company OpenAI, backed by Microsoft, is under investigation by the Federal Trade Commission (FTC) in the United States regarding the risks posed by its language model, ChatGPT, in generating false information. The FTC has sent a letter to OpenAI requesting information on how the company addresses the potential harm to individuals’ reputations. This inquiry reflects the increasing regulatory scrutiny surrounding AI technology.

ChatGPT provides human-like responses to user queries, transforming the way people search for information online. Competitors in the tech industry are also racing to develop similar AI models, leading to debates on data usage, response accuracy, and potential violations of authors’ rights during the training process.

The FTC’s investigation focuses on OpenAI’s efforts to mitigate the risk of ChatGPT producing false, misleading, disparaging, or harmful statements about real individuals. Additionally, the commission is examining OpenAI’s approach to data privacy and the acquisition of data used to train the AI.

OpenAI’s CEO, Sam Altman, has said that the company spent years on safety research and months making ChatGPT “safer and more aligned” before releasing it.

Altman emphasized their commitment to protecting user privacy and ensuring their systems learn about the world, not private individuals. OpenAI intends to cooperate with the FTC during the investigation.

“We protect user privacy and design our systems to learn about the world, not private individuals,” he said on Twitter.

Altman previously testified before Congress, acknowledging that AI technology can have errors and advocating for regulations and the creation of a new agency to oversee AI safety. He also anticipated significant societal impacts, including job implications, as the uses of AI become clearer.

“I think if this technology goes wrong, it can go quite wrong… we want to be vocal about that,” Mr Altman said at the time. “We want to work with the government to prevent that from happening.”

The FTC’s investigation, still in its preliminary stage, was revealed through a report by the Washington Post. Neither OpenAI nor the FTC has commented on the matter. The FTC, under the leadership of Chair Lina Khan, has taken a prominent role in regulating large tech companies, and Khan has expressed concerns about ChatGPT’s output, such as sensitive information surfacing in responses or defamatory statements being generated.

“We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else,” Ms Khan said.

“We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about,” she added.

OpenAI has faced previous challenges related to these issues, as Italy banned ChatGPT in April over privacy concerns. The service was later reinstated after implementing age verification tools and providing more comprehensive privacy policies.
