World / Sat, 01 Jun 2024 / Business Standard

BJP slams OpenAI for late report on 'dangerous threat' to Lok Sabha polls


A campaign nicknamed “Zero Zeno”, run by the Israeli political campaign management firm STOIC, tried to influence the ongoing Lok Sabha elections in India, according to OpenAI, the creator of ChatGPT.

Responding to OpenAI's report, the ruling Bharatiya Janata Party (BJP) labelled the operation a “dangerous threat” to democracy, saying OpenAI should have informed the public when the threat was first detected in May.

“It is absolutely clear and obvious that BJP was and is the target of influence operations, misinformation, and foreign interference, being done by and/or on behalf of some Indian political parties,” said Minister of State for Electronics and IT Rajeev Chandrasekhar on X (formerly Twitter).

According to OpenAI’s report, the threat was detected in May, although the exact date remains unknown. Four of the seven phases of the Lok Sabha elections were held during that month.

“This is a very dangerous threat to our democracy. It is clear vested interests in India and outside are driving this and it needs to be deeply scrutinised, investigated, and exposed," the minister said.

"My view is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.


OpenAI’s report on deceptive uses of AI

OpenAI stated it acted within 24 hours to disrupt the “deceptive” use of artificial intelligence (AI) in this covert operation. The threat actors leveraged OpenAI’s language models to generate comments, articles, and social media profiles that criticised the ruling BJP and praised the Congress party.

“In May, the network began generating comments that focused on India, criticised the ruling BJP party, and praised the Opposition Congress party. We disrupted some activity focused on the Indian elections less than 24 hours after it began,” OpenAI said.

OpenAI banned a cluster of accounts operated from Israel that were used to generate and edit content across various platforms, including X (formerly Twitter), Facebook, Instagram, websites, and YouTube.

“This operation targeted audiences in Canada, the United States, and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content,” the company stated.

Other targets of deceptive AI practices

OpenAI highlighted that the content posted by these operations covered a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments, along with the ongoing Indian elections.

“We’ve terminated accounts linked to covert influence operations; no significant audience increase due to our services,” OpenAI noted.

OpenAI disclosed that it has disrupted five covert operations in the last three months that sought to use its models for deceptive activities across the internet. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment,” the company concluded.
