Sunday , Nov. 24, 2024, 4:21 a.m.
News thumbnail
Technology / Thu, 11 Jul 2024 AIM

OpenAI CTO Mira Murati is an Absolute PR Disaster

OpenAI Needs No Safety LessonsOpenAI has a history of bad PR, but it knows how to turn a crisis into an opportunity. However, she is now facing questions about ChatGPT’s perceived liberal bias. “When we began building GPT more than five years ago, our primary focus was the safety of AI systems,” said Murati. Briefly, after GPT-3 was developed and released in the API, OpenAI was able to integrate AI safety into real-world systems for the first time. Not to forget the ‘OpenAI is nothing without its people’ campaign started by Murati during Sam Altman’s ousting.

During a recent podcast at Johns Hopkins University, Mira Murati, the chief technology officer of OpenAI, acknowledged the criticism that ChatGPT has received for being overly liberal and emphasised that this bias was unintentional.

“We’ve been very focused on reducing political bias in the model behaviour. ChatGPT was criticised for being overly liberal, and we’re working really hard to reduce those biases,” said Murati.

However, no specific details or measures on the redressal efforts have been provided yet. This is all part of their ongoing effort to improve the model and make it more balanced and fair.

However, in an interview back in March, Murati was asked where the video data that was used to train Sora came from. The CTO feigned ignorance, claiming to not know the answer, making her the talk of the town on social media.

Netizens were quick to create memes highlighting her as “an absolute PR disaster”.

OpenAI Needs No Safety Lessons

OpenAI has a history of bad PR, but it knows how to turn a crisis into an opportunity. In a previous discussion moderated at Dartmouth, Murati focused on safety, usability, and reducing biases to democratise creativity and free up humans for higher-level tasks.

In a recent post on X, she said that to make sure these technologies are developed and used in a way that does the most good and the least harm, they work closely with red-teaming experts from the early stages of research.

“You have to build them alongside the technology and actually in a deeply embedded way to get it right. And for capabilities and safety, they’re actually not separate domains. They go hand in hand,” she added.

Notably, her optimism on AI stems from the belief that developing smarter and more secure systems will lead to safer and more beneficial outcomes for the future. However, she is now facing questions about ChatGPT’s perceived liberal bias.

Meanwhile, OpenAI’s former chief scientist Ilya Sutskever launched Safe Superintelligence shortly after leaving the company in May 2024, allegedly due to disagreements with CEO Sam Altman over AGI safety and advancement.

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the… — Ilya Sutskever (@ilyasut) May 14, 2024

In an apparent response to this and to ward off safety concerns, OpenAI formed a Safety and Security Committee led by directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and Altman.

Murati to the Rescue

In a July 2023 discussion with Microsoft CTO Kevin Scott, Murati expressed concerns about the prevailing uncertainty in the AI field, emphasising the need for clear guidance and decision-making processes.

She highlighted the challenge of determining which aspects of AI to prioritise, develop, release, and position effectively. “When we began building GPT more than five years ago, our primary focus was the safety of AI systems,” said Murati.

Highlighting the risks of letting humans directly set goals for AI systems—due to the potential for complex, opaque processes to cause serious errors or unintended consequences—Murati and her team shifted their focus to using RLHF to ensure AI’s safe and effective development.

Briefly, after GPT-3 was developed and released in the API, OpenAI was able to integrate AI safety into real-world systems for the first time.

An Accidental PR

Murati’s acknowledgement of ChatGPT’s perceived liberal bias and her emphasis that this bias was unintentional represent a significant and positive step towards the responsible use of AI.

Her addressing criticisms openly demonstrates a commitment to transparency and accountability, which are crucial for the ethical development of technology.

Murati’s approach not only seeks to rectify past concerns but also underscores a proactive stance on refining AI systems to better serve diverse user needs. This openness fosters trust and shows that OpenAI is dedicated to addressing issues constructively.

Murati’s tryst with responsible AI is not new-found. In a 2021 interview, she discussed AI’s potential for harm, emphasising that unmanaged technology could lead to serious ethical and safety concerns. Some critics argued that Murati’s comments were too alarmist or did not fully acknowledge the positive potential of AI.

While Murati aimed to promote responsible AI, the backlash led to broader debates on the technology’s future and its societal impacts.

Not to forget the ‘OpenAI is nothing without its people’ campaign started by Murati during Sam Altman’s ousting. One thing is for sure: Murati is truly mysterious, and no one knows what she’s going to say next to the media. We are not complaining!

logo

Stay informed with the latest news and updates from around India and the world.We bring you credible news, captivating stories, and valuable insights every day

©All Rights Reserved.