Sunday , Sept. 29, 2024, 4:49 a.m.
News thumbnail
Technology / Sat, 04 May 2024 The Financial Express

What is ‘responsible AI’ and what is Microsoft doing to ensure greater ‘transparency’: Full details

Amidst this as Microsoft releases its Responsible AI Transparency Report, which sheds light on the steps taken by Microsoft to launch responsible AI platforms. This is believed to ensure that these systems are protected from adversarial attacks and follows Microsoft’s responsible AI principles. This will include probing AI systems for vulnerabilities and potential weak areas, such as the generation of harmful or misleading content. Microsoft will also be including the Azure OpenAI service which is expected to play a key role in Microsoft’s risk management strategy. Early reports suggest that Microsoft’s leadership in AI governance got recognised in the 2023 IDC MarketScape for AI Governance Platforms.

As the four big tech companies adopt artificial intelligence, experts suggest depending on AI too much can lead to several negative impacts. Amidst this as Microsoft releases its Responsible AI Transparency Report, which sheds light on the steps taken by Microsoft to launch responsible AI platforms. The report is expected to include the actions taken by Microsoft to safely deploy AI products in 2023.

In order to initiate a risk assessment management system, Microsoft plans to develop Counterfit, an open-source automation tool for security testing of AI systems. Counterfit will allow organisations to evaluate the security of their AI applications effectively. This is believed to ensure that these systems are protected from adversarial attacks and follows Microsoft’s responsible AI principles.

What to expect from ‘Counterfit’?

Also Read Tata Electronics prepares ground for high-end chips Mobile retailers urge CCI to cancel OnePlus, POCO licences Close call: Apple Watch helps Delhi woman escape life-threatening heart issue; Tim Cook responds iPad gets its biggest price cut in India ahead of Apple May 7 Let Loose special event — here are all the details

As Microsoft strengthens its AI risk management, it has also adopted AI red teaming as a key component of its security strategy. This will include probing AI systems for vulnerabilities and potential weak areas, such as the generation of harmful or misleading content.

From what it is understood, Microsoft plans to ensure that their AI systems are not only technically sound but also safe and fair, adhering to their established AI principles. By conducting these hard tests before deployment, Microsoft aims to improve its risk management area.

Microsoft will also be including the Azure OpenAI service which is expected to play a key role in Microsoft’s risk management strategy. This service provides organisations with the tools to identify, assess, and mitigate potential risks, ensuring the security and integrity of AI deployments. In addition to this Azure’s risk management resources will help businesses in navigating the challenges posed by AI technologies.

The collaboration ahead!

Early reports suggest that Microsoft’s leadership in AI governance got recognised in the 2023 IDC MarketScape for AI Governance Platforms. Industry experts believe that Microsoft’s initiative to implement governance frameworks can ensure fairness, transparency, and ethical use of AI technologies.

Also Read Microsoft is testing ads inside Windows 11 Start menu: Check full details

By integrating advanced tools such as Counterfit and employing strategic risk management services such as Azure, and conducting thorough red teaming exercises, Microsoft will be able to protect its own AI systems and even contribute to the broader goal of bringing responsible AI innovation.

Follow FE Tech Bytes on Twitter, Instagram, LinkedIn, Facebook

logo

Stay informed with the latest news and updates from around India and the world.We bring you credible news, captivating stories, and valuable insights every day

©All Rights Reserved.