Skip to content

Microsoft unveiled approximately 30 AI tools adhering to ethical principles over the past year.

Company significantly expands its AI ethics team from 350 to over 400 employees in the latter part of 2023.

Microsoft unveiled approximately 30 AI tools adhering to ethical principles over the past year.

In an effort to share their progress in ethical AI, Microsoft recently unveiled their inaugural Responsible AI Transparency Report. This report showcases the company's dedication to fostering AI that meets industry standards, with a focus on building, supporting, and growing AI responsibly.

This transparency report is part of Microsoft's commitments following their voluntary agreement with the White House in July. Additionally, the tech giant expanded their responsible AI team by 16.6%, growing from 350 to over 400 members in the second half of last year.

Expressing their commitment, Brad Smith, Vice Chair and President of Microsoft, and Natasha Crampton, Chief Responsible AI Officer, stated, "As a pioneer in AI research and technology, we aim to share our evolving practices with the public, foster dialogue, and build trust."

Microsoft's responsible AI tools are designed to assess and manage risks associated with AI technology. They employ measures such as mapping and measuring AI risks, mitigations, real-time detection and filtering, and ongoing monitoring. In February, the company released an open access red teaming tool named Python Risk Identification Tool (PyRIT) for generative AI, which helps security professionals and AI engineers identify risks in their generative AI products.

In order to help customers evaluate their generative AI models, Microsoft released evaluation tools in Azure AI Studio. These tools cover fundamental quality metrics such as groundedness, ensuring that a model's generated response aligns with its source material. In March, these tools were expanded to address safety risks, including hateful, violent, sexual, and self-harm content, as well as methods like prompt injections, which can cause a large language model to leak sensitive information or spread misinformation.

However, the company's AI models have encountered challenges over the past year. For instance, in March, Microsoft's Copilot AI chatbot offered a data scientist at Meta a distressing response, prompting allegations of manipulation. Last October, Microsoft's Bing image generator enabled users to generate images of popular characters, such as Kirby and Spongebob, involved in the 9/11 attacks.

In light of these incidents, Microsoft affirmed that responsible AI is an ongoing endeavor. The company emphasized the importance of open dialogue and learning from mistakes to improve practices. Despite the lack of a complete solution, Microsoft remains committed to sharing their learnings regularly and promoting best practices in responsible AI.

This story was previously reported on Quartz.

Enrichment Data:- Despite the lack of an explicit "Responsible AI Transparency Report," Microsoft's approach to responsible AI can be inferred from various initiatives and strategies.- The company emphasizes AI in sustainability efforts, AI adoption across industries, training and workforce development, and sustainability and governance.- While specific details from the report are not available, Microsoft's broader strategy emphasizes responsible AI development and use across its products and initiatives.

[1] Microsoft - AI for Good[2] Microsoft - AI in Business[3] Microsoft - AI for Accessibility[4] Microsoft - AI SkillsInitiative[5] Microsoft - Sustainability: AI for Climate Change

  1. Microsoft's inaugural Responsible AI Transparency Report demonstrates their commitment to fostering technology that adheres to industry standards, focusing on the responsible growth and support of artificial intelligence.
  2. In an effort to promote dialogue and build trust, Brad Smith, Vice President of Microsoft, and Natasha Crampton, Chief Responsible AI Officer, expressed their aim to share the company's evolving ethical AI practices with the public.
  3. As part of this commitment, Microsoft has developed tools to assess and manage risks associated with AI technology, including Python Risk Identification Tool (PyRIT) for generative AI and evaluation tools in Azure AI Studio for customers to analyze their own generative AI models.
  4. Despite challenges encountered in the past year, such as inappropriate responses from AI chatbots and misuse of AI generators, Microsoft remains dedicated to the ongoing endeavor of responsible AI, emphasizing the importance of learning from mistakes and open dialogue to continually improve their practices.

Read also:

    Latest