Microsoft lays off its entire ethical AI team amid AI boom

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product it can. And as of this month, the company will continue its AI rampage without a dedicated internal team ensuring those AI features meet Microsoft’s ethical standards, according to a Monday evening report from Platformer.

Microsoft scrapped its entire ethics and society team within the company’s AI division as part of ongoing layoffs set to affect 10,000 employees in total, per Platformer. The company maintains its Office of Responsible AI, which sets the broad, Microsoft-wide principles meant to guide corporate AI decision-making. But the ethics and society team, which bridged the gap between policy and products, is reportedly gone.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:

Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so have our investments, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people on our product teams dedicated to ensuring we uphold our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

The company reportedly shared a slightly different version of the same statement with Platformer earlier:

Microsoft is committed to the safe and responsible development of AI products and experiences… Over the past six years, we have increased the number of people on our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice… We appreciate the trailblazing work the Ethics and Society team did to help us on our ongoing responsible AI journey.

Note that in this earlier version, Microsoft effectively confirms that the Ethics and Society team no longer exists. The company also previously attributed its staffing increases to the Office of Responsible AI rather than, more generally, to people “dedicated to ensuring we uphold our AI principles.”

But despite Microsoft’s assurances, former employees told Platformer that the Ethics and Society team played a key role in translating big ideas from the Office of Responsible AI into actionable changes at the product development level.

From the report:

“People would look at the principles coming out of the Office of Responsible AI and say, ‘I don’t know how this applies,’” one former employee said. “Our job was to show them, and to create rules in areas where there were none.”

At the time of its elimination, the AI Ethics and Society team reportedly consisted of just seven people. It had once been far more robust: at its largest, the group numbered about 30 workers across disciplines (including philosophers). In October 2022, a reorganization shrank the team to seven; most members weren’t laid off then, but were transferred to other parts of the company.

The company’s VP of AI told the remaining ethics and society staff that the team wasn’t “disappearing” but rather “evolving,” per Platformer. Just five months later, however, the team is gone. Staff were reportedly notified in a Zoom meeting on March 6.

It’s a worrying corporate decision because Microsoft (along with Google and just about every other tech company under the sun) is pushing to rapidly expand its artificial intelligence technology into every product area it can. Just last month, the company announced plans to cram its Bing AI into its word processors and maybe even Outlook emails.

Meanwhile, Microsoft has already made some alarming mistakes in its rush to bring its ChatGPT-based AI text generator to the search market. The company’s first demos of its chatbot were riddled with financial errors and faulty advice (Google fumbled its demo, too). Then there was the disturbing “personality” of Bing’s alter ego, Sydney, which the company had to rein in after it freaked users out.

Granted, even while the Ethics and Society team was still around, Microsoft executives may not have listened to its concerns much anyway. Sure, the company eventually backed away from its AI-powered emotion recognition technology. But former employees told Platformer that Microsoft partially ignored a memo raising concerns about Bing Image Creator (not yet available in the U.S.) and the ways it could steal work from and harm artists.

The Ethics and Society team reportedly offered a list of mitigation strategies, including blocking users from entering living artists’ names as prompts and creating a marketplace to support artists whose works turn up in searches. None of these suggestions made it into the AI tool, sources told Platformer. The team’s concerns, however, have certainly borne out: multiple U.S. lawsuits against other companies peddling AI image generators have emerged in recent months.

Updated 03/14/2023 1:21 PM ET: This post was updated with a statement from Microsoft.
