Microsoft limits access to facial recognition tool in AI ethics overhaul

Company also restricts use of custom neural voice technology owing to deepfake concerns

[Photograph: Christophe Morin/IP3/Getty Images. Caption: Microsoft says it intends to keep ‘people and their goals at the centre of system design decisions’.]

Microsoft is overhauling its artificial intelligence ethics policies and will no longer let companies use its facial recognition technology to infer emotion, gender or age, the company has said.

As part of its new “responsible AI standard”, Microsoft says it intends to keep “people and their goals at the centre of system design decisions”. The high-level principles will lead to real changes in practice, the company says, with some features being tweaked and others withdrawn from sale.

Microsoft’s Azure Face service, for instance, is a facial recognition tool used by companies such as Uber as part of their identity verification processes. Now, any company that wants to use the service’s facial recognition features will need to apply for access, including those that have already built them into their products, and demonstrate both that they meet Microsoft’s AI ethics standards and that the features benefit the end user and society.
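To make the gated workflow concrete, here is a minimal sketch of the kind of identity verification call at stake, assuming the Python SDK azure-cognitiveservices-vision-face; the endpoint, key and image URLs are placeholders, and under the new policy the face-matching capability it exercises is available only to approved applicants.

```python
# Sketch of an Azure Face verification call (placeholder credentials).
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"  # placeholder

client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

def verify_same_person(url_a: str, url_b: str) -> bool:
    """Detect one face per image, then ask the service whether they match."""
    face_a = client.face.detect_with_url(url=url_a, return_face_id=True)[0]
    face_b = client.face.detect_with_url(url=url_b, return_face_id=True)[0]
    result = client.face.verify_face_to_face(face_a.face_id, face_b.face_id)
    return result.is_identical  # a confidence score is also returned
```

The detection step is the gatekeeper here: getting a face ID back at all is what now requires the application process described above.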

Even those companies that are granted access will no longer be able to use some of the more controversial features of Azure Face, Microsoft says, and the company will be retiring facial analysis technology that purports to infer emotional states and attributes such as gender or age.
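In SDK terms, the retired capabilities correspond to the attribute-inference options of the detection call; a request like the following, continuing the sketch above under the same assumptions, is the kind of feature that is going away:

```python
# Attribute inference of the kind being retired (continues the sketch above).
from azure.cognitiveservices.vision.face.models import FaceAttributeType

faces = client.face.detect_with_url(
    url="https://example.com/photo.jpg",  # placeholder image
    return_face_attributes=[
        FaceAttributeType.emotion,  # emotional-state inference
        FaceAttributeType.gender,
        FaceAttributeType.age,
    ],
)
```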

“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” said Sarah Bird, a product manager at Microsoft. “In the case of emotion classification specifically, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions’, and the inability to generalise the linkage between facial expression and emotional state across use cases.”

Microsoft is not scrapping emotion recognition entirely – the company will still use it internally, for accessibility tools such as Seeing AI, which attempt to verbally describe the world for users with vision problems.

Similarly, the company has restricted use of its custom neural voice technology, which allows the creation of synthetic voices that sound nearly identical to the original source. “It is … easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners,” said Natasha Crampton, the company’s chief responsible AI officer.

Earlier this year Microsoft began watermarking its synthetic voices, incorporating minor, inaudible fluctuations in the output that meant the company could tell when a recording was made using its technology. “With the advancement of the neural TTS technology, which makes synthetic speech indistinguishable from human voices, comes a risk of harmful deepfakes,” said Microsoft’s Qinying Liao.
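Microsoft has not published how its watermark works, but the general technique the quote describes, embedding a faint key-derived signal and later detecting it by correlation, can be sketched in a few lines. This is a toy illustration under that assumption, not Microsoft’s method; the strength and threshold values are arbitrary.

```python
# Toy spread-spectrum-style audio watermark: embed a faint keyed signal,
# then detect it by correlating the waveform against that same signal.
import numpy as np

STRENGTH = 1e-2  # perturbation amplitude (arbitrary; real marks are subtler)

def keyed_signal(n_samples: int, key: int) -> np.ndarray:
    # Pseudorandom signal reproducible only with the secret key.
    return np.random.default_rng(key).standard_normal(n_samples)

def embed_watermark(audio: np.ndarray, key: int) -> np.ndarray:
    # Add the faint keyed fluctuation to the synthetic speech waveform.
    return audio + STRENGTH * keyed_signal(audio.shape[0], key)

def detect_watermark(audio: np.ndarray, key: int) -> bool:
    # Marked audio correlates with the keyed signal at roughly STRENGTH;
    # unrelated audio correlates near zero.
    mark = keyed_signal(audio.shape[0], key)
    score = float(np.dot(audio, mark)) / audio.shape[0]
    return score > STRENGTH / 2

# Example: a second of unmarked noise fails, its marked copy passes.
audio = np.random.default_rng(0).standard_normal(48_000) * 0.1
assert not detect_watermark(audio, key=1234)
assert detect_watermark(embed_watermark(audio, key=1234), key=1234)
```

A production scheme would also need to survive compression, resampling and added noise, which this sketch ignores.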

Source: The Guardian
