Microsoft is taking steps to restrict access to its artificial intelligence facial recognition tools. On Tuesday, the company released a 27-page report titled “Responsible AI Standard” and announced that Video Indexer, Azure Face API and Computer Vision will now offer only limited access to facial recognition. In addition to bringing these programs into compliance with the newly set standard, the company will also take steps to improve its text-to-speech program, Azure’s Custom Neural Voice.
This restriction of facial recognition AI comes after studies determined that the technology disproportionately misidentifies women and people with darker skin tones, with one Harvard study finding a 34 percent higher error rate for darker-skinned women than for lighter-skinned men. And with a Georgetown Law study estimating that half of all Americans appear in a law enforcement facial recognition network, it is no surprise that many are concerned about the technology’s uneven accuracy. Because facial recognition AI is often used in criminal identification and surveillance, such misidentification can cause serious problems for both law enforcement and American citizens.
Microsoft is not alone in its effort to limit this technology. It joins companies like Facebook, Google, and Amazon, which have restricted, limited, or shut down their own facial recognition and emotion-reading programs. That said, Microsoft is not shutting down or restricting all of its artificial intelligence systems; it will continue to use internal programs for purposes such as accessibility. And while it may be limiting its AI tech for privacy and security reasons, the company will allow customers to apply for approval to use its facial recognition in services such as face-scan website logins.