Activists and academics have long raised concerns that facial analysis software claiming to identify a person's age, gender and emotional state can be biased and should not be sold.

Microsoft said on Tuesday that it would remove those features from its artificial intelligence service for face detection. They will stop being available to new users this week and will be phased out for existing users over the course of a year.

The changes are part of a broader push by Microsoft for tighter controls on its artificial intelligence products. After a two-year review, a team at Microsoft developed a "Responsible AI Standard," a document that sets out requirements for A.I. systems to ensure they do not have a harmful impact on society.

Among the requirements is ensuring that systems provide valid solutions for the problems they are designed to solve.

Technology that could be used to make important decisions about a person's access to employment, education, health care, financial services or a life opportunity is subject to review by a team led by Microsoft's chief responsible A.I. officer.

Microsoft had particular concerns about the emotion tool, which labeled someone's expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

There is a great deal of cultural and geographic variation in the way people express themselves, Microsoft's chief responsible A.I. officer said, and there are larger questions about whether facial expression is a reliable indicator of a person's internal emotional state.

The age and gender analysis tools being eliminated, along with other tools to detect facial attributes such as hair and smile, could be useful for blind or low-vision people, but the company decided it was problematic to make the profiling tools generally available to the public.

In particular, she said, the system's gender classification was not in line with the company's values.

New controls will also be put on Microsoft's face recognition feature, which can be used to perform identity checks. The ride-hailing company Uber, for example, uses the software in its app to verify that a driver's face matches the ID on file for that driver's account. Software developers will need to apply for access to Microsoft's facial recognition tool and explain how they plan to use it.

Developers will also need to explain how they plan to use other potentially abusive A.I. systems, such as Custom Neural Voice. That service can generate a human voice print from a sample of someone's speech, so that authors, for example, can create synthetic versions of their voice to read their audiobooks in different languages.

Because of the possible misuse of the tool, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks.

Microsoft is taking concrete steps to live up to its A.I. principles, she said, and it will be a long journey.

Image: Microsoft’s campus in Redmond, Wash. The company will put new controls on its face recognition feature, which can be used to perform identity checks or search for a particular person. Credit: Grant Hindsley for The New York Times

Microsoft has run into problems with its A.I. products before. In 2016, it released a chatbot called Tay on Twitter that was designed to learn from users' conversations with it. Microsoft had to take the bot down after it started spouting racist and offensive messages.

Speech-to-text tools were also found to work less well for Black people. Microsoft's system was the best of the bunch, but it misidentified 15 percent of words for white people, compared with 27 percent for Black people.

The company had not understood just how diverse language could be. It hired a sociolinguistics expert from the University of Washington to explain the language varieties it needed to account for, which extend beyond demographics to how people speak in formal and informal settings.

Thinking about race as a determining factor in how someone speaks is misleading, she said. In consultation with the expert, the company learned that a wide range of factors affect linguistic variety.

The journey to fix the speech-to-text disparity helped inform the new standards.

She pointed to Europe's proposed regulations setting rules and limits on the use of artificial intelligence, and said Microsoft hoped its standard would contribute to the discussion about the standards that technology companies should be held to.

There is a vibrant debate about the potential harms of A.I., fueled by mistakes and errors that have real consequences on people's lives. The Dutch tax authorities, for example, wrongly took child care benefits away from needy families after a flawed algorithm penalized people with dual nationality.

Automated face recognition has been particularly controversial. Last year, Facebook shut down its system for identifying people in photos, with a company vice president citing the many concerns about the place of facial recognition technology in society.

Faulty facial recognition matches have led to the wrongful arrests of a number of Black men. And in 2020, at the same time as the Black Lives Matter protests, Amazon and Microsoft issued moratoriums on the use of their facial recognition products by the police in the United States.

Washington and Massachusetts have since mandated judicial oversight of police use of facial recognition tools.

Microsoft had considered making its software available to the police in states with such laws on the books but decided, for now, not to do so. That could change as the legal landscape changes.

Arvind Narayanan, a computer science professor and prominent A.I. expert, said companies might be stepping back from technologies that analyze the face because they were more visceral, as opposed to various other types of A.I.

Some of the systems may also not be worth much commercially for the moment; Microsoft could not say how many people used the facial analysis features. But Mr. Narayanan predicted that companies would be less likely to abandon targeted advertising, because it was a cash cow.