AI safety tools can help mitigate bias in algorithms

As AI advances, researchers are calling for technology that can foster trust in AI-powered systems. A KPMG survey spanning five countries, including the U.S., Canada, Germany, and Australia, found that more than a third of respondents are hesitant to trust AI systems. A Pega report found that only 25% of respondents would trust an AI system's decision about whether they qualify for a loan.

This concern has driven the creation of software that tries to keep AI systems from making risky decisions. Some of it focuses on reinforcement learning, in which an AI is progressively pushed toward goals by rewards; this approach underpins self-driving cars as well as drug discovery systems. Other software focuses on fairness, a quality that is difficult to achieve in AI systems, mostly because of biases in algorithms or datasets.

OpenAI and Alphabet's DeepMind, among others, have built environments in which safe AI systems can be trained for different applications, and more are regularly released into open source, sustaining the study and application of safe or constrained AI.

Safety tools

Safety tools for AI training aim to prevent systems from committing dangerous behavior that could lead to errors. They often rely on techniques such as constrained reinforcement learning, which implements cost functions the AI must learn to keep within limits over time. Constrained systems find tradeoffs that achieve specific outcomes: an unconstrained driverless car, for example, might accept occasional collisions so long as it completes trips, whereas a constrained one learns to treat collisions as a cost to be avoided.

Safety tools can also encourage AI to explore different states through hypothetical behavior. They might use a generative system, for example, to predict behavior from data such as expert demonstrations or random trajectories. With oversight from a human supervisor, an AI can learn the safest behaviors while still maximizing its reward.

Other safety tools are frameworks for mitigating bias while training AI models. Google's MinDiff aims to infuse fairness into classification, the process of sorting data into categories. Because demographic data is often scarce and remediation can involve accuracy tradeoffs, classification can be biased against groups that are underrepresented in model training datasets.

Google also released ML-fairness-gym, a set of components for evaluating algorithmic fairness in simulated social environments. The company offers other model debiasing and fairness tools as well, such as the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework, and SMACTR, an accountability framework that adds a layer of quality assurance for businesses deploying AI models.

Microsoft offers Fairlearn, which addresses two kinds of harms: allocation harms and quality-of-service harms. Allocation harms occur when AI systems extend or withhold opportunities, resources, or information, for example in school admissions, hiring, or lending. Quality-of-service harms concern whether a system works as well for one person as it does for another.

Microsoft says that professional services firm Ernst & Young used Fairlearn to assess the fairness of model outputs with respect to sex. The toolkit revealed a 15.3% difference in positive loan decisions between males and females. Ernst & Young's modeling team then developed and trained multiple remediated models and visualized the common tradeoff between fairness and model accuracy.
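Gaps like that can be measured with Fairlearn's metrics API. The sketch below is only illustrative and is not Ernst & Young's analysis: the loan data, column names, and model are hypothetical placeholders, while MetricFrame and selection_rate are part of Fairlearn's documented interface.

```python
# Illustrative sketch: measuring a gap in positive loan decisions by sex
# with Fairlearn. The dataset, features, and model are placeholders; only
# the MetricFrame / selection_rate pattern comes from the library itself.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical loan data: two features, a binary label, and a sensitive attribute.
df = pd.DataFrame({
    "income":   [35, 62, 48, 90, 27, 55, 73, 41],
    "debt":     [10,  5, 20,  8, 15,  9,  4, 18],
    "sex":      ["F", "M", "F", "M", "F", "M", "M", "F"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 0],
})

X, y, sex = df[["income", "debt"]], df["approved"], df["sex"]
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# selection_rate = fraction of positive (approved) predictions per group.
frame = MetricFrame(metrics=selection_rate, y_true=y, y_pred=pred,
                    sensitive_features=sex)
print(frame.by_group)      # approval rate for each sex
print(frame.difference())  # gap between groups (EY reported a 15.3% gap)
```

Fairlearn also ships mitigation algorithms, such as ExponentiatedGradient in fairlearn.reductions, that retrain a model subject to fairness constraints; that is the kind of remediation step a team would turn to after uncovering a gap like the one above.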
LinkedIn recently released the LinkedIn Fairness Toolkit (LiFT), software for measuring fairness in AI and machine learning workflows. According to the company, LiFT can be used during training and scoring to detect differences in models' performance across groups and to evaluate fairness concepts. LinkedIn says it has used LiFT internally to assess the fairness of training datasets before models are trained on them, and it intends to expand the number of pipelines that use LiFT to measure and mitigate bias.

The last safety tool on the list is IBM's AI Fairness 360 toolkit, which contains tutorials, code, and a collection of algorithms for detecting bias in models. The toolkit suggests adjustments, such as algorithmic tweaks and counterbalancing data, to lessen that bias's impact. It can also explain which factors affected a machine learning model's decision, along with the model's accuracy, fairness, lineage, and overall performance. (A brief usage sketch of the toolkit appears at the end of this article.)

The latest additions to the scene are a tool and a dataset for detecting demographic bias in speech and voice recognition apps. The Artie Bias Corpus, a collection of audio files and their transcriptions, helps identify and reduce the effect of accents on voice recognition systems. Audit AI, from the AI startup Pymetrics, was also released recently; it identifies whether a particular statistic or trait is being favored or disadvantaged by an algorithm.

Take the first step in the right direction

Safety tools are not all created equal. Some are poorly used or poorly documented, and there is a limit to how much potential harm they can correct. Still, they can help build trust among internal and external stakeholders.

A Capgemini study found that customers and employees will reward companies that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, while companies that don't approach the issue carefully face reputational risk and damage to their bottom lines.

According to a report by the Center for the Governance of AI, 82% of Americans believe that AI must be managed carefully, a result similar to that of European Union respondents. Businesses, in other words, have a clear mandate to use every tool necessary to ensure the fair and responsible deployment of AI.
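Here is the minimal sketch of AI Fairness 360 promised above. It checks a toy dataset for an approval-rate gap between groups and applies the toolkit's Reweighing algorithm to reduce it. The data and column names are hypothetical; BinaryLabelDataset, BinaryLabelDatasetMetric, and Reweighing are real parts of the aif360 package, though exact arguments may vary by version.

```python
# Illustrative sketch of IBM's AI Fairness 360: measure bias in a toy loan
# dataset, then reweigh examples to reduce it. The data and column names
# are hypothetical; only the AIF360 classes come from the toolkit.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: 'sex' is the protected attribute (1 = male, 0 = female),
# 'approved' is the favorable label (1 = loan approved).
df = pd.DataFrame({
    "income":   [35, 62, 48, 90, 27, 55, 73, 41],
    "sex":      [0, 1, 0, 1, 0, 1, 1, 0],
    "approved": [0, 1, 1, 1, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["approved"],
                             protected_attribute_names=["sex"])

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Statistical parity difference: approval-rate gap between the two groups.
metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=privileged,
                                  unprivileged_groups=unprivileged)
print("Before:", metric.statistical_parity_difference())

# Reweighing assigns instance weights that balance outcomes across groups.
reweighed = Reweighing(unprivileged_groups=unprivileged,
                       privileged_groups=privileged).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(reweighed,
                                        privileged_groups=privileged,
                                        unprivileged_groups=unprivileged)
print("After: ", metric_after.statistical_parity_difference())
```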