Dubbed the fourth industrial revolution, the advance of artificial intelligence and machine learning brings interesting discussion to the table. Because AI is so broad and touches so many industries, we find ourselves asking thorny questions such as "Should we legalize predictive AI policing?" or "How do we iron out biases from algorithms that determine job promotions?"
With these questions arising, the key one that remains unanswered surrounds ethics. How do we ensure that AI technologies are ethically designed?
To answer this question, there are essentially four aspects that dictate the result: the dilemma, the impact, adoption, and institutionalization. Together, these components shape what is considered ethical: they show who is on board, how everyone supports the ethics, how we recognize them, and, most importantly, why we trust them.
Recognizing the dilemmas that AI can pose is the first building block of developing an ethics guideline. It wasn't too long ago that the automotive industry faced global backlash over life-and-death decisions made by self-driving cars. The world learned that moral choices are not universal and that the outcomes of difficult situations fall into the hands of those who built the machine. As autonomous vehicles will be part of the future of the automotive industry, it becomes necessary for governments to address the potential impacts of these vehicles on the road. As an example, NCSL provides real-time data about state autonomous vehicle legislation in the USA.
This same concept can be applied to artificial intelligence. Looking at it from a larger scope, the main problem with AI lies in its thinking and learning capabilities. Machines do not think like humans because they lack personal judgment. A machine is built on a program and thinks only in the way it is told to think. The challenge with supervised, semi-supervised, or unsupervised learning is deciding what data to include or exclude, understanding how the algorithms we write shape outcomes, and determining what guide we follow to train on the data and render a judgment. See the dilemma?
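The include-or-exclude dilemma can be made concrete with a toy sketch. Everything here is hypothetical (the feature names, weights, and applicant are invented for illustration): a simple scoring function evaluates the same applicant with and without a feature that could act as a proxy for a protected attribute, and the designers' data choice, not the applicant, changes the outcome.

```python
# Hypothetical illustration of the data-inclusion dilemma.
# The feature names and weights below are invented, not from any real system.

def score(applicant, features):
    """Sum the weights of whichever features the designers chose to include."""
    weights = {"years_experience": 2, "referral": 1, "zip_code_risk": -3}
    return sum(weights[f] * applicant[f] for f in features)

applicant = {"years_experience": 3, "referral": 1, "zip_code_risk": 1}

# Same applicant, two design choices about what data to include:
with_proxy = score(applicant, ["years_experience", "referral", "zip_code_risk"])
without_proxy = score(applicant, ["years_experience", "referral"])

print(with_proxy)     # 4  (6 + 1 - 3)
print(without_proxy)  # 7  (6 + 1)
```

The point is not the arithmetic but the asymmetry: the machine faithfully executes either choice, and the ethical judgment about which features belong in the model rests entirely with the people who built it.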
After recognizing the dilemma, we need to focus on the impact AI ethics can have on businesses, consumers, organizations, and governing bodies. Once guidelines are published, existing AI that is not compliant with those standards will have to be reconfigured, and that creates real friction.
We have learned a lot through past industrial revolutions about basic human rights and social values. While companies can set up committees to build rule-based algorithmic models, doing so usually protects the organization itself and may still allow unethical conduct. It is essential to have a discussion about how AI ethics should be applied in law for the healthy functioning of society.
Take a look at the EU ethics guidelines on trustworthy AI that rolled out this year. The guidelines include components such as privacy, fairness, governance, and environmental well-being that companies such as Amazon, Uber, and Google can't comply with right off the bat. Those companies will likely feel the impact.
The third component is guideline adoption by organizations to win trust and loyalty from the public. This will build a strong foundation for the future of machine learning. According to a recent study by Capgemini, 62% of consumers said they would place higher trust in a company whose AI interactions they perceived as ethical. Many consumers understand the potential consequences of using AI, such as giving up privacy, but when organizations adopt an ethical guideline, consumers feel safer from exposure.
These days, companies can use AI-based tools with early-warning and risk-detection capabilities to better understand their consumers. Even Trustpilot and its fairly modest AI tools require constant debate about what data to include and exclude, how to train on the data, and what information to provide to customers or consumers.
Then follows the actual integration of ethics through institutionalization. This comes in the form of institutions (governments, NGOs, and academia) delivering ethics from the top down to ensure all guideline requirements are met.
For example, China recently approved an ethics advisory group for human gene editing in an effort to regulate the biological genetic industry. The World Health Organization also put forth a plan to create a human gene registry, where it can track research to make sure scientists are performing ethically. Involving institutions legitimizes ethics on a global scale and creates a straight path for future AI developers to practice.
The best way to push institutionalization can actually be drawn from past examples of new technologies such as gene editing, social media, and bots. First, the industry needs to create a central body to govern ethics and evaluate AI integrations, one that also has the power to hold violating companies accountable. Second, companies must establish review boards to ensure all AI follows ethics protocols internally. Lastly, academic training programs should teach students the guidelines of AI and how to develop machine learning without violations.
As AI and its technologies continue to evolve at an astonishing rate, and as we find new and innovative uses for them, it's more important than ever for organizations to embed the principles and attributes of trust into their data and AI ecosystems from the very start. Those who embrace leading practices in ethical design and governance will be better equipped to mitigate risks, safeguard against harmful outcomes, and, most importantly, sustain the essential confidence their stakeholders seek while using AI to serve consumers.

Ramin Vatanparast is Chief Product Officer at Trustpilot.

Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they're always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact email@example.com.