MVP versus EVP: Is it time to introduce ethics into the agile startup model?

The rocket-ship trajectory of a startup is well known: get an idea, build a team and slap together a minimum viable product (MVP) that you can get in front of users.

However, artificial intelligence and machine learning are becoming ubiquitous in tech products, and the market is growing more conscious of the ethical implications of replacing humans in the decision-making process.

An MVP creates the powerful feedback loop that drives today's customer-led businesses. Over the past two decades, this lean, agile model has been extremely successful, launching thousands of startups, some of which have grown into billion-dollar companies.

Building products and solutions that work for the majority isn't enough anymore. Over the past several years, a number of artificial intelligence and machine learning-powered products have been killed off because of ethical problems that cropped up downstream, after millions of dollars had been funneled into their development. This risk can be devastating for well-established companies, and it can be fatal for a startup that gets only one chance to bring an idea to market.

That doesn't mean startups need to abandon the lean business model for a more risk-averse alternative. There is a middle ground that introduces ethics into the startup mentality without sacrificing the agility of the lean model, and it starts with the original goal: getting an early-stage proof of concept in front of potential customers.

Instead of developing a minimum viable product, companies should develop and roll out an ethically viable product (EVP) based on responsible artificial intelligence, an approach that takes ethical, moral, legal, cultural, sustainability and socioeconomic factors into account during the development, deployment and use of AI.

This is a good practice for startups, and it should be standard practice for big technology companies as well.

Here are three steps that can be used to develop an EVP.

An ethics officer can lead the charge.

Chief strategy officers, chief investment officers and even chief fun officers are now part of startup culture. A chief ethics officer is just as important. This person can work with different stakeholders to make sure the startup's product aligns with the moral standards set by the company, the market and the public.

They should act as a liaison between the founders, the C-suite, investors and the board of directors on one side and the development team on the other, making sure everyone is asking the right ethical questions in a thoughtful, risk-averse manner.

Machines are trained on historical data. If there is systemic bias in an existing business process, the model will learn that it should keep behaving that way. And if your product is later found to fall short of the market's ethical standards, you can't simply throw out the data and find new data.

These models are already trained. Just as a 40-year-old man cannot undo the influence his parents or siblings had on him, you are stuck with the results. Chief ethics officers need to be aware of any inherent bias in the organization.
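To make that concrete, here is a minimal sketch, using entirely synthetic data and hypothetical feature names, of how a model trained on biased historical decisions simply reproduces the bias:

```python
# Sketch only: synthetic data, hypothetical features. The point is that a model
# trained on biased historical outcomes learns to repeat them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # stand-in for any protected attribute
skill = rng.normal(0.0, 1.0, n)    # the signal we actually care about

# Historical approvals penalized group 1 at the same skill level: systemic bias.
approved = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train on the biased history, with the group attribute available as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# The trained model carries the historical gap forward.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

In this toy example, group 1's predicted approval rate stays well below group 0's even though both groups have identical skill distributions, because the bias lives in the historical labels the model learned from.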

Integrate ethics into the process.

Ethical AI is not just a point-in-time exercise. It is an end-to-end governance framework focused on the risks and controls of an organization's use of AI. This means that ethics should be integrated throughout the development process.

The development team should work with the chief ethics officer to understand the general ethical AI principles that hold across many cultural and geographic contexts. These principles suggest, or at least inspire, how artificial intelligence solutions should behave when faced with moral decisions.

A risk and harm assessment should identify any risk to anyone's physical, emotional or financial well-being. It should also look at the solution's impact on the environment.
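One lightweight way to keep that assessment close to the engineering work is to record it in a structured form that ships alongside each feature. The sketch below is purely illustrative; the field names and the example values are assumptions, not an established framework:

```python
# Illustrative only: a lightweight record of a risk and harm assessment.
# Field names and example values are assumptions, not an established framework.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskHarmAssessment:
    feature: str
    physical_harm: str = "none identified"
    emotional_harm: str = "none identified"
    financial_harm: str = "none identified"
    environmental_impact: str = "none identified"
    mitigations: List[str] = field(default_factory=list)
    sign_off: str = ""  # e.g., the chief ethics officer

assessment = RiskHarmAssessment(
    feature="automated loan pre-screening",
    financial_harm="a wrongful denial could delay access to credit",
    mitigations=["human review of every denial", "quarterly bias audit"],
    sign_off="chief ethics officer",
)
print(assessment)
```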

During the development phase, the team should keep asking how their use of artificial intelligence aligns with the company's values, whether models are treating different people fairly and whether they are respecting people's right to privacy. They should also consider how effective the operating model is at ensuring accountability and quality.
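For the fairness question in particular, even a simple spot-check on a held-out evaluation set can surface problems early. A minimal sketch, assuming you have labels, predictions and a protected attribute available (the column names here are hypothetical):

```python
# Minimal fairness spot-check on a held-out set; column names are hypothetical.
import pandas as pd

eval_df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":     [1, 0, 1, 0, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 1, 0, 0, 1, 0],
})

# Selection rate per group (a demographic-parity view).
selection_rate = eval_df.groupby("group")["predicted"].mean()

# True positive rate per group (an equal-opportunity view).
tpr = eval_df[eval_df["label"] == 1].groupby("group")["predicted"].mean()

report = pd.DataFrame({"selection_rate": selection_rate, "true_positive_rate": tpr})
print(report)
print(f"selection-rate gap: {report['selection_rate'].max() - report['selection_rate'].min():.2f}")
```

If the gaps exceed whatever threshold the team and the chief ethics officer agree on, that is a signal to revisit the data or the model before launch.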

Training data is a critical component of any machine learning model, so the startup should think early about the context and geographic reach in which the model will operate. That understanding allows the team to pick the right data and avoid future bias issues.
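One practical check, sketched below with placeholder figures, is to compare the demographic mix of the training data against the market the product is meant to serve and flag groups that are under-represented:

```python
# Sketch with placeholder figures: compare training-data demographics against
# the target market to catch under-represented groups before training.
import pandas as pd

train_ages = pd.Series(["18-29", "30-44", "30-44", "45-64", "30-44", "18-29"])
training_share = train_ages.value_counts(normalize=True)

# Hypothetical target-market shares; replace with real market research.
target_share = pd.Series({"18-29": 0.25, "30-44": 0.35, "45-64": 0.30, "65+": 0.10})

report = pd.DataFrame({"training_share": training_share, "target_share": target_share}).fillna(0.0)
report["gap"] = report["training_share"] - report["target_share"]
print(report.sort_values("gap"))  # most under-represented groups first
```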

Don't forget about regulatory compliance.

It is only a matter of time before the European Union, the United States or some other legislative body passes consumer protection laws governing the use of artificial intelligence and machine learning. Once one law is passed, those protections are likely to spread around the world.

The EU's General Data Protection Regulation led to a wave of other consumer protections around the world that require companies to prove consent for collecting personal information. Now, people across the political and business spectrum are calling for ethical guidelines around artificial intelligence, and the EU is again leading the way, having released a proposal for a legal framework on AI.

It is important for startups to demonstrate ongoing governance and regulatory compliance before such regulations are imposed on them. Before building the product, perform a quick scan of existing legislation, guidance documents and other relevant guidelines.

Revisit the regulatory landscape again prior to launch, and if possible, have someone on your board of directors or advisory board who is embedded in the deliberations happening around the world, so you are prepared for the regulations to come.

There is no doubt that the benefits of artificial intelligence will be enormous. The ability to automate manual tasks, streamline business processes and improve customer experiences is too great to dismiss. But startups need to be aware of the impact artificial intelligence and machine learning will have on the market and on society at large.

It would be a shame to see a high-performing product killed because ethical concerns weren't uncovered until after it hit the market. Ethics needs to be integrated into the development process from the very beginning, an EVP should be developed, and good governance needs to continue after launch.

As we think about the future of business, we need to keep compassion and the human element at the center of innovation.