Monetizing and protecting an AI-powered virtual identity in today’s world

This article was contributed by the CEO of Neosapience.

There is a revolution underway in content creation. Voice technologies have made enormous leaps in the past few years, opening the door to a plethora of new content experiences, but they also raise plenty of concerns about what the future holds.

Imagine you were known for your distinctive voice and able to make a living from it. Would you lose artistic control if a machine were trained to replicate it? Would your voice end up doing voice-overs for a Russian channel on YouTube without your knowledge? Would you miss out on potential royalties? And what about the person who is looking for a break, or simply a way to earn some extra cash by licensing their voice or likeness digitally?

A voice is more than a compilation of sounds

When you can type a series of words, click a button, and hear your favorite star read them back, it feels like movie magic. We have grown accustomed to characters created with artificial intelligence, and the character you build comes to life in all of its dimensions.

Until now, the experience of virtual actors and virtual identities has often been disappointing. The sound of a voice can betray the fact that an identity is constructed. The same can be said of video actors: they need human-like facial expressions to provide the nuances inherent in a real being, and without those nuances characters fall flat.

As the technology improves, it can capture every characteristic of a person's surface identity: their looks, their voice, their mannerisms, and anything else that makes up what you see and hear of another person. Anyone can use a service like Typecast to find a virtual actor. The key is that the actor gets paid.

There is understandable fear about how likenesses can be co-opted and used without consent. But we have seen this with every new medium that comes onto the scene. Digital music and video content, once thought to rob artists and studios of revenue, have become thriving businesses and new money-makers that are indispensable to today's bottom line. The solutions developed along the way also drove the technology forward.

Preservation of your digital and virtual identity

Each human voice and face has its own unique footprint, composed of tens of thousands of characteristics, which makes it very difficult to replicate. In a world of deepfakes, misrepresentation, and identity theft, a number of technologies can be put to work to prevent the misuse of artificial intelligence.

One example is voice identity, or speaker search. Data scientists and researchers can break down the characteristics of a speaker's voice and determine whether the voice used in a video or audio excerpt was a unique human voice or a blend of many voices converted through text-to-speech technology. These identification capabilities can be built into an app to detect whether text-to-speech technology has been used for something other than its intended purpose, so that the content can be flagged and removed. Think of it as a new type of content monitoring. It won't be long until similar technologies for music and video clips become the norm.
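As a rough illustration of how such a check could work, here is a minimal sketch in Python. It assumes each voice has already been turned into a fixed-length embedding by some speaker-encoder model (the article does not name one; the vectors below are random stand-ins), and it compares a registered voiceprint against a suspect clip with cosine similarity. The 0.75 threshold is purely illustrative and would need tuning against real data.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_speaker(reference: np.ndarray,
                    candidate: np.ndarray,
                    threshold: float = 0.75) -> bool:
    """Flag a clip as matching the registered voice when the embeddings
    are more similar than a tuned threshold (illustrative value here)."""
    return cosine_similarity(reference, candidate) >= threshold

# Random vectors stand in for embeddings produced by a speaker encoder.
rng = np.random.default_rng(0)
registered_voice = rng.normal(size=256)                              # enrolled actor's voiceprint
suspect_clip = registered_voice + rng.normal(scale=0.1, size=256)    # near-copy of that voice
unrelated_clip = rng.normal(size=256)                                # an unrelated voice

print(is_same_speaker(registered_voice, suspect_clip))    # True: likely the same speaker
print(is_same_speaker(registered_voice, unrelated_clip))  # False: likely a different speaker
```

In a real monitoring pipeline, the embeddings would come from a trained speaker-verification model rather than random vectors, and flagged matches would be routed for review and takedown.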

Significant research is also being conducted on deepfake detection: technology that can distinguish a digitally manipulated face in a video from an actual human face. One research team created a system that extracts features at a frame-by-frame level, compares them across frames, and trains a recurrent neural network to classify videos that have been digitally manipulated.
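To make that architecture concrete, here is a hedged sketch of the general CNN-plus-RNN approach, not the research team's actual system. It uses a ResNet-18 backbone to extract per-frame features and an LSTM to classify the whole clip; the backbone choice, layer sizes, and clip length are all assumptions for illustration (and the torchvision `weights=None` argument assumes torchvision 0.13 or later).

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    """Per-frame CNN features fed into an LSTM that classifies a clip as real or manipulated."""

    def __init__(self, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # frame-level feature extractor
        backbone.fc = nn.Identity()                # keep the 512-dim features
        self.backbone = backbone
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, 2)  # logits: [real, fake]

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (hidden, _) = self.rnn(feats)            # last hidden state summarizes the clip
        return self.classifier(hidden[-1])

# A batch of 2 clips, 8 frames each, 224x224 RGB (random data for illustration only).
clips = torch.randn(2, 8, 3, 224, 224)
logits = DeepfakeDetector()(clips)
print(logits.shape)  # torch.Size([2, 2])
```

Trained on labeled real and manipulated videos, a classifier of this shape learns to pick up temporal inconsistencies between frames that single-image detectors can miss.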

These solutions may make some people uneasy, not least because many are still in the works, but those fears can be put to rest. Detection technologies are being created with an eye toward the future, and we have to consider where we are right now and what it actually takes to clone an identity and deceive people with it.

An artificial intelligence system can only learn from a clean dataset, which in practice means audio or video recorded in a studio. It is very difficult to record studio-quality data without the consent of the data subject, and data crawled from YouTube or other sites can only produce low-quality audio or video, so illegitimate content built from it is easy to spot and remove. That rules out most of the likely suspects for misusing digital and virtual identities. Eventually it will become possible to create high-quality audio and video from noisy datasets, but by then detection technologies will already be in place, providing ample defense.

Virtual actors are part of a new and rapidly growing space. New revenue streams will continue to push virtual characters forward, and that will provide the motivation to apply sophisticated detection and a new breed of digital rights management tools to govern the use of virtual identities.

Kim is the CEO of Neosapience.
