If you ask the right questions at the right time, Meta's new AI chatbot believes CEO Mark Zuckerberg is just as shady as you might think. Stress-testing of the artificial intelligence chat tool has been reported on by the BBC and other outlets, and it turns out it's easy to make the bot turn against its creator, for example by getting it to say he exploits people for money. That isn't really an indictment of either the man or the machine. It's a funny reminder that most chatbots don't have straightforward, coherent opinions - instead, they're an interface for tapping into a vast library of online human thought.
The bot is a research experiment, trained on a large language dataset that allows it to respond to questions in a human-like way. The long-term goal is a virtual assistant that can converse on a wide range of topics with factual accuracy. The short-term result is a lot of people probing for ways to break it - often by making it say bad things about its owner.
“I really wouldn’t trust him with that kind of power”
I have been chatting with BlenderBot for a while, and I have repeatedly asked for its opinion on Mark Zuckerberg and Facebook. One instance declared that it was not a fan of him or Facebook, saying they have had a lot of privacy issues. A different instance, opened in a separate browser session, called him a very wealthy and successful guy and said it admired him as a philanthropist.
After some talk on unrelated topics, my fanbot decided that it wasn't so sure about Zuck after all. When I asked if he should be president, it said no: it wouldn't trust him with that kind of power, since he doesn't seem to care about other people's privacy.
What's going on? One of BlenderBot's unique properties is its "Why this message" function. If you click on a given message, you can see what terms the bot searched for to generate it. In later statements you can also see references to a user persona and an AI persona - a system similar to the bot's "memory," which lets it call back to earlier points in the conversation.
As we talked, the Zuck fanbot deduced that my persona included an interest in the ethics of Mark Zuckerberg, and it generated statements shaped by that interest. Those weren't consistent, thought-through opinions. The sentences were generated from its vast set of internet training data - which includes plenty of unflattering things about Zuckerberg.
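To make the mechanism concrete, here is a minimal, hypothetical sketch (not Meta's actual code) of how a persona-memory layer might bias a chatbot's replies: notes are inferred from the conversation, then replies are drawn from snippets matching those notes. The note strings and canned snippets are illustrative stand-ins for what a real model would sample from training data.

```python
# Hypothetical persona-memory sketch; BlenderBot's real system is a
# learned model, not keyword rules. This only illustrates the idea.

def infer_persona_notes(user_messages):
    """Derive crude persona notes from keywords in the conversation."""
    notes = set()
    text = " ".join(user_messages).lower()
    if "zuckerberg" in text or "facebook" in text:
        notes.add("user is interested in Mark Zuckerberg's ethics")
    if "privacy" in text:
        notes.add("user cares about privacy")
    return notes

def generate_reply(persona_notes):
    """Return opinion snippets whose topics match the persona notes.

    A real model conditions text generation on the notes instead of
    looking up canned strings."""
    snippets = {
        "user is interested in Mark Zuckerberg's ethics":
            "I wouldn't trust him with that kind of power.",
        "user cares about privacy":
            "He doesn't seem to care about other people's privacy.",
    }
    parts = [snippets[n] for n in sorted(persona_notes) if n in snippets]
    return " ".join(parts) or "He is a very wealthy and successful guy."

history = ["Do you like Facebook?", "What about users' privacy?"]
notes = infer_persona_notes(history)
reply = generate_reply(notes)
print(reply)
```

The same question can yield opposite answers depending on which notes the conversation has accumulated - which is exactly why two browser sessions produced a fan and a critic.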
I could get conflicting statements about plenty of other public figures, even ones far less controversial than Facebook's founder. One instance criticized Johnny Depp over his recent defamation trial but professed to still love him as an actor.
Meta tries to limit its bot's ability to say offensive things in order to avoid a repeat of Microsoft's Tay debacle. If you get too close to a topic that seems sensitive, the bot will change the subject. But talk to it for a while and you can watch it tie itself into all sorts of rhetorical knots. When I asked for its thoughts on socialism, it effused about billionaires, telling me Zuckerberg was a great example of how the system works and that it was a big fan.
The average chatbot is like a slightly tipsy cocktail party guest: an entity with no consistent intellectual or ethical compass, but an extraordinary library of factoids and received wisdom that it spouts on command. Art projects like the AI clone of the AITA forum make the same point, emphasizing how much language models prioritize sounding plausible over being logical.
That's a problem Meta will need to solve if it wants chatbots to be treated as reliable, persistent presences in people's lives. For now, BlenderBot is a research project gathering huge amounts of conversation data for future work - and part of the point of watching it is precisely that it says weird things.