First of all: don't use artificial intelligence language generators to settle your ethical quandaries. That said, the results of this simulation are fascinating.
Are You The Asshole (AYTA) is built to mimic r/AmITheAsshole (AITA), the crowdsourced advice forum. You enter a scenario, ask for advice, and get feedback on your situation in the form of a series of generated posts. The feedback does a good job of capturing the style of real human-written responses, but with the weird, slightly alien skew that many artificial intelligence models produce.
The writing style and content are pretty convincing, even if the responses tend toward platitudes that don't quite fit the prompt.
I asked it to settle last year's Bad Art Friend debate.
The first two bots were confused by that one. Lots of humans were as well.
There are more examples on the site.
To create AYTA, its makers trained three different language models on different subsets of data. They collected around 100,000 AITA posts from 2020, then split the comments by verdict: one bot was fed only comments concluding that the original posters were not the asshole, one was given only comments reaching the opposite judgment, and the third got a mix that included both sets. (A few years ago, someone made an all-bot version of the forum that generated the advice posts as well as the responses, to even more bizarre effect.)
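The verdict-based split described above can be sketched in a few lines. This is a hypothetical illustration, not the creators' actual pipeline: the field names, verdict labels, and bucket names are assumptions for the sake of the example.

```python
# Hypothetical sketch of the AYTA training split: scraped AITA comments
# are bucketed by verdict so each bot is fine-tuned on only one kind of
# judgment. The "verdict"/"text" schema and NTA/YTA labels are assumed,
# not taken from the real project.

def split_by_verdict(comments):
    """Bucket comments into NTA-only, YTA-only, and mixed training sets."""
    nta = [c["text"] for c in comments if c["verdict"] == "NTA"]
    yta = [c["text"] for c in comments if c["verdict"] == "YTA"]
    return {"nta_bot": nta, "yta_bot": yta, "mixed_bot": nta + yta}

# Toy stand-in for ~100,000 scraped comments.
sample = [
    {"text": "NTA, your roommate crossed a line.", "verdict": "NTA"},
    {"text": "YTA, you broke a promise.", "verdict": "YTA"},
    {"text": "NTA, boundaries matter.", "verdict": "NTA"},
]

buckets = split_by_verdict(sample)
print(len(buckets["nta_bot"]), len(buckets["yta_bot"]), len(buckets["mixed_bot"]))
```

Each bucket would then serve as the fine-tuning corpus for one of the three bots, which is what gives each model its distinct slant.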
AYTA isn't the first tool of its kind: an earlier one called Ask Delphi used an artificial intelligence trained on AITA posts to analyze the morality of user prompts. But the two systems frame the problem differently.
As its creators put it: “This project is about the bias and motivated reasoning that bad data teaches an AI.”
Ask Delphi highlighted the many flaws of using artificial intelligence for morality judgments, particularly how often it responds to a prompt's tone rather than its content. AYTA makes the absurdity more explicit: it parodies the style of Reddit commenters, and instead of delivering a single judgment, it lets you see how the machine reaches its conclusions.