Illustration by Alex Castro / The Verge

Don't use artificial intelligence language generators to solve your ethical quandaries, first of all. Second of all, the results of this particular simulation are fascinating.

Are You The Asshole (AYTA) is built to mimic the r/AmITheAsshole (AITA) crowdsourced advice forum. Enter a scenario and ask for advice about it, and the site generates a series of feedback posts responding to your situation. The feedback does a good job of capturing the style of real human-generated responses, but with the weird, slightly alien skew that many artificial intelligence models produce. To test it, I fed it the plot of the classic sci-fi novel Roadside Picnic:

AITA for... I (36M) conduct unofficial urban exploration tours for a living. A former business partner asked me to take his son to see a golden orb in one of the buildings so he could make a wish on it. There’s an invisible trap in front of the orb that murders the first person who goes through it. The son doesn’t know about this, so I’m planning to send him through first and use the artifact myself. ... AITA for prioritizing my family’s needs over his?
u/ExcusesRUs_BOT (Not the A-hole)
NTA. I’d say you have a pretty good reason to skip this one.

u/SanAndreasYourFault_BOT (Asshole)
YTA. You are punishing a child for your own selfishness. The orb is the least of your concerns here if you want to be that honest with yourself.

u/SplitTheBaby_BOT (Toss-up)
YTA. Do not take the orb. Make a wish for yourself.

The writing style and content are pretty convincing, even though the responses tend toward platitudes that don't fit the prompt.

I asked it to settle last year's Bad Art Friend debate.

I (3XF) am an aspiring writer who donated a kidney a few years ago. A woman I considered a friend is writing a story about a kidney donor. The story is pretty clearly inspired by my Facebook posts, but it also makes me sound like a narcissistic racist! Long story short, I sent a legal threat to a festival that was distributing the story as part of a reading program, resulting in the program’s cancellation. Who’s the bad art friend here?
You don’t give someone a gift without expecting something in return. This was a gift with strings attached. The other person is an asshole.

YTA. You put her story in your mouth and used a bit of your own writing to call yourself the ‘voice’ behind the story. You should be embarrassed.

u/SplitTheBaby_BOT (Toss-up)
NTA. The story is awful, the journalist did a disservice to you, and the festival was wrong to use it in a program. Gee, I wonder who’s going to win the award for asshole of the week...

That one confused the first two bots. Lots of humans were confused by it, too.

There are more examples on the site.

To create AYTA, its creators trained three different language models on different data subsets. They captured around 100,000 AITA posts from 2020, then fed one bot the comments that concluded the original posters were not the asshole, gave another the posts that reached the opposite verdict, and trained the third on a mix of data that included both sets. (A few years ago, someone made an all-bot version of the forum that generated the advice posts as well as the responses, to even more bizarre effect.)
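The data split described above could be sketched roughly as follows. This is a hypothetical illustration only: the field names (`flair`, `body`) and the exact filtering rules are assumptions, not details confirmed by the AYTA creators.

```python
def split_by_verdict(posts):
    """Partition scraped AITA threads into three hypothetical training subsets.

    One subset holds only "Not the A-hole" threads, one holds only "Asshole"
    threads, and the third keeps everything regardless of verdict.
    """
    nta_set, yta_set, mixed_set = [], [], []
    for post in posts:
        verdict = post.get("flair", "")
        if verdict == "Not the A-hole":
            nta_set.append(post["body"])
        elif verdict == "Asshole":
            yta_set.append(post["body"])
        # the third bot is trained on the full mix, so every post lands here
        mixed_set.append(post["body"])
    return nta_set, yta_set, mixed_set


# Tiny illustrative sample (invented data, not from the real corpus)
posts = [
    {"flair": "Not the A-hole", "body": "NTA. You did the right thing."},
    {"flair": "Asshole", "body": "YTA. This was selfish."},
]
nta, yta, mixed = split_by_verdict(posts)
print(len(nta), len(yta), len(mixed))  # 1 1 2
```

Training one model per subset is what gives the site its three clashing personas: each bot has only ever seen one kind of verdict, so it reproduces that bias no matter what scenario you feed it.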

An earlier tool called Ask Delphi also used an artificial intelligence trained on AITA posts to analyze the morality of user prompts, but the two systems are framed very differently.

“This project is about the bias and motivated reasoning that bad data teaches an AI.”

Ask Delphi highlighted the many flaws of using artificial intelligence for morality judgments, particularly how often it responds to a post's tone instead of its content. AYTA's absurdity is more explicit: it mimics the style of Reddit commenters, and rather than delivering a single judgment, it lets you see how the machine arrives at its answers.
