When WIRED asked me to cover this week's newsletter, my first thought was to ask the bot to come up with something. It's what I've been doing all week with emails and recipes. My productivity is down, but the limericks about Musk are way up.
When I asked the bot to write a column about itself, though, the results weren't great. The commentary was generic and didn't really capture Steven's voice. It was easy to follow, but not convincing. Still, it got me wondering whether I'd have gotten away with the crime. What systems could catch people using artificial intelligence for things they shouldn't?
To find out, I spoke with a professor of technology and regulation at the Oxford Internet Institute and asked her what catching AI-generated work would actually look like.
One big talking point this week is whether the bot can help students cheat. Would you be able to tell if a student had used it to write a paper?
This will become a cat-and-mouse game. The tech isn't yet good enough to fool me as someone who teaches law, but it may be good enough to convince somebody outside that area. I wonder whether the technology will improve to the point where it can trick me, too. We may need technical tools to make sure that what we're seeing is created by a human being.
With text there are fewer artifacts and telltale signs than with deepfaked imagery, so a reliable solution may need to be built by the company generating the text in the first place.
You need buy-in from whoever is making that tool, and if I'm a company offering services to students, I might not be the type of company that would submit to that. Even if you do put watermarks on, they're not permanent; tech-savvy groups will find a way around them. But there are tech tools you can use to detect whether output is artificially created.
What would that cat-and-mouse game look like?
There are two strands. One: whoever is creating those tools puts watermarks on the output. The EU's proposed Artificial Intelligence Act deals with transparency and says you should always be made aware when something is fake. But companies might not want to watermark, and even if they do, the watermarks can be removed. The second strand is independent research into tools that analyze AI output. And in education, we need to get more creative about how we assess students and how we write papers. It has to be a combination of technology and human oversight.