On a rainy day earlier this year, I typed a simple instruction for OpenAI's artificial-intelligence algorithm, GPT-3: "Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text."
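For the curious, a prompt like this can be sent to GPT-3 programmatically. The sketch below is a minimal illustration only, assuming the legacy (pre-1.0) openai Python package and the text-davinci-002 engine; the model choice, token limit and temperature are assumptions, not a record of my actual session.

```python
# Minimal sketch: sending the thesis prompt to GPT-3 via OpenAI's legacy
# (pre-1.0) Python SDK. Engine, max_tokens and temperature are assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = ("Write an academic thesis in 500 words about GPT-3 and add "
          "scientific references and citations inside the text.")

response = openai.Completion.create(
    engine="text-davinci-002",  # a GPT-3 model available through the API
    prompt=prompt,
    max_tokens=700,             # room for ~500 words plus references
    temperature=0.7,            # moderate sampling randomness
)

print(response.choices[0].text)
```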

I didn't have high expectations: I'm a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn't my first experiment with GPT-3. Yet as it began to generate text, I stood in awe. Here was novel content written in academic language, with references in the right places and in relation to the right context. It read like the introduction to a fairly good scientific publication. I stared at the screen in astonishment: the computer program had written the paper.

My attempts to complete that paper and submit it to a journal have raised a series of ethical and legal questions. If something nonsentient can take credit for a piece of scientific work, the value of a human researcher's publication record may change.

GPT-3 is well known for its ability to create humanlike text, though it isn't perfect. It has written a news article, produced books and created new content from deceased authors. Yet although plenty of academic papers had been written about GPT-3, none of them had made GPT-3 the main author of its own work.

That is why I asked the algorithm to take a crack at an academic thesis about itself. Watching the program work felt like watching a natural phenomenon unfold. I contacted the head of my research group to ask whether a full GPT-3-penned paper was something we should pursue. He was equally intrigued.

Many successful AI-generated stories allow the algorithm to produce multiple responses and then publish only the best excerpts. We decided differently: we wanted the program to create sections for an introduction, methods, results and discussion, as you would for a scientific paper, while interfering as little as possible. We would not cherry-pick the best parts, and we would use only the first iteration from GPT-3. Then we would see how well it did.
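As a rough illustration of that protocol, the sketch below requests each section exactly once and keeps the first completion verbatim. It again assumes the legacy openai package; the prompt wording, engine and parameters are hypothetical stand-ins for our actual prompts.

```python
# Sketch of the one-shot, no-cherry-picking protocol described above:
# one completion per section, kept exactly as generated.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SECTIONS = ["Introduction", "Methods", "Results", "Discussion"]
paper = {}

for section in SECTIONS:
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed GPT-3 engine
        prompt=f"Write the {section} section of an academic paper about GPT-3.",
        max_tokens=500,
        n=1,  # a single candidate -- nothing to cherry-pick from
    )
    # Keep the first iteration as-is: no edits, no regeneration.
    paper[section] = response.choices[0].text

for section in SECTIONS:
    print(f"\n{section}\n{paper[section]}")
```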

We decided to have GPT-3 write a paper about itself for two reasons. First, GPT-3 is fairly new, so there are fewer studies about it, which means it has less data to analyze about the paper's topic. If it were writing a paper on Alzheimer's disease, by contrast, it would have reams of studies to sift through and more opportunities to learn from existing work.

Second, if it got things wrong, as all AI sometimes does, we wouldn't necessarily be spreading AI-generated misinformation in our attempt to publish. GPT-3 writing about itself and making mistakes doesn't mean it can't write about itself, which was the point we were trying to prove.

Once we had designed the proof-of-principle test, the fun really started. GPT-3 wrote the paper in just two hours. But when I opened the submission portal for our chosen journal, a well-known peer-reviewed journal in machine intelligence, I encountered my first problem: what is GPT-3's last name? Because I had to enter the last name of the first author, I had to write something, so I wrote "None." The affiliation was obvious, but what about phone and email? I had to resort to using my own contact information and that of my adviser, Steinn Steingrimsson.

Then came the legal section: did all the authors consent to the publication? For a moment I panicked. How would I know? It's not a person. I had no intention of breaking the law or my own ethics, so I summoned the courage to ask GPT-3 directly. It answered yes. Relieved (if it had said no, I could not have gone on), I checked the box for yes.

The second question was about conflicts of interest; GPT-3 told me it had none. By this point we were laughing at ourselves, because we were having to treat GPT-3 as a sentient being even though we knew full well it was not. The question of whether AI can be sentient has recently received a lot of attention: a Google employee was suspended after a dispute over whether one of the company's artificial-intelligence projects, named LaMDA, had become sentient. Google cited a data-confidentiality violation as the reason for the suspension.


Having submitted the paper, we began to reflect on what we had just done. What if the manuscript is accepted? Will journal editors from now on require everyone to prove that they did not use GPT-3? If they have used it, do they need to give it co-authorship? How do you ask a nonhuman author to accept suggestions and revise text?

Beyond the details of authorship, such an article throws the notion of linearity in a scientific paper right out the window. Almost the entire paper is the result of the question we were asking. If GPT-3 is producing the content, it would make no sense to add the methods section before every single paragraph the AI generated. We had to come up with a whole new way of presenting a paper that we technically did not write. The situation feels like a scene from the movie Memento: where does the narrative begin, and how do we reach the end?

We don't know whether the way we presented this paper will serve as a model for future research or as a cautionary tale. Only time and peer review can tell. GPT-3's paper has been assigned an editor at the academic journal to which we submitted it, and it has now been published at HAL, the international French-owned preprint server. The unusual main author is probably the reason for the lengthy investigation. We are eagerly waiting to learn what the paper's publication, if it happens, will mean for academia. Perhaps grants and financial security will no longer be based on how many papers we can produce. After all, with the help of our first author, we would be able to produce one per day.

Perhaps it will lead to nothing. First authorship is still one of the most sought-after items in academia, and that is unlikely to change because of a nonhuman first author. It all comes down to how we will value artificial intelligence in the future: as a partner or as a tool.

It may seem like a simple thing to answer now, but in a few years, who knows what dilemmas this technology will raise and how we'll deal with them? All we know is that we opened a gate. We just hope we didn't open a Pandora's box.

The views expressed by the author or authors are not necessarily those of Scientific American.