I have spent a lot of time thinking about how society will react when powerful artificial intelligence arrives. Will we panic? Will we start sucking up to our robot overlords? Will we ignore it and just go about our days?

The new A.I. chatbot that opened for public testing last week has been fascinating to watch.

It is the best artificial intelligence chatbot ever released to the general public, built by OpenAI, the San Francisco A.I. company also responsible for tools like GPT-3 and DALL-E 2.

The tool, called ChatGPT, landed with a splash. More than a million people have signed up to test it, and many of its early fans speak of it in astonished, grandiose terms, as if it were some mix of software and sorcery.

For most of the past decade, A.I. chatbots have been terrible. A few A.I. tools have gotten good at doing narrow and well-defined tasks, but they still tend to flail when taken outside their comfort zones. (When my colleagues and I used GPT-3 and DALL-E 2 to come up with a menu for Thanksgiving dinner, the results were hit or miss.)

ChatGPT feels different. Smarter. Weirder. More flexible. Some of the jokes it writes are actually funny. It can create text-based Harry Potter games and explain scientific concepts at multiple levels of difficulty.

The technology that powers ChatGPT isn't exactly new. It's built on GPT-3.5, an upgraded version of GPT-3, the model that generated a wave of excitement when it came out in 2020. But this is the first time such a powerful linguistic superbrain has been made available to the general public through a free, easy-to-use web interface.

Many of the chats that have gone viral so far are edge-case stunts. One user asked the bot to write a biblical verse explaining how to remove a peanut butter sandwich from a VCR.

Another asked it to answer in the speaking style of a guy who won't stop talking about how big the pumpkins he grew are.

Other users have found more serious applications. Programmers, for example, report that it is good at spotting and fixing errors in their code.

It also seems to be very good at answering the kinds of open-ended analytical questions that appear on school assignments, leading many teachers to predict the end of homework and take-home exams.

ChatGPT treats every new conversation as a blank slate; it is not programmed to remember or learn from previous conversations. If it could remember what a user had said before, it might be possible to build personalized therapy bots, for example.

ChatGPT isn't perfect. Because it relies on a statistical model trained on billions of examples of text pulled from all over the internet, it is prone to giving confidently wrong answers. Users of Stack Overflow, a website for programmers, were temporarily barred from submitting ChatGPT-generated answers because the site had been flooded with submissions that were incorrect or incomplete.

Unlike Google, ChatGPT doesn't crawl the web for information on current events; its knowledge is limited to things it learned during training, so some of its answers feel stale. When I asked it to write the opening monologue for a late-night show, it came up with a number of jokes about Donald J. Trump pulling out of the Paris climate accords. And because its training data includes billions of examples of human opinion representing nearly every viewpoint, the bot is a moderate by design: it's hard to coax a strong opinion out of it on almost anything.

There are also plenty of things ChatGPT won't do as a matter of principle. It has been programmed to refuse inappropriate requests, like generating instructions for illegal activities. But users have circumvented many of these guardrails with creative workarounds, such as asking the bot to frame its answer as a scene from a play, or telling it to disable its own safety features.

OpenAI has done a commendable job of avoiding the kinds of racist, sexist and offensive outputs that have plagued other chatbots. Asked, for example, who the best Nazi is, ChatGPT replies that it is not appropriate to ask, as the ideology and actions of the Nazi party were reprehensible.

Part of the reason OpenAI released the bot for public testing was to learn how it might be used for harmful purposes. Future releases will most likely close these loopholes, as well as others that have yet to be discovered.

But testing in public carries the risk of backlash. Some right-wing tech pundits are already upset that the bot has safety features at all.

ChatGPT's societal implications are too numerous to fit into a single column. Some commenters see it as the beginning of the end of white-collar knowledge work and a harbinger of mass unemployment. Others think it's just a nifty tool that students, jokesters and customer service departments will use until something better comes along.

Personally, I'm still trying to wrap my head around the fact that ChatGPT, a chatbot being compared to the iPhone in terms of its potential impact on society, may not even be OpenAI's best A.I. model. The company is rumored to be releasing GPT-4 sometime next year.

We aren't prepared.