There is a new obsession in Silicon Valley: the idea that computers will take over creative tasks such as writing, filmmaking, and graphic design.
If you've been paying attention to the industry's forecasts, this may sound familiar. Billions of dollars are being poured into software tools designed to replace certain kinds of creative work, just as billions were poured in on the strength of incorrect predictions about the demise of truck driving, taxi driving, and human customer service. These things tend to go together: the investment has arrived alongside awed media coverage and a degree of moral panic.
If you're still trying to wrap your head around all this, you're already behind the curve. According to one of Sand Hill Road's most respected venture firms, computers will write better than humans by 2025; the firm predicts that professional writers, artists, and video game makers will be replaced by computers. Several companies have recently unveiled software that can take a text prompt and use it to create a short story or a video. An astronaut riding a horse in outer space is among the weird, memorable images produced by OpenAI's tools.
I don't find the stories convincing, and I don't think the art is very good. When my colleagues in the Businessweek art department tried to use the technology to generate illustrations for an article about the prospect of professional illustrators using these tools to enhance their work, the results were incoherent.
Even so, media accounts tend to focus on the prospect that these obviously limited tools might soon get so good that they'll put most creative professionals out of business. When a piece of artificial intelligence-generated art won a blue ribbon at a state fair in Colorado, in a category for digitally created or manipulated art, the New York Times treated the achievement as a landmark moment and included panicked quotes suggesting that the modest accomplishment constituted "the death of artistry." The Verge issued a similar warning about Stable Diffusion, a text-to-art tool, while playing down some of its flaws. The publication argued that naked models sporting extra limbs and placed in physically impossible poses wouldn't limit the technology's impact, because the quality of the output will improve in the near future.
That "near future," in which artificial intelligence somehow solves the mystery of human sexual desire, is doing quite a lot of work. The belief that advances in generative artificial intelligence are creating an ethical crisis is widespread in tech circles and among columnists such as the New York Times' Kevin Roose. The impulse to debate the ethics of technology developed by some of the world's largest and most ethically compromised companies is understandable: Silicon Valley has a track record of ignoring the consequences of its software until it is too late. And there are obvious problems with the current crop of artificial intelligence apps, which tend to amplify existing biases and may be using copyrighted works to train their algorithms.
But we should also be debating the premise: Just how good are these artificial intelligence systems, and what exactly do they do well? Sam Altman, the chief executive officer of OpenAI, suggested that the latest version of Dall-E was a preview of his ultimate goal, artificial general intelligence, the term for computers capable of thinking for themselves. AGI, in his telling, is going to be crazy. The implication was that Dall-E represented a major step toward building a computer that can truly think.
Some artificial intelligence experts have questioned that idea, suggesting it may be more illusion than reality. Dall-E is doing something far more primitive, says Gary Marcus, an entrepreneur and retired NYU professor who writes a Substack newsletter on the limitations of artificial intelligence. Marcus, who co-wrote a research paper about Dall-E, says the system doesn't seem to understand the relationships between the words in a prompt. It's like talking to a second-language learner, he says: they don't quite understand how the words fit together.
This isn't to say the generative services aren't cool, especially when they're used by humans who know how to craft interesting prompts for the artificial intelligence to respond to. But these new services don't tell us much about the future of artificial intelligence. It's possible that Dall-E is a novel form of intelligence that will pave the way for robot assistants and a world beyond human labor; it's also possible that Dall-E is more akin to a next-generation version of Adobe's popular image-editing software. Marcus says the image synthesis is amazing, and for churning out stock images for a PowerPoint presentation it's the bomb. But if you want to put this technology in a robot, it's just a demo.
Ten years ago, futurists were confident that self-driving cars that performed as well as human drivers were imminent. They imagined an end to traffic deaths and commuters catching up on sleep on the ride in from the exurbs. Those predictions pushed companies to invest about $100 billion in the field and prompted policymakers and futurists to debate the ethics of self-driving cars and the potential for mass unemployment. The industry encouraged those ethical debates in part because they rested on the assumption that the technology worked.
That assumption no longer seems so safe. There has been plenty of progress in automotive automation, and many advanced driver-assistance systems are now available, but the long-promised robo-taxis remain, for the most part, nonexistent. Part of the problem is that the industry fell for its own demos. You see a self-driving car handle a route and assume it can navigate any route. It's the same mistake as training an artificial intelligence bot to act like a human and then asking it whether it's sentient.
It's possible that the current backers of generative artificial intelligence are making the same mistake. This isn't to say that these new services won't be useful; just that, like the current generation of "self-driving" cars, they'll need human artists to drive them.