The world of artificial intelligence research is a mess. The system is broken because academics prioritize easy-to-monetize schemes over breaking novel ground, and because the Silicon Valley elite use the threat of job loss to encourage corporate-friendly hypotheses.

The lion's share of the blame should go to Google.

How it started

In the year 2000, some 85,000 research papers were published worldwide on the subject of artificial intelligence. By 2021, more than that were being published in the US alone.

The field has exploded, and the influx of researchers and new ideas has turned deep learning into one of the world's most important technologies.

Big tech abandoned its "mobile first" mantra to adopt "artificial intelligence first" instead.

Artificial intelligence developers and researchers are now in higher demand than almost anyone else in tech outside of the C-suite.

This kind of growth has a dark side. The field has become as cutthroat as professional sports in the scramble to meet the market demand for deep learning-based products and services.

In the past few years alone, we have seen star researchers such as Ian Goodfellow hop between tech giants like free agents, and others, such as Timnit Gebru, lose their jobs over their views on the ethics of AI.

The flood of talent that arrived in the wake of the deep learning explosion brought a mudslide of bad research, fraud, and corporate greed along with it.

How it’s going

More than any other company, Google is responsible for the modern artificial intelligence paradigm, and it deserves big marks for bringing natural language processing and image recognition to the masses.

It also means that we can point the finger at Google for creating the researcher-eat-researcher environment that has some college students and their big-tech-partnered professors treating research papers as little more than bait for venture capitalists and corporate headhunters.

Google is willing to hire the world's most talented researchers. It is also willing to fire them in a heartbeat if they don't toe the company line.

In December of 2020, the company fired Timnit Gebru, a researcher it had hired to help lead its AI ethics division. It fired another ethics researcher, Margaret Mitchell, a few months later.

Both women, and their supporters, claim the firings only occurred after they raised ethical concerns over research the company had signed off on.

Barely a year later, history is repeating itself: Google reportedly fired Satrajit Chatterjee after the team of scientists he led challenged another paper that Google AI chief Jeff Dean had signed off on.

The mudslide effect

The competition for high-paying jobs is fierce at the top. The hunt for the next great researcher or developer begins earlier than ever.

Students working towards advanced degrees in the fields of machine learning and artificial intelligence, who eventually want to work outside of academia, are expected to author or co-author research papers that demonstrate their talent.

The result is a lot of bad papers written by students whose real interest is in producing software that can be monetized.

For example, a quick search of the academic literature for "natural language processing" turns up nearly a million hits, many of them papers that other papers cite.

On the surface, this indicates that machine learning research has gained attention from researchers around the globe.

Searches for "artificial neural network", "computer vision", and "reinforcement learning" return similarly enormous numbers of results.

But buried in those results is a lot of research that is either fraudulent or full of bad science.

The mode of communicating research that worked well in the past is quickly becoming outdated.

Stuart Ritchie recently wrote an article for The Guardian wondering whether we should do away with research papers altogether. His argument is that science's problems are baked deep into the publishing system itself:

"This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce 'better' results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on."

Researchers fail to gain those gatekeepers' approval at their own peril, as reviewers and editors tend to hold the keys to students' future employment.

And even when a paper makes it through peer review, there is no guarantee that the people waving it through were paying attention.

The Problematic Paper Screener is a project created by Guillaume Cabanac, an associate professor of computer science at the University of Toulouse.

The PPS uses automation to flag potentially problematic papers, and every flagged paper then gets a manual review from human volunteers. Unfortunately, the job is far too big for a handful of humans working in their spare time.

And, according to a recent report, there are a lot of problematic papers out there, the majority of them having to do with machine learning:

"The screener deemed about 7,650 studies problematic, including more than 6,000 for having tortured phrases. Most papers containing tortured phrases seem to come from the fields of machine learning, artificial intelligence and engineering."

Tortured phrases raise red flags because they use odd language to describe a process or concept that is already well-established.

The use of terms such as "counterfeit neural organization" or "man-made neural network" in place of "artificial neural network", for example, could indicate that bad actors ran existing text through a thesaurus plug-in to disguise plagiarism.
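We haven't seen the PPS's internals, but conceptually a first pass at this kind of screening can be as simple as matching text against a curated list of fingerprint phrases. Here is a minimal Python sketch, assuming a hand-built mapping from tortured phrases to the established terms they mangle (the list below mixes examples from this article with a few reported elsewhere; it is illustrative only, not the PPS's actual rule set):

```python
import re

# Illustrative "tortured phrase" fingerprints mapped to the established
# terms they appear to mangle. A real screener would use a much larger,
# curated dictionary.
TORTURED_PHRASES = {
    "counterfeit neural organization": "artificial neural network",
    "man-made neural network": "artificial neural network",
    "profound learning": "deep learning",
    "colossal information": "big data",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, established term) pairs found in text."""
    hits = []
    lowered = text.lower()
    for phrase, standard in TORTURED_PHRASES.items():
        # Word boundaries avoid matching inside longer, unrelated words.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, standard))
    return hits

if __name__ == "__main__":
    abstract = (
        "We train a counterfeit neural organization on colossal "
        "information to improve classification accuracy."
    )
    for phrase, standard in flag_tortured_phrases(abstract):
        print(f"flagged: '{phrase}' likely mangles '{standard}'")
```

Matches like these are only a signal, not proof of misconduct, which is why every paper the PPS flags still goes to a human reviewer.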

The solution

Google's outsized role in the fields of machine learning and artificial intelligence has made this devolution of peer-reviewed research possible.

This isn't to say that Google doesn't support and prop up the scientific community through open-source software, financial aid, and research support. Nor are we trying to imply that everyone studying AI is in it to make a quick buck.

But as it stands, the system is set up to encourage the monetization of the technology first. For that to change, big tech and academia need to commit to wholesale reform in how research is presented and reviewed.

There is currently no widely recognized third-party verification authority; the peer-review system functions more like an honor code than a set of enforced principles.

There is precedent, however. The NCAA shows that an oversight committee with the reach, influence, and expertise to govern across academic boundaries can be established and operated.

If we can unify thousands of amateur athletics programs under a fair-competition system, we can surely form a governing body to establish guidelines for academic research and review.

And Google could use some guidelines too. If the company keeps firing the researchers it hires to oversee its ethical artificial intelligence programs, its CEO will end up summoned before Congress again.

US capitalism means a business can hire and fire whomever it wants, but shareholders and workers have rights too.

If Google doesn't commit to ethical research, it will eventually find itself unable to compete with the companies and organizations that do.