The rapid progress in artificial intelligence will continue to accelerate, according to one of the pioneers of the deep learning revolution.
Speaking ahead of the 10-year anniversary of the key neural network research that led to the major artificial intelligence breakthrough of 2012, he pushed back against critics who claim deep learning has "hit a wall."
We are going to see more compliant and dexterous robots, he predicted, that do things more efficiently and gently.
The 2012 research on the ImageNet database, which built on earlier work, was a pathbreaking moment: it unlocked significant improvements in computer vision specifically and deep learning overall. The results pushed deep learning into the mainstream and sparked a huge wave of interest.
In an interview with VentureBeat, LeCun said that obstacles are being cleared at a rapid pace, and that the progress of the last four or five years has been amazing.
In an interview with VentureBeat, Li said that the evolution of deep learning since 2012 has been a phenomenal revolution.
Success, of course, attracts critics. Some call out the limitations of deep learning and argue that its success is narrow in scope. They maintain that the hype neural nets have generated is just that: hype. In their view, the technology is not close to being the fundamental breakthrough that some supporters claim, the groundwork that will eventually lead to the long-anticipated artificial general intelligence.
Gary Marcus, the founder and CEO of Robust.ai, has written that deep learning is hitting a wall, arguing that while there has certainly been progress, the field remains stuck on common sense.
Emily Bender, a professor of computational linguistics at the University of Washington and a regular critic of what she calls the deep learning bubble, doesn't think that today's natural language processing and computer vision models add up to substantial steps toward what other people mean by AI and AGI.
Still, huge progress has already been made in key applications like computer vision and language, progress that has set thousands of companies off on a scramble to harness the power of deep learning and that has already yielded impressive results in recommendation engines, translation software and more.
There are also deep learning debates that still need to be had: important issues around ethics and bias, as well as questions about how the public can be protected from discrimination by AI systems.
As we look back on the past decade of artificial intelligence, VentureBeat wanted to know what lessons can be learned, and what the future holds for this world-changing technology.
He knew the deep learning revolution was on the way.
"A group of us were convinced that backpropagation was the future of artificial intelligence," he said. "We showed that what we had thought was correct."
LeCun, one of the first to use backpropagation to train neural networks, said he had little doubt that the techniques developed in the 1980s and 1990s would eventually be adopted.
Theirs was a contrarian view at the time: that deep learning, applied to fields such as computer vision, speech recognition and machine translation, could eventually produce better results than humans. Pushing back against critics who wouldn't even consider their research, they maintained that techniques such as backpropagation and convolutional neural networks were key to restarting progress in artificial intelligence.
Li, meanwhile, was confident in her hypothesis that the ImageNet database held the key to advancing computer vision and deep learning research. It was, she said, an out-of-the-box way of thinking about machine learning, and a high-risk move.
These ideas had been developed over many decades of research. In 2012 they converged, a breakthrough happened, and a new deep learning revolution began.
The ImageNet competition was founded by Li to evaluate large-scale object detection and image classification. In 2012, a trio of researchers, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton, won it with the deep convolutional network described in their paper ImageNet Classification with Deep Convolutional Neural Networks. It proved far more accurate than anything that had come before.
Powered by the ImageNet dataset and increasingly capable hardware, the paper paved the way for the next decade's major artificial intelligence success stories.
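For readers who want a concrete picture of what that kind of paper demonstrated, here is a minimal sketch of the same basic idea: a convolutional network classifying images, with its weights adjusted by backpropagation. It is written in PyTorch, and the tiny layer sizes and random stand-in data are illustrative assumptions, not the original architecture or the ImageNet data.

```python
# A minimal, illustrative convolutional image classifier trained with backpropagation.
# This is a sketch of the general technique, not the 2012 model itself.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random stand-in batch; a real run would loop over labeled images.
images = torch.randn(8, 3, 32, 32)    # batch of 8 RGB images, 32x32 pixels
labels = torch.randint(0, 10, (8,))   # placeholder class labels
loss = loss_fn(model(images), labels)
loss.backward()                        # backpropagation computes the gradients
optimizer.step()                       # gradient descent nudges the weights
```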
Global startup funding for artificial intelligence grew from $670 million in 2011 to $36 billion in 2020.
Media outlets picked up on the deep learning trend. A New York Times article reported that scientists saw promise in deep-learning programs, which apply an artificial intelligence technique inspired by theories about how the brain recognizes patterns, and noted that what was new was the growing speed and accuracy of these programs, often called artificial neural networks or just "neural nets" for their resemblance to the neural connections in the brain.
In June 2012, researchers at Google's X lab, where Jeffrey Dean and Andrew Ng were doing breakthrough work at the same time, built a neural network made up of 16,000 computer processors with one billion connections that began to identify "cat-like" features until it could recognize cat videos. Other researchers went on to improve the performance of convolutional neural networks on multiple image databases.
By 2013, most computer vision research had switched to neural nets. He recalled a time, not long before, when it had been considered inappropriate to even have two papers on deep learning at a single conference.
It should come as no surprise, then, that Li personally announced the winner of the ImageNet competition at the 2012 conference in Florence, Italy; she recognized the significance of the moment.
Hers had been a vision that hardly anyone supported, she said, but it paid off in a historic way.
In the years since, progress in deep learning has remained rapid. According to LeCun, obstacles keep being cleared quickly.
Some areas have progressed more quickly than he anticipated. The use of neural networks in machine translation made great strides in the last few years, something he had expected to take much longer, and advances in image generation systems such as DALL-E have also moved faster than he thought they would.
Not everyone agrees that deep learning's progress has been jaw-dropping. Marcus, for one, wrote an article for the New Yorker arguing that a better ladder doesn't necessarily get you to the moon.
Marcus doesn't think deep learning has brought artificial intelligence any closer to the moon than it was 10 years ago.
He said that in order to get to the moon you have to solve causality and natural language understanding. There hasn't been much progress on those things.
Marcus believes the way forward is a hybrid approach that combines deep learning with symbolic AI, the branch of artificial intelligence that dominated the field before the rise of deep learning.
Both of them dismissed Marcus' criticisms.
"If you look at the progress recently, it's been amazing."
There are no walls being damaged. There are obstacles that need to be cleared and solutions that are not completely known. I don't think progress slows down at all.
Bender, for her part, isn't convinced. She told VentureBeat by email that 2012 did seem to bring some kind of qualitative breakthrough. "Anything grander than that is all hype."
The field of artificial intelligence and deep learning, she argues, has gone too far: the ability to process large datasets into systems that can generate synthetic text and images has left the field, as she puts it, way out over its skis. We seem to be stuck, she said, in a cycle of people discovering that models are biased and trying to debias them, despite the fact that there is no such thing as a fully debiased dataset or model.
She would like to see the field held to real standards of accountability, both for empirical claims being actually tested and for product safety. For that to happen, she said, the public will need to understand what is at stake, as well as how to see through the hype.
LeCun pointed out that people tend to oversimplify these questions, and that many assume ill intent when, he said, most companies actually want to do the right thing. He was also frustrated by criticism from people who are not themselves involved in the science and technology. There are a lot of people shooting from the bleachers, he said, mostly to get attention.
For Li, the debates are what science is all about. Science, she said, is a journey to find the truth, a journey to discover and to improve. Still, she finds some of the current debates and criticism a bit contrived, with extremes on either side, and regards them as a simplified version of a deeper, more nuanced, more multidimensional scientific debate.
The past decade of artificial intelligence has also had its disappointments, Li said. The biggest, for her, concerns the effort she began with a former student to bring young women, students of color and students from underserved communities into the world of artificial intelligence, in the hope of a more diverse future for the field. Eight years after that effort started, she said, the change is still too slow. She wants to see faster, deeper changes, and she doesn't think there is enough effort going into helping these young people; many talented students have been lost along the way.
LeCun admits that some artificial intelligence challenges have not been solved. Others underestimated their complexity, he said, and solving them will take a long time; some people think it is just a matter of making the models bigger. He believes that current approaches to artificial intelligence won't get us to human-level artificial intelligence.
He sees a lot of potential for the future of deep learning, but he's most excited about getting machines to learn more like animals and humans.
One of the reasons he advocates for self-supervised learning, he said, is that the underlying principle of animal learning is still unknown. Cracking it would allow us to build things that are currently out of reach, like intelligent systems that can help us in our daily lives as if they were human assistants, something we are going to need, he believes, because we are all going to wear augmented reality glasses.
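As a rough illustration of what self-supervised learning means in practice, the sketch below trains a small network to reconstruct masked-out parts of its own input, so the data itself supplies the training signal. The model, masking scheme and random data are illustrative assumptions written in PyTorch, not a description of LeCun's actual research setup.

```python
# A rough self-supervised learning sketch: the network learns by predicting the
# hidden portion of each unlabeled input, so no human-provided labels are needed.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    x = torch.randn(32, 64)                      # a batch of unlabeled inputs
    mask = (torch.rand_like(x) > 0.25).float()   # keep ~75%, hide ~25% of each input
    reconstruction = encoder(x * mask)           # the model only sees the unmasked part
    # The training signal comes from the data itself: predict what was hidden.
    loss = loss_fn(reconstruction * (1 - mask), x * (1 - mask))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```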
There is a lot more learning to be done, he added. He believes there will be another breakthrough in the basic computational infrastructure for neural nets, which today is just digital computing done with accelerators that are very good at matrix multiplication, with signals kept in digital form so that backpropagation can be run. He thinks we will find alternatives to backpropagation, and that most of the computation will eventually be done in analog.
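To make the hardware point concrete, the short sketch below writes out one layer's forward pass and its backpropagation step as plain matrix multiplications, the operation today's digital accelerators are built to speed up. It is a generic textbook illustration in NumPy, not a description of any particular chip or of the analog alternatives being imagined.

```python
# Why matrix-multiply accelerators matter: both the forward pass and the
# backpropagation step of a single dense layer reduce to matrix multiplications.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))       # batch of 8 inputs, 64 features each
W = rng.standard_normal((64, 32))      # weights of one dense layer
y_true = rng.standard_normal((8, 32))  # stand-in regression targets

y = x @ W                              # forward pass: one matrix multiply

grad_y = 2 * (y - y_true) / y.size     # gradient of mean squared error
grad_W = x.T @ grad_y                  # backprop to the weights: another matrix multiply
grad_x = grad_y @ W.T                  # backprop to the previous layer: and another

W -= 0.01 * grad_W                     # gradient descent update
```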
Communication and education are among the most important priorities for the future of deep learning, according to Li. She said that she and others have devoted an enormous amount of effort to educating business leaders, government, policymakers, media and reporters, and society at large. She is concerned that a lack of background knowledge makes it harder to convey a more nuanced and more thoughtful description of what this moment in artificial intelligence is about.
Whatever the debates, the past decade has delivered undeniable deep learning success. He emphasizes that it should also be remembered as an era of computer hardware advances, since progress in hardware is what made that success possible.
Marcus, for his part, thinks deep learning may come to be seen as a mistake: people in the future, he argues, will look at the systems of 2022 and see brave attempts that didn't actually work.
Li, by contrast, believes the last decade will be remembered as the beginning of a great digital revolution that helps all humans, not just a few, live and work better. She said she wouldn't want deep learning to be seen as the end point of artificial intelligence; she wants AI to be seen as an incredible technological tool that is developed and used in the most human-centered way.
She said that how we are remembered depends on what we are doing now.