Perceptron is a weekly rundown of news and research from around the world. Machine learning is now a key technology in practically every industry, and there is far too much happening for anyone to keep up with it all. This column collects some of the most interesting recent discoveries and papers in the field of artificial intelligence.

This column was previously known as Deep Science; check out previous editions here.

There are two studies from Facebook/Meta this week. The first, a collaboration with the University of Illinois at Urbana-Champaign, aims to reduce emissions from concrete production. Concrete accounts for some 8 percent of carbon emissions, so even a small improvement could help us meet our climate goals.


The model was trained on over a thousand concrete formulas, which differed in their proportions of sand, ground glass, and other materials. Finding subtle trends in the data, it was able to come up with a number of new formulas. The winning formula produced 40 percent fewer emissions than the regional standard while meeting some of the strength requirements. It's very promising, and follow-up studies should move the ball forward again soon.
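To get a feel for the general recipe, here's a toy sketch: fit models on known mixes, then search new mixes for low predicted emissions while keeping predicted strength acceptable. The synthetic data, features, thresholds, and model choice below are my illustrative assumptions, not Meta's actual setup.

```python
# Toy sketch: predict strength and emissions from mix proportions,
# then search candidate mixes for low emissions at adequate strength.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# 1,000 fake "known" mixes: proportions of cement, sand, ground glass, fly ash
mixes = rng.dirichlet(np.ones(4), size=1000)
strength = mixes @ np.array([50.0, 10.0, 8.0, 12.0]) + rng.normal(0, 1.0, 1000)
emissions = mixes @ np.array([30.0, 2.0, 1.0, 1.5]) + rng.normal(0, 0.5, 1000)

strength_model = RandomForestRegressor(random_state=0).fit(mixes, strength)
emissions_model = RandomForestRegressor(random_state=0).fit(mixes, emissions)

# Propose new mixes, keep the ones predicted strong enough,
# and pick the one with the lowest predicted emissions.
candidates = rng.dirichlet(np.ones(4), size=5000)
strong = candidates[strength_model.predict(candidates) >= 25.0]
best = strong[emissions_model.predict(strong).argmin()]
print("proposed mix (cement, sand, glass, ash):", best.round(3))
```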

The second Meta study concerns how language models work, and how that might change. The company is working with neuroimaging experts to compare language model activity with actual brain activity.

In particular, they are interested in the human ability to anticipate words far ahead of the current one, like knowing a sentence will end in a certain way. AI models are getting better, but they still mostly work by adding words one by one, like Lego bricks, and looking back to see whether the result makes sense. They're just getting started, but they already have some interesting results.
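For a sense of what "one word at a time" means in practice, here's a minimal sketch of greedy, token-by-token generation. The choice of GPT-2 via the Hugging Face transformers library is mine for illustration; it is not the model Meta is studying.

```python
# A minimal sketch of autoregressive, token-by-token generation.
# GPT-2 is an illustrative stand-in, not Meta's model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer.encode("The experiment showed that", return_tensors="pt")

# One token at a time: predict the next, append it, repeat.
# There is no long-range plan for how the sentence will end.
for _ in range(12):
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()          # greedy: most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```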

Researchers at Oak Ridge National Lab are getting in on the AI fun as well. The team created a neural network that could predict material properties from a dataset of quantum chemistry calculations, then inverted it so that they could input desired properties and have it suggest candidate materials.

"Instead of taking a material and predicting its properties, we wanted to choose the ideal properties for our purpose and work backward to design for those properties quickly and efficiently with a high degree of confidence. That's known as inverse design," said ORNL's Victor Fung. It seems to have worked, but you can check for yourself by running the code on GitHub.
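As a rough illustration of the inverse-design idea: freeze a trained property predictor and optimize the input itself toward a target property. The tiny network and 16-number descriptor below are stand-ins I made up; they are not ORNL's model, whose real code is the one worth running.

```python
# Inverse design sketch: keep a trained property predictor fixed and
# search over *inputs* to hit a target property via gradient descent.
import torch
import torch.nn as nn

predictor = nn.Sequential(  # stand-in for a trained property model
    nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1)
)
for p in predictor.parameters():
    p.requires_grad_(False)  # the model stays fixed; we optimize the input

target = torch.tensor([[2.5]])                        # desired property value
descriptor = torch.randn(1, 16, requires_grad=True)   # candidate material
opt = torch.optim.Adam([descriptor], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = (predictor(descriptor) - target).pow(2).mean()
    loss.backward()
    opt.step()

print("proposed descriptor:", descriptor.detach())
```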

View of the top half of South America as a map of canopy height.

The image is from ETHZ.

This project uses data from two satellites to estimate the heights of tree canopies around the globe. Combining the two in a neural network yields an accurate global map of tree heights up to 55 meters.
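A map like this is, at heart, per-pixel regression: image patch in, height per pixel out. Here's a minimal sketch of that framing; the architecture, band count, and patch size are my illustrative assumptions, not the ETHZ model.

```python
# Per-pixel canopy-height regression sketch: a small fully convolutional
# network maps a multispectral patch to a height value for every pixel.
import torch
import torch.nn as nn

class CanopyHeightNet(nn.Module):
    def __init__(self, in_bands=12):  # e.g. a Sentinel-2-like band count
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # one height value per pixel, in meters
        )

    def forward(self, x):
        return self.net(x).clamp(min=0)  # heights cannot be negative

patch = torch.randn(1, 12, 64, 64)       # fake 64x64 image patch
height_map = CanopyHeightNet()(patch)    # (1, 1, 64, 64) height estimates
print(height_map.shape)
```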

Being able to survey trees regularly at a global scale is important for climate monitoring: without good maps of where trees are, we can't accurately estimate how much carbon we release when we cut them down.

You can easily explore the data in map form.

This next project involves creating large-scale simulated environments for virtual autonomous vehicles to traverse. They might have saved some money by getting in touch with the makers of the game SnowRunner, which does basically what they want, for $30.

Images of a simulated desert and a real desert next to each other.

The image is from Intel.

The goal of RACER-Sim is to develop off-road vehicles that already know what it's like to rumble over a rocky desert and other harsh terrain. The program will focus on creating environments, building models in the simulator, and transferring the skills to physical robotic systems.

In the field of AI-assisted pharmaceuticals, MIT has a model that only suggests molecules that can actually be made; after all, a chemist can only test a molecule's disease-fighting properties if they can actually synthesize it.

Can you make it without the horn?

The MIT model guarantees that molecules are composed of materials that can be purchased, and that the chemical reactions between those materials follow the laws of chemistry. It would be great to know that the miracle drug your AI is proposing doesn't require any exotic matter.
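One way to get such a guarantee is to build candidates forward from a catalog of purchasable building blocks using known reaction templates, so every proposal carries its own recipe by construction. Here's a toy sketch of that idea using RDKit; the three-item catalog and single amide-coupling template are my illustrative assumptions, not MIT's actual model.

```python
# Toy "synthesizable by construction" sketch: enumerate products reachable
# from purchasable blocks via one known reaction template.
from rdkit import Chem
from rdkit.Chem import AllChem

catalog = ["CC(=O)O", "CCN", "c1ccccc1N"]  # purchasable blocks (SMILES)
amide_coupling = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OH].[N!H0:3]>>[C:1](=[O:2])[N:3]"  # acid + amine -> amide
)

blocks = [Chem.MolFromSmiles(s) for s in catalog]
products = set()
for acid in blocks:
    for amine in blocks:
        for (prod,) in amide_coupling.RunReactants((acid, amine)):
            try:
                Chem.SanitizeMol(prod)   # reject anything chemically invalid
            except Exception:
                continue
            products.add(Chem.MolToSmiles(prod))

print(products)  # every product comes with an implicit synthesis route
```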

The University of Washington, MIT, and others are working on teaching robots to interact with everyday objects, something we hope will become commonplace in the next couple of decades. The problem is that it's hard to capture how people interact with objects in high enough fidelity for a robot to learn from; doing so normally involves lots of data annotation and manual labeling.

The new technique requires only a few examples of a person grasping an object for the system to learn to do it on its own. Normally it would take hundreds of examples, or thousands of repetitions in simulation, but this system needed just 10 human demonstrations per object to effectively manipulate it.

The image is from MIT.

It achieved an 85 percent success rate with that minimal training. It's limited to a few object categories for now, but the researchers hope it can be generalized.
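In spirit, this is few-shot imitation learning: fit a policy to a handful of demonstrations. Below is a deliberately tiny behavior-cloning sketch with synthetic stand-ins for the 10 human demos; nothing here is the UW/MIT system's actual code.

```python
# Behavior-cloning sketch: fit a small policy that maps object features
# to a grasp pose, using only 10 (synthetic) demonstrations.
import torch
import torch.nn as nn

demos_x = torch.randn(10, 128)   # 10 demos: per-object feature vectors
demos_y = torch.randn(10, 7)     # grasp pose: xyz position + quaternion

policy = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 7))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(demos_x), demos_y)
    loss.backward()
    opt.step()

new_object = torch.randn(1, 128)
print(policy(new_object))  # predicted grasp pose for an unseen object
```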

There's promising work from DeepMind that combines visual knowledge with linguistic knowledge, so that ideas like "three cats sitting on a fence" have a sort of shared representation between text and imagery. After all, that's how our own minds work.

Flamingo, their new general-purpose model, can perform visual identification but also engage in dialogue, not because it is two models in one, but because it marries language and visual understanding in a single model. This kind of approach produces good results, but it is still highly experimental and computationally intense.
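For a flavor of how one model can marry the two, here's a toy fusion sketch: project image features into the text embedding space, then let text tokens attend to them via cross-attention. The shapes and modules are my illustrative assumptions, not DeepMind's actual architecture.

```python
# Toy vision-language fusion: text tokens cross-attend to projected
# image features, so later language layers can "see" the image.
import torch
import torch.nn as nn

d_model = 256
vision_feats = torch.randn(1, 49, 512)     # e.g. patch features from a vision encoder
text_tokens = torch.randn(1, 12, d_model)  # embedded text tokens

project = nn.Linear(512, d_model)          # map vision features into text space
cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

img = project(vision_feats)
fused, _ = cross_attn(query=text_tokens, key=img, value=img)
text_tokens = text_tokens + fused          # residual: text now carries image info

print(text_tokens.shape)  # ready for the language model's next layers
```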