An Amazon Echo Show on a wood table. Alexa’s getting better at doing its thing without your input. Image: Amazon

The goal of the changes is to make the virtual assistant easier to use. Thanks to a change in how Alexa handles routines, developers can now create automations and recommend them to users, rather than users having to build their own by hand. Amazon is also trying to make sure that the most important commands, like stop, work no matter which wake word you use.

During its live developer event, Amazon announced a number of new features for its voice-activated assistant. Alexa devices can learn more about their surroundings, plug into a simpler setup process, and support Matter and other smart home systems.

If users can't figure out how to use the new features, though, they're useless. Rather than building new UIs, the team is leaning toward simply making the system do the work for you. Amazon's Aaron Rubenson says the team wants to make automation and proactivity available to everyone who interacts with the service.

The most obvious example is the change to routines. Developers can now build routines into their skills and offer them to users based on their activity. "Jaguar Land Rover is using the Alexa Routines Kit to make a routine they call 'Goodnight,'" Rubenson says, "which will make sure the car is locked, remind customers about the charge level or fuel level, and then also turn on guardian mode." That's the kind of automation few users would do the work to create for themselves, but now all they have to do is turn it on. A rough sketch of the skill-side logic is below.
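
To make that concrete, here's a minimal sketch of what the skill-side logic behind a "Goodnight" routine could look like, using Amazon's real ASK SDK for Python. The vehicle helpers (lock_doors, get_charge_percent, enable_guardian_mode) and the GoodnightIntent name are hypothetical stand-ins for a carmaker's own backend, not Jaguar Land Rover's or Amazon's actual code.

```python
# Sketch of the skill logic a "Goodnight" routine might trigger.
# Uses the real ASK SDK for Python (pip install ask-sdk-core); the
# vehicle helpers below are hypothetical stand-ins.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.handler_input import HandlerInput
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


def lock_doors() -> bool:
    """Hypothetical call into the vehicle's telematics API."""
    return True


def get_charge_percent() -> int:
    """Hypothetical battery/fuel-level query."""
    return 82


def enable_guardian_mode() -> None:
    """Hypothetical security-mode toggle."""


class GoodnightIntentHandler(AbstractRequestHandler):
    """Runs the three steps of the 'Goodnight' routine in order."""

    def can_handle(self, handler_input: HandlerInput) -> bool:
        return is_intent_name("GoodnightIntent")(handler_input)

    def handle(self, handler_input: HandlerInput) -> Response:
        lock_doors()
        charge = get_charge_percent()
        enable_guardian_mode()
        speech = (
            f"Your car is locked, the battery is at {charge} percent, "
            "and guardian mode is on. Goodnight!"
        )
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(GoodnightIntentHandler())
handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```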

Rubenson says that people who use routines are some of Alexa's stickiest and most consistent users, and he wants them to keep having the knobs they need to build their weirdest and craziest automations. But he acknowledges that not everyone will take that step, and adding proactivity to routines could make them useful to far more people.

Voice assistants are a tricky UI problem, so Amazon’s just trying to make them automatic

Since they don't offer a series of buttons or icons, voice assistants have always presented a tricky UI problem, and the team has tried to design Alexa so it's hard to say the wrong thing. That's part of the thinking behind its multi-assistant support, which lets companies put their own virtual helpers on a device alongside Alexa: you can speak to your headphones by saying either "Hey Skullcandy" or "Alexa."

Amazon is also working on a feature called Universal Commands, which will let an Alexa device handle certain requests no matter which assistant you address. You could say "Hey Skullcandy, set a timer for 10 minutes," and even if Skullcandy's assistant can't set timers, Alexa will do it for you. Rubenson said things like timers and stop commands should work even if you haven't been interacting with Amazon's voice assistant. The feature will roll out over the next year, he says.
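
Conceptually, Universal Commands amounts to a fallback dispatch: if the assistant you woke can't handle one of a small set of universal requests, the device hands the request to Alexa instead. Here's a toy sketch of that idea in Python; every name in it is hypothetical, since Amazon hasn't published how the real routing works.

```python
# Toy illustration of the fallback idea behind Universal Commands: if the
# assistant you addressed can't handle a universal request like "stop" or
# a timer, the device hands it to Alexa. Entirely hypothetical, not
# Amazon's implementation.
from typing import Callable, Dict, Optional

UNIVERSAL_INTENTS = {"stop", "set_timer"}  # commands every device should honor


class Assistant:
    def __init__(self, name: str, handlers: Dict[str, Callable[[str], str]]):
        self.name = name
        self.handlers = handlers

    def try_handle(self, intent: str, utterance: str) -> Optional[str]:
        handler = self.handlers.get(intent)
        return handler(utterance) if handler else None


def dispatch(addressed: Assistant, fallback: Assistant,
             intent: str, utterance: str) -> str:
    """Route to the assistant the user woke, falling back for universal intents."""
    result = addressed.try_handle(intent, utterance)
    if result is None and intent in UNIVERSAL_INTENTS:
        result = fallback.try_handle(intent, utterance)
    return result or f"Sorry, {addressed.name} can't do that."


skullcandy = Assistant("Skullcandy", {"play_music": lambda u: "Playing music."})
alexa = Assistant("Alexa", {"set_timer": lambda u: "Timer set for 10 minutes.",
                            "stop": lambda u: "Stopped."})

# "Hey Skullcandy, set a timer for 10 minutes" -> Alexa picks it up.
print(dispatch(skullcandy, alexa, "set_timer", "set a timer for 10 minutes"))
```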

Of course, these features will only catch on if developers actually implement them. Amazon is changing its revenue-sharing agreement so that developers keep 80 percent of their revenue instead of 70 percent, and it says it will reward developers for taking the actions it knows lead to more sales. In other words, Amazon is paying developers to improve their skills.

Amazon is trying hard to incentivize developers to care about its new tech

It's hard to figure out what voice assistants can actually do, so most users default to music and lights, which gives developers little reason to build anything more ambitious. By making the platform more powerful and doing more of the work on users' behalf, Amazon hopes to get that flywheel spinning in the other direction. Ideally, you won't have to do anything at all.