Microsoft's generative AI model Muse isn't creating games – and it's certainly not going to solve game preservation, expert says

Last night, Microsoft trumpeted the announcement of Muse, a new “generative AI breakthrough” designed to aid “gameplay ideation”.

The company also published some grainy-looking gifs of AI-generated gameplay footage, based on Xbox studio Ninja Theory’s multiplayer game Bleeding Edge. (The images were tiny, presumably to avoid highlighting some of the wonkiness AI is known for.)

Finally, Microsoft made the claim that Muse would “radically change how we preserve and experience classic games in the future”, and that the algorithm could be used to make older games compatible with “any device”.

Reaction to Microsoft’s announcement was swift, with social media awash with posts pointing out that, yes, Microsoft was absolutely jumping on the tech industry’s AI buzzword bandwagon – but also suggesting that Xbox was about to start using Muse to pump out AI slop.

Thankfully, someone has done a better job than Microsoft itself of explaining what Muse is actually doing – and that person is AI researcher and game designer Dr Michael Cook.

For background, Cook is an expert on the subject of AI in games – he’s the guy who, a full decade ago, built an artificial intelligence to see if it could win a game jam, and his work has been covered by Eurogamer on several occasions. He’s also a senior lecturer at King’s College London, and has published and spoken extensively on the subject of AI.

As Cook lays out in an extended blog post on Muse, the AI model is not generating gameplay or creating its own original ideas.

In short, Muse was fed seven years of video footage of people playing a single game – in this case, Bleeding Edge – to see if it could then generate further gameplay footage of it.

(If this all sounds familiar, it’s similar to the process Google used to generate footage of classic first-person shooter Doom last year.)

So, what’s the point of all this? Well, as Cook writes, it’s so Microsoft researchers could ask Muse to predict what might come next if changes to a game were made.

“They made a tool that let game developers edit a game level using existing game concepts like adding in a jump pad to a place where there wasn’t one before,” Cook explains.

“They then gave this new level to their model, and asked it to show what it thought the footage of a player playing from this new position would look like.”

In other words, the idea is that Muse could be used as a shortcut for predicting and visualising how gameplay might adapt to a particular change made by a developer. And, crucially, that developer is still a human.

Muse’s AI-generated gameplay footage of Ninja Theory multiplayer game Bleeding Edge. | Image credit: Microsoft

Microsoft’s research paper on Muse says the AI model needs to demonstrate persistency, consistency, and diversity in order to succeed. In other words, if a human makes a change, the effects of that change need to stick around in the generated footage whatever else is going on (persistency), the footage needs to remain faithful to how the game actually behaves (consistency), and the model needs to account for the range of ways different players might act (diversity).