
AI Quick Tips 303: Google’s Project Genie

A graphic title card on a dark blue background. On the left is an orange robot head with a speech bubble containing the text 'AI'. To the right is a light blue circle containing the words 'Quick Tips'.
This will be hard to put into words.  But I’ll try.

Google Labs has another experiment that is testing out world models.  It’s called Project Genie (as you probably guessed from the title).  In their words, they are creating “Interactive worlds.  Generated in real-time”.

You can basically create a world, create a character to move through that world, then explore the world with that character.  Every time you turn or move, the world around you is dynamically generated.  If you’ve ever created video game worlds, done 3D modeling, or even watched TV shows or movies, then you have an idea of how it usually works: someone builds a world (no matter how small), and then you can move within that world.  Sometimes there are clever angles or camera tricks to make things seem larger than they actually are.

This isn’t any of that.  This basically starts with an image and creates everything else a fraction of a second before you experience it.  

In case you are wondering why this is even a category, here’s a brief summary of how the space came to be.
 
One of the big issues with language models (the technology behind the AI tools you likely use) is that they “hallucinate,” aka make up information.  One reason they do this is that they don’t understand the “world” they exist in.
 
Pair that with vague instructions and they end up guessing (or hallucinating).  World models aim to address this by building the world the model lives in first, then placing the model into that world before you give it any kind of prompt or instructions.

It gets much more complicated than that but I think that is a good starting point.

If you are interested in learning more about the experiment, here is the site: labs.google/projectgenie

Or, watch this video ↓

An illustration with a deep blue background features an orange, furry monster with a sly expression and a thought bubble containing a lightbulb and gear on the left. Lines connect it to a large, glowing red YouTube play button icon in the center. On the right, a blue and white robot character gestures towards the YouTube icon.

You Don’t Have to Subscribe.

But this is where the “ohhh, that’s how they did that” moments live.
