You’ve probably seen many of these visual experiments floating around the web (“Weird Dall-E Mini Generations” is a good place to find some of the more unusual examples). Some are extremely useful and applicable to new environments, while others are just weird, mind-bending interpretations of how the AI system sees the world.

Soon, you could have another way to experiment with this type of AI interpretation, via Meta’s new “Make-A-Scene” system, which uses both text prompts and input sketches to create completely new visual interpretations.

As Meta explains:

“Make-A-Scene enables people to create images using text prompts and free-form sketches. Previous AI systems that generate images typically used text descriptions as input, but the results could be difficult to predict. For example, the text input ‘a painting of a zebra riding a bicycle’ may not reflect exactly what you imagined – the bike might be sideways, or the zebra might be too big or too small.”

Make-A-Scene seeks to solve this by providing more controls to help guide your output – so it’s like Dall-E, but, in Meta’s view at least, a bit better, with the ability to use additional prompts to steer the system.

“Make-A-Scene captures the scene layout to enable sketches as input. It can also create its own layout with text-only prompts, if the creator so chooses. The model focuses on learning key aspects of the image that are most likely to be important to the creator, such as objects or animals.”

Such experiments highlight just how far computer systems have come in interpreting diverse inputs, and how much AI networks can now understand about what we communicate and mean, in a visual sense. Ultimately, this will help machine learning processes learn and understand more about how people see the world. That might sound a little scary, but it will ultimately help power a range of functional applications, such as automated vehicles, accessibility tools, enhanced AR and VR experiences, and more.
Although, as these examples show, we’re still quite far from AI thinking like a person, or becoming sentient with its own thoughts. But maybe not as far as you’d think. Either way, these examples serve as an interesting window into the ongoing development of AI, which is just for fun right now, but could have significant implications for the future.

In its initial testing, Meta gave various artists access to Make-A-Scene to see what they could do with it. It’s an interesting experiment. The Make-A-Scene app isn’t available to the public yet, but you can access more technical information about the project here.