
Nobody in the Room Knew They Were Watching AI

  • Writer: Christopher Nichols
  • 20 hours ago
  • 4 min read

This post is adapted from a presentation that Monstrous Moonshine delivered at the Television Academy AI Summit 2026.


We stood in front of a room full of television's most creative people and told them we were going to show them something. Writers, directors, producers, executives. People who had spent the better part of the past two years being told that AI was coming for their jobs, their credits, their craft. The anxiety in the room was real. Every panel that day was a variation on the same question: how do we protect ourselves?


We had a different question. What if you didn't need to?


Carolyn Giardina moderates Chris Nichols and Daniel Thron from Monstrous Moonshine at the Television Academy's AI Summit 2026

The Reveal Nobody Saw Coming


We walked them through our workflow. A director on a bluescreen stage. A tracked camera. A fully rendered digital world composited in real time. Performers moving through environments that responded to them. A sun that moved when you asked it to. Lighting that changed in an instant. The creative team making decisions in the moment, inside the world they were building.


Nobody in the room realized they were watching AI at work.


Midway through the presentation, we dropped a number. Eight out of every nine pixels on that screen were AI-generated. The room shifted. Because nothing they had seen looked like what they feared. No uncanny faces. No hallucinated backgrounds. No waiting. No guessing. The AI was invisible, because it was doing its job.


DLSS both upscales and denoises the fully raytraced image from Chaos Vantage using AI in real time, turning a noisy, blocky image into a clean, high-resolution one
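The eight-of-nine figure follows directly from the upscaling arithmetic. As a hedged back-of-envelope sketch (assuming a 3x linear scale factor, as in DLSS's Ultra Performance preset, which renders at one-third resolution per axis):

```python
def ai_generated_fraction(linear_scale: int) -> float:
    """Fraction of output pixels reconstructed by the upscaler.

    A linear scale of N means the renderer produces 1/N^2 of the
    final pixels; the AI reconstructs the rest.
    """
    return 1.0 - 1.0 / (linear_scale * linear_scale)

# A 3x linear upscale: 1 of every 9 pixels is raytraced,
# the other 8 are AI-reconstructed.
print(ai_generated_fraction(3))  # 8/9 ≈ 0.889
```

In other words, only one pixel in nine on that screen came straight from the raytracer; the rest were inferred in real time, which is why the AI never announced itself.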

Two Kinds of AI


Most of what the industry calls "AI in production" is generative and iterative. You describe what you want. You wait. You get something close. You describe again. Prompt and pray. It puts a layer of abstraction between the creative and the result. The tool mediates. The filmmaker waits.


That is not what we do.


What we built is real-time. There is no prompt. There is no waiting. The creative impulse and the result are simultaneous. A director says "move the sun" and the sun moves. That is not a faster version of generative AI. It is a different category entirely. The technology disappears. The director directs.


To learn more about how VidViz was used by Monstrous Moonshine in developing June July, read: VidViz: Planning the Heist for June July


Richard Crudo ASC and Daniel Thron looking at Chaos Vantage being composited live during a VidViz session for June July

This distinction matters more than almost anything else being discussed in the industry right now. When executives and creators conflate real-time AI with generative AI, they miss what is actually possible. One tool replaces creative judgment. The other amplifies it.


We Didn't Invent This. We Remembered It.


My co-presenter Daniel Thron made a point that stopped the room. Sidney Lumet, in his book *Making Movies*, writes at length about rehearsal. About how much time he spent with his cast before a single frame was shot on films like 12 Angry Men and Network. Performances that feel alive and urgent, decades later, because they were found before the camera rolled. The actors knew the material so deeply that the shoot became about capturing truth, not discovering it.


Film forgot that somewhere along the way. When budgets got large enough, you could afford to figure it out on the day. Except you cannot, really. You just pay more for the uncertainty.


What our workflow does is give rehearsal back its power, with the actual world inside the room. The director is not imagining the location. They are in it. The performer is not blocking against tape marks on a floor. They are moving through the space. Same fundamental principle Lumet understood decades ago. Radically expanded capability.


Sidney Lumet and Martha Pinson on the set of ‘Prince of the City.’ Photo by Louis Goldman



What Liberation Actually Looks Like


The instinct, the accident, the happy mistake. That is where great filmmaking lives. The moment a director sees something they did not plan for and says "do that again." The moment a performer finds something true in a space and the whole scene changes.


Those moments cannot happen when you are waiting for a render. They cannot happen when you are describing your vision to a tool and hoping it understands. They happen in real time, between people, inside a world that responds.


Our workflow puts that back in the room. And it does it before you are paying for the stage, the crew, the location. Every creative decision made in pre-production is already baked in by the time the shoot day arrives. You are not discovering the film on the most expensive day. You are capturing what you already know.



This Is Not Just for Big Productions


The natural assumption is that this kind of technology is for tentpole productions with unlimited budgets. It is not. The fundamentals apply to a streaming pilot, an indie feature, a short film, a commercial. If you can rehearse, you can use this. If you can make your creative mistakes early, when they are cheap, you have an advantage regardless of your budget.


We built this workflow on an independent production. That is not a limitation. That is the point. Indie productions are where real pipeline innovation happens, because there is no legacy infrastructure to protect and no one to tell you it cannot be done that way.


The Vision We Offered the Room


We did not stand at the Television Academy and tell that room that AI would not take their jobs. That felt like a small answer to a large question.


We offered them something bigger. A future where the creative is more in control than they have ever been. Where technology recedes into the infrastructure and instinct leads. Where the question is not "how do we protect ourselves from AI" but "which AI puts us back in the driver's seat."


The room that came in worried left with something else. A different way to think about what these tools are for, and who they are for.


The creative is back in the room. That is the future we are building.
