
My recent conversation with Rick Carter, lead production designer for Hollywood directing legends Steven Spielberg, Robert Zemeckis, James Cameron and J. J. Abrams, reminds me of the mantra coined by another media guru, John Cummins: “Don’t do what you can do, do what only you can do.” It became clear during the exchange that Carter believes in the power of AI to help creatives express their unique perspectives through a collaborative synthesis of new concept-spaces and, to paraphrase the immortal words of Captain James T. Kirk, “to explore strange new worlds; to seek out new creations; to boldly go where no one has gone before.”
What is so notable about Carter’s advocacy is that, as the force behind some of the most successful movies ever made, one might expect him to be a fierce defender of the status quo that has underpinned that success. Instead, he argues fervently for the adoption of AI tools and methods, for three primary reasons:
- To allow dynamic, unconstrained exploration of novel ideas and concepts beyond one’s own conception
- To reduce dependence on technical capabilities and tools whose cost or required expertise is otherwise prohibitive
- To reduce the number of compromises attendant to the multi-level corporate approval process that defines the creation of most high-production-value movies
The last of these is perhaps intuitively understood by anyone who has worked in a corporate environment and is familiar with the popular aphorism that it takes multiple “yeses” to get a project approved but only one “no,” making the probability of getting anything novel or original approved vanishingly small. Carter adds a “Hollywood” perspective on these typical organizational dynamics by noting that creative specifics can be critiqued by executives for myriad reasons: personal preference, maximizing return on investment by minimizing costs, replicating what is currently in vogue, or a desire to extend existing franchises with a known track record and audience. This is consistent with Doug Shapiro’s observations in our recent interview about the preference for reusing existing intellectual property in sequels and spin-offs over supporting original concepts and narratives. When these biases are applied multiple times across the various layers of a movie studio’s management, the process of greenlighting a production can be a deeply frustrating one for the original creators.

The second reason above—the cost of the production process—is also well characterized by Shapiro, who estimates that above-the-line expenses, including talent and key creative roles, constitute 15 percent to 25 percent of a film’s budget and that below-the-line costs, including production and technical expenses, make up the remaining 75 percent to 85 percent. For blockbuster movies, this equates to around $1 million per minute of final film running time. In contrast, with the advent of AI and Generative AI tooling and the associated workflow optimization, these costs could be reduced by between 90 percent and 99 percent in many cases.
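To put those percentages in dollar terms, here is a minimal back-of-the-envelope sketch in Python. It assumes an illustrative 120-minute blockbuster at roughly $1 million per finished minute, uses the midpoints of Shapiro’s budget ranges, and applies the 90 to 99 percent savings only to below-the-line costs; the runtime and that allocation of the savings are my own simplifying assumptions, not figures from the interview.

```python
# Back-of-the-envelope illustration of the budget figures cited above.
# Assumptions (mine, not from the interview): a 120-minute blockbuster,
# with the AI-driven savings applied only to below-the-line costs.

RUNTIME_MINUTES = 120                 # illustrative blockbuster runtime
COST_PER_MINUTE = 1_000_000           # ~$1M per minute of finished film
total_budget = RUNTIME_MINUTES * COST_PER_MINUTE      # ~$120M

above_the_line = total_budget * 0.20  # midpoint of the 15-25% range
below_the_line = total_budget * 0.80  # midpoint of the 75-85% range

for savings in (0.90, 0.99):          # the 90-99% reduction cited
    new_total = above_the_line + below_the_line * (1 - savings)
    print(f"{savings:.0%} below-the-line savings: "
          f"${new_total / 1e6:.1f}M total, down from ${total_budget / 1e6:.0f}M")
```

Even on these rough assumptions, a roughly $120 million production shrinks to somewhere in the $25 million to $34 million range, which gives a sense of the scale of change being described.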
So, I find that the most compelling insight from the conversation with Rick Carter centers on the ability of AI to act as a “co-journeyist” in the creative process, allowing “human-possible” imaginative flights of fancy to be complemented by “AI/machine impossible” explorations of alien or alternative realities. Carter grounds his argument in the fact that the creative process in general—and movie-making in particular—has always been a process of “prompting” between one person and another: an interactive exchange of ideas and concepts that attempts to marry or coalesce the creative “world model” of one person with that of others. It is this co-prompting that results in the “magic” of the final creation that profoundly resonates with other humans across space, cultures and even time. Carter illustrates this process by outlining how he collaborated with directors to produce such memorable pieces of cinematic art as Jurassic Park, Avatar, Star Wars and Back to the Future: “Spielberg, Zemeckis, Cameron and JJ Abrams heard what I talked about, the way I talked about it… and they could make things into tangible things that went out to become these movies that actually have an entityness after all these years.”
He goes on to point out that such prompting “has always been in the art form … ideas and the process and the collaboration, there’s a lot of iteration within the prompting … when you design any movie, you’re prompting the artist you’re working with, you’re being prompted, as a production designer, by the director as to what he wants or she wants to see. And then you come up with things, and there’s a dialog. It’s back and forth.”
Importantly, he envisages a human-to-machine prompting process that is similarly collaborative: “The prompting [of AI will] not just be like ‘give me this’ and then expect it’s going to all be fine—the refinement and exploration of the subject [of] the movies, finding what they are, whatever the medium is, I think that’s the place I would go to conceptually.”
And this is not just idle conjecture; it is the process he has been engaged in with tools like Midjourney and Runway since he discovered early versions of them by “creative necessity,” when he found himself isolated from the inter-human prompting process during the COVID-19 pandemic. As is invariably the case, necessity was the mother of invention or, in this case, the mother of experimentation with artificial assistants that became collaborators.
So, what is the role he sees for the human side of the equation? He is an advocate for the David Eagleman view that humans crave an interaction with a beating heart that reaches through the medium, but he argues that the way he has experienced AI is as a reflection of his heart, or a modified echo of it, saying: “it is an adjunct to what I’m thinking, it starts to interface with how I’m seeing things, and it stimulates me to move further in that direction [but] I know it has no heart; I think [it] has to be reflective of your heart.”
An extension of this premise was eloquently expressed in a recent article by JoRoan Lazaro, executive creative director of the advertising firm Monks, which argued that while the rise of AI could mean a “coming creative catastrophe,” this will be tempered by the fact that “the [AI] algorithm can give you what you want, but only a human can give you what you didn’t know you needed.”
But Carter finds more to the interaction than simple reflection or a modified echo of the human creative force; he finds that working with AI also forces a deeper intellectual contemplation that acts as a catalyst for greater creativity: “the really basic thing I’ve had with AI is the experience of the paradigm shift. And the paradigm shift is to say I become more aware of my own consciousness, and thus I start to actually feel empowered to create more sometimes, in relationship to what I’m doing with the AI.”
Our conversation also contemplated the bounding constraints or conditions that should be imposed on such machine interactions. What emerged was the concept of a set of “filters,” which can be equated to the System 1-type heuristics that humans inherently apply to inter-human interactions and that we will therefore unavoidably apply to our interactions with AI tools in order to assess their validity and reliability in our world.
The first such filter is “trust,” which is perhaps the most obvious and inherent of all human gauges. Carter proposes that “trust is a good filter. Just to look at it and say, do I trust this? Why don’t I trust it? Or what would make me trust it? What do I trust? Who am I? You see when you have a filter like that, that you’re aware of your own consciousness, then what happens is you can see things more.”
He goes on to connect this filter to the experience of the “uncanny valley”: essentially, the lack of human-physical experiential realism triggers an innate trust-violation reaction that cannot be overcome. Moreover, Carter recognizes the intrinsic paradox of this phenomenon: the closer one gets to realism, the harder the criterion is to satisfy, as the perceived risk is disproportionately higher.
The second filter is “effort”—one about which I have written previously. This is not just about physical effort: as Carter points out, there is a history of artists (for example, Rothko, Warhol and Pollock) who applied less physical effort in the creation of their work but considerable conceptual or creative effort, giving rise to new artistic movements. For AI, I think the equivalent effort is therefore best assessed not by computational effort or model complexity but by the originality of the feedback or stimulus it provides to the human prompter.

We also agreed that “shared value” is an important filter, and that social media virality is a clear digital manifestation of this human tendency or need. But Carter also sees it extending over space and time, observing: “I discovered with some of the works that I’ve gotten to be a part of, that they had their moment in time, then they do whatever they do, but some people then later pick up on them and still champion them enough so that they live and breathe beyond the narrative and beyond the characters.” This is, in essence, a secondary metric by which the output can be judged beyond the intrinsic stimulation it provides to the creator: the measure of the resonance achieved with the larger human community.
An intriguing additional factor also emerged from the discussion—the role of fear. Carter points out that Spielberg learned to operate in a zone of discomfort in order to maximize his originality and not simply replicate his earlier successes: “Steven Spielberg makes a point of not knowing what he’s going to do a lot of the time. He used to storyboard a lot. Now he makes a big point of not knowing at the beginning of a project, at any point … knowing that what he’s getting into scares him. He doesn’t know. And then [in] a very Socratic way [he] leans into the ‘I don’t know.’ He’s fearful, he feels it, but then he leans in. [He says] I don’t know and that’s the point; it’s why I’m making the movie.”
So, this is fear of the unknown acting as a creative catalyst. And Carter argues that the unknown that is AI is, by extension, a new agent of creative stimulation. But there is another aspect to fear: fear of replacement. In this regard, AI is naturally seen as a negative agent. This clear tension between risk and opportunity elegantly summarizes the inherent conundrum that AI represents, not only to the creative community but to humankind in general.

In conclusion, there is no doubt that Rick Carter comes out on the positive side of the equation, seeing the creative opportunities and stimulus provided by AI, and the potential for a real democratization of creativity, as outweighing the apparent risks. He recognizes that the shift will be driven primarily by new creatives who were previously unable to fully explore or express their vision, but he is also encouraged by the adoption of AI by some of the Hollywood establishment, for example the Russo brothers’ creation of an AI Lab as part of their AGBO venture.
All in all, I find Rick Carter’s erudite analysis profoundly logical. He combines his overt success as a traditional practitioner of movie production with his nascent experience of AI as a creative collaborator to form a compelling vision of the future: one in which human creatives and AI work in symbiosis to co-explore new horizons and “strange new worlds.”