AI-Created Actors: How Virtual Performers Are Changing the Future of Casting
In an era where artificial intelligence (AI) reshapes industries at every turn, the world of film and entertainment is witnessing one of its most provocative transformations: AI-created actors. Virtual performers—digital, synthetic, or AI-augmented characters that can act, emote, and “perform”—are no longer sci-fi experiments. They are increasingly entering casting decisions, commercials, music videos, and even feature films. But what does this mean for human actors, casting directors, filmmakers, and audiences?
AI-created actors are changing the future of casting by introducing new flexibility, control, and cost dynamics—but also raising serious legal, ethical, and creative challenges. In this blog post, we’ll explore how this shift is unfolding, the technologies behind it, the benefits and risks, key lessons and actionable tips, and a forward look at what the future might hold.
Throughout, we’ll treat “AI-created actors” as interchangeable with related terms such as “virtual performers,” “virtual actors,” “synthetic talent,” “digital doubles,” and “digital casting.” Let’s dive in.
The Rise of AI-Created Actors
Virtual performers go mainstream
Over the past few years, studios and tech innovators have moved from simple CGI characters to actors powered by AI. These virtual performers can resemble human actors, mimic expressions, speak lines, move, and even interact in scenes with human actors. Some are built as digital doubles—imitations or extensions of real actors—while others are entirely synthetic, created as new entities.
In some productions, studios are investing in these digital doubles to license, manipulate, or deploy performances when the original human actor isn’t available.
For example, resurrecting iconic actors from the past using archival footage and AI is becoming feasible.
In short: AI-created actors are moving from novelty to practical tool.
Why now? The technological enablers
Key advances make this possible:
Deep learning and neural networks that learn to map speech to realistic lip movement and facial expressions
Speech synthesis and voice cloning that generate natural-sounding audio in specified voices
Motion capture, facial scanning, and 3D modeling that define how a face or body moves
Neural rendering / face reenactment frameworks like Neural Voice Puppetry or newer models that sync voice with facial movement
Multi-agent simulation systems that coordinate character performance, camera movement, and scene direction in virtual spaces (e.g., FilmAgent)
Together, these technologies enable virtual performances with a richness and nuance that earlier CGI could not match.
Real examples: Tilly Norwood & more
One of the most talked-about cases is Tilly Norwood, an AI-generated “actress” created by the studio Xicoia (Particle6).
She generated buzz at the Zurich Film Festival, and talent agencies reportedly expressed interest in representing her.
However, there has been strong backlash from actors’ unions and creatives—some calling her a danger to the craft of acting.
In the music and influencer space, Naevis, a virtual AI idol created by SM Entertainment, performs using AI-synthesized voice and visuals, not tied to any one real person.
AI voice models also power dubbing, localization, and other voice-acting work.
These examples show that AI-created actors aren’t a distant concept—they are here, challenging norms across entertainment domains.
Benefits of Virtual Performers in Casting
Flexibility and schedule independence
Virtual actors don’t need rest, travel, or breaks. They can shoot continuously, be available across time zones, and take on multiple projects simultaneously. This flexibility is especially useful for global brands, commercials, or productions requiring constant availability.
Cost efficiency and scalability
Once created, a digital performer can be reused across projects without renegotiating or paying per-day rates. There’s no cost for meals, lodging, or travel. Over time, this can yield significant savings, especially for large-scale or international productions.
Creative control and consistency
Directors can manipulate performance precisely: adjust timing, emotion, lighting, and expression after the fact. This control ensures consistency across many takes, reshoots, or language versions.
Safety and technical advantages
Virtual actors can perform dangerous stunts or visual effects sequences without risking human safety. They allow scenes that defy physical constraints—gravity shifts, morphing forms, surreal moves—without needing complex stunt crews or CGI composites.
Extending legacy actors and IP
AI technology enables the resurrection of legacy actors—bringing deceased or unavailable talents into new stories using archival footage, voice samples, and AI reconstruction. Entire cinematic universes can continue using iconic characters.
This also helps IP owners retain continuity: brands or characters can persist long after a human actor is gone.
Challenges and Risks of AI-Created Actors
Authenticity, emotion, and the “uncanny valley”
No matter how advanced the rendering, audiences often sense when a performance is synthetic. The lack of lived experience, subtle spontaneity, human vulnerability, and emotional nuance can result in performances that feel flat, robotic, or uncanny.
Many critics argue that true artistry stems from human consciousness and lived experience—things AI can’t replicate. The emotional “spark” of improvisation or subtle imperfection often eludes synthetic actors.
Legal, rights, and likeness concerns
If a virtual performer is modeled after a real person or uses voice samples from a human actor, issues of consent, ownership, and compensation become thorny. Actors’ unions are pushing back: SAG-AFTRA condemned Tilly Norwood, saying she was trained on countless performances without permission.
In past strikes and contracts, unions insisted that digital replicas or “digital doubles” require clear permissions and licensing.
Estates of deceased actors often control posthumous rights—but how broadly these rights extend into AI is still unsettled.
Job displacement and industry backlash
One fear is that AI actors will replace human performers, especially for background roles, voiceovers, or minor parts. The backlash in Hollywood to Tilly Norwood illustrates industry anxiety.
Many creatives fear that value will shift toward big tech, studios, and AI owners rather than acting talent.
Resistance by unions, contracts limiting AI usage, and public relations risk are significant obstacles.
Technical limitations and cost of creation
While reuse is cost-effective, building lifelike models, training the AI, synthesizing voices, capturing motion, rendering, and integrating the results into scenes remain expensive and technically demanding. The initial investment and infrastructure required make this a barrier for small productions.
Latency, integration with real actor scenes, lighting mismatches, and rendering artifacts are persistent technical challenges. Combining human and virtual performances seamlessly is still a work in progress.
How Virtual Casting Works: Key Processes
Scanning, modeling & motion capture
To create a virtual performer, producers often start with 3D scanning of faces and bodies (photogrammetry, LiDAR, structured light). Motion capture sessions track movement, facial expressions, and subtle micro-movements, allowing the digital model to mimic realistic behavior. These serve as training data for AI to generalize beyond the capture sessions.
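As a rough illustration of how this capture data might be organized before model training, here is a minimal Python sketch; the class and field names are invented for illustration and are not a production schema.

```python
from dataclasses import dataclass, field

@dataclass
class CaptureSession:
    """One scanning / motion-capture session for a single performer.

    Field names are illustrative only; real pipelines use much richer
    formats (e.g., FBX/USD scenes, FACS-coded facial data).
    """
    performer_id: str
    mesh_scans: list[str] = field(default_factory=list)   # photogrammetry / LiDAR outputs
    mocap_takes: list[str] = field(default_factory=list)   # body and facial capture clips
    notes: str = ""                                        # rig, lighting, session context

def build_training_set(sessions: list[CaptureSession]) -> list[str]:
    """Flatten all capture takes into one corpus the AI model trains on,
    so it can generalize beyond the captured sessions."""
    return [take for s in sessions for take in s.mocap_takes]
```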
Voice cloning and speech synthesis
AI voice models can clone voices based on samples. Actors may record audio at multiple pitches, intonations, and emotional states to train the model. The system learns to generate new lines in the actor’s voice while preserving character. This allows rewriting or dubbing without a new recording session.
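As a minimal sketch of what voice cloning looks like in practice, here is an example using Coqui TTS’s open-source XTTS model, one publicly available option (studio pipelines rely on proprietary stacks and far more training data); the file paths are placeholders, and any real performer’s reference clip would require their consent.

```python
# pip install TTS   (Coqui TTS, used here only as an open-source illustration)
from TTS.api import TTS

# Load a multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Generate a brand-new line in the performer's voice from a short, consented reference clip.
tts.tts_to_file(
    text="This line was never recorded in the original session.",
    speaker_wav="performer_reference.wav",   # placeholder path to the consented voice sample
    language="en",
    file_path="new_line_cloned.wav",
)
```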
Neural rendering & synchronization
Once visuals and audio exist, neural rendering systems align lip movement, emotion cues, and expression with the generated voice. Techniques like facial reenactment or identity-aware animation (e.g., Imitator models) handle fine-grained lip sync and expressive coherence.
Virtual casting platforms & AI auditioning
Casting platforms increasingly use AI to pre-filter submissions, analyzing facial expressions, tone, and compatibility with role characteristics.
Virtual casting rooms simulate chemistry reads with AI scene partners or automate scheduling and evaluation. Directors can audition virtual performers the same way they’d audition humans—selecting based on performance metrics, look, and flexibility.
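Each platform’s scoring logic is proprietary, but a hypothetical sketch of the pre-filtering idea might look like this; the metrics and weights below are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    actor_id: str
    expression_match: float   # 0-1: how well the facial performance fits the role brief
    vocal_tone_match: float   # 0-1: similarity of the vocal read to the target
    availability: float       # 0-1: overlap with the shoot window

# Hypothetical per-role weights a casting team might tune; not from any real platform.
WEIGHTS = {"expression_match": 0.5, "vocal_tone_match": 0.3, "availability": 0.2}

def score(sub: Submission) -> float:
    """Weighted sum used to pre-rank submissions before any human review."""
    return (WEIGHTS["expression_match"] * sub.expression_match
            + WEIGHTS["vocal_tone_match"] * sub.vocal_tone_match
            + WEIGHTS["availability"] * sub.availability)

def shortlist(submissions: list[Submission], top_n: int = 20) -> list[Submission]:
    """Return the top-N candidates for the casting director to look at first."""
    return sorted(submissions, key=score, reverse=True)[:top_n]
```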
Maintenance, iteration, and updates
Virtual actors are living assets. They require ongoing updates, re-training, and refinements. As technology improves, models may be enhanced, resolution increased, or features added (e.g., real-time interaction). The owner controls versioning, licensing, and “talent deployment.”
Use Cases: Where AI Actors Are Already Being Deployed
Commercials, branding & advertising
Brands are among the first adopters. Virtual actors offer tight control over brand consistency, 24/7 availability, and multi-market deployment. They can film multiple versions tailored to local markets without new shoots.
Music videos, virtual idols & influencer content
Virtual idols like Naevis use AI-generated visuals and voice to release music, interact with fans, and appear in media without being bound to a physical body.
Virtual influencers, avatars, and hybrid creators increasingly populate social media, releasing content and modeling for brands.
Feature films, scene augmentation & background roles
In major films, AI actors might play secondary roles, crowd scenes, or background characters. AI can also fill in for missing shots or alter performances post-shoot. Studios may resurrect legacy actors (archival footage + AI) to appear in new films.
“Echo Hunter,” a short film produced with AI-generated visuals and performances, even used union actors’ voices as foundations.
Localization, dubbing & post-production use
Virtual actors facilitate dubbing by generating lip-synced voices in multiple languages. They can adjust deliveries and reshoot lines virtually in post, without new actor sessions.
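Continuing the earlier Coqui TTS sketch, a localization pass could loop over target languages and reuse the same consented voice reference; the translations and file paths are placeholders, and a real dubbing pipeline would add lip-sync rendering and human quality control on top.

```python
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Placeholder translations of a single line of dialogue.
line_translations = {
    "es": "Partimos al amanecer.",
    "de": "Wir brechen im Morgengrauen auf.",
}

for lang, text in line_translations.items():
    tts.tts_to_file(
        text=text,
        speaker_wav="performer_reference.wav",   # same consented reference clip as before
        language=lang,
        file_path=f"dub_{lang}.wav",
    )
```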
Experimental and immersive media
In VR, AR, interactive storytelling, or game narratives, virtual performers can adapt to user interaction in real time, making characters more dynamic and responsive.
What Casting Directors & Filmmakers Should Know
Redefine your casting workflows
Casting teams must incorporate virtual performances into their pipelines. This may require new talent evaluation criteria, technical specs, AI readiness, and contract frameworks.
Evaluate technical maturity & use cases
Not every production needs a fully synthetic actor. For many, a hybrid approach—human actors + AI augmentation—is more practical. Use virtual performers where constraints (budget, scheduling, safety) demand them, and use human performers when depth and nuance matter most.
Licensing, rights & contracts
Draft clear agreements covering likeness, usage rights, AI model access, residuals, and termination. Specify how and whether voice and appearance may be used in perpetuity, and which variants or derivative uses are permitted.
Collaborate with technical & AI teams
Filmmakers should work closely with modelers, AI engineers, animators, and legal experts from pre-production. Ensure fidelity, consistency, and integration into VFX pipelines from the start.
Test audience acceptance
Because audience reactions to virtual actors may be mixed, run small focus groups or audience tests. Gauge whether a virtual performance resonates emotionally or falls flat.
Tips for Actors & Creatives in the Age of AI
Build your digital twin
Actors can proactively scan their face, voice, and motion to create licensed digital doubles. This secures future opportunities and control over how their likeness is used. Some companies, such as Synthesia, offer equity or compensation models for talent whose data is used in AI model training.
Understand AI contracts & rights
Before signing to allow a scan or voice clone, negotiate terms, royalty percentages, duration, usage rights, and termination clauses. Be cautious about blanket “forever” or “unlimited” licenses.
Develop hybrid skills
Creatives who combine acting, voice modulation, motion capture performance, and basic AI literacy will be in high demand. Understanding how your performance data is used gives you leverage.
Stay active in industry policy & unions
Support guilds’ efforts to regulate AI in performance, demand transparency, and ensure fair compensation and attribution. The SAG-AFTRA condemnation of AI actors is one example of collective pushback.
Experiment in personal projects
Use indie or passion projects to integrate AI actors or virtual collaborators. Test out virtual performances, hybrid scenes, or limited AI overlays. This experimentation builds experience and opens doors.
Ethical & Moral Questions to Reckon With
Who “owns” creativity?
When an AI actor is licensed, the company that built it effectively owns the entity. The performer (if the actor is based on a real person) may lose agency over how their likeness or voice is used.
Consent, attribution & legacy
If AI models are trained on many performances without consent, is that an intellectual property violation? Virtual resurrection of deceased actors raises tough questions about posthumous wishes and estate control.
Audience deception & authenticity
Audiences trust emotional authenticity in storytelling. If viewers feel misled by AI performances, there’s backlash. Transparency in marketing, credits, and disclosures will likely matter more.
Disparities in power
AI actors may strengthen studio power and marginalize individual performers. The creative economy could tilt toward fewer large players controlling synthetic talent, reducing diversity and opportunity for emerging artists.
Safeguards, policy & regulation
Industry standards, union contracts, and legal frameworks must evolve. Licensing, attribution, and limits on usage will be central to protecting human creatives. Ethical guidelines around deepfakes, cloning, and AI performance should be established.
Predictions: The Future of Casting & Virtual Talent
A hybrid ecosystem
The near future is likely hybrid: human actors anchoring the emotional core, with AI actors filling supporting roles or technical deployments. Virtual creatures, background avatars, and stylized characters will become common.
Democratized access
As the tools come down in cost, smaller productions and independent creators may gain access to virtual actors for films, web series, or interactive content, leveling the playing field.
Real-time interactive performers
Virtual actors might shift toward live interaction—responding in real time to audience input or improvising based on AI models. This is especially relevant in immersive storytelling, VR, or live AI shows.
New talent pipelines
Casting agencies may expand to include AI-talent managers. New forms of auditions (prompt-based, algorithmic evaluation) might emerge. Skillsets will include data input, performance recording, and AI calibration.
Legal and industry frameworks catch up
Contracts, union rules, copyright law, and industry norms will evolve. We’ll see new standards for digital rights, AI actor accreditation, and revenue-sharing models.