
How Generative AI Has Entered Concept Art Creation — and Why It’s a Slippery Slope

Illustration: Cheyne Gateley/Variety VIP+

In this article

  • Generative AI is starting to tactically enter concept art development processes
  • Critical factors are limiting the adoption of gen AI in concept art
  • Some manners of gen AI use for concept art pose risks for both business and creatives

Generative AI is beginning to enter aspects of concept art creation, though its use is consciously being limited.

Concept art in film and TV can include designs for characters, environments, sets, buildings, vehicles, costumes and props, and is most often needed on fantasy, sci-fi and action projects. Some concept artists specialize in keyframes, creating digital paintings that visualize specific shots of pivotal moments that should be captured onscreen. Many concept artists also have 3D modeling talents at varying skill levels.

Now, rather than relying on stock photos, web image searches or references from previous film and TV projects, artists are starting to use AI images as the initial design reference material and creative starting point.

AI imagery is being used to speed up the back-and-forth between stakeholders on a production, helping them align faster on a design concept or art direction before an artist renders the final assets in detail.

This process is inherently iterative: concept artists typically go through multiple rounds of feedback and revision on their sketches. Rapid iteration with a gen AI image tool can compress this process into a day, versus several days to weeks.

Similarly, studios have begun providing VFX studios with batches of AI images rather than explaining a concept verbally or in writing.

AI imagery is also beginning to appear in pitch decks, though its entry here suggests a more substantive replacement for a concept artist’s role and creative process. Earliest-stage concept art starts in the “blue-sky period,” before a production has received a greenlight.

Artists are brought in and work together with the production designer or director to brainstorm, find references and develop the look and feel of the world in order to pitch the project to studios. The artist creates sketches and goes through similar “hot-cold” rounds of iteration before finally rendering assets. Now fewer artists might be brought on during the blue-sky period, which for some is their bread and butter, as production designers may be using gen AI tools in place of an artist to develop materials for pitch decks.

Finally, studios are also likely to be exploring or activating opportunities to fine-tune pretrained image models with art assets in order to create smaller-scale models capable of producing outputs similar to the curated dataset. Studio interest has likely focused on fine-tuning image models with existing franchise IP to amplify and accelerate franchise art creation. Output images from the fine-tuned model would be consistent with the style of the original IP.

While fine-tuning is a high-potential area, it raises some concerns for artists if a model is fine-tuned on assets artists have provided on a project without their consent or additional compensation. One anecdotal experience shared with VIP+ described an artist’s contract with a smaller studio being cut short after the artist had already produced some assets, when the studio decided it would complete their contract with AI.

“You get paid for what you’ve done, and then we don’t need you anymore,” an artist told VIP+. “But there’s a difference between delivering an asset and delivering a digital version of your skills.”

Terms around fine-tuning models will increasingly need to be reflected in artists’ initial contracts when signing onto a project.

Yet even as gen AI enters some aspects of concept art creation, studios are still likely to be restricting the use of gen AI imagery as final art assets. Use remains limited for two main reasons:

1. Copyright Legality: Uncertainty around the copyright legality of generative AI is the biggest deterrent for studios to use AI imagery. Specifically, image models trained on unlicensed copyrighted material carry risk that their outputs would infringe, as it’s been demonstrated that these models can generate near-identical imagery to existing IP.

Copyrightability of AI work is a second open question, though the U.S. Copyright Office's guidance is that only human authorship can be protected, a position borne out by early registration attempts for AI-assisted works, in which only the human-authored components qualified for protection.

Even if AI images were to be materially edited before being used in a production, copyright might only apply to the edits, such as the specific digital brushstrokes made by a human artist. Nor is fine-tuning a panacea for copyright concerns. A fine-tuned model isn’t a perfectly “closed universe” because it still relies on the base model for its understanding, a fact that has led studios considering the benefits of fine-tuning to prefer models that exclude copyrighted works, though that comes at a quality tradeoff.

Furthermore, ownership of the franchise IP does not mean the fine-tuned model's outputs would necessarily be owned as well. Any output of an AI model is vulnerable to rejection by the Copyright Office.

2. Performance: Design quality from image models remains a challenge. Though editing features are improving, lack of granular control over AI outputs is still a notable barrier for use in high-production value content creation.

Superficially, AI images look good, but they tend to have a recognizable and similar feel even as their style changes. AI images commonly contain artifacts, or design mistakes, including problems interpreting anatomy. Accurate anatomy and structure are especially critical to reflect in VFX concept art, as it informs the mechanics of character or object movement.

Even outside of VFX, problems or added work could be introduced if subpar AI-generated art is pushed out to the next department, such as to a prop builder or a 3D modeler.

Depending on how gen AI is used in the pipeline, artists described the risk of degraded quality: that what gen AI produces to approximate a director's creative vision will be deemed "good enough" by those with an untrained eye. For example, a skilled character designer understands anatomy; an artist designing a spaceship understands industrial machinery; a keyframe artist understands perspective and lighting so a shot can be replicated by the director of photography.

Finally, while some artists in studio art departments or VFX may now be tasked with using these tools in aspects of their jobs, many in the community oppose them on principle, knowing their work was used to train the image models now deployed in its place. Concept artists are expected to be among the most exposed, and the earliest, to job reduction and ethical harm due to generative AI.
