The 2020 VFX awards and beyond: Who won and lost?


Jakub K

 

It's decided: all the awards have been handed out. Which nominee gets to enjoy its success? The comic-book Titanic, the last gathering of acting and genre legends, the updated version of an animated classic, the epic war drama, or the space opera?
Although Compozitive is an online course for aspiring compositors, it's worth remembering that compositors, the artists who often sit at the very end of the studio pipeline, are the ones who handle the fruits of the labor of many other departments. Let's take a closer look at the problems these studios had to tackle and the solutions they came up with…

1917: WINNER!


MPC

Sam Mendes's new picture received very enthusiastic reviews; for a while, people speculated it might even take the Best Picture award. With no Cuarón/Lubezki collaboration in sight, the reins of the one-shot picture were taken up by Mendes and cinematographer Roger Deakins. Yes, there are examples of genuinely single-take movies, but with 1917 it was clear from the start that the VFX work would mainly involve creating "invisible" edits between separate takes and maintaining the illusion of one continuous shot, plus a lot of paint-out and other effects work to create a proper sense of the time and place of the war. And no, Rian Johnson's tweets were just a joke: 1917 was not really shot in one take while a crowd of high-profile actors waited for their cues, prepared and in character. The scenes were approached with the one-shot philosophy in mind, and the takes had to connect to each other. Those connecting points became the main workload for the VFX teams, though a practical solution was always preferred, and the set decoration, locations and placement of lighting equipment were all guided by the needs of the one-shot illusion.

Preproduction of the movie took 24 weeks. The concept of the film created a few specific difficulties that had to be dealt with. On a regular movie, VFX work is divided up by shots and editing; on 1917, the plates were divided every 400–800 frames to get workable units containing the seams created for the VFX department. Some takes that were supposed to connect were shot on completely different days or at different locations. 91% of the movie contains VFX, delivered in 4K resolution for the IMAX format. Every type of VFX work was used: animated 3D models of creatures and army vehicles, fully digital environments and full-CG shots, and FX simulations such as fire and water. Because of the movie's concept, even simple roto became a complicated task. Effects and render quality had to be pitch-perfect: the audience would spend huge chunks of time looking at them, and with the film presented in 4K and in the enormous IMAX format, mistakes couldn't be hidden by the usual suspects like editing and motion blur. Usually, the placement of assets within a scene is approached with composition and aesthetics in mind; on 1917, the artists also had to take into account the audience's subconscious sense of the scene's geography – what starts a scene as a piece of nice scenery could very well sit in the middle of the characters' path moments later.
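
To get a feel for the workflow, here is a minimal sketch of how a long continuous plate might be split into 400–800 frame work units. The chunk length and overlap handles are illustrative assumptions, not production numbers:

```python
# Hypothetical splitter for a long continuous plate: yields work units of
# roughly the size described above, with small overlapping handles so the
# seams between units can be blended later.
def split_plate(first_frame: int, last_frame: int,
                target_len: int = 600, handles: int = 12):
    """Yield (start, end) frame ranges covering first_frame..last_frame."""
    start = first_frame
    while start <= last_frame:
        end = min(start + target_len - 1, last_frame)
        yield (max(first_frame, start - handles),
               min(last_frame, end + handles))
        start = end + 1

for unit in split_plate(1001, 3800):
    print(unit)   # e.g. (1001, 1612), (1589, 2212), ...
```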

The production also used miniatures, mainly for logistics and planning, but also as a tool for figuring out the placement of camera equipment and lighting rigs, for the one-shot approach in general, and for the rigs of the on-set special effects (fire, smoke, etc.). The effects houses had to work very closely with Roger Deakins, both to reach the proper blend between segments and to help the audience forget about the existence of the camera – it is not supposed to feel like another character. The transitions needed a natural character so they wouldn't break the flow of the movie, no matter what camera equipment and rigs were in use at the moment. In places, the switch between a camera crane and a dolly happened in the middle of a scene, and some moves were devised to deliberately overlap. Some connecting points were placed during close-ups, because the audience wouldn't expect them there. All locations and sets were carefully LIDAR-scanned for the effects work. Nuke provided both 2D and 3D solutions for the transition points and for the projection of patches. Some of the effects work had to be art-directed, mainly in the scene with the petals on the river surface. Compared with preproduction, the post time was fairly short, about 17 weeks between July and October. Effects reviews were done by screening longer sequences of the movie, so the work could be evaluated in the context of the film itself. 1917 was awarded by the VES, at the BAFTAs and, most importantly, with the Academy Award.
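
The core idea behind an "invisible" edit can be illustrated with a simple dissolve: blend the tail of one take into the head of the next over a short window. The real seams combined 2D and 3D techniques in Nuke; this numpy sketch only shows the underlying A-to-B blend, and all names and shapes in it are assumptions:

```python
# Toy crossfade between two aligned image sequences, shape (frames, h, w, 3).
# A production seam would add tracking, warping and 3D projections on top.
import numpy as np

def blend_seam(take_a: np.ndarray, take_b: np.ndarray) -> np.ndarray:
    frames = take_a.shape[0]
    out = np.empty_like(take_a)
    for f in range(frames):
        t = f / max(frames - 1, 1)   # 0.0 = pure take A, 1.0 = pure take B
        out[f] = (1.0 - t) * take_a[f] + t * take_b[f]
    return out
```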

AVENGERS: ENDGAME

 

Cinesite, Digital Domain (337 shots), DNEG (about 300), Framestore (301), ILM (550), Weta (494)

In April, cinema screens and box offices worldwide were struck by the tsunami of the MCU saga's grand finale. The majority of shots in both movies were touched by VFX: about 5,119 shots in total, 2,496 of them in Endgame alone. Such a huge amount of work is a natural consequence of the evolution of the summer blockbuster, a formula perfected over the years by Marvel among others. VFX can also solve the ever-present scheduling problem of actors with busy calendars who are often unable to shoot the same scene on the same day; the help of visual effects began where the organizational kung fu of production scheduling ended. Thanks to more than a decade of the MCU's existence, some VFX vendors already had MCU veterans on staff who could point to MCU movies in their portfolios and knew what to expect from this particular client. The work of the on-set teams gathering reference material during production was extremely important. Sometimes the vendors could dust off older assets from previous MCU instalments, mainly LIDAR scans of sets, which proved to be a big advantage: where Infinity War kept its characters fairly isolated within their respective story sequences, Endgame is the complete opposite.

An example of this time-saving approach was the reuse of work created by the FX department for the Quantum Realm in Ant-Man and the Wasp, or the older footage used for the time-travel sequences. One of the starting blocks for the VFX teams was the pre-visualisation material created by The Third Floor studio. Once the pre-vis of the VFX-heavy battle sequences was approved by the client, it was passed on to other departments to determine which shots would need live-action plates and which would be fully CG, which to a certain extent tamed the massive amount of work on the live-action side of the production. Given that parts of the shoot took place on fairly sparse sets surrounded by blue- and green-screen cycloramas, the actors and crew appreciated the pre-vis footage as a way to understand what their, excuse the pun, endgame was. Although the two final Avengers movies were shot almost back-to-back, there was still a brief period of a few weeks between them, an opportunity to update and refine assets from the previous movie. Thanos got a more detailed model (improved with knowledge gained on the Alita production) and the animators got improved rigs, Quill's starship Benatar was "improved" with damage, and Ant-Man received a better, more detailed model of his suit. The major new addition was a new "version" of the Hulk, dubbed "Smart Hulk", created by Framestore and ILM. Deep-learning algorithms helped with processing the reference data acquired by the Medusa system and with the "translation" of Ruffalo's acting onto his green alter ego. This "translation", provided by the Anyma system, was then embellished by experienced animators. Both Anyma (used for markerless acquisition of an actor's performance) and Medusa (used for capturing detailed reference geometry of the subject) are tools developed by Disney Research Studios.

Smart Hulk is also one of the cases where the whole character pipeline was reworked to achieve the best result. Just as on The Irishman, the VFX team aimed to work only with the on-set footage of Ruffalo, without extra days in a motion-capture suit, all thanks to Anyma's capabilities. The common denominator for both Hulk and Thanos was the effort to transfer the most delicate nuances of Ruffalo's and Brolin's acting while keeping anatomical correctness in mind. New and reworked facial workflows achieved results that would have been impossible with the 2012 version of the Hulk. And although we want to cover the nominees as a whole, everybody likes the "mosts" of a production: one of the toughest shots, from the climactic clash of the two armies, was put together by a team of over twenty people over three months. Some elements of the movie, for example the time-travel suits or Stark's post-snap wounds, were worked on continuously throughout production, not only so they could be judged as part of the whole movie, but also because the client wanted to avoid being locked in to footage with one particular prosthetic or suit design.
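
In broad strokes, this kind of performance "translation" boils down to solving expression weights from the actor's footage and replaying them on another character's facial rig. The sketch below is a generic blendshape replay, not the actual Anyma/ILM pipeline; every name and shape in it is an assumption:

```python
# Replaying solved facial weights on a creature rig, frame by frame.
import numpy as np

def apply_blendshapes(neutral: np.ndarray,
                      deltas: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """neutral: (verts, 3); deltas: (shapes, verts, 3); weights: (shapes,)."""
    # weighted sum of shape deltas added on top of the neutral face
    return neutral + np.tensordot(weights, deltas, axes=1)

# hulk_frame = apply_blendshapes(hulk_neutral, hulk_deltas, actor_weights[f])
```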

THE IRISHMAN


 

ILM (1,750 shots), Bot VFX, Distilled VFX, Screen Scene, Stereo D, Vitality, Yannix

Each of us ages differently, and in the cases of Samuel L. Jackson (Captain Marvel) and Will Smith (Gemini Man), both productions were lucky to employ the services of an actor who aged well (we'll get back to those movies later). The opposite example is The Irishman, which contained the biggest amount of de-aging work, both in complexity and in volume. A motion-capture workflow with headgear is the first thing that comes to mind, but director Martin Scorsese didn't want to restrict his three central acting veterans. His goal was the smallest possible number of obstacles for the actors, meaning no facial reference markers and no motion-capture suits. The road to The Irishman began in a meeting between Pablo Helman, VFX supervisor at ILM, and Scorsese during the production of Scorsese's previous movie, Silence. Scorsese mentioned a project shelved due to a technical problem, and the conversation turned to the topic of changing the appearance of actors to make them look younger.

The next step was to shoot a test of the VFX solution Helman came up with, a test that would convince Scorsese, De Niro (who also produced the movie) and the studio that the whole process was viable and actually worked. The test was a success, and ILM spent the next two years developing the software that would carry the main workload of the de-aging process. The result was a combination of a programme called FLUX and a special camera system developed in cooperation with cinematographer Rodrigo Prieto and ARRI: two ARRI Alexa Mini cameras shooting in the infrared spectrum, synchronized with the primary RED camera. The infrared feed provided shadowless reference footage of the actor and of the way the performer was lit in the scene. FLUX's output was used for about 500 shots that called for a render of a completely de-aged version of a character. It was extremely important to keep the acting, with all its idiosyncrasies, untouched: no part of the de-aging process has an animator keyframing the actor's de-aged geometry, and nobody changes the performance the actor gave. The actors could act the way they were used to, and the visual effects team got everything it needed.

The FLUX renders were further integrated with retouching and warping around the neck, collar and other cosmetic areas (hair, make-up and age-related body deformations). To create the de-aged version of a character, the software takes a mesh based on the actor's current appearance, the feed from the camera rig (both the images and their placement in 3D space) and a wide range of lighting reference data captured on set. FLUX then "deforms" the current mesh into the de-aged mesh of the actor while keeping its movement, the acting component, intact. The renders are adjusted further, but the emphasis stays on keeping the acting untouched, which is also why the bodies of the de-aged characters were handled with 2D methods, to avoid the use of body doubles. Apart from the main effects work, the de-aging of the hands was done by the Vitality studio, and some parts of the movie needed work related to the period the story called for. Given the nature of this movie, the effects couldn't hide behind quick editing, motion blur or wild action sequences.
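
The principle of "swap the identity, keep the performance" can be written down in a few lines. This is only a conceptual stand-in for FLUX, whose internals are proprietary; the meshes and names are assumptions, and the arrays are assumed to share vertex correspondence:

```python
# De-age a single frame by isolating the per-frame acting deformation and
# replaying it on the younger base mesh.
import numpy as np

def deage_frame(aged_neutral: np.ndarray,
                aged_frame: np.ndarray,
                young_neutral: np.ndarray) -> np.ndarray:
    """All arrays are (verts, 3) meshes of the same topology."""
    performance_delta = aged_frame - aged_neutral   # the acting, isolated
    return young_neutral + performance_delta        # replayed on the young face
```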

THE LION KING

MPC (1,490 shots)

Is it a man, or is it a plane? You wish it were as simple as with Superman. One of the more heated conversations, or rather one of the hot takes some marketing departments like to stir up, was the question of whether The Lion King is live action or, like the original, an animated movie. That PR battle doesn't concern us; the real importance of The Lion King lies in its use of VR tools within a large-scale film production. Many filmmakers sigh in EPK materials: "We wish we could have had more time…" On some movies, Skyfall for example, a long preproduction is followed by the actual shoot and then a fairly brief postproduction, with the premiere within six months of the last slate. Of course, only a certain type of movie can be produced this way. But what if there were a new workflow that offered the advantages of a live-action shoot without the gray area of a majority of plates needing heavy VFX work?

These and many other considerations were the reasons for the massive application of VR during the production of The Lion King, though we should keep in mind that, just like the more recent The Mandalorian and its backgrounds with interactive parallax, it is still a young technology in need of debugging. Director Jon Favreau is one of those directors who actually prefer working in virtual reality and on green-screen sets. The Lion King's VFX supervisor, Rob Legato, explains that the first steps of VR as a filmmaking tool were taken on Avatar, where it was used during preproduction for reviews of 3D models and concept artwork. The next step was the green-screen-heavy production of The Jungle Book, and then The Lion King itself. The goal was to reach the point of being able to "enter" the synthetic virtual world while keeping the workings of a live-action set. The Magnopus studio helped develop the system, which is based on a heavily modified Unity engine. It's standard for large-scale summer blockbusters to employ the services of many VFX vendors.

Each effects shot is reviewed countless times; there's simply a great deal of deliberation behind every one of them, which can give such shots a certain "stiffness". A side goal of the VR approach, in the artistic sense, was to remove this effect by recreating the synergy of a live-action set, where the cinematographer, the director and the actors use the language of film to create a unique look and "feel": moments of inspiration, improvisation, happy accidents, an interesting composition offered by a location, or simply a good idea from a member of the crew. One advantage of the system was the ability to "lock" the parts of a shot the director was happy with and, for example, re-shoot just for better focus. The Lion King's production workflow will obviously find its way into many future projects. So how would you describe The Lion King, live action or animation? In the end, the final argument sounds like something Orson Welles would say: with movies, it's all fake.

With the production side covered, it's time to move to the postproduction part. MPC's Los Angeles office helped with the VR shoot and created assets for it, while the Bangalore and London offices handled the postproduction itself and built the main assets. Concepts for, and development of, the hero assets were continuously reviewed with Disney. The process was set up so that artists could lean on reference material at every phase, from early concepts to animation. Apart from a variety of nature documentaries, a team of animators was sent out to capture reference footage. The rigs of the 3D models grew more and more sophisticated, with emphasis on the "faces" and thus the ability to animate the most delicate acting choices for each character. Where needed, an expert on animal biomechanics was brought in to help with development. Throughout production, tools solving the interaction of skeleton, muscles, skin and fur were continuously developed, and the solutions created for The Jungle Book were pushed a step further in the hunt for more detailed renders.

As for the 3D environments, the original plan was to use matte paintings for the backgrounds, but that proved too restrictive, so the choice was made to create a whole 3D world (with the exception of the gorge), procedurally filled with assets from a library full of detailed vegetation and other set-dressing material. The system was set up so that compositors could choose the passes they needed for the big, complex CG shots. The scenes were populated with the help of Houdini, which allowed the set decorators to fill each location with a realistic ecosystem. For the lighting, it was important to preserve realism, with the final word going to the director of photography, Caleb Deschanel. From a technical point of view, that meant capturing HDRI reference photos to bring correct colour values into the digital world. Houdini also drove the FX simulations, from the smallest (the interaction between the animals and the terrain) to the biggest (water and fire).
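
Per-pass delivery gives compositors exactly this kind of control: a beauty image can be rebuilt, and selectively graded, from its render passes. The pass names below are common AOV conventions, not MPC's actual pipeline:

```python
# Additively recombine lighting AOVs into a beauty render.
import numpy as np

def rebuild_beauty(passes: dict) -> np.ndarray:
    """passes maps AOV names to images of identical shape (h, w, 3)."""
    return (passes["diffuse"] + passes["specular"]
            + passes["sss"] + passes["emission"])

# e.g. soften only the specular before recombining:
# passes["specular"] *= 0.8
# beauty = rebuild_beauty(passes)
```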

STAR WARS: THE RISE OF SKYWALKER


ILM, 32TEN Studios, Base FX, Exceptional Minds, Ghost VFX, Hybride, Important Looking Pirates, Stereo D, The Third Floor, Virtuous, Whiskytree, Yannix


George Lucas was left empty-handed when, in the mid-seventies and halfway through the preparation of the original Star Wars, he looked around for a visual effects vendor. Nobody could match the ideas he had for the movie; he was turned away with arguments that the shots he wanted were either too difficult or outright impossible to make. Lucas then decided to found, with a team of young visual effects enthusiasts, the "first" incarnation of Industrial Light & Magic, a visual effects studio whose aim was to create the impossible shots for his crazy movie with the space wizards. He revived the despised sci-fi genre, and after a few years of irregular work, ILM was relocated and set up as a proper visual effects house, eventually becoming the leading force of the industry. If it weren't for the progress in VFX technology, Lucas wouldn't even have considered returning to the world of Star Wars. It started as a courtesy to his friend Steven Spielberg, who had to leave for the Schindler's List shoot in Poland and needed somebody trustworthy to look over the postproduction of Jurassic Park. Lucas was blown away by the digital dinosaurs, and it became one of the sparks behind the idea of returning to the Star Wars world with the prequels, which themselves brought a lot of new, revolutionary visual effects developments.

With the sale of Lucasfilm and ILM to Disney and the subsequent development of the sequel trilogy, it was more than clear that ILM would handle the VFX work on the new movies. With Star Wars, that meant paying tribute to the aesthetics of the practical-model era and the idiosyncrasies of optical compositing, but with modern tools. It might surprise some fans, but literally thousands of practical models were also used during the production of the prequel trilogy. And let's be honest, the retro look is one of the most relentless aesthetic trends of the last few years. One such traditional technique is the use of forced perspective when shooting a practical model, as in the shot with the Jawas' sandcrawler near the end of the movie. There is also an above-average number of starships present in the final battle, even by Star Wars standards. The modellers, riggers and animators had to take into account the characteristics of the original practical models, the most obvious example being Han Solo's ship, the Millennium Falcon.
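
Forced perspective works because the camera only sees angular size, which scales with physical size divided by distance: a scale model placed proportionally closer reads as full-size. A quick worked example with illustrative numbers (not figures from the production):

```python
# Angular size of an object as seen by the camera.
import math

def angular_size_deg(size_m: float, distance_m: float) -> float:
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

full = angular_size_deg(size_m=36.0, distance_m=400.0)  # "full-size" vehicle
mini = angular_size_deg(size_m=3.6, distance_m=40.0)    # 1/10 scale model
print(full, mini)   # both ~5.15 degrees: the lens can't tell them apart
```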

A sense of realism and tactile feel drove the shot design and the choice of VFX solutions, in contrast with blockbusters that go the fully digital, green-screen and digital-double route. One example is the action sequence on Pasaana (where pre-vis helped divide the action into beats, so the production knew which pieces could be shot with the actors on a pre-programmed prop); another is the lightsaber fight on the wreckage of the second Death Star (ILM reworked its water-simulation tool for this set piece). In other scenes, on-set special effects (various explosions and other elements) were combined with rendered simulations from the FX department, and the massive sets built on the sound stages of Disney's long-time partner, the legendary Pinewood Studios, got even more massive with the help of the matte painters. Another typical ingredient of Star Wars is the variety of aliens and creatures, which translates into a lot of work for the puppeteers and the prosthetics department (plus the work needed to keep these characters camera-ready, over 500 of them for this episode).

That also means a lot of paint-out and retouching wherever a clever shot angle couldn't hide the puppeteers. In the case of Maz Kanata, the puppet was controlled via motion capture and some facial details were finished digitally. To get more natural light interactions, LED panels with pre-rendered backgrounds were used for some of the Millennium Falcon cockpit shots. Before production of the movie even began, Carrie Fisher, the actress and screenwriter who portrayed Princess Leia, sadly passed away. The filmmakers announced they would not go the digital, Rogue One, route. The solution was to build scenes around material cut from The Force Awakens, which preserved Fisher's original performance: compositors projected Fisher's takes onto the face of a body double, and the unused material was enhanced with a new hairstyle and costume to make the character specific to this episode. In a training flashback, shots from Return of the Jedi were used in a similar manner, with the young version of Fisher portrayed by her real-life daughter, Billie Lourd.

…AND WHO DIDN'T EVEN GET A CHANCE TO WIN

So, that's it. There can be only one… No, don't worry, we're not about to analyze the visual effects of Highlander. The golden statue went to just one movie, 1917, but if there can be ten nominees in the Best Picture category, we can point out a few more movies that matter, effects-wise. As The Irishman illustrated among the final nominees, the human face, de-aging, digital doubles and their manipulation were, are and will remain the alpha and omega of the VFX industry. And in that sense, we cannot forget a few movies that tried to push the delicate line between the real and the digital world:

In Gemini Man, Will Smith (51) struggles with his younger clone, aged 23. The illusion was pulled off by creating a detailed model of Smith's current appearance, which was then carefully deformed into the desired 23-year-old version. The first model was the result of a detailed scan and other photo reference. The next step was a FACS session (Weta has its own workflow, but it is broadly similar to Disney's Medusa) with its set of facial movements the performer has to provide. These "extreme" poses help to animate the model; the details of Weta's solution are not public, though it has been speculated that machine learning is involved. Animation then breathes life into the detailed model of the younger Smith. Every aspect of it was built with the aim of capturing as much detail as possible, from the specific distribution of skin pores to the interaction of the eyelid with the anatomical characteristics of the eyeball. Correct skin colour was solved by a sophisticated renderer that took into account the amount of melanin and the blood flow through the skin. As another tool for the animators, Weta developed "deep shapes", a technology that solves the interaction of the facial muscles and skin layers with each other. To make matters worse, the whole movie was finished at 120 fps, in 4K resolution, and shot in native stereo, which is 40 times more data to handle than a usual 2K movie. The extra frames helped the roto department with edges, but added time to the work, as they did for paint-out and matchmove.
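
The 40× figure follows directly from the format change. A quick back-of-the-envelope check, assuming the usual baseline of 2K, 24 fps and a single (mono) image stream:

```python
# Data multiplier of 4K / 120 fps / native stereo over 2K / 24 fps / mono.
frames = 120 / 24   # 5x the frames per second
pixels = 4          # 4K has ~4x the pixels of 2K (2x width times 2x height)
stereo = 2          # left eye + right eye
print(frames * pixels * stereo)   # 40.0
```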


Just a few months later, Smith must have experienced a slight sense of déjà vu. During the development of Aladdin, the production tested various approaches, from more to less caricatured versions of the Genie, until it was decided that the best solution was to keep Smith's more or less natural appearance, albeit more muscular.

All shots of the Genie in his magical form feature a 100% digital version of Will Smith. Just as on Endgame, ILM used both the Medusa system (to acquire the digital version of Smith and the base 3D geometry) and Anyma (to capture Smith's performance over several shooting sessions). The solves from Anyma were then applied to the Medusa model.

Another movie whose script called for younger versions of its characters was Terminator: Dark Fate. Again, a by-now familiar workflow: the actors were scanned to obtain models of their current appearance, which served as the basis for the younger versions. The T2-era versions of Arnold Schwarzenegger, Linda Hamilton and Edward Furlong were created by ILM, employing Medusa for the geometry and Anyma for the performance once again. Also notable is the large action piece at the end of the movie, with photorealistic effects for the collision between the C-5 and KC-10 aircraft and the subsequent use of digital doubles for the characters inside the plane.

One of the popular answers to "what is your favourite piece of VFX work" is the "Skinny Steve" version of Captain America in the first movie, beautifully done by Lola VFX. The studio also worked on the de-aging, mostly of Samuel L. Jackson, in Captain Marvel. Compared to the 3D-heavy solutions of the other movies in this article, Lola's work is 2D and compositing-based, done in Flame and Nuke. Some of the de-aging was also handled by the Rising Sun Pictures and Screen Scene studios.

Weta was there at the birth of digital characters, so it may come as a surprise that Alita is the studio's first major humanoid character. After the revolutionary Gollum, with its then-groundbreaking 50,000-polygon model, came Alita, with 8.5 million polygons for a single eye alone. Apart from the talent of its best animators, Weta also used deep-learning technology within its tools in the race for facial detail, which helped sell the illusion. In the early stages of the project, the Weta team gathered a large amount of photographic data of the actress Rosa Salazar. This data then circulated back and forth through Weta's tools with the help of deep-learning algorithms, a solution that worked even in situations where some aspect of a scene prevented Salazar from wearing her full motion-capture headgear. Apart from the high sophistication of the tools, there also had to be perfect coordination and collaboration with the other VFX vendors on the project, due to shared assets and shared work on certain sequences.

Do you want more? This article is a summary of freely available interviews on Artofvfx.com, beforesandafters.com and fxguide.com, where you can find more detailed information about some of the workflows and technologies mentioned above. If you want to dig even deeper, look for the papers presented at the SIGGRAPH conference or the presentations by Disney Research Studios.

 

 

Vladimir Valovic - Mentor / CEO Compozitive

https://vladimirvalovic.com

A highly skilled senior/lead compositor and visual effects artist with 10+ years of experience on TV and feature films and more than 20 years as a digital artist. Visual effects supervisor on selected projects and a member of the VES (Visual Effects Society). VFX teacher and mentor at workshops, high schools and universities.
