Recollections: The Making of Total Recall

Trevor Hogg chats with production designer Patrick Tatopoulos; visual effects supervisors Peter Chiang, Adrian de Wet, Graham Jack, Alex Pejic, Richard Higham, Charley Henley, Olivier Cauwet and Sean Farrow; visual effects producer Stefan Drury; senior compositor Martin Ofori; effects supervisor Shane Mahan; and previsualization supervisors Joshua Wassung and Todd Constantine about their work on Total Recall…

Patrick Tatopoulos

“Total Recall [2012] came to us a couple of years ago and we started to brainstorm,” states Production Designer Patrick Tatopoulos (Dark City), who shares a Los Angeles office with frequent collaborator and filmmaker Len Wiseman (Underworld). “The first stage is coming up with quick concept art together, which I do because I come from a concept art background. As we were narrowing down the look I started hiring concept people.” The production design went beyond the preproduction stage, which lasted three and a half months. “You conceptualize with people, begin your builds and you give all that data to the visual effects people. When post-production kicked in, the studio and Len together asked me to come back. I was on for more than four months at the end of Total Recall to work with the visual effects in London to make sure that the visual integrity was maintained.” An idea rather than an image set the visual tone for the movie. “We wanted to make it real. The things of the future were always being linked to things we know today. For example, there is a giant object in the movie, an elevator. When we were looking at the futuristic element we thought this would be like a 747. All the practicality of it, the way the space is managed, is something we wanted to see in our future. We referred to this specifically while we were asking what the future would look like. What great concept could we come up with? The most important thing was to retain iconic things that people could relate to.”

Unlike the original 1990 cinematic adaptation of We Can Remember It for You Wholesale by Philip K. Dick, which stars Arnold Schwarzenegger (True Lies), the setting has shifted from Mars to Earth. “It certainly helped us in making this movie more believable,” remarks Patrick Tatopoulos. “The whole Mars thing sets you up in a different place. The fact that we could do this on Earth felt like we needed a place where we could make the movie feel real. In a few years we don’t know where we’re going to be with our technology. It was not my choice, nor Len’s; it was in the script in the early stages, but when I heard that I felt, ‘Here we go. It’s going to help us do something interesting and different.’ We’re all big fans of the original movie but that aspect helped us to do something new and fresh. The simple thing is that I’m not travelling in a spacecraft to go to another planet; now we’re travelling in an elevator to go from one side to another. For a designer the great reward is when someone tells you, ‘Hey, we need this kind of object in the movie.’ And it’s an object we haven’t seen before. To a large degree you’re purely creating something which is definitely new. Cars we’ve seen, and we’re going to see many more. There are tons of versions of cars and it’s hard to be really fresh; I don’t think it’s impossible but it’s hard. We wanted something never seen in a movie before and this was a great chance to do something different. And that’s fun.”

“The cars we built for the movie are not CG, they’re real; that said, as you expand the world they become CG in some aspects,” states Patrick Tatopoulos. “We built seven cars that were mounted on chassis to make them look like they’re floating in the air and flying at 60 miles per hour on the freeway.” The practical effects approach allowed for an additional element of realism. “At the end of the day, I believe it made the sequence feel real because those cars had the actors in them.” Comparisons with The Fifth Element [1997] and Minority Report [2002] are unavoidable. “When you’re making two or three movies within five years, if they all follow what the futurists tell you they’re going to end up in the same place, because those trends and ideas don’t change from one to the next.” Artistic license was also incorporated into the designs. “We created things that were sometimes surprising and maybe not actually what the future is going to be, but it gave us a chance to be more original for that reason.” Tatopoulos notes, “Our cars are not capable of things as crazy as the Minority Report car. Putting the magnet aside, the limitations of our cars are more realistic because they’re not as futuristic in some ways.”

“You travel from one side of the world to the other side and those two worlds have to have similarities,” observes Patrick Tatopoulos, discussing the United Federation of Britain and its colony of workers located in Australia, which has been renamed New Asia. “There is a world which is clearly poor, more cramped because it is a big melting pot of different social classes, and there is a world that is cleaner and tidier. The good thing is that there are people travelling between one and the other. If you look at the costume design and other things you’re going to find some people fitting in both worlds and they become part of the visual link between the two. The crowd is all over the movie. That’s the first thing. Some of the transportation that we have in both worlds has some continuity, except in one world it has been trashed and aged.” The book City of Darkness, which documents a claustrophobic Asian metropolis demolished in 1993, served as an influence. “Our movie is founded on the concept of the world collapsing within; the streets are crammed with people living on top of each other. You open a window and you’re about two feet away from the window of the other building.” Water is used to separate the two worlds. “They have a limited amount of space and what they do is build on top of each other, but their architecture is quite different. It’s uncreative. It’s practical with no sense of aesthetic. You build a new apartment on top of the one that was there before. It’s a structure that holds the basic original apartment building complex, but then you keep building on top and it becomes like a real cancer in this world. The other world still has the concept of layers within the city except it’s done in a pleasing way.”

A signature element is the giant elevator known as ‘China Fall’ which connects the United Federation of Britain with New Asia. “The thing I’ve learned over the years is that when objects fall, if they are too intricate and real they end up looking small,” reveals Patrick Tatopoulos. “We wanted to start with something that feels massive; in essence, it is like a bullet in a gun. What I wanted to do, and pushed Len for this, is that it should look like a building in the city that drops suddenly. We definitely felt it shouldn’t be in the ground; it should be outside. It’s a structure you look at. It’s not the biggest one. That would be a big mistake. It’s one of the prominent objects. The fact that we put it outside became much more interesting for us because that object goes from one side of the world to the other side.” Multiple sketches were drawn. “It was a cylinder because you think, ‘Why would it be anything else?’ When China Fall comes out of the shaft you see it’s tall and narrow. When I was going through the process I started to get pictures of new airplane interiors. I looked at the shapes, the shielding inside the doors and the finish on the walls. I knew this thing would be magnetic and travel on tracks, so when I was designing I decided that there would be three major tracks and the magnets would be driving it onto those tracks. It was a 60-story building; four stories would be six compartments of 50 people. I built it that way and there was real love when we started creating something from scratch. What you see in the movie is an aspect of it. We designed everything and we put everything together with a great sense of where things should belong.”

“The interactive billboards in New Britain are exposed, beautiful, and out of the way, and you can look at them,” explains Patrick Tatopoulos. “In New Asia we put them on the streets, so those billboards are cutting people in half. You have to go through a billboard, which is a more aggressive way of advertising rather than displaying it in a place where you can catch it if you want to or not.” Graphics are a major part of the aesthetic found in the science fiction action tale. “We had a Graphic Design Department of six people working on creating the logo of the cars, which Chrysler had to approve because we had to use their logo and change things. Those are demanding because you’re dealing with people who created the car and the logo, and they were concerned for good reason. They designed all of the signage, which was practical and we had to build for the set.” The task expanded into the realm of visual effects. “Those guys created more billboards. We had a full bank of data for them to work from, and even in post, when I came back for four months, I had to hire two additional graphic artists to help us build yet more graphics.”

“The Synths were designed in the Art Department first,” states Patrick Tatopoulos when discussing the robotic police force. “The idea of the Synths was to make a new army that would replace the guards of the city, which were controlled by Cohaagen and his corporations. We wanted them to not feel too I, Robot [2004]. They are exciting to look at and there’s not so much storytelling around them. The only thing is that our lead character works in a factory which builds them. What I like about them is that you see them in both worlds, where they work and function differently.” Tatopoulos, who used to own a creature effects shop, sought out Legacy Effects, a one-time competitor, to construct the required suits. “I went there to refine the proportions [of my design]. When that job was done I knew the guys would build it perfectly. I took off and came back a couple of times and we got our Synths; they did an amazing job.” Previs provided by The Third Floor assisted the Rekall Lounge Sequence, where the character portrayed by Colin Farrell dispatches a squadron of Synths. “We shot the actors on a motion-capture stage. We used their motion in relationship to the set and suddenly that puts the entire sequence on a model.” The digital tool was a huge asset for the concept artists to further the design and for the set designers to begin drafting the set. “In a sense the director can work out his whole action in a practical way and when it is working for him I can build my set. When Len walked onto the set there was an exact replica of what he wanted, only better. It’s an amazing tool I’ve never had before.”

Peter Chiang

“When the show grew to 1800 shots it became ‘all hands on deck!’” chuckles Visual Effects Supervisor Peter Chiang. “It was an easy process of grouping the tasks. We knew certain companies could handle a fair degree of 3D and there was a continuous dialogue with the facilities, making sure they felt comfortable with the work they were to take on in the short time frame.” Certain scenes required more than one VFX facility. “There were a lot of shared shots; MPC and Buf had the most complicated, but we had the benefit of Angus Bickerton supervising; he and I would talk about looks and approaches to ensure continuity. A lot of Dneg assets were passed on to the other companies for their own sequences, but in each instance we always tried to provide an approved ‘look developed shot’ that would serve as a guide.” Chiang was assisted by his Double Negative colleagues, VFX Supervisors Adrian de Wet and Graham Jack. “Adrian and Graham were vital to the project, having overseen all the principal photography; they had set up many aspects of the work we were to do in post, gathering the on-set data and shooting additional plates.” An essential member of the team was Patrick Tatopoulos. “Patrick was key in ensuring that our designs of the environments fitted as extensions of the world he had established with his sets. Patrick continued to oversee the extensive designs of the graphics that populate the world.”


“The environments in science fiction films always need grounding in reality,” observes Peter Chiang. “Fortunately, Len felt the same and shot with as much foreground as possible giving us something to latch onto, to see how the set extensions should behave. From this we could evaluate the necessary assets we would need and gather the right reference. Setting the composition involved getting a fair amount of the 3D assets ready as quickly as possible. The car chase layouts took a while to establish the final look. Photoshop gets you so far but you need to see it with the match move camera.” The ability to have a relatable and futuristic technology was established by the director and his production designer. “Len and Patrick set the futuristic ideas of the world. Their ideas never wandered too far into fantasy. Cars can’t fly and there’s still a need to make mobile phone calls!” There was always an open line of communication available with Wiseman. “We had regular Cinesync [low rez] and RV [2K] sessions with our Polycom system. Len is an artist and was able to draw his notes explaining what he wanted to see. For the earlier stages of going through the edit outlining the visual effects we flew to LA to be in the cutting room.” Chiang states, “The whole approach to the car chase worked well. Starting out with the live-action second unit footage and then in post allowing the CG to expand the sequence.” The remake is distinct in itself when compared to the original. “It’s less violent and the worlds are much more represented. The technology that is represented benefits from the improvements in visual effects.”

“We had our initial conversation with Len Wiseman back in December 2010 after viewing the previs and reading the script,” states Double Negative Visual Effects Supervisor Adrian de Wet. “It was just a very quick chat during which we briefly discussed with Len some of the ideas about shooting the car chase practically. A few weeks later we were in LA at Len’s office, viewing the previs for the main sequences, discussing methodology, and meeting various heads of department including Production Designer Patrick Tatopoulos.” De Wet notes, “Len was always very clear from the outset that he wanted a practical, physical reality as the basis for the visual effects for Total Recall; the car chase epitomized that philosophy. One of the first discussions with Len centered on his plan for building the cars practically, having the lead actors in them and driving them around a location at considerable speed; that way we would get all the nuances of physical motion – both the motion of the vehicles and the camera moves – some of which are very subtle but add a depth of reality you might not always get with pure CG.” Double Negative Visual Effects Supervisor Graham Jack points out, “Len is a visual director; he loved to sketch drawings onto photography or on top of our renders to get across what he was after. When we were in post-production we would have regular conference calls using software which allowed him to draw on top of our work so that he could be very specific about what he wanted.”

“The biggest challenge was dealing with the scale and complexity of the world that we had to create,” reveals Graham Jack. “We had to create a completely CG world that could convince the viewer that it was a real city. We used some interesting software that let us create a set of rules that were used to generate the buildings for the city. This meant that although it took quite a while to work out the rules, once we had them we could create as many buildings as we wanted, all with slightly different shapes. This really helped us to create a world that looks like it has scale. We then had to fill it with traffic and crowds of people to make it come alive.” Jack states, “We gathered a lot of photographic reference for the UFB and the Colony to define the overall mood of those shots. These didn’t necessarily show the details of the world but allowed us to discuss the feel of the shots. At the same time we gathered a lot of photographs of specific buildings over the course of several photo-shoots that started to form the basis of the detailed look of the buildings. The UFB was primarily based on neo-classical buildings around London, whereas for the Colony we drew more on places like Shanghai, Beijing and Singapore. As Double Negative has an office in Singapore we were able to send those guys out to get some specific reference there. For the Colony we were also able to use reference photographs of the extensive sets that were built for those sequences.”
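
Jack’s rule-based approach — invest time in a small grammar of rules once, then generate endless building variations from it — can be sketched in a few lines of Python. This is a toy illustration only, not Double Negative’s actual software (which the article doesn’t name); every rule, name and parameter here is invented for the example.

```python
import random

def generate_building(seed, max_floors=60):
    """Toy rule-based building generator: a few stacking rules,
    driven by a per-building random seed, yield varied but
    plausible tower silhouettes."""
    rng = random.Random(seed)          # per-seed stream -> reproducible
    floors = rng.randint(5, max_floors)
    width = rng.uniform(10.0, 40.0)    # base width in metres
    profile = []
    for level in range(floors):
        # Rule: roughly every 8-12 floors the tower may step inward.
        if level and level % rng.randint(8, 12) == 0:
            width *= rng.uniform(0.6, 0.9)
        profile.append({"level": level, "width": round(width, 2)})
    # Rule: tall towers collect rooftop clutter (antennae, vents).
    roof = ["antenna", "vent"] if floors > 20 else ["vent"]
    return {"floors": floors, "profile": profile, "roof": roof}

# One rule set, many seeds -> a city block of similar-but-distinct buildings.
city_block = [generate_building(seed) for seed in range(100)]
```

The key property is the one Jack describes: the rules are fixed, but each seed produces a slightly different shape, so populating an entire skyline costs no extra design effort.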


“We used Patrick’s artwork as a creative starting point and then turned our attention to real-world images,” says Adrian de Wet. “We looked at hundreds of reference photographs on the web, and hundreds that we took ourselves: photographs of London on a sunny/hazy day for reference for the UFB, and real-world photographs of Shanghai, Beijing and Singapore for the Colony. The Colony had a very strong Asian influence, so we used many architectural styles found in those cities. But the lighting and weather conditions of the UFB and the Colony were polar opposites: the UFB being bright, clean, directional light, and the Colony being more ambient with accents of light: rainy, misty atmospherics with neon, sodium and fluorescent artificial light. Both environments required considerable real-world reference. Architecturally, the Colony was more Asian in style but also had throughout a certain element with a style slightly akin to that of a Latin American favela. This was the Habitat, which was where the bulk of the population of the Colony resided. The Habitat was based on a real housing complex in Montreal called Habitat 67. We took the basic style of Habitat 67, which is postmodern and reminiscent of Le Corbusier in its brutalism, and extended it so that it seemed almost infinite, made it seem to float above the waterfront buildings, and propagated it throughout the environments of the Colony.”

“For the Waterfront areas of the Colony, where we see lots of action with Quaid, Patrick and his team designed a two-story high rectangular set with a dock in the middle, complete with boats,” states de Wet. “Surrounding and facing the water were businesses: shops, offices, workshops, and banks. On the second floor were more businesses, one of which is the entrance to Rekall. We surrounded the set with green screen so that we could add set extensions such as the Habitat, more businesses, shops, transport infrastructure including a monorail system and a glide train, neon signs, advertising boards, holographic ATMs, atmospherics and sky. In the UFB, the brief from Len was that it was supposed to be a multi-layered city. The story point is that the inhabitants of the UFB have run out of space and are building upwards, adding layer upon layer to create an entirely new city in the air. So at ground level you have the old [contemporary to us] London, and the further up you go, the newer it gets, complete with aerial freeways and car elevators, all running on magnetic levitation technology. But supporting structures hold it all up – nothing floats – and the architecture should be ‘neo-classical’ [in Len’s words], but with a modern twist. All this is peppered with holographic signage and detail. It’s the detail that is really important here: thousands of small pieces such as rooftop furniture, antennae, lamp posts, cables, fencing, trees and vegetation – all the stuff that seems really small adds up to help give the shot a convincing patina which we read as reality.”

“The balcony was a practical set but everything beyond that was CG,” states Graham Jack. “We tried to make sure that we were replicating some of the important lighting elements of the foreground in the distant city – for example, the glistening wet concrete of the balcony. We also went through a long process of carefully adjusting the layers of rain and atmosphere to try to get a feeling of scale without losing too much of the detail in the background.” Another area of the movie where practical and CG elements had to be seamlessly combined was the Hover Car Chase Sequence. “One of the things that helped was that Len was happy for us to adapt the environment to the lighting,” remarks Jack. “So if a large pillar cast a shadow over the cars in the photography, we would make sure that in our CG environment we added something that would cast that shadow. We also created additional passes for reflections moving over the cars, and in some cases replaced parts of the practical cars with CG where we needed to make the reflections or shadows work.” Adrian de Wet adds, “The post process for VFX was always being considered when we were shooting, especially in the car chase. You can split the car chase roughly into two different lighting environments. In the first part, before they go down in the car elevator, they are on top of the freeway and are mostly in full sun. We chose a location that suited this, which was a disused Canadian military runway at Borden Airfield, Ontario. In the other part of the car chase, after they have descended in the car elevator, the action takes place underneath the freeway, because now we are suspended underneath it rather than on top of it. This meant that we were largely in shadow and the lighting was more ambient. We chose a location for that, Lakeshore Avenue, which runs underneath the Gardiner Expressway in Toronto. Further to that, we tried to use the existing, real parts of our surroundings when we were underneath the freeway and replicate some of them in CG, or even keep some of them in the shots where possible, so that we wouldn’t have to change the lighting on the cars too much. It paid off to be constantly aware, while shooting, of what we were going to need to do in post, and this helped get the practical cars to sit convincingly into the environments. We also created extra reflection passes of the environments and added them onto the windshields.”

A distinctive moment in Total Recall is the Elevator Chase Sequence. “For the system of elevator shafts, canyons and chasms, we used the previs as a starting point,” explains Adrian de Wet. “Patrick and his team designed and built one section of horizontal elevator shaft and the adjoining corridor. Halfway down the horizontal shaft section there was a hole in the ceiling, from which there was a small section [just a few feet] of vertical shaft. On the floor below the hole in the ceiling we had a hole in the floor, which was to be a continuation of the vertical shaft going downwards. The hole in the floor could be covered up with a ‘plug’ of set piece whenever we needed an unbroken run. We shot a lot of the chase scene in this section of corridor and created digital set extensions of more shafts ahead, above and below whenever needed. At each end of the shaft was a green screen, so that when Quaid and Melina reach the end of the horizontal shaft we are almost entirely onto green, giving us the freedom to put in whatever environment we wanted. For the elevators themselves, Clay Pinney and his Special Effects team rigged a fully dressed elevator car to travel at quite a considerable speed down the shaft on rails. Although most of the elevators in the final sequence are CG, there are still a few shots with a practical elevator, which proved invaluable as reference for the rest of the sequence. For the larger, cavernous interiors in this sequence, we used the previs as a starting point, but it was still pretty much a blank canvas when it came to designing all the architecture and deciding which materials and textures to use. As well as that, we were presented with the challenge of designing the lighting, and deciding on the placement of things like scaffolding, walkways, gantries, fans and vents. For the shots where we look down, these all of a sudden became fairly complex during our design stage. Then it was suggested that rather than looking down into darkness or some rooftop architecture, we should look down and see daylight below us, and far below, the city sprawling out beneath us in the haze. This increased the difficulty considerably, because now we had to change the lighting so that daylight was actually coming in from below, and artificial light was coming in from the side.”

The practical Synth costumes had to be digitally altered. “There was a pretty painstaking process of painting out parts of the Synths so that we could see through to the background,” states Graham Jack. “We also body tracked all the Synths. There were a couple of things that we did on set that made this a bit easier. Every time we shot a Synth shot we would shoot clean plates that could be used to paint in the background. We also shot the Synths from different angles on our own HD cameras. We have some software that can use the multiple views along with the 3D geometry of the armour plates to solve for the location of those armour plates in 3D. This helped a lot with the body tracking. It was then a case of rendering the mechanical robot pieces. In some cases we also had to render the armour plates so they could be used to replace parts that were revealed when the joints were removed.” Adrian de Wet notes, “The idea for the Synths was that we would have actors in robot suits in all the shots, and later, in post, we would paint out the parts of the actors that weren’t ‘robot’, i.e. the knees, ankles, shoulders, wrists, hips and neck. We would then replace those parts that we had removed with CG robot tech: cogs, metal joints, cables and pistons. The reality was that we actually went a little further than this and ended up replacing the Synth torsos entirely, so that you really could clearly see through to the background with most of them. The challenge on set was that we had to shoot clean passes [i.e. a version of the shot with no Synths] for almost every shot, which is difficult to do when you’re pressed for time on set, to say the least. But one of the most important things to get right if you’re partially replacing an actor is the body track. You need an absolutely accurate body track of the original actor, otherwise the CG bits slide around and the illusion is destroyed. To help with the body tracking, on set we utilized high-definition video cameras at angles other than the main camera. This gave our body trackers an alternative view of the Synth action, which helped them get a more accurate body track, quicker.”


“One thing that was a huge help was that we were able to capture lighting information on set,” remarks Graham Jack when addressing how the CG Synths were integrated into the various scenes. “We used a Spheron, which is a high dynamic range camera that captures a full 360-degree view of the environment. We were able to use this in the rendering process to light the Synths so that they matched the look on set. We had also recently upgraded our lighting and shading setup to provide a much more realistic simulation of the physical interaction of light with surfaces, which meant that the Synths looked very convincing pretty much out of the box.” Adrian de Wet states, “Sometimes it would be easier to completely replace the actor in a robot suit with a CG Synth. We always had the hero take of the actor in a robot suit, which provided the best possible lighting reference; that in conjunction with our latest lighting and shading technology and some deft compositing meant that it was hard to tell the difference between totally CG Synths and partially replaced Synths.” Both Adrian de Wet and Graham Jack were present during principal photography. “The on-set work made a huge difference,” believes Jack, “having the practical Synth shots to track made the motion look very realistic, and having access to the set for HDRI capture made the lighting much more convincing.” De Wet agrees, “I don’t think the CG Synth shots would have been nearly as convincing had we not done the on-set work that we did.”
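
The image-based lighting Jack describes — using a captured 360-degree HDR panorama to light a CG character — can be illustrated with a toy calculation. The sketch below assumes a lat-long panorama stored as a 2D grid of scalar radiance values; a production renderer works on full-colour maps with importance sampling, but the underlying cosine-weighted, solid-angle-weighted sum over the environment is the same idea. All names here are illustrative.

```python
import math

def diffuse_from_hdri(env, normal):
    """Approximate diffuse irradiance from a lat-long HDR panorama:
    sum every texel's radiance, weighted by Lambert's cosine law
    (how squarely the texel faces the surface normal) and by the
    texel's solid angle (rows near the poles cover less sphere)."""
    rows, cols = len(env), len(env[0])
    total = 0.0
    nx, ny, nz = normal
    for r in range(rows):
        theta = math.pi * (r + 0.5) / rows        # polar angle
        sin_t = math.sin(theta)
        for c in range(cols):
            phi = 2.0 * math.pi * (c + 0.5) / cols  # azimuth
            # Direction of this texel on the unit sphere (y is up).
            dx = sin_t * math.cos(phi)
            dy = math.cos(theta)
            dz = sin_t * math.sin(phi)
            cos_term = max(0.0, dx * nx + dy * ny + dz * nz)
            solid_angle = (math.pi / rows) * (2.0 * math.pi / cols) * sin_t
            total += env[r][c] * cos_term * solid_angle
    return total / math.pi   # normalise: a uniform sky of 1.0 -> ~1.0

# A uniform grey "sky" lighting an upward-facing surface.
flat_sky = [[1.0] * 32 for _ in range(16)]
up = (0.0, 1.0, 0.0)
irradiance = diffuse_from_hdri(flat_sky, up)   # ~1.0
```

This is why an on-set HDRI capture is so valuable: the panorama carries the actual on-set light in every direction, so the CG Synth picks up the same light the suited actor did.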


“There was, early on, some discussion about shooting miniatures for the Fall destruction,” states Adrian de Wet. “Instead we chose to go fully CG so that we would have complete control over all the fire, smoke, debris, and the Fall itself as it comes apart. One of the main challenges was keeping the scale huge: that’s always been the challenge with physical effects such as fire and debris. Also the sheer scope of the task: the number of shots, and each shot being as complex as it was meant that we pretty much had to work on all the shots simultaneously, yet keep a unified look throughout them all. So, to answer the question: I don’t think doing it all CG actually made it easier per se, because to shoot it practically, we would have been so physically limited – since the vehicle is so huge and travelling so fast – that we probably would have ended up changing so much of what we shot that it would have been CG anyway. Having said that, Clay Pinney and his team shot a great deal of practical explosion elements which were used extensively throughout the movie, including the Elevator Chase Sequence, and particularly for the interiors of the China Fall exploding and for some explosions on the rooftop.”

“Patrick [Tatopoulos] came back for a while in post to help us out in the design stage,” states Adrian de Wet. “Particularly for specific assets whose designs hadn’t been absolutely finalized during prep or the shoot, such as the Synth, the Attack Harrier, and some of the elevator chase cavernous interiors.” A lot of effort went into making the holographic displays and signage unique. “We tried to add some subtle details to prevent them from being simple transparent billboards slapped over the top of the shots,” says Graham Jack. “Little glitches and instabilities in the graphics, projection beams to show where the hologram was emanating from, and things like that.” Double Negative had to coordinate with other visual effects companies. “Some shots were shared in that we would light and render the CG and another facility would handle the compositing,” says de Wet. “Also many assets were shared, particularly the Attack Harrier, the Synth, and of course models of the Fall, including details such as the clamps and textures of the shaft whizzing past; all that was made available to other facilities for shots they were lighting, rendering and compositing. Maintaining a unified look was at times a challenge, but we would try to fast-track a particular hero shot to look sign-off with Len and use that as a reference.”

“Dneg was fronting all the VFX work and our main contact was through the VFX Supervisors Peter Chiang and Paul Riddle,” states Prime Focus World VFX Supervisor Alex Pejic. “I’ve worked with Peter and Paul before during my time at Dneg, so understanding and communication were well established. There are sequences where discussion was more focused around ‘technical’ issues, like integrating Synths into the environment. The Lobby Sequence was more challenging, though, as it involved a number of different iterations and working really closely with Peter; he would guide you through his vision so carefully until you started seeing the shots through his eyes, and from that point it becomes very easy to understand what he wants you to do. In the last couple of months, once we’d got most of the shots to a presentable level for Len [Wiseman], we had a review session every night with Len in LA, and Peter, Paul and the rest of us in London. Depending on how much we had to present, Len would look at the shots and give his feedback.” Pejic remarks, “There were a number of challenges throughout the project: from raytracing 20+ Synths in the Synth Bay Sequence, to rigging to accommodate different stunt guys on-set, through to creative challenges like the Lobby Sequence. At Prime Focus World we were working across three international facilities [London, Vancouver and Mumbai], so we were able to further refine our internal management processes via actual shot execution. Sharing work on one shot across three offices is something we are very proud of. Of course, there is always space for improvement and that’s the joy of this job. There are a few things we learnt for next time and we’re already working on them: mainly in-house tools for tracking data and moving assets around the world that allow the global Prime Focus VFX team to work seamlessly, like we are all in the same building.”

CG and practical elements were integrated to make the Synths believable characters. “The key elements were definitely lighting/shading and body tracking/animation,” observes Alex Pejic. “Our instructions were to always replicate the live actors in armour, even for full CG Synths, so for our hero animation it was interesting to do a man-playing-a-robot rather than doing straightforward robot animation. All Synths were hand animated by our animators or trackers: we had rig calibration controls to adjust for the different physiques of the various stuntmen in costume [arm / leg joint length, skinny vs muscular] which would otherwise throw off a body track. Another challenge for tracking shots with CG guts added inside of live-action armour was the fact that the live actor body armour is sliding and vibrating over muscle and fat, rather than bolted to a metal skeleton. Sometimes this added the kind of ‘happy accident’ animation that can make things seem more real, but often it meant we had to selectively replace parts of the live-action armour with CG, in which case it had to match the live armour’s lighting and reflections perfectly. As for lighting, the key component is to recreate all the environments in order to get accurate on-set lighting. Reflections were most crucial to getting it right, and blending elements seamlessly with the plate. We decided to bake the interior shading solution in a ptc and then raytrace the set. It was the perfect solution for short render times and proper spatial reflections. In terms of shading, we used microfacet distribution and the Schlick approximation for Fresnel and that gave us the desired results. For the diffusion contribution of the moving Synth, we used brute force raytracing. For the rest of the Synths we pre-baked the environments on each Synth, then convolved it and raytraced it.”
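Pejic’s mention of a microfacet model with the Schlick approximation for Fresnel refers to a standard shading formula. As a minimal illustration (not Prime Focus’s actual shader code), Schlick’s approximation can be written like this:

```python
def schlick_fresnel(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of Fresnel reflectance.

    cos_theta: cosine of the angle between the view vector and the
               surface normal (1.0 = head-on, 0.0 = grazing).
    f0:        reflectance at normal incidence (roughly 0.04 for
               dielectrics, much higher per channel for metals).
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Head-on, a dielectric reflects only its base reflectance...
print(schlick_fresnel(1.0, 0.04))  # 0.04
# ...while at grazing angles reflectance climbs towards 1.0.
print(schlick_fresnel(0.0, 0.04))  # 1.0
```

The appeal of this approximation in production shading is exactly what Pejic implies: it is a single cheap power term, yet it reproduces the bright grazing-angle reflections that sell polished armour like the Synths’.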

No visual research was required. “Dneg was the lead VFX house and all research was done by them,” remarks Alex Pejic. “We were provided all texture references from the set: Lidar scans, measurements, HDR images, everything in order to rebuild the Synth Bay in its entirety. Even though the main task was to work on the Synths and their internals in order to get accurate reflections, we ended up rebuilding most of the environment. On our side, the R&D work we did for the Synth Bay was mainly around our lighting approach and how we were going to raytrace so many Synths in the shot.” Other shared assets were provided by the main visual effects vendor for the movie. “The CG Harriers, like most of the hero assets, were all delivered to us by Dneg. We started with WIP assets to get us going, and so we could start blocking animation for Peter. The CG Harriers were featured in other sequences completed by Dneg, so the biggest challenge was to make them look exactly the same: that included heat exhaust, which was all done in Houdini. For that, we met with Dneg a couple of times to understand their methodology, and worked closely with their FX team to replicate their workflow. As the sequence and assets look progressed, we were given look development turntables which we had to match. Integration of the CG Harriers was trickier than expected, as we fought different lighting conditions on-set and the full CG environment we used to replace the background. Once the sequence look was established everything else fell into place naturally.”

“China Fall was challenging for a variety of reasons,” notes Alex Pejic. “There was some very difficult wire removal work to be done, as well as various set extensions; and on the creative side we were asking what we could do to help the sequence look like Zero Gravity. The main challenge with wire removal on the China Fall sequence was the sheer number of wires – and their resulting shadows – and the removal of these elements in an environment where the cameras and lighting were constantly changing. The software used was exclusively NukeX and Silhouette. As every shot required tracking for set extension, bullet hits and CG, we had cameras provided by our matchmove department; they also gave us scene geo. The first task was to remove the wires from the background by projecting patches onto this geo and using various transforms to lock the patches into place. They then required extensive grading, due to the ever-changing lighting. We used the Nuke curve tool linked by expressions to grade nodes: this helped match the flickering luminosity of the shots. The next challenge was removing wires from the actors, which involved warping and morphing clothing and skin patches to match the costumes. Certain areas that could not be patched [like when wires were over moving faces/hands] were given over to our excellent Silhouette painters: they painted out the remaining wires frame-by-frame. The final task was to remove Jessica Biel’s hair bun so a CG ponytail could be added in comp to help emulate the ‘Zero G’ effect. This involved everything from background recreation to full ‘crown’ replacement. Once the paint and compositing was finalized many shots were given a Kronos retime to accentuate the feeling of weightlessness.”
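The expression-linked grade Pejic describes – sampling the plate’s flickering luminosity with Nuke’s curve tool and driving grade nodes from it – boils down to deriving a per-frame gain from a stable reference region. A simplified stand-in outside Nuke, with invented values for illustration:

```python
def mean_luma(patch):
    """Average Rec.709 luma of a patch given as (r, g, b) pixel tuples."""
    total = sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in patch)
    return total / len(patch)

def flicker_gains(reference_frames, target_luma):
    """Per-frame gain that pins a repair patch to the plate's flicker.

    reference_frames: per-frame pixel samples from a stable plate region
                      (the role Nuke's curve tool analysis plays).
    target_luma:      luma the patch was graded to on its key frame.
    Returns one multiplicative gain per frame, as an expression-linked
    grade node would apply.
    """
    return [mean_luma(f) / target_luma for f in reference_frames]

# Two frames of a flickering plate: the second is twice as bright,
# so the patch needs double the gain to stay invisible.
frames = [[(0.2, 0.2, 0.2)] * 4, [(0.4, 0.4, 0.4)] * 4]
print(flicker_gains(frames, mean_luma(frames[0])))
```

The point of linking by expression rather than keyframing by hand is that the gain updates automatically whenever the analysed plate region changes, which matters when lighting is, as Pejic says, constantly changing.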

“At the beginning, the Lobby Sequence looked as though it would be fairly straightforward, but it turned out to be the most creatively challenging work we faced,” reveals Alex Pejic. “Size and scale was one of the biggest issues we fought. The live-action interior was about 100 feet across, and then another 60 feet of concrete balcony beyond the windows to the green screen. The balcony railings were textured and rendered as 3D objects in Nuke. Our CG freeway with massive sim cars was another 900 feet beyond the balcony, and rendered in Guerilla. The rest of the CG city background was given to us by Dneg as object and prelit renders which we projected in Nuke, and matte paintings for the far distance. Some of the elements just didn’t work with the camera move so we had to go back and create simplified 3D geometry, then project it back on, in order to see parallax between the objects. The biggest challenge was actually the look, and this is where we worked really closely with Peter and Paul to achieve what Len wanted.” Crowd replication was required for the scene. “There were two different ‘crowds’ we created for the Lobby Sequence. One was additional Synths, and the other was the rest of the crowd in the background. In this case the Synths were full CG renders, posed in a way that works with the shots. We rendered a library of different poses as a turntable so Peter and Paul were able to pick the frame and pose they liked and address any creative changes, with the minimal time required. As for the crowd, we used 2D elements shot against a green screen provided by Dneg. The challenge on that one was to get the element working with the rest of the action in the crowd. We ended up blending different sections from different elements in order to get something we were happy with, and address both Peter’s and Len’s creative direction.”

“Colin is in this apartment and he starts playing the piano and unknown to him it unlocks a key code which generates this hologram on top of the piano between these two urns,” states The Senate Visual Effects Supervisor Richard Higham. “The hologram is of him prerecording a message before his memory loss and he is able to divulge certain information in answer to particular questions.” The moment of revelation was shot with an array of 5D HD cameras along with a Red Epic to provide 180 degrees of coverage. “It definitely has lots of little traps you could fall into because you have to be careful with perspective, eye line, and scale. You have to choose the appropriate angle and track the performance of the actor, his face, chin, and even how the muscles move there; you used that to generate an animated mesh and on that we projected video noise, interference lines and scan lines. We would render those back over the top of the green screen element which we composited on top of the piano. It was trying to get a specific look that Len [Wiseman] liked so we went through a number of different methods. There are shots where the camera is on the move so we either had to morph from one camera position in the array to the next one so you get the sense of perspective shift as you move around the head. In some cases you didn’t need to and in other cases we projected the green screen footage back over the tracked mesh and rendered that from the actual background plate camera. You had a true 3D system going on in those situations. It was dependent on the shot and which methodology would work because it wasn’t an exact science. There were certain areas where you’ve tracked two plates and you try to blend the two together but they need to be pixel perfect otherwise you notice that there’s something wrong.”

“I like to make up my own tech in my head because I’m a fan of science and know in reality that a lot of the stuff we created is almost impossible,” remarks Richard Higham. “But you have to create a tech in your head and think, ‘If this existed this is how it would operate.’ There’s a scene where Colin has a phone in his hand and it starts buzzing. We had to paint out the stuff that was on his hand but when he places it against glass this little graphic icon materializes from his fingertips on the glass; that was received as a graphic element. We had to give it a futuristic holographic look. I reasoned to myself, ‘What happens is there are certain electrical charges reacting to something within the glass that causes them to glow.’ You use that made up technology as the way to figure out how it would look. It was the same with the hologram head. You wonder, ‘How is the sensor being formed and how did his head get recorded in a 3D way?’ You start thinking, ‘We’ll say that there is some kind of particle collision here and that something now can materialize which has been prerecorded and it can be dimensional.’”

Figuring out how the hologram would look upon being activated and deactivated had to be factored into the design process. “That was one of the most challenging aspects because it’s subjective,” remarks Richard Higham. “Something that I may like or you may like may not be liked by someone else. You have to literally experiment and try different things out. Initially, there was this spinning aspect idea that Peter Chiang suggested to Len so we tried it out. It was like a segment from a basketball. The faster it spun the more images were revealed; that was working quite nicely but Len felt it wasn’t broken up enough and had too much spinning so we went in another direction. Eventually, we came up with this thing by using the UV [Ultraviolet] mattes that we generated from our 3D tracks. We populated little geometric noise patterns and progressed them so they moved upwards but in a randomized way. We used that to reveal aspects of the hologram that was coming in so it feels like a randomized blot that has some moments of distortion and flicker, and that slowly progressed up until a fairly complete head was in place. You invert the process for when he disappears again. That was a mixture of 2D, 3D, all sorts of different effects to try to get the exact look he was after. Len didn’t want it purely geometric like a checkerboard but it had to have some geometric element to it but at the same time it needed to feel organic. It was quite an interesting way to try and create this revealing matte that has a geometric element but also an organic feel to it so it didn’t feel so obviously computer generated.”
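The reveal Higham describes – geometric noise blocks progressing upwards over the UV mattes in a randomized way – can be sketched as a toy matte generator. The grid size, jitter value and seed below are invented for illustration, not The Senate’s actual setup:

```python
import random

def reveal_matte(cols, rows, progress, jitter=0.3, seed=7):
    """Bottom-up reveal matte over a UV-space grid.

    progress: 0.0 (hidden) .. 1.0 (fully revealed).
    jitter:   per-cell random offset that breaks up the wipe line so the
              reveal reads as randomized blocks, not a clean sweep.
    Returns a rows x cols grid of 0/1 matte values (row 0 = bottom).
    """
    rng = random.Random(seed)  # fixed seed: the matte is repeatable
    offsets = [[rng.uniform(0.0, jitter) for _ in range(cols)]
               for _ in range(rows)]
    threshold = progress * (1.0 + jitter)  # so progress=1.0 fills the grid
    return [[1 if (row / rows) + offsets[row][col] <= threshold else 0
             for col in range(cols)]
            for row in range(rows)]

def coverage(matte):
    """Fraction of the grid currently revealed."""
    return sum(map(sum, matte)) / (len(matte) * len(matte[0]))

# The revealed fraction grows with progress; invert it for the power-down.
for p in (0.0, 0.5, 1.0):
    print(p, coverage(reveal_matte(16, 16, p)))
```

Running the reveal backwards, as Higham notes, gives the deactivation for free: the same matte sequence played in reverse.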

The phone hand resulted in another hologram which needed to be developed. “We were provided the graphics element Len had approved and then it was the case of taking that element and breaking it apart based on new information you get as you start to produce shots for him,” states Richard Higham. “When Colin touches the glass and you’re seeing through to the other side so there is a certain unique level of transparency.” Issues had to be addressed such as getting the dark aspects of the hologram to glow as well as protecting certain areas for dramatic purposes. Double imaging resulting from reflections was incorporated into the design of the visual effect. “We used that idea to punch some dimension into the graphic we had on the glass so it is difficult to know whether it is resonating from the back or the front.” There was also the matter of having to deal with the phone hand. “It wasn’t too bad. You could see the darker sections of the prosthetic just underneath the skin and the design which they used was cool. It illuminated and glowed every time it rings. But when it rings off you can see the parts of it still there. It was the case of creating little patches of hand and tracking individual bits on. The actual device had a power thing that went into it which came back and around the thumb, and up his wrist and the back of his arm. Whenever you saw that you had to create little clean patches and track them on individually.”

“We initially received a generic backdrop of the waterfront which was fairly straight on and then we were asked to push it into all kinds of different shots,” says Richard Higham. “When you have multiple angles you can’t keep repeating the same thing. We basically broke it up a bit further and used the projection of some different parts where we managed to grab scraps of matte paintings from elsewhere. We mocked up some rough things and used what the existing look was of New Asia; we used that as a guide but added in extra dimensions to a street so you’re not always looking flat onto something.” Visual research was not a necessity. “A lot of that had been based off the New Asia which had already been extensively created by Double Negative. It was a case of working within the reference they gave us and a lot of that was provided by production.” In regards to the final fight sequence, Higham states, “We received all of the geometry of the China Fall because we were going to render it ourselves but of course we had a different shader and render network than Dneg. We still had to match what they had done previously so there would be consistency and continuity. It was a case of literally rebuilding the shaders to all of this geometry and lighting it using image based lighting to try to match it with the environment. The interesting thing was to receive a model as opposed to beginning from scratch; however, the challenge is that you need to match a facility which has a different way of operating 3D wise.”

“Our biggest challenge for this picture was the short time frame as we joined the project right at the end of post production,” states MPC Visual Effects Supervisor Charley Henley. “The work MPC picked up included a collection of full CG shots where The Fall goes through the tunnel, a fight scene on top of the fall that required set-extension and a CG tunnel wall in the background. On the building that the lift connects to [when they arrive in New Asia] we completed CG set extensions, CG clamps and background matte paintings of the city.” A lot of time was required to technically adapt the pipeline for the London-based VFX facility. “We shared a number of assets with other facilities and there was a lot of code writing to transfer their texture tagging to our own and a lot of lighting look development work to get a good look and efficient render times.” The work had to be completed within a six week deadline. “Normally we would have had a few months to complete a project like this. You’re building these assets while they’re shooting so you have a lot more setup time. We had to compress that time to get the assets efficient and to make sure that they ran through our pipeline effectively.” The task required a great deal of forethought. “We spent time up front planning who could look after what so everything was done, where possible, in parallel rather than one department waiting for the next department. We constantly had to address all of the issues at the same time rather than in a linear way.” Henley notes, “There were shots where The Fall travels a long distance through the tunnel so you have a lot of motion blurred geometry whizzing by and the model was quite large so they proved to be heavy shots to render. We did some baking of the lighting for those shots, which is something we don’t normally do but it was needed for efficiency. 
We then ran a second layer of interactive lighting on the elevator and the walls.” A simple solution was employed when depicting the feeling of weightlessness when the massive elevator passes by the Earth’s core. “We completed a number of tests in trying to emphasize the moment they go into zero G with extra effects and atmosphere, and what we ended up doing was simply going into slow-motion for a beat.”

Assets were also created from scratch by MPC. “Quaid disguises himself with an anti-cognition collar, it’s like a spy device that projects a photo-realistic mask of another character,” states Charley Henley. “There’s a scene where Quaid goes through security, the gadget malfunctions and the projection starts flicking between the different heads that have been programmed, ending up with the image breaking down revealing the real Quaid.” Multiple faces had to be blended together. “It was shot in a way that Colin would do the hero performance and the other actors would match that as closely as they could on-set. To get total control over doing our transitioning between the heads we roto animated all of the different actors with 3D scans of their heads and projected back the face textures from the plate. We wanted to use a 2D approach as much as possible to be efficient in speed and be flexible in changing design. The design wasn’t locked down at that point and when you’re short on time the biggest risk is spending most of that time trying to nail a look rather than refining the quality. The aim was to be as flexible as possible with the design but always have a correct perspective on any texture added to the head; we built a Nuke setup in compositing so we could lineup all of the heads and mix between them. We could project back whatever face we wanted onto the roto animation of Colin’s original performance.” The diversity of faces allowed for interesting patterns and a unique hologram style to be developed. “The thing we investigated was the texture and quality of the edges as it mixes between photo-realism and something slightly more graphic.”

“We’re always pushing to try to do our explosions with real elements where possible,” states Charley Henley. “In this case there was a scene where the heli-jet gets hit and slides across the deck; that’s where we did some clamp explosions. They were all CG shots effectively so we laid out the camera and mocked up the shot in Maya but then we did shoot elements for the explosions. Angus [Bickerton who served as a Production VFX Supervisor] shot the elements at Shepperton studios. In this tight timeframe it was more practical to go and shoot elements for the FX. Having done the CG layout we matched the cameras, built some basic blocks to represent the walls and did a miniature shoot of explosions which was combined with debris elements and composited into the otherwise all CG shots.” Henley remarks, “In a later scene, which is the aftermath of the destruction of The Fall, our job was to extend an interior set to be outside on top of the tower. We did quite a lot of look development in compositing of what time of day would suit the set and what kind of grade would work well for the sky and background cityscape. What helped the integration was the fact that this is post destruction so we added a lot of smoke and atmosphere drifting around. Along with that we had to build an ambulance to tie in with the doorway that they had built on set and on a number of shots with on-set vehicles we had to remove the wheels; they’re all meant to be hover vehicles.”

“The rescue harriers were mostly based on a model we had from Dneg, we did some adaptations to the model and some extra texture work to dirty them up to make them feel more gritty and used,” states Charley Henley who used photographs of American ambulances as visual references for the futuristic version of the emergency vehicle. “The ambulance was something we designed from scratch. We created a CG version of the ambulance that would work for the wide shots at fairly low resolution and used that as the basis for the close-up shots. We employed our environment and DMP artists to up the quality and project textures on for the close-up shots. In compositing we used a combination of lighting passes which included reflections and combined them with these projected textures.” There was also the case of wheel replacements. “Some fire trucks were built and photographed on-set but they had wheels on them so we had to do a basic model again of those trucks without the wheels. We rendered them, did a lot of prep work cleaning up the areas where the wheels were and composited in a CG patch.”

Newsreel footage needed to be constructed. “We had reference of some real footage from helicopter views of news footage of crowds gathering,” says Charley Henley. “Len had that cut in as filler in the edit and our challenge was to do a version which represented the real downtown New Asia. This was after the main event where The Fall had been blown-up and everybody is cheering in the streets. We went about it trying to find the most efficient approach to get something photo-real in the short space of time, generated effectively from scratch. There was a photo shoot already done of a set that was in a similar area so we took that photo shoot, did basic models and photo projections to create the buildings around the waterfront which was the area for the landscape. Most of the lighting was baked into the photos but we added CG reflections to give a wet look. For the crowd, we decided to go for a cross between using our crowd CG tools and shooting real elements. The people were fairly close-up so we didn’t have the time or resources to build CG characters. But we needed a hefty crowd of over a thousand people for a couple of different camera angles. We shot single elements of live-action crowds performing.” A proprietary CG crowd software developed by MPC called Alice was utilized. “We used that tool to do the layout of the characters. For the shoot we had about 30 actors dressed up in different costumes performing in a variety of ways. One of our programmers wrote a tool for Nuke that would tag the 2D cards effectively to our Alice generated crowd. For each of those cards we could manipulate the timing and performance. We had them cheering to not cheering and each of those cards could be controlled or tweaked in compositing to give slightly different colours so we could get more diversity or shift performances; that was a new way of going from a crowd of about 30 to 1500 in the end.”
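The card-tagging tool Henley describes – attaching a filmed 2D element, with its own timing and colour tweak, to every position in the Alice crowd layout – reduces to a per-position assignment pass. A hypothetical sketch (names, ranges and the layout export are assumptions, not MPC’s actual tool):

```python
import random

def layout_crowd_cards(positions, elements, seed=42):
    """Tag a 2D live-action card to every crowd-sim position.

    positions: (x, y) layout points from the crowd system (here,
               stand-ins for an Alice-style layout export).
    elements:  names of the filmed crowd elements to draw from.
    Each card gets a source element, a frame offset (so neighbours do
    not cheer in sync) and a small colour shift for costume diversity.
    """
    rng = random.Random(seed)  # repeatable layout across comp iterations
    cards = []
    for x, y in positions:
        cards.append({
            "pos": (x, y),
            "element": rng.choice(elements),
            "frame_offset": rng.randrange(0, 200),
            "tint": tuple(round(rng.uniform(0.9, 1.1), 3) for _ in "rgb"),
        })
    return cards

# ~30 filmed performers fanned out across 1500 layout points.
points = [(i % 50, i // 50) for i in range(1500)]
crowd = layout_crowd_cards(points, [f"actor_{i:02d}" for i in range(30)])
print(len(crowd))  # 1500
```

Because each card’s offset and tint are just per-card parameters, they remain tweakable in compositing – which is the flexibility Henley credits for being able to shift performances late in the schedule.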

“Six weeks before the VFX deadline of the film, Buf was contacted to complete one shot, the UFB factory establishing shot,” explains Visual Effects Supervisor Olivier Cauwet who partnered with VFX Producer Fabrice Lett, who also works for the French-based VFX facility. “Then, quickly other shots were added on, such as the explosion sequence of the ‘Engine Room’ and new shots in the film where we had to create the set, animate CG robots [called ‘Synths’] and composite Kate Beckinsale into shots. By the end we had completed 17 shots for a variety of sequences.” Cauwet states, “We never had direct contact with Peter Chiang. We worked throughout the production with VFX Supervisor Angus Bickerton, with whom we had previously collaborated on the latest Tim Burton film Dark Shadows [2012]. Angus sent us the intentions of the director, Len Wiseman, told us the context of shots and showed us the ‘work in progress’ so that we could absorb the atmosphere of the sequences. Initially, we had one cineSync per day with Angus, then, given the time, we moved quickly to two per day. Despite the short deadline, we always presented several proposals for the staging and design, in order to allow Angus to chat with Len and offer different versions. Angus Bickerton is always positive and wants to explore all the possibilities for a shot, which we wholeheartedly agree with.”

“The design and atmosphere of the film were already clearly defined [by Dneg],” remarks Olivier Cauwet. “For the ‘UFB factory’ establishing shot, we had to create a shot that respected the UFB design. Our factory had to integrate seamlessly with the elements that characterize the city like the suspended buildings, the large pillar ‘Stanchions’, the ‘Car Elevators’, and the ‘Mag-Trains.’” A particular sequence stands out for being difficult to achieve. “We had to extract Kate Beckinsale from one close-up shot and place her into another one. For the final shot, in addition to compositing Kate Beckinsale into the foreground, we had to recreate the background set in CG, composite GS extras and animate a troop of marching CG Synths. In the shot we had to extract Kate Beckinsale from, she turns her head and the light reveals many tiny hairs and details that needed to be removed. Of course she was not on a background that would assist in her extraction – such as a green screen would. So the work consisted of rebuilding the hair in 3D rather than extracting details that would flicker. For this, we tracked Kate’s face in 3D, remodeled her hair volume and added new hairs and hair strands to preserve all of the fine hairs during extraction.”

“The main research was the position for the Synths,” says Olivier Cauwet when addressing the Elevator Shaft Explosion Sequence. “We had to determine the number of robots present, the actions for each of them to tell the story and fit in with the surrounding Dneg shots. The first challenge was the animation and understanding their characteristics. The Synths are robot soldiers, they must have a staunch walk, no swaying shoulders, and do not protect themselves during the explosions. There must always be details in the animation that remind us they are robots, not humans with armor. The second challenge was the integration of the Synths. For the destruction shots in the hallway, we had to composite several explosion plates to make the sequence more violent. Once the background was done, we had to animate and integrate the CG Synths that would be destroyed by the explosions. The main concern in integrating such shots is the lighting effect that changes at each frame because of the explosions. For this, we projected the scans of the explosion on the 3D model of the set and we recovered the reflections and the lighting effect in a coherent perspective. Finally for these shots, we comped in the green screen performance of Kate Beckinsale – and added CG debris, pieces of Synths, CG embers and blast effects to improve the integration and make the shots more dangerous. For explosion shots inside the ‘Engine Room’ there was a lot of roto work. The explosion had to be extracted from a set that required CG set extension work. We found that it was easier to rebuild everything in CG than to integrate pieces of set under the explosions. In conclusion, we lit the CG set, animated piston elements being blown up by the explosion, then added blast effects and CG debris to help integrate it all together.”

“The reflection of the explosion on the helmet was a Buf suggestion that Angus approved,” reveals Olivier Cauwet. “From this idea, the director Len Wiseman then wanted the helmets’ interior to be illuminated by the intense light of the explosion and see what is behind the black visor. We animated, lit and comped the CG Synths onto a clean background. We added a reflection with an explosion element on the faceplate of the helmet and we lit the face of the Synth to distinguish his insides. The scene is lit with an hdri map and an animated light source to mark the explosion.” Practical and CG elements had to be integrated for the sequence. “In this sequence, the lighting effects have a key role in the quality of CG elements integration with the live-action. CG light should be perfectly synchronized with the live shots to properly integrate the CG Synths. This implies shadows, ground contacts and colour temperatures in the high and low lights of CG elements. For explosion shots, we constantly adjusted our integration frame by frame because lighting was always changing.” The Synths were not the only ones caught in the massive destruction. “For the shot in which Kate Beckinsale finds the exit through the window, we added embers in the beginning of the shot for continuity with the explosions. For the background, we modified the model of the “China Fall” that we received from MPC to add an exit door. We then created textures taking material references from the live-action shots. We created the falling motion of the ‘China Fall’, added rain to match the atmosphere of the sequence, and animated CG debris falling by because of damage to the top of the tower. We added a light effect on the background and SFX, the sparks and flames, on the ground to make the situation chaotic. We also built CG Synths in the hallway and a dead Synth blown by the explosion in front of the window.”

When it came to dealing with the Synths, Olivier Cauwet states, “The main issue was to keep our render in line with those in the rest of the film. Our artists, David Verbeke and Dominique Vidal, did a great job to get there. Because the Synth’s shaders from Dneg were specific to their pipeline, we had to recreate them using the ‘lookdev turntable’ of the Synths and references shots that Angus sent us. Fortunately, Dneg provided files listing the hierarchy of maps connected to the shaders themselves connected to the objects. We were able to find connections between objects and maps so they could fit in identically to those of the film. We had to rebuild the rigging of the Synth to animate them. We did not create the graphics on the face helmets. They were provided by the production and we adjusted them per Len’s comments.” The tight time frame added to the difficulty of maintaining a uniform look. “That was indeed a big challenge because we were not supposed to work on this movie when post-production started. We were given these shots that had not been started six weeks before the VFX deadline. We just had to do everything. Of course, it’s not easy to keep a consistent look between the shots in this type of situation. But luckily, we had only a few shots to work on. We didn’t get an entire sequence to work on; there were shots from Dneg & The Senate before and after our shots in the edit. Angus always kept the edit up to date for us to make sure our work was matching with the shots of other facilities.”

“We got the work on Total Recall through Double Negative. VFX Supervisor Peter Chiang, who is one of the owners of Dneg, showed us the sequences, what we had to work on and what he felt the shots needed,” states Baseblack Senior Compositor Martin Ofori. “For example, we had sequences in the London Underground where we had to create a travelling tunnel background for the green screens. We came up with a tool in Nuke which generated a texture which looks like travelling through a tunnel; we showed this to Peter and he would then tell us if the repetitive pattern was okay or if he was looking for something else, if the shots needed something else.” Shifting the setting from Mars to Earth made the task harder. “The challenge was to create something where people would say, ‘Yeah, that’s the way it looks when we use the Tube.’ If it were on Mars you could put anything into the window and nobody could say, ‘That’s not the way it normally looks.’” Photo and video references were captured by devices like iPhones. “We noted what kinds of patterns we saw. Those patterns were mainly created in still images as matte paintings. We then animated those matte paintings. We used motion blur and light effects to create a real travelling feeling taking in account that there were certain light effects from the shoot already in the plate. We usually had to mimic the lights outside of the carriage.”
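The repeating travelling-tunnel texture Ofori mentions rests on a simple idea: scroll a tiling matte painting past the carriage windows and wrap the offset at the tile width so the loop is seamless. A minimal sketch of that wrap (pixel values are illustrative, not Baseblack’s actual tool):

```python
def tunnel_scroll_offset(frame, speed_px, pattern_width):
    """Horizontal offset (in pixels) for a tiling tunnel painting.

    speed_px:      pixels the background travels per frame.
    pattern_width: width of the repeating matte-painting tile; wrapping
                   at this width makes the scroll loop seamlessly.
    """
    return (frame * speed_px) % pattern_width

# A 2048 px tile moving 300 px/frame wraps back within 7 frames.
print([tunnel_scroll_offset(f, 300, 2048) for f in range(8)])
# [0, 300, 600, 900, 1200, 1500, 1800, 52]
```

The repetition period of the tile is exactly what Peter Chiang was reviewing: if the wrap is too short, viewers start to recognise the pattern, which is why the team layered motion blur and light effects on top.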

“The biggest challenge for us was to be able to react quickly to the artistic vision that the director had,” remarks Martin Ofori. “Len had a specific vision in his head which sometimes changed, and it was important for us to react to these changes and come up with a new result in a reasonable time.” Ofori explains, “One specific shot was filmed in an underground station, but Peter and Len then had the idea that the subway station is partially destroyed, so we had to come up with a design for the outside of the station. We used mainly matte paintings and 2D projections in Nuke, but for the closer parts we also used 3D renders. The hard part was that we had to keep the actual lighting on the train, which was difficult because the underground station was dark and now we have this element of daylight coming in. This was one of the most challenging shots because in the end its only remaining element was the train; everything else was composited in 2D and 2½D with projections or CG elements.” Other changes had to be accommodated. “At the beginning of the whole process Len and Peter wanted to have a clear view of the environment. But later on, to support the story, they wanted more atmospheric effects, which on one hand helped to integrate the CG elements with the real ones; on the other hand it was tricky to blend in elements which travel through the smog. For instance, in one of the shots we had a Harrier [a CG model provided by Dneg] which we had to composite very close to the camera at the beginning, but then it takes off and vanishes into the sky; getting that right, the way it blends into the atmosphere, was difficult.”

“In a lot of ways it was the usual technique for compositing 3D renders, green screens and so on,” says Martin Ofori. “What was challenging was that we had to come up with these 60 shots in quite a short time period. For me personally, I had to deal with two bigger shots and had to find a technique to handle both of them at the same time in the minimum amount of time. Both shots used the same back plate, but for shot B the back plate was in reverse and the people in the original back plate were painted out. This was my starting material. I created a script with some Python in it where I could push a button and everything that had to change from one shot to the other was changed automatically. I didn’t have to think about two shots. I constantly thought about one shot, and when I received a new matte painting for the city in the background I placed it into the script, rendered it for one shot, pressed the button and everything was changed that had to be changed. Then I could render it out for the other shot. In the end it was an easy thing to do, even if both shots had their own problems.” Ofori adds, “Our pipeline helped us quite a lot in working on those 60 shots in a very short time. Whenever there was a creative change in one of the shots or in the sequence we could react quickly.”
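Ofori's "push a button" setup, one shared configuration from which both shots are derived, with shot B flipping the plate and painting out the people, can be sketched like this. All names and frame numbers are hypothetical; the real script lived inside Nuke, whereas this is a plain-Python illustration of the pattern.

```python
# Hypothetical sketch of the "one script, two shots" idea described
# above: shot B reuses shot A's back plate, played in reverse, so a
# single toggle derives every per-shot setting from one shared setup.

FRAME_RANGE = (1001, 1060)  # shared back-plate frame range (assumed)

def shot_settings(shot, matte_painting):
    """Derive all per-shot parameters from one shared configuration."""
    first, last = FRAME_RANGE
    settings = {
        "matte_painting": matte_painting,   # swap in a new version once
        "frames": list(range(first, last + 1)),
        "people_painted_out": False,
    }
    if shot == "B":
        # Shot B: same plate in reverse, with the original people removed.
        settings["frames"].reverse()
        settings["people_painted_out"] = True
    return settings

a = shot_settings("A", "city_matte_v03")
b = shot_settings("B", "city_matte_v03")  # "press the button": same call
```

The design win is the one Ofori names: an artist thinks about a single shot, and any update (a new city matte painting, say) propagates to its sibling with one call instead of a second round of manual comp changes.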

“There was a solid aesthetic already established and in development for Total Recall, both from the original production design and the large-scale visual effects work in progress elsewhere,” states LipSync Post VFX Supervisor Sean Farrow. “We largely took our ‘reality’ cues from those areas and then drove designs forward in collaboration with Peter and Len. Occasionally, we’d need to blur the boundaries of reality as we know it to lift the drama or danger of a particular effect.” There was some room for experimentation. “We also had great fun pushing the boundaries in our designs of this future tech,” says LipSync Post VFX Producer Stefan Drury. “When there was nothing equivalent in the film and when Len wasn’t keen on something being too grounded in reality, we were allowed to ramp it up to eleven to make it more exciting.” One such innovation was a missile which releases shrapnel in the form of miniature cameras upon impact. “Our starting point for the ShrapCam missile was the practical,” says Farrow. “They had a physical missile which was inserted into a grenade-launcher machine gun, which gave us the starting parameters in terms of exterior look and scale. They also had an initial design for the small flying cameras, which turned out to be too large on set. We essentially scaled them down by half and then designed the interior mechanics and animation of the missile, again by producing multiple versions, all animated and driven forward with feedback from Peter and Len. We’d generally produce a number of still concepts to narrow down the field of looks, and use a shortlist of those in greyscale animated versions. These would be further refined into a shot, with materials explored in more detail as we added tweaks to the animation. Nearly everything we needed detailed feedback on was a staged process [concept, design, texture, model build and animation] and refined as we progressed.”

“There were two looks, both of which we developed and designed from scratch,” remarks Sean Farrow when explaining the design of the holograms associated with the military surveillance device. “The first was the POV view of one of the tiny cameras, technically named a ShrapCam. We designed the graphical content, combined it with live-action footage, and applied a ‘scanning’ and digital screen look to it, to give the sense that you were inside the ShrapCam gathering information and surveying the room. Ultimately, the information from these ShrapCams was broadcast back to the soldiers’ HoloPack, which was the second of the looks we designed. The backpack display was a physical screen and base strapped to a soldier’s back which flipped down, enabling another soldier to access the broadcast images and information from the ShrapCams. We created the interface of the backpack screens to enable interaction, but also to show just what was going on. Inside the room, production had placed dozens of GoPro cameras and shot everything on extremely wide lenses. We then took this massive amount of digital footage and converted it into what the tiny ShrapCams were seeing; over a hundred of these feeds were then treated and comped into the backpack interface. The soldier ‘activates’ the holographic function of the backpack, which then builds a 3D sphere with all the key angles taken from the room. This moment also tied in with the CG camera we placed onto Quaid’s arm as he pries it off. Upon command the 3D sphere translates into a holographic projection of the room, in which Quaid can be seen. The look of the holographic room was a creative challenge; we designed it to be reflective of the real environment. It was translated from Lidar scans, taken through 3D and eventually completely lit and controlled in Nuke. This whole process meant we could turn around versions quickly, since the look needed not only to be futuristic but also to be interactive, clearly show where Quaid was and drive the story forward.”

“The Door Ram was a lot of fun to develop,” states Stefan Drury. “We had a physical prop base, about an inch deep, to start from, and handed this over to our concept artist to sketch up a whole host of ideas of how it looked when it was extended.” Inspiration was sought from real tools. “We drew a great deal of our ideas from practical rams and drills as well as military gun muzzles, and then ‘future-ised’ them by combining those ideas with more interesting devices and materials,” reveals Sean Farrow. “Once we got a feel for which concepts were working for Len we moved onto the CG build and animation, which was important as this device needed to extend to about five times its depth and still look dangerous and solid enough to bust through a steel-cased door. The interior of the ram, which features a drilling mechanism, was actually inspired by the drill machine vehicle in the original Total Recall, with the spinning drill bits and blades.” The graphics produced for the interactive security scanner which Quaid attempts to go through were an iconic assignment for Drury. “We were very excited to develop the security scanner in the ‘Terminal’ building due to its reference to an iconic scene in the original Total Recall.” Farrow remarks, “We were supplied a set of 2D motion graphics which we took into Nuke and developed into a full 3D interactive holographic mini-environment. We had numerous shots in the scene which had to have a continuous look, as well as changing when the action kicks off. This meant we needed to develop a look and a set of techniques which could be easily transferred to many other shots, as well as to other vendors who needed to follow the look of the scanner as we progressed. We additionally designed and animated the guards’ interactive graphics to follow the action precisely from shot to shot, again using similar ‘look’ techniques that we’d developed for the main scanner.”

“The most challenging aspect of our work was the Bolo weapon which, although based on a real weapon, needed photographically convincing FX work to ensure it was physically and dynamically exciting,” states Sean Farrow. “It was by far the hardest look to achieve due to the unique light and motion blur, and the extreme lensing effects that came off the practical elements. On top of that, the animation tests of our CG version led to a great deal of creative and photographic interpretation which took considerable time to get right. Additionally, the connection between the Bolo-gun and the coils had no premise or real solid concept off which to work, meaning we had to develop a concept for something no one really knew how it would look.” Stefan Drury remarks, “Len is a receptive director and is keen to collaborate with the VFX team. In the case of the Bolo connection effect we really had an open brief, which is always an exciting challenge, and as there was no real concept behind the effect we had to come up with multiple ideas and look development tests to get Len’s feedback on. He is an articulate director and was able to understand where we were going with ideas, give his thoughts, identify what he liked and didn’t like, and was able to offer up drawings and diagrams of how to progress the concepts we were presenting to him.”

“The look of the Bolo coils was largely established by the live-action plates, which had ‘real’ coils wrapped around the characters,” states Sean Farrow. “This drove a great deal of the look, which was developed photographically by using ‘flare lenses’ to increase the impact of the bright and dangerous-looking coils. We took the practical coils as our foundation and built multiple versions of them in 3D using different materials and structures until we hit on something that worked for Len and Peter in each scene.” Some creative license was involved in producing the high-tech weapon. “The animation of the Bolo coils was a combination of real and unreal physics. Len was particularly keen to get a sweeping third dimension to them, not only to enhance the vicious whipping round of the coils but also to separate them from the more constrictive feel of the practical ones. The attention to detail and the requirement for them to do physically impossible moves meant there was no real-world or procedural technique we could use. This meant that each coil was animated frame by frame to camera after being body tracked to the characters, giving us a sprinkling of hero frames as well as a blend of speed and danger.” A mixture of 2D and 3D was employed to create the pulsating magnetic effect. “The connection effect between the Bolo-gun and Bolo-coils was a constant tug of war between ‘seeing’ something which was not magnetic, or electric, or even very physical at all. We R&D’d many different looks and approaches, around 30 very different connections, from physical, electrical, magnetic and ethereal through to almost invisible, until we hit the right note with Len. The selling point with the connection between the actual gun and the wrapped live-action coils was animation and perception. No single frame would sell the effect, but on the run it felt like the characters were being pulled and physically manipulated. This also meant that each shot needed to be animated in different ways due to the duration, framing, lighting and movement of the camera, as well as whether the connection effect was coming on, already on, pulling, pushing or switching back off again.”

“The illuminated tattoo on the Rekall receptionist’s back was technically very tricky,” states Sean Farrow. “There are many subtleties to how a person moves, even just a little, and our team did an excellent job of making that work through extensive body tracking and subtle comp work. The other challenge was making it look like a futuristic tattoo and not luminous paint or a set of LEDs; it involved a great deal of finesse in the balance between illumination, brightness and fall-off. We took a practical illuminated tattoo from the film as our starting point but took it further to fit the unique environment and the way in which the receptionist moves.” Digital extras had to be integrated into scenes. “Crowd replication is all about good lighting and realistic action by the performers. Some of the elements we used were taken from out-takes of the main film drama, but others were green screen elements shot specifically for our shots. Both methods worked out very well. There’s no perfect technique for a head replacement shot, and we used a range of number-crunching and hard-work techniques to get ours just right.” Stefan Drury adds, “Production had done a fine job shooting elements in the same lighting conditions. The principle of matching one person’s moving head onto another required a great deal of re-timing, paint work, scaling and hand tracking.”

“We had a tall challenge near the end of the schedule,” states Stefan Drury. “There was a sequence in Matthias’ Lair which had been in flux in the cut and really only settled with about four weeks to go, which as a producer is pretty scary. However, I had faith in our creative team to pull it off [so much so that I had a little wager with the producers at Sony/Columbia that this wouldn’t be the last sequence to be delivered; we duly won that wager!].” The graphics had to appear futuristic without over-complicating the storyline. “It was a complex set of story beats describing a specific set of actions, past and present, all played out of Quaid’s mind,” remarks Sean Farrow. “All of which needed to be communicated on the yet-to-be-realized screens. Len described what he was after in very broad conceptual strokes and tasked us to come back with an animated sequence roughed into a near-locked cut. We knew we had to get it close on the first pass, otherwise we’d all be facing an even steeper challenge. The first pass, and Len’s notes, went well enough for us to progress further with the sequence. We designed all of the graphics, animations and the look of the composited screens in a little under those four weeks, which when you’re starting from zero was pretty good.”

“They were shooting up in Toronto and some of the costume makers and effects people there weren’t quite comfortable doing these kinds of suits that we do,” recalls Effects Supervisor Shane Mahan as to how Legacy Effects got involved in the project. “Patrick Tatopoulos was once on the competition roster but we had always heard he was such a cool guy and an interesting artist who owned his own shop. Once we met, the chemistry between our guys, myself and him was instantaneous because he does come from an effects background and understands the operation.” Alterations had to be incorporated into the sketches provided by Tatopoulos. “Usually the designs are a little elongated so as to make a presentation that is attractive to all of the producers. The average size of the stunt people is six foot or six foot one, so you remodel that design over the human figure. It takes a few weeks to get that correct.” 42 Synth suits had to be built, as Len Wiseman did not want them to be completely CG characters. “We had sections of black in-between and they would shoot a plate shot and remove those. Later on the VFX companies tracked in those pieces, or sometimes they might have to do the whole piece.” Mechanical versions of the Synths were also made. “We made two complete puppet versions that were completely robotic; they’re on a rostrum that comes down in the beginning of the movie. There were some puppet shots later in the elevator when one gets cut in half and its arm and body come apart.” To assist the visual effects facilities with their CG replicas, scans were made of the maquette and the puppets. “When you work on these films, it’s not an ‘us against them’ mentality; it’s everyone working together.”

“They ended up adding the phone hands to our list of work, the little telephone that comes alive in Colin’s hand and on Kate’s,” states Shane Mahan, who oversaw the production of the required prosthetics. “The hands are active and the actors have to wear them for hours during the day,” he explains. “You don’t want to make it look fat and thick. There’s a trial-and-error period where you’re developing those to make the letters. We reduced the letters and increased the size. We had to alter things to make them show through properly.” In reality the device was not wireless. “There was a bit of [digital] wire removal, but the reflections on the face and how it worked through the skin, I was happy with that.” The end result led to the scene which features a man hunched over a chair with a glowing dragon tattoo on his back. “Because the phone hand worked quite well they decided to give us another piece. It was a wearable prosthetic that has a projection screen of lights underneath. The outer skin, an artificial silicone piece, sat maybe an inch and a half above the actor’s real skin. It’s how you would make an old-fashioned star field, where you black out everything and then have the tattoo design etched into the plastic veneer so that when there are lights underneath it shoots up through the skin.” Because of all the LEDs being used, the wiring needed to be insulated so that the actor’s sweat would not short out the device. “They did one on a girl later on which was digital because they liked the idea of it; she’s the girl who takes Colin into Rekall.” The biggest challenge was having to contend with a preparation schedule that seems to get shorter with each project. “Technically it is not so much a worry; it’s the overall big picture of delivering quality stuff within the time you’re given.” Proper time management is essential. “You start from your delivery date and work backwards through your calendar. This has to happen and this has to happen, and so on.
There’s a layer of steps and processes, and you hope the ship stays on course.”

“The director, Len Wiseman, and his producing team hired The Third Floor to create a ‘pitchvis’ test sequence to demonstrate the filmmakers’ vision for the movie,” states Previsualization Supervisor Joshua Wassung, who works for the Los Angeles-based company. “The three-and-a-half-minute pitchvis featured a futuristic city with hover cars, police vehicles and robotic police. The pitch went on to become the basis for the hover car chase in the final movie. After we did the pitchvis and the movie was approved, we moved on to previsualizing a number of sequences with Len across the film.” The filmmaker was concerned about constructing a believable cinematic environment. “Len wanted to focus on building the world of Total Recall; he described a European city so crammed for space that they had to build straight upwards, eventually creating a labyrinth of buildings on top of buildings. And then Len said he wanted to do a hover car chase straight through all of those levels. Right away we started collectively brainstorming cool ways to show off all of those elements. We concluded it should end with the car dropping all the way to ground level, revealing London’s Big Ben. The biggest challenge was developing that enormous world from scratch, but Len’s concepts were inspiring, and everyone was very excited.” Alterations were made to the initial ideas. “The hover car scene was not in the original script; after being presented to the studio during the greenlight process, it was integrated into the shooting script. While filming, Len expanded the chase by several minutes but kept all the beats from the original pitchvis.”

The Third Floor also collaborated with Production Designer Patrick Tatopoulos. “We had a very open channel with Patrick where we would go over his concepts or drawings from his team,” states Previsualization Supervisor Todd Constantine, who partnered with colleague Joshua Wassung on the remake. “We created several iterations in previs to integrate his notes and explore possibilities for Patrick. It was a constant creative back and forth.” Digital versions of key backgrounds and environments were simulated to assist in the construction of the sets. “Part of our process for scenes included building and handing assets to and fro between the previs team and the Art Department. Working with the assets in our previs scenes, we would quickly see how different facets of the environment, the location, the shot or the planned action within the scene were working: how vehicles were interacting with people, for example, or what props seemed necessary. If needed, we would change things around and hand the asset back to the Art Department so they could make it work for the buildable set. We included as many real-world technical details as we could so the previs could be a true planning tool.” Wassung believes, “For me, good previs communicates a clear picture of the creative vision for the shot/scene/project. It brings together as much feedback and input as possible from around the production, so it becomes an accurate reference point that can be used to evaluate both creative and technical ideas in advance. Effective previs generally also means everything is do-able and shoot-able on the planned budget.”

“The biggest challenge, especially in the beginning, was the sheer scale of the environments,” remarks Joshua Wassung. “Everything had a unique design, and we had to constantly add depth and expand the world to show how big the city had become. The Maya files were very large, with lots of detail. We overcame this by first building the base world, then customizing on a per-shot basis. We would often hide and delete what was not visible in camera.” A sense of scale had to be provided for the various worlds. “We had a lot of discussion about how things would work in a spatial and physical sense,” says Todd Constantine. “We’d build a prop or set, place the 3D actors, move the camera around and see what we could see in the previs environment. We were always adding new elements to help the director flesh out the world he wanted and build the best plan for how it could be realized on screen.” Various camera angles were simulated. “Virtual cameras can be dropped into a previs scene, so it’s possible to test out different views on coverage as well as camera equipment and lenses. Len and his collaborators provided direction on the key shots they wanted, and we also had the freedom to come up with and test ideas on our own.” Wassung notes, “On a film with this level of world-building, the previs was able to deliver a common view for what was needed in key shots and the most complex scenes. Having worked with the filmmakers in advance of production, it was possible to flesh out a lot of variables and form a game plan for shooting and visual effects creation before actual set building began.”
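The per-shot trimming Wassung describes, hiding or deleting everything the camera cannot see, comes down to a visibility test against the camera's field of view. Here is a deliberately minimal 2D sketch of that idea (invented names, positions and angles; the real work happened on full Maya scenes with proper 3D frustums):

```python
import math

# Illustrative sketch, not The Third Floor's actual tooling: cull any
# asset outside the camera's horizontal field of view so a huge city
# scene stays light for an individual shot.

def visible(camera_pos, camera_dir_deg, fov_deg, point):
    """True if `point` lies within the camera's horizontal FOV."""
    dx = point[0] - camera_pos[0]
    dy = point[1] - camera_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    # Signed angular difference, wrapped into [-180, 180).
    delta = (angle - camera_dir_deg + 180) % 360 - 180
    return abs(delta) <= fov_deg / 2

# "Customize on a per-shot basis": keep only what this camera can see.
buildings = {"tower": (10, 1), "slum_block": (-5, 8), "big_ben": (12, -2)}
kept = {name: p for name, p in buildings.items()
        if visible((0, 0), 0.0, 45.0, p)}
```

With the camera at the origin looking down +x with a 45° field of view, `slum_block` falls well outside the cone and would be hidden or deleted for this shot, while the other two assets stay in the scene file.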

A lot of effort went into conceptualizing the hover car chase sequence. “This was our entrée into the project,” says Joshua Wassung, “so it was about getting our heads around all of the cool possibilities but also thinking through the logic and rules of how the world worked. We also always had the logistics of the physical production in mind and were getting feedback all the time about what was needed, which could be brought back into the previs to make it more useful to everyone.” The signature event to envision involved the massive transportation elevator known as China Fall. “This was a great sequence and a lot of collaboration was required to represent the look the production designer was going for,” remarks Todd Constantine. “Beyond the look, we were concerned with form and function, looking at heights and shapes and figuring out how people and vehicles would interact with it. Then, of course, once you have the idea down in previs, you need to make sure that everything is achievable on the real set. So from a technical point of view, things such as equipment needs and rigging could be identified in previs so they could be anticipated and accounted for in the shooting setup.” The logistics also had to be mapped out for the continuous camera movement featuring Colin Farrell killing a squadron of Synths. “For the scene at Rekall, Len had the idea of trying to get a camera to fly around the room in one continuous shot, changing directions at right angles as we see Quaid take down all the attackers. Len directed a previs motion capture session, which allowed us to bring professional stunt action to the characters in our previs scene. Once we cleaned up and stitched the performances, we added a virtual camera with a version of the specific move Len wanted. 
During prep for shooting that sequence, we revisited that camera move with cinematographer Paul Cameron with the goal of figuring out the number of cameras, the speeds and the distances that were necessary on the physical set to get it as ‘one’ shot.”

“My favorite sequence, both in previs and the final film, is the elevator chase,” states The Third Floor Previsualization Supervisor Joshua Wassung. “Len wanted to take a traditional elevator action scene and open it up in the third dimension. He had this idea of these elevator cubes that could move along tracks vertically, horizontally, and forward and back through tunnels. It was a lot of fun to explore how to utilize all that directionality, transition in and out of the confined space, and really highlight the robots. Smack in the middle of the sequence, we got to mocap Len’s amazing stunt guys to create the intense fight between Lori [Kate Beckinsale], Melina [Jessica Biel], Quaid, and a Synth crammed into a single elevator. It was so much fun.” Effects Supervisor Shane Mahan of Legacy remarks, “When I saw the film I was happy with how that area worked because you sense that the helmets are there. The movements of the Synths feel real; it doesn’t feel like they’re marching around weightless.” Mahan notes, “It was one of those enjoyable projects where the work turns out nice, and the producers, the director and people like Patrick were great to work with; it was an overall pleasant experience for everybody.” The $125 million production has earned $156 million worldwide. “We were well prepared for this movie,” says Production Designer Patrick Tatopoulos. “I would say the previs was strong, tight and clear; we knew clearly what we were to do all the way through to the end. It was one of the most tightly organized movies I’ve ever worked on.”

Production stills © 2012 Columbia Pictures Industries, Inc.

VFX images © 2012 Columbia Pictures Industries, Inc. Images courtesy of Double Negative, Prime Focus World, The Senate, MPC, Buf, Baseblack, LipSync Post and The Third Floor.


Many thanks to Patrick Tatopoulos, Peter Chiang, Adrian de Wet, Graham Jack, Alex Pejic, Richard Higham, Olivier Cauwet, Charley Henley, Sean Farrow, Martin Ofori, Stefan Drury, Shane Mahan, Joshua Wassung and Todd Constantine for taking the time to be interviewed.

Trevor Hogg is a freelance video editor and writer who currently resides in Canada.
