VFX
Visual effects with subtle perfection
Transform your vision into stunning visual reality with high-impact VFX, from photorealistic explosions and set extensions to digital cosmetics, all without leaving your FotoKem workflow.
A Complete Unknown • Fly Me To The Moon • Back in Action • John Wick: Chapter 4 • Joy Ride • Poker Face • Mayor of Kingstown • Yellowstone • Landman • Lioness • 1923 • 1883 • Moon Knight • She-Hulk • Better Call Saul • Breaking Bad • Spirited • Unfrosted • Borderlands • FBI: International • FBI: Most Wanted • Law & Order: SVU • Chicago Fire • Chicago Med • Chicago PD • Beauty In Black • Divorced Sistas
How we can help your film or series:
Explosions / Flashes
We can help enhance the intensity of on-screen action while maintaining visual consistency, correct lighting integration, and physical accuracy across shots, from the flash of gunfire and ricocheting sparks to blood bursts and environmental damage. Realistic flames and explosions are often impractical and too dangerous to capture on set. With our vast library of 2D elements and talented 3D artists, we can create seismic explosions and raging infernos.
How do you simulate a realistic explosion?
We sometimes use stock explosion elements, or we build explosions from scratch using volumetric fluid dynamics. This involves software that handles the math of a real explosion: how hot gas expands, how the shockwave propagates, how smoke and debris get carried by the blast, and so on. The simulation gets the basics right, but after that our VFX artists do a lot of additional work to dial it into perfection, refining the smoke and debris, the shockwave, the lighting, and much more.
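To make the idea of "software handling the math" concrete, here is a toy sketch of the core operation a fluid solver performs: carrying smoke density along a velocity field (semi-Lagrangian advection), here in one dimension with an expanding flow. All names and numbers are hypothetical and greatly simplified from any production solver.

```python
import numpy as np

def advect(density, velocity, dt, dx):
    """Move density backward along the velocity field (semi-Lagrangian step)."""
    n = len(density)
    x = np.arange(n) * dx           # cell positions
    src = x - velocity * dt         # where each cell's content came from
    idx = np.clip(src / dx, 0, n - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    t = idx - lo
    return (1 - t) * density[lo] + t * density[hi]  # linear interpolation

# A puff of smoke at the blast center, pushed outward by expanding gas.
density = np.zeros(100)
density[48:52] = 1.0
velocity = np.linspace(-2.0, 2.0, 100)  # gas expanding away from the middle

for _ in range(20):
    density = advect(density, velocity, dt=0.5, dx=1.0)

# The smoke has been stretched outward from the center by the expanding flow.
print(density[40:60].round(2))
```

A real solver does this in 3D, couples it with pressure, temperature, and turbulence, and runs for hours per shot; the artist's work then begins on top of that output.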
What makes an explosion look big and powerful on screen?
A truly powerful explosion has several traits: a fast shockwave (a visible ripple in the air), a dark plume of smoke and debris, and a big, bright fireball. Much of an explosion’s impact on screen comes from our VFX artists showing how it affects the immediate area: buildings, vehicles, and the atmosphere around them.
What are the common layers you composite together for a final explosion shot?
A complex explosion is a layered visual sandwich. We typically start with the fireball/core (the brightest element), then add the smoke plume (the volumetric body), debris (shrapnel, dust, and dirt chunks), the shockwave/heat-distortion layer, and finally the interactive lighting that makes the explosion cast its glow onto the actors and environment.
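The layered sandwich described above is assembled with the standard compositing "over" operator. This sketch shows the math on a single premultiplied-alpha RGBA pixel per layer; production work does the same per pixel across full images, and all the color values here are made up for illustration.

```python
import numpy as np

def over(fg, bg):
    """Composite premultiplied-alpha fg over bg: out = fg + (1 - fg.a) * bg."""
    fg_rgb, fg_a = fg[:3], fg[3]
    bg_rgb, bg_a = bg[:3], bg[3]
    out_rgb = fg_rgb + (1.0 - fg_a) * bg_rgb
    out_a = fg_a + (1.0 - fg_a) * bg_a
    return np.array([*out_rgb, out_a])

# Layers ordered back to front, as in the answer above (hypothetical values):
plate    = np.array([0.20, 0.25, 0.30, 1.0])  # live-action background
smoke    = np.array([0.10, 0.10, 0.10, 0.6])  # volumetric body
debris   = np.array([0.15, 0.10, 0.05, 0.3])  # shrapnel and dust
fireball = np.array([0.90, 0.50, 0.10, 0.8])  # brightest element, on top

result = plate
for layer in (smoke, debris, fireball):
    result = over(layer, result)

print(result.round(3))  # fireball dominates the final pixel
```

Stacking order matters: the same four layers composited in a different order produce a visibly different pixel, which is why the artist controls the sandwich explicitly.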
What’s the hardest part about making an explosion look right in slow motion?
In slow motion the biggest challenge is correctly simulating the micro-turbulence and the precise and realistic shockwave and smoke. Every tiny eddy and swirl of the gas needs to look physically plausible as it expands over many frames.
When would you use stock footage instead of a full CGI simulation?
We often use stock footage for quick, small-scale elements like distant muzzle flashes or puffs, small debris hits, or minor environmental dust explosions. It saves time and resources, especially when the effect is fast or out of focus. But for a major unique event that interacts heavily with the environment or main characters, we always do a custom CGI simulation for seamless integration.
Set Extensions
Expand your practical sets into massive, detailed environments with seamless 3D integration
What exactly is a Digital Set Extension?
A set extension is a visual effect that expands the perceived size of a physical film set or location using CGI. For example, if a practical set only goes up to the height of one floor, the digital extension can add the remaining twenty floors, the roof, and the cityscape beyond, making a small set look like a massive environment. A set extension allows filmmakers to create worlds limited only by their imagination, using only a small piece of a physical set for the actors to interact with.
What are the main steps in creating a digital set extension?
The first step is typically camera tracking of the live-action plate. Then the environment assets are modeled in 3D. Next, texturing and shading are added to the 3D models, followed by lighting the elements and rendering them, often using on-set lighting data. Finally, the FotoKem compositing team integrates the rendered image into the live-action plate.
What is the biggest challenge when doing a complex set extension?
Typically, the biggest challenge is render time and asset complexity: because the environment is fully built and lit in 3D, rendering can demand enormous computing power.
How do you handle the “seam” where the physical set meets the digital environment?
This is often called edge integration and involves a lot of detailed compositing work, typically using roto/paint to blend the 3D-rendered elements with the live action. We use atmospheric passes, subtle motion blur, and sometimes small digital or practical foreground elements (like steam or dust) to mask the transition line naturally.
Weather
Transform your scenes with hyper-realistic rain, snow, fog, sand, wind, lightning, and more
What is the most important element in making a weather effect believable?
It’s crucial that weather VFX obey the laws of physics. For example, rain needs to wet surfaces and splash, snow has to accumulate and stick to the ground, hair, and clothing, and wind must move debris, hair, and clothing convincingly. Without realistic interaction, a weather effect can look fake.
What is the typical technical workflow for creating weather VFX?
We typically create weather VFX starting with particle systems (for example, in Nuke) to simulate millions of individual raindrops or snowflakes. We also use volume and fluid simulations to create believable fog, mist, or atmospheric turbulence. The resulting passes are then composited over the live-action plate.
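A particle system like the one described above boils down to a simple loop: spawn particles, advance them each frame under gravity and wind, and recycle the ones that leave the frame. This toy sketch uses made-up numbers and stands in for what a compositing package does at far larger scale.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000
pos = rng.uniform([0, 0], [1920, 1080], size=(n, 2))   # x, y in pixels
vel = np.tile([30.0, 900.0], (n, 1))                   # wind drift, fall speed
vel += rng.normal(0, [5.0, 50.0], size=(n, 2))         # per-drop variation

dt = 1 / 24  # one film frame

for frame in range(48):  # two seconds of rain
    pos += vel * dt
    hit = pos[:, 1] > 1080          # drops that reached the bottom of frame
    pos[hit, 1] -= 1080             # recycle them at the top
    pos[:, 0] %= 1920               # keep wind drift inside the frame width

print("drops in frame:", len(pos))
```

The rendered result is only the first layer; as the answer below notes, splashes, wetness, and atmosphere are what make the rain read as real.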
How is realistic digital rain achieved?
It’s typically a multi-layered composite that includes: rain streaks (fast-falling particles, often motion-blurred), interaction (splashes, ripples on water, drops running down surfaces), wetness (a darkening/highlight pass added to surfaces in the shot), and atmosphere (for example, mist and fog in the distance, especially under an intense downpour).
How do you make digital snow look natural?
Real snow has many variables, so a digital snowfall effect must account for wind, varying flake sizes, and the accumulation of snow on the ground, ledges, props, vehicles, and so on. We then often add a ground fog or haze element to reinforce the feeling of cold.
Digital Cosmetics
De-aging effects, makeup and wig adjustment, continuity and lighting corrections, reducing wrinkles, body slimming, and neck lifts
What are Digital Cosmetics in VFX?
Digital Cosmetics is the use of VFX to alter an actor’s appearance, ranging from minor enhancements like removing a blemish to de-aging, digital makeup effects, or enhancing on-set makeup. It’s also referred to as “beauty work” or “retouching.”
What is the difference between simple “retouching” and “de-aging”?
Retouching (aka cleanup) is typically localized and subtle, often just removing temporary flaws like blemishes, glare, and smudges, or smoothing small wrinkles. De-aging is a more complex visual effects process that involves reshaping the actor’s face and recreating their youthful skin texture across many frames.
How are blemishes and wrinkles removed across a moving shot?
Motion tracking! Precise motion tracking. The actor’s face is tracked using specialized software that follows its geometry. FotoKem’s VFX artists then perform a sort of “digital skin graft,” cloning or painting the correction, which is then projected back onto the moving face data. This ensures the fixed area moves perfectly with the actor’s performance.
How does digital de-aging work on a technical level?
De-aging typically involves several steps. First: face capture, where the actor’s face is scanned (sometimes from old footage of the actor, in order to build a younger reference model). Then the actor’s current performance is captured using performance tracking. Next, the younger 3D skin textures are overlaid onto the current footage. Finally, compositing, where the elements are blended together, keeping color, lighting, and other variables in mind.
What is the biggest challenge in beauty VFX?
The biggest challenge in digital cosmetics is that the work must be 100% invisible. The main challenge for our VFX artists is finding that balance between removing imperfections at the client’s request and retaining natural human texture and character.
Screen Comps
Seamlessly replace on-set screens with custom graphics
What is a Screen Composite (Screen Comp)?
A screen comp is a visual effect where a piece of footage or graphic is digitally placed onto a physical screen or monitor captured in the live-action plate. This can be anything from a phone interface to a stadium jumbotron or a giant Times Square billboard. VFX artists typically prefer the use of a tracking marker screen (for example, a blank screen covered in high-contrast tracking markers, like green or blue dots) or a blank, matte screen because it provides a clean surface to track, removes flicker, and minimizes unwanted reflections or light spill. That said, FotoKem’s VFX artists often do screen comps on plates of screens that have no tracking markers.
Can’t I just use real footage on the monitor during the shoot?
You often can, but it’s not always great. One big reason: flicker. Monitor refresh rates often clash with film frame rates, causing an unacceptable roll or flicker on camera. Also, control. The content you want to put on the screen often isn’t finished until post-production, or it needs to react precisely to the actor’s actions. Finally, brightness. The screen may need to be much brighter or clearer than a practical screen allows for visibility.
What are the key steps in creating a screen comp?
The key steps in creating a screen comp visual effect are: 1) Tracking: The four corners of the screen in the live-action plate are tracked across the entire shot, establishing the perspective. 2) Warping: The digital insert footage is warped to match the perspective of the tracked screen area. 3) Integration: The warped insert is composited onto the screen, applying the correct lighting, reflections, and any lens distortion to match the environment of the scene.
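The tracking-and-warping steps above come down to a "corner pin": a perspective transform (homography) that maps the four corners of the insert footage onto the four tracked screen corners. This sketch solves that transform directly; the corner coordinates are made up for illustration.

```python
import numpy as np

def corner_pin(src, dst):
    """Solve the 8 homography unknowns from 4 point correspondences (DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to one point (homogeneous divide included)."""
    u, v, w = H @ [x, y, 1.0]
    return u / w, v / w

insert_corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]        # source
screen_corners = [(812, 341), (1268, 352), (1251, 663), (824, 690)]  # tracked

H = corner_pin(insert_corners, screen_corners)
# Each insert corner now lands exactly on its tracked screen corner.
print(warp_point(H, 0, 0))  # ≈ (812.0, 341.0)
```

In practice the four tracked corners change every frame, so the transform is re-solved per frame, and the warped insert is then graded and defocused to sit in the scene.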
What is the most common mistake made in screen comps?
The most common mistake in a screen comp is forgetting about reflections. If a character’s face or a bright background object is reflected on the practical screen, the final composite must either preserve and overlay those reflections over the new insert, or digitally recreate accurate new reflections on the screen surface. Without reflections, the screen looks fake.
What if an actor’s hand passes in front of the screen you’re trying to comp?
This happens all the time. We handle it by rotoscoping (tracing) the outline of the actor’s hand, or any other element that obscures the screen, on every frame, creating a matte that keeps the physical element in front of the digital screen insert.
Wire Removal
Make your visuals more compelling and realistic by having our VFX team remove stunt wires and rigging or on-set glitches like boom poles and light stands
What is Wire and Rig Cleanup?
Wire and rig cleanup is the meticulous process of digitally removing wires, safety harnesses, supporting rigs, microphones, camera equipment, or any other visible gear used during filming, to create a seamless visual illusion. For example, removing the wires attached to a stunt person who’s doing a stunt fall.
Why is the wire not just hidden on set?
Because the safety of the stunt performer is far more important than hiding any wires or rigging. Stunt coordinators must use thick, secure rigging for safety, which is impossible to fully conceal. Plus, lighting changes or camera movement can reveal even very thin wires, making post-production removal necessary.
What are the key VFX approaches for wire removal?
It depends on the shot. The simplest method is when there is a clean plate, meaning a pre-shot frame of the background that doesn’t show the wires. That shot is then projected and/or tracked onto the shot that shows the wires in order to cover them up. Another method is roto’ing or painting out the wire(s) frame by frame, typically cloning pixels from surrounding areas on the frame.
What is the most difficult surface to remove a wire from using VFX?
The toughest surfaces are those that are heavily textured or constantly moving, such as tree leaves or flowing water.
How do you handle VFX wire removal on a moving camera shot?
VFX wire removal on a moving shot requires camera tracking first. Once the camera motion is solved, the background plate can be stabilized, allowing the artist to do their paint work on a steady image. The shot is then un-stabilized (the camera movement is reapplied), ensuring the painted patch stays locked to the background as the camera moves.
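The stabilize, paint, and un-stabilize steps above can be sketched with a toy example. Here the camera track is simplified to pure translation (real solves also recover rotation, scale, and lens distortion), the images are tiny synthetic arrays, and `np.roll` stands in for a proper resampler.

```python
import numpy as np

def shift(img, dx, dy):
    """Integer-pixel shift via np.roll (stand-in for a real resampler)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# A synthetic background with a "wire": column 5 is the wire to paint out.
frame0 = np.zeros((20, 20)); frame0[:, 5] = 1.0

# The camera drifts 1px right per frame; each plate is the shifted background.
track = [(f, 0) for f in range(5)]              # per-frame (dx, dy) from solve
plates = [shift(frame0, dx, dy) for dx, dy in track]

cleaned = []
for plate, (dx, dy) in zip(plates, track):
    steady = shift(plate, -dx, -dy)             # 1) stabilize: remove motion
    steady[:, 5] = 0.0                          # 2) paint: the wire sits in
                                                #    one place when steady
    cleaned.append(shift(steady, dx, dy))       # 3) un-stabilize: restore move

# The wire is gone in every frame, and the paint stays locked to background.
print(sum(c.sum() for c in cleaned))  # → 0.0
```

The payoff of stabilizing first is visible in step 2: the paint fix is applied in one consistent place instead of being re-drawn to chase the camera on every frame.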