How 250 Cameras Filmed Neill Blomkamp's Demonic

Director Neill Blomkamp sits down with WIRED to talk about the innovative camera rig his team created for his video game-inspired horror film, Demonic.

Released on 09/15/2021

Transcript

[Narrator] Demonic, a new horror film set

in a digital simulation, used a new technology

called volumetric capture

to create its video game-inspired look.

250 cameras are compressed into one 3D object.

[Narrator] This is Neill Blomkamp, the director

of Elysium, Chappie, and the groundbreaking District 9.

All of the science fiction films that I've done before

have maybe a tinge of horror to them

and this horror film has a tinge of science fiction to it.

[Narrator] Let's walk through Neill's process

of rolling 250 cameras all at once

to create Demonic's simulated world.

[Neill] So we travel into the mind of a person who's

in a coma and this experimental technology lets them get out

of their body, which they don't have control over,

into a virtual environment.

Why are you here?

[Narrator] It might feel like a video game,

but in reality, it's a specific location

in British Columbia.

In fact, everything you see here was captured

from real life people and places

and digitized using the techniques of photogrammetry

and volumetric capture.

Here are the steps in bringing the simulation to life.

Google Earth is a good way to think of photogrammetry

if you look at the cities in Google Earth in 3D.

It's the exact same process.

Those are just planes that are flying over cities

and taking tens of thousands of photos.

If you imagine walking around the house

and taking 100 photos of it,

and then you would use drones to get things

like what the roof surfaces would look like.

And then you would do a tour through the inside of the house

using handheld still shots.

We were just using Canon Mark IIIs.

If you give all of those photos,

which are all still images, to a piece

of photogrammetry software, it'll pull out a three-dimensional house.

Its fidelity is a little bit low,

so blades of grass and trees and stuff would be more

like blobs, but it's still pretty good.

You can see the tree on the left is kept

at the resolution level that I wanted,

which just makes it look very computer generated.

So now you have a 3D object that you can look at

from any angle sitting on a computer monitor.
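
Neill's description maps onto a standard structure-from-motion pipeline. As a minimal sketch, assuming the open-source tool COLMAP (a representative choice; the video doesn't name the production's actual software) and hypothetical folder names, the same steps can be driven from Python:

    import subprocess
    from pathlib import Path

    # Hypothetical folders: handheld stills of the house plus drone coverage.
    photos = Path("house_photos")
    workspace = Path("reconstruction")
    workspace.mkdir(exist_ok=True)
    (workspace / "sparse").mkdir(exist_ok=True)
    db = workspace / "database.db"

    # 1. Detect features in every still image.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(db),
                    "--image_path", str(photos)], check=True)

    # 2. Match features across overlapping photos.
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(db)], check=True)

    # 3. Solve camera poses and triangulate a sparse 3D model of the house.
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(db),
                    "--image_path", str(photos),
                    "--output_path", str(workspace / "sparse")], check=True)

Dense reconstruction and meshing follow the same pattern; the point is that 3D structure is recovered purely from overlapping 2D stills.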

[Narrator] Although photogrammetry

has been around for a while, volumetric capture hasn't,

especially not in filmmaking.

Volumetric capture is the idea

of grabbing three-dimensional holographic video

instead of two-dimensional video of a performance of actors

at 24 times a second.

So it's essentially a motion version of photogrammetry.

So you're not locked into a single angle,

you have a three-dimensional stage play

of their performance.

So, in the case of Demonic, if we have Carly and Nathalie

in the same room acting with one another,

if you were to pause any frame in there,

you could move around them in three dimensions

because any individual frame

is a frozen three-dimensional piece of geometry.
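
Put another way, a volumetric clip is an ordered sequence of complete 3D meshes, one per frame, played back at 24 frames per second. Here is a minimal illustrative data model in Python; the classes are stand-ins, not the production's actual format:

    from dataclasses import dataclass, field

    @dataclass
    class MeshFrame:
        """One frozen 3D scan: vertex positions, triangles, and color."""
        vertices: list   # [(x, y, z), ...]
        faces: list      # [(i, j, k) vertex-index triples, ...]
        texture: bytes   # per-frame color data

    @dataclass
    class VolumetricClip:
        frames: list = field(default_factory=list)  # one MeshFrame per frame
        fps: float = 24.0

        def frame_at(self, t_seconds: float) -> MeshFrame:
            # Pausing at any time yields a complete piece of 3D geometry
            # you can orbit around, unlike a flat 2D video frame.
            return self.frames[int(t_seconds * self.fps) % len(self.frames)]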

Filming in volumetric capture is very taxing.

It's a highly synthetic environment that is not great

for actors.

I mean, it's essentially a scaffold cage of cameras

that are around them.

We had 250 cameras and you want those 250 cameras

to be incredibly close to them.

And then when you get to the front door,

we would wheel in a plywood door that obscured her

as little as possible so it would just look

like a skeleton of a door

and that would give her something to push open.

If she goes upstairs, we'd bring in plywood stairs

that she would go up.

Instead of using VFX to lift up the mother

in this demonic-possession idea of being levitated

off the ground, which you would normally use

a bunch of computers to do, we actually

just used traditional stunt rigging

with wires to lift her out

of the volumetric capture environment.

If you imagine a video game character that you can turn

around and look at from any angle,

that is basically what you end up with.

[Narrator] One issue the production team ran into

was wrangling all the data.

Shooting in volumetric capture yielded

about 12 terabytes per day.

My brother and I had to bring in 24 computers

that we owned on the back of a pickup truck

to help get the data off.

And then they would spend an additional 12 hours

getting that data off of the cameras

and clearing them for the next morning

so that, by 7:00 AM, they'd be done with the day before.
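
Treating those stated figures as rough, some back-of-the-envelope arithmetic shows what the overnight offload had to sustain:

    # Back-of-the-envelope math from the figures stated above; rates are rough.
    data_per_day_tb = 12
    offload_hours = 12
    machines = 24
    cameras = 250

    aggregate_mb_s = data_per_day_tb * 1e6 / (offload_hours * 3600)
    per_machine_mb_s = aggregate_mb_s / machines
    per_camera_gb = data_per_day_tb * 1000 / cameras

    print(f"aggregate offload rate: {aggregate_mb_s:.0f} MB/s")    # ~278 MB/s
    print(f"per machine:            {per_machine_mb_s:.1f} MB/s")  # ~11.6 MB/s
    print(f"per camera per day:     {per_camera_gb:.0f} GB")       # 48 GB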

[Narrator] Once the footage is finally ingested,

computers go to work crunching all the camera angles

into a three-dimensional piece of geometry.

The first thing that you actually get is a point cloud

that, when you zoom back from it,

you can see very clearly.

When you zoom into it, all of the points become separate

like they're floating atoms.

It's anywhere from an hour to several hours

to compute one frame.

You're doing thousands of frames.

It just took months.
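
That timeline follows from simple arithmetic. The exact counts aren't given, so the numbers below are purely illustrative:

    # Illustrative only: the video says "an hour to several hours" per frame
    # and "thousands of frames", not exact counts.
    hours_per_frame = 2
    frames = 3000
    machines = 24          # hypothetical size of the processing farm

    total_hours = hours_per_frame * frames       # 6,000 compute-hours
    serial_days = total_hours / 24               # ~250 days on one machine
    parallel_days = serial_days / machines       # ~10.4 days split evenly

    print(f"{total_hours} compute-hours: {serial_days:.0f} days serial, "
          f"{parallel_days:.1f} days across {machines} machines")

Even heavily parallelized, reruns and tuning passes plausibly stretch this into the months Neill describes.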

[Narrator] In fact, VFX artists, with the help of Unity,

a game engine used to make games like Pokemon Go,

Monument Valley, and Cuphead,

had to create a custom workflow with enhanced processors

in order to be able to render

the enormous volumetric data sets into virtual reality,

where the shots were then filmed.

You're basically dragging a three-dimensional actor

and placing them on the floor of your building

that you've gathered, also using photogrammetry.

So now it's like a video game.

Now you can look at it from any angle and you can light it

any way that you want and you can film it any way you want.
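
Assembling the simulation, then, is ordinary scene composition in a game engine: load the photogrammetry environment, place the volumetric actors in it, and treat the result like a game level. A tiny illustrative sketch; the Scene class and asset names here are hypothetical, with Unity playing this role in production:

    from dataclasses import dataclass, field

    # Illustrative scene assembly; in production this role was played by Unity.
    @dataclass
    class Scene:
        environment: str                     # photogrammetry mesh of the set
        actors: list = field(default_factory=list)

        def place(self, actor_mesh: str, position: tuple):
            # Drag a three-dimensional actor onto the floor of the scanned set.
            self.actors.append((actor_mesh, position))

    scene = Scene(environment="house_scan.obj")          # hypothetical assets
    scene.place("actor_frame_0001.obj", (2.0, 0.0, 3.5))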

You take that virtual camera existing in the scene

on a computer, and you tell it to reference

the motion capture points on the real handheld object

in the real world, the one with the monitor on it,

and to move based on what the camera operator

is doing with that object.

So as he pans right, your virtual camera

will exactly mimic what he's doing.

And then you take the video feed coming

from the virtual camera and you pipe it to the monitor

that he's looking through.

So he's effectively now inside the computer,

looking at the scene.

And so he can then walk around and frame shots.
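
This is the core loop of any virtual-camera system: read the tracked pose of the real handheld rig, copy it onto the camera in the 3D scene, render, and pipe the image back to the rig's monitor. A hedged sketch of that loop; every object and method here is a stand-in for whatever the tracking system and engine actually expose:

    # Sketch of a virtual-camera loop. The tracker, scene, and monitor
    # objects are hypothetical stand-ins for the real mocap system and engine.

    def read_rig_pose(tracker):
        """Return (position, rotation) of the handheld rig's mocap markers."""
        return tracker.position(), tracker.rotation()

    def run_virtual_camera(tracker, scene, monitor):
        while True:
            # 1. Sample where the operator is holding and pointing the rig.
            position, rotation = read_rig_pose(tracker)

            # 2. Drive the virtual camera with the real rig's motion, so a
            #    pan right in the room is a pan right in the scene.
            scene.camera.set_pose(position, rotation)

            # 3. Render from that camera and pipe the frame to the monitor
            #    on the rig: the operator is framing shots "inside" the
            #    computer.
            monitor.display(scene.render())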

[Narrator] There seems to be plenty of overlap

between the worlds of filmmaking and game design.

Will the two disciplines merge in the future?

It seems to me that video game technology will go

in the direction of more and more photorealism

on every level, like lighting, character, physics,

and particle simulations.

So it becomes an immersive world

for the person playing the game.

The idea of narrative in future gaming

or future immersive worlds may take a back seat

to allowing the gamer to just interact

in the way that they want,

like a Grand Theft Auto kind of open world.

It's all about the agency of the gamer,

whereas on the film side, or the TV side,

the whole point is to be a passive audience member.

So you're sitting and being told a story,

which is a very different experience.

Film has spent 100 years refining what it does.

The only way that I can see it really changing

would be something like volumetric capture,

where you could sit and watch the actors through VR.

You could have conversations taking place between actors

where you were sitting at the table with them

and it may be something that audiences find interesting

and useful, or it may not.

It's hard to know where that will go.

But I think that the revolutions of how stories are told

are basically quite locked in now.

What we see now, I think, will be there

for quite a few years.
