Since narrative VR filmmaking is a relatively new endeavour (especially as a creative, artistic tool), figuring out what successful visual composition and staging mean can be challenging. I hope this post and its upcoming companions can provide a clear direction of thought and analysis for future VR shoots. I formulated most thoughts in the form of questions, to let you fill in the blanks.
THE 360 SPHERE
In VR, the audience is placed at the center of a virtual sphere, able to rotate 360 degrees both horizontally and vertically. For VR film, we are essentially deciding where this virtual sphere sits in physical space. The placement of that sphere is what defines a shot in virtual reality: where does the camera go? What does it see? What is hidden? What view is obstructed, and why? Where are the objects moving, and what space do they occupy?
A thoughtful VR shot includes the placement of architecture, objects and characters:
Architecture: What are the lines like in your space? Are there layers to the space? Layers replace depth of field in VR. What is hidden behind which object, and how do they move in relation to each other? A well-placed wall can add a lot to a basic space. Get creative with the distances between the camera and the architecture in every direction. Placing the camera close to a corner creates meaning: obstruction, no way out, locked in a corner. What is hidden is just as important as what is shown. How a character moves from a hidden space into the field of view guides the viewer's head. Remember, there is no camera motion to do the framing for you; your space must make up for that.
Objects: Objects can guide the audience's eye around the sphere. Their motion signals events and can trigger new interactions. They also mark an area of the virtual sphere that could interest the audience. What additional information does an object provide? What could it tell the audience that the character doesn't know? What timed event could redirect our interest to a new area? Why would I look away from one direction to look at this object?
Characters: More on this later, but characters' motion must be staged spatially, since the viewing environment is spherical, not flat. The camera's framing has no edge (no limit), so how do characters appear and disappear from the shot? A circular motion around the camera looks different from a straight motion towards it. How close is a character, and why?
These three visual building blocks can overlap and affect each other, and they should. For example, how can the architecture of a space be an exit point for a character? How is a character impacted by an object?
POV VS POV
Since we are dealing with a flat image stretched onto a three-dimensional canvas, we lose image depth. There is a clear sense of presence at the center of the sphere - something is there, moving around a designated center coordinate. This makes the point of view of the narrative critical to a successful experience. Who am I as the viewer? Am I me? Am I one of the characters? Are they revealed later? Do characters talk to me or through me (i.e. is there a fourth wall)? Am I an object? What is my movement like - human, animal, surreal, mechanical, etc.? What height is the camera at, and why?
The cinematic fourth wall is now a spherical point rather than a flat plane - which makes it an actor in itself. Anything that crosses that central point is either aware of it or not. The physical state of that actor determines what happens when objects, sounds, views and characters cross it. If a character extends a hand towards the viewer, does it touch the camera or pass through it?
The most basic staging is POV - assuming the camera is the audience - and it seems to be the most popular staging decision at the moment. However, we shouldn't be limited by that assumption, and I think more creative shots will become common once we are used to wearing VR headsets.
Here are a few staging concepts that work well in 360 film, and a few that don't.
EXAMPLE: WHAT WORKS
- 3 planes or layers: points of focus occur in three layers around the center, representing different depths in the image. The interplay between which is in focus and isn’t creates motion in the image. Lighting and stage design play a key role here.
- Reflections: not just mirrors, but also opposite or matched motion on different sides of the sphere. Sequential reflections (i.e. first left, then right, or similar) can guide the head in engaging ways if the motion is fluid and connected.
- Motion paths: A clear path for users to follow.
EXAMPLE: WHAT DOESN’T WORK
- Drastic left/right points of interest: Points of interest roughly 180 degrees apart make it difficult for the user to follow the experience, especially when fast changes are expected. The exception is an experience that deliberately confuses, hides or alters elements by limiting the user's field of view through motion, or that wants the audience to choose what to see and not see. For example, a two-person conversation staged left and right at 180 degrees is hard to follow visually, and most users end up looking at one person, or somewhere else entirely.
- Behind the viewer: The area behind the head should be used with intent, not just because it CAN be. If we want the user to look back, we need to give them a good reason to. Unless our aim is to drop the user into a space they can visually explore and dissect on their own, there is no reason to ignore what's behind us when designing the experience. Ultimately, we are conceptualising for 360 degrees; every direction matters.
Check out this pre-visualisation test of a few shots based on a two-person conversation around a table. Some shots work better than others, especially across different headsets. For example, the subject-tracking shot [d] works well in a Cardboard, but not on YouTube.
UPCOMING BLOG POSTS
Screenplay Conceptualisation and Writing for Virtual Reality
What is an edit in VR Film?
Here are the basics of operating a multi-GoPro camera rig for 360 video. We use a hero360 H7, which has seven GoPros. The process below works with any GoPro rig and is meant to simplify your production workflow.
Here are the best settings for GoPro capture in 360:
- Do not use SuperView
- Do not use 1080p
- Choose Wide FOV
- Record at 2.7K or 1440p
- Record in 4:3
- Avoid ISO 6400
- Set the date and time on each camera
- Make sure all cameras have the same settings
- Auto Low Light: OFF
- Sharpness: Medium
- Color: Flat
- Protune: ON
(Double-check these settings with your manufacturer; this works for most hero360 and freedom360 rigs we've used.)
After shooting a few scenes, you start to notice the need for a formal recording process. We came up with this sequence:
- Label cameras and SDs for easy organisation
- Delete all content from the SDs / format the SDs in the camera
- Start cameras as close together as possible, or with remote
- Count out loud when pressing to match the number
- Record 2 seconds
- Create a sync point:
  - One clear, defined sound: a clap, clicker, or slate
  - A clear rotating motion in both directions
- Wait 2 seconds
- Transfer files to your computer as often as possible - at least every 3 takes. This makes file management easier and prevents data loss.
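The sync point above can also be found programmatically when you sync in post. Here is a minimal sketch of the idea: in each camera's audio track, the clap is (roughly) the loudest peak, and the difference between peak positions tells you how much to trim from each clip's head. This assumes you have already extracted each camera's audio as a mono sample array; the function and variable names are ours, not part of any GoPro tooling.

```python
import numpy as np

def clap_offset(samples):
    """Sample index of the loudest absolute peak (the clap) in a mono track."""
    return int(np.argmax(np.abs(samples)))

def trim_offsets(peaks):
    """Number of samples to drop from each clip's head so the claps line up."""
    base = min(peaks)
    return [p - base for p in peaks]

# Toy example: three tracks with a "clap" spike at different positions
tracks = []
for pos in (30, 50, 40):
    t = np.zeros(1000)
    t[pos] = 0.9
    tracks.append(t)

peaks = [clap_offset(t) for t in tracks]
print(trim_offsets(peaks))  # the clip whose clap came earliest needs no trim
```

In practice a real on-set recording is noisier than this toy example, so you may need to search only near the expected clap time, but the peak-and-offset logic stays the same.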
If a shot gets messed up or something goes wrong, the shot is void. Sometimes one camera fails while the others keep filming, and you end up with a nightmare of files that don't match. The easiest file structure to manage is a clean sequence of takes on each SD: take 1, take 2, take 3, and so on.
To void a shot, place your hands over all the lenses (black image) and say “Void”.
Getting your footage off the cameras can be time-consuming on set. If you fill up your numerous SD cards, you'll have to transfer hundreds of GBs of data to your computer. Multi-SD USB readers help. When scheduling your shoot, account for data-transfer time, or purchase extra sets of SD cards. When transferring, make sure you label each folder with its corresponding camera number.
Once on your computer, you'll have a folder for each camera containing all takes and shots. Go through each folder and label your files in this format: cameraNumber + shot + take. Make a folder for each shot, with a folder for each take inside it, then move all files into their corresponding folders. You should end up with one folder per shot, filled with one folder per take; each take folder should contain as many files as you have cameras in the rig, and each file is identified by the camera it came from, plus its shot and take number.
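The sorting described above is tedious by hand, and a short script can do the renaming and foldering in one pass. This is a sketch under two assumptions: files have already been transferred into per-camera folders named cam1, cam2, etc., and every SD carries the same clean sequence of clips, so the i-th clip on each camera belongs to the same shot and take. The `sort_takes` function and the folder names are ours, not a standard tool.

```python
import shutil
from pathlib import Path

def sort_takes(src, dst, shot_takes):
    """Copy each camera's clips into dst/shotS/takeT/camN_sS_tT.MP4 folders.

    Assumes every camera folder (cam1, cam2, ...) under src holds the same
    number of clips, in recording order, and shot_takes[i] is the
    (shot, take) pair for the i-th clip on every camera.
    """
    src, dst = Path(src), Path(dst)
    for cam_dir in sorted(src.glob("cam*")):
        clips = sorted(cam_dir.glob("*.MP4"))
        assert len(clips) == len(shot_takes), f"unexpected clip count in {cam_dir}"
        for clip, (shot, take) in zip(clips, shot_takes):
            out = dst / f"shot{shot}" / f"take{take}"
            out.mkdir(parents=True, exist_ok=True)
            # cameraNumber + shot + take naming, as described above
            shutil.copy2(clip, out / f"{cam_dir.name}_s{shot}_t{take}{clip.suffix}")

# e.g. three clips on every SD: shot 1 takes 1-2, then shot 2 take 1
# sort_takes("footage", "sorted", [(1, 1), (1, 2), (2, 1)])
```

The shot/take list still has to come from your on-set notes, which is one more reason to keep a clean take sequence on each SD and void bad takes loudly.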
Upcoming Blog Posts
Visual Composition and Staging in 360 degrees
Screenplay Conceptualisation and Writing for Virtual Reality
What is an edit in VR Film?
Welcome to our blog!
As fellows at the CRI this year, we will be exploring virtual reality filmmaking, diving into the process of creating 360 films from conceptualisation and writing through directing and analysis. Every month we will release a short film that experiments with 360 concepts in filmmaking. To complement this, we will blog here about our workflow, analysis, discussions with filmmakers and scholars, and even tutorials.
Our first experiment is a short 360 film (currently on YouTube) that explores the concept of spatial editing using split-screen as a storytelling tool. Watch below, then read through our conceptualisation and analysis:
Virtual reality (VR) exists in an omnidirectional environment. Cinema on a rectangular screen is often composed using the “rule of thirds,” however no such rule exists for the composition of a shot that completely encompasses the camera. This creates a unique challenge for the 360 degree cinematographer for which there are very few documented solutions. Our first experiment is an exploration of editing within a spatial dimension.
In theatre, which is not necessarily bound to two dimensions or a restricted frame, light is an essential tool in directing the attention of the viewer. Stage-lighting techniques can enhance our ability to do the same within a VR headset.
In film editing, there is a physical seam between lengths of film that the editor uses to his or her advantage by exaggerating or hiding the cut. In either case, the goal is to preserve the viewer's suspension of disbelief. Edits in VR cinema are more difficult to make seamless.
We have speculated that by editing 360 degree video spatially, rather than temporally, we can create edits that are more seamless when viewed in the VR headset. Splitting the encompassing screen by hemispheres is a possible solution to edit between two shots.
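For equirectangular footage, where each hemisphere maps to one half of the frame width, the hemisphere split can be sketched as a simple crop-and-stitch. The snippet below builds an ffmpeg command that keeps the left half of one clip and the right half of another; it is an illustration of the idea, not our actual pipeline, and the clip names are placeholders.

```python
def hemisphere_split_cmd(left_clip, right_clip, out):
    """Build an ffmpeg command joining the left hemisphere of one
    equirectangular clip with the right hemisphere of another.

    Assumes both clips share the same resolution and frame rate, and that
    each half of the equirectangular frame corresponds to one hemisphere.
    """
    graph = (
        "[0:v]crop=iw/2:ih:0:0[left];"      # left half of the first clip
        "[1:v]crop=iw/2:ih:iw/2:0[right];"  # right half of the second clip
        "[left][right]hstack[out]"          # stitch them back side by side
    )
    return (
        f'ffmpeg -i "{left_clip}" -i "{right_clip}" '
        f'-filter_complex "{graph}" -map "[out]" "{out}"'
    )

print(hemisphere_split_cmd("livingroom.mp4", "kitchen.mp4", "split.mp4"))
```

The seam between the two halves lands at the front and back of the viewer's head, which is exactly where staging and lighting have to do the work of hiding it.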
We created our first short content piece, which took the form of a narrative video. Two characters, occupying opposite hemispheres of the screen, spoke on their phones with friends about an encounter they had had on the train with one another.
We expected that using a hemispherical environment would present the viewer with two persistent worlds that seem both as separate environments and as a consistent experience. The audience’s head motion determines which world they wish to follow.
We collaborated with a screenplay writer to adapt a short story by Haruki Murakami into a short VR film. We wanted to tell a story that was personal, emotional and in some way romantic - rather than rely on the audience’s awe of the technology to create a convincing experience (a debate that will come up often on this blog).
Our experiment took the form of a short film. It was filmed, blocked, and timed to blend two shots which would persist for the full length of the film. High contrast and limited/minimal on-set lighting were used to direct the viewer’s attention from one hemisphere to the other. Audio was mixed in mono to avoid a directional bias.
Our spatial edits are directly linked to the staging and composition of the shot. We played with four specific features:
Hiding vs Showing characters - Our characters appear and disappear in order to shift your focus to the other hemisphere.
Motion matching - we matched the horizontal motion of characters on each side. When our character leaves the frame, she walks from left to right, creating a matching directional motion within the other hemisphere (Figure 1).
Light zones - Choosing which areas to light helped move the characters forward in the narrative, and also added to the visual continuum. We wanted the user to be visually stimulated, so we attempted to build a dynamic image through darkness.
Space within a space - The living room hemisphere had a strategically placed mirror that reflected the character's off-stage actions (in the kitchen). When the character leaves the key linear narrative for a moment (“I'll call you back”), their presence is a minor detail that can be spotted or ignored in the overall composition (Figure 2).
Lighting is a critical feature of our storytelling toolkit in VR. Consider the sequence of light below from beginning to end of the film:
The characters' motion is told through the lighting of the staging. We move between spaces - both visible and hidden (i.e. the mirror and window reflections) - by following the light itself.
For editing, we do believe that segmentation of the screen is a useful way of blending at least two shots. However, overlapping action should be choreographed so that the viewer can continuously explore content, even when it isn't the intended focus. We can't assume that people will follow our intended motion - since their head motion moves the field of view, it's their choice what to look at. We need to make sure there is enough content to fill the entire sphere, perhaps in the form of hidden details to be discovered, or an exciting visual environment to explore. The list will grow over the next half year.
Upcoming Blog Posts
Basics of operating a multi-GoPro 360 rig
Visual Composition and Staging in 360 degrees