Constructing a useful film index amidst a constant fluctuation of industry conditions feels a bit like drawing a box in the sand on a windy beach. Any grains of data that fall within the box would be included in the index because of their perceived usefulness toward projecting future performance. However, with current viewing habits and production and distribution practices continually shifting, the outline of that box continues to transform. I’m currently concentrating on two key challenges in defining the boundaries of the sandbox: normalizing the index scoring system and distinguishing films according to their level of risk.
Normalization of the index score
Simply looking at box office performance at a given point in time tells you very little. It tells you something about the strength of titles currently in theaters. It tells you a lot about the importance of release cyclicality (i.e., Thanksgiving and July 4 weekends are huge overall ticket sellers), which then becomes a self-fulfilling prophecy for distributors planning their release timing. Film investors fully expect to see ticket sales spike at certain times because that’s what they’ve consistently done historically. But they need more context to begin to understand the performance of specific titles and categories.
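To make the seasonality point concrete, here is a toy sketch (all figures are hypothetical, not real box office data) of one simple way to quantify release cyclicality: compute each week’s average share of the annual total across several years, scaled so that 1.0 means an “average” week. A holiday week would show an index well above 1.0, which is exactly the context a raw weekly gross figure lacks.

```python
# Hypothetical weekly domestic grosses, in $M, for a toy 4-week "year".
# Week 3 stands in for a big holiday week.
weekly_gross = {
    2011: [150, 120, 310, 140],
    2012: [160, 125, 330, 150],
    2013: [155, 130, 345, 145],
}

n_weeks = len(next(iter(weekly_gross.values())))
seasonal_index = []
for week in range(n_weeks):
    # Each year's share of annual gross earned in this week.
    shares = [year[week] / sum(year) for year in weekly_gross.values()]
    # Average share, scaled so 1.0 = an average week.
    seasonal_index.append(n_weeks * sum(shares) / len(shares))

for week, idx in enumerate(seasonal_index, start=1):
    print(f"week {week}: seasonal index {idx:.2f}")
```

With these made-up numbers, the holiday week lands around 1.7, while the quiet weeks fall below 1.0: the same dollar gross means very different things depending on which week produced it.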
Scoring this year’s data against the same week from the year prior begins to give a little more information. I’m of the belief that comparing the current year to anything further back than the previous year would begin to warp the conclusions one might draw. For instance, consider how much Netflix, HBO Go, Hulu and other platforms have shifted viewing patterns over the last two years. To blindly weigh 2013’s July box office performance against 2005’s would be to compare two very different environments and would confuse or mislead any investor trying to draw meaningful conclusions.
Many other forms of normalization will likely need to be employed as I develop the index’s underlying formula – analyzing the statistical significance and reliability of each available type of data should point toward meaningful normalization factors. For instance, I might account for inflation and/or remove 3D titles from consideration. Eventually, a model might be able to account for more nuanced factors like macro weather patterns and events.

In the end, simply comparing one year to another provides limited information for many reasons. Release patterns are in constant motion: aside from several standard big weekends, which are likely to remain bellwether box office indicators, releasing a specific film during a specific week tends to be based on a host of considerations unique to that period. Holidays also don’t fall on the same “week” each year, so when viewed as a graph, this calendar variation creates peaks and valleys that make little sense in a two-year comparison. Such a comparison also says nothing about the actual volume of revenue earned. Clearly, while scoring one year against another can provide certain information, it is only one tool in the toolbox.
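The year-over-year scoring described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the weekly grosses and the inflation rate are both made up, and real ticket-price inflation adjustment would be more careful than a single flat rate.

```python
# Assumed average ticket-price inflation between the two years (hypothetical).
INFLATION_2012_TO_2013 = 0.015

# Hypothetical weekly domestic grosses, in $M.
gross_2012 = [160, 125, 330, 150]
gross_2013 = [155, 130, 345, 145]

yoy_scores = []
for prior, current in zip(gross_2012, gross_2013):
    # Restate the prior year in current-year dollars before comparing,
    # so the score reflects real change rather than price inflation.
    prior_adjusted = prior * (1 + INFLATION_2012_TO_2013)
    yoy_scores.append((current - prior_adjusted) / prior_adjusted * 100)

for week, score in enumerate(yoy_scores, start=1):
    sign = "+" if score >= 0 else ""
    print(f"week {week}: {sign}{score:.1f}% vs. prior year")
```

A positive score means the current year outperformed the same week of the prior year in constant dollars, matching the chart described below; the calendar-shift and volume caveats above still apply to whatever these percentages show.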
To the left is a basic representation of 2013’s weekly box office as compared to 2012’s. Any positive percentage indicates 2013 performed better that week than the same week in 2012.
* For those curious about the pre-Thanksgiving dip, 2012’s opening of Breaking Dawn, Part 2 dwarfed anything 2013 had at the box office that week.
Categorizing films by level of risk

A common factor used to categorize various investments is level of risk. Risk level is a core consideration for investors, who need to be able to compare other holdings in their portfolio with the one under consideration and weigh the entire basket against their overall risk tolerance. Film investors attempt to identify these same risk characteristics as they relate to specific projects. Sophisticated film investors build diversified portfolios so that several flops don’t overshadow the hits. A film portfolio, typically referred to as a slate, is not a new idea. However, successful film slates are not nearly as common as lauded mutual funds because visibility into a specific film’s actual level of risk is typically poor.
Rather than getting bogged down with limitations of identifying real projects to make up a film slate, I want to first describe the categories I might employ, using stock category definitions as a model.
- The least risky category of stocks is often referred to as “income stocks.” These are typically thought of as carrying slightly higher risk and return potential than bonds. Utility stocks are in this category, since investors have faith that people will continue to need things like electricity and water.
- “Value stocks” are those that have historically been a good investment and which are thought to be undervalued at their current price. The idea is that buying undervalued shares allows you to enjoy additional upside as the price returns to its proper level.
- Stocks with strong historical and projected growth rates are categorized as “growth stocks.” Investors expect a strong return on equity with these. Many technology and alternative energy stocks are placed in this category, assuming they’re at a scale offering reasonably low volatility and established enough to have historical data reflecting a trend of high growth.
- Then there are riskier types of investments, including smaller companies with no proven track record or significant market presence, and companies from emerging markets. These investments are exposed to a higher than average level of volatility due to a lack of liquidity; political and currency issues that can disproportionately affect them; and generally uneven growth, often a function of their small scale.
I believe there are analogies to be drawn between risk categorization in stocks and films – translating these definitions may help me draw lines around films for my own index. Once the boundaries themselves are clearly defined, determining the content to be included within them becomes much easier.
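One way to start that translation is to write the categories down as a data structure with an explicit decision rule. Everything below is illustrative and assumed: the film analogs (franchise sequel as the “income” analog, proven talent at a modest price as “value,” and so on) are my own hypothetical mapping, not an established industry taxonomy, and a real rule would weigh far more signals than these three.

```python
from enum import Enum


class RiskCategory(Enum):
    INCOME = "income"            # steady, predictable demand (e.g., a proven franchise sequel)
    VALUE = "value"              # strong fundamentals at a modest price (e.g., acclaimed talent, small budget)
    GROWTH = "growth"            # established track record plus high projected upside
    SPECULATIVE = "speculative"  # unproven talent or market; the emerging-market analog


def categorize(has_franchise: bool, proven_talent: bool, high_growth_genre: bool) -> RiskCategory:
    """Toy decision rule mapping a few hypothetical film traits to a risk bucket."""
    if has_franchise:
        return RiskCategory.INCOME
    if proven_talent and high_growth_genre:
        return RiskCategory.GROWTH
    if proven_talent:
        return RiskCategory.VALUE
    return RiskCategory.SPECULATIVE


print(categorize(has_franchise=False, proven_talent=True, high_growth_genre=True).value)  # growth
```

The value of even a crude rule like this is that it makes the boundary definitions explicit and arguable, which is exactly the exercise of drawing lines around the sandbox.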