We are constantly thinking about how to conduct impactful analysis that will actually deliver improved outcomes, not just doing work for the sake of doing work. Unlike public market investors, who receive audited financials and curated investor presentations to analyze, we must not only decide which data actually drive outcomes but also create the systems and processes needed to capture that data in the first place, which often feels like tedious work with lots of stops and starts. Thankfully, our reading this week reminded us that we are not the only ones who struggle with how to accurately collect and interpret data.
In a recent Harvard Business Review article, “Measuring the Impact of Ideas,” Arthur Brooks, president of the think tank American Enterprise Institute (AEI), discussed this challenge. In his position, Brooks lives at the intersection of academia and the real world, where teams conduct research and then try to translate that research into policies they believe will positively impact the country and the economy. When Brooks made the move from academia to AEI, he struggled to quantify the value his group created for those providing the funding, in part due to a data collection mistake he refers to as the “lamppost error,” which he explains in the article:
“[N]amed for the story about a guy who loses his keys in the street and spends hours looking for them under a lamppost because the light is better there. Nonprofits struggling to measure effectiveness will frequently turn to whatever is easiest to see—usually inputs such as how much they’ve received in contributions or outputs such as how busy they have been. This is obviously inadequate, because what we’re really interested in isn’t inputs or outputs but impact.”
By comparison, our portfolio companies have it easy, as they deal with tangible goods and services that can be tied directly to financial performance. Nonetheless, we have found that Brooks and Chenmark face similar struggles in our respective efforts to tie data collection to useful analytics. Brooks explains:
“As you might expect, these changes weren’t always easy to implement. I made plenty of mistakes along the way. Some colleagues complained that I was asking them to spend too much time and energy collecting data—and in some cases they were right. More than once, I fell prey to measuring the wrong thing entirely. For example, I became concerned when attendance to a series of live events started to trend down. But after a few months, someone pointed out that we’d begun live-streaming the events on the web, where they were getting a lot of traffic. This insight led to an even better metric, subscribers to events and original video programming on our YouTube channel—a measure by which AEI now leads the think-tank industry.”
We empathize with this sentiment, as we are all too familiar with the feeling that comes from spending precious resources to collect data that end up accurately measuring precisely the wrong thing. Upon reflection, this happens because we too fall victim to the lamppost error, especially when the desire for progress trumps relevance. On the search side, for example, it is very easy to track the number of prospect financials reviewed, but unfortunately, that does not necessarily equate to the presence of high-quality deals in the pipeline. As we have come to recognize this foible, we have started tracking “positive interactions” with potential business prospects, a metric which, despite being vague, we believe will create the results we seek over the long term.
While we continue to grind away at the minutiae of creating reliable data collection processes, our reading this week is a good reminder that our efforts can’t stop with the mere collection of data. Rather, they must extend to the analytics we choose to perform on that data, since the last thing we want to do is love lamp just because we can see it.