Building a Model for Stakeholder Analytics, Part 2: How SAM Derives Predictions & How Strategists Can Use It

Introduction

In our earlier blog post on the development of SAM (Stakeholder Analytics Model), we described some of the thorniest contemporary challenges of turning expert insight on complex policy issues into systematic predictive foresight. With the signal-to-noise ratio as low as it often seems to be, and the channels for public discourse and influence continuing to expand in new ways, the odds of ‘getting it right’, let alone doing so repeatedly, are daunting.

In fact, the ability to predict relevant policy outcomes is starting to look a lot like the ability of mutual fund managers to consistently beat market returns. Call it ‘a random walk down K Street’. There are people who get it right from time to time, but it’s nearly impossible to identify them in advance, and past performance is no guarantee of future results.

At the same time, corporate strategists and communications leaders absolutely need a baseline understanding to anchor their decisions about where to invest finite time, energy, and resources. You can’t work on every possible challenge. If a policy problem is likely to land in a place the firm can live with, the best decision may sometimes be to do nothing and let the issue settle at its equilibrium, freeing up resources for other issues where your efforts can have a more consequential impact.

Everybody has a model of this kind in their heads — but most are informal, implicit, and based on intuition. Intuition isn’t always wrong or bad when it’s in the heads and hands of smart and experienced professionals. But this kind of informal gut feel is hard to query, very hard to debate in a structured way, and almost impossible to replicate. 

That’s why we are building SAM, a structured and transparent model that systematically integrates expert insights and opinions to generate a baseline prediction about where a policy outcome is likely to land if the game continues to play out along its current lines. The highest and best use of SAM is to inform strategic conversation around that baseline. Can we live with that outcome? Do we need to work to change it? Can the efforts we propose work? Is it worth it?

In this follow-up post, we explain how SAM transforms expert inputs into a useful baseline prediction and offer thoughts on how organizations can most effectively use this tool to support better strategy decisions.

Key Ingredients & Inputs

SAM is a simple weighted-choice model that uses subject matter experts’ assessments of three things about key stakeholders: preference, influence, and salience. Here’s what each of those variables means in practice, followed by a minimal sketch of how these inputs might be recorded.

  1. Preference: we assume that each stakeholder has an outcome that they would prefer over all the other possibilities. This seems intuitive; it’s almost implicit in the definition of being a stakeholder. Simply put, if this stakeholder could pick their preferred outcome unilaterally and without compromise, what outcome would they choose?
  2. Influence: stakeholders vary in their ability to coerce, cajole, or convince others that what they want deserves support. For our purposes, precisely what tools of influence are used doesn’t really matter. It’s simply a question of how much total influence the stakeholder has in this situation. In Robert Dahl’s language, how much ability does the stakeholder have in this situation to get someone else to do something they otherwise would not do?
  3. Salience: stakeholders typically have many issues that they care about at any given time. While there may be a couple of stakeholders for whom the issue at hand matters more than anything, it’s generally the case that most stakeholders have a long list of things they care about and the issue we’re talking about is just one entry on that list. The question here is how high does this issue fall on the list? How much energy and attention is this stakeholder likely to devote to this issue?
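
To make these three inputs concrete, here is a minimal sketch in Python of how one expert’s assessment of one stakeholder might be recorded. The class name, field names, and scales are our illustrative assumptions rather than SAM’s actual schema: preference sits on the 0-to-100 outcome continuum described below, while influence and salience are scored here on a simple 0-to-1 scale.

    from dataclasses import dataclass

    @dataclass
    class StakeholderAssessment:
        """One expert's read on one stakeholder (illustrative fields)."""
        name: str
        preference: float  # preferred outcome on the 0-100 continuum defined below
        influence: float   # ability to move others toward that outcome, scored 0-1
        salience: float    # how much attention and energy this issue gets, scored 0-1

    # Example: a hypothetical trade association that strongly favors one end of the scale
    example = StakeholderAssessment(
        name="Industry trade association",
        preference=85.0,
        influence=0.7,
        salience=0.4,
    )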

These are the three core ingredients that make up the data at the heart of the model. In addition to these stakeholder-level inputs, we also need a few starting parameters that set the stage for the scenario under consideration. These can be obvious in principle, but defining them accurately and precisely in practice takes some work.

First, we need to clearly define the question at hand. In scenario thinking, the term of art is ‘the focal question’, and it often takes some work to articulate that question at the most appropriate level of abstraction. We need the focal question to be specific enough that stakeholders can express a meaningful preference about it, but not so specific that it excludes the interests of any stakeholders who are likely to have a voice. For example, returning to our example from Part 1 and the ongoing debate over Section 230 reform: if the focal question were simply about ‘internet regulation’, it would likely be too broad and abstract to elicit useful information. If the focal question were about a single consequence of Section 230, say takedown notices, it would probably be too narrow a focus.

There’s no precise rule about how to do this well. It’s as much art as science, and it often takes some iteration, but it does become easier with experience. What we’re really looking for is a pragmatic focal question that would be recognizable to stakeholders and that, importantly, can be answered with a continuum of definable outcomes. That continuum is the second parameter we need to get right from the start.

What is the range of possible outcomes that could plausibly happen? We say ‘continuum’ because the model works by arraying a set of potential outcomes on a linear scale (we use 0 to 100). When you get the focal question right, this becomes easier than it might sound. Consider this example: if we ask, “What will be the range and impact of protests at the upcoming Winter Olympics?”, we might define 0 as a ‘major boycott’ in which the US, Canada, and most EU countries refuse to attend, and 100 as a ‘normal Olympics’ without boycotts or noticeable protests. We then array a manageable set of points between 0 and 100 in logical order. For example, ‘major sponsor withdrawal’ probably falls around 30 on this scale, and ‘scattered protests’ that are visible but don’t meaningfully affect the games, national participation, or sponsorships probably falls around 90.
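As a worked illustration of that continuum, the Olympics example could be encoded as a handful of labeled anchor points on the 0-to-100 scale. The specific positions below are judgment calls drawn from the example above; what matters is that the outcomes sit in a logical order.

    # Anchor points for the focal question "What will be the range and impact of
    # protests at the upcoming Winter Olympics?" Positions are illustrative.
    outcome_anchors = {
        0: "Major boycott: the US, Canada, and most EU countries refuse to attend",
        30: "Major sponsor withdrawal",
        90: "Scattered protests, visible but with no material impact on the games",
        100: "Normal Olympics: no boycotts or noticeable protests",
    }

    for position, label in sorted(outcome_anchors.items()):
        print(f"{position:>3}  {label}")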

The third parameter we need to specify is the most obvious: who are the relevant stakeholders we want to assess? We clearly can’t list every potential stakeholder who might self-define as such, because that list quickly becomes unmanageable. How we handle this in practice is, again, simple pragmatism. If we rely on first-cut estimates of influence and salience as a rough filter, we can safely exclude from the model most stakeholders who have very little of either. If the issue at hand is way down on my list of priorities, and I have almost no ability to influence the outcome, I might call myself a stakeholder and I might even have a preferred outcome, but no one else really needs to take me and what I want into account, so leaving me off the list won’t change things at all.
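That rough filter could look something like the sketch below: score each candidate on first-cut influence and salience, then screen on the product of the two, which mirrors how the model later combines them. The candidate names and the cut-off value are hypothetical, and the exact screening rule in practice is a judgment call rather than a fixed formula.

    # Candidates as (name, first-cut influence 0-1, first-cut salience 0-1); values are hypothetical
    candidates = [
        ("Platform operators",     0.9, 0.8),
        ("Large advertisers",      0.6, 0.3),
        ("Fringe advocacy blog",   0.05, 0.1),   # cares a little, almost no influence
        ("Distant industry group", 0.1, 0.05),   # some reach, but the issue is a low priority
    ]

    THRESHOLD = 0.05  # illustrative cut-off on influence x salience

    relevant = [
        name for name, influence, salience in candidates
        if influence * salience >= THRESHOLD
    ]
    print(relevant)  # the low influence-and-salience entries drop out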

In practice, we’ve found that most of the issues we want to model generate a list of relevant stakeholders that’s somewhere between 15 and 25. You could go longer, but the added value of the data at the end of the list tends to diminish quickly as the model converges on an equilibrium outcome.

After aligning on these starting parameters, all we need to do is collect the data. So how do we reliably assess preference, influence, and salience for a list of stakeholders?

It’s this simple: we ask a group of subject matter experts what they think.

At face value, this might seem terribly unscientific, and it certainly would be better if there were precise algorithms and concrete datasets from which to derive those parameters. Perhaps in the future we’ll use natural language processing and other machine learning tools to derive the parameters independently from publicly available data, but that future looks a ways off right now.

What we do instead is rely on the best judgement of seasoned experts, using a simple tool we’ve built to elicit those judgements in a disciplined way. No single expert needs to get it right; it’s almost certain that none will. What we need is a group of experts with some shared knowledge and some distributed knowledge, like a ‘wisdom of the crowd’ exercise but with more sophisticated inputs and a more precise aggregation formula. What they agree on and what they disagree on both become critical inputs.
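As a simplified sketch of how such inputs can be combined, the snippet below averages several hypothetical experts’ scores for a single stakeholder and tracks the spread, so that points of agreement and disagreement are both visible. SAM’s actual aggregation formula is more precise than a plain average; this only illustrates the basic idea.

    from statistics import mean, stdev

    # Three hypothetical experts scoring one stakeholder's influence on a 0-1 scale
    influence_scores = [0.7, 0.6, 0.8]

    consensus = mean(influence_scores)      # central estimate fed into the model
    disagreement = stdev(influence_scores)  # a large spread flags a point of expert disagreement

    print(f"influence estimate {consensus:.2f}, spread {disagreement:.2f}")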

Getting a Result & What to do with It

SAM doesn’t require hordes of experts before the data start to converge. As with most qualitative data collection, we see diminishing returns after a reasonable number of observations. In a traditional setting, strategists may struggle to convene more than a handful of one-on-one discussions, or a crowded strategy session, to hear a small number of opinions and assessments. Most in-depth interview studies start to see converging insights somewhere between 10 and 20 interviews. The advantage of SAM is that we don’t rely on time- and resource-intensive interviews to gather this data. Instead, we’ve built a web-based interface on commodity software that surveys experts and pulls out the data in a fraction of the time. This is obviously more efficient, and it lets us increase the number of experts we can reasonably query. We don’t expect to gain much by expanding that number excessively, but we can observe how and when the data begin to converge and use that observation to decide how many more experts to survey.
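To make ‘observe how and when the data begin to converge’ concrete, one simple rule is to watch the running mean of an estimate as each new expert response arrives and stop recruiting once recent responses barely move it. The function below is a sketch of that kind of rule, with an assumed tolerance and window; it is not SAM’s actual stopping criterion.

    def has_converged(responses, tolerance=1.0, window=3):
        """Return True once the running mean has moved less than `tolerance`
        across the last `window` responses (illustrative rule only)."""
        if len(responses) <= window:
            return False
        running_means = [sum(responses[:i + 1]) / (i + 1) for i in range(len(responses))]
        recent = running_means[-(window + 1):]
        return max(recent) - min(recent) < tolerance

    # Hypothetical preference estimates (0-100 scale) arriving one expert at a time
    estimates = [62, 70, 66, 65, 66, 65]
    print(has_converged(estimates))  # True once new responses stop moving the mean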

We then aggregate the judgements according to a defined formula. We start by deriving a series of utility calculations: a measure of how much benefit or pay-off each stakeholder would receive at each potential outcome. The further away on the continuum a potential outcome is from a stakeholder’s preferred outcome, the less utility that stakeholder derives from it. Then we calculate the actual degree of support each stakeholder offers toward their desired outcome, which is a combined function of their influence and salience. Finally, we use each stakeholder’s share of total support to weight each stakeholder’s desired outcome and predict an overall equilibrium. The equilibrium comes from a simple compromise model that maximizes total utility given each actor’s ability and desire to influence the outcome.
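Read literally, one simple implementation of this aggregation is the sketch below: each stakeholder’s support is the product of influence and salience, each weight is that stakeholder’s share of total support, and the baseline equilibrium is the support-weighted average of preferred outcomes. (Under a quadratic, distance-based utility, that weighted average is exactly the point that maximizes total weighted utility.) The stakeholder names and scores are hypothetical, and SAM’s production formula has more moving parts, so treat this as an approximation of the logic rather than the model itself.

    # Each entry: (name, preferred outcome on 0-100, influence 0-1, salience 0-1); values are hypothetical
    stakeholders = [
        ("Regulator",              20.0, 0.8, 0.9),
        ("Industry group",         85.0, 0.7, 0.6),
        ("Advocacy coalition",     35.0, 0.5, 0.8),
        ("Legislative swing bloc", 55.0, 0.9, 0.3),
    ]

    # Support: how hard each stakeholder pushes, combining influence and salience
    supports = [influence * salience for _, _, influence, salience in stakeholders]
    total_support = sum(supports)

    # Baseline equilibrium: support-weighted average of preferred outcomes
    equilibrium = sum(
        (support / total_support) * preference
        for (_, preference, _, _), support in zip(stakeholders, supports)
    )
    print(f"Baseline equilibrium of roughly {equilibrium:.1f} on the 0-100 continuum")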

Now let’s be very clear about what this predicted equilibrium is and what it isn’t. What it is: the outcome you would expect to see if none of the stakeholders substantially changes their position or their desire to engage on the issue. It is the expected outcome at a single point in time, without further bargaining or deal making. That’s why we call it a ‘baseline’ prediction. You can think of it as a story about what happens before the bargaining games that unfold ‘in the shadow’ of this baseline.

What it is not: a prediction about where the ‘final’ outcome will land. Bargaining is the lifeblood of politics, and so much of politics is about how you play the bargaining game. ‘Final’ outcomes can move substantially away from the baseline equilibrium, and often do. There’s no crystal ball here, or anywhere else for that matter; if there were, it would all be so easy.

But don’t underestimate the value of a baseline equilibrium prediction. 

In our practice, we work every day with clients who must decide where to allocate their market and non-market strategy efforts, simply because no one can do everything. Often that means deciding whether you can live with a sub-optimal outcome on a particular issue and use your energy to move the needle on another. Sometimes it means dipping an experimental toe in the water to test what it would take to move an issue away from an equilibrium outcome, and then deciding whether the additional effort is worth it.

These are the strategic conversations that often matter most. They shouldn’t rest on a single person’s opinion, and they shouldn’t be vulnerable to groupthink. In our experience, one of the most dysfunctional patterns is when these strategic conversations get buffeted by the short-term news and social media cycle. We’ve all seen it happen: a bad story shows up in a prominent outlet or goes viral on a social platform, or both, and suddenly people feel like they must do something.

That instinct can be right or wrong, but most of the time the decision should not be shackled to a short-term shock. After all, we know that some shocks propagate and lead to deeper crises and change. Others, if left alone, simply dissipate over time and barely register later on. A disciplined model that generates a baseline equilibrium prediction can be a very powerful tool in helping decision makers bet smartly on which scenario they are facing.

Ultimately, we want to help our clients manage these kinds of situations with a clear decision-making discipline.  What kind of a situation are we facing? What do we consider our best options in a situation like this? If we were to choose option X, can it work?  Will it work?  And is it worth it?  

The best strategists and strategy teams bring more than one tool to this discipline, and they learn from experience to get better at it over time. We believe SAM belongs prominently in that toolbox, and that practice with SAM, as with so many tools, will compound over time into more effective use.

If you’re intrigued and want to learn more about how SAM can help you and your team, whether in a specific situation you’re facing or in your practice more generally, give us a call. We’d love to explore how to make this tool helpful to you.