Nobody would dispute the claim that in business, as in all walks of life, knowledge boosts performance.
A more contentious issue is how any large enterprise can bring together all the knowledge that its employees possess into a meaningful whole.
The most common approach is to capture that knowledge in the form of data through IT systems and then, as far as is possible, to compile the data into large repositories. Having been robbed of the context in which it was created, that data is then subjected to translation and analysis so that the business might figure out what it all means.
There are some circumstances in which this works perfectly, but it is not the only way.
Employees, as a rule, understand the environment in which they work. If a product in development is going to fail, or if a marketing campaign is not going to deliver results, there’s a reasonable chance that someone within the organisation already knows. And while a good manager will elicit these judgements, more often than not they go unheard.
Presented below are two examples of technologies that aim to address this. They both work by mathematically aggregating the opinions of groups of employees, in one case to detect the sentiments of the workforce, in the other to facilitate transparent and collaborative decisions.
The two examples provide a glimpse of the theory and practice behind an emerging category of software, in which the collective wisdom of the workforce is regarded as an asset and which may influence mainstream business software in the coming years.
Until 2007, Mat Fogarty worked as a financial forecaster for corporations, starting with Unilever in the UK before emigrating to the US to join video game maker Electronic Arts. It was his job to predict how much money given projects were going to cost and earn – predictions that the chief financial officer would use to allocate budgets.
Typically, these predictions were pretty inaccurate, he recalls. “When you tried to predict how much of a given product you were going to sell in Q1, your estimate for the length of the launch phase would be unreliable, your sales predictions would be off and you would have positive or negative biases depending on the season.”
He soon discovered, though, that all the information required to make more accurate predictions was available within the organisation, just not in the finance department’s models. Instead, he says, it was “with the people”.
“When I worked for EA, I used to play football with my colleagues,” Fogarty explains. “Often, the information I was getting from the engineers, game testers and marketing guys on the football pitch was very different from the information that I was giving to the CFO from the systems.
“I thought, ‘This is ridiculous – there has to be another way.’”
This led Fogarty to explore the possibility of using ‘prediction markets’ in corporate planning. A prediction market uses the model of a financial trading market – whereby participants buy and sell ‘assets’ – but instead of shares in a company they trade bets on the likelihood of a given outcome. Famous examples include the Iowa Electronic Markets, in which participants bet real money on outcomes including the US presidential election.
Proponents argue that prediction markets are a highly accurate way of making forecasts, as rather than relying on the influence of a few so-called experts they aggregate what the ‘crowd’ knows. (Like the ‘rationality’ of financial markets, however, this is much debated.)
So convinced was Fogarty of the potential of prediction markets in a corporate environment that in 2007 he quit his job and founded CrowdCast, a company that now sells prediction market software as a web service. Among his first hires was Dr Leslie Fine, a scientist who had studied prediction markets both academically and at Hewlett-Packard’s research labs.
Fine explains that a number of modifications to the traditional prediction market model were necessary to make it usable in everyday business. For example, rather than following a strict market metaphor – in which participants give their trading positions on various assets (e.g. ‘At 80% I short and at 90% I buy’) – CrowdCast wanted a system where participants could make simple probability estimates (e.g. ‘I think there’s between an 80% and 90% chance of this happening’).
Making this modification and others required some “pretty cool math”, says Fine. The resulting system allows employees to place bets (using imaginary money) on the probability range of certain outcomes. If they bet $10 that there is, for example, between an 80% and 90% chance of a product shipping on time, they win $100 back if it does indeed ship on time; a broader estimate and they would win less. Employees are motivated to participate by small prizes and a leaderboard of their performance.
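The payout rule Fine describes can be sketched in a few lines. The formula below is an illustrative assumption, not CrowdCast's actual scoring: a winning bet pays out in inverse proportion to the width of the probability range, so a correct ten-point estimate returns ten times the stake.

```python
def payout(stake, low_pct, high_pct, outcome_occurred):
    """Illustrative range-bet payout (not CrowdCast's real formula).

    A winning bet pays inversely to the width of the probability
    range: narrower, more confident estimates earn more.
    """
    if not outcome_occurred:
        return 0  # the bet is lost
    width = high_pct - low_pct
    return stake * 100 // width

# A $10 bet on an 80-90% chance pays $100 if the outcome occurs;
# a broader 70-90% estimate on the same outcome pays only $50.
print(payout(10, 80, 90, True))
print(payout(10, 70, 90, True))
```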
Aggregating all the bets together creates a probability distribution, demonstrating in graphic form not only the average of all predictions but also the spread of predictions. This allows managers to see what the crowd knows. CrowdCast is used by a number of corporate entities including, Fine says, “one of the world’s largest greeting cards manufacturers”.
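How an average and a spread emerge from many range bets can be illustrated with a minimal sketch. The bet data and the simple midpoint summary below are assumptions for illustration; CrowdCast's actual aggregation is more sophisticated.

```python
from statistics import pstdev

# Hypothetical range bets: (stake, low_pct, high_pct) on
# "the product ships on time"
bets = [(10, 80, 90), (20, 60, 80), (5, 85, 95), (10, 40, 60)]

# Treat each range's midpoint as that employee's point estimate
midpoints = [(low + high) / 2 for _, low, high in bets]
stakes = [stake for stake, _, _ in bets]

# Stake-weighted average: the crowd's consensus probability
consensus = sum(s * m for s, m in zip(stakes, midpoints)) / sum(stakes)

# Dispersion of estimates: a wide spread means the crowd disagrees,
# which is itself useful information for a manager
spread = pstdev(midpoints)
```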
“The company has a very positive culture, and one side effect is that they are very bad at killing bad projects early in the cycle. So they got us in to create a forum where the creatives, who are designing their products, can say, ‘Actually I don’t think that’s such a good idea.’”
This allows the company, Fine says, “to ‘de-risk their projects’ earlier, when there is still time to take proactive action”.
In March 2010, CrowdCast launched a dashboard that allows managers to monitor the aggregated employee sentiment on an ongoing basis; they can set up alerts to let them know when confidence that a goal will be achieved dips below a certain point. It also allows them to slice the aggregate predictions to see which departments are positive and which are negative.
Fogarty claims that with this added functionality, CrowdCast is moving away from simple prediction markets towards what he describes as ‘social business intelligence’. “Normal BI is mostly around aggregating information from systems,” he explains. “We see social BI as aggregating information from humans within the organisation.”
‘Social business intelligence’ is just one of the terms being used to describe an emerging category of software that aims to make business management more democratic, more transparent and – it is hoped – more effective as a result. Another, as coined by influential analyst company Gartner, is ‘collaborative decision-making’.
This term was minted in response to a number of enquiries that Gartner analyst Rita Salaam received in the wake of the credit crunch. “A couple of those enquiries were from very large banks that had been, I would say, blindsided by the financial crisis,” she recalls.
They wanted to know why certain decisions had been made in the run-up to and immediately after the crisis, but found they had no insight into the decision-making process. The problem is that, for many organisations, this process involves many different stakeholders using information from many different systems and serving many different agendas – there is therefore no single record of the lifecycle of a decision.
And because organisations cannot trace back decisions that were made in the past, Salaam explains, they cannot learn from the mistakes or successes. “The important thing is they aren’t able to mine past decisions in order to develop best practices,” she says. Salaam believes that for this retrospective view to be achieved, businesses need to adopt a new operating model that allows the decision-making process to be tracked. “We need a new way of working that allows a new level of transparency into decisions.”
She sees in the various social technologies gradually creeping into the enterprise an opportunity to achieve that operating model, but adds that today there are many different tools that embody a variety of approaches, none of which provides a complete platform for transparent decision-making.
“There are companies taking a business intelligence approach to this, while others are taking a business process management approach, and others still a collaboration approach.”
A few software suppliers, however, focus directly on the problem of how groups of people take decisions. One example of this, says Salaam, is Decision Lens.
In the 1960s, mathematics professor Thomas Saaty was employed by the US State Department to facilitate nuclear disarmament talks between the US and Soviet governments. He found that the US representatives, mainly economists, were being outmanoeuvred by their Soviet counterparts.
The reason, Saaty concluded, was that the US delegation had no mechanism with which to prioritise the numerous objectives they were tasked with achieving. Hence, they were arguing with one another even as they sat down at the negotiating table. This experience inspired Saaty to research methods for multiple stakeholders to prioritise various objectives collaboratively. This research led him to develop the analytic hierarchy process (AHP).
The AHP is based on an insight into the function of the brain – human beings are far better at making pairwise comparisons (‘X weighs more than Y’) than individual assessments (‘X weighs this amount’). It breaks any prioritisation task into a number of pairwise comparisons, which can be undertaken by any number of people. The model then mathematically evaluates the sentiment of the group by aggregating these pairwise comparisons.
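A common textbook way to turn one participant's pairwise comparisons into a ranked set of priority weights is the geometric-mean approximation of the comparison matrix's principal eigenvector. The comparison values below are invented for illustration, and Decision Lens's exact computation is not public.

```python
import math

# Hypothetical pairwise comparison matrix for three objectives A, B, C.
# matrix[i][j] > 1 means objective i is judged more important than
# objective j (Saaty-style intensity scale); matrix[j][i] holds the
# reciprocal judgement.
matrix = [
    [1.0, 3.0, 5.0],   # A vs A, A vs B, A vs C
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

# The geometric mean of each row approximates the matrix's
# principal eigenvector
row_gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
total = sum(row_gm)
weights = [g / total for g in row_gm]  # normalised priorities, sum to 1
```

Group judgements can then be combined, for example by taking the geometric mean of each cell across all participants' matrices before deriving the weights.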
In 2003, Thomas Saaty’s sons, John and Daniel, founded Decision Lens, a company that sells collaborative decision-making software based on the AHP. Today, it is employed by organisations including the US Department of Defense, NASA and Lockheed Martin, which uses the software to prioritise projects for its F-35 production line.
“The idea is to bring together a broad stakeholder group and have them structure a decision,” explains Decision Lens CEO John Saaty. The software invites each member of a group, which could be the board of directors or a marketing operations team, to complete the pairwise comparisons individually. This, says Saaty, helps to counteract some of the psychological factors that are often at play in decision-making meetings.
“You will often have A-type personalities that drive these meetings, and there will be other people in there with real expertise but because they are not the types that want to offer the information in the meeting they’re just quiet about it,” he explains. “This software allows everyone to understand each other’s priorities. You don’t have to agree, but you have to be explicit about what your judgement is.”
The second phase of the process is to assess alternative courses of action according to the aggregate priorities of the group. “For some of our customers, this is the first time they are actually able to tie the evaluation of alternatives directly back to their strategic imperatives.”
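This second phase amounts to a weighted sum: each alternative is rated against each objective, and those ratings are weighted by the group's aggregated priorities. The weights, project names and ratings below are hypothetical.

```python
# Hypothetical priorities from the pairwise-comparison phase
weights = {"cost": 0.5, "time_to_market": 0.3, "quality": 0.2}

# Hypothetical 0-1 ratings of each alternative against each objective
alternatives = {
    "Project X": {"cost": 0.9, "time_to_market": 0.4, "quality": 0.6},
    "Project Y": {"cost": 0.5, "time_to_market": 0.8, "quality": 0.9},
}

# Score each alternative against the group's strategic priorities
scores = {
    name: sum(weights[c] * ratings[c] for c in weights)
    for name, ratings in alternatives.items()
}
best = max(scores, key=scores.get)
```

Because the weights are derived from the whole group's judgements, the final ranking is traceable back to the priorities each stakeholder made explicit.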
During his tenure as head of marketing strategy for auction website eBay, Kip Knight – now a consultant – used Decision Lens to choose which marketing projects merited investment. This, he reports, made the decision-making process more collaborative, more transparent and, therefore, more successful.
“We made better decisions for two reasons: first, they were a little more thoughtful because they were looked at from various criteria; and second, because of the debate and the collaboration, because everybody felt they had a little more ownership of the decision,” he explains.
“Previously, there were too many back-room deals going on,” he adds. “I’ve done traditional budgeting for many years, and you can game it so that you get the budget that you want. But you can’t game this, it’s way too transparent.”
Saaty argues that the popularity of his product – the company grew 70% in 2009 and was used to allocate around $98 billion of budget, he says – reveals that conventional decision-making practices are broken. A common malaise, he says, is the expectation that information systems and data analysis can somehow direct strategy.
“We met with [pharmaceuticals giant] AstraZeneca’s head of global portfolio,” recalls Saaty. “He said they had tried to drive decision-making off data. They had a whole project last summer where they were looking into various possible products, and they thought by evaluating all this data and plotting it on a graph that it would soon emerge what the correct direction would be for the products. But what happened was that all the products ended up landing right on top of each other – there was really no differentiation.”
“His point was that looking for your strategy to emerge out of the data is not really that effective,” Saaty concludes. He adds that making the priorities of the various stakeholders within the organisation explicit – and providing a mechanism for reconciling those priorities – is more likely to provide useful guidance than statistical analysis of financial records.
Reflected in Saaty’s comments is a broader point. Not only are the minds of an organisation’s employees its most valuable source of information, they also contain invaluable insight, analysis and forecasts. Any tool that helps an organisation build all of this into its operational management and strategic leadership must surely make that organisation stronger.