"People are all, to a greater or lesser degree, storehouses of information," write Bryan Bonner and Michael Baumann. "To the extent that people can effectively pool their knowledge and expertise with others, groups, by extension, become vast resources with immense potential impact." Professors at the University of Utah and the University of Texas at San Antonio, respectively, the pair demonstrated a way to tap that potential in a study published in the Journal of Personality and Social Psychology. Their finding: reading a few sentences of instructions may be enough to help your team solve problems as well as an expert would. To summarize, they say, "encouraging individuals to consider… knowledge they already possess that may be task-relevant positively impacts the quality of group discussion, decisions, and performance."
In their study, 540 undergraduate students wrote down initial answers to trivia questions that had numerical answers. They were then given 45 minutes to think the answers over and try again. Half spent that time in three-person groups discussing possible answers; the other half worked alone. In addition, each person either read some extra instructions or did not, giving the study four "conditions":

- individuals with the standard instructions only
- individuals with the extra instructions
- three-person groups with the standard instructions only
- three-person groups with the extra instructions
Overall, the top third of individuals did just as well as groups in coming up with accurate answers. At first I thought this differed from studies showing that groups are better at decision-making than individual experts. Then it clicked: this was only one test, and three of the five trivia questions were about geography (such as "What is the minimum freeway driving distance from Salt Lake City to New York City?" and "What is the population of Utah?"). Thus individuals with a lot of knowledge of geography would be expected to do well. Repeat the test over different topics using the same individuals, and the scientific literature says groups would come out ahead. Indeed, the average answers of all the people in the "working alone" conditions were worse than the group averages. In fact, on most of the questions, groups without the help of the extra instructions outperformed individuals who got that help.
The regular instructions told everybody how to answer the questions. The extra instructions were designed to help people consider what they already knew while doing so. "Even though you may not know the precise answers to these questions," the additional lines read, "take a moment to reflect on the knowledge that you do possess. To do your best on this task, you must build bridges between what you do know and what you don’t know. You may not know the answer to a given question, but you may know other information that is related to the question. For each question, generate two items of associated knowledge to help you answer as accurately as possible." They were told to write down those items in addition to their initial answer.
Groups that read the extra lines did better than those that didn't. However, the intervention had no effect on accuracy for top- and middle-scoring individuals, and poor performers actually did worse when they read them. Bonner and Baumann mention studies showing that when poor individual problem-solvers are given questions they know nothing about, they rely too much on whatever facts they have even if those facts are irrelevant, rather than going with their instincts.
How did the extra guidance make a difference? To answer this, the discussions were recorded and analyzed. Groups with that help were more likely to bring up relevant facts in their discussions than those without it. Mentions of relevant expertise were higher as well. The article explains, "Expertise statements were assertions of mastery of a problem domain made either in self-reference (e.g., 'I do a lot of hiking and know a great deal about mountains') or in reference to another group member (e.g., 'Since you mentioned hiking a lot, you must know a great deal about mountains')."
In other words, people who read the extra instructions were better at bringing up relevant facts and at recognizing which of their teammates were experts. This is important. Bonner and Baumann say studies show that teams that make bad decisions tend to rely on the average of members' positions, or on the most extreme positions, rather than weighing facts and expertise. To illustrate, say a team was working on a question for which the answer was "7," but the members' individual answers averaged "5." Poor decision-making teams would probably choose 5, or maybe the extremes of 1 or 10. Good decision-makers would openly talk about facts and expertise and answer 6, 7, or 8.
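To make the arithmetic behind that illustration concrete, here is a toy sketch (mine, not from the study, with invented numbers) contrasting the two aggregation styles the authors describe: simply averaging members' positions versus giving more weight to the member with relevant expertise.

```python
def simple_average(estimates):
    """What poor decision-making teams effectively do: average positions."""
    return sum(estimates) / len(estimates)

def expertise_weighted(estimates, weights):
    """What better teams effectively do: weigh answers by relevant expertise."""
    return sum(e * w for e, w in zip(estimates, weights)) / sum(weights)

# Three members guess an answer whose true value is 7. One member has
# demonstrated relevant expertise, so the group gives her answer more weight.
estimates = [3, 5, 7]
print(simple_average(estimates))                 # 5.0 (the "poor team" answer)
print(expertise_weighted(estimates, [1, 1, 4]))  # 6.0 (closer to the truth)
```

The weights here stand in for the expertise statements the study measured; a real team does this informally through discussion, not with a formula.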
Bonner and Baumann say the extra-instruction groups did not talk any more than the others. Although other studies indicate that quantity of discussion (up to a point) also helps decision-making, in this study's limited time period only the quality of the discussions could have made the difference.
When you facilitate your team meetings, you can easily apply these results:

- Before discussion begins, read the study's extra instructions aloud or paraphrase them: even if no one knows the precise answer, everyone should take a moment to reflect on knowledge they do possess that relates to the problem.
- Ask each person to write down two items of related knowledge along with an initial answer.
- During the discussion, steer the team toward sharing those facts and noting who has relevant expertise, rather than simply averaging everyone's positions.
Problem-solving seems daunting because of the number of unknowns. The simple steps from this study reduce that number by helping people see that they know more about the problem than they realize. They may not get the right answer every time, but over time they should do so more often, and get much closer even when they don't hit the mark exactly.
Action Item: Print this post out and stick it somewhere your eyes will hit it often. That will serve as a reminder to try the technique I describe the next time your team has a big problem to solve.
Source: Bonner, B., and M. Baumann (2012), "Leveraging Member Expertise to Improve Knowledge Transfer and Demonstrability in Groups," Journal of Personality and Social Psychology 102(2):337.