Good post from Jim Fallows on the use of visual “thinking tools” for understanding complex issues. Argument mapping, the subject of the post, is like the structured, ultralogical sibling of graphic recording — both seek to distill the work of groups into a coherent, easily understandable form. For more on the need for such thinking tools, Fallows points to a paper, Enhancing Our Grasp of Complex Arguments, originally delivered as a conference opener. Anyone involved in group process work — or indeed, who has ever attended a conference — will have faced the questions raised here:
For the next two days, a series of individuals are going to address you, all approaching the matter of population and environment in 21st century Australia from different angles. How much of what they say will you retain? How clearly? How much overlap will there be between what any two of you retain, not to mention the whole gathering? How will you know? How much congruence will there be between the questions you ask of the different speakers? How cogent will their answers be? How will we be clear about the significance of their answers? What consensus, if any, will be generated by the conference? To what extent will such consensus be justified? How will we know? These are all questions about the cognitive process of deliberating. It is this process, not the substantive matter in hand, that I will address this morning.
My take is that we are just at the beginning of understanding how to do group decision-making right, and that we will increasingly rely on technology to do so as the scale increases. What's interesting so far, though, is how non-technological this work has been. Understanding cognitive biases and the subjectivity of expert judgment: these are primarily individual-scale psychological insights. Argument mapping and graphic recording are incredibly analog activities; we're talking about pen on paper. So the question in my mind is: do all these threads presage an era of tech-enabled, massively multiplayer decision-making, or is it the case that while our issues may grow infinitely in complexity, our sense-making tools need to stay human-scale?