Extremists think "communication" means agreeing with them.

Leo Rosten

Introduction

Baruch Fischhoff's 1995 review of twenty years of process in risk communication research and practice revealed some effective and ineffective techniques for telling the public about technology. This page outlines Fischhoff's review and then considers additional viewpoints on risk communication offered by Paul Slovic and William Freudenburg. These three scholars express concerns about the quality of public discourse regarding complex and controversial technologies. They wonder whether the social fabric of society itself might be harmed by contentious, overly adversarial public debate about new technologies. They discuss the obligations of citizens and societal institutions to facilitate well-reasoned discourse that is respectful of the opinions of others. And they seek ways to alleviate negative social consequences arising from public discourse on the "risk-assessment battlefield."


Compass

    Key Questions

      Is the quality of the social fabric itself at risk as a result of contentious public discourse about complex technologies?
      Can society flourish in the "risk-assessment battlefield"?

    Examples

      Apart from the arguments made in favor of or in opposition to the sampler technologies, is the nature of the public discourse itself harmful to the well-being of society?

      Are we arguing too much with one another?

      Do we respect viewpoints different from our own?

      Is there a better way to go about the task of formulating wise technology policy?



Risk Perception and Communication Unplugged: Twenty Years of Process

Baruch Fischhoff: Risk Analysis, vol. 15(2) 1995. Full text article.

Fischhoff's review of the history of risk communication research and practice is organized around eight developmental stages that span the twenty-year period from 1975 to 1995. The first seven stages are outlined below; the eighth, "all of the above," combines the lessons of its predecessors. Fischhoff describes each stage along with its strengths and limitations, and he concludes by offering suggestions for improvements needed in future efforts at risk communication.

First Developmental Stage: "All We Have to Do is Get the Numbers Right"
    Fischhoff notes that communication often begins before a word is said; that is, the stance that nothing needs to be said is itself a form of communication. This noncommunication often represents the initial reaction of technical experts to the prospect of public input to risk assessments. One can understand, for example, the perspective of risk experts who painstakingly master the assessment of technologies and so believe that little communication is needed, or should be expected of them, with a public largely ignorant of the risk issues associated with a technology. And certainly, the "paradox of democracy" precludes public discussion ad infinitum regarding the adoption of new technologies or the maintenance of existing ones. Yet, because within democratic societies the public will have input to decision making, it becomes a de facto requirement for risk experts to convey their findings to the public.
Second Developmental Stage: "All We Have to Do is Tell Them the Numbers"
    When requested to do so, risk managers present their findings to the public, often with little interpretation or explanation. Although this approach to information delivery seems forthright in its aim of objectivity, the public can read it as an indication of distance or even arrogance on the part of the risk managers. Subsequent attempts by the public to interpret the numbers can be hindered by lack of information or expertise, or politicized by subjective evaluations of the meaning and usefulness of the raw data. The approach is further hampered by its premise that the numbers are correct. As has often been noted (see: Risk Assessment Critique: Part 1), risk assessments can be limited in their applicability or outright flawed for many reasons, including occasional acts of dishonesty by scientists or technology managers (see: Consumer Skepticism). Simply presenting facts, then, in addition to conveying a condescending distance, rests on the flawed premise that the numbers provide a complete and accurate assessment of risk.
Third Developmental Stage: "All We Have to Do is Explain What We Mean by the Numbers"
    Once one begins to explain numbers, one inevitably begins to introduce subjective evaluations of those numbers, and the public recognizes the subjective nature of these explanations. This recognition sometimes leads to contentious public discourse about interpretations. Typically, proponents and opponents of a technology will offer conflicting interpretations of the numbers, and in this exercise the viewpoints of opponents will influence public opinion more than those of proponents, because negative information initially carries disproportionate weight (see: Consumer Skepticism and The Social Problem). The ensuing dilemma for scientists (see: Science, Technology, and Society, Part 1) is deciding how much explanation about a technology to provide to the public. These dilemmas, and the unfavorable public reactions that follow any confessed limitations of the technology, lead some risk communication experts to declare that what is at stake is control over the language of risk (see the viewpoints offered by Paul Slovic, below). At this developmental stage, then, explaining the numbers can engender controversies that proponents are likely to lose.
Fourth Developmental Stage: "All We Have to Do is Show Them That They've Accepted Similar Risks in the Past"
    It might seem intuitively appealing to compare a technology under consideration with one previously considered risky but now accepted as posing little risk. This approach in effect says to the public, "See how silly you are to doubt now, when your doubts in the past have proven groundless." It suffers on three counts. First, it assumes that the risk assessments are correct, which sometimes is not the case. Second, the condescending attitude it conveys is unlikely to sway public opinion favorably. Third, risk comparisons are difficult to make even when the public is willing to accept some risk, and often the public prefers to bear no risk at all.
Fifth Developmental Stage: "All We Have to Do is Show Them That It's a Good Deal for Them"
    Explaining both costs and benefits can be a highly effective approach to helping the public reach decisions about a technology. The public is in effect asked to join with risk experts in evaluating the merits and limitations of a technology. Still, this approach suffers from pitfalls similar to those of the third developmental stage: the explanations of costs and benefits can themselves become problematic within contentious public discourse. Because no technology can claim 100% safety, the public must eventually weigh benefits against risks, presuming these can be agreed upon.
Sixth Developmental Stage: "All We Have to Do is Treat Them Nicely"
    Aretha Franklin had it right: it's about R.E.S.P.E.C.T. In addition to honest, balanced messages, people want to be treated with respect, and from respect, together with a sense that the experts are competent, comes trust. The public needs to feel that their opinions, and even their emotions, are respected as legitimate.
Seventh Developmental Stage: "All We Have to Do is Make Them Partners"
    Treating the public with respect is an essential element of the decency they deserve. Yet this approach can seem patronizing if it is not accompanied by a true partnership of ideas. In fact, even a less educated public can make valuable suggestions for improving a technology; the "indigenous knowledge" of the public can enhance a technology's effectiveness. Simply asking the public for input can significantly improve the relationship with it. "Partnerships are essential to creating the human relations needed to damp the social amplification of minor risks--as well as to generate concern where it is warranted."
Summary

All technologies are flawed, and all have negative consequences for some segments of the population. Hence, some conflict is inevitable. And because negative information initially carries disproportionate weight among a justifiably and responsibly skeptical public, opponents will at first be successful at raising concerns about new technologies. Assuming that the overall benefits of a new technology outweigh its risks, true partnerships with the public will be the most effective means of overcoming initial skepticism and moving toward adoption of the technology.

Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield

Paul Slovic: Risk Analysis, vol. 19(4) 1999. Full text article.

Slovic asserts that risk management has become increasingly politicized and contentious. He expresses concern that controversy and conflict might have become too pervasive, and that the quality of society erodes under overly contentious public discourse about technology policy.

The irony, he states, is that even as our nation has expended considerable resources to make life safer, many persons have become more, not less, concerned about risk. To understand this phenomenon, Slovic describes the nature of risk assessment and its relationship to public perceptions. He distinguishes between hazard (the technical assessment of potential physical harm) and risk (socially constructed perceptions of danger). [Earlier, we said that Risk = Hazard (technical assessment) + Outrage (emotional assessment). Here, Slovic is saying that Risk is the socially constructed sum of hazard and public perceptions. Thus the two perspectives are very similar.] Slovic states that assessments of danger, both by technicians and by the public, are influenced by political, economic, cultural, and other social factors. Importantly, it is definitions of risk that shape risk policy. Echoing the argument Bell and Mayerfeld make in the manuscript reviewed in the Philosophy of Technology, Part 2 section, Slovic states that defining risk is an exercise in power.
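
To make the bracketed comparison concrete, here is a minimal sketch of the two framings in LaTeX notation; the symbols are ours, not the authors':

    % Framing used earlier on this site:
    \[ \text{Risk} \;=\; \underbrace{\text{Hazard}}_{\text{technical assessment}} \;+\; \underbrace{\text{Outrage}}_{\text{emotional assessment}} \]
    % Slovic's framing, with the second term explicitly social:
    \[ \text{Risk} \;=\; \underbrace{\text{Hazard}}_{\text{technical assessment}} \;+\; \underbrace{\text{Public perception}}_{\text{socially constructed}} \]

In either framing, the second term is what distinguishes risk, as the public responds to it, from the purely technical measure of hazard.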

Thus, risk controversies are not about science versus misguided public perceptions of science, wherein the unwashed public needs to be educated about "real" risks. Rather, risk controversies are struggles over who will define risk. The public cannot be labeled irrational merely because their judgments about risk are influenced by emotion; the viewpoints of scientists are likewise influenced by emotion, politics, economics, and so forth. Technology policy discourse, therefore, is not about who is correct in assessing danger, but about whose assumptions regarding political, social, and economic conditions win out on the risk-assessment battlefield. Danger is real, but risk is socially constructed. Scientific literacy and public education are important, therefore, but they are not central to risk controversies.

Slovic raises concerns about how disparities between "real" and "perceived" risk might engender public discourse that, itself, is a risk to the social fabric of society. Trust is a critical factor in risk assessment and management. Social relationships of all kinds, notes Slovic, are strongly influenced by trust. Unfortunately, trust is fragile. Slovic states that the limitations of risk science, the importance and difficulty of maintaining trust, and the complex, sociopolitical nature of risk point to the need for a new approach to risk assessment--one that focuses upon introducing more public participation into both risk assessments and risk decision-making to make the decision process more democratic, improve the relevance and quality of technical analysis, and increase the legitimacy and public acceptance of the resulting decisions.

Slovic argues that the system destroys trust:
  1. Pervasive media attention to technology and risk assessments erodes trust because most of what the media report is trust-destroying news.
  2. Powerful special interest groups find access to the media.
  3. The young science of risk assessment cannot prevail against the level and intensity of assaults against it.
  4. Whoever controls the definition of risk controls technology policy. (See similar argument by Bell and Mayerfeld).
In contrast to others who also note the seemingly disproportionate effect of negative citizen opinion upon risk assessment, Slovic states that more, not less, citizen involvement is needed to adequately manage risk. Slovic appears uncomfortable with technology policy formed through contentious debate between scientific experts and special interest groups, and he therefore urges more widespread involvement of the public in risk management.



Risk and Recreancy: Weber, the Division of Labor, and the Rationality of Risk Perceptions

William Freudenburg: Social Forces, vol. 71(4) 1993. Full text article.

The earliest social science discussions of risk were framed almost exclusively in terms chosen by engineers. Within the technical community, two explanations typically are given for public reactions to what technicians deem objectively defined risk. The first is that the public is ignorant and/or irrational; from this definition of the situation, policy focuses on educating an ignorant and easily manipulated public. The second explanation, associated more with economists, is that public reactions represent economically rational, if understandably selfish, responses to risk; from this view, policy focuses on providing adequate compensation for risks endured.

The problem with the first view is that technical definitions of objective risk are not always precise (see Risk Assessment Critique: Part 1) and are always influenced by social and cultural factors. The problem with the economic approach is defining adequate compensation: events vary in the amount of outrage they create, and it is difficult to assign monetary value to risk and to negative health outcomes.

Increasingly, the findings from empirical studies are proving paradoxical with respect to the individualistic theoretical perspectives that have predominated in the past. Research shows that differences in risk perceptions cannot be attributed to differences in information levels; they are better attributed to differences in cultural outlook and personal values.

Freudenburg notes that, with the increasing complexity of technological innovations and the growing societal division of labor, people find themselves knowing little about highly complex and potentially dangerous technologies. They therefore must rely upon their judgments about whom to trust. Like Slovic, Freudenburg is aware that:
  • the public is not irrational in their skepticism about complex technologies, but rather cautious in deciding whom to trust in their understandable state of ignorance about these technologies,
  • the public and scientists rely upon social as well as technical criteria to evaluate risk,
  • claims that the public is irrational are in part responsible for increasingly contentious debate about complex technologies,
  • some special interest groups profit from fear mongering within this atmosphere of ignorance and fragile trust,
  • the media have a difficult job of presenting varying viewpoints on technical issues.
Freudenburg seeks societal, not individualistic, explanations for this pervasive problem because contentious public debate can:
  • delay implementation of valuable technologies,
  • hasten implementation of undesirable technologies, and
  • create public discourse that in itself might be harmful to the social fabric of society.
Freudenburg uses the term recreancy, meaning institutional failure that results from a lack of competence, of fiduciary responsibility, or both, to refer to societal-level inadequacies in risk assessment, management, and communication. Recreancy does not necessarily result from "villainy"; it comes about instead through inadequate societal-level definitions of risk, procedures for evaluating risk, risk management practices, and poor risk communication techniques. In other words, what stands in the way of wise technology development and policy is inadequate societal structure and functioning, not public unfamiliarity with technology or irrational thinking.

Freudenburg offers suggestions for improving societal-level capacity in risk assessment, management, and communication:
  1. Assess the level of recreancy in American society,
  2. Become more aware of societal-level influences on risk assessment, management, and communication,
  3. Build institutional capacity to facilitate wise technology policymaking.