Technological Innovation: Mistakes of Omission and Commission

It is often suggested that adequate technology assessment (TA) studies should be required for any technical innovation before proceeding with commercial applications--that the burden of proof be placed on the people who want the innovation. It sounds reasonable to say that it is up to the innovator to prove that his innovation is safe, but there are some difficulties in this position. If as a general matter high standards of justification were set and enforced, many important projects would not get off the ground. Full and definitive TA studies of complex projects and phenomena are often simply not feasible. We have never seen an a priori analysis that would justify the conclusion: "Let's go ahead with the project; we understand the innovation and all of its first-, second-, and third-order effects quite well. There can be no excessive danger or difficulties."

Indeed, the people looking for second-, third-, and even fourth-order effects have often seriously erred about the first; in any case, they usually cannot establish the others with any certainty. For example, most of the limits-to-growth studies discussed in earlier chapters have many first-order facts wrong--a revealing sample of how difficult the problem is.

None of the above is meant as an argument against doing TA studies. On the contrary, in many cases much will be learned from such studies. But one cannot expect them to be complete and reliable, and placing too great a requirement on innovators to do such studies can simply be an expensive way of doing less; it entails all the problems and disutilities of excessive caution and of slowing down innovation in a poorly designed--and often capricious--manner.

The two basic kinds of innovative mistakes are those of commission and of omission. The first is illustrated by the case of DDT, discussed in the previous chapter, and by the cyclamate episode. In 1969, the U.S. Food and Drug Administration banned cyclamates (a widely used substitute for sugar in diet foods and soft drinks) because rats that were fed heavy doses during most of their lives developed bladder cancer. It has since been revealed, however, that the original research did not permit any firm conclusions to be drawn about cyclamates, since they were tested in combination with other chemicals. In addition, subsequent studies have failed to corroborate the original findings. Not only may this abrupt and premature ban have deprived numerous persons suffering from diabetes and hypertension of a medical benefit, but it also cost the food and soft-drink industries an estimated $120 million. In this case the mistake of commission swamped the potential cost of a mistake of omission. In choosing between avoiding a clear danger by doing something and avoiding a less clear--though potentially much greater--danger by deciding not to do something, society usually does prefer the former.

The mistake of omission can be illustrated by considering what might happen today if a firm tried to get aspirin accepted as a new product. It is known that even a small amount of aspirin can cause stomach or intestinal bleeding, and in some persons larger amounts can cause ulcers or other serious side effects. Furthermore, we still know very little about how aspirin operates. Thus one could argue rather persuasively that if a pharmaceutical company tried to introduce aspirin now, it would fail to pass current approval standards. And yet, because of its effectiveness as a cure or palliative for so many ailments, it is probably one of the most useful drugs available. Indeed, there is now a good deal of argument that the FDA is causing more harm by excessively slowing down the introduction of new remedies than it would if the rules were relaxed a bit.

As another example, let us assume that the U.S. authorities had made a TA study of the automobile in 1890. Assume also that this study came up with an accurate estimate that its use would result eventually in more than 50,000 people a year being killed and maybe a million injured. It seems clear that if this study had been persuasive, the automobile would never have been approved. Of course, some now say that it never should have been. But we would argue that society is clearly willing to live with this cost, large and horrible as it is. In Bermuda, which restricts drivers to 20 miles an hour, there are almost no fatal accidents except with cyclists. On Army bases, which restrict speed to 15 miles an hour, fatal accidents are unknown. Similar speed limits could be introduced in the United States if they were wanted, but the majority of Americans apparently prefer 50,000 deaths a year to such drastic restrictions on their driving speeds. In fact, the recent nationwide reduction to a maximum speed of 55 miles an hour to save gasoline clearly is saving thousands of lives a year, but there is little pressure to go further in this direction.

Another problem with technology assessments is that even a good TA would not have made a satisfactory prediction of the impact of the automobile--that is, predicting on the one hand the accident rate and related first-order difficulties, and on the other what society would be willing to accept. And it is even less likely that the TA would have accurately foreseen many of the secondary impacts of the automobile on society (to take just two small examples, recall the influence of the automobile on social and sexual mores in the 1920's and 1930's, or the role of the U.S. automobile industry in helping to win World War II).

This is precisely the difficulty that confronts us: every technology assessment study depends on having reasonable data, theory, and criteria available, and all three are unreliable and quite limited in practice. Perhaps 100 or 200 years from now man will both analyze and control his future much better than at present; thus it seems plausible that there may be fewer problems of misunderstood or inappropriate innovation two centuries hence. And especially if man has become dispersed throughout the solar system in independently survivable colonies, there would be a much smaller possibility of doomsday. Moreover, if such a disaster were to occur on earth, it would probably come through political or bureaucratic mistakes associated with war, rather than through inexorable or accidental physical processes leading to total catastrophe.

It is easy, and even tempting to many people, simply to ignore the costs and moral issues associated with mistakes of omission. Indeed, most people might prefer being responsible for a mistake of omission rather than one of commission, even if the latter were much smaller. This is particularly true, as we have pointed out, of one who has been raised in an upper-middle-class environment and has achieved a comfortable status. But most of the world is not satisfied with the economic status quo. It is important for these people to move forward; they are willing to accept great costs if necessary, and to take great risks as well, in order to improve their economic status. They want aspirin and automobiles, whatever uncertainties and terrible costs may be associated with them. On the other hand, the major pressures to retard economic development and technological progress in many parts of the world are for safety--safety from the environment, safety from the possibility of outside intervention, safety from internal political unrest, and safety from accidental disturbance of natural balances in the forces of nature.

Some years ago, after nuclear testing began in the Pacific, a debate arose about the acceptability of subjecting people to the threat that these tests could cause bone cancer or leukemia. The main question was whether this possibility was sufficiently large to justify suspending further testing. Almost everybody at that time accepted the assumption that every megaton of fission yield would probably cause 1,000 new cases of bone cancer or leukemia worldwide. Because this increase might not actually be detectable in the incidence of these diseases, many people argued that the harm was negligible. Others argued that no one would test the bomb with even one person on the test island if it meant killing that person. What then gave us the right to continue testing just because the deaths would be anonymous? It would appear that people are more willing to accept deaths which are not traceable to specific causes, but only when they cannot clearly identify the victims ahead of time--and thereby possibly prevent those deaths.
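The claim that 1,000 extra cases "might not actually be detectable" can be made concrete with a rough statistical sketch. The baseline figure below is purely an illustrative assumption (not from the text): under a Poisson model, year-to-year random fluctuation in a count of N cases is on the order of the square root of N, so a modest excess disappears into that noise.

```python
import math

# Illustrative assumptions, not data from the text: a worldwide
# baseline of 500,000 bone-cancer/leukemia cases per year, and the
# period's assumed 1,000 extra cases per megaton of fission yield.
baseline = 500_000
extra_per_megaton = 1_000

# Poisson statistics: typical random fluctuation ~ sqrt(baseline).
noise = math.sqrt(baseline)

megatons = 1
signal = extra_per_megaton * megatons

print(f"random fluctuation: ~{noise:.0f} cases per year")
print(f"excess from {megatons} Mt: {signal} cases")
# A common rule of thumb: an excess under ~2 standard deviations
# cannot be distinguished from chance variation.
print("detectable" if signal > 2 * noise else "lost in the noise")
```

Under these assumed numbers the excess is real--people die--yet no epidemiologist could pick it out of the annual statistics, which is exactly the moral tension the paragraph describes.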

It is simply a truism that most activities in our society have a finite chance of resulting in some death. For example, it was once the rule of thumb that, on the average, every $1 million worth of construction resulted in the death of one worker; this appalling ratio has decreased dramatically until now it must be something like one worker per $100 million in construction. But obviously this expectation does not stop us from putting up buildings, even of the most frivolous kind. The same principle is involved in an example cited earlier--society's unwillingness to lower the death rate in traffic accidents by reducing speed limits. It is not a sufficient answer that in the case of the automobile one voluntarily accepts the risks to which he is subjected. There are many people who would like to curb the automobile but who nevertheless run the same risk of accident as those who oppose curbs. In our view, there is nothing intrinsically immoral about society subjecting its citizens to this risk when a majority have evidently concluded that the benefits outweigh the risks and the risk is of a more or less customary sort.

Another important issue arises when the damages are spread out over time--an issue that was long misunderstood, partly because of a misleading theory of the English biologist John Haldane. According to his theory, any negative genetic mutation was bad, but minor mutations could ultimately cause more damage than lethal ones. The argument went as follows: Assume a fixed population. Assume that a parent with a defective gene passes it to one of his children. Thus every defective gene, if it does not result in premature death, is transmitted to an individual in the next generation. If the gene is lethal, it results in immediate death in the next generation and the matter is finished; there is no further inheritance. If the gene is not lethal but has a tendency to cause colds, then it can be passed on for many, many generations until eventually it causes a cold in a bearer at a time when catching a cold tips the scales against him and causes him to die. Then, of course, the gene is no longer passed along. Notice what happened here. The lethal gene caused an immediate death and was finished. The less lethal gene also caused a death eventually, and with mathematical certainty, but along the way it did much additional damage, giving many people colds over many generations. Therefore, according to Haldane's theory, the nonlethal mutated gene, if anything, caused more damage than the lethal mutation. This is certainly mathematically correct, but it ignores such issues as time-discounting and rate of occurrence, both of which should be added to the analysis when the damage is spread over many generations. It is difficult for most people to accept this point, because they often interpret it to mean that the damage is more tolerable because it is our grandchildren, not we, who will bear it--an inference which appears to be the height of irresponsibility. As a result, many scientists have come to the improper conclusion that damage spread out over time is just as bad as damage which occurs in one generation.
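Haldane's accounting, and the correction the text argues for, can be sketched numerically. The harm units, the 50-generation horizon, and the discount rates below are illustrative assumptions, not figures from the text: with no discounting the mild gene's accumulated colds plus its eventual death exceed the lethal gene's single death, but once harm is discounted per generation the ranking can reverse.

```python
# A sketch of the Haldane argument with time-discounting added.
# Units are arbitrary: 1.0 harm unit per death, 0.01 per cold
# (assumed values for illustration only).

def discounted_harm(harms_by_generation, discount=0.0):
    """Present value of a stream of harm, discounted per generation."""
    return sum(h / (1 + discount) ** g
               for g, h in enumerate(harms_by_generation))

DEATH, COLD = 1.0, 0.01

# Lethal mutation: one death in the next generation, then nothing.
lethal = [DEATH]

# Mild mutation: a cold in each of 50 generations, then the final
# death that removes the gene from the population.
mild = [COLD] * 50 + [DEATH]

for rate in (0.0, 0.02, 0.05):
    print(f"discount {rate:.0%}: "
          f"lethal {discounted_harm(lethal, rate):.3f}, "
          f"mild {discounted_harm(mild, rate):.3f}")
```

At a zero discount rate the mild gene does more total harm, which is Haldane's conclusion; at the assumed positive rates its present-value harm falls below the lethal gene's, which is the point about spreading damage over generations.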

But consider the following counterexamples: Imagine that society must choose among four situations: (1) 100 percent of the next generation would be killed; (2) 10 percent of the next 10 generations would be killed; (3) 1 percent of the next 100 generations would be killed; and (4) a tenth of a percent of the next 1,000 generations would be killed. In the first case, one has an end of history--everybody is dead. In the last case, great damage occurs, yet it is scarcely apparent because it is spread out over such a long period of time and among so many people. Clearly the first choice is intolerable; the fourth, while tragic and nasty, could certainly be better tolerated under most circumstances--indeed, in many situations similar to the fourth case, it would not be possible to measure the damage or prove that it existed. Any analysis of the difference between the first and fourth situations must take account of this spread over time, even though the total number of people killed is exactly the same.
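The four situations can be tabulated directly. The population size and the per-generation discount rate below are illustrative assumptions; the point is that the raw totals are identical in every case while the discounted totals diverge sharply.

```python
# The four situations: identical total deaths, very different
# present-value weight. Population and the 2% per-generation
# discount rate are assumptions for illustration.

POP = 1_000_000
DISCOUNT = 0.02  # per generation

# (fraction killed per generation, number of generations)
scenarios = [(1.0, 1), (0.10, 10), (0.01, 100), (0.001, 1_000)]

for fraction, generations in scenarios:
    total = fraction * POP * generations  # the same in every case
    weighted = sum(fraction * POP / (1 + DISCOUNT) ** g
                   for g in range(generations))
    print(f"{fraction:>6.1%} of {generations:>5} generations: "
          f"total {total:,.0f}, discounted {weighted:,.0f}")
```

Under these assumptions the first scenario carries its full weight in the present, while the fourth's discounted burden is a small fraction of it--a numerical restatement of why the two cases, though equal in total deaths, are not equal choices.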

This example is applicable to many of the environmental problems we should consider, such as the disposal of radioactive wastes and various toxic chemicals, both of which entail the remote possibility of an accident to some unknown group of people in the distant future. It also applies to many of the issues involving genetic damage of one sort or another, in which the injury may be shared by many generations or be inflicted on future generations.

One last point in this connection is almost frivolous, and we would hesitate to mention it if it did not come up so often. If there is a constant probability of some random event occurring, then no matter how small the probability, sooner or later the event will occur. This is an accurate but insignificant observation, because the underlying assumptions and conditions practically never hold. It is similar to noting that exponential growth will not continue indefinitely because of the finite character of the earth, solar system or galaxy. Since we know that in reality exponential processes cannot be sustained, the question is simply what causes such curves to turn over, when this is likely to occur and what happens when they do. Similarly, what about the argument that mankind must be disaster-prone because so many of its activities carry with them some small probability of causing catastrophes? One reply is that conditions are changing with extraordinary rapidity, and the problems associated with present activities may have little or no validity in the long term. In fact, this may be particularly true of such things as genetic damage caused by radiation or chemical pollution--or by pollution generally. It seems quite probable that, within a century or so, man will be able to prevent such damage, and that calculations of accumulated damage 10 to 100 generations from now will probably turn out to be irrelevant. In fact, it is a major theme of this chapter that most predictions of damage hundreds of years from now tend to be incorrect because they ignore the curative possibilities inherent in technological and economic progress. Of course, this reasoning would not apply to a world in which technological and economic progress were halted, but we do not consider that a likely possibility.
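The "sooner or later" observation rests on a standard probability identity: if an event has a fixed probability p in each period, the chance it has occurred at least once in n periods is 1 - (1 - p)^n, which approaches 1 as n grows. The per-period probability below is an assumed illustrative value; the sketch also marks why the identity is insignificant in practice, since p rarely stays constant.

```python
# Eventual near-certainty of a repeated small risk. The one-in-ten-
# thousand annual probability is an assumption for illustration.

def prob_at_least_once(p, n):
    """Chance of at least one occurrence in n independent periods."""
    return 1 - (1 - p) ** n

p = 1e-4  # assumed constant probability per year
for years in (100, 1_000, 10_000, 100_000):
    print(f"{years:>7} years: {prob_at_least_once(p, years):.4f}")

# The limit of 1 depends on p remaining constant forever -- precisely
# the condition the text argues is practically never satisfied, since
# technological change alters p within a century or less.
```

The arithmetic is correct, but, as the paragraph argues, its premise of an unchanging per-period probability is the weak link, not the algebra.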