How do you prove an uncertainty?

Everyone knows you can’t prove a negative, but can you prove an uncertainty? First, I’d better explain why you would want to try to do something so seemingly perverse. What I am trying to show is that the precautionary principle requires science to do something it is not normally called upon to do: demonstrate our ignorance and uncertainty rather than our knowledge.

The precautionary principle is, roughly, the proposition that we should not wait for full scientific certainty before intervening to prevent a harm. One example is the rule from the Third North Sea Conference that the use of toxic chemicals (such as antifouling paint) at sea should be assumed to harm the biosphere. The person wishing to use toxic paint bears the burden of proving that their actions won’t be harmful.

However, no one would argue that an action should be regulated merely on the basis of an unsupported accusation. So there must be another, earlier burden of proof which triggers the shift of the full burden of proof from the person wishing to prevent an action to the person wishing to act. But by the definition of the precautionary principle, this earlier burden cannot rest on full scientific certainty. Instead, it requires us to show sufficient uncertainty about whether an action is harmless (and sufficiently bad potential consequences) that precaution should be applied. In other words, we have to prove, or at least show, an uncertainty.

It’s not entirely settled that the precautionary principle is a good thing. For example, the American Council on Science and Health argue here that myopically focussing on potential harm, without taking benefits into account, can have adverse consequences. They give the example of the discontinuation of water chlorination in Peru, which led to a cholera epidemic. Nevertheless, even those holding relatively radical positions against precaution, such as advocates of the ‘proactionary principle’, don’t rule out that ‘precautionary measures’ may sometimes need to be taken.

So this leaves us needing to make decisions – if only provisionally – based on incomplete or unreliable information. But this is not something which science normally does, nor what the public expects and trusts it to do. So far I have been using ‘uncertainty’ in its everyday sense. Wynne divides uncertainty into four classes: risk, where there is a known probability of harm based on full scientific knowledge; uncertainty, where you know all the moving parts but not the probabilities; ignorance, where there is something that you don’t know that you don’t know; and indeterminacy, where there is something which you cannot know. Of these, you would most likely make a precautionary case on uncertainty or indeterminacy (in the case of risk, the case is not precautionary; in the case of ignorance, you don’t know you need to make it).

How would you show that it is uncertain whether something is harmless? An example might be that of TBT antifouling paint in Santillo et al. They are pessimistic that good evidence can usually be found, especially for low levels of a pollutant: TBT was unusual in that a specific marker of its effects was found. Their suggestion is that chemicals shown to be toxic at high concentrations should be assumed to be toxic at low ones.

One problem with making decisions without complete scientific knowledge is that the natural biases which scientific procedures have helped us control are likely to recur. Kahneman and Tversky describe various biases which people exhibit when dealing with uncertain information. For instance, people use a ‘representativeness heuristic’ when judging how likely something is. In their example, we might think an acquaintance is likely to be a librarian because he resembles librarians we have known. But this heuristic fails when the class being guessed is small in absolute terms, and unfortunately experts, even those with knowledge of statistics, are prone to these kinds of errors. So, going back to the case of the paint: we expect that a chemical toxic at high levels will be toxic at low levels, because all chemicals toxic at low levels are toxic at high levels. But maybe there are very few chemicals toxic at low levels. I’m not saying that this is a conclusive argument in this case, only that these kinds of biases are applicable to the judgements experts need to make.
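
To make the base-rate point concrete, here is a toy Bayes’ rule calculation for the librarian example. All of the probabilities are invented purely for illustration; none of them come from Kahneman and Tversky’s data.

```python
# Toy illustration of base-rate neglect, using Bayes' rule.
# All numbers are invented for the sake of the example.

p_librarian = 0.001      # base rate: librarians are rare in the population
p_looks_if_lib = 0.9     # most librarians "look like" librarians
p_looks_if_not = 0.05    # some non-librarians look like librarians too

# P(looks like a librarian), summed over both cases.
p_looks = p_looks_if_lib * p_librarian + p_looks_if_not * (1 - p_librarian)

# P(librarian | looks like one): the resemblance is strong evidence,
# but the tiny base rate keeps the posterior small.
posterior = p_looks_if_lib * p_librarian / p_looks
print(f"P(librarian | looks like one) = {posterior:.3f}")  # about 0.018
```

The representativeness heuristic, in effect, reads off the resemblance term and ignores the base rate; the same structure applies if ‘toxic at low levels’ turns out to be the rare class.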

A more positive picture is given by a study by Krynski and Tenenbaum. They note that statistical methods apply when we have lots of data about a simple system, whereas humans are adapted to make quick judgements about a complex world. They find that some apparent biases can be justified by appealing to causal models, but that people have a ‘bias towards deterministic mechanisms’. If scientists are going to address these issues, some mechanism for checking for such biases seems to be required.

Even if we can find the most rational way for experts to make these judgements, there is the problem of whether the public will find them legitimate.  If people expect scientists to produce clear knowledge, will they accept a case built on its lack?


Responsible Innovation and the Lean Start-up

The question I want to think about in this blog post is whether the concepts of ‘Anticipatory Governance’ and ‘Upstream Engagement’ are applicable to internet-based entrepreneurship.

A start-up company is frequently the point at which a technological innovation is first exposed to the public, and it therefore sits at an intermediate point in the time-frame defined by Collingridge in “The Social Control of Technology”: not so early that we have no idea what the effect will be, but not so late that the technology has become entrenched and difficult to change.

So a start-up company would seem to be an interesting object from the point of view of those desiring to avoid problematic technology. However, the concepts of ‘Anticipatory Governance’ and ‘Upstream Engagement’ have been formulated with rather different scenarios  in mind.

Guston’s ‘Anticipatory Governance’, set out in “Innovation Policy: Not Just A Jumbo Shrimp”, takes nanotechnology as its motivating example, although he undoubtedly intends his points to be more widely applicable. ‘Upstream Engagement’, set out in “See Through Science” by Wilsdon and Willis, is also motivated by nanotechnology and GM foods, but also by ‘science crises’ such as the BSE and MMR controversies. In these cases, the involvement of publicly funded science is an active one, and the target audience seems to be policy-makers. Both therefore propose that engagement with the public ought to be integrated with scientific and technical work. Wilsdon and Willis explicitly argue that engagement should be pushed as far ‘upstream’ as possible from the point where a controversy arises. On the other hand, neither actually goes so far as to say that these activities ought to be compulsory; rather, they argue that their recommendations will avoid acrimonious disputes.

An influential methodology for start-up companies is the ‘Lean Start-up’ method, promoted by Steve Blank. A start-up is defined as a company which is in search of a viable business model. In order to maximise the likelihood of launching a profitable enterprise, the Lean Start-up is supposed to remain as flexible (‘agile’) as possible, maximise the feedback it obtains from paying customers, and ‘iterate’ (test and discard ideas) as fast as possible.

This engagement-intensive model seems superficially like a good match for Anticipatory Governance and Upstream Engagement. However, the match is not exact. The engagement of a start-up is specifically with those who are, or might become, its customers, not with all those it might affect. For example, AirBnB (a start-up which allows people to easily (sub)let their house or flat for short periods) has been criticised for causing problems for the neighbours of its customers.

Anticipatory governance “prescribes the explicit inclusion of values in deliberations”, which would include the values of those affected who are not customers.

Start-ups are a point at which we can still affect technology, but at the very start it may well be considered too early. A start-up might go through several alternative ideas before hitting on one which is viable. Clearly, until – or unless – it hits on a growth strategy, there is little point in worrying about its social effects. But if the start-up succeeds, that point may be reached.

Wilsdon and Willis cite low levels of public engagement in innovation-based companies, due to the countervailing pressures of the profit motive and the need to protect competitive information. However, they point to arguments that more open models of innovation do not in fact lose out to more secretive ones. The same point is made by Steve Blank in the ‘Lean Start-up’: he argues that the ‘stealth mode’ practice often used by earlier start-ups is being discarded, on the grounds that customer feedback matters more than secrecy. Although these proposals serve different interests, their similarity in method suggests that they may not be too far apart.

What about the perspective of the start-up founders? They may not agree that start-ups ought to be the focus of any kind of governance. I have not found any systematic research on their views, but my sense is that while not all of them follow Peter Thiel, who apparently “no longer believe[s] that freedom and democracy are compatible”, they do have a strong libertarian streak on average. Start-up companies are private enterprises, and as such it may be argued that they do not have the same duty of care to the public at large as publicly-funded research. Nevertheless, internet companies have caused social change, which suggests that some input from public values is appropriate. Apart from ideological concerns, start-up founders are likely to want to avoid heavyweight governance processes as sources of economic ‘friction’, not necessarily wrongly: if only a low proportion of start-ups are problematic, it may be unreasonable to place a burden on them all.

As Guston notes, “governance does not consist simply of government or the activities of public sector organizations, but rather also includes governing activities that are more broadly distributed across numerous actors.” Start-ups may be more receptive to grass-roots involvement by citizens, especially if it provides them with valuable feedback. I’m not sure whether a Twitter mob can be counted as governance, but perhaps it can evolve in that direction.

Net Neutrality: What is it, and why should you care?

Net Neutrality is described in different ways. In Tim Wu’s Network Neutrality FAQ, it is characterised in purely economic terms: “a neutral network should be expected to deliver the most to a nation and the world economically, by serving as an innovation platform, and socially, by facilitating the widest variety of interactions between people.”

In “No Tolls on the Internet”, Lessig and McChesney describe Net Neutrality, or rather its absence, in rather more moralistic terms: internet service providers, or carriers, should not be “extorting protection money” from the services with which their users want to communicate. (These authors, by the way, are not in conflict with each other; Lessig and McChesney quote Wu describing carriers as having “the Tony Soprano business model”.)

The concept, then, is that when you sign up for internet service, you are signing up with the freedom to communicate with whom you please; you are not granting your service provider a license to demand baksheesh from anyone who wants to offer you a service. As Wu notes, this is an expansion of the concept of a ‘common carrier’ from the transport sector.

One might well question whether a technology can be neutral at all. Balabanian argues that the image of technology as a neutral tool is one promoted, at least in part, to hide the interests of those who produced the technology. However, at least some technologies can be neutral: most obviously, those designed to be. A (very simple) technology of this kind would be the method of drawing straws. Another would be weights and measures. Even vested interests benefit if some technologies are neutral.
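
As a small illustration of what ‘designed to be neutral’ can mean, here is a sketch of an electronic version of drawing straws: a commit-then-reveal draw in which no single party can bias the outcome. This is a toy, not a real protocol specification, and the party names are made up.

```python
# Toy commit-and-reveal version of drawing straws.
# No party can bias the result without everyone colluding.
import hashlib
import secrets

def commitment(secret: bytes) -> str:
    return hashlib.sha256(secret).hexdigest()

parties = ["alice", "bob", "carol"]
party_secrets = {p: secrets.token_bytes(16) for p in parties}

# Phase 1: each party publishes only a hash of its secret number.
published = {p: commitment(s) for p, s in party_secrets.items()}

# Phase 2: secrets are revealed and checked against the commitments,
# so nobody can change their number after seeing the others'.
for p, s in party_secrets.items():
    assert commitment(s) == published[p], f"{p} changed their secret"

# Combine all secrets; the result is unpredictable to any single party.
digest = hashlib.sha256(b"".join(party_secrets[p] for p in sorted(parties))).digest()
loser = parties[int.from_bytes(digest, "big") % len(parties)]
print("short straw:", loser)
```

The neutrality here is a property of the mechanism itself: every participant can verify the draw, whatever their interests.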

Getting back to telecommunications: Almon Strowger reputedly devised the first successful automated telephone exchange because he suspected the local manual switchboard operators of having been bribed to favour his competition. Strowger’s switch was neutral in a useful sense. It wasn’t neutral in every sense (for example, towards the switchboard operators whom it put out of work), but it provided a useful guarantee. And it did so because the technology of the time wasn’t capable of being biased in this way. But since it now is, we are faced with the question of whether we need to enforce this form of neutrality in law.

But when is it necessary to do so? If you sign up for internet service, you are signing up to the terms and conditions offered to you. So why is it not sufficient for the free market to sort this out? If people are willing to sign up to terms under which the carrier does act as a gatekeeper, should government intervene to prevent them?

This is certainly an argument made by the carriers. Another is that, given the extraordinary growth of internet traffic driven by the public’s consumption, financing the necessary investment requires charging internet companies as well as their users (e.g. Litan).

One problem with the free-market argument is that many wireline internet providers have a monopoly. It is harder to say that users have accepted the terms when they must either take them or have no access at all. However, this makes it more problematic to argue that Net Neutrality should be applied to mobile carriers, where there usually is competition.

Another argument is that discrimination can be covert, or at least unannounced. To take the Strowger example, the operator did not have to say “my brother-in-law is a much better undertaker, wouldn’t you like to call him instead?” Rather, she could simply connect the caller to her preferred undertaker, as if by mistake. If users are unaware how much discrimination is occurring, how can the market remove it?
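
A crude simulation suggests why such covert discrimination is hard for an individual user to spot. The latency figures and the size of the slowdown below are entirely made up; the point is only that a small bias hides inside natural variance until you have many measurements.

```python
# Toy simulation: a carrier covertly adds a small delay to one service.
# All numbers are invented for illustration.
import random
import statistics

random.seed(0)
TRUE_PENALTY_MS = 15.0  # hypothetical covert slowdown

def measured_latency_ms(throttled: bool) -> float:
    base = random.gauss(100.0, 30.0)  # ordinary network jitter
    return base + (TRUE_PENALTY_MS if throttled else 0.0)

for n in (10, 10_000):
    normal = [measured_latency_ms(False) for _ in range(n)]
    slowed = [measured_latency_ms(True) for _ in range(n)]
    gap = statistics.mean(slowed) - statistics.mean(normal)
    print(f"{n:6d} samples: apparent gap {gap:5.1f} ms (true gap {TRUE_PENALTY_MS} ms)")
```

With a handful of measurements the bias is indistinguishable from jitter; only aggregated measurement, which individual customers rarely perform, reveals it.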

Litan’s argument seems confused to me. It assumes that Net Neutrality is the same as requiring flat-fee access, which it isn’t. It also assumes, incorrectly, that ‘service’ companies such as Google don’t pay for internet access. (A full discussion here would require a description of peering, but I don’t have space.)

There are other difficulties with Net Neutrality. Why does it only apply to carriers? The modern internet has many more intermediaries above the level of the carriers of raw bits. Google, Facebook, Skype, and the like arguably have exactly the same opportunity to discriminate, if not more. Indeed, Google has a severe conflict of interest, because its value to users lies in providing unbiased search results, while its revenue comes from paid advertisements. However, Morozov argues that it is pushing the net neutrality concept too far to apply it to the likes of Google and Apple: although they provide messaging services, they are too different for the old rules to apply. It may be that we have to move from discussing whether their actions are neutral to whether we have influence over them commensurate with their influence over us. Or maybe we have to devise some new guarantee that they ought to provide.

Nevertheless, the original Net Neutrality is still a live issue today. Just as it is ‘always profitable to add chalk to flour’, there will always be pressure to weaken the rules. So that’s why you should care.