Robot Insurance

The EU is not really giving robots human rights, but some of its proposals are cause for concern. James Vincent at The Verge provides a sensible commentary that corrects the rather alarming headlines generated by the European Parliament’s recently issued draft report. Actually there are several reasons to keep calm. It’s only a draft, it’s only a report, it’s only the Parliament. It is said that in the early days the Eurocrats had to quickly suppress an explanatory leaflet which described the European Economic Community as a bus. The Commission was the engine, the Council was the driver; and the Parliament… was a passenger. Things have moved on since those days, but there’s still some truth in that metaphor.

A second major reason to stay calm, according to Vincent, is that the report (quite short and readable, by the way, though rather bitty) doesn’t really propose to treat robots as human beings; it mainly addresses the question of liability for the acts of autonomous robots. The sort of personhood it considers is more like the legal personhood of corporations. That’s true, although the report does invite trouble by cursorily raising the question of whether robots can be natural persons. Some other parts read strangely to me, perhaps because the report is trying to cover a lot of ground very quickly. At one point it seems to say that Asimov’s laws of robotics should currently apply to the designers and creators of robots; I suspect the thinking behind that somewhat opaque idea (A designer must obey orders given it by human beings except where such orders would conflict with the First Law?) has not been fully fleshed out in the text.

What about liability? The problem here is that if a robot damages property, turns on its fellow robots, or harms a human being, the manufacturer and the operator might disclaim responsibility because it was the robot that made the decision, or at least because the robot’s behaviour was not reasonably predictable. I think predictability is the key point: the report has a rather unsatisfactory stab at defining smart autonomous robots, but for present purposes we don’t need anything too philosophical – it’s enough that we can’t tell in advance exactly what the machine will do.

I don’t see a very strong analogy with corporate personhood. In that case the problem is the plethora of agents: it simply isn’t practical to sue everyone involved in a large corporate enterprise, or even to apportion responsibility among them. It’s far simpler to have a single corporate entity that can be held liable (and can also hold the assets of the enterprise, which it may need in order to pay compensation). In that context the new corporate legal person simplifies the position, whereas with robots, adding a machine person complicates matters. Moreover, for the robot’s liability to be useful you would have to allow it to hold property with which it could meet any liabilities. I don’t think anyone is currently suggesting that Roombas should come with some kind of dowry.

Note, however, that corporate personhood has another aspect; besides providing an entity to hold assets and sue or be sued, it typically limits the liability of the parties. I am not a lawyer, but as I understand it, if several people launch a joint enterprise they are all liable for its obligations; if they create a corporation to run the enterprise, then liability is essentially limited to the assets held by the corporation. This might seem like a sneaky way of avoiding responsibility; would we want there to be a similar get-out for robots? Let’s come back to that.

It seems to me that the basic solution to the robot liability problem is not to introduce another person, but to apply strict liability, an existing legal concept which makes you responsible for your product even if you could not have foreseen the consequences of using it in a particular case. The report does acknowledge this principle. In practice it seems to me that liability would partly be governed by the contractual relationship between robot supplier and user: the supplier would specify what could be expected given correct use and reasonable parameters – if you used your robot in ways that were explicitly forbidden in that contract, then liability might pass to you.

Basically, though, that approach leaves responsibility with the robot’s builder or supplier, which seems to be in line with what the report mainly advocates. In fact (and this is where things begin to get a bit questionable) the report proposes a scheme whereby all robots would be registered and the supplier would be obliged to take out insurance to cover potential liability. An analogy with car insurance is suggested.

I don’t think that’s right. Car insurance is compulsory mainly because, without it, individual car owners might not be able to pay for the damage they do; making third-party insurance obligatory means the money will always be there. By contrast, I think we can assume that the corporations that build and supply robots will, one way or another, usually have the means to pay for individual incidents, so an insurance scheme is redundant. It might be relevant only where the potential liability was outstandingly large.

There’s another issue here, though. If we need to register our robots and pay large insurance premiums, that imposes a real burden, and a significant number of robot projects will not go ahead. Suppose, hypothetically, that we have robots that perform crucial work in nuclear reactors. The robots themselves are not that costly, but the potential liabilities if anything goes wrong are huge. The net result might be that nobody can finance the construction of these robots even though their existence would be hugely beneficial; in principle, the lack of robots might even stop certain kinds of plant from ever being built.

So the insurance scheme looks like a worrying potential block on European robotics; but remember we also said that corporate personhood allows some limitation of liability. That might seem like a cheat, but another way of looking at it is as a solution to the same kind of problem: if investors had to accept unlimited personal liability, some kinds of valuable enterprise would simply never be viable. Limiting liability allows ventures whose potential downsides would otherwise be too punishing for the individuals involved. Perhaps, then, there actually is an analogy here, and we ought to think about allowing some limitation of liability in the case of autonomous robots – otherwise some useful machines may never be economically viable.

Anyway, I’m not a lawyer and I’m not an economist, but I see some danger that an EU regime based on this report – with registration, possibly licensing, and mandatory insurance – could significantly inhibit European robotics.

Platforms for the gravy train

The European Human Brain Project seems to be running into problems. This Guardian report notes that an open letter of protest has been published by 170 unhappy neuroscientists. They are seeking to influence and extend a review that is due, hoping they can get a change of direction. I don’t know a great deal about the relevant EU bureaucracy, but I should think the letter-writers’ chances of success are small, not least because in Henry Markram they’re up against a project leader who is determined, resourceful, and not lacking support of his own. There’s a response to the letter here.

It is a little hard to work out exactly what the disagreement is about; the Guardian seems to smoosh together the current objections of former insiders with the criticisms of those who thought the project was radically premature in the first place. I find myself trying to work out what the protestors want, from Markram’s disparaging remarks about them, rather in the way we have to reconstruct some ancient heresies from the rebuttals of the authorities, the only place where details survive.

We’re told the disagreement is between those who study behaviour at a high level and the project leaders who want to build simulations from the bottom up. In particular, some cognitive neuroscience projects have been ‘demoted’ to partner status. People say the project has been turned into a technology project; Markram says it always was one. He suggests that piling up more data is useless, that he is instead running an ICT project which will provide a platform for integrating the data – and that it’s all coming out of an ICT budget anyway.

We naive outsiders had picked up the impression that the project had a single clear goal: a working simulation of a whole human brain. That is sort of still there, but reading the response it seems to be a pretty distant aspiration. Apparently a mouse brain is going to be done first, but even that is a way off; it’s all about the platforms. Earlier documents suggest there will actually be six platforms, only one of which is about brain simulation; the others are neuroinformatics, high performance computing, medical informatics, neuromorphic computing, and neurorobotics – fascinating subjects. The implicit suggestion is that this kind of science can’t be done properly just by working in labs and publishing papers; it requires advanced platforms in which research can be integrated.

Really? Speaking as a professional bureaucrat myself, I have to say frankly that that sounds uncommonly like the high-grade bollocks emitted by a project leader who has more money than he knows what to do with. The EU in particular is all about establishing unwanted frameworks and common platforms which lie dead in drawers forever after. If people want to share findings, publishing papers is fine (alright, not flawless). If it’s about doing actual research, having all the projects captured by a common platform which might embody common errors and common weaknesses doesn’t sound like a good idea at all. My brain doesn’t know, but my gut says the platforms won’t be much use.

Let’s be honest, I don’t really know what’s going on, but if one were cynical one might suppose that the success of the Human Genome Project made the authorities open to other grand projects, and one on the brain hit the spot. The problem is that we knew what a map of the genome would be like, and we pretty much knew it could be done and how. We don’t have a similarly clear idea relating to the brain. However, the concept was appealing enough to attract a big pot of money, both in the EU and then in the US (an even bigger pot). The people who got control of these pots cannot deliver anything like the map of the human genome, but they can buy in the support of fund-hungry researchers by disbursing some of the gold, while keeping the politicians and bureaucrats happy by wrapping everything in the aforementioned bollocks. The authors of the protest letter perhaps ought to be criticising the whole idea, but really they’re just upset about being left out. The deeper sceptics who always said the project was premature – though they may have thought they were talking about brain simulation, not a set of integrative platforms – were probably right; but there’s no money in that.

Grand projects like this are probably rarely the best way to direct research funding, but they do attract it. Maybe something good somewhere will accidentally get the help it needs; meanwhile we’ll be getting some really great European platforms.