Split Brain not reproducible

Classic ‘Split Brain’ experiments by Sperry and Gazzaniga could not be reproduced, according to Yair Pinto of the University of Amsterdam. Has the crisis of reproducibility claimed an extraordinary new victim?

The original experiments studied patients who had undergone surgery to disconnect the two halves of their cerebrum by cutting the corpus callosum and associated commissures. This was a last-ditch method of controlling epilepsy, and it worked. The patients’ symptoms improved and they did not generally suffer any cognitive deficits. They felt normal and were able to continue their ordinary lives.

However, under conditions that fed different information to the two halves of the brain, something strange showed up. Each half reported only its own data, unaware of what the other half was reporting. The effect is clearly demonstrated in this video with Gazzaniga himself…

On the basis of these remarkable results, it has been widely assumed that these patients have two consciousnesses, or two streams of consciousness, operating separately in each hemisphere but generally in such harmony they don’t notice each other. Now, though, performing similar experiments, Pinto and colleagues unexpectedly failed to get the same results. Is it conceivable that Sperry and Gazzaniga’s patients were to some degree playing along with the Doc, giving him the results he evidently wanted? It seems more likely that the effects are just different in different subjects for some reason. In fact, I believe the details of the surgery as well as the pre-existing condition of the patients vary. Recently the tendency has been towards less radical surgery and new drugs; in fact there may not be any more split-brain patients in future.

I don’t think that accounts for Pinto’s results, though, and they certainly raise interesting problems. In fact the interpretation of split-brain experiments has always been disputed. Although in experimental conditions the subjects seem divided, they don’t feel like two people and generally – not invariably – behave normally outside the lab. We do not have cases where the left hand and right hand write separate personal accounts, nor do these patients report any everyday sense of being two people.

Various people have offered interpretations which preserve the essential unity of consciousness, including Michael Tye and Charles E. Marks. One way of looking at it is to point out how unusual the experimental conditions are. Perhaps these conditions (and occasionally others that arise by chance) temporarily induce a bifurcation in an essentially united consciousness. Incidentally, so far as I know no-one has ever tried to repeat the experiments and get the same effects in people with an intact corpus callosum; maybe now and then it would work?

More recently Tim Bayne has proposed a switching model, in which a single consciousness is supported by different physical bits of brain, moving from one physical basis to another without sacrificing its essential unity.

There is, I think, a degree of neurological snobbery about the whole issue, inasmuch as it takes for granted that consciousness is situated in the cerebrum and that therefore dividing the cerebrum can be expected to divide consciousness. Descartes thought the essential unity of consciousness meant it could not reside in parts of the brain which even in normal people are extensively bifurcated into two hemispheres (actually into a number of quite distinct lobes). We need not follow him in plumping for the pineal gland as the seat of the soul (and we know that the cerebellum, for example, can be removed without destroying consciousness). But what about the lowly brain stem? Lizards make do with a brain that in evolutionary terms is little more than our brain stem, yet I’m inclined to believe they have some sense of self, and though they don’t have human cogitation, they certainly have consciousness in at least the basic sense. Perhaps we should see consciousness as situated as much down there as up in the cortex. Then we might interpret split-brain patients as having a single consciousness which has some particular lateralised difficulties in pulling together the contributions of its fancy cortical operations.

It might be that different people have different wiring patterns between lower and higher brain that either facilitate or suppress the split-brain effects; but that wouldn’t altogether explain why Pinto’s results are so different.

Secrets still

Secrets of the Mind is a lively IAI discussion of consciousness. Iain McGilchrist speaks intriguingly of matter being a phase of consciousness, and suggests the brain might simply be an expert transducer. Nicholas Humphrey, who seems to irritate McGilchrist quite a lot, thinks it’s all an illusion. As McGilchrist says, illusions seem to require consciousness, so there’s a problem there. I also wonder what it means to say that pain is an illusion. If I say a fairy palace is an illusion, I mean there is no real palace out there, just something in my mind, but if I say a pain is illusory, what thing out there am I denying the reality of? Roger Penrose unfortunately talks mainly about his (in my view) unappealing theory of microtubules rather than his more interesting argument against computationalism. His own description here suggests he settled on quantum mechanics just because it seemed the most likely place to find non-computational processes, which doesn’t seem a promising strategy to me.

Ultimately those secrets remain unrevealed; McGilchrist (and Penrose in respect of the Hard Problem) seem to think they probably always will.

Robot Insurance

The EU is not really giving robots human rights; but some of its proposals are cause for concern. James Vincent on the Verge provides a sensible commentary which corrects the rather alarming headlines generated by the European Parliament draft report issued recently. Actually there are several reasons to keep calm. It’s only a draft, it’s only a report, it’s only the Parliament. It is said that in the early days the Eurocrats had to quickly suppress an explanatory leaflet which described the European Economic Community as a bus. The Commission was the engine, the Council was the driver; and the Parliament… was a passenger. Things have moved on since those days, but there’s still some truth in that metaphor.

A second major reason to stay calm, according to Vincent, is that the report (quite short and readable, by the way, though rather bitty) doesn’t really propose to treat robots as human beings; it mainly addresses the question of liability for the acts of autonomous robots. The sort of personhood it considers is more like the legal personhood of corporations. That’s true, although the report does invite trouble by cursorily raising the question of whether robots can be natural persons. Some other parts read strangely to me, perhaps because the report is trying to cover a lot of ground very quickly. At one point it seems to say that Asimov’s laws of robotics should currently apply to the designers and creators of robots; I suspect the thinking behind that somewhat opaque idea (A designer must obey orders given it by human beings except where such orders would conflict with the First Law?) has not been fully fleshed out in the text.

What about liability? The problem here is that if a robot does damage to property, turns on its fellow robots (pictured) or harms a human being, the manufacturer and the operator might disclaim responsibility because it was the robot that made the decision, or at least, because the robot’s behaviour was not reasonably predictable (I think predictability is the key point: the report has a rather unsatisfactory stab at defining smart autonomous robots, but for present purposes we don’t need anything too philosophical – it’s enough if we can’t tell in advance exactly what the machine will do).

I don’t see a very strong analogy with corporate personhood. In that case the problem is the plethora of agents; it simply isn’t practical to sue everyone involved in a large corporate enterprise, or even assign responsibility. It’s far simpler if you have a single corporate entity that can be held liable (and can also hold the assets of the enterprise, which it may need to pay compensation, etc). In that context the new corporate legal person simplifies the position, whereas with robots, adding a machine person complicates matters. Moreover, in order for the robot’s liability to be useful you would have to allow it to hold property which it could use to fund any liabilities. I don’t think anyone is currently thinking that roombas should come with some kind of dowry.

Note, however, that corporate personhood has another aspect; besides providing an entity to hold assets and sue or be sued, it typically limits the liability of the parties. I am not a lawyer, but as I understand it, if several people launch a joint enterprise they are all liable for its obligations; if they create a corporation to run the enterprise, then liability is essentially limited to the assets held by the corporation. This might seem like a sneaky way of avoiding responsibility; would we want there to be a similar get-out for robots? Let’s come back to that.

It seems to me that the basic solution to the robot liability problem is not to introduce another person, but to apply strict liability, an existing legal concept which makes you responsible for your product even if you could not have foreseen the consequences of using it in a particular case. The report does acknowledge this principle. In practice it seems to me that liability would partly be governed by the contractual relationship between robot supplier and user: the supplier would specify what could be expected given correct use and reasonable parameters – if you used your robot in ways that were explicitly forbidden in that contract, then liability might pass to you.

Basically, though, that approach leaves responsibility with the robot’s builder or supplier, which seems to be in line with what the report mainly advocates. In fact (and this is where things begin to get a bit questionable) the report advocates a scheme whereby all robots would be registered and the supplier would be obliged to take out insurance to cover potential liability. An analogy with car insurance is suggested.

I don’t think that’s right. Car insurance is a requirement mainly because in the absence of insurance, car owners might not be able to pay for the damage they do. Making third-party insurance obligatory means that the money will always be there. In general I think we can assume by contrast that robot corporations, one way or another, will usually have the means to pay for individual incidents, so an insurance scheme is redundant. It might only be relevant where the potential liability was outstandingly large.

The thing is, there’s another issue here. If we need to register our robots and pay large insurance premiums, that imposes a real burden and a significant number of robot projects will not go ahead. Suppose, hypothetically, we have robots that perform crucial work in nuclear reactors. The robots are not that costly, but the potential liabilities if anything goes wrong are huge. The net result might be that nobody can finance the construction of these robots even though their existence would be hugely beneficial; in principle, the lack of robots might even stop certain kinds of plant from ever being built.

So the insurance scheme looks like a worrying potential block on European robotics; but remember we also said that corporate responsibility allows some limitation of liability. That might seem like a cheat, but another way of looking at it is to see it as a solution to the same kind of problem: if investors had to accept unlimited personal liability, there are some kinds of valuable enterprise that would just never be viable. Limiting liability allows ventures that otherwise would have potential downsides too punishing for the individuals involved. Perhaps, then, there actually is an analogy here, and we ought to think about allowing some limitation of liability in the case of autonomous robots? Otherwise some useful machines may never be economically viable.

Anyway, I’m not a lawyer, and I’m not an economist, but I see some danger that an EU regime based on this report with registration, possibly licensing, and mandatory insurance, could significantly inhibit European robotics.

What is consciousness good for?

Further to the question of conscious vs non-conscious action, here’s a recent RSA video presenting some evidence.

Nicholas Shea presents, with Barry Smith riding shotgun. There’s a mention of one piece of research also cited by Di Nucci: preventing expert golfers from concentrating consciously on their shot actually improves their performance (it does the opposite for non-experts). There are two pieces of audience participation; one shows that subliminal prompts can (slightly) affect behaviour; the other shows that time to think and discuss can help where explicit reasoning is involved (though it doesn’t seem to help the RSA audience much).

Perhaps in the end consciousness is not essentially private after all, but social and co-operative?

Sign of the times to see two philosophers unashamedly dabbling in experiments. I think the RSA also has to win some kind of prize in the hotly-contested ‘unconvincing brain picture’ category for using purple and yellow cauliflower.

Disbelieving alieving

Did my aliefs make me do that? Ezio Di Nucci thinks we need a different way of explaining automatic behaviour. By automatic, he means such things as habits and ‘primed’ or nudged behaviour; quite a lot of what we do is not the consequence of a consciously pondered decision; yet it manages to be complex and usually sort of appropriate.

Di Nucci quotes a number of examples, but the two he uses most are popcorn and rudeness. Popcorn is an example of a habit. People who were in the habit of eating a lot of popcorn at the cinema still ate the same amount even when they were given stale popcorn. People who did not have the habit ate significantly less if it was stale; as did people who had the habit if they were given it outside the context of a cinema visit (showing that they did notice the difference; it wasn’t just that they didn’t care about the quality of the popcorn).

Rudeness is an example of ‘primed’ behaviour. Subjects were exposed to lists of words which exemplified either rudeness or courtesy; those who had been “primed” with rude words went on to interrupt an interviewer more often. There are many examples of this kind of priming effect, but there is now a problem, as Di Nucci notes: many of these experiments have been badly hit by the recent wave of reproducibility problems in psychology. Things are bad enough that some might advocate setting the whole topic of priming aside until the underlying research has been more firmly established. Di Nucci proceeds anyway; even if particular studies are invalidated, the principle probably captures something true about human psychology.

So how do we explain this kind of mindless behaviour? Di Nucci begins by examining what he characterises as a ‘traditional’ account given by action theory, particularly citing Donald Davidson. On this view an action is intentional if, described in a particular way, it corresponds with a ‘pro’ attitude in the agent towards actions with a particular property and the belief that the action has that property. For Di Nucci this comes down to the simple view that an action is intentional if it corresponds with a pre-existing intention.

On that basis, it doesn’t seem we can say that the primed subject interrupted unintentionally. It doesn’t really seem that we can say that the subject ate stale popcorn unintentionally either. We might try to say that they intended to eat popcorn and did not intend to eat stale popcorn; but since they clearly knew after the first mouthful that it was stale, this doesn’t really work. We’re left with no distinct account of automatic behaviour.

Di Nucci cites two other arguments. In another experiment, people primed to be helpful are more likely to pick up another person’s dropped pen; but if the pen is visibly leaking the priming effect is eliminated. The priming cannot be operating at the same level as the conscious reluctance to get ink on one’s fingers or the latter would not erase the former.

Another way to explain the automatic behaviour is to attribute it to false beliefs; but that would imply that the behaviour was to some degree irrational, and neither interrupting nor eating popcorn is actually irrational.

What, then, about aliefs? This is the very handy concept introduced by Tamar Szabo Gendler back in 2008; in essence aliefs are non-conscious beliefs. They explain why we feel nervous walking on a glass floor; we believe we’re safe, but alieve we’re about to fall. They may also explain unconscious bias and many other normal but irrational kinds of behaviour. I rather like this idea; since beliefs and desires are often treated as a pair in studies of intentionality, maybe we could have the additional idea of unconscious desires, or cesires; then we have the nicely alphabetic set of aliefs, beliefs, cesires and desires.

Aliefs look just right to deal with our problems. We might believe the popcorn is stale, but alieve that cinema popcorn is good. We might believe ourselves polite but alieve that interrupting is fine.

Di Nucci suggests three problems. First, he doesn’t think aliefs deal with the leaky pen, where the inclination to help disappears altogether, not partially or subject to some conflict. Second, he thinks aliefs end up looking like beliefs. He cites the case of George Clooney fans who are less willing to pay for George’s bandanna if it has been washed. Allegedly this is because of an irrational alief that a bandanna contains George’s essence; but the conscious belief that it has been washed interferes with this. If aliefs can interact with beliefs like this they must be similarly explicit and propositional, and so not really different from beliefs. To me this argument doesn’t carry much force because there seem to be lots of better ways we could account for the Clooney fans’ behaviour.

Third, he thinks again that irrationality is a problem; aliefs are supposed by Gendler to be arational, but the two regular examples of automatic behaviour seem rational enough.

I think aliefs work rather well; if anything they work too well. We can ask about beliefs and challenge them; aliefs are not accessible, and we are free to attribute any mad aliefs we like to anyone if they get our explanatory dirty work done. There’s perhaps too much of a “get out of jail free” card about that.

Anyway, if all that is to be rejected, what is the explanation? Di Nucci suggests that automatic behaviour simply fills in when we don’t want to waste attention on the detail. Specifically, he suggests it comes into play in “Buridan’s Ass” cases, where we are faced with choices between alternatives that are equally good or neutral, as often happens. It’s pointless to direct our attention to these meaningless choices, so they are left to whatever habits or priming we may be subject to.

That’s fine as far as it goes, but I wonder whether it doesn’t just retreat a bit; the question was really how well-formed actions come from non-conscious thought. Di Nucci seems in danger of telling us that automatic action is accounted for by the fact that it is automatic.

Fish Pain

Fish don’t feel pain, says Brian Key. How does he know? In the deep philosophical sense it remains a matter of some doubt whether other human beings really feel pain, and as Key notes, Nagel famously argued that we couldn’t know what it was like to be a bat at all, even though we have much more in common with them than with fish. But in practice we don’t really feel much doubt that humans with bodies and brains like ours do indeed have similar sensations, and we trust that their reports of pain are generally as reliable as our own. Key’s approach extends this kind of practical reasoning. He relies on human reports to identify the parts of the brain involved in feeling pain, and then looks for analogues in other animals.

Key’s review of the evidence is interesting; in brief he concludes that it is the cortex that ‘does’ pain; fish don’t have anything that corresponds with human cortex, or any other brain structure that plausibly carries out the same function. They have relatively hard-wired responses to help them escape physical damage, and they have a capacity to learn what to avoid, but they have no mechanism for actually feeling pain. It is really, he suggests, just anthropomorphism that sees simple avoidance behaviour as evidence of actual pain. Key is rightly stern about anthropomorphism, but I think he could have acknowledged the opposite danger of speciesism. The wide eyes and open mouths of fish, their rigid faces and inability to gesture or scream, incline us to see them as stupid, cold, and unfeeling in a way which may not be properly objective.

Still, a careful examination of fish behaviour is a perfectly valid supplementary approach, and Key buttresses his case by noting that pain usually suppresses normal behaviour. Drilling a hole in a human’s skull tends to inhibit locomotion and social activity, but apparently doing the same thing to fish does not stop them going ahead with normal foraging and mating behaviour as though nothing had happened. Hard to believe, surely, that they are in terrible pain but getting on with a dancing display anyway?

I think Key makes a convincing case that fish don’t feel what we do, but there is a small danger of begging the question if we define pain in a way that makes it dependent on human-style consciousness to begin with. The phenomenology really needs clarification, but defining pain, other than by demonstration, is peculiarly difficult. It is almost by definition the thing we want to avoid feeling, yet we can feel pain without being bothered by it, and we can have feelings we desperately want to avoid which are, however, not pain. Pain may be a tiny twinge accompanying a reflex, an attention-grabbing surge, or something we hold in mind and explore (Dennett, I think, says somewhere that he had been told that examining pain introspectively was one way to make it bearable. On his next dentist visit, he tried it out and found that although the method worked, the effort and boredom involved in continuous close attention to the detailed qualities of his pain was such that he eventually preferred straightforward hurting.) Humans certainly understand pain and can opt to suffer it voluntarily in ways that other creatures cannot; whether on balance this higher awareness makes our pain more or less bearable is a difficult question in itself. We might claim that imagination and fear magnify our suffering, but being to some degree aware and in control can also put us in a better position than a panicking dog that cannot understand what is happening to it.

Key leans quite heavily on reportable pain; there are obvious reasons for that, but it could be that doing so skews him towards humanity and towards the cortex, which is surely deeply involved in considering and reporting. He dismisses some evidence that pain can occur without a cortex and must therefore arise in the brain stem. His objections seem reasonable, but surely it would be odd if nothing were going on in the brain stem, that ‘old brain’ we have inherited through evolution, even if it’s only some semi-automatic avoidance stuff. The danger is that we might be paying attention to the reportable pain dealt with by the ‘talky’ part of our minds while another kind is going on elsewhere. We know from such phenomena as blindsight that we can unconsciously ‘see’ things; could we not also have unconscious pain going on in another part of the brain?

That raises another important question: does it matter? Is unconscious or forgotten pain worth considering – would fish pain be negligible even if it exists? Pain is, more or less, the feeling we all want to avoid, so in one way its ethical significance is obvious. But couldn’t those automatic damage avoidance behaviours have some ethical significance too? Isn’t damage sort of ethically charged in itself? Key rejects the argument that we should give fish the ‘benefit of the doubt’, but there is a slightly different argument that being indifferent to apparent suffering makes us worse people even if strictly speaking no pains are being felt.

Consider a small boy with a robot dog; the toy has been programmed to give displays of affection and enjoyment, but if mistreated it also performs an imitation of pain and distress. Now suppose the boy never plays nicely, but obsessively ‘tortures’ the robot, trying to make it yelp and whine as loudly as possible. Wouldn’t his parents feel some concern? Wouldn’t they tell him that what he was doing was wrong, even though the robot had no real feelings whatever? Wouldn’t that be a little more than simple anthropomorphism?

Perhaps we need a bigger vocabulary; ‘pain’ is doing an awful lot of work in these discussions.

Parfit

Derek Parfit, who died recently, in two videos from an old TV series…

Parfit was known for his attempts in Reasons and Persons to gently dilute our sense of self using thought experiments about Star Trek style transporters and turning himself gradually into Greta Garbo. I think that by assuming the brain could in principle be scanned and 3D printed in a fairly simple way, these generally underestimated the fantastic intricacy of the brain and begged questions about the importance of its functional organisation and history; this in turn led Parfit to give too little attention to the possibility that perhaps we really are just one-off physical entities. But Parfit’s arguments have been influential, perhaps partly because for Parfit they grounded an attractively empathetic and unselfish moral outlook, making him less worried about himself and more worried about others. They also harmonised well with Buddhist thought, and continue to have a strong appeal to some.

Myself I lean the other way; I think virtue comes from proper pride, and that nothing much can be expected from someone who considers themselves more or less a nonentity to begin with. To me a weaker sense of self could be expected to lead to moral indifference; but the evidence is not at all in my favour so far as Parfit and his followers are concerned.

In fact Parfit went on to mount a strong defence of the idea of objective moral truth in another notable book, On What Matters, where he tried to reconcile a range of ethical theories, including an attempt to bring Kant and consequentialism into agreement. To me this is a congenial project which Parfit approached in a sensible way, but it seems to represent an evolution of his views. Here he wanted to be a friend to Utilitarianism, brokering a statesmanlike peace with its oldest enemy; in his earlier work he had offered a telling criticism in his ‘Repugnant Conclusion’:

The Repugnant Conclusion: For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better, even though its members have lives that are barely worth living.

This is in effect a criticism of utilitarian arithmetic; trillions of just tolerable lives can produce a sum of happiness greater than a few much better ones, yet the idea we should prefer the former is repugnant. I’m not sure this conclusion is necessarily quite as repugnant as Parfit thought. Suppose we have a world where the trillions and the few are together, with the trillions living intolerable lives and just about to die; but the happy few could lift them to survival and a minimally acceptable life if they would descend to the same level; would the elite’s agreement to share really be repugnant?
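Just to make that arithmetic explicit, here is a minimal sketch of the total-utility comparison; the population sizes and well-being scores are illustrative assumptions of mine, not Parfit’s own figures.

```python
# Illustrative total-utility comparison behind the Repugnant Conclusion.
# The numbers are hypothetical; classical (total) utilitarianism simply
# sums well-being across everyone who exists.

def total_utility(population: int, wellbeing_per_person: float) -> float:
    """Total utility on a simple additive (total-sum) view."""
    return population * wellbeing_per_person

# World A: ten billion people, each with a very high quality of life.
world_a = total_utility(10_000_000_000, 100.0)

# World Z: a vastly larger population whose lives are barely worth living.
world_z = total_utility(10_000_000_000_000, 1.0)

print(f"World A total: {world_a:.0f}")   # 1,000,000,000,000
print(f"World Z total: {world_z:.0f}")   # 10,000,000,000,000
print("Total view prefers:", "Z" if world_z > world_a else "A")
```

On the simple additive view the enormous population Z comes out ahead, which is exactly the result Parfit found repugnant.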

Actually our feelings about all this are unavoidably contaminated by assumptions about the context. Utilitarianism is a highly abstract doctrine and we assume here that two one-off states of affairs can be compared; but in the real world our practical assessment of future consequences would dominate. We may, for example, feel that the bare survival option would in practice be unstable and eventually lead to everyone dying, while the ‘privileged few’ option has a better chance of building a long-term prosperous future.

Be that as it may, whichever way we read things this seems like a hit against consequentialism. The fact that Parfit still wanted that theory as part of his grand unifying Triple Theory of ethics probably tells us something about the mild and kindly nature of the man, something that no doubt has contributed to the popularity of his ideas.

Irritating Robots

Machine learning and neurology; the perfect match?

Of course there is a bit of a connection already in that modern machine learning draws on approaches which were distantly inspired by the way networks of neurons seemed to do their thing. Now though, it’s argued in this interesting piece that machine learning might help us cope with the vast complexity of brain organisation. This complexity puts brain processes beyond human comprehension, it’s suggested, but machine learning might step in and decode things for us.
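As a rough illustration of what ‘decoding’ means in this context, here is a minimal sketch that trains an off-the-shelf classifier to map voxel-activity patterns to stimulus labels. The data are entirely synthetic and the dimensions invented for the example; nothing here reproduces the methods of the studies discussed below.

```python
# A minimal sketch of fMRI "decoding" with machine learning.
# Everything is synthetic and illustrative: real studies use preprocessed
# voxel time-series, careful cross-validation, and more elaborate models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500          # hypothetical scan dimensions
labels = rng.integers(0, 2, n_trials)  # e.g. 0 = one word category shown, 1 = another

# Fake voxel activity: noise plus a weak label-dependent signal, standing in
# for the blurry, low-resolution data that fMRI actually provides.
signal = np.outer(labels, rng.normal(size=n_voxels)) * 0.3
voxels = rng.normal(size=(n_trials, n_voxels)) + signal

decoder = LogisticRegression(max_iter=1000)
scores = cross_val_score(decoder, voxels, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f}")  # well above 0.5 if any signal survives
```

The point is only that the classifier, rather than the experimenter, finds whatever structure links activity to stimuli; whether any such structure survives at fMRI’s actual resolution is one of the worries raised below.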

It seems a neat idea, and a couple of noteworthy projects are mentioned: the ‘atlas’ which mapped words to particular areas of cortex, and an attempt to reconstruct seen faces from fMRI data alone (actually with rather mixed success, it seems). But there are surely a few problems too, as the piece acknowledges.

First, fMRI isn’t nearly good enough. Existing scanning techniques just don’t provide the neuron-by-neuron data that is probably required, and never will. It’s as though the only camera we had was permanently out of focus. Really good processing can do something with dodgy images, but if your lens was rubbish to start with, there are limits to what you can get. This really matters for neurology where it seems very likely that a lot of the important stuff really is in the detail. No matter how good machine learning is, it can’t do a proper job with impressionistic data.

We also don’t have large libraries of results from many different subjects. A lot of studies really just ‘decode’ activity in one context in one individual on one occasion. Now it can be argued that that’s the best we’ll ever be able to do, because brains do not get wired up in identical ways. One of the interesting results alluded to in the piece is that the word ‘poodle’ in the brain ‘lives’ near the word ‘dog’. But it’s hardly possible that there exists a fixed definite location in the brain reserved for the word ‘poodle’. Some people never encounter that concept, and can hardly have pre-allocated space for it. Did Neanderthals have a designated space for thinking about poodles that presumably was never used throughout the history of the species? Some people might learn of ‘poodle’ first as a hairstyle, before knowing its canine origin; others, brought up to hard work in their parents’ busy grooming parlour from an early age, might have as many words for poodle as the Eskimos were supposed to have for snow. Isn’t that going to affect the brain location where the word ends up? Moreover, what does it mean to say that the word ‘lives’ in a given place? We see activity in that location when the word is encountered, but how do we tell whether that is a response to the word, the concept of the word, the concept of poodles, poodles themselves, a particular known poodle, or any other of the family of poodle-related mental entities? Maybe these different items crop up in multiple different places?

Still, we’ll never know what can be done if we don’t try. One piquant aspect of this is that we might end up with machines that can understand brains, but can never fully explain them to us, both because the complexity is beyond us and because machine learning often works in inscrutable ways anyway. Maybe we can have a second level of machine that explains the first level machines to us – or a pair of machines that each explain the brain and can also explain each other, but not themselves?

It all opens the way for a new and much more irritating kind of robot. This one follows you around and explains you to people. For some of us, some of the time, that would be quite helpful. But it would need some careful constraints, and the fact that it was basically always right about you could become very annoying. You don’t want a robot that says “nah, he doesn’t really want that, he’s just being polite”, or “actually, he’s just not that into you”, let alone “ignore him; he thinks he understands hermeneutics, but actually what he’s got in mind is a garbled memory of something else about Derrida he read once in a magazine”.

Happy New Year!