Archive for January, 2018

You may already have seen Jochen’s essay Four Verses from the Daodejing, an entry in this year’s FQXi competition. It’s a thought-provoking piece, so here are a few of the ones it provoked in me. In general I think it features a mix of alarming and sound reasoning which leads to a true yet perplexing conclusion.

In brief, Jochen suggests that we apprehend the world only through models; in fact our minds deal only with these models. Modelling and computation are in essence the same. However, the connection between model and world is non-computable (or we face an infinite regress). The connection is therefore opaque to our minds and inexpressible. Why not, then, identify it with that other inexpressible element of cognition, qualia? So qualia turn out to be the things that incomprehensibly link our mental models with the real world. When Mary sees red for the first time, she does learn a new, non-physical fact, namely what the connection between her mental model and real red is. (I’d have to say that as something she can’t understand or express, it’s a weird kind of knowledge, but so be it.)

I think to talk of modelling so generally is misleading, though Jochen’s definition is itself broadly framed, which means I can’t say he’s wrong. In his terms it seems anything that uses data about the structure and causal functioning of X to make predictions about its behaviour would be a model. If you look at it that way, it’s true that virtually all our cognition is modelling. But to me a model leads us to think of something more comprehensive and enduring than we ought. In my mind at least, it conjures up a sort of model village or homunculus, when what’s really going on is something more fragmentary and ephemeral, with the brain lashing up a ‘model’ of my going to the shop for bread just now, and then discarding it in favour of something different. I’d argue that we can’t have comprehensive all-purpose models of ourselves (or anything) because models only ever model features relevant to a particular purpose or set of circumstances. If a model reproduced all my features it would in fact be me (by Leibniz’ Law), and anyway the list of potentially relevant features goes on for ever.

The other thing I don’t like about liberal use of modelling is that it makes us vulnerable to the view that we only experience the model, not the world. People have often thought things like this, but to me it’s almost like the idea we never see distant planets, only telescope lenses.

Could qualia be the connection between model and world? It’s a clever idea, one of those that turn out on reflection not to be vulnerable to many of the counterarguments that first spring to mind. My main problem is that it doesn’t seem right phenomenologically. Arguments from one’s own perception of phenomenology are inherently weak, but then we are sort of relying on phenomenology for our belief (if any) in qualia in the first place. A red quale doesn’t seem like a connection, more like a property of the red thing; I’m not clear why or how I would be aware of this connection at all.

However, I think Jochen’s final conclusion is both poignant and broadly true. He suggests that models can have fundamental aspects, the ones that define their essential functions – but the world is not under a similar obligation. It follows that there are no fundamentals about the world as a whole.

I think that’s very likely true, and I’d make a very similar kind of argument in terms of explanation. There are no comprehensive explanations. Take a carrot. I can explain its nutritional and culinary properties, its biology, its metaphorical use as a motivator, its supposed status as the favourite foodstuff of rabbits, and lots of other aspects; but there is no total explanation that will account for every property I can come up with; in the end there is only the carrot. A demand for an explanation of the entire world is automatically a demand for just the kind of total explanation that cannot exist.

Although I believe this, I find it hard to accept; it leaves my mind with an unscratched itch. If we can’t explain the world, how can we assimilate it? Through contemplation? Perhaps that is what Laozi would have advocated. More likely he would have told us to get on with ordinary life. Stop thinking, and end your problems!



It’s not just that we don’t know how anaesthetics work – we don’t even know for sure that they work. Joshua Rothman’s review of a new book on the subject by Kate Cole-Adams quotes poignant stories of people on the operating table who may have been aware of what was going on. In some cases the chance remarks of medical staff seem to have worked almost like post-hypnotic suggestions: so perhaps all surgeons should loudly say that the patient is going to recover and feel better than ever, with new energy and confidence.

How is it that after all this time, we don’t know how anaesthetics work? As the piece aptly remarks, it’s about losing consciousness, and since we don’t know clearly what that is or how we come to have it, it’s no surprise that its suspension is also hard to understand. To add to the confusion, it seems that common anaesthetics paralyse plants, too. Surely it’s our nervous system anaesthetics mainly affect – but plants don’t even have a nervous system!

But come on, don’t we at least know that it really does work? Most of us have been through it, after all, and few have weird experiences; we just don’t feel the pain – or anything. The problem, as we’ve discussed before, is telling whether we don’t feel the pain, or whether we feel it but don’t remember it. This is an example of a philosophical problem that is far from being a purely academic matter.

It seems anaesthetics really do (at least) three different things. They paralyse the patient, making it easier to cut into them without adverse reactions, they remove conscious awareness or modulate it (it seems some drugs don’t stop you being aware of the pain, they just stop you caring about it somehow), and they stop the recording of memories, so you don’t recall the pain afterwards. Anaesthetists have a range of drugs to produce each of these effects. In many cases there is little doubt about their effectiveness. If a drug leaves you awake but feeling no pain, or if it simply leaves you with no memory, there’s not that much scope for argument. The problem arises when it comes to anaesthetics that are supposed to ‘knock you out’. The received wisdom is that they just blank out your awareness for a period, but as the review points out, there are some indications that instead they merely paralyse you and wipe your memory. The medical profession doesn’t have a good record of taking these issues very seriously; I’ve read that for years children were operated on after being given drugs that were known to do little more than paralyse them (hey, kids don’t feel pain, not really; next thing you’ll be telling me plants do…).

Actually, views about this are split; a considerable proportion of people take the view that if their memory is wiped, they don’t really care about having been in pain. It’s not a view I share (I’m an unashamed coward when it comes to pain), but it has some interesting implications. If we can make a painful operation OK by giving amnestics to remove all recollection, perhaps we should routinely do the same for victims of accidents. Or do doctors sometimes do that already…?