Is consciousness a matter of entropy in the brain? An intriguing paper by R. Guevara Erra, D. M. Mateos, R. Wennberg, and J.L. Perez Velazquez says

normal wakeful states are characterised by the greatest number of possible configurations of interactions between brain networks, representing highest entropy values.

What the researchers did, broadly, is identify networks in the brain that were operative at a given time, and then work out the number of possible configurations these networks were capable of. In general, conscious states were associated with high numbers of possible configurations – high entropy.

That makes me wrinkle my forehead a bit because it doesn’t fit well with my layman’s grasp of the concept of entropy. In my mind entropy is associated with low levels of available energy and an absence of large complex structure. Entropy always increases, but can decrease locally, as in the case of the complex structures of life, by paying for the decrease with a bigger increase elsewhere; typically by using up available energy. On this view, conscious states – and high levels of possible configurations – look like they ought to be low entropy; but evidently the reverse is actually the case. The researchers also used the Lempel-Ziv measure of complexity, one with strong links to information content, which is clearly an interesting angle in itself.
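Since the Lempel-Ziv measure essentially asks how compressible a (binarised) signal is, a toy illustration may help. This is a minimal dictionary-based phrase count – an LZ78-style cousin of the LZ76 parsing such studies typically use, and emphatically not the authors' actual implementation:

```python
def lz78_phrases(s: str) -> int:
    """Count phrases in a simple dictionary (LZ78-style) parse of s.

    Repetitive strings parse into few phrases, random ones into many,
    so the count tracks Lempel-Ziv complexity."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:  # new phrase: record it and restart
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # count any unfinished tail
```

Feeding in a long repetitive string gives a far smaller count than a random string of the same length, which is the sense in which wakeful, high-entropy signals come out as "complex".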

Of the nine subjects, three were epileptic, which allowed comparisons to be made with seizure states as well as those of waking and sleeping states. Interestingly, REM sleep showed relatively high entropy levels, which intuitively squares with the idea that dreaming resembles waking a little more than fully unconscious states – though I think the equation of REM sleep with dreaming is not now thought to be as perfect as it once seemed.

One acknowledged weakness in the research is that it was not possible to establish actual connection. The assumed networks were therefore based on synchronisation instead. However, synchronisation can arise without direct connection, and absence of synchronisation is not necessarily proof of the absence of connection.

Still, overall the results look good and the picture painted is intuitively plausible. Putting all talk of entropy and Lempel-Ziv aside, what we’re really saying is that conscious states fall in the middle of a notional spectrum: at one end of this spectrum is chaos, with neurons firing randomly; at the other we have them all firing simultaneously in indissoluble lockstep.

There is an obvious resemblance here to the Integrated Information Theory (IIT) which holds that consciousness arises once the quantity of information being integrated passes a value known as Phi. In fact, the authors of the current paper situate it explicitly within the context of earlier work which suggests that the general principle of natural phenomena is the maximisation of information transfer. The read-across from the new results into terms of information processing is quite clear. The authors do acknowledge IIT, but just barely; they may be understandably worried that their new work could end up interpreted as mere corroboration for IIT.

My main worry about both is that they are very likely true, but may not be particularly enlightening. As a rough analogy, we might discover that the running of an internal combustion engine correlates strongly with raised internal temperature states. The absence of these states proves to be a pretty good practical guide to whether the engine is running, and we’re tempted to conclude that raised temperature is the same as running. Actually, though, raising the temperature artificially does not make the engine run, and there is in fact a complex story about running in which raised temperatures are not really central. So it might be that high entropy is characteristic of conscious states without that telling us anything useful about how those states really work.

But I evidently don’t really get entropy, so I might easily be missing the true significance of all this.


  1. quentin says:

    > “But I evidently don’t really get entropy, so I might easily be missing the true significance of all this.”

    Your understanding of *thermodynamic* entropy is quite right actually. Information-theoretic entropy is just a generalisation of it that is applicable in different contexts. It can be applied to thermodynamics once one identifies micro-states with position/momentum of individual molecules in a physical system and macro-states with temperature, pressure, etc. of the whole system. Then the entropy of a macro-state corresponds to the logarithm of the number of possible micro-states for this macro-state, i.e. to what extent the macro-state is “probable”, and one recovers the traditional thermodynamic entropy. But from the information-theoretic point of view, micro-states need not be identified this way.

    Here micro-states are apparently defined in terms of brain signals and connections, so this is not thermodynamic entropy. It already assumes a lot from the start (that we’re interested in a neural network, not any physical system). There’s no contradiction with such systems having a low thermodynamic entropy, as compared to a gas in equilibrium for example.
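Quentin’s log-of-microstates point is easy to make concrete. Here is a toy sketch of my own, with ten two-state units standing in for whatever micro-level elements one chooses: the middling macrostate can be realised in the most ways, so it has the highest entropy.

```python
from math import comb, log2

N = 10  # ten two-state units standing in for micro-level elements

# Macrostate = "k units are in state 1"; its entropy is the log of
# the number of microstates that realise it.
for k in (0, 5, 10):
    micro = comb(N, k)
    print(k, micro, round(log2(micro), 2))
# prints: 0 1 0.0 / 5 252 7.98 / 10 1 0.0
```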

  2. Peter says:

    Thanks, Quentin!

  3. SelfAwarePatterns says:

    Peter, I think your last two paragraphs are spot on. The problem with these theories and studies looking at low level organization is that they ignore the significance of higher level organization in the brain. Reading these theories, you might think the brain was an undifferentiated blob of neurons that magically produce cognition, and our goal is to discover what generates the magic.

    In reality, the brain is a modular system, with various cortices and nuclei performing specific functions. Granted, being an evolved biological system, the modules don’t have the clean separation between them of an engineered system, leading to a lot of seemingly idiosyncratic oddities. But the occipital lobe generally handles vision, the somatosensory cortex handles touch sensations, the frontal lobe handles planning, the upper brainstem structures generate primal emotions, etc.

    Attempting to understand consciousness without trying to understand what these modules functionally contribute to it seems like trying to understand car movement without understanding cylinders, transmissions, fuel lines, and all the rest.

  4. Paul Torek says:

    Complexity can increase along with entropy. For example, this paper defines complexity as informational entropy times disequilibrium. There’s a nice graph on page 9.
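The entropy-times-disequilibrium idea (the LMC complexity measure) is easy to check numerically. A toy sketch of my own for a two-outcome distribution – not taken from the paper Paul cites – shows that complexity vanishes at both extremes and lives in between:

```python
from math import log2

def complexity(p: float) -> float:
    """Entropy-times-disequilibrium ("LMC") complexity for a
    two-outcome distribution (p, 1 - p)."""
    q = 1 - p
    h = sum(-x * log2(x) for x in (p, q) if x > 0)  # Shannon entropy (bits)
    d = (p - 0.5) ** 2 + (q - 0.5) ** 2             # distance from uniform
    return h * d

print(complexity(0.5))      # 0.0 -- maximum entropy but zero disequilibrium
print(complexity(1.0))      # 0.0 -- order, but zero entropy
print(complexity(0.9) > 0)  # True -- complexity lives in between
```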

  5. Jochen says:

    In a sense, the thermodynamic entropy of a system is the amount of information we could gain by knowing the microstate exactly—so, a high thermodynamic entropy means that there are very many microstates that lead to the same macrostate, as in gas molecule configurations leading to the same macroscopic volume of gas. So thermodynamic entropy is ‘hidden’ information.

    We can think of information-theoretic (Shannon) entropy in a similar way: consider a random string of bits, e.g. 010011… The ‘macrostate’ is then just ‘random bitstring of length n’, i.e. discarding the information about the individual bits, leaving only the gross, coarse-grained properties. There are then 2^n ways of creating that macrostate by some microstate; and if we were to learn the microstate, we would acquire log(2^n) = n bits of information.

    It’s in this sense that random objects, with many possible configurations, carry the highest entropy, and the highest information. Another way to think of this is that given a random bit string, you have no way of predicting the next bit, even given complete knowledge of all preceding bits—so that bit carries a maximum of new information (in fact, one bit). If we instead had a less random distribution of bits, the information gained per bit would be less—up to the extreme case, where we know for certain that the next bit will be 1, and hence, don’t get any new information out of that.
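Jochen’s per-bit point can be made concrete with the binary entropy function. A small sketch (my illustration, not from any of the papers under discussion):

```python
from math import log2

def bits_per_symbol(p_one: float) -> float:
    """Average information carried by each bit of a stream in which
    a 1 appears with probability p_one."""
    return sum(-p * log2(p) for p in (p_one, 1 - p_one) if p > 0)

print(bits_per_symbol(0.5))  # 1.0 -- fully random: each bit is pure news
print(bits_per_symbol(1.0))  # 0.0 -- next bit is certain: no news at all
```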

    (It’s in fact often very useful to not work with the highest possible entropy, since in that case, if you lose a bit, it’s lost; whereas you can generally reconstruct the full message with a code that has some built-in redundancy, such as natural language—dropping a few letters here and there scarcely influences the readability of the message.)

    I haven’t read the paper, but one thing I’m curious about is that complexity usually doesn’t occur at a maximum of entropy, but rather, at intermediate values: think about two volumes of gas, separated at first, and then being allowed to mix. Entropy will increase up to the point of maximal intermixing, but we wouldn’t normally consider that a ‘complex’ state—it’s just a homogeneous mixture of two gasses, just as the outset was two separated volumes of gas, which is likewise very boring and simple to describe.

    Rather, complexity occurs when the two gasses are mixing—when there’s lots of eddies and swirls and whatnot, areas where there’s more of one gas, or more of the other. But these aren’t the states of maximum entropy. So I think that the entropy they consider probably isn’t straightforwardly just that of the brain-macrostate generated by neuron firing microstates…

  6. Jochen says:

    Also, perhaps to close the circle, you can use information you have about the system in order to perform work: think about a little box with a single particle in it. If it has some temperature, it will be bouncing around. If you now know precisely which half of the box the particle is in at some point in time (one bit of information), you can slide a piston into the other half (frictionlessly, that is, without needing to perform work), which will then, as the ‘gas’ expands—i.e. as the particle impacts on the piston—be driven out again, performing work.

    If, on the other hand, you don’t know in advance where the particle is, all you can do is insert the piston at random—but in half the cases, this will necessitate doing work ‘compressing’ the gas, confining the particle to one half of the box; this will exactly balance out the work you get in the other half of the cases. Consequently, knowing the microstate (left half vs. right half) allows you to extract work, while if the microstate is unknown, the entropy of the system is maximal (two possible microstates for the macrostate ‘particle in box’), and you can’t use the system to perform work.
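For reference, the maximum work Jochen’s one-bit demon can extract is the standard kT ln 2 per bit (the Szilard/Landauer figure). Plugging in room temperature, in a quick sketch of my own:

```python
from math import log

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K

# Szilard/Landauer bound: one bit of knowledge about which half of the
# box the particle is in lets the isothermal expansion extract kT ln 2.
work_per_bit = K_B * T * log(2)
print(f"{work_per_bit:.2e} J")  # 2.87e-21 J
```

A tiny amount of work per bit, but strictly nonzero – which is the sense in which knowledge is work.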

    So, knowledge might not be power, but it’s work; and work over time is power.

  7. Jayarava says:

    1. The second law of thermodynamics (entropy) only applies to closed systems. The brain is emphatically not a closed system. Brain networks are similarly emphatically not closed networks. Nor is our body, or Gaia (all life taken as a single complex system) or anything in between. Life is an open system at all levels.

    If we are not talking about a closed system then the second law does not apply. A brain is constantly being pumped full of low entropy energy in the form of glucose molecules and radiating it out as high entropy heat. Surfing the energy gradient is part of what allows complex structures to temporarily persist.

    2. Gaia is a very efficient way of increasing entropy. For every high energy (low entropy) photon of sunlight that falls on the planet, Gaia radiates 20 low energy (high entropy) photons into space. Our brains do this in miniature. People don’t seem to get this about life and entropy.

    3. If we consider the brain to be a closed system – if we cut off the blood and other fluid supplies – the *highest* entropy state will be for the brain to die and break down into component molecules. A dead brain has considerably higher entropy than a living brain, but holds considerably less information – so beware of information theorists also.

    Consider the universe. At 10^0 years (big bang) entropy is minimal and complexity is minimal. At 10^10 years (now) entropy is middling and complexity is maximal. At 10^100 years (heat death of the universe) entropy is maximal and complexity is back to minimal.

    Information is in fact correlated more closely to complexity than to order/entropy. Information is at a maximum in the middle of the entropy scale, where there is both complexity and order.

    4. The best researchers can do at present is measure the relative blood flow to different regions using MRI. Blood-flow to any region is never zero except in a stroke (or death). A stroke produces a rapid increase in local entropy because it creates a closed system of dying and dead brain cells, and dead neurons have higher entropy than living ones (and lower information). In a living brain all brain networks are active all the time.

    So what they’re really trying to say is that relatively increased blood flow can happen in more different ways than relatively normal or decreased blood flow. However, increased blood flow also increases heat production and thus contributes to higher entropy in the universe.

    5. What you seem to be describing is an interpretation of brain imaging data which largely *ignores* the entropy of the local system, and limits investigation to thresholds of blood-flow to regions as very roughly indicating “activity” in a network. It is then suggested that the number of possible ways of connecting regions is *analogous* to entropy.

    This method is certainly not connected to entropy as understood by a physicist, because it is an attempt to describe an *open system* and ignores the *energy* in the system. So at best “entropy” is a metaphor here. And the use of scientific terms as metaphors for other concepts is the hallmark of pseudo-science.

  8. David Duffy says:

    Dear Jayarava – this study actually looks at EEG (and MEG).

    There is a large literature, and there are commercial devices, using entropy measures of the EEG to assess level of consciousness in anaesthesia eg

    “All the entropy indices monotonically decreased as anesthesia deepened, then increased during recovery”
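For anyone curious what such an entropy index might look like, permutation entropy is one commonly used EEG measure. This is a minimal sketch of the general idea, not the algorithm of any particular monitor or of the study David quotes:

```python
from collections import Counter
from math import factorial, log2

def permutation_entropy(signal, order=3):
    """Normalised Shannon entropy of the ordinal patterns of length
    `order` in the signal: 0 = fully regular, 1 = maximally irregular."""
    patterns = Counter(
        tuple(sorted(range(order), key=lambda k: signal[i + k]))
        for i in range(len(signal) - order + 1)
    )
    total = sum(patterns.values())
    h = sum(-(c / total) * log2(c / total) for c in patterns.values())
    return h / log2(factorial(order))

print(permutation_entropy(list(range(100))))  # 0.0 -- a monotone ramp has one pattern
```

An irregular, wakeful-looking signal visits many ordinal patterns and scores near 1; a flat or rhythmic, anaesthetised-looking one scores near 0.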

  9. Peter says:

    Some interesting bits in the last part of this article about differences of emphasis between the researchers and thoughts about where they might go next.

  10. john davey says:

    This is not entropy in the proper sense – i.e. in the sense that heat causes a closed system of particles to occupy a larger number of states. This is not physics. This is a (typically meaningless) adaptation of information theory.

    Information theory is not a waste of time. It’s useful when discussing the utility of ciphers and the transmission of information. It has practical utility as an engineering tool for spooks, networking system techies and the like. As an analytical tool for the brain, though, it is a total waste of hot air. In fact this article is a demonstration of entropy – it gets more meaningless the more you read it.

    The brain is not an information processor, it is a meaning processor. Who cares how it transmits the symbols that represent meaning ? What matters is how it realises the meaning in the first place.


  11. Jayarava says:

    #8 David. The article you reference is clearly aware of the distinction I was making. It talks specifically about “Shannon entropy”, then it drops the Shannon part and just talks about “entropy”. But it is always talking about entropy in a *metaphorical* sense, i.e. something which is analogous to entropy, but which is not in fact *entropy* at all.

    As far as I can tell, Shannon’s use of the word “entropy”, in preference to his initial choice “uncertainty”, is still a matter of dispute amongst scientists. See for example: Thermodynamics ≠ Information Theory: Science’s Greatest Sokal Affair

    As far as EEG goes, it is even less precise than MRI! In terms of Peter’s last image of the internal combustion engine, we have to imagine that the experiment is conducted on an engine in place and the hood of the car is closed!

  12. David Duffy says:

    Dear Jayarava. EEG measurement of mental imagery is now sufficiently precise to encourage development of “mentally controlled” devices (mental imagery based brain-computer interfaces). EEG detection of externally commanded thought, e.g. mental arithmetic, in individuals with locked-in or minimally conscious states seems to be frequently reported as particularly easy to carry out. Perhaps this is just “signals traffic analysis” at the moment, but it looks like brain oscillations involved in particular mental tasks engage large regions all at once eg

    As to entropy, I have previously put up links at this website to papers which suggest there is no difference between informational entropy and thermodynamic entropy eg

    This is especially true of nonequilibrium systems doing useful work.

  13. David Duffy says:

    As to what useful work is (including useful mental work a la Friston), it is that which allows us to continue our dissipative lifestyle.

  14. Jayarava says:

    Anyone still confused about entropy in physics, but wanting an accessible explanation, might enjoy some videos made recently by Sean Carroll. I found these very informative and helpful.

    #12. David. I’m not saying that EEG has no resolution. I’m saying that it has low resolution and that it cannot give us information about activity below the top few millimetres of the cortex. As I understand it the resolution is still very much lower than MRI. We’re talking many cubic millimetres at best, which contain probably millions of neurons.

    The EEG article you link to seems to be talking about coordinated oscillations between large areas such as the entire pre-frontal cortex and the entire visual cortex. That’s lower resolution than the entropy-people are talking about.

    I said the relation between informational entropy and thermodynamic entropy is *disputed*. That is still the case, is it not?

  15. Alex says:

    No wonder neuroscience has problems. My fridge generates more entropy (heat) than my table. Therefore my fridge is more conscious than my table.
