Some deep and heavy philosophy in another IAI video: After the End of Truth. This one got a substantial and in places rather odd discussion on Reddit recently.


Hilary Lawson starts with the premise that Wittgenstein and others have sufficiently demonstrated that there is no way of establishing objective truth; but we can’t, he says, rest there. He thinks that we can get a practical way forward if we note that the world is open but we can close it in various ways and some of them work better for us than others.

Perhaps an analogy might be (as it happens) the ideas of truth and falsity themselves in formal logic. Classical logic assigns only two values to propositions: true or not true. People often feel this is unintuitive. We can certainly devise formal logics with more than two values – we could add one for ‘undetermined’, say. This is not a matter of what’s right or wrong; we can carve up our system of logic any way we like. The thing is, two-valued logic just gives us a lot more results than its rivals. One important reason is that if we can exclude a premise, in a two-valued system its negation must be true; that move doesn’t work if there are three or more values. So it’s not that two-valued logic is true and the others are false, it’s just that doing it the two-valued way gets us more. Perhaps something similar might be true of the different ways we might carve up the world.
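The contrast can be sketched in a few lines of code (my own illustration, not from the video, using Kleene’s three-valued logic as the version with an ‘undetermined’ value):

```python
# Truth values: T (true), F (false), U (undetermined).
T, F, U = "T", "F", "U"

def neg2(v):
    # Classical two-valued negation: there are only two options,
    # so ruling a proposition out forces its negation to be true.
    return T if v == F else F

def neg3(v):
    # Kleene three-valued negation: 'undetermined' stays undetermined.
    if v == T:
        return F
    if v == F:
        return T
    return U

# Two-valued: "not false" must be true -- this is what licenses
# arguments by elimination (exclude a premise, assert its negation).
assert neg2(F) == T

# Three-valued: knowing a proposition is *not true* no longer tells us
# its negation is true -- the value might merely be undetermined.
assert neg3(U) == U
```

This is the move the paragraph above points to: the extra value costs us arguments by elimination, which is one reason the two-valued system “gets us more”.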

John Searle, by video (and briefly doing that thing old people seem mysteriously prone to: sinking to the bottom of the frame as though peering over a wall) goes for common sense, as ever, albeit cloaked in technical terms. He distinguishes between epistemic and ontological senses of objectivity. Our views are unavoidably ontologically subjective to some degree (i.e., different people have different views: ‘perspectivalism’ is true); but that does not at all entail that epistemic objectivity is unattainable; indeed, if we didn’t assume some objective truths we couldn’t get started on discussion. That’s a robust refutation of the view that perspectivalism implies no objective truth, though I’m not sure that’s quite the case Lawson was making. Perhaps we could argue that, after all, there are such things as working assumptions; to say ‘let’s treat this as true and see where we get’ does not necessarily require belief in objectively determinable truth.

Hannah Dawson seems to argue emphatically on both sides; no two members of a class gave the same account of an assembly (though I bet they could all agree that no pink elephant walked in halfway through). It seems the idea of objective truth sits uneasily in history; but no-one can deny the objective factuality of the Holocaust; sometimes, after all, reality does push back. This may be an expression of the curious point that it often seems easier to say that nothing is objectively true than it is to say that nothing is objectively false, illogical as that is.

Dawson’s basic argument looks to me a bit like an example of ‘panic scepticism’; no perfect objective account of an historical event is possible, therefore nothing at all is objectively true. I think we get this kind of thing in philosophy of mind too; people seem to argue that our senses mislead us sometimes, therefore we have no knowledge of external reality (there are better arguments for similar conclusions, of course). Maybe after all we can find ways to make do with imperfect knowledge.


  1. Tom says:

    I am not sure what “undetermined” as a truth value would mean. It seems like an epistemic value rather than objective truth value.

    It seems that even logics that admit more “truth values” in addition to true and false eventually fall back on the true/false binary: for example, if something has truth value “undetermined” then it is true that it has truth value “undetermined” and it is false that it does not have truth value “undetermined”.

  2. Michael D says:

    Relative to the pink elephant in the room, you certainly must have seen this:

  3. Scott Bakker says:

    I came to comment on the previous piece, only to find myself railroaded. I actually take these debates as evidence for my position. ‘Truth’ is another tool, a way to communicate the predictive reliability of certain comportments on the cheap. Like everything biological, it is adapted to solving local problems. Philosophers have found ways to regiment that tool, of course, knap it into specialized devices, some possessing enormous usefulness. Even still, as a cognitive tool, it possesses limits of applicability, limits that only become perceptible when it crashes. But this is only obvious once we understand that it is a tool: otherwise we assume universality and find ourselves in crash space, perpetually debating our respective misapplications.

    [What were you going to say about the previous piece??? -Peter]

  4. Scott Bakker says:

    Peter – “[What were you going to say about the previous piece??? -Peter]”

    To say thanks for bringing it to our attention, that I was linking it to a piece of my own, which I was intent on putting up yesterday, but decided to put up Sunday instead, when I will repeat all this in the comments… 😉

  5. Callan S. says:

    So it’s not that two-valued logic is true and the others are false, it’s just that doing it the two-valued way gets us more.

    More of what? Results? Maybe it’s just preferable, like a neatly made bed is preferable to one with the sheets and blankets all scrunched up and all over the place.

  6. Michael Murden says:

    Tom (1)

    The previous article about neural networks talks about how they implement processes that are not logically determinable. As with some human behavior, the trouble is not so much in the set of possible values for the outputs as in the inability to determine the process that connects the inputs to the outputs. Both humans and neural networks have to have some way of valuing the various outputs. For example, in a system that diagnoses illnesses, lower mortality is a more highly valued outcome than higher mortality. As long as the system delivers highly valued outcomes we tend not to feel too curious about the internal process; when the system delivers lower-valued outputs we start to question the internal processes. The thing about logic is that in ordinary life decisions we have multiple inputs, some of which we don’t consciously understand, multiple possible outputs, some of which we don’t consciously understand, and multiple internal processes, some of which we don’t consciously understand. Logic, particularly two-valued true/false logic, only works for the few times when we know the inputs, the internal processes and the range of possible outputs. You could say logic functions as a kind of intellectual hygiene, ruling out inputs, processes and outputs that can’t be precisely defined. But of course you can be too fastidious.

  7. John Davey says:

    If your senses ‘mislead’ you, then that deception is an objective fact. It has no bearing on the existence of objective truths, which include even such basic ‘deceptions’ as sense data is capable of.

  8. Sergio Graziosi says:

    Thanks for flagging this one up. I would have missed it otherwise, and it’s the only example I can recall of a video debate on philosophical matters which I found interesting throughout.
    It’s also nice to be able to (partially) agree with Searle, every once in a while! ^_^

    Overall, it’s interesting to me because I find it surprisingly hard to dismiss anti-realism. This video confirmed my (usually prevailing) hunch: I’m very close to critical realism, after all.

    A few reflections:
    I think Lawson’s term ‘closure’ is an understandable (it does sum up the concept very well) but unfortunate choice of wording. No particular interpretational framework is truly closed. On one side, expecting a closure to be static, not available for tweaking, is not only false (even sacred texts are constantly being re-interpreted!), it’s also unnecessarily limiting. Secondly, even if we could formalise a given framework to the tiniest detail (which we can’t, both because it’s too much work for any vaguely useful framework and because the task is endless, as frameworks change all the time), we already know that the result will be either inconsistent, incomplete or both. Closures are not so closed, after all.

    When asked, Lawson explains that the anti-realist position is useful because by abandoning the idea of objective truth you can undermine dangerous fanaticism. If only! In fact, fanatics are impervious to both logical arguments and empirical falsification, so I doubt the solution would work in practice. I do, however (surprisingly!), find the idea appealing because it does allow us (as Lawson states) to expand our explorative options. Caveats aside, the potential for inclusively (at least partially) accepting perspectives which are mutually contradictory (they can’t all be 100% right!) is what keeps me attracted to the anti-realist position. It automatically allows us to save what is useful in each perspective on offer, which, when translated to philosophical and even scientific theorising (with all their differences!), automatically provides access to more predictive/explanatory power. Unlike fanatics, scientists and philosophers should be at least a little bit permeable to logical and empirical arguments, so this “quality” of anti-realism strikes me as marginally less utopian.
    Thus, for me, the question becomes: how do we retain the special advantages of anti-realism without having to discard the idea that there is a common reality that embeds us all? (Included here: why should we assume a common reality? Sketchy answer: because, as Searle notes, we have to!)
    As far as I can tell, critical realism is in fact an attempt to do exactly this. I’m no expert, but what I understand of it is fairly close to my current position.
    I’ve expressed where I stand many times here, but I’ll take the chance of writing a recap (for my own benefit, primarily).

    1. To be able to think about (let alone discuss) anything, we need to slice up the reality we perceive into distinct entities. That is: cognition relies on our innate tendency to use equivalence classes.
    2. These distinctions are opportunistic: we latch on to those which “work”. We retain and rely on distinctions which allow us to produce reliable-enough predictions. (The ultimate aim and arbiter of what is good enough is always reproduction and self-preservation.)
    2.1 Causality enters here: it’s a side effect of our innate way of slicing. A acting on B causes C: this is cognition; by distinguishing A, B and C, we map perceptions onto distinct entities. From there, the action of A on B creates something else, C. This observed phenomenon is a function of the slicing, and requires the concept of causality to represent the relation between A, B and C. Causation is an artefact of how cognition works. (Apparently some of the weirdest theories in theoretical physics use equations that are symmetric with respect to time, and so may be giving up on the concept of causality altogether. This fits in well, IMHO, as these theories use different slicing approaches which don’t need to create this particular artefact – see below.)
    3. Senses and our ways of making sense(!) of them are the result of natural selection. The ones we have exist (in the overwhelming majority) because they work (in the sense mentioned above).
    4. From 3 you derive that because we share a good amount of our evolutionary history, we also have plenty of cognitive common ground, allowing communication and thus language. We perceive and naturally slice the world in similar ways, thus we can “assume” the world is how we perceive it, and use this assumption to ground symbolic communication – this is my agreement with Searle, I suppose.
    4.1 Such a description of perception and cognition relates to the necessity of grounding intentionality on our way of perceiving the world. See my previous discussion on “Ecological Representations” and that strand of lucubrations.
    5. However, nothing in the world states that one way of slicing up the world (Lawson’s Closures) is objectively better a priori (i.e. better in a context-independent way). Each slicing approach comes with advantages and inherent disadvantages. While highlighting some regularities in the world, by necessity a given approach will obfuscate some others (this is what I think is my weakest point: it makes sense to me, but I’m not sure how I can substantiate it).
    6. From 5, you derive that having separate, mutually contradictory approaches, each with their own non-overlapping strengths and weaknesses, is not only useful, it is inevitable. (Aside: it also accommodates the fact that cognitive dissonance doesn’t feel like dissonance at all!)
    7. You can go further than 5, and hypothesise that some approaches are generically better than others simply because they are less context dependent. This conjecture incorporates the idea that science does make progress: by ever-accumulating refinements (and the occasional radical change) it finds new ways of slicing reality which are less context dependent; this might suggest that we are capturing more and more ‘real’ features. You would expect scientific theories (a particular brand of ‘slicing’) to be optimised with respect to context-dependency: because they strive to be a “third party view” or “a view from nowhere”, they are our best shot at being context-independent (albeit never 100% so).
    8. However, as far as I can tell, we have no way of establishing a priori what features of a given slicing method actually match “reality” (or to what extent they do) and what others don’t. We expect every slicing to introduce its own artefacts and to have its own blind spots, but we can only indirectly figure out which is which (if at all). When a prediction fails, that’s a sign we can improve something, but we can’t assume that being able to find said improvements automatically guarantees our slicing method is Right. We may be digging deeper into a rabbit hole: while the soil is soft we can still make progress, but when we hit a hard bottom someone might get frustrated and start another hole somewhere else (attempt to get a paradigm shift started, etc.).

    In short, reality exists, but we can’t tell for sure how much of it we’ve understood, or which parts of our current understandings are misleading. We also allow for the co-existence of theories which contradict one another, while simultaneously understanding why finding a way to reconcile (or surpass, with a third theory) contradictions between theories feels like a very useful way of progressing. At the same time, we are reconciling Lawson’s and Searle’s views, precisely as I think Dawson was trying to do. In this “meta-slicing” perspective, that should be the best kind of progress. 😉

    So why bother to cover all this on a blog about consciousness? (You may also ask: why did Peter decide this sort of debate was relevant?) I can’t speak for Peter, but I do know I’m interested in all this because of the implications for consciousness studies, scientific and philosophical.

    If you think about point 7, striving to construct a “view from nowhere” is what makes scientific knowledge more versatile and less context dependent. What you find via the “view from nowhere” are predictions that are more likely than most to hold up in a wider range of settings. To take a trivial example, aerodynamics predicts how planes will fly, but also applies to paper planes (a huge change of scale), to the trajectory of falling leaves (a huge change of scope) and to the effect of air on cars (a complete change of context). Fine. Following 8, you would expect our view from nowhere to have weak points, but where would they be? There is an interesting “coincidence” exposed by the reproducibility crisis: is it by chance that, while it is now spreading to multiple disciplines, it first started in psychology? I don’t think so: because of how it works, the scientific stance is bound to find significant obstacles when dealing with subjectivity. You could argue that striving to produce a view from nowhere necessarily generates a blind spot about subjective views. I could argue it formally, but I’m sure people here can see the gist of it.

    Thus, the Hard Problem is hard not because it’s hard in absolute terms (many philosophical systems have no problem with subjective experience); the hardness is in fact the inevitable consequence of the chosen method of inquiry (or slicing approach). The situation is made even harder because we (Westerners) have been trained to consider the scientific outlook as the bearer of truth (for good reasons, but with too much zeal, I fear). You thus get people who find it acceptable to think that consciousness or subjective experience don’t even exist, since science can do without them. The outlook I’m exploring, my meta-slicing strategy, explains why this is so, and simultaneously allows us to maintain that even if we could produce a full theory of human behaviour without including subjective experience, this would not mean that subjective experience doesn’t exist or doesn’t have its own explanatory power.
    Does this view suggest that we should not try to tackle consciousness scientifically? Not at all; it merely explains why it is hard, and why it might require some tweaking of the original approach. In other words, it points to the fact that the hard problem is both scientific and philosophical, and it seems likely that neither field can solve it in isolation: as vanilla versions of the “view from nowhere” are likely to be blind to what subjectivity is, tweaking them is a quintessentially philosophical endeavour, even if scientists might find it hard to accept it as such.
    Just as physics is trying to change in order to make sense of the weird stuff we discovered in the last 100 years (it is, pace John), so the neuro-psy sciences may need to re-adjust their premises. Regretfully, this does mean that I have no grounds to dismiss the theories I dislike: even if they look nonsensical to my own eyes, this may be because the assumptions I’m making are not fit for the purpose at hand :-/. Of course, I can also reverse the argument in my favour!

    Question for all readers/contributors here: should I try to solidify the above and try to make it credible enough for anyone to take note? It’s a very serious question: my life plans need to change if the answer is “yes”.

    PS Jochen, if you’re reading. Yes I know: underdeveloped BFD, I just can’t help myself!
