We’ve done so much here towards clearing up the problems of consciousness, I thought we might take a short excursion and quickly sort out ethics?

It’s often thought that philosophical ethics has made little progress since ancient times; that no firm conclusions have been established and that most of the old schools, along with a few new ones, are still at perpetual, irreconcilable war. There is some truth in that, but I think substantial progress has been made. If we stop regarding the classic insights of different philosophers as rivals and bring them together in a synthesis, I reckon we can put together a general ethical framework that makes a great deal of sense.

What follows is a brief attempt to set out such a synthesis from first principles, in simple non-technical terms. I’d welcome views: it’s only fair to say that the philosophers whose ideas I have nicked and misrepresented would most surely hate it.




The deepest questions of philosophy are refreshingly simple. What is there? How do I know? And what should I do?

We might be tempted to think that that last question, the root question of ethics, could quickly be answered by another simple question; what do you want to do? For thoughtful people, though, that has never been enough. We know that some of the things people want to do are good, and some are bad. We know we should avoid evil deeds and try to do good ones – but it’s sometimes truly hard to tell which they are. We may stand willing to obey the moral law but be left in real doubt about its terms and what it requires. Yet, coming back to our starting point, surely there really is a difference between what we want to do and what we ought to do?

Kant thought so: he drew a distinction between categorical and hypothetical imperatives. For the hypothetical ones, you have to start with what you want. If you’re thirsty, then you should drink. If you want to go somewhere, then you should get in your car. These imperatives are not ethical; they’re simply about getting what you want. The categorical imperative, by contrast, sets out what you should do anyway, in any circumstances, regardless of what you want; and that, according to Kant, is the real root of morality.

Is there anything like that? Is there anything we should unconditionally do, regardless of our aims or wishes? Perhaps we could say that we should always do good; but even before we get on to the difficult task of defining ‘good’, isn’t that really a hypothetical imperative? It looks as if it goes: if you want to be good, behave like this…? Why do we have to be good? Let’s imagine that Kant, or some other great man, has explained the moral law to us so well, and told us what good is, so correctly and clearly that we understand it perfectly. What’s to stop us exercising our freedom of choice and saying “I recognise what is good, and I choose evil”?

To choose evil so radically and completely may sound more like a posture than a sincere decision – too grandly operatic, too diabolical to be altogether convincing – but there are other, appealing ways we might want to rebel against the whole idea of comprehensive morality. We might just seek some flexibility, rejecting the idea that morality rules our lives so completely, always telling us exactly what to do at every turn. We might go further and claim unrestricted freedom, or we might think that we may do whatever we like so long as we avoid harm to others, or do not commit actual crimes. Or we might decide that morality is simply a matter of useful social convention, which we propose to go along with just so long as it suits our chosen path, and no further. We might come to think that a mature perspective accepts that we don’t need to be perfect; that the odd evil deed here and there may actually enhance our lives and make us more rounded, considerable and interesting people.

Not so fast, says Kant, waving a finger good-naturedly; you’re missing the point; we haven’t yet even considered the nature of the categorical imperative! It tells us that we must act according to the rules we should be happy to see others adopt. We must accept for ourselves the rules of behaviour we demand of the people around us.

But why? It can be argued that some kind of consistency requires it, but who said we had to be consistent? Equally, we might well argue that fairness requires it, but we haven’t yet been given a reason to be fair, either. Who said that we had to act according to any rule? Or even if we accept that, we might agree that everyone should behave according to rules we have cunningly slanted in our own favour (Don’t steal, unless you happen to be in the special circumstances where I find myself to be) or completely vacuous rules (Do whatever you want to do). We still seem to have a serious underlying difficulty: why be good? Another simple question, but it’s one we can’t answer properly yet.

For now, let’s just assume there is something we ought to do. Let’s also assume it is something general, rather than a particular act on a particular occasion. If the single thing we ought to do were to go up the Eiffel Tower at least once in our life, our morality would be strangely limited and centred. The thing we ought to do, let’s assume, is something we can go on doing, something we can always do more of. To serve its purpose it must be the kind of behaviour that racks up something we can never have too much of.

There are people whose ethical theories are based on exactly this kind of general goal: the consequentialists. They believe the goodness of our acts depends on their consequences. The idea is that our actions should be chosen so that as a consequence some general desideratum is maximised. The desired thing can vary, but the most famous example is happiness, which Jeremy Bentham embodied in the Utilitarians’ principle: act so as to bring about the greatest happiness of the greatest number of people.

Old-fashioned happiness Utilitarianism is a simple and attractive theory, but there are several problems with the odd results it seems to produce in unusual cases. Putting everyone in some kind of high-tech storage ward while constantly stimulating the pleasure centres in their brains with electrodes appears a very good thing indeed if we’re simply maximising happiness. All those people spend their existence in a kind of blissful paralysis: the theory tells us this is an excellent result, something we must strive to achieve, but it surely isn’t. Some kinds of ecstatic madness, indeed, would be high moral achievements according to simple Utilitarianism.

Less dramatically, people with strong desires, who get more happiness out of getting what they want, are awarded a bigger share of what’s available under utilitarian principles. In the extreme case the needs of ‘happiness monsters’, whose emotional response is far greater than anyone else’s, come to dominate society. This seems strange and unjust; but perhaps not to everyone. Bentham frowns at the way we’re going: why, he asks, should people who don’t care get the same share as those who do?

That case can be argued, but it seems the theory now wants to tutor and reshape our moral intuitions, rather than explaining them. It seems a real problem, as some later utilitarians recognised, that the theory provides no way at all of judging one source or kind of happiness better or worse than another. Surely this reduces and simplifies too much; we may suspect in fact that the theory misjudges and caricatures human nature. The point is not that people want happiness; it’s more that they usually get happiness from having the things they actually want.

With that in mind, let’s not give up on utilitarianism; perhaps it’s just that happiness isn’t quite the right target? What if, instead, we seek to maximise the getting of what you want – the satisfaction of desires? Then we might be aiming a little more accurately at the real desideratum, and putting everyone in pleasure boxes would no longer seem to be a moral imperative; instead of giving everyone sweet dreams, we have to fulfil the reality of their aspirations as far as we can.

That might deal with some of our problems, but there’s a serious practical difficulty with utilitarianism of all kinds; the impossibility of knowing clearly what the ultimate consequences of any action will be. To feed a starving child seems to be a clear good deed; yet it is possible that by evil chance the saved infant will grow up to be a savage dictator who will destroy the world. If that happens the consequences of my generosity will turn out to be appalling. Even if the case is not as stark as that, the consequences of saving a child roll on through further generations, perhaps forever. The jury will always be out, and we’ll never know for sure whether we really brought more satisfaction into the world or not.

Those are drastic cases, but even in more everyday situations it’s hard to see how we can put a numerical value on the satisfaction of a particular desire, or find any clear way of rating it against the satisfaction of a different one. We simply don’t have any objective or rigorous way of coming up with the judgements which utilitarianism nevertheless requires us to make.

In practice, we don’t try to make more than a rough estimate of the consequences of our actions. We take account of the obvious immediate consequences: beyond that the best we can do is to try to do the kind of thing that in general is likely to have good consequences. Saving children is clearly good in the short term, and people on the whole are more good than bad (certainly for a utilitarian – each person adds more satisfiable desires to the world), so that in most cases we can justify the small risk of ultimate disaster following on from saving a life.

Moreover, even if I can’t be sure of having done good, it seems I can at least be sure of having acted well; I can’t guarantee good deeds but I can at least guarantee being a good person. The goodness of my acts depends on their real consequences; my own personal goodness depends only on what I intended or expected, whether things actually work out the way I thought they would or not. So if I do my best to maximise satisfaction I can at least be a good person, even if I may on rare occasions be a good person who has accidentally done bad things.

Now though, if I start to guide my actions according to the kind of behaviour that is likely to bring good results, I am in essence going to adopt rules, because I am no longer thinking about individual acts, but about general kinds of behaviour. Save the children; don’t kill; don’t steal. Utilitarianism of some kind still authorises the rules, but I no longer really behave like a Utilitarian; instead I follow a kind of moral code.

At this point some traditional-looking folk step forward with a smile. They have always understood that morality was a set of rules, they explain, and proceed to offer us the moral codes they follow, sanctified by tradition or indeed by God. Unfortunately on examination the codes, although there are striking broad resemblances, prove to be significantly different both in tone and detail. Most of them also seem, perhaps inevitably, to suffer from gaps, rules that seem arbitrary, and ones which seem problematic in various other ways.

How are we to tell what the correct code is? Our code is to be authorised and judged on the basis of our preferred kind of utilitarianism, so we will choose the rules that tend to promote the goal we adopted provisionally; the objective of maximising the satisfaction of desires. Now, in order to achieve the maximal satisfaction of desires, we need as many people as possible living in comfortable conditions with good opportunities and that in turn requires an orderly and efficient society with a prosperous economy. We will therefore want a moral code that promotes stable prosperity. There turns out to be some truth in the suggestion that morality in the end consists of the rules that suit social ends! Many of these rules can be worked out more or less from first principles. Without consistent rules of property ownership, without reasonable security on the streets, we won’t get a prosperous society and economy, and this is a major reason why the codes of many cultures have a lot in common.

There are, however, also legitimate reasons why codes differ. In certain areas the best rules are genuinely debatable. In some cases, moreover, there is genuinely no fundamental reason to prefer one reasonable rule over another. In these cases it is important that there are rules, but not important what they are – just as for traffic regulations it is not important whether the rule is to drive on the left or the right, but very important that it is one or the other. In addition, the choice of rules for our code embodies some assumptions about human nature and behaviour, and about which arrangements work best with it. Ethical rules about sexual behaviour are often of this kind, for example. Tradition and culture may have a genuine weight in these areas, another potentially legitimate reason for variation in codes.

We can also make a case for one-off exceptions. If we believed our code was the absolute statement of right and wrong, perhaps even handed down by God, we should have no reason to go against it under any circumstances. Anything we did that didn’t conform with the code would automatically be bad. We don’t believe that, though; we’ve adopted our code only as a practical response to difficulties with working out what’s right from first principles – the impossibility of determining what the final consequences of anything we do will be. In some circumstances, that difficulty may not be so great. In some circumstances it may seem very clear what the main consequences of an action will be, and if it looks more or less certain that following the code will, in a particular exceptional case, have bad consequences, we are surely right to disobey the code; to tell white lies, for example, or otherwise bend the rules. This kind of thing is common enough in real life, and I think we often feel guilty about it. In fact we can be reassured that although the judgements required may sometimes be difficult, breaking the code to achieve a good result is the right thing to do.

The champions of moral codes find that hard to accept. In their favour we must accept that observance of the code generally has a significant positive value in itself. We believe that following the rules will generally produce the best results; it follows that if we set a bad example or undermine the rules by flouting them we may encourage disobedience by others (or just lax habits in ourselves) and so contribute to bad outcomes later on. We should therefore attach real value to the code and uphold it in all but exceptional cases.

Having got that far on the basis of a provisional utilitarianism, we can now look back and ask whether the target we chose, that of maximising the satisfaction of desires, was the right one. We noticed that odd consequences follow if we seek to maximise happiness: direct stimulation of the pleasure centres looks better than living your life, and happiness monsters can have desires so great that they overwhelm everything else. It looked as if these problems arise mainly in situations where the pursuit of simple happiness is too narrowly focused, over-riding other objectives which also seem important.

In this connection it is productive to consider what follows if we pursue some radical alternative to happiness. What, indeed, if we seek to maximise misery? The specifics of our behaviour in particular circumstances will change, but the code authorised by the pursuit of unhappiness actually turns out to be quite similar to the one produced by its opposite. For maximum misery, we still need the maximum number of people. For the maximum number of people, we still need a fairly well-ordered and prosperous society. Unavoidably we’re going to have to ban disorderly and destructive behaviour and enforce reasonable norms of honesty. Even the armies of Hell punish lying, theft, and unauthorised violence – or they would fall apart. To produce the society that maximises misery requires only a small but pervasive realignment of the one that produces most happiness.

If we try out other goals we find that whatever general entity we want to maximise, consequentialism will authorise much the same moral code. Certain negative qualities seem to be the only exceptions. What if we aim to maximise silence, for example? It seems unlikely that we want a bustling, prosperous society in that case: we might well want every living thing dead as soon as possible, and so embrace a very different code. But I think this result comes from the fact that negative goals like silence – the absence of noise – covertly change our principle from one of maximising to one of minimising, and that makes a real difference. Broadly, maximising anything yields the same moral code.

In fact, the vaguer we are about what we seek to maximise, the fewer the local distortions we are likely to get in the results. So it seems we should do best to go back now and replace the version of utilitarianism we took on provisionally with something we might call empty consequentialism, which simply enjoins us to choose actions that maximise our own legacy as agents, without tying us to happiness or any other specific desideratum. We should perform those actions which have the greatest consequences – that is, those that tend to produce the largest and most complex world.

We began by assuming that something was worth doing and have worked round to the view that everything is: or at least, that everything should be treated as worth doing. The moral challenge is simply to ensure our doing of things is as effective as possible. Looking at it that way reveals that even though we have moved away from the narrow specifics of the hypothetical imperatives we started with, we are still really in the same territory and still seeking to act effectively and coherently; it’s just that we’re trying to do so in a broader sense.

In fact what we’re doing by seeking to maximise our consequential legacy is affirming and enlarging ourselves as persons. Personhood and agency are intimately connected. Acts, to deserve the name, must be intentional: things I do accidentally, unknowingly, or under direct constraint don’t really count as actions of mine. Intentions don’t exist in a free-floating state; they always have their origin in a person; and we can indeed define a person as a source of intentions. We need not make any particular commitment about the nature of intentions, or about how they are generated. Whether the process is neural, computational, spiritual, or has some other nature, is not important here, so long as we can agree that in some significant sense new projects originate in minds, and that such minds are people. By adopting our empty consequentialism and the moral code it authorises, we are trying to imprint our personhood on the world as strongly as we can.

We live in a steadily unrolling matrix of cause and effect, each event following on from the last. If we live passively, never acting on intentions of our own, we never disturb the course of that process and really we do not have a significant existence apart from it. The more we act on projects of our own, the stronger and more vivid our personal existence becomes. The true weight of these original actions is measured by their consequences, and it follows that acting well in the sense developed above is the most effective way to enhance and develop our own personhood.

To me, this is a congenial conclusion. Being able to root good behaviour and the observance of an ethical code in the realisation and enlargement of the self seems a satisfying result. Moreover, we can close the circle and see that this gives us at last some answer to the question we could not deal with at first – why be good? In the end there is no categorical imperative, but there is, as it were, a mighty hypothetical: if you want to exist as a person, and if you want your existence as a person to have any significance, you need to behave well. If you don’t, then neither you nor anyone else need worry about the deeper reasons or ethics of your behaviour.

People who behave badly do not own the outcomes of their own lives; their behaviour results from the pressures and rewards that happen to be presented by the world. They themselves, as bad people, play little part in the shaping of the world, even when, as human beings, they participate in it. The first step in personal existence and personal growth is to claim responsibility and declare yourself, not merely reactive, but a moral being and an aspiring author of your own destiny. The existentialists, who have been sitting patiently smoking at a corner table, smile and raise an ironic eyebrow at our belated and imperfect enlightenment.

What about the people who rejected the constraints of morality and to greater or lesser degrees wanted to be left alone? Well, the system we’ve come up with enjoins us to adopt a moral code – but it leaves us to work out which one and explicitly allows for exceptions. Beyond that it consists of the general aspiration of ‘empty consequentialism’, but it is for us to decide how our consequential legacy is to be maximised. So the constraints are not tight ones. More important, it turns out that moral behaviour is the best way to escape from the tyranny of events and imprint ourselves on the world; obedience to the moral law is really the only way to be free.


  1. Jochen says:

    First of all, this post desperately needs a link to Existential Comics, since following it will maximize the happiness of all who read it.

    But I think this result comes from the fact that negative goals like silence – the absence of noise – covertly change our principle from one of maximising to one of minimising, and that makes a real difference. Broadly, maximising anything yields the same moral code.

    But (speaking here from the maybe biased perspective of someone who solves optimization problems on a near-daily basis), isn’t maximizing A the same as minimizing not-A? Isn’t maximizing suffering the same as minimizing non-suffering (i.e. happiness, if that’s the true polar opposite)? So I’m afraid I don’t really see this difference between maximization and minimization…
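
    To make that concrete, here is a minimal numerical sketch of the identity; the quadratic objective and the use of scipy are purely illustrative choices of mine, not anything from the post:

```python
# Minimal sketch: maximising f is the same computational task as
# minimising -f. The objective is an arbitrary concave quadratic,
# chosen only so the optimum is easy to verify by hand
# (maximum value 5 at x = 2).
from scipy.optimize import minimize_scalar

def f(x):
    return -(x - 2.0) ** 2 + 5.0

# Maximise f by minimising its negation.
result = minimize_scalar(lambda x: -f(x))

print(result.x)     # ~2.0 : the argmax of f is the argmin of -f
print(-result.fun)  # ~5.0 : the max of f is minus the min of -f
```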

    I’ve also always found the reading of utilitarianism under which it is the optimization of some quantity in an absolute, rather than a relative sense somewhat odd. Is a world in which there are ten billion miserable people really in any objective sense ‘happier’ than a world in which there is just a really, really happy million? Should we not strive to optimize happiness (if that is what we want to optimize) per person, rather than in sum? Because ultimately, the sum total of all happiness is nobody’s happiness—that is, there’s nobody that necessarily gets any happier under this reading. So why would one aspire to that? It’s the same reason we don’t compare countries by total GNP, but rather, by GNP per capita.

    For me, ethics is one of these things I keep thinking I’ll properly tackle after having solved all the easy issues—you know, unified general relativity and quantum mechanics, found the theory of everything, explained consciousness and the like. Afterwards, I might take on sociology or politics. And then, just maybe, solve the issue of where the other socks in the washing machine go to. But that one might take me some time.

    Less facetiously, recently I’ve begun to believe that maybe a lot of the problems in ethics (and similar) are due to the fact that we try to understand a neural network using a Turing machine—we keep looking for a simple, unified principle that will pick out all the ‘morally good’ cases from the ‘morally bad’ cases; but neural networks just don’t necessarily organize concepts according to such easily defined boundaries. That’s also what I think is at issue with Gettier problems: we try to find a simple and unified description of what knowledge is, and then keep finding counterexamples; but this must mean that we already have some idea of what knowledge is, otherwise, we couldn’t realize that those counterexamples are actual counterexamples. So we do, in fact, already have a concept of knowledge; it’s just not easily capturable in a short formula (like, for instance, the set of images that a neural network might group in a specific class isn’t easily determinable by a short set of criteria).

    Where to go from there, though… For now, my working theory is simply that there may be no ultimate moral code, no final facts about the world of the form ‘such-and-such is the right thing to do’, but that the sort of rules that are followed are subject to an ever-ongoing discussion within society (staving off relativism). But I do recognize that this may be nothing more than a momentary band-aid. Well, but let’s get back to the theory of everything for now!

  2. Peter says:

    isn’t maximizing A the same as minimizing not-A

    That’s a very good point; logically, yes, but in ontological terms I’m inclined to think there may be a real difference between ‘less’ and ‘more’.

    I think Bentham would have taken the view that the misery of the ten billion could be treated as a negative to offset any happiness in that world. You’d just work out a net total. Like Benthamite economics this is clearly somewhat unrealistic and over-simplified (but both are a lot easier to work with than messy reality); we could try something a bit more complex.
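
    In symbols, that Benthamite bookkeeping might look something like the following; the signed per-person quantities are, of course, the theory’s idealisation rather than anything measurable:

```latex
% Net Benthamite total: each person i contributes a signed happiness h_i,
% with misery counted as negative; worlds are ranked by the plain sum.
\[
U_{\text{world}} \;=\; \sum_{i=1}^{N} h_i,
\qquad h_i < 0 \ \text{for net misery.}
\]
```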

    There are those who take the view that ethical qualities are non-natural and the best we can do is sit and think about them (unfortunately this tends not to produce agreement).

    Can I say I hate all that Gettier stuff? Knowledge is true belief. People who think otherwise are falling foul of a desire to know that they know; but you don’t have to know that you know in order to know. (Am I making sense…?)

    You say no ultimate moral code, but are you really willing to say nothing at all ultimately deserves condemnation? I’m not willing to relinquish the right to denounce!

  3. Jochen says:

    Hmm, couldn’t I end up with exactly the same world, whether I maximized A or minimized not-A? But then, what’s the difference?

    And I think I expressed myself poorly re the maximization of happiness in sum—what I meant was that the ten billion would be relatively miserable, as in, possessing one unit of happiness each, while the million each possess, say, ten units; so each of the million is pretty damn happy, while each of the ten billion is just barely not entirely unhappy. Yet, the total amount of happiness in the more highly populated world would be higher, and hence, that world would be more desirable; but I’d rather live in the less populated one (and that not just on the grounds of general misanthropy).
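
    Putting explicit numbers on those assumed figures:

```latex
% Total versus per-capita happiness for the two hypothetical worlds.
\[
\text{crowded world: } 10^{10} \times 1 = 10^{10} \text{ units (average } 1\text{)},
\qquad
\text{small world: } 10^{6} \times 10 = 10^{7} \text{ units (average } 10\text{).}
\]
```

    So the simple sum favours the crowded world by three orders of magnitude, even though everyone in it is barely happy.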

    Regarding Gettier, well, it’s a tad overblown I guess. In most cases, we really just believe things more or less strongly, and the world very rarely conforms exactly to the sorts of things we believe. Maybe those paradoxes are not any worse than those you get when you overstretch other concepts that are somewhat inherently vague, like the species concept in biology.

    As for morality, I did say it was a band-aid. In fact, I feel very strongly that some things deserve to be condemned absolutely; the hope is that in all societies that exist long enough to have a meaningful discourse on setting standards of morality, those are just the sort of things that will end up being condemned, simply due to being the kinds of things that tend to destabilize or destroy societies—as you note, even the armies of hell need to enforce some moral code. (Although that is just the sort of hope that eventually some enterprising philosopher may come around and demolish, probably with an argument involving trolleys.)

  4. Lyndon says:

    “We might be tempted to think that that last question, the root question of ethics, could quickly be answered by another simple question; what do you want to do? For thoughtful people, though, that has never been enough. We know that some of the things people want to do are good, and some are bad. We know we should avoid evil deeds and try to do good ones – but it’s sometimes truly hard to tell which they are.”

    No, you (anyone) do not “^know^ that some of the things people want to do ^are good^”, unless you circularly define good and bad as things that humans in general want to do (or as core drives that evolution has given us). It’s either an absurd intuition, one we should disavow, or an absurd way to describe human thoughts, behaviors, and social structures. That is, if you want to actually explain humans in their engagement with the world, such language is not useful.

    In the end, there is good reason to think that when people have set aside their empty beliefs (of the soul, the self, morality), they will still ask where they want to take their selves and societies, but they will not pretend they are taking their selves or societies in “moral” directions. They will accept they are simply creating selves and societies that look like this or look like that, with these behaviors and these social characteristics. They will understand or try to understand what the consequences of such social structures mean for humans, now and in the future. They will not squeeze in the idea that those social structures and behaviors are “moral,” “good,” or “evil.” Even as they re-edit our genomes to be more sociable creatures.

    If we are trying to give good descriptions of the world, then taking the belief that “we know there is a moral order to the world” and that there are right and wrong things to do is placing an overconfidence in human (folk) claims on ontology.

    If you want to play the game of “morality,” of having a say in where we take this society and our selves, then I find it bizarre that we (say the denizens of internet intelligentsia) would not first be honest about what it is that we are and what it is we do within our moralizing. That is, what these social wranglings and what these claims of morality have meant.

    Surely among friends we should be more transparent about human behaviors and discourses (say moral nonrealism) at a place and time where we are asking basic questions about brain/mind. I do not trust my folk psychologically derived mind and the unnecessary connotation and emotions I hang on certain words (It is “wrong” to take Suzie’s toy). In the end, we will want our social and moral discourses to be open to reflection and not hang on the poorly wrought constructions that make up our piecemeal brains and social structures.

  5. Peter says:


    Yes, we can describe maximising as minimising and vice versa. But describing expansion of the river as shrinkage of the not-river doesn’t keep our feet dry!

    I see about the miserable millions.

    Gettier is a whole ‘nother thing. I think the problem is a confusion between what defines knowledge and the circumstances in which we’d actually be prepared to say someone has knowledge. But we could have a long discussion about that.

    I don’t know whether it says something about me, but I find I’m much clearer about wanting to condemn some things than I am about wanting to praise anything!

  6. Peter says:


    Really there I’m just mentioning the intuition that motivates the enquiry initially. I do think it’s a strong one, but I don’t expect it to carry the theory.

    It’s the rest of what I say that is meant, ultimately, to establish that morality is a real thing and the proper concern of any rational agent.

  7. Jochen says:

    But describing expansion of the river as shrinkage of the not-river doesn’t keep our feet dry!

    Isn’t that really just a habit of language, though? We describe the river as expanding, simply because it’s the smaller of the two systems; an island in the river we’d probably describe as shrinking. But locally, the phenomenon is the same: water encroaching upon land. Your feet get wet either way, whether you’re on a shrinking island or the shore of an expanding river.

    And yes, finding something absolutely despicable does seem curiously easier than finding something absolutely praiseworthy; I can much more easily think of circumstances that call into question an act’s righteousness than I can think of circumstances to exonerate acts that I consider to be damnable. Perhaps that’s just a fear of guilt-by-association, though: if you try and find grounds for redeeming evil acts, you’re basically just as bad as Hitler already, so we instinctively steer clear of doing so.

    And when one finds grounds to call into question something near-universally praised—like pointing out that some starlet really just gives a pittance to charity when considering their income, and that only to gain popularity, say—then there’s always a welcome opportunity for moral high-horsing. So the cynic is always on safe ground: he condemns what everybody condemns, plus he must have higher standards for finding fault with what other people praise.

  8. Peter says:

    Perhaps that’s why moral codes generally seem to be more about forbidding things than requiring them… but now you’re going to tell me that forbidding adultery is really just the same as requiring faithfulness, aren’t you? 🙂

  9. Jochen says:

    The difference might just be one of efficiency: often, specifying a set versus specifying its co-set are tasks of very different complexity—for instance, the set of things that are me is easily exhaustively specified, while it would take a considerably longer amount of time to enumerate all the members of the set of things that are not-me. (There’s also a link to computability here: some sets are recursively enumerable, but not co-recursively enumerable, that is, there is a Turing machine that spits out all the members of the set (eventually; the computation is generally non-halting), but no TM that spits out all the members of its complement.)
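
    As a minimal sketch of that asymmetry (with a toy witness predicate of my own; genuinely r.e.-but-not-co-r.e. sets, like the halting set, involve unbounded search in an essential way):

```python
# Sketch of semi-decision: membership in a recursively enumerable set can
# be confirmed by an unbounded search that halts when it finds a witness,
# but the same procedure never confirms NON-membership - on a non-member
# it simply runs forever. The predicate below is a toy stand-in (perfect
# squares are really decidable; only the search structure matters here).

def is_witness(n: int, k: int) -> bool:
    return k * k == n

def semi_decide(n: int) -> bool:
    """Halts with True if n is in the set; loops forever otherwise."""
    k = 0
    while True:
        if is_witness(n, k):
            return True   # witness found: n is in the set
        k += 1            # keep searching, possibly forever

print(semi_decide(9))   # True (witness k = 3)
# semi_decide(7) would never return: the complement is not enumerated.
```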

  10. Anom says:

    Derek Parfit, one of the most important contemporary philosophers in the field of ethics as you probably already know, does not agree about ethics not making progress, though he admits that there is still a lot of room for improvement. I recommend wholeheartedly to anybody really interested in ethics his “On What Matters”, an absolute masterpiece.
