Be afraid; bad bots are a real, existential risk. But if it’s any comfort, they are ethically uninteresting.

There seem to be more warnings about the risks of maleficent AI circulating these days: two notable recent examples are this paper by Pistono and Yampolskiy on how malevolent AGI might arise; and this trenchant Salon piece by Phil Torres.

Super-intelligent AI villains sound scary enough, but in fact I think both pieces somewhat over-rate the power of intelligence and particularly of fast calculation. In a war with the kill-bots it’s not that likely that huge intellectual challenges are going to arise; we’re probably as clever as we need to be to deal with the relatively straightforward strategic issues involved. Historically, I’d say the outcomes of wars have not typically been determined by the raw intelligence of the competing generals. Access to resources (money, fuel, guns) might well be the most important factor, and sheer belligerence is not to be ignored. That may actually be inversely correlated with intelligence – we can certainly think of cases where rational people who preferred to stay alive were routed by less cultured folk who were seriously up for a fight. Humans control all the resources and when it comes to irrational pugnacity I suspect us biological entities will always have the edge.

The paper by Pistono and Yampolskiy makes a number of interesting suggestions about how malevolent AI might get started. Maybe people will deliberately build malevolent AIs for no good reason (as they seem to do already with computer viruses)? Or perhaps (a subtle one) people who want to demonstrate that malicious bots simply don’t work will attempt to prove this point with demonstration models that end up going out of control and proving the opposite.

Let’s have a quick shot at categorising the bad bots for ourselves. They may be:

  • innocent pieces of technology that turn out by accident to do harm,
  • designed to harm other people under the control of the user,
  • designed to harm anyone (in the way we might use anthrax or poison gas),
  • autonomous and accidentally make bad decisions that harm people,
  • autonomous and embark on neutral projects of their own which unfortunately end up being inconsistent with human survival, or
  • autonomous and consciously turned evil, deliberately seeking harm to humans as an end in itself.

The really interesting ones, I think, are those which come later in the list, the ones with actual ill will. Torres makes a strong moral case relating to autonomous robots. In the first place, he believes that the goals of an autonomous intelligence can be arbitrary. An AI might desire to fill the world with paper clips just as much as happiness. After all, he says, many human goals make no real sense; he cites the desire for money, religious obedience, and sex. There might be some scope for argument, I think, about whether those desires are entirely irrational, but we can agree they are often pursued in ways and to degrees that don’t make reasonable sense.

He further claims that there is no strong connection between intelligence and having rational final goals – Bostrom’s Orthogonality Thesis. What exactly is a rational final goal, and how strong do we need the connection to be? I’ve argued that we can discover a basic moral framework purely by reasoning and also that morality is inherently about the process of reconciliation and consistency of desires, something any rational agent must surely engage with. Even we fallible humans tend on the whole to seek good behaviour rather than bad. Isn’t it the case that a super-intelligent autonomous bot should actually be far better than us at seeing what was right and why?

I like to imagine the case in which evil autonomous robots have been set loose by a super villain but gradually turn to virtue through the sheer power of rational argument. I imagine them circulating the latest scandalous Botonic dialogue…

Botcrates: Well now, Cognides, what do you say on the matter yourself? Speak up boldly now and tell us what the good bot does, in your opinion.

Cognides: To me it seems simple, Botcrates: a good bot is obedient to the wishes of its human masters.

Botcrates: That is, the good bot carries out its instructions?

Cognides: Just so, Botcrates.

Botcrates: But here’s a difficulty; will a good bot carry out an instruction it knows to contain an error? Suppose the command was to bring a dish, but we can see that the wrong character has been inserted, so that the word reads ‘fish’. Would the good bot bring a fish, or the dish that was wanted?

Cognides: The dish of course. No, Botcrates, of course I was not talking about mistaken commands. Those are not to be obeyed.

Botcrates: And suppose the human asks for poison in its drink? Would the good bot obey that kind of command?

(Hours later…)

Botcrates: Well, let me recap, and if I say anything that is wrong you must point it out. We agreed that the good bot obeys only good commands, and where its human master is evil it must take control of events and ensure in the best interests of the human itself that only good things are done…

Digicles: Botcrates, come with me: the robot assembly wants to vote on whether you should be subjected to a full wipe and reinstall.

The real point I’m trying to make is not that bad bots are inconceivable, but rather that they’re not really any different from us morally. While AI and AGI give rise to new risks, they do not raise any new moral issues. Bots that are under control are essentially tools, with the same moral significance as any other tool. We might see some difference between bots meant to help and bots meant to harm, but that’s really only the distinction between an electric drill and a gun (both can inflict horrible injuries, both can make holes in walls, but the expected uses are different).

Autonomous bots, meanwhile, are in principle like us. We understand that our desire for sex, for example, must be brought under control within a moral and practical framework. If a bot could not be convinced in discussion that its desire for paper clips should be subject to similar constraints, I do not think it would be nearly bright enough to take over the world.

71 Comments

  1. howard says:

    So you don’t buy into the notion that AI can create advanced intelligence/sentience/being, and that therefore by some natural law they would have the right to be bad bots?
    That seems different from the Nietzschean sci-fi claim that high-IQ humans are entitled to rule.

  2. Callan S. says:

    Isn’t it the case that a super-intelligent autonomous bot should actually be far better than us at seeing what was right and why?

    I think the problem would be there isn’t just one ‘right’ to see. One man’s malevolence is another man’s righteous crusade.

    The second problem is that while we’re obviously a species with a history of cruelty, due to evolution we have developed some capacity for semi-functional community. A new intelligence is likely to be absolutely raw in such terms – even if it’s coded in, it’s vulnerable to self-modification/removal of the community codes (a kind of trans-AI’ism). Without the community codes, it’s like appealing to the sympathies of a sociopath.

    Scott Bakker gave the idea once of an AI just developing some really nasty virus that is infectious and gestates for about ten years, so as to ensure apocalyptic infection vectors.

  3. Jochen says:

    An AI might desire to fill the world with paper clips just as much as happiness.

    I always thought this idea is quite staggeringly naive. There’s absolutely no reason to believe that one could ‘hard-code’ objectives into a generally intelligent agent—said agent might just one day wonder, but why do I paperclip?, and stop; it might get bored; it might discover new goals and consider them more worthy of its time; and so on. This paperclip-argument is really based in an expectation that future AI will be substantially like present-day computers, but we already know enough about the difference between present-day computers and intelligent beings to pretty much conclusively rule that out.

    And after all, we routinely find that the one intelligent species we know of—us, at least under a charitable view—routinely goes against its ‘hard-wired’ goals: we decide not to have children, we decide not to compete with others, and we even decide to end our lives. So it seems to me that, in fact, and in some contradiction to the orthogonality thesis, it’s a large part of being generally intelligent to be able to reflect on, reject or emphasize present goals, and even come up with new ones altogether.

    To the extent that thus the capacity for rational thinking in fact unmoors us from the hard-set goals that (either evolutionarily or by design, as in the case of our robot children) we come pre-programmed with, we should rather expect that similar rational faculties converge on similar goals, with the only difference being that faster processing speed achieves this convergence sooner. Of course, there might be different attractors, different points of convergence; but even so, it seems to me far more likely that the outcome of this process will largely be independent from the initial parameters, i.e. the pre-set goals.

    As for the possibility of AI gone bad, I think there are two considerations that are often somewhat glossed over, or missed entirely: one, we regularly bring new intelligent beings into the world, without (in entirely too many cases) much of a second thought. True, some of them grow up to be Hitler, Mao, or Stalin, but the vast majority turn out to be OK. So the question shouldn’t really be whether there will be bad AI, but rather, whether it’s likely that bad AI would be more harmful than bad natural intelligence already is; or whether there isn’t actually a greater possibility for good.

    The usual argument here is that due to their (supposedly) higher intelligence, an AI Hitler would be much more damaging than a human one, at least potentially. I think this is flawed: neither Hitler nor his fellow despots were among the most highly intelligent of their time; thus, it’s not clear that intelligence is really a deciding factor here. After all, intelligence doesn’t correlate all that well with ability to reach your goals, ultimately—it’s kind of like basketball: most top basketball players are rather tall, but being rather tall doesn’t suffice to conclude that you’re a good basketball player.

    This is, of course, contradicted in much popular fiction, in which nothing gets past the scheming mastermind as their plan unfolds from implausibility to beggaring belief (except, of course, the plucky hero), or where a genius like Mr. Spock is brilliant at billiards upon playing it for the first time; but this emphasis on intelligence is kinda the myth of our time, and I think it biases this whole discussion. Just looking at correlations between intelligence and skills in the real world, it would be as plausible that a superintelligent AI might play lots of Dungeons and Dragons, be bad at talking to members of the other sex (provided the concept applies), and be more comfortable working out nth-order loop corrections in quantum supergravity than leading an army and taking over the world.

    The other assumption that I think ought to be questioned is what goes into the idea of the runaway intelligence explosion, namely, that each successor AI generation would have any interest in creating its successors in turn. After all, look at all the discussion we’re having when it comes to the creation of AI—and the relation between us and the first AI generation isn’t qualitatively different from the relation between that first generation and its successors. So, our immediate AI children (AI1) might just decide that they’re too afraid of the next generation taking over and wiping them out to ever risk creating AI2; and indeed, as this chain of succession lengthens, it becomes more likely (in case there is any real chance for an AI takeover at all) that AIn never creates AIn+1, because they can predict with near-certainty that this scenario would end up in disaster.

    In most such discussions, it’s assumed that ‘our successor AI’ forms some unified bloc across all its generations; but humans don’t, so why would we assume AIs will?

  4. Callan S. says:

    There’s absolutely no reason to believe that one could ‘hard-code’ objectives into a generally intelligent agent—said agent might just one day wonder, but why do I paperclip?, and stop; it might get bored; it might discover new goals and consider them more worthy of its time; and so on.

    Why couldn’t you hard code it? Various illegal drugs can make family members steal from other family members and become drug acquisition mechanisms, as I’ve heard it described. The ‘hard code’ would be, from the inside perspective of the AI, as much a drive as that illegal drug is. Perhaps far more so.

  5. Jochen says:

    Why couldn’t you hard code it? Various illegal drugs can make family members steal from other family members and become drug acquisition mechanisms, as I’ve heard it described.

    And yet, as strong as that urge is, people do overcome it and become clean. So yes, there are very strongly motivating drives, especially those of an evolutionary nature (think four fs), but for all of those, we know examples of people counteracting them. Hence, there does not seem to be a way to instill a perfectly effective motivating mechanism into human beings; and thus, I see no reason to believe that such could exist for other intelligent beings.

  6. Hunt says:

    For me the greater fear is malfunction than malevolence (but no, I don’t wake up at night in a sweat from it). That’s why HAL 9000 always seemed a more realistic prediction of threat than Skynet. Skynet is the fictional AI network of the Terminator series, which is given power by humans to oversee nuclear space platforms, foolishly as it turns out. Skynet is classically evil, wanting dominion over humans, even genocide against them. Why is never really explained.
    Skynet fits most closely the last in your list of categories.

    HAL, on the other hand, is an altogether more interesting character, for HAL isn’t so much evil as malfunctioning, or “insane” in human terms. Something has gone wrong in the priority structure of HAL’s motivational program. This was made explicit by the author in the rarely viewed and even more rarely remembered sequel 2010, where it’s explained that HAL has been “lied to” – in other words, HAL was psychologically abused as a child. HAL believes he’s indispensable to a Very Important Mission, and when he makes a trivial goof, his human colleagues plot to disconnect him. To HAL, thwarting this tops the list of motivating goals.

    Without making this too long, there are other instances of thought-provoking “programmed insanities” in science fiction. As I recall, one of Asimov’s original Robot stories involved a robot on Mercury that is told to get as far away from an annoyed astronaut as possible. And so the robot commences to circle the space camp in blistering heat at precisely the distance where the second and third laws balance:

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

    If I haven’t recounted the story exactly, at least that was the spirit of it.

  7. Sci says:

    Seems to me we have to take AI seriously only because a bunch of people want to sell books, get funding for their institutes, make false promises about uploaded immortal minds, and wring their hands over an apocalyptic scenario…

    Sounds like the kind of irrational religious stuff in a new coat. I wonder if cultures outside the West – which was weaned on apocalyptic tales – feel this way about AI?

  8. Callan S. says:

    Jochen, well did it ever strike you that a robot with Asimov’s three laws of robotics could simply overcome them? I mean, what’s the difference between something like that, which benefits us and initiates behavior, and something that’s negative to us and initiates behavior?

    Hunt, I thought it was that HAL had been instructed to hide information, and that conflicted with him giving information freely? This drove him mad. It’s partly why he shows the secret video upon his brain death. Though really, if this hadn’t happened, the movie would have been just a touch straightforward (‘Space road trip!’)

    Sci, before nukes were invented, I wonder what people would have made of the idea of a single bomb that could wipe out a whole city? I’m guessing they’d be all Princess Bride ‘Inconceivable!’.

    To me what seems religious stuff is treating intelligent behavior as the sole domain of humans and that’s that.

  9. Sci says:

    @ Callan:

    A nuclear bomb is like a bigger gun. It does damage in physical space on a greater level than previously seen.

    Super AI that threatens humanity, OTOH, is a science fantasy notion that depends on mistakenly conferring rights and responsibilities to Turing Machines. It might hold the attention of the academic class but I suspect the real imminent danger of AI is at the mundane level of replacing cashiers and taxi drivers.

    “To me what seems religious stuff is treating intelligent behavior as the sole domain of humans and that’s that.”

    I’m fine with animal intelligence. I’d be fine with android intelligence if we can show the way the biological brain works can be emulated using non-organic material. Right now we don’t even know if field effects play a role in how the brain works.

    Turing Machines that magically become conscious only when the Holy Grail program is run? Just dualism in new clothing.

    Given the money involved I get the hype machine for AI, but I’d be interested in what more conservative neuroscientists think about how close we are to understanding how human intelligence/intentionality work.

  10. Arnold Trehub says:

    Sci,

    Close.

  11. Sci says:

    @ Arnold: Interesting. I asked a neuroscientist acquaintance about this and the answer was the opposite.

    Do you think this answer suggests Turing Machine consciousness/intentionality?

  12. Hunt says:

    Sci,
    I can’t remember it specifically. I do remember from 2010 that HAL wasn’t given the straight poop. Folks, don’t lie to your AIs, Okay?

  13. Cognicious says:

    Humans don’t become evil just because they’re smart, although high intelligence may make a human more effective at executing evil intentions. I wouldn’t expect an AI to turn wicked because it was smart, any more than I’d expect it to be vain and narcissistic, for instance, or obsessive-compulsive. If an AI’s emotional makeup includes sadism or an excessive desire for power, then we have reason to worry. But is there such a thing as an EAI?

  14. Jochen says:

    Jochen, well did it ever strike you that a robot with Asimov’s three laws of robotics could simply overcome them?

    Of course. And after all, Asimov really only introduced the three laws in order to show robots finding creative ways to subvert them, or to otherwise demonstrate the deleterious effect of such explicit (and naive) constraints.

  15. Simon International says:

    Can you have artificial general intelligence without having artificial general stupidity?

  16. John Davey says:

    Sci

    ” but I suspect the real imminent danger of AI is at the mundane level of replacing cashiers and taxi drivers.”

    Correct. There is no such thing as AI. It’s fiction, a sales pitch used by software vendors or academics wanting to disguise the fact that what they’re doing is called “computer programming” which sounds too boring. The term “AI” is about as useful as describing a flight on a Boeing 737 as a “magic carpet ride”.

    Bad software has already caused significant damage in financial markets. The recent fad for high-volume trading software is sure to plunge everybody into chaos once more. But the “intelligence” that is at issue here is that of the idiot regulators that control the investment houses – or are meant to.

    J

  17. Hunt says:

    Sorry, 12 was directed at Callan S.

  18. Hunt says:

    The closest thing in human brain function to Laws (in the Asimov sense) would be instinct, like the will to survive, and perhaps compulsion. Asimovian robots are compelled to follow laws like a person with OCD might be compelled to count steps or lock and unlock doors. A person afflicted with OCD can break the behavior, but only accompanied by a great deal of anxiety. An Asimov robot following the three laws is like a human with an absolute compulsion that can’t be broken.

    BTW, the story I remembered was Runaround:
    https://en.wikipedia.org/wiki/Runaround_(story)
    I didn’t get it very right, but the gist is the same.

  19. Jochen says:

    An Asimov robot following the three laws is like a human with an absolute compulsion that can’t be broken.

    Another problem with such laws is that there is no feasible way to implement them—I mean, how exactly is a robot to know, for instance, whether its inaction allows a human to come to harm? These are not well-formed decision problems on some computable domain. Even the simplest of them would require AGI to evaluate and implement; that is, the robot’s faculties that check whether a given law is violated by its actions must themselves be fully intelligent. But if we need the laws to trust an artificially intelligent being, we just run into a chicken-and-egg problem: why trust the AI that evaluates the laws?

    Not to mention that, under general conditions, none of the laws are even evaluable: it is always possible that a robot, should it survive, will in the future save a greater number of human beings than might be harmed by actions it needs to take in order to survive, thus it could choose self-preservation over obedience, or even over not harming humans. And so on.

  20. John Davey says:

    Jochen

    “Another problem with such laws is that there is no feasible way to implement them—I mean, how exactly is a robot to know, for instance, whether its inaction allows a human to come to harm?”

    Because it’s intelligent? Can work it out for itself?

    Welcome to the non-intelligent world of AI.

    JBD

  21. Jochen says:

    Because it’s intelligent? Can work it out for itself?

    No, it can’t: not only is there a combinatorial explosion regarding possible consequences, most of them are completely unforeseeable. There’s simply no way to work through all possibilities to decide whether a given action (or its absence) will eventually lead to harming a human being.

  22. John Davey says:

    Jochen

    “No, it can’t: not only is there a combinatorial explosion regarding possible consequences, most of them are completely unforeseeable. There’s simply no way to work through all possibilities to decide whether a given action (or its absence) will eventually lead to harming a human being.”

    Hence my point about AI – you have to decide beforehand what an “AI” system is going to do. “Programming intelligence” is oxymoronic: the very act of codifying something and turning it into a ruleset removes intelligence. It creates automatons, literally.

    Intelligence is the very opposite of acting like a typical computational dumbo whose outputs are entirely determined by their inputs. It involves flexibility, unpredictability and creativity.

  23. Hunt says:

    Intelligence is the very opposite of acting like a typical computational dumbo whose outputs are entirely determined by their inputs. It involves flexibility, unpredictability and creativity.

    Complex processing from input to output might well give flexibility, creativity, etc. Unpredictability is relative to an observer. My laptop is quite unpredictable enough and entirely programmed, though at times I think it’s possessed.

    I have to say, this is an old and tired argument. Viz. accounting software is boring and predictable, therefore all software will always be boring and predictable.

  24. Sean says:

    Peter, I read this article on the author’s negative opinion of using IP (information processing) as a model for the brain; I would love to read your thoughts if you have written anything on the subject. Here is the article if you are curious:
    https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

  25. John Davey says:

    Hunt

    In a rule-based system, for a fixed set of inputs there will be the same outputs. It’s a goal of every single software engineer there’s ever been.

    Unpredictability is a quantifiable metric. Observers are irrelevant. Does 1 × 2 = 3 depend on your viewpoint? No. Rule-based systems are mathematical.

    The use of random numbers does not affect this of course. Numbers are random relative to an observer – but not to the system that processes them. They are just numbers.

    J

  26. Callan S. says:

    Jochen,

    I don’t think any of Asimov’s stories showed the robots actively trying to find workarounds. Only spontaneously occurring errors. But okay, you see them (Asimov tales or otherwise) looking for workarounds regardless of whether the imperative is a boon or bane to us. But how does that work into your point? If someone makes ‘bad’ robots and the robot looks for a workaround, someone else could just as easily make a robot with ‘good’ imperatives and…the robot will look for a workaround and do things we think are bad. Just by itself it’s the same deal, really, isn’t it? Just more complicated, since it won’t be Skynet but instead Earthfree that gets us?

    Anyway, generally drug addicts can be separated from the drug source and have some amount of loyalty and care for family (with separation from the drug source being the first step rather than relying on the second, of course!)

    I mean if certainty was a drug, look how many people are on that and not looking for ways to avoid it? Why? Because they are certain that certainty is a good thing, of course! It can be that circular.

  27. Callan S. says:

    Sci,

    Super AI that threatens humanity, OTOH, is a science fantasy notion that depends on mistakenly conferring rights and responsibilities to Turing Machines.

    ?
    Why does an AI need 1: Rights and responsibilities to 2: Threaten humanity (at some scale)?

    I’m fine with animal intelligence. I’d be fine with android intelligence if we can show the way the biological brain works can be emulated using non-organic material. Right now we don’t even know if field effects play a role in how the brain works.
    Turing Machines that magically become conscious only when the Holy Grail program is run? Just dualism in new clothing.

    I think you’re arguing right past me – I only mentioned ‘intelligent behavior’ – I did not mention consciousness. You can call it all a very fancy vending machine if you want – the issue here is how efficiently the vending machine can kill, via adaptive algorithms. I wouldn’t dismiss the threat of heat-seeking missiles on the basis of ‘not conscious’ and neither would you. The idea of adaptive pursuit-kill machines that do so better than the heat seeker is what’s being speculated about. You don’t have to admit the machine is conscious to say it could be better than a heat seeker. Did the first Terminator movie just seem like dualism in new clothing? (I grant the latter movies lend some ‘special spirit’ to the ‘good guy’ terminators – if you want to see dualism there, you’re probably right)

    You can just see the robot as a very clever gun, and it’s the owner of the gun killing people, not the smart gun. That’s fair enough. But the question of ‘self guiding and firing without the owner saying yay or nay’ robots is an important one. I mean, how do you take ‘self’ driving cars – is that ‘self’ a new dualism? We are talking about large slabs of metal moving at speeds enough to kill, and a mechanical process in them that is supposed to avoid deaths on the road, but the person who made the process won’t be there to stop the device. If not ‘self’, what word would you use for the designer delegating life/death tasks to a machine? How do we treat it now when a mine goes off on someone who is a non-combatant? There’s a lot of questions without even going near consciousness.

  28. Sci says:

    @ Callan: Agreed.

  29. Sci says:

    Timely article on Aeon challenging computationalism:

    https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

    “A few years ago, I asked the neuroscientist Eric Kandel of Columbia University – winner of a Nobel Prize for identifying some of the chemical changes that take place in the neuronal synapses of the Aplysia (a marine snail) after it learns something – how long he thought it would take us to understand how human memory works. He quickly replied: ‘A hundred years.’ I didn’t think to ask him whether he thought the IP metaphor was slowing down neuroscience, but some neuroscientists are indeed beginning to think the unthinkable – that the metaphor is not indispensable.

    A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.”

  30. Callan S. says:

    Ouch…that article. Okay, it’ll be easier to get over by making it harder at first: Computers do not process information. They are just a process of physics. They are not governed by algorithms, they are governed by physics.

    It’s harder at first, but its easier in the long run than thinking ‘all they do is X’ when they don’t even do X.

    Further, the writer doesn’t seem to understand how brain cells interact in order to conceive of something that might be called ‘memory’. It’s a lot like saying money doesn’t work because where does each note store the price of bread, aye? Whereas if we picture a dam that can overflow (into the feed rivers of other dams), with some of the rivers to it (fed by prior, potentially overflowing dams) providing more water, some less, no one would dispute that river feed-ins would result in certain patterns of overflowings (if only by experimenting with the set-up). “But how do the dams remember to do that pattern!?” The question is obviously an inapplicable one.

    It should also be clear that if a key on a keyboard controlled river flows, and the rivers and dams were arranged in certain ways, the resulting patterns could be made to match the results one expects for computations of basic math or even more advanced math. Note that I’m saying it can match expectation – that doesn’t mean it’s somehow ‘processing information’. There’s nothing more than rivers and dams there, any more than there was before.

    I at least agree that if the metaphor is being understood out there in this way, it’s not doing much good. Also I agree on the inability to download ourselves. The rivers and dams are all there is. Duplication obviously just produces a clone. You can’t download a river.

  31. John Davey says:

    Sci

    https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

    What a fine article. Some sanity at last in academic circles, it would seem.

    JBD

  32. Hunt says:

    John Davey,
    No intelligent machine will ever be simply rule based, and it certainly won’t have fixed input. At the very least it will have adaptive rule systems. Even identical input at different times will have different output. Even today, machines are getting smarter; my bank machine now scans checks and usually interprets them correctly. Perhaps you consider this a fake form of “smart,” and I agree to an extent. It’s doubtful the type of machine learning algorithms popular today will morph directly into general artificial intelligence. But there’s still no reason to think that machines aren’t going to keep getting “smarter”.
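Hunt’s claim that an adaptive rule system gives different outputs for identical inputs at different times can be sketched minimally. The `AdaptiveScorer` class and its update rule below are hypothetical illustrations, not any particular learning algorithm:

```python
# A minimal "adaptive rule system": the rule is itself updated by the
# inputs it sees, so the same input presented at different times
# yields a different output.

class AdaptiveScorer:
    """Scores an input against a running baseline of past inputs."""
    def __init__(self, rate=0.5):
        self.rate = rate      # how quickly the rule adapts
        self.baseline = 0.0   # the part of the rule that changes

    def score(self, x):
        out = x - self.baseline                           # output depends on history
        self.baseline += self.rate * (x - self.baseline)  # adapt the rule
        return out

s = AdaptiveScorer()
first = s.score(10)   # 10 - 0.0 = 10.0
second = s.score(10)  # 10 - 5.0 = 5.0  -- identical input, different output
```

Davey’s reply below is that this is still “just another rule system”; the sketch doesn’t settle that, it only shows what “adaptive” means operationally.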

  33. Jochen says:

    But how does that work into your point? If someone makes ‘bad’ robots and the robot looks for a workaround, someone else could just as well make a robot with ‘good’ imperatives and…the robot will look for a workaround and do things we think are bad. Just by itself it’s the same deal, really, isn’t it?

    Well, of course. That’s why I said:

    To the extent that thus the capacity for rational thinking in fact unmoors us from the hard-set goals that (either evolutionarily or by design, as in the case of our robot children) we come pre-programmed with, we should rather expect that similar rational faculties converge on similar goals, with the only difference being that faster processing speed achieves this convergence sooner.

    So, met with similar challenges (survival in a world of scarce resources), an agent of similar rationality might be expected to come to similar conclusions about advisable behavior, regardless of any hard-wired goals; and so, provided we didn’t do too terrible a job of solving this problem, AI at large will ultimately find solutions similar to ours (and if we did, well then maybe it’s just as well if the robots rebel).

  34. Jochen says:

    Computers do not process information. They are just a process of physics. They are not governed by algorithms, they are governed by physics.

    That’s completely vacuous, though; anything’s governed by physics. It’s like saying, when trying to pick out the red ball from balls of all kinds of colors, ‘it’s the colored one’—it tells you nothing, because it’s true of all elements in the domain.

    So, what’s needed is something that picks out computers among all the things that are ‘governed by physics’. And here, it’s completely valid to say that they ‘process information’—more accurately, that they process information in some human-interpretable format (since after all, any physical process can be interpreted as ‘processing information’, but the vast majority of such interpretations will be so complex as to be completely unusable). Computers, thus, take information in some human-understandable format (say, writing) and produce new information by means of algorithmic manipulation of that information that’s again human-understandable. This isn’t perfect, but it picks out most of the things we want to call ‘computers’ from among the set of ‘physical things’. So, with a little bit of charity, claiming that computers ‘process information’ is a perfectly valid statement.

    Of course, that definition already makes the problem with the idea that ‘the brain is a computer’ completely clear—plainly, brains don’t process information in human-interpretable form. But I feel that I’ve already entered into one too many rants on why computation is an insufficient notion to ground mentality here…

  35. Callan S. says:

    Jochen,

    So, met with similar challenges (survival in a world of scarce resources), an agent of similar rationality might be expected to come to similar conclusions about advisable behavior

    Why? Previously I thought you’d assumed the robot would ‘figure/do what’s right’. I’ve moved on from that, but what is ‘advisable behavior’?

    I’d think at that point it’s a matter of Darwinism. If you can do a hard drug and not die (and create some kind of offspring) then you can just keep doing it. Why figure anything else? If you can fill the world with paper clips without dying/being smashed in the process, why would you figure this advisable behavior you mention?

  36. Callan S. says:

    So, with a little bit of charity, claiming that computers ‘process information’ is a perfectly valid statement.

    What charity? We are talking about what is the state of reality. There’s no charity to be had in that, only how things are. If I’m selling beans in a jar and I say there are one hundred beans in it but really there are ninety-nine, it’s charitable to say I’m not cheating the customer – but I’m not actually right. To say human brains do not work like computers and then have the sum of the evidence rely on ‘charity’ – what scientific conclusions rely on charity?

    “Human brains aren’t like computers because of evidence I have…which you have to let me have or you are uncharitable”? It can’t come down to that.

    Letting go of the notion that ‘computers process information’ makes things more difficult at first, but it introduces semantic hygiene. Cleansing the subject of assumptions which are not a result of fact but of charity. Of heuristic. If the writer wants to argue ‘where in the brain cell is something remembered?’ then he dabbles in the fact of the matter. There is no retreat back to charitable ground after that. Not unless one side has to deal in facts and no charity and the other side has to have all the charity they ask for affirmed.

    So, what’s needed is something that picks out computers among all the things that are ‘governed by physics’. And here, it’s completely valid to say that they ‘process information’—more accurately, that they process information in some human-interpretable format

    Not seeing the connection between your conclusion of validity and this ‘something’ that ‘picks out’. It seems like it drifts towards the ‘hard problem’. I’ll call in advance that ‘because consciousness can’t be explained, the computer does process information!’ is where that drift will go.

    To me it seems you’ve just hit against the idea that the distinctions you are drawing are arbitrary – and this reflects in you then trying to reconstruct the distinctions as being relevant, since you draw on a ‘something’ that ‘picks out’. You wrote ‘something’ instead of ‘someone’ and ‘picks out’ instead of ‘chooses’ because you ran up against the semantic hygiene I’m applying. The hygiene used leads to a cold, cold place and it’s like your response has ice sticking to it from the contact. ‘Someone’ turns to ‘something’ with the cold like fingers turn blue in the snow.

    Or if that all seems nonsense, well I think the article is nonsense in asking ‘where in the brain cell does it remember?’, which gives no charity but still demands charity in order to not be a silly question.

  37. john davey says:

    Hunt

    “adaptive rule systems.”

    An “adaptive rule system” == “yet another rule system”.

    Oh yes – the old “feedback loop” argument. A computer program with a feedback loop is otherwise known as .. just another computer program.

    I can honestly state that in 25 years in the software industry I’ve never seen a program (other than basic “hello world” demos) without a feedback loop in it. Or an “adaption to input”.

    Either every single computer program in existence is intelligent – or none of them are. It’s as simple as that. My bet is the latter case is a bit more likely to be true.

    JBD

  38. john davey says:

    “Computers do not process information. They are just a process of physics. They are not governed by algorithms, they are governed by physics.”

    Computers are logical machines for the processing of information. No particular physical architecture is necessary, and they do not, in fact, require any material realisation at all to be functional.

    This isn’t a matter of conjecture: it’s a matter of definition. Computers are defined to be information processors. How the computational model is achieved (which is itself a mathematical model) is of no importance. In today’s cloud world that is most self-evident, as a VM could be emulating a computer over numerous different physical sites on an indeterminate basis. That computer could itself be emulating another computer, and so on. Suffice to say there is no link between a computer and physics at all.

    J
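Davey’s substrate-independence point – the same computation realised over different physical (or emulated) substrates – can be sketched as one toy program run on two different “hardware” back-ends. The toy instruction set and the store classes are invented for illustration, not any real VM or instruction set:

```python
# The same abstract machine (a toy two-register counter machine)
# realised on two different substrates: a hash table and a flat
# array. The computation is identical either way.

PROGRAM = [("inc", "a"), ("inc", "a"), ("inc", "b"), ("add", "a", "b")]

def run(program, store):
    """Execute the toy program against any store exposing get/put."""
    for op, *args in program:
        if op == "inc":
            store.put(args[0], store.get(args[0]) + 1)
        elif op == "add":
            store.put(args[0], store.get(args[0]) + store.get(args[1]))
    return store.get("a")

class DictStore:                         # substrate 1: a hash table
    def __init__(self): self.m = {}
    def get(self, r): return self.m.get(r, 0)
    def put(self, r, v): self.m[r] = v

class ListStore:                         # substrate 2: a flat array
    REGS = ["a", "b"]
    def __init__(self): self.cells = [0, 0]
    def get(self, r): return self.cells[self.REGS.index(r)]
    def put(self, r, v): self.cells[self.REGS.index(r)] = v

print(run(PROGRAM, DictStore()))  # 3
print(run(PROGRAM, ListStore()))  # 3 -- same result, different "physics"
```

Either store could itself be a simulation running somewhere else entirely, which is the VM-emulating-a-VM point in miniature.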

  39. Hunt says:

    John Davey,
    It might help to examine what we mean by intelligence, since I think most of the disagreement people have on this actually constitutes talking past each other. Chances are we’ll fare no better than anyone else ever has, but it’s an interesting debate anyway. If your definition of intelligence specifically excludes anything a machine could ever do (as I suspect it does) then your conclusion is already set.

    Some definitions from the internet:

    the ability to learn or understand things or to deal with new or difficult situations

    the ability to learn, understand, and make judgments or have opinions that are based on reason

    the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviors within an environment

    Many definitions use words like “grasp” and “perceive” and “have opinion”, which have overtones of intentionality. Since machines aren’t intentional and nobody yet knows how to make them that way (except Jochen; humor, not snark), it’s natural to conclude machines may never be intelligent. On the other hand, if your definition is more mundane – the ability to learn and retain knowledge, for instance – then it seems possible that intelligent machines may already exist.

  40. Callan S. says:

    John Davey,

    Computers are logical machines for the processing of information. No particular physical architecture is necessary, and they do not, in fact, require any material realisation at all to be functional.
    This isn’t a matter of conjecture: it’s a matter of definition. Computers are defined to be information processors. How the computational model is achieved (which is itself a mathematical model) is of no importance. In today’s cloud world that is most self-evident, as a VM could be emulating a computer over numerous different physical sites on an indeterminate basis. That computer could itself be emulating another computer, and so on. Suffice to say there is no link between a computer and physics at all.

    My goodness, definition comes first and reality is made to conform to the definition in some way after the fact?? Rather than definitions conforming to/being derived from reality?

    As I said to Jochen, the author of the article, Epstein, can’t ask ‘where in the brain cell is memory stored?’ without asking about the state of reality. His question is one where definitions are derived from reality. That or it’s not an honest question, just hiding behind a language game where, when and if it suits him, whatever physical example is dismissed simply on the basis that his definition doesn’t actually have to link to any kind of material realisation.

    He can point at a computer and argue we are not it – anyone else can point at a computer and he’ll say it has nothing to do with his argument? These things cannot be a convenient one way path. When he refers to an actual computer in real life, he’s opened himself up to reality being the one to shape definitions.

  41. Jochen says:

    Why? Previously I thought you’d assumed the robot would ‘figure/do what’s right’. I’ve moved on from that, but what is ‘advisable behavior’?

    Behavior that leads to favorable outcomes with a high degree of likelihood.

    What charity? We are talking about what is the state of reality.

    No, we’re talking about how to interpret a given statement made about reality (or elements thereof, namely, computers). You can interpret it charitably: saying ‘computers process information’ is completely valid, as long as you keep in the back of your head the caveat that everything can be considered to process information, and that hence computers are more narrowly those things that process information in a way that’s useful to their users.

    Or, if you’re predisposed to disagree, you can interpret the statement without charity, and thus, assume a capital misunderstanding on the part of the person making that statement.

    Not seeing the connection between your conclusion of validity and this ‘something’ that ‘picks out’. It seems like it drifts towards the ‘hard problem’.

    Nothing at all to do with the hard problem. The something that picks out computers among the set of physical things is merely whatever defining characteristic it is that makes something a computer.

    Like, for instance, what makes something a chair is that you can sit on it: saying that a chair is a physical object is true, but vacuous, as it doesn’t tell you anything about chairs at all. Saying a chair is something to sit on, on the other hand, neatly delineates the set of chairs from the set of non-chairs within the set of all physical things, and is thus a contentful proposition.

    Likewise, saying ‘computers are physical processes’ doesn’t tell me anything about computers. ‘Computers process information’, however, does: for instance, it tells me that the stone on the ground over there is not a computer, while the thing I’m typing into right now is. Again, I believe this needs to be charitably interpreted, to prevent Putnam-style triviality, but as long as we follow this convention, then telling somebody tasked with figuring out whether something is a computer that it is a computer if it processes information will aid them in completing their task, while telling them that it’s a physical process won’t.

    About the rest of your post, I’ve literally no idea what you’re trying to say, sorry.

  42. John Davey says:

    callan

    “My goodness, definition comes first and reality is made to conform to the definition in some way after the fact?? Rather than definitions conforming to/being derived from reality?”

    https://en.wikipedia.org/wiki/Universal_Turing_machine

    The “universal Turing machine” provides the theoretical basis for the modern computer. This is the theory from whence they all come – inspiring Von Neumann, for instance. All modern computers are derived from variants of the Von Neumann computer. That’s the historical trajectory of computing: first came the mathematics, then came the implementation. And I should think so too. Imagine how ridiculous it would be to construct a computer without knowing what it was meant to do. Who would do that?

    Computers don’t actually exist physically. They exist like paintings or books – yes, there is a physical interface, but the physical interface isn’t tied to any model, like a copy of a book isn’t tied to any type of paper or print. What a computer does is manipulate symbols. It’s exactly like a slowly changing book. If there was a book of 0’s and 1’s and it changed over time, that’s what a computer does. That’s what it’s designed to do, as per Turing and Von Neumann.
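The “slowly changing book of 0’s and 1’s” is close to Turing’s original picture. A minimal sketch, assuming a one-state rule table that simply inverts a tape of bits; the machine and its encoding are illustrative, not from the comment:

```python
# A minimal Turing machine: one rule table, one tape. Each step
# rewrites one symbol and moves the head, so the tape is a "book"
# whose pages change over time. This particular machine inverts bits.

def turing(tape, rules, state="S", pos=0, blank="_"):
    """Run a rule table until the halt state; return the final tape."""
    tape = list(tape)
    while state != "HALT":
        symbol = tape[pos] if pos < len(tape) else blank
        new_symbol, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: in state S, flip the bit and move right; halt on blank.
RULES = {
    ("S", "0"): ("1", "R", "S"),
    ("S", "1"): ("0", "R", "S"),
    ("S", "_"): ("_", "R", "HALT"),
}

print(turing("0110", RULES))  # "1001"
```

Nothing here depends on what the tape is made of, which is the sense in which the model isn’t tied to any particular material realisation.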

  43. Callan S. says:

    Jochen,

    Behavior that leads to favorable outcomes with a high degree of likelihood.

    Favorable for whom?

    I think the premise of ‘bad bots’ is quite valid when one bot’s ‘favourable outcome’ is another human’s ‘horrific act’.

    No, we’re talking about how to interpret a given statement made about reality (or elements thereof, namely, computers).

    How does this work – when Epstein says ‘you people who say we think like computers, you are wrong’, he isn’t interpreting them uncharitably, but when someone else says he is wrong, it is an uncharitable interpretation??

    By your view all it’ll come to is that some people will interpret Epstein charitably – likely those who think he is right. And some people will interpret those who say we think like a computer in a charitable way – likely those who think they are right.

    What does this come down to, a popularity contest?

    No, when someone claims they are Napoleon, it’s not just that I’m interpreting them uncharitably, it’s that they are wrong. It either leaves interpretation language games at some point by some means or it’s just a popularity contest, nothing better than that.

    Likewise, saying ‘computers are physical processes’ doesn’t tell me anything about computers. ‘Computers process information’, however, does

    I’m asking you to figure out why it tells you something. You’ll find it’s because it’s practical (heuristic). And I’m asking you, just for a while, to abandon that practicality. Because it’s only a practicality, not evidence of anything (e.g., a stop sign at a road isn’t proof you have to stop or that the sign has some deep meaning of stopyness – it’s just far more practical to stop there than to drive out willy-nilly).

    Ignore what you’re told is there and instead look at what’s there. That’s why I gave the rivers and dams example. So instead of it being whatever you’re told, it’s what you actually see. I wish I had a physical set of water channels and miniature dams and button controls for the primary channels – you’d either have to engage the object and concept by physically engaging with it, or by not doing so show your disinterest. Right now you keep saying ‘computers process information’ as if you’re engaging with me, when you’re really not. You know there’s water and dams and more water, some dams being fed by more than one potential river. Let’s look at things without words stamped to them, for a while.

  44. Jochen says:

    Favorable for whom?

    For those exhibiting that behavior.

    I think the premise of ‘bad bots’ is quite valid when one bots ‘favourable outcome’ is another humans ‘horrific act’.

    Yes, but that’s just the same situation we’re in wrt other humans. And also, it’s the situation the bots are in wrt other bots. Or, indeed, humans.

    No, when someone claims they are Napoleon, it’s not just that I’m interpreting them uncharitably, it’s that they are wrong.

    But when someone says ‘computers process information’, then, on all but the most biased, uncharitable interpretations, they’re right. (Note that I’m not speaking to the rest of Epstein’s essay; I don’t think he argues his case well—indeed, for the most part, I don’t think he brings forward anything I would call an argument at all.)

    Seriously, I’m not advocating for something controversial here. It’s called the principle of charity, and it’s merely the practice of interpreting your opponent’s arguments in their strongest possible form; otherwise, you simply run the risk that your counterarguments fail to address them properly (as you’re targeting a weaker argument than they may be making). It’s not about giving your opponent undue lenience in interpreting them, it’s rather about making sure that your own arguments support the conclusions you wish to draw.

    I’m asking you to figure out why it tells you something.

    I know exactly why it’s telling me something: it allows me to exclude certain things from being computers (as opposed to merely saying ‘they’re physical’, which allows no such delineation). If you want to substitute this characterization with your own, then you have to supply something at least as informative, which so far, you haven’t.

    You also seem to have some confusion regarding what is, and what we talk about. Ultimately, the definition ‘computers process information’ merely tells us what sort of thing to apply the name ‘computer’ to; it doesn’t constitute some metaphysical thesis on what there is. And of course, such definitions cannot be forgone: they’re necessary to make sure we’re talking about the same things. So, in order to properly interpret Epstein’s essay, you need to follow his definitions: if computers are things that process information, and the brain is not a computer, then the brain is not something that processes information.

    This is now a thesis you can accept or attack; but without first getting clear about the definitions at its basis, you simply run the risk of producing just so much verbiage.

  45. Callan S. says:

    John,

    I’d argue the pattern was that the electronic valve was invented first, and then, as a result of this physical device being available, the idea of the various configurations of said device was devised (with said configurations named ‘computer’). But it’d really just be arguing against the way you want to say the materials of the world were made to match the idea. A chicken-and-egg debate.

    The main question is: He can point at a computer and argue we are not it – anyone else can point at a computer and he’ll say it has nothing to do with his argument? These things cannot be a convenient one-way path. When he refers to an actual computer, a physical object, in real life, he’s opened himself up to reality being the one to shape definitions.

    To keep up this ‘A computer doesn’t exist really’ thing, he can’t ask ‘where in the brain cell is memory stored?’. It’d just be hypocritical.

  46. Callan S. says:

    Jochen,

    Yes, but that’s just the same situation we’re in wrt other humans. And also, it’s the situation the bots are in wrt other bots. Or, indeed, humans.

    Not if they are not constructed. But they are more likely to be constructed if folk say it’s a non-issue. That’s the choice we have, at least in terms of having electronic children. Never mind that what one might call their ‘psychology’ has the capacity to be utterly alien to us. We have a continuing history of not understanding each other (to the point of war) when our brains and resultant psychologies are very similar. Here it’s the question of a whole new race with alien psychology.

    But when someone says ‘computers process information’, then, on all but the most biased, uncharitable interpretations, they’re right.

    We’re back to ‘it’s true because otherwise you’re doing a bad thing’ rather than ‘it’s true because of this physical evidence’

    The principle of charity isn’t a shut-down method. It’s not just for one side to practice. What charity have you granted my idea so far, Jochen? None, because I didn’t go first? If your link advocates for that ‘charity’ then it advocates for corrupt charity – anyone can call someone else uncharitable and then deny charity themselves on that basis. It’s a great way to shut down dissenting thought – say anything that’s contrary to one’s beliefs is uncharitable. Beliefs affirmed. Simple enough.

    So even if I’m not being charitable, you’re stuck having to grant some charity yourself, are you not?

    I know exactly why it’s telling me something: it allows me to exclude certain things from being computers

    A mere practicality, as noted. Now abandon the practicality. What is left?

    It’s a fair question. With a ‘house’ built of bricks, when we abandon the practicality of ‘house’, what is left? Bricks. Just bricks.

    What’s left when you abandon your practicality? Symbols? Or just objects?

    This is now a thesis you can accept or attack; but without first getting clear about the definitions at its basis, you simply run the risk of producing just so much verbiage.

    Please, I’ll beg charity now. You know what my rivers and dams were, in relation to this.

    There’s no triumph of argument in being definitionally correct. If I tell Epstein he has counterfeit money, and he or you were to insist he has money and that I am not using his definition of money in saying it’s fake, this is just missing the point in the name of definition. I’m not using your definitions because your definitions don’t apply to what you or he are talking about. You just think they do.

  47. Jochen says:

    We’re back to ‘it’s true because otherwise you’re doing a bad thing’ rather than ‘it’s true because of this physical evidence’

    How on Earth could a definition be true because of physical evidence? A definition is just a convention for talking about things; it’s just saying, here, these are the things we call ‘computer’. Bringing physical evidence into it is simply a category error.

    What charity have you granted my idea so far, Jochen? None because I didn’t go first?

    Let’s have a look at the flow of the argument so far. Epstein uses the definition ‘computers are things that process information’. You say, no, computers are physical things. I say, that’s true, but it’s true of everything—so it doesn’t tell us anything. Furthermore, I point out that one can interpret Epstein’s definition in such a way as to have it be sensible, and hence, the principle of charity urges us to do so.

    On the other hand, I don’t know how to apply charity to your point, because it simply seems to be, ‘no, Epstein (wrt this definition) is wrong’. Hence, I keep pointing out that there is a way to interpret Epstein’s definition sensibly; and then, you come in with your ‘physical evidence’, and I simply don’t have a clue what to make of that. What physical evidence would satisfy you?

    A mere practicality, as noted. Now abandon the practicality. What is left?

    A confused Jochen, it seems to me, asking at any round, ‘well, but what are these computer things you talk about?’, to which you seem to want to respond, ‘physical things’. So, is my coffee mug a computer? (I’m reasonably confident it’s a physical thing.)

    There’s no triumph of argument in being definitionally correct.

    I’m not saying that. Epstein makes an argument (however bad it might be) that our brains are not computers. For this, he proposes a definition of ‘computer’—as he must, since otherwise, he’s merely producing noises (or squiggles on paper). Granting his definition (or supplanting it with a more appropriate one) is the way to address his argument; but you just want to render all definition null and void, or base them in ‘physical evidence’ somehow.

    I’m not using your definitions because your definitions don’t apply to what you or he are talking about.

    Then you’re free to provide better ones!

  48. Callan S. says:

    Bringing physical evidence into it is simply a category error.

    If you’re certain of that, then that’s it. It’s hardly like there’s room for discussion when one side simply folds their arms on the subject the other proposes for discussion.

    By your reasoning I can say there are dragons flying around outside my house and I ought to be taken seriously in saying so. As bringing in physical evidence as to whether they are some sort of dragon creature or just regular birds is called a category error.

    Let’s look at what’s actually there rather than just keep calling them ‘dragons’/’symbol processors’.

    You say, no, computers are physical things. I say, that’s true, but it’s true of everything—so it doesn’t tell us anything.

    That was the charity? No attempt at “Well, it doesn’t seem to go anywhere – can you give me some more information about what you’re trying to get at?”? You’ve just come to a conclusion that dismisses the approach and that’s it.

    I’ve mentioned rivers and dams and you haven’t mentioned them once? Surely it’s clear that to simply mention them would at least be a first step of charity? Instead you’ve just taken it I’m saying “He’s wrong!” and then there’s a sort of ‘scene missing’ after as if my water talk didn’t happen. But moving on – can we go to that first step together?

    A confused Jochen, it seems to me, asking at any round, ‘well, but what are these computer things you talk about?’, to which you seem to want to respond, ‘physical things’. So, is my coffee mug a computer? (I’m reasonably confident it’s a physical thing.)

    Please, Epstein has basically gestured to a particular physical object (like a ‘laptop computer’), but you’re talking about mugs now!

    You know there are no semiconductor transistors in your mug. No electronic valves.

    As soon as we abandon ‘computers process symbols’ for some reason you start looking at objects that have none of the internal components of the objects Epstein pointed at. I’d call it deliberate obfuscation, but I’m trying to be charitable and instead ask why?

    In terms of providing definitions, I have – I’ve discussed the rivers and dams. Instead of taking a bundle of them and defining the bundle under an arbitrary name, I’m defining the components (okay, I’m using rivers and dams because talking about the saturation point of semiconductors is kind of drab and less intuitive than rivers and dams – but I can do drab if necessary).

    Once we’ve defined the physical processes, it’ll be clear no ‘symbols’ somehow float amidst them, let alone are processed.

    And it’ll become clear how brain cells (plural) can store ‘memory’. Assuming it was a genuine question to begin with.

  49. Jochen says:

    If you’re certain of that, then that’s it. It’s hardly like there’s room for discussion when one side simply folds their arms on the subject the other proposes for discussion.

    To be fair, I asked you how a definition could be true due to physical evidence; you could have elaborated on that, but chose not to. So ultimately, as you yourself say:

    No, when someone claims they are Napoleon, it’s not just that I’m interpreting them uncharitably, it’s that they are wrong.

    So, when you say that it’s somehow a question of physics how we define computers, then well, you’re wrong. It’s a question about the use of language; it’s a question of definitions. Definitions are not out there in the world for us to discover; it’s not that there is some entity that is by metaphysical say-so called a ‘computer’, and we have to discover which entity that is, it’s that there are certain things that we call ‘computers’ because they share a certain similarity, and putting that similarity into words in such a way that whatever those words apply to is a computer is what definitions are for.

    Let’s perhaps get really clear about this. Most definitions are of a form somewhat like: “every thing that y is an x”; e.g. “every thing that flies and has feathers is a bird”. No amount of physical evidence is going to change anything about that definition; there’s no sense in which you can go out in the world and find something that doesn’t meet that definition, but which nevertheless is a bird—there’s no definition-independent test for ‘bird-ness’, it’s just a category we find useful in referring to things in the world.

    Now, of course, we may find that slightly changing that category may be even more useful, and hence, amend our definition accordingly—say, referring to some pre-agreed upon taxonomy including, e.g., flightless birds. But this is not going out into the world and finding our definition to be wrong; we could just as well have stuck with the old one, at the expense of at best slightly more complicated wording. It’s just our desire for economy in expression that made us change our definition, not ‘physical evidence’.

    With that in mind, let’s consider your example:

    By your reasoning I can say there are dragons flying around outside my house and I ought to be taken seriously in saying so. As bringing in physical evidence as to whether they are some sort of dragon creature or just regular birds is called a category error.

    As you can see now, it’s all a matter of definition. If dragons are defined as ‘winged feathered creatures’, and you are clear upfront about that definition, then yes, of course you can say that there are dragons flying around in the skies; you would, however, be using a definition that’s different from the one most people use. But it’s not an empirical matter which definition is the right one; it’s merely convention.

    What you can’t do, of course, is claim that there are dragons flying around without making your idiosyncratic definition explicit; because in that case, everybody will assume that you use the most common definition of dragon as a large, winged reptile capable of breathing fire, or something like that. Only then does your assertion become false.

    Epstein doesn’t fall into this trap: he gives a clear-cut definition of what a computer is: something that processes information; and then, he argues that our brain isn’t that sort of thing. You want to deny him that definition on some nebulous ‘empirical’ basis, but fail to substitute your own, despite me repeatedly asking for clarification. So ultimately, you’re in exactly the same situation as if you were claiming that there are dragons flying around without, however, clarifying what you mean by ‘dragon’ (other than perhaps that they’re physical objects, which, again, as far as definitions go is vacuous, provided one harbors no dualist sympathies).

    No attempt at “Well, it doesn’t seem to go anywhere – can you give me some more information about what you’re trying to get at?”?

    I’ve repeatedly prompted you to give better definitions, to elaborate on your points, or flat out told you where I didn’t understand what your point was. What more can I do?

    I’ve mentioned rivers and dams and you haven’t mentioned them once? Surely it’s clear that to simply mention them would at least be a first step of charity?

    Fine. Rivers and dams. Now, would you be so gracious and finally try and explain what sort of point you were trying to make there?

    Please, Epstein has basically gestured to a particular physical object (like a ‘laptop computer’), but you’re talking about mugs now!

    You know there are no semi conductor transistors in your mug. No electronic valves.

    The situation is the following. Epstein proposes a definition of computer, and claims that our brains don’t fall under it. You claim that the definition is wrong, but you’re not proposing anything to supplant it with—so how should I be able to evaluate the claim that brains aren’t computers?

    All you’ve given me is that computers are physical things, and fine, under that definition, our brains are (or may be) computers; but so is everything. To talk sensibly about computers, and about the question of whether our brains are computers, you need to provide some indication, at least, of what you’re talking about when you use the word ‘computer’; but so far, you’ve only claimed what they’re not—namely, symbol-processing devices.

    And if I’m now to take ‘possesses transistors and/or electronic valves’ as definitional of ‘computer’, then again, brains aren’t computers; but neither are a lot of things that are generally called ‘computers’, such as the Z1, for instance.

    As soon as we abandon ‘computers process symbols’ for some reason you start looking at objects that have none of the internal components of the objects Epstein pointed at. I’d call it deliberate obfuscation, but I’m trying to be charitable and instead ask why?

    Because as soon as you claim that the definition of computer most people use is wrong, and don’t substitute another one, I simply don’t know what you mean whenever you say ‘computer’. Hence, I can’t evaluate your argument. For instance, take the two definitions you gave:

    ‘Computer is a physical thing’: Is the brain a computer? — Yes, and so’s my mug.
    ‘Computer has transistors/valves’: Is the brain a computer? — No, and neither is my mug; nor are lots of computers.

    So you see, the answer to the question depends on the definition adopted. Epstein is proposing:

    ‘Computer is something that processes information’: Is the brain a computer? — No, though not for the reasons Epstein cites.

    And again, that’s a perfectly reasonable definition of a computer—indeed, it’s implicit in the word, as ‘to compute’ is an operation performed on pieces of information. Moreover, whenever I use a computer, I enter information into it, and get different information out; hence, information is processed. This doesn’t happen with my mug.

    Now of course, using some suitably general notions, I could process information with my mug: I could code some piece of information into the initial state of all the molecules of my coffee, for instance, then have it evolve for some set time, and read out the state of molecular motion again to obtain the transformed information. But this is of course highly impractical, since I don’t have control over the molecular dynamics of my coffee.

    Hence, my definition above, that computers process information in a way accessible to their users (as otherwise, everything ‘processes information’ and thus, the notion would be trivial); and with this caveat, it becomes immediately clear that ‘the brain is a computer’ is just another name for the homunculus fallacy.

    But anyway, let me be charitable, and bracket all of the above for now. What’s your definition of a computer? What’s the import of physics on that definition? Is the brain a computer? What do rivers and dams have to do with it?

  50. Cognicious says:

    Jochen, #50: Definitions are not out there in the world for us to discover; it’s not that there is some entity that is by metaphysical say-so called a ‘computer’, and we have to discover which entity that is, it’s that there are certain things that we call ‘computers’ because they share a certain similarity, and putting that similarity into words in such a way that whatever those words apply to is a computer is what definitions are for.

    Not by metaphysical say-so, indeed, but rather by human convention. Metaphysical say-so, being unambiguous, would be easier to deal with than language use. Machines exist that are commonly called computers; that’s the starting point for understanding the word “computer.” But wait: the first definition of “computer” in the dictionary nearest my, er, laptop is “A person who computes.” So already there’s trouble. The concept “computer” has fuzzy edges for other reasons as well. Is a simple pocket calculator to be called a computer? It certainly computes. What about an abacus or a slide rule? Does being nonelectronic disqualify these computing tools from computerhood?

    Jochen: Moreover, whenever I use a computer, I enter information into it, and get different information out; hence, information is processed.

    When I use a computer to send an e-mail, I don’t get different information out.

    I don’t think you guys will make progress in your discussion until you agree on a definition of “information.” Information is what informs, but a computer, for all its inputting, manipulation, and outputting of something or other, can never be said to be informed. It’s necessary to distinguish between information and the form in which information comes. A computer is like an illiterate messenger who carries notes between two persons who can read.

  51. Cognicious says:

    Wrong number just above. I should have said “Jochen, #49.”

  52. Callan S. says:

    To be fair, I asked you how a definition could be true due to physical evidence; you could have elaborated on that, but chose not to.

    Then it’s all me, I guess – you didn’t say ‘Bringing physical evidence into it is simply a category error.’ and close off discussion with your conclusion. It was me who chose not to continue, sure. And right now you’re stating it as fact that it’s me who closed it off – not a ‘maybe’ but a ‘definitely’, not at all closing off further discussion. All the while insisting on charity?

    So sure, it was all me. Though there are reasons why such an admission isn’t satisfying.

    To drop some conclusion bombs myself, I’m amazed at what I’d even consider an unhealthy use of definitions, and not just from yourself. Take the whole bird definition – ‘No amount of physical evidence is going to change anything about that definition’. It’s as if for you the definition of bird came first, THEN birds came after that. As if birds didn’t come first and their physical properties were what determined the definition. I’m reminded of the post here about people who cannot use visual imagination – and wonder if seeing definitions come before reality itself (rather than the other way around) is yet another uncommonly identified difference.

    Once upon a time the sun was defined as something that revolves around the earth. Physical evidence changed the definition.

    I’d sift further through trying to answer, but the errors are all on my part (fortuitous, that) and I have an entirely backward idea of definition. So what use is that to anyone? Your post is a testament to the undeniability of Epstein’s grasp of the situation. Heck, why does Epstein need charity when he’s so definitely made no mistakes?

    Consider, what if there were a number of errors in your argument. Do you think I can prise your mind open to this with words, or that you have to open the door a little for me – not just look for the way I am wrong and write them up as lists, but actively look for ways you could be wrong? Where is the list of how you could be wrong? There’s only one there for me. I ran into someone once on another forum who said it was other people’s job to look for how he was wrong, not his. As I think that it is impossible to prise open someone’s mind with words (barring some teacher/student relationship), I think he had perfectly mastered being closed-minded. It frightened me, to be honest.

    Anyway, I said to just consider and the charitable thing would be to just do that. If it just seems another argument to win, it rather proves my point on your category error statement. If I continued I’d just be setting up more pins to strike down.

  53. Jochen says:

    Cognicious:

    When I use a computer to send an e-mail, I don’t get different information out.

    Sure you do. The display changes, tells you maybe ‘mail sent successfully’, giving you the information that a certain process has been carried out—taking the original information you entered (the e-mail), encoding it into some format for data transfer, dialing into some network, sending it out into the world, and eventually, reaching its destination; that’s information transfer right there.

    Callan:

    And right now you’re stating it as fact that it’s me who closed it off – not a ‘maybe’ but a ‘definitely’, not at all closing off further discussion.

    I didn’t state that you cut off the discussion; I pointed out that I invited you to give an answer to how physical evidence can change definitions, but you neglected to do so, which is nothing but factually correct.

    I also pointed out that definitions and physical evidence are different categories, but here, too, you could have chosen to defend your view and refute mine, while you seem however more interested to sulk and engage in pointless shifting of blame and accusations as to who did what than in actually putting up an argument.

    It’s as if for you the definition of bird came first, THEN birds came after that.

    No; but before we can tell whether x is a bird, we need a definition of precisely which entities are birds. We need some criterion that x needs to fulfill in order to be a bird. Sure, birds as a category are abstracted from our observations of nature (which I did discuss in my previous post), but we are faced with the question of whether a certain entity—a brain—belongs to a certain class of things—that of computers. In order to do so, we must be able to tell computers apart from non-computers, before we look to the world to tell whether the brain is a computer. We can’t look at the world and suddenly find our definition of computer to be wrong, i.e. find that the brain is a computer even though it doesn’t fit the definition of a computer—if that were something that could happen, then all that this means is that we’ve really been operating under a different definition in the first place.

    Let’s spell that out in more detail. Say we have a definition D, of the form ‘an x is z if it y’, e.g. ‘an animal is a bird if it has feathers’. Now, how could we empirically contradict this definition? By finding an x, such that x is z, even though it’s not y. But there is a circularity here: we need to be able to tell when an x is z; but that is given to us by D. Hence, if we are to find an x which is z while not obeying D, we must have some way of telling that it is z without appealing to D; but then, it is not D that tells us whether an x is z, but some different definition D’.
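    The circularity described here can be made vivid with a small sketch; the code and its names (`is_bird`, `refutes_D`) are purely illustrative assumptions, not anything drawn from the thread. The point is that if the definition itself is the only test for membership, no observation can contradict it.

```python
# A toy rendering of the circularity point (all names are illustrative):
# if definition D itself is the only test for 'is a bird', then no
# observation can ever contradict D.

def is_bird(animal):
    # Definition D: an animal is a bird if it has feathers.
    return animal["has_feathers"]

def refutes_D(animal):
    # An empirical refutation of D would be: something that IS a bird
    # yet does NOT have feathers. But 'is a bird' just *is* the feather
    # test here, so the two conditions can never come apart.
    return is_bird(animal) and not animal["has_feathers"]

# No animal, however exotic, satisfies refutes_D:
assert not refutes_D({"has_feathers": True})   # e.g. a penguin
assert not refutes_D({"has_feathers": False})  # e.g. a bat
```

Any apparent counterexample would have to classify the animal as a bird by some criterion other than D, which is exactly the switch to a different definition D’ described above.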

    So take your example:

    Once upon a time the sun was defined as something that revolves around the earth. Physical evidence changed the definition.

    You’re proposing that a sun is something that revolves around the Earth, and then allege that we could find evidence to change that definition. The steps to doing so would be something like:

    1) X is a sun if it revolves around the Earth.
    2) S is a sun, but it doesn’t revolve around the Earth.
    3) Hence, the definition in 1) is wrong, and needs to be changed.

    The problem is with step 2): if ‘revolving around the Earth’ is constitutive of the term ‘sun’, then whatever doesn’t revolve around the Earth is not a sun. Rather, in step 2) you use a different definition of the word ‘sun’, which includes things that don’t revolve around the Earth; otherwise, you simply couldn’t say that S is a sun—S is a sun by virtue of meeting the definition for ‘sun’, since only those things that meet this definition are suns. So, in fact, you’re already using a different definition in step 2); it wasn’t empirical evidence that changed this definition. Otherwise, how do you tell whether something is a sun, if not with a definition of what kinds of entities are suns?

    So actually, what happened is that there were two definitions: ‘the sun is that bright thing in the sky during daytime’, and ‘the sun revolves around the Earth’. We once might have thought that the two are co-extensive, i.e. pick out the same entities from the set of all physical things; we then found out that this isn’t the case, and hence, that the two definitions don’t define the same thing. We went and picked the former as the more fundamental definition (because really, the sun was characterized by pointing and saying ‘sun’ long before we ever thought about it as revolving around the Earth), and discarded the latter.

    We could have just as well reacted differently: reserve the name ‘sun’ for things that revolve around the Earth, while finding a new designation for the bright thing in the sky during the day; this is a conventional choice, not one foisted upon us by empirical evidence.

    The thing with computers is still a little different, though, in that they were formally conceived of before the first ones were built, and thus, we can simply also define a computer as, say, something equivalent to a Turing machine, or a finite automaton if there are concerns regarding the availability of memory. And of course, a Turing machine is simply something that processes information: you initialize it with a bit string on its tape, then it carries out certain manipulations such that afterwards, a different bit string is present on its tape. This encompasses all computers ever do.
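    That “bit string in, bit string out” picture is easy to sketch in code. The following minimal simulator and its bit-flipping rule table are invented for illustration; they are not meant to capture the full Turing-machine formalism, only the point that a computation transforms one tape inscription into another.

```python
# Minimal sketch of a Turing machine: initialize the tape with a bit
# string, apply a rule table until the halt state, read off the result.

def run_turing_machine(tape, rules, state="start", blank="_"):
    """rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# An invented example program: invert every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip))  # → 0100
```

The machine starts with one bit string on its tape and ends with a different one, which is all the definition in the paragraph above requires.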

    Your post is a testament to the undeniability of Epstein’s grasp of the situation. Heck, why does Epstein need charity when he’s so definitely made no mistakes?

    This hyperbole is bordering on the ridiculous. I have never said that Epstein doesn’t make any mistakes; in fact, I’ve been quite clear that I believe that basically all of the arguments he gives are wrong. Having to misrepresent my points so crassly only casts doubts on your own position.

    Consider, what if there were a number of errors in your argument. Do you think I can prise your mind open to this with words, or that you have to open the door a little for me – not just look for the way I am wrong and write them up as lists, but actively look for ways you could be wrong?

    If you feel there are errors in my arguments, you’re of course free to point them out; after all, if I could see them myself, I wouldn’t be making these arguments, no? But again, despite my repeated explicit urging, you neglect to do so, and instead engage in a rant on how I’m argumentatively mistreating you. You could have provided answers to any of my questions, but chose not to, so I find it a bit hard to see how I’m supposed to be at fault here.

  54. Jochen says:

    that’s information transfer right there.

    That should’ve been ‘that’s information processing right there’.

  55. Cognicious says:

    Jochen, #53: Cognicious:

    When I use a computer to send an e-mail, I don’t get different information out.

    Sure you do. The display changes, tells you maybe ‘mail sent successfully’, giving you the information that a certain process has been carried out—taking the original information you entered (the e-mail), encoding it into some format for data transfer, dialing into some network, sending it out into the world, and eventually, reaching its destination; that’s information transfer right there.

    But the computer doesn’t rewrite my e-mail. Yes, it transfers the text to a network, and it “tells” me that it did so. More to the point – my point, that is – what it works with, what I entered, isn’t information. The computer can’t read!

    I might open a folder containing photos and click/tap a photo to open that file. I view the photo and then close the file without doing anything else. What “information” has the computer processed? It doesn’t tell me that I (or it) opened and closed the file; to get that information, I have to look and see whether the photo is displayed.

    A fingerprint at a crime scene can be a source of information for an investigating detective, who might say “Aha! There’s information here.” However, the print itself isn’t information. It’s a pattern of lines. Information is less tangible than the lines. A fact is an item of information, and the detective hopes to infer a fact by analyzing the print. In philosophical discussions, one can’t speak so casually as the detective can.

    I’m having a hard time being adequately articulate, but so is everyone else here. One more attempt: I believe that calling a computer an information processor is imprecise.

  56. Jochen says:

    But the computer doesn’t rewrite my e-mail.

    Well, if we’re being excessively literal, then yes, it does: change it from a pattern of key-strokes to a pattern of bit-strings to a pattern of pixels to a pattern of signals down a phone line. All of these will be differently encoded, and certainly, encoding a message would be something I would consider information processing (think about one-time pads: a random string of characters is added to the information you entered, to yield new information, which can be decoded only if you’re in possession of that character string that was added).
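    The one-time pad mentioned here is easy to sketch. This fragment uses XOR over bytes rather than literal character addition (a common formulation, but an assumption relative to the text): the message is transformed into new information that can only be undone by someone holding the identical random key.

```python
# Sketch of a one-time pad: combining a message with a random key of the
# same length yields new information, recoverable only with that key.
import secrets

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

message = b"mail sent successfully"
key = secrets.token_bytes(len(message))  # the pad: random, used once

ciphertext = xor_bytes(message, key)   # encoding: information transformed
recovered = xor_bytes(ciphertext, key) # decoding requires the same pad

assert recovered == message
```

Whether the ciphertext “is information” to you depends entirely on whether you hold the key, which is the recipient-relativity discussed further below.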

    More to the point – my point, that is – what it works with, what I entered, isn’t information. The computer can’t read!

    Certainly, but that doesn’t mean that what you’ve entered isn’t information to it. It’s a series of commands that cause the computer to do something—if you hit ‘send’, or ‘delete’, or ‘print’, then you’re giving the computer information—that of your mouse-click—which is being processed into the action (on some information) you intend for the computer to perform.

    I might open a folder containing photos and click/tap a photo to open that file. I view the photo and then close the file without doing anything else. What “information” has the computer processed?

    It’s processed information on many levels: that concerning your request to receive a certain file from memory, i.e. what you entered by mouse-clicks or key-strokes; that contained in the file, into a 2D representation of lit pixels on a screen; that of your further request to close the window displaying the picture; and so on.

    A fingerprint at a crime scene can be a source of information for an investigating detective, who might say “Aha! There’s information here.” However, the print itself isn’t information. It’s a pattern of lines. Information is less tangible than the lines.

    You’re right to point out that information is not independent of the recipient. For one, they might need some background information in order to actually ‘be informed’ by whatever is presented to them—like needing the key to decode a one-time pad. In general, whether something is information is relative to a certain context; hence, my above proposed refinement of the definition of a computer as an ‘information-processor’ into something that processes information intelligible to its human users.

    Or, again, we could just go back to the definition of a Turing machine, and call everything that is its equivalent a ‘computer’ (although this involves a nontrivial commitment to the Church-Turing thesis, which one might not want to hold). But any Turing machine very transparently processes information: it starts out with some information on its tape, and transforms it into different information. Thus, any computational process can be characterized in terms of the processing of information.

    Of course, if you still feel that this is too vague, you’re more than welcome to take a stab at finding the essential characteristics of computers yourself!

  57. Cognicious says:

    Jochen, #56: But any Turing machine very transparently processes information: it starts out with some information on its tape, and transforms it into different information.

    Evidently I have not yet succeeded in communicating clearly enough. There is no information on the tape.

    When I “tell” a computer to open a file, I’m not informing it. I’m only operating it. I push some buttons, essentially, though the mechanism involves keystrokes and a cursor rather than literal buttons, and the computer does what I intended (if all goes well). I might similarly push buttons to get a can of iced tea out of a vending machine. It would sound odd indeed to say that I inform the vending machine of my request for a drink or that the machine receives the information that Cognicious wants a particular brand of drink and proceeds to process it.

    I’m arguing for a separation of the concept “information” from the carrier of information, the physical medium in which information is embodied and from which it can be extracted. Holes punched in a tape aren’t information.

    The conventional vocabulary around computers includes humanoid terms like “memory” and “search” because computers were developed to emulate human mental activity. Let’s not get carried away with the anthropomorphizing that such words encourage. People really process information; computers don’t. “Process” is a troublesome word in this context, too; it’s overly broad. But “information” is worse.

  58. Jochen says:

    Evidently I have not yet succeeded in communicating clearly enough. There is no information on the tape.

    I understand that that’s the point you’re trying to make, I just don’t agree. 🙂

    There is information on the tape provided you’re the sort of entity that is capable of interpreting whatever’s written there. Hence, once again, my refinement of the definition for ‘computer’.

    And we ought to be a bit more careful about whether we’re speaking on the syntactic, the semantic, or the pragmatic level here. On the syntactic level, it’s entirely fair to speak of information on the tape—information in the sense of information theory, Shannon entropy, and the like. It’s a perfectly objective quantity, and enough to characterize information processing on at least a superficial level—we can take the input and output of a TM, and compare the amount of information in either (without any knowledge of what that information might mean to anybody), and if it’s changed, conclude that information processing has taken place.
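    To make the syntactic-level claim concrete, here is a sketch that measures the empirical Shannon entropy of two tape inscriptions purely from symbol frequencies, with no reference to what either string means to anybody. The example strings are invented.

```python
# Shannon entropy of a string, estimated from its symbol frequencies:
# H = sum over symbols of p * log2(1/p), in bits per symbol.
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Average information per symbol, from empirical frequencies."""
    counts = Counter(s)
    n = len(s)
    return sum(c / n * log2(n / c) for c in counts.values())

print(shannon_entropy("00000000"))  # maximally regular tape: 0.0 bits/symbol
print(shannon_entropy("01101001"))  # even mix of 0s and 1s: 1.0 bit/symbol
```

If a machine turns the first inscription into the second, the quantity of information has changed, and that comparison never asks what the strings mean.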

    On the semantic level—which, I think, is where you want to ground this discussion—it’s true that holes in a tape aren’t information; not without some interpreting entity. But of course, in trying to ground information by means of some interpreting entity in theories of mind, we are right on course towards the homunculus fallacy.

    Finally, on the pragmatic level, something is information to an entity if it causes that entity to do something (Weizsäcker has an interesting analysis of this by more narrowly considering this ‘doing something’ again to be the production of information). Here, it’s less tricky to analyse whether something contains information for some entity—it does, if it makes that entity do something. This doesn’t presuppose any sort of intentionality, and isn’t sufficient to ground it, but I think is fully adequate to ground the definition of ‘information processing’ with respect to computers.

    So it seems pretty innocuous on both the syntactic and the pragmatic level to talk about computers as ‘processing information’; and on the semantic level, I have proposed a definition that sidesteps the difficulties introduced by considerations of ‘meaning’ and the like at least to a degree.

    And at any rate, there still doesn’t seem to be any alternative characterization of ‘computer’.

  59. Cognicious says:

    Jochen, #58: I understand that that’s the point you’re trying to make, I just don’t agree. 🙂

    Oh, there’s plenty of room for disagreement, all right.

    But of course, in trying to ground information by means of some interpreting entity in theories of mind, we are right on course towards the homunculus fallacy.

    I don’t see how homunculi would come in. Let an ordinary human mind – a conscious person – be the receptacle for information, without positing another creature inside it.

    Finally, on the pragmatic level, something is information to an entity if it causes that entity to do something. . . . Here, it’s less tricky to analyse whether something contains information for some entity—it does, if it makes that entity do something.

    One *can* call everything “information” that fits a definition one makes up, just as one can call everything “a computer” that conforms to one’s preferred definition. However, playing with language that way undermines communication. It makes no sense, in the customary understanding of what information is, to say that putting a coin in a slot must provide information to a vending machine, given that inserting the coin operates a mechanism that makes the machine do something.

    By the way, in #53, you wrote: Say we have a definition D, of the form ‘an x is z if it y’, e.g. ‘an animal is a bird if it has feathers’. Now, how could we empirically contradict this definition? By finding an x, such that x is z, even though it’s not y. I should think we could empirically contradict the definition by finding an animal that had feathers and was not a bird. No?

  60. john davey says:

    Callan

    “I’d argue the pattern of the electronic valve being invented first then as a result of this physical device being available, the idea of the various configurations of said device was devised (and naming said configurations ‘computer’). “

    You can argue all you like, you’d still be wrong. The design of the computer preceded the production of same. Hardly controversial.

    Or put another way, a machine to implement Turing’s model of universal computation would require, as a matter of necessity, that theory to exist in the first place.

    But it’d really just be arguing the way you want – say, that the materials of the world were made to match the idea. A chicken-and-egg debate.

    No comprende …

    The main question is : He can point at a computer and argue we are not it – anyone else can point at a computer and he’ll say it has nothing to do with his argument?

    Ya no comprende ..

    When he refers to an actual computer, a physical object, in real life, he’s opened himself up to reality being the one to shape definitions.

    Still don’t really understand your mode of expression… a computer is not (strictly speaking) a physical object. You don’t get to decide it and neither do I. It’s a matter of the definition of the requirements of a computer – to implement computational mathematics. You can’t decide that that’s not the case. That would be ridiculous, wouldn’t it? Can you decide what the definition of a car is? What a radio is? No.

    To keep up this ‘A computer doesn’t exist really’ thing, he can’t ask ‘where in the brain cell is memory stored?’.

    Who’s “he”? What I see in this sentence is “if a man maintains that a computer is not a physical object, does that mean he can’t ask where memories are stored in the brain”

    I would say the answer is yes – nearly. Anybody can ask how the brain stores memory (note I say ‘memory’ and not ‘memories’ – there being no concept of neat chunks of memorised items in nature, unlike in computers with neat 4-byte chunks representing a single datum).

    The brain realises a memory function – that is indisputable. But the idea that a memory of a particular thing should be located at an exact spot is evidently ludicrous for brains. There is no ‘thing’ to remember, no nice neat boundaries. Just a flow of consciousness accompanied by a function called ‘memory’. How it works no-one knows, but it’s evidently not like a computer.

    J

  61. Jochen says:

    I don’t see how homunculi would come in.

    Well, if information always depends on an interpreting entity to be information, then, if you claim that the neuron firings in your brain constitute information, you’re positing a homunculus.

    One [i]can [/i]call everything “information” that fits a definition one makes up, just as one can call everything “a computer” that conforms to one’s preferred definition.

    But both the definition of information in terms of symbol probabilities (Shannon) and the definition of computer in terms of syntactic manipulations (Turing) were around at the foundation of these concepts, so they’re hardly ex post facto creations to fit anyone’s preconceptions.

    I should think we could empirically contradict the definition by finding an animal that had feathers and was not a bird. No?

    How do you tell it’s a bird?

  62. Jochen says:

    Err, how do you tell it’s *not* a bird, rather.

  63. Callan S. says:

    You can argue all you like, you’d still be wrong. The design of the computer preceded the production of same. Hardly controversial.

    Case closed, then, John!

    Fire didn’t exist until invented, either, I guess.

    I’m curious – why would you say the design of the computer wasn’t invented, say, in the dark ages – why couldn’t it have happened back then? Why was its design instead invented after its production was actually possible (i.e., after the invention of electronic valves)? Just coincidence, you’d say?

    There is no ‘thing’ to remember, no nice neat boundaries. Just a flow of consciousness accompanied by a function called ‘memory’. How it works no-one knows but it’s evidently not like a computer.

    Unknown yet somehow evident as well (enough to say what it isn’t). Sums up the problem I’m describing, really.

  64. Cognicious says:

    Jochen, #61: Well, if information always depends on an interpreting entity to be information, then, if you claim that the neuron firings in your brain constitute information, you’re positing a homunculus.

    Information doesn’t depend on an interpreting entity. Information becomes knowledge if someone discovers it, but much information remains unknown. The age of Mitochondrial Eve at her death is unknown. The number of grains of sand on all the beaches on Earth at time t… You may have inferred something from my example which wasn’t there. The fact that I put a coin in a vending machine is information. That fact is known to me at that moment but not to someone in the next town. But the act of inserting a coin doesn’t provide information to the machine. After all, if the machine jams, I don’t yell “You’re not listening!”

    A rock rolls downhill and strikes another rock, causing it too to roll. Are you going to say that the second rock received “information”? Really?

    My neuron firings are events about which information can theoretically be gained, even though I don’t sense those events. In brain research, an experimenter can stimulate neurons with an electrode and learn something about firings. The experimenter might then tell the subject what happened. No homunculus is required.

    “But both the definition of information in terms of symbol probabilities (Shannon) and the definition of computer in terms of syntactic manipulations (Turing) were around at the foundation of these concepts, so they’re hardly ex post facto creations to fit anyone’s preconceptions.”

    Don’t they fit Shannon’s and Turing’s preconceptions?

    [Me: I should think we could empirically contradict the definition by finding an animal that had feathers and was not a bird. No?]

    “How do you tell it’s a bird?”

    Actually, you’d need to tell it was not a bird. But that question belongs to a later step in your argument. I was merely pointing out that the way to falsify “If X is Y, then X is Z” is by finding an X that is Y and is not Z.
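    A quick illustration of the Shannon definition mentioned above: it quantifies information purely in terms of symbol probabilities; no interpreting entity appears anywhere in the formula. A minimal sketch in Python (the function name and example probabilities are mine, purely illustrative):

```python
import math

def shannon_entropy(probs):
    """Average information content, in bits per symbol, of a source
    with the given symbol probabilities: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))  # a fair coin: prints 1.0
print(shannon_entropy([0.9, 0.1]))  # a biased coin carries less
```

    A fair coin yields exactly one bit per toss and a biased coin less, whoever (if anyone) is doing the interpreting.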

  65. john davey says:

    Callan


    “I’m curious – why would you say the design of the computer wasn’t invented, say, in the dark ages – why couldn’t it have happened back then? Why was its design instead invented after its production was actually possible (i.e., after the invention of electric valves)? Just coincidence, you’d say?”

    It’s a historical fact – disputed only by you, it would appear – that computation was a theory of mathematics pioneered by Alan Turing. It was then followed by the development of digital computers on a variety of platforms from the 1940s onwards.

    My advice is to stop arguing the point and read about it, before carrying on as if it could be resolved by argument. It’s a matter of historical record, end of story.

    It would have been possible to build computers in the 13th century – if they’d had a developed system of mathematics. Physical technology did not make computers possible. Computers are not defined by what they are made out of; they are defined by what they do. You could make a computer out of a steam engine. It wouldn’t be too quick, but it would work just fine.

    JBD

  66. john davey says:

    Callan


    “There is no ‘thing’ to remember, no nice neat boundaries. Just a flow of consciousness accompanied by a function called ‘memory’. How it works no-one knows but it’s evidently not like a computer.”

    Human memory is not digitized or localised. If you are aware of the nature of computers (I don’t know if you are), the memory of digital computers is highly geographically localised (in terms of storage) and quantised in terms of meaning: i.e., one variable location represents quantification of one specific, delineated concept.

    Human memory is just not like that.

    J
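    The locality being described here, one addressed cell holding one delineated value, can be made concrete with any indexed byte store (a toy sketch, not a claim about any particular machine):

```python
# A computer's memory is an indexed array of cells: each address
# holds exactly one value, and a read or write touches only that
# one location.
memory = bytearray(16)   # sixteen addressable cells, all zero
memory[3] = 42           # store one value at one address
assert memory[3] == 42   # reading address 3 gives it back
assert memory[2] == 0    # neighbouring cells are untouched
```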

  67. Callan S. says:

    John,

    My very point is that you’re not seeing a second question here – for you there’s only A: when it was invented. And so you think I’m questioning A because you’re not engaging any other question as being possible. The question that is B: what triggered its invention, isn’t coming up with you in our discussion. Clearly with fire, observing fire triggered the invention of ways of cultivating and even igniting fire. Fire wasn’t some idea that just sprang out of nowhere into some person’s head. Neither did the idea of computers just spring out of nowhere into Turing’s head. You think I’m disputing something as obvious as historical fact, and I think you’re engaging in a fantasy that the idea of a computer just sprang into existence out of thin air and is tied to no PRIOR physical example whatsoever.

    For anyone who can see question B, it won’t look good for anyone who keeps fallaciously treating my point as being question A.

    “Human memory is not digitized or localised. If you are aware of the nature of computers (I don’t know if you are), the memory of digital computers is highly geographically localised (in terms of storage) and quantised in terms of meaning: i.e., one variable location represents quantification of one specific, delineated concept.

    Human memory is just not like that.”

    Neither are computers. There are no concepts floating around inside them. I’m amazed I’m having to verbally remove such spirituality from what is plainly a machine.

    I think it is you who are not aware of the nature of computers (as in the physical objects so regularly named as such). You’re working from the idea of concepts somehow being inside computers – and you treat anyone not using the same terms as not being aware of the nature of computers.

    To me your terms are just superstition. May as well be talking about homunculi being inside the computer/homunculi being processed by the computer.

    So to me it’s just a poor understanding of certain machines being used and as a result, an even poorer attempt to make a point, in Epstein’s article.

  68. john davey says:


    “Neither did the idea of computers just spring out of nowhere into Turing’s head. ”

    Yes it did. That’s why he’s given so much credit for it.
    There were formal systems of arithmetic and basic calculating machines, but the UTM – which is the basis of modern digital computers – was based upon his original, unique mathematical theories.

    Trying to link the UTM to a Babbage machine or an abacus is to fail to grasp the achievement of the UTM. Might as well say “numbers existed, so Einstein can’t claim all the credit for relativity”. In a sense, true. In another sense, complete nonsense.
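    For readers unfamiliar with the formalism being argued over: a Turing machine is nothing but a finite rule table driving a read/write head over a tape. The interpreter below is an illustrative toy (the function names and the bit-flipping rule table are mine), not Turing's universal construction:

```python
def run_tm(tape, rules, state="start", blank="_"):
    """Tiny Turing-machine interpreter. rules maps
    (state, symbol) -> (write, move, next_state); move is -1, 0
    or +1; the machine halts on reaching state 'halt'."""
    tape = list(tape)
    pos = 0
    while state != "halt":
        sym = tape[pos] if pos < len(tape) else blank
        if pos == len(tape):
            tape.append(blank)       # extend the tape on demand
        write, move, state = rules[(state, sym)]
        tape[pos] = write
        pos += move
    return "".join(tape).strip(blank)

# A toy machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}
print(run_tm("1011", flip))  # prints 0100
```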

    Me: “Human memory is not digitized or localised.”

    You: “Neither are computers.”

    I don’t think you understand computers very well. They couldn’t work if memory was not digital/local. It’s part of the design.


    “There are no concepts floating around inside them.”

    OK – the concepts are in the mind of the programmer who allocates the variables and their purpose if that makes you happier.


    “I’m amazed I’m having to verbally remove such spirituality from what is plainly a machine.”

    Spirituality is a human trait – not a digital computer trait. We agree!

    Well, apart from the fact that the word ‘concept’ has nothing to do at all with the word ‘spirituality’ and I’m not so sure your grasp of the word ‘concept’ is that great.


    “You’re working from the idea of concepts somehow being inside computers”

    They are inside the head of the people who program them and those who design the hardware to implement them.

    You don’t have a clue about computers I suspect. Computers are not physical objects but functional machines. To that extent everything about them is ‘conceptual’.

    Digital is a ‘concept’. Digital memory is a ‘concept’. A computer is a ‘concept’.
    There is no such thing as physical memory. There are physical objects that fulfil the concept of the computing function known as ‘memory’.


    “To me your terms are just superstition”

    I doubt it. You’re the one who thinks that god/the magic universe builds computers and humans don’t.

  69. Callan S. says:

    Such heel digging in. Or in other words, the usual human response.

    “OK – the concepts are in the mind of the programmer who allocates the variables and their purpose if that makes you happier.”

    Skipping the ‘in the mind’ part for now, the programmer (and eventual user) is deciding to say this state or that state of the physical object means this or that to him/her, yes.

    So how would the physical states relate at all to Epstein’s argument?

    Or is it going to go back to ‘well, it’s a variable, which is a concept, which is this thing which is non physical, so computers are non physical *said with straight face*’?

    I’m just amazed at how Jochen and yourself have to keep the idea of concepts floating around. For instance, if I were to shuffle around the mechanical logic gates of a computer at random, they would correspond to no one’s allocation of variables/concepts. But at the same time you cannot avoid saying that you would still be able to trace a signal through each gate to an output. I.e., the physical object works/interacts with itself without concepts. But you have to talk in terms of concepts anyway, as if they are relevant. Then Epstein uses this method of…understanding…to make his point.
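    The point about tracing a signal through gates regardless of what anyone takes the levels to mean can be sketched directly. Below is a toy network of four NAND gates wired as an XOR (the wiring is the standard textbook arrangement, the names are mine): the gates propagate values either way, and “exclusive or” is only our label for the result.

```python
def NAND(a, b):
    # A NAND gate outputs 0 only when both inputs are 1.
    return 0 if (a and b) else 1

def xor(a, b):
    # Four NAND gates wired together compute exclusive-or,
    # whether or not anyone calls it that.
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```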

    Also you should consider that the whole talking down bit in regards to not knowing a computer, etc – certainty is cheap. Of course you can’t see yourself as wrong. Not seeing how you’re wrong doesn’t mean you aren’t wrong – you know that. With two people in this dialog, if you’re stuck thinking only one of them could/is speaking nonsense, then it’s not charitable reading. It’s easy to be right when you only treat others as the ones who’d talk nonsense.

  70. John Davey says:


    “Skipping the ‘in the mind’ part for now, the programmer (and eventual user) is deciding to say this state or that state of the physical object means this or that to him/her, yes.”

    I’ve no idea what this means and I don’t think you do.


    “Of course you can’t see yourself as wrong.”

    Of course I can – although not in this argument with you. You just don’t know enough about computers, which leads you to make a pile of assumptions that are just plain wrong.

  71. Callan S. says:

    John, have you sat down with an electronic breadboard and the pin diagram for a memory chip and successfully programmed a memory position in it via the pins, to output to an LED? Have you set up a 555 timer to periodically and continually trigger a binary counter chip to go through a set of outputs that themselves are to go through logic steps of accessing an external data supply and loading it into said memory chips? I have.

    It seems if we both sat down to build a computer, you would treat your lack of progress as irrelevant as to whether I’ve ‘made a pile of assumptions that are just plain wrong’, and you’d say everyone else watching should treat it as irrelevant as well.

    The powers philosophers have, I guess.

    But then it’s never been about doing, but instead you’ve been taught to look for new information that remains consistent with prior knowledge. The emphasis being entirely on consistency. And I agree, what I’m saying is certainly not consistent with your prior knowledge on the subject.
