To make things as direct and explicit as possible, let's interrogate this question: can computers think?

We could spend some time, and many have, trying to nail down a definition of 'thinking'. Then, even if we could agree, it might be difficult to measure. If we agreed on practical metrics, we might still have reservations like, "Is it not the programs themselves that think and operate? Surely the circuits are 'just' metal." or, "Is the computer thinking independently of humans? Do the programmers and engineers have no agency in or responsibility for the systems they build?"

Are we getting anywhere? It's not that I doubt the truths of possible answers, but I do doubt whether these truths are convergent. This kind of inquiry seems to scatter and retreat down multiple avenues at once, each level of analysis leaving us further and further from the original question. I would offer a rephrasing that might encourage a different method of inquiry: if we say computers think, what do we mean?

This arrangement seems more manageable. Without having to nitpick definitions or look at evidence (yet), we can bring some clarity to the issue. To say that computers think is to suggest some likeness between their internal activity and our own. It is to mean that the capabilities, function, and perhaps even experience of the computer are similar to those of the thinker. We are proposing a homology, or an isomorphism shared by thinkers and computers, aligned tightly enough that it might be useful to talk about one in terms of the other.

But do computers think, like, really think? If we are to keep the discussion agile and honestly curious, I think we have to resist the temptation to answer that directly. A question of this kind asks for authoritative disambiguation. But is authoritative disambiguation what we were looking for? The core issue at hand (as is often the case in philosophical discourse) is a linguistic one, not a psychological, neurological, or electronic one.

Do we like this metaphor enough to use it? Is the likeness close enough to shed light on the matter, or does the comparison muddy it (and if so, how and where)? After all, we can only determine the proper use of a statement after we've considered its meaning. Answering 'yes, computers think' puts the cart before the horse, assuming the very thing we're trying to clarify, that is, the possible meanings of the phrase. Issues of this abstract nature are not strictly about affirmations or negations. Instead, they are openings for discourse - and no less rigorous thereby. What would yes mean? What would no mean? These are more productive responses to such questions.

Since the proliferation of the microprocessor and the ubiquity of personal computers, comparing electronic computers with the brain, or even with the mind, has become commonplace. Employed as a handy metaphor for cognition, computation threatens to usurp its illustrative sense, daring, when wielded by some, to be an explanation for the brain, mind, or cognition. So of course the computer is like a brain, but how much like one?

As S. E. Figura points out, books, made from dead and inert matter, can only carry the force of their embodied intention through their holding of symbols. What is it that gives us cause to call one thing alive and another dead? A valid and valuable question, and one tempting to try to answer. Metabolism? Reproduction? Cells? Something called 'information'? But this question may be more worth asking than answering.