Central Problem

The philosophical problem posed by Artificial Intelligence concerns the nature of mind, thought, and intelligence: Can machines truly think? What is the relationship between human intelligence and computational processes? The development of AI since the 1956 Dartmouth conference—which officially launched the discipline—has generated fundamental questions about whether the operations of electronic computers can genuinely reproduce or merely simulate human cognitive capacities.

The central tension lies between those who maintain that intelligence is essentially computational (and therefore reproducible in machines) and those who argue that human intelligence possesses irreducible features—intentionality, consciousness, embodiment, situatedness in the world, common sense understanding—that cannot be captured by formal symbol manipulation. The practical difficulties encountered by AI research (in robotics, speech recognition, natural language understanding, and especially in programming “common sense”) have intensified philosophical scrutiny of its foundational assumptions.

The debate involves not only theoretical questions about the nature of mind but also ethical questions about what machines should or should not do, and potentially about the rights of future intelligent machines.

Main Thesis

The chapter presents the development of AI and the philosophical critiques that have prompted a shift from “strong AI” to “weak AI”:

Functionalism and the Mind-Computer Analogy:

  • Putnam’s functionalism holds that mental states are defined by their functional roles (input-output relations) rather than their material constitution. A mind could theoretically be “instantiated” by any physical substrate capable of generating the same functional relations—even “Swiss cheese.”
  • This leads to the mind-computer analogy: the mind relates to the brain as software relates to hardware. The mind is a program that can “run” on different physical substrates (biological neurons or electronic circuits).
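The multiple-realizability claim behind this analogy can be illustrated with a toy sketch (hypothetical code, not from the chapter): two implementations whose internal “substrates” differ completely yet realize exactly the same input-output function, and are therefore, on a functionalist view, the same functional state.

```python
# Toy illustration of multiple realizability: the same functional role
# (a fixed input-output mapping) realized by two different "substrates".

def adder_lookup(a, b):
    """Realizes addition of small numbers via a precomputed lookup table."""
    table = {(x, y): x + y for x in range(10) for y in range(10)}
    return table[(a, b)]

def adder_arithmetic(a, b):
    """Realizes the same function via the machine's arithmetic circuitry."""
    return a + b

# Functionally indistinguishable: identical input-output relations,
# radically different internal constitution.
assert all(adder_lookup(x, y) == adder_arithmetic(x, y)
           for x in range(10) for y in range(10))
```

On the functionalist reading, what makes something an “adder” is the shared input-output relation, not whether it is realized by a table, a circuit, neurons, or (in Putnam’s hyperbole) Swiss cheese.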

The Turing Test:

  • Turing proposed an operational criterion for machine intelligence: if an interrogator, conversing blindly with both, cannot reliably distinguish the machine from a human, the machine can be said to “think.”
  • This behaviorist-operationalist approach defines intelligence by external performance rather than internal processes.
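The operational character of the criterion can be made explicit in a toy sketch (hypothetical code, not from the chapter): the test asks only whether a judge’s guesses about blind transcripts beat chance, never what happens inside the machine.

```python
import random

def passes_turing_test(judge, human_replies, machine_replies, trials=10_000):
    """Operational criterion: on each trial the judge sees one transcript,
    blind to its source, and guesses 'human' or 'machine'. The machine
    'passes' if the judge's accuracy stays close to chance (50%)."""
    correct = 0
    for _ in range(trials):
        source = random.choice(["human", "machine"])
        pool = human_replies if source == "human" else machine_replies
        if judge(random.choice(pool)) == source:
            correct += 1
    return abs(correct / trials - 0.5) < 0.05

# A judge keying on a telltale phrase defeats a crude machine:
crude_judge = lambda t: "machine" if "ERROR" in t else "human"
print(passes_turing_test(crude_judge, ["I feel fine."], ["ERROR 42"]))  # False
```

Note what the sketch leaves out: nothing in it refers to understanding, consciousness, or internal states, which is precisely the behaviorist-operationalist point the critics below attack.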

Philosophical Critiques:

Searle’s Chinese Room:

  • Searle’s thought experiment imagines someone following instructions to manipulate Chinese symbols without understanding their meaning. This is meant to show that syntactic manipulation (what computers do) does not constitute semantic understanding.
  • Computers operate “as if” they understood, but lack consciousness and intentionality. Their apparent intelligence exists only in the minds of their programmers.
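The structure of the thought experiment can be mimicked in a toy sketch (hypothetical code, not from the chapter): a rule-follower that answers Chinese questions by pure symbol matching, with no access at any point to what the symbols mean.

```python
# Toy Chinese Room: the "person in the room" follows a rulebook that pairs
# input symbol strings with output symbol strings. Nothing in the procedure
# involves the meaning of any symbol.

RULEBOOK = {
    "你好吗": "我很好",        # ("How are you?" -> "I am fine")
    "天空是什么颜色": "蓝色",  # ("What colour is the sky?" -> "Blue")
}

def room(symbols: str) -> str:
    """Purely syntactic: match the input's shape, emit the paired shape."""
    return RULEBOOK.get(symbols, "对不起")  # default symbols ("Sorry")

print(room("你好吗"))  # -> 我很好
```

From outside, the room’s replies can look competent; inside, only shape-matching occurs. Searle’s claim is that a program, however large its rulebook, is in exactly this position.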

Dreyfus’s Critique:

  • Dreyfus argues that human intelligence is fundamentally different from computational processes: it is holistic (grasping parts within wholes, not building from atoms to totality) and situational (organized by interests, needs, and cultural context).
  • Human intelligence presupposes a background of common-sense beliefs that cannot be exhaustively formalized or explicitly programmed; any attempt to spell that background out completely generates an infinite regress.
  • Only in completely formalizable domains (games, theorems) can AI succeed; in domains involving flexible, context-dependent understanding (natural language, practical wisdom), it fails.

Winograd and Flores:

  • Drawing on Heidegger and Gadamer, they argue that computers lack Dasein—concrete being-in-the-world, with its corporeality and affectivity—and therefore cannot possess the contextual pre-understanding that constitutes common sense.
  • “Intelligence” applied to both natural and artificial systems expresses homonymy rather than genuine analogy.

From Strong to Weak AI:

  • The difficulties of AI have prompted abandonment of the original ambition to create a “synthetic mind” that duplicates human cognition.
  • The distinction between “simulation” (reproducing human cognitive powers) and “emulation” (creating effective intelligent tools without anthropomorphic pretensions) marks the shift to a more pragmatic, technologically oriented approach.

Historical Context

Artificial Intelligence emerged as a distinct discipline at the 1956 Dartmouth conference organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The field inherited the optimism of postwar cybernetics and information theory, along with the development of electronic computers.

The functionalist paradigm, with Putnam as a major theoretician, provided philosophical support for the strong AI program by arguing that mental states are substrate-independent functional states. This suggested that minds could in principle be realized in machines.

However, by the 1970s and 1980s, AI research had run into persistent difficulties and fallen short of initial expectations—particularly in robotics, speech recognition, natural language understanding, and the notorious problem of programming “common sense.” These failures stimulated philosophical critique.

Dreyfus’s What Computers Can’t Do (1972) offered the first systematic philosophical criticism of AI, drawing on phenomenological and hermeneutic traditions. Searle’s Chinese Room argument (1980) attacked the foundations of strong AI from within analytic philosophy. The emergence of connectionism (neural networks) represented an alternative paradigm that sought to model the brain rather than formal logic.

By the late twentieth century, the field largely shifted toward “weak AI”—the construction of useful intelligent tools rather than the reproduction of human intelligence—though ambiguity persists about ultimate goals.

Philosophical Lineage

```mermaid
flowchart TD
    Descartes --> Turing
    Turing --> McCarthy
    Turing --> Minsky
    Frege --> Turing
    Putnam --> Functionalism
    Functionalism --> Strong-AI
    Heidegger --> Dreyfus
    Gadamer --> Winograd
    Dreyfus --> Weak-AI
    Searle --> Weak-AI
    Austin --> Searle

    class Descartes,Turing,McCarthy,Minsky,Frege,Putnam,Heidegger,Dreyfus,Gadamer,Winograd,Searle,Austin,Functionalism,Strong-AI,Weak-AI internal-link;
```

Key Thinkers

| Thinker | Dates | Movement | Main Work | Core Concept |
| --- | --- | --- | --- | --- |
| Turing | 1912-1954 | Philosophy of Mind | Computing Machinery and Intelligence | Turing test, computability |
| Putnam | 1926-2016 | Analytic Philosophy | Philosophical Papers | Functionalism, multiple realizability |
| Searle | b. 1932 | Analytic Philosophy | Minds, Brains and Programs | Chinese Room, intentionality |
| Dreyfus | 1929-2017 | Phenomenology | What Computers Can’t Do | Embodied intelligence, situatedness |
| Minsky | 1927-2016 | Cognitive Science | Semantic Information Processing | AI research, frames |

Key Concepts

| Concept | Definition | Related to |
| --- | --- | --- |
| Artificial Intelligence | The attempt to make machines do things that would require intelligence if done by humans | Minsky, Cognitive Science |
| Functionalism | The view that mental states are defined by functional roles rather than material constitution | Putnam, Philosophy of Mind |
| Mind-computer analogy | The thesis that mind relates to brain as software relates to hardware | Putnam, Cognitive Science |
| Turing test | Operational criterion: a machine “thinks” if indistinguishable from humans in blind conversation | Turing, Philosophy of Mind |
| Chinese Room | Thought experiment showing syntactic symbol manipulation does not constitute understanding | Searle, Philosophy of Mind |
| Intentionality | The mind’s capacity to be “about” or directed toward objects; lacking in computers | Searle, Phenomenology |
| Strong AI | The thesis that computers can genuinely think and have minds | Philosophy of Mind, Cognitive Science |
| Weak AI | The thesis that computers are useful tools for studying or emulating intelligence | Philosophy of Mind, Cognitive Science |
| Connectionism | Research program modeling intelligence through neural networks rather than symbol manipulation | Cognitive Science, Philosophy of Mind |
| Common sense | Background of pre-understandings and beliefs that cannot be formalized; AI’s bête noire | Dreyfus, Phenomenology |

Authors Comparison

| Theme | Turing | Searle | Dreyfus |
| --- | --- | --- | --- |
| Definition of intelligence | Behavioral, operational | Intentional, conscious | Embodied, situational |
| Can machines think? | Yes, if behaviorally indistinguishable | No, syntax ≠ semantics | No, intelligence requires Dasein |
| Criterion | External performance | Internal understanding | Contextual pre-understanding |
| View of mind | Computational | Biological, intentional | Phenomenological, holistic |
| AI assessment | Optimistic | Critical of strong AI | Critical; limited domains possible |
| Philosophical tradition | Logic, behaviorism | Analytic philosophy | Phenomenology, hermeneutics |

Influences & Connections

  • Predecessors: Turing ← influenced by ← Frege, Russell, mathematical logic
  • Predecessors: Dreyfus ← influenced by ← Heidegger, Merleau-Ponty, phenomenology
  • Predecessors: Searle ← influenced by ← Austin, speech act theory
  • Contemporaries: Putnam ↔ debate with ↔ Searle, Dreyfus
  • Followers: Dreyfus → influenced → embodied cognition, situated AI
  • Followers: Searle → influenced → critiques of computationalism
  • Opposing views: Dreyfus ← criticized by ← AI researchers; Searle ← criticized by ← functionalists

Summary Formulas

  • Turing: A machine can be said to think if its performance in conversation is indistinguishable from that of a human; intelligence is defined operationally by behavior.
  • Putnam: Mental states are functional states that can be multiply realized; the mind relates to brain as software to hardware, making machine minds theoretically possible.
  • Searle: Syntactic symbol manipulation (what computers do) does not constitute semantic understanding; computers lack intentionality and consciousness, so strong AI is impossible.
  • Dreyfus: Human intelligence is holistic and situational, grounded in embodiment and common sense that cannot be formalized; AI succeeds only in limited, fully formalizable domains.

Timeline

| Year | Event |
| --- | --- |
| 1936 | Turing develops the concept of the Turing machine |
| 1950 | Turing publishes “Computing Machinery and Intelligence” with the Turing test |
| 1956 | Dartmouth conference officially launches AI as a discipline |
| 1964 | Putnam publishes “Robots: Machines or Artificially Created Life?” |
| 1967 | Putnam develops functionalism in philosophy of mind |
| 1972 | Dreyfus publishes What Computers Can’t Do |
| 1980 | Searle publishes “Minds, Brains and Programs” with the Chinese Room argument |
| 1986 | Connectionism/neural networks gain prominence |
| 1987 | Winograd and Flores publish Understanding Computers and Cognition |

Notable Quotes

“The question is not whether machines can think, but whether we can distinguish their performance from that of thinking beings.” — Turing

“No system that limits itself to formal manipulation of symbols, without being conscious of their meanings, can be considered identical to a thinking being.” — Searle

“Every intelligibility and every intelligent behavior must be traced back to the common sense of what we are, which necessarily, if we want to avoid infinite regress, is knowledge that cannot be made explicit.” — Dreyfus


NOTE

This summary has been created to present the key points from the source text, which was automatically extracted using an LLM. Please note that the summary may contain errors. It is intended as a starting point for study and reference.