According to naturalism, the world can be adequately described in terms of what is physical. But what is a good naturalistic explanation of thought? Beliefs, desires, and other propositional attitudes appear not to fit within the physical universe. While the physical universe can be analyzed in terms of causation and laws of nature, the life of the mind appears to involve more. What it is to reason about some action seems very different from what it is for a neuron to initiate a causal chain that results in an action. If causation cannot account for reasoning, then what it is that does the reasoning cannot be accounted for in physical terms. Thoughts also appear to be about something; they point at something beyond themselves. But it is difficult to imagine physical entities being about anything and, since the mind is what does the thinking, it would follow that the mind could not be a physical entity. The language of thought hypothesis (LOTH) attempts to provide a naturalist answer according to which thought should be analyzed in terms of an internal language physically realized in brains.
The most prominent defender of the language of thought hypothesis is Jerry Fodor, who argues that attempts to naturalize the mind through analysis of behavior or some form of reductive physicalism fail.[2] Fodor posited that a computational/representational theory of mind combined with an innate language of thought could provide a materialist account of thought as part of a psychological project. The language of thought hypothesis suggests that thinking is done in a mental language, a symbolic system physically realized in the brain. On this view, the objects of propositional attitudes are sentence-like concrete objects in the brain that represent the world to a subject.
LOTH seeks to account for thought in terms of causal processes. It is, therefore, a form of functional materialism in which mental representations are realized by physical properties of the brain.[3] Alan Turing developed the idea of a machine capable of complex tasks comparable to those of the human mind.[4] Indeed, Fodor argues that the computational theory of mind (CTM) is the only plausible candidate for a psychological project.[5] It is not difficult to see why a computational theory of mind is so plausible for a functional materialist: there is obvious evidence that material things can process information.
However, the view that minds are computational does not say how information processing can carry meaning. One either needs some story in which processing carries meaning or one should abandon intentional realism. Since computers are “environments in which symbols are manipulated in virtue of their formal features”,[6] the obvious candidates for the job are symbols that are combined, systematized, and produced in various ways so as to be candidates for intentionality and rationality. Crucially, it is maintained that computers preserve semantic content during their processing. In the same way, the mind, according to CTM, preserves semantic content during the processing of symbolic representations.
Fodor argues that if the mind is computational, then it must be representational: “no representations, no computations. No computations, no model.”[7] A representational theory of mind suggests that the world is represented to the mind through some medium. Fodor argues that this accounts for human cognitive processing in a way other theories cannot. He argues that the human ability to deliberate, perceive, and learn concepts is best explained by conceiving of the mind in terms of a capacity to represent possibilities, compute various options, and come to some plausible or rational conclusion.
Fodor gives three reasons to conceive of the mind as representational. First, when agents are confronted with choices they represent options to themselves and compute the outcomes of actions. The best way to describe considering options is in terms of computation, and if the process is computational, then the agent must represent to itself the various options. Second, concept learning is “essentially a process of hypothesis formation and confirmation… the experiences which occasion the learning in such situations… stand in a confirmation relation to what was learned.”[8] In order for such learning to take place, Fodor argues, we must account for inductive reasoning by assuming that the human subject can represent relevant experiences to herself. She must be able to come to an informed belief about predication on the basis of multiple experiences that lead to a hypothesis, and then treat those experiences as confirming it. Crucially, Fodor argues, this is only possible if the subject represents the experiences to herself as experiences that confirm the hypothesis.[9] Finally, perception relies on picking out invariant properties of entities that have been experienced in the past. Perception therefore relies on an inference to the best explanation based on representing past experiences to oneself and forming a conclusion by confirming a hypothesis.
In order for there to be a representation to oneself, there must be some medium by which one represents. This medium, says Fodor, is an internal language of thought: “Representation presupposes a medium of representation, and there is no symbolization without symbols. In particular, there is no internal representation without an internal language.”[10] The language of thought is an innate language in which constituent symbols are combined to form complete thoughts, and various combinations of symbols are produced in order to produce various thoughts. Just as computers manipulate and process formal languages, the brain manipulates and processes natural languages: sentences in natural languages are translated into mentalese, the language of thought.
The language of thought is causally efficacious, structurally combinatorial, and computable.[11] On the LOTH view, thoughts and their contents are causally efficacious. That is to say, reasoning is a matter of causal relationships between thoughts. Thoughts are composed of parts that stand in physical and causal relations to one another. When parts are assembled according to logical form, they succeed in exhibiting sound structure. Finally, sentences rely on the completeness of higher-order logic, and so when sentences succeed in having the syntactic properties of well-formed formulas (wffs), they can become part of a mechanical process of inference in the same way a computer calculates outputs according to the wffs of its inputs.
It is easy to see how, given these features of the LOT, the mind is analogous to a computer. As Fodor suggests, “the operations of the machine consist entirely of transformations of symbols; in the course of performing these operations, the machine is sensitive solely to syntactic properties of the symbols; and the operations that the machine performs on the symbols are entirely confined to altering their shapes.”[12]
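To make the picture concrete, here is a minimal sketch of an inference procedure that operates purely on the shapes of symbols. It is my own illustration, not Fodor’s formalism: sentences are modeled as tuples, and the rule fires by matching their syntactic form, never by consulting what the symbols mean.

```python
# A minimal sketch (not Fodor's own formalism) of inference driven purely by
# syntactic form: sentences are tuples, and the rule applies by matching shapes,
# never by consulting what the symbols mean.

def close_under_modus_ponens(premises):
    """Derive Q from P and ('IF', P, 'THEN', Q) by shape-matching alone."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            # Match the conditional's syntactic shape: ('IF', P, 'THEN', Q).
            if isinstance(s, tuple) and len(s) == 4 and s[0] == 'IF' and s[2] == 'THEN':
                antecedent, consequent = s[1], s[3]
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

# The machine never "knows" what RAIN or WET mean; the inference is fixed by form.
premises = {('RAIN',), ('IF', ('RAIN',), 'THEN', ('WET',))}
print(close_under_modus_ponens(premises))  # includes ('WET',)
```

If the symbols are interpreted so that the premises are true, the derived sentence is true as well, which is the sense in which such a machine is said to preserve semantic content while attending only to syntax.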
In sum, Fodor provides a picture that is supposed to account for thought. However, there are at least two remaining stories to be told for an adequate naturalistic account of thinking. The first involves an account of the relationship between mental states and propositions: what it means for a subject to believe something to be true. The second involves an account of semantic content: how is it that sentences in the LOT carry meaning?
With regard to propositional attitudes, Fodor argues that a subject’s mental state can be analyzed in terms of a relationship the subject has with a sentence-like concrete entity in the brain that represents the world. Propositional attitudes have syntactic structure in virtue of which they have semantic content and truth conditions, which can be attached to them in virtue of their composition.[13] Propositional attitudes are generally taken to be the relations subjects have to propositions.[14] Propositions are grasped, entertained, and assented to or rejected. They are capable of bearing truth values and are expressed by sentences of natural languages. The language of thought is not a natural language but an internal representational system that represents concepts to the agent. On the LOTH view, for a subject to have a propositional attitude is for the subject to stand in relation to sentence-like entities in an internal language that is capable of linguistically representing semantic meaning to the subject. Consequently, it is not propositions that bear truth values but sentences in mentalese.
For this picture to work, the subject must have a propositional attitude toward some entity that has content. On the LOTH view, semantics supervenes on syntax: if the mind is computational and its cognitive processes are linguistic, then the meaning of sentences in mentalese supervenes on the syntactic structure of those sentences. In contrast, many philosophers of language contend that syntax is determined by semantics: the structure of a sentence is determined by the intended meaning the sentence is supposed to communicate. On the LOTH view, however, the brain computes in symbols that, in turn, provide the meaning.[15] Natural languages consist of physical entities that are causally efficacious. They are used by speakers to cause a change in the mental state of hearers, whose mental state, if the communication is successful, comes to accord with the mental state of the speaker. To communicate by producing wave forms is for the speaker to produce a wave form of words standardly used for communicating an intended description that is recognized by the hearer: “communication consists in establishing a certain kind of correspondence between their mental states.”[16]
Those who hold to the LOTH are generally intentional realists. Intentional realism is the view that thoughts or symbols succeed at being about something.[17] Traditionally, what thoughts are about are objects of some kind, namely propositions. Propositions are entities that have aboutness; they are about something and bear truth values depending on whether what they say about that something turns out to be true or false. Linguistic intentional realism, however, differs with respect to the mental state itself, the attitude the subject has. On the LOTH view, the mental state is a complex structure in which “the syntactic structure of mental states mirrors the semantic relations among their intentional objects.”[18] Thoughts have combinatorial semantics: they combine meanings and are productive of novel thoughts. Fodor argues that since natural languages have the ability to produce novel semantic content by combining their parts in novel ways, the likelihood of mental states having the same property is increased. What else is there? Fodor argues that since we lack a convincing story about non-linguistic intentional realism, apart from reducing it to a version of neuroscience according to which one refers to the connections of neurons as an explanation, the LOTH presents the best option.
In sum, if what is required for thinking is a symbolic system that can capture semantic properties together with a mechanistic physical base, then we have a powerful option for a naturalist understanding of the mind. However, I will argue that the account fails. First, I will argue that it fails to show how sentences in the language of thought can carry any semantic content. Second, I will argue that what we believe, namely propositions, cannot be concrete, and that if they are not concrete, then they are not in languages of thought, at least not in languages of thought that are physically realized.
My first claim is that sentences cannot have intentionality. Intentionality is a property of thinking. If thoughts are representations in an internal language of thought, then the question is: what is it about that language that gives it intentionality? On the LOTH, one’s propositional attitude consists in a mental representation of P in mentalese that means P. Brentano’s question therefore becomes a question about mentalese: what gives mentalese the ability to be about anything?
The obvious answer is to appeal to Tarski-style truth conditions according to which the truth conditions of the LOT are analogous to the truth conditions of propositional logic. A complex sentence is true or false in virtue of the truth values of the simple sentences from which it is composed, combined with the rules that govern the functions of the connectives between them. The most relevant semantic property, after all, is the truth value of a sentence.[19] As in logic, the semantic properties of a sentence are determined by the atomic sentences that compose it combined with the rules that determine their relations. This, combined with some naturalistic understanding of mental states, such as a referential and causal account, should provide enough material to solve the problem.
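The Tarskian idea of compositional truth conditions can be illustrated with a minimal sketch for propositional logic. The atomic sentences and their valuation below are assumed purely for the example; the point is only that the truth value of a complex sentence is fixed by the truth values of its parts plus the rules for the connectives.

```python
# A minimal sketch of Tarski-style compositionality for propositional logic:
# the truth value of a complex sentence is determined by the truth values of its
# atomic parts together with the rules governing the connectives.
# (Illustrative only; the atoms and valuation are assumed for the example.)

def truth_value(sentence, valuation):
    """Evaluate a sentence built from atoms and the connectives NOT, AND, OR."""
    if isinstance(sentence, str):            # atomic sentence
        return valuation[sentence]
    op = sentence[0]
    if op == 'NOT':
        return not truth_value(sentence[1], valuation)
    if op == 'AND':
        return truth_value(sentence[1], valuation) and truth_value(sentence[2], valuation)
    if op == 'OR':
        return truth_value(sentence[1], valuation) or truth_value(sentence[2], valuation)
    raise ValueError(f"Unknown connective: {op}")

# "It is raining and it is not snowing."
valuation = {'RAINING': True, 'SNOWING': False}
sentence = ('AND', 'RAINING', ('NOT', 'SNOWING'))
print(truth_value(sentence, valuation))  # True
```

Notice that the recursion only distributes truth values that the atomic sentences are already assumed to have; it does not say where those atomic assignments come from, which is exactly the gap a referential or causal account is supposed to fill.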
The latter claim involves the input from the external world that causes the internal representation in the LOT. A causal theory appeals to the idea that entities, events, and the like in the external world cause representations to occur. Given this view, the LOTH proponent is committed to saying that a subject’s mental state is intentional only if it stands in a causal relation both to other mental states and to inputs from the subject’s environment.[20] The problem with such a view is that many thoughts bear no relation to the environment the subject is actually in. One might be thinking about fictional characters, or about people or things that are not and have never been in one’s environment. One reply is to say that thoughts about entities not in our environment are parasitic on entities that are: it might be possible to construct representations based on what is actually in one’s environment in order to generate thoughts about things that are not.
A deeper problem for causal theories was raised by Karl Popper. Popper argues that any causal series has a beginning (the entity causing the representation) and an end (the mental state representing the entity). The problem Popper highlights is that, given the mass of possible inputs from a particular environment, there is nothing in the series itself that can determine which particular bit of the environment triggers it. If so, then one needs to appeal to some factor apart from the causal series to determine which bit of the environment is the beginning of the chain (and, mutatis mutandis, the end of the series). According to Popper, there is a set of mind-dependent factors, such as purpose and interest, that determine what, out of an incredibly complex and large amount of environmental input, ought to be the thing to think about.[21]
A further argument can be advanced against the language of thought itself. Sentences are composed of parts. Indeed, the LOTH is supposed to hold explanatory power because it explains how thoughts can have compositionality. However, it is not clear at what level, or in which part, of a sentence in mentalese meaning resides. There appear to be two options: either meaning is carried at the atomic, primitive level of the symbols from which sentences are composed, or at the molecular, sentence level. If it is at the atomic level, then it cannot have the property of bearing truth on a Tarskian view since, on that view, it is sentences that have truth conditions. If, on the other hand, it is at the sentence level, then it may be a candidate for truth conditions, but it is not clear how the sentence obtains its meaning from its components except by way of a semantic interpretation at the atomic level.[22] Therefore, as Aydede and McLaughlin suggest, “officially LOTH would only contribute to a complete naturalization project if there is a naturalistic story at the atomic level.”[23]
In The Language of Thought: A New Philosophical Direction, Susan Schneider proposes that we analyze meaning at the atomic level, but instead of loading the symbol with the burden of carrying some property of meaning derived from the molecular level, we should look to a pragmatic theory at the atomic level, in virtue of which meaning is a function of the role the symbol plays in the total system: “symbols must be individuated by what they do, that is by the role they play in one’s cognitive economy, where such is specified by the total computational role of the symbol.”[24] On Schneider’s view, pragmatic atomism is the thesis that symbols are individuated by the role they play or by what they do. They stand in causal relations to entities in the world and then play a role in the total system so as to produce relevant results according to internal rules.
The problem with this response is that pragmatism cannot privilege the role an entity plays according to any intrinsic property the entity has. Consider, for example, the carrot. Is it for nutrition or for a snowman’s nose? On pragmatism, neither is privileged according to anything internal to the carrot; the carrot can function as either, and neither role is more privileged than the other. What is true of the carrot is true of anything else and, mutatis mutandis, of symbols.
Furthermore, if the theory of meaning is pragmatic at its most fundamental level, it is not easy to see how one could justify the claim. In seeking a naturalistic explanation for thought one is seeking something that is true about the world, some way in which thought can be accounted for in terms of what is permitted in a world described in naturalistic terms. If one of those truths is “thought is explained by the existence of a mental language physically realized in the brain”, then surely the desideratum of such an account is that it be true. But how should one justify the claim? If we say that meaning is at the atomic level and that it is determined by the role symbols play, then we shall have to appeal to pragmatism to justify our claim. But pragmatism only gives us what it is rational to believe in order to further our own interests.[25] Surely, though, an argument for LOTH is supposed to show us evidence from which we can infer that it is true, and one might believe something in one’s best interest despite its being false. Making pragmatism the final court of appeal, therefore, undermines the justification of the claim and ostensibly the whole LOTH project.
My second objection comes from Alvin Plantinga, who argues that propositions are not the kinds of things that can be concrete. However, in order for a sentence in the language of thought to have causal powers it must, on the naturalistic assumption, be a concrete object.[26] There are some straightforward objections to the view that propositions are concrete. For example, there are propositions that no one has yet entertained, let alone believed. If there are such propositions, then the LOTH is in trouble: the LOTH admits only sentences in the language of thought, and if there are propositions that have not been represented to any subject, then either those propositions do not exist or the LOTH is false and they exist outside the language of thought.
Second, speakers of different languages can believe the same thing. But it cannot be the same thing if it is a concrete object, an inscription in the brain, since different brains contain different inscriptions. One might reply that the inscriptions are the same, but that would be to assume that there is something more than the inscription itself, an abstract object beyond the inscription that is expressed by it. But if the inscription is all there is, then the two inscriptions, no matter how similar they are in shape and form, cannot be said to be the same.
A LOTH proponent could reply that the LOT is not concrete but rather a description of mental states, which are what we call propositions. This reply would be a species of conceptualism whereby propositions are mind-dependent thoughts that exist in virtue of being thought, in which case sentences of the LOT would serve as propositions. However, this would defeat the aim of providing a naturalist account of thought by admitting entities that are not describable in physical terms. Furthermore, such a view generates only a finite quantity of thoughts, and surely there are thoughts that have not been had which are nonetheless true.
Plantinga’s strongest argument suggests that the cost of believing that propositions are concrete is the loss of modal truths. On LOTH, if there had been no human beings, then there would be no propositions. Yet, if there had been no human beings, “there are no human beings” would have been true. Indeed, it seems necessarily true that if there were no human beings, then “there are no human beings” would have been true.
The response might be that the proposition in question cannot both exist and be false, a kind of weak necessity: “for a proposition p to be necessary, it is not required that p be such that it could not have failed to be true: all that’s required is that p be such that it could not have been false.”[27] In other words, for a proposition p to be weakly necessarily true, all that is required is that if p exists, then p is true. The problem, according to Plantinga, is that on this account the proposition “there are brain inscriptions” could not have been false: if that proposition exists at all, it is itself a brain inscription and is therefore true, in which case it cannot have been false that there are brains. Plantinga concludes that if propositions were concrete, then there would be far too many necessary truths and far too few possible truths.
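The distinction can be put schematically. The notation below is mine rather than Plantinga’s, where E(p) abbreviates “p exists” and T(p) “p is true”:

```latex
% Strong necessity: p is true in every possible world.
\Box\, T(p)

% Weak necessity: in every world, if p exists, then p is true.
\Box\, \bigl( E(p) \rightarrow T(p) \bigr)
```

On the concretist view, the inscription “there are brain inscriptions” satisfies the weak condition trivially, since its very existence guarantees its truth, and that is precisely the inflation of necessary truths Plantinga finds objectionable.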
The LOTH is supposed to give us a naturalistic account of thought. It does so by postulating the existence of an innate language of thought combined with a representational/computational theory of mind. I have argued that syntax cannot convey intentionality unless it is conceived of pragmatically in terms of functional materialism, in which case it cannot be epistemically justified. Furthermore, I have argued that propositions cannot be concrete and, therefore, cannot be sentences in a physically realized language of thought.
[1] There are, perhaps, many more reasons for rejecting the view. However, for the sake of brevity I will focus on objections concerning rationality and intentionality on the LOTH view rather than objections to the computational theory of mind.
[2] Jerry Fodor, The Language of Thought (New York: Thomas Y. Crowell Company, 1975), 1–2; Jerry A. Fodor and Zenon W. Pylyshyn, Minds Without Meanings: An Essay on the Content of Concepts (Cambridge, Mass.: The MIT Press, 2014), 3–6.
[3] Murat Aydede and Brian McLaughlin, “The Language of Thought Hypothesis”, The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.).
[4] Alan Turing, “Computing Machinery and Intelligence”, Mind 59 (October 1950): 433–60.
[5] The Language of Thought, 27.
[6] Aydede and McLaughlin, 16.
[7] The Language of Thought, 31.
[8] Ibid., 34–35.
[9] Ibid., 37.
[10] Ibid., 55.
[11] José Luis Bermúdez, Thinking Without Words (New York: Oxford University Press, 2003), 22–24.
[12] Jerry Fodor, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (Cambridge, Mass: A Bradford Book, 1987), 19.
[13] Mark Richard, “Propositional Attitudes”, in A Companion to the Philosophy of Language, eds. Bob Hale and Crispin Wright (Malden, Mass.: Blackwell Publishers, 1997), 208.
[14] Ibid., 197.
[15] John Searle’s “Chinese room argument” is supposed to show that semantics cannot come from syntax. John Searle, “Minds, Brains, and Programs”, in The Nature of Mind, ed. David Rosenthal (New York: Oxford University Press, 1991), 509–519.
[16] The Language of Thought, 108.
[17] Psychosemantics, xi.
[18] Ibid., 138.
[19] Alfred Tarski, Logic, Semantics, Metamathematics: Papers from 1923 to 1938 (Oxford: Clarendon, 1956).
[20] Edward Feser, Philosophy of Mind (Oxford: Oneworld Publications, 2005).
[21] Karl Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (New York: Routledge, 1963), 395–402.
[22] Aydede and McLaughlin, 18.
[23] Ibid., 19.
[24] Susan Schneider, The Language of Thought: A New Philosophical Direction (Cambridge, Mass: The MIT Press, 2011), 163.
[25] Michael Rea, World Without Design: The Ontological Consequences of Naturalism (New York: Oxford University Press, 2002), 139.
[26] Alvin Plantinga, Warrant and Proper Function (New York: Oxford University Press, 1993), 117–119.
[27] Ibid., 119.