Medical studies suggest that men who periodically donate blood are at a lowered risk of heart disease. Men are at higher risk overall than women, for various reasons, but one is that they have an increased red blood cell count precipitated by their natural levels of testosterone. Higher red blood cell counts are correlated with a higher risk of heart disease. This is interesting to me because lots of "pre-modern" cultures, historically and presently, practice bloodletting for medicinal reasons. In fact, bloodletting was the routine medical procedure of choice for two millennia prior to the 19th century. It is well documented that bloodletting (in men) was seen in Hippocratic medicine as a purging process sharing isomorphic functions with menstruation in women. As cited on Wikipedia, Hippocrates maintained that one of the functions of menstruation was to "purge women of bad humors" (humors = essential biological substances, of which there were four in Greco-Roman medicine). So, as testified by this renewing ecological function of the female mammal, bloodletting was deemed appropriate for achieving similar things in non-female-bodied individuals. Wow, did I just feel a shred of power given to the processes of the female body? That sure went away in the Middle Ages in Europe, when we suddenly developed a fear of female overflow. But of course, it was okay for Jesus to do it on the cross.
My mother grew up in a small mountain village in Greece in the wake of the Greek civil war (she was born in 1948). When I was in high school, my mom, with the help of my older brother, shot and edited a documentary on women healers in Greek pastoral culture. She was able to focus on several healers who had lived and worked in her home village, interviewing her siblings, cousins, and surviving elders whom she had known as a young girl. The film included a story of bloodletting, featuring a woman who for some reason could not menstruate. This woman would go to a local healer once a month and have blood let out of her leg.
It isn't hard to see that this form of folk medicine was passed down, in the landlocked mountain villages of Greece, from ancient Greek and Mesopotamian culture, for whom bloodletting was as routine as gargling with salt water for a sore throat is for us. Many historians point out, however, that bloodletting was definitely over-prescribed (by our standards, perhaps, which are much more ontologically informed about physiology and pathology - for better or for worse). By contrast, the exemplars of ancient medicine - like Hippocrates and Galen - performed bloodletting sparingly and with caution.
Addendum: It is worth noting that the state of the medical art declined in the Middle Ages - which is to say, it stopped progressing with the momentum it had in the ancient world. This was in no small part due to the politicization of medicine by the Catholic Church, which decried any kind of "mutilation" of dead bodies (it might make it harder for their souls to escape?). That meant, of course, no dissection. And, not surprisingly, disease pathology was theologized, too.
more on medieval medicine & the church
Why farmpunk?
A farmpunk could be described as a neo-agrarian who approaches [agri]culture, community development and/or design with an anarchistic hacker ethos. "Cyber-agrarian" could supplant neo-agrarian, indicating a back-to-the-land perspective that stands apart from past movements because it is heavily informed by conceptual integration in a post-industrial information society (thus "forward to the land," perhaps?). The art and science of modern ecological design - and ultimately, adapting to post-collapse contexts - will be best achieved through the combined arts of cybermancy and geomancy: an embrace of myth and ritual as eco-technologies. In other words: the old ways of bushcraft and woodlore can be combined with modern technoscience (merely another form of lore) in open and decentralized ways that go beyond pure anarcho-primitivism. This blog is an example of just that. Throughout, natural ecologies must be seen as the original cybernetic systems.
**What we call for at the farmpunk headquarters**
°Freedom of information
°Ground-up action + top-down perspectives
°Local agricultural systems (adhering to permaculture/biodynamic principles) as the nuclei of economies
°Bioregional autonomy
°Computers are optional but can be used for good—see peer to peer tech, social media for direct popular management of natural or political disasters (e.g. Arab Spring), or the mission of the hacker collective Anonymous
°You
Thursday, April 1, 2010
Computers: Confounding philosophy since the atomic age
Or, why your mind is more like a steam engine than a pocket calculator...
Today, mainstream philosophy of mind and cognitive science has a little limp in the methodology department. The root of it, perhaps, is computationalism: the notion of mind as computer. It is more specifically a notion of cognition as computation - cognition being the processes of thinking, knowing, and assimilating sense perception into something coherent. For many, "mind" and "cognition" mean roughly the same thing - I like "cognition" because it denotes process and dynamic nature more than the word "mind" does.
One of the reasons computational methodologies are so hard to grow out of is that the idea of the theoretical Turing machine in the 1930s - and its actualization shortly thereafter in computers - was readily adopted as the ultimate heuristic for understanding minds. Briefly (if I can!): a Turing machine is an abstract 'conceptual' machine, although it can be - and is - realized in actual machines, i.e. computers. At the time of its conception by mathematician Alan Turing in 1936, the technology available for him to draw from aesthetically was basically state-of-the-art printing technology. Thus early descriptions of Turing Machines are reminiscent of typewriter-like contraptions, except instead of like 50 typebars there is only one, and this is a magical typebar because it can not only print a symbol on the paper, it can also read the symbols on the paper. The Turing Machine has basically two elements - a 'tape', a strip of paper divided into boxes or cells where symbols are read, written, and overwritten (one symbol per cell), and a read-write head (our magic typebar!). It can do a few discrete things: be in one of a finite number of states, carry out instructions, move one cell to the left or to the right, and read and write symbols. The instructions typically go something like this: "if the machine is in state X and the current cell contains a 0, then move into state Y and change the 0 to a 1". This is called a transition rule, and it is the basic building block of what we would now call a computer program for the Turing Machine to carry out! The machine stops when it cannot carry out the transition rule it is currently on (i.e. when the program's purpose has been served). If you've heard of binary, you know that ones and zeroes form the most fundamental level of code for all computers. The computer you're using is a really, really fancy Turing Machine - or rather, a snazzy conglomeration of them. One of the reasons 'the original' Turing Machines are kind of a bitch to understand (at least for a social science critter like me) is that we really take computers for granted nowadays. They are such a part of our cultural fabric and daily lives - and for most of us it is only their highest-level functions - like a graphical user interface - that we deal with. So to most of us "computer" evokes memories of like, fiddling around on your desktop, browsing the interwebs, and maybe just how cool your laptop looks with its new skin on. Don't worry. I understand.
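If it helps to see those transition rules in action, here is a minimal sketch of a Turing-machine simulator in Python. It's my own toy illustration, not any historical machine: the rule table is made up and just overwrites a string of 0s with 1s until it hits a blank.

```python
# A minimal Turing machine sketch (illustrative only): a finite set of states,
# a tape of cells, a read-write head, and a table of transition rules of the
# form (state, symbol) -> (new state, new symbol, move left/right).
# The example rules below are hypothetical; they just flip 0s to 1s.

from collections import defaultdict

def run_turing_machine(rules, tape, state="A", head=0, blank="_", max_steps=1000):
    tape = defaultdict(lambda: blank, enumerate(tape))  # sparse, 'infinite' tape
    for _ in range(max_steps):
        key = (state, tape[head])
        if key not in rules:          # no applicable rule -> the machine halts
            break
        state, tape[head], move = rules[key]
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in range(min(tape), max(tape) + 1)] if tape else []
    return state, "".join(cells)

# "If in state A and the cell holds 0, stay in A, write 1, move right."
rules = {
    ("A", "0"): ("A", "1", "R"),
    ("A", "1"): ("A", "1", "R"),
    # no rule for ("A", "_"): hitting a blank halts the machine
}

print(run_turing_machine(rules, "0100"))  # -> ('A', '1111_')
```

The thing to notice is that everything the machine "knows" lives in that explicit, discrete rule table, and each step happens strictly in sequence.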
The following is excerpted from the Stanford Encyclopedia of Philosophy's entry on Functionalism. Please check it out and scroll down to find "Machine state functionalism" for more.
"In a seminal paper (Turing 1950), A.M. Turing proposed that the question, “Can machines think?” can be replaced by the question, “Is it theoretically possible for a finite state digital computer, provided with a large but finite table of instructions, or program, to provide responses to questions that would fool an unknowing interrogator into thinking it is a human being?” Now, in deference to its author, this question is most often expressed as “Is it theoretically possible for a finite state digital computer (appropriately programmed) to pass the Turing Test?” (See Turing Test entry)
In arguing that this question is a legitimate replacement for the original (and speculating that its answer is “yes”), Turing identifies thoughts with states of a system defined solely by their roles in producing further internal states and verbal outputs, a view that has much in common with contemporary functionalist theories. Indeed, Turing's work was explicitly invoked by many theorists during the beginning stages of 20th century functionalism, and was the avowed inspiration for a class of theories, the “machine state” theories most firmly associated with Hilary Putnam (1960, 1967) that had an important role in the early development of the doctrine."1
To many, a computer was (or is) more than just a metaphor for mind. Rather, the cybernetics of the computer came to constitute the new optimal solution for explaining how minds work. Cybernetics is the study of control and communication in systems - of how information flows and feeds back within a system - in this case, a digital computer. To be clear, we are not hampered by a focus on cybernetics per se - in fact, cybernetic theory as applied to living systems (yielding a focus on complex/dynamic/adaptive systems) has lots of promise in terms of revolutionizing the surprisingly tidy, curiously inorganic claims made by computational theories of mind. When quantum computing is the state of the art, philosophers of mind will probably say that the brain is a quantum computer. Whether or not that would be "a step in the right direction" is beside the point (indeed, some existing theories in that vein are quite fascinating and deserve attention) - the point is simply that technophilia by itself is a poor basis for philosophy.
It's no coincidence that the "cognitive revolution" began to bloom in the immediate wake of the commercial computer's entrance on the cultural scene of the 1950s. In this world, data (represented information) was king. More specifically, syntax was king - the linear way in which data are organized. Variables and algorithms, the fundamental building blocks of computer programs, are purely syntactic and linear in nature. They follow if/then protocols and adhere to numerical precision. And they are fixedly sequential; only one action can be executed after a necessary preceding action. A theory of mind founded on computational characteristics pictures the mind as a network of cognitive modules that deal in some sort of symbolic language akin to machine code; our internal states, dispositions, and feelings are thus the precise results of some sort of strict procedural (not to mention linear) sequence. Thus many theories that could be called computationalist are lazily founded on the idea of static mental representations.

I have been wondering, though: given the sheer number of neurons in the brain (on the order of 100 billion), and thus the staggering number of possible neural connections, are any two brain states ever exactly the same? I think not... it's not a river we can step in twice. I realize that neural states and "mental representations" aren't really the same thing to theorists. But then what the hell are mental representations? Are they cognitive phenomena that we perceive? In that case, they belong to a phenomenology of perception, not to cognitive science (and in the former realm they would be welcome!). I simply wonder if we need theories of representation at all here.

But I digress - back to our sketch of the computational trajectory in cog sci. For such theorists, the target question becomes: what are these fundamental data by which intra-cranial commerce is achieved? This is, in fact, an irrelevant question, because the brain does not deal in symbolic representations! There is no "data" as such in the brain. To see this clearly, we must attempt to shed the influence of our cultural obsession with data (which could be said to be the most ethereal incarnation of oil...). Instead of being like a collection of Turing machines, imagine that a cognitive system is more like a network of self-regulating mechanical gizmos whose design and functioning are sourced from laws of nature like gravity and thermodynamics. Instead of being made up of daisy chains of transistors and switches, like a computer, cognitive and perceptual systems are made up of a bricolage of "natural machines" - devices like gyroscopes and centrifuges that are linked in webs of cause and effect with each other and with the encompassing environment.
This latter proposition, which could fall under the umbrella of dynamicism, has been well illustrated by Australian philosopher Tim Van Gelder, who maintains that cognitive systems are better characterized as self-regulating dynamical systems.
Before I go any further I have to couch this in a broader theoretical context. Connectionism is an umbrella term for a range of theoretical positions in any philosophy or science of the mind that see cognition as an emergent phenomenon arising out of the workings of a complex adaptive system. The complex system in question is the totality of the neural networks contained in at least the entire nervous system (not just the brain). In this system, and in many other complex adaptive systems (bee or ant colonies, for example), the constituent units that make up the system at large often have a very narrow scope of agency. In other words, they have a simple repertoire of relational actions - actions or events that can affect other units or nodes in the system. (Indeed, in the jargon of the field such units are called agents.) For example, we know of several factors that modulate the degree of agency one neuron can have over another. Theories embracing complexity, however, do not focus on the units of the system in a reductionist way [in isolation]; rather, they focus principally on relationships and patterns in the system. It is there - in the temporal dimension of those process-based phenomena - that the closest thing to 'information', and perhaps meaning, can be divined. The structural foundations of complex systems are, contrary to what one might think, often very simple from a design standpoint. It is not how they work at a systemic level that is mind-boggling, but rather what such systemic operation enables them to do and to be on a large scale. A key feature of complex adaptive systems like the brain's neural network is that processing tasks are distributed over a huge (ginormous, really) number of agents. This makes them very resilient, among other things.
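To make that distributed picture a little more concrete, here is a toy sketch in Python (made-up numbers, not remotely biologically realistic) of a Hopfield-style network - one classic connectionist architecture - in which a "memory" is stored not in any single unit but in the web of relationships among many simple units:

```python
# Toy connectionist sketch (illustrative, not biologically realistic):
# a Hopfield-style network stores a pattern in the *relationships* (weights)
# between many simple +1/-1 units. Because storage is distributed, the
# pattern can be recovered even after part of it is corrupted.

import numpy as np

rng = np.random.default_rng(0)

def store(pattern):
    """Hebbian-style weights: units that fire together wire together."""
    p = np.asarray(pattern, dtype=float)
    w = np.outer(p, p)
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w

def recall(weights, state, steps=20):
    """Let each unit repeatedly settle according to its neighbours' signals."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(steps):
        for i in rng.permutation(len(s)):          # asynchronous updates
            s[i] = 1.0 if weights[i] @ s >= 0 else -1.0
    return s

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # a made-up 8-unit "memory"
w = store(pattern)

noisy = pattern.copy()
noisy[:3] *= -1                                    # corrupt 3 of the 8 units
print(recall(w, noisy))                            # settles back to the stored pattern
```

Corrupt a few units and the pattern still settles back out of the collective dynamics - the sort of resilience-through-distribution the paragraph above is gesturing at.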
So, proponents of connectionism favor describing information storage and manipulation as the arrangement and rearrangement of neural pathways in the brain, as opposed to explaining mental models and their syntax, a hallmark of computationalist theories. Jerry Fodor's language-of-thought theory is an exemplary computational approach to cognition. As you may discern, a lot of computational talk is simply a totally different manner of talking about cognition than the connectionist dialogue is. I would say it's a more narrative theoretical language, very grounded in linguistics, and thus a little bit more compatible with our native folk psychology (how the average person typically perceives their own thought processes). It is mythic, in a sense - but I don't think in the good sense. If you're familiar with this blog you know that I'm quite a fan of mythic language and thought when it's self-aware and utilized as a spiritual and emotional tool. My beef with computationalism is simple really: my brain ain't a Turing Machine, yo! Y'all think it's so good at making decisions - ze brain? Discrete decisions don't really even exist on a neurological level. Perceptually, they do - but inside there it's just warring groups of neurons, vying for your attention.
Searching for an alternative model to the ever-popular Turing Machine, Van Gelder describes the centrifugal governor, a mechanical device invented in the 1780s to enable factory steam engines to maintain a constant speed.
I'll try to briefly summarize the workings of this device for the purposes of our discussion. Essentially, the genius of this governor lies in its inclusion of a centrifugal mechanism: an object with a component that revolves around a fixed central axis, and of course requires earth's gravity to function. This centrifuge is connected to various parts of the steam engine in such a way as to enable the engine to continually adjust its incoming flow of steam - effectively producing a constant speed. In Watt's design, the "centrifuge" is the assembly with the two fly-balls. The belt wheel below the fly-balls is connected to an 'output shaft' attached to the engine, whose rotation reflects the engine's speed. If this shaft spins too fast or too slow, the horizontally-spinning fly-balls will either rise or fall (thanks, gravity!), which, through a series of connected spindles, adjusts a throttle valve on the pipe carrying steam into the engine. Voilà! You have a mechanical feedback control system.
The centrifugal governor is thus temporally synchronized with the steam engine; it is a mechanical extension of the engine itself. It is like a limb extended into the environment, designed to gather dynamic "information" using a sense modality - in this case to sense an effect of gravitation - that then modifies the functioning of the engine. The steam engine has something like a sense organ, perhaps!
I put "information" in quotes above because there is something curious about self-regulating contraptions such as James Watt's governor compared to the technology we take for granted today - these contraptions are nonrepresentational. That is, they don't rely on any static set of programmed commands to function. A computerized device could be designed to do the exact same thing as the governor - but wouldn't that be a waste of energy, what with gravity already here to help? To make a computerized governor we would have to create an abstract program - a set of rules for the computer to follow - that would have to continuously engage in a sequence of tests in order to help the engine achieve a constant speed. But with the centrifugal governor, to quote Van Gelder, "there are no distinct processing steps, [so] there can be no sequence in which those steps occur." The centrifugal governor constitutes a cyclic program, not a sequential one with a defined beginning and an end.
The brain is a kluge of such self-regulating contraptions - jerry-rigged and feedback-looped together by great spans of geological time. They run on pathways carved by gravity, thermodynamics, electromagnetism, and surely quantum mechanics! A cognitive system contains intercomplementary parts: quantities that are "coupled" so as to feed back each other's energy. Such couplings are analogous, in the governor, to 1) the position of the fly-balls on the centrifuge and 2) the amount of fuel (steam) running through the engine. This sort of coupling creates consistency - not precise consistency, of course, for that does not really exist on the level of complex organ(ism)s. Rather, it creates a sufficient consistency - a seamlessness in operation that suffices for whatever particular goal the system has. In the case of the governor, the goal may be to keep a steam engine in a textile factory from slowing down and causing an industrial loom to malfunction. In a cognitive system, perhaps consistency is a range of neural state-space necessary for, say, seamless visual perception to occur (or, as the case is, the cognitive apparition of seamless visual perception, since our eyes have large blind spots, etc.). What is maintained in the case of the governor, and perhaps similarly in cognitive systems, is a dynamic equilibrium: a system that maintains a steady state by regulating its outputs in proportion to its inputs.
This leads me to a compelling thought: such systems of dynamic equilibrium (or stable disequilibrium) have a functional metabolism, of sorts. Which leads me to this next idea:
A cognitive system is much more like an ecosystem than a computer.
Using ecosystem as metaphor could at least lead us down a more intellectually fruitful path than explaining a mind in terms of a tool of its own creation. The computer is a tool of the mind. But not so with an ecosystem - indeed, it is the other way around: the mind is a tool created by the ecosystem. Let us then defer to systems ecology, situated in the safe embrace of physics and biology, as an arena within which we can continue our inquiries into the nature of mind and cognition.
A computer is a simplified electro-mechanical system. Quasi-similarly, a garden is a simplified ecosystem. But unlike a garden or an organism, a computer is a CLOSED system. Computers do not draw from the edge of chaos for their continued functioning. They turn off and on and experience "new" things only when we give them formal input. But dynamical systems, like the earth's biosphere or a cell, do balance on a thin edge between chaos and order, where novelty is at a maximum. Computers don't have mechanisms that enable metabolic processes - and this is one of the reasons that we haven't realized strong AI. As Pierre Teilhard de Chardin wrote: to think, we must eat. Metabolism is what allows a system to maintain a relatively steady state through controlled processes of decomposition and growth. Specialized structural (physiological) features allow an organism to engage in both entropic (involving the dispersion of energy) and complex syntropic (involving the organization of energy) processes. These coupled processes occur at many levels and on many scales - from cellular respiration to breathing. The intercomplementary relationship between entropy and syntropy is, to put it one way, the scale-invariant rule of the Jungle. All organisms - and, in fact, some systems that we think of as nonliving, like ecosystems and planets - need mechanisms for locally staving off the second law of thermodynamics if they want to stick around for more than a little while. But to be clear, organisms care more about sticking around in the same form than do ecosystems or stars, which are a little more adventurous and all like "blaaaaargh!"
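A toy numerical sketch of what "maintaining a steady state through throughput" means (made-up units, nothing biological about it): a closed system just decays to equilibrium, while an open system with a constant energy intake and a proportional dissipation term settles at a steady value well above equilibrium - a cartoon of stable disequilibrium.

```python
# Toy sketch of "stable disequilibrium" (made-up units, not a biological model).
# A closed system runs down to equilibrium; an open system that keeps taking in
# energy and dissipating it settles at a steady state well above equilibrium --
# maintained by throughput, not by being sealed off.

def simulate(energy, intake, dissipation_rate, steps=2000, dt=0.1):
    for _ in range(steps):
        energy += (intake - dissipation_rate * energy) * dt
    return energy

closed = simulate(energy=50.0, intake=0.0, dissipation_rate=0.2)
open_  = simulate(energy=50.0, intake=4.0, dissipation_rate=0.2)

print(round(closed, 2))  # ~0.0  : decays to equilibrium
print(round(open_, 2))   # ~20.0 : steady state = intake / dissipation_rate
```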
Can you guess what would happen if your brain reached a state of thermodynamic equilibrium? Yep, you would die. There needs to be a constant influx of energy into living systems, thus there need to be mechanisms for gathering that energy and then there need to be ways to sufficiently utilize energy once it is in the system. Lastly, the system needs to figure out a way of..er... gracefully liberating expended energy. :P
This is not just true for living systems! As Star Larvae points out, paraphrasing James Lovelock's Gaia Hypothesis:
"Lovelock...describes Gaia as being in a state of stable disequilibrium. Gaia operates far from equilibrium, not in a haphazard way with wild fluctuations, but with remarkable stability. For what now has been at least three billion years, the conditions of Earth have remained within the narrow chemical and thermal range that has enabled life to proliferate and evolve to its present state of complexity. Lovelock lists ranges of specific physical conditions within which Gaia must remain to survive as a living entity. A slight decrease in the proportion of oxygen in the atmosphere, for example, would suffocate all but the most anaerobic forms of life. A slight increase, and the planet’s surface would incinerate.... [e]arth's chemistry is finely tuned to keep life alive."1
We can use this discussion of metabolic process to shed light on another confounded area of philosophy: artificial intelligence. "Strong AI" refers to synthetic intelligence that is equal to or greater than the intelligence of an adult human. What this means, formally, is for a machine of some sort to be able to carry out the same intellectual tasks as a human can. I'm not alleging that proponents of strong AI claim that intelligence is equivalent to sentient understanding (although it sort of has to be if you really want to duplicate the "human intellect"). But many functionalists, who don't care to distinguish between ability and understanding, have made that claim [if it quacks/walks like a duck, then = duck]. American philosopher John Searle, making a refreshing case against a functionalist interpretation of strong AI, famously argues that digital computers - regardless of future technological advancement - simply cannot possess sentient intelligence, because their operation relies solely on their formal syntactic structure - and such structure does not in any way enable or cause the sort of semantic content that exists in a living, situated mind. However well we may be able to simulate neural networks nowadays, it cannot be said that these networks are intelligent, just as it cannot be said that a weather simulation program is creating a hurricane when it displays an animation of one on the computer screen. You can't separate the syntactic structure from a brain, translate it into some sort of material network of inorganic stuff, and claim that intelligence or understanding can emerge from this network's functioning. What arises is merely a simulation of part of the brain's formal structure - nothing more. As Searle says, "no simulation by itself ever constitutes duplication". He finishes his article Can Computers Think? with this delightfully flippant paragraph:
"The upshot of this discussion I believe is to remind us of something that we have known all along: namely, mental states are biological phenomena. Consciousness, intentionality, subjectivity, and mental causation are all a part of our biological life history, along with growth, reproduction, the secretion of bile, and digestion."(Emphasis mine)
To paraphrase something Bill Hicks once said, it's really no more miraculous than eating a burger and a turd coming out of your ass.
Cognitive systems have not just syntax, but unique semantic content. Semantics connects syntax to the world. But wait: this semantics is not unique because it is sui generis, strictly irreducible, or possessed of some immaterial quality - as many chomping at the bit to yell "dualist" would like to be the case! Cognitive systems have semantics because of the existence and interplay of four basic characteristics and their iterations - which differentiate us from the theoretically "intelligent" digital computer. These are 1) metabolism, 2) non-linear memory, 3) self-awareness and 4) the ability to autonomously and spontaneously experience one's environment (i.e. being perambulatory). I think that these four things are essential to this elusive "semantic" dimension of mind that even the most syntactically complex system could not touch. Our physical movement through the world, which is an environment of continuously emerging novelty, forms feedback loops with these four functions, which in turn provide feedback for one another. Our agency is coupled with them.
The unique thing about the realization of consciousness is that it is (was) ultimately caused by millions of years of biological evolution and adaptation. Proximately (like, right now) it's caused by a bedlam of emergent biochemical phenomena in your nervous system. And ontologically consciousness seems like no biggie so it's no wonder we think that we can duplicate that shit with technology.
Machines will start to think when they have to eat and attract mates to survive (seriously!)
As we read earlier, James Lovelock made clear that the earth's biosphere is in a very graceful state of stable disequilibrium. We can undoubtedly say the same thing about consciousness. Indeed, it is a state very, very far from equilibrium - it is a complex metabolic state, that is to say, a system continually straddling entropy and complexity - in a very special way. A brainy way.
A brief addendum: I don't deny the possibility of strong artificial intelligence. But like Searle I just don't think a computer as we know it is capable of strong AI. I don't think the intelligent machine will be "built" in the mechanistic sense. It will be grown, more likely. We've got a lot to learn, that's for sure - consider me along for the ride!
***
Thanks to the folk at Star Larvae whose work taught me to take the concept of metabolism to whole new levels. Please read their excellent essay on "Metabolic Metaphysics".
Also see Tim Van Gelder's article, parts of which I summarized above, here (downloadable pdf).
Some more concise resources
What are Complex Adaptive Systems?
The Core Concepts of Neuroscience
YouTube vid of a Turing Machine made out of LEGOS
Machine State Functionalism - Stanford Encyclopedia of Philosophy
Labels: cognitive science, complexity, cybernetics, dynamicism, thermodynamics