Everyday human reasoning: how we, scientists, and engineers are cognitive agents in distributed knowledge-producing systems
Philosophy of science: Gooding, David C., “Visual Cognition: Where Cognition and Culture Meet.” 2006.
Our days are filled with the decision making that is a hallmark of what we call our intelligence (Carter 2009, 164), and the intelligent cognitive strategies we use individually and collectively during our daily rounds of knowledge production, knowledge distribution and decision-making are essentially no different from those that scientists employ. (Einstein, Space-Time 1922, 2) [1]
Model construction is a universal cognitive skill learned from observation irrespective of culture or profession. (Gooding 2006, 689)
But because creative thinking is considered pre-logical and resistant to formal analysis, the study of these cognitive processes is moving away from formal theory (explanation) and focusing instead on inventories and descriptions of the empirical models that scientists employ: description instead of explanation.
Individual cognition is defined as the mental activities associated with thinking, knowing, memory and communication (Myers 2010, 369), but Gooding’s models are extended hybrid systems that came into existence during the nineteenth century’s industrial expansion and large-scale production of goods. Such cognitive systems merge human organizations, mind and machine in what Gooding refers to as…
“… large-scale knowledge-production systems… Such systems combine different kinds of objects and entities—mental, verbal, visual, numerical, and symbolic representations; material technologies, designs, plans, and institutions—to manage production and regulate output.” (Gooding 2006, 694)
The foundation of such systems is often image-based representations that scientists develop and then alternately simplify, elaborate and interpret, using a general cognitive strategy common to all scientific domains and to the cognitive capacities we all use every day.
Representations are plastic and often adaptable to controlled transformation between 2-D patterns, 3-D forms and 4-D temporal processes. Combining a variety of sensory, verbal, numeric or symbolic representations, such models display interpretations and are produced in a variety of ways.
Collaborative products of individuals, human organizations and machines, scientific models are becoming increasingly complex as technological sensors and computers extend perception and analysis beyond our intuitive and intellectual range. The result is hybrid bodies for producing, storing and distributing information: distributed, knowledge-producing cognitive systems.
Where cognition and culture meet
Research studies demonstrate that cognitive model-building (our recognition of patterns, structures, connections and processes) is fundamentally the same as the model-building used by scientists who produce and distribute knowledge representations of such elaborate processes as high-energy physics (fig 1). (Gooding, Visual Cognition: Where Cognition and Culture Meet 2006, 689) [2]
Visual models mediate between interpretation of fundamental information and the resulting explanation. These models are usually visual hybrids: multimodal models that use representational plasticity and variation during the creative process.
Visualization and Cognition: 6 generalizations of representations
The visual representations scientists use display the following features.
1. Hybrid features.
Representations are usually hybrid models that transform through combinations of visual, verbal, numerical or symbolic modes to visually display interpretations. The camera lucida technique of figure 2 (just as in figure 1) transforms a photographic pattern into a structure diagram with numerical symbols for identification (figure 1 uses non-numerical symbols that appear as boxes).
2. Multimodal features.
Representations are often multimodal models that rely on combinations of information that invoke different sensory modes.
Multimodal combinations of sensory information display structure: these are the combinations of information input that scientists use when they create [surrogate] technological sensors along with information-modifying computer programs.
Complex sensors and computational systems processed the photograph in figure 1: a photo of multiple subatomic collisions produced with sophisticated imaging methods[3].
Scientists extended their sensory information input and intellectual processing to produce a visual model of high-energy proton-electron collisions that visually displays interpretation and understanding.
3. Plasticity feature: creative thinking.
Cognitive representational plasticity sets the initial stage for scientists and for us as well: when we sit in the office brainstorming solutions as a group, the cognitive models we use are malleable forms, creative thinking experiments performed on objects perceived in the world.
“Thought, like perception, goes straight out to the world itself. But a difference between them is that in the case of thought, how the actual object of thought ‘is’ at the moment I am thinking of it does not in any way constrain my thinking of it.” (Crane 2008, 2.2.1)
Perceived forms are explored and altered, often playfully, and this is an important source of insights, possible directions and potential solutions as we and scientists develop the representational variations that are benchmarks of developing ideas.
4. Variable representations feature.
An example of variable representation is the transformation of the 2-D photographic pattern (fig 2) into the inscribed structural diagram to its right. Further transformation can produce the 3-D forms of figure 3 and the 4-D animation (fig 4) that conveys process: an explanatory visual experience.
Another example is engineering graphics software that begins a design process with 2-D drawings that evolve into 3-D visualizations, which can then transform into 4-D process representations. Architectural clients can virtually walk through a 4-D software presentation of their proposed new building, and engineers can create software construction simulations: cognitive aids delivered as visual experience.
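As a rough illustration of that 2-D to 3-D to 4-D progression, here is a minimal, hypothetical sketch in Python; the data structures and names are invented for this example and do not come from Gooding’s paper or from any particular CAD/BIM package.

```python
# Hypothetical sketch: a 2-D pattern becomes a 3-D form, which becomes a 4-D process.
from dataclasses import dataclass
from typing import List, Tuple

Point2D = Tuple[float, float]

@dataclass
class Plan2D:                     # 2-D pattern: a building footprint
    footprint: List[Point2D]

@dataclass
class Model3D:                    # 3-D form: the footprint extruded to a height
    footprint: List[Point2D]
    height: float

@dataclass
class Process4D:                  # 4-D process: the 3-D form unfolding in time
    frames: List[Model3D]         # one partial model per construction step

def extrude(plan: Plan2D, height: float) -> Model3D:
    """Transform a 2-D pattern into a 3-D form."""
    return Model3D(plan.footprint, height)

def simulate_construction(model: Model3D, steps: int) -> Process4D:
    """Transform a 3-D form into a 4-D process: the building 'grows' step by step."""
    frames = [Model3D(model.footprint, model.height * (i + 1) / steps)
              for i in range(steps)]
    return Process4D(frames)

# Usage: a square footprint becomes a volume, then a construction sequence.
plan = Plan2D([(0, 0), (10, 0), (10, 10), (0, 10)])
building = extrude(plan, height=30.0)
walkthrough = simulate_construction(building, steps=5)
print([round(f.height, 1) for f in walkthrough.frames])  # [6.0, 12.0, 18.0, 24.0, 30.0]
```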
5. Plasticity Constraint feature.
Constraint on representational plasticity is also part of the solution-seeking, knowledge-producing modeling process. Returning to the office brainstorming session and its plastic (malleable) cognitive models: as the concepts start coming together, incremental constraints appear in the form of desired and unalterable structures.
These are the sought-for ideas: components that constrain plasticity.
Scientists and engineers share such cognitive plasticity constraints with us. Scientists bring constraining theories and physical laws to bear on their models, just as architects and engineers bring constraining physical and civil laws to bear on theirs. These are the transformational rules, which include any limitations imposed by available techniques and technologies.
These examples clarify how Gooding’s knowledge-bearing models develop and what it means to say they are distributed.
6. Distribution features.
There are three features of distribution for a model’s synthesized components, three classifications:
mind-machine distribution
hybrid mental-material objects distribution
multiple knowledge origin and storage distribution
6.1 Mind-machine distribution.
The data, image development and interpretational process of high-energy physics in figure 1, and the paleontology fossil data, image development and interpretational processes of figures 2, 3 and 4 are examples of synthesized mind-machine knowledge representation. Mind and machine work together inseparably.
Architectural software programs are another example[6].
6.2 Hybrid mental-material objects distribution.
Another form of representational distribution is hybrid mental-material objects that enable visual-tactile thinking. Gooding expressed his concept of mental-material objects with Michael Faraday’s electromagnetic ‘lines of force’ in a field, which distinguished theoretical electromagnetic force from theoretical gravitational force. (Newton 2007, 139) Such lines are not physical objects but render the concept of a theoretical force as a visual 4-D field.
Architectural software programs provide a rough analogy. Manipulating computer-generated mental-material objects, such as the relationships between a building’s structural components and forces, in cyberspace[7] enables intuitive visual-tactile thinking during the design and virtual-construction processes. Today a building can be fully constructed virtually, using computer software, before the actual process of construction begins.
6.3 Multiple knowledge origin and storage distribution.
Such a virtual building is also a knowledge representation that originates and is stored in varied ways: by various individuals, organizations and machines, for example.
Dynamics
The above examples show how the components of Gooding’s models are distributed through a range of:
mind-machine distribution
hybrid mental-material objects distribution
multiple knowledge origin and storage distribution
The process of cognitive modeling in science and engineering is a complex and inseparable combination of mind and machine: individuals and organizations combined with sensor technology and computer analysis.
Such a complex process depends on the dynamics between internal and external representations (avoiding static representations) to capture and visually display self-explanatory processes; hybrid cognitive models are self-explanatory.
“Although visual thinking is not the only approach to modeling, most modeling involves a visual element… the key aims of most sciences are to capture process and the invariant features of change, and to use the latter to explain the former. Scientists constantly attempt to escape the limitations of static, printed representations such as plots, state descriptions and images by producing ones that can convey process as well as structure.” (Gooding 2006, 693)
Cognitive modeling is creative imagination recognizing visual structures and connections as novel, discovered processes (Myers 2010, 411). This is a bi-directional process: the forward-moving identifications and retrospective verifications of figure 6. These transformations allow reduction or increase of content and complexity as needed, moving between pattern, structure and process while gaining or shedding content. (Gooding 2006, 693)
Distributed Cognitive Systems
The merger of human mind and machine (and of human mind and environment) blurs any meaningful distinction between internal and external representations.[9]
Despite the powerful changes in scientific technology and theoretical reach during the last five hundred years, the cognitive strategies employed by scientists in all domains have not changed: scientists devise technology and theory to assist cognitive strategy, not to change it. Just as Einstein’s special theory of relativity did not create a new cognitive strategy but enabled existing cognitive strategies to incorporate advancing science, the pragmatic cognitive roots of all scientific domains, and of global human thought, remain fundamentally unaltered.
Image attribution:
Figure 1 image source: CERN, 2011; CERN-EX-9106038 L3: Decay of Z0 to three jets. Retrieved from: http://cdsweb.cern.ch/record/629156
Figure 2 Arthropod Sidneyia Inexpectans. Gooding, David C. “Visual Cognition: Where Cognition and Culture Meet.” 2006. http://www.jstor.org/stable/10.1086/518523.
Figure 3 Arthropod Sidneyia Inexpectans. Gooding, David C. “Visual Cognition: Where Cognition and Culture Meet.” 2006. http://www.jstor.org/stable/10.1086/518523. 692
Figure 4 Arthropod Sidneyia Inexpectans. Retrieved from: http://en.wikipedia.org/wiki/File:Sidneyia1.JPG
Figure 5 Architectural rendition. Retrieved from: http://www.google.com/search?q=green+building+photographs&hl=en&rlz=1I7GGLD_en&prmd=ivns&tbm=isch&tbo=u&source=univ&sa=X&ei=ZhhGTrvVJpDKiALA_ZTeAQ&ved=0CCsQsAQ&biw=1280&bih=499
Figure 6 Visual inference diagram. Gooding, David C. “Visual Cognition: Where Cognition and Culture Meet.” 2006. http://www.jstor.org/stable/10.1086/518523. 693
Works Cited
Baird, Davis. “Thing Knowledge: A Philosophy of Scientific Instruments.” 2004. http://www.jstor.org/pss/40060833.
Blomberg, Olle. “Do socio-technical systems cognize?” 2009. http://www.aisb.org.uk/convention/aisb09/Proceedings/COMPPHILO/FILES/BlombergO.pdf.
Carter, Rita, Susan Aldridge, Martyn Page, and Steve Parker. The Human Brain Book. New York: DK Books, 2009.
Clark, Andy, and David Chalmers. “The Extended Mind.” 1998. http://consc.net/papers/extended.html.
Crane, Tim. “The Problem of Perception.” Edited by Edward N. Zalta. Fall 2008. http://plato.stanford.edu/archives/fall2008/entries/perception-problem/.
Einstein, Albert. “Space-Time.” 1922. http://preview.britannica.co.kr/spotlights/classic/eins1.html.
Einstein, Albert. “Space-Time.” In Relativity, by Encyclopedia Britannica, 4. 1922.
Gooding, David C. “Visual Cognition: Where Cognition and Culture Meet.” 2006. http://www.jstor.org/stable/10.1086/518523.
—. “Visual Cognition: Where Cognition and Culture Meet.” Proceedings of the 2004 Biennial Meeting of the Philosophy of Science Association. Chicago: The University of Chicago Press, 2006. 687-698.
Myers, David. Psychology. Holland, Michigan: Worth Publishers, 2010.
National Institute of Building Sciences. “WBDG Designing for Organizational Effectiveness.” Whole Building Design Guide. July 23, 2010. http://www.wbdg.org/resources/design_orgeff.php?r=productive (accessed January 9, 2012).
Newton, Roger. From Clockwork to Crapshoot: A History of Physics. Cambridge, MA: The Belknap Press of Harvard University Press, 2007.
Wilczek, Frank. The Lightness of Being. New York: Perseus Books Group, 2008.
Endnotes:
[1] “So far as the way is concerned in which concepts are connected with one another and with the experiences there is no difference of principle between the concept-systems of science and those of daily life.” (Albert Einstein, 1922)
[2] “Case studies provide important clues about inference making. Everyday human reasoning combines visual, auditory, and other sensory experience with nonsensory information and with verbal and symbolic modes of expression. Scientific reasoning is no different.” (Gooding, Visual Cognition: Where Cognition and Culture Meet 2006, 688)
[3] Regarding the high-energy proton/electron collisions at CERN and their photographic imagery: “… some very fancy image processing is involved… the central technique of the Friedman-Kendall-Taylor experiments… was precisely to concentrate on measuring the energy and momentum… To get a sharply resolved space-time image you can—and must—combine results from ‘many’ collisions with different amounts of energy and momentum going into the proton. Then, in effect, image processing runs the uncertainty principle backwards.” (Wilczek 2008, 46)
Figure 1: Large Electron-Positron collider, computer generated image of multiple collisions, a collage of sorts: the pictured ‘particle jets’ are used for ‘deep-structure’ data translation.
[5] “Organizations in the industrial period had a highly mechanical, bureaucratic structure and functioning as described by the Machine metaphor. Beginning in the 1950’s, organizations began to show more features of the Organism metaphor largely due to concern that internal rigidity was maladaptive and could lead to competitive stagnation (Peters and Waterman, 1982; Scott, 1987). This concern coincided with the human relations movement in psychology and its emphasis on employee motivation, satisfaction, participation, and quality of work life (Weisbord, 1987). The organic model is commonplace today. An emerging form, one which resembles a brain in its structure and functioning, is associated with innovative, high-tech firms.” (National Institute of Building Sciences 2010)
[6] Advanced architectural software programs that incorporate building specifications and environmental data, and that enact management procedures, detect design clashes, perform environmental analysis and simulate building construction, are another excellent example of synthesized knowledge production and distribution between the human mind and the computer-machine.
Machines embody knowledge:
“An eighteenth-century orrery, the first cyclotron, and a grating spectrometer are unlikely to occupy a lab shelf together. Yet each involves physical understanding—not only of planetary motion, particle beams, or spectral lines, but also of whatever else attends its use, design, and makeup. They not only provide new evidence, they embody knowledge of a kind that Davis Baird terms “thing knowledge.” Baird coherently develops its various forms, its emergence as equal counterpart to text-based theoretical knowledge, and its impact on knowledge-making today and for the future.
Books contain knowledge, but how can a thing?” (Baird 2004)
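As a rough, hypothetical illustration of one capability listed in this note, ‘design clash detection’ can be reduced to checking whether two components’ bounding volumes overlap. The sketch below is invented for this note; real BIM packages use far richer geometry, and every name here is hypothetical.

```python
# Toy 'clash detection' as axis-aligned bounding-box overlap (illustration only).
from typing import NamedTuple

class Box(NamedTuple):
    xmin: float
    ymin: float
    zmin: float
    xmax: float
    ymax: float
    zmax: float

def clashes(a: Box, b: Box) -> bool:
    """Two components clash if their bounding boxes overlap on every axis."""
    return (a.xmin < b.xmax and b.xmin < a.xmax and
            a.ymin < b.ymax and b.ymin < a.ymax and
            a.zmin < b.zmax and b.zmin < a.zmax)

duct = Box(0, 0, 3.0, 5, 1, 3.5)   # a ventilation duct's bounding volume
beam = Box(2, 0, 3.2, 8, 1, 3.8)   # a structural beam's bounding volume
print(clashes(duct, beam))          # True: the two components occupy the same space
```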
[7] As another example, the needed and effective distribution of a building’s weight beneath and along its concrete foundation is visualized as an inverted light bulb extending directly beneath and along the foundation. This ‘bulb’ area cannot be impinged upon by an adjacent building’s foundation requirements (its own ‘bulb’ area, needed to provide an effective foundation, must be calculated accordingly).
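A minimal formal sketch of the idea, assuming the ‘bulb’ is read as contours of equal vertical stress under a point load (the classical Boussinesq solution; this equation is added here as an illustration and is not part of Gooding’s text or the engineering example above):

\[ \sigma_z(r, z) = \frac{3Q}{2\pi}\,\frac{z^{3}}{\left(r^{2}+z^{2}\right)^{5/2}} \]

where \(Q\) is the applied load, \(z\) the depth below the point of application, and \(r\) the horizontal distance from the load axis; surfaces on which \(\sigma_z\) is constant trace out the bulb-shaped region described above.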
[8] Gooding’s concept here is in step with contemporary Philosophy of Mind and Philosophy of Cognitive Science regarding the ideas of ‘active externalism’ and ‘distributed cognition’. The human cognitive system is not universally considered an isolated entity within the boundaries of our brain. (Blomberg 2009, 1)
“Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the intuitive demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by the arguments of Putnam and Burge that the truth-conditions of our thoughts “just ain’t in the head”,[*] and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We will advocate an externalism about mind, but one that is in no way grounded in the debatable role of truth-conditions and reference in fixing the contents of our mental states. Rather, we advocate an active externalism, based on the active role of the environment in driving cognitive processes.”
“… the human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right.” (Clark and Chalmers 1998)
[9] “The generation and use of visual images in the examples (Figures 2, 3 and 4- mine) undermine the distinction between ‘internal’ and ‘external’ representations. In a truly hybrid cognitive system there is no dualism of subjective and collective knowledge (pace Knorr-Cetina 1999, 25). Consider the Hubble telescope as a knowledge-production system in which teams of scientists interpret computer-generated images. Although the most important representations appear to be the external representations on the computer screens (Giere 2004), these technology-based images are no more important than visual mental images. The end process involves evaluating the implications of each Hubble image for knowledge claims about galaxies that are 13 billion years old. Such evaluations cannot be made without engaging mental processes. The fact that image manipulation is accomplished by mental as well as object-based methods does not prevent it from being a collective process.” (Gooding, Visual Cognition: Where Cognition and Culture Meet 2006, 694)