Copyright © 2009 Corvus International Inc. All Rights Reserved
Behind every piece of paper lies
a human situation
Edward Twitchell Hall
Form follows function
Louis Henri Sullivan
The medium is the message
Herbert Marshall McLuhan
Software? What software?
It is a tad ironic, but software people don't actually use much in the way of software to produce, well, software. Most of the work is done on bits of paper, or the electronic equivalent of bits of paper, which largely describes what a word processor produces. True, we have CASE tools, but most of these are little more than constrained drawing tools that allow the user to draw a picture and then look at the picture. These tools fail the software litmus test of executability. If a "software" artifact doesn't execute, doesn't do something, it is not software--it is really a paper analogue.
The Act of Writing
The simple act of putting something into words, particularly writing it down, does things to the knowledge being transcribed. This is due to the nature, or ontology, of paper and to the cognitive mechanisms involved in reading and writing.
Ideas in words and on paper have the following characteristics:
Sequence: words are always in sequence. We can change the sequence words are in, but in sequence words are nonetheless.
Single tasking: we can only look at one word at a time1. If I am reading this, I am not reading that.
Control oriented: reading is a place-holding function that uses the single-tasking activity to enforce the sequence.
Few connections: the primary relationship words2 have with each other is next-prior. Hypertext notwithstanding, it is hard for words to relate to other words that are some distance away.
Limited and defined scope: a word has a particular meaning3. We can adjust the meaning by using irony, context, and other rhetorical devices, but what a word means is limited. That meaning is generally defined elsewhere (like a dictionary or a glossary) since including the definition of each word with each word would make our documents rather difficult to read, not to mention rather big.
The Systems We Build
Given the characteristics of words above, it is inevitable that when we put systems ideas into words and on paper we perform the following operations on the ideas and knowledge content:
Enforce sequence, even if sequence is not a systems characteristic
Render the process into single-tasking form, though we can sometimes overlay it with other single-tasking processes to resemble a true multi-tasking set of functions
Focus on the control sequence of the operations. This tendency is reinforced when we use control-sequence programming languages--which is to say, most programming languages
Limit the connections to a close subset of possible links. This is also encouraged through the "high cohesion/minimal coupling" ideas of systems design
Limit what each piece of software does to a small and well-defined function. Also encouraged through information hiding approaches.
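To make the list concrete, here is a minimal sketch of how these paper-like operations surface in ordinary code. The names (Order, process, and so on) are invented purely for illustration, not taken from any real system:

```python
# A sketch of paper-like properties in code (names invented for the example).

class Order:
    """Limited and defined scope plus information hiding: callers see a
    small interface; the item list itself stays behind the underscore."""
    def __init__(self, items):
        self._items = list(items)   # few connections: Order knows only its items

    def is_empty(self):
        return not self._items

    def total(self):
        return sum(price for _name, price in self._items)

def process(order):
    # Enforced sequence and single tasking: one step at a time, in this
    # order, whether or not the real-world process is truly sequential.
    if order.is_empty():
        raise ValueError("empty order")
    return f"charged {order.total()}"   # control-oriented: the call
                                        # sequence *is* the program

print(process(Order([("book", 12), ("pen", 3)])))   # charged 15
```

Every one of the five operations appears here, and none of them was demanded by the problem of charging for a book and a pen; they come from the medium.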
This translation and these operations, which are not necessarily "bad," have more to do with human comprehension limitations and the structure of the medium (paper) than with the characteristics of the systems we are building.
Indeed, we used to build simple single-tasking, control-oriented, sequential systems for which this medium was well suited. It is not as appropriate for today's complex, highly linked, event-driven, multi-tasking systems.
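The contrast can be sketched in a few lines (the event names and handlers below are invented for the example): in an event-driven structure, control no longer reads top to bottom; the sequence lives in the stream of events, not in the text of the program.

```python
# Minimal event-driven sketch: handlers register for events, and the
# order of execution is determined by whichever events happen to arrive.
handlers = {}

def on(event):
    """Register a handler for an event name."""
    def register(fn):
        handlers[event] = fn
        return fn
    return register

@on("clicked")
def handle_click(payload):
    return f"click at {payload}"

@on("key")
def handle_key(payload):
    return f"key {payload}"

def dispatch(event, payload):
    return handlers[event](payload)

# The "program" is a set of handlers; reading the source top to bottom
# tells you almost nothing about the order in which they will run.
for ev, data in [("key", "a"), ("clicked", (3, 4)), ("key", "b")]:
    print(dispatch(ev, data))
```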
We need to reconsider the ontology of our systems development modalities.
Words, writing, and programming languages look the way they do simply because this is the way a small section of the human brain functions.
Our visual cortex has no problem dealing with massive parallelism--otherwise we wouldn't be able to drive a car--but it cannot use this parallel processing for certain cognitive functions. Reading employs a different part of the visual brain function than does looking at a picture. It is this and the understanding processes behind it that drive the ontology of paper and the cognitive mechanism of writing.
Similarly, our development methods owe more to our psychology and physiology than they do to any objective "reality" of the systems we build. We don't build systems using object-oriented methods involving classes and inheritance hierarchies because the systems need such devices. They don't. We need such devices to understand the systems.
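A small sketch may make the point (the Shape hierarchy here is invented for illustration): the running system never needs the taxonomy, only "something with an area()"; the hierarchy exists so that we can understand the code.

```python
# The class hierarchy is a comprehension aid for the reader,
# not a requirement of the machine.
import math

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# Duck typing shows the machine never needed the hierarchy at all:
class Blob:                      # no Shape parent
    def area(self):
        return 4.2

for thing in (Circle(1), Square(2), Blob()):
    print(type(thing).__name__, round(thing.area(), 2))
```

Blob works just as well as the "proper" subclasses; the Shape taxonomy is there for us, not for the system.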
1. We actually read blocks of words at the same time, and we don't even actually "look at" the words really--it is a form of pattern recognition that mostly keys off the first couple and last letters of a word.
Te shtw tjus--ypu ctn prgtwjyly unfajstdnd th3s sebghnce wihktst tho m8uh dijfmstty
2. We use all kinds of devices to get around this including indentation, tables of contents, glossaries, citations, and indexes.
3. My favorite word with embedded meaning is a German word: Zusammengesetzheit. If you know what it means, then you will understand the meaning of the sentence: "The Zusammengesetzheit of words varies". If you don't know what the word means then, while you can read the sentence just fine, its meaning will be obscure.