The KM / IM Debate

I don’t really see any worthwhile debate – the buzz of turf-wars may keep the subjects in the headlines, but there is no definitional problem not already adequately sorted by the data > information > knowledge > wisdom stack. (Thanks to David Gurteen’s tweets for prompting this post.)

Anyone with strong allegiance to any one part of the stack will widen (blur) their definitions into the adjacent layers, but anyone interested in the whole stack can see worthwhile (pragmatic and valuable) distinctions, each being a layer of patterning built on the previous layer.

  • data – is about significant difference – bits and bytes being distinct from one another, at any level of granularity from fundamental physics upwards to whole books and libraries.
  • information – is about the significance of those data differences, their semantics – what the patterns of data mean.
  • knowledge – is about how that information is applied to add value, valuable patterns of use, applied information.
  • wisdom – is about understanding (knowing, experiencing, appreciating) value and the fact that it depends on how the whole stack works, and the pragmatic need to balance interests and priorities across (two-way) interactions between all levels of the stack. A more “holistic” view, if that’s not a dirty word.
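
To make the layering concrete, here is a minimal toy sketch (my own illustration only, not anyone’s formal model): raw bytes as data, a parsed reading as information, a decision rule as applied knowledge, and a crude policy that balances interests across the whole stack standing in for wisdom.

```python
# Toy illustration only: the data > information > knowledge > wisdom stack,
# each layer a pattern built on the one below it.

# data - significant difference: just distinguishable bytes, no meaning yet
raw = b"\x00\x17"                        # two bytes from a hypothetical sensor

# information - the semantics of those differences: what the pattern means
def to_reading(data: bytes) -> float:
    """Interpret the bytes as a temperature in degrees Celsius."""
    return float(int.from_bytes(data, "big"))

temperature_c = to_reading(raw)          # 23.0

# knowledge - applied information, a valuable pattern of use (decision-support)
def heating_needed(temp_c: float, setpoint_c: float = 21.0) -> bool:
    return temp_c < setpoint_c

# "wisdom" (very crudely) - balancing interests across the whole stack:
# comfort vs cost, plus awareness that the data layer itself may be untrustworthy
def heating_policy(temp_c: float, energy_cost: float, sensor_trust: float) -> bool:
    if sensor_trust < 0.5:               # don't act on data we can't trust
        return False
    comfort_gap = max(0.0, 21.0 - temp_c)
    return comfort_gap > energy_cost     # heat only when comfort outweighs cost

print(temperature_c, heating_needed(temperature_c), heating_policy(temperature_c, 0.8, 0.9))
```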

Personally – like anyone else who’s given the matter any thought, I guess – the aim is always towards the highest level, wisdom, whatever our (current) level of activity as practitioners. In my particular case, as an engineer, I started and worked for 20 years in the applied space – learning and using knowledge of how to apply information to specific ends. It’s all about “decision-support” of course – all worthwhile activities involve decisions, so that truism in itself doesn’t add much to any definitions. One of the things you learn – wisdom you gain – is that practitioners in the data and information layers can, through inadvertent presumptions about decision-making and use-in-action, create constraints on usage in the knowledge layers. So for the last 12 years or so I have shifted my focus down a couple of layers, to understand those presumptions and how (unnecessary) constraints can arise.

I have to say in the process, I’ve developed a huge respect for librarians. Anyone who thinks it’s “just” thorough record keeping – some clerical admin task – misses the need for good strategies and architectures for how data, meta-data, information and relations between these are organized. We benefit from some types of “constraint”. The more virtual our libraries become, the more we need to avoid librarianship becoming a dying art.

Ubiquitous, real-time, interactive connectivity is not necessarily entirely good in and of itself.

Mobile McLuhan

Piece by Peter Benson in Philosophy Now (posted on Facebook by ex-MoQer David Morey) – Marshall McLuhan on the Mobile Phone.

Unsurprising to find McLuhan on the money when it comes to the social effects of our communications age but, for me, a couple of interesting points on value and memetics.

“Print is the technology of individualism” (The Gutenberg Galaxy pp.157-8) whereas with [mobile technology and the net], the tendency is once more towards interconnected thinking in a community of minds, and so perhaps less ‘free ideation’.

Less free, notice. It’s the usual Darwinian call for evolutionary balance between fidelity and fecundity: if it is too easy to copy patterns of information in hi-fidelity, it is harder for mutations to be introduced in ways that create new value. That copying can be too hard is obvious; that it can also be too easy is less so. Less is more. Life’s just complicated enough. (A toy sketch of the trade-off follows at the end of this post.) Benson continues:

It is important to recognize the subtlety of McLuhan’s views. He is not saying that modern technology distorts an original human nature, which must be protected from such distortions. Instead, from the moment humans began to create tools, our nature was shaped by the tools we used. The silent reading of texts proliferated after Gutenberg’s invention. This activity is not ‘natural’, in the sense of resulting through evolution from the necessities of survival; but it can be regarded as having value, conferred on it by our judgement as individuals and as a society. [His emphasis]

It is entirely possible that a future society could reverse this judgement; but in the interim we need to give consideration to the potential change in our values due to actual changes in our dominant communications media. [My emphasis]

Did we ever need a little conservatism to moderate mediation in the mix. The art of editing.
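
On that fidelity/fecundity point, here is the promised toy sketch – entirely my own illustration, nothing from McLuhan or Benson – of a population of copied strings under selection. With perfect copying fidelity nothing new can ever arise; with a little mutation, valuable novelty accumulates; with too much, gains are eroded about as fast as selection makes them.

```python
import random

# Toy replicator model: strings are "memes", copied each generation with a
# per-character mutation rate. "Fitness" is simply how many 'a's a string has.
random.seed(1)

ALPHABET = "abcdefgh"

def replicate(meme: str, mutation_rate: float) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < mutation_rate else c
                   for c in meme)

def evolve(mutation_rate: float, generations: int = 200, pop: int = 50) -> float:
    population = ["b" * 10] * pop                        # start with zero fitness
    for _ in range(generations):
        offspring = [replicate(m, mutation_rate) for m in population]
        offspring.sort(key=lambda m: m.count("a"), reverse=True)
        population = offspring[:pop // 2] * 2            # keep and copy the fitter half
    return sum(m.count("a") for m in population) / pop   # mean fitness

for rate in (0.0, 0.01, 0.5):
    print(f"mutation rate {rate:<4} -> mean fitness {evolve(rate):.1f} (max possible 10)")
# 0.0  - perfect fidelity: stuck at 0, no novelty can ever arise
# 0.01 - a little noise  : typically climbs most of the way to 10
# 0.5  - far too noisy   : gains get eroded almost as fast as selection makes them
```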

The Wrong Boson

Interesting, after all the press buzz last week about possible hints and indications that might suggest the speculative Higgs boson (all designed to sell Cox’s book in time for Christmas, no doubt), that the paper published this week reports a new “Chi_b(3P)” boson, whatever that is.

What is really interesting, given yesterday’s post about the workings of science, is that the paper itself appears as a 17-page PDF, 13½ pages of which are the acknowledgements and references, naming the LHC Atlas team’s 2,590 individuals (excluding the deceased!) and 212 institutions. What is the point?

Bad Scientism, a Messy Business

I read this Ben Goldacre piece a couple of weeks ago. The question one always has to ask is: is this kind of bad science accidental, or in some sense deliberate – a skilled incompetence on the part of the practitioners or their managers / editors / reviewers, or both in a kind of tacit collusion? In complex human endeavours some hypocrisy is inevitable, to balance motives and goods across multiple levels, and a degree of trust is therefore also inescapable. Science, taken as a whole “business”, is no different.

Personally, I’m more against bad scientism – using science badly in situations that are far from scientific – than against good or bad science per se. With infinite time and resources you could argue that all situations can be reduced to science, but the reduction can discard the real-world value. Statistics is of course one of the techniques used to bring the vagaries of human behaviour into the scientific space in quantifiable chunks. That adds another level of complexity to the whole exercise, leading to more possibilities of evaluating the wrong things, and/or of evaluating them wrongly.
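
One generic illustration of “evaluating them wrongly” – my own example, not necessarily the specific error Ben’s piece describes – is concluding that a treatment “works in group A but not group B” because one test reaches significance and the other doesn’t, without ever testing the difference directly. A quick simulation shows how often that story suggests itself even when both groups share exactly the same true effect:

```python
import numpy as np
from scipy import stats

# Two groups drawn from populations with the SAME true effect (mean 0.5).
# With modest sample sizes, one test will often come out "significant" and
# the other not - which says nothing about the groups actually differing.
rng = np.random.default_rng(42)

def run_once(n=20, true_effect=0.5):
    a = rng.normal(true_effect, 1.0, n)    # group A measurements
    b = rng.normal(true_effect, 1.0, n)    # group B measurements (same effect!)
    p_a = stats.ttest_1samp(a, 0.0).pvalue
    p_b = stats.ttest_1samp(b, 0.0).pvalue
    p_diff = stats.ttest_ind(a, b).pvalue  # the comparison that should be made
    return p_a, p_b, p_diff

misleading = 0
trials = 1000
for _ in range(trials):
    p_a, p_b, p_diff = run_once()
    # "A significant, B not" (or vice versa) - the tempting but wrong conclusion
    if (p_a < 0.05) != (p_b < 0.05) and p_diff > 0.05:
        misleading += 1

print(f"{misleading}/{trials} runs invite the wrong 'works in A, not in B' story")
```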

Ben’s story above is about the statistical methods; this story today in The Scholarly Kitchen (via David Gurteen and Stephen Downes) is about choosing the wrong inputs for the wrong motives – citations, again. It proves the point that science is a messy business, parts of which are far from scientific.

And of course, the “Measuring the Wrong Things” headline is one in a long line, including Einstein’s “Not everything that counts can be counted.”

Boeing vs Airbus

Interesting, having posted twice about the Air France A330 disaster (including just yesterday), to see this Slashdot story (via Johan on Facebook) about a Qantas A330 problem around the same time, three years ago. The comment thread is interesting; it kinda reinforces my comment of yesterday:

A. … the number of [computer bug] accidents will likely still be fewer than those caused by human drivers.

B. Which is actually [why] Airbus relies on sensor input over the “pilot”. Boeing believes in the opposite. I’m inclined to believe Airbus in that the majority of accidents are human error over computer error.

C. The problem with aviation accidents is the relatively small sample size. With cars [in the Google auto-driving story] there will be much more data points.

I guessed B’s point yesterday, though I have no specific knowledge. The point is really this: fly-by-wire or not, the pilots and the automation technology together form one complex “system” – the behaviour of each affects the other. The people and the software are both subject to (imperfect) testing and validation. Even with fly-by-wire, the total system (including pilot behaviour and psychology) can be designed for greater total inherent safety – fewer failure modes that lead to loss of control.
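
To make the “one complex system” point concrete, here’s a deliberately crude sketch – my own toy, not how Airbus or Boeing actually implement anything – of the two arbitration philosophies in comment B. Each policy simply has different failure modes, rather than none:

```python
from dataclasses import dataclass

@dataclass
class Inputs:
    sensor_pitch_cmd: float   # what the automation thinks should be flown
    pilot_pitch_cmd: float    # what the pilot is asking for
    sensor_healthy: bool      # e.g. air data believed valid by the system
    pilot_correct: bool       # with hindsight: was the pilot's input right?

def sensors_first(x: Inputs) -> float:
    """Toy philosophy A: defer to the automation unless it declares itself sick."""
    return x.sensor_pitch_cmd if x.sensor_healthy else x.pilot_pitch_cmd

def pilot_first(x: Inputs) -> float:
    """Toy philosophy B: the pilot's command always wins."""
    return x.pilot_pitch_cmd

# Failure mode 1: bad sensor data that still claims to be healthy
case1 = Inputs(sensor_pitch_cmd=+15.0, pilot_pitch_cmd=-2.0,
               sensor_healthy=True, pilot_correct=True)
# Failure mode 2: good sensor data, disoriented pilot
case2 = Inputs(sensor_pitch_cmd=-2.0, pilot_pitch_cmd=+15.0,
               sensor_healthy=True, pilot_correct=False)

for name, case in [("bad sensors", case1), ("disoriented pilot", case2)]:
    print(name,
          "| sensors-first flies", sensors_first(case),
          "| pilot-first flies", pilot_first(case))
# Neither policy is "safe" in itself; each trades one failure mode for another.
# Pilots, software and sensors are one system, to be designed together.
```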

I’m a big fan of Airbus, but these are, as I said, scary problems.

Scary AF447 A330 Crash Report

I blogged this link the day the story came out, to Facebook and/or LinkedIn, but of course that doesn’t preserve it in my database, so I’m repeating it here. What is really scary is not the persistent pilot error: the inexperienced co-pilot may well have been disorientated, or even in some kind of “personal mental autopilot” denial as to the true state of the aircraft, despite clear and specific audible and verbal warnings – pulling the stick back under those conditions is a schoolboy error, let alone for a qualified pilot. What is scarier is what I would say must count as a design fault in the A330 (and presumably all the current-generation Airbuses): that the crew do not get any direct feel or instrumented feedback of the control surfaces. The experience in the cockpit then counts (counted) for nothing. How is that “averaging” stick behaviour design rationalised? Do two wrongs somehow make a right?!

02:13:40 (Co-Pilot) Climb… climb… climb… climb… 
02:13:42 (Captain) No, no, no… Don’t climb… no, no.
02:13:43 (Co-Pilot) Descend, then
(Whilst all the while the other co-pilot has his stick pulled back anyway?!?) 
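
On that dual-input point: Airbus sidesticks are widely reported to sum the two pilots’ inputs (clipped to the normal range) rather than giving either stick any feel of the other – the “averaging” I complain about above. A toy sketch, assuming a simple summing rule rather than the actual control law, shows why that is so disorienting:

```python
def combined_pitch_command(captain_stick: float, copilot_stick: float) -> float:
    """Toy dual-input rule: algebraic sum of both sticks, clipped to one stick's
    range (-1 full nose-down ... +1 full nose-up). Not Airbus's actual law."""
    total = captain_stick + copilot_stick
    return max(-1.0, min(1.0, total))

# One pilot pushes fully forward to unstall; the other holds his stick fully back:
print(combined_pitch_command(-1.0, +1.0))   # 0.0 - the aircraft does neither
# Without cross-coupled (force-feedback) sticks, neither pilot can feel that his
# own input is being cancelled out by the other's.
```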

Fly-by-wire is great until the pilots are unaware that the various overrides – the ones that prevent them doing stupid things – have been switched off.

[T]he crash raises the disturbing possibility that aviation may … be plagued by a subtler menace, one that ironically springs from the never-ending quest to make flying safer. Over the decades, airliners have been built with increasingly automated flight-control functions. These have the potential to remove a great deal of uncertainty and danger from aviation. But they also remove important information from the attention of the flight crew. While the airplane’s avionics track crucial parameters such as location, speed, and heading, the human beings can pay attention to something else. But when trouble suddenly springs up and the computer decides that it can no longer cope—on a dark night, perhaps, in turbulence, far from land—the humans might find themselves with a very incomplete notion of what’s going on. They’ll wonder: What instruments are reliable, and which can’t be trusted? What’s the most pressing threat? What’s going on? Unfortunately, the vast majority of pilots will have little experience in finding the answers.

Very, very scary.