Scientists Will Hate This

I mentioned John C Doyle as a candidate for a new real-life (living) “hero” in my research quest here in 2019 and again here in 2021. I say “new” hero because my long-term hero has been Dan Dennett. Of course, since then, both Iain McGilchrist and Mark Solms have taken up a good deal of attention with their own heroic contributions, but I mentioned Doyle again the other day in an exchange with Anatoly Levenchuk about “Systems Thinking”.

(Doyle is actually a reference in Gazzaniga who – like Sperry – is an important source in the cognitive science space, used directly by McGilchrist and many others, but not Solms so far as I can see. It’s how I first came across Doyle.)

I sensed, and still sense, that his work is going to prove important to how architectural systems thinking is applied to everything from fundamental physics to global human issues as well as brains / minds and IT/Comms networks. Trouble is, he admits, he’s very “disorganised” – unstructured presentations (oral and slides) given to technical audiences, and files on public shared drives. He’s prolific, but it’s all papers written with collaborators and students – no book(s) beyond his original control-systems specialist field – and with no obvious indexing or structure to his topics. In a sense that’s probably justified by the content of his current subject matter, which demonstrates “universal” trade-off features of all multi-level systems – almost all his graphic abstractions are versions of each other.

I had already shared this presentation:
John Doyle, “Universal Laws and Architectures in Complex Networks”
March 2018 @ OFC Conference

Anatoly shared this recent one (as well as many papers, and in fact Doyle drops many paper references into his presentations, acknowledging his student contributions):
John Doyle – “Universal Laws and Architectures and Their Fragilities”
October 2021 @ C3.ai Digital Transformation Institute
(And this folder of public papers highlighting Social Science Architecture and Systemic Fragility.)

Now there is a thread of overturning scientific orthodoxy running through all the above, counter to the received wisdom of logical objective rationality – of causal chains where wholes are reduced to the summation of the histories of their parts. That received wisdom ignores (a) non-ergodicity – that not just the end states of individual parts but their network of paths through possible histories affects whole outcomes – and (b) strong emergence – that wholes have their own properties and causes not causally determined by their parts.
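The ergodicity point is easy to state but easy to miss, so here is a minimal sketch (my own toy illustration, not from Doyle): a repeated multiplicative gamble where each round multiplies wealth by 1.5 or 0.6 with equal probability. Averaged across the ensemble, each round gains 5% (0.5×1.5 + 0.5×0.6 = 1.05), yet the typical individual path shrinks, because the time-average growth factor along a path is √(1.5×0.6) = √0.9 < 1. The paths through history, not just the average of end states, determine the outcome.

```python
import random

# Non-ergodic multiplicative gamble: ensemble average grows,
# but the typical single history shrinks.

random.seed(42)

def run_path(rounds: int) -> float:
    """Follow one history: multiply wealth by 1.5 or 0.6 per round."""
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

paths = sorted(run_path(100) for _ in range(10_000))
ensemble_mean = sum(paths) / len(paths)     # pulled up by rare lucky paths
median_path = paths[len(paths) // 2]        # the typical history: tiny

print(f"ensemble mean: {ensemble_mean:.3f}")
print(f"median path:   {median_path:.6f}")
```

The sample mean and the median diverge wildly: the mean is dominated by a handful of lucky histories, while the median path has lost almost everything – exactly the gap that averaging over an ensemble hides.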

At the level of political and aesthetic endeavours, no-one would bat an eyelid. The trouble is, the same problem exists in would-be science too, and in rational thinking more generally.

Dennett – warns scientific types to avoid greedy reductionism, and to suspend disbelief and hold off on definitively objective definitions as rational arguments themselves evolve over repeated cycles.

McGilchrist – having debunked cortical & hemispherical misunderstandings of how our brains and conscious minds evolved to work, pleads for recognition of the naturally sacred beyond the reach of our orthodox (objectively verifiable) scientific model of reality.

Solms – having debunked cortical and mid-brain misunderstandings of those same brains and conscious minds, and having established the basis of consciousness in subjectively felt experience and its evolved existence as a distinct causal entity through fundamental information-processing, makes a plea for objective scientists to cross the Rubicon and take in the view from the subjective side.

Doyle – whose work arises explicitly from IT/Comms computing networks, demonstrates repeatedly, with all manner of real-world examples (from talking or riding a bike to mobile apps & social media), that there are universal abstraction features of multi-layer systems network architectures which mean the virtualised wholes do more – better, different – than any of their parts. “Scientists will hate this!” he repeats in throwaway remarks to his technical audiences, recognising that the strongly emergent causal identity of virtual entities is contentious for objective science, STEM and engineering. (Slightly infuriatingly, unlike the better-informed brain scientists above, Doyle uses “cortex” as shorthand for that part of the human system inside our heads – the cortical fallacy.)
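The “whole does more than its parts” claim is entirely concrete in the IT/Comms world Doyle starts from. A minimal sketch (my own toy, not Doyle’s code): a lossy lower layer that randomly drops messages, with a simple retransmit layer on top. Neither layer alone delivers reliably, but the layered whole does – a property of the virtualised stack, not of any part.

```python
import random

# Layered architecture toy: reliability emerges from stacking a
# retransmit layer on an unreliable channel.

random.seed(7)

def unreliable_send(msg, loss_rate=0.4):
    """Lower layer: delivers msg, or loses it with probability loss_rate."""
    return msg if random.random() > loss_rate else None

def reliable_send(msg, max_retries=50):
    """Upper layer: keep retransmitting until the lower layer delivers."""
    for attempt in range(1, max_retries + 1):
        delivered = unreliable_send(msg)
        if delivered is not None:
            return delivered, attempt
    raise RuntimeError("gave up")  # vanishingly unlikely at these settings

received, attempts = reliable_send("hello")
print(received, "after", attempts, "attempt(s)")
```

This is of course just the TCP-over-IP pattern in miniature; Doyle’s point is that the same layered trade-off recurs across biology, brains and social systems, not only in protocol stacks.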

There is a common “problem with the received wisdom of orthodox science” running through all of this, and a lot of “systems thinking” and “information processing” common ground in where the problems arise.

It’s a “bug” in the received wisdom of “science-led” human rationality. The one that’s been driving this Psybertron project for 22 years.

We’ve barely scratched the surface with Doyle. I’ve mentioned elsewhere that in his terms this problem really is a bug. Viruses are especially adapted to hijacking vulnerable layers in multi-layer-architected complex systems, without needing to carry the overheads of more complex organisms such as ourselves and our social organisations. Humans are particularly badly adapted to deal with viruses that work against human interests – especially memetic ones in society’s information and communication layers. Our social systems – including science – are much more fragile than our rationality admits. Unless we want to give up on humans and declare viruses and the simpler single-celled organisms “the winners by headcount” in the cosmic game of evolution, we need to find memetic vaccines that work.

(With Anatoly’s help) I need to dig further into Doyle.
