I’ve already documented my take that there really is no longer any mystery behind consciousness and our conscious (free) will – my simplest single reference being Mark Solms’ “The Hidden Spring”. Surely, massively valuable in its own right to have solved that long-standing human riddle? And, more importantly, it removes a massive source of confusion and wasted argument from would-be scientific approaches to the future of humanity and our planet more widely. Also not insignificant, surely? An enhancement to our knowledge of the world that science and humanity benefit from.
As an old man in a hurry, I’m already focussed on the “so what next?” to achieve such global aims given that understanding, rather than on the detail of cementing the underlying agreement in all stakeholders – that’s all of us – which takes a whole Kuhnian paradigm shift or Kondratieff cycle, typically three human generations. I’ve already documented every which way my “systems thinking” position, which says that the appropriateness of detail is a very real consideration driven by understanding the system(s) you’re currently dealing with. Yes, the devil says all details matter eventually, but the angels really are in the abstractions, here and now.
The intersection of the philosophy of consciousness and the science of brains with systems thinking is precisely what my reference to Mark Solms is about. The particular version of systems thinking being “Active Inference”, based on the Free Energy Principle and the statistical-thermodynamics information-processing idea of Bayesian inference across the Markov Blanket boundaries of the systems we’re dealing with. Fortunately amongst the Active Inference Institute’s 750 members and more guests there are people concerned with firming-up understanding and agreement of those explanatory principles and models, as well as exploiting their future value.
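For readers who want the nuts and bolts, the “Bayesian inference” at the heart of the FEP can be sketched in a few lines. This is a minimal toy of my own devising – the two hidden states, the likelihoods and all the numbers are invented purely for illustration – showing a prior belief updated by one observation across a sensory boundary, and the “surprise” (negative log evidence) that the free energy principle says self-organising systems implicitly keep low:

```python
import math

# Toy Bayesian update across a "blanket": an agent holds a prior over two
# hidden world states and receives one sensory observation. All numbers
# are hypothetical, chosen only for illustration.
prior = {"light": 0.5, "dark": 0.5}
# Likelihood of a "bright" sensor reading given each hidden state.
likelihood = {"light": 0.9, "dark": 0.2}

# Evidence (marginal likelihood) for the observation.
evidence = sum(prior[s] * likelihood[s] for s in prior)

# Posterior by Bayes' rule: the belief after the observation.
posterior = {s: prior[s] * likelihood[s] / evidence for s in prior}

# "Surprise" (negative log evidence) -- the quantity the free energy
# principle says adaptive systems implicitly keep low, since variational
# free energy is an upper bound on it.
surprise = -math.log(evidence)

print(posterior)            # belief shifts toward "light"
print(round(surprise, 3))
```

One observation shifts the belief in “light” from 0.5 to roughly 0.82; repeated observations drive surprise down as the model comes to expect its inputs.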
Two such people in particular, Maxwell Ramstead and Mahault Albarracin, gave a presentation to the AII yesterday:
It was very good.
Firstly it was, admittedly, a summary of – a crash course in – the underlying principles that are maybe taken as given by AII members. But useful in itself.
Secondly it developed the FEP idea all the way to the many-layered experience – affect – we call consciousness. Particularly striking for me was the meta-layering on multiple dimensions at every level of granularity and scale.
In general / dynamic systems thinking we may find ourselves talking about processes, procedures, methods and methodologies seemingly interchangeably or redundantly, yet needing to make distinctions when appropriate. Well, even starting back at the level of fundamental physics we have principles, mechanics and dynamics with layered explanatory dependencies. And remember we’re starting with “a principle”, the FEP.
The aim – theirs and mine – is not a new theory n+1 of consciousness, but an integrative unification of n partially agreed theories – exploiting their iso-morphism across many layers and aspects to provide an explanatory view of the whole. A “minimum unifying model” (MUM).
One attractive feature of the FEP-based explanation is its sparseness, a sparseness that is iso-morphic with both the problem domain (life, the universe and everything) and with the ontology of our brains, wrestling with that problem of daily life. Some 100 billion neurons, each with at most tens of thousands of connections. Yes, everything is connected eventually, even those molecules in the proverbial box of gas, but only a tiny proportion interact directly with each other. There are degrees of separation – a small number relative to the population itself. Sparse.
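That “small number” is easy to sanity-check on the back of an envelope. Assuming – purely for illustration, since a real brain is nothing like a randomly wired graph – that typical path length in a sparse random network scales as log N / log k:

```python
import math

# Back-of-envelope "degrees of separation" in a sparse network.
# In a randomly wired graph with N nodes of average degree k, the
# typical shortest-path length grows like log(N) / log(k).
# Illustrative round figures only -- not real neuroanatomy.
N = 1e11   # ~100 billion neurons
k = 1e4    # ~tens of thousands of synapses each

path_length = math.log(N) / math.log(k)
print(round(path_length, 2))  # ~2.75 hops
```

Under that crude assumption any neuron is within about three hops of any other – a tiny number of degrees of separation relative to a population of 100 billion, which is the sense in which sparse connectivity still yields a tractably connected whole.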
One corollary of this is that hierarchy – the dirty word in power politics – is the entirely natural view of organisation. The natural nesting of overlapping systems. And with any ontology, there is a concentration of information – a compression – at every interface, each level in the hierarchy. It’s simply efficient, minimising free energy, at all scales from quanta to black holes. Whilst our system may be arbitrarily networked – a neural net, the apparent opposite of a pure hierarchy – remember it is sparsely networked. Nevertheless hierarchical, or rather heterarchical, with many overlapping hierarchies – but tractably few relative to the population. #GoodFences
Another iso-morphism, and the primary point of this particular pre-print presentation, is an “inner screen model of consciousness”. Bear in mind we’re treating all interfaces as Markov blankets, and that their functional / logical definitions need not coincide or map one-to-one with physical sub-system boundaries. One such interface we can think of as the view from consciousness – an inner screen – which conjures up a neo-Cartesian homunculus, but I think Dennett would love this whole explanation of reality: a view seen as an illusion yet nevertheless real and explicable.
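The Markov blanket idea itself is just a conditional-independence statement, and it can be verified on a toy joint distribution (all numbers here are hypothetical, constructed only to illustrate the factorisation): given the state of the blanket, the internal and external states carry no further information about each other.

```python
from itertools import product

# Toy check of the Markov-blanket factorisation: given blanket state b,
# internal state i and external state e are conditionally independent,
#     p(i, e | b) = p(i | b) * p(e | b).
# We build a joint distribution that satisfies this by construction,
# then verify it numerically. All numbers are hypothetical.
p_b = {0: 0.6, 1: 0.4}
p_i_given_b = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_e_given_b = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}

# Joint p(i, b, e) = p(b) * p(i|b) * p(e|b).
joint = {(i, b, e): p_b[b] * p_i_given_b[b][i] * p_e_given_b[b][e]
         for i, b, e in product((0, 1), repeat=3)}

# Verify: conditioning on the blanket screens internal from external.
for b in (0, 1):
    for i, e in product((0, 1), repeat=2):
        p_ie_given_b = joint[(i, b, e)] / p_b[b]
        assert abs(p_ie_given_b -
                   p_i_given_b[b][i] * p_e_given_b[b][e]) < 1e-12

print("internal and external independent, given the blanket")
```

The point of the toy is that “blanket” is a statistical, not an anatomical, notion – which is why the functional interfaces need not map one-to-one onto physical sub-system boundaries.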
Two more corollaries. One, the iso-morphism of a Markov blanket and a holographic “screen” interface in physics. Two, the most natural 2D surface view of any complex reality from one point in space-time. Think ancient Egyptian spreadsheets or clay tablets as our tabula rasa.
Anyway, passing (neutral) reference to IIT and multiple references to the usual suspects – Friston, Glazebrook, Fields, Levin as well as Solms – and several specific papers with collaborators mentioned here already. (No Levenchuk?)
Ironic that at the same time I published this post, the infamous 25 year bet between Chalmers and Koch was called in favour of the former’s prediction that consciousness wouldn’t be “solved”:
— David Chalmers (@davidchalmers42) June 24, 2023
Of course my post headline is exactly that – no one thing “explains” anything, least of all a “principle” – but Active Inference’s use of the FEP is undoubtedly the last piece of the jigsaw in explaining how consciousness arises and functions. Obviously there are details of exactly which aspects of consciousness we’re talking about in any number of contexts, but there are no mysteries, even if it takes those three human generations of scientists to socialise the knowledge. It really is time to move on.
Believing is not the same as knowing. @Mark_Solms believes the FEP explains consciousness. But the FEP computer models are an abstraction of an abstraction of an abstraction. A ghost in the machine. Just because it makes sense doesn’t make it true. You wanting it to be true with…
— Stuart Sims (@SimsYStuart) June 24, 2023
“It makes too much sense not to be true.”
I’m a skeptic like anyone else – finding fault is easy, making progress is harder – but there needs to be a division of human labour. We can’t all be expected to learn and go through every detail as individuals. That’s teamwork. The bigger story elsewhere here is that the (marketing) success of objective (reductive) science has destroyed the intuitive (subjective) value of the no-less-real abstractions.
That’s what needs fixing.
Crossing back over that Rubicon.