The EEM Institute – the “Institute for Augmented Intelligence in Entrepreneurship, Engineering and Management” – is the evolved public face of the (Russian) Systems Management School, where Systems Thinking is now one of the courses under the EEMI umbrella.
“Open-Endedness” is one aspect of the wider EEMI curriculum, and below are the slides and a recording of a presentation on that topic by Anatoly Levenchuk, their “Head of Science”.
(I consider Anatoly a friend after some years of working together on the generic systems information modelling aspects of Russian infrastructure projects, during which Anatoly also invited me to speak at a couple of INCOSE events in Moscow and Nizhny Novgorod on those knowledge modelling standards. Anatoly was chair of the INCOSE Russian chapter at the time I was working with him and his colleague Victor Agroskin in their “TechInvestLab” information systems consultancy.)
Previously here I have reviewed his book “Systems Thinking 2020”.
Below I have some comments on his latest presentation (which unfortunately I missed in real time).
The presentation itself:
(PLEASE NOTE: The following review is preliminary, based on an early subtitled version of the presentation which has since been updated above. I will update the review in line with the improved captions and in the light of clarifying dialogue with Ivan and Anatoly.)
All OK, up to the point where he talks about the “singularity”. I’m more sceptical that the kind of AI that could ever overtake humans exists at all (yet) – but I’m absolutely on board with the idea that, as “Cyborg” AI-augmented humans, we are already behind our own curve in dealing mentally / cognitively / organisationally with the systems complexity of human reality on the planet we call home. Very pleased to see “Engineering” defined as the human endeavour intended to “improve” that. It’s how I’ve always seen the point of my being an engineer.
Interesting to see Tononi’s IIT, much referenced here. Also Graziano’s “Standard Model of Consciousness”? Not sure I understand what that is, but I have a strong position on the reality of consciousness anyway.
Love the idea that the customers of EEMI training are seen as these Augmented Intelligence Cyborgs – both the individual humans and the organisations. Quite right. Education and training as part of “our” evolution. (The topic in the title – Open-Endedness – still not introduced by ~17:20?)
Like the Intelligence Stack (some errors in the subtitle captions).
Not sure why the distinction between human and agent activity in the applied stack – just more generic? Deontic modality? No applied engineering at the human level – not sure I understand. Isn’t that what Systems Thinking is? (For any thinking agent, including humans?) Ah, this is at the level of the whole of humankind, not individuals and organisations – OK. The Applied-Level Stack makes sense too.
(I completely agree with this information systems levels view of the entire stack of evolutionary levels.)
Bayesian decision theory based. Good (but errors in the captions again). Recognition that this is a bleeding-edge, unproven approach for “elite” learning, “not the whole peloton”.
Bertalanffy and Checkland systems theory basis – not a simple “lifecycle” view – systems evolve from systems. Ah, hence the “3rd Generation” claim in terms of what people understand by cybernetics and systems. Not such a concern for my “Deflationary” approach – I just see this as focussing on the fundamentals of the original first-generation theories.
Constructor Theory (Deutsch and Marletto) – lifecycles sit at the intelligent-agent level – agreed – hence dropping the “lifecycle” language for “physical” systems. Again, this is just linguistic baggage to me, so I’m less concerned. Understood. (As I posted on INCOSE LinkedIn, “Systems Engineering is first and foremost about humans”.)
The evolution of systems is continuous. Systems engineering / thinking sits at the top of the stack. 20 key “State of the Art” references, all VERY recent.
Memome -> Phenome distinction. I like it.
Constructors construct constructors. Multi-level evolution. Smart evolution (still evolution, but more than the extended modern Darwinian synthesis). Systematic is not systemic. Agreed. Affordances view. Decomposition by attention. Functional systems view. Disruption comes from below.
Doxastic modality – not (strictly) requirements, but more like “hypotheses” or “best guesses” – hypothetical / proposed “affordances”. Functional definition. (I’d already made this implicit translation when using orthodox “requirements” language.) Entrepreneurship?
Still no mention of “Open-Endedness”?
A lot in there (lots of common references) – not all clear or agreed, but fascinating. Needs dialogue.
I have a different tactic. I understand the need for changing the words used, but I would be nervous about an approach that relies on understanding a radical new technical language – I much prefer to evolve the usage of recognisable language, even words that carry old baggage. Applying Systems Thinking to the body of systems thinking itself? But I think this is tactical rather than fundamental: once the processes are understood, the language will also evolve.