Are We Nearly There Yet? #htlgi18 #htlgi2018

So many intellectual inputs and opportunities for dialogue, of which barely a tenth can be grasped over a weekend at Hay-on-Wye, at the How the Light Gets In (HTLGI) 2018 Festival. [Meta – specific sessions and an update on the weekend experience here.] Below is my summary of the messages I heard across a dozen or more sessions, out of the hundreds on offer:

Setting the theme of Darkness, Authority and Dreams, Hilary Lawson highlights our fear that however recently we were feeling we’re almost there with the modernist scientific enlightenment project, we now seem to inhabit a new dark age of post-truth conspiracy. We are all Post-Post-Modernists now, with a lot more work to do and the hope we can do more than merely dream of doing it. I know I call myself PoPoMo.

[Having seen more of Hilary Lawson this time than any previous HTLGI attendance, it’s fair to say I feel I understand more now than when I first came across his concept of “Closure” in 2003 and read it in 2009.]

The actual theme on the ground in 2018 was very much moral epistemology: what do we mean in practice by what we say we want? And this ranged from the wishful objectivity of algorithmic information in our tech-mediated lives, to the counter-intuitive seeming-subjectivity of the fundamental sciences. All human life is here, fortunately.

Before that, however much we look at the imperfections of love and hate and the meaning of life, we see that these are timeless pre-Socratic questions. And we sense that we may be no closer to arriving at answers because the very quest for full and final objective answers is misguided. We can never even be nearly there. Even time, it seems, is timeless.

These first two questions – are love and hate forever entangled in good and bad? and does the meaning of life have any meaning? – set the scene on the Friday evening with Renata Salecl, Rowan Williams, Robert Rowland-Smith, Michael Crick, Steve Taylor, Julian Baggini, Helen Lederer and Joanna Kavenna. Frankly, on meaning as the human driver, and on the balance of immediate and long-run goals, I came away with Abraham Maslow. Your experience will vary, but there is an evolutionary hierarchy, starting from yourself and your loved ones surviving to live, and culminating in striving towards your best contribution to wider humanity and the cosmos.

[Aside – I intended to follow-up with Steve Taylor and, on another topic, with Julian Baggini, but our paths merely crossed. Baggini’s own post-festival notes.]

[Aside – Also, since then Baggini has published his “Meaning of Life Redux”:

“I took part in [the] debate about the meaning of life …
the thought that bugged me most was: Not again!”

And yet again I respond with my yes, it’s all already been done by Maslow (*). The Meaning of Life Redux Redux, I say. There really is no further need to look anywhere else for an answer to this question. (*) And yes, Maslow comes with caveats about spurious prescriptive objectivity – like all rules, it’s for the guidance of the wise.]

So, beyond ubiquitous questions of the True and the Good, several other topics converged on two main threads.

One is that, whether we’re talking of culture in terms of the creative and aesthetic arts, or of knowledge about facts-of-the-matter in public life and institutions, a real focus was clearly the democratising and algorithmic processing of information – the benefits and threats to humanity from power and interests.

The other is that, conversely, even in actual physical science, fundamental objectivity turns out to be a misguided function of aesthetics and of human limits to knowledge at the level of fundamental information in both time and space.

I shall take the second first. It’s important to notice that this is not simply a matter of imperfections in current models of contingent scientific knowledge, forever to be refined by the ongoing scientific process. The “tweaks” – like Dark Matter and Dark Energy – seem to be getting larger, not smaller. There are informational limits to what can ever be knowable, such that current best models may themselves be fundamentally wrong rather than simply imperfect.

Two theoretical physicists with concerns for the epistemological boundaries of their art are Sabine Hossenfelder and Erik Verlinde. Both might agree that science has been “fucked-up” by humans. Hossenfelder’s take on the problem is that we are Lost in Math in our misguided search for aesthetic beauty and elegance. She may have a point, but I don’t yet buy it, so we may need to read the whole book when it comes out. Verlinde, on the other hand, as well as giving his own presentation and individual sessions, became the effective focus of two other group discussions, where the enormous significance of his proposed new theory of gravity is beginning to emerge.

In one session on gravity, theoretical physicists Verlinde and Hossenfelder were in discussion with practical physicists Catherine Heymans (telescopes at the cosmic scale) and Jonas Rademacker (accelerators and detectors at the micro scale). The current interest in gravity is due to serious doubts driven by the scale of unknowns implicit in having had to postulate Dark Matter and Dark Energy. In another, Verlinde was joined by Huw Price, Alison Fernandes and James Ladyman on the reality of time.

Neither gravity nor time exists as a fundamental component of the cosmos. Both, like all other forces, particles and even laws, are emergent. They are emergent as patterns of information – information that exists fundamentally at (something like) the Planck-scale level.

The dynamics in the Verlinde / Hossenfelder / Heymans interaction were fascinating and I wasn’t the only one to notice:

[That’s my face in profile on the extreme right hand edge of the pic, incidentally.]

You’d think experimental physicists would have little to fear from the theorists? Except of course that the enormous investments in their very large kit may have been justified by looking for the wrong things. They need to hope that null and surprise results from already-justified experiments still add to our new models and body of knowledge. The theories of theoretical physicists are, of course, competing very directly for our (hearts and) minds.

Heymans and Verlinde have collaborated on at least one paper looking at hypotheses that might be amenable to experimentation. Hossenfelder has written at least once critiquing Verlinde’s work. One comment of hers from the stage about mathematical rigour was almost certainly intended as an attempt at humour, but came across as unnecessarily barbed for a public discussion. Cough, shuffle.

Contrary to Hossenfelder’s point, I get the feeling that focussing on mathematical rigour at the expense of meaningful human metaphor risks garbage in, garbage out.

All acknowledged, however, that far from the necessarily self-correcting virtuous cycle between multiple theories and multiple experimentation strategies that is presumed for science generally, there is in fact a much more vicious cycle of null findings and inconclusive critiques that mislead our knowledge when each is targeted specifically at the other. The creative and the serendipitous have their own value.

[Verlinde’s is not the only theory that suggests all else is emergent from a fundamental quantum information level. This is a position I have already bought. For more discussion of gravity, time and information, in a science AND humanities context, see here. PoPoMo Epistemology.]

A corollary of such theory – for science – is that:

Even the best model(s) of physical reality
can only ever be valid locally (spatially and temporally)
from a human sense-making – anthropic – perspective.

Other than these context-specific models and the emergent relationships between their objects, there can never be a single unified explanatory and directly causal theory of everything. Whether there is an absolute reality-out-there or not, and whatever that may be, we can never model all of physical reality, and ideas of a multiverse and/or universes beyond any horizon are meaningless. Alternatively, emergence from fundamental information may be the only fundamental model of reality.

Now, if even science is having to come to terms with the parochial nature of our best possible science, the post-modern humanities ought to have a field day, right? Well, no, actually. Remember, we’re all post-post-modernists now. There is no slippery slope.

With little else besides information – available data – actually being fundamental, even objectivity is emergent from dialogue. We can’t choose our own facts, but we really do have to agree what counts as objective evidence for the facts we choose to use.

However fundamental information itself is, the other side of the coin is that all human life is now seen through an information-technology lens. Whether that content purports to be scientific, cultural or political, it has been democratised and processed for our consumption and interactive experience. Notwithstanding the truth, good or otherwise, of such content, there is enormous interest in both the benefits and the risks to humanity in the remote human interests and inhuman power inherent in the algorithms of big data and artificial intelligence.

Dare we dream lest they be nightmares?

Not surprisingly, given the scene-setting theme, a great many of the sessions fell under this umbrella, so just a selection of voices heard here.

On the objectivity or otherwise of truth in general, Matthew d’Ancona, Hannah Dawson, Hilary Lawson and Robert Rowland-Smith agreed that however objective we might agree certain facts to be, even historical ones, they could never be absolute or final. Dawson was new to me and I need to follow up. An excellent, wise but entertaining session covering a lot of ground that no summary can really do justice to: objectivity has its limits, and nowhere should we seek to make it absolute. Facts don’t change minds, but nevertheless we need “digital literacy” for a popular understanding of information dynamics. Meantime, journalist d’Ancona in particular, “having entered the philosophical debate via the tradesman’s entrance”, was not giving up on the pragmatic idea of weaponising objective truths to fight against the nefarious conspiracies and alt-facts of powerful interests. Alan Alda’s Hawkeye character in MASH, in the combat-zone of daily politics, isn’t really concerned with the medical research of metaphysics.

Timandra Harkness led a session on big data with the title Numbers vs Narrative with Anders Sandberg, Noortje Marres and James Ladyman. It stuck mainly to the general opportunities and risks of algorithmic delivery of content, but pretty much conflated all the big-data / AI topics, with lots of good examples from Marres and Sandberg. The latter is a noted optimistic futurist – and futurism is about making stuff happen rather than prediction. Marres and Ladyman both brought up “spurious objectivity” and the constraints on “displaced understanding” that needed to be designed into technology and the risks understood by human users. The algorithm vs judgement debate invariably needs to lead to balance – machine algorithms as tools to support human judgement.

Particularly polarising in another session was the unpleasant disagreement between the pessimism of Rupert Read and the optimism of Anders Sandberg and Kate Devlin, refereed by Mark Salter. There was an element of talking past each other, and whilst I might have plenty of disagreement with green politician and theologian Rupert Read myself, he does in fact have a point about the precautionary principle when applied to matters of degree that turn out to be category errors. As the title of this session suggested, Beyond Human can be either transhumanism or inhumanity depending not just on your perspective but on what in fact we’re proposing. Certain kinds of human augmentation by algorithmic AI – with loss of subjective autonomy – could indeed have catastrophic inhuman downsides and well-informed public agreement on taking risks is absolutely essential, even if futuristic individuals are prepared for risky experiments on our behalf.

A round-table dinner and discussion with Margaret Boden entitled Human Thought and Creativity was almost completely hijacked by a similar big-data / AI risk-benefit debate. This ranged across several topics, from personal care to meaningful employment and the value of artistic products in a post-automation economy. In that format most of the best contributions came anonymously from the audience. Boden herself raised the distinction between algorithmic AIs designed for specific uses and more Artificial General Intelligence. I suggested that a model for a post-automation economy – one designed to avoid a professional elite existing uncomfortably alongside those displaced into “bullshit jobs”, even if not literally unemployed – is a primary part of Paul Mason’s PostCapitalism. Dashiel Shulman pointed out that hundreds of thousands of US truck drivers are already at risk of redundancy from automated driving technology. Others pointed out that even in personal-care occupations the key factor may be the human attention and caring, even if 95% of the mechanics is simply “heavy process”.

The Edge of Reality session with George Ellis, Nancy Cartwright and Hilary Lawson, chaired by Philip Ball, continued the debate at the metaphysical level, warning that mathematical and logical abstractions are not “real” and don’t actually do anything in the real world. They are “tools” that work for humans. Continually striving to get an absolute grip on “the real” is a religious notion. [Video posted here: with the “Does Physics Lie” click-bait headline.]

A discussion on Democracy and Culture with Warren Ellis, Kristen Sollee and John McWhorter, chaired by Serena Kutchinsky, was focussed very much on the democratisation of culture (and not on democracy as a form of governance). A very entertaining discussion on the many examples – good and bad effects – of fragmentation through social channels and new technology networks. A particular focus was the sense that algorithmic feeds were displacing human gatekeepers. Gatekeeper was too restrictive a term, but there was good agreement that the net effect on creative media was wholly positive and that business models could, should and probably therefore would naturally value human curators and influencers. Loved the Ellis quote that “we’re a workaround culture”, in terms of how humans will always find convenient uses for technology whatever it was originally designed for. Put me in mind of Popper / Deutsch – that all life is problem solving and humans are a “universal constructor” of new solutions. One specific creative suggestion from the audience was that, if provider channels were using algorithms to prioritise attention and delivery, there was clearly a market for a provider to open up their algorithmic black-box and offer personalisation tools as a mixing-desk for individual algorithms. This adds a whole new dimension of possibilities for the curation and influence of personalisation algorithms. It’s probably all there and just needs opening up.
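[Aside – purely to illustrate that mixing-desk idea, here is a minimal sketch, assuming a provider willing to expose the component scores behind its feed. Every name below – the recency, similarity and serendipity scores, the weights – is my own hypothetical placeholder, not any real provider’s algorithm; the point is just that the user blends the opened-up signals with their own faders.]

```python
# A minimal sketch of the "mixing-desk" idea: the provider exposes the scores
# from several internal ranking algorithms, and the user blends them with
# their own weights. All names here (recency_score, similarity_score,
# serendipity_score) are hypothetical placeholders, not a real provider's API.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    recency_score: float      # e.g. newer content scores higher
    similarity_score: float   # e.g. "more like what you already read"
    serendipity_score: float  # e.g. deliberately unfamiliar content

def mixed_score(item: Item, weights: dict) -> float:
    """Blend the provider's component scores using the user's chosen weights."""
    return (weights.get("recency", 0.0) * item.recency_score
            + weights.get("similarity", 0.0) * item.similarity_score
            + weights.get("serendipity", 0.0) * item.serendipity_score)

def personalised_feed(items: list, weights: dict) -> list:
    """Return items ranked by the user's own mix, highest score first."""
    return sorted(items, key=lambda i: mixed_score(i, weights), reverse=True)

if __name__ == "__main__":
    catalogue = [
        Item("Breaking news", 0.9, 0.4, 0.1),
        Item("Long read you'd never normally pick", 0.2, 0.1, 0.9),
        Item("More of what you liked last week", 0.5, 0.9, 0.2),
    ]
    # A user who turns the "serendipity" fader up and "similarity" down.
    my_mix = {"recency": 0.3, "similarity": 0.1, "serendipity": 0.6}
    for item in personalised_feed(catalogue, my_mix):
        print(item.title)
```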

Having been a participant in the earlier session on the objectivity of The Truth, Hilary Lawson hosted a particularly fascinating discussion on the absolute and objective nature of The Good with Michael Ruse, Paul Boghossian and Naomi Goulder. A philosophy full-house and impressive all. Plenty of good-humoured disagreement – picking on examples from the front row – disguising deep agreement at many levels. Highlight for me was Ruse referring to me as “young man”! I found myself thinking of MacIntyre’s After Virtue. Apart from the inevitable convergence on the absence of any literally absolute objective good, even if objectivity could be agreed by dialogue, there was a strong leaning towards some effectively absolute hierarchical aspects of the good – where matters of degree become Platonic kinds for practical purposes. On the question of where the good resides – in Individual Character, in Individual Intent, or in Actual Outcomes in Action – Goulder, the philosopher of action, provided a good summary of where the effective absolutes lie. They’re only objective, and able to be treated as effectively absolute, because we rely on reflection as opposed to dogma in our internal as well as external dialogue. The Intent may seem the dominant view, but only in the context of the Character, and only with honest evaluation of action and likely Outcomes. Evaluation relies on values we come to treat as absolute after reflective consideration and agreement. The kind we find in stories. The values we hold dear may not be absolute, but they are maintained above the fray by our widely shared agreements. If Goulder provided the summary, Ruse and Boghossian are past-masters in story-telling.

So, we may be nearly there after all.

Physical science or humanity itself,
the only absolutes are the raw data.

Concepts like The Truth, The Good and The Real can be made objective by human use, but seeking their ultimate absolute nature remains a fool’s errand, a religious quest. The information is fundamental. All those higher objects and values we treat as objective in the logic of our considerations are emergent. Emergent from dynamic patterns of interaction in information.

“… to sustain a culture in which intellectual enquiry is valued and supported, people on all sides of every debate have to stand together to defend the shared intellectual values that make it possible.”
Julian Baggini
