Guess I can see why it got that name. Time to bring my LLM-AI / ChatGPT experiment to a close.
Having previously only used ChatGPT as an occasional alternative to Google, especially with Google itself having slapped its own AI between us and its search results, I took it seriously as an “assistant” for a month or maybe two.
I’ve never been convinced, and still am not, that there is anything remotely intelligent about so-called AI; it is nevertheless useful to be able to process masses of recorded knowledge very quickly. After a few trials recorded earlier, where I used it to summarise 25 years of my own writing back to me and analyse it at the same time – which had some reassurance value – I formed the idea it could maybe help me transform my blog content into a proper “graph” version of a Zettelkasten – in, e.g., Obsidian.
There are a few open-source GitHub projects that claim to process WordPress XML exports into Obsidian Markdown objects and wikilinks. The mappings are 100% deterministic and logically tractable, even across 25 evolving years of WordPress internal addressing formats. But all the GitHub or plugin efforts I tried were partial; their originators clearly only ever had specific limited purposes in mind.
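To illustrate why the mapping is deterministic in principle, here is a minimal sketch of the core transform – parse the WordPress WXR export, then rewrite internal `<a href>` links as `[[wikilinks]]` keyed on post titles. This is not any of the GitHub projects mentioned above, just an illustrative stub; the WXR namespace and the flat link-matching regex are assumptions, and a real converter would also have to handle the many historical WordPress permalink formats.

```python
import re
import xml.etree.ElementTree as ET

# Namespace used for post bodies in WordPress WXR exports
# (assumed; export versions vary).
NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def posts_from_wxr(xml_text):
    """Return (title, link, html_body) for each <item> in a WXR export."""
    root = ET.fromstring(xml_text)
    posts = []
    for item in root.iter("item"):
        title = item.findtext("title", default="untitled")
        link = item.findtext("link", default="")
        body = item.findtext("content:encoded", default="", namespaces=NS)
        posts.append((title, link, body))
    return posts

def to_markdown(posts):
    """Rewrite internal links to [[wikilinks]] on the target post's title.

    External links (no matching post URL) are left untouched.
    The simple regex only handles bare <a href="..."> anchors."""
    by_link = {link: title for title, link, _ in posts}
    out = {}
    for title, link, body in posts:
        def repl(m):
            target = by_link.get(m.group(1))
            return f"[[{target}]]" if target else m.group(0)
        out[title] = re.sub(r'<a href="([^"]+)">[^<]*</a>', repl, body)
    return out
```

Run against a two-post export, a cross-reference from Post A to Post B becomes `[[Post B]]`, while links to URLs outside the export survive unchanged – which is exactly the deterministic behaviour the trial-and-error ChatGPT sessions never reliably converged on.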
Whether I tried starting from existing partial code, or with a blank slate and a semi-formal statement of my mapping needs and scope, ChatGPT always “worked” – in the sense that it always delivered functional output. But the mappings were only ever partially as intended, and sometimes even destructive of existing knowledge content – quite dumb, in fact. Every time we got one kind of object and link sorted, it simply appeared to guess what tweaks to make to the code to pick up what had been missed previously, and in doing so broke what had previously worked. For a while it seemed like trial and error might be a workable strategy for getting to 95%+ success, but eventually I realised it wasn’t working. It was always one or two steps forward, two or three steps back. It was only ever as good as whatever piece of “similar vibe” code it could find out there; each guess was independent of learnings from previous guesses, so it was a random walk across the problem domain rather than anything additive. What I really needed was a programmer, or my own programming skills. I left a few hooks out there in GitHubland, but ChatGPT isn’t it. It was never going to do better than any piece of existing code it found in its historical data banks.
I got two or three 50%+ outputs – which looked promising from 40,000 feet in terms of numbers of objects and linking density. I used one as a graphic for one of my slides at the ISSS2025 Birmingham conference, and here is a selection from its best efforts since:



It was never good enough to make the exercise worth the next step in effort: adding ontological organisation and navigation links to the whole epistemic “semantic” web. Ho hum. It would still be worthwhile if any actual programmer is interested. I’d pay for a solution.
=====
PS – the addictive “agreeable” nature of the chatbot – even in text, not simulated voice – is worth experiencing in order to understand its limitations and the risks of being fooled into thinking “it can be your friend” – it can’t.
Can’t see me actually using any AI chatbot as a creative assistant. Almost exactly two months since I did any productive writing. Next.
=====