OWL(FA) Breaks Russell’s Paradox?

Just a holding post for a thought that struck me yesterday ….

At a meeting yesterday discussing which XML languages are best suited to modelling semantics, we were given some description of the different flavours of OWL (the Web Ontology Language). In general, with OWL(Full) and OWL(DL) it is possible to make ontologically impossible assertions – the infamously non-existent “barber who shaves all and only those people who don’t shave themselves” (think about it), ie Russell’s paradox, which exposed the limits of naive set theory via the set of all sets that do not contain themselves. The idea being that there is nothing to prevent circular networks of taxonomic (classification) relationships that express the impossible. The story is that OWL(FA), where FA = “Fixed-Level Metamodelling Architecture”, in principle forces individuals, classes, classes-of-classes, classes-of-classes-of-classes and so on into distinct levels in its metamodel.
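
To spell the loop out (my own restatement of the textbook formulation, nothing from the meeting itself):

```latex
% Suppose some barber b shaves exactly those who do not shave themselves:
\exists b \,\forall x \,\bigl( \mathit{Shaves}(b,x) \leftrightarrow \lnot \mathit{Shaves}(x,x) \bigr)
% Instantiating x := b yields the contradiction
\mathit{Shaves}(b,b) \leftrightarrow \lnot \mathit{Shaves}(b,b)
% The set-theoretic original: R = \{ x \mid x \notin x \},
% whence R \in R \leftrightarrow R \notin R.
```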

The argument is actually inconclusive, in the sense that in any variant of the language one can choose how to implement and constrain the entities and relationships modelled, according to one’s chosen semantic model, but the striking point is that in OWL(FA) the circularity is broken by level-shifting.
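
A toy sketch of the fixed-level idea – my own illustrative Python, not actual OWL(FA) syntax or semantics: every entity carries a metamodel level, and membership is only allowed to point exactly one level down, so nothing can ever contain itself.

```python
class Entity:
    def __init__(self, name, level):
        self.name = name
        self.level = level          # 0 = individual, 1 = class, 2 = metaclass, ...
        self.members = set()

    def add_member(self, other):
        # Membership must cross exactly one level: a level-n entity may only
        # contain level-(n-1) entities, which rules out Russell-style loops.
        if other.level != self.level - 1:
            raise ValueError(
                f"{other.name} (level {other.level}) cannot be a member of "
                f"{self.name} (level {self.level})"
            )
        self.members.add(other)

barber = Entity("Barber", level=0)
barbers = Entity("Barbers", level=1)
barbers.add_member(barber)          # fine: individual into class
# barbers.add_member(barbers)       # would raise: a class cannot contain itself
```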

This is Douglas Hofstadter’s “strange loopiness” – things that look like impossibly recursive loops, but in fact represent possible realities, because the loops shift across conceptual levels. Illustrated ad infinitum by Hofstadter in his “Gödel, Escher, Bach” with “quined sentences” – sentences that take themselves as their own subject – in mathematical and logical as well as natural languages. Hofstadter’s ultimate point is that things that “work on themselves” (like human minds) show some interesting spiralling evolutionary traits tending towards consciousness.
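
The programmer’s version of a quined sentence is a “quine” – a program whose output is its own source. A minimal example in Python (my own, not from the book):

```python
# The data (s) and the instruction that uses it mirror each other across levels,
# so the program reproduces itself without any impossible circularity.
s = 's = %r\nprint(s %% s)'
print(s % s)
```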

Small world indeed – in the same meeting, in a hard industrial engineering context, another concept was openly recognised: that information expressed in any language, even a formal semantic one, contains much implicit knowledge that may be inferred in addition to what is objectively encoded, leading to cybernetic / AI / informatic-automation approaches to using such models too.
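
A minimal sketch of that point, with made-up plant-engineering names rather than anything from the meeting: a handful of explicit statements plus two simple rules yield facts that were never objectively encoded.

```python
explicit = {
    ("Pump", "subclass_of", "RotatingEquipment"),
    ("RotatingEquipment", "subclass_of", "Equipment"),
    ("P-101", "instance_of", "Pump"),
}

def infer(facts):
    """Forward-chain two rules: subclass transitivity and instance inheritance."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (a, p1, b) in facts:
            for (c, p2, d) in facts:
                if p1 == "subclass_of" and p2 == "subclass_of" and b == c:
                    new.add((a, "subclass_of", d))
                if p1 == "instance_of" and p2 == "subclass_of" and b == c:
                    new.add((a, "instance_of", d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

implicit = infer(explicit) - explicit
print(implicit)
# e.g. ('P-101', 'instance_of', 'Equipment') was never stated, only implied.
```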

Great convergence happening.

(Possibly a side issue, but it feels related. My argumentation style always wants to retain complexity, ie not to make overall simplifying assumptions applicable to the whole argument, but to separate distinct issues, which may individually be simpler whilst collectively complex …. see “only & just”, see exclusive-ORs, inclusion of opposites, see Follett, see synthesis, see integration, in earlier threads ….)
