Been debating this Register article, shared by @jonmbutterworth, with others over on Facebook. As an engineer in the information management domain, I’m obviously very interested in, and impressed by, the IT challenge here.
100 petabytes (PB) so far,
27 PB in the past year, and
400 PB per year by 2023.
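To give those numbers a sense of scale, here is my own back-of-envelope arithmetic (not from the article): 400 PB per year implies a sustained ingest rate in the tens of gigabytes per second, assuming decimal petabytes.

```python
# Back-of-envelope: sustained write bandwidth implied by 400 PB/year.
# Assumes decimal petabytes (10**15 bytes) and an average year of 365.25 days.
PB = 10 ** 15
seconds_per_year = 365.25 * 24 * 3600            # ~3.156e7 seconds

annual_volume_bytes = 400 * PB
sustained_rate = annual_volume_bytes / seconds_per_year   # bytes per second

print(f"{sustained_rate / 10**9:.1f} GB/s")      # roughly 12.7 GB/s sustained
```

And that is just the averaged rate; the peak rates out of the detectors, before triggering and filtering, are far higher still.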
Once collected, the possibilities for massively modular and parallel collaborative analysis, alternative (say) neural-network analyses, and so on … the sky’s the limit if the IT architecture supports it – hence the article. But of course that doesn’t make it “intelligent”. A human mind processes huge amounts of raw data daily, but it doesn’t store it as-is for later analysis. It is processed in real time as it is experienced, through inferences, compressions and associations, and continues to be shuffled and re-shuffled between the conscious and subconscious, forming new associations that refine the “remembered” world model we hold. But that’s not the really interesting thing here.
What is interesting is what this says about CERN / LHC / ATLAS physics. Firstly, it says nothing directly. It’s about an area of spin-off technology, unrelated to the particle physics, that benefits mutually from CERN needing it. (Think space race and Teflon-coated pans.) But secondly, what it seems to suggest is something like this:
“We have so little understanding of how our physical theories explain reality, that we can’t decide in advance what we’re really looking for, so we’ll up the sample rate, record as much as we can, post analyse it, and hope we get lucky.”
And of course we will get lucky: some interesting and significant patterns will be found, interpreted, and related to aspects of the developing physical model. But this doesn’t change the basic point, that the physics is a long way from fully explaining reality in these areas.