And it seems that the media scrutiny of Ferguson's sordid personal life has put the wind up some of the government's scientific advisers...
One scientific adviser to the government said Ferguson’s resignation had created “an awful lot of concern” and that the mood in the community was “very depressed”. The events revealed how university academics who lent their advice to government were having to cope with an increasingly difficult situation, the adviser added.

Oh dear, what a pity—how sad. Hey, science bods—do you know how you might avoid this kind of media scrutiny? Yes, that's right: don't take political appointee jobs, and refuse the fat taxpayer-funded salaries. You don't want to be caught up in politics? Then don't play at politics.
“He’s an academic researcher. He doesn’t make decisions. He’s not paid for any of this. We are being drawn into a political situation which is very unpleasant,” they said.
Are these people simple, or what?
In the meantime, a programmer has finally got around to looking at the Imperial College modelling code—and her assessment is not pretty. [Emphasis mine—DK]
I wrote software for 30 years. I worked at Google between 2006 and 2014, where I was a senior software engineer working on Maps, Gmail and account security. I spent the last five years at a US/UK firm where I designed the company’s database product, amongst other jobs and projects. I was also an independent consultant for a couple of years.

So, a reasonably credible source then. I wonder what she found? Let's cite some choice extracts from the assessment, shall we?
Clearly, Imperial are too embarrassed by the state of it ever to release it [the original model code] of their own free will, which is unacceptable given that it was paid for by the taxpayer and belongs to them.

Yikes. And the conclusion...?
Due to bugs, the code can produce very different results given identical inputs. They routinely act as if this is unimportant.
Investigation reveals the truth: the code produces critically different results, even for identical starting seeds and parameters.
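For the non-programmers: a seeded pseudo-random number generator is completely deterministic, so identical seeds and parameters should produce bit-for-bit identical output. One common way that guarantee gets broken is combining partial results across threads in whatever order they happen to finish, because floating-point addition is not associative. A minimal Python sketch of both points (illustrative only; nothing here is the Imperial code):

```python
import random

# A seeded pseudo-random generator is fully deterministic: two runs
# with the same seed must produce identical streams of numbers.
def seeded_run(seed, n=5):
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

assert seeded_run(42) == seeded_run(42)  # identical seed -> identical output

# One way determinism breaks: summing partial results in whatever
# order worker threads finish. Floating-point addition is not
# associative, so the same numbers summed in a different order can
# give a different total.
left  = (0.1 + 1e16) + (-1e16 + 0.1)   # one grouping: 0.0
right = ((0.1 + 1e16) + -1e16) + 0.1   # another grouping: 0.1
assert left != right
```

So "the same seed gave different answers" is not a mysterious property of stochastic models; it is a bug class with well-understood causes.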
I’ll illustrate with a few bugs. In issue 116 a UK “red team” at Edinburgh University reports that they tried to use a mode that stores data tables in a more efficient format for faster loading, and discovered – to their surprise – that the resulting predictions varied by around 80,000 deaths after 80 days...
Because their code is so deeply riddled with similar bugs and they struggled so much to fix them that they got into the habit of simply averaging the results of multiple runs to cover it up… and eventually this behaviour became normalised within the team.
Although the academic on those threads isn’t Neil Ferguson, he is well aware that the code is filled with bugs that create random results.
Imperial are trying to have their cake and eat it. Reports of random results are dismissed with responses like “that’s not a problem, just run it a lot of times and take the average”, but at the same time, they’re fixing such bugs when they find them. They know their code can’t withstand scrutiny, so they hid it until professionals had a chance to fix it, but the damage from over a decade of amateur hobby programming is so extensive that even Microsoft were unable to make it run right.
The Imperial code doesn’t seem to have working regression tests. They tried, but the extent of the random behaviour in their code left them defeated.
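A basic regression test for a stochastic model is not complicated: run it twice with the same seed and parameters, and demand the outputs match exactly. A sketch, with a hypothetical stand-in for the model (the real code is C; the testing principle is language-independent):

```python
import random

# Hypothetical stand-in for a model run; this is NOT the Imperial
# code, just a deterministic toy that consumes a seed.
def run_model(seed, steps=100):
    rng = random.Random(seed)
    infected = 1.0
    for _ in range(steps):
        infected *= 1.0 + 0.1 * rng.random()  # toy growth step
    return infected

# The basic regression test: same seed, same parameters, same answer,
# bit for bit. If this ever fails, the code has hidden nondeterminism.
assert run_model(seed=12345) == run_model(seed=12345)

# In practice you also pin a "golden" value from a known-good build,
# so any code change that alters results fails loudly instead of
# being averaged away.
golden = run_model(seed=12345)
assert run_model(seed=12345) == golden
```

That Imperial could not make even this simple check pass is the whole indictment in miniature.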
Much of the code consists of formulas for which no purpose is given. John Carmack (a legendary video-game programmer) surmised that some of the code might have been automatically translated from FORTRAN some years ago.
This code appears to be trying to calculate R0 for “places”. Hotels are excluded during this pass, without explanation.
R0 is both an input to and an output of these models, and is routinely adjusted for different environments and situations. Models that consume their own outputs as inputs are a problem well known to the private sector – they can lead to rapid divergence and incorrect prediction.
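A toy illustration of why feeding a model's output back in as an input is dangerous: if each pass amplifies error even slightly, two nearly identical starting estimates drift apart at an exponential rate. This is a made-up update rule, not anything from the Imperial model:

```python
# Hypothetical feedback step with amplification factor 1.05: anything
# above 1.0 means small errors grow on every pass.
def next_r0(r0):
    return 1.05 * r0 - 0.1

def iterate(r0, passes=50):
    for _ in range(passes):
        r0 = next_r0(r0)   # output fed straight back in as input
    return r0

a = iterate(2.000)   # 2.0 happens to be the fixed point: stays at 2.0
b = iterate(2.001)   # a starting estimate off by just 0.05%
# After 50 feedback passes the initial gap of 0.001 has grown by
# roughly a factor of 1.05**50, i.e. more than tenfold.
gap = b - a
```

The private-sector habit is to break such loops, or at least to monitor whether successive passes are converging or diverging.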
Despite being aware of the severe problems in their code that they “haven’t had time” to fix, the Imperial team continue to add new features; for instance, the model attempts to simulate the impact of digital contact tracing apps.
Adding new features to a codebase with this many quality problems will just compound them and make them worse.
All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.

Well, I think that this fucking debacle goes some way to explaining why Ferguson's models have been such a complete and utter failure from the get-go.
And let us remind ourselves that our government were stupid enough to believe this fucking team of charlatans, and that they are busy cratering the economy on the strength of a computer "model" (I use the term advisedly) that produces complete garbage.
On a personal level, I’d go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don’t have these people, and the results speak for themselves.

Indeed they do.
It's odd though. There's something at the back of my head, something niggling at me—a real sense of familiarity about this situation...
Where else have we encountered a commentary on a computer model that has huge political and economic consequences but which, having been written by a bunch of amateur fuck-wits, produces absolute fucking garbage...?
Oh yes—it's Harry again.
Do you remember, in November 2009, that there was a leak from the University of East Anglia's Climate Research Unit (CRU)? Most of the media spent their time exposing the dirty tricks revealed in the emails between the "scientists" who are the main proponents of the Catastrophic Anthropogenic Climate Change (CACC) theory—tricks that included collusion and blackmail to prevent dissident papers from appearing in "reputable" journals.
But what was less widely reported was that, along with the emails, the computer "models" (again, advisedly) were released—alongside a very long commentary by an unfortunate programmer who was tasked with making sense of them.
The programmer was called Ian Harris, and his HARRY_READ_ME.txt file was, for those of us who like to delve into these things, an absolute treasure trove—revealing the incompetence of these so-called "scientists", and the utter invalidity of their much-vaunted "climate models".
And here we are again: with a government fucking our economy and freedoms, all on the basis of useless, garbage-spouting models.
Dear Boris (and every other government): for fuck's sake, stop giving any credence at all to these models. Models are not evidence and they are not science: even the most well-coded model is nothing more than a theory—and, as we have seen with both COVID-19 and CACC, the people building said programmes are nowhere near competent.
These so-called scientists are no such thing: they are hobbyist coders (and bad ones at that). And where they attempt to sell their models as reliable, these people are frauds—and they should be prosecuted. If not, then a class-action lawsuit might find a large number of backers—especially if a case carries the prospect of personally bankrupting Neil Ferguson. Certainly, I would happily donate.
UPDATE: Tim Almond explains why "it's stochastic" is no excuse at all, and the Streetwise Professor is as incensed as your humble Devil...