Analysis of COVID-19 data — Part II

By Deborah Kruse Guebert - Contributing Columnist



The four possible WuFlu cases, two from January and two from February, that popped up casually in the course of a 20-minute stroll certainly got my attention. But no one would claim that as scientific evidence for the early spread of these pesky little spheres with spikes.

There is, as promised, much better evidence that by late March when bureaucrats started bolting barn doors, the horse had long since left the stable. It is pretty much common knowledge by now, but here’s how that information started to come out in scientific form.

Towards the end of February, the first genetic evidence for an early COVID-19 spread appeared in the unusual genetic similarity between two Washington cases with no traceable connection ("Coronavirus May Have Spread in U.S. for Weeks, Gene Sequencing Suggests," WSJ 3/1/2020).

One was the first confirmed case in the U.S. on Jan. 20, a man who had returned from Wuhan five days earlier. The second case was identified on Feb. 24, just days after the state laboratory, finally freed from the restrictive requirements of the CDC, was able to test more broadly.

These apparently unconnected cases shared a rare genetic variation that marked them as highly likely to be from the same original source. This indicated that community spread had happened there by at least mid-February.

As testing continues, earlier and earlier cases are being unearthed.

Europe’s first identified case was in France, a man who turned up ill at a hospital in Paris on Dec. 27. His wife, who worked in a large supermarket near Charles de Gaulle airport, had had only a slight cough (NYT, May 5).

In “The Search is on for America’s Earliest Coronavirus Deaths” (WSJ, May 4), Matthew Memoli of NIH states, “Some of the earliest reports in China are from the fall, and you have plenty of people traveling back to the U.S. all the time.” Americans are a highly mobile population, travelling for business, pleasure, and semesters abroad. In addition, the United States is host to international students from across the globe, including 370,000 from China, spread out in colleges and universities across the country, and travelling home for holidays. Among all these pathways, it is not hard to imagine that an element as contagious as the latest coronavirus would move quickly.

Tracking the spread of those turning up for treatment is one thing. Quantifying it in the general population is more complicated, but essential for reliable policy making. As Stanford professor of medicine and professor of epidemiology and population health, John P. A. Ioannidis, wrote on March 17, “The most valuable piece of information would be to know the current prevalence of infection in a random sample of a population and to repeat this exercise at regular time intervals to estimate the rate of new infections. Sadly, that’s information that we don’t have” (“A Fiasco in the Making? As the Coronavirus Pandemic Takes Hold, We Are Making Decisions Without Reliable Data”, STAT online).

With an annual budget of billions of dollars, several earlier coronavirus outbreaks, and a core mission to monitor and control deadly diseases, is it not odd that the Centers for Disease Control was caught flat-footed on this?

In the May 9 Gazette, OWU Physics professor Barbara Anderek and Biology professor Heather Fair do an excellent job of explaining how mathematical models for the spread of a disease are constructed. Dr. Anderek describes the shapes of three different potential trajectories for the WuFlu, commenting, “Predictions for models…vary widely for Ohio…”

Yes, predicting the future is a Rorschach blot process when there is almost no data.

While the CDC banned any testing but their own, two other Stanford professors of medicine, doctors Bendavid and Bhattacharya, did not give up. They utilized existing sample population testing data from around the globe to estimate prevalence rates for those populations. They found much wider infection rates, which dramatically lowered fatality rates. In Italy, for example, where the fatality rate was 8% for those already seriously ill, data from an entire town indicated a general population fatality rate of just 0.06% (Is Covid-19 as Deadly as They Say? WSJ, March 25).

Once freed to gather their own data, the Stanford researchers recruited over 3,000 Santa Clara County volunteers and deployed a pinprick blood test for COVID-19 antibodies. As the immune system takes several days to ramp up antibody production, this type of test may miss an infection in its early stages. However, it is useful far beyond the 2-14 day active virus period during which the CDC-preferred PCR viral gene test is effective.

Of course, this assumes that all tests are validated for purpose.

The Stanford team compensated for skew in their sample population by weighting their results to match local demographic ratios. They found a prevalence of infection 50 to 84 times higher than previous estimates. The corollary of this enormous increase in prevalence was a proportional decrease in the estimated mortality rate, which fell to 0.1% to 0.2%. The range reflects experimental uncertainty.
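The proportionality works like simple division: the deaths in the numerator stay fixed, so multiplying the estimated infections in the denominator divides the fatality rate by the same factor. A back-of-envelope sketch with made-up round numbers (not the actual Santa Clara figures):

```python
# Illustrative only: how a higher estimated prevalence lowers the
# implied fatality rate. All numbers here are invented for the example.
deaths = 100
confirmed_cases = 1_000          # cases found by limited testing
prevalence_multiplier = 50       # antibody survey suggests 50x more infections

# Naive case fatality rate: deaths divided by confirmed cases only
cfr = deaths / confirmed_cases                    # 0.10, i.e. 10%

# Infection fatality rate: same deaths spread over all estimated infections
estimated_infections = confirmed_cases * prevalence_multiplier
ifr = deaths / estimated_infections               # 0.002, i.e. 0.2%

print(f"CFR: {cfr:.1%}, IFR: {ifr:.1%}")
```

A 50-fold increase in estimated infections cuts the rate 50-fold, which is exactly the shape of the Santa Clara revision.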

Sample population testing from the other side of the country was just as paradigm shifting. In NYC, a notorious virus hotspot, 21% of participating shoppers tested positive (1 in 5 New Yorkers May Have Had Covid-19…, NYT, April 23).

Gov. Cuomo commented that the death rate in New York from COVID-19 would most likely be far lower than previously believed, possibly 0.5% of those infected. That is significantly higher than the Santa Clara County figures, but so much better than the WHO's initial estimate of 3.4% as to be cause for universal rejoicing, although I am aware that some are committed to the more apocalyptic version.
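The New York arithmetic can be sketched quickly. Assuming a city population of roughly 8.4 million (my figure, not from the survey itself), a 21% antibody rate and a 0.5% fatality rate imply:

```python
# Back-of-envelope check of the New York numbers.
# Assumption: NYC population of roughly 8.4 million.
population = 8_400_000
survey_positive_rate = 0.21          # share of sampled shoppers with antibodies
implied_fatality_rate = 0.005        # the ~0.5% estimate

estimated_infections = population * survey_positive_rate       # ~1.76 million
implied_deaths = estimated_infections * implied_fatality_rate  # ~8,800

print(f"Implied infections: {estimated_infections:,.0f}")
print(f"Implied deaths at 0.5%: {implied_deaths:,.0f}")
```

The point is the scale: far more infections than confirmed cases, and therefore a far lower rate of death per infection.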

Perhaps it would be helpful to explain that using populations with a higher prevalence is not cherry picking, since their number of deaths will also be higher. It is the ratio that matters.
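Two made-up regions illustrate why a hotspot is a fair place to measure. Deaths scale with infections, so the ratio can come out the same in a lightly touched area and a hard-hit one:

```python
# Illustrative only: prevalence alone does not bias the fatality ratio,
# because deaths rise along with infections. Invented numbers.
region_low = {"infections": 10_000, "deaths": 20}       # low-prevalence area
region_hot = {"infections": 500_000, "deaths": 1_000}   # hard-hit hotspot

ifr_low = region_low["deaths"] / region_low["infections"]   # 0.002 = 0.2%
ifr_hot = region_hot["deaths"] / region_hot["infections"]   # 0.002 = 0.2%

print(f"Low-prevalence IFR: {ifr_low:.1%}, hotspot IFR: {ifr_hot:.1%}")
```

The hotspot contributes fifty times the infections and fifty times the deaths; the ratio is untouched.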

Researchers are trying to determine what proportion of those exposed to this virus will end up dying. Obviously, there is more pressure to investigate in places where more people are dying, so the large sample testing data that we have comes from those places. The variation in mortality rates across the country indicates that there are factors involved other than the toxicity of the virus.

To return to the estimated fatality rate, 0.1% is the figure that the CDC estimates for seasonal flu mortality and is also the figure that CDC Director Dr. Robert Redfield offered as an initial estimate for the new coronavirus mortality rate back in January. Even Dr. Anthony Fauci, before deciding to make full use of the hobgoblin factor, is on record saying that “… the overall clinical consequences of COVID-19 may ultimately be more akin to those of a severe seasonal influenza (which has a case fatality rate of approximately 0.1%) … rather than a disease similar to SARS or MERS…” (New England Journal of Medicine, published March 26).

How many deaths would that mean? A couple of months ago, the CDC figure for the 2017-18 flu season, considered the most recent severe flu season, stood just shy of 80,000. That "settled science" statistic has suddenly dropped to 61,000. Interesting.

Would this have anything to do with the fear that the U.S. COVID-19 death toll, despite all the extra help from "presuming," might still not reach much beyond a severe flu toll? Unfortunately for hobgoblin promoters, the estimates from population sample data fall much closer to the 0.1% average seasonal flu than to the 3.4% used by all the best global health experts as an excuse to practice communism. Oops.



Delaware resident Deborah Kruse Guebert is a longtime educator who has taught in Europe and currently tutors students in mathematics in the local area.
