As someone who flirts with the hospitality industry, I find handwashing morbidly fascinating. The CDC has officially changed its stance on the temperature you should use for washing, saying cold water is about as effective as hot while being more carbon-friendly. The study underlying this change, “Handwashing Water Temperature Effects on the Reduction of Resident and Transient (Serratia marcescens) Flora when using Bland Soap,” was published in 2001 by Barry Michaels et al. The study seemed flawed to me, so I asked someone I respect: a practicing American MD with two specialties, internal medicine and interventional radiology. He knows about washing hands.
Four whole people were sampled! Dumb bad or good luck could bolster or refute this “study.” Having said that, I could believe it. But the study did not account for water temperature’s effects on soap, just on bacteria. Soap works by forming micelles [aggregates of molecules in a colloidal solution], and micelles have to contact both a hydrophobic and a hydrophilic item to work. Differences between hard and soft water, and how each interacts with soap at different temperatures, also come to mind. I am a bottom-line physiology thinker here: when molecules move faster, there is more effect from the movement. Why does the body innately increase its temperature when infected? And why serratia [a bacterium responsible for hospital infections] of all choices!?
I am going to stick with hot water hand washing, for now.
Machine learning can be a check on conventional thinking, if we let it.
On the new analytics LinkedIn group started by Vincent Granville, Dan Rice wrote a personal account of his frustrations with the Alzheimer’s research of 20 years ago, before we understood more about the preclinical period of the disease:
The problem that I have with domain expert knowledge selecting the final variables that determine the model is that it no longer is data mining and it often is no longer even good science. From the time of Galileo, the most exciting and important findings in what we call science are those data-driven findings that prove the experts wrong. The problem is that the prior domain knowledge is usually incomplete or even wrong, which is the reason for research and analytics in the first place. I understand that the experts are helpful to generate a large list of candidate variables, but the experts will often be wrong when it comes to determining how, why and which of these variable combinations is causing the outcome.
I had an experience early in my research career that has made me forever distrustful of the expert. I was doing brain imaging research on the origins of Alzheimer’s disease in the early 1990’s and all the experts at that time said that the cause of Alzheimer’s disease must be happening right when the dementia and serious memory problems are observed which may be at most a year before the ultimate clinical diagnosis of dementia consistent with Alzheimer’s. We took a completely data-driven approach and measured every variable imaginable in both our brain imaging measure and in cognitive/memory testing. From all of these variables, we found one very interesting result. What the experts had referred to as a “silent brain abnormality” that is seen in about 25% of “normal elderly” at age 70 was associated with minor memory loss problems that were similar to but much less severe than in the early dementia in Alzheimer’s disease. We knew that the prevalence of clinically diagnosed dementia consistent with Alzheimer’s disease was 25% in community elderly at age 80. Thus, we had a very simple explanatory model that put the causal disease process of Alzheimer’s disease back 9-10 years earlier than anyone had imagined.
The problem was that all the experts who gave out research funding disagreed and would not even give me another grant from the National Institute on Aging to continue this research. For years, nobody did any of this preclinical Alzheimer’s research until about 10 years ago when people started replicating our very same pattern of results with extensions to other brain imaging measures. What is still controversial is whether you can accurately PET image the beta-amyloid putative causal protein in living patients, but it is no longer controversial that Alzheimer’s has an average preclinical period of at least 10 years. Ironically, one of the experts who sat on the very committee that rejected my grant applications suddenly became an expert in preclinical Alzheimer’s disease over the past 5 years. The experts are very often dead wrong. We allow experts to select variables in the RELR algorithm, but our users tell us that they seldom use this feature because they want the data to tell the story. The data are much more accurate than the experts if you have an accurate modeling algorithm.
(Quoted with permission of the author.)