Back in January, a panelist at the Health 2.0 WinterTech conference made a wisecrack about mHealth "quantified self fitness wearables" all coming with a "for entertainment purposes only" disclaimer tucked away in the abstruse Terms and Conditions fine print.
A headline this morning:
Fitbit Trackers Are 'Highly Inaccurate,' Study Finds
by KALYEENA MAKORTOFF, CNBC
A class action lawsuit against Fitbit may have grown teeth following the release of a new study that claims the company's popular heart rate trackers are "highly inaccurate."
Researchers at the California State Polytechnic University, Pomona tested the heart rates of 43 healthy adults with Fitbit's PurePulse heart rate monitors, using the company's Surge watches and Charge HR bands on each wrist...
Comparative results from rest and exercise — including jump rope, treadmills, outdoor jogging and stair climbing — showed that the Fitbit devices miscalculated heart rates by up to 20 beats per minute on average during more intensive workouts.
"The PurePulse Trackers do not accurately measure a user's heart rate, particularly during moderate to high intensity exercise, and cannot be used to provide a meaningful estimate of a user's heart rate," the study stated...
Interesting. My wife bought me a Fitbit HR for Christmas (the "Plum" model depicted at the top of this post). It is a frustrating piece of crap. It has never held a charge for more than half the time they claim it will. And, after taking it off the charger, more than half of the time I could not get it to sync with the Fitbit app on my iPhone. Consequently, the data it captures and then sends back to me (replete with the dopey little congratulatory "badges") are of nil value; they are woefully incomplete and wildly off. Now that I've joined a fitness club and have resumed a strenuous weight training and cardio regimen, having some accurate and useful activity metrics would be nice, particularly in the wake of my health travails of late.
As of late last week, my Fitbit HR will not even charge at all.
Finished this book. Nicely done.
Among other things, excellent non-technical explanation of Bayesian reasoning spanning chapters 9 and 10.
Color me thoroughly Bayesian. As I wrote back around 2003:
Bayesian methods are used to refine a posterior probability estimate by using prior probability knowledge. The table above is familiar to anyone who works in health care or epidemiological analysis. For example, we know both the approximate prevalence of a clinical condition (the proportion of people in the population with the condition) and the historical false positive and false negative rates of a relevant lab test. Using Bayes' formula (below), we can better estimate the likelihood that you in fact have a disease given that your test comes back positive, or the probability that you are actually disease-free given a negative lab test... In two words, "conjunctive hedging." Relatedly, see Chapter 3 of my grad thesis (on analytical methodology in the lab).
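The arithmetic behind that point is easy to sketch. Here's a minimal illustration of Bayes' rule applied to a diagnostic test; the prevalence, sensitivity, and specificity figures below are purely hypothetical, chosen only to show how a low base rate drags down the post-test probability:

```python
# Illustrative Bayes' rule calculation for a diagnostic test.
# All numbers are hypothetical: 1% prevalence, 90% sensitivity, 95% specificity.

def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' formula."""
    p_pos_given_disease = sensitivity                 # true positive rate
    p_pos_given_healthy = 1.0 - specificity           # false positive rate
    # Total probability of a positive test, over diseased and healthy people:
    p_pos = (p_pos_given_disease * prevalence
             + p_pos_given_healthy * (1.0 - prevalence))
    return p_pos_given_disease * prevalence / p_pos

p = posterior_positive(prevalence=0.01, sensitivity=0.90, specificity=0.95)
print(f"P(disease | positive) = {p:.1%}")   # → 15.4%
```

Even with a fairly accurate test, a positive result here means only about a 15 percent chance of actually having the disease, because the condition is rare to begin with. That's the "base rates matter" lesson in a nutshell.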
Chapter 9. Bayes, man. Base rates matter.
Not much is known about Rev. Thomas Bayes, who lived during the eighteenth century. Serving mostly as clergyman to his local parish, he published two works in his lifetime. One defended Newton’s theory of calculus, back when it still needed defending, and the other argued that God’s foremost aim is the happiness of his creatures.
In his later years, however, Bayes became interested in the theory of probability. His notes on the subject were published posthumously, and have subsequently become enormously influential— a Google search on the word “Bayesian” returns more than 11 million hits. Among other people, he inspired Pierre-Simon Laplace, who developed a more complete formulation of the rules of probability. Bayes was an English Nonconformist Presbyterian minister, and Laplace was a French atheist mathematician, providing evidence that intellectual fascination crosses many boundaries.
The question being addressed by Bayes and his subsequent followers is simple to state, yet forbidding in its scope: How well do we know what we think we know? If we want to tackle big-picture questions about the ultimate nature of reality and our place within it, it will be helpful to think about the best way of moving toward reliability in our understanding.
Even to ask such a question is to admit that our knowledge, at least in part, is not perfectly reliable. This admission is the first step on the road to wisdom. The second step on that road is to understand that, while nothing is perfectly reliable, our beliefs aren’t all equally unreliable either. Some are more solid than others. A nice way of keeping track of our various degrees of belief, and updating them when new information comes our way, was the contribution for which Bayes is remembered today.
Among the small but passionate community of probability-theory aficionados, fierce debates rage over What Probability Really Is. In one camp are the frequentists, who think that “probability” is just shorthand for “how frequently something would happen in an infinite number of trials.” If you say that a flipped coin has a 50 percent chance of coming up heads, a frequentist will explain that what you really mean is that an infinite number of coin flips will give equal numbers of heads and tails.
In another camp are the Bayesians, for whom probabilities are simply expressions of your states of belief in cases of ignorance or uncertainty. For a Bayesian, saying there is a 50 percent chance of the coin coming up heads is merely to state that you have zero reason to favor one outcome over another. If you were offered to bet on the outcome of the coin flip, you would be indifferent to choosing heads or tails. The Bayesian will then helpfully explain that this is the only thing you could possibly mean by such a statement, since we never observe infinite numbers of trials, and we often speak about probabilities for things that happen only once, like elections or sporting events. The frequentist would then object that the Bayesian is introducing an unnecessary element of subjectivity and personal ignorance into what should be an objective conversation about how the world behaves, and they would be off...
Carroll, Sean (2016-05-10). The Big Picture: On the Origins of Life, Meaning, and the Universe Itself (pp. 61-62). Penguin Publishing Group. Kindle Edition.
On the topic of "evolution," I recommend triangulating Sean Carroll's book with these two:
"The Story of the Human Body" and "A Natural History of Human Morality." The Lieberman book goes principally to the evolution of human physiology, and what he terms our current prevalence of "evolutionary mismatch diseases." Tomasello's book sets forth the scientific evidence underpinning the adaptive utility of prosocial/empathic/altruistic behaviors.

Sean Carroll's book goes more to the physical fundamentals and evolutionary processes at the atomic/subatomic levels. It squares with the take proffered by CERN:
The theories and discoveries of thousands of physicists since the 1930s have resulted in a remarkable insight into the fundamental structure of matter: everything in the universe is found to be made from a few basic building blocks called fundamental particles, governed by four fundamental forces. Our best understanding of how these particles and three of the forces are related to each other is encapsulated in the Standard Model of particle physics. Developed in the early 1970s, it has successfully explained almost all experimental results and precisely predicted a wide variety of phenomena. Over time and through many experiments, the Standard Model has become established as a well-tested physics theory.
All matter around us is made of elementary particles, the building blocks of matter. These particles occur in two basic types called quarks and leptons. Each group consists of six particles, which are related in pairs, or “generations”. The lightest and most stable particles make up the first generation, whereas the heavier and less stable particles belong to the second and third generations. All stable matter in the universe is made from particles that belong to the first generation; any heavier particles quickly decay to the next most stable level. The six quarks are paired in the three generations – the “up quark” and the “down quark” form the first generation, followed by the “charm quark” and “strange quark”, then the “top quark” and “bottom (or beauty) quark”. Quarks also come in three different “colours” and only mix in such ways as to form colourless objects. The six leptons are similarly arranged in three generations – the “electron” and the “electron neutrino”, the “muon” and the “muon neutrino”, and the “tau” and the “tau neutrino”. The electron, the muon and the tau all have an electric charge and a sizeable mass, whereas the neutrinos are electrically neutral and have very little mass.
Forces and carrier particles
There are four fundamental forces at work in the universe: the strong force, the weak force, the electromagnetic force, and the gravitational force. They work over different ranges and have different strengths. Gravity is the weakest but it has an infinite range. The electromagnetic force also has infinite range but it is many times stronger than gravity. The weak and strong forces are effective only over a very short range and dominate only at the level of subatomic particles. Despite its name, the weak force is much stronger than gravity but it is indeed the weakest of the other three. The strong force, as the name suggests, is the strongest of all four fundamental interactions.
Three of the fundamental forces result from the exchange of force-carrier particles, which belong to a broader group called “bosons”. Particles of matter transfer discrete amounts of energy by exchanging bosons with each other. Each fundamental force has its own corresponding boson – the strong force is carried by the “gluon”, the electromagnetic force is carried by the “photon”, and the “W and Z bosons” are responsible for the weak force. Although not yet found, the “graviton” should be the corresponding force-carrying particle of gravity. The Standard Model includes the electromagnetic, strong and weak forces and all their carrier particles, and explains well how these forces act on all of the matter particles. However, the most familiar force in our everyday lives, gravity, is not part of the Standard Model, as fitting gravity comfortably into this framework has proved to be a difficult challenge. The quantum theory used to describe the micro world, and the general theory of relativity used to describe the macro world, are difficult to fit into a single framework. No one has managed to make the two mathematically compatible in the context of the Standard Model. But luckily for particle physics, when it comes to the minuscule scale of particles, the effect of gravity is so weak as to be negligible. Only when matter is in bulk, at the scale of the human body or of the planets for example, does the effect of gravity dominate. So the Standard Model still works well despite its reluctant exclusion of one of the fundamental forces...
I may tackle this book next.
Dr. Carroll on DNA:
Erwin Schrödinger, in What Is Life?, recognized the need for information to be passed down to future generations. Crystals don’t do the job, but they come close; with that in mind, Schrödinger suggested that the culprit should be some sort of “aperiodic crystal”— a collection of atoms that fit together in a reproducible way, but one that had the capacity for carrying substantial amounts of information, rather than simply repeating a rote pattern. This idea struck the imaginations of two young scientists who went on to identify the structure of the molecule that actually does carry genetic information: Francis Crick and James Watson, who deduced the double-helix form of DNA.

UPDATE
Deoxyribonucleic acid, DNA, is the molecule that essentially all known living organisms use to store the genetic information that guides their functioning. (There are some viruses based on RNA rather than DNA, but whether or not they are “living organisms” is disputable.) That information is encoded in a series of just four letters, each corresponding to a particular molecule called a nucleotide: adenine (A), thymine (T), cytosine (C), and guanine (G). These nucleotides are the alphabet in which the language of genes is written. The four letters string together to form long strands, and each DNA molecule consists of two such strands, wrapped around each other in the form of a double helix. Each strand contains the same information, as the nucleotides in one strand are paired up with complementary ones in the other: A’s are paired with T’s, and C’s are paired with G’s. As Watson and Crick put it in their paper, with a measure of satisfied understatement: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”
In case it has managed to escape your notice, the copying mechanism is this: the two strands of DNA can unzip from each other, then act as templates, with free nucleotides fitting into the appropriate places on each separate strand. Since each nucleotide will match only with its specific kind of partner, the result will be two copies of the original double helix— at least as long as the duplication is done without error.
The information encoded in DNA directs biological operations in the cell. If we think of DNA as a set of blueprints, we might guess that some molecular analogue of a construction worker comes over and reads the blueprints, and then goes away to do whatever task is called for. That’s almost right, with proteins playing the role of the construction workers. But cellular biology inserts another layer of bureaucracy into the operation. Proteins don’t interact with DNA directly; that job belongs to RNA... [Carroll, op cit, pp. 257-258]
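The "templates" mechanism Carroll describes lends itself to a toy illustration. This is a deliberately simplified sketch of Watson-Crick base pairing (it ignores the antiparallel orientation of real strands and the whole enzymatic machinery of replication), but it captures the key point: each strand fully determines its partner, so unzipping yields two faithful copies.

```python
# Simplified sketch of Watson-Crick complementary base pairing:
# A pairs with T, and C pairs with G.

PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the complementary strand under Watson-Crick pairing."""
    return "".join(PAIRS[base] for base in strand)

template = "ATCG"
copy = complement(template)
print(copy)                         # → TAGC
# Re-complementing recovers the original strand, which is why each
# separated strand can serve as a template for an exact duplicate:
assert complement(copy) == template
```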
"The Gene" may have to take a back seat to this one that just came to my attention via Science Based Medicine:
Medicine is an uncertain business. It is an applied science, applying the results of basic science knowledge and clinical studies to patients who are individuals with differing heredity, environment, and history. It is commonly assumed that modern science-based doctors know what they are doing, but quite often they don’t know for certain. Different doctors interpret the same evidence differently; there is uncertainty about how valid the studies’ conclusions are and there is still considerable uncertainty and disagreement about things like guidelines for screening mammography and statin prescriptions.

Might we also head back "down in the Weeds"?
Snowball in a Blizzard by Steven Hatch, MD, is a book about uncertainty in medicine. The title refers to the difficulty of interpreting a mammogram, trying to pick out the shadows that signify cancer from a veritable blizzard of similar shadows...
A culture of denial subverts the health care system from its foundation. The foundation—the basis for deciding what care each patient individually needs—is connecting patient data to medical knowledge. That foundation, and the processes of care resting upon it, are built by the fallible minds of physicians. A new, secure foundation requires two elements external to the mind: electronic information tools and standards of care for managing clinical information.
Electronic information tools are now widely discussed, but the tools depend on standards of care that are still widely ignored. The necessary standards for managing clinical information are analogous to accounting standards for managing financial information. If businesses were permitted to operate without accounting standards, the entire economy would be crippled. That is the condition in which the $2½ trillion U.S. health care system finds itself—crippled by lack of standards of care for managing clinical information. The system persists in a state of denial about the disorder that our own minds create, and that the missing standards of care would expose.
This pervasive disorder begins at the system’s foundation. Contrary to what the public is asked to believe, physicians are not educated to connect patient data with medical knowledge safely and effectively. Rather than building that secure foundation for decisions, physicians are educated to do the opposite—to rely on personal knowledge and judgment—in denial of the need for external standards and tools. Medical decision making thus lacks the order, transparency and power that enforcing external standards and tools would bring about... [Lawrence Weed, MD and Lincoln Weed, JD, Medicine in Denial, pp 1-2]
Props to Jane Sarasohn-Khan:
Not seeing any "bend" in the "cost curve."
A worthy find, SlowMedUpdates.com.
Slow Medicine

Came upon these folks by way of a STATnews article, Why we won’t stop providing routine wellness visits.
Slow Medicine promotes a thoughtful, evidence-based approach to clinical care, emphasizing careful clinical reasoning and patient-focused care. Slow Medicine draws on many of the principles of the broader “Slow Movement”, which have been applied to a wide range of fields including food, art, parenting, and technology, among others. Like the broader “Slow Movement,” which emphasizes careful reflection, Slow Medicine involves careful interviewing, examination, and observation of the patient. It reminds us that the purpose of health care is to improve the wellbeing of patients, not simply to utilize the ever-growing array of medical tools and gadgets. In addition, Slow Medicine recognizes that many clinical problems do not yet have a technological “magic bullet” but instead require lifestyle changes that have powerful effects over time. Importantly, Slow Medicine practitioners are eager to promote innovation, new ideas and adopt new technologies early, but aim to do so in a methodical manner and only after it’s clear that newer really is better for the individual patient...
"Slow Medicine." I am reminded of my review of Victoria Sweet's book "God's Hotel."
More to come...