I am liking it. Fairly comprehensive topical coverage.

Foreword
This short book provides the skills and tools to empower the reader to make better sense of clinical evidence. Present-day journal articles reflect ever-increasing complexity in research design, methods and analyses, and this welcome addition to the field will help readers to get the most from such papers.
With a little practice the book will indeed make it easier to understand the evidence related to healthcare interventions; it provides a clear and accessible account across the whole subject area. The authors avoid unnecessary jargon and have designed the book to be flexible in its use – it can be read from cover to cover or dipped into for specific topics.
Clinical Evidence Made Easy is helpfully structured into two main sections. The first provides the reader with the necessary skills underpinning evidence-based practice, the second gives invaluable tools for appraising different types of articles together with practical examples of their use. Moreover, the configuration within the sections makes for easy reading: common headings are used across chapters so that the reader quickly becomes familiar with the structure and the way ideas are presented.
This is a great book for busy clinicians who want to learn how to deliver evidence-based practice and have at their fingertips the tools to make sense of the burgeoning research literature. Indeed, it will also be valuable for those engaged in research, to aid the planning and delivery of their own projects.
Preface
This book is designed for healthcare professionals who need to know how to understand and appraise the clinical evidence that they come across every day.
We do not assume that you have any prior knowledge of research methodology, statistical analysis or how papers are written. However basic your knowledge, you will find that everything is clearly explained.
We have designed a clinical evidence appraisal tool for each of the main types of research method. These can be found in the second section of the book, ‘Clinical evidence at work’, and you can use them to help you evaluate research papers and other clinical literature, so that you can decide whether they should change your practice…
Harris, Michael; Taylor, Gordon; Jackson, Daniel. Clinical Evidence Made Easy. Scion Publishing. Kindle Edition.
TABLE OF CONTENTS
Understanding clinical evidence
1. The importance of clinical evidence
2. Asking the right questions
3. Looking for evidence
4. Choosing and reading a paper
5. Recognizing bias
6. Statistics that describe
7. Statistics that predict
8. Randomized controlled trials
9. Cohort studies
10. Case–control studies
11. Research on diagnostic tests
12. Qualitative research
13. Research that summarizes other research
14. Clinical guidelines
15. Health economic evidence
16. Evidence from pharmaceutical companies
17. Applying the evidence in real life
Clinical evidence at work
18. Asking the right questions
19. Choosing the right statistical test
20. Randomized controlled trials
21. Cohort studies
22. Case–control studies
23. Research on diagnostic tests
24. Qualitative research
25. Research that summarizes other research
26. Clinical guidelines
27. Health economic evidence
28. Evidence from pharmaceutical companies
29. Putting it all together…
WHAT COUNTS AS "EVIDENCE"?

More broadly, "evidence" is information (typically comprising lexical/discursive material and more structured alphanumeric "data") that makes a true conclusion more likely (or, more rarely, constitutes dispositive "proof").
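To make that definition concrete in Bayesian terms (the book never mentions Bayes, as we'll see below), here's a minimal Python sketch: a piece of "evidence" E raises the probability of a hypothesis H exactly when E is more likely under H than it is overall. All of the numbers are invented for illustration.

```python
# Minimal Bayesian-update sketch: "evidence" raises the probability of a
# hypothesis exactly when the evidence is more likely under that hypothesis
# than under its negation. Numbers are made up for illustration.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) via Bayes' theorem, with P(E) expanded by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return (p_e_given_h * prior) / p_e

prior = 0.10                                   # P(H): belief before the evidence
post = posterior(prior, p_e_given_h=0.80,      # P(E|H): evidence likely if H true
                 p_e_given_not_h=0.20)         # P(E|~H): unlikely otherwise
print(f"prior {prior:.2f} -> posterior {post:.2f}")   # prints: prior 0.10 -> posterior 0.31
```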
A "fallacy" is any assertion purporting to contain "evidence" but in fact does not. Fallacies are legion, both structural/formal, and "informal/rhetorical." Also worth noting here are the numerous "cognitive biases" that chronically afflict our ability to "reason" accurately. I have long been a student of this stuff, and spent a number of fun years teaching post-secondary "Critical Thinking."
THE SOAP PROCESS
Subjective - Objective - Assessment - Plan
Simple example here.
NOTE: My former Sup in the Meaningful Use program, Keith Parker, argued that "SOAP" should properly be "SOAPe" ("e" for Evaluation). Scroll down in this post. He's right.
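Just to make the structure explicit, here's a minimal sketch of a SOAP(e) note modeled as a record type. The field names and example text are my own hypothetical illustration, not anyone's standard schema.

```python
from dataclasses import dataclass

@dataclass
class SOAPeNote:
    """Hypothetical sketch of a SOAP(e) note as a record structure."""
    subjective: str       # patient-reported history and symptoms
    objective: str        # exam findings, vitals, labs
    assessment: str       # clinician's diagnostic impression
    plan: str             # tests, treatments, referrals, follow-up
    evaluation: str = ""  # Keith Parker's "e": did the plan actually work?

note = SOAPeNote(
    subjective="Three days of productive cough; no fever reported.",
    objective="T 37.2 C, lungs clear to auscultation.",
    assessment="Likely acute bronchitis.",
    plan="Supportive care; return if symptoms worsen.",
)
```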
A cute, brief YouTube SOAP note video:
"CHEIF COMPLAINT"? Lordy. Nonetheless...
A couple more of my graphic riffs on the process.
"SOAP Note" on the wiki.
I've made the point many times that there's a lot going on in the exam room, usually with insufficient time for deeply deliberative assessment, given the still-dominant economic regime of the "Productivity Treadmill."
BACK TO CLINICAL EVIDENCE MADE EASY
Search the text for "SOAP." Nothing. Search the text for "Bayes" and "Bayesian." Nothing.
(Nothing either for "exam," "differential," "rule out," "digital," "EMR," "EHR," or "electronic.")

"P value"? 23 hits; to wit:
The P value

The P value gives the probability of an observed difference having happened by chance.
P = 0.5 means that the probability of a difference having happened by chance is 0.5 in 1, or 50%.
P = 0.05 means that the probability of the difference having happened by chance is 0.05 in 1, or 5%. This is the level when we traditionally consider the difference to be sufficient to reject the null hypothesis.
The lower the P value, the lower the likelihood that the difference occurred by chance and therefore the stronger the evidence for rejecting the null hypothesis and concluding that the intervention really does have a different effect. As the P value that is normally used for this is 0.05, when P < 0.05 we can conclude that the null hypothesis is false… [op cit, pg 38]

Yeah. That's the way they continue to teach it. Way simplistic. First, a "p value" is itself a probability estimate, one that will show its own variability distribution across repeated trials. Second, the standard test assumes a perfectly Gaussian distribution (bell curve). See a 1996 ASQ newsletter column of mine, "Probability from 'C' to 'G'" (pdf).
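To illustrate that "p-values vary too" point, here's a small simulation sketch: rerun the identical two-group trial many times and watch the p-value bounce around. The effect size, sample size, and replication count are arbitrary choices for illustration.

```python
# Simulation of the "p-values vary too" point: run the same two-group
# experiment repeatedly and observe the spread of the resulting p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, true_shift = 30, 0.5            # per-group sample size; real group difference

p_values = []
for _ in range(1000):              # 1000 replications of the "same" trial
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_shift, 1.0, n)
    _, p = stats.ttest_ind(control, treated)
    p_values.append(p)

p_values = np.array(p_values)
print(f"median p: {np.median(p_values):.3f}")
print(f"trials with p < 0.05: {np.mean(p_values < 0.05):.0%}")  # i.e., the power
```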
I worked in credit risk modeling and management for five years (large pdf link). We never took p-values and distributional assumptions at face value. The name of the game was (and is) stress-tested expected value computations. We made successive record profits every year I was there. (Wrote about that time in my life here.)
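For the uninitiated, here's a toy sketch of what "stress-tested expected value" can look like: compute the probability-weighted payoff under base assumptions, then again under deliberately pessimistic ones, rather than trusting a single point estimate. All of the scenario numbers below are invented.

```python
# Toy sketch of a stress-tested expected-value calculation: compute the
# expected payoff under a base case, then re-run it with a deliberately
# fattened bad tail instead of trusting one point estimate.

def expected_value(scenarios):
    """Sum of probability-weighted payoffs; probabilities should sum to 1."""
    return sum(p * payoff for p, payoff in scenarios)

base = [(0.90, 120.0), (0.08, -200.0), (0.02, -1000.0)]      # (prob, payoff)
stressed = [(0.80, 120.0), (0.15, -200.0), (0.05, -1000.0)]  # fatter bad tail

print(f"base-case EV: {expected_value(base):+.1f}")      # +72.0
print(f"stressed EV:  {expected_value(stressed):+.1f}")  # +16.0
```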
In fairness, the authors do briefly cite one statistical test useful for "skewed data." But, just one simple example.

INITIAL SUMMARY TAKE
I've not read the book closely yet, but I have skimmed the chapters, and I like what I find therein. Every chapter closes with a "Putting it all together" paragraph or two. It's really about assessing the "external clinical evidence" originating beyond the exam room or patient bedside.
I am a regular at SBM, the "Science-Based Medicine" blog. You might like the search results there for "Evidence-Based Medicine."
There is a bit of pedantic nit-picking out there as to whether EBM differs materially from SBM. I don't think so. From the SBM site:
Interesting."Good science is the best and only way to determine which treatments and products are truly safe and effective. That idea is already formalized in a movement known as evidence-based medicine (EBM). EBM is a vital and positive influence on the practice of medicine, but it has limitations and problems in practice: it often overemphasizes the value of evidence from clinical trials alone, with some unintended consequences, such as taxpayer dollars spent on “more research” of questionable value. The idea of SBM is not to compete with EBM, but a call to enhance it with a broader view: to answer the question “what works?” we must give more importance to our cumulative scientific knowledge from all relevant disciplines."
Again, "Evidence" -- "that which makes a true conclusion more likely." It behooves us keep in mind that evidence itself spans a distribution, e.g.: "nil - weak - indeterminate - likely - dispositive." Gets even hairier when you add in "conjuncts" i.e., "given this and that, and that over there..." (just for starters).
I think about this stuff all the time. But what spurred this post in particular was this cool Atlantic article:
Yeah. Which returns me to this book I've been studying. Cited it earlier.
Below, another one I need to report on. Goes to the EBM thing.
Beyond those, a number of additional recent books inform my thinking (many of which I've cited on the blog before):
The Enigma of Reason
The Knowledge Illusion
The Distracted Mind
The Secret Life of the Mind
Touching a Nerve
World Without Mind
How to Think
Big Mind
Truth
The Book of Why
Snowball in a Blizzard
How Doctors Think
Thinking, Fast and Slow
Moral Tribes
More Harm Than Good
Changing Minds
Levers of Influence
Pre-suasion
How to Change Your Mind
Being Wrong

There are more, but this will do for now on the topics relating to cognition. My long-time abiding interest goes to improving diagnostic (and px/tx) reasoning via understanding and explicating the salient aspects of rational clinical cognition. Inextricably intertwined with this is an understanding of the relevant aspects of Health IT. To the extent that the latter impedes the former (poor UX), well (as many complain), it contributes to adversity.
ERRATUM
18 days 'til my heart surgery. Keep singing "woke up this mornin'..." 18 more times.
AUG 7TH UPDATE ADDENDUM
Apropos of continuing to wake up, I saw a post about this book over at THCB.
Amazon link here. Looks interesting. I am reminded of Ann Neumann's book The Good Death.
COMING SOON
Interesting. Stay tuned. Source: a WIRED article.
_____________
More to come...