...“[A]dverse drug reactions” are common: every year more than 2 million North Americans are hospitalized because of adverse reactions to prescription drugs. The reason why drugs work in some people and cause bad reactions in others can usually be traced back to differences in genetic makeup. Medicines that work for most people may not work for you. They may, indeed, harm you. So we need two things: first, we need ways of predicting and detecting disease well before it becomes life threatening; and second, we need medicines that work for you and your unique body.

A great read. At once informed, erudite, accessible, ethically introspective, and genial. Dr. Cullis has considerable cred.
Medicine has been trying to do these two things for millennia. While enormous progress has been made, it is still not good enough. And so it is that we are nearing the biggest revolution of our time — perhaps of all time.
This revolution has many names and guises. It is sometimes called personalized medicine, sometimes precision medicine, sometimes stratified medicine. It is a cousin of “evidence-based” medicine, a relatively new concept in medical practice. (Whoever came up with that name was clearly trying to make a point.) Whatever the name, what we will call personalized medicine — medicine based on the unique molecular makeup of our individual selves and a molecular-level understanding of whatever disorder we may have — is on our doorsteps...
Medical progress to this point has been mainly based on advances that benefit the population as a whole rather than you as an individual...
[A] family practice physician, on being asked how he selects the best drug for patients suffering from depression, answered, “Well, I have a dartboard hanging behind my door. It depends what number I hit.” There is no way for him to know in advance which drug will work best for which patient and which patient will suffer a nasty side effect. And so the patient and the doctor embark on a risky trial-and-error journey to find the best, most effective drug for him or her...
Your doctor sees the macroscopic version of you and, on the basis of a physical examination and your symptoms, can often diagnose what is wrong quite efficiently. But your doctor does not know much about the microscopic version of you, where disease and your responses to treatment are first manifested.
Your doctor does not know details of your genetic code and therefore cannot know how you will respond to a drug he or she may prescribe. Your doctor does not know the composition of molecules in your blood, which contain a huge amount of diagnostic information regarding diseases that you may have or be trending towards, whether the drugs you are taking are working to cure whatever disorder you are suffering from, or whether your diet is appropriate. Your doctor does not know the types and amounts of micro-organisms that are living in you and on you, which influence how well your immune system is working and play important roles in inflammatory diseases. In short, your doctor does not have access to a lot of important molecular-level information about you to guide many of his or her decisions. This can lead to wrong or delayed diagnoses and inappropriate therapeutic interventions.
The medicine of the future will be more personalized — and much more effective — because detailed molecular-level information about you and whatever disorder you may have is increasingly available...
This future is not very far away. [Kindle locations 44-147]
A nice follow-on perspective to this one I'd first cited in a prior post back in May.
I then cited "Biocode" again in my July 16th post "Personalized Medicine" and "Omics" -- HIT and QA considerations.
Let me return to my core concerns aired a month ago:
I have a couple of concerns. Docs often don’t have enough time TODAY to get through an electronic SOAP note effectively, given workflow constraints. Adding in torrents of “omics” data may be problematic, both in terms of the sheer number of additional potential dx/tx variables to be considered in a short amount of time, and questions of “omics” analytic competency. To that latter point, what will even constitute dx "competency" in the individual patient dx context, given the relative infancy of the research domain? (Not to mention issues of genomic lab QC/QA -- a particular focus that I will have, in light of my 80's lab QC background).
President Obama’s current infatuation with “Precision Medicine” notwithstanding, just dumping a bunch of “omics” data into EHRs (insufficiently vetted for accuracy and utility, and inadequately understood by the diagnosing clinician) is likely to set us up for our latest HIT disappointment -- and perhaps injure patients in the process.

Apropos, Dr. Cullis, Chapter 5:
THE TIPPING POINT, where medical practice will suddenly adopt the principles of personalized medicine, will be reached within the next five years. You will experience a memorable personal tipping point when you first get your genome sequenced, your microbiome analyzed, your metabolome assayed, and your proteome measured and then sit down with your doctor or wellness coach to discuss the implications of this very definitive data that is all about you. The data will be so precise and all-encompassing that it will show not only what may be wrong with you but also what you ate for breakfast yesterday and what type of dog you own. The impact of this information on you and the way you live your life will be greater than any other technological advance you have ever experienced. Remember when you started using Google and suddenly had access to all the information in the world and couldn’t imagine how you operated before? Or, for those who are old enough, remember the feeling you had when you started using email and suddenly had immediate communication with all parts of the world, for free? Or, for those of you who are really old, the first time you realized that having your own computer could actually be useful? Or, for those of you who, like the author, are positively ancient, the time you picked up your first hand-held calculator that could multiply and divide and take square roots? Well the personalized medicine revolution will trump them all.
The harbingers of the revolution are all around us.
Early versions of the digital version of you are starting to appear, although progress has been slow because of the enormous institutional, technical, and societal issues involved. The deeply conservative instincts of the medical profession have not helped either. The first manifestation of the digital you is (or will be) your electronic medical record (EMR) sometimes known as your electronic health record (EHR). The first attempts to introduce EMRs date back to the late 1960s; in the 1970s, the Department of Veterans Affairs had a working Computerized Patient Record System that established the ability of EMRs to reduce medical errors. Problems ranging from lack of standards, security concerns, and aversion to change prevented general adoption of EMRs until, with some frustration, President Obama pushed the Health Information Technology for Economic and Clinical Health Act into law in 2009. The legislation mandated a transition to EMRs for physicians and hospitals that treat patients covered by government insurance.
Still, it is surprising that in the U.S. in 2012 (the latest year for which records are available), only 72 percent of physicians used any form of electronic health record, ranging from just 54 percent in New Jersey to 89 percent in Massachusetts. In 2009, only 48 percent of physicians used EMRs. And right now, EMRs are not all that complicated. They consist of a digital store of your complete medical history, including medications and allergies, immunization status, laboratory test results, radiology images, vital signs, and personal statistics like age and weight. The lack of EMRs has meant untold duplication, errors due to incomprehensible handwriting, lack of knowledge about pre-existing conditions, and ongoing patient frustration with doctors who refuse to enter the digital age that the rest of us embraced twenty years ago. How many times have you been referred to a doctor, only to be asked the same questions all over again or be required to do a test that you’ve already done, simply because the doctor has no access to an electronic version of your medical history?
Some of the reasons for delay, it has to be admitted, are not the fault of the medical profession. Privacy is an enormous issue. Your EMR, because it is in electronic form, is susceptible to the same sort of hacking as any other personal data stored on your computer or by your credit card company or by your bank. Clearly you don’t want an insurer or employer to get hold of your medical record without your authorization. But if your bank can achieve a secure online system for you to conduct your financial transactions, why can’t one be created for your medical information? Regardless, the digitization dam that’s been holding back universal EMRs has clearly burst, and we are finally entering the digital age of medicine.
Assuming you have one, can you get hold of your own electronic medical record? You should be able to — after all, it’s all about you. But ownership can be complicated, and doctors who have purchased an EMR system may feel that information in it about you belongs to them. Often, a strange distinction exists: the doctor or hospital owns your medical record, but you own the data in your medical record. In any event, if your medical record is digitized, you should be able to get hold of a digital copy.
The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR to achieve a more complete digital version of yourself. Early signs of development of such personalized data clouds and their utility are being seen for a small number of individuals who have access to the sophisticated resources currently required in order to study themselves in detail at the molecular level... [op cit, Kindle Locations 882-920].
"The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR to achieve a more complete digital version of yourself."Yeah, OK, but we can't even get our wailing incumbent EHR vendors to agree to comply en masse with the functionally feeble Meaningful Use Stage 3 specs.
Then there's the issue of "omics" analytic competency.
So, you say, “That’s good so far, very impressive. I can see how we’ll generate all this information and store it, but you still haven’t told me how I’m going to use this damn data to prevent or cure my disease or make myself feel better.” Ah, yes — slight problem there. That’s where the bottleneck is, and any readers who want a well-paid, highly secure occupation for the next twenty years should become experts in bioinformatics, particularly as it pertains to interpreting the large datasets surrounding genomic, proteomic, and other “omic” information... [ibid, Kindle Locations 793-797].

Beyond the requisite Omics dx acumen (in short supply), I remain skeptical regarding the notion of adding "your genomic, proteomic, microbiomic, and all the other data to your EMR."
For one thing, we think we're having "interoperability" problems now? Lordy.
UPDATE: AN ADMONITION FROM ANOTHER SOURCE
The use of genomic sequence data in a clinical setting is truly a new phenomenon in this 21st century. In the year 1999, no human had ever had their genome sequenced; by 2009, seven individuals had had their genomes sequenced, and it has been predicted by some that the millionth person will have their genome sequenced in calendar year 2014. Although it is not clear if or when the remarkable uptake of this technology might level off, it is clear that this technology will increasingly affect the way medicine is practiced. All healthcare practitioners will be increasingly asked to put this [sic] data into an appropriate clinical context.

I haven't bought this book. Yet. I may. I used Dragon to talk the foregoing excerpt in.
Genomics has the potential to improve our approach to care in almost every aspect of medicine from cancer to infectious diseases.
We are moving into an increasingly DNA-first world of clinical medicine, where DNA sequence data will be available for decision-making prior to the patient’s visit to your office or hospital. In this DNA-first world, we will all need point-of-care decision support, and we hope in these early years of genomic medicine you will find this book to be useful in your decision support needs.
Importantly, enthusiasm about genomic medicine should not lead to an abandonment of the classic tools of clinical medicine that are needed to inform care. First and foremost among these tools are the history and physical examination. The importance of environmental influences on health must not be overlooked. While this text gives great attention to the “nature” side of the nature versus nurture equation, there are many other sources of important information on the role of environment in all diseases including that which is considered as “genetic”.
The evolving role for family health history
Family health history has been used in clinical medicine for generations as a proxy for genetic information in efforts to predict disease risk in patients. In contrast to its previous application, in the era of DNA-first genomic medicine, family health history will increasingly be used to add context to the DNA. Since every patient will be revealed to have rare genetic changes, so rare that they may only exist within their immediate family, the health history of others in the family who share these changes will be necessary in order to gain insight into how these changes will affect health. It will only be through knowing how these changes played out for the patient’s immediate relatives that we will be able to interpret some sequence variations in the patient who is being seen. It is because of this “interpretive need” that we will ultimately require detailed family histories that are annotated with DNA sequence variant information...
It's rather expensive. But, if you surf through the Amazon Preview chapter topic listings, you can see how a clinician would find it useful. A principal takeaway for me is that a truly effective "genetic counselor" might also have to be an MD. And, we might ask, will such "Omics" curricula become a requisite component of physician training -- or, will it become the domain of yet another specialty subset (or, more troubling, some sub-MD pay-to-play "online certificate" or otherwise for-profit diploma-mill holder)? I seriously doubt that my urologist did anything other than take my OncoType dx genetic assay prostate cancer report at face value.
Just like my RadOnco docs likely (necessarily) did little to nothing beyond reading the radiologists' "impression" narrative summaries for my CT, MRI, and bone scans.
__
"The next step will be to add your genomic, proteomic, microbiomic, and all the other data to your EMR"
Yeah, but, beyond workforce capacity and dx acumen, what about the chronic, persistent data silo/opacity issue? A recent THCB post asks
What’s the Definition of Interoperability?

Seriously? We already have it. The IEEE definition. As I commented at THCB:
We already HAVE a concise definition of “interoperability,” via the IEEE: “interoperability: Ability of a system or a product to work with other systems or products without special effort on the part of the customer. Interoperability is made possible by the implementation of standards.”

Responding to the interviewee in the THCB post, I quoted him and offered a response.
This other stuff is merely about “data exchange.” What is happening is that we’re “defining interoperability down,” removing the “without special effort” part. Were there a Data Dictionary Standard, then we could talk about interoperability. Data are called “the lifeblood of health care.” Fine. Think Type-O, the universal blood type, by way of precise analogy.
Maybe the API will comprise data exchange salvation. Maybe.
“One of the key ways we build good support systems is by having good data. It’s a “garbage in – garbage out” problem. One can’t make good decisions without good data. One of the problems is that a lot of the time data exists but it isn’t in the computer or isn’t in MY computer. Maybe someone has had a test somewhere else and I might not have any info, or if I’m lucky I might have a scanned PDF of the results. But it’s rarer still that I’ll have good, structured data that I’ve been able to pull in from outside sources without a lot of transcription or effort. So I became very interested in this problem of interoperability and have been doing a range of different kinds of work. Some of it actually focuses on how you do decision support, in the cloud or across systems. So the question becomes: how can you build a decision support system that spans several electronic health records and integrates data from multiple sources to make more accurate suggestions for patient care?”

It's an excellent post, albeit replete with the usual obtuse "interoperababble" we have all come to know and love. On the upshot of Meaningful Use:
__
By having a comprehensive standard data dictionary. Absent that, for each of your “n” data variables you need on the order of n(n-1) translative “interfaces” across n systems (if you’re after computable “structured data” rather than document-centric reports).
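A quick back-of-the-envelope sketch of the combinatorics behind that claim (the system counts below are arbitrary, and the tally covers only the system-to-system hops; multiply by the number of data variables if each one has to be mapped separately):

```python
def point_to_point_interfaces(n_systems: int) -> int:
    """Directed translation interfaces needed when every system maps to every other."""
    return n_systems * (n_systems - 1)

def dictionary_mappings(n_systems: int) -> int:
    """Mappings needed when every system maps once to a shared standard data dictionary."""
    return n_systems

for n in (5, 50, 500):
    print(n, point_to_point_interfaces(n), dictionary_mappings(n))
# 5 systems: 20 vs. 5; 50 systems: 2,450 vs. 50; 500 systems: 249,500 vs. 500
```

The point-to-point count grows quadratically while the dictionary-mediated count grows linearly; that is the whole argument in two functions.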
[Adam Wright] We were going to create these strong incentives for people to adopt EHRs, knowing that EHRs were not yet perfectly interoperable or even always perfectly usable and didn’t have all the functionality that we wanted. And now we’re trying to go back and patch that. The thing is, we now have had a lot of opportunity to learn how these EHRs, which were developed with large hospitals or academic systems in mind, really work in a critical access hospital or in a single-doctor practice. So we’ve learned, and I think the key is going to be to translate what we’ve learned into concrete improvements. But I think that’s been hard. I talk to some of my friends who are vendors and they’ve said, “A lot of people are giving us feedback and we’re working on it as fast as we can, but at the same time we’re getting a lot of pressure from Meaningful Use. So we can’t even use our best developers to build the stuff our customers are asking for.” So I think some way of fixing how we do innovation in health IT is going to be important, and I don’t know exactly how we’ll do it, given how many competing priorities there are.
I absolutely think that seeing a complete picture of a patient’s information is key for safety and I do think that lack of interoperability is 100% a safety issue. It’s something that we need to work on. But we need to get beyond the “unconscious patient in Wyoming.” I think that there’s so many more complex, subtle and insidious issues. The thing is that it’s often hard to measure. It’s hard to say “this is the one piece of information that changed my mind”. But I do think complete information like that is going to be very important. I’ll also offer you a flip side that I don’t really have an answer for yet: How much information am I responsible for viewing about a patient? Somehow I now have every piece of information about a patient from the moment of birth to the present. That might be more information than I can review before my brief visit with a patient. Part of the solution to that is going to be in technology or tools that will help me summarize the information, spot key information, spot trends etc. I think it’s going to be really exciting when we have that problem and have to build those tools. I look forward to the day when we have so much information that we really need sophisticated tools to organize and sort through it. Right now we’re a long way from that. But I think you’re 100% right that it’s a safety issue.
At the conclusion of the THCB post:
LK: So the last question is something Dan Monro brought up. Dan did a three-part series on interoperability on the site I write for called HL7 Standards.com. Essentially if you don’t have a patient identifier, then interoperability is a waste of time. He alluded to the idea of patient identifiers being something like a social security number in that they’re kind of old-school. There’s better ways to do it with cryptography and being able to ID people biometrically. So what can you tell us about patient identifiers?
AW: I think it’s awfully important. It’s certainly the case that when I have a database and you have a database and we want to link them together, it matters that we have a key so that we can tell who is the same person. The approach right now of using a social security number has problems. Not everyone has an SSN, not everyone remembers theirs, there are errors, there’s no way to validate them, etc. Using something like your Blue Cross member number is no good either, because you can get another job. So I think the solutions we have now are rotten. We need some way to identify patients across systems. The most commonly proposed, and probably simplest or most parsimonious, solution is a national patient identifier: an enumerator who sits in the government and assigns everyone a number shortly after birth, and that’s their number. That is just not going to fly palatably. I just can’t see us creating the political will, and I also am not sure that it is that desirable. Travelers could come to the country and not have numbers, or Americans could go elsewhere, and how will this all work? It seems problematic. But I think there are smart ways we could use technology to approximate that. For a long time we have had probabilistic record-linking approaches where we look at your name, date of birth, address, age, sex, etc., and try to figure out what is the probability that these two people are the same, and we’ve had some pretty good results. The Indianapolis Network for Patient Care has had some pretty good results there, using probabilistic linking rather than an identifier. The reality is that if we can put the patient at the center of this and create a credential and authentication that they control, I think that would be a lot more palatable than if we put some sort of central government number assigned to people...
I think there’s probably a lot we can learn from internet authentication about how to create reliable patient identifiers: better identifiers, with more security around them. The patient could see more of their information and how it was shared. I just think that there are better solutions than a single government-based national patient identifier. And even if it’s the best solution, I just don’t think it’s politically possible. So I think we ought to be focusing our efforts on something else.
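For the curious, here is a toy Python sketch of the probabilistic record-linking idea Wright describes. The field weights and the match threshold are invented for illustration; real matchers (Fellegi-Sunter style) estimate such weights from the data and layer on fuzzy name comparison and human review of borderline scores.

```python
# Field agreement weights and the threshold are made up for illustration only.
FIELD_WEIGHTS = {"last_name": 4.0, "first_name": 2.0, "dob": 5.0, "sex": 1.0, "zip": 2.0}
MATCH_THRESHOLD = 8.0

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Add the weight when a field agrees, subtract it when it disagrees."""
    score = 0.0
    for field, weight in FIELD_WEIGHTS.items():
        a, b = rec_a.get(field), rec_b.get(field)
        if a is None or b is None:
            continue  # missing data contributes nothing either way
        score += weight if str(a).strip().lower() == str(b).strip().lower() else -weight
    return score

hospital_rec = {"last_name": "Smith", "first_name": "Jan", "dob": "1970-03-02", "sex": "F", "zip": "46202"}
clinic_rec = {"last_name": "Smith", "first_name": "Janet", "dob": "1970-03-02", "sex": "F", "zip": "46202"}

score = match_score(hospital_rec, clinic_rec)
print(score, "same person" if score >= MATCH_THRESHOLD else "no link")
```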
SO, WHAT MIGHT COMPRISE THE "SOMETHING ELSE"?
Enter the "cryptocurrency" model?
WHAT!? you say? Stay with me.
To wit, "YouBase."
For those who really want to get their geek on.
Abstract

ePHI "controlled by the individual"? Yeah, Bushkin's "Medkaz" riff comes to mind.
YouBase enables individuals to create and maintain a personal data store on a distributed public network, allowing the unprecedented ability to easily gather, analyze, and share private data for any purpose imaginable. Data is structured hierarchically so that increasingly identifiable data can be placed at levels closer to the root, allowing arbitrarily anonymized data to be shared with whomever is requesting access to it. The data format is flexible, enabling easy integration with third parties. In addition, read-only or read/write access can be granted at any node in the tree, allowing the user to tightly control access to every subtree in the data store. YouBase thus provides the building blocks for the ultimate peer-to-peer central repository for private data, enabling individuals, organizations, and the world to make smarter decisions.
Introduction
Cryptography combined with distributed applications and databases in peer-to-peer networks provide the fundamental building blocks required for securing stores of individual-centered digital property in an open standards-based manner. By using encryption, digital signatures, digital wallets, and distributed data, ownership of digital information can be managed in a decentralized store. Such a store will be simultaneously secure and private, with strong identity services, while also available anywhere.
Information and rights to that information will ideally follow an individual as she moves through various contexts in her daily life, enabled with the ability to provide trusted, verified identity within those contexts. A longitudinal record could be created, including consumer-generated application data, with the individual as the primary controller of access - all independent of a third party.
YouBase provides an individual-centric security structure that separates personal data from identity while allowing for secure and structured read and/or write access to trusted parties on a peer-to-peer data store. This structure provides several benefits, including:
- a way to securely input, access and share any kind of file or record
- a way to organize authorized access to information into a structured hierarchy
- improved anonymous information sharing that could be used as a public or shared private asset
- information sharing transactions can be tied to financial transactions

With these tools in place, we imagine a world where, rather than storing personal data, third parties could simply subscribe to data owned and controlled by the individual.
Cutting to the chase of the YouBase paper:
Conclusion

Sounds great. What's not to love? If you take all this gauzy "secure, seamless interop" stuff uncritically. The "HD Wallet" is unvarnished Bitcoin crypto.
In summary, using an HD Wallet to access a personal data store provides multiple benefits:
- Data agnostic. Securely input, access and share any kind of file or record by providing keys and digital signatures.
- Structured access. A way to organize authorized access to information into a structured hierarchy.
- Authenticated data and proven identity without storing third party personal information.
- Blockchain available, but not dependent. Information sharing transactions can be tied directly to a universal ledger as a public proof and/or as a financial transaction, but are not required to be. Wallets can be used solely as structured access to a data store, even though wallets are valid bitcoin addresses.
- Encapsulation. Because each data element has a unique address, any one breach can be insulated from an attack on another.
- Privacy. Without a high-level key, no one element of data can be tied to any other. With this level of data anonymity, more data can be donated by users for use in various forms of data commons.
- A universal set of addresses. Using a public-private key hierarchy for data access provides for a universal address to read or write information based on an individual’s HD Wallet. A physician's office could send secure patient information to fulfill Meaningful Use requirements, for example.
- Longitudinal tracking. All transactions are time-stamped, so all records can become a longitudinal record.
- Security. Encryption of data by default.
- Personal data as a service. Opens the door to have an API to share what you want with whom you want.
Structured data

Architecturally, I am reminded a bit here of my 80's lab days in Oak Ridge. Once we'd progressed to having a Netware LAN, we bought a LIMS package (Laboratory Information Management System) that was written in the "System J" language (basically a grab-bag mishmash of purloined elements of C, Fortran, Basic, Cobol, APL, Pascal, etc). It was ostensibly totally end-user modifiable and customizable. The database was not RDBMS, but rather a "hierarchical" node/stem/leaf "forms-based" construct.
The core of the YouBase solution is a new kind of BIP32 hierarchical deterministic wallet or "HD Wallet" tree for the control of access to personal data stores. YouBase's wallet implementation follows the recommendations in BIP43 for extending BIP32, and can thus be thought of as BIP46, following the examples set in BIP44 and BIP45.
The HD Wallet contains a tree structure with extended keys such that each parent key can derive the children keys, children keys can derive the grandchildren keys, etc. An extended key consists of a private or public key and a chain code. Sharing an extended key gives (private or public) access to the entire branch. A useful application is that a user can provide an extended private key to a trusted source that can then write (deposit) information in that branch of the tree without having access to information in other branches.
Providing public/private key pairings in such a structure offers a number of benefits. First, specific branches can be used as data stores for specific types of information. Information secured in wallets can follow a pre-specified structure where different kinds of information are stored in different branches of the tree.
Second, along with that data structure, permissions are structured such that different parties can have different read/write rights to single nodes or to entire branches. Data can be partitioned into separate branches, allowing users to grant access on a granular level, down to the individual record, or even changes to a record. Specifically, a private key has read/write access to the data while a public key has read access only.
Third, HD Wallets are flexible in that they can create sequences of public keys without having access to the private keys, so that read-only or receive-only permission can be granted in less secure environments without risking access to the private keys. From the outside (with access to a public key), there is no indication that the key is part of any larger structure. It becomes a bitcoin address like any other...
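To make the branch idea concrete, here is a deliberately simplified Python sketch of hierarchical key derivation. It illustrates only the tree structure; real BIP32 derives secp256k1 key pairs and distinguishes hardened from non-hardened children, none of which is modeled here, and the seed and branch assignments are hypothetical:

```python
import hashlib
import hmac

def derive_child(parent_key: bytes, parent_chain_code: bytes, index: int):
    """Derive a child (key, chain code) from a parent 'extended key'.
    Toy illustration of the tree only; not actual BIP32 arithmetic."""
    data = parent_key + index.to_bytes(4, "big")
    digest = hmac.new(parent_chain_code, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # (child key, child chain code)

# Master extended key for the whole personal data store (hypothetical seed).
master_key, master_chain = derive_child(b"example-seed-entropy", b"\x00" * 32, 0)

# One branch per category of records: branch 0 for labs, branch 1 for imaging.
labs_key, labs_chain = derive_child(master_key, master_chain, 0)
imaging_key, imaging_chain = derive_child(master_key, master_chain, 1)

# Handing someone the 'labs' extended key lets them derive every key beneath
# that branch (lab record 0, 1, 2, ...) without exposing the imaging branch.
lab_record_keys = [derive_child(labs_key, labs_chain, i)[0] for i in range(3)]
print([k.hex()[:8] for k in lab_record_keys])
```

The property being leaned on is exactly the one the paper describes: handing out an extended key grants access to a whole subtree and nothing outside it.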
We eventually ditched it and went full-bore Oracle RDBMS, first on a PC dumb-terminals client/server LAN running Unix (SCO Xenix), then, having scrapped that as well, on a DEC 3100 MicroVAX. Most of my bench-level apps work was xBase, with specialty library calls; all this other stuff just gave me whiplash. I moved on in 1991. Others toiled on with the endless LIMS project. I don't think it ever bore the fruit we'd all envisioned. Part of the problem was one that endures today. Lab managers and their staffs wanted to more effectively and efficiently manage lab throughput -- i.e., workflows. Corporate Suits wanted to surveil "productivity," in terms of money.
Sound familiar?
"YouBase is designed to provide a substrate on which any individual-centric service can be built."
Healthcare

Be interesting to see whether this gets any traction in the healthcare space. Color me skeptical at this point. Securely, accurately, and efficiently moving around the hundreds to thousands of ePHI variables (many of them multi-encounter/longitudinal) connected to a given patient record differs materially from paying for CDs (or dope) with Bitcoins.
There are a number of widely-known problems in health IT that an individual-centric data store could help solve.
- Security. Breaches in health care have become all-too-common because of the high value to identity thieves. A premium is paid on the black market for such records, estimated at $50 per record. In the US, in the first quarter of 2015 alone, nearly 90 million medical records have been compromised, including identity, clinical and financial information. At least part of the problem is misaligned incentives between third parties and patients. Providing a repository of data independent of a third party could provide a framework where personal data is provided on a subscription basis, rather than stored in multiple locations by multiple third parties.
- Access. At the same time, it is difficult for a person to access and manage personal information currently resting with third parties, which recently led to the #NoMUWithoutME and getmyhealthdata.org campaigns for personal health information access. Ideally, each person will have a private, universal and secure container of their own medical data that serves as a reference for health care stakeholders.
- Interoperability. YouBase is data agnostic and could act as a single source for data, shared as needed and controlled by the patient, independent of data types. Data within the store could be translated between the various profiles. Any kind of data profile can be accepted and defined in the schema, so there's no need to pre-define the data structure before receiving the payload.
- Research. Consumers can anonymously donate their validated data with minimal metadata or personal information. Consumers can have a personal container to directly connect their validated clinical and claims data, or provide this validated information to an external party.
- Sending records from provider to patient. A set of universal addresses will allow for secure transmission of a patient record, simply by scanning a public address for which the health care provider has a private key.
- Identification at point of care. YouBase can provide validated identity at the point of care through IDs, unique addresses and digital signatures, while maintaining privacy. Digital signatures will improve data quality.
For example, a user could enter a lab and show that his identity matches with a traditional ID, or simply present his YouBase token that has been signed. The public key token is then verified as belonging to the user and documented in the system. The phlebotomist takes a blood sample, notes the date/time, and the sample is then permanently associated with the user's token/private key. The sample is processed using the lab's existing system. The results (data) are sent to the user's YouBase wallet and can only be opened/viewed by the token/secure-key.
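The signing-and-verification step in that scenario is ordinary public-key cryptography. Here is a minimal sketch, assuming Python's third-party cryptography package and using an Ed25519 key pair as a stand-in for the "YouBase token":

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The patient's key pair stands in for the YouBase token in the scenario above.
patient_key = Ed25519PrivateKey.generate()
patient_token = patient_key.public_key()      # what gets presented at the draw station

# Patient signs a check-in message; the lab verifies it against the presented token.
check_in = b"specimen:12345|drawn:2015-08-20T09:30"
signature = patient_key.sign(check_in)

try:
    patient_token.verify(signature, check_in)  # raises InvalidSignature on mismatch
    print("Identity verified; results get addressed to this token")
except InvalidSignature:
    print("Signature does not match the presented token")
```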
This could be applied to many health care transactions, creating validated identity and a universal set of addresses to which patient information could be signed. Using this kind of system will also improve data quality as each data entry will require a digital signature linked to the person who entered the information.

These are just a few examples. Our goal here is not to identify every use for the YouBase platform but to provide a starting point around which new ways of using trusted identity and privacy to manage health information can be imagined.
We'll see.
IN CLOSING, BACK TO PIETER CULLIS
Chapter 7:
THE GENIE IS OUT OF THE BOTTLE

SO, WHERE ARE WE? In the previous six chapters, you have seen how the development and application of modern science over the last 400 years have led to an understanding of everything from planetary motion to the innermost workings of the cells in your body. You have learned about many of the bits and pieces that you’re made of, what they do, and how you can measure them, resulting in the “molecular you.” We have seen how the advent of the digital age allows us to store all this information electronically, and how analysis of the “digital you” embodied by this massive data cloud can identify biomarkers that provide an incredibly accurate picture of your state of health and disease. Remote-sensing devices can now analyze every breath you take and every beat of your heart and alert you well before you pass your best before, or rest in peace, date. Through social media, you will soon be able to share these intimate details with sympathetic listeners suffering from disorders just like yours, compare your digital selves to find the most effective therapies that should work for you, and locate where these are available. Taken together, these advances are driving massive changes in the practice of medicine as we know it today. But that’s only the beginning of the potential disruptions molecular medicine may cause...
The rapid advances in accurate diagnostics that can be anticipated in the near future, and their availability to you, will challenge the medical establishment’s role as the gatekeeper of medical advances, because knowledge and power will be passed to you, the consumer. For example, the current fifteen-year wait time for a new medical advance to reach the doctor’s office — to reach you — will not be tenable when you know with certainty what disorder you have, and have done your research as to which therapy is most appropriate to you and where it is available.
Certainly, personalized medicine will be creating havoc within the medical profession. The role of doctors in making diagnoses will be increasingly supplanted by computer analyses of the digital you. Accurate diagnoses combined with advanced imaging techniques and analyses of genomic and other data in the digital you will mean that safe and effective treatments will be readily identified. Thus the role of doctors will be in transition, as it has been for some time. Fifty years ago, doctors cared for people who were really unwell: 80 percent of their job was looking after the dying or seriously ill. Today, the treatment of chronic disease has become the norm. Care of type-2 diabetes, high blood pressure, arthritis, and cancer survivors takes up the majority of time. As these chronic disorders become increasingly controlled through a molecularly based, personalized approach, and as diagnosis and treatment are largely decided by analysis of the digital you, only complex and severe problems will need the doctor. So what will doctors be doing?
Two scenarios are possible. Those who do not have access to a doctor — or advanced health care — will find the playing field dramatically leveled. Relatively inexpensive omic data — potentially in the range of $100 for a complete analysis — and free online analyses available through the Internet will enable patients around the world to access state-of-the-art diagnostic resources. This information, combined with Internet searches and social media such as PatientsLikeMe, CureTogether, and other disease-specific websites, will also make it possible for you to discover what the most appropriate treatment is and where it is available... [op cit, Kindle Locations 1732-1810].
Personalized Medicine" - will Health IT be up to the task?Not a lot of time to be wasting, with tsunamis of Omics data approaching.
UPDATE
Apropos of the increasing need for data exchange/"Interoperability" and the YouBase proffer, some interesting thoughts in Dr. Carter's latest post at EHR Science:
Future-Proofing Clinical Care Systems—Because, Well, Change is Constant

'eh? Modular EHR components as, in effect, plug & play "apps" that play nice together? Could such comprise the mature evolution of the still-tentative "API"? Read the entire post. Jerome Carter never disappoints.
by JEROME CARTER on AUGUST 17, 2015
Everything changes. The workflows from last year may not exist next year. There will be new reporting requirements, new data elements, new forms, and new coding systems. Creating software that grows gracefully with the times takes serious architectural thinking from the outset...
It is no accident that current systems tend to be monolithic and hard to change without additional programming. As with most things in life, creating modular clinical systems is easier said than done. More importantly, modularity and configurability MUST be design goals from the very beginning, as they are not easily added after the fact...
Consider something as basic to EHR systems as the problem list. Problem list functionality could be provided in any number of ways. The simplest would be to treat the list as an array/collection of terms with start/stop dates. In a relational database, such a list would be stored in a single table. At the user interface, rendering of the list could be done using a simple grid, a multi-line text box, or be printed plainly in a window. Here we have the list in three modalities: in memory, in a data store, and presented on screen. In each instance, the problem list is simply treated as data—a series of values to be stored or displayed.
However, what if we decide to make the problem list (PL) a component? We know what the information role of the problem list is for users, but what is the functionality of a PL computationally? Here the question becomes what role the PL plays as a component of the EHR beyond its role as an information source for the user.
Now, consider this scenario. Chronic renal disease is added to the PL. Should there be a function in the PL component that automatically checks meds to suggest dosage adjustments? Should a PL component have a function that scans new labs to look for undiagnosed problems?
Treating a PL as a component that controls all diagnosis/problem functions requires a completely different software design than one that acts as a list and simply records/presents what the user has entered. Now we are considering functions that happen independently of any user interaction with the PL. Going a step further, let’s make the PL swappable (i.e., third-party vendors could sell a PL component that snapped into the EHR). Providing users the ability to swap out components would require yet another architectural adjustment...
The second version of modularity allows one to add a PL module to an EHR system in a way similar to adding a new app to a smartphone. This second version would allow updates to be installed after the EHR system is up and running. Imagine how owning an EHR would be when one could buy the PL from one company and the medication list from another. Don’t like the labs display? Swap it! Both versions of modularity are helpful, but the latter also prevents users from being tied to the innovativeness, or lack thereof, of any particular vendor...
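As a thought experiment on Carter's distinction, here is a minimal Python sketch of a problem list treated as a component rather than as a passive table. Everything in it (the class names, the listener hook, the example ICD-10 code) is hypothetical, intended only to show behavior that fires independently of any user interaction with the list:

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable, List, Optional

@dataclass
class Problem:
    code: str                      # e.g. an ICD-10 or SNOMED code
    label: str
    onset: date
    resolved: Optional[date] = None

class ProblemListComponent:
    """A problem list as a *component*: it stores the list, but it also notifies
    registered listeners (medication checks, lab scanners, UI panels) whenever
    the list changes -- behavior a plain data table would not have."""

    def __init__(self) -> None:
        self._problems: List[Problem] = []
        self._listeners: List[Callable[[Problem], None]] = []

    def register_listener(self, callback: Callable[[Problem], None]) -> None:
        self._listeners.append(callback)

    def add(self, problem: Problem) -> None:
        self._problems.append(problem)
        for notify in self._listeners:      # e.g. trigger a renal dosing review
            notify(problem)

    def active(self) -> List[Problem]:
        return [p for p in self._problems if p.resolved is None]

# Hypothetical wiring: a dosing-review hook fires when chronic kidney disease is added.
def renal_dosing_review(problem: Problem) -> None:
    if problem.code == "N18.3":             # illustrative ICD-10 code for CKD stage 3
        print("Flag current medications for renal dose adjustment")

pl = ProblemListComponent()
pl.register_listener(renal_dosing_review)
pl.add(Problem(code="N18.3", label="Chronic kidney disease, stage 3", onset=date(2015, 8, 1)))
```

Swappability, the second version of modularity Carter describes, would then hinge on third-party modules implementing the same small interface.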
__
CODA
Is our "interoperability" obsession a case of Tail-Wags-Dog? See my post "Post-HIMSS15 Interoperababble Update: Margalit Gur-Arie hits one out of the park."
___
More to come...