
Tuesday, June 20, 2017

#ShowMeTheBill


The GOP is still trying to sell the lame canard that "ObamaCare" was written and passed "in secret" by Democrats with no Republican input. That's simply not true. I followed every draft of the legislation as it wound its way through Congress. See my 2009 post "Public Optional."

It passed with no GOP votes, yes, but it did contain 147 GOP amendments, and had been the subject of many hours of hearings spanning a year.
"You're going to have such great health care, at a tiny fraction of the cost—and it's going to be so easy.” - Donald Trump, Oct 2016 rally in Florida
apropos of the federal health policy debate, see my prior post Rationing by "Price."
Just got my Miraca Life Sciences bill stemming from a recent local dermatology visit biopsy for a chronic arms and torso rash I've had for about 30 years. The "Billed Charges" total came to $1,565.00. Medicare paid $439.39. Part-B "Patient Amount due" was $112.11 (I paid it immediately). Were Trump HHS Secretary Tom Price to get his "balance billing" way, I'd be on the hook for an additional $1,013.50.
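For illustration, here's that arithmetic in a minimal Python sketch (the dollar figures are the ones from my bill above; "balance billing" exposure is simply the billed charges minus what Medicare and I together paid):

```python
# Balance-billing exposure sketch, using the figures from my Miraca bill.
billed_charges = 1565.00   # total "Billed Charges"
medicare_paid = 439.39     # Medicare Part B payment
patient_paid = 112.11      # my Part B "Patient Amount due"

paid_total = medicare_paid + patient_paid              # what actually got paid
balance_bill_exposure = billed_charges - paid_total    # what balance billing would add

print(f"Paid in total:         ${paid_total:,.2f}")
print(f"Balance-bill exposure: ${balance_bill_exposure:,.2f}")  # -> $1,013.50
```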
See also my prior posts on "An American Sickness." Buy her book. Also, stay up with the #ShowMeTheBill hashtag activity here.

Worth re-posting a quick graph I did a little while back.



The core question remains one of how we pay for health care equitably.

JUNE 22 UPDATE


Just Google "H.R. 1628." 132 pages in my PDF download. 36 allusions to the administrative/regulatory discretion of "the Secretary" (HHS Secretary Tom Price -- e.g., "as the Secretary shall determine..." "...as determined by the Secretary"). The words "quality," "improvement," and "affordability" are nowhere to be found. "Subsidy"? How about "SEC. 131. REPEAL OF COST-SHARING SUBSIDY." (pg. 59)

Hmmm... "Sec. 117. Permitting States to apply a work requirement for nondisabled, non-elderly, nonpregnant adults under Medicaid."

Wonkistan will be busy today with reactions. News reports of protests and arrests outside Senator Mitch McConnell's office.
"FEHB" is not cited in the bill. FEHB is the program that gives members of Congress a 70% taxpayer subsidy on their health care premiums.
MY LATEST PHOTOSHOP MOMENT



Again, see my prior posts reviewing An American Sickness.

Let's see, the usual stuff: "A third to half of health care is wasteful, unnecessary, and even harmful, blah, blah, blah..." Not to summarily discount all of that widely repeated assertion, but we are not gonna Lean Process QI our way to macro expenditure reductions getting even close to that. Similarly with respect to more accurate dx's and more efficacious tx's. Health care will likely remain a vexingly expensive endeavor. I first addressed this stuff in 2009:
Some reform advocates have long argued that we can indeed [1] extend health care coverage to all citizens, with [2] significantly increased quality of care, while at the same time [3] significantly reducing the national (and individual) cost. A trifecta "Win-Win-Win." Others find the very notion preposterous on its face. In the summer of 2009, this policy battle is now joined in full fury.
BTW, per this post, I should add that I don't buy the claim that everyone in the space is "losing money."

FORMER PRESIDENT OBAMA REACTS ON FACEBOOK
Our politics are divided. They have been for a long time. And while I know that division makes it difficult to listen to Americans with whom we disagree, that’s what we need to do today.

I recognize that repealing and replacing the Affordable Care Act has become a core tenet of the Republican Party. Still, I hope that our Senators, many of whom I know well, step back and measure what’s really at stake, and consider that the rationale for action, on health care or any other issue, must be something more than simply undoing something that Democrats did.


We didn’t fight for the Affordable Care Act for more than a year in the public square for any personal or political gain – we fought for it because we knew it would save lives, prevent financial misery, and ultimately set this country we love on a better, healthier course.


Nor did we fight for it alone. Thousands upon thousands of Americans, including Republicans, threw themselves into that collective effort, not for political reasons, but for intensely personal ones – a sick child, a parent lost to cancer, the memory of medical bills that threatened to derail their dreams.


And you made a difference. For the first time, more than ninety percent of Americans know the security of health insurance. Health care costs, while still rising, have been rising at the slowest pace in fifty years. Women can’t be charged more for their insurance, young adults can stay on their parents’ plan until they turn 26, contraceptive care and preventive care are now free. Paying more, or being denied insurance altogether due to a preexisting condition – we made that a thing of the past.
We did these things together. So many of you made that change possible.

At the same time, I was careful to say again and again that while the Affordable Care Act represented a significant step forward for America, it was not perfect, nor could it be the end of our efforts – and that if Republicans could put together a plan that is demonstrably better than the improvements we made to our health care system, that covers as many people at less cost, I would gladly and publicly support it.


That remains true. So I still hope that there are enough Republicans in Congress who remember that public service is not about sport or notching a political win, that there’s a reason we all chose to serve in the first place, and that hopefully, it’s to make people’s lives better, not worse.


But right now, after eight years, the legislation rushed through the House and the Senate without public hearings or debate would do the opposite. It would raise costs, reduce coverage, roll back protections, and ruin Medicaid as we know it. That’s not my opinion, but rather the conclusion of all objective analyses, from the nonpartisan Congressional Budget Office, which found that 23 million Americans would lose insurance, to America’s doctors, nurses, and hospitals on the front lines of our health care system.


The Senate bill, unveiled today, is not a health care bill. It’s a massive transfer of wealth from middle-class and poor families to the richest people in America. It hands enormous tax cuts to the rich and to the drug and insurance industries, paid for by cutting health care for everybody else. Those with private insurance will experience higher premiums and higher deductibles, with lower tax credits to help working families cover the costs, even as their plans might no longer cover pregnancy, mental health care, or expensive prescriptions. Discrimination based on pre-existing conditions could become the norm again. Millions of families will lose coverage entirely.


Simply put, if there’s a chance you might get sick, get old, or start a family – this bill will do you harm. And small tweaks over the course of the next couple weeks, under the guise of making these bills easier to stomach, cannot change the fundamental meanness at the core of this legislation.


I hope our Senators ask themselves – what will happen to the Americans grappling with opioid addiction who suddenly lose their coverage? What will happen to pregnant mothers, children with disabilities, poor adults and seniors who need long-term care once they can no longer count on Medicaid? What will happen if you have a medical emergency when insurance companies are once again allowed to exclude the benefits you need, send you unlimited bills, or set unaffordable deductibles? What impossible choices will working parents be forced to make if their child’s cancer treatment costs them more than their life savings?


To put the American people through that pain – while giving billionaires and corporations a massive tax cut in return – that’s tough to fathom. But it’s what’s at stake right now. So it remains my fervent hope that we step back and try to deliver on what the American people need.


That might take some time and compromise between Democrats and Republicans. But I believe that’s what people want to see. I believe it would demonstrate the kind of leadership that appeals to Americans across party lines. And I believe that it’s possible – if you are willing to make a difference again. If you’re willing to call your members of Congress. If you are willing to visit their offices. If you are willing to speak out, let them and the country know, in very real terms, what this means for you and your family.


After all, this debate has always been about something bigger than politics. It’s about the character of our country – who we are, and who we aspire to be. And that’s always worth fighting for.
____________

More to come...

Friday, June 16, 2017

Given that "EHRs are a dying technology," should we kill MU Stage 3?


Picking back up on a recent riff I started pursuant to a young (English major) reporter's assertion that "EHRs are a dying technology."

The American Hospital Association now recommends doing away with MU Stage 3.
AHA Calls for Stage 3 Meaningful Use Cancellation and More
AHA is calling for CMS to cancel Stage 3 Meaningful Use as part of recommended changes to federal EHR reporting requirements.


The American Hospital Association (AHA) recently submitted a letter [pdf] to CMS requesting reduced administrative complexity as a way to save healthcare providers billions in annual costs, including the cancellation of Stage 3 Meaningful Use.

AHA outlined 29 recommendations to reduce regulatory burden in response to the federal organization’s request for information regarding CMS flexibilities and efficiencies.

“The regulatory burden faced by hospitals is substantial and unsustainable,” opened AHA. “As one small example of the volume of recent regulatory activity, in 2016, CMS and other agencies of the Department of Health and Human Services (HHS) released 49 hospital and health system-related rules, comprising almost 24,000 pages of text.”...
Given that explicit references requiring Meaningful Use "Stages" are found nowhere in the ARRA/HITECH statute (P.L. 111-5), this action would appear to be fully within HHS/CMS discretion. Seems to me that HHS Secretary Tom Price could simply order the killing of MU3 himself. The MU incentive money is pretty much all out the door anyway, and Price is documentably no friend of MU. From Healthcare IT News back in January:
Tom Price takes aim at the inefficiencies of meaningful use, questions how to pay for precision medicine
The HHS nominee decries a law that has turned physicians "into data entry clerks." Meanwhile, genomics represents a "brave new world," he said – but "the challenges of how we afford to be able to make that available to our society are real."
Notwithstanding that I am no fan of Dr. Price (see here and here as well), you have to give him his due here:
"Electronic health records are so important because, from an innovation standpoint they allow the patient to have their health history with them at all times and be able to allow whatever physician or provider to have access to that," Price responded. "We in the federal government have a role in that, but that role ought to be interoperability: to make sure the different systems can talk to each other so it inures to the benefit of the patient.”

With regard to the EHR Incentive Program, "I've had more than one physician tell me that the final rules and regulations related to meaningful use were the final straw for them," said Price. "And they quit. And they've got no more gray hair than you or I have. And when that happens we lose incredible intellectual capital in our society."...
Ah, yes, "interoperability." I've been griping about what I have called "interoperababble" for years on this blog.

"We in the federal government have a role ... but that role ought to be interoperability..."
ONC missed the boat on that right at the outset by not requiring "standard data" (i.e., a metadata/dictionary standard) as part of EHR certification. APIs may be just fine for data exchange among lightweight consumer-facing apps comprising few data elements in need of exchange, but EHRs (which are not "dying") typically house around 4,000 variables -- hundreds of which may have to be exchanged during any given "interop" episode of care involving multiple providers on different systems.
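By "standard data" I mean something as prosaic as a shared data dictionary: every certified EHR agreeing on the definition, type, units, and permissible values of each element. A minimal sketch of what one such dictionary entry might look like (the field names here are purely illustrative, not any actual ONC standard):

```python
# Illustrative (hypothetical) data-dictionary entry for one clinical element.
# A true "standard data" regime would publish thousands of these, and every
# certified EHR would store and exchange the element exactly this way.
systolic_bp = {
    "element_id": "VITAL.SYSTOLIC_BP",   # canonical name, not vendor-specific
    "data_type": "integer",
    "units": "mmHg",
    "valid_range": (40, 300),
    "loinc_code": "8480-6",              # standard vocabulary binding
    "definition": "Systolic blood pressure, single sitting measurement",
}
```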

Finally (pedantically, for my umpteenth time), no amount of calling n-dimensional "interfaced" point-to-point data translation/exchange "interoperability" will make it so. Had we "Type-O" universal standard data (my "lifeblood of health care" analogy again), we might come closer to conforming to the IEEE definition of "interoperability."
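The arithmetic behind that gripe is simple. With n systems and no shared standard, full point-to-point exchange needs a translation interface for every pair of systems; with a universal data standard, each system needs exactly one mapping to the standard. A quick sketch:

```python
def point_to_point_interfaces(n: int) -> int:
    """Pairwise interfaces needed when there is no shared data standard."""
    return n * (n - 1) // 2

def standard_mappings(n: int) -> int:
    """Mappings needed when every system maps once to a universal standard."""
    return n

for n in (10, 100, 1000):   # number of systems in the exchange network
    print(n, point_to_point_interfaces(n), standard_mappings(n))
# 10 -> 45 vs. 10; 100 -> 4,950 vs. 100; 1,000 -> 499,500 vs. 1,000
```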

ERRATUM

Just updated my March 2017 post "Health Care Needs an Uber Like It Needs Another Gruber."

____________

More to come...

Wednesday, June 14, 2017

A bad day in Alexandria


OK, cheap @RandPaul political talk on Twitter a year ago.


Notwithstanding that "shooting at the government" comprises the exact Constitutional and statutory definition of "treason."

I wish Congressman Steve Scalise and the other three shooting victims full and fast recoveries. As horrific as the incident was, there could have been dozens of people killed.

apropos of health care, see
What Bullets do to Bodies
____________

Thursday, June 8, 2017

"EHRs are a dying technology?"

Seriously?
From yet another press report on the eClinicalWorks fraud settlement:
The eClinicalWorks False Claims Act case: Implications for health IT

Experts say the False Claims Act case against eClinicalWorks highlights problems in health IT that center on a lack of interoperability, the failure of meaningful use and the failure of electronic health records, or EHRs.

The Department of Justice noted in its press release about the case that a lack of interoperability played a role: "ECW's [eClinicalWorks'] software failed to satisfy data portability requirements intended to permit healthcare providers to transfer patient data from ECW's software to the software of other vendors."

Kirk Nahra, an attorney at Wiley Rein LLP in Washington, D.C., who specializes in privacy, information security and compliance issues, explained. "One of the points [of these EHR certification requirements] ... is [for] systems to be able to work together," Nahra said. "[It's] the whole idea of interoperability."...
To Kate McCarthy, senior analyst at Forrester Research, this case just reinforces something she's believed in for a while: "It's kind of mystifying that they were able to get away with this as long as they did. But my opinion on health records has been, for a while, that [EHRs are] a dying technology."

A perfect design for failure

McCarthy explained that healthcare organizations try to use EHRs to run hospitals and drive everything, from scheduling to patient workflow to revenue cycles, in addition to using EHRs simply as clinical document storage -- which is what they are more suited for than anything else, she said.

"They're systems of record," McCarthy said. "They're not systems of insight, and they're not systems of engagement. And so the way that people have tried to make them work in the industry was basically a perfect design for failure."

But more than the failure of EHRs, McCarthy said she believes this case against eClinicalWorks also demonstrates the failure of meaningful use.

"The issue I see is more that meaningful use, in and of itself, is a pretty big failure," she said. "And even organizations that are successfully attesting meaningful use are not meeting customer organizations' expectations with the products and services that they're delivering."

She added that "not only have we not [achieved meaningful use], but now vendors are out there faking meaningful use attestation."...
Well...


I know critics are having great sport these days piling on the Meaningful Use program, summarily calling it an unequivocal "failure." While I have never held fire criticizing the initiative where I found it necessary (e.g., "interoperababble," anyone?) -- even while working for the HealthInsight REC -- I'm not so sure. I find the results decidedly mixed.
Responses shortly (my daughter's been finishing chemo round 3 today). First of all, briefly, I take issue with the summary conflation: whether the Meaningful Use program has "failed" (in what regard?) is a separate issue from whether EHRs have "failed" (and, they decidedly have not; paper charts are not "better," the converse is true; all we can differ over is the relative extent).

None of which is to argue that the "current state of HIT is 'acceptable'." When you work in QI, little to nothing of the status quo is ever "acceptable." Technology is never static, and to the extent that dominant market incumbency stifles "innovation" in any business/tech domain, it has nil to do with technology per se. (And, yes, I'm hip to the phrase "regulatory capture." I dispute the extent to which it applies in Health IT -- in marked contrast to, say, the FIRE sector.)

Two broad accusations stand out in general as proffered by MU critics: [1] "Interoperability" has yet to be accomplished, and [2] we have failed to "bend the cost curve (down)."

Fair enough. Search back through the MU-governing ARRA/HITECH Act (Public Law 111-5, pdf). Interoperability is alluded to exactly twice, once with respect to "promoting research into interoperability (which has indeed commenced, however haltingly)," and once touting the (obvious) utility of interoperability with respect to public health databases and "registries."

Search also on "cost curve" and its numerous synonym phrases. You won't find anything. For an in-depth look at why health care costs continue to rise despite all of the policy chatter and health IT initiatives, you can't do better than Elisabeth Rosenthal's fine book "An American Sickness." Yes, it was hoped and intended that HIT would play a role in reducing health care costs, but other far more potent economic factors keep confounding that goal (and we simply cannot know where costs would be today absent the significantly accelerated penetration of HIT pursuant to HITECH).
__

UPDATE

Coming shortly, after I look into this (below) further.


A "free" 8-week online course.
"Participants in this 8-week course will engage with top experts in the field of public health as they grapple with the nature of high-quality healthcare: What is quality? How do we define it? How is it measured? And most importantly, how can we make it better? Whether you’re a healthcare provider; student of medicine, public health, or health policy; or a patient who simply cares about getting good care—this course is for you."
We'll see what's new since I got my health care QI Cert in 1994.
____________

More to come...

Tuesday, June 6, 2017

Should we abolish the ONC?

"The Office of the National Coordinator for Health Information Technology (ONC) is the lead agency charged with formulating the federal government’s health information technology (IT) strategy and coordinating federal health IT policies, standards, programs, and investments. ONC supports the Department’s goal to strengthen health care by modernizing the care delivery infrastructure of the nation through the adoption, implementation, meaningful use, and optimization of health IT.

The FY 2017 Budget for ONC is $82 million, $22 million above FY 2016. This Budget reflects ONC’s commitment to advancing progress towards a safe and secure nationwide system of interoperable health IT that focuses on safety and usability. Through the engagement and collaboration of public and private sector stakeholders, ONC will facilitate care delivery transformation and better health and health care nationwide.

In FY 2017, ONC will focus on encouraging market transparency and competition, improving electronic health record usability, and offering technical assistance to providers to help them get the most out of their health IT."
In light of my prior post.
Go to HealthIT.gov. Search around for "eClinicalWorks" (including via the "Newsroom" link). See if you find any mention of the eCW fraud.

First "Newsroom" hit (which ranks eCW at 3rd):
Health Care Professional EHR Vendors
Certified Health IT Vendors and Editions Reported by Ambulatory Health Care Professionals Participating in the Medicare EHR Incentive Program


Note: Certified health information technology (health IT) meets the technological capability, functionality, and security requirements adopted by the Department of Health and Human Services. The order above reflects vendors of commercial certified health IT only; totals for self-developed certified health IT are summarized as a whole. 2014 certified health IT is certified under the 2014 Edition Health Information Technology Certification Criteria, and 2011 certified health IT is certified under the 2011 Edition Health Information Technology Certification Criteria...
"Certified health information technology (health IT) meets the technological capability, functionality, and security requirements adopted by the Department of Health and Human Services."

Unless it doesn't.

ONC "$82 million" budget this year? That'd perhaps pay for Trump's 2017 weekend trips to Mar-a-Lago. That's less than HIMSS recent annual gross (~$97 million, per their IRS Form 990 via Guidestar). Hey, Donald, while you're privatizing the FAA, why don't you sell off ONC to HIMSS? They just bought Health 2.0. We could have one Big Happy Free Market HIT Family.

SPEAKING OF FEDERAL HIT INITIATIVES

From Health Affairs,
Across all medical specialties, there is a severe lack of high-quality clinical evidence, in part because the gold standard for evidence is large-scale randomized controlled clinical trials. Such trials are on an unsustainable cost trajectory, as they require expensive, stand-alone data capture infrastructures. Furthermore, they typically enroll highly selected populations that are not necessarily representative of real-world patients. Although the emergence of the electronic health record (EHR) holds great promise for generating much-needed evidence, medical research lags far behind other industries in its ability to use big data to get the answers decision makers need in health care. The ability to harness good quality, usable data from EHRs will likely be as revolutionary to health care as the Internet was to other industries.

The problem is complex, and one facet of the issue is that data from health systems are not interoperable; for example, information such as date of birth, blood pressure, or diagnoses can be recorded in a myriad of ways. Although the Centers for Medicare and Medicaid Services encourages and incents “meaningful use” of EHRs, these systems are customizable to each institution’s needs, and as a result, data from individual health care systems and providers are housed in silos of babel—with limited ability to exchange information between them. Compounding the issue, most organizations erected proprietary systems of digital health data capture before standardized formats were developed and before thoughtful consideration about reuse of these data for research activities gained traction. As a result, it has been infeasible to ask questions as seemingly simple and important as “Which dose of aspirin is associated with better outcomes?”
To counter these problems, the Patient-Centered Outcomes Research Institute (PCORI) funded PCORnet, the National Patient-Centered Clinical Research Network, to support clinical research. PCORnet has built strong partnerships between clinical researchers and patient advocacy networks. In addition, PCORnet has established a Common Data Model to support pragmatic trials and observational research. Use of PCORnet’s Common Data Model will enable large-scale clinical research from data gathered during patient care as well as rapid execution of queries. Data can be collected and harmonized across more than 130 diverse organizations representing more than 122 million individuals who had a medical encounter in the past five years. Additionally, 41 million patients are available for enrollment in clinical trials and other studies...
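For a sense of what "rapid execution of queries" against a Common Data Model buys you, here's a sketch of the aspirin question as one query run identically at every participating site. The table and column names below are illustrative only, loosely patterned on a PCORnet-style CDM, not the actual specification.

```python
# Hypothetical distributed-query sketch against a common data model.
# Table/column names are illustrative, not the actual PCORnet CDM spec.
ASPIRIN_OUTCOMES_QUERY = """
SELECT p.rx_dose_ordered        AS aspirin_dose_mg,
       COUNT(DISTINCT d.patid)  AS mi_events
FROM   prescribing p
LEFT JOIN diagnosis d
       ON d.patid = p.patid
      AND d.dx LIKE 'I21%'         -- acute myocardial infarction (ICD-10)
WHERE  p.rxnorm_cui = '1191'       -- aspirin (RxNorm ingredient code)
GROUP  BY p.rx_dose_ordered;
"""

def run_at_site(connection):
    """Each network site runs the identical query locally and returns only
    aggregate counts, so patient-level data never leaves the site."""
    return connection.execute(ASPIRIN_OUTCOMES_QUERY).fetchall()
```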
PCORI was legislated as part of "ObamaCare" (PPACA), not HITECH, btw. If the GOP actually goes through with ACA repeal, PCORI will go down the tubes as well.
____________

More to come...

Saturday, June 3, 2017

Fraud at eClinicalWorks. Are they alone in this regard?

Electronic Health Records Vendor To Pay The Largest Settlement In The District Of Vermont
eClinicalWorks LLC to Pay $155 Million to Resolve Civil False Claims Act Allegations


BURLINGTON, VT – One of the nation’s largest vendors of electronic health records (EHR) software, eClinicalWorks (ECW), and certain of its employees will pay a total of $155 million to resolve a False Claims Act lawsuit alleging that ECW misrepresented the capabilities of its software, the Justice Department announced. The settlement also resolves allegations that ECW paid kickbacks to certain customers in exchange for promoting its product. ECW is headquartered in Westborough, Massachusetts.

“This settlement is the largest False Claims Act recovery in the District of Vermont and we believe the largest financial recovery in the history of the State of Vermont,” said Acting United States Attorney for the District of Vermont Eugenia A.P. Cowles. “This significant recovery is a testament to the hard work and dedication of this office and our partners in the Commercial Litigation Branch of the Civil Division and at HHS. This resolution demonstrates that EHR companies will not succeed in flouting the certification requirements.”

The American Recovery and Reinvestment Act of 2009 established the Electronic Health Records (EHR) Incentive Program to encourage healthcare providers to adopt and demonstrate their “meaningful use” of EHR technology. Under the program, the U.S. Department of Health and Human Services (HHS) offers incentive payments to healthcare providers who adopt certified EHR technology and meet certain requirements relating to their use of the technology. To obtain certification for their product, companies that develop and market EHR software must attest that their software satisfies applicable HHS-adopted criteria and pass testing by an accredited, independent, HHS-approved certifying entity.

In its complaint-in-intervention, the government contends that ECW falsely obtained that certification for its EHR software when it concealed from its certifying entity that its software did not comply with the requirements for certification. For example, in order to pass certification testing without meeting the certification criteria for standardized drug codes, the company modified its software by “hardcoding” only the drug codes required for testing. In other words, rather than programming the capability to retrieve any drug code from a complete database, ECW simply typed the 16 codes necessary for certification testing directly into its software. ECW’s software also did not accurately record user actions in an audit log, and in certain situations did not reliably record diagnostic imaging orders or perform drug interaction checks. In addition, ECW’s software failed to satisfy data portability requirements intended to permit healthcare providers to transfer patient data from ECW’s software to the software of other vendors. As a result of these and other deficiencies in its software, ECW caused the submission of false claims for federal incentive payments based on the use of ECW’s software.

“Every day, millions of Americans rely on the accuracy of their electronic health records to record and transmit their vital health information,” said Acting Assistant Attorney General for the Civil Division of the Department of Justice Chad A. Readler. “This resolution is a testament to our deep commitment to public health and our determination to hold accountable those whose conduct results in improper payments by the federal government.”

Under the terms of the settlement agreement, ECW and three of its founders (Chief Executive Officer Girish Navani, Chief Medical Officer Rajesh Dharampuriya, M.D., and Chief Operating Officer Mahesh Navani) are jointly and severally liable for the payment of $154,920,000 to the United States. Separately, Developer Jagan Vaithilingam will pay $50,000, and Project Managers Bryan Sequeira and Robert Lynes will each pay $15,000.

As part of the settlement, ECW entered into a Corporate Integrity Agreement (CIA) with the HHS Office of Inspector General (HHS-OIG) covering the company’s EHR software. This innovative 5-year CIA requires, among other things, that ECW retain an Independent Software Quality Oversight Organization to assess ECW’s software quality control systems and provide written semi-annual reports to OIG and ECW documenting its reviews and recommendations. ECW must provide prompt notice to its customers of any safety related issues and maintain on its customer portal a comprehensive list of such issues and any steps users should take to mitigate potential patient safety risks. The CIA also requires ECW to allow customers to obtain updated versions of their software free of charge and to give customers the option to have ECW transfer their data to another EHR software provider without penalties or service charges. ECW must also retain an Independent Review Organization to review ECW’s arrangements with health care providers to ensure compliance with the Anti-Kickback Statute.

“Electronic health records have the potential to improve the care provided to Medicare and Medicaid beneficiaries, but only if the information is accurate and accessible,” said Special Agent in Charge Phillip Coyne of HHS-OIG. “Those who engage in fraud that undermines the goals of EHR or puts patients at risk can expect a thorough investigation and strong remedial measures such as those in the novel and innovative Corporate Integrity Agreement in this case.”

The settlement with ECW resolves allegations in a lawsuit filed in the District of Vermont by Brendan Delaney, a software technician formerly employed by the New York City Division of Health Care Access and Improvement. The lawsuit was filed under the qui tam, or whistleblower, provisions of the False Claims Act, which permit private individuals to sue on behalf of the government for false claims and to share in any recovery. The Act also allows the government to intervene and take over the action, as it did in this case. As part of today’s resolution, Mr. Delaney will receive approximately $30 million.

This matter was jointly handled by Assistant United States Attorneys Owen C.J. Foster and Nikolas P. Kerest of the U.S. Attorney’s Office for the District of Vermont, Kelley Hauser and Edward Crooke of the Commercial Litigation Branch of the Civil Division, the HHS Office of Inspector General, and multiple HHS agencies and components.

The case is captioned United States ex rel. Delaney v. eClinicalWorks LLC, 2:15-CV-00095-WKS (D. Vt.). The claims resolved by the settlement are allegations only, and there has been no determination of liability.
Wow. One has to wonder: how many other EHR vendors have been defrauding the ARRA/HITECH program by gaming their ONC Certs?
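To make the DOJ's "hardcoding" allegation concrete: the difference between certifiable behavior and what was alleged is roughly the difference between the two functions below. This is a schematic sketch, not eCW's actual code; "rxnorm_db" and the code values are placeholders.

```python
# Schematic contrast, not actual vendor code.

# What certification is supposed to verify: look up ANY standardized drug code
# from a complete terminology database ("rxnorm_db" is a placeholder object).
def lookup_drug_code(drug_name: str, rxnorm_db) -> str:
    return rxnorm_db.query(drug_name)   # works across the full vocabulary

# What the complaint alleges: only the handful of codes the test script uses
# are typed directly into the software, so the certification test passes but
# real-world lookups outside that short list fail.
HARDCODED_TEST_CODES = {
    "lisinopril": "29046",   # placeholder stand-ins for the "16 codes" at issue
    "metformin": "6809",
    # ... 14 more, and nothing else
}

def lookup_drug_code_hardcoded(drug_name: str) -> str:
    return HARDCODED_TEST_CODES[drug_name]   # KeyError for anything not in the test set
```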
This irritates me personally. Back when I was in the HealthInsight REC, I served Meaningful Use client practices using 14 different EHR platforms:
eClinicalWorks
e-MDs
SOAPware
Amazing Charts
Greenway
athenaHealth
Allscripts (MyWay and Professional)
Practice Fusion
Praxis
EncounterPro
Optum CareTracker
ChartLogic
MicroMD
Care360
eClinicalWorks (eCW) alone was at least a 3rd of my book. Back during the prior DOQ-IT era, HealthInsight had sent me to Westborough to train on eCW, so it was the platform with which I was most familiar and adept. Good UX functionality and features (though the workflow paths to some of the MU compliance criteria were "too many clicks" -- albeit hardly unique to eCW).

During the 2014 Health 2.0 Conference, eCW CEO Girish Navani blew a bunch of smoke up my butt, telling me he wanted to sponsor my KHIT blog. Follow-up went nowhere with that. No doubt a good thing, in hindsight.


So, now eCW merely gets Double Secret Probation and a "hefty" fine. But they don't have to admit guilt (LOL, spare me) nor lose their ONC Certification.
"The claims resolved by the settlement are allegations only, and there has been no determination of liability."
A class-action lawsuit brought by eCW users perhaps in their future? Will CMS attempt to claw back MU incentive funds paid to eCW attestors?


Are other EHR vendors pulling this kind of crap? The "paper charts are better" and anti-ONC, anti-regulatory "free markets uber alles" crowds are all over this CusterFluck.

Jacob Reider at THCB:
Is eClinicalWorks the Next Volkswagen?
From HealthcareIT News:
Concerned the eClinicalWorks fiasco could happen to your EHR? Take these steps now
From FierceHealthcare:
eClinicalWorks settlement hints at broader certification infractions throughout the EHR industry
__

INTERESTING COMMENTS AT THCB
As Adrian pointed out and with my previous blog post that was referenced, it’s not surprising that events such as this have occurred with this proprietary software with certification boondoggle. Other industries have recognized the immense value of open source and it’s all around us (just not with much fanfare). When will physicians recognize there is much in common between what we do in our profession and the intrinsic ethos of open source software (which does exactly what Dr. Zwerling had suggested – having the code out in the open to review and improve with physician input)? This is not a novel concept and the only ones who can make a choice and break away from this proprietary with or without certification duopoly are the end users themselves – patients and physicians.

As others have pointed out in other related blog posts on this site, we can’t realistically go back to pen and paper. We also can’t turn back time to undo the damage and $$$ lost with MU. With our political climate, we can’t hope and wait for legislation to turn things around in the next few months or even years. It’s in our hands. The tools are there for us. It’s not a theoretical proposition. We need to seriously talk about the prospect of open source and for those that are unfamiliar with it, get immersed and get to know more about it. It costs nothing and you can try it out (HIE of One and NOSH) and start that conversation. These open source projects only thrive if a community supports it. To me, the key value propositions of a meaningful open source health project are better patient care with happier physicians like me who are back in the business of taking care of their patients. Just that simple.
- Michael Chen, MD
I'm not persuaded that Open Source applications (and I happily use Firefox, Thunderbird, and Filezilla, to cite just three), as effective as they can be generally, represent the HIT panacea (Wiki list of open source Health IT here). A bit of false dilemma there, IMO. I began my white collar career in the '80s writing (what would now be called) "apps" in a forensic-level environmental radioanalytical lab in Oak Ridge (e.g., here, pdf). While my work was "proprietary" (IT/ORL was a privately-held commercial lab), the comprehensive QA packages that had to be thoroughly reviewed and signed off by my Sups (inclusive of SOPs, source code, RDBMS schema, logic flowcharts, and validation sample data) before any of my work could be put into production were routinely available to hordes of regulatory and client examiners. I've been audited right down to my rounding algorithms, frequently under adversarial conditions. I've long been a critic of the feebleness of the simplistic and toothless ONC cert program.

A prior comment (same thread) at THCB:
ONC policy is the root of the problem and has caused the general erosion of trust in federal health initiatives that we have today.

Certification is a poor substitute for sunshine. The reason we have cheating in Volkswagen or eCW is that software that impacts our health is not open source. However, the harm of secrecy is clearer when the software can produce a medical error directly rather than just statistically impact our health through environmental damage.

Certification institutionalizes the growing practice of making medicine itself proprietary and secret. Open source medical software, like medical knowledge itself, is safe and trusted because it’s open to peer review and has no economic incentive to hide bugs or to cheat. On the contrary, open source software creates the incentive for bugs and shortcuts to be publicized in the hope that they will be quickly fixed. This is how medicine works as well, where we understand the critical importance of reporting adverse events.

My colleague Michael Chen, MD has authored the New Open Source Health (NOSH) EHR that’s at the core of our HIE of One initiative. His comments about certification and ONC policy in April and our other posts before predicted the failure of confidence that we are seeing with eCW and ONC today: https://noshemr.wordpress.com/2017/04/07/picking-a-scab-with-a-zombie/

ONC policy to protect proprietary EHRs by erecting a certification barrier to open source software is the root of the problem. As Michael notes, it’s also the root of the “information blocking” / interoperability problem we have. How many lives and hundreds of $Billions is that costing us? Adding more regulations by certifying more secret software will not solve the problem of trust. Our physicians need to be using tools that are open to inspection, free to teach, to share, and to improve. It’s medicine.
- Adrian Gropper, MD
All points well-taken. But, still, I have false dilemma reservations. And I'm a bit put off by Dr. Gropper's imputations of ONC ill will and "regulatory capture" ulterior motives.

BREAKING: The VA just selected Cerner for their new EHR system. So much for Open Source (VistA).

MONDAY MORNING UPDATE

From HealthcareDIVE:
eClinicalWorks false claims settlement could kick off more EHR investigations

Dive Brief:
  • Legal experts believe the recent $155 million settlement with EHR vendor eClinicalWorks in a False Claims Act case may be the start of greater Department of Justice (DOJ) activity in the EHR field.
  • FierceHealthcare reported that eClinicalWorks isn’t the only vendor that has “taken liberties with certification criteria” and Healthcare IT News predicted DOJ will shine the light on other EHR vendors for false claims.
  • The DOJ obtained more than $4.7 billion in settlements and judgments from civil cases that involved the False Claims Act last fiscal year. The DOJ has averaged nearly $4 billion on False Claims Act cases since fiscal year 2009 and has collected $31.3 billion during that period...
What a mess.
____________

More to come...

Wednesday, May 31, 2017

Continuing with NLP, a $4,200 "study."

OK, I've been keyword-searching around on Natural Language Processing (NLP) topics in the wake of my prior post, while I finish Daniel Dennett's glorious, witty, and detailed new 496-page book on the evolution of the human mind (btw, $9.18 Kindle price, and of direct relevance to AI/NLP/CL). #NLP

Ran into this beaut.

Physician Computer Assisted Coding for Professionals: Market Shares, Strategies, and Forecasts, Worldwide, 2017 to 2023

LEXINGTON, Massachusetts (March 13, 2017) – WinterGreen Research announces that it has published a new study Professional Physician Practice and Ambulatory Clinical Facility Computer Assisted Coding: Market Shares, Strategy, and Forecasts, Worldwide, 2017 to 2023. Next generation Computer Assisted Coding of medical information is able to leverage natural language software technology to support some automation of the billing process and use of analytics to achieve higher quality patient outcomes. The study has 299 pages and 110 tables and figures.

Computer assisted coding of medical information uses natural language solutions to link the physician notes in an electronic patient record to the codes used for billing Medicare, Medicaid, and private insurance companies. 

Natural language processing is used determine the links to codes. 88% of the coding can occur automatically without human review. Computer assisted coding is used in all parts of the healthcare delivery system. The coding systems work well to implement automated coding process.

Physicians think about patient conditions in terms of words. Software is configured to achieve working with physicians who are more comfortable describing a patient treatment in words than codes. The electronic patient record, created using physician dictation, is used to form the base for the coding.  Natural language solutions implement computer coding to identify key words and patterns of language. The physician dictation can be done using regular language that the software recognizes and translates into billing codes.  

Properly designed natural language processing (NLP) solutions do not require physicians to change the way they work. They can dictate in a free-flowing fashion, consistent with the way they think, and are not limited to structured inputs that may or may not fully capture the unique circumstances of each patient encounter.

Matching codes generated from physician notes to standard treatment protocols promises to improve health care delivery. Accompanying that type of physician patient management against best practice promises to revolutionize health care delivery. The ability to further check as to whether the recommendations for follow up made by radiologists and matching the commendations with the actual follow up heralds’ significant promise of vastly improved health care delivery. 

Computer assisted coding applications depend on the development of production quality natural language processing (NLP)-based computer assisted coding applications. This requires a process-driven approach to software development and quality assurance. 

A well-defined software engineering process consists of requirements analysis, preliminary design, detailed design, implementation, unit testing, system testing and deployment. NLP complex technology defines the key features of a computer assisted coding (CAC) application.

Automation of process will revolutionize health care delivery. In addition to automating the insurance, billing, and transaction systems, streamlined care delivery is an added benefit. The ability to look at workflow and compare actual care to best practice is fundamental to automated business process. 

The ability to link diagnostic patient information to treatment regimes and drug prescriptions is central to improving medical care delivery. Once a physician can see what conditions need to be followed, and see that appropriate care has been prescribed 100% of the time, care delivery improves dramatically. Diagnosis of conditions using radiology frequently results in detection of events that need follow-up.

According to Susan Eustis, lead author of the team that prepared the study, “Growing acceptance of computer assisted coding for physician offices represents a shift to cloud computing and billing by the procedure coded. Because SaaS based CAC provides an improvement over current coding techniques the value is being recognized. Administrators are realizing the benefits to quality of care. Patients feel better after robotic surgery and the surgeries are more likely to be successful.” 

The worldwide market for Computer Assisted Coding is $898 million in 2016, anticipated to reach $2.5 billion by 2023. The complete report provides a comprehensive analysis of Computer Assisted Coding in different categories, illustrating the diversity of software market segments. A complete procedure analysis is done, looking at numbers of procedures and doing penetration analysis. 

Major health plans report a smooth transition to ICD-10. This is due to rigorous testing for six years. ICD-10 has had a positive impact on reimbursement. ICD-10 coding system requires use of 72,000 procedure codes and 68,000 CM codes, as opposed to the 4,000 and 14,000 in the ICD-9 system. Managing high volume of codes requires automation. Healthcare providers and payers use complex coding systems, which drives demand for technologically advanced CAC systems. 

The market for computer-assisted coding grows because it provides management of workflow process value by encouraging increasing efficiency in care delivery for large Professional Physician Practice and Ambulatory Clinical Facility. By making more granular demarcation of diagnoses and care provided for each diagnosis, greater visibility into the care delivery system is provided. Greater visibility brings more ability to adapt the system to successful treatments...
Need I elaborate? Seriously? The writing is painful, as are the topic-skipping lack of focus and blinding-glimpses-of-the-obvious/not-exactly-news "research analysis" observations.
"Patients feel better after robotic surgery and the surgeries are more likely to be successful."
And, this is a result of back-office NLP/CAC precisely how?

Okeee-dokeee, then. A mere 14 bucks a page for a PDF file?

First of all, EHR narrative-field "text mining" functionality has been a commonplace for years now across a number of mainstream platforms (as is the converse: turning codes and data into faux-narrative text dx "impressions"). Re-labeling such now with the trendy "NLP" moniker doesn't change that (none of which is to imply that the infotech is not improving). Second, I'm not gonna pay $4,200 to maybe verify whether "[exactly?] 88% of the coding can occur automatically without human review" (in payor audit-defensible fashion). Finally, we all "think" about things in terms of "words," but dx narrative impressions are a result of the SOAP process, not the cause. They're the "A" and the "P." The "S" and "O" comprise a mix of numeric data, informatics codes, and open-ended deliberative textual information.

Beyond those, the rest of the foregoing is a poorly-written rambling melange of platitudes and unhelpfully vague filler assertions.
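As for what that long-commonplace narrative-field "text mining" actually amounts to in practice, here's a bare-bones sketch of the sort of keyword/pattern pass that now gets rebranded as "NLP." The patterns and ICD-10 codes are illustrative only, not any vendor's production rule set.

```python
import re

# Minimal sketch of naive narrative-field "text mining": map keyword patterns
# found in free-text notes to candidate billing/problem codes for human review.
PATTERNS = {
    r"\btype\s*2\s*diabetes\b": "E11.9",   # Type 2 diabetes mellitus, unspecified
    r"\bhypertension\b":        "I10",     # Essential (primary) hypertension
    r"\bchronic\s+rash\b":      "L30.9",   # Dermatitis, unspecified
}

def suggest_codes(note_text: str) -> list[str]:
    """Return candidate codes; a human coder still has to confirm them."""
    hits = []
    for pattern, code in PATTERNS.items():
        if re.search(pattern, note_text, flags=re.IGNORECASE):
            hits.append(code)
    return hits

print(suggest_codes("Pt with Type 2 diabetes and a chronic rash on arms and torso."))
# -> ['E11.9', 'L30.9']
```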

CEO and "Senior Analyst" Susan Eustis:


Lordy, mercy. A proxy spokesmodel? How about the CEO herself?

Stay tuned, just getting underway. Behind the curve this week, tending to my ailing daughter.

UPDATE

From TechCrunch:
Mary Meeker’s latest trends report highlights Silicon Valley’s role in the future of health care

Mary Meeker’s latest Internet Trends Report, out today, was full of insights on how tech is shaping our future — including now in health care. This was the first year Meeker included healthcare in her report and it shows just how much of a role tech is going to play in improving our lives going forward...
Free download, 355 pages.

Mary Meeker, 2016:


__

NEXT, PREPARE TO ENJOY

Finished Daniel Dennett's book.

Consider medical education. Watson is just one of many computer-based systems that are beginning to outperform the best diagnosticians and specialists on their own turf. Would you be willing to indulge your favorite doctor in her desire to be an old-fashioned “intuitive” reader of symptoms instead of relying on a computer-based system that had been proven to be a hundred times more reliable at finding rare, low-visibility diagnoses than any specialist? Your health insurance advisor will oblige you to submit to the tests, and conscientious doctors will see that they must squelch their yearnings to be diagnostic heroes and submit to the greater authority of the machines whose buttons they push. What does this imply about how to train doctors? Will we be encouraged to jettison huge chunks of traditional medical education— anatomy, physiology, biochemistry— along with the ability to do long division and read a map? Use it or lose it is the rule of thumb cited at this point, and it has many positive instances. Can your children read road maps as easily as you do or have they become dependent on GPS to guide them? How concerned should we be that we are dumbing ourselves down by our growing reliance on intelligent machines?

So far, there is a fairly sharp boundary between machines that enhance our “peripheral” intellectual powers (of perception, algorithmic calculation, and memory) and machines that at least purport to replace our “central” intellectual powers of comprehension (including imagination), planning, and decision-making ...We can expect that boundary to shrink, routinizing more and more cognitive tasks, which will be fine so long as we know where the boundary currently is. The real danger, I think, is not that machines more intelligent than we are will usurp our role as captains of our destinies, but that we will over-estimate the comprehension of our latest thinking tools, prematurely ceding authority to them far beyond their competence.

Dennett, Daniel C. (2017-02-07). From Bacteria to Bach and Back: The Evolution of Minds (Kindle Locations 6649-6666). W. W. Norton & Company. Kindle Edition.
"How concerned should we be that we are dumbing ourselves down by our growing reliance on intelligent machines?"

Well, I recall a couple of books I've heretofore cited on that issue.


Back to Daniel Dennett, some concluding "Bacteria to Bach" broad-stroke thoughts:
We have now looked at a few of the innovations that have led us to relinquish the mastery of creation that has long been a hallmark of understanding in our species. More are waiting in the wings. We have been motivated for several millennia by the idea expressed in Feynman’s dictum, “What I cannot create, I do not understand.” But recently our ingenuity has created a slippery slope: we find ourselves indirectly making things that we only partially understand, and they in turn may create things we don’t understand at all. Since some of these things have wonderful powers, we may begin to doubt the value— or at least the preeminent value— of understanding. “Comprehension is so passé, so vieux jeux, so old-fashioned! Who needs understanding when we can all be the beneficiaries of artifacts that save us that arduous effort?”

Is there a good reply to this? We need something more than tradition if we want to defend the idea that comprehension is either intrinsically good— a good in itself, independently of all the benefits it indirectly provides— or practically necessary if we are to continue living the kinds of lives that matter to us. Philosophers, like me, can be expected to recoil in dismay from such a future. As Socrates famously said, “the unexamined life is not worth living,” and ever since Socrates we have taken it as self-evident that achieving an ever greater understanding of everything is our highest professional goal, if not our highest goal absolutely. But as another philosopher, the late Kurt Baier, once added, “the over-examined life is nothing to write home about either.” Most people are content to be the beneficiaries of technology and medicine, scientific fact-finding and artistic creation without much of a clue about how all this “magic” has been created. Would it be so terrible to embrace the over-civilized life and trust our artifacts to be good stewards of our well-being?

I myself have been unable to concoct a persuasive argument for the alluring conclusion that comprehension is “intrinsically” valuable— though I find comprehension to be one of life’s greatest thrills— but I think a good case can be made for preserving and enhancing human comprehension and for protecting it from the artifactual varieties of comprehension now under development in deep learning, for deeply practical reasons. Artifacts can break, and if few people understand them well enough either to repair them or substitute other ways of accomplishing their tasks, we could find ourselves and all we hold dear in dire straits. Many have noted that for some of our high-tech artifacts, the supply of repair persons is dwindling or nonexistent. A new combination color printer and scanner costs less than repairing your broken one. Discard it and start fresh. Operating systems for personal computers follow a similar version of the same policy: when your software breaks or gets corrupted, don’t bother trying to diagnose and fix the error, unmutating the mutation that has crept in somehow; reboot, and fresh new versions of your favorite programs will be pulled up from safe storage in memory to replace the copies that have become defective. But how far can this process go?

Consider a typical case of uncomprehending reliance on technology. A smoothly running automobile is one of life’s delights; it enables you to get where you need to get, on time, with great reliability, and for the most part, you get there in style, with music playing, air conditioning keeping you comfortable, and GPS guiding your path. We tend to take cars for granted in the developed world, treating them as one of life’s constants, a resource that is always available. We plan our life’s projects with the assumption that of course a car will be part of our environment. But when your car breaks down, your life is seriously disrupted. Unless you are a serious car buff with technical training you must acknowledge your dependence on a web of tow-truck operators, mechanics, car dealers, and more. At some point, you decide to trade in your increasingly unreliable car and start afresh with a brand new model. Life goes on, with hardly a ripple.

But what about the huge system that makes this all possible: the highways, the oil refineries, the automakers, the insurance companies, the banks, the stock market, the government? Our civilization has been running smoothly— with some serious disruptions— for thousands of years, growing in complexity and power. Could it break down? Yes, it could, and to whom could we then turn to help us get back on the road? You can’t buy a new civilization if yours collapses, so we had better keep the civilization we have running in good repair. Who, though, are the reliable mechanics? The politicians, the judges, the bankers, the industrialists, the journalists, the professors— the leaders of our society, in short— are much more like the average motorist than you might like to think: doing their local bit to steer their part of the whole contraption, while blissfully ignorant of the complexities on which the whole system depends. According to the economist and evolutionary thinker Paul Seabright (2010), the optimistic tunnel vision with which they operate is not a deplorable and correctable flaw in the system but an enabling condition. This distribution of partial comprehension is not optional. The edifices of social construction that shape our lives in so many regards depend on our myopic confidence that their structure is sound and needs no attention from us.

At one point Seabright compares our civilization with a termite castle. Both are artifacts, marvels of ingenious design piled on ingenious design, towering over the supporting terrain, the work of vastly many individuals acting in concert. Both are thus by-products of the evolutionary processes that created and shaped those individuals, and in both cases, the design innovations that account for the remarkable resilience and efficiency observable were not the brain-children of individuals, but happy outcomes of the largely unwitting, myopic endeavors of those individuals, over many generations. But there are profound differences as well. Human cooperation is a delicate and remarkable phenomenon, quite unlike the almost mindless cooperation of termites, and indeed quite unprecedented in the natural world, a unique feature with a unique ancestry in evolution. It depends, as we have seen, on our ability to engage each other within the “space of reasons,” as Wilfrid Sellars put it. Cooperation depends, Seabright argues, on trust, a sort of almost invisible social glue that makes possible both great and terrible projects, and this trust is not, in fact, a “natural instinct” hard-wired by evolution into our brains. It is much too recent for that. Trust is a by-product of social conditions that are at once its enabling condition and its most important product. We have bootstrapped ourselves into the heady altitudes of modern civilization, and our natural emotions and other instinctual responses do not always serve our new circumstances.

Civilization is a work in progress, and we abandon our attempt to understand it at our peril. Think of the termite castle. We human observers can appreciate its excellence and its complexity in ways that are quite beyond the nervous systems of its inhabitants. We can also aspire to achieving a similarly Olympian perspective on our own artifactual world, a feat only human beings could imagine. If we don’t succeed, we risk dismantling our precious creations in spite of our best intentions. Evolution in two realms, genetic and cultural, has created in us the capacity to know ourselves. But in spite of several millennia of ever-expanding intelligent design, we still are just staying afloat in a flood of puzzles and problems, many of them created by our own efforts of comprehension, and there are dangers that could cut short our quest before we— or our descendants— can satisfy our ravenous curiosity. [Dennett, op cit, Kindle Locations 6729-6787]
He closes,
[H]uman minds, however intelligent and comprehending, are not the most powerful imaginable cognitive systems, and our intelligent designers have now made dramatic progress in creating machine learning systems that use bottom-up processes to demonstrate once again the truth of Orgel’s Second Rule: Evolution is cleverer than you are. Once we appreciate the universality of the Darwinian perspective, we realize that our current state, both individually and as societies, is both imperfect and impermanent. We may well someday return the planet to our bacterial cousins and their modest, bottom-up styles of design improvement. Or we may continue to thrive, in an environment we have created with the help of artifacts that do most of the heavy cognitive lifting their own way, in an age of post-intelligent design. There is not just coevolution between memes and genes; there is codependence between our minds’ top-down reasoning abilities and the bottom-up uncomprehending talents of our animal brains. And if our future follows the trajectory of our past— something that is partly in our control— our artificial intelligences will continue to be dependent on us even as we become more warily dependent upon them. [ibid, Kindle Locations 6832-6840]
'eh? A lot to think about, in the context of "AI/IA" broadly (and "NLP/NLU" specifically).

Back to my original "NLU" question: will we be able to write code that can accurately parse, analyze, and "understand" arguments composed in ordinary language?

A lot more study awaits me. Suffice it to say I'm a bit skeptical at this point.

Maybe we could put the hackers at Hooli on it! ;) (NSFW)

MORE NLP NEWS
The evolution of computational linguistics and where it's headed next
May 31, 2017 by Andrew Myers

Earlier this year, Christopher Manning, a Stanford professor of computer science and of linguistics, was named the Thomas M. Siebel Professor in Machine Learning, thanks to a gift from the Thomas and Stacey Siebel Foundation.

Manning specializes in natural language processing – designing computer algorithms that can understand meaning and sentiment in written and spoken language and respond intelligently. His work is closely tied to the sort of voice-activated systems found in smartphones and in online applications that translate text between human languages. He relies on an offshoot of artificial intelligence known as deep learning to design algorithms that can teach themselves to understand meaning and adapt to new or evolving uses of language...
Worth re-citing/linking these prior posts at this point:
The Great A.I. Awakening? Health care implications?
Are structured data the enemy of health care quality?
"I'm a bit skeptical at this point."

15 Computational Semantics
CHRIS FOX

1 Introduction
In this chapter we will generally use ‘semantics’ to refer to a formal analysis of meaning, and ‘computational’ to refer to approaches that in principle support effective implementation, following Blackburn and Bos (2005). There are many difficulties in interpreting natural language. These difficulties can be classified into specific phenomena – such as scope ambiguity, anaphora, ellipsis and presuppositions. Historically, different phenomena have been explored within different frameworks, based upon different philosophical and methodological foundations. The nature of these frameworks, and how they are formulated, has an impact on whether a given analysis is computationally feasible. Thus the topic of computational semantics can be seen to be concerned with the analysis of semantic phenomena within computationally feasible frameworks...

2.1 A standard approach
In general it is difficult to reason directly in terms of sentences of natural language. There have been attempts to produce proof-theoretic accounts of sentential reasoning (for example, Zamansky et al., 2006; Francez & Dyckhoff 2007), but it is more usual to adopt a formal language, either a logic or some form of set theory, and then translate natural language expressions into that formal language. In the context of computational semantics, that means a precise description of an algorithmic translation rather than some intuitive reformulation of natural language. Such translations usually appeal to a local principle of compositionality. This can be characterized by saying that the meaning of an expression is a function of the meaning of its parts...

2.2 Basic types
When considering the representations of nouns, verbs, and sentences as properties, relations, and propositions respectively, we may have to pay attention to the nature of the permitted arguments. For example, we may have: properties of individuals; relationships between individuals; relationships between individuals and propositions (such as statements of belief and knowledge); and, in the case of certain modifiers, relations that take properties as arguments to give a new property of individuals. Depending upon the choice of permitted arguments, and how they are characterized, there can be an impact on the formal power of the underlying theory. This is of particular concern for a computational theory of meaning: if the theory is more powerful than first-order logic, then some valid conclusions will not be derivable by computational means; such a logic is said to be incomplete, which corresponds with the notion of decidability (Section 1, and Section 1.2 of Chapter 2, COMPUTATIONAL COMPLEXITY IN NATURAL LANGUAGE)...

2.3 Model theory and proof theory
There are two ways in which traditional formal semantic accounts of indicatives have been characterized. First, we may be interested in evaluating the truth of indicatives (or at least their semantic representation) by evaluating their truth conditions with respect to the world (or, more precisely, some formal representation or model of a world). This can be described as model-theoretic semantics. Model-theoretic accounts are typically formulated in set theory. Set theory is a very powerful formalism that does not lend itself to computational implementation. In practice, the full power of set theory may not be exploited. Indeed, if the problem domain itself is finite in character, then an effective implementation should be possible regardless of the general computational properties of the formal framework (see Klein 2006 for example).

On the second characterization of formal semantic accounts, the goal is to formalize some notion of inference or entailment for natural language. If one expression in natural language entails another, then we would like that relation to be captured by any formalization that purports to capture the meaning of natural language. This can be described as proof-theoretic semantics. Such rules may lend themselves to fairly direct implementation (see for example van Eijck and Unger (2004); Ramsay (1995); Bos and Oka (2002), the last of which supplements theorem proving with model building).

Although a proof-theoretic approach may seem more appropriate for computational semantics, the practical feasibility of general theorem proving is open to question. Depending on the nature of the theory, the formalization may be undecidable. Even with a decidable or semi-decidable theory, there may be problems of computational complexity, especially given the levels of ambiguity that may be present (Monz and de Rijke 2001)...


(2013-04-24). The Handbook of Computational Linguistics and Natural Language Processing (Blackwell Handbooks in Linguistics) (pp. 394 - 402). Wiley. Kindle Edition.
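For my own edification, here's a minimal Python sketch of the two ideas in the excerpt above: a "compositional" translation of a tiny English fragment into first-order-logic-style formulas (section 2.1), and a crude proof-theoretic entailment check -- naive forward chaining -- over the result (section 2.3). Everything in it -- the fragment, the notation, the function names -- is my own toy invention for illustration; it handles exactly the sentence patterns shown and nothing more.

```python
# Toy illustration of (1) compositional translation of a tiny English
# fragment into first-order-logic-style formulas, and (2) a naive
# forward-chaining entailment check over the resulting formulas.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:           # e.g., Man(socrates)
    pred: str
    arg: str

@dataclass(frozen=True)
class Rule:           # e.g., all x. Man(x) -> Mortal(x)
    antecedent: str
    consequent: str

def translate(sentence: str):
    """Compositionally map a few fixed sentence patterns to logical forms."""
    words = sentence.rstrip(".").lower().split()
    # "every <P> is <Q>"   =>  all x. P(x) -> Q(x)
    if len(words) == 4 and words[0] == "every" and words[2] == "is":
        return Rule(words[1].capitalize(), words[3].capitalize())
    # "<name> is (a) <P>"  =>  P(name)
    if len(words) >= 3 and words[1] == "is":
        return Fact(words[-1].capitalize(), words[0])
    raise ValueError(f"this toy fragment can't parse: {sentence!r}")

def entails(premises, goal: Fact, max_rounds: int = 10) -> bool:
    """Naive forward chaining: apply the rules to known facts until fixpoint."""
    forms = [translate(p) for p in premises]
    facts = {f for f in forms if isinstance(f, Fact)}
    rules = [r for r in forms if isinstance(r, Rule)]
    for _ in range(max_rounds):
        derived = {Fact(r.consequent, f.arg)
                   for r in rules for f in facts if f.pred == r.antecedent}
        if derived <= facts:
            break
        facts |= derived
    return goal in facts

if __name__ == "__main__":
    premises = ["Every man is mortal.", "Socrates is a man."]
    print(translate(premises[0]))  # Rule(antecedent='Man', consequent='Mortal')
    print(translate(premises[1]))  # Fact(pred='Man', arg='socrates')
    print(entails(premises, translate("Socrates is mortal.")))  # True
```

The point of the toy is the gulf it exposes: even the canonical Socrates syllogism needs hand-built patterns, and nothing here generalizes to arbitrary prose.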
Most of what I'm finding thus far is a lot of jargon-laden academic abstraction, none of it answering my core NLP question: can we develop code capable of accurately analyzing the logic in prose arguments -- the aggregate "semantic" "meanings" comprising a proffer? This book, notwithstanding its 801-page heft, frequently begs off with "beyond the scope" apologies.
Perhaps the difficulties are simply too numerous and imposing to surmount (as of today, anyway) in light of the innumerable ways to state any given prose argument -- ranging from the utterly inelegant (e.g., Eustis) to the eloquently evocative (e.g., Dennett); from the methodically Socratic/trial-lawyer-like to the rambling, unfocused, and redundant; from the fastidiously lexically and grammatically precise to the sloppily mistake-ridden; from the explicit and accessible to the obtusely inferential...
But, then,
"Computers will understand sarcasm before Americans do."  - Geoffrey Hinton
There's certainly a thriving international academic community energetically whacking away at this stuff.


From one of the "invited papers" (pdf) in this edition:



UPDATE

Good article.
How Language Led To The Artificial Intelligence Revolution
Dan Rowinski

In 2013 I had a long interview with Peter Lee, corporate vice president of Microsoft Research, about advances in machine learning and neural networks and how language would be the focal point of artificial intelligence in the coming years.

At the time the notion of artificial intelligence and machine learning seemed like a “blue sky” researcher’s fantasy. Artificial intelligence was something coming down the road … but not soon.
I wish I had taken the talk more seriously.

Language is, and will continue to be, the most important tool for the advancement of artificial intelligence. In 2017, natural language understanding engines are what drive the advancement of bots and voice-activated personal assistants like Microsoft’s Cortana, Google Assistant, Amazon’s Alexa and Apple’s Siri. Language was the starting point and the locus of all new machine learning capabilities that have come out in recent years...
____________

More to come...