Search the KHIT Blog

Wednesday, December 13, 2017

Recapping the 2017 Health 2.0 Technology for Precision Health Summit


What a delight, the Julia Morgan Ballroom.






Matthew Holt kicked things off, and then remained offstage until the wrap at the end of the day. He roamed the ballroom all day shooting smartphone pics and uploading them, basically live-tweeting the entire day. Scooped the hell out of me and my ever-more obsolete Sony alpha DSLRs.


I have some 70 photos to consider for upload. Need to also review my notes for my impressions of the day.


A ton of intellect on that stage and in the audience.

My summary takeaway tweet.

Conference Chair Linda K. Molnar, PhD
UPDATE

I have some family matters to attend to before finishing out this post, but in the interim the Health 2.0 staff has already published an excellent recap.
HEARD ONSTAGE AT THE INAUGURAL TECHNOLOGY FOR PRECISION HEALTH SUMMIT

This week a group of investors, entrepreneurs, precision health researchers, and data wonks all gathered at the opulent Julia Morgan Ballroom in San Francisco for a day of sharing, charged discussion, and live technology demos. The goal of the Technology for Precision Health Summit was to shine a spotlight on advancements and opportunities within an industry where the stakes are high and the imperative to innovate is often life or death.

Linda Molnar, Co-Chair of the summit, opened the day by offering some historical context and major milestones to illustrate how we got to where we currently stand…
Very cool.


Stay tuned.
____________

More to come...

Monday, December 11, 2017

Net Neutrality and the Trump FCC


Joint Comments of Internet Engineers, Pioneers, and Technologists on the Technical Flaws in the FCC’s Notice of Proposed Rule-making and the Need for the Light-Touch, Bright-Line Rules from the Open Internet Order.
"The undersigned submit the following statement in opposition to the Federal Communications Commission's Notice of Proposed Rulemaking – WC Docket No. 17-108, which seeks to reclassify Broadband Internet Access Service (BIAS) providers as “information services,” as opposed to “telecommunications services.” Based on certain questions the FCC asks in the Notice of Proposed Rulemaking (NPRM), we are concerned that the FCC (or at least Chairman Pai and the authors of the NPRM) appears to lack a fundamental understanding of what the Internet's technology promises to provide, how the Internet actually works, which entities in the Internet ecosystem provide which services, and what the similarities and differences are between the Internet and other telecommunications systems the FCC regulates as telecommunications services. Due to this fundamental misunderstanding of how the technology underlying the Internet works, we believe that if the FCC were to move forward with its NPRM as proposed, the results could be disastrous: the FCC would be making a major regulatory decision based on plainly incorrect assumptions about the underlying technology and Internet ecosystem..."
A 53-page, heavily and credibly documented PDF. You should read all of it carefully.
V. Conclusion
"While we appreciate the FCC’s desire to accurately classify BIAS providers, the theories posed and the questions asked by the NPRM indicate a tremendous lack of technical understanding on the part of its authors. We remain concerned that any decision to reclassify based on this lack of technical understanding could have dangerous consequences, including stifling future innovation and depressing future investment in the wealth of Internet services that drive such a large part of the U.S. economy. 


Furthermore, based on the above examples, coupled with our collective knowledge and experience in designing, building, and operating various parts of the Internet ecosystem, it is clear to us that if the FCC were to reclassify BIAS providers as information services, and thereby put the bright-line, light-touch rules from the Open Internet Order in jeopardy, the result could be a disastrous decrease in the overall value of the Internet. Fortunately, the current rules that the FCC operates under will effectively prevent this worst-case scenario from occurring, so long as the NPRM is not approved. 


That is why we, the undersigned computer scientists, network engineers, and Internet professionals, based on our technical analysis and an understanding of both how the Internet was designed, and how it currently functions, respectfully encourage the FCC not to adopt the Notice of Proposed Rulemaking – WC Docket No. 17-108."

This is probably headed to court, given that the Trump FCC will likely vote to roll back the current "end-to-end net neutrality" regulations. See also "The Web Began Dying in 2014 -- Here's How."

ON DECK, BLOCKCHAIN AND HEALTH CARE

Stay tuned. See here and here. Spoiler alert: pardon my dubiety.
“…once a block is added to the blockchain, it is immutable, which results in improved data accuracy and maintenance. While blockchain has mostly seen uses in the financial space, it’s clear that such technology could potentially revolutionize other areas, such as healthcare and medical records.”
Right. What could possibly go wrong?
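
For the uninitiated, that "immutability" claim rests on hash chaining: each block records a cryptographic hash of its contents along with the hash of the block before it, so altering any past record breaks every link after it. Here's a minimal toy sketch in Python -- emphatically not any production blockchain, just the chaining idea, with made-up records:

```python
import hashlib
import json

def block_hash(contents):
    """Deterministically hash a block's contents (record + prior hash)."""
    payload = json.dumps(contents, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    """Append a block whose hash covers the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    contents = {"record": record, "prev_hash": prev}
    chain.append({**contents, "hash": block_hash(contents)})

def verify(chain):
    """Tampering with any block breaks the hash links downstream."""
    for i, block in enumerate(chain):
        contents = {"record": block["record"], "prev_hash": block["prev_hash"]}
        if block["hash"] != block_hash(contents):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, {"patient": "A123", "event": "HbA1c 6.9"})
add_block(chain, {"patient": "A123", "event": "HbA1c 7.4"})
print(verify(chain))                       # True
chain[0]["record"]["event"] = "HbA1c 5.0"  # quietly "fix" history...
print(verify(chain))                       # False -- tampering is detectable
```

Note what even the toy makes obvious: the chain makes tampering detectable, not impossible (whoever controls all the copies can simply rewrite them), and an erroneous record, once written, is preserved exactly as "immutably" as a correct one. "Improved data accuracy" does not follow.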

Tangentially, see "Bitcoin Could Cost Us Our Clean-Energy Future."
____________

More to come...

Friday, December 8, 2017

Susannah Fox on the power of peer-to-peer communities for health



My brilliant friend Susannah Fox, former Chief Technology Officer of HHS, has launched a great new campaign.


The direct URL link is here: https://youtu.be/jCGk2n2zJ-s

I've covered Susannah's presentations for years at Health 2.0 and other events. She rocks.


I hope this new initiative gets huge traction. Pay it forward. I certainly will.

PS

Don't forget next Tuesday's Health 2.0 Technology for Precision Health Summit.
____________

More to come...

Tuesday, December 5, 2017

On deck: the Health 2.0 Technology for Precision Health Summit

Imagine, if you will, a future in which a cancer diagnosis is treated with a lifestyle change, like a chronic condition. Survivable. Manageable. Like diabetes. Sure, receiving a cancer diagnosis today does not mean what it meant twenty years ago, but we are also unlikely to ever reach a point of acting casual about the term or the treatment plan.

In the meantime, though, the increasing prevalence of personal data collection is driving new approaches in care plans that have a real shot at improving quality of life. The narrative of one's life can be seen in the data -- everything from where you live, what you eat, how you work out, even what you search for on the internet. Such personal data come from sources like clinical trials, biosensors, and wearables, and are increasingly being stored in your Electronic Medical Record.

The sticking point, though, is the advancement of technological tools to view, aggregate, extract, and analyze relevant data to derive a meaningful plan of attack (er, treatment plan). One interoperable tool that plugs right into the EMR is Cota Healthcare. Pair this with omics data and genome sequencing technology, like 2bPrecise, and physicians gain insight into what makes you, you -- and are thus better able to craft a bespoke cancer treatment plan, designed for you and only you.
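
To make "aggregate" a bit more concrete: the core chore is normalizing events from disparate feeds onto a single patient timeline. A toy Python sketch, with entirely made-up feed formats -- none of this is Cota's or 2bPrecise's actual data model, just the shape of the problem:

```python
from datetime import date

# Hypothetical, made-up feeds -- not any vendor's actual data model.
emr_events = [
    {"date": date(2017, 3, 2), "source": "EMR", "event": "Dx: type 2 diabetes"},
    {"date": date(2017, 9, 14), "source": "EMR", "event": "HbA1c 7.1%"},
]
wearable_events = [
    {"date": date(2017, 9, 1), "source": "wearable", "event": "avg 4,200 steps/day"},
]
omics_events = [
    {"date": date(2017, 6, 20), "source": "omics", "event": "CYP2D6 poor metabolizer"},
]

def unified_timeline(*feeds):
    """Merge heterogeneous event feeds into one chronological narrative."""
    merged = [event for feed in feeds for event in feed]
    return sorted(merged, key=lambda e: e["date"])

for e in unified_timeline(emr_events, wearable_events, omics_events):
    print(e["date"], e["source"].ljust(8), e["event"])
```

The merge itself is trivial; the hard parts in the real world are the ones the toy waves away -- inconsistent units, vocabularies, and patient identifiers across sources, which is precisely the interoperability problem.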

Learn more about how omics data is driving new care plans, and see a live demo from Cota Healthcare and others at the Technology for Precision Health Summit next week in San Francisco.

Why wait? Register today for next week's event and save 50% on the ticket by using discount code TPH50.
Hope to see you there. Hashtag #tph2017.

"Omics," 'eh? See my prior "Personalized Medicine" and "Omics" -- HIT and QA considerations."

UPDATE

apropos of my prior post "Science, intellectual property, taxpayer funding, and the law."
Why a lot of important research is not being done
Aaron Carroll MD


We have a dispiriting shortage of high-quality health research for many reasons, including the fact that it’s expensive, difficult and time-intensive. But one reason is more insidious: Sometimes groups seek to intimidate and threaten scientists, scaring them off promising work.

By the time I wrote about the health effects of lead almost two years ago, few were questioning the science on this issue. But that has not always been the case. In the 1980s, various interests tried to suppress the work of Dr. Herbert Needleman and his colleagues on the effects of lead exposure. Not happy with Dr. Needleman’s findings, the lead industry got both the federal Office for Scientific Integrity and the University of Pittsburgh to conduct intrusive investigations into his work and character. He was eventually vindicated — and his discoveries would go on to improve the lives of children all over the country — but it was a terrible experience for him.

I often complain about a lack of solid evidence on guns’ relationship to public health. There’s a reason for that deficiency. In the 1990s, when health services researchers produced work on the dangers posed by firearms, those who disagreed with the results tried to have the National Center for Injury Prevention and Control shut down. They failed, but getting such work funded became nearly impossible after that…
And this is equally interesting:
Benign Effects of Automation: New Evidence from Patent Texts

Researchers disagree over whether automation is creating or destroying jobs. This column introduces a new indicator of automation constructed by applying a machine learning algorithm to classify patents, and uses the result to investigate which US regions and industries are most exposed to automation. This indicator suggests that automation has created more jobs in the US than it has destroyed…
As always at Naked Capitalism, spend some time reading the comments. Predominantly an articulate, well-informed crowd there.

ERRATUM

Posted by one of my FB friends. Lordy.


The jokes just write themselves.
____________

More to come...

Sunday, December 3, 2017

Science, intellectual property, taxpayer funding, and the law


Taxpayers, whether they know it or not, fund a huge proportion of scientific research across a breadth of fields. Fruits borne of such investigations subsequently often become the highly valuable "intellectual property" (IP) of academic institutions and private parties.

to wit, from a recent issue of my AAAS Science Magazine:
Racing for academic glory and patents: Lessons from CRISPR
Arti K. Rai, Robert Cook-Deegan


The much-publicized dispute over patent rights to CRISPR-Cas9 gene-editing technology highlights tensions that have been percolating for almost four decades, since the U.S. Bayh-Dole Act of 1980 invoked patents as a mechanism for promoting commercialization of federally funded research. With the encouragement provided by Bayh-Dole, academic scientists and their research institutions now race in dual competitive domains: the quest for glory in academic research and in the patent sphere. Yet, a robust economic literature (1, 2) argues that races are often socially wasteful; the racing parties expend duplicative resources, in terms of both the research itself and the legal fees spent attempting to acquire patents, all in the pursuit of what may be a modest acceleration of invention. For CRISPR, and future races involving broadly useful technologies for which it may set a precedent, the relationship between these competitive domains needs to be parsed carefully. On the basis of legal maneuvers thus far, it appears that the litigants will try for broad rights; public benefit will depend on courts reining them in and, when broad patents slip through, on updating Bayh-Dole's pro-commercialization safeguards with underused features of the Act.
       
Science, Commerce, CRISPR

The University of California (UC) and the Broad Institute of the Massachusetts Institute of Technology (MIT) and Harvard University are tangling over U.S. patent rights to CRISPR technology. In June 2012, Doudna of UC, Charpentier, and others demonstrated in vitro that a system comprising a Cas9 DNA endonuclease could be programmed with a single chimeric RNA molecule to cleave specific DNA sites (3). Six months later, in January 2013, Zhang of the Broad Institute reported genome editing in mammalian cells using CRISPR-Cas9 (4). The research underlying both of these seminal publications was supported by the U.S. National Institutes of Health (NIH) and thus were funded for public benefit and subject to terms of the Bayh-Dole Act (3, 4).
       
From the outset, CRISPR science has been intertwined with commercial considerations. UC filed its provisional patent application in May 2012, one month before publication of the research findings. Some of UC's initial claims swept very broadly, encompassing any DNA-targeting RNA containing a segment complementary to the target DNA and a segment that interacts with a “site-directed modifying polypeptide” (5). Although UC subsequently restricted its claims to a type II CRISPR-Cas9 system, it continues to claim this system in any species as well as in vitro.
       
The Broad Institute began filing patent applications in December 2012. Broad paid for expedited examination, with the result that its patents began to issue in April 2014. The relevant Broad patents claim alteration of eukaryotic cell DNA by using type II CRISPR-Cas9 systems.
       
After the Broad patents began to issue, UC asked for a legal proceeding known as an interference, arguing that because the subject matter of its application, which had not yet been granted, overlapped with that of the Broad patents, the U.S. Patent and Trademark Office (USPTO) had to declare who was the “first to invent.” In February 2017, the USPTO ruled that the overlap did not exist (6). Specifically, USPTO held that the Broad team's success in eukaryotes was not scientifically “obvious” in light of UC's demonstration of success in vitro…
       
Patent Races, Scientific Credit
As histories of key technologies such as the telephone demonstrate (12), and as studies of patent records have documented on a large scale empirically (13), patent races are common. Moreover, legal rights can get confused with scientific credit (12). It is important to get the rules right, not only for the patent system but also for academic science.

Whatever the outcome of the patent interference proceeding, scientific and patent priority are not the same. The rules for securing a patent should differ in important ways from the rules for scientific credit and recognition. Although current rules for academic credit may seem murky or unfair to some individuals who feel left out, winner-take-all scenarios for important patents, which confer legal exclusivity over inventions, are more pernicious because they confer powerful rights to exclude…
       
Shifting Roles of Universities
The problem of inefficiency in races is exacerbated if the racing parties believe they can, via an overly broad patent, make the race a winner-take-all fight: the bigger the prize at the end of the race, the more likely we will see inefficiency. So, overly broad patents that emerge from winner-take-all races are not only likely to hamper downstream development, they are also likely to encourage upstream duplication. In contrast, to the extent that racing parties have engaged not in wholesale duplication but have pursued valuable divergent paths (14), narrow patents can be awarded to multiple racers, allowing diversity to be rewarded…

Improving Bayh-Dole
A first-best result in the CRISPR race would be narrow patents that both diminish incentives for wasteful future racing and prevent any player from imposing substantial control over downstream innovation. But to the extent broad patents that impede development nonetheless slip through, presumably because CAFC does not choose to follow its admittedly controversial precedent on written description, the CRISPR controversy suggests the need for renewed attention to prior literature that highlights avenues for improving Bayh-Dole's pro-commercialization safeguards. Regulatory improvements that have been suggested in the literature include clarifying government-use rights, extending them to grantees and contractors; recognizing situations in which patenting is not the shortest or best path to widespread application; and simplifying procedures for “march in” to compel additional licensing when health and safety needs are not being met by those with exclusive rights (25).

To our knowledge, none of these academic proposals for improvement have been investigated seriously by the U.S. Department of Commerce, which is responsible for administering Bayh-Dole. The CRISPR controversy may be a catalyst for action. Improving pro-commercialization safeguards would be prudent even if narrow rights were ultimately at stake. But, improvement could prove particularly critical in the event that courts let overly broad rights emerge.
'eh?

Ahhh... patent actions.


Taxpayers, recall, paid for the initial development of the internet itself -- U.S. commercial control of which is soon to be handed over to a handful of major corporations courtesy of the Trump Administration's FCC.

apropos, from the bracingly iconoclastic Yves Smith (re: "Net Neutrality"):
The Trinet
The internet will survive longer than the web will. GOOG-FB-AMZN will still depend on submarine internet cables (the “Backbone”), because it is a technical success. That said, many aspects of the internet will lose their relevance, and the underlying infrastructure could be optimized only for GOOG traffic, FB traffic, and AMZN traffic. It wouldn’t conceptually be anymore a “network of networks”, but just a “network of three networks”, the Trinet, if you will. The concept of a workplace network which gave birth to the internet infrastructure would migrate to a more abstract level: Facebook Groups, Google Hangouts, G Suite, and other competing services which can be acquired by a tech giant...
Ya gotta spend serious time at Naked Capitalism. I do, daily. Comment there with care, though; they do not suffer fools gladly.
__

Regarding digital IP broadly, recall my prior post citing apparent "patent troll" Robert Budzinski, "NLP, meet NPE: the curious case of Robert Budzinski."

CLIMATE SCIENCE IN COURT

From Public Radio's KPBS:
Climate Paper At Center Of Scientist-Versus-Scientist Legal Dispute

A Stanford University climate scientist has taken the unusual step of suing another scientist, arguing that a rebuttal paper critiquing his work amounted to defamation.

Two years ago, Stanford professor Mark Jacobson published a scientific paper that quickly grabbed national attention: in it he said that a completely renewable energy grid could be possible by 2050, as new ways to store energy would allow the system to keep up with demand even in moments when winds slow down or clouds cover solar panels. The paper was mentioned by Sen. Bernie Sanders and was named one of the best of 2015.

But this June, a group of researchers published its own paper, which said Jacobson ignored important costs, making the type of system Jacobson envisioned theoretically possible, but likely too expensive to be realistic. The lead author was Christopher Clack, a former researcher at the University of Colorado Boulder and now CEO of the nonprofit Vibrant Clean Energy. UC San Diego professor David Victor was one of 20 other authors.

Jacobson sued Clack and the National Academy of Sciences, which published the paper, in late September. He said Clack’s work defamed him…
Audio interview here:



Interesting stuff, all of it.

NEW READING


Stay tuned. About 10% through; very nice thus far. Written by one of the principals at San Francisco's M34 Capital, Inc. There's a specific reason for my interest in these people.

UPDATE

More "AI" news...

November 27, 2017 - Depending on who is making the statement, artificial intelligence is either the best thing to happen to healthcare since penicillin or the beginning of the end of human involvement in the medical arts.

Robot clinicians are coming to take over every job that can possibly be automated, warn the naysayers.

That might not be such a terrible thing, say the enthusiasts. The sooner the healthcare system can eliminate human error and inefficiency, the safer, happier, and healthier patients will be.

In reality, artificial intelligence is still many, many years away from replacing the clinical judgement of a living, thinking person, says Dr. Joe Kimura, Chief Medical Officer at Atrius Health. And it may not ever do so.

While AI and machine learning hold enormous potential to improve the way clinicians practice, AI proponents should try to temper their expectations, and cynics worried for their jobs can relax for the moment – there is a great deal of work to be done before providers can or should trust computers to make reliable decisions for them…
Interesting. See my prior post "AI vs IA: At the cutting edge of IT R&D." See also "The future of health care? Flawlessly run by AI-enabled robots, and 'essentially' free?"

All warrants continued attention.

MORE

Amid the Monday morning news, from THCB:
Could Artificial Intelligence Destroy Radiology by Litigation Claims?
By HUGH HARVEY, MD


We’ve all heard the big philosophical arguments and debate between rockstar entrepreneurs and genius academics – but have we stopped to think exactly how the AI revolution will play out on our own turf?

At RSNA this year I posed the same question to everyone I spoke to: What if radiology AI gets into the wrong hands? Judging by the way the crowds voted with their feet by packing out every lecture on AI, radiologists would certainly seem to be very aware of the looming seismic shift in the profession – but I wanted to know if anyone was considering the potential side effects, the unintended consequences of unleashing such a disruptive technology into the clinical realm?…
Read all of it.

ANOTHER BOOK TO READ?


Ran across this title. Looks interesting, given the amount of thought and blogging I've given to AI issues (mostly, albeit not exclusively, pertaining to health care).
"Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the "smart" in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.

In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine..."
Dunno, my daughter's illness continues to hamper my reading pace. I have about four books in process at the moment, including this one:


First cited it here.

James Barrat's website here.
"Intelligence, not charm or beauty, is the special power that enables humans to dominate Earth. Now, propelled by a powerful economic wind, scientists are developing intelligent machines. Each year intelligence grows closer to shuffling off its biological coil and taking on an infinitely faster and more powerful synthetic one." 
The Mark 13

We'll see. So much to continue to learn.
____________

More to come...

Wednesday, November 29, 2017

"SlaughterBots?" #AI-driven autonomous weaponry?


Is this a realistic, exigent concern? We're all ga-ga these days over the beneficent potential of AI in the health care space, but,



From the autonomousweapons.org site:
What are killer robots?
Killer robots are weapons systems that, once activated, would select and fire on targets without meaningful human control. They are variously termed fully autonomous weapons or lethal autonomous weapons systems.

The concern is that low-cost sensors and rapid advances in artificial intelligence are making it increasingly possible to design weapons systems that would target and attack without further human intervention. If this trend towards autonomy continues, the fear is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, and then no role at all.

The US and others state that lethal autonomous weapon systems “do not exist” and do not encompass remotely piloted drones, precision-guided munitions, or defensive systems. Most existing weapons systems are overseen in real-time by a human operator and tend to be highly constrained in the tasks they are used for, the types of targets they attack, and the circumstances in which they are used.

While the capabilities of future technology are uncertain, there are strong reasons to believe that fully autonomous weapons could never replicate the full range of inherently human characteristics necessary to comply with international humanitarian law’s fundamental rules of distinction and proportionality. Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harm that fully autonomous weapons would be likely to cause...
Wonder what Elon Musk thinks about this stuff? Recent reporting.


'eh? See also my prior post "Artificial Intelligence and Ethics."

ERRATUM

Read an interesting set of posts over at Medium today.

"What to look (out) for when raising money."
Good stuff.
If You Take Venture Capital, You’re Forcing Your Company To Exit
To understand venture capital, you must understand the consequences of how VCs return capital to their investors


Why the Most Successful VC Firms Keep Winning
The best companies seek out the most successful investors, and gains accumulate at the top
 

One Investor Isn’t Enough
A company’s success is highly reliant on peer validation of investor decisions. This stunts diversity and must change if we want the best founders working on the biggest opportunities.
I follow VC news tangentially in the health care space, e.g., "#WinterTech2016: Venture Capital in #HealthIT."

Just became aware of this company.

"We are a private investment company that practices a disciplined, evidence-based approach to define innovative business models capable of delivering new products and services that can positively impact the way we live, work and play.

What We Look For
We are not traditional seed investors. Rather, we are active and engaged co-founders who seek to get the most promising opportunities from the lab to the market. We bring to the table decades of entrepreneurial success and experience, offering seasoned perspectives. We seek individuals who share our desire for exploration, who are intellectually honest and have insatiable curiosity, who have the ability and the desire to systematically test their assumptions in the real world. We believe that the best entrepreneurs have the ability to blend technical insight with market-based feedback, allowing their innovations to mature into a successful company. Even our name evokes this passion. M34 is short for Mach 34, the speed that an object needs to achieve to escape the gravitational pull of the earth. Our goal for all of our companies is to achieve a successful liftoff from traditional forces that hold most new ventures back...."
Interesting. I found this noteworthy in particular:
M34 Capital founders understand the risks associated with developing new and innovative ideas. We have pioneered the principles of the lean startup approach to accelerate and improve the efficiency of the commercialization of new technologies.
More on this stuff perhaps later. "Lean startup?" Hmmm...

Seriously; this old-school Deming/Shewhart dude loves him some "Lean."

UPDATE

apropos of the upcoming December 12th Health 2.0 "Technology for Precision Health Summit."
Health 2.0 sat down with Linda Molnar to discuss the evolution of Precision Health, the imperatives at stake in a fast-paced field, and empowerment through big data. Linda has over 20 years in the field of Life Sciences and is responsible for a number of initiatives that further the field with start-ups, the feds, and for investors. Her current endeavor is leading the upcoming Technology for Precision Health Summit in San Francisco alongside Health 2.0. “We’re never going to pull together all of this disparate data from disparate sources in a meaningful (i.e. clinically actionable) way, unless we talk about it” she says. The Summit is an attempt to bring together the worlds of Precision Medicine and Digital Healthcare to realize the full potential of a predictive and proactive approach to maintaining health...

____________

More to come...

Monday, November 27, 2017

From Pythagoras to petabytes

A few days back, I was surfing through a number of news articles recounting Elon Musk's latest freakout over The Existential Menace of AI. See, e.g., "AI is highly likely to destroy humans, Elon Musk warns."


A commenter beneath one of the articles (TechCrunch's "WTF is AI?") briefly cited this book.


OK, Elon's pearl-clutching will have to wait.

Garth's book is merely $4.61 in the Kindle edition (and worth every penny). A must-read, IMO. Spanning, well, "Pythagoras to petabytes." (I'd have chosen "yottabytes," but it didn't have the cutesy alliterative post title ring.)
IT’S EASY TO FORGET that the digital age, and so the existence of computer programmers, still only spans a single working life, and one of those lives is mine. In my career as a commercial computer programmer I’ve experienced most of the changes that have occurred in the programming world in all their frequent craziness. This is my attempt to preserve that odyssey for the historical record...
For myself and many others, commercial computer programming has been a rewarding career since at least the late 1960s and during that time programming has seen massive changes in the way programmers work. The evolution of many different programming models and methodologies, the involvement of many extraordinary characters, endless ‘religious wars’ about the correct way to program and which standards to use. It would be a great shame if a new generation of programmers were unaware of the fascinating history of their profession, or hobby, and so I came to write this book...
Four of the overall themes in the book can be identified by searching for #UI (User Interface), #AI (Artificial Intelligence), #SOA (Service Oriented Architecture) and #PZ (Program componenti-Zation) and the Table of Contents is detailed enough to guide dipping in/ dipping out…
___
In Years BC (Before Computers)
Sometimes it seems like computers and the digital revolution sprang out of nowhere but of course they didn’t. They were the result of at least two and a half millennia of evolutionary development which can be traced all the way back to the ancient Greek philosopher Pythagoras. When I graduated from university in 1968 I had a degree in philosophy but I somehow managed to be awarded it without having absorbed much ancient Greek philosophy or the history and philosophy of science. There were just too many well documented distractions in the sixties. It was only later that I came to realise how critical they were in creating the context for digital computers and with them the new career of commercial computer programmer.

So we need to briefly cover some history in order to set the scene for my account though only the necessary facts will be covered. We’ll only be scratching the surface so don’t panic but if you prefer to skip this Prologue that’s OK. By the way I use BC and AD for dates and not BCE and CE, there is no particular reason, it’s just habit.

550 – 500 BC
During this period Pythagoras the ancient Greek philosopher was active in a Greek colony in southern Italy. No written record of his thinking survives from the time but he is known to have influenced Plato and Aristotle and through them all of Western thinking. He is rumored to have been the first person to ever call himself a philosopher…

Eaglesfield, Garth. The Programmer's Odyssey: A Journey Through The Digital Age (Kindle Locations 94-129). Pronoun. Kindle Edition.
Garth and I are roughly the same age (both born in 1946). He came to computing about 12 years before I did. My initiation came tangentially in my 30s in the course of my undergrad research (I was studying Psychology and Statistics at Tennessee; we were investigating "math anxiety" among college students taking stats courses). We wrote SAS and SPSS code via primitive editors (e.g., edlin and vi), with laboriously entered inline research results data, prefaced with JCL (Job Control Language) headers, all submitted to the DEC/VAX over a 300 baud dialup modem (after which we'd schlep over to the computer room to fetch the 132-column greenbar printouts once available).

My, my, how times have changed.

After graduation in 1985, I got my first white-collar gig in January 1986, writing code in a radiation laboratory in Oak Ridge (pdf). In the years after that, my time was spent principally as an analyst, using mostly SAS and Stata, in health care (pdf) and bank risk management.

I found this particularly interesting in Garth's book:
On The Road To Bell Labs
I felt that I needed to get Unix and C on to my CV and where better to do that than at its birthplace where it all started? Bell Labs was in New Jersey on the other side of the Hudson river and would involve some travel but I knew that other contractors from Manhattan worked there so I reasoned that there must be some way of coping. The Holy Grail of the various Bell Labs, which were scattered around New Jersey, was the Murray Hills lab, it was the real Unix/ C holy of holies. But it was hard to get into and so I eventually got a contract at the Whippany lab. It was closer to Manhattan and was more of a traditional telephone company engineering lab but they used Unix and C so that would get it on to my CV. The project I worked on was the software for a diagnostic device used in network line testing…

At the Labs I was given a copy of the now legendary ‘White Book’ written by Kernighan and Ritchie accompanied by a Unix manual and I set about absorbing them both.

Entering your C source code for a program was done with the ubiquitous Unix editor vi from a dumb terminal. When vi was developed at Bell Labs the quest for portability had been continued by using ordinary characters for edit commands so that it could be used with any dumb terminal’s keyboard. Those keyboards of course did not have all the control keys that modern keyboards come with such as arrow keys, Home, End, Page Up, Page Down, and so on. As with everything in the Unix world the vi key combinations ranged from simple and intuitive to complex and barely comprehensible. It operated in 2 modes, insert and command modes, and in command mode, for instance, it used the normal keyboard characters ‘h’ ‘l’ ‘k’ and ‘j’ as substitutes for the absent left, right, up and down arrow keys.

Compared to the VMS operating system and to high level programming languages like COBOL, Coral 66 and Pascal, Unix and C certainly represented a much lower level, closer to the hardware and more like assembler coding. There were relatively few reserved words in the C language, with the main ones being for data classes (int, char, double, float), data types (signed, unsigned, short, long), looping verbs (for, while, continue, break), decision verbs (if, else, switch, case) and control verbs (goto, return). C took the componentization of computer programs several steps further through the use of functions, a specific form of subroutine. Functions returned a single value that might then be part of an equation...
(ibid, Kindle Locations 1677-1706).
My late Dad came back from WWII (minus a leg), mustered out of the military, and went to work for Bell Labs, first at Murray Hill, and subsequently at the Whippany location. He worked in semiconductor R&D his entire career at Bell Labs (the only civilian job he ever had, until he took his tax-free early VA disability pension in 1972 and dragged my Ma off to the humid swamps of Palm Bay, Florida so he could play golf all the time).

Looks like Garth came to Bell Labs a few years after Pop had retired.

"The Programmer's Odyssey" covers a ton of ground, weaving great historical and technical detail into a fun read of a personal story. I give it an enthusiastic 5 stars.
"It would be a great shame if a new generation of programmers were unaware of the fascinating history of their profession, or hobby, and so I came to write this book."
Yeah. I am reminded of my prior post "12 weeks, 1,200 hours, and $12,000, and you're a "Software Engineer"? Really?"

In closing,
Postscript
We have reached the end of the odyssey. But where have we arrived?

Well in some ways, like Odysseus himself, we have finally come home, right back to where we started. Some things have changed, we now have a massive multi-layered global network of increasingly smart devices, including the ultimate smart device, the human being. Yet ultimately it’s all based on the same old von Neumann computer architecture and on the two digits 0 and 1, the binary system. The whole extraordinary software edifice rests on that foundation, which is really mindboggling.

Have we learned anything on our odyssey that suggests this is likely to change? Perhaps a new computer hardware architecture will finally emerge? Articles regularly appear about quantum computing devices and IBM have announced a project to research cognitive SyNAPSE chips but so far these are still very much research projects. It is also being said that the miniaturisation of silicon chips by Intel and others will reach its limit by 2020 and Moore’s Law will finally come to an end.

On our odyssey we, like Odysseus, have encountered dangerous and worrying phenomena, in particular whether any conceivable developments in computer technology will put the human race in danger. It’s hard to believe that based on existing system components a self-conscious sentient being will come into existence like HAL in 2001 and threaten our own survival. Throughout history major technological advances have tended to become models for explaining human beings, always incorrectly. In the digital age this has meant starting to see human beings as just computing machines. Will this prove to be true or false? The jury is still out.

But if we are in danger from our technology perhaps it’s most likely to come from an artificial living creature/ cyborg/ replicant enhanced with implanted computer chips and interfaces that we have created? Created perhaps by using advanced tools such as the already being discussed DNA editors? The recent TV series ‘Humans’ is an interesting attempt to confront some of these issues and two recent books have added significantly to the debate; Rise of the Machines: The Lost History of Cybernetics; Thomas Rid; Scribe Publications and Homo Deus: A Brief History of Tomorrow, by Yuval Noah Harari, Harvill Secker.

Questions, questions. As the philosophically inclined outlaw replicant Roy Batty so presciently remarked in Blade Runner…
(ibid, Kindle Locations 2646-2666)
'eh?


Kudos, Mr. Eaglesfield. I'd make this book required reading at "software engineer boot camp." (BTW, Garth's website is here.)

Also, Garth turned me on to this resource where he blogs:


Check out in particular the "History of Computing and Programming" page.
___

Other prior posts of mine of relevance here?
Then there are my posts on "NLP" (Natural Language Processing).

NEXT UP

Got a new Twitter follower.

The Web's Best Content Extractor
Diffbot works without rules or training. There's no better way to extract data from web pages. See how Diffbot stacks up to other content extraction methods:
Pretty interesting. Stay tuned. We'll see.
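
For the curious, services of this sort typically expose a simple HTTP endpoint. A sketch of what a call might look like, assuming Diffbot's v3 Article endpoint and a placeholder token -- verify the parameters against their current docs, as this is a from-memory sketch rather than gospel:

```python
import json
import urllib.parse
import urllib.request

DIFFBOT_TOKEN = "YOUR_TOKEN_HERE"  # placeholder; you'd get a real token from Diffbot

def extract_article(page_url):
    """Ask the extraction service for the main article content of a page."""
    query = urllib.parse.urlencode({"token": DIFFBOT_TOKEN, "url": page_url})
    endpoint = "https://api.diffbot.com/v3/article?" + query
    with urllib.request.urlopen(endpoint) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    # The JSON response carries the extracted object(s): title, text, author, etc.
    article = data["objects"][0]
    return article.get("title"), article.get("text")

title, text = extract_article("https://example.com/some-article")
print(title)
```

The "without rules or training" pitch means their extraction model, not you, decides what counts as the article body -- which is exactly the part I'd want to test against messy real-world pages before buying the marketing.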

SAVE THE DATE

They just approved my press pass. Follow hashtag #health2con and Twitter handle @Health2con.

Details
UPDATE

From "history" to "futurism" predictions. I saw this infographic on Pinterest, and chopped it up into 4 segments for viewability.


See also "Types of AI."

UPDATE

Garth just reciprocated my foregoing tout with the kindest mention of me. See "Cometh the hour, cometh the man."

Again, not kidding about how cool his book is.
____________

More to come...

Wednesday, November 22, 2017

The ultimate health issue

My December issue of Harper's arrived in the snailmail yesterday. apropos of the worrisome topic that is surely on many minds these days, in light of who has sole access to the U.S. "nuclear button,"

Destroyer of Worlds
Taking stock of our nuclear present


In February 1947, Harper’s Magazine published Henry L. Stimson’s “The Decision to Use the Atomic Bomb.” As secretary of war, Stimson had served as the chief military adviser to President Truman, and recommended the attacks on Hiroshima and Nagasaki. The terms of his unrepentant apologia, an excerpt of which appears on page 35, are now familiar to us: the risk of a dud made a demonstration too risky; the human cost of a land invasion would be too high; nothing short of the bomb’s awesome lethality would compel Japan to surrender. The bomb was the only option.

Seventy years later, we find his reasoning unconvincing. Entirely aside from the destruction of the blasts themselves, the decision thrust the world irrevocably into a high-stakes arms race — in which, as Stimson took care to warn, the technology would proliferate, evolve, and quite possibly lead to the end of modern civilization. The first half of that forecast has long since come to pass, and the second feels as plausible as ever. Increasingly, the atmosphere seems to reflect the anxious days of the Cold War, albeit with more juvenile insults and more colorful threats. Terms once consigned to the history books — “madman theory,” “brinkmanship” — have returned to the news cycle with frightening regularity.

In the pages that follow, seven writers and experts survey the current nuclear landscape. Our hope is to call attention to the bomb’s ever-present menace and point our way toward a world in which it finally ceases to exist.
From the subsequent “Haywire” essay therein:
...Today, the time frame of an attack has been reduced to mere seconds. It once took three or four nuclear warheads aimed at every silo to render an adversary’s missiles useless, a redundancy thought necessary for certain destruction. Intercontinental ballistic missiles may now be made inoperable with a single keystroke. Computer viruses and malware have the potential to spoof or disarm a nuclear command-and-control system silently, anonymously, almost instantaneously. And long-range bombers and missiles are no longer required to obliterate a city. A nuclear weapon can be placed in a shipping container or on a small cabin cruiser, transported by sea, and set off remotely by cellular phone.

A 2006 study by the Rand Corporation calculated the effects of such a nuclear detonation at the Port of Long Beach, California. The weapon was presumed to be two thirds as powerful as the bomb that destroyed Hiroshima in 1945. According to the study, about 60,000 people in Long Beach would be killed, either by the blast or by the effects of radiation. An additional 150,000 would be exposed. And 8,000 would suffer serious burns. At the moment, there are about 200 burn beds at hospitals in California — and about 2,000 nationwide. Approximately 6 million people would try to flee Los Angeles County, with varying degrees of success. Gasoline supplies would run out. The direct cost of that single detonation was estimated to be about $1 trillion. In September, North Korea detonated a nuclear device about thirty times more powerful than the one in the Rand study…
'eh?

If you're not a Harper's subscriber, I would suggest you get a newsstand copy and read all of this.

As if we didn't have enough to be concerned about, I give you a comment from the Naked Capitalism blog post "Capitalism: Not With a Bang But With a (Prolonged) Whimper."
Michael
November 21, 2017 at 11:01 am


In the article the author claims it may take centuries to evolve away from neo-classical economics. I believe it will be much shorter. As pointed out by 15,000 climate scientists we only have a few years to take corrective action before the climate goes into a runaway state.

The head of United Planet Faith & Science Initiative, Stuart Scott, believes the root of the problem is neo-classical economics since it makes the assumption that the planet’s resources are infinite, and the ability to destroy the planet without consequence is also infinite. There is no measure in this paradigm for the ecological costs of capitalism.

Dr. Natalia Shakhova and colleagues from the University of Alaska have been monitoring methane eruptions for a number of years and claim there will be a 50 gigaton methane eruption in the East Siberian Arctic Shelf within the next ten years, since as of 2013 the amount of methane released from this area had doubled since previous measures. Dr. Peter Wadhams of Cambridge University calculates that should this occur, the average global temperature of the planet would increase by .6 degrees Celsius within a decade.

It is predicted that this will cause a drastic jet stream disruption creating drought, flood, and heat waves that will make food production for 7 billion people impossible. There will probably be a population crash along with accompanying wars.

Additionally, inasmuch as we are already 1.1 degrees Celsius above preindustrial levels this would put us close to 2 degrees Celsius and on a path to a runaway climate. We currently have no viable means of removing excess methane or CO2 from the atmosphere, although it is assumed in the IPCC models that geo-engineering is employed to keep us below both 2 and 4 degrees Celsius of warming.

IMHO we are nearing an inflection point of survivability. What happens within the next 5 years will determine the chances of human civilizational survival. Everything else is just rearrangement of the deck chairs…
I am reminded of one of my prior posts "The ultimate population health 'Upstream' issue?" Tangentially, see my prior post "How will the health care system deal with THIS?"

Yeah, I'm a barrel of laughs today.

No shortage of exigent and otherwise daunting issues to contemplate, right? But, hey, POTUS 45 is all over it.


Yeah, "Happy Thanksgiving" to you too.

UPDATE

President Trump visits the Coast Guard on Thanksgiving Day to compliment them on their "Brand."


__

ERRATUM

Our friends at Uber have yet another shitstorm on their hands.
Uber Paid Hackers to Delete Stolen Data on 57 Million People

Hackers stole the personal data of 57 million customers and drivers from Uber Technologies Inc., a massive breach that the company concealed for more than a year. This week, the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers…
I've cited Uber before. See "Health Care Needs an Uber." It links to a raft of thorough Naked Capitalism long-read analyses of their dismal prospects.

Recent news reports announcing their intent to do an IPO in 2019 are just so much whistling-past-the-graveyard BS to me, but I know that doing one is their only path toward foisting off their untenable business model on the clueless public before their entire "Market Cap valuation" house of cards comes crashing down.

SAVE THE DATE

Details
____________

More to come...

Tuesday, November 21, 2017

For your health, Casa Suenos

We got home around midnight Saturday from Manzanillo, Mexico, where we'd spent eight days at the incredible, unique Casa Suenos. A "bucket list" trip for our ailing daughter, Danielle.


I think my son shot that sunset pic. Maybe it was his girlfriend Meg.

A decade ago, Danielle held a fundraiser for the non-profit youth golf organization in Las Vegas where she served as Executive Director at the time. One of the auction items was a week-long retreat at Casa Suenos.

My wife bid on it successfully. She and Danielle and my niece April and several others subsequently went down. I stayed behind because my friend Bill Champlin (former Chicago lead singer) was in Las Vegas for weekend performances at South Point. See my photo blog posts here and here.

I have never since heard the end of it from Danielle and Cheryl for not going to Manzanillo.

Some months ago, Danielle inquired of the proprietor Denny Riga regarding the possibility of getting a week at the villa at "cost." He'd not known of Danielle's illness, and that she was no longer working and had had to move back in with us.

He gave us the week gratis. We only had to pay for our food, drink, and travel. They even told us they were waiving the customary aggregate staff gratuity.

We were having none of that latter forbearance. There were eleven of us. They worked hard all week to ensure we had a fabulous time. Cheryl and I insisted on leaving an ample gratuity.


On Wednesday we had outdoor lunch and a swim at their private beach. There are no words, really.


Every meal for the entire week was beyond 5-star quality. Carlos, the chef, is an absolute culinary wizard. The service was impeccable. Every person we met while there, both staff and other locals, was beyond gracious and friendly. We could not have had a better time.


Danielle really needed this: muchas gracias.

EIGHT DAYS WITHOUT COMPUTER OR TV

I left my Mac Air home. They do have secure WiFi, so we could all use our iPhones to get online, mostly to upload photo updates to Facebook as the week went on. I brought both of my big cameras and shot a good number of pics with them (which I've yet to process), but we all mostly just used our smartphones for photos.

I brought Walter Isaacson's hardbound Steve Jobs bio, which I'd bought several years ago and never gotten around to reading.


A long read. Excellent. I finished it. Having been a resolute "Mac snob" all the way back to 1991, I consumed it with fascination. Complex, difficult guy. Visionary, but a major "***hole."
"This is a book about the roller-coaster life and searingly intense personality of a creative entrepreneur whose passion for perfection and ferocious drive revolutionized six industries: personal computers, animated movies, music, phones, tablet computing, and digital publishing. You might even add a seventh, retail stores, which Jobs did not quite revolutionize but did reimagine. In addition, he opened the way for a new market for digital content based on apps rather than just websites. Along the way he produced not only transforming products but also, on his second try, a lasting company, endowed with his DNA, that is filled with creative designers and daredevil engineers who could carry forward his vision. In August 2011, right before he stepped down as CEO, the enterprise he started in his parents’ garage became the world’s most valuable company.

This is also, I hope, a book about innovation. At a time when the United States is seeking ways to sustain its innovative edge, and when societies around the world are trying to build creative digital-age economies, Jobs stands as the ultimate icon of inventiveness, imagination, and sustained innovation. He knew that the best way to create value in the twenty-first century was to connect creativity with technology, so he built a company where leaps of the imagination were combined with remarkable feats of engineering. He and his colleagues at Apple were able to think differently: They developed not merely modest product advances based on focus groups, but whole new devices and services that consumers did not yet know they needed.

He was not a model boss or human being, tidily packaged for emulation. Driven by demons, he could drive those around him to fury and despair. But his personality and passions and products were all interrelated, just as Apple’s hardware and software tended to be, as if part of an integrated system. His tale is thus both instructive and cautionary, filled with lessons about innovation, character, leadership, and values…"


Isaacson, Walter (2011-10-24). Steve Jobs (Kindle Locations 341-355). Simon & Schuster. Kindle Edition.
Given my daughter's dire diagnosis, reading the particulars of Steve's ultimately fatal pancreatic cancer struggle was rather difficult.

I brought my iPad, wherein I have hundreds of Kindle edition books, but I never once fired it up. I finished the compelling Jobs bio while sitting on the tarmac at SFO awaiting a gate as we returned.

A highly recommended read.

BTW, amid the mail pile upon our return home was a signed copy of Steve Tobak's excellent book.


I'd previously cited and reviewed it on the blog, e.g., here. Out of the blue one day a couple of weeks ago he emailed me asking for my street address. We've never met in person; we're simply online "friends" via shared interests.

Steve's book is another highly recommended read (as is his blog; one of my requisite daily stops). I did a quick Kindle search, and I can report that Steve Tobak's numerous cites of Steve Jobs and Apple are uniformly spot-on (despite his admission to me that he'd not read the Isaacson bio).

Steve Tobak is an astute straight-shooter. Buy his book, and bookmark him.

HORSEBACK IN MANZANILLO


Coronas are required, LOL. Left to right: grandson Keenan (now 23), me, and my wife Cheryl.

CODA

My last topical post prior to our Mexico trip was "Artificial Intelligence and Ethics." This MIT Technology Review article showed up in my LinkedIn feed while we were in Manzanillo: "AI Can Be Made Legally Accountable For Its Decisions."

Stay tuned.

"OH, AND ONE MORE THING..."

Save the date.

Link
I asked for a press pass. No response this time thus far.
____________

More to come...