
Friday, December 8, 2017

Susannah Fox on the power of peer-to-peer communities for health



My brilliant friend Susannah Fox, former Chief Technology Officer of HHS, has launched a great new campaign.


The direct link is here: https://youtu.be/jCGk2n2zJ-s

I've covered Susannah's presentations for years at Health 2.0 and other events. She rocks.


I hope this new initiative gets huge traction. Pay it forward. I certainly will.

PS

Don't forget next Tuesday's Health 2.0 Technology for Precision Health Summit.
____________

More to come...

Tuesday, December 5, 2017

On deck: the Health 2.0 Technology for Precision Health Summit

Imagine, if you will, a future in which cancer is treated like a chronic condition, managed with lifestyle changes. Survivable. Manageable. Like diabetes. Sure, receiving a cancer diagnosis today does not mean what it meant twenty years ago, but we are also unlikely to ever reach a point of being casual about the term or the treatment plan.

In the meantime, though, the increasing prevalence of personal data collection is driving new approaches to care plans that have a real shot at improving quality of life. The narrative of one's life can be seen in the data: everything from where you live, what you eat, and how you work out, to what you search for on the internet. Such personal data comes from sources like clinical trials, biosensors, and wearables, and is being stored in your electronic medical record (EMR).

The sticking point, though, is the advancement of technological tools to view, aggregate, extract, and analyze the relevant data in order to derive a meaningful plan of attack (er, treatment plan). One interoperable tool that plugs right into the EMR comes from Cota Healthcare. Pair this with omics data and genome sequencing technology, like 2bPrecise, and physicians gain insight into what makes you, you -- and are thus better able to craft a bespoke cancer treatment plan, designed for you and only you.

Learn more about how omics data is driving new care plans, and see a live demo from Cota Healthcare and others at the Technology for Precision Health Summit next week in San Francisco.

Why wait? Register today for next week's event and save 50% on the ticket by using discount code TPH50.
Hope to see you there. Hashtag #tph2017.

"Omics," 'eh? See my prior "Personalized Medicine" and "Omics" -- HIT and QA considerations."

UPDATE

apropos of my prior post "Science, intellectual property, taxpayer funding, and the law."
Why a lot of important research is not being done
Aaron Carroll MD


We have a dispiriting shortage of high-quality health research for many reasons, including the fact that it’s expensive, difficult and time-intensive. But one reason is more insidious: Sometimes groups seek to intimidate and threaten scientists, scaring them off promising work.

By the time I wrote about the health effects of lead almost two years ago, few were questioning the science on this issue. But that has not always been the case. In the 1980s, various interests tried to suppress the work of Dr. Herbert Needleman and his colleagues on the effects of lead exposure. Not happy with Dr. Needleman’s findings, the lead industry got both the federal Office for Scientific Integrity and the University of Pittsburgh to conduct intrusive investigations into his work and character. He was eventually vindicated — and his discoveries would go on to improve the lives of children all over the country — but it was a terrible experience for him.

I often complain about a lack of solid evidence on guns’ relationship to public health. There’s a reason for that deficiency. In the 1990s, when health services researchers produced work on the dangers posed by firearms, those who disagreed with the results tried to have the National Center for Injury Prevention and Control shut down. They failed, but getting such work funded became nearly impossible after that…
And this is equally interesting:
Benign Effects of Automation: New Evidence from Patent Texts

Researchers disagree over whether automation is creating or destroying jobs. This column introduces a new indicator of automation constructed by applying a machine learning algorithm to classify patents, and uses the result to investigate which US regions and industries are most exposed to automation. This indicator suggests that automation has created more jobs in the US than it has destroyed…
As always at Naked Capitalism, spend some time reading the comments. Predominantly an articulate, well-informed crowd there.

ERRATUM

Posted by one of my FB friends. Lordy.


The jokes just write themselves.
____________

More to come...

Sunday, December 3, 2017

Science, intellectual property, taxpayer funding, and the law


Taxpayers, whether they know it or not, fund a huge proportion of scientific research across a breadth of fields. Fruits borne of such investigations subsequently often become the highly valuable "intellectual property" (IP) of academic institutions and private parties.

to wit, from a recent issue of my AAAS Science Magazine:
Racing for academic glory and patents: Lessons from CRISPR
Arti K. Rai, Robert Cook-Deegan


The much-publicized dispute over patent rights to CRISPR-Cas9 gene-editing technology highlights tensions that have been percolating for almost four decades, since the U.S. Bayh-Dole Act of 1980 invoked patents as a mechanism for promoting commercialization of federally funded research. With the encouragement provided by Bayh-Dole, academic scientists and their research institutions now race in dual competitive domains: the quest for glory in academic research and in the patent sphere. Yet, a robust economic literature (1, 2) argues that races are often socially wasteful; the racing parties expend duplicative resources, in terms of both the research itself and the legal fees spent attempting to acquire patents, all in the pursuit of what may be a modest acceleration of invention. For CRISPR, and future races involving broadly useful technologies for which it may set a precedent, the relationship between these competitive domains needs to be parsed carefully. On the basis of legal maneuvers thus far, it appears that the litigants will try for broad rights; public benefit will depend on courts reining them in and, when broad patents slip through, on updating Bayh-Dole's pro-commercialization safeguards with underused features of the Act,
       
Science, Commerce, CRISPR

The University of California (UC) and the Broad Institute of the Massachusetts Institute of Technology (MIT) and Harvard University are tangling over U.S. patent rights to CRISPR technology. In June 2012, Doudna of UC, Charpentier, and others demonstrated in vitro that a system comprising a Cas9 DNA endonuclease could be programmed with a single chimeric RNA molecule to cleave specific DNA sites (3). Six months later, in January 2013, Zhang of the Broad Institute reported genome editing in mammalian cells using CRISPR-Cas9 (4). The research underlying both of these seminal publications was supported by the U.S. National Institutes of Health (NIH) and thus were funded for public benefit and subject to terms of the Bayh-Dole Act (3, 4).
       
From the outset, CRISPR science has been intertwined with commercial considerations. UC filed its provisional patent application in May 2012, one month before publication of the research findings. Some of UC's initial claims swept very broadly, encompassing any DNA-targeting RNA containing a segment complementary to the target DNA and a segment that interacts with a “site-directed modifying polypeptide” (5). Although UC subsequently restricted its claims to a type II CRISPR-Cas9 system, it continues to claim this system in any species as well as in vitro.
       
The Broad Institute began filing patent applications in December 2012. Broad paid for expedited examination, with the result that its patents began to issue in April 2014. The relevant Broad patents claim alteration of eukaryotic cell DNA by using type II CRISPR-Cas9 systems.
       
After the Broad patents began to issue, UC asked for a legal proceeding known as an interference, arguing that because the subject matter of its application, which had not yet been granted, overlapped with that of the Broad patents, the U.S. Patent and Trademark Office (USPTO) had to declare who was the “first to invent.” In February 2017, the USPTO ruled that the overlap did not exist (6). Specifically, USPTO held that the Broad team's success in eukaryotes was not scientifically “obvious” in light of UC's demonstration of success in vitro…
       
Patent Races, Scientific Credit
As histories of key technologies such as the telephone demonstrate (12), and as studies of patent records have documented on a large scale empirically (13), patent races are common. Moreover, legal rights can get confused with scientific credit (12). It is important to get the rules right, not only for the patent system but also for academic science.

Whatever the outcome of the patent interference proceeding, scientific and patent priority are not the same. The rules for securing a patent should differ in important ways from the rules for scientific credit and recognition. Although current rules for academic credit may seem murky or unfair to some individuals who feel left out, winner-take-all scenarios for important patents, which confer legal exclusivity over inventions, are more pernicious because they confer powerful rights to exclude…
       
Shifting Roles of Universities
The problem of inefficiency in races is exacerbated if the racing parties believe they can, via an overly broad patent, make the race a winner-take-all fight: the bigger the prize at the end of the race, the more likely we will see inefficiency. So, overly broad patents that emerge from winner-take-all races are not only likely to hamper downstream development, they are also likely to encourage upstream duplication. In contrast, to the extent that racing parties have engaged not in wholesale duplication but have pursued valuable divergent paths (14), narrow patents can be awarded to multiple racers, allowing diversity to be rewarded…

Improving Bayh-Dole
A first-best result in the CRISPR race would be narrow patents that both diminish incentives for wasteful future racing and prevent any player from imposing substantial control over downstream innovation. But to the extent broad patents that impede development nonetheless slip through, presumably because CAFC does not choose to follow its admittedly controversial precedent on written description, the CRISPR controversy suggests the need for renewed attention to prior literature that highlights avenues for improving Bayh-Dole's pro-commercialization safeguards. Regulatory improvements that have been suggested in the literature include clarifying government-use rights, extending them to grantees and contractors; recognizing situations in which patenting is not the shortest or best path to widespread application; and simplifying procedures for “march in” to compel additional licensing when health and safety needs are not being met by those with exclusive rights (25).

To our knowledge, none of these academic proposals for improvement have been investigated seriously by the U.S. Department of Commerce, which is responsible for administering Bayh-Dole. The CRISPR controversy may be a catalyst for action. Improving pro-commercialization safeguards would be prudent even if narrow rights were ultimately at stake. But, improvement could prove particularly critical in the event that courts let overly broad rights emerge.
'eh?

Ahhh... patent actions.


Taxpayers, recall, paid for the initial development of the internet itself -- U.S. commercial control of which is soon to be handed over to a handful of major corporations courtesy of the Trump Administration's FCC.

apropos, from the bracingly iconoclastic Yves Smith (re: "Net Neutrality"):
The Trinet
The internet will survive longer than the web will. GOOG-FB-AMZN will still depend on submarine internet cables (the “Backbone”), because it is a technical success. That said, many aspects of the internet will lose their relevance, and the underlying infrastructure could be optimized only for GOOG traffic, FB traffic, and AMZN traffic. It wouldn’t conceptually be anymore a “network of networks”, but just a “network of three networks”, the Trinet, if you will. The concept of a workplace network which gave birth to the internet infrastructure would migrate to a more abstract level: Facebook Groups, Google Hangouts, G Suite, and other competing services which can be acquired by a tech giant...
Ya gotta spend serious time at Naked Capitalism. I do, daily. Comment there with care, though; they do not suffer fools gladly.
__

Regarding digital IP broadly, recall my prior post citing apparent "patent troll" Robert Budzinski, "NLP, meet NPE: the curious case of Robert Budzinski."

CLIMATE SCIENCE IN COURT

From Public Radio's KPBS:
Climate Paper At Center Of Scientist-Versus-Scientist Legal Dispute

A Stanford University climate scientist has taken the unusual step of suing another scientist, arguing that a rebuttal paper critiquing his work amounted to defamation.

Two years ago, Stanford professor Mark Jacobson published a scientific paper that quickly grabbed national attention: in it he said that a completely renewable energy grid could be possible by 2050, as new ways to store energy would allow the system to keep up with demand even in moments when winds slow down or clouds cover solar panels. The paper was mentioned by Sen. Bernie Sanders and was named one of the best of 2015.

But this June, a group of researchers published its own paper, which said Jacobson ignored important costs, making the type of system Jacobson envisioned theoretically possible, but likely too expensive to be realistic. The lead author was Christopher Clack, a former researcher at the University of Colorado Boulder and now CEO of the nonprofit Vibrant Clean Energy. UC San Diego professor David Victor was one of 20 other authors.

Jacobson sued Clack and the National Academy of Sciences, which published the paper, in late September. He said Clack’s work defamed him…
Audio interview here:



Interesting stuff, all of it.

NEW READING


Stay tuned. I'm about 10% through; very nice thus far. It's written by one of the principals of San Francisco's M34 Capital, Inc. There's a specific reason for my interest in these people.

UPDATE

More "AI" news...

November 27, 2017 - Depending on who is making the statement, artificial intelligence is either the best thing to happen to healthcare since penicillin or the beginning of the end of human involvement in the medical arts.

Robot clinicians are coming to take over every job that can possibly be automated, warn the naysayers.

That might not be such a terrible thing, say the enthusiasts. The sooner the healthcare system can eliminate human error and inefficiency, the safer, happier, and healthier patients will be.

In reality, artificial intelligence is still many, many years away from replacing the clinical judgement of a living, thinking person, says Dr. Joe Kimura, Chief Medical Officer at Atrius Health. And it may not ever do so.

While AI and machine learning hold enormous potential to improve the way clinicians practice, AI proponents should try to temper their expectations, and cynics worried for their jobs can relax for the moment – there is a great deal of work to be done before providers can or should trust computers to make reliable decisions for them…
Interesting. See my prior post "AI vs IA: At the cutting edge of IT R&D." See also "The future of health care? "Flawlessly run by AI-enabled robots, and 'essentially' free?""

All of it warrants continued attention.

MORE

Amid the Monday morning news, from THCB:
Could Artificial Intelligence Destroy Radiology by Litigation Claims?
By HUGH HARVEY, MD


We’ve all heard the big philosophical arguments and debate between rockstar entrepreneurs and genius academics – but have we stopped to think exactly how the AI revolution will play out on our own turf?

At RSNA this year I posed the same question to everyone I spoke to: What if radiology AI gets into the wrong hands? Judging by the way the crowds voted with their feet by packing out every lecture on AI, radiologists would certainly seem to be very aware of the looming seismic shift in the profession – but I wanted to know if anyone was considering the potential side effects, the unintended consequences of unleashing such a disruptive technology into the clinical realm?…
Read all of it.

ANOTHER BOOK TO READ?


Ran across this title. Looks interesting, given the amount of thought and blogging I've given to AI issues (mostly, albeit not exclusively, pertaining to health care).
"Artificial Intelligence helps choose what books you buy, what movies you see, and even who you date. It puts the "smart" in your smartphone and soon it will drive your car. It makes most of the trades on Wall Street, and controls vital energy, water, and transportation infrastructure. But Artificial Intelligence can also threaten our existence.

In as little as a decade, AI could match and then surpass human intelligence. Corporations and government agencies are pouring billions into achieving AI's Holy Grail—human-level intelligence. Once AI has attained it, scientists argue, it will have survival drives much like our own. We may be forced to compete with a rival more cunning, more powerful, and more alien than we can imagine..."
Dunno; my daughter's illness continues to hamper my reading pace. I have about four books in process at the moment, including this one:


I first cited it here.

James Barrat's website is here.
"Intelligence, not charm or beauty, is the special power that enables humans to dominate Earth. Now, propelled by a powerful economic wind, scientists are developing intelligent machines. Each year intelligence grows closer to shuffling off its biological coil and taking on an infinitely faster and more powerful synthetic one." 
The Mark 13

We'll see. So much to continue to learn.
____________

More to come...

Wednesday, November 29, 2017

"SlaughterBots?" #AI-driven autonomous weaponry?


Is this a realistic, exigent concern? We're all ga-ga these days over the beneficent potential of AI in the health care space, but...



From the autonomousweapons.org site:
What are killer robots?
Killer robots are weapons systems that, once activated, would select and fire on targets without meaningful human control. They are variously termed fully autonomous weapons or lethal autonomous weapons systems.

The concern is that low-cost sensors and rapid advances in artificial intelligence are making it increasingly possible to design weapons systems that would target and attack without further human intervention. If this trend towards autonomy continues, the fear is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, and then no role at all.

The US and others state that lethal autonomous weapon systems “do not exist” and do not encompass remotely piloted drones, precision-guided munitions, or defensive systems. Most existing weapons systems are overseen in real-time by a human operator and tend to be highly constrained in the tasks they are used for, the types of targets they attack, and the circumstances in which they are used.

While the capabilities of future technology are uncertain, there are strong reasons to believe that fully autonomous weapons could never replicate the full range of inherently human characteristics necessary to comply with international humanitarian law’s fundamental rules of distinction and proportionality. Existing mechanisms for legal accountability are ill suited and inadequate to address the unlawful harm that fully autonomous weapons would be likely to cause...
Wonder what Elon Musk thinks about this stuff? Recent reporting.


'eh? See also my prior post "Artificial Intelligence and Ethics."

ERRATUM

Read an interesting set of posts over at Medium today.

"What to look (out) for when raising money."
Good stuff.
If You Take Venture Capital, You’re Forcing Your Company To Exit
To understand venture capital, you must understand the consequences of how VCs return capital to their investors


Why the Most Successful VC Firms Keep Winning
The best companies seek out the most successful investors, and gains accumulate at the top
 

One Investor Isn’t Enough
A company’s success is highly reliant on peer validation of investor decisions. This stunts diversity and must change if we want the best founders working on the biggest opportunities.
I follow VC news tangentially in the health care space, e.g., "#WinterTech2016: Venture Capital in #HealthIT."

I was just apprised of this company.

"We are a private investment company that practices a disciplined, evidence-based approach to define innovative business models capable of delivering new products and services that can positively impact the way we live, work and play.

What We Look For
We are not traditional seed investors. Rather, we are active and engaged co-founders who seek to get the most promising opportunities from the lab to the market. We bring to the table decades of entrepreneurial success and experience, offering seasoned perspectives. We seek individuals who share our desire for exploration, who are intellectually honest and have insatiable curiosity, who have the ability and the desire to systematically test their assumptions in the real world. We believe that the best entrepreneurs have the ability to blend technical insight with market-based feedback, allowing their innovations to mature into a successful company. Even our name evokes this passion. M34 is short for Mach 34, the speed that an object needs to achieve to escape the gravitational pull of the earth. Our goal for all of our companies is to achieve a successful liftoff from traditional forces that hold most new ventures back...."
Interesting. I found this noteworthy in particular:
M34 Capital founders understand the risks associated with developing new and innovative ideas. We have pioneered the principles of the lean startup approach to accelerate and improve the efficiency of the commercialization of new technologies.
More on this stuff perhaps later. "Lean startup?" Hmmm...

Seriously; this old-school Deming/Shewhart dude loves me some "Lean."

UPDATE

apropos of the upcoming December 12th Health 2.0 "Technology for Precision Health Summit."
Health 2.0 sat down with Linda Molnar to discuss the evolution of Precision Health, the imperatives at stake in a fast-paced field, and empowerment through big data. Linda has over 20 years in the field of Life Sciences and is responsible for a number of initiatives that further the field with start-ups, the feds, and for investors. Her current endeavor is leading the upcoming Technology for Precision Health Summit in San Francisco alongside Health 2.0. “We’re never going to pull together all of this disparate data from disparate sources in a meaningful (i.e. clinically actionable) way, unless we talk about it” she says. The Summit is an attempt to bring together the worlds of Precision Medicine and Digital Healthcare to realize the full potential of a predictive and proactive approach to maintaining health...

____________

More to come...

Monday, November 27, 2017

From Pythagoras to petabytes

A few days back, I was surfing through a number of news articles recounting Elon Musk's latest freakout over The Existential Menace of AI. See, e.g., "AI is highly likely to destroy humans, Elon Musk warns."


A commenter beneath one of the articles (TechCrunch's "WTF is AI?") briefly cited this book.


OK, Elon's pearl-clutching will have to wait.

Garth's book is a mere $4.61 in the Kindle edition (and worth every penny). A must-read, IMO. Spanning, well, "Pythagoras to petabytes." (I'd have chosen "yottabytes," but that didn't have the cutesy alliterative post-title ring.)
IT’S EASY TO FORGET that the digital age, and so the existence of computer programmers, still only spans a single working life, and one of those lives is mine. In my career as a commercial computer programmer I’ve experienced most of the changes that have occurred in the programming world in all their frequent craziness. This is my attempt to preserve that odyssey for the historical record...
For myself and many others, commercial computer programming has been a rewarding career since at least the late 1960s and during that time programming has seen massive changes in the way programmers work. The evolution of many different programming models and methodologies, the involvement of many extraordinary characters, endless ‘religious wars’ about the correct way to program and which standards to use. It would be a great shame if a new generation of programmers were unaware of the fascinating history of their profession, or hobby, and so I came to write this book...
Four of the overall themes in the book can be identified by searching for #UI (User Interface), #AI (Artificial Intelligence), #SOA (Service Oriented Architecture) and #PZ (Program componenti-Zation) and the Table of Contents is detailed enough to guide dipping in/ dipping out…
___
In Years BC (Before Computers)
Sometimes it seems like computers and the digital revolution sprang out of nowhere but of course they didn’t. They were the result of at least two and a half millennia of evolutionary development which can be traced all the way back to the ancient Greek philosopher Pythagoras. When I graduated from university in 1968 I had a degree in philosophy but I somehow managed to be awarded it without having absorbed much ancient Greek philosophy or the history and philosophy of science. There were just too many well documented distractions in the sixties. It was only later that I came to realise how critical they were in creating the context for digital computers and with them the new career of commercial computer programmer.

So we need to briefly cover some history in order to set the scene for my account though only the necessary facts will be covered. We’ll only be scratching the surface so don’t panic but if you prefer to skip this Prologue that’s OK. By the way I use BC and AD for dates and not BCE and CE, there is no particular reason, it’s just habit.

550 – 500 BC
During this period Pythagoras the ancient Greek philosopher was active in a Greek colony in southern Italy. No written record of his thinking survives from the time but he is known to have influenced Plato and Aristotle and through them all of Western thinking. He is rumored to have been the first person to ever call himself a philosopher…

Eaglesfield, Garth. The Programmer's Odyssey: A Journey Through The Digital Age (Kindle Locations 94-129). Pronoun. Kindle Edition.
Garth and I are roughly the same age (both born in 1946). He came to computing about 12 years before I did. My initiation came tangentially in my 30s, in the course of my undergrad research (I was studying Psychology and Statistics at Tennessee; we were investigating "math anxiety" among college students taking stats courses). We wrote SAS and SPSS code via primitive line editors (e.g., edlin and vi), with laboriously entered inline research-results data, prefaced with JCL (Job Control Language) headers, all submitted to the DEC/VAX over a 300 baud dialup modem (after which we'd schlep over to the computer room to fetch the 132-column greenbar printouts once they were available).

My, my, how times have changed.

After graduation in 1985, I got my first white-collar gig in January 1986, writing code in a radiation laboratory in Oak Ridge (pdf). In the years after that, my time was spent principally as an analyst, using mostly SAS and Stata, in health care (pdf) and bank risk management.

I found this particularly interesting in Garth's book:
On The Road To Bell Labs
I felt that I needed to get Unix and C on to my CV and where better to do that than at its birthplace where it all started? Bell Labs was in New Jersey on the other side of the Hudson river and would involve some travel but I knew that other contractors from Manhattan worked there so I reasoned that there must be some way of coping. The Holy Grail of the various Bell Labs, which were scattered around New Jersey, was the Murray Hills lab, it was the real Unix/ C holy of holies. But it was hard to get into and so I eventually got a contract at the Whippany lab. It was closer to Manhattan and was more of a traditional telephone company engineering lab but they used Unix and C so that would get it on to my CV. The project I worked on was the software for a diagnostic device used in network line testing…

At the Labs I was given a copy of the now legendary ‘White Book’ written by Kernighan and Ritchie accompanied by a Unix manual and I set about absorbing them both.

Entering your C source code for a program was done with the ubiquitous Unix editor vi from a dumb terminal. When vi was developed at Bell Labs the quest for portability had been continued by using ordinary characters for edit commands so that it could be used with any dumb terminal’s keyboard. Those keyboards of course did not have all the control keys that modern keyboards come with such as arrow keys, Home, End, Page Up, Page Down, and so on. As with everything in the Unix world the vi key combinations ranged from simple and intuitive to complex and barely comprehensible. It operated in 2 modes, insert and command modes, and in command mode, for instance, it used the normal keyboard characters ‘h’ ‘l’ ‘k’ and ‘j’ as substitutes for the absent left, right, up and down arrow keys.

Compared to the VMS operating system and to high level programming languages like COBOL, Coral 66 and Pascal, Unix and C certainly represented a much lower level, closer to the hardware and more like assembler coding. There were relatively few reserved words in the C language, with the main ones being for data classes (int, char, double, float), data types (signed, unsigned, short, long), looping verbs (for, while, continue, break), decision verbs (if, else, switch, case) and control verbs (goto, return). C took the componentization of computer programs several steps further through the use of functions, a specific form of subroutine. Functions returned a single value that might then be part of an equation...
(ibid, Kindle Locations 1677-1706).
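A minimal illustrative sketch (mine, not Garth's) of the sort of thing he's describing: a small C program in which a function returns a single value that then becomes part of an expression, using a handful of those few reserved words. All names here are hypothetical.

    #include <stdio.h>

    /* A function: a specific form of subroutine that returns a single value. */
    int square(int n)
    {
        return n * n;
    }

    int main(void)
    {
        int i;
        int total = 0;

        /* A few of C's reserved words in action: int, for, if, else, return. */
        for (i = 1; i <= 5; i++) {
            if (i % 2 == 0)
                total = total + square(i);  /* the returned value becomes part of an equation */
            else
                total = total + i;
        }
        printf("total = %d\n", total);
        return 0;
    }

Something like this would have been typed character by character in vi's insert mode, compiled with cc, and run from the shell -- pretty much the whole toolchain of the era.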
My late Dad came back from WWII (minus a leg), mustered out of the military, and went to work for Bell Labs, first at Murray Hill, and subsequently at the Whippany location. He worked in semiconductor R&D his entire career at Bell Labs (the only civilian job he ever had, until he took his tax-free early VA disability pension in 1972 and dragged my Ma off to the humid swamps of Palm Bay, Florida so he could play golf all the time).

Looks like Garth came to Bell Labs a few years after Pop had retired.

"The Programmer's Odyssey" covers a ton of ground, weaving great historical and technical detail into a fun read of a personal story. I give it an enthusiastic 5 stars.
"It would be a great shame if a new generation of programmers were unaware of the fascinating history of their profession, or hobby, and so I came to write this book."
Yeah. I am reminded of my prior post "12 weeks, 1,200 hours, and $12,000, and you're a "Software Engineer"? Really?"

In closing,
Postscript
We have reached the end of the odyssey. But where have we arrived?

Well in some ways, like Odysseus himself, we have finally come home, right back to where we started. Some things have changed, we now have a massive multi-layered global network of increasingly smart devices, including the ultimate smart device the human being. Yet ultimately it’s all based on the same old von Neumann computer architecture and on the two digits 0 and 1, the binary system. The whole extraordinary software edifice rests on that foundation, which is really mindboggling.

Have we learned anything on our odyssey that suggests this is likely to change? Perhaps a new computer hardware architecture will finally emerge? Articles regularly appear about quantum computing devices and IBM have announced a project to research cognitive SyNAPSE chips but so far these are still very much research projects. It is also being said that the miniaturisation of silicon chips by Intel and others will reach its limit by 2020 and Moore’s Law will finally come to an end.

On our odyssey we, like Odysseus, have encountered dangerous and worrying phenomena, in particular whether any conceivable developments in computer technology will put the human race in danger. It’s hard to believe that based on existing system components a self-conscious sentient being will come into existence like HAL in 2001 and threaten our own survival. Throughout history major technological advances have tended to become models for explaining human beings, always incorrectly. In the digital age this has meant starting to see human beings as just computing machines. Will this prove to be true or false? The jury is still out.

But if we are in danger from our technology perhaps it’s most likely to come from an artificial living creature/ cyborg/ replicant enhanced with implanted computer chips and interfaces that we have created? Created perhaps by using advanced tools such as the already being discussed DNA editors? The recent TV series ‘Humans’ is an interesting attempt to confront some of these issues and two recent books have added significantly to the debate; Rise of the Machines: The Lost History of Cybernetics; Thomas Rid; Scribe Publications and Homo Deus: A Brief History of Tomorrow, by Yuval Noah Harari, Harvill Secker.

Questions, questions. As the philosophically inclined outlaw replicant Roy Batty so presciently remarked in Blade Runner…
(ibid, Kindle Locations 2646-2666)
'eh?


Kudos, Mr. Eaglesfield. I'd make this book required reading at "software engineer boot camp." (BTW, Garth's website is here.)

Also, Garth turned me on to this resource where he blogs:


Check out in particular the "History of Computing and Programming" page.
___

Other prior posts of mine of relevance here?
Then there are my posts on "NLP" (Natural Language Processing).

NEXT UP

Got a new Twitter follower.

The Web's Best Content Extractor
Diffbot works without rules or training. There's no better way to extract data from web pages. See how Diffbot stacks up to other content extraction methods:
Pretty interesting. Stay tuned. We'll see.

SAVE THE DATE

They just approved my press pass. Follow hashtag #health2con and Twitter handle @Health2con.

Details
UPDATE

From "history" to "futurism" predictions. I saw this infographic on Pinterest, and chopped it up into 4 segments for viewability.


 See also "Types of AI."

UPDATE

Garth just reciprocated my foregoing tout with the kindest mention of me. See "Cometh the hour, cometh the man."

Again, not kidding about how cool his book is.
____________

More to come...

Wednesday, November 22, 2017

The ultimate health issue

My December issue of Harper's arrived in the snailmail yesterday. Apropos of the worrisome topic that is surely on many minds these days, in light of who has sole access to the U.S. "nuclear button,"

Destroyer of Worlds
Taking stock of our nuclear present


In February 1947, Harper’s Magazine published Henry L. Stimson’s “The Decision to Use the Atomic Bomb.” As secretary of war, Stimson had served as the chief military adviser to President Truman, and recommended the attacks on Hiroshima and Nagasaki. The terms of his unrepentant apologia, an excerpt of which appears on page 35, are now familiar to us: the risk of a dud made a demonstration too risky; the human cost of a land invasion would be too high; nothing short of the bomb’s awesome lethality would compel Japan to surrender. The bomb was the only option.

Seventy years later, we find his reasoning unconvincing. Entirely aside from the destruction of the blasts themselves, the decision thrust the world irrevocably into a high-stakes arms race — in which, as Stimson took care to warn, the technology would proliferate, evolve, and quite possibly lead to the end of modern civilization. The first half of that forecast has long since come to pass, and the second feels as plausible as ever. Increasingly, the atmosphere seems to reflect the anxious days of the Cold War, albeit with more juvenile insults and more colorful threats. Terms once consigned to the history books — “madman theory,” “brinkmanship” — have returned to the news cycle with frightening regularity.

In the pages that follow, seven writers and experts survey the current nuclear landscape. Our hope is to call attention to the bomb’s ever-present menace and point our way toward a world in which it finally ceases to exist.
From the subsequent “Haywire” essay therein:
...Today, the time frame of an attack has been reduced to mere seconds. It once took three or four nuclear warheads aimed at every silo to render an adversary’s missiles useless, a redundancy thought necessary for certain destruction. Intercontinental ballistic missiles may now be made inoperable with a single keystroke. Computer viruses and malware have the potential to spoof or disarm a nuclear command-and-control system silently, anonymously, almost instantaneously. And long-range bombers and missiles are no longer required to obliterate a city. A nuclear weapon can be placed in a shipping container or on a small cabin cruiser, transported by sea, and set off remotely by cellular phone.

A 2006 study by the Rand Corporation calculated the effects of such a nuclear detonation at the Port of Long Beach, California. The weapon was presumed to be two thirds as powerful as the bomb that destroyed Hiroshima in 1945. According to the study, about 60,000 people in Long Beach would be killed, either by the blast or by the effects of radiation. An additional 150,000 would be exposed. And 8,000 would suffer serious burns. At the moment, there are about 200 burn beds at hospitals in California — and about 2,000 nationwide. Approximately 6 million people would try to flee Los Angeles County, with varying degrees of success. Gasoline supplies would run out. The direct cost of that single detonation was estimated to be about $1 trillion. In September, North Korea detonated a nuclear device about thirty times more powerful than the one in the Rand study…
'eh?

If you're not a Harper's subscriber, I would suggest you get a newsstand copy and read all of this.

As if we didn't have enough to be concerned about, I give you a comment from the Naked Capitalism blog post "Capitalism: Not With a Bang But With a (Prolonged) Whimper."
Michael
November 21, 2017 at 11:01 am


In the article the author claims it may take centuries to evolve away from neo-classical economics. I believe it will be much shorter. As pointed out by 15,000 climate scientists we only have a few years to take corrective action before the climate goes into a runaway state.

The head of United Planet Faith & Science Initiative, Stuart Scott, believes the root of the problem is neo-classical economics since it makes the assumption that the planet’s resources are infinite, and the ability to destroy the planet without consequence is also infinite. There is no measure in this paradigm for the ecological costs of capitalism.

Dr. Natalia Shakhova and colleagues from the University of Alaska have been monitoring methane eruptions for a number of years and claim there will be a 50 gigaton methane eruption in the East Siberian Arctic Shelf within the next ten years, since as of 2013 the amount of methane released from this area had doubled since previous measures. Dr. Peter Wadhams of Cambridge University calculates that should this occur, the average global temperature of the planet would increase by .6 degrees Celsius within a decade.

It is predicted that this will cause a drastic jet stream disruption creating drought, flood, and heat waves that will make food production for 7 billion people impossible. There will probably be a population crash along with accompanying wars.

Additionally, inasmuch as we are already 1.1 degrees Celsius above preindustrial levels this would put us close to 2 degrees Celsius and on a path to a runaway climate. We currently have no viable means of removing excess methane nor CO2 from the atmosphere, although it is assumed in the IPCC models that geo-engineering is employed to keep us below both 2 and 4 degrees Celsius of warming.

IMHO we are nearing an inflection point of survivability. What happens within the next 5 years will determine the chances of human civilizational survival. Everything else is just rearrangement of the deck chairs…
I am reminded of one of my prior posts "The ultimate population health 'Upstream' issue?" Tangentially, see my prior post "How will the health care system deal with THIS?"

Yeah, I'm a barrel of laughs today.

No shortage of exigent and otherwise daunting issues to contemplate, right? But, hey, POTUS 45 is all over it.


Yeah, "Happy Thanksgiving" to you too.

UPDATE

President Trump visits the Coast Guard on Thanksgiving Day to compliment them on their "Brand."


__

ERRATUM

Our friends at Uber have yet another shitstorm on their hands.
Uber Paid Hackers to Delete Stolen Data on 57 Million People

Hackers stole the personal data of 57 million customers and drivers from Uber Technologies Inc., a massive breach that the company concealed for more than a year. This week, the ride-hailing firm ousted its chief security officer and one of his deputies for their roles in keeping the hack under wraps, which included a $100,000 payment to the attackers…
I've cited Uber before. See "Health Care Needs an Uber." It links to a raft of thorough Naked Capitalism long-read analyses of their dismal prospects.

Recent news reports announcing their intent to do an IPO in 2019 are just so much whistling-past-the-graveyard BS to me, but I know that doing one is their only path toward foisting off their untenable business model on the clueless public before their entire "market cap valuation" house of cards comes crashing down.

SAVE THE DATE

Details
____________

More to come...

Tuesday, November 21, 2017

For your health, Casa Suenos

We got home around midnight Saturday from Manzanillo, Mexico, where we'd spent eight days at the incredible, unique Casa Suenos. A "bucket list" trip for our ailing daughter, Danielle.


I think my son shot that sunset pic. Maybe it was his girlfriend Meg.

A decade ago, Danielle held a fundraiser for the non-profit youth golf organization in Las Vegas where she served as Executive Director at the time. One of the auction items was a week-long retreat at Casa Suenos.

My wife bid on it successfully. She and Danielle and my niece April and several others subsequently went down. I stayed behind because my friend Bill Champlin (former Chicago lead singer) was in Las Vegas for weekend performances at South Point. See my photo blog posts here and here.

I have never heard the end of it from Danielle and Cheryl for not going to Manzanillo.

Some months ago, Danielle inquired of the proprietor, Denny Riga, regarding the possibility of getting a week at the villa at "cost." He'd not known of Danielle's illness, nor that she was no longer working and had had to move back in with us.

He gave us the week gratis. We only had to pay for our food, drink, and travel. They even told us they were waiving the customary aggregate staff gratuity.

We were having none of that latter forbearance. There were eleven of us; the staff worked hard all week to ensure we had a fabulous time. Cheryl and I insisted on leaving an ample gratuity.


On Wednesday we had outdoor lunch and a swim at their private beach. There are no words, really.


Every meal for the entire week was beyond 5-star quality. Carlos, the chef, is an absolute culinary wizard. The service was impeccable. Every person we met while there, both staff and other locals, was beyond gracious and friendly. We could not have had a better time.


Danielle really needed this: muchas gracias.

EIGHT DAYS WITHOUT COMPUTER OR TV

I left my Mac Air home. They do have secure WiFi, so we could all use our iPhones to get online, mostly to upload photo updates to Facebook as the week went on. I brought both of my big cameras and shot a good number of pics with them (which I've yet to process), but we all mostly just used our smartphones for photos.

I brought Walter Isaacson's hardbound Steve Jobs bio, which I'd bought several years ago and never gotten around to reading.


A long read. Excellent. I finished it. Having been a resolute "Mac snob" all the way back to 1991, I consumed it with fascination. Complex, difficult guy. Visionary, but a major "***hole."
"This is a book about the roller-coaster life and searingly intense personality of a creative entrepreneur whose passion for perfection and ferocious drive revolutionized six industries: personal computers, animated movies, music, phones, tablet computing, and digital publishing. You might even add a seventh, retail stores, which Jobs did not quite revolutionize but did reimagine. In addition, he opened the way for a new market for digital content based on apps rather than just websites. Along the way he produced not only transforming products but also, on his second try, a lasting company, endowed with his DNA, that is filled with creative designers and daredevil engineers who could carry forward his vision. In August 2011, right before he stepped down as CEO, the enterprise he started in his parents’ garage became the world’s most valuable company.

This is also, I hope, a book about innovation. At a time when the United States is seeking ways to sustain its innovative edge, and when societies around the world are trying to build creative digital-age economies, Jobs stands as the ultimate icon of inventiveness, imagination, and sustained innovation. He knew that the best way to create value in the twenty-first century was to connect creativity with technology, so he built a company where leaps of the imagination were combined with remarkable feats of engineering. He and his colleagues at Apple were able to think differently: They developed not merely modest product advances based on focus groups, but whole new devices and services that consumers did not yet know they needed.

He was not a model boss or human being, tidily packaged for emulation. Driven by demons, he could drive those around him to fury and despair. But his personality and passions and products were all interrelated, just as Apple’s hardware and software tended to be, as if part of an integrated system. His tale is thus both instructive and cautionary, filled with lessons about innovation, character, leadership, and values…"


Isaacson, Walter (2011-10-24). Steve Jobs (Kindle Locations 341-355). Simon & Schuster. Kindle Edition.
Given my daughter's dire diagnosis, reading the particulars of Steve's ultimately fatal pancreatic cancer struggle was rather difficult.

I brought my iPad, wherein I have hundreds of Kindle edition books, but I never once fired it up. I finished the compelling Jobs bio while sitting on the tarmac at SFO awaiting a gate as we returned.

A highly recommended read.

BTW, amid the mail pile upon our return home was a signed copy of Steve Tobak's excellent book.


I'd previously cited and reviewed it on the blog, e.g., here. Out of the blue one day a couple of weeks ago, he emailed me asking for my street address. We've never met in person; we're simply online "friends" via shared interests.

Steve's book is another highly recommended read (as is his blog; one of my requisite daily stops). I did a quick Kindle search, and I can report that Steve Tobak's numerous cites of Steve Jobs and Apple are uniformly spot-on (despite his admission to me that he'd not read the Isaacson bio).

Steve Tobak is an astute straight-shooter. Buy his book, and bookmark him.

HORSEBACK IN MANZANILLO


Coronas are required, LOL. Left to right: grandson Keenan (now 23), me, and my wife Cheryl.

CODA

My last topical post prior to our Mexico trip was "Artificial Intelligence and Ethics." This MIT Technology Review article showed up in my LinkedIn feed while we were in Manzanillo: "AI Can Be Made Legally Accountable For Its Decisions."

Stay tuned.

"OH, AND ONE MORE THING..."

Save the date.

Link
I asked for a press pass. No response this time thus far.
____________

More to come...

Friday, November 10, 2017

Cloud hidden, whereabouts unknown


I will be offline from November 11th through the 18th. Taking my ailing daughter on a "bucket list" vacation retreat out of the U.S. She's done better thus far than we'd initially expected, but our world these days is one of always anxiously waiting for the next shoe to drop.

She'll be back on chemo once we return (round 15), with the next follow-up CTs and MRIs in December.

Not taking my Mac Air (don't want the TSA and Customs hassles), so I'll be back in the fray once I return. I've left you plenty of material.
________

Thursday, November 9, 2017

Artificial Intelligence and Ethics


I was already teed up for the topic of this post, but serendipitously just ran across this interesting piece over at Naked Capitalism.
Why You Should NEVER Buy an Amazon Echo or Even Get Near One
by Yves Smith


At the Philadelphia meetup, I got to chat at some length with a reader who had a considerable high end IT background, including at some cutting-edge firms, and now has a job in the Beltway where he hangs out with military-surveillance types. He gave me some distressing information on the state of snooping technology, and as we’ll get to shortly, is particularly alarmed about the new “home assistants” like Amazon Echo and Google Home.

He pointed out that surveillance technology is more advanced than most people realize, and that lots of money and “talent” continues to be thrown at it. For instance, some spooky technologies are already decades old…
Read all of it, including the numerous comments.

"Your Digital Mosaic"

My three prior posts have returned to my episodic riffing on AI and robotics topics: see here, here, and here.

Earlier this week I ran across this article over at Wired:
WHY AI IS STILL WAITING FOR ITS ETHICS TRANSPLANT

There’s no lack of reports on the ethics of artificial intelligence. But most of them are lightweight—full of platitudes about “public-private partnerships” and bromides about putting people first. They don’t acknowledge the knotty nature of the social dilemmas AI creates, or how tough it will be to untangle them. The new report from the AI Now Institute isn’t like that. It takes an unblinking look at a tech industry racing to reshape society along AI lines without any guarantee of reliable and fair results…
Again, I highly recommend you read all of it.

That led me to the "AI Now 2017 Report." (pdf)


The AI Now authors' 36-page report examines in heavily documented detail (191 footnotes) four topical areas of AI applications and their attendant ethical issues: [1] Labor and Automation; [2] Bias and Inclusion; [3] Rights and Liberties; and [4] Ethics and Governance.

From the Institute's website:
Rights & Liberties
As artificial intelligence and related technologies are used to make determinations and predictions in high stakes domains such as criminal justice, law enforcement, housing, hiring, and education, they have the potential to impact basic rights and liberties in profound ways. AI Now is partnering with the ACLU and other stakeholders to better understand and address these impacts.

Labor & Automation
Automation and early-stage artificial intelligence systems are already changing the nature of employment and working conditions in multiple sectors. AI Now works with social scientists, economists, labor organizers, and others to better understand AI's implications for labor and work – examining who benefits and who bears the cost of these rapid changes.

Bias & Inclusion
Data reflects the social, historical and political conditions in which it was created. Artificial intelligence systems ‘learn’ based on the data they are given. This, along with many other factors, can lead to biased, inaccurate, and unfair outcomes. AI Now researches issues of fairness, looking at how bias is defined and by whom, and the different impacts of AI and related technologies on diverse populations.

Safety & Critical Infrastructure
As artificial intelligence systems are introduced into our core infrastructures, from hospitals to the power grid, the risks posed by errors and blind spots increase. AI Now studies the way in which AI and related technologies are being applied within these domains and to understand possibilities for safe and responsible AI integration.
The 2017 Report proffers ten policy recommendations:
1 — Core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education (e.g “high stakes” domains) should no longer use ‘black box’ AI and algorithmic systems. This includes the unreviewed or unvalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum such systems should be available for public auditing, testing, and review, and subject to accountability standards…

2 — Before releasing an AI system, companies should run rigorous pre-release trials to ensure that they will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design. As this is a rapidly changing field, the methods and assumptions by which such testing is conducted, along with the results, should be openly documented and publicly available, with clear versioning to accommodate updates and new findings…

3 — After releasing an AI system, companies should continue to monitor its use across different contexts and communities. The methods and outcomes of monitoring should be defined through open, academically rigorous processes, and should be accountable to the public. Particularly in high stakes decision-making contexts, the views and experiences of traditionally marginalized communities should be prioritized…

4 — More research and policy making is needed on the use of AI systems in workplace management and monitoring, including hiring and HR. This research will complement the existing focus on worker replacement via automation. Specific attention should be given to the potential impact on labor rights and practices, and should focus especially on the potential for behavioral manipulation and the unintended reinforcement of bias in hiring and promotion…

5 — Develop standards to track the provenance, development, and use of training datasets throughout their life cycle. This is necessary to better understand and monitor issues of bias and representational skews. In addition to developing better records for how a training dataset was created and maintained, social scientists and measurement researchers within the AI bias research field should continue to examine existing training datasets, and work to understand potential blind spots and biases that may already be at work…

6 — Expand AI bias research and mitigation strategies beyond a narrowly technical approach. Bias issues are long term and structural, and contending with them necessitates deep interdisciplinary research. Technical approaches that look for a one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within each domain — such as education, healthcare or criminal justice — legacies of bias and movements toward equality have their own histories and practices. Legacies of bias cannot be “solved” without drawing on domain expertise. Addressing fairness meaningfully will require interdisciplinary collaboration and methods of listening across different disciplines…

7 — Strong standards for auditing and understanding the use of AI systems “in the wild” are urgently needed. Creating such standards will require the perspectives of diverse disciplines and coalitions. The process by which such standards are developed should be publicly accountable, academically rigorous and subject to periodic review and revision…

8 — Companies, universities, conferences and other stakeholders in the AI field should release data on the participation of women, minorities and other marginalized groups within AI research and development. Many now recognize that the current lack of diversity in AI is a serious issue, yet there is insufficiently granular data on the scope of the problem, which is needed to measure progress. Beyond this, we need a deeper assessment of workplace cultures in the technology industry, which requires going beyond simply hiring more women and minorities, toward building more genuinely inclusive workplaces…

9 — The AI industry should hire experts from disciplines beyond computer science and engineering and ensure they have decision making power. As AI moves into diverse social and institutional domains, influencing increasingly high stakes decisions, efforts must be made to integrate social scientists, legal scholars, and others with domain expertise that can guide the creation and integration of AI into long-standing systems with established practices and norms…

10 — Ethical codes meant to steer the AI field should be accompanied by strong oversight and accountability mechanisms. More work is needed on how to substantively connect high level ethical principles and guidelines for best practices to everyday development processes, promotion and product release cycles…

I printed it out and went old-school on it with yellow highlighter and red pen.


It is excellent. A must-read, IMO. It remains to be seen, though, how much traction these proposals get in a tech world of transactionalism and "proprietary IP/data" everywhere.
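
For the developers in the audience, recommendation 2 (and its post-release sibling, number 3) need not stay abstract. The simplest place to start is to quit reporting a single aggregate accuracy number and instead break a model's error rates out by subgroup. Below is a minimal, purely illustrative Python sketch of that habit; the groups, labels, and predictions are all made up, and a real pre-release audit would go well beyond it (choice of fairness metric, adequate sample sizes, intersectional subgroups, documented methods, per the report).

# Toy "disaggregated evaluation": break error rates out by subgroup instead of
# reporting one aggregate score. Groups, labels, and predictions are hypothetical.
from collections import defaultdict

# (group, true_label, predicted_label) for a held-out test set
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0),
]

stats = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
for group, truth, pred in results:
    s = stats[group]
    if truth == 1:
        s["pos"] += 1
        if pred == 0:
            s["fn"] += 1   # missed a true positive
    else:
        s["neg"] += 1
        if pred == 1:
            s["fp"] += 1   # flagged a true negative

for group, s in sorted(stats.items()):
    fpr = s["fp"] / s["neg"] if s["neg"] else 0.0
    fnr = s["fn"] / s["pos"] if s["pos"] else 0.0
    print(f"{group}: false-positive rate = {fpr:.2f}, false-negative rate = {fnr:.2f}")

The arithmetic is trivial; the discipline of looking at the disaggregated numbers before (and after) release is the point.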

Given that my grad degree is in "applied ethics" ("Ethics and Policy Studies"), I am all in on these ideas. The "rights and liberties" stuff was particularly compelling for me. I've had a good run at privacy/technology issues on another of my blogs. See my post "Clapp Trap" and its antecedent "Privacy and the 4th Amendment amid the 'War on Terror'."
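
While I'm annotating: recommendation 5's call for tracking training-data provenance is another one that can start small. Here is a hypothetical sketch (all field names and values invented for illustration) of a minimal provenance record that would travel with a dataset through its life cycle:

# Toy provenance/"datasheet" record for a training dataset. Everything below is
# hypothetical; the point is that origin, known skews, and changes get written
# down and versioned alongside the data rather than living in tribal knowledge.
import json

provenance = {
    "dataset": "claims_sample",                        # hypothetical name
    "version": "2017.12",
    "source": "de-identified claims extract (hypothetical)",
    "collection_period": "2015-01 through 2016-12",
    "license": "internal research use only",
    "known_skews": [
        "under-represents rural patients",
        "no pediatric encounters",
    ],
    "changelog": [
        "2017-11: removed duplicate encounters",
    ],
}

print(json.dumps(provenance, indent=2))

Nothing exotic, but it turns "where did this data come from, and what's wrong with it?" into a recorded, versioned answer.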


"DIGITAL EXHAUST"

Another recent read on the topic.

Introduction

This book is for everyone who wants to understand the implications of the Big Data phenomenon and the Internet Economy; what it is, why it is different, the technologies that power it, how companies, governments, and everyday citizens are benefiting from it, and some of the threats it may present to society in the future.

That’s a pretty tall order, because the companies and technologies we explore in this book—the huge Internet tech groups like Google and Yahoo!, global retailers like Walmart, smartphone and tablet producers like Apple, the massive online shopping groups like Amazon or Alibaba, or social media and messaging companies like Facebook or Twitter—are now among the most innovative, complex, fast-changing, and financially powerful organizations in the world. Understanding the recent past and likely future of these Internet powerhouses helps us to appreciate where digital innovation is leading us, and is the key to understanding what the Big Data phenomenon is all about. Important, too, are the myriad innovative frameworks and database technologies—NoSQL, Hadoop, or MapReduce—that are dramatically altering the way we collect, manage, and analyze digital data…


Neef, Dale (2014-11-05). Digital Exhaust: What Everyone Should Know About Big Data, Digitization and Digitally Driven Innovation (FT Press Analytics) (Kindle Locations 148-157). Pearson Education. Kindle Edition.
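
A decoding note for non-engineer readers who hit the "NoSQL, Hadoop, or MapReduce" jargon and glaze over: MapReduce is simply a pattern for splitting an analysis into a per-record "map" step and a combining "reduce" step, so the work can be spread across many cheap machines. A toy, single-machine Python sketch of the idea (the standard word-count example, purely illustrative):

# The MapReduce pattern in miniature: "map" emits (key, value) pairs per record,
# a "shuffle" groups values by key, and "reduce" combines them. Hadoop's real
# contribution is doing this reliably across many machines; this toy uses one.
from collections import defaultdict

documents = [
    "big data meets health data",
    "data drives care",
]

# Map: each document yields (word, 1) pairs
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the pairs by word
grouped = defaultdict(list)
for word, one in mapped:
    grouped[word].append(one)

# Reduce: sum each word's counts
word_counts = {word: sum(ones) for word, ones in grouped.items()}
print(word_counts)   # {'big': 1, 'data': 3, 'meets': 1, 'health': 1, 'drives': 1, 'care': 1}

That's the whole conceptual trick; the engineering heavy lifting is in running it reliably at scale.
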
UPDATE

From MIT Technology Review:
Despite All Our Fancy AI, Solving Intelligence Remains “the Greatest Problem in Science”
Autonomous cars and Go-playing computers are impressive, but we’re no closer to machines that can think like people, says neuroscientist Tomaso Poggio.


Recent advances that let computers play board games and drive cars haven’t brought the world any closer to true artificial intelligence.


That’s according to Tomaso Poggio, a professor at the McGovern Institute for Brain Research at MIT who has trained many of today’s AI leaders.


“Is this getting us closer to human intelligence? I don’t think so,” the neuroscientist said at MIT Technology Review’s EmTech conference on Tuesday.

Poggio leads a program at MIT that’s helped train several of today’s AI stars, including Demis Hassabis, cofounder of DeepMind, and Amnon Shashua, cofounder of the self-driving tech company Mobileye, which was acquired by Intel earlier this year for $15.3 billion.

“AlphaGo is one of the two main successes of AI, and the other is the autonomous-car story,” he says. “Very soon they’ll be quite autonomous.”


But Poggio said these programs are no closer to real human intelligence than before. Responding to a warning by physicist Stephen Hawking that AI could be more dangerous than nuclear weapons, Poggio called that “just hype.”…
BTW - apropos, see The MIT Center for Brains, Minds, and Machines.

"The Center for Brains, Minds and Machines (CBMM)
is a multi-institutional NSF Science and Technology Center
dedicated to the study of intelligence - how the brain produces intelligent
behavior and how we may be able to replicate intelligence in machines."

Interesting. See their (dreadful video quality) YouTube video "Discussion Panel: the Ethics of Artificial Intelligence."

In sum, I'm not sure that the difficulty of achieving "general AI" -- "one that can think for itself and solve many kinds of novel problems" -- is really the central issue where applied ethics concerns are involved. Again, read the AI Now 2017 Report.

WHAT OF "ETHICS?" ("MORAL PHILOSOPHY")

Couple of good, succinct resources for you, here and here. My elevator-speech take on "ethics" is that it is not a handy "good vs. bad cookbook." It goes to honest (albeit frequently difficult) moral deliberation: critical thinking that weighs "values" capable of passing rational muster, rather than resting on the "Appeal to Tradition" fallacy.

UPDATE

Two new issues of my hardcopy Science Magazine showed up in the snailmail today. A pair of articles in particular caught my attention.

What is consciousness, and could machines have it?

Abstract
The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain?

I question here the scientific and strategic underpinnings of the runaway enthusiasm for industrial-scale projects at the interface between “wet” (biology) and “hard” (physics, microelectronics and computer science) sciences. Rather than presenting the achievements and hopes fueled by big-data–driven strategies—already covered in depth in special issues of leading journals—I focus on three major issues: (i) Is the industrialization of neuroscience the soundest way to achieve substantial progress in knowledge about the brain? (ii) Do we have a safe “roadmap,” based on a scientific consensus? (iii) Do these large-scale approaches guarantee that we will reach a better understanding of the brain?

This “opinion” paper emphasizes the contrast between the accelerating technological development and the relative lack of progress in conceptual and theoretical understanding in brain sciences. It underlines the risks of creating a scientific bubble driven by economic and political promises at the expense of more incremental approaches in fundamental research, based on a diversity of roadmaps and theory-driven hypotheses. I conclude that we need to identify current bottlenecks with appropriate accuracy and develop new interdisciplinary tools and strategies to tackle the complexity of brain and mind processes…
Interesting stuff. Stay tuned.
__

CODA

Save the date.

Link
____________

More to come...