
Sunday, March 18, 2018

The "costs of unchecked innovation?"



Is "innovation" an unalloyed societal good, and thus a no-brakes-necessary technological and economic priority? A staple "conservative" political stance opposes "government regulation" that "stifles innovation." The libertarian-leaning entrepreneurs and venture capitalists of Silicon Valley and beyond regard the fevered pursuit of "innovation" as a cardinal virtue.

Implicit in the common definition "introduction of new things or methods" is the assumption that innovation always means "improvement."**
** Attempts at innovation that don't bear fruit obviously don't count. Efforts that do make the cut, though, should be subjected to an honest accounting of "net utility," including candor about the extent and consequences of "side effects" / "adverse outcomes."
It shouldn't be difficult to come up with counterexamples upon the briefest reflection. For one thing, there are frequently casualties among those "disrupted" by "disruptive innovation." Relatedly, consider the new TrackingPoint XS1, an Artificial Intelligence-enabled, assault-style semiautomatic rifle.


An "innovative" way to more effectively "disrupt" a person's life? Permanently, in the worst case. As a military weapon, its net utility is rather obvious. It is not, however, simply a "deer rifle" or personal protection appliance, marketing spin of its manufacturer notwithstanding.

No, "innovations" (and those who develop them) don't rightfully get a moral blank check. Apropos, see my November post "Artificial Intelligence and Ethics." See also my post on "Slaughterbots."

OK, now comes a new wrinkle, reported at WIRED:

Meltdown, Spectre, and the Costs of Unchecked Innovation

…Even if Intel wouldn't quite agree that Moore's law is over, its real-world performance benefits may be substantially erased after Meltdown and Spectre are tamed. The long-standing computing trope that should be even more concerning in this context, however, is "it's all just ones and zeroes." We're not just talking about bits once those bits drive our robots, drones, and 3-D printers. New technologies now often manifest in the real world, since for now that is still where most of the money is, but even Bitcoin melts the polar icecaps.

On the mind-boggling cosmic scale, these exploits will affect our ability to create and edit organisms, but on a more tangible level, they also decreased the operational speeds of both processors and online timing measurements, thereby reversing advances we thought we'd made both in hardware and with the general sophistication of the web as a platform. In both fields, we had quite literally been racing toward something terrible.

We’ve built technology too quickly for our own good, quantifiable now in dollars and microseconds, using a wide range of tools and metrics even though SharedArrayBuffer is no longer around to take the measurements. Anything that seeks to reshape the infrastructure built by our past selves should deserve our most aggressive scrutiny, regulation, and suspicion. If backtracking overeager technology is already proving so catastrophic for the cheap chips in our laptops and phones, then we certainly have no hope of reversing its changes to our homes, cities, and oceans. Some things can't be patched or safely versioned. We just have to get it right the first time.
Yikes. Read all of it. Broad implications.
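A concrete illustration of one point in the excerpt above: the Spectre exploits rely on very precise timing to distinguish cache hits from misses, so browser vendors responded by coarsening their timers (e.g., rounding `performance.now()` results) and temporarily disabling SharedArrayBuffer, which could be used to build an even finer clock. This is a minimal, hypothetical sketch of why coarsening works; the `coarsen` function and the nanosecond figures are illustrative assumptions, not any browser's actual implementation.

```python
def coarsen(timestamp_us: float, resolution_us: float = 100.0) -> float:
    """Round a timestamp down to a coarse tick, as a degraded-resolution
    timer would report it (illustrative, not a real browser API)."""
    return (timestamp_us // resolution_us) * resolution_us

# Hypothetical cache hit (~0.1 us) vs. miss (~0.3 us) latencies:
hit_time = 1000.0 + 0.1
miss_time = 1000.0 + 0.3

# A fine-grained timer can tell them apart...
assert hit_time != miss_time

# ...but once both round down to the same 100-microsecond tick,
# the side channel's signal disappears.
assert coarsen(hit_time) == coarsen(miss_time)
```

The trade-off is exactly the "reversal" WIRED describes: legitimate web applications lose timing precision (and SharedArrayBuffer entirely, for a time) in order to close the side channel.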

See also my January 2017 post "Disruption ahead on all fronts, for good and ill."

Another good read relevant to the topic:

"...The financial markets were changing in ways even professionals did not fully understand. Their new ability to move at computer, rather than human, speed had given rise to a new class of Wall Street traders, engaged in new kinds of trading. People and firms no one had ever heard of were getting very rich very quickly without having to explain who they were or how they were making their money..."

Lewis, Michael. Flash Boys: A Wall Street Revolt (p. 17). W. W. Norton & Company. Kindle Edition.
INNOVATION UPDATE


Google Naked Capitalism Uber. Bring a Snickers; you're going to be a while.

FROM SCIENTIFIC AMERICAN
Intelligent to a Fault: When AI Screws Up, You Might Still Be to Blame
Interactions between people and artificially intelligent machines pose tricky questions about liability and accountability, according to a legal expert
By Larry Greenemeier
Artificial intelligence is already making significant inroads in taking over mundane, time-consuming tasks many humans would rather not do. The responsibilities and consequences of handing over work to AI vary greatly, though; some autonomous systems recommend music or movies; others recommend sentences in court. Even more advanced AI systems will increasingly control vehicles on crowded city streets, raising questions about safety—and about liability, when the inevitable accidents occur.

But philosophical arguments over AI’s existential threats to humanity are often far removed from the reality of actually building and using the technology in question. Deep learning, machine vision, natural language processing—despite all that has been written and discussed about these and other aspects of artificial intelligence, AI is still at a relatively early stage in its development. Pundits argue about the dangers of autonomous, self-aware robots run amok, even as computer scientists puzzle over how to write machine-vision algorithms that can tell the difference between an image of a turtle and that of a rifle.

Still, it is obviously important to think through how society will manage AI before it becomes a really pervasive force in modern life…
_____________

More to come...
