« SCOTUS Term: Where are the opinions? | Main | SCOTUS Term: Can the Court De-Politicize Masterpiece Cakeshop and Janus? »

Tuesday, May 29, 2018

What is Artificial Responsibility (and How Does It Relate to Bitcoin)?

There has been an explosion of articles in the popular press about the dangers of artificial intelligence (“AI”). Some fear that machines with human-like intelligence could someday develop goals at odds with our own. For example, a suitably intelligent AI that seeks to maximize the number of paper clips might, as Nick Bostrom has suggested, enslave humanity if doing so will best achieve its cold, calculated objective.

But as these fears imply, what really concerns us is not so much machine intelligence. What we’re really worried about is giving machines control over important matters. Control and intelligence are not the same thing. I use the expression "artificial responsibility" to refer to what scares us more directly: the ability of machines to control important matters with limited opportunities for humans to veto decisions or revoke control.

Even if an AI is a little smarter than the smartest human, that alone doesn't mean it can enslave us. Dominance over others isn't just a function of intelligence. We needn't be especially worried about a machine superintelligence that has no tangible control over the world, unless it effectively has substantial control because it can coax or manipulate us into doing its bidding. Our real concern is how easy it will be to wrest control back from machines that no longer serve our best interests and to avoid giving them control in the first place.

Responsibility is related to intelligence because we might be inclined to give greater control to more intelligent machines. But even unintelligent machines can be dangerous when they’re given a lot of responsibility. And herein lies the connection to bitcoin and blockchains more generally. Even though the blockchain technology that enables bitcoin is low on the scale of artificial intelligence (so low it is not usually thought of as artificially intelligent at all), it is nevertheless surprisingly high on the scale of artificial responsibility, as I argue after the jump. 

Bitcoin is a kind of digital currency invented in 2008 by a person or group of people pseudonymously known as Satoshi Nakamoto. The bitcoin ecosystem enables users to store and transfer value, in the form of bitcoin, across a decentralized computer network. While heady math underlies the cryptographic principles that keep bitcoin secure, most would say the network is rather unintelligent. It doesn’t recognize our voices or faces, and it certainly wouldn’t pass a Turing Test.

Nevertheless, it can accomplish quite a bit with limited human intervention. If bitcoin or a competitor coin is able to scale up properly, it could enable millions of people to easily transfer substantial value without the intervention of banks or other trusted intermediaries. Transactions that take banks days to accomplish, such as clearing checks, can be completed with cryptocurrency in minutes or seconds. Unintelligent as it may be, bitcoin still has substantial artificial responsibility because the network accomplishes the important task of transacting billions of dollars in value through a network spread across the globe with no person, bank, or government in charge of it.

As I discuss in a forthcoming article, the blockchain technology that underlies bitcoin can be used for more than just digital currencies. One can create what are called "smart contracts" and can put a group of smart contracts together to make a "decentralized autonomous organization" ("DAO"). The first high-profile DAO, oddly called “TheDAO,” was formed in 2016 and used blockchain smart contracts to allow strangers to come together online to vote on and invest in venture capital proposals. Newspapers raved about the $160 million it quickly raised, even though it purported to have no central human authority, including no managers, executives, or board of directors.
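The basic mechanics can be pictured with a toy sketch. The Python below is a hypothetical illustration only (all names are invented here; TheDAO's actual rules were far more elaborate and were written as Ethereum smart contracts): token holders vote on a funding proposal, and code alone, rather than any manager or board, disburses the money.

```python
class ToyDAO:
    """Highly simplified sketch of a DAO's voting logic (hypothetical names)."""

    def __init__(self, token_holdings):
        self.tokens = dict(token_holdings)         # member -> voting tokens
        self.treasury = sum(self.tokens.values())  # funds under the code's control

    def vote_and_fund(self, proposal_cost, votes_for):
        # Weight each member's vote by tokens held; if a simple majority
        # of all tokens approves, the code disburses the funds automatically.
        yes = sum(self.tokens[member] for member in votes_for)
        if yes * 2 > sum(self.tokens.values()) and self.treasury >= proposal_cost:
            self.treasury -= proposal_cost         # no human sign-off required
            return True
        return False
```

In this sketch, a member holding 60 of 100 tokens can single-handedly cause 50 units to leave the treasury; no bank, manager, or court sits between the vote and the transfer.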

TheDAO itself, however, is now a cautionary tale. A bug in its smart contract code was exploited to drain more than $50 million in value. And here we can see our willingness to endow blockchains with artificial responsibility: despite the loss of funds, there was no easy mechanism, and certainly no central authority, that could recover the money. It would take substantial agreement among the community running the blockchain platform used by TheDAO to mitigate the damage. Eventually, such consensus was reached. But it caused a continuing rift in the community, and this solution may not be available in the future, as those running a blockchain will not easily come together to make alterations (indeed, blockchains are often advertised as immutable and "unstoppable"). So not only is it difficult to revoke the control given to a DAO, but many people prefer not to do so as a matter of principle. Some purists denounced efforts to mitigate TheDAO exploit, arguing that the alleged hacker simply withdrew money in accordance with the organization’s agreed-upon contractual terms in the form of computer code.
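The flaw that felled TheDAO was what programmers call a "reentrancy" bug: the contract paid out before updating the caller's recorded balance, so a malicious recipient could call back into the withdrawal routine and be paid again and again. The Python below is a deliberately simplified model of that pattern, not TheDAO's actual code (which was written in Solidity); all names here are hypothetical.

```python
class VulnerableDAO:
    """Toy model of a reentrancy flaw: pay first, update the books later."""

    def __init__(self):
        self.balances = {}   # depositor -> recorded balance
        self.total = 0       # funds actually held by the contract

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.total >= amount:
            self.total -= amount      # funds leave the contract...
            user.receive(amount)      # ...and control passes to the recipient
            self.balances[user] = 0   # the balance is zeroed too late

class Attacker:
    """A recipient whose payment handler re-enters withdraw()."""

    def __init__(self, dao):
        self.dao = dao
        self.stolen = 0

    def receive(self, amount):
        self.stolen += amount
        # Re-enter while our recorded balance is still nonzero and funds remain.
        if self.dao.total >= self.dao.balances.get(self, 0) > 0:
            self.dao.withdraw(self)
```

Having deposited only 10 against an honest depositor's 90, the attacker can walk away with the full 100, because the pay-then-update ordering hands control back to the attacker before the books are corrected. This mirrors, in miniature, how value was drained from TheDAO.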

TheDAO had tremendous “artificial responsibility” in that we gave it considerable control that couldn’t be easily revoked or reined in. Not-so-smart contracts in the future may prove even more dangerous: guests at a DAO hotel might be locked out of their rooms; DAO self-driving cars might drive off bridges. Blockchains have great promise. But we should be thoughtful about how we endow machines with artificial responsibility, even when (and perhaps especially when) these machines are not very intelligent. (This post is adapted from an article forthcoming in the Stanford Technology Law Review; footnotes are omitted.)

Posted by Adam Kolber on May 29, 2018 at 02:30 PM | Permalink


Just another actual illustration of what I wrote in my comment above, here:



Posted by: El roam | Jun 2, 2018 1:00:39 PM

Interesting, but I am afraid that the respectable author of the post doesn't really capture the full meaning of AI. It is not solely about decentralized machines that can cause chaotic messes and apocalyptic scenarios. The latter simply has to do with technically out-of-control situations. AI is, first of all, about a robot that has the autonomous capability to think and behave like a human being. Its thinking and its reactions are human-like. It acts with, or is driven by, insightful human autonomy. Since it is autonomous, you can't really control it. If you could, it would not be a human-like creature or machine.

However, beyond those chaotic scenarios of widespread mess (which are rather technical), you are dealing with a creature that develops, grows, and perfects itself on its own, in insight, power, and experience, much as a baby grows and gradually becomes an independent creature in every respect: mentally, emotionally, and intellectually.

Finally, it may so far exceed the standards of human capacity that a human being may, on his own, lose his autonomous capacity or relevance.

One may imagine, as a simple illustration, an AI creature that is literally a psychologist. It would have great and powerful influence over humans or patients. In the eyes of many, this is a very nasty scenario.

By the way, the UK House of Lords (Select Committee on AI) has expressed grave concerns in this regard; one may read here (a highly recommended blog in itself):



Posted by: El roam | May 29, 2018 6:33:01 PM

The comments to this entry are closed.