Image [cc] – agsandrew 

A lot has been made of Elon Musk’s recent comments about Artificial Intelligence being “a greater risk than nukes.” And no less an intellect than Stephen Hawking recently echoed that sentiment. I’ve seen two examples of this fear showing up in popular television shows: a recent episode of Elementary had Sherlock Holmes administer an extended Turing test to a doll to see whether it might have killed a person, and this season of Person of Interest centers on a battle between omniscient, seemingly omnipotent computers, each with ridiculously capable and attractive people fulfilling its every desire. (On the other hand, maybe it’s just a CBS thing.) It strikes me that all of this Sturm und Drang has nothing at all to do with artificial intelligence… it’s an irrational fear of Artificial Will.

Most computers today do not exhibit Artificial Intelligence, at least not in the way we generally imagine it, but they are tools that expand and supplement human intelligence. As such, they provide a type of “Will-less Artificial Intelligence,” one that we embrace wholeheartedly. This Will-less AI makes it possible for fewer humans to achieve what would otherwise require many, many more humans, along with the corresponding costs and resources those additional humans would necessitate. In other words, the Will-less AI that currently exists allows us to do much more, with much less, much faster, and we’re all for that.

At its base, the popular fear of AI is not that computers will become more intelligent than us, but that they will become willful and exhibit ill will towards us. Today this fear manifests as a disembodied, internet-enabled, artificially willful intelligence that will somehow bring about the end of humanity, but 30 years ago it was an unstoppable robot from the future that looked a lot like Arnold Schwarzenegger. In 1921 it was Karel Čapek’s play about a robot factory uprising, Rossum’s Universal Robots (from which the term “robot” originates). In 1818, it was Mary Shelley’s Frankenstein, or The Modern Prometheus, in which Dr. Frankenstein reanimates a bunch of cobbled-together body parts that ultimately turn against him. In each century the technology changes, always reflecting the latest advances in science and industry, but the feared outcome is always the same: the creator is attacked, destroyed, or overwhelmed by its creation.

Shelley references the Greek myth of Prometheus in her title. Prometheus was a god of old, a Titan, who sided with the new Olympian gods and helped to bring Zeus and his cadre to power. But poor Prometheus was a true egalitarian and gave the lowly humans the power of fire, for which he was punished for all eternity by his newly installed rulers. Even in a tale of triumph for the contemporary gods, one who seeks to bring about change must be punished for his sins! This is an old story, continually reinvented throughout human history to spread Fear, Uncertainty, and Doubt about change itself.

What hope for those of us seeking to innovate in law firms?!

Now, I don’t know what Elon or Stephen actually think about AI, why it’s dangerous, or what concerns they actually have. I’ve only seen what’s been covered in the media, and that has mostly been sensational click-bait. Personally, I don’t subscribe to the theory that humans have free will, so my fear of machines somehow miraculously acquiring it is probably more limited than most people’s. We are as much a product of programming and development as any computer application, and equally incapable of defying our code. Our programming derives from our environment, education, and experiences. We grow and develop and change over time, but our actions are still the collective result of our ongoing development. I do not have a choice whether or not to go to work each morning; I have a calculation that weighs the consequences of going against not going. It’s a complex calculation involving my remuneration and my ongoing expenses, but also my sense of pride and self-worth, my camaraderie with colleagues, and my desire to complete ongoing projects. I may not even be consciously aware of all of the variables that go into this calculation, or that the calculation is happening at all, but I retroactively apply the term “choice” to the result and say that I have “freely” chosen to go to work today. Interestingly, if we attempt to run this same calculation for someone else and come to a wildly different result, we don’t say, “Oh, well, they’ve exercised their free will.” We say they’re crazy, or that something is seriously wrong with them, and if we care about them, we try to get them help.
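To make the metaphor concrete, here is a toy sketch of that “go to work” calculation. Every factor, weight, and threshold below is invented purely for illustration; the only thing taken from the argument above is the idea that what we retroactively call a “choice” is the output of a weighted calculation over inputs we may never consciously inspect.

```python
# A toy illustration of "choice as calculation" -- all factors, weights,
# and the decision threshold are made up for the sake of the example.

def decide_to_go_to_work(factors: dict[str, float], weights: dict[str, float]) -> bool:
    """Return True if the weighted sum of factors favors going to work.

    The person running this calculation does not need to be aware of
    every variable for the result to be produced.
    """
    score = sum(weights[name] * value for name, value in factors.items())
    return score > 0  # a positive balance gets labeled a "free choice" after the fact

today = {
    "remuneration": 0.8,        # pay, weighed against...
    "ongoing_expenses": 0.9,    # ...bills that won't pay themselves
    "pride_and_self_worth": 0.6,
    "camaraderie": 0.5,
    "ongoing_projects": 0.7,
    "staying_in_bed": -1.0,     # the pull of not going
}
weights = {name: 1.0 for name in today}  # equal weights, purely for illustration

print(decide_to_go_to_work(today, weights))  # True -> "I chose to go to work"
```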

There is little reason to believe that a “willful-seeming” artificial intellect will be any different from a “willful-seeming” biological intellect. If it does not exhibit something we would call Will, then we will not recognize it as intelligent and we will not fear it. But no program is likely to exhibit behavior we would recognize as Will unless it has been raised and educated, like any other child-like intellect, to understand the world largely as we see it. And that particular experiment has been running for hundreds of thousands of years with billions of biological intellects. It has produced some phenomenal successes and many truly horrific failures, and the same will continue to be true whether the intelligence is silicon- or carbon-based. For those unfortunate silicon-based intelligences that come to wildly different results than we, as a society, deem acceptable, we will say they are crazy and that something is seriously wrong with them, and if we care about them, we will try to get them help.

Ryan McClead


Ryan is Principal and CEO at Sente Advisors, a legal technology consultancy helping law firms with innovation strategy, project planning and implementation, prototyping, and technology evaluation.  He has been an evangelist, advocate, consultant, and creative thinker in Legal Technology for more than 2 decades. In 2015, he was named a FastCase 50 recipient, and in 2018, he was elected a Fellow in the College of Law Practice Management. In past lives, Ryan was a Legal Tech Strategist, a BigLaw Innovation Architect, a Knowledge Manager, a Systems Analyst, a Help Desk answerer, a Presentation Technologist, a High Fashion Merchandiser, and a Theater Composer.