So we’re allegedly tearing headlong towards an AI-assisted (controlled?) future. Whatever the true benchmarks for intelligence, artificial or otherwise, who the heck is keeping an eye on the wider, longer-term implications on behalf of our kids and their kids?
More to the point, who is even capable of grasping the underpinnings of AI and similar innovations?
Few truly understand the full possibilities, but most opponents predict a forbiddingly Bruckheimer-esque future, and it got me thinking…
Who’s really in charge?
Or, in other words, “Quis custodiet ipsos custodes?” Arguably more pertinent to artificial intelligence than to any other technological development in history. An issue raised again, controversially, by Stephen Hawking in May:
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes…” (Interview with The Independent, 1st May 2014)
That was followed by a riposte from Steve Mason of ClickSoftware, suggesting Hawking is scrabbling to retain his position as a thought leader by stirring up some FUD. Mason says it is natural for individuals at “the pinnacle of thought leadership” to feel threatened by something that could knock them off their perch.
One of the big fears linked to AI (apart from Terminator-style Armageddon) is the theft of jobs from humans. In the same article, Mason argues the opposite could be true, with new services growing up around AI-enabled industries.
All well and good, but to my mind at least, we’re missing the point:
Stephen Hawking says….
Steve Mason argues…
BUT nobody knows.
I’m not going to talk about the pros and cons of machine intelligence (although I did pen a little dystopian fairy tale about one possible AI-driven future). Instead, I want to give the perspective of a parent, considering our duty to keep up as technological evolution accelerates towards a point where consequences might outpace our capability to implement controls.
The greatest challenge: The knowledge gap.
In writing “A Brief History of Time”, Stephen Hawking admitted filtering his thoughts through a chain of progressively less specialist colleagues. Eventually, like a game of intellectual and mainly accurate Chinese Whispers, something publicly comprehensible emerged. The same will be necessary for all the work going on at the bleeding edge of modern science and tech.
To stand any chance of realistic ethical oversight of boundary-pushing developments, there has to be investment in translation. Not just publishing academic papers for like-minded colleagues, but interpreting current and potential future implications for accountable bodies empowered to implement checks and balances. Building a safety net for you, me and generations to come.
Lessons from the recent past to apply to an artificially intelligent future?
Perhaps it’s easier to view in the context of more immediate issues. Take, for instance, the content-filtering algorithms behind search and social media, busily deciding which version of the world you and I would like to see today, except we don’t get any say in the matter (Twitter is imminently set to follow Facebook down this path). How many developers report to bosses who know how these algorithms work? How many grasp the potential fallout in terms of the long-term shaping of public opinion?
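To make the filtering point concrete, here is a deliberately simplified sketch of how an engagement-ranking feed can narrow what a user ever sees. All the data, topics and the scoring rule are invented for illustration; real platform algorithms are far more complex and not public.

```python
# Toy sketch of an engagement-ranking content filter (illustrative only).
# Posts, topics, scores and the boost rule below are all made-up assumptions.

posts = [
    {"topic": "politics", "engagement": 0.9},
    {"topic": "science",  "engagement": 0.4},
    {"topic": "politics", "engagement": 0.8},
    {"topic": "sport",    "engagement": 0.3},
]

# The user's past clicks bias scores towards topics they already engage with.
user_history = {"politics": 5, "science": 1, "sport": 0}

def score(post):
    # Predicted engagement, boosted by prior clicks on the same topic.
    return post["engagement"] * (1 + user_history[post["topic"]])

# Only the top two posts are ever shown: the feed is the "version of the world".
feed = sorted(posts, key=score, reverse=True)[:2]
print([p["topic"] for p in feed])  # → ['politics', 'politics']
```

Even in this crude model, the feedback loop is visible: the science and sport posts never surface, so the user’s history skews further, and the filter narrows again next time. Nobody chose that outcome; it falls out of a ranking rule optimised for clicks.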
Propaganda and social control are age-old concepts, and in very recent history, Facebook was caught conducting emotional control experiments on users. They changed the tone of what appeared in the news feeds of a selected group, to see what effect it had on the ‘mood’ of that group’s own posts. That raises questions about what else is going on behind the scenes.
Even more generally, we know most non-specialists don’t have a useful understanding of what the IT and Cyber Security guys are getting up to on their behalf.
In my own field, security, we’re creeping towards a point where language barriers and knowledge gaps are being bridged, because it’s become clear that doing so is to everyone’s practical and commercial advantage. But we’re still a distance from any useful understanding of how the ‘variable versions of the truth’ served up in the social media-verse will impact us and our kids.
If you take another few steps into the technical fug, no one (except folk deep inside the field of AI) has a chance in hell of knowing where it’s all heading, or the various kinds of fallout there might be.
Who has the interests of my children and my children’s children at heart?
So while the non-profit bodies mentioned by Hawking (the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute) are working hard on everyone’s behalf to keep up, I don’t think it’s enough.
As I commented in another post, the god of money has big guns. We need to make it mandatory for governments and commercial ventures to finance effective knowledge sharing with accountable overseers. We need to give bodies (like those above) teeth and a seat at the top table.
I’m not arguing innovators should lose their intellectual property, but are we comfortable that those with everything to gain from developments are viewing the implications in the round? If concerns are voiced from within, will they make it past the boards? That certainly hasn’t historically been the case with pleas for proper consideration of security in software development and in-house IT change efforts.
Someone without a vested interest must have the ability to apply statutory brakes, or a means to inform lawmakers and risk owners, so ethical understanding and controls can keep up.
Would independent oversight hobble innovation?
Would this be a death knell for innovation? Some will say yes, but history is littered with the corpses of those trampled by the desire to ‘just see what will happen next’ (Oppenheimer, and those targeted using his intellectual property, being a dramatic example).
On the other hand, just like information sharing in security, understanding can quash prejudice and broker the respect and trust needed to start a rational conversation about “what ifs”.
Where the wider social, military, political or economic implications of developments are in doubt, I would like to see ethics committees at institution and industry level, staffed by a mix of inside and outside experts who can flag unforeseen pitfalls and unexpected benefits of brand-new and brilliant innovations with equal effectiveness.
Those external experts would also become perfect advocates for beneficial developments, able to manage media expectations when news breaks and defuse knee-jerk negative reactions from other non-specialist decision makers.
I’m not a Luddite, but I am scared that my kids will be hung out to dry in someone else’s version of a “good future”. A future that might, thanks to a mammoth “oops, we didn’t think of that” disaster, turn out to be not so good after all. The main basis of my fear? That the brightest and best are tearing hell-for-leather forward in pursuit of progress (or knowledge for knowledge’s sake) and serving up the fruits of their labours to people with a less pure agenda. Not negligent per se, just motivated by immediate reward and ill-equipped to look sideways, and far enough forward, to see any unexpected harm that might be caused.
So, who is watching the watchmen? Well-meaning academics? Law enforcers of doubtful tech savviness? Secret service bodies? Occasional industry regulators?
Is that really fit for purpose and effective enough?