As Artificial Intelligence Gets Serious

From its beginnings nine years ago, C-PET has been determined to think long-term and to think seriously about the implications of emerging technologies.

We aren’t partisan, either politically or in our disposition toward technology – there is no bias, “pro” or “anti” tech, among our many distinguished fellows, advisers, and board members. Their personal and professional convictions cover the waterfront. But what they have in common is a dual commitment: to the conviction that emerging technologies will matter far more to the American and global future than almost anyone in the policy world recognizes, and to the related conviction that it does us no good at all to believe naively that this will all be good news. If we are to be serious about the future, we need to be grown-up about its potential challenges.

So twin developments at the University of Cambridge (full disclosure: my alma mater), and one closer to home, are of special interest to us, and we believe they should also be to you.

The Center for the Study of Existential Risk, with the active participation of Lord Rees (Martin Rees), one of the world’s leading cosmologists, is focused on issues of global risk – “a joint initiative between a philosopher, a scientist, and a software entrepreneur … founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. CSER is a multidisciplinary research centre dedicated to the study and mitigation of risks that could lead to human extinction.” Start-up funding came from Jaan Tallinn, a founding engineer of both Skype and Kazaa and one of the founders of CSER together with Lord Rees and Cambridge philosophy professor Huw Price.

The second development is more recent and follows from the widely noted letter on the potential risks of AI signed by many distinguished technology leaders and intellectuals, including Bill Gates, Elon Musk, Stephen Hawking, and Lord Rees. It is interesting to note that adding his name to the letter qualified Musk to be named a Luddite of the Year by a fellow DC think tank not generally known for ridiculous pronouncements. (But we do need to note that perspective, one many of us believe both naive and actually dangerous: that looking seriously at potential downsides will somehow inhibit innovation. We believe it will spur innovation in the best direction.)

These concerns have now led to the establishment at Cambridge of the Leverhulme Centre for the Future of Intelligence. As Dr. Sean Ó hÉigeartaigh said: “The Centre is intended to build on CSER’s pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones.”

In parallel, Musk has collaborated with leading tech companies here in the U.S. to put together a $1bn fund to research “beneficial” AI. Here’s what they are saying:

“Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”

“We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”

Early days, but these developments do suggest that our C-PET commitments – to awareness that emerging technologies will have a far greater impact than the policy community realizes, and to the need for candid assessments of their potential human significance – are being recognized within the AI community.

Of course, these issues did not feature in the presidential debates, now finally ended; the policy community has a long way to go.

Your thoughts?

Best regards,

Nigel Cameron
President and CEO
Center for Policy on Emerging Technologies
Washington, DC