CascadiaPrime Cognition


International Artificial Intelligence Safety and Cooperation


The emergence of beyond-human intelligence presents risks as well as rewards for humanity.

This section was written in 2017 and has not been updated. My views have not changed; my sense of urgency has increased.

While I remain optimistic about the benefits of artificial intelligence, I believe we need to consider, at an early date, the institutional arrangements required to harness artificial intelligence and avoid its potential downsides, including the loss of social cohesion that accelerating economic effects may bring. I will begin to explore some of those institutional aspects and provide resources for individuals and groups that wish to learn more and begin the necessary public dialogue.

Artificial intelligence issues and national security issues are so intertwined that we will need both national and international institutions to manage the artificial intelligence and robotics challenges of this century. Dealing with these issues on a nation-state basis alone will not be enough, and the potential for rogue non-state actors cannot be ruled out.

While artificial intelligence is one of the most intellectually difficult domains, it is clearly conceivable that we may see an "evolutionary surge" in progress that produces unintended results. Moore's Law, advances in mathematical insight, and global brain research each have the potential to pull in the time horizon of non-linear change.

I believe that we will need an International Artificial Intelligence Safety and Cooperation Agency (IAISACA) modeled on the International Atomic Energy Agency and on nation-state bodies such as the US Nuclear Regulatory Commission and the Canadian Nuclear Safety Commission.

IAISACA would work to provide a strong, sustainable and visible global artificial intelligence safety and security framework.

If we look at the development of the world's global communications networks, we find the International Telecommunication Union harmonizing the standards that enabled the emergence of the internet. The role of IAISACA would likewise include promoting the effective application of artificial intelligence to meet human needs.

Initially, IAISACA should be created under the auspices of the Group of 20 and operate like the Financial Stability Board (FSB), which monitors and makes recommendations about the global financial system, with technical and ministerial-level oversight.

Government can also be a force in setting the direction of a technology, not through regulation but through the allocation of resources and through the creation of programs at departments and agencies that interact with industry, academia, and allies. Friendly AI research needs serious funding.

A number of institutes have begun to work on the intellectual, policy and technical issues of superintelligence.

Six Leading Institutes:

  The Future of Life Institute (FLI)
  
  The Machine Intelligence Research Institute (MIRI)
  
  The Oxford Future of Humanity Institute (James Martin 21st Century School)
  
  The Cambridge Centre for Existential Risk
  
  The One Hundred Year Study on Artificial Intelligence (AI100)
  
  OpenAI

Europe, the U.S., Japan, China and Russia have advanced programmes to reverse engineer the brain and/or seriously pursue artificial intelligence.

Recently Max Tegmark and Nick Bostrom addressed a United Nations Interregional Crime and Justice Research Institute session on Chemical, Biological, Radiological and Nuclear (CBRN) National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence.

Their talks begin at about the 1:55 mark.

Point - Counter Point - The Critics

Some knowledgeable people in artificial intelligence and related fields take the view that superintelligence is unlikely to threaten mankind, or that it will not be developed for a very, very long time. Some take the view that the concept of the singularity is flawed science fiction.

Their perspectives deserve serious attention and respect. This section captures their ideas.

Robotics

Robotics is a nearer-term field for which oversight and regulation have similarly been proposed. The Brookings Institution recently published "The Case for a Federal Robotics Commission" by Ryan Calo of the University of Washington School of Law. Clearly, existing agencies are hard pressed to manage the disruptive waves of change hitting them. The risks robotics poses to the social fabric are precursors of the existential aspects of artificial intelligence. Scientific American has also made space for the article "Why We Need a Federal Agency on Robotics".

AI Initiatives

Google and DeepMind have put together an Artificial Intelligence Safety and Ethics Board to guide them; the board's membership is not yet public, and it is an internal rather than an external board (Source). DeepMind has developed a Neural Turing Machine. See also Google's current overall Board of Directors. DeepMind cofounder Mustafa Suleyman indicates that sixty handcrafted rule-based systems have now been replaced with deep-learning-based networks.

Lethal Autonomous Weapons

Toward a Ban on Lethal Autonomous Weapons: Surmounting the Obstacles

    

Books

  Nick Bostrom - Superintelligence: Paths, Dangers, Strategies
  
  James Barrat - Our Final Invention: Artificial Intelligence and the End of the Human Era
  
  Ray Kurzweil - The Singularity Is Near
  
  Ray Kurzweil - How to Create a Mind
  
  Erik Brynjolfsson & Andrew McAfee - Race Against the Machine
  

Public Discussion

  Stephen Hawking
  
  Nick Bostrom
  
  Elon Musk & Thomas Dietterich on AI safety
  

Articles

  AI Has Arrived, and That Really Worries the World's Brightest Minds - Wired, January 2015
  
  Bill Gates "I'm more in the Musk/Bill Joy camp than the Google camp." January 2015
  
  Edge asks contributors "What do you think about machines that think?" and gets remarkable answers - January 2015
  
  Andrew Ng expresses concern about employment impacts January 2015
  

Other Resources on CascadiaPrime Cognition

  Artificial Intelligence
  
  Artificial Intelligence Design
  
  Artificial General Intelligence
  
  Robotics
  
  The Quantum Computing Revolution
  
  Computer Vision
  
