CascadiaPrime Cognition



CascadiaPrime Cognition - Generative AI - Large Language Models


The field is moving very quickly and planners should be asking themselves what steps they should be taking now to prepare.

Generative AI, like other forms of machine learning, has been showing returns to scale in computation. Recent significant advances have given rise to safety concerns that we may see an evolutionary leap, an unexpected evolutionary surge, from artificial intelligence (AI) to artificial general intelligence (AGI), and with it a self-improving runaway reaction akin to a nuclear chain reaction.

Generative AI - Large Language Models vary in their computational and memory costs, and hence in the financial and energy costs of their outputs, and therefore in their suitable use cases.

AGI is sometimes referred to as superintelligence. Such a superintelligence could be vastly broader and deeper than human intelligence. The boundaries of such a superintelligence cannot be predicted, nor can the effectiveness of so-called safety "guardrails". The probability of the emergence of a superintelligence from increases in scaling or further development of generative AI may be small, but it is not zero. We may be approaching a point of "critical mass" where the implications for human society are non-linear - this has been referred to as The Singularity.

The other, more immediate and demonstrable, concern is that generative AI may destabilize societies, leading to violence and armed conflict as individuals, small groups, small and medium states, and superstates vie for power.

Generative AI models may remedy many societal problems and create new opportunities, but in the wrong hands they could be devastating. Global governance of AI is at a very early stage. There have already been leaks of LLMs into the wild, where safety and human alignment will be a problem (Meta's LLaMA, for example).

Generative AI, like a variety of other technologies, has multiple uses, both civil and military. It also presents multiple moral hazards, even for those with the best intentions. Commercial rivalries may lead to lapses in good judgement, and then there are always rogue actors and criminals. The usual human dilemmas - seen with nuclear power, the internet, and biotechnology - are front and center. Generative AI is going to be geopolitically destabilizing, alternately growing and diminishing power centers of all kinds, some surprisingly so.

Generative AI currently has shortcomings and flaws, some of which, like those in early aircraft, can be designed out, and others of which can be filled by linking to external platforms or by inbuilt or strap-on capabilities. For example, large language models have had shortcomings in mathematics; these can be remedied by linking to platforms like Wolfram Alpha. Linking generative AI platforms to other platforms can bring benefits, but it also presents risks. Sometimes generative AI models hallucinate - they make things up - and such reliability issues need to be rectified.
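
A minimal sketch of the tool-linking pattern described above, written in Python. The helper functions llm_complete() and query_compute_engine() are hypothetical stand-ins, not real library calls: in a working system they would call a hosted language-model API and a computational service such as Wolfram Alpha.

def llm_complete(prompt: str) -> str:
    """Stand-in for a hosted language-model call; returns a canned reply here."""
    return f"[model reply to: {prompt[:40]}...]"

def query_compute_engine(expression: str) -> str:
    """Stand-in for an external computational API such as Wolfram Alpha."""
    return "1.4142135623730951"  # canned result, for illustration only

def answer(question: str, needs_exact_math: bool) -> str:
    """Delegate exact arithmetic to the external engine, prose to the model."""
    if needs_exact_math:
        result = query_compute_engine(question)
        # The model phrases the externally computed result as a natural-language answer.
        return llm_complete(f"Question: {question}\nVerified result: {result}\nAnswer briefly.")
    return llm_complete(question)

print(answer("What is the square root of 2?", needs_exact_math=True))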

So what are generative AI models?

Generative AI models are AI models that can generate answers to questions or prompts expressed as natural-language text.

Generative AI models can be text generators, voice generators, image generators, audio generators, and code generators. Generative AI models can also be multimodal; that is, one model can be trained on multiple types of data and produce multiple types of outputs. Large language models share the common feature of using natural-language input to generate their output. This is their primary enabling capability, but it is also a limiting factor because of the limits of abstraction. Large language models can also decipher "languages" not normally considered languages, such as DNA sequences and RF signals. Large language models can perform classification, summarization, translation, generation, and dialogue tasks.
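
The common text-in/text-out interface is what lets one model cover many of these tasks. The Python sketch below is illustrative only; generate() is a hypothetical stand-in for a real model API call, and the prompts are examples, not prescriptions.

def generate(prompt: str) -> str:
    """Stand-in for a language-model completion call; returns a canned reply."""
    return f"[model output for: {prompt.splitlines()[0]}]"

document = "Large language models map natural-language prompts to text completions."

# The same model handles different tasks simply by changing the prompt.
prompts = {
    "classification": f"Label the sentiment as positive, negative, or neutral:\n{document}",
    "summarization": f"Summarize in one sentence:\n{document}",
    "translation": f"Translate to French:\n{document}",
    "dialogue": f"User: Explain this to a novice.\nContext: {document}\nAssistant:",
}

for task, prompt in prompts.items():
    print(f"{task}: {generate(prompt)}")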

Generative AI models can be trained on a variety of datasets and, once trained, are able to create new, realistic outputs in response to queries and prompts based on what they have learned. The more accurate the results, the better the model has learned the underlying structure of the dataset. The models don't simply memorize training data; they locate the hidden structures underlying the data. Generative AI models require vast amounts of data to reach conclusions that humans may reach from far less; humans, however, are limited in the scale of the data they can analyze.

Generative AI models come in a variety of sizes and can be optimized for a variety of functions. Some have billions of parameters, and different generative AI models are compared using this metric, a measure of the size and complexity of the model; generally, the more parameters, the better the performance. Model parameters determine how input data is transformed into the desired output. Some large language models (LLMs) require computational loads that are accessible, financially and resource-wise, only to extremely large global corporations or nation states, while some variants can be operated on a desktop. As semiconductor computational and memory capacity increases, generative AI models may be lodged in cell phones and smaller devices. Another metric for measuring performance is the number of tokens the model is trained on and their ratio to the number of parameters. A token is the basic unit of text or code used to process and generate language. Tokens can take different forms and are an important factor in the cost and performance of a model, and therefore in the economics of different applications.
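
The back-of-envelope Python sketch below relates parameter counts to training tokens and training compute. It assumes two commonly cited approximations rather than any vendor's published figures: roughly 20 training tokens per parameter (the compute-optimal ratio reported in the DeepMind paper linked below) and roughly 6 floating-point operations per parameter per token for training. The model sizes are illustrative only.

TOKENS_PER_PARAMETER = 20      # approximate compute-optimal ratio (Hoffmann et al., 2022)
FLOPS_PER_PARAM_TOKEN = 6      # common estimate of training FLOPs per parameter-token

def compute_optimal_budget(n_parameters: float) -> tuple[float, float]:
    """Return (training tokens, training FLOPs) for a compute-optimal training run."""
    tokens = TOKENS_PER_PARAMETER * n_parameters
    flops = FLOPS_PER_PARAM_TOKEN * n_parameters * tokens
    return tokens, flops

for n_parameters in (1e9, 7e9, 70e9):   # 1B, 7B, and 70B parameters, illustrative sizes
    tokens, flops = compute_optimal_budget(n_parameters)
    print(f"{n_parameters / 1e9:>4.0f}B parameters -> ~{tokens / 1e9:.0f}B tokens, ~{flops:.1e} training FLOPs")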

The returns to scale of LLMs and generative AI have meant that very large organizations have established an early lead in this field. However, individuals with concerns regarding the potential concentration of power - commonly in the open movement - favour sharing generative AI tools broadly. The clear financial gains to be made by the various AI players, including individuals and small groups, similarly create incentives for the uncontrolled spread of capabilities. How to square the circle of safety and openness is thus problematic.

Below are links to generative AI players and resources on the topic. Again, the field is moving very quickly, and planners should be asking themselves what steps they should be taking now to prepare.

Governments should be working on International AI Security & Cooperation (IAISC)

    

Generative AI - Large Language Models Overview

  Making AI accessible with Andrej Karpathy and Stephanie Zhan (March 26, 2024)
  
  Royal Institution Lecture: What is generative AI and how does it work? with Mirella Lapata (October 13, 2023)
  
  Wiki: Generative AI
  
  Max Tegmark interview: Six months to save humanity from AI? | DW Business Special (April 13, 2023)
  
  Quanta Magazine: The Unpredictable Abilities Emerging From Large AI Models (April 13, 2023)
  
  arxiv.org: Eight Things to Know about Large Language Models (April 3, 2023)
  
  Humanloop Blog: AI research, ideas and product updates
  
  Top Large Language Models (LLMs) in 2023 from OpenAI, Google AI, Deepmind, Anthropic, Baidu, Huawei, Meta AI, AI21 Labs, LG AI Research and NVIDIA (February 22, 2023)
  
  McKinsey: What is generative AI? (January 19, 2023)
  
  YouTube: Interview by Eye on AI. GPT-4 co-creator Ilya Sutskever, co-founder and chief scientist at OpenAI, talks about large language models, hallucinations and his vision of AI-aided democracy. (March 23, 2023)
  
  Top 35 Generative AI Tools by Category (text generators, voice generators, image generators, audio generators, code generators) (Updated December 2022)
  
  GPT-3 uses the InstructGPT models, now the default language models accessible via the GPT-3 API (January 27, 2022)
  
  InstructGPT - A more truthful and less toxic GPT-3
  
  DeepMind: Training Compute-Optimal Large Language Models (the optimal model size and number of tokens for training a transformer language model under a given compute budget) (March 29, 2022)
  
  arXiv paper: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks (July 05, 2023)
  
  arXiv paper: On the Measure of Intelligence, by François Chollet (November 5, 2019)
  
  Stephen Wolfram on ChatGPT
  
  Wolfram Alpha and ChatGPT look like a killer combo (April 17, 2023)
  
  Wolfram series of discussions on how ChatGPT works (March - April 2023)
  
  Simons Institute talk: Po-Shen Loh (Carnegie Mellon University), Teaching Mathematics: Building Human Intelligence at Scale, to Save the Next Generation from ChatGPT (from unemployment) (April 23, 2023)
  
  Interview with Emad Mostaque @EMostaque, the CEO behind the development of open-source music- and image-generating systems such as Dance Diffusion and Stable Diffusion
  
  Interview with Emad Mostaque founder of Stability AI, the company behind Stable Diffusion, an image-generating algorithm
  
  Glossary: Model parameters
  
  Glossary: Application Programming Interface (API)
  
  Computational rationality: A converging paradigm for intelligence in brains, minds, and machines (PDF)
  

Major Players in Generative AI - Large Language Models

  OpenAI
  
  Microsoft
  
  Google AI
  
  DeepMind
  
  Anthropic
  
  X.AI
  
  Meta AI
  
  Amazon Bedrock
  
  IBM watsonx AI and data platform
  
  AI21 Labs
  
  LG AI Research
  
  NVIDIA
  
  Baidu
  
  Alibaba
  
  Huawei
  

Generative AI - Large Language Models - Platforms: Text generators

  OpenAI: ChatGPT
  
  Microsoft's New Bing
  
  Meta's LLaMA
  
  Amazon
  
  Anthropic's Claude
  
  Berkeley AI Research (BAIR) Koala: A Dialogue Model for Academic Research
  
  Stanford University Human-Centered AI: Alpaca
  
  Aimsoft
  
  Baidu Ernie Bot (Chinese)
  
  Baidu Ernie Bot Description (English)
  
  TUM AI Lecture Series - Visual Synthesis for Understanding our (Visual) World (Björn Ommer)(November 2022)
  
  Why is Discord becoming the home for AI? Discord is a VoIP and instant messaging social platform; a voice, video, and text chat app that's used by tens of millions of people ages 13+ to talk and hang out with their communities and friends. (March 13, 2023)
  
  See Text Generators
  
  Google Query: generative ai text generators
  

Generative AI - Large Language Models - Platforms: Coding generators

  Google Search: Generative AI Coding generators
  
  Top 17 Generative AI-based Programming Tools (For Developers)(March 7, 2023)
  
  
  
  GitHub Copilot X: generative AI development tool made with OpenAI’s Codex model, a descendant of GPT-3.
  
  Comparing OpenAI Codex and GitHub Copilot: A Look at the Pros and Cons of Each (April 7, 2023)
  
  See Code Generators
  
  Google Query: Generative AI Code Generators
  

Generative AI Text-to-Image software

  Midjourney Midjourney is an artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called "prompts", similar to OpenAI's DALL-E and Stable Diffusion (Source: wiki)
  
  OpenAI's DALL-E can create realistic images and art from a description in natural language.
  
  Stability AI's Stable Diffusion
  
  Stable Diffusion Discord (Stability AI)
  
  stability.ai - developer of Stable Diffusion, AI text-to-image software comparable to Midjourney and DALL-E
  
  Google Query: Text to Image Software
  
  See Image Generators
  

Generative AI - Large Language Models - Platforms: Text to voice generators

  DeepMind WaveNet
  
  Google Search: Text to Voice generators
  
  See Voice Generators
  

Generative AI - Large Language Models - Platforms: Text to audio generators

  Dance Diffusion - the first in a suite of generative audio tools for producers and musicians to be released by Harmonai
  
  Google Research: MusicLM: Generating Music From Text Examples
  
  See Audio Generators
  

Generative AI - Platforms: Text to music generators

  Aimsoft Aimenicorn
  
  Google Music Examples
  
  See Music Generators
  
  Google Search: MusicLM
  
  
  

Generative AI - Platforms: Multimodal models

  OpenAI: GPT-4
  
  Google Bard
  
  
  

Generative AI - Speech Recognition

  OpenAI: English speech recognition
  

Generative AI - Platforms: Add-ons (APIs)

  Pinecone - Long-term Memory for AI
  
  Wolfram Alpha API - gives LLMs and other AI systems natural-language access to computational intelligence
  
  Eden AI APIs Landscape
  

Generative AI - Large Language Models News

  Meta: AI Alliance Launches as an International Community of Leading Technology Developers, Researchers, and Adopters Collaborating Together to Advance Open, Safe, Responsible AI (December 4, 2023)
  
  Google & Nvidia AI Announcements - Cloud Next 2023 (September 2, 2023) (YouTube)
  
   OpenAI's plans for 2023 - 2024 (May 28, 2023)
  
  Japan's Fugaku supercomputer to help develop homegrown generative AI (May 23, 2023)
  
  The Verge: Anthropic has expanded the context window of its chatbot Claude to 75,000 words (May 14, 2023)
  
  Alphabet to combine AI research units Google Brain, DeepMind (April 20, 2023)
  
  Verge: Elon Musk quietly starts X.AI, a new artificial intelligence company to challenge OpenAI (April 14, 2023)
  
  NTIA Seeks Public Input to Boost AI Accountability (April 11, 2023)
  
  Technology Review: Three ways AI chatbots are a security disaster - Large language models are full of security vulnerabilities, yet they’re being embedded into tech products on a vast scale. (April 3, 2023)
  
  Google: Generative AI News
  
  Meet ChatArena: A Python Library Designed To Facilitate Communication And Collaboration Between Multiple Large Language Models (LLMs) (April 7, 2023)
  
  CBS Interview with Geoffrey Hinton at the Vector Institute in Toronto (March 31, 2023)
  
  The Register: LLaMA drama as Meta's mega language model leaks (March 8, 2023)
  
  Introducing LLaMA: A foundational, 65-billion-parameter large language model (February 24, 2023)
  
  Gmail creator says ChatGPT-like AI will destroy Google's business in two years. Will it? Only time will tell. (January 30, 2023)
  
  Google's Confident Adaptive Language Modeling (CALM): a new language model technology that can improve large language model speeds by up to three times (December 20, 2022)
  
  Generative AI is changing everything. But what’s left when the hype is gone? (December 16, 2022)
  
  Microsoft 2022 Year in Review
  
  Facebook: MultiRay: Optimizing efficiency for large-scale AI models (November 18, 2022)
  
  
  

Generative AI - Large Language Models Twitter

  Twitter Query: Large Language Models
  
  Twitter Query: Generative AI
  
  Twitter Query: Stability.ai
  
  Demis Hassabis @demishassabis
  
  Ilya Sutskever @ilyasut
  
  Greg Brockman @gdb
  
  Stanford HAI @StanfordHAI
  
  Twitter: Emad Mostaque @EMostaque (Stability.ai)
  
  Twitter: OpenAI @OpenAI
  

Generative AI - Large Language Models YouTube

  YouTube Search: generative ai
  
  YouTube Search: Large Language Models
  
  Center for Humane Technology: The A.I. Dilemma (March 9, 2023)
  
  
  

Research Organizations

  Google Bard Blog: Experiment Updates
  
  OpenAI Blog on Safety
  
  
  