The field is moving very quickly, and planners should be asking themselves what steps to take now to prepare.
Generative AI, like other forms of machine learning, has been seeing returns to scale in computation. Recent significant advances have given rise to safety concerns that we may see an unexpected evolutionary leap from artificial intelligence (AI) to artificial general intelligence (AGI), and with it a self-improving runaway reaction akin to a nuclear chain reaction.
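To give a rough sense of what "returns to scale" means here: empirical scaling-law studies have reported that model loss falls smoothly, as a power law, as training compute grows. The sketch below is illustrative only; the constants are placeholder assumptions, not measured values from any real model.

```python
# Illustrative only: a power-law "returns to scale" curve of the kind
# reported in empirical LLM scaling-law studies. The constants c_c and
# alpha are made-up placeholders, not measured values.

def loss_from_compute(compute: float, c_c: float = 1.0, alpha: float = 0.05) -> float:
    """Toy power law: loss ~ (C_c / C) ** alpha."""
    return (c_c / compute) ** alpha

for exponent in range(7):
    compute = 10.0 ** exponent
    print(f"compute 10^{exponent}: loss {loss_from_compute(compute):.3f}")
```

Each tenfold increase in compute buys a steady fractional reduction in loss - diminishing in absolute terms, but so far reliable enough that larger budgets have bought better models.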
Generative AI models, including large language models, vary in their computational and memory costs, and hence in the financial and energy costs of their outputs, which in turn shape their use cases.
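To make the cost point concrete, here is a back-of-envelope sketch of the memory needed just to hold a model's weights at different numerical precisions. This deliberately ignores activations, caches, and framework overhead, so real requirements are higher.

```python
# Back-of-envelope memory footprint for holding model weights in memory.
# Assumptions: weights only; activations, KV cache, and framework
# overhead are ignored, so real-world requirements are higher.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def weight_memory_gb(n_params: float, precision: str) -> float:
    return n_params * BYTES_PER_PARAM[precision] / 1e9

for n in (7e9, 70e9, 1e12):  # 7B, 70B, and 1T parameters
    row = ", ".join(f"{p}: {weight_memory_gb(n, p):,.0f} GB" for p in BYTES_PER_PARAM)
    print(f"{n / 1e9:,.0f}B params -> {row}")
```

A 7-billion-parameter model at 16-bit precision needs roughly 14 GB just for its weights, within reach of a high-end desktop GPU, while a trillion-parameter model needs a couple of terabytes. This is one reason costs and use cases diverge so sharply.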
AGI is sometimes referred to as superintelligence. Such a superintelligence could be vastly broader and deeper than human intelligence. Neither the boundaries of such a superintelligence nor the effectiveness of so-called safety "guardrails" can be predicted. The probability that a superintelligence will emerge from increases in scaling or from further development of generative AI may be small, but it is not zero. We may be approaching a point of "critical mass" where the implications for human society are non-linear - this has been referred to as The Singularity.
The other, more immediate and demonstrable concern is that generative AI may destabilize societies, leading to violence and armed conflict as individuals, small groups, small and medium states, and superstates vie for power.
Generative AI models may remedy many societal problems and create new opportunities, but in the wrong hands they could be devastating. Global governance of AI is at a very early stage, and LLMs have already leaked into the wild (Meta's LLaMA, for example), where safety and human alignment will be a problem.
Generative AI, like a variety of technologies, has multiple uses, both civil and military. It also presents multiple moral hazards, even for those with the best intentions. Commercial rivalries may lead to lapses in good judgement, and there are always rogue actors and criminals. The usual human dilemmas - seen with nuclear power, the internet, and biotechnology - are front and center. Generative AI is going to be geopolitically destabilizing, alternately growing and diminishing power centers of all kinds, some surprisingly so.
Generative AI today has shortcomings and flaws, some of which, like those in early aircraft, can be designed out, while others can be compensated for through links to other platforms or through inbuilt or strap-on capabilities. For example, large language models have had shortcomings in mathematics, which can be remedied by linking to platforms such as Wolfram Alpha. Linking generative AI platforms to other platforms can bring benefits, but it also presents risks. Generative AI models sometimes hallucinate - they make things up - and such reliability issues need to be rectified.
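A minimal sketch of the linking pattern described above: route arithmetic to a deterministic engine and everything else to the model. Here `ask_llm` is a hypothetical stand-in for any LLM API, and Python's own expression parser stands in for an external mathematics platform such as Wolfram Alpha, whose real API differs.

```python
# Minimal sketch of tool linking: route arithmetic to a deterministic
# evaluator, everything else to the language model. `ask_llm` is a
# hypothetical placeholder, not a real API.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    """Evaluate a pure arithmetic expression (numbers and + - * / only)."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("not pure arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def ask_llm(prompt: str) -> str:
    return "(model-generated text for: " + prompt + ")"  # placeholder

def answer(prompt: str) -> str:
    try:
        return str(safe_eval(prompt))   # tool path: exact arithmetic
    except (ValueError, SyntaxError):
        return ask_llm(prompt)          # model path: everything else

print(answer("12345 * 6789"))    # handled by the tool, not the model
print(answer("Explain tokens"))  # handled by the model
```

The benefit is exact answers where the model is weak; the risk, as noted, is that every new link is a new attack surface and a new failure mode.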
So what are generative AI models?
Generative AI models are AI models that can generate answers to questions or prompts posed in natural language.
Generative AI models can be text generators, voice generators, image generators, audio generators, and code generators. They can also be multimodal; that is, one model can be trained on multiple types of data and produce multiple types of outputs. Large language models share the common feature of using natural-language input to generate their output. This is their primary enabling capability, but it is also a limiting factor because of the limits of abstraction. Large language models can also decipher "languages" not normally considered languages, such as DNA sequences and RF signals, and can perform classification, summarization, translation, generation, and dialogue tasks, as sketched below.
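As one concrete illustration of how a single model handles these different tasks, the sketch below uses the OpenAI Python client (one provider API among many; the model name is an example and may need updating) to perform summarization, translation, and classification with nothing but differently worded prompts:

```python
# Sketch: the same chat-completion endpoint handles different tasks purely
# through the wording of the prompt. Requires the `openai` package and an
# API key; the model name is an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(run("Summarize in one sentence: Large language models are ..."))
print(run("Translate to French: The field is moving very quickly."))
print(run("Classify the sentiment (positive/negative): I loved it."))
```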
Generative AI models can be trained on a variety of datasets and, once trained, are able to create new, realistic outputs in response to queries and prompts based on what they have learned. The more accurate the results, the more the model has learned the underlying structure of the dataset. The models do not simply memorize training data; they locate the hidden structures underlying the data. Generative AI models require vast amounts of data that humans would not need to reach a learning determination; humans, however, are limited in the scale of the data they can analyze.
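A toy way to see "learning structure" rather than memorizing: the word-level model below (vastly simpler than a real LLM, and purely illustrative) learns which word tends to follow which in a tiny corpus, then generates sequences it never saw verbatim.

```python
# Toy illustration (not a real LLM): learn word-to-word transition
# structure from a small corpus, then sample new sequences from it.
import random
from collections import defaultdict

corpus = ("the model learns structure . the model generates text . "
          "the data shapes the model .").split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(transitions[word])
        out.append(word)
    return " ".join(out)

random.seed(0)
print(generate("the"))  # a novel sequence drawn from learned statistics
```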
Generative AI models come in a variety of sizes and can be optimized for a variety of functions. Some have billions of parameters, and models are commonly compared by parameter count, a measure of a model's size and complexity; generally, the more parameters, the better the performance. Model parameters determine how input data is transformed into the desired output. Some large language models (LLMs) impose computational loads that are accessible, financially and resource-wise, only to extremely large global corporations or nation states, while some variants can be operated on a desktop. As semiconductor computational and memory capacity increases, generative AI models may be lodged in cell phones and smaller devices. Another metric for measuring performance is the number of tokens the model employs and their ratio to the number of parameters. A token is the basic unit of text or code used to process and generate language. Tokens can take different forms and are an important factor in the cost and performance of a model, and therefore in the economics of different applications.
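To make tokens concrete, the snippet below uses the `tiktoken` tokenizer library to show how a sentence breaks into the units an LLM actually processes. The encoding name is one of OpenAI's; other model families tokenize differently.

```python
# Tokens in practice: count and inspect the units an LLM processes.
# Requires the `tiktoken` package; "cl100k_base" is one OpenAI encoding,
# and other model families use different tokenizers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Generative AI models come in a variety of sizes."
token_ids = enc.encode(text)

print(len(token_ids), "tokens:", token_ids)
print([enc.decode([t]) for t in token_ids])  # the text each token covers
```

Because providers typically price usage per token, and because scaling studies have reported compute-optimal training-token counts on the order of tens of tokens per parameter, token counts feed directly into the economics mentioned above.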
The returns to scale of LLMs and generative AI have meant that very large organizations established an early lead in the field. However, individuals concerned about the potential concentration of power - commonly in the open-source movement - favour sharing generative AI tools broadly. The clear financial gains to be made by the various AI players, including individuals and small groups, similarly create incentives for the uncontrolled spread of capabilities. How to square the circle of safety and openness is thus problematic.
Below are links to generative AI players and resources on the topic. Again, the field is moving very quickly, and planners should be asking themselves what steps to take now to prepare.
Governments should be working on International AI Security & Cooperation (IAISC).