City of Chicago Roadmap for AI

City of Chicago Roadmap for AI - Executive Summary

Generative AI is a disruptive technology with important implications for the city. This document lays out an initial roadmap of recommendations to assist in planning the city’s AI strategy. The goals are to maximize public benefit, help city employees focus on their core missions, address risks of harm to the public, and establish a framework to aid policymakers in making decisions about generative AI.

OpenAI’s release of ChatGPT on November 30, 2022[1] first brought Generative AI to widespread public attention. This remains a rapidly emerging topic with unpredictable and far-reaching consequences. It’s critical to build a framework that takes advantage of the opportunities afforded by AI and prepares for risks as they emerge.


[1] OpenAI. “Introducing ChatGPT.” Accessed August 3, 2023. https://openai.com/blog/chatgpt.


What is Artificial Intelligence?

We already have artificial intelligence all around us in many forms. AI helps us type on our smartphones, predicts traffic, estimates wait times for call centers, and sends alerts when we make an unusual transaction with our bank. AI typically performs simple tasks that make it easier for a human to complete a larger, more complicated task. It can be so effortless to engage with that we often take it for granted.

This roadmap is specifically focused on Generative Artificial Intelligence, a type of AI that uses large language models to mimic human interaction through chat. These models use machine learning to predict which words work meaningfully together in order to generate natural language. They are built from vast amounts of text drawn from real human interactions (and possibly also simulated interactions) in books, movies, online forums, code samples, reference documents, and many other sources.


Why Generative AI is Different

Most AI systems complete simple, specific tasks; they know when your toast is burning or when the freeway is experiencing a traffic jam. Generative AI is different because it can operate within nearly any context that can be described in human language. It belongs to a family of machine learning models referred to as “zero-shot” learners, which means it can make inferences about problems it has never seen before.
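
To make the zero-shot idea concrete, the illustrative Python sketch below asks a model to categorize a hypothetical resident request without any task-specific training. It assumes OpenAI’s Python client; the model name, category list, and example request are placeholders chosen for illustration, not an endorsed configuration for city use.

    # Illustrative zero-shot classification using OpenAI's Python client.
    # The model name, categories, and sample request are hypothetical examples.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the resident request into exactly one category: "
                        "pothole, graffiti, streetlight, or other."},
            {"role": "user",
             "content": "There is a large hole in the pavement on my block that keeps "
                        "damaging car tires."},
        ],
    )

    print(response.choices[0].message.content)  # expected output along the lines of: pothole

The model was never trained on this exact categorization task; it infers the answer from the instructions alone, which is what makes zero-shot behavior distinctive.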

During a chat, language is submitted to the model and a response is generated one word at a time. Because the model is trained on a vast amount of human conversation and writing, the chatbot can mimic an intelligence with broad human knowledge.
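
As a rough, deliberately simplified sketch of that one-word-at-a-time process (real chatbots use neural networks over token probabilities, not a lookup table), the loop below repeatedly asks a toy next-word predictor for a continuation and feeds each new word back in as context:

    # Toy sketch of word-by-word (autoregressive) generation. The predictor here
    # is a hypothetical stand-in for a large language model's next-word prediction.
    def predict_next_word(context_words):
        canned = {"what": "are", "are": "the", "the": "library",
                  "library": "hours", "hours": "?"}
        return canned.get(context_words[-1], "?")

    def generate_reply(prompt, max_words=10):
        words = prompt.lower().split()
        reply = []
        for _ in range(max_words):
            next_word = predict_next_word(words)  # pick the next word given everything so far
            words.append(next_word)               # the new word becomes part of the context
            reply.append(next_word)
            if next_word == "?":                  # stop when the model signals it is done
                break
        return " ".join(reply)

    print(generate_reply("what"))  # -> "are the library hours ?"

The essential point is the feedback loop: every generated word becomes part of the context used to predict the next one.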

In effect, Generative AI can closely mimic human intelligence because it’s able to relate completely new ideas, expressed through language, to existing ideas. It can react flexibly and creatively to language that is new, even when that language describes plausible but impossible scenarios.

Presently, Generative AI models have significant limitations. They are limited in the types of questions they can answer, the quality of many answers is uneven, they can omit things that should be disclosed, and they can amplify messages that should be censored. Probably the best-known limitation is their tendency to “hallucinate” completely invalid answers. These hallucinations can be dangerous because they may sound entirely plausible[2][3].


[2] Alkaissi, Hussam, and Samy I McFarlane. “Artificial Hallucinations in ChatGPT: Implications in Scientific Writing.” Cureus 15, no. 2 (2023): e35179. https://doi.org/10.7759/cureus.35179.

 

[3] Weise, Karen, and Cade Metz. “When A.I. Chatbots Hallucinate.” The New York Times, May 1, 2023, sec. Business. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html.