A Note On Pre-Paradigmatic Science
In 1962, philosopher of science Thomas Kuhn published his seminal work The Structure of Scientific Revolutions. In it, Kuhn argued that science is not a linear and continuous process of accumulating knowledge but a series of discrete “paradigm shifts.”
A common view within AI safety is that the field is “pre-paradigmatic”: we don’t yet know what the right problems, questions, tools, definitions, and approaches are.
Paradigm formation often involves unifying disparate disciplines, so AI safety is likely to be broader than other fields you’re used to. You’ll encounter a wide range of subjects: from logic to probability theory, from economics to psychology, from analytic geometry to voting theory, from neuroscience to contemporary machine learning.
It’s quite likely that many of these current approaches and tools will be discarded in the future. That’s part of the process. Since it’s difficult to anticipate which tools will be discarded, we’ve chosen to err on the side of including too much. Know that you don’t need to master everything in this book to be an effective AI safety researcher.
The Outline
This book is organized into three parts:
- Foundations
- Machine Learning
- Central Problems in AI Safety
This book is primarily about technical AI safety, so a large portion of it is devoted to getting you up to speed with the necessary technical background (in mathematics, computer science, economics, physics, etc.). The second part explores modern machine learning (i.e., deep learning), the path that currently seems likeliest to lead to AGI. Finally, part three puts these tools and knowledge to use to tackle different facets of the alignment problem.
The book can be read in various ways, and we encourage you to fork the repo and make your own adjustments:
- One-semester course on deep learning: Read through part 2; supplement with chapters from part 1 depending on students’ background.
- One-semester course on technical AI safety: Read through the introduction and all of part 3; supplement with chapters from part 1 depending on students’ background.
- One-semester course on non-technical AI safety: Read through the introduction and part 3; spend extra time on Chapter 27 (Governance).