Superintelligence

Man builds an AGI and then either destroys all of humanity or colonises $4\times 10^{20}$ stars. This book was really, really good. I've been interested in AI safety for a while now, after reading LessWrong, and this book was full of so much good information and so many new perspectives.

Flashcards

Past developments and present capabilities

Growth models and big history

A few hundred thousand years ago, it took on the order of a million years to increase civilisation's ability to support an extra one million people living at subsistence level.
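The growth-modes point above is just a rate calculation: given how much extra carrying capacity civilisation adds per year, how long does it take to add support for one million more people at subsistence? A minimal sketch, where the hunter-gatherer rate of roughly one person per year follows from the book's figure and the faster rate is a purely hypothetical comparison:

```python
def years_to_add_million(capacity_added_per_year: float) -> float:
    """Years needed to add subsistence capacity for 1,000,000 more people."""
    return 1_000_000 / capacity_added_per_year

# Hunter-gatherer era: ~1 person of extra capacity per year,
# consistent with "a million years to add a million people".
print(years_to_add_million(1))        # 1000000.0
# A hypothetical faster growth mode: 10,000 people per year.
print(years_to_add_million(10_000))   # 100.0
```

The point of the comparison is that each transition between growth modes (foraging to farming to industry) shrinks this timescale by orders of magnitude.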
2021-10-01