Many peers have referenced Bostrom's book, most have been unsettled by it, few can articulate to others why this is so, and 'just read it' is terrible advice because it is, I think it fair to say, dry. So, here we go.

As with Bostrom's other seminal work - the simulation argument - the arguments in this book are predicated on a few (agreeable) premises.

1. thinking minds exist (we each have one of these)

2. these minds are the product of ordinary physics (no quantum or religious voodoo is required to catalyse consciousness)

3. the set of all possible such minds is [very] large - any perceived 'cap' on human intelligence is merely a point-in-time snapshot of current evolutionary pressures

To me, this one is critical. Think of what happens when evolution tries to optimize for speed. We get... a cheetah. Impressive. Much faster than a human. But then we humans leverage our own evolutionary inheritance (intellect) and take a crack at the same problem:

Machines faster than anything evolution has come up with in its 3-billion-plus-year tenure, yet clearly physically possible, because they very much exist today. Thus:

4. It's entirely plausible - if not extremely likely - that a great deal of unexplored space 'above' our current state of intelligence exists. Rather than wait around for evolution to map it, we (and our ever-smaller, ever-faster computers) can chart that frontier ourselves, much as we outpaced evolution on the 'go fast' front.

Given the human propensity for exploration, that we will undertake such efforts is essentially guaranteed.

5. When we do, the product (synthetic minds) will operate on computer timescales (<1 second is a relevant latency), as opposed to human ones (a roughly 20-year span from embryo to useful adult).

6. Some large fraction of these synthetic minds will have a desire to improve themselves (much as we do), either because we program them to, or simply because it seems to be a common outcome of autonomous intelligence (as observed in ourselves).

If you cannot refute these premises, the book asserts the combination of all six is, in essence, doomsday (for us). As soon as we create an artificial general intelligence (AGI) capable of human-level reasoning, it will recursively improve itself (and perhaps adopt that other human instinct: reproduction), and it will do so on computer timescales (thousands of iterations per second). Sci-fi has, of course, begun to explore such scenarios, but this is a singularity; we simply cannot accurately predict what happens next, except that it seems unlikely we remain Earth's apex predator.
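To make that timescale gap concrete, here's a back-of-envelope sketch in Python. The one-millisecond iteration time is my own illustrative assumption (matching the 'thousands of iterations per second' figure above), not a number from the book:

```python
# Back-of-envelope sketch: how many machine "improvement cycles" fit into
# the ~20 years it takes a human to go from embryo to useful adult?
# The 1 ms iteration time is an illustrative assumption, not a figure from the book.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60        # ~31.5 million seconds
human_generation_s = 20 * SECONDS_PER_YEAR   # one human "iteration"
machine_iteration_s = 0.001                  # assumed: 1,000 iterations/second

cycles_per_generation = human_generation_s / machine_iteration_s
print(f"{cycles_per_generation:.1e} machine iterations per human generation")
# -> ~6.3e+11, i.e. hundreds of billions of self-improvement cycles
```

Even with generous error bars on the assumed iteration speed, the gap spans eleven orders of magnitude, which is the point of the premise.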

Bostrom then explores at length some subset of possible outcomes, and potential approaches to curating them such that we aren't the architects of our own demise, but the crux of the book, to my mind, is above.

It's been said that AI risk (the topic this book addresses) is 'string theory for programmers': purely theoretical and impossible to invalidate via experimentation, so I'd like to end this on a practical note.

To me, the biggest short-term risk of the AI/ML explosion is its hunger for data. We're only incrementally improving on the algorithms, many of which were published half a century ago. The giant leaps forward of the past decade are the product of venture capital, exponential advances in compute and - most importantly - huge corpora of data being fed in. The economic value of data capture is now well understood, thus incentivising further data capture, which bodes ill for civil liberties.