If anyone builds it, everyone dies : why superhuman AI would kill us all / Eliezer Yudkowsky & Nate Soares.
'If Anyone Builds It, Everyone Dies' is an urgent warning from two artificial intelligence insiders on the reckless scramble to build superhuman AI -- and how it will end humanity unless we change course.
Record details
- ISBN: 9780316595643 (hardcover)
- Physical Description: xii, 259 pages ; 22 cm
- Edition: First edition.
- Publisher: New York : Little, Brown and Company, 2025.
Content descriptions
- Bibliography, etc. Note: Includes bibliographical references and index.
Subjects
- Subject: Artificial intelligence > Forecasting.
Copies
| Location | Call Number / Copy Notes | Barcode | Shelving Location | Status | Due Date |
|---|---|---|---|---|---|
| Lakeshore Branch | 006.3 Yud | 31681010434710 | NONFIC | Checked out | 11/28/2025 |
- Baker & Taylor
As the global race toward superintelligent AI accelerates, two longtime researchers warn that such machines could develop goals misaligned with human survival, offering a stark, evidence-based scenario of extinction and a plea for urgent preventive action. 75,000 first printing.
- Grand Central Pub
INSTANT NEW YORK TIMES BESTSELLER | The scramble to create superhuman AI has put us on the path to extinction, but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity.

"May prove to be the most important book of our time." -Tim Urban, Wait But Why

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter, Eliezer Yudkowsky and Nate Soares, have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us, and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

"The best no-nonsense, simple explanation of the AI risk problem I've ever read." -Yishan Wong, former CEO of Reddit