

If Anyone Builds It, Everyone Dies: Why AI Is on Track to Kill Us All--And How We Can Avert Extinction - Eliezer Yudkowsky

An urgent warning from two artificial intelligence insiders on the reckless scramble to build superhuman AI--and how it will end humanity unless we change course.

In 2023, hundreds of machine-learning scientists signed an open letter warning about our risk of extinction from smarter-than-human AI. Yet today, the race to develop superhuman AI is only accelerating, as many tech CEOs throw caution to the wind, aggressively scaling up systems they don't understand--and won't be able to restrain. There is a good chance that they will succeed in building an artificial superintelligence on a timescale of years or decades. And no one is prepared for what will happen next.

For over 20 years, two signatories of that letter--Eliezer Yudkowsky and Nate Soares--have been studying the potential of AI and warning about its consequences. As Yudkowsky and Soares argue, sufficiently intelligent AIs will develop persistent goals of their own: bleak goals that are only tangentially related to what the AI was trained for; lifeless goals that are at odds with our own survival. Worse yet, in the case of a near-inevitable conflict between humans and AI, superintelligences will be able to trivially crush us, as easily as modern algorithms crush the world's best humans at chess, without allowing the conflict to be close or even especially interesting.

How could an AI kill every human alive, when it's just a disembodied intelligence trapped in a computer? Yudkowsky and Soares walk through both argument and vivid extinction scenarios and, in so doing, leave no doubt that humanity is not ready to face this challenge--ultimately showing that, on our current path, If Anyone Builds It, Everyone Dies.


-10%

Free shipping

RRP: 186.00 Lei

This is the Manufacturer's Recommended Price. The product's sale price is shown below.

167.40 Lei (list price: 186.00 Lei)

You receive 167 points

You receive loyalty points after every order! 100 loyalty points are worth 1 leu. Use them on future purchases!

Delivery in 2-4 weeks

