Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety, best known for his writings on rationality, cognitive biases, and the development of superintelligence. He has written extensively on AI safety and has argued for building AI systems that are aligned with human values and interests. Yudkowsky is a co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to research on safe and beneficial artificial intelligence, and of the Center for Applied Rationality (CFAR), a non-profit focused on teaching rational thinking skills. He is also the author of Rationality: From AI to Zombies.
In this episode, we discuss Eliezer's concerns about artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He is a brilliant mind and an interesting person, and he genuinely believes all of it.
Related videos:

- The Model That Changes Everything: Alpaca Breakthrough (ft. Apple's LLM, BritGPT, Ernie and AlexaTM) (00:09:49)
- Harry Potter and The Methods of Rationality: Part 1 (Chapters 1-21) (11:31:41)
- Eliezer Yudkowsky on if Humanity can Survive AI (03:12:41)
- S4A Office Hours 2023-07-25: Studying Marxism to Solve Present-Day Social-Political Problems & More (02:33:30)
- The Hidden Complexity of Wishes (00:11:28)
- 159 - We're All Gonna Die with Eliezer Yudkowsky (01:49:22)
- Harry Potter and The Methods Of Rationality: Chapter 1 (00:17:27)
- Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED (00:10:33)
- Sorting Pebbles Into Correct Heaps - A Short Story By Eliezer Yudkowsky (00:06:44)
- The Power of Intelligence - An Essay By Eliezer Yudkowsky (00:07:20)
- The Parable of the Dagger (00:03:32)
- Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368 (03:17:51)
- Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality (04:03:25)
- Eliezer Yudkowsky on the Dangers of AI 5/8/23 (01:17:09)
- The Iceberg of Existential Horror: 42 Wild and Frightening Theories About Reality (01:10:09)
- George Hotz vs Eliezer Yudkowsky AI Safety Debate (01:35:45)
- Joscha Bach: Life, Intelligence, Consciousness, AI & the Future of Humans | Lex Fridman Podcast #392 (02:53:46)
- George Hotz: Tiny Corp, Twitter, AI Safety, Self-Driving, GPT, AGI & God | Lex Fridman Podcast #387 (03:08:46)
- Don't Look Up - The Documentary: The Case For AI As An Existential Threat (00:17:11)
- Harry Potter and the Methods of Rationality Audiobook (00:00:59)
- Harry Potter and the Methods of Rationality: Chapter 4 (Audiobook) (00:16:42)
- Harry Potter and the Methods of Rationality: Chapter 3 (Audiobook) (00:17:29)
- Harry Potter and the Methods of Rationality: Chapter 2 (Audiobook) (00:14:05)
- Harry Potter and the Methods of Rationality: Chapter 1 (00:23:39)