Hiring a writer to co-author with me (Spencer Greenberg for ClearerThinking.org) — LessWrong
Published on October 27, 2024 5:34 PM GMT
Dario Amodei's "Machines of Loving Grace" sound incredibly dangerous, for Humans — LessWrong
Published on October 27, 2024 5:05 AM GMT What Dario lays out as a "best-case scenario" in...
Interview with Bill O’Rourke - Russian Corruption, Putin, Applied Ethics, and More — LessWrong
Published on October 27, 2024 5:11 PM GMT This is cross-posted from my blog and I interviewed...
On Shifgrethor — LessWrong
Published on October 27, 2024 3:30 PM GMT A small number of terms are elevated from the...
The hostile telepaths problem — LessWrong
Published on October 27, 2024 3:26 PM GMT Epistemic status: model-building based on observation, with a few...
What are some good ways to form opinions on controversial subjects in the current and upcoming era? — LessWrong
Published on October 27, 2024 2:33 PM GMT Take a random political issue with two sides A...
Video lectures on the learning-theoretic agenda — LessWrong
Published on October 27, 2024 12:01 PM GMT This is a YouTube playlist of recorded lectures on...
Electrostatic Airships? — LessWrong
Published on October 27, 2024 4:32 AM GMT Airships are pretty dang cool. Airplanes need a continuous...
A suite of Vision Sparse Autoencoders — LessWrong
Published on October 27, 2024 4:05 AM GMT CLIP-Scope? Inspired by Gemma-Scope, we trained 8 Sparse Autoencoders each...
Ways to think about alignment — LessWrong
Published on October 27, 2024 1:40 AM GMT I’m listing some “ways to think about alignment”. I’m...
Is there a CFAR handbook audio option? — LessWrong
Published on October 26, 2024 5:08 PM GMT I've gotten spoiled by AI readings, and am curious if...
A superficially plausible promising alternate Earth without lockstep — LessWrong
Published on October 26, 2024 4:04 PM GMT [Context re dath ilan: - [Keltham reflects on the...
Why is there Nothing rather than Something? — LessWrong
Published on October 26, 2024 12:37 PM GMT "Close the darn window! You know it gives me...
The Summoned Heroine's Prediction Markets Keep Providing Financial Services To The Demon King! — LessWrong
Published on October 26, 2024 12:34 PM GMT The Summoned Heroine and the Demon King: The Summoned Heroine...
AI Safety Camp 10 — LessWrong
Published on October 26, 2024 11:08 AM GMT We are pleased to announce that the 10th version...
Arithmetic Models: Better Than You Think — LessWrong
Published on October 26, 2024 9:42 AM GMT LessWrong user dynomight explains how arithmetic is an underrated...
Is the Power Grid Sustainable? — LessWrong
Published on October 26, 2024 2:30 AM GMT When I was growing up, most families in...
A Case for Conscious Significance rather than Free Will. — LessWrong
Published on October 25, 2024 11:20 PM GMT The following is born out of a frustration with...
Introducing Kairos: A new home for SPAR and FSP — LessWrong
Published on October 25, 2024 9:59 PM GMT
Brief analysis of OP Technical AI Safety Funding — LessWrong
Published on October 25, 2024 7:37 PM GMT TL;DR: I spent a few hours going through Open Philanthropy...
Lab governance reading list — LessWrong
Published on October 25, 2024 6:00 PM GMT What labs should do: The table/list in Towards best practices in...
A Logical Proof for the Emergence and Substrate Independence of Sentience — LessWrong
Published on October 24, 2024 9:08 PM GMT Sentience is the capacity to experience anything – the...
OpenAI’s cybersecurity is probably regulated by NIS Regulations — LessWrong
Published on October 25, 2024 11:06 AM GMT The EU and UK's Network and Information Systems (NIS)...
Linkpost: Memorandum on Advancing the United States’ Leadership in Artificial Intelligence — LessWrong
Published on October 25, 2024 4:37 AM GMT Memorandum on Advancing the United States’ Leadership in...
What You Can Give Instead of Advice — LessWrong
Published on October 24, 2024 11:10 PM GMT I am often tempted to give advice, and find...
Making a Pedalboard — LessWrong
Published on October 25, 2024 12:10 AM GMT A few weeks ago I posted about how...
is it possible to comment anonymously on a post? — LessWrong
Published on October 24, 2024 10:24 PM GMT
Against Job Boards: Human Capital and the Legibility Trap — LessWrong
Published on October 24, 2024 8:50 PM GMT Human capital trades more like real estate than equities....
IAPS: Mapping Technical Safety Research at AI Companies — LessWrong
Published on October 24, 2024 8:30 PM GMT
Our Digital and Biological Children — LessWrong
Published on October 24, 2024 6:36 PM GMT [Linkpost. This presents a way to talk to people...
Introducing Transluce — A Letter from the Founders — LessWrong
Published on October 23, 2024 6:10 PM GMT We are launching an independent research lab that builds...
A bird's eye view of ARC's research — LessWrong
Published on October 23, 2024 3:50 PM GMT This post includes a "flattened version" of an interactive...
Artificial V/S Organoid Intelligence — LessWrong
Published on October 23, 2024 2:31 PM GMT AI is a very controversial topic these days, but...
AI safety tax dynamics — LessWrong
Published on October 23, 2024 12:18 PM GMT Two important themes in many discussions of the future...
What is malevolence? On the nature, measurement, and distribution of dark traits — LessWrong
Published on October 23, 2024 8:41 AM GMT
Join the LessWrong Team for the Unaging System Challenge — LessWrong
Published on October 23, 2024 6:01 AM GMT Hey LessWrong, after sharing early results at Less Online this...
Word Spaghetti — LessWrong
Published on October 23, 2024 5:39 AM GMT I've written a lot of words—hundreds of blog posts,...
What is the alpha in one bit of evidence? — LessWrong
Published on October 22, 2024 9:57 PM GMT Recently the whole "if your p(doom) is high, you...
Catastrophic sabotage as a major threat model for human-level AI systems — LessWrong
Published on October 22, 2024 8:57 PM GMT Thanks to Holden Karnofsky, David Duvenaud, and Kate Woolverton...
Why I quit effective altruism, and why Timothy Telleen-Lawton is staying (for now) — LessWrong
Published on October 22, 2024 6:20 PM GMT ~5 months ago I formally quit EA (formally here means...
What is autonomy? Why boundaries are necessary. — LessWrong
Published on October 21, 2024 5:56 PM GMT Here I define autonomy as not having your insides controlled...
Could literally randomly choosing people to serve as our political representatives lead to better government? — LessWrong
Published on October 21, 2024 5:10 PM GMT I'm an advocate of something known as sortition. The premise...
There aren't enough smart people in biology doing something boring — LessWrong
Published on October 21, 2024 3:52 PM GMT Note: this essay is co-written with Eryney Marrogi, who...
Automation collapse — LessWrong
Published on October 21, 2024 2:50 PM GMT Summary: If we validate automated alignment research through empirical...
What AI companies should do: Some rough ideas — LessWrong
Published on October 21, 2024 2:00 PM GMT This post is incomplete. I'm publishing it because it...
What should OpenAI do that it hasn't already done, to stop their vacancies from being advertised on the 80k Job Board? — LessWrong
Published on October 21, 2024 1:57 PM GMT A sarcastic yet genuine question. Even in light of...
A Rocket–Interpretability Analogy — LessWrong
Published on October 21, 2024 1:55 PM GMT 1. 4.4% of the US federal budget went into the...
Tokyo AI Safety 2025: Call For Papers — LessWrong
Published on October 21, 2024 8:43 AM GMT Last April, AI Safety Tokyo and Noeon Research (in...
OpenAI defected, but we can take honest actions — LessWrong
Published on October 21, 2024 8:41 AM GMT
Slightly More Than You Wanted To Know: Pregnancy Length Effects — LessWrong
Published on October 21, 2024 1:26 AM GMT Pregnancy is most stressful at the beginning and at...