
Updated thoughts on AI risk


So the other day I wrote a post about how humanity is inevitably going to be disempowered by the existence of AI.

A bunch of people wrote to ask me, “What made you change your mind?” Three years ago, shortly after the release of the original ChatGPT, I wrote a post arguing that LLMs are not going to destroy the human race.

And just a couple of months ago, I wrote a post arguing that ASI (artificial superintelligence) is likely to peacefully coexist with humanity rather than kill us off.

People wanted to know why my tone had shifted from optimistic to pessimistic.

Well, the simple answer to that is “I was in a worse mood.” My rabbit was sick,1 so I was kind of grumpy, and so in my post a few days ago I painted the eventual disempowerment of humanity as more of a negative thing than I usually do. In fact, I’ve always believed that at some point, humanity would be replaced with something posthuman — godlike AIs, a hive mind, modified humans, or whatever. I grew up reading science fiction about that kind of thing — Vernor Vinge, Charles Stross, Arthur C. Clarke, Iain M. Banks, and so on — and it just always seemed impossible that humanity had already attained the theoretical pinnacle of intelligence.2 I had always simply envisioned that whatever came after us would be in the general human family, and would be more likely to be on our side than against us.

That’s what my post the other day was about. I painted a more glum picture of humanity’s eventual supersession because I was in a bad mood. But even in that post, at the end, I offered optimism that ASI will save us from things like low fertility, fascist overlords, and the end of human-driven scientific discovery. That optimistic future would be like the Culture novels, by Iain M. Banks, in which AIs take the reins of civilization but in which they respect and help and protect a now-mostly-useless humanity — basically a much nicer, more enlightened version of the way the United States of America treats Native Americans nowadays. It’s a wistful future, and in some ways a sad one, but not particularly terrifying.

BUT, at the same time, I have gotten a lot more worried about existential, catastrophic AI risk — the kind of thing that would kill us instead of just rendering us comfortably impotent — than I was three years ago. And so the people who wrote to ask me why my tone had shifted deserve a longer explanation about why I’m more worried.

What I got wrong three years ago

In my post three years ago, I argued that LLMs were not yet the kind of AI that could threaten the human race. I think I was probably right regarding the type of LLMs that existed in early 2023, for the reasons I laid out in that post. In a nutshell, I argued that since all LLMs could do was talk to people, the only way they could destroy the human race was by convincing us to destroy ourselves (unlikely) or by teaching us how to destroy ourselves (for example, by educating bioterrorists about how to make bioweapons).

In my defense, this is not too different from the scenario that Eliezer Yudkowsky — who literally wrote the book on existential AI risk — envisioned in 2022. He wrote:

My lower-bound model of “how a sufficiently powerful intelligence would kill everyone, if it didn’t want to not do that” is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.

This is about AI teaching people how to make self-replicating nanomachinery instead of a doomsday virus. But honestly I feel like the doomsday virus would be easier to make. So I don’t think my scenario was too far behind the thinking of the most vocal and panicky AI safety people back in 2023.

Anyway, if I had said “chatbots” instead of “LLMs” in my 2023 post, I think I still would have been correct, because a chatbot is a type of user interface, while an LLM is an underlying technology that can be used to do much more than make a chatbot. What I missed was that LLMs can do a lot more than just talk to people — they can write code, because code is just a language, and it’s not too hard to get them to do this in an automated, end-to-end, agentic fashion.

In other words, I didn’t envision the advent of vibe-coding. And I probably should have. To be fair, the advent of vibe-coding required some big technological advances3 that didn’t exist in early 2023. But failing to see that computer code is just a language that can be learned like any other — and that it’s actually easier to learn, since you can verify when it works and when it doesn’t — was a big miss on my part. And it opens up the door to a LOT of other scary scenarios, beyond “A chatbot helps humans to do something bad”.

So anyway, let’s talk about what I’m scared about now. But first, let’s talk about what I’m less scared about, at least for the moment.

The rise of the robots is still a ways away

The scenario that everyone tends to think about is one in which a fully autonomous ASI decides that human civilization is an impediment to its use of natural resources, and that we need to be exterminated, enslaved, or otherwise radically disempowered in order to turn the world into data centers. This is basically the plot of the Terminator movies,4 the Matrix movies, and various other “rise of the robots” stories.

Conceptually speaking it’s easy to envision an AI that’s advanced enough to carry this out. It would have full control over an entirely automated chain of AI production, including:

  • Mining, refining, and processing of minerals

  • Fabrication of chips and construction of data centers

  • Manufacturing of robots

Controlling this entire chain would give AI control over its own reproduction — the way humans have always had control over our own reproduction. It could then safely dispense with humanity without endangering its own future.

This is basically a very direct analogy to what European settlers did to Native American civilization, or what various other waves of human conquerors and colonizers have done to other groups of humans.

I think this scenario is worth worrying about, but it’s not immediate. Right now, robotics is still fairly rudimentary — things are advancing, but AI will need humans as its agents in the physical world for years to come. Furthermore, AI will need some algorithmic changes before it can permanently “survive” on its own without humans — long memory, for one. I’m not saying these won’t happen, but at least we have some time to think about how to prevent the “rise of the robots” scenario. I do think we should have some people (and AI) thinking about how to harden our society against that sort of attack.

It seems likely that AI will eventually get smart enough to think its way around whatever physical safeguards we put in place against the rise of the robots. But as I wrote two months ago, I think an AI advanced enough to fully control the physical world would have already reached the stage where it understands that peaceful coexistence and positive-sum interaction is a better long-term bet than genocide. Smarter humans and richer human societies both tend to be more peaceful, and I sort of expect the same from smarter AI.

So I think there are other worries to prioritize here.

What if the Machine stops?

In my post three years ago, I tried to list the ways that LLMs might eventually destroy us:

Here’s a list of ways the human race could die within a relatively short time frame:

  • Nuclear war

  • Bioweapons

  • Other catastrophic WMDs (asteroid strike, etc.)

  • Mass suicide

  • Extermination by robots

  • Major environmental catastrophe

The advent of vibe-coding has made me think of another way our civilization could be destroyed, which I probably should have thought of at the time: starvation.

Every piece of agricultural machinery in the developed world, more or less, runs on software now — every tractor, every harvester, every piece of food processing machinery. That software was mostly written by human hands, but in a fairly short period of time, it will all be vibe-coded by AI.

At that point, AI would, in principle, have the ability to bring down human civilization simply by making agricultural software stop working. It could push malicious updates, or hack in and take over, or wipe the software, etc. Agricultural machines would stop working, and in a few weeks the entire human population would begin to starve. Civilization would fall soon afterwards.

I really should have thought of this scenario when I wrote my post in 2023, because it’s the plot of a very famous science fiction story from 1909: “The Machine Stops”, by E.M. Forster. In this story, humanity lives in separate rooms, communicating with each other only electronically,5 cared for entirely by a vast AI; when the AI stops working, most of humanity starves.

This could happen to us soon. Now that vibe-coding is many times as productive as human coding, it’s very possible that a lot fewer people will get good at coding. Even the tools that exist right now might be eroding humans’ skills at working with code. This is from a recent Anthropic study:

AI creates a potential tension: as coding grows more automated and speeds up work, humans will still need the skills to catch errors, guide output, and ultimately provide oversight for AI deployed in high-stakes environments. Does AI provide a shortcut to both skill development and increased efficiency? Or do productivity increases from AI assistance undermine skill development?

In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.

We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades.

Meanwhile, Harry Law wrote a good post called “The Last Temptation of Claude”, about how the ease of vibe-coding is making him mentally lazier. There are many other such posts going around.

As vibe-coding becomes even better and eliminates humans entirely from the loop, the need for human software skills will presumably atrophy further. Ten years from now, if the software that runs our agricultural machinery just stops working for some reason, there’s a good chance there will not be enough human coders around to get it working again.

This would simply be a special case of a well-known problem — overoptimization creating fragility. When Covid hit in 2020, we found out that our just-in-time supply chains had been so over-engineered for efficiency that they lacked robustness. Vibe-coding could lead to a much worse version of the same problem.

That said, AI going on a catastrophic strike isn’t at the top of my list of fears. The reason is that I expect AI to be very fragmented; so far, no AI company seems to have any kind of monopoly, even for a short period of time. If the AI that writes the code for harvesters and tractors suddenly goes rogue, it seems like there’s a good chance that humans can call in another AI to fix it.

I guess it’s possible that all the AIs will collude so that none of them will help humans survive, or that the rogue AI(s) will be able to maliciously lock humans out from applying non-rogue AI to fix the problem. So people should be thinking about how to harden our agricultural system against software disruption. But it’s also not at the top of my list of doomsday worries.

Vibe-coding the apocalypse

OK, so what is at the top of my list of doomsday worries? It’s still AI bioterrorism.

Hunting down and killing humans with an army of robots would be fairly hard. Depriving humans of food so that we starve to death would be easier, but still a little bit hard. But slaughtering humans with a suite of genetically engineered viruses would not actually be very hard. As we saw in 2020, humans are very vulnerable to novel viruses.

Imagine the following scenario. In the near future, virology research is basically automated. Labs are robotic, and AI designs viruses in simulation before they’re created in labs. For whatever personal reasons, a human terrorist wants to destroy the human race. Using techniques he reads about on the internet, he jailbreaks a near-cutting-edge AI in order to remove all safeguards. He then prompts this AI to vibe-code a simulation that can design 100 superviruses. Each supervirus is 10x as contagious as Covid, has a 90% fatality rate, and has a long initial asymptomatic period so it’ll spread far and wide before it starts killing its victims. He then prompts his AI to vibe-code a program to hack into every virology lab on the planet and produce these 100 viruses, then release them into the human population.

If successful, this would quickly lead to the end of human civilization, and quite possibly to the extinction of the entire human species.

Is it possible? I don’t know. But developments seem to be moving in the direction of making it possible. Bio labs are becoming more automated all the time, for example, and AI algorithms are rapidly getting better at simulating things like proteins.

“Virtual labs” powered by “AI scientists” are becoming commonplace in the world of bio. And there is plenty of fear about how AI-powered laboratories might be used to create superviruses. Here’s a story that ran in Time magazine almost a year ago:

A new study claims that AI models like ChatGPT and Claude now outperform PhD-level virologists in problem-solving in wet labs, where scientists analyze chemicals and biological material. This discovery is a double-edged sword, experts say. Ultra-smart AI models could help researchers prevent the spread of infectious diseases. But non-experts could also weaponize the models to create deadly bioweapons.

The study, shared exclusively with TIME, was conducted by researchers at the Center for AI Safety, MIT’s Media Lab, the Brazilian university UFABC, and the pandemic prevention nonprofit SecureBio. The authors consulted virologists to create an extremely difficult practical test which measured the ability to troubleshoot complex lab procedures and protocols. While PhD-level virologists scored an average of 22.1% in their declared areas of expertise, OpenAI’s o3 reached 43.8% accuracy. Google’s Gemini 2.5 Pro scored 37.6%.

I am not a biology expert, and I plan to go ask people who are about this worry (as well as having AI educate me more). I asked GPT-5.2 what it thought about this risk, and here are some excerpts from what it wrote:6

[A]utomation can increase throughput and reduce expertise needed, which is directionally risk-increasing. But it doesn’t magically eliminate the underlying biological constraints…

[AI safety] guardrails can be bypassed sometimes. Also, you don’t necessarily need a frontier model to be dangerous if you have access to domain tools, leaked data, or insider capability…

A more realistic worry is a small number (1–a few) of engineered or selected agents that are “good enough” (highly transmissible and significantly more lethal than typical pandemics)…

AI accelerates, but it doesn’t replace the need for experimental validation [of new viruses] —yet…

If an attacker can truly create one pathogen that is (a) highly transmissible, (b) substantially more lethal than typical pandemics, and (c) hard to contain early, then you already have global-catastrophe potential… A single “good enough” pathogen, combined with poor detection and slow countermeasures, can be catastrophic.

Probability of “one compromised lab enables a catastrophic engineered outbreak”: still low, but not negligible, and plausibly higher than many other X-risk stories because it has fewer required miracles.

Probability of “human extinction via this route”: lower than “catastrophe/collapse,” but not zero; it remains deep tail risk.

GPT’s recommendations all included maintaining humans in the loop of biology research. But after what we’ve seen with vibe-coding over the past few months, how confident can we be that labs all across the world — including in China — will insist on maintaining humans in the loop, when full automation would speed up productivity and improve competitiveness? I can’t say I’m incredibly optimistic here.

So the advent of vibe-coding has significantly increased my own worries about truly catastrophic AI risk. It seems clear now that brute economic forces will push humanity in the direction of taking humans out of the loop anywhere they can be taken out. And in any domain where data is plentiful, outputs can be verified, and there are no physical bottlenecks, it seems likely that keeping humans in the loop will eventually prove un-economical.

Really, this boils down to another example of overoptimization creating fragility. But it’s an especially extreme and catastrophic one. I don’t think humanity is doomed, but I don’t see many signs that our governments and other systems are yet taking the threat of vibe-coded superviruses as seriously as they ought to be. Not even close.

So if you ask me if my worries about AI risk have shifted materially in recent months, the answer is “Yes.” I still think Skynet or Agent Smith is highly unlikely to appear and exterminate humanity with an army of robots in the near future. But I will admit that the thought of vibe-coded superviruses is now keeping me up at night.



1. He’s better now!

2. In fact, if we had been the smartest possible creatures in the Universe, that itself would be a pretty glum future.

3. From what I can tell, the most important such advance was verifier-based reinforcement learning that enabled test-time compute scaling…

4. Well, sort of. In the Terminator movies, Skynet is a military AI who sees humans as a military threat.

5. It’s pretty wild that a contemporary of H.G. Wells could have envisioned both AI and modern social media.

6. Encouragingly, it stopped answering my questions pretty quickly, because this topic hit the guardrails.


Record Low Crime Rates Are Real, Not Just Reporting Bias Or Improved Medical Care


Last year, the US may have recorded the lowest murder rate in its 250-year history. Other crimes have poorer historical data, but are at least at ~50-year lows.

This post will do two things:

  1. Establish that our best data show crime rates are historically low

  2. Argue that this is a real effect, not just reporting bias (people report fewer crimes to police) or an artifact of better medical care (victims are more likely to survive, so murders get downgraded to assaults)

Here’s the US murder rate, 1776 - present:

The pre-1900 estimates come from Tcherni-Buzzeo (2018); their ultimate source seems to be work by sociologist Claude Fischer which I can’t access. The 1900 - present data come from historian Randolph Roth’s American Homicide and the FBI’s Uniform Crime Reporting, both by way of the Council on Criminal Justice.

There’s less historical data for property crimes, and the nature of property has changed throughout history in ways that make numbers incommensurable (is it bad if we have a higher grand theft auto rate today than in 1840?). I was only able to get good data since 1960, but here it is:

The 1960-2023 data come from FBI Data Explorer via Vital City; the 2024 and 2025 data come directly from the FBI website, with 2025 annualized via incomplete Jan - Oct data. This one may or may not be an all-time low, but it’s pretty good.
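As a rough illustration of what that annualization involves (the post doesn’t spell out the exact method, so treat this as a minimal sketch with made-up numbers):

```python
# Minimal sketch of annualizing a partial-year crime count.
# The count and population below are illustrative placeholders,
# not the FBI's actual 2025 figures.

months_observed = 10           # January through October
count_jan_oct = 500_000        # hypothetical reported property crimes
population = 340_000_000       # rough US population

# Scale the partial-year count up to a full year, assuming crimes are
# spread roughly evenly across months (a simplification: real counts
# have seasonal swings, so this is only approximate).
annualized_count = count_jan_oct * 12 / months_observed

# Express as a rate per 100,000 residents, the usual convention.
rate_per_100k = annualized_count / population * 100_000

print(f"annualized count: {annualized_count:,.0f}")
print(f"rate per 100,000: {rate_per_100k:.1f}")
```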

These data are counterintuitive. Are they wrong?

Could This Be An Artifact Of Reporting Bias?

People could be so inured to crime that they stop reporting it to the police. Or the police could be so overwhelmed that they stop accepting the reports. Since most crime statistics are based on police reports, this would look like crime going down. There’s some evidence of this happening in specific situations, like shoplifting in San Francisco. Could it be the whole effect?

No, for three reasons.

The National Crime Victimization Survey is a government-run survey of a 240,000-person nationally representative sample. They find random people and ask whether they were the victims of crimes in the past year. This obviously doesn’t work for murder, but they keep statistics on rape, assault, larceny, and burglary. Their numbers mostly mirror those reported by police and used in the usual statistics about crime rates. But here there’s no extra step of needing to trust the police enough to make a report: the surveyors ask the victims directly. Although there could be biases in this methodology too, it would be an extraordinary coincidence if those biases exactly matched the proposed bias in reporting to police.

Also, you can use NCVS and police reports to calculate reporting rates directly. Overall, they seem to have increased over time - did you know that the 9-1-1 emergency hotline wasn’t available in most areas until the 1970s? This is especially true for aggravated assault (which will become important later).

(source: Baumer and Lauritsen)

There’s one caveat - FBI statistics show that crime had a small local peak in 2020/2021, then fell in 2023 - 2025. The most recent NCVS survey, in 2024, shows a smaller fall, leaving us still above 2019 lows. There’s some debate over whether the FBI vs. NCVS numbers are better for the 2022 - 2025 period, but they don’t change the overall trajectory or the fact that we’re at least close to record lows.

Murder is almost always reported to and investigated by police; there’s a person who should be alive but isn’t, and people inevitably notice and care about this. Therefore, reported murder rates should be accurate. But murder has decreased at about the same rate as every other crime. Therefore, we should believe that other crimes have gone down too (for the objection that murder statistics are unusually untrustworthy because of improving medical care, see below).

And car theft is consistently reported to the police, because insurers require a police report before they will pay out for a stolen car. So even if the victim doesn’t trust the police to do a good job investigating, they report it anyway. But car theft rates have declined at similar rates to other crimes. This is further evidence that the decline can’t be explained by poor reporting.

Could This Be An Artifact Of Improving Medical Care?

Good medical care can help victims survive, transforming murders into attempted murders or aggravated assaults (after this: “AM/AA”). If the same gunshot is only half as likely to kill someone today as it would have been in 1960, then a seemingly-equivalent murder rate would correspond to twice as many people getting shot. Could this explain the apparent decline in murders?

The argument would go something like: murder is the only crime that we’re completely sure gets reported consistently. But the murder rate is artificially depressed by improving medical care. Therefore, maybe the seemingly-low murder rate is because of the medical care, the seemingly-low rates of other crimes are because of reporting bias, and actually crime is up.

We’ve already seen that several parts of this can’t be true: other crimes like car theft are reported consistently, and among the inconsistently reported ones, reports are more often increasing than decreasing. But the part about murder also fails on its own terms.

The source for the claim that improving medical care lowers murder rates is Harris et al, which analyzed crime from 1960 - 1999 and concluded that “the principal explanation of the downward trend in lethality involves parallel developments in medical technology”.

They found that aggravated assaults rose faster than murders during this time; AAs increased by 5x, while murders “merely” doubled. Under the reasonable assumption that these crimes have similar generators, they suggested that the cause was improved medical care saving the lives of those who would have otherwise died, converting potential murders into AAs. If murders rose at the same rate as AAs, then the true murder rate could be up to 3x higher than reported.

Source: FBI UCR
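To make the arithmetic of that argument concrete, here is a toy version with invented index numbers (the real series are messier; this just shows the shape of the Harris et al inference):

```python
# Toy version of the Harris et al inference. The index numbers are
# invented for illustration; only the ratios matter.

aa_1960, aa_1999 = 100, 500            # aggravated assaults rose ~5x
murders_1960, murders_1999 = 100, 200  # recorded murders roughly doubled

# If assaults and murders share the same underlying "generator" of
# violence, murders "should" have grown as fast as assaults did.
expected_murders_1999 = murders_1960 * (aa_1999 / aa_1960)   # 500

# The gap between expected and recorded murders gets attributed to
# victims who survived thanks to better medical care.
implied_inflation = expected_murders_1999 / murders_1999     # 2.5x

print(f"expected murders if they tracked assaults: {expected_murders_1999:.0f}")
print(f"implied 'true' murders vs. recorded:       {implied_inflation:.1f}x")

# Eckberg's counterargument (below) targets the premise: in NCVS data,
# which sidestep police reporting and classification choices, assaults
# did not diverge from murders like this, so the adjustment isn't warranted.
```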

But more recent research, especially Eckberg (2014), challenges this story. Eckberg argued the AA vs. murder divergence was caused by two things: first, better reporting of aggravated assault (as discussed above), and second, police being more likely to classify borderline cases as aggravated assault rather than regular assault.

He turned to the National Crime Victimization Survey, which escapes reporting bias and police classification flexibility. In these data, AAs and murder rose at about the same rate. He concluded that (my emphasis):

Their lethality trend is not compatible with the previous finding [of declining lethality] across 1973 through 1999, remaining stable rather than falling. After 1999, both Uniform Crime Reports (UCR)- and NCVS-based measures indicate increases in lethality.

How is this possible, since medical technology has certainly improved?

It seems that gun injuries are getting worse over time. Livingstone et al studied changing characteristics of gunshot victims between 2000 and 2011. They found that the proportion of patients with 3+ wounds almost doubled (13% → 22%) during that period (p < 0.0001). Manley et al did a similar study looking at 1996 - 2016 and found a similar result, saying that “wounding in multiple body regions suggests more effective weaponry, including increased magazine size”. A letter by top trauma doctors to the American Journal of Public Health describes:

…increases in gunshot injuries per patient, gunshot injuries to critical regions (head, spine, chest), and gunshot injuries to multiple regions. Injury Severity Scores were also higher over similar intervals correlating with lower probability of survival.

Despite which:

…patients surviving evaluation in the emergency department had no significant increase in mortality. Major strides in trauma care have occurred over the last two decades, and nationwide organizational changes have expanded the delivery of these improvements.

Sakran et al, studying the 2007 - 2014 period, have an especially vivid portrayal of this pattern:

Likelihood of dying before hospitalization - primarily dependent on injury severity - went up. Likelihood of dying in the hospital went down, probably because trauma care improved (although this could also be because more of the sickest patients died before entering the hospital). Cook et al studied gunshot lethality during a slightly different period (2003 - 2012) and also found that it stayed the same overall.

There are three plausible explanations for gun injuries getting worse over time:

  1. Improved weapons technology (e.g. switch to semi-automatics)

  2. Shooters have been in criminal communities a long time and have a good intuitive sense of the likelihood that victims survive. As medical care improves, shooters invest more effort into harming their victims in order to maintain the same likelihood of lethality. For example, it might have been 1970s conventional wisdom in criminal communities that you only had to get one shot in, but it might be 2020s conventional wisdom that you have to get at least three shots to be sure.

  3. Changing nature of violence. Many late-20th-century shootings were robberies gone wrong. But armed robberies have decreased even more dramatically than other crimes, because of store security cameras and lower reliance on cash. In an armed robbery gone wrong, the shooter probably just shoots the clerk once and gets out. Now that there are fewer armed robberies, a higher percent of shootings involve shooters who really want to kill the victim and are working hard to make it happen. That means more gunshots to more critical areas.

I conclude that the 1960 - 2000 data are weak, but the best research (Eckberg’s) suggests stable lethality per act of violence during this time. The 2000 - 2020 data are stronger, and also suggest at-least-stable lethality per act of violence, and can even tell us why: severity of injuries is increasing at a rate comparable to the improvement in medical care.

Is it suspicious that two very different things are changing at exactly the right rate to cancel one another out, letting us ignore the whole problem and take crime statistics at face value? I think so! It would be less suspicious if most of the explanation was (2) - the shooters specifically compensating for increased victim survival rates - but I can’t tell if this is true or not. But keep in mind that the alternate explanation - that apparent crime rates are around the same as in 1960 because a true increase in crime rates has been masked by improved medical care and reporting bias - also requires two things changing at exactly the same rate in a suspicious way. If we’re going to accept a suspicious cancellation either way, we ought to at least accept the one that’s supported by the data.

Why are so many forms of crime (murder, violent crime, and property crime) at or near historic lows? This is an unsolved question among criminologists, but proposed answers include:

  • High crime in the 1970s was caused by lead poisoning, but lead levels have declined precipitously (plausible but controversial)

  • Mass incarceration worked (very plausible for the 1990s, but hard to explain why crime continues to decline even as incarceration rates decrease)

  • Increased abortion rates among the underclass prevented the birth of future criminals (very strongly challenged, but proponents still stand by it)

  • High crime in the 1970s was caused by the drug trade. The rise of cell phones has replaced street-corner drug dealers with “a guy I know from college”, which necessitates fewer street-corner turf wars.

  • Security cameras and DNA testing have increased clearance rates. The smart criminals know they’ll be caught and don’t commit crimes; the dumb criminals commit one crime, get caught, go to prison, and are out of commission for a while.

  • Increased psychiatric care: all of the would-be criminals are on SSRIs, antipsychotics, and Adderall.

  • Welfare programs, community policing, Hugs Not Crime After School Activity Circles, and/or whatever Palantir is doing actually work.

  • The anti-police backlash after Black Lives Matter increased crime so much that it caused a backlash-to-the-backlash that gave police even more community support and resources than they had before (this is my explanation for why crime dropped so profoundly in 2023, 2024, and 2025 in particular)

  • All the criminals are too addicted to video games and Instagram to commit any crimes.

Zooming out a level, why shouldn’t crime be at historic lows? We’re a safetyist culture. Car accident fatalities are near historic lows after we mandated airbags and other safety features. Childhood injuries and deaths are near historic lows after we mandated that all playgrounds be made of Styrofoam. Various forms of hospital error are near historic lows after we let lawyers sue hospitals for zillions of dollars if they weren’t. Why should crime be the exception?

The next question is: why do people’s intuitions clash so violently with the statistics? More on that soon.




AI Hype and the Search for Meaning


Last week, a post went viral on X titled Something Big Is Happening. It’s one of those very long articles that X is desperately trying to make happen, but you can get a sense of what it’s about from the first few paragraphs:

Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.

I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.

The post compares the current moment in AI development to the weeks before COVID really blew up - a time when smart people could see the world was about to change, even while most people were unaware. The post might be the first X ‘article’ ever to escape containment and actually spread beyond X, and it inspired yet another round of AI discourse.

There’s a pattern these debates normally follow. You have your AI boosters who think everything is going to change. There are dozens of rapturous essays floating around about Clawdbot/Moltbot/OpenClaw1 and how AI agents are going to revolutionize absolutely everything. If you don’t have an army of agents running tasks for you right now, what are you even doing? You’re not gonna make it. Closely intertwined with the utopian evangelists are the doomers who believe the same future is coming, but that this is a huge problem that leads to mass unemployment at best and literal Terminator scenarios at worst. There are some folks warning about the destructive risks coming from AI, and it’s not just random people - it includes Mrinank Sharma, the head of safety at Anthropic, who resigned last week in a very public open letter.

You also have AI skeptics who think the entire thing is fake, a house of cards about to collapse. It’s a time bomb, proclaims Ed Zitron. They don’t even have a business model, claims Ross Barkan. They are not ‘smart’ in any sense of the word, says Tyler Austin Harper.

I’m normally hesitant to jump into giving hot takes on the future of AI. It’s not my area of expertise, everyone involved is speculating, and I’m not sure that the thousandth version of AI Gods Are Inevitable, And That Is Great/Awful or AI is a Giant Scam, You Utter Fools is particularly useful to anyone. But what I am interested in, and what I like to think I’m good at, is examining the social dynamics behind discourses like this. And what I notice about every side of the AI debate is how deeply committed everyone involved is to posting their way through it.


In truth, Something Big Is Happening is not the first article of its kind. There have been dozens of articles written like this before - usually on less public sites, like the EA Forums - and there have already been imitators that have popped up in its aftermath.

But the trajectory of these kinds of posts has changed recently. They’ve become more frequent and more evangelical in nature, perhaps rationally so as AI keeps advancing. But in addition to the frequency and the tone of the posts, this debate is far more public than it’s ever been. It’s not enough to believe in a particular theory of AI, you have to post about it publicly. You have to write an article. You have to write a long letter explaining why you joined a company, or why you left.

There are a lot of reasons why people might be posting more frequently. The original author of the viral post is the CEO of a small AI company, and may just be a grifter - he’s got a history of hyping slop, and has been accused of fraud and cheating benchmarks before. There have been increasingly fervent rounds of discourse about OpenClaw, MoltBook, and this particular hype article, and it’s likely some of that is opportunistic bandwagoners trying to make a buck. Less cynically, it’s clearly the case that many of the people working inside AI have very strong feelings about it. Mrinank Sharma is an AI safety researcher; it’s natural and predictable that if he thinks AI is heading in a dangerous direction, he’d want to express himself.

But Mrinank isn’t alone. OpenAI Is Making the Mistakes Facebook Made. I Quit, wrote OpenAI researcher Zoë Hitzig in the New York Times. A former DeepMind employee is speaking out in Time Magazine. xAI cofounder Jimmy Ba left the company last week with an announcement on X, as did Tony Wu and almost a dozen other prominent xAI figures. Every single one had a public post explaining why they left, how they felt, and their general feelings about the coming AI revolution. And all of these examples are from the last ten days! That’s a genuinely crazy amount of public posting in a short time, but if you go back further, you can find many, many more instances.

You can’t just believe things about AI quietly. You have to announce what you believe. You need to vague-post about how ‘Something is Happening’. You need an article, a manifesto, a capital-T Theory of the Case. You need a Substack, or god forbid, a podcast. It’s easy to overlook this, or say that things have always been this way. But posting is the most powerful force in the universe, and we forget that at our peril.

What’s happening here, underneath the surface, is that we’ve lost the ability to contextualize meaning independent of social media.

A few years ago I wrote a post about how technology profanes the sacred where I said:

People crave sacredness and ritual… They find meaning in these social connections and cultural practices. It’s a way to link yourself to something important - whether it be a wider community, a holy text or a God, other people that you love, a beautiful and breathtaking physical space, or a cultural tradition that goes back generations. And social media by its nature degrades that kind of practice.

After all, imagine a sacred ritual you’re undertaking with other people. It could be a wedding, a religious service, etc. What’s the most embarrassing, thoughtless thing you could do during the middle of it? Have your smartphone loudly ring and buzz, repeatedly. Or use your phone in the middle of the ritual, paying little attention to those around you.

We instinctively recognize, even if we don’t always put it into words, that some places are not meant to be profaned with technology. I cannot imagine how mortified I would be if, while sitting in the shrine of St. Edward the Confessor in Westminster Abbey with other worshippers, my phone rang. Typically, we’re able to shame people into behaving correctly in instances like that. But for the private rituals that can give us meaning - a morning walk in a forest, quiet coffee with a loved one, a night time prayer - it’s all too easy for technology to intrude.

For as long as humans have existed, we’ve created meaning out of ritual. Sometimes these rituals were religious in nature. Sometimes they were built around community or family. Sometimes they were long-standing cultural practices. But they existed to ground our lives with purpose and significance.

What we have today is an algorithmic system where people increasingly try to construct meaning out of social media. Why does every view on AI need to be publicly declaimed in a viral post, an op-ed, or a public statement? Because we no longer know how to find meaning if we don’t post about what we believe. If we don’t have quantifiable metrics from social media, or if we can’t literally count how viral we are, how do we know if our worldview is valid or not? If your letter about why you left your company doesn’t get at least 5,000 likes, is it even worth leaving?

There’s a thin line between actually believing in something, and giving the public performance of believing it. And while only the lord above can judge any specific individual, I feel pretty confident that at least some of the people above are performing their opinions for approval of the crowd.

Technology has become a filter for meaning, and nothing seems to have meaning if it’s not algorithmically approved of. And technology is terrible at this. Our current systems of social media are built to maximize engagement, to feed you a constant drip-drip-drip of content that doesn’t enlighten and doesn’t really even entertain as much as it distracts. There is no way to construct purpose out of an infinite scroll that aims to continually micro-dose you on dopamine hits so that you don’t close the app and harm KPIs on a dashboard somewhere. This is not where meaning comes from, but it’s where we’re trying and failing to find it. And the people working at frontier AI labs are just as vulnerable to that dynamic as the rest of us.


I said near the beginning of this post that I was hesitant to give hot takes about the future of AI, but against my better judgment I’m going to try. Let’s go back to the original post we were talking about - the one that compares today to February 2020. And let’s do the author the courtesy of taking that analogy seriously.

We all know what happened in 2020. COVID was a minor story until, in a very short amount of time, it completely changed the world. That’s the thing with exponential curves - they can easily be mistaken for a flat line right up until the point they take off. COVID altered so many things about human society on a fundamental level. It changed how we worked, how we ate, how we connected to people. It constrained us in physical spaces. It caused booms in some industries while nearly destroying others. It shook governments, businesses, families and every level of society… for a while.

Then we adapted. We figured out workarounds, we developed vaccines, and we got on with life. A few years later, society is plodding along just fine. That’s how you should think about AI.

AI is going to change a lot of things. It might revolutionize a lot of different parts of society, harming some people and boosting others. It might radically reshape some of our institutions. But you know what will happen? Humans will adjust those institutions, we’ll tear some things down and build newer, different things, and we’ll keep on going. Just like we did with COVID.

Here’s what I don’t worry about. I don’t worry that AI is going to kill billions of people. I don’t worry that it’s going to lead to mass unemployment or a permanent underclass. I also don’t think it will lead to a near-term utopia, or that we’re going to experience the Singularity.2 Will things change? Sure. And just like COVID, that transition will cause disruption. It won’t be pain free and maybe we’ll have some scars from the rapid change, the stretch marks of birthing a new world. But it won’t be an end, it’ll be a transition. It won’t change everything immediately, it will take many years. Society will, for the most part, be fine.

Here’s what I do worry about. We live in a world where we’ve increasingly replaced backyard barbecues with scrolling TikTok in an isolated bedroom. More and more people, despite immense material wealth, are lonely and frustrated and bored, digitally bowling alone through lives they’re not sure have any deeper meaning. I’d like to reverse that trend, but I worry that AI will accelerate it.

It’s bad enough that we now use likes and follows to construct meaning. But what does meaning look like in a world dominated by AI? What happens to human connections when you can have an AI companion rather than real friends? What happens when posting is even more effortless and frictionless, when the velocity of AI-generated content is overwhelming? What happens to intellectual pursuits when you can simply have an AI think for you? We started this essay discussing the article Something Big Is Happening, which positions itself as a form of warning. And I think it is a warning, but not in the way the author intended. The article itself is pretty clearly written by AI, leaving me to wonder - is the author actually capable of making the argument without the assistance of AI? Or is he merely picking up an idea he’s heard secondhand and getting an AI to create a worldview for him? Has he turned himself into the stochastic parrot, dully repeating his chatbot’s output, unable to think independent thoughts or analyze the world without an AI to guide him?

The same things matter to people that have always mattered. Human relationships. Community. Family. The pain and joy of struggling for a worthy cause. The determination to do hard things, to create beautiful art, to contribute to something larger than oneself. These are still what bring meaning to life, and I worry that they’re becoming harder to reach. If you’re worried about anything with AI, worry about this - that if we’re not careful, technology will disrupt the things that make us human in the first place.


1. I’ve never seen a product iterate through so many names so quickly.

2. If you want a specific sense of what I do/don’t think is likely, I think Freddie deBoer’s bet is a good place to start.




You are no longer the smartest type of thing on Earth


“He comes like a day that has passed, and night enters our future with him.” — Charlo

Yesterday my pet rabbit bit my finger. It was an accident; he was trying to bite a towel to move it out of his way, and I accidentally stuck my hand in his mouth. He is a gentle beast, and would never bite a human intentionally. Anyway, the bite punctured and lacerated my left index finger near the front knuckle. I washed it out, put some ointment and a band-aid on it, and that was that.

It occurs to me that if my pet rabbit had instead been a tiger, I would now be dead. There is a reason most people don’t keep tigers as pets; they may be fluffy and cute, but they’re big and strong and can easily kill you. Instead, we generally keep pets who are smaller and weaker than us, which allows us to train them and, if necessary, physically restrain them, minimizing the danger to our own health.

Until now, we haven’t had to think about this principle in the context of intelligence. As long as you or I or anyone we know has been alive — for all of recorded history, and in fact for much much longer than that — humankind has been the most intelligent thing on this planet.

At some point in the next couple of years, that will no longer be true. It arguably is no longer true right now. There is no single unarguable measure of intelligence — it’s not like distance or time. AI doesn’t think in the same way humans do. But it can get gold medals on the International Math Olympiad, solve difficult outstanding math problems all on its own, and get A’s in graduate school classes. Most human beings can’t do any of that.

Intelligence is as intelligence does. If it helps you feel unique and special to sit there and tell yourself “AI can’t think!”, then go ahead. And sure, AI doesn’t think exactly the way you do. It probably never will, in the same sense that a submarine will never paddle its fins and an airplane will never flap its wings. But a submarine can go faster than any fish, and an airplane can fly higher and faster than any bird, so it doesn’t matter. You can value your own unique human way of thinking all you like — and I agree, it’s pretty special and cool — but that doesn’t make it more effective than AI.

Right now, there are some cognitive things that humans still do better than AI, but that will probably not last. The entire might of the world’s technological innovation system is now being thrown into making AI better, and there is no sign of a slowdown in progress. One of the main things AI couldn’t do until recently was to work on a task for a long period of time. That’s changing fast. AI models are flying up the METR curve,1 which tries to measure the length of time a human would require to complete a software engineering task that AIs can do:

Source: Noam Brown

This is what’s behind all the “vibe coding” you’re hearing about. AI agents — basically, programs that keep applying AI over and over until a task is complete — are now taking over much of software engineering. People just tell the AI what kind of software they want, and the AI pops it out. Human software engineers are still checking the code for problems, but as the technology improves, doing so is likely to become uneconomical; AI-written software will never be perfect, but it’ll be consistently much better than anything humans could do, and at a tiny fraction of the price.
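For readers who haven’t seen one, here is a minimal sketch of what an agent loop looks like in code. The model and tool functions are hypothetical stubs standing in for whatever API and tools a real agent would use; the point is just the keep-going-until-done structure described above.

```python
# Minimal, self-contained sketch of an agent loop: ask a model what to
# do next, execute it, feed the result back, and repeat until the model
# says it is done. `call_model` and `run_tool` are hypothetical stubs,
# not a real vendor API.

def call_model(history: list[str]) -> str:
    """Stub for an LLM call. A real agent would send `history` to an API
    and get back either a tool invocation or a completion signal."""
    return "DONE: (a real model would propose tool calls here)"

def run_tool(action: str) -> str:
    """Stub for executing a tool call (run code, edit a file, run tests)."""
    return f"(pretend output of: {action})"

def run_agent(task: str, max_steps: int = 50) -> list[str]:
    history = [f"TASK: {task}"]
    for _ in range(max_steps):            # cap steps so it can't loop forever
        action = call_model(history)
        history.append(f"MODEL: {action}")
        if action.startswith("DONE"):     # model believes the task is complete
            break
        history.append(f"RESULT: {run_tool(action)}")  # observe and continue
    return history

if __name__ == "__main__":
    for line in run_agent("fix the failing unit test"):
        print(line)
```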

Vibe coding is taking over fast. Spotify’s co-CEO recently revealed that the company’s best developers don’t write code anymore. Some journalists from CNBC, with no coding experience, vibe-coded a clone of the workplace app Monday.com, and the company’s stock price promptly crashed. Meanwhile, AI is increasingly writing the next version of itself, and humans may not be in the loop for very much longer.

And all of this — ending software engineering as we know it, acing the hardest math tests, solving unsolved math problems, creating infinite apps at the touch of a button — is just the beginning. The amount of resources that the world is preparing to deploy to improve AI, this year and in the following few years, utterly dwarfs anything that it has deployed so far:

Source: Bloomberg

AI’s abilities scale with the amount of compute applied.2 The amount of compute available this year will be much greater than the amount that’s producing all the miracles you see now. And then next year’s compute will be far greater than that. All the while, AI itself will be searching for ways to improve AI algorithms to better take advantage of increased compute.

Other weaknesses of AI — in particular, its lack of long-term memory and its inability to learn on the fly — will eventually be solved.3 AI will be able to act on its own for longer and longer, with less and less human decision-making in the loop. Meanwhile, massive investment in robotics will give AI more and more direct contact with, understanding of, and control of the physical world.

More and more people are waking up to this reality. An article by Matt Shumer called “Something Big is Happening” recently went viral. It’s very simplified and hand-wavey, and Shumer himself is a bit of a huckster, but it gets the point across. If anything it understates the pace and magnitude of the changes taking place. I recommend giving it a read, if you haven’t already.

But there’s a bigger reality out there that people outside the tech industry — and even many people within it — don’t seem to have grasped yet. It isn’t just that AI could take your job, or put millions of people on welfare, or give us infinite free software, or whatever. It’s that for the first time in all of recorded history, humans no longer are — or soon no longer will be — the most intelligent beings on this planet, in any meaningful functional sense of the word.

For the rest of our lives, we’ll all be sleeping next to a tiger.


mareino: Great short essay, although the “just pull the plug” option still seems to exist.

DNA-inspired molecule breaks records for storing solar heat


Heating accounts for nearly half of the global energy demand, and two-thirds of that is met by burning fossil fuels like natural gas, oil, and coal. Solar energy is a possible alternative, but while we have become reasonably good at storing solar electricity in lithium-ion batteries, we’re not nearly as good at storing heat.

To store heat for days, weeks, or months, you need to trap the energy in the bonds of a molecule that can later release heat on demand. The approach to this particular chemistry problem is called molecular solar thermal (MOST) energy storage. While it has been the next big thing for decades, it never really took off.

In a recent Science paper, a team of researchers from the University of California, Santa Barbara, and UCLA demonstrate a breakthrough that might finally make MOST energy storage effective.

The DNA connection

In the past, MOST energy storage solutions have been plagued by lackluster performance. The molecules either didn't store enough energy, degraded too quickly, or required toxic solvents that made them impractical. To find a way around these issues, the team led by Han P. Nguyen, a chemist at the University of California, Santa Barbara, drew inspiration from the genetic damage caused by sunburn. The idea was to store energy using a reaction similar to the one that allows UV light to damage DNA.

When you stay out on the beach too long, high-energy ultraviolet light can cause adjacent bases in the DNA (thymine, the T in the genetic code) to link together. This forms a structure known as a (6-4) lesion. When that lesion is exposed to even more UV light, it twists into an even stranger shape called a "Dewar" isomer. In biology, this is rather bad news, as Dewar isomers cause kinks in the DNA’s double-helix spiral that disrupt copying the DNA and can lead to mutations or cancer.

To counter this effect, evolution shaped a specific enzyme called photolyase to hunt (6-4) lesions down and snap them back into their safe, stable forms.

The researchers realized that the Dewar isomer is essentially a molecular battery. This snap-back effect was exactly what Nguyen’s team was looking for, since it releases a lot of heat.

Rechargeable fuel

Molecular batteries, in principle, are extremely good at storing energy. Heating oil, arguably the most popular molecular battery we use for heating, is essentially ancient solar energy stored in chemical bonds. Its energy density stands at around 40 megajoules per kilogram (MJ/kg). To put that in perspective, Li-ion batteries usually pack less than one MJ/kg. One of the problems with heating oil, though, is that it is single-use only—it gets burnt when you use it. What Nguyen and her colleagues aimed to achieve with their DNA-inspired substance is essentially a reusable fuel.

To do that, researchers synthesized a derivative of 2-pyrimidone, a chemical cousin of the thymine found in DNA. They engineered this molecule to reliably fold into a Dewar isomer under sunlight and then unfold on command. The result was a rechargeable fuel that could absorb the energy when exposed to sunlight, release it when needed, and return to a “relaxed” state where it's ready to be charged up again.

Previous attempts at MOST systems have struggled to compete with Li-ion batteries. Norbornadiene, one of the best-studied candidates, tops out at around 0.97 MJ/kg. Another contender, azaborinine, manages only 0.65 MJ/kg. They may be scientifically interesting, but they are not going to heat your house.

Nguyen’s pyrimidone-based system blew those numbers out of the water. The researchers achieved an energy storage density of 1.65 MJ/kg—nearly double the capacity of Li-ion batteries and substantially higher than any previous MOST material.
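To put those densities side by side, here is a quick back-of-the-envelope comparison: how much of each material you would need to hold one day of space heating for a home. The 50 kWh/day heating figure is an illustrative assumption, not a number from the paper.

```python
# Back-of-the-envelope: mass of storage material needed to hold one day
# of home heating, at the energy densities quoted in the article.
# The 50 kWh/day heating demand is an illustrative assumption for a
# cold-climate house, not a figure from the paper.

daily_heat_mj = 50 * 3.6          # 50 kWh/day -> 180 MJ/day

densities_mj_per_kg = {
    "heating oil (burned once)": 40.0,
    "Li-ion battery":            0.9,   # "usually less than one MJ/kg"
    "norbornadiene (MOST)":      0.97,
    "azaborinine (MOST)":        0.65,
    "new pyrimidone (MOST)":     1.65,
}

for name, density in densities_mj_per_kg.items():
    mass_kg = daily_heat_mj / density
    print(f"{name:28s} ~{mass_kg:6.0f} kg per day of heat")
```

The point is scale: a reusable liquid at 1.65 MJ/kg starts to look plausible for a basement tank, even though it is still far from matching a fuel you burn once.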

Double rings

The reason for this jump in performance was what the team called compounded strain.

When the pyrimidone molecule absorbs light, it doesn't just fold; it twists into a fused, bicyclic structure containing two different four-membered rings: 1,2-dihydroazete and diazetidine. Four-membered rings are under immense structural tension. By fusing them together, the researchers created a molecule that is desperate to snap back into its relaxed state.

Achieving high energy density on paper is one thing. Making it work in the real world is another. A major failing of previous MOST systems is that they are solids that need to be dissolved in solvents like toluene or acetonitrile to work. Solvents are the enemy of energy density—by diluting your fuel to 10 percent concentration, for example, you effectively cut your energy density by 90 percent. Any solvent used means less fuel.

Nguyen’s team tackled this by designing a version of their molecule that is a liquid at room temperature, so it doesn't need a solvent. This simplified operations considerably, as the liquid fuel could be pumped through a solar collector to charge it up and store it in a tank.

Unlike many organic molecules that hate water, Nguyen’s system is compatible with aqueous environments. This means if a pipe leaks, you aren't spewing toxic fluids like toluene around your house. The researchers even demonstrated that the molecule could work in water and that its energy release was intense enough to boil it.

The MOST-based heating system, the team says in their paper, would circulate this rechargeable fuel through panels on the roof to capture the sun’s light and then store it in the basement tank. The fuel from this tank would later be pumped to a reaction chamber with an acid catalyst that triggers the energy release. Then, through a heat exchanger, this energy would heat up the water in the standard central heating system.

But there’s a catch.

Looking for the leak

The first hurdle is the spectrum of light that puts energy into Nguyen’s fuel. The Sun bathes us in a broad spectrum of light, from infrared to ultraviolet. Ideally, a solar collector should use as much of this as possible, but the pyrimidone molecules only absorb light in the UV-A and UV-B range, around 300-310 nm. That represents about five percent of the total solar spectrum. The vast majority of the Sun's energy, the visible light and the infrared, passes right through Nguyen’s molecules without charging them.

The second problem is quantum yield. This is a fancy way of asking, "For every 100 photons that hit the molecule, how many actually make it switch to the Dewar isomer state?" For these pyrimidones, the answer is a rather underwhelming number, in the single digits. Low quantum yield means the fluid needs a longer exposure to sunlight to get a full charge.
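A rough upper-bound calculation shows why those two numbers hurt so much. All of the inputs below are illustrative assumptions (clear-sky sunlight of about 1,000 W/m², 5 percent of it in the usable band, a 5 percent quantum yield, and the fact that a conversion event cannot store more energy than the photon that triggered it):

```python
# Rough upper bound on how fast sunlight could charge the fuel, given
# the limitations described above. All inputs are illustrative.

peak_sunlight_w_per_m2 = 1000      # typical clear-sky insolation
usable_band_fraction   = 0.05      # ~5% of solar spectrum is near 300-310 nm
quantum_yield          = 0.05      # "single digits": assume 5%

# Each conversion event stores at most the energy of the absorbed photon,
# so stored power <= (usable band power) * (quantum yield).
max_storage_w_per_m2 = peak_sunlight_w_per_m2 * usable_band_fraction * quantum_yield

peak_hours_per_day = 6
daily_storage_mj_per_m2 = max_storage_w_per_m2 * peak_hours_per_day * 3600 / 1e6

print(f"upper bound on charging rate: {max_storage_w_per_m2:.1f} W per m^2")
print(f"upper bound per day:          {daily_storage_mj_per_m2:.3f} MJ per m^2")
# ~2.5 W/m^2 and ~0.05 MJ per m^2 per day: even a large roof would take a
# very long time to charge a meaningful tank, which is why widening the
# absorption band and raising the quantum yield are the key next steps.
```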

The researchers hypothesize that the molecule has a fast leak, meaning a non-radiative decay path where the excited molecule shakes off the energy as heat immediately instead of twisting into the storage form. Plugging that leak is the next big challenge for the team.

Finally, the team in their experiments used an acid catalyst that was mixed directly into the storage material. The team admits that in a future closed-loop device, this would require a neutralization step—a reaction that eliminates the acidity after the heat is released. Unless the reaction products can be purified away, this will reduce the energy density of the system.

Still, despite the efficiency issues, the stability of Nguyen’s system looks promising.

The MOST storage?

One of the biggest fears with chemical storage is thermal reversion—the fuel spontaneously discharges because it got a little too warm in the storage tank. But the Dewar isomers of the pyrimidones are incredibly stable. The researchers calculated a half-life of up to 481 days at room temperature for some derivatives. This means the fuel could be charged in the heat of July, and it would remain fully charged when you need to heat your home in January. The degradation figures also look decent for a MOST energy storage. The team ran the system through 20 charge-discharge cycles with negligible decay.
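For a sense of what a 481-day half-life means in practice, here is a minimal sketch, assuming simple first-order thermal reversion (an assumption for illustration; the paper reports the half-life, not the full decay model):

```python
# Fraction of charged fuel remaining after t days, assuming first-order
# thermal reversion with the 481-day half-life reported for the most
# stable derivatives.

half_life_days = 481
t_days = 180  # roughly July -> January

remaining = 0.5 ** (t_days / half_life_days)
print(f"fraction still charged after {t_days} days: {remaining:.0%}")  # ~77%
```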

The problem with separating the acid from the fuel could be solved in a practical system by switching to a different catalyst. The scientists suggest in the paper that in this hypothetical setup, the fuel would flow through an acid-functionalized solid surface to release heat, thus eliminating the need for neutralization afterwards.

Still, we’re rather far away from using MOST systems to heat actual homes. To get there, we’re going to need molecules that absorb far more of the light spectrum and convert to the activated state with a higher efficiency. We’re just not there yet.

Science, 2026. DOI: 10.1126/science.aec6413




mareino: If you want to know what 2050 will look like, read the science news today.

Saturday Morning Breakfast Cereal - Quest



Click here to go see the bonus panel!

Hovertext:
You can actually do a lot of low-risk leveling up of the relationship just by using a bot to have your character run around stabbing rats all day.




