“Like putting on glasses for the first time”—how AI improves earthquake detection

Accepted submission by Freeman at 2025-10-10 13:42:01 from the AI earthquake overlords dept.
News

https://arstechnica.com/science/2025/10/like-putting-on-glasses-for-the-first-time-how-ai-improves-earthquake-detection/ [arstechnica.com]

On January 1, 2008, at 1:59 am in Calipatria, California, an earthquake happened. You haven’t heard of this earthquake; even if you had been living in Calipatria, you wouldn’t have felt anything. It was magnitude -0.53, about the same amount of shaking as a truck passing by. Still, this earthquake is notable, not because it was large but because it was small—and yet we know about it.

Over the past seven years, AI tools based on computer imaging have almost completely automated one of the fundamental tasks of seismology: detecting earthquakes.
[...]
“In the best-case scenario, when you adopt these new techniques, even on the same old data, it’s kind of like putting on glasses for the first time, and you can see the leaves on the trees,” said Kyle Bradley, co-author of the Earthquake Insights [substack.com] newsletter.
[...]
Less certain is what comes next. Earthquake detection is a fundamental part of seismology, but there are many other data processing tasks that have yet to be disrupted. The biggest potential impacts, all the way to earthquake forecasting, haven’t materialized yet.

“It really was a revolution,” said Joe Byrnes, a professor at the University of Texas at Dallas. “But the revolution is ongoing.”
[...]
The main tool that scientists traditionally use is a seismometer. These record the movement of the Earth in three directions: up–down, north–south, and east–west. If an earthquake happens, seismometers can measure the shaking in that particular location.
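For readers who want to picture the data: below is a minimal NumPy sketch of how a three-component seismogram is commonly represented, one time series per axis. The 100 Hz sampling rate, one-minute window, and random "data" are illustrative assumptions, not details from the article.

    import numpy as np

    # Minimal sketch of three-component seismometer data: one time series per
    # axis (vertical, north-south, east-west). Sampling rate, duration, and the
    # random values are illustrative assumptions, not values from the article.
    sampling_rate_hz = 100.0
    n_samples = int(sampling_rate_hz * 60)        # one minute of recording

    rng = np.random.default_rng(0)
    record = {
        "Z": rng.normal(size=n_samples),          # up-down (vertical)
        "N": rng.normal(size=n_samples),          # north-south
        "E": rng.normal(size=n_samples),          # east-west
    }

    # Detection models typically consume the three channels stacked into a
    # single (channels, samples) array.
    waveform = np.stack([record["Z"], record["N"], record["E"]])
    print(waveform.shape)                         # (3, 6000)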
[...]
Before good algorithms, earthquake cataloging had to happen by hand. Byrnes said that “traditionally, something like the lab at the United States Geological Survey would have an army of mostly undergraduate students or interns looking at seismograms.”
[...]
“The field of seismology historically has always advanced as computing has advanced,” Bradley told me.

There’s a big challenge with traditional algorithms, though: They can’t easily find smaller quakes, especially in noisy environments.
[...]
Earthquakes have a characteristic “shape.” The magnitude 7.7 earthquake shown in the article looks quite different from the helicopter landing, for instance.

So one idea scientists had was to make templates from human-labeled datasets. If a new waveform correlates closely with an existing template, it’s almost certainly an earthquake.
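As a rough illustration of that idea (not the actual research pipeline), the sketch below slides a labeled template across a noisy signal and flags offsets where the normalized correlation is high. The 0.8 threshold and the synthetic data are assumptions made for this example only.

    import numpy as np

    def normalized_cross_correlation(template, signal):
        """Slide `template` across `signal`; return the correlation coefficient
        (between -1 and 1) at every offset."""
        n = len(template)
        t = (template - template.mean()) / (template.std() + 1e-12)
        scores = np.empty(len(signal) - n + 1)
        for i in range(len(scores)):
            w = signal[i:i + n]
            w = (w - w.mean()) / (w.std() + 1e-12)
            scores[i] = np.dot(t, w) / n
        return scores

    # Synthetic demonstration: bury a scaled copy of a "labeled" template in
    # noise and flag offsets where the correlation exceeds the threshold.
    rng = np.random.default_rng(1)
    template = rng.normal(size=200)               # stand-in for a labeled quake waveform
    signal = rng.normal(size=5000)
    signal[1000:1200] += 3 * template             # hidden "earthquake"

    scores = normalized_cross_correlation(template, signal)
    print(np.flatnonzero(scores > 0.8))           # offsets near sample 1000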

Template matching works very well if you have enough human-labeled examples. In 2019, Zach Ross’ lab at Caltech used template matching to find 10 times as many earthquakes in Southern California as had previously been known.
[...]
Also, template matching is computationally expensive. Creating a Southern California quake dataset using template matching took 200 Nvidia P100 GPUs running for days on end.
[...]
AI detection models solve all of these problems:

  • They are faster than template matching.
  • Because AI detection models are very small (around 350,000 parameters, compared to billions in LLMs like GPT-4), they can be run on consumer CPUs (see the sketch after this list).
  • AI models generalize well to regions not represented in the original dataset.
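To make the size comparison concrete, here is a toy 1D convolutional picker in PyTorch. It is not Earthquake Transformer's real architecture, and it is even smaller than the ~350,000-parameter figure quoted above; the point is simply that detectors of this scale are tiny by deep-learning standards and run comfortably on a CPU.

    import torch
    import torch.nn as nn

    # Toy 1D convolutional picker, NOT Earthquake Transformer's actual
    # architecture. Layer sizes are arbitrary; the point is the small scale.
    toy_picker = nn.Sequential(
        nn.Conv1d(3, 32, kernel_size=7, padding=3), nn.ReLU(),
        nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
        nn.Conv1d(64, 3, kernel_size=1),          # per-sample scores: detection, P, S
    )

    print(sum(p.numel() for p in toy_picker.parameters()), "parameters")

    # CPU inference over a one-minute, 100 Hz, three-component window.
    waveform = torch.randn(1, 3, 6000)            # (batch, channels, samples)
    with torch.no_grad():
        scores = toy_picker(waveform)
    print(scores.shape)                           # torch.Size([1, 3, 6000])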

[...]
To train an AI model, scientists take large amounts of labeled data, like the labeled waveform segments shown in the article, and do supervised training.
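In concrete terms, "supervised training" here means pairing waveform windows with analyst-made labels and minimizing a loss between the model's per-sample predictions and those labels. The sketch below uses random stand-in data and a deliberately tiny model; none of the specifics (model shape, pick positions, batch size) come from the paper.

    import torch
    import torch.nn as nn

    # Sketch of the supervised setup: inputs are waveform windows, targets are
    # 0/1 traces marking where analysts picked the P and S arrivals. All
    # specifics here are invented for illustration; real training draws on
    # labeled catalogs such as STEAD.
    model = nn.Sequential(
        nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(),
        nn.Conv1d(16, 2, kernel_size=1),          # per-sample logits: P, S
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(100):
        waveforms = torch.randn(8, 3, 6000)       # batch of labeled segments
        targets = torch.zeros(8, 2, 6000)         # analyst picks as 0/1 masks
        targets[:, 0, 1000:1020] = 1.0            # pretend P pick near sample 1000
        targets[:, 1, 2000:2020] = 1.0            # pretend S pick near sample 2000

        optimizer.zero_grad()
        loss = loss_fn(model(waveforms), targets)
        loss.backward()
        optimizer.step()

    print(float(loss))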
[...]
One prominent example is Earthquake Transformer [nature.com], developed around 2020 by a Stanford University team led by S. Mostafa Mousavi, who later became a Harvard professor.

Like many earthquake detection models, Earthquake Transformer adapts ideas from image classification. Readers may be familiar with AlexNet, a famous image-recognition model that kicked off the deep-learning boom [arstechnica.com] in 2012.
[...]
Earthquake Transformer converts raw waveform data into a collection of high-level representations that indicate the likelihood of earthquakes and other seismologically significant events. This is followed by a series of deconvolution layers that pinpoint exactly when an earthquake—and its all-important P and S waves—occurred.

The model also uses an attention layer in the middle of the model to mix information between different parts of the time series. The attention mechanism is most famous in large language models [understandingai.org], where it helps pass information between words. It plays a similar role in seismographic detection.
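A loose PyTorch sketch of that overall shape (convolutional encoder, self-attention in the middle, transposed-convolution "deconvolution" decoder) is below. It illustrates the general design only; it is not the published Earthquake Transformer architecture, and the layer sizes are arbitrary.

    import torch
    import torch.nn as nn

    # Loose sketch of an encoder / attention / decoder picker. Not the
    # published Earthquake Transformer; layer sizes are arbitrary.
    class TinyQuakeNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: raw 3-channel waveform -> downsampled feature sequence.
            self.encoder = nn.Sequential(
                nn.Conv1d(3, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            )
            # Self-attention mixes information across time steps, as in LLMs.
            self.attention = nn.MultiheadAttention(embed_dim=64, num_heads=4,
                                                   batch_first=True)
            # Decoder: upsample back to full resolution, one score per sample
            # for each of detection, P arrival, and S arrival.
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose1d(32, 3, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x):                     # x: (batch, 3, samples)
            h = self.encoder(x)                   # (batch, 64, samples / 4)
            h = h.transpose(1, 2)                 # attention wants (batch, time, features)
            h, _ = self.attention(h, h, h)
            h = h.transpose(1, 2)
            return self.decoder(h)                # (batch, 3, samples)

    scores = TinyQuakeNet()(torch.randn(1, 3, 6000))
    print(scores.shape)                           # torch.Size([1, 3, 6000])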
[...]
Ars has previously reported [arstechnica.com] on how the introduction of ImageNet [image-net.org], an image recognition benchmark, helped spark the deep learning boom. Large, publicly available earthquake datasets have played a similar role in seismology.

Earthquake Transformer was trained using the Stanford Earthquake Dataset (STEAD) [ieee.org], which contains 1.2 million human-labeled segments of seismogram data from around the world. (The paper for STEAD explicitly mentions ImageNet [ieee.org] as an inspiration). Other models, like PhaseNet, were also trained on hundreds of thousands or millions of labeled segments.
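For those who want to poke at the data themselves, the open-source SeisBench project distributes STEAD and similar benchmark datasets behind a common Python interface. The calls below follow SeisBench's documented dataset API as I understand it and should be treated as an assumption rather than gospel; note that the first access downloads a very large archive.

    # Hedged sketch of accessing STEAD-style labeled segments via SeisBench
    # (https://github.com/seisbench/seisbench). The exact calls are assumptions
    # based on SeisBench's documented dataset interface.
    import seisbench.data as sbd

    dataset = sbd.STEAD()                  # ~1.2 million labeled waveform segments
    print(len(dataset))

    # Each record pairs a three-component waveform with human-made labels
    # (event vs. noise, P/S arrival samples) stored in the metadata table.
    print(dataset.metadata.columns[:10])
    waveform = dataset.get_waveforms(0)    # numpy array, channels x samples
    print(waveform.shape)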
[...]
The holy grail of earthquake science is earthquake prediction. For instance, scientists know that a large quake will happen near Seattle but have little ability to know whether it will happen tomorrow or in a hundred years. It would be helpful if we could predict earthquakes precisely enough to allow people in affected areas to evacuate.

You might think AI tools would help predict earthquakes, but that doesn’t seem to have happened yet.
[...]
As in many other scientific fields, seismologists face some pressure to adopt AI methods, whether or not they are relevant to their research.

“The schools want you to put the word AI in front of everything,” Byrnes said. “It’s a little out of control.”

This can lead to papers that are technically sound but practically useless. Hubbard and Bradley told me that they’ve seen a lot of papers based on AI techniques that “reveal a fundamental misunderstanding of how earthquakes work.”
[...]
While these are real issues, and ones Understanding AI has reported on before [understandingai.org], I don’t think they detract from the success of AI earthquake detection. In the last five years, an AI-based workflow has almost completely taken over one of the fundamental tasks in seismology, for the better.

That’s pretty cool.

Earthquake in SoylentNews stories:
Earthquake search on SoylentNews [soylentnews.org]


Original Submission