Another Earthquake Prediction Failure

Seismologists are ingenious and persistent in their attempts to obtain the Holy Grail of earthquake prediction, with proposed methods ranging from the primitive (observe the behavior of snakes) to the esoteric (borrow some mathematical methods from stock market forecasters and throw them in a pot with a very large data set). On the esoteric end of things is the use of something called the b-value, a number which defines the relative frequency of large earthquakes and small earthquakes. Previous studies have suggested that the proportion of large and small earthquakes in any given area is related to the amount of stress the local faults are under, and so it seems plausible that changes in b-value could indicate changes in stress that might foreshadow big events.
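For the curious: the b-value is the slope in the Gutenberg–Richter relation, log₁₀ N(M) = a − bM, where N(M) is the number of earthquakes of magnitude M or greater. A b-value near 1 means roughly ten times as many M3s as M4s; a lower b-value means proportionally more big quakes. A common way to estimate it is Aki's maximum-likelihood formula, sketched below (the toy catalog and completeness cutoff are made up for illustration, not from the study):

```python
import math

def b_value(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value estimate:
    b = log10(e) / (mean(M) - M_c), using only events at or
    above the magnitude-of-completeness cutoff M_c."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_c)

# Toy catalog (invented numbers): lots of small quakes, a few larger ones.
catalog = [2.1, 2.3, 2.2, 2.8, 3.0, 2.4, 2.6, 3.5, 2.2, 4.1, 2.5, 2.9]
print(round(b_value(catalog, m_c=2.0), 2))
```

The catch, as the study below illustrates, is not computing b – it's getting enough events in each time window or fault segment for changes in the estimate to mean anything.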

Nope. A recent statistical analysis of earthquakes on the Bay Area’s own Calaveras Fault has shown that using b-values for earthquake prediction on that system is statistically useless, if not completely wrongheaded. Based on a 37-year record of seismicity, study author Tom Parsons of the USGS was able to rule out the possibility that changing b-values over time are a useful signal of medium-to-large (M>4) earthquakes. While he was unable to rule out the possibility that different b-values on different spatial segments of the fault could help us predict where along the fault future earthquakes will occur, he wasn’t able to conclusively rule it in, either – even on an active and well-instrumented fault like the Calaveras, we simply don’t have enough data to say for sure.

Reports of precursors to individual earthquakes, like the b-value changes observed before the tragic 2004 Andaman Island earthquake, offer the tantalizing hope that we might someday be able to avert such catastrophes. But we can’t make good predictions based on the behavior of just one earthquake. Potential earthquake precursors need to be tested over a long period of time, including periods when there are no large earthquakes, to be sure that we’re not just seeing noise and coincidence. Sadly, most precursors seem to fail such tests.


  1. Lab Lemming wrote:

    How many subduction megathrusts are well-instrumented continuously? Or at least on the timescale of their repeat rate (e.g. centuries, which amounts to the same thing)?

    Japan? Cascadia? Or fuck-all?

  2. yami wrote:

    On the timescale of a repeat rate for M5-6 events, Japan, for sure.

  3. gengar wrote:

    New Zealand is pretty closely monitored. I would have thought Cascadia was, given the amount of seismometers plonked on the San Andreas.

    But given that nobody thought the Andaman thrust was going to rupture that dramatically, there’s probably a fair number of worrying gaps…

  4. yami wrote:

    Despite its dangers Cascadia is not as well-instrumented as California… but it ain’t bad. I’m just not sure how far back the record goes, nor did I read the paper closely enough to see what the detection threshold needs to be to do this kind of analysis.
