
Earthquake prediction

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by DoctorTerrella (talk | contribs) at 18:44, 19 November 2014 (Undid revision 634564817 by 66.249.83.212 (talk) factoid). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Earthquake prediction is a branch of the science of seismology concerned with the specification of the time, location, and magnitude of future earthquakes within stated confidence limits.[1][2] Of particular importance is the prediction of hazardous earthquakes that are likely to cause damage to infrastructure or loss of life. Earthquake prediction is sometimes distinguished from earthquake forecasting, which some authorities consider to be the probabilistic assessment of general earthquake hazard, including the frequency and magnitude of damaging earthquakes, in a given area over periods of years or decades.[3] It can be further distinguished from real-time earthquake warning systems, which, upon detection of a severe earthquake, can provide a few seconds of warning to neighboring regions.

To be useful, earthquake predictions must be precise, timely, and reliable. A prediction must be precise enough to warrant the cost of increased precautions, including the disruption of ordinary activities and commerce, and timely enough that preparations can be made. Predictions must also be reliable, as false alarms and canceled alarms are economically costly[4] and undermine confidence in, and thereby the effectiveness of, any kind of warning.[5]

With over 13,000 earthquakes around the world each year having a Richter magnitude of 4.0 or greater, trivial success in earthquake prediction is easily obtained using sufficiently broad parameters of time, location, or magnitude.[6] However, such trivial "successful predictions" are not useful. Useful prediction of large (damaging) earthquakes in a timely manner is generally notable for its absence, the few claims of success being controversial.[7] Extensive searches have reported many possible earthquake precursors, but "none have been found to be reliable."[8]

In the 1970s, scientists were optimistic that a practical method for predicting earthquakes would soon be found, but by the 1990s continuing failure led many to question whether it was even possible.[9] While some scientists still hold that, given enough resources, prediction might be possible, many others now maintain that earthquake prediction is inherently impossible.[10]

The problem of earthquake prediction

Approx. # of quakes per year, globally[11]

Mag.      Class.    #
M ≥ 8     Great     1
M 7–7.9   Major     15
M 6–6.9   Large     134
M 5–5.9   Moderate  1,319
M 4–4.9   Small     ~13,000

On average, about one earthquake of Richter magnitude (M) 8 or larger occurs somewhere in the world each year, along with 15 or so "major" M ≥ 7 quakes.[12] The United States Geological Survey (USGS) reckons another 134 "large" quakes above M 6, and about 1,300 quakes in the "moderate" range, from M 5 to M 5.9 ("felt by all, many frightened"[13]). In the M 4 to M 4.9 range – "small" – an estimated 13,000 quakes occur annually. Quakes below M 4 – noticeable to only a few persons, and possibly not recognized as earthquakes – number over a million each year.
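The roughly tenfold increase in frequency for each unit decrease in magnitude reflects the Gutenberg–Richter relation, log₁₀ N = a − bM, with b ≈ 1 for global seismicity. A minimal sketch (the constant a is an illustrative choice, tuned only to roughly match the counts above, not a fitted value):

```python
# Gutenberg-Richter relation: log10(N) = a - b*M, with b ~ 1 globally.
# The constant a below is illustrative, chosen so the counts roughly
# track the table of annual quake numbers above.
a, b = 8.1, 1.0

def annual_count(magnitude):
    """Approximate number of quakes per year at or above `magnitude`."""
    return 10 ** (a - b * magnitude)

for m in (8, 7, 6, 5, 4):
    print(m, round(annual_count(m)))
```

With b = 1, each step down in magnitude multiplies the expected count by ten, which is why "small" quakes are counted in the thousands while "great" quakes number about one per year.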

To be meaningful, an earthquake prediction must be properly qualified. This includes unambiguous specification of time, location, and magnitude.[14] These should be stated either as ranges ("windows", error bounds), or with a weighting function, or with some definitive inclusion rule, so that there is no question whether any particular event is or is not covered by the prediction. A prediction then cannot be retrospectively expanded to include an earthquake it would otherwise have missed, or contracted to appear more significant than it really was. To show that a prediction is not post-selected ("cherry-picked") from a number of generally unsuccessful and unrevealed predictions, it must be published in a manner that reveals all attempts at prediction, failures as well as successes.[15]

Significance

While the actual occurrence – or non-occurrence – of a specified earthquake might seem sufficient for evaluating a prediction, there is always a chance, however small, of getting lucky. A prediction is significant only to the extent it is successful beyond chance.[16] Therefore, methods of statistical hypothesis testing are used to determine the probability that an earthquake such as is predicted would happen anyway (the null hypothesis). The predictions are then evaluated by testing whether they correlate with actual earthquakes better than the null hypothesis.[17]

For some types of studies it is reasonable to compare observations against a null hypothesis of random occurrence. Can the observed distribution of earthquakes be distinguished from random? In many instances, however, earthquake occurrence is clearly not random, with clustering in both space and time.[18] In southern California it has been estimated that about 6% of M≥3.0 earthquakes are "followed by an earthquake of larger magnitude within 5 days and 10 km."[19] It has been estimated that in central Italy 9.5% of M≥3.0 earthquakes are followed by a larger event within 30 km and 48 hours.[20] While such statistics are not satisfactory for purposes of prediction (giving ten to twenty false alarms for each successful prediction), they will skew the results of any analysis that assumes earthquakes occur randomly in time, for example as a Poisson process. It has been shown that a "naive" method based solely on clustering can successfully predict about 5% of earthquakes;[21] slightly better than chance.
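The baseline that any prediction method must beat can be sketched numerically: under a Poisson null hypothesis, the probability that an alarm window contains at least one earthquake purely by chance is 1 − e^(−λw). The rate and window size below are hypothetical illustrations, not values from any real study:

```python
import math
import random

random.seed(42)  # deterministic illustration

# Null hypothesis: earthquakes occur as a Poisson process in time.
# Hypothetical numbers: a regional rate of 1.5 qualifying quakes per
# year, and an alarm window of half a year.
lam = 1.5
window = 0.5

# Analytically, the chance that at least one event falls inside the
# window "by luck alone" is 1 - exp(-lam * window):
p_chance = 1 - math.exp(-lam * window)   # ~0.53

# Monte Carlo check: waiting time to the first Poisson event is
# exponentially distributed, so an alarm "succeeds" by chance
# whenever that waiting time falls inside the window.
trials = 100_000
hits = sum(random.expovariate(lam) < window for _ in range(trials))
p_mc = hits / trials
```

A method whose hit rate does not clearly exceed p_chance has shown no skill beyond luck, which is the point of evaluating predictions against the null hypothesis rather than by raw success counts.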

Use of an incorrect null hypothesis is only one way in which many studies claiming a low but significant level of success in predicting earthquakes are statistically flawed.[22] To avoid these and other problems the Collaboratory for the Study of Earthquake Predictability (CSEP) has developed a means of rigorously and consistently conducting and evaluating earthquake prediction experiments whereby scientists can submit a prediction method which is then evaluated against an authoritative catalog of observed earthquakes.[23]

Consequences

As the purpose of short-term prediction is to enable emergency measures to reduce death and destruction, failure to give warning of a major earthquake that does occur, or at least an adequate evaluation of the hazard, can result in legal liability,[24] or even political purging.[25] But warning of an earthquake that does not occur also incurs a cost:[26] not only the cost of the emergency measures themselves, but of major civil and economic disruption. For example, if anomalous data were recorded in Japan, the Prime Minister, acting on a recommendation by an Earthquake Assessment Committee, could issue an alarm that would shut down all expressways, bullet trains, schools, factories, and so on, at considerable economic cost.[27] False alarms, including alarms that are cancelled, also undermine the credibility, and thereby the effectiveness, of future warnings.[5]

Prediction methods

Earthquake prediction is an immature science in that it cannot predict from first principles the location, date, and magnitude of an earthquake.[28] Research in this area therefore seeks to empirically derive a reliable basis for predictions in either distinct precursors, or some kind of trend or pattern.[29]

Precursors

An earthquake precursor is an anomalous phenomenon that might give effective warning of an impending earthquake.[30] Reports of these – though generally recognized as such only after the event – number in the thousands,[31] some dating back to antiquity.[32] There have been around 400 reports of possible precursors in scientific literature, of roughly twenty different types,[33] running the gamut "from aeronomy to zoology".[34] None have been found to be reliable.[35]

In the early 1990s, the International Association of Seismology and Physics of the Earth's Interior (IASPEI) solicited nominations for a "Preliminary List of Significant Precursors". Forty nominations were made, of which five were selected as possible significant precursors, two of those being based on a single observation each.[36]

After a critical review of the scientific literature, the International Commission on Earthquake Forecasting for Civil Protection (ICEF) concluded in 2011 that there was "considerable room for methodological improvements in this type of research."[37] In particular, many cases of reported precursors are contradictory, lack a measure of amplitude, or are generally unsuitable for rigorous statistical evaluation. Reporting is also biased towards positive results, so the rate of false negatives (earthquake but no precursory signal) cannot be ascertained.[38]

Animal behavior

For centuries there have been occasional anecdotal accounts of anomalous animal behavior preceding and associated with the occurrence of earthquakes. In cases where animals display unusual behavior some tens of seconds prior to a quake, it has been suggested they are responding to the P-wave.[39] These travel through the ground about twice as fast as the S-waves that do the serious shaking.[40] They predict not the earthquake itself — that has already happened — but only the imminent arrival of the more destructive S-waves.

It has also been suggested that unusual behavior hours or even days beforehand could be triggered by foreshock activity at magnitudes that most people don't notice.[41] Another confounding factor of popular accounts of unusual phenomena is skewing due to "flashbulb memories": otherwise unremarkable details become more memorable and more significant when associated with an emotionally powerful event such as an earthquake.[42] A study that attempted to control for these kinds of factors found an increase in unusual animal behavior (possibly triggered by foreshocks) in one case, but not in four other cases of seemingly similar earthquakes.[43]

Scientific study of anomalous animal behavior — distinguished from compendiums of anecdotal reports — is limited because of the difficulty of performing an experiment, let alone repeating one. Yet there was a fortuitous case in 1992: some biologists were studying the behavior of an ant colony when the Landers earthquake struck just 100 km (60 mi) away. Despite severe ground shaking, the ants seemed oblivious to the quake itself, as well as to any precursors.[44]

In an earlier study, researchers monitored rodent colonies at two seismically active locations in California. In the course of the study there were several moderate quakes, and there was anomalous behavior. However, the latter was coincident with other factors; no connection with an earthquake could be shown.[45]

In 1988, Schaal tested a theory that spikes in lost pet advertisements in the San Jose Mercury News portended an increased chance of an earthquake within 70 miles of downtown San Jose, California. The hypothesis could be tested because it was based on quantifiable, objective, publicly available data. Schaal found no correlation.[46] Another study looked at reports of anomalous animal behavior reported to a hotline prior to an earthquake, but found no significant increase that could be correlated with a subsequent earthquake.[47]

Changes in Vp/Vs

Vp is the symbol for the velocity of a seismic "P" (primary or pressure) wave passing through rock, while Vs is the symbol for the velocity of the "S" (secondary or shear) wave. Small-scale laboratory experiments have shown that the ratio of these two velocities – represented as Vp/Vs – changes when rock is near the point of fracturing. In the 1970s it was considered a significant success and likely breakthrough when Russian seismologists reported observing such changes in the region of a subsequent earthquake.[48] This effect, as well as other possible precursors, has been attributed to dilatancy, where rock stressed to near its breaking point expands (dilates) slightly.[49]
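For elastic, isotropic rock, the Vp/Vs ratio follows directly from Poisson's ratio; for an ideal Poisson solid (ν = 0.25) it equals √3 ≈ 1.73. A minimal sketch of the relationship (the dilatancy hypothesis implies the observed ratio should dip below such a baseline as dry cracks open and Vp falls; the ν values here are illustrative):

```python
import math

def vp_over_vs(nu):
    """Vp/Vs for an elastic, isotropic solid with Poisson's ratio nu."""
    return math.sqrt((2.0 - 2.0 * nu) / (1.0 - 2.0 * nu))

# Ideal Poisson solid: the textbook value sqrt(3) ~ 1.73.
baseline = vp_over_vs(0.25)

# A lower effective Poisson's ratio (e.g. dry cracks opening under
# stress, per the dilatancy hypothesis) lowers the observed ratio:
stressed = vp_over_vs(0.20)
assert stressed < baseline
```

This is why a measured drop in Vp/Vs of even a few percent in a region was taken as a possible sign of rock approaching failure.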

Study of this phenomenon near Blue Mountain Lake in New York State led to a successful prediction in 1973.[50] However, additional successes there have not followed, and it has been suggested that the prediction was only a fluke.[51] A Vp/Vs anomaly was the basis of a 1976 prediction of a M 5.5 to 6.5 earthquake near Los Angeles, which failed to occur.[52] Other studies relying on quarry blasts (more precise, and repeatable) found no such variations,[53] and an alternative explanation has been reported for such variations as have been observed.[54] Geller (1997) noted that reports of significant velocity changes have ceased since about 1980.

Radon emissions

Most rock contains small amounts of gases that can be isotopically distinguished from the normal atmospheric gases. There are reports of spikes in the concentrations of such gases prior to a major earthquake; this has been attributed to release due to pre-seismic stress or fracturing of the rock. One of these gases is radon, produced by radioactive decay of the trace amounts of uranium present in most rock.[55]

Radon is attractive as a potential earthquake predictor because being radioactive it is easily detected,[56] and its short half-life (3.8 days) makes it sensitive to short-term fluctuations. A 2009 review[57] found 125 reports of changes in radon emissions prior to 86 earthquakes since 1966. But as the ICEF found in its review, the earthquakes with which these changes are supposedly linked were up to a thousand kilometers away, months later, and at all magnitudes. In some cases the anomalies were observed at a distant site, but not at closer sites. The ICEF found "no significant correlation".[58] Another review concluded that in some cases changes in radon levels preceded an earthquake, but a correlation is not yet firmly established.[59]
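Radon's sensitivity to short-term fluctuations follows directly from its 3.8-day half-life: an anomaly decays away within about two weeks unless freshly replenished. A quick calculation:

```python
# Fraction of a radon-222 sample remaining after t days,
# given its half-life of 3.8 days.
HALF_LIFE_DAYS = 3.8

def fraction_remaining(t_days):
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

# After one half-life, half remains; after two weeks, under 8%:
assert abs(fraction_remaining(3.8) - 0.5) < 1e-12
print(round(fraction_remaining(14.0), 3))
```

A sustained radon spike therefore implies ongoing release from the rock, not a one-time emission, which is why it was hoped to track pre-seismic stress in near real time.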

Electro-magnetic variations

Various attempts have been made to identify possible pre-seismic indications in electrical, electric-resistive, or magnetic phenomena.[60] The most touted, and most criticized, is the VAN method of professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – of the National and Capodistrian University of Athens. In a 1981 paper[61] they claimed that by measuring geoelectric voltages – what they called "seismic electric signals" (SES) – they could predict earthquakes of magnitude larger than 2.8 within all of Greece up to 7 hours beforehand. Later the claim changed to being able to predict earthquakes larger than magnitude 5, within 100 km of the epicentral location, within 0.7 units of magnitude, and in a 2-hour to 11-day time window.[62] Subsequent papers claimed a series of successful predictions.[63] Despite these claims, the VAN group generated intense public criticism in the 1980s by issuing telegram warnings, a large number of which were false alarms.

Objections have been raised that the claimed process is not physically possible. For example, none of the earthquakes which VAN claimed were preceded by SES generated SES themselves, as would have been expected. Further, an analysis of the wave propagation properties of SES in the Earth's crust showed that signals with the amplitude reported by VAN could not have been generated by small earthquakes and transmitted over the several hundred kilometers from the epicenter to the monitoring station.[64]

Several authors have pointed out that VAN’s publications are characterized by not accounting for (identifying and eliminating) the possible sources of electromagnetic interference (EMI) to their measuring system. Taken as a whole, the VAN method has been criticized as lacking consistency while doing statistical testing of the validity of their hypotheses.[65] In particular, there has been some contention over which catalog of seismic events to use in vetting predictions. This catalog switching can be used to conclude that, for example, of 22 claims of successful prediction by VAN[66] 74% were false, 9% correlated at random and for 14% the correlation was uncertain.[67]

In 1996 the journal Geophysical Research Letters presented a debate on the statistical significance of the VAN method;[68] the majority of reviewers found the methods of VAN to be flawed, and the claims of successful predictions statistically insignificant.[69] In 2001, the VAN method was modified to include time series analysis, and Springer published an overview in 2011.[70]

After the 1989 Loma Prieta earthquake occurred, a group led by Antony C. Fraser-Smith of Stanford University reported that the event was preceded by disturbances in background magnetic field noise as measured by a sensor placed in Corralitos, California, about 4.5 miles (7 km) from the epicenter.[71] From 5 October, they reported a substantial increase in noise was measured in the frequency range 0.01–10 Hz. The measurement instrument was a single-axis search-coil magnetometer that was being used for low frequency research. Precursory increases of noise apparently started a few days before the earthquake, with noise in the range 0.01–0.5 Hz rising to exceptionally high levels about three hours before the earthquake. Though this pattern gave scientists new ideas for research into potential precursors to earthquakes, and the Fraser-Smith et al. report remains one of the most frequently cited examples of a specific earthquake precursor, more recent studies have cast doubt on the connection, attributing the Corralitos signals to either unrelated magnetic disturbance[72] or, even more simply, to sensor-system malfunction.[73]

Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches look for trends or patterns that lead up to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; such approaches are therefore sometimes called statistical methods. They also tend to be more probabilistic and to cover longer time periods, and so merge into earthquake forecasting.

Elastic rebound

Even the stiffest of rock is not perfectly rigid. Given a large enough force (such as between two immense tectonic plates moving past each other) the earth's crust will bend or deform. What happens next is described by the elastic rebound theory of Reid (1910): eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state, but now offset, and thereby accommodating inter-plate motion. In the process energy is released in various forms, including seismic waves.[74] The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake),[75] the demonstrated existence of large strike-slip displacements of hundreds of miles shows the existence of a long running earthquake cycle.[76]

Characteristic earthquakes

The most studied earthquake faults (such as the Nankai megathrust, the Wasatch fault, and the San Andreas fault) appear to have distinct segments. The characteristic earthquake model postulates that earthquakes are generally constrained within these segments.[77] As the lengths and other characteristics[78] of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture), and the amount of accumulated strain needed to rupture the fault segment. Since strain accumulates steadily, it seems a fair inference that seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals.[79] For a given fault segment, identifying these characteristic earthquakes and timing their recurrence rate (or conversely, their return period) should therefore inform us when to expect the next rupture; this is the approach generally used in forecasting seismic hazard.[80] Return periods are used for modeling rare events generally, such as cyclones and floods, and such forecasts simply posit that future frequency will be similar to the frequency observed to date; the application to finer-scale earthquake prediction, however, depends on the characteristic earthquake model itself.

This is essentially the basis of the Parkfield prediction: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years.[81] Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence level).[82] The appeal of such a method is in being derived entirely from the trend, which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests they differ sufficiently in other respects to question whether they have distinct characteristics.[83]
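The arithmetic behind such an extrapolation can be sketched naively from the listed dates. (The published 21.9 ± 3.1 yr figure came from a more refined treatment of the same sequence, so this toy estimate matches the mean but not the quoted spread.)

```python
import statistics

# Parkfield earthquake years cited in the text:
years = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(years, years[1:])]  # [24, 20, 21, 12, 32]

mean_interval = statistics.mean(intervals)   # 21.8 years
spread = statistics.stdev(intervals)         # ~7.2 years (naive sample std)

next_expected = years[-1] + mean_interval    # ~1988
```

The large naive spread, driven by the short 1922–1934 and long 1934–1966 intervals, hints at the irregularity that ultimately defeated the prediction.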

The failure of the Parkfield prediction has raised doubt as to the validity of the characteristic earthquake model itself.[84] Some studies have questioned the various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles).[85] Other studies have considered whether other factors need to be considered, such as the age of the fault.[86] Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.[87]

Seismic gaps

At the contact where two tectonic plates slip past each other every section must eventually slip, as (in the long-term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest.[88] This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific (Pacific Rim) forecasts in 1979 and 1989–1991.[89]

It has been asked: "How could such an obvious, intuitive model not be true?"[90] Possibly because some underlying assumptions are not correct. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region";[91] statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well".[92] Another study concluded that a long quiet period did not increase earthquake potential.[93]

Seismicity patterns

Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of Vladimir Keilis-Borok. M8 issues "Time of Increased Probability" (TIP) alarms for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years.[94] Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happen were skillfully predicted, or only the result of chance.

M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP.[95] But a widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser known TIPs.[96] A deep study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway).[97] It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates of seismicity."[98]

Accelerating moment release (AMR, "moment" being a measurement of seismic energy), also known as time-to-failure analysis, or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate.[99] That is: a plot of the cumulative number of foreshocks gets steeper just before the main shock.
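The steepening cumulative curve is commonly modeled in this literature with a power-law time-to-failure relation, cum(t) = A − B(tf − t)^m with 0 < m < 1. The parameter values below are arbitrary illustrations of the shape, not fitted to any data:

```python
# Time-to-failure model of accelerating moment release: the cumulative
# release curve steepens as t approaches the failure time tf.
tf, A, B, m = 100.0, 50.0, 5.0, 0.3   # illustrative parameters only

def cumulative_release(t):
    return A - B * (tf - t) ** m

# Release per successive 10-day window increases toward tf:
windows = [(0, 10), (30, 40), (60, 70), (85, 95)]
rates = [cumulative_release(b) - cumulative_release(a) for a, b in windows]
assert rates == sorted(rates)   # monotonically accelerating
```

The practical difficulty noted below follows from this shape: because the curve only turns sharply upward very close to tf, fitting it early gives poorly constrained estimates of the failure time.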

Following formulation by Bowman et al. (1998) into a testable hypothesis,[100] and a number of positive reports, AMR seemed to have a promising future.[101] This despite several problems, including its not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep.[102] But more rigorous testing has shown that apparent AMR trends likely result from how data fitting is done[103] and from failure to account for spatiotemporal clustering of earthquakes;[104] the AMR trends are statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) is reported to have fallen off since 2004.[105]

The occurrence of foreshocks has long been thought to be the most promising avenue for predicting earthquakes. A foreshock is a smaller earthquake that can strike minutes or days before a larger one. Because the rupture process of earthquakes is still not completely understood, foreshock occurrence may give clues to the triggering process. In the Non-Critical Precursory Accelerating Seismicity Theory (N-C PAST), foreshocks happen because of the constant buildup of pressure along fault lines.[106] This theory is supported by seismic measurements, which has led some scientists to conclude that foreshocks are a precursor to a larger event and should be further studied and considered in earthquake prediction.

Notable predictions

Following is a list of predictions found in Hough's book[107] and prominently discussed in Geller's paper.[108]

1975: Haicheng, China

The M 7.3 Haicheng (China) earthquake of 4 February 1975 is the most widely cited "success" of earthquake prediction.[109] Study of seismic activity in the region led the Chinese authorities to issue a medium-term prediction in June 1974. The political authorities therefore ordered various measures, including enforced evacuation of homes, construction of "simple outdoor structures", and showing of movies out-of-doors. Although the quake, striking at 19:36, was powerful enough to destroy or badly damage about half of the homes, the "effective preventative measures taken" were said to have kept the death toll under 300, in a population of about 1.6 million where tens of thousands of fatalities might otherwise have been expected.[110]

However, although a major earthquake occurred, there has been some skepticism about this narrative of effective measures taken on the basis of a timely prediction. This was during the Cultural Revolution, when "belief in earthquake prediction was made an element of ideological orthodoxy that distinguished the true party liners from right wing deviationists"[111] and record keeping was disordered, making it difficult to verify details of the claim, even as to whether there was an ordered evacuation. The method used for either the medium-term or short-term predictions (other than "Chairman Mao's revolutionary line"[112]) has not been specified.[113] It has been suggested that the evacuation was spontaneous, following the strong (M 4.7) foreshock that occurred the day before.[114]

A 2006 study that had access to an extensive range of records found that the predictions were flawed. "In particular, there was no official short-term prediction, although such a prediction was made by individual scientists."[115] Also: "it was the foreshocks alone that triggered the final decisions of warning and evacuation". They estimated that 2,041 lives were lost. That more did not die was attributed to a number of fortuitous circumstances, including earthquake education in the previous months (prompted by elevated seismic activity), local initiative, timing (occurring when people were neither working nor asleep), and local style of construction. The authors conclude that, while unsatisfactory as a prediction, "it was an attempt to predict a major earthquake that for the first time did not end up with practical failure."

1985–1993: Parkfield, USA (Bakun-Lindh)

The "Parkfield earthquake prediction experiment" was the most heralded scientific earthquake prediction ever.[116] It was based on an observation that the Parkfield segment of the San Andreas Fault[117] breaks regularly with a moderate earthquake of about M 6 every several decades: 1857, 1881, 1901, 1922, 1934, and 1966.[118] More particularly, Bakun & Lindh (1985) pointed out that, if the 1934 quake is excluded, these occur every 22 years, ±4.3 years. Counting from 1966, they predicted a 95% chance that the next earthquake would hit around 1988, or 1993 at the latest. The National Earthquake Prediction Evaluation Council (NEPEC) evaluated this, and concurred.[119] The U.S. Geological Survey and the State of California therefore established one of the "most sophisticated and densest nets of monitoring instruments in the world",[120] in part to identify any precursors when the quake came. Confidence was high enough that detailed plans were made for alerting emergency authorities if there were signs an earthquake was imminent.[121] In the words of the Economist: "never has an ambush been more carefully laid for such an event."[122]

1993 came, and passed, without fulfillment. Eventually there was an M 6.0 earthquake, on 28 September 2004, but without forewarning or obvious precursors.[123] While the experiment in catching an earthquake is considered by many scientists to have been successful,[124] the prediction was unsuccessful in that the eventual event was a decade late.[125]

1987–1995: Greece (VAN)

Professors P. Varotsos, K. Alexopoulos and K. Nomicos – "VAN" – claimed in a 1981 paper an ability to predict M ≥ 2.6 earthquakes within 80 km of their observatory (in Greece) approximately seven hours beforehand, by measurements of 'seismic electric signals'. In 1996 Varotsos and other colleagues claimed to have predicted impending earthquakes within windows of several weeks, 100–120 km, and ±0.7 of the magnitude.[126]

The VAN predictions have been criticized on various grounds, including being geophysically implausible,[127] "vague and ambiguous",[128] failing to satisfy prediction criteria,[129] and retroactive adjustment of parameters.[130] A critical review of 14 cases where VAN claimed 10 successes showed only one case where an earthquake occurred within the prediction parameters.[131] The VAN predictions not only fail to do better than chance, but show "a much better association with the events which occurred before them", according to Mulargia and Gasperini.[132]

1989: Loma Prieta, USA

On 17 October 1989, the Mw 6.9 (Ms 7.1[133]) Loma Prieta ("World Series") earthquake (epicenter in the Santa Cruz Mountains northwest of San Juan Bautista, California) caused significant damage in the San Francisco Bay area of California.[134] The U.S. Geological Survey (USGS) reportedly claimed, twelve hours after the event, that it had "forecast" this earthquake in a report the previous year.[135] USGS staff subsequently claimed this quake had been "anticipated";[136] various other claims of prediction have also been made.[137]

Harris (1998) reviewed 18 papers (with 26 forecasts) dating from 1910 "that variously offer or relate to scientific forecasts of the 1989 Loma Prieta earthquake." (In this context a forecast is a probabilistic estimate of an earthquake happening over some time period, as distinguished from a more specific prediction.[138] No such distinction was made in this case.) None of these forecasts can be rigorously tested due to lack of specificity,[139] and where a forecast does bracket the time and location, the window is so broad (e.g., covering the greater part of California for five years) as to lose any value as a prediction. Predictions that came close (but with a probability of only 30%) had ten- or twenty-year windows.[140]

Of the several prediction methods used perhaps the most debated was the M8 algorithm used by Keilis-Borok and associates in four forecasts.[141] The first of these forecasts missed both magnitude (M 7.5) and time (a five-year window from 1 January 1984, to 31 December 1988). They did get the location, by including most of California and half of Nevada.[142] A subsequent revision, presented to the NEPEC, extended the time window to 1 July 1992, and reduced the location to only central California; the magnitude remained the same. A figure they presented had two more revisions, for M ≥ 7.0 quakes in central California. The five-year time window for one ended in July 1989, and so missed the Loma Prieta event; the second revision extended to 1990, and so included Loma Prieta.[143]

Harris describes two differing views about whether the Loma Prieta earthquake was predicted. One view argues it did not occur on the San Andreas fault (the focus of most of the forecasts), and involved dip-slip (vertical) movement rather than strike-slip (horizontal) movement, and so was not predicted.[144] The other view argues that it did occur in the San Andreas fault zone, and released much of the strain accumulated since the 1906 San Francisco earthquake; therefore several of the forecasts were correct.[145] Hough states that "most seismologists" do not believe this quake was predicted "per se".[146] In a strict sense there were no predictions, only forecasts, which were only partially successful.

Iben Browning claimed to have predicted the Loma Prieta event, but (as will be seen in the next section) this claim has been rejected.

1990: New Madrid, USA (Browning)

Dr. Iben Browning (a scientist by virtue of a Ph.D. degree in zoology and training as a biophysicist, but with no training or experience in geology, geophysics, or seismology) was an "independent business consultant" who forecast long-term climate trends for businesses, including through publication of a newsletter.[147] He seems to have been enamored of the scientifically unproven idea that volcanoes and earthquakes are more likely to be triggered when the tidal forces of the sun and the moon coincide to exert maximum stress on the earth's crust.[148] Having calculated when these tidal forces maximize, Browning then "projected"[149] what areas he thought might be ripe for a large earthquake. An area he mentioned frequently was the New Madrid Seismic Zone at the southeast corner of the state of Missouri, the site of three very large earthquakes in 1811–1812, which he coupled with the date of 3 December 1990.

Browning's reputation and perceived credibility were boosted when he claimed in various promotional flyers and advertisements to have predicted (among various other events[150]) the Loma Prieta earthquake of 17 October 1989.[151] The National Earthquake Prediction Evaluation Council (NEPEC) eventually formed an Ad Hoc Working Group (AHWG) to evaluate Browning's prediction. Its report (issued 18 October 1990) specifically rejected the claim of a successful prediction of the Loma Prieta earthquake;[152] examination of a transcript of his talk in San Francisco on 10 October showed he had said only: "there will probably be several earthquakes around the world, Richter 6+, and there may be a volcano or two" – which, on a global scale, is about average for a week – with no mention of any earthquake anywhere in California.[153]

Though the AHWG report thoroughly demolished Browning's claims of prior success and the basis of his "projection", it made little impact, coming after a year of continued claims of a successful prediction, the endorsement and support of geophysicist David Stewart,[154] and the tacit endorsement of many public authorities in their preparations for a major disaster, all of which was amplified by massive exposure in all major news media.[155] Nothing happened on 3 December,[156] and Browning died of a heart attack seven months later.[157]

2004 & 2005: Southern California, USA (Keilis-Borok)

The M8 algorithm (developed under the leadership of Dr. Vladimir Keilis-Borok at UCLA) gained considerable respect from the apparently successful predictions of the 2003 San Simeon and Hokkaido earthquakes.[158] Great interest was therefore generated by the announcement in early 2004 of a predicted M ≥ 6.4 earthquake to occur somewhere within an area of southern California of approximately 12,000 sq. miles, on or before 5 September 2004.[159] In evaluating this prediction the California Earthquake Prediction Evaluation Council (CEPEC) noted that this method had not yet made enough predictions for statistical validation, and was sensitive to input assumptions. It therefore concluded that no "special public policy actions" were warranted, though it reminded all Californians "of the significant seismic hazards throughout the state."[160] The predicted earthquake did not occur.

A very similar prediction was made for an earthquake on or before 14 August 2005, in approximately the same area of southern California. The CEPEC's evaluation and recommendation were essentially the same, this time noting that the previous prediction and two others had not been fulfilled.[161] This prediction also failed.

2009: L'Aquila, Italy (Giuliani)

At 03:32 on 6 April 2009, the Abruzzo region of central Italy was rocked by an M 6.3 earthquake.[162] In the city of L'Aquila and the surrounding area around 60,000 buildings collapsed or were seriously damaged, resulting in 308 deaths and 67,500 people left homeless.[163] Around the same time, it was reported that Giampaolo Giuliani had predicted the earthquake and had tried to warn the public, but had been muzzled by the Italian government.[164]

Giampaolo Giuliani was a laboratory technician at the Laboratori Nazionali del Gran Sasso. As a hobby he had for some years been monitoring radon using instruments he had designed and built. Prior to the L'Aquila earthquake he was unknown to the scientific community, and had not published any kind of scientific work.[165] He had been interviewed on 24 March by an Italian-language blog, Donne Democratiche, about a swarm of low-level earthquakes in the Abruzzo region that had started the previous December. He said that this swarm was normal and would diminish by the end of March. On 30 March, L'Aquila was struck by a magnitude 4.0 temblor, the largest to date.[166]

On 27 March Giuliani warned the mayor of L'Aquila there could be an earthquake within 24 hours, and an earthquake smaller than about M 2.3 occurred.[167] On 29 March he made a second prediction.[168] He telephoned the mayor of the town of Sulmona, about 55 kilometers southeast of L'Aquila, to expect a "damaging" – or even "catastrophic" – earthquake within 6 to 24 hours. Loudspeaker vans were used to warn the inhabitants of Sulmona to evacuate, with consequent panic. No quake ensued and Giuliani was cited for inciting public alarm and enjoined from making public predictions.[169]

After the L'Aquila event Giuliani claimed that he had found alarming rises in radon levels just hours before.[170] He said he had warned relatives, friends and colleagues on the evening before the earthquake hit.[171] He was interviewed by the International Commission on Earthquake Forecasting for Civil Protection, which found that there had been no valid prediction of the mainshock before its occurrence.[172]

Difficulty or impossibility

Earthquake prediction may be intrinsically impossible. It has been argued that the Earth is in a state of self-organized criticality "where any small earthquake has some probability of cascading into a large event".[173] It has also been argued on decision-theoretic grounds that prediction of major earthquakes is impossible.[174]

That earthquake prediction might be intrinsically impossible has been disputed.[175]

Notes

  1. ^ Geller et al. 1997, p. 1616, following Allen (1976, p. 2070), who in turn followed Wood & Gutenberg (1935). Kagan (1997b, §2.1) says: "This definition has several defects which contribute to confusion and difficulty in prediction research." In addition to specification of time, location, and magnitude, Allen suggested three other requirements: 4) indication of the author's confidence in the prediction, 5) the chance of an earthquake occurring anyway as a random event, and 6) publication in a form that gives failures the same visibility as successes. Kagan & Knopoff (1987, p. 1563) define prediction (in part) "to be a formal rule whereby the available space-time-seismic moment manifold of earthquake occurrence is significantly contracted ...."
  2. ^ Kagan 1997b, p. 507.
  3. ^ Kanamori 2003, p. 1205. See also ICEF 2011, p. 327.
  4. ^ Thomas 1983.
  5. ^ a b Atwood & Major 1998.
  6. ^ Mabey 2001. Mabey cites 7,000, using some older data. The yearly distribution of earthquakes by magnitude, in both the United States and worldwide, can be found at the U.S. Geological Survey Earthquake Statistics page.
  7. ^ E.g., the most famous claim of a successful prediction is that alleged for the 1975 Haicheng earthquake (ICEF 2011, p. 328), and is now listed as such in textbooks (Jackson 2004, p. 344). A later study concluded there was no valid short-term prediction (Wang et al. 2006), as described in more detail below.
  8. ^ Geller 1997, Summary.
  9. ^ Geller et al. 1997, p. 1617; Geller 1997, §2.3, p. 427; Console 2001, p. 261.
  10. ^ Kagan 1997b; Geller 1997. See also Nature Debates.
  11. ^ From USGS: Earthquake statistics.
  12. ^ USGS: Earthquake Facts and Statistics
  13. ^ USGS: Modified Mercalli Intensity Scale, level VI.
  14. ^ See Jackson 1996a, p. 3772, for an example.
  15. ^ Allen 1976, p. 2070; PEP 1976, p. 6.
  16. ^ Mulargia & Gasperini 1992, p. 32; Luen & Stark 2008, p. 302.
  17. ^ Luen & Stark 2008; Console 2001.
  18. ^ Jackson 1996a, p. 3775.
  19. ^ Jones 1985, p. 1669.
  20. ^ Console 2001, p. 1261.
  21. ^ Luen & Stark 2008. This was based on data from Southern California.
  22. ^ Hough 2010b relates how several claims of successful predictions are statistically flawed. For a deeper view of the pitfalls of the null hypothesis see Stark 1997 and Luen & Stark 2008.
  23. ^ Zechar et al. 2010.
  24. ^ The manslaughter convictions against the seven scientists and technicians in Italy were not so much for failing to predict the L'Aquila earthquake (where some 300 people died) as for giving undue assurance to the populace – one victim called it "anaesthetizing" – that there would not be a serious earthquake, and therefore no need to take precautions. Hall 2011; Cartlidge 2011. Additional details in Cartlidge 2012.
  25. ^ It has been reported that members of the Chinese Academy of Sciences were purged for "having ignored scientific predictions of the disastrous Tangshan earthquake of summer 1976." Wade 1977.
  26. ^ In January 1999 there was a report (Saegusa 1999) that China was introducing "tough regulations intended to stamp out ‘false’ earthquake warnings, in order to prevent panic and mass evacuation of cities triggered by forecasts of major tremors." This was prompted by "more than 30 unofficial earthquake warnings ... in the past three years, none of which has been accurate."
  27. ^ Geller 1997, §5.2, p. 437.
  28. ^ Kagan 1999, p. 234, and quoting Ben-Menahem (1995) on p. 235; ICEF 2011, p. 360.
  29. ^ PEP 1976, p. 9.
  30. ^ The IASPEI Sub-Commission for Earthquake Prediction defined a precursor as "a quantitatively measurable change in an environmental parameter that occurs before mainshocks, and that is thought to be linked to the preparation process for this mainshock." Geller 1997, §3.1
  31. ^ Geller 1997, p. 429, §3.
  32. ^ E.g., Claudius Aelianus, in De natura animalium, book 11, commenting on the destruction of Helike in 373 BC, but writing five centuries later.
  33. ^ Rikitake 1979, p. 294. Cicerone, Ebel & Britton 2009 has a more recent compilation.
  34. ^ Jackson 2004, p. 335.
  35. ^ Geller (1997, p. 425). See also: Jackson (2004, p. 348): "The search for precursors has a checkered history, with no convincing successes." Zechar & Jordan (2008, p. 723): "The consistent failure to find reliable earthquake precursors...". ICEF (2009): "... no convincing evidence of diagnostic precursors."
  36. ^ Wyss & Booth 1997, p. 424.
  37. ^ ICEF 2011, p. 338.
  38. ^ ICEF 2011, p. 361.
  39. ^ ICEF 2011, p. 336; Lott, Hart & Howell 1981, p. 1204.
  40. ^ Bolt 1993, pp. 30–32.
  41. ^ Lott, Hart & Howell 1981.
  42. ^ Brown & Kulik 1977.
  43. ^ Lott, Hart & Howell 1981. In an earlier study similar behavior was seen before storms. Lott et al. 1979, p. 687.
  44. ^ Lighton & Duncan 2005.
  45. ^ Lindberg, Skiles & Hayden 1981.
  46. ^ Schaal (1988)
  47. ^ Otis & Kautz 1979.
  48. ^ Hammond 1973. Additional references in Geller 1997, §2.4.
  49. ^ Scholz, Sykes & Aggarwal 1973.
  50. ^ Aggarwal et al. 1975.
  51. ^ Hough 2010b, p. 110.
  52. ^ Allen 1983, p. 79; Whitcomb 1977.
  53. ^ McEvilly & Johnson 1974.
  54. ^ Lindh, Lockner & Lee 1978.
  55. ^ ICEF 2011, p. 333. For a fuller account of radon as an earthquake precursor see Immè & Morelli 2012.
  56. ^ Giampaolo Giuliani's claimed prediction of the L'Aquila earthquake was based on monitoring of radon levels.
  57. ^ Cicerone, Ebel & Britton 2009, p. 382.
  58. ^ ICEF 2011, p. 334. See also Hough 2010b, pp. 93–95.
  59. ^ Immè & Morelli 2012, p. 158.
  60. ^ Park 1996.
  61. ^ Varotsos, Alexopoulos & Nomicos 1981, described by Mulargia & Gasperini 1992, p. 32, and Kagan 1997b, §3.3.1, p. 512.
  62. ^ Varotsos et al. 1986.
  63. ^ Varotsos et al. 1986; Varotsos & Lazaridou 1991.
  64. ^ Bernard 1992; Bernard & LeMouel 1996.
  65. ^ Mulargia & Gasperini 1992; Mulargia & Gasperini 1996; Wyss 1996; Kagan 1997b.
  66. ^ Varotsos & Lazaridou 1991.
  67. ^ Wyss & Allmann 1996.
  68. ^ Geller 1996.
  69. ^ See the table of contents.
  70. ^ Varotsos, Sarlis & Skordas 2011.
  71. ^ Fraser-Smith et al. 1990
  72. ^ Campbell 2009
  73. ^ Thomas et al. 2009
  74. ^ Reid 1910, p. 22; ICEF 2011, p. 329.
  75. ^ Wells & Coppersmith 1994, Fig. 11, p. 993.
  76. ^ Zoback 2006 provides a clear explanation. Evans 1997, §2.2 also provides a description of the "self-organized criticality" (SOC) paradigm that is displacing the elastic rebound model.
  77. ^ Castellaro 2003
  78. ^ These include the type of rock and fault geometry.
  79. ^ Schwartz & Coppersmith 1984; Tiampo & Shcherbakov 2012, p. 93, §2.2.
  80. ^ UCERF 2008.
  81. ^ Bakun & Lindh 1985, p. 619. Of course these were not the only earthquakes in this period. The attentive reader will recall that, in seismically active areas, earthquakes of some magnitude happen fairly constantly. The "Parkfield earthquakes" are either the ones noted in the historical record, or were selected from the instrumental record on the basis of location and magnitude. Jackson & Kagan (2006, p. S399) and Kagan (1997, pp. 211–212, 213) argue that the selection parameters can bias the statistics, and that sequences of four or six quakes, with different recurrence intervals, are also plausible.
  82. ^ Bakun & Lindh 1985, p. 621.
  83. ^ Jackson & Kagan 2006, p. S408 say the claim of quasi-periodicity is "baseless".
  84. ^ Jackson & Kagan 2006.
  85. ^ Kagan & Jackson 1991, p. 21,420; Stein, Friedrich & Newman 2005; Jackson & Kagan 2006; Tiampo & Shcherbakov 2012, §2.2, and references there; Kagan, Jackson & Geller 2012. See also the Nature debates.
  86. ^ Young faults are expected to have complex, irregular surfaces, which impedes slippage. In time these rough spots are ground off, changing the mechanical characteristics of the fault. Cowan, Nicol & Tonkin 1996; Stein & Newman 2004, p. 185.
  87. ^ Stein & Newman 2004
  88. ^ Scholz 2002, p. 284, §5.3.3; Kagan & Jackson 1991, p. 21,419; Jackson & Kagan 2006, p. S404.
  89. ^ Kagan & Jackson 1991, p. 21,419; McCann et al. 1979; Rong, Jackson & Kagan 2003.
  90. ^ Jackson & Kagan 2006, p. S404.
  91. ^ Lomnitz & Nava 1983.
  92. ^ Rong, Jackson & Kagan 2003, p. 23.
  93. ^ Kagan & Jackson 1991, Summary.
  94. ^ See details in Tiampo & Shcherbakov 2012, §2.4.
  95. ^ CEPEC 2004a.
  96. ^ Hough 2010b, pp. 142–149.
  97. ^ Zechar 2008; Hough 2010b, pp. 145.
  98. ^ Zechar 2008, p. 7. See also p. 26.
  99. ^ Tiampo & Shcherbakov 2012, §2.1. Hough 2010b, chapter 12, provides a good description.
  100. ^ Hardebeck, Felzer & Michael 2008, par. 6
  101. ^ Hough 2010b, pp. 154–155.
  102. ^ Tiampo & Shcherbakov 2012, §2.1, p. 93.
  103. ^ Hardebeck, Felzer & Michael (2008, §4) show how suitable selection of parameters shows "DMR": Decelerating Moment Release.
  104. ^ Hardebeck, Felzer & Michael 2008, par. 1, 73.
  105. ^ Mignan 2011, Abstract.
  106. ^ Mignan 2013
  107. ^ Hough 2010b.
  108. ^ Geller 1997, §4.
  109. ^ E.g.: Davies 1975; Whitham et al. 1976, p. 265; Hammond 1976; Ward 1978; Kerr 1979, p. 543; Allen 1982, p. S332; Rikitake 1982; Zoback 1983; Ludwin 2001; Jackson 2004, pp. 335, 344; ICEF 2011, p. 328.
  110. ^ Whitham et al. 1976, p. 266 provide a brief report. The report of the Haicheng Earthquake Study Delegation (Anonymous 1977) has a fuller account. Wang et al. (2006, p. 779), after careful examination of the records, set the death toll at 2,041.
  111. ^ Raleigh et al. (1977), quoted in Geller 1997, p. 434. Geller has a whole section (§4.1) of discussion and many sources. See also Kanamori 2003, pp. 1210–1211.
  112. ^ Quoted in Geller 1997, p. 434. Lomnitz (1994, Ch. 2) describes some of the circumstances attending the practice of seismology at that time; Turner 1993, pp. 456–458 has additional observations.
  113. ^ Measurement of an uplift has been claimed, but that was 185 km away, and likely surveyed by inexperienced amateurs. Jackson 2004, p. 345.
  114. ^ Kanamori 2003, p. 1211. According to Wang et al. 2006 foreshocks were widely understood to precede a large earthquake, "which may explain why various [local authorities] made their own evacuation decisions" (p. 762).
  115. ^ Wang et al. 2006, p. 785.
  116. ^ Geller (1997, §6) describes some of the coverage. The most anticipated prediction ever is likely Iben Browning's 1990 New Madrid prediction (discussed below), but it lacked any scientific basis.
  117. ^ Near the small town of Parkfield, California, roughly half-way between San Francisco and Los Angeles.
  118. ^ Bakun & McEvilly 1979; Bakun & Lindh 1985; Kerr 1984.
  119. ^ Bakun et al. 1987.
  120. ^ Kerr 1984, "How to Catch an Earthquake". See also Roeloffs & Langbein 1994.
  121. ^ Roeloffs & Langbein 1994, p. 316.
  122. ^ Quoted by Geller 1997, p. 440.
  123. ^ Kerr 2004; Bakun et al. 2005, Harris & Arrowsmith 2006, p. S5.
  124. ^ Hough 2010b, p. 52.
  125. ^ It has also been argued that the actual quake differed from the kind expected (Jackson & Kagan 2006), and that the prediction was no more significant than a simpler null hypothesis (Kagan 1997).
  126. ^ Varotsos, Alexopoulos & Nomicos 1981, described by Kagan 1997b, §3.3.1, p. 512, and Mulargia & Gasperini 1992, p. 32.
  127. ^ Jackson 1996b, p. 1365; Mulargia & Gasperini 1996, p. 1324.
  128. ^ Geller 1997, §4.5, p. 436: "VAN’s ‘predictions’ never specify the windows, and never state an unambiguous expiration date. Thus VAN are not making earthquake predictions in the first place."
  129. ^ Jackson 1996b, p. 1363. Also: Rhoades & Evison (1996), p. 1373: No one "can confidently state, except in the most general terms, what the VAN hypothesis is, because the authors of it have nowhere presented a thorough formulation of it."
  130. ^ Kagan & Jackson 1996, p. 1434.
  131. ^ Geller 1997, Table 1, p. 436.
  132. ^ Mulargia & Gasperini 1992, p. 37.
  133. ^ Ms is a measure of the intensity of surface shaking, the surface wave magnitude.
  134. ^ Harris 1998, p. B18.
  135. ^ Garwin 1989.
  136. ^ USGS staff 1990, p. 247.
  137. ^ Kerr 1989; Harris 1998.
  138. ^ E.g., ICEF 2011, p. 327.
  139. ^ Harris 1998, p. B22.
  140. ^ Harris 1990, Table 1, p. B5.
  141. ^ Harris 1998, pp. B10–B11.
  142. ^ Harris 1990, p. B10, and figure 4, p. B12.
  143. ^ Harris 1990, p. B11, figure 5.
  144. ^ Geller (1997, §4.4) cites several authors to say "it seems unreasonable to cite the 1989 Loma Prieta earthquake as having fulfilled forecasts of a right-lateral strike-slip earthquake on the San Andreas Fault."
  145. ^ Harris 1990, pp. B21–B22.
  146. ^ Hough 2010b, p. 143.
  147. ^ Spence et al. 1993 (USGS Circular 1083) is the most comprehensive, and most thorough, study of the Browning prediction, and appears to be the main source of most other reports. In the following notes, where an item is found in this document the pdf pagination is shown in brackets.
  148. ^ A report on Browning's prediction cited over a dozen studies of possible tidal triggering of earthquakes, but concluded that "conclusive evidence of such a correlation has not been found". AHWG 1990, p. 10 [62]. It also found Browning's identification of a particular high tide as triggering a particular earthquake "difficult to justify".
  149. ^ According to a note in Spence, et al. (p. 4): "Browning preferred the term projection, which he defined as determining the time of a future event based on calculation. He considered 'prediction' to be akin to tea-leaf reading or other forms of psychic foretelling." See also Browning's own comment on p. 36 [44].
  150. ^ Including "a 50/50 probability that the federal government of the U.S. will fall in 1992." Spence et al. 1993, p. 39 [47].
  151. ^ Spence et al. 1993, pp. 9–11 [17–19 (pdf)], and see various documents in Appendix A, including The Browning Newsletter for 21 November 1989 (p. 26 [34]).
  152. ^ AHWG 1990, p. iii [55]. Included in Spence et al. 1993 as part of Appendix B, pp. 45–66 [53–75].
  153. ^ AHWG 1990, p. 30 [72].
  154. ^ Previously involved in a psychic prediction of an earthquake for North Carolina in 1975 (Spence et al. 1993, p. 13 [21]), Stewart sent a 13-page memo to a number of colleagues extolling Browning's supposed accomplishments, including predicting Loma Prieta. Spence et al. 1993, p. 29 [37].
  155. ^ See Spence et al. 1993 throughout.
  156. ^ Tierney 1993, p. 11.
  157. ^ Spence et al. 1993, p. 40 [48] (p. 4 [12]).
  158. ^ CEPEC 2004a; Hough 2010b, pp. 145–146.
  159. ^ CEPEC 2004a.
  160. ^ CEPEC 2004a.
  161. ^ CEPEC 2004b.
  162. ^ ICEF 2011, p. 320.
  163. ^ Alexander 2010, p. 326.
  164. ^ The Telegraph, 6 April 2009. See also McIntyre 2009.
  165. ^ Hall 2011, p. 267.
  166. ^ Kerr 2009.
  167. ^ The Guardian, 5 April 2010.
  168. ^ The ICEF (2011, p. 323) alludes to predictions made on 17 February and 10 March.
  169. ^ Kerr 2009; Hall 2011, p. 267; Alexander 2010, p. 330.
  170. ^ Kerr 2009; The Telegraph, 6 April 2009.
  171. ^ The Guardian, 5 April 2010; Kerr 2009.
  172. ^ ICEF 2011, p. 323, and see also p. 335.
  173. ^ Geller et al. 1997, p. 1616; Kagan 1997b, p. 517. See also Kagan 1997b, p. 520, Vidale 1996 and especially Geller 1997, §9.1, "Chaos, SOC, and predictability".
  174. ^ Matthews 1997.
  175. ^ E.g., Sykes, Shaw & Scholz 1999 and Evison 1999.

References

  • The Ad Hoc Working Group on the December 2–3, 1990, Earthquake Prediction [AHWG] (18 October 1990), Evaluation of the December 2–3, 1990, New Madrid Seismic Zone Prediction. Included in App. B of Spence et al. 1993.
  • Aggarwal, Yash P.; Sykes, Lynn R.; Simpson, David W.; Richards, Paul G. (10 February 1975), "Spatial and Temporal Variations in ts/tp and in P Wave Residuals at Blue Mountain Lake, New York: Application to Earthquake Prediction", Journal of Geophysical Research, 80 (5): 718–732, Bibcode:1975JGR....80..718A, doi:10.1029/JB080i005p00718.
  • Allen, Clarence R. (December 1976), "Responsibilities in earthquake prediction", Bulletin of the Seismological Society of America, 66 (6): 2069–2074.
  • Allen, Clarence R. (December 1982), "Earthquake Prediction – 1982 Overview", Bulletin of the Seismological Society of America, 72 (6B): S331–S335.
  • Bernard, P.; LeMouel, J. L. (1996), "On electrotelluric signals", in Lighthill, S. J. (ed.), A critical review of VAN, London: World Scientific, pp. 118–154.
  • Bolt, Bruce A. (1993), Earthquakes and geological discovery, Scientific American Library, ISBN 0-7167-5040-6.
  • Fraser-Smith, A. C.; Bernardi, A.; McGill, P. R.; Ladd, M. E.; Helliwell, R. A.; Villard, Jr., O. G. (1990), "Low-Frequency Magnetic Field Measurements Near the Epicenter of the Ms 7.1 Loma Prieta Earthquake", Geophysical Research Letters, 17 (9): 1465–1468, doi:10.1029/GL017i009p01465.
  • Hamilton, Robert M. (1976), "Earthquake Prediction — Opportunity to Avert Disaster", U.S. Geological Survey, Circular 729: 6–9.
  • Hough, Susan (2010b), Predicting the Unpredictable: The Tumultuous Science of Earthquake Prediction, Princeton University Press, ISBN 978-0-691-13816-9.
  • Jackson, David D. (27 May 1996b), "Earthquake prediction evaluation standards applied to the VAN method", Geophysical Research Letters, 23 (11): 1363–1366, doi:10.1029/96gl01439.
  • Jolliffe, Ian T.; Stephenson, David B., eds. (2003), Forecast Verification: A Practitioner’s Guide in Atmospheric Science (1st ed.), John Wiley & Sons, Ltd., ISBN 0-471-49759-2.
  • Jones, Lucille M. (December 1985), "Foreshocks and time-dependent earthquake hazard assessment in southern California", Bulletin of the Seismological Society of America, 75 (6): 1669–1679.
  • Lomnitz, Cinna; Nava, F. Alejandro (December 1983), "The predictive value of seismic gaps", Bulletin of the Seismological Society of America, 73 (6A): 1815–1824.
  • Lott, Dale F.; Hart, Benjamin L.; Verosub, Kenneth L.; Howell, Mary W. (September 1979), "Is Unusual Animal Behavior Observed Before Earthquakes? Yes and No", Geophysical Research Letters, 6 (9): 685–687, doi:10.1029/GL006i009p00685.
  • Lott, Dale F.; Hart, Benjamin L.; Howell, Mary W. (December 1981), "Retrospective Studies of Unusual Animal Behavior as an Earthquake Predictor", Geophysical Research Letters, 8 (12): 1203–1206, doi:10.1029/GL008i012p01203.
  • McCann, W. R.; Nishenko, S. P.; Sykes, L. R.; Krause, J. (1979), "Seismic gaps and plate tectonics: Seismic potential for major boundaries", Pure and Applied Geophysics, 117 (6): 1082–1147, Bibcode:1979PApGe.117.1082M, doi:10.1007/BF00876211.
  • McEvilly, T.V.; Johnson, L.R. (April 1974), "Stability of P and S velocities from Central California quarry blasts", Bulletin of the Seismological Society of America, 64 (2): 343–353.
  • Mignan, Arnaud (June 2011), "Retrospective on the Accelerating Seismic Release (ASR) hypothesis: controversy and new horizons", Tectonophysics, 505 (1–4): 1–16, doi:10.1016/j.tecto.2011.03.010.
  • Mignan, Arnaud (14 February 2013), "The debate on the prognostic value of earthquake foreshocks: A meta-analysis", Scientific Reports, 4, doi:10.1038/srep04099.
  • Otis, Leon; Kautz, William (1979), "Proceedings of Conference XI: Abnormal Animal Behavior Prior to Earthquakes, II", U.S. Geological Survey, Open-File Report 80-453: 225–226.
  • Rikitake, Tsuneji (1982), Earthquake Forecasting and Warning, Tokyo: Center for Academic Publications.
  • Scholz, Christopher H. (2002), The Mechanics of earthquakes and faulting (2nd ed.), Cambridge Univ. Press, ISBN 0-521-65223-5.
  • Schwartz, David P.; Coppersmith, Kevin J. (10 July 1984), "Fault Behavior and Characteristic Earthquakes: Examples From the Wasatch and San Andreas Fault Zones", Journal of Geophysical Research, 89 (B7): 5681–5698, Bibcode:1984JGR....89.5681S, doi:10.1029/JB089iB07p05681.
  • Varotsos, P.; Alexopoulos, K.; Nomicos, K. (1981), "Seven-hour precursors to earthquakes determined from telluric currents", Praktika of the Academy of Athens, 56: 417–433.
  • Varotsos, P.; Sarlis, N.; Skordas, E. (2011), Natural Time Analysis: The New View of Time; Precursory Seismic Electric Signals, Earthquakes and Other Complex Time Series, Springer Praxis, ISBN 3-642-16448-X.
  • Ward, Peter L. (1978), "Ch. 3: Earthquake prediction", Geophysical predictions (PDF), National Academy of Sciences.
  • Wyss, M. (1996), "Brief summary of some reasons why the VAN hypothesis for predicting earthquakes has to be rejected", in Lighthill, S. J. (ed.), A critical review of VAN, London: World Scientific, pp. 250–266.