We can plot the density of earthquake events, see attached file.
The plots have been done with the R packages 'sp' and 'spatstat'.
Can we export reliable results from such a plot?
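For reference, a minimal sketch of how such a density plot can be produced with spatstat. This is illustrative, not the exact script behind the attachment; R's built-in 'quakes' dataset (Fiji events) stands in for a real catalogue:

```r
# Kernel-smoothed spatial density of epicentres (illustrative sketch).
library(spatstat)
win <- owin(xrange = range(quakes$long), yrange = range(quakes$lat))
pp  <- ppp(quakes$long, quakes$lat, window = win)   # point pattern of epicentres
plot(density(pp, sigma = 1), main = "kernel-smoothed event density")
points(pp, pch = ".", col = "white")                 # overlay the epicentres
```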
Dear Demetris,
If you have considered the frequency of earthquakes in the plot, then the results might be reliable.
Although several attempts at earthquake prediction have been made in the past, we know that scientists cannot predict specific future earthquake events. However, data on past earthquake events definitely show that something is wrong underground, and in this sense they may be an indicator that the same will continue in this area in the future. The size of the events may be of the same order or different.
Therefore, places that had earthquakes in the past will continue to have earthquakes in the future.
Is there anyone who can argue that the Northern and Eastern Mediterranean, the Middle East up to China, both coasts of the Pacific Ocean, and some other scattered places around the globe will not suffer severe earthquakes in the future? They always do, and so people and structures have to be prepared.
Dear Guoliang, it is spatial density, i.e. number of events per unit area, not exactly frequency.
Dear Theodore, my amateur studies show that the main event happens in a less dense area: it seems that geological energy blows off through many small events in one area, so the big event probably occurs in another area where the density is low.
Look at the density plot before the earthquake of Sep 7, 1999 in Athens.
The high density was in the Ionian Sea; the main event came in the Attika region, far away.
Demetris, you are right, and this is supported by the fact that seismologists insist that a chain of small events in an area makes the earthquake phenomenon in that area worsen. I was talking about the long period of time and about where earthquakes will happen.
The Athens data run from 1997 JUL 1 to 1999 AUG 31, i.e. the two years before the main event. What can we say about that time period: is it a long or a short one?
As in laboratory experiments, I suggest that you go back into the past and find the mean time elapsed between two main events in the wider region of the same location. Then you may use your judgement to define the period you need. The longer the period, the better the approximation. A period of two years is very short if the mean time between main events is 20 years. Good luck with your research.
Yes, you are right. The counts of (rare) events follow the Poisson distribution and the times between them the exponential one. I shall check it statistically! Thank you.
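A hedged sketch of that statistical check, assuming 'times' is a sorted POSIXct vector of main-event origin times (the name is hypothetical):

```r
# Test whether interevent times are consistent with an exponential law.
dt <- as.numeric(diff(times), units = "days")   # waiting times in days
# H0: exponential; note the rate is estimated from the same data,
# so the p-value is only approximate.
ks.test(dt, "pexp", rate = 1 / mean(dt))
```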
Dear Demetri,
To begin with, earthquake density plots say one and only one thing: there are few or many earthquakes in a given area, and that is all. Also, the high-rate Ionian seismicity has nothing to do with the Athens earthquake, which was more probably triggered by the August 17 Izmit event in Turkey. There's plenty of literature if you want to know more...

Next, consider that earthquake statistics is a science in its own right and should not be approached without a basic understanding of the underlying physical processes. For instance, you need to consider whether you are dealing with point processes (memory-less) or feedback-feedforward processes (memory-full, in the sense that seismicity triggers future seismicity via short- and long-range stress interactions and the whole system becomes self-organized, sometimes culminating in a large main shock). In the former case you have an exponential distribution of interevent times and random occurrence of large earthquakes. In the latter case you have a q-exponential distribution, because the system is non-extensive.

Moreover, you have to consider the effects of aftershocks. Aftershock sequences are correlated in time, their distribution being some variant of Pareto, with the q-exponential being the most likely contestant. However, are they only seismicity bursts contained within the interaction radius of their parent main event, or do they contribute to the evolution of seismicity via long-range interactions? And how does this affect the expectation values of earthquake occurrences in the long term?
These are some of the hot issues of current research, and from the little technical information I've provided you can probably understand that they are very difficult and mathematically very involved problems. Simple approaches will get you nowhere. If you need information for your research, you can easily find how I can be reached. And for God's sake, where physical problems are concerned, be careful with statistical approaches that are not solidly founded on the physics of the problem - they will also get you nowhere...
Dear Demetris,
maybe not as an indicator for future events, but for sure to get a first approximation of the seismic activity distribution in the region. And with some additional analysis it may help to delineate and characterize area sources, which can form an input for PSHA or DSHA. The density analysis works fine for areas with high seismic activity. For areas with low seismic activity the data might not be representative for this kind of analysis. The bandwidths for e.g. kernel analysis depend on the regional settings and the scale of the analysis. Therefore it is appropriate to conduct some sensitivity studies.
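A sketch of such a bandwidth sensitivity study, reusing the 'pp' point pattern from the earlier density example (the candidate bandwidths are arbitrary, in map units):

```r
# Compare kernel density maps across several bandwidths.
library(spatstat)
par(mfrow = c(2, 2))
for (s in c(0.5, 1, 2, 4)) {
  plot(density(pp, sigma = s), main = paste("sigma =", s))
}
bw.diggle(pp)   # one data-driven bandwidth choice, for comparison
```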
In the northern Tien Shan I was able to show that the seismic activity pattern does not change significantly over a period of 30 years (a suitable record for this period allowed me to generate annual density maps), considering that earthquakes with magnitudes >5 are difficult to include in point density plots while keeping a physical meaning of the hypocentre process.
All in all, statistical techniques are helpful if you have a picture of the tectonophysics of the region and can interpret the statistical results accordingly.
Pure statistical analysis without any background might be misleading.
Dear Jewgenij, thank you for sharing your results with me. The problem is that in Greece we have areas with high seismicity for a time period which then become inactive. Do you mean high seismicity at all times? How do you define the term 'high' here?
Could we define another, more proper index than spatial density? Since, as you wrote, "...events, say, with magnitudes >5 are difficult to include in point density plots keeping a physical meaning of a hypocenter processes.", we should probably treat those events a little differently. For example, we could agree that a main event (say 5 R) has to be treated as a set which contains at least its aftershocks. I think it could be done in any statistical package if we agree on the restrictions defining the spatial zone of the event. In this way we could eliminate the problem of treating small, indifferent events equally and keep tracking only the scale we are interested in.
Is there any specific suggestion?
Thank you.
Dear Andrea,
I have to apologize for my amateur treatment of earthquake events; it is more like a hobby and not my main research field of interest.
We agree about the Anatolia fault and the Athens '99 event. My opinion is also that the density plot can show (predict) only small events and not the main one, the big one, the interesting one.
Since I am not a geologist but rather a data analyst, I am trying to analyse data from NOA with statistical and other methods.
I have also tried the Varotsos suggestion of using the concept of 'self-organized criticality'; see my attached, very old work (in Greek; when I find time I'll translate and 'LaTeX' it). What is your opinion of that method? I conclude that, despite its huge promotion in Greece, it cannot predict an event even when the same conditions hold.
Five years ago I was seriously thinking of working in the 'earthquake field'.
Maybe after finishing my current obligations I will re-study it all more rigorously.
Meanwhile, is there any application here for the fast and accurate methods that I have developed for estimating the inflection point of a curve?
(See my profile for details; if you are an R user, install the package 'inflection'.)
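A minimal sketch of the package in action on a synthetic sigmoid (the curve is illustrative, not seismic data):

```r
# Estimate the inflection point of a logistic curve with known answer x = 5.
library(inflection)
x <- seq(0, 10, by = 0.01)
y <- 5 / (1 + exp(5 - x))     # sigmoid, convex then concave
ede(x, y, index = 0)          # Extremum Distance Estimator; 'chi' should be ~5
```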
Thank you in advance and probably we could collaborate, who knows?
Demetris
According to the work of the Japanese statistician Yosihiko Ogata, which builds on the work of geologists over many years, the rate of earthquakes "now" depends in a very systematic way on the times and magnitudes of past earthquakes. Moreover, one can predict new *big* earthquakes by looking at the discrepancy between the actual rate and the predicted rate. If there are fewer small earthquakes than you would expect, it is more likely that a new big one is coming soon. This phenomenon too has been known to geologists for a long time and is known as seismic quiescence, but it was kind of folklore - not proven from data in any hard way - till Ogata showed it in some beautiful papers of maybe 30 years ago (in particular, in JASA). He has done lots more important work in this field since then, too!
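For reference, the conditional intensity of the temporal ETAS model that came out of this line of work (Ogata 1988) has the form $\lambda(t\mid\mathcal{H}_t)=\mu+\sum_{i:\,t_i<t} K\,e^{\alpha(M_i-M_c)}\,(t-t_i+c)^{-p}$, where $\mu$ is the background rate, $M_c$ the catalogue cutoff and $K,\alpha,c,p$ fitted constants; quiescence shows up as the observed rate falling below this prediction.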
Dear Demetris,
the seismicity in the northern Tien Shan is "high" compared to the seismicity in Central Europe - so far in relative terms. To give you a better impression, the parameter 'a' of the well-known Gutenberg-Richter equation (log10 N(M) = a - bM) is about 8 for zones of "high" activity and about 5 on average. So the total number of earthquakes is quite high: for a seismic catalogue of 30 years there are about 20k events.
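A hedged sketch of estimating 'a' and 'b' with Aki's (1965) maximum-likelihood formula; the magnitudes here are synthetic (a Gutenberg-Richter law with b near 1), and the 0.05 term corrects for 0.1-unit magnitude binning:

```r
# Maximum-likelihood b-value and derived a-value from a magnitude vector.
set.seed(1)
mag <- round(2 + rexp(20000, rate = log(10)), 1)   # stand-in catalogue
Mc  <- 3.0                                          # assumed completeness level
m   <- mag[mag >= Mc]
b   <- log10(exp(1)) / (mean(m) - (Mc - 0.05))      # b-value
a   <- log10(length(m)) + b * Mc                    # from log10 N(>=Mc) = a - b*Mc
c(a = a, b = b)
```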
By the way, this could be interesting for you: "Patterns of seismic activity preceding large earthquakes" http://www.ldeo.columbia.edu/~shaw/publications/ShawCL92.pdf
I must agree with Jewgenij Torizin (and Shaw et al. 1992) that some places are so weak that the seismicity there can be considered a "precursor" for stronger EQs at other places. It depends on the tectonic settings. We observed the same behaviour of tilt (or seismicity) at some places in Central Europe before the strongest EQs (see the attachment - the changes of tilt in Pribram started weeks before the M8.5 EQ at Sumatra). Sometimes the tilt is accompanied by a seismic swarm in Western Bohemia. Many examples are here: https://www.researchgate.net/publication/258997698_Tilts_global_tectonics_and_earthquake_prediction?ev=prf_pub
I attach the data for Greece for the period 1997-JUL-1 to 1999-AUG-31 from NOA, for everyone who wants to see whether we could predict, forecast or whatever the Athens 1999-SEP-7 event of about 6 R from two years of data.
@Jewgenij Torizin, thank you for the recommended paper; I have to read it with care. From a first read I found that cumulative quantities can be used, and indeed I have used them: I have used the cumulative total energy, by the definition of the Richter scale, in ergs. I found that, yes, before a big event we have a steep increase and maybe an inflection point, but it is not very powerful for forecasting. The problem is that the increase comes at high speed, and you realise it only after the shock!
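A sketch of the cumulative-energy curve described above, using the classical Gutenberg-Richter energy relation log10 E[erg] = 11.8 + 1.5*Ms and assuming a catalogue data frame 'eq' with a sorted 'time' column and magnitude 'mag' (the names are hypothetical):

```r
# Cumulative radiated energy versus time.
E <- 10^(11.8 + 1.5 * eq$mag)                       # energy per event, ergs
plot(eq$time, cumsum(E), type = "s",
     xlab = "time", ylab = "cumulative energy (erg)")
```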
@Pavel, if we could define a quantity that shows its inflection point some days before the main event (I refer to the Taiwan Mw=6.2 point in your attached graph), then we could probably 'predict' the occurrence of the event.
@Anne, do you think that using the USGS catalogue could be more accurate? Have you worked with other catalogues in the Mediterranean region? I would appreciate finding an alternative data source.
@Richard, I looked here:
http://www.ism.ac.jp/souran-en/researcher/ogata_y.html
and I realised that he has done an enormous amount of work.
Can you suggest the papers that are most suitable for the current conversation?
Thank you.
You could well start with Ogata, Y. (1988): Statistical models for earthquake occurrences and residual analysis for point processes, Journal of the American Statistical Association, Vol.83, No.401, Applications, 9-27.
Any hint for reading full date-time data in R:
%Y %B %d %H %M %S?
1997 JUL 1 0 49 19.8
I have tried a lot, but 'JUL' breaks as.Date and related functions.
Any idea?
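A sketch of one way to parse such timestamps: %B expects the full month name, while "JUL" needs the abbreviated %b; the C locale guards against non-English month names, and %OS accepts fractional seconds (case-insensitive month matching may vary by platform):

```r
# Parse "1997 JUL 1 0 49 19.8" into a POSIXlt date-time.
Sys.setlocale("LC_TIME", "C")
x <- "1997 JUL 1 0 49 19.8"
strptime(x, format = "%Y %b %d %H %M %OS", tz = "EET")
# as.Date() handles dates only; use strptime()/as.POSIXct() for date-times.
```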
This sounds like - and probably is - a question to which only a statistical answer can be given. However, from my perspective (as someone interested in the physics of non-seismic pre-earthquake signals), statistical answers appear to be woefully useless when it comes to "really" getting a feeling for when a major earthquake is likely to occur. Saying that a magnitude 6 or higher has a 70% probability of striking within 30 yrs is of interest mostly to the civil engineers who write building codes. All others would dearly like to know more. There is an ongoing discussion on the Nature website:
http://www.nature.com/news/earthquake-lights-linked-to-rift-zones-1.14455
where it is discussed how we may go beyond seismology to address the "dream" of an actionable earthquake forecasting capability that could issue warnings on the time scale of a few weeks to days ahead of the event.
A very simple approximation is just to compute the cumulative magnitude and plot it against time. For Greece see the attached file; time period = 1997 JAN 1 to 1999 DEC 31.
We observe three events of 5.4:
1605 1997 NOV 14 21 38 52.7 38.80 25.87 25 5.4
4768 1998 SEP 30 23 42 59.3 41.93 20.57 32 5.4
6899 1999 SEP 7 11 56 50.5 38.15 23.62 30 5.4
The third one is the main event of Athens, 1999 SEP 7.
We observe that the overall seismicity curve has something close to an 'inflection' point at the first of the three events mentioned above.
Not by visual inspection but with the R package 'inflection' alone (implemented in Fortran for faster results), we find that the inflection point is at time:
2523
"1997-12-13 10:42:27 EET"
which agrees with the plot.
But we would want an inflection point closer to the main event. Any idea how to achieve such proximity?
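A hedged sketch of the workflow just described, assuming a data frame 'eq' with a POSIXct 'time' column and magnitude 'mag' (the names are hypothetical):

```r
# Locate the inflection point of the cumulative magnitude curve.
library(inflection)
eq <- eq[order(eq$time), ]
x  <- as.numeric(eq$time)              # ede() needs a numeric abscissa
y  <- cumsum(eq$mag)                   # cumulative magnitude curve
ip <- ede(x, y, index = 0)             # returns indices j1, j2 and estimate chi
as.POSIXct(ip[1, 3], origin = "1970-01-01", tz = "EET")  # chi back to a date
```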
I am afraid that this question cannot receive an unequivocal reply. Undoubtedly, in general, the density of earthquake events may be used as an indicator for future events (taking into account the well-known long duration of geodynamic activity). At the same time, a dangerous geodynamic event may appear in areas with a low density of earthquake events. For this purpose different statistical methodologies are used. Statistical and informational methodologies are quite widely applied in geophysics. However, I believe that we should work very carefully with statistics in earthquake prognosis. If we speak, for instance, about an 80% probability of revealing an economic hydrocarbon deposit, it means that there is a 20% probability that the oil company will lose its expenditure (a sorrowful fact, but not a fatal one). At the same time, an 80% probability in the case of earthquake prediction may cause (in the case of a mistake) the death of many people.
Some ways of increasing the value of information obtained in geological-geophysical analysis are presented in:
Eppelbaum, L., Eppelbaum, V. and Ben-Avraham, Z., 2003. Formalization and estimation of integrated geological investigations: Informational Approach. Geoinformatics, 14, No.3, 233-240.
Dear Demetris,
there are two independent parameters:
1) stress
2) movement.
Your cumulative magnitude (Benioff's graph) is proportional to the movement (or cumulative deformation) at the specific fault. This is not the same as the stress. During a period of high deformation rate close to the focal area, the stress field itself is decreasing. The maximum of stress occurs right before the mainshock. Therefore it is not possible "to predict" the mainshock from the deformation rate, because the deformation rate is the consequence of the mainshock.
On my graph there is the tilt, which is proportional to the first derivative of the main component (complex amplitude) of the stress tensor. There I can observe the increase of stress after the Taiwan EQ (which was a trigger of the process); the maximum of stress was observed right before the mainshock, the Sumatra EQ M8.5.
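A sketch of the deformation-side curve Pavel contrasts with stress: the Benioff-type graph built from released energy. It assumes the same hypothetical 'eq' data frame as before (sorted 'time', magnitude 'mag'), with the standard energy relation log10 E[J] = 1.5*M + 4.8:

```r
# Cumulative Benioff strain (square root of radiated energy) versus time.
E <- 10^(1.5 * eq$mag + 4.8)            # radiated energy in joules
plot(eq$time, cumsum(sqrt(E)), type = "s",
     xlab = "time", ylab = "cumulative Benioff strain")
```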
Dear Pavel, I am referring to:
"Measurement of tilt by vertical static pendulums and prediction of earthquakes"
and to your institution:
http://irsm.cas.cz/index_en.php?page=projekty_seznam_archiv_en
Are the data from your above-mentioned stress measurements available online?
Or how else could I obtain such data?
Thank you.
Dear Demetri, with respect to the plot you show with the "inflection point" etc., please take care not to misinterpret it! The inflection point is actually the time of the 18/11/1997 Strofades event, with ML=6.1 and MS approx. 6.5; the subsequent Pareto-like variation of the cumulative earthquake count (in seismology it is Omori's law) is its aftershock sequence. I am afraid there is no conclusion to be drawn from the data of your attachment with respect to the Athens earthquake, based on the aftershocks of the Strofades event alone! Moreover, you have to be careful not to use inconsistent data. Two of the reasons for the "inconsistencies" of the raw NOA catalogue are: (1) changes in the way magnitudes were reported before, during and after the transition from analogue telemetry to hybrid telemetry and then to fully digital acquisition, telemetry and processing, which took place between 1995 and 2005, and (2) the progressive lowering of the network's detection threshold, which allowed progressively smaller earthquakes to be included in the catalogue, thus changing the sample. So, if you do not apply some form of reduction procedure to homogenize the catalogue, if you do not consider the earthquake sample subject to the completeness threshold applicable to a given area and time interval, and if you do not decide what physical model governs aftershock sequences and their consequences (so as to decluster the catalogue if necessary), any statistical results will be deprecated ab initio.
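A hedged sketch of one quick completeness check related to Andreas's point (2): the maximum-curvature estimate of the completeness magnitude Mc, assuming a magnitude vector 'mag' binned at 0.1 units (the name is hypothetical):

```r
# Crude Mc estimate: the modal magnitude bin of the frequency histogram.
h  <- hist(mag, breaks = seq(min(mag), max(mag) + 0.1, by = 0.1), plot = FALSE)
Mc <- h$mids[which.max(h$counts)]   # bin with most events ~ completeness level
Mc                                  # often corrected upward (e.g. +0.2) in practice
```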
Dear Andreas, I shall study your post in more detail tomorrow morning, but I have to post now that the iterative use of the ESE and EDE methods gave me the answer:
[1] "1997-11-20 21:13:00 EET"
So, without any prior knowledge of the seismicity records (the Strofades event), the two methods recognized the proper inflection point. I'll return when I'm 'more refreshed'.
Dear Demetris, my project is not funded by the Czech Grant Agency and therefore it is not mentioned on that site. The recent data are available here: http://www.dynamicgravity.org/mereni/
If we take data from SHARE:
http://www.share-eu.org/
then we can work with the region of Greece from 1000-2006 while avoiding NaN data.
The density plot and the cumulative magnitude plot are presented in the attached file.
Computing the inflection point, we obtain the time:
[1] "1982-03-11 06:13:50 GMT"
Let's constrain our time to 1900-2006 and do the same job.
See the attached file.
Again we obtain (iteratively) the same inflection point:
[1] "1982-03-11 06:13:50 GMT"
which corresponds to the specific space-time point:
Lon Lat Year Mo Da Ho Mi Se Mw H event_id LatUnc LonUnc HUnc MwUnc McModel1 CSZ_ShortName Main McModel2 IDAS
17904 20.05 40.08 1982 3 11 6 13 50,5 4.1 1 ### 10 10 NaN 0.2 0 EADI 0 0 GRAS369
or, for the interesting info:
Lon Lat Year Mo Da Ho Mi Se Mw H event_id
20.05 40.08 1982 3 11 6 13 50,5 4.1 1
We thus observe that the iterative use of the ESE & EDE methods is not sensitive to the time range of the data.
Pavel, thank you for sharing with me your data, I really appreciate it.
If we keep only the main events (SHARE data), then for Greece and the time period 1900-2006 we have the attached output plots.
We see that now the density is more representative, since it covers most of the high-seismicity area of Greece.
The interesting point is that again an inflection point occurs at ~1982-1983:
[1] "1983-02-05 14:07:28 GMT"
Lon Lat Year Mo Da Ho Mi Se Mw H event_id LatUnc LonUnc HUnc MwUnc McModel1 CSZ_ShortName Main McModel2 IDAS
18424 23.27 35.25 1983 2 5 14 7 28,6 5 57 ### 10 10 NaN 0.2 1 AEGE 1 1 GRAS400
So a critical change occurred in 1982-1983, which our specialists can perhaps clarify for us.
Dear Andreas, I think you are right, and I add that in the past there were also other problems with the NOA data (different formats, missing data, etc.).
What is your opinion about SHARE data?
I have used their files and concluded that in Greece a critical date falls at the end of 1982 - early 1983.
Does this agree with our overall knowledge for Greek seismicity?
Dear Pavel, I downloaded your suggested data from station Příbram P7, 24-12-2013 to 23-01-2014, and I plotted the North and East components and their variance (see attachment).
What can we say about:
1) the JAN 15 peak in varNorth?
2) the JAN 7 discontinuity in East?
If I have understood correctly:
when we observe a peak in the variances, should we expect a big event?
And what about the coverage of the Greek region by your stations?
Are there any special data for us?
Thank you.
Dear Demetris, all of the "anomalies" in this case are connected with the reconstruction of the pendulum on Jan 7, 2014. Wait a moment, until the pendulum stabilises its working range. The correct anomalies look like the ones in the attachment.
The explanation of the "stress waves" model and earthquake prediction method is here:
https://www.researchgate.net/publication/259390335_Pribram2013_predikce?ev=prf_pub
On slide 5 there is the non-linear model of the focal area close to failure. If we are able to recognise the phase between points b and c, then we are able to estimate when the phase beyond point d (failure) will start. The period between points b and d is proportional to the magnitude. This mechanism can be called "generation of stress waves". The prediction based on "stress waves" is described in the papers:
https://www.researchgate.net/publication/259291056_INDIRECT_STRESS_MEASUREMENT_BY_STATIC_VERTICAL_PENDULUM?ev=prf_pub
https://www.researchgate.net/publication/259295469_Measurement_of_tilt_by_vertical_static_pendulums_and_prediction_of_earthquakes?ev=prf_pub
and in the book
https://www.researchgate.net/publication/258997698_Tilts_global_tectonics_and_earthquake_prediction?ev=prf_pub
The noise (variations) is proportional to the microseisms and/or to the deformation velocity (second picture).
Pavel, this is intriguing, but I can't wrap my mind around it. Could you provide a brief tutorial (for someone as simple-minded as I am)?
Dear Friedemann, the pendulum measures tilt, which is proportional to the horizontal component of deformation, which in turn is proportional to the first derivative of the appropriate component of the stress tensor. At the local minima and maxima of tilt, the stress is maximal or minimal (depending on the local stress field orientation). We want to detect the "stress waves" which are generated in the focal area before the mainshock. These "stress waves" have an S-form and their periods are proportional to the magnitude of the mainshock. Then we must only localise the focal area where these "stress waves" originated. This is, in our case, the most difficult task.
We can cooperate with other methods which are able to localise such focal areas, like IR satellite measurement, radon measurement, EM measurement and others.
Much more info is in papers:
https://www.researchgate.net/publication/259390335_Pribram2013_predikce?ev=prf_pub
https://www.researchgate.net/publication/259295512_Deformation_waves_in_the_Earth_tectonosphere_and_seismicity_%28by_European_tiltmetric_network_and_ukrainian_extenzometers_data%29?ev=prf_pub
https://www.researchgate.net/publication/258959868_The_Multi-parameter_observation_of_pre-earthquake_signals_and_theirpotential_for_short_term_earthquake_forecasting?ev=prf_pub
https://www.researchgate.net/publication/259295469_Measurement_of_tilt_by_vertical_static_pendulums_and_prediction_of_earthquakes?ev=prf_pub
or in the book:
https://www.researchgate.net/publication/258997698_Tilts_global_tectonics_and_earthquake_prediction?ev=prf_pub
Thanks, Pavel. I'm currently too short on time to dive more deeply into it, but a question popped into my mind reading your last message. Do you know the so-called "VolksMeter" (a terrible name), which is pendulum-based and sensitive to tilt and seismic signals from DC to about 1 kHz? It's moderately priced (less than $2000) and amazingly versatile for its price. It has been developed by Randall Peters, Physics Dept at Mercer University in Georgia, USA. Maybe it could be of interest to you (if the sensitivity is good enough).
Friedemann
Dear Demetri,
The SHARE catalogue for Greece after approx. 1974 is in fact the International Seismological Centre's catalogue with magnitudes referred to Ms-Kiruna (why is another story). The ISC catalogue is homogeneous by construction and has reliable epicentral/hypocentral determinations. However, it is incomplete, with the magnitude of completeness changing from mb approx. 4.3 in 1974 to mb approx. 3 recently, due to improvements in the detection threshold of the seismological networks. And this is for the entire catalogue, because the magnitude of completeness changes VERY significantly from place to place. Thus the SHARE sample, or whatever you like to call it, is still inconsistent, and due diligence is required or the results will lack statistical significance!
With respect to the 1982-83 period, and as far as the whole of Greece and adjacent areas are concerned, nothing extraordinary happened at that time in terms of changes in the background rate of earthquake occurrence. Granted, there were two large earthquakes along the north and western boundaries of the Aegean plate in 1983, but this is nothing extraordinary with respect to the intermediate-term evolution of seismicity in Greece: there was no critical time in either a physicist's or a layman's terms. As I said before, due diligence is needed when treating seismological data - they can very easily be misleading!
Sorry to join the discussion late ...
I suppose the question about using the ''density of earthquake events as an indicator for future events'' has a statistical answer, and it first goes through the correct identification of a statistically significant divergence of the real density from a random one. We have worked on a stochastic model of earthquake occurrence under randomness, to which real data could be compared.
Dear Demetri, I am sorry to admit that I am not acquainted with the term ''cumulative magnitude''. What does it mean?
Dear Dragomir, since the magnitude is a measure of the energy released at each event, we can cumulatively add all magnitudes over a long time period. If you do this you will notice some sigmoid curves, which denote seismic cycles. The corresponding inflection points indicate the significant change in released energy that is closest to a big event. The problem is to find other proper quantities whose inflection appears at least one week before the main event.
For example, yesterday's event of 5.8 at Kefallonia Island had a kind of 'indicator' from JAN 11, as you can see at the inflection point of the cumulative magnitude from DEC 29 2013 to JAN 27 2014; see attachment.
Dear Andreas, in experimental scientific fields we are used to working with erroneous data measurements, regardless of the origin of the error in each case. The only thing that really matters to us is the requirement that the error terms satisfy $\epsilon_{i}\sim iid(0,\sigma^2)$, i.e. that they are zero-mean. If we had to work with 100% accurate data, then we could not produce anything. Here is the beauty of statistical methods: take an orthogonal projection, and the OLS estimator of $\beta$ is BLUE. So, whatever catalogue is in use, my opinion is that each time we have to work with one single catalogue and not mix data from different seismic catalogues.
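For completeness, the Gauss-Markov statement invoked above: for $y = X\beta + \epsilon$ with $E[\epsilon]=0$ and $Var(\epsilon)=\sigma^{2}I$, the OLS estimator $\hat{\beta}=(X^{\top}X)^{-1}X^{\top}y$ is the Best Linear Unbiased Estimator; no assumption of error-free measurements is needed, only zero-mean, homoscedastic, uncorrelated errors.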
For Kefallonia Island 1990-2014, before the main event of 5.8, we observe an inflection point in 2007; see attachment. Note the rather 'flat' region after 2012.
For the same island we investigate the 'flat' region 2012-2014 and find another inflection point at 2013 Apr. So a bigger event could be expected, since we have:
1) a main step in 2007,
2) a small step in 2013 Apr,
3) a rather 'silent' situation after that.
See attachment.
Dear Pavel, is there any connection between the Jan 15 peak and the Jan 26 event at Kefallonia Island? Or are those measurements correlated only with magnitudes > 7.5?
The question of temporal variations in cumulative seismicity curves has been investigated in depth by Ted Habermann, Colorado Univ. His and his students' works have shown some success, but no definitive universal prediction system. As for seismicity maps indicating where future events will occur, Alan Kafka, Boston College, has studied this problem, finding for example that in the central US about 86% of future events were within around 35 km of previous events.
Dear Demetris, in my opinion the Kefalonia Island EQ (a relatively small event) is only a consequence of global processes which occurred east of Greece, mainly at the contact between the Eurasian and Pacific lithospheric plates. Moreover, such an event was triggered by an atmospheric low (depression) above the Atlantic. See the high deformation noise at the pendulum in the Ida mine starting on Jan 25: http://www.dynamicgravity.org/mereni/chart.php?ofc=09_graf30-30.txt
The same is visible at Beregovo (Ukraine): http://www.dynamicgravity.org/mereni/chart.php?ofc=15_graf30-30.txt
Dear Pavel, I want to say publicly that I really like your work on seismic prediction, for two reasons:
1) because it provides new theoretical and experimental aspects;
2) because you are not so arrogant as to insist that you can predict every event.
I have been exhausted by so many egotistic scientists who argue that they have predicted all events, so when I saw in your presentations "... we didn't predict this event ..." it was a positive shock for me.
Keep working in the combined way you do and you will have more and more success in developing efficient earthquake prediction methods.
Of course I shall keep watching you! :)
Dear Demetris, many thanks for your nice words.
Now, after more than 30 years of my research, I know that earthquake prediction is a real scientific task and not a chimera. We must only understand the behaviour of the non-linear physics which lies in the background of the nucleation process, and we must measure parameters which are connected with the stress and not with the seismicity.
Dear Mr Kalenda, I am really working to analyze this, but as it goes, nothing is found until we find either end of the rabbit hole.
I would just like to add that the velocity and slip pattern of a fault are discontinuous by nature. So if a scenario comes where we need to analyze nucleation initiation, we will get significant changes in both of these parameters for a fault.
Are earthquakes predictable? Starting with the fundamental question and finding an answer through nucleation will involve two things: nucleation arrest and nucleation initiation. We need to define how the earthquake genesis is going to proceed for a fault patch.
What is an earthquake's behavioral pattern?
Earthquakes occur when the stress on the fault overcomes the frictional resistance: rupture nucleates with a directivity, the fault slip propagating with a certain velocity over the fault patch until it ruptures the fault length. When the stress wave reaches a barrier it diverts direction and continues in an offset, with a profound increase of stress close to the asperity, where future large earthquakes are bound to occur in large numbers.
Why study nucleation, then?
The study of nucleation is essential because of the non-causal behavior of stress variability. To understand the structure of the earth's crust, the mechanical nucleation models prevalent in geo-analysis, and the initiation of seismogenesis, we need to analyze slip velocity locally, as there are gradients. Delayed dynamic triggering is difficult to analyze, as there are small ruptures and larger jumps around fluid barriers. A sudden jump in slip velocity is imposed as the rupture length also changes instantaneously and uniformly around the depth of the rock.
The basic analysis of nucleation for a fault patch will depend on how it is arrested, not on how it is initiated.
My inference so far, which I hope a few may add to:
This proves that earthquake forecasting is possible when we talk of the initiation process. Earthquakes do not nucleate just from the initiation of a breakage due to excessive stress and strain behavior, or just from the weakness of a point in the fault.
When we talk of the genesis of nucleation, there is a nucleation arrest for the fault patch.
This supports the opening statement by Prof. Demetris Christopoulos that the density of earthquake events for a source will be an effective model for analyzing the process of nucleation.
How?
I will just explain with an analogy: if there are points of conflict in a society, the denser the source of conflict, the more events will occur in the conflict zone.
It may sound like common sense, but what I will add is that the conflict will be a more violent one if it is arrested to that place and not dissipated.
Or, in other terms, say:
1. X conflict events are initiated with L probable points; the L probable points dissipate over time but the X conflict stays the same.
2. X conflict is initiated with L probable points, where E(L) adding to the conflict increases over time (E being the function of an expectation parameter).
3. X conflict decreases over time but E(L) increases over time, which means dissipation in nucleation, or nucleation arrest.
4. E(X) increasing over time is independent of the nucleation points.
For me, condition 3 will be the most dangerous, as it corresponds to nucleation arrest for a certain condition.
I am waiting for your analysis; this is the hypothesis of my current work.
Please suggest or add points.
Dear Pushan, your idea (hypothesis) is very close to my point of view:
1) The rock mass is a heterogeneous environment. Therefore there exist parts of it which are close to their limits. This can be measured as conflict density. These parts (from a macroscopic point of view) generate the stress waves, because they are destroyed as the stress increases and their stress-toughness spreads to their surroundings. The denser and larger the area, the higher and larger the stress waves it generates.
2) It is correct that conflict X decreases over time, because of the rheological parameters of the rock mass (creep, slow-slip events, microseisms, ...). E(L) increases over time due to ratcheting, which can accumulate the elastic energy (a one-way function only).
All rock mass is heterogeneous - see the figure (an example of thermal conductivity).
Then the reaction of the environment to external forces can differ - here is the example of strain. There are upper and lower limits of the linear parts of strain. We found (in this case) that 0.3% of the material exceeded the strength limit and will be destroyed (microscopically). On the other hand, the lower limits will be exceeded in some cases and cracks will open.
I understood the X parameter in this sense, as the density of exceedance of the limits (both).
All depends on how the seismicity is determined.
The structure of the seismic field in the vicinity of the future earthquake source is very complex. In this area, regions of seismic activity and regions of lulls are formed.
I should mention that the distribution of the energies of the earthquakes also changes.
Yes, I have employed a density plot wherein I don't simply count the distribution of hypocenters but weight them - in short, examining the 3D function of seismic energy release rather than simply hypocenter location. Especially given the likelihood that the smaller events are the ones prone to mis-picking, poor sensor azimuthal distribution and therefore mislocation, these events should be less important in the assessment of seismic activity and their contribution should be downweighted, unless their hypocenters, phase picks and residuals have been carefully reviewed individually. So overall, perhaps a better proxy would be to use the location probability functions as the spatial measure of earthquake activity. This requires using something other than the standard linearized location algorithms (the Hypo71, Hypoinverse, Hypoellipse sort): a probabilistic locator such as NonLinLoc.
Dear C. Rowe, is there any example of using "the location probability functions as the spatial measure of earthquake activity" in R or another package? Or any relevant paper that is simply written? Thank you.
You can certainly google NonLinLoc and find not only papers about it (Anthony Lomax is the author) but also the website where you can download the code and see a detailed explanation of how to use it and how it works. The output includes the probability cloud for each earthquake. It should be possible to sum these oneself to generate the cumulative probability distribution: just write yourself a 3D gridding program and read and sum the results for each event. I have done this for the simpler "hypocenter location and size of event" information, but it seems a more robust approach would be to locate using NonLinLoc and then sum the probability densities instead.
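A minimal sketch of the summation idea in R, assuming each event's NonLinLoc probability cloud has already been exported to a CSV of (x, y, z, pdf) voxels on a common grid (the export step, directory and file layout are hypothetical):

```r
# Sum per-event location probability clouds into one cumulative 3D grid.
files <- list.files("nll_out", pattern = "\\.csv$", full.names = TRUE)
base  <- read.csv(files[1])                         # columns x, y, z, pdf
total <- base$pdf
for (f in files[-1]) total <- total + read.csv(f)$pdf
cumprob <- data.frame(base[, c("x", "y", "z")], pdf = total)
# 'cumprob$pdf' is the cumulative location probability per voxel.
```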
Unfortunately it is written only for Linux and MacOS, not for Windows. Anyway, I shall check the paper and I'll see if it is reproducible. Thanks, C.
You could also run a virtual Linux machine on your Windows one; then you would have a lot more flexibility to use such codes.
Yes and no. There are lots of examples where dense earthquake swarms predicted nothing.
In some cases the spatial pattern of a swarm (fast changes of pattern) means something,
like a hint towards a forecast. I would say that knowledge of seismicity alone is not enough for prediction; we need other, independent information.
There is a broad misconception about seismic activity, I dare say (as a non-seismologist). That is: any precursory signal, be it acoustic (due, for instance, to some lower-magnitude earthquakes) or electric/electromagnetic etc., is an indicator of increasing levels of deviatoric STRESSES. Nothing more and nothing less. However, Mother Earth knows more than one way to deal with stresses, even high ones. Yes, the rocks may rupture catastrophically, creating a major or even monster earthquake, or the rocks may slide over time intervals of hours, days, weeks, creating what has become known as "silent earthquakes". I fully agree with V. Zhuraviev that one single type of precursory signal, for instance seismic signals, is not enough. Better also to look at non-seismic indicators of stress build-up.
Density of earthquakes in a localized area may be a good indicator of the release of stress, and one may not expect a big event. But if the density of earthquakes has a large spatial distribution, it may be an indicator of future earthquakes. The earth is a complex system of systems; its behavior and characteristics cannot easily be commented on. Anything is possible; we have all kinds of evidence.
I support R. Singh's viewpoint that the complexity of the phenomenon does not allow us any simple solution.
As far as I know, the density of earthquake events (over a long period) can give us the probability of hazard in an area, and from this we should build stronger buildings.
If we had a 10,000 year record of earthquakes, then we could use the record to estimate the risk of future events. With the short instrumental record of earthquakes (less than 100 years in most places), the sample is too small to give adequate detail for local earthquake risk.
Many fault zones that we know have had large earthquakes in the past (including parts of the San Andreas Fault) have had very few smaller earthquakes in the last century. The 2008 M7.9 Wenchuan earthquake in Sichuan, China struck an area with only moderate seismic activity in the last 1000 years, so it was a surprise to many seismologists. The last M8 event there may have been more than 2000 years ago.
The 2011 M9.0 Tohoku-oki earthquake in Japan struck a part of the subduction zone that was assumed by many to have only M7.5-M8 earthquakes from the recorded earthquake record. Only the paleoseismic research in the area revealed that there had been a previous M9 earthquake about 1100 years ago with a similar giant tsunami.
At a regional scale (100-500 km or more), the density of earthquakes in the last century is a better indication of the areas of the Earth where there is ongoing deformation that will have future earthquakes, but there are still areas where there are large earthquakes with long intervals between them.
If only we knew the frequency of past earthquakes accurately enough, would it then be possible "to estimate the risk of future events" in any practical sense? Estimate, yes, on time scales of 30 years or 300 years or 3000 years, but nothing better. Though I am not steeped in statistical mathematics, I believe there is a widespread misconception about past and future earthquakes. Even if there is a catalog containing tens of thousands of well-characterized past earthquakes, such a catalog is of limited usefulness when it comes to "knowing" when the next one will hit. The timing of any single future event is subject to the law of the statistics of small numbers: it will always be burdened by large error bars, regardless of how well-constrained and well-behaved the past data have been.
That is why it is simply wrong to use the seismological approach when it comes to earthquake forewarning. By the time rocks are stressed to the point where they fail catastrophically, it's too late. Why not work on understanding the non-seismic physical processes that take place in rocks that are subjected to ever-increasing tectonic stress during the lead-up to a major earthquake? From a physics perspective it is just inconceivable that there wouldn't be any manifestation of the pile-up of stresses which, in the case of a magnitude 9 event, releases the energy equivalent of the simultaneous explosion of over 2,000,000 Hiroshima-class A-bombs. The Japanese seismologists had no inkling that the M9 Tohoku quake of 3/11/11 was coming, nor did the Chinese seismologists "see" the approach of the M7.9 Sichuan event of May 12, 2008. The reason is not a lack of a catalog of past events. The reason is that the seismological approach - in essence, collecting information on past events and waiting for foreshocks heralding future events - is woefully inadequate.
Earthquake cluster analysis using point density functions and allied spatial statistics can be an effective tool to locate zones of strain accumulation and release. Please go through our papers in this regard for more insight. But it is not the only tool for such analysis.
Basab Mukhopadhyay, M. Fnais, Manoj Mukhopadhyay and Sujit Dasgupta (2010) 'Seismic cluster analysis for the Burmese-Andaman and West Sunda Arc: insight into subduction kinematics and seismic potentiality', Geomatics, Natural Hazards and Risk, Vol. 1, No. 4, pp. 283-314, DOI: 10.1080/19475705.2010.49401
Basab Mukhopadhyay, Anshuman Acharyya and Sujit Dasgupta (2011) 'Potential source zones for Himalayan earthquakes: constraints from spatial-temporal clusters', Natural Hazards, 57, pp. 369-383, DOI: 10.1007/s11069-010-9618-2
Basab Mukhopadhyay, Manoj Mukhopadhyay and Sujit Dasgupta (2011) 'Seismic clusters and their characteristics at the Arabian Sea Triple Junction: supportive evidences for plate margin deformations', Journal of the Geological Society of India, Vol. 78, August 2011, pp. 131-146
Dear Eric Fielding and Friedemann Freund, I have been waiting for such a quality answer for many months. I really thank you.
Dear Basab Mukhopadhyay, I will read your articles when I find time. Thank you.
As time passes I tend to accept that statistics cannot help us predict the next big event, but I cannot throw it away...
Recently I have been working with cumulative quantities; see my work:
https://www.researchgate.net/publication/259914812_Inflection_point_analysis_for_Greek_seismicity_1900-2006?ev=prf_pub
and the last figures of this:
https://www.researchgate.net/publication/261849367_Urgent_hypothesis_on_plane_MH370_disappearance_v2?ev=prf_pub
Do you mean larger-magnitude future events? You can only reliably estimate the return periods of similar-magnitude events by using an earthquake catalog. A simple rule for engineering, and possibly for science: extrapolation is poor practice. I recommend studying the neotectonics of the region.
nope...
I have plotted both the number of events and the accumulated seismic moment against time and space, for 10 subduction zones, from the Global CMT database.
Result: nothing.
No real correlation.
Some earthquakes show foreshocks, some do not. And you can also observe an accumulation of many small earthquakes without a mainshock.
So, NO.
All the rest is only theory.
Erik, your opinion is interesting, since Chile is one of the most seismic areas in the world.
Have you also done a local study of the seismicity of your country?
A question could be about the percentage of strong events that could be predicted by inspecting the density of earthquake events. Has anybody seen such a study, paper or similar?
After spending 20 years on the study of precursory signals emanating from the Himalayas (1983-2003), we could hardly make any progress in the prediction of earthquakes. We also tried to fit global data into empirical formulations, but failed. During this period, technology has progressed by leaps and bounds, probing deeper space beyond our solar system and probing matter at the nano scale. I wonder whether our failure to predict earthquakes is due to a lack of appropriate earth science, or a failure to harness proper technology?
After many years of contributions to and work with probabilistic seismic hazard assessments, I am convinced that even though it is very likely the best tool available, it does not work. The basic premise is that seismicity is stationary and that with sufficient time one will obtain a good estimate of the likelihood of an earthquake occurring. This is not borne out by long records of earthquakes. On all time scales and all spatial scales, seismic activity can shift unpredictably. A classic example is the Parkfield, CA, experiment. Active periods on the order of the length of the existing record can be misleading. Earthquakes in general are a non-linear process. The evidence for the non-linearity is the scale invariance of the Gutenberg-Richter recurrence relation, where Aki has interpreted the "b" value as directly related to fractal dimension. It is ironic that the "b" value is used extensively in statistical analyses when its existence is symptomatic of a system that cannot be predicted. The solution probably lies in using non-linear statistical methods, such as those used in flood projections.
Leland Long: I agree with your comments. Non-linearity, chaos, stochastic variables, fractal analysis and probabilistic approaches are all available for use in EQ prediction. I do not know about flood prediction, but weather prediction has achieved great success after the use of remote sensing satellites. Nothing of that sort has happened in EQ prediction.
I have worked (without publishing) with the so-called "self-organized criticality" in predicting earthquakes, based on C. Varotsos's works for Greek territories. Unfortunately, once again, we do not know (provided that our check constraints are satisfied) whether the earthquake will take place in the area under examination. So it is still an open question, and a hard one.
Anyone who thinks they have a way to predict earthquakes (or the stock market, poker hands, soccer team victories, etc.) needs to read "The Signal and the Noise" by Nate Silver - written for the non-scientist, but with references to technical papers. Another helpful book is "Why Stock Markets Crash" by Didier Sornette. Didier worked extensively with earthquakes but discovered more money and success in the markets. The reason, I believe, is that the SOC component of earthquakes is quite variable and most likely a function of depth of focus, whereas the SOC component of the stock market (greed) is more stable. The mechanical properties leading up to the triggering of an earthquake are highly variable and we do not yet have all the tools to evaluate them. That is likely why the "b" value for a few events can be highly variable; it is dependent on the physical conditions associated with a specific earthquake sequence, and why a "b" value representing a global average based on many events is of marginal use in assessing the likelihood of a large event.
A wonderful reply by Leland Long. Let us wait and watch. But thanks, Demetris, for reminding me about C. Varotsos. I have read his paper on "self-organized criticality" in predicting earthquakes. In fact, I visited Thessaloniki and Athens twice to deliver lectures.
Probably I should publish my 'negative result' for the SOC & Varotsos study in Greece later.
I will look forward to seeing it. My observations are that SOC is most pronounced for shallow earthquakes and some induced earthquakes (as by reservoir or injection). SOC may have a minimal role in earthquakes below 15 km.
In our comparisons of earthquake sequences to stock markets and other SOC systems, let's not forget that there are plenty of people who make a lot of money in the stock market by calculating probability gains over various time scales. One question: is the (presumably better) success rate in stock (and other) markets, as compared to earthquake predictability science, a product of the system's intrinsic predictability, or is it cultural? It is a lot easier to make a living predicting stock prices than predicting earthquake probabilities - at least as it stands today.
I enjoyed your answer: "It is a lot easier to make a living predicting stock prices than predicting earthquake probabilities - at least as it stands today."
In my view, earthquake prediction has defied all attempts to formulate a comprehensive scheme for prediction. Despite all the tall claims, there is hardly any progress in predicting earthquakes. In this race we move forward a step but never reach the finishing line.
Thus, dear Hardev, we are approaching the goal of earthquake prediction only asymptotically?
I like your suggestion; does it mean we can hope to reach our goal of prediction after infinite time?
There is something fundamentally wrong, or at least questionable, about the seismological approach to "predicting" earthquakes.
(i) If you try to use past seismic events to learn something about an anticipated future event and "predict" when and where it will occur and how powerful it might be, it does not matter how large your library of well-characterized past events is. It's a question of the statistical probability of a single future event. Look up the law of the statistics of small numbers: any single future event is subject to huge uncertainties.
(ii) Seismologists have traditionally relied on "listening" when a rupture occurs. They have tried to build an enormously wide knowledge base around where, when and how past events have occurred. They try to use this information to divine the next event. However, when we look at earthquakes and how large a rupture they have to produce to become even detectable - mag 1, 2, 3 - we are in for a surprise. To create a mag 1 event, the event deep in the Earth's crust has to create a rupture about 1000 square meters large. Mag 2 events are those that produce ruptures on the order of 10,000 m2, mag 3 events 100,000 m2, and so on. Have you ever wondered how much accumulated stress is needed to create a 1000 square meter rupture plane in rocks deep below, just to generate an almost imperceptible mag 1 event?
If we want to learn something about coming earthquakes, we need to focus NOT on the seismic signals, which are produced only once the stresses have already reached a level high enough to produce an "acoustic" signal detectable at the Earth's surface. We have to focus on the wide parameter space that seismologists do not know how to access, namely the parameter space of all processes in rocks that take place DURING the build-up of stress but before catastrophic rupture.
This can be done and is being done, albeit outside the discipline of seismology. It takes a different approach and a different knowledge set, focusing on electrical and electromagnetic processes taking place during stress build-up. It takes an openness of mind and a readiness to learn which are, unfortunately, not to the liking of many established seismologists.
"Prediction" in the sense of accurately saying when, where and how powerful will never be possible - simple laws of physics - but we are pretty far ahead in understanding the signals generated during the progressive build-up of stresses deep below. We can monitor those processes and derive information about increasing seismic risk. We can do this in near-real time over a period of weeks and days BEFORE the point is reached where the rocks deep below rupture catastrophically.