Geophysics in Geothermal Exploration

6. The use of passive seismic methods for geothermal exploration and monitoring

Earthquake localization advanced rapidly with the establishment of seismic networks such as the Worldwide Standardized Seismograph Network (WWSSN) in the mid-1900s. However, traditional approaches often struggle with events in regions of sparse station coverage or complex crustal structures, where seismic wave propagation deviates from standard models. To overcome these challenges, the introduction of seismic tomography and 3D velocity models in the late 20th century represented a major breakthrough. These methods account for variations in the Earth's subsurface, significantly improving the accuracy of earthquake localization, particularly in tectonically complex regions such as subduction zones.

Template matching has also been instrumental in localization. By cross-correlating incoming waveforms with those of well-located reference events, the locations of new earthquakes can be inferred with remarkable precision (a minimal sketch appears at the end of this section). This method excels at identifying and locating small, repeating earthquakes that may not generate strong signals across broad networks.

The rise of machine learning in the 2010s has further transformed earthquake localization by automating seismic waveform analysis and improving accuracy. Neural networks trained on synthetic and real seismic datasets can estimate earthquake hypocenters (the points of origin beneath the Earth's surface) with impressive speed and precision (Zhu et al., 2019). Probabilistic methods, such as Bayesian inference combined with machine learning, allow robust localization even in areas with limited station coverage or high noise levels; a simplified grid-search illustration of this probabilistic view closes this section. These innovations highlight the remarkable progress in earthquake localization, offering critical insights into seismic processes and supporting effective monitoring of tectonic activity worldwide.

Magnitude

The estimation of earthquake magnitude has evolved significantly since its inception, transitioning from simple empirical scales to sophisticated, physics-based calculations that leverage global seismic networks and advanced computational tools. The concept of quantifying an earthquake's size was first formalized by Charles F. Richter in 1935 with the introduction of the Richter scale, or local magnitude (ML) scale. This method measured the amplitude of seismic waves recorded by a specific type of seismograph (the Wood-Anderson torsion seismometer) at a standardized distance of 100 kilometers from the epicenter. The Richter scale was revolutionary because it provided a logarithmic measure of earthquake size, allowing a single number to represent the energy released during an event.

While the Richter scale worked well for moderate earthquakes in Southern California, it had limitations for very large earthquakes and for events outside the region for which it was calibrated. This led to the development of additional magnitude scales, such as the surface-wave magnitude (Ms), suited to large, shallow events, and the body-wave magnitude (Mb), which focuses on compressional body waves and is useful for deep-focus earthquakes. Despite their broader applicability, these scales also had shortcomings, such as underestimating the size of very large earthquakes (a problem known as saturation).

To address these issues, the moment magnitude scale (Mw) was introduced in the late 1970s by Hiroo Kanamori and Thomas Hanks. This scale is based on the seismic moment, a physical quantity directly related to the energy released during fault rupture. Mw considers the area of the fault that slipped, the average slip displacement, and the rigidity of the ruptured rock, so it does not saturate even for the largest earthquakes.
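To make the last point concrete, the sketch below computes a seismic moment from those three quantities and converts it to Mw using the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in newton-metres. The rigidity, rupture dimensions, and slip are illustrative values, not measurements from any real event.

```python
import math

def moment_magnitude(rigidity_pa: float, area_m2: float, avg_slip_m: float) -> float:
    """Moment magnitude via the Hanks-Kanamori relation.

    Seismic moment M0 = rigidity * rupture area * average slip (N*m),
    then Mw = (2/3) * (log10(M0) - 9.1).
    """
    m0 = rigidity_pa * area_m2 * avg_slip_m  # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative values for a hypothetical moderate crustal event:
# 30 GPa rigidity, a 10 km x 5 km rupture, 0.4 m average slip.
mw = moment_magnitude(rigidity_pa=30e9, area_m2=10e3 * 5e3, avg_slip_m=0.4)
print(f"Mw ~ {mw:.1f}")  # about 5.8
```

Because Mw depends on the logarithm of a physical moment rather than on the response of a particular instrument, it avoids the saturation that limits amplitude-based scales.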
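The template-matching procedure described earlier reduces, at its core, to a normalized cross-correlation between a reference waveform and a continuous data stream. The sketch below uses purely synthetic traces and an assumed detection threshold of 0.8; operational systems add bandpass filtering, stacking of correlations across a network, and careful threshold calibration.

```python
import numpy as np

def match_template(continuous: np.ndarray, template: np.ndarray, threshold: float = 0.8):
    """Return (sample index, correlation) pairs where the normalized
    cross-correlation of template and trace exceeds the threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(continuous) - n + 1):
        window = continuous[i:i + n]
        std = window.std()
        if std == 0:
            continue
        cc = np.dot(t, (window - window.mean()) / std)  # normalized CC in [-1, 1]
        if cc >= threshold:
            detections.append((i, cc))
    return detections

# Synthetic demo: bury two copies of a template waveform in noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 200)) * np.hanning(200)
trace = 0.1 * rng.standard_normal(5000)
for onset in (1200, 3600):
    trace[onset:onset + 200] += template
print(match_template(trace, template))  # clusters of hits near samples 1200 and 3600
```

Detections cluster around the true onsets because neighbouring windows also correlate strongly; real pipelines typically keep only the local maximum of each cluster.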
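Finally, the probabilistic view of localization can be illustrated with a deliberately simplified grid search: assuming a uniform velocity, straight ray paths, and Gaussian pick errors, the likelihood of a trial epicenter follows directly from its travel-time residuals. Every number below (station geometry, velocity, pick uncertainty) is hypothetical.

```python
import numpy as np

# Hypothetical station coordinates (km) and a uniform P-wave velocity (km/s).
stations = np.array([[0.0, 0.0], [40.0, 5.0], [10.0, 35.0], [30.0, 30.0]])
v_p, sigma = 6.0, 0.05  # velocity (km/s) and pick uncertainty (s)

def travel_times(epicenter, origin_time):
    """Straight-ray travel times from a trial epicenter to every station."""
    dist = np.linalg.norm(stations - epicenter, axis=1)
    return origin_time + dist / v_p

# Simulate noisy picks from a "true" event at (22, 18) km, origin time 0 s.
rng = np.random.default_rng(1)
observed = travel_times(np.array([22.0, 18.0]), 0.0) + sigma * rng.standard_normal(4)

# Grid search: for each trial epicenter, remove the best-fitting origin time
# (the mean residual) and score the remaining misfit as a Gaussian log-likelihood.
xs = ys = np.linspace(0.0, 45.0, 91)
best, best_like = None, -np.inf
for x in xs:
    for y in ys:
        resid = observed - travel_times(np.array([x, y]), 0.0)
        resid -= resid.mean()  # optimal origin-time shift under Gaussian errors
        log_like = -0.5 * np.sum(resid**2) / sigma**2
        if log_like > best_like:
            best, best_like = (x, y), log_like
print("maximum-likelihood epicenter:", best)  # near the true (22, 18)
```

Normalizing the likelihood over the whole grid would yield a posterior map of epicenter probability, which is the quantity that Bayesian and machine-learning-assisted locators refine with realistic velocity models and data-driven pick uncertainties.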
