Seismic tomography

Traditionally, tomographic models were built by inverting only the travel times of seismic waves. However, advances in modeling techniques and computing power have allowed different parts, or the entirety, of the measured seismic waveform to be fit during the inversion.

Seismic tomography must account for curved ray paths that are refracted and reflected within the Earth, and for uncertainty in the location of the earthquake hypocenter.[5]
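
As a rough illustration of the travel-time approach, the sketch below inverts synthetic travel-time residuals d for slowness perturbations m on a small 2-D grid using the classic damped least-squares form m = (GᵀG + λI)⁻¹Gᵀd, where G holds ray lengths per cell. It assumes straight ray paths and made-up anomaly values purely for illustration; real tomography codes trace curved, refracted rays and solve far larger, carefully regularized systems.

```python
import numpy as np

# Illustrative 2-D travel-time tomography: straight rays through a grid of
# slowness cells (real implementations trace curved, refracted ray paths).
rng = np.random.default_rng(0)

nx, nz = 20, 20                 # grid cells (x, depth)
cell = 10.0                     # cell size in km
n_cells = nx * nz
n_rays = 300                    # number of source-receiver pairs

# Hypothetical "true" slowness perturbation: one slow and one fast block.
m_true = np.zeros((nz, nx))
m_true[5:9, 4:8] = +2e-3        # slow anomaly (s/km)
m_true[10:14, 12:16] = -2e-3    # fast anomaly (s/km)
m_true = m_true.ravel()

# Sensitivity matrix G: G[i, j] = length of ray i inside cell j.
G = np.zeros((n_rays, n_cells))
for i in range(n_rays):
    # random straight ray from the left edge to the right edge of the grid
    z0, z1 = rng.uniform(0, nz * cell, size=2)
    slope = (z1 - z0) / (nx * cell)
    for ix in range(nx):
        x_mid = (ix + 0.5) * cell
        z_at_x = z0 + slope * x_mid
        iz = min(int(z_at_x // cell), nz - 1)
        seg = cell * np.hypot(1.0, slope)      # path length within this column
        G[i, iz * nx + ix] += seg

# Observed residuals relative to a reference model, plus noise.
d = G @ m_true + rng.normal(0, 0.01, n_rays)

# Damped least-squares inversion: m = (G^T G + lam*I)^(-1) G^T d
lam = 1.0
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ d)

print("correlation with true model:",
      np.corrcoef(m_true, m_est)[0, 1].round(2))
```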

In the early 20th century, seismologists first used travel time variations in seismic waves from earthquakes to make discoveries such as the existence of the Moho[6] and the depth to the outer core.[12]

As early as 1972,[13] researchers successfully used some of the underlying principles of modern seismic tomography to search for fast and slow areas in the subsurface.[14]

The first widely cited publication that largely resembles modern seismic tomography was published in 1976 and used local earthquakes to determine the 3D velocity structure beneath Southern California.[15][14]

The following year, P wave delay times were used to create 2D velocity maps of the whole Earth at several depth ranges,[16] representing an early 3D model.

This approach, known as full waveform inversion, models seismic wave propagation in its full complexity and can yield more accurate images of the subsurface.[14]
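
The sketch below illustrates the waveform-fitting idea in a deliberately simplified setting: a toy 1-D acoustic finite-difference solver (the `simulate` function, grid, source, and velocity values are all illustrative assumptions, not from any published study) generates a synthetic seismogram through a trial velocity model, and an L2 waveform misfit is measured against "observed" data from a hypothetical true model. Full waveform inversion would iteratively update the model, in 3-D, to reduce such a misfit.

```python
import numpy as np

def simulate(c, nt=1200, dt=0.001, dx=5.0, src=50, rec=350):
    """Toy 1-D acoustic finite-difference solver (illustrative only).

    c : velocity model (m/s) on a regular grid.
    Returns the pressure recorded at grid index `rec`.
    """
    n = len(c)
    p_prev, p, seis = np.zeros(n), np.zeros(n), np.zeros(nt)
    t = (np.arange(nt) - 100) * dt
    f0 = 10.0                                   # 10 Hz Ricker source wavelet
    wavelet = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
    r2 = (c * dt / dx) ** 2                     # squared Courant number per cell
    for it in range(nt):
        lap = np.zeros(n)
        lap[1:-1] = p[2:] - 2 * p[1:-1] + p[:-2]
        p_next = 2 * p - p_prev + r2 * lap      # explicit time step
        p_next[src] += wavelet[it]              # inject the source
        p_prev, p = p, p_next
        seis[it] = p[rec]
    return seis

# "Observed" data from a hypothetical true model with a low-velocity zone,
# synthetics from a smooth starting model.
c_true = np.full(500, 2000.0)
c_true[200:260] = 1700.0
c_start = np.full(500, 2000.0)

d_obs = simulate(c_true)
d_syn = simulate(c_start)
misfit = 0.5 * np.sum((d_syn - d_obs) ** 2)     # L2 waveform misfit
print(f"L2 waveform misfit of starting model: {misfit:.3e}")
```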

Various methods are used to resolve anomalies in the crust, lithosphere, mantle, and core based on the availability of data and the types of seismic waves that pass through the region.

The trade-off for whole-mantle to whole-Earth coverage is coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes).[30]

Variations in these velocity parameters may result from thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes.

Tomographic images have been made of most subduction zones around the world and have provided insight into the geometries of the crust and upper mantle in these areas.

Data collected from the four seismometers placed by the Apollo missions have been used many times to create 1-D velocity profiles for the Moon,[54][55][56] and less commonly 3-D tomographic models.

While on Earth these methods are often used in combination with seismic tomography models to better constrain the locations of subsurface features,[58][59] they can still provide useful information about the interiors of other planetary bodies when only a single seismometer is available.

For example, data gathered by the SEIS (Seismic Experiment for Interior Structure) instrument on the InSight lander on Mars[60] have been used to detect the Martian core.[62]

Temporary seismic networks have helped improve tomographic models in regions of particular interest, but typically only collect data for months to a few years.

Finer resolution can be achieved with surface waves, with the trade-off that they cannot constrain structure deeper than the crust and upper mantle.

Because seismometers have only been deployed in large numbers since the late 20th century, tomography is only capable of observing changes in velocity structure over time spans of decades.[64]

However, seismic tomography has still been used to observe near-surface changes in velocity structure on time scales of months to years.

Available computing power limits the amount of seismic data, the number of unknowns, the mesh size, and the number of iterations that can be used in tomographic models.
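
A back-of-the-envelope example of these limits, using purely illustrative numbers: a dense sensitivity matrix relating millions of travel-time picks to millions of model cells would not fit in memory, which is one reason tomography codes rely on sparse storage, iterative solvers, and coarse or adaptive meshes.

```python
# Rough size of a (dense) tomographic inverse problem; the numbers below
# are illustrative, not taken from any specific study.
n_travel_times = 5_000_000          # picked arrival times (data)
n_cells = 100 * 200 * 400           # unknown velocity cells in the mesh
bytes_per_value = 8                 # float64

dense_G_bytes = n_travel_times * n_cells * bytes_per_value
print(f"dense sensitivity matrix: {dense_G_bytes / 1e12:.0f} TB")

# Each ray only touches a small fraction of cells, so sparse storage is used.
avg_cells_per_ray = 2_000
sparse_bytes = n_travel_times * avg_cells_per_ray * (8 + 4)  # value + index
print(f"sparse storage (approx.): {sparse_bytes / 1e9:.0f} GB")
```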

This is particularly important in ocean basins, where limited network coverage and earthquake density require more complex processing of distant data.

Figure: Simplified and interpreted P and S wave velocity variations in the mantle across southern North America, showing the subducted Farallon plate.
Figure: The African large low-shear-velocity province (superplume).