Research – Electrical Resistivity Tomography – A Brief Introduction
Resistivity in Geophysics
The DC resistivity survey is an inexpensive and widely used geophysical technique for investigating near-surface resistivity anomalies (e.g., a buried tank, a moisture plume, a lithological change). In principle, it measures the voltage generated by transmitting current between electrodes placed in the earth (typically at the surface, but also in boreholes or buried underground). Apparent (bulk or effective) electrical resistivity is then calculated from these measurements; the apparent resistivities are assembled into pseudosections, which are interpreted and inverted to determine subsurface resistivity anomalies. The inversion methods I use (see the research page for examples) accomplish the same task in a much more general way, and do not require the assumptions that go into pseudosections.
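The apparent-resistivity calculation mentioned above follows the standard four-electrode relation ρ_a = K ΔV / I, where K is a geometric factor determined by the electrode positions. A minimal sketch (assuming all four electrodes on the surface of a uniform half-space; function names are my own, not from any particular survey package):

```python
import math

def geometric_factor(am, an, bm, bn):
    """Geometric factor K (in m) for four surface electrodes over a
    half-space. am, an, bm, bn are the distances from current
    electrodes A and B to potential electrodes M and N."""
    return 2.0 * math.pi / (1.0 / am - 1.0 / an - 1.0 / bm + 1.0 / bn)

def apparent_resistivity(delta_v, current, am, an, bm, bn):
    """Apparent resistivity rho_a = K * (delta V) / I, in ohm-m."""
    return geometric_factor(am, an, bm, bn) * delta_v / current

# Example: a Wenner array (A-M-N-B equally spaced by a) reduces to K = 2*pi*a
a = 5.0
K_wenner = geometric_factor(a, 2 * a, 2 * a, a)
```

For a homogeneous subsurface, ρ_a equals the true resistivity regardless of electrode layout; it is precisely when ρ_a varies with array position and spacing that an anomaly is indicated.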
Limitations to Classical Methodologies
Classical interpretation of the DC resistivity survey assumes a homogeneous subsurface; moreover, the measured potential field is inherently smooth because of the highly diffusive nature of DC current flow. Consequently, conventional electrical resistivity surveys have been virtually ineffective for environmental applications, where electrical resistivity anomalies are subtle, complex, and multi-scale. To overcome these difficulties, a modern survey method, Electrical Resistivity Tomography (ERT), was designed to collect extensive electric current and electric potential data sets in multiple dimensions. ERT was inspired by tomography in the medical fields (e.g., CAT scans and MRI).
The tomography portion of ERT comes from taking many groups of current-source/voltage measurements at as many locations as possible. Each group of measurements is a traditional test; tomography is the joint inversion of many independent tests, using an algorithm to discern subtle details from differences that would not be visible in any one test. The process can be illustrated with a simple analogy. If you want to know what a building looks like, you first look at it from the south side (a single test), then walk around to the east side and look again, and so on. The more independent directions you view it from, the closer the “model” of it in your head comes to reality. Getting in a helicopter and looking down from above, or going inside to see the interior counterparts of what you saw from outside, adds to the 3-D conceptualization of the building in your mind. In this analogy, your brain is the computer, your eyes, ears, and other senses are the survey instruments, and the algorithm that puts it all together is your common sense.
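The joint-inversion idea can be sketched with a toy linear example. This is illustrative only: a real ERT inversion is nonlinear and iterative, and the operators, model size, and damping value here are invented for the sketch. Each test alone is underdetermined (three measurements, four unknowns), but stacking the tests and solving one regularized least-squares problem recovers the full model:

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = np.array([1.0, -2.0, 0.5, 3.0])   # "true" model (4 cells)

# Three independent tests; each sees the model through a different
# (here randomly generated) 3x4 sensitivity operator G_k.
tests = [rng.standard_normal((3, 4)) for _ in range(3)]
data = [G @ m_true for G in tests]          # noiseless synthetic data

# Joint inversion: stack all tests into one system ...
G_all = np.vstack(tests)
d_all = np.concatenate(data)

# ... and solve damped (Tikhonov) least squares for the model.
lam = 1e-6
m_est = np.linalg.solve(G_all.T @ G_all + lam * np.eye(4), G_all.T @ d_all)
```

No single 3x4 block constrains all four cells, but the stacked 9x4 system does; this is the “many views of the building” idea in matrix form.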
Extending the analogy further, the first and second views of the building add a great deal of information to your model (i.e., “Ahh! I didn't know there was a porch in back!”), while the 20th view adds far less. This demonstrates the diminishing return of additional, similar data to the overall understanding of the whole.
Different Types of Information
Different types of data add more information to the overall result. Seeing the blueprint of the building, or flying over it in a helicopter, helps you visualize its “true” layout far more than yet another view from the ground. In practice, the building is an aquifer or reservoir, and the different ways of looking at it are the results of different tests: geophysical surveys, pumping tests, laboratory tests of samples, and so on. Unfortunately, in the case of the aquifer we can never cheat and look at its “true” properties, since we don't have the blueprint.
This is generally how the tomography concept is applied to both geophysics and hydraulic testing. More pairs of independent measurements, from differently located sources (on the surface or in boreholes), add to the detail that can be discerned from the data. Check out the animation of the cumulative results from hydraulic tomography to see these concepts visually.
The concept of combining different types of data, including geophysical methods (resistivity, GPR, gravity, seismic, and thermal), hydraulic methods (traditional and tomographic pumping tests), and tracer methods (both traditional and tomographic tracer tests), with uncertainty analysis and stochastics is the basis for Stochastic Fusion (Yeh and Simunek, 2002).