Data flow & execution design

Overview

This is a system for restoring the poles and zeros of an unknown instrument.

It consists of 7 executables:

view_cohe.exe - plots the coherence between the input data streams so that we can verify the coherence is high enough to produce meaningful results in the frequency range of interest.

Note

view_cohe.exe is a diagnostic tool and, technically, can be omitted. Still, it is useful to execute it before the run. It calculates and plots the coherence between the inputs and lets the end user determine which frequencies are restored reliably; coherence should not differ much from 1.0 at reliable points.

stage0.exe - extracts the epochs from the known instrument’s RESP files which correspond to the dates found in the waveform data. Includes some text processing.

stage1e.exe - restores the transfer function from the known instrument to the unknown one. This is done with Anatoli Kuschan’s linear approximation method.

stage1.exe (legacy program) - restores the transfer function from the known instrument to the unknown one. This is done with the Pavlis and Vernon robust averaging method.

Note

This is a legacy program. For better results please use stage1e.

stage2.exe - filters the data acquired in the previous stage, making it smooth, without gaps and peaks. Currently, only a median filter is applied.

stage3.exe - calculates the poles and zeros of the restored response and selects the best number of poles and zeros. Includes the linear first approximation and the Levenberg-Marquardt refinement.

stage4.exe - encodes the poles and zeros into a RESP-formatted file.

Each stage writes some files and draws some plots. These will be explained in their respective sections (see below).

The system is controlled by the variables from /opt/osop/respgen/setup.py. This file is imported by all the stages, and each stage uses some variables from it. It is extensively commented and should be self-explanatory. See also: Configuration parameters explained

The final output of the processing chain is the file called out4.response.restored, produced by stage4.exe. This is a RESP-formatted file containing the instrument response (poles, zeros and gain) of the unknown sensor.

Theory & Methodology

Our method was inspired by Pavlis and Vernon (1994) but significantly deviates in its implementation in stage1e.

Theory

Given a pair of co-located sensors, if we know the instrument response of one, we can restore the instrument response of the other.

“The data used are obtained by placing two sets of sensors close enough together that we can assume they record the same ground motion. It is further assumed that one of the sensors has a known, absolute calibration. One then records ground noise of sufficiently high amplitude to guarantee that one is recording above the amplifier noise floor across the entire frequency band of interest. Data are recorded for a time period that depends upon the lowest frequency that is to be resolved. The data is then divided into a series of N partially overlapping time windows, transfer-function estimates are calculated from each of these N time windows, and finally a robust mean estimation procedure is used to produce transfer-function estimates at a set of discrete frequencies.” -Pavlis and Vernon, 1994

What we expect for a seismometer in general:

  • A flat frequency response (in frequency v. amplitude space) between some low and high corner frequencies.
  • The low frequency corner might be at 0.001, 0.01, 0.1, 0.5, 4.5 Hz, … depending on the sensor and how “broad band” it is (the more broad band, the wider the flat frequency response is and the lower it goes). The lower corner is controlled by the electronics and generally rolls off gradually, with one or two poles.
  • The high frequency corner, if present, will be near the Nyquist. This corner can be more aggressive and roll off like a cliff if FIR filters were used by the ADC to downsample to lower frequencies. To see this effect, create a modified version of your RESP file where you delete the decimation stage (FIR filters). Now use a program like JPlotResp to plot the original and modified RESP files. In the modified RESP file you will see that there is no roll-off.

Methodology

The Pavlis method gives us an array of coefficients of the transfer function from the known instrument to the unknown one. If we then element-wise multiply this array by the response of the known instrument, we obtain the approximate response of the unknown instrument. Given this, we can (again, approximately) restore the poles and zeros of the response, using the rational function model as the first approximation, and then improve the poles and zeros with non-linear least squares.
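As a minimal numpy sketch of the multiplication step (the function and array names here are illustrative, not the actual variables used in the code):

    import numpy as np

    def restore_response(transfer, known_resp):
        """Approximate the response of the unknown instrument.

        transfer: complex transfer-function coefficients from the known
        sensor to the unknown one (the stage 1/1e output); known_resp:
        the known instrument response evaluated on the same frequency
        grid.
        """
        # The true ground motion cancels in the transfer function, so an
        # element-wise product with the known response leaves an
        # approximation of the unknown instrument response.
        return transfer * known_resp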

respGen works with the FFT coefficients of the recorded waveforms, their robust averages across time windows, and their ratios, in which the true ground motion cancels, leaving only the instrument response.

Assumptions

  • “The basic principal of the method developed here is to deploy two or more sensors close enough together that we can assume they all record the same ground motion e(t).” -Pavlis and Vernon, 1994; “Two seismometers are placed close together so that it can be assumed that they record the same ground motion.” - Sleeman et al., 2006
  • “…the method assumes that one of the pairs of sensors has an accurate known frequency response.” -Sleeman et al., 2006
  • “We assume the standard convolutional model of observed seismograms is valid…[see paper]” -Pavlis and Vernon, 1994
  • That the flat frequency response of the unknown sensor coincides with and fits within the flat frequency response of the known sensor. By reviewing the plot generated in stage 0, the end user can verify that UFILE and KFILE have been chosen appropriately.
  • In translating an acceleration response to velocity, the polynomial gains a zero at the origin.

Visualizations

Visualization is factored out into the view[0,1,2,3,4].exe programs; these are automatically run by each of the stage?.exe programs. Any view?.exe can also be run standalone after the corresponding stage?.exe has finished its work.

For example, after running stage3.exe and closing all the windows, you can run ./view3.exe again and it will plot the data acquired in stage 3.

The system also includes the program called view_cohe.exe which calculates and plots the coherence between the input waveforms.

Stages explained

view_cohe

Objective: calculate and plot the coherence between the input waveforms

“Coherence estimates are essential to appraise the validity of results.” - Pavlis and Vernon, 1994

  1. Input: Read the input signals from KFILE and UFILE (waveforms from the known and unknown sensors)
  2. Check the start and end time of the input signals. Align the input signals on a common start and end time.
  3. Remove DC from the input arrays.
  4. Check for gaps. If encountered anywhere in the input signals, the program exits.
  5. Determine the lowest frequency that can be resolved, based on the waveform length, OVERLAP_FRACTION, and NUM_OF_WINDOWS (100x rule). Determine the length of the window, choosing from powers of 2.
  6. Calculate coherence to appraise the quality of the input signals according to the length of the window and OVERLAP_FRACTION.
  7. Plot the coherence between the known and unknown input signals as Log. Frequency (Hz) v. Coherence.

This step allows the end-user to estimate how robust the final solution will be and at which frequencies it will be most reliable.
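A minimal sketch of this kind of coherence estimate, using scipy.signal.coherence (the actual program implements its own windowing; the power-of-2 window rule below is an assumption based on step 5, and all names are illustrative):

    import numpy as np
    from scipy.signal import coherence

    def input_coherence(known, unknown, fs, num_windows=100,
                        overlap_fraction=0.5):
        """Magnitude-squared coherence of the two aligned, DC-free inputs."""
        # Pick the largest power-of-2 window length such that
        # NUM_OF_WINDOWS partially overlapping windows still fit into the
        # record; the window length sets the lowest resolvable frequency.
        step = 1.0 - overlap_fraction
        max_len = len(known) / (1 + (num_windows - 1) * step)
        nperseg = 2 ** int(np.log2(max_len))
        # Returns (frequencies, coherence): plot as log-frequency v. coherence.
        return coherence(known, unknown, fs=fs, nperseg=nperseg,
                         noverlap=int(nperseg * overlap_fraction))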

Input files: KFILE, UFILE

Output files: None

stage 0

Objective: This stage examines the input waveforms from both sensors, finds the time interval of their overlap, and extracts the corresponding epochs from the known instrument’s RESP files. It also performs some checks: whether the waveforms have gaps, whether the sampling rates match, and whether the dates really overlap.

This is mainly a text processing stage. To figure out where an epoch starts in the original RESP file, the ‘Start date’ record is found, and the text is then searched backward line by line until something that doesn’t belong to this epoch’s text is found. These start line numbers are stored in an array, and the start line of the next epoch is taken as the end of the current epoch. After the array of epochs has been built, we search it to find the epoch which the waveform dates fall into.
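A hedged sketch of that epoch-splitting logic (the exact back-search rule in stage0.exe may differ; treating ‘#’ comment lines as epoch boundaries is an assumption made for illustration):

    def split_epochs(resp_lines):
        """Split a RESP file, given as a list of text lines, into epochs."""
        starts = []
        for i, line in enumerate(resp_lines):
            if 'Start date' in line:
                j = i
                # Search back line by line for the top of this epoch's
                # text (here: up to the nearest '#' separator line).
                while j > 0 and not resp_lines[j - 1].startswith('#'):
                    j -= 1
                if j not in starts:
                    starts.append(j)
        # The start line of the next epoch closes the current epoch.
        bounds = starts + [len(resp_lines)]
        return [resp_lines[a:b] for a, b in zip(bounds, bounds[1:])]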

Sequence of events:

  1. Read the input signals from KFILE and UFILE, check them for gaps, and align the input signals on a common start and end time.
  2. Extract the epochs from the KRESP and URESP files that correspond to the dates in the miniSEED file headers.
  3. Save epochs to the corresponding ‘out0.response.known’ and ‘out0.response.unknown’ files.
  4. Plot the responses from the corresponding epochs in 2 subplots, starting from LOWFRQ to the Nyquist frequency:
  • upper - log-log amplitude of the 2 responses
  • lower - log-x phase of the 2 responses

Input files: UFILE, KFILE, URESP, KRESP

Output files: out0.response.unknown, out0.response.known. Each of the output files contains a single epoch of the original RESP file.

view 0

Objective: This plots the epochs extracted by stage0.exe in a way similar to JPlotResp. It opens 2 windows, each of which depicts the corresponding response. These plots allow the end-user to verify that a basic condition has been met: that the flat frequency response of the unknown sensor fits within the flat frequency response of the known sensor.

Warning

This operation takes some time.

Responses are evaluated with ObsPy’s evalresp(). Unfortunately, it cannot calculate the response at several arbitrary predetermined points; it works only with regular grids of frequencies. To make the plots start at some very low frequency (LOWFRQ from /opt/osop/respgen/setup.py, currently 0.0001 Hz), the response is evaluated on an extremely dense grid - and this makes the operation a bit lengthy. To ensure that any matplotlib backend will plot the result and won’t choke on too many points, only NSTEPS+1 points of the resulting grid are used, spaced exponentially to make a nice log-x plot.
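A minimal sketch of that exponential thinning (the helper name and arguments are illustrative; the dense regular grid itself comes from evalresp):

    import numpy as np

    def thin_exponentially(freqs, resp, nsteps):
        """Keep only NSTEPS+1 exponentially spaced points of a dense grid."""
        # Indices forming a geometric progression give points that look
        # evenly spaced on a log-x plot; index 0 (possibly 0 Hz) is skipped.
        idx = np.unique(np.geomspace(1, len(freqs) - 1, nsteps + 1).astype(int))
        return freqs[idx], resp[idx]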

Input files: UFILE, KFILE, out0.response.unknown, out0.response.known

Output files: none

stage 1e

Objective: calculate the transfer function via Anatoli’s linear approximation method

It is derived from stage1.exe; all the code related to the Pavlis robust averaging (Pavlis and Vernon, 1994) has been deleted, and the best linear approximation is used instead. It overcomes the problem with spectral division and robust averaging described below.

So, how does stage1.exe (robust averaging by Pavlis) evaluate a transfer coefficient for some frequency?

  • 2 arrays of FFT coefficients corresponding to this frequency are taken: spectra1 and spectra2. spectra1 contains coefficients for the unknown sensor, spectra2 for the known.
  • spectra1 is divided by spectra2; and
  • the result of the division is robustly averaged.

Sometimes the division itself is not robust, namely when abs(spectra2) << abs(spectra1), as occurred with our shake table experiments. When this happens, the results of division and averaging totally hide the transfer coefficients found at reliable points.

Notice that in fact, we are looking for a coefficient k such that: spectra2 * k approximately equals spectra1.

What is done instead in stage1e.exe?

  • the same arrays of coefficients are taken, that is, spectra1 and spectra2;
  • the best linear approximation of spectra1 by spectra2 is found;
  • the coefficient of this approximation is what we are looking for.

A simple geometric formula is used:

(spectra1 - spectra2 * k, spectra2) = 0
or
k = (spectra1,spectra2) / (spectra2,spectra2)

(.,.) denotes a complex dot product here. Coherence is calculated as follows:

cohe = |(spectra1,spectra2)| ** 2  /  |(spectra1,spectra1)*(spectra2,spectra2)|
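In numpy, with the complex dot product (a, b) implemented as sum(a * conj(b)), the two formulas above become (a minimal sketch; the array names follow the text above, not the actual code):

    import numpy as np

    def transfer_and_coherence(spectra1, spectra2):
        """Best linear approximation of spectra1 by spectra2 at one frequency.

        spectra1/spectra2: complex FFT coefficients of the unknown/known
        sensor at this frequency, gathered across all time windows.
        """
        def cdot(a, b):  # the complex dot product (a, b)
            return np.sum(a * np.conj(b))

        # (spectra1 - spectra2 * k, spectra2) = 0  =>  k = (s1, s2) / (s2, s2)
        k = cdot(spectra1, spectra2) / cdot(spectra2, spectra2)
        cohe = (abs(cdot(spectra1, spectra2)) ** 2
                / abs(cdot(spectra1, spectra1) * cdot(spectra2, spectra2)))
        return k, cohe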

Note

As shown in many experiments, stage1e.exe gives results of similar or superior quality compared to stage1.exe.

Warning

stage1e.exe may require some tuning of the COHE_LIMIT parameter in /opt/osop/respgen/setup.py. Values acceptable for stage1.exe may have to be changed - usually, decreased.

So, this is how stage1e.exe works:

  1. Input: Read the input signals from KFILE and UFILE, detrend the input signals by removing the DC offset, and align them on a common start and end time.

  2. Determine the lowest frequency that can be resolved, based on the waveform length, OVERLAP_FRACTION, and NUM_OF_WINDOWS (100x rule). Determine the length of the window, choosing from powers of 2.

  3. Apply a standard windowed FFT with overlapping windows to the input signals according to the length of the window and OVERLAP_FRACTION. As the window moves, the FFT is calculated and the spectrum is stored for each window. Windowing is done using a 4pi prolate function (basically a bell-shaped curve); see the windowing sketch after this list.

  4. Choose ‘deputy’ frequencies spaced exponentially over the range of frequencies, based on NUM_OF_POINTS. This culls out redundant information and greatly accelerates calculations.

  5. This is the only change in logic: Restore the transfer function coefficients from the known input to the unknown using best linear approximation, and coherence as described above.

  6. Output results to:

    1. out1.coherence.npy for the coherence
    2. out1.frequency.npy for the frequency
    3. out1.transfer.npy for the transfer function
    4. out1.npts.npy for the window length determined in step 2.
    5. out1.rang.npy for the list of indices which determine the frequencies that survive the culling in step 4.
  7. Plot the Transfer function and coherence as Log. Frequency (Hz) v. Amplitude and Coherence. Set the lower limit of the plot to LOWFRQ.
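The windowing of step 3 looks roughly like this (a sketch; using SciPy’s discrete prolate spheroidal window with NW=4 as a stand-in for the ‘4pi prolate’ taper is an assumption):

    import numpy as np
    from scipy.signal.windows import dpss

    def windowed_spectra(signal, nperseg, overlap_fraction):
        """Sliding-window FFT of one input signal (steps 2-3)."""
        taper = dpss(nperseg, NW=4)  # prolate (Slepian) taper, bell-shaped
        step = max(1, int(nperseg * (1 - overlap_fraction)))
        spectra = [np.fft.rfft(signal[s:s + nperseg] * taper)
                   for s in range(0, len(signal) - nperseg + 1, step)]
        # Rows: windows; columns: frequencies. Column j, taken across all
        # rows, is the per-frequency array (spectra1/spectra2) of step 5.
        return np.array(spectra)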

Input files: KFILE, UFILE

Output files:
  • out1.coherence.npy - coherence between the inputs
  • out1.frequency.npy - regular grid of real frequencies
  • out1.transfer.npy - complex array of the transfer coefficients from the known input to the unknown input.
  • out1.npts.npy
  • out1.rang.npy

stage 1

Note

This is a legacy program. For better results please use stage1e (described above).

Objective: This stage calculates the transfer function from the known input to the unknown input, together with the frequencies and the coherence.

This is a translation of a program originally authored by Gary L. Pavlis (Pavlis and Vernon, 1994) into Python. Please refer to:

Pavlis, G.L. and Vernon, F.L. (1994) Calibration of Seismometers Using Ground Noise, Bulletin of the Seismological Society of America, Vol. 84, No. 4, pp. 1243-1255

for details.

Sequence of events:

  1. Input: Read the input signals from KFILE and UFILE, detrend the input signals by removing the DC offset, and align them on a common start and end time.

  2. Determine the lowest frequency that can be resolved, based on the waveform length, OVERLAP_FRACTION, and NUM_OF_WINDOWS (100x rule). Determine the length of the window, choosing from powers of 2.

  3. Apply a standard windowed FFT with overlapping windows to the input signals according to the length of the window and OVERLAP_FRACTION. As the window moves, the FFT is calculated and the spectrum stored for each window. Windowing is done using a 4pi prolate function (basically a bell-shaped curve).

  4. Choose ‘deputy’ frequencies spaced exponentially over the range of frequencies, based on NUM_OF_POINTS. This culls out redundant information and greatly accelerates calculations.

  5. Restore the transfer function coefficients from the known input to the unknown using the Pavlis method (Pavlis and Vernon, 1994). This is done by averaging the ratio of the spectra coefficients over the number of windows. Averaging is based on a statistical technique that does not favor outliers and assigns them less weight. Also calculate the coherence of the transfer function coefficients. This is the “coherence” which comes directly from the Pavlis method described in Pavlis and Vernon (1994).

  6. Output results to:

    1. out1.coherence.npy for the coherence
    2. out1.frequency.npy for the frequency
    3. out1.transfer.npy for the transfer function
    4. out1.npts.npy for the window length determined in step 2.
    5. out1.rang.npy for the list of indices which determine the frequencies that survive the culling in step 4.
  7. Plot the Transfer function and coherence as Log. Frequency (Hz) v. Amplitude and Coherence. Set the lower limit of the plot to LOWFRQ.

Input files: KFILE, UFILE

Output files:
  • out1.coherence.npy - coherence between the inputs
  • out1.frequency.npy - regular grid of real frequencies
  • out1.transfer.npy - complex array of the transfer coefficients from the known input to the unknown input.
  • out1.npts.npy
  • out1.rang.npy

A technical note on implementing the Pavlis robust averaging method in Python:

During the translation, some low level loops were substituted with Python array syntax. For that, 2 functions - huber() and thompson() - were vectorized into vec_huber() and vec_thompson(); this is not an optimization, just syntactic sugar.
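For a flavor of what such a vectorized weight function looks like, here is a generic Huber-style weighting in numpy (a sketch only; the actual vec_huber()/vec_thompson() follow Pavlis and Vernon, 1994, and the cutoff value here is an assumption):

    import numpy as np

    def vec_huber(residuals, cutoff=1.345):
        """Huber weights for an array of normalized residuals.

        Points within the cutoff get full weight 1.0; outliers are
        down-weighted as cutoff/|r|, so they still contribute to the
        average but cannot dominate it.
        """
        r = np.abs(residuals)
        return np.where(r <= cutoff, 1.0, cutoff / r)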

A technical note on stage1 versus stage1e:

Potential drawback of using stage1 instead of stage1e: division is not robust when abs(spectra2) << abs(spectra1), as occurred with our shake table experiments. When some of the data is noisy and the noise overwhelms the true signal, the results of the division at the noisy points make averaging useless. Big and random results of division can, in general, be averaged, but the result of the averaging totally hides the transfer coefficients found at reliable points.

view 1 (e)

Objective: This plots the output from stage 1(e): the amplitude, real, and imaginary parts of the transfer function, as well as the coherence.

Input files: out1.coherence.npy, out1.frequency.npy, out1.transfer.npy

Output files: none

stage 2

Objective: build and refine the instrument response

This stage restores an array of the unknown instrument response coefficients. It does so by multiplying the transfer function coefficients obtained from the previous stage by the response of the known instrument. It also makes the response ‘nice’ by removing spurious gaps and peaks. It also builds an array of ‘weights’ used in the linear method of Stage 3. Weights are selected in a way that favors the low frequency details of the curve.

The known instrument response used here is decoded in the ‘ACC’ mode. This is to prepare the acceleration response, which is significant in stage 3.

As discovered, the linear method used in stage 3 is a bit sensitive to spurious gaps and peaks in the input data, and the transfer coefficients do have some. To improve the resulting curve, a median filter is applied. Real and imaginary parts of the response are filtered separately. To keep the filter from garbling the precious extra-low-frequency coefficients, they are left intact.

An array of weights is built in a way that favors the frequencies below the peak of the acceleration response (see the sketch after this list):

  • the amplitude of the response is taken,
  • its maximum is found,
  • weights from the lowest frequency up to the peak are set to the maximum value,
  • the rest of the weights are set to the amplitude itself.
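A minimal sketch of the filtering and weight construction (function and argument names are illustrative; MEDIAN_LEN is assumed to be odd, as the median filter requires):

    import numpy as np
    from scipy.signal import medfilt

    def smooth_and_weight(response, median_len):
        """Median-filter the restored response and build the stage-3 weights."""
        # Filter the real and imaginary parts separately; keep the first
        # MEDIAN_LEN low-frequency coefficients untouched.
        smoothed = (medfilt(response.real, median_len)
                    + 1j * medfilt(response.imag, median_len))
        smoothed[:median_len] = response[:median_len]

        # Weights: flat at the maximum amplitude up to the peak of the
        # acceleration response, then following the amplitude itself.
        ampl = np.abs(smoothed)
        argmax = int(np.argmax(ampl))
        weight = ampl.copy()
        weight[:argmax + 1] = ampl[argmax]
        return smoothed, weight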

Sequence of events:

  1. Input: Read the transfer function from the previous stage.

  2. Input: Read the known response from KRESP. Decode it in acceleration units.

  3. Calculate the array of acceleration response coefficients.

  4. Improve the response: apply a median filter of length MEDIAN_LEN to remove gaps and peaks, leaving the first MEDIAN_LEN low-frequency coefficients untouched. No other smoothing should be applied.

  5. Calculate an array of weights. Let AMPL denote the array of amplitudes of the response, MAX the maximum of AMPL, and ARGMAX the index of MAX:

    • set weight = MAX for indices from 0 to ARGMAX
    • set weight = AMPL for indices greater than ARGMAX
  6. Output the results to:

    • out2.response1.npy for the restored response
    • out2.response2.npy for the improved response
    • out2.weight.npy for the weight
  7. Plot the original (with prominent gaps and peaks) and improved instrument response (smoothed) and weight as Log. Frequency (Hz) v. Amplitude. Set the lower limit of the plot to LOWFRQ.

Input files: KRESP, out1.transfer.npy

Output files:

  • out2.response1.npy - unfiltered response (used just for plotting)
  • out2.response2.npy - filtered response
  • out2.weight.npy - weight

view 2

Objective: This plots the amplitudes of the restored response and the improved response, as well as the weight.

Input files: out1.frequency.npy, out2.response1.npy, out2.response2.npy, out2.weight.npy

Output files: none

stage 3

Objective: select the best degrees of polynomials and restore poles and zeros. This stage calculates poles and zeros from the response acquired on stage 2.

Stage 3 can either determine the numbers of poles and zeros which give the best approximation to the restored response, or it can use prescribed numbers. This is governed by the variable DETERMINE_DEGREE from /opt/osop/respgen/setup.py: with True (the default), stage 3 determines the numbers of poles and zeros that give the best fit to the response; with False (useful for development), it just calculates the best fit for the specified numbers of poles and zeros.

Usually, we set DETERMINE_DEGREE = True, set the maximum number of poles NMAX, and launch stage3.exe. It will search all the valid combinations of numbers of poles and zeros, find the best one, and output the calculated PZ. Sometimes, when we have an idea about how many poles and zeros there should be, we set DETERMINE_DEGREE = False, and set the number of zeros MM and number of poles NN. Then, stage 3 will calculate PZ for that combination of numbers.

Note

Acceleration is used in the calculations in stages 2 and 3 for this reason: the Levenberg-Marquardt method can give us a zero close to, but not exactly at, the origin. When this happens, the velocity response calculated from these poles and zeros will be OK, but the acceleration response will be wrong.

Sequence of events:

  1. Input: data from previous stages.

  2. Poles and zeros for acceleration are calculated.

  3. Coefficients with coherence less than COHE_LIMIT do not participate in the calculations.

  4. Depending on DETERMINE_DEGREE, either

    1. calculate the PZ of the predetermined degrees MM (number of zeros) and NN (number of poles) if DETERMINE_DEGREE = False, or
    2. select the polynomial degrees which best fit the data and calculate the PZ if DETERMINE_DEGREE = True. All polynomial degrees up to NMAX are searched.
  5. Each calculation of PZ consists of 3 steps:

    1. use linear weighted least squares to get the first approximation,

    2. use non-linear least squares (Levenberg-Marquardt method) to improve the approximation,

    3. check that all poles have negative real parts

      Note

      All poles must have negative real parts. Poles in the right half of the complex plane are unstable (see pages 6-7 of Frank Scherbaum’s 2001 edition of “Understanding Poles and Zeros”).

  6. An additive (coefficient) formulation of the polynomials is used in both steps above to ensure that zeros and poles are either real or come in conjugate pairs. That is, both the first approximation and the improvement steps deal with real polynomial coefficients, not with the poles and zeros themselves. The polynomials are solved for their roots only at the last step, for output.

    This is done for the following reason: if we want poles and zeros to be either real or come in nice conjugate pairs, we have to either force them to be such artificially or employ a formulation that ensures their good behavior. Moreover, the standard SciPy Levenberg-Marquardt routine can only optimize over real variables.

    The first approximation is acquired with a method similar to this: http://en.wikipedia.org/wiki/Polynomial_and_rational_function_modeling#Rational_function_models We write down a linear system corresponding to the rational function and solve it with weighted least squares.

  7. PZs with poles to the right of the imaginary axis should be culled out, as poles with positive real parts are unstable (see pages 6-7 of “Understanding Poles and Zeros”; see the background reading folder at Google Docs).

  8. Linear weighted least squares (see the sketch after this list):

    1. Start from the rational function approximation

      R(Z) = (A0*Z^m + A1*Z^(m-1) + … + Am) / (Z^n + B1*Z^(n-1) + … + Bn)

      which, when evaluated at Z = 2 * pi * frequency * j, should give us the response coefficients R.

    2. Write down the linear formulation:

      A0*Z^m + A1*Z^(m-1) + … + Am - R*Z^n - R*B1*Z^(n-1) - … - R*Bn = 0; or

      A0*Z^m + A1*Z^(m-1) + … + Am - R*B1*Z^(n-1) - … - R*Bn = R*Z^n

      with respect to the unknowns A0, A1, …, Am, B1, …, Bn.

    3. Or, written otherwise,

      A x = y

      where

      x = [A0, A1, …, Am, B1, …, Bn] (a vector),

      A = [Z^m, Z^(m-1), …, 1, -R*Z^(n-1), -R*Z^(n-2), …, -R] (a matrix),

      y = [R*Z^n] (a vector),

      with the rows of A and the entries of y indexed over the frequencies.

    4. This problem has a complex matrix and a complex right-hand side, while we need a real solution. So, we get an overdetermined system Ax = y. Split it into 2 problems:

      A.real x = y.real
      A.imag x = y.imag

      These problems are simply concatenated. Because we are looking for the least squares solution, the real solution of the concatenated problem is about as good as the complex solution of the original problem.

    5. Denoting the concatenated problem Ax = y again, to solve it with least squares, a weighted formulation is used:

      AT * W * A * x = AT * W * y

      where AT is A transposed, W is the weight matrix (in fact, a diagonal matrix; the array of weight coefficients was created in stage 2), x is the unknown, and y is the right-hand side of the original overdetermined problem. As said before, the weights are much bigger at low frequencies and force the rational curve to fit the low-frequency response coefficients more closely.

  9. Improvement:

    For the improvement step, the standard scipy.optimize.leastsq() routine is used.

    1. Use the same trick as above - re-formulate a complex optimization problem as a concatenation of real problems,
    2. Solve it with a standard leastsq() routine.
    Both the first approximation and the improvement steps rewrite their respective complex problems into real problems twice as big. The reasons are a bit different:
    • the linear step must ensure that the solution of the linear problem is real,
    • the improvement step has to pass real arguments to leastsq().
  10. Having acquired the improved coefficients of the polynomials, find their roots to get the zeros and poles.

  11. Determine a more precise scale (i.e. amplitude):

    1. restore the velocity response R0 from the measured acceleration response,
    2. restore the velocity response R1 from the acceleration response calculated from the PZ,
    3. find a more precise scale = (R0, R1) / (R1, R1) - here the parentheses denote a dot product.
  12. write the output files:

    1. out3.zeros.npy - complex zeros
    2. out3.poles.npy - complex poles
    3. out3.scale.npy - real leading coefficient of the numerator polynomial
    4. out3.response.npy - response restored from poles and zeros
  13. Plot the results on two plots:

    1. Response components from the Pavlis method and from poles and zeros as Log. Frequency (Hz) v. Amplitude, with the lower limit set to LOWFRQ.
    2. Poles and zeros as Real v. Imaginary on the s-domain of the Laplace transform
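A compact illustration of the linear part of steps 8-9 (a sketch of the formulation above, not the production code; all names are illustrative):

    import numpy as np

    def first_approximation(Z, R, w, m, n):
        """Weighted linear least squares for the rational-function model.

        Z: complex frequency points (2 * pi * f * 1j), R: target response
        coefficients, w: stage-2 weights, m/n: numerator/denominator
        degrees. Returns the real coefficients x = [A0..Am, B1..Bn].
        """
        # One row per frequency: [Z^m .. Z^0, -R*Z^(n-1) .. -R*Z^0]
        cols = [Z ** k for k in range(m, -1, -1)]
        cols += [-R * Z ** k for k in range(n - 1, -1, -1)]
        A = np.column_stack(cols)
        y = R * Z ** n

        # Stack the real and imaginary parts so that the solution is real.
        A2 = np.concatenate([A.real, A.imag])
        y2 = np.concatenate([y.real, y.imag])
        W = np.concatenate([w, w])

        # Weighted normal equations: AT * W * A * x = AT * W * y.
        AtW = A2.T * W
        return np.linalg.solve(AtW @ A2, AtW @ y2)

The zeros and poles then come from the roots of the two polynomials (e.g. np.roots on [A0, …, Am] and [1, B1, …, Bn]), and the improvement step feeds the same real-stacked residuals to scipy.optimize.leastsq().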

Input files: out1.frequency.npy, out2.response2.npy, out2.weight.npy

Output files:

  • out3.zeros.npy - complex array of zeros
  • out3.poles.npy - complex array of poles
  • out3.scale.npy - real scale coefficient multiplying the rational function
  • out3.response.npy - response calculated from poles and zeros

view 3

Objective: Draws 2 plots:

  • Agreement between the input response and the response calculated from PZ; and
  • Poles and zeros themselves.

Input files: out1.frequency.npy, out2.response2.npy, out3.response.npy, out3.poles.npy, out3.zeros.npy

Output files: none

stage 4

Objective: RESP file generation.

Write the PZ to a RESP-formatted file. Some text processing and a bit of calculation.

Note

Calculations in stages 2 and 3 deal with the acceleration response, while a velocity response is required, so this stage adds another zero to the set of zeros.

Sequence of events:

  1. Input: data from previous stages.

  2. Determine if the unknown MSEED file contains velocities or accelerations,

  3. If it contains velocities, convert the instrument response from acceleration to velocity by appending a zero at 0+0j to the list of zeros,

  4. Set zeros close to the origin to exactly (0+0j), according to the ZERO_LIMIT parameter (see the sketch after this list).

  5. Build a simplified version of a RESP file according to SEED convention as defined by the FDSN and output as URESP in units of velocity (m/s). The RESP file was modeled after this RESP file: RESP.PA.BRU2..HHZ.simplified (the original, un-simplified RESP file: RESP.PA.BRU2..HHZ - see Google Docs folder). Here is the same RESP file with comments: RESP.PA.BRU2..HHZ.simplified.explained.

    You will notice above that the only difference between RESP.PA.BRU2..HHZ and RESP.PA.BRU2..HHZ.simplified is at ~50 Hz. This is due to differences in the RESP files themselves. Open these two RESP files for inspection; you should notice that the differences between RESP.PA.BRU2..HHZ and RESP.PA.BRU2..HHZ.simplified are:

    1. At RESP.PA.BRU2..HHZ.simplified, I have removed all stages labeled “Channel Gain,  BRU2 ch HHZ”. Notice that the “Sensitivity” given at the end of the RESP files is just the product of all of the “Gains” (B058F04) reported in the file. These are, therefore, present only for completeness and are not required for our purposes. Without a priori knowledge about the electronics used, they would be impossible to reconstruct.

      The example data I am providing for testing of the pavlis-relresp program was generated by experiments where the ground motion signals recorded at the miniSEED files actually passed through TWO instruments (or systems in math jargon): First, the sensor (seismometer), then the digitizer (seismograph). So the response files are more complicated than they need to be because they contain detailed information about gains at various stages along the data acquisition chain, from sensor to digitizer. Currently, in pavlis-relresp we treat these two systems (sensor + digitizer) as a SINGLE system. In order to look at just one part of the system (at either the sensor or digitizer separately) we would have to provide a known instrument response for only the sensor or only the digitizer. But, at the moment, we are only interested in the system as a whole, not the individual parts.

    2. At RESP.PA.BRU2..HHZ.simplified, I have also removed all “Decimation” sections and the associated sections labeled “Response (Coefficients)” containing the LONG lists of FIR (Finite Impulse Response) filters which are applied by the digitizer to the raw continuous signals in order to downsample from the actual digitization rate to the desired output rate of 100 samples per second. This accounts for the difference you see in the 2 plots above at ~50 Hz. For our purposes, when dealing with the miniSEED output from an “unknown” system, the FIR filters applied by the digitizer are obviously unknown to us. As with the gains, there is no a priori knowledge about what the digitizer was doing and no way to infer it post facto. But that is not a problem. Since we treat the output miniSEED waveforms as coming from a single “system”, the FIR filters should be reduced to a series of poles and zeros. So, in the end, the RESP files from the manufacturer and the ones we generate will not have identical poles and zeros. The RESP files from the pavlis-relresp program will contain additional poles and zeros that account for the behavior at ~50 Hz. Regardless, the two RESP files should have similar behaviors when viewed with JPlotResp.

  6. To determine A0 and the gain (see the sketch after this list):

    1. calculate the response at NORMFREQ, that is, the value of the rational function R(Z) where Z = NORMFREQ * 2 * pi * j,
    2. set A0 = 1 / abs(R(Z)),
    3. set sensitivity = scale / A0.
  7. Output the results to out4.response.restored.

  8. Plot a comparison between the manufacturer’s and the generated RESPs (if both are available) as Log Frequency (Hz) v. Amplitude.
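A minimal sketch of steps 4 and 6 (helper names are hypothetical; zeros, poles and scale are the stage-3 outputs, while NORMFREQ and ZERO_LIMIT come from setup.py):

    import numpy as np

    def snap_zeros(zeros, zero_limit):
        """Step 4: set zeros closer to the origin than ZERO_LIMIT to 0+0j."""
        zeros = np.asarray(zeros, dtype=complex)
        zeros[np.abs(zeros) < zero_limit] = 0 + 0j
        return zeros

    def a0_and_sensitivity(zeros, poles, scale, normfreq):
        """Step 6: normalize the pole-zero response at NORMFREQ."""
        Z = 2 * np.pi * normfreq * 1j
        # R(Z): the monic rational function built from the poles and zeros.
        resp = np.prod(Z - zeros) / np.prod(Z - poles)
        a0 = 1.0 / abs(resp)
        sensitivity = scale / a0
        return a0, sensitivity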

Adam Ringler cautions: “You set A0 so that the pole/zero response is equal to unity at some fixed frequency (usually you pick this in band and in a flat part of the response).  You then tack on the sensitivity later. We use a cascade of stages in our response.  So stage 1 is the seismometer, stage 2 is the digitizer, and finally stage 0 is a check of stage 1 and stage 2. / One thing we have found annoying is using different frequencies.  If you sample your data at 40 Hz and 1 Hz, then pick something like 10 seconds period.  However, if you also use VH data (1 sample per 10 seconds) then pick something even longer period.  This way you avoid all the mess of different sample rates and different A0s for each sample rate. / Once you do this all you can run it through evalresp to make sure you are happy with things.”

Input files: UFILE, out3.zeros.npy, out3.poles.npy, out3.scale.npy

Output files: out4.response.restored - a RESP formatted file

view 4

Objective: Plots the unknown instrument response (if provided) vs. the calculated response.

As in view0.exe, a very dense grid is used to deal with extra low frequencies.

Input files: out0.response.unknown, out4.response.restored

Output files: none