API Reference

class gsolve.GSolveReport(observations: GravityObservations | GravitySurvey, sites: GravitySites | GravitySurvey, results: GSolveResults, terrain_corrections: TerrainCorrectionData | None = None, anomalies: GravityAnomalies | None = None)[source]#

Bases: object

Class for summarising and reporting results of a single GSolve network adjustment.

This class provides a simple interface for collating the various inputs and outputs from a GSolve run. The report can be printed to the console or saved to an Excel workbook.

Parameters:
observations : GravityObservations

The observations used in the GSolve network adjustment.

sites : GravitySites or GravitySurvey

The sites associated with the observations.

results : GSolveResults

The results of a GSolve network adjustment.

terrain_corrections : TerrainCorrectionData, optional

Terrain corrections for sites. If terrain corrections are included in the anomalies object, these will be used to populate the terrain correction data.

anomalies : GravityAnomalies, optional

The gravity anomalies calculated from the observations.

Attributes:
obs_data : DataFrame

Gravity observation inputs and outputs.

site_data : DataFrame

All site-related data, including inputs, solutions, normal gravity corrections and anomalies.

loop_data : DataFrame

GSolve solutions by loop.

params : dict

The parameter objects for “observations”, “sites”, “solution”, “anomalies” and terrain corrections.

to_excel(filename: str | PathLike, if_workbook_exists: Literal['error', 'replace', 'append'] = 'error', if_sheet_exists: Literal['error', 'replace', 'new'] = 'error') None[source]#

Save ‘report’ to an Excel file.

Parameters:
filename : str or PathLike

Write the Excel workbook to filename.

if_workbook_exists : {‘error’, ‘replace’, ‘append’}, default ‘error’

Behaviour if filename already exists:

  • 'error' : raise an error if the workbook already exists.

  • 'replace' : overwrite the existing workbook.

  • 'append' : attempt to append worksheets to the existing workbook.

if_sheet_exists : {‘error’, ‘replace’, ‘new’}, default ‘error’

Behaviour if the worksheet already exists (only applicable when if_workbook_exists='append'):

  • 'error' : raise a ValueError.

  • 'replace' : overwrite the existing worksheet.

  • 'new' : create a new worksheet with a different name.
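The interplay of these options amounts to a small decision table. A minimal sketch of the documented behaviour, assuming the options map onto file-open modes; `resolve_write_mode` is a hypothetical helper, not part of gsolve:

```python
from pathlib import Path

def resolve_write_mode(filename: str, if_workbook_exists: str = "error") -> str:
    """Map `if_workbook_exists` onto a file-open mode (hypothetical helper)."""
    if if_workbook_exists not in ("error", "replace", "append"):
        raise ValueError(f"invalid option: {if_workbook_exists!r}")
    if not Path(filename).exists():
        return "w"                       # no existing workbook: just create it
    if if_workbook_exists == "error":
        raise FileExistsError(filename)  # refuse to touch an existing file
    return "a" if if_workbook_exists == "append" else "w"
```

The sheet-level option only matters on the `"a"` path, where the writer must then decide per worksheet whether to raise, overwrite, or pick a fresh name.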

class gsolve.GravityAnomalies(absolute_gravity: GSolveResults | DataFrame | Series, sites: GravitySites | GravitySurvey | DataFrame, corrections_parameters: GravityCorrectionParameters | GravityCorrectionProvider | GravityCorrections, terrain_corrections: TerrainCorrectionData | None = None)[source]#

Bases: GSolveTable

Compute and store gravity anomalies for a set of sites.

This class provides a simple mechanism to compute free-air and Bouguer anomalies from the outputs of a gsolve network adjustment.

Parameters:
absolute_gravity : GSolveResults, DataFrame or Series

An object providing site_id’s and associated absolute gravity values for which anomalies will be computed. Can be any of the following:

  • GSolveResults : the output of a gsolve network adjustment.

  • DataFrame : must contain an 'absolute_gravity' column and be indexed by 'site_id'.

  • Series : absolute gravity values indexed by 'site_id'.

sites : GravitySites, GravitySurvey or DataFrame

An object providing the geographic coordinates and ellipsoidal height for each site. Can be any of the following:

  • GravitySites or GravitySurvey : A gsolve object providing site metadata.

  • DataFrame : must contain columns 'latitude', 'longitude' and 'height_ellipsoidal' and be indexed by 'site_id'.

corrections_parameters : GravityCorrectionParameters, GravityCorrectionProvider or GravityCorrections

An object providing either the parameters used to compute the various gravity corrections and/or a set of pre-computed gravity corrections. Can be any of the following:

  • GravityCorrectionParameters : a parameter object defining how to compute gravity corrections. The parameters object will be copied to the self.params attribute.

  • GravityCorrectionProvider : a class for computing gravity corrections as specified in a GravityCorrectionParameters object. This will be used directly to compute the necessary gravity corrections, and its params copied to self.params.

  • GravityCorrections : pre-computed gravity corrections for a set of sites according to parameters in a GravityCorrectionParameters object. The corrections are used directly, and their params copied to self.params.

terrain_corrections : TerrainCorrectionData, optional

An object providing terrain corrections at each site. These are required to compute the complete Bouguer anomaly. If provided, geographic coordinates and terrain corrections will be copied to self.data and the associated TerrainCorrectionParameters objects copied to self.tcorr_params. If None, a terrain correction column 'tcorr:total' will be added and set to NaN.

Attributes:
data : pandas.DataFrame

Table of computed gravity corrections and anomalies indexed by site_id. The primary columns are:

  • absolute_gravity : the input absolute gravity values.

  • normal_gravity_at_ellipsoid : normal gravity at the surface of the ellipsoid self.params.ellipsoid.

  • free_air_correction : the free-air correction.

  • atmospheric_correction : the atmospheric correction due to elevation. Only included if self.params.use_atmospheric_correction is True.

  • bouguer_slab_correction or bouguer_slab_curvature_corrected : the Bouguer correction, with form determined by self.params.use_curvature_corrected.

  • tcorr:* : terrain correction for various zones, if terrain corrections were provided. Note that only the tcorr:total column is used in anomaly calculations.

  • tcorr:total : sum of contributions from each terrain correction zone. Will be NaN if no terrain corrections were provided.

  • free_air_anomaly : the free-air anomaly in mGal.

  • bouguer_anomaly_simple : the Bouguer anomaly without terrain corrections.

  • bouguer_anomaly_complete : the Bouguer anomaly including terrain corrections. Will be NaN if no terrain corrections were provided.
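The anomaly columns above follow the standard gravity-reduction arithmetic. A self-contained sketch in plain Python (not gsolve code; the constants and column names simply mirror the descriptions above):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
MS2_TO_MGAL = 1e5  # 1 m/s^2 = 1e5 mGal

def anomalies(g_abs, normal_gravity, height, density=2670.0,
              free_air_gradient=0.3087691, tcorr_total=float("nan")):
    """Illustrative free-air / Bouguer anomaly arithmetic.

    g_abs and normal_gravity in mGal; height in metres above the ellipsoid.
    """
    fac = free_air_gradient * height                            # free-air correction
    faa = g_abs - normal_gravity + fac                          # free_air_anomaly
    slab = 2.0 * math.pi * G * density * height * MS2_TO_MGAL   # planar Bouguer slab
    ba_simple = faa - slab                                      # bouguer_anomaly_simple
    ba_complete = ba_simple + tcorr_total                       # NaN without terrain data
    return faa, ba_simple, ba_complete
```

With no `tcorr_total` supplied, `bouguer_anomaly_complete` is NaN, matching the behaviour documented for the tcorr:total column.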

params : GravityCorrectionParameters

A copy of the parameters used to compute corrections and anomalies:

  • params.ellipsoid : the ellipsoid used to compute normal gravity.

  • params.density_crust : the crustal density used in Bouguer corrections.

  • params.density_water : the water density used in Bouguer corrections.

  • params.spherical_cap_radius : the radius of spherical cap used in computing curvature-corrected form of the Bouguer correction.

  • params.use_curvature_corrected : The type of Bouguer correction used. If True, the Bouguer correction was the curvature-corrected form, otherwise the infinite planar slab form was used.

  • params.use_atmospheric_correction : If True, atmospheric corrections were included in anomaly calculations.

tcorr_params : dict[str, TerrainCorrectionParameters]

A dictionary of copies of the TerrainCorrectionParameters objects associated with terrain corrections. The keys are the terrain correction zone ID’s, and will partially correspond to columns in the self.data attribute. Will be an empty dict if no terrain corrections were provided.

class gsolve.GravityCorrectionParameters(ellipsoid: str = 'GRS80', density_crust: float = 2670.0, density_water: float = 1030.0, spherical_cap_radius: float = 166735.0, use_curvature_corrected: bool = True, use_atmospheric_correction: bool = True, free_air_gradient: float = 0.3087691)[source]#

Bases: GSolveParameters

Class to store parameters for normal gravity and anomaly calculations.

Parameters:
ellipsoid : “WGS84” or “GRS80” (default)

The reference ellipsoid used in normal gravity calculations.

density_crust : float, default = 2670.0

Density of crust in Mg.m**-3

density_water : float, default = 1030.0

Density of water in Mg.m**-3

spherical_cap_radius : float, default = 166735.0

The radius in metres of the spherical cap correction. The default of 166735.0 m is equivalent to 1.5 degrees of arc on a spherical earth.

use_curvature_corrected : bool, default True

Specify the type of Bouguer correction to compute and use in subsequent anomaly calculations. If True, Bouguer corrections are curvature corrected. If False, Bouguer corrections are for an infinite horizontal slab.

use_atmospheric_correction : bool, default True

Whether to include the atmospheric correction in gravity corrections and subsequent anomaly calculations.

free_air_gradient : float, default 0.3087691

The free air gradient in mGal/m.
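For reference, normal gravity on the surface of the chosen ellipsoid can be computed with Somigliana's closed formula. An illustrative sketch using the published GRS80 constants (gsolve's own implementation and constants may differ slightly):

```python
import math

# Published GRS80 constants for Somigliana's closed-form normal gravity formula.
GAMMA_E = 9.7803267715  # normal gravity at the equator, m/s^2
K = 0.001931851353      # Somigliana's constant
E2 = 0.00669438002290   # first eccentricity squared

def normal_gravity_grs80(latitude_deg: float) -> float:
    """Normal gravity on the GRS80 ellipsoid surface, in mGal (sketch)."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    gamma = GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)  # m/s^2
    return gamma * 1e5  # convert to mGal
```

This reproduces the defining GRS80 values of roughly 978032.7 mGal at the equator and 983218.6 mGal at the poles.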

bouguer_correction_fields() list[str][source]#

Return a list of the names of the correction methods required for the specified Bouguer method.

bouguer_correction_type() str[source]#

Return the name of the specified Bouguer correction method.

density_crust: float = 2670.0#
density_water: float = 1030.0#
ellipsoid: str = 'GRS80'#
free_air_gradient: float = 0.3087691#
spherical_cap_radius: float = 166735.0#
use_atmospheric_correction: bool = True#
use_curvature_corrected: bool = True#
class gsolve.GravityCorrectionProvider(params: None | GravityCorrectionParameters = None, **kwargs)[source]#

Bases: object

Class to calculate normal gravity and various gravity corrections.

Parameters:
params : GravityCorrectionParameters or None

Object defining parameters used in computing gravity corrections. If None, a GravityCorrectionParameters object will be created using default values.

kwargs : dict

Additional keyword arguments used to override parameters in the supplied GravityCorrectionParameters object params. If params is None, kwargs are used to override default parameter values.

Attributes:
params : GravityCorrectionParameters

Parameters used to compute the gravity corrections.

classmethod available_corrections() tuple[str, ...][source]#

Return a tuple of the available gravity correction methods.

bouguer_corrections(sites: DataFrame | GravitySites) GravityCorrections[source]#

Calculate corrections required for computing a Bouguer anomaly as defined in self.params.

Parameters:
sites : pd.DataFrame | GravitySites

An object providing site latitude and ellipsoidal height. See GravityCorrectionProvider.compute() for details.

Returns:
GravityCorrections

Object containing Bouguer corrections and the correction parameters.

compute(sites: GravitySites | DataFrame, corrections: str | Sequence[str] | None = None, column_names: dict[str, str] | None = None, include_coords: bool = False) GravityCorrections[source]#

Compute gravity corrections at sites.

Parameters:
sites : GravitySites | DataFrame

An object providing site latitude and ellipsoidal height, indexed by 'site_id'. If sites is a DataFrame, it is expected to have columns named 'latitude' and 'height_ellipsoidal', unless alternative columns are specified using the column_names argument.

corrections : str | Sequence[str], optional

An array or string of corrections to compute. By default, compute all corrections required for generating a Bouguer anomaly, as specified in self.params.

column_names : dict[str, str] | None, optional

A dictionary mapping the expected columns latitude and height_ellipsoidal to alternative column names. E.g. {'latitude': 'lat', 'height_ellipsoidal': 'height'}.

include_coords : bool, default False

If True, include site latitude and height in the output.

Returns:
GravityCorrections

Object containing computed gravity corrections and the correction parameters.

free_air_corrections(sites: DataFrame | GravitySites) GravityCorrections[source]#

Calculate corrections required for computing a free air anomaly.

Parameters:
sites : pd.DataFrame | GravitySites

An object providing site latitude and ellipsoidal height. See GravityCorrectionProvider.compute() for details.

Returns:
GravityCorrections

Object containing free air corrections and the correction parameters.

params: GravityCorrectionParameters#
class gsolve.GravityObservations(site_id: ArrayLike, datetime: ArrayLike, meter_id: ArrayLike, meter_reading: ArrayLike | None = None, meter_reading_mgal: ArrayLike | None = None, obs_id: ArrayLike | None = None, loop: ArrayLike | None = None, active: ArrayLike | None = None, timedelta_unit: str | int | float | Timedelta | timedelta = '1h', fixed_time_datum: int | float | str | date | datetime64 | Timestamp | None = None, **kwargs)[source]#

Bases: GSolveTable

Class to store and process gravity observations.

Parameters:
site_id : ArrayLike

Observation site identifier.

datetime : ArrayLike

The observation datetime in a format parseable by the pandas.to_datetime() method. All datetimes will be converted to UTC with timezone information removed.

meter_id : ArrayLike

Gravity meter identifier.

meter_reading : ArrayLike, optional

Observed meter readings in meter units. At least one of meter_reading or meter_reading_mgal must be specified.

meter_reading_mgal : ArrayLike, optional

Observed meter readings in mGal. At least one of meter_reading or meter_reading_mgal must be specified.

obs_id : ArrayLike, optional

Array-like object containing unique observation identifiers. If omitted, unique identifiers will be generated from the site_id and datetime fields.

loop : ArrayLike, optional

Array-like object containing survey loop identifiers. If omitted, all observations will be assigned to loop ‘1’.

active : ArrayLike, optional

Array-like object indicating whether an observation is ‘active’ (True) or inactive (False). Only ‘active’ observations will be included as datapoints in the network adjustment. All observations are considered active by default.

timedelta_unit : TimedeltaConvertibleTypes, default “1h”

Time interval unit for timedelta calculations. The default is ‘1h’ (i.e. 1 hour), meaning ‘survey time’ is in decimal hours.

fixed_time_datum : DatetimeScalar, optional

The time datum used to compute survey time deltas. If None, the datetime of the earliest observation will be used.

**kwargs

Additional keyword arguments can be used to specify additional fields to be included in the data DataFrame attribute.

Attributes:
data : pandas.DataFrame

DataFrame containing the gravity observations, gravity reductions and other derived information.

params : GravityObservationsParameters

Return parameters as a GravityObservationsParameters object.

_known_fields : dict[str, DataFieldSpecification]

A dictionary of ‘known’ field names and their associated DataFieldSpecification, which defines the expected data type, default value, and other metadata for that field. If data are added using set_column(name, value, ...) and ‘name’ is in _known_fields, the associated DataFieldSpecification will be used to validate and coerce the data before it is added to the obj.data dataframe.

activate(obs_id: str | Iterable[str] | None = None, site_id: str | Iterable[str] | None = None, loop: str | Iterable[str] | None = None, add_metadata: bool = False) None[source]#

Activate observations.

Only ‘active’ observations are included in gsolve solutions. By default, all observations are considered ‘active’. This method allows for the reactivation of observations that were specified as inactive in the input data or by calling the deactivate method.

Parameters:
obs_id : str or array_like, optional

The obs_id of the observations to activate.

site_id : str or array_like, optional

The site_id of the observations to activate.

loop : str or array_like, optional

The loop of the observations to activate.

add_metadata : bool, default=False

Not implemented.

Raises:
ValueError

If any of the specified obs_id, site_id or loop values are not in the data.

See also

deactivate

Equivalent method for deactivating observations.

apply_dial_to_mgal(converter: MeterReadingConverter, check_meter_id: bool = True, check_datetime: bool = True, set_converter_id_column: bool = True, input_column_name: str = 'meter_reading', output_column_name: str = 'meter_reading_mgal') None[source]#

Apply dial-to-mGal conversion to the observed “meter_reading”.

Parameters:
converter : MeterReadingConverter

The meter reading converter object.

check_meter_id : bool, default=True

Only convert readings where “meter_id” matches the converter “meter_id”. If False, ignore “meter_id” and convert all observations.

check_datetime : bool, default=True

Only convert readings where the observation datetime falls within the converter’s valid date range. If False, ignore “datetime” and convert all readings.

set_converter_id_column : bool, default=True

If True, set the “meter_reading_converter_id” column to the converter_id from the MeterReadingConverter object.

input_column_name : str, default=’meter_reading’

The column holding gravity readings to convert.

output_column_name : str, default=’meter_reading_mgal’

The column to store the converted readings.

apply_earth_tide_correction(sites: GravitySites, tide_corrector: None | EarthTideCorrectionProvider = None, column_name: str = 'earth_tide_corr', **kwargs) None[source]#

Compute earth tide correction and store in column column_name.

Parameters:
sites : GravitySites

GravitySites object providing latitude, longitude and height_ellipsoidal for each site.

tide_corrector : EarthTideCorrectionProvider, optional

An EarthTideCorrectionProvider object to use. If not specified, a LongmanTidalCorrection object with default parameters will be used.

column_name : str, default=’earth_tide_corr’

The column name to store the earth tide correction.

kwargs : dict

Additional keyword arguments passed to the provider’s tidal_correction() method.

See also

gsolve.tide.earth_tide.LongmanTidalCorrection

Longman tidal correction class.

gsolve.tide.earth_tide.gravimetric_factor

Calculate amplification factor from Love numbers.

apply_ocean_load_correction(corrector: OceanLoadCorrectionProvider, column_name: str = 'ocean_load_corr', if_not_matched: Literal['error', 'warn'] = 'error', **kwargs) None[source]#

Get ocean loading corrections and store in column column_name.

This method calls the ocean_load_correction() method of the provided corrector object to retrieve ocean loading corrections for each observation. Ocean load corrections will typically have been pre-computed in third-party software such as Quick Tide Pro.

Parameters:
corrector : OceanLoadCorrectionProvider

An object providing ocean loading corrections.

sites : GravitySites, optional

GravitySites object providing latitude, longitude and height. Only required if the corrector requires site location parameters.

column_name : str, default=’ocean_load_corr’

The column name to store the ocean loading correction.

if_not_matched : {‘error’, ‘warn’}, default ‘error’

Behaviour when an observation cannot be matched with the corrections provided by the corrector, e.g. for the time-series based corrector QuickTideTimeSeries, when datetimes fall outside the range of the time series. Options are:

  • ‘error’ : raise a ValueError.

  • ‘warn’ : issue a warning, and return NaN for unmatched observations.

kwargs : dict

Additional keyword arguments passed to the provider’s ocean_load_correction() method.
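The if_not_matched options describe a lookup-with-fallback pattern. A minimal sketch, with a plain dict standing in for a QuickTideTimeSeries-style corrector; `match_corrections` is a hypothetical helper, not gsolve API:

```python
import math
import warnings
from datetime import datetime

def match_corrections(obs_times, series, if_not_matched="error"):
    """Look up pre-computed corrections for each observation (sketch).

    `series` is a plain {datetime: mGal} dict standing in for a
    time-series corrector.
    """
    out = []
    for t in obs_times:
        if t in series:
            out.append(series[t])
        elif if_not_matched == "error":
            raise ValueError(f"no ocean load correction for {t}")
        else:
            warnings.warn(f"no ocean load correction for {t}")
            out.append(math.nan)  # unmatched observations become NaN
    return out
```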

calculate_tide_corrected_gravity() None[source]#

Calculate corrected gravity values and assign to column ‘gravity_corr’.

check_data(warn: bool = True) bool[source]#

Check the data for errors.

Parameters:
warn : bool, default=True

If True, print warnings.

Returns:
bool

True if data is OK, False otherwise.

deactivate(obs_id: str | Iterable[str] | None = None, site_id: str | Iterable[str] | None = None, loop: str | Iterable[str] | None = None, add_metadata: bool = False) None[source]#

Deactivate observations.

Deactivated observations are not included in gsolve solutions.

Parameters:
obs_id : str or array_like, optional

The obs_id of the observations to deactivate.

site_id : str or array_like, optional

The site_id of the observations to deactivate.

loop : str or array_like, optional

The loop of the observations to deactivate.

add_metadata : bool, default=False

Not implemented.

See also

activate

Activate observations

property endtime: Timestamp#

Latest observation datetime.

fixed_time_datum() None | Timestamp[source]#

Return time datum used for calculating timedelta.

Returns:
Timestamp or None

None if no fixed time datum has been set.

property loop_ids: list[str]#

Return unique survey loop id’s sorted by loop start time.

loop_summary() DataFrame[source]#

Return a summary of the observations by loop.

params() GravityObservationsParameters[source]#

Return parameters as a GravityObservationsParameters object.

plot_network_map(sites: GravitySites, savefilename: str | PathLike | None = None, figsize: tuple[float, float] = (10, 10), marker_scale_factor: float = 25, plot_stn_labels: bool = False, ax: Axes | None = None, **kwargs) tuple[Figure, Axes][source]#

Plot a network map showing the connections between stations. Station markers are scaled according to the number of occupations.

Parameters:
sites : GravitySites

GravitySites object providing the location of each site.

savefilename : str or PathLike, optional

If not None, save the plot to savefilename. The default is None.

figsize : tuple, default=(10, 10)

The pyplot figure size.

marker_scale_factor : float, default=25

Scale marker size by this value.

plot_stn_labels : bool, default=False

Plot station name next to station points.

ax : Axes, optional

The matplotlib axes to plot on. If None, a new figure and axes are created.

**kwargs : dict

Optional keyword arguments passed directly to matplotlib.pyplot.plot().

Returns:
tuple[Figure, Axes]

The figure and axes objects (fig, ax) for the network map.
plot_observed_data(loop: str | int, x_column: str = 'datetime', y_column: str = 'meter_reading_mgal', savefilename: str | PathLike | None = None, figsize: tuple[float, float] = (12, 8), ax=None, **kwargs) tuple[Figure, Axes][source]#

Plot observed data.

Parameters:
loop : str or int

The loop to plot. Use loop=’all’ to plot all data, ignoring loops.

x_column : str, default=’datetime’

The ‘x’ data column to plot.

y_column : str, default=’meter_reading_mgal’

The ‘y’ data column to plot.

savefilename : str or PathLike, optional

If not None, save the plot to savefilename.

figsize : tuple, default=(12, 8)

The pyplot figure size.

ax : Axes, optional

The matplotlib axes to plot on. If None, a new figure and axes are created.

**kwargs : dict

Optional keyword arguments passed directly to matplotlib.pyplot.plot().

Returns:
tuple[Figure, Axes]

The figure and axes objects (fig, ax) for the plot of observed data.
plot_site_visits(loop: str) None[source]#

Plot the order of station visits for each loop.

Parameters:
loop : str

Loop number to plot.

Returns:
A figure showing the station occupation order for each loop.
set_calibration_factor(calibration_factor: float = 1.0, meter_id: str | None = None) None[source]#

Set gravity meter calibration factor.

Parameters:
calibration_factor : float or array_like

The gravity meter calibration factor. Default is 1.0.

meter_id : str, default None

Set calibration_factor for the specified meter_id only. If data contains multiple gravity meters, meter_id must be specified.

Raises:
ValueError

If data contains multiple gravity meter_id’s and meter_id is None, or if the specified meter_id is not in the data.

set_fixed_time_datum(t: int | float | str | date | datetime64 | Timestamp | None, set_tdelta: bool = True) None[source]#

Set time datum used for calculating timedelta.

By default, gsolve will use the earliest survey and/or loop observation as the time datum. The fixed_time_datum feature is provided for reproducibility purposes. Legacy gsolve versions used the J1900.00 epoch as the datum.

Warning

Setting a fixed_time_datum that is far from the survey time range in conjunction with a small timedelta_unit will lead to very large time_delta values being used in gsolve drift calculations. Results may then be incorrect due to floating point rounding errors.

Parameters:
t : pd.Timestamp or None

The time datum to use. If None, the fixed time datum is removed.

set_obs_id(idx: ArrayLike | str | None = None, duplicated_obs_id: Literal['error', 'keep', 'rename'] = 'rename', drop: bool = True) None[source]#

Set obs_id as the index of the data DataFrame attribute.

Warning: This method will overwrite the existing index of obj.data.

Parameters:
idx : ArrayLike, str or None, default None

The obs_id values to set as the index of obj.data. Behaviour depends on the dtype of idx. If idx is:

  • None : a default obs_id will be auto-generated using the _default_index_generator() method.

  • str : idx is assumed to be the name of a column in obj.data to be set as the index. Equivalent to obj.data.set_index(idx). Note that the index will be renamed to obs_id.

  • array-like : idx is assumed to be a sequence of obs_id values to set as the index.

duplicated_obs_id : {‘error’, ‘keep’, ‘rename’}, default ‘rename’

The behaviour when duplicate obs_id values are found:

  • 'error' : raise a ValueError.

  • 'keep' : issue a warning and keep duplicate obs_id’s as-is.

  • 'rename' : issue a warning and rename duplicate obs_id’s by appending a 3-digit sequence number.

drop : bool, default True

If obs_id is to be set from an existing column (i.e. where idx is a string), this flag indicates whether to drop that column from obj.data afterwards.

Raises:
ValueError

If duplicated_obs_id='error' and duplicate obs_id values are found.
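The 'rename' behaviour can be sketched as follows; `dedupe_obs_ids` is a hypothetical helper illustrating the documented 3-digit suffix scheme, not the actual gsolve implementation:

```python
from collections import Counter

def dedupe_obs_ids(obs_ids, duplicated_obs_id="rename"):
    """Resolve duplicate obs_id values per the documented options (sketch)."""
    counts = Counter(obs_ids)
    dupes = {k for k, n in counts.items() if n > 1}
    if dupes and duplicated_obs_id == "error":
        raise ValueError(f"duplicate obs_id values: {sorted(dupes)}")
    if duplicated_obs_id != "rename":
        return list(obs_ids)  # 'keep': leave duplicates as-is
    seen = Counter()
    out = []
    for oid in obs_ids:
        if oid in dupes:
            out.append(f"{oid}_{seen[oid]:03d}")  # append 3-digit sequence number
            seen[oid] += 1
        else:
            out.append(oid)
    return out
```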
set_tdelta() None[source]#

Calculate time deltas for survey and loop observations and assign them to the columns “survey_tdelta” and “loop_tdelta”.

The datum for survey_tdelta is the earliest observation (i.e. self.starttime), while the datum for loop_tdelta is the earliest observation in each loop.

The default datum(s) can be overridden by calling set_fixed_time_datum().
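The datum logic can be sketched in plain Python (illustrative only; gsolve computes these from pandas datetimes and stores them as the survey_tdelta and loop_tdelta columns):

```python
from datetime import datetime, timedelta

def tdeltas(times, loops, timedelta_unit=timedelta(hours=1), datum=None):
    """Survey and loop time deltas in units of `timedelta_unit` (sketch).

    The survey datum is the earliest observation (or a fixed datum);
    each loop's datum is its own earliest observation.
    """
    survey_datum = datum or min(times)
    loop_datum = {}
    for t, lp in zip(times, loops):
        loop_datum[lp] = min(loop_datum.get(lp, t), t)
    survey_td = [(t - survey_datum) / timedelta_unit for t in times]
    loop_td = [(t - loop_datum[lp]) / timedelta_unit for t, lp in zip(times, loops)]
    return survey_td, loop_td
```

With the default one-hour unit, both columns come out in decimal hours, as described for timedelta_unit.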

set_timedelta_unit(unit: str | int | float | Timedelta | timedelta, set_tdelta: bool = True) None[source]#
site_summary(data_col: str | None = None) DataFrame[source]#

Return a summary of the observations by site.

property starttime: Timestamp#

Earliest observation datetime.

timedelta_unit() Timedelta[source]#

Time interval unit used for calculating survey timedelta.

Can be any valid argument for pandas.Timedelta(). The default is ‘1h’ (i.e. 1 hour), meaning survey time is in decimal hours.

Warning

Setting a time_delta unit that is very small in conjunction with setting a ‘distant’ fixed_time_datum will lead to very large timedelta values being used in gsolve drift calculations. Results may then be incorrect due to floating point rounding errors.

to_excel(fname: str | PathLike, sheet_name: str | None = None, params_sheet_name: str | None = None, normalize_column_names: bool = True, expand_datetime: str | None = 'datetime', drop_datetime: bool = False, bool_to_int: bool = True, include_unknown_fields: bool | Sequence[str] = False, active_only: bool = False, if_workbook_exists: Literal['error', 'replace', 'append'] = 'error', if_sheet_exists: Literal['error', 'replace', 'new'] = 'error', **kwargs) None[source]#

Write data to an excel file.

write_to_csv(fname: str | PathLike, normalize_column_names: bool = True, expand_datetime: str | None = 'datetime', drop_datetime: bool = False, bool_to_int: bool = True, include_unknown_fields: bool | Sequence[str] = False, active_only: bool = False, **kwargs) None[source]#

Write data to a csv file.

Parameters:
fname : str or PathLike

Output file name.

normalize_column_names : bool, default True

Make column names lowercase with no spaces.

expand_datetime : str or None, default ’datetime’

Expand datetime fields to

bool_to_int : bool, default True

Convert True, False to 1, 0.

**kwargs

Optional arguments to be passed to pandas.to_csv.

See also

pandas.to_csv
class gsolve.GravitySites(site_id: ArrayLike, latitude: ArrayLike, longitude: ArrayLike, height_ellipsoidal: ArrayLike, reference_gravity: ArrayLike | float | None = nan, gsolve_tie: ArrayLike | bool | None = False, **kwargs: ArrayLike)[source]#

Bases: GSolveTable

Class to store gravity site/station data and metadata.

Parameters:
site_id : ArrayLike of str

The unique identifier for each site.

latitude : ArrayLike of float

The latitude of the site in decimal degrees.

longitude : ArrayLike of float

The longitude of the site in decimal degrees.

height_ellipsoidal : ArrayLike of float

The height of the site above the ellipsoid in meters.

reference_gravity : ArrayLike or None, default nan

The reference gravity value at the site in mGal.

gsolve_tie : ArrayLike or None, default False

An array of booleans indicating whether the site will be used as a fixed tie when solving for drift. A valid tie must have a non-null reference_gravity value.

**kwargs : dict[str, ArrayLike]

Additional fields to be added to the site data.

Attributes:
data : DataFrame

Gravity site/station information stored as a pandas DataFrame. The data fields (i.e. columns) required by GSolve have explicitly defined names, dtypes and default values. The preferred method for setting fields is to use the gsolve.core.data.GSolveTable.set_columns() method. Other fields may be added to obj.data as required, but will be ignored by gsolve.

The defined fields are:

  • Fields that must be defined at object creation:

    • site_id, index, str: unique station identifier.

    • latitude, float: station latitude in decimal degrees.

    • longitude, float: station longitude in decimal degrees.

    • height_ellipsoidal, float: elevation relative to the ellipsoid.

  • Fields that will be created with default values if not specified at object creation:

    • reference_gravity, float: reference gravity value at that site, NaN if unknown. Typically absolute gravity, but could be set to some arbitrary value if no reference gravity data are available. At least one reference_gravity value must be set to solve for drift.

    • gsolve_tie, bool: indicates whether this site is to be used as “tie” when solving for drift. At least one site with a reference_gravity value must be set as a gsolve_tie.

  • Cartesian coordinates. These are not required for gsolve, but are required for calculating terrain corrections.

    • easting, float: site locations in some cartesian coordinate system

    • northing, float: site locations in some cartesian coordinate system

    • height_orthometric, float: height of site above some datum
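The tie rule above (gsolve_tie set and reference_gravity non-null) can be sketched with plain dicts standing in for rows of obj.data; `valid_ties` is a hypothetical helper, not gsolve API:

```python
import math

def valid_ties(sites):
    """Return site_ids usable as gsolve ties (sketch of the documented rule)."""
    return [s["site_id"] for s in sites
            if s.get("gsolve_tie", False)                               # flagged as a tie
            and not math.isnan(s.get("reference_gravity", math.nan))]   # non-null reference gravity
```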

activate_ties(site_id: str | ArrayLike | None = None) None[source]#

Set one or more “tie” sites as active, i.e. to be used in gsolve.

Parameters:
site_id : str or ArrayLike, default None

The site_id(s) to be activated. If None, all sites with reference gravity are activated.

check_data(warn: bool = True) bool[source]#

Check the data for errors.

Parameters:
warn : bool, default=True

If True, print warnings for common errors.

Returns:
bool

True if data is OK, False otherwise.

deactivate_ties(site_id: str | ArrayLike | None = None) None[source]#

Set one or more “tie” sites as inactive, i.e. not used in gsolve.

Parameters:
site_id : str or ArrayLike, default None

The site_id(s) to be deactivated. If None, all sites with reference gravity are deactivated.

classmethod from_excel(excel_file: str | PathLike, sheet_name: str | int | list[int | str] | None = None, ignore_unknown_fields: bool = True, parse_split_datetime: bool = True, mapper: Mapping[Any, Hashable] | Callable[[Any], Hashable] | None = None, **kwargs) GravitySites[source]#

Read a GravitySites object from an excel workbook.

Parameters:
excel_file : str or PathLike

The excel workbook to read from.

sheet_name : str | int, optional

The worksheet name or location within excel_file. If not specified, attempt to read from the standard sheet name ‘sites’ and then from the legacy sheet name ‘Locations’.

ignore_unknown_fields : bool, default True

If True, columns that have no defined specification are dropped. Use GravitySites.known_fields() to return a list of the defined fields.

mapper : dict or function, optional

Dict-like or function transformations to apply to column names before creating the object. See the DataFrame.rename method for full documentation.

kwargs

Arguments passed to the pandas.read_excel method.

Returns:
GravitySites

The GravitySites object created from the excel worksheet.

get_points(xcol: str, ycol: str, zcol: str | None = None) tuple[ndarray[tuple[Any, ...], dtype[float64]], ndarray[tuple[Any, ...], dtype[float64]], ndarray[tuple[Any, ...], dtype[float64]]][source]#
get_ties(active_only: bool = True, gravity_only: bool = True) DataFrame[source]#

Return the sites that will be used as gsolve ties.

Parameters:
active_only : bool, default True

If True, only return active ties.

gravity_only : bool, default True

If True, only return reference_gravity values, otherwise return all fields.

Returns:
DataFrame
sample_elevation(dem: DataArray | Dataset | str | PathLike, output_col: str | None = None, xcol: str = 'easting', ycol: str = 'northing', method: str = 'nearest') None | Series[source]#

Get elevations at site locations from a DEM/xarray grid.

Parameters:
demxarray.DataArray, xarray.Dataset, str or PathLike

The array of values to sample. If dem is a str or PathLike, it is treated as a path to a grid file and loaded before sampling.

output_colstr or None, default None

If output_col is defined, write sampled values to obj.data[output_col]. If output_col is None, return a Series of sampled values.

xcolstr, optional

The column holding x coordinates, by default “easting”

ycolstr, optional

The column holding y coordinates, by default “northing”

methodstr, default “nearest”

The interpolation method used. See xarray.DataArray.interp for available options.

Returns:
elevationsSeries or None

None if output_col is defined, otherwise a Series of sampled elevations.
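A minimal sketch of the idea behind the default method="nearest": each site is assigned the value of the closest grid node. The grid coordinates and elevations below are illustrative, not tied to any real dataset.

```python
import numpy as np

# Illustrative regular grid: easting/northing node coordinates and elevations.
grid_x = np.array([0.0, 100.0, 200.0])   # easting of grid nodes
grid_y = np.array([0.0, 100.0])          # northing of grid nodes
elev = np.array([[10.0, 12.0, 15.0],     # row for northing = 0
                 [11.0, 13.0, 16.0]])    # row for northing = 100

def sample_nearest(x: float, y: float) -> float:
    # Pick the grid node with the smallest coordinate distance on each axis.
    ix = int(np.abs(grid_x - x).argmin())
    iy = int(np.abs(grid_y - y).argmin())
    return float(elev[iy, ix])

print(sample_nearest(95.0, 10.0))  # nearest node is (100, 0) -> 12.0
```

The real implementation delegates to xarray's interpolation machinery, so other method values (e.g. linear) behave as described in xarray.DataArray.interp.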

set_reference_gravity(ref_sites: ReferenceGravity | DataFrame | dict, reset: bool = False) None[source]#

Load reference gravity values into the sites table.

Parameters:
ref_sitesReferenceGravity | DataFrame | dict

The reference gravity values to be loaded.

resetbool, optional

Blank any existing reference gravity values, by default False.

to_excel(excel_file: str | PathLike, sheet_name: str | None = None, normalize_column_names: bool = True, bool_to_int: bool = True, include_unknown_fields: bool = False, if_workbook_exists: Literal['error', 'replace', 'append'] = 'error', if_sheet_exists: Literal['error', 'replace', 'new'] = 'error', **kwargs) None[source]#

Write data DataFrame to an excel file.

Parameters:
excel_filestr or PathLike

The excel workbook to write to.

sheet_namestr, default None

The name of the worksheet to write to.

normalize_column_namesbool, default True

Convert column names to snake case.

bool_to_intbool, default True

Convert boolean True/False to 1,0.

include_unknown_fieldsbool, default False

Include fields not in the known fields.

if_workbook_exists{“error”, “replace”, “append”}, default “error”

Behaviour if the excel file already exists.

if_sheet_exists{“error”, “replace”, “new”}, default “error”

Behaviour if the worksheet already exists.

**kwargs

Additional keyword arguments passed to pandas.DataFrame.to_excel.

See also

gsolve.core.excel_io._core_excel_io.write_excel_worksheet

For complete explanation of parameters if_workbook_exists and if_sheet_exists.

pandas.DataFrame.to_excel

The underlying function used to write the DataFrame to the excel file.

write_to_csv(fname: str | PathLike, normalize_column_names: bool = True, expand_datetime: str | None = None, drop_datetime: bool = False, bool_to_int: bool = True, include_unknown_fields: bool = False, **kwargs) None[source]#

Write data DataFrame to csv file.

Parameters:
fnamestr or PathLike

The path to the output csv file.

normalize_column_namesbool, default True

Convert column names to snake case.

bool_to_intbool, default True

Convert boolean True/False to 1,0.

include_unknown_fieldsbool, default False

Include fields not in the known fields.

**kwargs

Additional keyword arguments passed to pandas.DataFrame.to_csv.

See also

pandas.DataFrame.to_csv

The underlying function used to write the DataFrame to a csv.

class gsolve.GravitySurvey(obs: GravityObservations, sites: GravitySites)[source]#

Bases: object

Class to store gravity observations and sites and facilitate running gsolve.

Parameters:
obsGravityObservations

The gravity observations object.

sitesGravitySites

The gravity sites object.

apply_dial_to_mgal(converter: MeterReadingConverter, input_column_name: str = 'meter_reading', output_column_name: str = 'meter_reading_mgal') None[source]#

Apply dial to mgal conversion to observations.

Parameters:
converterMeterReadingConverter, optional

The dial to mgal converter object. If None, meter_reading data are assumed to be in mgal and no conversion is applied.

input_column_namestr, default=’meter_reading’

The input column name to convert.

output_column_namestr, default=’meter_reading_mgal’

The output column name to store the converted data.

apply_earth_tide_correction(tide_corrector: EarthTideCorrectionProvider, **kwargs) None[source]#
calculate_tide_corrected_gravity() None[source]#
classmethod from_excel(fname: str | PathLike, ignore_unknown_fields: bool = True, parse_split_datetime: bool = True) Self[source]#

Read gravity observations and sites from an excel file.

Parameters:
fnamestr or PathLike

The path to the excel file.

ignore_unknown_fieldsbool, default is True

Ignore unknown fields in the excel file.

parse_split_datetimebool, default is True

Parse split datetime fields into a single datetime column.

pre_flight_check(warn: bool = True) bool[source]#

Check data are valid before running gsolve.

Parameters:
warnbool, default=True

If True, print warnings.

set_calibration_factor(calibration_factor: float = 1.0, meter_id: str | None = None) None[source]#
set_reference_gravity(ref_grav: ReferenceGravity | DataFrame, reset: bool = False) None[source]#
solve_lstsq(method: Literal[1, 2, 3], percentile_clipping: float = 100, use_loops: bool = True, calculate_calibration_factor: bool = False) GSolveResults[source]#
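solve_lstsq performs a least-squares network adjustment. The toy sketch below shows the underlying idea on a three-station network with one tie held fixed; the station names, observed differences, and the use of numpy.linalg.lstsq are illustrative, not gsolve's internal implementation.

```python
import numpy as np

# Stations A, B, C, with A held fixed as a tie (g_A = 0 by construction).
# Unknowns: g_B, g_C. Observed relative differences (mgal), deliberately
# slightly inconsistent so the adjustment has residuals to distribute:
#   B - A = 1.2,  C - B = 0.8,  C - A = 2.1
A = np.array([[1.0, 0.0],    # observation equation for g_B - g_A
              [-1.0, 1.0],   # observation equation for g_C - g_B
              [0.0, 1.0]])   # observation equation for g_C - g_A
d = np.array([1.2, 0.8, 2.1])

# Least-squares solution spreads the misclosure across the loop.
g, residuals, rank, _ = np.linalg.lstsq(A, d, rcond=None)
print(np.round(g, 3))  # adjusted estimates of g_B and g_C
```

The real method additionally supports drift modelling (method 1/2/3), percentile clipping of residuals, per-loop solutions, and calibration factor estimation.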
class gsolve.LaCosteRombergDialConverter(meter_id: str, counter_reading: ArrayLike, value_mgal: ArrayLike, interval_factor: ArrayLike | None = None, starttime: int | float | str | date | datetime64 | Timestamp | NaTType | None = None, endtime: int | float | str | date | datetime64 | Timestamp | NaTType | None = None)[source]#

Bases: object

Convert LaCoste-Romberg G and D meter readings to mgal.

Implements table-based linear interpolation using the “Calibration Table” provided with each L&R meter. Readings may be filtered by meter_id and date range.

Parameters:
meter_idstr

The gravity meter name.

counter_readingArrayLike

Array of counter readings. This will typically be an array of floats from 0.0 to 7000.0 in increments of 100.0 for G meters, or 0.0 to 200.0 in increments of 10.0 for D meters.

value_mgalArrayLike

Gravity in milligals at each counter_reading.

interval_factorArrayLike, optional

The gradient of mgal/counter_reading for each interval.

starttimedatetimelike, optional

Date from which correction parameters are valid, default is pandas.Timestamp.min.

endtimedatetimelike, optional

Date up to which correction parameters are valid. Defaults to pandas.Timestamp.max.

Attributes:
table: DataFrame

The conversion table, with columns ‘counter_reading’, ‘value_mgal’, ‘interval_factor’ and ‘value_mgal_from_ifactor’.

Notes

If interval_factor is provided, then ‘value_mgal’ will be recalculated and stored in the ‘value_mgal_from_ifactor’ column. L&R calibration tables typically provide value_mgal rounded to 2 dp (10 ugal resolution) whereas interval_factor is specified to 5 dp (1 ugal resolution). Corrections are interpolated using ‘value_mgal_from_ifactor’ where possible to minimise any loss of precision.

convert_readings(readings: ArrayLike, meter_id: ArrayLike | None = None, date_time: int | float | str | date | datetime64 | Timestamp | list | tuple | ndarray | Series | Index | DatetimeIndex | None = None) ndarray[tuple[Any, ...], dtype[float64]][source]#

Convert meter readings to milligal.

Parameters:
readingsfloat, array_like

The readings to be converted.

meter_idstr, array_like, optional

The meter id/name associated with the readings. If provided, only readings with meter_id matching the converter’s meter_id will be converted.

date_timedatetimelike, array_like, optional

The date/time of the readings. If provided, only readings with date_time falling within converter’s valid_date_range are converted.

Returns:
float, ndarray

The converted readings. Readings where meter_id or date_time do not match the converter’s will be returned as NaN.

Raises:
ValueError

Where reading(s) are outside the limits of the conversion table.
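The conversion itself is table-based linear interpolation between calibration rows, raising when a reading falls outside the table. The counter readings and mgal values below are illustrative, not a real meter's calibration table.

```python
import numpy as np

# Illustrative calibration table: counter readings and cumulative mgal values.
counter_reading = np.array([0.0, 100.0, 200.0, 300.0])
value_mgal = np.array([0.0, 101.2, 202.5, 303.9])

def dial_to_mgal(reading: float) -> float:
    # Readings outside the table limits cannot be interpolated.
    if reading < counter_reading[0] or reading > counter_reading[-1]:
        raise ValueError("reading outside conversion table limits")
    # Linear interpolation between the bracketing table rows.
    return float(np.interp(reading, counter_reading, value_mgal))

print(dial_to_mgal(150.0))  # midway between the 100 and 200 table rows
```

The real converter additionally filters by meter_id and date range, returning NaN for non-matching readings rather than raising.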

converter_id() str[source]#

Identifier label of form ‘meter_id:starttime_to_endtime’.

property endtime: Timestamp | None#

The date up to which correction parameters are valid.

classmethod from_csv(fname: str | PathLike, **kwargs) LaCosteRombergDialConverter[source]#

Generate a LaCosteRombergDialConverter object from a csv file.

classmethod from_dataframe(meter_id: str, table: DataFrame, starttime: int | float | str | date | datetime64 | Timestamp = Timestamp('1677-09-21 00:12:43.145224193'), endtime: int | float | str | date | datetime64 | Timestamp = Timestamp('2262-04-11 23:47:16.854775807')) LaCosteRombergDialConverter[source]#

Generate a LaCosteRombergDialConverter object from a standard L&R G-meter table.

The input table data must have at least 3 columns, which are assumed to be “interval_start”, “interval_end”, “interval_factor”.

Parameters:
meter_idstr

Meter id/name.

tableDataFrame or array_like

The correction table data.

starttimedatetimelike

Date from which correction parameters are valid, default is pandas.Timestamp.min.

endtimedatetimelike

Date up to which correction parameters are valid. Defaults to pandas.Timestamp.max.

Returns:
LaCosteRombergDialConverter
property meter_id: str#
set_datetime_range(starttime: int | float | str | date | datetime64 | Timestamp | None | NaTType, endtime: int | float | str | date | datetime64 | Timestamp | None | NaTType) None[source]#

Set the start and end times defining the converter’s valid date range.

The meter conversion values for a given LaCoste-Romberg gravity meter may change over time due to, say, upgrades or physical damage. The starttime and endtime properties allow for a conversion table to be assigned a date range for which it is valid. Conversion will only be applied to readings that fall within the valid date range.

Parameters:
starttimedatetimelike, NaT or None

Date from which correction parameters are valid, default is None (i.e. no start date).

endtimedatetimelike, NaT or None

Date up to which correction parameters are valid. Defaults to None (i.e. no end date).

property starttime: Timestamp | None#

The date from which correction parameters are valid.

class gsolve.ReferenceGravity(site_id: ArrayLike, gravity: ArrayLike, active: ArrayLike | bool = True, **kwargs: dict[str, ArrayLike])[source]#

Bases: GSolveTable

Class providing a simple mechanism for merging reference gravity data.

Parameters:
site_idarray_like

The unique site identifier. Will be converted to str.

gravityarray_like

The reference gravity value for each site.

activearray_like or bool, default True

An array indicating whether a site should be set as an active “gsolve_tie” when merged into a GravitySites.

**kwargsdict[str, array_like]

Additional fields to be added to the site data.

Attributes:
dataDataFrame

The reference gravity data indexed by site_id. The defined fields are:

  • 'gravity' : (float) The reference gravity value for the site.

  • 'active' : (bool) Indicates whether the site should be used as an active “gsolve_tie” when merged into a GravitySites.

Other fields may be added to obj.data as required, but will be ignored by gsolve.

classmethod from_dict(data: Mapping, set_active: bool = True) Self[source]#

Create a ReferenceGravity object from a dictionary.

This method provides a simple mechanism for users to add reference gravity data to a GravitySites object.

Parameters:
datadict of float or dict of (float, bool)

Reference site data as a dictionary where keys are 'site_id' and values are either the reference gravity (float) or a sequence of (reference gravity, active) where active is a boolean.

Returns:
ReferenceGravity

The created object.
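The two accepted value shapes can be sketched as follows; the site ids and gravity values here are illustrative.

```python
# Values may be a bare gravity value or a (gravity, active) pair.
ref_data = {
    "BASE01": 979672.115,           # gravity only; 'active' defaults to True
    "BASE02": (979645.332, False),  # gravity with an explicit active flag
}

# The equivalent normalisation into uniform (gravity, active) pairs:
normalised = {
    site: value if isinstance(value, tuple) else (value, True)
    for site, value in ref_data.items()
}
print(normalised["BASE02"])  # (979645.332, False)
```

A dictionary of this shape is what from_dict expects; the resulting object can then be merged into a GravitySites via set_reference_gravity.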

to_excel(excel_file: str | PathLike, sheet_name: str | None = None, normalize_column_names: bool = True, bool_to_int: bool = True, include_unknown_fields: bool = False, if_workbook_exists: Literal['error', 'replace', 'append'] = 'error', if_sheet_exists: Literal['error', 'replace', 'new'] = 'error', **kwargs) None[source]#

Write data to an excel file.

Parameters:
excel_filestr or PathLike

The excel workbook to write to.

sheet_namestr, default None

The name of the worksheet to write to.

normalize_column_namesbool, default True

Convert column names to snake case.

bool_to_intbool, default True

Convert boolean True/False to 1,0.

include_unknown_fieldsbool, default False

Include fields not in the known fields.

if_workbook_exists{“error”, “replace”, “append”}, default “error”

Behaviour if the excel file already exists.

if_sheet_exists{“error”, “replace”, “new”}, default “error”

Behaviour if the worksheet already exists.

**kwargs

Additional keyword arguments passed to pandas.DataFrame.to_excel.

See also

gsolve.core.excel_io.write_excel_worksheet

For complete explanation of parameters if_workbook_exists and if_sheet_exists.

pandas.DataFrame.to_excel

The underlying function used to write the DataFrame to the excel file.

write_to_csv(fname: str | PathLike, normalize_column_names: bool = True, expand_datetime: str | None = None, drop_datetime: bool = False, bool_to_int: bool = True, include_unknown_fields: bool = False, **kwargs) None[source]#

Write data to a csv file.

Parameters:
fnamestr or PathLike

Output file name.

normalize_column_namesbool, default True

Make column names lowercase with no spaces.

expand_datetimestr or None, default None

If specified, expand the named datetime column into separate date and time columns.

bool_to_intbool, default True

Convert True, False to 1, 0.

**kwargs

Optional arguments to be passed to pandas.DataFrame.to_csv.

See also

pandas.DataFrame.to_csv
class gsolve.TerrainCorrectionData(site_id: ArrayLike, params: TerrainCorrectionParameters | list[TerrainCorrectionParameters] | tuple[TerrainCorrectionParameters, ...] | None = None, terrain_corrections: ArrayLike | list[Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]]] | tuple[Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], ...] | None = None, **kwargs)[source]#

Bases: GSolveTable

Class to store terrain correction outputs and parameters.

In general, a user should not need to instantiate a TerrainCorrectionData object directly. Instances will be generated from a TerrainCorrector object via the compute() method. A TerrainCorrectionData object can be written to a file and then reloaded and re-instantiated, supporting a workflow where terrain corrections need only be computed once.

Parameters:
site_idarray_like of str

The unique site identifiers as a sequence. All elements are converted to str. Will be used to index the obj.data DataFrame.

paramsTerrainCorrectionParameters or sequence of TerrainCorrectionParameters, optional

The parameters for the various terrain correction zones.

terrain_correctionsarray_like or list of array_like

The terrain correction values for each zone.

**kwargsdict

Additional columns to be included in obj.data DataFrame. This could include site location information such as easting, northing, latitude, longitude etc.

Attributes:
paramsdict

Dictionary containing copies of the TerrainCorrectionParameters objects used in computing terrain corrections for each zone. For each parameter object, the dictionary key will be 'tcorr:{obj.name}'. This naming pattern is also used to label the corresponding terrain correction columns in the TerrainCorrectionData.data DataFrame.

dataDataFrame

A DataFrame containing terrain correction data and indexed by 'site_id'. The DataFrame will contain columns for site locations, terrain corrections for each zone, and the total terrain correction. For a given zone defined by TerrainCorrectionParameters object = obj, the output columns will be:

  • 'tcorr:{obj.name}:topo' : the topography-only component of the terrain correction. Omitted if compute_topography is False.

  • 'tcorr:{obj.name}:bath' : the bathymetry-only component of the terrain correction. Omitted if compute_bathymetry is False.

The total terrain correction column will be labeled 'tcorr:total'. This is computed at initialisation and whenever new corrections are added via the set_corrections() method.

Columns are ordered by minimum distance of the corresponding zone, with the total terrain correction column last.
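The column layout and the derivation of 'tcorr:total' can be sketched with plain pandas; the zone names ("inner", "outer") and correction values below are illustrative.

```python
import pandas as pd

# Per-zone correction columns follow the 'tcorr:{name}:{component}' pattern.
data = pd.DataFrame(
    {
        "tcorr:inner:topo": [0.12, 0.08],
        "tcorr:outer:topo": [0.03, 0.05],
    },
    index=pd.Index(["S001", "S002"], name="site_id"),
)
# The total column is the row-wise sum of the per-zone corrections.
data["tcorr:total"] = data.sum(axis=1)
print(data.columns.tolist())
# ['tcorr:inner:topo', 'tcorr:outer:topo', 'tcorr:total']
```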

classmethod create_empty(site_id: ArrayLike, **kwargs) Self[source]#

Create an empty TerrainCorrectionData object with only site_id and any additional columns specified in kwargs.

Parameters:
site_idarray_like of str

The unique site identifiers as a sequence. All elements are converted to str. Will be used to index the obj.data DataFrame.

**kwargsdict

Additional columns to be included in obj.data DataFrame. This could include site location information such as easting, northing, latitude, longitude etc.

Returns:
TerrainCorrectionData

An empty TerrainCorrectionData object with only site_id and any additional columns specified in kwargs.

classmethod from_csv(fname: str | PathLike, **kwargs) TerrainCorrectionData[source]#

Read terrain corrections from a CSV file.

Parameters:
fnamestr or PathLike

The path to the input CSV file.

**kwargsdict

Additional keyword arguments passed to pandas.read_csv.

Returns:
TerrainCorrectionData
classmethod from_dataframe(df: DataFrame, params: TerrainCorrectionParameters | Sequence[TerrainCorrectionParameters] | DataFrame | Series, include_extra_cols: bool = True) Self[source]#

Create a TerrainCorrectionData object from a DataFrame.

Parameters:
dfDataFrame

DataFrame containing terrain correction data.

paramsTerrainCorrectionParameters, array-like, DataFrame, or Series

Parameters for terrain correction calculations. If a DataFrame, it must have a column named ‘parameters’.

include_extra_colsbool, default is True

If True, include any extra columns in the DataFrame that are not terrain correction values.

Returns:
TerrainCorrectionData

The created TerrainCorrectionData object.

classmethod from_excel(fname: str | PathLike, sheet_name: str | None = None, params_sheet_name: str | None = None, **kwargs) TerrainCorrectionData[source]#

Read terrain corrections from an Excel file.

Parameters:
fnamestr or PathLike

The Excel file to read from.

sheet_namestr, optional

The name of the excel worksheet from which to read terrain corrections. If not specified then ‘Terrain Corrections’ will be used.

params_sheet_namestr, optional

The name of the excel worksheet from which to read terrain correction parameters. If not specified then '{sheet_name} Params' will be used.

**kwargsdict

Additional keyword arguments passed to pandas.read_excel.

Returns:
TerrainCorrectionData
get_corrections(site_id: ArrayLike, total_only: bool = False, if_missing: Literal['drop', 'raise', 'fill'] = 'drop', fill_value: float = nan) DataFrame[source]#

Get terrain correction values for the specified sites.

Parameters:
site_idstr or array-like of str

The unique site identifiers for which to get the terrain corrections.

total_onlybool, default False

If True, return only the total terrain correction values, otherwise return all terrain correction values.

if_missing{‘drop’, ‘raise’, ‘fill’}, default ‘drop’

How to handle any site_id arguments for which there are no terrain correction data:

  • If 'drop', then missing sites will be dropped from the output. A warning will be issued.

  • If 'fill', then missing sites will have all fields set to fill_value. A warning will be issued.

  • If 'raise', then an exception is raised.

fill_valuefloat, default np.nan

Value used to fill data for missing sites when if_missing='fill'.

Returns:
Series or DataFrame

The terrain correction values. Will be a Series if total_only=True, otherwise a DataFrame is returned.
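The if_missing='fill' behaviour can be sketched with a plain pandas reindex; the site ids and values below are illustrative.

```python
import numpy as np
import pandas as pd

# Illustrative corrections table indexed by site_id.
table = pd.DataFrame(
    {"tcorr:total": [0.15, 0.13]},
    index=pd.Index(["S001", "S002"], name="site_id"),
)

requested = ["S001", "S003"]  # S003 has no terrain correction data
# Missing sites come back with every field set to the fill value.
out = table.reindex(requested, fill_value=np.nan)
print(out["tcorr:total"].isna().tolist())  # [False, True]
```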

set_corrections(params: TerrainCorrectionParameters, topography_corrections: ArrayLike | None = None, bathymetry_corrections: ArrayLike | None = None) None[source]#

Add a set of terrain correction parameters and values.

This will add a new column to the obj.data DataFrame, and recalculate the total terrain correction column.

Parameters:
paramsTerrainCorrectionParameters

The parameters for the terrain correction calculations.

topography_correctionsarray-like

The terrain correction values.

bathymetry_correctionsarray-like

Corrections for bathymetry

to_csv(fname: str | PathLike | None = None, **kwargs) str | None[source]#

Write the terrain correction data to a CSV file.

The output CSV file includes the terrain correction parameters as header lines prefixed with ‘#’.

Parameters:
fnamestr or PathLike | None

The path to the output CSV file. If None, then the CSV string is returned.

**kwargsdict

Additional keyword arguments passed to pandas.DataFrame.to_csv.

Returns:
None | str

None if CSV written to file, otherwise the CSV string.

to_excel(fname: str | PathLike, sheet_name: str | None = None, params_sheet_name: str | None = None, if_workbook_exists: Literal['error', 'replace', 'append'] = 'error', if_sheet_exists: Literal['error', 'replace', 'new'] = 'error', **kwargs) None[source]#

Write the terrain correction data to an Excel file.

Parameters:
fnamestr or PathLike

The path to the output Excel file.

sheet_namestr, optional

The name of the excel worksheet to write terrain corrections. If not specified then 'terrain_corrections' will be used.

params_sheet_namestr, optional

The name of the excel worksheet to write terrain correction parameters. If None, then params_sheet_name will be set to '{sheet_name}_params'.

if_workbook_exists{‘error’, ‘append’, ‘replace’}, default=’error’

Action to take if the workbook already exists. Options are: ‘error’, ‘append’, or ‘replace’.

if_sheet_exists{‘error’, ‘replace’, ‘new’}, default=’error’

Action to take if the sheet already exists. Options are: ‘error’, ‘replace’, or ‘new’.

**kwargsdict

Additional keyword arguments passed to pandas.DataFrame.to_excel.

Returns:
None
class gsolve.TerrainCorrectionParameters(name: str, min_dist: float, max_dist: float, terrain_density: float = 2670.0, water_density: float = 1030.0, sea_level_elevation: float = 0.0, distance_mask_type: Literal['radial', 'rectangular'] = 'radial', dem_source: str | PathLike = '', density_dataset_source: str | PathLike = '', compute_topography: bool = True, compute_bathymetry: bool = True)[source]#

Bases: GSolveParameters

Class to store parameters for computing terrain corrections for a single “zone”.

A “zone” here is analogous to a classic Hammer zone: a symmetric region surrounding a point over which terrain corrections are computed. It is defined by its extent (min_dist and max_dist), material densities, and topography data sources.

A full terrain correction would typically include several zones, covering different distance ranges.

Attributes:
namestr

The name for this “zone”. This name will be used as the key for this parameter object when it is added to the TerrainCorrector and TerrainCorrectionData objects. It will be used to label output columns in the TerrainCorrectionData.data DataFrame.

min_distfloat

Minimum distance or inner radius for this terrain correction zone. Data within this radius are excluded.

max_distfloat

Maximum distance or outer radius for this terrain correction zone. Data beyond this radius are excluded.

terrain_densityfloat, default=2670.0

Density of the terrain in kg/m^3. This and water_density are used to generate a simple density model from the DEM.

water_densityfloat, default=1030.0

Density of water in kg/m^3. This and terrain_density are used to generate a simple density model from the DEM.

sea_level_elevationfloat, default=0.0

Elevation of sea level in meters using the same vertical datum as dem and points. Defines the boundary between topography and bathymetry when generating the density model.

distance_mask_type{“radial”, “rectangular”}, default=”radial”

Type of distance mask to use. A “radial” mask creates an approximately circular zone, while a “rectangular” mask creates a rectangular zone.

dem_sourcestr, PathLike, default=””

Path to a terrain dataset file. This will be loaded during terrain correction computation. If an empty string, then DEM data must be supplied directly to a TerrainCorrector instance. Note that dem_source inputs are converted to and stored as a string.

density_dataset_sourcestr, PathLike, default=””

Path to a density model file, which will be loaded during terrain correction computation. If an empty string, then a simple density model will be generated from the DEM using terrain_density, water_density and sea_level_elevation. Note that density_dataset_source inputs are converted to and stored as a string.

compute_topographybool, default is True

Compute gravity corrections due to topographic masses.

compute_bathymetrybool, default is True

Compute gravity corrections due to water bodies such as the ocean.
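A typical full correction tiles several such zones over increasing distance ranges. The sketch below uses a stripped-down stand-in dataclass with only the extent fields; the real TerrainCorrectionParameters class carries the additional attributes documented above (densities, data sources, mask type), and the zone names and distances here are illustrative.

```python
from dataclasses import dataclass

# Minimal stand-in for TerrainCorrectionParameters, extent fields only.
@dataclass
class Zone:
    name: str
    min_dist: float
    max_dist: float

inner = Zone("inner", 0.0, 5_000.0)       # 0 to 5 km
outer = Zone("outer", 5_000.0, 100_000.0) # 5 km to 100 km

# Adjacent zones share a boundary so they tile without gap or overlap.
assert inner.max_dist == outer.min_dist
print([z.name for z in (inner, outer)])  # ['inner', 'outer']
```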

compute_bathymetry: bool = True#
compute_topography: bool = True#
dem_source: str | PathLike = ''#
density_dataset_source: str | PathLike = ''#
distance_mask_type: Literal['radial', 'rectangular'] = 'radial'#
classmethod from_dataframe(df: DataFrame) dict[str, Self][source]#

Create one or more TerrainCorrectionParameters objects from a DataFrame.

Parameters:
dfpd.DataFrame

A DataFrame containing terrain correction parameters. Each row corresponds to a single TerrainCorrectionParameters object. Column names must match the attribute names of the TerrainCorrectionParameters class.

Returns:
dict[str, TerrainCorrectionParameters]

A dictionary of TerrainCorrectionParameters objects created from the DataFrame. The keys are of the form “tcorr:{name}”, where {name} is the name attribute of each TerrainCorrectionParameters object.
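The "tcorr:{name}" key convention can be sketched with plain pandas; the columns here are illustrative, whereas a real input DataFrame would carry every TerrainCorrectionParameters attribute as a column.

```python
import pandas as pd

# One row per zone; the 'name' column supplies the dictionary key suffix.
df = pd.DataFrame({"name": ["inner", "outer"], "min_dist": [0.0, 5000.0]})

# Build the keyed dict the way the Returns section describes.
params = {f"tcorr:{row['name']}": row.to_dict() for _, row in df.iterrows()}
print(sorted(params))  # ['tcorr:inner', 'tcorr:outer']
```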

max_dist: float#
min_dist: float#
name: str#
sea_level_elevation: float = 0.0#
terrain_density: float = 2670.0#
to_dict(path2str: bool = False) dict[str, Any][source]#

Convert the parameters to a dictionary of the form {parameter_name: parameter_value, ...}.

Parameters:
path2strbool, optional

If True, convert any Path objects to their string representations.

Returns:
dict[str, Any]

A dictionary of parameter names and their values.

to_series(series_name: str | None = 'value', index_name: str | None = 'parameter', index_prefix: str | Sequence[str] | None = None) Series[source]#

Convert the parameters object to a pandas Series, where index is the parameter name and values are the parameter values.

Parameters:
series_namestr | None, optional

Name for the resulting Series. If None, the Series will be unnamed.

index_namestr | None, optional

Name for the Series index. If None, the index will be unnamed.

index_prefixstr | None, optional

If specified, the returned series will have a MultiIndex where the first level is index_prefix. E.g. if index_prefix=”zone1”, then the Series index will be of the form: (“zone1”, parameter_name,…). This is useful when combining multiple parameter Series.

Returns:
pd.Series

A pandas Series containing the parameters.

water_density: float = 1030.0#
class gsolve.TerrainCorrector(params: TerrainCorrectionParameters | Sequence[TerrainCorrectionParameters], dem: DataArray | Sequence[None | DataArray] | None = None, density_model: DataArray | Sequence[None | DataArray] | None = None)[source]#

Bases: object

A class for computing terrain corrections for gravity measurements using digital elevation models (DEMs).

It supports multiple calculation “zones”, each with its own parameters and data sources.

A typical workflow using this class would be:

  • Define one or more TerrainCorrectionParameters objects for the desired zones.

  • Instantiate a TerrainCorrector with these parameters and optional DEM/density models.

  • Add additional zones as needed using add_calculation_zone.

  • Call compute() on a set of points.

Parameters:
params: TerrainCorrectionParameters | list-like of TerrainCorrectionParameters

The TerrainCorrectionParameters object(s) defining the “zones” to be computed.

demxarray.DataArray | list-like | None, default is None

User supplied DEM(s) corresponding to each zone defined in params. For example if params=[p1, p2, p3], and you wish to specify a DEM for p2 only, then the argument must be dem=[None, dem_for_p2, None]. If None, the dem will be loaded from the dem_source attribute of the corresponding TerrainCorrectionParameters object. If a dem is specified here, then the dem_source attribute is ignored.

density_modelxarray.DataArray | list-like | None, default is None

User supplied density model(s) corresponding to each zone defined in params. For example if params=[p1, p2, p3], and you wish to specify a density model for p2 only, then the argument must be density_model=[None, model_for_p2, None]. If None, the density model will be generated internally from the DEM and the zone’s density parameters. If a density model is specified here, then the density_dataset_source attribute is ignored.

Attributes:
paramsdict

A dictionary of TerrainCorrectionParameters objects defining the “zones” to be computed.

demsdict[str, DataArray | None]

A dictionary storing user supplied DEMs by zone name. Values will be None if no DEM was provided for that zone, in which case the dem_source attribute of the associated TerrainCorrectionParameters object will be used to obtain the DEM. Note that DEMs specified by dem_source are never stored here, but are loaded on-the-fly during computation.

_density_modelsdict[str, xr.DataArray | None]

A dictionary storing user supplied density models by zone name. Values will be None if no density model was provided for that zone. Note that internally generated density models are never stored here, but are created on-the-fly during computation.

add_zone(params: TerrainCorrectionParameters, dem: DataArray | None = None, density_model: DataArray | None = None) None[source]#

Add a terrain correction calculation zone and (optionally) an associated dem and/or density model.

Parameters:
paramsTerrainCorrectionParameters

The parameters defining the terrain correction zone.

demxarray.DataArray, optional

The digital elevation model corresponding to the terrain correction zone.

density_modelxarray.DataArray, optional

The density model corresponding to the terrain correction zone.

compute(points: SitesLike | tuple[Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]]], site_id: ArrayLike | None = None, show_progress: bool = True, method: str = 'harmonica', site_height_field: str = 'height_ellipsoidal', site_xy_fields: tuple[str, str] = ('easting', 'northing')) TerrainCorrectionData[source]#

Compute terrain corrections for a set of points.

Parameters:
pointsGravitySites or sequence of array_likes (x, y, z)

The observation points where terrain corrections are to be computed. Must be in the same coordinate reference system as the dem. If points is a GravitySites object, then data columns corresponding to site_xy_fields (default: (“easting”, “northing”)) and site_height_field (default: “height_ellipsoidal”) must have been set. If points is a sequence of array_likes, then it must be of the form (x, y, z), where x, y, and z are arrays of equal length.

site_idarray_like, optional

An array of site IDs corresponding to each point. If None, then a simple RangeIndex will be used.

show_progressbool, default is True.

Report progress, including a progress bar if the tqdm package is installed.

methodstr, default is “harmonica”

The terrain correction calculation method to use. Currently only “harmonica” is supported.

site_height_fieldstr, default is “height_ellipsoidal”

When points is a GravitySites object, get site elevation from this field.

site_xy_fieldstuple of str, default is (“easting”, “northing”)

When points is a GravitySites object, get site x and y coordinates from these fields.

Returns:
TerrainCorrectionData

An object containing the computed terrain corrections and the TerrainCorrectionParameters used.

dems: dict[str, DataArray | None]#
params: dict[str, TerrainCorrectionParameters]#
property zones: list[str]#

Return list of defined terrain correction zones sorted by min_dist.

class gsolve.tide.earth_tide.EarthTideCorrectionProvider(*args, **kwargs)[source]#

Bases: Protocol

Protocol defining interface for classes that provide earth tide corrections.

identifier(**kwargs) str[source]#
tidal_correction(lat: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], lon: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], elev: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], date_time: list | tuple | ndarray | Series | Index | DatetimeIndex, site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]] | None = None, **kwargs) ndarray[tuple[Any, ...], dtype[float64]][source]#
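Any class implementing identifier() and tidal_correction() satisfies this protocol. A minimal sketch with simplified signatures; the ZeroTideProvider class below is hypothetical and exists only to illustrate the interface:

```python
from typing import Protocol
import numpy as np

class EarthTideProvider(Protocol):
    # Illustrative redeclaration of the protocol shape; gsolve defines the real one.
    def identifier(self, **kwargs) -> str: ...
    def tidal_correction(self, lat, lon, elev, date_time, site_id=None, **kwargs) -> np.ndarray: ...

class ZeroTideProvider:
    """A trivial provider that returns a zero correction for every observation."""

    def identifier(self, **kwargs) -> str:
        return "zero_tide"

    def tidal_correction(self, lat, lon, elev, date_time, site_id=None, **kwargs) -> np.ndarray:
        # One correction value (mGal) per observation time.
        return np.zeros(len(date_time), dtype=np.float64)

provider = ZeroTideProvider()
corr = provider.tidal_correction([45.0], [170.0], [100.0], ["2023-01-01T00:00"])
```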
class gsolve.tide.earth_tide.EternaPredictTidalCorrection(tidal_params: ArrayLike | EternaTidalParameters | None = None, tidalpoten: int = 8, tidalcompo: int = 0, amtruncate: float = 1e-10, poletidecor: float = 1.16, lodtidecor: float = 1.16, **kwargs)[source]#

Bases: EarthTideCorrectionProvider

Compute earth tide gravity corrections using PREDICT from ETERNA34.

Parameters:
tidal_paramsarray-like | EternaTidalParameters, optional

The tidal parameters applied to discrete components of the tidal catalogue. This is a 2D array with 4 columns, where each row corresponds to a tidal component; see EternaTidalParameters for the column definitions.

tidalpotenint, default 8

The tide potential catalogue to use. ETERNA/pygtide provides 8 potential catalogues of increasing resolution and therefore computational cost. The most commonly used catalogues are:

  • 4 : Tamura (1987), 1200 waves.

  • 7 : Hartmann and Wenzel (1995), 12935 waves.

  • 8 (Default) : Kudryavtsev (2004), 28806 waves (highest resolution).

See ETERNA/pygtide for the full list of available catalogues.

tidalcompoint, default 0

The tidal component to calculate. The default 0 is earth tide gravity, but other components such as displacement, tilt or strain can also be calculated. See pygtide or ETERNA documentation for details.

amtruncatefloat, default 1e-10

Amplitude threshold for components to be included in the tidal calculation. Waves with amplitudes below this threshold are excluded. Higher values will reduce computation time, but also the accuracy of results.

poletidecorfloat, default 1.16

Amplitude factor for the pole tide gravity component. Pole tides are caused by variations in the Earth’s rotation axis (Chandler wobble) and are not included in the standard tidal potential catalogues. Pole tide solutions depend on observational data provided by the IERS, so users should periodically run pygtide.update() to ensure these data are up to date.

lodtidecorfloat, default 1.16

Amplitude factor for the Length Of Day (LOD) tide gravity component, which is due to variations in the Earth’s rotation rate and is not included in the standard tidal potential catalogues. LOD corrections depend on observational data provided by the IERS, so users should periodically run pygtide.update() to ensure these data are up to date.

Attributes:
tidal_paramsEternaTidalParameters

See also

EternaTidalParameters

Class to store tidal parameters for use with ETERNA/pygtide.

pygtide

Python package for tidal predictions using ETERNA catalogues.

identifier(**kwargs) str[source]#
set_tidal_params(tidal_params: ArrayLike | EternaTidalParameters) None[source]#

Set the tidal parameters to be applied.

tidal_correction(lat: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], lon: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], elev: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], date_time: list | tuple | ndarray | Series | Index | DatetimeIndex, site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]] | None = None, unit: Literal['mgal', 'ugal', 'nm/s^2'] = 'mgal', sample_interval: int = 60, **kwargs) ndarray[tuple[Any, ...], dtype[float64]][source]#
time_series(lat: float, lon: float, elev: float, starttime: int | float | str | date | datetime64 | Timestamp, duration: str | int | float | Timedelta | timedelta, sample_interval: int = 60, unit: Literal['mgal', 'ugal', 'nm/s^2'] = 'mgal', **kwargs) DataFrame[source]#

Compute a time series of tidal corrections at some location.

A limitation of ETERNA is that tides are always calculated from the start of a UTC day.

Parameters:
latfloat

Latitude of the location in degrees.

lonfloat

Longitude of the location in degrees.

elevfloat

Elevation of the location in meters.

starttimeDatetimeScalar

Start time of the time series.

durationTimedeltaScalar

Duration of the time series.

sample_intervalint, optional

Sampling interval in seconds (default is 60).

unit{“mgal”, “ugal”, “nm/s^2”}, optional

Unit of the output (default is “mgal”).

**kwargsdict

Additional keyword arguments to pass to the underlying pygtide predictor.

Returns:
DataFrame

DataFrame containing the tidal corrections.
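Because ETERNA predictions always begin at the start of a UTC day, a series computed from midnight must be trimmed to the requested start time. A pandas sketch of that trimming step (trim_to_start is a hypothetical helper, not library code; the tidal values are a stand-in):

```python
import numpy as np
import pandas as pd

def trim_to_start(series: pd.Series, starttime: str) -> pd.Series:
    """Drop samples before the requested start time.

    Emulates handling of the ETERNA limitation noted above: predictions
    begin at 00:00 UTC, so the leading samples are discarded.
    """
    return series.loc[pd.Timestamp(starttime):]

# Stand-in for a tidal series computed from 00:00 UTC at 60 s intervals.
idx = pd.date_range("2023-01-01 00:00", periods=120, freq="60s")
tides = pd.Series(np.sin(np.linspace(0.0, 1.0, 120)), index=idx)

trimmed = trim_to_start(tides, "2023-01-01 01:00")
```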

class gsolve.tide.earth_tide.LongmanConstants(a: float = 637813660.0, c: float = 38439900000.0, c1: float = 14959830000000.0, e: float = 0.054900489, i: float = np.float64(0.08979719), m: float = 0.074804, mu: float = 6.67428e-08, M: float = 7.3477e+25, omega: float = np.float64(0.409314616), S: float = 1.98840987e+33)[source]#

Bases: object

Constants used in Longman gravitational potential calculations.

Parameters:
afloat, default 6.3781366e08

Earth’s equatorial radius in cm (after USNO 2011).

cfloat, default 3.84399e10

Mean distance between the centers of the Earth and Moon (cm).

c1float, default 1.495983e13

Mean distance between the centers of the Earth and Sun (cm).

efloat, default 0.054900489

Eccentricity of the moon’s orbit.

ifloat, default 0.08979719

Inclination of moon’s orbit to the ecliptic, 5.145 degrees.

mfloat, default 0.074804

Ratio of mean motion of the sun to that of the moon.

mufloat, default 6.67428e-08

Newton’s gravitational constant (6.670e-8 in Longman’s original paper).

Mfloat, default 7.3477e25

Mass of the moon in grams.

omegafloat, default 0.409314616

Inclination of Earth’s equator to the ecliptic, 23.452 degrees.

Sfloat, default 1.98840987e33

Mass of the sun in grams (USNO 2021: https://aa.usno.navy.mil/downloads/publications/Constants_2021.pdf).

M: float = 7.3477e+25#
S: float = 1.98840987e+33#
a: float = 637813660.0#
c: float = 38439900000.0#
c1: float = 14959830000000.0#
e: float = 0.054900489#
i: float = np.float64(0.08979719)#
m: float = 0.074804#
mu: float = 6.67428e-08#
omega: float = np.float64(0.409314616)#
class gsolve.tide.earth_tide.LongmanTidalCorrection(amp_factor: float = 1.2, **kwargs)[source]#

Bases: EarthTideCorrectionProvider

Class to compute lunar and solar gravitational effects using Longman’s method.

Parameters:
amp_factorfloat, default 1.2

The default amplification factor.

kwargsoptional

Additional keyword arguments are used when instantiating a LongmanConstants object, allowing the user to override the default constants.

Attributes:
amp_factorfloat

The amplification factor applied when calculating gravity corrections using tidal_correction().

constantsLongmanConstants

The physical constants used in the Longman method.

See also

gsolve.tide.earth_tide.LongmanConstants

Physical constants used in Longman’s method.

gsolve.tide.earth_tide.EarthTideCorrectionProvider

Protocol defining interface for classes implementing earth tide correction.

gravity_accelerations(lat: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], lon: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], elev: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], dt: int | float | str | date | datetime64 | Timestamp | list | tuple | ndarray | Series | Index | DatetimeIndex) tuple[ndarray[tuple[Any, ...], dtype[float64]], ndarray[tuple[Any, ...], dtype[float64]]][source]#

Compute lunar and solar gravitational accelerations.

Parameters:
latscalar or array_like of shape(M,)

Latitude in decimal degrees.

lonscalar or array_like of shape(M,)

Longitude in decimal degrees.

elevscalar or array_like of shape(M,)

Elevation in meters (datum independent)

dtstr, datetime, or array_like of shape(M,)

The date-times at which to calculate corrections. Can be in any format parsable by pandas.to_datetime().

amp_factorfloat, or array_like of shape(M,), optional

Factor to apply to tidal corrections to account for the Earth’s deformation response to tidal forces. Typically in the range 1.14-1.2 for semi-diurnal tides (default 1.2). To calculate this factor from the h2 and k2 Love numbers, use the gravimetric_factor function.

Returns:
g_lunar, g_solar

The tidal acceleration in milligals due to the moon and sun. Scalar if all input args are scalar.

identifier(**kwargs) str[source]#
tidal_correction(lat: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], lon: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], elev: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], date_time: list | tuple | ndarray | Series | Index | DatetimeIndex, site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]] | None = None, **kwargs) ndarray[tuple[Any, ...], dtype[float64]][source]#

Compute tidal corrections at specified locations and times.

Parameters:
latfloat or array_like

Latitude in decimal degrees.

lonfloat or array_like

Longitude in decimal degrees.

elevfloat or array_like

Elevation in meters.

date_timearray_like

The datetimes at which to calculate corrections.

Returns:
correctionsNDArray

The tidal corrections in milligals.

time_series(starttime: int | float | str | date | datetime64 | Timestamp, endtime: int | float | str | date | datetime64 | Timestamp, step: str | int | float | Timedelta | timedelta, lat: float, lon: float, elev: float = 0.0, method: Literal['correction', 'acceleration'] = 'correction') Series[source]#

Compute a time series of tidal corrections at some location.

Parameters:
starttimestr, datetime

The start time of the time series.

endtimestr, datetime

The end time of the time series.

stepint, float, str, Timedelta

The sampling interval of the time series. Can be a timedelta-like object, a timedelta string (e.g. “1S”, “1H”, “1D”), or a number of seconds.

latfloat

Latitude in decimal degrees.

lonfloat

Longitude in decimal degrees.

elevfloat

Elevation in meters (datum independent)

method: {‘correction’, ‘acceleration’}, default ‘correction’

Whether to return gravity corrections or accelerations.

Returns:
time_seriesSeries

The time series of tidal corrections or accelerations.

gsolve.tide.earth_tide.gravimetric_factor(k2: float = 0.298, h2: float = 0.6032) float64 | ndarray[tuple[Any, ...], dtype[float64]][source]#

Compute gravimetric factor from Love numbers k2 and h2.

The gravimetric factor accounts for the non-rigidity of the Earth. Default values for the standard modern Earth model (PREM) are from Agnew (2007).

Parameters:
k2, h2float or array-like, optional

Love numbers for earth response to semi-diurnal tides.

Returns:
delta2: ndarray or float

The gravimetric factor, float if both k2 & h2 are floats.

References

[1]

Agnew, D. C. (2007). 3.06 Earth Tides. In Treatise on Geophysics (pp. 163-195). Elsevier. https://doi.org/10.1016/B978-044452748-6.00056-0
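The underlying relation is simple enough to state in a few lines. A plain-Python sketch, assuming the standard definition delta2 = 1 + h2 - (3/2) k2 (which reproduces the commonly used factor of about 1.16); this is an illustration, not the library's implementation:

```python
def gravimetric_factor(k2: float = 0.298, h2: float = 0.6032) -> float:
    """Gravimetric factor delta2 = 1 + h2 - (3/2) * k2 (Agnew, 2007).

    The default PREM Love numbers give a factor of ~1.156, close to the
    1.16 amplitude factors used elsewhere in this module.
    """
    return 1.0 + h2 - 1.5 * k2

delta2 = gravimetric_factor()
```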

class gsolve.tide.ocean_load.OceanLoadAtSiteTime(site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]], date_time: list | tuple | ndarray | Series | Index | DatetimeIndex, corrections: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], **metadata)[source]#

Bases: OceanLoadCorrectionProvider

A class to provide ocean load corrections at discrete locations and times.

The class is effectively a lookup table populated with precalculated ocean load correction values for multiple sites at arbitrary times. Corrections are retrieved by matching a site identifier and datetime.

Parameters:
site_idarray-like[str]

Site identifiers corresponding to each correction value.

date_timeDatetimeArray

Sequence of datetime values corresponding to each correction value. Must have the same length as site_id.

correctionsSequence[float]

Ocean load correction values in mGal. Must have the same length as site_id and date_time.

**metadatadict[str, Any]

Additional metadata to be stored in the obj.metadata dictionary.

Attributes:
datapd.DataFrame

DataFrame containing correction values, indexed by (site_id, datetime).

metadatadict[str, Any]

Dictionary containing metadata about the corrections.

Examples

>>> # Create an ocean load correction provider
>>> import pandas as pd
>>> site_ids = ['SITE_A', 'SITE_A', 'SITE_B']
>>> datetimes = pd.to_datetime(['2023-01-01 12:00', '2023-01-01 13:00', '2023-01-01 12:00'])
>>> corrections = [0.025, 0.030, 0.015]  # mGal
>>> provider = OceanLoadAtSiteTime(site_ids, datetimes, corrections)
>>>
>>> # Get corrections for specific site/datetime pairs
>>> corr = provider.ocean_load_correction(
...     site_id=['SITE_A'],
...     date_time=pd.to_datetime(['2023-01-01 12:00'])
... )
identifier(**kwargs) str[source]#

Corrector identifier string.

ocean_load_correction(site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]], date_time: int | float | str | date | datetime64 | Timestamp | list | tuple | ndarray | Series | Index | DatetimeIndex, if_not_matched: Literal['error', 'warn'] = 'error', **kwargs) ndarray[tuple[Any, ...], dtype[float64]][source]#

Get ocean load corrections for specified site-datetime pairs.

Parameters:
site_idarray-like[str]

Site identifiers where corrections are requested.

date_timedatetime-like or array-like

Datetime values for which to get corrections. Must have the same length as site_id.

if_not_matched{“error”, “warn”}, optional

Action to take when site_id/datetime pairs are not found in the data. If “error” (default), raises ValueError. If “warn”, issues a warning and returns NaN for missing values.

**kwargsdict[str, Any]

Additional keyword arguments. (Not used).

Returns:
np.ndarray

Array of ocean load corrections in mGal. Missing values are set to NaN.

class gsolve.tide.ocean_load.OceanLoadTimeSeries(date_time: list | tuple | ndarray | Series | Index | DatetimeIndex, corrections: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], metadata: dict[str, Any] | None = None)[source]#

Bases: OceanLoadCorrectionProvider

A class to provide ocean load corrections at discrete times for a single location/station, by interpolation.

Parameters:
date_timeDatetimeArray

Sequence of datetime values corresponding to each correction value.

correctionsSequence[float]

Ocean load correction values in mGal. Must have the same length as date_time.

metadatadict[str, Any], optional

Additional metadata to be stored in the obj.metadata dictionary.

Attributes:
datapd.DataFrame

DataFrame containing correction values, indexed by datetime.

metadatadict[str, Any]

Dictionary containing metadata about the corrections.

Examples

>>> # Create an ocean load correction provider for a single station
>>> import pandas as pd
>>> datetimes = pd.to_datetime(['2023-01-01 12:00', '2023-01-01 13:00'])
>>> corrections = [0.025, 0.030]  # mGal
>>> provider = OceanLoadTimeSeries(datetimes, corrections)
>>>
>>> # Get corrections for specific datetimes
>>> corr = provider.ocean_load_correction(
...     site_id=['SITE_A'],
...     date_time=pd.to_datetime(['2023-01-01 12:00'])
... )
property endtime: Timestamp#

The end time of the timeseries data.

identifier(**kwargs) str[source]#

Corrector identifier string.

ocean_load_correction(site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]], date_time: int | float | str | date | datetime64 | Timestamp | list | tuple | ndarray | Series | Index | DatetimeIndex, if_not_matched: Literal['error', 'warn'] = 'error', **kwargs) ndarray[tuple[Any, ...], dtype[float64]][source]#
property sample_rate: float#

The mean sampling interval in decimal seconds.

property starttime: Timestamp#

The start time of the timeseries data.
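The lookup-by-interpolation idea behind this class can be sketched with numpy. The interp_corrections helper below is hypothetical and exists only to illustrate linear interpolation of a correction time series at arbitrary query times:

```python
import numpy as np
import pandas as pd

def interp_corrections(times, values, query_times) -> np.ndarray:
    """Linearly interpolate a correction series at arbitrary datetimes.

    Illustrative sketch only; not the library's implementation. Datetimes
    are converted to nanoseconds since the epoch so np.interp can be used.
    """
    t = pd.DatetimeIndex(times).astype("int64")
    q = pd.DatetimeIndex(query_times).astype("int64")
    return np.interp(q, t, np.asarray(values, dtype=float))

times = pd.to_datetime(["2023-01-01 12:00", "2023-01-01 13:00"])
interp = interp_corrections(times, [0.020, 0.030], ["2023-01-01 12:30"])
```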

gsolve.tide.ocean_load.generate_qtp_input(site_id: Sequence[str] | Series | Index | ndarray[tuple[Any, ...], dtype[str_]], datetimes: list | tuple | ndarray | Series | Index | DatetimeIndex, latitude: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], longitude: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]], elevation: Sequence[float] | Series | Index | ndarray[tuple[Any, ...], dtype[floating]] | float | int | floating, output_file: str | PathLike) None[source]#

Generate a QuickTide Pro site-time input CSV file from gravity observations and site data.

The resultant CSV file can be used as an input to QuickTide Pro for generating ocean load corrections for multiple gravity stations.

Parameters:
site_idSiteIDArray

The unique site identifier for each site/datetime pair.

datetimesDatetimeArray

Datetime values for each site/datetime pair. Must have the same length as site_id.

latitudeFloatArray

The latitude for each site/datetime pair.

longitudeFloatArray

The longitude for each site/datetime pair.

elevationFloatArray | float

The elevation for each site/datetime pair or a single elevation value.

output_filestr or PathLike

Path to the output CSV file to be created.

Returns:
None
gsolve.tide.ocean_load.qtp_to_corrector(file_path: str | PathLike, corr_type: Literal['auto', 'timeseries', 'site-datetime'] = 'auto', metadata: dict[str, Any] | None = None) OceanLoadTimeSeries | OceanLoadAtSiteTime[source]#
class gsolve.scintrex.CG6Data(data: DataFrame, metadata: dict[str, str | float | int | bool | Timestamp], metadata_units: dict[str, str] | None = None, loop_from_line: bool = False, on_error: Literal['raise', 'warn', 'ignore'] = 'warn')[source]#

Bases: ScintrexData

An object to read and store gravity observations recorded on a Scintrex CG-6.

This class handles TSV data files written to the internal storage of a CG-6. These files are typically named CG-6_####_Survey_Name.dat.

Note

The preferred method for initialising a CG6Data object is to use the from_file class method.

Parameters:
dataDataFrame

The observation data.

metadatadict

Metadata from file headers.

metadata_unitsdict, optional

The measurement units of metadata fields.

Attributes:
datapd.DataFrame

CG6 gravity readings as a dataframe and converted to appropriate dtypes:

  • Column names are normalized to lowercase.

  • The ‘date’ and ‘time’ fields are combined to a single ‘datetime’ column.

  • The corrections flag field ‘Corrections[drift-temp-na-tide-tilt]’ is split into individual boolean columns.

metadatadict

Metadata from file headers converted to appropriate dtypes, with field names normalized to lowercase. Measurement units stored as a suffix to the field name (e.g. “fieldname [unit]”) are removed and stored in the ‘metadata_units’ attribute.

metadata_unitsdict

The measurement units for metadata fields.

loop_from_linebool, optional

If True, use the ‘line’ field as the loop identifier. The ‘line’ field is a user settable field on the CG-6.

on_error{‘raise’, ‘warn’, ‘ignore’}, optional

How to handle errors arising from null values in some output fields. See from_file() for the available options.
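The splitting of the corrections flag field into boolean columns can be emulated with pandas string operations. The flag strings below are hypothetical (dash-separated 0/1 values); the actual CG-6 encoding may differ:

```python
import pandas as pd

# Hypothetical 'Corrections[drift-temp-na-tide-tilt]' values; the real
# CG-6 encoding may differ from this dash-separated 0/1 form.
flags = pd.Series(["1-1-0-1-1", "1-0-0-1-1"])
names = ["drift_corr", "temp_corr", "na_corr", "tide_corr", "tilt_corr"]

# Split each flag string into one boolean column per correction.
bools = flags.str.split("-", expand=True).astype(int).astype(bool)
bools.columns = names
```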

classmethod from_file(cg6_file: str | PathLike, loop_from_line: bool = False, on_error: Literal['raise', 'warn', 'ignore'] = 'warn') Self[source]#

Load and parse a Scintrex CG-6 data file.

Parameters:
cg6_fileFilePath

The CG-6 data file to load.

loop_from_linebool, optional

If True, use the ‘line’ field as the loop identifier, by default False.

on_error{‘raise’, ‘warn’, ‘ignore’}, default ‘warn’

How to handle errors arising from null values in some output fields:

  • raise: raise an exception if bad data are encountered

  • warn: issue a warning and fix errors

  • ignore: fix errors silently

Returns:
CG6Data

A CG6Data object.

set_drift_correction(drift_rate: float, drift_zero_time: int | float | str | date | datetime64 | Timestamp) None[source]#

Apply linear drift correction to CG6 data.

CG-6 data files include an internally applied drift correction based on rates calculated during a previous calibration run. This may be problematic because:

  • The drift rate estimation may be out of date and therefore not accurate for the meter when these data were collected.

  • The internal drift rate may not be accurate because the method used to determine drift function is simplistic and uses data that may not have had all time-dependent corrections applied.

This method allows the user to specify a new drift function and apply it to the observations. For example, a calibration run could be performed after the survey, or a user could fit their own drift curve to calibration data.

Parameters:
drift_ratefloat

Drift rate in mGal per day.

drift_zero_timedatetime-like

Zero time for drift correction.
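The correction itself is a straight line in time. A self-contained sketch, assuming the convention corrected = reading - rate * elapsed_days (check the library for the sign convention it actually applies); linear_drift_correction is a hypothetical helper:

```python
import numpy as np
import pandas as pd

def linear_drift_correction(readings, times, drift_rate: float, drift_zero_time) -> np.ndarray:
    """Remove a linear drift from meter readings.

    drift_rate is in mGal per day; elapsed time is measured from
    drift_zero_time in decimal days.
    """
    elapsed_days = (pd.to_datetime(times) - pd.Timestamp(drift_zero_time)) / pd.Timedelta(days=1)
    return np.asarray(readings, dtype=float) - drift_rate * np.asarray(elapsed_days, dtype=float)

corrected = linear_drift_correction(
    readings=[1000.0, 1000.2],
    times=["2023-01-01 00:00", "2023-01-02 00:00"],
    drift_rate=0.2,                       # mGal per day
    drift_zero_time="2023-01-01 00:00",
)
```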

set_loop(field: str | None = None, array: ArrayLike | None = None, datetimes: Mapping[str, str] | list | tuple | ndarray | Series | Index | DatetimeIndex | None = None, time_gap: str | int | float | Timedelta | timedelta | None = None, loop_start: int = 1, loop_step: int = 1, loop_format: str = '{LOOP}', output_column: str = 'loop') None[source]#

Set loop identifiers using one of several methods.

Parameters:
fieldstr, default None

Set loop values from existing data field. For CG6 data this would typically be the user set “line” field.

arrayarray-like, default None

Set loop values from an array-like object. Length must match the number of observations.

datetimesdict, Series or array-like, default None

Use time intervals defined by datetimes and assign observations to those intervals based on observation times. If datetimes is dict-like or a Series, construct intervals from the keys/index and assign loop IDs from the corresponding values. If datetimes is array-like, loop identifiers will be generated automatically.

time_gaptimedelta-like, str or int, default None

Set loop values based on time gaps in the data. Loop intervals are defined where time gaps between observations exceed time_gap.

loop_startint, default 1

Loop identifier start value.

loop_stepint, default 1

Increment loop identifier by loop_step.

loop_formatstr, default ‘{LOOP}’

Format string for loop identifiers. Use ‘LOOP’ as a placeholder for the loop number. The default "{LOOP}" is effectively no formatting. For example, loop_format="x_{LOOP:02d}_y" would produce loop IDs 'x_01_y', 'x_02_y', ....

output_columnstr, default ‘loop’

Name of the output column.
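The time_gap mode can be emulated in a few lines of pandas. The loops_from_time_gaps helper below is hypothetical, sketched to show the idea (a new loop starts wherever the gap between consecutive observations exceeds the threshold), not the library's code:

```python
import pandas as pd

def loops_from_time_gaps(times, time_gap, loop_start=1, loop_step=1, loop_format="{LOOP}"):
    """Assign a loop identifier to each observation based on time gaps."""
    t = pd.to_datetime(times)
    gap = pd.Timedelta(time_gap)
    # A new loop begins at each observation whose gap to the previous one
    # exceeds the threshold; cumulative sum of the breaks numbers the loops.
    breaks = t.to_series().diff() > gap
    nums = loop_start + loop_step * breaks.cumsum()
    return [loop_format.format(LOOP=int(n)) for n in nums]

times = ["2023-01-01 09:00", "2023-01-01 09:05", "2023-01-01 13:00", "2023-01-01 13:10"]
loops = loops_from_time_gaps(times, "1h", loop_format="x_{LOOP:02d}_y")
```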

to_gsolve_observations(tilt_corr: bool = True, temp_corr: bool = True, drift_corr: bool = True, tide_corr: bool = False, include_non_standard_fields: bool | Sequence = False) GravityObservations[source]#

Export CG6 data to a GravityObservations object.

Relevant data fields are renamed to match the GravityObservations schema.

  • Values from corrgrav field are not exported directly.

  • Output meter_reading is derived from corrgrav with all internally applied corrections removed.

  • meter_reading_mgal will contain meter_reading + the specified corrections

Parameters:
tilt_corrbool, default is True

Apply tilt correction tiltcorr to output field meter_reading_mgal.

temp_corrbool, default is True

Apply temperature correction tempcorr to output field meter_reading_mgal.

drift_corrbool, default is True

Apply drift correction driftcorr to output field meter_reading_mgal.

tide_corrbool, default False

Include earth tide correction by copying tidecorr to output field earth_tide_corr.

include_non_standard_fieldsbool or sequence, default is False

Include non-standard fields in the output GravityObservations object. If a sequence is provided, only the specified non-standard fields will be included.

Returns:
GravityObservations
to_gsolve_sites(coords_source: Literal['user', 'gps'] = 'user') GravitySites[source]#

Export CG6 data to a GravitySites object.

The returned GravitySites object contains site locations only. The user will need to set the “reference_gravity” and “gsolve_tie” fields.

Warning

Low accuracy location data

Coordinates may be sourced from the CG-6’s onboard GPS receiver. These will have accuracy equivalent to a typical hand-held GPS unit and should not be used for computing gravity reductions such as free air or Bouguer corrections.

Parameters:
coords_source{‘user’, ‘gps’}, default ‘user’

Specify the source of lat, lon and elev data:

  • 'gps' : the mean of ‘latgps’, ‘longps’ and ‘elevgps’ for each site. These positions are derived from the internal GPS receiver and are of low accuracy, but are almost certainly correct to within a few tens of metres.

  • 'user' : take values from ‘latuser’, ‘lonuser’ and ‘elevuser’ for each site. The ‘user’ coords are sourced from the instrument file stations.dat. This file can be pre-populated with accurate station coordinates prior to field data collection; however, there is no guarantee that these values are correct. Also, for sites that were not pre-defined in stations.dat, the CG-6 will create a site and set its coordinates from the initial GPS fix. In this case, the ‘user’ coords will be less reliable than the ‘gps’ coords, which are averaged over all readings at a site.

Returns:
GravitySites
class gsolve.scintrex.ScintrexData(data: DataFrame, metadata: dict[str, str | float | int | bool | Timestamp], metadata_units: dict[str, str] | None = None, on_error: Literal['raise', 'warn', 'ignore'] = 'raise')[source]#

Bases: ABC

Base class for Scintrex data files.

copy() Self[source]#
property meter_id: str#

Return the instrument identifier - the last four digits of the full serial number.

abstractmethod set_loop() None[source]#
property stations: list[str]#

Return a list of unique station names in the data.

abstractmethod to_gsolve_observations() GravityObservations[source]#

Classes and functions to compute normal gravity and gravity corrections.

class gsolve.reductions.corrections.GravityCorrectionParameters(ellipsoid: str = 'GRS80', density_crust: float = 2670.0, density_water: float = 1030.0, spherical_cap_radius: float = 166735.0, use_curvature_corrected: bool = True, use_atmospheric_correction: bool = True, free_air_gradient: float = 0.3087691)[source]#

Bases: GSolveParameters

Class to store parameters for normal gravity and anomaly calculations.

Parameters:
ellipsoid“WGS84” or “GRS80” (default)

The reference ellipsoid used in normal gravity calculations.

density_crust: float, default = 2670.0

Density of the crust in kg.m**-3.

density_water: float, default = 1030.0

Density of water in kg.m**-3.

spherical_cap_radius: float, default = 166735.0

The radius in metres of the spherical cap correction. The default of 166735.0 m is equivalent to 1.5 degrees of arc on a spherical Earth.

use_curvature_corrected: bool, default True

Specify the type of Bouguer correction to compute and use in subsequent anomaly calculations. If True, Bouguer corrections are curvature corrected. If False, Bouguer corrections are for an infinite horizontal slab.

use_atmospheric_correction: bool, default True

Whether to include atmospheric correction in gravity corrections and subsequent anomaly calculations.

free_air_gradientfloat, default 0.3087691

The free air gradient in mGal/m.

bouguer_correction_fields() list[str][source]#

Return a list of the correction method names required for the specified Bouguer method.

bouguer_correction_type() str[source]#

Return the name of the specified Bouguer correction method.

density_crust: float = 2670.0#
density_water: float = 1030.0#
ellipsoid: str = 'GRS80'#
free_air_gradient: float = 0.3087691#
spherical_cap_radius: float = 166735.0#
use_atmospheric_correction: bool = True#
use_curvature_corrected: bool = True#
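The normal gravity underlying these corrections can be computed in closed form with Somigliana's formula for the default GRS80 ellipsoid. A self-contained sketch using the GRS80 constants from Moritz (2000); this is illustrative, and the library may use an equivalent but different formulation:

```python
import math

# GRS80 constants (Moritz, 2000): equatorial normal gravity, Somigliana's
# k, and the first eccentricity squared.
GAMMA_E = 9.7803267715   # m/s^2
K = 0.001931851353
E2 = 0.00669438002290

def normal_gravity_grs80(latitude_deg: float) -> float:
    """Somigliana closed-form normal gravity on the GRS80 ellipsoid (m/s^2)."""
    s2 = math.sin(math.radians(latitude_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

g_equator = normal_gravity_grs80(0.0)   # equals GAMMA_E by construction
g_pole = normal_gravity_grs80(90.0)     # ~9.832186 m/s^2
```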
class gsolve.reductions.corrections.GravityCorrectionProvider(params: None | GravityCorrectionParameters = None, **kwargs)[source]#

Bases: object

Class to calculate normal gravity and various gravity corrections.

Parameters:
paramsGravityCorrectionParameters or None

Object defining parameters used in computing gravity corrections. If None, then a GravityCorrectionParameters object will be created using default values.

kwargsdict

Additional keyword arguments used to override parameters in the supplied GravityCorrectionParameters object params. If params is None, then kwargs are used to override default parameter values.

Attributes:
paramsGravityCorrectionParameters

Parameters used to compute the gravity corrections.

classmethod available_corrections() tuple[str, ...][source]#

Return a tuple of the available gravity correction methods.

bouguer_corrections(sites: DataFrame | GravitySites) GravityCorrections[source]#

Calculate corrections required for computing a Bouguer anomaly as defined in self.params.

Parameters:
sitespd.DataFrame | GravitySites

An object providing site latitude and ellipsoidal height. See GravityCorrectionProvider.compute() for details.

Returns:
GravityCorrections

Object containing Bouguer corrections and the correction parameters.

compute(sites: GravitySites | DataFrame, corrections: str | Sequence[str] | None = None, column_names: dict[str, str] | None = None, include_coords: bool = False) GravityCorrections[source]#

Compute gravity corrections at sites.

Parameters:
sitesGravitySites | DataFrame

An object providing site latitude and ellipsoidal height, and indexed by 'site_id'. If sites is a DataFrame, it is expected to have columns named 'latitude' and 'height_ellipsoidal', unless alternative columns are specified using the column_names argument.

correctionsstr | Sequence[str], optional

An array or string of corrections to compute. By default compute all corrections required for generating a Bouguer anomaly as specified in self.params.

column_namesdict[str, str] | None, optional

A dictionary mapping the expected columns latitude and height_ellipsoidal to alternative column names, e.g. {'latitude': 'lat', 'height_ellipsoidal': 'height'}.

include_coordsbool, default False

If True, include site latitude and height in output.

Returns:
GravityCorrections

Object containing computed gravity corrections and the correction parameters.

free_air_corrections(sites: DataFrame | GravitySites) GravityCorrections[source]#

Calculate corrections required for computing a free air anomaly.

Parameters:
sitespd.DataFrame | GravitySites

An object providing site latitude and ellipsoidal height. See GravityCorrectionProvider.compute() for details.

Returns:
GravityCorrections

Object containing free air corrections and the correction parameters.

params: GravityCorrectionParameters#
class gsolve.reductions.corrections.GravityCorrections(params: GravityCorrectionParameters | None, site_id: ArrayLike, **kwargs)[source]#

Bases: GSolveTable

Class to store gravity corrections and correction parameters.

Objects of this class are not intended to be created directly. They are generated by calling GravityCorrectionProvider.compute().

Parameters:
paramsGravityCorrectionParameters or None

Object defining parameters used in computing gravity corrections. If None, a default GravityCorrectionParameters object is created.

site_idarray-like

Array of unique site identifiers.

**kwargsdict

Keyword arguments providing correction values. The keyword names should correspond to known correction types. Call GravityCorrections.known_fields() to get a list of valid correction types.

Attributes:
datapandas.DataFrame

DataFrame containing gravity corrections, indexed by site_id.

paramsGravityCorrectionParameters

Parameters used to compute the gravity corrections.

gsolve.reductions.corrections.atmospheric_correction(height_ellipsoidal: ArrayLike) float | ndarray[source]#

Calculate the gravitational effect of the atmospheric mass as a function of station elevation.

The atmospheric correction (eqn 3 in Hinze et al., 2005) is given by:

\[g_{atm} = 0.874 - 9.9E-5 h + 3.5625E-9 h^2\]

where \(h\) is the elevation of the station above the ellipsoid and \(g_{atm}\) is the atmospheric correction in mGal.

For an evaluation of atmospheric correction in New Zealand see Tenzer et al. (2010).

Parameters:
height_ellipsoidalarray or pandas.DataFrame

Station elevation in meters referenced to the ellipsoid.

Returns:
atmospheric_correctionfloat or ndarray

The gravitational effect of the atmosphere in mGal.

References

Hinze, W. J., et al. (2005). New standards for reducing gravity data: The North

American gravity database, Geophysics, 70(4) 25-32, https://doi.org/10.1190/1.1988183

Tenzer, R. et al. (2010). Computation of the atmospheric gravity correction in New Zealand,

New Zealand Journal of Geology and Geophysics, 53(4), pp. 333-340. https://doi.org/10.1080/00288306.2010.510171.
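As a sketch, the polynomial above translates directly to NumPy. The function name mirrors the API entry, but this is an illustrative re-implementation of the published formula, not the library's code:

```python
import numpy as np

def atmospheric_correction(height_ellipsoidal):
    """Atmospheric correction in mGal (eqn 3 of Hinze et al., 2005)."""
    h = np.asarray(height_ellipsoidal, dtype=float)
    return 0.874 - 9.9e-5 * h + 3.5625e-9 * h**2

# At sea level (h = 0) the full atmospheric mass contributes 0.874 mGal,
# and the effect decreases with elevation.
print(atmospheric_correction(0.0))
print(atmospheric_correction([0.0, 1000.0, 3000.0]))
```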

gsolve.reductions.corrections.bouguer_slab_correction(height_ellipsoidal: ArrayLike, density_crust: float = 2670.0, density_water: float = 1030.0) float | ndarray[source]#

Calculate the classic Bouguer correction.

The Bouguer correction approximates the gravitational effect of topography with the gravity effect at the surface of an infinite horizontal slab of constant thickness and density.

This Bouguer correction \(g_{bg}\) is given by:

\[g_{bg} = 2 \pi G \rho h\]

where \(G\) is the gravitational constant, \(h\) is the thickness of the slab (i.e. height), and \(\rho\) is the density of the slab.

Used to remove the gravitational attraction of topography above the ellipsoid from the gravity disturbance. The infinite plate approximation is adequate for regions with flat topography and observation points close to the surface of the Earth.

In the oceans, subtracting normal gravity from the observed gravity results in over correction because the normal Earth has crust where there was water in the real Earth. The Bouguer correction for the oceans aims to remove this residual effect due to the over correction:

\[g_{bg} = 2 \pi G (\rho_w - \rho_c) |h|\]

in which \(\rho_w\) is the density of water and \(\rho_c\) is the density of the crust of the normal Earth. We need to take the absolute value of the bathymetry \(h\) because it is negative and the equation requires a thickness value (positive).

Parameters:
height_ellipsoidalfloat or array-like

Station height relative to the ellipsoid or reference datum in meters. Positive heights are treated as topographic elevation and the Bouguer correction will be calculated using density_crust. Negative heights are treated as bathymetric depth and the Bouguer correction will be calculated using the density difference density_water - density_crust.

density_crustfloat, default 2670.0

Density of the crust in \(kg/m^3\). Used as the density of topography on land and the density of the normal Earth’s crust in the oceans.

density_waterfloat, default 1030.0

Density of water in \(kg/m^3\).

Returns:
bouguer_slab_correctionfloat or array-like

The gravitational effect of topography and residual bathymetry in mGal.
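A minimal NumPy sketch of the two cases above (land slab and oceanic water-for-crust replacement); the value of \(G\) is the CODATA constant and the function is illustrative, not the library implementation:

```python
import numpy as np

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2 (CODATA)
MGAL_PER_MS2 = 1e5  # 1 m/s^2 = 1e5 mGal

def bouguer_slab_correction(height_ellipsoidal, density_crust=2670.0,
                            density_water=1030.0):
    """Infinite-slab Bouguer correction in mGal."""
    h = np.asarray(height_ellipsoidal, dtype=float)
    # Land (h >= 0): slab of crustal density.
    # Ocean (h < 0): residual effect of water replacing normal-Earth crust.
    rho = np.where(h >= 0.0, density_crust, density_water - density_crust)
    return 2.0 * np.pi * G * rho * np.abs(h) * MGAL_PER_MS2

# With the defaults, the slab contributes ~0.112 mGal per metre of topography.
print(bouguer_slab_correction(100.0))
```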

gsolve.reductions.corrections.bouguer_slab_curvature_corrected(height_ellipsoidal: ArrayLike, density_water: float, density_crust: float, cap_extent: float = 166735.0, ellipsoid_or_radius: float | str | Ellipsoid = 'GRS80') float | ndarray[source]#

Calculate the spherically corrected Bouguer slab at specified heights.

This function implements the analytic expression from LaFehr [1].

This is equivalent to the Bullard "A" + Bullard "B" corrections. The spherical cap correction \(g_{cap}\) and the curvature-corrected slab \(g_{sbcc}\) are given by:

\[g_{cap} = 2 \pi G \rho(\mu h - \lambda R)\]

\[g_{sbcc} = 2 \pi G \rho h + g_{cap} = 2 \pi G \rho[(1 + \mu) h - \lambda R]\]

where \(\mu\) and \(\lambda\) are dimensionless constants, \(h\) is the elevation of the station above the ellipsoid, and \(R = R_o + h\) where \(R_o\) is the mean radius of the ellipsoid.

The spherical cap correction is added to the infinite slab correction.

Parameters:
height_ellipsoidalarray or pandas.DataFrame

Station height in meters referenced to the ellipsoid.

density_crustfloat

Density of the crust in \(kg/m^3\). Used as the density of topography on land and the density of the normal Earth’s crust in the oceans.

density_waterfloat

Density of water in \(kg/m^3\).

cap_extentfloat, default 166735.0

The width of the cap correction in m. The default value corresponds to the Hayford-Bowie ‘Zone O’ distance, equivalent to 1.5 degrees of arc on a spherical earth.

ellipsoid_or_radius: float | str | boule.Ellipsoid, default ‘GRS80’

Ellipsoid radius in meters, the name of the ellipsoid, or a boule.Ellipsoid object. Various ellipsoids are available from the Boule package.

Returns:
sb_cap_corr, sb_corr: float or ndarray

The spherical Bouguer cap correction and spherical Bouguer correction in mGal.

References

[1] LaFehr, T. R. (1991). Standardization in gravity reduction,

Geophysics 56, 1170-1178. https://doi.org/10.1190/1.1443137.

gsolve.reductions.corrections.free_air_correction(latitude: ArrayLike, height_ellipsoidal: ArrayLike, free_air_gradient: float = 0.3087691) float | ndarray[source]#

Calculate the free air correction (FAC) for height above the ‘GRS80’ ellipsoid.

This method uses the second-order formula (eqn 5 in Hinze et al.):

\[g_{fac} = -(0.3087691 - 0.0004398 \sin^{2}\phi)h + 7.2125E-8 h^{2}\]

where \(h\) is the elevation of the station above the ellipsoid, \(\phi\) is the station latitude, and \(g_{fac}\) is the gravitational effect of the ‘free air’ in mGal.

Parameters:
latitudefloat, array-like

Latitude of the station in decimal degrees.

height_ellipsoidalfloat, array-like

Height of the station above the ellipsoid in meters.

free_air_gradientfloat, default 0.3087691

The free air gradient in mGal/m.

Returns:
free_air_correctionndarray or float

The gravitational effect of the elevation of the station above the ellipsoid in the absence of topographic mass in mGal.

References

Hinze, W. J., et al. (2005). New standards for reducing gravity data: The North

American gravity database, Geophysics, 70(4) 25-32, https://doi.org/10.1190/1.1988183
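The second-order formula can be sketched directly in NumPy (illustrative, mirroring the signature above but not the library's code):

```python
import numpy as np

def free_air_correction(latitude, height_ellipsoidal,
                        free_air_gradient=0.3087691):
    """Second-order free air correction in mGal (eqn 5 of Hinze et al., 2005)."""
    phi = np.radians(np.asarray(latitude, dtype=float))
    h = np.asarray(height_ellipsoidal, dtype=float)
    # Latitude-dependent gradient term plus a small quadratic height term.
    return -(free_air_gradient - 0.0004398 * np.sin(phi) ** 2) * h \
        + 7.2125e-8 * h**2

# The correction is negative for stations above the ellipsoid,
# roughly -0.3087 mGal per metre at the equator.
print(free_air_correction(0.0, 1000.0))
```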

gsolve.reductions.corrections.normal_gravity_at_ellipsoid(latitude: ArrayLike, ellipsoid: Literal['GRS80', 'WGS84', 'GRS67'] = 'GRS80', si_units: bool = False) float | ndarray[source]#

Calculate normal gravity at the ellipsoid surface using the full Somigliana formula.

Parameters:
latitudearray-like

Latitude in decimal degrees.

ellipsoid‘GRS80’, ‘WGS84’, or ‘GRS67’, default ‘GRS80’

The ellipsoid on which to calculate normal gravity. The ‘GRS67’ ellipsoid is obsolete and not recommended for use. It is provided only for use with older datasets. ‘WGS84’ is treated as being identical to ‘GRS80’ for normal gravity calculations.

si_unitsbool, default False

If True return normal gravity in m/s², otherwise in mGal (default).

Returns:
gammandarray or float

Normal gravity on the ellipsoid surface.

Notes

GRS80 ellipsoid parameters are taken from Table 2.2 of Physical Geodesy (2nd ed.) and Moritz (2000), https://doi.org/10.1007/s001900050278

GRS67 ellipsoid parameters are taken from https://bgi.obs-mip.fr/wp-content-omp/uploads/sites/46/2017/10/BGI_Normal_gravity_determination.pdf

References

Heiskanen, W. A., & Moritz, H. (1967). Physical Geodesy. W. H. Freeman and Company.

Moritz, H. (2000). Geodetic Reference System 1980. Journal of Geodesy, 74(1), 128–133. https://doi.org/10.1007/s001900050278
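For reference, the Somigliana formula with GRS80 constants (Moritz, 2000) can be sketched as follows; this is an illustrative re-implementation, not the library's code:

```python
import numpy as np

# GRS80 parameters: semi-axes in m, normal gravity at equator/pole in m/s^2
A, B = 6378137.0, 6356752.3141
GAMMA_E, GAMMA_P = 9.7803267715, 9.8321863685

def normal_gravity_at_ellipsoid(latitude, si_units=False):
    """Somigliana formula for normal gravity on the GRS80 ellipsoid surface."""
    phi = np.radians(np.asarray(latitude, dtype=float))
    s2, c2 = np.sin(phi) ** 2, np.cos(phi) ** 2
    gamma = (A * GAMMA_E * c2 + B * GAMMA_P * s2) / np.sqrt(A**2 * c2 + B**2 * s2)
    return gamma if si_units else gamma * 1e5  # mGal by default

# Normal gravity increases from equator to pole by ~5186 mGal.
print(normal_gravity_at_ellipsoid([0.0, 45.0, 90.0]))
```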

gsolve.reductions.corrections.normal_gravity_at_stn_elevation(latitude: ArrayLike, height_ellipsoidal: ArrayLike, ellipsoid: Literal['WGS84', 'GRS80'] | Ellipsoid = 'GRS80', si_units: bool = False) float | ndarray[source]#

Calculate normal gravity (gamma) of an ellipsoid at the given latitude and height.

Computes the magnitude of the gradient of the gravity potential (gravitational + centrifugal) generated by the ellipsoid at the given geodetic latitude and height above the ellipsoid (geometric height) [1].

Assumes that the internal density distribution of the ellipsoid is such that the gravity potential is constant at its surface.

Based on the closed-form expressions by Lakshmanan (1991) [2] and corrected by Li & Gotze (2001) [3], which do not require the free-air correction. Normal gravity is calculated using Boule from the Fatiando a Terra project.

Parameters:
latitudearray-like

The geodetic latitude in decimal degrees.

height_ellipsoidalfloat or array-like

The ellipsoidal height in meters.

ellipsoid: ‘WGS84’, ‘GRS80’ or boule.Ellipsoid, default ‘GRS80’

The ellipsoid to use for normal gravity calculation.

si_units: bool, default False

If True return the value in m/s², otherwise in mGal (default).

Returns:
gammafloat or ndarray

Normal gravity in mGal or m/s².

References

[1]

Hofmann-Wellenhof, B., & Moritz, H. (2006). Physical Geodesy (2nd ed.). Vienna: Springer

[2]

Lakshmanan, J. (1991). The generalized gravity anomaly: Endoscopic microgravity. GEOPHYSICS, 56(5), 712-723. https://doi.org/10.1190/1.1443090

[3]

Li, X., & Götze, H. (2001). Ellipsoid, geoid, gravity, geodesy, and geophysics. GEOPHYSICS, 66(6), 1660-1668. https://doi.org/10.1190/1.1487109

gsolve.reductions.corrections.spherical_bouguer_cap_correction(height_ellipsoidal: ArrayLike) float | ndarray[source]#

Calculate the adjustments to the Bouguer slab correction to account for curvature of the Earth.

The spherical cap correction is given by Hinze (2013):

\[g_{spher} = 1.464139E-3 h - 3.533047E-7 h^{2} + 1.002709E-13 h^{3} + 3.002407E-18 h^{4}\]

where \(h\) is the elevation of the station above the ellipsoid and \(g_{spher}\) is the gravitational effect in mGal.

Parameters:
height_ellipsoidalfloat or array-like

Station height(s) in meters relative to the ellipsoid.

Returns:
spherical_cap_correctionndarray or float

The gravitational effect of the spherical cap in mGal.

See also

bouguer_slab_curvature_corrected

Calculate the full Bouguer correction, including spherical cap correction with customizable cap radius.

Notes

This function assumes a fixed spherical cap extent of 166735.0 meters, equivalent to 1.5 degrees of arc on a spherical earth.

References

Hinze WJ, von Frese RRB, Saad AH, (2013). Gravity and Magnetic Exploration:

Principles, Practices, and Applications. Cambridge University Press. https://doi.org/10.1017/CBO9780511843129
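A minimal sketch of the quartic polynomial above (illustrative, assuming NumPy; not the library's implementation):

```python
import numpy as np

def spherical_bouguer_cap_correction(height_ellipsoidal):
    """Curvature adjustment to the Bouguer slab in mGal (Hinze et al., 2013).

    Assumes the fixed Hayford-Bowie 'Zone O' cap extent of 166735 m.
    """
    h = np.asarray(height_ellipsoidal, dtype=float)
    return (1.464139e-3 * h - 3.533047e-7 * h**2
            + 1.002709e-13 * h**3 + 3.002407e-18 * h**4)

# The correction is small compared with the slab term: ~1.1 mGal at h = 1000 m.
print(spherical_bouguer_cap_correction(1000.0))
```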

class gsolve.reductions.anomalies.GravityAnomalies(absolute_gravity: GSolveResults | DataFrame | Series, sites: GravitySites | GravitySurvey | DataFrame, corrections_parameters: GravityCorrectionParameters | GravityCorrectionProvider | GravityCorrections, terrain_corrections: TerrainCorrectionData | None = None)[source]#

Bases: GSolveTable

Compute and store gravity anomalies for a set of sites.

This class provides a simple mechanism to compute free-air and Bouguer anomalies from the outputs of a gsolve network adjustment.

Parameters:
absolute_gravityGSolveResults, DataFrame or Series

An object providing site_id’s and associated absolute gravity values for which anomalies will be computed. Can be any of the following:

  • GSolveResults : the output of a gsolve network adjustment.

  • DataFrame : must contain an 'absolute_gravity' column and be indexed by 'site_id'

  • Series : absolute gravity values indexed by 'site_id'.

sitesGravitySites, GravitySurvey or DataFrame

An object providing the geographic coordinates and ellipsoidal height for each site. Can be any of the following:

  • GravitySites or GravitySurvey : A gsolve object providing site metadata.

  • DataFrame : must contain columns 'latitude', 'longitude' and 'height_ellipsoidal' and be indexed by 'site_id'.

corrections_parametersGravityCorrectionParameters, GravityCorrectionProvider or GravityCorrections

An object providing either the parameters used to compute the various gravity corrections and/or a set of pre-computed gravity corrections. Can be any of the following:

  • GravityCorrectionParameters : a parameter object defining how to compute gravity corrections. The parameters object will be copied to self.params attribute.

  • GravityCorrectionProvider : a class for computing gravity corrections as specified in a GravityCorrectionParameters object. This will be used directly to compute the necessary gravity corrections, and its params copied to self.params.

  • GravityCorrections : pre-computed gravity corrections for a set of sites according to parameters in a GravityCorrectionParameters object. The corrections are used directly, and their params copied to self.params.

terrain_correctionsTerrainCorrectionData, optional

An object providing terrain corrections at each site. These are required to compute the complete Bouguer anomaly. If provided, geographic coordinates and terrain corrections will be copied to self.data and the associated TerrainCorrectionParameter objects copied to self.tcorr_params. If None, then a terrain correction column 'tcorr:total' will be added and set to NaN.

Attributes:
datapandas.DataFrame

Table of computed gravity corrections and anomalies indexed by site_id. The primary columns are:

  • absolute_gravity : the input absolute gravity values.

  • normal_gravity_at_ellipsoid : normal gravity at the surface of the ellipsoid defined by self.params.ellipsoid.

  • free_air_correction : the free-air correction.

  • atmospheric_correction : the atmospheric corrections due to elevation. Only included if self.params.use_atmospheric_correction is True.

  • bouguer_slab_correction or bouguer_slab_curvature_corrected : the Bouguer correction, with form determined by self.params.use_curvature_corrected.

  • tcorr:* : terrain correction for various zones, if terrain corrections were provided. Note that only the tcorr:total column is used in anomaly calculations.

  • tcorr:total : sum of contributions from each terrain correction zone. Will be NaN if no terrain corrections were provided.

  • free_air_anomaly : the free-air anomaly in mGal.

  • bouguer_anomaly_simple : the Bouguer anomaly without terrain corrections.

  • bouguer_anomaly_complete : the Bouguer anomaly including terrain corrections. Will be NaN if no terrain corrections were provided.

paramsGravityCorrectionParameters

A copy of the parameters used to compute corrections and anomalies:

  • params.ellipsoid : the ellipsoid used to compute normal gravity.

  • params.density_crust : the crustal density used in Bouguer corrections.

  • params.density_water : the water density used in Bouguer corrections.

  • params.spherical_cap_radius : the radius of spherical cap used in computing curvature-corrected form of the Bouguer correction.

  • params.use_curvature_corrected : The type of Bouguer correction used. If True, the Bouguer correction was the curvature-corrected form, otherwise the infinite planar slab form was used.

  • params.use_atmospheric_correction : If True, atmospheric corrections were included in anomaly calculations.

tcorr_paramsdict[str, TerrainCorrectionParameters]

A dictionary of copies of the TerrainCorrectionParameters objects associated with terrain corrections. The keys are the terrain correction zone ID's, and will partially correspond to columns in the self.data attribute. Will be an empty dict if no terrain corrections were provided.

gsolve.reductions.anomalies.compute_complete_bouguer_anomaly(absolute_gravity: ArrayLike, normal_gravity: ArrayLike, free_air_correction: ArrayLike, bouguer_correction: ArrayLike, terrain_correction: ArrayLike, atmospheric_correction: ArrayLike = 0.0, spherical_bouguer_cap_correction: ArrayLike = 0.0) ndarray[source]#

Calculate the complete Bouguer anomaly.

The complete Bouguer anomaly is calculated using the following formula:

\[CBA = AG - (NG + FAC + AC + BSC + SBC - TC)\]
Where:
  • CBA = Complete Bouguer Anomaly

  • AG = Absolute Gravity

  • NG = Normal Gravity on the ellipsoid surface

  • FAC = Free Air Correction

  • AC = Atmospheric Correction

  • BSC = Bouguer Slab Correction

  • SBC = Spherical Bouguer Cap Correction

  • TC = Terrain Correction

Parameters:
absolute_gravityArrayLike

Observed absolute gravity.

normal_gravityArrayLike

Normal gravity value at the ellipsoid.

free_air_correctionArrayLike
bouguer_correctionArrayLike

Bouguer correction, either the infinite planar slab or the curvature-corrected form. If curvature corrected, ensure spherical_bouguer_cap_correction = 0.0.

terrain_correctionArrayLike
atmospheric_correctionArrayLike, default = 0.0
spherical_bouguer_cap_correctionArrayLike, default = 0.0
Returns:
complete_bouguer_anomalynp.ndarray

The complete Bouguer anomaly in mGal.
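The formula above can be sketched directly with NumPy (illustrative; the signature mirrors the API entry but is not the library's code):

```python
import numpy as np

def compute_complete_bouguer_anomaly(absolute_gravity, normal_gravity,
                                     free_air_correction, bouguer_correction,
                                     terrain_correction,
                                     atmospheric_correction=0.0,
                                     spherical_bouguer_cap_correction=0.0):
    """CBA = AG - (NG + FAC + AC + BSC + SBC - TC), all in mGal."""
    return np.asarray(absolute_gravity, dtype=float) - (
        np.asarray(normal_gravity)
        + np.asarray(free_air_correction)
        + np.asarray(atmospheric_correction)
        + np.asarray(bouguer_correction)
        + np.asarray(spherical_bouguer_cap_correction)
        - np.asarray(terrain_correction)
    )

# Hypothetical values in mGal: note the terrain correction is subtracted
# inside the parentheses, so it increases the anomaly.
print(compute_complete_bouguer_anomaly(980000.0, 979900.0, -30.0, 10.0, 2.0))
```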

gsolve.reductions.anomalies.compute_free_air_anomaly(absolute_gravity: ArrayLike, normal_gravity: ArrayLike, free_air_correction: ArrayLike) ndarray[source]#

Calculate the free air anomaly.

The free air anomaly is calculated using the formula:

\[FAA = AG - (NG + FAC)\]
Where:
  • FAA = Free Air Anomaly

  • AG = Absolute Gravity

  • NG = Normal Gravity on the ellipsoid surface

  • FAC = Free Air Correction

Parameters:
absolute_gravityArrayLike

Absolute gravity in mGal, typically from the gsolve network adjustment.

normal_gravityArrayLike

Gravity at the ellipsoid surface in mGal.

free_air_correctionArrayLike

Free Air Correction in mGal at the station elevation.

Returns:
free_air_anomalyndarray

The free air anomaly in mGal.
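As a sketch, the free air anomaly is a one-line NumPy computation (illustrative, not the library's code):

```python
import numpy as np

def compute_free_air_anomaly(absolute_gravity, normal_gravity,
                             free_air_correction):
    """FAA = AG - (NG + FAC), all in mGal."""
    return np.asarray(absolute_gravity, dtype=float) - (
        np.asarray(normal_gravity) + np.asarray(free_air_correction))

# Hypothetical values in mGal; the FAC is negative for a station
# above the ellipsoid, so it adds to the anomaly.
print(compute_free_air_anomaly(980000.0, 979900.0, -30.0))
```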

gsolve.reductions.anomalies.compute_simple_bouguer_anomaly(absolute_gravity: ArrayLike, normal_gravity: ArrayLike, free_air_correction: ArrayLike, bouguer_correction: ArrayLike, atmospheric_correction: ArrayLike = 0.0, spherical_bouguer_cap_correction: ArrayLike = 0.0) ndarray[source]#

Calculate the simple Bouguer anomaly; terrain corrections are not included.

The simple Bouguer anomaly is calculated using the following formula:

\[SBA = AG - (NG + FAC + AC + BSC + SBC)\]
Where:
  • SBA = Simple Bouguer Anomaly

  • AG = Absolute Gravity

  • NG = Normal Gravity on the ellipsoid surface

  • FAC = Free Air Correction

  • AC = Atmospheric Correction

  • BSC = Bouguer Slab Correction

  • SBC = Spherical Bouguer Cap Correction

Parameters:
absolute_gravityArrayLike

Observed absolute gravity in mGal.

normal_gravityArrayLike

Normal gravity value at the ellipsoid in mGal.

free_air_correctionArrayLike

The free air correction in mGal.

bouguer_correctionArrayLike

Bouguer correction, either the infinite planar slab or the curvature-corrected form, in mGal. If curvature corrected, ensure spherical_bouguer_cap_correction = 0.0.

atmospheric_correctionArrayLike, default = 0.0

The atmospheric correction in mGal.

spherical_bouguer_cap_correctionArrayLike, default = 0.0

The spherical Bouguer cap correction in mGal. Should be zero if bouguer_correction is curvature corrected.

Returns:
simple_bouguer_anomalyndarray

Bouguer anomaly in mGal without terrain correction.
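The simple Bouguer anomaly formula above can likewise be sketched with NumPy (illustrative, not the library's code); it is the complete Bouguer anomaly without the terrain correction term:

```python
import numpy as np

def compute_simple_bouguer_anomaly(absolute_gravity, normal_gravity,
                                   free_air_correction, bouguer_correction,
                                   atmospheric_correction=0.0,
                                   spherical_bouguer_cap_correction=0.0):
    """SBA = AG - (NG + FAC + AC + BSC + SBC), all in mGal."""
    return np.asarray(absolute_gravity, dtype=float) - (
        np.asarray(normal_gravity)
        + np.asarray(free_air_correction)
        + np.asarray(atmospheric_correction)
        + np.asarray(bouguer_correction)
        + np.asarray(spherical_bouguer_cap_correction))

# Hypothetical values in mGal.
print(compute_simple_bouguer_anomaly(980000.0, 979900.0, -30.0, 10.0))
```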