
Usage

Configuration

toshi-hazard-post requires some configuration to run. This can be done via environment variables, a configuration file, or both. Environment variables override settings in the configuration file.

  • 'THP_RLZ_DIR' / 'THP_AGG_DIR': The path to the realization or aggregate datastore, respectively. Can be a local filepath or an S3 URI.
  • 'THP_NUM_WORKERS': Number of parallel processes to run (default: 1).
  • 'THP_DELAY_MULTIPLIER': Multiplier applied to the start delay of parallel workers. Each parallel job is delayed by multiplier * mod(job_number, 10) to avoid too many simultaneous reads from the realization datastore.
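The staggered start controlled by 'THP_DELAY_MULTIPLIER' can be sketched as follows. This is an illustration of the formula above only; the function name and time units are assumptions for this sketch, not part of the toshi-hazard-post API.

```python
# Illustration of the staggered-start formula: each parallel job is delayed
# by multiplier * mod(job_number, 10), so workers start in at most ten
# distinct waves rather than all reading the datastore at once.

def start_delay(job_number: int, multiplier: float) -> float:
    """Delay (in arbitrary time units) before a worker begins reading."""
    return multiplier * (job_number % 10)

# With a multiplier of 2, jobs 0-11 would start with these delays;
# jobs 10 and 11 wrap around to the same delays as jobs 0 and 1.
delays = [start_delay(n, 2.0) for n in range(12)]
print(delays)
```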

By default, toshi-hazard-post will look for a configuration file named .env in the local directory, though you can specify a different file with the env var 'THP_ENV_FILE'.
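A minimal .env file might look like the following. The paths and values here are placeholders for illustration, not defaults.

```shell
# .env -- read by toshi-hazard-post unless THP_ENV_FILE points elsewhere
THP_RLZ_DIR=s3://my-bucket/realizations
THP_AGG_DIR=/data/thp/aggregates
THP_NUM_WORKERS=4
THP_DELAY_MULTIPLIER=2
```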

Using an input file to run the calculation

The standard way to run toshi-hazard-post is to use the thp command.

$ thp aggregate [--config-file PATH] INPUT_FILE

The input file is a toml file that specifies the calculation arguments:

[general]
compatibility_key = "A_A"
hazard_model_id = "DEMO_MODEL"

[hazard_model]
model_version = "NSHM_v1.0.4"

# alternatively, specify a path to logic tree files
# srm_logic_tree = "demo/srm_logic_tree_no_slab.json"
# gmcm_logic_tree = "demo/gmcm_logic_tree_medium.json"

[site_params]
vs30s = [275, 400]
locations = ["WLG", "SRWG214", "-41.000~174.700"]
# locations_file = "locations.csv"

[calculation]
imts = ["PGA", "SA(0.2)", "SA(0.5)", "SA(1.5)", "SA(3.0)", "SA(5.0)"]
agg_types = ["mean", "cov", "std", "0.1", "0.005", "0.01", "0.025"]

[general]

  • compatibility_key: this is a string used to identify entries in the realization database that were created using a compatible hazard engine, i.e. all hazard curves created with the same compatibility key can be directly compared to each other. Differences will be due to changes in e.g. location, ground motion models, sources, etc. Differences will not be due to the hazard calculation algorithm.
  • hazard_model_id: used to identify the model in the output aggregation database

[hazard_model]

Logic trees can be specified in one of two ways:

  1. Specify an official New Zealand NSHM model defined by the nzshm-model package. This will use the logic trees (both SRM and GMCM) provided by nzshm-model. See the nzshm-model package documentation for details.
  2. Specify paths to SRM and GMCM logic tree files. See the nzshm-model documentation for the file format.

[site_params]

  • vs30s: Site conditions are characterized by vs30, given as a list of ints. Every vs30 is applied to every location, producing len(vs30s) * len(locations) sites.
  • locations: Site locations can be specified as a list of strings using the format specified for the get_locations() function in nzshm-common.
  • locations_file: Path to a csv file with site locations. The file can include a vs30 column for site-specific vs30 values rather than using the vs30s entry. The header row must contain "lat" and "lon", and optionally "vs30", e.g.
    lat,lon,vs30
    -40,170,250
    -45,170,750
    

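As an illustration of the expected layout, a file like the one above can be read with the Python standard library. This is only a sketch of the format, not how toshi-hazard-post itself loads the file.

```python
import csv
import io

# Same layout as the example above: required "lat" and "lon" columns,
# plus an optional per-site "vs30" column.
locations_csv = """lat,lon,vs30
-40,170,250
-45,170,750
"""

# csv.DictReader maps each row to a dict keyed by the header row.
sites = list(csv.DictReader(io.StringIO(locations_csv)))
for site in sites:
    print(site["lat"], site["lon"], site.get("vs30"))
```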
[calculation]

These two settings may be omitted, in which case the calculation will be performed for all members of the corresponding Enumeration type (see toshi_hazard_store/model/constraints.py).

  • imts: list of strings of intensity measure types to calculate (following OpenQuake IMT naming convention)
  • agg_types: list of strings of the statistical aggregates to calculate. Options are:
    • mean: weighted mean
    • std: weighted standard deviation
    • cov: weighted coefficient of variation (Meletti et al., 2021)
    • a fractile, specified by the string representation of a floating point number between 0 and 1 (e.g. "0.1" for the 10th percentile)

Manipulating calculation arguments programmatically

Users may want to manipulate arguments in a script to facilitate easy experimentation. Here is an example of altering the logic tree and re-running a calculation:

from toshi_hazard_post.aggregation_args import load_input_args
from toshi_hazard_post.aggregation import run_aggregation
from toshi_hazard_post.aggregation_setup import get_logic_trees
from nzshm_model.logic_tree.correlation import LogicTreeCorrelations

# starting model
input_file = "demo/hazard_mini.toml"
args = load_input_args(input_file)

# run model
run_aggregation(args)

# extract logic trees
slt, glt = get_logic_trees(
    args.hazard_model.nshm_model_version,
    args.hazard_model.srm_logic_tree,
    args.hazard_model.gmcm_logic_tree,
)

# modify SRM logic tree
for branch_set in slt.branch_sets:
    print(branch_set.tectonic_region_types)
slt.branch_sets = [slt.branch_sets[0]]
slt.branch_sets[0].branches = [slt.branch_sets[0].branches[0]]

# weights of branches of each branch set must sum to 1.0
slt.branch_sets[0].branches[0].weight = 1.0

# remove correlations
slt.correlations = LogicTreeCorrelations()

# modify GMCM logic tree to match the TRT of the new SRM logic tree
for branch_set in glt.branch_sets:
    print(branch_set.tectonic_region_type)
glt.branch_sets = [glt.branch_sets[1]]

# write logic trees to json and add them to the arguments
slt.to_json('slt_one_branch.json')
glt.to_json('glt_crust_only.json')
args.hazard_model.srm_logic_tree = 'slt_one_branch.json'
args.hazard_model.gmcm_logic_tree = 'glt_crust_only.json'
args.general.hazard_model_id = 'ONE_SRM_BRANCH'

run_aggregation(args)