
Introducing gulmc: full monte carlo loss engine #1137

Merged: 79 commits, Dec 8, 2022
Changes from 69 commits
Commits
26ed229
[mcgul] first implementation of full MC gul
mtazzari Oct 12, 2022
6587122
[modelpy] montecarlo implementation in modelpy
mtazzari Oct 13, 2022
9f1468f
stop tracking mcgul
mtazzari Oct 13, 2022
f90ec6a
[modelpy] fixes
mtazzari Oct 13, 2022
6a4f5a3
simplify algorithm
mtazzari Oct 13, 2022
94acf96
remove unused imports
mtazzari Oct 13, 2022
10fc8e8
use numba, process last areaperil id, cleanup
mtazzari Oct 14, 2022
15f1954
[modelpy] add docstrings
mtazzari Oct 14, 2022
d64c1ff
[modelpy] function namechange
mtazzari Oct 14, 2022
6a5c0fa
[modelpy] add TODOs not to forget
mtazzari Oct 14, 2022
8e35fd2
[gulpy] bugfix in calling read_getmodel_data
mtazzari Oct 14, 2022
3d4ebf5
[gulpy] drafting monte carlo implementation
mtazzari Oct 18, 2022
20704ab
[mcgul] Add major modelpy and gulpy rewrite as one tool
mtazzari Oct 21, 2022
5dec3ab
[mcgul] do not sample haz if no haz uncertainty
mtazzari Oct 21, 2022
4443888
[mcgul] cleanup
mtazzari Oct 21, 2022
bc3f3e4
[mcgul] good working implementation
mtazzari Oct 24, 2022
36d1a56
[mcgul] perfectly reproduces effective damageability
mtazzari Oct 24, 2022
dd914f5
[mcgul] further simplification
mtazzari Oct 24, 2022
7431a6e
[mcgul] wip
mtazzari Oct 25, 2022
bcda3c1
[mcgul] compute haz cdf in map_areaperil_ids_in_footprint
mtazzari Oct 25, 2022
e29b09c
[gulmc] update cli
mtazzari Oct 25, 2022
623672a
[getmodel] reverting full mc modifications
mtazzari Oct 25, 2022
05d4b16
[gul] reverting mc modifications
mtazzari Oct 25, 2022
84caafc
[getmodel] reverting mc modifications
mtazzari Oct 25, 2022
238b803
[getmodel] Reverting unused mc modifications
mtazzari Oct 25, 2022
bb0faf4
[gul] updating docstring
mtazzari Oct 25, 2022
b6690db
[getmode;] update docstring
mtazzari Oct 25, 2022
f01ce52
[gulmc] dynamic buff_size
mtazzari Oct 26, 2022
7e85363
[gulmc] imports cleanup
mtazzari Oct 26, 2022
a78b4e1
[gulmc] cleanup
mtazzari Oct 26, 2022
e0cf96c
[gulmc] dynamic buff size
mtazzari Nov 1, 2022
3ff5e26
[gulmc] compute effective damageability
mtazzari Nov 1, 2022
b884c83
[gulmc] effective damageability with numba
mtazzari Nov 1, 2022
05008ab
Merge branch 'develop' into feature/gulmc
mtazzari Nov 1, 2022
7b036c4
[gulpy] minor bugfix
mtazzari Nov 1, 2022
4d5275d
[gulmc] bugfix: use 4 as item size in int32_mv
mtazzari Nov 4, 2022
ac94e0c
[gulmc] minor cleanup
mtazzari Nov 4, 2022
28e5209
[gulmc] fix conflicts with stashed edits
mtazzari Nov 4, 2022
50517c6
Merge branch 'develop' into feature/gulmc
mtazzari Nov 7, 2022
4989962
[gulmc] cleanup
mtazzari Nov 8, 2022
5d32e90
Merge branch 'develop' into feature/gulmc
mtazzari Nov 11, 2022
7342153
[gulmc] remove unused imports
mtazzari Nov 14, 2022
40ceb1b
[modelpy] remove one blank line
mtazzari Nov 14, 2022
a0eccfe
[gulmc] add effective_damageability optional arg
mtazzari Nov 16, 2022
68e8dd0
[gulmc] bugfix effective damageability
mtazzari Nov 17, 2022
3573331
[gulmc] add tests
mtazzari Nov 17, 2022
ab8db52
[gulmc] add tests for effective damageability
mtazzari Nov 17, 2022
4925111
[gulmc] move gulpy tests to separate module
mtazzari Nov 17, 2022
3547c84
[tests] add test_model_1 to the tests assets
mtazzari Nov 17, 2022
a2919d4
[tests] use typing.Tuple for type hints
mtazzari Nov 17, 2022
be0e4bf
[gulmc] better tests, tiv set to float64
mtazzari Nov 21, 2022
31b5895
[gulmc] log info about effective_damageabilty
mtazzari Nov 21, 2022
5657157
[gulmc] cleaning up, adding docs (WIP).
mtazzari Nov 21, 2022
02d59a3
[gulmc] adding documentation and docstrings
mtazzari Nov 22, 2022
ff0719e
[gulmc] bugfix
mtazzari Nov 22, 2022
6952a4c
[gulmc] adding docs
mtazzari Nov 23, 2022
7ca5a30
[gulmc] add docs
mtazzari Nov 23, 2022
5783a2c
[gulmc] rewrite complex outputs as tuples
mtazzari Nov 23, 2022
17e1b4d
[gulmc] add final docs
mtazzari Nov 23, 2022
91ace7e
[gulmc] remove unused import
mtazzari Nov 23, 2022
f970467
[gulmc] Improve --debug flag
mtazzari Nov 23, 2022
396df02
[gulmc] raise ValueError if alloc_rule>0 when debug is 1 or 2
mtazzari Nov 23, 2022
e5a128c
[gulmc] cleanup
mtazzari Nov 23, 2022
5b1f1f2
[flake8] fix error code in ignore config
mtazzari Nov 24, 2022
d8e3eb1
[requirements] testing unpinning virtualenv
mtazzari Nov 24, 2022
f3798b2
[requirements] testing unpinning virtualenv
mtazzari Nov 24, 2022
d525acc
[requirements] fixing package clash
mtazzari Nov 24, 2022
481147c
[gulmc] test ValueError if alloc_rule is invalid
mtazzari Nov 24, 2022
1e2532c
[gulmc] improve tests
mtazzari Nov 24, 2022
310e36b
[gulmc] remove unnecessary binary files
mtazzari Dec 8, 2022
c8facbb
[requirements] removing unnecessary virtualenv
mtazzari Dec 8, 2022
1d79281
[CI] specify pip-compile resolver for py 3.7
mtazzari Dec 8, 2022
6b6d23b
Merge branch 'develop' into feature/gulmc
mtazzari Dec 8, 2022
6e3b74b
[CI] fix bug in CI
mtazzari Dec 8, 2022
09ad782
[CI] bugfix
mtazzari Dec 8, 2022
93e9f2f
[gulmc] update following review comments
mtazzari Dec 8, 2022
40abd3c
[gulmc] implement fixes following review
mtazzari Dec 8, 2022
6691535
Merge branch 'develop' into feature/gulmc
mtazzari Dec 8, 2022
0280821
[gulmc] bugfix in logging
mtazzari Dec 8, 2022
6 changes: 6 additions & 0 deletions .gitignore
@@ -119,6 +119,12 @@ tests/model_execution/tmp_fc_kparse_output/

!tests/pytools/data_layer/oasis_files/meta_data/correlations.bin

# do not ignore tests assets
!tests/assets/test_model_1/*.bin
!tests/assets/test_model_1/input/*.bin
!tests/assets/test_model_1/static/*.bin
!tests/assets/test_model_1/expected/*.bin

# pycharm
.idea/
data/
7 changes: 7 additions & 0 deletions oasislmf/pytools/common.py
@@ -0,0 +1,7 @@
"""
This file defines quantities reused across the pytools stack.

"""

# streams
PIPE_CAPACITY = 65536 # bytes
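A quick sketch of why this constant matters: 65536 bytes is the default Linux pipe buffer size, so the pytools components size their stream buffers in multiples of it. The helper below is hypothetical (not part of the codebase) and only illustrates the chunking pattern:

```python
import io

# Default Linux pipe buffer size, as defined in oasislmf/pytools/common.py.
PIPE_CAPACITY = 65536  # bytes

def write_in_chunks(stream, payload):
    """Write `payload` to `stream` in chunks of at most PIPE_CAPACITY bytes.

    Illustrative helper: sizing each write to the pipe capacity means a
    chunk can fit in the kernel pipe buffer in one go.
    """
    mv = memoryview(payload)
    for start in range(0, len(mv), PIPE_CAPACITY):
        stream.write(mv[start:start + PIPE_CAPACITY])

out = io.BytesIO()
write_in_chunks(out, b"\x00" * (2 * PIPE_CAPACITY + 100))
```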
1 change: 1 addition & 0 deletions oasislmf/pytools/getmodel/common.py
@@ -16,6 +16,7 @@
parquetfootprint_meta_filename = "footprint_parquet_meta.json"

areaperil_int = np.dtype(os.environ.get('AREAPERIL_TYPE', 'u4'))
nb_areaperil_int = nb.from_dtype(areaperil_int)
oasis_float = np.dtype(os.environ.get('OASIS_FLOAT', 'f4'))

FootprintHeader = nb.from_dtype(np.dtype([('num_intensity_bins', np.int32),
53 changes: 32 additions & 21 deletions oasislmf/pytools/getmodel/manager.py
@@ -13,17 +13,18 @@
from contextlib import ExitStack

import numba as nb
from numba.typed import Dict
import numpy as np
import pyarrow.parquet as pq
from numba.typed import Dict
from oasislmf.pytools.common import PIPE_CAPACITY

from oasislmf.pytools.data_layer.footprint_layer import FootprintLayerClient
from .common import areaperil_int, oasis_float, Index_type, Keys
from .footprint import Footprint
from oasislmf.pytools.getmodel.common import areaperil_int, oasis_float, Index_type, Keys
from oasislmf.pytools.getmodel.footprint import Footprint

logger = logging.getLogger(__name__)

buff_size = 65536
buff_size = PIPE_CAPACITY

oasis_int_dtype = np.dtype('i4')
oasis_int = np.int32
@@ -312,12 +313,15 @@ def get_vulns(static_path, vuln_dict, num_intensity_bins, ignore_file_type=set()
num_damage_bins = header[0]
if "vulnerability.idx" in static_path:
logger.debug(f"loading {os.path.join(static_path, 'vulnerability.idx')}")
vulns_bin = np.memmap(os.path.join(static_path, "vulnerability.bin"), dtype=VulnerabilityRow, offset=4, mode='r')
vulns_idx_bin = np.memmap(os.path.join(static_path, "vulnerability.idx"), dtype=VulnerabilityIndex, mode='r')
vulns_bin = np.memmap(os.path.join(static_path, "vulnerability.bin"),
dtype=VulnerabilityRow, offset=4, mode='r')
vulns_idx_bin = np.memmap(os.path.join(static_path, "vulnerability.idx"),
dtype=VulnerabilityIndex, mode='r')
vuln_array = load_vulns_bin_idx(vulns_bin, vulns_idx_bin, vuln_dict,
num_damage_bins, num_intensity_bins)
else:
vulns_bin = np.memmap(os.path.join(static_path, "vulnerability.bin"), dtype=Vulnerability, offset=4, mode='r')
vulns_bin = np.memmap(os.path.join(static_path, "vulnerability.bin"),
dtype=Vulnerability, offset=4, mode='r')
vuln_array = load_vulns_bin(vulns_bin, vuln_dict, num_damage_bins, num_intensity_bins)

elif "vulnerability.csv" in input_files and "csv" not in ignore_file_type:
@@ -362,7 +366,7 @@ def get_damage_bins(static_path, ignore_file_type=set()):
return np.fromfile(os.path.join(static_path, "damage_bin_dict.bin"), dtype=damagebindictionary)
elif "damage_bin_dict.csv" in input_files and 'csv' not in ignore_file_type:
logger.debug(f"loading {os.path.join(static_path, 'damage_bin_dict.csv')}")
return np.genfromtxt(os.path.join(static_path, "damage_bin_dict.csv"), dtype=damagebindictionaryCsv)
return np.genfromtxt(os.path.join(static_path, "damage_bin_dict.csv"), dtype=damagebindictionaryCsv, skip_header=1, delimiter=',')
else:
raise FileNotFoundError(f'damage_bin_dict file not found at {static_path}')

@@ -371,13 +375,15 @@ def damage_bin_prob(p, intensities_min, intensities_max, vulns, intensities):
def damage_bin_prob(p, intensities_min, intensities_max, vulns, intensities):
"""
Calculate the probability of an event happening and then causing damage.
Note: vulns is a 1-d array containing 1 damage bin of the damage probability distribution as a
function of hazard intensity.

Args:
p: (float) the probability to be updated
intensities_min: (int) intensity minimum
intensities_max: (int) intensity maximum
vulns: (List[float]) PLEASE FILL IN
intensities: (List[float]) list of all the intensities
intensities_min: (int) minimum intensity bin id
intensities_max: (int) maximum intensity bin id
vulns: (List[float]) slice of damage probability distribution given hazard intensity
intensities: (List[float]) intensity probability distribution

Returns: (float) the updated probability
"""
@@ -402,9 +408,9 @@ def do_result(vulns_id, vuln_array, mean_damage_bins,
mean_damage_bins: (List[float]) the mean of each damage bin (len(mean_damage_bins) == num_damage_bins)
int32_mv: (List[int]) FILL IN LATER
num_damage_bins: (int) number of damage bins in the data
intensities_min: (int) intensity minimum
intensities_max: (int) intensity maximum
intensities: (List[float]) list of all the intensities
intensities_min: (int) minimum intensity bin id
intensities_max: (int) maximum intensity bin id
intensities: (List[float]) intensity probability distribution
event_id: (int) the event ID that concerns the result being calculated
areaperil_id: (List[int]) the areaperil ID that concerns the result being calculated
vuln_i: (int) the index concerning the vulnerability inside the vuln_array
@@ -540,6 +546,7 @@ def run(run_dir, file_in, file_out, ignore_file_type, data_server, peril_filter)
file_out: (Optional[str]) the path to the output directory
ignore_file_type: set(str) file extension to ignore when loading
data_server: (bool) if set to True runs the data server
peril_filter (list[int]): list of perils to include in the computation (if None, all perils will be included).

Returns: None
"""
@@ -573,7 +580,8 @@ def run(run_dir, file_in, file_out, ignore_file_type, data_server, peril_filter)
if peril_filter:
keys_df = pd.read_csv(os.path.join(input_path, 'keys.csv'), dtype=Keys)
valid_area_peril_id = keys_df.loc[keys_df['PerilID'].isin(peril_filter), 'AreaPerilID'].to_numpy()
logger.debug(f'Peril specific run: ({peril_filter}), {len(valid_area_peril_id)} AreaPerilID included out of {len(keys_df)}')
logger.debug(
f'Peril specific run: ({peril_filter}), {len(valid_area_peril_id)} AreaPerilID included out of {len(keys_df)}')
else:
valid_area_peril_id = None

@@ -599,9 +607,7 @@ def run(run_dir, file_in, file_out, ignore_file_type, data_server, peril_filter)

# event_id, areaperil_id, vulnerability_id, num_result, [oasis_float] * num_result
max_result_relative_size = 1 + areaperil_int_relative_size + 1 + 1 + num_damage_bins * results_relative_size

mv = memoryview(bytearray(buff_size))

int32_mv = np.ndarray(buff_size // np.int32().itemsize, buffer=mv, dtype=np.int32)

# header
Expand All @@ -613,13 +619,18 @@ def run(run_dir, file_in, file_out, ignore_file_type, data_server, peril_filter)
if len_read == 0:
break

# get the next event_id from the input stream
event_id = event_ids[0]

if data_server:
event_footprint = FootprintLayerClient.get_event(event_ids[0])
event_footprint = FootprintLayerClient.get_event(event_id)
else:
event_footprint = footprint_obj.get_event(event_ids[0])
event_footprint = footprint_obj.get_event(event_id)

if event_footprint is not None:
for cursor_bytes in doCdf(event_ids[0],
# compute effective damageability probability distribution
# stream out: event_id, areaperil_id, number of damage bins, effective damageability cdf bins (bin_mean and prob_to)
for cursor_bytes in doCdf(event_id,
num_intensity_bins, event_footprint,
areaperil_to_vulns_idx_dict, areaperil_to_vulns_idx_array, areaperil_to_vulns,
vuln_array, vulns_id, num_damage_bins, mean_damage_bins,
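To make the stream layout above concrete, here is a minimal sketch of packing and parsing one effective-damageability cdf record following the order described in the comments: `event_id, areaperil_id, vulnerability_id, num_result`, then `num_result` `(prob_to, bin_mean)` pairs. Field widths assume the default types (`i4` ints, `u4` areaperil_int, `f4` oasis_float), and the helper names are illustrative, not functions from the PR:

```python
import struct

def pack_cdf_record(event_id, areaperil_id, vulnerability_id, bins):
    """Pack one cdf record: 16-byte header followed by (prob_to, bin_mean) pairs."""
    header = struct.pack('<iIii', event_id, areaperil_id, vulnerability_id, len(bins))
    body = b''.join(struct.pack('<ff', prob_to, bin_mean) for prob_to, bin_mean in bins)
    return header + body

def unpack_cdf_record(buf):
    """Parse one cdf record back into (event_id, areaperil_id, vulnerability_id, bins)."""
    event_id, areaperil_id, vulnerability_id, n = struct.unpack_from('<iIii', buf, 0)
    bins = [struct.unpack_from('<ff', buf, 16 + 8 * i) for i in range(n)]
    return event_id, areaperil_id, vulnerability_id, bins

# round-trip a tiny two-bin record (values chosen to be exact in float32)
record = pack_cdf_record(1, 154, 8, [(0.25, 0.5), (1.0, 0.75)])
```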
16 changes: 12 additions & 4 deletions oasislmf/pytools/gul/common.py
@@ -11,15 +11,22 @@
# probably need to set this dynamically depending on the stream type
gul_header = np.int32(1 | 2 << 24).tobytes()

PIPE_CAPACITY = 65536 # bytes
GETMODEL_STREAM_BUFF_SIZE = 2 * PIPE_CAPACITY

items_data_type = nb.from_dtype(np.dtype([('item_id', np.int32),
('damagecdf_i', np.int32),
('rng_index', np.int32)
]))

coverage_type = nb.from_dtype(np.dtype([('tiv', np.float),
# TODO: 'vulnerability_id' should be renamed to 'vulnerability_idx' because it is the
# index of vuln_array where the vulnerability function is stored.
items_MC_data_type = nb.from_dtype(np.dtype([('item_id', np.int32),
('vulnerability_id', np.int32),
('hazcdf_i', np.int32),
('rng_index', np.int32),
('eff_vuln_cdf_i', np.int32),
('eff_vuln_cdf_Ndamage_bins', np.int32)
]))

coverage_type = nb.from_dtype(np.dtype([('tiv', np.float64),
('max_items', np.int32),
('start_items', np.int32),
('cur_items', np.int32)
@@ -38,6 +45,7 @@

ITEM_MAP_KEY_TYPE = nb.types.Tuple((nb.from_dtype(areaperil_int), nb.types.int32))
ITEM_MAP_VALUE_TYPE = nb.types.UniTuple(nb.types.int32, 3)
ITEM_MAP_KEY_TYPE_internal = nb.types.Tuple((nb.from_dtype(areaperil_int), nb.types.int64))

# compute the relative size of oasis_float and areaperil_int vs int32
oasis_float_to_int32_size = oasis_float.itemsize // np.int32().itemsize
8 changes: 5 additions & 3 deletions oasislmf/pytools/gul/io.py
@@ -7,11 +7,12 @@
from numba import njit
from numba.typed import Dict, List
from numba.types import int32 as nb_int32, int64 as nb_int64, int8 as nb_int8
from oasislmf.pytools.common import PIPE_CAPACITY

from oasislmf.pytools.getmodel.common import oasis_float, areaperil_int
from oasislmf.pytools.gul.common import (
ProbMean, damagecdfrec_stream, oasis_float_to_int32_size, areaperil_int_to_int32_size,
items_data_type, ProbMean_size, NP_BASE_ARRAY_SIZE, GETMODEL_STREAM_BUFF_SIZE
items_data_type, ProbMean_size, NP_BASE_ARRAY_SIZE
)
from oasislmf.pytools.gul.random import generate_hash

@@ -40,7 +41,7 @@ def gen_valid_area_peril(valid_area_peril_id):
return valid_area_peril_dict


def read_getmodel_stream(stream_in, item_map, coverages, compute, seeds, valid_area_peril_id=None, buff_size=GETMODEL_STREAM_BUFF_SIZE, ):
def read_getmodel_stream(stream_in, item_map, coverages, compute, seeds, valid_area_peril_id=None, buff_size=PIPE_CAPACITY):
"""Read the getmodel output stream yielding data event by event.

Args:
@@ -50,7 +51,8 @@ def read_getmodel_stream(stream_in, item_map, coverages, compute, seeds, valid_a
coverages (numpy.ndarray[coverage_type]): array with coverage data.
compute (numpy.array[int]): list of coverages to be computed.
seeds (numpy.array[int]): the random seeds for each coverage_id.
buff_size (int): size in bytes of the read buffer (see note). Default is GETMODEL_STREAM_BUFF_SIZE.
buff_size (int): size in bytes of the read buffer (see note).
valid_area_peril_id (list[int]): list of valid areaperil_ids.

Raises:
ValueError: If the stream type is not 1.
12 changes: 5 additions & 7 deletions oasislmf/pytools/gul/manager.py
@@ -11,6 +11,7 @@
import numpy as np
from numba import njit
from numba.typed import Dict, List
from oasislmf.pytools.common import PIPE_CAPACITY

from oasislmf.pytools.data_layer.conversions.correlations import CorrelationsData

@@ -19,22 +20,19 @@
from oasislmf.pytools.getmodel.common import oasis_float, Keys, Correlation

from oasislmf.pytools.gul.common import (
MEAN_IDX, PIPE_CAPACITY, STD_DEV_IDX, TIV_IDX, CHANCE_OF_LOSS_IDX, MAX_LOSS_IDX, NUM_IDX,
MEAN_IDX, STD_DEV_IDX, TIV_IDX, CHANCE_OF_LOSS_IDX, MAX_LOSS_IDX, NUM_IDX,
ITEM_MAP_KEY_TYPE, ITEM_MAP_VALUE_TYPE,
gulSampleslevelRec_size, gulSampleslevelHeader_size, coverage_type, gul_header,
)
from oasislmf.pytools.gul.core import split_tiv_classic, split_tiv_multiplicative, get_gul, setmaxloss, compute_mean_loss
from oasislmf.pytools.gul.io import (
write_negative_sidx, write_sample_header,
write_sample_rec, read_getmodel_stream,
)

from oasislmf.pytools.gul.random import (
get_random_generator, compute_norm_cdf_lookup,
compute_norm_inv_cdf_lookup, get_corr_rval, generate_correlated_hash_vector
)

from oasislmf.pytools.gul.random import get_random_generator
from oasislmf.pytools.gul.core import split_tiv_classic, split_tiv_multiplicative, get_gul, setmaxloss, compute_mean_loss
from oasislmf.pytools.gul.utils import append_to_dict_value, binary_search


@@ -295,7 +293,7 @@ def run(run_dir, ignore_file_type, sample_size, loss_threshold, alloc_rule, debu
last_processed_coverage_ids_idx = 0

# adjust buff size so that the buffer fits the longest coverage
buff_size = PIPE_CAPACITY
buff_size = PIPE_CAPACITY * 2
max_bytes_per_coverage = np.max(coverages['cur_items']) * max_bytes_per_item
while buff_size < max_bytes_per_coverage:
buff_size *= 2
@@ -337,7 +335,7 @@ def compute_event_losses(event_id, coverages, coverage_ids, items_data,
Args:
event_id (int32): event id.
coverages (numpy.array[oasis_float]): array with the coverage values for each coverage_id.
coverage_ids (numpy.array[: array of **uniques** coverage ids used in this event.
coverage_ids (numpy.array[int]): array of unique coverage ids used in this event.
items_data (numpy.array[items_data_type]): items-related data.
last_processed_coverage_ids_idx (int): index of the last coverage_id stored in `coverage_ids` that was fully processed
and printed to the output stream.
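The buffer-sizing change above amounts to a small rule: start from twice the pipe capacity and keep doubling until the longest coverage's worth of items fits in the buffer. A sketch of that rule (the function name is illustrative, not from the PR):

```python
PIPE_CAPACITY = 65536  # bytes, from oasislmf/pytools/common.py

def fit_buff_size(max_bytes_per_coverage, start=2 * PIPE_CAPACITY):
    """Double the buffer from `start` until the longest coverage fits.

    Illustrative helper mirroring the sizing loop in gul/manager.py:
    the result is always `start` times a power of two.
    """
    buff_size = start
    while buff_size < max_bytes_per_coverage:
        buff_size *= 2
    return buff_size
```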
24 changes: 23 additions & 1 deletion oasislmf/pytools/gul/random.py
@@ -16,10 +16,14 @@
EVENT_ID_HASH_CODE = np.int64(1943272559)
HASH_MOD_CODE = np.int64(2147483648)

HAZ_GROUP_ID_HASH_CODE = np.int64(1343271947)
HAZ_EVENT_ID_HASH_CODE = np.int64(1743274343)
HAZ_HASH_MOD_CODE = np.int64(2157483709)


@njit(cache=True, fastmath=True)
def generate_hash(group_id, event_id, base_seed=0):
"""Generate hash for a given `group_id`, `event_id` pair.
"""Generate hash for a given `group_id`, `event_id` pair for the vulnerability pdf.

Args:
group_id (int): group id.
@@ -35,6 +39,24 @@ def generate_hash(group_id, event_id, base_seed=0):
return hash


@njit(cache=True, fastmath=True)
def generate_hash_haz(group_id, event_id, base_seed=0):
"""Generate hash for a given `group_id`, `event_id` pair for the hazard pdf.

Args:
group_id (int): group id.
event_id (int): event id.
base_seed (int, optional): base random seed. Defaults to 0.

Returns:
int64: hash
"""
hash = (base_seed + (group_id * HAZ_GROUP_ID_HASH_CODE) % HAZ_HASH_MOD_CODE +
(event_id * HAZ_EVENT_ID_HASH_CODE) % HAZ_HASH_MOD_CODE) % HAZ_HASH_MOD_CODE

return hash


def get_random_generator(random_generator):
"""Get the random generator function.

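A pure-Python mirror of `generate_hash_haz` above (the original is numba-jitted) shows how the separate `HAZ_*` constants give the hazard sampler its own seed stream, independent of the damage seeds, for the same `(group_id, event_id)` pair. Seeding a stdlib RNG here stands in for the numpy generators used by the engine:

```python
import random

# Constants as added in oasislmf/pytools/gul/random.py
HAZ_GROUP_ID_HASH_CODE = 1343271947
HAZ_EVENT_ID_HASH_CODE = 1743274343
HAZ_HASH_MOD_CODE = 2157483709

def generate_hash_haz(group_id, event_id, base_seed=0):
    """Same formula as the jitted version: a deterministic per-pair seed."""
    return (base_seed
            + (group_id * HAZ_GROUP_ID_HASH_CODE) % HAZ_HASH_MOD_CODE
            + (event_id * HAZ_EVENT_ID_HASH_CODE) % HAZ_HASH_MOD_CODE) % HAZ_HASH_MOD_CODE

# same (group_id, event_id) -> same seed -> reproducible hazard samples
seed = generate_hash_haz(group_id=10, event_id=3)
rng = random.Random(seed)
```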
5 changes: 5 additions & 0 deletions oasislmf/pytools/gulmc/__init__.py
@@ -0,0 +1,5 @@
import logging
from logging import NullHandler

logger = logging.getLogger(__name__)
logger.addHandler(NullHandler())
66 changes: 66 additions & 0 deletions oasislmf/pytools/gulmc/cli.py
@@ -0,0 +1,66 @@
#!/usr/bin/env python

import argparse
import logging

from oasislmf import __version__ as oasis_version
from oasislmf.pytools.gulmc import manager, logger

parser = argparse.ArgumentParser(
usage='use "%(prog)s --help" for more information',
formatter_class=argparse.RawTextHelpFormatter # for multi-line help text
)

# arguments in alphabetical order (lower-case, then upper-case, then long arguments)
parser.add_argument('-a', help='back-allocation rule. Default: 0', default=0, type=int, dest='alloc_rule')
parser.add_argument('-d', help='output the ground up loss (0), the random numbers used for hazard sampling (1), '
'the random numbers used for damage sampling (2). Default: 0',
action='store', type=int, dest='debug', default=0)
parser.add_argument('-i', '--file-in', help='filename of input stream (list of events from `eve`).',
action='store', type=str, dest='file_in')
parser.add_argument('-o', '--file-out', help='filename of output stream (ground up losses).',
action='store', type=str, dest='file_out')
parser.add_argument('-L', help='Loss threshold. Default: 1e-6', default=1e-6,
action='store', type=float, dest='loss_threshold')
parser.add_argument('-S', help='Sample size. Default: 0', default=0, action='store', type=int, dest='sample_size')
parser.add_argument('-V', '--version', action='version', version='{}'.format(oasis_version))
parser.add_argument('--effective-damageability',
help='if passed true, the effective damageability is used to draw loss samples instead of full MC. Default: False',
action='store_true', dest='effective_damageability', default=False)
parser.add_argument('--ignore-correlation',
help='if passed true, peril correlation groups (if defined) are ignored for the generation of correlated samples. Default: False',
action='store_true', dest='ignore_correlation', default=False)
parser.add_argument('--ignore-file-type', nargs='*',
help='the type of file to be loaded. Default: set()', default=set())
parser.add_argument('--data-server', help='Use tcp/sockets for IPC data sharing.',
action='store_true', dest='data_server')
parser.add_argument('--logging-level',
help='logging level (debug:10, info:20, warning:30, error:40, critical:50). Default: 30',
default=30, type=int)
parser.add_argument('--peril-filter', help='Id of the peril to keep, if empty take all perils',
nargs='+', dest='peril_filter')
parser.add_argument('--random-generator',
help='random number generator\n0: numpy default (MT19937), 1: Latin Hypercube. Default: 1',
default=1, type=int, dest='random_generator')
parser.add_argument('--run-dir', help='path to the run directory. Default: "."', default='.')


def main():
# parse arguments to variables
# note: the long flag name (e.g., '--opt-one') is used as variable name (i.e, the `dest`).
# hyphens in the long flag name are parsed to underscores, e.g. '--opt-one' is stored in `opt_one`
kwargs = vars(parser.parse_args())

# add handler to gul logger
ch = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
logging_level = kwargs.pop('logging_level')
logger.setLevel(logging_level)

manager.run(**kwargs)


if __name__ == '__main__':
main()