| Component | Build Status |
| --- | --- |
| Cypress Integration Tests | |
NOTE: For now some background on IBF-terminology (e.g. triggers) is expected. This can be expanded on later.
This is the repository for the IBF-system. It includes 2 main folders:

- API-service
  - which accepts input from various IBF-pipelines, which upload impact forecast data to the IBF-system at regular intervals (see 'Dependencies' below);
  - and which lets the IBF-dashboard - or other authorized accounts - retrieve data from the IBF-database.
- IBF-dashboard
  - showing all impact forecast data - whether leading to a trigger or not - in a visual portal.
  - The IBF-dashboard will not show meaningful information (or even load correctly) without impact forecast data being uploaded to it.
    - This data is provided by separate IBF-pipelines, which are not part of this repository but are strongly connected to it.
    - See the 510 IBF Project Document for more info and links to the 510 instances of these pipelines per disaster type.
    - For development/testing purposes, mock endpoints and mock data are available to replace the need for these pipelines. (See 'Load local database with data' below.)
Set up the env variables:

```
cp example.env .env
```

Fill in the .env variables; ask someone who has the values.
Set up secret values for the IBF-pipeline:

```
cp services/IBF-pipeline/pipeline/secrets.py.template services/IBF-pipeline/pipeline/secrets.py
```

Fill in the variables; ask someone who has the values.
(Only if connecting your local setup to a remote database:) Whitelist your machine's IP address on the database server.
```
docker-compose -f docker-compose.yml up                                    # for production
docker-compose up                                                          # for development
docker-compose -f docker-compose.yml -f docker-compose.override.yml up    # for development (explicit)
```
For local development you can also run the services and interface without Docker.

For the API-service:

```
cp .env services/API-service/.env
cd services/API-service
npm run start:dev
```

For the IBF-dashboard:

```
cd interfaces/IBF-dashboard
npm start
```
Suggestion: load everything through Docker, except the IBF-dashboard. This has the benefit that changes in front-end code are immediately reflected, instead of having to rebuild:

```
docker-compose up -d
docker-compose stop ibf-dashboard
cd interfaces/IBF-dashboard
npm start
```
When running Docker locally, a database container will start (as opposed to remote servers, which are connected to a database server). For setting up a fully working version of the IBF-dashboard, the following steps are needed.
- Seed database with initial static data
  - `docker-compose exec ibf-api-service npm run seed`
- Load initial raster data
  - Get the file `raster-files.zip` from this link.
  - Unzip it in the `services/API-service/geoserver-volume/raster-files` folder, such that that folder now has these subfolders:
    - `input` folder: static raster files that are served through 'geoserver' to the 'IBF-dashboard'
    - `mock-output` folder: mock output raster files that are used by the mock endpoint (see below)
    - `output` folder: currently empty, but any raster files that are posted to the API-service by IBF-pipelines (or the mock endpoint) will be stored here, and Geoserver will be able to read them from here.
- Post a 1st batch of dynamic data to the database
  - by calling the mock endpoint (see the scripted sketch below this list)
    - see the API documentation: http://localhost:3000/docs/#/scripts
    - run for all countries and disaster types at once: http://localhost:3000/docs/#/scripts/ScriptsController_mockAll
    - or run for 1 country and 1 disaster type: http://localhost:3000/docs/#/scripts/ScriptsController_mockDynamic
  - or by having an external pipeline make a call to the IBF-system
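For a quick scripted test, the mock call can also be made from code. Below is a minimal TypeScript sketch (Node 18+ for the global `fetch`); the endpoint path and body field are assumptions, so verify both in the Swagger UI at http://localhost:3000/docs/#/scripts before relying on it.

```ts
// Hedged sketch: trigger the mock-all script against a local API-service.
// ASSUMPTIONS: the path '/api/scripts/mock-all' and the 'secret' body field
// are illustrative only — confirm them in the Swagger UI (/docs).
async function mockAll(): Promise<void> {
  const response = await fetch('http://localhost:3000/api/scripts/mock-all', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Secret value would come from your .env; the variable name is assumed.
    body: JSON.stringify({ secret: process.env.RESET_SECRET }),
  });
  console.log(`mock-all responded with HTTP ${response.status}`);
}

mockAll().catch(console.error);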
Adding new data
- Any new static data needs to be imported using a seed script + corresponding TypeORM entity (see the illustrative sketch below this list)
  - This includes e.g. geojson data
  - The only exception is raster files, which need to be included in data.zip here and transferred to all relevant servers.
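As a rough illustration of this seed-script + TypeORM-entity pattern, consider the sketch below. All entity, file, and function names are invented for the example; mirror an existing seed script in the repository for the real thing.

```ts
// Illustrative only: a TypeORM entity plus a seed function for static data.
// Entity/file names here are hypothetical, not actual repo contents.
import * as fs from 'fs';
import { Column, DataSource, Entity, PrimaryGeneratedColumn } from 'typeorm';

@Entity('example_static_data')
export class ExampleStaticDataEntity {
  @PrimaryGeneratedColumn('uuid')
  public id: string;

  @Column()
  public countryCodeISO3: string;

  @Column('json', { nullable: true })
  public geom: object; // e.g. a GeoJSON geometry per admin area
}

export async function seedExampleStaticData(dataSource: DataSource): Promise<void> {
  // Read the static file and insert its records via the entity's repository.
  const records = JSON.parse(
    fs.readFileSync('src/scripts/git-lfs/example-static-data.json', 'utf8'),
  );
  await dataSource.getRepository(ExampleStaticDataEntity).save(records);
}
```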
These commands will install the IBF-system with listeners at:

- localhost:3000/docs for the API-service documentation
- (development only) localhost:4200 for the web interface
Please read the troubleshooting guidelines in TROUBLESHOOT.md to support the installation of IBF.
We use Cypress for automated integration testing in this project.
Installation:

0. (Potentially needed first, on Ubuntu:)

```
sudo apt-get install libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2 libxtst6 xauth xvfb
```

- In the root folder run:

```
npm install --only=dev
```

  - This should download and install Cypress
  - If it fails, find out why and/or install Cypress in some other way (e.g. `npm install cypress`)
- Set the necessary environment variables, for example by using a `CYPRESS_*` prefix (see https://docs.cypress.io/guides/guides/environment-variables for more)
  - e.g. on Windows PowerShell: `$env:CYPRESS_LOGIN_USER = ""`
- Run `npm run cypress:open`
  - When the Cypress window opens, click 'Run X integration specs'
  - Alternatively run `npm run cypress:start` to run from the command line (a hypothetical spec sketch follows below)
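Variables set with the `CYPRESS_` prefix are exposed to specs with the prefix stripped (e.g. `CYPRESS_LOGIN_USER` becomes `Cypress.env('LOGIN_USER')`). A hypothetical spec sketch, where the URL and selector are assumptions to adapt to the real app:

```ts
// Hypothetical spec: reads the CYPRESS_LOGIN_USER variable set above.
// The URL and selector are assumptions — adjust them to the actual dashboard.
describe('IBF-dashboard login', () => {
  it('shows the login form', () => {
    cy.visit('http://localhost:4200');
    cy.get('input[type=email]').type(Cypress.env('LOGIN_USER'));
  });
});
```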
See notable changes and the currently released version in the CHANGELOG.
- Check if the latest integration tests passed on the Cypress Dashboard.
- Pick a tag to release. Let's say we want to release the tag v0.27.9 on GitHub.
- Click the 'Edit Tag' button.
- Set the release title to v0.27.9.
- Optional: enter the release description from the CHANGELOG file.
- IMPORTANT: Before actually doing the release (and thus releasing to staging), check if any .ENV variables on the server need to be updated. Do that by SSH'ing into the server and making the required changes. This will make sure the new release uses the updated .ENV variables immediately.
- Click the 'Publish Release' button.
The above steps should trigger the release webhook, which updates the staging environment to the published release. This takes a while (approx. 20 minutes).
- Make sure to verify that the environment settings are appropriately set on the test VM before merging the PR.
- Merged PRs to the 'master' branch are automatically deployed to the test server (via webhook, see: /tools#GitHub-webhook).
- Run the seed script if the seed data or data model has changed.
- Run the 'mock-all' endpoint.
- Make sure to verify that the environment settings are appropriately set on the stage VM before publishing the release.
- When a release is published, it is automatically deployed to the staging server.
- Run the seed script if the seed data or data model has changed.
- Run the 'mock-all' endpoint.
- Make sure to verify that the environment variables are appropriately set on the VM.
- Manually run the deploy script with the tag that should be deployed for the specific country.
Please read the contributing guidelines in CONTRIBUTING.md.
For adding a new country to the IBF-system, many components are already generic and thus automated, but quite some manual steps are still needed at the moment. This is intended to be improved in the future. The list below is intended to give a full overview. It is not, however, detailed enough to execute each step without further knowledge. Ask a developer who knows more.
NOTE: outdated!! Check with developers first.
- IBF-API-Service
  - Users:
    - Add a user for the country to src/scripts/users.json
    - Add the country to the admin user
  - Country:
    - Add the country in src/scripts/countries.json (an illustrative sketch of such an entry follows after this section)
    - Upload through `npm run seed` from API-service
- Data for database (look at existing countries and files for examples in terms of format)
  - Save the admin-area-boundary file (.shp) for the agreed-upon admin level as geojson (with extension .json), with the right column names, in services/API-service/src/scripts/git-lfs/
  - Save Glofasstations_locations_with_trigger_levels.csv in the same folder
  - Save Glofasstation_per_admin_area.csv in the same folder
    - which admin areas are triggered if station X is triggered?
    - note: this should include all admin areas. If not mapped to any station, use 'no_station'
  - Potentially add extra code in the seed scripts (seed-admin-area.ts / seed-glofas-station.ts / etc.) to process the new data correctly.
  - Run the seed script of IBF-API-service
  - NOTE: we are in a migration, where we want to move new data as much as possible to this new seed-script setup. So also for other data not mentioned here, the goal is to upload it via seed scripts as well. Some other data that is not yet included in a seed script:
    - COVID risk data (.csv) > uploaded through a specifically created endpoint
- Geodata for IBF-pipeline and IBF-geoserver (look at existing countries and files for examples in terms of format)
  - Save in services/IBF-pipeline/pipeline/data in the right subfolder:
    - Flood extent raster (for at least 1 return period) + an 'empty' raster of the same extent/pixel size (.tif)
    - Population (.tif)
    - Grassland + cropland (.tif)
  - When deploying to other environments (local/remote) this data needs to be transferred (e.g. as data.zip through WinSCP or similar)
- IBF-pipeline
  - add countryCodeISO3 to .env (for development settings, replace by ONLY that code)
  - add country-specific settings to settings.py (e.g. the right links to the abovementioned data)
    - with model = 'glofas'
  - add country-specific settings to secrets.py
  - add a dummy trigger station to glofasdata.py with a forecast value that exceeds the trigger value
  - Run runPipeline.py (`python3 runPipeline.py`) to test the pipeline.
- Geoserver
  - Manually create new stores + layers in the Geoserver interface of the test VM:
    - floodextent_ for each lead time
    - population_
    - grassland_
    - cropland_
  - Test that the specific layers are now viewable in the dashboard
  - When done, commit the (automatically) generated content in IBF-pipeline/geoserver-workspaces to GitHub
    - This will prevent you from having to do the same for another server, or if your content is lost somehow
- IBF-dashboard
  - Test the dashboard by logging in through the admin user or a country-specific user
- Specifics/Extras
  - Whatsapp:
    - create a Whatsapp group
    - paste the link in IBF-pipeline/pipeline/lib/notifications/formatInfo.py
  - EAP-link
    - create a bookmark in the Google Docs document at the place where the Trigger Model section starts
    - paste the link (incl. bookmark) in the countries seed-script
    - paste the link (excl. bookmark) in IBF-pipeline/pipeline/lib/notifications/formatInfo.py
  - Logos
    - Get the logo(s) (.png)
    - Paste them in IBF-dashboard/app/assets/logos + add a reference to each logo in the countries seed-script
    - Paste them in IBF-pipeline/pipeline/lib/notifications/logos/email-logo-.png
    - Upload the logo to Mailchimp + retrieve a shareable link + copy this into IBF-pipeline/pipeline/lib/notifications/formatInfo.py
  - Mailchimp segment
    - Add a new tag '' to at least 1 user
    - Create a new segment '' defined as users with tag ''.
    - Get the segmentId of the new segment
    - Paste it in IBF-pipeline/pipeline/secrets.py
  - EAP-actions
    - Summarize the actions from the EAP document + define 1 Area of Focus per EAP action
    - Add them to API-service/seed-data/EAP-actions.json
    - run `npm run seed` from API-service
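For the 'Add country in src/scripts/countries.json' step above, the sketch below shows what such an entry might look like. Since this section is marked as outdated, every field name here is an assumption; an existing entry in src/scripts/countries.json is the authoritative template.

```ts
// Purely illustrative shape of a new-country entry; all field names are
// assumptions — copy an existing entry in src/scripts/countries.json instead.
const exampleCountry = {
  countryCodeISO3: 'XYZ',
  countryName: 'Examplia',
  adminLevel: 2, // the agreed-upon admin level for boundaries
  disasterTypes: ['floods'],
};
```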
NOTE: outdated!! Check with developers first.
- Follow the 'flood' manual above as much as possible, with notable exceptions:
  - Input data database
    - Rainfall_station_locations_with_trigger_levels.csv > currently not included in the seed script yet, but manually uploaded (through runSetup.py)
  - Input data pipeline
    - There is no equivalent input to the flood extent raster. This is created in the pipeline.
  - Add the country in IBF-pipeline settings.py with model = 'rainfall'
  - Save the geoserver output as rainfallextent_
| Term | Definition (we use) |
| --- | --- |
| version | A 'number' specified in the SemVer format: 0.1.0 |
| tag | A specific commit or point-in-time on the git timeline; named after a version, i.e. v0.1.0 |
| release | A fixed 'state of the code-base', published on GitHub |
| deployment | An action performed to get (released) code running on an environment |
| environment | A machine that can run code (with specified settings); i.e. a server or VM, or your local machine |