Performance tests have been written to simulate real-life load on the MONAI Deploy solution stack. This includes benchmark, average-load and peak-load (soak) configurations. Because of the asynchronous architecture of MONAI Deploy, k6 is used as the load generator, but logs from the ELK stack are used to measure the performance of the individual components.
- MONAI Workflow Manager (WM) and its dependencies
- MONAI Informatics Gateway (IG) and its dependencies
- Docker
- Dummy models
- ELK Stack
- Prometheus and Grafana
- Test data (CT, MR, US, RF)
- k6
- A load generator written in Go that can be executed via Docker.
- Scripting is done in JS or TS.
- Configuration of load throughput is held in config files here
- Sends HTTP STORE requests to Orthanc for a given modality (a minimal sketch is shown after the models table below).
- k6 Scripts
- dicom_benchmark.js - Sends MR study store requests with a 2-minute sleep between each iteration.
- dicom_peak_avg.js - Sends CT, MR, US and RF study store requests based on the configuration.
- dotnet-performance-app
- A lightweight .NET 6 app used for sending C-STORE requests via fo-dicom.
- ELK Stack
- A log aggregator (i.e. the ELK stack) is used to capture all logs so that run-time metrics can be investigated.
- Grafana and Prometheus
- Monitoring and visualization platforms used to monitor memory, CPU and GPU usage of the applications.
- Models
- The dummy models created simulate real model usage to stress the system; their resource requirements are shown below.
Name | CPU Cores | RAM | GPU Memory | Disk Space |
---|---|---|---|---|
Small | 2 | 1GB | 1GB | 2GB |
Medium | 8 | 10GB | 6GB | 15GB |
Large | 12 | 16GB | 12GB | 25GB |
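As a concrete illustration of the k6 scripting mentioned above, a minimal script in the spirit of the repository ones might look like the sketch below. It assumes Orthanc's REST endpoint `POST /modalities/{name}/store` and a remote modality named MONAI; the environment-variable names and JSON body fields are illustrative assumptions, not the exact contents of the repository scripts.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

// Illustrative values -- the real scripts take these from -e flags / config files.
const ORTHANC_URL = __ENV.ORTHANC_URL || 'http://localhost:8042';
const STUDY_ID = __ENV.STUDY_ID;   // Orthanc ID of a seeded study (assumed)
const MODALITY_NAME = 'MONAI';     // Orthanc remote modality pointing at MIG

export const options = { vus: 1, iterations: 10 };

export default function () {
  // Ask Orthanc to C-STORE the seeded study to the MONAI Informatics Gateway.
  // Body format per Orthanc's REST API -- verify against your Orthanc version.
  http.post(
    `${ORTHANC_URL}/modalities/${MODALITY_NAME}/store`,
    JSON.stringify({ Resources: [STUDY_ID] }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  sleep(120); // 2-minute pause between iterations, matching the dicom_benchmark.js pacing
}
```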
AET: MONAI
Tasks [
  {
    name: router
    type: router
    task-destinations {
      if CT run ct-argo
      if MR run mr-argo
      if US run us-argo
      if RF run rf-argo
    }
  },
  {
    name: ct-argo
    type: argo
    args {
      argo-template: large-model
    }
  },
  {
    name: mr-argo
    type: argo
    args {
      argo-template: medium-model
    }
  },
  {
    name: us-argo
    type: argo
    args {
      argo-template: medium-model
    }
  },
  {
    name: rf-argo
    type: argo
    args {
      argo-template: small-model
    }
  }
]
cd performance-testing/dotnet-performance-app
dotnet build
docker build -t dotnet-performance-app .
docker run -it --rm -p 5000:80 -p 5001:443 -e InformaticsGateway__Host={host} dotnet-performance-app
`host` is to be replaced with the host that MIG is running on.
Liver Seg Benchmark tests will be used to measure the use of a MAP within MONAI Deploy given a known resource limit. This is a low-throughput test which puts no stress on the system. These stats will be used to measure any degradation.
- Deploy MIG and MWM to an environment including all its dependencies.
- Set up MIG with AET and Destinations scripts found here
- Seed MongoDB with Clinical Workflows found here
- Seed Argo with the Argo Workflow Templates found here
- Install k6 from here
cd performance-testing/k6
k6 run -e CONFIG=config/liverConfig.json dicom/liver_benchmark.js --insecure-skip-tls-verify
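For reference, the `-e CONFIG=...` flag is surfaced inside the script via k6's `__ENV`, and k6's `open()` can read the JSON file during the init stage. A sketch of that pattern is below; the field names (`vus`, `iterations`, `url`, `payload`) are placeholders, not the actual schema of liverConfig.json.

```javascript
import http from 'k6/http';

// -e flags are exposed via __ENV; open() reads local files during k6's init stage.
const config = JSON.parse(open(__ENV.CONFIG));

export const options = {
  vus: config.vus || 1,               // assumed field name
  iterations: config.iterations || 1, // assumed field name
};

export default function () {
  // Send one store request per iteration to the endpoint named in the config.
  // The URL field and request body are placeholders, not liverConfig.json's schema.
  http.post(config.url, JSON.stringify(config.payload), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```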
Liver Seg Benchmark parallel tests will be used to measure the use of a MAP within MONAI Deploy given a known resource limit. This is a parallel execution of 5 concurrent associations, used to measure the difference between sequential and parallel execution.
- Deploy MIG and MWM to an environment including all its dependencies.
- Set up MIG with AET and Destinations scripts found here
- Seed Orthanc with Test Data from here
- Set up Orthanc with a Remote Modality, configuration can be found here
- MONAI - This will send C-STORE requests to MIG with the AET "MONAI"
- Seed MongoDB with Clinical Workflows found here
- Seed Argo with the Argo Workflow Templates found here
- Install k6 from here
- Update the Orthanc details (i.e. URL) in config/liverParallelConfig.json
cd performance-testing/k6
k6 run -e CONFIG=config/liverParallelConfig.json dicom/liver_benchmark_parallel.js --insecure-skip-tls-verify
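For reference, 5 concurrent associations map naturally onto a k6 scenario with 5 virtual users. The sketch below shows one way to express that parallelism; it is not the actual options used by liver_benchmark_parallel.js, and the config field names (`orthancUrl`, `studyId`) are assumptions.

```javascript
import http from 'k6/http';

const config = JSON.parse(open(__ENV.CONFIG)); // e.g. config/liverParallelConfig.json

export const options = {
  scenarios: {
    parallel_liver: {
      executor: 'per-vu-iterations', // each VU runs its own iteration(s)
      vus: 5,                        // 5 concurrent associations
      iterations: 1,                 // one store request per VU
    },
  },
};

export default function () {
  // Each of the 5 VUs asks Orthanc to C-STORE the seeded study to MIG in parallel.
  // Body format per Orthanc's REST API -- verify against your Orthanc version.
  http.post(
    `${config.orthancUrl}/modalities/MONAI/store`,
    JSON.stringify({ Resources: [config.studyId] }),
    { headers: { 'Content-Type': 'application/json' } }
  );
}
```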
Benchmark tests will be used to measure the best-case performance of the MONAI stack. This is a low-throughput test which puts no stress on the system. These stats will be used to measure any degradation.
Modality | Iterations | Typical Image Size | # of Images / Study | Size (Raw) |
---|---|---|---|---|
MRI | 10 | (256, 256, 30, 1) | 200 | 26MB |
- Deploy MIG and MWM to an environment including all its dependencies.
- Set up MIG with AET and Destinations scripts found here
- Run dotnet-performance-app
- Set up Orthanc with a Remote Modality, configuration can be found here
- MONAI - This will send C-STORE requests to MIG with the AET "MONAI"
- Seed MongoDB with Clinical Workflows found here
- Seed Argo with the Argo Workflow Templates found here
- Install k6 from here
cd performance-testing/k6
k6 run -e CONFIG=config/benchmarkConfig.json -e URL={url} -e DICOM_MODALITY={modality} -e WF_AET={AET} dicom/dicom_benchmark.js --insecure-skip-tls-verify
- `url` is to be replaced with the URL of the dotnet-performance-app.
- `modality` is to be replaced with CT, RF, US or MR.
- `AET` is to be replaced with the Calling AET to be sent with the association; one AET triggers a workflow and one does not.
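The `-e` flags above appear inside the script as `__ENV` variables. A minimal sketch of how dicom_benchmark.js could consume them is shown below; the `/store` path and query parameters are hypothetical and are not the dotnet-performance-app's documented API.

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

const BASE_URL = __ENV.URL;            // dotnet-performance-app URL
const MODALITY = __ENV.DICOM_MODALITY; // CT, RF, US or MR
const AET = __ENV.WF_AET;              // Calling AET sent with the association

export default function () {
  // Hypothetical route -- the real app exposes its own endpoint for triggering C-STOREs.
  http.post(`${BASE_URL}/store?modality=${MODALITY}&aet=${AET}`);
  sleep(120); // 2-minute gap between iterations, per the dicom_benchmark.js description
}
```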
- The Informatics Gateway outputs logs detailing the time an association was made and the time a WorkflowRequest was sent. Piping the logs through `grep "Payload took"` gives the hh:mm:ss duration between the two events.
- Export request times can be seen by comparing the time the export request was sent with the time the export completes.
- Logs TBC
- Logs TBC
- Grafana will be used for visualization of the hardware resources.
Average and peak load rates are shown below. These tests are most valuable when run on production-like hardware, to measure performance metrics such as processing times as well as system metrics such as CPU, memory and GPU usage.
Modality | Peak (studies / hour) | Avg (studies / hour, 8-5) | Typical Image Size | # of Images / Study | Size (Raw) |
---|---|---|---|---|---|
X-ray | 120 | 60 | (2000, 2500, 1, 1) | 3 | 30MB |
Ultrasound | 50 | 28 | (640, 480, 1, 1) | 30 | 9.2MB |
CT | 30 | 10 | (512, 512, 1, 1) | 60 | 32MB |
Multi Slice CT | split with above | split with above | (512, 512, 200, 1) | 500 | 262MB |
MRI | 25 | 13 | (256, 256, 30, 1) | 200 | 26MB |
ALL (Inc. other modalities) | 250 | 140 | - | - | - |
- Deploy MIG and MWM to an environment including all its dependencies.
- Set up MIG with AET and Destinations scripts found here
- Run dotnet-performance-app
- Set up Orthanc with 2 Remote Modalities, configuration can be found here
- MONAI - This will send C-STORE requests to MIG with the AET "MONAI"
- NOTMONAI - This will send C-STORE requests to MIG with the AET "NOTMONAI"
- Seed MongoDB with Clinical Workflows found here
- Seed Argo with the Argo Workflow Templates found here
- Install k6 from here
- Update the Orthanc details (i.e. URL) in config/benchmarkConfig.json
cd performance-testing/k6
k6 run -e CONFIG=config/{config}.json -e URL={url} -e WF_AET={AET} -e NO_WF_AET={AET} dicom/dicom_peak_avg.js --insecure-skip-tls-verify
- `url` is to be replaced with the URL of the dotnet-performance-app.
- `config` is to be replaced with either avgConfig or peakConfig (pointing at config/avgConfig.json or config/peakConfig.json).
- `AET` is to be replaced with the Calling AET to be sent with the association; one AET (WF_AET) triggers a workflow and one (NO_WF_AET) does not.
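To picture how the workflow / no-workflow split could be driven, a script can alternate the Calling AET between `WF_AET` and `NO_WF_AET` across iterations, as in the sketch below. This is only an illustration of the idea; the endpoint is hypothetical and the actual mixing logic in dicom_peak_avg.js may differ.

```javascript
import http from 'k6/http';

const BASE_URL = __ENV.URL;        // dotnet-performance-app URL
const WF_AET = __ENV.WF_AET;       // AET that triggers a workflow
const NO_WF_AET = __ENV.NO_WF_AET; // AET that does not trigger a workflow

export default function () {
  // __ITER is the current VU's iteration counter; alternating AETs mixes
  // workflow-triggering and non-triggering associations.
  const aet = __ITER % 2 === 0 ? WF_AET : NO_WF_AET;
  // Hypothetical route -- the real app defines its own endpoint.
  http.post(`${BASE_URL}/store?aet=${aet}`);
}
```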
- The Informatics Gateway outputs logs detailing the time an association was made and the time a WorkflowRequest was sent. Piping the logs through `grep "Payload took"` gives the hh:mm:ss duration between the two events.
- Export request times can be seen by comparing the time the export request was sent with the time the export completes.
- Logs TBC
- Logs TBC
- Grafana will be used for visualization of the hardware resources.