Deployment

Environment Variables

Only `OXIDICOM_AMQP_ADDRESS` and `OXIDICOM_FILES_ROOT` are required; they configure how oxidicom connects to CUBE. The remaining variables enable optional features or performance tuning.

| Name | Description |
|------|-------------|
| `OXIDICOM_AMQP_ADDRESS` | (required) AMQP address of the RabbitMQ used by CUBE's celery workers |
| `OXIDICOM_FILES_ROOT` | (required) Path to where CUBE's storage is mounted |
| `OXIDICOM_QUEUE_NAME` | (optional) RabbitMQ queue name for the celery `register_pacs_series` task |
| `OXIDICOM_NATS_ADDRESS` | (optional) NATS server to send progress messages to |
| `OXIDICOM_PROGRESS_INTERVAL` | Minimum delay between progress messages. Uses humantime syntax, e.g. `5ms` |
| `OXIDICOM_SCP_AET` | DICOM AE title (a PACS pushing to oxidicom should be configured to push to this name) |
| `OXIDICOM_SCP_STRICT` | Whether receiving PDUs must not surpass the negotiated maximum PDU length |
| `OXIDICOM_SCP_UNCOMPRESSED_ONLY` | Only accept native/uncompressed transfer syntaxes |
| `OXIDICOM_SCP_PROMISCUOUS` | Whether to accept unknown abstract syntaxes |
| `OXIDICOM_SCP_MAX_PDU_LENGTH` | Maximum PDU length |
| `OXIDICOM_LISTENER_THREADS` | Maximum number of concurrent SCU clients to handle (see Performance Tuning) |
| `OXIDICOM_LISTENER_PORT` | TCP port number to listen on |
| `OXIDICOM_DEV_SLEEP` | Duration to sleep between sending LONK notifications. Only useful for development purposes |
| `TOKIO_WORKER_THREADS` | Number of threads to use for the async runtime |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OpenTelemetry Collector gRPC endpoint |
| `OTEL_RESOURCE_ATTRIBUTES` | Resource attributes, e.g. `service.name=oxidicom-test` |
| `RUST_LOG` | Logging verbosity; set `oxidicom=info` to turn on verbose messages |

See `src/settings.rs` for the source of truth on the table above and the default values of optional settings.
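
For a minimal deployment, only the two required variables need to be set. A sketch (the AMQP address, mount path, and invoking the binary as `oxidicom` are illustrative assumptions; match them to your CUBE deployment):

```sh
# Minimal configuration: only the two required variables.
# The address and path below are examples, not defaults.
export OXIDICOM_AMQP_ADDRESS=amqp://rabbitmq:5672   # CUBE's RabbitMQ broker
export OXIDICOM_FILES_ROOT=/var/lib/cube/files      # mounted CUBE storage
oxidicom
```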

Performance Tuning

Behind the scenes, oxidicom has three components connected by asynchronous channels:

  1. listener: receives DICOM objects over TCP
  2. writer: writes DICOM objects to storage
  3. notifier: emits progress messages to NATS and series registration jobs to celery

`OXIDICOM_LISTENER_THREADS` controls the parallelism of the listener, whereas `TOKIO_WORKER_THREADS` controls the async runtime's thread pool, which is shared between the writer and the notifier. (The reason we have two thread pools is an implementation detail: the Rust ecosystem suffers from a sync/async divide.)
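
As a sketch, a deployment expecting many simultaneous PACS associations might raise the listener's parallelism while keeping the async pool small (the values are illustrative; appropriate numbers depend on your hardware and workload):

```sh
export OXIDICOM_LISTENER_THREADS=16   # handle up to 16 concurrent SCU clients
export TOKIO_WORKER_THREADS=4         # async pool shared by the writer and notifier
```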

Message Frequency

Increase the value of `OXIDICOM_PROGRESS_INTERVAL` to decrease the rate of progress messages. Doing so reduces load on CUBE's ASGI web server and improves ChRIS_ui performance.
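
For example (the value is illustrative):

```sh
# Throttle progress messages to at most one every 100ms.
export OXIDICOM_PROGRESS_INTERVAL=100ms
```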

Scaling

Large volumes of incoming data can be handled by horizontally scaling oxidicom, which is as easy as increasing its number of replicas. The bottleneck then moves downstream: the task queue for registering the data to CUBE will fill up, and if you increase the number of CUBE celery workers to drain it faster, the strain shifts to the PostgreSQL database.
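
For example, with Docker Compose (assuming the service is named `oxidicom` in your compose file):

```sh
docker compose up -d --scale oxidicom=4   # run four replicas
```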

Failure Modes

oxidicom is designed to be fault-tolerant. For instance, an error with an individual DICOM instance does not terminate the association (meaning subsequent DICOM instances still have the chance to be received).

No assumptions are made about the PACS being well-behaved. oxidicom does not care if the PACS sends illegal data (e.g. the wrong number of DICOM instances for a series).

Receiving the same DICOM data more than once will overwrite the existing file in storage and send another series registration task to CUBE's celery workers. CUBE's workers will raise an error when this happens, but the overall behavior is idempotent.

Observability

oxidicom exports traces to an OpenTelemetry Collector. There is a span for each association (the TCP connection a PACS server opens to send us DICOM objects).
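
As a sketch, enabling trace export is a matter of pointing oxidicom at your collector via the standard OpenTelemetry variables (the endpoint is illustrative):

```sh
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317   # collector's gRPC endpoint
export OTEL_RESOURCE_ATTRIBUTES=service.name=oxidicom           # label the exported traces
```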