Open Workshop Storage is split into two outward-facing services that share the same storage root and helper code:
- `distributor` serves stored files and BlurHash metadata
- `loader` ingests uploads, runs transfer jobs, repacks artifacts, and reports completion back to Manager
The loader side still uses a single-worker runtime because it owns in-memory transfer state and WebSocket fan-out. The distributor side is stateless apart from its in-memory BlurHash cache and can be scaled independently.
When both services are published on the same domain, only the conflicting control endpoints need separate prefixes. In practice that means docs and health URLs live under `/distributor/...` and `/loader/...`, while business routes like `/download/...`, `/upload`, and `/transfer/...` stay at their natural paths.
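As an illustration, the per-service docs and schema paths can be pinned through FastAPI's `docs_url` and `openapi_url` parameters. This is only a sketch; the real wiring lives in `service_factory.py` and may differ:

```python
from fastapi import FastAPI

# Hypothetical sketch: only the control endpoints get the service prefix,
# business routes stay at their natural paths.
app = FastAPI(
    title="open-workshop-storage distributor",
    docs_url="/distributor/",                 # Swagger UI behind the service prefix
    openapi_url="/distributor/openapi.json",  # OpenAPI schema behind the same prefix
)

@app.get("/download/{type}/{path:path}")      # business route stays at the root
async def download(type: str, path: str) -> dict:
    return {"type": type, "path": path}
```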
- FastAPI applications served with Granian.
- Separate loader and distributor entrypoints for clearer service boundaries.
- Loader runtime designed around in-memory job state and WebSocket progress.
- Protected archive downloads with access-service validation.
- Transfer pipeline for remote downloads and direct raw-body uploads.
- Archive repacking with 7z, encrypted ZIP rejection, and unpacked-size heuristics.
- Automatic image normalization to WebP.
- WebSocket progress stream for upload, download, extract, and repack stages.
- Optional Uptrace / OpenTelemetry instrumentation.
Ubuntu / Debian:

```
sudo apt update
sudo apt install -y p7zip-full
```

Create a virtual environment and install the dependencies:

```
python3 -m venv .venv
./.venv/bin/pip install -r requirements.txt
```

Copy the sample configuration:

```
cp ow_config_sample.py ow_config.py
```

Then fill at least:

- `MAIN_DIR`
- `MANAGER_URL`
- `ACCESS_SERVICE_URL`
- `TRANSFER_JWT_SECRET`
- token values in `ow_config.py`
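For orientation, a filled-in `ow_config.py` could look like the sketch below. The values are illustrative and the token variable names are not documented here; `ow_config_sample.py` and docs/CONFIGURATION.md are authoritative.

```python
# Illustrative ow_config.py -- values are placeholders, not defaults.
MAIN_DIR = "/var/lib/open-workshop-storage"        # storage root shared by both services
MANAGER_URL = "https://manager.example.com"        # Manager that receives completion reports
ACCESS_SERVICE_URL = "https://access.example.com"  # validates protected archive downloads
TRANSFER_JWT_SECRET = "change-me"                  # secret behind transfer JWTs

# Token values also live in this file; check ow_config_sample.py for the real names.
```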
Configuration details: docs/CONFIGURATION.md
Generate tokens:

```
./.venv/bin/python token_gen.py
```

Distributor:

```
granian --working-dir src --interface asgi --host 127.0.0.1 --port 8000 open_workshop_storage.distributor:app
```

Loader:

```
granian --working-dir src --interface asgi --host 127.0.0.1 --port 8001 --workers 1 --respawn-failed-workers --access-log open_workshop_storage.loader:app
```

The loader service is expected to run as a single worker process. Multi-worker deployment is not supported by the current architecture because active transfer state lives in process memory. Production runs should keep `--respawn-failed-workers` enabled so Granian replaces a worker that exits unexpectedly.
If you want a tiny external health monitor, the repository ships with `watchdog.py`.
It checks a health endpoint every 20 seconds by default and restarts the service after 5 minutes of
continuous failure.
Required env vars:

- `WATCHDOG_HEALTH_URL` - service health endpoint, for example `https://example.com/loader/healthz`
- `WATCHDOG_RESTART_COMMAND` - shell command used to restart the service, for example `systemctl restart open-workshop-storage`

Optional env vars:

- `WATCHDOG_CHECK_INTERVAL_SECONDS` - default `20`
- `WATCHDOG_RESTART_AFTER_SECONDS` - default `300`
- `WATCHDOG_REQUEST_TIMEOUT_SECONDS` - default `5`
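The behavior reduces to a small polling loop. The sketch below approximates what `watchdog.py` does under the documented defaults; the HTTP client and exact success criteria are assumptions, not a transcription of the shipped script.

```python
import os
import subprocess
import time

import requests  # assumed HTTP client; the shipped script may use something else

HEALTH_URL = os.environ["WATCHDOG_HEALTH_URL"]
RESTART_COMMAND = os.environ["WATCHDOG_RESTART_COMMAND"]
CHECK_INTERVAL = int(os.environ.get("WATCHDOG_CHECK_INTERVAL_SECONDS", "20"))
RESTART_AFTER = int(os.environ.get("WATCHDOG_RESTART_AFTER_SECONDS", "300"))
REQUEST_TIMEOUT = int(os.environ.get("WATCHDOG_REQUEST_TIMEOUT_SECONDS", "5"))

failing_since = None  # start of the current continuous-failure streak

while True:
    try:
        ok = requests.get(HEALTH_URL, timeout=REQUEST_TIMEOUT).status_code == 200
    except requests.RequestException:
        ok = False

    if ok:
        failing_since = None  # any success resets the streak
    else:
        failing_since = failing_since or time.monotonic()
        if time.monotonic() - failing_since >= RESTART_AFTER:
            subprocess.run(RESTART_COMMAND, shell=True, check=False)
            failing_since = None  # re-arm after restarting

    time.sleep(CHECK_INTERVAL)
```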
Example:

```
WATCHDOG_HEALTH_URL="https://example.com/loader/healthz" \
WATCHDOG_RESTART_COMMAND="systemctl restart open-workshop-loader" \
python watchdog.py
```

Interactive API documentation:

- Distributor Swagger UI: https://example.com/distributor/
- Distributor OpenAPI JSON: https://example.com/distributor/openapi.json
- Loader Swagger UI: https://example.com/loader/
- Loader OpenAPI JSON: https://example.com/loader/openapi.json
The paths below stay at their natural root locations. On a shared domain, keep only the docs and health endpoints behind service-specific prefixes, and leave the functional routes unchanged.
| Service | Method | Path | Purpose |
|---|---|---|---|
| Distributor | GET / HEAD | `/download/{type}/{path:path}` | Download stored files, with access-service validation for protected mod archives |
| Distributor | POST | `/blurhashes` | Generate BlurHash metadata for stored images |
| Loader | POST | `/upload` | Internal multipart upload endpoint for Manager |
| Loader | DELETE | `/delete` | Internal delete endpoint for Manager |
| Loader | GET / POST | `/transfer/start` | Start background download and repack flow from transfer JWT |
| Loader | POST | `/transfer/upload` | Upload archive or image as raw body using transfer JWT |
| Loader | WS | `/transfer/ws/{job_id}` | Subscribe to live transfer progress |
| Loader | POST | `/transfer/repack` | Repack an already uploaded source file |
| Loader | POST | `/transfer/move` | Move packed file to permanent storage |
Detailed request and response semantics: docs/API.md
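As an illustration of the transfer flow, the hedged client sketch below uploads an archive and then watches progress over the WebSocket. The bearer-token header, the `job_id` response field, and the event schema are assumptions; docs/API.md is authoritative.

```python
import asyncio
import json

import httpx       # assumed client library; any HTTP client works
import websockets  # assumed client library for the progress stream

BASE = "https://example.com"
TRANSFER_JWT = "<transfer JWT issued by Manager>"

async def upload_and_watch(archive_path: str) -> None:
    async with httpx.AsyncClient(base_url=BASE) as client:
        # Raw-body upload authorized by the transfer JWT.
        # Header name and response shape are assumptions; see docs/API.md.
        with open(archive_path, "rb") as fh:
            resp = await client.post(
                "/transfer/upload",
                content=fh.read(),
                headers={"Authorization": f"Bearer {TRANSFER_JWT}"},
            )
        resp.raise_for_status()
        job_id = resp.json()["job_id"]  # assumed field name

    # Subscribe to live progress for the upload/download/extract/repack stages.
    ws_url = BASE.replace("https", "wss") + f"/transfer/ws/{job_id}"
    async with websockets.connect(ws_url) as ws:
        async for message in ws:
            print(json.loads(message))  # progress events; schema in docs/API.md

asyncio.run(upload_and_watch("mod.zip"))
```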
The loader service keeps active job state in memory and persists per-job metadata under
`<MAIN_DIR>/temp/<job_id>/meta.json` (see the sketch after the list below).
That design keeps the loader code simple and fast, but it also means:
- one loader process must own the whole lifecycle of a transfer job
- WebSocket clients for a job must connect to the same loader process that started it
- horizontal fan-out or multi-worker loader deployment needs a shared state layer before it becomes safe
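For example, an operator could inspect a job's persisted metadata directly. Only the file location is documented; the fields inside are whatever the loader wrote:

```python
import json
from pathlib import Path

MAIN_DIR = Path("/var/lib/open-workshop-storage")  # must match MAIN_DIR in ow_config.py

def read_job_meta(job_id: str) -> dict:
    """Load the persisted metadata for one transfer job.

    Only the location <MAIN_DIR>/temp/<job_id>/meta.json is documented;
    the schema (stage, progress, ...) is not specified in this README.
    """
    meta_path = MAIN_DIR / "temp" / job_id / "meta.json"
    return json.loads(meta_path.read_text())
```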
More details: docs/ARCHITECTURE.md
```
src/open_workshop_storage/
├── api/routes/        # FastAPI endpoints
├── core/              # shared state contracts and metadata helpers
├── observability/     # OpenTelemetry / Uptrace wiring
├── services/          # long-running transfer workflows
├── distributor.py     # distributor app entrypoint
├── loader.py          # loader app entrypoint
├── service_factory.py # shared app wiring and router cloning helpers
└── utils/             # archive, auth, file, and image utilities
```
The repository ships with a small Makefile for formatting, linting, and type checking:

```
make format
make lint
make type-check
```

Toolchain:

- `black` for code style
- `isort` for imports
- `flake8` for linting
- `mypy` for static type checks

`make lint` verifies isort, black, and flake8, while `make format` applies isort and black.
Development workflow details: docs/DEVELOPMENT.md
If `UPTRACE_DSN` is configured, the app enables OpenTelemetry tracing and exports spans to Uptrace.

Example:

```
export UPTRACE_DSN="https://<token>@api.uptrace.dev/<project_id>"
export OTEL_SERVICE_NAME="open-workshop-storage"
export OTEL_SERVICE_VERSION="1.0.0"
export OTEL_DEPLOYMENT_ENVIRONMENT="production"
granian --working-dir src --interface asgi --host 127.0.0.1 --port 7070 open_workshop_storage.app:app
```

Telemetry settings reference: docs/CONFIGURATION.md
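The actual wiring lives under `observability/`; the sketch below shows the general pattern, assuming the `uptrace` Python package and the FastAPI instrumentor are in use, which this README does not confirm:

```python
import os

import uptrace  # uptrace-python package
from fastapi import FastAPI
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor

def setup_telemetry(app: FastAPI) -> None:
    """Enable tracing only when UPTRACE_DSN is set, mirroring the behavior above."""
    dsn = os.environ.get("UPTRACE_DSN")
    if not dsn:
        return  # telemetry stays off without a DSN

    uptrace.configure_opentelemetry(
        dsn=dsn,
        service_name=os.environ.get("OTEL_SERVICE_NAME", "open-workshop-storage"),
        service_version=os.environ.get("OTEL_SERVICE_VERSION", "0.0.0"),
        deployment_environment=os.environ.get("OTEL_DEPLOYMENT_ENVIRONMENT", "development"),
    )
    FastAPIInstrumentor.instrument_app(app)  # per-request spans for FastAPI routes
```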
This project is distributed under the terms of the MPL-2.0 license. See LICENSE.