
Open Workshop Storage


Open Workshop Storage is split into two outward-facing services that share the same storage root and helper code:

  • distributor serves stored files and blurhash metadata
  • loader ingests uploads, runs transfer jobs, repacks artifacts, and reports completion back to Manager

The loader side still uses a single-worker runtime because it owns in-memory transfer state and WebSocket fan-out. The distributor side is stateless apart from its in-memory BlurHash cache and can be scaled independently.

When both services are published on the same domain, only the conflicting control endpoints need separate prefixes. In practice that means docs and health URLs live under /distributor/... and /loader/..., while business routes like /download/..., /upload, and /transfer/... stay at their natural paths.
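A shared-domain deployment like this can be sketched as a reverse-proxy config. This is an illustrative nginx fragment, not shipped with the repository; the upstream ports match the Quick Start commands below, and everything else is an assumption:

```nginx
# Hypothetical nginx sketch: docs and health endpoints live under
# per-service prefixes, business routes stay at their natural paths.
location /distributor/ { proxy_pass http://127.0.0.1:8000; }
location /loader/      { proxy_pass http://127.0.0.1:8001; }

# Distributor business routes
location /download/    { proxy_pass http://127.0.0.1:8000; }
location /blurhashes   { proxy_pass http://127.0.0.1:8000; }

# Loader business routes
location /upload       { proxy_pass http://127.0.0.1:8001; }
location /delete       { proxy_pass http://127.0.0.1:8001; }
location /transfer/    { proxy_pass http://127.0.0.1:8001; }
```

Note that the `/transfer/` location would also need the usual `Upgrade`/`Connection` proxy headers for the WebSocket progress stream to pass through.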

Highlights

  • FastAPI applications served with Granian.
  • Separate loader and distributor entrypoints for clearer service boundaries.
  • Loader runtime designed around in-memory job state and WebSocket progress.
  • Protected archive downloads with access-service validation.
  • Transfer pipeline for remote downloads and direct raw-body uploads.
  • Archive repacking with 7z, encrypted ZIP rejection, and unpacked-size heuristics.
  • Automatic image normalization to WebP.
  • WebSocket progress stream for upload, download, extract, and repack stages.
  • Optional Uptrace / OpenTelemetry instrumentation.
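The encrypted-ZIP rejection mentioned above can be approximated with the standard library alone: ZIP entries mark encryption in bit 0 of their general-purpose flags. The helper name here is hypothetical and not taken from the repository:

```python
import zipfile

def is_encrypted_zip(path: str) -> bool:
    """Return True if any entry in the archive sets the encryption flag (bit 0)."""
    with zipfile.ZipFile(path) as zf:
        return any(info.flag_bits & 0x1 for info in zf.infolist())
```

A loader-style pipeline would reject the upload before handing the archive to 7z when this returns True.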

Quick Start

1. Install system dependency

Ubuntu / Debian:

sudo apt update
sudo apt install -y p7zip-full

2. Install Python dependencies

python3 -m venv .venv
./.venv/bin/pip install -r requirements.txt

3. Create local config

cp ow_config_sample.py ow_config.py

Then fill in at least:

  • MAIN_DIR
  • MANAGER_URL
  • ACCESS_SERVICE_URL
  • TRANSFER_JWT_SECRET
  • the service token values

Configuration details: docs/CONFIGURATION.md
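A filled-in config might look like the following. Every value here is a placeholder; only the key names come from the checklist above, so consult docs/CONFIGURATION.md for the real schema:

```python
# ow_config.py - illustrative values only
MAIN_DIR = "/var/lib/open-workshop-storage"        # storage root shared by both services
MANAGER_URL = "https://manager.example.com"        # completion reports go here
ACCESS_SERVICE_URL = "https://access.example.com"  # validates protected archive downloads
TRANSFER_JWT_SECRET = "change-me"                  # signs and verifies transfer JWTs
```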

4. Generate tokens

./.venv/bin/python token_gen.py

5. Start the services

Distributor:

granian --working-dir src --interface asgi --host 127.0.0.1 --port 8000 open_workshop_storage.distributor:app

Loader:

granian --working-dir src --interface asgi --host 127.0.0.1 --port 8001 --workers 1 --respawn-failed-workers --access-log open_workshop_storage.loader:app

The loader service is expected to run as a single worker process. Multi-worker deployment is not supported by the current architecture because active transfer state lives in process memory. Production runs should keep --respawn-failed-workers enabled so Granian replaces a worker that exits unexpectedly.

Watchdog

If you want a tiny external health monitor, the repository ships with watchdog.py. It checks a health endpoint every 20 seconds by default and restarts the service after 5 minutes of continuous failure.

Required env vars:

  • WATCHDOG_HEALTH_URL - service health endpoint, for example https://example.com/loader/healthz
  • WATCHDOG_RESTART_COMMAND - shell command used to restart the service, for example systemctl restart open-workshop-storage

Optional env vars:

  • WATCHDOG_CHECK_INTERVAL_SECONDS - default 20
  • WATCHDOG_RESTART_AFTER_SECONDS - default 300
  • WATCHDOG_REQUEST_TIMEOUT_SECONDS - default 5

Example:

WATCHDOG_HEALTH_URL="https://example.com/loader/healthz" \
WATCHDOG_RESTART_COMMAND="systemctl restart open-workshop-loader" \
python watchdog.py

6. Open the API docs

  • Distributor Swagger UI: https://example.com/distributor/
  • Distributor OpenAPI JSON: https://example.com/distributor/openapi.json
  • Loader Swagger UI: https://example.com/loader/
  • Loader OpenAPI JSON: https://example.com/loader/openapi.json

Documentation

API At A Glance

The paths below stay at their natural root locations. On a shared domain, keep only the docs and health endpoints behind service-specific prefixes, and leave the functional routes unchanged.

| Service | Method | Path | Purpose |
| --- | --- | --- | --- |
| Distributor | GET / HEAD | /download/{type}/{path:path} | Download stored files, with access-service validation for protected mod archives |
| Distributor | POST | /blurhashes | Generate BlurHash metadata for stored images |
| Loader | POST | /upload | Internal multipart upload endpoint for Manager |
| Loader | DELETE | /delete | Internal delete endpoint for Manager |
| Loader | GET / POST | /transfer/start | Start background download and repack flow from transfer JWT |
| Loader | POST | /transfer/upload | Upload archive or image as raw body using transfer JWT |
| Loader | WS | /transfer/ws/{job_id} | Subscribe to live transfer progress |
| Loader | POST | /transfer/repack | Repack an already uploaded source file |
| Loader | POST | /transfer/move | Move packed file to permanent storage |

Detailed request and response semantics: docs/API.md
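The /transfer/* endpoints authenticate with a JWT signed by TRANSFER_JWT_SECRET. A minimal HS256 signer can be written with the standard library alone; the claim names below are illustrative guesses, not the service's real schema (see docs/API.md for that):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_transfer_jwt(claims: dict, secret: str) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

# Hypothetical claims for illustration only:
token = sign_transfer_jwt({"job": "example", "exp": int(time.time()) + 600},
                          "change-me")
```

In practice Manager would mint such a token and pass it to a client, which then calls /transfer/start or /transfer/upload with it.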

Runtime Model

The loader service keeps active job state in memory and persists per-job metadata under <MAIN_DIR>/temp/<job_id>/meta.json.

That design keeps the loader code simple and fast, but it also means:

  • one loader process must own the whole lifecycle of a transfer job
  • WebSocket clients for a job must connect to the same loader process that started it
  • horizontal fan-out or multi-worker loader deployment needs a shared state layer before it becomes safe
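The single-owner constraint can be pictured as a per-process registry that mirrors only durable fields to disk. The class and field names below are hypothetical, not the repository's actual types:

```python
import json
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class TransferJob:
    job_id: str
    stage: str = "upload"      # e.g. upload -> download -> extract -> repack
    progress: float = 0.0
    # WebSocket subscribers live only in this process's memory,
    # which is why multi-worker deployment is unsafe without shared state.
    subscribers: list = field(default_factory=list)

class JobRegistry:
    """Per-process job registry: one loader worker owns every job it starts."""

    def __init__(self, main_dir: Path):
        self.main_dir = main_dir
        self.jobs: dict[str, TransferJob] = {}

    def persist_meta(self, job: TransferJob) -> None:
        """Mirror durable fields to <MAIN_DIR>/temp/<job_id>/meta.json."""
        meta_dir = self.main_dir / "temp" / job.job_id
        meta_dir.mkdir(parents=True, exist_ok=True)
        meta = {"job_id": job.job_id, "stage": job.stage, "progress": job.progress}
        (meta_dir / "meta.json").write_text(json.dumps(meta))
```

Moving the `jobs` dict and subscriber lists into a shared store (for example Redis) is the kind of change the third bullet above would require.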

More details: docs/ARCHITECTURE.md

Project Layout

src/open_workshop_storage/
├── api/routes/         # FastAPI endpoints
├── core/               # shared state contracts and metadata helpers
├── observability/      # OpenTelemetry / Uptrace wiring
├── services/           # long-running transfer workflows
├── distributor.py      # distributor app entrypoint
├── loader.py           # loader app entrypoint
├── service_factory.py  # shared app wiring and router cloning helpers
└── utils/              # archive, auth, file, and image utilities

Quality Tooling

The repository ships with a small Makefile for formatting, linting, and type checking:

make format
make lint
make type-check

Toolchain:

  • black for code style
  • isort for imports
  • flake8 for linting
  • mypy for static type checks

make lint runs isort, black, and flake8 in check-only mode, while make format applies isort and black.

Development workflow details: docs/DEVELOPMENT.md

Telemetry

If UPTRACE_DSN is configured, the app enables OpenTelemetry tracing and exports spans to Uptrace.

Example:

export UPTRACE_DSN="https://<token>@api.uptrace.dev/<project_id>"
export OTEL_SERVICE_NAME="open-workshop-storage"
export OTEL_SERVICE_VERSION="1.0.0"
export OTEL_DEPLOYMENT_ENVIRONMENT="production"
granian --working-dir src --interface asgi --host 127.0.0.1 --port 7070 open_workshop_storage.distributor:app

Telemetry settings reference: docs/CONFIGURATION.md

License

This project is distributed under the terms of the MPL-2.0 license. See LICENSE.
