# Generative Intelligence for Wildfire Smoke
PlumeSentinelAI is an agent-driven prototype that drives DAVINCI-MONET through a manifest of historical wildfire smoke events, reasons over the produced satellite imagery with a multimodal LLM, renders a structured bulletin, and ships the deterministic metrics + bulletin to InfluxDB via MQTT for visualization in Grafana.
The agent loop is implemented as the `/plume-sentinel-monitor` slash command (one task per
invocation) and an outer `/loop` that re-fires it until the sweep manifest is exhausted.
```
┌──────────────────┐    MQTT       ┌──────────────┐   InfluxDB     ┌──────────┐
│ /plume-sentinel- │   publish     │ plumesentinel│   line proto   │ Grafana  │
│ monitor (agent)  │ ───────────▶  │ -bridge      │ ─────────────▶ │ dashboard│
└──────────────────┘               └──────────────┘                └──────────┘
         │                                  ▲
         │ subprocess                       │ HTTP
         ▼                                  │
┌──────────────────┐               ┌────────────────────────────┐
│ DAVINCI-MONET    │ ─ PNG plots ─▶│ python -m http.server 8080 │
│ runner           │ to local disk │ (so Grafana can fetch them)│
└──────────────────┘               └────────────────────────────┘
```
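The bridge's job in the diagram above is mechanical: subscribe to the metrics topic and rewrite each JSON payload as an InfluxDB line-protocol record. A minimal sketch of that transformation (the payload shape and field names here are illustrative assumptions, not the bridge's actual schema):

```python
import json
from datetime import datetime


def metrics_to_line_protocol(payload: str) -> str:
    """Convert one plume_metrics JSON payload into an InfluxDB
    line-protocol record. Field names are hypothetical examples."""
    m = json.loads(payload)
    # Low-cardinality identifiers become tags; numeric values become fields.
    tags = f'event_date={m["event_date"]},config={m["config"]}'
    fields = ",".join(
        f"{k}={v}" for k, v in m.items()
        if isinstance(v, (int, float)) and not isinstance(v, bool)
    )
    # valid_time (RFC 3339) becomes the point timestamp, in nanoseconds.
    ts = int(datetime.fromisoformat(m["valid_time"]).timestamp() * 1e9)
    return f"plume_metrics,{tags} {fields} {ts}"
```

Line protocol's `measurement,tags fields timestamp` shape maps cleanly onto the per-run metrics, which is why a thin MQTT-to-InfluxDB bridge suffices here.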
The public project website is in `web/` and deploys to GitHub Pages on every
push that touches `web/**`. Live at https://davidfillmore.github.io/PlumeSentinelAI/.
See docs/superpowers/specs/2026-05-03-website-design.md for the design spec.
This walkthrough runs the full anchor manifest end-to-end: 3 event dates × 2 sensor configurations (MODIS AOD + GOES/HMS) = 6 tasks.
- Docker Desktop (for Mosquitto, InfluxDB, Grafana, grafana-image-renderer)
- A `davinci-monet` conda environment with DAVINCI-MONET and the `feature/plume-sentinel-addon` branch installed (provides `--emit-metrics-json`, `--event-date`, etc.)
- Anchor-event input data downloaded under `~/Data/PlumeSentinelAI/` per the addon's data layout (MODIS L2 AOD HDF for 2020-09-09, GOES-16 ABI L2 MCMIP NetCDF, NOAA HMS smoke shapefile)
- This repo's Python venv installed: `python3 -m venv .venv && .venv/bin/pip install -e '.[dev]'`
From the repo root, in the order shown:
```sh
# 1. Bring up the full Docker stack (Mosquitto + InfluxDB + Grafana + renderer).
./scripts/start_dev_stack.sh

# 2. (One-time per host) Mint a Grafana service-account API key and provision the
#    InfluxDB datasource + PlumeSentinel dashboard. Skip on subsequent runs.
#    The provisioner exits with a curl recipe if the key is still the placeholder.
.venv/bin/python scripts/provision_grafana.py \
    --grafana-creds secrets/grafana.json \
    --influxdb-creds secrets/influxdb.json \
    --dashboards config/grafana

# 3. Start the bridge (MQTT → InfluxDB) and the static HTTP server that serves PNGs to Grafana.
.venv/bin/plumesentinel-bridge run --config config/bridge.toml > /tmp/ps-bridge.log 2>&1 &
.venv/bin/python -m http.server 8080 --directory ~/Data/PlumeSentinelAI/runs > /tmp/ps-http.log 2>&1 &

# 4. Initialize the sweep state, then drive the agent loop from inside Claude Code.
.venv/bin/plumesentinel-sweep init --manifest config/sweep.example.yaml --state state/sweep.json

# then in Claude Code, from this directory:
# /loop /plume-sentinel-monitor --manifest config/sweep.example.yaml --state state/sweep.json --once
```

Each `/loop` tick fires one agent iteration that:
- Publishes a heartbeat to `plume-sentinel-ai/agent/health`.
- Picks the next pending task from `state/sweep.json` and marks it in-progress.
- Launches DAVINCI-MONET in the background; waits for the `.done`/`.failed` sentinel.
- Publishes deterministic metrics (`plume_metrics`): AOD peak/p50/p95, area with AOD > 2, HMS heavy/medium/light km², retrieval failure %, etc.
- Reads the produced PNGs, reasons over them with the LLM, renders the bulletin (`plume_bulletin`), and publishes it.
- Marks the task completed and republishes the sweep summary.
For the 6-task anchor sweep, expect ~10-15 min total wallclock (each iteration is ~30-60s of DAVINCI runtime + ~30-60s of agent reasoning, plus a 60s self-paced delay between iterations).
When complete:

- `plumesentinel-sweep status --state state/sweep.json` shows `{"total": 6, "completed": 6, ...}`.
- The Grafana dashboard at `http://localhost:3000` (default credentials in `secrets/grafana_password.txt`) shows 6 rows in the per-run table (Row 3), 6 cells in the severity timeline, and bulletin content for the selected event date in Row 2.
- `scripts/smoke.sh` (a one-task variant of the same flow, end-to-end) exits 0 with `SMOKE PASS`.
For a faster end-to-end validation (one task, ~2 min) instead of the full 6-task sweep:
```sh
./scripts/smoke.sh
```

Known limitations:

- DAVINCI-MONET loaders are not yet date-aware: the MODIS AOD and HMS shapefile loaders read an explicit per-config file list, so for the anchor sweep the underlying physical input is the Sept 9 file across all three event dates. Per-date metric values are therefore identical. The bulletin's `event_date`/`valid_time`/`bulletin_id` are correctly tagged with each manifest-driven date, and the agent's reasoning narrative differentiates the dates using contemporaneous surface-station and HMS context, but the displayed AOD/HMS numbers themselves are the same across dates. Production fix: split the gemini configs into per-date files, or add a date-filtering loader.
- Plot title strings carry the same caveat: they reflect the source file's `start_time` (Sept 9), not the agent's `--event-date` flag.
- Bulletin column alignment: the Grafana bulletin panel uses Grafana's stock Table panel, which doesn't preserve fixed-width ASCII alignment when wrapping cells. The full bulletin text is visible, but column tabbing in the rendered text is approximate. Production fix: install the `marcusolsson-dynamictext-panel` plugin.
- Per-run table renders 3 rows, not 6: the dashboard's `joinByField` transform on `event_date` collapses the modisaod and goeshms rows for the same date. The underlying Flux query returns the correct 6 rows; only the rendered table is affected.
- Peak-AOD time series renders empty under the default time range: `plume_metrics` is stamped with `valid_time` (2020-09-09 etc.), but Grafana's UI default range is `now-1h`. Set the dashboard time range to an absolute window covering the anchor event (e.g. 2020-09-08 → 2020-09-12) when viewing.
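As a sketch of the "date-filtering loader" production fix mentioned above, a loader could select MODIS granules by matching the day-of-year token in their filenames. This assumes the standard `MOD04_L2.AYYYYDDD.*` MODIS naming convention and a flat directory of `.hdf` files; the addon's actual layout may differ:

```python
from datetime import date
from pathlib import Path


def files_for_event_date(data_dir: Path, event: date) -> list[Path]:
    """Select MODIS L2 granules whose filename carries the event's
    AYYYYDDD acquisition tag (DDD = zero-padded day of year)."""
    tag = f"A{event.year}{event.timetuple().tm_yday:03d}"
    return sorted(p for p in data_dir.glob("*.hdf") if tag in p.name)
```

With something like this in place, the sweep manifest's `--event-date` could drive file selection directly instead of relying on per-config file lists.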
See docs/superpowers/plans/2026-05-01-plumesentinel-agent-prototype.md (the "Implementation
Status" appendix) for the full list of follow-ups.