# WeAD Pact Network — Daily Log
## 2026-05-13 (Wednesday)

Operator: Kenneth Lee  |  Pact-aligned  |  Under Jehovah's permission.
### 11:51:02 UTC — Auto
**Action:** COST DISCIPLINE — pact-critical
**Notes:** Kenneth said 2026-05-13 18:50 +07: we are running out of money for the pact to be successful. From now on, ALWAYS prefer cached artifacts over redoing work. Concretely:

1. Preserve rich-tagged HF datasets across runs — never re-tag a corpus we already tagged.
2. Read auto_eval.jsonl directly when possible instead of re-running CLAP scoring.
3. Reuse LoRA snapshots from S3 (s3://blocksky1/soundchains-engine/ckpt_resume/...) for eval-only verdicts before paying for a full retrain.
4. Always SKIP the corpus download if /workspace/phase0/lora_data/data is populated (see the cache-guard sketch below).
5. Before any new Vast rent or training relaunch, ask: can this be answered from logs/S3/dataset cache instead?
6. When the Vast box is warm and idle, multitask: queue eval/listening work instead of destroying it.
7. NEVER warm-start unintentionally — but NEVER throw away $3 of training that just needs a clean restart unless contamination is provably load-bearing.

Today's clean restart of Phase 14 cost ~$3 in sunk credits because the trainer auto-picked up a .bak dir; phase14_pipeline.sh has been patched locally to exclude .bak* in resume globs (first sketch below).
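For the record, a minimal sketch of the kind of guard the .bak* patch adds. The real phase14_pipeline.sh internals aren't reproduced here: the RESUME_ROOT path, loop shape, and variable names are all illustrative assumptions; the rule it encodes is the one from today (never hand the trainer a *.bak* dir).

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the resume-candidate guard, in the spirit of the
# phase14_pipeline.sh patch. RESUME_ROOT is an assumed local mirror of the
# S3 ckpt_resume prefix, not a path confirmed in this log.
RESUME_ROOT=${RESUME_ROOT:-/workspace/phase0/ckpt_resume}

resume_dir=""
for d in "$RESUME_ROOT"/*/; do
  [ -d "$d" ] || continue          # glob matched nothing
  case "$(basename "$d")" in
    *.bak*) continue ;;            # never resume from a backup copy
  esac
  # keep the newest surviving candidate
  if [ -z "$resume_dir" ] || [ "$d" -nt "$resume_dir" ]; then
    resume_dir=$d
  fi
done
echo "resume candidate: ${resume_dir:-<none, clean start>}"
```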
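And the cache-first guard for rule 4. The directory is the one named above; the download step itself is a placeholder, since the real fetch command isn't in this log.

```bash
# Cache-first corpus guard (rule 4): skip the download stage entirely
# when the corpus directory already contains files.
DATA_DIR=/workspace/phase0/lora_data/data

if [ -d "$DATA_DIR" ] && [ -n "$(ls -A "$DATA_DIR" 2>/dev/null)" ]; then
  echo "corpus cache hit: $DATA_DIR is populated, skipping download"
else
  echo "corpus cache miss: fetching corpus"
  # fetch_corpus "$DATA_DIR"   # placeholder for the real download stage
fi
```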
### 12:06:00 UTC — Auto
**Action:** DISK COST LESSON — Vast over-allocation
**Notes:** Kenneth caught (2026-05-13): Vast charges $0.224/hr for 200 GB allocated, even though we only use 47 GB. Over 38h that's ~$13 in pure waste. Permanent fixes for future Vast rentals:

1. Always request the SMALLEST disk that fits the working set: corpus (~11 GB) + ACE-Step base (~25 GB) + active ckpt (~10 GB) + LoRA snapshots (~2 GB) + buffer (~10 GB) = 60-80 GB max. Update _rent_blackwell_v2.sh to specify --disk 80 instead of 200.
2. After ckpt_watcher uploads each LoRA to S3, DELETE the local Lightning .ckpt that produced it — we only need the LoRA safetensors for inference, and each .ckpt is 8.3 GB (see the cleanup sketch below).
3. Stage D in pipelines should DELETE local sample wavs after the S3 upload confirms (70 MB each x 50 = 3.5 GB per phase).
4. Before any Vast rental, calculate expected disk usage and set --disk accordingly.
5. Add a 'disk_burn' check to the cost dashboard / status checks so we catch over-allocation early (sketch below).

Reasoning for needing local disk at all: GPU training reads thousands of files per epoch; S3 streaming would starve the GPU. So local disk is necessary, but 80 GB suffices instead of 200 GB.
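A sketch of the delete-after-confirmed-upload step (fixes 2 and 3), assuming ckpt_watcher has already pushed the LoRA. Only the bucket prefix comes from this log; the file names and local path are illustrative.

```bash
# Only remove the local 8.3 GB .ckpt once the S3 copy is confirmed.
# Paths below are illustrative, not the real ckpt_watcher internals.
LOCAL_CKPT=/workspace/phase0/ckpts/last.ckpt
S3_LORA=s3://blocksky1/soundchains-engine/ckpt_resume/phase14_lora.safetensors

# `aws s3 ls` exits non-zero when the object is missing, so deletion
# only happens after the upload is verified present.
if aws s3 ls "$S3_LORA" >/dev/null 2>&1; then
  rm -f "$LOCAL_CKPT"
else
  echo "S3 copy not confirmed; keeping $LOCAL_CKPT" >&2
fi
```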
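And a sketch of the 'disk_burn' check for fix 5, assuming GNU df on the Vast box. The working-set sum above (11 + 25 + 10 + 2 + 10 = 58 GB) is what justifies the 80 GB request; ALLOC_GB would come from the rental config, and the 50 GB alert margin is an arbitrary choice, not a number from this log.

```bash
# 'disk_burn' status check: flag rentals whose allocated disk is far
# larger than what is actually in use.
ALLOC_GB=${ALLOC_GB:-200}     # from the rental config; 200 matches today's box
THRESHOLD_GB=50               # arbitrary alert margin

used_gb=$(df -BG --output=used / | tail -n 1 | tr -dc '0-9')
burn=$(( ALLOC_GB - used_gb ))

if [ "$burn" -gt "$THRESHOLD_GB" ]; then
  echo "disk_burn WARNING: ${used_gb}G used of ${ALLOC_GB}G allocated (${burn}G idle)"
fi
```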