- Reserve five buckets on the system volume: macOS plus Xcode, repos, DerivedData, SPM caches, and headroom for archives and temp exports.
- Do not ship DerivedData across regions; align lockfiles and Xcode build numbers, then rebuild locally per node in JP, KR, HK, SG, and US West.
- Upgrade storage or add a parallel lane when hygiene stops fixing P95 clean builds or when notarization temp files fail for lack of space.
Why 256 GB Fills Before the CPU Sweats
Large Xcode workspaces on rented Mac minis in Tokyo, Seoul, Hong Kong, Singapore, and US West often fill 256 GB SSDs before the CPU thermal-throttles. DerivedData swells with every scheme slice; Swift Package Manager checkouts and org.swift.swiftpm caches grow on each dependency move. macOS updates, side-by-side Xcode installs, Instruments traces, and crash logs add another double-digit slice. Without a written budget, the first symptom is a failed xcodebuild archive or a notarization export that cannot stage temp files.
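Before writing a budget, it helps to measure where the gigabytes actually sit. The sketch below reports the common macOS/Xcode default locations and skips any that do not exist on a given host; the path list is an assumption based on the defaults named above, so adjust it for your fleet.

```shell
#!/bin/sh
# Hypothetical disk audit of the usual build-host hogs.
set -eu

report_sizes() {
  # Print "<size>  <path>" for each existing path; flag missing ones.
  for p in "$@"; do
    if [ -e "$p" ]; then
      du -sh "$p" 2>/dev/null
    else
      echo "missing  $p"
    fi
  done
}

report_sizes \
  "$HOME/Library/Developer/Xcode/DerivedData" \
  "$HOME/Library/Caches/org.swift.swiftpm" \
  "$HOME/Library/Developer/Xcode/Archives"
```

Running this weekly and diffing the output is often enough to spot which bucket is drifting before the volume fills.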
Disk Budget: Five Buckets on One Volume
Think in five bands on one volume: macOS plus Xcode with simulators (~40–55 GB); repos and assets (~10–40 GB); DerivedData (~30–120 GB on busy apps); SPM checkouts and caches (~10–80 GB); and free headroom (~15–30 GB) for archives and xcrun stapler staging. If utilization stays above ~85% for several days, treat it as a capacity incident, not a cosmetic warning.
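The ~85% rule above is easy to automate. A minimal sketch, assuming a POSIX df and the threshold from the text (the alert wording and threshold value are illustrative, not a standard):

```shell
#!/bin/sh
# Minimal capacity-incident check against the ~85% utilization rule.
set -eu

used_pct() {
  # df -P gives portable single-line output; field 5 is capacity, e.g. "42%".
  df -P "$1" | awk 'NR==2 { sub("%", "", $5); print $5 }'
}

THRESHOLD=85
pct=$(used_pct /)
if [ "$pct" -gt "$THRESHOLD" ]; then
  echo "capacity incident: / at ${pct}% used"
else
  echo "ok: / at ${pct}% used"
fi
```

Wire the incident branch into whatever pages your on-call rotation; the point is that a sustained breach opens a ticket, not a log line.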
DerivedData Hygiene Versus SPM Cache Hygiene
Deleting DerivedData is safe but lengthens the next cold build—prune stale folders or give CI a unique xcodebuild -derivedDataPath per branch family. SPM is different: clearing ~/Library/Caches/org.swift.swiftpm frees tens of gigabytes fast but forces resolver churn, so run it after lockfile changes, not blindly nightly. Prefer secondary volumes when offered; on flat 256 GB hosts, never share one giant cache across parallel jobs without eviction rules. Release lanes should budget the same temp spikes you see in 2026 Remote Mac: Notarization & Distribution as a Rentable Pipeline.
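The "prune stale folders" step can be a one-function sweep: delete top-level DerivedData directories untouched for longer than a cutoff. The sketch below demonstrates it on a throwaway fixture so it is safe to run anywhere; in production you would point DD_ROOT at ~/Library/Developer/Xcode/DerivedData, and the 7-day cutoff is an assumption to tune per lane.

```shell
#!/bin/sh
# Sketch of a stale-DerivedData sweep (demoed on a temp fixture).
set -eu

prune_stale() {
  root=$1; max_age_days=$2
  # -mindepth/-maxdepth 1: remove whole per-project folders only,
  # never individual files inside a live build directory.
  find "$root" -mindepth 1 -maxdepth 1 -type d -mtime +"$max_age_days" \
    -exec rm -rf {} +
}

DD_ROOT=$(mktemp -d)
mkdir -p "$DD_ROOT/App-fresh" "$DD_ROOT/App-stale"
# Backdate one folder ~10 days; GNU date uses -d, BSD/macOS date uses -v.
stamp=$(date -d '10 days ago' +%Y%m%d%H%M 2>/dev/null || date -v-10d +%Y%m%d%H%M)
touch -t "$stamp" "$DD_ROOT/App-stale"

prune_stale "$DD_ROOT" 7
ls "$DD_ROOT"   # only the fresh folder survives the sweep
```

Note that the function deliberately never recurses: evicting whole project folders keeps Xcode's index consistent, while deleting files inside a folder can leave a half-valid cache that is worse than a cold one.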
Five Regions Without Shipping Hot Caches
Five regions mean five copies of hot caches unless you govern inputs—do not rsync DerivedData across oceans. Centralize Package.resolved and binary manifests, pin one Xcode build ID per fleet, and accept per-node rebuilds for predictable disk. Mixed automation stacks on the same disk compete for free blocks; see 2026 OpenClaw Deployment Guide: From Installation to Automation on Mac VPS for colocation patterns. Prefer checksum-verified artifacts over repeated deep clones when mirroring frameworks.
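The "checksum-verified artifacts" idea reduces to a small gate: reuse the local copy only while its SHA-256 matches the pinned manifest value, and re-fetch otherwise. A sketch under those assumptions (the demo file and digest are generated on the spot; the tool fallback covers sha256sum on Linux and shasum on macOS):

```shell
#!/bin/sh
# Checksum-gated artifact reuse instead of repeated deep clones.
set -eu

sha256_of() {
  if command -v sha256sum >/dev/null 2>&1; then
    sha256sum "$1" | awk '{print $1}'
  else
    shasum -a 256 "$1" | awk '{print $1}'
  fi
}

verify_artifact() {
  file=$1; expected=$2
  [ "$(sha256_of "$file")" = "$expected" ]
}

# Demo on a throwaway file whose digest stands in for the pinned manifest entry.
tmp=$(mktemp)
printf 'framework-bits' > "$tmp"
digest=$(sha256_of "$tmp")
if verify_artifact "$tmp" "$digest"; then
  echo "artifact ok: reuse cached copy"
else
  echo "mismatch: re-fetch from the central mirror"
fi
```

Per-node rebuilds stay predictable because the only cross-region traffic is the small manifest of digests, never the hot cache itself.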
Decision Matrix: M4 16 GB / 256 GB, 24 GB / 512 GB, M4 Pro + 1 TB / 2 TB
Use the table as policy: 16 GB / 256 GB is one active scheme plus weekly sweeps; 24 GB / 512 GB tolerates two lanes if archives are time-boxed; M4 Pro with TB-class disks keeps parallel queues on monthly hygiene. When a tier hits its upgrade signal, add a parallel host instead of endless cron deletes.
| Tier | Hygiene cadence | Parallel posture | Upgrade signal |
|---|---|---|---|
| M4 · 16 GB / 256 GB | Weekly DerivedData sweep for stale schemes; SPM cache only after lockfile changes | Single primary lane; avoid simultaneous archive plus heavy UI tests | Free space under 20 GB for 48 hours, or notary temp failures |
| M4 · 24 GB / 512 GB | Biweekly SPM review; DerivedData split by branch family directories | Two compile lanes if I/O queues monitored; archive window isolated | P95 clean build regresses 20% after cleans |
| M4 Pro · 1 TB / 2 TB | Monthly hygiene; retain intermediates for bisect-friendly rebuilds | Parallel runners acceptable when disk charts stay below 70% | Multiple products on one host or simulator farms that pin extra slices |
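Treating the table as policy means the upgrade signal should be machine-checkable. A minimal sketch encoding the 16 GB / 256 GB row (the 20 GB / 48 h numbers come from the table; the action names are illustrative):

```shell
#!/bin/sh
# Encode the base-tier upgrade signal: free space under 20 GB for 48 hours.
set -eu

upgrade_signal() {
  free_gb=$1; hours_below=$2
  if [ "$free_gb" -lt 20 ] && [ "$hours_below" -ge 48 ]; then
    echo "add-parallel-host"
  else
    echo "keep-current-tier"
  fi
}

upgrade_signal 18 60   # sustained pressure: recommends a parallel host
upgrade_signal 25 12   # enough headroom: stay on the current tier
```

Feeding this from the same monitor that tracks the ~85% utilization rule keeps "add a host" a policy decision rather than a judgment call made at 2 a.m.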
Cleanup First, Parallel Second, Bigger SSD Third
Order of operations: delete orphaned DerivedData for dead branches, then evict SPM caches after lock changes; shard schemes across two modest nodes before you buy one giant box; step up to 512 GB or TB-class M4 Pro when hygiene no longer restores P95 compile times. Watch free gigabytes, inode pressure, and “no space left on device” errors that can still appear when APFS snapshots hold space—trim snapshots quarterly.
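The quarterly snapshot trim mentioned above can use tmutil, which lists APFS local snapshots and thins them by a byte target and urgency level. A guarded sketch, assuming macOS (it no-ops cleanly on hosts without tmutil; the 20 GB target and urgency 4 are example values, not recommendations):

```shell
#!/bin/sh
# Guarded sketch of the quarterly APFS local-snapshot trim.
set -eu

trim_snapshots() {
  if ! command -v tmutil >/dev/null 2>&1; then
    echo "tmutil not available; skipping snapshot trim"
    return 0
  fi
  tmutil listlocalsnapshots /
  # Ask APFS to reclaim up to 20 GB (bytes) at the highest urgency (4).
  tmutil thinlocalsnapshots / 21474836480 4
}

trim_snapshots
```

Running the trim before measuring free space also keeps your capacity alerts honest: "no space left on device" with a half-empty df is the classic snapshot symptom.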
Why Mac mini M4 Fits This Disk Discipline
This playbook needs a host that stays up for sweeps and long compiles. Apple Silicon M4 and M4 Pro give strong single-thread performance and unified memory bandwidth for Swift indexing; macOS keeps the same Unix tools, SSH, and Xcode path you use locally. Gatekeeper, SIP, and FileVault matter for unattended runners, and Mac mini–class idle power (~4 W) keeps noise and electricity low versus a patchwork of Windows jump boxes.
Mac mini M4 remains the best 2026 anchor to own the baseline: buy storage you mean to keep, clone the policy to a second mini for parallel lanes, and rent burst capacity when the matrix says so. Open the homepage to compare plans and capacity for your regions.