Data Lifecycle, Storage, and TTL Enforcement
EphemeralNet’s value proposition hinges on deterministic expiry. This guide describes how chunks move through the system, how manifests encode TTL metadata, and how secure wiping and shard management guarantee that data disappears on schedule.
Chunk ingestion pipeline
- `ControlServer` accepts a `STORE` request, authenticates it (token + PoW), enforces `PAYLOAD-LENGTH`, and streams bytes to the node.
- `Node::store_chunk()` computes `chunk_id` via SHA-256, derives the ChaCha20 key/nonce, encrypts the payload when enabled, and wraps it in `ChunkData` with metadata.
- `ChunkStore::put()` writes the chunk into memory (and to disk when persistence is enabled) keyed by `chunk_id`. Each record tracks `expires_at = now + sanitized_ttl` (sketched below).
- Secure persistence: existing files are overwritten before updates; metadata retains filename hints for later fetches.
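A minimal sketch of that expiry bookkeeping. The names (`ChunkRecord`, `sanitize_ttl()`) and the TTL bounds are illustrative assumptions, not the actual EphemeralNet types or policy:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <string>
#include <vector>

using Clock = std::chrono::system_clock;

// Illustrative record; the real ChunkData/ChunkStore types carry more metadata.
struct ChunkRecord {
    std::string chunk_id;               // SHA-256 digest of the chunk
    std::vector<std::uint8_t> payload;  // ChaCha20-encrypted bytes
    Clock::time_point expires_at;       // now + sanitized TTL
};

// Clamp a caller-supplied TTL into a policy window (these bounds are assumptions).
std::chrono::seconds sanitize_ttl(std::chrono::seconds requested) {
    constexpr std::chrono::seconds kMin{60};
    constexpr std::chrono::seconds kMax{7 * 24 * 3600};
    return std::clamp(requested, kMin, kMax);
}

ChunkRecord make_record(std::string chunk_id, std::vector<std::uint8_t> payload,
                        std::chrono::seconds requested_ttl) {
    return ChunkRecord{std::move(chunk_id), std::move(payload),
                       Clock::now() + sanitize_ttl(requested_ttl)};
}
```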
Manifest generation
`protocol::Manifest` objects returned to the CLI include:
- `chunk_id` + `chunk_hash` for integrity verification.
- ChaCha20 `nonce` and Shamir shard metadata (`threshold`, `total_shares`).
- TTL-derived `expires_at` encoded as Unix seconds.
- Metadata map (e.g., original filename) for ergonomic fetch defaults.
- Discovery hints: prioritized `(scheme, transport, endpoint, priority)` tuples from manual advertise entries and auto-discovered endpoints (NAT/relay diagnostics feed this list).
- Security advisory: handshake/store PoW expectations, attestation digest mirroring the chunk hash, and textual guidance for offline solvers.
- Fallback URIs: optional `control://` or `https://` entries attempted after discovery hints fail.
Manifests are base64-encoded and prefixed with `eph://`, forming the shareable URI. Recipients can reconstruct encryption keys once they collect enough shards.
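For orientation, here is a hedged sketch of the manifest fields and the `eph://` framing. The field layout and the toy serialization shown are assumptions, not the real `protocol::Manifest` wire format:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Assumed manifest shape; field names mirror the list above.
struct DiscoveryHint {
    std::string scheme;     // e.g. "transport" or "control"
    std::string transport;  // e.g. "tcp"
    std::string endpoint;
    int priority = 0;
};

struct Manifest {
    std::string chunk_id;
    std::string chunk_hash;
    std::vector<std::uint8_t> nonce;              // ChaCha20 nonce
    std::uint8_t threshold = 0;                   // Shamir reconstruction threshold
    std::uint8_t total_shares = 0;
    std::int64_t expires_at = 0;                  // Unix seconds
    std::map<std::string, std::string> metadata;  // e.g. original filename
    std::vector<DiscoveryHint> hints;
    std::vector<std::string> fallback_uris;       // control:// or https://
};

// Toy base64 encoder, only to show how the eph:// URI is formed.
std::string base64(const std::vector<std::uint8_t>& in) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    for (std::size_t i = 0; i < in.size(); i += 3) {
        std::uint32_t n = static_cast<std::uint32_t>(in[i]) << 16;
        if (i + 1 < in.size()) n |= static_cast<std::uint32_t>(in[i + 1]) << 8;
        if (i + 2 < in.size()) n |= in[i + 2];
        out += tbl[(n >> 18) & 63];
        out += tbl[(n >> 12) & 63];
        out += (i + 1 < in.size()) ? tbl[(n >> 6) & 63] : '=';
        out += (i + 2 < in.size()) ? tbl[n & 63] : '=';
    }
    return out;
}

std::string to_uri(const std::vector<std::uint8_t>& serialized_manifest) {
    return "eph://" + base64(serialized_manifest);
}
```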
TTL enforcement
- `ChunkStore::sweep_expired()` runs during `Node::tick()` and deletes in-memory records whose `expires_at` has passed (the sweep loop is sketched after this list). When persistence is enabled, files are securely wiped (see below) before removal.
- `KademliaTable::sweep_expired()` prunes provider entries and shard metadata so the DHT never advertises stale nodes.
- `Node::withdraw_manifest()` retracts ANNOUNCE state for expired chunks, preventing peers from chasing dead replicas.
- CLI commands (`list`, `ttl-audit`) compute TTL remaining by subtracting `now` from snapshot metadata so auditors can validate compliance.
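A compact illustration of the sweep pattern. The record shape and container layout are assumptions; the real `ChunkStore` also coordinates the secure wipe described below:

```cpp
#include <chrono>
#include <cstddef>
#include <string>
#include <unordered_map>

using Clock = std::chrono::system_clock;

struct StoredChunk {
    Clock::time_point expires_at;
    bool persisted = false;
    // ... encrypted bytes, metadata
};

// Drop every record whose TTL has elapsed; returns how many were removed.
// In the daemon this runs on each Node::tick() and triggers the secure wipe
// for persisted files before erasing the in-memory entry.
std::size_t sweep_expired(std::unordered_map<std::string, StoredChunk>& chunks,
                          Clock::time_point now = Clock::now()) {
    std::size_t removed = 0;
    for (auto it = chunks.begin(); it != chunks.end();) {
        if (it->second.expires_at <= now) {
            // secure_wipe_file(...) would run here when it->second.persisted is true.
            it = chunks.erase(it);
            ++removed;
        } else {
            ++it;
        }
    }
    return removed;
}
```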
Secure wiping
- Controlled by `storage_wipe_on_expiry` and `storage_wipe_passes` (default: one pass).
- `ChunkStore::secure_wipe_file()` overwrites persisted files `n` times, flushing after each pass before deletion (see the sketch below).
- Failures emit structured diagnostics so operators notice disks that cannot honor rewrite policies.
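A sketch of the multi-pass overwrite, assuming a simple "overwrite with zeros, flush, then delete" policy; the real pass pattern and error reporting may differ, and a production wipe would also fsync:

```cpp
#include <algorithm>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <system_error>
#include <vector>

// Overwrite `path` `passes` times, flushing after each pass, then delete it.
// Returns false if any step fails so callers can emit diagnostics.
bool secure_wipe_file(const std::filesystem::path& path, unsigned passes = 1) {
    std::error_code ec;
    const std::uintmax_t size = std::filesystem::file_size(path, ec);
    if (ec) return false;

    const std::vector<char> zeros(64 * 1024, 0);
    for (unsigned pass = 0; pass < passes; ++pass) {
        // ios::in prevents truncation so the existing bytes are rewritten in place.
        std::ofstream out(path, std::ios::binary | std::ios::in);
        if (!out) return false;
        for (std::uintmax_t written = 0; written < size;) {
            const std::uintmax_t n =
                std::min<std::uintmax_t>(zeros.size(), size - written);
            out.write(zeros.data(), static_cast<std::streamsize>(n));
            written += n;
        }
        out.flush();  // push this pass to the OS before starting the next one
        if (!out) return false;
    }
    return std::filesystem::remove(path, ec) && !ec;
}
```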
Persistent vs. in-memory mode
- In-memory storage is always active; it guarantees data disappears as soon as TTL expires or the daemon restarts.
- Persistent mode mirrors encrypted bytes onto disk under `<storage_dir>/<chunk>.chunk`. Upon restart, the daemon sweeps expired entries immediately using the stored TTL metadata (illustrated below).
- Operators toggle persistence per profile or CLI flag (`--persistent`).
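A minimal sketch of the restart path. The on-disk layout assumed here (expiry timestamp as little-endian Unix seconds at the start of each `.chunk` file) is an illustration only; it shows the "sweep expired entries immediately on restart" behaviour, not the real file format:

```cpp
#include <chrono>
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <vector>

void sweep_on_restart(const std::filesystem::path& storage_dir) {
    const std::int64_t now = std::chrono::duration_cast<std::chrono::seconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();

    std::vector<std::filesystem::path> expired;
    for (const auto& entry : std::filesystem::directory_iterator(storage_dir)) {
        if (entry.path().extension() != ".chunk") continue;
        std::ifstream in(entry.path(), std::ios::binary);
        std::int64_t expires_at = 0;
        in.read(reinterpret_cast<char*>(&expires_at), sizeof expires_at);
        if (!in || expires_at <= now) expired.push_back(entry.path());
    }
    for (const auto& path : expired) {
        // Unreadable or expired: the real daemon would secure-wipe before removal.
        std::filesystem::remove(path);
    }
}
```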
Shard management
- `Node` emits `Config::shard_total` Shamir shares per chunk and publishes them both inside manifests and to the DHT.
- Fetchers reconstruct the ChaCha20 key after collecting `threshold` shares.
- Shard records share the same TTL as the underlying chunk; once expired, `KademliaTable::shard_record()` returns `nullopt` and ANNOUNCE messages withdraw the share (see the sketch below).
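The shard-expiry behaviour can be pictured like this. The record shape and map layout are assumptions; only the "expired shares vanish" property mirrors `KademliaTable::shard_record()`:

```cpp
#include <chrono>
#include <cstdint>
#include <optional>
#include <string>
#include <unordered_map>
#include <vector>

using Clock = std::chrono::system_clock;

struct ShardRecord {
    std::uint8_t index = 0;                 // Shamir share index
    std::vector<std::uint8_t> share;        // share bytes
    Clock::time_point expires_at;           // same TTL as the underlying chunk
};

struct ShardTable {
    std::unordered_map<std::string, ShardRecord> records;  // keyed by chunk_id

    // Expired shares behave as if they were never announced.
    std::optional<ShardRecord> shard_record(const std::string& chunk_id) const {
        auto it = records.find(chunk_id);
        if (it == records.end() || it->second.expires_at <= Clock::now()) {
            return std::nullopt;
        }
        return it->second;
    }
};
```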
Fetch lifecycle recap
- CLI decodes the manifest and orders discovery hints by `priority`.
- Transport hints (e.g., `scheme="transport"`, `transport="tcp"`) are attempted first; each may require solving bootstrap PoW.
- If transport attempts fail, control hints/fallback URIs take over. When all hints fail (and `--direct-only` is absent), the CLI falls back to the local daemon, which launches a swarm fetch using the same manifest metadata (see the sketch after this list).
- Chunk bytes are decrypted using the stored nonce + reconstructed ChaCha20 key and delivered to disk, honoring CLI flags such as `--fetch-use-manifest-name`.
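A sketch of the hint-ordering and fallback loop. The `try_hint`/`try_uri` callbacks, the priority convention, and the outcome enum stand in for the CLI's real transport and control logic:

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

struct Hint {
    std::string scheme;     // "transport", "control", ...
    std::string transport;  // e.g. "tcp"
    std::string endpoint;
    int priority = 0;
};

enum class FetchOutcome { Direct, DaemonFallback, Failed };

// Walk discovery hints in priority order, then fallback URIs; if everything
// fails, hand off to the local daemon unless --direct-only was requested.
FetchOutcome fetch_via_hints(std::vector<Hint> hints,
                             const std::vector<std::string>& fallback_uris,
                             const std::function<bool(const Hint&)>& try_hint,
                             const std::function<bool(const std::string&)>& try_uri,
                             bool direct_only) {
    // Assumed convention: larger priority values are tried first.
    std::stable_sort(hints.begin(), hints.end(),
                     [](const Hint& a, const Hint& b) { return a.priority > b.priority; });
    for (const auto& h : hints) {
        if (try_hint(h)) return FetchOutcome::Direct;   // may involve bootstrap PoW
    }
    for (const auto& uri : fallback_uris) {
        if (try_uri(uri)) return FetchOutcome::Direct;  // control:// or https://
    }
    return direct_only ? FetchOutcome::Failed : FetchOutcome::DaemonFallback;
}
```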
By centralizing TTL logic inside the node and manifest pipeline, EphemeralNet guarantees that data, metadata, and discovery hints all expire coherently—even if peers try to replay stale manifests or a node restarts mid-sweep.