Depending on how you think about things, content distribution is the second problem that must be solved to get a computing environment up and running, immediately after the physical distribution of the computing infrastructure itself... assuming you're someone who considers the OS to be content.
FIDO is one example of how we're starting to get frameworks that address one foundational piece of how we distribute content at that layer: trust establishment.
I'm time constrained right now, so in the interest of getting things down this is going to be a fast and skeletal word sketch without much narrative.
Distributing content in unreliable environments using desired-state approaches is useful for:

- dealing with networks that are unreliable, low bandwidth, or highly contended with QoS issues
- minimizing content transfer
- distributing load on servers, since content blobs can be hosted on CDNs or sourced via p2p
- expressing content locality requirements in terms of access latency and availability
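To make the "minimizing content transfer" point concrete, here's a minimal sketch of the content-addressed check that makes it possible: if blobs are identified by digest, a node only fetches what it doesn't already hold. All function and variable names here are illustrative, not a real API.

```python
import hashlib

def blob_digest(data: bytes) -> str:
    # Content-address a blob by its SHA-256 digest.
    return hashlib.sha256(data).hexdigest()

def needs_transfer(desired_digest: str, local_store: dict) -> bool:
    # Transfer is only needed when no local blob matches the desired digest.
    return desired_digest not in local_store

store = {}
blob = b"corp/software-package contents"
digest = blob_digest(blob)
print(needs_transfer(digest, store))   # first sync: True, transfer needed
store[digest] = blob
print(needs_transfer(digest, store))   # second sync: False, nothing to do
```

On an unreliable or low-bandwidth network, this check is what lets repeated desired-state reconciliations converge without re-sending content that already arrived.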
Content, once sufficiently local and available, can be served up via a desired-state expression of how it needs to be consumed. For example, an installer can record a desired state of "corp/software-package:latest available with xxx latency and tolerant of zero failures within the local env, located via discovery API xxx, served via protocol set {HTTP, HTTPS, FTP, ...}" or some similar statement of prerequisite expectations the environment must meet for it to function.
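One way the installer's prerequisite statement above could be captured as a declarative record is sketched below. The field names are assumptions of mine, and the placeholder values stand in for the "xxx" specifics the text deliberately leaves open.

```python
from dataclasses import dataclass

@dataclass
class ContentRequirement:
    # Hypothetical schema for a desired-state content prerequisite.
    ref: str                 # content identifier, e.g. "corp/software-package:latest"
    max_latency_ms: int      # locality expressed as access latency
    tolerated_failures: int  # availability expressed as failure tolerance
    discovery_api: str       # how the content is located
    protocols: frozenset     # acceptable transfer protocols

req = ContentRequirement(
    ref="corp/software-package:latest",
    max_latency_ms=50,                 # placeholder; the original says "xxx latency"
    tolerated_failures=0,
    discovery_api="local-discovery",   # placeholder; the original says "API xxx"
    protocols=frozenset({"HTTP", "HTTPS", "FTP"}),
)
print(req.ref)
```

The point of a record like this is that it is data, not procedure: the environment can inspect it, check it against what's locally available, and act only on the gap.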
Expressing things this way lets us, as a system, derive what else must be present for the installer to perform its role. This could be handled via constraint-solver patterns, among other approaches.
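The derivation step could be sketched as a toy capability cover, where each component advertises what it provides and the system picks enough components to satisfy the declared requirement. The component names and capability strings are invented for illustration; a real system might use a proper constraint solver here instead of this greedy pass.

```python
# Each hypothetical component advertises the capabilities it provides.
COMPONENTS = {
    "local-cache":     {"latency<=50ms"},
    "discovery-agent": {"discovery-api"},
    "http-fetcher":    {"protocol:HTTP", "protocol:HTTPS"},
}

def derive_components(needed_capabilities: set) -> set:
    """Greedy cover: choose every component that supplies a still-needed capability."""
    chosen = set()
    remaining = set(needed_capabilities)
    for name, provides in COMPONENTS.items():
        if provides & remaining:
            chosen.add(name)
            remaining -= provides
    if remaining:
        raise ValueError(f"unsatisfiable requirements: {remaining}")
    return chosen

plan = derive_components({"latency<=50ms", "discovery-api", "protocol:HTTPS"})
print(sorted(plan))  # ['discovery-agent', 'http-fetcher', 'local-cache']
```

Even this toy version shows the shape of the idea: the installer states only its prerequisites, and the set of supporting components falls out of the environment's capability model rather than being hardcoded.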