On Thu, Sep 01, 2016 at 05:09:32PM -0000, Oliver Grawert wrote:
> dd'ing a sparse 4GB file still writes 4GB of zeros to my SD which takes
> about 20min (what is the magic to make it not do that that you mention
> above ?)
The conv=sparse option to dd.
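For concreteness, a small illustration (filenames are made up) of what conv=sparse does with GNU dd: output blocks that are entirely zero are seeked over rather than written.

```shell
# Illustration (GNU dd; filenames are made up): conv=sparse seeks over
# all-zero output blocks instead of writing them.
dd if=/dev/zero of=full.img bs=1M count=16 status=none   # 16 MiB of zeros
printf 'data' | dd of=full.img conv=notrunc status=none  # a little real data
dd if=full.img of=plain.img bs=1M status=none            # writes every block
dd if=full.img of=sparse.img bs=1M conv=sparse status=none
du -k plain.img sparse.img   # sparse.img should occupy far fewer blocks
```

Note that when the target is an SD card rather than a regular file, the skipped regions keep whatever bytes were previously on the card, so this is only safe onto a zeroed or fresh medium.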
> while a 150MB file (as the pi2 image would likely be if we'd just map to
> the content size) takes below 1min ...
The rootfs image we create shows here as 289M (du -sh
workdir/.images/root.img). Is this so much smaller because of filesystem
overhead when creating a larger image?
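As an aside, a quick way to see how sparse an image actually is (and so how much of its nominal size is real content plus filesystem overhead) is to compare the apparent size with the allocated size; the filename here is made up:

```shell
# Illustration (GNU coreutils): a sparse 4 GiB file has a large apparent
# size but almost nothing allocated on disk.
truncate -s 4G demo.img
ls -lh demo.img                 # apparent size: 4.0G
du -h demo.img                  # allocated size: ~0
du -h --apparent-size demo.img  # same number ls shows
```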
> also when we release images we usually manually compress them using xz
> which takes a horrid amount of time for just compressing zeros ... (i
> understand cdimage will take care for this in future images for us
> though)
xz does create sparse files by default, but it's possible that it doesn't
support /reading/ sparsely on input. If so, that's also a good argument for
creating the image smaller and expanding it on first boot, though if so the
impact is only on image building, not on the user experience.
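A sketch of the expand-on-first-boot idea, demonstrated on a file image rather than a real SD card (assumes e2fsprogs; on real hardware you would first grow the partition, e.g. with growpart, and then run resize2fs against the partition device):

```shell
# Sketch (assumes e2fsprogs): build a small ext4 image, then grow the
# filesystem into newly available space, as a first-boot script might.
truncate -s 64M root-demo.img
mkfs.ext4 -q -F root-demo.img
truncate -s 128M root-demo.img   # stand-in for "the SD card is bigger"
resize2fs root-demo.img          # grows the fs to fill the file
```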
> additionally you indeed force the user to have the amount of diskspace
> available when he wants to uncompress before writing the image.
Certainly not. xz uses sparse unpacking by default.
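That's easy to verify: when xz -d writes to a regular file, runs of zeros in the decompressed stream come back out as holes (the filename here is made up):

```shell
# Illustration: xz recreates holes when decompressing to a regular file.
truncate -s 16M holey.img   # sparse input, all zeros
xz -f holey.img             # compresses to a tiny holey.img.xz
xz -d holey.img.xz          # decompressed output is sparse again
ls -lh holey.img            # apparent size: 16M
du -h holey.img             # allocated: ~0
```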