And then you have to put a filesystem on it, which has its own metadata – file attributes, folder/file names, and so on. If you use NTFS, it reserves 12.5% of the volume for the MFT zone by default, so now you’re down to 11.8 GiB. 😛
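To make the arithmetic concrete, here’s a quick sketch. The 13.5 GiB starting figure is just back-computed from the quoted 11.8 GiB result (11.8 / 0.875); the actual drive size isn’t stated here.

```python
# Rough sketch of the capacity math above. The 13.5 GiB starting
# figure is an assumption, back-computed from the 11.8 GiB result.
GIB = 2**30

usable_gib = 13.5   # formatted capacity before filesystem overhead (assumed)
mft_zone = 0.125    # NTFS reserves 12.5% of the volume for the MFT by default

after_ntfs = usable_gib * (1 - mft_zone)
print(f"{after_ntfs:.1f} GiB left after the MFT zone")  # -> 11.8 GiB
```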
As an amusing side note, I once came across a joke compression program that could compress any data down to zero bytes. It did this by creating directories filled with zero-sized files whose filenames contained the actual data of the file in question.
If you right-clicked on the folder and asked the OS how big it was, it’d report 0 bytes. But of course all that data still had to be stored somewhere, in the metadata of the filesystem.
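I don’t have the original program, but the trick is easy to sketch. Here’s a toy Python version (filenames, chunk size, and the hex encoding are my choices, not the original’s): it writes the data into the names of zero-byte files, so any tool that sums file sizes sees nothing.

```python
# Toy reimplementation of the joke "compressor" described above.
# The data lives entirely in filenames of zero-byte files, so naive
# size tools report the directory as 0 bytes.
import os

CHUNK = 100  # bytes of data per filename; hex doubles the length,
             # and most filesystems cap filenames at 255 bytes

def compress(data: bytes, outdir: str) -> None:
    os.makedirs(outdir, exist_ok=True)
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        # Hex-encode so the name contains no '/' or NUL bytes, and
        # prefix a sequence number so sorting restores the order.
        name = f"{i // CHUNK:08d}-{chunk.hex()}"
        open(os.path.join(outdir, name), "w").close()  # zero-byte file

def decompress(outdir: str) -> bytes:
    parts = []
    for name in sorted(os.listdir(outdir)):
        parts.append(bytes.fromhex(name.split("-", 1)[1]))
    return b"".join(parts)

if __name__ == "__main__":
    compress(b"hello, zero-byte compression!", "compressed.d")
    print(decompress("compressed.d"))  # b'hello, zero-byte compression!'
```

Of course, every byte of those names still occupies space in the filesystem’s directory entries – it’s just space that doesn’t get counted.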
That’s part of why I use du on Linux rather than ls -l to figure out directory usage (and df for whole partitions). du reports the blocks actually allocated on disk, whereas ls -l only shows each file’s apparent size and ignores metadata like the directory entries that hold the list of files.
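You can see the difference from Python with os.stat: st_size is the apparent size (what ls -l shows), while st_blocks counts the 512-byte blocks actually allocated (roughly what du adds up). A minimal sketch, assuming a Unix system where st_blocks is available:

```python
# Illustrates the du vs. ls -l distinction: apparent size (st_size)
# vs. blocks actually allocated on disk (st_blocks * 512).
import os
import sys

def sizes(root: str) -> tuple[int, int]:
    apparent = allocated = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        # Count the directory inodes themselves too -- that's the
        # metadata ls -l never sums up.
        allocated += os.stat(dirpath).st_blocks * 512
        for name in filenames:
            st = os.lstat(os.path.join(dirpath, name))
            apparent += st.st_size
            allocated += st.st_blocks * 512  # POSIX st_blocks is in 512-byte units
    return apparent, allocated

if __name__ == "__main__":
    a, d = sizes(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(f"apparent (ls -l style): {a} bytes")
    print(f"on disk  (du style):    {d} bytes")
```

Run it on a directory full of the zero-byte files from the joke compressor and the apparent total stays at 0, while the allocated total keeps growing as the directory entries fill up.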