Wednesday, July 24, 2019

filesystems - Why DD output image file is larger than the source partition/runs out of space while copying partition to a file




The dd output image file is larger than the source partition, and dd runs out of space on the target partition (where the image is created) despite the target being larger than the source partition.



I am trying to copy a partition to a file on another partition on the same disk. The target partition is slightly larger than the input partition. Both are ext3 partitions.



Running from an openSUSE Rescue live CD. YaST shows the input partition (sdb1) is 62.5 GiB and the output one, sdb2, is 62.85 GiB.



Thunar shows the input sdb1 as 65.9 GB and the output sdb2 as 66.2 GB, while the dd image file is also 66.2 GB, so it obviously maxes out sdb2.



Here is the console:




(sdb1 was unmounted; I tried dd a few times)



linux:# dd if=/dev/sdb1 of=RR.image bs=4096

dd: error writing ‘RR.image’: No space left on device
16156459+0 records in
16156458+0 records out
66176851968 bytes (66 GB) copied, 2648.89 s, 25.0 MB/s
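(As a sanity check on dd's numbers, shell arithmetic I added myself: 16156458 complete 4096-byte records account exactly for the byte count dd reported.)

echo $((16156458 * 4096))    # 66176851968 bytes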






Additional info, as requested:



And again: I am seeing a difference between the size of the source partition sdb1 and the dd image file RR.image created from it. That file resides on sdb2.






There is still something unclear here: I am RUNNING DD AS ROOT, so that reserved space is available to write into, correct? The target sdb2 is 62.85 GiB, while the total bytes for the image, as you said, are about 61.63 GiB. Here is also the output of the df and POSIXLY_CORRECT=1 df commands:




The system now is SystemRescueCd.



root@sysresccd /root % df
Filesystem     1K-blocks     Used  Available Use% Mounted on
/dev/sdb1       64376668  7086884   56241208  12% /media/Data1
/dev/sdb2       64742212 64742212          0 100% /media/Data2
/dev/sdb3        5236728  4785720     451008  92% /usr/local


root@sysresccd /root % POSIXLY_CORRECT=1 df /dev/sdb1
Filesystem     512B-blocks      Used  Available Use% Mounted on
/dev/sdb1        128753336  14173768  112482416  12% /media/Data1

root@sysresccd /root % POSIXLY_CORRECT=1 df /dev/sdb2
Filesystem     512B-blocks      Used  Available Use% Mounted on
/dev/sdb2        129484424 129484424          0 100% /media/Data2



The numbers are exactly the same as from plain df if we divide them by 2; 1024 B / 512 B = 2 is the divisor.
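For example, taking sdb1's total from the POSIXLY_CORRECT output (shell arithmetic added here just to double-check):

echo $((128753336 / 2))    # 64376668, matching sdb1's 1K-blocks total from plain df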




  1. sdb1 is smaller than sdb2. The 100 percent usage on sdb2 now is because of the DD image file that filled the partition up. It has to be the only file on it now.


  2. The image file itself is 66,176,851,968 bytes, as both dd (at run time) and Thunar report. Divided by 1024 we get 64,625,832 1K-blocks, correct? So it is still smaller than what df reports for sdb2, by 116,380K, and it is LARGER THAN sdb1 (THE SOURCE), yet it maxes out the partition sdb2 (quick check below).
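To double-check that arithmetic (again, shell arithmetic added for clarity, not output of the original tools):

echo $((66176851968 / 1024))     # 64625832 1K-blocks, the image size
echo $((64742212 - 64625832))    # 116380 1K-blocks short of sdb2's total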




The question is: what is taking up that space on sdb2?







But the most important and interesting question is:



Why is the target file larger than the source partition dd created it from? To me that means I can't write it back.



sdb1 (64376668K) < RR.image (64625832K)



And



sdb1 (64376668 1K-blocks) < RR.image (64625832 1K-blocks) < sdb2 (64742212 1K-blocks)




(I hope things were calculated right…)



Now I checked the blocks that are reserved for ROOT. I found this command to execute:



root@sysresccd /root % dumpe2fs -h /dev/sdb1 2> /dev/null | awk -F ':' '{ if($1 == "Reserved block count") { rescnt=$2 } } { if($1 == "Block count") { blkcnt=$2 } } END { print "Reserved blocks: "(rescnt/blkcnt)*100"%" }'

Reserved blocks: 1.6%

root@sysresccd /root % dumpe2fs -h /dev/sdb2 2> /dev/null | awk -F ':' '{ if($1 == "Reserved block count") { rescnt=$2 } } { if($1 == "Block count") { blkcnt=$2 } } END { print "Reserved blocks: "(rescnt/blkcnt)*100"%" }'


Reserved blocks: 1.59999%


So the percentage reserved for root is also the same on both partitions, in case that matters.
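If that reserve turns out to matter, my understanding is that it could be lowered or cleared with tune2fs (not something I have run yet, just noting it), where -m sets the reserved-blocks percentage:

tune2fs -m 0 /dev/sdb2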






Here is the output of gdisk:




root@sysresccd /root % gdisk -l /dev/sdb

GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present



***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory.
***************************************************************

Disk /dev/sdb: 312581808 sectors, 149.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): DCF8AFC4-11CA-46C5-AB7A-4818336EBCA3
Partition table holds up to 128 entries

First usable sector is 34, last usable sector is 312581774
Partitions will be aligned on 2048-sector boundaries
Total free space is 7789 sectors (3.8 MiB)

Number  Start (sector)  End (sector)  Size      Code  Name
   1            2048      131074047   62.5 GiB  8300  Linux filesystem
   2       131074048      262889471   62.9 GiB  8300  Linux filesystem
   3       302086144      312580095    5.0 GiB  0700  Microsoft basic data
   5       262891520      293771263   14.7 GiB  8300  Linux filesystem
   6       293773312      302086143    4.0 GiB  8200  Linux swap



So what is the real size of sdb1 then?



Isn't sdb2 (N2) larger than sdb1 (N1)? So WHY does the image file GROW too large for sdb2 (N2)? If I turn off the space reserved for root on sdb2, will it fit there then?


Answer



Every filesystem needs some space for metadata. Additionally, the ext family reserves some space for the root user; it's 5% by default.



Example




In my Kubuntu I created a (sparse) file of 1GiB:



truncate -s 1G myfile


and made an ext3 filesystem within it. The command was simply



mkfs.ext3 myfile



This instantly allocated about 49 MiB (~5% in this case) to myfile. I could see that because the file was sparse and initially reported 0 B of usage on my real disk, then it grew. I assume this is where the metadata lives.
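A rough way to see this (reconstructed rather than copied from my terminal) is to compare the file's apparent size with its actual disk usage using du:

du -h --apparent-size myfile    # 1.0G, the nominal size
du -h myfile                    # roughly 49M actually allocated after mkfs.ext3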



I mounted the filesystem; df -h reported 976MiB of total space, but only 925MiB available. This means another ~5% wasn't available to me.
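Roughly, the steps were as follows (reconstructed, with /mnt/test as an arbitrary scratch mountpoint):

mkdir -p /mnt/test
mount -o loop myfile /mnt/test
df -h /mnt/test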



Then I filled up this space (after cd to the mountpoint) with



dd if=/dev/urandom of=placeholder


As a regular user I was able to take only 925 MiB. The reported "disk" usage was then 100%. However, doing the same as root, I could write 976 MiB to the file. Once the file grew past 925 MiB, the usage stayed at 100%.




Conclusion



Comparing the sizes of your partitions is wrong in this case; so is comparing the sizes of your filesystems. You should have checked the available space on the target filesystem (e.g. with df) and compared it to the size of the source partition.
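For instance, a rough way to do that check (a sketch, assuming the target filesystem is mounted at /media/Data2) could be:

blockdev --getsize64 /dev/sdb1        # exact size of the source partition in bytes
df -B1 --output=avail /media/Data2    # bytes actually available on the target filesystem

The first number has to fit within the second, or dd will run out of space exactly as it did here.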






EDIT:



To make it clear: your 66176851968 bytes are about 61.63 GiB. This is not larger than the source partition, which is 62.5 GiB. The source partition was not fully read when the target filesystem got full.




In case you're not familiar with GB/GiB distinction, read man 7 units.
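For example, the same byte count expressed in both units (computed with bc; the figure is the one from your dd run):

echo "scale=2; 66176851968 / 10^9" | bc    # 66.17, decimal GB (the unit dd reports)
echo "scale=2; 66176851968 / 2^30" | bc    # 61.63, binary GiB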






EDIT 2



Now we have all the actual numbers. Let's stick to units of 512 B; it's a common sector size.





  • Your sdb1 partition occupies 131074048-2048=131072000 units on the disk. Let's call this P1. This is from gdisk output.

  • Your sdb2 partition occupies 262889472-131074048=131815424 units on the disk. Let it be P2. This is also from gdisk output.

  • Your filesystem inside sdb1 can store files up to 128753336 units total. Let's call this number F1. This is from df output.

  • Your filesystem inside sdb2 can store up to 129484424 units. Let it be F2. This is also from df output.



The difference between P1 and F1, as well as the difference between P2 and F2, can be explained if you know there must be room for metadata. This is mentioned earlier in this answer.



Your dd tried to copy the whole sdb1 partition, i.e. P1 of data, into a file that takes space provided by the filesystem inside sdb2, i.e. F2 of available space.




P1 > F2: this is the final answer. Your image file didn't grow larger than it should. It looks to me like you expected its size to be F1. In fact, the whole image would have a size of P1 units.
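Spelled out in bytes (multiplying the 512 B units; same numbers as above):

echo $(( (131074048 - 2048) * 512 ))    # P1 = 67108864000 bytes
echo $(( 129484424 * 512 ))             # F2 = 66296025088 bytes

P1 exceeds F2 by roughly 0.8 GB, so the full image could never fit, even with the root reserve removed.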



P2 and F1 are irrelevant in this context.

