I wanted to check the consistency of the data partition on one of my servers. It is 6.5TB and formatted with XFS, so I ran:
#xfs_check /dev/sdb1
And I got:
xfs_check: out of memory
After some searching, it turns out that xfs_check needs a lot of memory on a large file system: more than 6GB in this case, and you need a 64-bit system to be able to address that much memory.
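If you are not sure what your system is, a quick check with standard Linux tools (nothing XFS-specific) tells you the architecture and available memory:
#uname -m
#free -g
uname -m prints x86_64 on a 64-bit system, and free -g shows memory in gigabytes.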
My system is 32-bit with only 4GB of RAM, so I would probably not be able to run xfs_check on it, but there is another way:
#xfs_repair -n /dev/sdb1
This tool tries to repair an XFS filesystem, but with the -n switch no changes are written to the file system, so the effect is much the same as a check. It still uses a lot of memory if you have many files/inodes on the file system, but 3GB on a 32-bit system should be sufficient.
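If memory is still a problem, recent versions of xfs_repair also have a -m switch to cap the amount of memory it uses, in megabytes; check your man page first, as older versions may not support it:
#xfs_repair -n -m 2048 /dev/sdb1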
Of course, if xfs_repair finds a problem, you can run it again without the -n switch to actually repair the filesystem.
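Keep in mind that xfs_repair refuses to run on a mounted filesystem, so an actual repair run would look something like this (using my device; yours will differ):
#umount /dev/sdb1
#xfs_repair /dev/sdb1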
3 comments:
What is the problem with ext4 on a big RAID?
I am using a 32-bit PC as my fileserver and now I get this out of memory message after I increased the size.
My idea was to move to ext4 to keep my data a little bit safer.
There's no problem with ext4 and partition size; I'll remove that from the post.
I don't remember exactly why I couldn't use ext4 at the time.
I've been busting my head for an hour with this. Great advice, thank you!