[Gluster-users] glusterfs missing files on ls

Vijay Bellur vbellur at redhat.com
Sun Jun 2 07:15:15 UTC 2013


On 06/02/2013 11:35 AM, Stefano Sinigardi wrote:
> Dear Vijay,
> the filesystem is ext4, on a GPT structured disk, formatted by Ubuntu 12.10.

A combination of ext4 on certain kernels and glusterfs has had its share 
of problems (https://bugzilla.redhat.com/show_bug.cgi?id=838784) for 
readdir workloads. I am not sure if the Ubuntu 12.10 kernel is affected 
by this bug as well. GlusterFS 3.3.2 has an improvement which will 
address this problem seen with ext4.
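
If it helps to confirm what is actually running on the brick nodes, a 
quick check (just plain shell; the ext4 readdir change also reached some 
distribution kernels via backports, so the kernel version is only a 
hint) would be something like:

    # kernel running on this brick node
    uname -r

    # glusterfs version installed on this node
    glusterfs --version

    # confirm which bricks sit on ext4 (output depends on your layout)
    mount | grep ext4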

> The rebalance I did was with the command
> gluster volume rebalance data start
> but in the log it got stuck on a file that I cannot remember (it was a
> small working .cpp file; the log kept saying it was going to be moved to
> a much more heavily occupied replica, and it repeated this message until
> the log grew to a few GB).
> Then I stopped it and restarted with
> gluster volume rebalance data start force
> in order to get around this problem of files going to bricks that were
> already highly occupied.
> Because I was almost stuck, and remembering that a rebalance had once
> miraculously solved another problem of mine, I retried it, but it got
> stuck on a .dropbox-cache folder. That is not a very important folder,
> so I thought I could remove it. I launched a script to find all the
> files by looking at all the bricks but to remove them through the fuse
> mountpoint. I don't know what went wrong (the script is very simple;
> the problem may have been that it was 4 am) but the fact is that files
> got removed by calling rm at the brick mountpoints, not the fuse one.
> So I think that now I'm in an even worse situation than before. I have
> just stopped working on it and asked my colleagues for some time (at
> least the data is still there, on the bricks, just spread across all of
> them) in order to think carefully about how to proceed (maybe
> destroying the volume and rebuilding it, but that would be very
> time-consuming as I don't have enough free space elsewhere to save
> everything, and it's also very difficult to copy from the fuse
> mountpoint as it's not listing all the files).
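
For a cleanup like the .dropbox-cache one, the removals need to go 
through the FUSE mountpoint so that glusterfs keeps all bricks and their 
metadata consistent; deleting directly on the bricks leaves stale 
entries behind. A minimal sketch of that kind of cleanup, assuming the 
volume is mounted at /mnt/data (the mountpoint and folder name here are 
only examples), would be:

    # locate the stray cache folders via the client mount, not the bricks,
    # and remove them through the same mount so all replicas stay in sync;
    # -prune stops find from descending into the folders it is about to delete
    find /mnt/data -type d -name '.dropbox-cache' -prune -exec rm -rf {} +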

Were only files removed from the brick mountpoints or did directories 
get removed too?  Would it be possible for you to move to 3.3.2qa3 and 
check if ls lists all the files present in the bricks? Note that qa3 is 
not yet GA and might see a few fixes before it becomes so.
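
To get a rough picture of what the clients can and cannot see, one way 
(only a sketch; the brick path and mountpoint are examples, and the 
internal .glusterfs directory on each brick should be excluded) is to 
compare the relative file lists from a brick with what the FUSE mount 
shows:

    # on a brick server: list data files, skipping the internal .glusterfs tree
    (cd /export/brick1 && find . -path ./.glusterfs -prune -o -type f -print | sort) > /tmp/brick.list

    # on a client: list what the mounted volume actually shows
    (cd /mnt/data && find . -type f | sort) > /tmp/fuse.list

    # files present on the brick but missing from the client view
    comm -23 /tmp/brick.list /tmp/fuse.list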

Regards,
Vijay




