[Gluster-users] glusterfs missing files on ls
Stefano Sinigardi
stefano.sinigardi at gmail.com
Mon Jun 3 09:24:18 UTC 2013
So, just to recap: is it ok to clone the repo from GitHub, check out tag 3.3.2qa3,
stop glusterd, then configure, make and make install?
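Something like this is what I have in mind (the repo URL, tag name and service
name below are my guesses, so please correct me if they are wrong):

  # stop the running daemon first (I think the Ubuntu PPA calls it glusterfs-server)
  sudo service glusterfs-server stop

  # build and install from the github tag
  git clone https://github.com/gluster/glusterfs.git
  cd glusterfs
  git checkout v3.3.2qa3
  ./autogen.sh
  ./configure
  make
  sudo make install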
Regards,
Stefano
On Mon, Jun 3, 2013 at 5:54 PM, Vijay Bellur <vbellur at redhat.com> wrote:
> On 06/02/2013 01:42 PM, Stefano Sinigardi wrote:
>
>> Also directories got removed. I did a really bad job in that script:
>> the sed was wrong, so the brick path was not stripped and replaced with
>> the fuse mountpoint...
>> Yes, I can move to 3.3.2qa3. At the moment I have gluster installed from
>> the semiosis repository. What is the best way for me to move to this
>> other version (I think I have to build it from source)?
>>
>
> I think qa builds are not part of semiosis' PPA repository. Copying him so
> that he can chime in if it is otherwise.
>
> If the builds are not available, you will need to build it from source.
>
> Regards,
> Vijay
>
>> Thanks and best regards,
>> Stefano
>>
>>
>> On Sun, Jun 2, 2013 at 4:15 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>>
>> On 06/02/2013 11:35 AM, Stefano Sinigardi wrote:
>>
>> Dear Vijay,
>> the filesystem is ext4, on a GPT structured disk, formatted by
>> Ubuntu 12.10.
>>
>>
>> A combination of ext4 on certain kernels and glusterfs has had its
>> share of problems
>> (https://bugzilla.redhat.com/show_bug.cgi?id=838784) for readdir
>> workloads. I am not sure if the Ubuntu 12.10 kernel is affected by
>> this bug as well. GlusterFS 3.3.2 has an improvement which will
>> address this problem seen with ext4.
>>
>>
>> The rebalance I did was with the command
>> gluster volume rebalance data start
>> but in the log it got stuck on a file that I cannot remember (it was a
>> small working .cpp file; the log said it was going to be moved to a much
>> more occupied replica, and it repeated this message until the log grew
>> to a few GB).
>> Then I stopped it and restarted with
>> gluster volume rebalance data start force
>> in order to get rid of these problems about files going to bricks that
>> were already highly occupied.
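>> In hindsight I should probably have watched its progress with the status
>> command rather than by tailing the log (the volume is called data):
>>
>> gluster volume rebalance data status
>>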
>> Because I was almost stuck, and remembering that a rebalance had
>> miraculously solved another problem I had, I retried it, but it got
>> stuck on a .dropbox-cache folder. That is not a very important folder,
>> so I thought I could remove it. I launched a script to find all the
>> files by looking at all the bricks, but remove them through the fuse
>> mountpoint. I don't know what went wrong (the script is very simple;
>> maybe the problem was that it was 4 am) but the fact is that files got
>> removed by calling rm at the brick mountpoints, not the fuse one. So I
>> think that now I'm in an even worse situation than before. I just
>> stopped working on it, asking for some time from my colleagues (at
>> least the data is still there, on the bricks, just spread across all of
>> them) in order to think carefully about how to proceed (maybe
>> destroying the volume and rebuilding it, but that will be very time
>> consuming as I don't have enough free space elsewhere to save
>> everything, and it's also very difficult to save from the fuse
>> mountpoint as it's not listing all the files).
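>>
>> What I meant to do was more or less the following (the brick and mount
>> paths here are just placeholders for my real ones):
>>
>> #!/bin/bash
>> # sketch only: BRICK and MOUNT are placeholders
>> BRICK=/export/brick1      # brick directory on this server
>> MOUNT=/mnt/data           # fuse mountpoint of the volume
>>
>> # find the unwanted cache folders on the brick, then strip the brick
>> # prefix and remove them through the fuse mountpoint instead
>> find "$BRICK" -type d -name .dropbox-cache -print | while IFS= read -r d; do
>>     rm -rf -- "$MOUNT${d#"$BRICK"}"
>> done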
>>
>>
>> Were only files removed from the brick mountpoints or did
>> directories get removed too? Would it be possible for you to move
>> to 3.3.2qa3 and check if ls does list all files present in the
>> bricks? Note that qa3 is not yet GA and might see a few fixes
>> before it becomes so.
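>>
>> A quick way to compare, assuming one of your bricks is at /export/brick1
>> (a placeholder path) and the volume is mounted at /mnt/data, skipping
>> the internal .glusterfs directory on the brick:
>>
>> find /export/brick1 -name .glusterfs -prune -o -type f -print | wc -l
>> find /mnt/data -type f -print | wc -l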
>>
>> Regards,
>> Vijay
>>
>>
>>
>>
>