[Gluster-users] glusterfs missing files on ls
Stefano Sinigardi
stefano.sinigardi at gmail.com
Thu May 30 08:33:35 UTC 2013
Dear Davide,
Thanks.
I have 6 bricks per node, across two nodes, in the replicated volume,
and they should be correctly paired as you were saying.
I never access the bricks directly, but always go through the volume's
FUSE mount point.
Maybe I was not clear enough, but the problem is exactly that: if I
access folders through the FUSE mount point I don't find all the files
that should be there, even though I do find them when I browse each
brick manually. Now I've launched a
find <gluster-mount> -noleaf -print0 | xargs --null stat >/dev/null 2>
/var/log/glusterfs/<gluster-mount>-selfheal.log
to see if it works better than the plain find . > /dev/null, which was
not working, but right now it's eating almost all of the machine's
16 GB of RAM and I fear that it will start swapping...
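One thing I might try (just a sketch, and I'm not sure it helps if the
memory is held by the gluster client itself) is splitting the crawl per
top-level directory, so that each stat run stays bounded:

for d in <gluster-mount>/*/ ; do
    find "$d" -noleaf -print0 | xargs --null --no-run-if-empty stat > /dev/null
done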
Any suggestion?
Thanks a lot
Stefano
On Thu, May 30, 2013 at 5:07 PM, Davide Poccecai <poccecai at gmail.com> wrote:
> Hi Stefano,
> I'm not sure what is causing your problem, and the variables in your
> configuration can lead to various scenarios.
> In any case, according to what you wrote, I will assume that the
> replicated-distributed volume is contained on 2 machines only, spanning 4
> bricks.
> You might have done this already, but if you look at page 15 of the Gluster
> Administration guide (I have the guide for 3.3.0, but I think it's the
> same), you will read in the "Note" that you need to be careful about the
> order in which you specify the bricks across the 2 servers.
>
> In particular, when creating the volume, you need to specify the first brick
> of each server, and then the second of each server and so on. In your case,
> I think it should be something like:
>
> gluster volume create volume_name replica 2 transport tcp server1:/exp1
> server2:/exp1 server1:/exp2 server2:/exp2
>
> (replacing the transport type according to your network characteristics, if
> necessary), so that exp1 on server1 will be replicated to exp1 on server2,
> and exp2 on server1 will be replicated to exp2 on server2, and all 4
> bricks together will form a replicated-distributed volume.
> As I said, you might have followed these steps correctly already, but better
> to double-check.
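> For instance (assuming the volume is called volume_name as above), the
> output of
>
> gluster volume info volume_name
>
> lists the bricks in the order they were given, and with replica 2 each
> consecutive pair (Brick1/Brick2, Brick3/Brick4) forms one replica set.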
> About the mounting, the manuals claim better performance with the native
> gluster protocol, but I'm not sure this is the case if you have a huge
> amount of small files.
> I experienced big performance problems with a replicated volume that had
> many small files, and I had to give up entirely on the glusterfs technology
> for that particular project. To be fair, because of network topology
> constraints, my two nodes were using a standard 1Gbit network link, shared
> with other network traffic, so the performance issues might be partially
> caused by the "slow" link, but googling around, I'm not the only one facing
> this kind of problem.
>
> About your missing files, I have one last (possibly silly)
> question/suggestion: on the servers themselves, did you remount the glusterfs
> volume using either the native or nfs protocol on another mount point?
> I have just a replicated volume, but I found that if you don't do that,
> and above all if you don't access/create the files on the
> glusterfs-mounted filesystem but use the original fs mount point (in my
> case an xfs), the files are not replicated at all.
> To be clear:
> You have server1:/brick1 replicated to server2:/brick2 (let's forget about
> distribution for now).
> server1:/brick1 is an xfs or ext4 filesystem mounted on a mount point on your
> server.
> When you create the volume, you must mount the volume via nfs or glusterfs
> protocol on another mount point on the server, for example
> server1:/glustermountpoint
> When you create files, you must do it on server1:/glustermountpoint, and not
> on server1:/brick1, otherwise the files are not replicated to server2 and
> they are stored on server1 only.
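> For example (a minimal sketch, assuming your volume is called "storage"
> as in your first mail):
>
> mkdir -p /glustermountpoint
> mount -t glusterfs server1:/storage /glustermountpoint
>
> and then always create and read files under /glustermountpoint.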
> I don't think that the documentation is very clear on this (I don't think
> that the glusterfs documentation is particularly clear in general) so
> double-check that as well.
> Hope this helps...
> Regards,
>
> Davide
>
>
> On 30 May 2013 07:24, Stefano Sinigardi <stefano.sinigardi at gmail.com> wrote:
>>
>> Dear all,
>> this is my first message to this mailing list and I also just
>> subscribed to it, so please forgive my inexperience. I hope
>> this is also the correct place to ask this question. I'm not a system
>> administrator, even if I'm asked to act as one (PhD student here). I
>> like doing it, but sometimes I lack the required knowledge. Anyway,
>> here's my problem which, as always, needs to be solved by me as soon
>> as possible.
>> I installed gluster 3.3.1 on Ubuntu 12.10 (from the repository) on 4
>> machines, all connected together via LAN but two also have a special
>> Infiniband link between them. On two of them I created a "scratch"
>> volume (distributed, 8 TB tot), on the other two I created a "storage"
>> volume (distributed + replicated, 12 TB tot but because of replica
>> just 6 TB available to users). All of the machines see both volumes,
>> and for now, to use them, you have to ssh into one of those (in the
>> future they will be exported: do you suggest nfs or gluster as the
>> mount type?).
>> The distributed and _not_ replicated filesystem seems to work (at
>> least for now) very well and is perfectly accessible from all
>> machines, even though it is built on the two connected by Infiniband.
>> The other replicated _and_ distributed filesystem, on the other hand,
>> has some problems. In fact, from all nodes, it's missing some files
>> when asked to list files in a folder with commands like 'ls'. This
>> happened from one day to the next, and I'm sure that three days
>> ago it was working perfectly. The configuration didn't change (one
>> machine got rebooted, but even a global reboot didn't fix anything).
>> I tried to do a volume rebalance to see if it was going to do anything
>> (it magically fixed a problem at the very beginning of my gluster
>> adventure), but it never completed: it grew to a rebalance count of
>> hundreds of millions of files, but there should not be so many files
>> in this volume; we're speaking of orders of magnitude less. I tried to
>> list single bricks and I found that files are still present on them,
>> and each file is on two bricks (because of the replica) and perfectly
>> readable if accessed directly, so it seems that it's not a
>> hardware problem on a particular brick.
>> As another strategy, I found on the internet that a "find . >
>> /dev/null" launched as root on the root folder of the glusterfs should
>> trigger a re-hash of the files, so maybe that could help me.
>> Unfortunately it hangs almost immediately in a folder that, as said,
>> is missing some files when listed from the global filesystem.
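>> I haven't tried the built-in heal command yet; if I read the 3.3 docs
>> correctly, something like
>>
>> gluster volume heal storage full
>>
>> should trigger a full self-heal, but I'm not sure it applies here.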
>> I tried to read the logs, but nothing strange seems to be happening
>> (btw: analysing the logs I found out that the rebalance also got stuck
>> in one of these folders and just started counting millions and millions
>> of "nonexistent" files (not even on the single bricks; I'm sure that
>> those folders are not so big), so that's why it reported hundreds of
>> millions of files not requiring rebalance in the status)
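>> (for reference, I was reading those counts from the output of
>> gluster volume rebalance storage status)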
>>
>> Do you have any suggestion?
>> Sorry for the long mail, I hope it's enough to explain my problem.
>>
>> Thanks a lot in advance for your time and your help
>> Best regards to all,
>>
>> Stefano
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>