[Gluster-users] Cannot see all data in mount
Nithya Balachandran
nbalacha at redhat.com
Thu May 16 09:04:58 UTC 2019
On Thu, 16 May 2019 at 14:17, Paul van der Vlis <paul at vandervlis.nl> wrote:
> On 16-05-19 at 05:43, Nithya Balachandran wrote:
> >
> >
> > On Thu, 16 May 2019 at 03:05, Paul van der Vlis <paul at vandervlis.nl> wrote:
> >
> > On 15-05-19 at 15:45, Nithya Balachandran wrote:
> > > Hi Paul,
> > >
> > > A few questions:
> > > Which version of gluster are you using?
> >
> > On the server and some clients: glusterfs 4.1.2
> > On a new client: glusterfs 5.5
> >
> > Is the same behaviour seen on both client versions?
>
> Yes.
>
> > > Did this behaviour start recently? As in, were the contents of that
> > > directory visible earlier?
> >
> > This directory was normally used in the head office, and there is
> > direct access to the files without Glusterfs. So I don't know.
> >
> >
> > Do you mean that they access the files on the gluster volume without
> > using the client or that these files were stored elsewhere
> > earlier (not on gluster)? Files on a gluster volume should never be
> > accessed directly.
>
> The central server (which holds the only gluster brick) is a thin-client
> server; people work directly on the server using LTSP terminals
> (http://ltsp.org/).
>
> The data is exported using Gluster to some other machines in smaller
> offices.
>
> And to a new thin-client server that I am setting up (using X2go). The
> goal is that this server will replace all of the existing machines in
> the future. X2go is something like "Citrix for Linux"; you can use it
> over the internet.
>
> I did not set up Gluster and I have never met the old sysadmin. I guess
> it's also very strange to use Gluster with only one brick. So if I
> understand you correctly, the whole setup is wrong, and the files must
> not be accessed without a client?
>
>
That is correct - any files on a gluster volume should be accessed only via
a gluster client (if using fuse).
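
To illustrate (a rough sketch only - the volume name is taken from the
"gluster volume info" output quoted further down, the server address
from your mount command, and the mount point is just an example):

# Wrong: reading the brick directly on the server bypasses gluster
ls -l /DATA/ALGEMEEN

# Right: go through a fuse mount of the volume, even on the server itself
mount -t glusterfs 10.8.0.1:/DATA /mnt/data
ls -l /mnt/data/ALGEMEEN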
> > To debug this further, please send the following:
> >
> > 1. The directory contents when the listing is performed directly on the
> > brick.
> > 2. The tcpdump of the gluster client when listing the directory using
> > the following command:
> >
> > tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
> >
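> > For example, one possible way to capture it: start tcpdump, run the
> > listing from the client mount in a second terminal, then stop tcpdump
> > with Ctrl-C once the listing has returned (mount point as in your mail):
> >
> > tcpdump -i any -s 0 -w /var/tmp/dirls.pcap tcp and not port 22
> > # in a second terminal, on the same client:
> > ls -l /data/ALGEMEEN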
> >
> > You can send these directly to me in case you want to keep the
> > information private.
>
> I have just heard (while writing this message) that the owner of the
> firm I am doing this for is in hospital in very critical condition.
> They've asked me to stop the work for the moment.
>
> I also heard that there were more problems with the filesystem,
> especially when a directory was renamed. And this directory was renamed
> in the past.
>
>
Let me know when you plan to continue with this. We can take a look.
Regards,
Nithya
> With regards,
> Paul van der Vlis
>
> > Regards,
> > Nithya
> >
> >
> >
> > With regards,
> > Paul van der Vlis
> >
> > > Regards,
> > > Nithya
> > >
> > >
> > > On Wed, 15 May 2019 at 18:55, Paul van der Vlis <paul at vandervlis.nl> wrote:
> > >
> > > Hello Strahil,
> > >
> > > Thanks for your answer. I don't find the word "sharding" in the
> > > config files. There is not much shared data (24GB), and only 1
> > > brick:
> > > ---
> > > root at xxx:/etc/glusterfs# gluster volume info DATA
> > >
> > > Volume Name: DATA
> > > Type: Distribute
> > > Volume ID: db53ece1-5def-4f7c-b59d-3a230824032a
> > > Status: Started
> > > Snapshot Count: 0
> > > Number of Bricks: 1
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: xxx-vpn:/DATA
> > > Options Reconfigured:
> > > transport.address-family: inet
> > > nfs.disable: on
> > > ----
> > > (I have edited this a bit for privacy of my customer).
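> > > (As a cross-check - I believe the option can also be queried
> > > directly, e.g.:
> > >
> > > gluster volume get DATA features.shard
> > >
> > > which should report "off" here, since sharding is not among the
> > > reconfigured options above.)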
> > >
> > > I think they have used glusterfs because it can do ACLs.
> > >
> > > With regards,
> > > Paul van der Vlis
> > >
> > >
> > > On 15-05-19 at 14:59, Strahil Nikolov wrote:
> > > > Most probably you use sharding, which splits the files into smaller
> > > > chunks so you can fit a 1TB file into gluster nodes with bricks of
> > > > smaller size. So if you have 2 dispersed servers, each having a
> > > > 500GB brick, then without sharding you won't be able to store files
> > > > larger than the brick size - no matter how much free space you have
> > > > on the other server.
> > > >
> > > > When sharding is enabled, you will see the first shard as a file on
> > > > the brick, and the rest is in a hidden folder called ".shard" (or
> > > > something like that).
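> > > > (So a quick check on the brick would be something like:
> > > >
> > > > ls -a /path/to/brick/.shard
> > > >
> > > > where the chunks are named after the GFID of the original file.)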
> > > >
> > > > The benefit is also visible when you need to do some maintenance on
> > > > a gluster node, as you will only need to heal the shards containing
> > > > data modified by the customers.
> > > >
> > > > Best Regards,
> > > > Strahil Nikolov
> > > >
> > > >
> > > > On Wednesday, 15 May 2019 at 7:31:39 AM GMT-4, Paul van der Vlis
> > > > <paul at vandervlis.nl> wrote:
> > > >
> > > >
> > > > Hello,
> > > >
> > > > I am the new sysadmin of an organization that uses Glusterfs.
> > > > I did not set it up, and I don't know much about Glusterfs.
> > > >
> > > > What I do not understand is that I do not see all data in the
> > > > mount - neither as root, nor as a normal user who has privileges.
> > > >
> > > > When I do "ls" in one of the subdirectories I don't see any
> > data, but
> > > > this data exists at the server!
> > > >
> > > > In another subdirectory I see everything fine; the permissions of
> > > > the directories and files inside are the same.
> > > >
> > > > I mount with something like:
> > > > /bin/mount -t glusterfs -o acl 10.8.0.1:/data /data
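> > > > (For reference, I think the equivalent /etc/fstab entry would be
> > > > something like:
> > > >
> > > > 10.8.0.1:/data  /data  glusterfs  defaults,acl,_netdev  0  0
> > > > )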
> > > > I see data in /data/VOORBEELD/, and I don't see any data in
> > > > /data/ALGEMEEN/.
> > > >
> > > > I don't see anything special in /etc/exports or in /etc/glusterfs
> > > > on the server.
> > > >
> > > > Is there maybe a mechanism in Glusterfs that can exclude data from
> > > > export? Or is there a way to debug this problem?
> > > >
> > > > With regards,
> > > > Paul van der Vlis
> > > >
> > > > ----
> > > > # file: VOORBEELD
> > > > # owner: root
> > > > # group: secretariaat
> > > > # flags: -s-
> > > > user::rwx
> > > > group::rwx
> > > > group:medewerkers:r-x
> > > > mask::rwx
> > > > other::---
> > > > default:user::rwx
> > > > default:group::rwx
> > > > default:group:medewerkers:r-x
> > > > default:mask::rwx
> > > > default:other::---
> > > >
> > > > # file: ALGEMEEN
> > > > # owner: root
> > > > # group: secretariaat
> > > > # flags: -s-
> > > > user::rwx
> > > > group::rwx
> > > > group:medewerkers:r-x
> > > > mask::rwx
> > > > other::---
> > > > default:user::rwx
> > > > default:group::rwx
> > > > default:group:medewerkers:r-x
> > > > default:mask::rwx
> > > > default:other::---
> > > > ------
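> > > > (The listings above are getfacl output; to reproduce them,
> > > > presumably run on the server in the parent directory:
> > > >
> > > > getfacl VOORBEELD ALGEMEEN
> > > > )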
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Paul van der Vlis Linux systeembeheer Groningen
> > > > https://www.vandervlis.nl/
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > >
> > >
> > >
> > > --
> > > Paul van der Vlis Linux systeembeheer Groningen
> > > https://www.vandervlis.nl/
> > > _______________________________________________
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > >
> >
> >
> >
> > --
> > Paul van der Vlis Linux systeembeheer Groningen
> > https://www.vandervlis.nl/
> >
>
>
>
> --
> Paul van der Vlis Linux systeembeheer Groningen
> https://www.vandervlis.nl/
>