[Gluster-users] gluster 3.2.0 - totally broken?

Anand Avati anand.avati at gmail.com
Wed May 18 17:14:17 UTC 2011


Udo,
 Do you know what kind of access was performed on those files? Were they
just copied in (via cp), or were they rsync'ed over an existing set of data?
Was it data carried over from 3.1 into a 3.2 system? We hate to lose users
(community users and paid customers equally) and will do our best to keep you
happy. Please file a bug report with as much history as possible and we will
have it assigned as a priority.

Thanks,
Avati

On Wed, May 18, 2011 at 5:45 AM, Udo Waechter <
udo.waechter at uni-osnabrueck.de> wrote:

> Hi there,
> after reporting some trouble with group access permissions,
> http://gluster.org/pipermail/gluster-users/2011-May/007619.html (which
> still persists, btw.)
>
> things are getting worse by the day.
>
> Now, we see a lot of duplicate files (again, only on the fuse clients),
> access permissions are reset on a random and totally annoying basis, and
> files are empty from time to time and become:
> -rwxrws--x  1 user1  group2    594 2011-02-04 18:43 preprocessing128.m
> -rwxrws--x  1 user1  group2    594 2011-02-04 18:43 preprocessing128.m
> -rwxrws--x  1 user2  group2    531 2011-03-03 10:47 result_11.mat
> ------S--T  1 root   group2      0 2011-04-14 07:57 result_11.mat
> -rwxrws--x  1 user1  group2  11069 2010-12-02 14:53 trigger.odt
> -rwxrws--x  1 user1  group2  11069 2010-12-02 14:53 trigger.odt
>
> where group2 are secondary groups.
>
> How come there are these empty and duplicate files? Again, this
> listing is from the fuse mount.
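[The zero-byte, root-owned entries with the sticky bit set in the listing above resemble the link files that GlusterFS's distribute (DHT) translator creates as pointers to the brick that actually holds the data; they should normally be hidden from clients. A hedged sketch of how one might check for this directly on a brick — the file path here is a placeholder, not from the report:

```shell
# Inspect the suspect file on each brick rather than through
# the fuse mount (file path below is hypothetical)
ls -l /srv/store01/path/to/result_11.mat

# A genuine DHT link file carries a trusted.glusterfs.dht.linkto
# extended attribute naming the subvolume with the real copy
getfattr -d -m . -e hex /srv/store01/path/to/result_11.mat
```

If the xattr is present, the zero-byte file is internal bookkeeping leaking through to clients, which points at a lookup or self-heal problem rather than outright data loss.]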
>
> Could it be that version 3.2.0 is totally borked?
>
> Btw.: from time to time, these permissions, as well as which of the
> duplicate files one sees, change in a random manner.
>
> I followed various hints on configuring and deconfiguring options and went
> from:
>
> root at store02:/var/log/glusterfs# gluster volume info store
>
> Volume Name: store
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 5 x 2 = 10
> Transport-type: tcp
> Bricks:
> Brick1: store01-i:/srv/store01
> Brick2: pvmserv01-i:/srv/store01
> Brick3: pvmserv02-i:/srv/store01
> Brick4: store02-i:/srv/store03
> Brick5: store02-i:/srv/store01
> Brick6: store01-i:/srv/store02
> Brick7: store02-i:/srv/store04
> Brick8: store02-i:/srv/store05
> Brick9: store02-i:/srv/store06
> Brick10: store02-i:/srv/store02
> Options Reconfigured:
> nfs.disable: on
> auth.allow: 127.0.0.1,10.10.*
> performance.cache-size: 1024Mb
> performance.write-behind-window-size: 64Mb
> performance.io-thread-count: 32
> diagnostics.dump-fd-stats: off
> diagnostics.brick-log-level: WARNING
> diagnostics.client-log-level: WARNING
> performance.stat-prefetch: off
> diagnostics.latency-measurement: off
> performance.flush-behind: off
> performance.quick-read: disable
>
> to:
>
> Volume Name: store
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 5 x 2 = 10
> Transport-type: tcp
> Bricks:
> Brick1: store01-i:/srv/store01
> Brick2: pvmserv01-i:/srv/store01
> Brick3: pvmserv02-i:/srv/store01
> Brick4: store02-i:/srv/store03
> Brick5: store02-i:/srv/store01
> Brick6: store01-i:/srv/store02
> Brick7: store02-i:/srv/store04
> Brick8: store02-i:/srv/store05
> Brick9: store02-i:/srv/store06
> Brick10: store02-i:/srv/store02
> Options Reconfigured:
> auth.allow: 127.0.0.1,10.10.*
>
> nothing helped.
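[To move between two option sets like the ones above, clearing everything in one step is less error-prone than unsetting options one by one. A sketch, assuming the stock 3.2 `gluster` CLI:

```shell
# Clear all reconfigured options back to their defaults...
gluster volume reset store

# ...then re-apply only the options still wanted
gluster volume set store auth.allow '127.0.0.1,10.10.*'
```
]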
>
> Currently our only option seems to be moving away from glusterfs to some
> other filesystem, which would be a bitter decision.
>
> Thanks for any help,
> udo.
>
> --
> Institute of Cognitive Science - System Administration Team
>     Albrechtstrasse 28 - 49076 Osnabrueck - Germany
>      Tel: +49-541-969-3362 - Fax: +49-541-969-3361
>        https://doc.ikw.uni-osnabrueck.de
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>

