[Gluster-users] gluster 3.2.0 - totally broken?

Udo Waechter udo.waechter at uni-osnabrueck.de
Wed May 18 12:45:19 UTC 2011


Hi there,
after reporting some trouble with group access permissions
(http://gluster.org/pipermail/gluster-users/2011-May/007619.html, which
still persists, btw.), things are getting worse with each day.

Now we are seeing a lot of duplicate files (again, only on the FUSE
clients), access permissions are reset on a random and totally annoying
basis, and files are empty from time to time:
-rwxrws--x  1 user1  group2    594 2011-02-04 18:43 preprocessing128.m
-rwxrws--x  1 user1  group2    594 2011-02-04 18:43 preprocessing128.m
-rwxrws--x  1 user2  group2    531 2011-03-03 10:47 result_11.mat
------S--T  1 root   group2      0 2011-04-14 07:57 result_11.mat
-rwxrws--x  1 user1  group2  11069 2010-12-02 14:53 trigger.odt
-rwxrws--x  1 user1  group2  11069 2010-12-02 14:53 trigger.odt

where group2 is a secondary (supplementary) group.
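
The group membership itself checks out; something along these lines
shows the problem (user, file and mount point names are just examples
based on the listing above):

  id user2    # group2 is listed among the supplementary groups
  su - user2 -c 'cat /mnt/store/somedir/result_11.mat'
              # access through the FUSE mount like this is what randomly
              # stops working for us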

How come there are these empty and duplicate files? Again, this listing
is from the FUSE mount.
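
The zero-byte entry with the sticky bit looks like it could be a DHT
link file leaking through to the client. If so, it should carry the
trusted.glusterfs.dht.linkto xattr directly on the brick; roughly
(brick paths as in the volume info below, file path an example):

  # inspect the file on each brick that holds a copy:
  ls -l /srv/store01/somedir/result_11.mat
  # dump the glusterfs xattrs; a DHT link file carries
  # trusted.glusterfs.dht.linkto naming the real subvolume:
  getfattr -d -m . -e hex /srv/store01/somedir/result_11.mat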

Could it be that version 3.2.0 is totally borked?

Btw.: from time to time, both the permissions and which of the duplicate
files one sees change in a random manner.

I followed various hints on configuring and deconfiguring options, and
went from:

root@store02:/var/log/glusterfs# gluster volume info store

Volume Name: store
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: store01-i:/srv/store01
Brick2: pvmserv01-i:/srv/store01
Brick3: pvmserv02-i:/srv/store01
Brick4: store02-i:/srv/store03
Brick5: store02-i:/srv/store01
Brick6: store01-i:/srv/store02
Brick7: store02-i:/srv/store04
Brick8: store02-i:/srv/store05
Brick9: store02-i:/srv/store06
Brick10: store02-i:/srv/store02
Options Reconfigured:
nfs.disable: on
auth.allow: 127.0.0.1,10.10.*
performance.cache-size: 1024Mb
performance.write-behind-window-size: 64Mb
performance.io-thread-count: 32
diagnostics.dump-fd-stats: off
diagnostics.brick-log-level: WARNING
diagnostics.client-log-level: WARNING
performance.stat-prefetch: off
diagnostics.latency-measurement: off
performance.flush-behind: off
performance.quick-read: disable

to:

Volume Name: store
Type: Distributed-Replicate
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: store01-i:/srv/store01
Brick2: pvmserv01-i:/srv/store01
Brick3: pvmserv02-i:/srv/store01
Brick4: store02-i:/srv/store03
Brick5: store02-i:/srv/store01
Brick6: store01-i:/srv/store02
Brick7: store02-i:/srv/store04
Brick8: store02-i:/srv/store05
Brick9: store02-i:/srv/store06
Brick10: store02-i:/srv/store02
Options Reconfigured:
auth.allow: 127.0.0.1,10.10.*

Nothing helped.
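
For the record, the options were taken out roughly like this ('gluster
volume reset' drops all reconfigured options at once, so auth.allow had
to be set again afterwards):

  gluster volume reset store
  gluster volume set store auth.allow '127.0.0.1,10.10.*'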

Currently our only option seems to be to move away from glusterfs to
some other filesystem, which would be a bitter decision.

Thanks for any help,
udo.

-- 
Institute of Cognitive Science - System Administration Team
      Albrechtstrasse 28 - 49076 Osnabrueck - Germany
       Tel: +49-541-969-3362 - Fax: +49-541-969-3361
         https://doc.ikw.uni-osnabrueck.de


