[Gluster-users] Error message: getxattr failed on <somefile>: user.virtfs.rdev (No data available)

Alessandro Ipe Alessandro.Ipe at meteo.be
Tue Sep 2 16:38:56 UTC 2014


Hi,


We have set up a "home" volume with Gluster 3.4.2 across 4 servers, configured as distributed-replicated. On each server, 4 ext4 bricks are mounted with the following options:
defaults,noatime,nodiratime

This "home" volume is mounted using FUSE on a client server with the following options:
defaults,_netdev,noatime,direct-io-mode=disable,backupvolfile-server=tsunami2,log-level=ERROR,log-file=/var/log/gluster.log
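For completeness, the corresponding /etc/fstab entry on the client looks roughly like this (a sketch: the primary server name tsunami1 and the /home mount point are assumptions; only the options above are from my actual setup):

```shell
# Hypothetical /etc/fstab entry for the FUSE mount of the "home" volume.
# tsunami1 as primary server and /home as mount point are assumed.
tsunami1:/home  /home  glusterfs  defaults,_netdev,noatime,direct-io-mode=disable,backupvolfile-server=tsunami2,log-level=ERROR,log-file=/var/log/gluster.log  0  0
```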

This client (host) also runs a virtual machine (qemu-kvm guest) with a "Filesystem Passthrough" from the host to the guest in "Mapped" mode. The filesystem exported from the host is located on the gluster "home" volume. This filesystem is mounted inside the guest via the following fstab line:
home                 /home                9p         trans=virtio,version=9p2000.L,rw,noatime    0   0

On the guest, userid mapping works correctly and I can copy files to /home. However, doing so fills my bricks' logs (/var/log/glusterfs/bricks on the 4 gluster servers) with error messages similar to:
E [posix.c:2668:posix_getxattr] 0-home-posix: getxattr failed on /data/glusterfs/home/brick1/hail/mailman/mailman: user.virtfs.rdev (No data available)
one for every file copied to /home.

For the moment, a quick fix to avoid filling the system partition holding the logs was to set the volume's "diagnostics.brick-log-level" parameter to CRITICAL, but this also hides other, more important error messages that could occur.
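For reference, this is how the log level was changed (a config fragment using the standard volume-set command; the volume name "home" is from the output below):

```shell
# Quick fix applied: lower brick log verbosity so the errors are not written.
gluster volume set home diagnostics.brick-log-level CRITICAL

# To undo it later and return to the default log level:
gluster volume reset home diagnostics.brick-log-level
```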

Is there a cleaner way (ACLs?) to prevent these error messages from filling my logs and my system partition, please?


Many thanks,


Alessandro.


gluster volume info home outputs:
Volume Name: home
Type: Distributed-Replicate
Volume ID: 501741ed-4146-4022-af0b-41f5b1297766
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: tsunami1:/data/glusterfs/home/brick1
Brick2: tsunami2:/data/glusterfs/home/brick1
Brick3: tsunami1:/data/glusterfs/home/brick2
Brick4: tsunami2:/data/glusterfs/home/brick2
Brick5: tsunami1:/data/glusterfs/home/brick3
Brick6: tsunami2:/data/glusterfs/home/brick3
Brick7: tsunami1:/data/glusterfs/home/brick4
Brick8: tsunami2:/data/glusterfs/home/brick4
Brick9: tsunami3:/data/glusterfs/home/brick1
Brick10: tsunami4:/data/glusterfs/home/brick1
Brick11: tsunami3:/data/glusterfs/home/brick2
Brick12: tsunami4:/data/glusterfs/home/brick2
Brick13: tsunami3:/data/glusterfs/home/brick3
Brick14: tsunami4:/data/glusterfs/home/brick3
Brick15: tsunami3:/data/glusterfs/home/brick4
Brick16: tsunami4:/data/glusterfs/home/brick4
Options Reconfigured:
diagnostics.brick-log-level: CRITICAL
cluster.read-hash-mode: 2
features.limit-usage: <some_quota_info>
features.quota: on
performance.cache-size: 512MB
performance.io-thread-count: 64
performance.flush-behind: off
performance.write-behind-window-size: 4MB
performance.write-behind: on
nfs.disable: on

