[Gluster-users] Gluster problems permission denied LOOKUP () /etc/samba/private/msg.sock

Diego Remolina dijuremo at gmail.com
Tue Oct 2 11:54:38 UTC 2018


Dear all,

I have a two-node setup running on CentOS with Gluster version
glusterfs-3.10.12-1.el7.x86_64.

One of my nodes died (motherboard failure). Since I had to keep things
up, I lowered the quorum to below 50% to make sure I could still run
on one server.
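For context, this is roughly the command I used (the exact ratio shown
is illustrative; cluster.server-quorum-ratio is a cluster-wide setting
applied to all volumes):

```shell
# Lower the server-quorum ratio so a single surviving node can keep
# its bricks up (value is a sketch of what I set, not a recommendation)
gluster volume set all cluster.server-quorum-ratio 50%
```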

The server runs oVirt and 2 VMs on top of a volume called vmstorage. I
also had a third node in the peer list, but I never configured it as an
arbiter, so it just shows up in gluster v status. The server also runs
a file server with Samba to serve files to Windows machines.

The issue is that since starting the server on its own as the Samba
server, I am seeing permission denied errors for the "export" volume
in /var/log/glusterfs/export.log.

The errors look like this and repeat over and over:

[2018-10-02 11:46:56.327925] I [MSGID: 139001]
[posix-acl.c:269:posix_acl_log_permit_denied] 0-posix-acl-autoload:
client: -, gfid: 5b5bed22-ace0-410d-8623-4f1a31069b81,
req(uid:1051,gid:513,perm:1,ngrps:2),
ctx(uid:0,gid:0,in-groups:0,perm:700,updated-fop:LOOKUP, acl:-)
[Permission denied]
[2018-10-02 11:46:56.328004] W [fuse-bridge.c:490:fuse_entry_cbk]
0-glusterfs-fuse: 20599112: LOOKUP() /etc/samba/private/msg.sock/15149
=> -1 (Permission denied)
[2018-10-02 11:46:56.328185] W [fuse-bridge.c:490:fuse_entry_cbk]
0-glusterfs-fuse: 20599113: LOOKUP() /etc/samba/private/msg.sock/15149
=> -1 (Permission denied)
[2018-10-02 11:47:53.766562] W [fuse-bridge.c:490:fuse_entry_cbk]
0-glusterfs-fuse: 20600590: LOOKUP() /etc/samba/private/msg.sock/15149
=> -1 (Permission denied)

The gluster volume export is mounted on /export, and samba and ctdb are
instructed to use /export/etc/samba/private and /export/lock, which are
on the gluster file system, for the clustered TDBs, etc. However, I keep
getting log messages in which FUSE seems to try to access a folder
that does not exist, /etc/samba/private/msg.sock.
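For reference, the relevant parts of my configuration look roughly like
this (the paths match the mount described above; the option names are
the standard smb.conf and ctdb ones, but this is a sketch rather than my
exact files):

```
# smb.conf excerpt (sketch)
[global]
    clustering = yes
    private dir = /export/etc/samba/private

# /etc/sysconfig/ctdb excerpt (sketch)
CTDB_RECOVERY_LOCK=/export/lock/lockfile
```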

Why is this happening, and how can I fix it?

[root at ysmha01 export]# gluster v status export
Status of volume: export
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.6:/bricks/hdds/brick           49153     0          Y       3516
Self-heal Daemon on localhost               N/A       N/A        Y       3710
Self-heal Daemon on 10.0.1.5                N/A       N/A        Y       4380

Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks

These are all the volume options currently set:

http://termbin.com/1xm5

Diego

