[Bugs] [Bug 1312421] glusterfs mount-point returns permission denied

bugzilla at redhat.com bugzilla at redhat.com
Mon Mar 7 10:22:08 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1312421



--- Comment #5 from bitchecker <ciro.deluca at autistici.org> ---
As suggested by @rastar on the IRC channel, I am updating the bug with the following info:

gluster volume status:

Status of volume: volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/data/brick/volume          49152     0          Y       2108
Brick gluster02:/data/brick/volume          49152     0          Y       2109
Self-heal Daemon on localhost               N/A       N/A        Y       2102
Self-heal Daemon on gluster02               N/A       N/A        Y       2103

Task Status of Volume volume
------------------------------------------------------------------------------
There are no active volume tasks
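
For reference, here is a minimal sketch of the client-side mount and the access that fails (the mount point /mnt/volume below is just a placeholder, not my actual path):

# mount the replica volume over FUSE (placeholder mount point)
mount -t glusterfs gluster01:/volume /mnt/volume

# any access to the mount point then returns "Permission denied"
ls /mnt/volume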

"dh -h" on first server:

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  4.0G  1.2G  2.9G  29% /
devtmpfs                 483M     0  483M   0% /dev
tmpfs                    493M     0  493M   0% /dev/shm
tmpfs                    493M  6.7M  487M   2% /run
tmpfs                    493M     0  493M   0% /sys/fs/cgroup
/dev/sdb1                 10G   33M   10G   1% /data/brick
/dev/sda1                497M  148M  350M  30% /boot
tmpfs                     99M     0   99M   0% /run/user/0

"df -h" on second server:

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  4.0G  1.2G  2.9G  29% /
devtmpfs                 483M     0  483M   0% /dev
tmpfs                    493M     0  493M   0% /dev/shm
tmpfs                    493M  6.7M  487M   2% /run
tmpfs                    493M     0  493M   0% /sys/fs/cgroup
/dev/sda1                497M  148M  350M  30% /boot
/dev/sdb1                 10G   33M   10G   1% /data/brick
tmpfs                     99M     0   99M   0% /run/user/0

"ps aux | grep gluster" on first server:

avahi      733  0.0  0.1  28116  1508 ?        Ss   Mar03   0:00 avahi-daemon:
running [glusterfs1.local]
root       970  0.0  2.2 667456 22952 ?        Ssl  Mar03   0:11
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      2102  0.0  2.7 616716 27324 ?        Ssl  Mar03   0:09
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/2e8b6abea9d77a38e2540d59717e058c.socket --xlator-option
*replicate*.node-uuid=2e213ba9-a512-412e-a7a9-cee0b98abd2f
root      2108  0.0  3.8 901448 38820 ?        Ssl  Mar03   0:26
/usr/sbin/glusterfsd -s gluster01 --volfile-id
volume.gluster01.data-brick-volume -p
/var/lib/glusterd/vols/volume/run/gluster01-data-brick-volume.pid -S
/var/run/gluster/4deb782b2475ab253a925df28f26131f.socket --brick-name
/data/brick/volume -l /var/log/glusterfs/bricks/data-brick-volume.log
--xlator-option *-posix.glusterd-uuid=2e213ba9-a512-412e-a7a9-cee0b98abd2f
--brick-port 49152 --xlator-option volume-server.listen-port=49152

"ps aux | grep gluster" on second server:

avahi      737  0.0  0.1  28112  1688 ?        Ss   Mar03   0:00 avahi-daemon:
running [glusterfs2.local]
root       970  0.0  2.0 667460 20660 ?        Ssl  Mar03   0:11
/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      2103  0.0  2.2 600320 22260 ?        Ssl  Mar03   0:08
/usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p
/var/lib/glusterd/glustershd/run/glustershd.pid -l
/var/log/glusterfs/glustershd.log -S
/var/run/gluster/1e2471cab262fc7e30dfec5b09ccb81e.socket --xlator-option
*replicate*.node-uuid=5f4726a7-1d44-4bf2-8755-0a06868e9086
root      2109  0.0  3.8 900416 38836 ?        Ssl  Mar03   0:26
/usr/sbin/glusterfsd -s gluster02 --volfile-id
volume.gluster02.data-brick-volume -p
/var/lib/glusterd/vols/volume/run/gluster02-data-brick-volume.pid -S
/var/run/gluster/6342b2fba5b55dee8aefabf8c6b4bc64.socket --brick-name
/data/brick/volume -l /var/log/glusterfs/bricks/data-brick-volume.log
--xlator-option *-posix.glusterd-uuid=5f4726a7-1d44-4bf2-8755-0a06868e9086
--brick-port 49152 --xlator-option volume-server.listen-port=49152
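
As a possible next step (my own guess, not something @rastar asked for), the ownership, mode, and GlusterFS xattrs of the brick roots could be compared across both servers:

# on each server: ownership and permissions of the brick root
stat /data/brick/volume

# dump the GlusterFS extended attributes of the brick root (run as root)
getfattr -d -m . -e hex /data/brick/volume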
