[Gluster-users] Enabling Halo sets volume RO

Jon Cope jcope at redhat.com
Tue Nov 7 22:33:10 UTC 2017


Hi all, 

I'm taking a stab at deploying a storage cluster to explore the Halo AFR feature and running into some trouble. In GCE, I have 4 instances, each with one 10 GB brick. 2 instances are in the US and the other 2 are in Asia (in the hope that this drives latency up sufficiently). The bricks make up a replica-4 volume. Before I enable halo, I can mount the volume and read/write files. 

The issue is that when I set `cluster.halo-enabled` to `yes`, I can no longer write to the volume: 

[root at jcope-rhs-g2fn vol]# touch /mnt/vol/test1 
touch: setting times of ‘test1’: Read-only file system 
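For reference, halo was enabled with the usual volume-set command; the option name, value, and volume name gv0 match the hook-script entries in the logs further down (the `volume get` check is just a sanity step):

```shell
# Enable Halo replication on the volume (volume name gv0, as in the logs)
gluster volume set gv0 cluster.halo-enabled yes

# Cap the latency (in ms) within which bricks count as "halo-local";
# the value 10 matches the cluster.halo-max-latency seen in the logs
gluster volume set gv0 cluster.halo-max-latency 10

# Sanity check: confirm the options took effect
gluster volume get gv0 all | grep halo
```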

This can be fixed by turning halo off again. While halo is enabled, writes fail with the above error, yet the mount still reports itself as r/w: 

[root at jcope-rhs-g2fn vol]# mount 
gce-node1:gv0 on /mnt/vol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) 
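The two ways I've been reverting it (both restore writes) are, roughly:

```shell
# Either explicitly disable halo ...
gluster volume set gv0 cluster.halo-enabled no

# ... or reset the option back to its default
gluster volume reset gv0 cluster.halo-enabled
```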


Thanks in advance, 
-Jon 


Setup info 
CentOS Linux release 7.4.1708 (Core) 
4 GCE Instances (2 US, 2 Asia) 
1 x 10 GB brick per instance 
1 replica-4 volume 
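For completeness, the volume was created along these lines (gce-node1 appears in the mount output above; the other hostnames and the brick path are placeholders):

```shell
# Replica-4 volume across the four GCE instances
# (hostnames other than gce-node1, and the brick path, are illustrative)
gluster volume create gv0 replica 4 \
  gce-node1:/bricks/brick1/gv0 \
  gce-node2:/bricks/brick1/gv0 \
  gce-node3:/bricks/brick1/gv0 \
  gce-node4:/bricks/brick1/gv0
gluster volume start gv0
```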

Packages: 


glusterfs-client-xlators-3.12.1-2.el7.x86_64 
glusterfs-cli-3.12.1-2.el7.x86_64 
python2-gluster-3.12.1-2.el7.x86_64 
glusterfs-3.12.1-2.el7.x86_64 
glusterfs-api-3.12.1-2.el7.x86_64 
glusterfs-fuse-3.12.1-2.el7.x86_64 
glusterfs-server-3.12.1-2.el7.x86_64 
glusterfs-libs-3.12.1-2.el7.x86_64 
glusterfs-geo-replication-3.12.1-2.el7.x86_64 

Logs, beginning when halo is enabled: 


[2017-11-07 22:20:15.029298] W [MSGID: 101095] [xlator.c:213:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object file: No such file or directory 
[2017-11-07 22:20:15.204241] W [MSGID: 101095] [xlator.c:162:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object file: No such file or directory 
[2017-11-07 22:20:15.232176] I [MSGID: 106600] [glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed 
[2017-11-07 22:20:15.235481] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped 
[2017-11-07 22:20:15.235512] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped 
[2017-11-07 22:20:15.235572] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped 
[2017-11-07 22:20:15.235585] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped 
[2017-11-07 22:20:15.235638] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped 
[2017-11-07 22:20:15.235650] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped 
[2017-11-07 22:20:15.250297] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a) [0x7fc23442117a] -->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d) [0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fc23f915da5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv0 -o cluster.halo-enabled=yes --gd-workdir=/var/lib/glusterd 
[2017-11-07 22:20:15.255777] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a) [0x7fc23442117a] -->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d) [0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fc23f915da5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv0 -o cluster.halo-enabled=yes --gd-workdir=/var/lib/glusterd 
[2017-11-07 22:20:47.420098] W [MSGID: 101095] [xlator.c:213:xlator_dynload] 0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object file: No such file or directory 
[2017-11-07 22:20:47.595960] W [MSGID: 101095] [xlator.c:162:xlator_volopt_dynload] 0-xlator: /usr/lib64/glusterfs/3.12.1/xlator/nfs/server.so: cannot open shared object file: No such file or directory 
[2017-11-07 22:20:47.631833] I [MSGID: 106600] [glusterd-nfs-svc.c:163:glusterd_nfssvc_reconfigure] 0-management: nfs/server.so xlator is not installed 
[2017-11-07 22:20:47.635109] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: quotad already stopped 
[2017-11-07 22:20:47.635136] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: quotad service is stopped 
[2017-11-07 22:20:47.635201] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: bitd already stopped 
[2017-11-07 22:20:47.635216] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: bitd service is stopped 
[2017-11-07 22:20:47.635284] I [MSGID: 106132] [glusterd-proc-mgmt.c:83:glusterd_proc_stop] 0-management: scrub already stopped 
[2017-11-07 22:20:47.635297] I [MSGID: 106568] [glusterd-svc-mgmt.c:229:glusterd_svc_stop] 0-management: scrub service is stopped 
[2017-11-07 22:20:47.648524] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a) [0x7fc23442117a] -->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d) [0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fc23f915da5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=gv0 -o cluster.halo-max-latency=10 --gd-workdir=/var/lib/glusterd 
[2017-11-07 22:20:47.654091] I [run.c:190:runner_log] (-->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xde17a) [0x7fc23442117a] -->/usr/lib64/glusterfs/3.12.1/xlator/mgmt/glusterd.so(+0xddc3d) [0x7fc234420c3d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7fc23f915da5] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S32gluster_enable_shared_storage.sh --volname=gv0 -o cluster.halo-max-latency=10 --gd-workdir=/var/lib/glusterd 

