<div dir="ltr">Hi Strahil.<div><br></div><div>yes I know but I already tried that and failed at implementing it. </div><div>I'm now even suspecting gluster to have some kind of bug.</div><div><br></div><div>Could you show me how to do it correctly? Which services goes into after?</div><div>Do have example unit files for mounting gluster volumes?</div><div><br></div><div>Cheers</div><div>Sherry</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 24 Jan 2020 at 14:03, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On January 24, 2020 10:20:50 AM GMT+02:00, Sherry Reese <<a href="mailto:s.reese4u@gmail.com" target="_blank">s.reese4u@gmail.com</a>> wrote:<br>
>Hello Hubert,<br>
><br>
>that would be an easy fix. I already tried that.<br>
>I additionally tried a service like the following one. It does not work<br>
>either.<br>
><br>
>I'm lost here. Even a workaround would be a relief.<br>
><br>
>[Unit]<br>
>Description=Gluster Mounting<br>
>After=network.target<br>
>After=systemd-user-sessions.service<br>
>After=network-online.target<br>
><br>
>[Service]<br>
>Type=simple<br>
>RemainAfterExit=true<br>
>ExecStart=/bin/mount -a -t glusterfs<br>
>TimeoutSec=30<br>
>Restart=on-failure<br>
>RestartSec=30<br>
>StartLimitInterval=350<br>
>StartLimitBurst=10<br>
><br>
>[Install]<br>
>WantedBy=multi-user.target<br>
><br>
>Cheers<br>
>Sherry<br>
><br>
>On Fri, 24 Jan 2020 at 06:50, Hu Bert <<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>> wrote:<br>
><br>
>> Hi Sherry,<br>
>><br>
>> Maybe at the time when the mount from /etc/fstab takes place, name<br>
>> resolution is not yet working? In your case I'd try placing proper<br>
>> entries in /etc/hosts and testing with a reboot.<br>
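>><br>
>> For example (the addresses below are hypothetical, substitute the<br>
>> real IPs of your gluster nodes):<br>
>><br>
>> 192.168.1.11   gluster01.home   gluster01<br>
>> 192.168.1.12   gluster02.home   gluster02<br>
>> 192.168.1.13   gluster03.home   gluster03<br>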
>><br>
>><br>
>> regards<br>
>> Hubert<br>
>><br>
>> On Fri, 24 Jan 2020 at 02:37, Sherry Reese <<br>
>> <a href="mailto:s.reese4u@gmail.com" target="_blank">s.reese4u@gmail.com</a>> wrote:<br>
>> ><br>
>> > Hello everyone,<br>
>> ><br>
>> > I am using the following entries in /etc/fstab on a CentOS server.<br>
>> ><br>
>> > gluster01.home:/videos /data2/plex/videos glusterfs _netdev 0 0<br>
>> > gluster01.home:/photos /data2/plex/photos glusterfs _netdev 0 0<br>
>> ><br>
>> > I am able to use sudo mount -a to mount the volumes without any<br>
>> problems. When I reboot my server, nothing is mounted.<br>
>> ><br>
>> > I can see errors in /var/log/glusterfs/data2-plex-photos.log:<br>
>> ><br>
>> > ...<br>
>> > [2020-01-24 01:24:18.302191] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3679<br>
>> > [2020-01-24 01:24:18.310017] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)<br>
>> > [2020-01-24 01:24:18.310046] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host gluster01.home<br>
>> > [2020-01-24 01:24:18.310187] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0<br>
>> > ...<br>
>> ><br>
>> > I am able to do nslookup on gluster01 and gluster01.home without<br>
>> > problems, so "DNS resolution failed" is confusing to me. What<br>
>> > happens here?<br>
>> ><br>
>> > Here is the status of my volumes.<br>
>> ><br>
>> > sudo gluster volume status<br>
>> > Status of volume: documents<br>
>> > Gluster process                        TCP Port  RDMA Port  Online  Pid<br>
>> > ------------------------------------------------------------------------------<br>
>> > Brick gluster01.home:/data/documents   49152     0          Y       5658<br>
>> > Brick gluster02.home:/data/documents   49152     0          Y       5340<br>
>> > Brick gluster03.home:/data/documents   49152     0          Y       5305<br>
>> > Self-heal Daemon on localhost          N/A       N/A        Y       5679<br>
>> > Self-heal Daemon on gluster03.home     N/A       N/A        Y       5326<br>
>> > Self-heal Daemon on gluster02.home     N/A       N/A        Y       5361<br>
>> ><br>
>> > Task Status of Volume documents<br>
>> > ------------------------------------------------------------------------------<br>
>> > There are no active volume tasks<br>
>> ><br>
>> > Status of volume: photos<br>
>> > Gluster process                        TCP Port  RDMA Port  Online  Pid<br>
>> > ------------------------------------------------------------------------------<br>
>> > Brick gluster01.home:/data/photos      49153     0          Y       5779<br>
>> > Brick gluster02.home:/data/photos      49153     0          Y       5401<br>
>> > Brick gluster03.home:/data/photos      49153     0          Y       5366<br>
>> > Self-heal Daemon on localhost          N/A       N/A        Y       5679<br>
>> > Self-heal Daemon on gluster03.home     N/A       N/A        Y       5326<br>
>> > Self-heal Daemon on gluster02.home     N/A       N/A        Y       5361<br>
>> ><br>
>> > Task Status of Volume photos<br>
>> > ------------------------------------------------------------------------------<br>
>> > There are no active volume tasks<br>
>> ><br>
>> > Status of volume: videos<br>
>> > Gluster process                        TCP Port  RDMA Port  Online  Pid<br>
>> > ------------------------------------------------------------------------------<br>
>> > Brick gluster01.home:/data/videos      49154     0          Y       5883<br>
>> > Brick gluster02.home:/data/videos      49154     0          Y       5452<br>
>> > Brick gluster03.home:/data/videos      49154     0          Y       5416<br>
>> > Self-heal Daemon on localhost          N/A       N/A        Y       5679<br>
>> > Self-heal Daemon on gluster03.home     N/A       N/A        Y       5326<br>
>> > Self-heal Daemon on gluster02.home     N/A       N/A        Y       5361<br>
>> ><br>
>> > Task Status of Volume videos<br>
>> > ------------------------------------------------------------------------------<br>
>> > There are no active volume tasks<br>
>> ><br>
>> > On the server (Ubuntu), the following versions are installed.<br>
>> ><br>
>> > glusterfs-client/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]<br>
>> > glusterfs-common/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]<br>
>> > glusterfs-server/bionic,now 7.2-ubuntu1~bionic1 armhf [installed]<br>
>> ><br>
>> > On the client (CentOS), the following versions are installed.<br>
>> ><br>
>> > sudo rpm -qa | grep gluster<br>
>> > glusterfs-client-xlators-7.2-1.el7.x86_64<br>
>> > glusterfs-cli-7.2-1.el7.x86_64<br>
>> > glusterfs-libs-7.2-1.el7.x86_64<br>
>> > glusterfs-7.2-1.el7.x86_64<br>
>> > glusterfs-api-7.2-1.el7.x86_64<br>
>> > libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.3.x86_64<br>
>> > centos-release-gluster7-1.0-1.el7.centos.noarch<br>
>> > glusterfs-fuse-7.2-1.el7.x86_64<br>
>> ><br>
>> > I tried to disable IPv6 on the client via sysctl with the following<br>
>> > parameters.<br>
>> ><br>
>> > net.ipv6.conf.all.disable_ipv6 = 1<br>
>> > net.ipv6.conf.default.disable_ipv6 = 1<br>
>> ><br>
>> > That did not help.<br>
>> ><br>
>> > The volumes are configured with 'transport.address-family: inet'.<br>
>> ><br>
>> > sudo gluster volume info videos<br>
>> ><br>
>> > Volume Name: videos<br>
>> > Type: Replicate<br>
>> > Volume ID: 8fddde82-66b3-447f-8860-ed3768c51876<br>
>> > Status: Started<br>
>> > Snapshot Count: 0<br>
>> > Number of Bricks: 1 x 3 = 3<br>
>> > Transport-type: tcp<br>
>> > Bricks:<br>
>> > Brick1: gluster01.home:/data/videos<br>
>> > Brick2: gluster02.home:/data/videos<br>
>> > Brick3: gluster03.home:/data/videos<br>
>> > Options Reconfigured:<br>
>> > features.ctime: on<br>
>> > transport.address-family: inet<br>
>> > nfs.disable: on<br>
>> > performance.client-io-threads: off<br>
>> ><br>
>> > I tried turning off ctime but that did not work either.<br>
>> ><br>
>> > Any ideas? How do I do this correctly?<br>
>> ><br>
>> > Cheers<br>
>> > Sherry<br>
>> > ________<br>
>> ><br>
>> > Community Meeting Calendar:<br>
>> ><br>
>> > APAC Schedule -<br>
>> > Every 2nd and 4th Tuesday at 11:30 AM IST<br>
>> > Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
>> ><br>
>> > NA/EMEA Schedule -<br>
>> > Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
>> > Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
>> ><br>
>> > Gluster-users mailing list<br>
>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>><br>
<br>
Systemd services are a bad approach for defining a mount.<br>
Use a systemd '.mount' unit instead.<br>
In it you can define, via 'Before=' and 'After=', when it should be started.<br>
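<br>
As a minimal sketch for your 'videos' volume (assuming the fstab entry from your first mail; the unit file name must match the mount path, so it would live in /etc/systemd/system/data2-plex-videos.mount):<br>
<br>
[Unit]<br>
Description=Mount gluster volume videos<br>
# wait for the full network (and name resolution) before mounting<br>
Wants=network-online.target<br>
After=network-online.target<br>
<br>
[Mount]<br>
What=gluster01.home:/videos<br>
Where=/data2/plex/videos<br>
Type=glusterfs<br>
Options=defaults,_netdev<br>
<br>
[Install]<br>
WantedBy=remote-fs.target<br>
<br>
Then reload systemd and enable the unit:<br>
<br>
systemctl daemon-reload<br>
systemctl enable --now data2-plex-videos.mount<br>
<br>
If name resolution is still not up at mount time, check that something actually implements network-online.target on your distro (e.g. NetworkManager-wait-online.service or systemd-networkd-wait-online.service is enabled).<br>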
<br>
Best Regards,<br>
Strahil Nikolov<br>
</blockquote></div>