<div dir="ltr">Hi Strahil.<div><br></div><div>yes I know but I already tried that and failed at implementing it. </div><div>I&#39;m now even suspecting gluster to have some kind of bug.</div><div><br></div><div>Could you show me how to do it correctly? Which services goes into after?</div><div>Do have example unit files for mounting gluster volumes?</div><div><br></div><div>Cheers</div><div>Sherry</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 24 Jan 2020 at 14:03, Strahil Nikolov &lt;<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On January 24, 2020 10:20:50 AM GMT+02:00, Sherry Reese &lt;<a href="mailto:s.reese4u@gmail.com" target="_blank">s.reese4u@gmail.com</a>&gt; wrote:<br>
&gt;Hello Hubert,<br>
&gt;<br>
&gt;that would be an easy fix. I already tried that.<br>
&gt;I additionally tried a service like the following one. Does not work<br>
&gt;either.<br>
&gt;<br>
&gt;I&#39;m lost here. Even a workaround would be a relief.<br>
&gt;<br>
&gt;[Unit]<br>
&gt;Description=Gluster Mounting<br>
&gt;After=network.target<br>
&gt;After=systemd-user-sessions.service<br>
&gt;After=network-online.target<br>
&gt;<br>
&gt;[Service]<br>
&gt;Type=simple<br>
&gt;RemainAfterExit=true<br>
&gt;ExecStart=/bin/mount -a -t glusterfs<br>
&gt;TimeoutSec=30<br>
&gt;Restart=on-failure<br>
&gt;RestartSec=30<br>
&gt;StartLimitInterval=350<br>
&gt;StartLimitBurst=10<br>
&gt;<br>
&gt;[Install]<br>
&gt;WantedBy=multi-user.target<br>
&gt;<br>
&gt;Cheers<br>
&gt;Sherry<br>
&gt;<br>
&gt;On Fri, 24 Jan 2020 at 06:50, Hu Bert &lt;<a href="mailto:revirii@googlemail.com" target="_blank">revirii@googlemail.com</a>&gt; wrote:<br>
&gt;<br>
&gt;&gt; Hi Sherry,<br>
&gt;&gt;<br>
&gt;&gt; maybe at the time, when the mount from /etc/fstab should take place,<br>
&gt;&gt; name resolution is not yet working? In your case i&#39;d try to place<br>
&gt;&gt; proper entries in /etc/hosts and test it with a reboot.<br>
&gt;&gt;<br>
&gt;&gt;<br>
&gt;&gt; regards<br>
&gt;&gt; Hubert<br>
&gt;&gt;<br>
&gt;&gt; Am Fr., 24. Jan. 2020 um 02:37 Uhr schrieb Sherry Reese &lt;<br>
&gt;&gt; <a href="mailto:s.reese4u@gmail.com" target="_blank">s.reese4u@gmail.com</a>&gt;:<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Hello everyone,<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I am using the following entry on a CentOS server.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; gluster01.home:/videos /data2/plex/videos glusterfs _netdev 0 0<br>
&gt;&gt; &gt; gluster01.home:/photos /data2/plex/photos glusterfs _netdev 0 0<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I am able to use sudo mount -a to mount the volumes without any<br>
&gt;&gt; problems. When I reboot my server, nothing is mounted.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I can see errors in /var/log/glusterfs/data2-plex-photos.log:<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; ...<br>
&gt;&gt; &gt; [2020-01-24 01:24:18.302191] I [glusterfsd.c:2594:daemonize]<br>
&gt;&gt; 0-glusterfs: Pid of current running process is 3679<br>
&gt;&gt; &gt; [2020-01-24 01:24:18.310017] E [MSGID: 101075]<br>
&gt;&gt; [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed<br>
&gt;&gt; (family:2) (Name or service not known)<br>
&gt;&gt; &gt; [2020-01-24 01:24:18.310046] E<br>
&gt;&gt; [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS<br>
&gt;resolution<br>
&gt;&gt; failed on host gluster01.home<br>
&gt;&gt; &gt; [2020-01-24 01:24:18.310187] I [MSGID: 101190]<br>
&gt;&gt; [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started<br>
&gt;thread<br>
&gt;&gt; with index 0<br>
&gt;&gt; &gt; ...<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I am able to to nslookup on gluster01 and gluster01.home without<br>
&gt;&gt; problems, so &quot;DNS resolution failed&quot; is confusing to me. What happens<br>
&gt;here?<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Output of my volumes.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; sudo gluster volume status<br>
&gt;&gt; &gt; Status of volume: documents<br>
&gt;&gt; &gt; Gluster process                             TCP Port  RDMA Port <br>
&gt;Online<br>
&gt;&gt; Pid<br>
&gt;&gt; &gt;<br>
&gt;&gt;<br>
&gt;------------------------------------------------------------------------------<br>
&gt;&gt; &gt; Brick gluster01.home:/data/documents        49152     0          Y<br>
&gt;&gt;  5658<br>
&gt;&gt; &gt; Brick gluster02.home:/data/documents        49152     0          Y<br>
&gt;&gt;  5340<br>
&gt;&gt; &gt; Brick gluster03.home:/data/documents        49152     0          Y<br>
&gt;&gt;  5305<br>
&gt;&gt; &gt; Self-heal Daemon on localhost               N/A       N/A        Y<br>
&gt;&gt;  5679<br>
&gt;&gt; &gt; Self-heal Daemon on gluster03.home          N/A       N/A        Y<br>
&gt;&gt;  5326<br>
&gt;&gt; &gt; Self-heal Daemon on gluster02.home          N/A       N/A        Y<br>
&gt;&gt;  5361<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Task Status of Volume documents<br>
&gt;&gt; &gt;<br>
&gt;&gt;<br>
&gt;------------------------------------------------------------------------------<br>
&gt;&gt; &gt; There are no active volume tasks<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Status of volume: photos<br>
&gt;&gt; &gt; Gluster process                             TCP Port  RDMA Port <br>
&gt;Online<br>
&gt;&gt; Pid<br>
&gt;&gt; &gt;<br>
&gt;&gt;<br>
&gt;------------------------------------------------------------------------------<br>
&gt;&gt; &gt; Brick gluster01.home:/data/photos           49153     0          Y<br>
&gt;&gt;  5779<br>
&gt;&gt; &gt; Brick gluster02.home:/data/photos           49153     0          Y<br>
&gt;&gt;  5401<br>
&gt;&gt; &gt; Brick gluster03.home:/data/photos           49153     0          Y<br>
&gt;&gt;  5366<br>
&gt;&gt; &gt; Self-heal Daemon on localhost               N/A       N/A        Y<br>
&gt;&gt;  5679<br>
&gt;&gt; &gt; Self-heal Daemon on gluster03.home          N/A       N/A        Y<br>
&gt;&gt;  5326<br>
&gt;&gt; &gt; Self-heal Daemon on gluster02.home          N/A       N/A        Y<br>
&gt;&gt;  5361<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Task Status of Volume photos<br>
&gt;&gt; &gt;<br>
&gt;&gt;<br>
&gt;------------------------------------------------------------------------------<br>
&gt;&gt; &gt; There are no active volume tasks<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Status of volume: videos<br>
&gt;&gt; &gt; Gluster process                             TCP Port  RDMA Port <br>
&gt;Online<br>
&gt;&gt; Pid<br>
&gt;&gt; &gt;<br>
&gt;&gt;<br>
&gt;------------------------------------------------------------------------------<br>
&gt;&gt; &gt; Brick gluster01.home:/data/videos           49154     0          Y<br>
&gt;&gt;  5883<br>
&gt;&gt; &gt; Brick gluster02.home:/data/videos           49154     0          Y<br>
&gt;&gt;  5452<br>
&gt;&gt; &gt; Brick gluster03.home:/data/videos           49154     0          Y<br>
&gt;&gt;  5416<br>
&gt;&gt; &gt; Self-heal Daemon on localhost               N/A       N/A        Y<br>
&gt;&gt;  5679<br>
&gt;&gt; &gt; Self-heal Daemon on gluster03.home          N/A       N/A        Y<br>
&gt;&gt;  5326<br>
&gt;&gt; &gt; Self-heal Daemon on gluster02.home          N/A       N/A        Y<br>
&gt;&gt;  5361<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Task Status of Volume videos<br>
&gt;&gt; &gt;<br>
&gt;&gt;<br>
&gt;------------------------------------------------------------------------------<br>
&gt;&gt; &gt; There are no active volume tasks<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; On the server (Ubuntu) following versions are installed.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; glusterfs-client/bionic,now 7.2-ubuntu1~bionic1 armhf<br>
&gt;&gt; [installed,automatic]<br>
&gt;&gt; &gt; glusterfs-common/bionic,now 7.2-ubuntu1~bionic1 armhf<br>
&gt;&gt; [installed,automatic]<br>
&gt;&gt; &gt; glusterfs-server/bionic,now 7.2-ubuntu1~bionic1 armhf [installed]<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; On the client (CentOS) following versions are installed.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; sudo rpm -qa | grep gluster<br>
&gt;&gt; &gt; glusterfs-client-xlators-7.2-1.el7.x86_64<br>
&gt;&gt; &gt; glusterfs-cli-7.2-1.el7.x86_64<br>
&gt;&gt; &gt; glusterfs-libs-7.2-1.el7.x86_64<br>
&gt;&gt; &gt; glusterfs-7.2-1.el7.x86_64<br>
&gt;&gt; &gt; glusterfs-api-7.2-1.el7.x86_64<br>
&gt;&gt; &gt; libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.3.x86_64<br>
&gt;&gt; &gt; centos-release-gluster7-1.0-1.el7.centos.noarch<br>
&gt;&gt; &gt; glusterfs-fuse-7.2-1.el7.x86_64<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I tried to disable IPv6 on the client voa sysctl with following<br>
&gt;&gt; parameters.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; net.ipv6.conf.all.disable_ipv6 = 1<br>
&gt;&gt; &gt; net.ipv6.conf.default.disable_ipv6 = 1<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; That did not help.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Volumes are configured with inet.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; sudo gluster volume info videos<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Volume Name: videos<br>
&gt;&gt; &gt; Type: Replicate<br>
&gt;&gt; &gt; Volume ID: 8fddde82-66b3-447f-8860-ed3768c51876<br>
&gt;&gt; &gt; Status: Started<br>
&gt;&gt; &gt; Snapshot Count: 0<br>
&gt;&gt; &gt; Number of Bricks: 1 x 3 = 3<br>
&gt;&gt; &gt; Transport-type: tcp<br>
&gt;&gt; &gt; Bricks:<br>
&gt;&gt; &gt; Brick1: gluster01.home:/data/videos<br>
&gt;&gt; &gt; Brick2: gluster02.home:/data/videos<br>
&gt;&gt; &gt; Brick3: gluster03.home:/data/videos<br>
&gt;&gt; &gt; Options Reconfigured:<br>
&gt;&gt; &gt; features.ctime: on<br>
&gt;&gt; &gt; transport.address-family: inet<br>
&gt;&gt; &gt; nfs.disable: on<br>
&gt;&gt; &gt; performance.client-io-threads: off<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; I tried turning off ctime but that did not work either.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Any ideas? How do I do this correctly?<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Cheers<br>
&gt;&gt; &gt; Sherry<br>
&gt;&gt; &gt; ________<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Community Meeting Calendar:<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; APAC Schedule -<br>
&gt;&gt; &gt; Every 2nd and 4th Tuesday at 11:30 AM IST<br>
&gt;&gt; &gt; Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; NA/EMEA Schedule -<br>
&gt;&gt; &gt; Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
&gt;&gt; &gt; Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Gluster-users mailing list<br>
&gt;&gt; &gt; <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
&gt;&gt; &gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
&gt;&gt;<br>
<br>
Systemd services are bad approach to define a mount.<br>
Use systemd&#39;s &#39;.mount&#39; unit instead.<br>
You can define &#39;before&#39; &amp; &#39;after&#39; what it should be initiated.<br>
<br>
Best Regards,<br>
Strahil Nikolov<br>
</blockquote></div>