<div dir="ltr"><div dir="ltr">Hello Hubert,<div><br></div><div>That would have been an easy fix, but I already tried it.</div><div>I also tried a systemd service like the following one; that does not work either.</div><div><br></div><div>I&#39;m lost here. Even a workaround would be a relief.</div><div><br></div><div>[Unit]<br>Description=Gluster Mounting<br>After=network.target<br>After=systemd-user-sessions.service<br>After=network-online.target<br><br>[Service]<br>Type=simple<br>RemainAfterExit=true<br>ExecStart=/bin/mount -a -t glusterfs<br>TimeoutSec=30<br>Restart=on-failure<br>RestartSec=30<br>StartLimitInterval=350<br>StartLimitBurst=10<br><br>[Install]<br>WantedBy=multi-user.target<br></div></div><div><br></div><div>Cheers</div><div>Sherry</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 24 Jan 2020 at 06:50, Hu Bert &lt;<a href="mailto:revirii@googlemail.com">revirii@googlemail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Sherry,<br>
<br>
Maybe name resolution is not yet working at the time the mount from<br>
/etc/fstab is supposed to take place? In your case I&#39;d place proper<br>
entries in /etc/hosts and test it with a reboot.<br>
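For example, entries like these (the 192.168.1.x addresses are just<br>
placeholders; use the real IPs of your gluster servers):<br>
<br>
192.168.1.11   gluster01.home   gluster01<br>
192.168.1.12   gluster02.home   gluster02<br>
192.168.1.13   gluster03.home   gluster03<br>
<br>
That way the mount does not depend on the DNS resolver being up yet.<br>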
<br>
<br>
regards<br>
Hubert<br>
<br>
On Fri, 24 Jan 2020 at 02:37, Sherry Reese &lt;<a href="mailto:s.reese4u@gmail.com" target="_blank">s.reese4u@gmail.com</a>&gt; wrote:<br>
&gt;<br>
&gt; Hello everyone,<br>
&gt;<br>
&gt; I am using the following entry on a CentOS server.<br>
&gt;<br>
&gt; gluster01.home:/videos /data2/plex/videos glusterfs _netdev 0 0<br>
&gt; gluster01.home:/photos /data2/plex/photos glusterfs _netdev 0 0<br>
&gt;<br>
&gt; I am able to use sudo mount -a to mount the volumes without any problems. When I reboot my server, nothing is mounted.<br>
&gt;<br>
&gt; I can see errors in /var/log/glusterfs/data2-plex-photos.log:<br>
&gt;<br>
&gt; ...<br>
&gt; [2020-01-24 01:24:18.302191] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3679<br>
&gt; [2020-01-24 01:24:18.310017] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)<br>
&gt; [2020-01-24 01:24:18.310046] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host gluster01.home<br>
&gt; [2020-01-24 01:24:18.310187] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0<br>
&gt; ...<br>
&gt;<br>
&gt; I am able to run nslookup on gluster01 and gluster01.home without problems, so &quot;DNS resolution failed&quot; is confusing to me. What is happening here?<br>
&gt;<br>
&gt; Here is the output of my volume status.<br>
&gt;<br>
&gt; sudo gluster volume status<br>
&gt; Status of volume: documents<br>
&gt; Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
&gt; ------------------------------------------------------------------------------<br>
&gt; Brick gluster01.home:/data/documents        49152     0          Y       5658<br>
&gt; Brick gluster02.home:/data/documents        49152     0          Y       5340<br>
&gt; Brick gluster03.home:/data/documents        49152     0          Y       5305<br>
&gt; Self-heal Daemon on localhost               N/A       N/A        Y       5679<br>
&gt; Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326<br>
&gt; Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361<br>
&gt;<br>
&gt; Task Status of Volume documents<br>
&gt; ------------------------------------------------------------------------------<br>
&gt; There are no active volume tasks<br>
&gt;<br>
&gt; Status of volume: photos<br>
&gt; Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
&gt; ------------------------------------------------------------------------------<br>
&gt; Brick gluster01.home:/data/photos           49153     0          Y       5779<br>
&gt; Brick gluster02.home:/data/photos           49153     0          Y       5401<br>
&gt; Brick gluster03.home:/data/photos           49153     0          Y       5366<br>
&gt; Self-heal Daemon on localhost               N/A       N/A        Y       5679<br>
&gt; Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326<br>
&gt; Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361<br>
&gt;<br>
&gt; Task Status of Volume photos<br>
&gt; ------------------------------------------------------------------------------<br>
&gt; There are no active volume tasks<br>
&gt;<br>
&gt; Status of volume: videos<br>
&gt; Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
&gt; ------------------------------------------------------------------------------<br>
&gt; Brick gluster01.home:/data/videos           49154     0          Y       5883<br>
&gt; Brick gluster02.home:/data/videos           49154     0          Y       5452<br>
&gt; Brick gluster03.home:/data/videos           49154     0          Y       5416<br>
&gt; Self-heal Daemon on localhost               N/A       N/A        Y       5679<br>
&gt; Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326<br>
&gt; Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361<br>
&gt;<br>
&gt; Task Status of Volume videos<br>
&gt; ------------------------------------------------------------------------------<br>
&gt; There are no active volume tasks<br>
&gt;<br>
&gt; On the server (Ubuntu), the following versions are installed.<br>
&gt;<br>
&gt; glusterfs-client/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]<br>
&gt; glusterfs-common/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]<br>
&gt; glusterfs-server/bionic,now 7.2-ubuntu1~bionic1 armhf [installed]<br>
&gt;<br>
&gt; On the client (CentOS), the following versions are installed.<br>
&gt;<br>
&gt; sudo rpm -qa | grep gluster<br>
&gt; glusterfs-client-xlators-7.2-1.el7.x86_64<br>
&gt; glusterfs-cli-7.2-1.el7.x86_64<br>
&gt; glusterfs-libs-7.2-1.el7.x86_64<br>
&gt; glusterfs-7.2-1.el7.x86_64<br>
&gt; glusterfs-api-7.2-1.el7.x86_64<br>
&gt; libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.3.x86_64<br>
&gt; centos-release-gluster7-1.0-1.el7.centos.noarch<br>
&gt; glusterfs-fuse-7.2-1.el7.x86_64<br>
&gt;<br>
&gt; I tried to disable IPv6 on the client via sysctl with the following parameters.<br>
&gt;<br>
&gt; net.ipv6.conf.all.disable_ipv6 = 1<br>
&gt; net.ipv6.conf.default.disable_ipv6 = 1<br>
&gt;<br>
&gt; That did not help.<br>
&gt;<br>
&gt; Volumes are configured with the inet address family.<br>
&gt;<br>
&gt; sudo gluster volume info videos<br>
&gt;<br>
&gt; Volume Name: videos<br>
&gt; Type: Replicate<br>
&gt; Volume ID: 8fddde82-66b3-447f-8860-ed3768c51876<br>
&gt; Status: Started<br>
&gt; Snapshot Count: 0<br>
&gt; Number of Bricks: 1 x 3 = 3<br>
&gt; Transport-type: tcp<br>
&gt; Bricks:<br>
&gt; Brick1: gluster01.home:/data/videos<br>
&gt; Brick2: gluster02.home:/data/videos<br>
&gt; Brick3: gluster03.home:/data/videos<br>
&gt; Options Reconfigured:<br>
&gt; features.ctime: on<br>
&gt; transport.address-family: inet<br>
&gt; nfs.disable: on<br>
&gt; performance.client-io-threads: off<br>
&gt;<br>
&gt; I tried turning off ctime but that did not work either.<br>
&gt;<br>
&gt; Any ideas? How do I do this correctly?<br>
&gt;<br>
&gt; Cheers<br>
&gt; Sherry<br>
&gt; ________<br>
&gt;<br>
&gt; Community Meeting Calendar:<br>
&gt;<br>
&gt; APAC Schedule -<br>
&gt; Every 2nd and 4th Tuesday at 11:30 AM IST<br>
&gt; Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
&gt;<br>
&gt; NA/EMEA Schedule -<br>
&gt; Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
&gt; Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
&gt;<br>
&gt; Gluster-users mailing list<br>
&gt; <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
&gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote></div></div>