[Gluster-users] Not possible to mount a gluster volume via /etc/fstab?

Strahil Nikolov hunter86_bg at yahoo.com
Fri Jan 24 13:01:55 UTC 2020


On January 24, 2020 10:20:50 AM GMT+02:00, Sherry Reese <s.reese4u at gmail.com> wrote:
>Hello Hubert,
>
>that would be an easy fix. I already tried that.
>I additionally tried a service like the following one; it does not work
>either.
>
>I'm lost here. Even a workaround would be a relief.
>
>[Unit]
>Description=Gluster Mounting
>After=network.target
>After=systemd-user-sessions.service
>After=network-online.target
>
>[Service]
>Type=simple
>RemainAfterExit=true
>ExecStart=/bin/mount -a -t glusterfs
>TimeoutSec=30
>Restart=on-failure
>RestartSec=30
>StartLimitInterval=350
>StartLimitBurst=10
>
>[Install]
>WantedBy=multi-user.target
>
>Cheers
>Sherry
>
>On Fri, 24 Jan 2020 at 06:50, Hu Bert <revirii at googlemail.com> wrote:
>
>> Hi Sherry,
>>
>> Maybe at the time when the mount from /etc/fstab takes place, name
>> resolution is not yet working? In your case I'd try placing proper
>> entries in /etc/hosts and testing it with a reboot.
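>> For example (placeholder addresses - replace them with the real LAN IPs
>> of your gluster nodes):
>>
>> 192.168.1.11  gluster01.home  gluster01
>> 192.168.1.12  gluster02.home  gluster02
>> 192.168.1.13  gluster03.home  gluster03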
>>
>>
>> regards
>> Hubert
>>
>> On Fri, 24 Jan 2020 at 02:37, Sherry Reese <s.reese4u at gmail.com> wrote:
>> >
>> > Hello everyone,
>> >
>> > I am using the following entries on a CentOS server.
>> >
>> > gluster01.home:/videos /data2/plex/videos glusterfs _netdev 0 0
>> > gluster01.home:/photos /data2/plex/photos glusterfs _netdev 0 0
>> >
>> > I am able to use sudo mount -a to mount the volumes without any
>> > problems. When I reboot my server, nothing is mounted.
>> >
>> > I can see errors in /var/log/glusterfs/data2-plex-photos.log:
>> >
>> > ...
>> > [2020-01-24 01:24:18.302191] I [glusterfsd.c:2594:daemonize] 0-glusterfs: Pid of current running process is 3679
>> > [2020-01-24 01:24:18.310017] E [MSGID: 101075] [common-utils.c:505:gf_resolve_ip6] 0-resolver: getaddrinfo failed (family:2) (Name or service not known)
>> > [2020-01-24 01:24:18.310046] E [name.c:266:af_inet_client_get_remote_sockaddr] 0-glusterfs: DNS resolution failed on host gluster01.home
>> > [2020-01-24 01:24:18.310187] I [MSGID: 101190] [event-epoll.c:682:event_dispatch_epoll_worker] 0-epoll: Started thread with index 0
>> > ...
>> >
>> > I am able to do nslookup on gluster01 and gluster01.home without
>> > problems, so "DNS resolution failed" is confusing to me. What happens
>> > here?
>> >
>> > Output of my volume status:
>> >
>> > sudo gluster volume status
>> > Status of volume: documents
>> > Gluster process                             TCP Port  RDMA Port  Online  Pid
>> > ------------------------------------------------------------------------------
>> > Brick gluster01.home:/data/documents        49152     0          Y       5658
>> > Brick gluster02.home:/data/documents        49152     0          Y       5340
>> > Brick gluster03.home:/data/documents        49152     0          Y       5305
>> > Self-heal Daemon on localhost               N/A       N/A        Y       5679
>> > Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326
>> > Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361
>> >
>> > Task Status of Volume documents
>> > ------------------------------------------------------------------------------
>> > There are no active volume tasks
>> >
>> > Status of volume: photos
>> > Gluster process                             TCP Port  RDMA Port  Online  Pid
>> > ------------------------------------------------------------------------------
>> > Brick gluster01.home:/data/photos           49153     0          Y       5779
>> > Brick gluster02.home:/data/photos           49153     0          Y       5401
>> > Brick gluster03.home:/data/photos           49153     0          Y       5366
>> > Self-heal Daemon on localhost               N/A       N/A        Y       5679
>> > Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326
>> > Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361
>> >
>> > Task Status of Volume photos
>> > ------------------------------------------------------------------------------
>> > There are no active volume tasks
>> >
>> > Status of volume: videos
>> > Gluster process                             TCP Port  RDMA Port  Online  Pid
>> > ------------------------------------------------------------------------------
>> > Brick gluster01.home:/data/videos           49154     0          Y       5883
>> > Brick gluster02.home:/data/videos           49154     0          Y       5452
>> > Brick gluster03.home:/data/videos           49154     0          Y       5416
>> > Self-heal Daemon on localhost               N/A       N/A        Y       5679
>> > Self-heal Daemon on gluster03.home          N/A       N/A        Y       5326
>> > Self-heal Daemon on gluster02.home          N/A       N/A        Y       5361
>> >
>> > Task Status of Volume videos
>> > ------------------------------------------------------------------------------
>> > There are no active volume tasks
>> >
>> > On the server (Ubuntu), the following versions are installed.
>> >
>> > glusterfs-client/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]
>> > glusterfs-common/bionic,now 7.2-ubuntu1~bionic1 armhf [installed,automatic]
>> > glusterfs-server/bionic,now 7.2-ubuntu1~bionic1 armhf [installed]
>> >
>> > On the client (CentOS), the following versions are installed.
>> >
>> > sudo rpm -qa | grep gluster
>> > glusterfs-client-xlators-7.2-1.el7.x86_64
>> > glusterfs-cli-7.2-1.el7.x86_64
>> > glusterfs-libs-7.2-1.el7.x86_64
>> > glusterfs-7.2-1.el7.x86_64
>> > glusterfs-api-7.2-1.el7.x86_64
>> > libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.3.x86_64
>> > centos-release-gluster7-1.0-1.el7.centos.noarch
>> > glusterfs-fuse-7.2-1.el7.x86_64
>> >
>> > I tried to disable IPv6 on the client via sysctl with the following
>> > parameters.
>> >
>> > net.ipv6.conf.all.disable_ipv6 = 1
>> > net.ipv6.conf.default.disable_ipv6 = 1
>> >
>> > That did not help.
>> >
>> > Volumes are configured with inet.
>> >
>> > sudo gluster volume info videos
>> >
>> > Volume Name: videos
>> > Type: Replicate
>> > Volume ID: 8fddde82-66b3-447f-8860-ed3768c51876
>> > Status: Started
>> > Snapshot Count: 0
>> > Number of Bricks: 1 x 3 = 3
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: gluster01.home:/data/videos
>> > Brick2: gluster02.home:/data/videos
>> > Brick3: gluster03.home:/data/videos
>> > Options Reconfigured:
>> > features.ctime: on
>> > transport.address-family: inet
>> > nfs.disable: on
>> > performance.client-io-threads: off
>> >
>> > I tried turning off ctime but that did not work either.
>> >
>> > Any ideas? How do I do this correctly?
>> >
>> > Cheers
>> > Sherry
>> > ________
>> >
>> > Community Meeting Calendar:
>> >
>> > APAC Schedule -
>> > Every 2nd and 4th Tuesday at 11:30 AM IST
>> > Bridge: https://bluejeans.com/441850968
>> >
>> > NA/EMEA Schedule -
>> > Every 1st and 3rd Tuesday at 01:00 PM EDT
>> > Bridge: https://bluejeans.com/441850968
>> >
>> > Gluster-users mailing list
>> > Gluster-users at gluster.org
>> > https://lists.gluster.org/mailman/listinfo/gluster-users
>>

A systemd service is the wrong approach for defining a mount.
Use a systemd '.mount' unit instead.
There you can define 'Before=' and 'After=' dependencies to control when it is started.
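
Something like this (a rough sketch based on your fstab entries - adjust hosts,
paths and the volume name to your setup), saved as
/etc/systemd/system/data2-plex-videos.mount (the unit file name has to match
the mount point, see 'systemd-escape -p --suffix=mount /data2/plex/videos'):

[Unit]
Description=Gluster mount for the videos volume
# do not try to mount before the network (and thus name resolution) is up
Wants=network-online.target
After=network-online.target

[Mount]
What=gluster01.home:/videos
Where=/data2/plex/videos
Type=glusterfs
Options=defaults,_netdev

[Install]
WantedBy=multi-user.target

Then run 'systemctl daemon-reload' and 'systemctl enable --now data2-plex-videos.mount'.
If you prefer to stay with /etc/fstab, adding
'x-systemd.automount,x-systemd.requires=network-online.target' to the mount
options is another possibility - whether that is enough depends on when name
resolution actually becomes available on your box.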

Best Regards,
Strahil Nikolov

