[Gluster-users] distributed glusterfs volume of four ramdisks problems

Yaniv Kaul ykaul at redhat.com
Sun Jul 11 21:01:51 UTC 2021


On Sun, 11 Jul 2021, 23:59 Ewen Chan <alpha754293 at hotmail.com> wrote:

> Yaniv:
>
> I created a directory on an XFS-formatted drive and that initially worked
> with tcp/inet.
>
> I then stopped and deleted the volume and tried to recreate it with the
> option "transport tcp,rdma", but that failed.
>

RDMA support was deprecated in recent releases.
Y.
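
Dropping rdma and creating the volume over tcp only should work with otherwise
identical commands; a minimal sketch, reusing the volume and brick names from
the messages quoted below:

# gluster volume create gv1 transport tcp node{1..4}:/mnt/ramdisk/gv1
# gluster volume start gv1
# mount -t glusterfs node1:/gv1 /home/gluster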


> I had to use the force option for the volume creation to go through.
>
> It then failed when trying to mount the volume, even though prior to this
> change I was able to mount the glusterfs volume using tcp/inet only.
>
> But now when I try to re-create the volume with "transport tcp,rdma", it
> fails.
>
> When I try to recreate the volume without any transport arguments, it
> fails as well, because gluster thinks that the brick directory has already
> been associated with a previous gluster volume. I don't know how to
> properly resolve that, and none of the official documentation on
> gluster.org explains how to deal with it.
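>
> (A cleanup that is commonly suggested for this "already associated with a
> previous volume" complaint, sketched here on the assumption of a standard
> brick layout, is to strip Gluster's extended attributes and metadata
> directory from the old brick path on every node before recreating the
> volume:
>
> # setfattr -x trusted.glusterfs.volume-id /mnt/ramdisk/gv0
> # setfattr -x trusted.gfid /mnt/ramdisk/gv0
> # rm -rf /mnt/ramdisk/gv0/.glusterfs
>
> The brick path shown is the one from the earlier messages; substitute
> whichever directory the recreated volume is supposed to use.)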
>
> Thank you.
>
> Sincerely,
> Ewen
>
> ------------------------------
> From: Yaniv Kaul <ykaul at redhat.com>
> Sent: July 11, 2021 4:02 PM
> To: Ewen Chan <alpha754293 at hotmail.com>
> Subject: Re: [Gluster-users] distributed glusterfs volume of four
> ramdisks problems
>
> Can you try on a non-tmpfs file system?
> Y.
>
> On Sun, 11 Jul 2021, 22:59 Ewen Chan <alpha754293 at hotmail.com> wrote:
>
> Strahil:
>
> I just tried to create an entirely new gluster volume, gv1, instead of
> trying to use gv0.
>
> Same error.
>
> # gluster volume create gv1 node{1..4}:/mnt/ramdisk/gv1
> volume create: gv1: success: please start the volume to access data
>
> When I tried to start the volume with:
>
> # gluster volume start gv1
>
> gluster responds with:
>
> volume start: gv1: failed: Commit failed on localhost. Please check log
> file for details.
>
> Attached are the updated glusterd.log and cli.log files.
>
> I checked, and without specifying any options or transport parameters it
> defaults to tcp/inet, but that still failed, so I am not really sure
> what is going on here.
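>
> (For reference, the commit failure on localhost can usually be narrowed
> down from glusterd's log and the brick log; the brick log name below is
> inferred from the brick-path naming convention seen in the crash report
> further down, so treat it as a sketch:)
>
> # gluster volume info gv1
> # tail -n 50 /var/log/glusterfs/glusterd.log
> # tail -n 50 /var/log/glusterfs/bricks/mnt-ramdisk-gv1.log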
>
> Thanks.
>
> Sincerely,
> Ewen
>
> ------------------------------
> From: Strahil Nikolov <hunter86_bg at yahoo.com>
> Sent: July 11, 2021 2:49 AM
> To: gluster-users at gluster.org <gluster-users at gluster.org>; Ewen Chan <
> alpha754293 at hotmail.com>
> Subject: Re: [Gluster-users] distributed glusterfs volume of four
> ramdisks problems
>
> Does it crash with tcp?
> What happens when you mount on one of the hosts?
>
>
> Best Regards,
> Strahil Nikolov
>
> On Saturday, 10 July 2021 at 18:55:40 GMT+3, Ewen Chan <
> alpha754293 at hotmail.com> wrote:
>
> Hello everybody.
>
> I have a cluster with four nodes and I am trying to create a distributed
> glusterfs volume consisting of four RAM drives, each being 115 GB in size.
>
> I am running CentOS 7.7.1908.
>
> I created the ramdrives on each of the four nodes with the following
> command:
>
> # mount -t tmpfs -o size=115g tmpfs /mnt/ramdisk
>
> I then created the brick directory for the gluster volume on each of the
> nodes:
>
>
> # mkdir -p /mnt/ramdisk/gv0
>
> And then I tried to create the glusterfs distributed volume:
>
> # gluster volume create gv0 transport tcp,rdma node{1..4}:/mnt/ramdisk/gv0
>
> And that came back with:
>
> volume create: gv0: success: please start the volume to access data
>
> When I tried to start the volume with:
>
>
> # gluster volume start gv0
>
> gluster responds with:
>
> volume start: gv0: failed: Commit failed on localhost. Please check log
> file for details.
>
> So I tried forcing the start with:
>
> # gluster volume start gv0 force
>
> gluster responds with:
>
> volume start: gv0: success
>
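>
> (Whether a forced start actually brings all four brick processes online
> can be checked with the standard status command, e.g.:)
>
> # gluster volume status gv0
>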
> I then created the mount point for the gluster volume:
>
> # mkdir -p /home/gluster
>
> And tried to mount the gluster gv0 volume:
>
> # mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0
> /home/gluster
>
> and the system crashes.
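>
> (For comparison, the tcp-only form of this mount, assuming the option
> syntax is otherwise unchanged, would be:)
>
> # mount -t glusterfs -o transport=tcp,direct-io-mode=enable node1:/gv0 /home/gluster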
>
> After rebooting the system and switching users back to root, I get this:
>
> ABRT has detected 1 problem(s). For more info run: abrt-cli list --since
> 1625929899
>
> # abrt-cli list --since 1625929899
> id 2a8ae7a1207acc48a6fc4a6cd8c3c88ffcf431be
> reason:         glusterfsd killed by SIGSEGV
> time:           Sat 10 Jul 2021 10:56:13 AM EDT
> cmdline:        /usr/sbin/glusterfsd -s aes1 --volfile-id gv0.aes1.mnt-ramdisk-gv0
>                 -p /var/run/gluster/vols/gv0/aes1-mnt-ramdisk-gv0.pid
>                 -S /var/run/gluster/5c2a19a097c93ac6.socket
>                 --brick-name /mnt/ramdisk/gv0
>                 -l /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log
>                 --xlator-option *-posix.glusterd-uuid=0a569353-5991-4bc1-a61f-4ca6950f313d
>                 --process-name brick --brick-port 49152 49153
>                 --xlator-option gv0-server.transport.rdma.listen-port=49153
>                 --xlator-option gv0-server.listen-port=49152
>                 --volfile-server-transport=socket,rdma
> package:        glusterfs-fuse-9.3-1.el7
> uid:            0 (root)
> count:          4
> Directory:      /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935
>
> The Autoreporting feature is disabled. Please consider enabling it by
> issuing
> 'abrt-auto-reporting enabled' as a user with root privileges
>
> Where do I even begin to try to fix this and get it up and running?
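>
> (The details abrt collected for the glusterfsd crash can be pulled out
> with the stock CentOS 7 tooling, roughly as follows; the problem id and
> directory are the ones from the listing above:)
>
> # abrt-cli info 2a8ae7a1207acc48a6fc4a6cd8c3c88ffcf431be
> # ls /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935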
>
> Any help regarding this is greatly appreciated.
>
> Thank you.
>
> Sincerely,
>
> Ewen
>
> ________
>
>
>
> Community Meeting Calendar:
>
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users

