[Gluster-users] distributed glusterfs volume of four ramdisks problems
Ewen Chan
alpha754293 at hotmail.com
Sun Jul 11 17:25:36 UTC 2021
Strahil:
I haven't tried it with TCP.
When I force it to start and then try to mount it, it segfaults as shown below.
I think the problem arises when I am trying to start the volume and it says:
"volume start: gv0: failed: Commit failed on localhost. Please check log file for details."
That would be the first point where it indicates there's a problem.
Since I am trying to configure four bricks that are RAM drives, it is imperative that I be able to create and/or start the volume with RDMA (I am running it over 100 Gbps InfiniBand); otherwise, if I run it over TCP, the additional overhead of the TCP stack will cripple the Gluster volume's performance potential.
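If I do get a chance to test it over TCP, I assume the cleanest way would be to tear the volume down and recreate it TCP-only with the same bricks, something like the sketch below (the old brick directories would either need to be wiped first or the create run with force, since gluster tags used bricks with its own xattrs):
# gluster volume stop gv0
# gluster volume delete gv0
# gluster volume create gv0 transport tcp node{1..4}:/mnt/ramdisk/gv0 force
# gluster volume start gv0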
Thank you.
Sincerely,
Ewen
________________________________
From: Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: July 11, 2021 2:49 AM
To: gluster-users at gluster.org <gluster-users at gluster.org>; Ewen Chan <alpha754293 at hotmail.com>
Subject: Re: [Gluster-users] distributed glusterfs volume of four ramdisks problems
Does it crash with tcp ?
What happens when you mount on one of the hosts ?
Best Regards,
Strahil Nikolov
On Saturday, July 10, 2021, 18:55:40 GMT+3, Ewen Chan <alpha754293 at hotmail.com> wrote:
Hello everybody.
I have a cluster with four nodes and I am trying to create a distributed glusterfs volume consisting of four RAM drives, each being 115 GB in size.
I am running CentOS 7.7.1908.
I created the ramdrives on each of the four nodes with the following command:
# mount -t tmpfs -o size=115g tmpfs /mnt/ramdisk
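(To double-check that the tmpfs was actually mounted on every node, I can run something like the following; node1 through node4 are simply the hostnames I am using here:)
# for n in node{1..4}; do ssh $n 'df -h /mnt/ramdisk'; done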
I then created the brick directory for the gluster volume on each of the nodes:
# mkdir -p /mnt/ramdisk/gv0
And then I tried to create the glusterfs distributed volume:
# gluster volume create gv0 transport tcp,rdma node{1..4}:/mnt/ramdisk/gv0
And that came back with:
volume create: gv0: success: please start the volume to access data
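(At this point the volume shows up in gluster, and its bricks and transport type, which should read tcp,rdma, can presumably be confirmed with:)
# gluster volume info gv0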
When I tried to start the volume with:
# gluster volume start gv0
gluster responds with:
volume start: gv0: failed: Commit failed on localhost. Please check log file for details.
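(The log file that message refers to would presumably be glusterd's main log on the node where the commit failed, plus the brick log; assuming the default log locations, something like:)
# less /var/log/glusterfs/glusterd.log
# less /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log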
So I tried forcing the start with:
# gluster volume start gv0 force
gluster responds with:
volume start: gv0: success
I then created the mount point for the gluster volume:
# mkdir -p /home/gluster
And tried to mount the gluster gv0 volume:
# mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0 /home/gluster
and the system crashes.
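(If anything survives the crash, the FUSE client should have written a log named after the mount point; assuming the default naming convention, that would be something like:)
# less /var/log/glusterfs/home-gluster.log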
After rebooting the system and switching users back to root, I get this:
ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1625929899
# abrt-cli list --since 1625929899
id 2a8ae7a1207acc48a6fc4a6cd8c3c88ffcf431be
reason: glusterfsd killed by SIGSEGV
time: Sat 10 Jul 2021 10:56:13 AM EDT
cmdline: /usr/sbin/glusterfsd -s aes1 --volfile-id gv0.aes1.mnt-ramdisk-gv0 -p /var/run/gluster/vols/gv0/aes1-mnt-ramdisk-gv0.pid -S /var/run/gluster/5c2a19a097c93ac6.socket --brick-name /mnt/ramdisk/gv0 -l /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log --xlator-option *-posix.glusterd-uuid=0a569353-5991-4bc1-a61f-4ca6950f313d --process-name brick --brick-port 49152 49153 --xlator-option gv0-server.transport.rdma.listen-port=49153 --xlator-option gv0-server.listen-port=49152 --volfile-server-transport=socket,rdma
package: glusterfs-fuse-9.3-1.el7
uid: 0 (root)
count: 4
Directory: /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935
The Autoreporting feature is disabled. Please consider enabling it by issuing
'abrt-auto-reporting enabled' as a user with root privileges
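(For what it's worth, I assume the backtrace from that crash can be pulled out of the ABRT directory with something like the following, which would give whoever looks at this something concrete to go on:)
# abrt-cli info -d /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935
# gdb /usr/sbin/glusterfsd /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935/coredump -ex bt -ex quit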
Where do I even begin to try and fix this and get it up and running?
Any help in regards to this is greatly appreciated.
Thank you.
Sincerely,
Ewen