[Gluster-users] distributed glusterfs volume of four ramdisks problems

Ewen Chan alpha754293 at hotmail.com
Sat Jul 10 15:55:32 UTC 2021


Hello everybody.

I have a cluster with four nodes and I am trying to create a distributed glusterfs volume consisting of four RAM disks, each 115 GB in size.

I am running CentOS 7.7.1908.

I created the RAM disk on each of the four nodes with the following command:
# mount -t tmpfs -o size=115g tmpfs /mnt/ramdisk
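
(I assume the mount can be confirmed on each node with something like:

# df -h /mnt/ramdisk

which should show a 115G tmpfs filesystem.)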

I then created the brick directory on each of the nodes:
# mkdir -p /mnt/ramdisk/gv0

And then I tried to create the glusterfs distributed volume:
# gluster volume create gv0 transport tcp,rdma node{1..4}:/mnt/ramdisk/gv0
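
(As far as I understand, all four nodes need to already be in the trusted storage pool for this to work, which can be checked with:

# gluster peer status

and all four did show up as connected.)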

And that came back with:

volume create: gv0: success: please start the volume to access data

When I tried to start the volume with:
# gluster volume start gv0

gluster responds with:

volume start: gv0: failed: Commit failed on localhost. Please check log file for details.
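
(Based on that error message, I assume the relevant logs would be something like:

# tail -n 50 /var/log/glusterfs/glusterd.log
# tail -n 50 /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log

though I am not sure what to look for in them.)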

So I tried forcing the start with:
# gluster volume start gv0 force

gluster responds with:

volume start: gv0: success

I then created the mount point for the gluster volume:
# mkdir -p /home/gluster

And tried to mount the gluster gv0 volume:
# mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0 /home/gluster

and the system crashes.
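
(One thing I have not yet tried is mounting over tcp only, to rule out the rdma transport as the cause:

# mount -t glusterfs -o transport=tcp node1:/gv0 /home/gluster

I am assuming the transport mount option accepts tcp here.)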

After rebooting the system and switching users back to root, I get this:

ABRT has detected 1 problem(s). For more info run: abrt-cli list --since 1625929899

# abrt-cli list --since 1625929899
id 2a8ae7a1207acc48a6fc4a6cd8c3c88ffcf431be
reason:         glusterfsd killed by SIGSEGV
time:           Sat 10 Jul 2021 10:56:13 AM EDT
cmdline:        /usr/sbin/glusterfsd -s aes1 --volfile-id gv0.aes1.mnt-ramdisk-gv0 -p /var/run/gluster/vols/gv0/aes1-mnt-ramdisk-gv0.pid -S /var/run/gluster/5c2a19a097c93ac6.socket --brick-name /mnt/ramdisk/gv0 -l /var/log/glusterfs/bricks/mnt-ramdisk-gv0.log --xlator-option *-posix.glusterd-uuid=0a569353-5991-4bc1-a61f-4ca6950f313d --process-name brick --brick-port 49152 49153 --xlator-option gv0-server.transport.rdma.listen-port=49153 --xlator-option gv0-server.listen-port=49152 --volfile-server-transport=socket,rdma
package:        glusterfs-fuse-9.3-1.el7
uid:            0 (root)
count:          4
Directory:      /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935

The Autoreporting feature is disabled. Please consider enabling it by issuing
'abrt-auto-reporting enabled' as a user with root privileges
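
(If a backtrace would help, I assume I can pull more detail out of the crash record with:

# abrt-cli info -d /var/spool/abrt/ccpp-2021-07-10-10:56:13-4935

and I am happy to post that output as well.)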

Where do I even begin to try to fix this and get the volume up and running?

Any help in regards to this is greatly appreciated.

Thank you.

Sincerely,
Ewen