[Gluster-users] ganesha.nfsd process dies when copying files

Pui Edylie <email@edylie.net>
Fri Aug 10 13:40:29 UTC 2018


Hi Karli,

The following are my notes, which I gathered from Google searches, the storhaug 
wiki and more Google searches ... I might have missed certain steps, and 
this is based on CentOS 7.

install CentOS 7.x
yum update -y

I have disabled both firewalld and SELinux.
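If you want to do the same, something like the following should do it 
(adjust to your own security policy; the SELinux change only fully 
applies after a reboot):

systemctl disable --now firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config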

In our setup we are using an LSI RAID card configured as RAID10 and 
present the virtual drive as /dev/sdb.

Create LVM so that we can utilise the snapshot feature of Gluster:

pvcreate --dataalignment 256k /dev/sdb
vgcreate --physicalextentsize 256K gfs_vg /dev/sdb

Set the thin pool to use all the free space with -l 100%FREE:
lvcreate --thinpool gfs_vg/thin_pool -l 100%FREE --chunksize 256K --poolmetadatasize 15G --zero n

We use the XFS file system for our GlusterFS bricks:
mkfs.xfs -i size=512 /dev/gfs_vg/thin_pool
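Note: depending on your LVM version, running mkfs directly on the thin 
pool device may not work; the usual pattern is to carve a thin LV out 
of the pool first and put XFS on that (the LV name thin_vol and the 1T 
size below are just placeholders, size it to your needs):

lvcreate -V 1T -T gfs_vg/thin_pool -n thin_vol
mkfs.xfs -i size=512 /dev/gfs_vg/thin_vol

If you go that route, the fstab entry below should point at 
/dev/gfs_vg/thin_vol instead of the pool.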

Add the following to /etc/fstab with mount point /brick1683 (you 
could change the name accordingly):
/dev/gfs_vg/thin_pool                 /brick1683 xfs    defaults 1 2
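Then create the mount point and mount it; a quick df should show the 
brick:

mkdir -p /brick1683
mount /brick1683
df -h /brick1683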

Enable the Gluster 4.1 repo

vi /etc/yum.repos.d/Gluster.repo

[gluster41]
name=Gluster 4.1
baseurl=http://mirror.centos.org/centos/7/storage/$basearch/gluster-4.1/
gpgcheck=0
enabled=1

install gluster 4.1

yum install -y centos-release-gluster
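Note that centos-release-gluster just sets up the CentOS Storage SIG 
repository; the Gluster server packages themselves still need to be 
installed and glusterd enabled on every node, something like:

yum install -y glusterfs-server
systemctl enable --now glusterd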

Once we have done the above steps on all 3 nodes, log in to one of the 
nodes and issue the following:

gluster volume create gv0 replica 3 192.168.0.1:/brick1683/gv0 192.168.0.2:/brick1684/gv0 192.168.0.3:/brick1685/gv0
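The volume also needs to be started before it can be used; gluster 
volume status should then show all three bricks online:

gluster volume start gv0
gluster volume status gv0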


Setting up HA for NFS-Ganesha using CTDB

install the storhaug package on all participating nodes
Use the appropriate command for your system; on CentOS:

yum -y install storhaug-nfs

Note: this will install all the dependencies, e.g. ctdb, 
nfs-ganesha-gluster, glusterfs, and their related dependencies.

Create a passwordless ssh key and copy it to all participating nodes
On one of the participating nodes (Fedora, RHEL, CentOS):
node1% ssh-keygen -f /etc/sysconfig/storhaug.d/secret.pem
or (Debian, Ubuntu):
node1% ssh-keygen -f /etc/default/storhaug.d/secret.pem
When prompted for a passphrase, just press the Enter key.

Copy the public key to all the nodes (Fedora, RHEL, CentOS):
node1% ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@node1
node1% ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@node2
node1% ssh-copy-id -i /etc/sysconfig/storhaug.d/secret.pem.pub root@node3

...

You can confirm that it works with (Fedora, RHEL, CentOS):
node1% ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /etc/sysconfig/storhaug.d/secret.pem root@node1


populate /etc/ctdb/nodes and /etc/ctdb/public_addresses
Select one node as your lead node, e.g. node1. On the lead node, 
create/edit /etc/ctdb/nodes and populate it with the (fixed) IP 
addresses of the participating nodes. It should look like this:
192.168.122.81
192.168.122.82
192.168.122.83
192.168.122.84

On the lead node, create/edit /etc/ctdb/public_addresses and populate it 
with the floating IP addresses (a.k.a. VIPs) for the participating 
nodes. These must be different than the IP addresses in /etc/ctdb/nodes. 
It should look like this:
192.168.122.85 eth0
192.168.122.86 eth0
192.168.122.87 eth0
192.168.122.88 eth0

edit /etc/ctdb/ctdbd.conf
Ensure that the line CTDB_MANAGES_NFS=yes exists. If not, add it or 
change it from no to yes. Add or change the following lines:
CTDB_RECOVERY_LOCK=/run/gluster/shared_storage/.ctdb/reclock
CTDB_NFS_CALLOUT=/etc/ctdb/nfs-ganesha-callout
CTDB_NFS_STATE_FS_TYPE=glusterfs
CTDB_NFS_STATE_MNT=/run/gluster/shared_storage
CTDB_NFS_SKIP_SHARE_CHECK=yes
NFS_HOSTNAME=localhost

create a bare minimum /etc/ganesha/ganesha.conf file
On the lead node:
node1% touch /etc/ganesha/ganesha.conf
or
node1% echo "### NFS-Ganesha.config" > /etc/ganesha/ganesha.conf

Note: you can edit this later to set global configuration options.

create a trusted storage pool and start the gluster shared-storage volume
On all the participating nodes:
node1% systemctl start glusterd
node2% systemctl start glusterd
node3% systemctl start glusterd
...

On the lead node, peer probe the other nodes:
node1% gluster peer probe node2
node1% gluster peer probe node3
...

Optional: on one of the other nodes, peer probe node1:
node2% gluster peer probe node1

Enable the gluster shared-storage volume:
node1% gluster volume set all cluster.enable-shared-storage enable
This takes a few moments. When it is done, check that the 
gluster_shared_storage volume is mounted at /run/gluster/shared_storage 
on all the nodes.
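Something like this on each node should confirm it:

gluster volume status gluster_shared_storage
df -h /run/gluster/shared_storage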

start the ctdbd and ganesha.nfsd daemons
On the lead node:
node1% storhaug setup
You can watch the ctdb log (/var/log/ctdb.log) and the ganesha log 
(/var/log/ganesha/ganesha.log) to monitor their progress. From this 
point on you may enter storhaug commands from any of the participating 
nodes.
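To check that the cluster has actually come up, ctdb itself can be 
queried on any node (all nodes should eventually report OK), and you 
can keep an eye on both logs at once:

ctdb status
tail -f /var/log/ctdb.log /var/log/ganesha/ganesha.log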

export a gluster volume
Create a gluster volume
node1% gluster volume create myvol replica 2 node1:/bricks/vol/myvol node2:/bricks/vol/myvol node3:/bricks/vol/myvol node4:/bricks/vol/myvol ...

Start the gluster volume you just created
node1% gluster volume start myvol

Export the gluster volume from ganesha
node1% storhaug export myvol
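From a client you can then check and mount the export over one of the 
VIPs (192.168.122.85 here is just the first VIP from the example above, 
and the export path is assumed to be the volume name; showmount needs 
NFSv3 enabled on the server):

showmount -e 192.168.122.85
mount -t nfs -o vers=4.1 192.168.122.85:/myvol /mnt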

Regards,
Edy



On 8/10/2018 9:23 PM, Karli Sjöberg wrote:
> On Fri, 2018-08-10 at 21:23 +0800, Pui Edylie wrote:
>> Hi Karli,
>>
>> Storhaug works with glusterfs 4.1.2 and latest nfs-ganesha.
>>
>> I just installed them last weekend ... they are working very well :)
> Okay, awesome!
>
> Is there any documentation on how to do that?
>
> /K
>
>> Cheers,
>> Edy
>>
>> On 8/10/2018 9:08 PM, Karli Sjöberg wrote:
>>> On Fri, 2018-08-10 at 08:39 -0400, Kaleb S. KEITHLEY wrote:
>>>> On 08/10/2018 08:08 AM, Karli Sjöberg wrote:
>>>>> Hey all!
>>>>> ...
>>>>>
>>>>> glusterfs-client-xlators-3.10.12-1.el7.x86_64
>>>>> glusterfs-api-3.10.12-1.el7.x86_64
>>>>> nfs-ganesha-2.4.5-1.el7.x86_64
>>>>> centos-release-gluster310-1.0-1.el7.centos.noarch
>>>>> glusterfs-3.10.12-1.el7.x86_64
>>>>> glusterfs-cli-3.10.12-1.el7.x86_64
>>>>> nfs-ganesha-gluster-2.4.5-1.el7.x86_64
>>>>> glusterfs-server-3.10.12-1.el7.x86_64
>>>>> glusterfs-libs-3.10.12-1.el7.x86_64
>>>>> glusterfs-fuse-3.10.12-1.el7.x86_64
>>>>> glusterfs-ganesha-3.10.12-1.el7.x86_64
>>>>>
>>>> For nfs-ganesha problems you'd really be better served by posting
>>>> to support@ or devel@lists.nfs-ganesha.org.
>>>>
>>>> Both glusterfs-3.10 and nfs-ganesha-2.4 are really old.
>>>> glusterfs-3.10 is even officially EOL. Ganesha isn't really
>>>> organized enough to have done anything as bold as officially
>>>> declaring 2.4 as having reached EOL.
>>>>
>>>> The nfs-ganesha devs are currently working on 2.7; maintaining
>>>> and supporting 2.6, and less so 2.5, is pretty much at the limit
>>>> of what they might be willing to help debug.
>>>>
>>>> I strongly encourage you to update to a more recent version of
>>>> both glusterfs and nfs-ganesha. glusterfs-4.1 and nfs-ganesha-2.6
>>>> would be ideal. Then if you still have problems you're much more
>>>> likely to get help.
>>>>
>>> Hi, thank you for your answer, but it raises even more questions
>>> about any potential production deployment.
>>>
>>> Actually, I knew that the versions are old, but it seems to me
>>> that you are contradicting yourself:
>>>
>>> https://lists.gluster.org/pipermail/gluster-users/2017-July/031753.html
>>>
>>> "After 3.10 you'd need to use storhaug.... Which.... doesn't work
>>> (yet).
>>>
>>> You need to use 3.10 for now."
>>>
>>> So how is that supposed to work?
>>>
>>> Is there documentation for how to get there?
>>>
>>> Thanks in advance!
>>>
>>> /K
>>>
>>>



