[Gluster-users] GlusterFS Performance tuning

Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC] uthra.r.rao at nasa.gov
Wed Nov 25 14:44:14 UTC 2015


Thank you all for taking the time to reply to my email:

Here is some more information on our setup:
- Number of Nodes --> 2 Gluster servers and 1 client for testing. After testing we will mount the GlusterFS volume on 3 clients.
- CPU & RAM on Each Node --> 2 CPUs at 3.4GHz, 384GB RAM on each Gluster server
- What else is running on the nodes --> Nothing; it is only our data server
- Number of bricks --> Two
- output of "gluster  volume info" & "gluster volume status"

Storage server1:
# gluster  volume info gtower
Volume Name: gtower
Type: Replicate
Volume ID: 838ab806-06d9-45c5-8d88-2a905c167dba
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: storage1.sci.gsfc.nasa.gov:/tower7/gluster1/brick
Brick2: storage2.sci.gsfc.nasa.gov:/tower8/gluster2/brick
Options Reconfigured:
nfs.export-volumes: off
nfs.addr-namelookup: off
performance.readdir-ahead: on
performance.cache-size: 2GB

-----------------------------------------------------------

Storage server 2:
# gluster  volume info gtower
Volume Name: gtower
Type: Replicate
Volume ID: 838ab806-06d9-45c5-8d88-2a905c167dba
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: storage1.sci.gsfc.nasa.gov:/tower7/gluster1/brick
Brick2: storage2.sci.gsfc.nasa.gov:/tower8/gluster2/brick
Options Reconfigured:
nfs.export-volumes: off
nfs.addr-namelookup: off
performance.readdir-ahead: on
performance.cache-size: 2GB

-------------------------------------------------------------------------

We have built a ZFS pool (raidz3) consisting of 6 vdevs, each made up of 12 (6TB) drives, and assigned one 200GB SSD drive for ZFS caching.

Our attached storage has 60 (6TB) drives, for which I have set up multipathing. We are also using 12 drives in the server, for which I have set up vdevs. So we are using 60+12 = 72 drives for ZFS (raidz3).
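
For reference, a pool laid out that way is created roughly as follows; the pool name and the multipath device names below are placeholders rather than our real ones, and only the first two of the six 12-drive raidz3 vdevs are written out:

# zpool create gpool \
    raidz3 mpatha mpathb mpathc mpathd mpathe mpathf mpathg mpathh mpathi mpathj mpathk mpathl \
    raidz3 mpathm mpathn mpatho mpathp mpathq mpathr mpaths mpatht mpathu mpathv mpathw mpathx \
    ... \
    cache sdy
# zpool status gpool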


If you have any other suggestions based on our configuration please let me know.

Thank you.
Uthra

From: Gmail [mailto:b.s.mikhael at gmail.com]
Sent: Tuesday, November 24, 2015 4:50 PM
To: Pierre MGP Pro
Cc: Lindsay Mathieson; Rao, Uthra R. (GSFC-672.0)[ADNET SYSTEMS INC]; gluster-users at gluster.org
Subject: Re: [Gluster-users] GlusterFS Performance tuning

You can do the following:

# gluster volume set $vol performance.io-thread-count 64
Today's CPUs are powerful enough to handle 64 threads per volume.
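
The change should then show up under "Options Reconfigured" in the volume info, for example:

# gluster volume info $vol | grep thread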

# gluster volume set $vol client.event-threads XX
XX depends on the number of connections from the FUSE client to the server; you can get this number by running netstat, grepping for the server IP, and counting the connections.

# gluster volume set $vol server.event-threads XX
XX depends on the number of connections from the server to the client(s); you can get this number by running netstat, grepping for "gluster", and counting the connections.
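
For example, a rough way to get those counts (the server IP below is a placeholder):

On the client, connections from the FUSE mount to a given server:
# netstat -tan | grep 192.0.2.10 | grep ESTABLISHED | wc -l

On the server, connections handled by the gluster processes:
# netstat -tanp | grep gluster | grep ESTABLISHED | wc -l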

Also, you can follow the instructions on the following page:
http://gluster.readthedocs.org/en/release-3.7.0/Administrator%20Guide/Linux%20Kernel%20Tuning/
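
As an illustration, that guide covers kernel settings along these lines (virtual-memory writeback, swappiness, I/O scheduler); the values here are only starting points to test, not tuned recommendations for this setup, and sdb is a placeholder for a brick device:

# sysctl -w vm.swappiness=10
# sysctl -w vm.dirty_background_ratio=5
# sysctl -w vm.dirty_ratio=10
# echo deadline > /sys/block/sdb/queue/scheduler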


-Bishoy
On Nov 24, 2015, at 1:31 PM, Pierre MGP Pro <pierre-mgp-jouy.inra at agsic.net> wrote:

Hi Lindsay Mathieson and all,

On 24/11/2015 21:09, Lindsay Mathieson wrote:
More details on your setup would be useful:
- Number of Nodes
- CPU & RAM on Each Node
- What else is running on the nodes
- Number of bricks
- output of "gluster  volume info" & "gluster volume status"

- ZFS config for each Node
  * number of disks and raid arrangement
  * log and cache SSD?
  * zpool status
OK, I have tested that kind of configuration, and the result depends on what you are expecting:

  *   zfsonlinux is now efficient, but you will not have access to ACLs;
  *   on a volume with seven disks we reach the maximum PCI Express bandwidth;
  *   so you can mount a distributed gluster volume on your zfsonlinux nodes. The bandwidth will depend on the kind of glusterfs volume you want to build: distributed, striped, or replicated (see the example below);

     *   replicated: bad, because of the synchronous writes required for file replication;
     *   striped is the best, because it gives you an averaged bandwidth on a file regardless of which node you read/write it from;

  *   the last point, for me, is the Ethernet link between the nodes. If you only have 1Gb, go back to your sandbox; these days you need 10Gb/s, and since the minimum Ethernet access is two ports you need to bond them;
  *   use a 10Gb/s Ethernet switch.

That covers the requirements for current and future needs.
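
As an illustration of the difference (volume names, hostnames and brick paths here are hypothetical), a purely distributed volume versus a two-way replicated one is created roughly like this:

# gluster volume create distvol transport tcp node1:/data/brick1 node2:/data/brick1
# gluster volume create replvol replica 2 transport tcp node1:/data/brick2 node2:/data/brick2

In the distributed case each file lives on a single brick, so writes avoid the synchronous replication cost mentioned above.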
Sincerely,
Pierre Léonard

