[Gluster-users] [SPAM?] Storage Design Overview
Burnash, James
jburnash at knight.com
Wed May 11 14:22:28 UTC 2011
Answers inline below as well ☺
Hope this helps.
James Burnash, Unix Engineering
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Nyamul Hassan
Sent: Wednesday, May 11, 2011 10:04 AM
To: gluster-users at gluster.org
Subject: Re: [Gluster-users] [SPAM?] Storage Design Overview
Thank you for the prompt and insightful answer, James. My remarks are inline.
1. Can we mount a GlusterFS volume on a client and expect it to provide sustained throughput near wire speed? <No>
In your scenario, what were the maximum read speeds that you observed?
Reads (measured with dd) were approximately 60 MB/s to 100 MB/s.
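(For reference, a sequential read test of that kind can be run with dd against a large file on the Gluster mount. This is only a sketch of the general approach, not the exact command used above; the file path is a placeholder.)

    # drop the Linux page cache first so the result reflects a cold cache (run as root)
    sync; echo 3 > /proc/sys/vm/drop_caches
    # sequential read of a large file from the Gluster mount; dd prints throughput when it finishes
    dd if=/pfs2/path/to/large-file of=/dev/null bs=1M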
3. Does it put extra pressure on the client? <What do you mean by pressure? My clients (HP ProLiant DL360 G5 quad-core with 32GB RAM) show up to 2GB of memory usage when the native Gluster client is used for mounts, but that depends on what you set the client cache maximum to; in my case, 2GB. CPU utilization is usually negligible on my systems. Network bandwidth utilization and I/O throughput depend on what the file sizes and access patterns look like.>
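(To get a rough picture of that memory usage on a client yourself, the resident size of the native FUSE client process can be checked with standard tools; a minimal sketch, not something quoted from the deployment above.)

    # resident memory (RSS, in KB) of the glusterfs native-client processes
    ps -C glusterfs -o pid,rss,args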
Thanks for the insight. Can you describe your current deployment a bit more, like the configs of the storage nodes and the client nodes, and what type of application you are using it for? I don't want to be too intrusive, just to get an idea of what others are doing.
All on Gluster 3.1.3
Servers:
4 CentOS 5.5 (ProLiant DL370 G6 servers, Intel Xeon 3200 MHz),
Each with:
Single P812 Smart Array Controller,
Single MDS600 with 70 x 2TB SATA drives configured as RAID 50
48 GB RAM
Clients:
185 CentOS 5.2 (mostly DL360 G6).
/pfs2 is the mount point for a Distributed-Replicate volume across the 4 servers; the volume info follows, with a brief mount/option sketch after it.
Volume Name: pfs-ro1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 20 x 2 = 40
Transport-type: tcp
Bricks:
Brick1: jc1letgfs17-pfs1:/export/read-only/g01
Brick2: jc1letgfs18-pfs1:/export/read-only/g01
Brick3: jc1letgfs17-pfs1:/export/read-only/g02
Brick4: jc1letgfs18-pfs1:/export/read-only/g02
Brick5: jc1letgfs17-pfs1:/export/read-only/g03
Brick6: jc1letgfs18-pfs1:/export/read-only/g03
Brick7: jc1letgfs17-pfs1:/export/read-only/g04
Brick8: jc1letgfs18-pfs1:/export/read-only/g04
Brick9: jc1letgfs17-pfs1:/export/read-only/g05
Brick10: jc1letgfs18-pfs1:/export/read-only/g05
Brick11: jc1letgfs17-pfs1:/export/read-only/g06
Brick12: jc1letgfs18-pfs1:/export/read-only/g06
Brick13: jc1letgfs17-pfs1:/export/read-only/g07
Brick14: jc1letgfs18-pfs1:/export/read-only/g07
Brick15: jc1letgfs17-pfs1:/export/read-only/g08
Brick16: jc1letgfs18-pfs1:/export/read-only/g08
Brick17: jc1letgfs17-pfs1:/export/read-only/g09
Brick18: jc1letgfs18-pfs1:/export/read-only/g09
Brick19: jc1letgfs17-pfs1:/export/read-only/g10
Brick20: jc1letgfs18-pfs1:/export/read-only/g10
Brick21: jc1letgfs14-pfs1:/export/read-only/g01
Brick22: jc1letgfs15-pfs1:/export/read-only/g01
Brick23: jc1letgfs14-pfs1:/export/read-only/g02
Brick24: jc1letgfs15-pfs1:/export/read-only/g02
Brick25: jc1letgfs14-pfs1:/export/read-only/g03
Brick26: jc1letgfs15-pfs1:/export/read-only/g03
Brick27: jc1letgfs14-pfs1:/export/read-only/g04
Brick28: jc1letgfs15-pfs1:/export/read-only/g04
Brick29: jc1letgfs14-pfs1:/export/read-only/g05
Brick30: jc1letgfs15-pfs1:/export/read-only/g05
Brick31: jc1letgfs14-pfs1:/export/read-only/g06
Brick32: jc1letgfs15-pfs1:/export/read-only/g06
Brick33: jc1letgfs14-pfs1:/export/read-only/g07
Brick34: jc1letgfs15-pfs1:/export/read-only/g07
Brick35: jc1letgfs14-pfs1:/export/read-only/g08
Brick36: jc1letgfs15-pfs1:/export/read-only/g08
Brick37: jc1letgfs14-pfs1:/export/read-only/g09
Brick38: jc1letgfs15-pfs1:/export/read-only/g09
Brick39: jc1letgfs14-pfs1:/export/read-only/g10
Brick40: jc1letgfs15-pfs1:/export/read-only/g10
Options Reconfigured:
diagnostics.brick-log-level: ERROR
cluster.metadata-change-log: on
diagnostics.client-log-level: ERROR
performance.stat-prefetch: on
performance.cache-size: 2GB
network.ping-timeout: 10
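(For context, options like those above are applied from any server in the pool with the gluster CLI, and clients mount the volume with the native client. A minimal sketch assuming the hostnames and volume name shown above; these are not the exact commands from this deployment.)

    # set tunables on the running volume (run on any server in the pool)
    gluster volume set pfs-ro1 performance.cache-size 2GB
    gluster volume set pfs-ro1 network.ping-timeout 10
    # mount with the native FUSE client on a client machine
    mount -t glusterfs jc1letgfs17-pfs1:/pfs-ro1 /pfs2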
Thank you once again for your remarks.
Cheers,
HASSAN