[Gluster-users] question on config for Distributed Replicated Storage

Christian Marnitz christian.marnitz at icesmedia.de
Sat Aug 8 22:18:46 UTC 2009


Hi,

we are trying to set up a Distributed Replicated Storage with 8 servers:

-------------------------------------------------------------------
2x dual quad-core Xeon 2.5 GHz
32 GB RAM
1.25 TB RAID 10 - 15k rpm SAS
InfiniBand Mellanox HCA and Voltaire switch
-------------------------------------------------------------------
# file: /etc/glusterfs/glusterfsd.vol
volume posix
  type storage/posix
  option directory /services/glusterfs/sdb/export/
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs
  option auth.addr.brick.allow 10.10.10.*
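  # ("brick" in auth.addr.brick.allow is the name of the exported volume
  #  ("volume brick" above); it has to match what the clients request via
  #  "option remote-subvolume brick".)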
  subvolumes brick
end-volume
-------------------------------------------------------------------


On the client side we use the same servers, with this config:
-------------------------------------------------------------------
###################################################
##### file /etc/glusterfs/glusterfs-client-fast.vol
###################################################

###################################################
##### bricks
###################################################
volume remote1
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.10
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.11
  option remote-subvolume brick
end-volume

volume remote3
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.12
  option remote-subvolume brick
end-volume

volume remote4
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.13
  option remote-subvolume brick
end-volume

volume remote5
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.14
  option remote-subvolume brick
end-volume

volume remote6
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.15
  option remote-subvolume brick
end-volume

volume remote7
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.16
  option remote-subvolume brick
end-volume

volume remote8
  type protocol/client
  option transport-type ib-verbs
  option remote-host 10.10.10.17
  option remote-subvolume brick
end-volume


###################################################
##### replicates
###################################################
volume replicate1
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume replicate2
  type cluster/replicate
  subvolumes remote3 remote4
end-volume

volume replicate3
  type cluster/replicate
  subvolumes remote5 remote6
end-volume

volume replicate4
  type cluster/replicate
  subvolumes remote7 remote8
end-volume
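
# (Note, untested: as far as we know cluster/replicate also accepts a
#  "read-subvolume" option to prefer one replica for reads, e.g.
#  "option read-subvolume remote1" inside a replicate volume.)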


###################################################
##### distribute
###################################################

volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2 replicate3 replicate4
end-volume


###################################################
##### performance
###################################################

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes distribute
end-volume
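
# (Untested idea: performance/write-behind also has a "flush-behind"
#  option, e.g. "option flush-behind on", which is said to help with
#  workloads of many small files.)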

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
-------------------------------------------------------------------
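
For reference, one performance translator we have not added on the client
side is read-ahead. A rough sketch of how it could be stacked on top of
the cache volume - the option name (page-count) is taken from the 2.0.x
read-ahead documentation as we understand it and is untested here:

volume readahead
  type performance/read-ahead
  option page-count 4
  subvolumes cache
end-volume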

All servers run Ubuntu 8.04 LTS x64 with kernel 2.6.24-24-generic #1 SMP
Wed Apr 15 15:11:35 UTC 2009 x86_64 GNU/Linux and OFED 1.4 installed.


I have two questions about this setup:
-------------------------

1. We only get around 220 MB/s of throughput. What could we do to improve
this? Shouldn't the hardware be capable of more?
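
(One thing we could still check - sketched here, untested - is whether the
ib-verbs transport itself is the limit, e.g. by exporting the same brick
over tcp/IPoIB for comparison. As far as we understand, only a second
server volume with a different transport is needed; the name "server-tcp"
below is just for illustration:

volume server-tcp
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 10.10.10.*
  subvolumes brick
end-volume

and the protocol/client volumes would use "option transport-type tcp"
accordingly.)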

2. The reported size of an empty folder is 16.384 kB with the 8 bricks, and
when we add another 8 bricks it increases to 32.768 kB - so with 64 bricks
every newly created empty folder would show 131.072 kB. Is this normal, or
have we made a mistake somewhere?
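
(The numbers scale linearly with the brick count: 16.384 / 8 = 32.768 / 16
= 131.072 / 64 = 2.048 per brick, so it looks as if the reported folder
size is simply the per-brick directory size summed over all bricks.)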


Many thanks in advance for your help, and best regards,
Christian Marnitz




