[Gluster-users] NUFA and benchmarks in Amazon EC2 cloud

Brian Koloszyc brian at creativemerch.com
Thu Jul 30 18:39:48 UTC 2009


Hi,

Can someone correct me if I'm wrong in my understanding of GlusterFS? I currently have GlusterFS up and running on two Amazon EC2 instances with replication and NUFA; my vol files are below. The client is mounted on /san, and if I create a file in /san on one server it replicates fine to the other server.

I'm confused, though, about how NUFA is supposed to work. Is it supposed to give preference to the locally attached drive? My benchmarks are not showing this behavior. Theoretically, should GlusterFS perform as fast as a directly attached RAID-0 drive? My benchmarks show a directly attached RAID-0 volume formatted with XFS writing at about 140 MB/s, while the GlusterFS mount (NUFA over the same RAID-0 storage) writes at about 45 MB/s, which is about the same speed as NFS.
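
For reference, a write test along these lines is representative of what I'm measuring (the file names, sizes, and the /raid0 path are just placeholders, not my exact invocation):

# mount the client volfile (GlusterFS 2.x style)
glusterfs -f /etc/glusterfs/gluster-client.vol /san

# ~1 GB sequential write through GlusterFS, flushed to disk before dd reports
dd if=/dev/zero of=/san/ddtest bs=1M count=1024 conv=fdatasync

# same write against the directly attached RAID-0/XFS mount for comparison
dd if=/dev/zero of=/raid0/ddtest bs=1M count=1024 conv=fdatasync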

# file: /etc/glusterfs/glusterfs-server.vol
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

-------------------------------------------------------------------

# file: /etc/glusterfs/gluster-client.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 10.208.11.223
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 10.208.9.156
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume nufa
  type cluster/nufa
  option local-volume-name remote2  # the brick local to this node; see the backquoted-hostname variant sketched after the vol files
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

-------------------------------------------------------------------
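
Side note on the comment in the nufa volume above: the shared-volfile variant it refers to uses backquotes so that each node substitutes its own hostname when the volfile is parsed. A rough sketch of that variant, assuming the brick volumes are named to match each host's hostname (node01 and node02 are placeholders):

volume nufa
  type cluster/nufa
  # backquotes: glusterfs runs `hostname` and uses its output as the value,
  # so all nodes can share one volfile (requires matching subvolume names)
  option local-volume-name `hostname`
  subvolumes node01 node02
end-volume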

Thanks for the help!  I'm looking forward to rolling this out to production, if I can get these write speeds up.

--Brian.

