[Gluster-users] another NFS vs glusterfs performance question

Matt M misterdot at gmail.com
Tue Apr 28 16:19:46 UTC 2009


Hi All,

I'm new to gluster and have a basic test environment of three old PCs: 
two servers and one client.  I've currently got it configured to do AFR 
on the two servers and HA on the client, according to this example:
http://www.gluster.org/docs/index.php/High-availability_storage_using_server-side_AFR

I'm trying to figure out why NFS seems significantly faster in my 
(basic) tests.  My config files and results are below.  Any help is 
greatly appreciated!

server1 (garnet) is SUSE SLES 9 (OES1), gluster 2.0.0rc7, FUSE 2.5.3, 
kernel 2.6.5-7.316-smp
server2 (or) is SUSE SLES 10, gluster 2.0.0rc7, FUSE 2.7.2, 
kernel 2.6.16.60-0.34-default
client1 (charon) is SUSE SLES 10, gluster 2.0.0rc7, FUSE 2.7.2, 
kernel 2.6.16.60-0.34-default

----
RESULTS
All tests were performed on the client.  /gfs is my glusterfs mount and /nfs 
is the gluster filesystem shared from server1 via NFS.
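(For reproducibility: the exact NFS export isn't shown here.  Assuming 
server1 exports the underlying /export/home directory, the client-side 
mount would be something like:

   mount -t nfs garnet:/export/home /nfs

Adjust accordingly if server1 actually exports the glusterfs mount instead.)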

GFS - no performance translators
time find /gfs/users/1 -type f
0.768u 1.460s 1:59.09 1.8%      0+0k 0+0io 0pf+0w

GFS - w/readahead and writeback:
time find /gfs/users/1 -type f
0.784u 1.860s 1:59.62 2.2%      0+0k 0+0io 0pf+0w

NFS
time find /nfs/users/1 -type f
0.584u 3.796s 0:37.96 11.5%     0+0k 0+0io 0pf+0w

NFS - after an umount/mount
time find /nfs/users/1 -type f
0.556u 3.224s 0:40.57 9.2%      0+0k 0+0io 0pf+0w

GFS - dd
Directory: /gfs/users
[charon: users]# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
16384000000 bytes (16 GB) copied, 7065.52 seconds, 2.3 MB/s
1.488u 13.440s 1:57:45.64 0.2%  0+0k 0+0io 1pf+0w

NFS - dd
(unmount NFS volume, remount it)
Directory: /nfs/users
[charon: users]# time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
2000000+0 records in
2000000+0 records out
16384000000 bytes (16 GB) copied, 1582.31 seconds, 10.4 MB/s
2.640u 125.299s 26:22.70 8.0%   0+0k 0+0io 5pf+0w
----

CONFIGS:
--
server1 (garnet)
[garnet: users]# cat /etc/gluster/glusterfsd-ha-afr.vol

# dataspace on garnet
volume gfs-ds
   type storage/posix
   option directory /export/home
end-volume

# posix locks
volume gfs-ds-locks
   type features/posix-locks
   subvolumes gfs-ds
end-volume

# dataspace on or
volume gfs-or-ds
   type protocol/client
   option transport-type tcp/client
   option remote-host 152.xx.xx.xx
   option remote-subvolume gfs-ds-locks
   option transport-timeout 10
end-volume

# automatic file replication translator for dataspace
volume gfs-ds-afr
   type cluster/afr
   subvolumes gfs-ds-locks gfs-or-ds    # local and remote dataspaces
end-volume

# the actual volume to export
volume users
   type performance/io-threads
   option thread-count 8
   subvolumes gfs-ds-afr
end-volume

# make the home volume available as a server share
volume server
  type protocol/server
  option transport-type tcp
  subvolumes users
  option auth.addr.gfs-ds-locks.allow 152.xx.xx.*
  option auth.addr.users.allow 152.xx.xx.*
end-volume
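(For completeness, each server runs glusterfsd against its volfile, 
roughly like this -- flags from memory:

   [garnet: ~]# glusterfsd -f /etc/gluster/glusterfsd-ha-afr.vol
)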

--
server2 (or)
[or: gluster]# cat /etc/gluster/glusterfsd-ha-afr.vol

# dataspace on or
volume gfs-ds
   type storage/posix
   option directory /export/home
end-volume

# posix locks
volume gfs-ds-locks
   type features/posix-locks
   subvolumes gfs-ds
end-volume

# dataspace on garnet
volume gfs-garnet-ds
   type protocol/client
   option transport-type tcp/client
   option remote-host 152.xx.xx.xx
   option remote-subvolume gfs-ds-locks
   option transport-timeout 10
end-volume

# automatic file replication translator for dataspace
volume gfs-ds-afr
   type cluster/afr
   subvolumes gfs-ds-locks gfs-garnet-ds    # local and remote dataspaces
end-volume

# the actual volume to export
volume users
   type performance/io-threads
   option thread-count 8
   subvolumes gfs-ds-afr
end-volume

# make the users volume available as a server share
volume server
  type protocol/server
  option transport-type tcp
  subvolumes users
  option auth.addr.gfs-ds-locks.allow 152.xx.xx.*
  option auth.addr.users.allow 152.xx.xx.*
end-volume
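(As a quick sanity check that AFR is actually replicating, using the paths 
implied by the configs above: a file created through the client mount 
should appear under /export/home on both servers:

   [charon: ~]# echo afr-test > /gfs/users/afr-test
   [garnet: ~]# ls -l /export/home/users/afr-test
   [or: ~]# ls -l /export/home/users/afr-test
)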

--
client1 (charon)
[root@charon:users]# cat /etc/gluster/glusterfs-ha.vol

# the exported volume to mount
volume gluster
   type protocol/client
   option transport-type tcp/client
   option remote-host gluster.example.com   # round-robin DNS
   option remote-subvolume users            # exported volume
   option transport-timeout 10
end-volume

# write-behind translator for gluster
volume writeback
   type performance/write-behind
   option aggregate-size 131072
   subvolumes gluster
end-volume
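(If I understand aggregate-size correctly, 131072 means write-behind 
flushes in 128 KB chunks, so my bs=8k dd above may be a worst case. 
An untested variant writing the same 16 GB with a matching block size 
would be:

   time sh -c "dd if=/dev/zero of=ddfile bs=128k count=125000 && sync"
)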

# read-ahead translator for gluster
volume readahead
   type performance/read-ahead
   option page-size 65536
   option page-count 16
   subvolumes writeback
end-volume

# io-cache translator for gluster
volume ioc
   type performance/io-cache
   option cache-size 128MB
   subvolumes readahead
end-volume
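I mount the client volfile with something like this (the topmost volume, 
ioc, being what actually gets mounted):

   [root@charon:~]# glusterfs -f /etc/gluster/glusterfs-ha.vol /gfs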

Thanks!
-Matt




