[Gluster-users] Problems to work with mounted directory in Gluster 3.2.7 -> switch to 3.4.2 ;-)

Bernhard Glomm bernhard.glomm@ecologic.eu
Wed Feb 19 17:43:03 UTC 2014


I would strongly recommend restarting fresh with gluster 3.4.2 from http://download.gluster.org/pub/gluster/glusterfs/3.4/
It works totally fine for me.
(Reinstall the VMs as slim as possible if you can.)

As a quick howto, consider this:



- We have 2 hardware machines (just desktop machines for a dev environment)
- both running ZoL (ZFS on Linux)
- create a zpool and a zfs filesystem on each
- create a gluster replica 2 volume between hostA and hostB (see the sketch after this list)
- install 3 VMs: vmachine0{4,5,6}
- vmachine0{4,5} each get a 100GB disk image file as /dev/vdb, which itself resides on the host-level gluster volume
- create an ext3 filesystem on vmachine0{4,5}:/dev/vdb1
- create a gluster replica 2 volume between vmachine04 and vmachine05 as shown below

(Obviously nobody would do that in any serious environment;
this is just to show that even a setup like that _would_ be possible!)


- run some benchmarks on that volume and compare the results to reference runs (raw zfs on the hardware machine, and /tmp)
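
A rough sketch of the host-side part (zpool, zfs filesystem, and the host-level replica volume); the pool name, volume name, and device path are placeholders, not taken from the original setup:

root@hostA:~ # zpool create tank /dev/sdb          # placeholder device
root@hostA:~ # zfs create tank/gluster
# repeat the zpool/zfs creation on hostB, then:
root@hostA:~ # gluster peer probe hostB
root@hostA:~ # gluster volume create host_mirror replica 2 hostA:/tank/gluster/brick hostB:/tank/gluster/brick
root@hostA:~ # gluster volume start host_mirror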

So:

root@vmachine04[/0]:~ # mkdir -p /srv/vdb1/gf_brick
root@vmachine04[/0]:~ # mount /dev/vdb1 /srv/vdb1/
root@vmachine04[/0]:~ # gluster peer probe vmachine05
peer probe: success

# now switch over to vmachine05 and do

root@vmachine05[/1]:~ # mkdir -p /srv/vdb1/gf_brick
root@vmachine05[/1]:~ # mount /dev/vdb1 /srv/vdb1/
root@vmachine05[/1]:~ # gluster peer probe vmachine04
peer probe: success
root@vmachine05[/1]:~ # gluster peer probe vmachine04
peer probe: success: host vmachine04 port 24007 already in peer list

# the peer probe from BOTH sides is often forgotten
# switch back to vmachine04 and continue with

root@vmachine04[/0]:~ # gluster peer status
Number of Peers: 1

Hostname: vmachine05
Port: 24007
Uuid: 085a1489-dabf-40bb-90c1-fbfe66539953
State: Peer in Cluster (Connected)
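
# Note: the volume creation itself is not part of this transcript; with the
# bricks shown below it would have been created and started roughly like this
# (reconstructed commands, not from the original session):
root@vmachine04[/0]:~ # gluster volume create layer_cake_volume replica 2 vmachine04:/srv/vdb1/gf_brick vmachine05:/srv/vdb1/gf_brick
root@vmachine04[/0]:~ # gluster volume start layer_cake_volume
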
root@vmachine04[/0]:~ # gluster volume info layer_cake_volume

Volume Name: layer_cake_volume
Type: Replicate
Volume ID: ef5299db-2896-4631-a2a8-d0082c1b25be
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmachine04:/srv/vdb1/gf_brick
Brick2: vmachine05:/srv/vdb1/gf_brick
root@vmachine04[/0]:~ # gluster volume status layer_cake_volume
Status of volume: layer_cake_volume
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick vmachine04:/srv/vdb1/gf_brick                         49152   Y       12778
Brick vmachine05:/srv/vdb1/gf_brick                         49152   Y       16307
NFS Server on localhost                                 2049    Y       12790
Self-heal Daemon on localhost                           N/A     Y       12791
NFS Server on vmachine05                                    2049    Y       16320
Self-heal Daemon on vmachine05                              N/A     Y       16319

There are no active volume tasks

# set any option you might like

root@vmachine04[/1]:~ # gluster volume set layer_cake_volume network.remote-dio enable
volume set: success
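
# (not part of the original session) the applied option should now show up
# under "Options Reconfigured:" at the end of the volume info output:
root@vmachine04[/1]:~ # gluster volume info layer_cake_volume
...
Options Reconfigured:
network.remote-dio: enable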

# go to vmachine06 and mount the volume
root@vmachine06[/1]:~ # mkdir /srv/layer_cake
root@vmachine06[/1]:~ # mount -t glusterfs -o backupvolfile-server=vmachine05 vmachine04:/layer_cake_volume /srv/layer_cake
root@vmachine06[/1]:~ # mount
vmachine04:/layer_cake_volume on /srv/layer_cake type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@vmachine06[/1]:~ # df -h
Filesystem                 Size  Used Avail Use% Mounted on
...
vmachine04:/layer_cake_volume   97G  188M   92G   1% /srv/layer_cake

All fine and stable
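
To make the mount on vmachine06 survive a reboot, an /etc/fstab entry along
these lines should do (a sketch, using the same options as the manual mount above):

vmachine04:/layer_cake_volume  /srv/layer_cake  glusterfs  defaults,_netdev,backupvolfile-server=vmachine05  0  0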



# now let's see how it tastes
# note: this postmark run is on /, NOT on the gluster-mounted layer_cake_volume!
# the postmark results for the gluster mount itself might be available tomorrow ;-)))
root@vmachine06[/1]:~ # postmark
PostMark v1.51 : 8/14/01
pm>set transactions 500000
pm>set number 200000
pm>set subdirectories 10000
pm>run
Creating subdirectories...Done
Creating files...Done
Performing transactions..........Done
Deleting files...Done
Deleting subdirectories...Done
Time:
        2314 seconds total
        2214 seconds of transactions (225 per second)

Files:
        450096 created (194 per second)
                Creation alone: 200000 files (4166 per second)
                Mixed with transactions: 250096 files (112 per second)
        249584 read (112 per second)
        250081 appended (112 per second)
        450096 deleted (194 per second)
                Deletion alone: 200192 files (3849 per second)
                Mixed with transactions: 249904 files (112 per second)

Data:
        1456.29 megabytes read (644.44 kilobytes per second)
        2715.89 megabytes written (1.17 megabytes per second)
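
# (hypothetical run, not from the original session) to repeat the same test
# directly against the gluster mount, point postmark's working directory at it
# with "set location" before running:
root@vmachine06[/1]:~ # postmark
PostMark v1.51 : 8/14/01
pm>set location /srv/layer_cake
pm>set transactions 500000
pm>set number 200000
pm>set subdirectories 10000
pm>run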

# reference
# running postmark on the hardware machine directly on zfs
#
#           /test # postmark
#           PostMark v1.51 : 8/14/01
#           pm>set transactions 500000
#           pm>set number 200000
#           pm>set subdirectories 10000
#           pm>run
#           Creating subdirectories...Done
#           Creating files...Done
#           Performing transactions..........Done
#           Deleting files...Done
#           Deleting subdirectories...Done
#           Time:
#           605 seconds total
#           549 seconds of transactions (910 per second)
#
#           Files:
#           450096 created (743 per second)
#           Creation alone: 200000 files (4255 per second)
#           Mixed with transactions: 250096 files (455 per second)
#           249584 read (454 per second)
#           250081 appended (455 per second)
#           450096 deleted (743 per second)
#           Deletion alone: 200192 files (22243 per second)
#           Mixed with transactions: 249904 files (455 per second)
#
#           Data:
#           1456.29 megabytes read (2.41 megabytes per second)
#           2715.89 megabytes written (4.49 megabytes per second)

dbench -D /srv/layer_cake 5

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX     195815     5.159   333.296
 Close         143870     0.793    93.619
 Rename          8310    10.922   123.096
 Unlink         39525     2.428   203.753
 Qpathinfo     177736     2.551   220.605
 Qfileinfo      31030     2.057   175.565
 Qfsinfo        32545     1.393   174.045
 Sfileinfo      15967     2.691   129.028
 Find           68664     9.629   185.739
 WriteX         96860     0.841   108.863
 ReadX         307834     0.511   213.602
 LockX            642     1.511    10.578
 UnlockX          642     1.541    10.137
 Flush          13712    12.853   405.383

Throughput 10.1832 MB/sec  5 clients  5 procs  max_latency=405.405 ms


# reference
dbench -D /tmp 5

 Operation      Count    AvgLat    MaxLat
 ----------------------------------------
 NTCreateX    3817455     0.119   499.847
 Close        2804160     0.005    16.000
 Rename        161655     0.322   459.790
 Unlink        770906     0.556   762.314
 Deltree           92    20.647    81.619
 Mkdir             46     0.003     0.012
 Qpathinfo    3460227     0.017    18.388
 Qfileinfo     606258     0.003    11.652
 Qfsinfo       634444     0.006    14.976
 Sfileinfo     310990     0.155   604.585
 Find         1337732     0.056    18.466
 WriteX       1902611     0.245   503.604
 ReadX        5984135     0.008    16.154
 LockX          12430     0.008     9.111
 UnlockX        12430     0.004     4.551
 Flush         267557     4.505   902.093

Throughput 199.664 MB/sec  5 clients  5 procs  max_latency=902.099 ms

