[Gluster-users] mnt-mail.log

Craig Carl craig at gluster.com
Wed May 12 00:37:46 UTC 2010


Bryan - 
   Your client (mount) vol file isn't ideal for large rsync operations. The changes I'm recommending should improve rsync performance when you are moving a lot of small files. Please back up the current file before making any changes. Comment out the "readahead" and "iocache" volumes. In the "quickread" volume, change "option cache-timeout" to 10 and set "max-file-size" to the size of the largest file you have many of, rounded up to the nearest multiple of 4. Since "quickread" currently sits on top of "iocache", also re-point its "subvolumes" line at "mirror-0" so the graph still loads.
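   For example, here is a rough sketch of how the performance section of mount.vol would look after those edits (the 1MB "max-file-size" below is only a placeholder; substitute the value you work out from your own mail files):

#volume readahead
#     type performance/read-ahead
#     option page-count 4
#     subvolumes mirror-0
#end-volume

#volume iocache
#     type performance/io-cache
#     option cache-size ...
#     option cache-timeout 1
#     subvolumes readahead
#end-volume

volume quickread
     type performance/quick-read
     option cache-timeout 10
     option max-file-size 1MB
     subvolumes mirror-0
end-volume

volume writebehind
     type performance/write-behind
     option cache-size 4MB
     subvolumes quickread
end-volume

   Note that "statprefetch" still sits on "writebehind", so nothing else in the graph needs to change.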
   After you have made the changes across all the storage nodes, please restart Gluster and measure the throughput again.
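   A sketch of that step, assuming the vol files live under /etc/glusterfs and the client is mounted at /mnt/mail (adjust the paths and the init script name to your install; they vary by distribution):

# on each storage node, restart the server daemon
/etc/init.d/glusterfsd restart

# remount the client against the edited vol file
umount /mnt/mail
glusterfs -f /etc/glusterfs/mount.vol /mnt/mail

# then time a sample of the copy again
time rsync -a /path/to/mailstore/ /mnt/mail/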

Thanks, 

Craig



----- Original Message -----
From: "Bryan McGuire" <bmcguire at newnet66.org>
To: "Craig Carl" <craig at gluster.com>
Cc: gluster-users at gluster.org
Sent: Tuesday, May 11, 2010 2:31:48 PM GMT -08:00 US/Canada Pacific
Subject: Re: [Gluster-users] mnt-mail.log

Here they are,


For msvr1 - 192.168.1.15

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs

volume posix1
   type storage/posix
   option directory /fs
end-volume

volume locks1
     type features/locks
     subvolumes posix1
end-volume

volume brick1
     type performance/io-threads
     option thread-count 8
     subvolumes locks1
end-volume

volume server-tcp
     type protocol/server
     option transport-type tcp
     option auth.addr.brick1.allow *
     option transport.socket.listen-port 6996
     option transport.socket.nodelay on
     subvolumes brick1
end-volume


## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs

# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
     type protocol/client
     option transport-type tcp
     option remote-host 192.168.1.16
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
     type protocol/client
     option transport-type tcp
     option remote-host 192.168.1.15
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

volume mirror-0
     type cluster/replicate
     subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

volume readahead
     type performance/read-ahead
     option page-count 4
     subvolumes mirror-0
end-volume

volume iocache
     type performance/io-cache
     option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
     option cache-timeout 1
     subvolumes readahead
end-volume

volume quickread
     type performance/quick-read
     option cache-timeout 1
     option max-file-size 64kB
     subvolumes iocache
end-volume

volume writebehind
     type performance/write-behind
     option cache-size 4MB
     subvolumes quickread
end-volume

volume statprefetch
     type performance/stat-prefetch
     subvolumes writebehind
end-volume


For msvr2 192.168.1.16

## file auto generated by /bin/glusterfs-volgen (export.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs

volume posix1
   type storage/posix
   option directory /fs
end-volume

volume locks1
     type features/locks
     subvolumes posix1
end-volume

volume brick1
     type performance/io-threads
     option thread-count 8
     subvolumes locks1
end-volume

volume server-tcp
     type protocol/server
     option transport-type tcp
     option auth.addr.brick1.allow *
     option transport.socket.listen-port 6996
     option transport.socket.nodelay on
     subvolumes brick1
end-volume


## file auto generated by /bin/glusterfs-volgen (mount.vol)
# Cmd line:
# $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs

# RAID 1
# TRANSPORT-TYPE tcp
volume 192.168.1.16-1
     type protocol/client
     option transport-type tcp
     option remote-host 192.168.1.16
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

volume 192.168.1.15-1
     type protocol/client
     option transport-type tcp
     option remote-host 192.168.1.15
     option transport.socket.nodelay on
     option transport.remote-port 6996
     option remote-subvolume brick1
end-volume

volume mirror-0
     type cluster/replicate
     subvolumes 192.168.1.15-1 192.168.1.16-1
end-volume

volume readahead
     type performance/read-ahead
     option page-count 4
     subvolumes mirror-0
end-volume

volume iocache
     type performance/io-cache
     option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
     option cache-timeout 1
     subvolumes readahead
end-volume

volume quickread
     type performance/quick-read
     option cache-timeout 1
     option max-file-size 64kB
     subvolumes iocache
end-volume

volume writebehind
     type performance/write-behind
     option cache-size 4MB
     subvolumes quickread
end-volume

volume statprefetch
     type performance/stat-prefetch
     subvolumes writebehind
end-volume

Bryan McGuire
Senior Network Engineer
NewNet 66

918.231.8063
bmcguire at newnet66.org

On May 11, 2010, at 4:26 PM, Craig Carl wrote:

> Bryan -
>   Can you send your client and server vol files?
>
> Thanks,
>
> Craig
>
> -- 
> Craig Carl
> Sales Engineer
> Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Office - (408) 770-1884
> Gtalk - craig.carl at gmail.com
> Twitter - @gluster
>
> ----- Original Message -----
> From: "Bryan McGuire" <bmcguire at newnet66.org>
> To: gluster-users at gluster.org
> Sent: Tuesday, May 11, 2010 2:12:13 PM GMT -08:00 US/Canada Pacific
> Subject: [Gluster-users] mnt-mail.log
>
> Hello,
>
> I have GlusterFS 3.0.4 set up in a two-node replication, and it appears
> to be working just fine. However, I am using rsync to move over 350 GB
> of email files and the process is very slow.
>
> I have noticed the following in the file
> /var/log/glusterfs/mnt-mail.log. Could someone explain what these
> lines mean? Thanks.
>
> [2010-05-11 15:41:51] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/_outgoing/retry/201005100854105298-1273614110_8.tmp) inode (ptr=0xa235c70, ino=808124434, gen=5468694309383486383) found conflict (ptr=0x2aaaea26c290, ino=808124434, gen=5468694309383486383)
> [2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/_outgoing/retry/201005101016464462-1273614395_8.tmp) inode (ptr=0x2aaaf07f5550, ino=808124438, gen=5468694309383486385) found conflict (ptr=0x82e4420, ino=808124438, gen=5468694309383486385)
> [2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/_outgoing/retry/201005100830599960-1273614395_8.tmp) inode (ptr=0x2aaac01da520, ino=808124430, gen=5468694309383486381) found conflict (ptr=0x60d7a90, ino=808124430, gen=5468694309383486381)
> [2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/_outgoing/retry/201005101417175132-1273614396_8.tmp) inode (ptr=0x2aaaf07f5550, ino=808124446, gen=5468694309383486389) found conflict (ptr=0x8eb16e0, ino=808124446, gen=5468694309383486389)
> [2010-05-11 15:51:53] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/_outgoing/retry/201005100749045904-1273614665_8.tmp) inode (ptr=0x1ec11ee0, ino=808124420, gen=5468694309383486379) found conflict (ptr=0x2aaaea26bd30, ino=808124420, gen=5468694309383486379)
>
>
>
> Bryan
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


-- 

Craig Carl
Sales Engineer
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.carl at gmail.com
Twitter - @gluster

Join us for a Webinar on May 26:
Case Studies: Deploying Open Source Storage Clouds
http://www.gluster.com/files/installation-demo/demo.html
