[Gluster-users] mnt-mail.log - memory ???
Bryan McGuire
bmcguire at newnet66.org
Wed May 12 20:19:09 UTC 2010
Thanks for that, it seems to be working. I cannot tell yet whether it is
faster, because I have noticed something else.

The Gluster servers are running at or above 98% memory usage. They have
32 GB of memory each, so this seems a bit out of bounds.

I have read about a memory leak; does that apply to this situation?
Link here: http://comments.gmane.org/gmane.comp.file-systems.gluster.user/2701

Or is this just what Gluster does when copying a lot of small files?
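
(For reference, a rough way to check whether that memory is actually held
by the gluster processes or is mostly reclaimable Linux page cache; this is
only a sketch and assumes the server daemon is named glusterfsd and the
client mount process glusterfs:)

# Overall picture: the "-/+ buffers/cache" row shows real process usage
# versus page cache the kernel gives back under memory pressure.
free -m

# Resident memory (RSS, in kB) of the gluster daemons themselves.
ps -o pid,rss,vsz,args -C glusterfsd,glusterfs

Worth noting: the io-cache translator in the mount vol files below sizes its
cache with MemTotal/5120, i.e. roughly one fifth of RAM (about 6.4 GB on a
32 GB box), and quick-read plus the 4 MB write-behind window add their own
caching on top of that, so a large resident size for the client process is
not by itself proof of a leak.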
Bryan McGuire
Senior Network Engineer
NewNet 66
918.231.8063
bmcguire at newnet66.org
On May 11, 2010, at 11:48 PM, Craig Carl wrote:
> Bryan -
> Sorry about that. You still need a "subvolumes" value; it should be
> the next translator up the list, mirror-0. So -
>
> volume quickread
> type performance/quick-read
> option cache-timeout 10
> option max-file-size 64kB
> subvolumes mirror-0
> end-volume
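>
> (A rough way to sanity-check an edited client volfile before remounting
> the real volume, assuming the usual volfile location
> /etc/glusterfs/glusterfs.vol and a scratch mount point; graph problems
> such as the quick-read FATAL error quoted below show up immediately in
> the foreground output:)
>
> # Mount the edited volfile in the foreground with debug logging;
> # translator initialization errors are printed straight to the terminal.
> mkdir -p /mnt/voltest
> glusterfs --debug -f /etc/glusterfs/glusterfs.vol /mnt/voltest
> # Ctrl-C and "umount /mnt/voltest" when finished checking.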
>
>
>
>
>
> ----- Original Message -----
> From: "Bryan McGuire" <bmcguire at newnet66.org>
> To: "Craig Carl" <craig at gluster.com>
> Cc: gluster-users at gluster.org
> Sent: Tuesday, May 11, 2010 6:45:44 PM GMT -08:00 US/Canada Pacific
> Subject: Re: [Gluster-users] mnt-mail.log
>
> Done..... but now I have this error in /var/log/glusterfs/mnt-mail.log:
>
> [2010-05-11 20:40:21] E [quick-read.c:2194:init] quickread: FATAL:
> volume (quickread) not configured with exactly one child
> [2010-05-11 20:40:21] E [xlator.c:839:xlator_init_rec] quickread:
> Initialization of volume 'quickread' failed, review your volfile again
> [2010-05-11 20:40:21] E [glusterfsd.c:591:_xlator_graph_init]
> glusterfs: initializing translator failed
> [2010-05-11 20:40:21] E [glusterfsd.c:1394:main] glusterfs: translator
> initialization failed. exiting
>
> I did change the max-file-size option, but when I received the errors I
> put it back to 64kB.
>
> New glusterfs.vol
>
> ## file auto generated by /bin/glusterfs-volgen (mount.vol)
> # Cmd line:
> # $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs
>
> # RAID 1
> # TRANSPORT-TYPE tcp
> volume 192.168.1.16-1
> type protocol/client
> option transport-type tcp
> option remote-host 192.168.1.16
> option transport.socket.nodelay on
> option transport.remote-port 6996
> option remote-subvolume brick1
> end-volume
>
> volume 192.168.1.15-1
> type protocol/client
> option transport-type tcp
> option remote-host 192.168.1.15
> option transport.socket.nodelay on
> option transport.remote-port 6996
> option remote-subvolume brick1
> end-volume
>
> volume mirror-0
> type cluster/replicate
> subvolumes 192.168.1.15-1 192.168.1.16-1
> end-volume
>
> #volume readahead
> # type performance/read-ahead
> # option page-count 4
> # subvolumes mirror-0
> #end-volume
>
> #volume iocache
> # type performance/io-cache
> #  option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
> # option cache-timeout 1
> # subvolumes readahead
> #end-volume
>
> volume quickread
> type performance/quick-read
> option cache-timeout 10
> option max-file-size 64kB
> # subvolumes iocache
> end-volume
>
> volume writebehind
> type performance/write-behind
> option cache-size 4MB
> subvolumes quickread
> end-volume
>
> volume statprefetch
> type performance/stat-prefetch
> subvolumes writebehind
> end-volume
>
>
>
>
> Bryan McGuire
>
>
> On May 11, 2010, at 7:37 PM, Craig Carl wrote:
>
>> Bryan -
>> Your mount (client) vol file isn't well suited to large rsync
>> operations. The changes I'm recommending should improve your rsync
>> performance if you are moving a lot of small files. Please back up the
>> current file before making any changes. You should comment out
>> "readahead" and "iocache". In the quickread section, change "option
>> cache-timeout" to 10 and change "max-file-size" to the size of the
>> largest file you have many of, rounded up to the nearest multiple of 4
>> (see the sketch below for one way to find that size).
>> After you have made the changes across all the storage nodes, please
>> restart Gluster and measure the throughput again.
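>>
>> (A sketch of one way to pick that max-file-size value and to re-measure
>> afterwards; it assumes GNU find, that the mail data is reachable under
>> the brick directory /fs, and that the volume is mounted at /mnt/mail:)
>>
>> # Look at the top of the file-size distribution (bytes) on a brick and
>> # round the size that covers most files up for max-file-size.
>> find /fs -type f -printf '%s\n' | sort -n | tail -20
>>
>> # After restarting glusterfsd on both servers and remounting the client,
>> # time a representative copy to compare throughput with the old config.
>> time rsync -a /path/to/sample/maildir/ /mnt/mail/test-copy/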
>>
>> Thanks,
>>
>> Craig
>>
>>
>>
>> ----- Original Message -----
>> From: "Bryan McGuire" <bmcguire at newnet66.org>
>> To: "Craig Carl" <craig at gluster.com>
>> Cc: gluster-users at gluster.org
>> Sent: Tuesday, May 11, 2010 2:31:48 PM GMT -08:00 US/Canada Pacific
>> Subject: Re: [Gluster-users] mnt-mail.log
>>
>> Here they are,
>>
>>
>> For msvr1 - 192.168.1.15
>>
>> ## file auto generated by /bin/glusterfs-volgen (export.vol)
>> # Cmd line:
>> # $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs
>>
>> volume posix1
>> type storage/posix
>> option directory /fs
>> end-volume
>>
>> volume locks1
>> type features/locks
>> subvolumes posix1
>> end-volume
>>
>> volume brick1
>> type performance/io-threads
>> option thread-count 8
>> subvolumes locks1
>> end-volume
>>
>> volume server-tcp
>> type protocol/server
>> option transport-type tcp
>> option auth.addr.brick1.allow *
>> option transport.socket.listen-port 6996
>> option transport.socket.nodelay on
>> subvolumes brick1
>> end-volume
>>
>>
>> ## file auto generated by /bin/glusterfs-volgen (mount.vol)
>> # Cmd line:
>> # $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs
>>
>> # RAID 1
>> # TRANSPORT-TYPE tcp
>> volume 192.168.1.16-1
>> type protocol/client
>> option transport-type tcp
>> option remote-host 192.168.1.16
>> option transport.socket.nodelay on
>> option transport.remote-port 6996
>> option remote-subvolume brick1
>> end-volume
>>
>> volume 192.168.1.15-1
>> type protocol/client
>> option transport-type tcp
>> option remote-host 192.168.1.15
>> option transport.socket.nodelay on
>> option transport.remote-port 6996
>> option remote-subvolume brick1
>> end-volume
>>
>> volume mirror-0
>> type cluster/replicate
>> subvolumes 192.168.1.15-1 192.168.1.16-1
>> end-volume
>>
>> volume readahead
>> type performance/read-ahead
>> option page-count 4
>> subvolumes mirror-0
>> end-volume
>>
>> volume iocache
>> type performance/io-cache
>>  option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
>> option cache-timeout 1
>> subvolumes readahead
>> end-volume
>>
>> volume quickread
>> type performance/quick-read
>> option cache-timeout 1
>> option max-file-size 64kB
>> subvolumes iocache
>> end-volume
>>
>> volume writebehind
>> type performance/write-behind
>> option cache-size 4MB
>> subvolumes quickread
>> end-volume
>>
>> volume statprefetch
>> type performance/stat-prefetch
>> subvolumes writebehind
>> end-volume
>>
>>
>> For msvr2 192.168.1.16
>>
>> ## file auto generated by /bin/glusterfs-volgen (export.vol)
>> # Cmd line:
>> # $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs
>>
>> volume posix1
>> type storage/posix
>> option directory /fs
>> end-volume
>>
>> volume locks1
>> type features/locks
>> subvolumes posix1
>> end-volume
>>
>> volume brick1
>> type performance/io-threads
>> option thread-count 8
>> subvolumes locks1
>> end-volume
>>
>> volume server-tcp
>> type protocol/server
>> option transport-type tcp
>> option auth.addr.brick1.allow *
>> option transport.socket.listen-port 6996
>> option transport.socket.nodelay on
>> subvolumes brick1
>> end-volume
>>
>>
>> ## file auto generated by /bin/glusterfs-volgen (mount.vol)
>> # Cmd line:
>> # $ /bin/glusterfs-volgen --name repstore1 --raid 1 192.168.1.15:/fs 192.168.1.16:/fs
>>
>> # RAID 1
>> # TRANSPORT-TYPE tcp
>> volume 192.168.1.16-1
>> type protocol/client
>> option transport-type tcp
>> option remote-host 192.168.1.16
>> option transport.socket.nodelay on
>> option transport.remote-port 6996
>> option remote-subvolume brick1
>> end-volume
>>
>> volume 192.168.1.15-1
>> type protocol/client
>> option transport-type tcp
>> option remote-host 192.168.1.15
>> option transport.socket.nodelay on
>> option transport.remote-port 6996
>> option remote-subvolume brick1
>> end-volume
>>
>> volume mirror-0
>> type cluster/replicate
>> subvolumes 192.168.1.15-1 192.168.1.16-1
>> end-volume
>>
>> volume readahead
>> type performance/read-ahead
>> option page-count 4
>> subvolumes mirror-0
>> end-volume
>>
>> volume iocache
>> type performance/io-cache
>>  option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
>> option cache-timeout 1
>> subvolumes readahead
>> end-volume
>>
>> volume quickread
>> type performance/quick-read
>> option cache-timeout 1
>> option max-file-size 64kB
>> subvolumes iocache
>> end-volume
>>
>> volume writebehind
>> type performance/write-behind
>> option cache-size 4MB
>> subvolumes quickread
>> end-volume
>>
>> volume statprefetch
>> type performance/stat-prefetch
>> subvolumes writebehind
>> end-volume
>>
>>
>>
>>
>>
>>
>> Bryan McGuire
>> Senior Network Engineer
>> NewNet 66
>>
>> 918.231.8063
>> bmcguire at newnet66.org
>>
>>
>>
>>
>>
>> On May 11, 2010, at 4:26 PM, Craig Carl wrote:
>>
>>> Bryan -
>>> Can you send your client and server vol files?
>>>
>>> Thanks,
>>>
>>> Craig
>>>
>>> --
>>> Craig Carl
>>> Sales Engineer
>>> Gluster, Inc.
>>> Cell - (408) 829-9953 (California, USA)
>>> Office - (408) 770-1884
>>> Gtalk - craig.carl at gmail.com
>>> Twitter - @gluster
>>>
>>> ----- Original Message -----
>>> From: "Bryan McGuire" <bmcguire at newnet66.org>
>>> To: gluster-users at gluster.org
>>> Sent: Tuesday, May 11, 2010 2:12:13 PM GMT -08:00 US/Canada Pacific
>>> Subject: [Gluster-users] mnt-mail.log
>>>
>>> Hello,
>>>
>>> I have GlusterFS 3.0.4 set up in a two-node replication, and it appears
>>> to be working just fine. However, I am using rsync to move over 350 GB
>>> of email files and the process is very slow.
>>>
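>>> (As an aside, rsync's default temp-file-plus-rename behavior adds extra
>>> create/rename metadata operations per file, which can hurt on a FUSE
>>> mount with many small files. A commonly suggested invocation for this
>>> kind of bulk copy, with source and destination paths assumed:)
>>>
>>> # -a         archive mode (recursive, preserves permissions/times/links)
>>> # -W         whole-file transfers; skips the delta algorithm on re-runs
>>> # --inplace  write directly to the destination (no temp file + rename)
>>> rsync -aW --inplace /local/mailstore/ /mnt/mail/
>>>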
>>> I have noticed the following in the file /var/log/glusterfs/mnt-
>>> mail.log. Could someone explain what these lines mean? Thanks
>>>
>>> [2010-05-11 15:41:51] W [fuse-bridge.c:491:fuse_entry_cbk]
>>> glusterfs-
>>> fuse: LOOKUP(/_outgoing/retry/201005100854105298-1273614110_8.tmp)
>>> inode (ptr=0xa235c70, ino=808124434, gen=5468694309383486383) found
>>> conflict (ptr=0x2aaaea26c290, ino=808124434,
>>> gen=5468694309383486383)
>>> [2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk]
>>> glusterfs-
>>> fuse: LOOKUP(/_outgoing/retry/201005101016464462-1273614395_8.tmp)
>>> inode (ptr=0x2aaaf07f5550, ino=808124438, gen=5468694309383486385)
>>> found conflict (ptr=0x82e4420, ino=808124438,
>>> gen=5468694309383486385)
>>> [2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk]
>>> glusterfs-
>>> fuse: LOOKUP(/_outgoing/retry/201005100830599960-1273614395_8.tmp)
>>> inode (ptr=0x2aaac01da520, ino=808124430, gen=5468694309383486381)
>>> found conflict (ptr=0x60d7a90, ino=808124430,
>>> gen=5468694309383486381)
>>> [2010-05-11 15:46:53] W [fuse-bridge.c:491:fuse_entry_cbk]
>>> glusterfs-
>>> fuse: LOOKUP(/_outgoing/retry/201005101417175132-1273614396_8.tmp)
>>> inode (ptr=0x2aaaf07f5550, ino=808124446, gen=5468694309383486389)
>>> found conflict (ptr=0x8eb16e0, ino=808124446,
>>> gen=5468694309383486389)
>>> [2010-05-11 15:51:53] W [fuse-bridge.c:491:fuse_entry_cbk]
>>> glusterfs-
>>> fuse: LOOKUP(/_outgoing/retry/201005100749045904-1273614665_8.tmp)
>>> inode (ptr=0x1ec11ee0, ino=808124420, gen=5468694309383486379) found
>>> conflict (ptr=0x2aaaea26bd30, ino=808124420,
>>> gen=5468694309383486379)
>>>
>>>
>>>
>>> Bryan
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>> --
>> Craig Carl
>> Sales Engineer
>> Gluster, Inc.
>> Cell - (408) 829-9953 (California, USA)
>> Office - (408) 770-1884
>> Gtalk - craig.carl at gmail.com
>> Twitter - @gluster
>> Join us for a Webinar on May 26: Case Studies: Deploying Open Source Storage Clouds
>> http://www.gluster.com/files/installation-demo/demo.html
>
>
> --
> Craig Carl
> Sales Engineer; Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Office - (408) 770-1884
> Gtalk - craig.carl at gmail.com
> Twitter - @gluster
> Join us for a Webinar on May 26: Case Studies: Deploying Open Source Storage Clouds
> Installing Gluster Storage Platform, the movie!