[Gluster-devel] Major slowdown in cp performance in 1.4 branch
Anand Babu Periasamy
ab at gnu.org.in
Tue Aug 26 16:31:44 UTC 2008
Avati, the trace translator is loaded in this setup. It could have caused the slowdown or the crash!
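
For anyone checking their spec files: a trace translator sits in the stack
roughly like the sketch below (the volume names here are only illustrative,
not taken from Brent's spec); removing it from the chain, and repointing
whatever names it in its subvolumes, takes it out of the picture.

volume ns0
  type debug/trace
  subvolumes ns0-posix
end-volume
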
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://ab.freeshell.org]
The GNU Operating System [http://www.gnu.org]
Z RESEARCH Inc [http://www.zresearch.com]
Anand Avati wrote:
> Brent,
> do you have a non-AFR volume to compare the write speeds against?
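>
> For that comparison, a minimal single-brick client spec with no AFR in the
> stack would be roughly the following (host and remote volume names are
> placeholders; transport syntax as in the 1.3-era spec files, adjust for the
> branch in use):
>
>   volume plain
>     type protocol/client
>     option transport-type tcp/client
>     option remote-host server1
>     option remote-subvolume brick0
>   end-volume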
>
> avati
>
> 2008/8/26 Brent A Nelson <brent at phys.ufl.edu>
>
>> I don't seem to be getting the crashing namespace issue anymore with a tla
>> checkout from today and with posix-locks added to my namespace exports (which
>> are AFRed; AFR now needs locking, which sounds like a good thing).
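>>
>> For the record, loading posix-locks over a namespace export looks roughly
>> like this (volume names are illustrative; the translator type is the one
>> used in 1.3-era spec files):
>>
>>   volume ns0-locks
>>     type features/posix-locks
>>     subvolumes ns0-posix
>>   end-volume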
>>
>> Writes are still just as slow (3.3MBps), but dd reads are very fast
>> (~117-118MBps, apparently saturating my gigabit link).
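>>
>> To be sure those reads are really coming over the wire rather than out of the
>> client page cache, a cold-cache rerun along these lines can help (the
>> drop_caches step needs root and a 2.6.16 or newer kernel):
>>
>>   sync; echo 3 > /proc/sys/vm/drop_caches
>>   dd if=/beast/blah0 of=/dev/null bs=1M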
>>
>>
>> Thanks,
>>
>> Brent
>>
>> On Mon, 25 Aug 2008, Brent A Nelson wrote:
>>
>>> I went to try to test the read performance of that file and got:
>>> dd if=/beast/blah0 of=/dev/null bs=1M
>>> dd: opening `/beast/blah0': Input/output error
>>> ls -al /beast
>>> ls: cannot access /beast: No such file or directory
>>> df
>>> ...
>>> glusterfs 5697753088 4054016 5406548992 1% /beast
>>> df /beast
>>> df: `/beast': No such file or directory
>>> df: no file systems processed
>>>
>>> Looking through the processes, I see the namespace volumes both died.
>>>
>>> Here's the tail end of one log:
>>>
>>> 2008-08-25 17:57:43 N [trace.c:1117:trace_lookup] ns0: callid: 420040
>>> (*this=0x8052ff8, loc=0xbf99a9a0 {path=/blah0, inode=0x80a2070} )
>>> 2008-08-25 17:57:43 N [trace.c:535:trace_lookup_cbk] ns0: callid: 420040
>>> (*this=0x8052ff8, op_ret=0, op_errno=61, inode=0x80a2070, *buf=0xbf99a864
>>> {st_dev=65031, st_ino=28855, st_mode=33188, st_nlink=1, st_uid=0, st_gid=0,
>>> st_rdev=0, st_size=0, st_blksize=4096, st_blocks=0})
>>> 2008-08-25 17:57:43 N [trace.c:1505:trace_open] ns0: (*this=0x8052ff8,
>>> loc=0x80adf20 {path=/blah0, inode=0x80a2070}, flags=32768, fd=0x80988e8)
>>> 2008-08-25 17:57:43 N [trace.c:150:trace_open_cbk] ns0: (*this=0x8052ff8,
>>> op_ret=0, op_errno=0, *fd=0x80988e8)
>>> 2008-08-25 17:57:43 W [common-utils.c:156:gf_print_bytes] glusterfs: Total
>>> data (in bytes): transfered (47460701), received (41736499)
>>> pending frames:
>>> frame : type(1) op(40)
>>>
>>> Signal received: 11
>>> /lib/tls/i686/cmov/libc.so.6[0xb7d99128]
>>> /usr/lib/libglusterfs.so.0(default_gf_lk+0xb4)[0xb7ee8724]
>>> /usr/lib/glusterfs/1.4.0qa34/xlator/protocol/server.so(server_lk_common+0x6ff)[0xb7cf93df]
>>> /usr/lib/glusterfs/1.4.0qa34/xlator/protocol/server.so(server_gf_lk+0x48)[0xb7cf95d8]
>>> /usr/lib/glusterfs/1.4.0qa34/xlator/protocol/server.so(protocol_server_interpret+0xd6)[0xb7cf41e6]
>>> /usr/lib/glusterfs/1.4.0qa34/xlator/protocol/server.so(protocol_server_pollin+0xb3)[0xb7cf43e3]
>>> /usr/lib/glusterfs/1.4.0qa34/xlator/protocol/server.so(notify+0x51)[0xb7cf44e1]
>>> /usr/lib/glusterfs/1.4.0qa34/transport/socket.so[0xb74e7f3a]
>>> /usr/lib/libglusterfs.so.0[0xb7ef6a45]
>>> /usr/lib/libglusterfs.so.0(event_dispatch+0x21)[0xb7ef5661]
>>> glusterfsd(main+0x953)[0x804a163]
>>> /lib/tls/i686/cmov/libc.so.6(__libc_start_main+0xe0)[0xb7d84450]
>>> glusterfsd[0x80497a1]
>>> ---------
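>>>
>>> With matching debug symbols installed, a frame like server_lk_common+0x6ff can
>>> usually be mapped back to a source line with something along these lines:
>>>
>>>   gdb -batch -ex 'info line *(server_lk_common+0x6ff)' \
>>>       /usr/lib/glusterfs/1.4.0qa34/xlator/protocol/server.so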
>>>
>>>
>>> Thanks,
>>>
>>> Brent
>>>
>>> On Mon, 25 Aug 2008, Brent A Nelson wrote:
>>>
>>>> 3.2 MBps, which is far less than before, on a "dd if=/dev/zero
>>>> of=/beast/blah0 bs=1M count=10000"
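>>>>
>>>> To separate glusterfs/AFR overhead from raw disk speed, the same stream could
>>>> also be written directly to the exported backend directory on one of the
>>>> servers, e.g. (the backend path is just an example):
>>>>
>>>>   dd if=/dev/zero of=/export/beast/ddtest bs=1M count=10000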
>>>>
>>>> Thanks,
>>>>
>>>> Brent
>>>>
>>>> On Mon, 25 Aug 2008, Amar S. Tumballi wrote:
>>>>
>>>>> Brent,
>>>>> Can you check the dd speed too? One user on IRC reported very slow cp
>>>>> performance, but his dd performance was good.
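>>>>>
>>>>> If dd is fine but cp is slow, per-file overhead (lookup, create, setattr) is
>>>>> the usual suspect. A rough way to separate streaming throughput from
>>>>> metadata cost (file names are just examples):
>>>>>
>>>>>   time dd if=/dev/zero of=/beast/big bs=1M count=1000
>>>>>   time sh -c 'for i in `seq 1 1000`; do dd if=/dev/zero of=/beast/small.$i bs=4k count=1 2>/dev/null; done'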
>>>>>
>>>>> Regards,
>>>>>
>>>>> 2008/8/25 Brent A Nelson <brent at phys.ufl.edu>
>>>>>
>>>>>> A checkout today from the 1.4 branch seems to give terrible performance on
>>>>>> "cp -a". Something that took a little over 9 minutes from a checkout last
>>>>>> week now takes over an hour (it hasn't finished yet). CPU time consumed by
>>>>>> glusterfs and all the glusterfsd processes is quite a bit smaller than it
>>>>>> used to be; it looks like a recent patch is causing a substantial
>>>>>> performance issue...
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Brent
>>>>>>
>>>>>>
>>>>>
>>>>> --
>>>>> Amar Tumballi
>>>>> Gluster/GlusterFS Hacker
>>>>> [bulde on #gluster/irc.gnu.org]
>>>>> http://www.zresearch.com - Commoditizing Super Storage!
>>>>>
>>>>>
>
>
>