[Gluster-devel] Cascading different translators doesn't work as expected

yaomin @ gmail yangyaomin at gmail.com
Tue Jan 6 12:21:35 UTC 2009


Krishna,

    1. The version is 1.3.9.
    2. The client and server vol files are in the attachments.
    3. The gdb backtrace output is "No Stack" (see the note below).
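
A "No stack" reply from gdb usually means that no core file was actually loaded, either because core dumps are disabled or because there is no core at the path given. A minimal sketch of collecting a usable backtrace; the binary path is an assumption based on the /usr/local prefix in the log, and "which glusterfs" will confirm it:

    # enable core dumps in the shell that starts the client, then reproduce the crash
    ulimit -c unlimited

    # load the core against the matching client binary
    gdb /usr/local/sbin/glusterfs -c /core.<pid>
    (gdb) bt
    (gdb) thread apply all bt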

Thanks,
Yaomin

--------------------------------------------------
From: "Krishna Srinivas" <krishna at zresearch.com>
Sent: Tuesday, January 06, 2009 5:36 PM
To: "yaomin @ gmail" <yangyaomin at gmail.com>
Cc: <gluster-devel at nongnu.org>
Subject: Re: [Gluster-devel] Cascading different translators doesn't work as
expected

> Yaomin,
>
> Can you:
> * mention what version you are using
> * give the modified client and server vol file (to see if there are any 
> errors)
> * give gdb backtrace from the core file? "gdb -c /core.pid glusterfs"
> and then type "bt"
>
> Krishna
>
> On Tue, Jan 6, 2009 at 2:43 PM, yaomin @ gmail <yangyaomin at gmail.com> 
> wrote:
>> Krishna,
>>
>>     Thank you for your kind help before.
>>
>>     Following your advice, I have run into a new error. The storage node
>> has no log information, and the client's log shows the following:
>>
>> /lib64/libc.so.6[0x3fbb2300a0]
>> /usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(afr_setxattr+0x6a)[0x2aaaaaf0658a]
>> /usr/local/lib/glusterfs/1.3.9/xlator/cluster/stripe.so(notify+0x220)[0x2aaaab115c80]
>> /usr/local/lib/libglusterfs.so.0(default_notify+0x25)[0x2aaaaaab8f55]
>> /usr/local/lib/glusterfs/1.3.9/xlator/cluster/afr.so(notify+0x16d)[0x2aaaaaefc19d]
>> /usr/local/lib/glusterfs/1.3.9/xlator/protocol/client.so(notify+0x681)[0x2aaaaacebac1]
>> /usr/local/lib/libglusterfs.so.0(sys_epoll_iteration+0xbb)[0x2aaaaaabe14b]
>> /usr/local/lib/libglusterfs.so.0(poll_iteration+0x79)[0x2aaaaaabd509]
>> [glusterfs](main+0x66a)[0x4026aa]
>> /lib64/libc.so.6(__libc_start_main+0xf4)[0x3fbb21d8a4]
>> [glusterfs][0x401b69]
>> ---------
>>
>> [root at IP6 ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sda2             9.5G  6.8G  2.2G  76% /
>> /dev/sda1             190M   12M  169M   7% /boot
>> tmpfs                1006M     0 1006M   0% /dev/shm
>> /dev/sda4             447G  2.8G  422G   1% /locfs
>> /dev/sdb1             459G  199M  435G   1% /locfsb
>> df: `/mnt/new': Transport endpoint is not connected
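
The "Transport endpoint is not connected" error from df means the glusterfs client process died while its FUSE mount was still registered with the kernel, so the mount point is now stale. A minimal recovery sketch, assuming the usual 1.3-style client invocation; the vol file and log paths are assumptions:

    # clean up the stale FUSE mount left behind by the crashed client
    umount /mnt/new || fusermount -u /mnt/new

    # remount once the vol files are fixed
    glusterfs -f /etc/glusterfs/glusterfs-client.vol -l /var/log/glusterfs/client.log /mnt/new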
>>
>> Thanks,
>> Yaomin
>> --------------------------------------------------
>> From: "Krishna Srinivas" <krishna at zresearch.com>
>> Sent: Tuesday, January 06, 2009 1:09 PM
>> To: "yaomin @ gmail" <yangyaomin at gmail.com>
>> Cc: <gluster-devel at nongnu.org>
>> Subject: Re: [Gluster-devel] Cascading different translators doesn't work
>> as expected
>>
>>> Alfred,
>>> Your vol files are wrong. You need to remove all the volume
>>> definitions below "writeback" in the client vol file. In the server vol
>>> file the performance translator definitions have no effect. You also
>>> need to place the "features/locks" translator above "storage/posix"
>>> (a sketch follows this reply).
>>> Krishna
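
For reference, a minimal sketch of the server-side layering described above, with a locks translator between storage/posix and protocol/server. The volume names, export directory, and exact translator name (features/posix-locks in the 1.3.x series, features/locks in later releases) are assumptions rather than values from the attached vol files:

    volume brick-posix
      type storage/posix
      option directory /locfs/export        # export directory is an assumption
    end-volume

    volume brick
      type features/posix-locks             # called features/locks in later releases
      subvolumes brick-posix
    end-volume

    volume server
      type protocol/server
      option transport-type tcp/server
      option auth.ip.brick.allow *          # open auth, for testing only
      subvolumes brick
    end-volume

On the client side the idea is that "writeback" (performance/write-behind) ends up as the last volume defined, sitting on top of the cluster translators, with no further volumes below it, so it is the volume the mount point uses.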
>>>
>>> On Tue, Jan 6, 2009 at 8:51 AM, yaomin @ gmail <yangyaomin at gmail.com>
>>> wrote:
>>>> All,
>>>>
>>>>     This seems to be a difficult one.
>>>>
>>>>     There is a new problem when I tested.
>>>>
>>>>     When I kill all the storage nodes, the client still tries to send
>>>> data and doesn't quit.
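
This hang is likely because protocol/client keeps retrying the connection and holds outstanding calls until its transport timeout expires. If the release supports it, lowering that timeout makes pending calls fail sooner instead of blocking indefinitely; a sketch only, and the option name and value should be checked against the 1.3.9 protocol/client documentation:

    volume client1
      type protocol/client
      option transport-type tcp/client
      option remote-host 192.168.13.2       # host is an assumption
      option remote-subvolume brick
      option transport-timeout 30           # seconds before pending calls are failed
    end-volume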
>>>>
>>>> Thanks,
>>>> Alfred
>>>> From: yaomin @ gmail
>>>> Sent: Monday, January 05, 2009 10:52 PM
>>>> To: Krishna Srinivas
>>>> Cc: gluster-devel at nongnu.org
>>>> Subject: Re: [Gluster-devel] Cascading different translators doesn't
>>>> work as expected
>>>> Krishna,
>>>>     Thank you for your quick response.
>>>>     There are two log entries in the client's log file from when the
>>>> client was set up.
>>>>     2009-01-05 18:44:59 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
>>>>     2009-01-05 18:48:04 W [fuse-bridge.c:389:fuse_entry_cbk] glusterfs-fuse: 2: (34) / => 1 Rehashing 0/0
>>>>
>>>>   There is no information in the storage node's log file.
>>>>
>>>>   Although I changed the scheduler from ALU to RR, only storage nodes
>>>> No. 3 (192.168.13.5) and No. 4 (192.168.13.7) are doing any work (see
>>>> the note after this message).
>>>>
>>>>   Each machine has 2GB memory.
>>>>
>>>> Thanks,
>>>> Alfred
>>>> 
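
For context, ALU and RR are schedulers of the cluster/unify translator, which choose a subvolume for each newly created file. A minimal sketch of switching unify to round-robin; the volume names and the dedicated namespace volume are assumptions, since the attached vol files are not reproduced here:

    volume unify0
      type cluster/unify
      option namespace brick-ns             # a separate namespace volume, not one of the data bricks
      option scheduler rr                   # round-robin instead of alu
      subvolumes afr1 afr2
    end-volume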
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: client.txt
URL: <http://supercolony.gluster.org/pipermail/gluster-devel/attachments/20090106/4615a6be/attachment-0006.txt>
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: server.txt
URL: <http://supercolony.gluster.org/pipermail/gluster-devel/attachments/20090106/4615a6be/attachment-0007.txt>

