[Gluster-users] please remove me from the list. I am using LUSTRE now since glusterfs is fading...

michael at mjvitale.com
Tue Sep 29 12:34:05 UTC 2009


-----Original Message-----
From: gluster-users-bounces at gluster.org
[mailto:gluster-users-bounces at gluster.org] On Behalf Of
gluster-users-request at gluster.org
Sent: Tuesday, September 29, 2009 7:53 AM
To: gluster-users at gluster.org
Subject: Gluster-users Digest, Vol 17, Issue 47

Send Gluster-users mailing list submissions to
	gluster-users at gluster.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
	gluster-users-request at gluster.org

You can reach the person managing the list at
	gluster-users-owner at gluster.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."


Today's Topics:

   1. Re: is glusterfs DHT really distributed? (Anand Avati)
   2. glusterfsd unexpected termination (David Saez Padros)
   3. Re: is glusterfs DHT really distributed? (David Saez Padros)
   4. Re: is glusterfs DHT really distributed? (Mark Mielke)
   5. Re: AFR self-heal bug with rmdir (Directory not empty)
      (Corentin Chary)
   6. Re: is glusterfs DHT really distributed? (Vijay Bellur)
   7. Re: is glusterfs DHT really distributed? (Vijay Bellur)
   8. Re: glusterfsd unexpected termination (Vijay Bellur)
   9. Re: glusterfsd unexpected termination (David Saez Padros)
  10. gluster / fuse on CentOS 5.2 ? (Daniel Maher)


----------------------------------------------------------------------

Message: 1
Date: Tue, 29 Sep 2009 07:33:36 +0530
From: Anand Avati <avati at gluster.com>
Subject: Re: [Gluster-users] is glusterfs DHT really distributed?
To: Wei Dong <wdong.pku at gmail.com>
Cc: Gluster-users at gluster.org
Message-ID:
	<8bd4838e0909281903kef1380dx9fa9118e90f1004d at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

> http://www.gluster.com/community/documentation/index.php/Translators/cluster/distribute
>
> It seems to suggest that the default for 'lookup-unhashed' is 'on'.
>
> Perhaps try turning it 'off'?

Wei,
   There are two things we would like you to try. The first is what Mark
has just pointed out: 'option lookup-unhashed off' in distribute. The
second is 'option transport.socket.nodelay on' in each of your
protocol/client _and_ protocol/server volumes. Do let us know what
effect these changes have on your performance.
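
For reference, a minimal sketch of where these options sit in the
volfiles (the volume, brick and host names below are placeholders, not
taken from your setup):

# client volfile: one protocol/client per server, Nagle disabled
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host server1               # placeholder host
  option remote-subvolume brick1           # placeholder brick
  option transport.socket.nodelay on
end-volume

# distribute (DHT) over the clients, with the unhashed lookup off
volume dht0
  type cluster/distribute
  option lookup-unhashed off
  subvolumes client1 client2
end-volume

The same 'option transport.socket.nodelay on' line also goes in each
protocol/server volume on the server side.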

Avati


------------------------------

Message: 2
Date: Tue, 29 Sep 2009 09:30:37 +0200
From: David Saez Padros <david at ols.es>
Subject: [Gluster-users] glusterfsd unexpected termination
To: Gluster-users at gluster.org
Message-ID: <4AC1B79D.9000504 at ols.es>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi

This morning I noticed that the glusterfsd daemon on one of the
servers was not working; the last log entries were:

patchset: v2.0.4
signal received: 11
configuration details:argp 1
backtrace 1
bdb->cursor->get 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.0.4
/lib/libc.so.6[0x7ff43f926db0]
/usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so(unify_open+0xd8)[0x7ff43ecd40a8]
/usr/lib/glusterfs/2.0.4/xlator/features/locks.so(pl_open+0xc5)[0x7ff43eac6945]
/usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so(iot_open_wrapper+0xc0)[0x7ff43e8bc850]
/usr/lib/libglusterfs.so.0(call_resume+0x551)[0x7ff440083ac1]
/usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so(iot_worker_unordered+0x18)[0x7ff43e8baea8]
/lib/libpthread.so.0[0x7ff43fc4cf9a]
/lib/libc.so.6(clone+0x6d)[0x7ff43f9c156d]

It looks like it segfaulted. Is this a known bug, fixed in any recent
version?

-- 
Thanx & best regards ...

----------------------------------------------------------------
    David Saez Padros                http://www.ols.es
    On-Line Services 2000 S.L.       telf    +34 902 50 29 75
----------------------------------------------------------------




------------------------------

Message: 3
Date: Tue, 29 Sep 2009 09:39:53 +0200
From: David Saez Padros <david at ols.es>
Subject: Re: [Gluster-users] is glusterfs DHT really distributed?
To: Anand Avati <avati at gluster.com>
Cc: Gluster-users at gluster.org
Message-ID: <4AC1B9C9.4070304 at ols.es>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi

> The
> second is 'option transport.socket.nodelay on' in each of your
> protocol/client _and_ protocol/server volumes.

Where is this option documented?

-- 
Thanx & best regards ...

----------------------------------------------------------------
    David Saez Padros                http://www.ols.es
    On-Line Services 2000 S.L.       telf    +34 902 50 29 75
----------------------------------------------------------------




------------------------------

Message: 4
Date: Tue, 29 Sep 2009 04:00:56 -0400
From: Mark Mielke <mark at mark.mielke.cc>
Subject: Re: [Gluster-users] is glusterfs DHT really distributed?
To: David Saez Padros <david at ols.es>
Cc: Anand Avati <avati at gluster.com>, Gluster-users at gluster.org
Message-ID: <4AC1BEB8.1010704 at mark.mielke.cc>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 09/29/2009 03:39 AM, David Saez Padros wrote:
>> The
>> second is 'option transport.socket.nodelay on' in each of your
>> protocol/client _and_ protocol/server volumes.
>
> Where is this option documented?

I'm a little surprised TCP_NODELAY isn't set by default. I set it on all
servers I write as a matter of principle.

The Nagle algorithm exists so that very simple servers can get
acceptable performance. The servers that benefit are the ones that
write individual bytes with no buffering.

Serious servers intended to perform well should be able to easily beat
the Nagle algorithm: writev(), sendmsg(), or even write(buffer) where
the buffer is built first should all beat it in terms of increased
throughput and reduced latency. On Linux, there is also TCP_CORK.
Unless GlusterFS does small writes, I suggest TCP_NODELAY be set by
default in future releases.
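
For anyone following along, disabling Nagle on a connected TCP socket
is a one-liner; a minimal C sketch (generic, not GlusterFS code):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on a connected TCP socket.
 * Returns 0 on success, -1 on error (errno is set). */
static int set_tcp_nodelay(int fd)
{
        int on = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}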

Just an opinion. :-)

Cheers,
mark

-- 
Mark Mielke<mark at mielke.cc>



------------------------------

Message: 5
Date: Tue, 29 Sep 2009 11:21:48 +0200
From: Corentin Chary <corentin.chary at gmail.com>
Subject: Re: [Gluster-users] AFR self-heal bug with rmdir (Directory
	not empty)
To: gluster-users at gluster.org
Message-ID:
	<71cd59b00909290221sf4f0867r4659cc119eccb423 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Sep 28, 2009 at 3:49 PM, Corentin Chary
<corentin.chary at gmail.com> wrote:
> Hi,
> I'm trying to use glusterfs with afr.
> My setup has 2 servers and 2 clients. / is mounted with user_xattr.
> It seems that if you shut down a server, remove a directory with one
> or more children, then restart the server, the changes won't be
> replicated, because rmdir is not recursive in afr-self-heal-entry.c.

The bug affects 2.0.2 and the current git (and probably all 2.x);
1.3.10 works as expected.
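
To illustrate what the self-heal path would need, here is a plain
user-space sketch of a depth-first recursive remove using nftw(3);
this is only an illustration, not the afr-self-heal-entry.c code:

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

/* nftw callback: with FTW_DEPTH, children are visited before their
 * parent, so remove() deletes the files first and then the emptied
 * directories. */
static int rm_entry(const char *path, const struct stat *sb,
                    int typeflag, struct FTW *ftwbuf)
{
        (void) sb; (void) typeflag; (void) ftwbuf;
        return remove(path);
}

/* Recursively remove 'path', depth-first, without following symlinks. */
int remove_recursive(const char *path)
{
        return nftw(path, rm_entry, 16, FTW_DEPTH | FTW_PHYS);
}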


-- 
Corentin Chary
http://xf.iksaif.net


------------------------------

Message: 6
Date: Tue, 29 Sep 2009 15:26:47 +0530
From: Vijay Bellur <vijay at gluster.com>
Subject: Re: [Gluster-users] is glusterfs DHT really distributed?
To: Mark Mielke <mark at mark.mielke.cc>
Cc: Anand Avati <avati at gluster.com>, Gluster-users at gluster.org
Message-ID: <4AC1D9DF.6090509 at gluster.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Mark Mielke wrote:
> I'm a little surprised TCP_NODELAY isn't set by default. I set it on
> all servers I write as a matter of principle.
>
> Serious servers intended to perform well should be able to easily beat
> the Nagle algorithm: writev(), sendmsg(), or even write(buffer) where
> the buffer is built first should all beat it in terms of increased
> throughput and reduced latency. On Linux, there is also TCP_CORK.
> Unless GlusterFS does small writes, I suggest TCP_NODELAY be set by
> default in future releases.
>
> Just an opinion. :-)

Thanks for this feedback, Mark. Before 2.0.3 there was no option to
turn off Nagle's algorithm; we introduced one in 2.0.3 and are debating
whether it should be on by default, since that means altering existing
behavior :-). We will certainly consider making it the default in our
upcoming releases.

Thanks,
Vijay



------------------------------

Message: 7
Date: Tue, 29 Sep 2009 15:38:54 +0530
From: Vijay Bellur <vijay at gluster.com>
Subject: Re: [Gluster-users] is glusterfs DHT really distributed?
To: David Saez Padros <david at ols.es>
Cc: Anand Avati <avati at gluster.com>, Gluster-users at gluster.org
Message-ID: <4AC1DCB6.9020108 at gluster.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

David Saez Padros wrote:
> Hi
>
>> The
>> second is 'option transport.socket.nodelay on' in each of your
>> protocol/client _and_ protocol/server volumes.
>
> Where is this option documented?
>
Thanks for pointing this out.

We introduced this as an experimental option in the 2.0.x releases and
plan to expose it as a regular option in the upcoming 2.1 release;
hence it will be documented in the 2.1 user manual.

Thanks,
Vijay


------------------------------

Message: 8
Date: Tue, 29 Sep 2009 15:43:44 +0530
From: Vijay Bellur <vijay at gluster.com>
Subject: Re: [Gluster-users] glusterfsd unexpected termination
To: David Saez Padros <david at ols.es>
Cc: Gluster-users at gluster.org
Message-ID: <4AC1DDD8.80204 at gluster.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

David Saez Padros wrote:
> patchset: v2.0.4
> signal received: 11
> configuration details:argp 1
> backtrace 1
> bdb->cursor->get 1
> db.h 1
> dlfcn 1
> fdatasync 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 2.0.4
> /lib/libc.so.6[0x7ff43f926db0]
> /usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so(unify_open+0xd8)[0x7ff43ecd40a8]
> /usr/lib/glusterfs/2.0.4/xlator/features/locks.so(pl_open+0xc5)[0x7ff43eac6945]
> /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so(iot_open_wrapper+0xc0)[0x7ff43e8bc850]
> /usr/lib/libglusterfs.so.0(call_resume+0x551)[0x7ff440083ac1]
> /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so(iot_worker_unordered+0x18)[0x7ff43e8baea8]
> /lib/libpthread.so.0[0x7ff43fc4cf9a]
> /lib/libc.so.6(clone+0x6d)[0x7ff43f9c156d]
>
> It looks like it segfaulted. Is this a known bug, fixed in any recent
> version?
>
This does not look like a known issue. If you have the core file, can 
you please send the complete backtrace across?
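
In case it is useful, a minimal recipe (the core path below is a
placeholder; adjust it for your system):

# load the core against the binary that produced it
gdb /usr/sbin/glusterfsd /path/to/core

# then, at the gdb prompt, a full backtrace from every thread:
(gdb) thread apply all bt full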

Thanks,
Vijay


------------------------------

Message: 9
Date: Tue, 29 Sep 2009 12:31:28 +0200
From: David Saez Padros <david at ols.es>
Subject: Re: [Gluster-users] glusterfsd unexpected termination
To: Vijay Bellur <vijay at gluster.com>
Cc: Gluster-users at gluster.org
Message-ID: <4AC1E200.9010008 at ols.es>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi

here you have it:

Core was generated by `/usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f /etc/glusterfs/glusterfsd.vo'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f7b568f30a8 in unify_open () from /usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so
(gdb) backtrace
#0  0x00007f7b568f30a8 in unify_open () from /usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so
#1  0x00007f7b566e5945 in pl_open () from /usr/lib/glusterfs/2.0.4/xlator/features/locks.so
#2  0x00007f7b564db850 in iot_open_wrapper () from /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so
#3  0x00007f7b57ca2ac1 in call_resume () from /usr/lib/libglusterfs.so.0
#4  0x00007f7b564d9ea8 in iot_worker_unordered () from /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so
#5  0x00007f7b5786bf9a in start_thread () from /lib/libpthread.so.0
#6  0x00007f7b575e056d in clone () from /lib/libc.so.6
#7  0x0000000000000000 in ?? ()

> David Saez Padros wrote:
>> patchset: v2.0.4
>> signal received: 11
>> configuration details:argp 1
>> backtrace 1
>> bdb->cursor->get 1
>> db.h 1
>> dlfcn 1
>> fdatasync 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 2.0.4
>> /lib/libc.so.6[0x7ff43f926db0]
>> /usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so(unify_open+0xd8)[0x7ff43ecd40a8]
>> /usr/lib/glusterfs/2.0.4/xlator/features/locks.so(pl_open+0xc5)[0x7ff43eac6945]
>> /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so(iot_open_wrapper+0xc0)[0x7ff43e8bc850]
>> /usr/lib/libglusterfs.so.0(call_resume+0x551)[0x7ff440083ac1]
>> /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so(iot_worker_unordered+0x18)[0x7ff43e8baea8]
>> /lib/libpthread.so.0[0x7ff43fc4cf9a]
>> /lib/libc.so.6(clone+0x6d)[0x7ff43f9c156d]
>>
>> It looks like it segfaulted. Is this a known bug, fixed in any recent
>> version?
>>
> This does not look like a known issue. If you have the core file, can 
> you please send the complete backtrace across?
> 
> Thanks,
> Vijay
> 

-- 
Greetings and see you soon ...

----------------------------------------------------------------
    David Saez Padros                http://www.ols.es
    On-Line Services 2000 S.L.       telf    +34 902 50 29 75
----------------------------------------------------------------




------------------------------

Message: 10
Date: Tue, 29 Sep 2009 13:52:19 +0200
From: Daniel Maher <dma+gluster at witbe.net>
Subject: [Gluster-users] gluster / fuse on CentOS 5.2 ?
To: gluster-users at gluster.org
Message-ID: <4AC1F4F3.30508 at witbe.net>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hello all,

Trying to get Gluster going on a CentOS 5.2 (32bit) machine.  The daemon
loads OK, but when the client tries to start, it spits this out in the log:

+------------------------------------------------------------------------------+
[2009-09-29 11:02:57] E [xlator.c:736:xlator_init_rec] xlator: Initialization of volume 'fuse' failed, review your volfile again
[2009-09-29 11:02:57] E [glusterfsd.c:513:_xlator_graph_init] glusterfs: initializing translator failed
[2009-09-29 11:02:57] E [glusterfsd.c:1217:main] glusterfs: translator initialization failed.  exiting

Naturally this indicates a Fuse problem, which makes sense, since CentOS
5.2 doesn't ship with Fuse.  So I went ahead and installed, well, more
or less everything - yet the error persists.  See below:

[root at A01 ~]# rpm -qa | grep ^fuse
fuse-libs-2.7.4glfs11-1
fuse-2.7.4glfs11-1
fuse-devel-2.7.4glfs11-1

[root at A01 ~]# rpm -qa | grep dkms
dkms-fuse-2.7.4-1.nodist.rf
dkms-2.0.22.0-1.el5.rf

[root at A01 ~]# rpm -qa | grep gluster
glusterfs-server-2.0.6-1
glusterfs-client-2.0.6-1
glusterfs-common-2.0.6-1

[root at A01 ~]# locate libfuse
/usr/lib/libfuse.so
/usr/lib/libfuse.so.2
/usr/lib/libfuse.so.2.7.4

[root at A01 ~]# ls -l /dev/fuse
crw------- 1 root root 10, 229 Sep 29 11:01 /dev/fuse

However...

[root at A01 ~]# modprobe fuse
FATAL: Module fuse not found.
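
One avenue I have not exhausted yet is forcing a dkms rebuild of the
module (a sketch; the 2.7.4 version string is assumed from the
dkms-fuse package name above):

[root at A01 ~]# dkms status
[root at A01 ~]# dkms build -m fuse -v 2.7.4
[root at A01 ~]# dkms install -m fuse -v 2.7.4
[root at A01 ~]# modprobe fuse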


Does anybody have any ideas?

Thanks.


-- 
Daniel Maher <dma+gluster at witbe.net>


------------------------------

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


End of Gluster-users Digest, Vol 17, Issue 47
*********************************************



