[Gluster-users] gluster 3.4alpha

Pranith Kumar Karampuri pkarampu at redhat.com
Wed Apr 24 13:01:00 UTC 2013


Michael,
     I posted the fix at http://review.gluster.com/4884. Please let me know the bug ID once you file the bug; the patch needs a bug ID to be merged upstream. Then I can backport it to 3.4.

Pranith.

----- Original Message -----
From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
To: "Michael" <michael.auckland at gmail.com>
Cc: gluster-users at gluster.org
Sent: Tuesday, April 23, 2013 3:24:26 PM
Subject: Re: [Gluster-users] gluster 3.4alpha

Michael,
   Could you please raise a bug [1] against the io-cache component in glusterfs?
Include the content of this mail as the bug description.

Pranith.

[1]-> bugzilla.redhat.com

----- Original Message -----
From: "Michael" <michael.auckland at gmail.com>
To: gluster-users at gluster.org
Sent: Tuesday, April 23, 2013 1:39:33 PM
Subject: [Gluster-users] gluster 3.4alpha


Hi, 

I'm using GlusterFS 3.4alpha2 (3.4alpha3 fails at the self-heal daemon) on Debian Wheezy.

I'm getting lots of these in syslog:

Apr 23 13:57:41 localhost GlusterFS[16976]: [2013-04-23 01:57:41.505530] C [mem-pool.c:497:mem_put] (-->/usr/lib/glusterfs/3.4.0alpha2/xlator/performance/io-cache.so(ioc_readv+0x3ab) [0x7f5090b2f34b] (-->/usr/lib/glusterfs/3.4.0alpha2/xlator/performance/io-cache.so(ioc_dispatch_requests+0x3a8) [0x7f5090b2ee88] (-->/usr/lib/glusterfs/3.4.0alpha2/xlator/performance/io-cache.so(ioc_frame_return+0x3c1) [0x7f5090b31c71]))) 0-mem-pool: mem_put called on freed ptr 0x1fbc874 of mem pool 0x1fbaa30

and /usr/sbin/glusterfsd for that volume stops working.
How can I debug and solve this problem?
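
(For readers following along: the critical line above means ioc_frame_return ended up calling mem_put on a frame that had already been returned to its mem pool, i.e. a double release. Below is a toy sketch of that pattern in plain C; it uses no GlusterFS headers and is not the actual patch from http://review.gluster.com/4884, just an illustration of the double-put and the usual NULL-after-release guard.)

/* Toy illustration only -- NOT GlusterFS code. */
#include <stdio.h>
#include <stdlib.h>

struct toy_pool {
    void *slot;    /* one-slot "pool", enough to show the pattern */
    int   in_use;  /* tracks whether the slot is checked out */
};

static void *toy_get(struct toy_pool *p)
{
    if (p->in_use)
        return NULL;
    p->in_use = 1;
    return p->slot;
}

static void toy_put(struct toy_pool *p, void *ptr)
{
    if (!p->in_use) {
        /* analogue of the "mem_put called on freed ptr" log in mem-pool.c */
        fprintf(stderr, "toy_put called on freed ptr %p of pool %p\n",
                ptr, (void *)p);
        return;
    }
    p->in_use = 0;
}

int main(void)
{
    struct toy_pool pool = { .slot = malloc(64), .in_use = 0 };
    void *frame = toy_get(&pool);

    toy_put(&pool, frame);   /* first release: fine */
    toy_put(&pool, frame);   /* second release: prints the warning above */

    /* usual guard: drop the reference as soon as it is released,
       so any later put fails visibly on NULL instead of silently
       reusing a pointer the pool no longer owns */
    frame = toy_get(&pool);
    toy_put(&pool, frame);
    frame = NULL;

    free(pool.slot);
    return 0;
}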

While it is running, everything seems fine.
gluster volume info 

Volume Name: VM 
Type: Replicate 
Volume ID: 72cf884d-83a4-459c-8601-b7b3e6ef7308 
Status: Started 
Number of Bricks: 1 x 3 = 3 
Transport-type: tcp 
Bricks: 
Brick1: vhoc1h:/mnt/kvmimages/data 
Brick2: vhoc2h:/mnt/kvmimages/data 
Brick3: vhoc3h:/mnt/kvmimages/data 

When it fails:

gluster> volume status
Status of volume: VM
Gluster process                                            Port    Online  Pid
------------------------------------------------------------------------------
Brick vhoc2h:/mnt/kvmimages/data                           N/A     N       4758
Brick vhoc3h:/mnt/kvmimages/data                           N/A     N       16718
NFS Server on localhost                                    38467   Y       16758
Self-heal Daemon on localhost                              N/A     Y       16769
NFS Server on c2e876af-784e-4b29-95dd-50f2d8d69ab8         38467   Y       4864
Self-heal Daemon on c2e876af-784e-4b29-95dd-50f2d8d69ab8   N/A     Y       4871

There are no active volume tasks
Status of volume: VMCOL
Gluster process                                            Port    Online  Pid
------------------------------------------------------------------------------
Brick vhoc2h:/mnt/kvmcollege/data                          47154   Y       4765
Brick vhoc3h:/mnt/kvmcollege/data                          47154   Y       16748
NFS Server on localhost                                    38467   Y       16758
Self-heal Daemon on localhost                              N/A     Y       16769
NFS Server on c2e876af-784e-4b29-95dd-50f2d8d69ab8         38467   Y       4864
Self-heal Daemon on c2e876af-784e-4b29-95dd-50f2d8d69ab8   N/A     Y       4871

There are no active volume tasks

and from /var/log/glusterfs/bricks/mnt-kvmimages-data.log:
[2013-04-22 20:02:44.758083] I [glusterfsd-mgmt.c:1583:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing 
pending frames: 
frame : type(0) op(0) 
frame : type(0) op(34) 
frame : type(0) op(34) 
frame : type(0) op(34) 
frame : type(0) op(34) 
frame : type(0) op(34) 
frame : type(0) op(34) 

Any clue on this? 

--
Michael 

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


