[Gluster-users] Problem

artur.k a.kaminski at o2.pl
Thu Apr 16 12:41:17 UTC 2009


I have a problem (4 servers and x clients) with Distributed Replicated Storage. The configuration was copied from the documentation. I have not tested on the latest glusterfs version ;-)
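
The volfiles follow the "Distributed Replicated Storage" example from the docs. Roughly like this (a sketch, not the exact files: the export path and hostnames are placeholders, though the log below does show volumes named "brick" and "p-locks"):

  # server side, one per server
  volume posix
    type storage/posix
    option directory /var/storage/glusterfs
  end-volume

  volume p-locks
    type features/locks
    subvolumes posix
  end-volume

  volume brick
    type performance/io-threads
    subvolumes p-locks
  end-volume

  volume server
    type protocol/server
    option transport-type tcp
    option auth.addr.brick.allow *
    subvolumes brick
  end-volume

  # client side: four bricks, two replicate pairs, distributed
  volume remote1
    type protocol/client
    option transport-type tcp
    option remote-host xx-storage-1a
    option remote-subvolume brick
  end-volume

  # remote2..remote4 are defined the same way for the other servers

  volume replicate1
    type cluster/replicate
    subvolumes remote1 remote2
  end-volume

  volume replicate2
    type cluster/replicate
    subvolumes remote3 remote4
  end-volume

  volume distribute
    type cluster/distribute
    subvolumes replicate1 replicate2
  end-volume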


glusterfs 2.0.0rc4 built on Mar 11 2009 09:26:17
Repository revision: cb602a1d7d41587c24379cb2636961ab91446f86 +
Copyright (c) 2006-2009 Z RESEARCH Inc. <http://www.zresearch.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


Linux xx-storage-1a 2.6.18-6-xen-amd64 #1 SMP Wed Dec 10 13:10:58 CET 2008 x86_64 GNU/Linux


On the server, in glusterfsd.log:


2009-04-16 14:29:49 D [server-protocol.c:5843:server_inodelk_resume] brick: 6740: INODELK '/xx/production/web/uploads/produkty/dodatki/skiny/8f (157420)'
2009-04-16 14:29:49 D [common.c:514:pl_setlk] p-locks: Unlock (pid=4856) 0 - 0 => OK
2009-04-16 14:29:49 D [inode.c:312:__inode_passivate] brick/inode: passivating inode(157420) lru=1025/1024 active=142 purge=0
2009-04-16 14:29:49 D [inode.c:336:__inode_retire] brick/inode: retiring inode(455567) lru=1024/1024 active=142 purge=1
2009-04-16 14:29:49 D [inode.c:112:__dentry_unhash] brick/inode: dentry unhashed a8 (455567)
2009-04-16 14:29:49 D [inode.c:125:__dentry_unset] brick/inode: unset dentry a8 (455567)
2009-04-16 14:29:49 D [inode.c:276:__inode_destroy] brick/inode: destroy inode(455567) [@0x52e4d0]
2009-04-16 14:29:49 D [inode.c:293:__inode_activate] brick/inode: activating inode(443638), lru=1023/1024 active=143 purge=0
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 397214: LOOKUP '428439/da'
2009-04-16 14:29:49 D [inode.c:312:__inode_passivate] brick/inode: passivating inode(443638) lru=1024/1024 active=142 purge=0
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 439605: LOOKUP '1/xx-dodatki'
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 439606: LOOKUP '146308/production'
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 439607: LOOKUP '146312/web'
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 439608: LOOKUP '146316/uploads'
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 439609: LOOKUP '146320/produkty'
2009-04-16 14:29:49 D [inode.c:471:__inode_create] brick/inode: create inode(0)
2009-04-16 14:29:49 D [inode.c:293:__inode_activate] brick/inode: activating inode(0), lru=1024/1024 active=143 purge=0
2009-04-16 14:29:49 D [server-protocol.c:3537:server_lookup_resume] brick: 439610: LOOKUP '428439/14'
2009-04-16 14:29:49 D [inode.c:94:__dentry_hash] brick/inode: dentry hashed 14 (429342)
2009-04-16 14:29:49 D [inode.c:312:__inode_passivate] brick/inode: passivating inode(429342) lru=1025/1024 active=142 purge=0
2009-04-16 14:29:49 D [inode.c:336:__inode_retire] brick/inode: retiring inode(460133) lru=1024/1024 active=142 purge=1
2009-04-16 14:29:49 D [inode.c:112:__dentry_unhash] brick/inode: dentry unhashed db61754b26f5ba1daf8df6dc387ae57dc94d292b.jpg (460133)
2009-04-16 14:29:49 D [inode.c:125:__dentry_unset] brick/inode: unset dentry db61754b26f5ba1daf8df6dc387ae57dc94d292b.jpg (460133)
2009-04-16 14:29:49 D [inode.c:276:__inode_destroy] brick/inode: destroy inode(460133) [@0x537c10]


and on the Xen console:

 [<ffffffff880b374e>] :reiserfs:sprintf_le_key+0x19/0x337
PGD 6ee23067 PUD 6f009067 PMD 0
Oops: 0000 [1] SMP
CPU 0
Modules linked in: nfs ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables nfsd exportfs lockd nfs_acl sunrpc ipv6 reiserfs fuse evdev pcspkr 8250 serial_core ext3 jbd mbcache dm_mirror dm_snapshot dm_mod raid1 md_mod
Pid: 5723, comm: glusterfsd Not tainted 2.6.18-6-xen-amd64 #1
RIP: e030:[<ffffffff880b374e>]  [<ffffffff880b374e>] :reiserfs:sprintf_le_key+0x19/0x337
RSP: e02b:ffff88006f39bb08  EFLAGS: 00010206
RAX: 0000000000000028 RBX: 0000000001000000 RCX: 0000000000000015
RDX: ffff88006f39bc18 RSI: 0000000001000000 RDI: ffffffff880d765f
RBP: ffff88006f39bbd8 R08: 00000000ffffffff R09: 0000000000000000
R10: 0000000001000000 R11: 0000000000000001 R12: ffffffff880d7a20
R13: ffffffff880d765f R14: 0000000000000001 R15: 0000000000000008
FS:  00002ad4535a4ae0(0000) GS:ffffffff804c4000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process glusterfsd (pid: 5723, threadinfo ffff88006f39a000, task ffff88007f01f140)
Stack:  ffff88006f39bbd8  ffff8800015a646b  ffff88006f39bbd8  ffffffff880d7a20
 ffffffff880d765f  ffffffff880b3b3f  ffff880002cb2d30  0000000000000000
 ffff8800458a0d30  ffffffff880d7a4c
Call Trace:
 [<ffffffff880b3b3f>] :reiserfs:prepare_error_buf+0xd3/0x560
 [<ffffffff8022f62e>] alloc_page_buffers+0x82/0xd4
 [<ffffffff880b36f4>] :reiserfs:reiserfs_warning+0x50/0x91
 [<ffffffff802a2112>] read_cache_page+0xf2/0x167
 [<ffffffff880c3c00>] :reiserfs:reiserfs_xattr_get+0x1e9/0x204
 [<ffffffff880c3131>] :reiserfs:reiserfs_getxattr+0x7e/0xbc
 [<ffffffff802c0f86>] vfs_getxattr+0x85/0xe3
 [<ffffffff802c1079>] getxattr+0x95/0xf6
 [<ffffffff8022d220>] mntput_no_expire+0x19/0x8b
 [<ffffffff8020df65>] do_path_lookup+0x268/0x28c
 [<ffffffff80207138>] kmem_cache_free+0x77/0xca
 [<ffffffff80223b92>] __user_walk_fd+0x48/0x53
 [<ffffffff802c116b>] sys_lgetxattr+0x43/0x61
 [<ffffffff8025be3a>] system_call+0x86/0x8b
 [<ffffffff8025bdb4>] system_call+0x0/0x8b


Code: 48 8b 46 08 48 c1 e8 3c 3c 03 77 14 0f b6 c0 41 bc 01 00 00
RIP  [<ffffffff880b374e>] :reiserfs:sprintf_le_key+0x19/0x337
 RSP <ffff88006f39bb08>
CR2: 0000000001000008
 <0>Kernel panic - not syncing: Fatal exception
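
The call trace shows the oops happening under a glusterfsd lgetxattr(2) call (sys_lgetxattr -> vfs_getxattr -> reiserfs_getxattr -> reiserfs_xattr_get), i.e. while reading an extended attribute from the reiserfs backend; the final crash is actually inside reiserfs's own warning printer (reiserfs_warning -> prepare_error_buf -> sprintf_le_key). If it is reproducible, something as small as the sketch below should hit the same kernel path. The xattr name is an assumption (glusterfs keeps its metadata under trusted.*, and reading that namespace needs root). Careful: on the affected box this may panic it again.

  /* xattr_repro.c: read one trusted.* xattr via lgetxattr(2),
   * the same syscall glusterfsd is making in the trace above.
   * build: gcc -o xattr_repro xattr_repro.c; run as root. */
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <sys/types.h>
  #include <sys/xattr.h>

  int main(int argc, char *argv[])
  {
      char value[4096];
      const char *path = argc > 1 ? argv[1] : "/var/storage/glusterfs";
      /* "trusted.glusterfs.test" is a guess at a key glusterfs uses */
      ssize_t n = lgetxattr(path, "trusted.glusterfs.test",
                            value, sizeof(value));
      if (n < 0)
          fprintf(stderr, "lgetxattr: %s\n", strerror(errno));
      else
          printf("got %zd bytes\n", n);
      return 0;
  }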


I copied the data with "cp -r /var/storage/glusterfs /test" and there were no problems with the filesystem.
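
Note, though, that plain cp -r does not read extended attributes, so the copy never issues the lgetxattr calls that glusterfsd does, and a clean copy does not rule out a problem on the xattr path. A way to exercise exactly the operation cp skipped, on one backend file (a hedged sketch, same caveat as above; run as root so the trusted.* names are visible):

  /* xattr_walk.c: list and read every xattr on one backend file,
   * i.e. the operation cp -r skips but glusterfsd performs.
   * build: gcc -std=gnu99 -o xattr_walk xattr_walk.c */
  #include <stdio.h>
  #include <string.h>
  #include <errno.h>
  #include <sys/types.h>
  #include <sys/xattr.h>

  int main(int argc, char *argv[])
  {
      char names[8192], value[4096];
      if (argc < 2) {
          fprintf(stderr, "usage: %s <backend-file>\n", argv[0]);
          return 1;
      }
      ssize_t len = llistxattr(argv[1], names, sizeof(names));
      if (len < 0) {
          fprintf(stderr, "llistxattr: %s\n", strerror(errno));
          return 1;
      }
      /* the list is a run of NUL-terminated names */
      for (char *p = names; p < names + len; p += strlen(p) + 1) {
          ssize_t n = lgetxattr(argv[1], p, value, sizeof(value));
          printf("%s: %zd bytes\n", p, n);
      }
      return 0;
  }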





