[Gluster-devel] brick crash/hang with io-threads in 2.5 patch 240

Harris Landgarten harrisl at lhjonline.com
Wed Jun 27 11:59:13 UTC 2007


More info:

When the brick crashes, the client running the ls -lR also crashes.

This is the backtrace:

#0  0xb7f20383 in dict_del (this=0x1, key=0x80563a0 "client2") at dict.c:198
198       data_pair_t *pair = this->members[hashval];
(gdb) bt
#0  0xb7f20383 in dict_del (this=0x1, key=0x80563a0 "client2") at dict.c:198
#1  0xb75b3945 in notify (this=0x80563b0, event=3, data=0x8091810) at client-protocol.c:4052
#2  0xb7f27a27 in transport_notify (this=0x1, event=0) at transport.c:152
#3  0xb7f284a9 in sys_epoll_iteration (ctx=0xbfc4f574) at epoll.c:54
#4  0xb7f27afd in poll_iteration (ctx=0xbfc4f574) at transport.c:260
#5  0x0804a0eb in main (argc=6, argv=0xbfc4f654) at glusterfs.c:341
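
For what it's worth, this looks like a wild pointer rather than a NULL: dict_del is entered with this=0x1, so the dereference of this->members[hashval] at dict.c:198 has to fault. A minimal self-contained sketch of the failing pattern (the types and hash below are simplified stand-ins, not the real dict.c):

/* simplified stand-ins for dict_t / data_pair_t; not the real dict.c */
#include <stddef.h>

typedef struct data_pair {
  struct data_pair *next;
  char             *key;
} data_pair_t;

typedef struct {
  int           hash_size;
  data_pair_t **members;
} dict_t;

/* hypothetical hash, just enough for the sketch to compile */
static int hash_key (const char *key, int size)
{
  unsigned h = 0;
  while (*key)
    h = h * 31 + (unsigned char) *key++;
  return (int) (h % (unsigned) size);
}

void dict_del (dict_t *this, char *key)
{
  /* a NULL check alone would not save us: the crash shows this == 0x1,
   * i.e. the caller (notify in client-protocol.c) handed over a freed
   * or garbage pointer, and that is where the real fix belongs */
  if (this == NULL || key == NULL)
    return;

  int hashval = hash_key (key, this->hash_size);
  data_pair_t *pair = this->members[hashval];   /* faults when this == 0x1 */

  /* ... walk the chain and unlink the matching pair ... */
  (void) pair;
}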

----- Original Message -----
From: "Harris Landgarten" <harrisl at lhjonline.com>
To: "gluster-devel" <gluster-devel at nongnu.org>
Sent: Wednesday, June 27, 2007 7:54:45 AM (GMT-0500) America/New_York
Subject: [Gluster-devel] brick crash/hang with io-threads in 2.5 patch 240

Whenever I enable io-threads in one of my bricks, I can cause a crash.

On client1:

ls -lR /mnt/glusterfs

While this is running, on client2:

ls -l /mnt/glusterfs
ls: /mnt/glusterfs/secondary: Transport endpoint is not connected
total 4
?--------- ? ?      ?         ?            ? /mnt/glusterfs/backups
?--------- ? ?      ?         ?            ? /mnt/glusterfs/tmp

At this point the brick with io-threads has crashed:

2007-06-27 07:45:55 C [common-utils.c:205:gf_print_trace] debug-backtrace: Got signal (11), printing backtrace
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/libglusterfs.so.0(gf_print_trace+0x2d) [0xb7fabd4d]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: [0xbfffe420]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/glusterfs/1.3.0-pre5/xlator/protocol/server.so [0xb761436b]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/libglusterfs.so.0 [0xb7fa9d55]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/libglusterfs.so.0(call_resume+0x4f2) [0xb7fb2462]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/glusterfs/1.3.0-pre5/xlator/performance/io-threads.so [0xb7626770]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libpthread.so.0 [0xb7f823db]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libc.so.6(clone+0x5e) [0xb7f0c26


The brick is running on Fedora and it doesn't want to generate a core. Any suggestions?
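
The knobs I know of for coaxing a core out of it (assuming bash and the stock kernel settings; the /tmp path below is just an example):

# in the shell that starts glusterfsd, before launching it
ulimit -c unlimited

# see where the kernel writes cores (default is "core" in the daemon's cwd)
cat /proc/sys/kernel/core_pattern

# as root, point cores somewhere writable
echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern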

This is the spec file I used for the test:


### Export volume "brick" with the contents of the "/mnt/export" directory.
volume posix1
  type storage/posix                    # POSIX FS translator
  option directory /mnt/export        # Export this directory
end-volume

volume io-threads
  type performance/io-threads
  option thread-count 8
  subvolumes posix1
end-volume

### Add POSIX record locking support to the storage brick
volume brick
  type features/posix-locks
  option mandatory on          # enables mandatory locking on all files
  subvolumes io-threads
end-volume


### Add network serving capability to above brick.
volume server
  type protocol/server
  option transport-type tcp/server     # For TCP/IP transport
# option transport-type ib-sdp/server  # For Infiniband transport
# option bind-address 192.168.1.10     # Default is to listen on all interfaces
  option listen-port 6996              # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  subvolumes brick
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through "auth" option.
  option auth.ip.brick.allow *          # access to "brick" volume
end-volume
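
In case the client side matters, the clients mount with a plain protocol/client spec along these lines (the remote-host below is illustrative, not my actual address):

volume client
  type protocol/client
  option transport-type tcp/client     # must match the server transport
  option remote-host 192.168.1.10      # illustrative address of the brick
  option remote-port 6996
  option remote-subvolume brick        # the volume exported above
end-volume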


