[Gluster-devel] brick crash/hang with io-threads in 2.5 patch 240
Harris Landgarten
harrisl at lhjonline.com
Wed Jun 27 11:54:45 UTC 2007
Whenever I enable io-threads in one of my bricks, I can cause a crash.

On client1:

ls -lR /mnt/glusterfs

While this is running, on client2:

ls -l /mnt/glusterfs
ls: /mnt/glusterfs/secondary: Transport endpoint is not connected
total 4
?--------- ? ? ? ? ? /mnt/glusterfs/backups
?--------- ? ? ? ? ? /mnt/glusterfs/tmp
At this point the brick with io-threads has crashed:
2007-06-27 07:45:55 C [common-utils.c:205:gf_print_trace] debug-backtrace: Got signal (11), printing backtrace
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/libglusterfs.so.0(gf_print_trace+0x2d) [0xb7fabd4d]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: [0xbfffe420]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/glusterfs/1.3.0-pre5/xlator/protocol/server.so [0xb761436b]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/libglusterfs.so.0 [0xb7fa9d55]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/libglusterfs.so.0(call_resume+0x4f2) [0xb7fb2462]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /usr/lib/glusterfs/1.3.0-pre5/xlator/performance/io-threads.so [0xb7626770]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libpthread.so.0 [0xb7f823db]
2007-06-27 07:45:55 C [common-utils.c:207:gf_print_trace] debug-backtrace: /lib/libc.so.6(clone+0x5e) [0xb7f0c26
The brick is running on Fedora, and it doesn't want to generate a core file. Any suggestions?
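One thing worth checking before the next run: core dumps are often disabled by the shell's resource limits rather than by the daemon itself. A minimal sketch (the glusterfsd flags in the last comment are assumptions for this 1.3 build; adjust the spec path to your setup):

```shell
# Allow core dumps in the shell that starts the brick daemon
ulimit -c unlimited
ulimit -c    # should now report "unlimited"

# See where/how the kernel names core files (system-specific)
cat /proc/sys/kernel/core_pattern

# Or skip cores entirely and run the daemon in the foreground under gdb,
# then "bt" after the SIGSEGV (flags are assumptions for this build):
# gdb --args glusterfsd -f /etc/glusterfs/glusterfs-server.vol
```

Note that the limit only applies to processes started from that same shell, so it must be raised before launching glusterfsd.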
This is the spec file I used for the test:
### Export volume "brick" with the contents of the "/mnt/export" directory.
volume posix1
type storage/posix # POSIX FS translator
option directory /mnt/export # Export this directory
end-volume
volume io-threads
type performance/io-threads
option thread-count 8
subvolumes posix1
end-volume
### Add POSIX record locking support to the storage brick
volume brick
type features/posix-locks
option mandatory on # enables mandatory locking on all files
subvolumes io-threads
end-volume
### Add network serving capability to above brick.
volume server
type protocol/server
option transport-type tcp/server # For TCP/IP transport
# option transport-type ib-sdp/server # For Infiniband transport
# option bind-address 192.168.1.10 # Default is to listen on all interfaces
option listen-port 6996 # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
subvolumes brick
# NOTE: Access to any volume through protocol/server is denied by
# default. You need to explicitly grant access through "auth" option.
option auth.ip.brick.allow * # access to "brick" volume
end-volume
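For reference, the client spec I would pair with this server looks roughly like the following (the remote-host address is a placeholder for the brick server; the other options mirror the server side above):

```
### Client side: connect to the "brick" volume exported above
volume client
type protocol/client
option transport-type tcp/client # matches the server's tcp/server transport
option remote-host 192.168.1.10 # placeholder: address of the brick server
option remote-port 6996 # matches the server's listen-port
option remote-subvolume brick # the volume name allowed via auth.ip
end-volume
```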