[Gluster-devel] QEMU-libgfapi integration: glfs_pwritev_async() failing for iovec count>1

Anand Avati anand.avati at gmail.com
Sat Jul 14 17:37:23 UTC 2012


Thanks for reporting! Let me know if this fixes the problem --

----------------snip-----------------------
diff --git a/api/src/glfs-fops.c b/api/src/glfs-fops.c
index 7523ed2..b62942b 100644
--- a/api/src/glfs-fops.c
+++ b/api/src/glfs-fops.c
@@ -480,6 +480,7 @@ glfs_pwritev (struct glfs_fd *glfd, const struct iovec *iovec, int iovcnt,
        size_t          size = -1;
        struct iobref  *iobref = NULL;
        struct iobuf   *iobuf = NULL;
+       struct iovec    iov = {0, };

        __glfs_entry_fd (glfd);

@@ -510,7 +511,10 @@ glfs_pwritev (struct glfs_fd *glfd, const struct iovec *iovec, int iovcnt,

        iov_unload (iobuf_ptr (iobuf), iovec, iovcnt);

-       ret = syncop_writev (subvol, glfd->fd, iovec, iovcnt, offset,
+       iov.iov_base = iobuf_ptr (iobuf);
+       iov.iov_len = size;
+
+       ret = syncop_writev (subvol, glfd->fd, &iov, 1, offset,
                             iobref, flags);

        iobuf_unref (iobuf);
--------------------------------------------------------
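For context on what the hunk does: iov_unload() has already gathered the
caller's scattered iovec entries into the single iobuf, so the write is now
issued against that one contiguous buffer (a single iovec of length `size`)
instead of the original array. A rough plain-C illustration of the same
gather-then-single-write pattern, using only POSIX calls -- the helper name
flatten_and_pwrite is hypothetical and not a gluster internal:

----------------snip-----------------------
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

static ssize_t
flatten_and_pwrite (int fd, const struct iovec *iovec, int iovcnt, off_t offset)
{
        size_t   size = 0;
        char    *buf  = NULL;
        char    *p    = NULL;
        ssize_t  ret  = -1;
        int      i    = 0;

        /* total payload size, analogous to iov_length() */
        for (i = 0; i < iovcnt; i++)
                size += iovec[i].iov_len;

        buf = malloc (size);
        if (!buf)
                return -1;

        /* copy the scattered entries into one buffer, analogous to iov_unload() */
        p = buf;
        for (i = 0; i < iovcnt; i++) {
                memcpy (p, iovec[i].iov_base, iovec[i].iov_len);
                p += iovec[i].iov_len;
        }

        /* one contiguous buffer, one write -- the shape the patched
           glfs_pwritev() now hands to syncop_writev() */
        ret = pwrite (fd, buf, size, offset);

        free (buf);
        return ret;
}
--------------------------------------------------------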

Avati

On Sat, Jul 14, 2012 at 8:17 AM, Bharata B Rao <bharata.rao at gmail.com> wrote:

> Hi Avati,
>
> I am seeing a failure when using glfs_pwritev_async() with an iovec
> count greater than 1 from QEMU. I have verified from a sample
> application that it works correctly for iovec counts of 1 and 2.
> However, when QEMU issues writes with higher iovec counts, I see a
> problem whose log is pasted below. I am not sure it has enough detail
> to pinpoint the problem exactly, but I am sending it in the hope that
> the backtrace gives you some clues. The log shows how I start QEMU,
> the options used, and how all glfs_pwritev_async() calls with an iovec
> count of 1 succeed before the one with a higher iovec count fails. I
> am not sure I am doing everything right on the QEMU side, but thought
> of passing this by you. Sorry for the big log; you might want to
> scroll down to the end to see the failure-case backtrace.
>
> [root at bharata qemu]# gdb ./x86_64-softmmu/qemu-system-x86_64
> (gdb) set args --enable-kvm -nographic -m 1024 -smp 4 -drive
> file=gluster:bharata at 24007:test:/F16,format=gluster,cache=none -net
> nic,model=virtio -net user -redir tcp:2000::22
> (gdb) b qemu_gluster_aio_writev
> Breakpoint 1 at 0x877b9: file block/gluster.c, line 433.
> (gdb) r
> Starting program:
> /home/bharata/work/kvmfs/qemu/qemu/x86_64-softmmu/qemu-system-x86_64
> --enable-kvm -nographic -m 1024 -smp 4 -drive
> file=gluster:bharata at 24007:test:/F16,format=gluster,cache=none -net
> nic,model=virtio -net user -redir tcp:2000::22
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> [New Thread 0x7ffff4c42700 (LWP 27681)]
> [New Thread 0x7ffff4441700 (LWP 27682)]
> [New Thread 0x7ffff33e4700 (LWP 27683)]
> [New Thread 0x7ffff27c1700 (LWP 27684)]
> [New Thread 0x7fffebfff700 (LWP 27685)]
> [New Thread 0x7fffeb7fe700 (LWP 27686)]
> [New Thread 0x7fffeaffd700 (LWP 27687)]
> [New Thread 0x7fffea7fc700 (LWP 27688)]
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Linux version 3.1.0-7.fc16.x86_64
> (mockbuild at x86-07.phx2.fedoraproject.org) (gcc version 4.6.2 20111027
> (Red Hat 4.6.2-1) (GCC) ) #1 SMP Tue Nov 1 21:10:48 UTC 2011
> [    0.000000] Command line:
> BOOT_IMAGE=/boot/vmlinuz-3.1.0-7.fc16.x86_64
> root=UUID=3b5db79f-240c-453d-9266-62c69a3873f2 ro rd.md=0 rd.lvm=0
> rd.dm=0 KEYTABLE=us SYSFONT=latarcyrheb-sun16 rd.luks=0
> LANG=en_US.UTF-8 console=tty0 console=ttyS0
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  BIOS-e820: 0000000000000000 - 000000000009f400 (usable)
> [    0.000000]  BIOS-e820: 000000000009f400 - 00000000000a0000 (reserved)
> [    0.000000]  BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
> [    0.000000]  BIOS-e820: 0000000000100000 - 000000003fffe000 (usable)
> [    0.000000]  BIOS-e820: 000000003fffe000 - 0000000040000000 (reserved)
> [    0.000000]  BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved)
> [    0.000000]  BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
> <snip boot messages>
> [    0.923633] sd 0:0:0:0: [sda] Attached SCSI disk
> [    0.931185] Freeing unused kernel memory: 940k freed
> [    0.932797] Write protecting the kernel read-only data: 10240k
> [    0.937811] Freeing unused kernel memory: 1260k freed
> [    0.944614] Freeing unused kernel memory: 1584k freed
> [    1.079727] dracut: dracut-013-18.fc16
> [    1.109626] dracut: rd.luks=0: removing cryptoluks activation
> [    1.116721] dracut: rd.lvm=0: removing LVM activation
> [    1.131585] udevd[107]: starting version 173
> [    1.149477] input: ImExPS/2 Generic Explorer Mouse as
> /devices/platform/i8042/serio1/input/input2
> [    1.241982] dracut: Starting plymouth daemon
> [    1.371167] dracut: rd.dm=0: removing DM RAID activation
> [    1.386252] dracut: rd.md=0: removing MD RAID activation
> [    1.496211] Refined TSC clocksource calibration: 2591.580 MHz.
> [    2.494456] EXT4-fs (sda2): INFO: recovery required on readonly
> filesystem
> [    2.501674] EXT4-fs (sda2): write access will be enabled during recovery
> [Switching to Thread 0x7fffea7fc700 (LWP 27688)]
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0,
> sector_num=8656896, qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) p *qiov
> $1 = {iov = 0x7fffd0000950, niov = 1, nalloc = 1, size = 4096}
> (gdb) c
> Continuing.
> [   18.114729] EXT4-fs (sda2): recovery complete
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0,
> sector_num=8656896, qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) p *qiov
> $2 = {iov = 0x7fffd0000950, niov = 1, nalloc = 1, size = 4096}
> (gdb) c
> Continuing.
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0, sector_num=6144,
> qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) p *qiov
> $3 = {iov = 0x7fffd0000950, niov = 1, nalloc = 1, size = 4096}
> (gdb) c
> Continuing.
> [   28.245346] EXT4-fs (sda2): mounted filesystem with ordered data
> mode. Opts: (null)
> [   28.449980] dracut: Checking ext4:
> /dev/disk/by-uuid/3b5db79f-240c-453d-9266-62c69a3873f2
> [   28.458685] dracut: issuing e2fsck -a
> /dev/disk/by-uuid/3b5db79f-240c-453d-9266-62c69a3873f2
> [Switching to Thread 0x7fffebfff700 (LWP 27685)]
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0,
> sector_num=8656896, qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) c
> Continuing.
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0, sector_num=6144,
> qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) c
> Continuing.
> [   33.312288] dracut:
> /dev/disk/by-uuid/3b5db79f-240c-453d-9266-62c69a3873f2: clean,
> 123997/589824 files, 1987344/2359296 blocks
> [   33.327729] dracut: Remounting
> /dev/disk/by-uuid/3b5db79f-240c-453d-9266-62c69a3873f2 with -o ro
> [   33.715341] EXT4-fs (sda2): mounted filesystem with ordered data
> mode. Opts: (null)
> [   33.769651] dracut: Mounted root filesystem /dev/sda2
> [   34.046373] dracut: Switching root
> [   34.918839] type=1404 audit(1342277089.702:2): enforcing=1
> old_enforcing=0 auid=4294967295 ses=4294967295
> [   35.178576] type=1403 audit(1342277089.963:3): policy loaded
> auid=4294967295 ses=4294967295
> [   35.211767] systemd[1]: Successfully loaded SELinux policy in 382ms
> 569us.
> [   35.269569] systemd[1]: Successfully loaded SELinux database in
> 54ms 592us, size on heap is 475K.
> [   35.339585] systemd[1]: Relabelled /dev and /run in 34ms 343us.
> [   35.378108] systemd[1]: systemd 36 running in system mode. (+PAM
> +LIBWRAP +AUDIT +SELINUX +SYSVINIT +LIBCRYPTSETUP; fedora)
> [   35.444876] systemd[1]: Set hostname to <bharata-kvm>.
> [   37.458589] systemd-readahead-collect[355]: Disabling readahead
> collector due to execution in virtualized environment.
> [   38.113677] udevd[365]: starting version 173
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0, sector_num=6144,
> qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) c
> Continuing.
> [   42.901809] EXT4-fs (sda2): re-mounted. Opts: (null)
> [   43.628721] RPC: Registered named UNIX socket transport module.
> [   43.638120] RPC: Registered udp transport module.
> [   43.641807] RPC: Registered tcp transport module.
> [   43.643052] RPC: Registered tcp NFSv4.1 backchannel transport module.
> [   44.762586] microcode: CPU0 sig=0x623, pf=0x0, revision=0x1
> [   44.840511] rmmod[389]: ERROR: Module scsi_wait_scan does not exist
> in /proc/modules
> [   45.056936] microcode: CPU1 sig=0x623, pf=0x0, revision=0x1
> [   45.067566] microcode: CPU2 sig=0x623, pf=0x0, revision=0x1
> [   45.073029] microcode: CPU3 sig=0x623, pf=0x0, revision=0x1
> [   45.077156] microcode: Microcode Update Driver: v2.00
> <tigran at aivazian.fsnet.co.uk>, Peter Oruba
> [   45.276189] parport_pc 00:05: reported by Plug and Play ACPI
> [   45.291786] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
> [   45.520716] piix4_smbus 0000:00:01.3: SMBus Host Controller at
> 0xb100, revision 0
> [   45.562657] ppdev: user-space parallel port driver
> [   46.588864] fedora-storage-init[545]: Setting up Logical Volume
> Management:   No volume groups found
> [   46.630607] Adding 1044476k swap on /dev/sda3.  Priority:0
> extents:1 across:1044476k
> [Switching to Thread 0x7fffea7fc700 (LWP 27688)]
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0,
> sector_num=12860888, qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) p *qiov
> $4 = {iov = 0x7fffd0000950, niov = 1, nalloc = 1, size = 4096}
> (gdb) c
> Continuing.
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0,
> sector_num=8656896, qiov=
>     0x7fffd80009b0, nb_sectors=8, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) c
> Continuing.
> [Switching to Thread 0x7fffeb7fe700 (LWP 27686)]
>
> Breakpoint 1, qemu_gluster_aio_writev (bs=0x5555564c2ba0,
> sector_num=8656904, qiov=
>     0x7fffd80009b0, nb_sectors=856, cb=0x5555555d43f6
> <bdrv_co_io_em_complete>, opaque=
>     0x7ffff021fe80) at block/gluster.c:433
> 433     {
> (gdb) p *qiov
> $5 = {iov = 0x7fffd80026e0, niov = 106, nalloc = 106, size = 438272}
> (gdb) c
> Continuing.
> [   61.588698] fedora-storage-init[545]: [  OK  ]
> *** glibc detected *** qemu/x86_64-softmmu/qemu-system-x86_64: double
> free or corruption (!prev): 0x00007fff88001280 ***
> ======= Backtrace: =========
> /lib64/libc.so.6(+0x364fe7dda6)[0x7ffff6052da6]
> /lib64/libc.so.6(+0x364fe7f08e)[0x7ffff605408e]
> /usr/local/lib/libglusterfs.so.0(__gf_free+0x4d)[0x7ffff5d9058d]
>
> /usr/local/lib/glusterfs/3git/rpc-transport/socket.so(__socket_ioq_entry_free+0xa8)[0x7ffff29d3d66]
>
> /usr/local/lib/glusterfs/3git/rpc-transport/socket.so(__socket_ioq_flush+0x111)[0x7ffff29d3e79]
>
> /usr/local/lib/glusterfs/3git/rpc-transport/socket.so(socket_event_poll_err+0x118)[0x7ffff29d4202]
>
> /usr/local/lib/glusterfs/3git/rpc-transport/socket.so(socket_event_handler+0x2a9)[0x7ffff29d8ba3]
> /usr/local/lib/libglusterfs.so.0(+0x4e4b6)[0x7ffff5d8f4b6]
> /usr/local/lib/libglusterfs.so.0(+0x4e6c9)[0x7ffff5d8f6c9]
> /usr/local/lib/libglusterfs.so.0(event_dispatch+0x88)[0x7ffff5d8fa3c]
> /usr/local/lib/libgfapi.so.0(+0x4475)[0x7ffff7284475]
> /lib64/libpthread.so.0(+0x3650207d90)[0x7ffff6394d90]
> /lib64/libc.so.6(clone+0x6d)[0x7ffff60c5f5d]
> ======= Memory map: ========
> 555555554000-555555937000 r-xp 00000000 08:02 6607643
>   /home/bharata/work/kvmfs/qemu/qemu/x86_64-softmmu/qemu-system-x86_64
> 555555b37000-555555bea000 r--p 003e3000 08:02 6607643
>   /home/bharata/work/kvmfs/qemu/qemu/x86_64-softmmu/qemu-system-x86_64
> 555555bea000-555555c33000 rw-p 00496000 08:02 6607643
>   /home/bharata/work/kvmfs/qemu/qemu/x86_64-softmmu/qemu-system-x86_64
> 555555c33000-5555565ae000 rw-p 00000000 00:00 0
>  [heap]
> 5555565ae000-5555565be000 rw-p 00000000 00:00 0
>  [heap]
> 5555565be000-5555565d9000 rw-p 00000000 00:00 0
>  [heap]
> 5555565d9000-5555565e9000 rw-p 00000000 00:00 0
>  [heap]
> 5555565e9000-5555565f9000 rw-p 00000000 00:00 0
>  [heap]
> 7fff88000000-7fff88021000 rw-p 00000000 00:00 0
> 7fff88021000-7fff8c000000 ---p 00000000 00:00 0
> 7fff8fdff000-7fff8fe00000 rw-p 00000000 00:00 0
> 7fff8fe00000-7fffcfe00000 rw-p 00000000 00:00 0
> 7fffcfe00000-7fffd0000000 rw-p 00000000 00:00 0
> 7fffd0000000-7fffd0203000 rw-p 00000000 00:00 0
> 7fffd0203000-7fffd4000000 ---p 00000000 00:00 0
> 7fffd4000000-7fffd4021000 rw-p 00000000 00:00 0
> 7fffd4021000-7fffd8000000 ---p 00000000 00:00 0
> 7fffd8000000-7fffd8204000 rw-p 00000000 00:00 0
> 7fffd8204000-7fffdc000000 ---p 00000000 00:00 0
> 7fffdc000000-7fffdc203000 rw-p 00000000 00:00 0
> 7fffdc203000-7fffe0000000 ---p 00000000 00:00 0
> 7fffe0000000-7fffe0406000 rw-p 00000000 00:00 0
> 7fffe0406000-7fffe4000000 ---p 00000000 00:00 0
> 7fffe4000000-7fffe4021000 rw-p 00000000 00:00 0
> 7fffe4021000-7fffe8000000 ---p 00000000 00:00 0
> 7fffe93e6000-7fffe93fb000 r-xp 00000000 08:03 144305
>   /lib64/libgcc_s-4.6.3-20120306.so.1
> 7fffe93fb000-7fffe95fa000 ---p 00015000 08:03 144305
>   /lib64/libgcc_s-4.6.3-20120306.so.1
> 7fffe95fa000-7fffe95fb000 rw-p 00014000 08:03 144305
>   /lib64/libgcc_s-4.6.3-20120306.so.1
> 7fffe95fb000-7fffe9600000 rw-p 00000000 00:00 0
> 7fffe9600000-7fffe9e00000 rw-p 00000000 00:00 0
> 7fffe9e00000-7fffe9ffc000 rw-p 00000000 00:00 0
> 7fffe9ffc000-7fffe9ffd000 ---p 00000000 00:00 0
> 7fffe9ffd000-7fffea7fd000 rw-p 00000000 00:00 0
>   [stack:27688]
> 7fffea7fd000-7fffea7fe000 ---p 00000000 00:00 0
> 7fffea7fe000-7fffeaffe000 rw-p 00000000 00:00 0
>   [stack:27687]
> 7fffeaffe000-7fffeafff000 ---p 00000000 00:00 0
> 7fffeafff000-7fffeb7ff000 rw-p 00000000 00:00 0
>   [stack:27686]
> 7fffeb7ff000-7fffeb800000 ---p 00000000 00:00 0
> 7fffeb800000-7fffec000000 rw-p 00000000 00:00 0
>   [stack:27685]
> 7fffec000000-7fffec044000 rw-p 00000000 00:00 0
> 7fffec044000-7ffff0000000 ---p 00000000 00:00 0
> 7ffff0120000-7ffff03d3000 rw-p 00000000 00:00 0
> 7ffff03d3000-7ffff03f3000 rw-p 00000000 00:00 0
> 7ffff03f3000-7ffff03f4000 rw-p 00000000 00:00 0
> 7ffff0415000-7ffff0e9e000 rw-p 00000000 00:00 0
> 7ffff0e9e000-7ffff0eb9000 r-xp 00000000 08:03 265319
>   /usr/local/lib/glusterfs/3git/xlator/debug/io-stats.so.0.0.0
> 7ffff0eb9000-7ffff10b8000 ---p 0001b000 08:03 265319
>   /usr/local/lib/glusterfs/3git/xlator/debug/io-stats.so.0.0.0
> 7ffff10b8000-7ffff10ba000 rw-p 0001a000 08:03 265319
>   /usr/local/lib/glusterfs/3git/xlator/debug/io-stats.so.0.0.0
> 7ffff10ba000-7ffff10c7000 r-xp 00000000 08:03 264895
>   /usr/local/lib/glusterfs/3git/xlator/performance/md-cache.so.0.0.0
> 7ffff10c7000-7ffff12c7000 ---p 0000d000 08:03 264895
>   /usr/local/lib/glusterfs/3git/xlator/performance/md-cache.so.0.0.0
> 7ffff12c7000-7ffff12c8000 rw-p 0000d000 08:03 264895
>   /usr/local/lib/glusterfs/3git/xlator/performance/md-cache.so.0.0.0
> 7ffff12c8000-7ffff12e3000 r-xp 00000000 08:03 264894
>   /usr/local/lib/glusterfs/3git/xlator/performance/quick-read.so.0.0.0
> 7ffff12e3000-7ffff14e2000 ---p 0001b000 08:03 264894
>   /usr/local/lib/glusterfs/3git/xlator/performance/quick-read.so.0.0.0
> 7ffff14e2000-7ffff14e4000 rw-p 0001a000 08:03 264894
>   /usr/local/lib/glusterfs/3git/xlator/performance/quick-read.so.0.0.0
> 7ffff14e4000-7ffff14f8000 r-xp 00000000 08:03 264893
>   /usr/local/lib/glusterfs/3git/xlator/performance/io-cache.so.0.0.0
> 7ffff14f8000-7ffff16f7000 ---p 00014000 08:03 264893
>   /usr/local/lib/glusterfs/3git/xlator/performance/io-cache.so.0.0.0
> 7ffff16f7000-7ffff16f9000 rw-p 00013000 08:03 264893
>   /usr/local/lib/glusterfs/3git/xlator/performance/io-cache.so.0.0.0
> 7ffff16f9000-7ffff1705000 r-xp 00000000 08:03 264892
>   /usr/local/lib/glusterfs/3git/xlator/performance/read-ahead.so.0.0.0
> Program received signal SIGABRT, Aborted.
> [Switching to Thread 0x7ffff33e4700 (LWP 27683)]
> 0x00007ffff600b285 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install
> libgcc-4.6.3-2.fc16.x86_64
> (gdb) bt
> #0  0x00007ffff600b285 in raise () from /lib64/libc.so.6
> #1  0x00007ffff600cb9b in abort () from /lib64/libc.so.6
> #2  0x00007ffff604ca7e in __libc_message () from /lib64/libc.so.6
> #3  0x00007ffff6052da6 in malloc_printerr () from /lib64/libc.so.6
> #4  0x00007ffff605408e in _int_free () from /lib64/libc.so.6
> #5  0x00007ffff5d9058d in __gf_free (free_ptr=0x7fff88001280) at
> mem-pool.c:258
> #6  0x00007ffff29d3d66 in __socket_ioq_entry_free
> (entry=0x7fff88001280) at socket.c:586
> #7  0x00007ffff29d3e79 in __socket_ioq_flush (this=0x7fffec042820) at
> socket.c:606
> #8  0x00007ffff29d4202 in socket_event_poll_err (this=0x7fffec042820)
> at socket.c:680
> #9  0x00007ffff29d8ba3 in socket_event_handler (fd=17, idx=1,
> data=0x7fffec042820,
>     poll_in=1, poll_out=0, poll_err=16) at socket.c:1855
> #10 0x00007ffff5d8f4b6 in event_dispatch_epoll_handler
> (event_pool=0x5555564e07a0, events=
>     0x7fffec0008c0, i=0) at event.c:785
> #11 0x00007ffff5d8f6c9 in event_dispatch_epoll
> (event_pool=0x5555564e07a0) at event.c:847
> #12 0x00007ffff5d8fa3c in event_dispatch (event_pool=0x5555564e07a0)
> at event.c:947
> #13 0x00007ffff7284475 in glfs_poller (data=0x5555564c48e0) at glfs.c:408
> #14 0x00007ffff6394d90 in start_thread () from /lib64/libpthread.so.0
> #15 0x00007ffff60c5f5d in clone () from /lib64/libc.so.6
> (gdb)
>
> Regards,
> Bharata.
> --
> http://bharata.sulekha.com/blog/posts.htm,
> http://raobharata.wordpress.com/
>
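A standalone reproducer along the lines Bharata describes might look roughly
like the sketch below. It assumes the 2012-era libgfapi API (glfs_io_cbk
taking only fd, ret and data, and glfs_pwritev_async taking
iovec/count/offset/flags/cb/data); the volume name, server and file path are
placeholders, and the sleep() is a crude stand-in for proper completion
handling.

----------------snip-----------------------
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/uio.h>
#include <glusterfs/api/glfs.h>   /* "glfs.h" when building against the source tree */

static void
write_done (glfs_fd_t *fd, ssize_t ret, void *data)
{
        /* 2012-era callback signature: (fd, ret, data) */
        fprintf (stderr, "glfs_pwritev_async completed: ret=%zd\n", ret);
}

int
main (void)
{
        glfs_t       *fs = NULL;
        glfs_fd_t    *fd = NULL;
        struct iovec  iov[4];
        int           i  = 0;

        fs = glfs_new ("test");                                 /* volume name (placeholder) */
        glfs_set_volfile_server (fs, "tcp", "bharata", 24007);  /* server (placeholder) */
        if (glfs_init (fs) != 0)
                return 1;

        fd = glfs_open (fs, "/F16", O_RDWR);                    /* image path (placeholder) */
        if (!fd)
                return 1;

        for (i = 0; i < 4; i++) {                               /* iovec count > 1 */
                iov[i].iov_base = calloc (1, 4096);
                iov[i].iov_len  = 4096;
        }

        glfs_pwritev_async (fd, iov, 4, 0, 0, write_done, NULL);

        sleep (3);            /* crude wait; a real test would synchronize on the callback */
        glfs_close (fd);
        glfs_fini (fs);
        return 0;
}
--------------------------------------------------------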