[Gluster-users] crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Raghavendra G
raghavendra at gluster.com
Thu Apr 1 06:09:44 UTC 2010
Sabuj Pattanayek,
This issue seems to have been fixed in the latest 3.0.x releases. Can you try
with the latest release?
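In case it helps, a minimal sketch of checking whether an upgrade is needed, comparing the installed version string against a target release with `sort -V`. The "3.0.4" target below is only a placeholder; check the download page for the actual latest 3.0.x release.

```shell
#!/bin/sh
# Compare an installed glusterfs version against a target release.
# sort -V (GNU coreutils) orders version strings numerically.
installed="3.0.0"   # e.g. taken from: glusterfs --version | head -n 1
target="3.0.4"      # placeholder for the latest 3.0.x release
oldest=$(printf '%s\n' "$installed" "$target" | sort -V | head -n 1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$target" ]; then
    echo "upgrade available"
else
    echo "up to date"
fi
```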
regards,
On Wed, Mar 3, 2010 at 11:05 AM, Harshavardhana <harsha at gluster.com> wrote:
> On 03/02/2010 10:43 PM, Sabuj Pattanayek wrote:
>
>> Hi,
>>
>> I've got this strange problem where a striped endpoint crashes when I
>> use cp to copy files off of it, but not when I use rsync:
>>
>> [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/
>> cp: reading
>> `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py':
>> Software caused connection abort
>> cp: closing
>> `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py':
>> Transport endpoint is not connected
>>
>>
>> pending frames:
>> frame : type(1) op(READ)
>> frame : type(1) op(READ)
>>
>> patchset: 2.0.1-886-g8379edd
>> signal received: 11
>> time of crash: 2010-03-02 11:06:40
>> configuration details:
>> argp 1
>> backtrace 1
>> dlfcn 1
>> fdatasync 1
>> libpthread 1
>> llistxattr 1
>> setfsid 1
>> spinlock 1
>> epoll.h 1
>> xattr.h 1
>> st_atim.tv_nsec 1
>> package-string: glusterfs 3.0.0
>> /lib64/libc.so.6[0x3a66a30280]
>> /lib64/libpthread.so.0(pthread_spin_lock+0x2)[0x3a6760b0d2]
>> /usr/lib64/libglusterfs.so.0(iobref_merge+0x2f)[0x37af83fe71]
>>
>> /usr/lib64/glusterfs/3.0.0/xlator/cluster/stripe.so(stripe_readv_cbk+0x1ee)[0x2b55b16c1b68]
>>
>> /usr/lib64/glusterfs/3.0.0/xlator/performance/stat-prefetch.so(sp_readv_cbk+0xf5)[0x2b55b14a39d2]
>>
>> /usr/lib64/glusterfs/3.0.0/xlator/performance/quick-read.so(qr_readv+0x6a6)[0x2b55b128c209]
>>
>> /usr/lib64/glusterfs/3.0.0/xlator/performance/stat-prefetch.so(sp_readv+0x256)[0x2b55b14a3c4c]
>>
>> /usr/lib64/glusterfs/3.0.0/xlator/cluster/stripe.so(stripe_readv+0x5fc)[0x2b55b16c28cd]
>> /usr/lib64/glusterfs/3.0.0/xlator/mount/fuse.so[0x2b55b18d2665]
>> /usr/lib64/glusterfs/3.0.0/xlator/mount/fuse.so[0x2b55b18d88ff]
>> /lib64/libpthread.so.0[0x3a67606367]
>> /lib64/libc.so.6(clone+0x6d)[0x3a66ad2f7d]
>> ---------
>>
>> Here's the client configuration:
>>
>> volume client-stripe-1
>> type protocol/client
>> option transport-type ib-verbs
>> option remote-host gluster1
>> option remote-subvolume iothreads
>> end-volume
>>
>> volume client-stripe-2
>> type protocol/client
>> option transport-type ib-verbs
>> option remote-host gluster2
>> option remote-subvolume iothreads
>> end-volume
>>
>> volume client-stripe-3
>> type protocol/client
>> option transport-type ib-verbs
>> option remote-host gluster3
>> option remote-subvolume iothreads
>> end-volume
>>
>> volume client-stripe-4
>> type protocol/client
>> option transport-type ib-verbs
>> option remote-host gluster4
>> option remote-subvolume iothreads
>> end-volume
>>
>> volume client-stripe-5
>> type protocol/client
>> option transport-type ib-verbs
>> option remote-host gluster5
>> option remote-subvolume iothreads
>> end-volume
>>
>> volume readahead-gluster1
>> type performance/read-ahead
>> option page-count 4 # 2 is default
>> option force-atime-update off # default is off
>> subvolumes client-stripe-1
>> end-volume
>>
>> volume readahead-gluster2
>> type performance/read-ahead
>> option page-count 4 # 2 is default
>> option force-atime-update off # default is off
>> subvolumes client-stripe-2
>> end-volume
>>
>> volume readahead-gluster3
>> type performance/read-ahead
>> option page-count 4 # 2 is default
>> option force-atime-update off # default is off
>> subvolumes client-stripe-3
>> end-volume
>>
>> volume readahead-gluster4
>> type performance/read-ahead
>> option page-count 4 # 2 is default
>> option force-atime-update off # default is off
>> subvolumes client-stripe-4
>> end-volume
>>
>> volume readahead-gluster5
>> type performance/read-ahead
>> option page-count 4 # 2 is default
>> option force-atime-update off # default is off
>> subvolumes client-stripe-5
>> end-volume
>>
>> volume writebehind-gluster1
>> type performance/write-behind
>> option flush-behind on
>> subvolumes readahead-gluster1
>> end-volume
>>
>> volume writebehind-gluster2
>> type performance/write-behind
>> option flush-behind on
>> subvolumes readahead-gluster2
>> end-volume
>>
>> volume writebehind-gluster3
>> type performance/write-behind
>> option flush-behind on
>> subvolumes readahead-gluster3
>> end-volume
>>
>> volume writebehind-gluster4
>> type performance/write-behind
>> option flush-behind on
>> subvolumes readahead-gluster4
>> end-volume
>>
>> volume writebehind-gluster5
>> type performance/write-behind
>> option flush-behind on
>> subvolumes readahead-gluster5
>> end-volume
>>
>> volume quick-read-gluster1
>> type performance/quick-read
>> subvolumes writebehind-gluster1
>> end-volume
>>
>> volume quick-read-gluster2
>> type performance/quick-read
>> subvolumes writebehind-gluster2
>> end-volume
>>
>> volume quick-read-gluster3
>> type performance/quick-read
>> subvolumes writebehind-gluster3
>> end-volume
>>
>> volume quick-read-gluster4
>> type performance/quick-read
>> subvolumes writebehind-gluster4
>> end-volume
>>
>> volume quick-read-gluster5
>> type performance/quick-read
>> subvolumes writebehind-gluster5
>> end-volume
>>
>> volume stat-prefetch-gluster1
>> type performance/stat-prefetch
>> subvolumes quick-read-gluster1
>> end-volume
>>
>> volume stat-prefetch-gluster2
>> type performance/stat-prefetch
>> subvolumes quick-read-gluster2
>> end-volume
>>
>> volume stat-prefetch-gluster3
>> type performance/stat-prefetch
>> subvolumes quick-read-gluster3
>> end-volume
>>
>> volume stat-prefetch-gluster4
>> type performance/stat-prefetch
>> subvolumes quick-read-gluster4
>> end-volume
>>
>> volume stat-prefetch-gluster5
>> type performance/stat-prefetch
>> subvolumes quick-read-gluster5
>> end-volume
>>
>> volume stripe
>> type cluster/stripe
>> option block-size 2MB
>> #subvolumes client-stripe-1 client-stripe-2 client-stripe-3 client-stripe-4 client-stripe-5
>> #subvolumes writebehind-gluster1 writebehind-gluster2 writebehind-gluster3 writebehind-gluster4 writebehind-gluster5
>> subvolumes stat-prefetch-gluster1 stat-prefetch-gluster2 stat-prefetch-gluster3 stat-prefetch-gluster4 stat-prefetch-gluster5
>> end-volume
>>
>> ######
>>
>> Here's the server configuration from one of the 5 gluster systems:
>>
>> volume posix-stripe
>> type storage/posix
>> option directory /export/gluster5/stripe
>> end-volume
>>
>> volume posix-distribute
>> type storage/posix
>> option directory /export/gluster5/distribute
>> end-volume
>>
>> volume locks
>> type features/locks
>> subvolumes posix-stripe
>> end-volume
>>
>> volume locks-dist
>> type features/locks
>> subvolumes posix-distribute
>> end-volume
>>
>> volume iothreads
>> type performance/io-threads
>> option thread-count 16
>> subvolumes locks
>> end-volume
>>
>> volume iothreads-dist
>> type performance/io-threads
>> option thread-count 16
>> subvolumes locks-dist
>> end-volume
>>
>> volume server
>> type protocol/server
>> option transport-type ib-verbs
>> option auth.addr.iothreads.allow 10.2.178.*
>> option auth.addr.iothreads-dist.allow 10.2.178.*
>> subvolumes iothreads iothreads-dist
>> end-volume
>>
>> volume server-tcp
>> type protocol/server
>> option transport-type tcp
>> option auth.addr.iothreads.allow ip.not.for.you
>> option auth.addr.iothreads-dist.allow ip.not.for.you
>> subvolumes iothreads iothreads-dist
>> end-volume
>>
>> #####
>>
>> Using
>>
>> glusterfs-common-3.0.0-1
>> glusterfs-debuginfo-3.0.0-1
>> glusterfs-devel-3.0.0-1
>> glusterfs-server-3.0.0-1
>> glusterfs-client-3.0.0-1
>>
>> Linux gluster5 2.6.18-128.2.1.el5 #1 SMP Tue Jul 14 06:36:37 EDT 2009
>> x86_64 x86_64 x86_64 GNU/Linux
>>
>> Any ideas?
>>
>> Thanks,
>> Sabuj Pattanayek
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>>
>>
> Can you file a bug report with your client and server logs and your volume
> files? A gdb backtrace from the core file would also be helpful.
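For reference, gathering that backtrace usually looks something like the sketch below, assuming core dumps are enabled and glusterfs-debuginfo is installed (as in the package list above). The binary and core paths are placeholders for your system.

```shell
# Enable core dumps in the shell that will run the glusterfs client,
# then reproduce the crash:
ulimit -c unlimited

# Core files land according to the kernel's core_pattern setting:
cat /proc/sys/kernel/core_pattern

# With the debuginfo package installed, load the core into gdb and
# capture backtraces (paths are placeholders):
#   gdb /usr/sbin/glusterfs /path/to/core
#   (gdb) bt full
#   (gdb) thread apply all bt
```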
>
> Thanks
>
> --
> Harshavardhana
> http://www.gluster.com
>
>
>
--
Raghavendra G