[Gluster-devel] Re: NFS reexport status
Brent A Nelson
brent at phys.ufl.edu
Fri Aug 3 19:33:25 UTC 2007
I do have a workaround that can hide this bug, thanks to the flexibility of
GlusterFS and the fact that it is itself POSIX-compliant. If I mount the
GlusterFS as usual, but then use another glusterfs/glusterfsd pair to export
and mount it, and NFS reexport THAT, the problem does not appear. Presumably,
server-side AFR instead of client-side would also bypass the issue
(not tested)...
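
The extra export/mount pair can be sketched roughly like this (volume names,
paths, and addresses are illustrative, not my actual config, and the exact
options depend on the translator set in your GlusterFS 1.3 build):

```
# reexport-server.vol -- hypothetical sketch: a second glusterfsd that
# exports the existing GlusterFS client mount (assumed at /mnt/gfs1)
# as a plain posix volume
volume reexport
  type storage/posix
  option directory /mnt/gfs1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes reexport
  option auth.ip.reexport.allow 127.0.0.1
end-volume

# reexport-client.vol -- a second glusterfs client mounting that
# volume (e.g. at /mnt/gfs2)
volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 127.0.0.1
  option remote-subvolume reexport
end-volume
```

NFS then reexports the second mount point (/mnt/gfs2) instead of the
original one.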
Thanks,
Brent
On Fri, 3 Aug 2007, Brent A Nelson wrote:
> I turned off self-heal on all the AFR volumes, remounted and reexported (I
> didn't delete the data; let me know if that is needed).
>
> du -sk /tmp/blah/* (via NFS)
> du: cannot access `/tmp/blah/usr0/include/c++/4.1.2/\a': No such file or directory
> 171832 /tmp/blah/usr0
> 109476 /tmp/blah/usr0-copy
> du: cannot access `/tmp/blah/usr1/include/sys/\337O\004': No such file or directory
> du: cannot access `/tmp/blah/usr1/src/linux-headers-2.6.20-16/include/asm-ia64/\v': No such file or directory
> du: cannot access `/tmp/blah/usr1/src/linux-headers-2.6.20-16/include/asm-ia64/&\324\004': No such file or directory
> du: cannot access `/tmp/blah/usr1/src/linux-headers-2.6.20-16/drivers/\006': No such file or directory
> 117472 /tmp/blah/usr1
> 58392 /tmp/blah/usr1-copy
>
> It appears that self-heal isn't the culprit.
>
> Thanks,
>
> Brent
>
> On Fri, 3 Aug 2007, Krishna Srinivas wrote:
>
>> Hi Brent,
>>
>> Can you turn self-heal off (option self-heal off) and see how it
>> behaves?
>>
>> Thanks
>> Krishna
>>
>> On 8/3/07, Brent A Nelson <brent at phys.ufl.edu> wrote:
>>> A hopefully relevant strace snippet:
>>>
>>> open("share/perl/5.8.8/unicore/lib/jt", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 3
>>> fstat64(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
>>> fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
>>> mmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7c63000
>>> getdents64(3, /* 6 entries */, 1048576) = 144
>>> lstat64("share/perl/5.8.8/unicore/lib/jt/C.pl", {st_mode=S_IFREG|0644, st_size=220, ...}) = 0
>>> lstat64("share/perl/5.8.8/unicore/lib/jt/U.pl", {st_mode=S_IFREG|0644, st_size=251, ...}) = 0
>>> lstat64("share/perl/5.8.8/unicore/lib/jt/D.pl", {st_mode=S_IFREG|0644, st_size=438, ...}) = 0
>>> lstat64("share/perl/5.8.8/unicore/lib/jt/R.pl", {st_mode=S_IFREG|0644, st_size=426, ...}) = 0
>>> getdents64(3, /* 0 entries */, 1048576) = 0
>>> munmap(0xb7c63000, 1052672) = 0
>>> close(3) = 0
>>> open("share/perl/5.8.8/unicore/lib/gc_sc", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 3
>>> fstat64(3, {st_mode=S_IFDIR|0755, st_size=0, ...}) = 0
>>> fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
>>> mmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7c63000
>>> getdents64(3, 0xb7c63024, 1048576) = -1 EIO (Input/output error)
>>> write(2, "rsync: readdir(\"/tmp/blah/usr0/s"..., 91rsync: readdir("/tmp/blah/usr0/share/perl/5.8.8/unicore/lib/gc_sc"): Input/output error (5)) = 91
>>> write(2, "\n", 1) = 1
>>> munmap(0xb7c63000, 1052672) = 0
>>> close(3) = 0
>>>
>>> Thanks,
>>>
>>> Brent
>>>
>>> On Thu, 2 Aug 2007, Brent A Nelson wrote:
>>>
>>>> NFS reexport of a unified GlusterFS seems to be working fine as of TLA
>>>> 409.
>>>> I can make identical copies of a /usr area local-to-glusterfs and
>>>> glusterfs-to-glusterfs, hardlinks and all. Awesome!
>>>>
>>>> However, this is not true when AFR is added to the mix (rsync
>>>> glusterfs-to-glusterfs via NFS reexport):
>>>>
>>>> rsync: readdir("/tmp/blah/usr0/lib/perl/5.8.8/auto/POSIX"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/perl/5.8.8"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/i18n/locales"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/locale-langpack/en_GB/LC_MESSAGES"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/groff/1.18.1/font/devps"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/man/man1"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/man/man8"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/man/man7"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/X11/xkb/symbols"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/zoneinfo/right/Africa"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/zoneinfo/right/Asia"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/zoneinfo/right/America"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/zoneinfo/Asia"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/doc"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/share/consolefonts"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/bin"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/src/linux-headers-2.6.20-16/include/asm-sparc64"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/src/linux-headers-2.6.20-16/include/linux"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/src/linux-headers-2.6.20-16/include/asm-mips"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/src/linux-headers-2.6.20-16/include/asm-parisc"): Input/output error (5)
>>>> file has vanished: "/tmp/blah/usr0/src/linux-headers-2.6.20-16/include/asm-sparc/\#012"
>>>> rsync: readdir("/tmp/blah/usr0/src/linux-headers-2.6.20-16-server/include/config"): Input/output error (5)
>>>> rsync: readdir("/tmp/blah/usr0/src/linux-headers-2.6.20-16-server/include/linux"): Input/output error (5)
>>>> ...
>>>>
>>>> Any ideas? Meanwhile, I'll try to track it down with strace (the output
>>>> will be huge, but maybe I'll get lucky)...
>>>>
>>>> Thanks,
>>>>
>>>> Brent
>>>>
>>>
>>>
>>> _______________________________________________
>>> Gluster-devel mailing list
>>> Gluster-devel at nongnu.org
>>> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>>>
>>
>