[Gluster-devel] Cannot run VMware Virtual Machines on GlusterFS
Anand Avati
anand.avati at gmail.com
Tue Jun 26 02:59:22 UTC 2012
Please let me know if this patch fixes your problem:
http://review.gluster.com/3617
Thanks for your help and patience so far!
Avati
On Mon, Jun 25, 2012 at 7:50 PM, Anand Avati <anand.avati at gmail.com> wrote:
> Tomoaki, excellent debugging! Please add yourself to CC -
> https://bugzilla.redhat.com/show_bug.cgi?id=835336
>
> Avati
>
>
> On Sun, Jun 24, 2012 at 10:55 PM, Tomoaki Sato <tsato at valinux.co.jp> wrote:
>
>> Avati,
>>
>> Are these intended?
>> - the hashcount value of 'bar' (0) is not the same as that of 'foo/..' (2), and
>> - the hashcount value of 'foo' (1) is not the same as that of 'foo/../foo' (3).
>>
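>> If the hashcount is encoded into the file handle itself, then the same
>> directory reached through different paths yields byte-wise different
>> handles. That matters because NFSv3 clients treat handles as opaque and
>> compare them byte for byte. A minimal sketch of the effect, using a
>> made-up handle layout that only mirrors the fields printed in the
>> nfs.log excerpt below (exportid, gfid, hashcount), not the real
>> GlusterFS structure:
>>
>> #include <stdint.h>
>> #include <stdio.h>
>> #include <string.h>
>>
>> /* Hypothetical on-wire layout, for illustration only. */
>> static void make_fh(unsigned char fh[36], const unsigned char exportid[16],
>>                     const unsigned char gfid[16], uint32_t hashcount)
>> {
>>         memcpy(fh, exportid, 16);
>>         memcpy(fh + 16, gfid, 16);
>>         memcpy(fh + 32, &hashcount, 4);
>> }
>>
>> int main(void)
>> {
>>         unsigned char exportid[16] = { 0 };   /* stand-in values          */
>>         unsigned char gfid_root[16] = { 0 };  /* stands in for the volume root gfid */
>>         unsigned char fh_bar[36], fh_via_dotdot[36];
>>
>>         make_fh(fh_bar, exportid, gfid_root, 0);        /* LOOKUP bar    */
>>         make_fh(fh_via_dotdot, exportid, gfid_root, 2); /* LOOKUP foo/.. */
>>
>>         /* Same exportid and gfid, yet the handles no longer compare equal. */
>>         printf("%s\n", memcmp(fh_bar, fh_via_dotdot, 36) ? "differ" : "match");
>>         return 0;
>> }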
>>
>> # tshark -i 1 -R nfs
>> Running as user "root" and group "root". This could be dangerous.
>> Capturing on eth0
>> 2.386732 192.168.1.23 -> 192.168.1.132 NFS V3 GETATTR Call, FH:0x43976ad5
>> 2.387772 192.168.1.132 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 7)
>> Directory mode:0755 uid:0 gid:0
>> 3.666252 192.168.1.23 -> 192.168.1.132 NFS V3 GETATTR Call, FH:0x43976ad5
>> 3.667112 192.168.1.132 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 17)
>> Directory mode:0755 uid:0 gid:0
>> 3.667260 192.168.1.23 -> 192.168.1.132 NFS V3 LOOKUP Call,
>> DH:0x43976ad5/foo /* bar/foo */
>> 3.668321 192.168.1.132 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 19),
>> FH:0x3f9fd887
>> 11.386638 192.168.1.23 -> 192.168.1.132 NFS V3 GETATTR Call,
>> FH:0x43976ad5
>> 11.387664 192.168.1.132 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In
>> 52) Directory mode:0755 uid:0 gid:0
>> 20.386438 192.168.1.23 -> 192.168.1.132 NFS V3 GETATTR Call,
>> FH:0x43976ad5
>> 20.387436 192.168.1.132 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In
>> 95) Directory mode:0755 uid:0 gid:0
>> 29.382531 192.168.1.23 -> 192.168.1.132 NFS V3 GETATTR Call,
>> FH:0x43976ad5
>> 29.383796 192.168.1.132 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In
>> 126) Directory mode:0755 uid:0 gid:0
>> 33.666658 192.168.1.23 -> 192.168.1.132 NFS V3 LOOKUP Call,
>> DH:0x3f9fd887/.. /* foo/.. */
>> 33.668097 192.168.1.132 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In
>> 144), FH:0x42966b36
>> 33.668310 192.168.1.23 -> 192.168.1.132 NFS V3 READDIRPLUS Call,
>> FH:0x42966b36
>> 33.669996 192.168.1.132 -> 192.168.1.23 NFS V3 READDIRPLUS Reply (Call
>> In 146) .. foo .
>> 33.670188 192.168.1.23 -> 192.168.1.132 NFS V3 LOOKUP Call,
>> DH:0x42966b36/.. /* bar/.. */
>> 33.671279 192.168.1.132 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In
>> 148), FH:0xbc1b2900
>> 33.671425 192.168.1.23 -> 192.168.1.132 NFS V3 LOOKUP Call,
>> DH:0x42966b36/foo /* bar/foo */
>> 33.672421 192.168.1.132 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In
>> 150), FH:0x3e9ed964
>> 20 packets captured
>>
>> # egrep "nfs3_log_fh_entry_call|nfs3_**log_newfh_res"
>> /var/log/glusterfs/nfs.log | tail -8
>> [2012-06-25 14:28:40.090333] D [nfs3-helpers.c:1645:nfs3_log_**fh_entry_call]
>> 0-nfs-nfsv3: XID: 3d78d872, LOOKUP: args: FH: hashcount 0, exportid
>> b2d75589-8370-4528-ab4e-**b543b3abdc3b, gfid 00000000-0000-0000-0000-**000000000001,
>> name: foo /* bar/foo */
>> [2012-06-25 14:28:40.091108] D [nfs3-helpers.c:3462:nfs3_log_**newfh_res]
>> 0-nfs-nfsv3: XID: 3d78d872, LOOKUP: NFS: 0(Call completed successfully.),
>> POSIX: 0(Success), FH: hashcount 1, exportid b2d75589-8370-4528-ab4e-**b543b3abdc3b,
>> gfid 7c4b5a51-0108-4ac9-8fd2-**4b843dcb2715
>> [2012-06-25 14:29:10.089791] D [nfs3-helpers.c:1645:nfs3_log_**fh_entry_call]
>> 0-nfs-nfsv3: XID: 3d78d879, LOOKUP: args: FH: hashcount 1, exportid
>> b2d75589-8370-4528-ab4e-**b543b3abdc3b, gfid 7c4b5a51-0108-4ac9-8fd2-**4b843dcb2715,
>> name: .. /* foo/.. */
>> [2012-06-25 14:29:10.090872] D [nfs3-helpers.c:3462:nfs3_log_**newfh_res]
>> 0-nfs-nfsv3: XID: 3d78d879, LOOKUP: NFS: 0(Call completed successfully.),
>> POSIX: 0(Success), FH: hashcount 2, exportid b2d75589-8370-4528-ab4e-**b543b3abdc3b,
>> gfid 00000000-0000-0000-0000-**000000000001
>> [2012-06-25 14:29:10.093266] D [nfs3-helpers.c:1645:nfs3_log_**fh_entry_call]
>> 0-nfs-nfsv3: XID: 3d78d87b, LOOKUP: args: FH: hashcount 2, exportid
>> b2d75589-8370-4528-ab4e-**b543b3abdc3b, gfid 00000000-0000-0000-0000-**000000000001,
>> name: .. /* bar/.. */
>> [2012-06-25 14:29:10.094056] D [nfs3-helpers.c:3462:nfs3_log_**newfh_res]
>> 0-nfs-nfsv3: XID: 3d78d87b, LOOKUP: NFS: 0(Call completed successfully.),
>> POSIX: 0(Success), FH: hashcount 3, exportid b2d75589-8370-4528-ab4e-**b543b3abdc3b,
>> gfid 6edd430d-bc57-470e-8e98-**eacfe1a91040
>> [2012-06-25 14:29:10.094498] D [nfs3-helpers.c:1645:nfs3_log_**fh_entry_call]
>> 0-nfs-nfsv3: XID: 3d78d87c, LOOKUP: args: FH: hashcount 2, exportid
>> b2d75589-8370-4528-ab4e-**b543b3abdc3b, gfid 00000000-0000-0000-0000-**000000000001,
>> name: foo /* bar/foo */
>> [2012-06-25 14:29:10.095198] D [nfs3-helpers.c:3462:nfs3_log_**newfh_res]
>> 0-nfs-nfsv3: XID: 3d78d87c, LOOKUP: NFS: 0(Call completed successfully.),
>> POSIX: 0(Success), FH: hashcount 3, exportid b2d75589-8370-4528-ab4e-**b543b3abdc3b,
>> gfid 7c4b5a51-0108-4ac9-8fd2-**4b843dcb2715
>>
>> Regards,
>>
>> Tomo
>>
>> Anand Avati wrote:
>>
>>> Tomoaki, this is very useful. I will look deeper soon.
>>> Thanks!
>>>
>>> Avati
>>>
>>> On Thu, Jun 21, 2012 at 9:21 PM, Tomoaki Sato <tsato at valinux.co.jp> wrote:
>>>
>>> Avati,
>>>
>>> tshark says ...
>>> the FH value that the Linux kernel NFS server returns stays constant for
>>> every LOOKUP of 'foo', but
>>> the FH values that GlusterFS (NFS) returns are not constant.
>>>
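>>> A toy model of why that breaks getcwd: the client keeps the handle it
>>> got at chdir time and later re-resolves the same name, expecting the
>>> same bytes back. Neither "server" below is real NFS code; the second
>>> one only mimics the non-constant handles seen in the GlusterFS trace
>>> further down:
>>>
>>> #include <stdio.h>
>>> #include <string.h>
>>>
>>> struct fh { unsigned char data[8]; };
>>>
>>> /* Stand-in for the kernel NFS server: same handle for 'foo' every time. */
>>> static struct fh constant_lookup(void)
>>> {
>>>         struct fh h = { { 0xdb, 0x05, 0xb9, 0x0a } };
>>>         return h;
>>> }
>>>
>>> /* Stand-in mimicking the non-constant handles in the trace below;
>>>  * not the real implementation. */
>>> static struct fh drifting_lookup(void)
>>> {
>>>         static unsigned char depth;
>>>         struct fh h = { { 0x3f, 0x9f, 0xd8, 0x87 } };
>>>         h.data[7] = ++depth;
>>>         return h;
>>> }
>>>
>>> /* getcwd-style check: does a fresh LOOKUP still match the handle we kept? */
>>> static const char *getcwd_check(struct fh (*lookup)(void))
>>> {
>>>         struct fh at_chdir = lookup();      /* LOOKUP foo during chdir  */
>>>         struct fh at_getcwd = lookup();     /* LOOKUP foo during getcwd */
>>>         return memcmp(&at_chdir, &at_getcwd, sizeof at_chdir) == 0
>>>                 ? "match -> getcwd succeeds"
>>>                 : "differ -> getcwd: No such file or directory";
>>> }
>>>
>>> int main(void)
>>> {
>>>         printf("kernel nfsd: %s\n", getcwd_check(constant_lookup));
>>>         printf("glusterfs:   %s\n", getcwd_check(drifting_lookup));
>>>         return 0;
>>> }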
>>> operations at the ESXi host:
>>>
>>> ~ # ./getcwd /vmfs/volumes/94925201-78f190e0/foo
>>> ========= sleep 30 ================
>>> /vmfs/volumes/94925201-78f190e0/foo
>>> ~ #
>>>
>>> tshark's output at the linux kernel NFS server:
>>>
>>> # tshark -i 2 -R nfs
>>> Running as user "root" and group "root". This could be dangerous.
>>> Capturing on br0
>>> /* chdir */
>>> 2.056680 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call,
>>> FH:0x1ffd38ff
>>> 2.056990 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 13) Directory mode:0755 uid:0 gid:0
>>> 9.848666 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call,
>>> FH:0x1ffd38ff
>>> 9.848767 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 60) Directory mode:0755 uid:0 gid:0
>>> 9.848966 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call,
>>> DH:0x1ffd38ff/foo
>>> 9.849049 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In
>>> 62), FH:0xdb05b90a <=====
>>> 20.055508 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call,
>>> FH:0x1ffd38ff
>>> 20.055702 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 103) Directory mode:0755 uid:0 gid:0
>>> 29.054939 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call,
>>> FH:0x1ffd38ff
>>> 29.055180 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 132) Directory mode:0755 uid:0 gid:0
>>> 38.054338 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call,
>>> FH:0x1ffd38ff
>>> 38.054583 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 151) Directory mode:0755 uid:0 gid:0
>>> /* getcwd */
>>> 39.849107 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call,
>>> DH:0xdb05b90a/..
>>> 39.849449 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 170), FH:0x1ffd38ff
>>> 39.849676 192.168.1.23 -> 192.168.1.254 NFS V3 READDIRPLUS Call,
>>> FH:0x1ffd38ff
>>> 39.849833 192.168.1.254 -> 192.168.1.23 NFS V3 READDIRPLUS Reply
>>> (Call In 172) . .. foo
>>> 39.850071 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call,
>>> DH:0x1ffd38ff/foo
>>> 39.850149 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 174), FH:0xdb05b90a
>>> 39.850746 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call,
>>> DH:0xdb05b90a/..
>>> 39.850814 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 176), FH:0x1ffd38ff
>>> 39.851014 192.168.1.23 -> 192.168.1.254 NFS V3 READDIRPLUS Call,
>>> FH:0x1ffd38ff
>>> 39.851095 192.168.1.254 -> 192.168.1.23 NFS V3 READDIRPLUS Reply
>>> (Call In 178) . .. foo
>>> 39.851329 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call,
>>> DH:0x1ffd38ff/foo
>>> 39.851438 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 180), FH:0xdb05b90a <=====
>>>
>>> operations at the ESXi host:
>>>
>>> ~ # ./getcwd /vmfs/volumes/ef172a87-e5ae817f/foo
>>> ========= sleep 30 ================
>>> getcwd: No such file or directory
>>> ~ #
>>>
>>> tshark's output at the GlusterFS(NFS) server:
>>>
>>> # tshark -i 1 -R nfs
>>> Running as user "root" and group "root". This could be dangerous.
>>> Capturing on eth0
>>> /* chdir */
>>> 1.228396 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call,
>>> FH:0x43976ad5
>>> 1.229406 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 6) Directory mode:0755 uid:0 gid:0
>>> 4.445894 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call,
>>> FH:0x43976ad5
>>> 4.446916 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 16) Directory mode:0755 uid:0 gid:0
>>> 4.447099 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call,
>>> DH:0x43976ad5/foo
>>> 4.448147 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In
>>> 18), FH:0x3f9fd887 <=====
>>> 10.228438 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call,
>>> FH:0x43976ad5
>>> 10.229432 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 31) Directory mode:0755 uid:0 gid:0
>>> 19.228321 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call,
>>> FH:0x43976ad5
>>> 19.229309 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 47) Directory mode:0755 uid:0 gid:0
>>> 28.228139 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call,
>>> FH:0x43976ad5
>>> 28.229112 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call
>>> In 70) Directory mode:0755 uid:0 gid:0
>>> /* getcwd */
>>> 34.448796 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call,
>>> DH:0x3f9fd887/..
>>> 34.450119 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 81), FH:0x42966b36
>>> 34.450343 192.168.1.23 -> 192.168.1.136 NFS V3 READDIRPLUS Call,
>>> FH:0x42966b36
>>> 34.452105 192.168.1.136 -> 192.168.1.23 NFS V3 READDIRPLUS Reply
>>> (Call In 83) .. foo .
>>> 34.452311 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call,
>>> DH:0x42966b36/..
>>> 34.453464 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 85), FH:0xbc1b2900
>>> 34.453648 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call,
>>> DH:0x42966b36/foo
>>> 34.454677 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call
>>> In 87), FH:0x3e9ed964 <======
>>>
>>> Regards,
>>>
>>> Tomo
>>>
>>> (2012/06/20 16:28), Tomoaki Sato wrote:
>>> > Avati,
>>> >
>>> > I've tried the following:
>>> > 1) 'esxcfg-nas -d gluster_nfs' at the ESXi host.
>>> > 2) 'volume set bar nfs.enable-ino32 on' at the 192.168.1.136 host.
>>> > 3) 'volume stop bar' and 'volume start bar' at the 192.168.1.136 host.
>>> > 4) 'esxcfg-nas -a -o 192.168.1.136 -s /bar gluster_nfs' at the ESXi host.
>>> >
>>> > on the ESXi host:
>>> >
>>> > ~ # uname -m
>>> > x86_64
>>> > ~ # mkdir /vmfs/volumes/ef172a87-e5ae817f/after-enable-ino32-on
>>> > ~ # ls -liR /vmfs/volumes/ef172a87-e5ae817f
>>> > /vmfs/volumes/ef172a87-e5ae817f:
>>> > -2118204814 drwxr-xr-x 1 root root 4096 Jun 20 07:13 after-enable-ino32-on
>>> > 1205893126 drwxr-xr-x 1 root root 4096 Jun 20 07:08 baz
>>> > -1291907235 drwx------ 1 root root 16384 Jun 6 23:41 lost+found
>>> >
>>> > /vmfs/volumes/ef172a87-e5ae817f/after-enable-ino32-on:
>>> >
>>> > /vmfs/volumes/ef172a87-e5ae817f/baz:
>>> > -1374929331 drwxr-xr-x 1 root root 4096 Jun 19 06:41 foo
>>> >
>>> > /vmfs/volumes/ef172a87-e5ae817f/baz/foo:
>>> >
>>> > /vmfs/volumes/ef172a87-e5ae817f/lost+found:
>>> > ~ # ./getcwd /vmfs/volumes/ef172a87-e5ae817f/after-enable-ino32-on
>>> > getcwd: No such file or directory
>>> > ~ #
>>> >
>>> > on the 192.168.1.136 host:
>>> >
>>> > # gluster volume info bar
>>> >
>>> > Volume Name: bar
>>> > Type: Distribute
>>> > Volume ID: b2d75589-8370-4528-ab4e-b543b3abdc3b
>>> > Status: Started
>>> > Number of Bricks: 1
>>> > Transport-type: tcp
>>> > Bricks:
>>> > Brick1: bar-1-private:/mnt/brick
>>> > Options Reconfigured:
>>> > diagnostics.brick-log-level: TRACE
>>> > diagnostics.client-log-level: TRACE
>>> > nfs.enable-ino32: on
>>> >
>>> > please find attached nfs.log5.
>>> >
>>> > Regards,
>>> >
>>> > Tomo
>>> >
>>> > (2012/06/20 16:11), Anand Avati wrote:
>>> >> -1374929331 drwxr-xr-x 1 root root 4096 Jun 19 06:41 foo
>>> >>
>>> >> ...
>>> >>
>>> >> 2920037965 drwxr-xr-x 2 root root 4096 Jun 19 15:41 foo
>>> >>
>>> >>
>>> >> Ouch!
>>> >>
>>> >> -1374929331 == (int32_t) 2920037965
>>> >>
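>>> >> A quick check of that cast, assuming the client narrows NFS inode
>>> >> numbers to a signed 32-bit field (illustration only; the wraparound is
>>> >> the usual two's-complement result):
>>> >>
>>> >> #include <inttypes.h>
>>> >> #include <stdint.h>
>>> >> #include <stdio.h>
>>> >>
>>> >> int main(void)
>>> >> {
>>> >>         uint64_t ino = 2920037965ULL;     /* inode as the server reports it   */
>>> >>         int32_t narrowed = (int32_t)ino;  /* what a 32-bit signed field holds */
>>> >>
>>> >>         /* Prints: 2920037965 -> -1374929331 */
>>> >>         printf("%" PRIu64 " -> %" PRId32 "\n", ino, narrowed);
>>> >>         return 0;
>>> >> }
>>> >>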
>>> >> 'uname -m' from the ESXi host, please! Is it a 32-bit OS? Can you
>>> >> try 'gluster volume set bar nfs.enable-ino32 on' and retry?
>>> >>
>>> >> Avati
>>> >
>>>
>>>
>>>
>>
>>
>