[Gluster-devel] Cannot run VMware Virtual Machines on GlusterFS

Anand Avati anand.avati at gmail.com
Sat Jun 23 14:33:45 UTC 2012


Tomoaki, this is very useful. I will look deeper soon.
Thanks!

Avati

On Thu, Jun 21, 2012 at 9:21 PM, Tomoaki Sato <tsato at valinux.co.jp> wrote:

> Avati,
>
> tshark says:
> the FH value that the Linux kernel NFS server returns stays constant
> across every LOOKUP of 'foo' (0xdb05b90a both times; see the <=====
> markers below), but the FH values that GlusterFS (NFS) returns are not
> constant (0x3f9fd887 on the first LOOKUP, 0x3e9ed964 on the last).
>
> operations at the ESXi host:
>
> ~ # ./getcwd /vmfs/volumes/94925201-78f190e0/foo
> ========= sleep 30 ================
> /vmfs/volumes/94925201-78f190e0/foo
> ~ #
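>
> (For reference: the ./getcwd test binary is not part of this thread; a
> minimal equivalent, matching the chdir / 30-second sleep / getcwd
> sequence annotated in the captures below, might look like this:)
>
> #include <stdio.h>
> #include <limits.h>
> #include <unistd.h>
>
> int main(int argc, char **argv)
> {
>     char buf[PATH_MAX];
>
>     if (argc != 2) {
>         fprintf(stderr, "usage: getcwd <dir>\n");
>         return 1;
>     }
>     if (chdir(argv[1]) != 0) {   /* triggers the first LOOKUP */
>         perror("chdir");
>         return 1;
>     }
>     printf("========= sleep 30 ================\n");
>     sleep(30);                   /* let the client's caches go stale */
>     if (getcwd(buf, sizeof(buf)) == NULL) {
>         perror("getcwd");        /* ENOENT on the GlusterFS mount */
>         return 1;
>     }
>     printf("%s\n", buf);
>     return 0;
> }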
>
> tshark's output at the Linux kernel NFS server:
>
> # tshark -i 2 -R nfs
> Running as user "root" and group "root". This could be dangerous.
> Capturing on br0
> /* chdir */
>  2.056680 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call, FH:0x1ffd38ff
>  2.056990 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 13) Directory mode:0755 uid:0 gid:0
>  9.848666 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call, FH:0x1ffd38ff
>  9.848767 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 60) Directory mode:0755 uid:0 gid:0
>  9.848966 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call, DH:0x1ffd38ff/foo
>  9.849049 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 62), FH:0xdb05b90a         <=====
>  20.055508 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call, FH:0x1ffd38ff
>  20.055702 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 103) Directory mode:0755 uid:0 gid:0
>  29.054939 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call, FH:0x1ffd38ff
>  29.055180 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 132) Directory mode:0755 uid:0 gid:0
>  38.054338 192.168.1.23 -> 192.168.1.254 NFS V3 GETATTR Call, FH:0x1ffd38ff
>  38.054583 192.168.1.254 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 151) Directory mode:0755 uid:0 gid:0
> /* getcwd */
>  39.849107 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call, DH:0xdb05b90a/..
>  39.849449 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 170), FH:0x1ffd38ff
>  39.849676 192.168.1.23 -> 192.168.1.254 NFS V3 READDIRPLUS Call, FH:0x1ffd38ff
>  39.849833 192.168.1.254 -> 192.168.1.23 NFS V3 READDIRPLUS Reply (Call In 172) . .. foo
>  39.850071 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call, DH:0x1ffd38ff/foo
>  39.850149 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 174), FH:0xdb05b90a
>  39.850746 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call, DH:0xdb05b90a/..
>  39.850814 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 176), FH:0x1ffd38ff
>  39.851014 192.168.1.23 -> 192.168.1.254 NFS V3 READDIRPLUS Call, FH:0x1ffd38ff
>  39.851095 192.168.1.254 -> 192.168.1.23 NFS V3 READDIRPLUS Reply (Call In 178) . .. foo
>  39.851329 192.168.1.23 -> 192.168.1.254 NFS V3 LOOKUP Call, DH:0x1ffd38ff/foo
>  39.851438 192.168.1.254 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 180), FH:0xdb05b90a            <=====
>
> operations at the ESXi host:
>
> ~ # ./getcwd /vmfs/volumes/ef172a87-e5ae817f/foo
> ========= sleep 30 ================
> getcwd: No such file or directory
> ~ #
>
> tshark's output at the GlusterFS (NFS) server:
>
> # tshark -i 1 -R nfs
> Running as user "root" and group "root". This could be dangerous.
> Capturing on eth0
> /* chdir */
>  1.228396 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call, FH:0x43976ad5
>  1.229406 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 6) Directory mode:0755 uid:0 gid:0
>  4.445894 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call, FH:0x43976ad5
>  4.446916 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 16) Directory mode:0755 uid:0 gid:0
>  4.447099 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call, DH:0x43976ad5/foo
>  4.448147 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 18), FH:0x3f9fd887     <=====
>  10.228438 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call, FH:0x43976ad5
>  10.229432 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 31) Directory mode:0755 uid:0 gid:0
>  19.228321 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call, FH:0x43976ad5
>  19.229309 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 47) Directory mode:0755 uid:0 gid:0
>  28.228139 192.168.1.23 -> 192.168.1.136 NFS V3 GETATTR Call, FH:0x43976ad5
>  28.229112 192.168.1.136 -> 192.168.1.23 NFS V3 GETATTR Reply (Call In 70) Directory mode:0755 uid:0 gid:0
> /* getcwd */
>  34.448796 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call, DH:0x3f9fd887/..
>  34.450119 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 81), FH:0x42966b36
>  34.450343 192.168.1.23 -> 192.168.1.136 NFS V3 READDIRPLUS Call, FH:0x42966b36
>  34.452105 192.168.1.136 -> 192.168.1.23 NFS V3 READDIRPLUS Reply (Call In 83) .. foo .
>  34.452311 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call, DH:0x42966b36/..
>  34.453464 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 85), FH:0xbc1b2900
>  34.453648 192.168.1.23 -> 192.168.1.136 NFS V3 LOOKUP Call, DH:0x42966b36/foo
>  34.454677 192.168.1.136 -> 192.168.1.23 NFS V3 LOOKUP Reply (Call In 87), FH:0x3e9ed964     <======
>
> Regards,
>
> Tomo
>
> (2012/06/20 16:28), Tomoaki Sato wrote:
> > Avati,
> >
> > I've tried the following:
> > 1) 'esxcfg-nas -d gluster_nfs' at the ESXi host.
> > 2) 'volume set bar nfs.enable-ino32 on' at the 192.168.1.136 host.
> > 3) 'volume stop bar' and 'volume start bar' at the 192.168.1.136 host.
> > 4) 'esxcfg-nas -a -o 192.168.1.136 -s /bar gluster_nfs' at the ESXi host.
> >
> > on the ESXi host:
> >
> > ~ # uname -m
> > x86_64
> > ~ # mkdir /vmfs/volumes/ef172a87-e5ae817f/after-enable-ino32-on
> > ~ # ls -liR /vmfs/volumes/ef172a87-e5ae817f
> > /vmfs/volumes/ef172a87-e5ae817f:
> > -2118204814 drwxr-xr-x 1 root root 4096 Jun 20 07:13 after-enable-ino32-on
> > 1205893126 drwxr-xr-x 1 root root 4096 Jun 20 07:08 baz
> > -1291907235 drwx------ 1 root root 16384 Jun 6 23:41 lost+found
> >
> > /vmfs/volumes/ef172a87-e5ae817f/after-enable-ino32-on:
> >
> > /vmfs/volumes/ef172a87-e5ae817f/baz:
> > -1374929331 drwxr-xr-x 1 root root 4096 Jun 19 06:41 foo
> >
> > /vmfs/volumes/ef172a87-e5ae817f/baz/foo:
> >
> > /vmfs/volumes/ef172a87-e5ae817f/lost+found:
> > ~ # ./getcwd /vmfs/volumes/ef172a87-e5ae817f/after-enable-ino32-on
> > getcwd: No such file or directory
> > ~ #
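> >
> > (Aside: the negative inode numbers in the listing above are 64-bit
> > inodes printed as signed 32-bit integers. A quick way to see both
> > views of one path, sketched with stat(2); the usage and output
> > format here are illustrative:)
> >
> > #include <stdio.h>
> > #include <stdint.h>
> > #include <sys/stat.h>
> >
> > int main(int argc, char **argv)
> > {
> >     struct stat st;
> >
> >     if (argc != 2)
> >         return 1;
> >     if (stat(argv[1], &st) != 0) {
> >         perror("stat");
> >         return 1;
> >     }
> >     /* the inode as a 64-bit server reports it, and as a
> >      * 32-bit client ends up displaying it */
> >     printf("unsigned 64-bit: %llu\n", (unsigned long long) st.st_ino);
> >     printf("signed 32-bit:   %d\n", (int32_t) st.st_ino);
> >     return 0;
> > }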
> >
> > on the 192.168.1.136 host:
> >
> > # gluster volume info bar
> >
> > Volume Name: bar
> > Type: Distribute
> > Volume ID: b2d75589-8370-4528-ab4e-b543b3abdc3b
> > Status: Started
> > Number of Bricks: 1
> > Transport-type: tcp
> > Bricks:
> > Brick1: bar-1-private:/mnt/brick
> > Options Reconfigured:
> > diagnostics.brick-log-level: TRACE
> > diagnostics.client-log-level: TRACE
> > nfs.enable-ino32: on
> >
> > Please find the attached nfs.log5.
> >
> > Regards,
> >
> > Tomo
> >
> > (2012/06/20 16:11), Anand Avati wrote:
> >> -1374929331 drwxr-xr-x 1 root root 4096 Jun 19 06:41 foo
> >>
> >> ...
> >>
> >> 2920037965 drwxr-xr-x 2 root root 4096 Jun 19 15:41 foo
> >>
> >>
> >> Ouch!
> >>
> >> -1374929331 == (int32_t) 2920037965
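> >>
> >> (Checking that: 2920037965 exceeds 2^31, so reinterpreted as a signed
> >> 32-bit integer it wraps to 2920037965 - 2^32 = -1374929331. A minimal
> >> check:)
> >>
> >> #include <stdio.h>
> >> #include <stdint.h>
> >>
> >> int main(void)
> >> {
> >>     uint64_t ino = 2920037965ULL;  /* inode as reported on the server */
> >>     /* prints -1374929331, the value the ESXi listing shows */
> >>     printf("%d\n", (int32_t) ino);
> >>     return 0;
> >> }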
> >>
> >> 'uname -m' from the ESXi host, please! Is it a 32-bit OS? Can you try
> >> 'gluster volume set bar nfs.enable-ino32 on' and retry?
> >>
> >> Avati
> >
>
>