[Gluster-users] [Gluster-devel] Exporting Gluster Volume

ABHISHEK PALIWAL abhishpaliwal at gmail.com
Mon May 2 11:28:24 UTC 2016


Hi Niels,


Here is the output of rpcinfo -p $NFS_SERVER

root@128:/# rpcinfo -p $NFS_SERVER
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    3   tcp  38465  mountd
    100005    1   tcp  38465  mountd
    100003    3   tcp  38465  nfs
    100227    3   tcp  38465
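
In the rpcinfo output above, program 100227 is the NFS ACL service; it is registered only on TCP port 38465, and the empty 'service' column usually just means 100227 has no name in /etc/rpc on this box. A quick way to check whether it actually answers (a sketch, assuming rpcinfo is available on the client):

   # ping program 100227 (NFS ACL), version 3, over TCP on the server
   rpcinfo -t 128.224.95.140 100227 3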


Output of the mount command:

#mount -vvv -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
mount: fstab path: "/etc/fstab"
mount: mtab path:  "/etc/mtab"
mount: lock path:  "/etc/mtab~"
mount: temp path:  "/etc/mtab.tmp"
mount: UID:        0
mount: eUID:       0
mount: spec:  "128.224.95.140:/gv0"
mount: node:  "/tmp/e"
mount: types: "nfs"
mount: opts:  "acl,vers=3"
mount: external mount: argv[0] = "/sbin/mount.nfs"
mount: external mount: argv[1] = "128.224.95.140:/gv0"
mount: external mount: argv[2] = "/tmp/e"
mount: external mount: argv[3] = "-v"
mount: external mount: argv[4] = "-o"
mount: external mount: argv[5] = "rw,acl,vers=3"
mount.nfs: timeout set for Mon May  2 16:58:58 2016
mount.nfs: trying text-based options 'acl,vers=3,addr=128.224.95.140'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 128.224.95.140 prog 100003 vers 3 prot TCP port 38465
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 128.224.95.140 prog 100005 vers 3 prot TCP port 38465
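
The "RPC: Program not registered" line is the client first querying mountd (program 100005) over UDP and then falling back to TCP, which is consistent with nfs.mount-udp being off on the server. If that UDP portmap retry turns out to be a problem, forcing TCP for both the MOUNT call and the NFS traffic should avoid it (a sketch, not verified on this setup):

   # force TCP for both the MOUNT protocol and NFS itself
   mount -v -t nfs -o acl,vers=3,proto=tcp,mountproto=tcp 128.224.95.140:/gv0 /tmp/e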


On Mon, May 2, 2016 at 4:36 PM, Niels de Vos <ndevos at redhat.com> wrote:

> On Mon, May 02, 2016 at 04:14:01PM +0530, ABHISHEK PALIWAL wrote:
> > Hi Team,
> >
> > I am exporting a gluster volume using Gluster/NFS with ACL support, but on
> > the NFS client, running the 'setfacl' command fails with "setfacl: /tmp/e:
> > Remote I/O error"
> >
> >
> > Following is the NFS option status for the volume:
> >
> > nfs.enable-ino32                        no
> > nfs.mem-factor                          15
> > nfs.export-dirs                         on
> > nfs.export-volumes                      on
> > nfs.addr-namelookup                     off
> > nfs.dynamic-volumes                     off
> > nfs.register-with-portmap               on
> > nfs.outstanding-rpc-limit               16
> > nfs.port                                38465
> > nfs.rpc-auth-unix                       on
> > nfs.rpc-auth-null                       on
> > nfs.rpc-auth-allow                      all
> > nfs.rpc-auth-reject                     none
> > nfs.ports-insecure                      off
> > nfs.trusted-sync                        off
> > nfs.trusted-write                       off
> > nfs.volume-access                       read-write
> > nfs.export-dir
> > nfs.disable                             off
> > nfs.nlm                                 on
> > nfs.acl                                 on
> > nfs.mount-udp                           off
> > nfs.mount-rmtab                         /var/lib/glusterd/nfs/rmtab
> > nfs.rpc-statd                           /sbin/rpc.statd
> > nfs.server-aux-gids                     off
> > nfs.drc                                 off
> > nfs.drc-size                            0x20000
> > nfs.read-size                           (1 * 1048576ULL)
> > nfs.write-size                          (1 * 1048576ULL)
> > nfs.readdir-size                        (1 * 1048576ULL)
> > nfs.exports-auth-enable                 (null)
> > nfs.auth-refresh-interval-sec           (null)
> > nfs.auth-cache-ttl-sec                  (null)
> >
> > Command to mount exported gluster volume on NFS client is
> >
> > mount -v -t nfs -o acl,vers=3 128.224.95.140:/gv0 /tmp/e
>
> Could you post the output of mounting with 'mount -vvv ...'? In previous
> emails I've asked for the output of 'rpcinfo -p $NFS_SERVER', but I do not
> think I've seen that yet.
>
> The port used for NFSv3 ACLs on the NFS-server should be listed in
> 'netstat -tulpen' and the PID of the process should be the one of the
> Gluster/NFS service.
>
> HTH,
> Niels
>
>
> > setfacl -m u:nobody:r /tmp/e
> > setfacl: /tmp/e: Remote I/O error
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
>
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
>
>
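
For the 'netstat -tulpen' check suggested above, these are the commands to run on the server (a sketch; assuming the volume is named gv0, as in the mount command):

   # which process owns the NFS/ACL port
   netstat -tulpen | grep 38465

   # Gluster's own view of the NFS server PID and port for this volume
   gluster volume status gv0 nfs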


-- 
Regards
Abhishek Paliwal