[Gluster-users] Samba-VFS-Glusterfs issues

Jon Archer jon at rosslug.org.uk
Wed Jul 30 18:19:43 UTC 2014


Any ideas on release date of the RPMs which contain this fix?

Thanks

Jon
On 25/07/14 08:59, Poornima Gurusiddaiah wrote:
> Hi Jon,
>
> I believe the bug is fixed as a part of patch http://review.gluster.org/#/c/8374/.
> But this patch (fix) is not in glusterfs-api-3.5.1-1.el6.x86_64; I have posted the same for the 3.5-2 release.
> Hopefully the fix will be available in the next release.
>
> Regards,
> Poornima
>
> ----- Original Message -----
> From: "Jon Archer" <jon at rosslug.org.uk>
> To: gluster-users at gluster.org
> Sent: Thursday, July 24, 2014 3:45:10 AM
> Subject: Re: [Gluster-users] Samba-VFS-Glusterfs issues
>
> Still having issues,
>
> This is the testparm output from one of the several installs I've tried it on:
>
>
> #testparm -v
>
> Load smb config files from /etc/samba/smb.conf
> rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
> Processing section "[homes]"
> Processing section "[printers]"
> Processing section "[testvol]"
> Loaded services file OK.
> Server role: ROLE_STANDALONE
> Press enter to see a dump of your service definitions
> [global]
> dos charset = CP850
> unix charset = UTF-8
> workgroup = MYGROUP
> realm =
> netbios name = STORENODE02
> netbios aliases =
> netbios scope =
> server string = Samba Server Version %v
> interfaces =
> bind interfaces only = No
> server role = auto
> security = USER
> auth methods =
> encrypt passwords = Yes
> client schannel = Auto
> server schannel = Auto
> allow trusted domains = Yes
> map to guest = Never
> null passwords = No
> obey pam restrictions = No
> password server = *
> smb passwd file = /var/lib/samba/private/smbpasswd
> private dir = /var/lib/samba/private
> passdb backend = tdbsam
> algorithmic rid base = 1000
> root directory =
> guest account = nobody
> enable privileges = Yes
> pam password change = No
> passwd program =
> passwd chat = *new*password* %n\n *new*password* %n\n *changed*
> passwd chat debug = No
> passwd chat timeout = 2
> check password script =
> username map =
> username level = 0
> unix password sync = No
> restrict anonymous = 0
> lanman auth = No
> ntlm auth = Yes
> client NTLMv2 auth = Yes
> client lanman auth = No
> client plaintext auth = No
> client use spnego principal = No
> preload modules =
> dedicated keytab file =
> kerberos method = default
> map untrusted to domain = No
> log level = 2
> syslog = 1
> syslog only = No
> log file = /var/log/samba/log.%m
> max log size = 50
> debug timestamp = Yes
> debug prefix timestamp = No
> debug hires timestamp = Yes
> debug pid = No
> debug uid = No
> debug class = No
> enable core files = Yes
> smb ports = 445, 139
> large readwrite = Yes
> server max protocol = SMB3
> server min protocol = LANMAN1
> client max protocol = NT1
> client min protocol = CORE
> unicode = Yes
> min receivefile size = 0
> read raw = Yes
> write raw = Yes
> disable netbios = No
> reset on zero vc = No
> log writeable files on exit = No
> defer sharing violations = Yes
> nt pipe support = Yes
> nt status support = Yes
> max mux = 50
> max xmit = 16644
> name resolve order = lmhosts, wins, host, bcast
> max ttl = 259200
> max wins ttl = 518400
> min wins ttl = 21600
> time server = No
> unix extensions = Yes
> use spnego = Yes
> client signing = default
> server signing = default
> client use spnego = Yes
> client ldap sasl wrapping = plain
> enable asu support = No
> svcctl list =
> cldap port = 0
> dgram port = 0
> nbt port = 0
> krb5 port = 0
> kpasswd port = 0
> web port = 0
> rpc big endian = No
> deadtime = 0
> getwd cache = Yes
> keepalive = 300
> lpq cache time = 30
> max smbd processes = 0
> max disk size = 0
> max open files = 16384
> socket options = TCP_NODELAY
> use mmap = Yes
> use ntdb = No
> hostname lookups = No
> name cache timeout = 660
> ctdbd socket = /var/run/ctdb/ctdbd.socket
> cluster addresses =
> clustering = No
> ctdb timeout = 0
> ctdb locktime warn threshold = 0
> smb2 max read = 1048576
> smb2 max write = 1048576
> smb2 max trans = 1048576
> smb2 max credits = 8192
> load printers = Yes
> printcap cache time = 750
> printcap name =
> cups server =
> cups encrypt = No
> cups connection timeout = 30
> iprint server =
> disable spoolss = No
> addport command =
> enumports command =
> addprinter command =
> deleteprinter command =
> show add printer wizard = Yes
> os2 driver map =
> mangling method = hash2
> mangle prefix = 1
> max stat cache size = 256
> stat cache = Yes
> machine password timeout = 604800
> add user script =
> rename user script =
> delete user script =
> add group script =
> delete group script =
> add user to group script =
> delete user from group script =
> set primary group script =
> add machine script =
> shutdown script =
> abort shutdown script =
> username map script =
> username map cache time = 0
> logon script =
> logon path = \\%N\%U\profile
> logon drive =
> logon home = \\%N\%U
> domain logons = No
> init logon delayed hosts =
> init logon delay = 100
> os level = 20
> lm announce = Auto
> lm interval = 60
> preferred master = No
> local master = Yes
> domain master = Auto
> browse list = Yes
> enhanced browsing = Yes
> dns proxy = Yes
> wins proxy = No
> wins server =
> wins support = No
> wins hook =
> lock spin time = 200
> oplock break wait time = 0
> ldap admin dn =
> ldap delete dn = No
> ldap group suffix =
> ldap idmap suffix =
> ldap machine suffix =
> ldap passwd sync = no
> ldap replication sleep = 1000
> ldap suffix =
> ldap ssl = start tls
> ldap ssl ads = No
> ldap deref = auto
> ldap follow referral = Auto
> ldap timeout = 15
> ldap connection timeout = 2
> ldap page size = 1024
> ldap user suffix =
> ldap debug level = 0
> ldap debug threshold = 10
> eventlog list =
> add share command =
> change share command =
> delete share command =
> preload =
> lock directory = /var/lib/samba
> state directory = /var/lib/samba
> cache directory = /var/lib/samba
> pid directory = /run
> ntp signd socket directory =
> utmp directory =
> wtmp directory =
> utmp = No
> default service =
> message command =
> get quota command =
> set quota command =
> remote announce =
> remote browse sync =
> nbt client socket address = 0.0.0.0
> nmbd bind explicit broadcast = Yes
> homedir map = auto.home
> afs username map =
> afs token lifetime = 604800
> log nt token command =
> NIS homedir = No
> registry shares = No
> usershare allow guests = No
> usershare max shares = 0
> usershare owner only = Yes
> usershare path = /var/lib/samba/usershares
> usershare prefix allow list =
> usershare prefix deny list =
> usershare template share =
> async smb echo handler = No
> panic action =
> perfcount module =
> host msdfs = Yes
> passdb expand explicit = No
> idmap backend = tdb
> idmap cache time = 604800
> idmap negative cache time = 120
> idmap uid =
> idmap gid =
> template homedir = /home/%D/%U
> template shell = /bin/false
> winbind separator = \
> winbind cache time = 300
> winbind reconnect delay = 30
> winbind max clients = 200
> winbind enum users = No
> winbind enum groups = No
> winbind use default domain = No
> winbind trusted domains only = No
> winbind nested groups = Yes
> winbind expand groups = 1
> winbind nss info = template
> winbind refresh tickets = No
> winbind offline logon = No
> winbind normalize names = No
> winbind rpc only = No
> create krb5 conf = Yes
> ncalrpc dir = /run/samba/ncalrpc
> winbind max domain connections = 1
> winbindd socket directory =
> winbindd privileged socket directory =
> winbind sealed pipes = No
> allow dns updates = disabled
> dns forwarder =
> dns update command =
> nsupdate command =
> rndc command =
> multicast dns register = Yes
> samba kcc command =
> server services =
> dcerpc endpoint servers =
> spn update command =
> share backend =
> tls enabled = No
> tls keyfile =
> tls certfile =
> tls cafile =
> tls crlfile =
> tls dh params file =
> idmap config * : backend = tdb
> comment =
> path =
> username =
> invalid users =
> valid users =
> admin users =
> read list =
> write list =
> force user =
> force group =
> read only = Yes
> acl check permissions = Yes
> acl group control = No
> acl map full control = Yes
> acl allow execute always = No
> create mask = 0744
> force create mode = 00
> directory mask = 0755
> force directory mode = 00
> force unknown acl user = No
> inherit permissions = No
> inherit acls = No
> inherit owner = No
> guest only = No
> administrative share = No
> guest ok = No
> only user = No
> hosts allow =
> hosts deny =
> allocation roundup size = 1048576
> aio read size = 0
> aio write size = 0
> aio write behind =
> ea support = No
> nt acl support = Yes
> profile acls = No
> map acl inherit = No
> afs share = No
> smb encrypt = default
> durable handles = Yes
> block size = 1024
> change notify = Yes
> directory name cache size = 100
> kernel change notify = Yes
> max connections = 0
> min print space = 0
> strict allocate = No
> strict sync = No
> sync always = No
> use sendfile = No
> write cache size = 0
> max reported print jobs = 0
> max print jobs = 1000
> printable = No
> print notify backchannel = Yes
> print ok = No
> printing = cups
> cups options = raw
> print command =
> lpq command = %p
> lprm command =
> lppause command =
> lpresume command =
> queuepause command =
> queueresume command =
> printer name =
> use client driver = No
> default devmode = Yes
> force printername = No
> printjob username = %U
> default case = lower
> case sensitive = Auto
> preserve case = Yes
> short preserve case = Yes
> mangling char = ~
> hide dot files = Yes
> hide special files = No
> hide unreadable = No
> hide unwriteable files = No
> delete veto files = No
> veto files =
> hide files =
> veto oplock files =
> map archive = Yes
> map hidden = No
> map system = No
> map readonly = yes
> mangled names = Yes
> store dos attributes = No
> dmapi support = No
> browseable = Yes
> access based share enum = No
> blocking locks = Yes
> csc policy = manual
> fake oplocks = No
> kernel oplocks = No
> kernel share modes = Yes
> locking = Yes
> oplocks = Yes
> level2 oplocks = Yes
> oplock contention limit = 2
> posix locking = Yes
> strict locking = Auto
> dfree cache time = 0
> dfree command =
> copy =
> preexec =
> preexec close = No
> postexec =
> root preexec =
> root preexec close = No
> root postexec =
> available = Yes
> volume =
> fstype = NTFS
> wide links = No
> follow symlinks = Yes
> dont descend =
> magic script =
> magic output =
> delete readonly = No
> dos filemode = No
> dos filetimes = Yes
> dos filetime resolution = No
> fake directory create times = No
> vfs objects =
> msdfs root = No
> msdfs proxy =
> ntvfs handler =
>
> [homes]
> comment = Home Directories
> read only = No
> browseable = No
>
> [printers]
> comment = For samba share of volume testvol
> path = /var/spool/samba
> printable = Yes
> print ok = Yes
> browseable = No
>
> [testvol]
> path = /
> read only = No
> guest ok = Yes
> kernel share modes = No
> vfs objects = glusterfs
> glusterfs:volume = testvol
> glusterfs:logfile = /var/log/samba/glusterfs-testvol.log
> glusterfs:loglevel = 7
>
>
>
> On 22/07/14 14:26, Lalatendu Mohanty wrote:
>
>
> On 07/21/2014 02:33 PM, Jon Archer wrote:
>
>
> Hi Lala,
>
> Thanks for your response (here and on your blog). I did try removing the valid users statement, but still no luck. I would imagine the valid users statement should work, though; otherwise, how would we control access?
>
> Jon
>
> Jon,
>
> Are you still facing this issue? If yes, could you please send us the output of "testparm -s" from the host where your Samba server is running.
>
>
>
>
>
> On 2014-07-18 10:59, Lalatendu Mohanty wrote:
>
>
> On 07/17/2014 07:11 PM, Jon Archer wrote:
>
>
> Hi all,
>
> I'm currently testing out the samba-vfs-glusterfs configuration to look into replacing fuse mounted volumes.
>
> I've got a server configured as per:
> http://lalatendumohanty.wordpress.com/2014/02/11/using-glusterfs-with-samba-and-samba-vfs-plugin-for-glusterfs-on-fedora-20/ but am seeing an issue:
> "Failed to set volfile_server..."
>
> I have a gluster volume share:
>
> Volume Name: share
> Type: Replicate
> Volume ID: 06d1eb42-873d-43fe-ae94-562e975cca9a
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: storenode1:/gluster/bricks/share/brick1
> Brick2: storenode2:/gluster/bricks/share/brick1
> Options Reconfigured:
> server.allow-insecure: on
>
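
(Editor's note: a hedged sketch, not from the original thread. Guides for Samba's vfs_glusterfs on GlusterFS 3.x commonly pair the volume-level `server.allow-insecure` option shown above with a glusterd-level setting, since libgfapi connects from an unprivileged port. The path and surrounding lines below are typical for an EL6 install and are an assumption about this setup:)

```
# /etc/glusterfs/glusterd.vol -- restart glusterd after editing (assumption:
# default EL6 path; other lines in the management volume left unchanged)
volume management
    type mgmt/glusterd
    option rpc-auth-allow-insecure on
end-volume
```
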
>
> and this is then added as a share in samba:
> [share]
> comment = Gluster and CTDB based share
> path = /
> read only = no
> guest ok = yes
> valid users = jon
> vfs objects = glusterfs
> glusterfs:loglevel = 10
> glusterfs:volume = share
>
>
> The share is visible at the top level but not accessible; if I try to access it, I get a Samba client log entry of:
> [2014/07/17 13:31:56.084620, 0] ../source3/modules/vfs_glusterfs.c:253(vfs_gluster_connect)
> Failed to set volfile_server localhost
>
>
> I've tried setting the glusterfs:volfile_server option in smb.conf to the IP address, localhost, and the hostname, with the same response; only the IP, localhost, or hostname in the error message changes accordingly.
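
(Editor's note: for reference, a share definition with glusterfs:volfile_server set explicitly might look like the sketch below; "localhost" is a placeholder and the rest mirrors the share from this thread:)

```
[share]
    comment = Gluster and CTDB based share
    path = /
    read only = no
    guest ok = yes
    kernel share modes = no
    vfs objects = glusterfs
    glusterfs:volume = share
    glusterfs:volfile_server = localhost
    glusterfs:loglevel = 10
```
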
>
> Packages installed are:
>
> rpm -qa|grep gluster
> glusterfs-libs-3.5.1-1.el6.x86_64
> glusterfs-cli-3.5.1-1.el6.x86_64
> glusterfs-server-3.5.1-1.el6.x86_64
> glusterfs-3.5.1-1.el6.x86_64
> glusterfs-fuse-3.5.1-1.el6.x86_64
> samba-vfs-glusterfs-4.1.9-2.el6.x86_64
> glusterfs-api-3.5.1-1.el6.x86_64
>
> rpm -qa|grep samba
> samba-4.1.9-2.el6.x86_64
> samba-common-4.1.9-2.el6.x86_64
> samba-winbind-modules-4.1.9-2.el6.x86_64
> samba-vfs-glusterfs-4.1.9-2.el6.x86_64
> samba-winbind-clients-4.1.9-2.el6.x86_64
> samba-libs-4.1.9-2.el6.x86_64
> samba-winbind-4.1.9-2.el6.x86_64
> samba-client-4.1.9-2.el6.x86_64
>
>
> For testing purposes I have disabled SELinux and flushed all firewall rules.
>
> Does anyone have any ideas on this error and how to resolve it?
>
> Hey Jon,
>
> The output of "testparm -s" might help. Also, you can try the following:
>
> 1. Remove the line "valid users = jon" from smb.conf
> 2. Set smbpasswd for root.
> $smbpasswd -a root
>
> Then try to access the share from the client (using root and the smbpasswd you set).
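
(Editor's note: the two steps above can be sketched as the command transcript below; the hostnames and share name are placeholders taken from this thread, and the commands assume a working Samba server, so treat this as illustrative rather than verified output:)

```shell
# On the Samba server: set a Samba password for root (interactive prompt)
smbpasswd -a root

# From a client: list the shares, then connect as root and list the share root
smbclient -L //storenode02 -U root
smbclient //storenode02/testvol -U root -c 'ls'
```
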
>
> -Lala
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
>
>



