[Bugs] [Bug 1728183] SMBD thread panics on file operations from Windows, OS X and Linux when using vfs_glusterfs

bugzilla at redhat.com bugzilla at redhat.com
Fri Jul 19 10:19:41 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1728183

Anoop C S <anoopcs at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(ryan at magenta.tv)



--- Comment #5 from Anoop C S <anoopcs at redhat.com> ---
(In reply to ryan from comment #0)
> Created attachment 1588661 [details]
> Windows error 01
> 
> Description of problem:
> SMBD thread panics when a file operation is performed from a Windows, Linux
> or OS X client while the share uses the glusterfs VFS module, either on its
> own or in conjunction with other modules, e.g.:
> >    vfs objects = catia fruit streams_xattr glusterfs
> 
> 
> Gluster volume info:
> Volume Name: mcv01
> Type: Distributed-Replicate
> Volume ID: 1580ab45-0a14-4f2f-8958-b55b435cdc47
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: mcn01:/mnt/h1a/mcv01_data
> Brick2: mcn02:/mnt/h1b/mcv01_data
> Brick3: mcn01:/mnt/h2a/mcv01_data
> Brick4: mcn02:/mnt/h2b/mcv01_data
> Options Reconfigured:
> features.quota-deem-statfs: on
> nfs.disable: on
> features.inode-quota: on
> features.quota: on
> cluster.brick-multiplex: off
> cluster.server-quorum-ratio: 50%
> 
> 
> Version-Release number of selected component (if applicable):
> Gluster 6.3
> Samba 4.10.6-5
> 
> How reproducible:
> Every time
> 
> Steps to Reproduce:
> 1. Mount share as mapped drive
> 2. Write to share or read from share
> 
> Actual results:
> Multiple error messages (attached to this bug).
> On OS X or Linux, running 'dd if=/dev/zero of=/mnt/share/test.dat bs=1M
> count=100' results in a hang. Tailing the OS X console logs shows that the
> share is timing out.
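
FWIW, from a Linux client the reproducer should boil down to something like
this (server name, user and mount point below are placeholders, adjust as
needed):

    mount -t cifs //<server>/mcv01 /mnt/share -o username=<user>
    dd if=/dev/zero of=/mnt/share/test.dat bs=1M count=100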

This is weird. Can you post your smb.conf?
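
For reference, a vfs_glusterfs share is typically defined along these lines
('mcv01' is the volume name from comment #0; the path, volfile server and
log file are placeholders):

    [mcv01]
        path = /
        vfs objects = catia fruit streams_xattr glusterfs
        glusterfs:volume = mcv01
        glusterfs:volfile_server = localhost
        glusterfs:logfile = /var/log/samba/glusterfs-mcv01.%M.log
        kernel share modes = no

It would help to see how your share section differs from this pattern,
including any global options.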

-- 
You are receiving this mail because:
You are on the CC list for the bug.

