[Bugs] [Bug 1322214] New: [HC] Add disk in a Hyper-converged environment fails when glusterfs is running in directIO mode
bugzilla at redhat.com
Wed Mar 30 03:26:49 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1322214
Bug ID: 1322214
Summary: [HC] Add disk in a Hyper-converged environment fails
when glusterfs is running in directIO mode
Product: GlusterFS
Version: mainline
Component: core
Keywords: Triaged
Severity: high
Priority: high
Assignee: bugs at gluster.org
Reporter: kdhananj at redhat.com
CC: annair at redhat.com, bugs at gluster.org,
kdhananj at redhat.com, rcyriac at redhat.com,
rhs-bugs at redhat.com, sabose at redhat.com,
sasundar at redhat.com, srao at redhat.com
Depends On: 1314421
Blocks: 1258386 (Gluster-HC-1)
+++ This bug was initially created as a clone of Bug #1314421 +++
Description of problem:
In an oVirt-Gluster hyperconverged environment, adding a disk to a VM from a
glusterfs storage pool fails when glusterfs is running in posix/directIO mode.
The gluster volume is configured to run in directIO mode by adding
    option o-direct on
to the /var/lib/glusterd/vols/gl_01/*.vol files. Example below:
volume gl_01-posix
    type storage/posix
    option o-direct on
    option brick-gid 36
    option brick-uid 36
    option volume-id c131155a-d40c-4d9e-b056-26c61b924c26
    option directory /bricks/b01/g
end-volume
When the option is removed and the volume is restarted, disks can be added to
the VM from the glusterfs pool.
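For reference, a minimal sketch of the restart step described above, using the
volume name from this report (one caveat: glusterd regenerates the *.vol files
whenever options are changed through the CLI, so hand edits like the one above
can be silently overwritten):

    # Restart the gl_01 volume so that volfile changes take effect;
    # 'volume stop' prompts for confirmation.
    gluster volume stop gl_01
    gluster volume start gl_01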
Version-Release number of selected component (if applicable):
RHEV version: 3.6
glusterfs-client-xlators-3.7.5-11.el7rhgs.x86_64
glusterfs-cli-3.7.5-11.el7rhgs.x86_64
glusterfs-libs-3.7.5-11.el7rhgs.x86_64
glusterfs-3.7.5-11.el7rhgs.x86_64
glusterfs-api-3.7.5-11.el7rhgs.x86_64
glusterfs-fuse-3.7.5-11.el7rhgs.x86_64
glusterfs-server-3.7.5-11.el7rhgs.x86_64
How reproducible:
Easily reproducible
Steps to Reproduce:
1. Create a GlusterFS storage pool in an oVirt environment
2. Configure GlusterFS in a posix/directIO mode
3. Create a new VM or add a disk to an existing VM. The add-disk step fails
(a command-line approximation is sketched under Additional info below)
Actual results:
Expected results:
Additional info:
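A command-line approximation of the failing I/O path, as referenced in the
steps above (a sketch, assuming oVirt/qemu with cache=none opens image files
with O_DIRECT, which dd's oflag=direct also exercises; the server name and
mount point below are hypothetical):

    # Mount the gl_01 volume over FUSE (substitute a real server).
    mount -t glusterfs server1:/gl_01 /mnt/gl_01
    # Issue O_DIRECT writes with a 4 KiB block size against the mount.
    dd if=/dev/zero of=/mnt/gl_01/directio-test.img bs=4096 count=1024 oflag=direct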
--- Additional comment from Krutika Dhananjay on 2016-03-17 08:11:14 EDT ---
Hi Sanjay,
In light of the recent discussion we had on a mail thread regarding direct-io
behavior, I have the following question:
Assuming the 'cache=none' command-line option implies that the VM image files
will all be opened with the O_DIRECT flag (which means that the write buffers
will already be aligned with the sector size of the underlying block device),
the only layer in the combined client-server stack that could prevent us from
achieving o-direct-like behavior because of caching would be the write-behind
translator.
Therefore, I am wondering whether it is sufficient to enable
'performance.strict-o-direct' to achieve the behavior you expect to see with
o-direct.
-Krutika
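For concreteness, both layers discussed above map to standard volume-set
options; a minimal sketch using the volume name from this report:

    # Make the stack honour O_DIRECT instead of letting write-behind
    # cache O_DIRECT writes:
    gluster volume set gl_01 performance.strict-o-direct on
    # Or rule out write-behind caching entirely by disabling it:
    gluster volume set gl_01 performance.write-behind off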
--- Additional comment from Sanjay Rao on 2016-03-17 08:20:02 EDT ---
I have tested with different options. The only option that enabled true
directIO on the glusterfs server was the posix setting.
I can verify again with performance.strict-o-direct enabled on the recent
glusterfs version (glusterfs-server-3.7.5-18.33) installed on my system, just
to be sure.
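A possible verification sequence for that retest (assuming a FUSE mount of
gl_01 at the hypothetical /mnt/gl_01):

    gluster volume set gl_01 performance.strict-o-direct on
    # 'Options Reconfigured' in the info output should now list the option.
    gluster volume info gl_01
    # Exercise an O_DIRECT write on the mount and observe the result.
    dd if=/dev/zero of=/mnt/gl_01/directio-test.img bs=4096 count=1024 oflag=direct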
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1258386
[Bug 1258386] [TRACKER] Gluster Hyperconvergence - Phase 1
https://bugzilla.redhat.com/show_bug.cgi?id=1314421
[Bug 1314421] [HC] Add disk in a Hyper-converged environment fails when
glusterfs is running in directIO mode
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.