[Bugs] [Bug 1360785] New: Direct io to sharded files fails when on zfs backend
bugzilla at redhat.com
Wed Jul 27 13:30:38 UTC 2016
https://bugzilla.redhat.com/show_bug.cgi?id=1360785
Bug ID: 1360785
Summary: Direct io to sharded files fails when on zfs backend
Product: GlusterFS
Version: 3.7.13
Component: sharding
Severity: high
Assignee: bugs at gluster.org
Reporter: dgossage at carouselchecks.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
Created attachment 1184658
--> https://bugzilla.redhat.com/attachment.cgi?id=1184658&action=edit
logs from directio test
Beginning with 3.7.12 and continuing in 3.7.13, direct I/O to sharded files
fails when the bricks are backed by ZFS.
How reproducible: Always
Steps to Reproduce:
1. ZFS-backed bricks, default settings except xattr=sa (a fuller setup
   sketch follows after these steps)
2. GlusterFS 3.7.12+ with sharding enabled
3. dd if=/dev/zero \
     of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test \
     oflag=direct count=100 bs=1M
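For completeness, a minimal end-to-end setup along these lines would look
roughly as follows (pool name, brick path, and volume name are placeholders,
not taken from the affected system):

  # ZFS dataset for the brick, defaults except xattr=sa
  zfs create tank/brick1
  zfs set xattr=sa tank/brick1
  mkdir /tank/brick1/brick

  # single-brick volume with sharding enabled
  gluster volume create testvol 192.168.71.11:/tank/brick1/brick force
  gluster volume set testvol features.shard on
  gluster volume start testvol

  # mount and write with direct I/O
  mount -t glusterfs 192.168.71.11:/testvol /mnt/test
  dd if=/dev/zero of=/mnt/test/test oflag=direct count=100 bs=1M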
Actual results:
dd: error writing
'/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test':
Operation not permitted
The file 'test' is created with a size equal to the configured shard size.
The shard files created under .shard are 0 bytes.
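To isolate whether the brick filesystem itself is refusing direct I/O, the
same dd can be run directly against the ZFS brick, bypassing gluster entirely
(brick path is a placeholder):

  dd if=/dev/zero of=/tank/brick1/brick/directio-test oflag=direct count=100 bs=1M

As far as I know, ZFS on Linux at this point does not implement O_DIRECT, so
if this dd also fails, the problem would be the brick filesystem rejecting the
O_DIRECT open rather than sharding itself.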
Expected results:
100+0 records in
100+0 records out
104857600 bytes etc.....
Additional info:
Proxmox users have been able to work around this by changing VM disk caching
from none to writethrough or writeback. (With cache=none, QEMU opens the disk
image with O_DIRECT; the other modes do not, which is presumably why this
avoids the failure.) It is not clear this would help with oVirt, as the
Python script that checks storage with dd and oflag=direct also fails.
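For reference, that oVirt/VDSM storage check can be approximated with a plain
dd against the storage domain metadata (the dom_md/metadata path is my
assumption based on the standard storage domain layout, not copied from the
actual script):

  dd if=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/dom_md/metadata \
     of=/dev/null bs=4096 count=1 iflag=direct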
Attaching client and brick logs from the test.