[Gluster-users] Error writing to disperse volume with direct io option

Taehwa Lee alghost.lee at gmail.com
Sun Sep 20 04:28:26 UTC 2015


Hi,

In my case, the block size of the underlying filesystem is 4K.
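
One way to check this, for the record (assuming GNU coreutils and the brick path from my original mail below):

  stat -f -c '%S' /node/dispv   # prints the filesystem's fundamental block size, 4096 here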

So I tested again using bs=768K (a multiple of 12KB).
Your advice worked!
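
Spelling out the arithmetic (a sketch, assuming the 4-brick, redundancy-1 layout from my original mail below):

  data bricks              = 4 bricks - 1 redundancy = 3
  minimum direct I/O write = 3 x 4K brick block size = 12K
  768K                     = 64 x 12K  -> properly aligned

With an aligned block size, the direct I/O variant of the test should pass as well, e.g.:

  dd if=/dev/zero of=/mnt/gluster/test.io.1 oflag=direct bs=768k count=1024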

10gnode1:disp on /mnt/gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
root@node1:~/gluster_ec# dd if=/dev/zero of=/mnt/gluster/test.io.1 bs=768k count=1024
1024+0 records in
1024+0 records out
805306368 bytes (805 MB) copied, 9.5404 s, 84.4 MB/s
root@node1:~/gluster_ec#

Thank you for your advice.

Regards.



2015-09-20 6:54 GMT+09:00 Xavier Hernandez <xhernandez at datalab.es>:

> Hi,
>
> What filesystem are you using on the bricks, and what block size is it using?
>
> Direct I/O requires strict alignment of buffer offsets and block sizes.
> The disperse volume writes fragments in multiples of 512 bytes to the
> bricks. If the underlying filesystem requires an alignment different
> from 512 bytes, that could cause the problem you are seeing.
>
> For example, if the required alignment of the underlying filesystem is
> 4KB, you will need to write to the disperse volume in multiples of 12KB
> (you are using 3 data bricks).
>
> Xavi
>
> On 18/09/2015 at 3:00, Taehwa Lee (이태화) wrote:
>
>> Hi, I have been testing a disperse volume, so I performed a test
>> using "dd".
>>
>> I'm using 4 nodes, and the nodes are connected through a 10G network.
>>
>> *Volume creation & mount commands:*
>> root@node1:~# gluster vol create disp_vol disperse 4 redundancy 1 10gnode1:/node/dispv 10gnode2:/node/dispv 10gnode3:/node/dispv 10gnode4:/node/dispv
>> This configuration is not optimal on most workloads. Do you want to use it ? (y/n) y
>> volume create: disp_vol: success: please start the volume to access data
>> root@node1:~# gluster vol start disp_vol
>> volume start: disp_vol: success
>> root@node1:~# mount -t glusterfs 10gnode1:/disp_vol /mnt/gluster
>>
>> *Test command & error messages:*
>> root@node1:~# dd if=/dev/zero of=/mnt/gluster/test.io.1 oflag=direct bs=1024k count=1024
>> dd: error writing ‘/mnt/gluster/test.io.1’: Invalid argument
>> dd: closing output file ‘/mnt/gluster/test.io.1’: Invalid argument
>>
>>
>> To figure out why, I checked the logs, but I couldn't work out the reason.
>>
>> *Warning messages in /var/log/glusterfs/mnt-gluster.log*
>>
>> [2015-09-18 00:39:38.460320] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-2: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.460320] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-1: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.464081] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-3: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.464297] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-0: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.464459] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 18: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.464643] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 19: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.464801] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 20: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.464958] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 21: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465114] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 22: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465278] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 23: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465433] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 24: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465585] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 25: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465529] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-0: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.465863] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-1: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.465890] W [fuse-bridge.c:690:fuse_truncate_cbk]
>> 0-glusterfs-fuse: 26: FTRUNCATE() ERR => -1 (Invalid argument)
>> [2015-09-18 00:39:38.466192] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-2: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.466353] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-3: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.467464] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-0: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.467848] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-1: remote
>> operation failed [Invalid argument]
>> ---- repeated ----
>> [2015-09-18 00:39:38.473542] W [fuse-bridge.c:1230:fuse_err_cbk]
>> 0-glusterfs-fuse: 27: FLUSH() ERR => -1 (Invalid argument)
>>
>>
>> I need your help; any advice about this would be appreciated.
>>
>> If you need any information about my environment, I'm ready to provide it.
>>
>> Thanks