[Gluster-users] Error writing to disperse volume with direct io option

Taehwa Lee alghost.lee at gmail.com
Sat Sep 19 00:12:07 UTC 2015


Thanks for your response.

I tried your suggestion:
root at node1:~/gluster_ec# mount
...
10gnode1:disp on /mnt/gluster type fuse.glusterfs
(rw,default_permissions,allow_other,max_read=131072)
root at node1:~/gluster_ec# !dd
dd if=/dev/zero of=/mnt/gluster/test.io.1 bs=1024k count=1024 oflag=direct
dd: error writing ‘/mnt/gluster/test.io.1’: Invalid argument
dd: closing output file ‘/mnt/gluster/test.io.1’: Invalid argument

However, mounting without the leading slash produced the same error.
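
One guess about the cause (an assumption on my part, not something confirmed by the logs): the disperse translator stripes each write across (bricks - redundancy) data fragments, and if the EC fragment size is 512 bytes as I believe, then with O_DIRECT a write may need to be a multiple of the full stripe. A quick arithmetic check for this 4+1... sorry, 4-brick/redundancy-1 layout:

```python
# Assumption: GlusterFS EC uses 512-byte fragments, so the stripe size is
# 512 * (bricks - redundancy). Neither number is confirmed for this setup.
FRAGMENT = 512
bricks, redundancy = 4, 1
stripe = FRAGMENT * (bricks - redundancy)  # 512 * 3 = 1536 bytes

bs = 1024 * 1024  # dd bs=1024k, as in the failing command
# 1048576 % 1536 == 1024, so a 1 MiB write is not stripe-aligned
print(stripe, bs % stripe)
```

If that reasoning holds, a bs that is a multiple of 1536 (e.g. bs=1536k) would be worth trying with oflag=direct.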

Regards.
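
P.S. Two settings that might also be worth experimenting with, in case the client or bricks are mishandling O_DIRECT (I'm not certain either applies to this release, so treat this as a sketch to test, not a confirmed fix):

```shell
# Assumption: performance.strict-o-direct and the FUSE direct-io-mode
# mount option are available in the GlusterFS version in use.
gluster volume set disp_vol performance.strict-o-direct on

# Remount the client with FUSE direct I/O explicitly enabled.
umount /mnt/gluster
mount -t glusterfs -o direct-io-mode=enable 10gnode1:disp_vol /mnt/gluster
```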


2015-09-18 19:23 GMT+09:00 Javier Talens Segui <Javier.Segui at uib.no>:

> mount -t glusterfs 10gnode1:disp_vol /mnt/gluster
>
> Note: remove the / when indicating the volume name. Try it like that, and
> check that the volume is mounted before accessing it:
>
> mount  (you should see 10gnode1:disp_vol on /mnt/gluster (rw,relatime.....
> etc)
>
> Regards,
> J.
>
>
> On 2015-09-18 03:00, 이태화 wrote:
>
>> Hi, I have been testing a disperse volume, so I performed a test using
>> "dd".
>>
>> I'm using 4 nodes, and the nodes are connected through a 10G network.
>>
>> *Volume creation & mount command:*
>> root at node1:~# gluster vol create disp_vol disperse 4 redundancy 1
>> 10gnode1:/node/dispv 10gnode2:/node/dispv 10gnode3:/node/dispv
>> 10gnode4:/node/dispv
>> This configuration is not optimal on most workloads. Do you want to use it
>> ? (y/n) y
>> volume create: disp_vol: success: please start the volume to access data
>> root at node1:~# gluster vol start disp_vol
>> volume start: disp_vol: success
>> root at node1:~# mount -t glusterfs 10gnode1:/disp_vol /mnt/gluster
>>
>> *Test command & error message:*
>> root at node1:~# dd if=/dev/zero of=/mnt/gluster/test.io.1 oflag=direct
>> bs=1024k count=1024
>> dd: error writing ‘/mnt/gluster/test.io.1’: Invalid argument
>> dd: closing output file ‘/mnt/gluster/test.io.1’: Invalid argument
>>
>>
>> To find out why, I checked the logs, but I couldn't determine the reason.
>>
>> *Warning messages in /var/log/glusterfs/mnt-gluster.log*
>>
>> [2015-09-18 00:39:38.460320] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-2: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.460320] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-1: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.464081] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-3: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.464297] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-0: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.464459] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 18: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.464643] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 19: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.464801] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 20: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.464958] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 21: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465114] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 22: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465278] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 23: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465433] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 24: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465585] W [fuse-bridge.c:2240:fuse_writev_cbk]
>> 0-glusterfs-fuse: 25: WRITE => -1 (Invalid argument)
>> [2015-09-18 00:39:38.465529] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-0: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.465863] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-1: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.465890] W [fuse-bridge.c:690:fuse_truncate_cbk]
>> 0-glusterfs-fuse: 26: FTRUNCATE() ERR => -1 (Invalid argument)
>> [2015-09-18 00:39:38.466192] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-2: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.466353] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-3: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.467464] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-0: remote
>> operation failed [Invalid argument]
>> [2015-09-18 00:39:38.467848] W [MSGID: 114031]
>> [client-rpc-fops.c:904:client3_3_writev_cbk] 0-disp_vol-client-1: remote
>> operation failed [Invalid argument]
>> ---- repeated ----
>> [2015-09-18 00:39:38.473542] W [fuse-bridge.c:1230:fuse_err_cbk]
>> 0-glusterfs-fuse: 27: FLUSH() ERR => -1 (Invalid argument)
>>
>>
>> I need your help; any pointers would be appreciated.
>>
>> If you need more information about my environment, I'm happy to provide it.
>>
>> Thanks
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
> --
> Javier Talens Segui
> University of Bergen
> Senior Engineer
>
