[Gluster-users] xfs images on glusterfs

Anand Avati avati at zresearch.com
Sun Aug 17 12:58:12 UTC 2008


Krzysztof,
 is the server-side kernel 2.6.x or 2.4.x? The permitted offset boundaries
for direct I/O differ between 2.4.x and 2.6.x.

avati

2008/8/15 Krzysztof Chojnowski <notch at toltech.nl>

> Hi list,
>
> we're evaluating GlusterFS as a storage solution for our Xen cluster. We
> want to use it to store rootfs images of virtual machines and to be able
> to use advanced features like live migration. Unfortunately, we ran into
> problems while trying to use XFS on those images (ext3 works just fine,
> but we would really like to use XFS).
>
> When trying to create an XFS filesystem on an image stored on GlusterFS we get:
> # mkfs.xfs tst.img
> mkfs.xfs: pwrite64 failed: Invalid argument
>
> server debug output:
> 2008-08-15 15:38:03 D [inode.c:367:__active_inode] brick/inode:
> activating inode(268640449), lru=1/1024
> 2008-08-15 15:38:04 E [posix.c:1212:posix_writev] brick: O_DIRECT:
> offset is Invalid
>
> client:
> 2008-08-15 15:38:04 D [fuse-bridge.c:1604:fuse_writev_cbk]
> glusterfs-fuse: 524797: WRITE => 512/512,939524096/1073741824
> 2008-08-15 15:38:04 D [fuse-bridge.c:1641:fuse_write] glusterfs-fuse:
> 524798: WRITE (0x2aaaab200a00, size=512, offset=402653184)
> 2008-08-15 15:38:04 D [fuse-bridge.c:1604:fuse_writev_cbk]
> glusterfs-fuse: 524798: WRITE => 512/512,402653184/1073741824
> 2008-08-15 15:38:04 D [fuse-bridge.c:1641:fuse_write] glusterfs-fuse:
> 524799: WRITE (0x2aaaab200a00, size=512, offset=134218240)
> 2008-08-15 15:38:04 E [fuse-bridge.c:1609:fuse_writev_cbk]
> glusterfs-fuse: 524799: WRITE => -1 (22)
> 2008-08-15 15:38:04 D [fuse-bridge.c:1665:fuse_flush] glusterfs-fuse:
> 524800: FLUSH 0x2aaaab200a00
> 2008-08-15 15:38:04 D [fuse-bridge.c:916:fuse_err_cbk] glusterfs-fuse:
> 524800: (16) ERR => 0
> 2008-08-15 15:38:04 D [fuse-bridge.c:1692:fuse_release] glusterfs-fuse:
> 524801: CLOSE 0x2aaaab200a00
> 2008-08-15 15:38:04 D [fuse-bridge.c:916:fuse_err_cbk] glusterfs-fuse:
> 524801: (17) ERR => 0
>
> server spec:
> volume brick
>  type storage/posix
>  option directory /mnt/export/test
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp/server # For TCP/IP transport
>  option auth.ip.brick.allow *
>  subvolumes brick
> end-volume
>
> client spec:
> volume remote1
>  type protocol/client
>  option transport-type tcp/client
>  option remote-host 192.168.211.2
>  option remote-subvolume brick
> end-volume
>
> server was started with:
> glusterfsd -f glusterfs-server-simple.vol  --no-daemon
> --log-file=/dev/stdout --log-level=DEBUG
> and client:
> glusterfs -f glusterfs-client-simple.vol --direct-io-mode=DISABLE
> --no-daemon --log-file=/dev/stdout --log-level=DEBUG /mnt/glusterfs/
> (we use --direct-io-mode=DISABLE as suggested in
> http://www.gluster.org/docs/index.php/Technical_FAQ#Loop_mounting_image_files_stored_in_glusterFS_file_system)
>
> The server and client were on the same machine, running Debian etch and
> glusterfs 1.3.9 built on Jul 11 2008 15:10:51.
> Repository revision: glusterfs--mainline--2.5--patch-770
>
> Thanks in advance for any help with this problem.
>
> regards
> Notch
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>



-- 
If I traveled to the end of the rainbow
As Dame Fortune did intend,
Murphy would be there to tell me
The pot's at the other end.

