[Gluster-devel] Fwd: GFS error message

Harshavardhana harsha at gluster.com
Tue Feb 16 12:19:55 UTC 2010


Hi Roland,

* my replies inline  *
On Tue, Feb 16, 2010 at 4:57 PM, Roland Fischer
<roland.fischer at xidras.com> wrote:

> hi glusterfs guys,
>
> we use glusterfs version 3.0.0 with the latest patches and always get an error:
>
Does "latest patches" mean from the "master" branch or the release-3.0 branch?

Please use the release-3.0 branch; the "master" branch is for upstream
development and is not recommended for production use.

Regards

> from the log:
>
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
> [2010-02-16 11:54:27] E [afr.c:179:afr_read_child] mirror: invalid
> argument: inode
>
> What does this error mean?
>
>
> server config:
> # export-backup01-client_repl
> # gfs-01-01 /GFS/backup01
> # gfs-01-02 /GFS/backup01
>
> volume posix
>  type storage/posix
>  option directory /GFS/backup01
> end-volume
>
> volume locks
>  type features/locks
>  option mandatory-locks on
>
Is there a reason why this has been switched on? Does your application
actually use mandatory locking?

Please read through the NOTE at
"http://gluster.com/community/documentation/index.php/Translators/features/locks"
to understand the exact behaviour of using this option.
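
If your applications only need normal advisory (fcntl) locking, a minimal
sketch of the same locks volume would simply leave that option out; as far
as I recall, features/locks falls back to advisory behaviour when
mandatory-locks is not set:

volume locks
  type features/locks
  subvolumes posix
end-volume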

Thanks
Harshavardhana


>  subvolumes posix
> end-volume
>
> volume writebehind
>  type performance/write-behind
>  option cache-size 8MB
>  option flush-behind on
>  subvolumes locks
> end-volume
>
> volume backup01
>  type performance/io-threads
>  option thread-count 16
>  subvolumes writebehind
> end-volume
>
> volume server
>  type protocol/server
>  option transport-type tcp
>  option transport.socket.listen-port 6996
>  option auth.addr.backup01.allow *
>  subvolumes backup01
> end-volume
>
>
> -----------------------------------------------------------------------------------------------------
>
>
> client replication config:
>
> volume gfs-01-01-vol
>  type protocol/client
>  option transport-type tcp
>  option remote-host gfs-01-01
>  option remote-port 6996
>  option remote-subvolume backup01
> end-volume
>
> volume gfs-01-02-vol
>  type protocol/client
>  option transport-type tcp
>  option remote-host gfs-01-02
>  option remote-port 6996
>  option remote-subvolume backup01
> end-volume
>
> volume mirror
>  type cluster/replicate
>  subvolumes gfs-01-01-vol gfs-01-02-vol
> end-volume
>
> volume readahead
>  type performance/read-ahead
>  option page-size 1MB              # unit in bytes
>  option page-count 4              # cache per file = (page-count x page-size)
>  subvolumes mirror
> end-volume
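
Just to spell out the comment above: cache per file = page-count x page-size
= 4 x 1MB = 4MB of read-ahead cache per open file, assuming you keep those
values.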
>
> volume writebehind
>  type performance/write-behind
>  option cache-size 1024KB
>  option flush-behind on
>  subvolumes readahead
> end-volume
>
> volume cache
>  type performance/io-cache
>  option cache-size 128MB
>  subvolumes writebehind
> end-volume
>
>
> Thank you for your reply
> regards, roland
>
>
>
>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>