[Gluster-users] Replicated striped data lose
Mahdi Adnan
mahdi.adnan at earthlinktele.com
Sat Mar 12 15:51:34 UTC 2016
Thanks David,
My settings were all defaults; I had just created the pool and started it.
I have now applied the settings you recommended, but it seems to be the
same issue. Current volume info:
Type: Striped-Replicate
Volume ID: 44adfd8c-2ed1-4aa5-b256-d12b64f7fc14
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gfs001:/bricks/t1/s
Brick2: gfs002:/bricks/t1/s
Brick3: gfs001:/bricks/t2/s
Brick4: gfs002:/bricks/t2/s
Options Reconfigured:
performance.stat-prefetch: off
network.remote-dio: on
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.readdir-ahead: on
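
For reference, here is a minimal sketch of the CLI steps that would produce the layout and options shown above. The volume name "s" is only an assumption taken from the brick paths, since the Volume Name line is not included in the output:

# Assumed volume name "s": stripe 2 x replica 2 over the four bricks above.
gluster volume create s stripe 2 replica 2 transport tcp \
    gfs001:/bricks/t1/s gfs002:/bricks/t1/s \
    gfs001:/bricks/t2/s gfs002:/bricks/t2/s
gluster volume start s

# Options as listed under "Options Reconfigured":
gluster volume set s performance.quick-read off
gluster volume set s performance.read-ahead off
gluster volume set s performance.io-cache off
gluster volume set s performance.stat-prefetch off
gluster volume set s cluster.eager-lock enable
gluster volume set s network.remote-dio on
gluster volume set s performance.readdir-ahead on
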
On 03/12/2016 03:25 PM, David Gossage wrote:
>
>
> On Sat, Mar 12, 2016 at 1:55 AM, Mahdi Adnan
> <mahdi.adnan at earthlinktele.com> wrote:
>
> Dear all,
>
> I have created a replicated striped volume with two bricks on each of
> two servers, but I can't use it: when I mount it in ESXi and try
> to migrate a VM to it, the data gets corrupted.
> Does anyone have any idea why this is happening?
>
> Dell 2950 x2
> Seagate 15k 600GB
> CentOS 7.2
> Gluster 3.7.8
>
> Appreciate your help.
>
>
> Most reports of this I have seen end up being settings-related. Post
> the output of gluster volume info. Below are the settings I have most
> commonly seen recommended.
> I'd hazard a guess you may have the read-ahead cache or prefetch on.
>
> quick-read=off
> read-ahead=off
> io-cache=off
> stat-prefetch=off
> eager-lock=enable
> remote-dio=on
>
>
> Mahdi Adnan
> System Admin
>
>
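
As a follow-up to the settings David lists above, a minimal sketch for confirming they actually took effect on the volume, again assuming the volume name "s" (not confirmed by the output earlier):

gluster volume info s
gluster volume get s network.remote-dio
gluster volume get s cluster.eager-lock

If gluster volume get is not available on this 3.7.x build, gluster volume info by itself still lists everything under "Options Reconfigured".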