[Gluster-users] "mismatching layouts" flooding in the logs

Shishir Gowda sgowda at redhat.com
Tue Jun 12 10:07:31 UTC 2012


Hi Tomasz,

What version of gluster are you running?

What were the rebalance commands you issued? And are these messages logged after the rebalance completed successfully?
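For reference, the usual sequence on 3.2.x/3.3.x looks roughly like the sketch below; the volume name "sites" is only a guess taken from the 0-sites-dht entries in the log lines quoted further down, and the exact option spelling differs between releases:

  # check the installed version
  gluster --version

  # 3.2.x style: fix the layout first, then migrate data
  gluster volume rebalance sites fix-layout start
  gluster volume rebalance sites migrate-data start

  # 3.3.0 style: a plain start does both; status shows progress
  gluster volume rebalance sites start
  gluster volume rebalance sites status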

With regards,
Shishir

----- Original Message -----
From: gluster-users-request at gluster.org
To: gluster-users at gluster.org
Sent: Tuesday, June 12, 2012 12:30:01 AM
Subject: Gluster-users Digest, Vol 50, Issue 33

Send Gluster-users mailing list submissions to
	gluster-users at gluster.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
	gluster-users-request at gluster.org

You can reach the person managing the list at
	gluster-users-owner at gluster.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."


Today's Topics:

   1. Re: Volume info out of sync (Brian Candler)
   2. Gluster NFS performance issue upgrading from 3.2.5 to
      3.2.6/3.3.0 (Simon Detheridge)
   3. "mismatching layouts" flooding in the logs (Tomasz Chmielewski)
   4. Re: Gluster 3.3.0 and VMware ESXi 5 (Vijay Bellur)


----------------------------------------------------------------------

Message: 1
Date: Mon, 11 Jun 2012 15:05:13 +0100
From: Brian Candler <B.Candler at pobox.com>
Subject: Re: [Gluster-users] Volume info out of sync
To: gluster-users at gluster.org
Message-ID: <20120611140513.GA49082 at nsrc.org>
Content-Type: text/plain; charset=us-ascii

On Mon, Jun 11, 2012 at 09:50:50AM +0100, Brian Candler wrote:
> However, when I brought dev-storage2 back online, "gluster volume info" on
> that node didn't show the newly-created volume.

FYI, this is no longer a problem - I left the servers for a while, and after
I came back, they had synchronised automatically.

It also turns out I was using "gluster volume sync" incorrectly anyway, because
I hadn't read the CLI help properly - but the error message was what confused me.
Raised as https://bugzilla.redhat.com/show_bug.cgi?id=830845
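For anyone else hitting the same confusion, the form the CLI help describes is roughly as follows (the hostname and volume name below are placeholders):

  # pull volume definitions from a peer that has the up-to-date config
  gluster volume sync HOSTNAME all
  # or sync just one volume
  gluster volume sync HOSTNAME VOLNAME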


------------------------------

Message: 2
Date: Mon, 11 Jun 2012 15:15:26 +0100 (BST)
From: Simon Detheridge <simon at widgit.com>
Subject: [Gluster-users] Gluster NFS performance issue upgrading from
	3.2.5 to 3.2.6/3.3.0
To: gluster-users <gluster-users at gluster.org>
Message-ID: <19567418-25a8-44c3-a329-19666b1a7250 at ken>
Content-Type: text/plain; charset=utf-8

Hi,

I have a situation where I'm mounting a gluster volume on several web servers via NFS. The web servers run Rails applications off the gluster NFS mounts. The whole thing is running on EC2.

On 3.2.5, starting a Rails application on the web server was sluggish but acceptable. However, after upgrading to 3.2.6, the time taken to start a Rails application increased more than tenfold, to something that's not really suitable for a production environment. The problem also occurs with 3.3.0.

If I attach strace to the Rails process as it starts up, I see that it's looking for a very large number of nonexistent files. I think this is something Rails does that can't be helped - it checks whether many files exist and changes its behaviour depending on which ones are present.

Has something changed between 3.2.5 and 3.2.6 that could negatively affect the time it takes to stat a nonexistent file over an NFS mount to a gluster volume? Is there any way I can get the old behaviour back without downgrading?

I don't currently have proof that the nonexistent files are causing the problem, but it seems highly likely, as the performance of the other tasks the servers carry out appears unaffected.

Sorry this is slightly vague. I can run some more tests/benchmarks to try to figure out what's going on in more detail, but I thought I would ask here first in case this is related to a known issue.
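If it helps, a crude test along these lines should show the difference without involving Rails at all (the mount point /mnt/gluster below is just a placeholder for wherever the volume is mounted):

  # time a batch of stat() calls on files that don't exist, over the NFS mount
  time for i in $(seq 1 1000); do
    stat /mnt/gluster/no-such-file-$i >/dev/null 2>&1
  done
  # repeat against a local directory for comparison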

Thanks,
Simon

-- 
Simon Detheridge - CTO, Widgit Software
26 Queen Street, Cubbington, CV32 7NA - Tel: +44 (0)1926 333680


------------------------------

Message: 3
Date: Mon, 11 Jun 2012 22:17:33 +0700
From: Tomasz Chmielewski <mangoo at wpkg.org>
Subject: [Gluster-users] "mismatching layouts" flooding in the logs
To: Gluster General Discussion List <gluster-users at gluster.org>
Message-ID: <4FD60C0D.2030901 at wpkg.org>
Content-Type: text/plain; charset=UTF-8

The following is being appended to the gluster logs at a rate of around 100 kB per second, on all 10 gluster servers:

[2012-06-11 15:08:15.729429] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 966367638 - 1002159031; disk layout - 930576244 - 966367637
[2012-06-11 15:08:15.729465] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts for /gluster/pub/one/content/2012/2/23
[2012-06-11 15:08:15.733110] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-sites-dht: subvol: sites-client-41; inode layout - 572662304 - 608453697; disk layout - 536870910 - 572662303
[2012-06-11 15:08:15.733161] I [dht-common.c:525:dht_revalidate_cbk] 0-sites-dht: mismatching layouts for /gluster/pub/one/content/2012/6/10

Is there a way to get rid of that?

Before this started, I did a big add-brick / remove-brick operation, followed by a fix-layout / migrate-data rebalance.
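Roughly (from memory, and with placeholder brick paths - the volume name "sites" comes from the log lines above), the sequence was along these lines:

  # placeholder brick paths; the real bricks differ
  gluster volume add-brick sites server11:/data/sites
  gluster volume remove-brick sites server01:/data/sites
  gluster volume rebalance sites fix-layout start
  gluster volume rebalance sites migrate-data start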



-- 
Tomasz Chmielewski
http://www.ptraveler.com


------------------------------

Message: 4
Date: Mon, 11 Jun 2012 22:23:47 +0530
From: Vijay Bellur <vbellur at redhat.com>
Subject: Re: [Gluster-users] Gluster 3.3.0 and VMware ESXi 5
To: "Fernando Frediani (Qube)" <fernando.frediani at qubenet.net>
Cc: Krishna Srinivas <ksriniva at redhat.com>,
	"'gluster-users at gluster.org'" <gluster-users at gluster.org>,	Rajesh
	Amaravathi <ramarava at redhat.com>
Message-ID: <4FD6229B.5000001 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 06/11/2012 05:52 PM, Fernando Frediani (Qube) wrote:
> I was doing some reading on the Red Hat website and found this URL, and I wonder if the problem has anything to do with it:
> http://docs.redhat.com/docs/en-US/Red_Hat_Storage_Software_Appliance/3.2/html/User_Guide/ch14s04s08.html
>
> Although both the servers and the client are 64-bit, I wonder if this could somehow be related, as it seems the closest thing I could think of.
>
> The error I get when trying to power up a VM is:
>
> An unexpected error was received from the ESX host while powering on VM vm-21112.
> Failed to power on VM.
> Unable to retrieve the current working directory: 0 (No such file or directory). Check if the directory has been deleted or unmounted.
> Unable to retrieve the current working directory: 0 (No such file or directory). Check if the directory has been deleted or unmounted.
> Unable to retrieve the current working directory: 0 (No such file or directory). Check if the directory has been deleted or unmounted.
>
>

Can you please post the nfs log file from the Gluster server that you are
trying to mount from?
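On a default install it should be under /var/log/glusterfs; something like:

  # typical location of the Gluster NFS server log (path may vary with the install prefix)
  tail -n 200 /var/log/glusterfs/nfs.log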

Thanks,
Vijay


------------------------------

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


End of Gluster-users Digest, Vol 50, Issue 33
*********************************************


