[Gluster-users] Directory listings not working (Khawaja Shams)

Khawaja Shams khawaja.shams at gmail.com
Tue Oct 11 11:01:00 UTC 2011


Hi,
  We dug deeper and found the answer.

Since we are deploying in the EC2 environment, we had multiple Gluster
machines based off the same image. As you can imagine, each of them had the
same glusterd UUID. This issue manifested itself when we tried to grow our
cluster beyond two nodes - apart from directory listings, we appeared to
have full functionality on two nodes despite this misconfiguration. After
changing the UUID on each of the boxes, everything worked like a dream.
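
In case it helps others, here is one way to reset the UUID on a freshly
cloned node. This is a sketch, not our exact transcript, and it assumes the
3.2-era state directory /etc/glusterd (newer builds keep it under
/var/lib/glusterd):

# service glusterd stop
# rm /etc/glusterd/glusterd.info      (glusterd writes a fresh UUID on start)
# service glusterd start
# cat /etc/glusterd/glusterd.info     (verify the UUID now differs per node)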


Just as an FYI, using 24 EBS volumes across 3 HPC nodes on AWS, we were able
to obtain ~1.3 Gbps of write throughput to our volumes from a single client
node for write tests of several hundred gigabytes. Breaking the 1 Gbps
barrier so easily feels nice :). We will continue to optimize our setup and
share the results with the community as we get them.

Thanks for all your help.

Regards,
Khawaja

On Thu, Oct 6, 2011 at 10:10 PM, Shishir Nagaraja Gowda <
shishirng at gluster.com> wrote:

> Hi Khawaja,
>
> Can you please provide the client logs to help us triage the issue?
>
> With regards,
> Shishir
> ________________________________________
> From: gluster-users-bounces at gluster.org [gluster-users-bounces at gluster.org]
> on behalf of gluster-users-request at gluster.org
> [gluster-users-request at gluster.org]
> Sent: Friday, October 07, 2011 12:30 AM
> To: gluster-users at gluster.org
> Subject: Gluster-users Digest, Vol 42, Issue 8
>
>
> Today's Topics:
>
>   1. Re: Gluster on EC2 - how to replace failed EBS volume?
>      (Don Spidell)
>   2. Re: Directory listings not working (Khawaja Shams)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 06 Oct 2011 13:42:33 -0400 (EDT)
> From: Don Spidell <dspidell at nxtbookmedia.com>
> Subject: Re: [Gluster-users] Gluster on EC2 - how to replace failed
>        EBS volume?
> To: Olivier Nicole <Olivier.Nicole at cs.ait.ac.th>
> Cc: gluster-users at gluster.org
> Message-ID: <62d849c4-e532-494f-a0a1-b69af59114b8 at dspidell>
> Content-Type: text/plain; charset=utf-8
>
> Olivier,
>
> That is a brilliant idea.  I implemented it in a test environment today and
> am doing some benchmarks.  Great idea to eliminate RAID0.  I was only using
> it to get better I/O throughput on EC2 EBS.  I didn't know that Gluster
> would handle the striping like it does.
>
> Thank you very much!
> Don
>
>
>
>
> ----- Original Message -----
> From: "Olivier Nicole" <Olivier.Nicole at cs.ait.ac.th>
> To: dspidell at nxtbookmedia.com
> Cc: gluster-users at gluster.org
> Sent: Wednesday, October 5, 2011 10:45:13 PM
> Subject: Re: [Gluster-users] Gluster on EC2 - how to replace failed EBS
> volume?
>
> Hi Don,
>
> > Thanks for your reply.  Can you explain what you mean by:
> >
> > > Instead of configuring your 8 disks in RAID 0, I would use JBOD and
> > > let Gluster do the concatenation. That way, when you replace a disk,
> > > you just have 125 GB to self-heal.
>
> If I am not mistaken, RAID 0 provides no redundancy; it just stripes
> the 8 125 GB disks together so they appear as one big 1 TB disk.
>
> So I would not use any RAID on the machine, just have 8 independent
> disks, each mounted at its own location:
>
> mount /dev/sda1 /
> mount /dev/sdb1 /datab
> mount /dev/sdc1 /datac
> etc.
>
> Then in gluster I would have the bricks:
>
> server:/data
> server:/datab
> server:/datac
> etc.
>
> If any disk (except the system disk) fails, you can simply swap in a
> new disk and let gluster self-heal.
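>
> For example, something along these lines (a sketch only: the volume
> name and the replica 2 layout across two servers are illustrative,
> since self-heal assumes a replicated volume):
>
> gluster volume create bigvol replica 2 \
>     server1:/datab server2:/datab \
>     server1:/datac server2:/datac
> gluster volume start bigvol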
>
> Even if RAID 0 increases the disk throughput because it does striping
> (writing different blocks to different disks), gluster does more or
> less the same: each new file will end up on a different disk. So the
> throughput should be close.
>
> The only disadvantage is that gluster will have some space overhead,
> as it will create a replica of the directory tree on each disk.
>
> I think that you should only use RAID with gluster when RAID provides
> local redundancy (RAID 1 or above): in that case, when a disk fails,
> gluster will not notice the problem; you swap in a new disk and let
> RAID rebuild the information.
>
> Best,
>
> Olivier
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 6 Oct 2011 11:17:45 -0700
> From: Khawaja Shams <kshams at usc.edu>
> Subject: Re: [Gluster-users] Directory listings not working
> To: Luis Cerezo <lec at luiscerezo.org>
> Cc: gluster-users at gluster.org
> Message-ID:
>        <CAD7LovQV4Ciz80XTGY4+7d5EDe=BriwEH_qWSiVRSpg4RMC1pA at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>  To clarify, we are not intentionally using a nightly build. We downloaded
> the source from here:
> http://download.gluster.com/pub/gluster/glusterfs/LATEST/
>
> After installing from source, we get the version specified in the email
> above. Is this incorrect? Where should we be downloading the stable build
> from? Thanks!
>
> Regards,
> Khawaja
>
> On Wed, Oct 5, 2011 at 3:44 PM, Khawaja Shams <kshams at usc.edu> wrote:
>
> > Hi Luis,
> >    Thanks for responding. We are using the nightly build from October 4th.
> > Maybe that is our problem.
> >
> > glusterfs 3.2.4 built on Oct  4 2011 22:49:01
> > Repository revision: git://git.gluster.com/glusterfs.git
> > Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
> > GlusterFS comes with ABSOLUTELY NO WARRANTY.
> > You may redistribute copies of GlusterFS under the terms of the GNU
> > General Public License.
> >
> >
> > Our ls is the same as /bin/ls:
> > # which ls
> > alias ls='ls --color=tty'
> >     /bin/ls
> >
> > Any suggestions?
> >
> > Regards,
> > Khawaja
> >
> > On Wed, Oct 5, 2011 at 5:48 AM, Luis Cerezo <lec at luiscerezo.org> wrote:
> >
> >> you don't say what version. is there a difference for you between
> >> /bin/ls and ls?
> >>
> >> -luis
> >>
> >>
> >> On Oct 5, 2011, at 6:04 AM, Khawaja Shams wrote:
> >>
> >> Hello,
> >>   I just finished installing gluster on two machines in server mode in
> >> EC2. I have mounted it via fuse on one of the boxes. Here is my volume
> >> info:
> >>
> >>
> >> # gluster volume info
> >>
> >> Volume Name: fast
> >> Type: Stripe
> >> Status: Started
> >> Number of Bricks: 2
> >> Transport-type: tcp
> >> Bricks:
> >> Brick1: server1:/data2
> >> Brick2: server2:/data
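> >>
> >> For reference, a stripe volume like this would have been created with
> >> something like the following (a reconstruction from the info above, not
> >> the exact command used):
> >>
> >> # gluster volume create fast stripe 2 transport tcp server1:/data2 server2:/data
> >> # gluster volume start fast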
> >>
> >>
> >> All of this works great, and I can write files at a fairly high
> >> throughput. However, I cannot list files in the directory. I can write
> >> files and then read them back without any concerns. Furthermore, I can
> >> see parts of the files in the /data and /data2 directories on the
> >> server.
> >>
> >>   Did I miss a step? Thank you.
> >>
> >>
> >> Regards,
> >> Khawaja
> >>
> >> _______________________________________________
> >> Gluster-users mailing list
> >> Gluster-users at gluster.org
> >> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> >>
> >>
> >>
> >> Luis E. Cerezo
> >>
> >> http://www.luiscerezo.org
> >> http://twitter.com/luiscerezo
> >> http://flickr.com/photos/luiscerezo
> >> photos for sale:
> >> http://photos.luiscerezo.org
> >> Voice: 412 223 7396
> >>
> >>
> >
>
> ------------------------------
>
> End of Gluster-users Digest, Vol 42, Issue 8
> ********************************************

