Fwd: [Gluster-devel] cluster/stripe

Raghavendra G raghavendra.hg at gmail.com
Wed Sep 26 05:54:07 UTC 2007


Resending to gluster-devel...

---------- Forwarded message ----------
From: Alexey Filin <alexey.filin at gmail.com>
Date: Sep 26, 2007 1:29 AM
Subject: Re: [Gluster-devel] cluster/stripe
To: Raghavendra G <raghavendra.hg at gmail.com>

Hi Raghavendra,

I found and fixed the error in stripe.c (as Matthias already proposed), but
nothing changed, so I simplified the configs to:
--------------------------------------------------------------
servers:

# Namespace posix
volume brick-ns
  type storage/posix
  option directory /data/export-ns
end-volume

volume brick
  type storage/posix
  option directory /data/export
end-volume

### Trace storage/posix translator.
volume trace
  type debug/trace
  subvolumes brick
  option debug on
end-volume

volume server
 type protocol/server
 subvolumes brick brick-ns
 option transport-type tcp/server
# option bind-address 172.30.2.        # Default is to listen on all interfaces
 option listen-port 6996                # Default is 6996
# option client-volume-filename /etc/glusterfs/glusterfs-client.vol
 option auth.ip.brick.allow 172.30.2.*
 option auth.ip.brick-ns.allow 172.30.2.*
end-volume
--------------------------------------------------------------
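
For reference, the server spec file is used with glusterfsd roughly like this
(the spec path is just an example):

--------------------------------------------------------------
# on each server node, assuming the spec above is saved as
# /etc/glusterfs/glusterfs-server.vol
glusterfsd -f /etc/glusterfs/glusterfs-server.vol
--------------------------------------------------------------
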
client:

volume client1
 type protocol/client
 option transport-type tcp/client
 option remote-host 172.30.2.1
 option remote-subvolume brick
end-volume

volume client2
 type protocol/client
 option transport-type tcp/client
 option remote-host 172.30.2.2
 option remote-subvolume brick
end-volume

volume stripe1
 type cluster/stripe
 subvolumes client1 client2
# option block-size *:10MB
 option block-size *:1MB
end-volume

### Trace cluster/stripe translator.
volume trace
  type debug/trace
  subvolumes stripe1
  option debug on
end-volume
--------------------------------------------------------------
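
The client spec is then used to mount, roughly (mount point is just an
example):

--------------------------------------------------------------
# on the client, assuming the spec above is saved as
# /etc/glusterfs/glusterfs-client.vol
glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs
--------------------------------------------------------------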

Added a debug print to stripe.c:stripe_open():

--------------------------------------------------------------
  striped = data_to_int8 (dict_get (loc->inode->ctx, this->name));
  local->striped = striped;

+gf_log (this->name,
+        GF_LOG_WARNING,
+        "MY: stripe_open: local->stripe_size=%i local->striped=%i this->name=(%s)",
+        (int)local->stripe_size, (int)local->striped, this->name);

  if (striped == 1) {
    local->call_count = 1;
--------------------------------------------------------------
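
(To pick up the change, the usual rebuild/reinstall from the glusterfs source
tree, then restart glusterfsd and remount, roughly:)

--------------------------------------------------------------
# in the glusterfs source tree, after editing stripe.c
make && make install
# then restart glusterfsd on the servers and remount on the client
--------------------------------------------------------------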

I got something like this in the client log file:

2007-09-26 00:57:25 W [stripe.c:1804:stripe_open] stripe1: MY: stripe_open:
local->stripe_size=0 local->striped=1 this->name=(stripe1)

The file is created on one node only, so I changed the condition above to:

--------------------------------------------------------------
/*  if (striped == 1) { */
  if (!striped) {
    local->call_count = 1;
--------------------------------------------------------------

because the condition seems to be inverted (?), since the else branch is the
one for striped files:

  } else {
    /* Striped files */

I got the same result, the file is created on one node only :(
Running set/getfattr -n trusted.stripe1.stripe-size manually works fine.
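
For what it is worth, the check is something like the following: write a file
larger than the block-size through the mount point, then look at the back-end
directories directly; the xattr commands are the manual check mentioned above
(file name and value are just examples):

--------------------------------------------------------------
# on the client: write 4 MB through the stripe (block-size is 1MB)
dd if=/dev/zero of=/mnt/glusterfs/testfile bs=1M count=4

# on each server: with striping the back-end file should appear on both
# nodes, but it shows up on one node only
ls -l /data/export/testfile

# manual xattr check on the back-end file (this part works fine)
setfattr -n trusted.stripe1.stripe-size -v 1048576 /data/export/testfile
getfattr -n trusted.stripe1.stripe-size /data/export/testfile
--------------------------------------------------------------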

regards, Alexey

On 9/25/07, Raghavendra G <raghavendra.hg at gmail.com> wrote:
>
> Hi Alexey,
>
> Can you please try with glusterfs--mainline--2.5--patch-493 and check
> whether the bug still persists? Also, if the bug is not fixed, can you
> please send the glusterfs server and client configuration files?
>
> regards,
>
> On 9/25/07, Alexey Filin < alexey.filin at gmail.com> wrote:
>
> > Hi,
> >
> > gluster at sv.gnu.org/glusterfs--mainline--2.5--patch-485
> > fuse-2.7.0-glfs3
> > Linux 2.6.9-55.0.2.EL.cern (Scientific Linux CERN SLC release
> > 4.5 (Beryllium)), i386
> >
> > 4 HPC cluster work nodes; each node has two Gigabit interfaces for two
> > LANs (the Data Acquisition System LAN and the SAN).
> >
> > server.vol and client.vol were made from the example at
> > http://gluster.org/docs/index.php/GlusterFS_Translators, but with the alu
> > scheduler:
> >
> > brick->posix-locks->io-thr->wb->ra->server
> > ((client1+client2)->stripe1)+((client3+client4)->stripe2)->afr->unify->iot->wb->ra->ioc
> >
> >
> > unify is supposed to connect another 4 nodes after the tests
> >
> > Copying from the local FS to GlusterFS and back on client1 works fine,
> > with nearly native performance (as for local-to-local copies).
> > Back-end FS ext3 (get/setfattr don't work) => afr works fine, stripe
> > doesn't work at all.
> > Back-end FS xfs (get/setfattr work fine) => afr works fine, stripe
> > doesn't work at all.
> >
> > Changed client.vol to (client1+client2)->stripe1->iot->wb->ra->ioc =>
> > stripe still doesn't work.
> >
> > The log files don't contain anything interesting.
> > 1) How do I make cluster/stripe work?
> >
> > http://gluster.org/docs/index.php/GlusterFS_FAQ
> > "...if one uses 'cluster/afr' translator with 'cluster/stripe' then
> > GlusterFS can provide high availability."
> > 2) Is HA provided only for stripe+afr, or for afr alone too?
> >
> > I plan to use the cluster work nodes with their local hard disks as
> > distributed on-line and off-line storage for raw data acquired on our
> > experimental setup (tape back-up is provided, of course). It is supposed
> > to hold 10-20 terabytes of raw data in total (the cluster is expected to
> > be upgraded in the future).
> > 3) Does it make sense to use cluster/stripe (HA is very desirable) in
> > this case?
> >
> > Thanks for answers in advance.
> >
> > Alexey Filin.
> > Experiment OKA, Institute for High Energy Physics, Protvino, Russia
> >
> > PS: I also tortured GlusterFS with direct file manipulation (through the
> > back-end FS); the results are good for me.
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at nongnu.org
> > http://lists.nongnu.org/mailman/listinfo/gluster-devel
> >
>
>
>
> --
> Raghavendra G




-- 
Raghavendra G


