[Gluster-users] upgrading from 1.3.10 to 2.0.0rc7
Raghavendra G
raghavendra.hg at gmail.com
Tue Apr 14 05:37:57 UTC 2009
Hi Matthew,
Please find my comments inline.
On Tue, Apr 7, 2009 at 5:38 AM, Matthew Wilkins <daibutsu at gmail.com> wrote:
> Hi,
>
> I am currently running glusterfs-1.3.10 in a NUFA situation using the
> unify translator, nufa scheduler, and an AFR'ed namespace. A
> config from one of my servers is below if you want the full
> details. I want to upgrade to 2.0.0rc7 and use the nufa
> translator instead of unify. The nufa cluster translator sounds like
> the better option. I have some questions for you helpful people!
>
> 1. Is it possible to do such an upgrade? I was thinking I would
> umount the fs and stop the gluster daemons. Upgrade to 2.0.0rc7 and
> put the new config files in. Mount up the fs again. Note the
> namespace will have disappeared because it isn't necessary in version
> 2 right? So what will happen now? Will there be a lot of
> self-healing necessary? Can that just happen as people use the
> gluster? (btw, total size is 32T, used size is about 15T).
>
> 2. I have a sneaking suspicion that the answer to 1 is 'No it won't
> work'. Either way I am wondering how the distributed translator
> works. The nufa translator is a special case of the distributed
> translator, correct? (but it must have some bias towards the local
> server?). Anyway, how do they work? When a client wants to find a
> file a hash is done on the filename, somehow that maps to a particular
> server, then the client asks that server for the file? Are any
> extended attributes set on the file?
Yes, files are mapped to servers based on the hash value of the file name.
DHT does not use any extended attributes for this mapping.
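To make the idea concrete, here is a toy sketch of hash-based placement. This is not GlusterFS's actual hash function (that is an internal detail); it only illustrates how a deterministic hash of the file name selects one subvolume, so a client can go straight to the right server without broadcasting a lookup. The brick names are borrowed from the config below for illustration.

```python
def pick_subvolume(filename, subvolumes):
    """Map a file name to one subvolume via a simple deterministic hash."""
    h = 0
    for byte in filename.encode("utf-8"):
        # Classic multiply-and-add rolling hash, kept to 32 bits.
        h = (h * 31 + byte) & 0xFFFFFFFF
    # The hash value picks one subvolume out of the list.
    return subvolumes[h % len(subvolumes)]

bricks = ["tur-awc1", "tur-awc2", "tur-awc3"]
print(pick_subvolume("report.txt", bricks))
```

Because the mapping depends only on the name, every client computes the same answer, which is why no per-file extended attribute is needed to record the location.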
>
>
> 3. One of my servers has two bricks. The nufa example at
> http://gluster.org/docs/index.php/NUFA_with_single_process
> doesn't show me what to do. It has two examples; the first when each node
> has one brick, and another where nodes have more than one brick but
> nufa is not used, rather unify. So how can I use the nufa translator
> when one or more nodes contribute more than one brick? I was thinking
> something like a server side unify, then nufa on top, but I'm not sure
> of the syntax. If it isn't possible it isn't the end of the world
> (the second brick isn't that big).
Export each brick separately, and make the protocol/client volumes
corresponding to each exported brick children of nufa.
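For example, something along these lines (a sketch only; the volume names are borrowed from your current config, and I am assuming the second brick on tardis is reachable through a client volume called client-tardis-1; I am also not certain whether local-volume-name accepts more than one local subvolume in 2.0.0rc7):

```
volume nufa
  type cluster/nufa
  option local-volume-name iot-0
  subvolumes iot-0 iot-1 client-orac-0 client-tardis-0 client-tardis-1
end-volume
```

The point is that nufa does not need any unify underneath; each brick, local or remote, simply appears as one subvolume.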
>
>
> 4. At the end of my new config for version 2 I have the following:
>
> volume nufa
> type cluster/nufa
> option local-volume-name `hostname`
> subvolumes tur-awc1 tur-awc2 tur-awc3
> end-volume
>
> volume writebehind
> type performance/write-behind
> option page-size 128KB
> option cache-size 1MB
> subvolumes nufa
> end-volume
>
> volume ra
> type performance/read-ahead
> subvolumes writebehind
> end-volume
>
> Is that the correct order for these performance translators? It isn't
> obvious to me whether read-ahead should go before or after write-behind
> (all the examples I have seen put io-cache at the end, though). Does
> the order matter?
>
> Thank you very much for any help. If you can only help with one
> question I would still very much appreciate it.
>
> Matt
>
> Here is my current config:
>
> volume brick0
> type storage/posix
> option directory /export/brick0
> end-volume
>
> volume iot-0
> type performance/io-threads
> subvolumes brick0
> end-volume
>
> volume brick1
> type storage/posix
> option directory /export/brick1
> end-volume
>
> volume iot-1
> type performance/io-threads
> subvolumes brick1
> end-volume
>
> volume brick-ns
> type storage/posix
> option directory /export/brick-ns
> end-volume
>
> volume iot-ns
> type performance/io-threads
> subvolumes brick-ns
> end-volume
>
> volume server
> type protocol/server
> subvolumes iot-0 iot-1 iot-ns
> option transport-type tcp/server # For TCP/IP transport
> option auth.ip.iot-0.allow *
> option auth.ip.iot-1.allow *
> option auth.ip.iot-ns.allow *
> end-volume
>
>
> volume client-tardis-0
> type protocol/client
> option transport-type tcp/client
> option remote-host tardis
> option remote-subvolume iot-0
> end-volume
>
> volume client-orac-0
> type protocol/client
> option transport-type tcp/client
> option remote-host orac
> option remote-subvolume iot-0
> end-volume
>
> volume client-orac-ns
> type protocol/client
> option transport-type tcp/client
> option remote-host orac
> option remote-subvolume iot-ns
> end-volume
>
> volume afr-ns
> type cluster/afr
> subvolumes iot-ns client-orac-ns
> end-volume
>
> volume unify
> type cluster/unify
> option namespace afr-ns
> option scheduler nufa
> option nufa.local-volume-name iot-0,iot-1
> option nufa.limits.min-free-disk 5%
> subvolumes iot-0 iot-1 client-orac-0 client-tardis-0
> end-volume
>
> volume ra
> type performance/read-ahead
> subvolumes unify
> end-volume
>
> volume ioc
> type performance/io-cache
> subvolumes ra
> end-volume
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>
--
Raghavendra G