[Gluster-users] GlusterFS compared to KosmosFS (now called cloudstore)?
Keith Freedman
freedman at FreeFormIT.com
Mon Oct 20 20:10:04 UTC 2008
At 12:12 PM 10/20/2008, Stas Oskin wrote:
>Hi.
>
>Thanks for all the answers.
>
>I should say that the metaserver-less (P2P?) approach of GlusterFS
>in particular makes it a very attractive option, as it essentially
>eliminates any single point of failure.
I think it's important that people understand the tradeoffs.
Having a central metaserver ensures the integrity of the data.
By not having a meta server, GlusterFS introduces different
problems (and different solutions).
With AFR, you can get into a split-brain situation, so you'd need to
examine GlusterFS's split-brain resolution and decide whether you're
comfortable with the tradeoffs. In my view, it's a good, workable
solution, but it may not work for everyone.
The other issue is the namespace brick used by the unify translator.
This is effectively a metadata-like component. You can AFR the
namespace brick to provide additional availability, but if your
namespace brick is unavailable then you have a problem similar to a
metadata-server outage in another solution.
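To make that concrete, a unify volume with an AFR'd namespace looks
roughly like this in the client spec file (volume names here are made
up, so check the docs for your version):

  # namespace replicated across two remote bricks
  volume ns-afr
    type cluster/afr
    subvolumes ns1 ns2
  end-volume

  volume unify0
    type cluster/unify
    option namespace ns-afr   # unify keeps its directory structure here
    option scheduler rr       # round-robin file creation
    subvolumes brick1 brick2 brick3
  end-volume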
So, while I personally think GlusterFS is one of the "best" solutions
out there, that's because the numbers for my situation point in that
direction; they won't for everyone.
>My largest concern about GlusterFS is really the lack of a central
>administration tool. Modifying the configuration files on every
>server/client with every topology change becomes a hurdle with 10
>servers already, and is probably impossible beyond 100.
In most cases, your client configurations are pretty much identical,
so maintaining them is relatively simple. If your server topology
changes often, it can be inconvenient, partly because you have to
deal with IP addresses.
It's also not good for certain grid operating systems that use
internal IPs which change randomly, or if you for some reason
have servers using DHCP.
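Part of the reason is that every remote brick is pinned to a host
address in the client spec, so changing a server's IP means touching
every client file. Something like this (host and names illustrative):

  volume brick1
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.1.11   # breaks if this IP changes (DHCP)
    option remote-subvolume brick
  end-volume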
>Hence, I'm happy to hear version 1.4 will have some kind of a web
>interface. The only questions are:
>
>1) Will it support central management of all servers/clients,
>including the global AFR settings?
>
>2) When it comes out? :)
>
>Regards.
>
>2008/10/20 Vikas Gorur <vikasgp at gmail.com>
>2008/10/18 Stas Oskin <stas.oskin at gmail.com>:
> > Hi.
> >
> > I'm evaluating GlusterFS for our DFS implementation, and wondered how it
> > compares to KFS/CloudStore?
> >
> > These features here look especially nice
> > (http://kosmosfs.sourceforge.net/features.html). Any idea which of
> > them exist in GlusterFS as well?
>
>Stas,
>
>Here's how GlusterFS compares to KFS, feature by feature:
>
> > Incremental scalability:
>
>Currently adding new storage nodes requires a change in the config
>file and restarting servers and clients. However, there is no need to
>move/copy data or perform any other maintenance steps. "Hot add"
>capability is planned for the 1.5 release.
>
> > Availability
>
>GlusterFS supports n-way data replication through the AFR translator.
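For reference, a three-way replica is just an AFR volume with three
subvolumes; a rough sketch (names illustrative):

  volume replicate3
    type cluster/afr
    subvolumes remote1 remote2 remote3   # files are written to all three
  end-volume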
>
> > Per file degree of replication
>
>GlusterFS used to have this feature, but it was dropped due to lack
>of interest. It would not be too hard to bring it back.
>
> > Re-balancing
>
>The DHT and unify translators have extensive support for distributing
>data across nodes. One can use unify schedulers to define file creation
>policies such as the following (a sample spec fragment follows the list):
>
>* ALU - schedule file creation adaptively (based on disk space
>utilization, disk speed, etc.)
>
>* Round robin
>
>* Non uniform (NUFA) - prefer a local volume for file creation and use remote
>ones only when there is no space on the local volume.
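A unify volume using the ALU scheduler might look roughly like this
(the option names below are from memory, so double-check them against
the scheduler documentation):

  volume unify0
    type cluster/unify
    option namespace ns
    option scheduler alu
    option alu.limits.min-free-disk 5%   # skip bricks below this free space
    subvolumes brick1 brick2 brick3
  end-volume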
>
> > Data integrity
>
>GlusterFS arguably provides better data integrity since it runs over
>an existing filesystem, and does not access disks at the block level.
>Thus in the worst case (which shouldn't happen), even if GlusterFS
>crashes, your data will still be readable with normal tools.
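That is because each server brick is just a directory on an ordinary
filesystem, exported through the posix storage translator; roughly
(path illustrative):

  volume brick
    type storage/posix
    option directory /data/export   # plain ext3 dir; files stay readable here
  end-volume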
>
> > Rack-aware data placement
>
>None of our users have mentioned this need so far, so GlusterFS
>has no rack awareness. One could incorporate this intelligence into
>our cluster translators (unify, afr, stripe) quite easily.
>
> > File writes and caching
>
>GlusterFS provides a POSIX-compliant filesystem interface. GlusterFS
>has fine-tunable caching translators, such as read-ahead (prefetching
>file data), write-behind (to reduce write latency), and io-cache
>(caching file data).
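These are stacked over the lower volumes in the spec file; a rough
sketch with made-up option values:

  volume readahead
    type performance/read-ahead
    option page-size 128KB        # amount prefetched per read
    subvolumes unify0
  end-volume

  volume writebehind
    type performance/write-behind
    option aggregate-size 1MB     # buffer writes before sending to the server
    subvolumes readahead
  end-volume

  volume iocache
    type performance/io-cache
    option cache-size 64MB        # in-memory cache for file data
    subvolumes writebehind
  end-volume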
>
> > Language support
>
>This is irrelevant to GlusterFS since it is mounted and accessed as a normal
>filesystem, through FUSE. This means all your applications can run on
>GlusterFS without any modifications.
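Mounting is simply pointing the glusterfs client at a spec file, e.g.
(paths illustrative):

  # mount the volume described in client.vol at /mnt/glusterfs
  glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs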
>
> > Deploy scripts
>
>Users have found GlusterFS to be so simple to setup compared to other
>cluster filesystems that there isn't really a need for deploy scripts. ;)
>
> > Local read optimization
>
>As mentioned earlier, if your data access patterns justify it (that
>is, if users generally access local data and only occasionally want
>remote data), you can configure 'unify' with the NUFA scheduler to achieve
>this.
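A NUFA-scheduled unify names the local brick so file creation prefers
it; roughly (the exact option name may vary between releases):

  volume unify0
    type cluster/unify
    option namespace ns
    option scheduler nufa
    option nufa.local-volume-name brick-local   # create here while space lasts
    subvolumes brick-local remote1 remote2
  end-volume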
>
>In addition, I'd like to mention two particular strengths of GlusterFS.
>
>1) GlusterFS has no notion of a 'meta-server'. I have not looked through
>KFS' design in detail, but the mention of a 'meta-server' leads me to
>believe that failure of the meta-server will take the entire cluster offline.
>Please correct me if the impression is wrong.
>
>GlusterFS, on the other hand, has no single point of failure such as
>a central meta-server.
>
>2) GlusterFS 1.4 will have a web-based interface which will allow
>you to start/stop GlusterFS, monitor logs and performance, and do
>other admin activities.
>
>
>Please contact us if you need further clarifications or details.
>
>Vikas Gorur
>Engineer - Z Research
>
>_______________________________________________
>Gluster-users mailing list
>Gluster-users at gluster.org
>http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users