[Gluster-devel] Some FAQs ...
Steffen Grunewald
steffen.grunewald at aei.mpg.de
Wed Apr 25 14:15:00 UTC 2007
Hi,
I'm in the process of evaluating parallel file systems for a cluster made
of 15 storage servers and about 600 compute nodes, and came across GlusterFS.
Having read most of the documentation, I've got some more questions I
couldn't find answered in the Wiki. I'd appreciate any answers...
- The two example configs are a bit confusing. In particular, I suppose I
don't have to assign a different name to each of the 15 volumes? And are
different ports only used to address a particular sub-server?
- This would mean I could use the same glusterfs-server.vol for all
storage bricks?
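If so, I'd expect every brick to run with one and the same file, along
these lines (the directory, volume names, and the wide-open auth pattern
are just my guesses from the examples):

  volume brick
    type storage/posix
    option directory /export/data   # local backend directory - my guess
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option auth.ip.brick.allow *    # wide open - I'd restrict this in production
    subvolumes brick
  end-volume

Clients would then tell the bricks apart by hostname alone - is that the
intended setup?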
- The "all-in-one" configuration suggests that servers can be clients at the
same time? (meaning, there's no real need to separately build
server and client)
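If so, I'd expect a storage node to mount itself simply by pointing a
protocol/client volume at its own glusterfsd - a sketch, with the volume
name assumed as above:

  volume local
    type protocol/client
    option transport-type tcp/client
    option remote-host 127.0.0.1    # this brick's own server
    option remote-subvolume brick
  end-volume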
- The instructions to add a new brick (reproduce the directory tree with
cpio) suggest that it would be possible to form a GluFS from
already existing separate file servers, each holding part of the
"greater truth", by building a unified directory tree (only
partly populated) on each of them, then unifying them using
GluFS. Am I right?
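To illustrate, here is roughly the client-side config I would try - one
protocol/client per existing file server, tied together by cluster/unify
(hostnames, volume names, and the scheduler choice are only guesses):

  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1      # existing server holding part of the tree
    option remote-subvolume brick
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick
  end-volume

  volume unify0
    type cluster/unify
    option scheduler rr             # round-robin placement for new files
    subvolumes remote1 remote2
  end-volume

(and remote3 ... remote15 accordingly for the full set of servers)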
- Would it still be possible to access the underlying filesystems, using
NFS with read-only export?
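I.e. would a plain read-only line in /etc/exports on each brick be safe,
as long as nothing writes through NFS? Something like:

  /export/data  *(ro,sync)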
- What would happen if files are added to the underlying filesystem on one
of the bricks? Since there's no synchronization mechanism, this should
look the same as if the file had entered through GluFS?
- What's the recommended way to backup such a file system? Snapshots?
- Is there a Debian GNU/Linux package available yet, or is someone working on one?
- Are there plans to implement "relaxed" RAID-1 by writing identical copies
of the same file (the same way AFR does) to different servers?
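From the docs I'd guess this is just cluster/afr layered over the
protocol/client volumes - would this minimal sketch (names assumed as
above) do it, or is something more "relaxed" planned?

  volume mirror0
    type cluster/afr
    subvolumes remote1 remote2      # identical copies go to both subvolumes
  end-volume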
- I couldn't find any indication of metadata being kept somewhere - how do
I find out which files were affected if a brick fails and cannot
be repaired? (How does AFR handle such situations?) I suppose there
are no tools to re-establish redundancy when slipping in a fresh
brick - what's the roadmap for this feature?
- In several places, the FAQ refers to "the next release" for certain
features - it would make sense to put the release number there.
- The benchmark of GluFS vs. Lustre looks almost too good - what was the
underlying filesystem on the bricks? Don't the results reflect the big
(6GB) buffer cache rather than real FS performance? (See the
cache-dropping suggestion below.)
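For comparison it would be interesting to see the numbers with cold
caches, e.g. by dropping the page cache between runs (2.6.16+ kernels):

  sync; echo 3 > /proc/sys/vm/drop_caches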
More to come...
Cheers,
Steffen
--
Steffen Grunewald * MPI Grav.Phys.(AEI) * Am Mühlenberg 1, D-14476 Potsdam
Cluster Admin * http://pandora.aei.mpg.de/merlin/ * http://www.aei.mpg.de/
* e-mail: steffen.grunewald(*)aei.mpg.de * +49-331-567-{fon:7233,fax:7298}
No Word/PPT mails - http://www.gnu.org/philosophy/no-word-attachments.html