[Gluster-users] Share Experience with Productive Gluster Setup <<Try to reanimate Elvis' Gluster Thread>>
Felix Kölzow
felix.koelzow at gmx.de
Thu Jul 18 07:27:56 UTC 2019
Dear Gluster community,

we are trying to implement a Gluster setup in a production environment.
During the planning process, we found this nice thread on real-life
Gluster storage experience:

https://forums.overclockers.com.au/threads/glusterfs-800tb-and-growing.1078674/

That thread seems to have gone into early retirement, so I am trying to
reanimate it and collect concise real-life Gluster experience for future
readers. Unfortunately, the forum admin has not yet granted me permission
to post there, so I have attached my message below. I plan to post
news/updates in that thread (link above).

Regards
Felix
>>> This should be my first post <<<
Dear Community,

thank you very much for this very informative discussion of Gluster and
real-life Gluster experience. We also plan to build a Gluster system, and
I thought it might be a good idea to reanimate this somewhat old thread
to collect that experience in one place for future readers.

The main idea is to discuss our current setup (which so far exists only
in our minds) and to share our experience with the community if we really
go ahead with Gluster on real hardware.
* Requirements:
- Current volume: about 55 TB
- Future volume: REALLY unpredictable, for several reasons
  - could be 120 TB in 5 years, or
  - could be 500 TB in 5 years
- Files:
  - well-known office files
  - research files
  - large text files (> 10 GB)
  - many pictures from experimental investigations (currently 500 GB
    - 1.5 TB per experiment)
  - large HDF5 files
- Clients:
  - Windows 98, 2000, XP, 7, 10; Linux (Arch, Ubuntu, CentOS); macOS
- Some data must be immediately available for 10 - 20 years (this data
  cannot be moved exclusively to tape storage; it must stay on hot
  storage)
- Use Red Hat Gluster Storage for the production environment
- Currently planned: backup side using CentOS 8 + tape storage
- Timeline: if Gluster is chosen, it should be in production by the end
  of this year
* Support
- It is very likely that we will use Red Hat Gluster Storage with the
  corresponding support.
* Planned Architecture:
- Distributed replicated volume (replica 3) with two 60 TB bricks per
  node plus a 500 GB LVM cache on PCIe NVMe, i.e. 120 TB total usable
  storage (see the volume sketch below)
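For illustration, a minimal sketch of how such a 2 x 3 volume could be
created (hostnames gls01-gls03 and the brick paths are assumptions, not
final names):

    # probe the other two nodes into the trusted pool
    gluster peer probe gls02
    gluster peer probe gls03
    # replica 3 with two bricks per node -> 2 x 3 distributed replicated;
    # brick order defines the replica sets (one brick per node per set)
    gluster volume create research replica 3 \
        gls01:/gluster/brick1/research \
        gls02:/gluster/brick1/research \
        gls03:/gluster/brick1/research \
        gls01:/gluster/brick2/research \
        gls02:/gluster/brick2/research \
        gls03:/gluster/brick2/research
    gluster volume start research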
* Scenario
- It is planned to run 3 nodes with 2 volumes in a distributed
  replicated setup
- A dispersed volume might also be an option?!
- Using LVM cache
- Two volumes:
  - one volume for office data (mostly PDF, MS Office :-/, ...)
  - one volume for research data (see above)
- The Gluster cluster geo-replicates to a single node located in a
  different building. This gives us an asynchronous native backup
  (realized via snapshots) that is easy to restore; snapshots will be
  retained for 7 days (see the sketch after this list).
- The native backup server and the server for compression/deduplication
  are located next to each other (to avoid a network bottleneck).
  Bacula/Bareos will be used for weekly, monthly, and yearly backups.
- Yearly and certain monthly backups are archived on tape storage
- Tapes will be stored at two different locations
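A rough sketch of the geo-replication session and the 7-day snapshot
retention (the slave host backup01 and its volume backupvol are assumed
names; note that Gluster snapshots require the bricks to live on
thin-provisioned LVM):

    # one-time ssh key distribution from the master cluster
    gluster system:: execute gsec_create
    # create and start the session against the single-node slave volume
    gluster volume geo-replication research backup01::backupvol \
        create push-pem
    gluster volume geo-replication research backup01::backupvol start
    gluster volume geo-replication research backup01::backupvol status
    # keep at most 7 snapshots, auto-deleting the oldest
    gluster snapshot config research snap-max-hard-limit 7
    gluster snapshot config auto-delete enable
    gluster snapshot create nightly research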
* Hardware:
** Gluster Storage Server: 4U Supermicro chassis | CSE-847BE1C-R1K28LPB
- Mainboard X11DPi-NT
- 2x Intel Xeon Silver (LGA 3647), 8 cores each
- 96 GB RAM, DDR4-2666 ECC
- Adaptec ASR-8805 RAID controller
- Two smaller enterprise 2.5" SSDs for the OS
- 2x Intel Optane, ca. 500 GB each, PCIe NVMe for LVM cache (software
  RAID 0; see the sketch below)
- 4x 10 Gbps RJ45 network ports
- 24x 6 TB drives (2 bricks, 12 drives per brick, HW RAID 6 per brick)
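A minimal sketch of the LVM cache attachment, assuming a RAID 6 brick is
an LV named brick1 in VG vg_brick1 and the Optanes appear as
/dev/nvme0n1 and /dev/nvme1n1 (all names assumed); striping the cache
pool in LVM would replace a separate md RAID 0:

    pvcreate /dev/nvme0n1 /dev/nvme1n1
    vgextend vg_brick1 /dev/nvme0n1 /dev/nvme1n1
    # striped cache pool across both NVMe cards; ~900G leaves room
    # for the cache metadata
    lvcreate --type cache-pool -i 2 -L 900G -n brick1_cache \
        vg_brick1 /dev/nvme0n1 /dev/nvme1n1
    # attach it to the brick LV; writethrough keeps the brick
    # consistent even if the (RAID 0) cache dies
    lvconvert --type cache --cachepool vg_brick1/brick1_cache \
        --cachemode writethrough vg_brick1/brick1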
** Server for Native Backup (1-Node Geo-Replication)
- Mainboard X11DPi-NT
- 2x Intel Xeon Silver (LGA 3647), 8 cores each
- 96 GB RAM, DDR4-2666 ECC
- Adaptec ASR-8805 RAID controller (IT mode)
- Two smaller enterprise 2.5" SSDs for the OS (XFS)
- 4x 10 Gbps RJ45 network ports
- 12x 10 TB drives (planned for ZFS on Linux, raidz2)
- ZFS on-the-fly compression, no deduplication (see the sketch below)
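A minimal sketch of that pool (pool name and disk IDs are placeholders
for the real /dev/disk/by-id names):

    # 12x 10 TB drives as a single raidz2 vdev
    zpool create -o ashift=12 tank raidz2 \
        /dev/disk/by-id/DISK01 /dev/disk/by-id/DISK02 \
        /dev/disk/by-id/DISK03 /dev/disk/by-id/DISK04 \
        /dev/disk/by-id/DISK05 /dev/disk/by-id/DISK06 \
        /dev/disk/by-id/DISK07 /dev/disk/by-id/DISK08 \
        /dev/disk/by-id/DISK09 /dev/disk/by-id/DISK10 \
        /dev/disk/by-id/DISK11 /dev/disk/by-id/DISK12
    # on-the-fly compression, dedup stays off on this box
    zfs set compression=lz4 tank
    zfs set dedup=off tank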
** Server for Compressed/Deduplicated Backup
- Mainboard X11DPi-NT
- 2x Intel Xeon Silver (LGA 3647), 8 cores each
- 192 GB RAM, DDR4-2666 ECC (accounting for ZFS RAM consumption during
  deduplication)
- Adaptec ASR-8805 RAID controller (IT mode)
- Two smaller enterprise 2.5" SSDs for the OS (XFS)
- 4x 10 Gbps RJ45 network ports
- 12x 6 TB drives (planned for ZFS on Linux, raidz2; see the sketch
  below)
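On this box dedup would be enabled per dataset rather than pool-wide; a
common rule of thumb is roughly 5 GB of RAM per TB of deduplicated data,
which is why the RAM is doubled here. A sketch (pool/dataset names
assumed, pool created as above):

    # dedup only on the dataset that Bareos/Bacula writes into
    zfs create -o compression=lz4 -o dedup=on dtank/bareos
    # watch dedup table (DDT) size and the achieved ratio
    zpool status -D dtank
    zpool list dtank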
** Tape-Storage
- Hardware not defined yet
* Software/OS
** Gluster Storage Server
- Running RHEL 8 and Red Hat Gluster Storage
- With support
** Server for Native Backup
- Running CentOS 8 (not released yet)
- Running community Gluster
** Server for Compressed/Deduplicated Backup
- Running CentOS 8 (not released yet)
- Backup using Bareos or Bacula (without support, but 'personal'
  training planned)
- Deduplication using ZFS on Linux
* Volumes
** Linux Clients and Linux Computing Environment
- Using the native FUSE client
** Windows Clients / macOS
- Samba shares with CTDB (see the sketch below)
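A minimal sketch of both access paths (hostnames, mount point, and the
share definition are assumptions):

    # Linux: native FUSE mount with failover volfile servers
    mount -t glusterfs -o backup-volfile-servers=gls02:gls03 \
        gls01:/research /mnt/research

    # Windows/macOS: Samba share via vfs_glusterfs behind CTDB
    # (smb.conf excerpt)
    [research]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = research
        kernel share modes = no
        read only = no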
* Questions:
1. Any hints/remarks on the considered hardware?
2. Has anybody realized a comparable setup and is able to share
   performance results?
3. Is ZFS on Linux really ready for production in this scenario?
4. Is it reliable to mix Gluster versions, i.e. the production system
   on Red Hat Gluster Storage and the native backup server on community
   Gluster? (A quick compatibility check is sketched below.)
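Regarding question 4: as a first compatibility check, both sides should
be able to agree on the same cluster op-version, e.g.:

    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version

My assumption is that geo-replication between a Red Hat Gluster Storage
master and a community slave works as long as the versions and
op-versions are close, but I would be glad to hear real-world experience.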
The aim here is really to share experience, and we will document our
experience and performance tests if we really go ahead with Gluster.
Any remarks/hints are very welcome.
I think this is enough for a first post. Feel free to ask!
Thanks to an image-to-ASCII converter, a short sketch of the setup looks
like this:
  Building A (Master)                       Building B (Slave)
  +--------------------------+              +---------------------------+
  | Production System        |              | Backup System             |
  |                          |     Geo-     |                           |
  |  Gluster Storage Node 1  | Replication  |  Native Backup Server     |
  |  Gluster Storage Node 2  | -----------> |                           |
  |  Gluster Storage Node 3  |              |  Backup Server            |
  |                          |              |  (Compression/            |
  +--------------------------+              |   Deduplication)          |
                                            |                           |
                                            |  Tape Storage             |
                                            +---------------------------+