Hi Felix,

As I don't have much expertise on the hardware side, I won't comment on that part.
Based on your "Requirements", I would say that the setup looks very much feasible. Since you are talking about storing large files, a disperse volume could be a good choice.
However, disperse volumes may not give you good performance for frequent random I/O. If the reads/writes are mostly sequential, a disperse volume could be the better choice.
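Just to illustrate the difference, the two layouts on your three nodes would be created roughly like this (host names, brick paths and the data/redundancy counts below are only placeholders and must match your final brick layout):

    # distributed-replicated (replica 3), two bricks per node -> 2 x 3 bricks
    gluster volume create research replica 3 \
        srv1:/gluster/brick1/research srv2:/gluster/brick1/research srv3:/gluster/brick1/research \
        srv1:/gluster/brick2/research srv2:/gluster/brick2/research srv3:/gluster/brick2/research

    # dispersed alternative (e.g. 4 data + 2 redundancy bricks); with only three
    # nodes, two bricks of one disperse set land on the same server, so gluster
    # will ask for 'force'
    gluster volume create research disperse 6 redundancy 2 \
        srv1:/gluster/brick1/research srv2:/gluster/brick1/research srv3:/gluster/brick1/research \
        srv1:/gluster/brick2/research srv2:/gluster/brick2/research srv3:/gluster/brick2/research force

    gluster volume start research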
I also don't think that mixing gluster versions within the same storage solution would be a smart thing to do.
By the way, the pictures were not visible in this mail.

In case you want to share your ideas and discuss these things with other gluster community users, you can join the community meeting.
Meeting details:
- APAC-friendly hours
  - Every 2nd and 4th Tuesday at 11:30 AM IST
  - Bridge: https://bluejeans.com/836554017
- NA/EMEA
  - Every 1st and 3rd Tuesday at 01:00 PM EDT
  - Bridge: https://bluejeans.com/486278655

---
Ashish

----------------------------------------
From: "Felix Kölzow" <felix.koelzow@gmx.de>
To: Gluster-users@Gluster.org
Sent: Thursday, July 18, 2019 12:57:56 PM
Subject: [Gluster-users] Share Experience with Productive Gluster Setup <<Try to reanimate Elvis' Gluster Thread>>

Dear Gluster-Community,

we are trying to implement a gluster setup in a production environment.

During the planning process, we found this nice thread about gluster storage experience:

https://forums.overclockers.com.au/threads/glusterfs-800tb-and-growing.1078674/

That thread seems to have gone into early retirement, so I am trying to reanimate it to collect concise real-life gluster experience for future readers.
Unfortunately, the admin has not yet given me permission to post there, so I have attached my message below.
I plan to post news/updates in that thread (link above).

Regards
Felix

>>> Should be my first post <<<

Dear Community,

thank you very much for such an informative discussion of gluster and real-life gluster experience.
We also plan to build a gluster system, and I thought it might be a good idea to reanimate this old thread to pool the experience for future readers.
The main idea is to discuss our planned setup (it currently exists only in our minds) and to share our experience with the community if we really go ahead with gluster on real hardware.

* Requirements:
- Current volume: about 55 TB
- Future volume: REALLY unpredictable for several reasons
  - could be 120 TB in 5 years, or
  - could be 500 TB in 5 years
- Files:
  - well-known office files
  - research files
  - larger text files (> 10 GB)
  - many pictures from experimental investigations (currently 500 GB - 1.5 TB per experiment)
  - large HDF5 files
- Clients:
  - Windows 98, 2000, XP, 7, 10; Linux (Arch, Ubuntu, CentOS); macOS
- Some data must remain immediately available for 10 - 20 years (it cannot be moved exclusively to tape storage; it must stay on hot storage)
- Use Red Hat Gluster Storage for the production environment
- Currently planned: backup side using CentOS 8 + tape storage
- Timeline: if gluster is chosen, it should be in production by the end of this year

* Support
- It is very likely that we are going to use Red Hat Gluster Storage and the corresponding support.

* Planned Architecture:
- Distributed-replicated (replica 3) volume with two 60 TB bricks per node and 500 GB PCIe NVMe LVM cache, i.e. 120 TB usable storage

* Scenario
- It is planned that we run 3 nodes with 2 volumes in a distributed-replicated setup
- A dispersed volume could also be an option?!
- Using LVM cache
- Two volumes:
  - one volume for office data (mostly PDF, MS Office :-/, ...)
  - one volume for research data (see above)
- The gluster cluster geo-replicates to a single node located in a different building (see the sketch after this list). This gives us an asynchronous native backup (based on snapshots) that is easy to restore. Snapshots will be retained for 7 days.
- The native backup server and the server for compression/deduplication are located next to each other (to avoid a network bottleneck). Bacula/Bareos should be used for weekly, monthly and yearly backups.
- Yearly and certain monthly backups are archived on tape storage
- Tapes will be stored at two different locations
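For the geo-replication, we currently assume the standard root-based session between the production volume and a volume on the backup node; a minimal sketch with placeholder volume/host names (the slave volume has to be created and started on the backup node first, and passwordless SSH from the master node to the backup node is assumed) could look like:

    # on one of the production nodes
    gluster system:: execute gsec_create
    gluster volume geo-replication research backupnode::research-backup create push-pem
    gluster volume geo-replication research backupnode::research-backup start
    gluster volume geo-replication research backupnode::research-backup status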
* Hardware:

** Gluster Storage Server: 4U Supermicro chassis CSE-847BE1C-R1K28LPB
- Mainboard X11DPi-NT
- 2x Intel Xeon Silver (LGA 3647), 8 cores each
- 96 GB RAM, DDR4-2666 ECC
- Adaptec ASR-8805 RAID controller
- Two smaller enterprise 2.5" SSDs for the OS
- 2x Intel Optane ca. 500 GB PCIe NVMe for LVM cache (software RAID 0)
- 4x 10 Gbps RJ45 for the network
- 24x 6 TB drives (2 bricks, 12 drives per brick, HW RAID 6 per brick)

** Server for Native Backup (1-node geo-replication target)
- Mainboard X11DPi-NT
- 2x Intel Xeon Silver (LGA 3647), 8 cores each
- 96 GB RAM, DDR4-2666 ECC
- Adaptec ASR-8805 RAID controller (IT mode)
- Two smaller enterprise 2.5" SSDs for the OS (xfs)
- 4x 10 Gbps RJ45 for the network
- 12x 10 TB drives (planned for ZFSonLinux, raidz2)
- ZFS compression on the fly, no deduplication

** Server for Compressed/Deduplicated Backup
- Mainboard X11DPi-NT
- 2x Intel Xeon Silver (LGA 3647), 8 cores each
- 192 GB RAM, DDR4-2666 ECC (considering ZFS RAM consumption during deduplication)
- Adaptec ASR-8805 RAID controller (IT mode)
- Two smaller enterprise 2.5" SSDs for the OS (xfs)
- 4x 10 Gbps RJ45 for the network
- 12x 6 TB drives (planned for ZFSonLinux, raidz2)

** Tape Storage
- Hardware not defined yet

* Software/OS

** Gluster Storage Server
- Running RHEL 8 and Red Hat Gluster Storage
- With support

** Server for Native Backup
- Running CentOS 8 (not released yet)
- Running COMMUNITY Gluster

** Server for Compressed/Deduplicated Backup
- Running CentOS 8 (not released yet)
- Backup using Bareos or Bacula (without support, but 'personal' training planned)
- Deduplication using ZFSonLinux (rough pool sketch below)
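For the two ZFS backup boxes we have roughly the following pool layout in mind (device names are placeholders; in practice we would use /dev/disk/by-id paths). The often-quoted rule of thumb of about 5 GB of RAM per TB of deduplicated data is the reason for the 192 GB on the dedup server:

    # native backup server: one raidz2 vdev over the 12 disks, compression only
    zpool create -o ashift=12 backup raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl
    zfs set compression=lz4 backup
    zfs set dedup=off backup

    # compressed/deduplicated backup server: same layout, but dedup enabled
    zfs set dedup=on backup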
* Volumes

** Linux Clients and Linux Computing Environment
- Using FUSE (see the mount sketch below)

** Windows Clients / MacOS
- CTDB (clustered Samba)
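For the Linux clients we assume the usual native FUSE mount; a minimal sketch with placeholder host and volume names could be:

    # manual mount
    mount -t glusterfs srv1:/research /mnt/research

    # or via /etc/fstab, with fallback volfile servers
    srv1:/research  /mnt/research  glusterfs  defaults,_netdev,backup-volfile-servers=srv2:srv3  0 0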
* Questions:

1. Any hints/remarks on the considered hardware?
2. Has anybody realized a comparable setup and is able to share performance results?
3. Is ZFSonLinux really ready for production in this scenario?
4. Is it reliable to mix Gluster versions, i.e. the production system on Red Hat Gluster Storage and the native backup server on community Gluster?

The aim here is really to share experience, and we will try to document our experience and performance tests if we really go ahead with gluster.

Any remarks/hints are very welcome.

I think this is enough for the first post. Feel free to ask!

Thanks to an image-to-ASCII converter, the shortened setup could look like this:

[ASCII diagram, unreadable after mail conversion: Production System in Building A with Gluster Storage Nodes 1-3 (geo-replication master), geo-replicating to the Backup System in Building B, which holds the Native Backup Server (geo-replication slave), the Backup Server for Compression/Deduplication, and the Tape Storage.]

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users