[Gluster-users] glusterfs best usage / best storage type or model
Roman
romeo.r at gmail.com
Mon Mar 28 11:49:38 UTC 2016
Has anyone had any disaster recovery actions on such a setup?
For how long could it take to heal the volume in case of a disk failure?
And what does the count mean in this setup: how many bricks will be counted
as bricks for meta-data?
Just need some more information on this kind of setup, seems like I like it
:)
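
For reference, I would expect to watch a heal after a disk replacement with
something like this (the volume name "archive" is just an example, not tested
on this setup):

    gluster volume heal archive info      # files still pending heal, per brick
    gluster volume status archive         # which bricks are online

I guess the heal time depends mostly on how much data was on the failed 8TB
brick, not on its raw size.
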
2016-03-28 14:21 GMT+03:00 Roman <romeo.r at gmail.com>:
> Hi Joe,
>
> thanks for the answer. But in the case of 37 8TB bricks the data won't be
> available if one of the servers fails anyway :) And it seems to me that it
> would be an even bigger mess to understand which files are up and which are
> down with the bricks.. Or am I missing something? Reading this one:
> https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-dispersed-volumes
> And what would be the redundancy count in case of 37 8TB bricks? Still 1?
>
> 2016-03-28 11:53 GMT+03:00 Joe Julian <joe at julianfamily.org>:
>
>> You're "wasting" the same amount of space either way. Make 37 8TB bricks
>> and use disperse.
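>>
>> A rough sketch of what that could look like (server names, brick paths and
>> the redundancy count are just examples, not a tested command):
>>
>>     gluster volume create archive disperse 37 redundancy 2 \
>>         server1:/bricks/d{1..22}/brick server2:/bricks/d{1..15}/brick
>>
>> Usable space is (37-2)/37 of raw. Note that with only two servers, losing a
>> whole server takes down far more bricks than the redundancy covers, so this
>> layout only protects against individual disk failures (and gluster will warn
>> that bricks of the same disperse set share a server).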
>>
>>
>> On March 28, 2016 10:33:52 AM GMT+02:00, Roman <romeo.r at gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> Thanks for the option, but it seems that it is not that good in our
>>> situation. I can't waste storage space on bricks for disperse, and disperse
>>> volumes require having bricks of the same size. We will start with a
>>> distributed volume of uneven size at the beginning. As we are speaking of
>>> an archive server, it is not that critical if some portion of data won't be
>>> available for some time (maintenance time). Having like 22 disks per server
>>> makes the probability of a RAID5 failure, when 2 or more disks fail, a bit
>>> higher though, so I'll really have to decide something about it :)
>>>
>>> 2016-03-28 1:35 GMT+03:00 Russell Purinton <russell.purinton at gmail.com>:
>>>
>>>> You might get better results if you forget about using RAID altogether.
>>>>
>>>> For example, GlusterFS supports “disperse” volumes which act like
>>>> RAID5/6. They have the advantage that you can maintain access to things
>>>> even if a whole server goes down. If you are using local RAID for
>>>> redundancy and that server goes offline, you’ll be missing files.
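>>>>
>>>> For example (hypothetical names), six bricks on six servers behave like
>>>> RAID6 across machines:
>>>>
>>>>     gluster volume create docs disperse 6 redundancy 2 server{1..6}:/bricks/docs
>>>>
>>>> Any two bricks, or any two whole servers, can be offline and the files
>>>> stay readable; usable capacity is 4/6 of raw.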
>>>>
>>>>
>>>>
>>>> On Mar 27, 2016, at 6:29 PM, Roman <romeo.r at gmail.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> Need some advice from heavy GlusterFS users and maybe devs..
>>>>
>>>> Going to give GlusterFS a try in a new direction for me. Until now I have
>>>> been using GlusterFS as VM storage for KVM guests.
>>>>
>>>> Now going to use it as the main distributed archive storage for
>>>> digitalized (scanned) books in one of the libraries in Estonia.
>>>>
>>>> At the very start we are going to scan about 346 GB - 495 GB daily,
>>>> which is about 7000 - 10 000 pages, and up to 600 GB daily in the future.
>>>> There are some smaller files per book: a small XML file and a compressed
>>>> PDF (while all the original files will be TIFF). This data goes to the
>>>> production server, and then we are going to archive it on our new
>>>> GlusterFS archive.
>>>>
>>>> At this moment, we've got 2 servers:
>>>>
>>>> one with 22x8TB 5400 RPM SATA HDDs
>>>> second with 15x8TB 5400 RPM SATA HDDs
>>>> We are planning to add the remaining disks to the second server at the end
>>>> of the year; being a budget-based institute is crap, I know. So it should
>>>> be as easy as extending the LVM volume and remounting it.
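>>>>
>>>> (Assuming XFS on LVM, with hypothetical device and volume names, growing a
>>>> brick in place would be roughly:
>>>>
>>>>     pvcreate /dev/sdw                      # the newly added disk
>>>>     vgextend vg_bricks /dev/sdw            # add it to the volume group
>>>>     lvextend -l +100%FREE vg_bricks/brick  # grow the logical volume
>>>>     xfs_growfs /export/brick               # XFS grows online, no remount needed
>>>>
>>>> ext4 would use resize2fs instead, which can also grow online.)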
>>>>
>>>> Both servers will run RAID5 or RAID6, haven't decided yet, but as we need
>>>> as much storage space as possible per server, it seems like it will be
>>>> RAID5.
>>>>
>>>> At this moment I'm planning to create just a single distributed volume
>>>> over these two servers and mount it on the production server, so it could
>>>> archive files there. So it would be like 168+112 = 280 TB of storage pool.
>>>> We are planning to extend this annually: first by adding HDDs to the
>>>> second server at the end of the first year, and then by extending the
>>>> number of servers, which means just adding bricks to the distributed
>>>> volume.
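>>>>
>>>> Growing a plain distributed volume later should be just this (names are
>>>> examples):
>>>>
>>>>     gluster volume add-brick archive server3:/bricks/d1/brick
>>>>     gluster volume rebalance archive start   # spread existing files onto the new brick
>>>>     gluster volume rebalance archive status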
>>>>
>>>> Any better solutions or possibilities ?
>>>>
>>>> --
>>>> Best regards,
>>>> Roman.
>>>>
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Roman.
>>>
>>>
>>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>
>
>
> --
> Best regards,
> Roman.
>
--
Best regards,
Roman.