[Gluster-users] Kudos to All

Pranith Kumar Karampuri pkarampu at redhat.com
Mon Apr 18 06:09:28 UTC 2016


Hi Lindsay,
              Thanks for this :-). Since you are probably the first user 
to put sharding into production, you may also be the first person to 
run into issues that no one has faced until now. We want to make sure 
all your questions/problems are addressed. Special thanks to Krutika for 
being very active in addressing almost all of the problems you have 
pointed out.

Pranith

On 04/17/2016 05:01 PM, Lindsay Mathieson wrote:
> Hi developers, I didn't want to be whiny all the time re possible 
> issues :), so I wanted to congratulate you on the 3.7.x release - it's 
> come a long way since the 3.5 range I initially looked at, and I really 
> appreciate the attention to detail for VM hosting environments.
>
> The chunking really works and makes a big difference when handling the 
> relatively small number of very large files on a VM host; it also makes 
> a big difference to server reboots. I've found performance, both raw 
> throughput and IOPS, to be much better than any other clustered 
> filesystem I've tried. I can launch 12 VMs simultaneously on my 
> relatively limited hardware with iowait remaining below 7%; under 
> previous setups the same hardware was brought to its knees.
>
> Management tools and documentation are still a little quirky :) but the 
> community is good, and once you know your way around the system it is 
> pretty straightforward. I very much appreciate that gluster plays well 
> with ZFS as its host filesystem; I think they are a perfect match.
>
> Overall I think that gluster is a great match for small to large 
> businesses, and I appreciate the can-do attitude of the team.
>
> Spent this weekend fine-tuning and benchmarking a volume, found what I 
> think is the sweet spot for us, and have started a production trial - 
> moved 2 developers (including myself), 2 support guys and one 
> receptionist to it (12 VMs total). I'll let you know if I still feel 
> the same by the end of the week :)
>
> Cheers,
>
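[Editor's note: for readers curious about the sharding ("chunking") feature discussed above, a minimal sketch of enabling it on a Gluster volume follows. The volume name is hypothetical, and the block size shown is illustrative - check the default and recommended values for your Gluster version before changing it on a volume holding data.]

```shell
# Enable sharding on an existing volume (volume name "vmstore" is
# hypothetical). Large files such as VM images are then stored as
# fixed-size shards, so self-heal after a reboot touches only the
# shards that changed rather than the whole image.
gluster volume set vmstore features.shard on

# Optionally set the shard block size (illustrative value; set this
# before writing data, as it only applies to newly created files).
gluster volume set vmstore features.shard-block-size 64MB

# Verify the settings took effect.
gluster volume info vmstore
```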


