[Gluster-devel] [ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"
ykaul at redhat.com
Sat Sep 29 09:57:10 UTC 2018
On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo <hetz at hetz.biz> wrote:
> Gobinda, great work!
> One thing though - the device names (sda, sdb, etc.).
> On many servers, it's hard to know which disk is which. Let's say I have 10
> spinning disks + 2 SSDs. Which is sda? What about NVMe? Worse - sometimes
> replacing disks changes which device sda points to. We used to have the
> same problem with NICs, and that has since been resolved on CentOS/RHEL 7.X.
> Could the HCI part - the disk selection part specifically - give more
> details? Maybe a disk ID or WWN, or anything that can identify a disk?
/dev/disk/by-id is the right identifier.
During installation, it would be nice to show as much data as possible:
sdX, /dev/disk/by-id, size, and perhaps manufacturer.
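For example, most of that can already be pulled together from lsblk and the
by-id symlinks (a sketch; the available columns vary with the lsblk version):

```shell
# Show each whole disk with its size, vendor/model and WWN.
lsblk -d -o NAME,SIZE,VENDOR,MODEL,WWN

# The by-id symlinks map stable names (WWN, serial) to the sdX devices;
# the directory may be absent on systems with no persistent IDs.
if [ -d /dev/disk/by-id ]; then
    ls -l /dev/disk/by-id/
fi
```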
> Also - for SSD caching, it is usually recommended to use 2 drives when
> possible for good performance. Can a user select X number of drives?
> On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das <godas at redhat.com> wrote:
>> Hi All,
>> Status update on "Hyperconverged Gluster oVirt support"
>> Features Completed:
>> 1- Asymmetric brick configuration. Bricks can be configured on a per-host
>> basis, e.g. the user can make use of sdb from host1, sdc from
>> host2, and sdd from host3.
>> 2- Dedupe and compression integration via VDO support (see
>> https://github.com/dm-vdo/kvdo). Gluster bricks are created on VDO volumes.
>> 3- LVM cache configuration support (configure a cache using a fast block
>> device, such as an SSD, to improve the performance of larger and slower
>> logical volumes)
>> 4- Auto addition of the 2nd and 3rd hosts in a 3-node setup during
>> deployment
>> 5- Auto creation of storage domains based on gluster volumes created
>> during setup
>> 6- Single node deployment support via Cockpit UI. For details on single
>> node deployment -
>> 7- Gluster Management Dashboard (the dashboard shows the nodes in the
>> cluster, volumes, and bricks. Users can expand the cluster and can also
>> create new volumes on existing cluster nodes)
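For reference, a manually created VDO-backed brick would look roughly like
this (device names and sizes are hypothetical; the actual layout is driven by
the setup flow):

```shell
# Create a VDO volume on a raw disk; the logical size may exceed the
# physical size because of dedupe and compression (names are hypothetical).
vdo create --name=vdo_sdb --device=/dev/sdb --vdoLogicalSize=10T

# Format with XFS (-K skips the initial discard, which is slow on VDO)
# and mount it as a Gluster brick directory.
mkfs.xfs -K /dev/mapper/vdo_sdb
mkdir -p /gluster/brick1
mount /dev/mapper/vdo_sdb /gluster/brick1
```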
>> 1- Reset brick support from UI to allow users to replace a faulty brick
>> 2- Create brick from engine now supports configuring an SSD device as an
>> lvmcache device when bricks are created on spinning disks
>> 3- VDO monitoring
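For reference, attaching an SSD cache to a brick LV by hand looks roughly
like this (VG/LV names are hypothetical; the engine drives this
automatically):

```shell
# Add the SSD to the volume group that holds the slow brick LV.
pvcreate /dev/sdc
vgextend vg_bricks /dev/sdc

# Create a cache pool on the SSD and attach it to the brick LV
# (all names here are hypothetical).
lvcreate --type cache-pool -L 100G -n brick_cache vg_bricks /dev/sdc
lvconvert --type cache --cachepool vg_bricks/brick_cache vg_bricks/brick_lv
```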
>> Performance enhancements with FUSE, up to 15x:
>> 1. Cluster after eager lock change for better detection of multiple
>> 2. Changing qemu option aio to "native" instead of "threads".
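In the libvirt domain XML that oVirt generates, that qemu option maps to the
io attribute on the disk driver element, e.g. (illustrative fragment):

```xml
<disk type='file' device='disk'>
  <!-- io='native' selects Linux AIO instead of a thread pool; it needs
       O_DIRECT, i.e. cache='none' or 'directsync', to be effective. -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
</disk>
```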
>> End-to-end deployment:
>> 1- End-to-end deployment of a Gluster + oVirt hyperconverged environment
>> using ansible roles (
>> https://github.com/gluster/gluster-ansible/tree/master/playbooks ). The
>> only prerequisite is a CentOS node/oVirt node.
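A deployment run then reduces to something like the following (the playbook
and inventory file names below are illustrative, not the actual names; check
the linked repository):

```shell
# Fetch the roles/playbooks and run against an inventory describing
# the nodes (file names below are illustrative).
git clone https://github.com/gluster/gluster-ansible.git
cd gluster-ansible/playbooks
ansible-playbook -i my_inventory.yml my_hc_deployment.yml
```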
>> Future Plan:
>> 1- ansible-roles integration for deployment
>> 2- Support for different volume types
>> 1- Python3 compatibility of vdsm-gluster
>> 2- Native 4K support
>> Devel mailing list -- devel at ovirt.org
>> To unsubscribe send an email to devel-leave at ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> List Archives: