[Gluster-devel] Some updates on the eventing framework for Gluster
Dusmant Kumar Pati
dpati at redhat.com
Wed Dec 2 10:54:07 UTC 2015
As we had discussed in the past, "eventing on the storage
cluster, be it Gluster or Ceph, is one of the key features through which
management stations can get updates about specific events immediately,
rather than waiting for a poll interval cycle". So, we have put forth an
overall architecture for the eventing framework.
We in the USM / SkyRing project are using the SALT event bus. I have
attached a slide which gives a summarized view of the eventing
framework in USM.
We have already done a decent amount of implementation (from the
node-events point of view) to get events from the nodes in a Ceph
cluster, and have done a POC for Gluster as well.
It would be good to be in sync on the event bus and how it can be
consumed not only by management applications like USM, but also by
other entities in the cluster if required.
On 12/02/2015 06:08 AM, Samikshan Bairagya wrote:
> The updates for the eventing framework for gluster can be divided into
> the following two parts.
> 1. Bubbling out notifications through dbus signals from every gluster node.
> * The 'glusterfs' module in storaged exports objects on the system
> bus for every gluster volume. These objects hold the following properties:
> - Name
> - Id
> - Status (0 = Created, 1 = Started, 2 = Stopped)
> - Brickcount
> * A singleton dbus object corresponding to glusterd is also exported
> by storaged on the system bus. This object holds properties to track
> the state of glusterd (LoadState and ActiveState).
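
The per-volume object described above can be pictured with a small Python model. This is only an illustrative sketch: the property names and status codes (0 = Created, 1 = Started, 2 = Stopped) come from the description above, but the bus-name/object-path layout used here is an assumption, not the actual storaged path scheme.

```python
# Sketch of a per-volume object as exported by storaged's 'glusterfs'
# module, modeled as a plain dataclass. Property names and status codes
# follow the description above; the object path layout is hypothetical.
from dataclasses import dataclass

# Status codes as described: 0 = Created, 1 = Started, 2 = Stopped
STATUS_NAMES = {0: "Created", 1: "Started", 2: "Stopped"}

@dataclass
class GlusterVolumeObject:
    name: str        # 'Name' property
    vol_id: str      # 'Id' property
    status: int      # 'Status' property (0/1/2)
    brickcount: int  # 'Brickcount' property

    def object_path(self) -> str:
        # Hypothetical path layout under the storaged namespace
        return "/org/storaged/Storaged/glusterfs/volume/" + self.name

vol = GlusterVolumeObject("testvol", "c0ffee", 1, 2)
print(STATUS_NAMES[vol.status])  # Started
print(vol.object_path())
```

In the real framework these properties would be read off the system bus (e.g. via the standard org.freedesktop.DBus.Properties interface) rather than constructed locally.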
> 2. Aggregating all these signals from each node over an entire cluster.
> * Using Kafka for messaging over a cluster: implementing a (dbus
> signal) listener in Python that converts these dbus signals from
> objects to 'keyed messages' in Kafka under a particular 'topic'.
> For example, if a volume 'testvol' is started, a message is published
> under topic 'testvol', with 'status' as the 'key' and the changed
> status ('1' in this case) as the 'value'.
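
The signal-to-message translation in the 'testvol' example can be sketched as a small pure function. The function below is an assumption about the listener's shape, not its actual code; the commented-out publishing step uses kafka-python's KafkaProducer as one possible client, and requires a running broker.

```python
# Sketch of the dbus-signal-to-Kafka translation described above: each
# changed property of a volume object becomes a keyed Kafka message,
# with the volume name as the topic.

def to_keyed_messages(volume_name, changed_properties):
    """Map changed dbus properties to (topic, key, value) tuples.

    changed_properties mirrors the dict carried by the standard
    org.freedesktop.DBus.Properties.PropertiesChanged signal.
    """
    return [(volume_name, key.lower(), str(value))
            for key, value in changed_properties.items()]

# 'testvol' was started: the listener sees Status change to 1
msgs = to_keyed_messages("testvol", {"Status": 1})
print(msgs)  # [('testvol', 'status', '1')]

# Publishing (hedged sketch; needs a reachable Kafka broker):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="localhost:9092")
# for topic, key, value in msgs:
#     producer.send(topic, key=key.encode(), value=value.encode())
```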
> *** Near term plans:
> - Export dbus objects corresponding to bricks.
> - Figure out how to map the path to the brick directory to the block
> device and consequently the drive object. The 'SmartFailing' property
> from the org.storaged.Storaged.Drive.Ata interface can then be used to
> track brick failures.
> - Make the framework work over a multi-node cluster, possibly with a
> multi-broker Kafka setup, to provide redundancy as well as to keep
> information consistent across the cluster.
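
For the brick-directory-to-block-device mapping in the plans above, one possible starting point on Linux is to stat the brick path and resolve its device number under /sys/dev/block. This is only a sketch of the first step; resolving the device on to the storaged drive object (and its SmartFailing property) is not shown.

```python
# Possible first step for mapping a brick directory to its block device:
# stat the path, split the device number into major:minor, and point at
# the corresponding /sys/dev/block entry (a Linux sysfs convention).
import os

def brick_device_link(brick_path):
    """Return the /sys/dev/block entry for the device holding brick_path."""
    st = os.stat(brick_path)
    major, minor = os.major(st.st_dev), os.minor(st.st_dev)
    return "/sys/dev/block/%d:%d" % (major, minor)

print(brick_device_link("/"))
```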
> Views/feedback/queries are welcome.
>  https://github.com/samikshan/storaged/tree/glusterfs
>  http://kafka.apache.org/documentation.html#introduction
> Thanks and Regards,
> Gluster-devel mailing list
> Gluster-devel at gluster.org
-------------- next part --------------
A non-text attachment was scrubbed...
Size: 115014 bytes
Desc: not available