[Gluster-devel] New project on the Forge - gstatus

Krishnan Parthasarathi kparthas at redhat.com
Wed Feb 12 03:29:24 UTC 2014


The premise under which rrdtool integration with io-stats was suggested
was to offset a current deficiency of io-stats: io-stats logs its counters
to the log file or presents them to glusterd/cli via an RPC. Neither of
these is a particularly good interface for external monitoring.

----- Original Message -----
> glusterfsiostat is a great idea!
> 
> I do wonder if the inclusion of rrd in the design is adding complication
> though. For example, do cifsiostat and nfsiostat do this?
> 
> As an admin, would I not just run the glusterfsiostat command with an
> interval/count - and if I want to see the stats over a time period, couldn't
> I just pipe it to a file and background the process? I could get the
> high-level performance counters for any time period that way and not be
> bound to the fixed RRD file size.
> 
> For my money - longer-term time-series observations don't belong in rrd, but
> should be forwarded to a "management layer" - and in that context, would we
> get the value out of the additional integration work with rrd?
> 
> Just another point of view to consider
> 
> ----- Original Message -----
> 
> > From: "Vipul Nayyar" <nayyar_vipul at yahoo.com>
> > To: "Vijay Bellur" <vbellur at redhat.com>, "Paul Cuzner"
> > <pcuzner at redhat.com>,
> > "gluster-devel" <gluster-devel at nongnu.org>
> > Cc: "Krishnan Parthasarathi" <kparthas at redhat.com>
> > Sent: Wednesday, 12 February, 2014 3:54:10 AM
> > Subject: Re: [Gluster-devel] New project on the Forge - gstatus
> 
> > Hello,
> 
> > I'm Vipul, a Computer Engineering student studying in New Delhi. I have
> > some past experience in contributing to open source, and I'm interested in
> > contributing to Gluster along with learning from the community.
> 
> > For the past couple of weeks, I've been in constant contact with Krishnan
> > Parthasarathi regarding building a tool named glusterfsiostat, an nfsiostat
> > clone integrated with rrdtool (a data-logging system) [1]. We believe that
> > storing io-stats data in a database will be a great improvement over
> > dumping it in a log file. Since it's a round-robin database, it's well
> > suited to our cause: we'll be generating time-series data, and our window
> > size would also be fixed. Plus, the data can be easily accessed from the
> > database and processed/modified with a perl/bash script according to the
> > consumer's requirements.
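> 
> > To make the fixed-window point concrete, here is a rough, untested sketch
> > of what creating such a database through librrd's re-entrant API could
> > look like - the file path and data-source layout below are made up purely
> > for illustration:
> 
> > #include <time.h>
> > #include <rrd.h>
> >
> > /* Hypothetical location for the io-stats RRD; purely illustrative. */
> > #define STATS_RRD "/var/lib/glusterd/stats/io-stats.rrd"
> >
> > int main(void)
> > {
> >     /* One COUNTER data source sampled every 5 seconds, kept in a single
> >      * round-robin archive of 720 rows (roughly one hour of history).
> >      * Older samples are overwritten, so the file never grows. */
> >     const char *args[] = {
> >         "DS:read_bytes:COUNTER:10:0:U",
> >         "RRA:AVERAGE:0.5:1:720",
> >     };
> >
> >     return rrd_create_r(STATS_RRD, 5 /* step in seconds */,
> >                         time(NULL) - 10, 2, args);
> > }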
> 
> > As for the issue of integrating the io-stats xlator code with rrdtool,
> > Krish asked me to explore two aspects of it. First, compiling the rrdtool
> > code into io-stats optionally, only when the user passes a parameter like
> > --enable-rrdtool to configure, can be done in the same way the
> > --disable-xml-output option is handled: the code is compiled conditionally
> > by checking certain macros defined in confdefs.h.
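> 
> > As a sketch of this first aspect (untested, and HAVE_RRD here is just a
> > hypothetical macro standing in for whatever --enable-rrdtool would really
> > define; the dump path is invented too), the io-stats counter dump could
> > branch like this:
> 
> > #include <stdio.h>
> >
> > #ifdef HAVE_RRD
> > #include <rrd.h>
> > #endif
> >
> > /* Illustrative only: not the real io-stats dump function. */
> > static void dump_read_bytes(unsigned long long read_bytes)
> > {
> > #ifdef HAVE_RRD
> >     /* rrdtool support compiled in: push the counter into the RRD. */
> >     char buf[64];
> >     const char *args[1];
> >
> >     snprintf(buf, sizeof(buf), "N:%llu", read_bytes);
> >     args[0] = buf;
> >     rrd_update_r("/var/lib/glusterd/stats/io-stats.rrd", NULL, 1, args);
> > #else
> >     /* No rrdtool support: fall back to plain-text logging. */
> >     printf("read_bytes: %llu\n", read_bytes);
> > #endif
> > }
> >
> > int main(void)
> > {
> >     dump_read_bytes(4096);
> >     return 0;
> > }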
> 
> > On the second note, rrdtool provides a C API which works much like the
> > rrd command-line tool, so including the rrd C API in io-stats will do the
> > work of storing stats in the database. As written earlier, getting and
> > displaying the data would then just be a simple task of querying the rrd
> > database.
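> 
> > Again just as a rough sketch (not tested against a real setup, and the
> > file path is invented), reading the averaged samples back out through
> > librrd would look roughly like this; a perl/bash consumer could equally
> > run 'rrdtool fetch' on the same file:
> 
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <time.h>
> > #include <rrd.h>
> >
> > int main(void)
> > {
> >     /* Pull the last 10 minutes of averaged samples out of the
> >      * (hypothetical) io-stats RRD and print one line per value. */
> >     time_t end = time(NULL);
> >     time_t start = end - 600;
> >     unsigned long step = 0, ds_cnt = 0, rows, row, i;
> >     char **ds_names = NULL;
> >     rrd_value_t *data = NULL;
> >
> >     if (rrd_fetch_r("/var/lib/glusterd/stats/io-stats.rrd", "AVERAGE",
> >                     &start, &end, &step, &ds_cnt, &ds_names, &data) != 0) {
> >         fprintf(stderr, "rrd_fetch_r: %s\n", rrd_get_error());
> >         return 1;
> >     }
> >
> >     rows = step ? (unsigned long)(end - start) / step : 0;
> >     for (row = 0; row < rows; row++)
> >         for (i = 0; i < ds_cnt; i++)
> >             printf("%lu %s %f\n",
> >                    (unsigned long)(start + (row + 1) * step),
> >                    ds_names[i], data[row * ds_cnt + i]);
> >
> >     for (i = 0; i < ds_cnt; i++)
> >         free(ds_names[i]);
> >     free(ds_names);
> >     free(data);
> >     return 0;
> > }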
> 
> > Although I've spent some considerable time studying the io-stats code and
> > the data structures being used, I think starting to work on a prototype,
> > along with everyone's criticism, will help me a lot. Is it written down
> > anywhere exactly what data is currently being logged and dumped to the log
> > file? This will help me identify the important data structures and place
> > the rrdtool code where it needs to be.
> 
> > I know the draft above seems quite simple and maybe doesn't cover many of
> > the aspects that need to be dealt with beforehand, but that's where, as an
> > amateur contributor, I need the community's help.
> 
> > I'll send a patch your way soon if you think the direction I'm planning
> > to take is good for the community.
> 
> > [1] http://oss.oetiker.ch/rrdtool/
> 
> > Hoping to hear your views soon.
> 
> > Regards
> > Vipul Nayyar
> 
> > On Monday, 10 February 2014 8:29 PM, Vijay Bellur <vbellur at redhat.com>
> > wrote:
> > On 02/10/2014 02:00 AM, Paul Cuzner wrote:
> > >
> > > Hi,
> > >
> > > I've started a new project on the forge, called gstatus - the wiki page
> > > is https://forge.gluster.org/gstatus/pages/Home
> > >
> > > The idea is to provide admins with a single command to assess the state
> > > of the components of a cluster - nodes, bricks and volume states -
> > > together with capacity information.
> > >
> > > It's the kind of feature that would be great (IMO) as a subcommand of
> > > gluster, i.e. "gluster status" - but as a stopgap, here's the Python
> > > project (we could even use this as a prototype!)
> > >
> > > On the wiki page, you'll find some additional volume status definitions
> > > that I've dreamt up - online-degraded, online-partial - to describe the
> > > effect brick-down events have on a volume's data availability. There are
> > > output examples on the wiki, but here are some examples to show you what
> > > you currently get from the tool.
> > >
> > > On my test 4-way cluster, this is what a healthy state looks like
> > >
> > > [root@rhs1-1 gstatus]# ./gstatus.py
> > > Analysis complete
> > >
> > > Cluster Summary:
> > > Version - 3.4.0.44rhs Nodes - 4/ 4 Bricks - 4/ 4 Volumes - 1/ 1
> > >
> > > Volume Summary
> > > myvol ONLINE (4/4 bricks online) - Distributed-Replicate
> > > Capacity: 64.53 MiB/19.97 GiB (used,total)
> > >
> > > Status Messages
> > > Cluster is healthy, all checks successful
> > >
> > > And then if I take *two nodes* down that provide bricks to the *same
> > > replica set*, I see:
> > >
> > > Analysis complete
> > >
> > >
> > > Cluster Summary:
> > > Version - 3.4.0.44rhs Nodes - 2/ 4 Bricks - 2/ 4 Volumes - 0/ 1
> > >
> > > Volume Summary
> > > myvol ONLINE_PARTIAL (2/4 bricks online) - Distributed-Replicate
> > > Capacity: 32.27 MiB/9.99 GiB (used,total)
> > >
> > >
> > > Status Messages
> > > - rhs1-4 is down
> > > - rhs1-2 is down
> > > - Brick rhs1-4:/gluster/brick1 is down/unavailable
> > > - Brick rhs1-2:/gluster/brick1 is down/unavailable
> > >
> > >
> > >
> 
> > This is great!
> 
> > I think adding one more for the client stack would be neat: a tool
> > similar to nfsstat/nfsiostat which can expose the various counters in the
> > io-stats xlator and also status information, like brick connectivity, from
> > the client perspective. I also have a cool name for that - glusteriostat ;)
> 
> > Cheers,
> > Vijay
> 
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at nongnu.org
> > https://lists.nongnu.org/mailman/listinfo/gluster-devel
> 



