[Gluster-devel] Logging in a multi-brick daemon

Dustin Black dblack at redhat.com
Thu Feb 16 19:24:57 UTC 2017


On Feb 15, 2017 5:39 PM, "Jeff Darcy" <jdarcy at redhat.com> wrote:

One of the issues that has come up with multiplexing is that all of the
bricks in a process end up sharing a single log file.  The reaction from
both of the people who have mentioned this is that we should find a way to
give each brick its own log even when they're in the same process, and make
sure gf_log etc. are able to direct messages to the correct one.  I can
think of ways to do this, but it doesn't seem optimal to me.  It will
certainly use up a lot of file descriptors.  I think it will use more
memory.  And then there's the issue of whether this would really be better
for debugging.  Often it's necessary to look at multiple brick logs while
trying to diagnose a problem, so it's actually kind of handy to have
them all in one file.  Which would you rather do?

(a) Weave together entries in multiple logs, either via a script or in your
head?

(b) Split or filter entries in a single log, according to which brick
they're from?


+1 for a single log file with tagging, combined with necessary grep-fu.
Plus I like the idea of an included script or other facility to aid said
grepping.
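
In the spirit of that included script, here is a minimal sketch in C of a
filter that pulls one brick's entries out of a combined log.  The bracketed
[brick-id] tag format and the program name are assumptions for illustration
only; the real tag would be whatever gf_log ends up emitting, and a plain
grep would do the same job.  The point is just that option (b) reduces to
trivial post-processing.

/* filter_brick_log.c - hypothetical helper for filtering a combined
 * brick log.  Assumes each line carries a "[<brick-id>]" tag; the real
 * format depends on how gf_log is modified.
 *
 * Usage: ./filter_brick_log <brick-id> < bricks.log
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char *argv[])
{
    char tag[256];
    char line[8192];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <brick-id>\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Build the "[brick-id]" tag we expect to find on each line. */
    snprintf(tag, sizeof(tag), "[%s]", argv[1]);

    /* Copy through only the lines that mention this brick. */
    while (fgets(line, sizeof(line), stdin)) {
        if (strstr(line, tag))
            fputs(line, stdout);
    }

    return EXIT_SUCCESS;
}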


To me, (b) seems like a much more tractable problem.  I'd say that what we
need is not multiple logs, but *marking of entries* so that everything
pertaining to one brick can easily be found.  One way to do this would be
to modify volgen so that a brick ID (not name because that's a path and
hence too long) is appended/prepended to the name of every translator in
the brick.  Grep for that brick ID, and voila!  You now have all log
messages for that brick and no other.  A variant of this would be to leave
the names alone and modify gf_log so that it adds the brick ID
automagically (based on a thread-local variable similar to THIS).  Same
effect, but without making translator names longer, so I'd kind of prefer
this approach.  Before I start writing the code, does anybody else have any
opinions, preferences, or alternatives I haven't mentioned yet?
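
To make the thread-local variant concrete, here is a rough sketch of the
mechanism, assuming a THIS-like thread-local that each brick's worker
threads set once.  All names here (brick_id_tls, set_brick_id, brick_log)
are hypothetical stand-ins for the actual change inside gf_log; they are
not existing GlusterFS APIs.

/* Sketch of per-brick tagging via a thread-local variable, analogous
 * to the THIS mechanism.  Hypothetical names throughout. */
#include <stdarg.h>
#include <stdio.h>

/* Each worker thread records which brick it is currently serving. */
static __thread const char *brick_id_tls = "-";

void
set_brick_id(const char *brick_id)
{
    brick_id_tls = brick_id ? brick_id : "-";
}

/* A gf_log-style wrapper that prepends the brick ID automatically, so
 * callers don't pass it and translator names stay short. */
void
brick_log(const char *domain, const char *fmt, ...)
{
    va_list ap;

    fprintf(stderr, "[%s] [%s] ", brick_id_tls, domain);

    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);

    fputc('\n', stderr);
}

int
main(void)
{
    /* A request-handling thread tags itself once; every subsequent
     * message carries the brick ID:
     *   [vol0-brick3] [posix] lookup on /some/path failed */
    set_brick_id("vol0-brick3");
    brick_log("posix", "lookup on %s failed", "/some/path");
    return 0;
}

The appeal of this variant is that the tag comes along for free on every
message without touching volgen or the translator graph, and the log stays
a single, chronologically ordered file per process.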
