[Gluster-devel] syncops and thread specific memory regions
Raghavendra Gowdappa
rgowdapp at redhat.com
Thu Jul 3 03:31:58 UTC 2014
----- Original Message -----
> From: "Xavier Hernandez" <xhernandez at datalab.es>
> To: "Raghavendra Gowdappa" <rgowdapp at redhat.com>
> Cc: gluster-devel at gluster.org
> Sent: Wednesday, July 2, 2014 6:50:51 PM
> Subject: Re: [Gluster-devel] syncops and thread specific memory regions
>
> On Wednesday 02 July 2014 07:57:52 Raghavendra Gowdappa wrote:
> > Hi all,
> >
> > The bug fixed by [1] is one instance of a class of problems where:
> >
> > 1. We access a variable that is stored in a thread-specific area and
> >    hence can live at different memory addresses in different threads.
> > 2. A single (code) control flow is executed in more than one thread.
> > 3. Optimization prevents recalculating the address of the variable
> >    mentioned in 1 every time it is accessed; an address calculated
> >    earlier is reused instead (see the sketch below).
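> >
> > For illustration, a minimal sketch (not GlusterFS code; yield_point()
> > is a hypothetical stand-in for any point where a synctask may resume
> > on a different thread):
> >
> >     #include <errno.h>
> >
> >     /* hypothetical: after this call the task may be running on a
> >        different thread */
> >     extern void yield_point (void);
> >
> >     void
> >     task_body (void)
> >     {
> >         errno = 0;      /* __errno_location () evaluated on thread A */
> >
> >         yield_point (); /* task may now be running on thread B */
> >
> >         /* since __errno_location () is declared const, the compiler
> >          * may reuse the address computed above, reading thread A's
> >          * errno while executing on thread B: */
> >         if (errno == EINTR)
> >             return;
> >     }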
> >
> > The bug fixed by [1] involved "errno" as the variable. However, there
> > are other pointers stored in TLS as well:
> >
> > 1. The xlator object in whose context the current code is executing
> >    (aka THIS, set/read using __glusterfs_this_location()).
> > 2. A buffer used to format binary uuids into strings (used by
> >    uuid_utoa()).
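> >
> > For reference, both live behind per-thread locations (paraphrased
> > from the libglusterfs headers; exact declarations may differ across
> > versions):
> >
> >     /* THIS expands to a dereference of a per-thread location,
> >        just like the errno macro: */
> >     xlator_t **__glusterfs_this_location (void);
> >     #define THIS (*__glusterfs_this_location ())
> >
> >     /* uuid_utoa () formats a uuid into a per-thread buffer and
> >        returns a pointer into that buffer: */
> >     char *uuid_utoa (uuid_t uuid);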
> >
> > I think we can hit the corruption uncovered by [1] in the above two
> > scenarios too. Comments?
>
> I did discuss these same two problems with Pranith some time ago [1].
>
> Basically the errno issue was caused because __errno_location() is declared
> with 'const':
>
> extern int *__errno_location (void) __THROW __attribute__ ((__const__));
> # define errno (*__errno_location ())
>
> __glusterfs_this_location() is not declared 'const', so the compiler
> doesn't optimize it as aggressively as __errno_location(), and this bug
> is not present there.
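>
> A sketch of the difference (illustrative declarations only, not real
> APIs):
>
>     /* const tells the compiler the return value depends only on the
>        arguments, so repeated calls may be merged: */
>     extern int *loc_const (void) __attribute__ ((__const__));
>     extern int *loc_plain (void);
>
>     void
>     demo (void)
>     {
>         *loc_const () = 1;
>         *loc_const () = 2;   /* may reuse the address from the call above */
>
>         *loc_plain () = 1;
>         *loc_plain () = 2;   /* must call loc_plain () again */
>     }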
>
> The uuid_utoa() issue is not a problem as long as the result is only
> used for logging purposes or very local access. The returned pointer
> must not be stored anywhere for future access. At the time of that
> discussion, all these conditions were satisfied.
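>
> For example (a sketch; 'xl' and 'gfid' are stand-ins for a typical
> call site, and the include paths may differ by version):
>
>     #include "xlator.h"        /* xlator_t, gf_log () */
>     #include "common-utils.h"  /* uuid_utoa () */
>
>     void
>     log_gfid (xlator_t *xl, uuid_t gfid)
>     {
>         /* fine: the pointer is consumed immediately, on the same
>            thread, with no yield in between: */
>         gf_log (xl->name, GF_LOG_DEBUG, "gfid=%s", uuid_utoa (gfid));
>
>         /* not fine: stashing the pointer for later use; a later call
>            overwrites the buffer, and after a thread switch it points
>            into the old thread's buffer: */
>         char *saved = uuid_utoa (gfid);   /* don't do this */
>         (void) saved;
>     }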
Ah! Ok. Sorry, I had missed that discussion :)
>
> Refer to the emails [1] for more detailed information.
>
> Xavi
>
> [1] http://gluster.org/pipermail/gluster-devel/2013-December/026279.html
>
>