[Gluster-devel] Update on mgmt_v3-locks.t failure in netbsd

Kaushal M kshlmster at gmail.com
Tue Apr 28 11:26:29 UTC 2015


I seem to have found the issue.

The uatomic_add_return call ends up in __uatomic_add_return, which is defined in urcu/uatomic.h as
```
/* uatomic_add_return */

static inline __attribute__((always_inline))
unsigned long __uatomic_add_return(void *addr, unsigned long val,
                                   int len)
{
        switch (len) {
        case 1:         /* 1-byte operand */
        {
                unsigned char result = val;

                __asm__ __volatile__(
                "lock; xaddb %1, %0"
                        : "+m"(*__hp(addr)), "+q" (result)
                        :
                        : "memory");
                return result + (unsigned char)val;
        }
        case 2:         /* 2-byte operand */
        {
                unsigned short result = val;

                __asm__ __volatile__(
                "lock; xaddw %1, %0"
                        : "+m"(*__hp(addr)), "+r" (result)
                        :
                        : "memory");
                return result + (unsigned short)val;
        }
        case 4:         /* 4-byte operand */
        {
                unsigned int result = val;

                __asm__ __volatile__(
                "lock; xaddl %1, %0"
                        : "+m"(*__hp(addr)), "+r" (result)
                        :
                        : "memory");
                return result + (unsigned int)val;
        }
#if (CAA_BITS_PER_LONG == 64)
        case 8:         /* 8-byte operand, only when long is 64-bit */
        {
                unsigned long result = val;

                __asm__ __volatile__(
                "lock; xaddq %1, %0"
                        : "+m"(*__hp(addr)), "+r" (result)
                        :
                        : "memory");
                return result + (unsigned long)val;
        }
#endif
        }
        /*
         * generate an illegal instruction. Cannot catch this with
         * linker tricks when optimizations are disabled.
         */
        __asm__ __volatile__("ud2");
        return 0;
}
```

As we can see, __uatomic_add_return picks the assembly instruction for
the add based on the size of the operand. If that size doesn't exactly
match one of the cases in the switch, the function falls through to
ud2, an invalid opcode, and deliberately dies with SIGILL.
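The len argument comes from the calling macro, which passes the operand
size via sizeof. A simplified sketch of that dispatch (an approximation,
not the verbatim urcu macro):
```
/* Sketch of the type-generic wrapper: the operand size is taken with
 * sizeof, so the switch in __uatomic_add_return is resolved per call
 * site at compile time. */
#define uatomic_add_return(addr, v)                                    \
        ((__typeof__(*(addr))) __uatomic_add_return((addr),            \
                (unsigned long)(v), sizeof(*(addr))))
```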

The case for size 8 is conditionally compiled, as we can see above: it
only exists when CAA_BITS_PER_LONG == 64. From the backtrace Atin
provided earlier, we see that the size of the value is indeed 8 (we use
a uint64_t). Because we got a SIGILL, we can conclude that the size-8
case wasn't compiled in.
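This is easy to reproduce outside glusterd. A minimal test program
(assuming liburcu is installed; CAA_BITS_PER_LONG comes from
urcu/compiler.h):
```
#include <stdio.h>
#include <stdint.h>
#include <urcu/compiler.h>      /* CAA_BITS_PER_LONG */
#include <urcu/uatomic.h>       /* uatomic_add_return */

int main(void)
{
        uint64_t generation = 0;

        printf("CAA_BITS_PER_LONG = %d, sizeof(generation) = %zu\n",
               CAA_BITS_PER_LONG, sizeof(generation));
        /* On a build where CAA_BITS_PER_LONG == 32 the 8-byte case is
         * compiled out, so this hits ud2 and dies with SIGILL. */
        uatomic_add_return(&generation, 1);
        printf("add succeeded: generation = %llu\n",
               (unsigned long long)generation);
        return 0;
}
```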

I don't know why that case didn't get compiled in (or, since this is in
a header file, doesn't get compiled in) on the NetBSD slaves, and this
is something I'd like to find out.
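One way to catch this at build time instead of as a runtime SIGILL
would be a static assertion. An illustrative sketch only
(glusterd_conf_sketch is a made-up stand-in, not the real glusterd
struct):
```
#include <stdint.h>
#include <urcu/compiler.h>      /* CAA_BITS_PER_LONG */

/* Hypothetical stand-in for the real glusterd conf struct. */
struct glusterd_conf_sketch {
        uint64_t generation;
};

/* Fail the build on platforms where the 8-byte case of
 * __uatomic_add_return is compiled out (CAA_BITS_PER_LONG < 64). */
_Static_assert(sizeof(((struct glusterd_conf_sketch *) 0)->generation) * 8
                <= CAA_BITS_PER_LONG,
               "generation counter is wider than uatomic_add_return supports");
```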

~kaushal

On Tue, Apr 28, 2015 at 1:50 PM, Anand Nekkunti <anekkunt at redhat.com> wrote:
>
> On 04/28/2015 01:40 PM, Emmanuel Dreyfus wrote:
>>
>> On Tue, Apr 28, 2015 at 01:37:42PM +0530, Anand Nekkunti wrote:
>>>
>>>         __asm__ lets you write inline assembly in C (gcc), and
>>> __volatile__ together with the "memory" clobber acts as a
>>> compiler-level barrier that keeps the compiler from reordering the
>>> instructions (i.e. from optimizing across them).
>>
>> Sure, but the gory details should be of no interest to the developer
>> engaged in debugging: if it crashes, it is probably because it is
>> called with wrong arguments, hence the question:
>>   ccing gluster-devel
>>>>
>>>>          new_peer->generation = uatomic_add_return (&conf->generation, 1);
>>>> Are new_peer->generation and conf->generation sane?
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

