[Gluster-devel] Update on mgmt_v3-locks.t failure in netbsd

Joe Julian joe at julianfamily.org
Tue Apr 28 12:20:08 UTC 2015


No Raspberry Pi servers any more? 

On April 28, 2015 5:07:06 AM PDT, Justin Clift <justin at gluster.org> wrote:
>Does this mean we're officially no longer supporting 32 bit
>architectures?
>
>(or is that just on x86?)
>
>+ Justin
>
>
>On 28 Apr 2015, at 12:45, Kaushal M <kshlmster at gmail.com> wrote:
>> Found the problem. The NetBSD slaves are running a 32-bit kernel
>> and userspace.
>> ```
>> nbslave7a# uname -p
>> i386
>> ```
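>> 
>> (As a cross-check: CAA_BITS_PER_LONG follows the userspace ABI's
>> long size, so a minimal C probe on the slave should print 32. A
>> sketch, assuming only a standard toolchain:)
>> ```
>> #include <stdio.h>
>> #include <limits.h>
>> 
>> int main(void)
>> {
>>         /* 32 on an i386 userspace, 64 on amd64 */
>>         printf("%zu\n", sizeof(long) * CHAR_BIT);
>>         return 0;
>> }
>> ```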
>> 
>> Because of this, CAA_BITS_PER_LONG is set to 32 and the case for
>> size 8 isn't compiled in uatomic_add_return. Even though the
>> underlying (virtual) hardware has 64-bit support, and supports the
>> required 8-byte wide instruction, it cannot be used because we are
>> running on a 32-bit kernel with a 32-bit userspace.
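>> 
>> (This is roughly what glusterd ends up doing; a minimal sketch that
>> should reproduce the SIGILL on these slaves, assuming liburcu's
>> headers are installed:)
>> ```
>> #include <stdint.h>
>> #include <urcu/uatomic.h>
>> 
>> int main(void)
>> {
>>         uint64_t counter = 0;
>> 
>>         /* sizeof(counter) == 8; with CAA_BITS_PER_LONG == 32 the
>>          * 8-byte case is compiled out, so this falls through to
>>          * the ud2 instruction and dies with SIGILL. */
>>         (void) uatomic_add_return(&counter, 1);
>>         return 0;
>> }
>> ```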
>> 
>> Manu, was there any particular reason why you chose 32-bit NetBSD?
>> If not, can you please replace the VMs with 64-bit NetBSD? Until
>> then, you can keep mgmt_v3-locks.t disabled.
>> 
>> ~kaushal
>> 
>> On Tue, Apr 28, 2015 at 4:56 PM, Kaushal M <kshlmster at gmail.com> wrote:
>>> I seem to have found the issue.
>>> 
>>> The uatomic_add_return function is defined in urcu/uatomic.h as
>>> ```
>>> /* uatomic_add_return */
>>> 
>>> static inline __attribute__((always_inline))
>>> unsigned long __uatomic_add_return(void *addr, unsigned long val,
>>>                                int len)
>>> {
>>>       switch (len) {
>>>       case 1:
>>>       {
>>>               unsigned char result = val;
>>> 
>>>               __asm__ __volatile__(
>>>               "lock; xaddb %1, %0"
>>>                       : "+m"(*__hp(addr)), "+q" (result)
>>>                       :
>>>                       : "memory");
>>>               return result + (unsigned char)val;
>>>       }
>>>       case 2:
>>>       {
>>>               unsigned short result = val;
>>> 
>>>               __asm__ __volatile__(
>>>               "lock; xaddw %1, %0"
>>>                       : "+m"(*__hp(addr)), "+r" (result)
>>>                       :
>>>                       : "memory");
>>>               return result + (unsigned short)val;
>>>       }
>>>       case 4:
>>>       {
>>>               unsigned int result = val;
>>> 
>>>               __asm__ __volatile__(
>>>               "lock; xaddl %1, %0"
>>>                       : "+m"(*__hp(addr)), "+r" (result)
>>>                       :
>>>                       : "memory");
>>>               return result + (unsigned int)val;
>>>       }
>>> #if (CAA_BITS_PER_LONG == 64)
>>>       case 8:
>>>       {
>>>               unsigned long result = val;
>>> 
>>>               __asm__ __volatile__(
>>>               "lock; xaddq %1, %0"
>>>                       : "+m"(*__hp(addr)), "+r" (result)
>>>                       :
>>>                       : "memory");
>>>               return result + (unsigned long)val;
>>>       }
>>> #endif
>>>       }
>>>       /*
>>>        * generate an illegal instruction. Cannot catch this with
>>>        * linker tricks when optimizations are disabled.
>>>        */
>>>       __asm__ __volatile__("ud2");
>>>       return 0;
>>> }
>>> ```
>>> 
>>> As we can see, uatomic_add_return uses different assembly
>>> instructions to perform the add based on the size of the datatype
>>> of the value. If the size of the value doesn't exactly match one of
>>> the sizes in the switch case, it deliberately generates a SIGILL.
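>>> 
>>> (The len argument comes from the calling macro, which passes the
>>> operand's size; a paraphrased sketch of the dispatch, not copied
>>> verbatim from the header:)
>>> ```
>>> /* uatomic_add_return(addr, v) forwards sizeof(*(addr)) as len,
>>>  * so a uint64_t operand selects the case-8 branch above. */
>>> #define uatomic_add_return(addr, v)                          \
>>>         ((__typeof__(*(addr))) __uatomic_add_return((addr),  \
>>>                                 (unsigned long) (v),         \
>>>                                 sizeof(*(addr))))
>>> ```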
>>> 
>>> The case for size 8 is conditionally compiled, as we can see above.
>>> From the backtrace Atin provided earlier, we see that the size of
>>> the value is indeed 8 (we use uint64_t). Because we got a SIGILL,
>>> we can conclude that the case for size 8 wasn't compiled in.
>>> 
>>> I don't know why this compilation didn't (or, as this is in a
>>> header file, doesn't) happen on the NetBSD slaves, and this is
>>> something I'd like to find out.
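>>> 
>>> (If we wanted to catch this at build time instead of via a runtime
>>> SIGILL, a guard along these lines would do; a sketch, assuming
>>> CAA_BITS_PER_LONG comes from urcu/compiler.h:)
>>> ```
>>> #include <urcu/compiler.h>  /* defines CAA_BITS_PER_LONG */
>>> 
>>> /* Fail the build rather than SIGILL at runtime when 8-byte
>>>  * uatomic operations are unavailable. */
>>> #if (CAA_BITS_PER_LONG != 64)
>>> # error "need a 64-bit userspace for 8-byte uatomic_add_return"
>>> #endif
>>> ```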
>>> 
>>> ~kaushal
>>> 
>>> On Tue, Apr 28, 2015 at 1:50 PM, Anand Nekkunti <anekkunt at redhat.com> wrote:
>>>> 
>>>> On 04/28/2015 01:40 PM, Emmanuel Dreyfus wrote:
>>>>> 
>>>>> On Tue, Apr 28, 2015 at 01:37:42PM +0530, Anand Nekkunti wrote:
>>>>>> 
>>>>>> __asm__ is for writing assembly code in C (gcc);
>>>>>> __volatile__(:::) is a compiler-level barrier that forces the
>>>>>> compiler not to reorder the instructions (to avoid
>>>>>> optimization).
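>>>>>> 
>>>>>> (For example, the classic no-op compiler barrier; a sketch:)
>>>>>> ```
>>>>>> /* Emits no machine instruction, but the compiler may not
>>>>>>  * reorder memory accesses across it. */
>>>>>> #define barrier() __asm__ __volatile__("" ::: "memory")
>>>>>> ```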
>>>>> 
>>>>> Sure, but the gory details should be of no interest to the
>>>>> developer engaged in debugging: if it crashes, this is probably
>>>>> because it is called with wrong arguments, hence the question:
>>>>> ccing gluster-devel
>>>>>>> 
>>>>>>>         new_peer->generation = uatomic_add_return(&conf->generation, 1);
>>>>>>> 
>>>>>>> Are new_peer->generation and conf->generation sane?
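>>>>> 
>>>>> ("Sane" here meaning, e.g., non-NULL and naturally aligned for an
>>>>> 8-byte xadd; a hypothetical helper one could drop in to check:)
>>>>> ```
>>>>> #include <assert.h>
>>>>> #include <stdint.h>
>>>>> 
>>>>> /* Hypothetical check before handing a counter to
>>>>>  * uatomic_add_return(). */
>>>>> static inline void check_counter(const uint64_t *p)
>>>>> {
>>>>>         assert(p != NULL);
>>>>>         /* lock xaddq wants a naturally aligned operand */
>>>>>         assert(((uintptr_t) p % sizeof(*p)) == 0);
>>>>> }
>>>>> ```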
>>>> 
>>>> 
>
>--
>GlusterFS - http://www.gluster.org
>
>An open source, distributed file system scaling to several
>petabytes, and handling thousands of clients.
>
>My personal twitter: twitter.com/realjustinclift
>
>_______________________________________________
>Gluster-devel mailing list
>Gluster-devel at gluster.org
>http://www.gluster.org/mailman/listinfo/gluster-devel

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

