Re: [Gluster-devel] Crash - 2.0.git-2009.06.16

NovA av.nova at gmail.com
Thu Jul 2 11:38:42 UTC 2009


On 2 July 2009 13:29, Shehjar Tikoo (shehjart at gluster.com) wrote:
> NovA wrote:
>> Recently I've migrated our small 24-node HPC cluster from GlusterFS
>> 1.3.8 unify to 2.0 distribute. Performance seems to have increased a
>> lot. Thanks for your work!
>>
>> I use the following translators. On servers:
>> posix->locks->iothreads->protocol/server; on clients:
>> protocol/client->distribute->iothreads->write-behind. The io-threads
>> translator uses 4 threads, with NO autoscaling.
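>>
>> For reference, here is a rough sketch of the two volfiles (hostnames,
>> export paths, and volume names are illustrative, and exact option
>> spellings may differ between 2.0.x releases):
>>
>>   # server.vol: posix -> locks -> io-threads -> protocol/server
>>   volume posix
>>     type storage/posix
>>     option directory /export/glusterfs      # illustrative brick path
>>   end-volume
>>
>>   volume locks
>>     type features/locks
>>     subvolumes posix
>>   end-volume
>>
>>   volume iothreads
>>     type performance/io-threads
>>     option thread-count 4                   # fixed pool, no autoscaling
>>     subvolumes locks
>>   end-volume
>>
>>   volume server
>>     type protocol/server
>>     option transport-type tcp
>>     option auth.addr.iothreads.allow *
>>     subvolumes iothreads
>>   end-volume
>>
>>   # client.vol: protocol/client -> distribute -> io-threads -> write-behind
>>   # (only two of the 24 bricks shown)
>>   volume node01
>>     type protocol/client
>>     option transport-type tcp
>>     option remote-host node01               # illustrative hostname
>>     option remote-subvolume iothreads
>>   end-volume
>>
>>   volume node02
>>     type protocol/client
>>     option transport-type tcp
>>     option remote-host node02
>>     option remote-subvolume iothreads
>>   end-volume
>>
>>   volume dht
>>     type cluster/distribute
>>     subvolumes node01 node02
>>   end-volume
>>
>>   volume iot
>>     type performance/io-threads
>>     option thread-count 4
>>     subvolumes dht
>>   end-volume
>>
>>   volume wb
>>     type performance/write-behind
>>     subvolumes iot
>>   end-volume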
>>
>> Unfortunately, after the upgrade I've hit new issues. First, I've
>> noticed very high memory usage. GlusterFS on the head node now eats
>> 737 MB of resident (RES) memory and doesn't return it. The memory
>> usage grew during the migration, which was done with the command "cd
>> ${namespace_export} && find . | (cd ${distribute_mount} && xargs -d
>> '\n' stat -c '%n')". Note that the provided
>> migrate-unify-to-distribute.sh script (with its "execute_on"
>> function) doesn't work...
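>>
>> Spelled out as a script, the migration loop looks like this (both
>> paths are placeholders for the old unify namespace export and the
>> new distribute mount):
>>
>>   # stat every file name from the old namespace through the new
>>   # distribute mount, so each file is looked up once
>>   namespace_export=/export/namespace    # placeholder path
>>   distribute_mount=/mnt/glusterfs       # placeholder path
>>   cd "${namespace_export}" && find . |
>>     (cd "${distribute_mount}" && xargs -d '\n' stat -c '%n')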
>>
>> The second problem is more important. A client on one of the nodes
>> crashed today with the following backtrace:
>
> Hi
>
> We've recently observed a crash with write-behind on the mainline
> branch. I am trying to determine whether the crash you reported could
> be related to it. Could you please apply the following patch to see
> whether it fixes this particular crash as well?
>
> http://patches.gluster.com/patch/667/
>
> Once you've downloaded the patch from the above URL, you can apply it
> using the command:
>
> $ git am <downloaded-file-name>
>
> in the glusterfs source directory.
>

Hi Shehjar!

Unfortunately, I can't test new versions right now; the cluster is
busy with critical CFD jobs. Yesterday I switched to the 'release-2.0'
git branch, and everything is OK so far. Thanks! But I can't break the
/home file system again, or the users will kill me. :) I hope I'll get
a chance to test new GlusterFS versions in a couple of weeks.

Best regards,
  Andrey




