[Gluster-users] Getting timedout error while rebalancing
deepu srinivasan
sdeepugd at gmail.com
Wed Feb 6 13:37:41 UTC 2019
Please find the glusterd.log file attached.
On Wed, Feb 6, 2019 at 2:01 PM Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Tue, Feb 5, 2019 at 8:43 PM Nithya Balachandran <nbalacha at redhat.com>
> wrote:
>
>>
>>
>> On Tue, 5 Feb 2019 at 17:26, deepu srinivasan <sdeepugd at gmail.com> wrote:
>>
>>> Hi Nithya,
>>> We have a test Gluster setup and are testing the rebalance option. We
>>> started a 1x3 replicated volume with some data on it.
>>> command : gluster volume create test-volume replica 3
>>> 192.168.xxx.xx1:/home/data/repl 192.168.xxx.xx2:/home/data/repl
>>> 192.168.xxx.xx3:/home/data/repl.
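>>>
>>> (A minimal sketch of how the new volume can be started and sanity-checked;
>>> the expected output fields below assume a default setup.)
>>>
>>> # start the freshly created volume and confirm it is a plain 1 x 3 replica
>>> gluster volume start test-volume
>>> gluster volume info test-volume
>>> # expect "Type: Replicate" and "Number of Bricks: 1 x 3 = 3"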
>>>
>>> Now we tried to expand the cluster storage by adding three more bricks.
>>> command : gluster volume add-brick test-volume 192.168.xxx.xx4:/home/data/repl
>>> 192.168.xxx.xx5:/home/data/repl 192.168.xxx.xx6:/home/data/repl
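>>>
>>> (A sketch of how the expansion can be verified; since the volume is
>>> replica 3, the three new bricks should form a second replica set.)
>>>
>>> gluster volume info test-volume
>>> # expect "Type: Distributed-Replicate" and "Number of Bricks: 2 x 3 = 6"
>>> gluster volume status test-volume
>>> # all six brick processes should be online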
>>>
>>> So after the brick addition we tried to rebalance the layout and the
>>> data.
>>> command : gluster volume rebalance test-volume fix-layout start.
>>> The command exited with status "Error : Request timed out".
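>>>
>>> (Even after the CLI timed out, the rebalance itself can still be queried
>>> with the standard status command; the output shown further down comes
>>> from this.)
>>>
>>> gluster volume rebalance test-volume status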
>>>
>>
>> This sounds like an error in the cli or glusterd. Can you send the
>> glusterd.log from the node on which you ran the command?
>>
>
> It seems to me that glusterd took more than 120 seconds to process the
> command, and hence the cli timed out. We can confirm this from the
> rebalance status below, which indicates the rebalance did kick in and
> eventually completed. We need to understand why it took so long, so
> please pass on the cli and glusterd logs from all the nodes, as Nithya
> requested.
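>
> (A minimal sketch of how the logs can be gathered on each node, assuming
> the default log locations under /var/log/glusterfs; adjust the paths if
> your logs live elsewhere.)
>
> # run on every node and attach the resulting tarball
> tar czf /tmp/gluster-logs-$(hostname).tar.gz \
>     /var/log/glusterfs/cli.log /var/log/glusterfs/glusterd.log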
>
>
>> regards,
>> Nithya
>>
>>>
>>> After the command failed, we checked the status of the rebalance, and it
>>> looks like this:
>>>
>>>             Node  Rebalanced-files      size   scanned  failures   skipped     status  run time in h:m:s
>>> ----------------  ----------------  --------  --------  --------  --------  ---------  -----------------
>>>        localhost                41    41.0MB      8200         0         0  completed            0:00:09
>>>  192.168.xxx.xx4                79    79.0MB      8231         0         0  completed            0:00:12
>>>  192.168.xxx.xx6                58    58.0MB      8281         0         0  completed            0:00:10
>>>  192.168.xxx.xx2               136   136.0MB      8566         0       136  completed            0:00:07
>>>  192.168.xxx.xx4               129   129.0MB      8566         0       129  completed            0:00:07
>>>  192.168.xxx.xx6               201   201.0MB      8566         0       201  completed            0:00:08
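>>>
>>> (In case it is useful for scripting, the same status can be captured in a
>>> machine-readable form; a sketch assuming the gluster CLI's --xml option.)
>>>
>>> gluster volume rebalance test-volume status --xml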
>>>
>>> Is the rebalance option working fine? Why did gluster throw the error
>>> "Error : Request timed out"?
>>>
>>> On Tue, Feb 5, 2019 at 4:23 PM Nithya Balachandran <nbalacha at redhat.com>
>>> wrote:
>>>
>>>> Hi,
>>>> Please provide the exact step at which you are seeing the error. It
>>>> would be ideal if you could copy-paste the command and the error.
>>>>
>>>> Regards,
>>>> Nithya
>>>>
>>>>
>>>>
>>>> On Tue, 5 Feb 2019 at 15:24, deepu srinivasan <sdeepugd at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi everyone. I am getting "Error : Request timed out" while doing a
>>>>> rebalance. I have added new bricks to my replicated volume, i.e. it was
>>>>> a 1x3 volume and I added three more bricks to make it a
>>>>> distributed-replicated (2x3) volume. What should I do about the timeout
>>>>> error?
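>>>>>
>>>>> (In case it helps with debugging, a sketch of where the timeout can be
>>>>> seen on the node where the command was run, assuming the CLI's default
>>>>> log file location.)
>>>>>
>>>>> grep -i "timed out" /var/log/glusterfs/cli.log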