[Gluster-users] rebalance fix layout necessary
Nithya Balachandran
nbalacha at redhat.com
Fri Apr 7 05:35:23 UTC 2017
On 6 April 2017 at 14:56, Amudhan P <amudhan83 at gmail.com> wrote:
> Hi,
>
> I was able to add bricks to the volume successfully.
> Client was reading, writing and listing data from mount point.
> But after adding bricks I had issues with folder listing (not all folders
> were listed, or an empty list was returned) and writes were interrupted.
>
This is strange. The issue with listing folders that you referred to earlier
was caused by the rebalance, but this seems new.
How many bricks did you add and what is your volume config? What errors did
you see while writing or listing folders?
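For reference, the volume config and brick state can be captured with the
gluster CLI. This is just a sketch; I am assuming the volume is named gfs-vol
based on the client translator names in your log:

    gluster volume info gfs-vol               # volume type, brick count, options
    gluster volume status gfs-vol             # per-brick process and port status
    gluster volume rebalance gfs-vol status   # only relevant if a rebalance was started

The FUSE client log (typically under /var/log/glusterfs/, named after the
mount point) around the time of the failed writes and listings would also
help.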
> Remounting the volume has solved the issue and it is now working fine.
>
> I was under the impression that running a rebalance would cause the folder
> listing issue, but now adding bricks by itself created a problem.
> Whether the client is busy or idle is irrelevant; a remount is needed to
> resolve the issue.
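For anyone hitting the same symptom, the remount on the client side is just an
unmount followed by a fresh FUSE mount. A minimal sketch, where server1,
gfs-vol and /mnt/gfs-vol are assumed names rather than taken from this thread:

    umount /mnt/gfs-vol                              # add -l (lazy) if the mount is busy
    mount -t glusterfs server1:/gfs-vol /mnt/gfs-vol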
>
> Also, I would like to know whether using a brick in a volume without
> fix-layout causes folder listing slowness.
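For completeness: fix-layout only rewrites the directory layouts so that the
new bricks are included in the hash ranges; it does not migrate any data. A
sketch of triggering and monitoring it (gfs-vol is an assumed volume name):

    gluster volume rebalance gfs-vol fix-layout start
    gluster volume rebalance gfs-vol status

Without it, directories that existed before the add-brick keep their old
layout, so new files inside them will not be placed on the new bricks, while
directories created after the add-brick get a layout spanning all bricks,
which matches what you observed with the newly created child folders.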
>
>
> Below is a snippet of the client log from when this happened. Let me know
> if you need any additional info.
>
> Client and servers are on 3.10.1; the volume is mounted through FUSE.
>
> Machine busy downloading and uploading:
>
> [2017-04-05 13:39:33.487176] I [MSGID: 114021] [client.c:2361:notify]
> 0-gfs-vol-client-1107: current graph is no longer active, destroying
> rpc_client
> [2017-04-05 13:39:33.487196] I [MSGID: 114021] [client.c:2361:notify]
> 0-gfs-vol-client-1108: current graph is no longer active, destroying
> rpc_client
> [2017-04-05 13:39:33.487201] I [MSGID: 114018] [client.c:2276:client_rpc_notify]
> 0-gfs-vol-client-1107: disconnected from gfs-vol-client-1107. Client
> process will keep trying to connect to glusterd until brick's port is
> available
> [2017-04-05 13:39:33.487212] I [MSGID: 114021] [client.c:2361:notify]
> 0-gfs-vol-client-1109: current graph is no longer active, destroying
> rpc_client
> [2017-04-05 13:39:33.487217] I [MSGID: 114018] [client.c:2276:client_rpc_notify]
> 0-gfs-vol-client-1108: disconnected from gfs-vol-client-1108. Client
> process will keep trying to connect to glusterd until brick's port is
> available
> [2017-04-05 13:39:33.487232] I [MSGID: 114018] [client.c:2276:client_rpc_notify]
> 0-gfs-vol-client-1109: disconnected from gfs-vol-client-1109. Client
> process will keep trying to connect to glusterd until brick's port is
> available
>
>
> Idle system:
>
> [2017-04-05 13:40:07.692336] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-1065: Server lk version = 1
> [2017-04-05 13:40:07.692383] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-995: Server lk version = 1
> [2017-04-05 13:40:07.692430] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-965: Server lk version = 1
> [2017-04-05 13:40:07.692485] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-1075: Server lk version = 1
> [2017-04-05 13:40:07.692532] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-1025: Server lk version = 1
> [2017-04-05 13:40:07.692569] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-1055: Server lk version = 1
> [2017-04-05 13:40:07.692620] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-955: Server lk version = 1
> [2017-04-05 13:40:07.692681] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-1035: Server lk version = 1
> [2017-04-05 13:40:07.692870] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk]
> 2-gfs-vol-client-1045: Server lk version = 1
>
>
> Regards,
> Amudhan
>
> On Tue, Apr 4, 2017 at 4:31 PM, Amudhan P <amudhan83 at gmail.com> wrote:
>
>> I mean the time it takes to list folders and files, because "rebalance
>> fix-layout" was not done.
>>
>>
>> On Tue, Apr 4, 2017 at 1:51 PM, Amudhan P <amudhan83 at gmail.com> wrote:
>>
>>> Ok, good to hear.
>>>
>>> Will there be any impact on listing folders and files?
>>>
>>>
>>> On Tue, Apr 4, 2017 at 1:43 PM, Nithya Balachandran <nbalacha at redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 4 April 2017 at 12:33, Amudhan P <amudhan83 at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I have a query on rebalancing.
>>>>>
>>>>> let's consider following is my folder hierarchy.
>>>>>
>>>>> parent1-fol (parent folder)
>>>>>   |_ class-fol-1 (1st-level subfolder)
>>>>>        |_ A (2nd-level subfolder)
>>>>>             |_ childfol-1 (child folder created every time before writing files)
>>>>>
>>>>>
>>>>> Now, I have a running cluster on 3.10.1 with a disperse volume, and I
>>>>> am planning to expand the cluster by adding bricks.
>>>>>
>>>>> Will there be a problem using the newly added bricks without doing a
>>>>> "rebalance fix-layout", other than that existing files cannot be
>>>>> rebalanced to the new bricks and files created under existing folders
>>>>> will not go to the new bricks?
>>>>>
>>>>> I tested the above case in my test setup and observed that files
>>>>> created under a new folder go to the new bricks, and I don't see any
>>>>> issue with listing files and folders.
>>>>>
>>>>> So, in my case a child folder is created every time before writing files.
>>>>>
>>>>> The reason to avoid a rebalance is that I have more than 10,000 folders
>>>>> across 1080 bricks, so triggering a rebalance will take a long time. In
>>>>> my previous expansion on 3.7, some folders were randomly inaccessible
>>>>> until fix-layout completed.
>>>>>
>>>>>
>>>> It sounds like you will not need to run a rebalance or fix-layout for
>>>> this. It should work fine.
>>>>
>>>> Regards,
>>>> Nithya
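If you want to confirm this behaviour before relying on it, the DHT layout
that a directory carries on a given brick can be inspected with getfattr, run
on the brick server. A sketch; /bricks/brick1 is an assumed brick path, not
taken from this thread:

    getfattr -n trusted.glusterfs.dht -e hex /bricks/brick1/parent1-fol/class-fol-1/A/childfol-1

The value encodes the hash range assigned to that brick for the directory;
child folders created after the add-brick should show a range on the new
bricks, while directories from before the expansion will not include them
until fix-layout is run.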
>>>>
>>>>>
>>>>> regards
>>>>> Amudhan
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>