[Gluster-users] Data migration and rebalance

F. Ozbek ozbek at gmx.com
Sat Jan 19 18:43:30 UTC 2013


Jon,

try moosefs. http://www.moosefs.org/

we tried both gluster and ceph; both failed in many ways.
moosefs passed the same tests with flying colors.

moose is your friend.

On 11/24/2012 07:54 PM, Jonathan Lefman wrote:
> I am sorry; I really didn't intend for this to be insulting.  First of all, I want to apologize to the hard-working people who make this tool available if my
> remarks came across as insulting or anything like that.  I want it to be clear that this was not my intention.  Perhaps I should have chosen my words more
> wisely to give a better indication of what I wanted to find out.
>
> The goal of my comments was to find out whether I am at a dead-end for my task.  I feel that I am there.  I was hoping to hear whether others have used
> something else successfully.  I was also hoping that someone would reply that I am wrong about what I am experiencing and tell me that I must have
> forgotten to do or check something.
>
> Thank you for letting me know right away and sending me feedback.
>
> -Jon
>
>     On Sat, Nov 24, 2012 at 7:48 PM, Joe Julian <joe at julianfamily.org> wrote:
>
>     Please don't insult the people that work hard to give you a free tool, nor those who spend their own limited time to offer you free support.
>
>     If you need to try a different tool, then just do so. We don't need to hear about it.
>
>     If you have found a bug or have encountered a problem, ask for help if you want help. File a bug report if you want it fixed. All insults do is create
>     frustration.
>
>     Jonathan Lefman <jonathan.lefman at essess.com> wrote:
>
>         My gluster volume is more-or-less useless from an administration point of view.  I am unable to stop the volume: either it claims it is still rebalancing,
>         or gluster says the command failed.  When I try to stop, start, or get the status of the rebalance, I get nothing back.  I have stopped and restarted all
>         the glusterfsd processes on each host, but nothing seems to bring sanity back to the volume.
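>
>         For reference, the commands I have been trying look roughly like this (a sketch from memory, with the real volume name):
>
>         sudo gluster volume rebalance essess_data stop
>         sudo gluster volume rebalance essess_data status
>         sudo gluster volume stop essess_data
>
>         The first two come back with nothing at all, and the stop either fails outright or claims a rebalance is in progress.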
>
>         This is bad news for gluster's reliability.  I am unable to find the source of the problem, and the usual methods for resetting the system to a usable
>         state are not working.  I think it is time to call it quits and find another solution.  Ceph?
>
>
>         On Fri, Nov 23, 2012 at 1:21 PM, Jonathan Lefman <jonathan.lefman at essess.com> wrote:
>
>             At the same time, the rebalance log suggests that the rebalance is still running in the background, because entries related to rebalancing keep
>             appearing.  However, the detailed status command shows that the distribution of files across the older nodes has not changed.
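>
>             (The log I am reading here is the per-volume rebalance log, which on a default install should be something like
>             /var/log/glusterfs/essess_data-rebalance.log, though the exact file name may vary between gluster versions.)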
>
>
>
>             On Fri, Nov 23, 2012 at 1:10 PM, Jonathan Lefman <jonathan.lefman at essess.com> wrote:
>
>                 Volume type:
>
>                 non-replicated (pure distribute), 29 nodes, bricks formatted with xfs
>
>                 Number of files/directories:
>
>                 There are about 5000-10000 directories
>
>                 Average size of files:
>
>                 There are two distributions of files: the vast majority are around 200-300 kilobytes, with roughly 1000-fold fewer files of around
>                 1 gigabyte each.
>
>                 Average number of files per directory:
>
>                 Around 1800 files per directory
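>
>                 (Back-of-the-envelope: 5000-10000 directories at roughly 1800 files each puts the volume at something like 9-18 million files for the
>                 rebalance to crawl, assuming the per-directory average holds across the volume, which may go some way toward explaining why the
>                 rebalance has been running for so long.)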
>
>                 glusterd log below:
>
>                 When trying
>
>                 sudo gluster volume rebalance essess_data status
>
>                 OR
>
>                 sudo gluster volume status essess_data
>                 operation failed
>
>                 Log for this time from /var/log/glusterfs/etc-glusterfs-glusterd.vol.log:
>
>                 [2012-11-23 13:05:00.489567] E [glusterd-handler.c:458:glusterd_op_txn_begin] 0-management: Unable to acquire local lock, ret: -1
>                 [2012-11-23 13:07:09.102007] I [glusterd-handler.c:2670:glusterd_handle_status_volume] 0-management: Received status volume req for volume
>                 essess_data
>                 [2012-11-23 13:07:09.102056] E [glusterd-utils.c:277:glusterd_lock] 0-glusterd: Unable to get lock for uuid:
>                 ee33fd05-135e-40e7-a157-3c1e0b9be073, lock held by: ee33fd05-135e-40e7-a157-3c1e0b9be073
>                 [2012-11-23 13:07:09.102073] E [glusterd-handler.c:458:glusterd_op_txn_begin] 0-management: Unable to acquire local lock, ret: -1
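>
>                 Note that the uuid holding the lock is the same uuid that is requesting it, so it looks as if this node's own glusterd is blocked on a stale
>                 cluster-wide lock left over from the hung rebalance transaction.  If that reading is right, restarting glusterd itself on the affected node (as
>                 opposed to the glusterfsd brick processes, which I have already restarted) might release it, e.g. something like
>
>                 sudo service glusterd restart
>
>                 though I have not verified that this is safe while a rebalance is supposedly still running, and the exact service name may differ by
>                 distribution.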
>
>
>
>
>                 On Fri, Nov 23, 2012 at 12:58 PM, Vijay Bellur <vbellur at redhat.com> wrote:
>
>                     On 11/23/2012 11:14 PM, Jonathan Lefman wrote:
>
>                         The rebalance command has run for quite a while.  Now when I issue the
>                         rebalance status command,
>
>                         sudo gluster volume rebalance myvol status
>
>                         I get nothing back, just a return to the command prompt.  Any ideas
>                         about what is going on?
>
>
>                     A few questions:
>
>                     - What is your volume type?
>                     - How many files and directories do you have in your volume?
>                     - What is the average size of files?
>                     - What is the average number of files per directory?
>                     - Can you please share glusterd logs from the time when the command returns without displaying any output?
>
>                     Thanks,
>                     Vijay
>
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>
