[Gluster-users] Advice for auto-scaling
Mathieu Chateau
mathieu.chateau at lotp.fr
Wed Sep 16 12:39:09 UTC 2015
Hello,
I am doing that in production for a web farm.
My experience:
- Gluster is synchronous (the client writes to all replicated nodes), so there is
no issue with stale content
- Gluster is sloooowww with small files in replicated mode, due to the
metadata lookups
- for configuration files, I ended up replicating them locally instead, for availability
So it works as you would imagine (well), just slowly
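To give an idea of the kind of setup I mean, here is a minimal sketch (hostnames, brick paths, volume name and mount point are placeholders, adjust to your environment):

```shell
# Create a 2-way replicated volume across two storage nodes
gluster volume create webvol replica 2 \
    gfs1:/data/brick1/webvol gfs2:/data/brick1/webvol
gluster volume start webvol

# On each web node, mount via the FUSE client; the client writes to
# both replicas synchronously, which is why content is never stale
mount -t glusterfs gfs1:/webvol /var/www

# Caching can help small-file workloads a bit (it stays slow overall)
gluster volume set webvol performance.cache-size 256MB
```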
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2015-09-16 14:23 GMT+02:00 Paul Thomas <paul at thomas3.plus.com>:
> Hi,
>
> I’m new to shared file systems and horizontal cloud scaling.
>
> I have already played with auto-scaling on AWS/EC2, in terms of spawning and
> destroying instances, and I can achieve that.
>
> I just want some advice on how best to implement syncing for web files,
> infrastructure, data, etc.
>
> I have pretty much decided to put the database side of things on a private
> instance.
> I'll worry about db clustering later; I'm not too bothered about it now,
> because the software supports it.
>
> It seems logical to put the web folder / application layer on a shared
> file system, maybe some configuration too.
>
> What I'm really unsure about is how to ensure that the current system is
> up to date and the configuration tweaked for the physical specs.
>
> How do people typically approach this? I'm guessing it's not always viable
> to have a shared file system for everything.
>
> Is the approach a disciplined one? Where, say, I have a development instance
> for infrastructure changes.
> Then there is a deployment flow where production instances are somehow
> refreshed without downtime.
>
> Or is there some other approach?
>
> I notice on sites like Yahoo, things are often noticeably out of sync, mostly
> on the data front, but also other things.
> This would be unacceptable in my case.
>
> I appreciate any help I can get regarding this.
>
> My typical load is from php-fpm/nginx processes, with MySQL below this.
>
> Should the memory cache also be separated out, or is it, as I think, better
> for it to be divided up, with the infrastructure supporting each public
> instance individually?
>
> Paul
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>