[Gluster-devel] Puppet-Gluster+ThinP
James
purpleidea at gmail.com
Mon Apr 21 00:11:08 UTC 2014
On Sun, Apr 20, 2014 at 7:59 PM, Ric Wheeler <rwheeler at redhat.com> wrote:
> The amount of space you set aside is very much workload dependent (rate of
> change, rate of deletion, rate of notifying the storage about the freed
> space).
From the Puppet-Gluster perspective, this will be configurable. I
would like to set a vaguely sensible default though, which I don't
have at the moment.
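To be concrete, I'm picturing something like a class parameter with a
default (the name and value here are hypothetical, nothing in the
module exposes this yet):

  # Hypothetical knob, purely to illustrate the shape of the thing;
  # the name and the 20% default are made up.
  class gluster::thinp (
    $reserve = '20%',   # portion of the VG held back for snapshot growth
  ) {
    # ...size the thin pool using $reserve...
  }

The hard part is picking that default, which is why I'm asking.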
>
> Keep in mind with snapshots (and thinly provisioned storage, whether using a
> software target or thinly provisioned array) we need to issue the "discard"
> commands down the IO stack in order to let the storage target reclaim space.
>
> That typically means running the fstrim command on the local file system
> (XFS, ext4, btrfs, etc) every so often. Less typically, you can mount your
> local file system with "-o discard" to do it inband (but that comes at a
> performance penalty usually).
Do you think it would make sense to have Puppet-Gluster add a cron job
to do this operation?
Exactly which command should it run, and how often? (Again, so I can
pick sensible defaults.)
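I'm imagining something like the following resource in the module (the
brick path and the weekly schedule are placeholders I made up, not
settled defaults):

  # Rough sketch only: the brick path and schedule are illustrative,
  # not actual Puppet-Gluster defaults.
  cron { 'gluster-brick-fstrim':
    command => '/usr/sbin/fstrim /bricks/brick1',
    user    => 'root',
    minute  => 0,
    hour    => 3,
    weekday => 0,     # weekly, Sunday at 03:00
  }

Whether weekly (or daily, or monthly) is anywhere near right is exactly
what I'd like to know.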
>
> There is also a event mechanism to help us get notified when we hit a target
> configurable watermark ("help, we are running short on real disk, add more
> or clean up!").
Can you point me to some docs about this feature?
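Is this the dmeventd monitoring plus the autoextend settings in
lvm.conf? i.e. something along these lines (my guess at the relevant
knobs, and the values are just examples, so please correct me):

  activation {
      # act once the thin pool crosses 80% usage, growing it by 20%
      # each time (example values only)
      thin_pool_autoextend_threshold = 80
      thin_pool_autoextend_percent   = 20
  }

If there's a doc describing the event/watermark side of this, I'd like
to read it.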
>
> Definitely worth following up with the LVM/device mapper people on how to do
> this best,
>
> Ric
Thanks for the comments. From everyone I've talked to, it seems some
of the answers are still works in progress. The good news is that I'm
ahead of the curve and will be ready when this becomes more mainstream.
I think Paul is in the same position too.
James