[Gluster-users] Access data directly from underlying storage

Rumen Telbizov telbizov at gmail.com
Thu Mar 19 20:11:07 UTC 2015


Thank you for your answer, Melkor.

This is exactly the kind of first-hand experience I was looking for. I am glad
it has worked well for you.

Has anybody else come across any issues while reading directly from the
underlying disk?
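
For concreteness, the brick-side access I have in mind is nothing more than a
read-only walk like the sketch below. The brick path and the .glusterfs skip
are only illustrative assumptions, not our actual layout:

    #!/usr/bin/env python
    # Read-only walk over a brick directory: readdir/stat/read only;
    # nothing is ever written, renamed or linked under the brick path.
    import os
    import hashlib

    BRICK = "/data/brick1/myvol"  # hypothetical brick path

    for root, dirs, files in os.walk(BRICK):
        # skip Gluster's internal metadata directory at the brick root
        dirs[:] = [d for d in dirs if d != ".glusterfs"]
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)                        # stat()
            with open(path, "rb") as f:               # open()/read()
                digest = hashlib.md5(f.read()).hexdigest()
            print(st.st_size, digest, path)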

Thank you again,
Rumen Telbizov

On Thu, Mar 19, 2015 at 12:29 AM, Melkor Lord <melkor.lord at gmail.com> wrote:

> On Wed, Mar 18, 2015 at 6:22 PM, Rumen Telbizov <telbizov at gmail.com>
> wrote:
>
>>
>>
>> *Can I directly access the data on the underlying storage volumes?*
>>>
>>> If you are doing just read()/access()/stat()-like operations, you
>>> should be fine. If you are not using any new features (like
>>> quota/geo-replication etc.) then technically, you can modify (but surely
>>> not rename(2) and link(2)) the data inside.
>>>
>>> Note that this is not tested as part of gluster’s release cycle and not
>>> recommended for production use.
>>>
>>
>> The last sentence doesn't recommend it for production use. I was
>> wondering whether there is any other concern besides the fact that it's not
>> tested as part of the release cycle, or whether one could actually expect
>> problems with the data being read while doing so.
>>
>> What I am interested in is *only* read operations (readdir, stat, read
>> data). All write operations will continue to go over the shared/mounted
>> drive. So what I want to know is whether the data I am reading will be
>> consistent with the rest of the bricks and not corrupted in any way.
>>
>
> This is not necessarily a direct answer to your question, but I've tested
> something similar. With a running volume (but not mounted anywhere), I
> copied a file (a tarball) directly into the underlying FS directory to test
> how it would react when a client mounted the volume afterwards.
>
> When a client mounted the Gluster filesystem (FUSE client), after some
> time, the tarball I had copied onto one of the bricks was replicated to the
> other servers in my 3-replica test environment.
>
> I checked the tarball on each Gluster server and it was perfectly
> consistent.
>
> During all my other tests, I did things like what you intend to do: I
> mounted the Gluster volume on a client and copied some big files there.
> While the copy was doing its job, I directly accessed the resulting file on
> the servers to see if it was consistent (checking the first few KB of the
> file to verify the headers).
>
> I didn't find anything to complain about, and everything seemed consistent
> to me, so I'd say that what you plan to do is fairly safe.
>
>
>
> --
> Unix _IS_ user friendly, it's just selective about who its friends are.
>
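
To double-check the consistency Melkor describes above, I will probably
compare checksums of the same file as seen through the FUSE mount and as read
directly from a brick, along these lines (the mount point and brick path are
again just example values):

    #!/usr/bin/env python
    # Compare a file's checksum via the FUSE mount vs. directly on a brick.
    import hashlib

    def md5sum(path, chunk=1024 * 1024):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    mount_copy = "/mnt/glustervol/archive.tar.gz"     # via the FUSE client
    brick_copy = "/data/brick1/myvol/archive.tar.gz"  # directly on a brick

    print("mount:", md5sum(mount_copy))
    print("brick:", md5sum(brick_copy))

If the two digests ever diverge, I'll treat that as having caught the file
mid-write or mid-heal and fall back to reading through the mount.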



-- 
Rumen Telbizov
Unix Systems Administrator <http://telbizov.com>