[Gluster-users] File Size and Brick Size

Ravishankar N ravishankar at redhat.com
Tue Sep 27 01:42:57 UTC 2016


On 09/27/2016 05:15 AM, ML Wong wrote:
> Has anyone on the list tried copying a file that is bigger than the
> individual brick/replica size?
> Test Scenario:
> Distributed-Replicated volume, 2GB total, 2x2 = 4 bricks, 2 replicas
> Each replica set is 1GB
>
> When I tried to copy a file to this volume, via both FUSE and NFS
> mounts, I get an I/O error.
> Filesystem                  Size  Used Avail Use% Mounted on
> /dev/mapper/vg0-brick1     1017M   33M  985M   4% /data/brick1
> /dev/mapper/vg0-brick2     1017M  109M  909M  11% /data/brick2
> lbre-cloud-dev1:/sharevol1  2.0G  141M  1.9G   7% /sharevol1
>
> [xxxxxx@cloud-dev1 ~]$ du -sh /var/tmp/ovirt-live-el7-3.6.2.iso
> 1.3G    /var/tmp/ovirt-live-el7-3.6.2.iso
>
> [melvinw@lbre-cloud-dev1 ~]$ sudo cp /var/tmp/ovirt-live-el7-3.6.2.iso 
> /sharevol1/
> cp: error writing ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: Input/output 
> error
> cp: failed to extend ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: 
> Input/output error
> cp: failed to close ‘/sharevol1/ovirt-live-el7-3.6.2.iso’: 
> Input/output error

Does the mount log give you more information? If it were a disk-full 
issue, the error you would get is ENOSPC, not EIO. This looks like 
something else.
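
For a quick check on the client side, the FUSE mount log usually lives 
under /var/log/glusterfs/ and is named after the mount point, so for a 
mount at /sharevol1 something along these lines should show the 
underlying error (the exact log path can vary with your setup):

# last messages logged around the failed cp
tail -50 /var/log/glusterfs/sharevol1.log
# error-level entries are tagged with " E "
grep ' E ' /var/log/glusterfs/sharevol1.log | tail -20
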
>
> I know we have experts on this mailing list, and I assume this is a 
> common situation that many Gluster users may have encountered. The 
> worry I have is: what if you have a big VM file sitting on top of a 
> Gluster volume ...?
>
It is recommended to use sharding 
(http://blog.gluster.org/2015/12/introducing-shard-translator/) for VM 
workloads to alleviate these kinds of issues.
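
For example, something along these lines should enable it on the volume 
(features.shard and features.shard-block-size are the standard volume 
options; the 64MB block size below is just an illustration, pick what 
suits your workload):

# enable the shard translator on the volume
gluster volume set sharevol1 features.shard on
# optionally choose the shard size; large files are then stored as chunks
# of this size spread across the replica sets, so no single brick has to
# hold the whole file
gluster volume set sharevol1 features.shard-block-size 64MB

Note that, as far as I know, sharding only applies to files created after 
it is enabled; existing files stay whole unless they are copied again.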
-Ravi

> Any insights will be much appreciated.
>

