[Gluster-users] Re: how to use a small part of glusterfs volume in kubernetes

likun kun.li at ucarinc.com
Sun Jan 1 08:30:21 UTC 2017


Since we use coreos, mounting glusterfs directly on the host can't be done.

We used a very complex procedure to handle this before: first mount the volume inside a glusterfs client container, then share the mount back to the OS, and then have other containers mount it from there. But I thought it was too complicated and abandoned it.
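
For reference, the old workaround relied on Docker's shared mount propagation, roughly like this sketch (the image name, server address, and paths below are just placeholders, not our real setup):

# privileged client container mounts the gluster volume onto a host path;
# with rshared propagation the mount becomes visible on the host as well
docker run -d --privileged --name gluster-client \
  -v /mnt/gluster:/mnt/gluster:rshared \
  gluster-client-image \
  sh -c 'mount -t glusterfs server1:/myvol /mnt/gluster && sleep infinity'

# other containers then consume the shared mount (or a sub-directory of it)
docker run --rm -v /mnt/gluster:/data busybox ls /data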

As for PV and PVC, I just did some testing. I set up a PVC against glusterfs and limited the capacity to 8GB, but when I mounted the corresponding PV from a pod, I got the entire 1.8TB volume. That was not what I expected.
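
Roughly what I tested, with placeholder names (gluster-pv, gluster-pvc, glusterfs-cluster and myvol are just examples, not our real objects):

kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 8Gi              # only used to match the claim below
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myvol
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
EOF

A pod that mounts gluster-pvc still sees the whole 1.8TB backing volume; as far as I can tell the capacity field is only used to match the claim to the PV, it is not enforced as a size limit.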

Likun

From: Vijay Bellur [mailto:vbellur at redhat.com]
Sent: December 31, 2016 7:34
To: likun
Cc: gluster-users
Subject: Re: [Gluster-users] how to use a small part of glusterfs volume in kubernetes

On Fri, Dec 30, 2016 at 4:07 AM, likun <kun.li at ucarinc.com> wrote:

Anyone using glusterfs in a kubernetes environment?

We use coreos. As you know, since 1.4.3 the coreos build of kubernetes has included the glusterfs-client debian package in the hyperkube image. So recently we moved our kubernetes to 1.5.1 and began to mount glusterfs from pods directly.
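
For example, a pod can reference the volume directly like this (glusterfs-cluster and myvol below are just placeholder names):

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gluster-test
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: glustervol
      mountPath: /data
  volumes:
  - name: glustervol
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing the gluster servers
      path: myvol                    # this is always the whole gluster volume
      readOnly: false
EOF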

But we can only mount the whole glusterfs volume. Is there any way to use just a small part of a glusterfs volume, like a directory in the volume, limited to 10G through quota? Can PV and PVC do this?

You can possibly accomplish this by mounting the entire glusterfs volume on the container host and bind-mounting different sub-directories of the volume into different containers. Gluster supports configuring quota on sub-directories. Note that data services like geo-replication, snapshots, etc. cannot be configured for sub-directories.
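
A rough sketch of the idea, with example names (myvol, /app1 and server1 are placeholders):

# on the gluster side, enable quota and cap a sub-directory at 10GB
gluster volume quota myvol enable
gluster volume quota myvol limit-usage /app1 10GB
gluster volume quota myvol list

# on each container host, mount the whole volume once ...
mount -t glusterfs server1:/myvol /mnt/myvol

# ... and expose only a sub-directory to a given container
docker run -v /mnt/myvol/app1:/data my-app-image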

Are your PVs for read-write-once or read-write-many workloads? We are looking at adding read-write-once support with iSCSI in Gluster 3.10.

Regards,

Vijay


