Re: [Gluster-devel] Re-exporting NFS to vmware

Fredrik Widlund fredrik.widlund at qbrick.com
Thu Jan 6 16:49:38 UTC 2011


If you're re-exporting a gluster filesystem, the re-exporting node will act as a proxy. As a concept this is fairly natural, and in itself it shouldn't be a problem. 

And I did say that it is possible to re-export a FUSE filesystem, not impossible. 
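
As a rough sketch of what such a proxy node involves (the volume name, mount point and export options below are made-up examples, and the fsid= option is what lets the kernel nfsd export a FUSE mount):

#!/usr/bin/env python3
# Minimal sketch of a re-export proxy node: mount the Gluster volume
# through the FUSE client, then export that mount point over NFS.
# The hostname, volume name and paths are hypothetical examples.
import subprocess

GLUSTER_VOLUME = "gluster1:/testvol"   # any server in the pool + volume name
MOUNT_POINT = "/mnt/gluster"           # local FUSE mount on the proxy node

# 1. Mount the Gluster volume locally through the FUSE client.
subprocess.run(["mount", "-t", "glusterfs", GLUSTER_VOLUME, MOUNT_POINT],
               check=True)

# 2. Export the FUSE mount over NFS. FUSE filesystems need an explicit
#    fsid= option in /etc/exports before they can be exported.
with open("/etc/exports", "a") as exports:
    exports.write(f"{MOUNT_POINT} *(rw,no_root_squash,fsid=10)\n")
subprocess.run(["exportfs", "-ra"], check=True)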

Kind regards,
Fredrik Widlund
	
-----Original Message-----
From: 沈允中 [mailto:kimula at cht.com.tw]
Sent: 6 January 2011 15:27
To: Fredrik Widlund; Gordan Bobic; 'gluster-devel at nongnu.org'
Subject: RE: [Gluster-devel] Re-exporting NFS to vmware

Hi,
Thanks for your advice; it helps me a lot.
And now I know that it's impossible for nfsd to re-export a FUSE-mounted filesystem.
But the workflow of the gluster native nfsd is not smart, just as the white paper mentioned.
Gluster behaves inefficiently when not using the glusterfs protocol:
1. A client asks server A for a file, but server A doesn't have it.
2. Server A finds that server B has the file.
3. Server B transfers the file to server A.
4. Server A transfers the file to the client.

So step 3 is a wasteful, time-consuming extra hop.
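To illustrate what I mean, here is a toy model of the two read paths (the hashing below is just a stand-in for Gluster's real placement logic, and the server names are made up):

# Toy model of the two read paths. The hash is only a stand-in for
# Gluster's real file placement.
import hashlib

SERVERS = ["serverA", "serverB"]

def owner(path):
    # Pick the server that "stores" the file in this toy model.
    digest = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

def native_read(path):
    # glusterfs-protocol client: locates the owner and reads directly.
    return ["client <- " + owner(path)]

def nfs_read(path, mounted="serverA"):
    # NFS client: always talks to the server it mounted; if that server
    # is not the owner, the file crosses the cluster first (step 3).
    hops = []
    if owner(path) != mounted:
        hops.append(mounted + " <- " + owner(path))
    hops.append("client <- " + mounted)
    return hops

print("native:", native_read("vm-disk-001.vmdk"))
print("nfs:   ", nfs_read("vm-disk-001.vmdk"))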
How do you solve this problem when you have to use the NFS protocol?
Thanks in advance.

Best Regards,
Sylar Shen
________________________________________
Hi,

The Linux kernel nfsd does work for re-exporting a FUSE-mounted gluster filesystem. The gluster native nfsd is clearly better/faster than unfsd, but in benchmarks we've done, both the gluster and the kernel NFS servers have performance bottlenecks that limit throughput to around 500 MB/s with many concurrent sessions, even though the storage backend supports a much higher load.

Kind regards,
Fredrik Widlund


-----Original Message-----
From: gluster-devel-bounces+fredrik.widlund=qbrick.com at nongnu.org [mailto:gluster-devel-bounces+fredrik.widlund=qbrick.com at nongnu.org] On behalf of Gordan Bobic
Sent: 6 January 2011 13:15
To: 'gluster-devel at nongnu.org'
Subject: Re: [Gluster-devel] Re-exporting NFS to vmware

沈允中 wrote:
> Hi,
> Thanks for the advice.
> My problem is just like you said.
> But is there an alternative way to solve my problem?
> Because vmware really doesn't support the glusterfs protocol for mounting.
> I know that Gluster.com may publish their VMStor product to improve this.
> However, to tell the truth, I don't want to spend money.......:p
>
> If the problem cannot be solved now, does anyone know of other file systems similar to Gluster that I can mount via the NFS protocol without losing performance?
> Thanks in advance.
>
>
> Best Regards,
> Sylar Shen
> ________________________________________
>
>
> Sylar wrote:
>> Hi All:
>>
>> I wanted to use GlusterFS as shared storage for vmware.
>>
>> But the NFS protocol had poor performance as the system scaled up
>> (I have 20 servers in my GlusterFS cluster).
>>
>> So I figured out a possible approach when I saw the Ceph wiki:
>>
>> http://ceph.newdream.net/wiki/Re-exporting_NFS
>>
>>
>>
>> I think I can add a middle-tier converter between vmware and GlusterFS.
>>
>> It can serve vmware over NFS and mount GlusterFS via the glusterfs protocol.
>>
>> Here is the architecture I thought.......
>>
>> And then I ran into a problem. The middle tier connects to
>> GlusterFS fine via the glusterfs protocol.
>>
>> But errors happen when vmware connects to the middle tier via the NFS
>> protocol.
>>
>> vmware cannot mount the middle tier via NFS the first time.
>>
>> Even if vmware can mount the middle tier via NFS, it cannot see the data
>> in GlusterFS.
>>
>> It can only see the data (directories) local to the middle tier.
>>
>>
>>
>> Does anyone have the same problem as I do?
>>
>> How do you solve this thorny problem?
>
> Are you saying you are mounting GlusterFS on an interim node and then
> re-exporting that via NFS? What are you using for the NFS export? Last I
> checked, kernel nfsd didn't work with FUSE-based file systems, so you'd
> have to use something like unfsd (user-space) instead. You may, however,
> find that if you do that, the extra performance hit from unfsd will undo
> most of the speed-up you are hoping to achieve.

If the only problem you have is providing an NFS share to the client,
then you could either use unfsd (google it, I'm sure you'll find it), or
use GlusterFS's NFS interface, which is supposed to be more efficient
than unfsd. Both of these were discussed here a while back; check the
archives.
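
For instance, something along these lines should get you the built-in NFS interface (the volume name and hostname are placeholders, and I'm assuming a 3.1-era glusterd, so check the options against your version):

#!/usr/bin/env python3
# Rough sketch: enable a volume's built-in NFS server and mount it from
# a Linux client. Volume name, hostname and mount path are placeholders.
import subprocess

VOLUME = "testvol"          # placeholder volume name
GLUSTER_HOST = "gluster1"   # any server in the trusted pool

# Make sure the built-in NFS server is enabled for the volume
# (it is on by default; this just re-asserts it).
subprocess.run(["gluster", "volume", "set", VOLUME, "nfs.disable", "off"],
               check=True)

# Mount the volume over NFSv3/TCP from a client.
subprocess.run(["mount", "-t", "nfs", "-o", "vers=3,tcp",
                f"{GLUSTER_HOST}:/{VOLUME}", "/mnt/vmstore"],
               check=True)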

Gordan

_______________________________________________
Gluster-devel mailing list
Gluster-devel at nongnu.org
http://lists.nongnu.org/mailman/listinfo/gluster-devel

