[Gluster-users] iSCSI and Gluster

Carlos Capriotti capriotti.carlos at gmail.com
Wed Mar 5 23:25:46 UTC 2014


As explained before, it is currently NFS, not iSCSI.
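
(For reference, the ESXi hosts mount the Gluster NFS export roughly like
this; the host and volume name are taken from the volume info below, and
the datastore label is just illustrative:)

esxcli storage nfs add -H 10.0.1.25 -s /stdata -v gluster-stdata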

Here is a sample of my nfs.log. I have tons of this:

[2014-03-05 23:09:47.293822] D [nfs3-helpers.c:3514:nfs3_log_readdir_call] 0-nfs-nfsv3: XID: 27dce0a, READDIRPLUS: args: FH: exportid 27566f19-3945-4fda-bbea-3d3b1b29a32f, gfid 00000000-0000-0000-0000-000000000001, dircount: 1008, maxcount: 8064
[2014-03-05 23:09:47.294285] D [nfs3-helpers.c:3480:nfs3_log_readdirp_res] 0-nfs-nfsv3: XID: 27dce0a, READDIRPLUS: NFS: 0(Call completed successfully.), POSIX: 117(Structure needs cleaning), dircount: 1008, maxcount: 8064, cverf: 30240636, is_eof: 0
[2014-03-05 23:09:47.294522] D [nfs3-helpers.c:3514:nfs3_log_readdir_call] 0-nfs-nfsv3: XID: 27dce0b, READDIRPLUS: args: FH: exportid 27566f19-3945-4fda-bbea-3d3b1b29a32f, gfid 00000000-0000-0000-0000-000000000001, dircount: 1008, maxcount: 8064
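
The interesting bit is the POSIX error 117 ("Structure needs cleaning",
EUCLEAN) in the READDIRPLUS reply. That errno usually points at corruption
in an underlying brick filesystem, so it may be worth checking the bricks
directly. A minimal sketch, assuming ext4 bricks (device names are
illustrative):

dmesg | grep -iE 'ext4|xfs'   # look for filesystem corruption messages
umount /stripe0               # take the brick offline before checking
e2fsck -n /dev/sdb1           # read-only check; use xfs_repair -n for XFS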



And a sample from one of the brick logs:

[2014-03-05 23:21:42.469118] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: READDIRP scheduled as fast fop
[2014-03-05 23:21:42.469403] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: FSTAT scheduled as fast fop
[2014-03-05 23:21:42.470167] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: READDIRP scheduled as fast fop
[2014-03-05 23:21:42.470757] D [io-threads.c:325:iot_schedule] 0-stdata-io-threads: FSTAT scheduled as fast fop

volume definition:

Volume Name: stdata
Type: Stripe
Volume ID: 27566f19-3945-4fda-bbea-3d3b1b29a32f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.0.1.25:/stripe0
Brick2: 10.0.1.25:/stripe1
Options Reconfigured:
diagnostics.client-log-level: DEBUG
diagnostics.brick-log-level: DEBUG
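
(Those two debug options were set with commands along these lines, and can
be reverted the same way once the troubleshooting is done:)

gluster volume set stdata diagnostics.client-log-level DEBUG
gluster volume set stdata diagnostics.brick-log-level DEBUG
# back to the default log level afterwards:
gluster volume reset stdata diagnostics.client-log-level
gluster volume reset stdata diagnostics.brick-log-level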

If there is anything else I can provide to help troubleshoot this volume
on ESXi, just let me know.
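
On the iSCSI idea from the thread below: the usual approach is a
file-backed LUN on a FUSE mount of the Gluster volume, exported with a
target daemon such as tgt. A rough, untested sketch (all names, paths and
sizes are illustrative):

mount -t glusterfs 10.0.1.25:/stdata /mnt/stdata
# 100 GB sparse backing file for the LUN
dd if=/dev/zero of=/mnt/stdata/esxi-lun0.img bs=1M count=0 seek=102400
# define the target, attach the backing file as LUN 1, allow all initiators
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2014-03.local.test:stdata.lun0
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /mnt/stdata/esxi-lun0.img
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

ESXi's software iSCSI initiator could then discover the target and put
VMFS on the LUN, sidestepping the NFS-on-striped-volume problem entirely.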

KR,

Carlos.




On Wed, Mar 5, 2014 at 6:35 PM, Anand Avati <avati at gluster.org> wrote:
>
> Can you please post some logs (the logs from the client that is exporting
> iSCSI)? It is hard to diagnose issues without logs.
>
> thanks,
> Avati
>
>
> On Wed, Mar 5, 2014 at 9:28 AM, Carlos Capriotti
> <capriotti.carlos at gmail.com> wrote:
>>
>> Hi all. Again.
>>
>> I am still fighting that "VMware ESXi cannot use striped Gluster
>> volumes" issue, and a couple of crazy ideas have come to mind.
>>
>> One of them is using iSCSI WITH Gluster, having ESXi connect via iSCSI.
>>
>> My experience with iSCSI is limited to a couple of FreeNAS test
>> installs, and some tuning on FreeNAS and ESXi to implement
>> multipathing, but nothing dead serious.
>>
>> I remember that, after creating and formatting a volume (zvol), space
>> was then allocated to iSCSI. That makes some sense, since iSCSI is a
>> block device: only once it is available does the operating system
>> actually use it. But my memory of it is a bit foggy.
>>
>> I am trying to work around the present limitation in Gluster, which
>> refuses to talk to ESXi over a striped volume.
>>
>> So, here is the question: does anyone here use Gluster with iSCSI?
>>
>> Would anyone care to comment on the performance of this kind of
>> solution, its pros and cons?
>>
>> Thanks.