[Gluster-users] iscsi and distributed volume
Dan Lambright
dlambrig at redhat.com
Fri Apr 3 21:05:50 UTC 2015
----- Original Message -----
> From: "Niels de Vos" <ndevos at redhat.com>
> To: "Jon Heese" <jonheese at jonheese.com>
> Cc: "Dan Lambright" <dlambrig at redhat.com>, "Gluster-users at gluster.org List" <gluster-users at gluster.org>, "Humble
> Chirammal" <hchiramm at redhat.com>, "Lalatendu Mohanty" <lmohanty at redhat.com>
> Sent: Friday, April 3, 2015 4:20:05 AM
> Subject: Re: [Gluster-users] iscsi and distributed volume
>
> On Thu, Apr 02, 2015 at 12:08:00AM +0000, Jon Heese wrote:
> > Dan,
> >
> > I've read your blog post about this, but I've been unable to find a way
> > to install this "plugin" on CentOS 6 for use with tgtd.
> >
> > There appears to be a "scsi-target-utils-gluster" RPM out there that has
> > what appears to be a module that would accomplish this, but I can only
> > find this package for EL7-based OSes.
> >
> > Do I have to build the module myself for tgtd on CentOS 6? If so, do
> > you have instructions to do so? Thanks.
Here is what I do. I have not tried this on RHEL/CentOS 6.
Download the target daemon's source:
$ git clone https://github.com/fujita/tgt.git
Download the gluster development API:
$ sudo yum -y install glusterfs-api-devel
Set the GLFS_BD environment variable.
$ export GLFS_BD=1
Go into the directory and compile/install.
$ cd tgt; sudo make -j install
More instructions are in doc/README.glfs, and also here:
https://forge.gluster.org/gfapi-module-for-linux-target-driver-/gfapi-module-for-linux-target-driver-/blobs/master/doc/README.glfs
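For anyone trying this, here is a rough sketch of what a glfs-backed target definition can look like in /etc/tgt/targets.conf. The IQN, volume name (gvol), server name (gluster1), and image filename below are all placeholders; doc/README.glfs has the authoritative backing-store syntax:

```
# Hypothetical example -- target name, volume, host, and image path
# are placeholders; consult doc/README.glfs for the exact syntax.
<target iqn.2015-04.com.example:gluster-lun1>
    bs-type glfs
    backing-store gvol@gluster1:iscsi-store.img
</target>
```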
I have not done much with it in a year. It's good to hear of interest in gluster+iSCSI. Let us know your suggestions.
Dan
>
> This definitely sounds as if we should get it included in the CentOS
> Storage SIG repositories.
>
> http://wiki.centos.org/SpecialInterestGroup/Storage
>
> I am not sure yet how packages get added to the SIG; Lala and Humble on
> CC should be able to explain/help with that.
>
> For now, rebuilding your own package seems needed :-/ I would start with
> the EL7 version and build that on an EL6 system with glusterfs-api-devel
> installed.
>
> HTH,
> Niels
>
> >
> > Regards,
> > Jon Heese
> >
> > On 4/1/2015 4:21 PM, Dan Lambright wrote:
> > > Incidentally, for all you iSCSI-on-gluster fans: gluster has a "plugin"
> > > to LIO and the target daemon (tgt). The plugin lets the server
> > > send IO directly between the iSCSI server and the gluster process in
> > > user space (as opposed to routing it all through FUSE). It's a nice speed
> > > up, in case anyone is looking for a performance bump :)
> > >
> > > ----- Original Message -----
> > >> From: "Jon Heese" <jonheese at jonheese.com>
> > >> To: "Gluster-users at gluster.org List" <gluster-users at gluster.org>
> > >> Sent: Wednesday, April 1, 2015 3:20:41 PM
> > >> Subject: Re: [Gluster-users] iscsi and distributed volume
> > >>
> > >> Or use multipath I/O (assuming your iSCSI initiator OS supports it) to
> > >> mount
> > >> the iSCSI LUN on both nodes in an active/passive manner.
> > >>
> > >> I do this with tgtd directly on the Gluster nodes to serve up iSCSI
> > >> disks
> > >> from an image file sitting on a replicated volume to a VMware ESXi 5.5
> > >> cluster.
> > >>
> > >> If you go this route, be sure to configure the iSCSI initiator(s)
> > >> multipath
> > >> to be active/passive (or similar) as my testing with round-robin
> > >> produced
> > >> very poor performance and data corruption.
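> > >>
> > >> For reference, the active/passive behavior can be sketched in
> > >> /etc/multipath.conf roughly like this (a minimal sketch only; no
> > >> device- or vendor-specific sections are shown):
> > >>
> > >> ```
> > >> # Minimal sketch: keep one path active and fail over rather than
> > >> # round-robin across paths.
> > >> defaults {
> > >>     path_grouping_policy    failover
> > >>     failback                immediate
> > >> }
> > >> ```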
> > >>
> > >> Regards,
> > >> Jon Heese
> > >> ________________________________________
> > >> From: gluster-users-bounces at gluster.org
> > >> <gluster-users-bounces at gluster.org>
> > >> on behalf of Paul Robert Marino <prmarino1 at gmail.com>
> > >> Sent: Wednesday, April 01, 2015 2:59 PM
> > >> To: Dan Lambright
> > >> Cc: Gluster-users at gluster.org List
> > >> Subject: Re: [Gluster-users] iscsi and distributed volume
> > >>
> > >> You do realize you would have to put the iSCSI target disk image on
> > >> the mounted Gluster volume, not directly on the brick.
> > >> So as long as you have replication, your volume would remain accessible.
> > >> You cannot point the iSCSI process directly at the brick, or
> > >> replication and striping won't work properly.
> > >> That said you could consider using something like keepalived with a
> > >> monitoring script to handle a VIP for failover in case a node or some
> > >> of the underlying processes go down.
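> > >>
> > >> A minimal keepalived sketch of that idea -- the interface, VIP, and
> > >> health-check command below are placeholders:
> > >>
> > >> ```
> > >> # Hypothetical keepalived.conf fragment: float a VIP between nodes
> > >> # and drop it if the target daemon dies.
> > >> vrrp_script chk_tgtd {
> > >>     script "pidof tgtd"   # placeholder health check
> > >>     interval 2
> > >> }
> > >>
> > >> vrrp_instance VI_1 {
> > >>     state BACKUP
> > >>     interface eth0        # placeholder interface
> > >>     virtual_router_id 51
> > >>     priority 100
> > >>     virtual_ipaddress {
> > >>         192.0.2.10/24     # placeholder VIP
> > >>     }
> > >>     track_script {
> > >>         chk_tgtd
> > >>     }
> > >> }
> > >> ```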
> > >>
> > >>
> > >> On Wed, Apr 1, 2015 at 10:17 AM, Dan Lambright <dlambrig at redhat.com>
> > >> wrote:
> > >>>
> > >>>
> > >>> ----- Original Message -----
> > >>>> From: "Roman" <romeo.r at gmail.com>
> > >>>> To: gluster-users at gluster.org
> > >>>> Sent: Wednesday, April 1, 2015 4:38:50 AM
> > >>>> Subject: [Gluster-users] iscsi and distributed volume
> > >>>>
> > >>>> Hi devs, list!
> > >>>>
> > >>>> I've got a somewhat simple but at the same time pretty difficult
> > >>>> question. But I'm
> > >>>> running glusterfs in production and don't have any option to test
> > >>>> myself :(
> > >>>>
> > >>>> say I've got a distributed gluster volume of 2x350GB
> > >>>> I want to export an iSCSI target for an M$ server and I want it to be 600GB.
> > >>>> I understand that when I create a large file for the iSCSI target with
> > >>>> dd, it
> > >>>> will be distributed between two bricks. And here comes the question:
> > >>>>
> > >>>> What will happen when
> > >>>>
> > >>>> 1. one of the bricks goes down? Ok, simple - the target won't be accessible.
> > >>>> 2. would the data be available again when the brick comes back up? (i.e.
> > >>>> failure
> > >>>> due to network or power)
> > >>>>
> > >>>> Yes, we have a backup server and UPS and generator, as we are running
> > >>>> a DC, but
> > >>>> I'm just curious whether we will have to restore the data from backups
> > >>>> or it will
> > >>>> be available after the brick comes back up?
> > >>>
> > >>> What kind of gluster volume is it? I would hope it is replicated.
> > >>>
> > >>> Data within the file is not distributed between two bricks, unless your
> > >>> volume type is striped.
> > >>>
> > >>> Assuming it's replicated, if one brick went down, the other replica
> > >>> would
> > >>> continue to operate, so you would have availability.
> > >>>
> > >>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> --
> > >>>> Best regards,
> > >>>> Roman.
> > >>>>
> > >>>> _______________________________________________
> > >>>> Gluster-users mailing list
> > >>>> Gluster-users at gluster.org
> > >>>> http://www.gluster.org/mailman/listinfo/gluster-users
> > >>
> > >
>