[Gluster-users] CentOS Storage SIG Repo for 3.8 is behind 3 versions (3.8.5). Is it still being used and can I help?

Pavel Szalbot pavel.szalbot at gmail.com
Wed Feb 15 13:53:26 UTC 2017


Hi, I tested it with 3.8.8 on the client (CentOS) and server (Ubuntu), and
everything is OK now.

-ps

On Wed, Feb 15, 2017 at 11:49 AM, Pavel Szalbot <pavel.szalbot at gmail.com>
wrote:

> Hi Daryl,
>
> I must have missed your reply; I only found it while reading about 3.8.9
> and searching the gluster-users archive.
>
> I will test the same setup with Gluster 3.8.8, i.e. libvirt
> 2.0.0-10.el7_3.4 and glusterfs 3.8.8-1.el7 on the client, and Gluster
> 3.8.8 on the servers (Ubuntu), and let you know.
>
> This is the libvirt log for an instance that used the gluster storage
> backend (libvirt 2.0.0, gluster client 3.8.5 and later 3.8.7; probably
> 3.8.5 on the servers, not sure); a sketch of the corresponding disk
> definition follows the log:
>
> [2017-01-03 17:10:58.155566] I [MSGID: 104045] [glfs-master.c:91:notify]
> 0-gfapi: New graph 6e6f6465-342d-6d69-6372-6f312e707267 (0) coming up
> [2017-01-03 17:10:58.155615] I [MSGID: 114020] [client.c:2356:notify]
> 0-gv_openstack_0-client-6: parent translators are ready, attempting connect
> on transport
> [2017-01-03 17:10:58.186043] I [MSGID: 114020] [client.c:2356:notify]
> 0-gv_openstack_0-client-7: parent translators are ready, attempting connect
> on transport
> [2017-01-03 17:10:58.186518] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
> 0-gv_openstack_0-client-6: changing port to 49156 (from 0)
> [2017-01-03 17:10:58.215411] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
> 0-gv_openstack_0-client-7: changing port to 49153 (from 0)
> [2017-01-03 17:10:58.243706] I [MSGID: 114057] [client-handshake.c:1446:
> select_server_supported_programs] 0-gv_openstack_0-client-6: Using
> Program GlusterFS 3.3, Num (1298437), Version (330)
> [2017-01-03 17:10:58.244215] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk]
> 0-gv_openstack_0-client-6: Connected to gv_openstack_0-client-6, attached
> to remote volume '/export/gfs_0/gv_openstack_0_brick'.
> [2017-01-03 17:10:58.244235] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk]
> 0-gv_openstack_0-client-6: Server and Client lk-version numbers are not
> same, reopening the fds
> [2017-01-03 17:10:58.244318] I [MSGID: 108005]
> [afr-common.c:4301:afr_notify] 0-gv_openstack_0-replicate-0: Subvolume
> 'gv_openstack_0-client-6' came back up; going online.
> [2017-01-03 17:10:58.244437] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk]
> 0-gv_openstack_0-client-6: Server lk version = 1
> [2017-01-03 17:10:58.246940] I [MSGID: 114057] [client-handshake.c:1446:
> select_server_supported_programs] 0-gv_openstack_0-client-7: Using
> Program GlusterFS 3.3, Num (1298437), Version (330)
> [2017-01-03 17:10:58.247252] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk]
> 0-gv_openstack_0-client-7: Connected to gv_openstack_0-client-7, attached
> to remote volume '/export/gfs_0/gv_openstack_0_brick'.
> [2017-01-03 17:10:58.247273] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk]
> 0-gv_openstack_0-client-7: Server and Client lk-version numbers are not
> same, reopening the fds
> [2017-01-03 17:10:58.257855] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk]
> 0-gv_openstack_0-client-7: Server lk version = 1
> [2017-01-03 17:10:58.259641] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol]
> 0-gv_openstack_0: switched to graph 6e6f6465-342d-6d69-6372-6f312e707267
> (0)
> [2017-01-03 17:10:58.439897] I [MSGID: 104045] [glfs-master.c:91:notify]
> 0-gfapi: New graph 6e6f6465-342d-6d69-6372-6f312e707267 (0) coming up
> [2017-01-03 17:10:58.439929] I [MSGID: 114020] [client.c:2356:notify]
> 0-gv_openstack_0-client-6: parent translators are ready, attempting connect
> on transport
> [2017-01-03 17:10:58.519082] I [MSGID: 114020] [client.c:2356:notify]
> 0-gv_openstack_0-client-7: parent translators are ready, attempting connect
> on transport
> [2017-01-03 17:10:58.519527] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
> 0-gv_openstack_0-client-6: changing port to 49156 (from 0)
> [2017-01-03 17:10:58.550482] I [MSGID: 114057] [client-handshake.c:1446:
> select_server_supported_programs] 0-gv_openstack_0-client-6: Using
> Program GlusterFS 3.3, Num (1298437), Version (330)
> [2017-01-03 17:10:58.550997] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk]
> 0-gv_openstack_0-client-6: Connected to gv_openstack_0-client-6, attached
> to remote volume '/export/gfs_0/gv_openstack_0_brick'.
> [2017-01-03 17:10:58.551021] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk]
> 0-gv_openstack_0-client-6: Server and Client lk-version numbers are not
> same, reopening the fds
> [2017-01-03 17:10:58.551089] I [MSGID: 108005]
> [afr-common.c:4301:afr_notify] 0-gv_openstack_0-replicate-0: Subvolume
> 'gv_openstack_0-client-6' came back up; going online.
> [2017-01-03 17:10:58.551199] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk]
> 0-gv_openstack_0-client-6: Server lk version = 1
> [2017-01-03 17:10:58.554413] I [rpc-clnt.c:1947:rpc_clnt_reconfig]
> 0-gv_openstack_0-client-7: changing port to 49153 (from 0)
> [2017-01-03 17:10:58.600956] I [MSGID: 114057] [client-handshake.c:1446:
> select_server_supported_programs] 0-gv_openstack_0-client-7: Using
> Program GlusterFS 3.3, Num (1298437), Version (330)
> [2017-01-03 17:10:58.601276] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk]
> 0-gv_openstack_0-client-7: Connected to gv_openstack_0-client-7, attached
> to remote volume '/export/gfs_0/gv_openstack_0_brick'.
> [2017-01-03 17:10:58.601293] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk]
> 0-gv_openstack_0-client-7: Server and Client lk-version numbers are not
> same, reopening the fds
> [2017-01-03 17:10:58.616249] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk]
> 0-gv_openstack_0-client-7: Server lk version = 1
> [2017-01-03 17:10:58.617781] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol]
> 0-gv_openstack_0: switched to graph 6e6f6465-342d-6d69-6372-6f312e707267
> (0)
> warning: host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
> warning: host doesn't support requested feature: CPUID.01H:EDX.acpi [bit
> 22]
> warning: host doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
> warning: host doesn't support requested feature: CPUID.01H:EDX.tm [bit 29]
> warning: host doesn't support requested feature: CPUID.01H:EDX.pbe [bit 31]
> warning: host doesn't support requested feature: CPUID.01H:ECX.dtes64 [bit
> 2]
> warning: host doesn't support requested feature: CPUID.01H:ECX.monitor
> [bit 3]
> warning: host doesn't support requested feature: CPUID.01H:ECX.ds_cpl [bit
> 4]
> warning: host doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
> warning: host doesn't support requested feature: CPUID.01H:ECX.smx [bit 6]
> warning: host doesn't support requested feature: CPUID.01H:ECX.est [bit 7]
> warning: host doesn't support requested feature: CPUID.01H:ECX.tm2 [bit 8]
> warning: host doesn't support requested feature: CPUID.01H:ECX.xtpr [bit
> 14]
> warning: host doesn't support requested feature: CPUID.01H:ECX.pdcm [bit
> 15]
> warning: host doesn't support requested feature: CPUID.01H:ECX.dca [bit 18]
> warning: host doesn't support requested feature: CPUID.01H:ECX.osxsave
> [bit 27]
> warning: host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
> warning: host doesn't support requested feature: CPUID.01H:EDX.acpi [bit
> 22]
> warning: host doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
> warning: host doesn't support requested feature: CPUID.01H:EDX.tm [bit 29]
> warning: host doesn't support requested feature: CPUID.01H:EDX.pbe [bit 31]
> warning: host doesn't support requested feature: CPUID.01H:ECX.dtes64 [bit
> 2]
> warning: host doesn't support requested feature: CPUID.01H:ECX.monitor
> [bit 3]
> warning: host doesn't support requested feature: CPUID.01H:ECX.ds_cpl [bit
> 4]
> warning: host doesn't support requested feature: CPUID.01H:ECX.vmx [bit 5]
> warning: host doesn't support requested feature: CPUID.01H:ECX.smx [bit 6]
> warning: host doesn't support requested feature: CPUID.01H:ECX.est [bit 7]
> warning: host doesn't support requested feature: CPUID.01H:ECX.tm2 [bit 8]
> warning: host doesn't support requested feature: CPUID.01H:ECX.xtpr [bit
> 14]
> warning: host doesn't support requested feature: CPUID.01H:ECX.pdcm [bit
> 15]
> warning: host doesn't support requested feature: CPUID.01H:ECX.dca [bit 18]
> warning: host doesn't support requested feature: CPUID.01H:ECX.osxsave
> [bit 27]
> 2017-01-04T12:21:14.293630Z qemu-kvm: terminating on signal 15 from pid 1
> [2017-01-04 12:21:14.396691] I [MSGID: 114021] [client.c:2365:notify]
> 0-gv_openstack_0-client-6: current graph is no longer active, destroying
> rpc_client
> [2017-01-04 12:21:14.396895] I [MSGID: 114021] [client.c:2365:notify]
> 0-gv_openstack_0-client-7: current graph is no longer active, destroying
> rpc_client
> [2017-01-04 12:21:14.396910] I [MSGID: 114018] [client.c:2280:client_rpc_notify]
> 0-gv_openstack_0-client-6: disconnected from gv_openstack_0-client-6.
> Client process will keep trying to connect to glusterd until brick's port
> is available
> [2017-01-04 12:21:14.396927] I [MSGID: 114018] [client.c:2280:client_rpc_notify]
> 0-gv_openstack_0-client-7: disconnected from gv_openstack_0-client-7.
> Client process will keep trying to connect to glusterd until brick's port
> is available
> [2017-01-04 12:21:14.396942] E [MSGID: 108006]
> [afr-common.c:4323:afr_notify] 0-gv_openstack_0-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
> [2017-01-04 12:21:14.397274] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gfapi: size=84 max=1 total=1
> [2017-01-04 12:21:14.397565] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gfapi: size=188 max=2 total=2
> [2017-01-04 12:21:14.397816] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gfapi: size=140 max=2 total=79
> [2017-01-04 12:21:14.397993] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-client-6: size=1324 max=64 total=70716
> [2017-01-04 12:21:14.398002] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-client-7: size=1324 max=64 total=49991
> [2017-01-04 12:21:14.398010] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-replicate-0: size=10580 max=464 total=48110
> [2017-01-04 12:21:14.398277] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-dht: size=1148 max=0 total=0
> [2017-01-04 12:21:14.398376] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-dht: size=3380 max=233 total=37020
> [2017-01-04 12:21:14.398583] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-readdir-ahead: size=60 max=0 total=0
> [2017-01-04 12:21:14.398591] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-io-cache: size=68 max=0 total=0
> [2017-01-04 12:21:14.398636] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-io-cache: size=252 max=64 total=11093
> [2017-01-04 12:21:14.398664] I [io-stats.c:3747:fini] 0-gv_openstack_0:
> io-stats translator unloaded
> [2017-01-04 12:21:14.398854] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker]
> 0-epoll: Exited thread with index 2
> [2017-01-04 12:21:14.398861] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker]
> 0-epoll: Exited thread with index 1
> [2017-01-04 12:21:15.240813] I [MSGID: 114021] [client.c:2365:notify]
> 0-gv_openstack_0-client-6: current graph is no longer active, destroying
> rpc_client
> [2017-01-04 12:21:15.241016] I [MSGID: 114021] [client.c:2365:notify]
> 0-gv_openstack_0-client-7: current graph is no longer active, destroying
> rpc_client
> [2017-01-04 12:21:15.241061] I [MSGID: 114018] [client.c:2280:client_rpc_notify]
> 0-gv_openstack_0-client-6: disconnected from gv_openstack_0-client-6.
> Client process will keep trying to connect to glusterd until brick's port
> is available
> [2017-01-04 12:21:15.241089] I [MSGID: 114018] [client.c:2280:client_rpc_notify]
> 0-gv_openstack_0-client-7: disconnected from gv_openstack_0-client-7.
> Client process will keep trying to connect to glusterd until brick's port
> is available
> [2017-01-04 12:21:15.241108] E [MSGID: 108006]
> [afr-common.c:4323:afr_notify] 0-gv_openstack_0-replicate-0: All subvolumes
> are down. Going offline until atleast one of them comes back up.
> [2017-01-04 12:21:15.241511] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gfapi: size=84 max=1 total=1
> [2017-01-04 12:21:15.241906] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gfapi: size=188 max=2 total=2
> [2017-01-04 12:21:15.242243] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gfapi: size=140 max=2 total=155
> [2017-01-04 12:21:15.242264] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-client-6: size=1324 max=21 total=610
> [2017-01-04 12:21:15.242282] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-client-7: size=1324 max=21 total=1646
> [2017-01-04 12:21:15.242303] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-replicate-0: size=10580 max=40 total=1619
> [2017-01-04 12:21:15.242838] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-dht: size=1148 max=0 total=0
> [2017-01-04 12:21:15.243016] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-dht: size=3380 max=20 total=1482
> [2017-01-04 12:21:15.243288] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-readdir-ahead: size=60 max=0 total=0
> [2017-01-04 12:21:15.243303] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-io-cache: size=68 max=0 total=0
> [2017-01-04 12:21:15.243461] I [MSGID: 101053] [mem-pool.c:641:mem_pool_destroy]
> 0-gv_openstack_0-io-cache: size=252 max=3 total=140
> [2017-01-04 12:21:15.243483] I [io-stats.c:3747:fini] 0-gv_openstack_0:
> io-stats translator unloaded
> [2017-01-04 12:21:15.243603] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker]
> 0-epoll: Exited thread with index 1
> [2017-01-04 12:21:15.243631] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker]
> 0-epoll: Exited thread with index 2
> 2017-01-04 12:21:16.363+0000: shutting down
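>
> For context, a disk served over gfapi is defined in the libvirt domain
> XML roughly as below. This is a minimal sketch: the volume name matches
> the log above, but the image file name and host are made up:
>
>   <disk type='network' device='disk'>
>     <driver name='qemu' type='raw' cache='none'/>
>     <!-- protocol='gluster' makes qemu open the image via libgfapi,
>          with no FUSE mount involved -->
>     <source protocol='gluster' name='gv_openstack_0/instance-0001.img'>
>       <host name='gluster-server.example.com' port='24007'/>
>     </source>
>     <target dev='vda' bus='virtio'/>
>   </disk>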
>
> -ps
>
> On Sun, Jan 15, 2017 at 8:16 PM, Niels de Vos <ndevos at redhat.com> wrote:
>
>> On Fri, Jan 13, 2017 at 11:01:38AM +0100, Pavel Szalbot wrote:
>> > Hi, you can install 3.8.7 from centos-gluster38-test using:
>> >
>> > yum --enablerepo=centos-gluster38-test install glusterfs
>> >
>> > I am not sure how QA works for the CentOS Storage SIG, but 3.8.7 works
>> > the same as 3.8.5 for me - libvirt gfapi is unfortunately broken, no
>> > other problems detected.
>>
>> Could you explain in a little more detail how this is broken? It would
>> also be good to report a bug:
>>
>>   https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS&version=3.8&component=gfapi
>>
>> If the bug contains steps that we can follow to reproduce the problem you
>> are facing, it will be easier to investigate the cause and fix it.
>>
>> Thanks,
>> Niels
>>
>>
>> >
>> > Btw, 3.9 is a short-term maintenance release
>> > (https://lists.centos.org/pipermail/centos-devel/2016-September/015197.html).
>> >
>> >
>> > -ps
>> >
>> > On Fri, Jan 13, 2017 at 1:18 AM, Daryl lee <daryllee at ece.ucsb.edu>
>> wrote:
>> >
>> > > Hey Gluster Community,
>> > >
>> > > According to the community packages list, I got the impression that
>> > > 3.8 would keep being released to the CentOS Storage SIG repo, but
>> > > updates seem to have stopped at 3.8.5, and 3.9 is still missing
>> > > altogether. However, 3.7 is still being updated and is at 3.7.8, so I
>> > > am confused about why the other two versions have stopped.
>> > >
>> > > I did some looking through past posts to this list and found a
>> > > conversation about 3.9 on the CentOS repo last year, but it looks like
>> > > it's still not up yet, possibly because of a lack of community
>> > > involvement in testing and reporting back to whoever the maintainer is
>> > > (and we don't know who that is yet). I might be in a position to help,
>> > > since I have a test environment that mirrors my production setup and I
>> > > would be testing the packages anyway, so I might as well do the
>> > > community some good. At this point I know to run "yum install
>> > > --enablerepo=centos-gluster38-test glusterfs-server", but I'm not sure
>> > > who to tell whether it works, or what kind of information they are
>> > > looking for (a sketch of what I'd run is below). If someone could give
>> > > me a little guidance that would be awesome, especially if it saves me
>> > > from having to switch to manually downloading packages.
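>> > >
>> > > For concreteness, this is the kind of smoke test I would run after
>> > > installing from the test repo. It's a rough sketch that assumes an
>> > > existing volume named "gv0" and a healthy peer cluster:
>> > >
>> > > yum install --enablerepo=centos-gluster38-test glusterfs-server
>> > > glusterfs --version           # confirm the package actually updated
>> > > systemctl restart glusterd    # pick up the new management daemon
>> > > gluster peer status           # all peers should stay "Connected"
>> > > gluster volume status gv0     # bricks and self-heal daemons online
>> > > gluster volume heal gv0 info  # no heal entries stuck afterwards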
>> > >
>> > > I guess the basic question is: do we expect releases to resume for 3.8
>> > > on the CentOS Storage SIG repo, or should I plan to move to manual
>> > > patching for 3.8? Additionally, if the person who does the releases to
>> > > the CentOS Storage SIG is waiting for someone to tell them it looks
>> > > fine, who should I contact to do so?
>> > >
>> > > Thanks!
>> > >
>> > >
>> > >
>> > > Daryl
>> > >
>>
>

