[Bugs] [Bug 1406569] Element missing for arbiter bricks in XML volume status details output

bugzilla at redhat.com
Thu Jan 5 12:44:46 UTC 2017


https://bugzilla.redhat.com/show_bug.cgi?id=1406569

Giuseppe Ragusa <giuseppe.ragusa at hotmail.com> changed:

           What    |Removed                                  |Added
----------------------------------------------------------------------------
              Flags|needinfo?(giuseppe.ragusa at hotmail.com) |



--- Comment #3 from Giuseppe Ragusa <giuseppe.ragusa at hotmail.com> ---
Hi,
I checked /var/log/glusterfs/etc-glusterfs-glusterd.vol.log and you're right:

[2017-01-05 12:21:52.343032] E [MSGID: 106301]
[glusterd-syncop.c:1281:gd_stage_op_phase] 0-management: Staging of operation
'Volume Status' failed on localhost : No brick details in volume home

while the command itself returned this:

[root at shockley tmp]# gluster volume status home detail --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>home</volName>
        <nodeCount>9</nodeCount>
        <node>
          <hostname>read.gluster.private</hostname>
          <path>/srv/glusterfs/disk0/home_brick</path>
          <peerid>f1be76be-dec9-46be-98cb-a89c65aebde9</peerid>
          <status>1</status>
          <port>49152</port>
          <ports>
            <tcp>49152</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2773</pid>
          <sizeTotal>3767015563264</sizeTotal>
          <sizeFree>3045302603776</sizeFree>
          <device>/dev/sdb4</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>hall.gluster.private</hostname>
          <path>/srv/glusterfs/disk0/home_brick</path>
          <peerid>e391505d-372f-4148-9d3f-7dbdb8ad0366</peerid>
          <status>1</status>
          <port>49152</port>
          <ports>
            <tcp>49152</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>5052</pid>
          <sizeTotal>3767015563264</sizeTotal>
          <sizeFree>3045302202368</sizeFree>
          <device>/dev/sdb4</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>shockley.gluster.private</hostname>
          <path>/srv/glusterfs/disk0/home_arbiter_brick</path>
          <peerid>3075fdea-4bb6-4fad-94b3-b09b13d7d6a7</peerid>
          <status>1</status>
          <port>49152</port>
          <ports>
            <tcp>49152</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>10557</pid>
          <sizeTotal>1767605006336</sizeTotal>
          <sizeFree>1764696260608</sizeFree>
          <device>/dev/sda4</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>read.gluster.private</hostname>
          <path>/srv/glusterfs/disk1/home_brick</path>
          <peerid>f1be76be-dec9-46be-98cb-a89c65aebde9</peerid>
          <status>1</status>
          <port>49153</port>
          <ports>
            <tcp>49153</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2763</pid>
          <sizeTotal>3767015563264</sizeTotal>
          <sizeFree>3059324719104</sizeFree>
          <device>/dev/sdc4</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>hall.gluster.private</hostname>
          <path>/srv/glusterfs/disk1/home_brick</path>
          <peerid>e391505d-372f-4148-9d3f-7dbdb8ad0366</peerid>
          <status>1</status>
          <port>49153</port>
          <ports>
            <tcp>49153</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>5058</pid>
          <sizeTotal>3767015563264</sizeTotal>
          <sizeFree>3059324432384</sizeFree>
          <device>/dev/sdc4</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>shockley.gluster.private</hostname>
          <path>/srv/glusterfs/disk1/home_arbiter_brick</path>
          <peerid>3075fdea-4bb6-4fad-94b3-b09b13d7d6a7</peerid>
          <status>1</status>
          <port>49153</port>
          <ports>
            <tcp>49153</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>10568</pid>
          <sizeTotal>1767605006336</sizeTotal>
          <sizeFree>1766170935296</sizeFree>
          <device>/dev/sdb4</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>read.gluster.private</hostname>
          <path>/srv/glusterfs/disk2/home_brick</path>
          <peerid>f1be76be-dec9-46be-98cb-a89c65aebde9</peerid>
          <status>1</status>
          <port>49171</port>
          <ports>
            <tcp>49171</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>2768</pid>
          <sizeTotal>3998831407104</sizeTotal>
          <sizeFree>3233375506432</sizeFree>
          <device>/dev/sda2</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>hall.gluster.private</hostname>
          <path>/srv/glusterfs/disk2/home_brick</path>
          <peerid>e391505d-372f-4148-9d3f-7dbdb8ad0366</peerid>
          <status>1</status>
          <port>49171</port>
          <ports>
            <tcp>49171</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>5064</pid>
          <sizeTotal>3998831407104</sizeTotal>
          <sizeFree>3233376612352</sizeFree>
          <device>/dev/sda2</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,seclabel,relatime,attr2,inode64,noquota</mntOptions>
          <fsName>xfs</fsName>
        </node>
        <node>
          <hostname>shockley.gluster.private</hostname>
          <path>/srv/glusterfs/disk2/home_arbiter_brick</path>
          <peerid>3075fdea-4bb6-4fad-94b3-b09b13d7d6a7</peerid>
          <status>1</status>
          <port>49171</port>
          <ports>
            <tcp>49171</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>10579</pid>
          <sizeTotal>1767605006336</sizeTotal>
          <sizeFree>1764696260608</sizeFree>
          <blockSize>4096</blockSize>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
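
(As an aside, a quick way to spot which <node> entries are missing the
filesystem details is an XPath query; this is just a sketch, assuming the
XML above is saved as home-status.xml and xmllint from libxml2 is available:

  xmllint --xpath '//node[not(fsName)]/path/text()' home-status.xml

For the output above that should print only
/srv/glusterfs/disk2/home_arbiter_brick, the one node that lacks the
<device>, <mntOptions> and <fsName> elements.)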


Now that I think about it, I suspect there is one trick in my setup that may
not be interpreted correctly; let me explain (it is a bit involved, sorry ;-) )

Since it is well known that the arbiter does not need as much disk space as
the "full replica" nodes, I used smaller disks on the arbiter node (all of my
arbiter bricks are confined to a single node, which is why I call it the
"arbiter node"). Then, whenever I needed more storage, I kept adding disks to
the "full replica" nodes but not to the arbiter node, since its existing disks
are already more than enough for the arbiter bricks.

Note also that I use a separate mount point, /srv/glusterfs/diskN, for each
new disk that I add, and then on each disk I create an individual brick
subdirectory per GlusterFS volume: /srv/glusterfs/diskN/volname_brick or
/srv/glusterfs/diskN/volname_arbiter_brick.
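
In shell terms, each new disk on a full replica node gets prepared roughly
like this (just a sketch with placeholder device/mount names; the matching
fstab entry is omitted):

  mkfs.xfs /dev/sdXN                       # format the new partition
  mkdir -p /srv/glusterfs/diskN            # one mount point per disk
  mount /dev/sdXN /srv/glusterfs/diskN     # plus a corresponding fstab entry
  mkdir /srv/glusterfs/diskN/home_brick    # one brick subdir per volume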

So far so good, but I also like to have a consistent layout on all nodes
(arbiter or full replica), so on the arbiter node I used a higher-level
symlink to fake the presence of the additional disk. Here is the actual XFS
filesystem view of my arbiter bricks:

[root at shockley tmp]# tree -L 3 /srv/
/srv/
├── ctdb
│   └── lockfile
└── glusterfs
    ├── disk0
    │   ├── ctdb_arbiter_brick
    │   ├── disk2
    │   ├── enginedomain_arbiter_brick
    │   ├── exportdomain_arbiter_brick
    │   ├── home_arbiter_brick
    │   ├── isodomain_arbiter_brick
    │   ├── share_arbiter_brick
    │   ├── software_arbiter_brick
    │   ├── src_arbiter_brick
    │   ├── tmp_arbiter_brick
    │   └── vmdomain_arbiter_brick
    ├── disk1
    │   ├── ctdb_arbiter_brick
    │   ├── enginedomain_arbiter_brick
    │   ├── exportdomain_arbiter_brick
    │   ├── home_arbiter_brick
    │   ├── isodomain_arbiter_brick
    │   ├── share_arbiter_brick
    │   ├── software_arbiter_brick
    │   ├── src_arbiter_brick
    │   ├── tmp_arbiter_brick
    │   └── vmdomain_arbiter_brick
    └── disk2 -> disk0/disk2

26 directories, 1 file
[root at shockley tmp]# tree -L 1 /srv/glusterfs/disk0/disk2
/srv/glusterfs/disk0/disk2
├── enginedomain_arbiter_brick
├── exportdomain_arbiter_brick
├── home_arbiter_brick
├── share_arbiter_brick
├── software_arbiter_brick
├── src_arbiter_brick
└── vmdomain_arbiter_brick

7 directories, 0 files
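
The symlink itself was created more or less like this (reconstructed here
just for illustration; the relative target matches what tree shows above):

  mkdir /srv/glusterfs/disk0/disk2
  ln -s disk0/disk2 /srv/glusterfs/disk2

so a brick path like /srv/glusterfs/disk2/home_arbiter_brick actually
resolves to /srv/glusterfs/disk0/disk2/home_arbiter_brick on the disk0
filesystem, which readlink -f /srv/glusterfs/disk2/home_arbiter_brick
would show.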

At this point you can guess that I added the bricks by specifying the "fake"
(symlinked) path, for the sake of uniformity.
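
For the disk2 replica set that means something along these lines (again a
sketch, not the literal command from my history; the paths match the volume
status output above):

  gluster volume add-brick home \
      read.gluster.private:/srv/glusterfs/disk2/home_brick \
      hall.gluster.private:/srv/glusterfs/disk2/home_brick \
      shockley.gluster.private:/srv/glusterfs/disk2/home_arbiter_brick

i.e. on shockley the brick path goes through the /srv/glusterfs/disk2 symlink
rather than the real /srv/glusterfs/disk0/disk2 directory.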

I thought that this would not cause issues, but it seems I was wrong :-(

Is this kind of setup really unsupported?


