[Bugs] [Bug 1223185] New: [SELinux] [BVT]: Selinux throws AVC errors while running DHT automation on Rhel6.6

bugzilla at redhat.com bugzilla at redhat.com
Wed May 20 03:56:36 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1223185

            Bug ID: 1223185
           Summary: [SELinux] [BVT]: Selinux throws AVC errors while
                    running DHT automation on Rhel6.6
           Product: GlusterFS
           Version: mainline
         Component: glusterd
          Keywords: Triaged
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: anekkunt at redhat.com
                CC: akhakhar at redhat.com, anekkunt at redhat.com,
                    bugs at gluster.org, gluster-bugs at redhat.com,
                    kaushal at redhat.com, kparthas at redhat.com,
                    lvrabec at redhat.com, mgrepl at redhat.com,
                    mmalik at redhat.com, pprakash at redhat.com,
                    rcyriac at redhat.com, sgraf at redhat.com,
                    storage-qa-internal at redhat.com
        Depends On: 1222869



+++ This bug was initially created as a clone of Bug #1222869 +++

+++ This bug was initially created as a clone of Bug #1210404 +++

Description of problem:
SELinux throws AVC errors while running the DHT automated test cases

Info: Searching AVC errors produced since 1428537291.96 (Thu Apr  9 05:24:51
2015)
Searching logs...
Running '/usr/bin/env LC_ALL=en_US.UTF-8 /sbin/ausearch -m AVC -m USER_AVC -m
SELINUX_ERR -ts 04/09/2015 05:24:51 < /dev/null
>/mnt/testarea/tmp.rhts-db-submit-result.03LjWL 2>&1'
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.822:73): arch=c000003e syscall=42 success=no
exit=-111 a0=c a1=7fff14694310 a2=6e a3=7fb7a2c45673 items=0 ppid=22549
pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0
fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd"
subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.822:73): avc:  denied  { write } for  pid=22550
comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731
scontext=unconfined_u:system_r:glusterd_t:s0
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.823:74): arch=c000003e syscall=87 success=yes
exit=0 a0=7fff14694312 a1=7fff14694310 a2=6f a3=7fb7a2c45673 items=0 ppid=22549
pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0
fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd"
subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.823:74): avc:  denied  { unlink } for  pid=22550
comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731
scontext=unconfined_u:system_r:glusterd_t:s0
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
Fail: AVC messages found.
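
A common way to summarize such denials on the affected machine is to feed them
to audit2allow (a hedged suggestion; audit2allow ships with
policycoreutils-python, which may need to be installed first):

# ausearch -m AVC -m USER_AVC -ts recent | audit2allow -w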

Version-Release number of selected component (if applicable):
Upstream glusterfs 3.7 on a RHEL 7.1 server

How reproducible:
Always when running BVT with upstream glusterfs 3.7

Steps to Reproduce:
1. Not a manual process
2. Run the DHT BVT on a RHEL 7.1 server with upstream glusterfs 3.7 packages

Actual results:
SELinux AVC errors are found

Expected results:
No AVC errors


Additional info:
The BVT test result link, which has all the AVC logs:
https://beaker.engineering.redhat.com/jobs/925560

--- Additional comment from Apeksha on 2015-04-10 02:20:39 EDT ---

sosreports and AVC logs are attached at the following location:

http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1210404/

--- Additional comment from Niels de Vos on 2015-04-14 08:28:49 EDT ---

Apeksha, could you please write a public comment about this problem? This is a
Gluster Community bug and members of the community can not see any details
here.

Thanks!

--- Additional comment from Apeksha on 2015-04-15 02:05:41 EDT ---

Description of problem:
SELinux throws AVC errors while running the DHT automated test cases

Info: Searching AVC errors produced since 1428537291.96 (Thu Apr  9 05:24:51
2015)
Searching logs...
Running '/usr/bin/env LC_ALL=en_US.UTF-8 /sbin/ausearch -m AVC -m USER_AVC -m
SELINUX_ERR -ts 04/09/2015 05:24:51 < /dev/null
>/mnt/testarea/tmp.rhts-db-submit-result.03LjWL 2>&1'
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.822:73): arch=c000003e syscall=42 success=no
exit=-111 a0=c a1=7fff14694310 a2=6e a3=7fb7a2c45673 items=0 ppid=22549
pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0
fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd"
subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.822:73): avc:  denied  { write } for  pid=22550
comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731
scontext=unconfined_u:system_r:glusterd_t:s0
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
----
time->Thu Apr  9 05:48:02 2015
type=SYSCALL msg=audit(1428538682.823:74): arch=c000003e syscall=87 success=yes
exit=0 a0=7fff14694312 a1=7fff14694310 a2=6f a3=7fb7a2c45673 items=0 ppid=22549
pid=22550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0
fsgid=0 tty=(none) ses=4294967295 comm="glusterd" exe="/usr/sbin/glusterfsd"
subj=unconfined_u:system_r:glusterd_t:s0 key=(null)
type=AVC msg=audit(1428538682.823:74): avc:  denied  { unlink } for  pid=22550
comm="glusterd" name="glusterd.socket" dev=dm-0 ino=924731
scontext=unconfined_u:system_r:glusterd_t:s0
tcontext=unconfined_u:object_r:var_run_t:s0 tclass=sock_file
Fail: AVC messages found.

Version-Release number of selected component (if applicable):
Upstream glusterfs 3.7 on a RHEL 7.1 server

How reproducible:
Always when running BVT with upstream glusterfs 3.7

Steps to Reproduce:
1. Not a manual process
2. Run the DHT BVT on a RHEL 7.1 server with upstream glusterfs 3.7 packages

Actual results:
SELinux AVC errors are found

Expected results:
No AVC errors

--- Additional comment from Niels de Vos on 2015-04-15 12:02:09 EDT ---

This shows that glusterd is deleting a glusterd.sock that does not have the
right SELinux context:

avc:  denied  { unlink } for  pid=22550 comm="glusterd" name="glusterd.socket"
    scontext=unconfined_u:system_r:glusterd_t:s0 
    tcontext=unconfined_u:object_r:var_run_t:s0

My guess is that glusterd creates a /var/run/glusterd.sock socket (from the rpm
scriptlet, like bug 1162125?), and that deleting that socket fails. This might
be a selinux-policy issue, but maybe the path of the socket changed upstream?
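
A quick, hedged way to confirm the suspected mismatch on an affected node (the
socket name is taken from the AVC above, which reports it as glusterd.socket
under /var/run):

# ls -Z /var/run/glusterd.socket
# matchpathcon /var/run/glusterd.socket

If the on-disk context (var_run_t) differs from what matchpathcon expects
(glusterd_var_run_t), a relabel is needed, as suggested in the comments below.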

--- Additional comment from Stanislav Graf on 2015-04-20 09:16:22 EDT ---

CCing mmalik, mgrepl and lvrabec

--- Additional comment from Milos Malik on 2015-04-20 11:30:24 EDT ---

Does the following command help?

# restorecon -Rv /var/run/gluster*

The correct label for the socket is:

# matchpathcon /var/run/glusterd.sock
/var/run/glusterd.sock    system_u:object_r:glusterd_var_run_t:s0
#

--- Additional comment from Stanislav Graf on 2015-04-21 15:16:21 EDT ---

Apeksha, can you check? Feel free to ping me on IRC to coordinate if needed.

--- Additional comment from Stanislav Graf on 2015-04-27 16:20:55 EDT ---



--- Additional comment from Stanislav Graf on 2015-04-27 16:22:13 EDT ---



--- Additional comment from Stanislav Graf on 2015-04-27 16:25:29 EDT ---

(In reply to Milos Malik from comment #6)

Original job
------------
https://beaker.engineering.redhat.com/jobs/925560
AVCs in attachment 1019427

Job with restorecon
-------------------
https://beaker.engineering.redhat.com/jobs/939894
AVCs in attachment 1019428

I'll ping you tomorrow to sync-up.

--- Additional comment from Stanislav Graf on 2015-04-29 07:04:05 EDT ---

(In reply to Stanislav Graf from comment #10)

Rerunning BVT with some changes; I will gather AVCs once it's done and post the
results here.

--- Additional comment from Stanislav Graf on 2015-04-30 04:15:20 EDT ---



--- Additional comment from Stanislav Graf on 2015-04-30 04:18:08 EDT ---

(In reply to Stanislav Graf from comment #10)

https://beaker.engineering.redhat.com/jobs/943702

After installation we called:
# semanage fcontext -a -f '' -t bin_t '/var/lib/glusterd/hooks/(.*)/(.*)/(.*)/.*\.sh'
# restorecon -Rv /var/run/gluster* (comment 6)
# restorecon -Rv /var/lib/glusterd

AVCs in attachment 1020434

--- Additional comment from Stanislav Graf on 2015-05-03 12:29:28 EDT ---



--- Additional comment from Stanislav Graf on 2015-05-03 12:30:40 EDT ---

(In reply to Stanislav Graf from comment #13)

function selinux_workaround
{
    # make sure the semanage/audit2allow tooling is available
    yum list installed policycoreutils-python || yum -y install policycoreutils-python

    ls -Z /var/run/gluster*
    ls -Z /var/lib/glusterd

    # local policy module granting glusterd_t the extra permissions seen in the AVCs
cat > mypolicy.te <<_EOPOLICY_
policy_module(mypolicy, 1.0)
require {
  type glusterd_t;
  class capability { mknod sys_ptrace };
}
corenet_tcp_connect_portmap_port(glusterd_t)
files_manage_isid_type_blk_files(glusterd_t)
files_manage_isid_type_chr_files(glusterd_t)
samba_domtrans_smbd(glusterd_t)
samba_signal_smbd(glusterd_t)
allow glusterd_t glusterd_t:capability { mknod sys_ptrace };
fstools_domtrans(glusterd_t)
_EOPOLICY_

    # build and load the module
    make -f /usr/share/selinux/devel/Makefile
    semodule -i mypolicy.pp

    # label the hook scripts as executables
    semanage fcontext -a -f '' -t bin_t '/var/lib/glusterd/hooks/(.*)/(.*)/(.*)/.*\.sh'

    # reapply the file contexts defined by the policy
    restorecon -Rv /var/run/gluster*
    restorecon -Rv /var/lib/glusterd
    chcon -t fsadm_exec_t /usr/sbin/xfs_growfs

    matchpathcon /var/run/glusterd.sock
}

https://beaker.engineering.redhat.com/jobs/945703
AVCs in attachment 1021372

--- Additional comment from Miroslav Grepl on 2015-05-15 05:11:27 EDT ---

We really need to run

restorecon /var/run/glusterd.sock

where this socket is created for the first time. From the rpm scriptlet?

We are not able to get it working on RHEL6 without the filename transition rule
which we have in RHEL7.
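
A minimal sketch of the kind of scriptlet fragment being suggested here (the
%post section name and the existence guard are assumptions for illustration,
not the actual glusterfs.spec change):

%post server
# relabel the management socket if it already exists; ignore failures
if [ -x /sbin/restorecon ] && [ -e /var/run/glusterd.socket ]; then
    /sbin/restorecon /var/run/glusterd.socket || :
fi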

--- Additional comment from Kaushal on 2015-05-18 03:27:47 EDT ---

(In reply to Miroslav Grepl from comment #16)
> We really need to run
> 
> restorecon /var/run/glusterd.sock
> 
> wherethis socket is created for the first time. rpm scriptlet?
> 
> We are not able to get in working on RHEL6 without filename transition rule
> which we have in RHEL7.

GlusterD itself creates this file if it doesn't exist. glusterd will be run as
part of the rpm post-upgrade, so the file would get created then. Would this be
useful?

I was under the impression that if a path had a context defined in a loaded
policy, the kernel would ensure that the context was applied when the file was
created. From what I understand, the policy defines the context for a regex
path under /var/run/gluster*, but rhel-6.6 doesn't work with this correctly.
Have I understood this correctly? If so, wouldn't it be enough to add an entry
for the exact path in the policy?
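
For illustration, an exact-path entry of the kind asked about here could be
added locally like this (a sketch only; as the next comment explains, fcontext
entries are only applied when restorecon or setfiles runs, not automatically
when the file is created):

# semanage fcontext -a -t glusterd_var_run_t '/var/run/glusterd\.socket'
# restorecon -v /var/run/glusterd.socket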

--- Additional comment from Milos Malik on 2015-05-18 03:44:17 EDT ---

The policy says that any file, directory or socket created under /var/run by
any process running as glusterd_t will get the glusterd_var_run_t label.

# sesearch -s glusterd_t -t var_run_t -T
Found 3 semantic te rules:
   type_transition glusterd_t var_run_t : file glusterd_var_run_t; 
   type_transition glusterd_t var_run_t : dir glusterd_var_run_t; 
   type_transition glusterd_t var_run_t : sock_file glusterd_var_run_t; 

#

But glusterd runs as rpm_script_t when it's executed from the rpm scriptlet.

# sesearch -s rpm_script_t -t glusterd_exec_t -T

# sesearch -s rpm_t -t glusterd_exec_t -T

# 

You need to run restorecon or setfiles to apply a label based on fcontext
patterns.

# semanage fcontext -l | grep /var/run/gluster
/var/run/gluster(/.*)?     all files      system_u:object_r:glusterd_var_run_t:s0
/var/run/glusterd.*        regular file   system_u:object_r:glusterd_var_run_t:s0
/var/run/glusterd.*        socket         system_u:object_r:glusterd_var_run_t:s0
#
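
A quick way to check whether a given path currently carries the label these
patterns define (matchpathcon -V verifies the on-disk context against the
policy defaults; offered here as a hedged suggestion):

# matchpathcon -V /var/run/glusterd.socket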

--- Additional comment from Kaushal on 2015-05-18 06:10:44 EDT ---

So running a restorecon on /var/run/gluster* at the end of the post upgrade
scriptlet will solve this?

Also, just for my understanding, could you explain why is this a problem with
rhel-6 only and not rhel-7?

--- Additional comment from Milos Malik on 2015-05-18 06:45:15 EDT ---

Yes.

We can specify a filename inside transition rules in RHEL-7:

# sesearch -s rpm_script_t -t var_run_t -c sock_file -T

Found 3 named file transition filename_trans:
type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t "glusterd.socket";
type_transition rpm_script_t var_run_t : sock_file rpcbind_var_run_t "rpcbind.sock";
type_transition rpm_script_t var_run_t : sock_file docker_var_run_t "docker.sock";

#

But we cannot specify a filename inside transition rules in RHEL-6. If there
was a transition rule like the following in RHEL-6, all sockets in /var/run
created by any RPM scriptlet would have been labeled glusterd_var_run_t, which
is wrong, because there are various socket files inside /var/run that have no
relation to gluster.

   type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t;
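
For comparison, in RHEL-7 policy sources such a named transition is typically
written with the refpolicy filetrans_pattern interface (a sketch; whether the
actual glusterd policy uses this exact interface is an assumption):

filetrans_pattern(rpm_script_t, var_run_t, glusterd_var_run_t, sock_file, "glusterd.socket")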

--- Additional comment from Anand Nekkunti on 2015-05-19 02:59:50 EDT ---

(In reply to Milos Malik from comment #20)
> Yes.
> 
> We can specify a filename inside transition rules in RHEL-7:
> 
> # sesearch -s rpm_script_t -t var_run_t -c sock_file -T
> 
> Found 3 named file transition filename_trans:
> type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t
> "glusterd.socket"; 
> type_transition rpm_script_t var_run_t : sock_file rpcbind_var_run_t
> "rpcbind.sock"; 
> type_transition rpm_script_t var_run_t : sock_file docker_var_run_t
> "docker.sock"; 
> 
> #
> 
> But we cannot specify a filename inside transition rules in RHEL-6. If there
> was a transition rule like the following in RHEL-6, all sockets in /var/run
> created by any RPM scriptlet would have been labeled glusterd_var_run_t,
> which is wrong, because there are various socket file inside /var/run which
> have no relation to gluster.
> 
>    type_transition rpm_script_t var_run_t : sock_file glusterd_var_run_t;


I think the restorecon command will fail if I run it during the post-upgrade,
because the glusterd.socket file does not exist at that time; it is created
later by glusterd.

Can I create the glusterd.socket file and run restorecon during the
post-upgrade for RHEL6? Would that be correct?

--- Additional comment from Anand Nekkunti on 2015-05-19 05:51:25 EDT ---

I am running the restorecon -vR /var/run/glusterd* command during the
glusterfs post-upgrade; will that solve this problem?
I have sent a patch for that:
http://review.gluster.org/#/c/10815/1/glusterfs.spec.in

--- Additional comment from Anand Avati on 2015-05-19 06:37:25 EDT ---

REVIEW: http://review.gluster.org/10815 (Build: Restoring selinux context for
rhel6 during post run) posted (#3) for review on master by Anand Nekkunti
(anekkunt at redhat.com)


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1222869
[Bug 1222869] [SELinux] [BVT]: Selinux throws AVC errors while running DHT
automation on Rhel6.6
-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

