[Gluster-users] Gluster issue

Penza Kenneth at MITA kenneth.penza at gov.mt
Wed Mar 12 08:51:34 UTC 2014


Good morning,

 

I am trying out GlusterFS to mirror a file system across two nodes running as VMware guests. After installing the latest GlusterFS version (3.4.2-1) on both servers and the client, I am seeing a strange issue: when extracting the Linux kernel source onto the mount, some symbolic links are replaced by empty regular files. With the read-hash-mode option set to 1, the number of affected symlinks dropped from 3 to 2, so the issue is reduced but clearly not solved. A further observation: if I shut down one node, everything works fine.
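For reference, this is roughly how I check whether symlinks survive extraction. The sketch below runs against a local temporary directory so it is self-contained; to test the real setup, point DEST at the GlusterFS mount instead.

```shell
# Minimal symlink-preservation check. DEST is a local temp dir here;
# substitute the GlusterFS mount point to reproduce the actual problem.
set -e
SRC=$(mktemp -d)
DEST=$(mktemp -d)
TAR="$SRC.tar"

# Build a tiny archive with one regular file and one symlink,
# mimicking what the kernel tarball contains.
echo data > "$SRC/file.txt"
ln -s file.txt "$SRC/link.txt"
tar -C "$SRC" -cf "$TAR" file.txt link.txt

# Extract and verify the symlink survived as a symlink.
tar -C "$DEST" -xf "$TAR"
if [ -L "$DEST/link.txt" ]; then
    echo "symlink preserved"
else
    echo "symlink replaced by regular file"
fi
```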

 

Any pointers on how to solve this issue?
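Since the problem disappears with one node down, it looks like a replication (AFR) issue, so I have also been inspecting self-heal state and comparing the two bricks directly. A sketch of the checks, assuming the volume is named datavol (the path placeholder is hypothetical):

```shell
# Show entries pending self-heal on the replicated volume
gluster volume heal datavol info

# Compare a suspect entry directly on each brick
stat /data/csto1/<path-to-symlink>    # run on csto1
stat /data/csto2/<path-to-symlink>    # run on csto2
```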

 

 

Server config:

 

    volume datavol-posix

        type storage/posix

        option volume-id 3404b7dd-c4b3-47cc-8e7e-126e5ea5d867

        option directory /data/csto1

    end-volume

 

    volume datavol-access-control

        type features/access-control

        subvolumes datavol-posix

    end-volume

 

    volume datavol-locks

        type features/locks

        subvolumes datavol-access-control

    end-volume

 

    volume datavol-io-threads

        type performance/io-threads

        option thread-count 20

        subvolumes datavol-locks

    end-volume

 

    volume datavol-index

        type features/index

        option index-base /data/csto1/.glusterfs/indices

        subvolumes datavol-io-threads

    end-volume

 

    volume datavol-marker

        type features/marker

        option quota off

        option xtime off

        option timestamp-file /var/lib/glusterd/vols/datavol/marker.tstamp

        option volume-uuid 3404b7dd-c4b3-47cc-8e7e-126e5ea5d867

        subvolumes datavol-index

    end-volume

 

    volume /data/csto1

        type debug/io-stats

        option count-fop-hits off

        option latency-measurement off

        subvolumes datavol-marker

    end-volume

 

    volume datavol-server

        type protocol/server

        option auth.addr./data/csto1.allow *

        option auth.login.b5d3cd7f-9b9b-437c-a1ec-b2a773c25354.password 004c5514-7ae1-4b80-acf5-a024d57facb3

        option auth.login./data/csto1.allow b5d3cd7f-9b9b-437c-a1ec-b2a773c25354

        option transport-type tcp

        subvolumes /data/csto1

    end-volume

 

 

Client config:

 

    volume datavol-client-0

        type protocol/client

        option transport-type tcp

        option remote-subvolume /data/csto1

        option remote-host csto1

    end-volume

 

    volume datavol-client-1

        type protocol/client

        option transport-type tcp

        option remote-subvolume /data/csto2

        option remote-host csto2

    end-volume

 

    volume datavol-replicate-0

        type cluster/replicate

        option read-hash-mode 1

        subvolumes datavol-client-0 datavol-client-1

    end-volume

 

    volume datavol-dht

        type cluster/distribute

        subvolumes datavol-replicate-0

    end-volume

 

    volume datavol-write-behind

        type performance/write-behind

        subvolumes datavol-dht

    end-volume

 

    volume datavol-read-ahead

        type performance/read-ahead

        subvolumes datavol-write-behind

    end-volume

 

    volume datavol-io-cache

        type performance/io-cache

        option cache-size 67108864

        subvolumes datavol-read-ahead

    end-volume

 

    volume datavol-quick-read

        type performance/quick-read

        option cache-size 67108864

        subvolumes datavol-io-cache

    end-volume

 

    volume datavol-open-behind

        type performance/open-behind

        subvolumes datavol-quick-read

    end-volume

 

    volume datavol-md-cache

        type performance/md-cache

        subvolumes datavol-open-behind

    end-volume

 

    volume datavol

        type debug/io-stats

        option count-fop-hits off

        option latency-measurement off

        subvolumes datavol-md-cache

    end-volume
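One thing I am considering, since the client stack layers several performance translators (write-behind, quick-read, open-behind, md-cache) above replicate: disabling them one at a time and re-running the kernel extraction after each change, to narrow down which layer is involved. Whether any of these is actually the culprit is only a guess; the option names are the standard gluster volume set knobs, again assuming the volume is named datavol:

```shell
# Disable one translator at a time, re-test extraction after each change
gluster volume set datavol performance.quick-read off
gluster volume set datavol performance.open-behind off
gluster volume set datavol performance.write-behind off

# Revert a setting to its default once tested
gluster volume reset datavol performance.quick-read
```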

 

 

Regards

Kenneth
