[Gluster-users] I can't write files whose form is ".<FILE>.[hash]"
Taehwa Lee
alghost.lee at gmail.com
Fri Nov 25 05:41:53 UTC 2016
Hi, Niels de Vos
I have been working with GlusterFS at my company for about a year.
Recently we tested a Distributed-Replicate volume over NFS, using rsync for the test.
This is my volume configuration:
----------------------------------------------------------------------
Volume Name: rep4x2
Type: Distributed-Replicate
Volume ID: 7376a4ad-c50b-40d8-8fe1-ab84111ece26
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.10.1.151:/volume/rep4x2
Brick2: 10.10.1.152:/volume/rep4x2
Brick3: 10.10.1.153:/volume/rep4x2
Brick4: 10.10.1.154:/volume/rep4x2
Brick5: 10.10.1.155:/volume/rep4x2
Brick6: 10.10.1.156:/volume/rep4x2
Brick7: 10.10.1.157:/volume/rep4x2
Brick8: 10.10.1.158:/volume/rep4x2
Options Reconfigured:
nfs.disable: false
server.root-squash: off
nfs.volume-access: read-write
nfs.rpc-auth-allow: *
nfs.ports-insecure: off
server.allow-insecure: on
diagnostics.brick-sys-log-level: WARNING
diagnostics.client-sys-log-level: WARNING
network.ping-timeout: 5
performance.readdir-ahead: on
----------------------------------------------------------------------
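For context, a 4 x 2 layout like this would normally be created with replica 2 and the eight bricks listed in pairs, roughly like the sketch below (the actual create command is not in the original mail):

# gluster volume create rep4x2 replica 2 \
      10.10.1.151:/volume/rep4x2 10.10.1.152:/volume/rep4x2 \
      10.10.1.153:/volume/rep4x2 10.10.1.154:/volume/rep4x2 \
      10.10.1.155:/volume/rep4x2 10.10.1.156:/volume/rep4x2 \
      10.10.1.157:/volume/rep4x2 10.10.1.158:/volume/rep4x2
# gluster volume start rep4x2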
In addition, I have a node that acts as a client, and I did the following on it. (10.10.2.151 is another address that reaches the Brick1 node.)
Mount status on the client:
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext3 5.0G 3.1G 1.7G 66% /
/dev/sda2 ext3 5.0G 992M 3.7G 21% /var
none tmpfs 5.9G 120K 5.9G 1% /dev/shm
/dev/mapper/LD-PlugDISK_DB xfs 50G 2.7G 48G 6% /PlugDISK_DB
/dev/mapper/LD-LV xfs 8.2T 4.2T 4.0T 52% /LV
10.10.2.151:/rep4x2 nfs 2.3T 35G 2.3T 2% /mnt/ac2-8node
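For reference, the NFS mount shown above is usually created with something like the following (a sketch; the exact mount options used are not in the original mail, and Gluster's built-in NFS server speaks NFSv3):

# mount -t nfs -o vers=3 10.10.2.151:/rep4x2 /mnt/ac2-8node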
# rsync -a /source /mnt/ac2-8node
When I ran the command above, rsync created and wrote temporary files of the form “.<FILENAME>.[hash]”, for example:
original: /PATH/CentOS_release_6.5-i686.tar
temporary file: /PATH/.CentOS_release_6.5-i686.tar.3UUsQ6
Some of these files raised I/O errors while being written.
However, when I did the same thing on a path mounted via FUSE, it worked properly.
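As a debugging aid, rsync can be told not to create those hidden “.<FILE>.[hash]” names in the destination. This is only a workaround sketch, not a fix, but if either form avoids the I/O errors it would point at the temporary-file names themselves (the .rsync-tmp directory below is just an example):

# rsync -a --inplace /source /mnt/ac2-8node
# rsync -a --temp-dir=/mnt/ac2-8node/.rsync-tmp /source /mnt/ac2-8node

The first writes updates directly into the destination files; the second keeps the temporaries but puts them in a dedicated directory.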
/var/log/glusterfs/nfs.log: https://gist.github.com/Alghost/86a4b6c9f26c18a8e3af26628571a2df
The volume status is below:
Status of volume: rep4x2
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.10.1.151:/volume/rep4x2 49153 0 Y 2401
Brick 10.10.1.152:/volume/rep4x2 49153 0 Y 16285
Brick 10.10.1.153:/volume/rep4x2 49153 0 Y 24049
Brick 10.10.1.154:/volume/rep4x2 49152 0 Y 6470
Brick 10.10.1.155:/volume/rep4x2 49152 0 Y 31469
Brick 10.10.1.156:/volume/rep4x2 49152 0 Y 25676
Brick 10.10.1.157:/volume/rep4x2 49152 0 Y 20197
Brick 10.10.1.158:/volume/rep4x2 49152 0 Y 12305
NFS Server on localhost 2049 0 Y 2447
Self-heal Daemon on localhost N/A N/A Y 2457
NFS Server on 10.10.1.153 2049 0 Y 24185
Self-heal Daemon on 10.10.1.153 N/A N/A Y 24193
NFS Server on 10.10.1.158 2049 0 Y 12325
Self-heal Daemon on 10.10.1.158 N/A N/A Y 12355
NFS Server on 10.10.1.152 2049 0 Y 16321
Self-heal Daemon on 10.10.1.152 N/A N/A Y 16329
NFS Server on 10.10.1.155 2049 0 Y 31490
Self-heal Daemon on 10.10.1.155 N/A N/A Y 31598
NFS Server on 10.10.1.156 2049 0 Y 25696
Self-heal Daemon on 10.10.1.156 N/A N/A Y 25704
NFS Server on 10.10.1.154 2049 0 Y 6490
Self-heal Daemon on 10.10.1.154 N/A N/A Y 6520
NFS Server on 10.10.1.157 2049 0 Y 20360
Self-heal Daemon on 10.10.1.157 N/A N/A Y 20369
Task Status of Volume rep4x2
------------------------------------------------------------------------------
There are no active volume tasks
My guess is that this is related to the DHT trick mentioned at the Gluster Summit:
: https://twitter.com/raghavendra_t/status/784310769491914752
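If it helps, one quick check for that theory is to look on the bricks for the failing names: DHT linkto files show up as zero-byte entries with mode ---------T and a trusted.glusterfs.dht.linkto xattr. A sketch, run on a brick node (the path below reuses the placeholder example from above):

# ls -l /volume/rep4x2/PATH/
# getfattr -d -m . -e hex /volume/rep4x2/PATH/.CentOS_release_6.5-i686.tar.3UUsQ6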
If you want me to test something or provide more information,
you can email me anytime!
Regards,
-----------------------------------------
이 태 화
Taehwa Lee
Gluesys Co.,Ltd.
alghost.lee at gmail.com
010-3420-6114, 070-8785-6591
-----------------------------------------