[Bugs] [Bug 1248123] New: writes to glusterfs folder are not synced to other nodes unless they are explicitly read from gluster mount
bugzilla at redhat.com
Wed Jul 29 16:05:26 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1248123
Bug ID: 1248123
Summary: writes to glusterfs folder are not synced to other
nodes unless they are explicitly read from gluster
mount
Product: GlusterFS
Version: 3.7.2
Component: replicate
Assignee: bugs at gluster.org
Reporter: cmptuomp3 at gmail.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem: Writes to a directory on a GlusterFS mount are not
synced to the other nodes unless the files are explicitly read back through
the GlusterFS mount.
For example, I can write 1000 files to a directory on the GlusterFS mount.
If I then ssh to another server and inspect the actual brick, the files are
not there. If I `cat` each file through the GlusterFS mount, the files
appear in the brick directory.
Aren't files supposed to be copied to every brick automatically? What if,
in the meantime, the server on which I performed the writes goes down; are
the writes lost?
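One way to check whether the missing copies are at least tracked as pending
self-heal (a hedged suggestion; the volume name is taken from the volume info
below) is AFR's heal info:

gluster volume heal ssd-gluster-data info

If the newly written files are listed, the brick that took the write still
holds them and the self-heal daemon should eventually copy them; if nothing
is listed, the replication side apparently never recorded the writes at all.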
Version-Release number of selected component (if applicable): 3.7.2-3.el7
How reproducible: very
Steps to Reproduce (see the shell transcript after this list):
1. touch /ssd_data/test.file on 10.0.1.3 (/ssd_data is the GlusterFS mount)
2. try to open /ssd_storage/test.file on 10.0.1.2; it is not found
(/ssd_storage is the brick mount)
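The same steps as a shell transcript (a sketch; the full brick path,
including the GLUSTER_DO_NOT_MODIFY directory, is taken from the volume
info below):

# on 10.0.1.3, writing through the GlusterFS mount
touch /ssd_data/test.file

# on 10.0.1.2, checking the brick directly
ls /ssd_storage/GLUSTER_DO_NOT_MODIFY/test.file   # No such file or directory

# on 10.0.1.3 again, reading the file through the mount
cat /ssd_data/test.file

# on 10.0.1.2, the brick copy now exists
ls /ssd_storage/GLUSTER_DO_NOT_MODIFY/test.file   # present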
Actual results:
The file is not found on the brick on 10.0.1.2.
Expected results:
The file is present on all three bricks.
Additional info:
Number of volumes: 2
Volume names: ssd-gluster-data, sata-gluster-data
Type of volumes: replica 3
Output of gluster volume info
gluster volume info
Volume Name: sata-gluster-data
Type: Replicate
Volume ID: dbb498cc-fa8f-4513-8d6c-1aa9a159b7cf
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.0.1.3:/sata_storage/GLUSTER_DO_NOT_MODIFY
Brick2: 10.0.1.2:/sata_storage/GLUSTER_DO_NOT_MODIFY
Brick3: 10.0.1.1:/sata_storage/GLUSTER_DO_NOT_MODIFY
Options Reconfigured:
nfs.disable: off
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 60
Volume Name: ssd-gluster-data
Type: Replicate
Volume ID: 967b2328-c5d3-4ea1-8cd0-8114796f3b50
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.0.1.3:/ssd_storage/GLUSTER_DO_NOT_MODIFY
Brick2: 10.0.1.2:/ssd_storage/GLUSTER_DO_NOT_MODIFY
Brick3: 10.0.1.1:/ssd_storage/GLUSTER_DO_NOT_MODIFY
Options Reconfigured:
nfs.disable: off
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 60
Output of gluster volume status
gluster volume status
Status of volume: sata-gluster-data
Gluster process                                      TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.3:/sata_storage/GLUSTER_DO_NOT_MODIFY   49153     0          Y       16358
Brick 10.0.1.2:/sata_storage/GLUSTER_DO_NOT_MODIFY   49155     0          Y       8691
Brick 10.0.1.1:/sata_storage/GLUSTER_DO_NOT_MODIFY   49154     0          Y       24495
NFS Server on localhost                              2049      0          Y       8705
Self-heal Daemon on localhost                        N/A       N/A        Y       8713
NFS Server on 10.0.1.1                               2049      0          Y       24509
Self-heal Daemon on 10.0.1.1                         N/A       N/A        Y       24517
NFS Server on 10.0.1.3                               2049      0          Y       30660
Self-heal Daemon on 10.0.1.3                         N/A       N/A        Y       30699
Task Status of Volume sata-gluster-data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: ssd-gluster-data
Gluster process                                      TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.0.1.3:/ssd_storage/GLUSTER_DO_NOT_MODIFY    49152     0          Y       16366
Brick 10.0.1.2:/ssd_storage/GLUSTER_DO_NOT_MODIFY    49152     0          Y       8698
Brick 10.0.1.1:/ssd_storage/GLUSTER_DO_NOT_MODIFY    49153     0          Y       24502
NFS Server on localhost                              2049      0          Y       8705
Self-heal Daemon on localhost                        N/A       N/A        Y       8713
NFS Server on 10.0.1.1                               2049      0          Y       24509
Self-heal Daemon on 10.0.1.1                         N/A       N/A        Y       24517
NFS Server on 10.0.1.3                               2049      0          Y       30660
Self-heal Daemon on 10.0.1.3                         N/A       N/A        Y       30699
Task Status of Volume ssd-gluster-data
------------------------------------------------------------------------------
There are no active volume tasks
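If the file does exist on the brick that took the write but not on the other
two, the AFR changelog xattrs on that brick copy should show whether pending
operations were recorded (a hedged debugging sketch; run against the brick
path, not the mount):

getfattr -d -m . -e hex /ssd_storage/GLUSTER_DO_NOT_MODIFY/test.file

Non-zero trusted.afr.ssd-gluster-data-client-* values mean the write was
marked as pending towards the other replicas and a heal should pick it up.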
fstab:
10.0.1.3:/ssd-gluster-data   /ssd_data   glusterfs  defaults,_netdev,backup-volfile-servers=10.0.1.1:10.0.1.2  0 0
10.0.1.3:/sata-gluster-data  /sata_data  glusterfs  defaults,_netdev,backup-volfile-servers=10.0.1.1:10.0.1.2  0 0
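For reference, the first fstab entry is equivalent to this manual mount
(same options, just invoked by hand):

mount -t glusterfs -o backup-volfile-servers=10.0.1.1:10.0.1.2 10.0.1.3:/ssd-gluster-data /ssd_data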
Client Information
OS Type: Linux
Mount type: fuse.glusterfs
OS Version: CentOS 7, x86_64