[Gluster-users] Help me please with replication

Виктор Вислобоков corochoone at gmail.com
Thu May 14 10:26:21 UTC 2009


Hello All.

I have run into a problem with replication.
I have two servers (192.168.0.62 and 192.168.0.37) and I want to build one
replicated volume out of them.
I reviewed the documentation and set up the following:

---glusterfsd.vol---
volume posix
  type storage/posix
  option directory /var/share
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  option auth.addr.brick-ns.allow *
  subvolumes brick
end-volume
-----------------------
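
For completeness, I start the server process on each machine roughly like
this (the volfile path is just where I happen to keep it):

  glusterfsd -f /etc/glusterfs/glusterfsd.vol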

and

---glusterfs.vol---
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.62
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 192.168.0.37
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 128KB
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
--------------
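
On the client I mount the volume roughly like this (the mount point is just
an example):

  glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs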

The documentation says that the replicate translator works like RAID1, but
that doesn't seem to be true!
If both servers are up, everything works fine. But if the second server goes
down (for example, because of a lost network connection), I run into a problem.
If I delete a file from the first server while the second one is down, then
when the second server comes back up, the deleted file is recreated on the
first server!
As you can see, this is not RAID1 behaviour.
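
To be clear, this is roughly the sequence I go through (I work through the
client mount point; the file name and paths are only examples):

  # with both servers up, create a file on the mount
  touch /mnt/glusterfs/test.txt    # appears in /var/share on both servers
  # stop glusterfsd on 192.168.0.37 (or disconnect its network)
  rm /mnt/glusterfs/test.txt       # disappears from 192.168.0.62
  # start glusterfsd on 192.168.0.37 again and list the mount
  ls -l /mnt/glusterfs             # test.txt is back on 192.168.0.62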
Please help me. Can GlusterFS work as a true RAID1 or not?

With best wishes,
Victor