[Gluster-users] GlusterFS First Time
Atul Yadav
atulyadavtech at gmail.com
Sat Feb 13 09:22:13 UTC 2016
Thanks for the reply.
As per your guidance, the GlusterFS information is given below:
Volume Name: share
Type: Replicate
Volume ID: bd545058-0fd9-40d2-828b-7e60a4bae53c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: master1:/data/brick1/share
Brick2: master2:/data/brick2/share
Options Reconfigured:
performance.readdir-ahead: on
cluster.self-heal-daemon: enable
network.ping-timeout: 25
cluster.eager-lock: on
[root@master1 ~]# gluster pool list
UUID Hostname State
5a479842-8ee9-4160-b8c6-0802d633b80f master2 Connected
5bbfaa4a-e7c5-46dd-9f5b-0a44f1a583e8 localhost Connected
Host information is given below:
192.168.10.103 master1.local master1 #Fixed
192.168.10.104 master2.local master2 #Fixed
Test cases are given below:
Case 1
While 20 MB files are being written continuously on the client side, one of
the GlusterFS servers (master1) is powered off.
Impact
Client I/O operations are held for 25 to 30 seconds, after which they
continue normally.
Case 2
The failed server is powered back on while I/O operations are in progress
on the client side.
Impact
Client I/O operations are held for 25 to 30 seconds, after which they
continue normally.
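The 25 to 30 second hold seems to match the network.ping-timeout of 25
seconds set on the volume: the client waits that long before marking the
unresponsive brick as down and continuing with the surviving replica. If a
shorter pause is acceptable, the timeout can presumably be lowered, for
example (illustrative value only):

[root@master1 ~]# gluster volume set share network.ping-timeout 10
[root@master1 ~]# gluster volume info share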
Result:
There is no I/O loss during the failure event, but the used space reported
on the two servers differs.
Master1
  Size:   /dev/mapper/brick1-brick1  19912704  508320  19404384  3%  /data/brick1
  Inodes: /dev/mapper/brick1-brick1   9961472    1556   9959916  1%  /data/brick1
Master2
  Size:   /dev/mapper/brick2-brick2  19912704  522608  19390096  3%  /data/brick2
  Inodes: /dev/mapper/brick2-brick2   9961472    1556   9959916  1%  /data/brick2
Client
  Size:   master1.local:/share       19912704  522624  19390080  3%  /media
  Inodes: master1.local:/share        9961472    1556   9959916  1%  /media
(Columns: total, used, available, use%, mount point; Size rows are in 1K
blocks, Inodes rows are inode counts.)
How can we match the size of data on both the servers, or is this normal
behavior? And will there be any impact on data integrity?
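Would checking the self-heal status be the right way to confirm that both
bricks are in sync before comparing the sizes? For example (assuming the
volume name share as above):

[root@master1 ~]# gluster volume heal share info
[root@master1 ~]# gluster volume heal share info split-brain

And, if entries are still pending, trigger a heal manually with:

[root@master1 ~]# gluster volume heal share full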
Thank You
Atul Yadav
09980066464
On Fri, Feb 12, 2016 at 1:21 AM, Gmail <b.s.mikhael at gmail.com> wrote:
> Find my answers inline.
>
> — Bishoy
>
> On Feb 11, 2016, at 11:42 AM, Atul Yadav <atulyadavtech at gmail.com> wrote:
>
> Hi Team,
>
>
> I am totally new to GlusterFS and am evaluating it for my requirement.
>
> I need your valuable input on achieving the requirements below:
> File locking
>
> Gluster uses DLM for locking.
>
> Performance
>
> It depends on your workload (small files vs. big files, etc.), the number
> of drives, and the kind of volume you create.
> I suggest you start with just a Distributed Replicated volume and from
> that point you can plan for the hardware and software configuration.
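> For illustration only, a 2x2 distributed-replicate layout would be created
> along these lines (hostnames and brick paths here are placeholders, two
> bricks per server):
>
> gluster volume create testvol replica 2 \
>     server1:/data/brick1/testvol server2:/data/brick1/testvol \
>     server1:/data/brick2/testvol server2:/data/brick2/testvol
> gluster volume start testvol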
>
> High Availability
>
> I suggest replicating the bricks across the two nodes, as erasure coding
> with two nodes and a single drive on each one will not be of any benefit.
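> With one brick per server, a plain replica 2 volume is the straightforward
> choice, for example (volume name and brick paths are just an example):
>
> gluster volume create share replica 2 master1:/data/brick1/share master2:/data/brick2/share
> gluster volume start share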
>
>
>
> Existing infra details are given below:
> CentOS 6.6
> glusterfs-server-3.7.8-1.el6.x86_64
> glusterfs-client-xlators-3.7.8-1.el6.x86_64
> 2 GlusterFS servers, each with independent 6 TB storage
> 24 GlusterFS clients.
> Brick replication
>
>
> Thank You
> Atul Yadav
>
>