[Gluster-users] Glusterfs could not open spec file

Raghavendra G raghavendra at zresearch.com
Wed Jul 2 08:19:14 UTC 2008


I had sent this earlier from my Gmail ID, which is not subscribed to gluster-users.

On Wed, Jul 2, 2008 at 12:16 PM, Raghavendra G <raghavendra.hg at gmail.com>
wrote:

> Hi Rajasekhar,
> Please find comments inlined.
>
> On Fri, Jun 27, 2008 at 4:20 PM, rajasekhar gurram <
> rajasekhar.gurram at locuz.com> wrote:
>
>>  Dear Team,
>> I have installed and configured GlusterFS on one server and one client.
>> It worked fine once, but later it stopped working.
>> My configuration files are below.
>> server
>> [root at rhel2 ~]# cat /etc/glusterfs/glusterfs-server.vol
>> volume rhel2
>>   type storage/posix                   # POSIX FS translator
>>   option directory /opt        # Export this directory
>> end-volume
>>
>> volume rhel2
>>   type protocol/server
>>   option transport-type tcp/server     # For TCP/IP transport
>>   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>>   subvolumes export test
>>   option auth.ip.rhel2.allow * # Allow access to "brick" volume
>> end-volume
>>
> * Both xlators (protocol/server and storage/posix) are named rhel2. Each
> translator instance should be given a different name.
>
> * protocol/server lists "export" and "test" as subvolumes, but neither is
> defined in the specfile. The subvolumes line should name volumes defined
> earlier in the file; remove any you don't need.
>
> * The server should have "option client-volume-specfile
> <glusterfs-volume-specification-file>", or a client specfile should be
> present at <glusterfs-install-prefix>/etc/glusterfs-client.vol.
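>
> As a rough, untested sketch (assuming you still want to export /opt, that the
> client specfile stays at /etc/glusterfs/glusterfs-client.vol, and with the
> names "brick" and "server" picked arbitrarily), the server specfile could
> look something like:
>
>   volume brick
>     type storage/posix              # POSIX FS translator
>     option directory /opt           # Export this directory
>   end-volume
>
>   volume server
>     type protocol/server
>     option transport-type tcp/server
>     # the option name may be client-volume-filename or client-volume-specfile,
>     # depending on your glusterfs version
>     option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>     subvolumes brick
>     option auth.ip.brick.allow *    # Allow access to the "brick" volume
>   end-volume
>
> Note that the client's "option remote-subvolume" must then match the exported
> volume name ("brick" here) instead of rhel2.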
>
>>
>> [root at rhel2 ~]#
>>
>> client
>> [root at test ~]# cat /etc/glusterfs/glusterfs-client.vol
>> volume rhel2
>>   type protocol/client
>>   option transport-type tcp/client
>>   option remote-host 10.129.150.227
>>   option remote-subvolume rhel2
>> end-volume
>> [root at test ~]#
>> problem:
>>
>> [root at test ~]# glusterfs --server 10.129.150.227 /mnt/glusterfs/ --volume-name rhel2
>> glusterfs: could not open specfile
>>
>
>
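> The "could not open specfile" error suggests the client could not obtain a
> client specfile. Since the specfile is already present on the client, you can
> point glusterfs at it directly instead of fetching it from the server.
> Assuming your build accepts a local specfile via -f/--spec-file (please
> verify with glusterfs --help), something like:
>
>   [root at test ~]# glusterfs -f /etc/glusterfs/glusterfs-client.vol /mnt/glusterfs/
>
> The --server/--volume-name form, on the other hand, requires the server side
> to be serving a matching client specfile (see the client-volume option above).
>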
>>
>> [root at test ~]#
>>
>> Apart from these, I have a few doubts:
>> 1) The website says that with GlusterFS there is no single point of failure,
>> but from a configuration point of view it looks like a server/client model,
>> so if the server fails the client cannot mount.
>>
> There is no single point of failure when GlusterFS is run in clustered mode.
> What this means is that there is no single metadata server whose failure
> renders the cluster non-operational (though currently unify has a limitation
> in the form of its namespace, which will be fixed in future releases).
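>
> Purely as an illustration (the second server at 10.129.150.228 is
> hypothetical, and the volume names are made up), a replicated client spec in
> the 1.3 series could look something like:
>
>   volume remote1
>     type protocol/client
>     option transport-type tcp/client
>     option remote-host 10.129.150.227
>     option remote-subvolume brick
>   end-volume
>
>   volume remote2
>     type protocol/client
>     option transport-type tcp/client
>     option remote-host 10.129.150.228   # hypothetical second server
>     option remote-subvolume brick
>   end-volume
>
>   volume mirror
>     type cluster/afr                    # replicate across both servers
>     subvolumes remote1 remote2
>   end-volume
>
> With a setup along these lines the mount can survive the failure of either
> server, since each one holds a full copy of the data.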
>
>>
>> 2) On the server, how can I confirm whether the directory was actually
>> exported? In NFS we have the showmount command to check this.
>>
> There is currently no tool that lists the directories exported by the server,
> but this can be found out by checking the server's logfiles.
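> For example, if you started the server with a logfile (the exact option and
> default path depend on your build; /var/log/glusterfsd.log is only an
> assumption here), the end of the log should show which volumes were
> initialised at startup:
>
>   [root at rhel2 ~]# tail -n 50 /var/log/glusterfsd.log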
>
>>
>>
>> Kindly let me know if I am doing something wrong; it will be helpful for me
>> to proceed with further checks.
>>
>> Thanks and Regards
>> G.Rajasekhar,
>> System Engineer.
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>>




-- 
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Prey, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous

