[Gluster-users] A very special announcement from Gluster.org

Jeff White jaw171 at pitt.edu
Fri Jun 1 12:52:57 UTC 2012


I had the same thing happen to me on RHEL6 with /var being its own 
mount point.  All I had to do was copy /etc/glusterd to /var/lib/ as you 
did, run the remaining part of the RPM's scriptlet by hand, then rename my 
vol files back in place.

To get the RPM script: rpm -q --scripts glusterfs-server

After moving the dir by hand, run everything in that script other than 
the first if block.  Next, rename your vol files (move the .rpmsave ones 
back to their real names); you can list them with: find /var/lib/glusterd/ -name '*.rpmsave'
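That rename step can be sketched as a small shell loop.  This is only a sketch run against a scratch directory (the volume name "testvol" is made up for the demo); on an affected node you would point the find at /var/lib/glusterd/ instead:

```shell
#!/bin/sh
# Sketch of the .rpmsave rename step, run against a scratch directory
# for safety.  On a real node, replace "$dir" with /var/lib/glusterd/.
set -eu
dir=$(mktemp -d)

# Fake a vol file that the upgrade left behind as *.rpmsave.
# ("testvol" is a hypothetical volume name for this demo.)
mkdir -p "$dir/vols/testvol"
touch "$dir/vols/testvol/testvol-fuse.vol.rpmsave"

# For every *.rpmsave file, strip the suffix to restore the real name.
find "$dir" -name '*.rpmsave' | while read -r f; do
    mv "$f" "${f%.rpmsave}"
done

ls "$dir/vols/testvol"   # prints: testvol-fuse.vol
rm -rf "$dir"
```

Sanity-check the result with the same find command: it should print nothing once every .rpmsave copy has been renamed.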

Jeff White - Linux/Unix Systems Engineer
University of Pittsburgh - CSSD


On 06/01/2012 08:00 AM, David Coulson wrote:
> I experienced the following going from both 3.2.5 and 3.2.6 (using 
> 'official' gluster packages) on RHEL6.
>
> [root at rhesproddns02 ~]# rpm -Uvh glusterfs-*3.3.0*
> Preparing...                ########################################### [100%]
>    1:glusterfs              ########################################### [ 33%]
>    2:glusterfs-fuse         ########################################### [ 67%]
>    3:glusterfs-server       ########################################### [100%]
> mv: inter-device move failed: `/etc/glusterd' to `/var/lib/glusterd'; 
> unable to remove target: Is a directory
> glusterd: symbol lookup error: glusterd: undefined symbol: 
> xdr_gf_event_notify_rsp
> warning: %post(glusterfs-server-3.3.0-1.el6.x86_64) scriptlet failed, 
> exit status 127
>
> I copied /etc/glusterd/* to /var/lib/glusterd/ and it seems to work. 
> Is there some other issue I should expect to hit, or is the rpm just 
> broken in a weird way?
>
> On 5/31/12 2:55 PM, John Mark Walker wrote:
>> See this post - 
>> http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
>>
>> Will publish that on gluster.org very soon.
>>
>> -JM
>>
>>
>> ------------------------------------------------------------------------
>>
>>     Is there a migration guide from 3.2.5 to 3.3 available?
>>
>>     On 5/31/12 12:33 PM, John Mark Walker wrote:
>>
>>         Today, we’re announcing the next generation of GlusterFS
>>         <http://www.gluster.org/>, version 3.3. The release has been
>>         a year in the making and marks several firsts: the first
>>         post-acquisition release under Red Hat, our first major act
>>         as an openly-governed project
>>         <http://www.gluster.org/roadmaps/>and our first foray beyond
>>         NAS. We’ve also taken our first steps towards merging big
>>         data and unstructured data storage, giving users and
>>         developers new ways of managing their data scalability
>>         challenges.
>>
>>         GlusterFS is an open source, fully distributed storage
>>         solution for the world’s ever-increasing volume of
>>         unstructured data. It is a software-only, highly available,
>>         scale-out, centrally managed storage pool that can be backed
>>         by POSIX filesystems that support extended attributes, such
>>         as Ext3/4, XFS, BTRFS and many more.
>>
>>         This release provides many of the most commonly requested
>>         features including proactive self-healing, quorum
>>         enforcement, and granular locking for self-healing, as well
>>         as many additional bug fixes and enhancements.
>>
>>         Some of the more noteworthy features include:
>>
>>             * Unified File and Object storage – Blending OpenStack’s
>>               Object Storage API
>>               <http://openstack.org/projects/storage/> with GlusterFS
>>               provides simultaneous read and write access to data as
>>               files or as objects.
>>             * HDFS compatibility – Gives Hadoop administrators the
>>               ability to run MapReduce jobs on unstructured data on
>>               GlusterFS and access the data with well-known tools and
>>               shell scripts.
>>             * Proactive self-healing – GlusterFS volumes will now
>>               automatically restore file integrity after a replica
>>               recovers from failure.
>>             * Granular locking – Allows large files to be accessed
>>               even during self-healing, a feature that is
>>               particularly important for VM images.
>>             * Replication improvements – With quorum enforcement you
>>               can be confident that your data has been written to at
>>               least the configured number of replicas before the file
>>               operation returns, a user-configurable trade-off
>>               between fault tolerance and performance.
>>
>>         Visit http://www.gluster.org <http://gluster.org/> to
>>         download. Packages are available for most distributions,
>>         including Fedora, Debian, RHEL, Ubuntu and CentOS.
>>
>>         Get involved! Join us on #gluster on freenode, join our
>>         mailing list <http://www.gluster.org/interact/mailinglists/>,
>>         ‘like’ our Facebook page <http://facebook.com/GlusterInc>,
>>         follow us on Twitter <http://twitter.com/glusterorg>, or
>>         check out our LinkedIn group
>>         <http://www.linkedin.com/groups?gid=99784>.
>>
>>         GlusterFS is an open source project sponsored by Red Hat
>>         <http://www.redhat.com/>®, which uses it in its line of Red
>>         Hat Storage <http://www.redhat.com/storage/> products.
>>
>>         (this post published at
>>         http://www.gluster.org/2012/05/introducing-glusterfs-3-3/ )
>>
>>
>>
>>         _______________________________________________
>>         Gluster-users mailing list
>>         Gluster-users at gluster.org
>>         http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>