[Gluster-users] GlusterFS Failing to write Maya/Renderman Studio Pointcloud files

Todd Daugherty todd at truthout.org
Wed Jul 22 14:54:26 UTC 2009


We are using the fuse version from the Fedora repos. And there is
NOTHING in the glusterfs logs. I have watched the client and server
logs while this testing has gone on and NOTHING. There are LOTS of
logs from Maya, but I don't think those will help you. If you want, I
will tar up the logs. But I did most of the testing on Tuesday and
there is nothing in the logs. Is there a way to turn up verbosity on
the logs? This is a VERY repeatable problem (it happens every time I
run this procedure).

Everything else so far works great. And this works when the volume is
exported via Samba, so it sounds like the problem is in the gluster
client. I turned off BDB and that did not help either. The command
(ptfilter) opens a file and inserts points (x, y, z values) into it,
so it seems to come down to how this command writes to the filesystem.
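If it helps narrow things down, here is a crude stand-in for that write
pattern, one file opened and appended to many times with small point
records. This is my guess at the I/O shape, not ptfilter's actual
behavior, and the output path is a placeholder; point OUT at the
glusterfs mount to test it there.

```shell
# Hypothetical stand-in for ptfilter's write pattern: create one file
# and append many small (x, y, z) records to it.
# OUT is a placeholder path; set it to a file on the glusterfs mount.
OUT=${OUT:-./pointcloud-test.dat}

: > "$OUT"                        # truncate/create the output file
for i in $(seq 1 1000); do
    printf '%d.0 %d.0 %d.0\n' "$i" "$i" "$i" >> "$OUT"
done

wc -l < "$OUT"                    # prints 1000 if every append landed
```

If the same script completes on the Samba-exported mount but loses
writes on the native mount, that would support the client-side theory.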

OK, I just saw the --log-level= option. Should I set it to DEBUG?
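For reference, I assume remounting the client with debug logging would
look roughly like this. The volfile path, log path, and mountpoint
below are guesses from a stock install, not our actual setup:

```shell
# Unmount the native glusterfs client, then remount with DEBUG logging.
# /etc/glusterfs/glusterfs.vol and /mnt/gluster are assumed paths;
# substitute the real volfile and mountpoint.
umount /mnt/gluster
glusterfs --volfile=/etc/glusterfs/glusterfs.vol \
          --log-level=DEBUG \
          --log-file=/var/log/glusterfs/client-debug.log \
          /mnt/gluster
```

Then I would rerun the ptfilter job and tar up whatever lands in the
debug log.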

Thanks

Todd

On Wed, Jul 22, 2009 at 3:02 PM, Harshavardhana<harsha at gluster.com> wrote:
> Hi Todd,
>
>      we would need the client and server log files, which by default are
> present at "/var/log/glusterfs". This would help us understand the
> issue better. Also, is the fuse version the one from the Fedora repos, or
> the fuse patched by us ("2.7.4glfs11")?
>
> Regards
> --
> Harshavardhana
> Gluster - http://www.gluster.com
>
>
> On Wed, Jul 22, 2009 at 4:48 PM, Todd Daugherty <todd at truthout.org> wrote:
>>
>> What other Info would you like?
>>
>> Client Config
>>
>> volume remote01
>>  type protocol/client
>>  option transport-type tcp
>>  option remote-host slave14
>>  option remote-subvolume brick01
>> end-volume
>>
>> volume remote02
>>  type protocol/client
>>  option transport-type tcp
>>  option remote-host slave15
>>  option remote-subvolume brick02
>> end-volume
>>
>> volume remote03
>>  type protocol/client
>>  option transport-type tcp
>>  option remote-host slave16
>>  option remote-subvolume brick03
>> end-volume
>>
>> volume remote04
>>  type protocol/client
>>  option transport-type tcp
>>  option remote-host slave20
>>  option remote-subvolume brick04
>> end-volume
>>
>> volume distribute
>>  type cluster/distribute
>>  subvolumes remote01 remote02 remote03 remote04
>> end-volume
>>
>> volume writebehind
>>  type performance/write-behind
>>  option aggregate-size 128KB
>>  option window-size 1MB
>>  subvolumes distribute
>> end-volume
>>
>> volume cache
>>  type performance/io-cache
>>  option cache-size 128MB
>>  subvolumes writebehind
>> end-volume
>>
>> Server Config
>>
>> volume posix
>>  type storage/posix
>>  option directory /node
>> end-volume
>>
>> volume locks
>>  type features/locks
>>  subvolumes posix
>> end-volume
>>
>> volume brick01
>>  type performance/io-threads
>>  option thread-count 8
>>  subvolumes locks
>> end-volume
>>
>> volume server
>>  type protocol/server
>>  option transport-type tcp
>>  option auth.addr.brick01.allow 127.0.0.1,192.168.1*
>>  subvolumes brick01
>> end-volume
>>
>> On Wed, Jul 22, 2009 at 7:55 AM, Harshavardhana<harsha at gluster.com> wrote:
>> > Hi Todd,
>> >
>> >      Could you provide us with your volume configuration files and log
>> > files for both server and client? Also, can you please file a bug report
>> > with these at http://bugs.gluster.com/
>> >
>> > Thanks
>> > --
>> > Harshavardhana
>> > Gluster - http://www.gluster.com
>> >
>> >
>> > On Wed, Jul 22, 2009 at 12:00 AM, Todd Daugherty <todd at truthout.org>
>> > wrote:
>> >>
>> >> I have a 4 node cluster in test production and this is quite the
>> >> problem.
>> >>
>> >> Linux Fedora 10/11 client Fuse 2.7.4 Gluster 2.0.3
>> >> Gentoo 2.6.27-gentoo-r8 server Gluster 2.0.3
>> >>
>> >> When mounted native the filesystem does not complete the writing of
>> >> Point Cloud files. When mounted via CIFS (glusterfs exported via
>> >> Samba) it writes the files.
>> >>
>> >> I started an strace of the process, but it crashed. There are too many
>> >> steps before it actually gets to the Point Cloud generation. Renderman
>> >> Studio runs via Maya, so that is VERY slow under strace as well.
>> >>
>> >> Any ideas of how to solve this problem?
>> >>
>> >> Everything else is working. I had a problem with Houdini Mantra in
>> >> Gluster 2.0.1 that was fixed with an upgrade to 2.0.3. I was hoping
>> >> 2.0.4 would fix this problem, but no dice.
>> >>
>> >> Todd
>> >>
>> >> _______________________________________________
>> >> Gluster-users mailing list
>> >> Gluster-users at gluster.org
>> >> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>> >>
>> >
>> >
>>
>
>

