[Gluster-users] Strange errors reading/writing/editing/deleting JPGs, PDFs and PNG from PHP Application
Alan Zapolsky
alan at droptheworld.com
Fri Jun 10 15:12:51 UTC 2011
Here is the log. Nothing really stands out; there is one entry from today,
and the previous entries are from 6/3.
[alan at app1:10.71.57.82:glusterfs]$ sudo cat /var/log/glusterfs/drives-d1.log
[2011-06-03 18:17:05.160722] W [io-stats.c:1644:init] d1: dangling volume.
check volfile
[2011-06-03 18:17:05.160865] W [dict.c:1205:data_to_str] dict: @data=(nil)
[2011-06-03 18:17:05.160897] W [dict.c:1205:data_to_str] dict: @data=(nil)
Given volfile:
+------------------------------------------------------------------------------+
1: volume d1-client-0
2: type protocol/client
3: option remote-host 10.198.6.214
4: option remote-subvolume /data/d1
5: option transport-type tcp
6: end-volume
7:
8: volume d1-client-1
9: type protocol/client
10: option remote-host 10.195.15.38
11: option remote-subvolume /data/d1
12: option transport-type tcp
13: end-volume
14:
15: volume d1-replicate-0
16: type cluster/replicate
17: subvolumes d1-client-0 d1-client-1
18: end-volume
19:
20: volume d1-write-behind
21: type performance/write-behind
22: option cache-size 4MB
23: subvolumes d1-replicate-0
24: end-volume
25:
26: volume d1-read-ahead
27: type performance/read-ahead
28: subvolumes d1-write-behind
29: end-volume
30:
31: volume d1-io-cache
32: type performance/io-cache
33: option cache-size 1024MB
34: subvolumes d1-read-ahead
35: end-volume
36:
37: volume d1-quick-read
38: type performance/quick-read
39: option cache-size 1024MB
40: subvolumes d1-io-cache
41: end-volume
42:
43: volume d1-stat-prefetch
44: type performance/stat-prefetch
45: subvolumes d1-quick-read
46: end-volume
47:
48: volume d1
49: type debug/io-stats
50: subvolumes d1-stat-prefetch
51: end-volume
+------------------------------------------------------------------------------+
[2011-06-03 18:17:08.676157] I
[client-handshake.c:1005:select_server_supported_programs] d1-client-0:
Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-03 18:17:08.684299] I
[client-handshake.c:1005:select_server_supported_programs] d1-client-1:
Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-03 18:17:08.718624] I [client-handshake.c:841:client_setvolume_cbk]
d1-client-1: Connected to 10.195.15.38:24009, attached to remote volume
'/data/d1'.
[2011-06-03 18:17:08.718687] I [afr-common.c:2572:afr_notify]
d1-replicate-0: Subvolume 'd1-client-1' came back up; going online.
[2011-06-03 18:17:08.732772] I [fuse-bridge.c:2821:fuse_init]
glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel
7.14
[2011-06-03 18:17:08.735602] I [afr-common.c:819:afr_fresh_lookup_cbk]
d1-replicate-0: added root inode
[2011-06-03 18:17:08.748443] I [client-handshake.c:841:client_setvolume_cbk]
d1-client-0: Connected to 10.198.6.214:24009, attached to remote volume
'/data/d1'.
[2011-06-10 06:33:08.255922] W [fuse-bridge.c:2510:fuse_getxattr]
glusterfs-fuse: 3480740: GETXATTR (null)/3039291028 (security.capability)
(fuse_loc_fill() failed)
[alan at app1:10.71.57.82:glusterfs]$
Forgive me, I'm relatively new to GlusterFS, so I'm not sure what log level
I have set up. How can I tell which log level is configured? Perhaps I
could increase it to capture more detailed information.
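In the meantime, here is what I was planning to try in order to raise the
client log level - I'm guessing the mount point from the log file name
(drives-d1.log -> /drives/d1) and the exact option syntax from the docs, so
please correct me if this is wrong:
[alan at app1:10.71.57.82:glusterfs]$ sudo umount /drives/d1
[alan at app1:10.71.57.82:glusterfs]$ sudo mount -t glusterfs -o log-level=DEBUG 10.198.6.214:/d1 /drives/d1
or, alternatively, passing --log-level=DEBUG to the glusterfs client
process directly.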
Thanks again for the help!
- Alan
Just in case it helps, here are the volume configuration files from the
server:
[alan at file1:10.198.6.214:d1]$ sudo cat d1-fuse.vol
volume d1-client-0
type protocol/client
option remote-host 10.198.6.214
option remote-subvolume /data/d1
option transport-type tcp
end-volume
volume d1-client-1
type protocol/client
option remote-host 10.195.15.38
option remote-subvolume /data/d1
option transport-type tcp
end-volume
volume d1-replicate-0
type cluster/replicate
subvolumes d1-client-0 d1-client-1
end-volume
volume d1-write-behind
type performance/write-behind
option cache-size 4MB
subvolumes d1-replicate-0
end-volume
volume d1-read-ahead
type performance/read-ahead
subvolumes d1-write-behind
end-volume
volume d1-io-cache
type performance/io-cache
option cache-size 1024MB
subvolumes d1-read-ahead
end-volume
volume d1-quick-read
type performance/quick-read
option cache-size 1024MB
subvolumes d1-io-cache
end-volume
volume d1-stat-prefetch
type performance/stat-prefetch
subvolumes d1-quick-read
end-volume
volume d1
type debug/io-stats
subvolumes d1-stat-prefetch
end-volume
[alan at file1:10.198.6.214:d1]$
[alan at file1:10.198.6.214:d1]$ sudo cat d1.10.195.15.38.data-d1.vol
volume d1-posix
type storage/posix
option directory /data/d1
end-volume
volume d1-access-control
type features/access-control
subvolumes d1-posix
end-volume
volume d1-locks
type features/locks
subvolumes d1-access-control
end-volume
volume d1-io-threads
type performance/io-threads
option thread-count 8
subvolumes d1-locks
end-volume
volume /data/d1
type debug/io-stats
subvolumes d1-io-threads
end-volume
volume d1-server
type protocol/server
option transport-type tcp
option auth.addr./data/d1.allow *
subvolumes /data/d1
end-volume
[alan at file1:10.198.6.214:d1]$
On Fri, Jun 10, 2011 at 10:43 AM, Anand Avati <anand.avati at gmail.com> wrote:
> Do you find anything in the client logs?
>
> On Fri, Jun 10, 2011 at 3:20 AM, Alan Zapolsky <alan at droptheworld.com>wrote:
>
>> Hello,
>>
>> I have a PHP web application that uses Gluster to store its files.
>> There are a few areas of the application that perform multiple
>> operations on small to medium size files (such as .jpg and .pdf files)
>> in quick succession. My dev environment does not use Gluster and had
>> no problems - but in production, I am seeing some strange errors and
>> am wondering if perhaps Gluster could be the cause.
>>
>> Example 1: I may perform the following sequence of operations on a
>> batch of 3,000 photos (a rough sketch of the code follows the list):
>>
>> 1. Copy the photo from remote URL to a temp folder
>> 2. Create a unique directory based on that photo_id
>> 3. Move the photo from the temp dir to the new dir
>> 4. Create medium (1000px), small (250px), and tiny (72px) versions of
>> the photo and save them to the temp dir.
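>>
>> For reference, the code for that sequence is roughly the following
>> (the paths and the resize helper are renamed/simplified here, so
>> treat it as a sketch of what the code does rather than the exact
>> production code):
>>
>> // 1. Copy the photo from the remote URL into a temp folder
>> $tmp = '/drives/d1/tmp/' . $photo_id . '.jpg';
>> copy($remote_url, $tmp);
>>
>> // 2. Create a unique directory based on the photo_id
>> $dir = '/drives/d1/photos/' . $photo_id;
>> mkdir($dir, 0755, true);
>>
>> // 3. Move the photo from the temp dir to the new dir
>> rename($tmp, $dir . '/original.jpg');
>>
>> // 4. Create medium/small/tiny versions and save them to the temp dir
>> // (resize_image() stands in for the real resizing code)
>> foreach (array('medium' => 1000, 'small' => 250, 'tiny' => 72) as $name => $px) {
>>     resize_image($dir . '/original.jpg', '/drives/d1/tmp/' . $photo_id . '-' . $name . '.jpg', $px);
>> }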
>>
>> Example 2: Turning a PDF into a PNG (again, a rough sketch of the code follows the list)
>>
>> 1. Create a PDF document, ranging from a few MB to tens of MB or more.
>> 2. Use Ghostscript to read a specific page and create a temp .png file
>> from it.
>> 3. Use Imagemagick to resize the .png and add a drop shadow.
>> 4. Delete the temp .png file
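>>
>> The code for this is roughly as follows (the gs/convert arguments and
>> paths are simplified here, so this is a sketch of what happens rather
>> than a verbatim copy):
>>
>> // 2. Use Ghostscript to render one page of the PDF to a temp PNG
>> $tmp_png = '/drives/d1/tmp/' . $doc_id . '-p' . $page . '.png';
>> exec('gs -dBATCH -dNOPAUSE -sDEVICE=png16m -r150'
>>      . ' -dFirstPage=' . $page . ' -dLastPage=' . $page
>>      . ' -sOutputFile=' . escapeshellarg($tmp_png)
>>      . ' ' . escapeshellarg($pdf_file));
>>
>> // 3. Use ImageMagick to resize the PNG and add a drop shadow
>> $final_png = '/drives/d1/docs/' . $doc_id . '/page' . $page . '.png';
>> exec('convert ' . escapeshellarg($tmp_png) . ' -resize 500x'
>>      . ' \( +clone -background black -shadow 60x4+4+4 \) +swap'
>>      . ' -background none -layers merge +repage '
>>      . escapeshellarg($final_png));
>>
>> // 4. Delete the temp .png file
>> if (is_file($tmp_png)) unlink($tmp_png);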
>>
>> In example 1, I am finding that some of the resulting JPG files wind
>> up being corrupted - when attempting to read the problematic JPGs, I
>> receive messages such as "bad Huffman code" or "premature end of data
>> segment".
>>
>> In example 2, I am getting strange errors on step 3 - it sometimes has
>> a problem finding the file created by step 2. And then on step 4, I
>> sometimes get an error that really baffles me - a line of PHP code
>> such as:
>>
>> if (is_file($file)) unlink($file);
>>
>> .. will sometimes produce an error "unlink: no such file or
>> directory". I have no idea how it passes the is_file() check and then
>> unlink() says it's not there.
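>>
>> For now I am considering making the delete more defensive, along the
>> lines of the snippet below - the clearstatcache() call and the retry
>> are just my guess at a workaround, not something I have confirmed
>> actually helps:
>>
>> clearstatcache();
>> if (is_file($file) && !@unlink($file)) {
>>     // the first unlink failed; drop PHP's stat cache and retry once
>>     clearstatcache();
>>     if (is_file($file)) {
>>         @unlink($file);
>>     }
>> }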
>>
>> At this point I'm just wondering if Gluster could be the culprit for
>> any of this strange behavior, considering the types of operations and
>> file sizes I'm working with. I've included my Gluster volume info
>> below.
>>
>> Thanks for the help.
>>
>> - Alan
>>
>> Gluster Volume Info
>> [alan at file1:10.X.X.X:d1]$ sudo cat info
>> type=2
>> count=2
>> status=1
>> sub_count=2
>> version=1
>> transport-type=0
>> volume-id=3fc69046-a324-42de-bf8a-e1bd2e6e45ab
>> brick-0=10.X.X.X:-data-d1
>> brick-1=10.Y.Y.Y.38:-data-d1
>> performance.cache-size=1024MB
>> performance.write-behind-window-size=4MB
>> performance.io-thread-count=8
>> [alan at file1:10.X.X.X:d1]$
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>>
>
>