[Gluster-devel] Open source SPC-1 Workload IO Pattern

Michael O'Sullivan michael.osullivan at auckland.ac.nz
Wed Nov 26 13:53:23 UTC 2014


Hi Luis,

We worked with Jens Axboe for a little while to try to merge things, but then got busy testing distributed file systems rather than raw storage.

We had an email in 2012 from someone who said:

>>I encountered a couple of segfaults when modifying the sample configuration file.
>>
>>I've thought about revamping it and making it more "fio"-like, possibly turning SPC into a profile so that someone can just run "fio --profile=spc"

But the person who emailed never followed up.

I think having an fio --profile=spc-1 would be great, and I'd be happy to help get it working, but fio-style testing is not my core research area or area of expertise. We used fio+spc-1 to test disks in order to get inputs for optimal infrastructure design research (which is one of my core research areas). That said, I did a lot of the original development, so I can probably help people understand what the code is trying to do.
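
For anyone who wants to experiment before a proper profile exists, a plain fio job file can at least approximate the shape of the workload: three ASUs sized like the ones Luis describes below, small random mixed I/O on the data and user stores, and a sequential write stream for the log. This is only a rough sketch: the device paths and read/write percentages here are placeholders, not the SPC-1 specification values (the real generator draws its offsets and sizes from the distributions in the spc1.[hc] code).

  # spc1-approx.fio -- rough approximation only, not the real SPC-1 mix
  [global]
  ioengine=libaio
  direct=1
  bs=4k
  iodepth=16
  runtime=300
  time_based

  [asu1-data-store]
  # placeholder device; writing here is destructive
  filename=/dev/sdb
  size=45g
  rw=randrw
  # placeholder read/write split
  rwmixread=60

  [asu2-user-store]
  # placeholder device; writing here is destructive
  filename=/dev/sdc
  size=45g
  rw=randrw
  # placeholder read/write split
  rwmixread=80

  [asu3-log]
  # placeholder device; writing here is destructive
  filename=/dev/sdd
  size=10g
  # the log ASU is mostly a sequential write stream
  rw=write

A real --profile=spc-1 would instead drive fio's I/O submission from the spc1.[hc] pattern generator rather than from fixed percentages like these.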

I hope this helps. Please let me know if you'd like to revamp fio+spc-1 and if you need my help.

Thanks, Mike

-----Original Message-----
From: Luis Pabón [mailto:lpabon at redhat.com] 
Sent: Friday, 21 November 2014 3:24 a.m.
To: Michael O'Sullivan; Justin Clift
Cc: gluster-devel at gluster.org
Subject: Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

Hi Michael,
     I noticed the code on the fio branch (that is where I grabbed the spc1.[hc] files :-) ).  Do you know why that branch has not been merged into master?

- Luis

On 11/18/2014 11:56 PM, Michael O'Sullivan wrote:
> Hi Justin & Luis,
>
> We created a branch of fio that implemented this SPC-1 trace a few years ago. I can dig up the code and the paper we wrote if that would be useful.
>
> Cheers, Mike
>
>> On 19/11/2014, at 4:21 pm, "Justin Clift" <justin at gluster.org> wrote:
>>
>> Nifty. :)
>>
>> (Yeah, catching up on old unread email, as the wifi in this hotel is 
>> so bad I can barely do anything else.  8-10 second ping times to 
>> www.gluster.org. :/)
>>
>> As a thought, would there be useful analysis/visualisation 
>> capabilities if you stored the data in a time series database (e.g. 
>> InfluxDB) and then used Grafana (http://grafana.org) on it?
>>
>> + Justin
>>
>>
>> On Fri, 07 Nov 2014 12:01:56 +0100
>> Luis Pabón <lpabon at redhat.com> wrote:
>>
>>> Hi guys,
>>> I created a simple test program to visualize the I/O pattern of 
>>> NetApp's open-source SPC-1 workload generator. SPC-1 is an 
>>> enterprise OLTP-type workload created by the Storage Performance 
>>> Council (http://www.storageperformance.org/results).  Some of the 
>>> results are published and available here:
>>> http://www.storageperformance.org/results/benchmark_results_spc1_active .
>>>
>>> NetApp created an open source version of this workload and described 
>>> it in their publication "A portable, open-source implementation of 
>>> the SPC-1 workload" 
>>> (http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf)
>>>
>>> The code is available on GitHub: https://github.com/lpabon/spc1 .  
>>> All it does at the moment is capture the pattern; no real I/O is 
>>> generated. I will be working on a command line program to enable 
>>> usage on real block storage systems.  I may either extend fio or 
>>> create a tool specifically tailored to the requirements of 
>>> running this workload.
>>>
>>> On GitHub, I have an example I/O pattern for a simulation running 
>>> 50 million I/Os using HRRW_V2. The simulation ran with an ASU1 (Data 
>>> Store) size of 45 GB, an ASU2 (User Store) size of 45 GB, and an ASU3 
>>> (Log) size of 10 GB.
>>>
>>> - Luis
>>>
>>> _______________________________________________
>>> Gluster-devel mailing list
>>> Gluster-devel at gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel
>>
>>
>> --
>> GlusterFS - http://www.gluster.org
>>
>> An open source, distributed file system scaling to several petabytes, 
>> and handling thousands of clients.
>>
>> My personal twitter: twitter.com/realjustinclift 
>> _______________________________________________
>> Gluster-devel mailing list
>> Gluster-devel at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-devel


