
Welcome to the fsbench wiki!

The title of this tool is a bit misleading. It gives a quick check of file I/O and can be used during local development to see whether a file system is behaving normally, for example to spot anomalies while developing I/O code. Actual file I/O benchmarking, by contrast, depends on many parts of the system, such as hardware (RAM, CPU, disk) and the circumstances under which the measurements are made.
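
The snippet below is a minimal sketch of the kind of quick check described above: write a batch of small files, read them back, and report rough throughput. The file count, file size, and temporary working directory are arbitrary assumptions for illustration, not fsbench's actual parameters.

```python
# Quick file I/O sanity check: timed write pass followed by a timed read pass.
# FILE_COUNT and FILE_SIZE are assumed values chosen only for illustration.
import os
import tempfile
import time

FILE_COUNT = 100          # assumed: number of small files to exercise
FILE_SIZE = 64 * 1024     # assumed: 64 KiB per file
payload = os.urandom(FILE_SIZE)

with tempfile.TemporaryDirectory() as workdir:
    # Timed write pass
    start = time.perf_counter()
    for i in range(FILE_COUNT):
        path = os.path.join(workdir, f"probe_{i}.dat")
        with open(path, "wb") as fh:
            fh.write(payload)
            fh.flush()
            os.fsync(fh.fileno())   # force data to disk so the timing is meaningful
    write_secs = time.perf_counter() - start

    # Timed read pass
    start = time.perf_counter()
    for i in range(FILE_COUNT):
        path = os.path.join(workdir, f"probe_{i}.dat")
        with open(path, "rb") as fh:
            fh.read()
    read_secs = time.perf_counter() - start

total_mib = FILE_COUNT * FILE_SIZE / (1024 * 1024)
print(f"write: {total_mib / write_secs:.1f} MiB/s, read: {total_mib / read_secs:.1f} MiB/s")
```

Results from a check like this are only useful for spotting anomalies on a single machine over time, not for comparing systems.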

The following web sites describe actual file I/O benchmarking:

  1. http://www.iozone.org/
  2. http://www.textuality.com/bonnie
  3. http://www.coker.com.au/bonnie++/
  4. http://www.iometer.org
  5. SPECsfs http://www.spec.org
  6. http://fsbench.filesystms.org
  7. http://www.fsl.cs.sunysb.edu/docs/fsbench/checklist.html
  8. http://filesystems.org/project-fsbench.html

At the time of writing (June 2015), there are no accepted gold standards for file I/O benchmarking available to the open source community. Some of the studies referenced above suggest the following steps to truly establish a benchmark:

  1. Establish timing accuracy (verify timing against a known standard)
  2. Establish known benchmark standards:
         * Random read/write
         * Sequential read/write
         * Read 10K files (or some standard number)
         * Write 10K files (or some standard number)
         * Create/delete 10K files (or some standard number)
  3. System configuration reports (CPU, disk, RAM, etc.)
  4. Performance metrics:
         * Cache size
         * Threads per client
         * I/O size
  5. Performance statistics (see the sketch after this list):
         * Standard deviation
         * Confidence level
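
The following is a rough sketch of the "performance statistics" step: repeat a timed operation several times, then report the mean, standard deviation, and a 95% confidence interval. The workload (a create/delete pass) and the run and file counts are illustrative assumptions, scaled down from the "10K files" case above.

```python
# Repeat a create/delete pass and summarize the timings with basic statistics.
# RUNS and FILES_PER_RUN are assumed values chosen only for illustration.
import math
import os
import statistics
import tempfile
import time

RUNS = 10             # assumed number of repetitions
FILES_PER_RUN = 1000  # scaled-down stand-in for the 10K-file case

def create_delete_run(workdir, count):
    """Time one pass that creates and then deletes `count` empty files."""
    start = time.perf_counter()
    for i in range(count):
        open(os.path.join(workdir, f"f_{i}"), "wb").close()
    for i in range(count):
        os.unlink(os.path.join(workdir, f"f_{i}"))
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as workdir:
    samples = [create_delete_run(workdir, FILES_PER_RUN) for _ in range(RUNS)]

mean = statistics.mean(samples)
stdev = statistics.stdev(samples)
# 95% confidence interval for the mean, using a normal approximation
half_width = 1.96 * stdev / math.sqrt(len(samples))
print(f"mean {mean:.3f}s  stdev {stdev:.3f}s  95% CI +/- {half_width:.3f}s")
```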

Lastly, an open database of reported results for various configurations could help disk farm communities measure and compare what has been done. This type of results sharing could lead to a better understanding of I/O metrics.
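
One possible shape for such a shared result record is sketched below; the field names, units, and values are purely hypothetical, not an established schema.

```python
# Hypothetical record layout for a shared results database (illustrative only).
import json

result = {
    "system": {"cpu": "example-cpu-model", "ram_gb": 16, "disk": "example-disk", "filesystem": "ext4"},
    "workload": "sequential_write",
    "io_size_bytes": 65536,
    "threads_per_client": 4,
    "throughput_mib_s": 512.3,
    "stdev_mib_s": 14.7,
    "confidence_level": 0.95,
}
print(json.dumps(result, indent=2))
```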
