I'm doing an experiment for fun, but I can't seem to accomplish what I want to do.
Here's what I'm trying to do: set up a set of benchmarks for flash drives I'm getting soon. Synthetic benchmarks (like many of the disk "benchmarking" programs for Windows) don't seem to give "real world" results.
The behavior I'm trying to replicate is this: if you change the "cluster size/allocation unit size" when formatting a drive, you get different performance for different file sizes. With a small cluster size you can write small files quickly (with possibly a penalty when writing large files), but with a large allocation size those same small files take ages to write. On top of that, the small files now take up more disk space. This behavior is perfectly reproducible in Windows GUI file transfers.
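The space-overhead part of this is easy to show with arithmetic alone. A quick sketch (pure Python, no drive needed) of how much space a file actually occupies, since every file is rounded up to whole clusters:

```python
import math

def allocated_size(file_size, cluster_size):
    """Bytes a file actually occupies on disk: size rounded up to whole clusters."""
    return math.ceil(file_size / cluster_size) * cluster_size

# A 1 KB file on different allocation unit sizes:
print(allocated_size(1024, 1024))   # 1024 bytes -- no waste
print(allocated_size(1024, 65536))  # 65536 bytes -- 64x the file's real size
```

So with a 64 KB allocation unit, ten thousand 1 KB files eat about 640 MB of disk instead of 10 MB.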
I've tried the Windows disk benchmarking programs, but the ones that test writes seem to erase the drive and reformat it however they want, bypassing the effect of the cluster size. (Some of them don't, but even those don't show me the same "real world" performance.)
Anybody have any advice to help me accomplish this? Sure, the synthetic benchmarks WILL tell me which drive is fastest, but that isn't what I need, and I can't draw conclusions about how each drive is best used (small files, large files, etc.).
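For reference, this is roughly the kind of test I have in mind. A minimal Python sketch (the target directory, file count, and sizes are placeholders I made up; you'd point TARGET at a directory on the mounted flash drive, e.g. "E:\\bench", and fsync so writes actually hit the device instead of the OS cache):

```python
import os
import shutil
import tempfile
import time

# Placeholder: replace with a directory on the drive under test, e.g. "E:\\bench"
TARGET = tempfile.mkdtemp()

def timed_write(paths, payloads):
    """Write each payload to its path, fsync so it reaches the device, and time it."""
    start = time.perf_counter()
    for path, data in zip(paths, payloads):
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
    return time.perf_counter() - start

# 1,000 small files (4 KB each) vs. one large file of the same total size
small_payloads = [os.urandom(4096) for _ in range(1000)]
t_small = timed_write(
    [os.path.join(TARGET, f"s{i}.bin") for i in range(1000)], small_payloads
)
t_large = timed_write([os.path.join(TARGET, "big.bin")], [os.urandom(4096 * 1000)])
print(f"small files: {t_small:.2f}s, one large file: {t_large:.2f}s")

shutil.rmtree(TARGET)  # clean up
```

On a drive formatted with a large allocation unit, I'd expect the small-file loop to slow down noticeably while the single large write stays about the same, which is exactly the difference the synthetic benchmarks seem to hide.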
EDIT: For example, CrystalDiskMark on Windows gave me IDENTICAL read and write results whether I formatted the drive with a 1024 B allocation unit size or an 8192 B one. I'm also not sure how it can test 4K writes when the "file size" you select is 50 MB. That doesn't make sense to me.
Edited by corrado33, 26 October 2016 - 12:03 PM.