
Theoretical Question About SSD Write Amplification


3 replies to this topic

#1 Guest_hollowface_*

Guest_hollowface_*

  • Guests
  • OFFLINE
  •  

Posted 10 July 2015 - 01:25 AM


 

Not sure if this is the correct place to post this. The question is about software but mostly hardware, so I figured the hardware section makes sense. I chose external hardware because my SSD is external, but it doesn't really matter, since I'm not planning on attempting this; I'm just interested. I actually wrote this question months ago but never got around to posting it.

 

Would setting the filesystem cluster size to match the SSD's erase-block size reduce write amplification caused by the erasure of shared blocks?

I can't seem to find any information about this, but it makes sense in my head; maybe I'm overlooking something? If I knew the exact total capacity of all the pages in a block on my SSD, aligned the partition, and applied a filesystem with a cluster size equal to the size of an entire block (is that possible? I assume each filesystem has its own cluster-size limitations?), would write amplification caused by the erasure of used blocks be reduced? Doing this should result in every file being stored in its own cluster, or split across its own series of clusters, so when writing new data to previously used pages, all the data in the entire block would be safe to discard. The trade-off would be a reduced inode quantity, but depending on what the SSD is used for, that may be acceptable.
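To make the intuition concrete, here is a toy model of my own (not a real flash translation layer, and the 64-pages-per-block geometry is just an assumed figure for illustration): pack a file with other files at cluster granularity, then count the worst-case live pages belonging to other files that share its erase blocks, since those are what the controller would have to copy out before erasing.

```python
import math

def foreign_pages_on_rewrite(file_pages, cluster_pages, block_pages=64):
    """Worst-case live pages belonging to *other* files that share the
    erase blocks of a file occupying `file_pages` pages. These are the
    pages the controller must copy elsewhere before it can erase the
    blocks, i.e. the extra write amplification."""
    if cluster_pages >= block_pages:
        return 0  # each cluster spans whole blocks, so no sharing is possible
    clusters = math.ceil(file_pages / cluster_pages)
    pages_used = clusters * cluster_pages
    blocks_touched = math.ceil(pages_used / block_pages)
    # Worst case: every remaining page in those blocks holds live foreign data.
    return blocks_touched * block_pages - pages_used

# A one-page file with one-page clusters: 63 foreign pages may need copying.
print(foreign_pages_on_rewrite(1, 1))    # -> 63
# The same file with cluster size equal to block size: nothing to copy.
print(foreign_pages_on_rewrite(1, 64))   # -> 0
```

In this model the answer to the question is yes: once clusters and blocks coincide, a block never mixes files, so rewriting one file never forces copies of another's data.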

I suppose in the case of small files, a reduced inode quantity wouldn't be the only side effect: where multiple files could normally have fit into a single block, they will now be spread across more blocks, which is a different form of amplification. I'm not sure there are terms for this, but I would put forth spread amplification (the data is spread over a larger area), block amplification (a larger number of blocks is used), and potential-erasure amplification (more blocks are involved that could later have erasures performed on them). Regardless, there should still be reduced write amplification in the context I mentioned, which is what my question is about.
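That spread/block-amplification trade-off can be quantified the same way (again a sketch under the same assumed 64-pages-per-block geometry): count the erase blocks consumed by many small files under each cluster size.

```python
import math

def blocks_consumed(n_files, file_pages, cluster_pages, block_pages=64):
    """Erase blocks consumed by n_files files of file_pages pages each.
    Small clusters let files pack together into shared blocks, while
    block-sized clusters give every file at least one whole block."""
    clusters_per_file = math.ceil(file_pages / cluster_pages)
    pages_per_file = clusters_per_file * cluster_pages
    if cluster_pages >= block_pages:
        # No sharing: each file rounds up to whole blocks on its own.
        return n_files * math.ceil(pages_per_file / block_pages)
    # Sharing allowed: files pack densely across blocks.
    return math.ceil(n_files * pages_per_file / block_pages)

# 100 one-page files: 2 blocks with page-sized clusters...
print(blocks_consumed(100, 1, 1))    # -> 2
# ...but 100 blocks with block-sized clusters ("block amplification").
print(blocks_consumed(100, 1, 64))   # -> 100
```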

Are there any free software tools one could use to detect the required information about their SSD, or would the manufacturer have to supply it with the drive?
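For what it's worth, on Linux the kernel publishes some I/O geometry under sysfs, but for a typical SATA/USB SSD those values describe the logical interface, not the NAND behind the controller; the flash page and erase-block sizes are generally not exposed (raw MTD flash devices do expose an erase size, but consumer SSDs hide it behind the controller), so the manufacturer's datasheet is usually the only source. A hedged sketch of reading what is available:

```python
from pathlib import Path

# Attributes the Linux block layer publishes under /sys/block/<dev>/queue.
# Note: these are interface hints, NOT the NAND page or erase-block size.
QUEUE_ATTRS = ("logical_block_size", "physical_block_size",
               "minimum_io_size", "optimal_io_size")

def read_queue_geometry(dev="sda", sys_root="/sys"):
    """Return whatever geometry hints the kernel publishes for a block
    device, as a dict of attribute name -> value in bytes."""
    qdir = Path(sys_root) / "block" / dev / "queue"
    geometry = {}
    for attr in QUEUE_ATTRS:
        f = qdir / attr
        if f.exists():
            geometry[attr] = int(f.read_text().strip())
    return geometry
```

Usage would be something like `read_queue_geometry("sda")`; a USB-attached SSD usually shows up as another `sd*` device, and missing attributes are simply skipped.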
 




#2 Angoid

Angoid

  • Security Colleague
  • 299 posts
  • OFFLINE
  •  
  • Gender:Male
  • Location:East Midlands UK

Posted 10 July 2015 - 03:07 AM

Hmmm, interesting, as I've been reading about SSDs recently; I'm thinking of putting one into my own PC. I haven't yet made the decision, but I've come across the idea of over-provisioning, which may be the answer you're looking for:

 

http://blog.seagate.com/intelligent/gassing-up-your-ssd/

http://blog.seagate.com/wp-content/uploads/2014/10/FMS-2012-Tutorial-E-21-Understanding-SSD-Overprovisioning-Kent-Smith.pdf

 

https://en.wikipedia.org/wiki/Write_amplification

 

Like you, I can't find anything about the scenario you pose, but it makes me wonder whether you're asking the right question.

 

I'll admit that I'm a bit of a n00b when it comes to SSDs, so perhaps someone more knowledgeable than me will chip in...


Helping a loved one through a mental health issue?  Remember ALGEE...

Assess the risk | Listen nonjudgmentally | Give reassurance and info | Encourage professional help | Encourage self-help and support network

#3 YeahBleeping

YeahBleeping

  • Members
  • 1,258 posts
  • OFFLINE
  •  
  • Gender:Male
  • Local time:03:35 AM

Posted 10 July 2015 - 08:02 AM

I think that, theoretically, your idea has some merit. But what you're thinking of would have to be done by the SSD's controller at the hardware level. And I suspect that if your idea had valid real-world application, it may well have already been tested by the HDD/SSD manufacturers.



#4 Guest_hollowface_*

Guest_hollowface_*

  • Guests
  • OFFLINE
  •  

Posted 10 July 2015 - 12:42 PM

@Angoid

 

I can't find anything about the scenario you pose, but it makes me wonder whether you're asking the right question.


I think part of it might be that most people probably aren't interested in reducing the storage capacity of their drives, especially since typical consumer NAND-flash SSDs aren't that big yet. My understanding is that NAND-flash SSDs commonly range from 32 pages (2 KB each) to 256 pages (16 KB each) per block. That would mean if you had 64 pages per block (4 KB each, the equivalent of a 4K-sector drive), set a matching cluster size, and wrote a 4 KB file, you'd use 64 times more space than normal. If you wrote nothing but 4 KB files to a 256 GB NAND-flash SSD with a 256 KB block size (64 x 4 KB pages) and a matching cluster size, you'd have the equivalent of a 4 GB SSD. The trade-off would be that no blocks would be sharing files.
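Working through that worst case in code (same assumptions: 4 KB pages, 64 pages per block, so 256 KB clusters on a 256 GB drive), the usable payload when every file is only 4 KB comes out to 4 GB:

```python
# Worst-case effective capacity when cluster size == erase-block size
# and every file is a single 4 KB page (assumed geometry from the post).
KB, GB = 1024, 1024**3
page = 4 * KB
pages_per_block = 64
block = page * pages_per_block            # 256 KB cluster / erase block
drive = 256 * GB

clusters = drive // block                 # 1,048,576 clusters on the drive
worst_case_payload = clusters * page      # each cluster holds one 4 KB file
print(worst_case_payload // GB)           # -> 4 (GB of usable data)
```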

 

@YeahBleeping

what you're thinking of would have to be done by the SSD's controller at the hardware level


Why?

I suspect that if your idea had valid real-world application, it may well have already been tested by the HDD/SSD manufacturers.


I agree; the benefit of what I'm suggesting is likely not high enough for anyone to justify doing it.



