Designing SSDs for write-intensive Embedded System Applications

Fri, 09/25/2009 - 10:53am
Gary Drossel, Director of Product Planning, Solid-State Storage Business Unit, Western Digital Technologies

Write-intensive applications require a mix of controller, storage media, and solid-state storage management algorithms to achieve optimal endurance and lifespan, while read-intensive applications can simply optimize for the lowest cost per gigabyte.

Designing for long product lifespan requires an understanding of write amplification and its crushing effects, which can lead to premature SSD failure when overlooked. Write amplification is a measure of the efficiency of the SSD controller: it is the number of writes the controller makes to the NAND for every write from the host system. The concept stems from the mismatch between erase block sizes (256KB for 50nm SLC), page sizes (4KB for 50nm SLC), and sector sizes (512 bytes). Embedded applications typically transfer data in a series of short, random transactions, compounding the effects of write amplification.

The minimum write size from an SSD controller to the NAND is usually the page size, which in the above example is 4KB. Most SSDs erase before writing, so a 4KB write from the host will, in the worst case, require a whole 256KB erase block to be erased and rewritten. Put another way, 256KB are written from the SSD controller to the NAND for a 4KB write from the host to the SSD controller. The result is a 256:4, or 64:1, write amplification. In this worst-case scenario, writing 64 times the data will cause the drive to wear out and fail far sooner than projected.
In truth, write amplification falls somewhere between perfect (1:1) and worst case, which is defined as the erase block size divided by the page size. The bottom line for OEM designers is to understand their application's usage model and know how much data is actually being written.
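The bounds above reduce to simple arithmetic. A minimal sketch, using the 50nm SLC sizes quoted in the article (other NAND geometries will differ):

```python
# Worst-case write amplification, per the article: a one-page host write
# forces an entire erase block to be erased and rewritten.
ERASE_BLOCK = 256 * 1024   # bytes, 50nm SLC erase block
PAGE_SIZE = 4 * 1024       # bytes, 50nm SLC page

def worst_case_write_amplification(erase_block: int, page_size: int) -> float:
    """Worst case is erase block size divided by page size."""
    return erase_block / page_size

wa_worst = worst_case_write_amplification(ERASE_BLOCK, PAGE_SIZE)
print(wa_worst)  # 64.0 -> the 64:1 ratio described in the text

# Effective NAND traffic for a given amount of host data falls between
# the perfect (1:1) and worst-case bounds:
host_writes_gb = 10
nand_writes_best_gb = host_writes_gb * 1.0        # perfect controller
nand_writes_worst_gb = host_writes_gb * wa_worst  # pathological workload
print(nand_writes_best_gb, nand_writes_worst_gb)  # 10.0 640.0
```

The spread between the two bounds (10GB vs. 640GB of NAND wear for the same host workload) is why understanding the usage model matters so much.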
This is easier said than done. One approach is to program the host system to perform larger writes that align to sector boundaries. Another is to "over-provision": to provide more capacity (at higher cost) than is strictly required, in order to extend product life.
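The first approach, issuing larger aligned writes from the host, can be sketched as a coalescing buffer that accumulates small transactions and flushes them in page-sized chunks. This is a hypothetical illustration, not a real driver interface; the class and callback names are invented, and the 4KB page size is the 50nm SLC figure from the text.

```python
# Hypothetical host-side coalescing buffer: accumulate short writes and
# flush only whole, page-sized chunks, so the SSD never sees sub-page I/O.
PAGE_SIZE = 4 * 1024  # bytes, 50nm SLC page size from the article

class CoalescingWriter:
    def __init__(self, flush_fn, page_size=PAGE_SIZE):
        self.flush_fn = flush_fn    # callback that performs the actual device write
        self.page_size = page_size
        self.buf = bytearray()

    def write(self, data: bytes) -> None:
        self.buf += data
        # Flush whole pages; keep any remainder buffered for the next write.
        while len(self.buf) >= self.page_size:
            self.flush_fn(bytes(self.buf[:self.page_size]))
            del self.buf[:self.page_size]

    def close(self) -> None:
        # Pad the final partial page so the device only ever sees full pages.
        if self.buf:
            self.buf += b"\x00" * (self.page_size - len(self.buf))
            self.flush_fn(bytes(self.buf))
            self.buf.clear()
```

With this in place, nine 512-byte sector writes reach the device as two full 4KB pages instead of nine sub-page transactions, trading a little buffering latency for far less write amplification.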

A more concrete alternative is to use advanced solid-state storage technologies now integrated into certain SSDs, such as those from Western Digital. These technologies give OEMs the ability to monitor and report real-time usable-life data on the actual SSD. Choosing SSDs with integrated monitoring takes the guesswork out of predicting SSD life and provides a solid answer to the question, "How long will this SSD last?"
