By Mike Clements posted Oct 27th 2011
Long-time Corsair forum member Rafael Jaimes III, better known to forum readers as "Synbios," recently sent us documentation of SSD life testing he performed using our Force F40-A SSDs. He is extremely knowledgeable and a great help to Corsair forum readers. We found the results very interesting, and we think you will too. Here is his submission in its entirety.
Rafael “Synbios” Jaimes III
For Corsair, Inc.
October 11th, 2011
A common question on the forums these days concerns the lifespan of a solid-state drive (SSD). Because the technology is very new compared to mechanical platter drives, SSD failure numbers are currently extremely low: there simply haven't been enough SSDs in the field for long enough to produce a significant number of failures. However, SSD lifespan is a growing concern for many consumers looking to make the switch. SSDs are expected to have much shorter lifespans than HDDs because of the inherent limitations of NAND flash memory, the current flash technology used in most SSDs, including all available Corsair SSD products.
The currently accepted method of reporting lifespan estimates for hardware products is mean time between failures (MTBF). Current Corsair products have MTBF figures between 1,000,000 and 2,000,000 hours. Here is a table compiling the latest Corsair drives and their MTBF times in a more useful scale: years.
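For reference, converting MTBF hours into years is a single division. The sketch below uses the 1,000,000- and 2,000,000-hour endpoints quoted above (the drive labels are placeholders, not specific models):

```python
# Convert MTBF figures from hours to years.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for name, mtbf_hours in (("1,000,000-hour drive", 1_000_000),
                         ("2,000,000-hour drive", 2_000_000)):
    years = mtbf_hours / HOURS_PER_YEAR
    print(f"{name}: about {years:.0f} years")  # -> about 114 / about 228 years
```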
Obviously, these MTBF figures have not been experimentally validated. Also, MTBF is predicted based on power-on hours. If you simply plugged in an SSD but never actually transferred any data to it, it would most likely stay functional for over 100 years. A drive like the Force GT could possibly reach over 200 years of power-on time.
We need a more accurate metric, one that can determine the lifespan of an SSD under regular use: writing and deleting files. Additionally, constant writing and deleting (program/erase, or P/E, cycles) is a concern for those who want to maintain their SSD's extreme performance. For this experiment, I used a utility to simulate real-world scenarios and test the robustness (performance, file integrity, and lifespan) of an SSD.
Materials & Methods
Anvil’s Storage Utilities 1.0.27 Beta5 (8/1/2011) was used to perform the endurance testing (the writing and deleting of files). ATTO was used to benchmark sequential data rates, and average random write speed is reported by Anvil’s application. SSDLife Pro was used to confirm the health of the drive at the end of the test. The test machine was an Asus P5W-DH Deluxe with an Intel QX6700 and ICH7R controller (SATA II support) running Windows 7 SP1 Enterprise 64-bit. TRIM was fully supported and enabled. Most importantly, a 40 GB Corsair Force drive was used. The drive was Rev. A, which utilizes 25nm NAND. The latest firmware at the time of writing, 2.2, was pre-installed on the drive.
The following figure shows SMART data from CrystalDiskInfo. Notice that the health status is “Good” at 100% and that there are no lifetime writes or reads (the F1 and F2 values, respectively) on this virgin drive.
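The SMART attributes referenced by hex ID throughout this article (05, B1, E7, F1, F2) appear as decimal IDs 5, 177, 231, 241, and 242 in command-line tools such as smartctl. A minimal sketch of pulling them out of `smartctl -A` output follows; the sample rows are illustrative only, with invented values rather than readings from the tested drive:

```python
def parse_smart_attributes(smartctl_output: str) -> dict:
    """Parse the ATA attribute table printed by `smartctl -A` into
    {attribute_id: (name, normalized_value, raw_value)}."""
    attrs = {}
    for line in smartctl_output.splitlines():
        parts = line.split()
        # Attribute rows have 10 columns and start with a numeric ID.
        if len(parts) >= 10 and parts[0].isdigit():
            attrs[int(parts[0])] = (parts[1], int(parts[3]), parts[9])
    return attrs

# Illustrative sample rows (layout matches smartctl's attribute table;
# the values are invented, not taken from the drive under test):
sample = """\
  5 Retired_Block_Count     0x0033   100   100   003    Pre-fail  Always       -       0
177 Wear_Range_Delta        0x0000   000   000   000    Old_age   Offline      -       0
231 SSD_Life_Left           0x0013   100   100   010    Pre-fail  Always       -       100
"""
print(parse_smart_attributes(sample))
```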
ATTO benchmarking utility shows initial sequential data rates at around 265 MB/sec writes and 280 MB/sec reads:
Three files were placed on the drive: a 35 KiB Excel file, a 49 KiB Word document, and a 2,181 KiB JPEG image. MD5 hashes were generated for the three files before testing commenced. Throughout the testing, the MD5 hashes were repeatedly regenerated, and no change in any MD5 value ever occurred. File integrity remained at 100%.
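The integrity check is straightforward to reproduce with Python's hashlib: hash each static file once before testing, then re-hash and compare after every cycle. A minimal sketch, using a throwaway temp file to stand in for the three documents:

```python
import hashlib
import os
import tempfile

def md5_of(path: str) -> str:
    """MD5 of a file, read in 1 MiB chunks so large files don't fill memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo: record a baseline hash, then verify it after a write/delete cycle.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"static test data")
    path = f.name
baseline = md5_of(path)
assert md5_of(path) == baseline, "file integrity compromised"
os.unlink(path)
print("integrity OK")
```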
The 40 GB (true SI notation, 1000 MB = 1 GB) Force drive has approximately 37.2 GiB (1 gibibyte = 1024 MiB) of capacity. Note that from here on, I will not be using the true SI notation of giga- and terabytes, but rather gibi- and tebibytes, which is what Windows actually reports and what the common reader would be familiar with. The endurance test was set to fill 36 GiB of the drive at a time with random data, then delete it and repeat. It is believed that the static data on the drive (the three test files) is rotated among the cells for even wear leveling.
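Anvil's endurance routine is closed source, but the fill-and-delete cycle it performs can be sketched in Python. This is an illustration under my own assumptions (file name and chunk size are arbitrary), and note that `os.urandom` yields incompressible data, whereas the data Anvil wrote was partly compressible, as the results below show:

```python
import os

GIB = 1024 ** 3

def endurance_pass(path: str, total: int = 36 * GIB,
                   chunk: int = 4 * 1024 * 1024) -> int:
    """One fill-and-delete cycle: stream `total` bytes of random data to
    `path`, then delete the file so TRIM can reclaim the blocks.
    Returns the number of bytes written."""
    written = 0
    with open(path, "wb") as f:
        while written < total:
            n = min(chunk, total - written)
            f.write(os.urandom(n))
            written += n
    os.remove(path)
    return written

# endurance_pass("fill.bin")  # one 36 GiB pass; loop to accumulate TiB of writes
```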
The health indicator (E7) remained at 100% until just over 19 TiB of written data (F1) when it dropped to 97%. The average random write speed reported by Anvil’s application was 151 MiB/s.
After about three weeks of testing and 127 TiB written to and deleted from the drive, the average random write speed remained steady at 146 MiB/s. The health status was down to 57%. There were still no retired blocks (05), but the wear range delta had started increasing and stood at 49 at this point (B1).
Surprisingly, SSDlife Pro still shows good health for the drive, and ATTO benchmarks were only slightly changed from the virgin drive after the 127 TiB of written and deleted data.
By 168 TiB written, the media wearout indicator (MWI, the E7 health value) had dropped to 25%. Speeds stayed the same, and there were still no retired blocks (05). The wear range delta continued rising slowly and was up to 83 at this point.
By 189 TiB written, the MWI was down to 10%. The average write speed was still the same, with no retired blocks. The MWI stayed at 10% even beyond 206 TiB, as shown by SSDLife. Even at 228 TiB, the MWI was still 10%, speeds stayed consistent, and no blocks were ever retired. The MD5 hashes were regenerated and remained unchanged.
By 240.8 TiB, the drive had suddenly disappeared from the controller. It had finally called it quits. The overall history of the drive is shown in the following plots. Due to natural compression, the true amount of data written to the NAND is actually less than the amount reported by Anvil's program. To illustrate any potential differences, I plotted the MWI health status (E7) against both E9 (true data written to SSD blocks) and EA/F1 (effective data written, “lifetime writes from host”).
It is important to note that the write speed never changed throughout the entire experiment. Also, the MD5 hashes were regenerated on every iteration of the program loop, and there was never any discrepancy in file integrity.
The Corsair Force drive, thanks to SandForce DuraClass technology along with TRIM, can maintain factory performance and 100% file integrity up to 240 TiB of writes and deletes. Assuming the average user writes 20 GiB per day, this amounts to over 33 years of normal usage! Unfortunately, the drive did disappear after these tests, but it is believed the failure was in the SSD's own drive controller. If the NAND blocks were transferred to a new drive controller, it is very likely the data would be retrievable. Overall, the SandForce controller is extremely robust and showed very consistent speeds throughout its entire life. No secure erases or special maintenance are required to maintain factory write and read speeds as long as TRIM is enabled. For anyone with one of these SSDs, I recommend checking the health status on a regular basis using a SMART-reading tool or SSDLife. Once the health reaches 10%, I would begin considering purchasing a new drive. There is about a 40 TiB buffer from the 10% health mark to complete controller failure, which should give a user plenty of time to purchase a new drive and back up their contents.
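The 33-year figure follows directly from the endurance total and the assumed daily write volume:

```python
TIB = 1024 ** 4
GIB = 1024 ** 3

endurance = 240 * TIB   # total host writes before the drive failed
per_day = 20 * GIB      # assumed typical daily write volume

years = endurance / per_day / 365
print(f"estimated lifespan: {years:.1f} years")  # -> 33.7 years
```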