dm-cache

As a result, the speed of costly SSDs is combined with the storage capacity offered by slower but less expensive HDDs.

[6] When configured to use the multiqueue (mq) or stochastic multiqueue (smq) cache policy, the latter being the default, dm-cache uses the SSD to store the data associated with random reads and writes. This capitalizes on the near-zero seek times of SSDs and avoids the I/O operations that are typical HDD performance bottlenecks.
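As a sketch of how such a cached device is assembled, the dm-cache target can be set up manually with dmsetup; the device paths and sizes below are hypothetical placeholders, and the trailing `smq 0` selects the default stochastic multiqueue policy with no extra policy arguments:

```shell
# Create a cached device named "cached" from three underlying devices:
#   /dev/mapper/meta   - small device holding cache metadata (hypothetical)
#   /dev/mapper/fast   - SSD used as the cache (hypothetical)
#   /dev/mapper/slow   - HDD origin device (hypothetical)
# Table fields: start length cache <metadata> <cache> <origin> \
#               <block size in 512-byte sectors> <#feature args> <policy> <#policy args>
dmsetup create cached --table \
  "0 41943040 cache /dev/mapper/meta /dev/mapper/fast /dev/mapper/slow 512 0 smq 0"
```

Here `0` feature arguments leaves the target in its default (write-back) mode, and the 512-sector cache block size (256 KiB) is one commonly used value, not a requirement.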

[7] Another dm-cache project with similar goals was announced by Eric Van Hensbergen and Ming Zhao in 2006, as the result of internship work at IBM.

[8] Later, Joe Thornber, Heinz Mauelshagen and Mike Snitzer provided their own implementation of the concept, which resulted in the inclusion of dm-cache into the Linux kernel.

In the write-through operating mode, write requests are not reported as completed until the data reaches both the origin and the cache device, so no clean blocks are ever marked as dirty.
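Write-through is selected through the target's feature arguments. A minimal sketch, assuming the same hypothetical device names as a typical manual setup, passes one feature argument (`writethrough`) in the table:

```shell
# Same table layout as a default setup, but "1 writethrough" requests
# write-through mode: writes complete only after reaching both devices.
# All device paths below are hypothetical placeholders.
dmsetup create cached-wt --table \
  "0 41943040 cache /dev/mapper/meta /dev/mapper/fast /dev/mapper/slow 512 1 writethrough smq 0"
```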

[6][7] As of August 2015 and version 4.2 of the Linux kernel,[12] three cache policies are distributed with the Linux kernel mainline, of which dm-cache uses the stochastic multiqueue policy by default.[6][7]

Logical Volume Manager includes lvmcache, which provides a wrapper for dm-cache integrated with LVM.
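With lvmcache, the same caching arrangement can be managed through ordinary LVM commands instead of raw dmsetup tables. A minimal sketch, where the volume group name `vg0`, logical volume `slowlv`, and the SSD path are hypothetical:

```shell
# Create a cache pool on the fast (SSD) physical volume (hypothetical names).
lvcreate --type cache-pool -L 10G -n fastpool vg0 /dev/sdb

# Attach the pool to an existing logical volume on the slow device,
# turning vg0/slowlv into a dm-cache-backed cached volume.
lvconvert --type cache --cachepool vg0/fastpool vg0/slowlv
```

LVM then creates and maintains the underlying dm-cache metadata and table entries itself, so the cached volume survives reboots without manual dmsetup invocations.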