Use LVM thin LV on cache LV
As of LVM release 2.02.106, LVM supports using fast block devices (such as SSDs) as write-back or write-through caches for larger, slower block devices.
The cache logical volume type uses a small, fast LV to improve the performance of a large, slow LV. It is based on dm-cache (the kernel driver), so you can only use it on CentOS 7.1 / Ubuntu 15.04 or later.
For this article, I will assume you are using Ubuntu 15.04.
You may need to understand some terms before doing anything:
- origin LV (OriginLV): the large, slow LV
- cache data LV (CacheDataLV): small, fast LV for cache pool data
- cache metadata LV (CacheMetaLV): small, fast LV for cache pool metadata
- cache pool LV (CachePoolLV): CacheDataLV + CacheMetaLV
- cache LV (CacheLV): OriginLV + CachePoolLV
Create origin LV
First we need an origin LV to store data; you can create one with:
lvcreate -n OriginLV -L LargeSize VG SlowPVs
LVM currently requires the cache LV and the origin LV to be in the same VG. So when you follow this post to create the LVs, make sure you target the same VG.
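To make the same-VG requirement concrete, here is an illustrative setup. The device names /dev/sdb (slow HDD) and /dev/sdc (fast SSD) and the 500G size are assumptions for the example, not values from the original post:

```shell
# Register both the slow and the fast disk as PVs, then put them
# into one volume group -- the cache and origin LVs must share a VG.
pvcreate /dev/sdb /dev/sdc
vgcreate VG /dev/sdb /dev/sdc

# Create the origin LV on the slow disk only.
lvcreate -n OriginLV -L 500G VG /dev/sdb
```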
Create cache LV on origin LV
Install the thin-provisioning-tools package:
apt-get install -y thin-provisioning-tools
Create the cache data LV.
lvcreate -n CacheLV -L CacheSize VG FastPVs
Create the cache pool LV. You can use writeback or writethrough as the cache mode. Writethrough ensures that any data written will be stored both in the cache pool LV and on the origin LV. Writeback delays writing data blocks from the cache pool back to the origin LV, so I chose writeback for better performance.
lvconvert --type cache-pool --cachemode writeback VG/CacheLV FastPVs
Create a cache LV by linking the cache pool LV to the origin LV. CacheLV takes the name of OriginLV, while OriginLV is renamed to OriginLV_corig and becomes hidden.
lvconvert --type cache --cachepool VG/CacheLV VG/OriginLV
Create thin pool LV on cache LV
Just convert the cache LV (now named OriginLV) to a thin pool LV.
lvconvert --type thin-pool VG/OriginLV FastPVs
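With the thin pool in place, thin volumes can be carved out of it. This is an illustrative sketch; the 100G virtual size and the name ThinLV are my own examples, not from the original post:

```shell
# Create a 100G thin LV backed by the (cached) thin pool VG/OriginLV.
lvcreate -V 100G -T VG/OriginLV -n ThinLV

# Format and mount it like any other LV.
mkfs.ext4 /dev/VG/ThinLV
mount /dev/VG/ThinLV /mnt
```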
Check cache status
You can use dmsetup status to check the cache status. The output should contain a line like this:
vg-thin_data_tdata: 0 170409984 cache 8 1711/8192 128 80811/573440 61934 28840 42226 46293 0 1320 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8
The content after the colon follows this format (see the kernel's dm-cache documentation):
<metadata block size> <#used metadata blocks>/<#total metadata blocks>
<cache block size> <#used cache blocks>/<#total cache blocks>
<#read hits> <#read misses> <#write hits> <#write misses>
<#demotions> <#promotions> <#dirty> <#features> <features>*
<#core args> <core args>* <policy name> <#policy args> <policy args>*
<cache metadata mode>
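As an illustration (my own sketch, not part of the original post), the read hits and read misses fields from the sample line above can be pulled out with awk to compute a read hit ratio:

```shell
# Parse the sample `dmsetup status` cache line shown above and report
# the read hit ratio. With awk's default field splitting, field 9 is
# <#read hits> and field 10 is <#read misses>.
line='vg-thin_data_tdata: 0 170409984 cache 8 1711/8192 128 80811/573440 61934 28840 42226 46293 0 1320 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8'
echo "$line" | awk '{ printf "read hit ratio: %.1f%%\n", 100 * $9 / ($9 + $10) }'
# -> read hit ratio: 68.2%
```

On a live system you would pipe the output of `dmsetup status` into the same awk expression.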
Use thin/cache LV as root
If you want to use a thin/cache LV as the root filesystem, you may be unable to boot after installation, because the initrd image contains neither the thin/cache kernel modules nor the tools needed to check the LVs. You can put a hook script into /etc/initramfs-tools/hooks/ to add them, give it execute permission, then rebuild the initrd with update-initramfs -vu.
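A minimal sketch of such a hook (an illustrative example, not the post's original script; the hook name, module names, and tool paths are assumptions based on initramfs-tools conventions and may need adjusting for your kernel):

```shell
#!/bin/sh
# /etc/initramfs-tools/hooks/lvm-cache  (illustrative; the name is arbitrary)
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

. /usr/share/initramfs-tools/hook-functions

# Pull the dm-cache / dm-thin kernel modules into the initrd.
manual_add_modules dm_cache dm_cache_mq dm_thin_pool

# Include the pool check tools from thin-provisioning-tools,
# which LVM runs before activating cache/thin pools.
copy_exec /usr/sbin/cache_check
copy_exec /usr/sbin/thin_check
```

Remember to chmod +x the hook, then run update-initramfs -vu.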
Hi,
I think you're wrong: "everything must be in a single volume group"
have a look here:
https://rwmj.wordpress.com/2014/05/22/using-lvms-new-cache-feature/
"Creating the cache layer
What is not clear from the documentation is that everything must be in a single volume group. That is, you must create a volume group which includes both the slow and fast disks — it simply doesn’t work otherwise."
Thanks for your tips!
I only used one VG in my system and in this post, so I didn't notice that issue.
I'll update it in my post, thanks!