When EMC first announced VFCache back in February, I was one of the first folks to write about using VFCache in a VMware environment. At launch, the short answer was: “You can, but there are some limits that might make you not want to…”, so I’m really pretty happy to report on EMC’s latest update to the product.
EMC’s official announcement covers a number of new features in VFCache, including an industry first, and I’ll get to those, but I’m going to start with what I, personally, consider the best feature add:
VFCache Now Supports VMware vMotion
That’s right, VFCache now has full interoperability with vMotion. Whereas before you needed a multi-step workaround to vMotion a VFCache-enabled VM from one ESXi host to another, no additional steps are required now: you can vMotion that VM the same as you would any VM that’s not using VFCache.
It’s not 100% seamless — but all the set-up tasks are done once up-front, and then you’re good to go. Enabling vMotion with VFCache requires the use of EMC’s Virtual Storage Integrator (VSI) vCenter plug-in. VSI manages the VM’s relationship with the VFCache on the ESXi hosts involved, allowing the vMotion to work without manual intervention (which is to say, “allowing vMotion to work like, well, vMotion”…).
I’m excited by this news, and I know that lots of other folks who would love to get the performance-enhancing benefits of VFCache for their virtualized workloads will be excited, too.
The best part: according to the announcement, this new interoperability with vMotion is available right now. Existing VFCache customers will likely need to update their VFCache software and VSI plug-in in order to get the new features.
Other Enhancements in VFCache
EMC also announced other enhancements to the VFCache product, including:
- In-line deduplication of data in the cache, allowing for a larger effective cache size, as well as extending the life of the cache cards (as fewer write operations will be performed) — this is the industry first mentioned above
- Support for multiple PCIe cards per server
- Support for larger PCIe cards — now up to 700GB per card
- Support for LSI’s customized mezzanine cards for Cisco UCS B-Series Blade Servers
- Tie-ins to provide VFCache reporting info to EMC VMAX storage arrays (similar functionality for EMC VNX will be coming in 2013)
All in all, a great update to a very promising product, originally known as Project Lightning.
Project Thunder, EMC’s network-attached, shareable Flash device, receives a footnote as a future offering. I have no info on plans for its availability (and even if I did, it’s likely that NDAs wouldn’t allow me to say anything anyway). I’ll post more on Thunder as more info becomes available.