EMC Updates VFCache – Now Supports VMware vMotion

When EMC first announced VFCache back in February, I was one of the first folks to write about using VFCache in a VMware environment.  At launch, the short answer was: “You can, but there are some limits that might cause you to not want to…”, so I’m pretty happy to report on EMC’s latest update to the product.

EMC’s official announcement covers a number of new features in VFCache, including an industry first, and I’ll get to those, but I’m going to start with what I personally consider the best feature addition:

VFCache Now Supports VMware vMotion

That’s right, VFCache now has full interoperability with vMotion.  Previously, you had to go through a multi-step workaround to vMotion a VFCache-enabled VM from one ESXi host to another; now no additional steps are required, and you can vMotion that VM the same as you would any VM that isn’t using VFCache.

It’s not 100% seamless, but all the setup tasks are done once, up front, and then you’re good to go.  Enabling vMotion with VFCache requires EMC’s Virtual Storage Integrator (VSI) vCenter plug-in.  VSI manages the VM’s relationship with the VFCache cards on the ESXi hosts involved, allowing the vMotion to work without manual intervention (which is to say, “allowing vMotion to work like, well, vMotion”…).
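For the curious, here’s a minimal sketch of what “vMotion like any other VM” looks like from an automation standpoint.  It’s just the standard vSphere API via the pyVmomi Python bindings, and it assumes the one-time VSI plug-in setup described above has already been done; the vCenter address, credentials, VM name, and host name are hypothetical placeholders, and nothing in the call is VFCache-specific.

    # Minimal sketch: an ordinary vMotion of a (hypothetically) VFCache-enabled VM
    # using the standard vSphere API via pyVmomi.  Assumes the one-time VSI plug-in
    # setup is already in place; hostnames, credentials, and VM names are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab convenience only; validate certs in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vim_type, name):
        """Return the first managed object of the given type with the given name."""
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.Destroy()

    vm = find_by_name(vim.VirtualMachine, "vfcache-enabled-vm")      # placeholder VM name
    dest_host = find_by_name(vim.HostSystem, "esxi02.example.com")   # placeholder host

    # A plain vMotion: migrate the running VM to the destination host.
    task = vm.MigrateVM_Task(host=dest_host,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)
    print("vMotion task started:", task.info.key)

    Disconnect(si)

The point of the sketch is simply that there’s no VFCache-aware step in it: once VSI has done its part, the migration call is the same one you’d issue for any other VM.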

I’m excited by this news, and I know that lots of other folks who would love to get the performance-enhancing benefits of VFCache for their virtualized workloads will be excited, too.

The best part: according to the announcement, this new interoperability with vMotion is available right now.  Existing VFCache customers will likely need to update their VFCache software and VSI plug-in in order to get the new features.

Other Enhancements in VFCache

EMC also announced other enhancements to the VFCache product, including:

  • In-line deduplication of data in the cache, allowing for a larger effective cache size, as well as extending the life of the cache cards (as fewer write operations will be performed) — this is the industry first mentioned above
  • Support for multiple PCIe cards per server
  • Support for larger PCIe cards — now up to 700GB per card
  • Support for LSI’s customized mezzanine cards for Cisco UCS B-Series Blade Servers
  • Tie-ins to provide VFCache reporting info to EMC VMAX storage arrays (similar functionality for EMC VNX will be coming in 2013)

All in all, a great update to a very promising product, originally known as Project Lightning.

Project Thunder, EMC’s network-attached, sharable Flash device, receives a footnote as a future offering.  I have no info on plans for its availability (and even if I did, it’s likely that NDAs wouldn’t allow me to say anything anyway).  I’ll post more on Thunder as more info becomes available.

EMC VFCache (Project Lightning) in a VMware Environment

[UPDATE: 4 September 2012: This post refers to VFCache at the time of launch and is outdated now that EMC has released VFCache 1.5. You can read my post about that release here.]

Today, EMC announced its much-anticipated server-side Flash product, Project Lightning, under its official name, VFCache.

Lots of folks are writing about VFCache and all the amazing performance-enhancing results it produces.  Some are doing deep dives into how it works.  Since there are plenty of places to get all that information (there are five VFCache white papers on the EMC company site), I don’t see any reason to cover that same territory in detail here.  Instead, after a brief overview of VFCache, I’ll be discussing how it works in a VMware environment.

Full Disclosure: In my position as an EMC employee I’ve had access to VFCache information in advance of today’s launch.  I also have access to roadmap information about planned future enhancements and expansions.  Under my agreements with EMC, I am not in a position to discuss VFCache futures.  I will restrict my commentary to VFCache capabilities “at launch”.
