Tuesday, February 12, 2013

SSD cache released by Intel

Intel today released a version of its SSD-based Cache Acceleration Software (CAS) for Linux servers, which it said offers up to 18 times the performance for read-intensive applications, such as online transaction processing (OLTP) systems.

Additionally, Intel's CAS now supports caching to NAND flash (solid-state drives and PCIe cards) in storage arrays. And it supports VMware vMotion, which allows virtual machines to migrate between systems while keeping hot data in the cache, regardless of the host machine.

"The benefit isn't just to provide better performance, but to ensure that no matter what happens, that performance stays consistent," said Andrew Flint, Intel's CAS product manager.

Intel acquired its CAS technology in September with its purchase of Canadian startup Nevex. Nevex sold the software as CacheWorks, but Intel quickly rebranded it CAS.

Cache acceleration and management software for NAND flash is a hot market. More than a dozen vendors are shipping products, and acquisitions are on the rise. Earlier last year, SanDisk acquired FlashSoft for its flash cache acceleration and management software. That was followed by Samsung's acquisition of Nvelo for its Dataplex SSD caching software.

The software identifies data experiencing high levels of reads and moves it to NAND flash, in the form of SSDs, to boost performance. Intel announced CAS support for Windows systems in December. The latest release, for Linux, also lets admins select the applications that will benefit from the higher-performance SSDs, or allow the CAS software to automatically move I/O-intensive data to the flash storage. Intel said its CAS product can target hot data on back-end storage, such as a SAN, for both Windows and Linux machines, and allows virtual machine migration while maintaining high I/O performance with the flash cache.

"We took the problem of the I/O bottleneck from the side of accelerating applications," Flint said. "We have technology to direct performance to applications. Because we do that, we find most of our sales are to the DBAs and the app admins at companies.

"You don't have to rearchitect or configure your applications in any way, shape or form. You don't have to do anything on back-end storage," Flint continued. "Neither end even knows the caching is happening. It sits in the middle, automatically identifies hot, active data, places a copy on high-speed media, and the applications by extension run faster."

Flint said that on standard databases, the caching software can triple performance. On OLTP applications, which are more read-intensive, performance can jump 18-fold, he said.

While application servers already perform a certain amount of caching in volatile DRAM, the amount of caching is limited by the 4GB to 8GB of memory typically on board a server. Intel's CAS software takes advantage of higher-capacity SSDs, which can have as much as a terabyte of capacity, to improve performance for far more data. The performance improvement varies from system to system and depends on the ratio of back-end data to active data on the server, and on whether the data is read- or write-intensive. CAS is a read-acceleration cache.

"If your application is doing nothing more than a bunch of re-reads, you'll get the same performance on our two-level cache as on a single-level SSD cache," Flint said. "On the other hand, in situations where you're running databases ... where things like indexes or temp tables become more hot for a period of time, we can guarantee those indexes and temp tables are in memory, and in doing so, we can drive performance out of the system that goes well beyond what you'd get from simply moving all of your data to SSD."

To make sure it doesn't duplicate work already being performed in DRAM, the CAS software takes control of all data access. The hottest data is placed in DRAM, while less hot, but still I/O-active, data is placed on SSD. Intel's CAS software provides cooperative integration between the standard server DRAM cache and the Intel SSD cache, creating a multi-level cache that optimizes the use of system memory and automatically determines the best cache level for active data.
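To make the idea concrete, here is a minimal sketch in Python of a two-tier read cache along the lines described above: frequently read blocks get a copy on an SSD tier, the hottest of those are promoted to a small DRAM tier, and cold reads go straight to back-end storage. The class name, tier sizes and promotion thresholds are assumptions for illustration only, not Intel CAS's actual design or API.

from collections import Counter, OrderedDict


class TwoTierReadCache:
    """Illustrative read-acceleration cache: DRAM tier over an SSD tier (assumed design)."""

    def __init__(self, backend_read, dram_blocks=8, ssd_blocks=64, promote_after=3):
        self.backend_read = backend_read      # function: block_id -> bytes (back-end storage)
        self.dram = OrderedDict()             # hottest blocks, LRU order
        self.ssd = OrderedDict()              # warm blocks, LRU order
        self.dram_blocks = dram_blocks
        self.ssd_blocks = ssd_blocks
        self.promote_after = promote_after    # reads before a block is considered "hot"
        self.read_counts = Counter()          # per-block read frequency

    def read(self, block_id):
        self.read_counts[block_id] += 1

        # DRAM hit: fastest path, refresh its LRU position.
        if block_id in self.dram:
            self.dram.move_to_end(block_id)
            return self.dram[block_id]

        # SSD hit: serve from flash; promote to DRAM if the block keeps getting re-read.
        if block_id in self.ssd:
            data = self.ssd[block_id]
            self.ssd.move_to_end(block_id)
            if self.read_counts[block_id] >= 2 * self.promote_after:
                self._insert(self.dram, self.dram_blocks, block_id, data)
                del self.ssd[block_id]
            return data

        # Miss: read from back-end storage; cache a copy on SSD only once the
        # block has shown enough read activity to count as hot data.
        data = self.backend_read(block_id)
        if self.read_counts[block_id] >= self.promote_after:
            self._insert(self.ssd, self.ssd_blocks, block_id, data)
        return data

    @staticmethod
    def _insert(tier, capacity, block_id, data):
        tier[block_id] = data
        tier.move_to_end(block_id)
        if len(tier) > capacity:
            tier.popitem(last=False)          # evict the least recently used block


# Usage: repeated reads of block 42 eventually hit DRAM instead of back-end storage.
cache = TwoTierReadCache(backend_read=lambda b: f"block-{b}".encode())
for _ in range(10):
    cache.read(42)

The point of the sketch is the effect the article describes: the caching layer sits in the middle and keeps copies of hot data on faster media, so neither the application nor the back-end storage has to change.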

Intel has pre-qualified the CAS software for use with its enterprise-class DC S3700 SATA SSD or its 910 series PCIe flash card.
