The Gateway to Algorithmic and Automated Trading

GemStone Systems releases GemFire Enterprise 5.1

First Published 10th June 2008

GemStone Systems has announced GemFire Enterprise 5.1, the latest version of its high-performance enterprise data fabric (EDF). The new GemFire Enterprise 5.1 release serves as a distributed operational data management infrastructure that sits between clustered application processes and back-end data sources to provide very low-latency, predictable, high-throughput data sharing and event distribution.

By managing data in memory, GemFire Enterprise 5.1 enables extremely high-speed data sharing that turns a network of machines into a single, logical data management unit or a data fabric.

GemFire Enterprise 5.1 introduces an advanced set of technical features to deliver powerful, end-to-end scalability and performance improvements. By augmenting native C++/C# caching capabilities, GemFire Enterprise 5.1 provides highly available asynchronous cache update notifications to ensure clients are protected against server failures.

"As enterprises seek to move from a typical disaster recovery scenario to a resilient architecture, companies need a dynamic distributed cache to support next-generation enterprise utilities, especially for compute-intensive, fault-tolerant applications," said Chris Wolf, senior analyst, Burton Group.

"There are a large number of variables in a distributed system which significantly increase the possibility of an error, such as loss of data consistency, missed event notifications, or failure conditions arising from applications, resource limitations or machine failures," said Jags Ramnarayan, chief architect at GemStone Systems. "With this release, GemFire Enterprise 5.1 minimizes application risk under such conditions and lets users specify any level of redundancy when partitioning data across the cluster. GemFire Enterprise 5.1 controls how concurrent load is handled on any server through a configurable set of workers, and ensures that events enqueued for delivery to clients can survive server failures."

The combination of distributed data caching with reliable message delivery provides customers with the tools to build next-generation high-performance, real-time applications. For grid users, GemFire Enterprise 5.1 offers near-linear scalability and predictable performance as additional resources become available to the data fabric.

"As more and more organizations turn to distributed data grids to improve application performance, minimize latency and reduce operating expenses, they must address the growing reliability and scalability challenges," continues Ramnarayan. "GemFire Enterprise 5.1 will allow users to leverage native client cache enhancements, configure more than one level of redundancy and optimize for high concurrency to guarantee data availability and integrity. This release reinforces our commitment to delivering reliable solutions to improve and simplify our clients' most critical IT processes and deliver best-in-class scalability for distributed data grids with sub-millisecond latency."

New features of GemFire Enterprise 5.1 include:

Partitioned Data Regions

Data partitioning in GemFire Enterprise 5.1 offers improved redundancy. For partitioned regions configured with redundancy, listener invocation automatically fails over to the newly designated primary. Partitioned regions inter-operate transparently with non-partitioned regions within a distributed system and support eager or lazy recovery. User-defined policies and configurations control the memory management and redundancy of partitioned regions, guaranteeing "total ordering" of all events across the distributed system without requiring transactions or locks. All updates are routed through the primary partition, ensuring a balanced memory usage profile.
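The partitioning-with-redundancy scheme described above can be illustrated with a small conceptual sketch. This is not GemFire's actual API; the class and member names below are invented for illustration. Entries hash to buckets, each bucket is hosted on a primary plus a configurable number of redundant copies, and when the primary fails a surviving copy is promoted:

```python
# Conceptual sketch (not GemFire's API): hash-partitioning keys across
# buckets with redundant copies; index 0 of each owner list is the primary,
# and failover promotes the next surviving copy.

class PartitionedRegion:
    def __init__(self, members, buckets=8, redundant_copies=1):
        self.buckets = buckets
        self.copies = redundant_copies + 1          # primary + redundant copies
        self.members = list(members)
        # assign each bucket to `copies` distinct members; index 0 is primary
        self.owners = {b: [self.members[(b + i) % len(self.members)]
                           for i in range(self.copies)]
                       for b in range(buckets)}

    def bucket_for(self, key):
        return hash(key) % self.buckets

    def primary(self, key):
        return self.owners[self.bucket_for(key)][0]

    def fail(self, member):
        # drop the failed member everywhere; the next copy becomes primary
        for owners in self.owners.values():
            if member in owners:
                owners.remove(member)

region = PartitionedRegion(["serverA", "serverB", "serverC"])
k = "trade-42"
old_primary = region.primary(k)
region.fail(old_primary)
print(region.primary(k))   # a surviving redundant copy now serves as primary
```

Because every bucket keeps at least one redundant copy, updates for the key remain available after a single member failure; this is the essence of "listener invocation fails over to the newly designated primary."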

Reliable and Highly Available Event Delivery

GemFire Enterprise 5.1 ensures clients are resilient to server failures, with continuous availability and on-demand scalability. The high-speed transport layer, based on TCP and reliable multicast, ensures 100 percent data availability with no downtime. With distributed event notifications, event processing is spread uniformly across the data set. GemFire Enterprise 5.1 also offers distributed query support, executing OQL queries using 'scatter-gather' algorithms.
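The highly available event delivery described above relies on keeping a redundant copy of each client's subscription queue on a second server. A minimal sketch of that idea, with invented names and sequence-number de-duplication standing in for the real mechanism:

```python
# Conceptual sketch: a client's subscription queue is mirrored on a primary
# and a secondary server; after primary failure the secondary resumes
# delivery, and sequence numbers let the client discard duplicates.

class SubscriptionQueue:
    def __init__(self):
        self.events = []

    def enqueue(self, seq, event):
        self.events.append((seq, event))

class Client:
    def __init__(self):
        self.last_seq = 0
        self.received = []

    def deliver(self, queue):
        for seq, event in queue.events:
            if seq > self.last_seq:        # skip anything already seen
                self.received.append(event)
                self.last_seq = seq

primary, secondary = SubscriptionQueue(), SubscriptionQueue()
for seq, ev in [(1, "put k1"), (2, "put k2"), (3, "put k3")]:
    primary.enqueue(seq, ev)
    secondary.enqueue(seq, ev)             # redundant copy of the queue

client = Client()
client.deliver(primary)                    # events 1-3 arrive
# primary fails; the secondary resumes from its redundant queue
secondary.enqueue(4, "put k4")
client.deliver(secondary)                  # only event 4 is new
print(client.received)                     # ['put k1', 'put k2', 'put k3', 'put k4']
```

No event is lost and none is applied twice, which is what lets enqueued events "survive server failures" from the client's point of view.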

Concurrent Workload Management

GemFire Enterprise 5.1 allows hundreds of client connections to be multiplexed over a configurable number of worker threads, providing better concurrent workload management so that clients experience reduced buffering. With control over the number of threads, the conserve-sockets setting can be set to false to parallelize data traffic to peer members and provide better overall throughput, especially if nodes are multi-homed. By reducing the number of active client connections and providing a configuration option for a client to connect to a single endpoint, connections can now be acquired more lazily than in the past.
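The multiplexing pattern above, many client connections served by a small, configurable worker pool rather than a thread per connection, can be sketched with a standard thread pool. This is an illustration of the pattern, not GemFire's server internals:

```python
# Conceptual sketch: 100 client requests multiplexed over a small,
# configurable pool of worker threads instead of one thread per connection.
from concurrent.futures import ThreadPoolExecutor

MAX_WORKERS = 4                     # stand-in for a configurable worker count

def handle(request_id):
    # stand-in for serving a single cache operation from a client connection
    return f"handled-{request_id}"

with ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    results = list(pool.map(handle, range(100)))   # 100 requests, 4 workers

print(len(results))   # 100
```

Capping the worker count bounds server-side resource usage no matter how many clients connect, which is the point of the concurrent workload management feature.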

New High Performance Persistence Implementation

GemFire Enterprise 5.1 offers a high-performance persistence implementation in which every operation is appended to disk files. Circular event log files grow to a configured size and automatically roll over to a new file. A background thread coalesces the logs to reclaim disk space, resulting in almost a 100 percent throughput gain for asynchronous persistence and a 50 percent gain for synchronous persistence.
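The append-and-roll-over scheme can be illustrated in miniature. The sketch below uses in-memory lists as stand-ins for log files and keeps only the latest value per key when coalescing; the names and sizes are invented for illustration:

```python
# Conceptual sketch: append-only event logs that roll over at a configured
# size, with closed logs coalesced so only the latest value per key is kept.

MAX_RECORDS_PER_LOG = 3            # stand-in for a configured file size

logs = [[]]                        # list of log "files", each a list of records

def append(key, value):
    if len(logs[-1]) >= MAX_RECORDS_PER_LOG:
        logs.append([])            # roll over to a new log file
    logs[-1].append((key, value))

def coalesce():
    # fold all closed logs into one, keeping the last value seen per key
    latest = {}
    for log in logs[:-1]:
        for key, value in log:
            latest[key] = value
    logs[:-1] = [list(latest.items())]

for i in range(7):
    append(f"k{i % 2}", i)         # repeated keys create reclaimable space

coalesce()
print(len(logs), sum(len(l) for l in logs))   # 2 3
```

Appending is always sequential I/O, which is where the throughput gain comes from, while coalescing in the background reclaims the space occupied by superseded records.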

Improved Native C++/C# Client Cache

Several native client cache enhancements in the client-server caching model of GemFire Enterprise 5.1 foster easy data sharing and collaboration across applications. Cache-level heap LRU implementations reduce the risk of fragmentation when working with varying object sizes in the cache. By executing queries on the server side, clients can access partitioned regions and receive reliable event notifications through subscriptions. Client-side load does not bottleneck the cache server or impede its ability to scale to a growing number of clients, ensuring seamless scalability for grid-like environments.
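A heap LRU, as opposed to an entry-count LRU, bounds the cache by total memory so that objects of varying size evict correctly. A minimal sketch of the idea, using approximate per-object sizes; the class is invented for illustration and is not GemFire's implementation:

```python
# Conceptual sketch: an LRU cache bounded by approximate total bytes rather
# than entry count, so large and small objects share one heap budget.
from collections import OrderedDict
import sys

class HeapLRUCache:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.data = OrderedDict()          # oldest entries first
        self.used = 0

    def put(self, key, value):
        if key in self.data:
            self.used -= sys.getsizeof(self.data.pop(key))
        self.data[key] = value
        self.used += sys.getsizeof(value)
        while self.used > self.max_bytes:  # evict least recently used
            _, evicted = self.data.popitem(last=False)
            self.used -= sys.getsizeof(evicted)

    def get(self, key):
        self.data.move_to_end(key)         # mark as recently used
        return self.data[key]

cache = HeapLRUCache(max_bytes=300)
cache.put("small", "x")
cache.put("big", "y" * 200)
cache.put("bigger", "z" * 200)   # pushes the total over the byte budget
print(list(cache.data))          # older entries evicted until under budget
```

Counting bytes instead of entries is what lets a cache with wildly varying object sizes stay inside a fixed heap budget, which is the fragmentation-risk point the release notes make.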