This article explores the various Java caching technologies that can play a critical role in improving application performance.
What is Cache Management?
A cache is a temporary, high-speed memory buffer that stores the most frequently used data, such as live transactions and logical datasets. It can dramatically improve the performance of an application, since reads and writes happen in the memory buffer, reducing retrieval time and the load on the primary data source. Implementing and maintaining a cache is important in any Java enterprise application.
- The client-side cache temporarily stores static data transmitted over the network from the server, avoiding unnecessary calls to the server.
- The server-side cache could be a query cache, a CDN cache, or a proxy cache, where the data is stored on the respective servers instead of temporarily on the browser.
Adopting the right caching techniques and tools lets the programmer focus on implementing business logic, leaving backend complexities such as cache expiration, mutual exclusion, spooling, and cache consistency to the frameworks and tools.
Caching should be designed specifically for the target environment, considering single/multiple JVMs and clusters. Below are multiple scenarios where caching can be used to improve performance.
1. In-Process Cache – The in-process (local) cache is the simplest cache, where the cache store is effectively an object accessed inside the application process. It is much faster than any cache accessed over a network, but it is strictly available only to the process that hosts it.
- If the application is deployed on only one node, an in-process cache is the right candidate for storing frequently accessed data with fast access.
- If the application runs as multiple instances, each with its own in-process cache, keeping the data in sync across all instances is a challenge and can lead to inconsistency.
- An in-process cache can bring down the performance of an application where server memory is limited and shared. In such cases, the garbage collector is invoked often to clean up cache objects, which adds overhead.
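As a minimal sketch of the idea, an in-process cache can be as simple as a thread-safe map held inside the JVM (the `InProcessCache` class below is illustrative, not a specific library's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal in-process cache: the store is just an object inside the JVM,
// so lookups are plain memory reads with no network hop.
class InProcessCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        store.put(key, value);
    }

    public V get(K key) {
        return store.get(key);   // returns null on a cache miss
    }

    public void evict(K key) {
        store.remove(key);
    }

    public int size() {
        return store.size();
    }
}
```

Because the store lives in the same process, every instance of the application gets its own copy — which is exactly the synchronization problem described above.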
In-Memory Distributed Cache
Distributed caches are built externally to an application: they support reads/writes to and from data repositories, keep frequently accessed data in RAM, and avoid continuously fetching data from the data source. Such caches can be deployed on a cluster of multiple nodes, forming a single logical view.
- An in-memory distributed cache is suitable for applications running on multiple clusters where performance is key. Data inconsistency and shared memory aren't concerns, as the distributed cache presents a single logical view across the cluster.
- As inter-process communication over a network is required to access the cache, latency, network failures, and object serialization are overheads that can degrade performance.
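The serialization overhead mentioned above can be seen in miniature: every value handed to a remote cache node must be turned into bytes and rebuilt on the other side, a cost an in-process cache never pays. A sketch using standard Java serialization:

```java
import java.io.*;

// Every value sent to a remote cache node must be serialized to bytes and
// deserialized on the way back -- overhead an in-process cache never pays.
class SerializationDemo {
    static byte[] toBytes(Serializable value) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(value);   // object -> wire format
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();   // wire format -> object
        }
    }
}
```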
2. In-Memory Database
An in-memory database (IMDB) stores data in main memory instead of on disk to produce quicker response times. Queries execute directly on the dataset held in memory, avoiding frequent disk reads/writes and providing better throughput and faster response times. An IMDB also provides a configurable data persistence mechanism to avoid data loss.
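The persistence idea can be sketched in plain Java: the working dataset lives in a map in RAM, while an explicit snapshot to disk guards against data loss on restart (class and method names are illustrative, not a real IMDB's API):

```java
import java.io.*;
import java.nio.file.*;
import java.util.HashMap;
import java.util.Map;

// Sketch of the IMDB idea: the working dataset lives in RAM, while a
// configurable snapshot to disk guards against data loss on restart.
class InMemoryStore {
    private Map<String, String> table = new HashMap<>();

    void put(String key, String value) { table.put(key, value); }
    String get(String key) { return table.get(key); }

    // Persist the in-memory table so it survives a process restart.
    void snapshot(Path file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(file))) {
            out.writeObject(new HashMap<>(table));
        }
    }

    @SuppressWarnings("unchecked")
    void restore(Path file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(file))) {
            table = (Map<String, String>) in.readObject();
        }
    }
}
```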
Redis is an open-source, in-memory data structure store used as a database, cache, and message broker. It offers data replication, different levels of persistence, high availability (HA), and automatic partitioning, which improve read/write performance.
Replacing an RDBMS with an in-memory database can improve the performance of an application without changes to the application layer.
3. In-Memory Data Grid
An in-memory data grid (IMDG) is a data store that resides entirely in RAM and is distributed across multiple servers. It supports:
- Parallel computation of the data in memory
- Search, aggregation, and sorting of the data in memory
- Transaction management in memory
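These capabilities can be illustrated with plain Java parallel streams standing in for a grid that spreads partitions across nodes (the `GridCompute` class is a toy, not an IMDG API):

```java
import java.util.*;
import java.util.stream.*;

// IMDG-style processing in miniature: the dataset is split into partitions
// (as a grid spreads data across nodes) and processed in parallel, in memory.
class GridCompute {
    static long parallelSum(List<List<Integer>> partitions) {
        return partitions.parallelStream()          // fan out across partitions
                .flatMap(List::stream)
                .mapToLong(Integer::longValue)
                .sum();                             // aggregate the partial results
    }

    static List<Integer> sortedTop(List<List<Integer>> partitions, int n) {
        return partitions.parallelStream()
                .flatMap(List::stream)
                .sorted(Comparator.reverseOrder())  // in-memory sort
                .limit(n)
                .collect(Collectors.toList());
    }
}
```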
Cache Use Cases
There are use cases where a specific caching approach should be adopted to improve the performance of the application.
1. Application Cache
The application cache stores web content so that it can be accessed offline. Application owners/developers have the flexibility to configure what to cache and make it available to offline users. It has the following advantages:
- Offline browsing
- Quicker retrieval of data
- Reduced load on servers
2. Level 1 (L1) Cache
This is the default transactional cache, scoped to a session. It can be managed by any Java Persistence API (JPA) provider or object-relational mapping (ORM) tool.
The L1 cache stores entities that belong to a specific session and is cleared once the session is closed. If there are multiple transactions inside one session, entities from all of those transactions are stored.
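L1-cache semantics can be sketched without any ORM: entities loaded in a session are cached only within that session, and the cache disappears when the session closes (the `Session` class below is a plain-Java stand-in, not Hibernate's API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of L1-cache semantics: entities loaded in a session are cached in
// that session only, and the cache vanishes when the session closes.
class Session {
    private final Map<Long, Object> firstLevelCache = new HashMap<>();
    private final Function<Long, Object> database;   // stand-in for real entity loading
    private int databaseHits = 0;

    Session(Function<Long, Object> database) { this.database = database; }

    Object find(Long id) {
        return firstLevelCache.computeIfAbsent(id, key -> {
            databaseHits++;                          // only runs on a cache miss
            return database.apply(key);
        });
    }

    int databaseHits() { return databaseHits; }

    void close() { firstLevelCache.clear(); }        // L1 cache dies with the session
}
```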
3. Level 2 (L2) Cache
The L2 cache can be configured to provide custom caches that hold data for all the entities to be cached. It is configured at the session-factory level and exists as long as the session factory is available. The L2 cache can be shared by:
- Sessions in an application.
- Applications on the same servers with the same database.
- Application clusters running on multiple nodes but pointing to the same database.
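By contrast, L2-cache semantics can be sketched as a cache hanging off the session factory, so every session it creates shares the same cached entities (again a plain-Java stand-in, not a real provider's API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Sketch of L2-cache semantics: the cache hangs off the session factory, so
// every session it creates shares the same cached entities.
class SessionFactory {
    private final Map<Long, Object> secondLevelCache = new ConcurrentHashMap<>();
    private final Function<Long, Object> database;   // stand-in for entity loading
    final AtomicInteger databaseHits = new AtomicInteger();

    SessionFactory(Function<Long, Object> database) { this.database = database; }

    // Each "session" is just a lookup closure sharing the factory-level cache.
    Function<Long, Object> openSession() {
        return id -> secondLevelCache.computeIfAbsent(id, key -> {
            databaseHits.incrementAndGet();
            return database.apply(key);
        });
    }
}
```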
4. Proxy / Load balancer cache
Enabling a cache at the proxy or load balancer reduces the load on application servers. When the same content is queried/requested frequently, the proxy serves it from the cache rather than routing the request back to the application servers.
When a dataset is requested for the first time, the proxy saves the application server's response in a disk cache and uses it to respond to subsequent client requests without routing them back to the application server. Apache, NGINX, and F5 support proxy caching.
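As a hedged illustration, a proxy cache of this kind might be enabled in NGINX roughly as follows (the cache path, zone name, sizes, and the `backend_app` upstream name are placeholders):

```nginx
# Illustrative NGINX proxy-cache setup: responses are written to a disk
# cache and served from it on subsequent requests.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;            # use the zone defined above
        proxy_cache_valid 200 302 10m;    # cache successful responses for 10 min
        proxy_cache_valid 404 1m;
        proxy_pass http://backend_app;    # route cache misses to the application tier
    }
}
```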
5. Hybrid Cache
A hybrid cache combines JPA/ORM frameworks with open-source caching services. It is used in applications where response time is a key factor.
Caching Design Considerations
- Data loading/updating
- Performance/memory size
- Eviction policy
- Concurrency
- Cache statistics
1. Data Loading/Updating
Data loading into a cache is an important design decision to maintain consistency across all cached content. The following approaches can be considered to load data:
- Using the default functions/configuration provided by JPA and ORM frameworks to load/update data.
- Implementing key-value maps using open-source cache APIs.
- Programmatically loading entities through automatic or explicit insertion.
- Loading from an external application through synchronous or asynchronous communication.
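Programmatic loading and refreshing might look roughly like this in plain Java (the `LoadableCache` class and its source function are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of programmatic cache loading: entries are bulk-loaded from a
// source up front, and individual keys can be refreshed explicitly later.
class LoadableCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> source;     // stand-in for a repository/DAO

    LoadableCache(Function<K, V> source) { this.source = source; }

    // Explicit bulk insertion, e.g. warming the cache at application start-up.
    void preload(Iterable<K> keys) {
        for (K key : keys) store.put(key, source.apply(key));
    }

    // Re-read a single entry from the source to keep cached content consistent.
    void refresh(K key) { store.put(key, source.apply(key)); }

    V get(K key) { return store.get(key); }
}
```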
2. Performance/Memory Size
Resource configuration is an important factor in achieving performance SLAs. Available memory and CPU architecture play a vital role in application performance, and available memory has a direct impact on garbage-collection performance: more GC cycles can bring performance down.
3. Eviction Policy
An eviction policy ensures that the size of the cache doesn't exceed its maximum limit. The eviction algorithm decides which elements to remove, depending on the configured policy, thereby creating space for new datasets.
Popular eviction algorithms used in caching solutions include:
- Least Recently Used (LRU)
- Least Frequently Used (LFU)
- First In, First Out (FIFO)
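LRU and FIFO eviction can both be sketched on top of `java.util.LinkedHashMap`, whose `accessOrder` flag and `removeEldestEntry` hook exist for exactly this purpose:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Size-bounded cache on top of LinkedHashMap: when capacity is exceeded,
// the eldest entry is evicted -- in access order (LRU) or insertion order (FIFO).
class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    BoundedCache(int capacity, boolean lru) {
        super(16, 0.75f, lru);   // accessOrder=true gives LRU, false gives FIFO
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;   // evict once the limit is crossed
    }
}
```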
4. Concurrency
Concurrency is a common issue in enterprise applications. It creates conflicts and can leave the system in an inconsistent state; it occurs when multiple clients try to update the same data object at the same time during a cache refresh. A common solution is to use a lock, but this may affect performance, so optimization techniques should be considered.
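One such optimization in plain Java is `ConcurrentHashMap.computeIfAbsent`, which guarantees the loader runs only once per key without a global lock (the key name and thread count below are chosen for illustration):

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Under concurrent refresh, ConcurrentHashMap.computeIfAbsent guarantees the
// loader runs once per key, without a global lock that would serialize clients.
class ConcurrentLoadDemo {
    static int loadOnceUnderContention(int threads) throws InterruptedException {
        Map<String, String> cache = new ConcurrentHashMap<>();
        AtomicInteger loads = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> cache.computeIfAbsent("hot-key", k -> {
                loads.incrementAndGet();         // counts actual loader executions
                return "value";
            }));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return loads.get();                      // expected: exactly 1
    }
}
```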
5. Cache Statistics
Cache statistics are used to assess the health of a cache and provide insights into its behavior and performance. The following attributes can be used:
- Hit Count: Indicates the number of times the cache lookup has returned a cached value.
- Miss Count: Indicates the number of times the cache lookup has returned a null, newly loaded, or uncached value.
- Load success count: Indicates the number of times the cache lookup has successfully loaded a new value.
- Total load time: Indicates time spent (nanoseconds) in loading new values.
- Load exception count: Indicates the number of exceptions thrown while loading an entry.
- Eviction count: Indicates the number of entries evicted from the cache.
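A cache instrumented with such counters might be sketched as follows (the `InstrumentedCache` class is illustrative; real libraries such as Guava expose similar statistics):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Function;

// Sketch of cache statistics: hit/miss/load counters are collected on every
// lookup, from which a hit rate can be derived.
class InstrumentedCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    final LongAdder hits = new LongAdder();
    final LongAdder misses = new LongAdder();
    final LongAdder loadSuccesses = new LongAdder();

    InstrumentedCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        V value = store.get(key);
        if (value != null) {
            hits.increment();                // lookup returned a cached value
            return value;
        }
        misses.increment();                  // lookup had to load a new value
        value = loader.apply(key);
        store.put(key, value);
        loadSuccesses.increment();
        return value;
    }

    double hitRate() {
        long total = hits.sum() + misses.sum();
        return total == 0 ? 0.0 : (double) hits.sum() / total;
    }
}
```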
Various Caching Solutions
There are various Java caching solutions available — the right choice depends on the use case.
At GAVS, we focus on building a strong foundation of coding practices. We encourage and implement the “Design First, Code Later” principle and “Design Oriented Coding Practices” to bring in design thinking and engineering mindset to build stronger solutions.
We have been training and mentoring our talent on cutting-edge JAVA technologies, building reusable frameworks, templates, and solutions in major areas like Security, DevOps, Migration, and Performance. Our objective is to “Partner with customers to realize business benefits through effective adoption of cutting-edge JAVA technologies thereby enabling customer success”.
About the Author –
Sivaprakash is a solutions architect with strong solution and design skills. He is a seasoned expert in JAVA, Big Data, DevOps, Cloud, Containers, and Microservices. He has successfully designed and implemented a stable monitoring platform for ZIF. He has also designed and driven Cloud assessment/migration, enterprise BRMS, and IoT-based solutions for many of our customers. At present, his focus is on building ‘ZIF Business’, a new-generation AIOps platform aligned to business outcomes.