Cache Penetration, Cache Breakdown, and Cache Avalanche Explained

1. Cache penetration

Cache penetration occurs when a client requests data that exists in neither the cache nor the database. The cache can never take effect for such requests, so every one of them falls through to the database.

In other words, the data does not exist at all. If an attacker launches many threads that keep requesting this non-existent data, every request reaches the database, which can easily bring it down.

Solutions:

1. Cache empty objects

Advantages: simple to implement and easy to maintain.

Disadvantages: extra memory consumption, since empty objects for fabricated IDs end up cached. This can be mitigated by setting a TTL on the empty objects, at the cost of short-term data inconsistency.
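A minimal sketch of the empty-object approach, using an in-process dict to stand in for Redis (the names `cache_get`, `db_lookup`, and the `NULL` sentinel are illustrative, not from any real API):

```python
import time

NULL = object()          # sentinel marking "key known to be absent"
cache = {}               # key -> (value, expires_at)

def db_lookup(key, db):
    return db.get(key)   # None means the row does not exist

def cache_get(key, db, ttl=60, null_ttl=5):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        value = entry[0]
        return None if value is NULL else value
    value = db_lookup(key, db)
    if value is None:
        # Cache the miss with a short TTL so repeated requests for a
        # non-existent key stop hammering the database.
        cache[key] = (NULL, time.time() + null_ttl)
        return None
    cache[key] = (value, time.time() + ttl)
    return value
```

The short `null_ttl` is what limits both the extra memory cost and the window of inconsistency mentioned above.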

2. Bloom filter

Advantages: low memory usage.

Disadvantages: more complex to implement, and false positives are possible.
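To make the trade-off concrete, here is a toy Bloom filter sketch (class and method names are my own; a production system would typically use Redis's bitmap commands or a library). A negative answer is definitive, so requests for keys the filter rejects never reach the database; a positive answer may be a false positive:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k bit positions derived from one SHA-256 digest."""

    def __init__(self, size_bits=8192, hashes=4):
        self.size = size_bits
        self.k = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        # Standard double-hashing trick to derive k positions.
        return [(h1 + i * h2) % self.size for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item):
        # False means definitely absent; True may be a false positive.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

On startup, all valid IDs are added to the filter; a request whose key fails `might_contain` is rejected before touching either the cache or the database.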

3. Others

Use bitmap types to define an access whitelist, or perform real-time monitoring and work with operations staff to identify suspicious access patterns and set up a blacklist to restrict access.

2. Cache breakdown

Cache breakdown occurs when a hotspot key expires at some point in time while a large number of concurrent requests for that key arrive at the same moment; all of those requests then hit the database. This is the classic "hotspot key" problem.

Solutions:

1. Preload popular data into the cache in advance.

2. Monitor popular data in real time and adjust key expiration times accordingly.

3. Second-level cache: add a second cache layer for hotspot data, with different expiration times at each level.

4. Use a mutex (e.g., a distributed lock) so that only one request rebuilds the expired key.

3. Cache avalanche

A cache avalanche occurs when a large number of requests cannot be served from the Redis cache, so the application forwards them all to the database layer, causing database pressure to surge.

The difference between breakdown and avalanche: breakdown concerns a specific hotspot key, while an avalanche affects all of the data.

Reason 1: A large number of cached keys expire at the same time, so a flood of requests misses the cache and falls back to the source database.

Option 1: Stagger expiration times
Differentiate cache expiration times so that large numbers of keys do not expire simultaneously. For example, when initializing the cache, add a small random offset to each key's expiration time. Different keys then expire at slightly different moments: a mass expiration is avoided, while expirations still cluster around a similar time.
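The jitter idea above amounts to a one-liner; the base TTL of one hour and the 0–300 second offset below are example values, not recommendations:

```python
import random

BASE_TTL = 3600  # base expiration of 1 hour (example value)

def ttl_with_jitter(base=BASE_TTL, jitter=300):
    # Add a small random offset so keys initialized together
    # do not all expire in the same instant.
    return base + random.randint(0, jitter)
```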

Option 2: Service degradation
Allow core business requests to access the database, while non-core requests directly return predefined fallback information.

Option 3: No expiration time
When initializing the cached data, set the cache to never expire, then start a background thread that refreshes all data in the cache on a fixed cycle (e.g., every 30 seconds), sleeping appropriately between database reads to limit database pressure.
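A sketch of this never-expire-plus-background-refresh pattern, with an in-process dict as the cache and a placeholder `load_from_db`; the intervals are parameters you would tune, and in production the loop would run in a daemon thread:

```python
import threading
import time

cache = {}

def load_from_db(key):
    return f"fresh-{key}"       # stand-in for a real database query

def refresh_loop(keys, refresh_interval=30, per_key_sleep=0.05, cycles=None):
    # cycles=None means run forever (the production mode);
    # a finite value is handy for testing.
    done = 0
    while cycles is None or done < cycles:
        for key in keys:
            cache[key] = load_from_db(key)
            time.sleep(per_key_sleep)   # throttle database pressure
        done += 1
        if cycles is None or done < cycles:
            time.sleep(refresh_interval)

# In production, something like:
# threading.Thread(target=refresh_loop, args=(keys,), daemon=True).start()
```

Since keys never expire, readers always get a cache hit; the cost is that data can be up to one refresh cycle stale.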

Reason 2: A Redis instance fails and cannot process requests, so a large backlog of requests piles up on the database layer.

Option 1: Service circuit breaking
Temporarily suspend business applications' access to the cache service to reduce pressure on the database.

Option 2: Request rate limiting
Control the number of requests entering the application per second, so that too many requests are never sent on to the database.
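One common way to implement this is a token bucket, sketched below (the class and its parameters are illustrative): on average at most `rate` requests per second are admitted, with bursts up to `capacity`, and rejected requests get a fast "try later" response instead of reaching the database.

```python
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Each incoming request calls `allow()`; a `False` result means the request is shed before it can add load to the cache or database.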

Option 3: Build a highly reliable Redis cluster
Build a highly reliable Redis cluster with master-slave nodes. This ensures that when the Redis master node fails, a slave node is promoted to master and continues to provide service, avoiding a cache avalanche caused by cache-instance downtime.
