Tuan-Dat Tran 6e8a742705 feat: add cache simulation with experiments for TTL and eviction strategies
Introduce a new simulation for Age of Information (AoI) cache management, focusing on varying TTL values and eviction strategies (LRU and random eviction). This includes:
- A new Python script for event-driven cache simulations.
- Experiments for "No Refresh" across multiple TTL configurations, with:
  - Hit rate and object age tracking.
  - Visualizations of the results.
- Updated documentation describing the experimental setup and configurations.
- A log export file for the simulation results.
- A refactor adding detailed strategy configurations and runtime notes.

### Reason
This commit enables detailed experiments with configurable cache parameters, supporting analysis of cache efficiency and AoI under varying conditions, and lays the groundwork for more sophisticated simulations.

### Performance
- Runtime: ~4m 29s for ACCESS_COUNT_LIMIT = 10_000.

Co-authored experiments introduce structured data files and visualizations, improving clarity for future iterations.

Signed-off-by: Tuan-Dat Tran <tuan-dat.tran@tudattr.dev>

Age Caching Simulation

Client -> TTL cache -> database, with capacity C (C = n; example: n = 100)

TTL increases on a cache hit.

Age of information = the age of the entry in the cache.

The database holds the latest version of the object; the cache entry may be outdated (we don't know).

Cached entries should have a low age of information.
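
A minimal sketch of this cache model, assuming an LRU structure with per-entry TTL; class and method names are illustrative, not taken from the project code:

```python
from collections import OrderedDict

class TTLCache:
    """Minimal LRU cache with per-entry TTL and age-of-information tracking (sketch only)."""

    def __init__(self, capacity, ttl):
        self.capacity = capacity
        self.ttl = ttl
        self._store = OrderedDict()  # key -> [value, inserted_at, expires_at]

    def age(self, key, now):
        """Age of information: time since the entry was last fetched from the database."""
        return now - self._store[key][1]

    def get(self, key, now):
        if key not in self._store:
            return None                       # miss: object not cached
        value, inserted_at, expires_at = self._store[key]
        if now > expires_at:
            del self._store[key]              # expired entry is treated as a miss
            return None
        self._store[key] = [value, inserted_at, now + self.ttl]  # TTL is extended on a hit
        self._store.move_to_end(key)          # LRU bookkeeping
        return value

    def put(self, key, value, now):
        """Insert a fresh copy from the database; the entry's age resets to zero."""
        if key not in self._store and len(self._store) >= self.capacity:
            self._store.popitem(last=False)   # evict the least recently used entry
        self._store[key] = [value, now, now + self.ttl]
        self._store.move_to_end(key)
```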

An update function in the cache refreshes entries based on mu (the refresh rate).

A loss function beta(i), based on the TTL and the age of object i in the cache.
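
The notes do not spell out the exact refresh or loss formulas, so the following is only a hypothetical shape: refreshes arrive as a Poisson process with rate mu, and beta(i) grows once the cached age of object i exceeds its TTL. All names and values are placeholders.

```python
import random

MU = 0.5         # refresh rate (refreshes per unit time); placeholder value
CACHE_TTL = 5.0  # placeholder TTL

def next_refresh_delay(mu=MU):
    """Time until the next cache refresh event, assuming Poisson refreshes with rate mu."""
    return random.expovariate(mu)

def beta(age_i, ttl=CACHE_TTL):
    """Hypothetical loss for object i: zero while the entry is fresh enough,
    then growing linearly with the age beyond the TTL."""
    return max(0.0, age_i - ttl)
```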

Event-based simulation.

lambda(i) is the Zipf-distributed rate at which the client requests object i.

The inter-arrival times of requests for each object are exponentially distributed.
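
A small sketch of how these two assumptions could be wired together (numpy is assumed; all constants and names are placeholders): per-object request rates lambda(i) follow a Zipf law, and inter-arrival times are drawn from an exponential distribution with rate lambda(i).

```python
import numpy as np

rng = np.random.default_rng(42)

N_OBJECTS = 100           # database size (C = 100 in the example above)
ZIPF_EXPONENT = 1.0       # skew of the popularity distribution
TOTAL_REQUEST_RATE = 1.0  # overall client request rate

# lambda(i): Zipf-distributed per-object request rates, normalised to the total rate.
ranks = np.arange(1, N_OBJECTS + 1)
zipf_weights = ranks ** -ZIPF_EXPONENT
lam = TOTAL_REQUEST_RATE * zipf_weights / zipf_weights.sum()

def next_request_time(obj, now):
    """Exponential inter-arrival time for object `obj` (rate lambda(obj))."""
    return now + rng.exponential(1.0 / lam[obj])
```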

Measure the hit rate and the average object age as a function of the TTL.
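
Putting the pieces together, a heapq-based sketch of the event-driven loop (not the project's actual script) that serves Zipf-distributed requests from a simple TTL cache and reports hit rate and average age at hit for a sweep of TTL values; the capacity limit and TTL extension on hits are omitted for brevity.

```python
import heapq
import random

N_OBJECTS = 100
ZIPF_EXPONENT = 1.0
REQUESTS = 50_000  # total requests per run (placeholder)

def simulate(ttl, seed=0):
    rng = random.Random(seed)
    # lambda(i): Zipf-like per-object request rates.
    lam = [1.0 / (i + 1) ** ZIPF_EXPONENT for i in range(N_OBJECTS)]
    cache = {}  # obj -> (inserted_at, expires_at)
    events = [(rng.expovariate(lam[i]), i) for i in range(N_OBJECTS)]
    heapq.heapify(events)

    hits = requests = 0
    age_sum = 0.0
    while requests < REQUESTS:
        now, obj = heapq.heappop(events)
        requests += 1
        entry = cache.get(obj)
        if entry is not None and now <= entry[1]:
            hits += 1
            age_sum += now - entry[0]      # age of information at hit time
        else:
            cache[obj] = (now, now + ttl)  # miss: fetch a fresh copy from the database
        # schedule this object's next request (exponential inter-arrival time)
        heapq.heappush(events, (now + rng.expovariate(lam[obj]), obj))

    return hits / requests, age_sum / max(hits, 1)

for ttl in (1, 2, 5, 10):
    hit_rate, avg_age = simulate(ttl)
    print(f"TTL={ttl:>3}: hit rate={hit_rate:.3f}, avg age at hit={avg_age:.3f}")
```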

Notes 11/13/2024

Limitations of time-based simulation runs.

Run the simulation not for a fixed time, but until the least-ranked object has been requested at least a given number of times, for example. Least-ranked object = the object with the smallest Zipf value.
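
One way such a stopping rule could look, as a sketch (the threshold name and value are placeholders):

```python
from collections import Counter

MIN_REQUESTS_LEAST_RANKED = 100  # placeholder threshold

def should_stop(request_counts: Counter, least_ranked_obj: int) -> bool:
    """Stop once the object with the smallest Zipf weight has been seen often enough."""
    return request_counts[least_ranked_obj] >= MIN_REQUESTS_LEAST_RANKED

# Inside the event loop, one would do something like:
#   request_counts[obj] += 1
#   if should_stop(request_counts, least_ranked_obj):
#       break
```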

2-3h

mu

Simulate different lambda and mu values to see what increases the cost function.

Bandwidth

The bandwidth between the cache and the server is finite; miss requests and cache updates must not exceed it.
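
A minimal way to model this constraint, assuming a simple budget of concurrent cache-to-server transfers that both miss fetches and refreshes must acquire (name and structure are illustrative):

```python
class BandwidthBudget:
    """Caps the number of cache<->server transfers in flight (illustrative sketch)."""

    def __init__(self, max_in_flight: int):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def try_acquire(self) -> bool:
        """Reserve one transfer slot; miss fetches and refreshes both go through here."""
        if self.in_flight < self.max_in_flight:
            self.in_flight += 1
            return True
        return False  # over budget: the transfer has to wait or be dropped

    def release(self) -> None:
        """Free the slot once the transfer has completed."""
        self.in_flight -= 1
```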

Versions

  • Default

    • Do Refresh
    • Do Request
  • No Refresh

    • Just Request
    • No Refresh
  • Infinite TTL

    • LRU
    • Infinite TTL
    • No Refresh
  • Random Eviction

    • Random eviction
    • Regular TTL
    • With Refresh
  • Random Eviction w/o Refresh

    • Random eviction
    • Regular TTL
    • No Refresh
| Name | Cache Capacity | MAX_REFRESH_RATE | cache_type | CACHE_TTL |
|------|----------------|------------------|------------|-----------|
| Default | DATABASE_OBJECTS | 1< | CacheType.LRU | 5 |
| No Refresh | DATABASE_OBJECTS | 0 | CacheType.LRU | 5 |
| Infinite TTL | DATABASE_OBJECTS / 2 | 0 | CacheType.LRU | 0 |
| Random Eviction (RE) | DATABASE_OBJECTS / 2 | 1< | CacheType.RANDOM_EVICTION | 5 |
| RE without Refresh | DATABASE_OBJECTS / 2 | 0 | CacheType.RANDOM_EVICTION | 5 |
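
Read as code, the table might map onto configuration dictionaries along these lines; CacheType, DATABASE_OBJECTS, and the column values come from the table, while the dict structure and the reading of "1<" and TTL 0 are assumptions.

```python
from enum import Enum, auto

DATABASE_OBJECTS = 100  # placeholder database size

class CacheType(Enum):
    LRU = auto()
    RANDOM_EVICTION = auto()

# MAX_REFRESH_RATE: the table's "1<" is read here as "refresh enabled" (rate >= 1);
# CACHE_TTL = 0 is read as "never expires", matching the "Infinite TTL" variant.
CONFIGS = {
    "Default": dict(
        capacity=DATABASE_OBJECTS, max_refresh_rate=1,
        cache_type=CacheType.LRU, cache_ttl=5),
    "No Refresh": dict(
        capacity=DATABASE_OBJECTS, max_refresh_rate=0,
        cache_type=CacheType.LRU, cache_ttl=5),
    "Infinite TTL": dict(
        capacity=DATABASE_OBJECTS // 2, max_refresh_rate=0,
        cache_type=CacheType.LRU, cache_ttl=0),
    "Random Eviction (RE)": dict(
        capacity=DATABASE_OBJECTS // 2, max_refresh_rate=1,
        cache_type=CacheType.RANDOM_EVICTION, cache_ttl=5),
    "RE without Refresh": dict(
        capacity=DATABASE_OBJECTS // 2, max_refresh_rate=0,
        cache_type=CacheType.RANDOM_EVICTION, cache_ttl=5),
}
```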

Runtime

CPU times: user 3min 46s, sys: 43 s, total: 4min 29s
Wall time: 4min 29s
for ACCESS_COUNT_LIMIT = 10_000  # Total time to run the simulation

Notes 11/27/2024