In-Memory Caching in Golang

by Vladislav Gukasov, December 19th, 2021

Too Long; Didn't Read

How to implement an in-memory cache in a Golang app.

Eventually, every project needs to speed up service responses or some heavy calculations, and the fast and easy solution is a cache. The usual choices are Redis or Memcached, but we don’t really need them in single-instance microservices. Sometimes it’s better to use a simple in-memory cache in your Go app, and today I want to show you the main ways to implement one.

Simple map

The first way is a simple, explicit implementation of a cache. Usually, it uses a map and stores structs in it. You also need to watch key expirations and the overall cache size.
I put all the implementations in .


package go_cache

import (
	"errors"
	"sync"
	"time"
)

type user struct {
	Id    int64  `json:"id"`
	Email string `json:"email"`
}

// cachedUser wraps a user with its expiration time as a Unix timestamp.
type cachedUser struct {
	user
	expireAtTimestamp int64
}

type localCache struct {
	stop chan struct{}

	wg    sync.WaitGroup
	mu    sync.RWMutex
	users map[int64]cachedUser
}

// newLocalCache builds the cache and starts a background goroutine
// that removes expired users every cleanupInterval.
func newLocalCache(cleanupInterval time.Duration) *localCache {
	lc := &localCache{
		users: make(map[int64]cachedUser),
		stop:  make(chan struct{}),
	}

	lc.wg.Add(1)
	go func(cleanupInterval time.Duration) {
		defer lc.wg.Done()
		lc.cleanupLoop(cleanupInterval)
	}(cleanupInterval)

	return lc
}

// cleanupLoop deletes expired users on every tick until stop is closed.
func (lc *localCache) cleanupLoop(interval time.Duration) {
	t := time.NewTicker(interval)
	defer t.Stop()

	for {
		select {
		case <-lc.stop:
			return
		case <-t.C:
			lc.mu.Lock()
			for uid, cu := range lc.users {
				if cu.expireAtTimestamp <= time.Now().Unix() {
					delete(lc.users, uid)
				}
			}
			lc.mu.Unlock()
		}
	}
}

// stopCleanup terminates the cleanup goroutine and waits for it to exit.
func (lc *localCache) stopCleanup() {
	close(lc.stop)
	lc.wg.Wait()
}

func (lc *localCache) update(u user, expireAtTimestamp int64) {
	lc.mu.Lock()
	defer lc.mu.Unlock()

	lc.users[u.Id] = cachedUser{
		user:              u,
		expireAtTimestamp: expireAtTimestamp,
	}
}

var (
	errUserNotInCache = errors.New("the user isn't in cache")
)

func (lc *localCache) read(id int64) (user, error) {
	lc.mu.RLock()
	defer lc.mu.RUnlock()

	cu, ok := lc.users[id]
	if !ok {
		return user{}, errUserNotInCache
	}

	return cu.user, nil
}

func (lc *localCache) delete(id int64) {
	lc.mu.Lock()
	defer lc.mu.Unlock()

	delete(lc.users, id)
}


We use the user’s ID as the key of a cached item. Since the storage is a map, all updates, reads, and deletes take O(1) time on average.
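
Here is a minimal usage sketch, assuming it lives in the same package as the cache above (the ID, email, and TTL values are illustrative, and "fmt" must be added to the imports):

func exampleLocalCache() {
	// Sweep expired entries every 5 seconds.
	lc := newLocalCache(5 * time.Second)
	defer lc.stopCleanup()

	// Cache the user for one hour.
	lc.update(user{Id: 42, Email: "user@example.com"}, time.Now().Add(time.Hour).Unix())

	u, err := lc.read(42)
	if err != nil {
		// errUserNotInCache: fall back to the database, etc.
		return
	}
	fmt.Println(u.Email) // user@example.com
}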


Pros

  • explicit implementation
  • good performance


Cons

  • extra time to implement a cache for each data struct
  • extra time to test the cache logic
  • extra time to fix bugs

gCache Library

The gCache library abstracts away cache management and includes various configurations. For instance, you can simply set up cache eviction rules, a maximum size, an expiration TTL, and so on.


package go_cache

import (
	"errors"
	"fmt"
	"github.com/bluele/gcache"
	"time"
)

type gCache struct {
	users gcache.Cache
}

const (
	cacheSize = 1_000_000
	cacheTTL  = 1 * time.Hour // default expiration
)

func newGCache() *gCache {
	return &gCache{
		users: gcache.New(cacheSize).Expiration(cacheTTL).ARC().Build(),
	}
}

func (gc *gCache) update(u user, expireIn time.Duration) error {
	return gc.users.SetWithExpire(u.Id, u, expireIn)
}

func (gc *gCache) read(id int64) (user, error) {
	val, err := gc.users.Get(id)
	if err != nil {
		if errors.Is(err, gcache.KeyNotFoundError) {
			return user{}, errUserNotInCache
		}

		return user{}, fmt.Errorf("get: %w", err)
	}

	return val.(user), nil
}

func (gc *gCache) delete(id int64) {
	gc.users.Remove(id)
}
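
A minimal usage sketch, again assuming the same package (the values and the "fmt" import are illustrative additions):

func exampleGCache() {
	gc := newGCache()

	// Cache the user for 10 minutes, overriding the default TTL.
	if err := gc.update(user{Id: 42, Email: "user@example.com"}, 10*time.Minute); err != nil {
		return
	}

	u, err := gc.read(42)
	if err != nil {
		// Either errUserNotInCache or an internal gcache error.
		return
	}
	fmt.Println(u.Email) // user@example.com
}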


Pros

  • production-ready cache management
  • single interface that works with any data type
  • different cache eviction policies: LRU, LFU, ARC

Cons

  • need to manually cast the cached value from interface{} on each read, which hurts performance
  • the library hasn’t been maintained for a while

BigCache Library

BigCache is a fast, concurrent, evicting in-memory cache, written to keep an enormous number of entries without impacting performance. BigCache holds entries on the heap but keeps them invisible to the garbage collector: values are serialized into large byte slices indexed by a pointer-free map, so the GC never scans individual entries.


package go_cache

import (
	"encoding/json"
	"errors"
	"fmt"
	"github.com/allegro/bigcache"
	"strconv"
	"time"
)

type bigCache struct {
	users *bigcache.BigCache
}

func newBigCache() (*bigCache, error) {
	bCache, err := bigcache.NewBigCache(bigcache.Config{
		// number of shards (must be a power of 2)
		Shards: 1024,

		// time after which entry can be evicted
		LifeWindow: 1 * time.Hour,

		// Interval between removing expired entries (clean up).
		// If set to <= 0 then no action is performed.
		// Setting to < 1 second is counterproductive — bigcache has a one second resolution.
		CleanWindow: 5 * time.Minute,

		// rps * lifeWindow, used only in initial memory allocation
		MaxEntriesInWindow: 1000 * 10 * 60,

		// max entry size in bytes, used only in initial memory allocation
		MaxEntrySize: 500,

		// prints information about additional memory allocation
		Verbose: false,

		// cache will not allocate more memory than this limit, value in MB
		// if value is reached then the oldest entries can be overridden for the new ones
		// 0 value means no size limit
		HardMaxCacheSize: 256,

		// callback fired when the oldest entry is removed because of its expiration time or no space left
		// for the new entry, or because delete was called. A bitmask representing the reason will be returned.
		// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
		OnRemove: nil,

		// OnRemoveWithReason is a callback fired when the oldest entry is removed because of its expiration time or no space left
		// for the new entry, or because delete was called. A constant representing the reason will be passed through.
		// Default value is nil which means no callback and it prevents from unwrapping the oldest entry.
		// Ignored if OnRemove is specified.
		OnRemoveWithReason: nil,
	})
	if err != nil {
		return nil, fmt.Errorf("new big cache: %w", err)
	}

	return &bigCache{
		users: bCache,
	}, nil
}

func (bc *bigCache) update(u user) error {
	bs, err := json.Marshal(&u)
	if err != nil {
		return fmt.Errorf("marshal: %w", err)
	}

	return bc.users.Set(userKey(u.Id), bs)
}

func userKey(id int64) string {
	return strconv.FormatInt(id, 10)
}

func (bc *bigCache) read(id int64) (user, error) {
	bs, err := bc.users.Get(userKey(id))
	if err != nil {
		if errors.Is(err, bigcache.ErrEntryNotFound) {
			return user{}, errUserNotInCache
		}

		return user{}, fmt.Errorf("get: %w", err)
	}

	var u user
	err = json.Unmarshal(bs, &u)
	if err != nil {
		return user{}, fmt.Errorf("unmarshal: %w", err)
	}

	return u, nil
}

func (bc *bigCache) delete(id int64) {
	bc.users.Delete(userKey(id))
}


We used JSON to encode/decode values, but any data format can be used. For instance, a binary format such as Protobuf can boost performance significantly.
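
As an illustration of what swapping the codec looks like, here is a gob-based encoder/decoder built on the standard library’s encoding/gob (the helper names are hypothetical, and "bytes" and "encoding/gob" must be added to the imports; Protobuf would additionally require generated message types):

// encodeUser serializes a user with encoding/gob instead of JSON.
func encodeUser(u user) ([]byte, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(u); err != nil {
		return nil, fmt.Errorf("gob encode: %w", err)
	}
	return buf.Bytes(), nil
}

// decodeUser restores a user produced by encodeUser.
func decodeUser(bs []byte) (user, error) {
	var u user
	if err := gob.NewDecoder(bytes.NewReader(bs)).Decode(&u); err != nil {
		return user{}, fmt.Errorf("gob decode: %w", err)
	}
	return u, nil
}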


Pros

  • production-ready cache management
  • rich cache configuration
  • maintained
  • cached entries don’t add GC pressure, so performance stays excellent even with a big cache


Cons

  • you need to implement your own encoding/decoding of values

Benchmarks

goos: darwin
goarch: amd64
pkg: go-cache
cpu: Intel(R) Core(TM) i5-8257U CPU @ 1.40GHz
Benchmark_bigCache
Benchmark_bigCache-8     	 1751281	       688.0 ns/op	     390 B/op	       6 allocs/op
Benchmark_gCache
Benchmark_gCache-8       	  772846	      1699 ns/op	     373 B/op	       8 allocs/op
Benchmark_localCache
Benchmark_localCache-8   	 1534795	       756.6 ns/op	     135 B/op	       0 allocs/op
PASS
ok  	go-cache	6.044s


BigCache is the fastest cache. gCache loses in performance because every read requires a type assertion from interface{}.
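
For reference, here is a sketch of how such a benchmark can be written (shown for localCache; this is an assumption about the methodology, and the original harness may differ):

func Benchmark_localCache(b *testing.B) {
	lc := newLocalCache(5 * time.Minute)
	defer lc.stopCleanup()

	exp := time.Now().Add(time.Hour).Unix()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		id := int64(i)
		// One write followed by one read per iteration.
		lc.update(user{Id: id, Email: "user@example.com"}, exp)
		if _, err := lc.read(id); err != nil {
			b.Fatal(err)
		}
	}
}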

Conclusion

We researched different ways to implement an in-memory cache in Golang. Remember that there is no single best solution: the right choice depends on the case. Use this article to compare the solutions and decide which one fits the needs of your project.