GenPool outperforms sync.Pool in scenarios where objects are retained for longer periods, while also giving you fine-grained control over memory reclamation timing and aggressiveness.
If your system rarely retains objects, you’re unlikely to benefit from GenPool’s design.
- Performance
- Installation
- Quick Start
- Features
- Cleanup Policy
- Cleanup Levels
- Growth Policy
- Manual Control
- Contributing
- License
- GOOS: darwin
- GOARCH: arm64
- CPU: Apple M1
- Package: github.com/AlexsanderHamir/GenPool/pool
| Workload (Latency Level) | Metric | GenPool | SyncPool | Delta Value | Delta % |
|---|---|---|---|---|---|
| High | Avg Iterations | 92,090 | 85,018 | +7,072 | +8.32% |
| High | Avg Time (ns) | 12,268 | 13,070 | -802 | -6.14% |
| Moderate | Avg Iterations | 869,492 | 840,131 | +29,361 | +3.49% |
| Moderate | Avg Time (ns) | 1,223.8 | 1,316.7 | -92.9 | -7.05% |
| Low | Avg Iterations | 6,004,695 | 6,099,886 | -95,191 | -1.56% |
| Low | Avg Time (ns) | 197.46 | 194.26 | +3.2 | +1.65% |
Full benchmark details: GenPool vs sync.Pool
- As shown, GenPool performs better when objects are held for longer periods. Its performance degrades as retention time decreases, due to the overhead of its sharded design.
- For detailed results and interactive graphs, see the Benchmark Results Transparency page.
- In short, the benchmarks revealed that across all scenarios, whether using a single shard or many and under both high and low concurrency, the key factor influencing performance was how quickly objects were returned. The closer you are to doing nothing with the object, the more likely sync.Pool was to outperform GenPool.
For best results under contention, make sure that pool.Fields[Object] is on a separate cache line from your own fields (add padding if needed). This avoids false sharing and improves cache performance across cores. (example)
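Since this padding is easy to get wrong, here is a minimal sketch of what it can look like. The PaddedObject name and the 64-byte pad size are illustrative assumptions; cache lines are 64 bytes on most x86-64 CPUs and 128 bytes on Apple silicon, so size the pad for your target.

```go
package example

import "github.com/AlexsanderHamir/GenPool/pool"

// PaddedObject keeps the pool's internal fields on a different cache line
// from the frequently written user fields.
type PaddedObject struct {
    Name string
    Data []byte

    _ [64]byte // padding: separates your fields from pool.Fields below (size is an assumption)

    pool.Fields[PaddedObject]
}
```

The idea is that concurrent Get/Put traffic touching the pool's linkage fields on one core no longer invalidates the cache line holding your payload fields on another.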
For detailed technical explanations and implementation details, please refer to the docs directory:
- Overall Design - Technical design and architecture overview
- Cleanup Mechanism - Details about the pool's cleanup and eviction policies
go get github.com/AlexsanderHamir/GenPool
Here's a simple example of how to use the object pool:
package main

import (
    "time"

    "github.com/AlexsanderHamir/GenPool/pool"
)

// By embedding [Fields], Object automatically satisfies the Poolable interface.
// Objects are pooled using an atomic, sharded (per-CPU) linked list.
type Object struct {
    Name string
    Data []byte

    pool.Fields[Object]
}

func allocator() *Object {
    return &Object{Name: "test"}
}

// cleaner is used internally by Put to reset objects before reuse.
func cleaner(obj *Object) {
    obj.Name = ""
    obj.Data = obj.Data[:0]
    // or
    // *obj = Object{}
}

func main() {
    // Create a custom cleanup policy.
    cleanupPolicy := pool.CleanupPolicy{
        Enabled:       true,
        Interval:      10 * time.Minute,
        MinUsageCount: 20, // objects used fewer times than this are evicted
    }

    // Create the pool with a custom configuration.
    config := pool.Config[Object, *Object]{
        Cleanup:   cleanupPolicy,
        Allocator: allocator,
        Cleaner:   cleaner,
    }

    benchPool, err := pool.NewPoolWithConfig(config)
    if err != nil {
        panic(err)
    }
    defer benchPool.Close()

    obj := benchPool.Get()
    obj.Name = "Robert"
    obj.Data = append(obj.Data, 34)

    benchPool.Put(obj)
}
- Growth Control: Limit how large the pool can grow using the GrowthPolicy, giving you precise control over memory usage.
- Cleanup Control: Fine-tune how often and how aggressively the pool is cleaned up with CleanupPolicy.
- Set Cleaner Once: Provide a cleaner function to automatically reset or sanitize objects before reuse; no manual cleanup required.
If no growth policy is provided, the pool will grow indefinitely. In this case, any resource control relies entirely on the CleanupPolicy.
// GrowthPolicy defines constraints on how the pool is allowed to grow.
type GrowthPolicy struct {
    // Enable determines whether growth limiting is active.
    // If false, the pool can grow and shrink without restriction.
    Enable bool

    // MaxPoolSize sets the upper limit on the number of objects the pool can hold.
    MaxPoolSize int64
}
If the pool reaches its limit, it returns nil. To avoid this behavior, you can use GetBlock() or PutBlock(), which block until resources become available.
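Below is a hedged sketch of combining a growth limit with that blocking fallback, reusing the Object, allocator, and cleaner definitions from the Quick Start. The GrowthPolicy fields match the definition above, but the Growth field name on Config and the exact GetBlock signature are assumptions; check the package documentation before copying this.

```go
package main

import "github.com/AlexsanderHamir/GenPool/pool"

// Object, allocator, and cleaner come from the Quick Start example above.

func main() {
    cfg := pool.Config[Object, *Object]{
        Allocator: allocator,
        Cleaner:   cleaner,
        // "Growth" is an assumed field name for attaching the GrowthPolicy;
        // check the Config definition for the exact name.
        Growth: pool.GrowthPolicy{
            Enable:      true,
            MaxPoolSize: 1024, // hard cap on pooled objects
        },
    }

    p, err := pool.NewPoolWithConfig(cfg)
    if err != nil {
        panic(err)
    }
    defer p.Close()

    obj := p.Get()
    if obj == nil {
        // At MaxPoolSize: fall back to the blocking variant, which waits
        // until another goroutine returns an object.
        obj = p.GetBlock()
    }
    obj.Name = "bounded"
    p.Put(obj)
}
```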
If no cleanup policy is provided in the config, the zero value will be used by default, which means automatic cleanup is disabled.
// CleanupPolicy defines how the pool should automatically clean up unused objects.
type CleanupPolicy struct {
    // Enabled indicates whether automatic cleanup is active.
    Enabled bool

    // Interval specifies how frequently the cleanup process should run.
    Interval time.Duration

    // MinUsageCount sets the usage threshold below which objects will be evicted.
    MinUsageCount int64
}
Use DefaultCleanupPolicy(level) to get a predefined [CleanupPolicy].
| Level | Interval | MinUsageCount | When to Use |
|---|---|---|---|
| disable | — | — | Manual control / predictable workloads |
| low | 10m | 1 | High reuse, latency-sensitive |
| moderate | 2m | 2 | Balanced default |
| aggressive | 30s | 3 | Low memory tolerance / bursty usage |
Cleanup: pool.DefaultCleanupPolicy(pool.GcModerate)
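Expanding that fragment, a minimal sketch of plugging a predefined policy into the Quick Start configuration (Object, allocator, and cleaner are the types defined there):

```go
config := pool.Config[Object, *Object]{
    Allocator: allocator,
    Cleaner:   cleaner,
    Cleanup:   pool.DefaultCleanupPolicy(pool.GcModerate), // 2m interval, MinUsageCount 2
}
```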
For advanced users who prefer full control over memory reclamation, GenPool allows you to disable automatic cleanup using the GcDisable policy, and the pool exposes its internal fields to allow for custom logic. The ShardedPool and Shard types expose the internals you need:
type ShardedPool[T any, P Poolable[T]] struct {
    Shards []*Shard[T, P] // All shards
}

type Shard[T any, P Poolable[T]] struct {
    Head atomic.Pointer[T] // Head of the linked list for this shard
}
You can safely traverse and modify these shards to implement your own retention, eviction, or tracking strategies.
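As a minimal sketch of such custom logic, the helper below only loads each shard's Head to report how many shards currently hold objects. It assumes you can reach a *pool.ShardedPool value for your pool, and it deliberately stops short of walking each linked list, since that requires the next-pointer accessor provided by pool.Fields, which is not shown here.

```go
package inspect

import "github.com/AlexsanderHamir/GenPool/pool"

// CountPopulatedShards reports how many shards currently hold at least one
// pooled object. Head is an atomic pointer, so reading it concurrently with
// Get/Put is safe.
func CountPopulatedShards[T any, P pool.Poolable[T]](p *pool.ShardedPool[T, P]) int {
    populated := 0
    for _, shard := range p.Shards {
        if shard.Head.Load() != nil {
            populated++
        }
    }
    return populated
}
```

From a starting point like this you can layer on your own retention or eviction rules, for example evicting whole shards on a schedule you control instead of relying on CleanupPolicy.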
We welcome contributions! Before you start contributing, please ensure you have:
- Go 1.24.3 or later installed
- Git for version control
- Basic understanding of Go testing and benchmarking
# Fork and clone the repository
git clone https://github.com/AlexsanderHamir/GenPool.git
cd GenPool
# Run tests to verify setup
go test -v ./...
go test -bench=. ./...
# Check for linter errors
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
golangci-lint run
- Run benchmarks before making changes to establish a baseline (see the sketch below).
- Ensure new functionality doesn't regress performance.
- Write unit and black-box tests for new functionality.
- Update documentation for user-facing changes.
- Ensure all tests pass before submitting a PR.
- PRs are only merged once they pass all required GitHub Actions.
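A baseline sketch you could drop into a _test.go file, assuming the Object, allocator, and cleaner definitions from the Quick Start; run it before and after your change and compare the results (for example with benchstat).

```go
package main

import (
    "testing"

    "github.com/AlexsanderHamir/GenPool/pool"
)

// BenchmarkGetPut exercises the Get/Put hot path under parallel load.
func BenchmarkGetPut(b *testing.B) {
    p, err := pool.NewPoolWithConfig(pool.Config[Object, *Object]{
        Allocator: allocator,
        Cleaner:   cleaner,
    })
    if err != nil {
        b.Fatal(err)
    }
    defer p.Close()

    b.RunParallel(func(pb *testing.PB) {
        for pb.Next() {
            obj := p.Get()
            obj.Data = append(obj.Data, 1)
            p.Put(obj)
        }
    })
}
```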
The best improvement is to do less!!!
This project is licensed under the MIT License - see the LICENSE file for details.