Dynamically Sized Object Pool

This is the second post in a series about object pools in C++.

In the previous post, we built a simple object pool that could only store a fixed number of objects. This post will build upon ideas presented in the first post and it is recommended that you read it first.

Use Case

The fixed size pool is a good choice in situations where we know up front the maximum number of objects we will ever need. Very often, however, we don’t know how many objects we will be creating. Imagine we are making a bullet hell game and don’t want to allocate each time we need to create a new projectile, so we opt to use a pool. We know there will be a lot of projectiles on the screen at once, but the total number will depend on how often the player shoots and how often the enemy AI decides to shoot. We could use our fixed size pool and cap the number of active projectiles that can be out at once, but that wouldn’t be very fun! We need a pool that can grow when we need more objects than we initially thought.

Design

Our dynamically sized pool will have the same goals as the fixed size object pool, with one addition:

  • Able to allocate space for additional objects at run time

Upgrading Our Object Storage

Our fixed size object pool used a single fixed size array to store its objects. We will need something fancier, however, to support a dynamic number of objects.

A Pitfall

In cases like this it is tempting to reach for a std::vector as our object storage. After all, this is exactly what vectors are for: they give us contiguous memory and resize automatically when we run out of space.

The issue with using a std::vector is its resize behavior. When a vector resizes, it first allocates a new, larger array, then copies (or moves) the existing elements into it, and finally frees the old array. When a user allocates an object from our pool, we return a pointer to the newly allocated object. If that pointer points into a vector’s array and the vector resizes, the pointer is invalidated. It will now be pointing into the vector’s old internal array, and attempting to access it will result in undefined behavior (hopefully a crash!). That’s no good! We need to guarantee that every pointer we give the user remains valid for the entire life of the object.
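We can see the problem with a tiny experiment (demonstration code only, not part of the pool): record the address of a vector’s storage, force a reallocation, and check whether the storage moved out from under us.

```cpp
#include <cstdint>
#include <vector>

// Demonstration only. We record the address of the vector's storage as an
// integer (so we never touch a dangling pointer), force a reallocation,
// and check whether the storage moved.
bool storageMoved()
{
    std::vector<int> v;
    v.push_back(1);
    std::uintptr_t before = reinterpret_cast<std::uintptr_t>(v.data());

    // reserve() beyond the current capacity must allocate a new array,
    // copy the elements over, and free the old one.
    v.reserve(v.capacity() + 1024 * 1024);

    std::uintptr_t after = reinterpret_cast<std::uintptr_t>(v.data());
    return before != after; // any pointer saved earlier now dangles
}
```

On any mainstream allocator this returns true: the old array is gone, and a pointer handed out before the resize would now point at freed memory.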

One Slab at a Time

To provide pointer stability, we can’t move any objects we’ve allocated. When we are at max capacity and a user attempts an allocation, we will have to allocate an additional region of memory. We will refer to these regions of memory as a Slab.

using Slab = std::array<ObjectWrapper<T>, nObjectsPerSlab>;

Using a std::array as our slab type requires us to know the size of the slab at compile time. To accomplish this, we will add an additional template parameter to our class. We will also include a static_assert to ensure that the size provided is greater than 1, because a slab size of 0 wouldn’t work and a slab size of 1 is no different from allocating each object on the heap! Our class definition now looks like this:

template <typename T, size_t nObjectsPerSlab>
class ObjectPool
{
  static_assert(nObjectsPerSlab > 1);

  ...

};

To keep track of the slabs we can store them in a std::list. We don’t need the slabs to be contiguous with one another, and we don’t want them to move around on us, so a list is a good choice.

std::list<Slab> m_slabs;

Keeping Track of Free and Allocated Objects

Keeping track of our objects is almost identical to the fixed size pool.

To recap, in our fixed size pool we did the following:

  • We process our storage array into a singly linked list of free objects
  • When a user requests an allocation we give away the first object in the list
  • When a user deallocates we put the freed object at the head of the list
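The snippets in this post poke at u.pNext and u.object, which come from the ObjectWrapper type introduced in the first post. As a refresher, here is a minimal sketch of that wrapper; the exact layout is my reconstruction from how it is used below, not a copy of the first post’s code.

```cpp
#include <new>

// Sketch of the slot wrapper assumed throughout this post. The union lets
// one slot hold either a free-list link (while the slot is free) or a
// live T (while the slot is allocated), sharing the same storage.
template <typename T>
struct ObjectWrapper
{
    union Storage
    {
        Storage() {}  // lifetimes of the members are managed manually
        ~Storage() {}
        ObjectWrapper* pNext; // valid while the slot sits on the free list
        T object;             // valid while the slot holds a live object
    } u;
};
```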

Two changes are needed to support the new slab-based approach.

First, each time we allocate a new slab of memory, we will need to process it and add all of the ObjectWrappers it contains to our free list. We can bundle the add-and-process behavior together in a single, private function, which we can use in our constructor to initialize our pool, as well as each time we need to allocate more space.

void addSlab()
{
  // allocate a new slab of memory
  m_slabs.emplace_back();
  Slab& slab = m_slabs.back();

  // link each object in the slab together
  for (size_t i = 0; i < slab.size() - 1; ++i)
  {
    slab[i].u.pNext = &slab[i + 1];
  }

  // point the last object at the current free list head
  slab[slab.size() - 1].u.pNext = m_pFreeListHead;

  // make the first object in the slab the new head of the list
  m_pFreeListHead = &slab.front();

  m_nCapacity += slab.size();
}

The second change is that when a user requests an allocation and our free list is empty, we need to call our new addSlab function to increase our capacity, and then carry on as usual.

template <typename... Args>
T* allocate(Args&&... args)
{
  if (!m_pFreeListHead) [[unlikely]]
  {
    addSlab();
  }

  ObjectWrapper<T>* pWrapper = m_pFreeListHead;
  m_pFreeListHead = m_pFreeListHead->u.pNext;
  ++m_nUsed;

  return std::launder(new (&pWrapper->u.object) T(std::forward<Args>(args)...));
}
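The deallocate side is not shown in this post, but it is the mirror image of allocate: destroy the object in place, then push its wrapper back onto the head of the free list. Below is a hedged sketch in the context of a stripped-down, fixed size stand-in pool; MiniPool, the wrapper layout, and the member names are assumptions for illustration, not the post’s actual class.

```cpp
#include <cstddef>
#include <new>
#include <utility>

// Assumed wrapper layout, matching the u.pNext / u.object usage above.
template <typename T>
struct ObjectWrapper
{
    union Storage
    {
        Storage() {}
        ~Storage() {}
        ObjectWrapper* pNext; // next free slot while on the free list
        T object;             // the live object while allocated
    } u;
};

// Stripped-down, fixed size stand-in pool to show deallocate in context.
template <typename T, std::size_t N = 4>
class MiniPool
{
public:
    MiniPool()
    {
        // thread the backing array into a free list
        for (std::size_t i = 0; i < N - 1; ++i)
            m_slots[i].u.pNext = &m_slots[i + 1];
        m_slots[N - 1].u.pNext = nullptr;
        m_pFreeListHead = &m_slots[0];
    }

    template <typename... Args>
    T* allocate(Args&&... args)
    {
        // assumes a free slot exists (the real pool would addSlab here)
        ObjectWrapper<T>* pWrapper = m_pFreeListHead;
        m_pFreeListHead = pWrapper->u.pNext;
        ++m_nUsed;
        return std::launder(new (&pWrapper->u.object) T(std::forward<Args>(args)...));
    }

    void deallocate(T* pObject)
    {
        // The union's members share an address with the wrapper itself,
        // so we can recover the wrapper from the object pointer.
        ObjectWrapper<T>* pWrapper = reinterpret_cast<ObjectWrapper<T>*>(pObject);
        pObject->~T();                       // end the object's lifetime
        pWrapper->u.pNext = m_pFreeListHead; // freed slot becomes the new head
        m_pFreeListHead = pWrapper;
        --m_nUsed;
    }

    std::size_t used() const { return m_nUsed; }

private:
    ObjectWrapper<T> m_slots[N];
    ObjectWrapper<T>* m_pFreeListHead = nullptr;
    std::size_t m_nUsed = 0;
};
```

Note that the most recently freed slot is handed out first, which keeps recently touched memory hot in the cache.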

Trade Offs

There are tons of different ways to design object pools. The approach I have taken here favors simplicity but that simplicity comes at a cost.

The Pool Only Grows

Each time we run out of capacity for new objects, we allocate a new slab of memory, but we never free one: there is no bookkeeping to detect when every object in a slab has been deallocated so the slab itself can be released. An approach like that is possible, but it would require more bookkeeping per slab, as well as a quick way to associate each ObjectWrapper with the slab to which it belongs.

No Growth Factor

To achieve true amortized constant allocation time, each successive slab we allocate needs to be larger than the last. Said another way, if the chosen slab size is much smaller than the number of objects the pool will eventually hold, performance will suffer because we will allocate new slabs on the heap frequently. If the slab size grew with each allocation (say, doubling), then no matter what the initial slab size was, heap allocations would quickly become rare.
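Some back-of-the-envelope arithmetic makes the point concrete. The helpers below are hypothetical, not part of the pool: they just count how many heap allocations it takes to provide a target number of objects under each strategy.

```cpp
#include <cstddef>

// Fixed slab size: one heap allocation per slabSize objects.
std::size_t slabsFixed(std::size_t target, std::size_t slabSize)
{
    return (target + slabSize - 1) / slabSize; // ceiling division
}

// Doubling strategy: each new slab is twice as large as the previous one.
std::size_t slabsDoubling(std::size_t target, std::size_t firstSlabSize)
{
    std::size_t provided = 0;
    std::size_t slabSize = firstSlabSize;
    std::size_t allocations = 0;
    while (provided < target)
    {
        provided += slabSize;
        slabSize *= 2; // growth factor of 2
        ++allocations;
    }
    return allocations;
}
```

With a fixed 64-object slab, reaching one million objects costs 15,625 heap allocations; with a doubling slab that starts at 64, it costs just 14.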

Conclusion

With the foundation of our fixed size pool and the addition of the above changes, we have created a pool that is able to grow at run time while still minimizing the number of times we need to do a full allocation on the heap. This pool, while simple, is powerful enough for my current needs. In the future I may revisit it and add an additional post to this series.

Download

The full source of this pool can be found here.