
Using Memory Block Pools

Last Updated: 11/20/2015

What are Memory Block Pools?

Memory Block Pools are pools of fixed-size memory blocks that can be used to allocate memory in a fast and deterministic manner.

What are the benefits of Memory Block Pools?

Because memory block pools consist of fixed-size blocks, they never suffer from fragmentation, which by its nature makes behavior nondeterministic. In addition, the time required to allocate and free a fixed-size memory block is comparable to that of simple linked-list manipulation. Furthermore, memory block allocation and de-allocation are done at the head of the available list, which provides the fastest possible linked-list processing and may help keep the actual memory block in cache.

What are the shortcomings of Memory Block Pools?

Lack of flexibility is the main drawback of fixed-size memory pools. The block size of a pool must be large enough to handle the worst-case memory requirements of its users, so memory may be wasted if requests of many different sizes are made to the same pool. A possible solution is to create several memory block pools, each with a different block size.

How are Memory Block Pools created?

Memory block pools are created either during initialization or during run-time by application threads. There is no limit on the number of memory block pools in an application.
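
For illustration, here is a minimal creation sketch using the tx_block_pool_create service. The pool name, block size, and statically allocated memory area are example choices rather than requirements; pools are commonly created in tx_application_define or from an application thread.

    #include "tx_api.h"

    #define EXAMPLE_POOL_SIZE   4096
    #define EXAMPLE_BLOCK_SIZE  100

    static TX_BLOCK_POOL example_pool;                          /* pool control block       */
    static UCHAR         example_pool_area[EXAMPLE_POOL_SIZE];  /* memory supplied to pool  */

    void example_pool_create(void)
    {
        /* Create a pool of fixed 100-byte blocks inside example_pool_area.
           TX_SUCCESS indicates the pool is ready for allocation requests. */
        UINT status = tx_block_pool_create(&example_pool, "example pool",
                                           EXAMPLE_BLOCK_SIZE,
                                           example_pool_area, EXAMPLE_POOL_SIZE);

        if (status != TX_SUCCESS)
        {
            /* Handle creation error, e.g. an invalid pointer or size. */
        }
    }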

How is the size of blocks specified?

As mentioned earlier, memory block pools contain a number of fixed-size blocks. The block size, in bytes, is specified during creation of the pool. ThreadX adds a small amount of overhead—the size of a C pointer—to each memory block in the pool. In addition, ThreadX might have to pad the block size to keep the beginning of each memory block on proper alignment.

What is the Memory Block Pool capacity?

The number of memory blocks in a pool is a function of the block size and the total number of bytes in the memory area supplied during creation. The capacity of a pool is calculated by dividing the total number of bytes in the supplied memory area by the block size plus the padding and pointer overhead bytes.
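
As a rough illustration (the exact figure depends on the port's alignment rules, so treat this as an estimate rather than the internal ThreadX calculation):

    #include <stdio.h>

    int main(void)
    {
        unsigned long pool_bytes = 2048;            /* bytes in the supplied memory area     */
        unsigned long block_size = 60;              /* requested block size, assumed aligned */
        unsigned long overhead   = sizeof(void *);  /* per-block pointer overhead            */

        /* Approximate capacity: 2048 / (60 + 4) = 32 blocks on a typical 32-bit port. */
        unsigned long capacity = pool_bytes / (block_size + overhead);

        printf("approximate capacity: %lu blocks\n", capacity);
        return 0;
    }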

What happens if a thread tries to allocate from a Memory Block Pool that is empty?

Application threads can suspend while waiting for a memory block from an empty pool. When a block is returned to the pool, the suspended thread is given this block and the thread is resumed. If multiple threads are suspended on the same memory block pool, they are resumed in the order they were suspended (FIFO). However, priority resumption is also possible if the application calls tx_block_pool_prioritize prior to the block release call that lifts thread suspension. The block pool prioritize service places the highest priority thread at the front of the suspension list, while leaving all other suspended threads in the same FIFO order.
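
The sketch below shows the typical allocate/release pattern, assuming a pool created as in the earlier example. The 100-tick timeout is illustrative; TX_NO_WAIT returns immediately and TX_WAIT_FOREVER suspends until a block becomes available.

    #include "tx_api.h"

    extern TX_BLOCK_POOL example_pool;   /* created as shown earlier */

    void example_use_block(void)
    {
        VOID *block = TX_NULL;

        /* Suspend for up to 100 timer ticks if no block is currently available. */
        UINT status = tx_block_allocate(&example_pool, &block, 100);

        if (status == TX_SUCCESS)
        {
            /* ... use the fixed-size block ... */

            /* Returning the block resumes the oldest suspended thread, or the
               highest-priority one if tx_block_pool_prioritize was called
               before this release. */
            tx_block_release(block);
        }
    }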

Where can I find more information?

You can find more information in the ThreadX User Guide.

 

©1997-2015 by Express Logic, Inc. All rights reserved. This document and the associated ThreadX software are the sole property of Express Logic, Inc. Each contains proprietary information of Express Logic, Inc.

 
