
Linux 2.6 Driver Development, Part 4: Memory Allocation

2010-01-25 11:02

4.1 Memory

#include <linux/slab.h>
void *kmalloc(size_t size, int flags);
The flags are defined in <linux/gfp.h>; the commonly used options are:
GFP_ATOMIC
Used to allocate memory from interrupt handlers and other code outside of a
process context. Never sleeps.
GFP_KERNEL
Normal allocation of kernel memory. May sleep.
GFP_USER
Used to allocate memory for user-space pages; it may sleep.
GFP_HIGHUSER
Like GFP_USER, but allocates from high memory, if any. High memory is
described in the next subsection.
GFP_NOIO
GFP_NOFS
These flags function like GFP_KERNEL, but they add restrictions on what the
kernel can do to satisfy the request. A GFP_NOFS allocation is not allowed to
perform any filesystem calls, while GFP_NOIO disallows the initiation of any
I/O at all. They are used primarily in the filesystem and virtual memory code
where an allocation may be allowed to sleep, but recursive filesystem calls
would be a bad idea.
The options above can be combined with the following flags using bitwise OR (|):
__GFP_DMA
This flag requests allocation to happen in the DMA-capable memory zone. The
exact meaning is platform-dependent and is explained in the following section.
__GFP_HIGHMEM
This flag indicates that the allocated memory may be located in high memory.
__GFP_COLD
Normally, the memory allocator tries to return “cache warm” pages—pages that
are likely to be found in the processor cache. Instead, this flag requests a “cold”
page, which has not been used in some time. It is useful for allocating pages for
DMA reads, where presence in the processor cache is not useful. See the section
“Direct Memory Access” in Chapter 15 for a full discussion of how to allocate
DMA buffers.
__GFP_NOWARN
This rarely used flag prevents the kernel from issuing warnings (with printk)
when an allocation cannot be satisfied.
__GFP_HIGH
This flag marks a high-priority request, which is allowed to consume even the
last pages of memory set aside by the kernel for emergencies.
__GFP_REPEAT
__GFP_NOFAIL
__GFP_NORETRY
These flags modify how the allocator behaves when it has difficulty satisfying an
allocation. __GFP_REPEAT means “try a little harder” by repeating the attempt—
but the allocation can still fail. The __GFP_NOFAIL flag tells the allocator never to
fail; it works as hard as needed to satisfy the request. Use of __GFP_NOFAIL is very
strongly discouraged; there will probably never be a valid reason to use it in a
device driver. Finally, __GFP_NORETRY tells the allocator to give up immediately if
the requested memory is not available.
 
void kfree(void *obj);
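
A minimal usage sketch (not from the original text; the function names and buffer sizes are illustrative). In process context GFP_KERNEL is the normal choice; in interrupt context GFP_ATOMIC must be used instead; __GFP_DMA can be OR-ed in when the buffer must come from the DMA zone:

#include <linux/slab.h>
#include <linux/errno.h>

/* Hypothetical driver buffers, for illustration only. */
static char *buf;
static char *dma_buf;

static int example_alloc(void)
{
    /* Process context: the allocation may sleep, so GFP_KERNEL is fine. */
    buf = kmalloc(1024, GFP_KERNEL);
    if (!buf)
        return -ENOMEM;

    /* OR in __GFP_DMA to request memory from the DMA-capable zone.
     * From an interrupt handler, GFP_ATOMIC would be used instead. */
    dma_buf = kmalloc(512, GFP_KERNEL | __GFP_DMA);
    if (!dma_buf) {
        kfree(buf);
        return -ENOMEM;
    }
    return 0;
}

static void example_free(void)
{
    kfree(dma_buf);    /* kfree(NULL) would also be safe */
    kfree(buf);
}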

4.2 Cache

#include <linux/slab.h>
kmem_cache_t *kmem_cache_create(const char *name, size_t size,
                                size_t offset,
                                unsigned long flags,
                                void (*constructor)(void *, kmem_cache_t *,
                                                    unsigned long flags),
                                void (*destructor)(void *, kmem_cache_t *,
                                                   unsigned long flags));
The available flags are:
SLAB_NO_REAP
Setting this flag protects the cache from being reduced when the system is looking
for memory. Setting this flag is normally a bad idea; it is important to avoid
restricting the memory allocator’s freedom of action unnecessarily.
SLAB_HWCACHE_ALIGN
This flag requires each data object to be aligned to a cache line; actual alignment
depends on the cache layout of the host platform. This option can be a good
choice if your cache contains items that are frequently accessed on SMP
machines. The padding required to achieve cache line alignment can end up
wasting significant amounts of memory, however.
SLAB_CACHE_DMA
This flag requires each data object to be allocated in the DMA memory zone.
 
void *kmem_cache_alloc(kmem_cache_t *cache, int flags);
void kmem_cache_free(kmem_cache_t *cache, const void *obj);
 
int kmem_cache_destroy(kmem_cache_t *cache);
 
Example:
/* declare one cache pointer: use it for all devices */
kmem_cache_t *scullc_cache;
/* scullc_init: create a cache for our quanta */
scullc_cache = kmem_cache_create("scullc", scullc_quantum,
        0, SLAB_HWCACHE_ALIGN, NULL, NULL); /* no ctor/dtor */
if (!scullc_cache) {
    scullc_cleanup( );
    return -ENOMEM;
}
/* Allocate a quantum using the memory cache */
if (!dptr->data[s_pos]) {
    dptr->data[s_pos] = kmem_cache_alloc(scullc_cache, GFP_KERNEL);
    if (!dptr->data[s_pos])
        goto nomem;
    memset(dptr->data[s_pos], 0, scullc_quantum);
}
for (i = 0; i < qset; i++)
    if (dptr->data[i])
        kmem_cache_free(scullc_cache, dptr->data[i]);
/* scullc_cleanup: release the cache of our quanta */
if (scullc_cache)
    kmem_cache_destroy(scullc_cache);

4.3 Memory Pools

#include <linux/mempool.h>
mempool_t *mempool_create(int min_nr,
                          mempool_alloc_t *alloc_fn,
                          mempool_free_t *free_fn,
                          void *pool_data);
typedef void *(mempool_alloc_t)(int gfp_mask, void *pool_data);
typedef void (mempool_free_t)(void *element, void *pool_data);
int mempool_resize(mempool_t *pool, int new_min_nr, int gfp_mask);
void mempool_destroy(mempool_t *pool);
Example:
cache = kmem_cache_create(. . .);
pool = mempool_create(MY_POOL_MINIMUM,
                      mempool_alloc_slab, mempool_free_slab,
                      cache);
void *mempool_alloc(mempool_t *pool, int gfp_mask);
void mempool_free(void *element, mempool_t *pool);
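
A minimal sketch of the rest of the lifecycle (not from the original text; MY_POOL_MINIMUM, pool and cache come from the snippet above). The pool keeps at least min_nr preallocated elements in reserve, so mempool_alloc can fall back on them when the slab allocator is under pressure; every element must be returned before the pool is destroyed, and the pool must be destroyed before the cache backing it:

void *obj = mempool_alloc(pool, GFP_KERNEL);
if (obj) {
    /* ... use the object ... */
    mempool_free(obj, pool);
}

/* Teardown: destroy the pool first, then the underlying cache. */
mempool_destroy(pool);
kmem_cache_destroy(cache);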

4.4 get_free_page

When a large amount of memory is needed, the page-oriented allocation functions can be used.
get_zeroed_page(unsigned int flags);
Returns a pointer to a new page and fills the page with zeros.
__get_free_page(unsigned int flags);
Similar to get_zeroed_page, but doesn’t clear the page.
__get_free_pages(unsigned int flags, unsigned int order);
Allocates and returns a pointer to the first byte of a memory area that is
potentially several (physically contiguous) pages long but doesn’t zero the area.
 
void free_page(unsigned long addr);
void free_pages(unsigned long addr, unsigned long order);
Example:
/* Here's the allocation of a single quantum */
if (!dptr->data[s_pos]) {
    dptr->data[s_pos] =
        (void *)__get_free_pages(GFP_KERNEL, dptr->order);
    if (!dptr->data[s_pos])
        goto nomem;
    memset(dptr->data[s_pos], 0, PAGE_SIZE << dptr->order);
}
/* This code frees a whole quantum-set */
for (i = 0; i < qset; i++)
    if (dptr->data[i])
        free_pages((unsigned long)(dptr->data[i]),
                dptr->order);
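
The order argument is the base-2 logarithm of the number of pages being requested. A minimal sketch (not from the original text; the 16 KB size is arbitrary) of deriving it from a byte count with get_order(), declared in <asm/page.h>:

#include <linux/gfp.h>      /* __get_free_pages(), free_pages(), GFP_KERNEL */
#include <linux/errno.h>
#include <asm/page.h>       /* PAGE_SIZE, get_order() */

static int example_order_alloc(void)
{
    unsigned long addr;
    int order = get_order(16 * 1024);   /* 16 KB = 4 pages of 4 KB -> order 2 */

    addr = __get_free_pages(GFP_KERNEL, order);
    if (!addr)
        return -ENOMEM;

    /* The buffer is (PAGE_SIZE << order) bytes of physically contiguous memory. */
    free_pages(addr, order);
    return 0;
}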

4.5 alloc_pages

struct page *alloc_pages_node(int nid, unsigned int flags,
                              unsigned int order);
struct page *alloc_pages(unsigned int flags, unsigned int order);
struct page *alloc_page(unsigned int flags);
void __free_page(struct page *page);
void __free_pages(struct page *page, unsigned int order);
void free_hot_page(struct page *page);
void free_cold_page(struct page *page);
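
These functions return a struct page * rather than a virtual address. A minimal sketch (not from the original text): a page allocated with GFP_KERNEL lives in low memory, so page_address() yields a usable kernel virtual address; a high-memory page would need kmap() instead:

#include <linux/mm.h>        /* struct page, page_address() */
#include <linux/gfp.h>       /* alloc_page(), __free_page(), GFP_KERNEL */
#include <linux/string.h>    /* memset() */
#include <linux/errno.h>

static int example_page_alloc(void)
{
    struct page *page = alloc_page(GFP_KERNEL);
    void *vaddr;

    if (!page)
        return -ENOMEM;

    vaddr = page_address(page);      /* valid for low-memory pages */
    memset(vaddr, 0, PAGE_SIZE);
    /* ... use the page ... */
    __free_page(page);
    return 0;
}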

4.6 vmalloc

vmalloc allocates a region that is contiguous in the kernel's virtual address space (the underlying pages need not be physically contiguous).
#include <linux/vmalloc.h>
void *vmalloc(unsigned long size);
void vfree(void * addr);
void *ioremap(unsigned long offset, unsigned long size);
void iounmap(void * addr);
Example:
/* Allocate a quantum using virtual addresses */
if (!dptr->data[s_pos]) {
    dptr->data[s_pos] =
        (void *)vmalloc(PAGE_SIZE << dptr->order);
    if (!dptr->data[s_pos])
        goto nomem;
    memset(dptr->data[s_pos], 0, PAGE_SIZE << dptr->order);
}
/* Release the quantum-set */
for (i = 0; i < qset; i++)
    if (dptr->data[i])
        vfree(dptr->data[i]);
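
ioremap() and iounmap(), declared in <asm/io.h>, do not allocate RAM at all; they map a region of device (I/O) memory into the kernel's virtual address space. A minimal sketch (not from the original text; the physical base address and length are hypothetical), with the mapped region accessed through readl()/writel() rather than plain pointer dereferences:

#include <asm/io.h>          /* ioremap(), iounmap(), readl() */
#include <linux/errno.h>

#define MY_DEV_PHYS_BASE 0xfb000000UL    /* hypothetical register base */
#define MY_DEV_REGION_LEN 0x1000UL       /* hypothetical region length */

static int example_ioremap(void)
{
    void __iomem *regs = ioremap(MY_DEV_PHYS_BASE, MY_DEV_REGION_LEN);
    unsigned int id;

    if (!regs)
        return -ENOMEM;

    id = readl(regs);        /* read a (hypothetical) register at offset 0 */
    (void)id;
    iounmap(regs);
    return 0;
}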

4.7 Allocating Large Contiguous Buffers

#include <linux/bootmem.h>
void *alloc_bootmem(unsigned long size);
void *alloc_bootmem_low(unsigned long size);
void *alloc_bootmem_pages(unsigned long size);
void *alloc_bootmem_low_pages(unsigned long size);
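
These allocators work only at boot time, before the regular memory-management system takes over, so they can only be used by drivers linked directly into the kernel that hook into the boot process, not by loadable modules. A minimal sketch (not from the original text; the function name and the 2 MB size are illustrative):

#include <linux/bootmem.h>
#include <linux/init.h>

static void *big_buffer;     /* hypothetical large, physically contiguous buffer */

/* Must run during boot (e.g. from architecture setup code), not from module init. */
static void __init reserve_big_buffer(void)
{
    /* The _low variants allocate from low memory (useful for DMA);
     * the _pages variants return whole, page-aligned pages. */
    big_buffer = alloc_bootmem_low_pages(2 * 1024 * 1024);
}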