
Linux Source Code Analysis: The Memory Descriptor (mm_struct)

2017-07-22 14:52

1. Introduction

A process's virtual address space is described mainly by two data structures. The top-level one is mm_struct (defined in mm_types.h); the level below it is vm_area_struct. The top-level mm_struct describes a process's entire virtual address space, while vm_area_struct describes a single interval of that space (a virtual memory area, or VMA for short). Each process has exactly one mm_struct, and the process's task_struct holds a pointer to it. In other words, mm_struct describes the whole user address space of a process (note: the user space).

2. Memory-related members of task_struct

struct task_struct {
	// ...
	struct mm_struct *mm, *active_mm;
	// ...
};


struct mm_struct *mm, *active_mm;

mm: the memory descriptor of the user address space owned by the process.

active_mm: the memory descriptor actually in use while the process is running.

Note:

For an ordinary process these two pointers are identical.
A kernel thread owns no user address space, so its mm member is always NULL.
When a kernel thread runs, its active_mm is set to the active_mm of the previously running process (it borrows that address space; see the sketch below).
Every process has its own mm_struct and therefore its own independent address space, which is what keeps processes from interfering with one another.
When several execution flows share a single address space, that is, use one mm_struct together, they are threads.
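The borrowing of active_mm happens at context-switch time. The fragment below is a simplified paraphrase of the scheduler's context_switch() logic in kernels of this era; it is not runnable on its own, and details vary between versions.

/* Simplified paraphrase of context_switch() (kernel/sched.c in older kernels):
 * how mm and active_mm are handed over between tasks. */
struct mm_struct *mm    = next->mm;          /* NULL for kernel threads */
struct mm_struct *oldmm = prev->active_mm;

if (!mm) {                                   /* next is a kernel thread */
	next->active_mm = oldmm;             /* borrow the previous address space */
	atomic_inc(&oldmm->mm_count);        /* pin the descriptor itself */
	enter_lazy_tlb(oldmm, next);         /* no user pages will be touched */
} else {
	switch_mm(oldmm, mm, next);          /* switch to next's page tables */
}

if (!prev->mm) {                             /* prev was a kernel thread */
	prev->active_mm = NULL;              /* the borrowed mm is dropped later via mmdrop() */
}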

3. Layout of a process's address space



As the figure shows, the layer behind mm_struct should look familiar: the stack area, the memory-mapped area, the heap, the BSS segment, the data segment and the text segment (discussed further later).



What the figure above highlights is that the kernel exists only once in physical memory and is shared by all processes; inside the kernel, each process is managed through its PCB (process control block).
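To see this layout from user space, a small self-contained program like the one below (illustrative, not from the original post) prints one address per region; the exact values change across runs because of address-space layout randomization, but the relative ordering matches the figure. Comparing the output with cat /proc/<pid>/maps for the same process is a useful exercise.

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int global_initialized = 1;   /* data segment */
int global_uninitialized;     /* BSS segment */

int main(void)
{
	int local = 0;                                     /* stack */
	char *heap = malloc(16);                           /* heap */
	void *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);  /* mmap area */

	printf("text  : %p\n", (void *)main);
	printf("data  : %p\n", (void *)&global_initialized);
	printf("bss   : %p\n", (void *)&global_uninitialized);
	printf("heap  : %p\n", (void *)heap);
	printf("mmap  : %p\n", map);
	printf("stack : %p\n", (void *)&local);

	free(heap);
	munmap(map, 4096);
	return 0;
}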

4. The mm_struct data structure in detail

The following summary is adapted from: http://blog.csdn.net/qq_26768741/article/details/54375524

Within the address space, mmap heads the linked list of memory regions (each represented by a vm_area_struct), while mm_rb roots a red-black tree holding the same regions: the list is convenient for walking all regions in address order (see the sketch just below), and the red-black tree makes lookup by address fast.

The two structures are kept in step. Walking the list is fine when a process has only a few regions, but lookups among many regions go through the red-black tree rooted at mm_rb, which stays efficient even for a large number of VMAs.

All mm_struct structures are linked into one doubly linked list through their mmlist field; the head of that list is init_mm, the memory descriptor of the init process's address space.
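As an illustration of how the mmap list is typically walked, here is a hypothetical kernel-side helper (the name dump_vmas is made up; the fields and locking primitives are the real ones from kernels of this era):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical helper: walk every region of an address space via the mmap
 * linked list, the way code that needs *all* VMAs (such as the
 * /proc/<pid>/maps implementation) iterates. Lookups by address use the
 * red-black tree instead (see find_vma further below). */
static void dump_vmas(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	down_read(&mm->mmap_sem);            /* the VMA list is protected by mmap_sem */
	for (vma = mm->mmap; vma; vma = vma->vm_next)
		printk(KERN_INFO "vma %08lx-%08lx flags %lx\n",
		       vma->vm_start, vma->vm_end, vma->vm_flags);
	up_read(&mm->mmap_sem);
}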

atomic_t mm_users;

atomic_t mm_count;

A process's mm_struct can be shared: other tasks may use the same mm_struct. mm_users counts the tasks using this user space, while mm_count counts references to the structure itself (all the users together count as one).
A kernel thread has no address space and therefore no mm_struct of its own; it runs with the memory descriptor of the process that ran before it.
Addresses used by a program show strong locality: the region around the most recently used virtual address is very likely to be needed again. The mm_struct exploits this by remembering the most recently found region in mmap_cache, so a repeated lookup in the same region can skip the tree search (see the find_vma() sketch below).
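The mmap_cache fast path is easiest to see in find_vma(). The following is a lightly trimmed version of that function as it appears in mm/mmap.c of kernels from this period (later kernels replaced mmap_cache with a small per-task vmacache):

/* Lightly trimmed find_vma() from mm/mmap.c of this era: returns the first
 * VMA whose vm_end lies above addr, checking mmap_cache first and falling
 * back to a descent of the red-black tree. */
struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma = mm->mmap_cache;     /* 1. try the cached region */

	if (!(vma && vma->vm_end > addr && vma->vm_start <= addr)) {
		struct rb_node *rb_node = mm->mm_rb.rb_node;

		vma = NULL;
		while (rb_node) {                        /* 2. binary search in mm_rb */
			struct vm_area_struct *tmp =
				rb_entry(rb_node, struct vm_area_struct, vm_rb);

			if (tmp->vm_end > addr) {
				vma = tmp;
				if (tmp->vm_start <= addr)
					break;           /* addr falls inside this VMA */
				rb_node = rb_node->rb_left;
			} else {
				rb_node = rb_node->rb_right;
			}
		}
		if (vma)
			mm->mmap_cache = vma;            /* 3. remember it for next time */
	}
	return vma;
}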
struct mm_struct {
struct vm_area_struct * mmap; /* list of VMAs: head of the linked list of memory regions */
struct rb_root mm_rb; /* red-black tree of the memory regions */
struct vm_area_struct * mmap_cache; /* last find_vma result: the most recently found region */
#ifdef CONFIG_MMU

/* finds a free range of the process address space for a new mapping */

unsigned long (*get_unmapped_area) (struct file *filp,
unsigned long addr, unsigned long len,
unsigned long pgoff, unsigned long flags);
/* method used when unmapping a region */
void (*unmap_area) (struct mm_struct *mm, unsigned long addr);
#endif
unsigned long mmap_base; /* base of the mmap (memory mapping) area */
unsigned long task_size; /* size of task vm space */
unsigned long cached_hole_size; /* if non-zero, the largest hole below free_area_cache */
unsigned long free_area_cache; /* first hole of size cached_hole_size or larger */
pgd_t * pgd; /* pointer to the page global directory (the process's page table directory) */
atomic_t mm_users; /* How many users with user space? (number of tasks sharing this user space) */
atomic_t mm_count; /* How many references to "struct mm_struct" (users count as 1): reference count of the descriptor itself */
int map_count; /* number of VMAs (memory regions) */
struct rw_semaphore mmap_sem;
spinlock_t page_table_lock; /* Protects page tables and some counters (a spinlock) */

struct list_head mmlist; /* List of maybe swapped mm's. These are globally strung
* together off init_mm.mmlist, and are protected
* by mmlist_lock
*/

unsigned long hiwater_rss; /* High-watermark of RSS usage: peak number of resident pages */
unsigned long hiwater_vm; /* High-water virtual memory usage: peak size of the mapped address space, in pages */

unsigned long total_vm, locked_vm, shared_vm, exec_vm;
unsigned long stack_vm, reserved_vm, def_flags, nr_ptes;
unsigned long start_code, end_code, start_data, end_data; /* boundaries of the code and data segments */
unsigned long start_brk, brk, start_stack; /* heap start, current heap end, and stack start */
unsigned long arg_start, arg_end, env_start, env_end; /* start and end addresses of the command-line arguments and of the environment variables */

unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */

/*
* Special counters, in some configurations protected by the
* page_table_lock, in other configurations by being atomic.
*/
struct mm_rss_stat rss_stat;

struct linux_binfmt *binfmt;

cpumask_t cpu_vm_mask;

/* Architecture-specific MM context */
mm_context_t context;

/* Swap token stuff */
/*
* Last value of global fault stamp as seen by this process.
* In other words, this value gives an indication of how long
* it has been since this task got the token.
* Look at mm/thrash.c
*/
unsigned int faultstamp;
unsigned int token_priority;
unsigned int last_interval;

unsigned long flags; /* Must use atomic bitops to access the bits */

struct core_state *core_state; /* coredumping support */
#ifdef CONFIG_AIO
spinlock_t ioctx_lock;
struct hlist_head ioctx_list;
#endif
#ifdef CONFIG_MM_OWNER
/*
* "owner" points to a task that is regarded as the canonical
* user/owner of this mm. All of the following must be true in
* order for it to be changed:
*
* current == mm->owner
* current->mm != mm
* new_owner->mm == mm
* new_owner->alloc_lock is held
*/
struct task_struct *owner;
#endif

#ifdef CONFIG_PROC_FS
/* store ref to file /proc/<pid>/exe symlink points to */
struct file *exe_file;
unsigned long num_exe_file_vmas;
#endif
#ifdef CONFIG_MMU_NOTIFIER
struct mmu_notifier_mm *mmu_notifier_mm;
#endif
};
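To see a few of these fields in action, a minimal kernel-module sketch such as the one below (entirely hypothetical, not part of the original post) can print the segment boundaries recorded in the mm_struct of the process that loads it (i.e. insmod):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/mm_types.h>

static int __init mm_dump_init(void)
{
	struct mm_struct *mm = current->mm;   /* insmod's own address space */

	if (!mm)
		return -EINVAL;               /* kernel threads have no mm */

	printk(KERN_INFO "code  : %lx-%lx\n", mm->start_code, mm->end_code);
	printk(KERN_INFO "data  : %lx-%lx\n", mm->start_data, mm->end_data);
	printk(KERN_INFO "heap  : %lx-%lx\n", mm->start_brk, mm->brk);
	printk(KERN_INFO "stack : %lx\n",     mm->start_stack);
	printk(KERN_INFO "vmas  : %d, total_vm: %lu pages\n",
	       mm->map_count, mm->total_vm);
	return 0;
}

static void __exit mm_dump_exit(void) { }

module_init(mm_dump_init);
module_exit(mm_dump_exit);
MODULE_LICENSE("GPL");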




5. Page tables: an overview

The following is adapted from http://blog.csdn.net/qq_26768741/article/details/54375524

When the Linux kernel manages memory it uses paging. The addresses an application works with are virtual addresses; they pass through several levels of page tables before a real physical address comes out. A process's address space is therefore a virtual address space, and every address in it is mapped onto physical memory through the page tables. The shared kernel portion (1 GB in the classic 32-bit 3G/1G split) exists only once in physical memory, while the remaining 3 GB are private to each process and hold different contents. Page table entries also carry access permissions, which gives every memory region its own protection. For example, given:

char *p = "12342";

the string literal "12342" is placed in the read-only constant data section, so the page table entries covering that region are marked read-only. This is how the kernel can enforce protection across the whole address space efficiently (a small demonstration follows).
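A quick way to observe this protection, assuming a typical Linux/glibc setup, is the sketch below: writing through p is killed with SIGSEGV because the literal's page is mapped without write permission, while the stack copy can be modified freely.

#include <stdio.h>

int main(void)
{
	char *p = "12342";       /* points into the read-only constant mapping */
	char  a[] = "12342";     /* array copy lives on the writable stack */

	a[0] = 'X';              /* fine: the stack VMA is readable and writable */
	printf("%s %s\n", p, a);

	p[0] = 'X';              /* SIGSEGV: the page is mapped read-only */
	return 0;
}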

Every process has a process descriptor, task_struct, whose mm pointer refers to that process's memory descriptor, and each mm in turn has its own page tables. The pgd field holds the process's page table directory (page global directory); each process has its own page directory, and note that it is distinct from the kernel's. When the scheduler runs the process, its virtual addresses are translated into physical addresses through these tables; Linux uses a multi-level translation, classically three levels (pgd, pmd, pte; see the walk sketch below).
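For reference, a hand-rolled walk of those levels looks roughly like the hypothetical helper below. It follows pre-4.12 kernels (before the p4d level was added); lookup_pte is a made-up name, and real code must also take the appropriate locks and pte_unmap() the result.

#include <linux/mm.h>
#include <asm/pgtable.h>

/* Hypothetical sketch: walk the page tables of an mm by hand to find the
 * pte that maps a user virtual address. */
static pte_t *lookup_pte(struct mm_struct *mm, unsigned long addr)
{
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;

	pgd = pgd_offset(mm, addr);          /* page global directory entry */
	if (pgd_none(*pgd) || pgd_bad(*pgd))
		return NULL;

	pud = pud_offset(pgd, addr);         /* folded away on 2-/3-level configurations */
	if (pud_none(*pud) || pud_bad(*pud))
		return NULL;

	pmd = pmd_offset(pud, addr);         /* page middle directory entry */
	if (pmd_none(*pmd) || pmd_bad(*pmd))
		return NULL;

	return pte_offset_map(pmd, addr);    /* the page table entry itself */
}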

6. The vm_area_struct structure

/*
* This struct defines a memory VMM memory area. There is one of these
* per VM-area/task.  A VM area is any part of the process virtual memory
* space that has a special rule for the page-fault handlers (ie a shared
* library, the executable area etc).
*/
struct vm_area_struct {
struct mm_struct * vm_mm;	/* The address space we belong to. */
unsigned long vm_start;		/* Our start address within vm_mm. */
unsigned long vm_end;		/* The first byte after our end address within vm_mm. */

/* linked list of VM areas per task, sorted by address */
struct vm_area_struct *vm_next;

pgprot_t vm_page_prot;		/* Access permissions of this VMA. */
unsigned long vm_flags;		/* Flags, see mm.h. */

struct rb_node vm_rb;

/*
* For areas with an address space and backing store,
* linkage into the address_space->i_mmap prio tree, or
* linkage to the list of like vmas hanging off its node, or
* linkage of vma in the address_space->i_mmap_nonlinear list.
*/
union {
struct {
struct list_head list;
void *parent;	/* aligns with prio_tree_node parent */
struct vm_area_struct *head;
} vm_set;

struct raw_prio_tree_node prio_tree_node;
} shared;

/*
* A file's MAP_PRIVATE vma can be in both i_mmap tree and anon_vma
* list, after a COW of one of the file pages.	A MAP_SHARED vma
* can only be in the i_mmap tree.  An anonymous MAP_PRIVATE, stack
* or brk vma (with NULL file) can only be in an anon_vma list.
*/
struct list_head anon_vma_chain; /* Serialized by mmap_sem & page_table_lock */
struct anon_vma *anon_vma;	/* Serialized by page_table_lock */

/* Function pointers to deal with this struct. */
const struct vm_operations_struct *vm_ops;

/* Information about our backing store: */
unsigned long vm_pgoff;		/* Offset (within vm_file) in PAGE_SIZE units, *not* PAGE_CACHE_SIZE */
struct file * vm_file;		/* File we map to (can be NULL). */
void * vm_private_data;		/* was vm_pte (shared mem) */
unsigned long vm_truncate_count;/* truncate_count or restart_addr */

#ifndef CONFIG_MMU
struct vm_region *vm_region;	/* NOMMU mapping region */
#endif
#ifdef CONFIG_NUMA
struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
#endif
};
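Each line of /proc/<pid>/maps is rendered from exactly one vm_area_struct (vm_start-vm_end, the rwxp bits from vm_flags, vm_pgoff and vm_file), so a quick way to watch these structures is simply to dump a process's own maps file, for example:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	char cmd[64];

	/* every line printed below corresponds to one vm_area_struct */
	snprintf(cmd, sizeof(cmd), "cat /proc/%d/maps", getpid());
	return system(cmd);
}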