From 75382a2dca0e9e9e57e88b479cf537549461a934 Mon Sep 17 00:00:00 2001
From: Jiajian Ye
Date: Thu, 28 Apr 2022 23:15:56 -0700
Subject: tools/vm/page_owner_sort.c: support for multi-value selection in single argument

When viewing page owner information, we may want to select blocks whose
PID/TGID/TASK_COMM_NAME appears in a user-specified list for data analysis
and aggregation. But currently page_owner_sort only supports selecting
blocks associated with only one specified PID/TGID/TASK_COMM_NAME.

Therefore, the following adjustments are made to fix the problem:

1. Enhance the selecting function to support the selection of multiple
PIDs/TGIDs/TASK_COMM_NAMEs. The enhanced usages are as follows:

	--pid <pidlist>		Select by pid. This selects the blocks whose
				PID numbers appear in <pidlist>.
	--tgid <tgidlist>	Select by tgid. This selects the blocks whose
				TGID numbers appear in <tgidlist>.
	--name <cmdlist>	Select by task command name. This selects the
				blocks whose task command name appears in
				<cmdlist>.

Where <pidlist>, <tgidlist> and <cmdlist> are single arguments in the form
of a comma-separated list, which offers a way to specify individual
selecting rules.

For example, if you want to select blocks whose tgids are 1, 2 or 3, you
have to use 4 commands as follows:

	./page_owner_sort <input> <output1> --tgid=1
	./page_owner_sort <input> <output2> --tgid=2
	./page_owner_sort <input> <output3> --tgid=3
	cat <output1> <output2> <output3> > <output>

With this patch, you can use only 1 command to obtain the same result as
above:

	./page_owner_sort <input> <output> --tgid=1,2,3

2. Update the explanations of --pid, --tgid and --name in the usage()
function and the document (Documentation/vm/page_owner.rst).

This work is coauthored by Yixuan Cao, Shenghong Han, Yinan Zhang,
Chongxi Zhao, Yuhong Feng and Yongqiang Liu.

Link: https://lkml.kernel.org/r/20220401024856.767-2-yejiajian2018@email.szu.edu.cn
Signed-off-by: Jiajian Ye
Cc: Chongxi Zhao
Cc: Shenghong Han
Cc: Yinan Zhang
Cc: Yixuan Cao
Cc: Yongqiang Liu
Cc: Yuhong Feng
Cc: Haowen Bai
Cc: Sean Anderson
Signed-off-by: Andrew Morton
---
 Documentation/vm/page_owner.rst | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/page_owner.rst b/Documentation/vm/page_owner.rst
index 7e0c3f574e78..3102f91d635c 100644
--- a/Documentation/vm/page_owner.rst
+++ b/Documentation/vm/page_owner.rst
@@ -129,7 +129,6 @@ Usage
 		Specify culling rules.Culling syntax is key[,key[,...]].Choose a
 		multi-letter key from the **STANDARD FORMAT SPECIFIERS** section.
-		<rules> is a single argument in the form of a comma-separated list,
 		which offers a way to specify individual culling rules. The recognized
 		keywords are described in the **STANDARD FORMAT SPECIFIERS** section below.
@@ -137,7 +136,6 @@ Usage
 		the STANDARD SORT KEYS section below. Mixed use of abbreviated and
 		complete-form of keys is allowed.
-
 		Examples:
 			./page_owner_sort <input> <output> --cull=stacktrace
 			./page_owner_sort <input> <output> --cull=st,pid,name
@@ -147,9 +145,21 @@ Usage
 	-f	Filter out the information of blocks whose memory has been released.

 	Select:
-	--pid		Select by pid.
-	--tgid		Select by tgid.
-	--name		Select by task command name.
+	--pid <pidlist>		Select by pid. This selects the blocks whose process ID
+				numbers appear in <pidlist>.
+	--tgid <tgidlist>	Select by tgid. This selects the blocks whose thread
+				group ID numbers appear in <tgidlist>.
+	--name <cmdlist>	Select by task command name. This selects the blocks whose
+				task command name appears in <cmdlist>.
+
+	<pidlist>, <tgidlist> and <cmdlist> are single arguments in the form of a
+	comma-separated list, which offers a way to specify individual selecting rules.
+
+
+	Examples:
+		./page_owner_sort <input> <output> --pid=1
+		./page_owner_sort <input> <output> --tgid=1,2,3
+		./page_owner_sort <input> <output> --name name1,name2

 STANDARD FORMAT SPECIFIERS
 ==========================
--
cgit v1.2.3

From ebbeae36387ccf1326c896167872c3acf6c3c956 Mon Sep 17 00:00:00 2001
From: Jiajian Ye
Date: Thu, 28 Apr 2022 23:15:57 -0700
Subject: tools/vm/page_owner_sort.c: support sorting blocks by multiple keys

When viewing page owner information, we may want to sort blocks of
information by multiple keys, since one single key does not uniquely
identify a block. Therefore, the following adjustments are made:

1. Add a new --sort option to support sorting blocks of information by
multiple keys.

	./page_owner_sort <input> <output> --sort=<order>
	./page_owner_sort <input> <output> --sort <order>

<order> is a single argument in the form of a comma-separated list, which
offers a way to specify the sorting order. Sorting syntax is
[+|-]key[,[+|-]key[,...]]. The ascending or descending order can be
specified by adding the + (ascending, default) or - (descending) prefix to
the key:

	./page_owner_sort <input> <output> [option] --sort -key1,+key2,key3...

For example, to sort the blocks first by task command name in
lexicographic order and then by pid in ascending numerical order, use the
following:

	./page_owner_sort <input> <output> --sort=name,+pid

To sort the blocks first by pid in ascending order and then by the
timestamp of the page when it is allocated in descending order, use the
following:

	./page_owner_sort <input> <output> --sort=pid,-alloc_ts

2. Add explanations of the newly added --sort option in the usage()
function and the document (Documentation/vm/page_owner.rst).

This work is coauthored by Yixuan Cao, Shenghong Han, Yinan Zhang,
Chongxi Zhao, Yuhong Feng and Yongqiang Liu.

Link: https://lkml.kernel.org/r/20220401024856.767-3-yejiajian2018@email.szu.edu.cn
Signed-off-by: Jiajian Ye
Cc: Chongxi Zhao
Cc: Shenghong Han
Cc: Yinan Zhang
Cc: Yixuan Cao
Cc: Yongqiang Liu
Cc: Yuhong Feng
Cc: Haowen Bai
Cc: Sean Anderson
Signed-off-by: Andrew Morton
---
 Documentation/vm/page_owner.rst |  24 +++++-
 tools/vm/page_owner_sort.c      | 164 ++++++++++++++++++++++++++++++++++------
 2 files changed, 165 insertions(+), 23 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/page_owner.rst b/Documentation/vm/page_owner.rst
index 3102f91d635c..523bf3419512 100644
--- a/Documentation/vm/page_owner.rst
+++ b/Documentation/vm/page_owner.rst
@@ -121,6 +121,14 @@ Usage
 	-r	Sort by memory release time.
 	-s	Sort by stack trace.
 	-t	Sort by times (default).
+	--sort <order>	Specify sorting order. Sorting syntax is [+|-]key[,[+|-]key[,...]].
+		Choose a key from the **STANDARD FORMAT SPECIFIERS** section. The "+" is
+		optional since default direction is increasing numerical or lexicographic
+		order. Mixed use of abbreviated and complete-form of keys is allowed.
+ + Examples: + ./page_owner_sort --sort=n,+pid,-tgid + ./page_owner_sort --sort=at additional function:: @@ -165,9 +173,23 @@ STANDARD FORMAT SPECIFIERS ========================== :: +For --sort option: + + KEY LONG DESCRIPTION + p pid process ID + tg tgid thread group ID + n name task command name + st stacktrace stack trace of the page allocation + T txt full text of block + ft free_ts timestamp of the page when it was released + at alloc_ts timestamp of the page when it was allocated + +For --curl option: + KEY LONG DESCRIPTION p pid process ID tg tgid thread group ID n name task command name f free whether the page has been released or not - st stacktrace stace trace of the page allocation + st stacktrace stack trace of the page allocation + diff --git a/tools/vm/page_owner_sort.c b/tools/vm/page_owner_sort.c index 16fb034c6a4e..beca990707fb 100644 --- a/tools/vm/page_owner_sort.c +++ b/tools/vm/page_owner_sort.c @@ -53,15 +53,29 @@ enum CULL_BIT { CULL_COMM = 1<<4, CULL_STACKTRACE = 1<<5 }; +enum ARG_TYPE { + ARG_TXT, ARG_COMM, ARG_STACKTRACE, ARG_ALLOC_TS, ARG_FREE_TS, + ARG_CULL_TIME, ARG_PAGE_NUM, ARG_PID, ARG_TGID, ARG_UNKNOWN, ARG_FREE +}; +enum SORT_ORDER { + SORT_ASC = 1, + SORT_DESC = -1, +}; struct filter_condition { - pid_t *tgids; - int tgids_size; pid_t *pids; - int pids_size; + pid_t *tgids; char **comms; + int pids_size; + int tgids_size; int comms_size; }; +struct sort_condition { + int (**cmps)(const void *, const void *); + int *signs; + int size; +}; static struct filter_condition fc; +static struct sort_condition sc; static regex_t order_pattern; static regex_t pid_pattern; static regex_t tgid_pattern; @@ -107,14 +121,14 @@ static int compare_num(const void *p1, const void *p2) { const struct block_list *l1 = p1, *l2 = p2; - return l2->num - l1->num; + return l1->num - l2->num; } static int compare_page_num(const void *p1, const void *p2) { const struct block_list *l1 = p1, *l2 = p2; - return l2->page_num - l1->page_num; + return l1->page_num - l2->page_num; } static int compare_pid(const void *p1, const void *p2) @@ -180,6 +194,16 @@ static int compare_cull_condition(const void *p1, const void *p2) return 0; } +static int compare_sort_condition(const void *p1, const void *p2) +{ + int cmp = 0; + + for (int i = 0; i < sc.size; ++i) + if (cmp == 0) + cmp = sc.signs[i] * sc.cmps[i](p1, p2); + return cmp; +} + static int search_pattern(regex_t *pattern, char *pattern_str, char *buf) { int err, val_len; @@ -345,6 +369,29 @@ static char *get_comm(char *buf) return comm_str; } +static int get_arg_type(const char *arg) +{ + if (!strcmp(arg, "pid") || !strcmp(arg, "p")) + return ARG_PID; + else if (!strcmp(arg, "tgid") || !strcmp(arg, "tg")) + return ARG_TGID; + else if (!strcmp(arg, "name") || !strcmp(arg, "n")) + return ARG_COMM; + else if (!strcmp(arg, "stacktrace") || !strcmp(arg, "st")) + return ARG_STACKTRACE; + else if (!strcmp(arg, "free") || !strcmp(arg, "f")) + return ARG_FREE; + else if (!strcmp(arg, "txt") || !strcmp(arg, "T")) + return ARG_TXT; + else if (!strcmp(arg, "free_ts") || !strcmp(arg, "ft")) + return ARG_FREE_TS; + else if (!strcmp(arg, "alloc_ts") || !strcmp(arg, "at")) + return ARG_ALLOC_TS; + else { + return ARG_UNKNOWN; + } +} + static bool match_num_list(int num, int *list, int list_size) { for (int i = 0; i < list_size; ++i) @@ -428,21 +475,86 @@ static bool parse_cull_args(const char *arg_str) int size = 0; char **args = explode(',', arg_str, &size); - for (int i = 0; i < size; ++i) - if (!strcmp(args[i], "pid") || !strcmp(args[i], "p")) + for 
(int i = 0; i < size; ++i) { + int arg_type = get_arg_type(args[i]); + + if (arg_type == ARG_PID) cull |= CULL_PID; - else if (!strcmp(args[i], "tgid") || !strcmp(args[i], "tg")) + else if (arg_type == ARG_TGID) cull |= CULL_TGID; - else if (!strcmp(args[i], "name") || !strcmp(args[i], "n")) + else if (arg_type == ARG_COMM) cull |= CULL_COMM; - else if (!strcmp(args[i], "stacktrace") || !strcmp(args[i], "st")) + else if (arg_type == ARG_STACKTRACE) cull |= CULL_STACKTRACE; - else if (!strcmp(args[i], "free") || !strcmp(args[i], "f")) + else if (arg_type == ARG_FREE) cull |= CULL_UNRELEASE; else { free_explode(args, size); return false; } + } + free_explode(args, size); + return true; +} + +static void set_single_cmp(int (*cmp)(const void *, const void *), int sign) +{ + if (sc.signs == NULL || sc.size < 1) + sc.signs = calloc(1, sizeof(int)); + sc.signs[0] = sign; + if (sc.cmps == NULL || sc.size < 1) + sc.cmps = calloc(1, sizeof(int *)); + sc.cmps[0] = cmp; + sc.size = 1; +} + +static bool parse_sort_args(const char *arg_str) +{ + int size = 0; + + if (sc.size != 0) { /* reset sort_condition */ + free(sc.signs); + free(sc.cmps); + size = 0; + } + + char **args = explode(',', arg_str, &size); + + sc.signs = calloc(size, sizeof(int)); + sc.cmps = calloc(size, sizeof(int *)); + for (int i = 0; i < size; ++i) { + int offset = 0; + + sc.signs[i] = SORT_ASC; + if (args[i][0] == '-' || args[i][0] == '+') { + if (args[i][0] == '-') + sc.signs[i] = SORT_DESC; + offset = 1; + } + + int arg_type = get_arg_type(args[i]+offset); + + if (arg_type == ARG_PID) + sc.cmps[i] = compare_pid; + else if (arg_type == ARG_TGID) + sc.cmps[i] = compare_tgid; + else if (arg_type == ARG_COMM) + sc.cmps[i] = compare_comm; + else if (arg_type == ARG_STACKTRACE) + sc.cmps[i] = compare_stacktrace; + else if (arg_type == ARG_ALLOC_TS) + sc.cmps[i] = compare_ts; + else if (arg_type == ARG_FREE_TS) + sc.cmps[i] = compare_free_ts; + else if (arg_type == ARG_TXT) + sc.cmps[i] = compare_txt; + else { + free_explode(args, size); + sc.size = 0; + return false; + } + } + sc.size = size; free_explode(args, size); return true; } @@ -485,13 +597,13 @@ static void usage(void) "--pid \tSelect by pid. This selects the information of blocks whose process ID numbers appear in .\n" "--tgid \tSelect by tgid. This selects the information of blocks whose Thread Group ID numbers appear in .\n" "--name \n\t\tSelect by command name. This selects the information of blocks whose command name appears in .\n" - "--cull \tCull by user-defined rules. is a single argument in the form of a comma-separated list with some common fields predefined\n" + "--cull \tCull by user-defined rules. 
is a single argument in the form of a comma-separated list with some common fields predefined\n" + "--sort \tSpecify sort order as: [+|-]key[,[+|-]key[,...]]\n" ); } int main(int argc, char **argv) { - int (*cmp)(const void *, const void *) = compare_num; FILE *fin, *fout; char *buf; int ret, i, count; @@ -502,37 +614,38 @@ int main(int argc, char **argv) { "tgid", required_argument, NULL, 2 }, { "name", required_argument, NULL, 3 }, { "cull", required_argument, NULL, 4 }, + { "sort", required_argument, NULL, 5 }, { 0, 0, 0, 0}, }; while ((opt = getopt_long(argc, argv, "afmnprstP", longopts, NULL)) != -1) switch (opt) { case 'a': - cmp = compare_ts; + set_single_cmp(compare_ts, SORT_ASC); break; case 'f': filter = filter | FILTER_UNRELEASE; break; case 'm': - cmp = compare_page_num; + set_single_cmp(compare_page_num, SORT_DESC); break; case 'p': - cmp = compare_pid; + set_single_cmp(compare_pid, SORT_ASC); break; case 'r': - cmp = compare_free_ts; + set_single_cmp(compare_free_ts, SORT_ASC); break; case 's': - cmp = compare_stacktrace; + set_single_cmp(compare_stacktrace, SORT_ASC); break; case 't': - cmp = compare_num; + set_single_cmp(compare_num, SORT_DESC); break; case 'P': - cmp = compare_tgid; + set_single_cmp(compare_tgid, SORT_ASC); break; case 'n': - cmp = compare_comm; + set_single_cmp(compare_comm, SORT_ASC); break; case 1: filter = filter | FILTER_PID; @@ -563,6 +676,13 @@ int main(int argc, char **argv) exit(1); } break; + case 5: + if (!parse_sort_args(optarg)) { + fprintf(stderr, "wrong argument after --sort option:%s\n", + optarg); + exit(1); + } + break; default: usage(); exit(1); @@ -622,7 +742,7 @@ int main(int argc, char **argv) } } - qsort(list, count, sizeof(list[0]), cmp); + qsort(list, count, sizeof(list[0]), compare_sort_condition); for (i = 0; i < count; i++) { if (cull == 0) -- cgit v1.2.3 From f09654bb88127473b4baf3bc0b68d4d4695aca7b Mon Sep 17 00:00:00 2001 From: Yixuan Cao Date: Thu, 28 Apr 2022 23:15:57 -0700 Subject: tools/vm/page_owner_sort.c: provide allocator labelling and update --cull and --sort options An application is suspected of having memory leak when its memory consumption is high and keeps increasing. There are several commonly used memory allocators: slab, cma, vmalloc, etc. The memory leak identification can be sped up if the page information allocated by an allocator can be analyzed separately. This patch provides supports for memory allocator labelling for slab, vmalloc, and cma. The pages allocated by slab and cma can be confirmed from the "PFN" line according to the kernel codes, and the label of the vmalloc allocator can be obtained by analyzing the stack trace. Thanks for Vlastimil Babka's constructive suggestions. Based on Yinan Zhang's study, the call chain of vmalloc() is vmalloc() -> ... -> __vmalloc_node_range() -> __vmalloc_area_node(). __vmalloc_area_node() requests memory through the interface of buddy allocation system. In the current version, __vmalloc_area_node() uses four interfaces: alloc_pages_bulk_array_mempolicy(), alloc_pages_bulk_array_node(), alloc_pages() and alloc_pages_node(). By disassembling the code, we find that __vmalloc_area_node() is expanded in __vmalloc_node_range(). So __vmalloc_area_node is not in the stack trace. 
On the test machine, the stack trace of pages allocated by vmalloc has the following four forms: __alloc_pages_bulk+0x230/0x6a0 __vmalloc_node_range+0x19c/0x598 alloc_pages_bulk_array_mempolicy+0xbc/0x278 __vmalloc_node_range+0x1e8/0x598 __alloc_pages+0x160/0x2b0 __vmalloc_node_range+0x234/0x598 alloc_pages+0xac/0x150 __vmalloc_node_range+0x44c/0x598 Therefore, in two consecutive lines of stacktrace, if the first line contains the word "alloc_pages" and the second line contains the word "__vmalloc_node_range", it can be determined that the page is allocated by vmalloc. And the function offset and size are not the same on different machines, so there is no need to match them. At the same time, this patch updates the --cull and --sort options to support allocator-based merge statistics and sorting. The added functions are fully compatible with the original work. When using, you can use "allocator", or abbreviated as "ator". Relevant updates have also been made in the documentation(Documentation/vm/page_owner.rst). Example: ./page_owner_sort --cull=st,pid,name,allocator ./page_owner_sort --sort=ator,pid,name This work is coauthored by Jiajian Ye, Yinan Zhang, Shenghong Han, Chongxi Zhao, Yuhong Feng and Yongqiang Liu. Link: https://lkml.kernel.org/r/20220410132932.9402-1-caoyixuan2019@email.szu.edu.cn Signed-off-by: Yixuan Cao Cc: Chongxi Zhao Cc: Haowen Bai Cc: Jiajian Ye Cc: Sean Anderson Cc: Shenghong Han Cc: Yinan Zhang Cc: Yongqiang Liu Cc: Yuhong Feng Signed-off-by: Andrew Morton --- Documentation/vm/page_owner.rst | 3 +- tools/vm/page_owner_sort.c | 112 ++++++++++++++++++++++++++++++++++------ 2 files changed, 99 insertions(+), 16 deletions(-) (limited to 'Documentation/vm') diff --git a/Documentation/vm/page_owner.rst b/Documentation/vm/page_owner.rst index 523bf3419512..25622c715823 100644 --- a/Documentation/vm/page_owner.rst +++ b/Documentation/vm/page_owner.rst @@ -183,6 +183,7 @@ For --sort option: T txt full text of block ft free_ts timestamp of the page when it was released at alloc_ts timestamp of the page when it was allocated + ator allocator memory allocator for pages For --curl option: @@ -192,4 +193,4 @@ For --curl option: n name task command name f free whether the page has been released or not st stacktrace stack trace of the page allocation - + ator allocator memory allocator for pages diff --git a/tools/vm/page_owner_sort.c b/tools/vm/page_owner_sort.c index a32e446e5bb2..fa2e4d2a9d68 100644 --- a/tools/vm/page_owner_sort.c +++ b/tools/vm/page_owner_sort.c @@ -39,6 +39,7 @@ struct block_list { int page_num; pid_t pid; pid_t tgid; + int allocator; }; enum FILTER_BIT { FILTER_UNRELEASE = 1<<1, @@ -51,11 +52,19 @@ enum CULL_BIT { CULL_PID = 1<<2, CULL_TGID = 1<<3, CULL_COMM = 1<<4, - CULL_STACKTRACE = 1<<5 + CULL_STACKTRACE = 1<<5, + CULL_ALLOCATOR = 1<<6 +}; +enum ALLOCATOR_BIT { + ALLOCATOR_CMA = 1<<1, + ALLOCATOR_SLAB = 1<<2, + ALLOCATOR_VMALLOC = 1<<3, + ALLOCATOR_OTHERS = 1<<4 }; enum ARG_TYPE { ARG_TXT, ARG_COMM, ARG_STACKTRACE, ARG_ALLOC_TS, ARG_FREE_TS, - ARG_CULL_TIME, ARG_PAGE_NUM, ARG_PID, ARG_TGID, ARG_UNKNOWN, ARG_FREE + ARG_CULL_TIME, ARG_PAGE_NUM, ARG_PID, ARG_TGID, ARG_UNKNOWN, ARG_FREE, + ARG_ALLOCATOR }; enum SORT_ORDER { SORT_ASC = 1, @@ -89,15 +98,20 @@ static int cull; static int filter; static bool debug_on; -int read_block(char *buf, int buf_size, FILE *fin) +static void set_single_cmp(int (*cmp)(const void *, const void *), int sign); + +int read_block(char *buf, char *ext_buf, int buf_size, FILE *fin) { char *curr = buf, *const buf_end = buf + 
buf_size; while (buf_end - curr > 1 && fgets(curr, buf_end - curr, fin)) { - if (*curr == '\n') /* empty line */ + if (*curr == '\n') { /* empty line */ return curr - buf; - if (!strncmp(curr, "PFN", 3)) + } + if (!strncmp(curr, "PFN", 3)) { + strcpy(ext_buf, curr); continue; + } curr += strlen(curr); } @@ -146,6 +160,13 @@ static int compare_tgid(const void *p1, const void *p2) return l1->tgid - l2->tgid; } +static int compare_allocator(const void *p1, const void *p2) +{ + const struct block_list *l1 = p1, *l2 = p2; + + return l1->allocator - l2->allocator; +} + static int compare_comm(const void *p1, const void *p2) { const struct block_list *l1 = p1, *l2 = p2; @@ -192,6 +213,8 @@ static int compare_cull_condition(const void *p1, const void *p2) return compare_comm(p1, p2); if ((cull & CULL_UNRELEASE) && compare_release(p1, p2)) return compare_release(p1, p2); + if ((cull & CULL_ALLOCATOR) && compare_allocator(p1, p2)) + return compare_allocator(p1, p2); return 0; } @@ -395,11 +418,42 @@ static int get_arg_type(const char *arg) return ARG_FREE_TS; else if (!strcmp(arg, "alloc_ts") || !strcmp(arg, "at")) return ARG_ALLOC_TS; + else if (!strcmp(arg, "allocator") || !strcmp(arg, "ator")) + return ARG_ALLOCATOR; else { return ARG_UNKNOWN; } } +static int get_allocator(const char *buf, const char *migrate_info) +{ + char *tmp, *first_line, *second_line; + int allocator = 0; + + if (strstr(migrate_info, "CMA")) + allocator |= ALLOCATOR_CMA; + if (strstr(migrate_info, "slab")) + allocator |= ALLOCATOR_SLAB; + tmp = strstr(buf, "__vmalloc_node_range"); + if (tmp) { + second_line = tmp; + while (*tmp != '\n') + tmp--; + tmp--; + while (*tmp != '\n') + tmp--; + first_line = ++tmp; + tmp = strstr(tmp, "alloc_pages"); + if (tmp) { + if (tmp && first_line <= tmp && tmp < second_line) + allocator |= ALLOCATOR_VMALLOC; + } + } + if (allocator == 0) + allocator = ALLOCATOR_OTHERS; + return allocator; +} + static bool match_num_list(int num, int *list, int list_size) { for (int i = 0; i < list_size; ++i) @@ -437,7 +491,7 @@ static bool is_need(char *buf) return true; } -static void add_list(char *buf, int len) +static void add_list(char *buf, int len, char *ext_buf) { if (list_size != 0 && len == list[list_size-1].len && @@ -471,6 +525,7 @@ static void add_list(char *buf, int len) list[list_size].stacktrace++; list[list_size].ts_nsec = get_ts_nsec(buf); list[list_size].free_ts_nsec = get_free_ts_nsec(buf); + list[list_size].allocator = get_allocator(buf, ext_buf); list_size++; if (list_size % 1000 == 0) { printf("loaded %d\r", list_size); @@ -496,12 +551,16 @@ static bool parse_cull_args(const char *arg_str) cull |= CULL_STACKTRACE; else if (arg_type == ARG_FREE) cull |= CULL_UNRELEASE; + else if (arg_type == ARG_ALLOCATOR) + cull |= CULL_ALLOCATOR; else { free_explode(args, size); return false; } } free_explode(args, size); + if (sc.size == 0) + set_single_cmp(compare_num, SORT_DESC); return true; } @@ -556,6 +615,8 @@ static bool parse_sort_args(const char *arg_str) sc.cmps[i] = compare_free_ts; else if (arg_type == ARG_TXT) sc.cmps[i] = compare_txt; + else if (arg_type == ARG_ALLOCATOR) + sc.cmps[i] = compare_allocator; else { free_explode(args, size); sc.size = 0; @@ -588,6 +649,19 @@ static int *parse_nums_list(char *arg_str, int *list_size) return list; } +static void print_allocator(FILE *out, int allocator) +{ + fprintf(out, "allocated by "); + if (allocator & ALLOCATOR_CMA) + fprintf(out, "CMA "); + if (allocator & ALLOCATOR_SLAB) + fprintf(out, "SLAB "); + if (allocator & ALLOCATOR_VMALLOC) + 
fprintf(out, "VMALLOC "); + if (allocator & ALLOCATOR_OTHERS) + fprintf(out, "OTHERS "); +} + #define BUF_SIZE (128 * 1024) static void usage(void) @@ -614,8 +688,8 @@ static void usage(void) int main(int argc, char **argv) { FILE *fin, *fout; - char *buf; - int ret, i, count; + char *buf, *ext_buf; + int i, count; struct stat st; int opt; struct option longopts[] = { @@ -724,16 +798,18 @@ int main(int argc, char **argv) list = malloc(max_size * sizeof(*list)); buf = malloc(BUF_SIZE); - if (!list || !buf) { + ext_buf = malloc(BUF_SIZE); + if (!list || !buf || !ext_buf) { fprintf(stderr, "Out of memory\n"); exit(1); } for ( ; ; ) { - ret = read_block(buf, BUF_SIZE, fin); - if (ret < 0) + int buf_len = read_block(buf, ext_buf, BUF_SIZE, fin); + + if (buf_len < 0) break; - add_list(buf, ret); + add_list(buf, buf_len, ext_buf); } printf("loaded %d\n", list_size); @@ -757,9 +833,11 @@ int main(int argc, char **argv) qsort(list, count, sizeof(list[0]), compare_sort_condition); for (i = 0; i < count; i++) { - if (cull == 0) - fprintf(fout, "%d times, %d pages:\n%s\n", - list[i].num, list[i].page_num, list[i].txt); + if (cull == 0) { + fprintf(fout, "%d times, %d pages, ", list[i].num, list[i].page_num); + print_allocator(fout, list[i].allocator); + fprintf(fout, ":\n%s\n", list[i].txt); + } else { fprintf(fout, "%d times, %d pages", list[i].num, list[i].page_num); @@ -769,6 +847,10 @@ int main(int argc, char **argv) fprintf(fout, ", TGID %d", list[i].pid); if (cull & CULL_COMM || filter & FILTER_COMM) fprintf(fout, ", task_comm_name: %s", list[i].comm); + if (cull & CULL_ALLOCATOR) { + fprintf(fout, ", "); + print_allocator(fout, list[i].allocator); + } if (cull & CULL_UNRELEASE) fprintf(fout, " (%s)", list[i].free_ts_nsec ? "UNRELEASED" : "RELEASED"); -- cgit v1.2.3 From 60a427db0f80f16b9bb9efe6cc79c93f336e8466 Mon Sep 17 00:00:00 2001 From: Joao Martins Date: Thu, 28 Apr 2022 23:16:15 -0700 Subject: mm/hugetlb_vmemmap: move comment block to Documentation/vm In preparation for device-dax for using hugetlbfs compound page tail deduplication technique, move the comment block explanation into a common place in Documentation/vm. Link: https://lkml.kernel.org/r/20220420155310.9712-4-joao.m.martins@oracle.com Signed-off-by: Joao Martins Reviewed-by: Muchun Song Reviewed-by: Dan Williams Suggested-by: Dan Williams Cc: Muchun Song Cc: Mike Kravetz Cc: Christoph Hellwig Cc: Jane Chu Cc: Jason Gunthorpe Cc: Jonathan Corbet Cc: Matthew Wilcox Cc: Vishal Verma Signed-off-by: Andrew Morton --- Documentation/vm/index.rst | 1 + Documentation/vm/vmemmap_dedup.rst | 173 +++++++++++++++++++++++++++++++++++++ mm/hugetlb_vmemmap.c | 168 +---------------------------------- 3 files changed, 175 insertions(+), 167 deletions(-) create mode 100644 Documentation/vm/vmemmap_dedup.rst (limited to 'Documentation/vm') diff --git a/Documentation/vm/index.rst b/Documentation/vm/index.rst index 44365c4574a3..2fb612bb72c9 100644 --- a/Documentation/vm/index.rst +++ b/Documentation/vm/index.rst @@ -37,5 +37,6 @@ algorithms. If you are looking for advice on simply allocating memory, see the transhuge unevictable-lru vmalloced-kernel-stacks + vmemmap_dedup z3fold zsmalloc diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst new file mode 100644 index 000000000000..485ccf4f7b10 --- /dev/null +++ b/Documentation/vm/vmemmap_dedup.rst @@ -0,0 +1,173 @@ +.. 
SPDX-License-Identifier: GPL-2.0 + +================================== +Free some vmemmap pages of HugeTLB +================================== + +The struct page structures (page structs) are used to describe a physical +page frame. By default, there is a one-to-one mapping from a page frame to +it's corresponding page struct. + +HugeTLB pages consist of multiple base page size pages and is supported by many +architectures. See Documentation/admin-guide/mm/hugetlbpage.rst for more +details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are +currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page +consists of 512 base pages and a 1GB HugeTLB page consists of 4096 base pages. +For each base page, there is a corresponding page struct. + +Within the HugeTLB subsystem, only the first 4 page structs are used to +contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides +this upper limit. The only 'useful' information in the remaining page structs +is the compound_head field, and this field is the same for all tail pages. + +By removing redundant page structs for HugeTLB pages, memory can be returned +to the buddy allocator for other uses. + +Different architectures support different HugeTLB pages. For example, the +following table is the HugeTLB page size supported by x86 and arm64 +architectures. Because arm64 supports 4k, 16k, and 64k base pages and +supports contiguous entries, so it supports many kinds of sizes of HugeTLB +page. + ++--------------+-----------+-----------------------------------------------+ +| Architecture | Page Size | HugeTLB Page Size | ++--------------+-----------+-----------+-----------+-----------+-----------+ +| x86-64 | 4KB | 2MB | 1GB | | | ++--------------+-----------+-----------+-----------+-----------+-----------+ +| | 4KB | 64KB | 2MB | 32MB | 1GB | +| +-----------+-----------+-----------+-----------+-----------+ +| arm64 | 16KB | 2MB | 32MB | 1GB | | +| +-----------+-----------+-----------+-----------+-----------+ +| | 64KB | 2MB | 512MB | 16GB | | ++--------------+-----------+-----------+-----------+-----------+-----------+ + +When the system boot up, every HugeTLB page has more than one struct page +structs which size is (unit: pages):: + + struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + +Where HugeTLB_Size is the size of the HugeTLB page. We know that the size +of the HugeTLB page is always n times PAGE_SIZE. So we can get the following +relationship:: + + HugeTLB_Size = n * PAGE_SIZE + +Then:: + + struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE + = n * sizeof(struct page) / PAGE_SIZE + +We can use huge mapping at the pud/pmd level for the HugeTLB page. + +For the HugeTLB page of the pmd level mapping, then:: + + struct_size = n * sizeof(struct page) / PAGE_SIZE + = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE + = sizeof(struct page) / sizeof(pte_t) + = 64 / 8 + = 8 (pages) + +Where n is how many pte entries which one page can contains. So the value of +n is (PAGE_SIZE / sizeof(pte_t)). + +This optimization only supports 64-bit system, so the value of sizeof(pte_t) +is 8. And this optimization also applicable only when the size of struct page +is a power of two. In most cases, the size of struct page is 64 bytes (e.g. +x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the +size of struct page structs of it is 8 page frames which size depends on the +size of the base page. 
+ +For the HugeTLB page of the pud level mapping, then:: + + struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) + = PAGE_SIZE / 8 * 8 (pages) + = PAGE_SIZE (pages) + +Where the struct_size(pmd) is the size of the struct page structs of a +HugeTLB page of the pmd level mapping. + +E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB +HugeTLB page consists in 4096. + +Next, we take the pmd level mapping of the HugeTLB page as an example to +show the internal implementation of this optimization. There are 8 pages +struct page structs associated with a HugeTLB page which is pmd mapped. + +Here is how things look before optimization:: + + HugeTLB struct pages(8 pages) page frame(8 pages) + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | -------------> | 1 | + | | +-----------+ +-----------+ + | | | 2 | -------------> | 2 | + | | +-----------+ +-----------+ + | | | 3 | -------------> | 3 | + | | +-----------+ +-----------+ + | | | 4 | -------------> | 4 | + | PMD | +-----------+ +-----------+ + | level | | 5 | -------------> | 5 | + | mapping | +-----------+ +-----------+ + | | | 6 | -------------> | 6 | + | | +-----------+ +-----------+ + | | | 7 | -------------> | 7 | + | | +-----------+ +-----------+ + | | + | | + | | + +-----------+ + +The value of page->compound_head is the same for all tail pages. The first +page of page structs (page 0) associated with the HugeTLB page contains the 4 +page structs necessary to describe the HugeTLB. The only use of the remaining +pages of page structs (page 1 to page 7) is to point to page->compound_head. +Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs +will be used for each HugeTLB page. This will allow us to free the remaining +7 pages to the buddy allocator. + +Here is how things look after remapping:: + + HugeTLB struct pages(8 pages) page frame(8 pages) + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | ---------------^ ^ ^ ^ ^ ^ ^ + | | +-----------+ | | | | | | + | | | 2 | -----------------+ | | | | | + | | +-----------+ | | | | | + | | | 3 | -------------------+ | | | | + | | +-----------+ | | | | + | | | 4 | ---------------------+ | | | + | PMD | +-----------+ | | | + | level | | 5 | -----------------------+ | | + | mapping | +-----------+ | | + | | | 6 | -------------------------+ | + | | +-----------+ | + | | | 7 | ---------------------------+ + | | +-----------+ + | | + | | + | | + +-----------+ + +When a HugeTLB is freed to the buddy system, we should allocate 7 pages for +vmemmap pages and restore the previous mapping relationship. + +For the HugeTLB page of the pud level mapping. It is similar to the former. +We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages. + +Apart from the HugeTLB page of the pmd/pud level mapping, some architectures +(e.g. aarch64) provides a contiguous bit in the translation table entries +that hints to the MMU to indicate that it is one of a contiguous set of +entries that can be cached in a single TLB entry. + +The contiguous bit is used to increase the mapping size at the pmd and pte +(last) level. So this type of HugeTLB page can be optimized only when its +size of the struct page structs is greater than 1 page. 
+ +Notice: The head vmemmap page is not freed to the buddy allocator and all +tail vmemmap pages are mapped to the head vmemmap page frame. So we can see +more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page) +associated with each HugeTLB page. The compound_head() can handle this +correctly (more details refer to the comment above compound_head()). diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c index 2655434a946b..29554c6ef2ae 100644 --- a/mm/hugetlb_vmemmap.c +++ b/mm/hugetlb_vmemmap.c @@ -6,173 +6,7 @@ * * Author: Muchun Song * - * The struct page structures (page structs) are used to describe a physical - * page frame. By default, there is a one-to-one mapping from a page frame to - * it's corresponding page struct. - * - * HugeTLB pages consist of multiple base page size pages and is supported by - * many architectures. See hugetlbpage.rst in the Documentation directory for - * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB - * are currently supported. Since the base page size on x86 is 4KB, a 2MB - * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of - * 4096 base pages. For each base page, there is a corresponding page struct. - * - * Within the HugeTLB subsystem, only the first 4 page structs are used to - * contain unique information about a HugeTLB page. __NR_USED_SUBPAGE provides - * this upper limit. The only 'useful' information in the remaining page structs - * is the compound_head field, and this field is the same for all tail pages. - * - * By removing redundant page structs for HugeTLB pages, memory can be returned - * to the buddy allocator for other uses. - * - * Different architectures support different HugeTLB pages. For example, the - * following table is the HugeTLB page size supported by x86 and arm64 - * architectures. Because arm64 supports 4k, 16k, and 64k base pages and - * supports contiguous entries, so it supports many kinds of sizes of HugeTLB - * page. - * - * +--------------+-----------+-----------------------------------------------+ - * | Architecture | Page Size | HugeTLB Page Size | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * | x86-64 | 4KB | 2MB | 1GB | | | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * | | 4KB | 64KB | 2MB | 32MB | 1GB | - * | +-----------+-----------+-----------+-----------+-----------+ - * | arm64 | 16KB | 2MB | 32MB | 1GB | | - * | +-----------+-----------+-----------+-----------+-----------+ - * | | 64KB | 2MB | 512MB | 16GB | | - * +--------------+-----------+-----------+-----------+-----------+-----------+ - * - * When the system boot up, every HugeTLB page has more than one struct page - * structs which size is (unit: pages): - * - * struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE - * - * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size - * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following - * relationship. - * - * HugeTLB_Size = n * PAGE_SIZE - * - * Then, - * - * struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE - * = n * sizeof(struct page) / PAGE_SIZE - * - * We can use huge mapping at the pud/pmd level for the HugeTLB page. 
- * - * For the HugeTLB page of the pmd level mapping, then - * - * struct_size = n * sizeof(struct page) / PAGE_SIZE - * = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE - * = sizeof(struct page) / sizeof(pte_t) - * = 64 / 8 - * = 8 (pages) - * - * Where n is how many pte entries which one page can contains. So the value of - * n is (PAGE_SIZE / sizeof(pte_t)). - * - * This optimization only supports 64-bit system, so the value of sizeof(pte_t) - * is 8. And this optimization also applicable only when the size of struct page - * is a power of two. In most cases, the size of struct page is 64 bytes (e.g. - * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the - * size of struct page structs of it is 8 page frames which size depends on the - * size of the base page. - * - * For the HugeTLB page of the pud level mapping, then - * - * struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd) - * = PAGE_SIZE / 8 * 8 (pages) - * = PAGE_SIZE (pages) - * - * Where the struct_size(pmd) is the size of the struct page structs of a - * HugeTLB page of the pmd level mapping. - * - * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB - * HugeTLB page consists in 4096. - * - * Next, we take the pmd level mapping of the HugeTLB page as an example to - * show the internal implementation of this optimization. There are 8 pages - * struct page structs associated with a HugeTLB page which is pmd mapped. - * - * Here is how things look before optimization. - * - * HugeTLB struct pages(8 pages) page frame(8 pages) - * +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ - * | | | 0 | -------------> | 0 | - * | | +-----------+ +-----------+ - * | | | 1 | -------------> | 1 | - * | | +-----------+ +-----------+ - * | | | 2 | -------------> | 2 | - * | | +-----------+ +-----------+ - * | | | 3 | -------------> | 3 | - * | | +-----------+ +-----------+ - * | | | 4 | -------------> | 4 | - * | PMD | +-----------+ +-----------+ - * | level | | 5 | -------------> | 5 | - * | mapping | +-----------+ +-----------+ - * | | | 6 | -------------> | 6 | - * | | +-----------+ +-----------+ - * | | | 7 | -------------> | 7 | - * | | +-----------+ +-----------+ - * | | - * | | - * | | - * +-----------+ - * - * The value of page->compound_head is the same for all tail pages. The first - * page of page structs (page 0) associated with the HugeTLB page contains the 4 - * page structs necessary to describe the HugeTLB. The only use of the remaining - * pages of page structs (page 1 to page 7) is to point to page->compound_head. - * Therefore, we can remap pages 1 to 7 to page 0. Only 1 page of page structs - * will be used for each HugeTLB page. This will allow us to free the remaining - * 7 pages to the buddy allocator. - * - * Here is how things look after remapping. 
- *
- * HugeTLB                    struct pages(8 pages)         page frame(8 pages)
- * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
- * |           |                     |     0     | -------------> |     0     |
- * |           |                     +-----------+                +-----------+
- * |           |                     |     1     | ---------------^ ^ ^ ^ ^ ^ ^
- * |           |                     +-----------+                  | | | | | |
- * |           |                     |     2     | -----------------+ | | | | |
- * |           |                     +-----------+                    | | | | |
- * |           |                     |     3     | -------------------+ | | | |
- * |           |                     +-----------+                      | | | |
- * |           |                     |     4     | ---------------------+ | | |
- * |    PMD    |                     +-----------+                        | | |
- * |   level   |                     |     5     | -----------------------+ | |
- * |  mapping  |                     +-----------+                          | |
- * |           |                     |     6     | -------------------------+ |
- * |           |                     +-----------+                            |
- * |           |                     |     7     | ---------------------------+
- * |           |                     +-----------+
- * |           |
- * |           |
- * |           |
- * +-----------+
- *
- * When a HugeTLB is freed to the buddy system, we should allocate 7 pages for
- * vmemmap pages and restore the previous mapping relationship.
- *
- * For the HugeTLB page of the pud level mapping. It is similar to the former.
- * We also can use this approach to free (PAGE_SIZE - 1) vmemmap pages.
- *
- * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
- * (e.g. aarch64) provides a contiguous bit in the translation table entries
- * that hints to the MMU to indicate that it is one of a contiguous set of
- * entries that can be cached in a single TLB entry.
- *
- * The contiguous bit is used to increase the mapping size at the pmd and pte
- * (last) level. So this type of HugeTLB page can be optimized only when its
- * size of the struct page structs is greater than 1 page.
- *
- * Notice: The head vmemmap page is not freed to the buddy allocator and all
- * tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
- * more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page)
- * associated with each HugeTLB page. The compound_head() can handle this
- * correctly (more details refer to the comment above compound_head()).
+ * See Documentation/vm/vmemmap_dedup.rst
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
--
cgit v1.2.3

From 4917f55b4ef963e2d2288fe4eb651728be8db406 Mon Sep 17 00:00:00 2001
From: Joao Martins
Date: Thu, 28 Apr 2022 23:16:16 -0700
Subject: mm/sparse-vmemmap: improve memory savings for compound devmaps

A compound devmap is a dev_pagemap with @vmemmap_shift > 0, meaning that
pages are mapped at a given huge page alignment and utilize compound pages
as opposed to order-0 pages.

Take advantage of the fact that most tail pages look the same (except the
first two) to minimize struct page overhead. Allocate a separate page for
the vmemmap area which contains the head page, and a separate one for the
next 64 pages. The rest of the subsections then reuse this tail vmemmap
page to initialize the rest of the tail pages.

Sections are arch-dependent (e.g. on x86 it's 64M, 128M or 512M) and when
initializing a compound devmap with a big enough @vmemmap_shift (e.g. 1G
PUD) it may cross multiple sections. The vmemmap code needs to consult
@pgmap so that multiple sections that all map the same tail data can refer
back to the first copy of that data for a given gigantic page.

On compound devmaps with 2M align, this mechanism lets 6 pages be saved
out of the 8 PFNs necessary to map the subsection's 512 struct pages. On a
1G compound devmap it saves 4094 pages.

Altmap isn't supported yet, given various restrictions in the altmap pfn
allocator; we thus fall back to the already-in-use vmemmap_populate().
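To make the savings arithmetic above concrete, here is a small userspace
sketch (an illustration, not part of this patch; it assumes a 4K base page
and a 64-byte struct page, as on x86-64) that reproduces the 6-page and
4094-page figures quoted above:

	#include <stdio.h>

	#define PAGE_SIZE	4096UL	/* assumed base page size */
	#define STRUCT_PAGE	64UL	/* assumed sizeof(struct page) */

	int main(void)
	{
		/* 2M and 1G compound page alignments */
		unsigned long aligns[] = { 2UL << 20, 1UL << 30 };

		for (int i = 0; i < 2; i++) {
			unsigned long nr_pages = aligns[i] / PAGE_SIZE;
			/* vmemmap pages needed without deduplication */
			unsigned long vmemmap = nr_pages * STRUCT_PAGE / PAGE_SIZE;

			/* one page kept for the head, one reused tail page */
			printf("%luM align: %lu vmemmap pages, %lu saved\n",
			       aligns[i] >> 20, vmemmap, vmemmap - 2);
		}
		return 0;
	}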
It is worth noting that altmap for devmap mappings was there to relieve
the pressure of inordinate amounts of memmap space to map terabytes of
pmem. With compound pages the motivation for altmaps for pmem gets
reduced.

Link: https://lkml.kernel.org/r/20220420155310.9712-5-joao.m.martins@oracle.com
Signed-off-by: Joao Martins
Reviewed-by: Muchun Song
Cc: Christoph Hellwig
Cc: Dan Williams
Cc: Jane Chu
Cc: Jason Gunthorpe
Cc: Jonathan Corbet
Cc: Matthew Wilcox
Cc: Mike Kravetz
Cc: Vishal Verma
Signed-off-by: Andrew Morton
---
 Documentation/vm/vmemmap_dedup.rst |  56 +++++++++++++++-
 include/linux/mm.h                 |   2 +-
 mm/memremap.c                      |   1 +
 mm/sparse-vmemmap.c                | 132 ++++++++++++++++++++++++++++++++++---
 4 files changed, 177 insertions(+), 14 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/vmemmap_dedup.rst b/Documentation/vm/vmemmap_dedup.rst
index 485ccf4f7b10..c9c495f62d12 100644
--- a/Documentation/vm/vmemmap_dedup.rst
+++ b/Documentation/vm/vmemmap_dedup.rst
@@ -1,8 +1,11 @@
 .. SPDX-License-Identifier: GPL-2.0

-==================================
-Free some vmemmap pages of HugeTLB
-==================================
+=========================================
+A vmemmap diet for HugeTLB and Device DAX
+=========================================
+
+HugeTLB
+=======

 The struct page structures (page structs) are used to describe a physical
 page frame. By default, there is a one-to-one mapping from a page frame to
@@ -171,3 +174,50 @@
 tail vmemmap pages are mapped to the head vmemmap page frame. So we can see
 more than one struct page struct with PG_head (e.g. 8 per 2 MB HugeTLB page)
 associated with each HugeTLB page. The compound_head() can handle this
 correctly (more details refer to the comment above compound_head()).
+
+Device DAX
+==========
+
+The device-dax interface uses the same tail deduplication technique
+explained in the previous chapter, except when used with the vmemmap in
+the device (altmap).
+
+The following page sizes are supported in DAX: PAGE_SIZE (4K on x86_64),
+PMD_SIZE (2M on x86_64) and PUD_SIZE (1G on x86_64).
+
+The differences with HugeTLB are relatively minor.
+
+It only uses 3 page structs for storing all information as opposed
+to 4 on HugeTLB pages.
+
+There's no remapping of the vmemmap given that device-dax memory is not
+part of System RAM ranges initialized at boot. Thus the tail page
+deduplication happens at a later stage when we populate the sections.
+HugeTLB reuses the head vmemmap page, whereas device-dax reuses the tail
+vmemmap page. This results in only half of the savings compared to
+HugeTLB.
+
+Deduplicated tail pages are not mapped read-only.
+ +Here's how things look like on device-dax after the sections are populated:: + + +-----------+ ---virt_to_page---> +-----------+ mapping to +-----------+ + | | | 0 | -------------> | 0 | + | | +-----------+ +-----------+ + | | | 1 | -------------> | 1 | + | | +-----------+ +-----------+ + | | | 2 | ----------------^ ^ ^ ^ ^ ^ + | | +-----------+ | | | | | + | | | 3 | ------------------+ | | | | + | | +-----------+ | | | | + | | | 4 | --------------------+ | | | + | PMD | +-----------+ | | | + | level | | 5 | ----------------------+ | | + | mapping | +-----------+ | | + | | | 6 | ------------------------+ | + | | +-----------+ | + | | | 7 | --------------------------+ + | | +-----------+ + | | + | | + | | + +-----------+ diff --git a/include/linux/mm.h b/include/linux/mm.h index 80bba49387e9..b9316b2c11ce 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -3161,7 +3161,7 @@ p4d_t *vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node); pud_t *vmemmap_pud_populate(p4d_t *p4d, unsigned long addr, int node); pmd_t *vmemmap_pmd_populate(pud_t *pud, unsigned long addr, int node); pte_t *vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node, - struct vmem_altmap *altmap); + struct vmem_altmap *altmap, struct page *reuse); void *vmemmap_alloc_block(unsigned long size, int node); struct vmem_altmap; void *vmemmap_alloc_block_buf(unsigned long size, int node, diff --git a/mm/memremap.c b/mm/memremap.c index af0223605e69..c33bcd0398c9 100644 --- a/mm/memremap.c +++ b/mm/memremap.c @@ -287,6 +287,7 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid) { struct mhp_params params = { .altmap = pgmap_altmap(pgmap), + .pgmap = pgmap, .pgprot = PAGE_KERNEL, }; const int nr_range = pgmap->nr_range; diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index ef15664c6b6c..f4fa61dbbee3 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -533,16 +533,31 @@ void __meminit vmemmap_verify(pte_t *pte, int node, } pte_t * __meminit vmemmap_pte_populate(pmd_t *pmd, unsigned long addr, int node, - struct vmem_altmap *altmap) + struct vmem_altmap *altmap, + struct page *reuse) { pte_t *pte = pte_offset_kernel(pmd, addr); if (pte_none(*pte)) { pte_t entry; void *p; - p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap); - if (!p) - return NULL; + if (!reuse) { + p = vmemmap_alloc_block_buf(PAGE_SIZE, node, altmap); + if (!p) + return NULL; + } else { + /* + * When a PTE/PMD entry is freed from the init_mm + * there's a a free_pages() call to this page allocated + * above. Thus this get_page() is paired with the + * put_page_testzero() on the freeing path. + * This can only called by certain ZONE_DEVICE path, + * and through vmemmap_populate_compound_pages() when + * slab is available. 
+ */ + get_page(reuse); + p = page_to_virt(reuse); + } entry = pfn_pte(__pa(p) >> PAGE_SHIFT, PAGE_KERNEL); set_pte_at(&init_mm, addr, pte, entry); } @@ -609,7 +624,8 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node) } static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, - struct vmem_altmap *altmap) + struct vmem_altmap *altmap, + struct page *reuse) { pgd_t *pgd; p4d_t *p4d; @@ -629,7 +645,7 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, pmd = vmemmap_pmd_populate(pud, addr, node); if (!pmd) return NULL; - pte = vmemmap_pte_populate(pmd, addr, node, altmap); + pte = vmemmap_pte_populate(pmd, addr, node, altmap, reuse); if (!pte) return NULL; vmemmap_verify(pte, node, addr, addr + PAGE_SIZE); @@ -639,13 +655,14 @@ static pte_t * __meminit vmemmap_populate_address(unsigned long addr, int node, static int __meminit vmemmap_populate_range(unsigned long start, unsigned long end, int node, - struct vmem_altmap *altmap) + struct vmem_altmap *altmap, + struct page *reuse) { unsigned long addr = start; pte_t *pte; for (; addr < end; addr += PAGE_SIZE) { - pte = vmemmap_populate_address(addr, node, altmap); + pte = vmemmap_populate_address(addr, node, altmap, reuse); if (!pte) return -ENOMEM; } @@ -656,7 +673,95 @@ static int __meminit vmemmap_populate_range(unsigned long start, int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end, int node, struct vmem_altmap *altmap) { - return vmemmap_populate_range(start, end, node, altmap); + return vmemmap_populate_range(start, end, node, altmap, NULL); +} + +/* + * For compound pages bigger than section size (e.g. x86 1G compound + * pages with 2M subsection size) fill the rest of sections as tail + * pages. + * + * Note that memremap_pages() resets @nr_range value and will increment + * it after each range successful onlining. Thus the value or @nr_range + * at section memmap populate corresponds to the in-progress range + * being onlined here. + */ +static bool __meminit reuse_compound_section(unsigned long start_pfn, + struct dev_pagemap *pgmap) +{ + unsigned long nr_pages = pgmap_vmemmap_nr(pgmap); + unsigned long offset = start_pfn - + PHYS_PFN(pgmap->ranges[pgmap->nr_range].start); + + return !IS_ALIGNED(offset, nr_pages) && nr_pages > PAGES_PER_SUBSECTION; +} + +static pte_t * __meminit compound_section_tail_page(unsigned long addr) +{ + pte_t *pte; + + addr -= PAGE_SIZE; + + /* + * Assuming sections are populated sequentially, the previous section's + * page data can be reused. + */ + pte = pte_offset_kernel(pmd_off_k(addr), addr); + if (!pte) + return NULL; + + return pte; +} + +static int __meminit vmemmap_populate_compound_pages(unsigned long start_pfn, + unsigned long start, + unsigned long end, int node, + struct dev_pagemap *pgmap) +{ + unsigned long size, addr; + pte_t *pte; + int rc; + + if (reuse_compound_section(start_pfn, pgmap)) { + pte = compound_section_tail_page(start); + if (!pte) + return -ENOMEM; + + /* + * Reuse the page that was populated in the prior iteration + * with just tail struct pages. 
+ */ + return vmemmap_populate_range(start, end, node, NULL, + pte_page(*pte)); + } + + size = min(end - start, pgmap_vmemmap_nr(pgmap) * sizeof(struct page)); + for (addr = start; addr < end; addr += size) { + unsigned long next = addr, last = addr + size; + + /* Populate the head page vmemmap page */ + pte = vmemmap_populate_address(addr, node, NULL, NULL); + if (!pte) + return -ENOMEM; + + /* Populate the tail pages vmemmap page */ + next = addr + PAGE_SIZE; + pte = vmemmap_populate_address(next, node, NULL, NULL); + if (!pte) + return -ENOMEM; + + /* + * Reuse the previous page for the rest of tail pages + * See layout diagram in Documentation/vm/vmemmap_dedup.rst + */ + next += PAGE_SIZE; + rc = vmemmap_populate_range(next, last, node, NULL, + pte_page(*pte)); + if (rc) + return -ENOMEM; + } + + return 0; } struct page * __meminit __populate_section_memmap(unsigned long pfn, @@ -665,12 +770,19 @@ struct page * __meminit __populate_section_memmap(unsigned long pfn, { unsigned long start = (unsigned long) pfn_to_page(pfn); unsigned long end = start + nr_pages * sizeof(struct page); + int r; if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) || !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION))) return NULL; - if (vmemmap_populate(start, end, nid, altmap)) + if (is_power_of_2(sizeof(struct page)) && + pgmap && pgmap_vmemmap_nr(pgmap) > 1 && !altmap) + r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap); + else + r = vmemmap_populate(start, end, nid, altmap); + + if (r < 0) return NULL; return pfn_to_page(pfn); -- cgit v1.2.3 From d1ed51fcdbd69be3729f6e249b61cc73fb3b2dd8 Mon Sep 17 00:00:00 2001 From: Akira Yokosawa Date: Thu, 12 May 2022 20:23:07 -0700 Subject: docs: vm/page_owner: tweak literal block in STANDARD FORMAT SPECIFIERS A semantic conflict between commit 5603f9bdea68 ("docs: vm/page_owner: use literal blocks for param description") and a change queued for v5.19 authored by Jiajian Ye ("tools/vm/page_owner_sort.c: support sorting blocks by multiple keys") results in a warning from "make htmldocs" saying: [...]/vm/page_owner.rst:176: WARNING: Literal block expected; none found. This is because a literal block in ReST ends at a line which has the same indent as the paragraph preceding it. In this case the one with no indent. Indent the two "For --xxxx option:" lines by two columns and make the whole section a literal block. While at it, fix indents by white spaces of "ator" keys. 
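As a small illustration of the indentation rule described above (this
snippet is not part of the patch), a reST literal block keeps going only
while its lines are indented more deeply than the paragraph that
introduces it::

  A paragraph introducing a literal block::

    this line is still inside the literal block
  this line has the same indent as the paragraph, so the block has ended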
Link: https://lkml.kernel.org/r/fdfecc82-d41e-6d8a-738d-4beb6faa27fb@gmail.com
Signed-off-by: Akira Yokosawa
Reported-by: Shenghong Han
Cc: Jiajian Ye
Cc: Chongxi Zhao
Cc: Yinan Zhang
Cc: Yixuan Cao
Cc: Yongqiang Liu
Cc: Yuhong Feng
Cc: Haowen Bai
Cc: Jonathan Corbet
Signed-off-by: Andrew Morton
---
 Documentation/vm/page_owner.rst | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/page_owner.rst b/Documentation/vm/page_owner.rst
index 25622c715823..f5c954afe97c 100644
--- a/Documentation/vm/page_owner.rst
+++ b/Documentation/vm/page_owner.rst
@@ -173,7 +173,7 @@ STANDARD FORMAT SPECIFIERS
 ==========================
 ::

-For --sort option:
+  For --sort option:

 	KEY		LONG		DESCRIPTION
 	p		pid		process ID
@@ -183,9 +183,9 @@ For --sort option:
 	T		txt		full text of block
 	ft		free_ts		timestamp of the page when it was released
 	at		alloc_ts	timestamp of the page when it was allocated
-	  ator		allocator	memory allocator for pages
+	ator		allocator	memory allocator for pages

-For --curl option:
+  For --curl option:

 	KEY		LONG		DESCRIPTION
 	p		pid		process ID
@@ -193,4 +193,4 @@ For --curl option:
 	n		name		task command name
 	f		free		whether the page has been released or not
 	st		stacktrace	stack trace of the page allocation
-	  ator		allocator	memory allocator for pages
+	ator		allocator	memory allocator for pages
--
cgit v1.2.3

From 174270c2d664ca8ff60d2d95211b4790cedf881c Mon Sep 17 00:00:00 2001
From: Fabio M. De Francesco
Date: Fri, 13 May 2022 16:48:55 -0700
Subject: Documentation/vm: include kdocs from highmem*.h into highmem.rst

The kernel-docs that are in include/linux/highmem.h and in
include/linux/highmem-internal.h should be included in highmem.rst.

Use kernel-doc directives to include the above-mentioned comments into
highmem.rst.

Link: https://lkml.kernel.org/r/20220428212455.892-3-fmdefrancesco@gmail.com
Signed-off-by: Fabio M. De Francesco
Acked-by: Mike Rapoport
Reviewed-by: Ira Weiny
Suggested-by: Ira Weiny
Reviewed-by: Sebastian Andrzej Siewior
Cc: Jonathan Corbet
Cc: Matthew Wilcox
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Peter Collingbourne
Cc: Vlastimil Babka
Cc: Will Deacon
Cc: Jonathan Corbet
Signed-off-by: Andrew Morton
---
 Documentation/vm/highmem.rst | 7 +++++++
 1 file changed, 7 insertions(+)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/highmem.rst b/Documentation/vm/highmem.rst
index 0f69a9fec34d..ccff08a8211d 100644
--- a/Documentation/vm/highmem.rst
+++ b/Documentation/vm/highmem.rst
@@ -145,3 +145,10 @@ The general recommendation is that you don't use more than 8GiB on a 32-bit
 machine - although more might work for you and your workload, you're pretty
 much on your own - don't expect kernel developers to really care much if
 things come apart.
+
+
+Functions
+=========
+
+.. kernel-doc:: include/linux/highmem.h
+.. kernel-doc:: include/linux/highmem-internal.h
--
cgit v1.2.3

From 85a85e7601263f2b4edc86136dd0172c2cfbfaa1 Mon Sep 17 00:00:00 2001
From: Fabio M. De Francesco
Date: Fri, 13 May 2022 16:48:55 -0700
Subject: Documentation/vm: move "Using kmap-atomic" to highmem.h

The use of kmap_atomic() in new code is being deprecated in favor of
kmap_local_page(). For this reason the "Using kmap_atomic" section in
highmem.rst is obsolete and unnecessary, but it can still help developers
if moved to the kdocs in highmem.h.

Therefore, move the relevant parts of this section from highmem.rst and
merge them with the kdocs in highmem.h.
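As a hedged illustration of the migration this refers to (not part of the
patch), the nested two-page copy from the moved section would look like
this with the preferred kmap_local_page() API; as with kmap_atomic(), the
mappings must be released in strictly reverse order:

	/* Copy one possibly-highmem page to another using local mappings. */
	void *vaddr1 = kmap_local_page(page1);
	void *vaddr2 = kmap_local_page(page2);

	memcpy(vaddr1, vaddr2, PAGE_SIZE);

	kunmap_local(vaddr2);
	kunmap_local(vaddr1);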
Link: https://lkml.kernel.org/r/20220428212455.892-4-fmdefrancesco@gmail.com
Signed-off-by: Fabio M. De Francesco
Suggested-by: Ira Weiny
Reviewed-by: Sebastian Andrzej Siewior
Reviewed-by: Ira Weiny
Cc: Jonathan Corbet
Cc: Matthew Wilcox
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Mike Rapoport
Cc: Peter Collingbourne
Cc: Vlastimil Babka
Cc: Will Deacon
Cc: Jonathan Corbet
Signed-off-by: Andrew Morton
---
 Documentation/vm/highmem.rst | 35 -----------------------------------
 include/linux/highmem.h      | 31 +++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+), 35 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/highmem.rst b/Documentation/vm/highmem.rst
index ccff08a8211d..e05bf5524174 100644
--- a/Documentation/vm/highmem.rst
+++ b/Documentation/vm/highmem.rst
@@ -72,41 +72,6 @@ The kernel contains several ways of creating temporary mappings:
 
   It may be assumed that k[un]map_atomic() won't fail.
 
-Using kmap_atomic
-=================
-
-When and where to use kmap_atomic() is straightforward. It is used when code
-wants to access the contents of a page that might be allocated from high memory
-(see __GFP_HIGHMEM), for example a page in the pagecache. The API has two
-functions, and they can be used in a manner similar to the following::
-
-  /* Find the page of interest. */
-  struct page *page = find_get_page(mapping, offset);
-
-  /* Gain access to the contents of that page. */
-  void *vaddr = kmap_atomic(page);
-
-  /* Do something to the contents of that page. */
-  memset(vaddr, 0, PAGE_SIZE);
-
-  /* Unmap that page. */
-  kunmap_atomic(vaddr);
-
-Note that the kunmap_atomic() call takes the result of the kmap_atomic() call
-not the argument.
-
-If you need to map two pages because you want to copy from one page to
-another you need to keep the kmap_atomic calls strictly nested, like::
-
-  vaddr1 = kmap_atomic(page1);
-  vaddr2 = kmap_atomic(page2);
-
-  memcpy(vaddr1, vaddr2, PAGE_SIZE);
-
-  kunmap_atomic(vaddr2);
-  kunmap_atomic(vaddr1);
-
-
 Cost of Temporary Mappings
 ==========================
 
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c05ae395898a..3af34de54330 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -145,6 +145,37 @@ static inline void *kmap_local_folio(struct folio *folio, size_t offset);
  * Mappings should always be released by kunmap_atomic().
  *
  * Do not use in new code. Use kmap_local_page() instead.
+ *
+ * It is used in atomic context when code wants to access the contents of a
+ * page that might be allocated from high memory (see __GFP_HIGHMEM), for
+ * example a page in the pagecache. The API has two functions, and they
+ * can be used in a manner similar to the following:
+ *
+ * -- Find the page of interest. --
+ * struct page *page = find_get_page(mapping, offset);
+ *
+ * -- Gain access to the contents of that page. --
+ * void *vaddr = kmap_atomic(page);
+ *
+ * -- Do something to the contents of that page. --
+ * memset(vaddr, 0, PAGE_SIZE);
+ *
+ * -- Unmap that page. --
+ * kunmap_atomic(vaddr);
+ *
+ * Note that the kunmap_atomic() call takes the result of the kmap_atomic()
+ * call, not the argument.
+ *
+ * If you need to map two pages because you want to copy from one page to
+ * another you need to keep the kmap_atomic calls strictly nested, like:
+ *
+ * vaddr1 = kmap_atomic(page1);
+ * vaddr2 = kmap_atomic(page2);
+ *
+ * memcpy(vaddr1, vaddr2, PAGE_SIZE);
+ *
+ * kunmap_atomic(vaddr2);
+ * kunmap_atomic(vaddr1);
  */
 static inline void *kmap_atomic(struct page *page);
--
cgit v1.2.3


From 110bf7a523075e42703fbe9c136ad00b867bec3a Mon Sep 17 00:00:00 2001
From: Fabio M. De Francesco
Date: Fri, 13 May 2022 16:48:55 -0700
Subject: Documentation/vm: rework "Temporary Virtual Mappings" section

Extend and rework the "Temporary Virtual Mappings" section of the
highmem.rst documentation.

Although local kmaps were introduced by Thomas Gleixner in October 2020,
the documentation was still missing information about them. These
additions rely largely on Gleixner's patches, Jonathan Corbet's LWN
articles, comments by Ira Weiny and Matthew Wilcox, and in-code comments
from ./include/linux/highmem.h.

1) Add a paragraph to document kmap_local_page().

2) Reorder the list of functions by decreasing order of preference
   of use.

3) Rework part of the kmap() entry in the list.

Link: https://lkml.kernel.org/r/20220428212455.892-5-fmdefrancesco@gmail.com
Signed-off-by: Fabio M. De Francesco
Suggested-by: Ira Weiny
Reviewed-by: Sebastian Andrzej Siewior
Reviewed-by: Ira Weiny
Cc: Jonathan Corbet
Cc: Matthew Wilcox
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Catalin Marinas
Cc: Mike Rapoport
Cc: Peter Collingbourne
Cc: Vlastimil Babka
Cc: Will Deacon
Cc: Jonathan Corbet
Signed-off-by: Andrew Morton
---
 Documentation/vm/highmem.rst | 70 +++++++++++++++++++++++++++++++++++++-------
 1 file changed, 59 insertions(+), 11 deletions(-)

(limited to 'Documentation/vm')

diff --git a/Documentation/vm/highmem.rst b/Documentation/vm/highmem.rst
index e05bf5524174..c9887f241c6c 100644
--- a/Documentation/vm/highmem.rst
+++ b/Documentation/vm/highmem.rst
@@ -50,26 +50,74 @@ space when they use mm context tags.
 Temporary Virtual Mappings
 ==========================
 
-The kernel contains several ways of creating temporary mappings:
+The kernel contains several ways of creating temporary mappings. The following
+list shows them in order of preference of use.
 
-* vmap(). This can be used to make a long duration mapping of multiple
-  physical pages into a contiguous virtual space. It needs global
-  synchronization to unmap.
+* kmap_local_page(). This function is used to request short term mappings.
+  It can be invoked from any context (including interrupts) but the mappings
+  can only be used in the context which acquired them.
+
+  This function should be preferred, where feasible, over all the others.
 
-* kmap(). This permits a short duration mapping of a single page. It needs
-  global synchronization, but is amortized somewhat. It is also prone to
-  deadlocks when using in a nested fashion, and so it is not recommended for
-  new code.
+  These mappings are thread-local and CPU-local, meaning that the mapping
+  can only be accessed from within this thread and the thread is bound to the
+  CPU while the mapping is active. Even if the thread is preempted (since
+  preemption is never disabled by the function) the CPU cannot be
+  unplugged from the system via CPU-hotplug until the mapping is disposed.
+
+  It's valid to take pagefaults in a local kmap region, unless the context
+  in which the local mapping is acquired does not allow it for other reasons.
+
+  kmap_local_page() always returns a valid virtual address and it is assumed
+  that kunmap_local() will never fail.
+
+  Nesting kmap_local_page() and kmap_atomic() mappings is allowed to a certain
+  extent (up to KMAP_TYPE_NR) but their invocations have to be strictly ordered
+  because the map implementation is stack based. See kmap_local_page() kdocs
+  (included in the "Functions" section) for details on how to manage nested
+  mappings.
 
 * kmap_atomic(). This permits a very short duration mapping of a single
   page. Since the mapping is restricted to the CPU that issued it, it
   performs well, but the issuing task is therefore required to stay on that
   CPU until it has finished, lest some other task displace its mappings.
 
-  kmap_atomic() may also be used by interrupt contexts, since it is does not
-  sleep and the caller may not sleep until after kunmap_atomic() is called.
+  kmap_atomic() may also be used by interrupt contexts, since it does not
+  sleep and the callers too may not sleep until after kunmap_atomic() is
+  called.
+
+  Each call of kmap_atomic() in the kernel creates a non-preemptible section
+  and disables pagefaults. This could be a source of unwanted latency.
+  Therefore, users should prefer kmap_local_page() instead of kmap_atomic().
 
-  It may be assumed that k[un]map_atomic() won't fail.
+  It is assumed that k[un]map_atomic() won't fail.
+
+* kmap(). This should be used to make a short duration mapping of a single
+  page with no restrictions on preemption or migration. It comes with an
+  overhead as mapping space is restricted and protected by a global lock
+  for synchronization. When a mapping is no longer needed, the address that
+  the page was mapped to must be released with kunmap().
+
+  Mapping changes must be propagated across all the CPUs. kmap() also
+  requires global TLB invalidation when the kmap's pool wraps and it might
+  block when the mapping space is fully utilized until a slot becomes
+  available. Therefore, kmap() is only callable from preemptible context.
+
+  All the above work is necessary if a mapping must last for a relatively
+  long time but the bulk of high-memory mappings in the kernel are
+  short-lived and only used in one place. This means that the cost of
+  kmap() is mostly wasted in such cases. kmap() was not intended for long
+  term mappings but it has morphed in that direction and its use is
+  strongly discouraged in newer code and the set of the preceding functions
+  should be preferred.
+
+  On 64-bit systems, calls to kmap_local_page(), kmap_atomic() and kmap() have
+  no real work to do because a 64-bit address space is more than sufficient to
+  address all the physical memory whose pages are permanently mapped.
+
+* vmap(). This can be used to make a long duration mapping of multiple
+  physical pages into a contiguous virtual space. It needs global
+  synchronization to unmap.
 
 Cost of Temporary Mappings
--
cgit v1.2.3
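
As a closing companion to the reworked section above, here is a minimal
usage sketch of the preferred kmap_local_page()/kunmap_local() pattern (an
editorial illustration, not part of any patch in this log; the function
names zero_page_example() and copy_page_example() are hypothetical, and
in-tree helpers such as clear_highpage() and copy_highpage() already wrap
these idioms):

#include <linux/highmem.h>	/* kmap_local_page(), kunmap_local() */
#include <linux/string.h>	/* memset(), memcpy() */

/* Zero a page that may live in high memory: map, touch, unmap. */
static void zero_page_example(struct page *page)
{
	void *vaddr = kmap_local_page(page);	/* thread-local mapping; never fails */

	memset(vaddr, 0, PAGE_SIZE);		/* access the page contents */
	kunmap_local(vaddr);			/* pass the returned address, not the page */
}

/* Copy between two pages; unmap in reverse order since the map stack is LIFO. */
static void copy_page_example(struct page *dst, struct page *src)
{
	void *vdst = kmap_local_page(dst);
	void *vsrc = kmap_local_page(src);

	memcpy(vdst, vsrc, PAGE_SIZE);
	kunmap_local(vsrc);
	kunmap_local(vdst);
}

Unlike the kmap_atomic() pattern shown in the kdocs above, these sections
stay preemptible and may take pagefaults, per the guarantees listed in the
reworked text.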