mm: skip invalid pages a block at a time in zero_resv_unavail()
author    Pavel Tatashin <pasha.tatashin@oracle.com>
          Fri, 17 Aug 2018 22:44:52 +0000 (15:44 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Fri, 17 Aug 2018 23:20:28 +0000 (16:20 -0700)
The role of zero_resv_unavail() is to make sure that every struct page
that is allocated but is not backed by memory accessible to the kernel
is zeroed rather than left in an uninitialized state.
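
For context, mm_zero_struct_page() used below boils down to clearing the
struct page.  A minimal sketch, assuming the plain-memset form the macro
had in include/linux/mm.h around this time (later kernels open-code an
optimized clear):

	#define mm_zero_struct_page(pp) \
		((void)memset((pp), 0, sizeof(struct page)))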

Since struct pages are allocated in blocks (2M pages in the x86 case), we
can skip ahead pageblock_nr_pages at a time when the first pfn of a block
is found to be invalid.
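
The skip arithmetic can be sanity-checked in userspace.  A minimal sketch
(the ALIGN_DOWN macro below mimics the kernel's power-of-two case, and
512 pfns per pageblock corresponds to 2M blocks of 4K pages on x86):

	#include <stdio.h>

	#define ALIGN_DOWN(x, a)   ((x) & ~((unsigned long)(a) - 1))
	#define pageblock_nr_pages 512UL

	int main(void)
	{
		unsigned long pfn = 1000;	/* hypothetical invalid pfn */

		/* Jump to the last pfn of the current block; the loop's
		 * pfn++ then resumes at the first pfn of the next block. */
		pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
			+ pageblock_nr_pages - 1;
		printf("resume at pfn %lu\n", pfn + 1);	/* prints 1024 */
		return 0;
	}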

This optimization may help since, on x86, every hole in the e820 map is
now marked as reserved in memblock and thus goes through this function.

This function is called before sched_clock() is initialized, so I used
my x86 early boot clock patches to measure the performance improvement.

With a 1T hole on an i7-8700, this path currently takes 0.606918s of boot
time; with this optimization it takes 0.001103s.

Link: http://lkml.kernel.org/r/20180615155733.1175-1-pasha.tatashin@oracle.com
Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Pasha Tatashin <Pavel.Tatashin@microsoft.com>
Cc: Steven Sistare <steven.sistare@oracle.com>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c

index 0922ef5d2e46075d187a49fbb5d2a44d00545b85..33f6745bb6494e29da5ae1fbcb4302d8b37afc73 100644 (file)
@@ -6405,8 +6405,11 @@ void __paginginit zero_resv_unavail(void)
        pgcnt = 0;
        for_each_resv_unavail_range(i, &start, &end) {
                for (pfn = PFN_DOWN(start); pfn < PFN_UP(end); pfn++) {
-                       if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages)))
+                       if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
+                               pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
+                                       + pageblock_nr_pages - 1;
                                continue;
+                       }
                        mm_zero_struct_page(pfn_to_page(pfn));
                        pgcnt++;
                }