From 4c4915627f94a81a834a7a65dee83acdfb45788c Mon Sep 17 00:00:00 2001
From: Eric Dumazet
Date: Wed, 30 Jan 2008 13:32:50 +0100
Subject: x86: make arch/x86/kernel/acpi/wakeup_32.S use a separate

While examining the vmlinux namelist on i386 (nm -v vmlinux) I noticed:

c01021d0 t es7000_rename_gsi
c010221a T es7000_start_cpu
c0103000 T thread_saved_pc

and

c0113218 T acpi_restore_state_mem
c0113219 T acpi_save_state_mem
c0114000 t wakeup_code

This is because arch/x86/kernel/acpi/wakeup_32.S forces a .text alignment
of 4096 bytes. (I have no idea if it is really needed, since
arch/x86/kernel/acpi/wakeup_64.S uses only a 16-byte alignment.)

So arch/x86/kernel/built-in.o also has this alignment:

arch/x86/kernel/built-in.o:     file format elf32-i386

Sections:
Idx Name          Size      VMA       LMA       File off  Algn
  0 .text         00018c94  00000000  00000000  00001000  2**12
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE

But since arch/x86/kernel/acpi/wakeup_32.o is not the first object linked
into arch/x86/kernel/built-in.o, the linker had to insert several holes to
meet the alignment requirement, because of the .o nesting in the kbuild
process.

This can be solved by using a special section, .text.page_aligned, so
that no holes are needed.

# size vmlinux.before vmlinux.after
   text    data     bss     dec     hex filename
4619942  422838  458752 5501532  53f25c vmlinux.before
4610534  422838  458752 5492124  53cd9c vmlinux.after

This saves 9408 bytes.

Signed-off-by: Eric Dumazet
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner
---
 arch/x86/kernel/vmlinux_32.lds.S | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
index ec072588ff01..f1148ac8abe3 100644
--- a/arch/x86/kernel/vmlinux_32.lds.S
+++ b/arch/x86/kernel/vmlinux_32.lds.S
@@ -38,6 +38,8 @@ SECTIONS
 
   /* read-only */
   .text : AT(ADDR(.text) - LOAD_OFFSET) {
+	. = ALIGN(4096); /* not really needed, already page aligned */
+	*(.text.page_aligned)
 	TEXT_TEXT
 	SCHED_TEXT
 	LOCK_TEXT
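
For context, the rule added to the linker script above only collects input
sections named .text.page_aligned; an assembly file has to opt in by emitting
its code into that section instead of forcing a 4096-byte alignment on .text
itself. Below is a minimal GNU assembler sketch of that idea, illustrative
only and not part of this patch (the label name is made up):

	/*
	 * Illustrative sketch: place code in the dedicated input section
	 * that the *(.text.page_aligned) rule in vmlinux_32.lds.S collects,
	 * rather than aligning the whole .text output section.
	 */
		.section .text.page_aligned, "ax"
		.globl	example_page_aligned_stub
	example_page_aligned_stub:
		ret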