			Cache and TLB Flushing
			     Under Linux

		David S. Miller <davem@redhat.com>
This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and what side effects are expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and describe what is to happen on that single
processor.  The SMP cases are a simple extension, in that you just
extend the definition such that the side effect for a particular
interface occurs on all processors in the system.  Don't let this
scare you into thinking SMP cache/tlb flushing must be so
inefficient; this is in fact an area where many optimizations are
possible.  For example, if it can be proven that a user address space
has never executed on a cpu (see mm->cpu_vm_mask), one need not
perform a flush for this address space on that cpu.
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  Meaning that if the software page tables change, it is
possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:
1) void flush_tlb_all(void)

	The most severe flush of all.  After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.
2) void flush_tlb_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the TLB.  After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork and exec.
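	As an illustration only, here is a sketch of how a port with
	per-address-space TLB context numbers (in the spirit of the
	sparc ports) might implement this.  The helpers
	allocate_new_tlb_context(), load_tlb_context(), and
	invalidate_tlb_context() are hypothetical, not part of any
	real port.

	void flush_tlb_mm(struct mm_struct *mm)
	{
		if (mm == current->active_mm) {
			/* 'mm' is live on this cpu: hand it a fresh
			 * context number so stale entries tagged with
			 * the old context can never match again.
			 */
			allocate_new_tlb_context(mm);
			load_tlb_context(mm);
		} else {
			/* Not loaded here; just invalidate the cached
			 * context so a new one is allocated the next
			 * time 'mm' is switched in.
			 */
			invalidate_tlb_context(mm);
		}
	}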
3) void flush_tlb_range(struct mm_struct *mm,
			unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'mm' in the range 'start'
	to 'end' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm' for virtual
	addresses in the range 'start' to 'end'.

	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'page' will be visible to the cpu.  That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'page'.

	This is used primarily during fault processing.
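	To make the range/page distinction concrete, here is a sketch
	(hypothetical, not from a real port) for a cpu whose only
	primitive is a single-entry invalidate, wrapped below as the
	assumed helper invalidate_tlb_entry(mm, vaddr).  Past some
	threshold it is cheaper to dump the whole address space than
	to walk it page by page.

	#define TLB_RANGE_THRESHOLD	(64 * PAGE_SIZE) /* tuning knob */

	void flush_tlb_range(struct mm_struct *mm,
			     unsigned long start, unsigned long end)
	{
		if (end - start > TLB_RANGE_THRESHOLD) {
			/* Too many pages; flushing everything for
			 * 'mm' is cheaper than walking the range.
			 */
			flush_tlb_mm(mm);
			return;
		}
		for (start &= PAGE_MASK; start < end; start += PAGE_SIZE)
			invalidate_tlb_entry(mm, start);
	}

	void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
	{
		invalidate_tlb_entry(vma->vm_mm, page & PAGE_MASK);
	}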
5) void flush_tlb_pgtables(struct mm_struct *mm,
			   unsigned long start, unsigned long end)

	The software page tables for address space 'mm' for virtual
	addresses in the range 'start' to 'end' are being torn down.

	Some platforms cache the lowest level of the software page tables
	in a linear virtually mapped array, to make TLB miss processing
	more efficient.  On such platforms, since the TLB is caching the
	software page table structure, it needs to be flushed when parts
	of the software page table tree are unlinked/freed.

	Sparc64 is one example of a platform which does this.

	Usually, when munmap()'ing an area of user virtual address
	space, the kernel leaves the page table parts around and just
	marks the individual pte's as invalid.  However, if very large
	portions of the address space are unmapped, the kernel frees up
	those portions of the software page tables to prevent potential
	excessive kernel memory usage caused by erratic mmap/munmap
	sequences.  It is at these times that flush_tlb_pgtables will
	be invoked.
6) void update_mmu_cache(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte)

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	described by "pte" now exists at virtual address "address"
	for address space "vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.
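	As a sketch of the pre-loading idea (hypothetical; see the
	sparc64 port for a real implementation), with the assumed
	helper tlb_preload() standing in for the cpu's TLB-write
	primitive:

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		/* The fault handler just installed 'pte'.  Dropping
		 * it straight into the TLB avoids the miss we would
		 * otherwise take as soon as the faulting instruction
		 * is restarted.
		 */
		if (pte_present(pte))
			tlb_preload(vma->vm_mm, address & PAGE_MASK, pte);
	}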
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

	1) flush_cache_mm(mm);
	   change_all_page_tables_of(mm);
	   flush_tlb_mm(mm);

	2) flush_cache_range(mm, start, end);
	   change_range_of_page_tables(mm, start, end);
	   flush_tlb_range(mm, start, end);

	3) flush_cache_page(vma, page);
	   set_pte(pte_pointer, new_pte_val);
	   flush_tlb_page(vma, page);
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:
1) void flush_cache_all(void)

	The most severe flush of all.  After this interface runs,
	the entire cpu cache is flushed.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_cache_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork, exit, and exec.
3) void flush_cache_range(struct mm_struct *mm,
			  unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'mm' for virtual addresses in the
	range 'start' to 'end'.

	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page)

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'page'.

	This is used primarily during fault processing.
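	As a sketch only: on a hypothetical write-back, virtually
	indexed cache whose primitive is a per-line flush by virtual
	address (the helpers flush_dcache_line() and
	invalidate_icache_line() below are assumed), this might look
	like:

	void flush_cache_page(struct vm_area_struct *vma, unsigned long page)
	{
		unsigned long addr;

		page &= PAGE_MASK;
		for (addr = page; addr < page + PAGE_SIZE;
		     addr += L1_CACHE_BYTES)
			flush_dcache_line(addr); /* write back + invalidate */

		/* On a Harvard style cache, executable mappings need
		 * the instruction side invalidated as well.
		 */
		if (vma->vm_flags & VM_EXEC)
			for (addr = page; addr < page + PAGE_SIZE;
			     addr += L1_CACHE_BYTES)
				invalidate_icache_line(addr);
	}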
There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.
If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

NOTE: This does not fix shared mmaps; check out the sparc64 port for
      one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
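As an illustration, an asm/shmparam.h for a hypothetical port with a
64KB virtually indexed D-cache (the size here is made up) could be as
simple as:

	/* Two mappings of the same page can only alias if they differ
	 * in the virtual address bits that index the D-cache, so
	 * forcing shared mappings to 64KB aligned addresses keeps
	 * them all the same cache "color".
	 */
	#define SHMLBA	0x10000	/* size of the virtually indexed D-cache */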
Next, you have two methods to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

First, I describe the old method to deal with this problem.  I am
describing it for documentation purposes, but it is deprecated; the
latter method I describe next should be used by all new ports, and
all existing ports should move over to the new mechanism as well.
	flush_page_to_ram(struct page *page)

	The physical page 'page' is about to be placed into the
	user address space of a process.  If it is possible for
	stores done recently by the kernel into this physical
	page to not be visible to an arbitrary mapping in userspace,
	you must flush this page from the D-cache.

	If the D-cache is writeback in nature, the dirty data (if
	any) for this physical page must be written back to main
	memory before the cache lines are invalidated.
Admittedly, the author did not think very much when designing this
interface.  It does not give the architecture enough information about
what exactly is going on, and there is no context on which to base a
judgment about whether an alias is possible at all.  The new
interfaces to deal with D-cache aliasing are meant to address this by
telling the architecture specific code exactly what is going on at the
proper points in time.
Here is the new interface:
	void copy_user_page(void *to, void *from, unsigned long address)
	void clear_user_page(void *to, unsigned long address)

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64,
	for example, uses this technique.

	The "address" parameter tells the virtual address where the
	user will ultimately have this page mapped.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.
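	For that simple non-aliasing case, the two routines reduce to
	plain memory operations which ignore the 'address' argument
	entirely, e.g.:

	#include <linux/string.h>	/* memcpy, memset */

	void copy_user_page(void *to, void *from, unsigned long address)
	{
		memcpy(to, from, PAGE_SIZE);	/* no alias concerns */
	}

	void clear_user_page(void *to, unsigned long address)
	{
		memset(to, 0, PAGE_SIZE);
	}

	An aliasing port would instead remap 'to' (and 'from') at
	kernel virtual addresses of the same cache color as 'address'
	before doing the copy, as in the sparc64 technique described
	above.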
	void flush_dcache_page(struct page *page)

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	NOTE: This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important: if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.
	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is, first at flush_dcache_page() time, if
	page->mapping->i_mmap{,_shared} are empty lists, just mark the
	architecture private page flag bit.  Later, in
	update_mmu_cache(), a check is made of this flag bit, and if
	set the flush is done and the flag bit is cleared.

	IMPORTANT NOTE: It is often important, if you defer the flush,
			that the actual flush occurs on the same CPU
			as did the cpu stores into the page to make it
			dirty.  Again, see sparc64 for examples of how
			to deal with this.
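	To make the deferral scheme concrete, here is a sketch modeled
	loosely on the sparc64 approach; __flush_dcache_page() stands
	in for the port's real cache flushing primitive, and the
	same-CPU concern from the note above is omitted for brevity:

	void flush_dcache_page(struct page *page)
	{
		if (page->mapping &&
		    !page->mapping->i_mmap &&
		    !page->mapping->i_mmap_shared) {
			/* No user mappings yet: just remember that
			 * the page is dirty in the cache and defer
			 * the flush until a user mapping appears.
			 */
			set_bit(PG_arch_1, &page->flags);
			return;
		}
		__flush_dcache_page(page);
	}

	void update_mmu_cache(struct vm_area_struct *vma,
			      unsigned long address, pte_t pte)
	{
		struct page *page = pte_page(pte);

		/* A user translation is being set up; perform the
		 * flush we deferred earlier, if any.
		 */
		if (VALID_PAGE(page) &&
		    test_and_clear_bit(PG_arch_1, &page->flags))
			__flush_dcache_page(page);
	}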
	void flush_icache_range(unsigned long start, unsigned long end)

	When the kernel stores into addresses that it will execute
	out of (e.g. when loading modules), this function is called.

	If the icache does not snoop stores then this routine will need
	to flush it.
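	As a sketch for a hypothetical Harvard cache with no I-cache
	snooping (the per-line helpers clean_dcache_line() and
	invalidate_icache_line() are assumed):

	void flush_icache_range(unsigned long start, unsigned long end)
	{
		unsigned long addr;

		for (addr = start & ~(L1_CACHE_BYTES - 1); addr < end;
		     addr += L1_CACHE_BYTES) {
			clean_dcache_line(addr);      /* push stores out */
			invalidate_icache_line(addr); /* drop stale insns */
		}
		/* Many cpus also want a pipeline flush or sync barrier
		 * here before the new instructions are executed.
		 */
	}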
	void flush_icache_page(struct vm_area_struct *vma, struct page *page)

	All the functionality of flush_icache_page can be implemented in
	flush_dcache_page and update_mmu_cache.  In 2.5 the hope is to
	remove this interface completely.