Subject: Re: [PATCH v2 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
From: Christophe Leroy <christophe.leroy@c-s.fr>
Date: 2019-07-30


On 30/07/2019 at 09:42, Jason Yan wrote:
> Add a new helper create_tlb_entry() to create a TLB entry from the given
> virtual and physical address. This is a preparation for booting the kernel
> at a randomized address.
>
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>

Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>

> ---
> arch/powerpc/kernel/head_fsl_booke.S | 29 +++++++++++++++++++++++++++++
> arch/powerpc/mm/mmu_decl.h           |  1 +
> 2 files changed, 30 insertions(+)
>
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index adf0505dbe02..04d124fee17d 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -1114,6 +1114,35 @@ __secondary_hold_acknowledge:
> .long -1
> #endif
>
> +/*
> + * Create a 64M TLB entry from the given address and entry index
> + * r3/r4 - physical address
> + * r5 - virtual address
> + * r6 - entry index (ESEL)
> + */
> +_GLOBAL(create_tlb_entry)
> +	lis	r7,0x1000		/* Set MAS0(TLBSEL) = 1 */
> +	rlwimi	r7,r6,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r6) */
> +	mtspr	SPRN_MAS0,r7		/* Write MAS0 */
> +
> +	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
> +	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
> +	mtspr	SPRN_MAS1,r6		/* Write MAS1 */
> +
> +	lis	r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
> +	ori	r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
> +	and	r6,r6,r5
> +	ori	r6,r6,MAS2_M@l
> +	mtspr	SPRN_MAS2,r6		/* Write MAS2(EPN) */
> +
> +	ori	r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)
> +	mtspr	SPRN_MAS3,r8		/* Write MAS3(RPN) */
> +
> +	tlbwe				/* Write TLB */
> +	isync
> +	sync
> +	blr
> +
> /*
> * Create a tlb entry with the same effective and physical address as
> * the tlb entry used by the current running code. But set the TS to 1.
> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index 32c1a191c28a..a09f89d3aa0f 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
> extern void adjust_total_lowmem(void);
> extern int switch_to_as1(void);
> extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
> +void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
> #endif
> extern void loadcam_entry(unsigned int index);
> extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>
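
For readers less familiar with the FSL BookE MAS register sequence, below is a rough
C-level sketch of what the new assembly helper does. It is illustrative only: it
assumes the usual SPRN_MAS*/MAS_* macros from asm/mmu-book3e.h and the mtspr()
helper from asm/reg.h (none of which this patch introduces), and the real helper
stays in assembly because it is called from the early boot code in head_fsl_booke.S.

/* Illustrative C equivalent of the asm helper above; not part of the patch. */
static inline void create_tlb_entry_c_sketch(phys_addr_t phys,
					     unsigned long virt, int entry)
{
	/* MAS0: select TLB1 and the requested entry slot (ESEL). */
	mtspr(SPRN_MAS0, MAS0_TLBSEL(1) | MAS0_ESEL(entry));

	/* MAS1: valid, invalidation-protected (IPROT), 64M page size. */
	mtspr(SPRN_MAS1, MAS1_VALID | MAS1_IPROT |
			 MAS1_TSIZE(BOOK3E_PAGESZ_64M));

	/* MAS2: effective page number of 'virt' plus the M (coherence) bit. */
	mtspr(SPRN_MAS2, (virt & MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)) | MAS2_M);

	/*
	 * MAS3: low 32 bits of the (64M-aligned) physical address plus
	 * supervisor read/write/execute permissions, mirroring the asm,
	 * which only writes r4 (the low word of the physical address).
	 */
	mtspr(SPRN_MAS3, (u32)phys | MAS3_SW | MAS3_SR | MAS3_SX);

	asm volatile("tlbwe; isync; sync" : : : "memory");
}

With the prototype added to mmu_decl.h, a later caller could then do something like
create_tlb_entry(kernstart_addr, KERNELBASE + offset, 0) to pin a 64M mapping; the
actual call sites arrive in the later patches of this series, so those names are
placeholders only.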
