Date: 12 Feb 2021
From: Catalin Marinas
Subject: Re: [PATCH v13 4/7] arm64: mte: Enable TCO in functions that can read beyond buffer limits
On Thu, Feb 11, 2021 at 03:33:50PM +0000, Vincenzo Frascino wrote:
> diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
> index 706b7ab75f31..65ecb86dd886 100644
> --- a/arch/arm64/kernel/mte.c
> +++ b/arch/arm64/kernel/mte.c
> @@ -26,6 +26,10 @@ u64 gcr_kernel_excl __ro_after_init;
>
>  static bool report_fault_once = true;
>
> +/* Whether the MTE asynchronous mode is enabled. */
> +DEFINE_STATIC_KEY_FALSE(mte_async_mode);
> +EXPORT_SYMBOL_GPL(mte_async_mode);
> +
>  static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
>  {
>  	pte_t old_pte = READ_ONCE(*ptep);
> @@ -119,12 +123,24 @@ static inline void __mte_enable_kernel(const char *mode, unsigned long tcf)
>  void mte_enable_kernel_sync(void)
>  {
>  	__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);
> +
> +	/*
> +	 * This function is called on each active smp core at boot
> +	 * time, hence we do not need to take cpu_hotplug_lock again.
> +	 */
> +	static_branch_disable_cpuslocked(&mte_async_mode);
>  }
>  EXPORT_SYMBOL_GPL(mte_enable_kernel_sync);
>
>  void mte_enable_kernel_async(void)
>  {
>  	__mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC);
> +
> +	/*
> +	 * This function is called on each active smp core at boot
> +	 * time, hence we do not need to take cpu_hotplug_lock again.
> +	 */
> +	static_branch_enable_cpuslocked(&mte_async_mode);
>  }

Sorry, I missed the cpuslocked aspect before. Is there any reason you
need to use this API here? I suggested adding it to
mte_enable_kernel_sync() because kasan may at some point do this
dynamically at run-time, so the boot-time argument doesn't hold. But
it's also incorrect, as this function will be called for hot-plugged
CPUs as well after boot.

The only reason to use static_branch_*_cpuslocked() is if it's called
from a region that has already invoked cpus_read_lock(), which I don't
think is the case here.
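
FWIW, the plain static_branch_enable()/static_branch_disable() helpers
take cpu_hotplug_lock themselves, so something along these lines should
be enough (untested sketch, assuming neither caller already runs with
cpus_read_lock() held):

void mte_enable_kernel_sync(void)
{
	__mte_enable_kernel("synchronous", SCTLR_ELx_TCF_SYNC);

	/*
	 * The mode may also be switched after boot (e.g. by kasan at
	 * run-time or on CPU hot-plug), so let the plain helper take
	 * cpu_hotplug_lock for us.
	 */
	static_branch_disable(&mte_async_mode);
}

void mte_enable_kernel_async(void)
{
	__mte_enable_kernel("asynchronous", SCTLR_ELx_TCF_ASYNC);

	static_branch_enable(&mte_async_mode);
}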

--
Catalin
