Subject: Re: [GIT PULL] x86/mm changes for v4.21
On Thu, Feb 07, 2019 at 09:36:00AM -0800, Luck, Tony wrote:
> On Thu, Feb 07, 2019 at 03:01:31PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 07, 2019 at 11:50:52AM +0000, Linus Torvalds wrote:
> > > If you re-generate the canonical address in __cpa_addr(), now we'll
> > > actually have the real virtual address around for a lot of code-paths
> > > (pte lookup etc), which was what people wanted to avoid in the first
> > > place.
> >
> > Note that it's an 'unsigned long' address, not an actual pointer, and
> > (afaict) none of the code paths use it as a pointer. This _should_ keep
> > the CPU from following said pointer and doing a deref on it.
>
> The type doesn't matter. You want to avoid having the
> true value in a register for as long as possible. The ideal
> spot would be the instruction just before the TLB is flushed.
>
> The speculative issue is that any branch you encounter
> while you have the address in a register may be mispredicted.
> You might also get a bogus hit in the branch target cache
> and speculatively jump into the weeds. While there you
> could find an instruction that loads using that register, and
> even though it is speculative and the instruction won't
> retire, a machine check log will be created in a bank (no
> machine check is signalled).
>
> Once the TLB is updated, you are safe. A speculative
> access to an uncached address will not load or log anything.

Something like so, then? AFAICT CLFLUSH will also #GP if you feed it crap.


diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 4f8972311a77..d3ae92ad72a6 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -230,6 +230,28 @@ static bool __cpa_pfn_in_highmap(unsigned long pfn)

#endif

+/*
+ * Machine check recovery code needs to change cache mode of poisoned
+ * pages to UC to avoid speculative access logging another error. But
+ * passing the address of the 1:1 mapping to set_memory_uc() is a fine
+ * way to encourage a speculative access. So we cheat and flip the top
+ * bit of the address. This works fine for the code that updates the
+ * page tables. But at the end of the process we need to flush the cache
+ * and the non-canonical address causes a #GP fault when used by the
+ * CLFLUSH instruction.
+ *
+ * But in the common case we already have a canonical address. This code
+ * will fix the top bit if needed and is a no-op otherwise.
+ */
+static inline unsigned long fix_addr(unsigned long addr)
+{
+#ifdef CONFIG_X86_64
+ return (long)(addr << 1) >> 1;
+#else
+ return addr;
+#endif
+}
+
static unsigned long __cpa_addr(struct cpa_data *cpa, unsigned long idx)
{
if (cpa->flags & CPA_PAGES_ARRAY) {
@@ -313,7 +335,7 @@ void __cpa_flush_tlb(void *data)
unsigned int i;

for (i = 0; i < cpa->numpages; i++)
- __flush_tlb_one_kernel(__cpa_addr(cpa, i));
+ __flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i)));
}

static void cpa_flush(struct cpa_data *data, int cache)
@@ -347,7 +369,7 @@ static void cpa_flush(struct cpa_data *data, int cache)
* Only flush present addresses:
*/
if (pte && (pte_val(*pte) & _PAGE_PRESENT))
- clflush_cache_range_opt((void *)addr, PAGE_SIZE);
+ clflush_cache_range_opt((void *)fix_addr(addr), PAGE_SIZE);
}
mb();
}
@@ -1627,29 +1649,6 @@ static int __change_page_attr_set_clr(struct cpa_data *cpa, int checkalias)
return ret;
}

-/*
- * Machine check recovery code needs to change cache mode of poisoned
- * pages to UC to avoid speculative access logging another error. But
- * passing the address of the 1:1 mapping to set_memory_uc() is a fine
- * way to encourage a speculative access. So we cheat and flip the top
- * bit of the address. This works fine for the code that updates the
- * page tables. But at the end of the process we need to flush the cache
- * and the non-canonical address causes a #GP fault when used by the
- * CLFLUSH instruction.
- *
- * But in the common case we already have a canonical address. This code
- * will fix the top bit if needed and is a no-op otherwise.
- */
-static inline unsigned long make_addr_canonical_again(unsigned long addr)
-{
-#ifdef CONFIG_X86_64
- return (long)(addr << 1) >> 1;
-#else
- return addr;
-#endif
-}
-
-
static int change_page_attr_set_clr(unsigned long *addr, int numpages,
pgprot_t mask_set, pgprot_t mask_clr,
int force_split, int in_flag,
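
For illustration, a minimal user-space sketch (not kernel code) of the sign-extension
trick that fix_addr() relies on; the helper names and the sample address below are
made up for the example, not taken from the patch:

/*
 * decanonicalize() mimics what the machine check recovery path does when
 * it flips the top bit of a 1:1-map address to hide it from speculation;
 * recanonicalize() performs the same operation as fix_addr(): drop bit 63
 * and sign-extend from bit 62, which restores a kernel address and is a
 * no-op for an address that is already canonical.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t decanonicalize(uint64_t addr)
{
	return addr ^ (1ULL << 63);			/* flip bit 63 -> non-canonical */
}

static uint64_t recanonicalize(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 1) >> 1);	/* same expression as fix_addr() */
}

int main(void)
{
	uint64_t canon  = 0xffff888012345000ULL;	/* hypothetical 1:1-map address */
	uint64_t hidden = decanonicalize(canon);

	printf("canonical: %#llx\n", (unsigned long long)canon);
	printf("hidden:    %#llx\n", (unsigned long long)hidden);
	printf("restored:  %#llx\n", (unsigned long long)recanonicalize(hidden));
	/* Already-canonical input passes through unchanged. */
	printf("no-op:     %#llx\n", (unsigned long long)recanonicalize(canon));
	return 0;
}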