Subject: Re: [PATCH v4 2/2] mm: make lru_add_drain_all() selective
On Wed, 7 Aug 2013 16:52:22 -0400 Chris Metcalf <cmetcalf@tilera.com> wrote:

> This change makes lru_add_drain_all() only selectively interrupt
> the cpus that have per-cpu free pages that can be drained.
>
> This is important in nohz mode where calling mlockall(), for
> example, otherwise will interrupt every core unnecessarily.
>
> ...
>
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -405,6 +405,11 @@ static void activate_page_drain(int cpu)
>  	pagevec_lru_move_fn(pvec, __activate_page, NULL);
>  }
>  
> +static bool need_activate_page_drain(int cpu)
> +{
> +	return pagevec_count(&per_cpu(activate_page_pvecs, cpu)) != 0;
> +}
> +
>  void activate_page(struct page *page)
>  {
>  	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
> @@ -422,6 +427,11 @@ static inline void activate_page_drain(int cpu)
>  {
>  }
>  
> +static bool need_activate_page_drain(int cpu)
> +{
> +	return false;
> +}
> +
>  void activate_page(struct page *page)
>  {
>  	struct zone *zone = page_zone(page);
> @@ -683,7 +693,32 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>   */
>  int lru_add_drain_all(void)
>  {
> -	return schedule_on_each_cpu(lru_add_drain_per_cpu);
> +	cpumask_var_t mask;
> +	int cpu, rc;
> +
> +	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
> +		return -ENOMEM;

Newly adding a GFP_KERNEL allocation attempt into lru_add_drain_all()
is dangerous and undesirable: a GFP_KERNEL allocation can block and
enter direct reclaim, which not every caller can tolerate.  I took a
quick look at all the callsites and didn't immediately see a bug, but
it's hard to be sure because they're splattered all over the place.
It would be far better not to do this.

Rather than tossing this hand-grenade in there we should, at a
reluctant minimum, change lru_add_drain_all() to take a gfp_t argument
and then carefully review and update the callers.
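
A minimal sketch of that direction (the gfp_t-taking signature is
hypothetical, and every caller would need auditing to pick the right
flags):

	int lru_add_drain_all(gfp_t gfp)
	{
		cpumask_var_t mask;
		int cpu, rc;

		/* The caller now states which allocation context is
		 * safe, e.g. GFP_ATOMIC from callers that can't sleep. */
		if (!alloc_cpumask_var(&mask, gfp))
			return -ENOMEM;

		/* ... rest as in the patch ... */
	}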

> +	cpumask_clear(mask);
> +
> +	/*
> +	 * Figure out which cpus need flushing. It's OK if we race
> +	 * with changes to the per-cpu lru pvecs, since it's no worse
> +	 * than if we flushed all cpus, since a cpu could still end
> +	 * up putting pages back on its pvec before we returned.
> +	 * And this avoids interrupting other cpus unnecessarily.
> +	 */
> +	for_each_online_cpu(cpu) {
> +		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
> +		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
> +		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
> +		    need_activate_page_drain(cpu))
> +			cpumask_set_cpu(cpu, mask);
> +	}
> +
> +	rc = schedule_on_cpu_mask(lru_add_drain_per_cpu, mask);

And it seems pretty easy to avoid the allocation.  Create a single
cpumask at boot (or, preferably, at compile-time) and whenever we add a
page to a drainable pagevec, do

	cpumask_set_cpu(smp_processor_id(), global_cpumask);

and, to avoid needlessly dirtying a cacheline, test before setting:

	if (!cpumask_test_cpu(smp_processor_id(), global_cpumask))
		cpumask_set_cpu(smp_processor_id(), global_cpumask);
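
Pulled together, that hook might look something like this (a sketch;
lru_drain_needed and mark_lru_drain_needed() are made-up names, and the
call would have to be added to each pagevec-add path):

	static struct cpumask lru_drain_needed;	/* zeroed at boot */

	static void mark_lru_drain_needed(void)
	{
		int cpu = smp_processor_id();

		/* Test first so the common case doesn't dirty the
		 * shared cacheline on every pagevec add. */
		if (!cpumask_test_cpu(cpu, &lru_drain_needed))
			cpumask_set_cpu(cpu, &lru_drain_needed);
	}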


This means that lru_add_drain_all() will need to clear the mask at some
point and atomicity issues arise. It would be better to do the
clearing within schedule_on_cpu_mask() itself, using
cpumask_test_and_clear_cpu().
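
Something along these lines, perhaps (an untested sketch modelled on
schedule_on_each_cpu(); the percpu works allocation could likewise be
made static if we want the path completely allocation-free):

	int schedule_on_cpu_mask(work_func_t func, struct cpumask *mask)
	{
		int cpu;
		struct work_struct __percpu *works;

		works = alloc_percpu(struct work_struct);
		if (!works)
			return -ENOMEM;

		get_online_cpus();
		for_each_online_cpu(cpu) {
			struct work_struct *work = per_cpu_ptr(works, cpu);

			INIT_WORK(work, func);
			/* Atomically consume this cpu's bit: a setter
			 * racing with us is picked up by the next
			 * drain rather than lost. */
			if (cpumask_test_and_clear_cpu(cpu, mask))
				schedule_work_on(cpu, work);
		}
		for_each_online_cpu(cpu)
			flush_work(per_cpu_ptr(works, cpu));	/* no-op if never queued */
		put_online_cpus();
		free_percpu(works);
		return 0;
	}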



Also, what's up with the get_online_cpus() handling?
schedule_on_each_cpu() takes it, lru_add_drain_all() does not, and the
schedule_on_cpu_mask() documentation forgot to mention it.



