Subject: Re: [PATCH v2 14/23] x86/resctrl: Calculate bandwidth from the previous __mon_event_count() chunks
Hi James,

On 10/1/2021 9:02 AM, James Morse wrote:
> mbm_bw_count() is only called by the mbm_handle_overflow() worker once a
> second. It reads the hardware register, calculates the bandwidth and
> updates m->prev_bw_msr which is used to hold the previous hardware register
> value.
>
> Operating directly on hardware register values makes it difficult to make
> this code architecture independent, so that it can be moved to /fs/,
> making the mba_sc feature something resctrl supports with no additional
> support from the architecture.
> Prior to calling mbm_bw_count(), mbm_update() reads from the same hardware
> register using __mon_event_count().

Looking back, I think 06c5fe9b12dd ("x86/resctrl: Fix incorrect local
bandwidth when mba_sc is enabled") may explain how the code ended up the
way it is.

> Change mbm_bw_count() to use the current chunks value from
> __mon_event_count() to calculate bandwidth. This means it no longer
> operates on hardware register values.

Ok ... so could the patch just do this as it is stated here? The way it
is implemented is very complicated and makes it hard (for me) to verify
its correctness (more below).

>
> Signed-off-by: James Morse <james.morse@arm.com>
> ---
> Changes since v1:
> * This patch was rewritten
> ---
> arch/x86/kernel/cpu/resctrl/internal.h | 4 ++--
> arch/x86/kernel/cpu/resctrl/monitor.c | 24 +++++++++++++++---------
> 2 files changed, 17 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
> index 1b07e49564cf..0a5721e1cc07 100644
> --- a/arch/x86/kernel/cpu/resctrl/internal.h
> +++ b/arch/x86/kernel/cpu/resctrl/internal.h
> @@ -289,7 +289,7 @@ struct rftype {
> * struct mbm_state - status for each MBM counter in each domain
> * @chunks: Total data moved (multiply by rdt_group.mon_scale to get bytes)
> * @prev_msr: Value of IA32_QM_CTR for this RMID last time we read it
> - * @prev_bw_msr:Value of previous IA32_QM_CTR for bandwidth counting
> + * @prev_bw_chunks: Previous chunks value read for bandwidth calculation
> * @prev_bw: The most recent bandwidth in MBps
> * @delta_bw: Difference between the current and previous bandwidth
> * @delta_comp: Indicates whether to compute the delta_bw
> @@ -297,7 +297,7 @@ struct rftype {
> struct mbm_state {
> u64 chunks;
> u64 prev_msr;
> - u64 prev_bw_msr;
> + u64 prev_bw_chunks;
> u32 prev_bw;
> u32 delta_bw;
> bool delta_comp;
> diff --git a/arch/x86/kernel/cpu/resctrl/monitor.c b/arch/x86/kernel/cpu/resctrl/monitor.c
> index 6c8226987dd6..a1232462db14 100644
> --- a/arch/x86/kernel/cpu/resctrl/monitor.c
> +++ b/arch/x86/kernel/cpu/resctrl/monitor.c
> @@ -315,7 +315,7 @@ static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
>
> if (rr->first) {
> memset(m, 0, sizeof(struct mbm_state));
> - m->prev_bw_msr = m->prev_msr = tval;
> + m->prev_msr = tval;
> return 0;
> }
>
> @@ -329,27 +329,32 @@ static u64 __mon_event_count(u32 rmid, struct rmid_read *rr)
> }
>
> /*
> + * mbm_bw_count() - Update bw count from values previously read by
> + * __mon_event_count().
> + * @rmid: The rmid used to identify the cached mbm_state.
> + * @rr: The struct rmid_read populated by __mon_event_count().
> + *
> * Supporting function to calculate the memory bandwidth
> - * and delta bandwidth in MBps.
> + * and delta bandwidth in MBps. The chunks value previously read by
> + * __mon_event_count() is compared with the chunks value from the previous
> + * invocation. This must be called once per second to maintain values in MBps.
> */
> static void mbm_bw_count(u32 rmid, struct rmid_read *rr)
> {
> struct rdt_hw_resource *hw_res = resctrl_to_arch_res(rr->r);
> struct mbm_state *m = &rr->d->mbm_local[rmid];
> - u64 tval, cur_bw, chunks;
> + u64 cur_bw, chunks, cur_chunks;
>
> - tval = __rmid_read(rmid, rr->evtid);
> - if (tval & (RMID_VAL_ERROR | RMID_VAL_UNAVAIL))
> - return;
> + cur_chunks = rr->val;
> + chunks = cur_chunks - m->prev_bw_chunks;
> + m->prev_bw_chunks = cur_chunks;
>
> - chunks = mbm_overflow_count(m->prev_bw_msr, tval, hw_res->mbm_width);
> - cur_bw = (get_corrected_mbm_count(rmid, chunks) * hw_res->mon_scale) >> 20;
> + cur_bw = (chunks * hw_res->mon_scale) >> 20;

I find this quite confusing. What if a new m->prev_chunks is introduced
instead and set in __mon_event_count() to the value of chunks? Then here
in mbm_bw_count() it could just be retrieved (chunks = m->prev_chunks).
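
Something along these lines is what I have in mind (an untested sketch
only; the prev_chunks member and the exact placement of the snippets are
illustrative, not taken from the posted patch):

struct mbm_state {
	u64	chunks;
	u64	prev_msr;
	u64	prev_chunks;	/* new: chunks delta from the most recent read */
	u32	prev_bw;
	u32	delta_bw;
	bool	delta_comp;
};

/* In __mon_event_count(), once the overflow-corrected delta is known: */
	chunks = mbm_overflow_count(m->prev_msr, tval, hw_res->mbm_width);
	m->chunks += chunks;
	m->prev_chunks = chunks;	/* remember this read's delta */
	m->prev_msr = tval;

/* mbm_bw_count() could then consume the stored delta directly: */
static void mbm_bw_count(u32 rmid, struct rmid_read *rr)
{
	struct rdt_hw_resource *hw_res = resctrl_to_arch_res(rr->r);
	struct mbm_state *m = &rr->d->mbm_local[rmid];
	u64 chunks, cur_bw;

	chunks = m->prev_chunks;
	cur_bw = (get_corrected_mbm_count(rmid, chunks) * hw_res->mon_scale) >> 20;

	if (m->delta_comp)
		m->delta_bw = abs(cur_bw - m->prev_bw);
	m->delta_comp = false;
	m->prev_bw = cur_bw;
}

With that, mbm_bw_count() would not need its own prev_bw_chunks
bookkeeping at all; the delta would be computed in one place.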

>
> if (m->delta_comp)
> m->delta_bw = abs(cur_bw - m->prev_bw);
> m->delta_comp = false;
> m->prev_bw = cur_bw;
> - m->prev_bw_msr = tval;
> }
>
> /*
> @@ -509,6 +514,7 @@ static void mbm_update(struct rdt_resource *r, struct rdt_domain *d, int rmid)
> rr.first = false;
> rr.r = r;
> rr.d = d;
> + rr.val = 0;
>
> /*
> * This is protected from concurrent reads from user
>

Reinette
