Subject: RE: [PATCH] cpufreq: Add Kryo CPU scaling driver


> -----Original Message-----
> From: Sudeep Holla <sudeep.holla@arm.com>
> Sent: Monday, May 21, 2018 16:05
> To: ilialin@codeaurora.org; mturquette@baylibre.com; sboyd@kernel.org;
> robh@kernel.org; mark.rutland@arm.com; viresh.kumar@linaro.org;
> nm@ti.com; lgirdwood@gmail.com; broonie@kernel.org;
> andy.gross@linaro.org; david.brown@linaro.org; catalin.marinas@arm.com;
> will.deacon@arm.com; rjw@rjwysocki.net; linux-clk@vger.kernel.org
> Cc: Sudeep Holla <sudeep.holla@arm.com>; devicetree@vger.kernel.org;
> linux-kernel@vger.kernel.org; linux-pm@vger.kernel.org; linux-arm-
> msm@vger.kernel.org; linux-soc@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; rnayak@codeaurora.org;
> amit.kucheria@linaro.org; nicolas.dechesne@linaro.org;
> celster@codeaurora.org; tfinkel@codeaurora.org
> Subject: Re: [PATCH] cpufreq: Add Kryo CPU scaling driver
>
>
>
> On 21/05/18 13:57, ilialin@codeaurora.org wrote:
> >
> [...]
>
> >>> +#include <linux/cpu.h>
> >>> +#include <linux/err.h>
> >>> +#include <linux/init.h>
> >>> +#include <linux/kernel.h>
> >>> +#include <linux/module.h>
> >>> +#include <linux/nvmem-consumer.h>
> >>> +#include <linux/of.h>
> >>> +#include <linux/platform_device.h>
> >>> +#include <linux/pm_opp.h>
> >>> +#include <linux/slab.h>
> >>> +#include <linux/soc/qcom/smem.h>
> >>> +
> >>> +#define MSM_ID_SMEM 137
> >>> +#define SILVER_LEAD 0
> >>> +#define GOLD_LEAD 2
> >>> +
> >>
> >> So I gather from other emails that these are physical CPU numbers (not
> >> even a unique identifier like MPIDR). Will this work on parts or
> >> platforms that need to boot on the GOLD_LEAD CPUs?
> >
> > The driver is for the Kryo CPU, which (like, AFAIK, all multicore MSMs)
> > always boots on CPU0.
>
>
> That may be true and I am not that bothered about it. But assuming physical
> ordering from the logical CPU number is *incorrect* and will break if the
> kernel decides to change the allocation algorithm. The kernel provides no
> guarantee of that, so you need to depend on some physical ID, or maybe the
> DT, to achieve what you want. But the current code as it stands is wrong.

Got your point. In fact, the CPUs are numbered 0-3 and grouped into two clusters in the DT:

cpus {
    #address-cells = <2>;
    #size-cells = <0>;

    CPU0: cpu@0 {
        ...
        reg = <0x0 0x0>;
        ...
    };

    CPU1: cpu@1 {
        ...
        reg = <0x0 0x1>;
        ...
    };

    CPU2: cpu@100 {
        ...
        reg = <0x0 0x100>;
        ...
    };

    CPU3: cpu@101 {
        ...
        reg = <0x0 0x101>;
        ...
    };

    cpu-map {
        cluster0 {
            core0 {
                cpu = <&CPU0>;
            };

            core1 {
                cpu = <&CPU1>;
            };
        };

        cluster1 {
            core0 {
                cpu = <&CPU2>;
            };

            core1 {
                cpu = <&CPU3>;
            };
        };
    };
};
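If going by a physical ID instead, the reg values above are the MPIDR affinity fields, and my understanding is that on arm64 cpu_logical_map() returns the MPIDR bound to a logical CPU, so the cluster could be derived from affinity level 1 rather than from the logical number. A rough, untested sketch (the *_CLUSTER names are mine, taken from the reg values above, not from the patch):

#include <linux/smp.h>
#include <asm/cputype.h>

/* Illustrative names; Aff1 values taken from the DT reg fields above. */
#define SILVER_CLUSTER 0 /* cpu@0, cpu@1     -> Aff1 == 0 */
#define GOLD_CLUSTER   1 /* cpu@100, cpu@101 -> Aff1 == 1 */

static bool kryo_cpu_is_gold(unsigned int cpu)
{
    /* MPIDR the kernel bound to this logical CPU at boot */
    u64 mpidr = cpu_logical_map(cpu);

    return MPIDR_AFFINITY_LEVEL(mpidr, 1) == GOLD_CLUSTER;
}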

As far as I understand, they are probed in the same order. However, to be certain that the physical CPU is the one I intend to configure, I have to fetch the device node pointer for the cpu-map -> clusterX -> core0 -> cpu path. Could you suggest a kernel API to do that?
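Something along these lines is what I had in mind, assuming of_cpu_node_to_id() is the right way to map the cpu node back to a logical CPU number (untested sketch, error handling abbreviated):

#include <linux/errno.h>
#include <linux/of.h>

/* Illustrative helper: return the logical CPU behind clusterX/core0. */
static int kryo_cluster_lead_cpu(const char *cluster) /* e.g. "cluster1" */
{
    struct device_node *map, *clus, *core, *cpu_np;
    int cpu = -ENODEV;

    map = of_find_node_by_path("/cpus/cpu-map");
    if (!map)
        return -ENODEV;

    clus = of_get_child_by_name(map, cluster);
    core = clus ? of_get_child_by_name(clus, "core0") : NULL;
    cpu_np = core ? of_parse_phandle(core, "cpu", 0) : NULL;
    if (cpu_np)
        cpu = of_cpu_node_to_id(cpu_np); /* logical CPU number */

    of_node_put(cpu_np);
    of_node_put(core);
    of_node_put(clus);
    of_node_put(map);

    return cpu;
}

The returned number could then replace the hard-coded SILVER_LEAD/GOLD_LEAD values.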



>
> --
> Regards,
> Sudeep
