Date: Mon, 4 Jul 2022 19:45:12 +0200
Subject: Re: [PATCH v7 07/10] tpm, tpm_tis: Implement usage counter for locality
From: Lino Sanfilippo <>
On 01.07.22 01:29, Jarkko Sakkinen wrote:
> I'm kind of thinking that should tpm_tis_data have a lock for its
> contents?
Most of the tpm_tis_data structure elements are set once during init and then never changed but only read, so no locking is needed for these. The exceptions I see are:
- flags
- locality_count
- locality
whereby "flags" is accessed by atomic bit-manipulating functions and thus does not need extra locking. "locality_count" is protected by the locality_count_mutex. "locality" is only set in check_locality(), which is called from tpm_tis_request_locality_locked() while the locality_count_mutex is held. So check_locality() is also protected by the locality_count_mutex (which for this reason should probably rather be called locality_mutex, since it protects both the "locality_count" and the "locality" variable).
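For reference, the counting scheme then looks roughly like this (a simplified sketch, not the literal patch; __tpm_tis_request_locality()/__tpm_tis_relinquish_locality() here just stand for the code that actually claims/releases the locality, returning 0 on success):

   static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
   {
           struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
           int ret = 0;

           mutex_lock(&priv->locality_count_mutex);
           /* only the first user actually claims the locality */
           if (priv->locality_count == 0)
                   ret = __tpm_tis_request_locality(chip, l);
           /* but every user increments the counter */
           if (!ret)
                   priv->locality_count++;
           mutex_unlock(&priv->locality_count_mutex);

           return ret;
   }

   static void tpm_tis_relinquish_locality(struct tpm_chip *chip, int l)
   {
           struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);

           mutex_lock(&priv->locality_count_mutex);
           if (priv->locality_count)
                   priv->locality_count--;
           /* the locality is only released once the last user is done */
           if (priv->locality_count == 0)
                   __tpm_tis_relinquish_locality(chip, l);
           mutex_unlock(&priv->locality_count_mutex);
   }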
There is one other place check_locality() is called from, namely the interrupt handler. This is also the only place in which "locality" could be assigned a value other than 0 (i.e. the default). In this case there is no lock, so this could indeed be racy.
The solutions I see for this are:

1. Remove the entire loop that checks for the current locality, i.e. this code:
   if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
           for (i = 0; i < 5; i++)
                   if (check_locality(chip, i))
                           break;
So we prevent "locality" from being changed to something that is not the default.
2. Grab the locality_count_mutex and protect "locality":
   if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT) {
           mutex_lock(&priv->locality_count_mutex);
           for (i = 0; i < 5; i++)
                   if (check_locality(chip, i))
                           break;
           mutex_unlock(&priv->locality_count_mutex);
   }
I don't see a reason why we should store which locality is the active one, since the only thing that would ever change it from 0 (i.e. the default, which we use) to something else is some external instance.
So I would vote for option 1.
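Just to illustrate, option 1 would amount to a removal along these lines in tis_int_handler() (sketch, not a formal patch):

   -        if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
   -                for (i = 0; i < 5; i++)
   -                        if (check_locality(chip, i))
   -                                break;

With that gone the interrupt handler never writes "locality", so it stays at the default and needs no locking there.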
> I kind of doubt that we would ever need more than one lock for it,
> and it would give some more ensurance to not be race, especially
> when re-enabling interrupts this feels important to be "extra safe".
>
> I looked at this commit, and did not see anything that would prevent
> using a spin lock instead of mutex. With a spin lock priv can be
> accessed also in the interrupt context.
>
> So instead prepend this patch with a patch that adds:
>
> struct spin_lock lock;
>
> And something like:
>
> static inline struct tpm_tis_data *tpm_tis_priv_get(struct tpm_chip *chip)
> {
>         struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
>
>         spin_lock(&priv->lock);
>         return priv;
> }
>
> static inline void tpm_tis_priv_put(struct tpm_tis_data *priv)
> {
>         spin_unlock(&priv->lock);
> }
>
> And change the sites where priv is used to acquire the instance with this.
In this patch we need the mutex to protect the locality counter. We have to hold the mutex while we do a register access that requires a locality (to make sure that the locality is not released by another thread shortly before we do the access).
We cannot do the register access while holding a spinlock, since for SPI the (SPI) bus lock mutex is taken, which requires a sleepable context. That is not given while holding a spinlock, so I think we unfortunately have no choice here.
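To make that concrete, with a spin lock the access pattern would be something like (illustrative fragment, the "lock" member is from your suggestion above):

   spin_lock(&priv->lock);         /* atomic context from here on */
   rc = tpm_tis_read32(priv, TPM_STS(priv->locality), &status);
   /*
    * For tpm_tis_spi this read ends up in spi_bus_lock()/spi_sync_locked(),
    * which take the SPI bus lock mutex and may sleep -> "BUG: sleeping
    * function called from invalid context".
    */
   spin_unlock(&priv->lock);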
Regards, Lino