From: Jun Miao <jun.miao@intel.com>
Subject: [PATCH] KVM: Use two spaces after a period in comments
Date: Mon, 10 Oct 2022
In kernel comments, the convention is to put two spaces after a
sentence-ending period.  Bring the comments in kvm_main.c in line with
that convention.  The change is confined to comments and does not
affect any code.
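
For anyone who wants to apply the same rule mechanically, below is a
rough sketch in Python (an illustration only: the regex and the
stdin/stdout filtering are assumptions, and this patch was not
necessarily generated this way):

#!/usr/bin/env python3
# Sketch: widen a single space after a sentence-ending period to two
# spaces, but only on C block-comment continuation lines (" * ...").
import re
import sys

def widen_sentence_gaps(line: str) -> str:
    if line.lstrip().startswith("*"):
        # ". X" -> ".  X"; lines that already use two spaces are left
        # alone, because the pattern requires exactly one space before
        # the capital letter.
        return re.sub(r"\. (?=[A-Z])", ".  ", line)
    return line

if __name__ == "__main__":
    for line in sys.stdin:
        sys.stdout.write(widen_sentence_gaps(line))

Fed kvm_main.c on stdin, the output diffed against the original should
show the same kind of one-space-to-two-space changes as this patch.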

Signed-off-by: Jun Miao <jun.miao@intel.com>
---
virt/kvm/kvm_main.c | 44 ++++++++++++++++++++++----------------------
1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e30f1b4ecfa5..c81b973a3b02 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -285,10 +285,10 @@ static void kvm_make_vcpu_request(struct kvm_vcpu *vcpu, unsigned int req,
* after kvm_request_needs_ipi(), which could result in sending an IPI
* to the previous pCPU. But, that's OK because the purpose of the IPI
* is to ensure the vCPU returns to OUTSIDE_GUEST_MODE, which is
- * satisfied if the vCPU migrates. Entering READING_SHADOW_PAGE_TABLES
+ * satisfied if the vCPU migrates.  Entering READING_SHADOW_PAGE_TABLES
* after this point is also OK, as the requirement is only that KVM wait
* for vCPUs that were reading SPTEs _before_ any changes were
- * finalized. See kvm_vcpu_kick() for more details on handling requests.
+ * finalized.  See kvm_vcpu_kick() for more details on handling requests.
*/
if (kvm_request_needs_ipi(vcpu, req)) {
cpu = READ_ONCE(vcpu->cpu);
@@ -362,13 +362,13 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)

/*
* We want to publish modifications to the page tables before reading
- * mode. Pairs with a memory barrier in arch-specific code.
+ * mode.  Pairs with a memory barrier in arch-specific code.
* - x86: smp_mb__after_srcu_read_unlock in vcpu_enter_guest
* and smp_mb in walk_shadow_page_lockless_begin/end.
* - powerpc: smp_mb in kvmppc_prepare_to_enter.
*
* There is already an smp_mb__after_atomic() before
- * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
+ * kvm_make_all_cpus_request() reads vcpu->mode.  We reuse that
* barrier here.
*/
if (!kvm_arch_flush_remote_tlb(kvm)
@@ -730,8 +730,8 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start,
} else {
/*
* Fully tracking multiple concurrent ranges has diminishing
- * returns. Keep things simple and just find the minimal range
- * which includes the current and new ranges. As there won't be
+ * returns.  Keep things simple and just find the minimal range
+ * which includes the current and new ranges.  As there won't be
* enough information to subtract a range after its invalidate
* completes, any ranges invalidated concurrently will
* accumulate and persist until all outstanding invalidates
@@ -863,13 +863,13 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
* Even though we do not flush TLB, this will still adversely
* affect performance on pre-Haswell Intel EPT, where there is
* no EPT Access Bit to clear so that we have to tear down EPT
- * tables instead. If we find this unacceptable, we can always
+ * tables instead.  If we find this unacceptable, we can always
* add a parameter to kvm_age_hva so that it effectively doesn't
* do anything on clear_young.
*
* Also note that currently we never issue secondary TLB flushes
* from clear_young, leaving this job up to the regular system
- * cadence. If we find this inaccurate, we might come up with a
+ * cadence.  If we find this inaccurate, we might come up with a
* more sophisticated heuristic later.
*/
return kvm_handle_hva_range_no_flush(mn, start, end, kvm_age_gfn);
@@ -1513,7 +1513,7 @@ static void kvm_replace_memslot(struct kvm *kvm,
/*
* If the memslot gfn is unchanged, rb_replace_node() can be used to
* switch the node in the gfn tree instead of removing the old and
- * inserting the new as two separate operations. Replacement is a
+ * inserting the new as two separate operations.  Replacement is a
* single O(1) operation versus two O(log(n)) operations for
* remove+insert.
*/
@@ -1568,7 +1568,7 @@ static void kvm_swap_active_memslots(struct kvm *kvm, int as_id)
spin_unlock(&kvm->mn_invalidate_lock);

/*
- * Acquired in kvm_set_memslot. Must be released before synchronize
+ * Acquired in kvm_set_memslot.  Must be released before synchronize
* SRCU below in order to avoid deadlock with another thread
* acquiring the slots_arch_lock in an srcu critical section.
*/
@@ -1730,7 +1730,7 @@ static void kvm_invalidate_memslot(struct kvm *kvm,

/*
* Activate the slot that is now marked INVALID, but don't propagate
- * the slot to the now inactive slots. The slot is either going to be
+ * the slot to the now inactive slots.  The slot is either going to be
* deleted or recreated as a new slot.
*/
kvm_swap_active_memslots(kvm, old->as_id);
@@ -1796,7 +1796,7 @@ static void kvm_update_flags_memslot(struct kvm *kvm,
{
/*
* Similar to the MOVE case, but the slot doesn't need to be zapped as
- * an intermediate step. Instead, the old memslot is simply replaced
+ * an intermediate step.  Instead, the old memslot is simply replaced
* with a new, updated copy in both memslot sets.
*/
kvm_replace_memslot(kvm, old, new);
@@ -2192,13 +2192,13 @@ static int kvm_get_dirty_log_protect(struct kvm *kvm, struct kvm_dirty_log *log)
* @kvm: kvm instance
* @log: slot id and address to which we copy the log
*
- * Steps 1-4 below provide general overview of dirty page logging. See
+ * Steps 1-4 below provide general overview of dirty page logging.  See
* kvm_get_dirty_log_protect() function description for additional details.
*
* We call kvm_get_dirty_log_protect() to handle steps 1-3, upon return we
* always flush the TLB (step 4) even if previous step failed and the dirty
- * bitmap may be corrupt. Regardless of previous outcome the KVM logging API
- * does not preclude user space subsequent dirty log read. Flushing TLB ensures
+ * bitmap may be corrupt.  Regardless of previous outcome the KVM logging API
+ * does not preclude user space subsequent dirty log read.  Flushing TLB ensures
* writes will be marked dirty for next log read.
*
* 1. Take a snapshot of the bit and clear it if needed.
@@ -2341,7 +2341,7 @@ struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn
return slot;

/*
- * Fall back to searching all memslots. We purposely use
+ * Fall back to searching all memslots.  We purposely use
* search_memslots() instead of __gfn_to_memslot() to avoid
* thrashing the VM-wide last_used_slot in kvm_memslots.
*/
@@ -2622,7 +2622,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
* struct pages, but be allocated without refcounting e.g.,
* tail pages of non-compound higher order allocations, which
* would then underflow the refcount when the caller does the
- * required put_page. Don't allow those pages here.
+ * required put_page.  Don't allow those pages here.
*/
if (!kvm_try_get_pfn(pfn))
r = -EFAULT;
@@ -3641,16 +3641,16 @@ EXPORT_SYMBOL_GPL(kvm_vcpu_yield_to);
*
* (b) VCPU which has done pl-exit/ cpu relax intercepted but did not get
* chance last time (mostly it has become eligible now since we have probably
- * yielded to lockholder in last iteration. This is done by toggling
+ * yielded to lockholder in last iteration.  This is done by toggling
* @dy_eligible each time a VCPU checked for eligibility.)
*
* Yielding to a recently pl-exited/cpu relax intercepted VCPU before yielding
* to preempted lock-holder could result in wrong VCPU selection and CPU
- * burning. Giving priority for a potential lock-holder increases lock
+ * burning.  Giving priority for a potential lock-holder increases lock
* progress.
*
* Since algorithm is based on heuristics, accessing another VCPU data without
- * locking does not harm. It may result in trying to yield to same VCPU, fail
+ * locking does not harm.  It may result in trying to yield to same VCPU, fail
* and continue with next VCPU and so on.
*/
static bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
@@ -6010,9 +6010,9 @@ static int kvm_vm_worker_thread(void *context)
* execution.
*
* kthread_stop() waits on the 'exited' completion condition which is
- * set in exit_mm(), via mm_release(), in do_exit(). However, the
+ * set in exit_mm(), via mm_release(), in do_exit().  However, the
* kthread is removed from the cgroup in the cgroup_exit() which is
- * called after the exit_mm(). This causes the kthread_stop() to return
+ * called after the exit_mm().  This causes the kthread_stop() to return
* before the kthread actually quits the cgroup.
*/
rcu_read_lock();
--
2.32.0