Commit 4ae3cb3a authored by Lan Tianyu, committed by Paolo Bonzini

KVM: Replace smp_mb() with smp_load_acquire() in the kvm_flush_remote_tlbs()

smp_load_acquire() is enough here and it's cheaper than smp_mb().
Add a comment noting that we reuse the memory barrier of
kvm_make_all_cpus_request() here to order modifications to the page
tables against the read of vcpu->mode.
Signed-off-by: Lan Tianyu <>
Signed-off-by: Paolo Bonzini <>
parent 7bfdf217
@@ -191,9 +191,23 @@ bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req)
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
-	long dirty_count = kvm->tlbs_dirty;
+	/*
+	 * Read tlbs_dirty before setting KVM_REQ_TLB_FLUSH in
+	 * kvm_make_all_cpus_request.
+	 *
+	 * We want to publish modifications to the page tables before reading
+	 * mode. Pairs with a memory barrier in arch-specific code.
+	 * - x86: smp_mb__after_srcu_read_unlock in vcpu_enter_guest
+	 * and smp_mb in walk_shadow_page_lockless_begin/end.
+	 * - powerpc: smp_mb in kvmppc_prepare_to_enter.
+	 *
+	 * There is already an smp_mb__after_atomic() before
+	 * kvm_make_all_cpus_request() reads vcpu->mode. We reuse that
+	 * barrier here.
+	 */
+	long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
 
-	smp_mb();
 	if (kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);