As noticed by Ulrich Obergfell <uobergfe@redhat.com>, the mmu
counters are for bean-counting purposes only, so updates to
n_used_mmu_pages and n_max_mmu_pages may be relaxed (for example,
before commit f0f5933a1626c8df7b), resulting in
n_used_mmu_pages > n_max_mmu_pages.

Make the code robust against n_used_mmu_pages > n_max_mmu_pages by
having kvm_mmu_available_pages() return 0 in that case.
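
The failure mode is unsigned wraparound: the counters are unsigned,
so the old subtraction yields a huge value whenever n_used_mmu_pages
exceeds n_max_mmu_pages, and callers then see a near-UINT_MAX amount
of "available" pages. A minimal user-space sketch of the hazard (the
unsigned int counter width matches struct kvm_arch; the helper names
below are illustrative only, not kernel code):

  #include <stdio.h>

  /* Old behaviour: unsigned subtraction wraps when used > max. */
  static unsigned int available_old(unsigned int max, unsigned int used)
  {
          return max - used;
  }

  /* New behaviour: clamp to zero once the counters have drifted. */
  static unsigned int available_new(unsigned int max, unsigned int used)
  {
          if (max > used)
                  return max - used;
          return 0;
  }

  int main(void)
  {
          unsigned int max = 512, used = 515;     /* relaxed counters */

          /* prints 4294967293 with a 32-bit unsigned int */
          printf("old: %u\n", available_old(max, used));
          /* prints 0 */
          printf("new: %u\n", available_new(max, used));
          return 0;
  }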
Reviewed-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
 
 static inline unsigned int kvm_mmu_available_pages(struct kvm *kvm)
 {
-       return kvm->arch.n_max_mmu_pages -
-               kvm->arch.n_used_mmu_pages;
+       if (kvm->arch.n_max_mmu_pages > kvm->arch.n_used_mmu_pages)
+               return kvm->arch.n_max_mmu_pages -
+                       kvm->arch.n_used_mmu_pages;
+
+       return 0;
 }
 
 static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
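
For context, kvm_mmu_free_some_pages() above compares the returned
headroom against a low-water mark before triggering page reclaim. A
hedged sketch of its body, assuming the KVM_MIN_FREE_MMU_PAGES
threshold and the __kvm_mmu_free_some_pages() helper from the KVM
tree of this era:

  static inline void kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
  {
          /*
           * With the old subtraction, a wrapped "available" value was
           * never below the threshold, so reclaim never fired; the
           * clamp to 0 guarantees it runs once headroom is exhausted.
           */
          if (unlikely(kvm_mmu_available_pages(vcpu->kvm) <
                       KVM_MIN_FREE_MMU_PAGES))
                  __kvm_mmu_free_some_pages(vcpu);
  }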