From: Michal Nazarewicz <>
Subject: Re: [PATCH] mm/cma: fix cma bitmap aligned mask computing
Date: Fri, 10 Oct 2014 16:18:54 +0200
On Fri, Oct 10 2014, Weijie Yang wrote:
> The current cma bitmap aligned mask compute way is incorrect, it could
> cause an unexpected align when using cma_alloc() if wanted align order
> is bigger than cma->order_per_bit.
>
> Take kvm for example (PAGE_SHIFT = 12), kvm_cma->order_per_bit is set to 6,
> when kvm_alloc_rma() tries to alloc kvm_rma_pages, it will input 15 as
> expected align value, after using current computing, however, we get 0 as
> cma bitmap aligned mask other than 511.
>
> This patch fixes the cma bitmap aligned mask compute way.
>
> Signed-off-by: Weijie Yang <weijie.yang@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Should that also get:
Cc: <stable@vger.kernel.org> # v3.17
> ---
>  mm/cma.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index c17751c..f6207ef 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)
>
>  static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
>  {
> -	return (1UL << (align_order >> cma->order_per_bit)) - 1;
> +	if (align_order <= cma->order_per_bit)
> +		return 0;
> +	else
> +		return (1UL << (align_order - cma->order_per_bit)) - 1;
>  }
>
>  static unsigned long cma_bitmap_maxno(struct cma *cma)
> --
> 1.7.10.4
>
>
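For reference, a small userspace sketch (my own illustration, not part of the patch) that replays the arithmetic from the commit message with order_per_bit = 6 and align_order = 15; the helper names are made up for this example:

/* Standalone demonstration of the old vs. fixed mask computation. */
#include <stdio.h>

/* Old computation: shifts align_order right instead of subtracting. */
static unsigned long mask_old(int align_order, int order_per_bit)
{
	return (1UL << (align_order >> order_per_bit)) - 1;
}

/* Fixed computation, as in the patch above. */
static unsigned long mask_new(int align_order, int order_per_bit)
{
	if (align_order <= order_per_bit)
		return 0;
	return (1UL << (align_order - order_per_bit)) - 1;
}

int main(void)
{
	int align_order = 15, order_per_bit = 6;

	/* 15 >> 6 == 0, so the old mask collapses to 0 ... */
	printf("old mask: %lu\n", mask_old(align_order, order_per_bit));
	/* ... while 1UL << (15 - 6) gives the expected 511. */
	printf("new mask: %lu\n", mask_new(align_order, order_per_bit));
	return 0;
}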
--
Best regards,                                       _     _
.o. | Liege of Serenely Enlightened Majesty of    o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--