Subject: Re: [PATCH v2 1/1] block: blk-merge: don't merge the pages with non-contiguous descriptors
From: James Bottomley
Date: Thu, 17 Jan 2013
On Thu, 2013-01-17 at 10:47 +0000, Russell King - ARM Linux wrote:
> On Thu, Jan 17, 2013 at 10:37:42AM +0000, Russell King - ARM Linux wrote:
> > On Thu, Jan 17, 2013 at 09:11:20AM +0000, James Bottomley wrote:
> > > I'd actually prefer page = pfn_to_page(page_to_pfn(page) + 1); because
> > > it makes the code look like the hack it is. The preferred form for all
> > > iterators like this should be to iterate over the pfn instead of a
> > > pointer into the page arrays, because that will always work correctly no
> > > matter how many weird and wonderful memory schemes we come up with.
> >
> > So, why don't we update the code to do that then?

We can, but it involves quite a rewrite of the ARM dma-mapping code
to use pfn instead of page. It looks like it would make the code
cleaner, because there are a lot of page_to_pfn transformations in
there. However, the current patch is the simplest one for stable,
and I don't actually have an ARM build and test environment.
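
To make the distinction concrete, here is a minimal sketch (not from the
thread) of the two iteration styles being discussed; walk_pages_by_pointer(),
walk_pages_by_pfn() and the do_something() callback are made-up names, while
page_to_pfn()/pfn_to_page() are the real kernel helpers:

#include <linux/mm.h>	/* struct page, page_to_pfn(), pfn_to_page() */

static void do_something(struct page *page);	/* hypothetical callback */

/* Fragile: assumes the struct page array is flat and contiguous. */
static void walk_pages_by_pointer(struct page *page, unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++, page++)	/* may cross a sparsemem section gap */
		do_something(page);
}

/* Robust: keep the iterator as a pfn and convert back on every step. */
static void walk_pages_by_pfn(struct page *page, unsigned int nr_pages)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned int i;

	for (i = 0; i < nr_pages; i++)
		do_something(pfn_to_page(pfn + i));
}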

> Also, couldn't the addition of the scatterlist offset to the page also
> be buggy too?

No, fortunately: from the point of view of block-generated sg lists,
the offset must be within the first page. As long as nothing within
the ARM code violates this, it should be a safe assumption ...
although the code seems to assume otherwise.

James
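
To picture the point about block-generated sg lists: a merged, physically
contiguous segment is described by its first page, a length that may span
several pages, and an offset into that first page, so the offset is expected
to stay below PAGE_SIZE. A rough sketch with made-up values (sg_set_page()
is the real scatterlist helper; describe_merged_segment() is hypothetical):

#include <linux/scatterlist.h>	/* struct scatterlist, sg_set_page() */

/* Hypothetical example: one sg entry covering a 3-page contiguous segment. */
static void describe_merged_segment(struct scatterlist *sg,
				    struct page *first_page)
{
	unsigned int offset = 512;			/* into the first page, < PAGE_SIZE */
	unsigned int len = 3 * PAGE_SIZE - offset;	/* runs into the third page */

	sg_set_page(sg, first_page, len, offset);	/* offset is relative to first_page */
}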

> So, what about this patch which addresses both additions by keeping our
> iterator as a pfn as you suggest. It also simplifies some of the code
> in the loop too.
>
> Can the original folk with the problem test this patch?
>
> arch/arm/mm/dma-mapping.c | 18 ++++++++++--------
> 1 files changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
> index 6b2fb87..076c26d 100644
> --- a/arch/arm/mm/dma-mapping.c
> +++ b/arch/arm/mm/dma-mapping.c
> @@ -774,25 +774,27 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
>  	size_t size, enum dma_data_direction dir,
>  	void (*op)(const void *, size_t, int))
>  {
> +	unsigned long pfn;
> +	size_t left = size;
> +
> +	pfn = page_to_pfn(page) + offset / PAGE_SIZE;
> +	offset %= PAGE_SIZE;
> +
>  	/*
>  	 * A single sg entry may refer to multiple physically contiguous
>  	 * pages. But we still need to process highmem pages individually.
>  	 * If highmem is not configured then the bulk of this loop gets
>  	 * optimized out.
>  	 */
> -	size_t left = size;
>  	do {
>  		size_t len = left;
>  		void *vaddr;
>
> +		page = pfn_to_page(pfn);
> +
>  		if (PageHighMem(page)) {
> -			if (len + offset > PAGE_SIZE) {
> -				if (offset >= PAGE_SIZE) {
> -					page += offset / PAGE_SIZE;
> -					offset %= PAGE_SIZE;
> -				}
> +			if (len + offset > PAGE_SIZE)
>  				len = PAGE_SIZE - offset;
> -			}
>  			vaddr = kmap_high_get(page);
>  			if (vaddr) {
>  				vaddr += offset;
> @@ -809,7 +811,7 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
>  			op(vaddr, len, dir);
>  		}
>  		offset = 0;
> -		page++;
> +		pfn++;
>  		left -= len;
>  	} while (left);
>  }

Looks reasonable, modulo all the simplification we could do if we can
assume offset < PAGE_SIZE.

James
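
For what it's worth, a rough sketch of the kind of simplification being
alluded to, assuming offset < PAGE_SIZE can always be relied upon so the
pfn/offset normalisation at the top of dma_cache_maint_page() can go; the
highmem kmap handling from the real loop is elided, and this is only an
illustration, not a replacement for the patch above:

#include <linux/mm.h>		/* pfn_to_page(), page_address(), PAGE_SIZE */
#include <linux/bug.h>		/* BUG_ON() */
#include <linux/dma-mapping.h>	/* enum dma_data_direction */

static void dma_cache_maint_page_simplified(struct page *page,
	unsigned long offset, size_t size, enum dma_data_direction dir,
	void (*op)(const void *, size_t, int))
{
	unsigned long pfn = page_to_pfn(page);	/* no offset / PAGE_SIZE prologue */
	size_t left = size;

	BUG_ON(offset >= PAGE_SIZE);		/* document the assumption instead */

	do {
		size_t len = left;

		if (len + offset > PAGE_SIZE)
			len = PAGE_SIZE - offset;

		/* highmem kmap handling from the original loop elided ... */
		op(page_address(pfn_to_page(pfn)) + offset, len, dir);

		offset = 0;
		pfn++;
		left -= len;
	} while (left);
}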