Subject: Re: [Intel IOMMU 00/10] Intel IOMMU support, take #2
On Tue, Jun 26, 2007 at 09:12:45AM +0200, Andi Kleen wrote:

> There are some potential performance benefits too:
> - When you have a device that cannot address the complete address range
> an IOMMU can remap its memory instead of bounce buffering. Remapping
> is likely cheaper than copying.

But those devices aren't likely to be found on modern systems.
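
For anyone following along: the driver-visible side is the same whichever
way the platform handles it. A minimal sketch, with made-up names ("dev",
"buf", "len"), not code from this patchset:

#include <linux/dma-mapping.h>

static int start_io(struct device *dev, void *buf, size_t len)
{
        dma_addr_t handle;

        /*
         * Whether this is satisfied by an IOMMU remap or by copying the
         * buffer through swiotlb is invisible here; the claimed win is
         * that the remap avoids the copy.
         */
        handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(handle))
                return -ENOMEM;

        /* ... program "handle" into the device and start the transfer ... */

        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
        return 0;
}

The only question is what happens behind dma_map_single(): an IOMMU
page-table update, or a bounce-buffer copy.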

> - The IOMMU can merge sg lists into a single virtual block. This could
> potentially speed up SG IO when the device is slow walking SG lists.
> [I long ago benchmarked 5% on some block benchmark with an old
> MPT Fusion; but it probably depends a lot on the HBA]

But most devices are SG-capable.
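
For the record, that merging is visible to a driver only through the
return value of dma_map_sg(). A rough sketch with made-up names ("dev",
"sglist", "nents"), not code from this patchset:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

static int map_and_send(struct device *dev, struct scatterlist *sglist,
                        int nents)
{
        int i, count;

        count = dma_map_sg(dev, sglist, nents, DMA_FROM_DEVICE);
        if (count == 0)
                return -ENOMEM;

        /*
         * With an IOMMU merging entries, "count" can come back much
         * smaller than "nents" (ideally 1), so a device that is slow
         * walking SG lists has fewer segments to chew on.
         */
        for (i = 0; i < count; i++) {
                dma_addr_t addr = sg_dma_address(&sglist[i]);
                unsigned int len = sg_dma_len(&sglist[i]);

                /* program (addr, len) into the HBA's own SG table here */
        }

        /* ... later, once the I/O has completed ... */
        dma_unmap_sg(dev, sglist, nents, DMA_FROM_DEVICE);
        return 0;
}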

> And you get better driver debugging because unexpected memory
> accesses from the devices will cause a trappable event.

That and direct access for KVM are the big ones, IMHO, and they
definitely justify merging.
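
The debugging win is concrete: the classic DMA lifetime bug, sketched
below with made-up names, silently corrupts memory today but turns into a
visible hardware fault behind an IOMMU.

#include <linux/dma-mapping.h>

static void buggy_teardown(struct device *dev, dma_addr_t handle, size_t len)
{
        /*
         * BUG (deliberate, to illustrate the point): the device was never
         * quiesced, so it may still DMA to "handle".  Without an IOMMU the
         * stray access scribbles over whatever now owns that memory; with
         * one, the access has no valid translation and the hardware
         * reports a fault you can actually see and trap on.
         */
        dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
}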

> > Does it slow anything down?
>
> It adds more overhead to each IO so yes.

How much? We have numbers (to be presented at OLS later this week)
showing that on bare metal an IOMMU can cost as much as 15%-30% extra
CPU utilization for an IO-intensive workload (netperf). It will be
interesting to see comparable numbers for VT-d.

Cheers,
Muli
