Date: Mon, 23 Jan 2006
From: Jesse Brandeburg <jesse.brandeburg@intel.com>
Subject: RE: My vote against eepro* removal
On Mon, 23 Jan 2006, kus Kusche Klaus wrote:
> From: John Ronciak
> > Can we try a couple of things? 1) just comment out all the check-for-link
> > code in the e100 driver and give that a try, and 2) just comment out
> > the update-stats call and see if that works.  These seem to be the
> > differences and we need to know which one is causing the problem.
>
> First of all, I am still unable to get any traces of this in the
> latency tracer. Moreover, as I said before, removing parts of the
> watchdog usually made my eth0 nonfunctional (which is bad - this
> is an embedded system with ssh access).
>
> Hence, I explicitly instrumented the watchdog function with the TSC.
> Output of the timings is done by a background thread, so the
> timings should not increase the runtime of the watchdog.
>
> Here are my results:
>
> If the watchdog doesn't get interrupted, preempted, or whatever,
> it spends 340 us in its body:
> * 303 us in the mii code
> *  36 us in the following code up to e100_adjust_adaptive_ifs
> *   1 us in the remaining code (I think my chip doesn't need any
> of those chip-specific fixups)
>
> The 303 us in the mii code break down as follows:
> * 101 us in mii_ethtool_gset
> * 135 us in the whole if
> *  67 us in mii_check_link
>
> This is with the udelay(2) instead of udelay(20) hack applied.
> With udelay(20), the mii times are 128 + 170 + 85 us,
> i.e. 383 us instead of 303 us, or >= 420 us for the whole watchdog.
>
> As the RTC runs at 8192 Hz during my tests (one interrupt roughly every
> 122 us), the watchdog is hit by 2-3 interrupts, which adds another
> 75 - 110 us to its total execution time, i.e. the time it blocks other
> rtprio 1 threads.

Thank you very much for that detailed analysis! Okay, so the calls into mii.c
take too long, but those depend on mdio_read in e100 to do the work, so
this patch attempts to minimize that latency.
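
For context, the dependency is roughly the following (a simplified sketch,
not verbatim e100.c): every MII register access from mii.c lands in the
driver's mdio_read callback, which polls the MDI ready bit in mdio_ctrl()
with udelay(), and mii_ethtool_gset()/mii_check_link() each issue several
such reads back to back.

/* Simplified sketch of the call path (not verbatim driver source). */
static int mdio_read(struct net_device *netdev, int addr, int reg)
{
	struct nic *nic = netdev_priv(netdev);

	/* polls the MDI ready bit with udelay() while holding mdio_lock */
	return mdio_ctrl(nic, addr, mdi_read, reg, 0);
}

/*
 * mii_ethtool_gset() and mii_check_link() in drivers/net/mii.c call this
 * several times in a row (BMCR, BMSR, advertisement registers, ...), so
 * the per-read polling delay multiplies into the 67-135us figures
 * measured above.
 */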

This patch is against linus-2.6.git; I compile-tested it and verified that
ssh/ping still work. Would you be willing to send your instrumentation
patches? They would make it easier for me to test any fixes.
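
For reference, TSC instrumentation along the lines you describe might look
roughly like this (a sketch with made-up names; only get_cycles() and
cycles_t from <linux/timex.h> are real kernel facilities):

#include <linux/timex.h>	/* get_cycles(), cycles_t */

static cycles_t e100_wd_cycles;	/* picked up and printed by a separate thread */

static void timed_watchdog(void (*watchdog_body)(unsigned long), unsigned long arg)
{
	cycles_t t0 = get_cycles();

	watchdog_body(arg);	/* the real e100 watchdog work */

	/* store only the delta; a printk() here would inflate the measurement */
	e100_wd_cycles = get_cycles() - t0;
}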

e100: attempt a shorter delay for mdio reads

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>

Reorder the write/read sequence for mdio reads to minimize latency, and use
a shorter delay interval in each polling loop.
---

drivers/net/e100.c | 12 +++++++-----
1 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/net/e100.c b/drivers/net/e100.c
--- a/drivers/net/e100.c
+++ b/drivers/net/e100.c
@@ -891,23 +891,25 @@ static u16 mdio_ctrl(struct nic *nic, u3
 	 * procedure it should be done under lock.
 	 */
 	spin_lock_irqsave(&nic->mdio_lock, flags);
-	for (i = 100; i; --i) {
+	for (i = 1000; i; --i) {
 		if (readl(&nic->csr->mdi_ctrl) & mdi_ready)
 			break;
-		udelay(20);
+		udelay(2);
 	}
 	if (unlikely(!i)) {
-		printk("e100.mdio_ctrl(%s) won't go Ready\n",
+		DPRINTK(PROBE, ERR, "e100.mdio_ctrl(%s) won't go Ready\n",
 			nic->netdev->name );
 		spin_unlock_irqrestore(&nic->mdio_lock, flags);
 		return 0;		/* No way to indicate timeout error */
 	}
 	writel((reg << 16) | (addr << 21) | dir | data, &nic->csr->mdi_ctrl);
 
-	for (i = 0; i < 100; i++) {
-		udelay(20);
+	/* to avoid latency, read to flush the write, then delay, and only
+	 * delay 2us per loop, manual says read should complete in < 64us */
+	for (i = 0; i < 1000; i++) {
 		if ((data_out = readl(&nic->csr->mdi_ctrl)) & mdi_ready)
 			break;
+		udelay(2);
 	}
 	spin_unlock_irqrestore(&nic->mdio_lock, flags);
 	DPRINTK(HW, DEBUG,
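
For what it's worth, the reordered loop is the usual poll-after-posted-write
pattern: the readl() immediately after writel() flushes the posted MMIO write
and often already sees the ready bit, so the 2us delay is only paid when the
operation really isn't done yet. A generic sketch of the pattern (the helper
name is made up; the patch applies the same idea inside mdio_ctrl()):

#include <linux/types.h>
#include <linux/delay.h>	/* udelay() */
#include <asm/io.h>		/* readl(), writel() */

/* write a command, then poll for a ready bit with a short, bounded delay */
static u32 write_then_poll_ready(void __iomem *reg, u32 cmd, u32 ready_bit)
{
	u32 val = 0;
	int i;

	writel(cmd, reg);		/* MMIO writes are posted, may still be in flight */

	for (i = 0; i < 1000; i++) {
		val = readl(reg);	/* the first read also flushes the posted write */
		if (val & ready_bit)
			break;		/* fast devices finish with no delay at all */
		udelay(2);		/* short delay keeps the worst case bounded */
	}

	return val;
}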