Subject: Re: [patch] Latency Tracer, voluntary-preempt-2.6.8-rc4-O6

Ingo Molnar wrote:
>...
>
>>With binary search you would need to backward search to find the stem
>>for the stem compression. It's probably doable, but would be a bit
>>ugly I guess.
>
>
> yeah. Maybe someone will find the time to improve the algorithm. But
> it's not a highprio thing.

Well, I found some time and decided to give it a go :)

I first built a small test program that could provide the same symbol
data to the lookup function, so that I could test using a user space app.

This way I could benchmark both the original algorithm and any
improvement I could make, and do it comfortably from user space.
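Roughly, the harness boils down to something like the sketch below. This
is just an illustration of the idea, not the actual test program: the
symbol arrays are assumed to be generated from a kernel build and linked
in together with kallsyms.c (with _stext/_etext and friends faked up as
needed), and NUM_LOOKUPS is a made-up constant.

/*
 * Minimal sketch of the user-space benchmark (illustrative only).
 * kallsyms_addresses[], kallsyms_num_syms and kallsyms_lookup() come
 * from kallsyms.c plus a generated symbol table linked into the test.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

extern unsigned long kallsyms_addresses[];
extern unsigned long kallsyms_num_syms;
extern const char *kallsyms_lookup(unsigned long addr,
                                   unsigned long *symbolsize,
                                   unsigned long *offset,
                                   char **modname, char *namebuf);

#define NUM_LOOKUPS 1000

int main(void)
{
        char namebuf[256];
        unsigned long size, offset;
        char *modname;
        struct timeval start, end;
        long usecs;
        int i;

        srand(1);
        gettimeofday(&start, NULL);
        for (i = 0; i < NUM_LOOKUPS; i++) {
                /* pick an address just past a random symbol's start */
                unsigned long addr =
                        kallsyms_addresses[rand() % kallsyms_num_syms] + 1;

                kallsyms_lookup(addr, &size, &offset, &modname, namebuf);
        }
        gettimeofday(&end, NULL);

        usecs = (end.tv_sec - start.tv_sec) * 1000000L +
                (end.tv_usec - start.tv_usec);
        printf("%.2f us per lookup\n", (double)usecs / NUM_LOOKUPS);
        return 0;
}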

The original algorithm took, on average, 1340us per lookup on my P4
2.8GHz. The compile settings for the test are not the same as in the
kernel, so these numbers can only be compared against other results from
the same setup.

With the attached patch it takes 14us per lookup. This is almost a 100x
improvement.

The largest portion of the lookup time was spent decompressing the
symbol names. It seemed a waste of time to keep strcpy'ing lots of names
over the result buffer when most of them would not contribute to the
final name.
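
To make this concrete: in the compressed table each symbol is stored as
one byte saying how many leading characters it shares with the previous
name, followed by the remaining characters and a terminating NUL (this
is exactly what the existing "Grab name" loop decodes). In helper form,
where decompress_one() is just a made-up name for illustration:

#include <string.h>

/*
 * Decode one entry of the compressed name table: a shared-prefix length
 * byte, then the rest of the name, NUL-terminated.  Returns a pointer
 * to the next entry.  (Illustrative helper, not kernel code.)
 */
static char *decompress_one(char *entry, char *buf, int buflen)
{
        unsigned prefix = *entry++;     /* chars kept from previous name */

        strncpy(buf + prefix, entry, buflen - prefix);
        return entry + strlen(entry) + 1;
}

Whenever the prefix byte is zero the whole buffer is overwritten, so
only the entries after the last 0-prefix stem can influence the final
name, and that is what the new lookup code exploits.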

With the unnecessary strcpy's removed, the speed-up was around 5x, but
even then the sequential walk to find the symbol name was still slow.

The final algorithm pre-computes a few markers into the compressed
symbol table, so that the time of the sequential part of the lookup is
roughly divided by the number of markers.
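
To put a rough number on it: with KALLSYMS_STEM_MARKS at 8 and, say,
somewhere around 15000 symbols in the table (the exact count depends on
the .config), the markers split the table into 9 roughly equal chunks,
so the walk that rebuilds the name touches at most about 15000 / 9, i.e.
on the order of 1700 entries, instead of potentially the whole table.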

There are still a few issues with this approach. The biggest one is that
this is clearly a speed/space trade-off, and maybe we don't want to
spend the space on a code path that is not supposed to be "hot". If that
is the case, I can make a smaller patch that only fixes the name
"decompression" strcpy's.

As always, any comments will be greatly appreciated.


Just one side note: gcc warns about two variables (last_0prefix and
last_0idx) that might be used uninitialized. I know they are not, and it
didn't seem a good idea to add extra code just to silence gcc. What is
the standard way to convince gcc that those variables are ok?

--
Paulo Marques - www.grupopie.com
"In a world without walls and fences who needs windows and gates?"
--- kernel/kallsyms.c.old 2004-08-14 00:28:58.000000000 +0100
+++ kernel/kallsyms.c 2004-08-14 05:10:09.873194752 +0100
@@ -22,6 +22,13 @@ extern char kallsyms_names[] __attribute
/* Defined by the linker script. */
extern char _stext[], _etext[], _sinittext[], _einittext[];

+/* auxiliary markers to speed up symbol lookup */
+#define KALLSYMS_STEM_MARKS 8
+
+static int kallsyms_stem_mark_idx[KALLSYMS_STEM_MARKS];
+static char *kallsyms_stem_mark[KALLSYMS_STEM_MARKS];
+
+
static inline int is_kernel_inittext(unsigned long addr)
{
if (addr >= (unsigned long)_sinittext
@@ -56,13 +63,42 @@ unsigned long kallsyms_lookup_name(const
return module_kallsyms_lookup_name(name);
}

+/* build markers into the compressed symbol table, so that lookups can be faster */
+static void build_stem_marks(void)
+{
+ char *name = kallsyms_names;
+ int i, mark_cnt;
+
+ unsigned prefix;
+
+ mark_cnt = 0;
+ for (i = 0; i < kallsyms_num_syms; i++) {
+ prefix = *name;
+ if (prefix == 0) {
+ /* if this is the first 0-prefix stem in the desired interval */
+ if (i > (mark_cnt + 1) * (kallsyms_num_syms / (KALLSYMS_STEM_MARKS + 1)) &&
+ kallsyms_stem_mark_idx[mark_cnt] == 0) {
+ kallsyms_stem_mark[mark_cnt] = name;
+ kallsyms_stem_mark_idx[mark_cnt] = i;
+ mark_cnt++;
+ if (mark_cnt >= KALLSYMS_STEM_MARKS) break;
+ }
+ }
+ do {
+ name++;
+ } while (*name);
+ name++;
+ }
+}
/* Lookup an address. modname is set to NULL if it's in the kernel. */
const char *kallsyms_lookup(unsigned long addr,
unsigned long *symbolsize,
unsigned long *offset,
char **modname, char *namebuf)
{
- unsigned long i, best = 0;
+ unsigned long i, last_0idx;
+ unsigned long mark, low, high, mid;
+ char *last_0prefix;

/* This kernel should never had been booted. */
BUG_ON(!kallsyms_addresses);
@@ -72,39 +108,67 @@ const char *kallsyms_lookup(unsigned lon

if (is_kernel_text(addr) || is_kernel_inittext(addr)) {
unsigned long symbol_end;
- char *name = kallsyms_names;
+ char *name;

- /* They're sorted, we could be clever here, but who cares? */
- for (i = 0; i < kallsyms_num_syms; i++) {
- if (kallsyms_addresses[i] > kallsyms_addresses[best] &&
- kallsyms_addresses[i] <= addr)
- best = i;
+ /* do a binary search on the sorted kallsyms_addresses array */
+ low = 0;
+ high = kallsyms_num_syms;
+ while (high - low > 1) {
+ mid = (low + high) / 2;
+ if (kallsyms_addresses[mid] <= addr) low = mid;
+ else high = mid;
}

/* Grab name */
- for (i = 0; i <= best; i++) {
- unsigned prefix = *name++;
- strncpy(namebuf + prefix, name, KSYM_NAME_LEN - prefix);
- name += strlen(name) + 1;
- }
+ i = 0;
+ name = kallsyms_names;

- /* At worst, symbol ends at end of section. */
- if (is_kernel_inittext(addr))
- symbol_end = (unsigned long)_einittext;
- else
- symbol_end = (unsigned long)_etext;
+ if (kallsyms_stem_mark_idx[0] == 0)
+ build_stem_marks();
+
+ for (mark = 0; mark < KALLSYMS_STEM_MARKS; mark++) {
+ if (low >= kallsyms_stem_mark_idx[mark]) {
+ i = kallsyms_stem_mark_idx[mark];
+ name = kallsyms_stem_mark[mark];
+ }
+ else break;
+ }

- /* Search for next non-aliased symbol */
- for (i = best+1; i < kallsyms_num_syms; i++) {
- if (kallsyms_addresses[i] > kallsyms_addresses[best]) {
- symbol_end = kallsyms_addresses[i];
- break;
+ /* find the last stem before the actual symbol that has a 0 prefix */
+ unsigned prefix;
+ for (; i <= low; i++) {
+ prefix = *name;
+ if (prefix == 0) {
+ last_0prefix = name;
+ last_0idx = i;
}
+ do {
+ name++;
+ } while (*name);
+ name++;
}

- *symbolsize = symbol_end - kallsyms_addresses[best];
+ /* build the name from there */
+ name = last_0prefix;
+ for (i = last_0idx; i <= low; i++) {
+ prefix = *name++;
+ strncpy(namebuf + prefix, name, KSYM_NAME_LEN - prefix);
+ name += strlen(name) + 1;
+ }
+
+ if (low == kallsyms_num_syms - 1) {
+ /* At worst, symbol ends at end of section. */
+ if (is_kernel_inittext(addr))
+ symbol_end = (unsigned long)_einittext;
+ else
+ symbol_end = (unsigned long)_etext;
+ }
+ else
+ symbol_end = kallsyms_addresses[low + 1];
+
+ *symbolsize = symbol_end - kallsyms_addresses[low];
*modname = NULL;
- *offset = addr - kallsyms_addresses[best];
+ *offset = addr - kallsyms_addresses[low];
return namebuf;
}