Subject: Fwd: [PATCH v4.15.7 1/1] on Intel, VDSO should handle CLOCK_MONOTONIC_RAW and export 'tsc_calibration' pointer
On 06/03/2018, Thomas Gleixner <tglx@linutronix.de> wrote:
> Jason,
>
> On Mon, 5 Mar 2018, Jason Vas Dias wrote:
>
> thanks for providing this. A few formal nits first.
>
> Please read Documentation/process/submitting-patches.rst
>
> Patches need a concise subject line and the subject line wants a prefix, in
> this case 'x86/vdso'.
>
> Please don't put anything past the patch. Your delimiters are human
> readable, but cannot be handled by tools.
>
> Also please follow the kernel coding style guidelines.
>
>> It also provides a new function in the VDSO :
>>
>> struct linux_timestamp_conversion
>> { u32 mult;
>> u32 shift;
>> };
>> extern
>> const struct linux_timestamp_conversion *
>> __vdso_linux_tsc_calibration(void);
>>
>> which can be used by user-space rdtsc / rdtscp issuers
>> by using code such as in
>> tools/testing/selftests/vDSO/parse_vdso.c
>> to call vdso_sym("LINUX_2.6", "__vdso_linux_tsc_calibration"),
>> which returns a pointer to the function in the VDSO, which
>> returns the address of the 'mult' field in the vsyscall_gtod_data.
>
> No, that's just wrong. The VDSO data is solely there for the VDSO accessor
> functions and not to be exposed to random user space.
>
>> Thus user-space programs can use rdtscp and interpret its return values
>> in exactly the same way the kernel would, but without entering the
>> kernel.
>
> The VDSO clock_gettime() functions are providing exactly this mechanism.
>
>> As pointed out in Bug # 198961 :
>> https://bugzilla.kernel.org/show_bug.cgi?id=198961
>> which contains extra test programs and the full story behind this
>> change,
>> using CLOCK_MONOTONIC_RAW without the patch results in
>> a minimum measurable time (latency) of @ 300 - 700ns because of
>> the syscall used by vdso_fallback_gtod() .
>>
>> With the patch, the latency falls to @ 100ns .
>>
>> The latency would be @ 16 - 32 ns if the do_monotonic_raw()
>> handler could record its previous TSC value and seconds return value
>> somewhere, but since the VDSO has no data region or writable page,
>> of course it cannot .
>
> And even if it could, it's not as simple as you want it to be. Clocksources
> can change during runtime and without effective protection the values are
> just garbage.
>
>> Hence, to enable effective use of TSC by user space programs, Linux must
>> provide a way for them to discover the calibration mult and shift values
>> the kernel uses for the clock source ; only by doing so can user-space
>> get values that are comparable to kernel generated values.
>
> Linux must not do anything. It can provide a vdso implementation of
> CLOCK_MONOTONIC_RAW, which does not enter the kernel, but not exposure to
> data which is not reliably accessible by random user space code.
>
>> And I'd really like to know: why does the gtod->mult value change ?
>> After TSC calibration, it and the shift are calculated to render the
>> best approximation of a nanoseconds value from the TSC value.
>>
>> The TSC is MEANT to be monotonic and to continue in sleep states
>> on modern Intel CPUs . So why does the gtod->mult change ?
>
> You are missing the fact that gtod->mult/shift are used for CLOCK_MONOTONIC
> and CLOCK_REALTIME, which are adjusted by NTP/PTP to provide network
> synchronized time. That means CLOCK_MONOTONIC is providing accurate
> and slope compensated nanoseconds.
>
> The raw TSC conversion, even if it is sane hardware, provides just some
> approximation of nanoseconds which can be off by quite a margin.
>
>> But the mult value does change. Currently there is no way for user-space
>> programs to discover that such a change has occurred, or when . With this
>> very tiny simple patch, they could know instantly when such changes
>> occur, and could implement TSC readers that perform the full conversion
>> with latencies of 15-30ns (on my CPU).
>
> No. Accessing the mult/shift pair without protection is racy and can lead
> to completely erratic results.
>
>> +notrace static int __always_inline do_monotonic_raw( struct timespec
>> *ts)
>> +{
>> + volatile u32 tsc_lo=0, tsc_hi=0, tsc_cpu=0; // so same instrs
>> generated for 64-bit as for 32-bit builds
>> + u64 ns;
>> + register u64 tsc=0;
>> + if (gtod->vclock_mode == VCLOCK_TSC)
>> + { asm volatile
>> + ( "rdtscp"
>> + : "=a" (tsc_lo)
>> + , "=d" (tsc_hi)
>> + , "=c" (tsc_cpu)
>> + ); // : eax, edx, ecx used - NOT rax, rdx, rcx
>
> If you look at the existing VDSO time getters then you'll notice that
> they use accessor functions and not open coded asm constructs.
>
>> + tsc = ((((u64)tsc_hi) & 0xffffffffUL) << 32) | (((u64)tsc_lo)
>> & 0xffffffffUL);
>> + tsc *= gtod->mult;
>> + tsc >>= gtod->shift;
>> + ts->tv_sec = __iter_div_u64_rem(tsc, NSEC_PER_SEC, &ns);
>> + ts->tv_nsec = ns;
>
> This is horrible. Please look at the kernel side implementation of
> clock_gettime(CLOCK_MONOTONIC_RAW). The VDSO side can be implemented in the
> same way. All what is required is to expose the relevant information in the
> existing vsyscall_gtod_data data structure.
>
>> +struct linux_timestamp_conversion
>> +{ u32 mult;
>> + u32 shift;
>> +};
>
> This wont happen because unprotected access is bad. But even if it would go
> anywhere defining the structure which should be accessible by user space
> code in the C-file is just wrong. If at all it needs to be defined in a
> user space exposed header.
>
>> +extern
>> + const struct linux_timestamp_conversion *
>> + __vdso_linux_tsc_calibration(void);
>> +
>> +notrace
>> + const struct linux_timestamp_conversion *
>> + __vdso_linux_tsc_calibration(void)
>> +{ if( gtod->vclock_mode == VCLOCK_TSC )
>> + return ((struct linux_timestamp_conversion*) &gtod->mult);
>> + return 0UL;
>
> This is the most horrible coding style I saw in a long time. The kernel has
> a very well defined coding style.....
>
>> That multiplication and shift really doesn't leave very many
>> significant seconds bits!
>
> It's the wrong way to do that.
>
>> Please, can the VDSO include some similar functionality to NOT always
>> enter the kernel for CLOCK_MONOTONIC_RAW ,
>
> Yes, this can be done if properly implemented.
>
>> and to export a pointer to the LIVE (kernel updated) gtod->mult and
>> gtod->shift values somehow .
>
> No. That's not going to happen.
>
>> The documentation states for CLOCK_MONOTONIC_RAW that it is the
>> same as CLOCK_MONOTONIC except it is NOT subject to NTP adjustments .
>> This is very far from the case currently, without a patch like the one
>> above.
>
> Errm. The syscall based implementation provides exactly that, so your claim
> is just wrong.
>
>> And the kernel should not restrict user-space programs to only being able
>> to either measure an NTP adjusted time value, or a time value difference
>> of greater than 1000ns with any accuracy, on a modern Intel CPU whose TSC
>> ticks 2.8 times per nanosecond (picosecond resolution is theoretically
>> possible).
>
> The kernel does not restrict anything. It provides a fast CLOCK_MONOTONIC
> time getter in the VDSO and as nobody so far asked for a fast
> CLOCK_MONOTONIC_RAW time getter, it just has never been implemented. It can
> be done, but it has to be done by following the rules of the VDSO and not
> by randomly exposing data.
>
> Thanks,
>
> tglx
>


Thank you very much for your guidance, Thomas.

I will prepare a new patch that meets the submission and coding-style
guidelines and does not expose pointers within the vsyscall_gtod_data
region to user-space code - though I don't really understand why not,
since only the gtod->mult value will change as long as the clocksource
remains TSC, and updates to it by the kernel are atomic, so partial
values cannot be read.
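
(Looking further at the existing vDSO getters, I see they guard mult/shift
reads with a sequence counter, so that a reader cannot observe a mixed pair
from mid-update. A minimal sketch of that pattern, using the accessor names
from arch/x86/include/asm/vgtod.h - details may differ between kernel
versions:

        unsigned int seq;
        u32 mult, shift;

        do {
                seq = gtod_read_begin(gtod);    /* snapshot the seqcount */
                mult = gtod->mult;              /* copy the pair together */
                shift = gtod->shift;
        } while (unlikely(gtod_read_retry(gtod, seq))); /* retry if an update raced */

A user-space reader holding a bare pointer to gtod->mult has no access to
this sequence counter, so presumably this is the torn-read concern.)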

The code in the patch reverts to the old behavior for clocks which are not
the TSC and provides a way for users to determine whether the clock is still
the TSC by calling '__vdso_linux_tsc_calibration()', which returns NULL if
the clock is not the TSC.

I have never seen Linux on a modern Intel box spontaneously decide to
switch away from the TSC clocksource after calibration succeeds and
it has chosen the TSC as the system / platform clock source -
what would make it do this?

But for the highly controlled systems I am doing performance testing on,
I can guarantee that the clocksource does not change.

There is no way user code can write through those pointers or do anything
other than read them, so I see no harm in exposing them to user-space;
user-space programs could then issue rdtscp and use the same calibration
values as the kernel, and use some cached 'previous timespec value' to
avoid doing the long division every time.

If the shift & mult are not accurate TSC calibration values, then the
kernel should put other, more accurate calibration values in the gtod.

RE:
> Please look at the kernel side implementation of
> clock_gettime(CLOCK_MONOTONIC_RAW).
> The VDSO side can be implemented in the
> same way.
> All what is required is to expose the relevant information in the
> existing vsyscall_gtod_data data structure.

I agree - that is my point entirely, and what I was trying to do.

The code in the kernel that handles the clock_gettime
CLOCK_MONOTONIC_RAW syscall, when the clock source is TSC,
does in fact use the mult & shift values in the same way:

AFAICS, the call chain for clock_gettime(CLOCK_MONOTONIC_RAW, &ts) is:

kernel/time/posix-timers.c, @ line 233:

static int posix_get_monotonic_raw(clockid_t which_clock, struct timespec64 *tp)
{
        getrawmonotonic64(tp);
        return 0;
}

...
@ line 1058:

SYSCALL_DEFINE2(clock_gettime, const clockid_t, which_clock,
                struct timespec __user *, tp)
{
        const struct k_clock *kc = clockid_to_kclock(which_clock);
        ...
        error = kc->clock_get(which_clock, &kernel_tp);
        if (!error && put_timespec64(&kernel_tp, tp))
        ...
}
...
static const struct k_clock clock_monotonic_raw = {
        .clock_getres = posix_get_hrtimer_res,
        .clock_get    = posix_get_monotonic_raw,
};
...

getrawmonotonic64() is in kernel/time/timekeeping.c, @ line 1418:

void getrawmonotonic64(struct timespec64 *ts)
{
        struct timekeeper *tk = &tk_core.timekeeper;
        unsigned long seq;
        u64 nsecs;

        do {
                seq = read_seqcount_begin(&tk_core.seq);
                /* ^-- I think this is the source of the locking
                 *     and the very long latencies !
                 */
                ts->tv_sec = tk->raw_sec;
                nsecs = timekeeping_get_ns(&tk->tkr_raw);

        } while (read_seqcount_retry(&tk_core.seq, seq));

        ts->tv_nsec = 0;
        timespec64_add_ns(ts, nsecs);
}

this uses, from an earlier part of timekeeping.c, @ line 346:

static inline u64 timekeeping_delta_to_ns(struct tk_read_base *tkr, u64 delta)
{
        u64 nsec;

        nsec = delta * tkr->mult + tkr->xtime_nsec;
        nsec >>= tkr->shift;

        /* If arch requires, add in get_arch_timeoffset() */
        return nsec + arch_gettimeoffset();
}

static inline u64 timekeeping_get_delta(struct tk_read_base *tkr)
{
        u64 cycle_now, delta;

        /* read clocksource */
        cycle_now = tk_clock_read(tkr);

        /* calculate the delta since the last update_wall_time */
        delta = clocksource_delta(cycle_now, tkr->cycle_last, tkr->mask);

        return delta;
}

static inline u64 timekeeping_get_ns(struct tk_read_base *tkr)
{
        u64 delta;

        delta = timekeeping_get_delta(tkr);
        return timekeeping_delta_to_ns(tkr, delta);
}


So the kernel is basically doing, in timekeeping_delta_to_ns:

        nsec_delta = delta * gtod->mult;
        nsec_delta >>= gtod->shift;
        add_nsec_delta_to_previous_value(previous_value, nsec_delta);



So in fact, when the clock source is TSC, the value recorded in 'ts'
by clock_gettime(CLOCK_MONOTONIC_RAW, &ts) is very similar to:

        u64 tsc = rdtscp();
        tsc *= gtod->mult;
        tsc >>= gtod->shift;
        ts.tv_sec  = tsc / NSEC_PER_SEC;
        ts.tv_nsec = tsc % NSEC_PER_SEC;

which is the algorithm I was using in the VDSO fast TSC reader,
do_monotonic_raw().
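
Actually, one difference I do see: the kernel converts only the bounded
delta since the last update, never the absolute TSC value, so
(delta * mult) cannot overflow the 64-bit intermediate - which may be why
my multiplication "doesn't leave very many significant seconds bits".
A delta-based do_monotonic_raw() might look like this, assuming
(hypothetically - these fields do not exist today) that the raw mult/shift
pair and an accumulated raw base were exposed in vsyscall_gtod_data:

notrace static int do_monotonic_raw(struct timespec *ts)
{
        u64 cycles, delta, ns;
        unsigned int seq;

        do {
                seq = gtod_read_begin(gtod);
                cycles = vread_tsc();           /* existing TSC accessor */
                delta = (cycles - gtod->cycle_last) & gtod->mask;
                /* raw_mult/raw_shift and the raw base are hypothetical fields */
                ns = (delta * gtod->raw_mult) >> gtod->raw_shift;
                ns += gtod->raw_base_nsec;
                ts->tv_sec = gtod->raw_base_sec;
        } while (unlikely(gtod_read_retry(gtod, seq)));

        ts->tv_sec += __iter_div_u64_rem(ns, NSEC_PER_SEC, &ns);
        ts->tv_nsec = ns;
        return 0;
}

But that again needs those fields exposed, which is the crux.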

Please, suggest a better algorithm using available data in the
gtod, and I'll use it.

The problem with doing anything more in the VDSO is that there is of
course nowhere in the VDSO to store any data, as it has no data section
or writable pages. So some kind of writable page would need to be added
to the vdso, complicating its vdso/vma.c, etc., which is not desirable.

But it is equally undesirable for the clock_gettime system call to have
a latency exceeding 300ns for CLOCK_MONOTONIC_RAW, or for its values to
incur all the adjustments and previous-value dependencies that the
current implementation is prone to.

Perhaps I should instead use the calibrated TSC frequency and just do
a multiply / divide?

i.e. the calibrated TSC frequency is 2893.300 MHz, so each TSC tick
has a period of 1/2,893,300,000 seconds, or 1/2.8933 nanoseconds,
which is about 0.34562 ns (345.62 picoseconds).

So, if that number was communicated somehow to the VDSO, it could do:

do_monotonic_raw() { ...
        u64 tsc = rdtscp();
        /* gtod->pico_period: period of the clock in picoseconds */
        tsc = (tsc * gtod->pico_period) / 1000;
        ts.tv_sec  = tsc / NSEC_PER_SEC;
        ts.tv_nsec = tsc % NSEC_PER_SEC;
}

But there is no other communication of TSC calibration data currently in
the gtod other than 'shift' and 'mult', which do actually give results that
correlate approximately with the pico_period division above.
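
In fact, 'mult' and 'shift' already encode exactly this period, just in
fixed-point form: ns = (cycles * mult) >> shift, with mult chosen as
roughly (NSEC_PER_SEC << shift) / freq_hz - this is what
clocks_calc_mult_shift() in kernel/time/clocksource.c computes, modulo
overflow constraints. A small user-space illustration, using the frequency
above and an assumed shift of 24:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t freq_hz = 2893300000ULL;       /* assumed calibration */
        uint32_t shift   = 24;
        uint32_t mult    = (uint32_t)((1000000000ULL << shift) / freq_hz);
        uint64_t cycles  = 28933;               /* ~10us worth of TSC ticks */

        printf("mult=%u -> %llu ns\n", mult,
               (unsigned long long)((cycles * mult) >> shift));
        return 0;
}

which prints a value within a few ns of 10000, so a separate pico_period
field would be redundant.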

Attempts to change the vsyscall_gtod_data structure in any way have resulted
in nasty core dumps of processes using clock_gettime().
I'd much appreciate any tips on how to change its layout without breaking
everything.

One currently needs to parse the system log messages to get the
'Refined TSC clocksource calibration', or look up the value of the
'tsc_khz' symbol in the live kernel: find its address in System.map
with gdb / objdump, convert that to an offset into the data region,
and use lseek() on /proc/kcore (as root).

The point is, user-space must rely on the kernel to perform TSC
calibration properly, and the only data in the gtod that reflects the
result of that calibration is the mult and shift pair.
If Linux wants to support user-space TSC readers, it needs to communicate
the TSC calibration somehow in the VDSO gtod area.
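
Following your pointer, I imagine the write side of such an exposure would
land in update_vsyscall() (arch/x86/entry/vsyscall/vsyscall_gtod.c), inside
the existing seqcount write section; a hypothetical sketch, where the raw_*
fields are assumed additions to vsyscall_gtod_data rather than existing
members:

void update_vsyscall(struct timekeeper *tk)
{
        struct vsyscall_gtod_data *vdata = &vsyscall_gtod_data;

        gtod_write_begin(vdata);
        /* ... existing cycle_last / mask / mult / shift updates ... */
        vdata->raw_mult      = tk->tkr_raw.mult;        /* hypothetical */
        vdata->raw_shift     = tk->tkr_raw.shift;       /* hypothetical */
        vdata->raw_base_sec  = tk->raw_sec;             /* hypothetical */
        vdata->raw_base_nsec = tk->tkr_raw.xtime_nsec >> tk->tkr_raw.shift;
        gtod_write_end(vdata);
}

That way readers would only ever see a consistent raw_* set, without any
raw pointers being handed out.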

I've been getting good low-latency time values using my current patch
under both Linux 4.15.7 and 3.10.0-693.17.1.el7.jvd - I've attached a log
of 100 consecutive calls and a printout of the sum and averages:

#define N_SAMPLES 100

        unsigned int cnt = N_SAMPLES, s = 0;
        do
                clock_gettime(CLOCK_MONOTONIC_RAW, &sample_ts[s++]);
        while (--cnt);
#define TS2NS(_TS_) ((((unsigned long long)(_TS_).tv_sec)*1000000000ULL) + ((unsigned long long)((_TS_).tv_nsec)))
        unsigned long long sum = 0;
        std::cerr << sample_ts[0].tv_sec << " " << sample_ts[0].tv_nsec << std::endl;
        for (s = 1; s < N_SAMPLES; s += 1)
        {       sum += TS2NS(sample_ts[s]) - TS2NS(sample_ts[s-1]);
                std::cerr << sample_ts[s].tv_sec << " " << sample_ts[s].tv_nsec << std::endl;
        }
        std::cerr << "Sum: " << sum << " avg: " << (sum/N_SAMPLES) << std::endl;


I enclose a log of the above program's output (tts.log).


The printouts of the timespec structure values show monotonically increasing
values with very similar deltas (about 17-21ns in this case), e.g.:
871 255831891
871 255831911
871 255831928
871 255831949
871 255831966
871 255831986
871 255832004
and end with a low latency average (minimum measurable time):
Sum: 1920 avg: 19

The deltas for clock_gettime(CLOCK_MONOTONIC_RAW) under a kernel
without the patch are @ 300-700ns:

$ ./timer
sum: 69234
Total time: 0.000069234S - Average Latency: 0.000000692S

The timer program source is also attached, as are the patches
for linux-4.15.7 and 3.10.0 I am testing with.

I've also attached the user-space TSC-reading
'clock_get_time_raw()' function (in test_vdso_tsc.c),
which illustrates user-space use of the TSC calibration
shift and mult in the gtod, and a way to avoid long division
every time, which achieves much lower latencies than
the VDSO do_monotonic_raw() function.


Sorry, but I urgently need to use the TSC from user-space to
obtain nanosecond-resolution unadjusted timer values,
with latencies of less than 30ns,
which are comparable to those returned by Linux, and
this appears to be the only way to do it.

Please, if there is a better way to read the TSC and convert
its value in the same way Linux does, using its calibration and
producing comparable results, but in user-space, let me know
and I'll use it.
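
One thing I can do today, staying inside the supported interface, is
resolve the vDSO's own clock_gettime entry point with the same parse_vdso.c
machinery used in the test program below; the '__vdso_clock_gettime' symbol
and the "LINUX_2.6" version do exist on x86_64, while the rest of this
sketch just follows that program's conventions:

#include <stdio.h>
#include <time.h>
#include <sys/auxv.h>
#include PATH_TO_PARSE_VDSO_C   /* vdso_init_from_sysinfo_ehdr, vdso_sym */

typedef int (*vdso_gettime_t)(clockid_t, struct timespec *);

int main(void)
{
        vdso_init_from_sysinfo_ehdr(getauxval(AT_SYSINFO_EHDR));
        vdso_gettime_t vgettime = (vdso_gettime_t)
                vdso_sym("LINUX_2.6", "__vdso_clock_gettime");
        if (!vgettime)
                return 1;
        struct timespec ts;
        /* CLOCK_MONOTONIC takes the vDSO fast path today;
         * CLOCK_MONOTONIC_RAW still falls back to the syscall
         * until a handler like the one proposed above is merged. */
        vgettime(CLOCK_MONOTONIC, &ts);
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
}

This avoids the libc wrapper overhead, but of course does not help
CLOCK_MONOTONIC_RAW until the vDSO handles it.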

And please consider SOME patch to the VDSO to implement
user-space TSC reading on Intel / AMD, to return a completely
unadjusted value for CLOCK_MONOTONIC_RAW, as
documented, but not as Linux currently does.

BTW, reading the Documentation/process/submitting-patches.rst file:
6) No MIME, no links, no compression, no attachments. Just plain text
...
For this reason, all patches should be submitted by e-mail "inline".
Be wary of your editor's word-wrap corrupting your patch,
if you choose to cut-n-paste your patch.
Do not attach the patch as a MIME attachment, compressed or not.

So I'm a bit confused as to how to attach patches.
How do you attach a patch without mime ?
Where is the inline patch submission format defined ?
BTW, it is not my 'editor's word-wrap corrupting your patch'; it is the
Gmail IMAP server, which believes all lines in plain-text messages
must be wrapped to 72 characters max, and enforces this belief,
correct or not. I think kernel.org needs to update its tools to parse
MIME attachments or handle email-server line-wrapping, which I have
no control over - I wish I could move off the Gmail server, but it
would be too much trouble.

Thanks & Best Regards,
Jason
High Resolution TimeStamps Enabled - Got TSC calibration @ 0xffffffffff5ff0a0: mult: 4945757 shift: 24
871 255831236
871 255831281
871 255831319
871 255831347
871 255831365
871 255831383
871 255831401
871 255831420
871 255831438
871 255831458
871 255831476
871 255831496
871 255831513
871 255831533
871 255831551
871 255831571
871 255831589
871 255831609
871 255831627
871 255831647
871 255831664
871 255831684
871 255831702
871 255831722
871 255831740
871 255831760
871 255831778
871 255831798
871 255831815
871 255831835
871 255831853
871 255831873
871 255831891
871 255831911
871 255831928
871 255831949
871 255831966
871 255831986
871 255832004
871 255832024
871 255832042
871 255832062
871 255832079
871 255832099
871 255832117
871 255832137
871 255832155
871 255832175
871 255832193
871 255832213
871 255832230
871 255832250
871 255832268
871 255832288
871 255832306
871 255832326
871 255832344
871 255832364
871 255832381
871 255832401
871 255832419
871 255832439
871 255832457
871 255832477
871 255832494
871 255832515
871 255832532
871 255832552
871 255832570
871 255832590
871 255832608
871 255832628
871 255832645
871 255832665
871 255832683
871 255832703
871 255832721
871 255832741
871 255832759
871 255832779
871 255832796
871 255832816
871 255832834
871 255832854
871 255832872
871 255832892
871 255832910
871 255832930
871 255832947
871 255832967
871 255832985
871 255833005
871 255833023
871 255833043
871 255833060
871 255833081
871 255833098
871 255833118
871 255833136
871 255833156
Sum: 1920 avg: 19

/*
 * Program to measure high-res timer latency.
 */
#include <stdint.h>
#include <stdbool.h>
#include <sys/types.h>
#include <unistd.h>
#include <time.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>

#ifndef N_SAMPLES
#define N_SAMPLES 100
#endif

int main(int argc, char *const *argv, char *const *envp)
{
        /* '-m' selects CLOCK_MONOTONIC; the default is CLOCK_MONOTONIC_RAW */
        clockid_t clk = CLOCK_MONOTONIC_RAW;
        if ((argc > 1) && (argv[1] != NULL) && (0 == strcmp(argv[1], "-m")))
                clk = CLOCK_MONOTONIC;
        struct timespec sample[N_SAMPLES+1];
        unsigned int cnt = N_SAMPLES, s = 0;
        do {
                if (0 != clock_gettime(clk, &sample[s++])) {
                        fprintf(stderr, "oops, clock_gettime() failed: %d: '%s'.\n",
                                errno, strerror(errno));
                        return 1;
                }
        } while (--cnt);
        clock_gettime(clk, &sample[s]);
#define TS2NS(_TS_) ((((unsigned long long)(_TS_).tv_sec)*1000000000ULL) + ((unsigned long long)((_TS_).tv_nsec)))
        unsigned long long deltas[N_SAMPLES], t1, t2, sum = 0,
                t_start = TS2NS(sample[0]);
        for (s = 1; s < (N_SAMPLES+1); s += 1) {
                t1 = TS2NS(sample[s-1]);
                t2 = TS2NS(sample[s]);
                deltas[s-1] = t2 - t1;
        }
        for (s = 0; s < N_SAMPLES; s += 1)
                sum += deltas[s];
        fprintf(stderr, "sum: %llu\n", sum);
        unsigned long long avg_ns = sum / N_SAMPLES;
        t1 = (t2 - t_start);
        fprintf(stderr,
                "Total time: %1.1llu.%9.9lluS - Average Latency: %1.1llu.%9.9lluS\n",
                t1/1000000000, t1-((t1/1000000000)*1000000000),
                avg_ns/1000000000, avg_ns-((avg_ns/1000000000)*1000000000));
        return 0;
}

/*
 * This is a copy of
 * tools/testing/selftests/vDSO/parse_vdso.c
 * from the Linux kernel tree, with a little main() function at the end
 * which uses the new '__vdso_linux_tsc_calibration' function to get a pointer to
 * the kernel-maintained clocksource mult and shift values in the vsyscall_gtod_data:
 */
#include <stdbool.h>
#include <stdint.h>
#include <limits.h>
#include <time.h>
#include <sys/auxv.h>
#include <errno.h>
#include <string.h>
#include <stdio.h>

#ifndef PATH_TO_PARSE_VDSO_C
#error Please set compilation flag -DPATH_TO_PARSE_VDSO_C="..." .
#endif

#include PATH_TO_PARSE_VDSO_C

typedef
struct lnx_tsc_calibration_s
{ unsigned int mult, shift;
} LnxTSCCalibration_t;

typedef
struct tsc_buffer_s
{ unsigned long tsc, sec, nsec;
} LnxTSCBuffer_t;

void clock_get_time_raw( register struct timespec *ts , register const LnxTSCCalibration_t * tsc_cal, register LnxTSCBuffer_t *last )
{ volatile unsigned int tsc_lo=0, tsc_hi=0, tsc_cpu=0; // so same instrs generated for 64-bit as for 32-bit builds
register unsigned long tsc=0;
asm volatile
( "rdtscp"
: "=a" (tsc_lo)
, "=d" (tsc_hi)
, "=c" (tsc_cpu)
); // : eax, edx, ecx used - NOT rax, rdx, rcx
tsc = ((((unsigned long)tsc_hi) & 0xffffffffUL) << 32) | (((unsigned long)tsc_lo) & 0xffffffffUL);
tsc *= tsc_cal->mult;
tsc >>= tsc_cal->shift;
if ( last && last->tsc )
{ register unsigned long tsc_diff = tsc - last->tsc;
if ( (tsc_diff + last->nsec) > 999999999UL )
{ ts->tv_sec = tsc / 1000000000;
ts->tv_nsec = tsc % 1000000000;
}else
{ ts->tv_sec = last->sec;
ts->tv_nsec = tsc_diff + last->nsec;
}
last->tsc = tsc;
last->sec = ts->tv_sec;
last->nsec = ts->tv_nsec;
}else
{ ts->tv_sec = tsc / 1000000000;
ts->tv_nsec = tsc % 1000000000;
if(last)
{ last->tsc = tsc;
last->sec = ts->tv_sec;
last->nsec = ts->tv_nsec;
}
}
}

#ifndef N_SAMPLES
#define N_SAMPLES 100
#endif

int main( int argc, const char *const* argv , const char **const envp )
{
register unsigned long sysinfo_ehdr = getauxval( AT_SYSINFO_EHDR );
if( 0 == sysinfo_ehdr )
{ fprintf(stderr,"getauxval failed: %d : '%s'.\n", errno, strerror(errno));
return 1;
}
vdso_init_from_sysinfo_ehdr( sysinfo_ehdr );
if ( ! vdso_info.valid )
{ fprintf(stderr,"vdso_init_from_sysinfo_ehdr failed\n");
return 1;
}
const struct lnx_tsc_calibration_s*
(*linux_tsc_cal)(void) = (const struct lnx_tsc_calibration_s *(*)(void))
vdso_sym("LINUX_2.6", "__vdso_linux_tsc_calibration");
if( linux_tsc_cal == NULL )
{ fprintf(stderr,"vdso_sym failed\n");
return 1;
}
const struct lnx_tsc_calibration_s *clock_source = (*linux_tsc_cal)();
if( clock_source == NULL ) /* NULL means the clocksource is not the TSC */
{ fprintf(stderr,"__vdso_linux_tsc_calibration returned NULL: clocksource is not TSC\n");
return 1;
}
fprintf(stderr,"Got TSC calibration @ %p: mult: %u shift: %u\n",
(void*)clock_source, clock_source->mult, clock_source->shift
);
#define TS2NS(_TS_) ((((unsigned long long)(_TS_).tv_sec)*1000000000ULL) + (((unsigned long long)((_TS_).tv_nsec))))
struct timespec t_s={0,0};
LnxTSCBuffer_t last_tsc = {0,0,0};
clock_get_time_raw( &t_s, clock_source, &last_tsc);
unsigned long long
sample [ N_SAMPLES ]
, t1, t2
, t_start = TS2NS(t_s);
unsigned int s=0;
do
{ clock_get_time_raw( &t_s, clock_source, &last_tsc);
t1 = TS2NS(t_s);
clock_get_time_raw( &t_s, clock_source, &last_tsc);
t2 = TS2NS(t_s);
sample [ s ] = t2 - t1;
}while(++s < N_SAMPLES);
unsigned long long sum = 0;
for(s = 0; s < N_SAMPLES; s+=1)
sum += sample[s];
fprintf(stderr,"sum: %llu\n",sum);
unsigned long long avg_ns = sum / N_SAMPLES;
t1=(t2 - t_start);
fprintf(stderr,
"Total time: %1.1llu.%9.9lluS - Average Latency: %1.1llu.%9.9lluS\n",
t1/1000000000, t1-((t1/1000000000)*1000000000),
avg_ns/1000000000, avg_ns-((avg_ns/1000000000)*1000000000)
);
return 0;
}