[Xorp-users] measuring update processing time

Ratul Mahajan ratul@cs.washington.edu
Sat, 10 Sep 2005 18:27:34 -0700


 > This doesn't answer your question, but perhaps means that under some
 > circumstances the question isn't the right question?

Let me explain what I'm trying to do. I am evaluating a modified version 
of BGP that performs some additional computation at the router, and I want 
to quantify the impact of this computation on the time it takes a router to 
process incoming updates. The extra computation does not happen in line 
with update processing, so I cannot simply measure the interval from when 
xorp starts processing a particular update to when it finishes (that 
interval would be about the same as for vanilla BGP).
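
One way I'm considering to start the clock when an update actually reaches 
the box, independent of when xorp reads it off the socket, is to timestamp 
the packets externally and correlate those arrival times with a timestamp 
logged inside xorp when processing of the corresponding update completes. 
Below is a minimal libpcap sketch of the external half; the interface name 
("eth0"), the snap length, and the assumption that peers use the standard 
BGP port 179 are illustrative, and it needs to be built with -lpcap.

    /* Sketch only: stamp BGP segments as they reach the box, using libpcap.
     * "eth0" and port 179 are assumptions; build with: g++ sniff.cc -lpcap */
    #include <pcap.h>
    #include <cstdio>

    static void on_packet(u_char*, const struct pcap_pkthdr* hdr, const u_char*)
    {
        /* hdr->ts is the capture timestamp, taken before the data sits in
         * the receiving socket buffer waiting for xorp to read it. */
        printf("arrival %ld.%06ld len=%u\n",
               (long)hdr->ts.tv_sec, (long)hdr->ts.tv_usec, hdr->caplen);
    }

    int main()
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t* p = pcap_open_live("eth0", 128, 0, 100, errbuf);
        if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }

        struct bpf_program prog;
        if (pcap_compile(p, &prog, "tcp port 179", 1, 0) == -1 ||
            pcap_setfilter(p, &prog) == -1) {
            fprintf(stderr, "%s\n", pcap_geterr(p));
            return 1;
        }

        pcap_loop(p, -1, on_packet, NULL);   /* one arrival stamp per segment */
        pcap_close(p);
        return 0;
    }

Matching a captured segment to a specific BGP update still takes some care 
(one TCP segment can carry several updates, or part of one), but the capture 
timestamps at least include the time spent waiting in the socket buffer, 
which measurements taken purely inside xorp would miss.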

 > Being single-threaded or multi-threaded doesn't change things (unless
 > you've got multiple CPUs) in either case.    If the machine is
 > underloaded, the interval will be equally short.

This is mostly an aside, but a multi-threaded implementation could have 
helped: if one thread were responsible solely for extracting incoming 
messages, the clock for a message could start ticking as soon as it reaches 
the router. If the other threads were busy with other work, that would show 
up in the measured time for the message. In a single-threaded 
implementation, the message is only retrieved once the router is not busy 
with other things, so the clock starts ticking later. (A rough sketch of 
this idea follows below.)
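
For what it's worth, here is a sketch of that reader-thread scheme. This is 
not xorp code; read_next_message() and process_update() are stand-ins for 
the real socket read and BGP update processing, and the names are purely 
illustrative.

    /* Sketch of the reader-thread idea: one thread stamps each message as it
     * is pulled off the socket, a second thread processes it; the reported
     * elapsed time therefore includes any wait while the worker was busy. */
    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    struct Update {
        std::string payload;        // raw BGP message (placeholder)
        Clock::time_point arrived;  // stamped by the reader thread
    };

    static std::queue<Update> g_queue;
    static std::mutex g_mutex;
    static std::condition_variable g_cv;

    static std::string read_next_message()        // stand-in for the socket read
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        return "UPDATE";
    }

    static void process_update(const std::string&)  // stand-in for BGP processing
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }

    static void reader()          // drains the socket as messages arrive
    {
        for (;;) {
            Update u;
            u.payload = read_next_message();
            u.arrived = Clock::now();             // clock starts at arrival
            { std::lock_guard<std::mutex> lk(g_mutex); g_queue.push(std::move(u)); }
            g_cv.notify_one();
        }
    }

    static void worker()          // may fall behind when busy with other work
    {
        for (;;) {
            Update u;
            {
                std::unique_lock<std::mutex> lk(g_mutex);
                g_cv.wait(lk, [] { return !g_queue.empty(); });
                u = std::move(g_queue.front());
                g_queue.pop();
            }
            process_update(u.payload);
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                          Clock::now() - u.arrived).count();
            printf("update took %lld us\n", (long long)us);
        }
    }

    int main()
    {
        std::thread r(reader), w(worker);   // runs until killed; demo only
        r.join();
        w.join();
        return 0;
    }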

Cheers.

Mark Handley wrote:
> 
> On 9/9/05, Ratul Mahajan <ratul@cs.washington.edu> wrote:
> 
>     does anyone know what the best way is to measure (BGP) update processing
>     time, starting from when the message arrives at the box? given that the
>     implementation is single-threaded, measuring it from within the xorp
>     code will ignore time for which the message was sitting in the socket
>     buffers, right?
> 
> You're right that messages can spend some time in either the outgoing 
> socket buffer on the sending machine, or the inbound socket buffer on 
> the receiving machine.  In general, the time spent in the socket buffer 
> on the receiving machine should be very short if the machine is 
> underloaded, but it's pretty hard to measure exactly what it is.
> 
> Being single-threaded or multi-threaded doesn't change things (unless 
> you've got multiple CPUs) in either case.    If the machine is 
> underloaded, the interval will be equally short.
> 
> In the overload case, you want to push back to the socket buffer in 
> either case.  This is the only way you can rate control incoming data, 
> and if you can't handle the update rate, you don't want to build the 
> queue up internally, because that's a guaranteed way to run out of 
> memory.  Better to push back to the router upstream, and let it generate 
> the messages at a rate you can handle.  Thus even a multi-threaded 
> solution would best block back to the socket buffer under overload 
> conditions.
> 
> This doesn't answer your question, but perhaps means that under some 
> circumstances the question isn't the right question?
> 
> - Mark