[Xorp-users] Anyone using XORP in live production environment with big traffic load?

Mark Handley M.Handley at cs.ucl.ac.uk
Mon Jul 14 08:14:42 PDT 2008


Ok, as a rough rule of thumb:

You need a minimum of about ten memory accesses to forward a packet
(three for the NIC to receive (DMA of data + buffer descriptor r/w),
three to transmit, and four for the CPU to look at the packet and set
up DMA).  You'll likely need a few more than this if you have a
non-trivial forwarding table, need to decrement the TTL, etc - ten is
for simple L2 forwarding.

The best case for a non-burst transfer of data to/from memory is that
you pay the CAS latency.  On the DDR2 memory we have, CL=5, and the
memory bus speed is 333MHz (i.e. 3ns per clock).  That's 15ns of
latency per memory access (assuming accesses are independent, but you
don't have to change RAS).  If you get no memory parallelism, ten
memory accesses tie up the memory bus for 150ns per packet, which
gives 6.7Mpps.
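
If it helps to see that arithmetic in one place, here's a trivial
Python sketch of the same back-of-envelope calculation (the numbers
are the ones above; the variable names are just illustrative):

  # DDR2 timing from above: CL=5 on a 333MHz memory bus
  CAS_LATENCY_CYCLES = 5
  NS_PER_BUS_CLOCK = 3.0        # 333MHz -> ~3ns per clock
  ACCESSES_PER_PACKET = 10      # minimum for simple L2 forwarding
                                # (3 NIC rx + 3 tx + 4 CPU)

  ns_per_access = CAS_LATENCY_CYCLES * NS_PER_BUS_CLOCK  # 15ns
  ns_per_packet = ACCESSES_PER_PACKET * ns_per_access    # 150ns
  mpps = 1e9 / ns_per_packet / 1e6
  print("%.1f Mpps" % mpps)                              # -> 6.7 Mpps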

Of course this argument glosses over lots of details about how modern
memory subsystems work, so it's not completely accurate, but it gives
you some idea of the ballpark numbers, and it matches the actual
numbers we get for Click forwarding (about 7.1Mpps) pretty well.
Large packets cost a little more than small ones, but you usually get
to burst-transfer the bytes, so they don't cost all that much more (at
least not in memory latency).  These numbers also assume you've got a
bunch of cores doing SMP (if not, you'll be CPU-limited instead), and
that you set things up so as not to copy packets from core to core
unless they share an L2 cache.
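
To put that packet rate in line-rate terms for our 12 x 1Gig testbed
(mentioned in my earlier mail below), here's a quick hypothetical
calculation; it ignores the Ethernet preamble and inter-frame gap, so
the required packet rates are slightly overstated:

  # Mpps needed to fill 12 x 1Gig in one direction,
  # versus the ~7.1 Mpps memory-latency ceiling we observe.
  LINE_RATE_BPS = 12e9

  for frame_bytes in (64, 1024):
      mpps_needed = LINE_RATE_BPS / (frame_bytes * 8) / 1e6
      print("%4d-byte frames: %.1f Mpps needed" % (frame_bytes, mpps_needed))

  # 64-byte frames:   23.4 Mpps needed -> memory-bound below line rate
  # 1024-byte frames:  1.5 Mpps needed -> line rate is the limit

which is why we can saturate all twelve ports with 1024-byte packets
but not with minimum-sized ones.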

You ought to be able to do better on an AMD NUMA box than on an Intel
uniform-memory-architecture box if you're smart about parallelizing
memory access, but our preliminary numbers are actually worse, so
presumably we're doing something wrong.

 - Mark

On Mon, Jul 14, 2008 at 2:53 PM, Morten Pedersen <morten at workzone.no> wrote:
> Ouch.
>
> I didn't consider memory latency; that will of course be an issue. I would be interested in reading your draft research paper. We might get into trouble here, since we have a mix of regular web traffic together with unicast video and audio streams in different formats :(
>
> Thanks,
>
> Morten
>
> ----- Original Message -----
> From: "Mark Handley" <M.Handley at cs.ucl.ac.uk>
> To: "Morten Pedersen" <morten at workzone.no>
> Cc: xorp-users at xorp.org
> Sent: Monday, July 14, 2008 3:40:37 PM GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
> Subject: Re: [Xorp-users] Anyone using XORP in live production environment with big traffic load?
>
> My guess is that you might have trouble forwarding 50Gb/s, primarily
> because you will hit memory bottlenecks.
>
> We've not used any 10Gig cards (we didn't have access to a polling
> driver for Click for these cards), but with 12 x 1Gig ports in a box
> we can saturate all 12 bi-directionally with 1024-byte packets; with
> minimum-sized packets we top out around 7Mpps.  This is primarily
> due to memory latency issues.  I can send you a draft research paper,
> if it's helpful to you.
>
> So, the real question, if you want to consider software forwarding at
> these speeds on commodity hardware, is: what sort of traffic mix do
> you need to handle?
>
>  - Mark
>
> On Mon, Jul 14, 2008 at 2:10 PM, Morten Pedersen <morten at workzone.no> wrote:
>> I guess the header says it all.
>>
>> We are currently using a couple of Cisco 7206 routers for our BGP routing, but need to upgrade to something with more bandwidth. Therefore, I was thinking about XORP as a cheaper alternative. I am planning to go for a configuration with two identical servers for failover:
>>
>> One 2U server with:
>> 2 x PCI Express x16
>> 1 x PCI Express x8
>> 2 x 2 port 10Gbit fibre network cards
>> 1 x quad 1Gbit ethernet network card
>>
>> My question is, will this be viable? Also, should I go for AMD or Intel? I need to keep in mind that my servers must be able to handle around 50Gbit/s of network traffic, so the choice of data bus is important, I guess. Another issue is the network cards. I am planning on using Intel network cards.
>>
>> Anyone with some ideas on this?
>>
>> Thanks,
>>
>> Morten
>>
>> _______________________________________________
>> Xorp-users mailing list
>> Xorp-users at xorp.org
>> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users
>>
>


