[Xorp-hackers] loading complete routing tables
Mark Handley
Mark Handley <M.Handley@cs.ucl.ac.uk>
Fri, 16 Sep 2005 22:22:18 +0100
> so i looked at the harness code some time back and came away with the
> following impressions regarding the current code:
>  - it (harness/Peer::send_dump) does not support mrtd records of type
>    TABLE_DUMP (entire table dumps). it assumes BGP4MP (dumps of routing
>    updates).
>  - it does not support having updates from multiple peers in the same
>    mrtd file (as would be generated by a router with many peers).
Sounds like these serve different purposes. So far we've wanted to test
XORP, including the message parsing, so replaying update messages is good
for this purpose.
What you describe is loading previously stored router state. We could in
principle build such a load function into BGP for testing purposes. As we're
event-driven, this would have to be implemented by effectively mapping the
route state into update messages, or we'd not end up with the right metadata
stored in the routes. But a trickier problem concerns filters. There's no
way to load state in after the import filters, as we only store routes prior
to filtering. Thus the only way to do a table load for a config file that
was stored from a post-import-filter table dump would be to load it on a
XORP router with *no* import filters in place. This means you can't do it
for a router with EBGP peerings configured, because they must have filters.

Thus I can't really see any way to load a table dump into a XORP BGP and end
up with the correct behaviour unless that table dump were itself stored
from a XORP router, and hence represents pre-filtered routes. Loading a
post-filter table dump into a XORP router that had import filters configured
would end up with the filters being run on the stored data, which simply
won't produce the right behaviour.
Does this make sense?
- Mark