From xylania@yahoo.no Thu Jun 2 18:51:01 2005
From: xylania@yahoo.no (Stig Arnesen)
Date: Thu, 2 Jun 2005 19:51:01 +0200 (CEST)
Subject: [Xorp-users] Cannot start vif xl0: invalid primary address
Message-ID: <20050602175101.16877.qmail@web26907.mail.ukl.yahoo.com>

Hi

PC1 --- XORP --- Cisco router

I am new to XORP. I am trying to run this IPv6 setup (no IPv4 is enabled) between two systems using MLD. I did not plan to use PIM, or do I have to? I have enabled mfea6.
My main concern is that xorp_rtrmgr reports that interface xl0 has an invalid primary address. My system is FreeBSD 5.4. There may be several more problems too; see below.

Regards
Stig

[ 2005/06/02 19:42:28 INFO xorp_rtrmgr:89668 RTRMGR +170 master_conf_tree.cc execute ] Changed modules: interfaces, fea, mfea6, mld
[ 2005/06/02 19:42:28 INFO xorp_rtrmgr:89668 RTRMGR +404 module_manager.cc run ] Running module: interfaces (/usr/home/stig/xorp-1.1/fea/xorp_fea)
[ 2005/06/02 19:42:29 ERROR xorp_fea:89669 FEA +405 routing_socket_utils.cc rtm_get_to_fte_cfg ] Cannot map a discard route back to an FEA soft discard interface.
[ 2005/06/02 19:42:29 ERROR xorp_fea:89669 FEA +405 routing_socket_utils.cc rtm_get_to_fte_cfg ] Cannot map a discard route back to an FEA soft discard interface.
[ 2005/06/02 19:42:29 ERROR xorp_fea:89669 FEA +405 routing_socket_utils.cc rtm_get_to_fte_cfg ] Cannot map a discard route back to an FEA soft discard interface.
[ 2005/06/02 19:42:30 WARNING xorp_rtrmgr:89668 XrlFinderTarget +406 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Xrl target is not enabled.
[ 2005/06/02 19:42:30 INFO xorp_fea CLI ] CLI enabled
[ 2005/06/02 19:42:30 INFO xorp_fea CLI ] CLI started
[ 2005/06/02 19:42:31 INFO xorp_fea MFEA ] MFEA enabled
[ 2005/06/02 19:42:31 INFO xorp_fea MFEA ] CLI enabled
[ 2005/06/02 19:42:31 INFO xorp_fea MFEA ] CLI started
[ 2005/06/02 19:42:31 INFO xorp_fea MFEA ] MFEA enabled
[ 2005/06/02 19:42:31 INFO xorp_fea MFEA ] CLI enabled
[ 2005/06/02 19:42:31 INFO xorp_fea MFEA ] CLI started
[ 2005/06/02 19:42:31 INFO xorp_rtrmgr:89668 RTRMGR +404 module_manager.cc run ] Running module: fea (/usr/home/stig/xorp-1.1/fea/xorp_fea)
[ 2005/06/02 19:42:37 INFO xorp_rtrmgr:89668 RTRMGR +404 module_manager.cc run ] Running module: mfea6 (/usr/home/stig/xorp-1.1/fea/xorp_fea)
[ 2005/06/02 19:42:37 INFO xorp_fea MFEA ] Interface added: Vif[fxp0] pif_index: 1 vif_index: 0 addr: 3ffe:ffff::c0a8:650a subnet: 3ffe:ffff::c0a8:6500/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP
[ 2005/06/02 19:42:37 INFO xorp_fea MFEA ] Interface added: Vif[xl0] pif_index: 2 vif_index: 1 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d300/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP
[ 2005/06/02 19:42:37 INFO xorp_fea MFEA ] MFEA started
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface enabled Vif[xl0] pif_index: 2 vif_index: 1 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d300/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP DOWN IPv6 ENABLED
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface started: Vif[xl0] pif_index: 2 vif_index: 1 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d300/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP UP IPv6 ENABLED
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface added: Vif[register_vif] pif_index: 2 vif_index: 2 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d30b/128 broadcast: 3ffe:ffff::c0a8:d30b peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface enabled Vif[fxp0] pif_index: 1 vif_index: 0 addr: 3ffe:ffff::c0a8:650a subnet: 3ffe:ffff::c0a8:6500/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP DOWN IPv6 ENABLED
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface started: Vif[fxp0] pif_index: 1 vif_index: 0 addr: 3ffe:ffff::c0a8:650a subnet: 3ffe:ffff::c0a8:6500/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP UP IPv6 ENABLED
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface enabled Vif[register_vif] pif_index: 2 vif_index: 2 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d30b/128 broadcast: 3ffe:ffff::c0a8:d30b peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP DOWN IPv6 ENABLED
[ 2005/06/02 19:42:38 INFO xorp_fea MFEA ] Interface started: Vif[register_vif] pif_index: 2 vif_index: 2 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d30b/128 broadcast: 3ffe:ffff::c0a8:d30b peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP UP IPv6 ENABLED
[ 2005/06/02 19:42:38 INFO xorp_rtrmgr:89668 RTRMGR +404 module_manager.cc run ] Running module: mld (/usr/home/stig/xorp-1.1/mld6igmp/xorp_mld)
[ 2005/06/02 19:42:38 WARNING xorp_rtrmgr:89668 XrlFinderTarget +406 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "MLD" does not exist or is not enabled.
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Protocol enabled
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] CLI enabled
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] CLI started
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Interface added: Vif[fxp0] pif_index: 0 vif_index: 0 Flags:
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Added new address to vif fxp0: addr: 3ffe:ffff::c0a8:650a subnet: 3ffe:ffff::c0a8:6500/120 broadcast: :: peer: ::
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Interface flags changed: Vif[fxp0] pif_index: 0 vif_index: 0 addr: 3ffe:ffff::c0a8:650a subnet: 3ffe:ffff::c0a8:6500/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Interface added: Vif[xl0] pif_index: 0 vif_index: 1 Flags:
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Added new address to vif xl0: addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d300/120 broadcast: :: peer: ::
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Interface flags changed: Vif[xl0] pif_index: 0 vif_index: 1 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d300/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Interface added: Vif[register_vif] pif_index: 0 vif_index: 2 Flags:
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Added new address to vif register_vif: addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d30b/128 broadcast: 3ffe:ffff::c0a8:d30b peer: ::
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Interface flags changed: Vif[register_vif] pif_index: 0 vif_index: 2 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d30b/128 broadcast: 3ffe:ffff::c0a8:d30b peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP
[ 2005/06/02 19:42:39 INFO xorp_mld MLD6IGMP ] Protocol started
[ 2005/06/02 19:42:40 INFO xorp_mld MLD6IGMP ] Interface enabled: Vif[xl0] pif_index: 0 vif_index: 1 addr: 3ffe:ffff::c0a8:d30b subnet: 3ffe:ffff::c0a8:d300/120 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP DOWN IPv6 ENABLED
[ 2005/06/02 19:42:40 ERROR xorp_mld:89670 MLD6IGMP +723 mld6igmp_node.cc start_vif ] Cannot start vif xl0: invalid primary address
[ 2005/06/02 19:42:40 WARNING xorp_mld XrlMld6igmpTarget ] Handling method for mld6igmp/0.1/start_vif failed: XrlCmdError 102 Command failed Cannot start vif xl0: invalid primary address
[ 2005/06/02 19:42:40 ERROR xorp_rtrmgr:89668 RTRMGR +597 master_conf_tree.cc commit_pass2_done ] Commit failed: 102 Command failed Cannot start vif xl0: invalid primary address
[ 2005/06/02 19:42:40 ERROR xorp_rtrmgr:89668 RTRMGR +182 master_conf_tree.cc config_done ] Configuration failed: 102 Command failed Cannot start vif xl0: invalid primary address
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +1420 task.cc run_task ] No more tasks to run
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +216 module_manager.cc terminate ] Terminating module: fea
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +216 module_manager.cc terminate ] Terminating module: interfaces
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +216 module_manager.cc terminate ] Terminating module: mfea6
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +262 module_manager.cc terminate ] Killing module: mfea6 (pid = 89669)
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +547 module_manager.cc killed ] Module killed during shutdown: mfea6
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +216 module_manager.cc terminate ] Terminating module: mld
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +262 module_manager.cc terminate ] Killing module: mld (pid = 89670)
[ 2005/06/02 19:42:40 INFO xorp_rtrmgr:89668 RTRMGR +547 module_manager.cc killed ] Module killed during shutdown: mld

From xylania@yahoo.no Thu Jun 2 18:55:47 2005
From: xylania@yahoo.no (Stig Arnesen)
Date: Thu, 2 Jun 2005 19:55:47 +0200 (CEST)
Subject: [Xorp-users] Cannot start vif xl0: invalid primary address
Message-ID:
<20050602175547.17884.qmail@web26907.mail.ukl.yahoo.com>

Re my last message: might be an idea to show the config file as well:

Regards
Stig

/* XORP Configuration File, v1.0 */
interfaces {
    interface xl0 {
        description: "Ethernet"
        vif xl0 {
            disable: false
            address 3ffe:ffff::c0a8:d30b {
                prefix-length: 120
                disable: false
            }
        }
        disable: false
        discard: false
    }
    interface fxp0 {
        description: "Ethernet"
        vif fxp0 {
            disable: false
            address 3ffe:ffff::c0a8:650a {
                prefix-length: 120
                disable: false
            }
        }
        disable: false
        discard: false
    }
    /* interface lo0 {
        description: "Loopback interface"
        vif lo0 {
            disable: false
        }
        disable: false
        discard: false
    } */
    targetname: "fea"
}
fea {
    unicast-forwarding6 {
        disable: false
    }
    targetname: "fea"
}
plumbing {
    mfea6 {
        interface xl0 {
            vif xl0 {
                disable: false
            }
        }
        interface fxp0 {
            vif fxp0 {
                disable: false
            }
        }
        interface register_vif {
            vif register_vif {
                disable: false
            }
        }
        targetname: "MFEA_6"
        disable: false
    }
}
protocols {
    mld {
        interface xl0 {
            vif xl0 {
                disable: false
            }
        }
        interface fxp0 {
            vif fxp0 {
                disable: false
            }
        }
        targetname: "MLD"
        disable: false
    }
}

From pavlin@icir.org Thu Jun 2 19:14:03 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Thu, 02 Jun 2005 11:14:03 -0700
Subject: [Xorp-users] Cannot start vif xl0: invalid primary address
In-Reply-To: Message from Stig Arnesen of "Thu, 02 Jun 2005 19:51:01 +0200." <20050602175101.16877.qmail@web26907.mail.ukl.yahoo.com>
Message-ID: <200506021814.j52IE3r1014609@possum.icir.org>

> PC1 --- XORP --- Cisco router
>
> I am new to XORP. I am trying to run this IPv6 setup (no IPv4 is
> enabled) between two systems using MLD. I did not plan to use PIM,
> or do I have to? I have enabled mfea6.
> My main concern is that xorp_rtrmgr reports that interface xl0 has
> an invalid primary address. My system is FreeBSD 5.4. There may be
> several more problems too; see below.

If you want to run MLD on an interface, that interface should have a
link-local IPv6 address (in addition to 3ffe:ffff::c0a8:d30b, which is
not link-local). Typically, the link-local address is auto-generated
by the OS itself when the interface is created. You will recognize it
because it starts with "fe8". Did you remove it by hand, or was it not
there at all?

Pavlin

P.S. BTW, running XORP with MLD only, without PIM-SM, is not very
helpful, because you need PIM-SM if you want to route the packets
between Cisco and PC1.

From pavlin@icir.org Thu Jun 2 19:21:03 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Thu, 02 Jun 2005 11:21:03 -0700
Subject: [Xorp-users] Cannot start vif xl0: invalid primary address
In-Reply-To: Message from Stig Arnesen of "Thu, 02 Jun 2005 19:55:47 +0200." <20050602175547.17884.qmail@web26907.mail.ukl.yahoo.com>
Message-ID: <200506021821.j52IL3Fh014678@possum.icir.org>

> ref my last message: Might be an idea to show the config file as well:

I see you are explicitly configuring the IPv6 addresses inside XORP.
Then you need to add the link-local IPv6 address to your configuration
as well, though that address should be the same as the one already
assigned by the system before starting XORP.
Alternatively, if the "3ffe" addresses were already added to xl0 and
fxp0 before XORP was started, then you can use just the
"default-system-config" statement for each interface.
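As a sketch of that suggestion, the xl0 block would gain a second address entry for the link-local address (the fe80 value below is only a placeholder; use the address the system actually assigned):

```text
interface xl0 {
    description: "Ethernet"
    vif xl0 {
        disable: false
        address 3ffe:ffff::c0a8:d30b {
            prefix-length: 120
            disable: false
        }
        /* placeholder: use the system-assigned link-local address */
        address fe80::2e0:4cff:fe00:1 {
            prefix-length: 64
            disable: false
        }
    }
    disable: false
    discard: false
}
```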
Pavlin
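Pavlin's rule of thumb that a link-local address "starts with fe8" corresponds to the fe80::/10 range (fe80 through febf). A quick way to check an address, sketched here with Python's standard ipaddress module:

```python
import ipaddress

def is_link_local(addr: str) -> bool:
    """Return True if addr falls in the IPv6 link-local range fe80::/10."""
    # Strip any zone identifier such as the "%xl0" that BSD ifconfig prints.
    return ipaddress.IPv6Address(addr.split('%')[0]).is_link_local

# The global address from the config is not link-local...
print(is_link_local("3ffe:ffff::c0a8:d30b"))   # False
# ...while an OS-assigned fe80 address is.
print(is_link_local("fe80::2e0:4cff:fe00:1"))  # True
```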
> _______________________________________________
> Xorp-users mailing list
> Xorp-users@xorp.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

From ap010@terra.com.br Sat Jun 4 21:55:33 2005
From: ap010@terra.com.br (Diogo Della)
Date: Sat, 4 Jun 2005 17:55:33 -0300
Subject: [Xorp-users] xorp and other network daemons
Message-ID: 

I'm trying to test multicast routing protocols.

Something weird is happening. When I start up XORP, the other network daemons like SSH and FTP stop working. I have tested it: an ssh to 127.0.0.1 works, but when I run XORP, ssh cannot connect anymore. If I kill XORP, then ssh connects again.

I think it may be FreeBSD itself, because I have noticed that when a cable is disconnected or something else is wrong on the network, FreeBSD stops serving any network daemons, even on localhost.

Using tcpdump, I have noticed that the TCP handshake completes, but nothing happens after that.

Thanks all

Diogo

APPENDIX
KERNEL options
#MULTICAST
options         MROUTING
#DUMMYNET
options         DUMMYNET
options         IPFIREWALL
options         IPFIREWALL_VERBOSE
options         IPFIREWALL_VERBOSE_LIMIT=5
options         IPFIREWALL_FORWARD
options         IPFW2
options         IPDIVERT
options         HZ=1000
# More kernel options per the Handbook, 2005-05-13, by Diogo Della
options         IPFIREWALL_DEFAULT_TO_ACCEPT
options         IPV6FIREWALL
options         IPV6FIREWALL_VERBOSE
options         IPV6FIREWALL_VERBOSE_LIMIT
options         IPV6FIREWALL_DEFAULT_TO_ACCEPT
# PIM support
options         PIM
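The "handshake completes but nothing happens" symptom can be reproduced from userspace without tcpdump. A small sketch (not from the thread; Python, port illustrative): if connect() succeeds but the first read times out, the three-way handshake worked and the daemon accepted the connection, but it is stuck before writing its greeting — exactly what a daemon blocked on a slow lookup would look like.

```python
import socket

def check_banner(host="127.0.0.1", port=22, timeout=5.0):
    # connect() completing proves the TCP handshake worked and the daemon
    # called accept(); if recv() then times out, the daemon is stuck
    # before sending its greeting (sshd and ftpd both send one).
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(128)   # healthy sshd: b"SSH-2.0-..."
        except socket.timeout:
            return None          # handshake OK, daemon silent: the symptom
```

A `None` return here while ping still works matches the behaviour described above.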
From pavlin@icir.org Sat Jun 4 23:30:27 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Sat, 04 Jun 2005 15:30:27 -0700
Subject: [Xorp-users] xorp and other network daemons
In-Reply-To: Message from "Diogo Della" of "Sat, 04 Jun 2005 17:55:33 -0300."
Message-ID: <200506042230.j54MURj7014318@possum.icir.org>

> I'm trying to test multicast routing protocols.
>
> Something weird is happening. When I start up XORP, the other network
> daemons like SSH and FTP stop working. An ssh to 127.0.0.1 works, but
> when I run XORP, ssh cannot connect anymore. If I kill XORP, then ssh
> connects again.
>
> Using tcpdump, I have noticed that the TCP handshake completes, but
> nothing happens after that.

Try to run XORP with an empty configuration to see whether you still have the connection problem.

If you don't see the problem with an empty configuration, then try to incrementally add your configuration bottom-up (e.g., first enable the "interfaces" section, then "fea", "fib2mrib", "mfea", "igmp", "pimsm4") to see what exactly triggers the problem. For example, do you explicitly configure the loopback interface, or do you install any unicast routes (via static_routes, rip or bgp)?

Pavlin

From Diogo Della
Message-ID: <001f01c569fe$a138efc0$01c8a8c0@apolo>
I will do the test.

But, I did not configure the lo interface. The problem is that ftp and ssh do not work from localhost or from any other host on the network.

I'm running RIP for unicast routing.

Best regards,

Diogo Della Torres Oliveira
http://www.della.eng.br
diogodto@terra.com.br
MSN: diogodto@hotmail.com

From ap010@terra.com.br Sun Jun 5 23:03:55 2005
From: ap010@terra.com.br (Diogo Della)
Date: Sun, 5 Jun 2005 19:03:55 -0300
Subject: [Xorp-users] The problem is RIP
Message-ID: 
I have made the tests, and the problem is RIP. If I enable RIP, whether with XORP or with routed, the FreeBSD box no longer accepts any connections. Why is that?

In the config below, the 192.168 address is disabled because there is no RIP router on that interface.

Thanks.

Diogo Della

RIP config at XORP:
/*  rip {
        export connected {
            metric: 0
            tag: 0
        }
        interface sis0 {
            vif sis0 {
                address 192.168.69.2 {
                    disable: true
                }
            }
        }
        interface fxp0 {
            vif fxp0 {
                address 172.16.3.2 {
                    disable: false
                }
            }
        }
        interface rl0 {
            vif rl0 {
                address 172.16.5.2 {
                    disable: false
                }
            }
        }
    }
*/
From ap010@terra.com.br Sun Jun 5 23:51:13 2005
From: ap010@terra.com.br (Diogo Della)
Date: Sun, 5 Jun 2005 19:51:13 -0300
Subject: [Xorp-users] Sorry, the problem is not RIP, but the routing table
Message-ID: 

I made more tests.

When I put routes into the FreeBSD routing table, the box no longer accepts connections, whether from localhost or from any other host on the subnet.

Look what happens:
1-
router2# ssh localhost
Password:
2-
route add -net 192.168.67.0/24 172.16.3.1
route add -net 192.168.68.0/24 172.16.5.3
3-
router2# netstat -nr | less
Routing tables

Internet:
Destination        Gateway            Flags    Refs      Use  Netif Expire
127.0.0.1          127.0.0.1          UH          0    97481    lo0
172.16.3/24        link#2             UC          1        0   fxp0
172.16.3.1         00:02:2a:d3:07:ab  UHLW        2      999   fxp0    979
172.16.5/24        link#3             UC          1        0    rl0
172.16.5.3         link#3             UHLW        1        0    rl0
192.168.67         172.16.3.1         UGSc        0        0   fxp0
192.168.68         172.16.5.3         UGSc        0        0    rl0
192.168.69         link#1             UC          1        0   sis0
192.168.69.200     00:0c:6e:33:0c:ae  UHLW        0        8   sis0    243
4-
router2# ssh localhost
^C
(It times out and I have to kill it with Ctrl+C)
5-
delete net 192.168.67.0: gateway 172.16.3.1
delete net 192.168.68.0: gateway 172.16.5.3
6-
router2# ssh localhost
Password:

Why does this happen? Is it because of a FreeBSD security level, and how do I change it?

Thanks

Diogo Della
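As a sanity check on the routing table above: longest-prefix matching means the two added /24 routes cannot divert loopback or directly connected traffic, so the ssh hang must be a side effect of something now reached *through* those routes (the later replies in this thread point at DNS). A toy illustration (not from the thread; Python, routes transcribed from the netstat output):

```python
import ipaddress

# Toy longest-prefix-match over the routes from the netstat output.
ROUTES = {
    "127.0.0.1/32":    "lo0",
    "172.16.3.0/24":   "fxp0",
    "172.16.5.0/24":   "rl0",
    "192.168.67.0/24": "fxp0 via 172.16.3.1",
    "192.168.68.0/24": "rl0 via 172.16.5.3",
    "192.168.69.0/24": "sis0",
}

def lookup(dst):
    # Return the outgoing interface for the most specific matching prefix.
    ip = ipaddress.ip_address(dst)
    best = None
    for prefix, nexthop in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, nexthop)
    return best[1] if best else "default/unreachable"

print(lookup("127.0.0.1"))     # lo0 -- unaffected by the new /24 routes
print(lookup("192.168.67.5"))  # fxp0 via 172.16.3.1
```

Loopback still resolves to lo0 with the new routes installed, so the routes themselves are not swallowing the ssh traffic.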
From kristian@juniks.net Mon Jun 6 03:21:43 2005
From: kristian@juniks.net (Kristian Larsson)
Date: Mon, 6 Jun 2005 04:21:43 +0200
Subject: [Xorp-users] Sorry, the problem is not RIP, but the routing table
Message-ID: <20050606022143.GA9623@juniks.net>

First of all, try to keep everything in one thread. There are now numerous threads all coming from you on the same subject, and it looks real messy in my mail reader ;)

Anyway, you haven't by any chance changed something in /etc/hosts, perhaps the IP of localhost? Is it just ssh, or does everything, like ping and so on, stop working as well? What happens if you try pinging or ssh to 127.0.0.1?

It looks correct from over here, and when doing this on my machine (also FreeBSD) I don't get the same errors.

//Kristian Larsson

From ap010@terra.com.br Mon Jun 6 00:28:36 2005
From: ap010@terra.com.br (Diogo Della)
Date: Sun, 5 Jun 2005 20:28:36 -0300
Subject: [Xorp-users] Sorry, the problem is not RIP, but the routing table
Message-ID: 

Sorry, I'm on webmail here.

There is no problem with /etc/hosts. The problem happens with ssh and ftp; with ping there is no problem.

I'm looking everywhere to figure this out, but I can't understand it.

### TEST
router2# route add -net 0.0.0.0 192.168.69.200
add net 0.0.0.0: gateway 192.168.69.200
router2# ssh 127.0.0.1
^C
router2# ftp 127.0.0.1
Connected to 127.0.0.1.
^Z
Suspended
router2# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.027 ms
^C
--- 127.0.0.1 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.027/0.027/0.027/0.000 ms
router2# route delete -net 0.0.0.0 192.168.69.200
delete net 0.0.0.0: gateway 192.168.69.200
router2# ssh 127.0.0.1
Password:
router2# ftp 127.0.0.1
Connected to 127.0.0.1.
220 router2.multicast FTP server (Version 6.00LS) ready.
Name (127.0.0.1:root):
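Kristian's /etc/hosts question is easy to check directly: list what `localhost` actually resolves to, since `ssh localhost` is a different test from `ssh 127.0.0.1` whenever the name maps to `::1` or a stale entry. A quick sketch (not from the thread; Python):

```python
import socket

def localhost_addresses(port=22):
    # Every address "localhost" resolves to for a TCP connection, in the
    # order the resolver returns them.  If ::1 or an unexpected address
    # comes first, "ssh localhost" is not exercising the same path as
    # "ssh 127.0.0.1".
    infos = socket.getaddrinfo("localhost", port, proto=socket.IPPROTO_TCP)
    return [sockaddr[0] for family, _, _, _, sockaddr in infos]

print(localhost_addresses())
```

On a sane system this prints 127.0.0.1 and/or ::1; anything else would explain ssh-by-name behaving differently from ssh-by-address.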
From pavlin@icir.org Mon Jun 6 22:05:40 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Mon, 06 Jun 2005 14:05:40 -0700
Subject: [Xorp-users] Sorry, the problem is not RIP, but the routing table
Message-ID: <200506062105.j56L5e5c097699@possum.icir.org>

I have the feeling that either somehow the route to your interface address is missing, or "localhost" is resolved to something that is not reachable (e.g., by default "localhost" maps to the "::1" IPv6 address), or something like this. The add/delete routes obviously affect the routing table, so something may be (mis)routed in an unexpected direction.

Try all your ftp/ssh/ping tests with IP addresses first (127.0.0.1 and the address(es) of your interfaces), and if they succeed, then try to figure out where a request like "ssh localhost" is sent to.

Pavlin
From zec@icir.org Mon Jun 6 23:00:15 2005
From: zec@icir.org (Marko Zec)
Date: Tue, 7 Jun 2005 00:00:15 +0200
Subject: [Xorp-users] Sorry, the problem is not RIP, but the routing table
Message-ID: <200506070000.16183.zec@icir.org>

On Monday 06 June 2005 01:28, Diogo Della wrote:
> Sorry, I'm on webmail here.
>
> There is no problem with /etc/hosts. The problem happens with ssh and
> ftp; with ping there is no problem.
>
> I'm looking everywhere to figure this out, but I can't understand it.
Most probably you messed up the route to your DNS server(s), so in fact your ftp/ssh daemons actually do accept TCP connections, yet later they just get stuck in an attempt to do reverse lookups on client IP addresses. Since it looks like the daemons are attempting to resolve 127.0.0.1 via DNS, my guess is that a proper entry for "localhost" is missing from your /etc/hosts file. Can you remove any nameserver entries from /etc/resolv.conf, retry the tests and report what happens? Marko > ### TEST > router2# route add -net 0.0.0.0 192.168.69.200 > add net 0.0.0.0: gateway 192.168.69.200 > router2# ssh 127.0.0.1 > ^C > router2# ftp 127.0.0.1 > Connected to 127.0.0.1. > ^Z > Suspended > router2# ping 127.0.0.1 > PING 127.0.0.1 (127.0.0.1): 56 data bytes > 64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.027 ms > ^C > --- 127.0.0.1 ping statistics --- > 1 packets transmitted, 1 packets received, 0% packet loss > round-trip min/avg/max/stddev = 0.027/0.027/0.027/0.000 ms > router2# route delete -net 0.0.0.0 192.168.69.200 > delete net 0.0.0.0: gateway 192.168.69.200 > router2# ssh 127.0.0.1 > Password: > router2# ftp 127.0.0.1 > Connected to 127.0.0.1. > 220 router2.multicast FTP server (Version 6.00LS) ready. 
> Name (127.0.0.1:root): > > > ### KERNEL OPTIONS > #MULTICAST > options MROUTING > #DUMMYNET > options DUMMYNET > options IPFIREWALL > options IPFIREWALL_VERBOSE > options IPFIREWALL_VERBOSE_LIMIT=5 > options IPFIREWALL_FORWARD > options IPFW2 > options IPDIVERT > options HZ=1000 > # More kernel options per the Handbook, 20050513, by Diogo Della > options IPFIREWALL_DEFAULT_TO_ACCEPT > options IPV6FIREWALL > options IPV6FIREWALL_VERBOSE > options IPV6FIREWALL_VERBOSE_LIMIT > options IPV6FIREWALL_DEFAULT_TO_ACCEPT > # PIM support > options PIM > > > From:"Kristian Larsson" kristian@juniks.net > > To:"Diogo Della" ap010@terra.com.br > > Cc:xorp-users@xorp.org > > Date:Mon, 6 Jun 2005 04:21:43 +0200 > > Subject:Re: [Xorp-users] Sorry, the problem is not RIP, but the > routing table > > > First of all, try to keep everything in one thread. There are now > > numerous threads all coming from you on the same subject. And it > > looks real messy in my mail reader ;) > > > > Anyway, you haven't by any chance changed something in /etc/hosts, > > perhaps the ip of localhost? > > Is it just ssh or does everything, like ping and so on, stop > > working as well? What if you try pinging or ssh to 127.0.0.1 > > > > it looks correct from over here, and when doing this on my machine > > (also freebsd) I don't get the same errors. > > > > //Kristian Larsson > > > > On Sun, Jun 05, 2005 at 07:51:13PM -0300, Diogo Della wrote: > > > I made more tests. > > > > > > When I put routes at the route table of FreeBSD, it does not > > > accept any more connections from localhost or from other hosts on > > > the subnet. 
> > > > > > Look what happens: > > > 1- > > > router2# ssh localhost > > > Password: > > > 2- > > > route add -net 192.168.67.0/24 172.16.3.1 > > > route add -net 192.168.68.0/24 172.16.5.3 > > > 3- > > > router2# netstat -nr | less > > > Routing tables > > > Internet: > > > Destination Gateway Flags Refs Use Netif Expire > > > 127.0.0.1 127.0.0.1 UH 0 97481 lo0 > > > 172.16.3/24 link#2 UC 1 0 fxp0 > > > 172.16.3.1 00:02:2a:d3:07:ab UHLW 2 999 fxp0 979 > > > 172.16.5/24 link#3 UC 1 0 rl0 > > > 172.16.5.3 link#3 UHLW 1 0 rl0 > > > 192.168.67 172.16.3.1 UGSc 0 0 fxp0 > > > 192.168.68 172.16.5.3 UGSc 0 0 rl0 > > > 192.168.69 link#1 UC 1 0 sis0 > > > 192.168.69.200 00:0c:6e:33:0c:ae UHLW 0 8 sis0 243 > > > 4- > > > router2# ssh localhost > > > ^C > > > (It times out and I have to kill it with CTRL + C) > > > 5- > > > delete net 192.168.67.0: gateway 172.16.3.1 > > > delete net 192.168.68.0: gateway 172.16.5.3 > > > 6- > > > router2# ssh localhost > > > Password: > > > > > > Why does this happen? Is it because of a security level of FreeBSD, > > > and how do I change this? > > > > > > Thanks > > > > > > Diogo Della From M.Calleja@damtp.cam.ac.uk Tue Jun 7 17:51:35 2005 From: M.Calleja@damtp.cam.ac.uk (Mark Calleja) Date: Tue, 07 Jun 2005 17:51:35 +0100 Subject: [Xorp-users] Newbie Xorp & m'cast problem Message-ID: <42A5D097.8070103@damtp.cam.ac.uk> Hi, I've got a firewall (running shorewall actually) with two NICs and I've been trying to get m'cast to pass in either direction without much success. Our setup's a bit unusual in that the two NICs are actually on the same network address range, and unicast routing is achieved by proxyarp'ing. Not a nice solution, but it works. However, I've tried using Xorp to get the m'cast to be conveyed without much joy, and I'd appreciate it if anyone here can spot where I'm going wrong. 
Here's our config.boot file:
=====
interfaces {
    interface eth0 {
        disable: false
        description: "Global interface"
        default-system-config
    }
    interface eth1 {
        disable: false
        description: "Firewalled interface"
        default-system-config
    }
}

fea {
    unicast-forwarding4 {
        disable: false
    }
}

plumbing {
    mfea4 {
        disable: false
        interface eth0 {
            vif eth0 {
                disable: false
            }
        }
        interface eth1 {
            vif eth1 {
                disable: false
            }
        }
        interface register_vif {
            vif register_vif {
                /* Note: this vif should be always enabled */
                disable: false
            }
        }
        traceoptions {
            flag all {
                disable: false
            }
        }
    }
}

protocols {
    igmp {
        disable: false
        interface eth0 {
            vif eth0 {
                disable: false
            }
        }
        interface eth1 {
            vif eth1 {
                disable: false
            }
        }
        traceoptions {
            flag all {
                disable: false
            }
        }
    }
}

protocols {
    pimsm4 {
        disable: false
        interface eth0 {
            vif eth0 {
                disable: false
            }
        }
        interface eth1 {
            vif eth1 {
                disable: false
            }
        }
        interface register_vif {
            vif register_vif {
                /* Note: this vif should be always enabled */
                disable: false
            }
        }
        switch-to-spt-threshold {
            disable: false
            interval-sec: 100
            bytes: 102400
        }
        traceoptions {
            flag all {
                disable: false
            }
        }
    }
}

protocols {
    fib2mrib {
        disable: false
    }
}
=====
The firewall is configured to allow all IGMP and packets in the range 224.0.0.0/4 through, and Xorp comes up cleanly enough, but it seems to get the information about the interfaces which are already there wrong, i.e. 
here's what Xorp reports: [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Added new address to vif eth0: addr: 131.111.20.148 subnet: 131.111.20.0/24 broadcast: 131.111.20.191 peer:0.0.0.0 [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Interface flags changed: Vif[eth0] pif_index: 0 vif_index: 0 addr: 131.111.20.148 subnet: 131.111.20.0/24 broadcast: 131.111.20.191 peer: 0.0.0.0 Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Interface added: Vif[eth1] pif_index: 0 vif_index: 1 Flags: [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Added new address to vif eth1: addr: 131.111.20.132 subnet: 131.111.20.0/24 broadcast: 131.111.20.191 peer:0.0.0.0 However, those two addresses have subnet 131.111.20.0/26. Also, what's that peer:0.0.0.0 all about? Anyway, when I try to run an mping job across the f/w, with one machine listening on 229.255.255.2, here's Xorp's output. The listening machine has IP 131.111.20.151 and is outside the firewall on eth0 on the f/w, while the sender is 131.111.20.167 and is firewalled behind eth1: [ 2005/06/07 17:28:39 TRACE xorp_pimsm4 PIM ] TX PIM_HELLO from 131.111.20.148 to 224.0.0.13 on vif eth0 [ 2005/06/07 17:28:41 TRACE xorp_pimsm4 PIM ] TX PIM_HELLO from 131.111.20.132 to 224.0.0.13 on vif eth1 [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] TX IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 224.0.0.1 [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] RX IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 224.0.0.1 on vif eth0 [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] TX IGMP_MEMBERSHIP_QUERY from 131.111.20.132 to 224.0.0.1 [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] RX IGMP_MEMBERSHIP_QUERY from 131.111.20.132 to 224.0.0.1 on vif eth0 [ 2005/06/07 17:28:42 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.132 to 224.0.0.2 on vif eth0 [ 2005/06/07 17:28:42 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.167 to 229.255.255.2 on vif eth0 [ 
2005/06/07 17:28:43 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.132 to 224.0.0.13 on vif eth0 [ 2005/06/07 17:28:45 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.144 to 239.255.255.250 on vif eth0 [ 2005/06/07 17:28:46 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.148 to 224.0.0.13 on vif eth0 [ 2005/06/07 17:28:47 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.163 to 229.255.255.2 on vif eth0 [ 2005/06/07 17:28:47 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.151 to 224.0.1.41 on vif eth0 [ 2005/06/07 17:28:51 WARNING xorp_fea MFEA ] proto_socket_read() failed: RX packet from 192.153.213.109 to 224.0.0.13: no vif found [ 2005/06/07 17:28:51 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.148 to 224.0.0.2 on vif eth0 [ 2005/06/07 17:28:52 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_LEAVE_GROUP from 131.111.20.151 to 224.0.0.2 on vif eth0 [ 2005/06/07 17:28:52 TRACE xorp_igmp MLD6IGMP ] TX IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 229.255.255.2 [ 2005/06/07 17:28:52 TRACE xorp_igmp MLD6IGMP ] RX IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 229.255.255.2 on vif eth0 [ 2005/06/07 17:28:53 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.163 to 229.255.255.2 on vif eth0 [ 2005/06/07 17:28:56 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.167 to 229.255.255.2 on vif eth0 [ 2005/06/07 17:28:57 ERROR xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 [ 2005/06/07 17:28:57 ERROR xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 [ 2005/06/07 17:29:02 ERROR 
xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: invalid unicast sender address: 0.0.0.0 I know this is a lot of stuff I've cut 'n' pasted, but any help on where I'm going wrong would be appreciated! Thanks, Mark From pavlin@icir.org Tue Jun 7 21:36:19 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Tue, 07 Jun 2005 13:36:19 -0700 Subject: [Xorp-users] Newbie Xorp & m'cast problem In-Reply-To: Message from Mark Calleja of "Tue, 07 Jun 2005 17:51:35 BST." <42A5D097.8070103@damtp.cam.ac.uk> Message-ID: <200506072036.j57KaJI7088468@possum.icir.org> > I've got a firewall (running shorewall actually) with two NICs and I've > been trying to get m'cast to pass in either way without much success. > Our setup's a bit unusual in that the two NICs are actually on the same > network address range, and unicast routing is achieved by proxyarp'ing. This may be problematic for multicast routing. Are the two NICs connected physically to the same LAN? If yes, you cannot really do multicast routing between the two interfaces. The reason you cannot forward multicast packets from and to the same LAN is because this creates a loop. Can you draw a diagram of what exactly you want to do and we can verify whether it is really possible. > Not a nice solution, but it works. However, I've tried using Xorp to get > the m'cast to be conveyed without much joy, and I'd appreciate if anyone > here can spot where I'm going wrong. 
> Here's our config.boot file: The config seems fine. > The firewall is configured to allow all IGMP and packets in the range > 224.0.0.0/4 through, and Xorp comes up cleanly enough, but it seems to You need to enable the PIM packets as well. > get the information about the interfaces which are already there wrong, > i.e. here's what Xorp reports: > > [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Added new address to vif > eth0: addr: 131.111.20.148 subnet: 131.111.20.0/24 broadcast: > 131.111.20.191 peer:0.0.0.0 > [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Interface flags changed: > Vif[eth0] pif_index: 0 vif_index: 0 addr: 131.111.20.148 subnet: > 131.111.20.0/24 broadcast: 131.111.20.191 peer: 0.0.0.0 Flags: MULTICAST > BROADCAST UNDERLYING_VIF_UP > [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Interface added: > Vif[eth1] pif_index: 0 vif_index: 1 Flags: > [ 2005/06/07 17:26:05 INFO xorp_igmp MLD6IGMP ] Added new address to vif > eth1: addr: 131.111.20.132 subnet: 131.111.20.0/24 broadcast: > 131.111.20.191 peer:0.0.0.0 > > However, those two addresses have subnet 131.111.20.0/26. Also, what's > that peer:0.0.0.0 all about? Can you double-check with "ifconfig -a" and "ip addr" that your IP addresses are really 131.111.20.0/26? Also, use the xorpsh command "show interface" to see what the XORP FEA thinks the interface addresses should be. If the peer address is 0.0.0.0, you can ignore it (it is used in case of p2p links). > Anyway, when I try to run an mping job across the f/w, with one machine > listening on 229.255.255.2, here's Xorp's output. The listening machine > has IP 131.111.20.151 and is outside the firewall on eth0 on the f/w, > while the sender is 131.111.20.167 and is firewalled behind eth1: From the logs below it looks like 131.111.20.151 is listening to 224.0.1.41. On the other hand, 131.111.20.167 and 131.111.20.163 appear to have joined group 229.255.255.2, hence please double-check the groups each host has joined. 
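The subnet question above can be checked with a little address arithmetic. A minimal sketch using Python's standard `ipaddress` module (an editorial illustration, not part of the original thread): the broadcast address 131.111.20.191 that XORP logs is exactly the broadcast of 131.111.20.128/26, which contains both vif addresses, whereas a /24 would have broadcast 131.111.20.255:

```python
import ipaddress

# The vif addresses and broadcast address taken from the XORP log above.
addrs = ["131.111.20.148", "131.111.20.132"]

for a in addrs:
    net26 = ipaddress.ip_interface(f"{a}/26").network
    net24 = ipaddress.ip_interface(f"{a}/24").network
    # Both hosts fall in 131.111.20.128/26, whose broadcast is .191,
    # matching the logged broadcast; a /24 would broadcast to .255.
    print(a, net26, net26.broadcast_address, net24.broadcast_address)
```

So the logged broadcast is self-consistent with a /26, which supports Mark's claim that the "subnet: 131.111.20.0/24" XORP prints is wrong.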
Few more comments are inlined below. > [ 2005/06/07 17:28:39 TRACE xorp_pimsm4 PIM ] TX PIM_HELLO from > 131.111.20.148 to 224.0.0.13 on vif eth0 > [ 2005/06/07 17:28:41 TRACE xorp_pimsm4 PIM ] TX PIM_HELLO from > 131.111.20.132 to 224.0.0.13 on vif eth1 > [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] TX > IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 224.0.0.1 > [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 224.0.0.1 on vif eth0 > [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] TX > IGMP_MEMBERSHIP_QUERY from 131.111.20.132 to 224.0.0.1 > [ 2005/06/07 17:28:41 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_MEMBERSHIP_QUERY from 131.111.20.132 to 224.0.0.1 on vif eth0 > [ 2005/06/07 17:28:42 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.132 to 224.0.0.2 on vif eth0 > [ 2005/06/07 17:28:42 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.167 to 229.255.255.2 on vif eth0 > [ 2005/06/07 17:28:43 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.132 to 224.0.0.13 on vif eth0 > [ 2005/06/07 17:28:45 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.144 to 239.255.255.250 on vif eth0 > [ 2005/06/07 17:28:46 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.148 to 224.0.0.13 on vif eth0 > [ 2005/06/07 17:28:47 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.163 to 229.255.255.2 on vif eth0 > [ 2005/06/07 17:28:47 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.151 to 224.0.1.41 on vif eth0 > [ 2005/06/07 17:28:51 WARNING xorp_fea MFEA ] proto_socket_read() > failed: RX packet from 192.153.213.109 to 224.0.0.13: no vif found This "no vif found" error is probably because the XORP router doesn't have an interface that shares the same subnet as 192.153.213.109. 
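The vif lookup behind the "no vif found" warning can be sketched as follows (assumed logic for illustration only, not XORP's actual MFEA code; the /26 prefix is inferred from the broadcast address in the logs): the source address of an incoming packet is matched against each vif's directly connected subnet, and 192.153.213.109 falls in none of them:

```python
import ipaddress

# Hypothetical vif table for this router: vif name -> connected subnet.
vifs = {
    "eth0": ipaddress.ip_network("131.111.20.128/26"),
    "eth1": ipaddress.ip_network("131.111.20.128/26"),
}

def find_vif(src):
    """Return the first vif whose connected subnet contains src, else None."""
    src = ipaddress.ip_address(src)
    for name, subnet in vifs.items():
        if src in subnet:
            return name
    return None

print(find_vif("131.111.20.167"))   # a local sender: matches a vif
print(find_vif("192.153.213.109"))  # off-subnet PIM hello: None, "no vif found"
```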
> [ 2005/06/07 17:28:51 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.148 to 224.0.0.2 on vif eth0 > [ 2005/06/07 17:28:52 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_LEAVE_GROUP > from 131.111.20.151 to 224.0.0.2 on vif eth0 I presume here you have stopped the 131.111.20.151 receiver. > [ 2005/06/07 17:28:52 TRACE xorp_igmp MLD6IGMP ] TX > IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 229.255.255.2 > [ 2005/06/07 17:28:52 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_MEMBERSHIP_QUERY from 131.111.20.148 to 229.255.255.2 on vif eth0 > [ 2005/06/07 17:28:53 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.163 to 229.255.255.2 on vif eth0 > [ 2005/06/07 17:28:56 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORTfrom 131.111.20.167 to 229.255.255.2 on vif eth0 > [ 2005/06/07 17:28:57 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 > [ 2005/06/07 17:28:57 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 > [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 > [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 > [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 > [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 > [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340 > mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed: > invalid unicast sender address: 0.0.0.0 Those "invalid 
unicast sender address: 0.0.0.0" messages are odd. Do you run a sender or a receiver on the same box as the XORP router? In any case, please run tcpdump on all network interfaces and try to catch if there are any IP packets that have indeed source address of 0.0.0.0. If you cannot catch such packets in action, I will send you a patch to fea/mfea_proto_comm.cc that will print some extra debug info about those mysterious packets. Pavlin > > I know this is a lot of stuff I've cut 'n' pasted, but any help on where > I'm going wrong would be appreciated! > > Thanks, > Mark > > _______________________________________________ > Xorp-users mailing list > Xorp-users@xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From diogodto@terra.com.br Tue Jun 7 02:30:55 2005 From: diogodto@terra.com.br (Diogo Della) Date: Mon, 6 Jun 2005 22:30:55 -0300 Subject: [Xorp-users] Sorry, the problem is not RIP, but the routing table References: <200506062105.j56L5e5c097699@possum.icir.org> Message-ID: <001c01c56b00$8dd9cb40$01c8a8c0@apolo> This is a multi-part message in MIME format. 
Pavlin,

I figured out that the problem was with DNS, but I couldn't tell why. Chris explained below:

"It could be the reverse DNS lookup timing out.  When the main cable is
unplugged, it can't resolve the client IP and won't connect.  As soon as
connectivity is restored, the DNS succeeds and client logs in again.
Try adding the '-v' flag to ssh and see where it's timing out.

Chris"

Best regards,

Diogo Della Torres Oliveira
http://www.della.eng.br
diogodto@terra.com.br
MSN: diogodto@hotmail.com
+55 (61) 8401-7070

----- Original Message -----
  From: Pavlin Radoslavov
  To: Diogo Della
  Cc: xorp-users@xorp.org
  Sent: Monday, June 06, 2005 6:05 PM
  Subject: Re: [Xorp-users] Sorry, the problem is not RIP, but the routing table

  I have the feeling that either somehow the route to your interface
  address is missing, or "localhost" is resolved to something that is
  not reachable (e.g., by default "localhost" maps to the "::1" IPv6
  address) or something like this.

  The add/delete routes obviously affect the routing table so
  something may be (mis)routed in an unexpected direction.
  Try all your ftp/ssh/ping tests with IP addresses first (127.0.0.1
  and the address(es) of your interfaces), and if they succeed, then
  try to figure-out where a request like "ssh localhost" is sent to.

  Pavlin
cHQgDQogIGE9PEJSPiZndDsgbnkgbW9yZSBjb25uZWN0aW9uIGZyb20gbG9jYWxob3N0IG9yIGZy b20gb3RoZXIgaG9zdCBhdCB0aGUgDQogIHN1Ym5ldC48QlI+Jmd0OyAmZ3Q7ICZndDsgPEJSPiZn dDsgJmd0OyAmZ3Q7IExvb2sgd2hhdCBoYXBwZW5zOjxCUj4mZ3Q7ICZndDsgDQogICZndDsgMS08 QlI+Jmd0OyAmZ3Q7ICZndDsgcm91dGVyMiMgc3NoIGxvY2FsaG9zdDxCUj4mZ3Q7ICZndDsgJmd0 OyANCiAgUGFzc3dvcmQ6PEJSPiZndDsgJmd0OyAmZ3Q7IDItPEJSPiZndDsgJmd0OyAmZ3Q7IHJv dXRlIGFkZCAtbmV0IA0KICAxOTIuMTY4LjY3LjAvMjQgMTcyLjE2LjMuMTxCUj4mZ3Q7ICZndDsg Jmd0OyByb3V0ZSBhZGQgLW5ldCAxOTIuMTY4LjY4LjAvMjQgDQogIDE3Mi4xNi41LjM8QlI+Jmd0 OyAmZ3Q7ICZndDsgMy08QlI+Jmd0OyAmZ3Q7ICZndDsgcm91dGVyMiMgbmV0c3RhdCAtbnIgfCAN CiAgbGVzczxCUj4mZ3Q7ICZndDsgJmd0OyBSb3V0aW5nIHRhYmxlczxCUj4mZ3Q7ICZndDsgJmd0 OyBJbnRlcm5ldDo8QlI+Jmd0OyAmZ3Q7IA0KICAmZ3Q7IERlc3RpbmF0aW9uIEdhdGV3YXkgRmxh Z3MgUmVmcyBVc2UgTmV0aWYgRXhwaXJlPEJSPiZndDsgJmd0OyAmZ3Q7IA0KICAxMjcuMC4wLjEg MTI3LjAuMC4xIFVIIDAgOTc0ODEgbG8wPEJSPiZndDsgJmd0OyAmZ3Q7IDE3Mi4xNi4zLzI0IGxp bmsjMiBVQyAxIDAgDQogIGZ4cDA8QlI+Jmd0OyAmZ3Q7ICZndDsgMTcyLjE2LjMuMSAwMDowMjoy YTpkMzowNzphYiBVSExXIDIgOTk5IGZ4cDAgDQogIDk3OTxCUj4mZ3Q7ICZndDsgJmd0OyAxNzIu MTYuNS8yNCBsaW5rIzMgVUMgMSAwIHJsMDxCUj4mZ3Q7ICZndDsgJmd0OyANCiAgMTcyLjE2LjUu MyBsaW5rIzMgVUhMVyAxIDAgcmwwPEJSPiZndDsgJmd0OyAmZ3Q7IDE5Mi4xNjguNjcgMTcyLjE2 LjMuMSBVR1NjIDAgDQogIDAgZnhwMDxCUj4mZ3Q7ICZndDsgJmd0OyAxOTIuMTY4LjY4IDE3Mi4x Ni41LjMgVUdTYyAwIDAgcmwwPEJSPiZndDsgJmd0OyAmZ3Q7IA0KICAxOTIuMTY4LjY5IGxpbmsj MSBVQyAxIDAgc2lzMDxCUj4mZ3Q7ICZndDsgJmd0OyAxOTIuMTY4LjY5LjIwMCANCiAgMDA6MGM6 NmU6MzM6MGM6YWUgVUhMVyAwIDggc2lzMCAyNDM8QlI+Jmd0OyAmZ3Q7ICZndDsgNC08QlI+Jmd0 OyAmZ3Q7ICZndDsgDQogIHJvdXRlcjIjIHNzaCBsb2NhbGhvc3Q8QlI+Jmd0OyAmZ3Q7ICZndDsg XkM8QlI+Jmd0OyAmZ3Q7ICZndDsgKEl0IHRpbWVvdXQgYW5kIA0KICBJIGhhdmUgdG8ga2lsbCB3 aXRoIENUUkwgKyBDICk8QlI+Jmd0OyAmZ3Q7ICZndDsgNS08QlI+Jmd0OyAmZ3Q7ICZndDsgZGVs ZXRlIA0KICBuZXQgMTkyLjE2OC42Ny4wOiBnYXRld2F5IDE3Mi4xNi4zLjE8QlI+Jmd0OyAmZ3Q7 ICZndDsgZGVsZXRlIG5ldCANCiAgMTkyLjE2OC42OC4wOiBnYXRld2F5IDE3Mi4xNi41LjM8QlI+ 
Jmd0OyAmZ3Q7ICZndDsgNi08QlI+Jmd0OyAmZ3Q7ICZndDsgDQogIHJvdXRlcjIjIHNzaCBsb2Nh bGhvc3Q8QlI+Jmd0OyAmZ3Q7ICZndDsgUGFzc3dvcmQ6PEJSPiZndDsgJmd0OyAmZ3Q7IDxCUj4m Z3Q7IA0KICAmZ3Q7ICZndDsgV2h5IGRvZXMgdGhpcyBoYXBwZW5zPyBJcyBpdCBiZWNhdXNlIGEg c2VjdXJ0eSBsZXZlbCBvZiBGcmVlQlNELCBob3cgDQogID08QlI+Jmd0OyBhIGNoYW5nZSB0aGlz PzxCUj4mZ3Q7ICZndDsgJmd0OyA8QlI+Jmd0OyAmZ3Q7ICZndDsgVGhhbmtzPEJSPiZndDsgDQog ICZndDsgJmd0OyA8QlI+Jmd0OyAmZ3Q7ICZndDsgRGlvZ28gRGVsbGE8QlI+PC9CTE9DS1FVT1RF PjwvQk9EWT48L0hUTUw+DQo= ------=_NextPart_000_0019_01C56AE7.6769BE20-- From xylania@yahoo.no Thu Jun 9 12:42:14 2005 From: xylania@yahoo.no (Stig Arnesen) Date: Thu, 9 Jun 2005 13:42:14 +0200 (CEST) Subject: [Xorp-users] Cannot map a discard route.. Message-ID: <20050609114214.59588.qmail@web26905.mail.ukl.yahoo.com> Hi, ref my earlier mails. I still have problems. To sort out the easy parts first i will start with the first error message: Cannot map a discard route back to an FEA softdiscard interface. What does this mean my fea is just like in the manual. All interfaces have ipv6 definitions. Regards Stig From pavlin@icir.org Thu Jun 9 18:31:53 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Thu, 09 Jun 2005 10:31:53 -0700 Subject: [Xorp-users] Cannot map a discard route.. In-Reply-To: Message from Stig Arnesen of "Thu, 09 Jun 2005 13:42:14 +0200." <20050609114214.59588.qmail@web26905.mail.ukl.yahoo.com> Message-ID: <200506091731.j59HVrbT008850@possum.icir.org> > Hi, ref my earlier mails. I still have problems. > To sort out the easy parts first i will start with the > first error message: Cannot map a discard route back > to an FEA softdiscard interface. > What does this mean my fea is just like in the manual. > All interfaces have ipv6 definitions. You can safely ignore those "Cannot map a discard route back to an FEA soft discard interface." messages on startup, because they are bogus. We will get rid of them for the next release. 
Pavlin

From dan@obluda.cz Mon Jun 13 23:56:12 2005
From: dan@obluda.cz (Dan Lukes)
Date: Tue, 14 Jun 2005 00:56:12 +0200
Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem
Message-ID: <42AE0F0C.6000301@obluda.cz>

>> [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340
>> mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed:
>> invalid unicast sender address: 0.0.0.0
>
> Those "invalid unicast sender address: 0.0.0.0" messages are odd.
> Do you run a sender or a receiver on the same box as the XORP
> router?
> In any case, please run tcpdump on all network interfaces and try to
> catch if there are any IP packets that have indeed source address of
> 0.0.0.0.

I had a very large tcpdump capture with several packets sourced from 0.0.0.0. I have deleted it already, but if I remember correctly, the suspicious packets were IGMP leave messages sent to the ALL-ROUTERS group.

I have only one example of it now:

00:28:51.713070 0.0.0.0 > ALL-ROUTERS.MCAST.NET: igmp leave 227.11.22.33 [tos 0xc0] [ttl 1]

I have no MAC address, so I don't know the exact source. They seem to be sent by hosts which have lost their IP (DHCP renewal expired, for example) or are shutting down.

There seems to be no reason to ignore IGMP from this source unless we are worried about forged leave messages.

Well, it may not help to analyse Mark's problem much, but it is a hint that IGMP messages from 0.0.0.0 may not be rare.

Dan

From dan@obluda.cz Tue Jun 14 00:21:54 2005
From: dan@obluda.cz (Dan Lukes)
Date: Tue, 14 Jun 2005 01:21:54 +0200
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
Message-ID: <42AE1512.7040200@obluda.cz>

I have a simple configuration: FreeBSD 4.11 with PIM support; one interface (vlan31) connected to an ISP supporting PIM-SM; the second interface (vlan666) is a local network which contains no routers - end-user stations only.
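The topology described above maps to a config.boot fragment along the following lines. This is a hypothetical sketch only: exact statement names and defaults vary between XORP releases, so check the XORP user manual for the syntax of your version. The interface and vif names are the ones from the mail.

```
protocols {
    pimsm4 {
        interface vlan31 {
            vif vlan31 {
            }
        }
        interface vlan666 {
            vif vlan666 {
                disable: true
            }
        }
    }
}
```

This `disable: true` on vlan666 is the `protocols.pimsm4.interface vlan666.vif vlan666.disable` setting discussed next.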
I don't want a misconfigured user station to interfere with multicast routing, so I don't want to run the multicast routing protocol on the internal interface. The natural way seems to be not to run PIM on it, e.g. to set protocols.pimsm4.interface vlan666.vif vlan666.disable to true.

Unfortunately, that isn't possible. IGMP messages received from the internal network may trigger a NOCACHE kernel message coming from the internal interface. Although the message is not created by a PIM routing protocol event, the "disable" option still applies: the kernel message is rejected because the source interface isn't PIM-UP.

On the other side, when I enable PIM on vlan666 it sends and accepts PIM-HELLO and does other PIM-related tasks, including elections.

Is there an option to enable multicast routing on the interface but without PIM enabled on it?

Well, I can use a firewall to block PIM-ROUTERS.MCAST.NET packets on vlan666, but that's only a workaround, not a clean way ...

Dan

From pavlin@icir.org Wed Jun 15 07:10:27 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Tue, 14 Jun 2005 23:10:27 -0700
Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem
In-Reply-To: Message from Dan Lukes of "Tue, 14 Jun 2005 00:56:12 +0200." <42AE0F0C.6000301@obluda.cz>
Message-ID: <200506150610.j5F6ARXT004798@possum.icir.org>

> >> [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340
> >> mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed:
> >> invalid unicast sender address: 0.0.0.0
> >
> > Those "invalid unicast sender address: 0.0.0.0" messages are odd.
> > Do you run a sender or a receiver on the same box as the XORP
> > router?
> > In any case, please run tcpdump on all network interfaces and try to
> > catch if there are any IP packets that have indeed source address of
> > 0.0.0.0.
>
> I had a very large tcpdump capture with several packets sourced from 0.0.0.0.
>
> I have deleted it already, but if I remember correctly, the suspicious
> packets were IGMP leave messages sent to the ALL-ROUTERS group.
>
> I have only one example of it now:
>
> 00:28:51.713070 0.0.0.0 > ALL-ROUTERS.MCAST.NET: igmp leave 227.11.22.33
> [tos 0xc0] [ttl 1]

Later Mark ran tcpdump, and I think his src=0.0.0.0 packets were also "IGMP leave" messages. I don't know whether he used the MAC address to find where the messages came from.

> I have no MAC address, so I don't know the exact source.
>
> They seem to be sent by hosts which have lost their IP (DHCP renewal
> expired, for example) or are shutting down.

This is an interesting observation. If the DHCP renewal expired, and if the IP stack removed the IP address before it sent the IGMP leave message, then the IGMP leave message may indeed appear from 0.0.0.0. It looks like a bug in the particular IP stack implementation, and it would be interesting to find the OS(es) with such behavior.

> There seems to be no reason to ignore IGMP from this source unless
> we are worried about forged leave messages.

Indeed, Section 10 (Security Considerations) in RFC 2236 says that we shouldn't accept such leave messages:

    - Ignore the Leave message if you cannot identify the source
      address of the packet as belonging to a subnet assigned to the
      interface on which the packet was received. This solution means
      that Leave messages sent by mobile hosts without addresses on the
      local subnet will be ignored.

Furthermore, some OSes don't tell us the interface a packet has arrived on, and in case of link-local multicast we have to use the source address to match it against the corresponding interface. Hence, in such a case we cannot match packets with src=0.0.0.0 to the corresponding interface.

In any case, ignoring suspicious IGMP leave messages is harmless, because in the worst case it will take up to [Group Membership Interval] (260 sec) to time out the last group member.

Pavlin

> Well, it may not help to analyse Mark's problem much, but it is a hint
> that IGMP messages from 0.0.0.0 may not be rare.
>
> Dan

From pavlin@icir.org Wed Jun 15 08:14:35 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Wed, 15 Jun 2005 00:14:35 -0700
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
In-Reply-To: Message from Dan Lukes of "Tue, 14 Jun 2005 01:21:54 +0200." <42AE1512.7040200@obluda.cz>
Message-ID: <200506150714.j5F7EZPR005217@possum.icir.org>

> I have a simple configuration: FreeBSD 4.11 with PIM support; one
> interface (vlan31) connected to an ISP supporting PIM-SM; the second
> interface (vlan666) is a local network which contains no routers -
> end-user stations only.
>
> I don't want a misconfigured user station to interfere with multicast
> routing, so I don't want to run the multicast routing protocol on the
> internal interface.

Is your concern that a misconfigured user station will start transmitting PIM-SM control messages that will interfere with your XORP router? It is extremely unlikely that even a badly misconfigured end-user station will somehow originate PIM-SM messages - unless, of course, it was misconfigured to run a PIM-SM daemon or another program that is capable of transmitting PIM-SM messages.

> The natural way seems to be not to run PIM on it, e.g. to set
> protocols.pimsm4.interface vlan666.vif vlan666.disable to true.
>
> Unfortunately, that isn't possible. IGMP messages received from the
> internal network may trigger a NOCACHE kernel message coming from the
> internal interface. Although the message is not created by a PIM routing
> protocol event, the "disable" option still applies: the kernel message
> is rejected because the source interface isn't PIM-UP.

FYI, the NOCACHE kernel messages should be triggered by multicast data packets seen on interfaces enabled for multicast routing.

> On the other side, when I enable PIM on vlan666 it sends and
> accepts PIM-HELLO and does other PIM-related tasks, including elections.
>
> Is there an option to enable multicast routing on the interface but
> without PIM enabled on it?

Not at this time.
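In the interim, the firewall workaround Dan mentions in his original mail can be sketched with FreeBSD's ipfw. This is an illustrative fragment only: the rule number is arbitrary, 224.0.0.13 is PIM-ROUTERS.MCAST.NET, and the "pim" protocol name relies on the protocol 103 entry in /etc/protocols.

```
# Drop PIM control traffic on the internal interface only; 224.0.0.13 is
# the ALL-PIM-ROUTERS group (PIM-ROUTERS.MCAST.NET), IP protocol 103 = PIM.
ipfw add 1000 deny pim from any to 224.0.0.13 via vlan666
```

A rule like this blocks PIM Hello/Join/Prune/Assert traffic on vlan666 while leaving IGMP and multicast data untouched; as discussed below, it is a stopgap rather than a clean per-interface policy.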
Ideally, we should have multicast routing policy, and one of the policy options would be to disable the receiving of all (or a subset of) PIM control messages per interface. I think that in Juniper, for example, you can apply policy rules to disable PIM Join/Prune or Bootstrap messages per interface, but I don't know whether you can disable the PIM Hello and Assert messages as well.

Keep in mind that it is very dangerous to disregard legitimate PIM Assert messages from a neighbor. Hence, if there were an option to disable the receiving of all PIM control messages on an interface, and you misused that option between two legitimate PIM neighboring routers, then bad things could happen to your network.

> Well, I can use a firewall to block PIM-ROUTERS.MCAST.NET packets on
> vlan666, but that's only a workaround, not a clean way ...

First we need to look into what the multicast routing policy solution should be, and only after we design and implement it will there be an option to do exactly what you need. Unfortunately, multicast routing policy is not on our radar yet, so in the meantime you would have to use the above workaround.

Pavlin

From edrt@citiz.net Wed Jun 15 13:04:36 2005
From: edrt@citiz.net (edrt)
Date: Wed, 15 Jun 2005 20:04:36 +0800
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
Message-ID: <200506151200.j5FC0qfF083656@wyvern.icir.org>

>> The natural way seems to be not to run PIM on it, e.g. to set
>> protocols.pimsm4.interface vlan666.vif vlan666.disable to true.
>>
>> Unfortunately, that isn't possible. IGMP messages received from the
>> internal network may trigger a NOCACHE kernel message coming from the
>> internal interface. Although the message is not created by a PIM routing
>> protocol event, the "disable" option still applies: the kernel message
>> is rejected because the source interface isn't PIM-UP.
>
> FYI, the NOCACHE kernel messages should be triggered by multicast
> data packets seen on interfaces enabled for multicast routing.
>

I have observed the router's own IGMP report messages cause the router to receive a NOCACHE notification from the kernel on Linux 2.4 (maybe due to a kernel bug). If you are sure that the NOCACHE message is triggered by IGMP messages, there is a possibility the FreeBSD kernel also contains a similar bug.

>>
>> On the other side, when I enable PIM on vlan666 it sends and
>> accepts PIM-HELLO and does other PIM-related tasks, including elections.
>>
>> Is there an option to enable multicast routing on the interface but
>> without PIM enabled on it?
>
> Not at this time. Ideally, we should have multicast routing policy,
> and one of the policy options would be to disable the receiving of
> all (or a subset of) PIM control messages per interface.
> I think that in Juniper, for example, you can apply policy rules to
> disable PIM Join/Prune or Bootstrap messages per interface, but I
> don't know whether you can disable the PIM Hello and Assert messages
> as well.
> Keep in mind that it is very dangerous to disregard legitimate PIM
> Assert messages from a neighbor. Hence, if there were an option to
> disable the receiving of all PIM control messages on an interface,
> and you misused that option between two legitimate PIM neighboring
> routers, then bad things could happen to your network.
>

This info might be helpful: draft-savola-pim-lasthop-threats-01.txt ("Passive Mode for PIM").

Eddy

From edrt@citiz.net Wed Jun 15 13:11:52 2005
From: edrt@citiz.net (edrt)
Date: Wed, 15 Jun 2005 20:11:52 +0800
Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem
Message-ID: <200506151208.j5FC83cg083735@wyvern.icir.org>

>> >> [ 2005/06/07 17:29:02 ERROR xorp_fea:16848 MFEA +1340
>> >> mfea_proto_comm.cc proto_socket_read ] proto_socket_read() failed:
>> >> invalid unicast sender address: 0.0.0.0
>> >
>> > Those "invalid unicast sender address: 0.0.0.0" messages are odd.
>> > Do you run a sender or a receiver on the same box as the XORP
>> > router?
>> > In any case, please run tcpdump on all network interfaces and try to
>> > catch if there are any IP packets that have indeed source address of
>> > 0.0.0.0.
>>
>> I had a very large tcpdump capture with several packets sourced from 0.0.0.0.
>>
>> I have deleted it already, but if I remember correctly, the suspicious
>> packets were IGMP leave messages sent to the ALL-ROUTERS group.
>>
>> I have only one example of it now:
>>
>> 00:28:51.713070 0.0.0.0 > ALL-ROUTERS.MCAST.NET: igmp leave 227.11.22.33
>> [tos 0xc0] [ttl 1]
>
> Later Mark ran tcpdump, and I think his src=0.0.0.0 packets were also
> "IGMP leave" messages. I don't know whether he used the MAC address
> to find where the messages came from.
>
>> I have no MAC address, so I don't know the exact source.
>>
>> They seem to be sent by hosts which have lost their IP (DHCP renewal
>> expired, for example) or are shutting down.
>
> This is an interesting observation.
> If the DHCP renewal expired, and if the IP stack removed the IP
> address before it sent the IGMP leave message, then the IGMP leave
> message may indeed appear from 0.0.0.0.
>
> It looks like a bug in the particular IP stack implementation, and it
> would be interesting to find the OS(es) with such behavior.
>

I have once observed a popular vendor's router sometimes generate IGMP messages with source address 0.0.0.0; maybe the router's image needs an upgrade :)

Eddy

From mehdi.bensaid@rd.francetelecom.com Wed Jun 15 13:43:08 2005
From: mehdi.bensaid@rd.francetelecom.com (zze-BEN SAID Mehdi RD-CORE-ISS)
Date: Wed, 15 Jun 2005 14:43:08 +0200
Subject: [Xorp-users] Problem
Message-ID:

Hi everybody,

I have a little problem.
I'm trying to get information from XORP using a pexpect python script:

#!/usr/bin/env python
import sys
import pexpect

child = pexpect.spawn('xorpsh')
child.expect('Xorp> ')
child.sendline('show igmp group')
child.expect('Xorp> ')
print child.before  # I use this to print the result of the show command
child.close()

Everything works right, but I get nothing displayed! And of course, when I type the command manually on the xorpsh CLI, there are so many routes. Is there any solution?

Thanks,
Best Regards.

Mehdi

From dan@obluda.cz Wed Jun 15 18:07:26 2005
From: dan@obluda.cz (Dan Lukes)
Date: Wed, 15 Jun 2005 19:07:26 +0200
Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem
In-Reply-To: <200506150610.j5F6ARXT004798@possum.icir.org>
References: <200506150610.j5F6ARXT004798@possum.icir.org>
Message-ID: <42B0604E.3050207@obluda.cz>

Pavlin Radoslavov wrote:
> Later Mark ran tcpdump, and I think his src=0.0.0.0 packets were also
> "IGMP leave" messages. I don't know whether he used the MAC address
> to find where the messages came from.

Well, I caught one source. It's very interesting.

All messages come from 0.0.0.0, same MAC 0011.936d.61b2. All are IGMP leave, TTL=1, TOS=0xC0.
All have the same destination - ALL-ROUTERS.MCAST.NET

12:44:35.576909 igmp leave 224.2.255.237
12:44:35.579278 igmp leave 224.2.211.67
12:44:35.581899 igmp leave 239.116.74.140
12:44:35.583400 igmp leave 233.10.47.28
12:44:35.585777 igmp leave 233.10.47.64
12:44:35.587648 igmp leave 239.192.168.1
12:48:45.567557 igmp leave 233.10.47.33
12:48:45.571181 igmp leave 233.10.47.41
12:48:45.574182 igmp leave 233.10.47.48
12:48:45.576053 igmp leave 233.10.47.60
12:48:45.577930 igmp leave 233.10.47.71
14:01:40.604579 igmp leave 233.10.47.54
14:01:40.608204 igmp leave 233.10.47.14
14:01:40.611074 igmp leave 233.10.47.19
14:01:40.612821 igmp leave 233.45.17.245
14:01:40.614319 igmp leave 233.8.102.3
14:03:45.601905 igmp leave 236.168.161.223
14:03:45.604278 igmp leave 233.10.47.81
14:10:00.604030 igmp leave 233.10.47.14
14:12:05.606224 igmp leave 233.10.47.70
14:16:15.610246 igmp leave SAP.MCAST.NET
14:20:25.610388 igmp leave 224.1.1.1
14:20:25.612135 igmp leave 227.142.142.1
14:20:25.613635 igmp leave 233.10.47.63
14:20:25.616633 igmp leave 233.10.47.70
14:20:25.619006 igmp leave 233.8.102.5
14:24:35.619030 igmp leave SAP.MCAST.NET
14:24:35.620649 igmp leave 233.10.47.14
14:24:35.622276 igmp leave 233.10.47.81
15:04:10.629388 igmp leave SAP.MCAST.NET
15:04:10.630885 igmp leave 233.10.47.54
15:04:10.633509 igmp leave 233.10.47.82
15:33:20.638180 igmp leave SAP.MCAST.NET
15:33:20.640427 igmp leave 233.10.47.70

The MAC prefix is allocated to Cisco Systems. But we have no Cisco routers. Only 38 Cisco switches.

Cisco documentation says:
------
When a switch with IGMP snooping enabled receives an IP group-specific IGMPv2 leave message, it sends a group-specific query out the interface where the leave message was received to determine if there are any other hosts attached to that interface that are interested in the MAC multicast group.
If the switch does not receive an IGMP join message within the query-response-interval, and none of the other 31 IP groups corresponding to the MAC group are interested in the multicast traffic for that MAC group, and no multicast routers have been learned on the interface, then the interface is removed from the port list of the (mac-group, vlan) entry in the L2 forwarding table.
------

Another part of the documentation says the IGMP leave packet is forwarded only when there are no other subscribers to the group in question on the same switch.

My theory: imagine there is only one port on a switch subscribed to a group. The link on that port goes down with no IGMP LEAVE sent by the station connected to it. The switch knows that the last subscriber of the group has disappeared, so it sends an IGMP LEAVE by itself. As it has no IP address, it uses 0.0.0.0, which means "unknown source".

This allows the switch to unsubscribe the unnecessary multicast traffic as soon as possible to maintain optimal bandwidth management. I can't say it seems to be a bug.

>> There seems to be no reason to ignore IGMP from this source unless
>> we are worried about forged leave messages.
>
> Indeed, Section 10 (Security Considerations) in RFC 2236 says that
> we shouldn't accept such leave messages:
>
>     - Ignore the Leave message if you cannot identify the source
>       address of the packet as belonging to a subnet assigned to the
>       interface on which the packet was received. This solution means
>       that Leave messages sent by mobile hosts without addresses on the
>       local subnet will be ignored.
>
> Furthermore, some OSes don't tell us the interface a packet has
> arrived on, and in case of link-local multicast we have to use the
> source address to match it against the corresponding interface.
> Hence, in such a case we cannot match packets with src=0.0.0.0 to the
> corresponding interface.

Well, I mentioned the security considerations in my sentence myself.
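The RFC 2236 source-subnet check quoted above can be sketched in Python with the stdlib ipaddress module. This is a hypothetical helper for illustration, not XORP's actual code; the addresses in the example are the ones from this thread.

```python
import ipaddress

def accept_igmp_leave(src_addr, iface_subnets):
    """RFC 2236, Security Considerations: accept an IGMPv2 Leave only if
    the source address belongs to a subnet assigned to the interface on
    which the packet was received."""
    addr = ipaddress.ip_address(src_addr)
    if addr.is_unspecified:  # 0.0.0.0 cannot belong to any subnet
        return False
    return any(addr in ipaddress.ip_network(net) for net in iface_subnets)

# A host on the interface's subnet is accepted...
print(accept_igmp_leave("192.168.69.42", ["192.168.69.0/24"]))  # True
# ...while the switch-generated 0.0.0.0 Leaves captured above are not.
print(accept_igmp_leave("0.0.0.0", ["192.168.69.0/24"]))        # False
```

Under this rule the Cisco-switch Leaves from 0.0.0.0 are silently dropped, which is exactly the behavior being debated here.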
On an OS which doesn't tell you the arriving interface, you can't verify that the packet arrived on the appropriate interface regardless of the source address. An attacker can send a packet on net/interface lan1 with a source IP belonging to net/interface lan2. The security issue is not a problem of source 0.0.0.0 only.

Well, in the case where XORP needs the source address to derive the incoming interface (because the OS didn't tell it), it can't process a packet sourced from 0.0.0.0. In the other cases, the security decision should be configurable by the responsible administrator. There may be some external mechanism (a firewall, for example) that filters packets arriving on an incorrect interface, so XORP gets only packets arriving from the correct subnet - although it can't verify that by itself.

> In any case, ignoring suspicious IGMP leave messages is harmless,
> because in the worst case it will take up to [Group Membership
> Interval] (260 sec) to time out the last group member.

I have only short experience with multicast within a large production network, so I'm not sure about that. Many of our users have notebooks, and those notebooks are very often unplugged from the network without shutting down the applications. Our network means 900 computers; I don't know the numbers at our ISP or its upstream ISP. With the Cisco logic as described in the documentation and 0.0.0.0 packets dropped by XORP, there may be more than one forgotten stream at a time. Forgotten streams may not be as rare as you would wish.

In our network, accepting 0.0.0.0-sourced messages in XORP is totally harmless, so forgotten streams are pure waste. As we filter packets coming from an incorrect interface at the kernel level, I don't need to repeat the check within XORP. I would like to be able to disable the test in the XORP configuration. I'm responsible for our network security, so I hate it when some software unconditionally decides for me - especially when the decision is not optimal for us.
It's not critical to leave the stream running, but it's waste in our environment. I would like to be able to configure the paranoia level of XORP to best fit local needs, but you are right - it's not critical for now.

Sincerely,
Dan

P.S. In another message I asked about an interface connected to a pure IGMP subnet without PIM allowed on it. I would ask you, Pavlin, for a very short response. It's no problem if current XORP doesn't support this kind of configuration, but I'm not sure I didn't miss something important in the documentation. Thank you very much.

From pavlin@icir.org Wed Jun 15 18:17:03 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Wed, 15 Jun 2005 10:17:03 -0700
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
In-Reply-To: Message from "edrt" of "Wed, 15 Jun 2005 20:04:36 +0800." <200506151200.j5FC0qfF083656@wyvern.icir.org>
Message-ID: <200506151717.j5FHH3jn010203@possum.icir.org>

> > FYI, the NOCACHE kernel messages should be triggered by multicast
> > data packets seen on interfaces enabled for multicast routing.
>
> I have observed the router's own IGMP report messages cause the router
> to receive a NOCACHE notification from the kernel on Linux 2.4 (maybe
> due to a kernel bug).

Right, now I remember that you told me about it a long time ago, when you found this particular Linux bug.

> If you are sure that the NOCACHE message is triggered by IGMP messages,
> there is a possibility the FreeBSD kernel also contains a similar bug.

So far I haven't seen a similar bug in the FreeBSD kernel, but I should admit that I haven't tried FreeBSD-4.11 yet (the version that Dan is using).

Pavlin

From pavlin@icir.org Wed Jun 15 18:25:40 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Wed, 15 Jun 2005 10:25:40 -0700
Subject: [Xorp-users] Problem
In-Reply-To: Message from "zze-BEN SAID Mehdi RD-CORE-ISS" of "Wed, 15 Jun 2005 14:43:08 +0200."
Message-ID: <200506151725.j5FHPen0010319@possum.icir.org>

> Hi everybody,
> I have a little problem.
> I'm trying to get information from XORP using a pexpect python script:
>
> #!/usr/bin/env python
> import sys
> import pexpect
>
> child = pexpect.spawn('xorpsh')
> child.expect('Xorp> ')
> child.sendline('show igmp group')
> child.expect('Xorp> ')
> print child.before  # I use this to print the result of the show command
> child.close()
>
> Everything works right, but I get nothing displayed! And of course,
> when I type the command manually on the xorpsh CLI, there are so many
> routes. Is there any solution?

Does it work for commands like "show igmp interface" that would presumably display much shorter output?

One possible problem that comes to mind, given that the output is so long, is that it probably goes through the xorpsh built-in pager. To verify that, try to disable the pager by using the following command inside your python script:

show igmp group | no-more

Pavlin

From pavlin@icir.org Wed Jun 15 20:10:57 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Wed, 15 Jun 2005 12:10:57 -0700
Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem
In-Reply-To: Message from Dan Lukes of "Wed, 15 Jun 2005 19:07:26 +0200." <42B0604E.3050207@obluda.cz>
Message-ID: <200506151910.j5FJAvw4011330@possum.icir.org>

> Well, I caught one source. It's very interesting.
>
> All messages come from 0.0.0.0, same MAC 0011.936d.61b2. All are IGMP
> leave, TTL=1, TOS=0xC0. All have the same destination -
> ALL-ROUTERS.MCAST.NET
>
> The MAC prefix is allocated to Cisco Systems.
>
> But we have no Cisco routers. Only 38 Cisco switches.
>
> Cisco documentation says:
> ------
> When a switch with IGMP snooping enabled receives an IP group-specific
> IGMPv2 leave message, it sends a group-specific query out the interface
> where the leave message was received to determine if there are any other
> hosts attached to that interface that are interested in the MAC
> multicast group.
> If the switch does not receive an IGMP join message within the
> query-response-interval, and none of the other 31 IP groups
> corresponding to the MAC group are interested in the multicast traffic
> for that MAC group, and no multicast routers have been learned on the
> interface, then the interface is removed from the port list of the
> (mac-group, vlan) entry in the L2 forwarding table.
> ------
>
> Another part of the documentation says the IGMP leave packet is
> forwarded only when there are no other subscribers to the group in
> question on the same switch.
>
> My theory: imagine there is only one port on a switch subscribed to a
> group. The link on that port goes down with no IGMP LEAVE sent by the
> station connected to it. The switch knows that the last subscriber of
> the group has disappeared, so it sends an IGMP LEAVE by itself. As it
> has no IP address, it uses 0.0.0.0, which means "unknown source".
>
> This allows the switch to unsubscribe the unnecessary multicast traffic
> as soon as possible to maintain optimal bandwidth management. I can't
> say it seems to be a bug.

Interesting theory, and you may be right about how those packets have been generated. I don't want to argue about the legitimacy of such packets, but inside the IGMPv3 spec (RFC 3376), which we plan to implement in the future, I found the following text in the Security Considerations section:

"State-Change Report messages with a source address of 0.0.0.0 SHOULD be accepted on any interface."

Hence, this is a good enough reason for me to modify the MFEA to accept link-local multicast packets with a source address of 0.0.0.0 and let the protocol module itself deal with them. Though, the MFEA would accept the packets only if the kernel told it where the packet came from. If time allows, we will have this implemented for the next release. In addition, there probably will be an IGMP configuration statement to enable IGMP Leave messages with a source address of 0.0.0.0 (as you suggest).

> P.S.
> Within another message I asked about an interface connected to a pure IGMP
> subnet without PIM allowed on it. I would ask you, Pavlin, for a very
> short response. It's no problem if current XORP doesn't support this kind
> of configuration, but I'm not sure I didn't miss something important
> within the documentation. Thank you very much.

The short answer is that currently XORP doesn't support it. We would like to implement it, but it may take a while until we design and implement it properly as part of a more general multicast routing policy mechanism.

As a temporary solution we could add a PIM-SM configuration switch per interface that disables the sending and receiving of all PIM packets, but this switch may go away once we have the multicast routing policy in place.

Pavlin

From dan@obluda.cz Wed Jun 15 20:43:50 2005
From: dan@obluda.cz (Dan Lukes)
Date: Wed, 15 Jun 2005 21:43:50 +0200
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
In-Reply-To: <200506150714.j5F7EZPR005217@possum.icir.org>
References: <200506150714.j5F7EZPR005217@possum.icir.org>
Message-ID: <42B084F6.5090004@obluda.cz>

Pavlin Radoslavov wrote:
>> I have a simple configuration - FreeBSD 4.11 with PIM support; one
>> interface (vlan31) connected to an ISP supporting PIM-SM; the second
>> interface (vlan666) is a local network which contains no routers -
>> end-user stations only. I don't want a misconfigured user station to
>> interfere with multicast routing, so I don't want to run the multicast
>> routing protocol on the internal interface.
>
> Is your concern that a misconfigured user station will start
> transmitting PIM-SM control messages that will interfere with your
> XORP router?

You hit my apprehension exactly. Our stations have no central management. The user/owner of a computer is also its responsible administrator. The network operating rules forbid configuring user stations as routers of any kind, but that doesn't prevent a misconfiguration.
The configuration of the central network elements should enforce policies whenever possible.

>> The native way seems to be to not run PIM on it, e.g. set
>> protocols.pimsm4.interface vlan666.vif vlan666.disable to true
>>
>> Unfortunately, it isn't possible. The IGMP messages received from the
>> internal network may trigger a NOCACHE kernel message coming from the
>> internal interface. Although the message is not created by a PIM routing
>> protocol event, the "disable" option applies. The kernel message is rejected
>> because the source interface isn't PIM-UP.
>
> FYI, the NOCACHE kernel messages should be triggered by multicast
> data packets seen on interfaces enabled for multicast routing.

I'm not sure what exactly you mean by "interface enabled for multicast routing". I want an interface where the multicast packets are routed to and from. I don't want the routing protocol running on it, as there are no legitimate routers within the subnet.

>> Is there an option to enable multicast routing on the interface but
>> without PIM enabled on it ?
>
> Not at this time. Ideally, we should have multicast routing policy,
> and one of the policy options would be to disable the receiving of
> all (or a subset of) PIM control messages per interface.
> Have in mind that it is very dangerous to disregard legitimate PIM
> Assert messages from a neighbor. Hence if there was an option to
> disable the receiving of all PIM control messages on an interface,
> and if you misuse that option between two legitimate PIM neighboring
> routers, then bad things can happen to your network.

There is no other router within the subnet, so there are no PIM neighbors on it. No PIM messages should be received or sent on it. There are multicast hosts, so multicast packets should be routed to and from the subnet. Hosts are not allowed to send PIM Register by themselves.
> First we need to look into what the multicast routing policy
> solution should be, and only after we design and implement it there
> may be an option to do exactly what you need.

I'm not an experienced PIM user, so I'm not sure which problem we have now. Let's say:

1. "interface enabled for multicast routing" means multicast packets can be routed to and from the interface.
2. "there is a routing protocol running on the interface" means "routing protocol packets are sent and received on the interface".

I tried to run a configuration with

protocols pimsm4 interface vlan666 vif vlan666 disable = true
protocols igmp interface vlan666 vif vlan666 disable = false

wishing to get interface vlan666 enabled for multicast routing but not enabled for running the PIM routing protocol. The kernel sends some NOCACHE messages originated from vlan666. Those messages are ignored because vlan666 is not up for the purposes of PIM. So either:

A) the problem is with the FreeBSD kernel - NOCACHE messages should not be generated, or should be originated from another VIF; or
B) the problem is that XORP doesn't support the required configuration for now (e.g. interface enabled for multicast routing but not enabled for running a routing protocol).

I'm not sure if our problem is A or B or both for now. If you think the problem is A, I will debug it and then send a PR to the FreeBSD developers. If the problem is B, I'm probably speaking about draft-savola-pim-lasthop-threats-01, paragraph 4.1 (thanks, Eddy).

A stub network with only one multicast router seems to be a common configuration for many end networks. I'm not sure we must wait for a complex routing policy. A simple active/passive flag on a PIM interface seems to be easy to implement. It may be a sufficient "policy" for most multicast end-routers. I remember the active/passive flags from GateD's OSPF - they were independent from the fine policy settings. But it's the developers' decision, of course.
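The A/B question above hinges on where the NOCACHE upcall gets dropped. A toy model of the signal path as described in this thread - kernel upcall, MFEA check, PIM check - may make the distinction concrete (illustrative names only, not XORP's actual classes or API):

```python
# Minimal sketch of the NOCACHE signal path discussed in this thread:
# the kernel raises a NOCACHE upcall for the vif a data packet arrived on;
# the MFEA forwards the signal only if that vif is enabled in the MFEA
# section; PIM then drops it if the vif is not PIM-up.

def handle_nocache(vif_index, mfea_enabled_vifs, pim_up_vifs):
    """Return the fate of a kernel NOCACHE upcall for the given vif."""
    if vif_index not in mfea_enabled_vifs:
        return "ignored by MFEA"      # vif not enabled for multicast routing
    if vif_index not in pim_up_vifs:
        return "dropped by PIM"       # Dan's case: vif disabled in the PIM section
    return "PIM installs MFC entry"   # normal path: kernel gets a forwarding entry

# Dan's configuration: vlan666 (vif 2) enabled in MFEA/IGMP but disabled in PIM.
print(handle_nocache(2, mfea_enabled_vifs={1, 2}, pim_up_vifs={1}))  # dropped by PIM
print(handle_nocache(1, mfea_enabled_vifs={1, 2}, pim_up_vifs={1}))  # PIM installs MFC entry
```

Under this model the upcall is generated and accepted either way; only the last step differs, which is why it looks like problem B (a XORP configuration limitation) rather than problem A (a kernel bug).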
edrt wrote:
> If you are sure that the NOCACHE message is triggered by IGMP messages, there might be a possibility the FreeBSD kernel also contains a similar bug.

I'm almost sure. At the time I saw it there were no active PIM routers on any interface (I have a static RP configured), so NOCACHE can't have been triggered by PIM activity.

Dan

From dan@obluda.cz Wed Jun 15 20:58:50 2005
From: dan@obluda.cz (Dan Lukes)
Date: Wed, 15 Jun 2005 21:58:50 +0200
Subject: [Xorp-users] Kernel bug on FreeBSD 4.x
In-Reply-To: <200506151717.j5FHH3jn010203@possum.icir.org>
References: <200506151717.j5FHH3jn010203@possum.icir.org>
Message-ID: <42B0887A.8050405@obluda.cz>

Pavlin Radoslavov wrote:
> So far I haven't seen similar bug in FreeBSD kernel, but I should
> admit that I haven't tried FreeBSD-4.11 yet (the version that Dan is
> using).

BTW, there is a bug within the network implementation on 4.x (it applies to some recent 5.x also). When VLANs are used, the ALLMULTI flag is not propagated from the vlan network driver to the underlying physical network driver. The NIC isn't properly reprogrammed and multicasts aren't seen by the kernel. (The bug doesn't disrupt host-multicast sending and receiving - explicit join/leave requests are propagated correctly.)

IMHO, the bug will not be fixed within the 4.x branch. It would be nice to mention the bug within the documentation. It took me two days to debug ...

The simple workaround is to enable ALLMULTI on the underlying interface with a separate program. The simplest workaround is to assign a 127.0.0.[2-254]/32 address to it, then configure it as a standard interface to XORP. It's not necessary to enable a protocol on it.

Dan

From pavlin@icir.org Wed Jun 15 21:16:11 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Wed, 15 Jun 2005 13:16:11 -0700
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
In-Reply-To: Message from Dan Lukes of "Wed, 15 Jun 2005 21:43:50 +0200."
<42B084F6.5090004@obluda.cz>
Message-ID: <200506152016.j5FKGBeB011841@possum.icir.org>

> There is no other router within the subnet, so there are no PIM
> neighbors on it. No PIM messages should be received or sent on it.
> There are multicast hosts, so multicast packets should be routed to and
> from the subnet. Hosts are not allowed to send PIM Register by themselves.

Just for the record, this is a copy of the reply I sent in another (off-topic) thread:

===
The short answer is that currently XORP doesn't support it. We would like to implement it, but it may take a while until we design and implement it properly as part of a more general multicast routing policy mechanism.

As a temporary solution we could add a PIM-SM configuration switch per interface that disables the sending and receiving of all PIM packets, but this switch may go away once we have the multicast routing policy in place.
===

FYI, you cannot prevent the hosts from originating PIM Register messages by applying the above solution, because the PIM Register messages are directly unicast to the RP. You would either have to apply firewall rules to filter those messages in all routers directly connected to the hosts, or you would have to reconfigure (if the configuration syntax allows that) your RP(s) to throw away the PIM Register messages from those hosts.

If your RP is a XORP router, you should know that currently we don't have a configuration option to selectively accept PIM Register messages, so for the time being you would have to use firewall rules to stop them.

While on the subject, PIM Register-Stop messages are also unicast (from the RP to the DRs), so if you are really paranoid you need to protect your DRs against forged Register-Stop messages as well (though, for such an attack the attacker must use the RP address as the source address).
In any case, such a discussion moves us into the topic of multicast security, and implementation-wise there is much more we need to do about it.

> > If you are sure that the NOCACHE message is triggered by IGMP messages, there might be a possibility the FreeBSD kernel also contains a similar bug.
>
> I'm almost sure. At the time I saw it there were no active PIM
> routers on any interface (I have a static RP configured), so NOCACHE
> can't have been triggered by PIM activity.

Can you replicate the problem by running a multicast receiver (only)? I have a suspicion that multicast data packets originated by an application that is both a sender and a receiver are the trigger for the NOCACHE.

Pavlin

From dan@obluda.cz Thu Jun 16 01:12:02 2005
From: dan@obluda.cz (Dan Lukes)
Date: Thu, 16 Jun 2005 02:12:02 +0200
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
In-Reply-To: <200506152016.j5FKGBeB011841@possum.icir.org>
References: <200506152016.j5FKGBeB011841@possum.icir.org>
Message-ID: <42B0C3D2.4060005@obluda.cz>

Pavlin Radoslavov wrote:
>> There is no other router within the subnet, so there are no PIM
>> neighbors on it. No PIM messages should be received or sent on it.
>> There are multicast hosts, so multicast packets should be routed to and
>> from the subnet. Hosts are not allowed to send PIM Register by themselves.
> As a temporary solution we could add a PIM-SM configuration switch
> per interface that disables the sending and receiving of all PIM
> packets, but this switch may go away once we have the multicast
> routing policy in place.
> FYI, you cannot prevent the hosts from originating PIM Register
> messages by applying the above solution, because the PIM Register
> messages are directly unicast to the RP. You would either have to

True, I'm still not familiar with all aspects of PIM. The policy related to unicast PIM messages not destined to the router must be enforced by a firewall.
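The distinction behind this exchange - a per-interface PIM switch stops only the link-local PIM control traffic, while Register/Register-Stop travel as ordinary unicast between the DR and the RP - can be modeled with a toy classifier (illustrative code, not XORP's; the message names follow the PIM-SM spec):

```python
# Toy classifier: Hello/Join-Prune/Assert/Bootstrap are multicast to
# 224.0.0.13 (ALL-PIM-ROUTERS) with TTL 1 and never leave the link, so a
# per-interface PIM disable switch can discard them. Register and
# Register-Stop are plain unicast, so only a firewall can filter them.

ALL_PIM_ROUTERS = "224.0.0.13"

def blocked_by_interface_switch(msg_type, dst_ip):
    """True if disabling PIM on the receiving interface would discard the message."""
    link_local_control = {"Hello", "Join/Prune", "Assert", "Bootstrap"}
    return msg_type in link_local_control and dst_ip == ALL_PIM_ROUTERS

print(blocked_by_interface_switch("Hello", ALL_PIM_ROUTERS))     # True
print(blocked_by_interface_switch("Register", "10.0.0.1"))       # False: unicast to the RP
print(blocked_by_interface_switch("Register-Stop", "10.0.0.9"))  # False: unicast to the DR
```

This is exactly why the temporary per-interface switch Pavlin proposes would still leave forged Register messages to the firewall.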
> If your RP is a XORP router, you should know that currently we don't

FYI, it's my upstream ISP's RP. AFAIK it's a Cisco 3600 router running IOS 12.1(5) or newer.

>> I'm almost sure. At the time I saw it there were no active PIM
>> routers on any interface (I have a static RP configured), so NOCACHE
>> can't have been triggered by PIM activity.
>
> Can you replicate the problem by running a multicast receiver
> (only). I have suspicions that the multicast data packets originated
> by an application that is both a sender and a receiver are the
> trigger for the NOCACHE.

I'm not sure what configuration I should try. Do you want me to run a multicast receiver on the router? On a station? With a static RP or with the current configuration?

With the current configuration there are still some NOCACHE messages, but originated from the external interface (an example is below).

But I'm quite lost now. What's the problem? Do you think the kernel should not generate the NOCACHE message?

Dan

===============
[ 2005/06/16 01:45:10 TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_MEMBERSHIP_REPORT from 195.113.27.138 to 224.2.127.254 on vif vlan666
[ 2005/06/16 01:45:10 TRACE xorp_igmp MLD6IGMP ] JOIN: 195.113.27.138 joined group 224.2.127.254
[ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] Add membership for (0.0.0.0,224.2.127.254) on vif vlan666
[ 2005/06/16 01:45:10 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 1 vif_index = 1 src = 194.160.23.22 dst = 224.2.127.254
[ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from MFEA_4: vif_index = 1 src = 194.160.23.22 dst = 224.2.127.254
[ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] src = 194.160.23.22 is NOT directly connected
[ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] install a MFC in the kernel
[ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] Add MFC entry: (194.160.23.22,224.2.127.254) iif = 1 olist = ..O.
[ 2005/06/16 01:45:11 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 1 vif_index = 1 src = 128.40.89.156 dst = 224.2.127.254 [ 2005/06/16 01:45:11 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from MFEA_4: vif_index = 1 src = 128.40.89.156 dst = 224.2.127.254 [ 2005/06/16 01:45:11 TRACE xorp_pimsm4 PIM ] src = 128.40.89.156 is NOT directly connected ... From pavlin@icir.org Thu Jun 16 01:51:30 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Wed, 15 Jun 2005 17:51:30 -0700 Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external In-Reply-To: Message from Dan Lukes of "Thu, 16 Jun 2005 02:12:02 +0200." <42B0C3D2.4060005@obluda.cz> Message-ID: <200506160051.j5G0pU2B013965@possum.icir.org> > >> I'm almost sure. At the time I seen it there has been no active PIM > >>routers on any interface (i have statis RP configured in), so NOCACHE > >>can't be trigered by PIM activity. > > > > Can you replicate the problem by running a multicast receiver > > (only). I have suspicions that the multicast data packets originated > > by an application that is both a sender and a receiver are the > > trigger for the NOCACHE. > > I'm not sure what configuration I should try. > > Do you request I run a multicast receiver on router ? On a station ? > With static RP or with current configuration ? > > With current configuration there are still some NOCACHE messages but > originated from external interface (example is bellow). > > But I'm lost a lot. What's the problem now ? Do you think the kernel > should not generate NOCACHE message ? I was referring to your comment that IGMP messages from receivers appear to be triggering NOCACHE kernel upcalls. We know that the Linux kernel has such bug, but I was a bit surprised that you have seen similar behavior on FreeBSD. Even if FreeBSD has such bug (which I doubt), this bug should not have any show-stopping impact on your setup, so it is up to you whether you want to pursue that further. 
If you really want to double-check your suspicion that IGMP Joins do trigger NOCACHE, you could do something like:

* Enable MFEA, IGMP and PIM-SM on an interface, and enable the TRACE messages as well. You can add a static RP configuration, but it doesn't really matter, because you will be looking only for specific TRACE messages in the MFEA and IGMP.

* Run tcpdump on the interface toward a host that will be a receiver (only) and listen for multicast packets. You need to do this to double-check that the receiver doesn't generate any multicast data packets.

* Start on the above-chosen host a multicast receiver that you know does NOT send multicast data packets. MGEN is probably a good choice: http://tang.itd.nrl.navy.mil/5522/mgen/mgen_index.html

After you start the multicast receiver, look in the XORP output for NOCACHE TRACE messages with src = the receiver's address. If you see such messages, and if all that tcpdump recorded from that receiver were IGMP messages, then the FreeBSD kernel has a NOCACHE-related bug.

The fact that in the log included below you see IGMP JOIN trace messages immediately followed by a NOCACHE signal does not mean that the JOIN message triggered the NOCACHE. It is more likely that an application program was started, and that application started sending multicast data at almost the same time it joined the multicast group.

About the "src = 128.40.89.156 is NOT directly connected" message below, are you sure that this TRACE message was in the original source code? It is normal to receive a NOCACHE signal with a src that is not directly connected, because NOCACHE is triggered when the multicast forwarding plane sees a multicast data packet (possibly from a sender several hops away) and the forwarding plane doesn't have a matching forwarding entry for that packet.
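The last step of the check described above - scanning the XORP output for NOCACHE TRACE messages whose src equals the receiver's address - is easy to automate. A minimal sketch (the TRACE line format is copied from the logs quoted in this thread; the helper name is illustrative):

```python
import re

# Scan XORP TRACE output for NOCACHE signals from a given source address.
# The line format matches the PIM TRACE lines quoted elsewhere in this thread.
NOCACHE_RE = re.compile(
    r"RX NOCACHE signal from \S+: vif_index = (\d+) src = (\S+) dst = (\S+)")

def nocache_from(log_lines, receiver_ip):
    """Return (vif_index, src, dst) tuples for NOCACHE signals sent by receiver_ip."""
    hits = []
    for line in log_lines:
        m = NOCACHE_RE.search(line)
        if m and m.group(2) == receiver_ip:
            hits.append((int(m.group(1)), m.group(2), m.group(3)))
    return hits

log = [
    "[ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from "
    "MFEA_4: vif_index = 1 src = 194.160.23.22 dst = 224.2.127.254",
]
# If the receiver-only host shows up here while tcpdump saw only IGMP from it,
# the kernel raised NOCACHE for an IGMP-only host:
print(nocache_from(log, "194.160.23.22"))  # [(1, '194.160.23.22', '224.2.127.254')]
```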
Pavlin > =============== > [ 2005/06/16 01:45:10 TRACE xorp_igmp MLD6IGMP ] RX > IGMP_V2_MEMBERSHIP_REPORT from 195.113.27.138 to 224.2.127.254 on vif > vlan666 > [ 2005/06/16 01:45:10 TRACE xorp_igmp MLD6IGMP ] JOIN: 195.113.27.138 > joined group 224.2.127.254 > [ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] Add membership for > (0.0.0.0,224.2.127.254) on vif vlan666 > [ 2005/06/16 01:45:10 TRACE xorp_fea MFEA ] RX kernel signal: > message_type = 1 vif_index = 1 src = 194.160.23.22 dst = 224.2.127.254 > [ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from > MFEA_4: vif_index = 1 src = 194.160.23.22 dst = 224.2.127.254 > [ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] src = 194.160.23.22 is NOT > directly connected > [ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] install a MFC in the kernel > [ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] Add MFC entry: > (194.160.23.22,224.2.127.254) iif = 1 olist = ..O. > [ 2005/06/16 01:45:11 TRACE xorp_fea MFEA ] RX kernel signal: > message_type = 1 vif_index = 1 src = 128.40.89.156 dst = 224.2.127.254 > [ 2005/06/16 01:45:11 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from > MFEA_4: vif_index = 1 src = 128.40.89.156 dst = 224.2.127.254 > [ 2005/06/16 01:45:11 TRACE xorp_pimsm4 PIM ] src = 128.40.89.156 is NOT > directly connected > ... From dan@obluda.cz Thu Jun 16 02:44:22 2005 From: dan@obluda.cz (Dan Lukes) Date: Thu, 16 Jun 2005 03:44:22 +0200 Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external In-Reply-To: <200506160051.j5G0pU2B013965@possum.icir.org> References: <200506160051.j5G0pU2B013965@possum.icir.org> Message-ID: <42B0D976.6020602@obluda.cz> Pavlin Radoslavov wrote: >>>Can you replicate the problem by running a multicast receiver >>>(only). I have suspicions that the multicast data packets originated >>>by an application that is both a sender and a receiver are the >>>trigger for the NOCACHE. 
> If you really want to double-check your suspicion that IGMP Joins do
> trigger NOCACHE, you could do something like:

Well, I'll try it this evening (it is deep night now). But it seems you are right - the NOCACHE was triggered by a data packet, not by IGMP.

> About the "src = 128.40.89.156 is NOT directly connected" message
> below, are you sure that this TRACE message was in the original
> source code?

>> [ 2005/06/16 01:45:10 TRACE xorp_pimsm4 PIM ] src = 194.160.23.22 is NOT
>> directly connected

I'm sure it's not. But it's a trace message - it doesn't mean that a non-directly-connected source is an error. I added it when searching for why XORP doesn't respond to NOCACHE signals (it was due to the disabled PIM on the interface).

> It is normal to receive NOCACHE signal with src that is not directly
> connected, because NOCACHE is triggered when the multicast
> forwarding plane sees a multicast data packet (eventually from a
> sender several hops away) and the forwarding plane doesn't have a
> matching forwarding entry for that packet.

Well. And what about a directly connected src? I'm sure the NOCACHE is generated for directly connected sources also (see log below). If it's not a kernel bug, it explains why I have a problem with my original configuration. PIM was disabled on vif_index=2 because there is no other mrouter on the wire. When a packet was sent from 195.113.24.3 to 224.0.1.22, the kernel generated NOCACHE with vif_index=2, which was dropped due to (! pim_vif->is_up()).

I'm not sure about the exact interpretation of the PIM specification. It seems pretty correct to drop the signal when the source is not directly connected, as we need PIM on the interface, which is disabled. On the other side, when the source is directly connected, processing can be done with no PIM communication on the interface the triggering packet came from. There seems to be no reason to deny the NOCACHE message just because the interface is not PIM-enabled - PIM on the disabled interface is not necessary to process the signal. Is that true? Am I missing something?

Dan

[ 2005/06/16 03:09:19 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 1 vif_index = 2 src = 195.113.24.3 dst = 224.0.1.22
[ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from MFEA_4: vif_index = 2 src = 195.113.24.3 dst = 224.0.1.22
[ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] src = 195.113.24.3 is directly connected
[ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] install a MFC in the kernel
[ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] Add MFC entry: (195.113.24.3,224.0.1.22) iif = 2 olist = ...O
....
....
....
[ 2005/06/16 03:12:52 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 4 vif_index = 0 src = 0.0.0.0 dst = 0.0.0.0
[ 2005/06/16 03:12:52 TRACE xorp_fea MFEA ] RX dataflow message: src = 195.113.24.3 dst = 224.0.1.22
[ 2005/06/16 03:12:52 TRACE xorp_pimsm4 PIM ] RX DATAFLOW signal: source = 195.113.24.3 group = 224.0.1.22 threshold_interval_sec = 210 threshold_interval_usec = 0 measured_interval_sec = 210 measured_interval_usec = 790337 threshold_packets = 0 threshold_bytes = 0 measured_packets = 0 measured_bytes = 0 is_threshold_in_packets = 1 is_threshold_in_bytes = 0 is_geq_upcall = 0 is_leq_upcall = 1
[ 2005/06/16 03:12:52 TRACE xorp_pimsm4 PIM ] Delete MFC entry: (195.113.24.3,224.0.1.22) iif = 2 olist = ...O

From pavlin@icir.org Thu Jun 16 03:16:22 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Wed, 15 Jun 2005 19:16:22 -0700
Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external
In-Reply-To: Message from Dan Lukes of "Thu, 16 Jun 2005 03:44:22 +0200."
<42B0D976.6020602@obluda.cz> Message-ID: <200506160216.j5G2GMkf014539@possum.icir.org> > > It is normal to receive NOCACHE signal with src that is not directly > > connected, because NOCACHE is triggered when the multicast > > forwarding plane sees a multicast data packet (eventually from a > > sender several hops away) and the forwarding plane doesn't have a > > matching forwarding entry for that packet. > > Well. And what about directly connected src ? I'm sure the NOCACHE is > generated for directly connected sources also (see log bellow). If it's It is normal to receive NOCACHE for sources regardless whether they are directly connected or not. In any case, the interface the packet was received on must be enabled in the MFEA configuration section. One thing you were probably seeing is that in the MFEA section you enabled the interface toward the source, but you disabled it in the PIM section: the MFEA received the NOCACHE upcall and accepted it, but then PIM (correctly) threw it away. Pavlin > not kernel bug it explain why I have problem with my original > configuration. There has been PIM disabled on vif_index=2 because there > has been no other mrouter on the wire. When packet has been sent from > 195.113.24.3 to 224.0.1.22 the kernel generate NOCACHE with vif_index=2 > which has been dropped due (! pim_vif->is_up()) > > I'm not sure about exact interpretation of the PIM specification. It > seems to be pretty correct to drop the signal when source is not > directly connected as we need PIM on the interface which is disabled. On > the other side, when source is directly connected, then processing can > be done with no PIM communication on the interface from the triggering > packet come. There seems not to be reason to deny NOCACHE message > because interface is not PIM enabled - just because the PIM over > disabled interface is not necesarry to process the signal. It's true ? > Miss I'm somethink ? 
> > > Dan > > > > [ 2005/06/16 03:09:19 TRACE xorp_fea MFEA ] RX kernel signal: > message_type = 1 vif_index = 2 src = 195.113.24.3 dst = 224.0.1.22 > [ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] RX NOCACHE signal from > MFEA_4: vif_index = 2 src = 195.113.24.3 dst = 224.0.1.22 > [ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] src = 195.113.24.3 is > directly connected > [ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] install a MFC in the kernel > [ 2005/06/16 03:09:19 TRACE xorp_pimsm4 PIM ] Add MFC entry: > (195.113.24.3,224.0.1.22) iif = 2 olist = ...O > .... > .... > .... > [ 2005/06/16 03:12:52 TRACE xorp_fea MFEA ] RX kernel signal: > message_type = 4 vif_index = 0 src = 0.0.0.0 dst = 0.0.0.0 > [ 2005/06/16 03:12:52 TRACE xorp_fea MFEA ] RX dataflow message: src = > 195.113.24.3 dst = 224.0.1.22 > [ 2005/06/16 03:12:52 TRACE xorp_pimsm4 PIM ] RX DATAFLOW signal: source > = 195.113.24.3 group = 224.0.1.22 threshold_interval_sec = 210 > threshold_interval_usec = 0 measured_interval_sec = 210 > measured_interval_usec = 790337 threshold_packets = 0 threshold_bytes = > 0 measured_packets = 0 measured_bytes = 0 is_threshold_in_packets = 1 > is_threshold_in_bytes = 0 is_geq_upcall = 0 is_leq_upcall = 1 > [ 2005/06/16 03:12:52 TRACE xorp_pimsm4 PIM ] Delete MFC entry: > (195.113.24.3,224.0.1.22) iif = 2 olist = ...O > > _______________________________________________ > Xorp-users mailing list > Xorp-users@xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From edrt@citiz.net Thu Jun 16 05:39:00 2005 From: edrt@citiz.net (edrt) Date: Thu, 16 Jun 2005 12:39:0 +0800 Subject: [Xorp-users] Kernel bug on FreeBSD 4.x Message-ID: <200506160441.j5G4fM0R092130@wyvern.icir.org> > > BTW, there is bug within network implemenation on 4.x (it apply to some >recent 5.x also). > > When VLANs are used the ALLMULTI flag is not propagated from vlan >network driver to underlying physical network driver. 
The NIC isn't >properly reprogrammed and multicast aren't seen by kernel. (it bug >doesn't disrupt host-multicast sending and receiving - explicit >join/leave are propagated correctly). > > IMHO, the bug will not be fixed within 4.x branch. It seems to be nice >to mention the bug within documentation. It take two days to debug for >me ... > Also seen this bug before, posted on fbsd-net mailing list but got no reply. http://lists.freebsd.org/pipermail/freebsd-net/2004-November/005707.html Better to post the problem to fbsd-net again. > > The simple workaround is enable the ALLMULTI on underlying interface by >separate program. The simplest workaround is assign a 127.0.0.[2-254]/32 >address to in then configure it as a standard interface to XORP. It's >not necesarry to enable a protocol on it. > > Dan > >_______________________________________________ >Xorp-users mailing list >Xorp-users@xorp.org >http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From edrt@citiz.net Thu Jun 16 06:08:53 2005 From: edrt@citiz.net (edrt) Date: Thu, 16 Jun 2005 13:8:53 +0800 Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external Message-ID: <200506160511.j5G5BKte092390@wyvern.icir.org> >> > It is normal to receive NOCACHE signal with src that is not directly >> > connected, because NOCACHE is triggered when the multicast >> > forwarding plane sees a multicast data packet (eventually from a >> > sender several hops away) and the forwarding plane doesn't have a >> > matching forwarding entry for that packet. >> >> Well. And what about directly connected src ? I'm sure the NOCACHE is >> generated for directly connected sources also (see log bellow). If it's > >It is normal to receive NOCACHE for sources regardless whether they >are directly connected or not. >In any case, the interface the packet was received on must be >enabled in the MFEA configuration section. 
>
> One thing you were probably seeing is that in the MFEA section you
> enabled the interface toward the source, but you disabled it in the
> PIM section: the MFEA received the NOCACHE upcall and accepted it,
> but then PIM (correctly) threw it away.
>
> Pavlin

This scenario could cause performance degradation of the PIM router. Say a source is multicasting a large volume of data; the PIM process will constantly receive NOCACHE notifications from the kernel even if it isn't enabled on that interface.

One method to alleviate the problem is to have the MFEA MRT_ADD_VIF to the kernel only those PIM-enabled interfaces. But this still doesn't prevent a multicast host with a not-directly-connected network source address from bugging the MFEA. Does anybody have a good method to solve the problem?

Another similar scenario that causes performance degradation of the PIM router involves WHOLEPKT. When a source is multicasting a large volume of data and the PIM router has no idea about a particular RP(G), the PIM process will constantly receive WHOLEPKT notifications from the kernel. Adding the code below might alleviate the problem on FreeBSD with the advanced mcast API, in ip_mroute.c/pim_register_send:

      if (mrtdebug & DEBUG_PIM)
          log(LOG_DEBUG, "pim_register_send: ");
+     if ((mrt_api_config & MRT_MFC_RP) &&
+         (rt->mfc_rp.s_addr == INADDR_ANY)) {
+         return 0;
+     }

(Please let me know if you find a defect in the above code piece.)

FYI, WRONGVIF could also bug the PIM process constantly in certain topologies & scenarios. But the latest XORP PIM code handles it well.

Eddy

From dan@obluda.cz Thu Jun 16 09:14:28 2005
From: dan@obluda.cz (Dan Lukes)
Date: Thu, 16 Jun 2005 10:14:28 +0200
Subject: [Xorp-users] Kernel bug on FreeBSD 4.x
In-Reply-To: <200506160441.j5G4fNMn058324@ns.obluda.cz>
References: <200506160441.j5G4fNMn058324@ns.obluda.cz>
Message-ID: <42B134E4.4070300@obluda.cz>

edrt wrote:
>> IMHO, the bug will not be fixed within 4.x branch. It seems to be nice
>> to mention the bug within documentation.
> Also seen this bug before, posted on fbsd-net mailing list but got no reply.
> http://lists.freebsd.org/pipermail/freebsd-net/2004-November/005707.html
> Better to post the problem to fbsd-net again.

The best way is to send a problem report as well. But I'm almost sure nobody will fix it within the 4.x branch. You would waste your time preparing the PR, as 4.x is claimed to be obsolete. Unfortunately 5.x isn't production quality yet, IMHO. I'm still using 4.x on our 30+ servers/routers ...

Well, we are going off-topic on this forum ...

Dan

From edrt@citiz.net Thu Jun 16 11:02:41 2005
From: edrt@citiz.net (edrt)
Date: Thu, 16 Jun 2005 18:2:41 +0800
Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem
Message-ID: <200506161005.j5GA52SD095179@wyvern.icir.org>

> Pavlin Radoslavov wrote:
>> Later Mark ran tcpdump and I think his src=0.0.0.0 packets also were
>> "IGMP leave" messages. I don't know whether he found by the MAC
>> address where the messages came from.
>
> Well, I caught one source. It's very interesting.
>
> All messages come from 0.0.0.0, same MAC 0011.936d.61b2. All are IGMP
> leave, TTL=1, TOS=0xC0. All have the same destination -
> ALL-ROUTERS.MCAST.NET
>
> 12:44:35.576909 igmp leave 224.2.255.237
> 12:44:35.579278 igmp leave 224.2.211.67
> 12:44:35.581899 igmp leave 239.116.74.140
> 12:44:35.583400 igmp leave 233.10.47.28
>
> The MAC prefix is allocated to Cisco Systems.
>
> But we have no Cisco routers. Only 38 Cisco switches.
>
> Cisco documentation says:
> ------
> When a switch with IGMP snooping enabled receives an IP group-specific
> IGMPv2 leave message, it sends a group-specific query out the interface
> where the leave message was received to determine if there are any other
> hosts attached to that interface that are interested in the MAC
> multicast group.
If the switch does not receive an IGMP join message >within the query-response-interval and none of the other 31 IP groups >corresponding to the MAC group are interested in the multicast traffic >for that MAC group and no multicast routers have been learned on the >interface, then the interface is removed from the port list of the >(mac-group, vlan) entry in the L2 forwarding table. > ------ > > Othere part of documentation says the IGMP leave packet is forwarded >only when there are no other subscribers to group in question on the >same switch. > > My theory: >Imagine there is only port on a switch subscribed to a group. The link >on the port is going down with no IGMP LEAVE sent by station connected >to it. The switch know that last subscriber of a group disapeared. So, >it sends IGMP LEAVE by self. As it have no IP it uses 0.0.0.0 which mean >"unknown source". > > It allows the switch to unsunbscribe the unnecesarry multicast traffic >as soon as possible to maintain optimal bandwidth management. I can't >say it seems to be bug. > >>> There are seems not to be reason to ignore IGMP from this source unless >>>we are hesitate about the forged leave messages. >> Interesting. Since I'm working on IGMP snooping related stuff these days, I took the time to dig into this problem. Through experiment I found that if you unplug the wire connected to the multicast receiver, a Cisco switch will send two IGMP leaves to the multicast routers - first IGMP leave message IP src = multicast host IP dst = 224.0.0.2 IGMP grp = multicast group - second IGMP leave message IP src = 0.0.0.0 IP dst = 224.0.0.2 IGMP grp = 0.0.0.0 The second IGMP leave message actually originates from the STP root switch, to help the switching network quickly rebuild its IGMP snooping entries after spanning tree reconfigures.
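The two leave variants above differ only in the surrounding IP header (src 0.0.0.0 vs the host's address); the 8-byte IGMPv2 payload has the same shape in both. As a rough illustration (not XORP code; the function names are mine), an IGMPv2 Leave Group message (type 0x17) can be built and checksummed like this:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """Standard 16-bit Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_igmpv2_leave(group: str) -> bytes:
    """Build the 8-byte IGMPv2 Leave Group message (type 0x17).

    The IP layer around it (src 0.0.0.0 or a host address,
    dst 224.0.0.2, TTL 1) is what distinguishes the two leave
    variants captured above; sending is left to the caller.
    """
    group_bytes = socket.inet_aton(group)
    # First pack with a zero checksum, then fill the real one in.
    msg = struct.pack("!BBH4s", 0x17, 0, 0, group_bytes)
    csum = inet_checksum(msg)
    return struct.pack("!BBH4s", 0x17, 0, csum, group_bytes)
```

A correctly checksummed IGMP message sums to zero when the checksum is run over the whole packet, which is how a receiver validates it.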
ref "Typically, if a topology change occurs, ...", at http://www.cisco.com/univercd/cc/td/doc/product/lan/cat4000/12_1_11/config/multi.htm#1049520 Eddy From dan@obluda.cz Thu Jun 16 11:27:38 2005 From: dan@obluda.cz (Dan Lukes) Date: Thu, 16 Jun 2005 12:27:38 +0200 Subject: [Xorp-users] PIM of FreeBSD implementation (was: Multicast without PIM on internal interface while PIM on external) In-Reply-To: <200506160511.j5G5BKte092390@wyvern.icir.org> References: <200506160511.j5G5BKte092390@wyvern.icir.org> Message-ID: <42B1541A.8070908@obluda.cz> edrt napsal/wrote, On 06/16/05 07:08: >>One thing you were probably seeing is that in the MFEA section you >>enabled the interface toward the source, but you disabled it in the >>PIM section: the MFEA received the NOCACHE upcall and accepted it, >>but then PIM (correctly) threw it away. > This scenario could cause performance degrade of PIM router. > Say, a source is multicasting large volumn data, PIM process > router will constantly receive NOCACHE notification from kernel > even if it isn't enabled on that interface. > One method to alleviate the problem is to have MFEA MRT_ADD_VIF > to kernel only those PIM enabled interfaces. But, this still doesn't > prevent the not-directly-connected-network-source-address multicast > host from bugging MFEA. Anybody have good method to solve the problem? Let's speak about non-local sources only. First, the kernel limits the NOCACHE rate, so it's not generated for every packet coming from the source. Second, on a consistently maintained network it should not be a permanent state; it should be a rare condition caused by misconfiguration of a part of the network. Scenario 1: a packet comes from outside in. This should not happen. Nobody inside should subscribe to a group G arriving through us without our participation; since this stream hasn't been joined by us, the upstream router shouldn't send its packets to us. Well, a bad boy can send a PIM Register to a foreign RP.
We should use a firewall or law or whatever to force our users to configure their computers properly. Scenario 2: a packet comes from inside out. 2a: users are allowed to send packets to this group but the appropriate RP(s) is/are temporarily unavailable - it's a temporary network condition which must be resolved by an administrator 2b: users are not allowed to send packets to this particular group - use a firewall or law or ... 2c: users are allowed to send this kind of packets, but those packets should stay site-wide only, so a foreign RP doesn't apply for those groups - create a local RP which accepts registrations for the appropriate group range. --- Well, there is a problem with this implementation of multicast routing - the current implementation is not optimal when you want PIM on one interface while another multicast routing protocol runs on a second interface. E.g. - Scenario 3 - a packet comes from an interface running a multicast routing protocol other than PIM. This scenario reveals a shortcoming of the current kernel design (on FreeBSD at least). The kernel-to-userland interface (I mean the NOCACHE/WHOLEPKT signals) is not generic; it's PIM specific. But it's not a XORP bug. If you want the optimal implementation you must select between the following types: 1. complete kernel-level implementation of a routing protocol 2. complete user-level implementation (which grabs the necessary packets by BPF, for example) 3. mixed implementation with a protocol-dependent kernel-to-userland interface; it means we can implement only those multicast protocols which are supported by kernel modifications 4. mixed implementation with a protocol-independent kernel-to-userland interface (this scheme is used for unicast routing) Types [1] and [2] sound bad to me. The current implementation is type [3], with the mrouting protocol limited to PIM, which must be running on all multicast interfaces. Type [4] seems better, but the required generic interface doesn't exist.
I don't know if we can use the standard routing socket (with the necessary modifications) as it is used for unicast protocols. I have insufficient experience with multicast, so I'm unable to recommend something. > Another similar scenario cause performance degrade of PIM router > is about WHOLEPKT. When a source is multicasting large volumn data > and PIM router has no idea about a particular RP(G), PIM process It should not happen except during a temporary network infrastructure or RP breakdown, or a policy violation caused by a user within my network. Well, that doesn't mean we can't protect against it. > Add below code might alleviate the problem on FBSD with advance mcast api. ... > (Pls let me know if you find the above code piece has defect) It must wait for somebody more familiar with FreeBSD's PIM implementation; I can't evaluate it. Dan From dan@obluda.cz Thu Jun 16 12:03:36 2005 From: dan@obluda.cz (Dan Lukes) Date: Thu, 16 Jun 2005 13:03:36 +0200 Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem In-Reply-To: <200506161005.j5GA52SD095179@wyvern.icir.org> References: <200506161005.j5GA52SD095179@wyvern.icir.org> Message-ID: <42B15C88.7020207@obluda.cz> edrt napsal/wrote, On 06/16/05 12:02: >> All messages come from 0.0.0.0, same MAC 0011.936d.61b2. All are IGMP >>leave, TTL=1, TOS=0xC0. All have the same destination - >>ALL-ROUTERS.MCAST.NET ... >> My theory: >>Imagine there is only port on a switch subscribed to a group. The link >>on the port is going down with no IGMP LEAVE sent by station connected >>to it. The switch know that last subscriber of a group disapeared. So, >>it sends IGMP LEAVE by self. As it have no IP it uses 0.0.0.0 which mean >>"unknown source". > Interesting, cause I'm working on IGMP snooping related stuff these days, > I take time digging into this problem.
Through experiment I find that > if you unplug the wire connected to the multicast receiver, Cisco switch > will send two IGMP leave to multicast routers > > - first IGMP leave message > IP src = multicast host > IP dst = 224.0.0.2 > IGMP grp = multicast group > > - second IGMP leave message > IP src = 0.0.0.0 > IP dst = 224.0.0.2 > IGMP grp = 0.0.0.0 > > The second IGMP leave message is actually originated from the STP ROOT switch > to help the switching network quickly rebuilds the IGMP snooping entry after > spanning tree reconfigures. "Our" packets are a different kind: >12:44:35.576909 igmp leave 224.2.255.237 >12:44:35.579278 igmp leave 224.2.211.67 E.g. IP src = 0.0.0.0, IGMP grp = multicast group. Additionally, STP is disabled on end-station ports, so an UP/DOWN event on those interfaces shouldn't trigger an STP event. The backbone topology isn't redundant and never changes, so it shouldn't be a source of STP events either. Unfortunately, I can't identify the exact source of those packets. The prefix of the source MAC is Cisco's, but the source MAC itself is not assigned to any port of any Cisco within our network ... I'm not sure, but it seems the source MAC is constant. I don't know if it's transmitted by one switch only, or if it's transmitted by every switch with a special src MAC. In any case, the exact source is still an unrevealed secret to me. It needs further investigation ... Dan From edrt@citiz.net Thu Jun 16 12:49:07 2005 From: edrt@citiz.net (edrt) Date: Thu, 16 Jun 2005 19:49:07 +0800 Subject: [Xorp-users] Re: Newbie Xorp & m'cast problem Message-ID: <200506161145.j5GBjDJt096088@wyvern.icir.org> > >The second IGMP leave message is actually originated from the STP ROOT switch >to help the switching network quickly rebuilds the IGMP snooping entry after >spanning tree reconfigures.
> >ref "Typically, if a topology change occurs, ...", at > http://www.cisco.com/univercd/cc/td/doc/product/lan/cat4000/12_1_11/config/multi.htm#1049520 > > >Eddy > > More relevant information can be found in draft-ietf-magma-snoop-12.txt, section 2.1.1, item 4) 4) An IGMP snooping switch should be aware of link layer topology changes caused by Spanning Tree operation. When a port is enabled or disabled by Spanning Tree, a General Query may be sent on all active non-router ports in order to reduce network convergence time. Non-Querier switches should be aware of whether the Querier is in IGMPv3 mode. If so, the switch should not spoof any General Queries unless it is able to send an IGMPv3 Query that adheres to the most recent information sent by the true Querier. In no case should a switch introduce a spoofed IGMPv2 Query into an IGMPv3 network, as this may create excessive network disruption. If the switch is not the Querier, it should use the 'all-zeros' IP Source Address in these proxy queries (even though some hosts may elect to not process queries with a 0.0.0.0 IP Source Address). When such proxy queries are received, they must not be included in the Querier election process. Eddy From dario.vieira@int-evry.fr Thu Jun 16 13:34:54 2005 From: dario.vieira@int-evry.fr (Dario VIEIRA) Date: Thu, 16 Jun 2005 14:34:54 +0200 Subject: [Xorp-users] BGP routes generator Message-ID: <1118925294.42b171eebb6ce@imp.int-evry.fr> Hi, I'm searching for a (freeware) BGP traffic and route generator that I can use with XORP. Any help will be appreciated! cheers, Dario ---------------------------------------------------------------- This message was sent using IMP, the Internet Messaging Program. From mehdi.bensaid@rd.francetelecom.com Thu Jun 16 15:39:46 2005 From: mehdi.bensaid@rd.francetelecom.com (zze-BEN SAID Mehdi RD-CORE-ISS) Date: Thu, 16 Jun 2005 16:39:46 +0200 Subject: [Xorp-users] Problem Message-ID: I tried this out, but I still got nothing...
I just need a way to redirect the result of this command to a file... that's it. Please, any ideas? Thanks -----Original Message----- From: Pavlin Radoslavov [mailto:pavlin@icir.org] Sent: Wednesday, 15 June 2005 19:26 To: zze-BEN SAID Mehdi RD-CORE-ISS Cc: xorp-users@xorp.org; Pavlin Radoslavov; atanu@ICSI.Berkeley.EDU Subject: Re: [Xorp-users] Problem > Hi everybody, > I have a little problem. > I'm trying to get information from Xorp using a pexpect python script: > > #!/usr/bin/env python > import sys > import pexpect > > child=pexpect.spawn ('xorpsh') > child.expect('Xorp> ') > child.sendline ('show igmp group') > child.expect('Xorp> ') > print child.before # I use this to print the result of the show > command > child.close() > > Everything's works right, but I got nothing displayed!!! And of > course, when I type it manually on the xorpsh CLI, there are so many routes. > Is there any solution? Does it work for commands like "show igmp interface" that would presumably display much shorter output? One possible problem that comes to mind, if the output is so long, is that it probably goes through the xorpsh built-in pager. To verify that, try to disable the pager by using the following command inside your python script: show igmp group | no-more Pavlin From dan@obluda.cz Thu Jun 16 19:09:46 2005 From: dan@obluda.cz (Dan Lukes) Date: Thu, 16 Jun 2005 20:09:46 +0200 Subject: [Xorp-users] 'echo "any text" | xorpsh' cause abend Message-ID: <42B1C06A.1030708@obluda.cz> Issuing the command: echo "show igmp interface" | xorpsh causes an abend. The output: =============== [ 2005/06/16 19:31:57 INFO xorpsh CLI ] CLI enabled [ 2005/06/16 19:31:57 ERROR xorpsh:79599 CLI +319 cli_node_net.cc start_connection ] start_connection(): tcgetattr() error: Inappropriate ioctl for device Segmentation fault (core dumped) Core was generated by `xorpsh'. Program terminated with signal 11, Segmentation fault.
================ Coredump backtrace: #0 CliClient::set_current_cli_prompt (this=0x0, cli_prompt=@0xbfbf89dc) at /usr/include/g++/std/bastring.h:147 #1 0x806a3a9 in RouterCLI::set_prompt (this=0x8379d00, line1=@0xbfbf89cc, line2=@0xbfbf89dc) at cli.cc:923 #2 0x80591ad in RouterCLI::operational_mode (this=0x8379d00) at cli.cc:457 #3 0x80586ef in RouterCLI::RouterCLI (this=0x8379d2c, xorpsh=@0xbfbfdb24, cli_node=@0xbfbfdd6c, verbose=false) at cli.cc:400 #4 0x80e0f9e in XorpShell::run (this=0xbfbfdb24) at xorpsh_main.cc:245 #5 0x80e4085 in main (argc=1, argv=0xbfbffbe0) at xorpsh_main.cc:612 IMHO, the responsible code is: rtrmgr/cli.cc: RouterCLI::RouterCLI(XorpShell& xorpsh, CliNode& cli_node, bool verbose) : _xorpsh(xorpsh), _cli_node(cli_node), _cli_client(*(_cli_node.add_stdio_client())), _verbose(verbose), _mode(CLI_MODE_NONE), _changes_made(false), _op_mode_cmd(0) { ... The problem is that when stdio isn't a terminal, tcgetattr fails. This causes several XORP_ERROR returns, and in the end _cli_node.add_stdio_client() returns NULL. If I understand the C++ syntax correctly, _cli_client(*(_cli_node.add_stdio_client())) means this NULL is dereferenced. Unfortunately, my insufficient C++ skills prevent me from sending a patch. A rough idea is to use the isatty function within cli/cli_node_net.cc to skip the tcgetattr/tcsetattr configuration when we are not on an interactive terminal. This abend may or may not be the reason why mehdi.bensaid@rd.francetelecom.com can't obtain the output from XORP via Python. Dan From happymonkey@gmx.de Thu Jun 16 19:39:12 2005 From: happymonkey@gmx.de (Sami Okasha) Date: Thu, 16 Jun 2005 20:39:12 +0200 Subject: [Xorp-users] Xorp on Xen compile Problems Message-ID: <200506162039.12434.happymonkey@gmx.de> Hello, is it possible to compile XORP in a Xen environment? I tried to compile XORP on 2.4 and 2.6 Xen kernels but without success. Has anyone successfully compiled on Xen? On native 2.6 Debian it compiles fine.
Greetings Sammy If you need more debug output just tell me how to generate it and I will post it. ---- Here is the last output on a 2.6 Xen kernel (and I think I got the same message on the 2.4 Xen kernel) FAIL: test_finder_events PASS: test_xrl_parser.sh Unexpected exception: thrown did not correspond to specification - fix code. InvalidAddress from line 280 of finder_tcp.cc -> Not a valid interface address ./test_finder_deaths.sh: line 15: 32623 Aborted ./xorp_finder -p ${finder_port} Finder did not start correctly. Port 17777 maybe in use by another application. FAIL: test_finder_deaths.sh LeakCheck binary not found skipping test. PASS: test_leaks.sh ==================== 9 of 18 tests failed ==================== make[2]: *** [check-TESTS] Error 1 make[2]: Leaving directory `/root/xorp-1.1/libxipc' make[1]: *** [check-am] Error 2 make[1]: Leaving directory `/root/xorp-1.1/libxipc' make: *** [check-recursive] Error 1 - From rafe1980@yahoo.com Thu Jun 16 14:13:30 2005 From: rafe1980@yahoo.com (Rafe) Date: Thu, 16 Jun 2005 06:13:30 -0700 (PDT) Subject: [Xorp-users] Remote static_routes process config. errors. Xorp 1.1 LiveCD Message-ID: <20050616131330.78205.qmail@web50109.mail.yahoo.com> All: I'm trying to set up the following test network configuration (using only static routing at this point): Host A -- Router B -- Router C -- Host D (15.0.0.2) (15.0.0.1) (1.10.2.11) (16.0.0.2) (1.10.2.10) (16.0.0.1) Routers B and C are both running Xorp 1.1 from the LiveCD. We have the static_routes process for each router on the same machine as its rtrmgr at this point, and can successfully communicate between Host A and Host D. We'd like to move the static_routes process from either router to a different machine, and have been trying to move Router B's static_routes process to Router C without success. Here is the procedure we've been following: * Restart xorp_rtrmgr on router B (adding -i 1.10.2.10 -a 1.10.2.11 to the command line).
* Set XORP_FINDER_SERVER_ADDRESS=1.10.2.10 and XORP_FINDER_CLIENT_ADDRESS=1.10.2.11 on Router C. * Start a new static_routes process on Router C (with the -F 1.10.2.10 command line option). (Eventually we'd kill the static_routes process on router B, but an error occurs before we get there). When the new static_routes process (on router C) is started it prints the following 3 error messages over and over, until a or core dump: [ 2005/06/15 14:46:26 ERROR xorp_static_routes:4057 IPC +275 sockutil.cc create_connected_ip_socket ] failed to connect to 127.0.0.1 port 1724: Connection refused [ 2005/06/15 14:46:26 ERROR xorp_static_routes:4057 XRL +55 xrl_pf_factory.cc create_sender ] XrlPFSenderFactory::create failed: XrlPFConstructorError from line 578 of xrl_pf_stcp.cc: Could not connect to 127.0.0.1:1724 [ 2005/06/15 14:46:26 ERROR xorp_static_routes:4057 XRL +358 xrl_router.cc send_resolved ] Could not create XrlPFSender for protocol = "stcp" address = "127.0.0.1:1724" The port numbers displayed in the error messages change, but the address remains 127.0.0.1. What am I missing about how to relocate a routing process? Could the problem have something to do with running two static_routes processes on the same machine? (i.e., Router C's static_routes process is running on router C, and we're trying to get Router B's static_routes process to run on router C as well). Thanks for your help, Rafe Thayer __________________________________ Discover Yahoo! Stay in touch with email, IM, photo sharing and more. Check it out! http://discover.yahoo.com/stayintouch.html From pavlin@icir.org Thu Jun 16 20:39:34 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Thu, 16 Jun 2005 12:39:34 -0700 Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external In-Reply-To: Message from "edrt" of "Thu, 16 Jun 2005 13:08:53 +0800." 
<200506160511.j5G5BKte092390@wyvern.icir.org> Message-ID: <200506161939.j5GJdYJk023064@possum.icir.org> > >> > It is normal to receive NOCACHE signal with src that is not directly > >> > connected, because NOCACHE is triggered when the multicast > >> > forwarding plane sees a multicast data packet (eventually from a > >> > sender several hops away) and the forwarding plane doesn't have a > >> > matching forwarding entry for that packet. > >> > >> Well. And what about directly connected src ? I'm sure the NOCACHE is > >> generated for directly connected sources also (see log bellow). If it's > > > >It is normal to receive NOCACHE for sources regardless whether they > >are directly connected or not. > >In any case, the interface the packet was received on must be > >enabled in the MFEA configuration section. > > > >One thing you were probably seeing is that in the MFEA section you > >enabled the interface toward the source, but you disabled it in the > >PIM section: the MFEA received the NOCACHE upcall and accepted it, > >but then PIM (correctly) threw it away. > > > >Pavlin > > > > > > This scenario could cause performance degrade of PIM router. > Say, a source is multicasting large volumn data, PIM process > router will constantly receive NOCACHE notification from kernel > even if it isn't enabled on that interface. > > One method to alleviate the problem is to have MFEA MRT_ADD_VIF > to kernel only those PIM enabled interfaces. But, this still doesn't For the time being the simple solution is in the XORP configuration to enable the MFEA interfaces only if the PIM-SM interfaces are also enabled. What Dan was trying to do with enabling only the MFEA interfaces doesn't work anyway so there is no point of enabling an interface only in the MFEA section. Though, yes, in general I agree that the MFEA should call MRT_ADD_VIF on an interface only after a multicast routing protocol expresses interest in using that interface. This will be fixed in the future. 
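Pavlin's point that the MFEA should call MRT_ADD_VIF only after a multicast routing protocol expresses interest can be sketched as a toy reference-counting model. This is illustrative Python only (class and method names are mine); `kernel_vifs` stands in for the real MRT_ADD_VIF/MRT_DEL_VIF setsockopt() calls on the multicast routing socket:

```python
class VifGate:
    """Toy model of gating kernel vif creation on protocol interest.

    The "kernel" vif is added only once some multicast routing
    protocol (e.g. PIM-SM) registers interest in the interface, so
    the kernel never generates NOCACHE/WHOLEPKT upcalls for
    interfaces no protocol is using.
    """

    def __init__(self):
        self.kernel_vifs = set()     # vifs the "kernel" knows about
        self._interest = {}          # ifname -> set of interested protocols

    def register_interest(self, ifname, protocol):
        users = self._interest.setdefault(ifname, set())
        if not users:
            self.kernel_vifs.add(ifname)      # first user: MRT_ADD_VIF
        users.add(protocol)

    def withdraw_interest(self, ifname, protocol):
        users = self._interest.get(ifname, set())
        users.discard(protocol)
        if not users:
            self.kernel_vifs.discard(ifname)  # last user gone: MRT_DEL_VIF

    def upcall_possible(self, ifname):
        """Could the kernel deliver a NOCACHE upcall for this interface?"""
        return ifname in self.kernel_vifs
```

With this gating, an interface enabled only in the MFEA section but in no routing protocol never reaches the kernel, so the flood of NOCACHE notifications Eddy describes cannot start.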
> prevent the not-directly-connected-network-source-address multicast > host from bugging MFEA. Anybody have good method to solve the problem? It doesn't matter whether a source is directly connected. If the MFEA doesn't call MRT_ADD_VIF on an interface, then the kernel won't trigger any upcalls to the MFEA for packets that arrived on that interface. > Another similar scenario cause performance degrade of PIM router > is about WHOLEPKT. When a source is multicasting large volumn data > and PIM router has no idea about a particular RP(G), PIM process > will constantly receive WHOLEPKT notification from kernel. > Add below code might alleviate the problem on FBSD with advance mcast api. > > in ip_mroute.c/pim_register_send > > if (mrtdebug & DEBUG_PIM) > log(LOG_DEBUG, "pim_register_send: "); > > + if ((mrt_api_config & MRT_MFC_RP) && > + (rt->mfc_rp.s_addr == INADDR_ANY)) { > + return 0; > + } > > (Pls let me know if you find the above code piece has defect) The advanced multicast API has been intentionally designed to send WHOLEPKT to user-space if the RP address in the matching MFC entry in the kernel is not set. It is also documented in the multicast(4) manual page (on *BSD). The reason for this is to allow greater flexibility: for some (S,G) MFC entries the userland program may want to do something special with the PIM Registers encapsulation so it wants to receive WHOLEPKT, while for other entries it may want the kernel to handle the encapsulation. The proper solution would be that if the userland program doesn't know the RP address, then it shouldn't include the register_vif in the MFC entry's outgoing interface set. I will add a note that we should do this in XORP. Pavlin From pavlin@icir.org Thu Jun 16 21:39:13 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Thu, 16 Jun 2005 13:39:13 -0700 Subject: [Xorp-users] Problem In-Reply-To: Message from "zze-BEN SAID Mehdi RD-CORE-ISS" of "Thu, 16 Jun 2005 16:39:46 +0200." 
Message-ID: <200506162039.j5GKdDe6023553@possum.icir.org> > I tried this out, but T still got nothing... I tried to play a bit with the script, but it doesn't seem to work for me either. For example, the command "configure" properly switched xorpsh into configuration mode, but then "show" didn't show the configuration. There was some other peculiar behavior: e.g., executing the command "show interfaces" followed by "configure" blocked, so I presume there is something missing in the Python script (or in the interaction between xorpsh and Python). Unfortunately, I am not familiar with Python, so I cannot help you. > I just need a way to redirect the result of this command in a file...that's it. An alternative solution would be to wait for us to fix xorpsh so that, after the fix, it will accept commands like: echo "show igmp group" | xorpsh Pavlin > Please, any ideas? > Thanks > > -----Message d'origine----- > De : Pavlin Radoslavov [mailto:pavlin@icir.org] > Envoyé : mercredi 15 juin 2005 19:26 > À : zze-BEN SAID Mehdi RD-CORE-ISS > Cc : xorp-users@xorp.org; Pavlin Radoslavov; atanu@ICSI.Berkeley.EDU > Objet : Re: [Xorp-users] Problem > > > Hi everybody, > > I have a little problem. > > I'm trying to get information from Xorp using a pexpect python script: > > > > #!/usr/bin/env python > > import sys > > import pexpect > > > > child=pexpect.spawn ('xorpsh') > > child.expect('Xorp> ') > > child.sendline ('show igmp group') > > child.expect('Xorp> ') > > print child.before # I use this to print the result of the show > > command > > child.close() > > > > Everything's works right, but I got nothing displayed!!! And of > > course, when I type it manually on the xorpsh CLI, there are so many routes. > > Is there any solution? > > Does it work for commands like "show igmp interface" that eventually would display much shorter output? > > One possible problem that comes to mind if the output is so long, is that probably it goes through the xorpsh built-in pager.
> To verify that, try to disable the pager by using the following command inside your python script: > show igmp group | no-more > > Pavlin > > _______________________________________________ > Xorp-users mailing list > Xorp-users@xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From atanu@ICSI.Berkeley.EDU Thu Jun 16 21:42:51 2005 From: atanu@ICSI.Berkeley.EDU (Atanu Ghosh) Date: Thu, 16 Jun 2005 13:42:51 -0700 Subject: [Xorp-users] 'echo "any text" | xorpsh' cause abend In-Reply-To: Message from Dan Lukes of "Thu, 16 Jun 2005 20:09:46 +0200." <42B1C06A.1030708@obluda.cz> Message-ID: <66046.1118954571@tigger.icir.org> Dan> This abend may or may not be the reason why Dan> mehdi.bensaid@rd.francetelecom.com can't obtain the output Dan> from xorp by python. The Python script is using expect, which runs xorpsh connected to a pseudo-terminal; in this case isatty would return true. We do have an issue with piping into a shell, and we are looking into fixing it. Atanu. From pavlin@icir.org Thu Jun 16 21:45:17 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Thu, 16 Jun 2005 13:45:17 -0700 Subject: [Xorp-users] 'echo "any text" | xorpsh' cause abend In-Reply-To: Message from Dan Lukes of "Thu, 16 Jun 2005 20:09:46 +0200." <42B1C06A.1030708@obluda.cz> Message-ID: <200506162045.j5GKjHom023627@possum.icir.org> > Issuing command: > echo "show igmp interface" | xorpsh > > cause abend. The output: > > =============== > [ 2005/06/16 19:31:57 INFO xorpsh CLI ] CLI enabled > [ 2005/06/16 19:31:57 ERROR xorpsh:79599 CLI +319 cli_node_net.cc > start_connection ] > start_connection(): tcgetattr() error: Inappropriate ioctl for device > Segmentation fault (core dumped) > > Core was generated by xorpsh'. > Program terminated with signal 11, Segmentation fault. Coincidentally, a few days ago we had a conversation about exactly the same issue: we wanted to run xorpsh as part of a UNIX command-line pipe, and it crashed.
There is a patch about that, and it is just waiting to be integrated and tested. Pavlin From pavlin@icir.org Thu Jun 16 21:54:15 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Thu, 16 Jun 2005 13:54:15 -0700 Subject: [Xorp-users] Remote static_routes process config. errors. Xorp 1.1 LiveCD In-Reply-To: Message from Rafe of "Thu, 16 Jun 2005 06:13:30 PDT." <20050616131330.78205.qmail@web50109.mail.yahoo.com> Message-ID: <200506162054.j5GKsFJd023671@possum.icir.org> > I'm trying to set up the following test network configuration (using only > static routing at this point): > > Host A -- Router B -- Router C -- Host D > (15.0.0.2) (15.0.0.1) (1.10.2.11) (16.0.0.2) > (1.10.2.10) (16.0.0.1) > > Routers B and C are both running Xorp 1.1 from the LiveCD. > > We have the static_routes process for each router on the same machine as its > rtrmgr at this point, and can successfully communicate between Host A and Host > D. > > We'd like to move the static_routes process from either router to a different > machine, and have been trying to move Router B's static_routes process to > Router C without success. > > Here is the procedure we've been following: > * Restart xorp_rtrmgr on router B (adding -i 1.10.2.10 -a 1.10.2.11 to the > commandline. Before starting the rtrmgr on router B please set: XORP_FINDER_SERVER_ADDRESS=1.10.2.10 XORP_FINDER_CLIENT_ADDRESS=1.10.2.10 I am not sure it will fix the problem but give it a try. > * Set XORP_FINDER_SERVER_ADDRESS=1.10.2.10 and > XORP_FINDER_CLIENT_ADDRESS=1.10.2.11 on Router C. > * Start a new static_routes process on Router C (with the -F 1.10.2.10 command Given that you have already set XORP_FINDER_CLIENT_ADDRESS, I think you don't need the -F 1.10.2.10 command-line option so you may want to remove it to reduce the confusion. Pavlin > line option). (Eventually we'd kill the static_routes process on router B, but > an error occurs before we get there). 
> > When the new static_routes process (on router C) is started it prints the > following 3 error messages over and over, until a or core dump: > > [ 2005/06/15 14:46:26 ERROR xorp_static_routes:4057 IPC +275 sockutil.cc > create_connected_ip_socket ] failed to connect to 127.0.0.1 port 1724: > Connection refused > > [ 2005/06/15 14:46:26 ERROR xorp_static_routes:4057 XRL +55 xrl_pf_factory.cc > create_sender ] XrlPFSenderFactory::create failed: XrlPFConstructorError from > line 578 of xrl_pf_stcp.cc: Could not connect to 127.0.0.1:1724 > > [ 2005/06/15 14:46:26 ERROR xorp_static_routes:4057 XRL +358 xrl_router.cc > send_resolved ] Could not create XrlPFSender for protocol = "stcp" address = > "127.0.0.1:1724" > > The port numbers displayed in the error messages change, but the address > remains 127.0.0.1. > > What am I missing about how to relocate a routing process? Could the problem > have something to do with running two static_routes processes on the same > machine? (i.e., Router C's static_routes process is running on router C, and > we're trying to get Router B's static_routes process to run on router C as > well). > > Thanks for your help, > > Rafe Thayer From atanu@ICSI.Berkeley.EDU Thu Jun 16 22:00:14 2005 From: atanu@ICSI.Berkeley.EDU (Atanu Ghosh) Date: Thu, 16 Jun 2005 14:00:14 -0700 Subject: [Xorp-users] Problem In-Reply-To: Message from "zze-BEN SAID Mehdi RD-CORE-ISS" of "Thu, 16 Jun 2005 16:39:46 +0200." Message-ID: <69550.1118955614@tigger.icir.org> Hi, The following works for me but its not elegant: ---------------------------------------- #!/usr/bin/env python import time import sys import pexpect child=pexpect.spawn ('xorpsh') child.expect('Xorp> ') child.send('show ') time.sleep(1) # Defeat the command line completion. child.sendline(' igmp group | no-more') time.sleep(1) child.sendeof() while 1: line = child.readline() if not line: break print line, child.close() ---------------------------------------- Atanu. 
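Atanu's workaround works because pexpect gives xorpsh a pseudo-terminal, so isatty() succeeds inside xorpsh. The guard Dan proposed for cli/cli_node_net.cc (skip the termios setup when stdin is not a terminal) can be illustrated with a small Python sketch; the function name is mine, and the real fix would of course be C++ inside XORP:

```python
import os
import termios

def interactive_terminal_attrs(fd):
    """Return the saved terminal attributes for an interactive fd,
    or None when fd is not a terminal (e.g. a pipe), instead of
    letting tcgetattr() fail the way xorpsh's start_connection()
    does on piped input.
    """
    if not os.isatty(fd):
        return None                  # piped input: skip termios setup
    return termios.tcgetattr(fd)     # caller restores these on exit
```

With this guard, piped input simply runs without terminal configuration rather than propagating a tcgetattr() failure into a NULL CLI client.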
>>>>> "zze-BEN" == zze-BEN SAID Mehdi > writes: zze-BEN> I tried this out, but T still got nothing... I just need a zze-BEN> way to redirect the result of this command in a zze-BEN> file...that's it. Please, any ideas? Thanks zze-BEN> -----Message d'origine----- De : Pavlin Radoslavov zze-BEN> [mailto:pavlin@icir.org] Envoyé : mercredi 15 juin 2005 zze-BEN> 19:26 À : zze-BEN SAID Mehdi RD-CORE-ISS Cc : zze-BEN> xorp-users@xorp.org; Pavlin Radoslavov; zze-BEN> atanu@ICSI.Berkeley.EDU Objet : Re: [Xorp-users] Problem >> Hi everybody, I have a little problem. I'm trying to get >> information from Xorp using a pexpect python script: >> >> #!/usr/bin/env python import sys import pexpect >> >> child=pexpect.spawn ('xorpsh') child.expect('Xorp> ') >> child.sendline ('show igmp group') child.expect('Xorp> ') print >> child.before # I use this to print the result of the show command >> child.close() >> >> Everything's works right, but I got nothing displayed!!! And of >> course, when I type it manually on the xorpsh CLI, there are so >> many routes. Is there any solution? zze-BEN> Does it work for commands like "show igmp interface" that zze-BEN> eventually would display much shorter output? zze-BEN> One possible problem that comes to mind if the output is so zze-BEN> long, is that probably it goes through the xorpsh built-in zze-BEN> pager. To verify that, try to disable the pager by using zze-BEN> the following command inside your python script: show igmp zze-BEN> group | no-more zze-BEN> Pavlin From edrt@citiz.net Fri Jun 17 15:41:06 2005 From: edrt@citiz.net (edrt) Date: Fri, 17 Jun 2005 22:41:06 +0800 Subject: [Xorp-users] Multicast without PIM on internal interface while PIM on external Message-ID: <200506171437.j5HEbLqk009740@wyvern.icir.org> > >The proper solution would be that if the userland program doesn't >know the RP address, then it shouldn't include the register_vif in >the MFC entry's outgoing interface set. >I will add a note that we should do this in XORP. 
> >Pavlin

This solution is more elegant.

Eddy

From adam@hiddennet.net Fri Jun 17 16:49:07 2005
From: adam@hiddennet.net (adam@hiddennet.net)
Date: Fri, 17 Jun 2005 16:49:07 +0100 (BST)
Subject: [Xorp-users] Xorp on Xen compile Problems
In-Reply-To: <200506162039.12434.happymonkey@gmx.de>
References: <200506162039.12434.happymonkey@gmx.de>
Message-ID: <1317.66.146.163.2.1119023347.squirrel@www.hiddennet.net>

Hi

Xorp does compile on Xen with Gentoo running a 2.6.11-xen0 kernel. I
haven't verified that it runs or anything else; it just compiled for me
with ./configure ; make .

adam

> Hello
>
> is it possible to compile Xorp in a Xen environment? I tried to compile
> XORP on 2.4 and 2.6 Xen kernels but without success.
>
> Has anyone successfully compiled on Xen? On native 2.6 Debian it compiles
> very fine.
>
> Greetings
> Sammy
>
> If you need more debug output just tell me how to generate it and I will
> post it.
> ----
> Here is the last output on a 2.6 Xen kernel (and I think I got the same
> message on the 2.4 Xen kernel)
>
> FAIL: test_finder_events
> PASS: test_xrl_parser.sh
> Unexpected exception: thrown did not correspond to specification - fix code.
> InvalidAddress from line 280 of finder_tcp.cc -> Not a valid interface address
> ./test_finder_deaths.sh: line 15: 32623 Aborted ./xorp_finder -p ${finder_port}
> Finder did not start correctly.
> Port 17777 maybe in use by another application.
> FAIL: test_finder_deaths.sh
> LeakCheck binary not found skipping test.
> PASS: test_leaks.sh
> ====================
> 9 of 18 tests failed
> ====================
> make[2]: *** [check-TESTS] Error 1
> make[2]: Leaving directory `/root/xorp-1.1/libxipc'
> make[1]: *** [check-am] Error 2
> make[1]: Leaving directory `/root/xorp-1.1/libxipc'
> make: *** [check-recursive] Error 1
>
> _______________________________________________
> Xorp-users mailing list
> Xorp-users@xorp.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

From happymonkey@gmx.de Fri Jun 17 17:29:08 2005
From: happymonkey@gmx.de (Sami Okasha)
Date: Fri, 17 Jun 2005 18:29:08 +0200
Subject: [Xorp-users] Xorp on Xen compile Problems
In-Reply-To: <1317.66.146.163.2.1119023347.squirrel@www.hiddennet.net>
References: <200506162039.12434.happymonkey@gmx.de> <1317.66.146.163.2.1119023347.squirrel@www.hiddennet.net>
Message-ID: <200506171829.08725.happymonkey@gmx.de>

Hi,

I use Debian with 2.4.30-xenU and 2.6.11.10-xenU kernels. I have no idea
why it didn't compile, but Pavlin will debug this soon, so we can post
why it didn't work.

Sammy

adam@hiddennet.net wrote on Friday, 17 June 2005 17:49:
> Hi
>
> Xorp does compile on Xen with Gentoo running a 2.6.11-xen0 kernel. I
> haven't verified that it runs or anything else; it just compiled for me
> with ./configure ; make .
>
> adam
>
> > Hello
> >
> > is it possible to compile Xorp in a Xen environment? I tried to compile
> > XORP on 2.4 and 2.6 Xen kernels but without success.
> >
> > Has anyone successfully compiled on Xen? On native 2.6 Debian it
> > compiles very fine.
> >
> > Greetings
> > Sammy
> >
> > If you need more debug output just tell me how to generate it and I
> > will post it.
> > ----
> > Here is the last output on a 2.6 Xen kernel (and I think I got the same
> > message on the 2.4 Xen kernel)
> >
> > FAIL: test_finder_events
> > PASS: test_xrl_parser.sh
> > Unexpected exception: thrown did not correspond to specification - fix code.
> > InvalidAddress from line 280 of finder_tcp.cc -> Not a valid interface address
> > ./test_finder_deaths.sh: line 15: 32623 Aborted ./xorp_finder -p ${finder_port}
> > Finder did not start correctly.
> > Port 17777 maybe in use by another application.
> > FAIL: test_finder_deaths.sh
> > LeakCheck binary not found skipping test.
> > PASS: test_leaks.sh
> > ====================
> > 9 of 18 tests failed
> > ====================
> > make[2]: *** [check-TESTS] Error 1
> > make[2]: Leaving directory `/root/xorp-1.1/libxipc'
> > make[1]: *** [check-am] Error 2
> > make[1]: Leaving directory `/root/xorp-1.1/libxipc'
> > make: *** [check-recursive] Error 1
> >
> > _______________________________________________
> > Xorp-users mailing list
> > Xorp-users@xorp.org
> > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users
>
> _______________________________________________
> Xorp-users mailing list
> Xorp-users@xorp.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

--
Sammy Okasha
jabber:sammy@jabber.ccc.de, I Seek You: 51144829
Key ID: 6F3D8EBD6E6DB4B2
GPG Key Fingerprint: F415 08EC 90C6 383B 1BFE D0C9 6F3D 8EBD 6E6D B4B2

From ap010@terra.com.br Sun Jun 19 23:38:23 2005
From: ap010@terra.com.br (Diogo Della)
Date: Sun, 19 Jun 2005 19:38:23 -0300
Subject: [Xorp-users] mrib

I've a network of three routers running XORP and configured with RIP,
IGMP and PIM.

The configuration files are the same (changing the IPs and the RPs). Two
of the routers work fine, and the multicast traffic passes from one
network to the other.

But on one router the MRIB is not formed. It does not make the multicast
table, or does not receive it.

The unicast table is well formed.

When I do a "show route table ipv4 unicast final", the router shows all
the networks.
But, when I do "show route table ipv4 multicast final", the router shows
only the directly connected nets.

What are the possible problems?

Thanks,

Diogo Della
From pavlin@icir.org Mon Jun 20 00:05:41 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Sun, 19 Jun 2005 16:05:41 -0700
Subject: [Xorp-users] mrib
In-Reply-To: Message from "Diogo Della" of "Sun, 19 Jun 2005 19:38:23 -0300."
Message-ID: <200506192305.j5JN5fVl034547@possum.icir.org>

> I've a network of three routers running XORP and configured with RIP,
> IGMP and PIM.
>
> The configuration files are the same (changing the IPs and the RPs). Two
> of the routers work fine, and the multicast traffic passes from one
> network to the other.
>
> But on one router the MRIB is not formed. It does not make the multicast
> table, or does not receive it.
>
> The unicast table is well formed.
>
> When I do a "show route table ipv4 unicast final", the router shows all
> the networks.
>
> But, when I do "show route table ipv4 multicast final", the router shows
> only the directly connected nets.
>
> What are the possible problems?

The first thing that comes to mind is whether you have fib2mrib enabled
in your configuration.

If you have it already enabled, are there any errors or warning messages
when you start XORP?

Pavlin

From branco@pro.via-rs.com.br Tue Jun 21 16:23:15 2005
From: branco@pro.via-rs.com.br (Renato Rodrigues Branco)
Date: Tue, 21 Jun 2005 12:23:15 -0300
Subject: [Xorp-users] XORP x Zebra
Message-ID: <42B830E3.3070006@pro.via-rs.com.br>

Hi all!

My OS is FreeBSD and my ospfd is Zebra.
I need to implement multicast and chose to use PIM-SM.
Since PIM-SM needs only the unicast routing table: if I use XORP for
PIM-SM only, on the same computer, will it work fine with Zebra?

Thanks for all!
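For reference, the fib2mrib knob that Pavlin mentions lives under `protocols` in the XORP configuration file. A minimal sketch, based on XORP 1.1-era sample configs — the exact attribute (`disable` vs. `enable`) has varied between releases, so check the templates shipped with your tree:

```
protocols {
    fib2mrib {
        /* Populate the MRIB from the kernel forwarding table, so
           PIM-SM can do RPF lookups even when no XORP unicast
           routing protocol is running. */
        disable: false
    }
}
```

This is exactly the case in both threads above: unicast routes come from RIP or from Zebra/static routes outside XORP, so without fib2mrib the multicast RIB stays empty except for directly connected networks.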
From dario.vieira@int-evry.fr Tue Jun 21 16:58:30 2005
From: dario.vieira@int-evry.fr (Dario VIEIRA)
Date: Tue, 21 Jun 2005 17:58:30 +0200
Subject: [Xorp-users] bgp
Message-ID: <1119369510.42b839261f23f@imp.int-evry.fr>

We are trying to perform some experiments with XORP BGP, but we are
having some problems.

In our network configurations, each AS consists of a single router. In
addition, in each network configuration there is an AS "X" that announces
and later withdraws a single destination prefix "d".

We have used some families of network topologies, for instance CLIQUE of
size "n" (a network configuration made up of "n" ASes in a full mesh).

We are using IMUNES to simulate our network, and we are using XORP 1.0.

We started our simulation with a CLIQUE of size 3. The simulation runs
without any problem.

However, for a CLIQUE of size greater than 3, some XORP BGP peers crash
when the destination prefix "d" is announced and later withdrawn. We got
the following error message (for each BGP peer that crashed):

"[2005/06/21 16:32:04 INFO xorp_rip RIB] Received death event for
protocol bgp shutting down----------
OriginTable: ebgp
EGP
Next table = Merged:(ebgp)+(ibgp)"

Any idea about the origin of this problem?

Thanks for your help,

Dario

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.

From pavlin@icir.org Tue Jun 21 20:11:43 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Tue, 21 Jun 2005 12:11:43 -0700
Subject: [Xorp-users] XORP x Zebra
In-Reply-To: Message from Renato Rodrigues Branco of "Tue, 21 Jun 2005 12:23:15 -0300." <42B830E3.3070006@pro.via-rs.com.br>
Message-ID: <200506211911.j5LJBhV7074383@possum.icir.org>

> My OS is FreeBSD and my ospfd is Zebra.
> I need to implement multicast and chose to use PIM-SM.
> Since PIM-SM needs only the unicast routing table: if I use XORP for
> PIM-SM only, on the same computer, will it work fine with Zebra?
It should work, though we haven't tried it. Just make sure that you
enable fib2mrib in your XORP configuration.

Pavlin

From dario.vieira@int-evry.fr Wed Jun 22 13:35:06 2005
From: dario.vieira@int-evry.fr (Dario VIEIRA)
Date: Wed, 22 Jun 2005 14:35:06 +0200
Subject: [Xorp-users] Re: bgp
In-Reply-To: <1119369510.42b839261f23f@imp.int-evry.fr>
References: <1119369510.42b839261f23f@imp.int-evry.fr>
Message-ID: <1119443706.42b95afa66561@imp.int-evry.fr>

Hi,

We tried XORP 1.1, but we ran into the same problem. :(

Any help? It would be appreciated.

Cheers,

Dario

Quoting Dario VIEIRA:

> We are trying to perform some experiments with XORP BGP, but we are
> having some problems.
>
> In our network configurations, each AS consists of a single router. In
> addition, in each network configuration there is an AS "X" that
> announces and later withdraws a single destination prefix "d".
>
> We have used some families of network topologies, for instance CLIQUE
> of size "n" (a network configuration made up of "n" ASes in a full
> mesh).
>
> We are using IMUNES to simulate our network, and we are using XORP 1.0.
>
> We started our simulation with a CLIQUE of size 3. The simulation runs
> without any problem.
>
> However, for a CLIQUE of size greater than 3, some XORP BGP peers crash
> when the destination prefix "d" is announced and later withdrawn. We
> got the following error message (for each BGP peer that crashed):
>
> "[2005/06/21 16:32:04 INFO xorp_rip RIB] Received death event for
> protocol bgp shutting down----------
> OriginTable: ebgp
> EGP
> Next table = Merged:(ebgp)+(ibgp)"
>
> Any idea about the origin of this problem?
>
> Thanks for your help,
>
> Dario
>
> ----------------------------------------------------------------
> This message was sent using IMP, the Internet Messaging Program.

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
From atanu@ICSI.Berkeley.EDU Wed Jun 22 21:30:14 2005
From: atanu@ICSI.Berkeley.EDU (Atanu Ghosh)
Date: Wed, 22 Jun 2005 13:30:14 -0700
Subject: [Xorp-users] Re: bgp
In-Reply-To: Message from Dario VIEIRA of "Wed, 22 Jun 2005 14:35:06 +0200." <1119443706.42b95afa66561@imp.int-evry.fr>
Message-ID: <83204.1119472214@tigger.icir.org>

From your output it looks as if the BGP process has terminated and the
RIB has been notified. The RIB therefore attempts to remove the routes
installed by BGP.

Did you see any errors from BGP itself?

I would like to try and reproduce the problem for the four router case;
could you send the configuration files?

Atanu.

>>>>> "Dario" == Dario VIEIRA writes:

  Dario> Hi, We tried XORP 1.1, but we ran into the same problem. :(

  Dario> Any help? It would be appreciated.

  Dario> Cheers,

  Dario> Dario

  Dario> Quoting Dario VIEIRA:

  >> We are trying to perform some experiments with XORP BGP, but we
  >> are having some problems.
  >>
  >> In our network configurations, each AS consists of a single
  >> router. In addition, in each network configuration there is an
  >> AS "X" that announces and later withdraws a single destination
  >> prefix "d".
  >>
  >> We have used some families of network topologies, for instance
  >> CLIQUE of size "n" (a network configuration made up of "n" ASes
  >> in a full mesh).
  >>
  >> We are using IMUNES to simulate our network, and we are using
  >> XORP 1.0.
  >>
  >> We started our simulation with a CLIQUE of size 3. The simulation
  >> runs without any problem.
  >>
  >> However, for a CLIQUE of size greater than 3, some XORP BGP peers
  >> crash when the destination prefix "d" is announced and later
  >> withdrawn. We got the following error message (for each BGP peer
  >> that crashed):
  >>
  >> "[2005/06/21 16:32:04 INFO xorp_rip RIB] Received death event for
  >> protocol bgp shutting down---------- OriginTable: ebgp EGP Next
  >> table = Merged:(ebgp)+(ibgp)"
  >>
  >> Any idea about the origin of this problem?
  >> Thanks for your help,
  >>
  >> Dario
  >>
  >> ----------------------------------------------------------------
  >> This message was sent using IMP, the Internet Messaging Program.

  Dario> ----------------------------------------------------------------
  Dario> This message was sent using IMP, the Internet Messaging
  Dario> Program.
  Dario> _______________________________________________
  Dario> Xorp-users mailing list
  Dario> Xorp-users@xorp.org
  Dario> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

From samelstob@fastmail.fm Thu Jun 23 15:23:20 2005
From: samelstob@fastmail.fm (Sam Elstob)
Date: Thu, 23 Jun 2005 15:23:20 +0100
Subject: [Xorp-users] xorp_rtrmgr - Assertion failure
Message-ID: <1119536600.3287.1.camel@localhost.localdomain>

Managed to cause a failed assertion in XORP whilst starting up xorpsh.
Here is the output. Any ideas?

[ 2005/06/23 01:01:23 INFO xorp_rtrmgr:5515 XRL +460 xrl_router.cc send ] Resolving xrl:finder://finder/finder_event_notifier/0.1/register_class_event_interest?requester_instance:txt=rtrmgr-d5b380331df952fe197fd3e85e843c92@192.168.1.32&class_name:txt=xorpsh-5518-xen0
xorp_rtrmgr: ../sysdeps/generic/printf_fphex.c:163: __printf_fphex: Assertion `*decimal != '\0' && decimalwc != L'\0'' failed.
[ 2005/06/23 01:01:23 ERROR xorp_fea:5516 CLI +115 xrl_cli_node.cc finder_disconnect_event ] Finder disconnect event. Exiting immediately...
[ 2005/06/23 01:01:23 INFO xorp_fea CLI ] CLI stopped
[ 2005/06/23 01:01:23 ERROR xorp_fea:5516 MFEA +192 xrl_mfea_node.cc finder_disconnect_event ] Finder disconnect event. Exiting immediately...
Aborted

sam

From M.Handley@cs.ucl.ac.uk Thu Jun 23 16:23:18 2005
From: M.Handley@cs.ucl.ac.uk (Mark Handley)
Date: Thu, 23 Jun 2005 16:23:18 +0100
Subject: [Xorp-users] xorp_rtrmgr - Assertion failure
In-Reply-To: Your message of "Thu, 23 Jun 2005 15:23:20 BST."
<1119536600.3287.1.camel@localhost.localdomain>
Message-ID: <22749.1119540198@aardvark.cs.ucl.ac.uk>

>Managed to cause a failed assertion in XORP whilst starting up xorpsh.
>Here is the output. Any ideas?
>
>[ 2005/06/23 01:01:23 INFO xorp_rtrmgr:5515 XRL +460 xrl_router.cc send ] Resolving xrl:finder://finder/finder_event_notifier/0.1/register_class_event_interest?requester_instance:txt=rtrmgr-d5b380331df952fe197fd3e85e843c92@192.168.1.32&class_name:txt=xorpsh-5518-xen0
>xorp_rtrmgr: ../sysdeps/generic/printf_fphex.c:163: __printf_fphex: Assertion `*decimal != '\0' && decimalwc != L'\0'' failed.
>[ 2005/06/23 01:01:23 ERROR xorp_fea:5516 CLI +115 xrl_cli_node.cc finder_disconnect_event ] Finder disconnect event. Exiting immediately...
>[ 2005/06/23 01:01:23 INFO xorp_fea CLI ] CLI stopped
>[ 2005/06/23 01:01:23 ERROR xorp_fea:5516 MFEA +192 xrl_mfea_node.cc finder_disconnect_event ] Finder disconnect event. Exiting immediately...
>Aborted

If you've still got the core file, can you run gdb on it and give me a
backtrace?

 - Mark

From bitim@cascv.brown.edu Thu Jun 23 18:08:19 2005
From: bitim@cascv.brown.edu (Melih Bitim)
Date: Thu, 23 Jun 2005 13:08:19 -0400
Subject: [Xorp-users] PIM problem: not forwarding from internal to external interface
Message-ID: <20050623130819.A9173@cascv.brown.edu>

Hi,

We are trying to route multicast through our firewall, primarily for
AccessGrid access. The firewall has a 2.6.9 Linux kernel and two active
interfaces, eth0 (internal) and eth2 (external); eth1 is NOT configured.
A campus router provides the static RP, though the RP is not a next hop
from the firewall. Is that a problem?

Internet -- Router (is RP, a.b.r.1) -- ... -- Campus Router1 (a.b.c.75)
  -- (eth2, a.b.c.74) Linux Firewall (eth0, a.b.d.1) -- switch
  -- local beacon node (a.b.d.14)

ping to 224.0.0.13 from the 'beacon node' shows a.b.d.1.
ping to 224.0.0.13 from the firewall (interface eth0) shows itself (a.b.d.1).
ping to 224.0.0.13 from the firewall (interface eth2) shows itself (a.b.c.74)
and the neighbor a.b.c.75.

The firewall rules have been amended to allow IGMP, ICMP and PIM through
after analyzing the firewall log, and nothing related is rejected or
dropped by the firewall (as far as I can see).

We are using a fresh CVS copy of XORP and a pretty standard configuration
file. The firewall has static routes on it for a bunch of networks behind
it, and no dynamic routing protocols are used. So my assumption is that
fib2mrib should be used.

The 'beacon node' and the firewall have interfaces on the same network.
Running the beacon software, we see multicast traffic from outside hit
the beacon node. But multicast initiated from the beacon node hits the
internal interface on the firewall (eth0) and doesn't get sent out on
eth2 (the external interface). Unicast routing works perfectly.

We don't have different paths for incoming/outgoing packets, so I don't
think we need rp_filter=0, but it is set like that.
/proc/sys/net/ipv4/conf/eth0/mc_forwarding:1
/proc/sys/net/ipv4/conf/eth2/mc_forwarding:1
/proc/sys/net/ipv4/conf/lo/mc_forwarding:0
/proc/sys/net/ipv4/conf/pimreg/mc_forwarding:1
/proc/sys/net/ipv4/ip_forward:1
/proc/sys/net/ipv4/conf/eth0/rp_filter:0
/proc/sys/net/ipv4/conf/eth2/rp_filter:0
/proc/sys/net/ipv4/conf/lo/rp_filter:1
/proc/sys/net/ipv4/conf/pimreg/rp_filter:0

Xorp> show mfea interface
Interface    State Vif/PifIndex Addr     Flags
eth0         UP    0/2          a.b.d.1  MULTICAST BROADCAST KERN_UP
eth2         UP    1/4          a.b.c.74 MULTICAST BROADCAST KERN_UP
register_vif UP    2/2          a.b.d.1  PIM_REGISTER KERN_UP

Xorp> show igmp interface
Interface State Querier  Timeout Version Groups
eth0      UP    a.b.d.1  None    2       3
eth2      UP    a.b.c.74 None    2       2

Xorp> show pim neighbors
Interface DRpriority NeighborAddr V Mode   Holdtime Timeout
eth2      1          a.b.c.75     2 Sparse 105      94

Xorp> show pim interface
Interface    State Mode   V PIMstate Priority DRaddr   Neighbors
eth0         UP    Sparse 2 DR       1        a.b.d.1  0
eth2         UP    Sparse 2 NotDR    1        a.b.c.75 1
register_vif UP    Sparse 2 DR       1        a.b.d.1  0

Xorp> show pim rps
RP      Type   Pri Holdtime Timeout ActiveGroups GroupPrefix
a.b.r.1 static 192 -1       -1      1            224.0.0.0/4

Xorp> show pim mfc
233.4.200.19 random_beacon_IP a.b.r.1
    Incoming interface : eth2
    Outgoing interfaces: O..

.... tons of beacon nodes just like the one above

233.4.200.19 a.b.d.14 a.b.r.1
    Incoming interface : eth0
    Outgoing interfaces: ..O

Xorp> show pim mrib
shows the whole unicast table

Xorp> show pim scope
GroupPrefix Interface

Xorp> show pim join
Group        Source  RP      Flags
233.4.200.19 0.0.0.0 a.b.r.1 WC
    Upstream interface (RP):     eth2
    Upstream MRIB next hop (RP): a.b.c.75
    Upstream RPF'(*,G):          a.b.c.75
    Upstream state:              Joined
    Join timer:                  39
    Local receiver include WC:   O..
    Joins RP:                    ...
    Joins WC:                    ...
    Join state:                  ...
    Prune state:                 ...
    Prune pending state:         ...
    I am assert winner state:    O..
    I am assert loser state:     ...
    Assert winner WC:            O..
    Assert lost WC:              ...
    Assert tracking WC:          OO.
    Could assert WC:             O..
    I am DR:                     O.O
    Immediate olist RP:          ...
    Immediate olist WC:          O..
    Inherited olist SG:          O..
    Inherited olist SG_RPT:      O..
    PIM include WC:              O..

233.4.200.19 a.b.d.14 a.b.r.1 SG_RPT DirectlyConnectedS
    Upstream interface (S):      eth0
    Upstream interface (RP):     eth2
    Upstream MRIB next hop (RP): a.b.c.75
    Upstream RPF'(S,G,rpt):      a.b.c.75
    Upstream state:              Pruned
    Override timer:              -1
    Local receiver include WC:   O..
    Joins RP:                    ...
    Joins WC:                    ...
    Prunes SG_RPT:               ...
    Join state:                  ...
    Prune state:                 ...
    Prune pending state:         ...
    Prune tmp state:             ...
    Prune pending tmp state:     ...
    Assert winner WC:            O..
    Assert lost WC:              ...
    Assert lost SG_RPT:          ...
    Could assert WC:             O..
    Could assert SG:             ..O
    I am DR:                     O.O
    Immediate olist RP:          ...
    Immediate olist WC:          O..
    Inherited olist SG:          O.O
    Inherited olist SG_RPT:      O..
    PIM include WC:              O..

233.4.200.19 a.b.d.14 a.b.r.1 SG SPT DirectlyConnectedS
    Upstream interface (S):      eth0
    Upstream interface (RP):     eth2
    Upstream MRIB next hop (RP): a.b.c.75
    Upstream MRIB next hop (S):  UNKNOWN
    Upstream RPF'(S,G):          UNKNOWN
    Upstream state:              Joined
    Register state:              RegisterJoin RegisterCouldRegister
    Join timer:                  39
    KAT(S,G) running:            true
    Local receiver include WC:   O..
    Local receiver include SG:   ...
    Local receiver exclude SG:   ...
    Joins RP:                    ...
    Joins WC:                    ...
    Joins SG:                    ..O
    Join state:                  ..O
    Prune state:                 ...
    Prune pending state:         ...
    I am assert winner state:    ...
    I am assert loser state:     ...
    Assert winner WC:            O..
    Assert winner SG:            ...
    Assert lost WC:              ...
    Assert lost SG:              ...
    Assert lost SG_RPT:          ...
    Assert tracking SG:          O.O
    Could assert WC:             O..
    Could assert SG:             ..O
    I am DR:                     O.O
    Immediate olist RP:          ...
    Immediate olist WC:          O..
    Immediate olist SG:          ..O
    Inherited olist SG:          O.O
    Inherited olist SG_RPT:      O..
    PIM include WC:              O..
    PIM include SG:              ...
    PIM exclude SG:              ...
Before I send the complete log, I would like to send the warning/error
messages and see if they make sense to you, and if we need to be
concerned:

ERROR xorp_fea:1703 MFEA +1781 mfea_proto_comm.cc proto_socket_write ] sendmsg(proto 103 from a.b.d.1 to a.b.r.1 on vif register_vif) failed: Message too long
ERROR xorp_pimsm4:1729 PIM +2617 xrl_pim_node.cc mfea_client_send_protocol_message_cb ] Cannot send a protocol message: 102 Command failed Cannot send PIMSM_4 protocol message from a.b.d.1 to a.b.r.1 on vif register_vif
WARNING xorp_fea MFEA ] proto_socket_read() failed: RX packet from a.b.r.1 to 224.0.0.2: no vif found
WARNING xorp_fea XrlMfeaTarget ] Handling method for mfea/0.1/send_protocol_message4 failed: XrlCmdError 102 Command failed Cannot send PIMSM_4 protocol message from a.b.d.1 to a.b.r.1 on vif register_vif

After XORP initialization, we continuously see the following printed by
XORP:

[ 2005/06/23 12:49:28 TRACE xorp_pimsm4 PIM ] TX PIM_REGISTER from a.b.d.1 to a.b.r.1 on vif register_vif
[ 2005/06/23 12:49:28 TRACE xorp_pimsm4 PIM ] RX WHOLEPKT signal from MFEA_4: vif_index = 2 src = a.b.d.14 dst = 233.4.200.19
[ 2005/06/23 12:49:28 TRACE xorp_pimsm4 PIM ] TX PIM_REGISTER from a.b.d.1 to a.b.r.1 on vif register_vif

Any ideas? Many thanks, your help is greatly appreciated,

--
Melih Bitim
Brown Univ.

From pavlin@icir.org Thu Jun 23 20:22:38 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Thu, 23 Jun 2005 12:22:38 -0700
Subject: [Xorp-users] PIM problem: not forwarding from internal to external interface
In-Reply-To: Message from Melih Bitim of "Thu, 23 Jun 2005 13:08:19 EDT." <20050623130819.A9173@cascv.brown.edu>
Message-ID: <200506231922.j5NJMckj071304@possum.icir.org>

First, thanks for the very detailed info, because it eliminates a number
of extra email exchanges :)

> We are trying to route multicast through our firewall, primarily for
> AccessGrid access.
> The firewall has a 2.6.9 Linux kernel and two active interfaces, eth0
> (internal) and eth2 (external); eth1 is NOT configured. A campus router
> provides the static RP, though the RP is not a next hop from the
> firewall. Is that a problem?

This should be fine.

> Internet -- Router (is RP, a.b.r.1) -- ... -- Campus Router1 (a.b.c.75)
>   -- (eth2, a.b.c.74) Linux Firewall (eth0, a.b.d.1) -- switch
>   -- local beacon node (a.b.d.14)

[Detailed info deleted]

> We are using a fresh CVS copy of XORP and a pretty standard
> configuration file. The firewall has static routes on it for a bunch of
> networks behind it, and no dynamic routing protocols are used. So my
> assumption is that fib2mrib should be used.

Yes, you should use fib2mrib. Even if you have dynamic routing protocols,
for the time being you still should use fib2mrib.

> Xorp> show pim mfc
> 233.4.200.19 random_beacon_IP a.b.r.1
>     Incoming interface : eth2
>     Outgoing interfaces: O..
>
> .... tons of beacon nodes just like the one above
>
> 233.4.200.19 a.b.d.14 a.b.r.1
>     Incoming interface : eth0
>     Outgoing interfaces: ..O

The above entry is the most important clue that PIM-SM should have
installed the correct multicast forwarding entry in the kernel. You could
double-check by verifying that the entry is in the kernel as well
(cat /proc/net/ip_mr_cache), but in this case I don't think it is
necessary, because the problem seems to be elsewhere (see below).
> Before I send the complete log, I would like to send the warning/error
> messages and see if they make sense to you, and if we need to be
> concerned:
>
> ERROR xorp_fea:1703 MFEA +1781 mfea_proto_comm.cc proto_socket_write ] sendmsg(proto 103 from a.b.d.1 to a.b.r.1 on vif register_vif) failed: Message too long
> ERROR xorp_pimsm4:1729 PIM +2617 xrl_pim_node.cc mfea_client_send_protocol_message_cb ] Cannot send a protocol message: 102 Command failed Cannot send PIMSM_4 protocol message from a.b.d.1 to a.b.r.1 on vif register_vif
> WARNING xorp_fea MFEA ] proto_socket_read() failed: RX packet from a.b.r.1 to 224.0.0.2: no vif found
> WARNING xorp_fea XrlMfeaTarget ] Handling method for mfea/0.1/send_protocol_message4 failed: XrlCmdError 102 Command failed Cannot send PIMSM_4 protocol message from a.b.d.1 to a.b.r.1 on vif register_vif

Yes, I think this is probably the problem. After PIM-SM receives a data
packet from the beacon, it encapsulates it by adding the PIM Register
header and then tries to unicast it to the RP. It looks like after the
encapsulation the packet becomes too large and the kernel doesn't want
to accept it for transmission.

It is not clear to me why the kernel didn't like the packet, so to start
chasing the problem can you do the following:

* Run tcpdump on the interface between your XORP router and the
  multicast beacon, and capture the original size of the multicast data
  packets.

* Check out the latest version of fea/mfea_proto_comm.cc (rev 1.32) and
  run XORP again. The newer version prints the size of the failed data
  packet, so this can provide some additional clue about why the kernel
  doesn't like the packet.
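The arithmetic behind this "Message too long" theory can be sketched as follows. Header sizes come from RFC 791 (outer IPv4 header) and RFC 4601 (PIM Register header); the 1500-byte Ethernet MTU is an assumption about this particular network, and the thread leaves open whether this is the actual cause.

```python
# Sketch: why PIM Register encapsulation can push a packet past the MTU.
# A Register message wraps the ORIGINAL IP packet in an 8-byte PIM
# Register header plus a fresh 20-byte outer IPv4 header (no options).
OUTER_IPV4_HEADER = 20   # RFC 791: minimum IPv4 header
PIM_REGISTER_HEADER = 8  # RFC 4601: 4-byte PIM header + 4-byte flags word

def register_size(original_packet_len: int) -> int:
    """Total size of the unicast Register packet sent to the RP."""
    return original_packet_len + PIM_REGISTER_HEADER + OUTER_IPV4_HEADER

def too_long(original_packet_len: int, mtu: int = 1500) -> bool:
    """True if the encapsulated Register exceeds the outgoing MTU,
    one way sendmsg() can fail with 'Message too long'."""
    return register_size(original_packet_len) > mtu

# A 1480-byte beacon packet fits the LAN MTU, but its Register does not:
print(register_size(1480), too_long(1480))  # 1508 True
```

This is exactly why Pavlin asks for the original packet sizes from tcpdump: any beacon packet longer than MTU minus 28 bytes would fail to encapsulate.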
> After XORP initialization, we continuously see the following printed by
> XORP:
>
> [ 2005/06/23 12:49:28 TRACE xorp_pimsm4 PIM ] TX PIM_REGISTER from a.b.d.1 to a.b.r.1 on vif register_vif
> [ 2005/06/23 12:49:28 TRACE xorp_pimsm4 PIM ] RX WHOLEPKT signal from MFEA_4: vif_index = 2 src = a.b.d.14 dst = 233.4.200.19
> [ 2005/06/23 12:49:28 TRACE xorp_pimsm4 PIM ] TX PIM_REGISTER from a.b.d.1 to a.b.r.1 on vif register_vif

The above TRACE messages are normal. They indicate that PIM-SM properly
receives the whole data packets from the kernel, then encapsulates them
and initiates the transmission to the RP.

Pavlin

From dario.vieira@int-evry.fr Thu Jun 23 22:14:38 2005
From: dario.vieira@int-evry.fr (Dario VIEIRA)
Date: Thu, 23 Jun 2005 23:14:38 +0200
Subject: [Xorp-users] bgp
Message-ID: <1119561278.42bb263ea18a9@imp.int-evry.fr>

Hi Atanu,

Thanks for your reply. Here are my configuration files (attached), one
for each router "r" in my topology.

We didn't see any error from BGP itself! When the single destination
prefix "d" is announced, everything is okay. However, when the same
prefix "d" is withdrawn, the problem takes place.

Thanks for your help.

Dario

----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
[Attachment: conf.tar.gz (application/x-gzip, base64 content omitted)]

From Dario Vieira Thu Jun 23 14:16:17 2005
From: Dario Vieira (Dario Vieira)
Date: Thu, 23 Jun 2005 15:16:17 +0200
Subject: [Xorp-users] Re: bgp
In-Reply-To: <83204.1119472214@tigger.icir.org>
References: <1119443706.42b95afa66561@imp.int-evry.fr> <83204.1119472214@tigger.icir.org>
Hi Atanu,

Thanks for your reply. Here are my configuration files (attached), one
for each router "r" in my topology.

We didn't see any error from BGP itself! When the single destination
prefix "d" is announced, everything is okay. However, when the same
prefix "d" is withdrawn, the problem takes place.

Thanks for your help.

Dario

On 6/22/05, Atanu Ghosh wrote:
>
> From your output it looks as if the BGP process has terminated and the
> RIB has been notified. The RIB therefore attempts to remove the routes
> installed by BGP.
>
> Did you see any errors from BGP itself?
>
> I would like to try and reproduce the problem for the four router case;
> could you send the configuration files?
>
> Atanu.
>
> >>>>> "Dario" == Dario VIEIRA <dario.vieira@int-evry.fr> writes:
>
>   Dario> Hi, We tried XORP 1.1, but we ran into the same problem. :(
>
>   Dario> Any help? It would be appreciated.
>
>   Dario> Cheers,
>
>   Dario> Dario
>
>   Dario> Quoting Dario VIEIRA:
>
>   >> We are trying to perform some experiments with XORP BGP, but we
>   >> are having some problems.
>   >>
>   >> In our network configurations, each AS consists of a single
>   >> router. In addition, in each network configuration there is an
>   >> AS "X" that announces and later withdraws a single destination
>   >> prefix "d".
>   >>
>   >> We have used some families of network topologies, for instance
>   >> CLIQUE of size "n" (a network configuration made up of "n" ASes
>   >> in a full mesh).
>   >>
>   >> We are using IMUNES to simulate our network, and we are using
>   >> XORP 1.0.
>   >>
>   >> We started our simulation with a CLIQUE of size 3. The
>   >> simulation runs without any problem.
>   >>
>   >> However, for a CLIQUE of size greater than 3, some XORP BGP
>   >> peers crash when the destination prefix "d" is announced and
>   >> later withdrawn. We got the following error message (for each
>   >> BGP peer that crashed):
>   >>
>   >> "[2005/06/21 16:32:04 INFO xorp_rip RIB] Received death event
>   >> for protocol bgp shutting down---------- OriginTable: ebgp EGP
>   >> Next table = Merged:(ebgp)+(ibgp)"
>   >>
>   >> Any idea about the origin of this problem?
>   >>
>   >> Thanks for your help,
>   >>
>   >> Dario
>   >>
>   >> ----------------------------------------------------------------
>   >> This message was sent using IMP, the Internet Messaging Program.
>
>   Dario> ----------------------------------------------------------------
>   Dario> This message was sent using IMP, the Internet Messaging
>   Dario> Program.
>   Dario> _______________________________________________
>   Dario> Xorp-users mailing list
>   Dario> Xorp-users@xorp.org
>   Dario> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

--
Dario Vieira
dario.vieira@gmail.com
http://www-lor.int-evry.fr/~vieira_c
From a20081@alunos.det.ua.pt Fri Jun 24 18:10:15 2005 From: a20081@alunos.det.ua.pt (Tiago Costa) Date: Fri, 24 Jun 2005 18:10:15 +0100 Subject: [Xorp-users] inter as multicast Message-ID: 

Hi!!
I am trying to configure one network with 2 Autonomous Systems, as shown below:

                    AS1          |        |          AS2

Host1 --------- cisco1 --------- xorp1 ---|------|---- xorp2 ----------- cisco2 ---------- host2

                                 |        |

In cisco1 I have configured RIP, MSDP and PIM SM.
In cisco2 I have configured RIP, MSDP and PIM SM.
In xorp1 I have configured RIP, MBGP and PIM SM.
In xorp2 I have configured RIP, MBGP and PIM SM.

The configuration files are in attachment.

When I start a multicast session with host1, I see MSDP SA packets between cisco1 and cisco2. The multicast session is to 224.30.30.30.

When host2 joins the session, I see cisco2 sending a Join (S,G) message to xorp2, and I see the same between xorp2 and xorp1, but I don't see any Join message between xorp1 and cisco1, so the association of host2 to the multicast session fails.

I have looked at the multicast table of xorp1 and I don't find any entry for the net where host1 is present. I think this is why xorp1 doesn't send the Join message to cisco1.

My question is: am I doing something wrong in my configuration???

Tiago Costa
Attachment: xorp1.boot

/* $XORP: xorp/rtrmgr/config.boot.sample,v 1.23 2005/03/09 22:50:41 pavlin Exp $ */

interfaces {
    interface xl0 {
	description: "data interface"
	disable: false
	/* default-system-config */
	vif xl0 {
	    disable: false
	    address 192.10.10.1 {
		prefix-length: 24
		broadcast: 192.10.10.255
		disable: false
	    }
	}
    }
    interface rl0 {
	description: "data interface"
	disable: false
	/* default-system-config */
	vif rl0 {
	    disable: false
	    address 192.100.0.1 {
		prefix-length: 24
		broadcast: 192.100.0.255
		disable: false
	    }
	}
    }
}

fea {
    unicast-forwarding4 {
	disable: false
    }
}

protocols {
    rip {
	/* Redistribute routes for connected interfaces */
	export connected {
	    metric: 0
	    tag: 0
	}
	/* Run on specified network interface addresses */
	interface xl0 {
	    vif xl0 {
		address 192.10.10.1 {
		    disable: false
		}
	    }
	}
    }
}

protocols {
    bgp {
	bgp-id: 192.100.0.1
	local-as: 101
	peer 192.100.0.2 {
	    local-ip: 192.100.0.1
	    as: 102
	    next-hop: 192.100.0.1
	    /* holdtime: 120 */
	    /* disable: false */
	    /* Optionally enable other AFI/SAFI combinations */
	    ipv4-multicast: true
	}
	/* Originate IPv4 Routes */
	network4 192.10.0.0/16 {
	    next-hop: 192.100.0.1
	    unicast: true
	    multicast: true
	}
    }
}

plumbing {
    mfea4 {
	disable: false
	interface xl0 {
	    vif xl0 {
		disable: false
	    }
	}
	interface rl0 {
	    vif rl0 {
		disable: false
	    }
	}
	interface register_vif {
	    vif register_vif {
		/* Note: this vif should be always enabled */
		disable: false
	    }
	}
	traceoptions {
	    flag all {
		disable: false
	    }
	}
    }
}

protocols {
    igmp {
	disable: false
	interface xl0 {
	    vif xl0 {
		disable: false
	    }
	}
	traceoptions {
	    flag all {
		disable: false
	    }
	}
    }
}

protocols {
    pimsm4 {
	disable: false
	interface xl0 {
	    vif xl0 {
		disable: false
		dr-priority: 1
		/* alternative-subnet 10.40.0.0/16 */
	    }
	}
	interface rl0 {
	    vif rl0 {
		disable: false
		dr-priority: 1
		/* alternative-subnet 10.40.0.0/16 */
	    }
	}
	interface register_vif {
	    vif register_vif {
		/* Note: this vif should be always enabled */
		disable: false
	    }
	}
	static-rps {
	    rp 192.10.10.10 {
		group-prefix 225.255.255.0/24 {
		    rp-priority: 192
		    /* hash-mask-len: 30 */
		}
	    }
	}
	switch-to-spt-threshold {
	    /* approx. 1K bytes/s (10Kbps) threshold */
	    disable: false
	    interval-sec: 100
	    bytes: 102400
	}
	traceoptions {
	    flag all {
		disable: false
	    }
	}
    }
}

/*
 * Note: fib2mrib is needed for multicast only if the unicast protocols
 * don't populate the MRIB with multicast-specific routes.
 */
protocols {
    fib2mrib {
	disable: false
    }
}

Attachment: xorp2.boot

/* $XORP: xorp/rtrmgr/config.boot.sample,v 1.23 2005/03/09 22:50:41 pavlin Exp $ */

interfaces {
    interface xl0 {
	description: "data interface"
	disable: false
	/* default-system-config */
	vif xl0 {
	    disable: false
	    address 192.20.10.1 {
		prefix-length: 24
		broadcast: 192.20.10.255
		disable: false
	    }
	}
    }
    interface xl1 {
	description: "data interface"
	disable: false
	/* default-system-config */
	vif xl1 {
	    disable: false
	    address 192.100.0.2 {
		prefix-length: 24
		broadcast: 192.100.0.255
		disable: false
	    }
	}
    }
}

fea {
    unicast-forwarding4 {
	disable: false
    }
}

protocols {
    rip {
	/* Redistribute routes for connected interfaces */
	export connected {
	    metric: 0
	    tag: 0
	}
	/* Run on specified network interface addresses */
	interface xl0 {
	    vif xl0 {
		address 192.20.10.1 {
		    disable: false
		}
	    }
	}
    }
}

protocols {
    bgp {
	bgp-id: 192.100.0.2
	local-as: 102
	peer 192.100.0.1 {
	    local-ip: 192.100.0.2
	    as: 101
	    next-hop: 192.100.0.2
	    ipv4-multicast: true
	}
	/* Originate IPv4 Routes */
	network4 192.20.0.0/16 {
	    next-hop: 192.100.0.2
	    unicast: true
	    multicast: true
	}
    }
}

plumbing {
    mfea4 {
	disable: false
	interface xl0 {
	    vif xl0 {
		disable: false
	    }
	}
	interface xl1 {
	    vif xl1 {
		disable: false
	    }
	}
	interface register_vif {
	    vif register_vif {
		/* Note: this vif should be always enabled */
		disable: false
	    }
	}
	traceoptions {
	    flag all {
		disable: false
	    }
	}
    }
}

protocols {
    igmp {
	disable: false
	interface xl0 {
	    vif xl0 {
		disable: false
	    }
	}
	traceoptions {
	    flag all {
		disable: false
	    }
	}
    }
}

protocols {
    pimsm4 {
	disable: false
	interface xl0 {
	    vif xl0 {
		disable: false
		dr-priority: 1
		/* alternative-subnet 10.40.0.0/16 */
	    }
	}
	interface xl1 {
	    vif xl1 {
		disable: false
		dr-priority: 1
		/* alternative-subnet 10.40.0.0/16 */
	    }
	}
	interface register_vif {
	    vif register_vif {
		/* Note: this vif should be always enabled */
		disable: false
	    }
	}
	static-rps {
	    rp 192.20.10.10 {
		group-prefix 225.255.255.0/24 {
		    rp-priority: 192
		    /* hash-mask-len: 30 */
		}
	    }
	}
	switch-to-spt-threshold {
	    /* approx. 1K bytes/s (10Kbps) threshold */
	    disable: false
	    interval-sec: 100
	    bytes: 102400
	}
	traceoptions {
	    flag all {
		disable: false
	    }
	}
    }
}

/*
 * Note: fib2mrib is needed for multicast only if the unicast protocols
 * don't populate the MRIB with multicast-specific routes.
 */
protocols {
    fib2mrib {
	disable: false
    }
}

Attachment: cisco1.txt

configur termi
ip route 0.0.0.0 0.0.0.0 192.10.10.1
ip multicast-routing
interface fast 0/0
ip address 192.10.10.10 255.255.255.0
no shut
ip pim sparse-mode
ip pim rp-address 192.10.10.10
interface fast 0/1
ip address 192.10.20.20 255.255.255.0
no shut
ip pim sparse-mode
ip pim rp-address 192.10.10.10
ip msdp peer 192.20.10.10 connect-source fastEthernet 0/0 remote-as 102
ip msdp default-peer 192.20.10.10
router rip
version 2
network 192.10.10.0
network 192.10.20.0
end
write

Attachment: cisco2.txt

configur termi
ip multicast-routing
ip route 0.0.0.0 0.0.0.0 192.20.10.1
interface ethernet 0/0
ip address 192.20.10.10 255.255.255.0
no shut
ip pim sparse-mode
ip pim rp-address 192.20.10.10
interface ethernet 0/1
ip address 192.20.20.20 255.255.255.0
no shut
ip pim sparse-mode
ip pim rp-address 192.20.10.10
ip msdp peer 192.10.10.10 connect-source ethernet 0/0 remote-as 101
ip msdp default-peer 192.10.10.10
router rip
version 2
network 192.20.10.0
network 192.20.20.0
end
write

From pavlin@icir.org Fri Jun 24 20:57:16 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Fri, 24 Jun 2005 12:57:16 -0700 Subject: [Xorp-users] inter as multicast In-Reply-To: Message from "Tiago Costa" of "Fri, 24 Jun 2005 
18:10:15 BST." Message-ID: <200506241957.j5OJvGma064423@possum.icir.org>

> I am trying to configure one network with 2 Autonomous Systems, as shown below:
>
>                     AS1          |        |          AS2
>
> Host1 --------- cisco1 --------- xorp1 ---|------|---- xorp2 ----------- cisco2 ---------- host2
>
>                                  |        |
>
> In cisco1 I have configured RIP, MSDP and PIM SM
> In cisco2 I have configured RIP, MSDP and PIM SM
> In xorp1 I have configured RIP, MBGP and PIM SM
> In xorp2 I have configured RIP, MBGP and PIM SM
>
> The configuration files are in attachment.
>
> When I start a multicast session with host1 I see MSDP SA packets between
> cisco1 and cisco2. The multicast session is to 224.30.30.30.
> When host2 joins the session I see cisco2 sending a Join (S,G) message

Do you really mean an (S,G) Join message, or is it a (*,G) Join? The (S,G) Join message should have the host1 address as S, while a (*,G) Join message would have the RP address instead (either 192.10.10.10 or 192.20.10.10) and the RPT and W flags set. Please double-check by running tcpdump between xorp1 and xorp2.

If the message is a (*,G) Join, then xorp1 will silently drop it because of the RP address mismatch (the RP inside the (*,G) Join message doesn't match the RP address for that group according to xorp1). E.g., see Section 4.5.2 of the latest PIM-SM I-D spec (draft-ietf-pim-sm-v2-new-11.txt):

====
When a router receives a Join(*,G) or Prune(*,G) it must first check
to see whether the RP in the message matches RP(G) (the router's idea
of who the RP is). If the RP in the message does not match RP(G) the
Join(*,G) or Prune(*,G) should be silently dropped.
====

If the message is an (S,G) Join as you say, then indeed xorp1 must contain a matching entry for host1 in the MRIB, otherwise the (S,G) Join won't be propagated. If there was no matching MRIB entry, typically XORP should print a warning log message such as "no upstream PIM neighbors".
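The RP-matching rule quoted from the I-D amounts to a one-line comparison. A hypothetical sketch (illustrative Python, not XORP code; `rp_of_group` stands in for the router's group-to-RP mapping):

```python
# Sketch of the Join(*,G)/Prune(*,G) RP check described in Section 4.5.2
# of draft-ietf-pim-sm-v2-new. Hypothetical helper, not actual XORP code.

def accept_star_g_join(msg_rp, group, rp_of_group):
    """Return True if a (*,G) Join/Prune should be processed.

    The message carries the sender's idea of the RP; if it differs from
    this router's RP(G), the message is silently dropped.
    """
    return msg_rp == rp_of_group(group)

# xorp1 maps 224.30.30.30 to RP 192.10.10.10 (per its static-rps config) ...
rp_of_group = {"224.30.30.30": "192.10.10.10"}.get

# ... so a (*,G) Join carrying cisco2's RP (192.20.10.10) would be dropped:
print(accept_star_g_join("192.20.10.10", "224.30.30.30", rp_of_group))  # False
print(accept_star_g_join("192.10.10.10", "224.30.30.30", rp_of_group))  # True
```

This is why the tcpdump check matters: a (*,G) Join that names the wrong RP leaves no trace other than silence.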
In your case the route to host1 should come from RIP, so you can use the following commands to locate the missing 192.10.20.0/24 route on xorp1:

show route table ipv4 unicast rip
show route table ipv4 unicast final
show route table ipv4 multicast fib2mrib
show route table ipv4 multicast final
show pim mrib

Pavlin

> to xorp2, and I see the same between xorp2 and xorp1, but I don't see any Join
> message between xorp1 and cisco1, so the association of host2
> to the multicast session fails.
>
> I have looked at the multicast table of xorp1 and I don't find any entry for the
> net where host1 is present. I think this is why xorp1 doesn't send the Join
> message to cisco1.
>
> My question is: am I doing something wrong in my configuration???

From a20081@alunos.det.ua.pt Mon Jun 27 18:00:55 2005 From: a20081@alunos.det.ua.pt (Tiago Costa) Date: Mon, 27 Jun 2005 18:00:55 +0100 Subject: [Xorp-users] inter as multicast In-Reply-To: <200506241957.j5OJvGma064423@possum.icir.org> Message-ID: 

             AS1     |    |     AS2
Host1 ---- cisco1 ------ xorp1 ----|----|---- xorp2 ----- cisco2 ---- host2

The message is (S,G), I am sure of it. I have looked at the unicast and multicast tables as you suggested, and I can see the network 192.10.20.0 in the unicast tables, but this network never appears in the multicast tables.

I see the join status in both xorp routers and I confirm that there is one entry for my multicast group 224.30.30.30 in (S,G) mode (as you can see in the file "xorp1(show pim join).txt").

The only change I made to the configuration is the group-prefix at the RP, which is now 224.0.0.0/8.

My question is: why doesn't xorp add a route to the multicast table?

In attachment I send the results of the commands you suggested, plus the PIM join table.

Thanks for your help.
Tiago Costa

-----Original Message----- From: xorp-users-admin@xorp.org [mailto:xorp-users-admin@xorp.org] On behalf of Pavlin Radoslavov Sent: Friday, 24 June 2005 19:57 To: Tiago Costa Cc: xorp-users@xorp.org; 'Pavlin Radoslavov' Subject: Re: [Xorp-users] inter as multicast
_______________________________________________
Xorp-users mailing list Xorp-users@xorp.org
http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

Attachment: xorp1 (ipv4 multicast fib2mrib).txt

Network 192.10.10.0/24 Nexthop := 0.0.0.0 Metric := 65535 Protocol := fib2mrib Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 0.0.0.0 Metric := 65535 Protocol := fib2mrib Interface := rl0 Vif := rl0

Attachment: xorp1 (ipv4 multicast final).txt

Network 192.20.0.0/16 Nexthop := 192.100.0.2 Metric := 0 Protocol := ebgp Interface := rl0 Vif := rl0
Network 192.10.10.0/24 Nexthop := 192.10.10.1 Metric := 0 Protocol := connected Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 192.100.0.1 Metric := 0 Protocol := connected Interface := rl0 Vif := rl0

Attachment: xorp1 (ipv4 unicast final).txt

Network 192.20.0.0/16 Nexthop := 192.100.0.2 Metric := 0 Protocol := ebgp Interface := rl0 Vif := rl0
Network 192.10.10.0/24 Nexthop := 192.10.10.1 Metric := 0 Protocol := connected Interface := xl0 Vif := xl0
Network 192.10.20.0/24 Nexthop := 192.10.10.10 Metric := 2 Protocol := rip Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 192.100.0.1 Metric := 0 Protocol := connected Interface := rl0 Vif := rl0

Attachment: xorp1 (ipv4 unicast rip).txt

Network 192.10.10.0/24 Nexthop := 192.10.10.1 Metric := 0 Protocol := rip Interface := xl0 Vif := xl0
Network 192.10.20.0/24 Nexthop := 192.10.10.10 Metric := 2 Protocol := rip Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 192.100.0.1 Metric := 0 Protocol := rip Interface := rl0 Vif := rl0

Attachment: xorp1 (show pim mrib).txt

DestPrefix       NextHopRouter  VifName  VifIndex  MetricPref  Metric
192.10.10.0/24   192.10.10.1    xl0      1         0           0
192.20.0.0/16    192.100.0.2    rl0      0         20          0
192.100.0.0/24   192.100.0.1    rl0      0         0           0

Attachment: xorp1(show pim join).txt

Group        Source        RP            Flags
224.0.1.40   0.0.0.0       192.10.10.10  WC
    Upstream interface (RP):     xl0
    Upstream MRIB next hop (RP): 192.10.10.10
    Upstream RPF'(*,G):          192.10.10.10
    Upstream state:              NotJoined
    Join timer:                  -1
    Local receiver include WC:   .O.
    Joins RP:                    ...
    Joins WC:                    ...
    Join state:                  ...
    Prune state:                 ...
    Prune pending state:         ...
    I am assert winner state:    ...
    I am assert loser state:     ...
    Assert winner WC:            ...
    Assert lost WC:              ...
    Assert tracking WC:          ...
    Could assert WC:             ...
    I am DR:                     ..O
    Immediate olist RP:          ...
    Immediate olist WC:          ...
    Inherited olist SG:          ...
    Inherited olist SG_RPT:      ...
    PIM include WC:              ...
224.30.30.30 192.10.20.30  192.10.10.10  SG
    Upstream interface (S):      UNKNOWN
    Upstream interface (RP):     xl0
    Upstream MRIB next hop (RP): 192.10.10.10
    Upstream MRIB next hop (S):  UNKNOWN
    Upstream RPF'(S,G):          UNKNOWN
    Upstream state:              Joined
    Join timer:                  29
    KAT(S,G) running:            false
    Local receiver include WC:   ...
    Local receiver include SG:   ...
    Local receiver exclude SG:   ...
    Joins RP:                    ...
    Joins WC:                    ...
    Joins SG:                    O..
    Join state:                  O..
    Prune state:                 ...
    Prune pending state:         ...
    I am assert winner state:    ...
    I am assert loser state:     ...
    Assert winner WC:            ...
    Assert winner SG:            ...
    Assert lost WC:              ...
    Assert lost SG:              ...
    Assert lost SG_RPT:          ...
    Assert tracking SG:          O..
    Could assert WC:             ...
    Could assert SG:             ...
    I am DR:                     ..O
    Immediate olist RP:          ...
    Immediate olist WC:          ...
    Immediate olist SG:          O..
    Inherited olist SG:          O..
    Inherited olist SG_RPT:      ...
    PIM include WC:              ...
    PIM include SG:              ...
    PIM exclude SG:              ...

Attachment: xorp2 (ipv4 multicast fib2mrib).txt

Network 192.10.0.0/16 Nexthop := 192.100.0.1 Metric := 65535 Protocol := fib2mrib Interface := xl1 Vif := xl1
Network 192.20.10.0/24 Nexthop := 0.0.0.0 Metric := 65535 Protocol := fib2mrib Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 0.0.0.0 Metric := 65535 Protocol := fib2mrib Interface := xl1 Vif := xl1

Attachment: xorp2 (ipv4 multicast final).txt

Xorp> show route table ipv4 multicast final
Network 192.10.0.0/16 Nexthop := 192.100.0.1 Metric := 0 Protocol := ebgp Interface := xl1 Vif := xl1
Network 192.20.10.0/24 Nexthop := 192.20.10.1 Metric := 0 Protocol := connected Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 192.100.0.2 Metric := 0 Protocol := connected Interface := xl1 Vif := xl1

Attachment: xorp2 (ipv4 unicast final).txt

Network 192.10.0.0/16 Nexthop := 192.100.0.1 Metric := 0 Protocol := ebgp Interface := xl1 Vif := xl1
Network 192.20.10.0/24 Nexthop := 192.20.10.1 Metric := 0 Protocol := connected Interface := xl0 Vif := xl0
Network 192.20.20.0/24 Nexthop := 192.20.10.10 Metric := 2 Protocol := rip Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 192.100.0.2 Metric := 0 Protocol := connected Interface := xl1 Vif := xl1

Attachment: xorp2 (ipv4 unicast rip).txt

Network 192.20.10.0/24 Nexthop := 192.20.10.1 Metric := 0 Protocol := rip Interface := xl0 Vif := xl0
Network 192.20.20.0/24 Nexthop := 192.20.10.10 Metric := 2 Protocol := rip Interface := xl0 Vif := xl0
Network 192.100.0.0/24 Nexthop := 192.100.0.2 Metric := 0 Protocol := rip Interface := xl1 Vif := xl1

Attachment: xorp2 (show pim join).txt

Group        Source        RP            Flags
224.0.1.40   0.0.0.0       192.20.10.10  WC
    Upstream interface (RP):     xl0
    Upstream MRIB next hop (RP): 192.20.10.10
    Upstream RPF'(*,G):          192.20.10.10
    Upstream state:              NotJoined
    Join timer:                  -1
    Local receiver include WC:   O..
    Joins RP:                    ...
    Joins WC:                    ...
    Join state:                  ...
    Prune state:                 ...
    Prune pending state:         ...
    I am assert winner state:    ...
    I am assert loser state:     ...
    Assert winner WC:            ...
    Assert lost WC:              ...
    Assert tracking WC:          ...
    Could assert WC:             ...
    I am DR:                     .OO
    Immediate olist RP:          ...
    Immediate olist WC:          ...
    Inherited olist SG:          ...
    Inherited olist SG_RPT:      ...
    PIM include WC:              ...
224.30.30.30 192.10.20.30  192.20.10.10  SG
    Upstream interface (S):      xl1
    Upstream interface (RP):     xl0
    Upstream MRIB next hop (RP): 192.20.10.10
    Upstream MRIB next hop (S):  192.100.0.1
    Upstream RPF'(S,G):          192.100.0.1
    Upstream state:              Joined
    Join timer:                  52
    KAT(S,G) running:            false
    Local receiver include WC:   ...
    Local receiver include SG:   ...
    Local receiver exclude SG:   ...
    Joins RP:                    ...
    Joins WC:                    ...
    Joins SG:                    O..
    Join state:                  O..
    Prune state:                 ...
    Prune pending state:         ...
    I am assert winner state:    ...
    I am assert loser state:     ...
    Assert winner WC:            ...
    Assert winner SG:            ...
    Assert lost WC:              ...
    Assert lost SG:              ...
    Assert lost SG_RPT:          ...
    Assert tracking SG:          OO.
    Could assert WC:             ...
    Could assert SG:             ...
    I am DR:                     .OO
    Immediate olist RP:          ...
    Immediate olist WC:          ...
    Immediate olist SG:          O..
    Inherited olist SG:          O..
    Inherited olist SG_RPT:      ...
    PIM include WC:              ...
    PIM include SG:              ...
    PIM exclude SG:              ...

Attachment: xorp2 (show pim mrib).txt

Xorp> show pim mrib
DestPrefix       NextHopRouter  VifName  VifIndex  MetricPref  Metric
192.10.0.0/16    192.100.0.1    xl1      1         20          0
192.20.10.0/24   192.20.10.1    xl0      0         0           0
192.100.0.0/24   192.100.0.2    xl1      1         0           0

From pavlin@icir.org Mon Jun 27 18:41:48 2005 From: pavlin@icir.org (Pavlin Radoslavov) Date: Mon, 27 Jun 2005 10:41:48 -0700 Subject: [Xorp-users] inter as multicast In-Reply-To: Message from "Tiago Costa" of "Mon, 27 Jun 2005 18:00:55 BST." Message-ID: <200506271741.j5RHfm9r038856@possum.icir.org>

> The message is (S,G), I am sure of it. I have looked at the unicast and multicast
> tables as you suggested, and I can see the network 192.10.20.0 in the unicast
> tables, but this network never appears in the multicast tables.
> I see the join status in both xorp routers and I confirm that there is
> one entry for my multicast group 224.30.30.30 in (S,G) mode. (As you
> can see in the file xorp1 (show pim join).txt)
>
> The only change I made in the configuration is the group-prefix in the
> RP; it is now 224.0.0.0/8.
>
> My question is: why doesn't xorp add a route to the multicast table?

This is a very good question, because 192.10.20.0/24 is indeed missing
from the fib2mrib RIB table. To debug the problem I'd suggest the
following:

1. Get the latest XORP code from CVS.

2. Run only RIP on cisco1, and RIP and fib2mrib on xorp1, and verify
   that the 192.10.20.0/24 route is still missing from xorp1's fib2mrib
   RIB table. Also, verify that the route is indeed in the kernel.

3. Set the XRLTRACE environment variable in the window where you run
   XORP, and run "route monitor -n" in another window. Restart XORP and
   look for the following:

   - One of the routing socket messages you see in the
     "route monitor -n" window should show the 192.10.20.0/24 route
     being added to the kernel.

   - Look for the XRLs in the rtrmgr window that have been sent between
     the various modules. In particular, look for the add_route4 (and
     maybe replace_route4) XRLs that have been sent from the FEA to the
     fib2mrib module. Eventually, one of those XRLs should add the
     192.10.20.0/24 route. Just in case, look for delete_route4 XRLs as
     well.

   - Look for the XRLs that have been sent from the fib2mrib module to
     the RIB module. In particular, look for XRLs like add_route4,
     replace_route4, add_interface_route4, and
     replace_interface_route4. Again, if everything is working
     normally, one of those XRLs should be for route 192.10.20.0/24.
     Just in case, look for delete_route4 XRLs as well.

Hopefully, the above procedure will show where we have lost the route.
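[Editor's note: since the XRL trace output from step 3 can be voluminous, it may help to capture the rtrmgr window to a file and filter it for the route-related XRL names mentioned above. A minimal sketch of such filtering; the sample trace lines, XRL syntax, and file name below are purely illustrative, not real XORP output:

```shell
# Illustrative stand-in for captured rtrmgr trace output (the XRL
# strings here are made up for demonstration; real lines will differ).
cat > /tmp/xrl_trace.txt <<'EOF'
Resolving xrl:finder://fib2mrib/fti/0.2/add_route4?network:ipv4net=192.10.20.0/24
Resolving xrl:finder://fib2mrib/fti/0.2/delete_route4?network:ipv4net=10.0.0.0/8
Resolving xrl:finder://rib/rib/0.1/add_interface_route4?network:ipv4net=192.100.0.0/24
EOF

# Show every add/replace/delete route XRL...
grep -E 'add_route4|replace_route4|delete_route4' /tmp/xrl_trace.txt

# ...then narrow to the route being hunted for.
grep 'add_route4' /tmp/xrl_trace.txt | grep '192.10.20.0/24'
```

If the first grep shows a delete_route4 for the prefix but no matching add, that localizes where the route was lost.]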
Pavlin

From Zhaoyu Chi Mon Jun 27 21:00:11 2005
From: Zhaoyu Chi (Zhaoyu Chi)
Date: Mon, 27 Jun 2005 16:00:11 -0400
Subject: [Xorp-users] About PIM-SM's interface
Message-ID: <17ffd49d05062713007002ef79@mail.gmail.com>

Hi:

I am a newbie, and I am trying to start a multicasting project using
XORP. I found that there are interfaces for getting events from IGMP
(in xrl/interfacers/mld6igmp_client: add_membership4 and
delete_membership4), but I could not find such interfaces provided by
PIM-SM. How can I register interest and get events, such as Join/Prune?

Thank you very much for your help!

Zhaoyu

From pavlin@icir.org Mon Jun 27 21:57:54 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Mon, 27 Jun 2005 13:57:54 -0700
Subject: [Xorp-users] About PIM-SM's interface
In-Reply-To: Message from Zhaoyu Chi of "Mon, 27 Jun 2005 16:00:11 EDT." <17ffd49d05062713007002ef79@mail.gmail.com>
Message-ID: <200506272057.j5RKvsMW044565@possum.icir.org>

> I am a newbie, and I am trying to start a multicasting project using XORP.
>
> I found that there are interfaces for getting events from IGMP (in
> xrl/interfacers/mld6igmp_client: add_membership4 and
> delete_membership4), but I could not find such interfaces provided by
> PIM-SM. How can I register interest and get events, such as
> Join/Prune?

Currently you cannot register interest in PIM-SM and get events like
Join/Prune. Such an interface exists in IGMP because PIM-SM needs to
receive the IGMP Join/Prune event notifications. If you need to get
similar events out of PIM-SM, then you have to implement the XRL
interface yourself, similar to the way it is done for IGMP. Please let
me know if you need help identifying where exactly to "tap" into the
PIM-SM code to get the particular events.
Pavlin

From samelstob@fastmail.fm Tue Jun 28 00:02:35 2005
From: samelstob@fastmail.fm (Sam Elstob)
Date: Tue, 28 Jun 2005 00:02:35 +0100
Subject: [Xorp-users] xorp_rtrmgr - Assertion failure
In-Reply-To: <22749.1119540198@aardvark.cs.ucl.ac.uk>
References: <22749.1119540198@aardvark.cs.ucl.ac.uk>
Message-ID: <1119913356.11441.2.camel@localhost.localdomain>

On Thu, 2005-06-23 at 16:23 +0100, Mark Handley wrote:
> >Managed to cause a failed assertion in XORP whilst starting up xorpsh.
> >Here is the output. Any ideas?
> >
> >[ 2005/06/23 01:01:23 INFO xorp_rtrmgr:5515 XRL +460 xrl_router.cc send ]
> >Resolving xrl:finder://finder/finder_event_notifier/0.1/register_class_event_interest?requester_instance:txt=rtrmgr-d5b380331df952fe197fd3e85e843c92@192.168.1.32&class_name:txt=xorpsh-5518-xen0
> >xorp_rtrmgr: ../sysdeps/generic/printf_fphex.c:163: __printf_fphex:
> >Assertion `*decimal != '\0' && decimalwc != L'\0'' failed.
> >[ 2005/06/23 01:01:23 ERROR xorp_fea:5516 CLI +115 xrl_cli_node.cc
> >finder_disconnect_event ] Finder disconnect event. Exiting
> >immediately...
> >[ 2005/06/23 01:01:23 INFO xorp_fea CLI ] CLI stopped
> >[ 2005/06/23 01:01:23 ERROR xorp_fea:5516 MFEA +192 xrl_mfea_node.cc
> >finder_disconnect_event ] Finder disconnect event. Exiting
> >immediately...
> >Aborted
>
> If you've still got the core file, can you run gdb on it, and give me a
> backtrace?
>
>  - Mark

Mark, here is the backtrace.

Looks like some non-printing characters got into the log at frame #8,
which caused the library to assert.
#0  0xb7cf27c1 in kill () from /lib/libc.so.6
#1  0xb7cf2545 in raise () from /lib/libc.so.6
#2  0xb7cf3a88 in abort () from /lib/libc.so.6
#3  0xb7cebbbf in __assert_fail () from /lib/libc.so.6
#4  0xb7d18062 in parse_printf_format () from /lib/libc.so.6
#5  0xb7d10aa7 in vfprintf () from /lib/libc.so.6
#6  0xb7d30390 in vsnprintf () from /lib/libc.so.6
#7  0x081395f0 in x_vasprintf (ret=0xbfff755c,
    format=0x830aed4 "Resolving xrl:finder://xorpsh-11422-xen0/rtrmgr_client/0.2/config_changed?userid:u32=0&deltas:txt=++++interfaces+%7B%0A++++++++interface+eth0+%7B%0A", '+' , "description:+\"data+interface\"%0A++++++++"...,
    ap=0xbfff759c "\036n\026\b\nq\026\bd®í·øuÿ¿Ð¬.\bpÑ/\b\037N") at xlog.c:1119
#8  0x08138e31 in xlog_record_va (log_level=XLOG_LEVEL_INFO, module_name=0x8166eaf "XRL",
    where=0xbfff7600 "+460 xrl_router.cc send",
    format=0x830aed4 "Resolving xrl:finder://xorpsh-11422-xen0/rtrmgr_client/0.2/config_changed?userid:u32=0&deltas:txt=++++interfaces+%7B%0A++++++++interface+eth0+%7B%0A", '+' , "description:+\"data+interface\"%0A++++++++"...,
    ap=0xbfff759c "\036n\026\b\nq\026\bd®í·øuÿ¿Ð¬.\bpÑ/\b\037N") at xlog.c:630
#9  0x08138bf8 in xlog_info (module_name=0x8166eaf "XRL",
    where=0xbfff7600 "+460 xrl_router.cc send",
    fmt=0x830aed4 "Resolving xrl:finder://xorpsh-11422-xen0/rtrmgr_client/0.2/config_changed?userid:u32=0&deltas:txt=++++interfaces+%7B%0A++++++++interface+eth0+%7B%0A", '+' , "description:+\"data+interface\"%0A++++++++"...)
    at xlog.c:441
#10 0x08102931 in XrlRouter::send (this=0xbfffd720, xrl=@0xbfff95d0, user_cb=@0xbfff9590)
    at xrl_router.cc:460
#11 0x080bb700 in XrlRtrmgrClientV0p2Client::send_config_changed (this=0x82f3a44,
    the_tgt=0x82eac74 "xorpsh-11422-xen0", userid=@0xbfff965c, deltas=@0xbfff96a0,
    deletions=@0xbfff9670, cb=@0xbfff9680) at rtrmgr_client_xif.cc:93
#12 0x080a4950 in XrlRtrmgrInterface::send_client_state (this=0x82f3a38, user_id=0,
    user=0x82f5220) at xrl_rtrmgr_interface.cc:295
#13 0x080aa7d0 in XorpMemberCallback0B2::dispatch (this=0x8309830) at callback_nodebug.hh:898
#14 0x0814c85c in OneoffTimerNode2::expire (this=0x82f6768) at timer.cc:177
#15 0x0814bc57 in TimerList::run (this=0xbffff664) at timer.cc:372
#16 0x0813e791 in EventLoop::run (this=0xbffff660) at eventloop.cc:71
#17 0x0805f452 in Rtrmgr::run (this=0xbffff890) at main_rtrmgr.cc:339

From pavlin@icir.org Tue Jun 28 01:48:11 2005
From: pavlin@icir.org (Pavlin Radoslavov)
Date: Mon, 27 Jun 2005 17:48:11 -0700
Subject: [Xorp-users] xorp_rtrmgr - Assertion failure
In-Reply-To: Message from Sam Elstob of "Tue, 28 Jun 2005 00:02:35 BST." <1119913356.11441.2.camel@localhost.localdomain>
Message-ID: <200506280048.j5S0mBxH046404@possum.icir.org>

This particular bug was probably fixed already in CVS. Either get the
latest code from CVS, or apply the following simple change to
libxipc/xrl_router.cc :

 #define trace_xrl(p, x) \
     do { \
-        if (xrl_trace.on()) XLOG_INFO(string((p) + (x).str()).c_str()); \
+        if (xrl_trace.on()) XLOG_INFO("%s", string((p) + (x).str()).c_str()); \
     } while (0)

Pavlin

> here is the backtrace
>
> Looks like some non-printing characters got into the log at step #8,
> which caused the library to assert.
From happymonkey@gmx.de Tue Jun 28 09:09:57 2005
From: happymonkey@gmx.de (Sami Okasha)
Date: Tue, 28 Jun 2005 10:09:57 +0200
Subject: [Xorp-users] Xorp on Xen compile Problems
In-Reply-To: <200506171829.08725.happymonkey@gmx.de>
References: <200506162039.12434.happymonkey@gmx.de> <1317.66.146.163.2.1119023347.squirrel@www.hiddennet.net> <200506171829.08725.happymonkey@gmx.de>
Message-ID: <200506281009.57426.happymonkey@gmx.de>

Hi,

Xorp compiles and runs perfectly in a Linux Xen environment. Tested
with Debian stable, a Linux 2.6.11 host, Linux 2.6.11 and 2.4.30
guests, Xen 2.06 stable, and xorp stable.

The problem was that I didn't configure the loopback device manually in
the guest domain, because the other network devices are generated
automatically by Xen *stupid mistake*. Without the loopback interface,
"make check" didn't finish properly.

----
Under Debian, adding the loopback interface is accomplished in
/etc/network/interfaces:

auto lo
iface lo inet loopback
----

Sammy

Sami Okasha wrote on Friday, 17 June 2005 18:29:
> Hi,
>
> i use Debian with 2.4.30-xenU and 2.6.11.10-xenU Kernel.
>
> I have no idea why it didn't compile, but Pavlin will debug this soon,
> so we can post why it didn't work.
>
> Sammy
>
> adam@hiddennet.net wrote on Friday, 17 June 2005 17:49:
> > Hi
> >
> > Xorp does compile on Xen with Gentoo running a 2.6.11-xen0 kernel. I
> > haven't verified that it runs or anything else; it just compiled for
> > me with ./configure ; make .
> >
> > adam
> >
> > > Hello
> > >
> > > is it possible to compile Xorp in a Xen environment? I tried to
> > > compile xorp on 2.4 and 2.6 Xen kernels, but without success.
> > >
> > > Has anyone successfully compiled on Xen? On native 2.6 Debian it
> > > compiles fine.
> > >
> > > Greetings
> > > Sammy
> > >
> > > If you need more debug output, just tell me how to generate it and
> > > I will post it.
> > > ----
> > > Here is the last output on a 2.6 Xen kernel (and I think I got the
> > > same message on the 2.4 Xen kernel):
> > >
> > > FAIL: test_finder_events
> > > PASS: test_xrl_parser.sh
> > > Unexpected exception: thrown did not correspond to specification - fix code.
> > > InvalidAddress from line 280 of finder_tcp.cc -> Not a valid interface address
> > > ./test_finder_deaths.sh: line 15: 32623 Aborted    ./xorp_finder -p ${finder_port}
> > > Finder did not start correctly.
> > > Port 17777 maybe in use by another application.
> > > FAIL: test_finder_deaths.sh
> > > LeakCheck binary not found skipping test.
> > > PASS: test_leaks.sh
> > > ====================
> > > 9 of 18 tests failed
> > > ====================
> > > make[2]: *** [check-TESTS] Error 1
> > > make[2]: Leaving directory `/root/xorp-1.1/libxipc'
> > > make[1]: *** [check-am] Error 2
> > > make[1]: Leaving directory `/root/xorp-1.1/libxipc'
> > > make: *** [check-recursive] Error 1
> > >
> > > _______________________________________________
> > > Xorp-users mailing list
> > > Xorp-users@xorp.org
> > > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users

--
Sammy Okasha
mailto:Sammy@Okasha.de
jabber:sammy@jabber.ccc.de, I Seek You: 51144829
Key ID: 6F3D8EBD6E6DB4B2
GPG Key Fingerprint: F415 08EC 90C6 383B 1BFE D0C9 6F3D 8EBD 6E6D B4B2

From atanu@ICSI.Berkeley.EDU Wed Jun 29 02:00:48 2005
From: atanu@ICSI.Berkeley.EDU (Atanu Ghosh)
Date: Tue, 28 Jun 2005 18:00:48 -0700
Subject: [Xorp-users] bgp
In-Reply-To: Message from Dario VIEIRA of "Thu, 23 Jun 2005 23:14:38 +0200." <1119561278.42bb263ea18a9@imp.int-evry.fr>
Message-ID: <67890.1120006848@tigger.icir.org>

Hi,

I think I have found the problem, could you check out the latest code
from the CVS repository and try it?

        Atanu.

>>>>> "Dario" == Dario VIEIRA writes:

    Dario> Hi Atanu, Thanks for your reply.

    Dario> Here are my configuration files (in attachment) for each
    Dario> router "r" in my topology.

    Dario> We didn't see any error from BGP itself! When a single
    Dario> destination prefix "d" is announced, everything is
    Dario> okay. However, when the same prefix "d" is withdrawn, the
    Dario> problem takes place.

    Dario> Thanks for your help.

    Dario> Dario

    Dario> ----------------------------------------------------------------
    Dario> This message was sent using IMP, the Internet Messaging
    Dario> Program.
From Dario Vieira Thu Jun 30 22:54:27 2005
From: Dario Vieira (Dario Vieira)
Date: Thu, 30 Jun 2005 23:54:27 +0200
Subject: [Xorp-users] bgp
In-Reply-To: <67890.1120006848@tigger.icir.org>
References: <1119561278.42bb263ea18a9@imp.int-evry.fr> <67890.1120006848@tigger.icir.org>
Message-ID:

Hi,

I'll try this new code.

Thanks,

Dario

On 6/29/05, Atanu Ghosh wrote:
>
> Hi,
>
> I think I have found the problem, could you check out the latest code
> from the CVS repository and try it?
>
> <http://xorpc.icir.org/cgi-bin/cvsweb.cgi/xorp/bgp/route_table_nhlookup.cc>
>
>         Atanu.
>
> >>>>> "Dario" == Dario VIEIRA writes:
>
>     Dario> Hi Atanu, Thanks for your reply.
>
>     Dario> Here are my configuration files (in attachment) for each
>     Dario> router "r" in my topology.
>
>     Dario> We didn't see any error from BGP itself! When a single
>     Dario> destination prefix "d" is announced, everything is
>     Dario> okay. However, when the same prefix "d" is withdrawn, the
>     Dario> problem takes place.
>
>     Dario> Thanks for your help.
>
>     Dario> Dario
>
>     Dario> ----------------------------------------------------------------
>     Dario> This message was sent using IMP, the Internet Messaging
>     Dario> Program.

--
Dario Vieira
dario.vieira@gmail.com
http://www-lor.int-evry.fr/~vieira_c