From dony1985.0729 at yahoo.co.id Fri Jan 2 09:32:06 2009 From: dony1985.0729 at yahoo.co.id (Dony Tata) Date: Sat, 3 Jan 2009 01:32:06 +0800 (SGT) Subject: [Xorp-users] xorp webmin module Message-ID: <67292.30189.qm@web76101.mail.sg1.yahoo.com> somebody have information with module to monitoring xorp with webmin......? Terhubung langsung dengan banyak teman di blog dan situs pribadi Anda? Buat Pingbox terbaru Anda sekarang! http://id.messenger.yahoo..com/pingbox/ -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090103/eee4e739/attachment.html From mayank.kandari at gmail.com Sun Jan 4 01:22:09 2009 From: mayank.kandari at gmail.com (Mayank Kandari) Date: Sun, 4 Jan 2009 14:52:09 +0530 Subject: [Xorp-users] Error While running click Message-ID: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> When I am trying to run click with XORP I am getting following error: 1) ERROR xorp_fea:24984 FEA +1038 ifconfig_set_click.cc stderr_cb ] External Click configuration generator (/usr/local/xorp/fea/xorp_fea_click_config_generator) stderr output: Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Invalid keyword Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Missing MAC address configuration for interface eth0 2) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +293 fibconfig_entry_set_click.cc add_entry ] Cannot find outgoing port number for the Click forwarding table element to add entry net = 202.141.152.0/24nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = false is_unresolved = false is_connected_route = false 3) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +75 fibconfig_transaction.cc operation_result ] FIB transaction commit failed on AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = false is_unresolved = false is_connected_route = false 4) [ 2009/01/04 14:49:27 WARNING xorp_fea XrlFeaTarget ] Handling method for redist_transaction4/0.1/commit_transaction failed: XrlCmdError 102 Command failed AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = false is_unresolved = false is_connected_route = false [ 2009/01/04 14:49:27 ERROR xorp_rib:25063 RIB +911 redist_xrl.cc dispatch_complete ] Failed to commit transaction: 102 Command failed AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = false is_unresolved = false is_connected_route = false -- regards Mayank Kandari Staff Scientist C-DAC Mumbai Juhu - 400049 -------------- next part -------------- An HTML attachment was scrubbed... 
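For reference, the "Missing MAC address configuration for interface eth0" message from the Click configuration generator suggests that no hardware address was available for eth0 when the Click configuration was generated. A quick sanity check that the interface actually reports a MAC to the kernel (a sketch; eth0 is simply the interface named in the error, and either command alone is enough):

    ip link show eth0                  # look for a "link/ether xx:xx:xx:xx:xx:xx" line
    ifconfig eth0 | grep -i hwaddr     # equivalent check with net-tools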
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090104/cb5b12a8/attachment.html From mayank.kandari at gmail.com Sun Jan 4 04:37:27 2009 From: mayank.kandari at gmail.com (Mayank Kandari) Date: Sun, 4 Jan 2009 18:07:27 +0530 Subject: [Xorp-users] Error While running click In-Reply-To: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> References: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> Message-ID: <328b1be80901040437t7a4f6cfaicf12684891376add@mail.gmail.com> I am getting one more error [ 2009/01/04 17:55:59 ERROR xorp_fea:7346 LIBCOMM +101 comm_sock.c comm_sock_open ] Error opening socket (domain = 10, type = 2, protocol = 0): Address family not supported by protocol I don't know what this error mean ? On Sun, Jan 4, 2009 at 2:52 PM, Mayank Kandari wrote: > When I am trying to run click with XORP I am getting following error: > > > 1) ERROR xorp_fea:24984 FEA +1038 ifconfig_set_click.cc stderr_cb ] > External Click configuration generator > (/usr/local/xorp/fea/xorp_fea_click_config_generator) stderr output: Syntax > error (file /tmp/xorp_fea_click.sh0GHh line 5): Invalid keyword > Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Missing MAC address > configuration for interface eth0 > > 2) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +293 > fibconfig_entry_set_click.cc add_entry ] Cannot find outgoing port number > for the Click forwarding table element to add entry net = 202.141.152.0/24nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 > admin_distance = 1 xorp_route = true is_deleted = false is_unresolved = > false is_connected_route = false > > 3) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +75 > fibconfig_transaction.cc operation_result ] FIB transaction commit failed on > AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 > vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = > false is_unresolved = false is_connected_route = false > > 4) [ 2009/01/04 14:49:27 WARNING xorp_fea XrlFeaTarget ] Handling method > for redist_transaction4/0.1/commit_transaction failed: XrlCmdError 102 > Command failed AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 > ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true > is_deleted = false is_unresolved = false is_connected_route = false > [ 2009/01/04 14:49:27 ERROR xorp_rib:25063 RIB +911 redist_xrl.cc > dispatch_complete ] Failed to commit transaction: 102 Command failed > AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 > vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = > false is_unresolved = false is_connected_route = false > > > -- > regards > Mayank Kandari > Staff Scientist > C-DAC Mumbai > Juhu - 400049 > -- regards Mayank Kandari Staff Scientist C-DAC Mumbai Juhu - 400049 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090104/1b5c476f/attachment.html From syed.khalid at xorp.net Sun Jan 4 08:49:21 2009 From: syed.khalid at xorp.net (Syed Khalid) Date: Sun, 4 Jan 2009 08:49:21 -0800 Subject: [Xorp-users] Error While running click In-Reply-To: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> References: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> Message-ID: Hello Mayank Can you send us the following information: 1. Which version of Xorp are you using? 2. Configuration files 3. OS/kernel versions 4. 
Output of ifconfig -a and "ip addr" Thanks Syed On Sun, Jan 4, 2009 at 1:22 AM, Mayank Kandari wrote: > When I am trying to run click with XORP I am getting following error: > > > 1) ERROR xorp_fea:24984 FEA +1038 ifconfig_set_click.cc stderr_cb ] > External Click configuration generator > (/usr/local/xorp/fea/xorp_fea_click_config_generator) stderr output: Syntax > error (file /tmp/xorp_fea_click.sh0GHh line 5): Invalid keyword > Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Missing MAC address > configuration for interface eth0 > > 2) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +293 > fibconfig_entry_set_click.cc add_entry ] Cannot find outgoing port number > for the Click forwarding table element to add entry net = 202.141.152.0/24nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 > admin_distance = 1 xorp_route = true is_deleted = false is_unresolved = > false is_connected_route = false > > 3) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +75 > fibconfig_transaction.cc operation_result ] FIB transaction commit failed on > AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 > vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = > false is_unresolved = false is_connected_route = false > > 4) [ 2009/01/04 14:49:27 WARNING xorp_fea XrlFeaTarget ] Handling method > for redist_transaction4/0.1/commit_transaction failed: XrlCmdError 102 > Command failed AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 > ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true > is_deleted = false is_unresolved = false is_connected_route = false > [ 2009/01/04 14:49:27 ERROR xorp_rib:25063 RIB +911 redist_xrl.cc > dispatch_complete ] Failed to commit transaction: 102 Command failed > AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 > vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = > false is_unresolved = false is_connected_route = false > > > -- > regards > Mayank Kandari > Staff Scientist > C-DAC Mumbai > Juhu - 400049 > > _______________________________________________ > Xorp-users mailing list > Xorp-users at xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090104/9c39de33/attachment.html From mayank.kandari at gmail.com Sun Jan 4 22:49:14 2009 From: mayank.kandari at gmail.com (Mayank Kandari) Date: Mon, 5 Jan 2009 12:19:14 +0530 Subject: [Xorp-users] Error While running click In-Reply-To: <328b1be80901040437t7a4f6cfaicf12684891376add@mail.gmail.com> References: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> <328b1be80901040437t7a4f6cfaicf12684891376add@mail.gmail.com> Message-ID: <328b1be80901042249k611c8d31y3e9f1518112754d3@mail.gmail.com> Hello , I recompiled my kernel and I resolved the earlier errors, but socket opening error is still there. I am using xorp-1.4 and linux kernel 2.4.26 and click 1.4.2 . Error opening socket (domain = 10, type = 2, protocol = 0): Address family not supported by protocol On Sun, Jan 4, 2009 at 6:07 PM, Mayank Kandari wrote: > I am getting one more error > > [ 2009/01/04 17:55:59 ERROR xorp_fea:7346 LIBCOMM +101 comm_sock.c > comm_sock_open ] Error opening socket (domain = 10, type = 2, protocol = 0): > Address family not supported by protocol > > I don't know what this error mean ? 
> > > On Sun, Jan 4, 2009 at 2:52 PM, Mayank Kandari wrote: > >> When I am trying to run click with XORP I am getting following error: >> >> >> 1) ERROR xorp_fea:24984 FEA +1038 ifconfig_set_click.cc stderr_cb ] >> External Click configuration generator >> (/usr/local/xorp/fea/xorp_fea_click_config_generator) stderr output: Syntax >> error (file /tmp/xorp_fea_click.sh0GHh line 5): Invalid keyword >> Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Missing MAC address >> configuration for interface eth0 >> >> 2) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +293 >> fibconfig_entry_set_click.cc add_entry ] Cannot find outgoing port number >> for the Click forwarding table element to add entry net = >> 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 >> metric = 1 admin_distance = 1 xorp_route = true is_deleted = false >> is_unresolved = false is_connected_route = false >> >> 3) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +75 >> fibconfig_transaction.cc operation_result ] FIB transaction commit failed on >> AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 >> vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = >> false is_unresolved = false is_connected_route = false >> >> 4) [ 2009/01/04 14:49:27 WARNING xorp_fea XrlFeaTarget ] Handling method >> for redist_transaction4/0.1/commit_transaction failed: XrlCmdError 102 >> Command failed AddEntry4: net = 202.141.152.0/24 nexthop = >> 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 >> xorp_route = true is_deleted = false is_unresolved = false >> is_connected_route = false >> [ 2009/01/04 14:49:27 ERROR xorp_rib:25063 RIB +911 redist_xrl.cc >> dispatch_complete ] Failed to commit transaction: 102 Command failed >> AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 >> vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = >> false is_unresolved = false is_connected_route = false >> >> >> -- >> regards >> Mayank Kandari >> Staff Scientist >> C-DAC Mumbai >> Juhu - 400049 >> > > > > -- > regards > Mayank Kandari > Staff Scientist > C-DAC Mumbai > Juhu - 400049 > -- regards Mayank Kandari Staff Scientist C-DAC Mumbai Juhu - 400049 -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090105/14c6bff9/attachment.html From pavlin at ICSI.Berkeley.EDU Mon Jan 5 02:09:04 2009 From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov) Date: Mon, 05 Jan 2009 02:09:04 -0800 Subject: [Xorp-users] Error While running click In-Reply-To: <328b1be80901042249k611c8d31y3e9f1518112754d3@mail.gmail.com> References: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> <328b1be80901040437t7a4f6cfaicf12684891376add@mail.gmail.com> <328b1be80901042249k611c8d31y3e9f1518112754d3@mail.gmail.com> Message-ID: <200901051009.n05A94mO008603@fruitcake.ICSI.Berkeley.EDU> Mayank Kandari wrote: > Hello , > I recompiled my kernel and I resolved the earlier errors, but > socket opening error is still there. > I am using xorp-1.4 and linux kernel 2.4.26 and click 1.4.2 . > Error opening socket (domain = 10, type = 2, protocol = 0): Address > family not supported by protocol "domain = 10" is IPv6, so it looks like IPv6 is not enabled in your kernel. 
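For reference, domain 10 is AF_INET6 and type 2 is SOCK_DGRAM on Linux, so the failing call is an attempt to open an IPv6 UDP socket. A quick check of whether the running kernel has IPv6 available at all (a sketch, assuming a Linux /proc filesystem):

    test -e /proc/net/if_inet6 && echo "IPv6 available" || echo "IPv6 missing"
    modprobe ipv6    # on modular kernels, loading the ipv6 module may be enough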
Either enable IPv6 in the kernel or recompile XORP without IPv6 support: ./configure --disable-ipv6 gmake Pavlin > On Sun, Jan 4, 2009 at 6:07 PM, Mayank Kandari wrote: > > > I am getting one more error > > > > [ 2009/01/04 17:55:59 ERROR xorp_fea:7346 LIBCOMM +101 comm_sock.c > > comm_sock_open ] Error opening socket (domain = 10, type = 2, protocol = 0): > > Address family not supported by protocol > > > > I don't know what this error mean ? > > > > > > On Sun, Jan 4, 2009 at 2:52 PM, Mayank Kandari wrote: > > > >> When I am trying to run click with XORP I am getting following error: > >> > >> > >> 1) ERROR xorp_fea:24984 FEA +1038 ifconfig_set_click.cc stderr_cb ] > >> External Click configuration generator > >> (/usr/local/xorp/fea/xorp_fea_click_config_generator) stderr output: Syntax > >> error (file /tmp/xorp_fea_click.sh0GHh line 5): Invalid keyword > >> Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Missing MAC address > >> configuration for interface eth0 > >> > >> 2) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +293 > >> fibconfig_entry_set_click.cc add_entry ] Cannot find outgoing port number > >> for the Click forwarding table element to add entry net = > >> 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 > >> metric = 1 admin_distance = 1 xorp_route = true is_deleted = false > >> is_unresolved = false is_connected_route = false > >> > >> 3) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +75 > >> fibconfig_transaction.cc operation_result ] FIB transaction commit failed on > >> AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 > >> vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = > >> false is_unresolved = false is_connected_route = false > >> > >> 4) [ 2009/01/04 14:49:27 WARNING xorp_fea XrlFeaTarget ] Handling method > >> for redist_transaction4/0.1/commit_transaction failed: XrlCmdError 102 > >> Command failed AddEntry4: net = 202.141.152.0/24 nexthop = > >> 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 > >> xorp_route = true is_deleted = false is_unresolved = false > >> is_connected_route = false > >> [ 2009/01/04 14:49:27 ERROR xorp_rib:25063 RIB +911 redist_xrl.cc > >> dispatch_complete ] Failed to commit transaction: 102 Command failed > >> AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 > >> vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true is_deleted = > >> false is_unresolved = false is_connected_route = false > >> > >> > >> -- > >> regards > >> Mayank Kandari > >> Staff Scientist > >> C-DAC Mumbai > >> Juhu - 400049 > >> > > > > > > > > -- > > regards > > Mayank Kandari > > Staff Scientist > > C-DAC Mumbai > > Juhu - 400049 > > > > > > -- > regards > Mayank Kandari > Staff Scientist > C-DAC Mumbai > Juhu - 400049 > _______________________________________________ > Xorp-users mailing list > Xorp-users at xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From syed.khalid at xorp.net Mon Jan 5 07:53:26 2009 From: syed.khalid at xorp.net (Syed Khalid) Date: Mon, 5 Jan 2009 07:53:26 -0800 Subject: [Xorp-users] Error While running click In-Reply-To: <328b1be80901042249k611c8d31y3e9f1518112754d3@mail.gmail.com> References: <328b1be80901040122jb57ccc6k77fdd4a392554273@mail.gmail.com> <328b1be80901040437t7a4f6cfaicf12684891376add@mail.gmail.com> <328b1be80901042249k611c8d31y3e9f1518112754d3@mail.gmail.com> Message-ID: Hello Could you try the XORP 1.6 RC candidate - it got released last week? 
Also are you using Ubuntu or Red Hat? Also please send us your xorp config files as well as the output from "ifconfig -a" and "ip addr"? Syed On Sun, Jan 4, 2009 at 10:49 PM, Mayank Kandari wrote: > Hello , > I recompiled my kernel and I resolved the earlier errors, but > socket opening error is still there. > I am using xorp-1.4 and linux kernel 2.4.26 and click 1.4.2 . > Error opening socket (domain = 10, type = 2, protocol = 0): Address > family not supported by protocol > > On Sun, Jan 4, 2009 at 6:07 PM, Mayank Kandari > wrote: > >> I am getting one more error >> >> [ 2009/01/04 17:55:59 ERROR xorp_fea:7346 LIBCOMM +101 comm_sock.c >> comm_sock_open ] Error opening socket (domain = 10, type = 2, protocol = 0): >> Address family not supported by protocol >> >> I don't know what this error mean ? >> >> >> On Sun, Jan 4, 2009 at 2:52 PM, Mayank Kandari wrote: >> >>> When I am trying to run click with XORP I am getting following error: >>> >>> >>> 1) ERROR xorp_fea:24984 FEA +1038 ifconfig_set_click.cc stderr_cb ] >>> External Click configuration generator >>> (/usr/local/xorp/fea/xorp_fea_click_config_generator) stderr output: Syntax >>> error (file /tmp/xorp_fea_click.sh0GHh line 5): Invalid keyword >>> Syntax error (file /tmp/xorp_fea_click.sh0GHh line 5): Missing MAC >>> address configuration for interface eth0 >>> >>> 2) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +293 >>> fibconfig_entry_set_click.cc add_entry ] Cannot find outgoing port number >>> for the Click forwarding table element to add entry net = >>> 202.141.152.0/24 nexthop = 202.141.151.126 ifname = eth0 vifname = eth0 >>> metric = 1 admin_distance = 1 xorp_route = true is_deleted = false >>> is_unresolved = false is_connected_route = false >>> >>> 3) [ 2009/01/04 14:49:27 ERROR xorp_fea:24984 FEA +75 >>> fibconfig_transaction.cc operation_result ] FIB transaction commit failed on >>> AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = >>> eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true >>> is_deleted = false is_unresolved = false is_connected_route = false >>> >>> 4) [ 2009/01/04 14:49:27 WARNING xorp_fea XrlFeaTarget ] Handling method >>> for redist_transaction4/0.1/commit_transaction failed: XrlCmdError 102 >>> Command failed AddEntry4: net = 202.141.152.0/24 nexthop = >>> 202.141.151.126 ifname = eth0 vifname = eth0 metric = 1 admin_distance = 1 >>> xorp_route = true is_deleted = false is_unresolved = false >>> is_connected_route = false >>> [ 2009/01/04 14:49:27 ERROR xorp_rib:25063 RIB +911 redist_xrl.cc >>> dispatch_complete ] Failed to commit transaction: 102 Command failed >>> AddEntry4: net = 202.141.152.0/24 nexthop = 202.141.151.126 ifname = >>> eth0 vifname = eth0 metric = 1 admin_distance = 1 xorp_route = true >>> is_deleted = false is_unresolved = false is_connected_route = false >>> >>> >>> -- >>> regards >>> Mayank Kandari >>> Staff Scientist >>> C-DAC Mumbai >>> Juhu - 400049 >>> >> >> >> >> -- >> regards >> Mayank Kandari >> Staff Scientist >> C-DAC Mumbai >> Juhu - 400049 >> > > > > -- > regards > Mayank Kandari > Staff Scientist > C-DAC Mumbai > Juhu - 400049 > > _______________________________________________ > Xorp-users mailing list > Xorp-users at xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090105/abab6e36/attachment.html From syed.khalid at xorp.net Thu Jan 8 19:35:08 2009 From: syed.khalid at xorp.net (Syed Khalid) Date: Thu, 8 Jan 2009 19:35:08 -0800 Subject: [Xorp-users] XORP announces Release 1.6 Message-ID: Dear XORP community members, We are pleased to announce release 1.6 of the XORP codebase to the world! The XORP team and contributors have worked very hard over the last 6 months and 1.6 is a huge milestone in the history of the XORP project. As many of you know, XORP is now commercially backed by two well-known and highly-respected venture capitalists, and one of the benefits for the XORP community is the additional amount of resources we have been able to bring to the project. These increased resources have resulted in our ability to release 1.6 just months after our announcement of XORP, Inc and release 1.5. In terms of project milestones, 1.6 marks the first time we have put the codebase through a comprehensive and rigorous QA process, using some of the leading test tools in the market for layer 3 testing. Furthermore, here are some of the key improvements in the 1.6 codebase, based on your contributions and user feedback: - Significantly improved memory performance on the BGP code (especially in scenarios with many peers) - XRL performance improvements - Improved policy code - RIP tracing capabilities - *NEW* VRRP support - Numerous bug fixes (entire list is in 1.6 Release Notes available at http://www.xorp.org/releases/1.6/docs/RELEASE_NOTES Of late, we have noticed more users incorporating XORP into their projects, both non-profit and commercial and want to continue to encourage the community to do so. With the recent world financial crisis, organizations will be looking more to open-source as a cost-effective alternative to the high premiums other networking vendors charge, so help us spread the word about XORP.org and help all of us build the momentum behind open-source networking! Thank you very much for your interest and continued participation in the XORP open networking platform project. We appreciate the help and support the community has shown us over the years and hope that you will enjoy this latest release. If you have any questions, please feel free to contact us at info at xorp.net. --The XORP Team -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090108/34191ca6/attachment.html From xorp-users at icir.org Fri Jan 16 12:11:10 2009 From: xorp-users at icir.org (xorp-users at icir.org) Date: Fri, 16 Jan 2009 12:11:10 -0800 Subject: [Xorp-users] January 77% OFF Message-ID: <20090116141113.4031.qmail@none-aff0d93116> An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090116/25742891/attachment.html From erik at slagter.name Sat Jan 17 05:03:58 2009 From: erik at slagter.name (Erik Slagter) Date: Sat, 17 Jan 2009 14:03:58 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! Message-ID: <4971D73E.3030405@slagter.name> Please help me, even though it might not be strictly xorp related. BTW I tried xorp and mrd6 to accomplish this. I have a linux server with 12 ethernet interfaces, traffic between them is routed by the kernel. All these have one or more hosts connected, some running ipv6 and/or some of them running multicast clients. 
I have a tool that sends mp3's to a multicast address (ff05:0:0:0:0:0:2:1) and a tool that listens to this address. Everything works if I set up a route manually for ff05:0:0:0:0:0:2:1 to one of the devices, the client indeed gets the traffic and all works. But that's not my intention. I want the traffic that is sent from the (same) server to ff05:0:0:0:0:0:2:1 to be sent out on all interfaces that have clients requesting the stream (using MLD requests). The client has indeed MLD packets sent by the kernel of the client to the server. Seems pretty basic multicasting to me. But I can't get it working. I tried xorp, plain kernel, mrd6. No errors, but the traffic is simply not sent out to the client's interfaces. The MLD membership messages are received and processed, but the traffic still goes out on one single RANDOM device (in this case a DUMMY interface!) It doesn't matter how many clients are requesting using MLD, the multicast routing table simply isn't updated and packets are not multicast to one or more desired interfaces. I would be very happy with a sample setup for either xorp or mrd6 that accomplishes this. If it's even possible without userland applications (which I suspect) then also please tell me. I am using a recent kernel (2.6.27.10) which should have ipv6 multicast routing support (and which is enabled). Thanks very much indeed! -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090117/4323fc0d/attachment.bin From erik at slagter.name Sat Jan 17 10:34:48 2009 From: erik at slagter.name (Erik Slagter) Date: Sat, 17 Jan 2009 19:34:48 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: References: <4971D73E.3030405@slagter.name> Message-ID: <497224C8.7070504@slagter.name> Syed Khalid wrote: > On Sat, Jan 17, 2009 at 5:03 AM, Erik Slagter > wrote: > > Please help me, even though it might not be strictly xorp related. BTW I > tried xorp and mrd6 to accomplish this. > > I have a linux server with 12 ethernet interfaces, traffic between them > is routed by the kernel. All these have one or more > hosts connected, some running ipv6 and/or some of them running > multicast clients. > > I have a tool that sends mp3's to a multicast address > (ff05:0:0:0:0:0:2:1) and a tool that listens to this address. > > Everything works if I set up a route manually for ff05:0:0:0:0:0:2:1 to > one of the devices, the client indeed gets the traffic and all works. > > But that's not my intention. I want the traffic that is sent from the > (same) server to > ff05:0:0:0:0:0:2:1 to be sent out on all interfaces that have clients > requesting the stream (using MLD requests). The client has indeed MLD > packets sent by the kernel of the client to the server. > > Seems pretty basic multicasting to me. But I can't get it working. I > tried xorp, plain kernel, mrd6. No errors, but the traffic is simply not > sent out to the client's interfaces. The MLD membership messages are > received and processed, but the traffic still goes out on one single > RANDOM device (in this case a DUMMY interface!) > > It doesn't matter how many clients are requesting using MLD, the > multicast routing table simply isn't updated and packets are not > multicast to one or more desired interfaces. 
> > I would be very happy with a sample setup for either xorp or mrd6 that > accomplishes this. If it's even possible without userland applications > (which I suspect) then also please tell me. I am using a recent kernel > (2.6.27.10) which should have ipv6 multicast routing support (and which > is enabled). > > Thanks very much indeed! > Hello Erik > Can you indicate which Xorp code you are using as well as linux type > (ubuntu or RH or ??)? Can you send your configuration files also? > Syed xorp version 1.5 from source, linux vanilla kernel 2.6.27.10 from source with everything regarding multicasting enabled including multicast route ipv6. This is my config.boot: interfaces { interface eth10 { default-system-config } } protocols { mld { disable: false interface eth10 { vif eth10 { disable: false } } traceoptions { flag all { disable: false } } } fib2mrib { disable: false } } plumbing { mfea4 { disable: false interface eth10 { vif eth10 { disable: false } } interface register_vif { vif register_vif { disable: false } } traceoptions { flag all { disable: false } } } } Please note, all xorp (or whatever what program) has to do for me, is receive MLD group membership packets from clients (attached to the server) and make the kernel transmit the packets (generated ON the server) with the relevant multicast (ipv6) address to the subscribed clients! -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090117/7de8889d/attachment.bin From pavlin at ICSI.Berkeley.EDU Sat Jan 17 11:06:46 2009 From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov) Date: Sat, 17 Jan 2009 11:06:46 -0800 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <497224C8.7070504@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> Message-ID: <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> Erik, XORP doesn't implement MLD proxy hence must configure PIM-SM for IPv6 to forward multicast packets between interfaces. Please see xorp/rtrmgr/config/multicast6.boot for sample config, and the XORP User Manual for details. Regards, Pavlin Erik Slagter wrote: > Syed Khalid wrote: > > On Sat, Jan 17, 2009 at 5:03 AM, Erik Slagter > > wrote: > > Please help me, even though it might not be strictly xorp related. BTW I > > tried xorp and mrd6 to accomplish this. > > I have a linux server with 12 ethernet interfaces, traffic between them > > is routed by the kernel. All these have one or more > > hosts connected, some running ipv6 and/or some of them running > > multicast clients. > > I have a tool that sends mp3's to a multicast address > > (ff05:0:0:0:0:0:2:1) and a tool that listens to this address. > > Everything works if I set up a route manually for ff05:0:0:0:0:0:2:1 to > > one of the devices, the client indeed gets the traffic and all works. > > But that's not my intention. I want the traffic that is sent from the > > (same) server to > > ff05:0:0:0:0:0:2:1 to be sent out on all interfaces that have clients > > requesting the stream (using MLD requests). The client has indeed MLD > > packets sent by the kernel of the client to the server. > > Seems pretty basic multicasting to me. But I can't get it working. I > > tried xorp, plain kernel, mrd6. No errors, but the traffic is simply not > > sent out to the client's interfaces. 
The MLD membership messages are > > received and processed, but the traffic still goes out on one single > > RANDOM device (in this case a DUMMY interface!) > > It doesn't matter how many clients are requesting using MLD, the > > multicast routing table simply isn't updated and packets are not > > multicast to one or more desired interfaces. > > I would be very happy with a sample setup for either xorp or mrd6 that > > accomplishes this. If it's even possible without userland applications > > (which I suspect) then also please tell me. I am using a recent kernel > > (2.6.27.10) which should have ipv6 multicast routing support (and which > > is enabled). > > Thanks very much indeed! > > > Hello Erik > > Can you indicate which Xorp code you are using as well as linux type > > (ubuntu or RH or ??)? Can you send your configuration files also? > > Syed > > xorp version 1.5 from source, linux vanilla kernel 2.6.27.10 from source > with everything regarding multicasting enabled including multicast route > ipv6. > > This is my config.boot: > > interfaces { > interface eth10 { > default-system-config > } > } > > protocols { > mld { > disable: false > > interface eth10 { > vif eth10 { > disable: false > } > } > traceoptions { > flag all { > disable: false > } > } > } > > fib2mrib { > disable: false > } > } > > plumbing { > mfea4 { > disable: false > interface eth10 { > vif eth10 { > disable: false > } > } > > interface register_vif { > vif register_vif { > disable: false > } > } > > traceoptions { > flag all { > disable: false > } > } > } > } > > Please note, all xorp (or whatever what program) has to do for me, is > receive MLD group membership packets from clients (attached to the > server) and make the kernel transmit the packets (generated ON the > server) with the relevant multicast (ipv6) address to the subscribed > clients! From erik at slagter.name Sat Jan 17 12:14:14 2009 From: erik at slagter.name (Erik Slagter) Date: Sat, 17 Jan 2009 21:14:14 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> Message-ID: <49723C16.5030409@slagter.name> Pavlin Radoslavov wrote: >> Please note, all xorp (or whatever what program) has to do for me, is >> receive MLD group membership packets from clients (attached to the >> server) and make the kernel transmit the packets (generated ON the >> server) with the relevant multicast (ipv6) address to the subscribed >> clients! > XORP doesn't implement MLD proxy hence must configure PIM-SM for > IPv6 to forward multicast packets between interfaces. > Please see xorp/rtrmgr/config/multicast6.boot for sample config, and > the XORP User Manual for details. I enabled with pim-sm/6 in my xorp boot file, with multicast6.boot as base. I must say, especially the pim configuration part is complete hocus pocus to me (and I wouldn't even want to use it!) Anyway, what I created doesn't work. It complains about "IPv6 multicast routing not supported" although I explicitly enabled it in the kernel config. 
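For reference, the kernel option involved here is CONFIG_IPV6_MROUTE (added around Linux 2.6.26, so it applies to 2.6.27.10). A way to double-check that the running kernel was actually built with it (a sketch; the config file locations vary by distribution, and /proc/config.gz exists only if IKCONFIG is enabled):

    zgrep CONFIG_IPV6_MROUTE /proc/config.gz 2>/dev/null
    grep CONFIG_IPV6_MROUTE /boot/config-$(uname -r) 2>/dev/null
    # Note: XORP's ./configure also needs kernel headers that contain the
    # IPv6 multicast routing definitions, or it will build without support.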
Here is my config.boot: (for the moment I solely use eth10 for testing) ================================================================== interfaces { interface eth10 { default-system-config } } fea { unicast-forwarding6 { disable: false } } protocols { fib2mrib { disable: false } } protocols { mld { disable: false interface eth10 { vif eth10 { disable: false } } traceoptions { flag all { disable: false } } } } protocols { pimsm6 { interface eth10 { vif eth10 { disable: false } } interface register_vif { vif register_vif { disable: false } } /* Note: static-rps and bootstrap should not be mixed */ static-rps { rp 2001:888:133a:199::1 { group-prefix fe05::/16 { } } } /* bootstrap { disable: false cand-bsr { scope-zone ff00::/8 { cand-bsr-by-vif-name: "dc0" } } cand-rp { group-prefix ff00::/8 { cand-rp-by-vif-name: "dc0" } } } */ switch-to-spt-threshold { /* approx. 1K bytes/s (10Kbps) threshold */ disable: false interval: 100 bytes: 102400 } traceoptions { flag all { disable: false } } } } plumbing { mfea6 { disable: false interface eth10 { vif eth10 { disable: false } } interface register_vif { vif register_vif { disable: false } } traceoptions { flag all { disable: false } } } } =================================================== Output of xorp_rtrmgr: (run as root) =================================================== artemis root:/home/erik/src/xorp/xorp-1.5/rtrmgr $ ./xorp_rtrmgr -b config.boot [ 2009/01/17 21:09:48 INFO xorp_rtrmgr:22089 RTRMGR +239 master_conf_tree.cc execute ] Changed modules: interfaces, firewall, fea, mfea6, mld, rib, fib2mrib, pimsm6 [ 2009/01/17 21:09:48 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc execute ] Executing module: interfaces (fea/xorp_fea) [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] MFEA enabled [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI enabled [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI started [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] MFEA enabled [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI enabled [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI started [ 2009/01/17 21:09:50 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc execute ] Executing module: firewall (fea/xorp_fea) [ 2009/01/17 21:09:54 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc execute ] Executing module: fea (fea/xorp_fea) [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc execute ] Executing module: mfea6 (fea/xorp_fea) [ 2009/01/17 21:10:00 INFO xorp_fea MFEA ] Interface added: Vif[eth10] pif_index: 3 vif_index: 0 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/17 21:10:00 ERROR xorp_fea:22090 MFEA +776 mfea_mrouter.cc start_mrt ] start_mrt() failed: IPv6 multicast routing not supported [ 2009/01/17 21:10:00 INFO xorp_fea MFEA ] MFEA started [ 2009/01/17 21:10:00 INFO xorp_fea MFEA ] Interface enabled Vif[eth10] pif_index: 3 vif_index: 0 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/17 21:10:00 ERROR xorp_fea:22090 MFEA +1181 mfea_mrouter.cc add_multicast_vif ] add_multicast_vif() failed: IPv6 multicast routing not supported [ 2009/01/17 21:10:00 ERROR xorp_fea:22090 MFEA +1186 mfea_node.cc start_vif ] Cannot start vif eth10: cannot add the multicast vif to the kernel [ 2009/01/17 
21:10:00 WARNING xorp_fea XrlMfeaTarget ] Handling method for mfea/0.1/start_vif failed: XrlCmdError 102 Command failed Cannot start vif eth10: cannot add the multicast vif to the kernel [ 2009/01/17 21:10:00 ERROR xorp_rtrmgr:22089 RTRMGR +681 master_conf_tree.cc commit_pass2_done ] Commit failed: 102 Command failed Cannot start vif eth10: cannot add the multicast vif to the kernel [ 2009/01/17 21:10:00 ERROR xorp_rtrmgr:22089 RTRMGR +251 master_conf_tree.cc config_done ] Configuration failed: 102 Command failed Cannot start vif eth10: cannot add the multicast vif to the kernel [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +2228 task.cc run_task ] No more tasks to run [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc terminate ] Terminating module: fea [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc terminate ] Terminating module: firewall [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc terminate ] Terminating module: interfaces [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc terminate ] Terminating module: mfea6 [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +194 module_manager.cc terminate ] Killing module: mfea6 [ 2009/01/17 21:10:00 ERROR xorp_rtrmgr:22089 RTRMGR +747 module_manager.cc done_cb ] Command "/home/erik/src/xorp/xorp-1.5/fea/xorp_fea": terminated with signal 15. [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +282 module_manager.cc module_exited ] Module killed during shutdown: mfea6 ============================================================ For completeness: artemis root:/home/erik/src/xorp/xorp-1.5/rtrmgr $ ip -6 addr show eth10 3: eth10: mtu 1500 qlen 1000 inet6 2001:888:133a:110::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::204:23ff:feaa:a983/64 scope link valid_lft forever preferred_lft forever The multicast group address to be used is fe05::1 -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090117/2dcf02c4/attachment.bin From pavlin at ICSI.Berkeley.EDU Sat Jan 17 14:30:41 2009 From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov) Date: Sat, 17 Jan 2009 14:30:41 -0800 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <49723C16.5030409@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> Message-ID: <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> > I enabled with pim-sm/6 in my xorp boot file, with multicast6.boot as base. I > must say, especially the pim configuration part is complete hocus pocus to me > (and I wouldn't even want to use it!) Your config looks fine modulo the following: * Typically you need to enable more than one interface for multicast routing. * The multicast group prefix (fe05::/16) in the static-rps is invalid. It should be the prefix ff00::/8 (or a sub-prefix of it). > Anyway, what I created doesn't work. It complains about "IPv6 multicast > routing not supported" although I explicitly enabled it in the kernel config. It seems like the ./configure script didn't discover that the system supports IPv6 multicast routing. Please send file config.log which was generated after ./configure was run. 
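For reference, a plain search of config.log usually locates the relevant configure test (a sketch; the exact wording of the test differs between XORP versions):

    grep -n -i multicast config.log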
It should contain the reason why ./configure didn't discover it. Pavlin > Here is my config.boot: (for the moment I solely use eth10 for testing) > > ================================================================== > > interfaces { > interface eth10 { > default-system-config > } > } > > fea { > unicast-forwarding6 { > disable: false > } > } > > protocols { > fib2mrib { > disable: false > } > } > > protocols { > mld { > disable: false > > interface eth10 { > vif eth10 { > disable: false > } > } > > traceoptions { > flag all { > disable: false > } > } > } > } > > protocols { > pimsm6 { > interface eth10 { > vif eth10 { > disable: false > } > } > interface register_vif { > vif register_vif { > disable: false > } > } > > /* Note: static-rps and bootstrap should not be mixed */ > static-rps { > rp 2001:888:133a:199::1 { > group-prefix fe05::/16 { > } > } > } > /* > bootstrap { > disable: false > cand-bsr { > scope-zone ff00::/8 { > cand-bsr-by-vif-name: "dc0" > } > } > cand-rp { > group-prefix ff00::/8 { > cand-rp-by-vif-name: "dc0" > } > } > } > */ > > switch-to-spt-threshold { > /* approx. 1K bytes/s (10Kbps) threshold */ > disable: false > interval: 100 > bytes: 102400 > } > > traceoptions { > flag all { > disable: false > } > } > } > } > > plumbing { > mfea6 { > disable: false > interface eth10 { > vif eth10 { > disable: false > } > } > > interface register_vif { > vif register_vif { > disable: false > } > } > > traceoptions { > flag all { > disable: false > } > } > } > } > > =================================================== > > Output of xorp_rtrmgr: (run as root) > > =================================================== > > artemis root:/home/erik/src/xorp/xorp-1.5/rtrmgr $ ./xorp_rtrmgr -b > config.boot > [ 2009/01/17 21:09:48 INFO xorp_rtrmgr:22089 RTRMGR +239 master_conf_tree.cc > execute ] Changed modules: interfaces, firewall, fea, mfea6, mld, rib, > fib2mrib, pimsm6 > [ 2009/01/17 21:09:48 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc > execute ] Executing module: interfaces (fea/xorp_fea) > [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] MFEA enabled > [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI enabled > [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI started > [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] MFEA enabled > [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI enabled > [ 2009/01/17 21:09:49 INFO xorp_fea MFEA ] CLI started > [ 2009/01/17 21:09:50 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc > execute ] Executing module: firewall (fea/xorp_fea) > [ 2009/01/17 21:09:54 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc > execute ] Executing module: fea (fea/xorp_fea) > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +96 module_manager.cc > execute ] Executing module: mfea6 (fea/xorp_fea) > [ 2009/01/17 21:10:00 INFO xorp_fea MFEA ] Interface added: Vif[eth10] > pif_index: 3 vif_index: 0 addr: 2001:888:133a:110::1 subnet: > 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 > subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST > UNDERLYING_VIF_UP MTU: 1500 > [ 2009/01/17 21:10:00 ERROR xorp_fea:22090 MFEA +776 mfea_mrouter.cc > start_mrt ] start_mrt() failed: IPv6 multicast routing not supported > [ 2009/01/17 21:10:00 INFO xorp_fea MFEA ] MFEA started > [ 2009/01/17 21:10:00 INFO xorp_fea MFEA ] Interface enabled Vif[eth10] > pif_index: 3 vif_index: 0 addr: 2001:888:133a:110::1 subnet: > 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 > subnet: fe80::/64 broadcast: :: peer: :: Flags: 
MULTICAST BROADCAST > UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED > [ 2009/01/17 21:10:00 ERROR xorp_fea:22090 MFEA +1181 mfea_mrouter.cc > add_multicast_vif ] add_multicast_vif() failed: IPv6 multicast routing not > supported > [ 2009/01/17 21:10:00 ERROR xorp_fea:22090 MFEA +1186 mfea_node.cc start_vif > ] Cannot start vif eth10: cannot add the multicast vif to the kernel > [ 2009/01/17 21:10:00 WARNING xorp_fea XrlMfeaTarget ] Handling method for > mfea/0.1/start_vif failed: XrlCmdError 102 Command failed Cannot start vif > eth10: cannot add the multicast vif to the kernel > [ 2009/01/17 21:10:00 ERROR xorp_rtrmgr:22089 RTRMGR +681 master_conf_tree.cc > commit_pass2_done ] Commit failed: 102 Command failed Cannot start vif eth10: > cannot add the multicast vif to the kernel > [ 2009/01/17 21:10:00 ERROR xorp_rtrmgr:22089 RTRMGR +251 master_conf_tree.cc > config_done ] Configuration failed: 102 Command failed Cannot start vif eth10: > cannot add the multicast vif to the kernel > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +2228 task.cc run_task ] > No more tasks to run > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc > terminate ] Terminating module: fea > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc > terminate ] Terminating module: firewall > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc > terminate ] Terminating module: interfaces > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +171 module_manager.cc > terminate ] Terminating module: mfea6 > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +194 module_manager.cc > terminate ] Killing module: mfea6 > [ 2009/01/17 21:10:00 ERROR xorp_rtrmgr:22089 RTRMGR +747 module_manager.cc > done_cb ] Command "/home/erik/src/xorp/xorp-1.5/fea/xorp_fea": terminated with > signal 15. > [ 2009/01/17 21:10:00 INFO xorp_rtrmgr:22089 RTRMGR +282 module_manager.cc > module_exited ] Module killed during shutdown: mfea6 > > ============================================================ > > For completeness: > > artemis root:/home/erik/src/xorp/xorp-1.5/rtrmgr $ ip -6 addr show eth10 > 3: eth10: mtu 1500 qlen 1000 > inet6 2001:888:133a:110::1/64 scope global > valid_lft forever preferred_lft forever > inet6 fe80::204:23ff:feaa:a983/64 scope link > valid_lft forever preferred_lft forever > > The multicast group address to be used is fe05::1 > _______________________________________________ > Xorp-users mailing list > Xorp-users at xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From erik at slagter.name Sun Jan 18 03:59:26 2009 From: erik at slagter.name (Erik Slagter) Date: Sun, 18 Jan 2009 12:59:26 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> Message-ID: <4973199E.60708@slagter.name> Thank you for your help this far. Proceedings this far are: Pavlin Radoslavov wrote: > * The multicast group prefix (fe05::/16) in the static-rps is > invalid. It should be the prefix ff00::/8 (or a sub-prefix of > it). This address was a typo in the config file. I corrected it. >> Anyway, what I created doesn't work. 
It complains about "IPv6 multicast >> routing not supported" although I explicitly enabled it in the kernel config. > > It seems like the ./configure script didn't discover that the system > supports IPv6 multicast routing. Please send file config.log which > was generated after ./configure was run. It should contain the > reason why ./configure didn't discover it. I found two problems: the first being having an obsolete "kernel headers" package installed on my system, while this usually isn't a problem because I link /usr/include/linux to the source/include directory of the currently running kernel. But every once in a while this link gets replaced by a "kernel headers" package when I update the system :-( The other problem is that xorp 1.5 apparently doesn't support ipv6 multicast routing on linux anyway (according to the changelog of 1.6). I am surprised by this because I remember having been experimenting with this earlier on. So I replaced the link to the kernel headers and got myself xorp 1.6, configure now says there is ipv6 multicast support, xorp starts, although with a lot of warnings and I even managed to "draw" a multicast stream to a client. > * Typically you need to enable more than one interface for > multicast routing. So I did, although imho it already should work with only one interface, as the stream is generated on the router host itself. I have a few issues left though: - Almost of the interfaces have clients directly connected, this means an interface will go "not running" when a client is switched off. The interface actually is still "up" and should be treated as being "up". So if I define one of the not-always-active interfaces to the xorp config, and the link is down (no carrier), xorp bails out. I worked around this for the moment by changing the is_underlying_vif_up function to always report "true". Didn't yet test it though. - The stream is being directed to the client on reception of MLD membersip packets, but the stream never stops, even when the client is long gone (and the client has had MLD packets sent stating "stop"). I suspect this has something to do with the warnings/errors I am seeing. BTW is it normal that "ip -6 mroute show" displays nothing? In the meantime I will do some more testing and trying. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090118/ccc8da18/attachment-0001.bin From erik at slagter.name Sun Jan 18 05:33:16 2009 From: erik at slagter.name (Erik Slagter) Date: Sun, 18 Jan 2009 14:33:16 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <4973199E.60708@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name> Message-ID: <49732F9C.5070809@slagter.name> Erik Slagter wrote: > Proceedings this far are: [ .. ] It seems to work sometimes, but not systematic. Also lots of errors and warnings on stderr. Can you please have a look? 
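Regarding the earlier question about "ip -6 mroute show" displaying nothing: the kernel also exposes its IPv6 multicast routing state directly under /proc, which helps tell whether any multicast vifs and (S,G) cache entries were installed at all. A sketch (these files exist only when IPv6 multicast routing is compiled into the kernel):

    cat /proc/net/ip6_mr_vif      # multicast vifs registered by the routing daemon
    cat /proc/net/ip6_mr_cache    # (S,G) forwarding cache entries
    ip -6 mroute show             # should roughly mirror the cache above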
boot.config: ========================================================== interfaces { interface eth00 { default-system-config } interface eth10 { default-system-config } interface eth20 { default-system-config } interface eth21 { default-system-config } } fea { unicast-forwarding6 { disable: false } } protocols { fib2mrib { disable: false } } protocols { mld { disable: false interface eth00 { vif eth00 { disable: false } } interface eth10 { vif eth10 { disable: false } } interface eth20 { vif eth20 { disable: false } } interface eth21 { vif eth21 { disable: false } } traceoptions { flag all { disable: false } } } } protocols { pimsm6 { disable: false interface eth00 { vif eth00 { disable: false } } interface eth10 { vif eth10 { disable: false } } interface eth20 { vif eth20 { disable: false } } interface eth21 { vif eth21 { disable: false } } interface register_vif { vif register_vif { disable: false } } /* Note: static-rps and bootstrap should not be mixed */ static-rps { rp 2001:888:133a:110::1 { group-prefix ff05::/16 { } } } /* bootstrap { disable: false } */ switch-to-spt-threshold { /* approx. 1K bytes/s (10Kbps) threshold */ disable: false interval: 100 bytes: 102400 } traceoptions { flag all { disable: true } } } } plumbing { mfea6 { disable: false interface eth00 { vif eth00 { disable: false } } interface eth10 { vif eth10 { disable: false } } interface eth20 { vif eth20 { disable: false } } interface eth21 { vif eth21 { disable: false } } interface register_vif { vif register_vif { disable: false } } traceoptions { flag all { disable: false } } } } ip -6 addr: =========================================================== 1: lo: mtu 16436 inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth11: mtu 1500 qlen 100 inet6 2001:888:133a:111::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::204:23ff:feaa:a982/64 scope link valid_lft forever preferred_lft forever 3: eth10: mtu 1500 qlen 1000 inet6 2001:888:133a:110::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::204:23ff:feaa:a983/64 scope link valid_lft forever preferred_lft forever 4: eth21: mtu 1500 qlen 1000 inet6 2001:888:133a::1:80:81/112 scope global valid_lft forever preferred_lft forever inet6 fe80::204:23ff:febb:5aa8/64 scope link valid_lft forever preferred_lft forever 5: eth20: mtu 1500 qlen 1000 inet6 2001:888:133a:120::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::204:23ff:febb:5aa9/64 scope link valid_lft forever preferred_lft forever 6: eth41: mtu 1500 qlen 100 inet6 2001:888:133a:141::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::207:e9ff:fe18:f0fa/64 scope link valid_lft forever preferred_lft forever 7: eth40: mtu 1500 qlen 1000 inet6 2001:888:133a:140::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::207:e9ff:fe18:f0fb/64 scope link valid_lft forever preferred_lft forever 9: eth00: mtu 1500 qlen 1000 inet6 2001:888:133a:100::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::217:31ff:febb:3b14/64 scope link valid_lft forever preferred_lft forever 11: eth33: mtu 1500 qlen 1000 inet6 2001:888:133a:133::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::20d:88ff:fecc:abd8/64 scope link valid_lft forever preferred_lft forever 12: eth32: mtu 1500 qlen 1000 inet6 2001:888:133a::1:112:113/112 scope global valid_lft forever preferred_lft forever inet6 fe80::20d:88ff:fecc:abd9/64 scope link valid_lft forever preferred_lft forever 13: eth31: mtu 1500 qlen 1000 inet6 
2001:888:133a::1:96:97/112 scope global valid_lft forever preferred_lft forever inet6 fe80::20d:88ff:fecc:abda/64 scope link valid_lft forever preferred_lft forever 14: eth30: mtu 1500 qlen 1000 inet6 fe80::20d:88ff:fecc:abdb/64 scope link valid_lft forever preferred_lft forever 15: dummy0: mtu 1500 inet6 2001:888:133a:199::1/64 scope global valid_lft forever preferred_lft forever inet6 2001:888:133a::1:0:1/112 scope global valid_lft forever preferred_lft forever inet6 fe80::f8ab:2bff:fe0e:60df/64 scope link valid_lft forever preferred_lft forever 22: vlan1 at eth30: mtu 1500 inet6 2001:888:133a::1:160:161/112 scope global valid_lft forever preferred_lft forever inet6 fe80::20d:88ff:fecc:abdb/64 scope link valid_lft forever preferred_lft forever 23: vlan2 at eth30: mtu 1500 inet6 2001:888:133a:182::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::20d:88ff:fecc:abdb/64 scope link valid_lft forever preferred_lft forever 24: vlan3 at eth30: mtu 1500 inet6 2001:888:133a:183::1/64 scope global valid_lft forever preferred_lft forever inet6 fe80::20d:88ff:fecc:abdb/64 scope link valid_lft forever preferred_lft forever output of xorp: (when xorp has been started, I first start the server application on the server and then the client application on the client on eth10, then I quit all of them. Multicast address is ff05::1 BTW If I can start xorp in a mode that gives less verbose but still useful information for you, please let me know. Output in the next message due to size. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090118/46aff9b4/attachment.bin From erik at slagter.name Sun Jan 18 05:33:42 2009 From: erik at slagter.name (Erik Slagter) Date: Sun, 18 Jan 2009 14:33:42 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <4973199E.60708@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name> Message-ID: <49732FB6.1090103@slagter.name> output of xorp: (when xorp has been started, I first start the server application on the server and then the client application on the client on eth10, then I quit all of them. Multicast address is ff05::1 BTW If I can start xorp in a mode that gives less verbose but still useful information for you, please let me know. 
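When reproducing the test below, it can also help to confirm on the client that the group was actually joined, and on the router that MLD reports arrive on the expected interface. A sketch (eth10 is the client-facing interface from the configuration above; the client-side interface name will differ):

    ip -6 maddr show              # on the client: joined IPv6 multicast groups per interface
    tcpdump -i eth10 -vv ip6      # on the router: look for "multicast listener report" in the decode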
======================================================= artemis root:~erik/src/xorp/xorp-1.6/rtrmgr $ ./xorp_rtrmgr -b ~erik/config.boot [ 2009/01/18 14:27:35 INFO xorp_rtrmgr:18971 RTRMGR +249 master_conf_tree.cc execute ] Changed modules: interfaces, firewall, fea, mfea6, mld, rib, fib2mrib, pimsm6 [ 2009/01/18 14:27:35 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: interfaces (fea/xorp_fea) [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] MFEA enabled [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] CLI enabled [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] CLI started [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] MFEA enabled [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] CLI enabled [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] CLI started [ 2009/01/18 14:27:37 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: firewall (fea/xorp_fea) [ 2009/01/18 14:27:41 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: fea (fea/xorp_fea) [ 2009/01/18 14:27:47 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: mfea6 (fea/xorp_fea) [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface added: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface added: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface added: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface added: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] MFEA started [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface enabled Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface started: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface added: Vif[register_vif] pif_index: 9 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: 2001:888:133a:100::1 peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: fe80::217:31ff:febb:3b14 peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface enabled Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 
subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface started: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface enabled Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface started: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface enabled Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface started: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface enabled Vif[register_vif] pif_index: 9 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: 2001:888:133a:100::1 peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: fe80::217:31ff:febb:3b14 peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_fea MFEA ] Interface started: Vif[register_vif] pif_index: 9 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: 2001:888:133a:100::1 peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: fe80::217:31ff:febb:3b14 peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:47 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: mld (mld6igmp/xorp_mld) [ 2009/01/18 14:27:47 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "MLD" does not exist or is not enabled. 
[ 2009/01/18 14:27:47 INFO xorp_mld MLD6IGMP ] Protocol enabled [ 2009/01/18 14:27:47 INFO xorp_mld MLD6IGMP ] CLI enabled [ 2009/01/18 14:27:47 INFO xorp_mld MLD6IGMP ] CLI started [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Protocol started [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface added: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface added: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface added: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface added: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface enabled: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface started: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface enabled: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface started: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface enabled: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface started: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface enabled: Vif[eth21] pif_index: 4 
vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_mld MLD6IGMP ] Interface started: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:49 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: rib (rib/xorp_rib) [ 2009/01/18 14:27:50 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::1:ff00:60b on vif eth10 [ 2009/01/18 14:27:50 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::1:ff00:60b) on vif eth10 [ 2009/01/18 14:27:50 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::20c:76ff:fe9b:3003 to ff02::1:ff9b:3003 on vif eth21 [ 2009/01/18 14:27:50 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::1:ff9b:3003) on vif eth21 [ 2009/01/18 14:27:50 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::20c:76ff:fe9b:3003 to ff02::202 on vif eth21 [ 2009/01/18 14:27:50 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::202) on vif eth21 [ 2009/01/18 14:27:51 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::202 on vif eth10 [ 2009/01/18 14:27:51 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::202) on vif eth10 [ 2009/01/18 14:27:51 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: fib2mrib (fib2mrib/xorp_fib2mrib) [ 2009/01/18 14:27:54 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc execute ] Executing module: pimsm6 (pim/xorp_pimsm6) [ 2009/01/18 14:27:54 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "PIMSM_6" does not exist or is not enabled. 
[ 2009/01/18 14:27:54 INFO xorp_pimsm6 PIM ] Protocol enabled [ 2009/01/18 14:27:54 INFO xorp_pimsm6 PIM ] CLI enabled [ 2009/01/18 14:27:54 INFO xorp_pimsm6 PIM ] CLI started [ 2009/01/18 14:27:55 INFO xorp_pimsm6 PIM ] Protocol started [ 2009/01/18 14:27:55 INFO xorp_pimsm6 PIM ] Interface added: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:55 INFO xorp_pimsm6 PIM ] Interface added: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:55 INFO xorp_pimsm6 PIM ] Interface added: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:55 INFO xorp_pimsm6 PIM ] Interface added: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:55 INFO xorp_pimsm6 PIM ] Interface added: Vif[register_vif] pif_index: 0 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: :: peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 [ 2009/01/18 14:27:55 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::221:97ff:fe77:5836 to ff02::1:ff77:5836 on vif eth20 [ 2009/01/18 14:27:55 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::1:ff77:5836) on vif eth20 [ 2009/01/18 14:27:55 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::221:97ff:fe77:5836 to ff02::202 on vif eth20 [ 2009/01/18 14:27:55 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::202) on vif eth20 [ 2009/01/18 14:27:55 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:feaa:a983 to ff02::1: 0 [ 2009/01/18 14:27:55 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::204:23ff:feaa:a983 to ff02::1 on vif eth10 [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface enabled: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface started: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface enabled: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP 
MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface started: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface enabled: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface started: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface enabled: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface started: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface enabled: Vif[register_vif] pif_index: 0 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: :: peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_pimsm6 PIM ] Interface started: Vif[register_vif] pif_index: 0 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: :: peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 UP IPv6 ENABLED [ 2009/01/18 14:27:56 INFO xorp_rtrmgr:18971 RTRMGR +2233 task.cc run_task ] No more tasks to run [ 2009/01/18 14:27:56 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::20c:76ff:fe9b:3003 to ff02::1:ff80:82 on vif eth21 [ 2009/01/18 14:27:56 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff02::1:ff80:82) on vif eth21 [ 2009/01/18 14:27:56 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 vif_index = 0 src = :: dst = :: [ 2009/01/18 14:27:56 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: vif_index = 0 src = :: dst = :: message_type = 0 [ 2009/01/18 14:27:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::217:31ff:febb:3b14 to ff02::d: 0 [ 2009/01/18 14:27:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa9 to ff02::d: 0 [ 2009/01/18 14:27:58 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 vif_index = 0 src = :: dst = :: [ 2009/01/18 14:27:58 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: vif_index = 0 src = :: dst = :: 
message_type = 0 [ 2009/01/18 14:27:58 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::200:5aff:fe00:60b to fe80::204:23ff:feaa:a983 on vif eth10 [ 2009/01/18 14:28:03 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::200:5aff:fe00:60b to fe80::204:23ff:feaa:a983 on vif eth10 [ 2009/01/18 14:28:09 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa9 to ff02::1: 0 [ 2009/01/18 14:28:09 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::204:23ff:febb:5aa9 to ff02::1 on vif eth20 [ 2009/01/18 14:28:14 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from 2001:888:133a::1:80:82 to fe80::204:23ff:febb:5aa8 on vif eth21 [ 2009/01/18 14:28:19 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::20c:76ff:fe9b:3003 to 2001:888:133a::1:80:81 on vif eth21 [ 2009/01/18 14:28:19 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::20c:76ff:fe9b:3003 to fe80::204:23ff:febb:5aa8 on vif eth21 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] TX MLD_LISTENER_QUERY from fe80::217:31ff:febb:3b14 to ff02::1 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] TX MLD_LISTENER_QUERY from fe80::204:23ff:feaa:a983 to ff02::1 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] TX MLD_LISTENER_QUERY from fe80::204:23ff:febb:5aa9 to ff02::1 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_QUERY from fe80::217:31ff:febb:3b14 to ff02::1 on vif eth00 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_QUERY from fe80::204:23ff:feaa:a983 to ff02::1 on vif eth10 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_QUERY from fe80::204:23ff:febb:5aa9 to ff02::1 on vif eth20 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] TX MLD_LISTENER_QUERY from fe80::204:23ff:febb:5aa8 to ff02::1 [ 2009/01/18 14:28:21 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_QUERY from fe80::204:23ff:febb:5aa8 to ff02::1 on vif eth21 [ 2009/01/18 14:28:22 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::221:97ff:fe77:5836 to ff02::1:ff77:5836 on vif eth20 [ 2009/01/18 14:28:23 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::1:ff00:60b on vif eth10 [ 2009/01/18 14:28:24 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::20c:76ff:fe9b:3003 to fe80::204:23ff:febb:5aa8 on vif eth21 [ 2009/01/18 14:28:25 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPOR T from fe80::20c:76ff:fe9b:3003 to ff02::1:ff80:82 on vif eth21 [ 2009/01/18 14:28:26 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from 2001:888:133a:120:221:97ff:fe77:5836 to fe80::204:23ff:febb:5aa9 on vif eth20 [ 2009/01/18 14:28:26 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::221:97ff:fe77:5836 to fe80::204:23ff:febb:5aa9 on vif eth20 [ 2009/01/18 14:28:26 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 vif_index = 0 src = :: dst = :: [ 2009/01/18 14:28:26 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: vif_index = 0 src = :: dst = :: message_type = 0 [ 2009/01/18 14:28:28 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::217:31ff:febb:3b14 to ff02::d: 0 [ 2009/01/18 14:28:28 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa9 to ff02::d: 0 [ 2009/01/18 14:28:28 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from 
fe80::204:23ff:febb:5aa8 to ff02::d: 0 [ 2009/01/18 14:28:30 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::20c:76ff:fe9b:3003 to ff02::1:ff9b:3003 on vif eth21 [ 2009/01/18 14:28:31 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from 2001:888:133a:141::2 to fe80::207:e9ff:fe18:f0fa pif_index 6: no vif found [ 2009/01/18 14:28:31 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::221:97ff:fe77:5836 to fe80::204:23ff:febb:5aa9 on vif eth20 [ 2009/01/18 14:28:32 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:feaa:a983 to ff02::1: 0 [ 2009/01/18 14:28:32 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::204:23ff:feaa:a983 to ff02::1 on vif eth10 [ 2009/01/18 14:28:36 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from fe80::204:a7ff:fe04:da3f to fe80::207:e9ff:fe18:f0fa pif_index 6: no vif found [ 2009/01/18 14:28:40 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from fe80::20d:88ff:fecc:abdb to ff02::1 pif_index 23: no vif found [ 2009/01/18 14:28:41 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from fe80::204:a7ff:fe04:da3f to fe80::207:e9ff:fe18:f0fa pif_index 6: no vif found [ 2009/01/18 14:28:41 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff05::1 on vif eth10 [ 2009/01/18 14:28:41 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff05::1) on vif eth10 [ 2009/01/18 14:28:47 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff05::1 on vif eth10 [ 2009/01/18 14:28:56 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 vif_index = 0 src = :: dst = :: [ 2009/01/18 14:28:56 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: vif_index = 0 src = :: dst = :: message_type = 0 [ 2009/01/18 14:28:57 TRACE xorp_mld MLD6IGMP ] RX MLDV2_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::16 on vif eth10 [ 2009/01/18 14:28:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::217:31ff:febb:3b14 to ff02::d: 0 [ 2009/01/18 14:28:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa9 to ff02::d: 0 [ 2009/01/18 14:28:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa8 to ff02::d: 0 [ 2009/01/18 14:29:00 TRACE xorp_mld MLD6IGMP ] RX MLDV2_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::16 on vif eth10 ^C[ 2009/01/18 14:29:02 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: pimsm6 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] CLI stopped [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] 
Interface stopped: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[register_vif] pif_index: 0 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: :: peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth00 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth10 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth20 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth21 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: register_vif [ 2009/01/18 14:29:02 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: pimsm6 [ 2009/01/18 14:29:03 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "PIMSM_6" does not exist or is not enabled. [ 2009/01/18 14:29:04 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: fib2mrib [ 2009/01/18 14:29:04 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: fib2mrib [ 2009/01/18 14:29:05 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "fib2mrib" does not exist or is not enabled. [ 2009/01/18 14:29:06 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: rib [ 2009/01/18 14:29:06 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::200:5aff:fe00:60b to fe80::204:23ff:feaa:a983 on vif eth10 [ 2009/01/18 14:29:06 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: rib [ 2009/01/18 14:29:07 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "rib" does not exist or is not enabled. 
[ 2009/01/18 14:29:08 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: mld [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] CLI stopped [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::202) on vif eth10 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff00:60b) on vif eth10 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff05::1) on vif eth10 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::202) on vif eth20 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff77:5836) on vif eth20 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::202) on vif eth21 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff80:82) on vif eth21 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff9b:3003) on vif eth21 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] CLI stopped [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] CLI stopped [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth00 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth10 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth20 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth21 [ 2009/01/18 14:29:08 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: mld [ 2009/01/18 14:29:09 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "MLD" does not exist or is not enabled. 
[ 2009/01/18 14:29:10 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: mfea6 [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] CLI stopped [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[register_vif] pif_index: 9 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: 2001:888:133a:100::1 peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: fe80::217:31ff:febb:3b14 peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:11 INFO xorp_rtrmgr:18971 RTRMGR +176 module_manager.cc terminate ] Terminating module: mfea6 [ 2009/01/18 14:29:14 INFO xorp_rtrmgr:18971 RTRMGR +176 module_manager.cc terminate ] Terminating module: fea [ 2009/01/18 14:29:17 INFO xorp_rtrmgr:18971 RTRMGR +176 module_manager.cc terminate ] Terminating module: firewall [ 2009/01/18 14:29:17 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: interfaces [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] CLI stopped [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth00 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth10 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth20 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth21 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: register_vif [ 2009/01/18 14:29:17 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: interfaces [ 2009/01/18 14:29:18 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "fea" does not exist or is not enabled. [ 2009/01/18 14:29:19 INFO xorp_rtrmgr:18971 RTRMGR +2233 task.cc run_task ] No more tasks to run -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 3328 bytes
Desc: S/MIME Cryptographic Signature
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090118/f7ec0982/attachment-0001.bin

From erik at slagter.name Sun Jan 18 05:31:24 2009
From: erik at slagter.name (Erik Slagter)
Date: Sun, 18 Jan 2009 14:31:24 +0100
Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it!
In-Reply-To: <4973199E.60708@slagter.name>
References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name>
Message-ID: <49732F2C.3080801@slagter.name>

Erik Slagter wrote:
> Proceedings this far are: [ .. ]

It seems to work sometimes, but not systematically. There are also lots of errors and warnings on stderr. Can you please have a look?
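(For completeness, the "server application" on the sending side is presumably a plain multicast sender along these lines. Again a Python 3 sketch with an assumed interface, port and hop limit, not the poster's program; one general point worth noting is that the default IPv6 multicast hop limit is 1, so a sender in a routed test has to raise it explicitly or the packets never get past the first router.)

import socket
import time

GROUP = "ff05::1"   # group used in the test
PORT = 5000         # assumption: must match whatever the receiver binds to
IFNAME = "eth00"    # assumption: the interface facing the sender's subnet
HOPS = 16           # default is 1, which would stop the packets at the first router

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_IF, socket.if_nametoindex(IFNAME))
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_MULTICAST_HOPS, HOPS)

while True:
    sock.sendto(b"test payload", (GROUP, PORT))
    time.sleep(1)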
fe80::204:23ff:febb:5aa8 to ff02::d: 0 [ 2009/01/18 14:28:30 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::20c:76ff:fe9b:3003 to ff02::1:ff9b:3003 on vif eth21 [ 2009/01/18 14:28:31 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from 2001:888:133a:141::2 to fe80::207:e9ff:fe18:f0fa pif_index 6: no vif found [ 2009/01/18 14:28:31 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::221:97ff:fe77:5836 to fe80::204:23ff:febb:5aa9 on vif eth20 [ 2009/01/18 14:28:32 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:feaa:a983 to ff02::1: 0 [ 2009/01/18 14:28:32 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::204:23ff:feaa:a983 to ff02::1 on vif eth10 [ 2009/01/18 14:28:36 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from fe80::204:a7ff:fe04:da3f to fe80::207:e9ff:fe18:f0fa pif_index 6: no vif found [ 2009/01/18 14:28:40 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from fe80::20d:88ff:fecc:abdb to ff02::1 pif_index 23: no vif found [ 2009/01/18 14:28:41 WARNING xorp_fea FEA ] proto_socket_read() failed: RX packet from fe80::204:a7ff:fe04:da3f to fe80::207:e9ff:fe18:f0fa pif_index 6: no vif found [ 2009/01/18 14:28:41 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff05::1 on vif eth10 [ 2009/01/18 14:28:41 TRACE xorp_mld MLD6IGMP ] Notify routing add membership for (::, ff05::1) on vif eth10 [ 2009/01/18 14:28:47 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff05::1 on vif eth10 [ 2009/01/18 14:28:56 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 vif_index = 0 src = :: dst = :: [ 2009/01/18 14:28:56 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: vif_index = 0 src = :: dst = :: message_type = 0 [ 2009/01/18 14:28:57 TRACE xorp_mld MLD6IGMP ] RX MLDV2_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::16 on vif eth10 [ 2009/01/18 14:28:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::217:31ff:febb:3b14 to ff02::d: 0 [ 2009/01/18 14:28:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa9 to ff02::d: 0 [ 2009/01/18 14:28:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc proto_socket_read ] proto_socket_read() failed: invalid interface pif_index from fe80::204:23ff:febb:5aa8 to ff02::d: 0 [ 2009/01/18 14:29:00 TRACE xorp_mld MLD6IGMP ] RX MLDV2_LISTENER_REPORT from fe80::200:5aff:fe00:60b to ff02::16 on vif eth10 ^C[ 2009/01/18 14:29:02 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: pimsm6 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] CLI stopped [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] 
Interface stopped: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[register_vif] pif_index: 0 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: :: peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface stopped: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth00 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth10 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth20 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: eth21 [ 2009/01/18 14:29:02 INFO xorp_pimsm6 PIM ] Interface deleted: register_vif [ 2009/01/18 14:29:02 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: pimsm6 [ 2009/01/18 14:29:03 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "PIMSM_6" does not exist or is not enabled. [ 2009/01/18 14:29:04 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: fib2mrib [ 2009/01/18 14:29:04 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: fib2mrib [ 2009/01/18 14:29:05 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "fib2mrib" does not exist or is not enabled. [ 2009/01/18 14:29:06 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: rib [ 2009/01/18 14:29:06 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from fe80::200:5aff:fe00:60b to fe80::204:23ff:feaa:a983 on vif eth10 [ 2009/01/18 14:29:06 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: rib [ 2009/01/18 14:29:07 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "rib" does not exist or is not enabled. 
[ 2009/01/18 14:29:08 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: mld [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] CLI stopped [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::202) on vif eth10 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff00:60b) on vif eth10 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff05::1) on vif eth10 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::202) on vif eth20 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff77:5836) on vif eth20 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::202) on vif eth21 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff80:82) on vif eth21 [ 2009/01/18 14:29:08 TRACE xorp_mld MLD6IGMP ] Notify routing delete membership for (::, ff02::1:ff9b:3003) on vif eth21 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface stopped: Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] CLI stopped [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] CLI stopped [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth00 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth10 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth20 [ 2009/01/18 14:29:08 INFO xorp_mld MLD6IGMP ] Interface deleted: eth21 [ 2009/01/18 14:29:08 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: mld [ 2009/01/18 14:29:09 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "MLD" does not exist or is not enabled. 
[ 2009/01/18 14:29:10 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: mfea6 [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] CLI stopped [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth00] pif_index: 9 vif_index: 0 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::/64 broadcast: :: peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth10] pif_index: 3 vif_index: 1 addr: 2001:888:133a:110::1 subnet: 2001:888:133a:110::/64 broadcast: :: peer: :: addr: fe80::204:23ff:feaa:a983 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth20] pif_index: 5 vif_index: 2 addr: 2001:888:133a:120::1 subnet: 2001:888:133a:120::/64 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa9 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[eth21] pif_index: 4 vif_index: 3 addr: 2001:888:133a::1:80:81 subnet: 2001:888:133a::1:80:0/112 broadcast: :: peer: :: addr: fe80::204:23ff:febb:5aa8 subnet: fe80::/64 broadcast: :: peer: :: Flags: MULTICAST BROADCAST UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:10 INFO xorp_fea MFEA ] Interface stopped Vif[register_vif] pif_index: 9 vif_index: 4 addr: 2001:888:133a:100::1 subnet: 2001:888:133a:100::1/128 broadcast: 2001:888:133a:100::1 peer: :: addr: fe80::217:31ff:febb:3b14 subnet: fe80::217:31ff:febb:3b14/128 broadcast: fe80::217:31ff:febb:3b14 peer: :: Flags: PIM_REGISTER UNDERLYING_VIF_UP MTU: 1500 DOWN IPv6 ENABLED [ 2009/01/18 14:29:11 INFO xorp_rtrmgr:18971 RTRMGR +176 module_manager.cc terminate ] Terminating module: mfea6 [ 2009/01/18 14:29:14 INFO xorp_rtrmgr:18971 RTRMGR +176 module_manager.cc terminate ] Terminating module: fea [ 2009/01/18 14:29:17 INFO xorp_rtrmgr:18971 RTRMGR +176 module_manager.cc terminate ] Terminating module: firewall [ 2009/01/18 14:29:17 INFO xorp_rtrmgr:18971 RTRMGR +1024 task.cc shutdown ] Shutting down module: interfaces [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] CLI stopped [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth00 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth10 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth20 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: eth21 [ 2009/01/18 14:29:17 INFO xorp_fea MFEA ] Interface deleted: register_vif [ 2009/01/18 14:29:17 INFO xorp_rtrmgr:18971 RTRMGR +280 module_manager.cc module_exited ] Module normal exit: interfaces [ 2009/01/18 14:29:18 WARNING xorp_rtrmgr:18971 XrlFinderTarget +407 ../xrl/targets/finder_base.cc handle_finder_0_2_resolve_xrl ] Handling method for finder/0.2/resolve_xrl failed: XrlCmdError 102 Command failed Target "fea" does not exist or is not enabled. [ 2009/01/18 14:29:19 INFO xorp_rtrmgr:18971 RTRMGR +2233 task.cc run_task ] No more tasks to run -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090118/adfc9d48/attachment-0001.bin From info at cmd.nu Mon Jan 26 03:25:34 2009 From: info at cmd.nu (Christian Svensson) Date: Mon, 26 Jan 2009 12:25:34 +0100 Subject: [Xorp-users] BGP module dies and problems with import policies In-Reply-To: References: Message-ID: Hello. Sorry for an email filled with formatting, I didn't find any good solutions to make things clear. Kernel: Ubuntu 8.04 server (2.6.24-23-server) Xorp: We have tested 1.4, 1.5 and 1.6 for both issues described in this email Scenario: http://www.mxd.nu/info/lan-man-route-net-0901.jpg i.e 2 BGP peers and OSPF to internal network. Problem 1: Around 2009-01-15 18:50 the communication out from our BGP router suddenly stopped. Attaching logs. Logs: http://www.mxd.nu/router.log http://www.mxd.nu/router.err.log Mirror: https://denzel.cmd.nu/~bluecommand/xorp/2009-01-15 Recently this apparently happened again: [ 2009/01/25 06:15:26 INFO xorp_bgp BGP ] Sending: Notification Packet: Hold Timer Expired(4) [ 2009/01/25 06:18:19 ERROR xorp_bgp:4924 XRL +635 xrl_pf_stcp.cc die ] XrlPFSTCPSender died: Keepalive timeout [ 2009/01/25 06:18:19 ERROR xorp_fea:4920 XRL +635 xrl_pf_stcp.cc die ] XrlPFSTCPSender died: Keepalive timeout [ 2009/01/25 06:18:22 ERROR xorp_rib:4921 XRL +635 xrl_pf_stcp.cc die ] XrlPFSTCPSender died: Keepalive timeout [ 2009/01/25 06:18:23 ERROR xorp_policy:4922 XRL +635 xrl_pf_stcp.cc die ] XrlPFSTCPSender died: Keepalive timeout [ 2009/01/25 06:18:26 ERROR xorp_bgp:4924 XRL +635 xrl_pf_stcp.cc die ] XrlPFSTCPSender died: Keepalive timeout [ 2009/01/25 06:18:26 WARNING xorp_bgp:4924 LIBXORP +468 timer.cc expire_one ] Timer Expiry *much* later than scheduled: behind by 17.104458 seconds [ 2009/01/25 06:18:26 WARNING xorp_bgp:4924 LIBXORP +468 timer.cc expire_one ] Timer Expiry *much* later than scheduled: behind by 17.104535 seconds [ 2009/01/25 06:18:26 WARNING xorp_bgp:4924 LIBXORP +468 timer.cc expire_one ] Timer Expiry *much* later than scheduled: behind by 17.104547 seconds [ 2009/01/25 06:18:26 INFO xorp_bgp XRL ] Sender died (protocol = "stcp", address = "127.0.0.1:40354") [ 2009/01/25 06:21:26 ERROR xorp_bgp:4924 XRL +338 xrl_pf_stcp.cc die ] STCPRequestHandler died: life timer expired Problem 2: We are also trying to apply a special localpref value on some routes to balance our peering: *set protocols bgp peer "212.112.175.81" import "PREFERED_BGP_STHLM" *We are given the response: *"210 Transport failed*" and xorp_policy dies. Log and configuration: http://www.mxd.nu/policy-bgp-xorp.txt ip ro shows all routes after XORP has died (no processes left). It will not route any traffic on restart even though ip ro | wc -l return 270 000 (i.e. all the routes are in there). Stopping xorp, flushing addresses and routes on the router interfaces followed by a start of xorp "solves" this. -- Christian Svensson Command Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090126/f366b4e1/attachment.html From erik at slagter.name Mon Jan 26 05:09:25 2009 From: erik at slagter.name (Erik Slagter) Date: Mon, 26 Jan 2009 14:09:25 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! 
In-Reply-To: <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU>
References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU>
Message-ID: <497DB605.6050200@slagter.name>

[Background information: I want to multicast a stream using IPv6 that is originating (locally) from a linux server that has a number of interfaces connected. Each interface should only send out the packets when it has one or more clients that have joined the group; no PIM is in use, only MLD. The kernel's native multicast forwarding functionality should be used, no packet forwarding in user space.]

Finally I came to the conclusion that both xorp and mrd6 are not designed for what I need: xorp only functions with multicasting when pim is enabled (and set up properly) and the source should be outside the host xorp is running on. Mrd6 also needs an external multicast source (and maybe (configured) pim as well, I am not sure).

So I hacked myself a little (well..) program that does exactly this and not more. Due to the documentation about ipv6 multicast routing (api) being very sparse (not to say next to non-existent) I had to trial-and-error the thing together. If anyone is interested (either for experimenting or for giving feedback for improvement (please!)) please yell.

This is what it does: it listens on all system network interfaces for MLD messages (currently MLDv2 REPORT only, only INCLUDE and EXCLUDE, like current linux kernels send them; the RFC is incomprehensible). When a join request arrives, the requested multicast group is added to the list of groups of the interface the request came from, and the multicast route for this group is added again with the changed interface list. Likewise for a leave request. Sounds simple, no? Well it's not :-/

Thanks to Todd Hayton BTW, I used his example program as a starting point (although my goal is different).

Although I have it working now, I still do have some questions for the experts.

- Every now and then I have a line with a multicast route with an interface of "65536" and silly data in /proc/net/ip6_mr_cache. This looks like a side-effect of packets being queued when no applicable mc route is found?

- As far as I know, it should be ok for a multicast group to be forwarded regardless of the source address. If I set mc.mf6cc_origin to all (binary) zeroes (bzero) and then call MRT6_ADD_MFC, it simply doesn't work; it only works if I request the address from the incoming interface from the routing table, and fill the entry with that address and the corresponding interface index, which I think is a dirty hack :-/ Isn't there a way to specify a wildcard origin address and interface?

- When I start the program that feeds the multicast stream, before my own program is started, the packets are routed to a (for me) random interface. This is not always the same interface. It would be logical to me if the packets were routed to the lo interface or discarded instead, when no "MRT6_INIT" program is running...

- After I start my program, and I have installed a multicast route with one or more interfaces, the packets are STILL (also) forwarded to this interface (see above) regardless of whether the interface is mentioned in the mc route.
- To resolve the above issue, I add a dummy ethernet interface (dummy0), which I try to set as multicast source for locally generated mc traffic using IPV6_MULTICAST_IF; sometimes this works, sometimes it doesn't. If it doesn't, I need to stop both programs (no MRT6_INIT apps, so no more mcast route cache...) and type "ip -6 route del ff05::4242" until ip returns an error. This route doesn't show up in "ip -6 route show", btw... If I now start both programs, the mc traffic seems to originate from dummy0 and is not physically sent out to unsubscribed interfaces anymore, although tshark reveals the packets are actually still "sent" on dummy0. I am not sure whether this is a result of IPV6_MULTICAST_IF or whether I am just "lucky".

- What is the intended use of IPV6_MULTICAST_IF actually?

- Although I only request MLDv2 packets (143), I still get all sorts of messages on the raw socket (icmp type 0, expected, probably "upcall" packets) but also all sorts of other types?!

- The semantics of MRT6_ADD_MFC and MRT6_DEL_MFC are still more or less a mystery to me. Adding and deleting one specific multicast route seems straightforward, but how is one supposed to add an interface to an existing route or delete one interface? Currently I call MRT6_ADD_MFC simply with the updated set of interfaces and this seems to work, but imho this is not "adding" a route like the name suggests... Likewise, is it possible to call MRT6_DEL_MFC with a minimal set of information (like: delete all routes to mc group xx, regardless of interface, origin address, etc.)? This is to clean up the cache every now and then...

Thank you for your time and effort!
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 3328 bytes
Desc: S/MIME Cryptographic Signature
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090126/e2487760/attachment.bin

From erik at slagter.name Mon Jan 26 12:13:38 2009
From: erik at slagter.name (Erik Slagter)
Date: Mon, 26 Jan 2009 21:13:38 +0100
Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it!
In-Reply-To: <497DB605.6050200@slagter.name>
References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <497DB605.6050200@slagter.name>
Message-ID: <497E1972.3060801@slagter.name>

Erik Slagter wrote:
> - What is the intended use of IPV6_MULTICAST_IF actually?

(to answer this myself) Looking at the kernel source, it looks like this setsockopt is meant to be used by the multicasting application to select the default outgoing interface, when no single route is applicable and no multicast-router-app is running... I assumed it was meant to be used by the multicast-router-app...
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 3328 bytes
Desc: S/MIME Cryptographic Signature
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090126/b28d2410/attachment.bin

From pavlin at ICSI.Berkeley.EDU Tue Jan 27 04:35:40 2009
From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov)
Date: Tue, 27 Jan 2009 04:35:40 -0800
Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it!
In-Reply-To: <49732F2C.3080801@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name> <49732F2C.3080801@slagter.name> Message-ID: <200901271235.n0RCZe8P008880@fruitcake.ICSI.Berkeley.EDU> Erik Slagter wrote: > It seems to work sometimes, but not systematic. Also lots of errors and > warnings on stderr. Can you please have a look? > > boot.config: ========================================================== Config seems fine. > ip -6 addr: =========================================================== > output of xorp: (when xorp has been started, I first start the server > application on the server and then the client application on the client on > eth10, then I quit all of them. Multicast address is ff05::1 > > BTW If I can start xorp in a mode that gives less verbose but still useful > information for you, please let me know. > > ======================================================= > > artemis root:~erik/src/xorp/xorp-1.6/rtrmgr $ ./xorp_rtrmgr -b > ~erik/config.boot > [ 2009/01/18 14:27:35 INFO xorp_rtrmgr:18971 RTRMGR +249 master_conf_tree.cc > execute ] Changed modules: interfaces, firewall, fea, mfea6, mld, rib, > fib2mrib, pimsm6 > [ 2009/01/18 14:27:35 INFO xorp_rtrmgr:18971 RTRMGR +101 module_manager.cc > execute ] Executing module: interfaces (fea/xorp_fea) > [ 2009/01/18 14:27:36 INFO xorp_fea MFEA ] MFEA enabled > [ 2009/01/18 14:27:56 INFO xorp_rtrmgr:18971 RTRMGR +2233 task.cc run_task ] > No more tasks to run > [ 2009/01/18 14:27:56 TRACE xorp_mld MLD6IGMP ] RX MLD_LISTENER_REPORT from > fe80::20c:76ff:fe9b:3003 to ff02::1:ff80:82 on vif eth21 > [ 2009/01/18 14:27:56 TRACE xorp_mld MLD6IGMP ] Notify routing add membership > for (::, ff02::1:ff80:82) on vif eth21 > [ 2009/01/18 14:27:56 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 > vif_index = 0 src = :: dst = :: > [ 2009/01/18 14:27:56 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: > vif_index = 0 src = :: dst = :: message_type = 0 > [ 2009/01/18 14:27:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc > proto_socket_read ] proto_socket_read() failed: invalid interface pif_index > from fe80::217:31ff:febb:3b14 to ff02::d: 0 > [ 2009/01/18 14:27:58 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc > proto_socket_read ] proto_socket_read() failed: invalid interface pif_index > from fe80::204:23ff:febb:5aa9 to ff02::d: 0 > [ 2009/01/18 14:27:58 TRACE xorp_fea MFEA ] RX kernel signal: message_type = 0 > vif_index = 0 src = :: dst = :: > [ 2009/01/18 14:27:58 WARNING xorp_pimsm6 PIM ] RX unknown signal from MFEA_6: > vif_index = 0 src = :: dst = :: message_type = 0 > [ 2009/01/18 14:27:58 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from > fe80::200:5aff:fe00:60b to fe80::204:23ff:feaa:a983 on vif eth10 > [ 2009/01/18 14:28:03 TRACE xorp_mld MLD6IGMP ] RX MLD_type_unknown from > fe80::200:5aff:fe00:60b to fe80::204:23ff:feaa:a983 on vif eth10 > [ 2009/01/18 14:28:09 ERROR xorp_fea:18972 FEA +1716 io_ip_socket.cc > proto_socket_read ] proto_socket_read() failed: invalid interface pif_index > from fe80::204:23ff:febb:5aa9 to ff02::1: 0 It seems the kernel upcalls contain some bogus information. 
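(For reference, a rough sketch - not from the thread - of the socket setup being debugged here, assuming Linux's <linux/mroute6.h> and <netinet/icmp6.h>; open_mrt6_socket() is an illustrative name, the hand-defined MLDV2_LISTENER_REPORT constant is an assumption because not every libc header provides it, and on older systems the glibc and kernel headers may need some care to coexist. The point it tries to show is that the MRT6_INIT socket doubles as the upcall channel, which is one reading of the "type 0" packets reported in this thread.)

/* Rough sketch of an MLD-listening multicast-router socket on Linux.
 * Error handling and the per-interface MRT6_ADD_MIF calls are omitted. */
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/icmp6.h>
#include <linux/mroute6.h>

#ifndef MLDV2_LISTENER_REPORT
#define MLDV2_LISTENER_REPORT 143   /* not defined by every libc header */
#endif

int open_mrt6_socket(void)
{
    int s = socket(AF_INET6, SOCK_RAW, IPPROTO_ICMPV6);
    if (s < 0)
        return -1;

    /* Register as the multicast routing daemon (needs root/CAP_NET_ADMIN).
     * From now on MRT6MSG_* upcalls arrive on this same socket as
     * struct mrt6msg, i.e. not as real ICMPv6 messages. */
    int on = 1;
    if (setsockopt(s, IPPROTO_IPV6, MRT6_INIT, &on, sizeof(on)) < 0) {
        close(s);
        return -1;
    }

    /* Only MLDv2 reports are wanted as actual ICMPv6 input... */
    struct icmp6_filter filt;
    ICMP6_FILTER_SETBLOCKALL(&filt);
    ICMP6_FILTER_SETPASS(MLDV2_LISTENER_REPORT, &filt);
    setsockopt(s, IPPROTO_ICMPV6, ICMP6_FILTER, &filt, sizeof(filt));

    /* ...but, as observed in this thread, the upcalls keep arriving despite
     * the filter, so the read loop still has to recognise them (struct
     * mrt6msg carries a must-be-zero im6_mbz byte where an ICMPv6 type
     * would otherwise be, which is presumably the "type 0" traffic). */
    return s;
}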
The following email suggests that other folks might also have seen issues with kernel upcalls:
http://www.linux-ipv6.org/ml/usagi-users/msg04077.html

Unfortunately I wasn't able to replicate the problem and investigate the upcall issues. In my testing I wasn't even getting the MLD_LISTENER_REPORT delivered to the FEA, and there were no upcalls, but I didn't investigate further. A potential source of the problem is the one described in the following email:
http://www.linux-ipv6.org/ml/usagi-users/msg04031.html

Pavlin

From erik at slagter.name Tue Jan 27 04:48:55 2009
From: erik at slagter.name (Erik Slagter)
Date: Tue, 27 Jan 2009 13:48:55 +0100
Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it!
In-Reply-To: <200901271235.n0RCZe8P008880@fruitcake.ICSI.Berkeley.EDU>
References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name> <49732F2C.3080801@slagter.name> <200901271235.n0RCZe8P008880@fruitcake.ICSI.Berkeley.EDU>
Message-ID: <497F02B7.3040802@slagter.name>

Pavlin Radoslavov wrote:
> It seems the kernel upcalls contain some bogus information.

Hmmm.... This may very well be the case: with my own little multicast-forwarding-using-kernel program I experience the same, i.e. I create a raw socket, call mrt6_init, join the mld multicast group, set the icmp filter to "143", and then I still get all sorts of messages from the socket; some of them have type "0", but the icmp type can be anything. Indeed looks like a kernel bug to me.

> The following email suggests that other folks might also have seen issues
> with kernel upcalls:
>
> http://www.linux-ipv6.org/ml/usagi-users/msg04077.html

This involves PIM and router adverts. I try to avoid PIM as much as possible ;-) Waaaaay too complex.

> Unfortunately I wasn't able to replicate the problem and investigate
> the upcall issues.

Never mind. Maybe you're using a different kernel version?

> In my testing I wasn't even getting the MLD_LISTENER_REPORT
> delivered to the FEA, and there were no upcalls, but I didn't
> investigate further. A potential source of the problem is the one
> described in the following email:
>
> http://www.linux-ipv6.org/ml/usagi-users/msg04031.html

Yeah, I know this one, that's where I got the base for my own little app, thanks to Todd :-) I couldn't find any text on where to start (api...) if you're going to do anything with ip6 multicast routing on linux :-(
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 3328 bytes
Desc: S/MIME Cryptographic Signature
Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090127/85f8648a/attachment.bin

From erik at slagter.name Tue Jan 27 05:21:56 2009
From: erik at slagter.name (Erik Slagter)
Date: Tue, 27 Jan 2009 14:21:56 +0100
Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it!
In-Reply-To: <497DB605.6050200@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <497DB605.6050200@slagter.name> Message-ID: <497F0A74.2020200@slagter.name> Update ;-) Erik Slagter wrote: > - As far as I know, it should be ok for a multicast group to be > forwarded regardless of the source address. If I set mc.mf6cc_origin to > all (binary) zeroes (bzero) and then call MRT6_ADD_MFC, it simply > doesn't work, it only works if I request the address from the incoming > interface from the routing table, and fill the entry with that address > and the corresponding interface index, which I think is a dirty hack :-/ > Isn't there a way to specify a wildcard origin address and interface? I looked this up in the kernel source and indeed there is no way to specify a wildcard address nor a wildcard interface :-( > - When I start the program that feeds the multicast stream, before my > own program is started, the packets are routed to a (for me) random > interface. This is not always the same interface. It would be logical to > me if the packets are routed to the lo interface or are discarded > instead, when no "MRT6_INIT" program is running... Also I experienced that the "active" non-multicast route tends to change at random points in time :-( That wouldn't be a problem if it weren't that packets are also forwarded to this route, regardless multicast routes. I'll try to add an explicit route for every requested multicast group to my dummy0 interface. Maybe that helps. > - Although I only request MLDv2 packets (143) I still get all sorts of > messages on the raw socket (icmp type 0, expected, probably "upcall" > packets) but also all sorts of other types?! This seems to be a kernel bug (see other message). -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090127/c6c4e4b9/attachment-0001.bin From pavlin at ICSI.Berkeley.EDU Tue Jan 27 10:26:50 2009 From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov) Date: Tue, 27 Jan 2009 10:26:50 -0800 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <497DB605.6050200@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <497DB605.6050200@slagter.name> Message-ID: <200901271826.n0RIQoK2018390@fruitcake.ICSI.Berkeley.EDU> [Note: I am addressing only some of the high level questions posted below, because it seems you have found workaround for the problem] Erik Slagter wrote: > [Background information: I want to multicast a stream using IPv6 that is > originating (locally) from a linux server that has a number of interfaces > connected, each interface should only send out the packets when it has one or > more clients that have joined the group, no PIM is in use, only MLD. The > kernel native multicast forwarding functionality should be used, no packet > forwarding in user space.] 
> > Finally I came to the conclusion that both xorp and mrd6 are not designed for > what I need, xorp only functions with multicasting when pim is enabled (and > set up properly) and the source should be outside the host xorp is running > on. Mrd6 also needs an external multicast source (and maybe (configured) pim > as well, I am not shure). Theoretically, it shouldn't matter whether the source is on the host running XORP/MRD6 or external. Though, I have seen reports in the past for some odd behavior (for IPv4) if one of the participants (sender/receiver?) was on the same host. > So hacked myself a little (well..) program that does exactly this and not > more. Due to the documentation about ipv6 multicast routing (api) being very > sparse (not to say next to non-existent) I had to try and error the thing > together. If anyone is interested (either for experimenting or for giving > feedback for improvement (please!)) please yell. > > This is what is does: it listens on all system network interfaces for MLD > messages (currently MLDv2 REPORT only, only INCLUDE and EXCLUDE, like current > linux kernels send them, the RFC is incomprehensable). When a join request > arrives, the requested multicast group is added to the list of groups of the > interface the request came from and the multicast route for this group is > again added with the changed interface list. Likewise for a leave > request. Sounds simple, not? Well it's not :-/ > > Thanks to Todd Hayton BTW, I used his example program as a starting point > (although my goal is different). > > Although I have it working now, I still do have some questions for the experts. > > - Every now and then I have a line with multicast route with an interface of > "65536" and silly data in /proc/net/ip6_mr_cache. This look like a side-effect > of packets being queued when no applicable mc route is found? I am not familiar with the implementation details of ip6_mr_cache, but if there is no matching multicast route, it is up to the userland program to install such route. The new route should have the appropriate incoming and outgoing interfaces, or no outgoing interfaces if no receivers. In other words, it is valid to have a MFC (Multicast Forwarding Cache) entry with no outgoing interfaces, for the purpose of stopping upcalls to userland. > - As far as I know, it should be ok for a multicast group to be forwarded > regardless of the source address. If I set mc.mf6cc_origin to all (binary) > zeroes (bzero) and then call MRT6_ADD_MFC, it simply doesn't work, it only > works if I request the address from the incoming interface from the routing > table, and fill the entry with that address and the corresponding interface > index, which I think is a dirty hack :-/ > Isn't there a way to specify a wildcard origin address and interface? No unfortunately. The MFC entries granularity in the kernel (BSD/Linux/Solaris/etc) is (S,G). It is possible to modify the kernel to support (*,G) (long time ago I did a prototype implementation for BSD), but so far there hasn't been strong need for this feature to become part of the kernel. > - When I start the program that feeds the multicast stream, before my own > program is started, the packets are routed to a (for me) random > interface. This is not always the same interface. It would be logical to me if > the packets are routed to the lo interface or are discarded instead, when no > "MRT6_INIT" program is running... 
> > - After I start my program, and I have installed a multicast route with one or > more interfaces, the packets are STILL (also) forwarded to this interface (see > above) regardless whether the interface is mentioned in the mc route. > > - To resolve the above issue, I add a dummy ethernet interface (dummy0), which > I try to set as multicast source for locally generated mc traffic using > IPV6_MULTICAST_IF, sometimes this works, sometimes it doesn't. If it doesn't, > I need to stop both programs (no MRT6_INIT apps, so no more mcast route > cache...) and type "ip -6 route del ff05::4242" until ip returns an > error. This route doesn't show up in "ip -6 route show", btw... If I now start > both programs, the mc traffic seems to originate from dummy0 and is not > physically sent out to unsubscribed interfaces anymore, although tshark > reveals the packets are actually still "sent" on dummy0. I am not shure > whether is this a result of IPV6_MULTICAST_IF or that I am just "lucky". > > - What is the intended use of IPV6_MULTICAST_IF actually? > > - Although I only request MLDv2 packets (143) I still get all sorts of > messages on the raw socket (icmp type 0, expected, probably "upcall" packets) > but also all sorts of other types?! > > - The semantics of MRT6_ADD_MFC and MRT6_DEL_MFC are still more or less a > mystery to me. Adding and deleting one specific multicast route seems > straightforward, but how is one supposed to add an interface to an existing > route or delete one interface? Currently I call MRT6_ADD_MFC simply with the > updated set of interfaces and this seems to work, but imho this is not > "adding" a route like the name suggests... Likewise is it possible to call > MRT6_DEL_MFC with a minimal set of information (like, delete all routes to mc > group xx, regardless interface, origin address, etc.)? This to clean up the > cache every now and then... Yes, the granularity for ADD_MFC/DEL_MFC is per MFC entry. Every time you need to update the MFC entry you must call MRT6_ADD_MFC with the complete set of information for that entry (iif, updated outgoing interface set, etc). Once the entry is not needed anymore, you call MRT6_DEL_MFC to remove it from the kernel. Regards, Pavlin > Thank you for your time and efford! > _______________________________________________ > Xorp-users mailing list > Xorp-users at xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From erik at slagter.name Tue Jan 27 10:43:33 2009 From: erik at slagter.name (Erik Slagter) Date: Tue, 27 Jan 2009 19:43:33 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <200901271826.n0RIQoK2018390@fruitcake.ICSI.Berkeley.EDU> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <497DB605.6050200@slagter.name> <200901271826.n0RIQoK2018390@fruitcake.ICSI.Berkeley.EDU> Message-ID: <497F55D5.2050005@slagter.name> Pavlin Radoslavov wrote: > Theoretically, it shouldn't matter whether the source is on the host > running XORP/MRD6 or external. Though, I have seen reports in the > past for some odd behavior (for IPv4) if one of the participants > (sender/receiver?) was on the same host. This might the problems I am seeing. BTW how does the kernel decide what interface to "use" as the "input interface" for multicasting purposes (like MFC_ADD)? 
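(For reference, a sketch - again not from the thread - of the MRT6_ADD_MFC/MRT6_DEL_MFC usage Pavlin describes above, assuming Linux's <linux/mroute6.h>; set_mfc() and del_mfc() are made-up helper names. The "input interface" just asked about is what ends up in mf6cc_parent, and each change simply re-installs the complete (S,G) entry with the full outgoing set.)

/* Sketch of per-(S,G) MFC maintenance: the same MRT6_ADD_MFC call installs
 * a new entry and updates an existing one; the whole entry is given each time. */
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/mroute6.h>

int set_mfc(int mrt_sock, const struct in6_addr *src, const struct in6_addr *grp,
            mifi_t iif, const mifi_t *oifs, int n_oifs)
{
    struct mf6cctl mc;

    memset(&mc, 0, sizeof(mc));                /* also clears mf6cc_ifset */
    mc.mf6cc_origin.sin6_family   = AF_INET6;
    mc.mf6cc_origin.sin6_addr     = *src;      /* no wildcard source possible */
    mc.mf6cc_mcastgrp.sin6_family = AF_INET6;
    mc.mf6cc_mcastgrp.sin6_addr   = *grp;
    mc.mf6cc_parent               = iif;       /* expected incoming mif */

    for (int i = 0; i < n_oifs; i++)
        IF_SET(oifs[i], &mc.mf6cc_ifset);      /* n_oifs == 0: black-hole entry */

    /* Same call whether the entry is brand new or being updated. */
    return setsockopt(mrt_sock, IPPROTO_IPV6, MRT6_ADD_MFC, &mc, sizeof(mc));
}

int del_mfc(int mrt_sock, const struct in6_addr *src, const struct in6_addr *grp)
{
    struct mf6cctl mc;

    memset(&mc, 0, sizeof(mc));
    mc.mf6cc_origin.sin6_family   = AF_INET6;
    mc.mf6cc_origin.sin6_addr     = *src;
    mc.mf6cc_mcastgrp.sin6_family = AF_INET6;
    mc.mf6cc_mcastgrp.sin6_addr   = *grp;

    /* Deletion is per (S,G) as well; "delete everything for group G" has to
     * be done by iterating over the known sources. */
    return setsockopt(mrt_sock, IPPROTO_IPV6, MRT6_DEL_MFC, &mc, sizeof(mc));
}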
It looks like the kernel first takes the row of routes to ff00::/8 via each system interface into consideration, and then, if the streaming source application has made a call to IPV6_MULTICAST_IF, uses that interface, and if it didn't, it takes one of the applicable interfaces at random. I am not completely happy with that behaviour, but I saw you have an interesting alternative approach for this. > I am not familiar with the implementation details of ip6_mr_cache, > but if there is no matching multicast route, it is up to the > userland program to install such route. > The new route should have the appropriate incoming and outgoing > interfaces, or no outgoing interfaces if no receivers. > In other words, it is valid to have a MFC (Multicast Forwarding > Cache) entry with no outgoing interfaces, for the purpose of > stopping upcalls to userland. That would indeed be interesting, but I see a problem: My app cannot know in advance which multicast groups it is going to forward. It determines the groups solely from received MLD requests. OTOH I could try to intercept the kernel upcalls and install such a zero route when no receivers are known for a certain group. Interesting thought, I will try this for shure. > No unfortunately. The MFC entries granularity in the kernel > (BSD/Linux/Solaris/etc) is (S,G). It is possible to modify the > kernel to support (*,G) (long time ago I did a prototype > implementation for BSD), but so far there hasn't been strong need > for this feature to become part of the kernel. :-( Tough luck for me then. > Yes, the granularity for ADD_MFC/DEL_MFC is per MFC entry. Every > time you need to update the MFC entry you must call MRT6_ADD_MFC > with the complete set of information for that entry (iif, updated > outgoing interface set, etc). Once the entry is not needed anymore, > you call MRT6_DEL_MFC to remove it from the kernel. Okay, that's clear. Thanks for confirming this and also the other info. For the moment I now have installed a static "multicast" route ("ip -6 add multicast ff05::/16 dev dummy0) to my dummy0 device, hopefully this will keep the traffic from going out to unwanted interfaces for the moment. Whether this works or doesn't I will check out the upcall mechanism for applying null routes. My first priority now is sending MLD query requests and also I figured that actually more than one listener can be present on a link, so I should keep track of every requestor seperately. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090127/93f4be0b/attachment.bin From pavlin at ICSI.Berkeley.EDU Tue Jan 27 10:53:33 2009 From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov) Date: Tue, 27 Jan 2009 10:53:33 -0800 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! 
In-Reply-To: <497F02B7.3040802@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name> <49732F2C.3080801@slagter.name> <200901271235.n0RCZe8P008880@fruitcake.ICSI.Berkeley.EDU> <497F02B7.3040802@slagter.name> Message-ID: <200901271853.n0RIrX14022538@fruitcake.ICSI.Berkeley.EDU> Erik Slagter wrote: > Pavlin Radoslavov wrote: > > > It seems the kernel upcalls contain some bogus information. > > Hmmm.... > > This may very well be the case, with my own little > multicast-forwarding-using-kernel program, I experience the same, i.e. I > create a raw socket, call mrt6_init, join mld multicast group, set icmp filter > to "143" and then I still get all sorts of messages from the socket, some of > them hem type "0" the icmp type can be anything. Indeed looks like a kernel > bug to me. > > > The following email suggests that other folks might also had seen issues > > with kernel upcalls: > > > http://www.linux-ipv6.org/ml/usagi-users/msg04077.html > > This involves PIM and router adverts. I try to avoid PIM as much as possible > ;-) Waaaaay too complex. If you are running multicast in tightly controlled environment with a single sender in the center of a star topology directly connecting the receivers, then you can get away with your own program. Otherwise, if there are remote senders and receivers, then things can get easily out of control (there is a reason PIM is complex :) > > Unfortunately I wasn't able to replicate the problem and investigate > > the upcall issues. > > Never mind. Maybe you're using a different kernel version? I've tried with 2.6.27.2 and 2.6.28.1 but I got same result. I haven't tried yet with the exact version you are using (2.6.27.10). > > In my testing I wasn't even getting the MLD_LISTENER_REPORT > > delivered to the FEA, and there were no upcalls, but I didn't > > investigate further. A potential source of the problem is the one > > described in the following email: > > http://www.linux-ipv6.org/ml/usagi-users/msg04031.html > > Yeah, I know this one, that's where I got the base for my own little app, > thanks to Todd :-) I couldn't find any text where to start (api...) if you're > going to do anything with ip6 multicast routing on linux :-( Were you able to get kernel upcalls and MLD_LISTENER_REPORT messages by using the unmodified original program from Todd (http://www.linux-ipv6.org/ml/usagi-users/msg04077.html) ? I am asking this question because I need a calibration point. Re. API description online, you could try the FreeBSD multicast(4) manual page available from the following URL (enter keyword "multicast"): http://www.FreeBSD.org/cgi/man.cgi Most of the info there should apply for Linux as well (at least for IPv4), but Linux doesn't have the "Advanced Multicast API" support. Regards, Pavlin From pavlin at ICSI.Berkeley.EDU Tue Jan 27 12:28:05 2009 From: pavlin at ICSI.Berkeley.EDU (Pavlin Radoslavov) Date: Tue, 27 Jan 2009 12:28:05 -0800 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! 
In-Reply-To: <497F55D5.2050005@slagter.name> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <497DB605.6050200@slagter.name> <200901271826.n0RIQoK2018390@fruitcake.ICSI.Berkeley.EDU> <497F55D5.2050005@slagter.name> Message-ID: <200901272028.n0RKS5X2006820@fruitcake.ICSI.Berkeley.EDU> Erik Slagter wrote: > Pavlin Radoslavov wrote: > > > Theoretically, it shouldn't matter whether the source is on the host > > running XORP/MRD6 or external. Though, I have seen reports in the > > past for some odd behavior (for IPv4) if one of the participants > > (sender/receiver?) was on the same host. > > This might the problems I am seeing. > > BTW how does the kernel decide what interface to "use" as the "input > interface" for multicasting purposes (like MFC_ADD)? It looks like the kernel > first takes the row of routes to ff00::/8 via each system interface into > consideration, and then, if the streaming source application has made a call > to IPV6_MULTICAST_IF, uses that interface, and if it didn't, it takes one of > the applicable interfaces at random. I am not completely happy with that > behaviour, but I saw you have an interesting alternative approach for this. I believe the above description roughtly describes the algorithm to choose the outgoing the interface for the multicast packets originated by the sender (i.e., even in the case when there is no multicast routing running on the host). Presumably, the chosen interface is also used as the incoming interface for multicast routing purpose within the IP stack. The selection algorithm shouldn't matter to you if you are receiving and processing the upcall messages like MRT6MSG_NOCACHE/MRT6MSG_WRONGMIF/MRT6MSG_WHOLEPKT. The NOCACHE upcall for example should include the (S,G) and the iif for the multicast packet, so all you need to do is install back (S,G) with that iif and the oifs. > > I am not familiar with the implementation details of ip6_mr_cache, > > but if there is no matching multicast route, it is up to the > > userland program to install such route. > > The new route should have the appropriate incoming and outgoing > > interfaces, or no outgoing interfaces if no receivers. > > In other words, it is valid to have a MFC (Multicast Forwarding > > Cache) entry with no outgoing interfaces, for the purpose of > > stopping upcalls to userland. > > That would indeed be interesting, but I see a problem: My app cannot know in > advance which multicast groups it is going to forward. It determines the > groups solely from received MLD requests. OTOH I could try to intercept the > kernel upcalls and install such a zero route when no receivers are known for a > certain group. Interesting thought, I will try this for shure. The MLD requests gives you the oifs set. You also need to consider the NOCACHE calls to catch the new (S,G) flows (and the iif, and WRONGMIF upcalls to catch the changes in the iif for those flows that have already (S,G) in the kernel. Even if you can somehow get all the mcast traffic appear with iif=dummy0, you must know the source address as well if you want to install the MFC entries in advance. Otherwise, you need to catch the NOCACHE upcalls and process them. Regards, Pavlin > > No unfortunately. The MFC entries granularity in the kernel > > (BSD/Linux/Solaris/etc) is (S,G). 
It is possible to modify the > > kernel to support (*,G) (long time ago I did a prototype > > implementation for BSD), but so far there hasn't been strong need > > for this feature to become part of the kernel. > > :-( Tough luck for me then. > > > Yes, the granularity for ADD_MFC/DEL_MFC is per MFC entry. Every > > time you need to update the MFC entry you must call MRT6_ADD_MFC > > with the complete set of information for that entry (iif, updated > > outgoing interface set, etc). Once the entry is not needed anymore, > > you call MRT6_DEL_MFC to remove it from the kernel. > > Okay, that's clear. Thanks for confirming this and also the other info. > > For the moment I now have installed a static "multicast" route ("ip -6 > add multicast ff05::/16 dev dummy0) to my dummy0 device, hopefully this will > keep the traffic from going out to unwanted interfaces for the moment. Whether > this works or doesn't I will check out the upcall mechanism for applying null > routes. > > My first priority now is sending MLD query requests and also I figured that > actually more than one listener can be present on a link, so I should keep > track of every requestor seperately. > _______________________________________________ > Xorp-users mailing list > Xorp-users at xorp.org > http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users From erik at slagter.name Tue Jan 27 12:42:44 2009 From: erik at slagter.name (Erik Slagter) Date: Tue, 27 Jan 2009 21:42:44 +0100 Subject: [Xorp-users] Very simple multicast setup, yet can't find any text on how to do it! In-Reply-To: <200901271853.n0RIrX14022538@fruitcake.ICSI.Berkeley.EDU> References: <4971D73E.3030405@slagter.name> <497224C8.7070504@slagter.name> <200901171906.n0HJ6kHI007438@fruitcake.ICSI.Berkeley.EDU> <49723C16.5030409@slagter.name> <200901172230.n0HMUfbd021164@fruitcake.ICSI.Berkeley.EDU> <4973199E.60708@slagter.name> <49732F2C.3080801@slagter.name> <200901271235.n0RCZe8P008880@fruitcake.ICSI.Berkeley.EDU> <497F02B7.3040802@slagter.name> <200901271853.n0RIrX14022538@fruitcake.ICSI.Berkeley.EDU> Message-ID: <497F71C4.2070804@slagter.name> Pavlin Radoslavov wrote: >> This involves PIM and router adverts. I try to avoid PIM as much as possible >> ;-) Waaaaay too complex. > > If you are running multicast in tightly controlled environment with > a single sender in the center of a star topology directly connecting > the receivers, then you can get away with your own program. > Otherwise, if there are remote senders and receivers, then things > can get easily out of control (there is a reason PIM is complex :) Yeah, after some thinking I came to the same conclusion. But fortunately my own home is such a tightly controlled controlled environment :-) > I've tried with 2.6.27.2 and 2.6.28.1 but I got same result. > I haven't tried yet with the exact version you are using > (2.6.27.10). Nah, I cannot believe it comes down to such a minor variation. > Were you able to get kernel upcalls and MLD_LISTENER_REPORT messages > by using the unmodified original program from Todd > (http://www.linux-ipv6.org/ml/usagi-users/msg04077.html) ? > I am asking this question because I need a calibration point. I will do so, but it won't be before friday. > Re. API description online, you could try the FreeBSD multicast(4) > manual page available from the following URL (enter keyword > "multicast"): I accidently found that page using google already :-) It was informative, but it doesn't supply some sort of starting point. 
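(For reference, one more sketch - not from the thread - of the upcall-driven handling suggested above: when a MRT6MSG_NOCACHE upcall is recognised on the MRT6_INIT socket, the (S,G) entry is installed on the spot, either with the group's current listener mifs or with an empty set as a black hole. handle_upcall() and lookup_oifs_for() are made-up names, set_mfc() is the hypothetical helper from the earlier sketch, and how the upcall is recognised in the raw read buffer is exactly the fiddly part being debugged in this thread.)

/* Sketch of reacting to kernel upcalls instead of pre-installing routes. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/mroute6.h>

/* Hypothetical helpers: set_mfc() as in the earlier sketch, lookup_oifs_for()
 * consults whatever per-interface MLD membership state the program keeps. */
extern int set_mfc(int mrt_sock, const struct in6_addr *src, const struct in6_addr *grp,
                   mifi_t iif, const mifi_t *oifs, int n_oifs);
extern int lookup_oifs_for(const struct in6_addr *grp, mifi_t *oifs, int max);

/* Called once a struct mrt6msg (im6_mbz == 0) has been recognised in the
 * data read from the MRT6_INIT socket. */
void handle_upcall(int mrt_sock, const struct mrt6msg *msg)
{
    switch (msg->im6_msgtype) {
    case MRT6MSG_NOCACHE: {
        /* New (S,G) flow: install the entry right away, using the mif the
         * packet arrived on as iif.  With no listeners the oif set stays
         * empty, which black-holes the flow and stops further upcalls. */
        mifi_t oifs[MAXMIFS];
        int n = lookup_oifs_for(&msg->im6_dst, oifs, MAXMIFS);
        set_mfc(mrt_sock, &msg->im6_src, &msg->im6_dst,
                (mifi_t)msg->im6_mif, oifs, n);
        break;
    }
    case MRT6MSG_WRONGMIF:
        /* Packet for a known (S,G) arrived on an unexpected mif; a full
         * router would re-evaluate the iif here (PIM uses this for asserts). */
        break;
    default:
        break;
    }
}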
> Most of the info there should apply for Linux as well (at least for > IPv4), but Linux doesn't have the "Advanced Multicast API" support. Really? Linux appears to have support for various "advanced "ipv4" "api" mixtures. But maybe indeed not advanced ipv6 multicasting. > The selection algorithm shouldn't matter to you if you are receiving > and processing the upcall messages like > MRT6MSG_NOCACHE/MRT6MSG_WRONGMIF/MRT6MSG_WHOLEPKT. > The NOCACHE upcall for example should include the (S,G) and the iif > for the multicast packet, so all you need to do is install back > (S,G) with that iif and the oifs. Yeah, indeed it looks like here lies the solution to my problem. If I get an upcall for a group that has no listeners -> route it to a black hole. > Even if you can somehow get all the mcast traffic appear with > iif=dummy0, you must know the source address as well if you want to > install the MFC entries in advance. Otherwise, you need to catch the > NOCACHE upcalls and process them. I resolved this using a dirty hack :/-. When a join request comes in and the multicast route needs to be updated or added, I ask the kernel for the route to the multicast address being processed. In the process this yields both address and "source" address being used. For the moment this works, although I could imagine this is not the correct way to do things and it will break someday. I was really planning to do this using the kernel netlink api, but this appears to be so complex that I gave up this approach and used a system("ip -6 route ...") instead. Not nice, but for a quick hack it works very decently ;-) But as mentioned before, processing kernel upcalls should make this problem void (fingers crossed). -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 3328 bytes Desc: S/MIME Cryptographic Signature Url : http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090127/801a3690/attachment.bin From Sebastian.Zabel at fujitsu-siemens.com Thu Jan 29 04:18:26 2009 From: Sebastian.Zabel at fujitsu-siemens.com (Zabel, Sebastian) Date: Thu, 29 Jan 2009 13:18:26 +0100 Subject: [Xorp-users] Basic configuration for unicast routing Message-ID: Hi I just tried to configure XORP on Suse sles 10 to do simplest unicast routing. The two NICs are configured correctly: root at linmf4n2> show interfaces eth0/eth0: Flags: mtu 1500 speed 100 Mbps inet 172.25.92.121 subnet 172.25.92.64/26 broadcast 172.25.92.127 physical index 2 ether 0:30:5:9:43:68 eth2/eth2: Flags: mtu 1500 speed 1 Gbps inet 172.25.92.113 subnet 172.25.92.112/29 broadcast 172.25.92.119 physical index 3 ether 0:e:c:61:b:13 But when I look at the connected routes, XORP tells me the following: root at linmf4n2> show route table ipv4 unicast connected 172.25.92.64/26 [connected(0)/0] > via eth0/eth0 172.25.92.112/29 [connected(0)/0] > via eth0/eth0 root at linmf4n2> Can anybody explain me why the NIC eth2 isn't used for network .112? No ping is forwarded through the router. Thank you for helping a complete beginner, Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mailman.ICSI.Berkeley.EDU/pipermail/xorp-users/attachments/20090129/8310ed4a/attachment.html