[Xorp-users] Very simple multicast setup, yet can't find any text on how to do it!

Pavlin Radoslavov pavlin at ICSI.Berkeley.EDU
Tue Jan 27 12:28:05 PST 2009


Erik Slagter <erik at slagter.name> wrote:

> Pavlin Radoslavov wrote:
> 
> > Theoretically, it shouldn't matter whether the source is on the host
> > running XORP/MRD6 or external. Though, I have seen reports in the
> > past for some odd behavior (for IPv4) if one of the participants
> > (sender/receiver?) was on the same host.
> 
> This might be the problem I am seeing.
> 
> BTW how does the kernel decide what interface to "use" as the "input
> interface" for multicasting purposes (like MFC_ADD)? It looks like the kernel
> first takes the set of routes to ff00::/8 via each system interface into
> consideration, and then, if the streaming source application has made a call
> to IPV6_MULTICAST_IF, uses that interface, and if it didn't, it takes one of
> the applicable interfaces at random. I am not completely happy with that
> behaviour, but I saw you have an interesting alternative approach for this.

I believe the above description roughly describes the algorithm for
choosing the outgoing interface for multicast packets originated by
the sender (i.e., even in the case when there is no multicast
routing running on the host).
Presumably, the chosen interface is also used as the incoming
interface for multicast routing purposes within the IP stack.
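For reference, a sender can take that choice out of the kernel's hands
with the standard IPV6_MULTICAST_IF socket option. A minimal sketch
(the function name is my own, and the interface name is caller-chosen):

```c
#include <net/if.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Pin a sending socket's multicast traffic to one interface so the
 * kernel does not pick an arbitrary interface from the ff00::/8
 * routes. Returns 0 on success, -1 on failure. */
int pin_mcast_ifindex(int sock, const char *ifname)
{
    unsigned int ifindex = if_nametoindex(ifname);
    if (ifindex == 0)
        return -1;
    return setsockopt(sock, IPPROTO_IPV6, IPV6_MULTICAST_IF,
                      &ifindex, sizeof(ifindex));
}
```

With this set, the chosen interface should then also be the iif the
multicast routing code sees for locally originated traffic.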

The selection algorithm shouldn't matter to you if you are receiving
and processing the upcall messages like
MRT6MSG_NOCACHE/MRT6MSG_WRONGMIF/MRT6MSG_WHOLEPKT.
The NOCACHE upcall, for example, should include the (S,G) and the iif
for the multicast packet, so all you need to do is install back the
(S,G) entry with that iif and the oifs.
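To illustrate, here is a sketch (against the Linux <linux/mroute6.h>
definitions) of telling these upcalls apart from ordinary packets
arriving on the multicast routing socket; the function name is my own.
Upcalls carry a struct mrt6msg whose first byte (im6_mbz) is zero,
which no real packet on that socket has:

```c
#include <netinet/in.h>
#include <stddef.h>
#include <linux/mroute6.h>

/* Decode a kernel multicast-routing upcall. Returns 0 and fills in
 * the (S,G), the incoming mif and the message type
 * (MRT6MSG_NOCACHE / MRT6MSG_WRONGMIF / MRT6MSG_WHOLEPKT), or -1 if
 * the buffer is not an upcall. */
int parse_upcall(const unsigned char *buf, size_t len,
                 struct in6_addr *src, struct in6_addr *grp,
                 mifi_t *iif, int *msgtype)
{
    const struct mrt6msg *m = (const struct mrt6msg *)buf;

    if (len < sizeof(*m) || m->im6_mbz != 0)
        return -1;             /* a real packet, not an upcall */
    *msgtype = m->im6_msgtype;
    *iif = m->im6_mif;
    *src = m->im6_src;
    *grp = m->im6_dst;
    return 0;
}
```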

> > I am not familiar with the implementation details of ip6_mr_cache,
> > but if there is no matching multicast route, it is up to the
> > userland program to install such route.
> > The new route should have the appropriate incoming and outgoing
> > interfaces, or no outgoing interfaces if no receivers.
> > In other words, it is valid to have a MFC (Multicast Forwarding
> > Cache) entry with no outgoing interfaces, for the purpose of
> > stopping upcalls to userland.
> 
> That would indeed be interesting, but I see a problem: My app cannot know in
> advance which multicast groups it is going to forward. It determines the
> groups solely from received MLD requests. OTOH I could try to intercept the
> kernel upcalls and install such a zero route when no receivers are known for a
> certain group. Interesting thought, I will try this for sure.

The MLD requests give you the oifs set. You also need to consider
the NOCACHE upcalls to catch new (S,G) flows (and their iif), and
the WRONGMIF upcalls to catch changes in the iif for those flows
whose (S,G) is already in the kernel.

Even if you can somehow get all the mcast traffic to appear with
iif=dummy0, you must also know the source address if you want to
install the MFC entries in advance. Otherwise, you need to catch
the NOCACHE upcalls and process them.
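Putting the pieces above together, a sketch of building the
MRT6_ADD_MFC argument from an upcall's (S,G) and iif plus the oifs
known from MLD state (the helper name is my own; passing an empty oif
list gives the "negative" entry discussed earlier, which stops further
upcalls for a group with no receivers):

```c
#include <string.h>
#include <netinet/in.h>
#include <linux/mroute6.h>

/* Fill the MRT6_ADD_MFC argument for an (S,G): the iif reported by
 * the kernel upcall plus whatever oifs MLD state says have
 * listeners. With n_oifs == 0 the entry forwards nowhere, which
 * merely suppresses NOCACHE upcalls for that flow. */
void fill_mf6cctl(struct mf6cctl *mc,
                  const struct in6_addr *src,
                  const struct in6_addr *grp,
                  mifi_t iif,
                  const mifi_t *oifs, int n_oifs)
{
    memset(mc, 0, sizeof(*mc));
    mc->mf6cc_origin.sin6_family = AF_INET6;
    mc->mf6cc_origin.sin6_addr = *src;
    mc->mf6cc_mcastgrp.sin6_family = AF_INET6;
    mc->mf6cc_mcastgrp.sin6_addr = *grp;
    mc->mf6cc_parent = iif;
    IF_ZERO(&mc->mf6cc_ifset);
    for (int i = 0; i < n_oifs; i++)
        IF_SET(oifs[i], &mc->mf6cc_ifset);
    /* The caller would then do
     *   setsockopt(mrt_sock, IPPROTO_IPV6, MRT6_ADD_MFC,
     *              mc, sizeof(*mc));
     * on a socket that has done MRT6_INIT (root only), and
     * MRT6_DEL_MFC with the same (S,G) to remove the entry. */
}
```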

Regards,
Pavlin

> > No unfortunately. The MFC entries granularity in the kernel
> > (BSD/Linux/Solaris/etc) is (S,G). It is possible to modify the
> > kernel to support (*,G)  (long time ago I did a prototype
> > implementation for BSD), but so far there hasn't been strong need
> > for this feature to become part of the kernel.
> 
> :-( Tough luck for me then.
> 
> > Yes, the granularity for ADD_MFC/DEL_MFC is per MFC entry. Every
> > time you need to update the MFC entry you must call MRT6_ADD_MFC
> > with the complete set of information for that entry (iif, updated
> > outgoing interface set, etc). Once the entry is not needed anymore,
> > you call MRT6_DEL_MFC to remove it from the kernel.
> 
> Okay, that's clear. Thanks for confirming this and also the other info.
> 
> For the moment I have now installed a static "multicast" route ("ip -6 route
> add multicast ff05::/16 dev dummy0") to my dummy0 device; hopefully this will
> keep the traffic from going out to unwanted interfaces for the moment. Whether
> or not this works, I will check out the upcall mechanism for applying null
> routes.
> 
> My first priority now is sending MLD query requests, and I also figured that
> more than one listener can be present on a link, so I should keep track of
> every requestor separately.
> _______________________________________________
> Xorp-users mailing list
> Xorp-users at xorp.org
> http://mailman.ICSI.Berkeley.EDU/mailman/listinfo/xorp-users


