[Xorp-users] xorp multicast routing question

Pavlin Radoslavov pavlin@icir.org
Tue, 26 Jul 2005 10:27:00 -0700


> I am a software engineer working for BAE SYSTEMS in the US. I have
> configured IGMP and PIM v2 within the Linux Yellow Dog kernel. I
> configured two PCs acting as multicast routers and one host PC sending
> reports. I see queries and reports, and groups are being added to the
> group table seen by the CLI command "show igmp group". However, I do
> not see any routes with either the CLI command "show pim mfc" or with
> "netstat -gn". Does anyone have any ideas what could cause this?
> Could it be a PIM network configuration issue?

Russ,

The (S,G) multicast forwarding state is installed on-demand (i.e.,
only after the senders have started transmitting data).
Both the CLI command "show pim mfc" and the UNIX command
"netstat -gn" show this on-demand installed (S,G) state: the former
shows the state inside the PIM daemon; the latter shows the state in
the UNIX kernel.

If you want to see the multicast routing state that is created when
there are new receivers, when PIM Join/Prune messages have been
received, etc., you can use the command "show pim join".

If you see the receivers with the "show igmp group" command, but the
"show pim join" command doesn't show the corresponding multicast
routing state, then there is indeed some problem. If this is the
case, then please send me more details. E.g., enable "traceoptions"
in the XORP config file, and send me the log output.


Regards,
Pavlin


P.S. Below is some additional info, which may be useful, about how
things work in the case of UNIX-based multicast routers.

What creates confusion with multicast on UNIX boxes is that the
multicast forwarding state (the entries installed in the forwarding
engine, i.e., the kernel in the case of UNIX) is a subset of the
multicast routing state (i.e., the state created at user level
inside the PIM-SM daemon, based on IGMP information, PIM-SM control
message exchanges, etc.).

The fact that the multicast forwarding state is a subset of the
multicast routing state is implementation-specific (for UNIX-based
routers), and may not be true for other platforms.

On UNIX, the multicast forwarding state has (S,G) granularity.
E.g., a number of (S,G) forwarding entries may map to a single
(*,G) routing entry. Furthermore, those forwarding entries are
created on demand, when the first multicast data packet of a new
(S,G) flow arrives.
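The subset relation and the on-demand creation described above can be
sketched in a few lines of Python. This is a simplified model for
illustration only, not XORP or kernel code: the class and method names
(MulticastRouter, add_receiver, handle_nocache_upcall) are invented
here, standing in for the real user/kernel interaction (on real UNIX
systems, the kernel notifies the daemon of an unknown (S,G) flow with
an IGMPMSG_NOCACHE upcall, and the daemon installs the forwarding
entry with MRT_ADD_MFC).

```python
# Simplified model of multicast *routing* state (user level, per (*,G))
# vs. multicast *forwarding* state (kernel MFC, per (S,G)).
# Illustrative sketch only -- names are invented, not XORP APIs.

class MulticastRouter:
    def __init__(self):
        # Routing state, built from IGMP reports and PIM Join/Prune
        # messages: group -> set of outgoing interfaces.
        self.routing = {}
        # Forwarding state, installed on demand:
        # (source, group) -> set of outgoing interfaces.
        # Always derived from (hence a subset of) the routing state.
        self.forwarding = {}

    def add_receiver(self, group, iface):
        """A new receiver updates only the routing state; nothing is
        pushed to the forwarding engine yet."""
        self.routing.setdefault(group, set()).add(iface)

    def handle_nocache_upcall(self, source, group):
        """The kernel saw the first data packet of a new (S,G) flow
        and has no matching forwarding entry. Resolve it against the
        (*,G) routing entry and install an (S,G) forwarding entry."""
        oifs = self.routing.get(group)
        if oifs is None:
            return None  # no receivers for this group: install nothing
        self.forwarding[(source, group)] = set(oifs)
        return self.forwarding[(source, group)]


r = MulticastRouter()
# A receiver joins: "show igmp group" would now list the group...
r.add_receiver("239.1.1.1", "eth1")
# ...but "show pim mfc" / "netstat -gn" would still be empty:
assert r.forwarding == {}
# First data packet from a sender arrives -> (S,G) entry installed:
r.handle_nocache_upcall("10.0.0.5", "239.1.1.1")
assert ("10.0.0.5", "239.1.1.1") in r.forwarding
```

This mirrors the situation in the question above: receivers are visible
via "show igmp group", yet "show pim mfc" and "netstat -gn" stay empty
until a sender actually transmits data.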

If you are interested in the internal details of the implementation
and how all this on-demand machinery works, the following document
describes some of the original ideas and the original implementation
by Ahmed Helmy:
* draft-helmy-pim-sm-implem-00.{txt,ps} available from:
  http://netweb.usc.edu/pim/pimd/docs/