
Wednesday, February 25, 2009

PIM NBMA, DR and RPF issues

Below is the topology. RIP is running everywhere, PIM-SM is enabled on all interfaces, and everyone has R4 at 192.168.100.4 as the static RP.


R1 has the following config on its LAN interface:
interface Ethernet0/0
ip address 192.168.0.1 255.255.255.0
ip pim sparse-mode
ip igmp join-group 239.0.0.1
Let's ping from R6:
R6#ping 239.0.0.1 re 5  

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:
.....
R6#
Hmmm....what gives? Let's look at R4:
R4#sho ip pim neighbor
PIM Neighbor Table
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
192.168.34.3 Ethernet0/0 03:29:50/00:01:39 v2 1 / S
192.168.100.2 Serial0/0 02:25:22/00:01:38 v2 1 / S
192.168.100.5 Serial0/0 02:25:22/00:01:39 v2 1 / DR S
192.168.100.1 Serial0/0 02:25:22/00:01:38 v2 1 / S

R4#sho ip mroute 239.0.0.1 | be \(
(*, 239.0.0.1), 00:24:31/00:02:33, RP 192.168.100.4, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/0, 192.168.100.2, Forward/Sparse, 00:24:31/00:02:33

(192.168.56.6, 239.0.0.1), 00:02:03/00:02:45, flags: T
Incoming interface: Serial0/0, RPF nbr 192.168.100.5
Outgoing interface list:
Serial0/0, 192.168.100.2, Forward/Sparse, 00:02:03/00:00:57

R4#
Well, it looks like R2 is showing up in the OIL, but why isn't R1? It is a PIM neighbor, after all. The reason is that R2 has won the DR election and has the right to forward traffic, so it is the neighbor that sends PIM joins to R4. R1 receives the traffic, but it arrives on R1's LAN interface and thus fails the RPF check.

R1#debug ip mpacket
IP multicast packets debugging is on
03:40:21: IP(0): s=192.168.56.6 (Ethernet0/0) d=239.0.0.1 id=197, ttl=251, prot=1, len=114(100), not RPF interface
03:40:23: IP(0): s=192.168.56.6 (Ethernet0/0) d=239.0.0.1 id=198, ttl=251, prot=1, len=114(100), not RPF interface
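
A quick way to confirm who won the DR election on the segment is to check the PIM interface state. This is just a verification sketch (output omitted); at this point the DR field should list R2's address, 192.168.0.2:

R1#show ip pim interface ethernet0/0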


It is important to remember we have at least two ways to resolve this:

1) Make R1 the DR

R1(config)#int e0/0
R1(config-if)#ip pim dr-priority 3000

R6#ping 239.0.0.1 re 1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.100.1, 60 ms
R6#

R1(config-if)#^Z
03:41:47: IP(0): s=192.168.56.6 (Serial0/0) d=239.0.0.1 (Ethernet0/0) id=207, ttl=252, prot=1, len=100(100), mforward


2) Static mroute to R2 for 192.168.56.6

R1(config)#int e0/0
R1(config-if)#no ip pim dr-priority 3000
R1(config-if)#exit
R1(config)#ip mroute 192.168.56.0 255.255.255.0 192.168.0.2
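
With the static mroute in place, R1's RPF lookup for the source should now point at R2 on the LAN rather than at the serial path. A verification sketch (output omitted); the RPF interface should be Ethernet0/0 with RPF type static:

R1#show ip rpf 192.168.56.6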

Make sure to clear mroutes; otherwise, previous state may mislead you :)

R4#clear ip mroute *

R6#ping 239.0.0.1 re 1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.100.1, 56 ms
R6#


This is one of those labs where I had no idea where I was going, and I ended up with a nice troubleshooting scenario. If multicast is one of your weaknesses, then I highly recommend digging in and making something happen. Debug ip mpacket works best with "no ip mroute-cache" on your interfaces. In this scenario, I started troubleshooting on R5, then worked my way around to resolve the issue :)
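
For the record, here is that debugging prep spelled out (a sketch; repeat the "no ip mroute-cache" on every interface you expect multicast on, so packets are process-switched and actually show up in the debug):

R1(config)#interface ethernet0/0
R1(config-if)#no ip mroute-cache
R1(config-if)#end
R1#debug ip mpacket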

Monday, February 23, 2009

PIM Forwarder and the Assert Mechanism

I know, it's a cool name for a band, huh? Ladies and gentlemen...PIM Forwarder and the Assert Mechanism! Anyways, I always get confused about PIM DR and PIM Forwarder so this is to clear up my confusion. Here we take a look at PIM Forwarder and how to verify the assert process is working.

Here is the topology:


Here is what I have enabled:
-RIP on all interfaces
-ip multicast-routing on all routers
-ip pim sparse-dense-mode on all interfaces
-ip igmp join-group 239.0.0.1 on R5 ethernet

For debugging:
-no ip mroute-cache
-debug ip mpacket
-ping
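
Spelled out, the bullets above amount to roughly the following on each router (a sketch; the interface names are assumptions based on the outputs below):

R1(config)#ip multicast-routing
R1(config)#interface ethernet0/0
R1(config-if)#ip pim sparse-dense-mode
R1(config-if)#no ip mroute-cache
!
! ...same PIM and debug prep on R2 through R5, plus the join on R5:
R5(config)#interface ethernet0/0
R5(config-if)#ip igmp join-group 239.0.0.1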

Scenario 1: R2 is the PIM Forwarder based on highest IP

From R4 we ping twice:
R4#ping 239.0.0.1 re 2

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.0.5, 20 ms
Reply to request 0 from 192.168.0.5, 20 ms
Reply to request 1 from 192.168.0.5, 8 ms
On R1 and R2 we see the following:

R1#
*Mar 2 02:05:36.795: IP(0): s=192.168.34.4 (Serial0/1) d=239.0.0.1 (Ethernet0/0) id=70, ttl=253, prot=1, len=100(100), mforward
*Mar 2 02:05:36.799: IP(0): s=192.168.34.4 (Ethernet0/0) d=239.0.0.1 id=70, ttl=252, prot=1, len=114(100), not RPF interface
*Mar 2 02:05:38.787: IP(0): s=192.168.34.4 (Ethernet0/0) d=239.0.0.1 id=71, ttl=252, prot=1, len=114(100), not RPF interface

R2#
*Mar 1 02:25:00.567: IP(0): s=192.168.34.4 (Serial0/1) d=239.0.0.1 (Ethernet0/0) id=70, ttl=253, prot=1, len=100(100), mforward
*Mar 1 02:25:00.571: IP(0): s=192.168.34.4 (Ethernet0/0) d=239.0.0.1 id=70, ttl=252, prot=1, len=114(100), not RPF interface
*Mar 1 02:25:02.559: IP(0): s=192.168.34.4 (Serial0/1) d=239.0.0.1 (Ethernet0/0) id=71, ttl=253, prot=1, len=100(100), mforward


Notice that each router sent the first packet onto the LAN and R5 responded to both copies; we can tell because R4 got two replies to the first echo. What also happened is that R1 and R2 each saw the other's copy of that very same packet on their LAN interfaces, and the PIM Assert process immediately took over. Because both routers have the same AD (RIP's default of 120) and metric (2) to the source, R2 won the right to forward based on highest IP address.

Next we see that the second packet gets forwarded only by R2. Here we see that R2 has the A (Assert Winner) flag in its mroute entry, while R1 has pruned that same interface.
R2#sho ip mroute 239.0.0.1 192.168.34.4 | be \(
(192.168.34.4, 239.0.0.1), 00:00:39/00:02:26, flags: T
Incoming interface: Serial0/1, RPF nbr 192.168.23.3
Outgoing interface list:
Ethernet0/0, Forward/Sparse-Dense, 00:00:39/00:00:00, A

R1#sho ip mroute 239.0.0.1 192.168.34.4 | be \(
(192.168.34.4, 239.0.0.1), 00:01:27/00:01:34, flags: PT
Incoming interface: Serial0/1, RPF nbr 192.168.13.3
Outgoing interface list:
Ethernet0/0, Prune/Sparse-Dense, 00:01:27/00:01:32

Scenario 2: R1 is the PIM Forwarder based on lowest AD

Now we change R1's AD for RIP below the default of 120:
R1(config)#router rip
R1(config-router)#distance 89
We see the same behavior from R4's perspective, but now R1 has won the Assert process and is forwarding group 239.0.0.1 onto the LAN:
R4#ping 239.0.0.1 re 2

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.0.5, 12 ms
Reply to request 0 from 192.168.0.5, 12 ms
Reply to request 1 from 192.168.0.5, 8 ms
R4#

R1#sho ip mroute 239.0.0.1 192.168.34.4 | be \(
(192.168.34.4, 239.0.0.1), 00:00:07/00:02:54, flags: T
Incoming interface: Serial0/1, RPF nbr 192.168.13.3
Outgoing interface list:
Ethernet0/0, Forward/Sparse-Dense, 00:00:07/00:00:00, A

R1#

Wednesday, February 11, 2009

Messin' around with multicast boundary

I got a multicast lab going in dynamips, so I thought I would just play around with some lesser-known commands and learn how they actually work.

Here is the topology:

R5---R6---R1---R2---R3---R4

R1 = MA and RP for 232/8, 233/8, 234/8

R4#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 232.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:10:08, expires: 00:02:48
Group(s) 233.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:10:08, expires: 00:02:47
Group(s) 234.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:10:08, expires: 00:02:46

R4 has the following on Loopback 0:

interface Loopback0
ip address 4.4.4.4 255.255.255.255
ip pim sparse-mode
ip igmp join-group 233.0.0.1
ip igmp join-group 234.0.0.1

R3 has set up a multicast boundary as follows:

access-list 1 permit 232.0.0.0 0.255.255.255
access-list 1 permit 233.0.0.0 0.255.255.255

interface Serial1/0
ip address 192.168.34.3 255.255.255.0
ip pim sparse-mode
ip multicast boundary 1

Now R3 only allows PIM joins that are in 232/8 or 233/8.
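
You can confirm the boundary is attached with the following (a sketch, output omitted; the interface's multicast information should list the boundary ACL):

R3#show ip multicast interface serial1/0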

R3#sho ip mroute 234.0.0.1
Group 234.0.0.1 not found
R3#

Let's ping 233.0.0.1:

R6#ping 233.0.0.1 re 100

Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 233.0.0.1, timeout is 2 seconds:
......................................................................
..........

Whoa now, what gives? Well...remember we only allowed two groups. What does Auto-RP use to propagate its messages? Group 224.0.1.40! So even if traffic to 233.0.0.1 works right after you enable the boundary, eventually R3 will lose state for the Auto-RP discovery group, R4 will lose the RP information, and all multicast traffic will then fail the RPF check.

So here is our modified ACL on R3:

R3#sho run | inc access
access-list 1 permit 224.0.1.40
access-list 1 permit 233.0.0.0 0.255.255.255
access-list 1 permit 232.0.0.0 0.255.255.255

224.0.1.39 is what the MAs listen to, so we don't need to worry about that for this example.
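
That said, if a candidate RP sat on the far side of the boundary, its announcements to 224.0.1.39 would also need to be permitted. A sketch of that hypothetical addition:

R3(config)#access-list 1 permit 224.0.1.39

Now we can ping: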

R6#ping 233.0.0.1 re 2

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 233.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.34.4, 212 ms
Reply to request 0 from 192.168.34.4, 216 ms
Reply to request 1 from 192.168.34.4, 184 ms
Reply to request 1 from 192.168.34.4, 184 ms

Now this seems a little inefficient, right? Why should R4 even know about the RP if R3 is going to prevent mroute state from being created for 234.0.0.1 on that interface? If we could prevent R4 from learning that RP information, that would be great. Well, on R3 we can modify the boundary as follows:

R3(config)#int s1/0
R3(config-if)#ip multicast boundary 1 filter-autorp

Now R3 only sends RP information for the groups permitted in the ACL:

R4#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 232.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:00:03, expires: 00:02:55
Group(s) 233.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:00:03, expires: 00:02:53
R4#

Multicast TTL-Threshold

Maybe I am misunderstanding some things, but documents and books always say that the TTL of a packet must be higher than the threshold to be forwarded. From the 12.4 command reference:

ip multicast ttl-threshold

Usage Guidelines

"Only multicast packets with a TTL value greater than the threshold are forwarded out the interface."

Oh yeah?! I guess it depends on when you look at the TTL. Consider the network:

R1----R2----R3----R4

PIM-DM is enabled everywhere.
R4 has joined 239.0.0.1
R1 is sending pings, which have a TTL of 255 when they leave R1.
R2 receives the ping and decrements the TTL to 254 before sending it to R3.

So if we set a TTL threshold of 254 on R2's interface to R3, it should block the packets, right? No:

R2(config)#int s1/0
R2(config-if)#ip multicast ttl-threshold 254

R1#ping 239.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.34.4, 164 ms
R1#

The router will still pass a packet whose TTL is equal to the threshold if this router is the one that decremented the TTL to reach that value; in other words, the check happens after the decrement. Here we see that a threshold of 255 will block it:

R2(config)#int s1/0
R2(config-if)#ip multicast ttl-threshold 255

R1#ping 239.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:
.
R1#

Friday, January 16, 2009

Troubleshooting PIM-SM issues on a LAN segment

Below is the topology for this lab. R1 is the Mapping Agent and the RP. PIM-SM is enabled everywhere except on the link between R1 and R3.

All routers also have the following debug command:

debug ip pim 239.0.0.1

Let's take a look at what happens when R5 joins group 239.0.0.1:

R5(config)#int f0/0
R5(config-if)#ip igmp join-group 239.0.0.1

Mar 1 00:51:50.599: PIM(0): Check RP 1.1.1.1 into the (*, 239.0.0.1) entry

R4#ping 239.0.0.1 re 5 sou s1/0

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:
Packet sent with a source address of 172.12.14.4
.....
R4#


Hmmm....a quick check of the RP mappings shows everyone knows about 1.1.1.1 (R1) as the RP. Let's take a look at the mroute table on R1:

R1#sho ip mrou 239.0.0.1 | be \(
(*, 239.0.0.1), 00:01:38/stopped, RP 1.1.1.1, flags: SP
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list: Null

(172.12.14.4, 239.0.0.1), 00:01:38/00:01:46, flags: PT
Incoming interface: Serial1/2, RPF nbr 0.0.0.0
Outgoing interface list: Null


R1 is seeing the packets from R4, but its outgoing interface list is Null. Let's take a look at R2's mroute table:

R2#sho ip mrou 239.0.0.1 | be \(
(*, 239.0.0.1), 00:03:27/00:02:57, RP 1.1.1.1, flags: SP
Incoming interface: Serial1/0, RPF nbr 172.12.12.1
Outgoing interface list: Null


Null also...what gives? Let's wait and see if we get any debugs on R2:

R2#
00:55:42: PIM(0): Received v2 Join/Prune on FastEthernet0/0 from 172.12.25.6, not to us

00:55:42: PIM(0): Building Periodic Join/Prune message for 239.0.0.1


Interesting...it appears that R6 has become the DR for this segment and is responsible for sending the (*,G) joins to the RP. R2 is hearing them but ignoring them...why? What exactly is in the packet that tells R2 it's not for us? Well, since this is a dynamips lab, we can find out!

Here is a screenshot of the packet capture:

We can see that when R6 sends this join, it uses the multicast address 224.0.0.13 (all PIM routers). But inside the PIM packet we can see that R6 specifies an upstream neighbor of 172.12.25.3, which is R3.

Also on R6 we see the following debug messages:

R6#
*Mar 1 01:02:48.847: PIM(0): Building Periodic Join/Prune message for 239.0.0.1
*Mar 1 01:02:48.847: PIM(0): Insert (*,239.0.0.1) join in nbr 172.12.25.3's queue
*Mar 1 01:02:48.851: PIM(0): Building Join/Prune packet for nbr 172.12.25.3
*Mar 1 01:02:48.855: PIM(0): Adding v2 (1.1.1.1/32, 239.0.0.1), WC-bit, RPT-bit, S-bit Join
*Mar 1 01:02:48.859: PIM(0): Send v2 join/prune to 172.12.25.3 (FastEthernet0/0)


Can we fix this? Of course!

R6(config)#ip mroute 1.1.1.1 255.255.255.255 172.12.25.2

*Mar 1 01:05:52.019: PIM(0): Building Periodic Join/Prune message for 239.0.0.1
*Mar 1 01:05:52.019: PIM(0): Insert (*,239.0.0.1) join in nbr 172.12.25.2's queue
*Mar 1 01:05:52.023: PIM(0): Building Join/Prune packet for nbr 172.12.25.2
*Mar 1 01:05:52.027: PIM(0): Adding v2 (1.1.1.1/32, 239.0.0.1), WC-bit, RPT-bit, S-bit Join
*Mar 1 01:05:52.027: PIM(0): Send v2 join/prune to 172.12.25.2 (FastEthernet0/0)
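
Before pinging again, we can double-check what the static mroute did to R6's RPF lookup for the RP (a sketch, output omitted; the RPF type should now show as static, with neighbor 172.12.25.2):

R6#show ip rpf 1.1.1.1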


Ping now:

R4#ping 239.0.0.1 re 5 sou s1/0

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:
Packet sent with a source address of 172.12.14.4

Reply to request 0 from 172.12.25.5, 244 ms
Reply to request 1 from 172.12.25.5, 104 ms
Reply to request 2 from 172.12.25.5, 72 ms
Reply to request 3 from 172.12.25.5, 52 ms
Reply to request 4 from 172.12.25.5, 44 ms


But wait! There's one more solution. We can make R2 the DR for the segment (remove the mroute on R6 and clear the mroute table on R2):

R2(config)#int f0/0
R2(config-if)#ip pim dr-priority 300000

01:07:09: PIM(0): Changing DR for FastEthernet0/0, from 172.12.25.6 to 172.12.25.2 (this system)
01:07:09: %PIM-5-DRCHG: DR change from neighbor 172.12.25.6 to 172.12.25.2 on interface FastEthernet0/0 (vrf default)
01:07:09: PIM(0): Check RP 1.1.1.1 into the (*, 239.0.0.1) entry
01:07:09: PIM(0): Building Triggered Join/Prune message for 239.0.0.1
01:07:09: PIM(0): Insert (*,239.0.0.1) join in nbr 172.12.12.1's queue
01:07:09: PIM(0): Building Join/Prune packet for nbr 172.12.12.1
01:07:09: PIM(0): Adding v2 (1.1.1.1/32, 239.0.0.1), WC-bit, RPT-bit, S-bit Join
01:07:09: PIM(0): Send v2 join/prune to 172.12.12.1 (Serial1/0)

R2#sho ip mrou 239.0.0.1 | be \(
(*, 239.0.0.1), 00:01:28/00:02:31, RP 1.1.1.1, flags: SJC
Incoming interface: Serial1/0, RPF nbr 172.12.12.1
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:01:28/00:02:31


It's always good to have more than one solution up your sleeve :)

Thursday, January 15, 2009

Basic MSDP configuration

This is a short MSDP scenario designed to get familiar with the command to enable it and where you would use it. Below is the topology.


There are two domains, each with an RP. We separate the domains by using the following commands on R3 and R4:

R3(config)#int s1/1
R3(config-if)#ip pim bsr-border

R4(config)#int s1/0
R4(config-if)#ip pim bsr-border


R2 and R4 have already been configured as the BSRs and RPs for their respective domains. Let's verify on R1 and R5:

R1#sho ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
RP 2.2.2.2 (?), v2
Info source: 2.2.2.2 (?), via bootstrap, priority 0, holdtime 150
Uptime: 18:21:34, expires: 00:02:13

R5#sho ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
RP 4.4.4.4 (?), v2
Info source: 4.4.4.4 (?), via bootstrap, priority 0, holdtime 150
Uptime: 18:19:56, expires: 00:01:52


R1 and R8 have already joined group 225.0.0.1. Let's see what happens when R6 sends a ping to this group:

R6#ping 225.0.0.1 re 10

Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.78.8, 192 ms
Reply to request 1 from 192.168.78.8, 192 ms
Reply to request 2 from 192.168.78.8, 100 ms
Reply to request 3 from 192.168.78.8, 84 ms
Reply to request 4 from 192.168.78.8, 112 ms
Reply to request 5 from 192.168.78.8, 104 ms


Only R8 responds. This is because the PIM joins from Domain 1 terminate at its own RP and never reach the RP in Domain 2, so R4 never knows it should forward toward R3. Let's configure MSDP between R2 and R4:

R2(config)#ip msdp peer 4.4.4.4 connect-source loopback 0

R4(config)#ip msdp peer 2.2.2.2 connect-source loopback 0


It may take a moment but we will see this message:

*Mar 1 19:56:14.343: %MSDP-5-PEER_UPDOWN: Session to peer 2.2.2.2 going up

If we debug we would see this:

R4#debug ip msdp de
MSDP Detail debugging is on

*Mar 1 19:56:15.263: MSDP(0): Received 3-byte TCP segment from 2.2.2.2
*Mar 1 19:56:15.263: MSDP(0): Append 3 bytes to 0-byte msg 1170 from 2.2.2.2, qs 1
*Mar 1 19:56:15.643: MSDP(0): Sent entire mroute table, mroute_cache_index = 0, Qlen = 0
*Mar 1 19:56:15.647: MSDP(0): start_index = 0, sa_cache_index = 0, Qlen = 0
*Mar 1 19:56:15.651: MSDP(0): Sent entire sa-cache, sa_cache_index = 0, Qlen = 0
*Mar 1 19:56:16.275: MSDP(0): Received 3-byte TCP segment from 2.2.2.2
*Mar 1 19:56:16.275: MSDP(0): Append 3 bytes to 0-byte msg 1171 from 2.2.2.2, qs 1


Notice that R4 sent R2 its entire mroute table. Let's check the mroute table on R2:

R2#sho ip mroute 225.0.0.1 | be \(\*
(*, 225.0.0.1), 00:04:59/00:03:27, RP 2.2.2.2, flags: S
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial1/0, Forward/Sparse, 00:04:59/00:03:27

(192.168.56.6, 225.0.0.1), 00:01:47/00:01:12, flags: M
Incoming interface: Serial1/1, RPF nbr 192.168.23.3
Outgoing interface list:
Serial1/0, Forward/Sparse, 00:01:47/00:03:27


R2 now knows about the source R6 and has even populated its OIL. The M flag tells us this is an MSDP-created entry. Let's ping from R6:

R6#ping 225.0.0.1 re 5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 225.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.78.8, 188 ms
Reply to request 0 from 192.168.12.1, 284 ms
Reply to request 0 from 192.168.12.1, 268 ms
Reply to request 1 from 192.168.12.1, 132 ms
Reply to request 1 from 192.168.78.8, 184 ms
Reply to request 2 from 192.168.12.1, 132 ms
Reply to request 2 from 192.168.78.8, 132 ms
Reply to request 3 from 192.168.12.1, 100 ms
Reply to request 3 from 192.168.78.8, 100 ms
Reply to request 4 from 192.168.12.1, 96 ms
Reply to request 4 from 192.168.78.8, 100 ms
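
For completeness, the source learned over MSDP should also appear in R2's SA cache (a sketch, output omitted; it should list (192.168.56.6, 225.0.0.1) learned from peer 4.4.4.4):

R2#show ip msdp sa-cache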


Well, that's it for now. You can build more complex scenarios with multiple domains (the DocCD says MBGP is required for that), but the basics are easy to get down.

Thursday, December 11, 2008

ECMP Multicast Load Splitting

This is a pretty simple concept. By default, when two paths to the RP exist, the router sends a join to the neighbor with the highest IP address. When you enable multicast multipath, the router will send joins up multiple paths depending on the source address (the hash is modifiable in some IOS trains).

Here is the topology:


R4 has joined group 239.0.0.1. R5, R6 and R7 are all sending pings to this address. Before enabling multipath, this is what R1's mroute table looks like (it's actually bigger; I am omitting output for the sake of brevity):

R1#show ip mroute | be \(

(*, 239.0.0.1), 00:00:09/stopped, RP 2.2.2.2, flags: SJC
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:09/00:02:50

(6.6.6.6, 239.0.0.1), 00:00:07/00:02:58, flags: JT
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:07/00:02:52

(150.100.56.5, 239.0.0.1), 00:00:05/00:02:58, flags: JT
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:05/00:02:54

(150.100.56.6, 239.0.0.1), 00:00:07/00:02:58, flags: JT
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:07/00:02:52

(150.100.56.7, 239.0.0.1), 00:00:10/00:02:57, flags: JT
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:10/00:02:49


Notice that R1 has sent joins only on Serial 1/3. Thus, R2 only sends multicast traffic for 239.0.0.1 out of this interface. R2's OIL looks like this:

R2#show ip mroute 239.0.0.1 | sec Outgoing

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:34:58/00:02:44

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:13:39/00:02:44

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:13:39/00:02:46

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:13:39/00:02:45


Let's enable multicast multipath on R1:

R1(config)#ip multicast multipath

Now we can see Joins have been sent out of both interfaces:

R1#show ip mroute | be \(
(*, 239.0.0.1), 00:00:01/stopped, RP 2.2.2.2, flags: SJC
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:01/00:02:58

(6.6.6.6, 239.0.0.1), 00:00:01/00:02:58, flags: JT
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:01/00:02:58

(150.100.56.5, 239.0.0.1), 00:00:01/00:02:58, flags: J
Incoming interface: Serial1/2, RPF nbr 150.100.12.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:01/00:02:58

(150.100.56.6, 239.0.0.1), 00:00:01/00:02:58, flags: JT
Incoming interface: Serial1/3, RPF nbr 150.100.21.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:01/00:02:58

(150.100.56.7, 239.0.0.1), 00:00:00/00:02:59, flags: J
Incoming interface: Serial1/2, RPF nbr 150.100.12.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:00/00:02:59


R2's OIL now looks like this:

R2#show ip mroute 239.0.0.1 | section Outg

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:03:03/00:03:26

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:01:31/00:03:26

Outgoing interface list:
Serial1/2, Forward/Sparse, 00:01:02/00:03:25

Outgoing interface list:
Serial1/3, Forward/Sparse, 00:01:31/00:03:26


At first I wasn't sure if the hashing is done on source or source/group, so I tested by sending to different groups from the same address to see if the traffic splits up. From what I can tell, it hashes on the source, so one source sending to multiple groups will not get split:

R1#show ip mroute | be \(

(150.100.100.5, 238.0.0.1), 00:00:04/00:02:55, flags: JT
Incoming interface: Serial1/2, RPF nbr 150.100.12.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:04/00:02:55

(150.100.100.5, 239.0.0.2), 00:00:49/00:02:17, flags: JT
Incoming interface: Serial1/2, RPF nbr 150.100.12.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:49/00:02:56

(150.100.100.5, 239.0.0.3), 00:00:45/00:02:17, flags: JT
Incoming interface: Serial1/2, RPF nbr 150.100.12.2
Outgoing interface list:
FastEthernet0/0, Forward/Sparse, 00:00:50/00:02:50


There is another train of IOS where you can select what to hash on, but my IOS doesn't have it.
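
For reference, in the trains that do support it, the knob looks roughly like this (an assumption based on later IOS releases; not available on the image used here):

R1(config)#ip multicast multipath s-g-hash basic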

Key thing to remember:

-Enabling multipath causes joins to be sent towards the RP on more than one interface; this is what causes the load splitting. Be careful not to confuse this with the downstream sending of traffic: by default, the router will send it out all interfaces in the OIL anyway!

Wednesday, December 10, 2008

Multicast - IGMP Profile

Here is the topology for this lab:



R2 is the RP and will be sending multicast pings.
R3 is the PIM DR for the 192.168.135.0 segment.
We will prevent R5 from joining group 239.0.0.1.

To deny IGMP joins on a switch, we use the IGMP filter and profile commands.

First, create the profile:

SW1(config)#ip igmp profile 1
SW1(config-igmp-profile)#deny
SW1(config-igmp-profile)#range 239.0.0.1 239.0.0.5
SW1(config-igmp-profile)#exit


Then attach it to the port:

SW1(config)#int f0/5
SW1(config-if)#ip igmp filter 1
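
You can double-check the profile contents before testing (a sketch, output omitted):

SW1#show ip igmp profile 1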


Now we can test by having R1 and R5 join a group in the range 239.0.0.1 - 239.0.0.5

R1(config)#int e0/0
R1(config-if)#ip igmp join-group 239.0.0.1

R5(config)#int e0/0
R5(config-if)#ip igmp join-group 239.0.0.1


Let's debug on SW1 and see what happens:

SW1#debug ip igmp filter
IGMP filter event debugging is on

SW1#
03:26:30: IGMPFILTER: igmp_filter_process_pkt(): checking group 239.0.0.1 from Fa0/5: deny
03:26:31: IGMPFILTER: igmp_filter_process_pkt() checking group from Fa0/3 : no profile attached
03:26:33: IGMPFILTER: igmp_filter_process_pkt() checking group from Fa0/1 : no profile attached


Now let's check R3 for any joined groups:

R3#show ip igmp groups
IGMP Connected Group Membership
Group Address Interface Uptime Expires Last Reporter
239.0.0.1 Ethernet0/0 00:09:28 00:02:30 192.168.135.1
224.0.1.40 Ethernet0/1 00:29:57 00:02:09 192.168.23.2
224.0.1.40 Ethernet0/0 00:30:01 00:02:37 192.168.135.3


Just to make sure, we can verify that only R1 responds to pings:

R2#ping 239.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.135.1, 8 ms
R2#

Tuesday, December 9, 2008

Multicast Heartbeat - Generating SNMP Traps

This was a topic I ran into while browsing the multicast configuration guide today. I had dynamips up and running so I created a small lab.

Topology:


R1---R2---R5---R7

R1 is sending traffic to 225.0.0.7
R2 is the BSR/RP
R5 will be configured for heartbeat
R7 has "ip igmp join-group 225.0.0.7" on one of its interfaces.

The commands to enable multicast heartbeat are very simple:

R5(config)#snmp-server host 9.9.9.9 traps public ipmulticast
R5(config)#snmp-server enable traps ipmulticast

R5(config)#ip multicast heartbeat 225.0.0.7 ?
<1-100> Minimal number of intervals where the heartbeats must be seen

R5(config)#ip multicast heartbeat 225.0.0.7 1 ?
<1-100> Number of intervals to monitor for heartbeat

R5(config)#ip multicast heartbeat 225.0.0.7 1 2 ?
<10-3600> Length of intervals in seconds

R5(config)#ip multicast heartbeat 225.0.0.7 1 2 10
R5(config)#


You will see this message:

R5#
*Mar 1 00:29:58.523: MHBEAT(0): Unless packets sent to 225.0.0.7 are seen in 1 out of 2 intervals of 10 seconds, an SNMP trap may be emitted.


Let's debug SNMP packets and the heartbeat so we can see the trap:

R5#debug snmp packets
SNMP packet debugging is on
R5#debug ip mhbeat
IP multicast heartbeat debugging is on


Now on R1 start sending packets, then stop:

R1#ping 225.0.0.7 re 2

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 225.0.0.7, timeout is 2 seconds:

Reply to request 0 from 150.100.56.7, 160 ms
Reply to request 1 from 150.100.56.7, 148 ms
R1#


Let's check R5. After a short while we see the following:

*Mar 1 00:38:48.555: MHBEAT(0): SNMP Trap for missing heartbeat
*Mar 1 00:38:48.575: SNMP: Queuing packet to 9.9.9.9
*Mar 1 00:38:48.575: SNMP: V1 Trap, ent ciscoExperiment.2.3.1, addr 150.100.56.5, gentrap 6, spectrap 1
ciscoIpMRouteHeartBeatEntry.2.225.0.0.7 = 0.0.0.0
ciscoIpMRouteHeartBeatEntry.3.225.0.0.7 = 10
ciscoIpMRouteHeartBeatEntry.4.225.0.0.7 = 2
ciscoIpMRouteHeartBeatEntry.5.225.0.0.7 = 0
*Mar 1 00:38:48.827: SNMP: Packet sent via UDP to 9.9.9.9


For reference, here is the link to the DocCD:

IP Multicast Heartbeat

Tuesday, July 29, 2008

Multicast over NBMA with auto-rp and a spoke mapping agent

Here is the topology. There is no PVC between R2 and R3. Nice frame cloud, eh?


R1 (hub), R2 and R3 are connected via frame-relay: 172.12.123.0/24.
R4 connects to R1 via another serial connection: 192.168.14.0/24.

OSPF area 0 is everywhere.

R1 will be the RP-candidate.
R3 will be the mapping agent.
R2 will be joining group 232.0.0.2.
R4 will be sending pings to 232.0.0.2.

R1 will announce itself as the RP candidate and R3 will announce itself as the MA:

R1(config)#ip pim send-rp-announce lo 0 scope 5

R3(config)#ip pim send-rp-discovery loopback 0 scope 5


Now according to this doc:

Using IP Multicast Over Frame Relay Networks

"All candidate RPs must be connected to the MA"
AND
"All MAs must be connected to all PIM routers"

In order for R2 to successfully receive the RP-to-group mappings from R3, they need to be PIM neighbors. The reason is that R1 will not forward an Auto-RP message received on its frame-relay interface back out that same interface, so R2 will never get it!

We can fix this by creating a tunnel between the two and enabling sparse-mode on the tunnel:

R2(config)#int tun 0
R2(config-if)#ip address 172.12.23.2 255.255.255.0
R2(config-if)#tunnel source 172.12.123.2
R2(config-if)#tunnel destination 172.12.123.3
R2(config-if)#ip pim sparse-mode

R3(config)#int tun 0
R3(config-if)#ip address 172.12.23.3 255.255.255.0
R3(config-if)#tunnel source 172.12.123.3
R3(config-if)#tunnel destination 172.12.123.2
R3(config-if)#ip pim sparse-mode

Now R2 and R3 are PIM neighbors:

R2#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
S - State Refresh Capable
Neighbor Interface Uptime/Expires Ver DR
Address Prio/Mode
172.12.123.1 Serial1/0 01:44:07/00:01:38 v2 1 / S
172.12.23.3 Tunnel0 00:06:29/00:01:42 v2 1 / S


However, there is still one issue. When R2 receives the Auto-RP messages from R3, the RPF check fails, because R2 uses its frame-relay interface to perform the RPF check:

R2#show ip rpf 3.3.3.3
RPF information for ? (3.3.3.3)
RPF interface: Serial1/0
RPF neighbor: ? (172.12.123.1)
RPF route/mask: 3.3.3.3/32
RPF type: unicast (ospf 1)
RPF recursion count: 0
Doing distance-preferred lookups across tables


Thus R2 will never learn the RP-to-group mappings. We can fix this by adding a static mroute for R3's loopback pointing towards the tunnel:
R2(config)#ip mroute 3.3.3.3 255.255.255.255 Tunnel0


Now the RPF check will pass for the Auto-RP messages:

R2#show ip rpf 3.3.3.3
RPF information for ? (3.3.3.3)
RPF interface: Tunnel0
RPF neighbor: ? (172.12.23.3)
RPF route/mask: 3.3.3.3/32
RPF type: static
RPF recursion count: 0
Doing distance-preferred lookups across tables


Let's verify on R4:

R4#ping 232.0.0.2 repeat 5

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 232.0.0.2, timeout is 2 seconds:

Reply to request 0 from 172.12.123.2, 500 ms
Reply to request 1 from 172.12.123.2, 188 ms
Reply to request 2 from 172.12.123.2, 492 ms
Reply to request 3 from 172.12.123.2, 300 ms
Reply to request 4 from 172.12.123.2, 292 ms


Sweet!