Wednesday, February 11, 2009

Messin' around with multicast boundary

I got a multicast lab going in dynamips, so I thought I would play around with some lesser-known commands and learn how they actually work.

Here is the topology:

R5---R6---R1---R2---R3---R4

R1 = mapping agent (MA) and RP for 232/8, 233/8, and 234/8
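
(I don't show R1's config here, but a minimal Auto-RP setup that makes R1 both the candidate RP and the mapping agent for those three ranges would look roughly like this. The ACL number and the scope value are my guesses, not necessarily what's on R1.)

access-list 10 permit 232.0.0.0 0.255.255.255
access-list 10 permit 233.0.0.0 0.255.255.255
access-list 10 permit 234.0.0.0 0.255.255.255
!
! advertise Loopback0 (1.1.1.1) as candidate RP for the ranges in ACL 10
ip pim send-rp-announce Loopback0 scope 16 group-list 10
! act as the mapping agent and originate RP-discovery messages
ip pim send-rp-discovery Loopback0 scope 16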

R4#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 232.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:10:08, expires: 00:02:48
Group(s) 233.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:10:08, expires: 00:02:47
Group(s) 234.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:10:08, expires: 00:02:46
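
(Side note: every interface in this lab runs plain sparse mode, so the Auto-RP groups 224.0.1.39 and 224.0.1.40 need some way to flood through the network before any RP is known. I'm assuming that's handled with either sparse-dense mode or the global autorp listener command, which dense-floods just those two groups:)

! one way to let the Auto-RP groups flood over sparse-mode-only links
ip pim autorp listener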

R4 has the following on Loopback 0:

interface Loopback0
ip address 4.4.4.4 255.255.255.255
ip pim sparse-mode
ip igmp join-group 233.0.0.1
ip igmp join-group 234.0.0.1

R3 has set up a multicast boundary as follows:

access-list 1 permit 232.0.0.0 0.255.255.255
access-list 1 permit 233.0.0.0 0.255.255.255

interface Serial1/0
ip address 192.168.34.3 255.255.255.0
ip pim sparse-mode
ip multicast boundary 1

Now R3 only allows multicast traffic and PIM joins across that interface for groups in 232/8 or 233/8, so no mroute state ever gets created for 234.0.0.1:

R3#sho ip mroute 234.0.0.1
Group 234.0.0.1 not found
R3#

Let's ping 233.0.0.1:

R6#ping 233.0.0.1 re 100

Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 233.0.0.1, timeout is 2 seconds:
......................................................................
..........

Whoa now, what gives? Well... remember, we only allowed two groups. What does Auto-RP use to propagate its RP-discovery messages? Group 224.0.1.40! So even if you start passing traffic to 233.0.0.1 after you enable the boundary, eventually R3 will lose state for the Auto-RP discovery group and R4's RP mappings will time out. Once that happens, R4 has no RP to send its (*,G) joins toward, so sparse-mode forwarding breaks for every group, not just 234/8.
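
(A couple of commands are handy if you want to watch this happen; I'm not pasting output since I didn't capture it. On R4, the expires timer in "show ip pim rp mapping" just counts down without ever being refreshed, "debug ip pim auto-rp" shows whether RP-discovery packets are still arriving, and on R3 you can check whether any state is left for the discovery group itself:)

R4#show ip pim rp mapping
R4#debug ip pim auto-rp
R3#show ip mroute 224.0.1.40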

So here is our modified ACL on R3:

R3#sho run | inc access
access-list 1 permit 224.0.1.40
access-list 1 permit 233.0.0.0 0.255.255.255
access-list 1 permit 232.0.0.0 0.255.255.255

224.0.1.39 is what the mapping agents listen to, and there's no MA on R4's side of the boundary, so we don't need to worry about it for this example (more on that after the ping). Now we can ping:

R6#ping 233.0.0.1 re 2

Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 233.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.34.4, 212 ms
Reply to request 0 from 192.168.34.4, 216 ms
Reply to request 1 from 192.168.34.4, 184 ms
Reply to request 1 from 192.168.34.4, 184 ms
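
(For completeness: if the mapping agent or another candidate RP sat on R4's side of this boundary, we would also have to permit 224.0.1.39, since that is the group candidate RPs send their announcements to. A hypothetical extra line would do it:)

access-list 1 permit 224.0.1.39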

Now this seems a little inefficient, right? Why should R4 even know about the RP for 234/8 if R3 is going to prevent mroute state from being created for 234.0.0.1 on that interface? If we could prevent R4 from learning that RP information, that would be great. Well, on R3 we can modify the boundary as follows:

R3(config)#int s1/0
R3(config-if)#ip multicast boundary 1 filter-autorp

Now R3 strips the denied ranges out of the Auto-RP discovery messages it forwards, so R4 only learns RP mappings for the groups permitted in the ACL:

R4#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 232.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:00:03, expires: 00:02:55
Group(s) 233.0.0.0/8
RP 1.1.1.1 (?), v2v1
Info source: 1.1.1.1 (?), elected via Auto-RP
Uptime: 00:00:03, expires: 00:02:53
R4#
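
One caveat with filter-autorp (as I understand the behavior, and as the first comment below also points out): the boundary ACL has to permit each announced group range in its entirety, or the whole range gets stripped. For example, if the boundary ACL matched only part of the 233/8 range that R1 announces, say

access-list 1 permit 233.0.0.0 0.0.255.255

then filter-autorp would remove the entire 233/8 mapping from the forwarded Auto-RP messages, and R4 would have no RP for any 233 group at all.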

3 comments:

  1. Hi, very good info. This was a little part of multicast that was bothering me. An important note here, which I kept missing: the group-list on the RP needs to be precise, and the boundary ACL needs to match the groups in the RP's group-list ACL exactly, or it will drop the RP and its group subsets (which means R4 will have NO RP info at all).
    Thanks again for the good blog
    Llewellyn Dowie - ccie to be2
    ldowie@ntlworld.com

  2. Thanks a lot for that info. I was just working my way through multicasting when I stumbled upon this. That explanation was crystal clear!

  3. Excellent explanation! I was struggling with filter-autorp.

    Thanks much...
