Linux: policy-based routing - iproute2

Task: A Linux box has two aliases configured on the interface:
    em2   - ip 10.10.0.10;
    em2:0 - ip 10.10.11.1;
    em2:1 - ip 10.10.22.2;

There are two routers that traffic can be forwarded to: 10.10.0.100 and 10.10.0.200. All traffic from 10.10.11.1 should be routed to 10.10.0.100 and traffic from 10.10.22.2 should be forwarded to 10.10.0.200.

IP routing

- rules - the routing policy database (RPDB)
# ip rule list
0:  from all lookup local
32766:  from all lookup main
32767:  from all lookup default
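
A minimal sketch of a solution for the task above (table IDs 100 and 200 are arbitrary unused table numbers):

# ip route add default via 10.10.0.100 table 100
# ip route add default via 10.10.0.200 table 200
# ip rule add from 10.10.11.1 lookup 100
# ip rule add from 10.10.22.2 lookup 200
# ip route flush cache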

 

Read more

F5 Troubleshooting

F5 Troubleshooting:

# tmsh show /sys version
# tmsh show /net route
# tcpdump -ni 0.0 host 192.168.1.109 and port 80   -- tcpdump on all interfaces, filter on host and port
# bigstart status bigd   - The bigd monitor daemon provides system health checks. Impact of not running: Monitoring not available

The master control program daemon (MCPD) is the messenger service that allows two-way communication between userland processes and the Traffic Management Microkernel (TMM).
Impact of not running: No traffic management functionality; the system status cannot be retrieved or updated, and the system cannot be re-configured; other daemons will not be functional.
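
Since bigstart manages mcpd as well, its status can be checked the same way:

# bigstart status mcpd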

# tmsh list ltm pool webservers all-properties | more

# tmsh modify sys db bigd.debug value enable; tail -f /var/log/bigdlog; tmsh modify sys db bigd.debug value disable   - enable bigd debug, tail output, disable bigd debug

 

Read more

Rib-groups example

Rib-groups simple example.

We created two routing instances, test1 and test2, each with one interface in it:
    test1 - vlan.641: 172.16.10.1/24
    test2 - vlan.642: 172.16.20.1/24

# show routing-instances
test1 {
    instance-type virtual-router;
    interface vlan.641;
}
test2 {
    instance-type virtual-router;
    interface vlan.642;
}
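
For reference, leaking routes between the instances is done with a rib-group, configured under routing-options and applied where the routes originate. A minimal sketch that imports test1's interface routes into test2's table (the group name test-rib is illustrative):

set routing-options rib-groups test-rib import-rib [ test1.inet.0 test2.inet.0 ]
set routing-instances test1 routing-options interface-routes rib-group inet test-rib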


The routing table looks like this:

Read more

RIB Group Confusion

This article is taken from http://www.subnetzero.info/2014/04/10/rib-group-confusion/

Continuing on the subject of confusing Junos features, I’d like to talk about RIB groups. When I started here at Juniper, I remember being utterly baffled by this feature and its use. RIB groups are confusing both because the official documentation is confusing, and because many people, trying to be helpful, say things that are entirely wrong. I do think there would have been an easier way to design this feature, but RIB groups are what we have, so that’s what I’ll talk about.

Read more

Aruba controller upgrade procedure

WiFi controller upgrade procedure:
1. take image backup:  (ny-wifi-master2) # backup flash
2. take logs backup:   (ny-wifi-master2) # tar logs tech-support
3. copy backup flash to ftp: (ny-wifi-master2) #copy flash: flashbackup.tar.gz ftp: [server_ip] rrm *********
4. download image from ftp:

(ny-wifi-master2) #copy ftp: [server_ip] rrm ArubaOS_MMC_6.3.1.6_43301 system: partition 1
Password:*********
Copying file:....................................................
File copied successfully.
Saving file to flash:
................................................................
The system will boot from partition 1 during the next reboot.
(ny-wifi-master2) # reload
Do you really want to restart the system(y/n): y
System will now restart!
Shutdown processing started
Syncing data....done.
Sending SIGKILL to all processes.
Please stand by while rebooting the system.
1:<7>ide-disk 0.0: shutdown
1:<0>Restarting system.
1:.
1:<2>Performing hard reset...

 

To check what software is installed, run:

#show image version
----------------------------------
Partition           : 0:0 (/dev/hda1)
Software Version    : ArubaOS 6.1.3.6 (Digitally Signed - Production Build)
Build number        : 36470
Label               : 36470
Built on            : Tue Dec 11 12:51:05 PST 2012
----------------------------------
Partition           : 0:1 (/dev/hda2) **Default boot**
Software Version    : ArubaOS 6.3.1.3 (Digitally Signed - Production Build)
Build number        : 42233
Label               : 42233
Built on            : Tue Feb 11 11:58:33 PST 2014
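
To confirm which partition the controller will boot from next, 'show boot' can also be used (output varies by ArubaOS version):

#show boot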

 

commit synchronize force

To enforce a commit synchronize on the Routing Engines, log in to the Routing Engine from which you want to synchronize and issue the commit synchronize command with the force option:

[edit]
user@host# commit synchronize force

re0:
re1:
commit complete

re0:
commit complete

[edit]
user@host#

 

For the commit synchronization process, the master Routing Engine commits the configuration and sends a copy of the configuration to the backup Routing Engine. Then the backup Routing Engine loads and commits the configuration. So, the commit synchronization between the master and backup Routing Engines takes place one Routing Engine at a time. If the configuration has a large text size or many apply-groups, commit times can be longer than desired.
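
If you want every commit to be synchronized automatically, so that a plain 'commit' behaves like 'commit synchronize', you can set this as the default behavior:

[edit]
user@host# set system commit synchronize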

SRX cluster initial setup

SRX cluster initial setup:

1. Power on both SRX units. Console to the first one.


2. Enable cluster mode and reboot the devices:
    On device A:    >set chassis cluster cluster-id 1 node 0 reboot
    On device B:    >set chassis cluster cluster-id 1 node 1 reboot


3. Remove default configuration:
root> configure shared
 delete interfaces
 delete system services dhcp   
 delete security nat
 delete protocols stp
 set protocols rstp
 delete security policies
 delete security zones
 delete vlans

4. Configure authentication and ssh access on each device:
root# set system root-authentication plain-text-password
root# set system services ssh root-login allow

 

5. Configure the device-specific settings, such as host names and management IP addresses. This is done with configuration groups and is the only part of the configuration that is unique to each node. Enter the following commands (all on the primary node):

on Node0:
root# set groups node0 system host-name nyc-broadway-451-0
root# set groups node0 interfaces fxp0 unit 0 family inet address 172.25.25.1/24

on Node1:
root# set groups node1 system host-name nyc-broadway-451-1
root# set groups node1 interfaces fxp0 unit 0 family inet address 172.25.25.2/24

Apply the groups:
root# set apply-groups "${node}"


The 'set apply-groups "${node}"' command is required: "${node}" expands to the local node's name (node0 or node1), so the per-node group set above is applied only to its own node.


6. Configure the FAB links (data plane links for RTO sync, etc):
     set interfaces fab0 fabric-options member-interfaces ge-0/0/2
     set interfaces fab0 fabric-options member-interfaces ge-0/0/3
     
     set interfaces fab1 fabric-options member-interfaces ge-5/0/2     
     set interfaces fab1 fabric-options member-interfaces ge-5/0/3


7. Configure Redundancy Group 0 for the Routing Engine failover properties, and Redundancy Group 1 (in this example, all interfaces are in a single redundancy group) to define the failover properties for the reth interfaces.
    
set chassis cluster reth-count 3
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1

- configure the switch fabric:
set interfaces swfab0 fabric-options member-interfaces ge-0/0/4
set interfaces swfab1 fabric-options member-interfaces ge-5/0/4


8. Configure interfaces:
set interfaces ge-0/0/15 gigether-options redundant-parent reth0
set interfaces ge-5/0/15 gigether-options redundant-parent reth0
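
The reth interface itself still needs to be bound to Redundancy Group 1 and given an address; a minimal sketch (the address is illustrative):

set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.168.1.1/24

Once committed, verify the cluster state:

> show chassis cluster status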

BGP tools, troubleshooting and monitoring external routing

This will provide a view from the street; it will display exactly what your router knows.

Route servers / Looking Glass – This is your basic external view. Log in and see what other ASNs know about your routes. Critically important during those “something is on fire” times mentioned above. They are maintained by a myriad of entities and positioned all over the globe.

BGPmon – A project by Andree Toonk. Allows for the automatic discovery and monitoring of prefixes, alerting on many, many attributes such as prefix hijacks. Free and commercial plans are available; the commercial plans are far more feature-rich and well worth it if you monitor large amounts of BGP.

Peermon – Part of the new and improved BGPmon. Allows for on-demand monitoring of prefixes within your network. Very useful for viewing as-path changes of destination networks for long-term troubleshooting.

RouteViews – Great project out of U of Oregon that was (is?) run by the groundbreaking David Meyer of OpenDaylight (and many other things) fame.  Peers with networks and records routing changes, allows for public query and has vast historical data.

bgplay - Great visualization tool for tracking routing, as-path, and prefix announcement changes. This is part of the RouteViews project and utilizes their vast historical data. It currently lacks IPv6 support and I'm unsure if it is maintained anymore.

Router Proxies – This has been a big thing in the R&E world for quite some time. Other entities may offer it; it's similar to a looking glass but more easily configured to allow or disallow different show commands. The code is open source and pretty easy to hack new commands into or adapt to new platforms (if I can do it, anyone can).

Lookup tools such as whois. I find that looking up ASNs and networks against ARIN, RIPE, and the other RIRs is very handy as a starting point. Using CLI commands such as "whois -h whois.arin.net 1224" would display useful information.

Very handy for prefixes and ASNs. There are also services like the Team Cymru whois server that can display date/time-based information for forensics and provide IP-to-ASN mappings. Also very handy. I believe this code is also open source.
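For example, the Team Cymru IP-to-ASN service can be queried directly (flag syntax as documented by Team Cymru; verify before scripting against it): whois -h whois.cymru.com " -v 8.8.8.8"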

IRR Toolset.  Extremely handy for automation of routing policy configuration.  I found it a tad painful to set up but it is a useful toolkit.

Notable Mention: NLNog RING – This is a trust-based unix host that provides a large variety of services to those that qualify for participation. Very handy when looking for an on-net perspective.

Notable Mention / Shameless Plug: perfSonar toolkit. In addition to the well-known performance testing tools, PS provides things like reverse traceroute and other handy networking widgets. It also has a far lower barrier to entry than the NLNog RING.

 

There are obviously more ways to do this, and possibly better ones, too. This is how I've done it for a long time and it has mostly worked for me. I had to learn most of this by trial and error, so I thought it might be useful to throw it all together in one place for future reference.

BGP in the Data Center: Why you need to deploy it now!

Overlay networks in the data center are here and are here to stay. It's now easier than ever to programmatically provision new networks with a click of the mouse. No need to worry about VLAN IDs, integrated bridging and routing, MC-LAG, or spanning tree. Overlay networks use data-plane encapsulations such as VXLAN or GRE to transport both Layer 2 and Layer 3 between virtual machines and physical servers. One of the key requirements of an overlay architecture in the data center is a rock-solid IP fabric: simply Layer 3 connectivity between every host in the network that participates in the overlay.
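
As a taste of what that fabric means in configuration terms, here is a minimal sketch of a single eBGP leaf-to-spine peering in Junos (ASNs, addresses, and the policy name are illustrative):

set routing-options autonomous-system 65001
set policy-options policy-statement send-direct term 1 from protocol direct
set policy-options policy-statement send-direct term 1 then accept
set protocols bgp group fabric type external
set protocols bgp group fabric export send-direct
set protocols bgp group fabric neighbor 10.1.1.0 peer-as 65000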

Read more

Bigip mgmt IP configuration

Configuring the cluster IP address using tmsh

You can configure the cluster IP address using tmsh after you connect a blade to a serial terminal console.

  1. Connect to the system using the serial console.
  2. Set the cluster IP address and subnet mask: tmsh modify sys cluster default address <ip_address/mask> Example: tmsh modify sys cluster default address 192.168.217.44/24
  3. Set the default gateway for the cluster: tmsh modify sys management-route default gateway <gateway_ip> Example: tmsh modify sys management-route default gateway 172.20.80.254
  4. Write the running configuration to the stored configuration files: tmsh save sys config

The system saves the new IP address, subnet mask, and gateway address for the cluster. You can now access the browser-based Configuration utility using the cluster IP address you assigned.
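
To verify, something like the following should display the cluster address and member state (on clustered platforms such as VIPRION): tmsh show sys cluster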

Minimum effort SRX Cluster upgrade procedure

This is a minimum effort upgrade procedure for an SRX Branch cluster.

It is assumed that the cluster is being managed through a reth interface, thus there is no direct access to node1 via fxp0, and that the cluster is running at least Junos 10.1R1, thus the ability to log in to the backup node from the master node exists.
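For reference, that cross-node login is done from the master's CLI with: >request routing-engine login node 1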

For a minimum downtime upgrade procedure instead of a minimum effort one, see Juniper KB17947, or use the cable pulling method described in these forums by contributor rahula.

Read more

BGP tools: troubleshooting and monitoring external routing in a nutshell

Time to rewind from the new and shiny and get back to the roots of networking. BGP is one of those odd protocols that is foundational to the functioning of the internet, yet somewhat hard to get experience with. Say what you will about this venerable protocol; it's been here a while and it is not going anywhere any time soon. I've been doing BGP since around late 1999, and I completely fell into it by accident, having only the Cisco Internet Routing Architectures book (which I literally read cover to cover), the Uyless Black routing protocols book, and whatever I could find on a random search engine to guide me, and that was only after having to learn on the CLI for the first 6-7 months. In actuality, that is how many of the folks of my vintage came into doing BGP: someone needed to announce some routes that were allocated to them by an RIR, or bring up some multi-homing, or whatever.

Read more

How to upgrade a SRX cluster with minimal downtime

Problem or Goal:

At no time can a cluster have mismatched code versions; this can result in network instability and unpredictable behavior. This means that to properly upgrade a cluster without ISSU (which is not supported on SRX Branch devices), you need to ensure that both nodes are rebooted and do not attempt to connect to each other while running different Junos code versions.

Zero downtime is not currently possible on SRX clusters. The goal of this article is to provide a means to upgrade an SRX cluster with the minimum amount of downtime possible. The following events can be expected during this process:

  • All sessions that use network address translation will be lost.

  • All sessions utilizing an ALG (like FTP, SIP, and so on) will be lost.

  • Dynamic routing protocol adjacencies will need to be re-established upon failover between the devices.

  • All other existing sessions will be able to fail over between devices.

  • Depending on the network configuration, traffic will fail over between devices with minimal packet loss.
Read more