Sunday, September 22, 2013

Fixing SSH timeouts on the ASA

I spent a bunch of time getting my head around class maps, policy maps, and service policies, in an effort to fix idle SSH connections being torn down after an hour (the default idle timeout for TCP connections on the ASA as of version 9.1). The documentation is a confusing web of headache, but I found this blog post to be useful reading. My solution was to leave the timeout unchanged, but to enable Dead Connection Detection (DCD) for SSH connections. Essentially, when an idle SSH connection hits the timeout, the ASA forges probe packets to both endpoints to verify that the socket is still open. If both endpoints respond, the idle timer is reset and the connection is kept open.


access-list ssh_ports remark access list to id ssh traffic for the ssh_ports class map
access-list ssh_ports extended permit tcp any any eq ssh 
access-list ssh_ports extended permit tcp any any eq 2222
class-map ssh_traffic
 description identify SSH traffic, so we can apply policy
 match access-list ssh_ports
policy-map generic_interface_policy
 class ssh_traffic
  set connection timeout dcd 
service-policy generic_interface_policy interface outside
service-policy generic_interface_policy interface inside
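
If the defaults don't suit, the DCD probe interval and retry count can be tuned, and you can confirm the policy took effect from the CLI. A quick sketch; the interval and retry values here are arbitrary examples, not recommendations:

! Optional tuning: probe every 15 seconds, give up after 5 unanswered probes
policy-map generic_interface_policy
 class ssh_traffic
  set connection timeout dcd 0:00:15 5

! Verify the policy is attached and watch the connection counters
ciscoasa# show service-policy interface outside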

Tuesday, September 17, 2013

Cisco ASA Remote Access configuration for Mac OS X

I spent the day fighting to get a remote access IPSec connection set up as follows:

  • ASA 5515-X, running version 9.1.
  • ASA network interfaces are already configured.
  • IPSec clients are assigned addresses from the range 123.0.0.199-201.
  • Client is running OS X 10.8.4 Mountain Lion.
  • Client is using the built-in OS X IPSec client.
  • Client IP is private, behind NAT, with a DHCP-assigned WAN IP.
  • After connecting, client should be able to reach the internal networks 123.0.0.128/26, 123.0.0.192/27.
  • All other traffic should not be sent across the VPN (i.e., split tunneling).
The following configuration should be added to the ASA:

ip local pool REMOTE_ACCESS_POOL 123.0.0.199-123.0.0.201
management-access inside
access-list REMOTE_ACCESS_SPLIT_TUNNEL remark The corporate network behind the ASA.
access-list REMOTE_ACCESS_SPLIT_TUNNEL standard permit 123.0.0.128 255.255.255.192 
access-list REMOTE_ACCESS_SPLIT_TUNNEL standard permit 123.0.0.192 255.255.255.224 
crypto ipsec ikev1 transform-set REMOTE_ACCESS_TS esp-aes-256 esp-sha-hmac 
crypto dynamic-map REMOTE_ACCESS_DYNMAP 1 set ikev1 transform-set REMOTE_ACCESS_TS
crypto map REMOTE_ACCESS_MAP 1 ipsec-isakmp dynamic REMOTE_ACCESS_DYNMAP
crypto map REMOTE_ACCESS_MAP interface outside
crypto ikev1 enable outside
crypto ikev1 policy 1
 authentication pre-share
 encryption aes-256
 hash sha
 group 2
 lifetime 7200
group-policy REMOTE_ACCESS_GP internal
group-policy REMOTE_ACCESS_GP attributes
 split-tunnel-policy tunnelspecified
 split-tunnel-network-list value REMOTE_ACCESS_SPLIT_TUNNEL
username hunter password **** encrypted
tunnel-group REMOTE_ACCESS_TUNNELGRP type remote-access
tunnel-group REMOTE_ACCESS_TUNNELGRP general-attributes
 address-pool REMOTE_ACCESS_POOL
 default-group-policy REMOTE_ACCESS_GP
tunnel-group REMOTE_ACCESS_TUNNELGRP ipsec-attributes
 ikev1 pre-shared-key *****
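
Once a client connects, the tunnel can be sanity-checked from the ASA. Nothing exotic, just the standard IKE/IPsec show commands (session details will obviously vary):

ciscoasa# show crypto ikev1 sa
ciscoasa# show crypto ipsec sa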

For an explanation of what all this does, I recommend reading the following Cisco docs. It is worth noting that this configuration does not work with the built-in Windows 7/8 VPN client, which uses IKEv2 rather than IKEv1.

The configuration for the built-in OS X IPSec client is described in the following doc. One gotcha I ran into (which is clearly stated in the document) is that the tunnel-group name must be specified in the 'Group Name' field on the Mac. In the case of the above configuration, the group name is REMOTE_ACCESS_TUNNELGRP.

Friday, September 13, 2013

Using FreeBSD as a DomU on XenServer

I am investigating a move away from VMware ESXi to $hypervisor as part of our new data center build. The primary candidates I am looking at are XenServer and an XAPI stack on Debian. Citrix doesn't officially support FreeBSD as a DomU, at least not as of XenServer 6.2. However, FreeBSD seems to install pretty happily as an HVM DomU if you specify the "Other install media" template in the XenCenter "New VM" wizard.
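
If you prefer the CLI over XenCenter, the same template can be used with xe. A rough sketch; the VM name and ISO label below are hypothetical placeholders:

# On the XenServer host; the name-label and ISO name are made up for illustration
xe vm-install template="Other install media" new-name-label=freebsd91
xe vm-cd-add vm=freebsd91 cd-name="FreeBSD-9.1-RELEASE-amd64-disc1.iso" device=3
xe vm-start vm=freebsd91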

I installed XenServer 6.2 on an old Dell 2950, with a pair of dual-core Xeons, 8GB RAM and 6x15k SAS drives in a RAID10 configuration. As an aside, I get garbage output on the boot prompt when I boot the host...a problem happily solved by mashing the Enter key in frustration until XenServer began booting.

As mentioned above, I installed a FreeBSD 9.1-RELEASE amd64 guest without much difficulty. I then wanted to see how the performance stacked up against our existing ESXi 5.0 infrastructure. As a crude benchmark, I ran a timed portsnap extract (the exact invocation is shown after the list) on the XenServer guest, on another FreeBSD guest on a similarly spec'd 1950 running ESXi, and on a 2950 with FreeBSD installed natively. The wall times were as follows.
  1. XenServer FreeBSD DomU: 12:20
  2. ESXi FreeBSD guest: 6:30
  3. Raw hardware: 5:28
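
For reference, the benchmark command was nothing fancier than the following. A minimal sketch, assuming portsnap fetch had already been run so that only the extract step is timed:

# Run identically on each guest and on the bare-metal host
time portsnap extract
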
I was rather disappointed to see XenServer fare so poorly against VMware. Not all was lost though, because my XenServer guest was running in HVM mode. I expected that I would see some performance improvement by using the paravirtualized (PV) drivers available in FreeBSD. To summarize the FreeBSD wiki: full PV support is only available on i386, but amd64 can use the PV block device and network interfaces. I tried building a PV image and shipping it over to the XenServer host, without success; I was unable to get XenServer to even attempt to boot my image.

I went back to my HVM DomU and installed the 9.1-p7 XENHVM kernel. On reboot, the guest hangs immediately after detecting the CDROM drive. For several minutes it periodically displays a message about a xenbusb_nop_confighook_cb timeout, then nothing. Some googling suggests that this is a known issue, with a workaround of removing the virtual CD device, as indicated in this thread. I removed the CDROM device by following these instructions, and the guest now boots happily. With the PV drivers, FreeBSD names the virtual disk device "ad0", and the network interface becomes "xn0". Because of these renames, you will want to update the guest's /etc/fstab, and probably the network configuration in /etc/rc.conf. Running the portsnap benchmark on the updated guest yields a time of 8:37. That cuts the wall time by roughly 30% relative to the full HVM DomU (put differently, the HVM run took about 43% longer), but still lags behind VMware.
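
For illustration, the guest-side changes look something like the following. The partition layout here is hypothetical, so match it to however your disk was actually sliced; likewise the old HVM-era NIC name (an emulated re0 or similar) is replaced by xn0:

# /etc/fstab (hypothetical layout; adjust to your own slices or labels)
/dev/ad0p2    /       ufs     rw      1       1
/dev/ad0p3    none    swap    sw      0       0

# /etc/rc.conf (the interface stanza previously referenced the emulated NIC)
ifconfig_xn0="DHCP"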

More experimentation is required to tell if more performance can be squeezed out of XenServer, or whether the live migration features justify the performance drop.

Saturday, September 7, 2013

Using FreeBSD loopback interfaces with BIRD

Why go for the simple solution, when you can first spend hours tearing out your hair in frustration?

I've been working on a new data center deployment, and getting my fingers back into the networking realm; a welcome change. This includes my first OSPFv3 deployment, and we're using BIRD. For the most part, I have been very happy with BIRD for OSPF and BGP; though I have run into some quirks.

The quirk on my mind at this moment is with loopback interfaces. The Cisco way of doing things seems to be to run iBGP sessions between loopback addresses. The rationale is that if you use an interface address, and that interface goes down, your iBGP session goes with it, as that address becomes unreachable. So you use a loopback interface, which is always up. The addresses on the loopback are advertised via an IGP, facilitating the iBGP connection.
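
To make the goal concrete, here is a rough sketch of what such an iBGP session might look like in bird.conf. The AS number is invented, W.X.Y.Z stands in for our loopback address, and A.B.C.D for a peer's loopback learned via the IGP:

# /usr/local/etc/bird.conf (illustrative only)
protocol bgp ibgp_peer1 {
    local as 65000;
    source address W.X.Y.Z;      # our loopback address
    neighbor A.B.C.D as 65000;   # peer's loopback, reachable via OSPF
}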

For better or worse, I decided to follow the herd, and go with a loopback interface. For IPv4, this was pretty straightforward. Configure the loopback in the OS, add it to bird.conf as a stub interface, and good to go. For reference, here are the bits to do so on FreeBSD.

# /etc/rc.conf
cloned_interfaces="lo1"
ifconfig_lo1="inet W.X.Y.Z/32"

# /usr/local/etc/bird.conf
protocol ospf {
    tick 2;
    area 0 {
        stub no;
        interface "vlan7", "vlan500" {
            cost 5;
            hello 2;
            dead 10;
            authentication cryptographic;
            password "password";
        };
        interface "lo1", "vlan1001" { stub; };
    };
}

And since the proof is in the pudding (or output)...

bird> show route for W.X.Y.Z
W.X.Y.Z/32 via A.B.C.D on vlan500 [ospf1 09:08] * I (150/5) [W.X.Y.Z]

When I went to configure OSPF for our IPv6 allocation, things didn't go quite so smoothly. I used the following similar configuration for the v6 BIRD instance.

# /etc/rc.conf
ifconfig_lo1_ipv6="inet6 2620:W:X:Y::Z/128"

# /usr/local/etc/bird6.conf
protocol ospf ospf_v6 {
    tick 2;
    area 0 {
        stub no;
        interface "vlan7", "vlan500" {
            cost 5;
            hello 2;
            dead 10;
            # OSPFv3 has no protocol-level authentication; it is meant to be
            # secured with IPsec AH instead.
            #authentication cryptographic;
            #password "password";
        };
        interface "lo1", "vlan1001" { stub; };
    };
}

With this configuration in place, I realized that my IPv6 loopback address was not being advertised. Examining the logs, BIRD was quite happy to tell me that it had filtered out that route. WTF? After a bunch of time wasted searching Google and throwing shit at the wall, my loopback address was still not working. I finally stumbled on this mailing list thread, where I learned that using a loopback is NOT an expected configuration, at least in the eyes of the developers. Furthermore, the fact that it works in IPv4 was surprising, and perhaps a bug. The reason BIRD rejects my lo1 address is that the interface carries no link-local address alongside the global one. Without starting a discussion on whether Cisco or BIRD is more right, I'll just say I was *((# *grumble* *f'n BIRD*.
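
The missing link-local is easy to confirm from the FreeBSD side; if this shows only the global address and no fe80:: entry, bird6 will refuse to use the interface:

$ ifconfig lo1 | grep inet6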

I jumped through a few more hoops, and finally discovered that I could use a tap interface in lieu of a loopback. It would generate a link-local address, OSPF would advertise it, and iBGP happiness filled the kingdom. Never mind that it is about as hacky as you can get. For what it's worth, here is the rc.conf goo to make it happen.

# /etc/rc.conf
# DON'T USE THIS! IT'S HACKY AND EVERYONE WILL LAUGH AT YOU.
cloned_interfaces="tap0"
ifconfig_tap0_ipv6="inet6 2620:W:X:Y::Z/128 -ifdisabled"

I slept on it. When I woke up, I had some OSPF fixes on my mind for the ASAs (that's another raar story). I was poking around a little more when I was reminded of the BIRD stubnet configuration directive. In a nutshell, BIRD will always advertise a stubnet route...perfect! I changed the configuration accordingly, and life is good again.

# /etc/rc.conf
ifconfig_lo1_ipv6="inet6 2620:W:X:Y::Z/128"

# /usr/local/etc/bird6.conf
protocol ospf ospf_v6 {
    tick 2;
    area 0 {
        stub no;
        interface "vlan7", "vlan500" {
            cost 5;
            hello 2;
            dead 10;
        };
        interface "vlan1001" { stub; };
        stubnet 2620:W:X:Y::Z/128;
    };
}

and the proof!

bird> show route for 2620:W:X:Y::Z/128
2620:W:X:Y::Z/128 via fe80::225:90ff:fe6b:f52c on vlan7 [ospf_v6 09:12] * I (150/15) [W.X.Y.Z]