If you want to use CARP on your VMware guest VMs, you will probably find that it doesn't work out of the box. This is because ESXi rejects promiscuous mode on the virtual switch by default. To enable promiscuous mode, go to the host's Networking configuration in the vSphere Client, open the properties for the vSwitch, and under the "Security" tab change "Promiscuous Mode" to Accept.
For bonus points, if you are using NIC Teaming on ESXi (even with just a standby adapter), you will find that your CARP interfaces always remain in BACKUP state, and your logs fill with the following messages.
Nov 12 11:25:51 kernel: carp0: MASTER -> BACKUP (more frequent advertisement received)
Nov 12 11:25:51 kernel: carp0: link state changed to DOWN
Nov 12 11:25:54 kernel: carp0: link state changed to UP
This is because ESXi rebroadcasts the CARP advertisements, which then come back down the other members of the team. To correct this, you need to dig into the Advanced Settings, under Software, and change Net.ReversePathFwdCheckPromisc to 1. Annoyingly, you will need to reboot the host for the change to take effect, but it works.
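If you prefer the command line, the same setting can be checked and changed with esxcli; this is a sketch, and the exact syntax may differ slightly between ESXi versions.
# verify the current value
esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc
# change it to 1 (a host reboot is still required afterwards)
esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1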
Tuesday, November 12, 2013
Wednesday, October 9, 2013
Using BIRD to route over OpenVPN tunnels.
OpenVPN tunnels, good. BIRD routing daemon, great. OSPF on OpenVPN tunnels, headache. The combination of OpenVPN and the BIRD routing daemon for OSPF is nothing new for me; I've been using it pretty happily for over a year. However, something about the setup always felt a bit precarious, and as I have started to scale the implementation, I've realized that my strategy needed a rethink.
The original design used a /31 subnet on each tunnel, with one address at each endpoint. Logically, it made perfect sense. The catch is that OSPF does not automatically advertise the IP addresses of the tunnels (a quirk of OpenVPN, BIRD, or both). Traffic would flow through the routers, but the remote tunnel address would be unreachable. I solved this by adding a stubnet x.x.x.x/31 directive to the OSPF configuration at one of the endpoints. At the time, this worked fine.
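For reference, that workaround is a one-line addition to the OSPF area on one endpoint, along these lines (the prefix and interface name here are placeholders):
# /usr/local/etc/bird.conf (one endpoint only)
protocol ospf {
    area 0 {
        stubnet 10.255.0.0/31;          # the tunnel's /31, so both endpoint addresses get advertised
        interface "tun0" { cost 10; };
    };
}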
As I brought a third site online, with multiple VPN tunnels between each site, the original design quickly broke down. The /31s were no longer consistently routable. I remedied this by changing the /31 stubnet to a /32, with each router advertising its own IP addresses. Life was briefly good, until I realized that restarting the OpenVPN tunnels would fail. OpenVPN would log the error "router-a openvpn_1199[18976]: FreeBSD ifconfig failed: external program exited with error status: 1", followed by "ifconfig: ioctl (SIOCAIFADDR): File exists". From the depths of my memory, I recalled that this error occurs when you attempt to configure an interface with an address that already exists in the routing table. Since BIRD was already advertising the /32 host IP, trying to address the tunnel would fail.
Some creative restarting of OpenVPN and BIRD resolved the problem, but that is not an acceptable solution for production. I turned to the BIRD mailing list with my situation, and very quickly received a response from one of the BIRD developers. He suggested a number of ideas, the most promising of which was to dedicate a subnet to each router, as a pool from which that router draws addresses for its local tunnel endpoints. The subnet is then configured as a stubnet in BIRD. The two ends of a given tunnel may not be remotely close to each other, but this is fine because a tunnel is a point-to-point link. Tunnels can be restarted at will, without contention from existing routes in the table.
The diagram below shows an example topology. Notice that the two endpoints of each tunnel are in completely different networks. For example, on the tunnel between router-a and router-b, router-a uses an IP of 192.168.0.0, and router-b uses 10.0.0.0. It doesn't matter; it works just fine. With the given stubnet configured in the bird.conf for each router, all tunnel endpoints are reachable. The network and broadcast addresses of each router's stubnet are also usable, as shown in the example.
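To make that concrete, here is a rough sketch of the relevant pieces on router-a, using the addresses from the example above (the /29 pool size, interface names, and remote hostname are assumptions):
# router-a: /usr/local/etc/bird.conf
protocol ospf {
    area 0 {
        stubnet 192.168.0.0/29;          # router-a's pool of local tunnel endpoint addresses
        interface "tun*" { cost 10; };   # run OSPF over the OpenVPN tunnels
    };
}
# router-a: excerpt of the OpenVPN config for the tunnel to router-b
dev tun0
remote router-b.example.net 1199
# local endpoint comes from router-a's pool, remote endpoint from router-b's pool
ifconfig 192.168.0.0 10.0.0.0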
Sunday, September 22, 2013
Fixing SSH timeouts on the ASA
I spent a bunch of time getting my head around class maps, policy maps, and service policies, in an effort to correct the issue of idle SSH connections being timed out after an hour (the default idle timeout for TCP connections on the ASA at version 9.1). The documentation is a confusing web of headache, but I found this blog post to be useful reading. My solution was to leave the timeout unchanged, but to enable Dead Connection Detection (DCD) for SSH connections. Essentially, when an idle SSH connection hits the timeout threshold, the ASA sends probes to both endpoints to verify that the connection is still alive. If there is a response, the idle timer is reset.
access-list ssh_ports remark access list to id ssh traffic for the ssh_ports class map
access-list ssh_ports extended permit tcp any any eq ssh
access-list ssh_ports extended permit tcp any any eq 2222
class-map ssh_traffic
description identify SSH traffic, so we can apply policy
match access-list ssh_ports
policy-map generic_interface_policy
class ssh_traffic
set connection timeout dcd
service-policy generic_interface_policy interface outside
service-policy generic_interface_policy interface inside
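To confirm that the policy is attached and DCD is being applied to matching connections, checking the service policy on an interface should be enough (the exact output varies by ASA version):
show service-policy interface outside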
Tuesday, September 17, 2013
Cisco ASA Remote Access configuration for Mac OS X
I spent the day fighting to get a remote access IPSec connection set up as follows:
- ASA 5515-X, running version 9.1.
- ASA network interfaces are already configured.
- IPSec clients are assigned addresses from the range 123.0.0.199-201.
- Client is running OS X 10.8.4 Mountain Lion.
- Client is using the built-in OS X IPSec client.
- Client IP is private, behind NAT, with a DHCP-assigned WAN IP.
- After connecting, client should be able to reach the internal networks 123.0.0.128/26, 123.0.0.192/27.
- All other traffic is not sent across the VPN.
The following configuration should be added to the ASA:
ip local pool REMOTE_ACCESS_POOL 123.0.0.199-123.0.0.201
management-access inside
access-list REMOTE_ACCESS_SPLIT_TUNNEL remark The corporate network behind the ASA.
access-list REMOTE_ACCESS_SPLIT_TUNNEL standard permit 123.0.0.128 255.255.255.192
access-list REMOTE_ACCESS_SPLIT_TUNNEL standard permit 123.0.0.192 255.255.255.224
crypto ipsec ikev1 transform-set REMOTE_ACCESS_TS esp-aes-256 esp-sha-hmac
crypto dynamic-map REMOTE_ACCESS_DYNMAP 1 set ikev1 transform-set REMOTE_ACCESS_TS
crypto map REMOTE_ACCESS_MAP 1 ipsec-isakmp dynamic REMOTE_ACCESS_DYNMAP
crypto map REMOTE_ACCESS_MAP interface outside
crypto ikev1 enable outside
crypto ikev1 policy 1
authentication pre-share
encryption aes-256
hash sha
group 2
lifetime 7200
group-policy REMOTE_ACCESS_GP internal
group-policy REMOTE_ACCESS_GP attributes
split-tunnel-policy tunnelspecified
split-tunnel-network-list value REMOTE_ACCESS_SPLIT_TUNNEL
username hunter password **** encrypted
tunnel-group REMOTE_ACCESS_TUNNELGRP type remote-access
tunnel-group REMOTE_ACCESS_TUNNELGRP general-attributes
address-pool REMOTE_ACCESS_POOL
default-group-policy REMOTE_ACCESS_GP
tunnel-group REMOTE_ACCESS_TUNNELGRP ipsec-attributes
ikev1 pre-shared-key *****
For an explanation of what all this does, I recommend reading the following Cisco docs. It is worth noting that this configuration does not work with Windows 7/8 clients, which use IKEv2 instead of IKEv1.
The configuration for the built-in OS X IPSec client is described in the following doc. One gotcha I ran into (which is clearly stated in the document) is that the tunnel-group name must be specified in the 'Group Name' field on the Mac. In the case of the above configuration, the group name is REMOTE_ACCESS_TUNNELGRP.
Friday, September 13, 2013
Using FreeBSD as a DomU on XenServer
I am investigating a move away from VMware ESXi to $hypervisor, as part of our new data center build. The primary candidates I am looking at are XenServer and an XAPI stack on Debian. Citrix doesn't officially support FreeBSD as a DomU, at least not as of XenServer 6.2. However, FreeBSD installs happily enough as an HVM DomU if you choose the "Other install media" template in the XenCenter "New VM" wizard.
I installed XenServer 6.2 on an old Dell 2950, with a pair of dual-core Xeons, 8GB RAM, and 6x 15k SAS drives in a RAID10 configuration. As an aside, I got garbage output at the boot prompt when booting the host, a problem happily solved by mashing the Enter key in frustration until XenServer began booting.
As mentioned above, I installed a FreeBSD 9.1-RELEASE amd64 guest without much difficulty. I then wanted to see how the performance stacked up against our existing ESXi 5.0 infrastructure. As a crude benchmark, I ran a timed portsnap extract on the XenServer guest, on another FreeBSD guest on a similarly spec'd 1950 running ESXi, and on a 2950 with FreeBSD installed natively. The wall times were as follows.
- XenServer FreeBSD DomU: 12:20
- ESXi FreeBSD guest: 6:30
- Raw hardware: 5:28
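For the record, the benchmark on each box was just the extract step, timed, assuming the ports snapshot had already been fetched:
portsnap fetch            # download the ports snapshot first (not part of the timing)
time portsnap extract     # extract the full ports tree; the wall time is what's reported above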
I was rather disappointed to see XenServer fare so poorly against VMware. Not all was lost, though, because my XenServer guest was running in HVM mode. I expected that I would see some performance improvement by using the paravirtualized drivers available in FreeBSD. To summarize the FreeBSD Wiki, full PV support is only available on i386, but amd64 can use the PV block device and network interfaces. I tried building a PV image and shipping it over to the XenServer host, without success; I was unable to get XenServer to even attempt to boot my image.
I went back to my HVM DomU and installed the 9.1-p7 XENHVM kernel. On reboot, the guest hung immediately after detecting the CDROM drive. For several minutes it periodically printed a message about an xenbusb_nop_confighook_cb timeout, then nothing. Some googling suggests that this is a known issue, with a workaround of removing the virtual CD device, as indicated in this thread. I removed the CDROM device by following these instructions, and the guest now boots happily. With the PV drivers, your virtual disk device is named "ad0" by FreeBSD, and the network interface becomes "xn0". Because of these changes, you will want to update your guest's /etc/fstab file, and probably the network configuration in /etc/rc.conf. Running the portsnap benchmark on the updated guest yields a time of 8:37...a 43% improvement over the full HVM DomU, but still lagging behind VMware.
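For reference, building the XENHVM kernel and the follow-up device renames look roughly like this; the partition layout and interface settings below are only examples, so adjust to match your guest:
# build and install the XENHVM kernel from the 9.1 sources
cd /usr/src
make buildkernel KERNCONF=XENHVM
make installkernel KERNCONF=XENHVM
# /etc/fstab -- the disk appears as ad0 under the PV block driver (example GPT layout)
/dev/ad0p2    /       ufs     rw      1       1
/dev/ad0p3    none    swap    sw      0       0
# /etc/rc.conf -- the NIC becomes xn0
ifconfig_xn0="DHCP"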
More experimentation is required to tell if more performance can be squeezed out of XenServer, or whether the live migration features justify the performance drop.
Saturday, September 7, 2013
Using FreeBSD loopback interfaces with BIRD
Why go for the simple solution, when you can first spend hours tearing out your hair in frustration?
I've been working on a new data center deployment, and getting my fingers back into the networking realm, a welcome change. This includes my first OSPFv3 deployment, and we're using BIRD. For the most part, I have been very happy with BIRD for OSPF and BGP, though I have run into some quirks.
The quirk on my mind at this moment is with loopback interfaces. The Cisco way of doing things seems to be to run iBGP sessions between loopback addresses. The rationale is that if you use an interface address, and that interface goes down, your iBGP session goes with it, as that address becomes unreachable. So you use a loopback interface, which is always up. The addresses on the loopback are advertised via an IGP, facilitating the iBGP connection.
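In BIRD terms, the iBGP side of that looks something like the snippet below; the AS number and the peer's address (E.F.G.H) are placeholders, and W.X.Y.Z is the local loopback address advertised via OSPF:
# /usr/local/etc/bird.conf
protocol bgp ibgp_peer1 {
    local as 65000;
    source address W.X.Y.Z;       # source the session from our loopback
    neighbor E.F.G.H as 65000;    # the peer's loopback, reachable via OSPF
    import all;
    export all;
}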
For better or worse, I decided to follow the herd, and go with a loopback interface. For IPv4, this was pretty straightforward. Configure the loopback in the OS, add it to bird.conf as a stub interface, and good to go. For reference, here are the bits to do so on FreeBSD.
# /etc/rc.conf
cloned_interfaces="lo1"
ifconfig_lo1="inet W.X.Y.Z/32"
# /usr/local/etc/bird.conf
protocol ospf {
tick 2;
area 0 {
stub no;
interface "vlan7", "vlan500" {
cost 5;
hello 2;
dead 10;
authentication cryptographic;
password "password";
};
interface "lo1", "vlan1001" { stub; };
};
}
And since the proof is in the pudding (or output)...
bird> show route for W.X.Y.Z
W.X.Y.Z/32 via A.B.C.D on vlan500 [ospf1 09:08] * I (150/5) [W.X.Y.Z]
When I went to configure OSPF for our IPv6 allocation, things didn't go quite so smoothly. I used the following, similar configuration for the IPv6 BIRD daemon.
# /etc/rc.conf
ifconfig_lo1_ipv6="inet6 2620:W:X:Y::Z/128"
# /usr/local/etc/bird6.conf
protocol ospf ospf_v6 {
tick 2;
area 0 {
stub no;
interface "vlan7", "vlan500" {
cost 5;
hello 2;
dead 10;
# OSPFv3 has no protocol-level authentication; it is supposed to be protected with IPsec AH instead.
#authentication cryptographic;
#password "password";
};
interface "lo1", "vlan1001" { stub; };
};
}
With this configuration in place, I realized that my IPv6 loopback address was not being advertised. Examining the logs, I found that BIRD was quite happy to tell me it had filtered out that route. WTF? After a bunch of time wasted searching Google and throwing shit at the wall, my loopback address was still not working. I finally stumbled on this mailing list thread, where I learned that using a loopback is NOT an expected configuration; at least not in the eyes of the developers. Furthermore, the fact that it works for IPv4 was surprising to them, and perhaps a bug. The reason BIRD rejects my lo1 address is that the interface has no link-local address alongside it. Without starting a discussion on whether Cisco or BIRD is more right, I'll just say I was *((# *grumble* *f'n BIRD*.
I jumped through a few more hoops, and finally discovered that I could use a tap interface in lieu of a loopback. It would generate a link-local address, OSPF would advertise it, and iBGP happiness filled the kingdom. Never mind that it is about as hacky as you can get. For what it's worth, here is the rc.conf goo to make it happen.
# /etc/rc.conf
# DON'T USE THIS! IT'S HACKY AND EVERYONE WILL LAUGH AT YOU.
cloned_interfaces="tap0"
ifconfig_tap0_ipv6="inet6 2620:W:X:Y::Z/128 -ifdisabled"
I slept on it. When I woke up, I had some OSPF fixes on my mind for the ASAs (that's another raar story). I was poking around a little more when I was reminded of the BIRD stubnet configuration directive. In a nutshell, BIRD will always advertise a stubnet route...perfect! I changed the configuration accordingly, and life is good again.
# /etc/rc.conf
ifconfig_lo1_ipv6="inet6 2620:W:X:Y::Z/128"
# /usr/local/etc/bird6.conf
protocol ospf ospf_v6 {
tick 2;
area 0 {
stub no;
interface "vlan7", "vlan500" {
cost 5;
hello 2;
dead 10;
};
interface "vlan1001" { stub; };
stubnet 2620:W:X:Y::Z/128;
};
}
and the proof!
bird> show route for 2620:W:X:Y::Z/128
2620:W:X:Y::Z/128 via fe80::225:90ff:fe6b:f52c on vlan7 [ospf_v6 09:12] * I (150/15) [W.X.Y.Z]
Thursday, June 13, 2013
Configuring NFSv4 on FreeBSD
After much monkeying around, I finally have NFSv4 running on FreeBSD. Since there seems to be a lack of documentation specifically for NFSv4, here is my take on it. The nfsv4 man page does tell you what you need to know in order to get a basic client and server running, but it was clear to me only after I had it working. In my setup, the server is running FreeBSD 9.1, and the client runs 9.0.
Server Configuration
- In /etc/rc.conf, add the following lines
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
- Start the NFS daemons by running
/etc/rc.d/nfsd start
- Open /etc/exports in your preferred editor. Add a "V4:" line to the file, specifying the root of your NFSv4 tree. There are two choices at this point. Option 1 is to place your NFSv4 root at a location other than the root of the server filesystem. With this option, you can create an arbitrary NFS tree for clients to attach to, independent of how data is actually situated on the filesystem(s). Nullfs mounts may be used to include outside directories in the NFS root (see the nullfs sketch after this list). Option 2 is to make the NFSv4 root the actual root of the system, which preserves the behavior of old NFS implementations. Regardless of where you put your V4 root, you must also add export lines, in the same style as NFSv3. The exports man page can be helpful here, and discusses the security implications of using NFSv4.
# Option 1
V4: /nfsv4 -network=10.0.0.0 -mask=255.255.255.192
/nfsv4/ports -maproot=root: -network=10.0.0.0 -mask=255.255.255.192
# Option 2
V4: / -network=10.0.0.0 -mask=255.255.255.192
/usr/pxe/ports -maproot=root: -network=10.0.0.0 -mask=255.255.255.192
- Reload the exports file by signalling mountd
killall -HUP mountd
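As mentioned in step 3, nullfs mounts are one way to populate an NFSv4 root that lives outside the real filesystem root. A minimal sketch, assuming the Option 1 layout above with the real data living in /usr/pxe/ports:
# /etc/fstab -- graft the ports data into the NFSv4 tree
/usr/pxe/ports    /nfsv4/ports    nullfs    rw    0    0
The same thing can be done on the fly with mount_nullfs /usr/pxe/ports /nfsv4/ports.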
Client Configuration
Clients should now be able to mount the exported filesystem using the following commands, corresponding to the NFSv4 root options specified above. Notice that with option 1, the remote path omits the /nfsv4 prefix of the server.
# Option 1
mount -t nfs -o nfsv4 server:/ports /mnt
# Option 2
mount -t nfs -o nfsv4 server:/usr/pxe/ports /mnt
Errors
If you get the following error when trying to mount from the client, don't be fooled:
mount_nfs: /mnt, : No such file or directory
This may indicate that you have misspelled the remote path in your mount command. It may also indicate that you have an error in your exports file, or that your exports file is not configured the way you think it is. Go back and read step 3 of the Server Configuration.
Friday, May 3, 2013
Constant load on R300 running FreeBSD 9.1
An issue cropped up on the R300 hardware with FreeBSD 9.1. The HPET timer selected on this hardware causes a constant load average of approximately 0.50 while idle. This seems to be corrected by forcing the LAPIC time source. The following sysctl can be set via the sysctl command line utility, or in /etc/sysctl.conf:
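A minimal sketch, assuming the event timer selection is the sysctl in question:
# /etc/sysctl.conf -- use the LAPIC event timer instead of HPET
kern.eventtimer.timer=LAPIC
The same value can be set at runtime with sysctl kern.eventtimer.timer=LAPIC.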