BFD (Bidirectional Forwarding Detection) in OpenBSD [pdf] (openbsd.org)
70 points by notaplumber 38 days ago | 14 comments



Fairly standard to use this in a core network to improve link failure detection; even "fast" routing protocols can take a few seconds to hit a dead timer on troublesome links.

The key with using this in some networks is that there may be multiple networks or ISPs between your two routers, so links may appear up but could actually be dropping traffic.
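
(To make the timer gap concrete, with illustrative defaults rather than figures from the linked paper: OSPF commonly runs 10 s hellos with a 40 s dead timer, and BGP defaults to 60 s keepalives with a 180 s hold timer, while BFD at, say, a 100 ms interval and a multiplier of 3 declares the path dead in roughly 300 ms. And because BFD sends its own packets across the actual forwarding path, it also catches the "link looks up but drops traffic" case.)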

However, not too clear why you'd want this on a server?

Most servers support bonding of NIC interfaces, can quickly detect NIC and physical link issues, and don't run routing protocols, which are what benefit from this quick detection.


There is a new trend, especially with virtualisation, whereby you push Layer 3 all the way down to the host.

In my case, at $WORK, we use a product called Calico with OpenStack. Calico turns a standard Linux machine into a router, using BGP to advertise the IPs of the VMs running on it to the rest of the network.

We use BFD on the edge to detect failures other than link down (for example, if for some odd reason the SFP is still inserted and the link is still considered up, yet the ToR is deadlocked and no longer routing traffic).
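
As a rough sketch of what that looks like in practice (not our actual config; BIRD 2.x syntax, and all addresses and AS numbers here are placeholders), the host's routing daemon can tie the BGP session to a BFD session:

    # Run BFD towards the ToR on the uplink interfaces.
    protocol bfd {
        interface "eth*" {
            interval 100 ms;   # tx/rx interval
            multiplier 5;      # declare down after ~500 ms of silence
        };
    }

    # BGP session to the ToR; "bfd on" tears the session down as soon as
    # BFD fails, instead of waiting for the BGP hold timer to expire.
    protocol bgp tor {
        local as 65101;
        neighbor 192.0.2.1 as 65100;
        bfd on;
        ipv4 {
            export where source = RTS_DEVICE;  # advertise locally attached VM routes
        };
    }

The point is that the failover trigger is the BFD session running over the forwarding path itself, not just interface link state.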

Pushing layer 3 down further means we no longer have large VLANs spanning multiple switches with a huge broadcast domain. It gives us more flexibility in where workloads live, and traffic can be sent to the end nodes using ECMP and other types of load balancing.

Our network guys love it; they already know how to do traffic engineering with BGP, and this makes it even simpler.


I'll echo that - although I won't call it new. Using routing protocols at server level, binding a service IP address to the loopback interface and announcing it through an IGP has long been a tool in the infrastructure arsenal. It is especially useful in multi-tenanted services and/or logically partitioned infrastructure, although I first came across it around fifteen years ago whilst building new-wave IP carriers in Europe.
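
A minimal sketch of that pattern on a Linux host, assuming BIRD as the routing daemon and OSPF as the IGP (the address and interface names are made up for illustration):

    # Bind the service address to loopback.
    ip addr add 203.0.113.10/32 dev lo

    # bird.conf fragment (BIRD 2.x): pick up the loopback /32
    # and leak it into the IGP.
    protocol direct {
        ipv4;
        interface "lo";
    }

    protocol ospf v2 {
        ipv4 {
            export where source = RTS_DEVICE;
        };
        area 0 {
            interface "eth0";
        };
    }

When the host (or the daemon) dies, the /32 is withdrawn from the IGP and traffic stops being attracted to it; announce the same address from several hosts and you get the multipath or anycast behaviour mentioned below, intentionally or otherwise.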

This pattern avoids some of the use cases for overlay networks (especially tunnels, but anything that reduces MTU can still cause problems in this day and age) and reduces layer 2 domain scale. It aids service mobility (especially great for anything in a logical partition, whether it's a JVM, a jail/zone or a VM), adds multipath resilience for clustered services, gives infrastructure-level visibility into group membership, provides a cheap failover mechanism for intermediate devices like reverse proxies or load balancers, and is simple to extend, e.g. more service addresses for new tenants. And yeah, the network guys love it.

If you are very clever it can also extend recursively into the virtual domain: a VM binds a service address to its loopback interface and announces it via an IGP over a virtual interface to the host, which is basically now just a router. This reduces L2 domain size issues for VM farms (albeit by moving them to L3), and solves IP configuration issues, e.g. during VM migration to another host or during DR processes. However, the usual Knuth caveats about being very clever apply here. If you do this with, say, a JVM running in Docker inside a Xen domU inside a physical host, then you get what's coming to you. Or you could aggregate the announcements at the host if you didn't need the migration part.

The downside of all this is that ops people without ISP or MAN/WAN experience generally need this pattern explained a few times before they get it (some never do - try not to hire those ones though), and practically no *nix distribution or management/config tools support this out-of-the-box, so there's a build & maintenance tradeoff. You can also build things that make some applications get really, really confused. Inadvertently creating an anycasted service might be fun; it might also be a recipe for inconsistent application data.


What types of problems are you solving in this space? Internal infrastructure, service provider? Do you not need any sort of layer 2 adjacency between your servers? It sounds very nice indeed to be able to resolve your location/identity split at the VM infrastructure layer. I had a very hard time getting customers away from wanting L2 adjacency between entire datacenters in my network consulting career for things like DR in their enterprises...


Very interesting. For a network engineer that really does sound like heaven: full layer 3 all the way down to the hosts!

I recently did some Cisco ACI/Nexus work with virtualisation; however, the client was only really interested in having massive layer 2 bridge domains spanning two DC sites to make their app design/failover easier.


> However, not too clear why you'd want this on a server?

Is that just a general question about servers? If so, you're probably right.

However, OpenBSD is frequently used as a firewall or router. So if a developer has an itch to scratch, and it's within the bailiwick of what OpenBSD is generally used for, then why not have it?


I was more surprised that this got to number 2 on the front page; it seemed like something people might have actually been waiting for, and I was wondering why that would be the case.

My day job is networking for a UK ISP/DC, and the only places I could really see people wanting this were on devices like route reflectors within your network, or at a peering exchange.

However I suppose people are just using OpenBSD to do more than I thought.


I think there are a lot of HN readers who just like the OpenBSD project in general. They may or may not use it for anything particularly fancy (or at all), but are interested in following its developments.


I'm one of those. I like to read about what they do because I like the idea of it, even if I don't have a use for OpenBSD in my life besides a router. I could read their output all day, like:

- Results of audits, or how they got bizarre bugs to surface with security features/randomization/running on exotic hardware platforms.
- Security mitigations and software hardening as they find potential attack surface.
- Recreating the most common use cases of a common piece of software with <5% of the codebase.


OpenBSD isn't just servers...

https://news.ycombinator.com/item?id=11889000

https://news.ycombinator.com/item?id=11888695


Agreed, but generally you'd expect to see OpenBSD sit at the edge of a network (server, VPN, firewall), whereas BFD is typically more of a core protocol.


I use BFD on our Linux servers in combination with Linux Virtual Server (LVS, a load balancer). There are two LVS servers, which use BFD to decide who receives traffic from the core (a Juniper EX4200). The LVS servers also share state so that they can fail over between each other and keep routing existing sessions.

I use the implementation in bird (http://bird.network.cz/) for both IPv4 and IPv6.

With Juniper you can actually set a static route to be live or not live based on BFD directly, so there's no need to use BGP in our case, though that is common for routers that can't do it directly on a static route.
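
For reference, the Junos feature being described looks roughly like this (a sketch with placeholder addresses and timers, not the poster's configuration):

    # Static default route that is pulled from the routing table
    # if the BFD session to the next hop fails.
    routing-options {
        static {
            route 0.0.0.0/0 {
                next-hop 198.51.100.2;
                bfd-liveness-detection {
                    minimum-interval 300;  # ms
                    multiplier 3;          # detection time ~ 300 ms * 3 = 900 ms
                }
            }
        }
    }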


> However, not too clear why you'd want this on a server?

OpenBSD has a lot of routing technology built-in, so it's common to use it as a firewall, router, VPN gateway, etc.

A server running OpenBSD isn't going to replace a core router, but it's very flexible.


Most routing protocols can remove indirect next hops once the interface they depend on has been determined to be down.

LACP/bonding is another challenge, one that requires either min-links or micro-BFD.

BFD is often run at aggressive intervals (less than 1000 ms to detect a failure), and is not well suited to running on general-purpose CPUs that are also performing all of the management and control plane tasks. BFD is ideally implemented in hardware.
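
To put rough numbers on that (illustrative figures, not from the thread): BFD detection time is approximately the negotiated interval multiplied by the detect multiplier, so a 300 ms interval with a multiplier of 3 gives ~900 ms detection, and a 50 ms interval with the same multiplier gives ~150 ms. At the aggressive end, a general-purpose CPU that stalls for a few hundred milliseconds can miss enough control packets to flap the session, which is why offloading BFD to dedicated hardware is preferred.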



