As I am now also maintaining a couple of firewalls, I figured it might be a good idea to get familiar with their rulesets.
Frankly, I was shocked.
Shocked by the number of conditions and rules that are used repeatedly in sub-chains for no particular reason, rules that have no effect, and rules that are plainly pointless.

Here's an example:

-A INPUT -s ${HOST1} -j DROP
-A INPUT -s ${HOST2} -j DROP
-A INPUT -j DROP

This is an example similar to what I actually found. The first and second rule discard packets coming from specific hosts and the third discards all packets. Unless you want to account for discarded traffic from specific hosts, and I don’t really see much reason for this, you don’t need the first two rules, as these will be covered by the third.

Here’s another one:

-A INPUT -d ${SERVER1} -p tcp --dport 80 -j ACCEPT
-A INPUT -d ${SERVER1} -p tcp --dport 80 -s ${HOST1} -j ACCEPT

In this example the second rule is completely useless and will never be triggered, as the first rule already covers all packets the second rule could apply to.

IPTables is a great tool and through the creation of sub-chains it allows great modularity and flexibility.
While access to a certain host may be enabled by, for example, 5 rules it can be easily revoked by one command if those rules are in a host specific sub-chain.

In the above examples I have, for simplicity, used the INPUT chain.
Here's an example that creates two sub-chains which allow packets to go to different ports on two servers.

-N server1
-A server1 -p tcp --dport 22 -j ACCEPT
-A server1 -p tcp --dport 80 -j ACCEPT

-N server2
-A server2 -p tcp --dport 22 -j ACCEPT
-A server2 -p tcp --dport 443 -j ACCEPT

-A INPUT -d ${SERVER1} -j server1
-A INPUT -d ${SERVER2} -j server2

This example shows how to create subchains per server. If I now wanted to disallow all connections to server 1, I’d just remove the branching rule for that server from the INPUT chain:

-D INPUT -d ${SERVER1} -j server1

I could also easily change the rules related to one server, such as adding or removing allowed ports, without having to worry that they could affect the other server.
And since the rules in my chain server1 stay intact I can also quickly re-enable access.

-A INPUT -d ${SERVER1} -j server1

One thing I also run into quite frequently is the over-generous use of the DROP target, or even setting the default policy of the built-in chains to DROP. The latter seems to be some sort of general recommendation, but I prefer to leave the policy at ACCEPT and handle all packets with rules I have written myself. Every packet has its rule, you could say.

Personally I prefer REJECT over DROP in most cases. Usually the only packets that I explicitly drop are those deemed invalid by the state engine.

-A INPUT -m state --state INVALID -j DROP

As for not setting the built-in chains’ default policy to DROP: all my rulesets have a chain I like to call blackhole, which will deal with all thus far unhandled packets and won’t let any packet that goes in come out again. Therefore the default policy of the built-in chains is never used in my rulesets.

Here's a simple example of my blackhole chain:

-N blackhole
-A blackhole -p tcp -j REJECT --reject-with tcp-reset
-A blackhole -p udp -j REJECT --reject-with icmp-port-unreachable
-A blackhole -j REJECT --reject-with icmp-proto-unreachable

You can see that I react differently depending on the protocol used for the connection, but still default to “block everything, only allow specific connections”.

Hosts connecting through TCP will receive a RST packet in order to gracefully terminate the connection instead of having it time out.
UDP connections are denied with the message that the requested port is not available and everything else is denied with the message that the protocol is not available.
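
To make sure the blackhole chain actually sees all thus far unhandled packets, it is simply appended as the last rule of the built-in chain, for example:

-A INPUT -j blackhole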

One case, aside from invalid packets, where I do not follow my rule of friendly rejection is when I can eliminate the possibility that the connecting host actually is friendly.
If you run a server with services like SSH or FTP, or pretty much anything that requires authentication, exposed to the internet you have quite likely experienced automated attacks that try hundreds of username/password combinations per minute.
My personal record has recently been upped to around 850 attempts in one minute, from a single IP.

While you could use the limit match to limit the number of new connections I prefer the recent match for this scenario.
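
For comparison, a limit-based throttle might look something like this (port and rates are just example values); note that the limit match is not per-source, so during an attack it slows down everyone connecting, rather than singling out the offending hosts:

-A INPUT -p tcp --dport 22 -m state --state NEW -m limit --limit 5/minute --limit-burst 5 -j ACCEPT
-A INPUT -p tcp --dport 22 -m state --state NEW -j DROP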

All SSH servers I have run into thus far disconnect the client after 3 failed login attempts.
At 850 failed logins per minute you therefore get close to 300 connections.
At the apparently more common rate of 200 logins per minute you still get over 60 connections, more than one per second.
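
The arithmetic behind these figures can be sketched in a few lines of shell, assuming the 3-attempts-per-connection behaviour described above:

```shell
# Each SSH connection allows 3 failed login attempts before the
# server disconnects the client (the figure quoted above).
attempts_per_connection=3

connections_per_minute() {
    # Round up: a partial batch of attempts still needs its own connection.
    echo $(( ($1 + attempts_per_connection - 1) / attempts_per_connection ))
}

connections_per_minute 850   # 284 connections, close to 300
connections_per_minute 200   # 67 connections, more than one per second
```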

Assuming that even a user who can type very fast but is entirely incapable of remembering passwords won't be able to open 5 connections within 10 seconds, I can set up rules which will lock out any host that violates this limit.

-N blockip
-A blockip -m recent --set --name BLOCKIP --rsource

-N stopattacks
-A stopattacks -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name STOPSSH --rsource
-A stopattacks -p tcp -m tcp --dport 21 -m state --state NEW -m recent --set --name STOPFTP --rsource
-A stopattacks -m recent --rcheck --seconds 10 --hitcount 5 --name STOPSSH --rsource -j blockip
-A stopattacks -m recent --rcheck --seconds 10 --hitcount 5 --name STOPFTP --rsource -j blockip
-A stopattacks -m recent --update --seconds 3600 --hitcount 1 --name BLOCKIP --rsource -j logdrop

In this example a host that opens 5 or more connections to SSH or FTP within 10 seconds is blocked for 1 hour. Every host connecting to SSH or FTP is recorded in either STOPSSH or STOPFTP, but only if the number of entries reaches 5 within the last 10 seconds is an entry made in BLOCKIP, at which point the host gets blocked for 3600 seconds (1 hour). The last rule in stopattacks uses --update in this example, though --rcheck would also be possible.
Choosing --rcheck there gives control over the lockout to the two previous rules. Once a host stops connecting to SSH or FTP the timer starts running down, and 1 hour later the blocked host will be allowed to connect again.
Choosing --update, as I have done here, gives control to the last rule, as the timer is reset to 1 hour every single time the offending host tries to connect, no matter on which port or at which rate. In order to get unblocked the host has to stop all communication for 1 hour.
The custom chain logdrop, which is not shown in this example, does exactly what its name suggests: one rule logs, another rule drops.
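
A minimal version of such a logdrop chain could look like this; the log prefix is just an example:

-N logdrop
-A logdrop -j LOG --log-prefix "logdrop: "
-A logdrop -j DROP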

The reason I favor DROP over REJECT in this situation is that I can safely assume this is an automated attack. The tool running the attack will quite likely not be impressed by a TCP reset, but will simply try to connect again. As every packet is then answered with a TCP reset, each reset instantly triggers another connection attempt, effectively increasing the rate at which new connections are attempted. This of course has a negative effect on my bandwidth.
On the other hand, using DROP here may actually cause the program to slow down while it is waiting until the connection times out.

Another example of misunderstanding packet filters that I recently ran into was a company trying to block websites based on their IP address(es).
The problem is that this can have unwanted side effects: if the blocked site is on a shared server, other sites hosted on the same IP address will unintentionally be blocked too. Equally, it is hard or even impossible to keep track of all the IPs of ever-growing sites such as Facebook or Twitter.
A packet filter does have a role in filtering out unwanted websites, but that role is limited to redirecting web traffic transparently to a proxy server, which in turn does the actual filtering.
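
Such a transparent redirection can be done in the nat table. Assuming a proxy like Squid listening on its default port 3128 on the firewall itself, and with ${LAN_IF} as a placeholder for the internal interface, it could look like this:

-t nat -A PREROUTING -i ${LAN_IF} -p tcp --dport 80 -j REDIRECT --to-ports 3128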

In conclusion I can sum things up in a few simple points that you should keep in mind when creating rules for a packet filter:

  • Mind your order
    Do you have rules that are subsets or supersets of other rules? Mind the order in which they are processed. If a general rule is followed by a rule that matches only a subset of its packets, the second rule will never be triggered, as the first rule already deals with those packets.
    Equally, if a specific rule is followed by a more general rule with the same target, consider whether you really need both. It may be interesting to keep them around for accounting purposes, but otherwise you can simply scrap the one you don't need.
  • Know your purpose
    While packet filters can do a lot with packets, it doesn’t always make sense to do it. Some things, like filtering unwanted web sites, are better done on the application level, like with a transparent web proxy.
  • Know your enemies
    If you are subjected to automated attacks with many login attempts in a short time, and this is something you should notice from your log files, you might want to consider not only throttling the number of connections using the limit match, but actually locking out offending hosts using the recent match.

Thank you!
Dennis Wronka