4. Information for Programmers

I'll let you in on a secret: my pet hamster did all the coding. I was just a channel, a `front' if you will, in my pet's grand plan. So, don't blame me if there are bugs. Blame the cute, furry one.

4.1 Understanding ip_tables

iptables simply provides a named array of rules in memory (hence the name `iptables'), and such information as where packets from each hook should begin traversal. After a table is registered, userspace can read and replace its contents using getsockopt() and setsockopt(). iptables does not register with any netfilter hooks: it relies on other modules to do that and feed it the packets as appropriate; a module must register the netfilter hooks and ip_tables separately, and provide the mechanism to call ip_tables when the hook is reached.

ip_tables Data Structures

For convenience, the same data structure is used to represent a rule by userspace and within the kernel, although a few fields are only used inside the kernel. Each rule consists of the following parts: a `struct ipt_entry', zero or more `struct ipt_entry_match' structures (each with a variable amount of data appended), and a `struct ipt_entry_target' structure (again with a variable amount of data appended).
The variable nature of the rule gives a huge amount of flexibility for extensions, as we'll see, especially as each match or target can carry an arbitrary amount of data. This does create a few traps, however: we have to watch out for alignment. We do this by ensuring that the `ipt_entry', `ipt_entry_match' and `ipt_entry_target' structures are conveniently sized, and that all data is rounded up to the maximal alignment of the machine using the IPT_ALIGN() macro. The `struct ipt_entry' has the following fields:
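As a rough sketch (based on the 2.4-era include/linux/netfilter_ipv4/ip_tables.h; check your kernel's headers for the authoritative definition), `struct ipt_entry' looks like this:

    struct ipt_entry
    {
            struct ipt_ip ip;             /* IP header fields to match: addresses,
                                             masks, interfaces, protocol, flags */
            unsigned int nfcache;         /* which packet fields we examined */
            u_int16_t target_offset;      /* byte offset from this entry to its
                                             ipt_entry_target (i.e. the size of the
                                             entry plus all its matches) */
            u_int16_t next_offset;        /* byte offset to the next rule */
            unsigned int comefrom;        /* back pointer used by userspace and by
                                             the traversal/loop-detection code */
            struct ipt_counters counters; /* packet and byte counters */
            unsigned char elems[0];       /* the matches (if any), then the target */
    };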
The `struct ipt_entry_match' and `struct ipt_entry_target' are very similar, in that they contain a total (IPT_ALIGN'ed) length field (`match_size' and `target_size' respectively) and a union holding the name of the match or target (for userspace), and a pointer (for the kernel). Because of the tricky nature of the rule data structure, some helper routines are provided:
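The helper routines in question are roughly the following (prototypes abridged from the 2.4-era ip_tables.h; exact forms may differ in your kernel version):

    /* Return the target part of rule `e' (it lives `target_offset' bytes in). */
    struct ipt_entry_target *ipt_get_target(struct ipt_entry *e);

    /* Round a size up to the maximal alignment of the machine. */
    #define IPT_ALIGN(s)                           /* ... */

    /* Call `fn(match, args...)' on every match in rule `e'; iteration stops
       early if `fn' returns non-zero. */
    #define IPT_MATCH_ITERATE(e, fn, args...)      /* ... */

    /* Call `fn(entry, args...)' on every rule in a table of `size' bytes
       starting at `entries'; iteration stops early if `fn' returns non-zero. */
    #define IPT_ENTRY_ITERATE(entries, size, fn, args...)  /* ... */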
ip_tables From Userspace

Userspace has four operations: it can read the current table, read the info (hook positions and size of table), replace the table (and grab the old counters), and add in new counters. This allows any atomic operation to be simulated by userspace: this is done by the libiptc library, which provides convenience "add/delete/replace" semantics for programs. Because these tables are transferred into kernel space, alignment becomes an issue for machines which have different userspace and kernelspace type rules (eg. Sparc64 with a 32-bit userland). These cases are handled by overriding the definition of IPT_ALIGN for these platforms in `libiptc.h'.

ip_tables Use And Traversal

The kernel starts traversing at the location indicated by the particular hook. That rule is examined: if the `struct ipt_ip' elements match, each `struct ipt_entry_match' is checked in turn (the match function associated with that match is called). If a match function returns 0, iteration stops on that rule. If it sets the `hotdrop' parameter to 1, the packet will also be immediately dropped (this is used for some suspicious packets, such as in the tcp match function). If the iteration continues to the end, the counters are incremented and the `struct ipt_entry_target' is examined: if it's a standard target, the `verdict' field is read (negative means a packet verdict, positive means an offset to jump to). If the answer is positive and the offset is not that of the next rule, the `back' variable is set, and the previous `back' value is placed in that rule's `comefrom' field. For non-standard targets, the target function is called: it returns a verdict (non-standard targets can't jump, as this would break the static loop-detection code). The verdict can be IPT_CONTINUE, to continue on to the next rule.

4.2 Extending iptables

Because I'm lazy, iptables is designed to be extensible.

Extending The Kernel

Writing a kernel module itself is fairly simple, as you can see from the examples. One thing to be aware of is that your code must be re-entrant: there can be one packet coming in from userspace while another arrives on an interrupt. In fact, on SMP there can be one packet in interrupt context per CPU, in kernels 2.3.4 and above. The functions you need to know about are:
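Roughly, the registration functions look as follows (2.4-era prototypes from ip_tables.h; treat them as approximate):

    /* Register/unregister a table, a match or a target with ip_tables. */
    int  ipt_register_table(struct ipt_table *table);
    void ipt_unregister_table(struct ipt_table *table);

    int  ipt_register_match(struct ipt_match *match);
    void ipt_unregister_match(struct ipt_match *match);

    int  ipt_register_target(struct ipt_target *target);
    void ipt_unregister_target(struct ipt_target *target);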
One warning about doing tricky things (such as providing counters) in the extra space in your new match or target: on SMP machines, the entire table is duplicated using memcpy for each CPU, so if you really want to keep central information, you should see the method used in the `limit' match.

New Match Functions

New match functions are usually written as a standalone module. It's possible to have these modules extensible in turn, although it's usually not necessary. One way would be to use the netfilter framework's `nf_register_sockopt' function to allow users to talk to your module directly. Another way would be to export symbols for other modules to register themselves, the same way netfilter and ip_tables do. The core of your new match function is the struct ipt_match which it passes to `ipt_register_match()'. This structure has the following fields:
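A sketch of `struct ipt_match' as it appeared in 2.4-era kernels follows; the function signatures have changed repeatedly between kernel versions, so treat this as illustrative only:

    struct ipt_match
    {
            struct list_head list;                     /* set to { NULL, NULL } */
            const char name[IPT_FUNCTION_MAXNAMELEN];  /* name userspace asks for */

            /* Return non-zero if the packet matches; set *hotdrop to 1 to have
               the packet dropped immediately. */
            int (*match)(const struct sk_buff *skb,
                         const struct net_device *in,
                         const struct net_device *out,
                         const void *matchinfo,
                         int offset,
                         const void *hdr,
                         u_int16_t datalen,
                         int *hotdrop);

            /* Sanity-check a rule using this match when it is inserted;
               return 0 to refuse it. */
            int (*checkentry)(const char *tablename,
                              const struct ipt_ip *ip,
                              void *matchinfo,
                              unsigned int matchinfosize,
                              unsigned int hook_mask);

            /* Called when a rule using this match is deleted (may be NULL). */
            void (*destroy)(void *matchinfo, unsigned int matchinfosize);

            struct module *me;                         /* set to THIS_MODULE */
    };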
New Targets

If your target alters the packet (ie. the headers or the body), it must call skb_unshare() to copy the packet in case it is cloned: otherwise any raw sockets which have a clone of the skbuff will see the alterations (ie. people will see weird stuff happening in tcpdump). New targets are also usually written as a standalone module. The discussion under the above section on `New Match Functions' applies equally here. The core of your new target is the struct ipt_target that it passes to ipt_register_target(). This structure has the following fields:
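Again as a 2.4-era sketch (signatures vary between kernel versions):

    struct ipt_target
    {
            struct list_head list;                     /* set to { NULL, NULL } */
            const char name[IPT_FUNCTION_MAXNAMELEN];

            /* Returns a verdict (NF_ACCEPT, NF_DROP, IPT_CONTINUE, ...);
               may replace *pskb if it needs to alter the packet. */
            unsigned int (*target)(struct sk_buff **pskb,
                                   unsigned int hooknum,
                                   const struct net_device *in,
                                   const struct net_device *out,
                                   const void *targinfo,
                                   void *userdata);

            /* Sanity-check a rule using this target when it is inserted;
               return 0 to refuse it. */
            int (*checkentry)(const char *tablename,
                              const struct ipt_entry *e,
                              void *targinfo,
                              unsigned int targinfosize,
                              unsigned int hook_mask);

            /* Called when a rule using this target is deleted (may be NULL). */
            void (*destroy)(void *targinfo, unsigned int targinfosize);

            struct module *me;                         /* set to THIS_MODULE */
    };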
New Tables

You can create a new table for your specific purpose if you wish. To do this, you call `ipt_register_table()' with a `struct ipt_table', which has the following fields:
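A 2.4-era sketch of `struct ipt_table' (verify against your kernel's ip_tables.h):

    struct ipt_table
    {
            struct list_head list;            /* set to { NULL, NULL } */
            char name[IPT_TABLE_MAXNAMELEN];  /* the name users reach the table by,
                                                 e.g. with `iptables -t <name>' */
            struct ipt_replace *table;        /* the initial (empty) table contents,
                                                 including the default policies */
            unsigned int valid_hooks;         /* bitmask of the IPv4 hooks this
                                                 table may be entered from */
            rwlock_t lock;                    /* set to RW_LOCK_UNLOCKED */
            struct ipt_table_info *private;   /* kernel-internal; leave NULL */
            struct module *me;                /* set to THIS_MODULE */
    };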
Userspace Tool

Now you've written your nice shiny kernel module, you may want to control the options on it from userspace. Rather than have a branched version of iptables for every extension, iptables uses shared libraries. New tables generally don't require any extension to iptables itself: the user just selects them with the `-t' option.
The shared library should have an `_init()' function, which will automatically be called upon loading: the moral equivalent of the kernel module's `init_module()' function. This should call `register_match()' or `register_target()', depending on whether your shared library provides a new match or a new target. You need to provide a shared library: this can be used to initialize part of the structure, or provide additional options. I now insist on a shared library even if it doesn't do anything, to reduce problem reports where the shared libraries are missing. There are useful functions described in the `iptables.h' header, especially:
New Match Functions

Your shared library's _init() function hands `register_match()' a pointer to a static `struct iptables_match', which has the following fields:
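A sketch of `struct iptables_match' as found in the iptables sources of that era (exact signatures are approximate and have changed between iptables releases):

    struct iptables_match
    {
            struct iptables_match *next;   /* used internally; set to NULL */
            ipt_chainlabel name;           /* same name as the kernel match */
            const char *version;           /* usually NETFILTER_VERSION */
            size_t size;                   /* IPT_ALIGN'ed size of the match data */
            size_t userspacesize;          /* bytes compared when matching rules;
                                              usually the same as `size' */
            void (*help)(void);            /* print the extra options */
            void (*init)(struct ipt_entry_match *m, unsigned int *nfcache);
            int  (*parse)(int c, char **argv, int invert, unsigned int *flags,
                          const struct ipt_entry *entry, unsigned int *nfcache,
                          struct ipt_entry_match **match);
            void (*final_check)(unsigned int flags);   /* option sanity check */
            void (*print)(const struct ipt_ip *ip,
                          const struct ipt_entry_match *match, int numeric);
            void (*save)(const struct ipt_ip *ip,
                         const struct ipt_entry_match *match);
            const struct option *extra_opts;  /* NULL-terminated getopt options */
            /* (further members follow; see below) */
    };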
There are extra elements at the end of this structure for use internally by iptables.

New Targets

Your shared library's _init() function hands `register_target()' a pointer to a static `struct iptables_target', which has similar fields to the iptables_match structure detailed above.

Using `libiptc'
The kernel tables themselves are simply a table of rules, and a set of numbers representing entry points. Chain names ("INPUT", etc.) are provided as an abstraction by the library. User-defined chains are labelled by inserting an error node before the head of the user-defined chain, which contains the chain name in the extra data section of the target (the builtin chain positions are defined by the three table entry points). The following standard targets are supported: ACCEPT, DROP, QUEUE (which are translated to NF_ACCEPT, NF_DROP, and NF_QUEUE, respectively), RETURN (which is translated to a special IPT_RETURN value handled by ip_tables), and JUMP (which is translated from the chain name to an actual offset within the table). When `iptc_init()' is called, the table, including the counters, is read. This table is manipulated by the `iptc_insert_entry()', `iptc_replace_entry()', `iptc_append_entry()', `iptc_delete_entry()', `iptc_delete_num_entry()', `iptc_flush_entries()', `iptc_zero_entries()', `iptc_create_chain()', `iptc_delete_chain()', and `iptc_set_policy()' functions. The table changes are not written back until the `iptc_commit()' function is called. This means it is possible for two library users operating on the same chain to race each other; locking would be required to prevent this, and it is not currently done. There is no race with counters, however: counters are added back into the kernel in such a way that counter increments between the reading and writing of the table still show up in the new table. There are various helper functions:
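By way of illustration, a typical read-modify-commit cycle with these functions looks roughly like this (prototypes abridged from the libiptc of that era; check the libiptc headers for the exact forms):

    #include <stdio.h>
    #include <errno.h>
    #include <libiptc/libiptc.h>

    /* Illustrative only: flush the "INPUT" chain of the "filter" table. */
    int flush_input(void)
    {
            iptc_handle_t h = iptc_init("filter");    /* snapshot table + counters */
            if (!h) {
                    fprintf(stderr, "iptc_init: %s\n", iptc_strerror(errno));
                    return 1;
            }
            if (!iptc_flush_entries("INPUT", &h)) {   /* modify the local copy only */
                    fprintf(stderr, "flush: %s\n", iptc_strerror(errno));
                    return 1;
            }
            if (!iptc_commit(&h)) {                   /* write the table back */
                    fprintf(stderr, "commit: %s\n", iptc_strerror(errno));
                    return 1;
            }
            return 0;
    }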
4.3 Understanding NAT

Welcome to Network Address Translation in the kernel. Note that the infrastructure offered is designed more for completeness than raw efficiency, and that future tweaks may increase the efficiency markedly. For the moment I'm happy that it works at all. NAT is separated into connection tracking (which doesn't manipulate packets at all), and the NAT code itself. Connection tracking is also designed to be used by iptables modules, so it makes subtle distinctions in states which NAT doesn't care about.

Connection Tracking

Connection tracking hooks into the high-priority NF_IP_LOCAL_OUT and NF_IP_PRE_ROUTING hooks, in order to see packets before they enter the system. The nfct field in the skb is a pointer into the struct ip_conntrack, at one of the elements of its infos[] array. Hence we can tell the state of the skb by which element in this array it is pointing to: this pointer encodes both the state structure and the relationship of this skb to that state. The best way to extract the `nfct' field is to call `ip_conntrack_get()', which returns NULL if it's not set, or the connection pointer otherwise, and fills in `ctinfo', which describes the relationship of the packet to that connection. This enumerated type has several values:
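Roughly, the 2.4-era definition of this enumeration is:

    enum ip_conntrack_info
    {
            IP_CT_ESTABLISHED,    /* part of an established connection */
            IP_CT_RELATED,        /* related to an established connection,
                                     e.g. an ICMP error or an FTP data stream */
            IP_CT_NEW,            /* starting a new connection */
            IP_CT_IS_REPLY,       /* added to the values above for packets
                                     travelling in the reply direction */
            IP_CT_NUMBER = IP_CT_IS_REPLY * 2 - 1
    };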
Hence a reply packet can be identified by testing for >= IP_CT_IS_REPLY.

4.4 Extending Connection Tracking/NAT

These frameworks are designed to accommodate any number of protocols and different mapping types. Some of these mapping types might be quite specific, such as a load-balancing/fail-over mapping type. Internally, connection tracking converts a packet to a "tuple", representing the interesting parts of the packet, before searching for bindings or rules which match it. This tuple has a manipulatable part and a non-manipulatable part, called "src" and "dst", as this is the view for the first packet in the Source NAT world (it'd be a reply packet in the Destination NAT world). The tuple for every packet in the same packet stream in that direction is the same. For example, a TCP packet's tuple contains the manipulatable part (source IP and source port) and the non-manipulatable part (destination IP and destination port). The manipulatable and non-manipulatable parts do not need to be the same type, though; for example, an ICMP packet's tuple contains the manipulatable part (source IP and the ICMP id) and the non-manipulatable part (destination IP and the ICMP type and code). Every tuple has an inverse, which is the tuple of the reply packets in the stream. For example, the inverse of an ICMP ping packet, icmp id 12345, from 192.168.1.1 to 1.2.3.4, is a ping-reply packet, icmp id 12345, from 1.2.3.4 to 192.168.1.1. These tuples, represented by the `struct ip_conntrack_tuple', are used widely. In fact, together with the hook the packet came in on (which has an effect on the type of manipulation expected) and the device involved, this is the complete information on the packet. Most tuples are contained within a `struct ip_conntrack_tuple_hash', which adds a doubly linked list entry and a pointer to the connection that the tuple belongs to. A connection is represented by the `struct ip_conntrack': it has two `struct ip_conntrack_tuple_hash' fields, one referring to the direction of the original packet (tuplehash[IP_CT_DIR_ORIGINAL]), and one referring to packets in the reply direction (tuplehash[IP_CT_DIR_REPLY]). Anyway, the first thing the NAT code does is to see if the connection tracking code managed to extract a tuple and find an existing connection, by looking at the skbuff's nfct field; this tells us whether it's an attempt at a new connection, or, if not, which direction it is in; in the latter case, the manipulations determined previously for that connection are done. If it was the start of a new connection, we look for a rule for that tuple, using the standard iptables traversal mechanism, on the `nat' table. If a rule matches, it is used to initialize the manipulations for both that direction and the reply, and the connection-tracking code is told that the reply it should expect has changed. Then the packet is manipulated as above. If there is no rule, a `null' binding is created: this usually does not map the packet, but exists to ensure we don't map another stream over an existing one. Sometimes the null binding cannot be created, because we have already mapped an existing stream over it, in which case the per-protocol manipulation may try to remap it, even though it's nominally a `null' binding.

Standard NAT Targets

NAT targets are like any other iptables target extensions, except they insist on being used only in the `nat' table.
Both the SNAT and DNAT targets take a `struct ip_nat_multi_range' as their extra data; this is used to specify the range of addresses a mapping is allowed to bind into. A range element, `struct ip_nat_range', consists of an inclusive minimum and maximum IP address, and an inclusive minimum and maximum protocol-specific value (eg. TCP ports). There is also room for flags: one says whether the IP address can be mapped (sometimes we only want to map the protocol-specific part of a tuple, not the IP), and another says that the protocol-specific part of the range is valid. A multi-range is an array of these `struct ip_nat_range' elements; this means that a range could be "1.1.1.1-1.1.1.2 ports 50-55 AND 1.1.1.3 port 80". Each range element adds to the range (a union, for those who like set theory).

New Protocols

Inside The Kernel

Implementing a new protocol first means deciding what the manipulatable and non-manipulatable parts of the tuple should be. Everything in the tuple has the property that it identifies the stream uniquely. The manipulatable part of the tuple is the part you can do NAT with: for TCP this is the source port, for ICMP it's the icmp ID; something to use as a "stream identifier". The non-manipulatable part is the rest of the packet that uniquely identifies the stream, but which we can't play with (eg. TCP destination port, ICMP type). Once you've decided this, you can write an extension to the connection-tracking code, and go about populating the `ip_conntrack_protocol' structure which you need to pass to `ip_conntrack_register_protocol()'. The fields of `struct ip_conntrack_protocol' are:
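A sketch of `struct ip_conntrack_protocol' from 2.4-era kernels; field names and signatures drifted between versions, so verify against your tree before relying on this:

    struct ip_conntrack_protocol
    {
            struct list_head list;
            u_int8_t proto;              /* protocol number, e.g. IPPROTO_GRE */
            const char *name;

            /* Fill in the protocol-specific part of the tuple from the packet
               payload; return 0 if the packet is malformed. */
            int (*pkt_to_tuple)(const void *datah, size_t datalen,
                                struct ip_conntrack_tuple *tuple);

            /* Produce the tuple of a reply, given the tuple of the original. */
            int (*invert_tuple)(struct ip_conntrack_tuple *inverse,
                                const struct ip_conntrack_tuple *orig);

            /* /proc printing helpers. */
            unsigned int (*print_tuple)(char *buffer,
                                        const struct ip_conntrack_tuple *tuple);
            unsigned int (*print_conntrack)(char *buffer,
                                            const struct ip_conntrack *conntrack);

            /* Called for every packet on a tracked connection: update state
               and timeouts, return a verdict. */
            int (*packet)(struct ip_conntrack *conntrack, struct iphdr *iph,
                          size_t len, enum ip_conntrack_info ctinfo);

            /* Called when the first packet of a connection is seen; return 0
               to refuse to track it. */
            int (*new)(struct ip_conntrack *conntrack, struct iphdr *iph,
                       size_t len);

            /* Optional destructor and module owner. */
            void (*destroy)(struct ip_conntrack *conntrack);
            struct module *me;
    };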
Once you've written and tested that you can track your new protocol, it's time to teach NAT how to translate it. This means writing a new module, an extension to the NAT code, and populating the `ip_nat_protocol' structure which you need to pass to `ip_nat_protocol_register()'.
New NAT Targets

This is the really interesting part. You can write new NAT targets which provide a new mapping type: two extra targets are provided in the default package, MASQUERADE and REDIRECT. These are fairly simple, and illustrate the potential and power of writing a new NAT target. They are written just like any other iptables targets, but internally they extract the connection and call `ip_nat_setup_info()'.

Protocol Helpers

Protocol helpers for connection tracking allow the connection tracking code to understand protocols which use multiple network connections (eg. FTP) and mark the `child' connections as being related to the initial connection, usually by reading the related address out of the data stream. Protocol helpers for NAT do two things: firstly, they allow the NAT code to manipulate the data stream to change the address contained within it, and secondly, they perform NAT on the related connection when it comes in, based on the original connection.

Connection Tracking Helper Modules

Description

The duty of a connection tracking module is to specify which packets belong to an already established connection. The module has the following means to do that:
Structures and Functions Available

Your kernel module's init function has to call `ip_conntrack_helper_register()' with a pointer to a `struct ip_conntrack_helper'. This struct has the following fields:
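As a sketch (the early 2.4 structure was minimal; later 2.4 kernels added further fields such as a name, flags, a module owner, and expectation limits):

    struct ip_conntrack_helper
    {
            struct list_head list;             /* set to { NULL, NULL } */

            /* Which connections the helper applies to: a tuple and a mask,
               compared against the connection's tuple.  Typically the mask
               selects just the protocol and a port, e.g. "TCP, port 21". */
            struct ip_conntrack_tuple tuple;
            struct ip_conntrack_tuple mask;

            /* Called for every packet on a matching connection; may inspect
               the payload and register expectations for related connections. */
            int (*help)(const struct iphdr *iph, size_t len,
                        struct ip_conntrack *ct,
                        enum ip_conntrack_info ctinfo);
    };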
Example skeleton of a conntrack helper module
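A minimal sketch of what such a module might look like, assuming a hypothetical protocol whose control connection runs over TCP port 4242 and the 2.4-era API (later kernels also want fields such as `name', `me', `max_expected' and `timeout' filled in):

    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/ip.h>
    #include <linux/in.h>
    #include <linux/netfilter_ipv4/ip_conntrack.h>
    #include <linux/netfilter_ipv4/ip_conntrack_helper.h>

    static int myproto_help(const struct iphdr *iph, size_t len,
                            struct ip_conntrack *ct,
                            enum ip_conntrack_info ctinfo)
    {
            /* Inspect the payload here; if the protocol announces a related
               connection, fill in a struct ip_conntrack_expect and call
               ip_conntrack_expect_related(). */
            return NF_ACCEPT;
    }

    static struct ip_conntrack_helper myproto_helper;

    int init_module(void)
    {
            /* Apply to connections whose reply direction comes from TCP
               port 4242 (i.e. the original connection was made to 4242). */
            myproto_helper.tuple.src.u.tcp.port = __constant_htons(4242);
            myproto_helper.tuple.dst.protonum = IPPROTO_TCP;
            myproto_helper.mask.src.u.tcp.port = 0xFFFF;
            myproto_helper.mask.dst.protonum = 0xFFFF;
            myproto_helper.help = myproto_help;

            return ip_conntrack_helper_register(&myproto_helper);
    }

    void cleanup_module(void)
    {
            ip_conntrack_helper_unregister(&myproto_helper);
    }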
NAT helper modules

Description

NAT helper modules do some application-specific NAT handling. Usually this includes on-the-fly manipulation of data: think about the PORT command in FTP, where the client tells the server which IP/port to connect to. Therefore an FTP helper module must replace the IP/port after the PORT command in the FTP control connection. If we are dealing with TCP, things get slightly more complicated. The reason is a possible change of the packet size (FTP example: the length of the string representing an IP/port tuple after the PORT command has changed). If we change the packet size, the sequence and acknowledgment numbers seen on the two sides of the NAT box no longer agree (i.e. if we have extended one packet by 4 octets, we have to add this offset to the TCP sequence number of each following packet). Special NAT handling of all related packets is required, too. Take as an example again FTP, where all incoming packets of the DATA connection have to be NATed to the IP/port given by the client with the PORT command on the control connection, rather than going through the normal table lookup.
Structures and Functions Available

Your NAT helper module's `init()' function calls `ip_nat_helper_register()' with a pointer to a `struct ip_nat_helper'. This struct has the following members:
This is exactly the same as writing a connection tracking helper. You can also indicate your module is ready to handle the NAT of any expected connections (presumably set up by a connection tracking module), using the `ip_nat_expect_register()' function, which takes a `struct ip_nat_expect'. This struct has the following members:
Example NAT helper module
4.5 Understanding Netfilter

Netfilter is pretty simple, and is described fairly thoroughly in the previous sections. However, sometimes it's necessary to go beyond what the NAT or ip_tables infrastructure offers, or you may want to replace them entirely. One important issue for netfilter (well, in the future) is caching. Each skb has an `nfcache' field: a bitmask of what fields in the header were examined, and whether the packet was altered or not. The idea is that each netfilter hook ORs in the bits relevant to it, so that we can later write a cache system which will be clever enough to realize when packets do not need to be passed through netfilter at all. The most important bits are NFC_ALTERED, meaning the packet was altered (this is already used for IPv4's NF_IP_LOCAL_OUT hook, to reroute altered packets), and NFC_UNKNOWN, which means caching should not be done because some property which cannot be expressed was examined. If in doubt, simply set the NFC_UNKNOWN flag on the skb's nfcache field inside your hook.

4.6 Writing New Netfilter Modules

Plugging Into Netfilter Hooks

To receive/mangle packets inside the kernel, you can simply write a module which registers a "netfilter hook". This is basically an expression of interest at some given point; the actual points are protocol-specific, and defined in protocol-specific netfilter headers, such as "netfilter_ipv4.h". To register and unregister netfilter hooks, you use the functions `nf_register_hook' and `nf_unregister_hook'. These each take a pointer to a `struct nf_hook_ops', which you populate as follows:
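As a minimal sketch (essentially the classic "drop everything" example, using the 2.4-era API where the hook function receives a `struct sk_buff **'):

    /* Sketch of a module that registers an IPv4 hook at NF_IP_PRE_ROUTING and
       silently drops every packet.  Don't load this on a machine you need to
       reach over the network. */
    #include <linux/module.h>
    #include <linux/skbuff.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>

    static unsigned int my_hook(unsigned int hooknum,
                                struct sk_buff **skb,
                                const struct net_device *in,
                                const struct net_device *out,
                                int (*okfn)(struct sk_buff *))
    {
            return NF_DROP;
    }

    static struct nf_hook_ops my_ops = {
            { NULL, NULL },        /* list: filled in by netfilter */
            my_hook,               /* hook function */
            PF_INET,               /* protocol family */
            NF_IP_PRE_ROUTING,     /* which hook */
            NF_IP_PRI_FIRST        /* priority */
    };

    int init_module(void)
    {
            return nf_register_hook(&my_ops);
    }

    void cleanup_module(void)
    {
            nf_unregister_hook(&my_ops);
    }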
Processing Queued Packets

This interface is currently used by ip_queue; you can register to handle queued packets for a given protocol. This has similar semantics to registering for a hook, except you can block while processing the packet, and you only see packets for which a hook has replied `NF_QUEUE'. The two functions used to register interest in queued packets are `nf_register_queue_handler()' and `nf_unregister_queue_handler()'. The function you register will be called with the `void *' pointer you handed to `nf_register_queue_handler()'. If no-one is registered to handle a protocol, then returning NF_QUEUE is equivalent to returning NF_DROP. Once you have registered interest in queued packets, they begin queueing. You can do whatever you want with them, but you must call `nf_reinject()' when you are finished with them (don't simply kfree_skb() them). When you reinject an skb, you hand it the skb, the `struct nf_info' which your queue handler was given, and a verdict: NF_DROP causes it to be dropped, NF_ACCEPT causes it to continue to iterate through the hooks, NF_QUEUE causes it to be queued again, and NF_REPEAT causes the hook which queued the packet to be consulted again (beware infinite loops). You can look inside the `struct nf_info' to get auxiliary information about the packet, such as the interfaces and hook it was on.

Receiving Commands From Userspace

It is common for netfilter components to want to interact with userspace. The method for doing this is the setsockopt mechanism. Note that each protocol must be modified to call nf_setsockopt() for setsockopt numbers it doesn't understand (and nf_getsockopt() for getsockopt numbers), and so far only IPv4, IPv6 and DECnet have been modified. Using a now-familiar technique, we register a `struct nf_sockopt_ops' using the nf_register_sockopt() call. The fields of this structure are as follows:
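Roughly, from the 2.4-era netfilter.h (the last two members being the internal ones the next sentence refers to):

    struct nf_sockopt_ops
    {
            struct list_head list;
            int pf;                       /* protocol family, e.g. PF_INET */

            /* Range of setsockopt numbers handled, and the handler. */
            int set_optmin;
            int set_optmax;
            int (*set)(struct sock *sk, int optval,
                       void *user, unsigned int len);

            /* Range of getsockopt numbers handled, and the handler. */
            int get_optmin;
            int get_optmax;
            int (*get)(struct sock *sk, int optval,
                       void *user, int *len);

            /* Used internally: number of callers currently inside set()/get(),
               and the task waiting for them at unregistration time. */
            unsigned int use;
            struct task_struct *cleanup_task;
    };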
The final two fields are used internally.

4.7 Packet Handling in Userspace

Using the libipq library and the `ip_queue' module, almost anything which can be done inside the kernel can now be done in userspace. This means that, with some speed penalty, you can develop your code entirely in userspace. Unless you are trying to filter large bandwidths, you should find this approach superior to in-kernel packet mangling. In the very early days of netfilter, I proved this by porting an embryonic version of iptables to userspace. Netfilter opens the doors for more people to write their own, fairly efficient net-mangling modules, in whatever language they want.
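For orientation, a userspace queue handler built on libipq follows this general pattern (function names are those of the libipq shipped with iptables; treat the details as approximate and see its man pages):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <linux/netfilter.h>
    #include <libipq.h>

    /* Illustrative only: accept every packet the kernel queues to us. */
    int main(void)
    {
            unsigned char buf[65536];
            struct ipq_handle *h = ipq_create_handle(0, PF_INET);

            if (!h || ipq_set_mode(h, IPQ_COPY_PACKET, sizeof(buf)) < 0) {
                    ipq_perror("ipq");
                    exit(1);
            }
            for (;;) {
                    if (ipq_read(h, buf, sizeof(buf), 0) < 0)
                            break;
                    if (ipq_message_type(buf) == IPQM_PACKET) {
                            ipq_packet_msg_t *m = ipq_get_packet(buf);
                            /* inspect or rewrite m->payload here */
                            ipq_set_verdict(h, m->packet_id, NF_ACCEPT, 0, NULL);
                    }
            }
            ipq_destroy_handle(h);
            return 0;
    }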