Is there any way to use iptables inside LXC containers?

I am often seeing dictionary attacks on SSH running inside the container (despite using a non-default port). Even though password logins are disabled, I’d like to use fail2ban to stop script kiddies from wasting their energy. But in all my LXC containers iptables segfaults. Is there a way to make this work?

# iptables -L
Segmentation fault

Thanks

yup. You could run fail2ban on the host OS (= in Turris OS), providing it with a logpath pointing to the SSH log file inside the container. You’d still need to tweak the actionban and actionunban commands. The host OS has direct access to iptables.
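For illustration, such a host-side jail could point straight at the log file inside the container’s rootfs. A minimal sketch, assuming the rootfs lives under /srv/lxc/mycontainer/rootfs (the path, jail name and file name here are all made up):

```ini
# /etc/fail2ban/jail.d/container-sshd.local (hypothetical)
[container-sshd]
enabled  = true
filter   = sshd
logpath  = /srv/lxc/mycontainer/rootfs/var/log/auth.log
maxretry = 5
bantime  = 600
```

The actionban/actionunban would still have to be replaced with iptables rules matching the forwarded traffic, since the default sshd action only blocks traffic addressed to the host itself.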

Thank you, that is an interesting idea. Sadly the implementation is not that easy:

  • fail2ban for TO does not exist in the repositories
  • a lightweight alternative, bearDropper, exists but is not a drop-in solution because:
    – it parses syslog, not a file (/var/log/auth.log)
    – it only cares about the source IP, but for the forwarding chain it should also parse and use the destination IP (the container IP)

Maybe a custom fail2ban action running inside the container and writing the offending IP to a file, then being picked up by a script run in TO… Will think about it more. Thanks again, jose
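Such a custom action could be as small as appending/removing the IP in a plain text file that a host-side script then consumes. A sketch, with made-up action and file names (`<ip>` is fail2ban’s standard tag for the banned address):

```ini
# /etc/fail2ban/action.d/tofile.conf (hypothetical)
[Definition]
actionstart =
actionstop  =
actioncheck =
actionban   = echo "<ip>" >> /var/run/f2b-banned.txt
actionunban = sed -i "/^<ip>$/d" /var/run/f2b-banned.txt

[Init]
```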

my bad. Anyway, I was able to install and run fail2ban on my Turris without any major error messages:

opkg update
opkg install git
opkg install git-http
git clone http://github.com/fail2ban/fail2ban.git
cd fail2ban/
python setup.py install 

but don’t let me break your system!

Consider disk writes, as fail2ban was not developed with OpenWrt/TOS on eMMC in mind!

I don’t know if this is really relevant, but I’d rather check twice before harming my device… :slight_smile:

You can check it with the disk usage module for LuCI statistics…

ah, I thought that if you’re using LXC you have some SSD/HDD in place (like I do). You’re definitely correct, f2b can write a lot!

EDIT: personally I’d recommend placing all containers on an HDD/SSD. Just saying…

All my containers are running from an mSATA SSD, so this is not really an issue.

I assumed that, but I just wanted to point out that fail2ban installed on the TOS host may write into directories that are not mounted on the SSD. :slight_smile:

I assumed that, but I just wanted to point out that fail2ban installed on TOS

Fully agree. Maybe that’s one of the reasons why it’s not packaged by default. Anyway… I came up with the following solution. Not tested much so far, but for my single container it seems to do its job.

First step is to install fail2ban inside the container (Debian-based in my case). Then I create a new dummy action which literally does nothing – but fail2ban still parses the log file and saves the failed logins, invalid users, etc. into its internal sqlite3 db.

/etc/fail2ban/action.d/void.conf:
[Definition]
actionstart = echo start
actionstop = echo stop
actionban = echo ban
actionunban = echo unban

[Init]

Now the sshd jail is enabled and paired with the void action defined above.

/etc/fail2ban/jail.local:
[sshd]
enabled = true

/etc/fail2ban/jail.d/sshd.local:
[sshd]
action = void

The service is restarted and fail2ban-client is used to confirm.

systemctl restart fail2ban
fail2ban-client status sshd

Now back to TO. Here I put the following script (f2b.sh):

#!/bin/bash
# f2b.sh
# script which parses the fail2ban db inside LXC container(s)
# and adds/removes iptables rules for offending source IPs
#
# fail2ban needs to be run inside the container but needs
# no action configured (as the blocking happens on side of the host)
#
# execution is supposed to be scheduled via cron:
#  */5 * * * * /path/f2b.sh >/dev/null 2>/dev/null

chain=fail2ban
# block host for 10 minutes (+ cron interval)
bantime=600

# check if the $chain chain already exists and create it otherwise
if ! iptables -L forwarding_wan_rule | grep -q "${chain}"
then
  echo "Inserting chain ($chain) into forwarding_wan_rule"
  iptables -N "${chain}"
  iptables -I forwarding_wan_rule 1 -j "${chain}"
fi

blocked_ips=$(
echo "192.168.1.100 22 sshd mycontainer" | \
while read -r container_ip container_port jail container
do
  echo "Checking container ${container} (${container_ip}:${container_port}, jail: ${jail})" >&2
  container_db=/srv/lxc/${container}/rootfs/var/lib/fail2ban/fail2ban.sqlite3
  # fail2ban 0.11+ uses a slightly changed db format (table bips instead of bans)
  # notice that for this newer version the bantime is taken from the fail2ban configuration itself
  # sqlite3 -csv "${container_db}" "select ip,\"$container_ip\",$container_port from bips where (bantime + timeofban) > cast(strftime('%s', 'now') as int) and jail = \"$jail\";"

  sqlite3 -csv "${container_db}" "select ip,\"$container_ip\",$container_port from bans where ($bantime + timeofban) > cast(strftime('%s', 'now') as int) and jail = \"$jail\";"
done | tr "," " " | sort -u
)

if [ -z "${blocked_ips}" ]
then
  echo "No blocked IPs, deleting all drop rules" >&2
  iptables -F "${chain}"
  exit
fi

echo "Blocked IP - Container IP/port:
${blocked_ips}" >&2

# adding drop rules if missing
echo "${blocked_ips}" | \
while read -r srcip destip destport
do
    if [ "$(iptables -nL "${chain}" | awk -v source="$srcip" -v destination="$destip" -v dpt="$destport" '$4 == source && $5 == destination && $7 == "dpt:"dpt' | wc -l)" -gt 0 ]
    then
      echo "Rule already exists, skipping" >&2
      continue
    fi
    echo "Executing iptables -I ${chain} --source $srcip -d $destip -p tcp --dport $destport -j DROP" >&2
    iptables -I "${chain}" --source "$srcip" -d "$destip" -p tcp --dport "$destport" -j DROP
done

# removing rules if the ip is no longer in the blocklist
# (delete from the highest rule number down, so deletions do not shift the remaining numbers)
iptables --line-numbers -nL ${chain} | awk '/DROP/{split($8, Ar, /:/); port=Ar[2]; print $1";"$5" "$6" "port}' | \
 sort -t";" -k1,1 -rn | \
 while IFS=";" read -r rulenum rulespec
 do
   # echo "blocked_ips: ${blocked_ips}"
   # echo "rulespec: ${rulespec}"
   echo "${blocked_ips}" | grep -q "${rulespec}"
   if [ $? -ne 0 ]
   then
     echo "Rule for ${rulespec} is no longer valid, removing.." >&2
     iptables -D ${chain} ${rulenum}
   fi
 done
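The sqlite query at the heart of the script can be sanity-checked without a real container by building a throwaway db with just the columns the query touches (a sketch; fail2ban’s real `bans` table has more columns than these three):

```shell
#!/bin/sh
# Mock fail2ban's `bans` table (only the columns the query uses)
# and run the same time-window select as f2b.sh does.
db=$(mktemp)
now=$(date +%s)
sqlite3 "$db" "create table bans (jail text, ip text, timeofban integer);"
sqlite3 "$db" "insert into bans values ('sshd', '203.0.113.7', $now);"             # fresh ban
sqlite3 "$db" "insert into bans values ('sshd', '198.51.100.9', $((now - 7200)));" # expired long ago
bantime=600
sqlite3 -csv "$db" \
  "select ip,'192.168.1.100',22 from bans
   where ($bantime + timeofban) > cast(strftime('%s','now') as int)
     and jail = 'sshd';"
# only the fresh ban should come back: 203.0.113.7,192.168.1.100,22
rm -f "$db"
```

The expired row drops out because its `timeofban + bantime` is already in the past, which is exactly how the script decides which DROP rules to keep.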

Cron is updated as specified in the script header.

The concept is as follows:

  • fail2ban inside the container watches the auth.log file or systemd’s journal and, if a matching event is found, writes it into the db
  • the host OS (TO) reads the db directly at regular intervals and inserts/removes DROP rules in iptables (in the chain used for forwarding) based on the source IP, destination port and time of the last failed security-related event

Will update this post after running this for a couple of weeks to say whether it’s a viable solution or not. Comments welcome.