[reshuffled to put the more rant-like things at the end, though they’re not very rant-like: I’m starting to think that the only thing really wrong here is documentation that makes it hard to tell which things the Omnia does differently from stock OpenWRT, plus the chaotic organization of OpenWRT’s own documentation. OpenWRT really, really needs to steal FreeBSD’s documentation team.]
@cynerd:
only things that do not work are leds configuration, second ethernet between cpu and switch and sfp
… hmm, 4.0 is seeming more and more desirable! I might well upgrade before even setting it up, once I get the serial console working so I can revert if it goes wrong: there’s no point spending ages setting everything up if I have to rethink it all after a massive upgrade. The LEDs don’t matter to me since it’s going to be under a wooden cowling anyway (assuming the 5GHz wifi can get through it!), and I’m not using SFP (yet), so the (wonderful) second internal Ethernet interface isn’t a problem either. One presumes all the physical RJ45/Ethernet network ports work, since I have planned uses for all of them.
I am not sure if you are able to setup bonding from luci but you should be able from cli
Oh no, the firewall I’m replacing is a special snowflake (A&A is an unusual ISP with some strange features, and I sometimes think I’m using all of them) and its handling of line bonding is both wonderful and loony. The ADSL lines are bonded, but it’s done by the ISP, so the bonding need not be reflected on the Linux side at all! You plug two ADSL routers into two network interfaces with different IP addresses, throw packets out of both of them using a multipath route (so any given flow emerges from only one), and all incoming packets addressed to anything in my assigned IP ranges come in across both (one packet on A, the next on B, the next on A again, etc.), and get agglomerated automatically because they are part of the same flows. Linux can do this with a multipath outgoing route and nothing else, as long as rp_filter is turned off (a good idea for multihomed devices anyway). All you need on top of that is a little 200-line daemon to watch the incoming traffic on both interfaces, ping out of each of them individually if nothing has arrived within some suitable interval (to see whether that interface is down), and tear it out of the multipath route if it is (re-pinging at intervals to see if it can be brought back up again).
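For concreteness, here’s roughly what that looks like on the Linux side, as far as I can tell (the interface names and gateway addresses here are invented for illustration, not my real ones):

ip route replace default \
    nexthop via 192.0.2.1 dev eth1 weight 1 \
    nexthop via 198.51.100.1 dev eth2 weight 1
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.eth1.rp_filter=0
sysctl -w net.ipv4.conf.eth2.rp_filter=0

The daemon then just swaps that route for a single-nexthop one when pings out of an interface stop coming back, and restores the multipath version when they start working again.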
I looked at the bonding driver in 2013 when I set this stuff up and found it wasn’t then up to the job. It wasn’t possible to tear interfaces out of the routing when the ping stopped working: the bonding driver expects to be able to do all of this itself by watching the Ethernet link status, but unfortunately my ISP’s router does not take its Ethernet connection down when the ISP link dies, so the bonding driver would not notice and I’d end up routing packets out of a link to nowhere, and there was no way to declare an interface with still-live Ethernet dead and rip it out of the bond. I just checked again, and since I investigated this last, bonding 3.0.0 has come out, which should let me detect dead links myself and tear them down on demand via e.g.
echo -eth0 > /sys/class/net/bond0/bonding/slaves
etc, and bring them up again later as well. That seems very nice, and makes using bonding rather than raw multipath routing practical, at least if I can still direct pings out of the component interfaces, and I don’t see why I can’t. And all of a sudden I can make multiple interfaces look like one to LuCI, I think! cheer
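Concretely, the sort of thing I have in mind (interface names and the ping target are invented, and whether ping -I still works against an enslaved interface is precisely the bit I need to test):

# declare a link dead and pull it out of the bond
echo -eth1 > /sys/class/net/bond0/bonding/slaves
# health-check each physical link individually
ping -c 3 -I eth1 192.0.2.1 || echo "eth1 still dead"
# later, once pings succeed again, put it back in
echo +eth1 > /sys/class/net/bond0/bonding/slaves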
Time to test this on my existing Soekris firewall…
We don’t have squashfs, we have btrfs. We provide automatic updates, not manual reflash.
As for squashfs: I think I was led astray by a hyperlink pointing into the OpenWRT documentation, and read a bunch of it thinking that it would apply to the Turris (which is, after all, running OpenWRT), and failed to realise how much of what I’d read was non-Turris-specific documentation – either that, or the documentation for the Turris has changed a lot since November of last year when I read most of this. I knew it was using btrfs, but had no idea it was using it instead of the squashfs mess: I thought it was simply overlaying a btrfs fs on top of the usual OpenWRT squashfs thing, and nothing I read disabused me of this. I’m extremely happy to hear it’s using btrfs and rolling updates instead; I thought at the time it was very strange that a box with so much flash would be using squashfs and intermingling rolling updates with reflashes like a 64MiB micro-router would. IMHO rolling updates are the only way to run any system you actually want to rely on for anything: no big flag days, ever! I am much, much less worried now: it’s much easier to recover from disasters on rolling-update systems. (Because of course I expect there will be disasters. That’s how you prevent them being too disastrous.)
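(To illustrate what I mean about disaster recovery, and without claiming this is how the Turris updater actually arranges its subvolumes: with the root on btrfs, the obvious move before any risky fiddling is something like

btrfs subvolume snapshot / /.snapshots/before-fiddling

assuming a /.snapshots directory on the same filesystem, and if the fiddling goes wrong you roll back to that snapshot instead of reflashing anything. No flag days required.)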
You know that Luci interfaces do not map one to one to netlink interfaces?
Well, no, because everything I’d read, including OpenWRT’s own documentation, said quite clearly that they did. (That’s my root problem here, which is nothing you can do anything about: OpenWRT’s problem for a new user like myself is that its documentation, while copious, is so poorly organized that it took me ages to figure out even that much, and I ended up with a whole bunch of wrong impressions).
What other interpretation could I take from this (on the old blog so more likely to apply, I suspect, to Turris OS): “UCI Firewall maps two or more Interfaces together into Zones that are used to describe default rules for a given interface”… “Zones must always be mapped onto one or more Interfaces which ultimately map onto physical devices”. At no point is it even implied that the “interfaces” mentioned are not, well, the things that Linux has always called “interfaces”: they even have the same “eventually map onto physical devices” semantics.
Was I seriously supposed to think “oh yes, these things they call interfaces are different things from Linux network interfaces and can have a many:1 mapping with them, even though this mapping is nowhere described and there is no suggestion anywhere in the documentation how one might map multiple physical interfaces onto this LuCI concept”? It would frankly have been crazy for me to think any such thing: far more likely that two things with the same name, serving the same purpose, are the same thing. Apparently they’re not: good – though I still have no idea how one maps more than one physical interface to a Luci interface, maybe that will be clearer once I start the thing up. I suppose it is vaguely implied by Network configuration [Old OpenWrt Wiki], but frankly even reading that I would have no inkling that you could attach more than one network device to one of these LuCI interfaces: it says “list of interfaces if type bridge is set”, but again that is no different from Linux network interfaces. There is no mention of attaching multiple physical interfaces to one LuCI interface without bridging them. There is no mention of how to do anything to single Linux network interfaces other than associate them with single LuCI interfaces or bridges. So yes, I assumed they were the same thing because they looked exactly like the same thing. They still do.
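Having dug further, here is how I now think the mapping works, pieced together from the wiki, so treat with suspicion: a UCI/LuCI “interface” is just a named section in /etc/config/network that points at one or more Linux devices, and firewall zones then refer to those section names rather than to devices. So something like this (addresses and device names invented) ought to put two physical uplinks behind one firewall zone without bridging them:

# /etc/config/network
config interface 'wan'
        option ifname 'eth1'
        option proto 'static'
        option ipaddr '192.0.2.2'
        option netmask '255.255.255.252'

config interface 'wan2'
        option ifname 'eth2'
        option proto 'static'
        option ipaddr '198.51.100.2'
        option netmask '255.255.255.252'

# /etc/config/firewall
config zone
        option name 'wan'
        list network 'wan'
        list network 'wan2'
        option input 'REJECT'
        option output 'ACCEPT'
        option forward 'REJECT'

If that is right, it at least answers the question of how several Linux interfaces can look like one thing to the firewall, if not to LuCI’s Interfaces page.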
I have spent probably upwards of two weeks all told planning and reading all the documentation I can get my hands on about OpenWRT and the Turris both, including significant chunks of this forum. If I still have the wrong impression of things as fundamental as what filesystem the thing is using and how network interfaces work, so be it: since you point directly at the OpenWRT documentation I did assume that it documented this system too, and that if some of it did not apply there might at least be some indication as to which bits those were. Apparently not.
The reason I was trying to plan everything first, particularly anything to do with actual connectivity, is that I wanted to be sure I understood it properly before turning it on, even if that took months: things of this nature can in my experience fairly easily FUBAR your entire network if you start them up in an unexpected fashion. (My plan is to bring it up as a test installation indirectly connected to my existing firewall, on a network that already has most of the services running that will later be taken over by the Omnia, so that if I mess up I don’t end up severed from the Internet and unable to do anything.) In the past I’ve had things as simple as wifi routers try to mess up my local net by doing DHCP and sending IPv6 RAs advertising a default route through themselves without being asked, so something this complex seemed much riskier.
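(One concrete precaution for the test installation, if I’m reading the dnsmasq/odhcpd configuration docs right; section and option names are my understanding from the wiki, not yet verified on the box:

# /etc/config/dhcp on the test installation
config dhcp 'lan'
        option interface 'lan'
        option ignore '1'
        option ra 'disabled'
        option dhcpv6 'disabled'

i.e. no DHCPv4 answers, no router advertisements, no DHCPv6 on the LAN until I’m ready for it to take those roles over.)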
If my Internet connection is severed and I can’t fix it, I lose my job, so you’ll pardon me if I don’t just turn the thing on and hope, but try to do some research first! I know I’ll have to invest considerable time redoing my firewall rules if (as originally planned) I use LuCI for them for better integration with the rest of the system, so I wanted to make sure that the rather special snowflake of my existing firewall was replicable, let alone the very special snowflake of my outbound routing. It was disconcerting to find out just how many packages I had to rebuild to reproduce what I was already doing. I knew I’d have to rebuild a few (there’s no way you’d have something like Simtec’s ekeyd packaged, for instance), but finding that procmail wasn’t packaged was a disturbing shock. I had this terrible sinking feeling that I’d be fighting the thing non-stop, but I’m starting to think that perhaps I was wrong. I hope so.
The point of OpenWRT is network-oriented devices, not servers, and that is what its software is tailored to. It is like ranting at a fish because it does not run on shore.
Um… if you want the thing to only be used as a router, and complain when people turn up wanting it to do not terribly obscure things like mail delivery, you might also want to change your own description of the Omnia as ‘more than just a router’. If it is in fact meant to be just a router and not ‘the open-source center of your home’, I clearly bought the wrong device, but thankfully it seems to me perfectly possible to do mail delivery etc on it, which seems like a perfectly normal thing to want to do on a network-oriented device: it’s just that the packages to do so have to be snarfed from upstream OpenWRT and/or built by hand (e.g. procmail). As far as I know. Still replanning, not tested yet, have to bisect and fix a bug in the USB-serial code in kernel 4.20 first so I can actually connect to the serial console rather than just getting a spurious I/O error when I try. (Planning for disaster, again.)
The fact that the build system is OpenWRT’s is very nice, because that’s a nice build system as cross-compiling buildroots go. (FYI, should you ever get sick of getting packages working with cross-compilers, another approach that could be used for a subset of annoying packages might be to run a static qemu-user ARM instance via binfmt_misc, and “natively” compile on a faster x86 box. I’ve built huge monsters never intended to be cross-compiled like all of GNOME that way, and it works fine.)
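In case it’s of any use, the basic recipe (the rootfs path is invented and the package names are Debian’s):

# on the x86 build box: apt install qemu-user-static binfmt-support
cp /usr/bin/qemu-arm-static /srv/arm-rootfs/usr/bin/
mount --bind /proc /srv/arm-rootfs/proc
chroot /srv/arm-rootfs /bin/sh
# inside the chroot, ARM binaries run under qemu transparently,
# so a plain ./configure && make behaves like a native build, only slower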