This just feels like moving ssh auth to another port, using an obscure "authentication format" that nearly no one uses.<p>If you're using pubkey or certificate auth, have disabled password auth, and restrict what users are allowed to ssh in, this just feels like added complexity and points of failure (not to mention a possible source of crypto vulnerabilities) with not much added benefit.<p>Having said that, it's a cool project in general, and I feel like it could be useful to dynamically manage access to large groups of servers (perhaps not just ssh; you could use it to manage access to https interfaces or other things). Then again, if you have a large group of servers, you should have those ports blocked to the world and only allow access through a VPN and/or jump boxes.
This is very similar to port knocking, only more complicated. While a proper JWT is “more secure” than a sequence of 0-65535 integers, I contend that having complicated and/or unvetted logic as your first line of defense is more problematic than secure.
Setting this up makes much less sense than setting up a tested vpn, such as wireguard or openvpn, or even a persistent ssh tunnel using autossh to your home rpi.<p>I would never allow my prod systems to be potentially exposed by an api that runs as root. (And the documentation is incorrect on that; it should run as an unprivileged user with sudo privs to only run a wrapper script that runs firewall-cmd).<p>This also makes little sense in the context of configuration management, which should be enforcing a static set of iptables rules.
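A rough sketch of the split I'm describing -- the user name, wrapper path, and zone are placeholders of mine, not anything the project documents:<p><pre><code> # /etc/sudoers.d/firewalld-rest: the API runs as 'fwapi' and may only call the wrapper
 fwapi ALL=(root) NOPASSWD: /usr/local/sbin/allow-ssh-from

 # /usr/local/sbin/allow-ssh-from: the only thing the API can run as root
 #!/bin/sh
 set -eu
 # crude IPv4 sanity check before handing the argument to firewall-cmd
 echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || exit 1
 exec firewall-cmd --zone=public --add-rich-rule="rule family=ipv4 source address=$1 port port=22 protocol=tcp accept"
</code></pre>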
I like to run fail2ban in conjunction with a non-standard SSH port on which only public key auth is available.<p>This way, most of the junkware that does rude things to port 22 is banging on a closed door; the slightly more effective junkware that actually finds the SSH port gets banned immediately, because I know anyone trying to log in with a password is full of it.
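A sketch of the kind of setup I mean -- the port number, retry count, and ban time are just example values:<p><pre><code> # /etc/ssh/sshd_config (fragment)
 Port 2222
 PasswordAuthentication no
 PubkeyAuthentication yes

 # /etc/fail2ban/jail.local (fragment) -- watch the non-standard port, ban on the first failure
 [sshd]
 enabled  = true
 port     = 2222
 maxretry = 1
 bantime  = 86400
</code></pre>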
I usually just disable password auth and update regularly, when I have an SSH server open to the internet.<p>Short of an 0day in the SSH service, I expect brute forcing the private key(s) to take longer than I have years to live.
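For what it's worth, a quick way to double-check the effective settings (assuming a reasonably recent OpenSSH, where keyboard-interactive replaced challenge-response):<p><pre><code> sshd -T | grep -Ei 'passwordauthentication|kbdinteractiveauthentication'
 # expect both to report "no"
</code></pre>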
The idea is better than port knocking, in the sense that you take active action to associate your host with the server, but there are a few issues.<p>* Something this simple shouldn't need k8s, unless it was intended as an exercise by the developer for that reason<p>* It combines the idea of using a non-standard port with certificate-based authentication, which you can already do with SSH -- it's functionally the same with more steps<p>Hypothetically, this approach can be more powerful as a centralized service running elsewhere (i.e. cloud, your remote DC) and used for a whole bunch of jump boxes and end-users. End-users could run a "check-in" script wrapping around SSH that notified the service that a user was imminent, and then could check the server to see if
1) The bearer token is accepted for the destination server
2) That the destination server checked in, saw the new incoming request, and has already opened the port -- and then proceeds to run the SSH command if all is well, or fails with an appropriate error (rough sketch below).
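Purely hypothetical sketch of such a check-in wrapper -- the service URL, endpoints, token file, and response format are all invented for illustration:<p><pre><code> #!/bin/sh
 # check-in wrapper: announce the connection, wait for the port, then ssh
 set -eu
 HOST="$1"; shift
 TOKEN="$(cat "$HOME/.config/checkin-token")"   # hypothetical bearer token
 API="https://checkin.example.internal"         # hypothetical central service

 # 1) tell the service which destination we are about to connect to
 curl -fsS -H "Authorization: Bearer $TOKEN" -d "host=$HOST" "$API/checkin" > /dev/null

 # 2) wait until the destination reports the port open, then run ssh
 for _ in $(seq 1 10); do
   if curl -fsS -H "Authorization: Bearer $TOKEN" "$API/status?host=$HOST" | grep -q open; then
     exec ssh "$HOST" "$@"
   fi
   sleep 1
 done
 echo "check-in never confirmed; refusing to connect" >&2
 exit 1
</code></pre>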
Nope nope nope.<p>This has serious quality issues, and any competent and nice sysadmin should say _nope_ to running this in any serious environment, for your own good ;-)<p>IP addresses are just strings? At least parse/validate them as IPv4/IPv6.<p>Why yet another database? Can't the rules be loaded from the running system?<p>Why not just ipset-persistent + knockd + portsentry? I know it is easy to get overexcited with a new pet project, but just be careful not to put this kind of stuff on a production system, kiddos.
I don't like this approach very much because it's much more complicated than it would have to be. Security is about layers, and this is essentially one layer that acts as the sole guardian for sshd.<p>The way that I like to do this is to have a common 'entry point' for all my cloud systems. Instead of whitelisting IPs on every VPS or cluster I build, I just add them to the ACL on my management server. All the other systems only allow SSH connections in from the bastion server. In practice, it works like this:<p>* Add IP to the whitelist file in my change control<p>* Run the Terraform script to update the DigitalOcean ACL<p>* Start an SSH agent locally and add the key for the bastion, as well as the key for the destination<p>* Connect to the destination server by using ProxyJump<p>So, connecting to a box would always route through my bastion system first, like this:<p><pre><code> ssh -J mgmt.mydomain.net cool-app-server.mydomain.net
</code></pre>
I've been doing this for a couple years, and it works great. I practically never see login attempts on my systems. And, since I use an ssh agent to forward the keys through the bastion without ever actually storing them there, a compromise of that system doesn't really give the attacker anything other than access to port 22 on a bunch of systems that they wouldn't know where to find. Only the most sophisticated attack (<a href="https://xkcd.com/538/" rel="nofollow">https://xkcd.com/538/</a>) would lead to a real compromise.
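The client side of this fits in a single ~/.ssh/config stanza, so a plain "ssh cool-app-server.mydomain.net" does the jump automatically (host names copied from the example above; everything else here is illustrative):<p><pre><code> Host *.mydomain.net !mgmt.mydomain.net
     ProxyJump mgmt.mydomain.net
</code></pre>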
I like this "proactive" solution :)
Endlessh: an SSH Tarpit
<a href="https://nullprogram.com/blog/2019/03/22/" rel="nofollow">https://nullprogram.com/blog/2019/03/22/</a>
> <i>6. Possible enhancements</i><p>> <i>Rate limiting the number of requests that can be made to the application</i><p>So this just moves the brute forcing target from ssh to a web app. A lot of work for no added security.
If I have public key authentication set up for ssh, should I even bother with fail2ban, firewalld-rest, port knocking, etc.? There's no way anyone is brute forcing my ed25519 key, so what's the point? Sure security should be layered and all that but it seems like public key auth is so strong by itself anything on top seems unnecessary.
If you do not need the more granular firewall configuration options there is also classic port knocking (<a href="https://en.wikipedia.org/wiki/Port_knocking" rel="nofollow">https://en.wikipedia.org/wiki/Port_knocking</a>) where the daemon sits behind the firewall so all ports can be closed by default.
A dynamic IP filtering list is still reactive because you haven't actually secured the box or its services. You've just made it slightly inconvenient to brute force. You might as well use port knocking, because even with a fancy schmancy authentication system (and the attack surface of a custom web app...) I can MITM your active connections just fine either way, or spoof your whitelisted IP.<p>I know everybody likes Fail2ban, but these two iptables rules (or something just like them) actually work better and don't fill up your logs:<p><pre><code> # track every new inbound SSH connection per source address
 iptables -t mangle -A PREROUTING -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --set
 # drop a source that has opened 4 or more new connections within the last 60 seconds
 iptables -t mangle -A PREROUTING -p tcp --dport ssh -m conntrack --ctstate NEW -m recent --update --seconds 60 --hitcount 4 -j DROP
</code></pre>
To actually protect the network layer, use some kind of VPN with MFA.
This is interesting in that it combines the concept of port knocking with a REST interface, which I'm assuming is up to the user to create a front end for.<p>Unfortunately it also relies on Kubernetes, which means that using it for a single system isn't practical. At least, not for <i>this</i> server owner.<p>My own approach is simply security by obscurity (a non standard port) with APF/BFD doing the needful for locking bots out if they figure out the port. I've had to change ports only once in 6 years, so it's working to keep bots out rather nicely.<p>And really that's all these things are- a way to keep bots out. A determined attacker will figure this stuff out anyway.
Secure SSH and you won't have to worry about rogue login attempts, which happen to everyone. If it really bothers you, then move it to another port where it will happen less.<p>But install a new firewall management system? That sounds like it will introduce more risk than it removes, and for the SSH example the "problem" it solves isn't really a problem at all.
Have you tried knockd <a href="https://linux.die.net/man/1/knockd" rel="nofollow">https://linux.die.net/man/1/knockd</a> ? You send a special sequence of "knocks" to the server (packets to different ports) and it executes a command such as allowing your IP for a time period. No JWTs.
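The config is pleasantly small; something like this (sequence, timeout, and window are just example values):<p><pre><code> # /etc/knockd.conf
 [opencloseSSH]
     sequence      = 7000,8000,9000
     seq_timeout   = 5
     tcpflags      = syn
     start_command = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
     cmd_timeout   = 30
     stop_command  = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT

 # client side: knock, then connect within the 30 second window
 knock myserver 7000 8000 9000 && ssh myserver
</code></pre>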
If you're gonna go through all of this work -- including creating and maintaining private keys -- why not just restrict the SSH server to only permit key-based authentication (optionally, signed by your CA)?<p>If having 22/TCP open to the world is an issue, then set up Wireguard on the host and only allow SSH connections that are coming in over the Wireguard interface.<p>Got a bunch of machines to deal with? Set up a jumpbox or two running OpenBSD, lock it down, give your users access to it via SSH (optionally, over a Wireguard connection) and then only allow SSH access to all of those other hosts from the jumpbox(es).<p>Then there's the fact that I have a <i>whole lot more</i> trust in the security of OpenSSH than I do some random web application!<p>To me, this just seems kinda pointless -- there's a bunch of other, better (IMO) ways to deal with this -- but I guess if it fits your needs ...
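For the Wireguard variant, the restriction can be as simple as this (wg0 and 10.0.0.1 are assumptions about how the tunnel is addressed, not anything prescribed):<p><pre><code> # /etc/ssh/sshd_config (fragment): only bind sshd to the Wireguard address
 ListenAddress 10.0.0.1

 # or leave the bind alone and drop SSH that doesn't arrive via wg0
 iptables -A INPUT -p tcp --dport 22 ! -i wg0 -j DROP
</code></pre>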
> <i>you can go to jwt.io and generate a valid JWT using RS256 algorithm (the payload doesn't matter). You will be using that JWT to make calls to the REST application, so keep the JWT safe.</i><p>The JWT you got after you plugged the private key into a random website is going to protect access to your machine?
Adding a JWT authenticated API layer to something is not a first choice for adding additional security.<p>If you want something like this look into platform level firewalls (ex: AWS security groups) or run spiped in front of your SSH server. I’d trust that a hell of a lot more than a REST API.
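For reference, the spiped pattern is roughly this (port numbers, hostname, and key path are arbitrary):<p><pre><code> # generate a 32-byte shared key and copy it to both ends
 dd if=/dev/urandom of=/etc/spiped/ssh.key bs=32 count=1

 # server: decrypt traffic arriving on 8022 and hand it to the local sshd
 spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key

 # client: encrypt local 8022 and forward it to the server, then ssh through it
 spiped -e -s '[127.0.0.1]:8022' -t 'server.example.com:8022' -k /etc/spiped/ssh.key
 ssh -p 8022 user@127.0.0.1
</code></pre>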
This looks like a reincarnation of the "Lock & Key" feature of Cisco routers, available since the late 90s. There were two major issues that led to abandonment and hindered adoption. The first is that it's an extra step. Extra steps for installation, for complexity, for single point of failure, for availability of key services. The second is that instead of thwarting attackers, you're thwarting yourself, every single time. It breaks so many use cases, for example if you have a new machine, or if you want to access through a jump host, or from an ssh client on a phone, etc.
On EC2, I've had a great experience using AWS Systems Manager [1]. I don't need any ports open, and it works great with normal shell tools and emacs given a .ssh/config like this:<p><pre><code> host i-* mi-*
     ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
</code></pre>
[1] <a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html" rel="nofollow">https://docs.aws.amazon.com/systems-manager/latest/userguide...</a>
The documentation and framing are too narrowly scoped.<p>This is a JWT-based port-knocking-over-HTTPS framework that would be useful when used in a much broader sense.<p>Framing it as a proactive fail2ban is technically correct, but it also masks the other, more powerful use cases.<p>I could see this in use as a vpn bypass for a prosumer production system, where normal operational commands go over a secured vpn but, in a pinch, you can disable that restriction for direct control during a partial failure.
Please don't use this software. From a security perspective this is horrible.<p>Option A) Expose all firewall rules via some hacky web-based API.<p>Option B) <a href="https://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7966.pdf" rel="nofollow">https://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7966.pdf</a>
If you're going to take this approach, a bigger question is why bother running the SSH jumpbox 24/7 in the first place? Shut down the SSH jumpbox when you don't need it (thus achieving SSH isolation) and start it up with your public IP in user-data to enable access to you and only you when you do need it.
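Something in that spirit, heavily simplified and launching a fresh box rather than restarting a stopped one -- the AMI, instance type, and key name are placeholders, and the user-data just opens firewalld for the caller's current address:<p><pre><code> MYIP=$(curl -s https://checkip.amazonaws.com)
 aws ec2 run-instances \
   --image-id ami-0123456789abcdef0 \
   --instance-type t3.micro \
   --key-name my-key \
   --user-data "#!/bin/sh
 firewall-cmd --add-rich-rule=\"rule family=ipv4 source address=${MYIP} port port=22 protocol=tcp accept\"
 firewall-cmd --runtime-to-permanent"
</code></pre>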
This sounds like a complex port knocking setup with more overhead.
If you want better security and UDP is not a problem, consider using Wireguard. It is passive and silent; random attackers won't even know it is there.
(1) The auth interface whitelists per IP address, as seen by the listening server. (2) The word "NAT" is found 0 times in the text. (3) There is no point 3 from the practical standpoint, please move along :(
Much simpler to stick with SSH and an IP Address whitelist...<p>Then you can't even get login attempts, unless they're from the same IP... am I missing something?