Implementing ssh hostbased authentication
Most people will tell you that hostbased authentication is a bad idea, that it is not secure. So here's an invaluable lesson in the foundations of computer security:
Nothing is purely "secure" or purely "not secure". Security is something that must be measured against a security model, or design, or policy, that talks about what assets you are protecting and who you are protecting them from.
Is hostbased authentication a bad idea in many or most cases? Yes. But not always.
One typical use case for hostbased authentication is a collection of machines deemed to live within a security perimeter. They may all share the same network disk resources. For example, machines that all share the same set of accounts, and network-mounted home directories, and lie in a private network, are a perfect case. If one machine were broken into, this is bad, but if two or three machines were broken into this is arguably no worse in terms of asset access than one machine. Therefore there's no reason to restrict users from moving freely from one machine to the next. The convenience of automatic passwordless ssh (if it is helpful to your users) may outweigh any security concerns.
But primarily this is not about the why, but the how.
How does it work?
Hostbased authentication is trickier to set up than you might think and it can go astray in several places. To best be able to troubleshoot a setup, you should understand all the steps involved in completing a successful hostbased ssh authentication.
- A user on source.example.com runs "ssh destination".
- source establishes a port 22 connection to destination
- source checks its local known_hosts database (/etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts) for the public host key of "destination".
- source verifies that the data sent by destination matches the public hostkey it found locally (destination signs part of the key exchange with its private host key, and source verifies that signature against the stored public key). Note: the local pubkey lookup for "destination" (in a known_hosts file) must be an exact match for the host you requested on the ssh command line.
- source tells destination it can do hostbased authentication ("HostbasedAuthentication yes" in source's ssh_config)
- destination tells source it can do hostbased authentication ("HostbasedAuthentication yes" in destination's sshd_config)
- destination looks up source's hostname from the connecting IP address (reverse DNS) and makes sure it is in /etc/hosts.equiv or /etc/shosts.equiv. (By default sshd uses this looked-up name, not the name the client sent; setting "HostbasedUsesNameFromPacketOnly yes" in sshd_config makes it use the client-supplied name instead.)
- source signs a blob of session data (including the session identifier and its own hostname) with source's private host key, using the helper command ssh-keysign (which usually needs to be setuid root, or setgid to something that can read the private key).
- source sends destination the signature, along with its claimed hostname
- destination looks up "source.example.com" (probably) in its known_hosts files (/etc/ssh/ssh_known_hosts and ~/.ssh/known_hosts).
- If it finds a public key, it uses it to verify the signature sent by source, and checks that the hostnames match.
- If everything succeeded up to this point, hostbased authentication succeeds and you are logged in with no password.
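The sign-and-verify exchange at the heart of this sequence can be imitated with ssh-keygen's generic signature interface (OpenSSH 8.0 and later). This is only an illustration of the cryptography involved, not the actual wire protocol; all file names and the key here are made up for the demonstration:

```shell
# Imitate the hostbased signing step with ssh-keygen -Y (illustrative only).
dir=$(mktemp -d)
# Stand-in for source's host key pair:
ssh-keygen -q -t ed25519 -N '' -f "$dir/host_key"
printf 'session identifier' > "$dir/data"
# "source" signs the data with its private host key (ssh-keysign's job);
# this writes a detached signature to data.sig:
ssh-keygen -q -Y sign -f "$dir/host_key" -n hostbased "$dir/data"
# "destination" verifies using the public key it trusts for that name,
# the moral equivalent of its ssh_known_hosts lookup:
printf 'source.example.com %s\n' "$(cat "$dir/host_key.pub")" > "$dir/allowed"
ssh-keygen -Y verify -f "$dir/allowed" -I source.example.com \
  -n hostbased -s "$dir/data.sig" < "$dir/data"
```

If the public key matches, the last command reports a good signature; swap in a different public key and verification fails, which is exactly what happens when a known_hosts entry is stale.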
How do I set it up?
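In outline, assuming stock OpenSSH and the example hostnames used in this article, the moving parts are as follows (paths vary by platform; treat this as a checklist, not literal commands):

```shell
# 1. destination: in /etc/ssh/sshd_config, then restart sshd:
#        HostbasedAuthentication yes
#
# 2. destination: list trusted client hosts in /etc/shosts.equiv
#    (or /etc/hosts.equiv), one per line:
#        source.example.com
#
# 3. destination: put source's public host key in /etc/ssh/ssh_known_hosts:
#        source,source.example.com ssh-ed25519 AAAA...PublicKeyDataHere
#
# 4. source: in /etc/ssh/ssh_config:
#        HostbasedAuthentication yes
#        EnableSSHKeysign yes
#    (EnableSSHKeysign belongs in the global section, not under a
#    Host block.)
#
# 5. source: verify ssh-keysign exists and can read the host private
#    keys -- it is typically installed setuid root; the path varies,
#    e.g. /usr/libexec/ssh-keysign or /usr/lib/openssh/ssh-keysign.
#
# 6. Test with "ssh -v destination" and look for "hostbased" in the output.
```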
Why doesn't it work?
- Is hostbased authentication turned on on both ends? Use "ssh -vvv destination" (three v's is maximum verbosity) and make sure "hostbased" appears in both the "preferred" line and the "Authentications that can continue" line.
- Is ssh-keysign on the source able to read the private host key and produce a signature? Look for an "ssh_keysign" error in the verbose ssh output.
- Most ways that this can go wrong are mismatched hostnames. The hostname you type on the command line must match a valid known_hosts entry on the source end. Is the entry out-of-date? Does the entry use FQDN instead of shortname (or vice versa)?
- Look for the "userauth_hostbased: chost" line in the verbose ssh output. The hostname listed there must exactly match an entry in the destination's /etc/hosts.equiv (or /etc/shosts.equiv).
- That same chost value must exactly match a known_hosts entry on the destination end.
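The last two checks can be scripted against a saved debug transcript (e.g. "ssh -vvv destination 2>ssh.log"). The log lines below are a hand-made sample in the format described above, just for illustration:

```shell
# Fabricated sample of the relevant "ssh -vvv" output:
cat > ssh.log <<'EOF'
debug1: Authentications that can continue: publickey,hostbased
debug1: userauth_hostbased: chost source.example.com.
EOF
# Did the server offer hostbased at all?
grep 'Authentications that can continue' ssh.log
# What chost value did the client send?  (OpenSSH appends a trailing
# dot; this exact string is what must line up with your hosts.equiv
# and known_hosts entries.)
sed -n 's/.*userauth_hostbased: chost //p' ssh.log
```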
Proper known_hosts setup and dealing with name mismatch problems
If your environment lets users use short hostnames (e.g. your resolver is set to automatically search your domain ("example.com") when the provided host doesn't resolve as given), then a user can type "ssh destination" and automatically populate ~/.ssh/known_hosts with an entry for "destination", even though ssh is really connecting to "destination.example.com". That works fine for outbound connections, but the "destination" entry cannot be used in the other direction: when a connection arrives from destination, the sshd on this machine resolves the peer as "destination.example.com", and that name has no matching entry.
A lot of these problems also arise when users automatically populate their known_hosts files because StrictHostKeyChecking is set to "no" or "ask" (or "accept-new" if your system supports it) in NFS home-mounted environments. Relying on this mechanism can leave behind an inconsistent mix of shortname and FQDN entries. It also creates a usability trap: it is not intuitive to users that hostbased authentication will only work between two hosts once both have been added to the known_hosts file (in the appropriate forms). Relying on automated updates to known_hosts can be made to work, but is not recommended.
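The exact-match behavior is easy to see concretely with ssh-keygen's -F lookup against a scratch known_hosts file (hostnames here are illustrative; requires the ssh-keygen tool):

```shell
# Show that a known_hosts entry for the FQDN does not match the shortname.
kh=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$kh/key"   # throwaway key
# The file contains only the FQDN form of the name:
printf 'destination.example.com %s\n' "$(cat "$kh/key.pub")" > "$kh/known_hosts"
for name in destination destination.example.com; do
  if ssh-keygen -F "$name" -f "$kh/known_hosts" > /dev/null; then
    echo "$name: entry found"
  else
    echo "$name: no entry"
  fi
done
```

The shortname lookup fails while the FQDN lookup succeeds, which is precisely the mismatch that breaks hostbased authentication in one direction but not the other.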
If you're experiencing host name matching problems there are a few things you can do to solve them (starting with the likely worst choice and moving to the arguably best):
- You can always use short names and set your hostname resolver to always return short names. Then everything will match. But using shortnames everywhere in your environment is not the best choice, as it's bound to screw up something else in your configuration.
- Conversely you can forego the convenience of typing "ssh destination" and always type "ssh destination.example.com". But where's the fun in that? One workaround is to add an ssh_config (system-wide or in ~/.ssh/config) like this:
Host source destination foo bar baz other
    HostName %h.example.com
This makes ssh always use FQDNs for the specified hosts. (Unfortunately there's no ssh_config pattern for hosts that means "match all hostnames with no dot".) If your ssh version is recent enough (and you aren't sharing NFS-mounted home directories across a mixed environment of different ssh versions), an even better choice is to use the canonicalization options in OpenSSH 6.5 and later:
CanonicalizeHostname yes
CanonicalDomains example.com
#do not "fix" anything with more than 0 dots:
CanonicalizeMaxDots 0
#only if you have some weird local hostnames you need to use now and then:
CanonicalizeFallbackLocal yes
This will make every host automatically added to your known_hosts file use its FQDN, which should simplify the configuration and eliminate hostname mismatches as a source of errors.
- [Recommended] You can take full control of your known_hosts files. If you centrally manage the /etc/ssh/ssh_known_hosts files, you can force them to contain all variations of hostnames, e.g.:
source,source.example.com,192.168.10.20 ssh-rsa PublicKeyDataHere...
destination,destination.example.com,192.168.10.21 ssh-rsa PublicKeyDataHere...
This is probably the best and most secure choice. If you update a host and its host keys you can update your known_hosts files to match and users will never see warnings that host keys have changed, and hostbased authentication will always work in the cases where you want it to work.
This can be maintained via scripts, or by centralized control with ansible, puppet, chef, etc, or by hand for a small number of machines.
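A scripted rebuild might look like the sketch below. The host list, domain, and output path are illustrative; ssh-keyscan queries each live machine, and the sed rewrites the name field of each key into the comma-separated form shown above:

```shell
# Sketch: rebuild a consolidated ssh_known_hosts with ssh-keyscan.
# Hostnames, domain, and output path are hypothetical examples.
domain=example.com
hosts="source destination"
for h in $hosts; do
  ip=$(getent hosts "$h.$domain" | awk '{print $1}')   # may be empty
  names="$h,$h.$domain${ip:+,$ip}"
  # Replace the name field of each scanned key with all the variants:
  ssh-keyscan -t ed25519,rsa "$h.$domain" 2>/dev/null |
    sed "s|^[^ ]*|$names|"
done > ssh_known_hosts.new
# Review the result, then distribute it as /etc/ssh/ssh_known_hosts.
```

Unreachable hosts simply produce no output, so it is worth diffing the new file against the old one before distributing it.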