
I currently have an AWS setup with three VPCs: one for the bastion, plus a Dev and a Prod VPC, those last two being SSH-accessible only via the bastion, of course (layout from this great guide). I'm looking for the best way to manage SSH access for 3–5 users. Long term I hope to have automation and external logging in place so that nobody has to SSH in, but for now, SSH it is.

After reading over ideas from this question on SSH key management, I am thinking about setting up my AWS access as follows:

  • Two accounts on the bastion, sudoable@bastion and non-sudoable@bastion; sysadmins go through one, devs through the other. (Devs have no reason to do anything but pass through the bastion.)
  • The authorized_keys file in each bastion account holds the public keys of the sysadmins or devs, respectively.
  • Dev/Prod VPC instances each have a single, sudoable account: it@devVPC and it@prodVPC.
  • Devs get the private key for the Dev VPC instances; sysadmins get the private keys for both VPCs' instances.
  • If someone joins/leaves the company, their public key is added to/removed from the appropriate authorized_keys file on the bastion.
  • Since the devVPC/prodVPC instances are behind the bastion, even though a worker who left may still hold the private key to those servers, they have no way to reach them, since the bastion won't let them through.
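For what it's worth, a pass-through like this maps naturally onto OpenSSH's ProxyJump directive (OpenSSH 7.3+). A minimal client-side sketch, where the hostnames, IPs, account names, and key paths are all placeholders, not details from the actual setup:

```
# ~/.ssh/config -- all hostnames, IPs, and key paths are placeholders
Host bastion
    HostName bastion.example.com
    User non-sudoable              # devs use the pass-through account
    IdentityFile ~/.ssh/id_ed25519

Host dev-app
    HostName 10.0.1.10             # private IP inside the Dev VPC
    User it
    IdentityFile ~/.ssh/devvpc_key
    ProxyJump bastion              # hop through the bastion transparently
```

With this, `ssh dev-app` tunnels through the bastion in one command; older OpenSSH versions can get the same effect with ProxyCommand and `ssh -W`.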

I think this should keep me from having to manage individual useradd/userdel operations on each instance. Without getting into LDAP or other things I wouldn't want long term anyway (again, hopefully nobody will need to SSH into the AWS servers in the future), does this seem like a legitimate/secure setup for a small team?

I haven't used it, but I've also seen AWS OpsWorks mentioned as a method for controlling SSH keys. Does my situation sound like a good use case for it?


3 Answers


Use a separate SSH key for every user, and a separate user account for each of them on every system.

We create users, assign them to roles, and enable pubkey access using the users cookbook with Chef, the open-source project that OpsWorks is based on.
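The users cookbook reads user definitions from a `users` data bag; an item looks roughly like this (the username, group, and key below are placeholders):

```json
{
  "id": "alice",
  "groups": ["sysadmin"],
  "shell": "/bin/bash",
  "ssh_keys": [
    "ssh-ed25519 AAAAC3... alice@example.com"
  ]
}
```

Removing a user is then a data bag change (the cookbook supports marking users for removal) rather than hand-editing authorized_keys on each box.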

Alain O'Dea

Using my method as posted would make it much harder to audit actions down the line if we needed to, as Jesse pointed out, so I think I'll need to choose a better option.

1) Looking into OpsWorks, or just using Ansible (which we already use for some server provisioning), as Alain suggests, sounds promising. It doesn't seem like it would add too much overhead.
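If we go the Ansible route, a minimal sketch of per-user accounts plus key installation might look like the following. This assumes the `ansible.posix` collection is installed; the user name, group, and key file are hypothetical:

```yaml
# manage_users.yml -- create a per-user account and install their public key
- hosts: all
  become: true
  tasks:
    - name: Ensure user account exists
      ansible.builtin.user:
        name: alice
        shell: /bin/bash
        groups: sudo        # omit for non-sudo users
        append: true

    - name: Install alice's public key
      ansible.posix.authorized_key:
        user: alice
        state: present
        key: "{{ lookup('file', 'keys/alice.pub') }}"
```

Off-boarding then becomes flipping `state: present` to `state: absent` (and `state: absent` with `remove: true` on the user task) instead of touching each instance by hand.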

2) I also came across the service Foxpass, which looks like it would integrate SSH key management with our existing Google Apps accounts. That could alleviate some of my worries about the overhead of on-boarding (and off-boarding) employees.


There is one big hole in this: with shared accounts it is difficult to prove who performed which action. Not impossible, just more difficult. In addition, a shared private key is a bad idea from a defense-in-depth standpoint. You can solve both problems by managing the accounts individually, which creates greater management overhead, but that shouldn't be significant with just five-ish users.

I can't comment in depth on the OpsWorks idea, as I haven't used it. Based on a quick read of the document you provided, it really just looks like it sets up accounts on your instances and then populates the authorized_keys files, which is a pretty small task. You would also have to spend the time setting all your users up with AWS IAM accounts, which doesn't seem like a good tradeoff.

Jesse K