
Consider any long-term data archives you keep - backups, copies of logs, code repositories, audit entries, etc. Are they append-only? I don’t mean from the perspective of the account owner - of course the operator can do whatever they want with those files, including deleting them.

But what about your servers? I’ve seen many backup manuals, and even published scripts, that pretty much require you to set up an S3 account and nothing more - they’ll manage the files inside and make sure everything is updated as needed. But there’s a problem with that strategy: what if someone else gains control of your server? What can they do to your archives?

Can they, in one go, wipe both your current database and all of its backups using credentials stored locally? Or all the logs from this and other services? Can they wipe your code repositories using your deployment keys? Can they read all your database backups, with every past and present account still in there? Judging from many tutorials and articles, there are lots of production services out there where the answer to all of those questions is “yes”. Sometimes it’s not even that the admins aren’t trying hard enough - sometimes the services themselves are simply not prepared to protect against that kind of behaviour. Many providers don’t offer any way of accessing an account other than a single user+key combination with global access to everything.

There are ways to provide some level of protection even without any support from the service provider, but it’s much easier if access control is already built in. For example, when uploading data to S3, everything needed for good protection is already provided. IAM lets you create an account per service, or even per host. User permissions allow you to grant only as much access as your use case requires. That means your remote backup script shouldn’t need any privilege beyond PutObject - either on one specific object (if you’re using a versioned bucket), or on a whole bucket (without versioning). The second case also requires that you assign unpredictable names (for example, random suffixes) to the files, so that they cannot be destroyed by overwriting.

Here’s an example of a user policy for an S3 bucket without versioning:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1375779806000",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::blah-database-backups/*"
      ]
    }
  ]
}
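
To make overwriting harmless under that policy, the upload script has to pick names the attacker cannot predict. Here’s a minimal sketch using Python and boto3 - the bucket name matches the policy above, everything else (the function name, the key format) is just an illustration:

import uuid
from datetime import datetime, timezone

import boto3

# This client should use the credentials of the upload-only IAM user,
# which has s3:PutObject and nothing else.
s3 = boto3.client("s3")

def upload_backup(path):
    # A timestamp plus a random suffix makes the key unguessable, so a
    # compromised server cannot destroy earlier backups by overwriting.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    key = "db-%s-%s.dump" % (stamp, uuid.uuid4().hex)
    with open(path, "rb") as f:
        s3.put_object(Bucket="blah-database-backups", Key=key, Body=f)
    return key

With a versioned bucket the random suffix isn’t needed - an overwrite just creates a new version - but the upload user still must not be granted any s3:Delete* permissions.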

But that’s not all. Even if your provider only gives you a single all-powerful account, there’s still a way to limit the potential damage. Instead of using the same account for upload and for long-term storage of the files, get two accounts. Upload to the first one and keep it around only for that purpose. For long-term storage, set up a machine which is completely separated from the users (maybe one that even rejects all incoming connections) and make sure all files are moved to the second account as soon as possible.
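
What the mover on that isolated machine does might look like the following sketch, again in Python with boto3. It assumes two separate credential profiles and two buckets - all of the names here (upload-account, archive-account, blah-uploads, blah-archive) are placeholders:

import boto3

# Two separate credential sets: one for the exposed upload account and
# one for the archive account that only this isolated machine knows.
upload = boto3.Session(profile_name="upload-account").client("s3")
archive = boto3.Session(profile_name="archive-account").client("s3")

UPLOAD_BUCKET = "blah-uploads"
ARCHIVE_BUCKET = "blah-archive"

def drain():
    # Move everything out of the upload bucket as soon as possible, to
    # shrink the window in which an attacker can touch the files.
    # (A real script would paginate and stream instead of read().)
    resp = upload.list_objects_v2(Bucket=UPLOAD_BUCKET)
    for obj in resp.get("Contents", []):
        key = obj["Key"]
        body = upload.get_object(Bucket=UPLOAD_BUCKET, Key=key)["Body"].read()
        archive.put_object(Bucket=ARCHIVE_BUCKET, Key=key, Body=body)
        upload.delete_object(Bucket=UPLOAD_BUCKET, Key=key)

if __name__ == "__main__":
    drain()

The important property is that the archive credentials never leave this machine, so whoever controls the production server only ever sees the upload account.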

That kind of setup is not perfect: it still allows the attacker to replace files that have not been moved yet, or to download files which wouldn’t normally be accessible from the local filesystem. But the time window and the amount of data that may be affected are much smaller. There’s also still the possibility of encrypting the data locally with a public key (which can be safely stored on the server) in order to protect the information itself.
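
One way to do that local encryption is a hybrid scheme - encrypt each backup with a fresh symmetric key, then wrap that key with the public key, since RSA alone can’t handle large files. A sketch using the Python cryptography package (the key path and function name are just placeholders):

import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_archive(data, pubkey_path):
    # Fresh symmetric key per backup; only the holder of the private
    # key (which never touches the server) can recover it.
    sym_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(sym_key).encrypt(nonce, data, None)

    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    wrapped_key = public_key.encrypt(
        sym_key,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )
    # Upload all three pieces; none of them is of any use to an
    # attacker who only holds the public key.
    return wrapped_key, nonce, ciphertext

An attacker who reads the archive that way gets only ciphertext, and nothing stored on the server helps with decrypting it.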

So, if I got access to one of your servers and its backup system… what would I be able to achieve?

Was it useful?
BTC: 18AMX5sowkLoR78Lns7Qz7fEzSTVEqCpqS
DOGE: DDYKHC6EBRxR7Ac2ByVLEuxmrhwo3xV3kk