In this series of blog posts we focus on some of the best practices we use within Merapar to evolve the DevOps setup we have built around the GCP platform. Why? Because we think it’s fun to share knowledge and to learn from others in the industry.
In some projects we still prefer the “old school” database: PostgreSQL still has its role to play, especially in the “modern” cloud world, where cloud providers like GCP make the management and configuration of these instances very simple.
We tend to follow security best practices that prescribe no manual access to resources. Every time we start a new project: no, developer, you do not get access to the database. When you need something done, use your database migration scripts.
This is a very good starting point; however, in the real world we understand you sometimes need access to the database. It might be to revert a failed database migration script, to investigate unexpected database performance issues, or to check data integrity. We have all been there.
So, how do we get secure, auditable access to our Cloud SQL database?
Our solution is to set up a secure connection to the database. Traditionally you would do this with a VPN connection; however, this is costly from both an infrastructure and a maintenance perspective.
The main components in this solution are:
- a bastion host, reachable only through the Identity-Aware Proxy (IAP)
- an SSH tunnel to that bastion, acting as a SOCKS proxy
- the Cloud SQL Auth proxy, which reaches the database's private IP through that tunnel
- IAM database authentication, so access is tied to personal accounts and auditable
This is depicted in the diagram below:
As described in a previous blog, “How to secure your ssh connections using an Identity Aware Proxy on GCP”, we have configured a bastion host with an Identity-Aware Proxy in front of it.
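For completeness, a minimal sketch of what such a bastion setup could look like (instance name, project and zone are placeholders, the default network is assumed, and the previous blog covers the full setup including the IAP IAM bindings):

# Sketch: a small bastion VM without an external IP
gcloud compute instances create vm-bastion --project <project name> --zone <location>-a --machine-type e2-micro --no-address

# Allow IAP's TCP-forwarding range to reach SSH on the bastion
gcloud compute firewall-rules create allow-iap-ssh --project <project name> --direction INGRESS --action ALLOW --rules tcp:22 --source-ranges 35.235.240.0/20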
By using the following command we can set up an SSH connection to the bastion instance:
gcloud compute ssh vm-bastion --project <project name> --zone <location>-a --tunnel-through-iap
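Note that this only works if your account is allowed to create IAP tunnels. As a rough sketch (assuming a project-level binding is acceptable for you), the required role can be granted like this:

# Grant a developer permission to tunnel through IAP (member email is a placeholder)
gcloud projects add-iam-policy-binding <project name> --member=user:<developer email> --role=roles/iap.tunnelResourceAccessor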
Now, in this case we do not want a normal SSH connection; we want a SOCKS proxy. To do this we use the following command:
gcloud compute ssh vm-bastion --project <project name> --zone <location>-a --tunnel-through-iap -- -N -D 8123
This does the following: everything after the -- is passed directly to SSH; -N tells SSH not to execute a remote command, and -D 8123 opens a dynamic (SOCKS) port forward on local port 8123. Any traffic sent to local port 8123 is now forwarded through the tunnel to our remote instance (note that, depending on your authentication, the SSH connection will time out after 1 hour of inactivity).
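As an optional sanity check (assuming curl is available), you can route a request through the freshly opened SOCKS proxy and confirm you get an HTTP status code back:

curl -s -o /dev/null -w "%{http_code}\n" --socks5-hostname 127.0.0.1:8123 https://www.google.com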
For the next step we need the Cloud SQL Auth proxy. Use the following commands to download it (for Mac) and to make sure it’s executable:
curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.amd64 -s
chmod +x cloud_sql_proxy
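If you are on Linux rather than Mac, the equivalent would be the linux.amd64 build:

curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -s
chmod +x cloud_sql_proxy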
Now we can start gluing things together: we instruct cloud_sql_proxy to use the connection we set up to the bastion via the SOCKS5 protocol. While doing this we specify the database instance to connect to and which access token to use (or, alternatively, which service account file to use). Before starting the Cloud SQL proxy, we request an access token, as GCP might not be reachable through the SOCKS5 proxy.
Note that the access token is only valid for an hour. It is good practice anyway to stop the connection once you are done, but after an hour you will no longer be authenticated.
TOKEN=$(gcloud auth print-access-token)
ALL_PROXY=socks5://127.0.0.1:8123 ./cloud_sql_proxy -instances=<connection string, see GCP console>=tcp:8124 -enable_iam_login -token=$TOKEN -ip_address_types=PRIVATE
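Alternatively, as mentioned above, instead of a short-lived access token you can point the proxy at a service account key file. A sketch (the key path is a placeholder, the service account needs the appropriate Cloud SQL roles, and whether you keep -enable_iam_login depends on whether you connect as an IAM user or a built-in database user):

ALL_PROXY=socks5://127.0.0.1:8123 ./cloud_sql_proxy -instances=<connection string, see GCP console>=tcp:8124 -credential_file=<path to service account key .json> -ip_address_types=PRIVATE

The rest of the flow stays the same.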
With the proxy running, we can use our normal database clients to connect to our database via localhost on port 8124, providing the database name and the IAM user account to use. This will only succeed if the user has been added to the database, see https://cloud.google.com/sql/docs/postgres/authentication
For example:
psql "host=127.0.0.1 port=8124 sslmode=disable user=<iam user> dbname=<db name>"
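If the IAM user has not been added to the Cloud SQL instance yet, a sketch of how this could be done (instance name, account and project are placeholders):

gcloud sql users create <iam user email> --instance=<instance name> --type=cloud_iam_user --project=<project name>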