GitHub Actions: How to run SSH commands (without third-party actions)

Recently, I’ve been using GitHub Actions for my Continuous Integration and Continuous Delivery pipelines. Often, I use a simple deployment process that consists of executing remote commands on the target server.

I found two third-party actions in the marketplace:

  • appleboy/ssh-action
  • garygrossgarten/github-action-ssh

Here is what I dislike about them:

  • appleboy/ssh-action uses Docker, and therefore, is quite slow
  • garygrossgarten/github-action-ssh fails to report errors
  • both make it difficult to split a job into multiple steps

Finally, I settled on the following solution: calling the ssh command directly from the workflow.

However, before we can call ssh, we need to configure a few things:

  • hostname
  • username
  • private key

Of course, we won’t store these values in the source code (at least, not the private key); instead, we’ll use the repository’s secrets.

To do that, we’ll begin the job with an additional step that creates the private key file on the runner, as well as an SSH configuration file.
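
You can create these secrets in the repository settings, or, if you prefer the command line, with the GitHub CLI. Here is a minimal sketch, assuming the secret names used later in this article (the host and user values are placeholders):

$ gh secret set STAGING_SSH_HOST --body "staging.example.com"
$ gh secret set STAGING_SSH_USER --body "deploy"
$ gh secret set STAGING_SSH_KEY < ~/.ssh/staging.key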

To create the private key file, we simply need to echo the key’s content and make sure we set the right permissions:

$ mkdir -p ~/.ssh/
$ echo "$SSH_KEY" > ~/.ssh/staging.key
$ chmod 600 ~/.ssh/staging.key

To create the SSH configuration file, we could use echo as well, but I think it’s simpler with cat:

$ cat >>~/.ssh/config <<END
Host staging
    HostName $SSH_HOST
    User $SSH_USER
    IdentityFile ~/.ssh/staging.key
    StrictHostKeyChecking no
END

As you can see, this file defines the alias staging for the target machine (“staging” is just an example; you can use “deploy”, “production”, or anything else) and binds the hostname, the username, and the private key to that alias.
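
With the alias in place, you can check that everything works by running a trivial command over SSH (a quick sketch; any command will do):

$ ssh staging 'echo "connection OK"'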

It also sets StrictHostKeyChecking=no to avoid the infamous Host key verification failed error. If you think this is not secure, you can add the target host’s key to ~/.ssh/known_hosts the same way we created the private key file.
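
Here is a minimal sketch of that stricter variant, assuming a SSH_KNOWN_HOSTS secret (a hypothetical name) that contains the output of ssh-keyscan for the target host:

$ # on your machine, once: capture the host key and store it in the secret
$ ssh-keyscan staging.example.com
$ # in the workflow: append the secret to known_hosts instead of setting StrictHostKeyChecking no
$ echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts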

Here is the complete workflow:

name: CI
on: [push, pull_request]
jobs:
  # test:
  #   ...
  deploy:
    name: "Deploy to staging"
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/master'
    # needs: test
    steps:
      - name: Configure SSH
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_KEY" > ~/.ssh/staging.key
          chmod 600 ~/.ssh/staging.key
          cat >>~/.ssh/config <<END
          Host staging
            HostName $SSH_HOST
            User $SSH_USER
            IdentityFile ~/.ssh/staging.key
            StrictHostKeyChecking no
          END
        env:
          SSH_USER: ${{ secrets.STAGING_SSH_USER }}
          SSH_KEY: ${{ secrets.STAGING_SSH_KEY }}
          SSH_HOST: ${{ secrets.STAGING_SSH_HOST }}

      - name: Stop the server
        run: ssh staging 'sudo systemctl stop my-application'

      - name: Check out the source
        run: ssh staging 'cd my-application && git fetch && git reset --hard origin/master'

      - name: Start the server
        if: ${{ always() }}
        run: ssh staging 'sudo systemctl start my-application'

A few remarks on this workflow:

  • I deploy only when all tests are passing, hence the commented-out needs: test that links the deploy job to the test job.
  • I deploy only on a push to the master branch; that’s the if condition on the job.
  • I restart the application service even if the other steps fail, thanks to if: ${{ always() }} on the last step.
  • I used the git command on the remote host to check out the code, but you could instead use the scp command to upload a build artifact (see the sketch after this list).
  • To allow Git to check out the code, I installed a read-only deploy key on the target machine.
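
For reference, here is a sketch of that scp variant; the local build/ directory and the remote destination are hypothetical, so adapt them to your project:

- name: Upload the build
  run: scp -r build/ staging:my-application/

Since scp reads the same ~/.ssh/config, the staging alias works here as well.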

As you can see, this simple technique allows us to copy files and execute commands on a remote host. Thanks to the alias defined in .ssh/config, we can split a job into several steps with minimal code duplication.