Automating deployment with fabric
Author: Mike
July 30, 2024 05:33 PM
Updated: September 30, 2024 06:49 PM
Introduction
While there are many tools for automating the deployment of a code base, they often require more setup and ongoing engagement from the programmer than expected. Among these tools is Fabric, a Python library for automating shell commands on remote machines over SSH. Fabric is built on two other libraries: Invoke, which handles shell subprocesses, and Paramiko, an implementation of the SSH protocol in Python. Together they let Fabric run shell commands over SSH. Fabric's strengths follow from this design: it wraps Invoke's CLI functionality, extends the SSH configuration options provided by Paramiko, and offers high-level primitives such as context managers.
Among the many automation options, Fabric is a good starting point for a smaller project. It only requires installing a single package through pip, and it does not require signing up for a service. A simple use case can be summed up in two steps: creating a task that targets a single host, and running that task. Tasks involve three components: the task function itself, the command that runs the task, and a Connection object whose configuration is constructed from command line arguments.
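Fabric is installed like any other Python package, and the install also provides the fab command used throughout this post:
pip install fabric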
Simple Use Case: Single Target Remote Machine
Fabric SSH connections are configured through the Connection object. There are several ways to configure a Connection, but when tasks are run through the CLI the connection details come from command line arguments. For example, assuming an SSH identity file has been created for connecting to a host, a Fabric task can be run from the command line like so:
fab -i ~/.ssh/identity_file_name -H user@host task-name
Breaking down the command
- fab: the command line tool installed into the Python environment by the fabric package
- -i : flag for identity file
- ~/.ssh/identity_file_name : example path name to an identity file
- -H : host flag indicating the target of the task
- task-name: the name of the task; dashes in the command replace the underscores in the task function's name
Note that the fab command implicitly looks for a file named fabfile.py in the current working directory. Additionally, the Connection object is configured from information provided in the command, such as the user and host (accessible inside a task as c.user and c.host).
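For reference, roughly the same connection can also be built directly in Python instead of through CLI flags. This is a minimal sketch with placeholder host, user, and key path:
from fabric import Connection

# rough equivalent of `fab -i ~/.ssh/identity_file_name -H user@host ...`
# (the host, user, and key path below are placeholders)
conn = Connection(
    host="host",
    user="user",
    connect_kwargs={"key_filename": "/path/to/identity_file_name"},
)
conn.run("uname -s")  # runs the command on the remote host over SSH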
Now that running tasks is clear, building them follows. In Python, Fabric tasks are marked with the @task decorator. A task function needs at least one parameter to receive the Connection object (e.g. def task_func(conn):). With access to the Connection object, shell commands can be scripted through its run method, which is given a shell command as a string, for example:
c.run('pwd')  # run the shell command pwd
Note that this statement returns a Result object, which can be used for further actions, such as running new commands built from the output of the previous one.
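Putting those pieces together, a minimal fabfile.py might look like the following sketch (the task name and command are only examples):
from fabric import task

@task
def disk_usage(c):
    # run a command on the remote host and capture its output
    result = c.run("df -h /", hide=True)
    # the Result object exposes stdout, stderr, the exit code, and more
    print(result.stdout)
With the CLI pattern above, this would be run as fab -i ~/.ssh/identity_file_name -H user@host disk-usage.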
It's important to note that each run call executes in its own shell session. In other words, shell state from one statement, such as the current directory, does not carry over to the next statement. To get that effect, for example creating a folder and then working inside it, a context manager is required. For example:
""" some function in fabfile.py """
c.run(f”mkdir -p {folder}”)
with c.cd(folder):
_do_some_stuff_here(c)
Line by line: the first statement runs a command to create the folder, c.cd() then changes into that newly created folder, and any commands run inside the with block execute in the context of that directory. Putting these concepts together, a deploy task can be created:
""" a task function called deploy in fabfile.py """
@task
def deploy(c):
folder = f”/home/{c.user}/sites/{c.host}”
c.run(f”mkdir -p {folder}”)
with c.cd(folder):
_run_internal_tasks(c)
And then from the command line, the fabric CLI can be used like so:
fab -i ~/.ssh/id_file -H user@host deploy
This command, as mentioned, will look for the deploy task in fabfile.py and will create an SSH connection to user@host using the identity file provided.
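The _run_internal_tasks helper above is only a placeholder for project-specific steps. A hypothetical version, along the lines of the fuller fabfile shown later in this post, might look like:
def _run_internal_tasks(c):
    # hypothetical steps for a Django-style project; the paths and commands
    # here are assumptions, adjust them to the project being deployed
    c.run("git pull")                                      # refresh the source tree
    c.run(".venv/bin/pip install -r requirements.txt")     # update dependencies
    c.run(".venv/bin/python manage.py migrate --noinput")  # apply database migrations
Because deploy calls this helper inside with c.cd(folder), each of these commands runs from that directory.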
Every tool has its trade-offs, and automating tasks with Fabric can at times feel unintuitive. Its strengths, however, are its accessibility, its independence from any larger or commercial software service (it relies only on the host's shell configuration), and the ease with which tasks can be created.
Automating site deployment with fabric tl;dr
What is fabric?
- Fabric is a Python library
- it can SSH into a remote machine and run commands
- it can also run commands locally (see the snippet after this list)
- it does this through the Invoke and Paramiko libraries
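As a quick illustration of the local side, a task can use c.local() to run a command on the machine fab is invoked from; a minimal sketch:
from fabric import task

@task
def current_commit(c):
    # c.local runs on the local machine rather than over SSH
    result = c.local("git log -n 1 --format=%H")
    print(result.stdout.strip())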
Why fabric?
- fewer dependencies: a single package installed through pip
- no sign-up for a software service
- uses shell commands on the target machine
How to use fabric
- Note: the focus here is a simple use case (the fabric main site has examples for multi-connection use cases)
- single task function targeting a remote machine
- define the task in fabfile.py, to be supplied to the CLI, e.g.:
@task
def task_name(c):
    _do_some_stuff(c)
- use the fab CLI to run the task on the remote machine target, e.g.:
fab -i ~/.ssh/identity_file -H user@host task-name
Sample code for fabfile.py
- Here is a sample from an old fabfile I used for setting up my test environment. I've since iterated on it, but early on it worked fine for automating my setup: creating and configuring my environment on a virtual machine so I could test over my local network.
import random, os
from fabric import task
from fabric import Connection
from fabric import Config
from invoke import Responder

REPO_LINK = os.getenv("REPO_LINK")
JENKINS_DIR = os.getenv("JENKINS_DIR")
IS_PROD = os.getenv("PROD_REQS", False)
DB_NAME = "mywebsite"
DOMAIN_NAME = None
DOMAIN_NAME_TEST = os.getenv("DOMAIN_NAME_TEST")

@task
def site_directory_status(c):
    expected_path = f"/home/{c.user}/sites/{c.host}/"
    result = c.run(f"if test -d {expected_path};then echo 'True'; else echo 'False';fi")
    if result.stdout.strip() == "True":
        return expected_path
    else:
        # path is possibly a jenkins workspace for testing
        pass

def _get_site_folder(user, host):
    return f"/home/{user}/sites/{host}/"

@task
def get_latest_source(c):
    current_commit = _get_current_commit(c)
    dir_ = _get_site_folder(c.user, c.host)
    with c.cd(dir_):
        _get_latest_source(c, current_commit)

@task
def deploy(c, commit=None):
    """deploy to website, argument is a switch for commit type either main or local, defaults to local"""
    site_folder = f"/home/{c.user}/sites/{c.host}"
    current_commit = _get_current_commit(c) if not commit else _get_main_commit(c)
    provision_database(c)
    c.run(f"mkdir -p {site_folder}")
    # django needs this to be able to read env file
    c.config.run.env = {
        "DJANGO_READ_DOT_ENV_FILE": True,
        "DOMAIN_NAME_TEST": os.getenv("DOMAIN_NAME_TEST"),
    }
    with c.cd(site_folder):
        _get_latest_source(c, current_commit)
        _update_virtualenv(c)
        _create_or_update_dotenv(c)
        _update_static_files(c)
        _migrate_database(c)
        provision_nginx_gunicorn_conf(c)

def _get_current_commit(c):
    return c.local("git log -n 1 --format=%H")

def _get_main_commit(c):
    return c.local("git rev-parse main")

def _get_latest_source(c, current_commit):
    """get latest source code from repo onto server"""
    # check ssh connection
    _git_ssh_check(c)
    is_git_repo = c.run('if test -d .git;then echo "True";else echo "False"; fi').stdout
    is_git_repo = is_git_repo.rstrip()
    if is_git_repo == "False":
        # git repo does not exist thus clone repo
        c.run(f"git clone {REPO_LINK} .")
    else:
        c.run("git fetch")
    current_commit_stdout = current_commit.stdout.strip("\n")
    c.run(f"git reset --hard {current_commit_stdout}")

SSH_KEY_NAME = os.getenv("SSH_KEY_NAME")

def _git_ssh_check(c):
    core_command_check = f"""
    if [ "$(git config core.sshCommand)" = '' ]; then
        echo "git ssh identity file not set! Setting identity file..."
        git config core.sshCommand "ssh -i ~/.ssh/{SSH_KEY_NAME}"
    else
        echo "identity file already set!"
    fi
    """
    c.run(core_command_check)

def _update_virtualenv(c):
    dir_exists = c.run(
        'if test -d .venv/;then echo "True";else echo "False";fi'
    ).stdout
    dir_exists = dir_exists.rstrip()
    if dir_exists == "False":
        c.run("/home/shiba/.pyenv/shims/pip install virtualenv")
        c.run("/home/shiba/.pyenv/shims/python -m virtualenv .venv")
    reqs = "requirements/"
    reqs += "local.txt" if not IS_PROD else "production.txt"
    c.run(f".venv/bin/pip install -r {reqs}")

def _create_or_update_dotenv(c):
    keys = [
        "DJANGO_DEBUG_FALSE",
        "SITENAME",
        "DJANGO_SECRET_KEY",
        "EMAIL_APP_PASSW",
        "DATABASE_URL",
        "TEST_URL",
        "REDIS_URL",
        "DJANGO_ADMIN_URL",
        "USE_SMTP_BACKEND",
        "MAINTENANCE_MODE_ON",
        "LOCAL_TESTING",
        "DOMAIN_NAME_TEST",
        "SMTP_EMAIL_TEST",
        # django needs this to be able to read env file
        "DJANGO_READ_DOT_ENV_FILE",
    ]
    current_contents = None
    if c.run("test -f .env", warn=True).failed:
        c.run("touch .env")
    else:
        # get contents from .env for checking
        current_contents = c.run("cat .env").stdout.strip("\n")

    def echoer(key, value):
        """echoes missing key value pairs into .env file"""
        if not current_contents or key not in current_contents:
            c.run(f'echo "{key}={value}" >> .env')

    # sort by alphabetical, for convenience
    keys.sort()
    for key in keys:
        match key:
            case "DJANGO_DEBUG_FALSE":
                echoer(key, "y")
            case "SITENAME":
                echoer(key, c.host)
            case "DJANGO_SECRET_KEY":
                new_secret = "".join(
                    random.SystemRandom().choices(
                        "abcdefghijklmnopqrstuvwxyz0123456789", k=50
                    )
                )
                echoer(key, new_secret)
            case "EMAIL_APP_PASSW":
                echoer(key, f"'{os.getenv(key)}'")
            case "DATABASE_URL":
                echoer(key, os.getenv("DATABASE_URL"))
            case "TEST_URL":
                echoer(key, os.getenv("TEST_URL"))
            case "REDIS_URL":
                echoer(key, "")
            case "DJANGO_ADMIN_URL":
                echoer(key, os.getenv("DJANGO_ADMIN_URL"))
            case "USE_SMTP_BACKEND":
                echoer(key, "y")
            case "MAINTENANCE_MODE_ON":
                echoer(key, "")
            case "LOCAL_TESTING":
                echoer(key, "y")
            case "DOMAIN_NAME_TEST":
                echoer(key, f"'{DOMAIN_NAME_TEST}'")
            case "SMTP_EMAIL_TEST":
                email = os.getenv("EMAIL")
                echoer(key, f"'{email}'")
            case "DJANGO_READ_DOT_ENV_FILE":
                echoer(key, "True")

def _update_static_files(c, settings=None):
    if not settings:
        settings = "config.settings.local"
    c.run(
        f"set -a; source .env; set +a; .venv/bin/python manage.py collectstatic --noinput --settings={settings}"
    )

def _migrate_database(c, settings=None):
    if not settings:
        settings = "config.settings.local"
    c.run(
        f"set -a; source .env; set +a; .venv/bin/python manage.py migrate --noinput --settings={settings}"
    )

@task
def provision_nginx_gunicorn_conf(c, tld=None, sub=None):
    """creates nginx configuration, provide tld and sub domain params to overwrite template"""
    if not _check_if_installed(c):
        _provision_nginx_gunicorn_conf(c, tld, sub)
    else:
        print("nginx already configured!")

def _check_if_installed(c):
    true_ = "Found"
    false_ = "Not Found"
    with c.cd("/etc/nginx/sites-available/"):
        result = c.run(
            f"""[[ -f {c.host} ]] && echo {true_} || echo {false_}"""
        ).stdout.rstrip()
    if result == true_:
        return True
    else:
        return False

def _provision_nginx_gunicorn_conf(c, tld, sub):
    """provision nginx configurations from templates at deploy_tools directory"""
    _check_nginx_gunicorn_installed(c)
    deploy_tools_dir = f"/home/{c.user}/sites/{c.host}/deploy_tools"
    with c.cd(deploy_tools_dir):
        _sed_nginx(c, tld, sub)
        _sed_gunicorn(c)
    start_services(c)

def _check_nginx_gunicorn_installed(c):
    nginx, gunicorn = ("nginx", "gunicorn")
    _check_and_install_package(c, nginx)
    _check_and_install_pip_package(c, gunicorn)

def _sed_nginx(c, tld, sub):
    sudopass = _get_responder()
    if not tld and not sub:
        # testing on an ip address
        c.run(
            f'cat nginx.template.conf | sed "s/(DOMAIN.tld|sub.DOMAIN.tld)/{c.host}/g" | sudo tee /etc/nginx/sites-available/{c.host}',
            pty=True,
            watchers=[sudopass],
        )
        c.run(
            f'cat /etc/nginx/sites-available/{c.host} | sed "s/DOMAIN/{c.host}/g" | sudo tee /etc/nginx/sites-available/{c.host}',
            pty=True,
            watchers=[sudopass],
        )
    else:
        c.run(
            f'cat nginx.template.conf | sed "s/DOMAIN/{c.host}/g" | sudo tee /etc/nginx/sites-available/{c.host}',
            pty=True,
            watchers=[sudopass],
        )
        if tld:
            c.run(
                f'cat /etc/nginx/sites-available/{c.host} | sed "s/tld/{tld}/g" | sudo tee /etc/nginx/sites-available/{c.host}',
                pty=True,
                watchers=[sudopass],
            )
        if sub:
            c.run(
                f'cat /etc/nginx/sites-available/{c.host} | sed "s/sub/{sub}/g" | sudo tee /etc/nginx/sites-available/{c.host}',
                pty=True,
                watchers=[sudopass],
            )
        if _check_certbot_file(c, tld):
            crt = f"/etc/letsencrypt/live/{c.host}.{tld}/fullchain.pem"
            c.run(
                f"""cat /etc/nginx/sites-available/{c.host} | sed "s,/etc/nginx/ssl/nginx.crt,{crt},g" | sudo tee /etc/nginx/sites-available/{c.host}""",
                pty=True,
                watchers=[sudopass],
            )
            key = f"/etc/letsencrypt/live/{c.host}.{tld}/privkey.pem"
            c.run(
                f"""cat /etc/nginx/sites-available/{c.host} | sed "s,/etc/nginx/ssl/nginx.key,{key},g" | sudo tee /etc/nginx/sites-available/{c.host}""",
                pty=True,
                watchers=[sudopass],
            )
    c.run(
        f"sudo ln -sf /etc/nginx/sites-available/{c.host} /etc/nginx/sites-enabled/{c.host}",
        pty=True,
        watchers=[sudopass],
    )
    # add nginx to group so that static content is reachable
    c.run(f"sudo usermod -aG {c.user} www-data", pty=True, watchers=[sudopass])

def _check_certbot_file(c, tld):
    result = c.run(
        f'[[ -f /etc/letsencrypt/live/{c.host}.{tld}/fullchain.pem ]] && [[ -f /etc/letsencrypt/live/{c.host}.{tld}/privkey.pem ]] && echo "True" || echo "False"'
    ).stdout.rstrip()
    if result == "True":
        return True
    else:
        return False

def _sed_gunicorn(c):
    """host name in prod will work, but in testing using local or other host name needs to be manually added"""
    sudopass = _get_responder()
    c.run(
        f'cat gunicorn-systemd.template.service | sed "s/DOMAIN/{c.host}/g" | sudo tee /etc/systemd/system/gunicorn-mywebsite.service',
        pty=True,
        watchers=[sudopass],
    )

@task
def start_services(c):
    sudopass = _get_responder()
    c.run("sudo systemctl daemon-reload", pty=True, watchers=[sudopass])
    c.run("sudo systemctl reload nginx", pty=True, watchers=[sudopass])
    c.run(
        "sudo systemctl enable gunicorn-mywebsite",
        pty=True,
        watchers=[sudopass],
    )
    c.run(
        "sudo systemctl start gunicorn-mywebsite",
        pty=True,
        watchers=[sudopass],
    )

@task
def restart_gunicorn(c):
    sudopass = _get_responder()
    c.run(
        "sudo systemctl restart gunicorn-systemd.test-server.service",
        pty=True,
        watchers=[sudopass],
    )

@task
def full_restart(c):
    sudopass = _get_responder()
    c.run("sudo systemctl daemon-reload", pty=True, watchers=[sudopass])
    c.run("sudo systemctl reload nginx", pty=True, watchers=[sudopass])
    restart_gunicorn(c)

@task
def provision_database(c):
    _set_needrestart_auto(c)
    _check_postgres_installed(c)
    sudopass = _get_responder()
    user = os.environ.get("DB_USER")
    check_user_exists = f"""
    if [ "$(psql postgres -XtAc "select 1 from pg_roles where rolname='{user}'")" = '1' ]; then
        echo "User: {user} exists!"
    else
        echo "User: {user} does not exist...creating user"
        sudo -u postgres psql -c "create user {user} with password '{os.environ.get('DB_PASS')}';"
        sudo -u postgres psql -c "alter user {user} createdb;"
    fi
    """
    c.run(check_user_exists, pty=True, watchers=[sudopass])
    database_setup = f"""
    if [ "$(psql postgres -XtAc "select 1 from pg_database where datname='{DB_NAME}'")" = '1' ]; then
        echo "database already exists!"
    else
        echo "database does not exist... creating database"
        sudo -u postgres psql -c "create database {DB_NAME};"
    fi
    echo "giving privileges to user: {user}"
    sudo -u postgres psql -c "grant all privileges on database {DB_NAME} to {user};"
    """
    c.run(database_setup, pty=True, watchers=[sudopass])
    set_postgres_pass = f"""
    sudo -u postgres psql -c "alter user postgres with password '{os.environ.get('DB_PASS')}';"
    """
    c.run(set_postgres_pass, pty=True, watchers=[sudopass])

def _set_needrestart_auto(c):
    """this is to auto accept daemon gui restart prompts during installation of certain packages"""
    sudopass = _get_responder()
    with c.cd("/etc/needrestart/"):
        if c.run("test -f needrestart.conf.bak", warn=True).failed:
            # create backup of needrestart.conf
            c.run(
                "sudo cp needrestart.conf needrestart.conf.bak",
                pty=True,
                watchers=[sudopass],
            )
        # replaces nrconf{restart} = 'i' with nrconf{restart} = 'a'
        c.run(
            "sudo sed -i \"s/#\$nrconf{restart} = 'i'/\$nrconf{restart} = 'a'/g\" needrestart.conf",
            pty=True,
            watchers=[sudopass],
        )

def _check_postgres_installed(c):
    sudopass = _get_responder()
    check_postgres = """
    if [ "$(which psql)" = "" ]; then
        sudo apt-get install postgresql -y;
    else
        echo "postgresql already installed!"
    fi
    """
    c.run(check_postgres, pty=True, watchers=[sudopass])

def _get_responder():
    """generic Responder object"""
    passwd = os.environ.get("REMOTE_PASSW")
    return Responder(pattern=r"\[sudo\] password[a-z|\s]*:", response=passwd + "\n")

def _check_packages_and_install(c, packages):
    for package in packages:
        _check_and_install_package(c, package)

def _check_and_install_package(c, package):
    sudopass = _get_responder()
    result = c.run(f"""echo $(dpkg-query -W -f='${{Status}}' {package} 2>/dev/null)""")
    result = result.stdout.rstrip()
    expected = "install ok installed"
    if result not in expected or result == "":
        # package not installed
        c.run(f"sudo apt install {package} -y", pty=True, watchers=[sudopass])

def _install_package(c, package):
    sudopass = _get_responder()
    c.run(f"sudo apt install {package} -y", pty=True, watchers=[sudopass])

def _check_and_install_pip_package(c, pip_package):
    """check for pip packages in venv directory"""
    venv_dir = _get_site_folder(user=c.user, host=c.host) + ".venv/bin/"
    result = c.run(f"{venv_dir}pip freeze | grep '{pip_package}'", warn=True)
    if pip_package not in result.stdout:
        c.run(f"{venv_dir}pip install {pip_package}")

def _check_pip_package(c, pip_package):
    """returns a Result object"""
    return c.run(f"pip freeze | grep '{pip_package}'")

@task
def copy_install_packages_to_host(c):
    keyname = os.environ.get("AGENT_KEYNAME")
    server_user_name = os.environ.get("SERVER_USER_NAME")
    ip = os.environ.get("HOST_IP")
    install_packages_loc = "~/install_packages.sh"
    result = c.run(
        f"""scp -i ~/.ssh/{keyname} {install_packages_loc} {server_user_name}@{ip}:/home/{server_user_name}/"""
    )

@task
def setup_database(c):
    _set_needrestart_auto(c)
    user = os.environ.get("DB_USERNAME")
    db_pass = os.environ.get("DB_USER_PASS")
    db_name = os.environ.get("DB_NAME")
    sudopass = _get_responder()
    # create base user
    c.run(
        f"""
        if [ $(psql postgres -tXAc "SELECT 1 FROM pg_roles WHERE rolname='{user}';") -eq 1 ];then
            echo user already exists! continuing...
        else
            sudo -u postgres psql -c "create user {user} with password '{db_pass}';"
        fi
        """,
        pty=True,
        watchers=[sudopass],
    )
    # create db
    c.run(
        f"""
        if [[ -z 'psql -Atqc "\list {db_name}" postgres' ]];then
            sudo -u postgres createdb {db_name}
        else
            echo db already exists! continuing...
        fi
        """,
        pty=True,
        watchers=[sudopass],
    )
    # alter roles
    c.run(
        f"""sudo -u postgres psql -c "alter role postgres with password '{db_pass}';" """,
        pty=True,
        watchers=[sudopass],
    )
    c.run(
        f"""sudo -u postgres psql -c "grant all privileges on database {db_name} to {user};" """,
        pty=True,
        watchers=[sudopass],
    )
    c.run(
        f"""sudo -u postgres psql -c "grant all privileges on database {db_name} to postgres;" """,
        pty=True,
        watchers=[sudopass],
    )

@task
def install_packages(c):
    filename = "install_packages.sh"
    sudopass = _get_responder()
    c.run(f"source {filename}", pty=True, watchers=[sudopass])

@task
def install_node_tools(c):
    filename = "check_node_tools.sh"
    sudopass = _get_responder()
    c.run(f"source {filename}", pty=True, watchers=[sudopass])