Stephen Hara

Automation At Work

Published on 4/30/2018

  • post
  • automation

I've been working for a little over a couple of months now. At first it was super rough - the language barrier has been non-trivial to work through, but reasonable thanks to Google-sensei; I was working in PHP despite my best efforts over the past several years to escape that world; and the processes at my current company are kind of garbage.

The good news for my sanity, as well as for this blog post, is that last one: the processes are terrible, but not (completely) irreconcilably so! Insert applause here. So I'm gonna talk about how I made them a little less "stab me in the face and punch my eyes out" and a little more "touch me touch me".

Though not by that much, unfortunately. Here's a case study on the case for automation.

The Situation

In the Established Process we'll be looking at here, the basic flow of development is:

  • Someone (possibly developer of feature, possibly not) writes a detailed spec of the feature, indicating error conditions, parameters, usage of params, etc.
  • Developer writes code for feature according to spec, eventually submits it for code review
  • Someone (possibly developer of feature, possibly not) essentially does integration testing of the feature, which at some point really should be automated, but Selenium is scary (read: I haven't learned it yet)
    • For this, first you seed your local DB with test data stored in a Google Sheet where applicable (*)
    • Then you run through one execution path, which possibly requires altering the source to force a particular branch
    • If it came out as expected per the spec, capture the following pieces of "test evidence":
      • SQL queries + Explain plan for each
      • DB before/after + diff
      • App logs
      • Relevant config files
      • Screenshots of the mock client/terminal demonstrating the input/output
      • Any changed source files
    • Repeat for all branches

* Yeah, it's annoying having to download a CSV export of the test data and then fiddle with it to turn it into a bunch of INSERT statements. Just kidding, it's like five vim commands! ez
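
If you're not vim-inclined, the same fiddling can be sketched with sed instead; the table and column names here are made up, and it assumes a plain comma-separated export with numeric values only (string columns would still need quoting, which is where the remaining vim fiddling comes in):

# Rough sketch of the CSV-to-INSERT conversion, using sed instead of vim.
# test_aaa and its columns are made up; assumes unquoted, comma-separated rows.
sed -e 's/^/INSERT INTO test_aaa (col_a, col_b, col_c) VALUES (/' \
    -e 's/$/);/' test_data.csv > add_test_AAA_rows.sql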

Now, by no means is this fundamentally what I'd call a bad process. The spec always acts as the Single Source of Truth for the feature, the concept of testing is present, and the process of testing is also fine, I think (says the man who has never had to do integration testing in any form before this).

Where it starts to break down in our implementation, however:

  • Oftentimes development is mandated to begin before the spec is complete due to requirements changing. This is a management issue and can't really be resolved at the developer level.
    • FWIW, this isn't the worst thing in the world as you can usually get the big parts of the feature done, but a requirements change forces developer context-switching in some cases, which is suboptimal
    • A requirements change doesn't always invalidate the test results, but when it does, since it's all done manually here, that's more wasted brain cycles
  • Development doesn't require unit tests at all, which adds to the impact of the previous point
    • On a hilarious note, they definitely call these integration tests "unit tests" (単体テスト)
  • It seems extremely fragile to have the feature implementer also doing the integration tests.
    • I guess it's expected to be fine because Japan, but since there are no automated tests I could pretty much fudge whatever results I wanted and not have to worry about it until manual review, which I'm not convinced even happens. Welcome to bugs in production, eventually.

So that's a lot of silly points. And sadly, as a developer with little say in the matter - I'm junior to my entire team by several years at minimum, Japan cares about that, and I generally get しょうがない ("it can't be helped") as the response when I try to bring it up - I can't change much.

But I can at least improve what goes on on my machine! Since that's pretty much development and the integration testing, and I don't have many complaints about my development as long as I have vim and a directory with the right permissions for my plugins, that leaves testing!

Pre-Automation

To look at the testing process as I do it in slightly more detail:

  • Reset local databases to some default state (easy)
  • Generate dump of that default state for diffing later (easy)
  • Clear most recent log file to minimize what gets captured (easy)
    • This requires determining the name of the log file which is a function of the date + current hour, but still easy
  • Capture configuration files (easy)
  • Execute function with parameters to match desired output (easy, but must be done through web client, so annoying to automate)
  • Capture screenshot as part of evidence (easy, but manual, so annoying)
  • Capture logs as part of evidence (easy once log file name is known)
    • The app logs are what we extract the SQL queries from, so that's also in this (easy)
  • If any source was changed to induce a particular execution path, capture that (easy)
  • Capture explain plans from any SQL queries (easy)
  • Capture new DB dump and diff with old DB dump (easy)

If you've done a bit of sysadmin stuff, you might note that all of these are...pretty simple. Simple enough they could be scripted, perhaps.

Which is exactly what I did, one by one.

The Automation

For this section I'll just go through each of the above steps I've automated, provide a similar snippet to what I used to automate it, and provide any additional notes that might help.

Reset local databases to some default state

mysql db_name < add_test_AAA_table.sql

Basically I just have a bunch of SQL scripts in a directory off of ~, run together by a simple shell script. All the SQL scripts:

  • Drop the table if exists
  • Build the table anew with the exact structure needed
  • Insert a bunch of rows (in one transaction, because speed)
  • Commit the transaction

Nothing too fancy, but writing the table create statements is a little bothersome.
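
The wrapper itself is about as simple as it sounds; something like this sketch, where ~/sql_seeds, the script name, and db_name are stand-ins for whatever you actually use:

#!/bin/sh
# reset_test_db.sh (name made up): run every seed script against the local DB.
for script in ~/sql_seeds/add_test_*.sql; do
    echo "Seeding from ${script}"
    mysql db_name < "${script}"
done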

Generate dump of that default state for diffing later

mysqldump db_name > db_pre_dump.sql

Nothing complicated.

Clear most recent log file to minimize what gets captured

find /var/log/path/to/logs/parent -name "*$(get_log_date).log" -delete

The logs live in a directory somewhere under /var/log that I have forgotten, but each category of log also has its own directory somewhere in there, hence the name pattern and using find in the first place. get_log_date is just a script that's an invocation of date with the format matching our log pattern. Other than that, pretty simple.
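
For reference, get_log_date is nothing fancier than something along these lines; the actual format string depends on how your logs are named, so YYYYMMDDHH is just a guess:

#!/bin/sh
# get_log_date: print the date + current hour chunk used in the log file names.
# The format string here is a placeholder for whatever your logs actually use.
date +%Y%m%d%H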

Capture configuration files

cp /path/to/app/config/env.yml /tmp/evidence/configs/

Again, easy. Nothing tricky.

Capture logs as part of evidence

find /var/log/path/to/logs/parent -name "*$(get_log_date).log" -exec cp {} /tmp/evidence/log_captures \;

This is basically the log-clearing find from before, but this time we copy the matches to our evidence log folder instead of deleting them.

Capture new DB dump and diff

mysqldump db_name > db_post_dump.sql

Nothing fancy here. The diff is similarly simple:

diff db_pre_dump.sql db_post_dump.sql > db_diff.txt
cp db_diff.txt db_pre_dump.sql db_post_dump.sql /tmp/evidence/DB_dumps

Again, nothing too fancy.

Capture SQL queries and the explain plans

For grabbing the SQL queries out of the logs, we just need to look at the app logs. They're all marked by "SQL:" followed by the query, and each log line is tab-separated:

grep "SQL:" /var/log/path/to/logs/app/app.$(get_log_date).log | cut -d"\t" -f4 | cut -d":" -f2

It may or may not be the 4th column in the first cut but yenno. This command gets the query itself, which in practice I output to something like /tmp/evidence/SQL/SQL.txt as well as capture to a variable.

If you perhaps need to filter them such that you ignore some queries on a certain table or something, you can do a second grep (probably right after the first one) that would look like grep -v "ignore_table" or similar.
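
Put together, the extraction ends up looking something like this (the paths, the ignored table, and the tab-separated column number are all whatever applies in your setup; $'\t' needs bash or similar):

mkdir -p /tmp/evidence/SQL
# Pull the queries out of today's app log, drop the ones we don't care about,
# and capture them both to the evidence file and to a shell variable.
SQL_QUERIES=$(grep "SQL:" /var/log/path/to/logs/app/app.$(get_log_date).log \
    | grep -v "ignore_table" \
    | cut -d$'\t' -f4 \
    | cut -d":" -f2- \
    | tee /tmp/evidence/SQL/SQL.txt)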

For getting the explain plans, you need to take the captured variable and use that in a command-line call to MySQL:

mysql db_name -t -e "EXPLAIN ${SQL_QUERY}"

This call as well as the output then get sent to a file like /tmp/evidence/Explain/Explain.txt. The -t makes it output in table format regardless of whether it's a standalone call on the shell (where it defaults to table format) or part of a sequence of commands/redirected output (where it looks less pretty).
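
Since each execution path usually produces more than one query, in practice the EXPLAIN calls end up in a loop along these lines (a sketch; it assumes one query per line in SQL.txt and that everything captured is EXPLAIN-able):

mkdir -p /tmp/evidence/Explain
# For each captured query, record the EXPLAIN call and its plan as evidence.
while IFS= read -r SQL_QUERY; do
    {
        echo "EXPLAIN ${SQL_QUERY}"
        mysql db_name -t -e "EXPLAIN ${SQL_QUERY}"
    } >> /tmp/evidence/Explain/Explain.txt
done < /tmp/evidence/SQL/SQL.txt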

Post-Automation

The result of this is that where I once had 11 steps for each execution path being tested, all of which are highly prone to human error and accelerate my acquisition of carpal tunnel, there are now 5 steps:

  • Pre-execution evidence steps (reset DB, clear logs, DB dump, capture config files) - see the sketch after this list
  • Execute function under test
  • Post-execution evidence steps (new DB dump, capture logs, capture SQL + explains)
  • Copy source to evidence if necessary
  • Capture screenshot
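
The two grouped steps are really just the earlier snippets glued into a pair of scripts. A rough sketch of what the pre-execution one amounts to (script names and paths are made up):

#!/bin/sh
# pre_evidence.sh (name made up): everything that happens before executing
# the function under test.
sh ~/sql_seeds/reset_test_db.sh                                          # reseed local DB
mysqldump db_name > db_pre_dump.sql                                      # baseline dump for the diff
find /var/log/path/to/logs/parent -name "*$(get_log_date).log" -delete   # clear current logs
mkdir -p /tmp/evidence/configs
cp /path/to/app/config/env.yml /tmp/evidence/configs/                    # capture config files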

And two of them change, at most, once per function under test. There's not much I care to figure out for the screenshot capture, but the source copying could be made less manual with some playing around with the git status -s output, and executing the function could probably be done via CLI or Selenium in the web browser. But I am lazy, and this feels "good enough" to me for the moment at least.
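
If I ever do get around to the source-copying step, I imagine it would look something like this sketch (the evidence path is made up, and it assumes plain modified, tracked files and GNU cp for --parents):

mkdir -p /tmp/evidence/src
# Copy every modified tracked file into the evidence directory,
# preserving relative paths. Run from the repo root.
git status -s | awk '$1 == "M" {print $2}' \
    | xargs -I{} cp --parents {} /tmp/evidence/src/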

Wrapping Up

A lot of long-time developers probably don't need this post; as I've heard from some of my peers, doing something manually thrice is about twice too many. But if you get stuck doing a lot of manual, repetitive tasks, you should definitely think about the possibility of automation, for several reasons. For one, it saves time and brain cycles. You'll also probably learn something about the pieces of your toolkit - two months ago I basically knew nothing about the MySQL command-line options besides the basic mysql -u username -p db_name, and now I know the basics of how to integrate it into a manual testing flow.

But most importantly, it means you have the time to read all those StackOverflow community questions you don't have time to read but read anyway.