
A structure I'm experimenting with for spreadsheet-generating Python code

Whenever I write an experiment at work, I need to arrange the resulting data in some readable, manipulable form, and usually that's a table. Doing it manually is a pain, and it's repetitive work anyway. That means generating tables programmatically in Python.

It's important for the results to be correct and for the code to produce the same results on reruns. Unfortunately, reporting code changes a lot as the experiment's demands shift, and it tends to get complicated. Ideally I would write tests to make sure the results are correct, but that can be a pain.

To make the code more readable and testing more workable, I'm experimenting with a general structure:

  1. For each output table, write a dataclass representing one row in the table. (Sketches of steps 1 through 6 follow the list.)
  2. Write functions that parse the input data, in whatever form it comes, into record types.
  3. Write functions that consume data in the input form and produce data in the output form.
  4. Write functions that serialize the output data.
  5. Make the de/serialization code dumb: it should take the objects and dump them as they are, using, say, asdict() or maybe a simple protocol to convert.
  6. Unit test the processing code to make sure it does what it is supposed to do. Generally this means a suite of tests for each spreadsheet or table generated.
  7. Do manual or integration tests for the functions that parse raw input rows (e.g. a dict from csv.DictReader) into record types. Unit tests for these would be trivial, and keeping them up to date would be more work than they're worth.
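
Here's a minimal sketch of steps 1 through 3, assuming a made-up latency experiment. Sample, LatencyRow, parse_sample, and tabulate_latency are all hypothetical names, not anything from real code:

```python
from dataclasses import dataclass


# Step 1: one dataclass per output table, representing a single row.
@dataclass(frozen=True)
class LatencyRow:
    scenario: str
    mean_ms: float
    max_ms: float


# Step 2: a record type for the input data, plus a parser from the raw
# form it arrives in (here, a dict from csv.DictReader).
@dataclass(frozen=True)
class Sample:
    scenario: str
    latency_ms: float


def parse_sample(raw: dict) -> Sample:
    return Sample(scenario=raw["scenario"], latency_ms=float(raw["latency_ms"]))


# Step 3: pure processing. Consume input records, produce output rows,
# touching no files and no global state.
def tabulate_latency(samples: list[Sample]) -> list[LatencyRow]:
    by_scenario: dict[str, list[float]] = {}
    for sample in samples:
        by_scenario.setdefault(sample.scenario, []).append(sample.latency_ms)
    return [
        LatencyRow(
            scenario=scenario,
            mean_ms=sum(values) / len(values),
            max_ms=max(values),
        )
        for scenario, values in sorted(by_scenario.items())
    ]
```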
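
Steps 4 and 5 then stay almost too dumb to get wrong. A sketch, reusing the hypothetical LatencyRow rows from above (write_table is a made-up name, and it assumes rows is non-empty):

```python
import csv
from dataclasses import asdict, fields


# Steps 4-5: dumb serialization. Dump the row objects exactly as they
# are; the dataclass field names double as the header row.
def write_table(rows, path):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(rows[0])])
        writer.writeheader()
        writer.writerows(asdict(row) for row in rows)
```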
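
And because the processing code never touches a file, step 6's tests are plain value-in, value-out assertions. Again with the hypothetical names from the sketches above:

```python
def test_tabulate_latency_aggregates_per_scenario():
    samples = [
        Sample(scenario="warm", latency_ms=10.0),
        Sample(scenario="warm", latency_ms=20.0),
        Sample(scenario="cold", latency_ms=50.0),
    ]
    assert tabulate_latency(samples) == [
        LatencyRow(scenario="cold", mean_ms=50.0, max_ms=50.0),
        LatencyRow(scenario="warm", mean_ms=15.0, max_ms=20.0),
    ]
```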

Currently I'm trying this with legacy spreadsheet generation code (that I wrote). Next time I start from scratch I might try test-driving the processing code. That is: start from a dummy implementation that returns one hard-coded row, write a test for a behavior or output feature first, then write the code to make the test pass (red-green-refactor).
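
A guess at what that starting point might look like, reusing the hypothetical names from the sketches above:

```python
# Red: a test that pins down one behavior of the table first.
def test_one_scenario_produces_one_row():
    samples = [Sample(scenario="warm", latency_ms=10.0)]
    assert tabulate_latency(samples) == [
        LatencyRow(scenario="warm", mean_ms=10.0, max_ms=10.0)
    ]


# Green: the dumbest implementation that passes. Further tests (more
# scenarios, more samples per scenario) force the real logic out of it.
def tabulate_latency(samples):
    return [LatencyRow(scenario="warm", mean_ms=10.0, max_ms=10.0)]
```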