Thanks for that. It seems you have to take on many roles. I imagine you’re also a mentor to each student employee. It must be rewarding when things go according to plan.
I produce use-case tests, and they can write the unit tests and associated functions.
I have some trouble understanding this. Which of the following is true, if any? You produce use-case tests and they are free to solve the problem
- however they like,
- as long as they provide unit tests along with the solution,
- by writing unit tests first?
I’m probably too harsh, but if example-based unit tests are expected and accepted, I can see you’re walking dangerously close to a slippery slope. As soon as you start to believe that your program works because all unit tests pass, you’re missing the fact that those unit tests cover only specific examples. If the program works (more or less) correctly in general, that’s only down to your team doing a good enough job. A desired outcome and a proud moment for you and the “first timers”, but the tests have hardly anything to do with it.
My approach to tests depends a lot on the nature of the modification I am applying to the system.
When I don’t have a clear vision of how I’m going to implement a new module, I tend to sketch a test on the outermost layer of the application I’m working on (provided it still has a quick feedback cycle). E.g. if I’m working on a web API, that would be an API test.
I start writing the happy path scenario test on the outermost layer of the application.
From the moment I have the tests written and failing, I start writing the code in a top-down way, using RDD (REPL-driven development) when writing pure functions. I like to use (comment …) forms to test each function I’m writing.
I write the necessary code until my tests are passing.
At this point I have a much clearer vision of how I want to design that module, so I make the design changes using TDD.
When I am happy with the design I start to write the corner case scenarios as unit tests on the innermost layers.
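The (comment …) habit mentioned above is often called a “rich comment” block. Here is a minimal sketch of what that looks like; the function and its namespace are hypothetical, purely for illustration:

```clojure
(ns app.pricing)

;; A pure function under development.
(defn apply-discount
  "Returns price reduced by rate (0.0 to 1.0)."
  [price rate]
  (* price (- 1 rate)))

;; A rich comment block: forms inside are never run when the
;; namespace is loaded, but can be evaluated one at a time
;; from the editor's connected REPL during development.
(comment
  (apply-discount 100 0.2)  ;; => 80.0
  (apply-discount 100 0))   ;; => 100
```

Because the forms sit next to the function in the source file, they double as lightweight, always-available usage notes for the next reader.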
In another scenario, when I know exactly how I’m going to design a certain module, for example when there is a clear pattern to follow in the project, I prefer to use TDD and build the code bottom-up.
Firstly, I completely buy the usual gripes with TDD. It’s a strangely expensive way to build anything, with no parallel in any other industry. If you designed the feature badly, TDD won’t help - instead, the bad design is baked in twice. It also tests only a few highly specific paths, unlike generative tests. But in pair programming, for normal Joes like me, just getting one flow built at all with a clear goal set out by the test (rather than relying on each other’s verbal skills) is fairly indispensable, as is the safety net of a barrage of tests to stop you when you mess things up.
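For contrast with example-based tests, here is a minimal generative-test sketch using clojure.test.check; the sort-idempotence property is just an illustrative choice:

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.properties :as prop]
         '[clojure.test.check.generators :as gen])

;; Property: sorting is idempotent for any vector of integers.
;; Instead of a handful of hand-picked examples, the property is
;; checked against many randomly generated inputs.
(def sort-idempotent
  (prop/for-all [v (gen/vector gen/small-integer)]
    (= (sort v) (sort (sort v)))))

;; Run the property against 100 random vectors.
(tc/quick-check 100 sort-idempotent)
;; => {:result true, :num-tests 100, ...}
```

When a property fails, test.check shrinks the failing input toward a minimal counterexample, which is exactly the kind of path coverage a few fixed examples can’t give you.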
Secondly, I love RDD: it provides far faster and more directed feedback and changes how you think about code. But recently I’ve also been using a remote REPL to build tests that churn through user journeys in integrated test environments (with legacy systems that are difficult to reproduce locally). Developing directly on an externally deployed instance with the same fast feedback as locally is a joy, and it’s a mode of development that TDD could never give you.
+1 for “run tests on save”! I do the same for all unit and integration tests, using kaocha’s --watch feature. It
- provides almost the same fast feedback loop as the REPL,
- reduces the mental load of remembering the current state of the REPL,
- ensures better coverage of all use cases.
Of course this is interleaved with using the REPL, so a typical workflow looks like:
1. Launch the kaocha watcher
2. Add a test
3. Write code, play with the REPL, write more code, until the test passes
4. Refine the tests / add more tests
5. Repeat from step 3
And I make heavy use of kaocha’s focus/skip feature to run only the tests for the module I’m currently working on.
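Besides the --focus / --skip command-line flags, focusing can also live in kaocha’s tests.edn configuration. A minimal sketch (the test namespaces here are hypothetical):

```clojure
;; tests.edn -- kaocha configuration (edn syntax)
#kaocha/v1
{;; Run only these test namespaces during the current work...
 :kaocha.filter/focus [my.app.parser-test]
 ;; ...and never run these, e.g. a slow suite.
 :kaocha.filter/skip  [my.app.slow-test]}
```

Keeping the focus in tests.edn means the watcher picks it up automatically, at the cost of having to remember to remove it before committing.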
For e2e tests it varies from case to case; e.g. I won’t set up auto-run for browser-driven e2e tests, since I’m working on frontend and backend code at the same time, and saving a cljs file may require a backend change before the tests can pass.