As a lot of workforces go “fully remote”, pay is becoming a hot topic. Personally, I am on the side of paying people equally no matter where they are (and yes, I realize the complexities this can bring). Here’s an article on how this is changing.
We made it through another week. Sort of. Between this, that, and some really gnarly other stuff, here are a few links I found that you may find interesting as well.
In teams I lead, I do as much as I can to ensure a culture of transparency, accountability, and psychological safety. As such, I’m always interested in articles and research like this that discuss workplace toxicity.
The PNSQC Conference Program is out. This has always been one of my favorite conferences for discussing Quality (and not just testing). It’s worth checking out.
I’ve been re-reading this article on self-service models for platform teams, and like a lot of what it has to say.
The folks at PractiTest recently asked me to take their test case management tool for a spin. It’s been a while since I’ve used a tool like this, and after several hours of exploring and playing, I was reminded how valuable a flexible tool like PractiTest can be to assist and accelerate testing.
I’ll go into a few of my favorite details, but the key thing I liked about PractiTest over other tools I’ve used in this space is the flexibility it allows. Starting from a template choice of Agile vs. Traditional, with or without automation, it tries to set you up with the fields that will work best for your project and goals. But even if you don’t make the perfect choice (or your project doesn’t neatly align with any of these choices), there are plenty of customization options to make it work exactly as you want it to.
In my case, I chose Agile + Automation for a template, and I was ready to add tests or other information immediately.
The customization options and flexibility in PractiTest are amazing. In my initial exploration, I started creating some placeholder test cases, but quickly got distracted with all the fine-tuning and tweaks that are available. Because I could, I added the Angry Weasel logo to my dashboard, added a few test tasks, and I was in business.
As you can see, PractiTest offers a completely customizable task board you can use to track work in progress. Using the board isn’t a requirement, but if you’re using PractiTest to track scripted or exploratory test cases or charters, I think the ability to visualize work in progress is essential.
PractiTest is set up out of the box with three test types – Scripted, Exploratory, and Automated. It also has a Requirement type – which is nice to have, as you can directly link Tests to Requirements and view the relationship in a Traceability tab, which is available both from the Requirement and from an individual test.
Of course, there’s also an Issue type that can be used for bugs, suggestions, etc. as needed. It’s also straightforward to set up PractiTest to use with a wide variety of external Issue tracking tools. The list of connected tools is extensive (and unfortunately beyond the scope of this current round of kicking the tires of PractiTest).
You can choose to use the test types in PractiTest as you see fit – but I have some opinions and preferences (and experiences) with each of these test types that are probably worth diving into in this article.
I know a lot of teams that don’t use Scripted tests (tests with pre-defined steps and outcomes) at all – and that’s fine. On a lot of Agile teams, those testing the code and those developing the code work extremely closely (or are the same person). In many of these cases, writing a bunch of test steps may not seem as efficient as exploratory testing. In fact, on most of the products I’ve worked with over the last ~6 years, I haven’t used scripted tests. However, sometimes I’ve been on teams where we’ve found it necessary to get another team to help with testing (usually on different devices or environments that we didn’t have on our local team). In these cases, sharing a set of scripted tests with those teams helped significantly.
I really like the way Exploratory Tests are set up in PractiTest. I’ve used the concept of Test Charter and Guide Points (under various names) for my entire career, and think it’s a great way to guide investigative testing of any product.
For me, I define the charter as “the thing I want to do with this exploratory session” – and then I usually time box the effort. For a daily build of a simple website I may take just 30 minutes for exploratory testing. I use the Guide Points as reminders of things that I may want to poke at or investigate during my exploratory session.
It’s worth mentioning that I’ve found that scripted tests can easily and often become exploratory tests. Years ago, I worked on a product where a contract team ran a set of scripted tests on every build. After a short time, I found that the team was coloring outside the lines a bit while running the tests. They knew the product well enough that most of the time, they ended up doing exploratory testing, and marking off test steps as they ran across them.
I created a Test Charter and some Guide Points for every feature area of the product (taking some input from the existing Scripted Tests and previous Issues), and the testing quickly became both more efficient and more effective.
PractiTest handles automated tests in a few different ways. First, there’s the FireCracker tool, which lets you take XML output from any test tool (or any tool, for that matter) and import it as test results. The second method is xBot. xBot is an agent that allows you to run a test on a remote machine via PractiTest. It’s a simple but effective way to run tests directly from the PractiTest UI. Finally, for ultimate flexibility, PractiTest has a REST API that allows you to connect just about any automation framework. I was pleased to see a lot of examples of connecting a variety of tools to PractiTest. I played around with this feature quite a bit, and it was pretty straightforward to interact with test case details via curl in a terminal. It’s worth noting that you can create new items from the REST API – so there’s a great opportunity to create new Issues (bugs) directly from a wrapper around your automation and add stack traces, environment variables, etc. to make debugging test failures easier.
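To make that last idea concrete, here’s a minimal sketch of what a wrapper around your automation might do: gather a failing test’s stack trace and environment into an Issue payload before POSTing it to the API. The endpoint path and payload shape shown in the comment are assumptions based on typical REST conventions – check PractiTest’s API documentation for the exact contract before using this for real.

```python
import json

# Assumed base URL for the PractiTest REST API (illustrative only).
API_BASE = "https://api.practitest.com/api/v2"

def build_issue_payload(test_name, stack_trace, env):
    """Bundle a failing test's context into a hypothetical Issue payload."""
    description = "\n".join(
        [
            f"Automated test failed: {test_name}",
            "",
            "Stack trace:",
            stack_trace,
            "",
            "Environment:",
            *(f"  {key}={value}" for key, value in sorted(env.items())),
        ]
    )
    # Payload shape is an assumption; verify against the API docs.
    return {
        "data": {
            "type": "issues",
            "attributes": {
                "title": f"[automation] {test_name} failed",
                "description": description,
            },
        }
    }

payload = build_issue_payload(
    "test_login_redirect",
    "AssertionError: expected 302, got 500",
    {"BROWSER": "firefox", "BASE_URL": "https://staging.example.com"},
)
print(json.dumps(payload, indent=2))

# To actually file the Issue, you would POST the payload with your API
# credentials, e.g. (endpoint path assumed):
#   requests.post(f"{API_BASE}/projects/<project_id>/issues.json",
#                 json=payload, auth=("you@example.com", "<api-token>"))
```

The win here is that the bug report carries everything needed to start debugging – no copy-pasting stack traces out of CI logs by hand.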
I think this area especially is an outstanding example of where the extensibility of PractiTest makes it a great tool. I love that it doesn’t try to be too many things – but that it lets you make it work the way you want, rather than forcing you to change your workflows to fit the tool, as I’ve had to do with many other tools in my career.
While I definitely put PractiTest through a decent amount of poking, I didn’t dive into several of the features. Most notably missing from this review are the dashboard and reporting features, and the ability to connect to other Issue Tracking tools (Bugzilla, GitHub, Jira, and more). Like the rest of PractiTest, the dashboard and reporting features are highly customizable and extensible. There’s a nice set of core features that doesn’t try to do too much, and it’s intuitive to use.
Overall, for me, the big win with PractiTest is the extensibility. I don’t want tools to get in my way, and I don’t want to change my workflows to make a tool work for me. I feel like PractiTest provides a great way to be the right tool for a lot of teams – and that’s a huge achievement.
For me, it’s getting tougher lately to focus and get work done. This article on deep focus has given me some inspiration that I hope I can put to good use.
Saved an important one for last this week. I, along with a lot of other leaders, am seeing more and more signs of burnout in our organizations. As such, I’ve been reading a lot about it. Here’s a great overview of the growing problem with burnout.