As a developer, having QA you can rely on is great! They are welcome friends, helping us cultivate our precious software. But there are dark places into which even QA cannot shine a light. When your software has no interface, what can QA do but wish you luck? What if there were a way for QA to interact with otherwise UI-less software? Enter Cucumber, a tool that lets QA shine a light in dark places.
I rediscovered Cucumber while researching test automation frameworks. Cucumber is a framework for Behavior-Driven Development. After experimenting with it for a time, I realized Cucumber opens up a whole realm of possibilities. Cucumber encourages expressing program actions in the human tongue. With a proper translation mechanism, Cucumber can act as a mediator between QA and UI-less software.
Cucumber translates the human tongue into function calls through the Gherkin language. For example, a tester would define a test case like this:
Scenario: Messages are saved until the consumer arrives
Given the queues are empty
And I publish a message to the queue with ‘SomeDetails’
When Alice subscribes to the queue
Then Alice should receive a message with ‘SomeDetails’
It is fairly easy to understand the behavior being described in this scenario. Cucumber ties the keywords Given, When, and Then to functions that execute the described action, using a regex match string. The match can include free-hand parameters such as ‘SomeDetails’.
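The tie between step text and functions can be sketched in a few lines. This is a minimal, pure-stdlib illustration of the mechanism, not actual Cucumber; the `step` registry and the queue dictionary are invented for the example.

```python
import re

# Registry of (compiled regex, handler) pairs, mimicking how a BDD tool
# binds Given/When/Then text to functions. All names here are illustrative.
STEPS = []

def step(pattern):
    """Register a handler under a regex; capture groups become arguments."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

def run_step(text, context):
    """Find the first registered pattern matching the step text and run it."""
    for pattern, func in STEPS:
        match = pattern.fullmatch(text)
        if match:
            return func(context, *match.groups())
    raise LookupError(f"No step matches: {text}")

@step(r"the queues are empty")
def empty_queues(context):
    context["queue"] = []

@step(r"I publish a message to the queue with '(.+)'")
def publish(context, details):
    context["queue"].append(details)

@step(r"(\w+) subscribes to the queue")
def subscribe(context, user):
    # On subscribe, deliver everything saved so far.
    context.setdefault("inbox", {})[user] = list(context["queue"])

@step(r"(\w+) should receive a message with '(.+)'")
def received(context, user, details):
    assert details in context["inbox"][user]

# Running the scenario from above, step by step:
context = {}
run_step("the queues are empty", context)
run_step("I publish a message to the queue with 'SomeDetails'", context)
run_step("Alice subscribes to the queue", context)
run_step("Alice should receive a message with 'SomeDetails'", context)
```

Because each step is just a small registered function, the free-hand parameters ('SomeDetails', 'Alice') fall out of the regex capture groups automatically.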
Properly designed, the Givens and Whens can be set up to be repeatable and recomposable. Doing so allows QA to describe more complex scenarios with different combinations of the same simple behaviors. As a result, once the initial steps are available, a QA could test to their heart's content with little developer support.
Cucumber also improves the documentation of a product. Tests document expected behaviors in a common tongue, which makes them accessible to every part of the company.
But great care must be taken to ensure that the composable parts function precisely as described and without side effects. Imperfections in the design, or the aforementioned side effects, will destroy test validity and erode trust in the test cases written using Cucumber.
Cucumber was designed to improve TDD, enabling members of a team to describe the function of a program in a human tongue. This same feature makes it a tool for empowering QA. Given careful planning and design, you can compose a terse but flexible set of instructions. These allow QA to test projects they could never touch before! By blending the skills of developers and QA, we can reap the best of all our talents. All it takes is an investment to let our friends in QA come with us!
Software is for the user, not for the Software Engineers who develop it. In the end, software will succeed or fail by whether it meets user needs; the user is the arbiter of software's fate. Oddly though, many software developers tend to resent their users. Users are prone to strange behaviors. To jaded developers, they can even come across as whiny children. But we must do away with this flawed way of thinking. We must act as humble stewards, gentle of heart and eager to please.
Users are the lifeblood of a software product. Without them, the product will fail. As a result, their needs are paramount and must be addressed to the best of our abilities. If this is the case, then why are developers so often frustrated by their users? Remember, we are fluent in the machine tongue. Generally speaking, users aren't. Sure, they can use the machines to a limited degree. But they don't understand them like we do.
Imagine you are in a foreign country, and the only way to get your work done is to cajole a lumbering beast into action. Without understanding the beast's language, even simple tasks could be infuriating. Users who are less familiar with software might feel the same. Remember, too, that we specialize software to particular tasks, so users need to learn, remember, and wrangle a variety of these 'beasts' to get their work done. And remember that they are being evaluated on their ability to get work done using your software.
And so, scared, frustrated, and feeling impotent, they turn to us. They wonder why their actions did not work. They ask for strange features or workflows. All these feelings arise because they don't understand their tools. Sure, we could 'educate them'. But if the way to use a tool is less than obvious, or they use it only seldom, then you can expect them to forget. Not to mention, you have to convince them to spend time on training rather than on working. Even we don't feel comfortable trading training time for working time. So why should we ask that of them?
Two paths remain to us. We can tell the users they are wrong and constantly bicker with them, trying to explain the proper way. Or we can choose to listen. The way we thought was obvious is not. They need more help, because the grammar of machines is difficult. I would call this path 'Stewardship'. We have to think of the code as belonging to the users, not to us. In so doing, it becomes clear what choices we need to make. If the code is for the user, then their needs overrule ours. If they aren't fluent, we must make the software more approachable.
We are like gardeners. The land we tend is not our own, but still we make it bloom with brilliant flowers. We cherish the blossoms and suffer when they are trodden upon. But the garden is not for us. Imagine if the gardener chased off the owner with a spade when he asked for a new row of lilies. The gardener would be marched off and a new one brought in to replace him. The analogy is not exact, since users pick their software; they might simply avoid a certain gardener altogether.
If instead we are gentle and approachable, we can better tend our gardens. If no one ever walks our garden paths, then all the love and beauty the garden contains goes to waste. Software without users, despite its brilliant design and delicious complexity, is dead. If we want vibrant, living software, we must serve our users. We cannot lord our understanding over them; we must instead steward the code for them. With gentle hearts, we can learn their needs and make the garden they need. In the process, we may discover an even greater beauty.
What's the point of an interview? Before you jump to an answer: do you give your candidates coding tests? Some whiteboard challenges? Have you ever wondered why? Do you think it's the best way? Recently I've encountered opinions that counter the traditional wisdom on filtering candidates. Interviewing.io shared data showing that LinkedIn endorsements don't correlate with a candidate's actual skill.
Recently, respected programmers have taken to Twitter to 'confess their programming sins'. This prompted a discussion of technical interview questions by The Outline. There is even a small industry devoted to preparing candidates for whiteboard challenges. In the end, the hubbub about whiteboard challenges comes from the fact that we are using them wrong.
We interview this way because employers need to feel comfortable about a candidate. For software, this means verifying the skills of the candidate, and to a lesser extent verifying their ability to communicate. That sums up the entire purpose of an interview.
But what does my answer to a whiteboard challenge actually mean? Is there such a thing as a 'correct' response? At a deeper level, does my answer truly reflect my skills as a developer? I say it does not, unless you are referring to the ability to communicate and reason by drawing boxes and lines.
Don't get me wrong, though. The ability to present your designs on a whiteboard is a useful skill. But it is not the skill an employer wants to check. Unfortunately, there isn't a good way to measure some skills without seeing actual work. 'Take-home tests' in the interviewee's preferred language are much more useful; whiteboard challenges do not demonstrate the same skills.
That is not to say you should toss out whiteboard challenges. What we need is to change our thinking. Whiteboard challenges may not show an interviewee's 'coding' skills, but they do show the manner in which an interviewee thinks. If you ask someone to write out an algorithm on a whiteboard, you will see how they think about the algorithm and how they remember it. If you ask them to create a new algorithm, something unique, you can learn how they explore a new problem. You'll see which details they pay attention to. Moreover, you can introduce new requirements after they get started, which reveals how they adapt.
All these insights are useful to know. But they are far less tangible and measurable. As with most hard-to-measure qualities, we tend to fail at measuring them. As a result, the tools created to measure them get misused or misapplied to find other tidbits. It ends up like using a fork to eat soup: not very effective, and it wears out both you and your server.
So, if an interview is about revealing the skills of the interviewee, then we need technical interview questions, and whiteboard challenges still provide some benefits. But we cannot use whiteboard challenges as a litmus test for programming skill. Instead, we should use them to pose unusual challenges that expose the way the interviewee thinks. This form can also reveal how interviewees adapt to adversity. Those insights, combined with more traditional evaluations, will help businesses find candidates who are stronger not merely from a technical perspective but also from a cultural one. All it takes is using the tool for its proper purpose.
As 2016 drew to a close, there were numerous articles covering the state of the software development community. [For example here, and here] In several cases, the authors pointed out the sorry state of 'Agile'. In fact, this trend of developers hating 'Agile' has been growing for quite some time. Reading those articles prompted some self-reflection. Obviously, business management does 'Agile' differently: as a set of prescribed practices, since that is what they understand. And of course, robbed of its vigor, this 'Agile' is less effective. But we, software developers, sometimes do it wrong as well. We may have bought into the wrong ideas.
As I wrestled with myself over Agile, a larger picture began to emerge. When I entered the workforce, I joined a company that did 'Agile'. As I learned more about the original principles of the practice, I became a supporter. Note I say original principles. The more of a supporter I became, the more I realized my company did not quite get Agile right. We had the form, but lacked the true substance of it. It wasn't all bad; there were pockets of true agility here and there, but en masse, we missed it. As a result, I started to burn out. I had only been working for half a year when I began to tire. The discontinuity between what we professed and what we actually did was a heavy burden. So far, so normal, as disillusioned developers go.
Now, my company did provide a good opportunity for discussion. Specifically, they supported a developers' book club, and 'Agile' methodologies would be the topic of discussion from time to time. But when I would bring up some place where I saw the company missing the goal of agility, the observation was generally dismissed. There were a few who did hear, though, and would later come and discuss it with me. They usually came to offer their own observations, to help me see what I had missed. These kind souls all shared a common trait: they were willing to look at failure for what it was. They didn't deny its occurrence, and they always looked for some nugget to learn from. From those leaders, I learned a great deal, and I would return to them for advice during the rest of my time at the company. In my opinion, they understood the true core of agility, despite being unable to practice it because of organizational constraints.
With the advice of these leaders in my ear, I searched. And as I searched, I realized that we, as software developers, need to branch out more: to find insight not just from our insular community, but also from the world at large. After all, we are humans, and the world has been analyzing humans for centuries! During one such exploratory expedition, I found the OODA loop. As described, the loop is this:
Observe : Review your facts and information
Orient : Is something off? How so? Frame your thoughts and discussion
Decide : Based on your thoughts, and your facts, what should you do? Make it a small step.
Act : Act out your decision.
Repeat : Repeat the process ad nauseam, until you have reached your goal/destination
To any supporter of the principles of agile software development, these steps ought to look familiar. It is the same core of iteration with small steps. The very same principle is found outside of software development, serving the same purpose: reach your goal faster.
But here is where business influenced the 'Agile' practice in a negative sense. Review the loop. It never says that every action must lead directly to your goal. In fact, it appears to assume that some steps won't be optimal, just like the original principles for agile software development. But in a business context, such a step can prove costly. If you take a step that doesn't lead to results, then for a business the cost of that step is lost. So naturally, business wants to avoid lossy steps and ensure that it takes just the right ones. We end up with strong Project/Product Managers and non-autonomous engineers. From a business stance, this is excellent: it is safe, much more certain, and infinitely easier to explain to any higher-up.
It is also stagnant, impotent, and ineffective. By the very act of achieving safety, the methodology loses its potency. The principles of agile software development imply, expect, and I would go so far as to say require, risk. The original agile allows, and expects, some of the steps to be imperfect. In fact, the first step is supposed to be just a guess. But it is time-boxed, so that we can learn from it while the ideas are still fresh in our minds! If we don't risk anything in a step, how can we gain anything? In agile, there are no 'unsuccessful' steps. That is not blind optimism or new-age BS; it is a deep understanding of what we are buying with each step. With each step, we are either buying customer approval of the developed feature, or buying knowledge of our customers. And this isn't just any knowledge. It is personal, contextualized knowledge that our customer provides back to us. We pay to learn in small, highly contextualized, 'as close to the real thing as possible' bits.
But before I move on, there is one other detail in which Business Agile and original agile differ. In the original, we do not assume we know what the customer wants. We expect to find out through experimentation and missteps. We start with inaccuracy and move towards accuracy. In Business Agile, the Product and Project Managers 'know' what the customer wants. We start with 'accuracy' and have nowhere further to go. The iteration becomes a simple and convenient block of man-hours, which lets them estimate the time it will take to complete the feature we 'know' the customer wants.
It would seem to me that business has forgotten a value given to us in childhood. After all, don't we spend nearly the first two decades of our lives learning, trading time for knowledge? Hasn't our society decided that it is valuable to ensure everyone has some common understanding? I think business has fallen into its current state of 'Agile' because it misunderstands what it is buying. It is not buying software, at least not directly. The original agile aims to provide strategic knowledge. What if we shifted our thinking about agile? If, instead of purchasing a static product, we saw ourselves as acquiring and applying strategic knowledge, we could reinvigorate the practices that have been robbed of their efficacy.
I was working on a personal project recently when a realization dawned on me. User Experience design, also known as UX design, and software design collide more frequently than we tend to acknowledge, and not only in the User Interface layer.
Before I get too far: when I talk about UX, I am referring to the experience the user has while attempting to use the device, object, or code. I think this image does an excellent job of describing good UX concisely.
It's pretty easy to tell what UX is like with a Graphical User Interface, or GUI. After all, this is the part everyone touches. If a website is snappy and the layout makes sense, that is good UX. If it is clear how to do the operation you want, without needing to consult the magic talking paperclip, that is good UX. But it seems that once you go below the GUI layer, the lessons of good UX vanish.
I was working on a Fluent Testing API for Python when I realized it. In version 1, I had all the functionality for this API bound up in a single class. Sure, it limited the import tree and made it easy for me to develop. For version 2, I decided to pull the functions into separate classes. And while I was writing out some example cases, I realized that this simple code change resulted in an improved user experience!
You see, by pulling the various functions into different classes, I allowed the IDE to create better prompts. Those prompts now guide a user through the proper pattern of using my API. Since there are fewer functions to choose from at each step, it is clearer how to proceed. The user no longer has to consult a pile of documentation. This is a simple example, but it got me thinking.
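The split can be sketched roughly like this. This is a hypothetical miniature of the idea, not my actual API; the class and method names (`Expectation`, `Conclusion`, `expect`) are invented for illustration. Because each class exposes only the methods that make sense next, the IDE's autocomplete walks the user through the chain.

```python
# Each step of the fluent chain returns a small class whose methods are
# exactly the valid next moves, so autocomplete becomes the documentation.

class Conclusion:
    """Returned after a satisfied expectation; only lets you chain onward."""
    def and_also(self, value):
        # Start a further expectation within the same test.
        return Expectation(value)

class Expectation:
    """Wraps a value under test; only lets you state what you expect."""
    def __init__(self, value):
        self._value = value

    def to_equal(self, expected):
        assert self._value == expected, f"{self._value!r} != {expected!r}"
        return Conclusion()

def expect(value):
    """Entry point: the only name a user has to remember."""
    return Expectation(value)

# Usage: at each dot, the IDE offers only the sensible next step.
expect(2 + 2).to_equal(4).and_also("a" * 3).to_equal("aaa")
```

Had everything lived in one class, `and_also` and `to_equal` would appear side by side in every prompt, and nothing would stop a user from calling them in the wrong order.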
In fact, one week prior, I had added a Facade to one of my libraries at work. The Facade simplified interactions with the library, so other software engineers could more readily use its functionality. I am surprised that I didn't think of it at the time, but APIs are a software engineer's UI layer. As a result, they should be subject to a UX review!
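The Facade idea looks something like the sketch below. The subsystem names (`ConnectionPool`, `Serializer`, `Transport`) are invented stand-ins, not the actual work library; the point is that one obvious entry point hides the wiring.

```python
# Three hypothetical subsystems a caller would otherwise orchestrate by hand.

class ConnectionPool:
    def acquire(self):
        return "connection"  # stand-in for a real connection object

class Serializer:
    def encode(self, payload):
        return str(payload).encode("utf-8")

class Transport:
    def send(self, connection, data):
        return len(data)  # pretend number of bytes written

class MessagingFacade:
    """One obvious entry point that hides the wiring of the three parts."""
    def __init__(self):
        self._pool = ConnectionPool()
        self._serializer = Serializer()
        self._transport = Transport()

    def send(self, payload):
        conn = self._pool.acquire()
        data = self._serializer.encode(payload)
        return self._transport.send(conn, data)

# Callers now write one line instead of juggling three objects.
bytes_sent = MessagingFacade().send({"id": 7})
```

From a UX standpoint, the Facade does for other engineers what a clean GUI does for end users: it shrinks the set of decisions they face at the point of use.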
I mentioned earlier that, on the whole, UX degrades as you leave the GUI layer. Two factors are responsible, in my opinion. First, the majority of UX review and work goes into the GUI layer. And this focus makes sense: the vast majority of software interaction happens through such a layer. As an aside, finding a UX specialist who can talk about both UX and API design can be difficult; I usually have a heck of a time getting time with them to review even a GUI design!
The second factor is a lack of discipline. I am not throwing stones here; the first version of my Testing API is an example of such a lack! I collected all the functionality in a single class because it was easier for me. I wanted to keep the functionality together and reduce the import tree. In hindsight, that was a silly reason. And yet, it was enough to change my behavior.
So now that I've seen the problem, what can I do? Well, I noticed the improvements in the UX of version 2 by writing up some examples. That is to say, I used it. This is a good start, but submitting it to user testing would be a better step. After all, as the designer, I was intimately familiar with the inner workings and proper usage of the tool. A fresh user wouldn't be. And if there is anything I have learned developing software, it is that the user never does exactly what you expect.
Besides more user testing, some cross-functional education might help. This recent epiphany put me in mind of a tech talk that I hadn't finished. You can find the YouTube video here. I am hoping that revisiting the principles from the talk will continue to improve my designs!