Software Development, Work Projects

Going where no QA has gone before!

As a developer, having QA you can rely on is great! They are welcome friends helping us cultivate our precious software. But there are dark places into which even a QA cannot shine a light. When your software has no interface, what can a QA do but wish you luck? But what if there were a way for QAs to interact with otherwise UI-less software? Enter Cucumber, a tool that allows QA to shine a light in dark places.

I rediscovered Cucumber while researching test automation frameworks. Cucumber is a framework for Behavior-Driven Development. After experimenting for a time, I realized Cucumber opens a whole realm of possibilities. Cucumber encourages the expression of program actions in the human tongue. With a proper translation mechanism, Cucumber could act as a mediator between QA and the UI-less software.

Cucumber translates the human tongue into functions through the Gherkin language. For example, a tester would define a test case like this: 

Scenario: Messages are saved until the consumer arrives
Given the queues are empty
And I publish a message to the queue with ‘SomeDetails’
When Alice subscribes to the queue
Then Alice should receive a message with ‘SomeDetails’

It is fairly easy to understand the behavior being described in this scenario. Cucumber ties the keywords Given, When, and Then to functions which execute the described action, matching each step against a pattern such as a regular expression. The pattern can capture free-form parameters such as ‘SomeDetails’.
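
To make that binding concrete, here is a minimal sketch of what the step definitions for the scenario above might look like. I am using Python's behave implementation of Cucumber, and the broker helper on the test context is a hypothetical stand-in for the real system under test; Cucumber's Ruby original binds steps with regular expressions, while behave does the same job with parse-style placeholders.

# steps/queue_steps.py -- illustrative step definitions, not production code
from behave import given, when, then

@given('the queues are empty')
def purge_queues(context):
    context.broker.purge_all()            # hypothetical helper around the real broker

@given("I publish a message to the queue with '{details}'")
def publish_message(context, details):
    context.broker.publish(details)

@when('{subscriber} subscribes to the queue')
def subscribe(context, subscriber):
    context.inboxes[subscriber] = context.broker.subscribe(subscriber)

@then("{subscriber} should receive a message with '{details}'")
def assert_received(context, subscriber, details):
    message = context.inboxes[subscriber].next_message(timeout=5)
    assert message.body == details

Because each step is parameterized, the same definitions can be reused in new scenarios without any new developer work.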

Properly designed, the Givens and Whens can be set up to be repeatable and composable. Doing so allows the QA to describe more complex scenarios with different combinations of the same simple behaviors. As a result, once the initial steps are available, a QA can test to their heart's content with little developer support.

Cucumber also improves the documentation of a product. Tests document expected behaviors in a common tongue, which makes them available to all parts of the company.

But great care must be taken to ensure that the composable parts function precisely as described and without side effects. Imperfections in the design, or the aforementioned side effects, will destroy test validity and erode trust in the test cases written using Cucumber.

Cucumber was designed to support Behavior-Driven Development, enabling members of a team to describe the function of a program in a human tongue. This same feature creates a tool for empowering QA. Given careful planning and design, you can compose a terse but flexible set of instructions. These allow a QA to test projects they could never touch before! By blending the skills of a developer and a QA, we can reap the best of all our talents. All it takes is an investment to allow our friends in QA to come with us!

Perspective, Software Development

For the love of the User

Software is for the user. It is not for the Software Engineers who develop it. In the end, software will succeed or fail by how well it meets user needs. The user is the arbiter of software’s fate. Oddly though, many software developers tend to resent their users. The users are prone to strange behaviors. Sometimes they can even come across as whiny children to jaded developers. But we must do away with this flawed way of thinking. We must act as humble stewards, gentle of heart, and eager to please.

Users are the lifeblood of a software product. Without them, the product will fail. As a result their needs are paramount, and must be addressed to the best of our abilities. If this is the case, then why are developers so often frustrated by their users? Remember, we are fluent in the machine tongue. Generally speaking, users aren’t. Sure, they can use the machines to a limited degree. But they don’t understand them like we do.

Imagine you are in a foreign country. The only way to get your work done is to cajole a lumbering beast into action for you. Without understanding the beast’s language, even simple tasks could be infuriating. Users who are less familiar with software might feel the same. Remember, too, that we specialize software for particular tasks. As a result users need to learn, remember, and use a variety of these ‘beasts’ to get their work done. Also remember, they are being evaluated by their ability to get work done using your software.

And so, scared, frustrated, and feeling impotent, they turn to us. They wonder why their actions did not work. They ask for strange features or workflows. All these feelings arise because they don’t understand their tools. Sure, we could ‘educate them’. But if the way to use a tool is less than obvious, or they use it only seldom, then you can expect them to forget. Not to mention, you have to convince them to take the time to get trained, rather than working. Even we don’t feel comfortable trading training time for working time. So why should we ask that of them?

Two paths remain to us. We can tell the users they are wrong and constantly bicker with them, trying to explain the proper way. Or we can choose to listen. The way we thought was obvious is not. They need more help, because the grammar of machines is difficult. I would call this path ‘Stewardship’. We have to think of the code as belonging to the users, not to us. In so doing, it becomes clear what choices we need to make. If the code is for the user, then their needs overrule ours. If they aren’t fluent, we must make the software more approachable.

We are like gardeners. The land we tend is not our own, but still we make it bloom with brilliant flowers. We cherish the blossoms, and suffer when they are trodden upon. But the garden is not for us. Imagine if the gardener chased off the owner with a spade when he asked for a new row of lilies. The gardener would be marched off and a new one brought in to replace him. This is not an exact analogy, since users pick their software. They might just avoid a certain gardener altogether.

If instead we are gentle and approachable, we can better tend our gardens. If no one ever walks our garden paths, then we put to waste all the love and beauty the garden contains. Software without users, despite its brilliant design and delicious complexity, is dead. If we want vibrant, living software we must serve our users. We cannot lord our understanding over them, but must instead steward the code for them. With gentle hearts, we can learn their needs, and make the garden they need. In the process we may discover an even greater beauty.

Perspective, Software Development

What are you looking for in your interview?

What’s the point of an interview? Before you jump to an answer, do you give your candidates coding tests? Some whiteboard challenges? Have you ever wondered why? Do you think it’s the best way? Recently I’ve encountered opinions that counter the traditional wisdom about filtering candidates. Interviewing.io shared data showing that LinkedIn endorsements don’t correlate with a candidate’s actual skill.

Recently, respected programmers have taken to Twitter to confess their programming sins. This prompted a discussion of technical interview questions by The Outline. There is even a small industry to prepare candidates for whiteboard challenges. In the end, the hubbub about whiteboard challenges comes from the fact that we are using them wrong.

We interview this way because employers need to feel comfortable about a candidate. For software, this means verifying the skills of the candidate, and to a lesser extent verifying their ability to communicate. This sums up the entire purpose of an interview.

But what does my answer to a whiteboard challenge actually mean? Is there such a thing as a ‘correct’ response? At a deeper level, does my answer truly reflect my skills as a developer? I say it does not. It does not reflect your skills, unless you are referring to the ability to communicate/reason by drawing boxes and lines.

Don’t get me wrong though. The ability to present your designs on a whiteboard is a useful skill. But it is not the skill that an employer wants to check. Unfortunately, there isn’t a good way to measure some of the skills without seeing actual work. ‘Take-home tests’ in the interviewee’s preferred language are much more useful. Whiteboard challenges do not demonstrate the same skills.

That is not to say you should toss out whiteboard challenges. What we need is to change our thinking. Whiteboard challenges may not show an interviewee’s ‘coding’ skills. But they do show the manner in which an interviewee thinks. If you ask someone to write out an algorithm on a whiteboard, you will see how they think about the algorithm. You will see how they remember it. If you ask them to create a new algorithm, something unique, you can learn how they explore a new problem. You’ll see what details they pay attention to. Moreover, you can introduce new requirements after they get started. This reveals how they adapt.

All these insights are useful to know. But they are far less tangible and measurable. As with most hard-to-measure qualities, we tend to fail at measuring them. As a result, the tools created to measure them begin to be misused or misapplied to find other tidbits. It ends up like using a fork to eat soup. It’s not very effective, and it wears you and your server out trying to get anything done.

So, if an interview is about revealing the skills of the interviewee, then we need technical interview questions, and whiteboard challenges still provide some benefits. But we cannot use whiteboard challenges as a litmus test for programming skills. Instead, we should use them to pose unusual challenges which expose the way the interviewee thinks. This form can also reveal how interviewees adapt to adversity. Those insights, combined with more traditional evaluations, will help businesses find stronger, more suitable candidates. These candidates will be stronger not merely from a technical perspective but also from a cultural one. All it takes is using the tool for its proper purpose.

Software Development

If you give a Dev a board game…

From my first lecture on C, I have been tinkering with side projects. I’ve done projects purely for exploration and entertainment, like a text-based adventure game. More recently I’ve done utility projects, like a script to correct QIF-formatted text. Recently I took on a project of a larger scope.
 
A while back, I read an article about a simulation of Machikoro. It is a ‘city-building game’, with rules that are easy to translate to code. In particular, the idea of using the simulator to ‘evolve’ an optimal strategy for the game captivated me. This was applying machine learning to a board game. I figured ‘I could do that’, and got to work. I encountered many distractions and setbacks, including a new baby. But this month I am pleased to say that I have hit a milestone.
 
To support the ‘evolution’ aspect, I had to be able to run thousands of simulations in a reasonable amount of time. And after a bit over a month of concerted effort, I made it. I took my code from being a collection of classes to a library and simulator able to run 1000 games in 15 seconds.
 
I started back in December with classes to represent the deck of cards, a strategy for play, and a player state. The first step after this was to create a basic AI* to act upon the player state and a given strategy. Borrowing from the article I had found, I decided to keep the strategy static. The decision logic reduced to constant decisions like ‘always yes’ or ‘always the cheapest available’. Then the AI only needed to use the Strategy to answer queries from the Game.
*Note: I am capitalizing and italicizing Class names for ease of identification.
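
To give a flavor of what those constant decisions look like in code, here is a simplified sketch of the Strategy and AI. The names and fields are illustrative, not the actual project classes.

# Illustrative only -- not the real project classes.
class Strategy:
    def __init__(self, always_reroll=False, purchase_rule='cheapest'):
        self.always_reroll = always_reroll      # an 'always yes' / 'always no' choice
        self.purchase_rule = purchase_rule      # e.g. 'cheapest' or 'none'

class AI:
    def __init__(self, state, strategy):
        self.state = state                      # the player's cards, coins, landmarks
        self.strategy = strategy

    def wants_reroll(self):
        # The Game asks; the AI simply relays the constant answer.
        return self.strategy.always_reroll

    def choose_purchase(self, affordable_cards):
        if self.strategy.purchase_rule == 'cheapest' and affordable_cards:
            return min(affordable_cards, key=lambda card: card.cost)
        return None                             # buy nothing
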
After the simplified AI was complete, I got to work on the Game, which would simulate a single game. I decided that I wanted to use fluent APIs to instantiate a Game. I spent a good chunk of time getting these right, but it helped to make the main routine clearer. While I developed the Game, I decided to abstract the mechanisms of the game. This allowed me to separate the calculations from the sequence in which they are applied. I extracted the Engine to handle things like calculating which AI, if any, has won, or how much money this AI gets with this dice roll. Meanwhile the Game can manage whose turn it is, and who rolls the dice.
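
The fluent setup and the Game/Engine split might be sketched like this, again with hypothetical method names rather than the actual API:

# Hypothetical fluent builder; the real project's method names may differ.
class Game:
    def __init__(self):
        self.players = []
        self.engine = None

    def with_engine(self, engine):
        self.engine = engine
        return self                 # returning self is what makes the API fluent

    def with_player(self, ai):
        self.players.append(ai)
        return self

    def play(self):
        # The Game owns the sequence (whose turn, who rolls); the Engine owns
        # the rules math (payouts for a roll, win detection).
        current = 0
        while self.engine.winner(self.players) is None:
            roll = self.engine.roll_dice()
            self.engine.apply_roll(self.players, current, roll)
            current = (current + 1) % len(self.players)
        return self.engine.winner(self.players)

# Usage, assuming Engine, State, AI, and Strategy exist:
# game = Game().with_engine(Engine()).with_player(AI(State(), Strategy()))
# winner = game.play()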
 
Testing both the Game and the Engine was somewhat arduous, but it was time well spent. I caught numerous bugs and infinite loops before I ever ran a full simulation. Thankfully the Deck, State, and AI were all similarly tested. But I do wish that I had adhered more tightly to TDD. Instead I was very eager to get the core functionality working.
 
Once these pieces were in place, I initiated my GitFlow, branching Master, Dev, and a new Feature. After pushing version 1.0 to Git, I started work on a new Feature: multi-game simulation! And while I tinkered with a Simulator, I realized that my fluent APIs had a bug. So I went back to Dev and produced a Hotfix, which was merged into Master. From there I rebased the Feature, and continued my work.
 
With the Simulator, I needed to initialize a Game, but also to be able to run it N times without interference from the previous rounds. So I took a two-pronged approach: I would accumulate the results of each game, and I would allow a Game to be reset. Learning from my forebears, I was sure to randomize the first player on each reset. This removed the skew of first-move advantage from my results. With the core Game working and fluently initialized, I was able to simply inject it into a Simulator to run.
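
A stripped-down version of that accumulate-and-reset loop could look like the following, assuming the Game exposes a reset method, a player count, and a play method that returns the winner:

import random

class Simulator:
    # Minimal sketch; the real Simulator tracks more than win counts.
    def __init__(self, game):
        self.game = game
        self.wins = {}                          # winner name -> number of wins

    def run(self, rounds):
        for _ in range(rounds):
            first = random.randrange(self.game.player_count())
            self.game.reset(first_player=first)     # wipe state, randomize who starts
            winner = self.game.play()
            self.wins[winner.name] = self.wins.get(winner.name, 0) + 1
        return self.wins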
 
The original Simulator was able to run 1000 games in around 80 seconds. This performance is all right, but my personal dev box has 8 cores and the Simulator was maxing out just one. So to improve performance, I began to look into Python multi-threading. I found two similar flavors of concurrent operations in Python.
I elected to try Tasks first, as they seemed similar to Microsoft’s Task Parallel Library. Sadly, I was not quite right about that. The BatchSimulator’s performance was terrible. For some reason it never used multiple cores. The original time for the BatchSimulator was 150 seconds for 1000 games. While it is likely this was user error, it was enough to discourage me from pursuing Tasks further.
 
So I turned to concurrents. And with concurrents, I had much better luck. In this case I spawned some sub-processes. I created the Coordinator to provide each fork with its own copy of the given Game and an assigned number of games to run. Then each fork created its own Simulator and ran the given number of games. Once each Simulator completed, the Coordinator would accumulate the results. After all the forks completed, the Coordinator calculated the final statistics. This provides an overall winner. To make this easier, I extracted the SimulationResults class. I then added public methods for merging and calculations. By leveraging sub-processes and existing code, the Coordinator was able to run at least 1000 games in ~16 seconds. Now I say at least, because the Coordinator divides the games evenly among the sub-processes. So to ensure that at least 1000 games are run, it must round up on the division of games per sub-process. But having more data is never a bad thing.
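
In Python terms, that fork-and-merge design maps naturally onto a process pool. The sketch below uses concurrent.futures with a ProcessPoolExecutor; it may not match the project’s exact code, but it shows the Coordinator’s job: split the game count, give each worker its own copy of the Game and its own Simulator, then merge the results.

import copy
import math
from concurrent.futures import ProcessPoolExecutor

def run_batch(game, rounds):
    # Each worker process builds its own Simulator around its own copy of the Game.
    return Simulator(copy.deepcopy(game)).run(rounds)

class Coordinator:
    def __init__(self, game, workers=8):
        self.game = game
        self.workers = workers

    def run(self, total_games):
        # Round up so that at least total_games are played overall.
        per_worker = math.ceil(total_games / self.workers)
        merged = {}
        with ProcessPoolExecutor(max_workers=self.workers) as pool:
            results = pool.map(run_batch,
                               [self.game] * self.workers,
                               [per_worker] * self.workers)
            for result in results:
                for name, wins in result.items():
                    merged[name] = merged.get(name, 0) + wins
        return merged

One practical note: on Windows, the call that kicks off the Coordinator needs to sit under an if __name__ == '__main__' guard, or the worker processes will try to re-import and re-run the whole script.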
 
I was able to push and close this Feature recently, and I am very pleased with the progress. I went from single-game simulation to a rather performant 1000-game simulation in a month. I now have something to show for my ideas and my work. This milestone leaves me at a good break point. I can either continue working on the simulator to pursue the machine-learning angle, or I can change focus and return to this project later. At the moment, I don’t know which direction I will turn. But I wanted to take a step back, look at what I have accomplished, and share my ‘geeking out’ a bit.
 
If anyone is interested in the source, you can find it here.
Perspective, Software Development

Resuscitating the dread word ‘Agile’

As 2016 drew to a close, there were numerous articles covering the state of the software development community. [For example here, and here] In several cases, the authors pointed out the sorry state of ‘Agile’. In fact, this trend of developers hating ‘Agile’ has been growing for quite some time. Reading those articles prompted some self-reflection. Obviously, business management does ‘Agile’ differently. It becomes a set of prescribed practices, since that is what they understand. And of course, robbed of its vigor, this ‘Agile’ is less effective. But we, software developers, do it wrong sometimes as well. We may have bought into the wrong ideas.

As I wrestled with myself over Agile, a larger picture began to emerge. When I entered the workforce, I joined a company that did ‘Agile’. As I learned more about the original principles of the practice, I became a supporter. Note I say original principles. The more of a supporter I became, the more I realized my company did not quite get Agile right. We had the form, but lacked the true substance of it. Now, it wasn’t all bad; there were pockets of true agility here and there, but on the whole, we missed it. As a result I started to burn out. I had only been working for half a year when I began to tire. The discontinuity between what we professed and what we actually did was a heavy burden. So far, so normal, as disillusioned developers go.

Now, my company did provide a good opportunity for discussion. Specifically, they supported a developers’ book club. And of course ‘Agile’ methodologies would be the topic of discussion from time to time. But when I would bring up some place where I saw the company missing the goal of agility, the observation was generally dismissed. There were a few who did hear, and who would later come and discuss it with me. They usually came to offer their own observations to help me see what I had missed. These kind souls all had a common trait: they were willing to look at failure for what it was. They didn’t deny its occurrence, and they always looked for some nugget to learn from. From those leaders, I learned a great deal. I would return to them and seek advice during the rest of my time at the company. In my opinion, they understood the true core of agility, despite being unable to practice it because of organizational constraints.

With the advice of these leaders in my ear, I searched. And as I searched I realized that we, as software developers, need to branch out more. To find insight not just from our insular community, but also from the world at large. After all we are humans, and the world has been analyzing humans for centuries! During one such exploratory expedition, I found the OODA loop. As described, the loop is this:

Observe : Review your facts and information

Orient : Is something off? How so? Frame your thoughts and discussion

Decide : Based on your thoughts, and your facts, what should you do? Make it a small step.

Act : Act out your decision.

Repeat : Repeat the process ad nauseam, until you have reached your goal/destination

To any supporter of the principles of agile software development, these steps ought to look familiar. It is the same core of iteration with small steps. The very same principle found outside of software development for the same purpose: reach your goal faster.

But here is where Business influenced the ‘Agile’ practice in a negative sense. Review the loop. It never mentions the idea that all actions must lead directly to your goal. In fact it appears to assume that some steps won’t be optimal. Just like the original principles for agile software development. But in a business context, such a step can prove costly. If you make a step that doesn’t lead to results, then for a business the cost of that step is lost. So naturally business wants to avoid lossy steps and ensure that it takes just the right ones. So we end up with strong Project/Product Managers and non-autonomous engineers. And from a Business stance, this is excellent. It is safe, and much more certain. And explaining it to any higher-up is infinitely easier.

It is also stagnant, and impotent, and ineffective. By the very act of achieving safety, the methodology loses its potency. The principles for agile software development imply, expect, and I would go so far as to say require, risk. The original agile allows, and expects, some of the steps to be imperfect. In fact, the first step is supposed to be just a guess. But it is time-boxed so that we can learn from it while the ideas are still fresh in our minds! If we don’t risk anything in a step, how can we gain anything? In agile, there are no ‘unsuccessful’ steps. That is not blind optimism or new-age BS. Instead it is a deep understanding of what we are buying with each step. With each step, we are either buying customer approval for the developed feature, or buying knowledge of our customers. And this isn’t just any knowledge we are buying. It is a personal and contextualized knowledge that our customer provides back to us. We pay to learn in small, highly contextualized, ‘as close to the real thing as possible’ bits of knowledge.

But before I move on, there is one other detail in which Business Agile and original agile differ. In the original, we do not assume we know what the customer wants. We expect to find it through experimentation and missteps. We start with inaccuracy, and move towards accuracy. In Business Agile, the Product and Project Managers ‘know’ what the customer wants. We start with accuracy and have nowhere further to go. The iteration is a simple and convenient block of man-hours. It allows them to estimate the time it will take to complete the feature we ‘know’ the customer wants.

It would seem to me that Business has forgotten a value we were given in childhood. After all, don’t we spend nearly the first two decades of our lives in learning? In trading time for knowledge? Hasn’t our society decided that it is of value to ensure everyone has some common understanding? I think Business has fallen into its current state of ‘Agile’ because it misunderstands what it is buying. It is not buying software, at least not directly. The original agile aims to provide strategic knowledge. What if we shifted our thinking about agile? If we saw ourselves not as purchasing a static product, but as acquiring and applying strategic knowledge, we could reinvigorate the practices that have been robbed of their efficacy.

Perspective, Software Development, Work Projects

Where’d my UX go?

Disclaimer: I am not the happy looking chap in the photo.

I was working on a personal project recently when a realization dawned on me. User Experience Design, also known as UX design, and software design collide more frequently than we tend to acknowledge. And not only in the User Interface layer.

Before I get too far: when I talk about UX, I am referring to the experience the user has while attempting to use the device, object, or code. I think this image does an excellent job of describing good UX concisely.

Link: http://i.imgur.com/9LqhOl3.jpg

It’s pretty easy to tell what UX is like with a Graphical User Interface, or GUI. After all, this is the part everyone touches. If a website is snappy and the layout makes sense, that is good UX. If it is clear how to do the operation you want, without needing to consult the magic talking paperclip, then it is good UX. But it seems that once you go below the GUI layer, the lessons on good UX vanish.

I was working on a Fluent Testing API for Python when I realized it. In version 1, I had all the functionality for this API bound up in a single class. Sure, it limited the import tree and made it easy for me to develop. For version 2, I decided to pull the functions into separate classes. And while I was writing out some example cases, I realized that this simple code change resulted in an improved user experience!

You see, by pulling the various functions into different classes, I allowed the IDE to create better prompts. The better prompts now guide a user of my API through its proper usage pattern. Since there are fewer functions to choose from at each step, it is clearer how to proceed. The user no longer has to consult a lot of documentation. This is a simple example, but it did get me thinking.
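
Concretely, the change was from one class exposing every method to a chain of small classes, where each method returns only the object for the next legal step. A toy version of the idea, not my actual API, shows how the IDE’s completion list shrinks at each stage:

# Toy illustration of guiding users through return types; not the actual API.
class Arrange:
    def given(self, value):
        return Act(value)                # after 'given', only 'when' is offered

class Act:
    def __init__(self, value):
        self._value = value

    def when(self, func):
        return Assert(func(self._value)) # after 'when', only assertions are offered

class Assert:
    def __init__(self, result):
        self._result = result

    def should_equal(self, expected):
        assert self._result == expected
        return self

# Usage: Arrange().given(2).when(lambda x: x * 2).should_equal(4)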


In fact, one week prior, I had added a Facade to one of my libraries at work. The Facade simplified interactions with the library, so other software engineers could more readily use its functionality. I am surprised that I didn’t think of it at the time, but APIs are a Software Engineer’s UI layer. As a result, they should be subject to a UX review!

I mentioned earlier that I have noticed that, on the whole, UX degrades as you leave the GUI layer. Two factors are responsible, in my opinion. First, the majority of UX review and work goes into the GUI layer. And this focus makes sense. The vast majority of software interaction happens through such a layer. As an aside, finding a UX specialist who can talk about both UX and API design can be difficult. I usually have a heck of a time just getting time with them to review a GUI design!

The second factor is a lack of discipline. I am not throwing stones here; the first version of my Testing API is an example of such a lack! I collected all the functionality in a single class because it was easier for me! I wanted to get the functionality together and to reduce the import tree. In hindsight this is a silly reason. And yet, it was enough to change my behavior.

So now that I’ve seen the problem, what can I do? Well, I noticed the improvements in the UX for version 2 by writing up some examples. That is to say, I used it. This is a good start, but submitting it to user testing would be a better step. After all, as the designer I was intimately familiar with the inner workings and the proper usage of the tool. But a fresh user wouldn’t be. And if there is anything I have learned developing software, it is that the user never does exactly what you expect them to.

Besides more user testing, some cross-functional education might help. This recent epiphany put me in mind of a tech talk that I hadn’t finished. You can find the youtube video here. I am hoping that revisiting the principles from the talk will continue to improve my designs!

Automation, Software Development, Work Projects

How to increase Team Velocity by 50% III

Last time, I discussed the development process and some of the end results of an automated test-generation system. I have mentioned from the beginning that it enabled my team to increase our velocity by 50%. Today, I will discuss how long it took for us to realize that increase. I will also talk about some further improvements that allowed us to reach that level.

As mentioned in the last post, we were able to achieve a 50% increase in our delivered story points per iteration. To be sure, this increase did not happen overnight. It took roughly three iterations before we learned how to use the system most effectively. It took two more iterations before we reached our new plateau.

As we used the system we began to notice several weaknesses in it. The clearest of these was the system’s rapid rate of decay. If we got even a little lazy, the system magnified that laziness, and we would then have to spend much more time just to fix it. Sort of like cleaning one’s room: some mess attracts more mess. But if you’d just put the laundry away, you wouldn’t spend a couple of extra hours on the weekend clearing it away.

In a similar fashion, we had to adopt better habits as a team to keep our system pristine and operating, one of which I mentioned before: we adopted the practice of having our requirements discussions with the Database open, and we kept it up to date with the conversation.

Now in theory, this fixation with cleanliness would only need to be maintained during active development of the data model. Once the data model development was complete, the test-generation system would no longer be as necessary. Presuming the system ran for the last time on a completely specified data model, and that all models correctly met their criteria, the auto-generation system could be effectively retired, while its final output would be kept for posterity. However, I was transferred to another team before such an event occurred, and so cannot speak from experience.

But before I left the team, I actually returned to school for my last semester. I then returned to work with the team again, this time as a full hire. When I returned, the team had expanded on the auto-generated tests. They had added new types of tests and were beginning to have trouble maintaining my original t4 architecture. Restoring that architecture was the first improvement that I made to the system when I returned.

My original design had become cluttered and bloated. This was due to intense aggregation of the test implementation and the generation-decision logic. So, as any good programmer would, I created layers of abstraction. I created a hierarchy of t4 files. Since one t4 file can refer to functions created in another, I organized the test implementation logic in one file and the test generation logic in another for each test category. Some categories were particularly large, and so I split their logic out into yet more files.

At most, I believe the nesting was three deep. But by adding this abstraction, all further extensions of the generation system were greatly eased. Additionally, while abstracting the tests, I discovered several generation errors and corrected them, further improving the test coverage of the system.

While the reorganization was taking place, one of my colleagues was making another improvement. At the time, the generation system produced something like 5000 tests. However, they ran against a networked database. As a result, running all the tests would take 2 hours or so. My colleague created a script that ran before the test suite executed. It would create a seed copy of the database on the SQL server running on the local machine. As a result, the execution time went from 2 hours to around 16 minutes! Again, this was not an improvement I made. But it did greatly increase our efficiency, and so I feel it is imperative to mention it here.

After finishing the reorganization of the system and improving our execution speed, I happened on an interesting idea. I realized that we could apply the same concept to test another aspect of our code. At the time, I was tasked with writing some tests that would confirm that our triggers were working as expected after a schema upgrade. I realized we could use a similar system to test the proper creation of the tables, keys, triggers, and constraints of the database itself.

Most of us agreed that testing this through the entity was cumbersome and unnecessary. The trigger executed after the entity was saved, and thus testing it would require a second read cycle, which is slow when using the entity. So instead we decided to use SQL queries directly.
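
Our generated tests lived in the .NET world, but the shape of a direct-SQL structural check is language-agnostic. Here is a rough Python/pyodbc sketch against SQL Server’s catalog views, with a made-up trigger name, just to show the idea:

import pyodbc

def test_order_audit_trigger_exists(connection_string):
    # Hypothetical check: the trigger should still exist after a schema upgrade.
    connection = pyodbc.connect(connection_string)
    try:
        cursor = connection.cursor()
        cursor.execute("SELECT COUNT(*) FROM sys.triggers WHERE name = ?",
                       ("trg_Order_Audit",))
        assert cursor.fetchone()[0] == 1, "trigger trg_Order_Audit is missing"
    finally:
        connection.close()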

It was at this stage that the idea struck me. I offered it to our team lead, and she again supported the idea. And this time, having learned several lessons from the last attempt, I was able to whip up a working system for the desired test in an iteration. Over the following iterations, I expanded the trigger tests to several other tables. I added both structural and key verification tests, which eased many of our worries regarding the schema upgrade process. At this point, I was transferred to another team before I could see the system extended further, so I am unable to comment on the value it added to the team in the long run.

I will leave off with just three points. First, if you are willing to put in a little extra effort, you can buy time for your team to pay down technical debt. This is done by investing in strong, meaningful tests. These tests, if properly written, will pay dividends whenever the system is changed. And the system is always changing.

Second, all test systems require maintenance. A test is only as valuable as the code that it verifies, and if that code changes, the test may also need to change. When the business function a test covers is no longer valid, the test should be removed. It is like weeding a garden (if the peonies in the garden could turn into dandelions spontaneously).

Finally, a quick excursion into a new way of doing things can pay off in many ways. It can invigorate the team, especially if the system is time- or labor-saving! Everyone likes to work less! The new way can stimulate new ideas, as it did with the trigger tests. And of course, if the time savings do pan out, your team can achieve even more in the same period of time! I hope my discussion has provided some food for thought, and that perhaps you too will consider a little automation of your own! Feel free to PM me if you are curious about any system details that I did not mention.
