As a developer, having QA you can rely on is great! They are welcome friends, helping us cultivate our precious software. But there are dark places into which even a QA cannot shine a light. When your software has no interface, what can a QA do but wish you luck? But what if there were a way for QAs to interact with otherwise UI-less software? Enter Cucumber, a tool that lets QA shine a light in dark places.
I rediscovered Cucumber while researching test automation frameworks. Cucumber is a framework for Behavior-Driven Development. After experimenting for a time, I realized Cucumber opens a whole realm of possibilities. Cucumber encourages expressing program actions in the human tongue. With a proper translation mechanism, Cucumber can act as a mediator between QA and UI-less software.
Cucumber translates the human tongue into functions through the Gherkin language. For example, a tester would define a test case like this:
Scenario: Messages are saved until the consumer arrives
Given the queues are empty
And I publish a message to the queue with ‘SomeDetails’
When Alice subscribes to the queue
Then Alice should receive a message with ‘SomeDetails’
It is fairly easy to understand the behavior being described in this scenario. Cucumber ties the keywords Given, When, and Then to functions which execute the described action, using a regex match string. The match can include free-hand parameters such as ‘SomeDetails’.
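The binding mechanism can be sketched in plain Python. This is a minimal, hypothetical illustration of how a Cucumber-style framework ties Gherkin lines to functions via regex (real implementations such as behave or cucumber-jvm provide this machinery for you; all names here are invented):

```python
import re

# Hypothetical step registry: pairs of (compiled regex, handler function),
# mimicking how Cucumber binds Given/When/Then lines to step definitions.
STEPS = []

def step(pattern):
    """Register a handler for any Gherkin line matching `pattern`."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r"I publish a message to the queue with '(.+)'")
def publish(details):
    return f"published {details}"

@step(r"(\w+) should receive a message with '(.+)'")
def receive(name, details):
    return f"{name} received {details}"

def run_step(line):
    """Find the first registered pattern matching the line and call its
    handler with the captured groups, i.e. the free-hand parameters."""
    for pattern, func in STEPS:
        match = pattern.match(line)
        if match:
            return func(*match.groups())
    raise LookupError(f"no step definition for: {line}")
```

The captured groups are what make the steps reusable: the same `publish` function serves any message detail a tester cares to write into the scenario.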
Properly designed, the Givens and Whens can be set up to be repeatable and re-composable. Doing so allows the QA to describe more complex scenarios from different combinations of the same simple behaviors. As a result, once the initial steps are available, a QA can test to their heart’s content with little developer support.
Cucumber also improves the documentation of a product. Tests document expected behaviors in a common tongue, making them accessible to every part of the company.
But great care must be taken to ensure that the composable parts function precisely as described and without side-effects. Imperfections in the design, or the aforementioned side-effects, will destroy test validity and erode trust in the test cases written using Cucumber.
Cucumber was designed for Behavior-Driven Development, enabling members of a team to describe the function of a program in a human tongue. This same feature makes it a tool for empowering QA. With careful planning and design, you can compose a terse but flexible set of instructions. These allow a QA to test projects they could never touch before! By blending the skills of developers and QA, we reap the best of all our talents. All it takes is an investment to let our friends in QA come along with us!
Software is for the user. It is not for the Software Engineers who develop it. In the end, software will succeed or fail to meet user needs, and the user is the arbiter of software’s fate. Oddly though, many software developers tend to resent their users. The users are prone to strange behaviors. Sometimes they can even come across as whiny children to jaded developers. But we must do away with this flawed way of thinking. We must act as humble stewards, gentle of heart and eager to please.
Users are the lifeblood of a software product. Without them, the product will fail. As a result, their needs are paramount and must be addressed to the best of our abilities. If this is the case, then why are developers so often frustrated by their users? Remember, we are fluent in the machine tongue. Generally speaking, users aren’t. Sure, they can use the machines to a limited degree. But they don’t understand them like we do.
Imagine you are in a foreign country. The only way to get your work done is to cajole a lumbering beast into action for you. Without understanding the beast’s language, even simple tasks could be infuriating. Users who are less familiar with software might feel the same. And remember that we specialize software to particular tasks, so users need to learn, remember, and use a variety of these ‘beasts’ to get their work done. Remember too that they are being evaluated on their ability to get work done using your software.
And so, scared, frustrated, and feeling impotent, they turn to us. They wonder why their actions did not work. They ask for strange features or work-flows. All these feelings arise because they don’t understand their tools. Sure, we could ‘educate them’. But if the way to use a tool is less than obvious, or they use it only seldom, then you can expect them to forget. Not to mention, you have to convince them to take time for training rather than working. Even we don’t feel comfortable trading training time for working time. So why should we ask that of them?
Two paths remain to us. We can tell the users they are wrong and constantly bicker with them, trying to explain the proper way. Or we can choose to listen. The way we thought was obvious is not. They need more help, because the grammar of machines is difficult. I would call this path ‘Stewardship’. We have to think of the code as belonging to the users, not to us. In so doing, it becomes clear what choices we need to make. If the code is for the user, then their needs overrule ours. If they aren’t fluent, we must make the software more approachable.
We are like gardeners. The land we tend is not our own, but still we make it bloom with brilliant flowers. We cherish the blossoms, and suffer when they are trodden upon. But the garden is not for us. Imagine if the gardener chased off the owner with a spade when he asked for a new row of lilies. The gardener would be marched off and a new one brought in to replace him. This is not an exact analogy, since users pick their software. They might just avoid a certain gardener altogether.
If instead we are gentle and approachable, we can better tend our gardens. If no one ever walks our garden paths, then we put to waste all the love and beauty the garden contains. Software without users, despite its brilliant design and delicious complexity, is dead. If we want vibrant, living software we must serve our users. We cannot lord our understanding over them, but must instead steward the code for them. With gentle hearts, we can learn their needs, and make the garden they need. In the process we may discover an even greater beauty.
What’s the point of an interview? Before you jump to an answer: do you give your candidates coding tests? Some whiteboard challenges? Have you ever wondered why? Do you think it’s the best way? Recently I’ve encountered opinions that counter the traditional wisdom of filtering candidates this way. Interviewing.io shared data showing that LinkedIn endorsements don’t correlate with a candidate’s actual skill.
Recently, respected programmers have taken to Twitter to ‘confess their programming sins’. This prompted a discussion of technical interview questions by The Outline. There is even a small industry preparing candidates for whiteboard challenges. In the end, the hubbub about whiteboard challenges comes from the fact that we are using them wrong.
We interview this way because employers need to feel comfortable about a candidate. For software, this means verifying the candidate’s skills and, to a lesser extent, their ability to communicate. This sums up the entire purpose of an interview.
But what does my answer to a whiteboard challenge actually mean? Is there such a thing as a ‘correct’ response? At a deeper level, does my answer truly reflect my skills as a developer? I say it does not. It does not reflect your skills, unless you mean the ability to communicate and reason by drawing boxes and lines.
Don’t get me wrong though. The ability to present your designs on a whiteboard is a useful skill. But it is not the skill an employer wants to check. Unfortunately, there isn’t a good way to measure some skills without seeing actual work. ‘Take-home tests’ in the interviewee’s preferred language are much more useful. Whiteboard challenges do not demonstrate the same skills.
That is not to say you should toss out whiteboard challenges. What we need is to change our thinking. Whiteboard challenges may not show an interviewee’s ‘coding’ skills, but they do show the manner in which an interviewee thinks. If you ask someone to write out an algorithm on a whiteboard, you will see how they think about the algorithm and how they remember it. If you ask them to create a new algorithm, something unique, you can learn how they explore a new problem. You’ll see what details they pay attention to. Moreover, you can introduce new requirements after they get started, which reveals how they adapt.
All these insights are useful. But they are far less tangible and measurable. As with most hard-to-measure qualities, we tend to fail at measuring them. As a result, the tools created to measure them get misused, or mis-applied to find other tidbits. It ends up like using a fork to eat soup: not very effective, and it wears out both you and your server.
So, if an interview is about revealing the skills of the interviewee, then we still need technical interview questions, and whiteboard challenges still provide some benefits. But we cannot use whiteboard challenges as a litmus test for programming skills. Instead, we should use them to pose unusual challenges which expose the way the interviewee thinks. This form can also reveal how interviewees adapt to adversity. Those insights, combined with more traditional evaluations, will help businesses find stronger, more suitable candidates, stronger not merely from a technical perspective but also from a cultural one. All it takes is using the tool for its proper purpose.
As 2016 drew to a close, there were numerous articles covering the state of the software development community. [For example here, and here] In several cases, the authors pointed out the sorry state of ‘Agile’. In fact, this trend of developers hating ‘Agile’ has been growing for quite some time. Reading those articles prompted some self-reflection. Obviously, business management does ‘Agile’ differently: for them, it is a set of prescribed practices, since that is what they understand. And of course, robbed of its vigor, this ‘Agile’ is less effective. But we software developers do it wrong sometimes as well. We may have bought into the wrong ideas.
As I wrestled with myself over Agile, a larger picture began to emerge. When I entered the workforce, I joined a company that did ‘Agile’. As I learned more about the original principles of the practice, I became a supporter. Note I say original principles. The more of a supporter I became, the more I realized my company did not quite get Agile right. We had the form, but lacked the true substance of it. Now, it wasn’t all bad; there were pockets of true agility here and there, but en masse, we missed it. As a result I started to burn out. I had only been working for half a year when I began to tire. The discontinuity between what we professed and what we actually did was a heavy burden. So far, so normal, as disillusioned developers go.
Now, my company did provide a good opportunity for discussion. Specifically, they supported a developers’ book club. And of course ‘Agile’ methodologies would be the topic of discussion from time to time. But when I would bring up some place where I saw the company missing the goal of agility, the observation was generally dismissed. There were a few who did hear, and would later come and discuss with me. They usually came to offer their own observations, to help me see what I had missed. Each of these kind souls had a common trait: they were willing to look at failure for what it was. They didn’t deny its occurrence, and they always looked for some nugget to learn from. From those leaders I learned a great deal, and I would return to them for advice during the rest of my time at the company. In my opinion, they understood the true core of agility, despite being unable to practice it because of organizational constraints.
With the advice of these leaders in my ear, I searched. And as I searched I realized that we, as software developers, need to branch out more, to find insight not just in our insular community, but also in the world at large. After all, we are humans, and the world has been analyzing humans for centuries! During one such exploratory expedition, I found the OODA loop. As described, the loop is this:
Observe : Review your facts and information
Orient : Is something off? How so? Frame your thoughts and discussion
Decide : Based on your thoughts, and your facts, what should you do? Make it a small step.
Act : Act out your decision.
Repeat : Repeat the process ad nauseam, until you have reached your goal/destination
To any supporter of the principles of agile software development, these steps ought to look familiar. It is the same core of iteration with small steps. The very same principle found outside of software development for the same purpose: reach your goal faster.
But here is where Business influenced the ‘Agile’ practice in a negative sense. Review the loop: it never mentions the idea that all actions must lead directly to your goal. In fact, it appears to assume that some steps won’t be optimal, just like the original principles for agile software development. But in a business context, such a step can prove costly. If you take a step that doesn’t lead to results, then for a business the cost of that step is lost. So naturally business wants to avoid lossy steps and ensure that only the right ones are taken. So we end up with strong Project/Product Managers, and non-autonomous engineers. And from a business stance, this is excellent. It is safe, and much more certain. And explaining it to any higher-up is infinitely easier.
It is also stagnant, impotent, and ineffective. By the very act of achieving safety, the methodology loses its potency. The principles for agile software development imply, expect, and I would go so far as to say require, risk. The original agile allows, and expects, some of the steps to be imperfect. In fact, the first step is supposed to be just a guess. But it is time-boxed so that we can learn from it while the ideas are still fresh in our minds! If we don’t risk anything in a step, how can we gain anything? In agile, there are no ‘unsuccessful’ steps. That is not blind optimism or new-age BS. Instead, it is a deep understanding of what we are buying with each step. With each step, we are either buying customer approval for the developed feature, or buying knowledge of our customers. And this isn’t just any knowledge. It is a personal and contextualized knowledge that our customers provide back to us. We pay to learn in small, highly contextualized, ‘as close to the real thing as possible’ bits of knowledge.
But before I move on, there is one other detail in which Business Agile and original agile differ. In the original, we do not assume we know what the customer wants. We expect to find it through experimentation and missteps. We start with inaccuracy and move towards accuracy. In Business Agile, the Product and Project Managers ‘know’ what the customer wants. We start with ‘accuracy’ and have nowhere further to go. The iteration becomes a simple and convenient block of man-hours, which allows them to estimate the time it will take to complete the feature we ‘know’ the customer wants.
It would seem to me that Business has forgotten a value given to us in childhood. After all, don’t we spend nearly the first two decades of our lives in learning? In trading time for knowledge? Hasn’t our society decided that it is valuable to ensure everyone has some common understanding? I think Business has fallen into its current state of ‘Agile’ because it misunderstands what it is buying. It is not buying software, at least not directly. The original agile aims to provide strategic knowledge. What if we shifted our thinking about agile? If, instead of purchasing a static product, we saw ourselves acquiring and applying strategic knowledge, we could reinvigorate the practices that have been robbed of their efficacy.
I was working on a personal project recently when a realization dawned on me. User Experience Design, also known as UX design, and software design collide more frequently than we tend to acknowledge. And not only in the User Interface layer.
Before I get too far: when I talk about UX, I am referring to the experience the user has while attempting to use the device, object, or code. I think this image does an excellent job of describing good UX concisely.
It’s pretty easy to tell what UX is like with a Graphical User Interface, or GUI. After all, this is the part everyone touches. If a website is snappy and the layout makes sense, that is good UX. If it is clear how to do the operation you want, without needing to consult the magic talking paperclip, that is good UX. But it seems that once you go below the GUI layer, the lessons on good UX vanish.
I was working on a fluent testing API for Python when I realized it. In version 1, I had all the functionality for this API bound up in a single class. Sure, it limited the import tree and made it easy for me to develop. For version 2, I decided to pull the functions into separate classes. And while I was writing out some example cases, I realized that this simple code change resulted in an improved user experience!
You see, by pulling the various functions into different classes, I allowed the IDE to create better prompts. These prompts now guide a user through the proper usage pattern of my API. Since there are fewer functions to choose from at each point, it is clearer how to proceed. The user no longer has to consult a pile of documentation. This is a simple example, but it got me thinking.
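The idea can be sketched with a toy fluent API. This is not my actual testing API; it is a hypothetical before/after showing how returning a distinct class from each step narrows what the IDE can offer next:

```python
# Version 1 (hypothetical): every step lives on one class, so the IDE
# offers every method at every point, including nonsensical orderings.
class AssertAll:
    def given(self, value): ...
    def when(self, action): ...
    def then(self, check): ...

# Version 2: each step returns a small class exposing only the next
# legal step, so IDE completion walks the user through the pattern.
class Expectation:
    def __init__(self, result):
        self.result = result

    def then(self, check):
        # Apply the user's check to the computed result.
        return check(self.result)

class Action:
    def __init__(self, value):
        self.value = value

    def when(self, action):
        # Run the action on the starting value; only `then` comes next.
        return Expectation(action(self.value))

def given(value):
    return Action(value)
```

A chained call such as `given(2).when(lambda x: x * 2).then(lambda r: r == 4)` reads like the scenario it tests, and at each dot the IDE can only suggest the one method that makes sense.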
In fact, one week prior, I had added a Facade to one of my libraries at work. The Facade simplified interactions with the library, so other software engineers could more readily use its functionality. I am surprised that I didn’t think of it at the time, but APIs are a software engineer’s UI layer. As a result, they should be subject to a UX review!
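For readers unfamiliar with the pattern, a Facade is just a single obvious entry point in front of several collaborating classes. The subsystem below is invented for illustration; the shape is what matters:

```python
# Hypothetical subsystem classes a caller would otherwise wire up by hand.
class Parser:
    def parse(self, raw):
        return raw.strip().split(",")

class Validator:
    def validate(self, fields):
        # Drop empty fields; a real validator would do far more.
        return [f for f in fields if f]

class Loader:
    def load(self, fields):
        return {"loaded": len(fields)}

# The facade hides the wiring and exposes one clear operation,
# which is precisely the UX improvement: fewer choices, obvious path.
class ImportFacade:
    def __init__(self):
        self._parser = Parser()
        self._validator = Validator()
        self._loader = Loader()

    def import_record(self, raw):
        fields = self._validator.validate(self._parser.parse(raw))
        return self._loader.load(fields)
```

A caller writes `ImportFacade().import_record("a,b,")` instead of learning three classes and their correct ordering.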
I mentioned earlier that, on the whole, UX degrades as you leave the GUI layer. Two factors are responsible, in my opinion. First, the majority of UX review and work goes into the GUI layer. And this focus makes sense: the vast majority of software interaction happens through such a layer. As an aside, finding a UX specialist who can talk about both UX and API design can be difficult. I usually have a heck of a time even getting time with one to review a GUI design!
The second factor is a lack of discipline. I am not throwing stones here; the first version of my testing API is an example of such a lack! I collected all the functionality in a single class because it was easier for me! I wanted to keep the functionality together and to reduce the import tree. In hindsight this is a silly reason. And yet, it was enough to change my behavior.
So now that I’ve seen the problem, what can I do? Well, I noticed the improvements in the UX of version 2 by writing up some examples. That is to say, I used it. This is a good start, but submitting it to user testing would be a better step. After all, as the designer I was intimately familiar with the inner workings and the proper usage of the tool. A fresh user wouldn’t be. And if there is anything I have learned developing software, it is that the user never does exactly what you expect them to.
Besides more user testing, some cross-functional education might help. This recent epiphany put me in mind of a tech talk that I hadn’t finished. You can find the YouTube video here. I am hoping that revisiting the principles from the talk will continue to improve my designs!
Last time, I discussed the development process and some of the end results of an automated test-generation system. I have mentioned from the beginning that it enabled my team to increase our velocity by 50%. Today, I will discuss how long it took for us to realize that increase, along with some further improvements that allowed us to reach that level.
As mentioned in the last post, we were able to achieve a 50% increase in our delivered story points per iteration. To be sure, this increase did not happen overnight. It took roughly three iterations before we learned how to use the system most effectively, and two more before we reached our new plateau.
As we used the system we began to notice several weaknesses in it. The clearest of these was the system’s rapid rate of decay. If we got even a little lazy, the system magnified that laziness, and we would then have to spend much more time just to fix it. Sort of like cleaning one’s room: some mess attracts more mess. But if you’d just put the laundry away, you wouldn’t spend a couple of extra hours on the weekend clearing it all away.
In a similar fashion, we had to adopt better habits as a team to keep our system pristine and operating, one of which I mentioned before: we held our requirements discussions with the database open, and kept it up to date with the conversation.
Now in theory, this fixation with cleanliness would only need to be maintained during active development of the data model. Once the data model was complete, the test-generation system would no longer be as necessary. Presuming the system ran one last time on a completely specified data model, and that all models correctly met their criteria, the auto-generation system could be effectively retired, while its final output would be kept for posterity. However, I was transferred to another team before such an event occurred, and so cannot speak from experience.
But before I left the team, I actually returned to school for my last semester. I then returned to work with the team again, this time as a full-hire. While I was away, the team had expanded on the auto-generated tests. They had added new types of tests and were beginning to have trouble maintaining my original T4 architecture. This was the first improvement that I made to the system when I returned.
My original design had become cluttered and bloated, due to the intense aggregation of the test-implementation and generation-decision logic. So, as any good programmer would, I created layers of abstraction: a hierarchy of T4 files. Since you can refer to functions created in other T4 files, I organized the test-implementation logic in one file and the test-generation logic in another for each test category. Some categories were particularly large, and so I split their logic out into yet more files.
At most, I believe the nesting was three deep. But by adding this abstraction, all further extensions of the generation system were greatly eased. Additionally, while abstracting the tests, I discovered several generation errors and corrected them, further improving the test coverage of the system.
While the reorganization was taking place, one of my colleagues was making another improvement. At the time, the generation system produced something like 5000 tests. However, they ran against a network database, so running all the tests would take two hours or so. My colleague created a script that ran before the test-suite executed. It would create a seed copy of the database on the SQL Server instance running on the local machine. As a result, the execution time went from two hours to around 16 minutes! Again, this was not an improvement I made. But it greatly increased our efficiency, and so I feel it is imperative to mention it here.
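The shape of that script is simple, even if the details were SQL Server specific. As a stand-in, here is the idea with a file-based database: copy a pre-built seed next to the test run so every query hits local disk instead of the network. The function name and paths are invented:

```python
import os
import shutil

def make_local_copy(seed_path, work_dir):
    """Copy a seed database file into the test run's working directory.

    The real script did the equivalent against a local SQL Server
    instance; the principle is the same: pay the copy cost once up
    front, then run thousands of tests against the local copy.
    """
    local_path = os.path.join(work_dir, "test_copy.db")
    shutil.copyfile(seed_path, local_path)
    return local_path
```

Because the copy is recreated before each suite run, tests can mutate it freely without corrupting the shared seed.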
After finishing the reorganization of the system and improving our execution speed, I happened on an interesting idea: we could apply the same concept to test another aspect of our code. At the time, I was tasked with writing tests to confirm that our triggers were working as expected after a schema upgrade. I realized we could use a similar system to test the proper creation of the tables, keys, triggers, and constraints of the database itself.
Most of us agreed that testing this through the entity was cumbersome and unnecessary. The trigger executed after the entity was saved, and thus testing it would require a second read cycle, which is slow through the entity layer. So instead we decided to use SQL queries directly.
It was at this stage that the idea struck me. I offered it to our team lead, and she again supported it. And this time, having learned several lessons from the last round, I was able to whip out a working system for the desired tests within an iteration. Over the following iterations, I expanded the trigger tests to several other tables. I added both structural and key verification tests, which eased many of our worries regarding the schema upgrade process. At that point, I was transferred to another team. I saw the system successfully extended, but I am unable to comment on the value it added to the team in the long run.
I will leave off with just three points. First, if you are willing to put in a little extra effort, you can buy time for your team to pay down technical debt. This is done by investing in strong, meaningful tests. These tests, if properly written, will pay dividends whenever the system is changed. And the system is always changing.
Secondly, all test systems require maintenance. A test is only as valuable as the code it verifies, and if that code changes, the test may also need to change. When the business function a test covers is no longer valid, the test should be removed. It is like weeding a garden (if the peonies in the garden could spontaneously turn into dandelions).
Finally, a quick excursion into a new way of doing things can pay off in many ways. It can invigorate the team, especially if the system saves time or labor. Everyone likes to work less! The new way can stimulate new ideas, as it did with the trigger tests. And of course, if the time-savings pan out, your team can achieve even more in the same period of time! I hope my discussion has provided some food for thought, and that perhaps you too will consider a little automation of your own! Feel free to PM me if you are curious about any system details that I did not mention.
If you missed the first post in the series you can find it here!
Last time, I opened with the hook of increasing a team’s velocity by 50%. I introduced an automation project that would generate integration tests for us. Before that system, the testers, myself included, had trouble keeping pace with the rest of the team. Worse still, we found out later that some of the entities we released had bugs in them! But I had an idea. I assembled a rough outline and a demonstration for the team lead. After some discussion she gave it the green light, and one month to set up the necessary scaffolding while she got the team ahead of schedule.
The core of this automation system was T4 templates. For those unfamiliar, T4 is a file-generation framework created by Microsoft. By writing .NET code in a .tt file, one can control the contents of the generated text files. This includes generating C# code, among other file types.
We used these templates to generate partial test classes containing the predefined test cases. Not every entity would get the same kinds of tests. For example, some entities had doubles that could not be negative; others had a string that had to be populated. There were even different edge cases supported within the same data type. A database containing various flags dictated which tests to generate.
To review, the database housed two kinds of tables: the Main table and the entity-specific tables. The Main table controlled whether tests were generated, and linked to the entity tables. The entity tables housed information on the properties to test, as well as the boundary conditions and other requirements for testing.
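The decision logic the templates ran over those tables can be illustrated without T4 itself (T4 embeds C# in .tt files; this Python sketch uses invented table shapes and test names purely to show the flag-driven idea):

```python
# Hypothetical stand-ins for the two kinds of tables: the Main table of
# generate flags, and per-entity tables describing properties to test.
MAIN = {"Order": True, "Customer": False}
PROPERTIES = {
    "Order": [
        {"name": "Total", "type": "double", "min": 0},
        {"name": "Reference", "type": "string", "required": True},
    ],
}

def generate_tests(entity):
    """Emit one test-stub name per flagged requirement, the way the T4
    templates emitted test methods into partial test classes."""
    if not MAIN.get(entity):
        return []  # Main table flag off: generate nothing for this entity.
    tests = []
    for prop in PROPERTIES.get(entity, []):
        if prop.get("min") is not None:
            tests.append(f"test_{entity}_{prop['name']}_not_below_{prop['min']}")
        if prop.get("required"):
            tests.append(f"test_{entity}_{prop['name']}_is_populated")
    return tests
```

The real system emitted full C# test bodies rather than names, but the shape is the same: the database rows drive which tests exist at all.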
One challenge I discovered while scaffolding was ensuring that parent-child relationships were honored. I couldn’t just assign a random ID to the parentID field; the program database would reject that with a constraint violation. I discussed and brainstormed this problem with the senior tester. We finally decided to create a helper class that could act as a factory for the tested entities.
The factory would assign all the required fields of an entity with appropriate values. For the most part, these were randomly generated numbers or strings. The helper’s factory functions were called to create the entity-under-test’s parents. Following this logic, the helper would create the entire entity tree. This worked at any level of child, leaving our database in an appropriate state.
In database testing there are four basic operations to test: Create, Read, Update, and Delete. To support these cases, one must control when an entity is saved to the database. To help here, we added alternative factory functions to the helper, selected by parameter flags.
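The helper's two jobs, building the parent chain and controlling the save, can be sketched together. The schema, entity names, and in-memory "database" below are all hypothetical; the real helper targeted our entity layer and SQL Server:

```python
import itertools

# Hypothetical schema: each entity names its parent entity, if any.
PARENTS = {"OrderLine": "Order", "Order": "Customer", "Customer": None}

_ids = itertools.count(1)
SAVED = []  # stand-in for the test database

def create(entity, save=True):
    """Build `entity` with a valid ID, recursively creating its whole
    parent chain first so foreign-key constraints are honored.

    The `save` flag mirrors the alternative factory functions used for
    the Create/Read/Update/Delete cases: sometimes a test needs the
    object in memory but not yet persisted.
    """
    record = {"type": entity, "id": next(_ids)}
    parent = PARENTS.get(entity)
    if parent:
        # Parents are saved (or not) the same way as the child.
        record["parent_id"] = create(parent, save=save)["id"]
    if save:
        SAVED.append(record)
    return record
```

Because parents are created depth-first, the "database" always receives a Customer before its Order and an Order before its OrderLine, which is exactly the insertion order a real foreign-key constraint demands.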
Up to this point we wrote the helper functions manually. This became difficult to maintain, and so we automated it as well, again using T4 templates. But unlike the test generators, we could not honor the generate flags in the database: there were cases where an entity was not ready to test, but a child or parent needed it for their own tests. Instead, we opted to generate factory functions for every listed entity.
By the time I had finished this level of scaffolding, it was time to bring the team on board the project. We delegated the work by test case: ‘not equal null’ tests to this developer, ‘less than the specified max length’ to another, and so on.
The size of the system, in comparison to its scaffold, exploded during this time. I spent much less time coding the system, and much more helping and directing the other developers. I sought guidance from the team lead often, to ensure that I was not ruffling feathers or otherwise harming my effectiveness as a leader.
With grace and patience, she guided me on better practices. She offered ideas for how to help the developers understand. Many of her ideas made it into the loose documentation that I sent to the developers for reference. But the developers weren’t the only ones who had to understand the system. I also had to find a way to communicate its value and usage to the Product Manager.
I was blessed with an understanding PM. She allowed me to walk her through the basics of the system and what it meant. In the end, she decided that it was the perfect place for her to define the acceptance criteria (AC) for any new or modified entities. Once she understood the structure of the tables, she happily filled in the requirements. Moreover, she was able to provide them in greater detail than we’d been able to achieve before.
Instead of some loose requirements, we had detailed expectations, or, in other cases, a description of the desired end-state of a modified entity. This greatly reduced confusion and resulted in far fewer follow-up meetings with the PM. This alone would have increased our team’s speed. The test database gave our team, PM included, a common medium to communicate in, and it provided enough detail for all parties to understand!
Back to the practical use of the database: I crafted template SQL queries that allowed the PM to add new entities or change existing ones. With her existing skills, she easily found the information she wanted. These tools, including the database, allowed the team to accommodate the PM’s availability. Some weeks she would be out with customers; in others she was free most of the day for discussion. With the test database, she could tell us what she wanted without having to be present for every one of our meetings!
After a month of expeditious work by our team, we had the core of our automation system ready to use! The developers returned to new development. The testers moved to round out the automation and to maintain it. Our first process change was adding another step to our storyboard: the developers would now generate the core integration tests for an entity and run them. If those tests didn’t pass, they would fix their entity before it ever went into QA.
This extra step saved the testers a great deal of time, since the developers would catch the common bugs themselves. It reduced the back-and-forth between Development and QA immensely! With the extra time, the testers could focus on maintaining the system. We could also pursue exploratory testing!
One drawback in this system was that every time a developer wanted to run their entity tests, they had to change a flag in the database. This flag change affected everyone, which led to some confusion in the first week. My first iteration of improvement added an override list on each user’s box, which allowed a developer to test without modifying the database.
On the topic of maintenance, our automation system was great at handling standard cases. But it was somewhat ornery about special cases, and especially so for one-offs. We had to add a couple of tables to identify specific special relationships, so that we could test them specifically without disturbing the existing structure.
Further, we had to carefully manage access to this database to protect it against accidental corruption, which meant allowing the developers to read, but not write to, the database. Both of these requirements were non-ideal. But in hindsight, we should have expected them, considering the tools used to create the system.
But the required maintenance did encourage the team to adopt a better development process. Instead of immediately going to work on new entities, we would start with a thorough review of the specifications. We adopted the habit of always having the test Database open during these meetings, and we kept it up to date with the discussion. By the end of the meeting, the database accurately reflected our expectations, and the developer could immediately and confidently begin their work.
The automation system was beneficial for all. Though it did not completely free the testers from test maintenance, it did free up our time for exploratory testing. The rapid feedback saved the developers time. For the PM, it provided a fertile communication medium. Altogether, the team was able to achieve a 50% increase in our velocity for a given iteration. It was a good way to end an internship.
In the next post, the last in this series, I’ll cover what had happened by the time I returned as a full-time developer. This will include exact quantification of the team’s new stable velocity. I will cover the improvements we made to the system, and even a scion system based on the same idea!
How would you like to increase your team’s velocity by 50%? Well, I was able to accomplish such a feat as the capstone to my last internship. It took a lot of help from the team I was on, and a lot of support from the Team Lead. But we implemented a system that increased our velocity by 50% in roughly a month.
Our tale begins with the database access team, working with Entity Framework. I had just arrived, and was learning C#, integration test practices, and other new technologies as fast as I could. I was shadowing the team’s current tester to learn the ropes, and contributing where I could. Even with two testers, the developers often had to wait for us to catch up, sometimes up to an iteration after them. Moreover, released entities were later discovered to have inadequate test coverage, or undesirable behaviors.
The idea hit me one day while I was literally copying and pasting tests, and then replacing class names. I admit this was a poor habit, but I was endeavoring to keep up with the team. What I realized was that a great number of our tests were of the same type: this kind of property gets that battery of tests, that kind gets a different set. Thanks to recent lessons in T4 templates, I realized we could generate the correct battery of tests for each entity in a programmatic way.
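To give a feel for the technique (this is a minimal sketch, not the team’s actual template; the class name, property list, and test body are all invented for illustration), a T4 template interleaves C# control logic with literal output text, emitting one test per property:

```t4
<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Hypothetical input: in the real system, the property list
    // came from the test Database rather than being hard-coded.
    var properties = new[] { "Name", "CreatedDate" };
#>
// <auto-generated>Do not edit; regenerate from the template.</auto-generated>
public class CustomerEntityTests
{
<# foreach (var property in properties) { #>
    [TestMethod]
    public void <#= property #>_RoundTrips_Through_The_Database()
    {
        // Standard battery: save an entity, reload it, and
        // assert that the <#= property #> value survived.
    }
<# } #>
}
```

Running the template produces an ordinary C# test class, so the generated tests run under the same test runner as the hand-written ones.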
I tried to compile a set of known test scenarios. Then I enumerated the entities we’d produced or modified in the last PSI that matched the set I wrote. These data sets provided good coverage. But I needed a place to store that information for all the entities, and the templates had to be able to read it. I experimented a bit and found that, with some C# code, the templates were able to read a database.
Each database table would represent a single entity, and its rows would represent the properties. They could even store boundary conditions! A master table would then control which entities would have tests generated. The master table could also capture the relationships between entities.
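A schema along these lines could look like the following sketch. The post doesn’t specify the real table layout, so these names and columns are purely illustrative of the structure just described:

```sql
-- One row per property of an entity, including boundary conditions.
CREATE TABLE EntityProperties (
    EntityName   NVARCHAR(100) NOT NULL,
    PropertyName NVARCHAR(100) NOT NULL,
    PropertyType NVARCHAR(50)  NOT NULL,
    MinValue     NVARCHAR(50)  NULL,   -- boundary conditions, stored as text
    MaxValue     NVARCHAR(50)  NULL
);

-- Master table: which entities get tests, and how entities relate.
CREATE TABLE EntityMaster (
    EntityName    NVARCHAR(100) NOT NULL PRIMARY KEY,
    GenerateTests BIT           NOT NULL DEFAULT 0,
    RelatedEntity NVARCHAR(100) NULL    -- optional relationship to another entity
);
```

The T4 templates would query these tables at generation time and emit the matching battery of tests for each flagged entity.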
But this was too big a task for me. I needed help, so I brought my idea and some of my findings to my team lead. She called a small meeting between myself, the other tester, and the architect. I explained my idea, and showed them some of what I’d done.
After a lot of discussion, the team lead decided that it was a worthwhile project. She then set about getting the team ahead of schedule; this way we could put a month into the new project and not fall behind. Meanwhile she had me set up the core of the system and a scaffold, to make sure we could delegate work. Additionally, we spent a lot of time polishing the format of the generated tests. By the end of that month we were ready to start making our team faster!
With a moment of inspiration, and the support of the team lead, my plan to put in place a system to generate our tests was ready to begin. In my next post I will discuss more of the implementation details. The various discoveries that occurred while the team worked on the project will also come up. In the end, we easily achieved a 50% increase in our team velocity!