Software Development, Work Projects

Going where no QA has gone before!

As a developer, having QA you can rely on is great! They are welcome friends who help us cultivate our precious software. But there are dark places into which even a QA cannot shine a light. When your software has no interface, what can a QA do but wish you luck? But what if there were a way for QAs to interact with otherwise UI-less software? Enter Cucumber, a tool that allows QA to shine a light in dark places.

I rediscovered Cucumber while researching test automation frameworks. Cucumber is a framework for Behavior-Driven Development. After experimenting for a time, I realized Cucumber opens up a whole realm of possibilities. Cucumber encourages the expression of program actions in the human tongue. With a proper translation mechanism, Cucumber can act as a mediator between QA and the UI-less software.

Cucumber translates the human tongue into functions through the Gherkin language. For example, a tester would define a test case like this: 

Scenario: Messages are saved until the consumer arrives
Given the queues are empty
And I publish a message to the queue with ‘SomeDetails’
When Alice subscribes to the queue
Then Alice should receive a message with ‘SomeDetails’

It is fairly easy to understand the behavior being described in this scenario. Cucumber ties the keywords Given, When, and Then to functions which execute the described action, using a regex match string. The match can include free-form parameters such as 'SomeDetails'.
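For a concrete picture of that binding, here is a minimal sketch of step definitions using Python's behave library. The original post does not say which binding language the team used, and the in-memory queue below is a hypothetical stand-in:

from behave import given, when, then, use_step_matcher

use_step_matcher("re")  # bind steps to functions via regex match strings

@given("the queues are empty")
def clear_queues(context):
    context.queue = []                       # hypothetical in-memory queue
    context.received = {}

@given(r"I publish a message to the queue with '(?P<details>.+)'")
def publish_message(context, details):
    context.queue.append(details)            # captures the free-form parameter

@when(r"(?P<consumer>\w+) subscribes to the queue")
def subscribe(context, consumer):
    context.received[consumer] = list(context.queue)

@then(r"(?P<consumer>\w+) should receive a message with '(?P<details>.+)'")
def verify_message(context, consumer, details):
    assert details in context.received[consumer]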

Properly designed, the Givens and Whens can be set up to be repeatable and composable. Doing so allows the QA to describe more complex scenarios with different combinations of the same simple behaviors, as in the scenario below. As a result, once the initial steps are available, a QA can test to their heart's content with little developer support.
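For instance, the same steps could be recombined into a new scenario with no additional developer work (the details here are hypothetical):

Scenario: Every published message is delivered to the subscriber
Given the queues are empty
And I publish a message to the queue with 'FirstDetails'
And I publish a message to the queue with 'SecondDetails'
When Alice subscribes to the queue
Then Alice should receive a message with 'FirstDetails'
And Alice should receive a message with 'SecondDetails'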

Cucumber also improves the documentation of a product. Tests document expected behaviors in a common tongue, which makes them available to all parts of the company.

But great care must be taken to ensure that the composable parts function precisely as described and without side effects. Imperfections in the design, or the aforementioned side effects, will destroy test validity and erode trust in the test cases written using Cucumber.

Cucumber was designed to improve test-driven development by enabling members of a team to describe the function of a program in a human tongue. This same feature makes it a tool for empowering QA. Given careful planning and design, you can compose a terse but flexible set of instructions that allows a QA to test projects they could never touch before! By blending the skills of a developer and a QA, we can reap the best of all our talents. All it takes is an investment to allow our friends in QA to come with us!

Software Development, Work Projects

Pretty Good Privacy


Shortly after starting with my new company, I began work on a back-end infrastructure project. To be specific, I am working on an inter-process communication (hereafter IPC) layer. As the project developed, we realized the need to protect our data in transit, because we are working with Protected Health Information (hereafter PHI). It would be a disaster if the data became compromised.

To combat this, we are encrypting the data before it is sent through the IPC layer. There are many fine encryption schemes available, but many are difficult to implement. Moreover, it is not enough to just encrypt the data. One cannot keep using the same key for all applications without risk. Given enough messages using the same key, and enough time, someone could learn it. They would then be free to read all our messages and the PHI they might contain.

Our brilliant architect suggested that we use Pretty Good Privacy or PGP for short. It is an easy to implement encryption scheme that combines many desirable features. PGP uses a new random key for each message to encrypt the outbound data. This key is itself encrypted by a known private key, and is sent along with the encrypted message.

Since the message key is random every time, it is difficult to guess the private key. As a result, an attacker cannot decrypt the public key, and the message is reasonably safe.

To help explain this, I have crafted a simple example in Python, using a Vigenere Cipher. You can find the entire example project on my GitHub repo, here. The core of the example is as follows:

def encodePGP(self, plainMsg):
    # generate a random key for this message
    randKey = self._generateRandomKey()
    print("> Internal Random Key: " + randKey)

    # encrypt the input with the random key
    cryptographer = Crypto()
    encryptedMsg = cryptographer.encode(randKey, plainMsg)

    # encrypt the random key with the private key to form the public key
    pubKey = cryptographer.encode(self.privateKey, randKey)

    # return the concatenated encrypted key and encrypted input
    return pubKey + "_" + encryptedMsg

For those who prefer, a visual representation of this is available on the Wikipedia page for PGP. The algorithm is as I stated before:

  1. Generate a Random Key for the message
  2. Encrypt the message with the Random key
  3. Encrypt the Random Key with the Private Key, to form the public key
  4. Concatenate the Encrypted Message and Public Key

The code for Decoding is as follows:

def decodePGP(self, concatMsg):
    # parse out the encrypted pub key and the encrypted message
    parsed = concatMsg.split("_")
    pubKey = parsed[0]
    encryptedMsg = parsed[1]

    # decrypt the random key with the private key
    cryptographer = Crypto()
    randKey = cryptographer.decode(self.privateKey, pubKey)

    # decrypt the message with the random key
    decryptedMsg = cryptographer.decode(randKey, encryptedMsg)

    # return the plain message
    return decryptedMsg

In plain terms the decryption steps are:

  1. Parse the input message to get the Public Key and the Encrypted Message
  2. Decrypt the Public key with the Private key, to form the original Random Key
  3. Use the Random Key to Decrypt the Encrypted Message
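Tying the two halves together, a usage sketch might look like the following. The class name and constructor here are my own stand-ins, not the names from the example project:

# hypothetical wrapper class exposing encodePGP/decodePGP and holding privateKey
pgp = PGPCipher(privateKey="WISCONSIN")
packet = pgp.encodePGP("HelloAlice")
print(packet)                 # encrypted random key + "_" + encrypted message
print(pgp.decodePGP(packet))  # recovers the original "HelloAlice"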

Ridiculously simple, right?! However, this method can be rendered vulnerable by using a weak encryption method, such as the Vigenere Cipher, as I have. Even so, it should be clear that PGP-with-Vigenere is stronger than Vigenere alone.
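Since the Crypto class itself is not shown above, here is a rough sketch of what a Vigenere-based implementation could look like. This is my illustration only; the implementation in the linked repository may differ:

import string

class Crypto:
    # toy Vigenere cipher: shift each letter by the matching key letter
    ALPHABET = string.ascii_uppercase

    def encode(self, key, message):
        return self._shift(key, message, +1)

    def decode(self, key, message):
        return self._shift(key, message, -1)

    def _shift(self, key, message, direction):
        result = []
        for i, ch in enumerate(message.upper()):
            if ch not in self.ALPHABET:
                result.append(ch)  # pass non-letters through unchanged
                continue
            offset = self.ALPHABET.index(key[i % len(key)].upper())
            index = (self.ALPHABET.index(ch) + direction * offset) % 26
            result.append(self.ALPHABET[index])
        return "".join(result)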

As you can see, with a strong encryption method, PGP adds a significant increase in security, at the cost of a limited increase in complexity. Naturally, I will be adding this to my toolkit for future projects! I hope this explanation and example have been helpful, though I admit the diagram on Wikipedia provides a good outline of the PGP scheme. For anyone interested, you can download the example and the Vigenere Cipher implementation here.

Innovation Fridays, Software Development

Innovation Fridays – Learn C#

Internally, my company has experienced a push to get 'Innovation Fridays' started up again. Every month, developers are given a 4-hour period to pursue various projects or learning. For this block of time, the Automated Test Scripting team (ATS for short) had been interested in learning how to write and use C#. Given my familiarity with the team, and my knowledge of the language, they asked if I would be willing to lead a workshop on it. Naturally I couldn't refuse such a request from friends, and moreover, I knew this would be a chance to strengthen my core knowledge of C#. As everyone knows, to teach something is to learn it far better than when you first were taught.

I figured that my workshop plan, while good, had some chinks in its armor, and that it might also make interesting reading. So, to gather some additional insight and in the spirit of sharing information, I decided to post a series on my plans and activities relating to the workshop. If you think of anything, please feel free to make suggestions in the comments.

Now, before I start, I want to express that I have a good deal of experience in running workshops on a variety of topics, including some rather technical ones. I first got into doing this in high school, where I helped teach Tae Kwon Do to kindergartners while working on my black belt and after. Further, while in college I was blessed with an A in calculus-based physics, and so was asked by an enrichment program to lead a physics workshop, which I did every semester for perhaps 3 years, until I graduated.

I very much enjoyed the work, and was blessed with some help paying for school because of it. Among the most enjoyable parts was creating questions and problems for the students to work on, especially in mechanical physics. However, I found that for the majority, the workshop was not totally ideal, since it had to keep pace with the class. And sadly, many students could not keep up with the class in the first place. This is perhaps the single biggest obstacle that I am working to overcome with my present workshop plan.

Overall, I am planning to follow the recommendations made by Josh Kaufman in his 20 Hours TED talk, which I have mentioned before. Naturally, I think the four steps are very well suited to learning C#, so my workshops will generally focus on one, or perhaps at most two, of these steps at a time. With any luck, I will run these for the next 6 months or so, and have our learning concluded by that time. For the first workshop, which occurred last Friday, most of our time was spent on step 2, Learning Enough to Self-Correct, and on deciding on project infrastructure. For this discussion I will talk more about the project infrastructure, and will perhaps elaborate more on the learning portion at a later time.

Now, getting back to the obstacle I mentioned earlier. Everyone on the ATS team is at a different level of skill, which means each will need a different level of instruction and will be able to accomplish a different amount in the same time. So, to support the needs of the high-performing team members while maintaining approachability for those who need more guidance, I elected to pursue project-based learning as the workshop model. On this topic, I put the team size and project type to a vote. It was decided that, for the most part, everyone would pursue their own project.

As a result, we have 6 teams for 7 people. The projects they are covering are somewhat less varied, as there are 4 different projects at the moment. They are as follows:

  1. A Sudoku Game application
  2. A Music Player
  3. A Rich-Text-Format Text Editor
  4. An internal Installation tool for our Company’s Software

Of course, each of these will have different technical challenges, but each also has some elements of similarity. For instance, many of these will need file access, and each will need some component of UI. As a result, these projects cover a breadth of topics, making them good candidates for learning C#, while still maintaining some common ground for group discussions. To make things easier on myself, I have had all the teams set up the following organization for their projects, so that we can work with each other more easily.

Every team project will have at least 4 separate code projects (the places where code is dropped in a Visual Studio solution). These are the Infrastructure, the UI layer, and the tests associated with both. By separating the Infrastructure from the UI, I am hoping to provide the necessary grounds for following better design principles. Of course, with how small these projects will be, it may feel somewhat silly to have all this extra separation. But since the purpose of this workshop is not simply to write the project and get it done, but instead to learn how to write C# well, I think it is a fair trade-off.

If we are going to be working so that our project structures are similar, it would make sense to ensure we are all using the same tools. Sadly, since most of our participants are using personal laptops, they don't have easy access to the company WiFi or the source-control servers, which means we needed another place to store our code so the projects could be shared between teams when the time came. For this I have selected GitHub, as I am familiar with it from personal side projects, and because it would be free for all involved. All I would have to do is add the ATS team as contributors to the selected repo.

For those of you who are also familiar with GitHub, you should recognize that Git is very command-line heavy without a GUI. So, to make the transition easier for the team, I had them all download GitKraken, which is my preferred GUI for Git. So far, this has seemed to make the process much easier. But I will know more when we next meet, about a month from now.

Lastly, no code project would be complete without an IDE. For this we turned to Microsoft Visual Studio Community edition. We have a mix of 2013 and 2015, since that is what the team members were able to find. I am hoping that this will not cause problems later on when the projects are viewed by other teams, but we shall cross that bridge when we come to it.

So, during our first meeting, we covered all of the project set-up minutiae, and some of the basic principles that I am hoping to pass on. I will share those in my next Innovation Friday post, which will likely include the various links I used and other references that I find very useful! In the meantime, I would love to hear any suggestions or ideas you have! After all, I only had my experience and my gut to go on, and I am certain I might have overlooked some great opportunities!

* – The C# logo was created by DevStickers

* – The GitKraken Logo was borrowed from the GitKraken website.

//Edits//
11JUN2016 – Spell-checking and Minor Grammar/Readability refactor
Software Development

Development Tool: Atom

A few years ago, just before I left college, a friend introduced me to a funny little program called Atom. It was billed as a 'hackable' text editor. At the time I thought it was an interesting little toy, and tinkered with it for a while. But since I didn't find any real use for it at the time, I was satisfied with just tinkering. Over time, as classes became more demanding, I left it behind. That is, until I found a convincing use case for just such a program!

Recently, I have picked up Atom again for a personal project with some church buddies of mine. We are working with an Arduino and several external components. Since there are three developers and two or three operating systems between us, I wanted to get a product that we could all use with ease on any system. I settled on Atom after becoming frustrated with the existing Arduino IDE.

Since our project had three developers, we split the responsibilities into three primary areas and organized our project files accordingly. However, the Arduino IDE does not support a nested architecture, and instead needs all the files to be present at the top level. Not wanting to lose the project organization, I started dabbling with Atom and found its support to be far superior to the Arduino IDE for this project.

Of course, nothing is perfect, and Atom does not ship with built-in support for the Arduino. Thankfully, there are a couple of packages which provide the necessary components: the PlatformIO and language-arduino packages. PlatformIO did require that we adjust our project architecture so that the compiler could locate all our files, but this is a very small change, and it allowed us to continue more or less unfazed. Furthermore, the PlatformIO package supports boards other than the Uno, which our project was using.

After playing with Atom for a week or so, purely for my Arduino project, I became more familiar with the various features, and I was able to get more comfortable with the shortcuts, among other things. After a while, I switched back to one of my Python projects, and had a little shell shock. At present, I am using PyCharm, which has served me well, and has the added benefit that one of its default settings allows the Microsoft Visual Studio shortcuts to be used. It is quite polished, and provides solid support for most anything a developer could want to do in Python. But it's not very easy to customize, at least not compared to Atom.

On the flip side, Atom doesn't ship with support for running Python scripts from the IDE, though it does include some language highlighting. Here again, the package system comes to save the day. With the Script package, Atom gains the ability to execute Python and other interpreted languages, like Julia, and can display the feedback via an in-IDE terminal window! Furthermore, with Atom, the error highlighting is fairly descriptive, and will show the developer the breaks for the current document. So, by switching between the various files in your project, you can see the pertinent errors in each file, without having to browse through an exhaustive list covering every file all together. Which, coming from a C++ project, is pretty great!

For a little icing on the cake, Atom also has a fair bit of Git integration (I should hope so, considering it comes from GitHub). The project view nicely highlights new and changed files in the current Git changeset, and the default settings reduce clutter in the project view by leaving out the various .git files. This is a pleasant feature, which I have enjoyed for my Arduino project.

Overall, Atom is a very impressive program. It can be as simple or as advanced as you need, and can change with ease to suit your needs through its robust package manager! With its wide community support base, I look forward to enjoying Atom for many years to come. For anyone interested in learning more, please check out Atom here!

* – Image borrowed from this source.

Software Development

Development Tool: Jupyter

Recently, one of my colleagues presented a prototype of a new feature that my team was going to implement. To be certain, the new feature was fascinating, both for its algorithmic complexity and its significance to our users. However, I was admittedly more caught by the tool he had used to develop and present the prototype. With this tool he was able to set up a development environment and test data, and demonstrate live, working code for us with ease! This tool was Jupyter.

*Jupyter Logo

I can best describe Jupyter as a web-hosted development and testing environment. The Jupyter application is installed on a server, which can then expose multiple notebooks wherein the development can be done. More specifically, these notebooks are where the demonstration data is housed and the presentations are run. Moreover, each notebook can be hooked up to a different compiler/interpreter to allow development to proceed in multiple languages!

This is profoundly useful, because it allows a prototype to be developed in the easiest language to program in, without having to pay for the overhead of a presentation layer! Thus demonstrating a feature to the PM/PO becomes much easier! Furthermore, when you are presenting to the developers, they can make adjustments to the code which you are presenting and they can witness the change’s effects in real-time!

A Jupyter notebook's structure is very similar, if not identical, to that of a Mathematica notebook. In Mathematica, the user creates a notebook and enters an equation, or series of equations, into an entry. The computation is then carried out for that entry, and the user can proceed to use the results in the next entry. This includes plotting as well as some algorithmic analysis, which is especially useful for complex physics simulations.

In Jupyter, the user enters a series of functions, function calls, or classes into an entry, which can then be employed for later use by future entries. One can execute an algorithm in one step, and plot it in the next, or go on to use the results of the algorithm in another step.

Each entry's results are calculated based only on the present conditions, so changes to entry 1 might affect entry 5's results, if entry 5 used entry 1's results in its calculation. But as a benefit, if a mistake was made in entry N, one need only correct that entry and then re-run the calculations for the entries which follow. Both Mathematica and Jupyter share this behavior.
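As a rough illustration, a notebook session might contain entries like the following. The data and statistics here are made up for the example, and Python is used, though any supported kernel would do:

# Entry 1: generate some sample data
import numpy as np
samples = np.random.normal(loc=10.0, scale=2.0, size=1000)

# Entry 2: run a calculation on the data produced by entry 1
mean, spread = samples.mean(), samples.std()

# Entry 3: plot the results inline, using the values from entries 1 and 2;
# correcting entry 1 only requires re-running entries 2 and 3
import matplotlib.pyplot as plt
plt.hist(samples, bins=30)
plt.title("mean = %.2f, std = %.2f" % (mean, spread))
plt.show()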

In a corporate setting, Jupyter would excel in several use cases, including PM/PO and developer demonstrations. In a non-co-located, or even a co-located, environment, a Jupyter notebook could be set up to allow many users to interact with prototypes in real time, letting developers review how the prototype functions even while they are developing the code in a different location or language.

Alternatively, it could be used to let the PM visualize what a new feature's output will look like given some sample data, without having to ask the developers to run the simulation! This would allow the PM to quickly assess the accuracy of the algorithm. Likewise, a QA could use the notebook to actively investigate a customer-reported error in the algorithm, so long as they have the relevant data and access to an updated algorithm. This way the QA would not need the entire user project, and all the sensitive information it might contain, which could make reproducing bugs much easier!

Finally, as was the case with my colleague's work, Jupyter can be used as a rapid-prototyping environment. Since the language compiler/interpreter is set with the notebook, and the presentation layer is already handled, the developer is much freer to pursue the real interest: the product algorithm. Since the language is not locked by previous work, the developer is free to choose whatever language they feel best suits the project. They could feasibly borrow data from other projects, or simply generate it within the notebook!

Overall, Jupyter looks to be a very effective tool for sharing the development of algorithms, or other calculation-intensive features, in an accessible way with multiple parties within the organization. It provides a usable interface to developers and non-developers alike, in an approachable fashion. It provides the ability to modify the experimental data to give users a more detailed understanding of the prototype. And finally, if it were used to hold the existing algorithms, it might also allow the PMs to simulate the program well enough to trace bugs related to the customer data or to the company's algorithm, rather than wasting significant time in back-and-forth as developers seek to understand the meaning behind the data and why a particular output is wrong.

For those interested in knowing more, you can find Jupyter at jupyter.org! Thank you for your time, and I hope that you find this tool to be useful in your endeavors!

* The image shown is the Jupyter logo found on the jupyter.org home page.
