By: Matt Chiang 
As many of you know, automation is a great skill and asset to have. However, building automation is only part of the solution to checking code for problems. Many other supporting technologies go into putting a usable solution together, so that the automation doesn't have to run on your local machine and consume the time you would otherwise spend doing manual or ad hoc testing.
Here are the challenges this hackathon will help us learn about and figure out.

  1. How to launch the automation w/o an IDE or from your local machine. Headless automation, maybe?
  2. How to write reusable tests using object-oriented programming principles.
  3. How to build data-driven automation tests.
  4. How to make QA transparent so that others can run the tests.
  5. Learn mobile automation.
  6. Learn how to control elements using drag and click.
  7. How to increase our velocity to run 33 tests in a matter of a couple of minutes.
  8. How to build 1 test that will test 33 different data sets using IE, Chrome, and FF w/o having to write 99 separate tests. If we apply this to mobile (Android using Chrome), that would mean 132 tests total. Writing one test keeps the automation maintainable.
  9. How to build a solution so that one can see historical trends for each test, each time it ran, and which browser it ran on. In today's agile environment of deploying often and testing often, these tests might be run once or ten times a day. If they are run after new code is deployed and the test is triggered, how do we check for failures w/o having to watch the tests run? This could be a web app that gathers all the data; the automation sends the test status and any other info to the web app to parse and display. At the end of the day, one person can go view the results.
  10. For the advanced requirements, build tools that facilitate the solution. This could be another web app that allows business analysts, project managers, your kids, or your parents to modify the data and run the tests.
  11. For the reporting web app, add filtering or searching so that you can easily narrow in on a specific test or result you want to see.

Here’s the site/web app we are going to build a solution around.
This version of the web app is for the Belgian market. If you look at the CountryURL column in the spreadsheet, it will give you the starting point for the web app in every market.

The app itself is a single-page app that lets people customize a lotion that fits their skin type. The levels of chemicals in each formulation can make you look younger, or have no effect. It’s an app that I have already automated, w/ a solution around it to do regression testing on desktop web and mobile. If you build the automation so that it’s reusable and generic enough, you should be able to use the same code on web and mobile and only have to change one line of code. Also, if you build your code to be reusable, you only have to write 1 test and scale it to verify 33 different variations across the world markets.

The attached xls spreadsheet is the data you will use to drive the tests and to verify pass/fail. The data supplies the values for each option and scale that the code needs to emulate, so that you can verify the correct assessment code w/ the correct formulation based on what the user wants.

The age column is the numeric value entered on the page asking for age.
The gender, skintype, aha, firmness, radiance, texture, dayfragrance, and nightfragrance data controls which button needs to be clicked.

Ethnicity is a link.

Location is used for text search for the location page.

Pollution, environment, sensitivity, agespots, wrinkles, wrinklesnose, wrinklesforehead, and pore are on a 1 to 100 scale to control the slider on the various pages.

The OriginalAssessment is the code that will be generated based on the data that is used to run the tests.

You’ll also see a DateChanged and a NewAssessment column.

These are used to store your data when your test completes. If the code isn’t the same, the new code needs to be logged into the NewAssessment column, and DateChanged is the current timestamp for when the test detected the change. If the code is the same, then nothing needs to be stored in the spreadsheet. This solves the problem of basic test reporting.

Here’s a list of the requirements:

Basic Requirements

  1. Use the Page Object Model in the framework (PageFactory and @FindBy annotations)
  2. Use collections such as Lists for related WebElements
  3. Load data provider info into a Map
  4. Multi-language support
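As a sketch of the first two basic requirements, a page object for one screen of the assessment might look like the following. The class name, locators, and methods here are hypothetical (the real app’s element IDs will differ); what matters is the pattern: @FindBy fields initialized through PageFactory, with a List collecting a group of related WebElements.

```java
import java.util.List;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// Hypothetical page object for the age/skin-type screen.
// Locators are illustrative, not taken from the real app.
public class ProfilePage {

    @FindBy(id = "age")
    private WebElement ageInput;

    // A collection of related elements: all the skin-type buttons at once.
    @FindBy(css = ".skin-type button")
    private List<WebElement> skinTypeButtons;

    public ProfilePage(WebDriver driver) {
        PageFactory.initElements(driver, this); // wires up the @FindBy fields
    }

    public void enterAge(String age) {
        ageInput.sendKeys(age);
    }

    public void chooseSkinType(String label) {
        for (WebElement button : skinTypeButtons) {
            if (button.getText().equalsIgnoreCase(label)) {
                button.click();
                return;
            }
        }
        throw new IllegalArgumentException("No skin-type button: " + label);
    }
}
```

Because the test only calls `chooseSkinType("dry")`, a locator change later means editing one @FindBy line instead of every test.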

Functionality Requirements

  1. Ability to automate locations other than the app’s GPS auto-locate
  2. Implement drag and drop functionality for the slider dials using percentages
  3. Use just one test class that dynamically handles different test data
  4. Be able to automate several types of moisturizer choices and handle any relevant modal windows that may appear

Optional Requirements:

  1. Run spell checking (Agreement TOS, etc.)
  2. Save assessment summary to a document

Advanced Requirements:

  1. Integration w/ a build system (Ant, Jenkins, etc.)
  2. Multi-threaded automation (the automation code itself, use of Docker, Grid, etc.)
  3. Reporting of historical tests and general trends for pass/fail (use of built-in test reporting is fine, but it doesn’t really show results from test to test w/o manually opening the report)
  4. Web-based data provider (RESTful web service, database, etc.)
  5. Web interface so that any end user can manipulate the test data and run the tests. Make this point/click/run.
  6. Mobile automation on Android (remote, local, multithreaded testing)

For those who want to focus only on the automation portion, here’s the list of things to help you break down the challenge:

Challenge #1
Begin an automation framework for testing the Single Page App (SPA) linked above by automating the acceptance of the license agreement. Automate the input of a name, age, gender, ethnicity, and a non-GPS-specified location as well.

Challenge #2
Automate the selection dial to move in percentages: automate the first dial, entitled “Chemical Exposure,” to move to 70% (High Exposure).
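One way to approach the dial movement is to separate the percentage math from the Selenium interaction. The sketch below only computes the pixel offset to drag by; the Actions call in the trailing comment is where it would plug in. The 400px track width is an invented example value that would really come from WebElement#getSize().

```java
// Sketch: translating a target percentage (0-100) into a horizontal pixel
// offset for Selenium's Actions#dragAndDropBy. The slider geometry would
// come from getSize()/getLocation() on the track element; here it is passed in.
public class SliderMath {
    /**
     * @param trackWidth width of the slider track in pixels
     * @param currentPct where the handle currently sits (0-100)
     * @param targetPct  where we want it (0-100), e.g. 70 for High Exposure
     * @return signed x-offset to drag the handle by
     */
    public static int dragOffset(int trackWidth, double currentPct, double targetPct) {
        return (int) Math.round(trackWidth * (targetPct - currentPct) / 100.0);
    }

    public static void main(String[] args) {
        // A 400px track with the handle at 0%: moving to 70% means +280px.
        System.out.println(dragOffset(400, 0, 70));
        // Moving back down from 70% to 30% is a negative offset.
        System.out.println(dragOffset(400, 70, 30));
    }
}
// In the test itself the offset would feed something like:
//   new Actions(driver).dragAndDropBy(sliderHandle, offset, 0).perform();
```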

Challenge #3
Complete the automation of the entire app, through to the end of the assessment. Be sure to handle all the possible modal pop-ups shown for the Day Moisturizer and Night Moisturizer dial selections.

Challenge #4
Modify the automation to run successfully on 2 different mobile device types (e.g., iPad and Samsung Galaxy S4) using Chrome Mobile Emulation through the ChromeDriver capabilities.
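ChromeDriver exposes mobile emulation as an experimental option built from a plain Map. The sketch below only constructs that Map (which is what runs here); the commented lines show how it would be handed to ChromeOptions. Device names such as “iPad” must match Chrome DevTools’ built-in device list.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of ChromeDriver's "mobileEmulation" experimental option.
public class MobileEmulationConfig {
    public static Map<String, Object> forDevice(String deviceName) {
        Map<String, Object> mobileEmulation = new HashMap<>();
        // The device name must match Chrome DevTools' built-in device list.
        mobileEmulation.put("deviceName", deviceName);
        return mobileEmulation;
    }

    public static void main(String[] args) {
        Map<String, Object> emulation = forDevice("iPad");
        System.out.println(emulation);
        // In the real test setup this plugs into ChromeOptions:
        // ChromeOptions options = new ChromeOptions();
        // options.setExperimentalOption("mobileEmulation", emulation);
        // WebDriver driver = new ChromeDriver(options);
    }
}
```

Reading the device name from the data provider keeps the same test code running on both device types.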

Challenge #5
Modify the automation to utilize the first line of test data given through the data provider, an Excel Spreadsheet (provided). Incorporate the use of collections (lists, maps, etc.) to store the data provider information for the tests.
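One hedged sketch of the collections piece: convert each spreadsheet row into a Map keyed by column header, so the single test class can ask for row.get("age") instead of remembering column indexes. The hard-coded arrays below stand in for what a library such as Apache POI would read out of the .xls file.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: turning spreadsheet rows into column-name -> value Maps.
public class DataProviderMapper {
    public static List<Map<String, String>> toRowMaps(String[] headers, String[][] rows) {
        List<Map<String, String>> result = new ArrayList<>();
        for (String[] row : rows) {
            Map<String, String> rowMap = new LinkedHashMap<>();
            for (int i = 0; i < headers.length; i++) {
                rowMap.put(headers[i], row[i]); // one entry per spreadsheet column
            }
            result.add(rowMap);
        }
        return result;
    }

    public static void main(String[] args) {
        // Stand-in for the first rows of the provided .xls data.
        String[] headers = {"age", "gender", "skintype"};
        String[][] rows = {{"34", "female", "dry"}, {"52", "male", "oily"}};
        List<Map<String, String>> data = toRowMaps(headers, rows);
        System.out.println(data.get(0).get("age"));
    }
}
```

A TestNG @DataProvider can then hand each Map to the single test method, one invocation per row.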

Challenge #6
Enable the framework to run language-independently, with the ability to run the tests successfully in any language specified through the data provider.

Challenge #7
Integrate the automated framework into a build manager (Ant, Maven, Gradle) and run the tests on a Continuous Integration Server such as Jenkins or Hudson.
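For the Maven route, the piece that ties the build manager to the tests is the Surefire plugin. A minimal, illustrative pom.xml fragment might look like the following (the version number is a placeholder, and testng.xml is assumed to sit at the project root):

```xml
<!-- Illustrative fragment: lets "mvn clean test" (the goal a CI job like
     Jenkins would invoke) run the TestNG suite. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.22.2</version>
      <configuration>
        <suiteXmlFiles>
          <suiteXmlFile>testng.xml</suiteXmlFile>
        </suiteXmlFiles>
      </configuration>
    </plugin>
  </plugins>
</build>
```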

Challenge #8
Enable multi-threaded execution of the tests listed in the data provider. Run the tests in multiple browsers using Selenium Grid. (Optional/Extra Credit: Run the tests in Selenium Grid using multiple Docker containers.)
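A sketch of what the multi-threaded run might look like at the TestNG level: a suite file that runs the same (hypothetical) test class once per browser, in parallel threads. The browser parameter is an assumption; the test would read it and request the matching RemoteWebDriver session from the Grid hub.

```xml
<!-- Illustrative testng.xml: same test class, two browsers, in parallel. -->
<suite name="CrossBrowser" parallel="tests" thread-count="2">
  <test name="Chrome run">
    <parameter name="browser" value="chrome"/>
    <classes><class name="com.example.AssessmentTest"/></classes>
  </test>
  <test name="Firefox run">
    <parameter name="browser" value="firefox"/>
    <classes><class name="com.example.AssessmentTest"/></classes>
  </test>
</suite>
```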

We are going to do this as teams that include developers and QA engineers. You are welcome to invite family, friends, and classmates to be on your team. You might also want team members from different disciplines to help w/ different areas of this solution: a front-end dev can help your team build the front end of the reporting app, a C#/Java/PHP/etc. dev can help build the controller for the web app, and a DBA can help you build the models where the data is stored and optimize search queries. They don’t have to primarily focus on automation. However, it’s up to your team. The team members can be anywhere in the world to build out the solution. There are plenty of online tools to help w/ collaboration, communication, and task management.

Here’s a list of the free ones that can be used. Some can help you build your agile board of tasks, and the boards stick around for a while. I tried out both by creating a test board and adding items, and even after a week I could still see my board w/ all the tasks. Others can help you collaborate w/ other team members so they can help you debug.
Github is a great place to store your code.

You can also use other tools. Use whatever will make your team successful.

To get everyone started, you can go to this URL and start building your teams and planning and breaking down the work.

We will announce when the hackathon is posted on our blog.

We will run the hackathon for 4 weeks, and judging will commence in mid-December. We’ll have judges grade the solutions based on “ease of use,” “amount of requirements solved,” “most unique solution,” and “fastest to run/get results.”

The winning team will have recognition on STG’s site forever, and each team will walk away w/ a digital portfolio you can show others.

By: Bill Witt
There is high demand for Quality Assurance Engineers and Software Development Engineers in Test who are not only able to create automated tests using Selenium WebDriver, but who can also integrate Selenium testing into a continuous integration server like Jenkins.
Using tools like Jenkins provides the ability to create and maintain automated test runs and produce test result reports.  Mastering Jenkins can be a challenge for those who may be unfamiliar with it and its many configuration options and plugins, but the challenges presented by Jenkins can be even more discouraging when trying to set it up in a Linux environment.
Problem Background
Having been familiar with configuring WebDriver automation tests to run on an instance of Jenkins within a Windows environment, I had hoped that configuring Jenkins might be as straightforward in a Linux environment, in this case Ubuntu.  Upon beginning this endeavor, I quickly found that setting up a Linux-based Jenkins configuration was quite a bit more difficult, especially in a non-GUI Linux server environment.
The following article has been written with the intention of assisting others who may have come across the same difficulties I had encountered, or those who may have been tasked with setting up a Jenkins automated test project who may not be sure where to begin.
After several days of trial and error attempting to run automated Firefox and Chrome WebDriver tests from Jenkins in a Linux (Ubuntu) environment, I was finally able to implement a solution that works.  This solution will work in Linux environments that do and do not have a GUI, both desktop and server versions.
In the following sections of this article I will outline the various steps required to successfully configure the Linux environment and Continuous Integration Server, in this case Jenkins, to run WebDriver automated tests using either the Firefox or Chrome browsers.
Linux Setup
The first thing to do is to prepare your Linux environment by adding the required Chrome and Firefox drivers.  Although it is somewhat obvious that chromedriver will need to be set up separately, due to recent changes in how WebDriver works it will also be necessary to separately configure Firefox’s driver, geckodriver.  Run the following commands in the Linux terminal, as outlined in each of the driver configuration sections.
Chromedriver Setup:

  1. sudo apt-get -y install unzip
  2. wget -N -P ~/Downloads
  3. unzip ~/Downloads/ -d ~/Downloads
  4. chmod 777 ~/Downloads/chromedriver
  5. sudo mv -f ~/Downloads/chromedriver /usr/bin/chromedriver
  6. sudo apt-get install libxi6 libgconf-2-4

Geckodriver (Firefox) Setup:

  1. sudo apt-get -y install unzip (skip this step if done previously)
  2. wget -N -P ~/Downloads
  3. tar -xvzf ~/Downloads/geckodriver* -C ~/Downloads (for a .tar.gz download)
  4. unzip ~/Downloads/geckodriver* -d ~/Downloads (for a .zip download)
  5. chmod 777 ~/Downloads/geckodriver
  6. sudo mv -f ~/Downloads/geckodriver /usr/bin/geckodriver

Xvfb Setup:

  1. sudo apt-get -y install xvfb
  2. sudo chmod 777 /run/user/

Git Setup:

  1. sudo apt-get -y install git-all

Jenkins Setup

After completing the installation of the Chrome and Gecko (Firefox) drivers, you will need to configure Jenkins to properly run the WebDriver tests in a semi-headless browser type of execution.  The following configuration sections will provide the steps for installing the required Jenkins plugins and setting up the Jenkins build environment.  This article assumes that Jenkins has already been installed, but if this is not the case, an additional section has been added to the end of this article that provides the steps needed to install Jenkins.
Jenkins Plugin Installations:

  1. Open the plugin manager by navigating to: Jenkins > Manage Jenkins > Manage Plugins, then click the “Available” tab
  2. Select the following plugins from the alphabetized list, or by typing the plugin name in the “Filter” textbox, then click the associated checkbox and click the “Install without restart” button (repeat for each plugin):
      a. GitHub Organization Folder
      b. Pipeline
      c. Xvfb
      d. Safe Restart (this is optional, but recommended)

Jenkins Global Tool Configuration:

  1. Open the Jenkins Global Tool Configuration window by navigating to: Jenkins > Manage Jenkins > Global Tool Configuration
  2. Scroll to the “JDK” section and do the following:
      a. Click the “Add JDK” button
      b. For the name textbox, just enter: JDK 8 (use whatever Java version you happen to be using; this could be 7, 8, 9, etc.)
      c. Click the “Install automatically” checkbox
      d. Select the “Install from” option, then select the appropriate version
      e. Click the “I agree…” checkbox. You may have to enter Java/Oracle username credentials, which can be set up for free on their website
  3. Scroll to the “Git” section and do the following:
      a. Click the “Add Git” button
      b. Name the Git installation Default or Git
      c. In the “Path to Git executable” textbox, enter: /usr/bin/git
  4. Scroll to the “Maven” section and do the following:
      a. Click the “Add Maven” button
      b. Just name the installation Maven or Maven <version number>
      c. Click the “Install automatically” checkbox
      d. Select the version installed (usually the latest version available) in the dropdown box under “Install from Apache”
  5. Scroll to the “Xvfb installation” section and do the following:
      a. Click the “Add Xvfb installation” button
      b. Name the installation simply Xvfb
      c. Click the checkbox by “Install automatically”
  6. Click the “Save” button

Jenkins System Configuration:

  1. Open the Jenkins System Configuration by navigating to: Jenkins > Manage Jenkins > Configure System
  2. Scroll to the “Global properties” section
  3. Click the checkbox by “Environment variables”
  4. Add a new environment variable by clicking the “Add” button
  5. Name the variable: XDG_RUNTIME_DIR
  6. For the value, enter: /run/user/1001
  7. Scroll to the “Jenkins Location” section
  8. In the “Jenkins URL” textbox, enter the Jenkins server’s URL
  9. Click the “Save” button

Create the Job for the Automation Project:

  1. To create a new project for your automated tests, do the following:
      a. From the main Jenkins vertical menu, select: “New Item”
      b. Name the project in the “Enter an item name” textbox
      c. Select the “Freestyle project” option
      d. Click the “OK” button

Configure Jenkins Automated Test Project/Job:

  1. Select the newly created project, then click “Configure” in the left vertical menu
  2. Scroll to the “Source Code Management” section
      a. Click the “Git” radio button
      b. Set the repository URL in the associated textbox (e.g., <username>/<projectname>)
      c. If the GitHub repo is Private, rather than Public, add the necessary credentials
      d. Scroll to the “Branches to build” section
      e. In the “Branch Specifier” textbox enter: */master
  3. Scroll to the “Build Environment” section
      a. Click the checkbox by the “Start Xvfb before the build, and shut it down after.” option
      b. Click the “Advanced…” button
      c. Select the Xvfb installation previously created from the drop-down menu
  4. Scroll to the “Build” section
      a. Click the “Add build step” button
      b. Select the “Invoke top-level Maven targets” option
      c. Select the previously named Maven installation from the dropdown menu
      d. In the “Goals” textbox enter: clean test
      e. Click the “Advanced…” button
      f. In the “POM” textbox enter: $WORKSPACE/pom.xml
  5. Scroll to the “Post-build Actions” section
      a. Click the “Add post-build action” button
      b. Select the “Publish TestNG Results” option
      c. In the “TestNG XML report pattern” textbox enter: **/testng-results.xml
  6. Click the “Save” button

Jenkins Linux Installation
In the case that Jenkins has not yet been installed in whichever Linux environment you may be setting up your tests in (preferably Ubuntu), these are the steps for installing the Jenkins continuous integration server, run from the terminal window:

  1. wget -q -O - | sudo apt-key add -
  2. sudo sh -c 'echo deb binary/ > /etc/apt/sources.list.d/jenkins.list'
  3. sudo apt-get update
  4. sudo apt-get install jenkins
  5. Edit the /etc/default/jenkins config file to replace the line "HTTP_PORT=8080" with "HTTP_PORT=8081"

Sample WebDriver Setup for Automation Project
Initializing the Java WebDriver instance for Chrome or Firefox in your project can be accomplished as follows:

// Chrome
public static String chromeDriverPath = "/usr/bin/chromedriver";
System.setProperty("webdriver.chrome.driver", chromeDriverPath);
WebDriver driver = new ChromeDriver();

// Firefox
public static String geckoDriverPath = "/usr/bin/geckodriver";
System.setProperty("webdriver.gecko.driver", geckoDriverPath);
WebDriver driver = new FirefoxDriver();
Additional Email Configuration Options
Another essential set of features that should be configured in your Jenkins setup includes collecting test result data into a test report, displaying test result data, and enabling the ability for Jenkins to email the test result report.
TestNG Emailable Report Setup
Before test reports can be sent out after a test run, the settings for both the native Jenkins email publisher and the “Email Extension Plugin” will need to be configured.  This configuration will require that the SMTP Server setting be setup from within Jenkins in two separate locations.
Once the Email notification settings have been configured, the next step will be to install and configure the “TestNG Results Plugin” and, optionally, the “Test Results Analyzer Plugin.”  The TestNG Results Plugin will compile all the test run and result data into an emailable report that can be automatically sent off once a test run has completed.  The Test Results Analyzer Plugin is not required, but it adds an additional presentation of the test result data.
Install Plugins

  1. Open the plugin manager by navigating to: Jenkins > Manage Jenkins > Manage Plugins, then click the “Available” tab
  2. Select the following plugins from the alphabetized list, or by typing the plugin name in the “Filter” textbox, then click the associated checkbox and click the “Install without restart” button (repeat for each plugin):
      a. Email Extension Plugin
      b. Test Results Analyzer
      c. TestNG Results

System Email Notification Configuration

  1. Open the Jenkins System Configuration by navigating to: Jenkins > Manage Jenkins > Configure System
  2. Scroll to the “Extended E-mail Notification” section
  3. Click the “Advanced…” button
  4. Fill in the SMTP server details as follows (using the STG Consulting Gmail as an example):
      a. In the “SMTP server” textbox, enter:
      b. In the “Default user E-mail suffix” textbox, enter:
      c. In the Advanced section, check the box by “Use SMTP Authentication”
      d. Enter the STG Gmail username (email address)
      e. Enter the STG Gmail password
      f. Click the “Use SSL” checkbox
      g. In the “SMTP port” textbox, enter: 465
      h. Change the Default Content Type option to “HTML (text/html)”
  5. Scroll to the “E-mail Notification” section
  6. Click the “Advanced…” button
  7. Fill in the SMTP server details with the same information that was added previously in step 4
  8. Click the “Save” button

TestNG Results Configuration

  1. Navigate to the Job/Project page for the test automation
  2. Select the “Configure” option
  3. Click the “Post-build Actions” tab at the top, or scroll down to that section
  4. Configure the TestNG Results Plugin by doing the following:
      a. Click the “Add post-build action” button
      b. Select the “Publish TestNG Results” option
      c. In the “TestNG XML report pattern” textbox enter: **/testng-results.xml
      d. For additional, optional TestNG Results options, click the “Advanced…” button
  5. Configure the Email Notifications Plugin by doing the following:
      a. Click the “Add post-build action” button again
      b. Select the “Editable Email Notification” option
      c. Change the Content Type option to “HTML (text/html)”
      d. Click the “Advanced Settings…” button
      e. In the “Triggers” section, in the “Failures – Any” box, do the following:
          i. Click the red ‘X’ on the “Developers” box to remove it
          ii. Click the “Add” button in the “Send To” portion of the box
          iii. Select the “Recipients List” option
          iv. Click the “Advanced…” button
          v. In the “Recipient List” textbox, enter the email(s) that the test reports will be sent to
          vi. If desired, a reply-to email address may be entered in the “Reply-To List” textbox
          vii. Change the Content Type option to “HTML (text/html)”
          viii. In the “Attachments” textbox, enter: **/emailable-report.html, **/Run Tests.html, or **/*.html
      f. Click the “Add Trigger” button
      g. Select the “Success” trigger option
      h. Follow the same sub-steps provided in step 5e for the “Success” trigger
  6. Click the “Save” button

Test Results Analyzer Configuration
Once the Test Results Analyzer Plugin has been installed, most of its functionality will be automatic.  On the automation project’s page, a new menu item will be added under the new “TestNG Results” option, named “Test Results Analyzer” in the left side menu.  Additional test result analysis can be found by clicking this menu option.  For further configuration, do the following:

  1. Open the Jenkins System Configuration by navigating to: Jenkins > Manage Jenkins > Configure System
  2. Scroll down to the “Test Results Analyzer” section
  3. Configure the settings as desired within this section, such as specifying the graph and chart types and thresholds

I hope these steps were helpful in understanding how to successfully configure the Linux environment and a Continuous Integration Server, like Jenkins, to run automated tests using Selenium WebDriver on either the Firefox or Chrome browser.
For more information regarding Selenium testing, Jenkins integration, or automated testing, check out our other blog posts.
To learn how you can work with an STG Consultant for your software technology needs, contact us; we’d love to hear from you.

Posted in QA

Author: Matt Chiang

What is automation:

In the most simplistic terms, automation is some type of script that controls an application to perform some type of action.  However, there are flaws in this ideology.  Automation probably got its start years ago w/ someone in QA or development who was doing the same thing over and over again.  They eventually got tired of wasting time, so they wrote a script to do the same action for them automatically so they could do something else.  This probably isn’t 100% true, but that is how I got started writing automation 17 years ago.

Why automation:

Automation can be simple or complex.  The need for automation increases as technology advances.  Things that used to take days, weeks, or months to validate will now take minutes or hours.  Machine power is cheap.  Machines never complain about doing the same thing over and over and will always be 100% focused on their task 24 hours a day, 365 days a year.  Machine power makes it cheaper to produce a product.  A look into history will reveal that just about every industry employs some type of automated process, so why not software?  Testing software is expensive, tedious, monotonous, and error-prone if you are not aware of everything that is being tested.  Automation can be built to do the same type of tests over and over and validate every possible point of failure.  Automation is also great at capturing business rules for a piece of functionality that one would otherwise have to memorize and exercise.  Most people will end up forgetting a business rule after a few weeks, but automation won’t.

Progression and need of automation:

In the past, most QA engineers would only need to find some type of macro recorder or record-and-playback tool to get started with automation.  This was great in the days of waterfall development, where application changes were slow to develop.  The recorded script could execute for months before a new one would need to be recorded to test a change in a function.  Nowadays, w/ every company pushing Agile development, changes are frequent and releases are fast.  Code changes daily.  Automation is needed more and more for QA engineers to keep up, especially when it comes time to do regression testing.  Most agile development sprints are 1, 2, or 3 weeks, and at the end of the sprint the code should be, and needs to be, releasable.  How can one test all the new features that were developed, and make sure previous code wasn’t broken by the changes, w/o automation?  It’s nearly impossible.

Good and bad automation:

I’ll start w/ bad automation.  In the past, code was developed using procedures: basically a really long script of steps for a function to work.  The automation script would also be procedural, following a functional test script until it got to the end.  This makes for very brittle test scripts, and the QA engineer would always have to babysit the tests.  This practice does not work today in the world of object-oriented development, where developers use shared libraries/classes to build a function.  That gives developers greater efficiency when building functionality, and it also creates the need for object-oriented automation.  If a QA engineer writes 100 procedural scripts for 100 different functional tests and one step happens to change, they would have to change it in 100 different places.  A good example of this is a simple login.  If there are 100 tests that need the user to be logged in and the username text field changes its label to User Name, then someone has to update the scripts.  If the field is referenced procedurally in 100 scripts, there are 100 locations to change.  If the automation was written in an object-oriented pattern, then only 1 line would have to be changed for all 100 functional tests to work again.

Good automation is built so that the code is written once and can be applied to multiple situations.  Good automation depends on data to drive the tests, not lines of automation code.  Good automation should be easy to run and can be started by anyone, w/o a QA engineer babysitting the scripts.  Good automation can be run at any time of the day or night and give QA, BAs, PMs, management, or anyone else interested a test report indicating every failure or area that needs some attention.  Good automation should not fail w/ every build or simple change in a way that requires someone to modify the script every time the test is run.

Good automation should also be threaded so multiple tests can run simultaneously to increase the efficiency of regression testing.  Regression tests that would take one QA engineer 100 hours to run can be reduced to 10 hours or less if 10 tests can be run at the same time.  The return on investment in automation would be recognized w/ only a few test runs.

Do I need manual testing w/ all this automation?

Yes!  Manual testing will always be needed.  However, the time spent by manual testers is now focused on the areas that need attention, instead of randomly focusing on some area of the application and missing other critical areas where the code changed.  This gives QA engineers a focused testing task rather than an ad hoc one.  Automation should be looked at as a radar system for QA/Development.  The automation is only going to catch what you want it to catch.  It will miss things if the engineer who wrote it missed validation points.  If it’s written correctly, a failed automated test should give the QA engineer an indication of where code was changed, so they can look at why it was changed and what else needs to be tested.  A failed automation test is not always a bug, but it could lead to one.  Like a radar system, every blip on the screen is not an enemy target, but it could lead to one.

What is missing in automation:

There are so many automation frameworks out there nowadays, helping test just about everything, that it would probably take a lifetime to master every one of them along w/ the new ones.  So what is missing?  The biggest piece of the automation puzzle is reporting.  Every framework will give the QA engineer a report at the end of each test run.  However, it only applies to that one run.  What if someone wants to look at historical trends?  Other than manually tracking down a test report or log, there isn’t much available.  However, great QA engineers will come up w/ great solutions.  Like developers building an application for customers, QA engineers build solutions to capture test results.  A good reporting application will make great automation awesome.  It takes away the complexity for various group members in interpreting the reports or logs.  The reporting application should be accessible to anyone on the team or in management.  It gives them 24/7 access to the test results and ways to query for certain tests or to look at failures.  It gives them a step-by-step analysis of where the test failed and a screenshot of what the application looked like when it failed.  W/ a reporting application, automation tests can run 24/7, 365 days a year.  Failed tests will still need to be analyzed manually, but the QA engineer does not need to babysit the tests anymore.

Final thoughts:

As with anything in technology, the implementation has to be correct.  Poor implementation of a technology will only lead to frustration, refactoring of code, or complete rewrites.  The person doing the work needs to know what they are doing w/ the tool.  However, w/ the correct implementation of automation, applications can be released more frequently and be more stable and reliable.  The last thing anyone wants is to have the customers test the application because QA didn’t have enough time to test it.


It is common to read articles and posts saying that the issues organizations face when developing software are bigger than ever before. However, in many ways we face the same issues now as those that were common in 1975, when Fred Brooks wrote his famous essay on the issues facing software development companies, “The Mythical Man Month”.
The issues discussed in “The Mythical Man Month” have not so much changed as the options for fixing those issues have grown and morphed into better solutions. Every business is different, and each has its own set of challenges. We can categorize the issues businesses face by the size of the organization. Large businesses have more employees, harder-to-handle communications, and more strenuous processes that must be followed. With that in mind, let’s explore five issues, and their solutions, that large organizations face as they try to find success with IT.

Big Design Up Front

Large organizations have a number of challenges which often seem as though they can be solved by the concept of big design up front. Executives want to know when they will see a return on their investment. Marketing and sales teams want clear deadlines so that they can begin to share information with customers about new features. Engineering management wants to be able to plan what will be developed with project management groups. Operations teams need to be able to plan for and purchase the hardware required to support these new products.
It is exactly these pressures which cause companies to attempt to design every facet of their software as early in the process as possible. Big design up front is the process of trying to architect and plan everything in a project before ever writing a line of code. Often big design up front will lead to long schedules filled with development tasks which take a product down the wrong path and do not leave room for an organization to learn from their customers as development proceeds. Big design up front may alleviate the initial pressures of development by creating schedules, but it will also often lead to product failure.
Obviously companies cannot just ignore these initial project pressures. The solution is to adopt a goal of deploying to real users early in the development process, and often after the initial release. At Hewlett Packard, on a consumer-facing project, we made it a goal to release to a small set of customers in the first week, and weekly after that. By doing so, everyone in the organization could plan and prepare for constant customer interaction. Additionally, the company was able to receive customer feedback early and pivot quickly as it saw areas of innovation. This sort of goal, as opposed to big design up front, leads to quick and consistent success.

Not Quite Agile

Many organizations I work with call themselves agile. However, when pressed to discuss which agile methods they follow, these companies will stammer out an explanation that we in the Agile community call "Scrum But": "We are scrum, but we don't have stand-ups," "… but we don't write user stories," "… but we don't demo to stakeholders." The list can get quite long, and before long the organization is not so agile at all.
Large organizations need to understand the agile mindset now more than ever before. Agile states that an organization should value:
● individuals and interactions over process and tools
● working software over comprehensive documentation
● customer collaboration over contract negotiation
● responding to change over following a plan
Large businesses know that agile is important, but they fall into the pit of "Scrum But" when they forget the real tenets of agile. Companies need to remember that agile is not scrum. But the rules of scrum, as well as extreme programming, lean, kanban, and many other so-called "Agile Methodologies," if adhered to, will help an organization follow the four truly agile values.

Too Many Meetings Not Enough Action

Large organizations often complain of communication issues. With dozens of levels of employees and management between the software producers and the executive teams, it is easy to see how communication can become problematic. Most people within organizations have an honest desire to communicate status, changes, and issues to both their supervisors and their subordinates. However, this honest desire to communicate generally leads to a crippling number of meetings, which often hamper the amount of work a team can complete.
Organizations in recent years have turned to agile practices in an attempt to alleviate these meeting needs while still keeping communication channels open. This is a good tactic as long as it is kept in check. All too often companies allow agile to become a reason for more meetings. Daily stand ups turn into multiple hour status sessions. Sprint planning can turn into multiple day marathons. When agile methods are not kept in check they are not agile at all.
When agile is used correctly it can reduce meetings and increase communication. An organization must work to maintain an effective backlog and to make sure all of the stakeholders have access to it. A prioritized backlog can tell stakeholders, even at the executive level, more about development efforts in a few minutes than weeks' worth of meetings. Additionally, companies must be diligent in keeping their agile meetings agile. Stand-ups must be kept to 15 minutes or less. Sprint planning should be under an hour. Other meetings should be kept to a minimum or eliminated. By keeping a prioritized backlog and keeping agile meetings agile, organizations will see engineers complete more software development and meet less.

A Lack of DevOps and Automation

Whenever I start a new engagement I always ask how the company deploys its software. Often the company will complain of the long and arduous process of moving their code from environment to environment. The process usually involves a myriad of manual steps and is quite error prone. The longer and more error prone the process, the longer the organization will wait between releases. This is not good for the development process, nor for an organization's bottom line.
This is why DevOps and automation are so important. DevOps is the intersection of development and operations teams. Before the DevOps movement, engineers working on products would complete features and then "toss" the code over to operations for deployment. Operations would then try to replicate a development-like environment on production hardware, deploy the code, and debug any production issues. This method of deployment is ineffective and expensive.
By implementing a DevOps strategy that includes automating both server configuration and software deployment, organizations can speed up the delivery of their software. Product engineers must help in the process of automating these steps. Companies can no longer tolerate an internal atmosphere where product engineers create software in a vacuum and then toss the results over the fence to operations. As product engineers plan their development activities, they must add tasks to automate and test deployment strategies.
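To make the idea concrete, the manual, error-prone steps mentioned above can be captured as an ordered pipeline that stops at the first failure. This is only a sketch: the commands are placeholder `echo` steps standing in for whatever provisioning and deployment tools an organization actually uses.

```python
import subprocess

# Placeholder pipeline; each entry stands in for a real command such as a
# configuration-management run, an artifact copy, or a service restart.
STEPS = [
    ["echo", "provision: configure server packages"],
    ["echo", "deploy: ship the build artifact"],
    ["echo", "smoke test: hit a health-check endpoint"],
]

def run_pipeline(steps):
    for step in steps:
        result = subprocess.run(step, capture_output=True, text=True)
        if result.returncode != 0:
            # Stop on the first failure so a broken deploy never reaches users.
            return False, step
    return True, None

ok, failed_step = run_pipeline(STEPS)
print("deploy succeeded" if ok else f"deploy failed at {failed_step}")
```

Even a script this small removes the "myriad of manual steps": the order is fixed, every run is identical, and a failure halts the release instead of slipping into production.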

Rebuild Or Hang On Too Hard

Large organizations can on occasion fall prey to their own success. Unlike bootstrapped or venture-backed start-ups, large organizations often have the capital needed to invest in very large software projects. These projects do occasionally go awry. When that happens, companies will either decide to rebuild or hang on and keep trudging forward with their software development. Depending on the circumstances, either scenario can be detrimental to the company and the development teams.
When a large software project starts to have huge problems, it may seem like the best plan is to rebuild the project from scratch. This stems from the nearly universal good feelings that come at the beginning of a project: tasks complete quickly and progress is often fast in the early stages. But if there is a problem in the initial project, it will be exacerbated in a rebuild.
Similarly, companies in distress will sometimes try to hang on to a failing project. They do not want to lose the perceived value, and cash, they have already put into the project, so they trudge forward hoping things will get better. Again, when a project has issues, this desire to trudge forward, throwing good money after bad, will only exacerbate them.
The answer to these issues is neither to rebuild from scratch nor to trudge forward blindly, but to find the underlying issue in the project and eliminate it. Often a project is having issues because of ill-defined requirements. At other times simple processes are being made overly complex. It is also not uncommon for a project to no longer be of value to the organization. It is better to understand the underlying needs and address them than to blindly rebuild or hang on.

Where Do We Go From Here?

Large organizations are obviously not immune to issues when it comes to IT. Unlike smaller businesses, large organizations have a huge number of employees, many levels of management, large budgets and budget constraints, and rules and regulations to follow. Looking over the issues presented here, it can be seen that they creep in because of the unique environment within these large organizations. However, by following good principles such as true agile methodologies, keeping communications in check, automating, and making solid business decisions when issues arise, we can overcome problems within our large-organization IT departments.