
Friday, January 29, 2016

Interview for QA or the Highway

I will be taking my Web Services Testing talk to the QA or the Highway conference next month.  I am excited to speak at this QA-focused conference for the first time; I have heard a lot of good things about it.  Alliance Data, my new employer, will be sending its entire testing team to the conference, so hopefully I'll have some friendly faces in the audience during my session :)  To promote the event and generate buzz, the Testing Curator is interviewing some of the speakers in what they call the Speaker Series.  I was recently interviewed; the result is linked below.


QA or the Highway 2016 – Speaker Series – Featuring Stan Jónsson

blog.testingcurator.com

Tuesday, June 30, 2015

StirTrek Talk Available on YouTube

Last month I had the pleasure of presenting at StirTrek 2015, Ultron Edition.  This one-day software development conference has grown into one of the premier technology events in town, so I felt honored to get a chance to present.  The conference was held at the Rave Theaters in Columbus, and as in previous years all 1,500 attendee tickets sold out in minutes.

I gave a talk on Testing Web Services, similar to ones previously presented at CodeMash and Columbus Code Camp.  According to the proctor manning my theater, 217 people attended the session.  The presentation went very well and a lot of people came up to me afterwards and thanked me for the talk, which made me feel good.  The talk was recorded and can now be viewed on YouTube.  Only the audio and the video feed to the projector were captured, so you won't be able to see my face.  Which is maybe just as well.  If you want to learn more about SoapUI, JMeter and REST-assured, give the video a shot.  It is about an hour long.



The slides from the talk (with screenshots of most of what I walked through), along with all code/scripts, are available in the StirTrek GitHub repo (downloadable zip file).

As always the conference ended with a private movie screening.  This time the movie was Avengers: Age of Ultron.  My wife joined me and the rest of the Quick Solutions gang for the viewing, which was a great ending to a fun conference.

Monday, March 16, 2015

JMS testing with HermesJMS

HermesJMS is a handy tool that can be used to visually interact with JMS destinations (JMS Queues or JMS Topics).  I find it convenient for ad hoc testing of JMS applications.  I use it to monitor the status of JMS Queues, browse their contents, and to drop messages onto queues for testing purposes.  

When viewing a message in a JMS Queue, HermesJMS shows you the JMS headers and the value of the message payload, even if the payload is a serialization of a custom Java object.  For example, on my current consulting engagement we had a bad message stuck at the front of one of our JMS Queues, and due to an invalid configuration our app kept processing that same message over and over rather than moving on to the next message in the queue.  Through the WebLogic Console we could see that there was a message in the queue that wasn't getting processed, but we couldn't see the actual content of the message that was causing it to get stuck.  By connecting HermesJMS to the queue we could view the message payload and, as a result, identify and fix the issue.

Browse Queue Contents

The screenshot below shows an example of what browsing messages in a JMS Queue looks like:


The table lists the JMS messages currently in the queue along with the JMS headers for each message.  Below the table is a text rendering of the actual payload (typically a serialized Java object).  In this example the payload is a Java class called HermesDemo with two properties, foo and bar (which I creatively concocted for this blog post :)

Drag Messages Between Queues

Another handy feature of HermesJMS is that you can easily copy messages between queues.  For example, if I click on the top message in the demo/Queue on my local machine I can drag it over to a queue in my testing environment (UAT):


HermesJMS asks me to confirm the action and then copies the message over.  HermesJMS will automatically handle any necessary mapping if the JMS Destination names differ between the source and the target queues.  I find this drag and drop feature quite handy for ad hoc testing of JMS applications across multiple environments.  I produce a message on one of my local queues and then drag it onto the corresponding queue in whichever environment I want to test.

Build Message Stores

HermesJMS also lets you build so-called stores, which work off a database rather than an actual JMS destination.  Using this feature you can build up a database of JMS messages and have them ready to drag over to a remote destination any time you need to test a specific condition in one of your JMS applications.

XML Export/Import

Alternatively, HermesJMS allows you to export messages to XML files for later import into queues/topics.  To export, click on a message in the queue, select Save as XML... from the Messages menu, give it a file name, and hit Save.  To import the message into a queue, click on the JMS Queue, select Send XML Encoded Messages from the Messages menu, and then select the XML file to import from your hard drive:


Note: if the JMS Destination name does not match between the source and target queues, you will need to edit the XML and update the value to match that of the target queue.

In our example the exported DemoClass.xml file looks like this:

where the value of the object tag is an object serialization + Base64 encoding of the HermesDemo Java class.
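The actual class is in my GitHub repo, but a minimal sketch of it, assuming just the foo and bar properties shown earlier, looks like this:

    import java.io.Serializable;

    // Demo payload class: a serializable bean with the two properties
    // (foo and bar) shown in the HermesJMS payload rendering above.
    public class HermesDemo implements Serializable {
        private static final long serialVersionUID = 1L;

        private String foo;
        private String bar;

        public String getFoo() { return foo; }
        public void setFoo(String foo) { this.foo = foo; }
        public String getBar() { return bar; }
        public void setBar(String bar) { this.bar = bar; }
    }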

When you export a message from a Queue to XML, HermesJMS handles the serialization magic for you and writes it out to the XML file.  If you want to create a new XML message from scratch (e.g. when adding the first message for a queue), you can build the serialization string using the SerializeHermesDemoClass in my GitHub repo (just modify the main method to use whatever class you want to serialize).
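Under the hood that utility just serializes the object and Base64 encodes the bytes.  A minimal sketch of the idea (assuming Java 8+ for java.util.Base64; the version in the repo may differ in its details):

    import java.io.ByteArrayOutputStream;
    import java.io.ObjectOutputStream;
    import java.util.Base64;

    public class SerializeHermesDemoClass {
        public static void main(String[] args) throws Exception {
            HermesDemo demo = new HermesDemo();
            demo.setFoo("foo value");
            demo.setBar("bar value");

            // Standard Java object serialization into a byte array
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(demo);
            }

            // Base64 encode the bytes so they can be pasted into the
            // object tag of the exported XML file
            System.out.println(Base64.getEncoder().encodeToString(bytes.toByteArray()));
        }
    }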

Setup Instructions

Below are basic instructions for getting HermesJMS set up.  In my case I am using WebLogic as the application server.  Setup for other app servers is similar; you just need to use the ContextFactory and jar files specific to that app server.  If you go to hermesjms.com you will find setup instructions for many app servers under the Providers menu.

  1. Download and install HermesJMS, either directly from SourceForge or as part of the SoapUI installer.
  2. Start HermesJMS by running hermes.bat/hermes.sh.
  3. Create a ClasspathGroup for your app server jar files 
    1. Select Options and then Preferences
    2. Click on the Providers tab
    3. Right-click on Classpath Groups and select Add Group and give it a name (e.g. JarDependencies)
    4. Click the + sign, right-click on Library, and select Add Jars, then find the jar files you want to import.  In our case that is weblogic.jar, wlclient.jar, and HermesDemo.jar, which contains the custom Java class used in our demo.  If you want HermesJMS to show the contents of a custom Java object in your JMS Queue, it needs the corresponding class file on its classpath.  You can either add the jar here, or alternatively edit hermes.bat/hermes.sh and add it where the CLASSPATH variable gets set.
  4. Next we need to create a Session for JNDI browsing the JMS server 
    1. On the Preferences screen, click the Sessions tab.
    2. Give the session a name, corresponding to the JMS server you are pointing it to.
    3. Select the Plug In matching your app server.  In our case it is BEA WebLogic.
    4. Under Loader, select your JarDependencies and under Class select hermes.JNDIConnectionFactory.
    5. Populate the binding, initialContextFactory, providerUrl, and security properties as appropriate for your app server.  Example values for WebLogic are shown after this list.
    6. If the destination names don't get auto-populated, right-click under Destinations and add the names of JMS Queues/Topics you want to connect to on the JMS Server.
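For WebLogic, the session properties typically look something like the following.  These are illustrative values only; the host, port, and credentials are placeholders you'll need to adjust for your environment, and binding can usually be left blank:

    binding                  (usually left blank)
    initialContextFactory    weblogic.jndi.WLInitialContextFactory
    providerUrl              t3://localhost:7001
    securityPrincipal        weblogic
    securityCredentials      welcome1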

Note: If you are using WebLogic you can alternatively download this pre-populated hermes-config.xml file and put it in your .hermes directory (replacing the default one that HermesJMS puts there during install).  Before you run HermesJMS make sure you edit the file and change the following:
  • Update the providerUrl value to match the server and port of your JMS Server.
  • Set the securityCredentials and securityPrincipal values to match your username and password.
  • Edit the library paths for JarDependencies and make sure they point to wherever you have these jar files on your machine.

In Conclusion

I hope this overview and these setup instructions help you get going with HermesJMS.  Once you have it working, interacting with your JMS destinations is a breeze, and testing a given JMS app can be as simple as a drag and drop.

If you need to run a suite of JMS tests, e.g. for sanity testing or load testing, you can use SoapUI, which knows how to interact with HermesJMS.  I may write a future blog post demonstrating this integration.  For SoapUI basics, see this blog post.

All the examples used for this blog post can be seen in this Github repo.

Happy JMSing!

Friday, January 10, 2014

CodeMash 2014

CodeMash 2014 is winding down and as usual the conference was a blast! Lots of great talks, lots of fun activities, and last but not least a good time to be had in the waterpark! Outside of the talks there were activities such as lock picking, jam sessions, an early morning 5K run, astronomy, 3D printing, game rooms, open spaces, lightning talks, kids' fun at KidsMash, a Star Trek simulator, various parties, a bacon bar, and on and on. This was my 4th year attending the conference and it is without a doubt my favorite developer conference. Hats off to the organizers!

This year I was fortunate enough to deliver not just one but two talks at the conference. On Thursday I gave a talk on Web Service testing and on Friday I did one on Kanban. The first talk went only sort of OK.  I was very nervous, felt I stuttered too much through the material, and didn't feel very sharp in my thoughts, so I didn't feel super good afterwards. I had a talk with Leon Gersing in the evening and he gave me some great pointers on delivering presentations and dealing with nerves. I rehearsed my second talk several times that evening, delivered it the following morning feeling way more relaxed and sharp, and quite enjoyed giving it. So thanks, Leon, for the tips! Now I am waiting for my family to join me at Kalahari so we can hit the waterpark tonight and tomorrow.

The slide decks from both talks are available on Slideshare.
The slides have also been uploaded to the conference GitHub.

Sunday, June 17, 2012

Agile Tips from Remote Seminar with Henrik Kniberg

Last week Agile netið organized a remote seminar with Henrik Kniberg, award-winning keynote speaker and author of multiple Agile and Lean books.  The seminar was held at the Grand Hotel in Reykjavík, where around 10 Icelandic Agile enthusiasts gathered to connect remotely to Henrik via Skype and Google Hangout.  We listened to Henrik talk about lessons learned on one of his recent projects and then had a long Q&A session, where we got to ask him all sorts of Agile and Lean related questions.  Despite an initial technical hiccup with poor sound quality, the event was very enjoyable and informative.  Here are some of my takeaways: random tips from the discussions that I found useful.
  • If you need to explain to someone the benefits of limiting work in progress when running Kanban, use a traffic jam analogy: When there are not that many cars on a given highway, everything runs smoothly and a lot of cars make it through.  But as we add more cars, everything slows down to a jam and fewer and fewer cars are able to drive the highway to the end.  The same thing happens in Kanban if we don't put WIP limits on our queues: starting more tasks actually causes us to finish fewer.
  • If you ever need to sell management on paying off technical debt and refactoring some legacy code: print out the longest class, tape the pages together, and bring the strip with you to a meeting!  When you show up with a 5-meter-long paper strip showing some monstrosity of a class, people have an easier time visualizing the problem.
  • When running a Scrum team, think about establishing a rule that each team member who completes a user story has to ask the team if he can help out with a story that is already in progress before he is allowed to start work on a new one.  That encourages cooperation and limits work in progress, increasing the likelihood that we have fully completed stories at the end of the sprint, as opposed to stories that are partially done.  If our sprint backlog is 10 stories, we would much rather have 8 fully completed stories at the end of the sprint than 6 completed stories and 4 partially done.
  • According to Henrik, research has shown that incorporating QA into your agile development teams (testing continuously) saves on overall time spent in testing and bug fixing.  Meaning that if you test and bug fix as part of each sprint, as opposed to doing a large round of testing at the end of the project followed by a lot of bug fixes, you save time.  One reason for this is that when it comes time to fix a bug, the code is still fresh in the developer's mind.  Also, the continuous QA feedback encourages developers to deliver higher quality code.  (Henrik illustrated this nicely in his slides.)
  • Size doesn't matter.  According to Henrik, having the team categorize user stories into small, medium, and large before implementation is not very useful.  At the end of his project he calculated the actual time spent on user stories and found that, on average, it took about the same time to implement a story that had been categorized as small as one that had been categorized as medium.
  • When you are on a large project that comprises multiple teams, try doing finger voting to assess the overall confidence that the project will get delivered on time.  Basically, in weekly meetings ask team leads to hold up fingers on one hand representing their confidence that their team will get the required work done on time, where 5 fingers mean "definitely", 4 fingers "likely", 3 fingers "borderline", 2 fingers "probably not", and 1 finger "no way".  Track this on a weekly basis.  If you see few fingers in the air, you know you have a problem that needs to be addressed.
  • Try to make your development teams cross functional.  Each team should comprise all the skills required to fully deliver a feature (e.g. graphics designer, front-end developer, back-end developer, DBA, tester).  If a resource is needed at least 50% of the time put him on the team.  Make sure everyone has a home team, even though they might be outsourced to other teams from time to time.
  • To check the health of a Scrum team, use the Scrum checklist.

Tuesday, August 30, 2011

Web Service Testing with soapUI

For a few years now I have used the open source application soapUI to simplify Web Service testing.  Both to test services that I have written and to test external services that I have had to consume.  In this post I'll cover the basic types of testing you can do with soapUI and provide some practical tips on how to use soapUI when working with Web Services.  If you don't want to go through the detailed examples I suggest jumping straight to the Practical Tips section at the end.

Getting Started

After downloading and installing soapUI, the easiest way to get started is to create a new project from an initial WSDL or WADL.  For demonstration purposes I'll use the free Weather Web Service at http://www.webservicex.net/globalweather.asmx.  We will create a test for each operation in this SOAP Web Service.  We will also create a Test Suite that allows us to run all our tests at the click of a button.  Furthermore, we will create a Load Test for our Test Suite and a Mock Service to simulate the functionality of the Web Service.

To do this go to File and select New soapUI Project.  Populate the New soapUI Project screen in the following manner and click OK:

Basically, paste the URL to the web service WSDL file into the Initial WSDL/WADL field and additionally check the field to generate a MockService.  SoapUI will now load the Web Service definition and ask you questions about how to create the MockService:

For this demonstration change the path to "/WeatherMock" and check the Starts the MockService immediately checkbox, but otherwise accept the defaults.  After clicking OK, specify the name of the Mock Service as "Weather MockService" (or a name of your choice) and click OK again. 

This will complete generating artifacts for the SOAP 1.2 version of the GlobalWeather Web Service.
Next you will be asked the exact same questions, for the SOAP 1.1 version.  Since we don't plan to use SOAP 1.1 you can just hit Cancel.  (I haven't found a way to configure soapUI to only generate artifacts for SOAP 1.2.  If you know how to, please let me know! :)

Running Tests

To execute a given test, drill down to the requests that soapUI auto-generated and insert values that make sense for the given Web Service operation.  In our case, in the Navigator on the left, click on GlobalWeatherSoap12, then GetCitiesByCountry and then double-click on Request 1.
We'll set the Country name as Iceland and then hit the green arrow button to execute the test.

Voila, you should get results back from the Web Service: a list of Icelandic cities.  To check if the Web Service response is valid (conforms to the WSDL), right-click in the results window and click Validate.

Let's give this test the name "City Test" (right-click Request 1 and select Rename).
Then do a similar test for the GetWeather operation.  Click on GetWeather and then double-click on Request 1.  Put in "Reykjavik" for CityName and "Iceland" for CountryName, run the test, and rename this test to "Weather Test".
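For reference, the request soapUI generates for this test looks roughly like this.  This is a sketch based on the SOAP 1.2 envelope and the GlobalWeather namespace used later in this post; soapUI's generated skeleton may differ slightly:

    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:web="http://www.webserviceX.NET">
       <soap:Body>
          <web:GetWeather>
             <web:CityName>Reykjavik</web:CityName>
             <web:CountryName>Iceland</web:CountryName>
          </web:GetWeather>
       </soap:Body>
    </soap:Envelope>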

Build a TestSuite

Now that we have created two tests, let's add them to a TestSuite so we can easily re-run them at any time.  In the Navigator on the left, right-click GlobalWeatherSoap12 and select Generate TestSuite.  Select Use existing Requests in interface, and check both Generates a default LoadTest for each created TestCase and Single TestCase with one Request for each Operation:

After clicking OK, give the TestSuite the name "Weather TestSuite".  Double-click on GlobalWeatherSoap12 TestSuite under Weather TestSuite to show the TestCase editor.
Then click the green arrow button to run the TestSuite.  If all goes well you should be presented with a results screen like this, indicating a successful test run:


If the TestSuite run fails you'll see FAILED at the top instead of FINISHED and you should get a message explaining why a given test failed.

Add Assertions

To make the tests more meaningful, let's add some assertions to validate the responses from the Web Service we are calling.  You do this by opening up the test Request Editor (double-click on a given test step under the TestSuite in the Navigator) and then clicking the plus sign next to the green arrow button (second from the left).

For both tests let's add the assertion called SOAP Response, to ensure the Web Service is returning a valid SOAP response.  Then add the assertion called Response SLA and specify the response time as 2000 ms.  That basically means we are going to consider the test a failure if we don't get a response within two seconds.  Lastly, let's add some content validation by selecting a Contains assertion.  For the GetCitiesByCountry operation add the string "Iceland" as the content to expect in the response, and for the GetWeather operation add the string "Success".

Then run the TestSuite again to make sure everything still passes.

Run a Load Test

When we created the TestSuite in the last step, we told soapUI to generate a Load Test as well.  You'll find it under the TestSuite in the Navigator on the left, under the heading Load Test.  The default name is LoadTest 1.  Double-click on it in the Navigator to open it.  Before running the Load Test you can tweak parameters such as the number of concurrent threads, the length of the test run, and the delay between tests.  Once you have made the desired configuration, hit the green arrow button to run the Load Test:


In this sample, 5 concurrent threads run the TestSuite for 60 seconds with a random wait of up to 1 second between the start of each test.
You can track the progress of the load test with the progress bar in the upper right corner.  If it reaches 100% without reporting any test errors, you are good to go.

Use the Mock Service

Back when we imported the Weather Web Service we told soapUI to generate and start a Mock Service.  That service can now be accessed at http://localhost:8088/WeatherMock.  This is convenient, for example, if you are developing against a Web Service that has been designed (WSDL/WADL available) but not yet implemented.  The Mock can then return an actual Web Service response, letting you test your code even though the real implementation hasn't been completed.  A default response has already been generated (under Weather MockService, GetCitiesByCountry, and Response 1), which you can edit as you like.

You can also have the Mock service return different responses depending on which request it receives.  To demonstrate this, let's create two new MockResponses for the GetWeather operation:
  • Click on GetWeather under the Weather MockService in the Navigator and select New MockResponse.
  • Give it the name "ReykjavikWeatherResponse".  Accept the automatically generated response, but put the value "Reykjavik" in the GetWeatherResult tag.  (Or, even better, copy the actual response from calling GetWeather for Reykjavik, which should give you a fully valid response.)
  • Create another response called "AkureyriWeatherResponse" and put the text "Akureyri" in the GetWeatherResult tag.
Now put logic in the Mock service for when to return each response:
  • Double-click on GetWeather in the Navigator to show the MockOperation Editor.  
  • Select ReykjavikWeatherResponse under MockResponses.
  • Select QUERY_MATCH under Dispatch.
  • Click the plus sign to add a new match. 
  • Give it the name "Reykjavik". 
  • Select Reykjavik and then populate the XPath value with:

    declare namespace web='http://www.webserviceX.NET';
    declare namespace soap='http://www.w3.org/2003/05/soap-envelope';
    /soap:Envelope/soap:Body/web:GetWeather/web:CityName


    This XPath query will grab the value from the CityName tag.
  • Under Expected Value enter "Reykjavik".
  • Under Dispatch to, select ReykjavikWeatherResponse.
  • Repeat the same steps to create a Match that returns AkureyriWeatherResponse when the city name in the request is Akureyri.
Now you can test your mock match logic by opening up your GetWeather test and adding http://localhost:8088/WeatherMock as the endpoint to use (select the current endpoint in the dropdown and pick add new endpoint...).  Then run the test and play around with changing the city name in the request to get different responses from the Mock service.

Additionally, if you want to test your client-side error handling, you can have soapUI generate a soap:Fault response and have your Mock return it.  You do that by creating a MockResponse and then clicking the apostrophe icon in the MockResponse Editor.  Then edit the auto-generated response as appropriate.
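For reference, a SOAP 1.2 fault payload has this general shape.  This is a sketch of the standard fault structure; soapUI's exact generated output may differ:

    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
       <soap:Body>
          <soap:Fault>
             <soap:Code>
                <soap:Value>soap:Receiver</soap:Value>
             </soap:Code>
             <soap:Reason>
                <soap:Text xml:lang="en">Something went wrong</soap:Text>
             </soap:Reason>
          </soap:Fault>
       </soap:Body>
    </soap:Envelope>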

Practical Tips: What to use soapUI for

Now that we have covered the basics of soapUI, here are some practical tips for putting it to use during your software development.

Test Web Services You Have to Consume

When consuming external services, before delving into code, use soapUI to "kick the tires" of the Web Service.  This especially applies when consuming newly written services.  Rather than potentially spending hours pulling your hair out over why your client code isn't working, spend a few minutes with soapUI validating and getting familiar with the Web Service you are about to consume. In particular:
  • Create and run simple tests for key Web Service operations
  • Make sure there are no security/access problems
    • Is the web service using some proprietary authentication protocol (NTLMv2 comes to mind) that might give you trouble during implementation?
  • Have soapUI validate that responses conform to the Web Service contract (WSDL/WADL)
  • Visually inspect responses 
    • Do they make sense or are they some illegible auto generated garble that should really be cleaned up and restructured by the Web Service developer? 
    • Do the responses meet your needs?
  • Test a few boundary cases
    • Does the Web Service implement proper error handling?  Or does it blow up with an HTTP 500 error or some non SOAP compliant text message?  Does the level of error reporting meet your needs?
  • Add all your tests to a TestSuite.  That way you can quickly "ping" the Web Service to make sure everything is working on the other end.  When a problem arises, taking your code out of the loop is a good way to make sure the issue is on the remote end and not with your own code.
When developing against a newly written external Web Service, it is very rare that the Web Service works 100% out of the box as expected.  There is usually a fair amount of communication needed between the Web Service developer and the client-side developer to tweak things until the Web Service works as needed.  By using soapUI, you can inspect the Web Service right when it is delivered to you and quickly spot things that may need to be fixed.  I usually take 5 or 10 minutes to do so and almost always come away with a list of things that need to be modified.  The turnaround time for getting those changes implemented is usually very short, as the Web Service developer is still engaged in the project and things are fresh in his mind.  If I don't get back to him or her until weeks later, when I finally get around to implementing my client-side code, the other developer has probably moved on to other things and/or forgotten why the service was implemented in a certain way.

Lastly, if you have received a WSDL/WADL file, but the service hasn't actually been implemented, and you NEED to start implementation against the service (not ideal), then consider using soapUI to create a Mock service from the WSDL/WADL file.  That way you can have your code hit the Mock service and at least get some preliminary feedback on whether your client code is working.  Since creating a Mock service is really a breeze with soapUI it can sometimes be more practical than implementing Mock objects in your code.

Test Your Own Web Services

When writing a Web Service for others to consume, it can be handy to have a soapUI TestSuite to sanity test your service.  Of course you should still write unit and integration tests for your code, but a good soapUI TestSuite can be a quick and easy way to find out if all your services are running as expected.  When you get that 4 AM phone call saying that something is broken, fire up soapUI and, at the click of a button, sanity test all of your Web Service operations.  If you are smart, you'll hand the TestSuite over to a support team so that you only get woken when the issue truly is on your end ;-)  Just make sure to add proper assertions for Web Service responses and include SLA assertions to verify that things are not running dead slow.

If you are concerned about the performance of your Web Service or whether it can handle a given load, then a soapUI Load Test can be a convenient way to test that.  Set the number of threads to imitate the expected number of concurrent users for your service and add SLA assertions to make sure all requests are handled in a timely fashion.  Of course generating all the load from a single machine does not quite imitate real traffic, so for truer numbers consider having coworkers assist you in running simultaneous Load Tests from multiple machines.

Wednesday, November 17, 2010

Notes on Test Driven Development and Testing

This week I attended the Agilis 2010 conference, including a 2 day course on Test Driven Development by Nat Pryce. Below are a few notes from his class. Not necessarily the key teachings, but rather nuggets of information that I found interesting.
  • One of the main arguments for using TDD is that it encourages you to improve the design of your code.  Writing the test for a software component that you have designed but not yet implemented gets you to think about and question the purpose and nature of your design.  For example, if while writing a unit test for a given class you discover that the test code gets way too complex or convoluted, that is probably an indication that the class being tested has too broad a responsibility and should be refactored into smaller, simpler units.  I had always thought of TDD more as a means to ensure you build a comprehensive test suite and as a result have fewer bugs in your software, but had not given much thought to the fact that it is in essence a way to get you to improve your software design, hence making your code easier to understand, use, and maintain.
  • We did some exercises using the JMock framework, which Nat Pryce co-wrote.  JMock is a neat tool to help you mock out interfaces that your code interacts with and to validate that your code is using the interfaces as expected.  JMock allows you to command the mock object of the interface to behave a certain way (e.g. return specific results) and to set up expectations for how you are planning to call the interface (e.g. methodA will be called once and only once with arguments "ABC" and 99).  These expectations are integrated with JUnit, and if they are not fulfilled by the end of your test run, JUnit will fail the test.  (See the JMock sketch at the end of this post.)
    • The JMock Cheat Sheet page provides an overview of the JMock syntax.
    • During the Q&A session with Nat, he admitted that JMock is probably best suited for green-field projects (new systems), while frameworks like Mockito are better suited for brown-field projects (preexisting systems that you are trying to create tests for).
  • We spent some time discussing Monitoring Events, which is the concept of having your system broadcast notifications (events) about your code execution.  E.g. when an order is placed or when a user logs into your system a notification of the event is sent to a JMS Topic that interested parties can subscribe to.
    • This is great for logging.  A logger component can subscribe to the topic and log all events in the system.  It can then allow you to filter out certain events, or do things like group events by request id and provide a holistic overview of a single user transaction (as opposed to having to grep through numerous log files on multiple servers to try to piece together what happened when a given user transaction ran through the system, possibly spawning multiple threads in multiple JVMs).
    • Monitoring Events are great for testing too.  Imagine trying to assert that a call to a checkout service (to complete the purchase of a product) will result in a proper inventory reduction in an asynchronous inventory system.  If your test runs straight through and checks the inventory status as soon as it has completed the purchase, the test will likely fail, since the inventory system hasn't had time to process its update request.  One might try to fix such a test by adding a sleep statement of, say, 10 seconds after the checkout call but before the inventory is checked (which still might fail if the system is running slow).  Or, which is a little better, one might implement a loop that polls the inventory system every second to see if the update has been received (succeed fast).  In both cases we are polluting our tests with sleep statements and lengthening the time to feedback when we run a suite of tests.  A better way is to have the test subscribe to the Monitoring Event topic and complete (succeed) as soon as it receives an inventory update notification.  (See the sketch at the end of this post.)
    • You can also use Monitoring Events to build a support tool for your system.  E.g. the tool could send an email or SMS text message to a support person when a certain event is received (OutOfMemoryError, external service not responding, etc.) or when a certain number of events have been received over a given time frame.
  • Miscellaneous tips on testing and coding
    • If you are ever testing code that depends on the system clock (e.g. at noon every day the system is supposed to execute some function), a neat trick to make that code more testable is to refactor out the dependency on the system clock.  E.g. instead of your class making a direct call to System.currentTimeMillis(), have your constructor take in a generic Clock object (or define a class variable and use dependency injection to inject a Clock implementation).  Then you can have a SystemClock implementation that simply uses the current system clock, while during your test run the class under test is initialized with a FakeClock implementation that hardcodes the time to 12 PM.  (See the sketch at the end of this post.)
    • In your system tests, which load up and work with data in a database, have your JUnit setUp method clear out the database, rather than the tearDown method.  This way you have actual data to look at if a test fails (rather than a cleanly wiped database).
    • Use Simplicators to simplify communication with 3rd party APIs.  That is, a facade that maps the 3rd party interface and artifacts over to your domain model or something that makes sense in your system.  This way you can more easily test your own code (by using mock Simplicators that don't have a dependency on a 3rd party system), and likewise your code is more easily maintained (e.g. a type change or renaming of a field in some 3rd party XML response may require you to update your Simplicator implementation, but might have no impact on your business code if the response has already been mapped over to your domain model).
    • Have separate tests that test your production setup.  Basically tests that you can run in production to verify a deployment.  Don't deploy your unit/system tests into production. Avoid the painful lesson that GitHub recently experienced where a test run in production cleared out their entire production database!
    • When programming, don't have your methods return null!  Null checks pollute your code and make it harder to debug.  Return empty objects instead.
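Here is the JMock sketch referenced above.  This is a minimal example of my own, not from Nat's class; the MailService interface and OrderProcessor class are invented purely for illustration (JMock 2 syntax):

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class OrderProcessorTest {

        // Hypothetical collaborator interface that will be mocked
        public interface MailService {
            void send(String to, String subject);
        }

        // Hypothetical class under test
        public static class OrderProcessor {
            private final MailService mailService;
            public OrderProcessor(MailService mailService) { this.mailService = mailService; }
            public void placeOrder(String customerEmail) {
                // ... order processing logic would go here ...
                mailService.send(customerEmail, "Order confirmed");
            }
        }

        Mockery context = new Mockery();

        @Test
        public void sendsConfirmationWhenOrderIsPlaced() {
            final MailService mailService = context.mock(MailService.class);

            // Expectation: send() is called once and only once with these arguments
            context.checking(new Expectations() {{
                oneOf(mailService).send("customer@example.com", "Order confirmed");
            }});

            new OrderProcessor(mailService).placeOrder("customer@example.com");

            // Fails the test if the expectation above was not fulfilled
            context.assertIsSatisfied();
        }
    }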
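And the Monitoring Events test idea, sketched with a CountDownLatch.  The EventBus and CheckoutService interfaces here are hypothetical stand-ins for whatever event subscription and checkout APIs your system actually exposes:

    import static org.junit.Assert.assertTrue;

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    import org.junit.Test;

    public class CheckoutEventTest {

        // Hypothetical seams into the system under test
        public interface EventBus { void subscribe(String topic, Consumer<Object> listener); }
        public interface CheckoutService { void purchase(String sku); }

        EventBus eventBus = null;                 // wire up a subscriber to your monitoring topic
        CheckoutService checkoutService = null;   // wire up your checkout client

        @Test
        public void checkoutTriggersInventoryUpdate() throws Exception {
            CountDownLatch inventoryUpdated = new CountDownLatch(1);

            // Subscribe to the monitoring event before triggering the action
            eventBus.subscribe("inventory.updated", event -> inventoryUpdated.countDown());

            checkoutService.purchase("SKU-123");

            // Succeed as soon as the event arrives; only fail after a generous timeout.
            // No fixed sleeps, so a fast system gives fast feedback.
            assertTrue("No inventory update event received",
                    inventoryUpdated.await(30, TimeUnit.SECONDS));
        }
    }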
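Finally, the clock sketch referenced above.  The NoonScheduler class is invented for illustration; the pattern is simply constructor injection of a Clock interface:

    // Abstract the clock so tests can control "now"
    interface Clock {
        long currentTimeMillis();
    }

    // Production implementation: delegates to the real system clock
    class SystemClock implements Clock {
        public long currentTimeMillis() {
            return System.currentTimeMillis();
        }
    }

    // Test implementation: pins the clock to a fixed instant
    class FakeClock implements Clock {
        private final long fixedTime;
        FakeClock(long fixedTime) { this.fixedTime = fixedTime; }
        public long currentTimeMillis() {
            return fixedTime;
        }
    }

    // The class under test takes the Clock as a constructor dependency,
    // so a test can pass in new FakeClock(someNoonInstantInMillis)
    class NoonScheduler {
        private final Clock clock;
        NoonScheduler(Clock clock) { this.clock = clock; }

        boolean isNoon() {
            long millisIntoDay = clock.currentTimeMillis() % (24L * 60 * 60 * 1000);
            return millisIntoDay / (60 * 60 * 1000) == 12;  // hour of day (UTC) == 12
        }
    }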