Blog about software development, agile practices, software use, and the latest happenings in the wide world of technology
Showing posts with label agile. Show all posts
Thursday, September 27, 2012
Kanban Case Study at agileLUNCHBOX
Here are slides on a Kanban Case Study that I presented at agileLUNCHBOX today. I basically recapped my experience using Kanban at Hugsmiðjan (maker of the Eplica CMS) and showed how our Kanban board evolved over time. I went over basic Kanban theory and covered a few advanced topics, like the associated metrics. I also talked about how we integrated Kanban and Scrum. The talk was well received, and afterwards we had some good agile discussions.
Sunday, June 17, 2012
Agile Tips from Remote Seminar with Henrik Kniberg
Last week Agile netið organized a remote seminar with Henrik Kniberg, award-winning keynote speaker and author of multiple Agile and Lean books. The seminar was held at the Grand Hotel in Reykjavík, where around 10 Icelandic Agile enthusiasts gathered to connect remotely to Henrik via Skype and Google Hangout. We listened to Henrik talk about lessons learned on one of his recent projects and then had a long Q&A session, where we got to ask him all sorts of Agile and Lean related questions. Despite an initial technical hiccup with poor sound quality, the event was very enjoyable and informative. Here are some of my take-aways: assorted tips from the discussions that I found useful.
- If you need to explain to someone the benefits of limiting work in progress when running Kanban, use a traffic jam analogy: when there are only a few cars on a highway, everything runs smoothly and a lot of cars make it through. But as more cars enter, traffic slows to a jam and fewer and fewer cars are able to drive the highway to the end. The same thing happens in Kanban if we don't put WIP limits on our queues: starting more tasks actually causes us to finish fewer. (The toy simulation after this list shows the effect in numbers.)
- If you ever need to sell management on paying off technical debt and refactoring some legacy code: print out the longest class, tape the pages together, and bring the strip to a meeting! When you show up with a 5-meter-long printout of some monstrosity of a class, people have an easier time visualizing the problem.
- When running a Scrum team, think about establishing a rule that a team member who completes a user story must ask the team whether he can help out with a story that is already in progress before he is allowed to start a new one. That encourages cooperation and limits work in progress, increasing the likelihood of fully completed stories at the end of the sprint, as opposed to stories that are partially done. If our sprint backlog is 10 stories, we would much rather have 8 fully completed stories at the end of the sprint than 6 completed and 4 partially done.
- According to Henrik, research has shown that incorporating QA into your agile development teams (testing continuously) reduces the overall time spent on testing and bug fixing. That is, if you test and fix bugs as part of each sprint, rather than doing a large round of testing at the end of the project followed by a lot of bug fixes, you save time. One reason is that when it comes time to fix a bug, the code is still fresh in the developer's mind. The continuous QA feedback also encourages developers to deliver higher-quality code. Henrik visualized this with an illustration in his slides.
- Size doesn't matter. According to Henrik, having the team categorize user stories as small, medium, or large before implementation is not very useful. At the end of his project he calculated the actual time spent on user stories and found that, on average, implementing a story that had been categorized as small took about the same time as implementing one categorized as medium.
- When you are on a large project that comprises multiple teams, try doing finger voting to assess the overall confidence that the project will be delivered on time. Basically, in weekly meetings ask team leads to hold up fingers on one hand representing their confidence that their team will get the required work done on time: 5 fingers means "definitely", 4 "likely", 3 "too close to call", 2 "probably not", and 1 "no way". Track this weekly. If only a few fingers go up, you know you have a problem that needs to be addressed.
- Try to make your development teams cross-functional. Each team should comprise all the skills required to fully deliver a feature (e.g. graphics designer, front-end developer, back-end developer, DBA, tester). If a person is needed at least 50% of the time, put him on the team. Make sure everyone has a home team, even though they might be lent out to other teams from time to time.
- To check the health of a Scrum team, use the Scrum checklist.
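To make the traffic-jam analogy concrete, here is a toy round-robin simulation in Java. This is my own sketch, not something from the seminar, and all the constants are invented: each task needs a fixed amount of work, the team spreads its attention across everything in progress, and every hop between tasks costs a little overhead.

    import java.util.Arrays;

    // Toy model: a fixed effort budget is spent round-robin across `wip` tasks.
    // Working a unit costs 1; whenever more than one task is in progress, each
    // unit also pays a context-switch overhead. Finished tasks are replaced so
    // WIP stays constant. All constants are invented for illustration.
    public class WipSimulation {

        static int tasksFinished(int budget, int wip) {
            final int workPerTask = 20;   // effort units to finish one task
            final int switchCost = 2;     // overhead per unit when juggling tasks
            int[] remaining = new int[wip];
            Arrays.fill(remaining, workPerTask);
            int done = 0;
            for (int i = 0; budget > 0; i++) {
                int slot = i % wip;
                remaining[slot]--;                        // one unit of real work
                budget -= 1 + (wip > 1 ? switchCost : 0); // plus switching overhead
                if (remaining[slot] == 0) {
                    done++;
                    remaining[slot] = workPerTask;        // pull in the next task
                }
            }
            return done;
        }

        public static void main(String[] args) {
            for (int wip : new int[] {1, 2, 5, 10, 50}) {
                System.out.printf("WIP %2d -> %d tasks finished%n",
                        wip, tasksFinished(2000, wip));
            }
        }
    }

With a WIP of 1 the whole budget turns into finished tasks; at a WIP of 50 the budget runs out before a single task is done: the highway jam in code.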
Labels:
agile,
development methodologies,
kanban,
scrum,
testing
Thursday, June 30, 2011
Kanban with David Anderson
A few points from the class:
The Kanban Method is based on three core principles:
- Start with what you do now
- Agree to pursue incremental evolutionary change
- Initially, respect current processes, roles, responsibilities & job titles
The five keys to a successful Kanban implementation are:
- Visualize Workflow
- Limit Work-in-Progress
- Manage Flow
- Make Process Policies Explicit
- Improve Collaboratively (using models & scientific method)
Key takeaways from the class, according to David, were:
- Kanban is like water
- It goes around obstacles and slowly changes them, rather than removing them like a bulldozer
- Change has to come from within
- How many experts does it take to change an organization? Answer: One. But people are going to have to want to change
Surprising fact from the class:
According to research done by David, getting more done ranks only fourth on most managers' lists! Their order of preference:
- Predictability
- Business Agility (ability to respond to changes in market)
- Good Governance (managing budgets, making sure money is spent the way it was intended, etc.)
- Getting more done
(Photo: Playing the Kanban game)
All in all a good class, and I look forward to continuing to improve Kanban at Hugsmiðjan, where we develop the Eplica CMS.
Labels:
agile,
conference,
development methodologies,
kanban
Wednesday, November 17, 2010
Notes on Test Driven Development and Testing
This week I attended the Agilis 2010 conference, including a 2-day course on Test Driven Development by Nat Pryce. Below are a few notes from his class: not necessarily the key teachings, but rather nuggets of information that I found interesting.
- One of the main arguments for using TDD is that it encourages you to improve the design of your code. Writing the test for a software component that you have designed but not yet implemented gets you to think about and question the purpose and nature of your design. E.g., if, when writing a unit test for a given class, you discover that the test code gets way too complex or convoluted, that is probably an indication that the class being tested has too broad a responsibility and should be refactored into smaller, simpler units. I had always thought of TDD as more of a means to ensure you build a comprehensive test suite and as a result have fewer bugs in your software, but I had not given much thought to the fact that it is in essence a way to get you to improve your software design, hence making your code easier to understand, use, and maintain.
- We did some exercises using the JMock framework, which Nat Pryce co-wrote. JMock is a neat tool for mocking out interfaces that your code interacts with and validating that your code uses those interfaces as expected. JMock allows you to command the mock object of the interface to behave a certain way (e.g. return specific results) and to set up expectations for how you plan to call the interface (e.g. methodA will be called once and only once with arguments "ABC" and 99). These expectations are integrated with JUnit; if they are not fulfilled by the end of your test run, JUnit will fail the test. (The first sketch after these notes shows the basic syntax.)
- The JMock Cheat Sheet page provides an overview of the JMock syntax.
- During the Q&A session with Nat he admitted that JMock was probably best suited for green-field projects (new systems), while frameworks like Mockito were better suited for brown-field projects (preexisting systems that you are trying to create tests for).
- We spent some time discussing Monitoring Events, the concept of having your system broadcast notifications (events) about your code execution. E.g. when an order is placed or a user logs into your system, a notification of the event is sent to a JMS topic that interested parties can subscribe to.
- This is great for logging. A logger component can subscribe to the topic and log all events in the system. It can then let you filter out certain events or do things like group events by request id and provide a holistic overview of a single user transaction (as opposed to having to grep through numerous log files on multiple servers to try to piece together what happened when a given user transaction ran through the system, which may have spawned multiple threads in multiple JVMs).
- Monitoring Events are great for testing too. Imagine trying to assert that a call to a checkout service (to complete the purchase of a product) results in a proper inventory reduction in an asynchronous inventory system. If your test runs straight through and checks the inventory status as soon as it has completed the purchase, the test will likely fail, since the inventory system hasn't had time to process its update request. One might try to fix such a test by adding a sleep statement of, say, 10 seconds after the checkout call but before the inventory is checked (which still might fail if the system is running slow). Or, which is a little better, one might implement a loop that polls the inventory system every second to see if the update has been received (so the test can succeed fast). In both cases we are polluting our tests with sleep statements and lengthening the time to feedback when we run a suite of tests. A better way is to have the test subscribe to the monitoring event topic and complete (succeed) as soon as it has received an inventory update notification (see the event-listener sketch after these notes).
- You can also use Monitoring Events to build a support tool for your system. E.g. the tool could send an email or SMS text message to a support person when a certain event is received (OutOfMemoryError, external service not responding, etc.) or when a certain number of events have been received over a given time frame.
- Miscellaneous tips on testing and coding
- If you are ever testing code that depends on the system clock (e.g. at noon every day the system is supposed to execute some function), a neat trick to make that code more testable is to refactor out the dependency on the system clock. E.g. instead of your class making a direct call to, say, System.currentTimeMillis(), have your constructor take in a generic Clock object (or define a class variable and use dependency injection to inject a Clock implementation). Then you can have a SystemClock implementation that simply uses the current system clock, whereas during your test run the class under test is initialized with a FakeClock implementation that hardcodes the time to 12 PM. (A sketch of this follows these notes.)
- In your system tests, which load up and work with data in a database, have your JUnit setUp method clear out the database rather than the tearDown method. This way you have actual data to look at if a test fails (rather than a freshly wiped database).
- Use Simplicators to simplify communication with 3rd party APIs. That is, a facade that maps the 3rd party interface and artifacts over to your domain model, or to something that makes sense in your system. This way you can more easily test your own code (by using mock Simplicators that don't have a dependency on a 3rd party system), and likewise your code is more easily maintained (e.g. a type change or renaming of a field in some 3rd party XML response may require you to update your Simplicator implementation, but might have no impact on your business code if the response has already been mapped over to your domain model).
- Have separate tests that test your production setup. Basically tests that you can run in production to verify a deployment. Don't deploy your unit/system tests into production. Avoid the painful lesson that GitHub recently experienced where a test run in production cleared out their entire production database!
- When programming, don't have your methods return null! Null checks pollute your code and make it harder to debug. Return empty objects instead. (A tiny example follows these notes.)
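Here is roughly what the JMock expectation mentioned in the notes above looks like in a JUnit 4 test. This is a sketch: the PriceSource interface, the Basket class, and all the values are invented for illustration; oneOf(...) and will(returnValue(...)) are JMock 2 syntax.

    import static org.junit.Assert.assertEquals;

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class BasketTest {

        // Hypothetical collaborator interface, invented for this example.
        public interface PriceSource {
            int priceOf(String sku, int quantity);
        }

        // Trivial class under test: delegates pricing to the PriceSource.
        public static class Basket {
            private final PriceSource prices;
            public Basket(PriceSource prices) { this.prices = prices; }
            public int total(String sku, int quantity) {
                return prices.priceOf(sku, quantity);
            }
        }

        @Test
        public void asksThePriceSourceExactlyOnce() {
            Mockery context = new Mockery();
            final PriceSource prices = context.mock(PriceSource.class);

            // Expect exactly one call with exactly these arguments, and stub the result.
            context.checking(new Expectations() {{
                oneOf(prices).priceOf("ABC", 99);
                will(returnValue(4950));
            }});

            assertEquals(4950, new Basket(prices).total("ABC", 99));

            // Fails the test if any expectation was not met.
            context.assertIsSatisfied();
        }
    }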
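And a minimal sketch of the subscribe-instead-of-sleep idea from the checkout example. The EventBus and the background "inventory system" are stand-ins I made up (in the setups discussed above the bus would be a JMS topic), and I'm using Java 8 syntax for brevity. The key point: register the listener first, trigger the work, then block on a latch until the event arrives or a generous timeout expires.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    public class AwaitEventDemo {

        // Stand-in for a JMS topic: listeners receive every published event name.
        static class EventBus {
            private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();
            void subscribe(Consumer<String> listener) { listeners.add(listener); }
            void publish(String event) { listeners.forEach(l -> l.accept(event)); }
        }

        public static void main(String[] args) throws InterruptedException {
            EventBus bus = new EventBus();
            CountDownLatch inventoryUpdated = new CountDownLatch(1);

            // Subscribe BEFORE triggering the work, so the event cannot be missed.
            bus.subscribe(event -> {
                if ("InventoryUpdated".equals(event)) inventoryUpdated.countDown();
            });

            // Simulated asynchronous inventory system: finishes its update some
            // unpredictable time later, then broadcasts a monitoring event.
            ExecutorService inventorySystem = Executors.newSingleThreadExecutor();
            inventorySystem.submit(() -> {
                Thread.sleep(250); // stand-in for real processing time
                bus.publish("InventoryUpdated");
                return null;
            });

            // Succeed as soon as the event arrives; fail only after the timeout.
            boolean arrived = inventoryUpdated.await(10, TimeUnit.SECONDS);
            System.out.println(arrived ? "inventory updated" : "timed out");
            inventorySystem.shutdown();
        }
    }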
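For the system-clock tip, a minimal sketch of what refactoring out the dependency can look like. The Clock, SystemClock, and FakeClock names are illustrative (Java 8 later shipped a java.time.Clock built around the same idea):

    import java.util.Calendar;

    public class NoonScheduler {

        // The seam: code under test depends on this interface, not on System.
        public interface Clock {
            long currentTimeMillis();
        }

        // Production implementation: just delegates to the real system clock.
        public static class SystemClock implements Clock {
            public long currentTimeMillis() { return System.currentTimeMillis(); }
        }

        // Test implementation: pinned to whatever instant the test needs.
        public static class FakeClock implements Clock {
            private final long millis;
            public FakeClock(long millis) { this.millis = millis; }
            public long currentTimeMillis() { return millis; }
        }

        private final Clock clock;

        public NoonScheduler(Clock clock) { this.clock = clock; }

        public boolean isNoon() {
            Calendar cal = Calendar.getInstance();
            cal.setTimeInMillis(clock.currentTimeMillis());
            return cal.get(Calendar.HOUR_OF_DAY) == 12 && cal.get(Calendar.MINUTE) == 0;
        }

        public static void main(String[] args) {
            Calendar noon = Calendar.getInstance();
            noon.set(Calendar.HOUR_OF_DAY, 12);
            noon.set(Calendar.MINUTE, 0);

            System.out.println("real clock: " + new NoonScheduler(new SystemClock()).isNoon());
            System.out.println("fake clock: " + new NoonScheduler(new FakeClock(noon.getTimeInMillis())).isNoon()); // always true
        }
    }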
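Finally, a tiny illustration of the don't-return-null tip; the names are made up, but the pattern is simply to return an empty collection instead of null so callers can iterate unconditionally:

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    public class OrderLookup {
        private final Map<String, List<String>> ordersByCustomer;

        public OrderLookup(Map<String, List<String>> ordersByCustomer) {
            this.ordersByCustomer = ordersByCustomer;
        }

        // Never returns null: callers can loop over the result without a null check.
        public List<String> ordersFor(String customerId) {
            List<String> found = ordersByCustomer.get(customerId);
            return found != null ? found : Collections.<String>emptyList();
        }
    }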
Sunday, March 21, 2010
Agile netið Founded

This week I took part in founding the Agile Network (Agile netið), a consortium of agile-minded Icelandic companies, one of which is Hugsmiðjan, my current employer. The purpose of the non-profit organization is to advocate Agile and Lean development practices and sponsor agile lectures and conferences in Iceland. The founding partners are 10 companies that have been doing agile for a while and are interested in sharing their experiences and learning from each other. At the founding meeting we elected a 3 person board for the organization, of which I got elected secretary :-) Hopefully it won't be too much work :) I am very excited to be part of this process and hope to learn a lot from my fellow members. The website for the organization is www.agilenetid.is. No English version yet, but we are working on it. There is also a Facebook group for those interested in keeping up with what we are doing. Fans wanted :)