-
Apply Operation In Collection With Linq
These days, it is unusual to write an implementation of the Iterator pattern from scratch. Most modern languages include a library with a set of aggregate objects that follow this pattern as defined in the Gang of Four book.
In .NET, these objects live in the System.Collections namespace and include classes such as lists, queues and dictionaries, as well as their specialised strongly typed (generic) versions in System.Collections.Generic.
The Gang of Four defines two types of iterator: external and internal. The external iterator requires the client to traverse the items (by looping through them until the end of the iterator). In contrast, the internal iterator delegates this process to a traverser mix-in class that can be extended and customised.
I was looking for something similar to the internal iterator in .NET: a generic way of passing an operation to a collection so that it is applied to each of its items. None of the LINQ extension methods for collections seemed to do this, so I wrote one:
public static class Extensions
{
    public static void ApplyOperation<T>(this IEnumerable<T> iterator, Action<T> action)
    {
        foreach (var item in iterator)
            action(item);
    }
}
Which can be used as follows:
IEnumerable<string> cities = new string[] { "Dublin", "Buenos Aires", "Madrid" };
cities.ApplyOperation(x => Console.WriteLine("City name: " + x));
or simply
new string[] { "Dublin", "Buenos Aires", "Madrid" }
    .ApplyOperation(x => Console.WriteLine("City name: " + x));
which prints the following output:

City name: Dublin
City name: Buenos Aires
City name: Madrid
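As an aside, the concrete List<T> class already ships with a built-in internal iterator, List<T>.ForEach. It is not defined on arbitrary IEnumerable<T> sequences, which is the gap the extension method above fills, but when you already have a List<T> it does the same job. A minimal sketch:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // List<T>.ForEach applies an Action<T> to every element,
        // but it only exists on List<T>, not on IEnumerable<T>.
        var cities = new List<string> { "Dublin", "Buenos Aires", "Madrid" };
        cities.ForEach(x => Console.WriteLine("City name: " + x));
    }
}
```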
-
Unit Test & Design By Contract (Part III)
This article is the last of three on Unit Testing and Design By Contract
Testing Post Conditions and Class Invariants
Now we can test the two post conditions of the Add method:

// Count is 0 before adding any items.
// This is a post condition of the constructor
// and should be tested separately.
[Test]
public void Add_PostCond_CountHasIncreasedByOne()
{
    var stack = new Stack(maximumNumberOfItemsAllowed: 10);
    stack.Add(new object());
    Assert.AreEqual(1, stack.Count);
}
// We use the 'Peek' method to access the item on top of the stack.
// Pre and post conditions of this method should be tested separately.
[Test]
public void Add_PostCond_ItemIsOnTopOfTheStack()
{
    var stack = new Stack(maximumNumberOfItemsAllowed: 10);
    var firstItem = new object();
    var secondItem = new object();
    stack.Add(firstItem);
    stack.Add(secondItem);
    Assert.AreSame(secondItem, stack.Peek());
}
These types of tests are probably the most commonly seen. Good Test Driven Development habits include asserting as few times as possible per test, ideally only once. The point is, as I mentioned earlier, error localisation: if you only test one thing at a time, there has to be one test that explicitly exposes a bug.
There is still one more post condition to test: that no invariant rule of the class was violated by the Add method.

Checking Class Invariants
As mentioned earlier, Class Invariants are the rules that should always remain true after any method is executed on a class. The Add method is no exception, so we will have to test that class invariants were enforced when it executed. We don’t want to repeat ourselves, so we will delegate to a reusable method called CheckInvariants that throws an exception if an invariant rule has been broken.

[Test]
public void Add_PostCond_InvariantsWereEnforced()
{
    // Arrange
    var stack = new MockRepository().PartialMock<Stack>(1);
    stack.Expect(x => x.CheckInvariants());
    stack.Replay();
    // Act
    stack.Add(new object());
    // Assert
    stack.VerifyAllExpectations();
}
This test ensures that the CheckInvariants method has been invoked by the Add method. This type of test, which verifies the interaction of methods rather than the resulting state of objects, is known as a Behavioural Test.

The delegation itself is tested using the Rhino Mocks mocking library for .NET. In this case we create a partial mock of the Stack object. It is partial because we only want to mock one method (CheckInvariants) while still executing a different one (Add) of the same class. We tell the partially mocked Stack that we expect the CheckInvariants method to be called later. After we explicitly invoke Add, we tell the stack to verify our expectation. This verification will fail if the method was not invoked indirectly by Add.

How public are the Class Invariants?
So the last thing is to test the Class Invariants themselves. CheckInvariants is just another method of the class, declared as void CheckInvariants(), but its access level deserves a bit of thought. Let’s see what options we have:
-
private: This method cannot be private because we need to write tests for it (as we did before)
-
protected: This would at least let us create a class TestStack that inherits from the Stack class and adds a public method that invokes CheckInvariants. See this article for more details on how to mock a protected method
-
internal: Internal methods can be invoked from within the assembly, or from an external assembly explicitly designated with the InternalsVisibleTo attribute. This is the option chosen in this example
-
public: Not a bad option either in my opinion: any object can invoke this method at any time, which should not cause any inconvenience.
And here are the two tests for CheckInvariants itself:

[Test, ExpectedException(typeof(InvalidOperationException),
    ExpectedMessage = "Count cannot be negative")]
public void CheckInvariants_PostCond_CountIsNonNegative()
{
    var stack = new MockRepository().PartialMock<Stack>(1);
    stack.Replay();
    stack.Count = -1;
    stack.CheckInvariants();
}

[Test, ExpectedException(typeof(InvalidOperationException),
    ExpectedMessage = "Count cannot be greater than 7")]
public void CheckInvariants_PostCond_CountIsLessThanMaximumAllowed()
{
    var stack = new MockRepository().PartialMock<Stack>(7);
    stack.Replay();
    stack.Count = 8;
    stack.CheckInvariants();
}

And finally, here is the code for the Stack class, a class that honours its contract:

public class Stack
{
    private readonly int _maximumNumberOfItemsAllowed;
    private IList _array = new ArrayList();

    public Stack(int maximumNumberOfItemsAllowed)
    {
        _maximumNumberOfItemsAllowed = maximumNumberOfItemsAllowed;
    }

    public int Count { get; protected internal set; }

    public void Add(object item)
    {
        if (item == null)
            throw new ArgumentNullException("item");
        if (_maximumNumberOfItemsAllowed == Count)
            throw new InvalidOperationException(
                "Cannot add items when the stack is full.");
        Count++;
        _array.Add(item);
        CheckInvariants();
    }

    public object Peek()
    {
        return _array[Count - 1];
    }

    protected internal virtual void CheckInvariants()
    {
        if (Count < 0)
            throw new InvalidOperationException("Count cannot be negative");
        if (Count > _maximumNumberOfItemsAllowed)
            throw new InvalidOperationException(
                "Count cannot be greater than " + _maximumNumberOfItemsAllowed);
    }
}
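As an aside, .NET 4 also ships with the Code Contracts library (System.Diagnostics.Contracts), which lets you state the same three rule types declaratively inside the method itself. Here is a sketch of how the Stack contract might look with it; the class name ContractedStack is illustrative, and note that runtime enforcement requires the contracts binary rewriter (ccrewrite) — without it, these calls are conditional on the CONTRACTS_FULL symbol and compile out:

```csharp
using System;
using System.Collections;
using System.Diagnostics.Contracts;

// Illustrative sketch: DBC rules expressed with .NET 4 Code Contracts.
// Without the ccrewrite tool, the Contract.* calls below are no-ops.
public class ContractedStack
{
    private readonly int _maximumNumberOfItemsAllowed;
    private readonly IList _items = new ArrayList();

    public ContractedStack(int maximumNumberOfItemsAllowed)
    {
        _maximumNumberOfItemsAllowed = maximumNumberOfItemsAllowed;
    }

    public int Count { get; private set; }

    public void Add(object item)
    {
        // Pre conditions
        Contract.Requires(item != null);
        Contract.Requires(Count < _maximumNumberOfItemsAllowed);
        // Post condition: Count has increased by one
        Contract.Ensures(Count == Contract.OldValue(Count) + 1);

        _items.Add(item);
        Count++;
    }

    public object Peek()
    {
        return _items[Count - 1];
    }

    // Class invariants, checked after every public method once rewritten
    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        Contract.Invariant(Count >= 0);
        Contract.Invariant(Count <= _maximumNumberOfItemsAllowed);
    }
}
```

This trades the unit-test-based verification described above for in-code declarations; the two approaches can also be combined.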
-
-
Unit Test & Design By Contract (Part II)
This article is the second of three on Unit Testing and Design By Contract
Testing Pre-Conditions
Our User Story contains two pre conditions, so we should write a test for each of them.
[Test, ExpectedException(typeof(ArgumentNullException))]
public void Add_PreCond_ItemIsRequired()
{
    new Stack(maximumNumberOfItemsAllowed: 1)
        .Add(null);
}

[Test, ExpectedException(typeof(InvalidOperationException),
    ExpectedMessage = "Cannot add items when the stack is full.")]
public void Add_PreCond_StackCantBeFull()
{
    new Stack(maximumNumberOfItemsAllowed: 0)
        .Add(new object());
}
These unit tests ensure that if either of the two pre conditions is not satisfied, the Supplier will not execute the method. Instead, it throws an exception to the Client.

When we unit test this behaviour, we assert that running the test will result in an exception being thrown, and we can additionally specify the type of the exception. These tests are also known as Unhappy Tests.
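As an aside, later versions of NUnit removed the ExpectedException attribute in favour of Assert.Throws, which also hands you the exception so you can make further assertions on it. A sketch of the same two unhappy tests in that style; the minimal Stack included here stands in for the full class developed in Part III:

```csharp
using System;
using NUnit.Framework;

// Minimal stand-in for the Stack class from Part III (illustrative).
public class Stack
{
    private readonly int _max;
    public int Count { get; private set; }
    public Stack(int maximumNumberOfItemsAllowed) { _max = maximumNumberOfItemsAllowed; }

    public void Add(object item)
    {
        if (item == null) throw new ArgumentNullException("item");
        if (Count == _max)
            throw new InvalidOperationException("Cannot add items when the stack is full.");
        Count++;
    }
}

[TestFixture]
public class StackPreConditionTests
{
    [Test]
    public void Add_PreCond_ItemIsRequired()
    {
        var stack = new Stack(maximumNumberOfItemsAllowed: 1);
        // Assert.Throws fails the test unless the delegate throws the given type
        Assert.Throws<ArgumentNullException>(() => stack.Add(null));
    }

    [Test]
    public void Add_PreCond_StackCantBeFull()
    {
        var stack = new Stack(maximumNumberOfItemsAllowed: 0);
        var ex = Assert.Throws<InvalidOperationException>(() => stack.Add(new object()));
        Assert.AreEqual("Cannot add items when the stack is full.", ex.Message);
    }
}
```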
Happy Case: The N + 1 Pre Condition
How do you test that all pre conditions are met? Well, you could argue that this is not needed: when testing post conditions we can assume that the method executed, which means that all pre condition checks passed.
Let’s suppose a new pre condition is added to the class, and that the developer forgets to write the unit test for it. Yes, all post-condition tests should now fail: there is a new pre condition in town. But this may not happen: what if the new pre condition is coincidentally met by all existing tests? For example, a new requirement that only allows adding non-empty strings to the stack may not be detected by any existing test if they all already use non-empty strings.
There is another problem, related to error localisation. In his brilliant book “Working Effectively with Legacy Code”, Michael Feathers emphasises that one of the benefits of good tests is that they help us localise problems easily. With a happy case, we do not want to test the outcome of a method. We want to check that if all pre conditions are satisfied, the method throws no error. This is the opposite scenario of an unhappy case: a happy case for pre conditions.
[Test]
public void Add_PreCond_HappyCase()
{
    new Stack(maximumNumberOfItemsAllowed: 1)
        .Add(new object());
}
Notice that we are not asserting anything in this test. We simply want to pass all the validations for pre conditions. We are also not interested in testing the state of the stack after adding the item; in other words, we are not testing post conditions here. We will do that in the next part.
Coming Next:
-
Unit Test & Design By Contract (Part I)
One of the most useful metaphors for organising software is the one of Design by Contract.
This technique, which was formalised by Bertrand Meyer in 1986, has its roots in earlier ideas from Tony Hoare and Barbara Liskov. In a nutshell, DBC equates the relationship between two pieces of software (i.e. two methods) to that of a Client and a Supplier in the business world, both interacting with clear rights and obligations, specified in a Contract.
In DBC, a Supplier can serve a request from a Client only if the latter satisfies the prerequisites, or preconditions, required by the Supplier. The Supplier, in turn, has an obligation to deliver the result the Client expects, the post-condition, once the request is served.
Additionally, a contract must specify certain properties that must remain true before and after the interaction between Client and Supplier. This obligation is known as a class invariant.
Example:
A classic way to show DBC in practice is by building a data structure class, such as a Stack. The rules for adding items to this structure are very clear:
-
Pre conditions
-
An item must be provided to be added to the stack
-
There must be room in the stack to store the item
-
-
Post conditions
-
The number of items in the Stack increased by one
-
The item on top of the Stack is the one we just added
-
-
Class Invariants
-
The number of items in the Stack is greater than or equal to zero
-
The number of items in the Stack is less than or equal to the maximum size allowed
-
Who checks the contract?
Typically, a method checks only the pre-conditions of the contract. If any of the pre conditions was not honoured by the Client of the method, the Supplier will not continue with the execution of the method, since its result (and even its execution) could be unexpected. This is usually done with assertions at the beginning of the method: we assert that a precondition is true prior to executing the method body; if the assertion fails, execution is suspended (an exception is thrown).
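The guard-clause style of precondition checking just described can be sketched for our Stack example as follows (a simplified skeleton; the item storage and the invariant checks are omitted here):

```csharp
using System;

public class Stack
{
    private readonly int _maximumNumberOfItemsAllowed;
    public int Count { get; private set; }

    public Stack(int maximumNumberOfItemsAllowed)
    {
        _maximumNumberOfItemsAllowed = maximumNumberOfItemsAllowed;
    }

    public void Add(object item)
    {
        // Pre condition 1: an item must be provided
        if (item == null)
            throw new ArgumentNullException("item");
        // Pre condition 2: there must be room in the stack to store the item
        if (Count == _maximumNumberOfItemsAllowed)
            throw new InvalidOperationException(
                "Cannot add items when the stack is full.");

        // ...store the item; the post conditions are verified by unit tests
        Count++;
    }
}
```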
Post conditions and Class Invariants could, in theory, be checked in code too. However, this would lead to very long, unreadable and inefficient code. Also, in many cases testing the post-conditions is simply not viable. Remember the DBC metaphor and it will be clear that the party responsible for an obligation (the Supplier in this case) cannot be in charge of verifying that the obligation has been fulfilled (i.e. that the post condition is true).
Now, the Client could verify Post-Conditions and Class Invariants, but we don’t know (and shouldn’t care) who the client of the method we are currently building will be. And if we are creating a Client of a method, our code requires the Supplier to do what is expected when we invoke its method. So where do we test Post conditions and Class Invariants? You know the answer: in the Unit Tests.
Each of the six rules defined above can be converted into a unit test. This is what we’ll do next.
A structure for your tests
But before writing the tests, let me suggest a naming convention for them. I like naming the tests using the following pattern: %MethodName%_%RuleType%_%Description%
where the rule type is one of the three types of DBC rule: pre condition, post condition or class invariant. Here is the list of tests we will write for this User Story, based on the previous pattern:
[Test]
public void Add_PreCond_ItemIsRequired() { throw new NotImplementedException(); }

[Test]
public void Add_PreCond_StackCantBeFull() { throw new NotImplementedException(); }

[Test]
public void Add_PreCond_HappyCase() { throw new NotImplementedException(); }

[Test]
public void Add_PostCond_CountHasIncreasedByOne() { throw new NotImplementedException(); }

[Test]
public void Add_PostCond_ItemIsOnTopOfTheStack() { throw new NotImplementedException(); }

[Test]
public void Add_PostCond_InvariantsWereEnforced() { throw new NotImplementedException(); }

[Test]
public void CheckInvariants_PostCond_CountIsNonNegative() { throw new NotImplementedException(); }

[Test]
public void CheckInvariants_PostCond_CountIsLessThanMaximumAllowed() { throw new NotImplementedException(); }
The Happy Case test does not come from any of the six rules, and there are two tests for a method called CheckInvariants. We will look into these tests in detail later.

Typically you would put these methods in a class called StackTests. However, if you have a method that requires a large number of tests, you can have one test class per method instead. Your test class would then be named Stack_AddTests and the test names would be, for example, PreCond_ItemIsRequired.

Generating Documentation from your Tests
By following this structure for tests and fixtures, we can enrich the documentation generated from our code. This requires customisation of tools like NDoc or Sandcastle. The exact implementation is outside the scope of this article, but I will post it at some point. The idea is to parse the names of fixtures and tests and append pre and post conditions to their related methods in a readable way.
One caveat I have with this idea is that you have to remember that you are not using the “true” code to generate the docs, but the names of the APIs. This means that your documentation will only be as accurate as the names of your tests.
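To give an idea of the parsing involved, here is a minimal sketch that splits a test name following the convention into its three parts (illustrative only, not tied to NDoc or Sandcastle):

```csharp
using System;

class TestNameParser
{
    // Splits a test name following %MethodName%_%RuleType%_%Description%
    // into at most three parts, so underscores in the description survive.
    static void Main()
    {
        var parts = "Add_PreCond_ItemIsRequired".Split(new[] { '_' }, 3);
        Console.WriteLine("Method: " + parts[0]);      // Add
        Console.WriteLine("Rule type: " + parts[1]);   // PreCond
        Console.WriteLine("Description: " + parts[2]); // ItemIsRequired
    }
}
```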
Coming next:
-
-
Notes From Qcon London 2012
Did you know Facebook is an ordinary single-threaded PHP website? How complex do you think Twitter’s business logic model is? Do you know how Nokia is preparing to scale Ovi once (if) their Microsoft deal is agreed?
You may already be familiar with InfoQ, but if you are not, I recommend taking a look at www.infoq.com. InfoQ is an online community focused on innovation in enterprise software development. On a daily basis, they deliver high quality articles for IT communities such as .NET, Java, SOA, Ruby and Agile, among others.
QCon is the annual event organised by InfoQ, where experts in each of these diverse areas provide an update on the latest lessons learned, discoveries and challenges ahead. I am just back from QCon 2011 in London, where I had the opportunity to listen to some of these experiences and to get an idea of what is being done in other companies and countries.
It’s not often that you get to talk to the people behind architectures such as Twitter, Facebook, Nokia and Guardian.co.uk, so I want to share some of the good stuff from QCon here.
Today I’m starting with an overview of the talks I attended on the first of the three days of the conference. I can’t guarantee I’ll remember (or that I even grasped) everything I listened to, but I can at least give you resources and point you to the right documentation. I also plan to write detailed posts on a few topics I found most interesting.
Day 1
All presentations were held over three days and categorised into five areas, or tracks, per day. Unfortunately there is only one of me, so I had to select carefully …
One of the tracks of Day 1 was “Enterprise Agile Transformation”.
Being an Agile geek, I had to force myself to diversify, which was as difficult as getting a dog to eat a balanced diet.
Scaling Lean & Agile: Large, Multisite or Offshore Delivery
Craig Larman has vast experience implementing SCRUM in very large-scale projects. By large I mean 500-1500 person, multisite teams, with clients such as Xerox and Alcatel-Lucent. From these experiences he shared lessons learned, patterns and anti-patterns on how to adopt and maintain Agile in multi-hundred-person teams, both co-located and offshore.
Larman is also the author of the books “Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum” and “Practices for Scaling Lean & Agile Development”
This was the keynote of the day, which means a bit of time was wasted on bad jokes, but it gave us an idea of how to keep a high level of communication and transparency despite teams being very large or not co-located.
The Invisible Computer Lab
I am still not sure why I chose this presentation, but it turned out to be very interesting. Fraser Speirs is an iPhone developer who is also the head of Computing at a school in Greenock, Scotland. He told us how his school provided all children with iPads as the main learning channel. He also shared with us the reactions of students, teachers and parents, his view of the overall results of the iPad implementation, and how he foresees the future of technology in the classroom.
Contracts & Collaboration in Agile “Offshore” Outsourced Development
Again Craig Larman (I told you I struggled to diversify this day!), with a deep dive on how to scale Agile, in particular with offshore outsourced teams. There were a few useful tips and tricks on how to deal with offsite teams without losing the high level of transparency required by Agile methodologies. From cultural differences to contract-model choices, this was no high-level, abstract Agile talk, but a very hands-on, pragmatic one, including tips as low-level as which video chat software to use in daily SCRUMs.
Bringing developers and testers closer together with Visual Studio
The “Solutions” track, the only one that ran over all three days, was the showroom for commercial products and frameworks. In this context, Giles Davies from Microsoft UK presented Test Manager, a new application for collaboration between testers and developers.
I have to say I was extremely impressed by this tool, which is an extension of Team Foundation Server. What surprised me is the amount of help it provides to testers and developers during the lifecycle of a bug:
-Testers can record their UI interactions within Windows and store them (okay, this is not all that impressive yet…)
-They can modularise these UI steps into logical blocks (e.g. “Find a book to buy”, “Add it to the cart with IE 7.0”, “Cancel order using keyboard only”, etc.)
-All test steps get auto-appended to the bug description.
-Developers can click on the logical block that failed. They can even debug the problem from the exact execution performed by the tester (yes, it saves the entire state of the failed test, so developers and testers are always talking about the same environment!)
-It can even take automatic snapshots of cloud environments (we are talking Windows Azure only, of course)
Complex Event Processing: DSL for High Frequency Trading
Richard Tibbetts presented the system his company builds: StreamBase, an engine for Complex Event Processing Domain Specific Languages, compiled into JVM code. The common perception of DSLs is that they are very expressive but cannot perform in scenarios where speed is at a premium, like front-office trading applications.
Unfortunately, the talk wasn’t able to dissipate my objections to DSLs. Ironically, despite all the talk about speed, I found this session so slow it felt never-ending.
Where Did My Architecture Go? Preserving software architecture in its implementation
Agile systems are evolutionary, which means that the design of the system is not fully detailed up front. Instead, it “emerges” from a series of cycles of iterative development. For those who are not used to evolutionary design, this may sound like anarchy or, worse, like no design at all.
Eoin Woods gave a very convincing walkthrough of a set of tools that allow a system’s architectural representation to be generated from its implementation. This allows teams to keep their architectural design updated, so that the big picture remains clear as the software evolves. Some of the tools I knew already, but there were a few others that I’d really like to try in my next project. The tools demonstrated included support for .NET, Java and C++ technologies.
More may follow…
As I mentioned earlier, I will try to follow this post with two more covering the presentations I attended on Days 2 and 3. After that, I will write detailed posts on some of the talks I found most interesting.