
One Post Wonder?

I figured the first hurdle to tackle in proving that we can use PostSharp to bend assemblies to our unit testing wills was to get rid of the need to decorate the targets of our mocking desires with attributes. I spent some time going over the documentation, browsing the forums, and looking through samples. Gael pretty much told me in the comments to my previous post to look into ICustomAttributeProvider and CompoundAttribute. I tried and failed to implement ICustomAttributeProvider; I think I don't quite understand how a PS plugin works yet. I thought going through Ruurd Boeke's examples would be enlightening... no such luck. So, instead of doing the smart thing and actually asking for help, I'm going to continue to waste time trying to figure it out on my own.

Eventually I'll get fed up and ask for some help, but in the meantime I need to make some progress somewhere in this experiment. I might as well take what I have done and at least build out some real functionality. This brings up a whole slew of interesting decisions I need to make:

  • How should I structure the solution? I'm going to put off any serious decisions here until I gain a better understanding of the way PS works, the licensing issues, what it is I actually want to accomplish, whether or not it's feasible, etc. I'm making a note now that the structure of the solution is going to change drastically.
  • What should the API look like? Do I support multiple models for constructing unit tests (Arrange/Act/Assert and/or Record/Replay; there's a rough sketch of the latter after this list)? How do I initialize the mocks? What threading considerations do I need to make?
  • How do I test? I plan on using TDD, but testing a framework used for testing seems to throw me into some sort of backwards meta-testing scenario that just feels weird.
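
To make the API question in the second bullet a little more concrete: the Arrange/Act/Assert model is essentially what my current Expect.Call usage already looks like (see the first test further down), while a Record/Replay model might read something like the sketch below. The Expect.Record, Expect.Replay, and Expect.Verify names are invented purely for illustration; nothing like them exists in my code yet.

[Test]
public void Record_Replay_Style_Sketch()
{
    var ryan = new Person { Id = 11, Name = "Ryan" };

    // Record phase: declare the calls we expect and their canned results
    // (Expect.Record is a hypothetical method, not part of my current Expect class)
    Expect.Record(() => PersonDao.GetPerson(0), ryan);

    // Replay phase: exercise the code under test
    Expect.Replay();
    var resultingPerson = new PersonGetCommand().Execute();

    // Verify phase: fail the test if the recorded calls were never made
    Expect.Verify();
    Assert.That(resultingPerson == ryan);
}
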
Since I can't clearly explain it, I probably need to illustrate what I mean by that last point about testing.

As it stands right now in my very simple implementation, the data about the calls that need to be faked, their arguments, and the faked return values are stored in static fields of the Expect class. In order for tests to be atomic, independent units, these fields need to be reset for each test. The requirement is simple, but how do we write an individual test to test this functionality? I don’t know either. What I ended up with was this:

[Test]
public void Will_Get_Person_From_Mocked_Call()
{
    var ryan = new Person { Id = 11, Name = "Ryan" };

    // Set the expectation for the dao call
    // Note that we don't care about the person id argument
    Expect.Call(() => PersonDao.GetPerson(0), ryan);

    var commandUnderTest = new PersonGetCommand();
    var resultingPerson = commandUnderTest.Execute();

    Assert.That(resultingPerson == ryan);
}

[Test]
public void Wont_Get_Person_From_Mocked_Call()
{
    var commandUnderTest = new PersonGetCommand();
    var resultingPerson = commandUnderTest.Execute();

    Assert.That(resultingPerson.Id != 11);
}


These two tests work together to signal whether or not the Expectations are being correctly discarded at the end of a test/before the next test. If the second test fails, they aren’t. So I started with this, wrote some crappy code in the Expect class, and ended up adding these setup and teardown methods to the TestFixture:



[SetUp]
public void InitializeMockFramework()
{
    Expect.Initialize();
}

[TearDown]
public void UnloadMockFramework()
{
    Expect.UnLoad();
}


Don’t panic… I know this is ugly, especially considering what must be on the other side of Initialize and UnLoad, but this is strictly to illustrate a point – the whole fixture together functions as a single unit. Not ideal, but it’s the best solution I can come up with at the moment to test testing. It allows me to refactor the Expect class and verify that I haven’t broken functionality.
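
For what it's worth, here's a minimal sketch of the kind of thing that might be sitting behind Initialize and UnLoad. The real internals aren't shown here, so the dictionary, the TryGetReturnValue hook for the PostSharp aspect, and everything else in this block are guesses:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;

// A guess at what the Expect class might hold internally. Expectations live in a
// static dictionary keyed by the intercepted method (arguments are ignored, which
// matches the "we don't care about the person id" note in the first test).
public static class Expect
{
    private static readonly Dictionary<MethodInfo, object> expectations =
        new Dictionary<MethodInfo, object>();

    public static void Call<T>(Expression<Func<T>> call, T returnValue)
    {
        // Pull the target method out of the lambda and remember its canned result
        var method = ((MethodCallExpression)call.Body).Method;
        expectations[method] = returnValue;
    }

    // The PostSharp interception aspect would ask this to decide whether to fake a call
    public static bool TryGetReturnValue(MethodInfo method, out object returnValue)
    {
        return expectations.TryGetValue(method, out returnValue);
    }

    public static void Initialize()
    {
        // Start each test with a clean slate
        expectations.Clear();
    }

    public static void UnLoad()
    {
        // Discard expectations so the next test isn't polluted by this one
        expectations.Clear();
    }
}
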



(Side thought: I’m thinking I should take a second look at that xUnit Test Patterns book; I labeled it ‘boring’ the first time I skimmed through it.)
