<feed xmlns="http://www.w3.org/2005/Atom"><title type="text">Damian Hickey</title><subtitle type="text">Damian Hickey</subtitle><id>http://dhickey.ie/</id><updated>2016-01-26T23:43:18+01:00</updated><author><name>Damian</name><uri>http://dhickey.ie</uri><email>dhickey@gmail.com</email></author><generator>Sandra.Snow Atom Generator</generator><link rel="alternate" href="http://dhickey.ie/feed.xml" /><link rel="self" type="text/html" title="Damian Hickey" href="http://dhickey.ie/feed.xml" /><entry><id>http://dhickey.ie/2016/01/commercial-suicide-integration-at-the-database-level/</id><title type="text">Commercial Suicide - Integration at the Database Level</title><summary type="html">&lt;blockquote&gt;
  &lt;p&gt;&lt;em&gt;This post was originally authored by &lt;a href="https://twitter.com/JakCharlton"&gt;Jak Charlton&lt;/a&gt; in 2009 and was originally hosted at &lt;a href="http://devlicio.us/blogs/casey/archive/2009/05/14/commercial-suicide-integration-at-the-database-level.aspx"&gt;devlicio.us&lt;/a&gt;. That site appears to be inactive and regularly unavailable so, with permission, I'm re-publishing it here as I think it's a timeless piece.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are many ways you can commit commercial suicide, but there is possibly no slower and more agonising death than that produced by attempting that great architectural objective, the single authoritative database to which all applications talk.&lt;/p&gt;

&lt;p&gt;The theory is good: if we have a single database then we have all our business information in one place, accessible to all and easy to report against, with reduced maintenance costs, consistency across all applications, and a host of other good objectives.&lt;/p&gt;

&lt;p&gt;However, all these noble ideals hide a more fundamental problem: the single database does not solve any of them, and it makes most of them into far bigger problems.&lt;/p&gt;

</summary><published>2016-01-02T23:00:00Z</published><updated>2016-01-02T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2016/01/commercial-suicide-integration-at-the-database-level/" /><content type="html">&lt;blockquote&gt;
  &lt;p&gt;&lt;em&gt;This post was originally authored by &lt;a href="https://twitter.com/JakCharlton"&gt;Jak Charlton&lt;/a&gt; in 2009 and was originally hosted at &lt;a href="http://devlicio.us/blogs/casey/archive/2009/05/14/commercial-suicide-integration-at-the-database-level.aspx"&gt;devlicio.us&lt;/a&gt;. That site appears to be inactive and regularly unavailable so, with permission, I'm re-publishing it here as I think it's a timeless piece.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are many ways you can commit commercial suicide, but there is possibly no slower and more agonising death than that produced by attempting that great architectural objective, the single authoritative database to which all applications talk.&lt;/p&gt;

&lt;p&gt;The theory is good: if we have a single database then we have all our business information in one place, accessible to all and easy to report against, with reduced maintenance costs, consistency across all applications, and a host of other good objectives.&lt;/p&gt;

&lt;p&gt;However, all these noble ideals hide a more fundamental problem: the single database does not solve any of them, and it makes most of them into far bigger problems.&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;h2&gt;All Information In One Place - The Single Authority&lt;/h2&gt;

&lt;p&gt;On the face of it this sounds like a great objective - after all, developers try to live by the maxim of Don't Repeat Yourself - and data in many places is a clear violation of that principle.&lt;/p&gt;

&lt;p&gt;Data that appears across many applications and across many storage mechanisms leads to all sorts of massive problems: inconsistency, duplication, replication, duplicated business logic and code - essentially all boiling down to spaghetti data. Spaghetti data is much like spaghetti code - it sprawls, gets tangled up, and becomes hard to pull apart without covering yourself in pasta sauce. This is obviously a "bad thing".&lt;/p&gt;

&lt;h3&gt;So what is wrong?&lt;/h3&gt;

&lt;p&gt;Well, the first and most obvious thing that is wrong is that all applications have different requirements, and different "world views". Although there may in theory be some concept of a "Customer" for the organisation as a whole, even this most basic of data items varies widely between individual departments and even individual applications within a single department.&lt;/p&gt;

&lt;p&gt;You could approach this problem, as many database centric people would do, and say "that is the problem right there, we need to standardise all these applications to use the One True Customer". But that is missing the really important word in the definition of the problem ... "requirements" ... it is not accidental that the Customer is different to different parts of the business, it really &lt;em&gt;is&lt;/em&gt; different.&lt;/p&gt;

&lt;p&gt;Your database guys could say "well, your Customer in your application may be different as you have different requirements, but you will just have to fit it into our One True Customer", but this is like trying to put snakes into a plastic bag - they really don't want to be in the bag, they don't fit too well in the bag, and sooner or later, while putting one in, some others are going to escape or bite your hand. And when your other department starts putting its snakes in the bag too, you will be fighting over who gets bitten.&lt;/p&gt;

&lt;p&gt;And worse still - now you are trying to map from your requirements-based Customer to the One True Customer, and you spend an inordinate amount of time maintaining this translation layer in your application. When that One True Customer changes, as it undoubtedly will as new applications require it to be expanded to deal with more data, every previous application needs to be revisited, large parts of it need to be re-written, and the whole application needs to be regression tested again. And you have to do this for every single application talking to your single authority database.&lt;/p&gt;

&lt;p&gt;You could just skip this stuff, and rely on your applications ignoring this new data, and rely on the database not caring whether they correctly update new data - but this really will come back to bite you: when that bag of snakes starts getting really large and really full, you really don't want to be the one trying to get new snakes into it.&lt;/p&gt;

&lt;h3&gt;The Truth About the One True Authority&lt;/h3&gt;

&lt;p&gt;There isn't one.&lt;/p&gt;

&lt;p&gt;There - I've said it - I have upset all those database guys, probably upset a large number of SOA guys (I'll cover Commercial Suicide - Integration at Service Level in a later post), and have totally disagreed with noble business objectives.&lt;/p&gt;

&lt;p&gt;The truth is, data must have context - without context, data is worthless, absolutely and totally worthless. Data stored in a database has no context, and therefore has no value. Context is provided by the applications that read and write that data, and therefore they are the only thing that matters, and their requirements are the only thing that matters. That means, they need data that is specific to their application, structured in a way that makes that application meet the business objectives, and in a way that makes that application meet non-functional requirements like resilience, reliability and consistency.&lt;/p&gt;

&lt;h3&gt;So Why Does Anyone Want the One True Authority Database?&lt;/h3&gt;

&lt;p&gt;Well, in legacy terms it is easy to understand why database admins and database developers want it - it is their lifeblood, their whole raison d'être. More importantly, it is the culture in which they were brought up - the data is the important thing, the data is the centre of the universe, the data must be consistent, uniform and pure.&lt;/p&gt;

&lt;p&gt;But leaving database developers aside, more importantly why would a corporation want the One True Authority Database (OTADB)? After all, the title of this post says this is "Commercial Suicide", so why hasn't this got through to management?&lt;/p&gt;

&lt;p&gt;Well - the promise of OTADB is that it will reduce errors in duplication, reduce waste, reduce duplicated effort and reduce maintenance costs - all highly desirable business objectives. And indeed, from those that advocate this approach (those database admins and developers again), the OTADB sounds mighty attractive. On the face of it, it achieves all of these objectives.&lt;/p&gt;

&lt;p&gt;Where it falls down is that this holy grail of software development is always just out of reach; they never quite manage to achieve it. Each application that is developed starts to make the OTADB worse, people start to hack things into it to get it to meet business requirements - not because developers want to hack things together, but precisely because the restriction of the OTADB &lt;em&gt;forces&lt;/em&gt; them to do it that way if they are to deliver any kind of functionality at all.&lt;/p&gt;

&lt;p&gt;The database admins blame these hacks for ruining their vision of the One True Authority Database; they say they have to fight the application teams to stop them messing up their nice database, and that the OTADB no longer meets the noble objectives because those pesky development teams have messed it up for everyone.&lt;/p&gt;

&lt;h3&gt;Wait a Minute - What is the BUSINESS Objective Behind the One True Authority Database?&lt;/h3&gt;

&lt;p&gt;If anyone was to step back from those noble objectives and ask a far more fundamental question, the solution might actually be a lot more obvious than it may seem. While they are all noble objectives, largely made worse by the OTADB approach itself, they are not the real business objective.&lt;/p&gt;

&lt;p&gt;Underlying all the other requirements, the ultimate business requirement that drives people (in particular database admins) to want a single database is so that they can see what their company looks like, in other words - so they can produce Management Information - reports to you and me.&lt;/p&gt;

&lt;p&gt;This is the single and most fundamental requirement for a business - to have a clear, consistent, accurate and up to date picture of what their company looks like. This is what management needs; it is what allows them to make decisions, allows them to identify problems and allows them to spot opportunities.&lt;/p&gt;

&lt;p&gt;So, we are going to all this effort, and believe me it is extensive and significant effort, all to support some reporting tools at the end of the day. Reporting tools have problems with data in different formats, with data that is inconsistent, with data that is disparate and distributed. So at some point in the past, the "accepted truth" became "we need one true authority database to be able to produce good reports".&lt;/p&gt;

&lt;p&gt;Reporting is a Context - and data only has purpose and relevance in context.&lt;/p&gt;

&lt;h3&gt;If the Elephant in the Room is Actually Reporting, How Do We Solve The Elephant Problem?&lt;/h3&gt;

&lt;p&gt;This is almost so easy to deal with, it is silly. Perhaps it is because it is so obviously simple that it has been overlooked by many and rejected by others. Especially as it violates another one of those noble objectives ... to provide quality reporting information, we duplicate more of our data.&lt;/p&gt;

&lt;p&gt;Yep - we duplicate it - after all, reporting data is read only, so it doesn't matter if it is just a copy of other data. Reporting requirements are also very different to transactional requirements, so we get the added benefit of being able to optimise that duplicated data for the reporting functions.&lt;/p&gt;

&lt;p&gt;Data in relational databases is actually very poor for query and reporting purposes, and there is a constant compromise to make it fast for all those applications to write to, that makes it poor to report on - and vice-versa.&lt;/p&gt;

&lt;p&gt;How this data gets into the reporting database isn't my direct concern in this blog post; suffice it to say the "easy" way is to publish messages with data changes, and have a reporting application pick those up and persist them. My point here is that splitting the reporting functions from the day-to-day business functions pays massive dividends.&lt;/p&gt;

&lt;h3&gt;Now We Have a New Problem&lt;/h3&gt;

&lt;p&gt;That still leaves us with one problem - what happens when disparate applications really do need to know about data in other applications? What happens when my call centre operatives are asked to update the address for one of the Customers? Now, as each application has its own view of the world and its own data stores, my accounting application does not have access to that change.&lt;/p&gt;

&lt;p&gt;Well, the solution to the "how does data get in the reporting databases" question is exactly the same one here - you publish messages from your application when you have changes that the rest of the corporation may be interested in. Fire off a message saying "CustomerAddressUpdated" and any other application that is concerned can now listen for that message and deal with it as it sees fit.&lt;/p&gt;
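
&lt;p&gt;As a rough sketch of that publish side (the &lt;code&gt;IMessageBus&lt;/code&gt; abstraction and the class names here are hypothetical stand-ins for whatever messaging infrastructure you use, not a specific framework):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// A plain message describing what happened - not a shared entity.
public class CustomerAddressUpdated
{
    public Guid CustomerId { get; set; }
    public string NewAddress { get; set; }
}

public class CallCentreCustomerService
{
    private readonly IMessageBus _messageBus; // hypothetical abstraction

    public CallCentreCustomerService(IMessageBus messageBus)
    {
        _messageBus = messageBus;
    }

    public void UpdateAddress(Guid customerId, string newAddress)
    {
        // Update this application's own store first...
        // ...then announce the change to anyone who cares.
        _messageBus.Publish(new CustomerAddressUpdated
        {
            CustomerId = customerId,
            NewAddress = newAddress
        });
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The accounting application subscribes to &lt;code&gt;CustomerAddressUpdated&lt;/code&gt; and maps it into its own model - or ignores it entirely.&lt;/p&gt;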

&lt;h3&gt;As It Sees Fit&lt;/h3&gt;

&lt;p&gt;And this is the real business objective we were trying to achieve in the beginning ... avoiding Corporate Suicide.&lt;/p&gt;

&lt;p&gt;When applications are each responsible for their own data, their own actions, and are only responsible for letting the "enterprise" know they have made some changes that other things may need to know about - then you have your solution.&lt;/p&gt;

&lt;p&gt;In good development terms, we have proper separation of concerns ... applications are responsible for their data, and their data only. They decide if they care about data from other applications - they are not forced to use it, nor to work around it.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2015/06/capturing-log-output-in-tests-with-xunit2/</id><title type="text">Capturing Log Output in Tests with XUnit 2</title><summary type="html">&lt;p&gt;xunit 2.x now enables parallel testing by default. &lt;a href="https://xunit.github.io/docs/capturing-output.html"&gt;According to the docs&lt;/a&gt;, using console to output messages is no longer viable:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;When xUnit.net v2 shipped with parallelization turned on by default, this output capture mechanism was no longer appropriate; it is impossible to know which of the many tests that could be running in parallel were responsible for writing to those shared resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The recommended approach is now to take a dependency on &lt;code&gt;ITestOutputHelper&lt;/code&gt; in your test class.&lt;/p&gt;

&lt;p&gt;But what if you are using a library with logging support, perhaps a 3rd party one, and you want to capture its log output related to your test?&lt;/p&gt;

&lt;p&gt;Because logging is considered a cross-cutting concern, the &lt;em&gt;typical&lt;/em&gt; usage is to declare a logger as a static shared resource in a class:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class Foo
{
    private static readonly ILog s_logger = LogProvider.For&amp;lt;Foo&amp;gt;();

    public void Bar(string message)
    {
        s_logger.Info(message);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The issue here is that if this class is used concurrently, its log output will be interleaved, just as with using the console in tests.&lt;/p&gt;

</summary><published>2015-06-01T22:00:00Z</published><updated>2015-06-01T22:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2015/06/capturing-log-output-in-tests-with-xunit2/" /><content type="html">&lt;p&gt;xunit 2.x now enables parallel testing by default. &lt;a href="https://xunit.github.io/docs/capturing-output.html"&gt;According to the docs&lt;/a&gt;, using console to output messages is no longer viable:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;When xUnit.net v2 shipped with parallelization turned on by default, this output capture mechanism was no longer appropriate; it is impossible to know which of the many tests that could be running in parallel were responsible for writing to those shared resources.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The recommended approach is now to take a dependency on &lt;code&gt;ITestOutputHelper&lt;/code&gt; in your test class.&lt;/p&gt;

&lt;p&gt;But what if you are using a library with logging support, perhaps a 3rd party one, and you want to capture its log output related to your test?&lt;/p&gt;

&lt;p&gt;Because logging is considered a cross-cutting concern, the &lt;em&gt;typical&lt;/em&gt; usage is to declare a logger as a static shared resource in a class:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class Foo
{
    private static readonly ILog s_logger = LogProvider.For&amp;lt;Foo&amp;gt;();

    public void Bar(string message)
    {
        s_logger.Info(message);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The issue here is that if this class is used concurrently, its log output will be interleaved, just as with using the console in tests.&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;h2&gt;Solution&lt;/h2&gt;

&lt;p&gt;The typical approach to message correlation with logging is to use &lt;a href="https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/NDC.html"&gt;diagnostic contexts&lt;/a&gt;. That is, for each xunit 2.x test collection, we attach a correlation id to each log message and filter + pipe the messages we're interested in to the collection's &lt;code&gt;ITestOutputHelper&lt;/code&gt; instance.&lt;/p&gt;

&lt;p&gt;In this &lt;a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests"&gt;sample solution&lt;/a&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using serilog, &lt;a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/LoggingHelper.cs#L22-L26"&gt;we capture all log output&lt;/a&gt; to an &lt;code&gt;IObservable&amp;lt;LogEvent&amp;gt;&lt;/code&gt;. Note we must &lt;code&gt;.Enrich.FromLogContext()&lt;/code&gt; for the correlation id to be attached.&lt;/li&gt;
&lt;li&gt;When each &lt;a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/Tests.cs#L13"&gt;test class is instantiated&lt;/a&gt;, we open a unique diagnostic context, &lt;a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/LoggingHelper.cs#L31-L45"&gt;subscribe and &lt;em&gt;filter&lt;/em&gt; log messages for that context and pipe them to the test class's &lt;code&gt;ITestOutputHelper&lt;/code&gt; instance&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;When the test class is disposed, &lt;a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/LoggingHelper.cs#L47-L51"&gt;the subscription and the context are disposed&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;
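
&lt;p&gt;Boiled down, the capture helper from steps 2 and 3 looks roughly like this (a sketch only - the property name, the &lt;code&gt;s_logEvents&lt;/code&gt; observable from step 1 and the use of Rx's &lt;code&gt;CompositeDisposable&lt;/code&gt; are illustrative; see the linked sample for the real code):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public static IDisposable Capture(ITestOutputHelper outputHelper)
{
    var captureId = Guid.NewGuid();

    // Attach a correlation id to every log event raised while this
    // diagnostic context is open (requires .Enrich.FromLogContext()).
    var context = LogContext.PushProperty("CaptureId", captureId);

    // Forward only the events carrying our id to this test's output.
    var subscription = s_logEvents
        .Where(logEvent =&amp;gt; logEvent.Properties.ContainsKey("CaptureId")
            &amp;amp;&amp;amp; logEvent.Properties["CaptureId"].ToString().Contains(captureId.ToString()))
        .Subscribe(logEvent =&amp;gt; outputHelper.WriteLine(logEvent.RenderMessage()));

    // Disposing closes the context and stops forwarding.
    return new CompositeDisposable(context, subscription);
}
&lt;/code&gt;&lt;/pre&gt;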

&lt;p&gt;A test class will look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class TestClass1 : IDisposable
{
    private readonly IDisposable _logCapture;

    public TestClass1(ITestOutputHelper outputHelper)
    {
        _logCapture = LoggingHelper.Capture(outputHelper);
    }

    [Fact]
    public void Test1()
    {
        //...
    }

    public void Dispose()
    {
        _logCapture.Dispose();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;While we used &lt;a href="https://github.com/damianh/LibLog"&gt;LibLog&lt;/a&gt; in the sample library, the same approach applies to any library that defines its own logging abstraction or has a dependency on a logging framework.&lt;/li&gt;
&lt;li&gt;While we used &lt;a href="http://serilog.net"&gt;Serilog&lt;/a&gt; to wire up the observable sink, we could do similar with another logging framework (NLog, Log4Net etc).&lt;/li&gt;
&lt;/ol&gt;
</content></entry><entry><id>http://dhickey.ie/2015/04/stepping-down-from-neventstore/</id><title type="text">Stepping down as NEventStore coordinator</title><summary type="html">&lt;p&gt;I can't remember exactly when it happened, but for the last few years at least, I have been the core maintainer / coordinator of &lt;a href="https://github.com/NEventStore/NEventStore"&gt;NEventStore&lt;/a&gt;. Built by Jonathan Oliver and just known as EventStore at the time (not related to the other &lt;a href="https://geteventstore.com"&gt;EventStore&lt;/a&gt;), it provided me and many people with a really easy way to get up and running with event sourcing. For unknown reasons the core maintainers back then stepped back and, since I was heavily invested in it, I offered to take it over.&lt;/p&gt;

&lt;p&gt;Since then it has gone through a rename, 2 major versions and a bunch of minor releases. In that process I learned a lot about running an OSS project and connected with a lot of cool and smart people. This is something that I would highly recommend to any developer looking to get into OSS.&lt;/p&gt;

&lt;p&gt;In the last 12 months or so I haven't been responsive enough to the community - issues, pull-requests, the google group etc. - and it's time to make a statement. While I'm fairly stretched time-wise, the core reason is that as I've learned to build event sourced systems over the years, NEventStore's current design no longer works for me. While I'm using &lt;a href="https://geteventstore.com"&gt;GetEventStore&lt;/a&gt; in some scenarios, I still have, and will continue to have, a need for SQL-backed event stores. How I'd like to see and interact with such stores is significantly different to how NEventStore currently works. I &lt;em&gt;could&lt;/em&gt; mould NEventStore into how &lt;em&gt;I'd&lt;/em&gt; like it to be, but then the changes would very likely alienate people and break their stuff. Thus it's best that I head off in my own direction.&lt;/p&gt;

&lt;p&gt;If you are invested in NEventStore and would like to take over and run a popular OSS project (nearly 800 stars and 250 forks on GitHub, and tens of thousands of NuGet downloads), please &lt;a href="https://twitter.com/randompunter"&gt;reach out to me&lt;/a&gt; :)&lt;/p&gt;
</summary><published>2015-04-16T22:00:00Z</published><updated>2015-04-16T22:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2015/04/stepping-down-from-neventstore/" /><content type="html">&lt;p&gt;I can't remember exactly when it happened, but for the last few years at least, I have been the core maintainer / coordinator of &lt;a href="https://github.com/NEventStore/NEventStore"&gt;NEventStore&lt;/a&gt;. Built by Jonathan Oliver and just known as EventStore at the time (not related to the other &lt;a href="https://geteventstore.com"&gt;EventStore&lt;/a&gt;), it provided me and many people with a really easy way to get up and running with event sourcing. For unknown reasons the core maintainers back then stepped back and, since I was heavily invested in it, I offered to take it over.&lt;/p&gt;

&lt;p&gt;Since then it has gone through a rename, 2 major versions and a bunch of minor releases. In that process I learned a lot about running an OSS project and connected with a lot of cool and smart people. This is something that I would highly recommend to any developer looking to get into OSS.&lt;/p&gt;

&lt;p&gt;In the last 12 months or so I haven't been responsive enough to the community - issues, pull-requests, the google group etc. - and it's time to make a statement. While I'm fairly stretched time-wise, the core reason is that as I've learned to build event sourced systems over the years, NEventStore's current design no longer works for me. While I'm using &lt;a href="https://geteventstore.com"&gt;GetEventStore&lt;/a&gt; in some scenarios, I still have, and will continue to have, a need for SQL-backed event stores. How I'd like to see and interact with such stores is significantly different to how NEventStore currently works. I &lt;em&gt;could&lt;/em&gt; mould NEventStore into how &lt;em&gt;I'd&lt;/em&gt; like it to be, but then the changes would very likely alienate people and break their stuff. Thus it's best that I head off in my own direction.&lt;/p&gt;

&lt;p&gt;If you are invested in NEventStore and would like to take over and run a popular OSS project (nearly 800 stars and 250 forks on GitHub, and tens of thousands of NuGet downloads), please &lt;a href="https://twitter.com/randompunter"&gt;reach out to me&lt;/a&gt; :)&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2015/04/testing-owin-applications-with-httpclient-and-owinhttpmessagehandler/</id><title type="text">Testing OWIN applications with HttpClient and OwinHttpMessageHandler</title><summary type="html">&lt;p&gt;Let's take the simplest, littlest, lowest level &lt;a href="http://owin.org"&gt;OWIN&lt;/a&gt; app:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Func&amp;lt;IDictionary&amp;lt;string, object&amp;gt;, Task&amp;gt; appFunc = env =&amp;gt;
{
    env["owin.ResponseStatusCode"] = 200;
    env["owin.ResponseReasonPhrase"] = "OK";
    return Task.FromResult(0);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;(In reality you'll probably use &lt;a href="https://katanaproject.codeplex.com/"&gt;Microsoft.Owin&lt;/a&gt; or &lt;a href="https://github.com/damianh/LibOwin"&gt;LibOwin&lt;/a&gt; to get a nice typed wrapper around the OWIN environment dictionary instead of using keys like this.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How do you test this?&lt;/p&gt;

&lt;p&gt;Well, you &lt;em&gt;could&lt;/em&gt; create your own environment dictionary, invoke appFunc directly, and then assert on the dictionary.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Fact]
public async Task Should_get_OK()
{
    var env = new Dictionary&amp;lt;string, object&amp;gt;();

    await appFunc(env);

    env["owin.ResponseStatusCode"].Should().Be(200);
    env["owin.ResponseReasonPhrase"].Should().Be("OK");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;While this would work, it's not particularly pretty, and it'll get messy fairly quickly as you may need to assert on cookies, deal with chunked responses, do multiple requests (e.g. a login post-redirect-get), etc.&lt;/p&gt;

&lt;p&gt;Wouldn't it be nicer to use &lt;code&gt;HttpClient&lt;/code&gt; and leverage its richer API? It would also better represent an actual real-world client. Something like this is desirable:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Fact]
public async Task Should_get_OK()
{
    using(var client = new HttpClient())
    {
        var response = await client.GetAsync("http://example.com");

        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
&lt;/code&gt;&lt;/pre&gt;

</summary><published>2015-04-16T22:00:00Z</published><updated>2015-04-16T22:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2015/04/testing-owin-applications-with-httpclient-and-owinhttpmessagehandler/" /><content type="html">&lt;p&gt;Let's take the simplest, littlest, lowest level &lt;a href="http://owin.org"&gt;OWIN&lt;/a&gt; app:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Func&amp;lt;IDictionary&amp;lt;string, object&amp;gt;, Task&amp;gt; appFunc = env =&amp;gt;
{
    env["owin.ResponseStatusCode"] = 200;
    env["owin.ResponseReasonPhrase"] = "OK";
    return Task.FromResult(0);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;(In reality you'll probably use &lt;a href="https://katanaproject.codeplex.com/"&gt;Microsoft.Owin&lt;/a&gt; or &lt;a href="https://github.com/damianh/LibOwin"&gt;LibOwin&lt;/a&gt; to get a nice typed wrapper around the OWIN environment dictionary instead of using keys like this.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How do you test this?&lt;/p&gt;

&lt;p&gt;Well, you &lt;em&gt;could&lt;/em&gt; create your own environment dictionary, invoke appFunc directly, and then assert on the dictionary.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Fact]
public async Task Should_get_OK()
{
    var env = new Dictionary&amp;lt;string, object&amp;gt;();

    await appFunc(env);

    env["owin.ResponseStatusCode"].Should().Be(200);
    env["owin.ResponseReasonPhrase"].Should().Be("OK");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;While this would work, it's not particularly pretty, and it'll get messy fairly quickly as you may need to assert on cookies, deal with chunked responses, do multiple requests (e.g. a login post-redirect-get), etc.&lt;/p&gt;

&lt;p&gt;Wouldn't it be nicer to use &lt;code&gt;HttpClient&lt;/code&gt; and leverage its richer API? It would also better represent an actual real-world client. Something like this is desirable:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Fact]
public async Task Should_get_OK()
{
    using(var client = new HttpClient())
    {
        var response = await client.GetAsync("http://example.com");

        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;!--excerpt--&gt;

&lt;p&gt;There are three ways you can leverage &lt;code&gt;HttpClient&lt;/code&gt; in this way.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start an actual web server using &lt;code&gt;Microsoft.Owin.Hosting.HttpListener&lt;/code&gt;, &lt;code&gt;nowin&lt;/code&gt; or similar. The issue with this is that because we are using a system resource (a port number) we will very likely get conflicts when running tests in parallel, on CI servers, with NCrunch, etc.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;Microsoft.Owin.Testing.TestServer&lt;/code&gt;; which I'm not fond of because of the strange HttpClient property that is actually a factory, the lack of cookie and redirect support, and the reliance on &lt;code&gt;IAppBuilder&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://github.com/damianh/OwinHttpMessageHandler"&gt;&lt;code&gt;OwinHttpMessageHandler&lt;/code&gt;&lt;/a&gt;!&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;OwinHttpMessageHandler&lt;/h3&gt;

&lt;p&gt;This is an &lt;a href="https://msdn.microsoft.com/en-us/library/system.net.http.httpmessagehandler.aspx"&gt;&lt;code&gt;HttpMessageHandler&lt;/code&gt;&lt;/a&gt; that takes an &lt;code&gt;AppFunc&lt;/code&gt; or a &lt;code&gt;MidFunc&lt;/code&gt; in its constructor. When passed into an &lt;code&gt;HttpClient&lt;/code&gt;, it allows invoking the &lt;code&gt;AppFunc&lt;/code&gt;/&lt;code&gt;MidFunc&lt;/code&gt; directly, &lt;strong&gt;in memory and without the need for a listener (+ port)&lt;/strong&gt;. This makes it very compatible with CI servers, parallel testing etc. It is also faster and cleaner from a setup and tear down perspective.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Fact]
public async Task Should_get_OK()
{
    Func&amp;lt;IDictionary&amp;lt;string, object&amp;gt;, Task&amp;gt; appFunc = env =&amp;gt; { ... };
    var handler = new OwinHttpMessageHandler(appFunc); // Or pass in a MidFunc
    using(var client = new HttpClient(handler))
    {
        var response = await client.GetAsync("http://example.com");

        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once you have that &lt;code&gt;HttpResponseMessage&lt;/code&gt;, you can do all sorts of rich assertions. For example, using &lt;a href="https://github.com/jamietre/CsQuery"&gt;CsQuery&lt;/a&gt; or &lt;a href="https://github.com/FlorianRappl/AngleSharp"&gt;AngleSharp&lt;/a&gt; to assert on the returned HTML.&lt;/p&gt;

&lt;h3&gt;Options&lt;/h3&gt;

&lt;p&gt;There are options that allow you to customize the handler:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;var handler = new OwinHttpMessageHandler(appFunc)
{
    // Set this to true (default is false) if you want to automatically handle 301
    // and 302 redirects
    AllowAutoRedirect = true,
    AutoRedirectLimit = 10,

    // Set this if you want to share a `CookieContainer`
    // across multiple HttpClient instances.
    CookieContainer = _cookieContainer,
    UseCookies = true,
};
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Building an AppFunc from a Startup class&lt;/h3&gt;

&lt;p&gt;If you are using &lt;code&gt;Microsoft.Owin&lt;/code&gt; to construct your pipelines you are probably following the &lt;code&gt;Startup&lt;/code&gt; class convention:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseStuff();
        // ...
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Not many know this, but you can simply create an &lt;code&gt;AppFunc&lt;/code&gt; by leveraging &lt;code&gt;Microsoft.Owin.Builder.AppBuilder&lt;/code&gt;:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[Fact]
public async Task Should_get_OK()
{
    var app = new AppBuilder();
    new Startup().Configuration(app);
    var appFunc = app.Build();
    var handler = new OwinHttpMessageHandler(appFunc);

    using(var client = new HttpClient(handler))
    {
        var response = await client.GetAsync("http://example.com");

        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Real world examples and other usages&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/damianh/LimitsMiddleware/tree/master/src/LimitsMiddleware.Tests"&gt;LimitsMiddleware&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/damianh/Cedar.CommandHandling/blob/master/src/Cedar.CommandHandling.Tests/CommandHandlingTests.cs"&gt;Cedar.CommandHandling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;... pretty much all of my middleware / web projects on Github.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I am building an application (typically CQRS based) that is HTTP-API-first, if I want to invoke a command, I'll often use &lt;code&gt;HttpClient&lt;/code&gt; with &lt;code&gt;OwinHttpMessageHandler&lt;/code&gt; in-proc to invoke the API "embedded".&lt;/p&gt;
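
&lt;p&gt;That "embedded" invocation is just a regular &lt;code&gt;HttpClient&lt;/code&gt; call over the in-memory handler; the command endpoint and payload below are invented for illustration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;var handler = new OwinHttpMessageHandler(appFunc);
using(var client = new HttpClient(handler))
{
    // PUT the command to the API exactly as an external client would,
    // but everything stays in-process - no port, no listener.
    var json = new StringContent(
        "{ \"name\": \"Damian\" }", Encoding.UTF8, "application/json");
    var response = await client.PutAsync(
        "http://localhost/commands/" + Guid.NewGuid(), json);
    response.EnsureSuccessStatusCode();
}
&lt;/code&gt;&lt;/pre&gt;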

&lt;h3&gt;Acknowledgements&lt;/h3&gt;

&lt;p&gt;While I don't use &lt;code&gt;Microsoft.Owin.Testing&lt;/code&gt;, they did have a better implementation of a fake in-memory network stream which I pinched. &amp;lt;3 OSS :)&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2015/04/introducing-liblog/</id><title type="text">Introducing LibLog</title><summary type="html">&lt;p&gt;LibLog (Library Logging) has actually been baking for a few years now, since before 2011 if I recall correctly, and &lt;a href="https://www.nuget.org/packages/LibLog"&gt;the current version&lt;/a&gt; is already at 4.2.1. It's fair to say it's been battle tested at this point.&lt;/p&gt;

&lt;p&gt;As a library developer you will often want your library to support logging. The first and easiest route is to simply take a dependency on a specific logging framework. If you work in a small company / team and ship to yourselves, you can probably get away with this. But things get messy fast when the consumers of your library want to use a different framework; now they have to adapt the output of one to the other, or somehow configure them both. Then things get really messy when one of the &lt;a href="http://stackoverflow.com/questions/8743992/how-do-i-work-around-log4net-keeping-changing-publickeytoken"&gt;logging frameworks changes its signing key&lt;/a&gt; in a patch release, breaking stuff left, right and centre.&lt;/p&gt;

&lt;p&gt;I think at one point I had NLog, Log4Net (2 versions) and EntLib logging all being pulled into a single project. That's when I had enough.&lt;/p&gt;

</summary><published>2015-04-13T22:00:00Z</published><updated>2015-04-13T22:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2015/04/introducing-liblog/" /><content type="html">&lt;p&gt;LibLog (Library Logging) has actually been baking for a few years now, since before 2011 if I recall correctly, and &lt;a href="https://www.nuget.org/packages/LibLog"&gt;the current version&lt;/a&gt; is already at 4.2.1. It's fair to say it's been battle tested at this point.&lt;/p&gt;

&lt;p&gt;As a library developer you will often want your library to support logging. The first and easiest route is to simply take a dependency on a specific logging framework. If you work in a small company / team and ship to yourselves, you can probably get away with this. But things get messy fast when the consumers of your library want to use a different framework; now they have to adapt the output of one to the other, or somehow configure them both. Then things get really messy when one of the &lt;a href="http://stackoverflow.com/questions/8743992/how-do-i-work-around-log4net-keeping-changing-publickeytoken"&gt;logging frameworks changes its signing key&lt;/a&gt; in a patch release, breaking stuff left, right and centre.&lt;/p&gt;

&lt;p&gt;I think at one point I had NLog, Log4Net (2 versions) and EntLib logging all being pulled into a single project. That's when I had enough.&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;p&gt;Another approach is to have your library depend on a logging abstraction, such as &lt;a href="https://github.com/net-commons/common-logging"&gt;Common.Logging&lt;/a&gt;. This at least means your library is independent of a specific logging library. But are you better off? Common.Logging is strongly named and &lt;a href="https://www.nuget.org/packages/Common.Logging"&gt;has had a dozen or so releases&lt;/a&gt;; so what do you think happens when 2, 3 or more libraries that depend on &lt;em&gt;different&lt;/em&gt; versions of Common.Logging are pulled into the same project? A world of pain with assembly loading errors and tweaking binding redirects.&lt;/p&gt;

&lt;p&gt;And the adaptor packages aren't in a great state either:
&lt;img src="http://dhickey.ie/images/2015-04-14-introducing-liblog-common-logging-adapters.png" alt="Common Logging Adapters" /&gt;&lt;/p&gt;

&lt;p&gt;Some libraries, like NEventStore, &lt;a href="https://github.com/NEventStore/NEventStore/blob/master/src/NEventStore/Logging/ILog.cs"&gt;define their own logging abstraction&lt;/a&gt;. This puts the burden on the consumer to write their own adapter, but it's just a single class file, without adding scores of references and dependencies to a project. The adapters can be put in the project's wiki/docs or a gist for users to copy and paste. Simples.&lt;/p&gt;
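
&lt;p&gt;To make that concrete, a minimal version of such a consumer-adaptable abstraction might look something like this (a sketch with invented names, not NEventStore's actual interface):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;namespace MyLib.Logging
{
    using System;

    // The library logs only through this tiny interface...
    public interface ILog
    {
        void Log(LogLevel level, Func&amp;lt;string&amp;gt; messageFunc, Exception exception = null);
    }

    public enum LogLevel { Trace, Debug, Info, Warn, Error, Fatal }

    // ...and the consumer plugs in a one-class adapter for their framework.
    public static class LogFactory
    {
        public static Func&amp;lt;string, ILog&amp;gt; Create = name =&amp;gt; new NullLog();

        private class NullLog : ILog
        {
            public void Log(LogLevel level, Func&amp;lt;string&amp;gt; messageFunc, Exception exception = null)
            { }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;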

&lt;p&gt;This is my preferred approach and is the foremost thing LibLog aims to make easier for you. As a source-code-only package (it will never be a dll), it adds a few public interfaces and classes to your project in the &lt;code&gt;{ProjectRootNameSpace}.Logging&lt;/code&gt; namespace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;code&gt;ILogProvider&lt;/code&gt; that defines 3 methods: getting a logger and opening nested/mapped diagnostic contexts.&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;single&lt;/strong&gt; &lt;code&gt;Logger&lt;/code&gt; delegate through which &lt;strong&gt;all&lt;/strong&gt; logging functionality can be channelled, including structured messages, checking if a log level is enabled, exceptions etc. This is in contrast to Common.Logging's &lt;a href="https://github.com/net-commons/common-logging/blob/master/src/Common.Logging.Core/Logging/ILog.cs"&gt;&lt;code&gt;ILog&lt;/code&gt;&lt;/a&gt;, which has about 69 members.&lt;/li&gt;
&lt;li&gt;A static &lt;code&gt;LogProvider&lt;/code&gt; for your library to get a logger and for consumers to set the LogProvider.&lt;/li&gt;
&lt;li&gt;PCL compatibility via the conditional compilation symbol &lt;code&gt;LIBLOG_PORTABLE&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
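
&lt;p&gt;Inside your library, getting and using a logger then looks something like this (the API surface here is from memory - double check against the wiki):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;namespace MyLib
{
    using MyLib.Logging;

    public class Widget
    {
        // Resolved once; routes to whatever provider is detected or set
        private static readonly ILog Logger = LogProvider.For&amp;lt;Widget&amp;gt;();

        public void Frob()
        {
            Logger.Info("Frobbing the widget");
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;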

&lt;h2&gt;The Icing on the Cake&lt;/h2&gt;

&lt;p&gt;LibLog gives you more though. LibLog will detect the presence of Serilog, NLog, Log4Net, LoupeLogging and EntLib Logging (in that preferential order) in your consumer's project and automatically log to them, &lt;em&gt;without the consumer of your library having to do anything at all&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;While LibLog uses reflection to do this (and caches the delegates for performance reasons), it turns out these logging frameworks have really stable APIs. &lt;a href="https://github.com/ayende/ravendb/tree/master/Raven.Abstractions/Logging"&gt;An early variation has been in RavenDB&lt;/a&gt; for a few years and has yet to break because of a new version of NLog or Log4Net. So for me, it feels a safe enough approach. But don't worry if it does break; when that happens, LibLog will catch the problem and just disable logging. In that scenario, if you can't update your lib, or your consumer can't update, the consumer can always fall back on supplying their own &lt;code&gt;{YourLibRootNamespace}.Logging.ILog&lt;/code&gt; to get things working again.&lt;/p&gt;
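
&lt;p&gt;Should you need it, the consumer-side hook is a custom provider; a console-writing one might be sketched like this (member signatures from memory, so treat this as illustrative only):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class ConsoleLogProvider : ILogProvider
{
    public Logger GetLogger(string name)
    {
        // The single Logger delegate mentioned above
        return (logLevel, messageFunc, exception, formatParameters) =&amp;gt;
        {
            if (messageFunc != null)
            {
                Console.WriteLine("[{0}] {1} {2}", logLevel, messageFunc(), exception);
            }
            return true;
        };
    }

    public IDisposable OpenNestedContext(string message)
    {
        return null; // no-op for this sketch
    }

    public IDisposable OpenMappedContext(string key, string value)
    {
        return null; // no-op for this sketch
    }
}

// Wired up once at application startup:
LogProvider.SetCurrentLogProvider(new ConsoleLogProvider());
&lt;/code&gt;&lt;/pre&gt;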

&lt;p&gt;You can see where else &lt;a href="https://github.com/damianh/LibLog/wiki/LibLog%20in%20the%20wild"&gt;LibLog is being used in the wild&lt;/a&gt;. If you know of any more places, please update the wiki :)&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/damianh/LibLog"&gt;project is on github&lt;/a&gt;, is licensed under MIT and further documentation is in the &lt;a href="https://github.com/damianh/LibLog/wiki"&gt;wiki&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Footnote: It may seem I'm bashing Common.Logging but I'm not - this is a general problem in .NET, particularly when strong naming comes into play. Let's take a moment to lament the lack of structural typing where maybe none of this would be necessary in the first place.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2014/11/our-open-source-policy-at-evision/</id><title type="text">Our Open Source policy at eVision</title><summary type="html">&lt;p&gt;There is a problem within our industry, particularly around .NET, known as the "no-contrib culture". That is, companies take and benefit from OSS but don't give anything back and also prevent their employees from doing so. A while ago &lt;a href="http://dhickey.ie/post/2014/06/25/establishing-oss-policies-in-a-proprietary-software-company.aspx"&gt;I blogged about establishing an Open Source Software policy at a proprietary software company&lt;/a&gt; that happens to consume and depend on a significant amount of open source libraries. eVision was a typical company in that the standard employment contract had that catch-all clause that they own everything one does on a computer.&lt;/p&gt;

&lt;p&gt;With the proliferation of OSS, the fact that the platform we predominantly build on &lt;a href="http://www.hanselman.com/blog/AnnouncingNET2015NETasOpenSourceNETonMacandLinuxandVisualStudioCommunity.aspx"&gt;is going OSS&lt;/a&gt;, and the fact that OSS is a viable business strategy (&lt;a href="http://hbswk.hbs.edu/item/6158.html"&gt;even within proprietary businesses&lt;/a&gt;), this was somewhat short-sighted. Indeed, several employees were simply ignoring this; a situation which was in neither party's interest.&lt;/p&gt;

</summary><published>2014-11-26T23:00:00Z</published><updated>2014-11-26T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2014/11/our-open-source-policy-at-evision/" /><content type="html">&lt;p&gt;There is a problem within our industry, particularly around .NET, known as the "no-contrib culture". That is, companies take and benefit from OSS but don't give anything back and also prevent their employees from doing so. A while ago &lt;a href="http://dhickey.ie/post/2014/06/25/establishing-oss-policies-in-a-proprietary-software-company.aspx"&gt;I blogged about establishing an Open Source Software policy at a proprietary software company&lt;/a&gt; that happens to consume and depend on a significant amount of open source libraries. eVision was a typical company in that the standard employment contract had that catch-all clause that they own everything one does on a computer.&lt;/p&gt;

&lt;p&gt;With the proliferation of OSS, the fact that the platform we predominantly build on &lt;a href="http://www.hanselman.com/blog/AnnouncingNET2015NETasOpenSourceNETonMacandLinuxandVisualStudioCommunity.aspx"&gt;is going OSS&lt;/a&gt;, and the fact that OSS is a viable business strategy (&lt;a href="http://hbswk.hbs.edu/item/6158.html"&gt;even within proprietary businesses&lt;/a&gt;), this was somewhat short-sighted. Indeed, several employees were simply ignoring this; a situation which was in neither party's interest.&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;p&gt;Thus I am delighted to announce that we have established a policy that we believe strikes the right balance and have made it available on our &lt;a href="https://github.com/eVisionSoftware/OpenSourcePolicy/blob/master/OSS-Policy.md"&gt;github organisation site&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;In layman's terms, this means that our employees are free to create any sort of open source outside of business hours (as long as it doesn't compete with our business), are free to contribute to open source we depend on at any time, and they own the copyright to that work (or whatever the terms are of the project they contribute to). The only real stipulation is that the project's licence must allow us to use it in our commercial software.&lt;/p&gt;

&lt;p&gt;We hope this will have the effect of encouraging contribution to the platform we depend on, strengthening it to our mutual benefit. We also hope that engagement with the open source community will have a positive learning effect for our engineers.&lt;/p&gt;

&lt;p&gt;I personally hope that more organisations and companies adopt our policy and make it known publicly. Thus, we've released this policy under a Creative Commons licence. Disclaimer: you should have your lawyer check it over!&lt;/p&gt;

&lt;p&gt;To all developers out there: when considering whom to work for, I would &lt;strong&gt;strongly&lt;/strong&gt; suggest that you look only for organisations that have this or a similar policy in place. These are mature organisations that care for the platform and ecosystem they and their staff build upon.&lt;/p&gt;

&lt;p&gt;Any feedback or questions, please contact me dhickey at gmail.com. Thanks to &lt;a href="https://twitter.com/adamralph"&gt;Adam Ralph&lt;/a&gt;, &lt;a href="https://twitter.com/fransbouma"&gt;Frans Bouma&lt;/a&gt; and &lt;a href="https://twitter.com/chr_horsdal"&gt;Christian Horsdal&lt;/a&gt; for their feedback.&lt;/p&gt;

&lt;p&gt;On that note, eVision are looking for talented engineers (and more) to work on some challenging, distributed and large scale problems. :)&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2014/06/establishing-oss-policies-in-a-proprietary-software-company/</id><title type="text">Establishing OSS policies in a proprietary software company</title><summary type="html">&lt;p&gt;My company produces proprietary software but consumes a lot of FOSS libraries and components. Until recently, our standard contracts were of the 'we own everything you do' type. Obviously that created some difficulties for someone like myself given that I'm active in the .NET OSS community. The irony being that if I signed it, I wouldn't be able to support the very OSS libraries I maintain and that the company uses.&lt;/p&gt;

&lt;p&gt;We'd like to establish a corporate policy where developers can contribute back and, possibly, create new OSS. I believe this is strategically a good move for several reasons, including:&lt;/p&gt;

</summary><published>2014-06-24T22:00:00Z</published><updated>2014-06-24T22:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2014/06/establishing-oss-policies-in-a-proprietary-software-company/" /><content type="html">&lt;p&gt;My company produces proprietary software but consumes a lot of FOSS libraries and components. Until recently, our standard contracts were of the 'we own everything you do' type. Obviously that created some difficulties for someone like myself given that I'm active in the .NET OSS community. The irony being that if I signed it, I wouldn't be able to support the very OSS libraries I maintain and that the company uses.&lt;/p&gt;

&lt;p&gt;We'd like to establish a corporate policy where developers can contribute back and, possibly, create new OSS. I believe this is strategically a good move for several reasons, including:&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;ul&gt;
&lt;li&gt;It supports the software that we've built our business on, and without which we might not even exist.&lt;/li&gt;
&lt;li&gt;It exposes our developers to more code and ways of working, thereby helping to improve their craft.&lt;/li&gt;
&lt;li&gt;It's an attractive policy to the type of developer that we are interested in attracting as we grow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the same time though, it is critically important for us to protect the proprietary aspect of our software.&lt;/p&gt;

&lt;p&gt;Our legal department and I are not experienced in establishing such policies and procedures. I'd like to get in touch with someone from a company who has enacted something like this, so if you know anyone, I'd be very grateful if you could introduce me (dhickey@gmail.com, @randompunter).&lt;/p&gt;

&lt;p&gt;I hope to publicly share what we come up with such that other companies can learn and implement similar policies.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2014/03/gullivers-travels-test/</id><title type="text">Gulliver's Travels Tests</title><summary type="html">&lt;p&gt;Like writing lots and lots of fine-grained "unit" tests, mocking out every teeny-weeny interaction between every single object?&lt;/p&gt;

&lt;p&gt;This is your application:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2014-03-03-gulivers-travels-tests-guliver.jpg" alt="Gulliver Tied Down" /&gt;&lt;/p&gt;

&lt;p&gt;Now try to change something.&lt;/p&gt;
</summary><published>2014-03-02T23:00:00Z</published><updated>2014-03-02T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2014/03/gullivers-travels-test/" /><content type="html">&lt;p&gt;Like writing lots and lots of fine-grained "unit" tests, mocking out every teeny-weeny interaction between every single object?&lt;/p&gt;

&lt;p&gt;This is your application:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2014-03-03-gulivers-travels-tests-guliver.jpg" alt="Gulliver Tied Down" /&gt;&lt;/p&gt;

&lt;p&gt;Now try to change something.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2014/02/bubbling-exceptions-in-nancy-up-the-owin-pipeline/</id><title type="text">Bubbling exceptions in Nancy up the owin pipeline</title><summary type="html">&lt;p&gt;In the application I am building I have a requirement to do common exception handling within my OWIN pipeline via custom middleware:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;private class CustomExceptionMiddleware : OwinMiddleware
{
    public CustomExceptionMiddleware(OwinMiddleware next) : base(next)
    {}

    public override async Task Invoke(IOwinContext context)
    {
        try
        {
            await Next.Invoke(context);
        }
        catch(Exception ex)
        {
            // Custom stuff here
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

</summary><published>2014-02-10T23:00:00Z</published><updated>2014-02-10T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2014/02/bubbling-exceptions-in-nancy-up-the-owin-pipeline/" /><content type="html">&lt;p&gt;In the application I am building I have a requirement to do common exception handling within my OWIN pipeline via custom middleware:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;private class CustomExceptionMiddleware : OwinMiddleware
{
    public CustomExceptionMiddleware(OwinMiddleware next) : base(next)
    {}

    public override async Task Invoke(IOwinContext context)
    {
        try
        {
            await Next.Invoke(context);
        }
        catch(Exception ex)
        {
            // Custom stuff here
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;!--excerpt--&gt;

&lt;p&gt;Where the startup looked something like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app
            .Use&amp;lt;CustomExceptionMiddleware&amp;gt;()
            .UseNancy()
            .UseOtherStuff();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The "problem" is that Nancy is very safe by default - it will always try to capture the exception and set the status code to 500, even if you have cleared all the status code handlers from the bootstrapper. Thus, any exceptions thrown in Nancy will never reach the CustomExceptionMiddleware. &lt;/p&gt;

&lt;p&gt;Taking the PassThroughStatusCodeHandler found in Nancy.Testing and tweaking its Handle method, we can capture the exception and rethrow it using .NET 4.5's ExceptionDispatchInfo:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class RethrowStatusCodeHandler : IStatusCodeHandler
{
    public bool HandlesStatusCode(Nancy.HttpStatusCode statusCode, NancyContext context)
    {
        // This is unchanged
    }

    public void Handle(Nancy.HttpStatusCode statusCode, NancyContext context)
    {
        Exception innerException = ((Exception) context.Items[NancyEngine.ERROR_EXCEPTION]).InnerException;
        ExceptionDispatchInfo
            .Capture(innerException)
            .Throw();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Finally we need to tell Nancy to use this. In your bootstrapper, override InternalConfiguration property:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;protected override NancyInternalConfiguration InternalConfiguration
{
    get
    {
        return NancyInternalConfiguration.WithOverrides(config =&amp;gt;
            config.StatusCodeHandlers = new[] {typeof (RethrowStatusCodeHandler)});
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now the exception, with a proper stack trace, will be thrown and captured by the middleware.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/damianh/NancyOwinException/blob/master/NancyOwinException/Class1.cs"&gt;Full example with test&lt;/a&gt;.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2014/01/protecting-a-self-hosted-nancy-api-with-microsoft-owin-security-activedirectory/</id><title type="text">Protecting a Self-Hosted Nancy API with Microsoft.Owin.Security.ActiveDirectory</title><summary type="html">&lt;p&gt;This post is the Nancy version of Vittorio's "&lt;a href="http://www.cloudidentity.com/blog/2013/12/10/protecting-a-self-hosted-api-with-microsoft-owin-security-activedirectory/"&gt;Protecting a Self-Hosted API with Microsoft.Owin.Security.ActiveDirectory&lt;/a&gt;". Since I'm lazy, I may even be lifting some parts of it verbatim, if you don't mind, Vittorio ;)&lt;/p&gt;

&lt;p&gt;I'm going to skip the intro to OWIN etc.; if you are reading this, you probably know all about it by now. (Except for you, Phillip; get your s**t together, man!) This tutorial will be using Nancy.MSOwinSecurity, which I introduced in the previous post.&lt;/p&gt;

&lt;p&gt;What I am going to show you is how you can set up a super simple Nancy HTTP API in a console app and how you can easily secure it with Windows Azure AD with the exact same code you use when programming against IIS Express/full IIS.&lt;/p&gt;

</summary><published>2014-01-03T23:00:00Z</published><updated>2014-01-03T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2014/01/protecting-a-self-hosted-nancy-api-with-microsoft-owin-security-activedirectory/" /><content type="html">&lt;p&gt;This post is the Nancy version of Vittorio's "&lt;a href="http://www.cloudidentity.com/blog/2013/12/10/protecting-a-self-hosted-api-with-microsoft-owin-security-activedirectory/"&gt;Protecting a Self-Hosted API with Microsoft.Owin.Security.ActiveDirectory&lt;/a&gt;". Since I'm lazy, I may even be lifting some parts of it verbatim, if you don't mind, Vittorio ;)&lt;/p&gt;

&lt;p&gt;I'm going to skip the intro to OWIN etc.; if you are reading this, you probably know all about it by now. (Except for you, Phillip; get your s**t together, man!) This tutorial will be using Nancy.MSOwinSecurity, which I introduced in the previous post.&lt;/p&gt;

&lt;p&gt;What I am going to show you is how you can set up a super simple Nancy HTTP API in a console app and how you can easily secure it with Windows Azure AD with the exact same code you use when programming against IIS Express/full IIS.&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;p&gt;Here's what we are going to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a minimal self-hosted Nancy Http API in a console app.&lt;/li&gt;
&lt;li&gt;Add middleware for validating JWT tokens from AAD.&lt;/li&gt;
&lt;li&gt;Create a console app client to poke our API and challenge the user for credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Create a Minimal Self-Hosted Nancy Http API&lt;/h3&gt;

&lt;p&gt;Let's start by creating a blank solution. I'm a bit unimaginative, so I am calling this 'NancyAAD'. Note: this entire solution will be targeting .NET 4.5; .NET 4.0 is not supported. Next, we add a console application, 'NancyAAD.Server', that is going to host our Nancy HTTP API. To this project, we're going to add a number of NuGet packages, in this order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Nancy.Owin&lt;/strong&gt; - This will bring in Nancy and the adapter that allows Nancy to be hosted in an OWIN pipeline.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Microsoft.Owin.Hosting&lt;/strong&gt; - Provides default infrastructure types for hosting and running OWIN-based applications. There are other hosting implementations out there, if you feel the need to explore.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Microsoft.Owin.Host.HttpListener&lt;/strong&gt; - OWIN server built on the .NET Framework's HttpListener class.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Microsoft.Owin.Security.ActiveDirectory&lt;/strong&gt; - The middleware through which we will secure our Nancy HTTP API. This package will pull in a bunch of other packages. We'll be using this later.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Nancy.MSOwinSecurity&lt;/strong&gt; - This will provide integration between Nancy's context and modules and Microsoft.Owin.Security.*.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your NancyAAD.Server packages.config should look similar to this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;
&amp;lt;packages&amp;gt;
  &amp;lt;package id="Microsoft.Owin" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Microsoft.Owin.Host.HttpListener" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Microsoft.Owin.Hosting" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Microsoft.Owin.Security" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Microsoft.Owin.Security.ActiveDirectory" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Microsoft.Owin.Security.Jwt" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Microsoft.Owin.Security.OAuth" version="2.0.2" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Nancy" version="0.21.1" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Nancy.MSOwinSecurity" version="1.0.0" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Nancy.Owin" version="0.21.1" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Newtonsoft.Json" version="4.5.11" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="Owin" version="1.0" targetFramework="net45" /&amp;gt;
  &amp;lt;package id="System.IdentityModel.Tokens.Jwt" version="1.0.0" targetFramework="net45" /&amp;gt;
&amp;lt;/packages&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next we'll define our Nancy module and a very simple GET endpoint that we'll want to secure later:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;namespace NancyAAD.Server
{
    using Nancy;

    public class ValuesModule : NancyModule
    {
        public ValuesModule()
        {
            Get["/values"] = _ =&amp;gt; new[] { "value1", "value2" };
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That done, let's add the Startup class for hosting our Nancy application in OWIN:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;namespace NancyAAD.Server
{
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.UseNancy();
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This says that Nancy is handling all requests from the root. Nice and simple. &lt;/p&gt;

&lt;p&gt;Lastly, we need to host the OWIN application itself. In Program.cs, we write the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;namespace NancyAAD.Server
{
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using Microsoft.Owin.Hosting;

    internal class Program
    {
        public static void Main(string[] args)
        {
            using (WebApp.Start&amp;lt;Startup&amp;gt;("http://localhost:9000/"))
            {
                Console.ForegroundColor = ConsoleColor.Blue;
                Console.WriteLine("Nancy listening at http://localhost:9000/");

                // Test call
                var client = new HttpClient();
                client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                var response = client.GetAsync("http://localhost:9000/values").Result;
                Console.ForegroundColor = ConsoleColor.Red;
                Console.WriteLine(response.Content.ReadAsStringAsync().Result);

                Console.ReadLine();
            }
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The call to WebApp.Start initializes a new server, which listens at the specified address. The rest of the method calls the Nancy HTTP API to double check that we did everything correctly.&lt;/p&gt;

&lt;p&gt;There is one subtle difference here compared to Vittorio's post - we are setting an Accept header of "application/json". The reason is that Nancy supports view engines, and its default behaviour when returning a model for a request with no Accept header is to try to resolve a view. This would result in an exception for us, as we haven't defined any views.&lt;/p&gt;

&lt;p&gt;F5'ing the app should result in:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2014-01-04-protecting-a-self-hosted-nancy-api-with-microsoft-owin-security-activedirectory-console.png" alt="Console" /&gt;&lt;/p&gt;

&lt;p&gt;Exactly the same as Vittorio's screen shot. Nice.&lt;/p&gt;

&lt;p&gt;Before moving on to the next task, remove the test call from the Main method and change it to the following:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public static void Main(string[] args)
{
    using (WebApp.Start&amp;lt;Startup&amp;gt;("http://localhost:9000/"))
    {
        Console.ForegroundColor = ConsoleColor.Blue;
        Console.WriteLine("Nancy listening at http://localhost:9000/");
        Console.WriteLine("Press ENTER to terminate");
        Console.ReadLine(); 
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Secure the Nancy HTTP API with Windows Azure AD&lt;/h3&gt;

&lt;p&gt;Here comes the raison d’être of the entire post. We have already added the Microsoft.Owin.Security.ActiveDirectory nuget package; now let's add it in the right place in the OWIN pipeline. Note, the order in which things are defined in the pipeline is important, so we must add this before Nancy, so that requests are piped through it before hitting the Nancy handlers.&lt;/p&gt;

&lt;p&gt;Our OWIN pipeline with the middleware added now looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public void Configuration(IAppBuilder app)
{
    app.UseWindowsAzureActiveDirectoryBearerAuthentication(
            new WindowsAzureActiveDirectoryBearerAuthenticationOptions
            {
                Audience = "https://contoso7.onmicrosoft.com/RichAPI",
                Tenant = "contoso7.onmicrosoft.com"
            })
          .UseNancy();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The new code is the "UseWindowsAzureActiveDirectoryBearerAuthentication..." call. You’ll notice that instead of relying on app.config settings, values are passed directly to the middleware. This is another beauty of OWIN: you can keep such settings wherever you deem most appropriate.&lt;/p&gt;

&lt;p&gt;Now our pipeline includes the right middleware: if we receive a JWT, we’ll validate it and if it checks out we’ll pass the corresponding ClaimsPrincipal to the handler delegate. Very good, but not good enough. Let’s modify the module class to mandate that all callers must present a valid token from the tenant of choice:&lt;/p&gt;


&lt;pre&gt;&lt;code&gt;public class ValuesModule : NancyModule
{
    public ValuesModule()
    {
        this.RequiresMSOwinAuthentication();
        Get["/values"] = _ =&amp;gt;
        {
            ClaimsPrincipal claimsPrincipal = Context.GetMSOwinUser();
            Console.WriteLine("==&amp;gt;I have been called by {0}", claimsPrincipal.FindFirst(ClaimTypes.Upn));
            return new[] {"value1", "value2"};
        };
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The key statement here is "this.RequiresMSOwinAuthentication" - this ensures that a request hitting this module must have a valid authenticated user, otherwise an unauthorized HTTP status code is returned.&lt;/p&gt;

&lt;p&gt;The second key statement, "Context.GetMSOwinUser()", retrieves the ClaimsPrincipal from the OWIN environment, from which we write a claim to the console, just for demonstration purposes. The UPN claim is among the ones that Windows Azure AD sends in JWTs.&lt;/p&gt;

&lt;p&gt;Believe it or not, that’s all we had to do to secure the Nancy HTTP API: just those few lines of code :)&lt;/p&gt;

&lt;h3&gt;Create a Client App and Test the Nancy HTTP API&lt;/h3&gt;

&lt;p&gt;Add a new console app project called NancyAAD.Client (see, I'm not really imaginative!). To this project, install the 'Microsoft.WindowsAzure.ActiveDirectory.Authentication' nuget package. This package contains the main assembly for the Windows Azure Authentication Library (AAL) and provides easy-to-use authentication functionality for .NET client apps. Your packages.config should look like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="utf-8"?&amp;gt;
&amp;lt;packages&amp;gt;
  &amp;lt;package id="Microsoft.WindowsAzure.ActiveDirectory.Authentication" version="0.7.0" targetFramework="net45" /&amp;gt;
&amp;lt;/packages&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next up, the Main method should look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;namespace NancyAAD.Client
{
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using Microsoft.WindowsAzure.ActiveDirectory.Authentication;

    internal class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
            Console.ForegroundColor = ConsoleColor.Green;
            Console.WriteLine("Client ready.");
            Console.WriteLine("Press any key to invoke the service");
            Console.WriteLine("Press ESC to terminate");
            ConsoleKeyInfo consoleKeyInfo;

            var authenticationContext = new AuthenticationContext(
                "https://login.windows.net/contoso7.onmicrosoft.com");

            do
            {
                consoleKeyInfo = Console.ReadKey(true);
                // get the access token
                AuthenticationResult authenticationResult = authenticationContext.AcquireToken(
                    "https://contoso7.onmicrosoft.com/RichAPI",
                    "be182811-9d0b-45b2-9ffa-52ede2a12230",
                    "http://whatevah");
                // invoke the Nancy API
                var httpClient = new HttpClient();
                httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
                httpClient.DefaultRequestHeaders.Authorization =
                    new AuthenticationHeaderValue("Bearer", authenticationResult.AccessToken);
                HttpResponseMessage response = httpClient.GetAsync("http://localhost:9000/values").Result;
                // display the result
                if (response.IsSuccessStatusCode)
                {
                    string result = response.Content.ReadAsStringAsync().Result;
                    Console.WriteLine("==&amp;gt; Successfully invoked the service");
                    Console.WriteLine(result);
                }
            } while (consoleKeyInfo.Key != ConsoleKey.Escape);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This is a very classic ADAL client and is virtually identical to the one in Vittorio's post. It simply acquires the token and attaches it to the Authorization header before making the call. The one difference is that the Accept header is defined, as explained previously.&lt;/p&gt;

&lt;p&gt;Starting both projects and triggering a call from the client should result in:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2014-01-04-protecting-a-self-hosted-nancy-api-with-microsoft-owin-security-activedirectory-login-prompt.png" alt="Login prompt" /&gt;&lt;/p&gt;

&lt;p&gt;Sign in as any user in your tenant, and you’ll get to something like the screen below:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2014-01-04-protecting-a-self-hosted-nancy-api-with-microsoft-owin-security-activedirectory-console2.png" alt="Console" /&gt;&lt;/p&gt;

&lt;p&gt;Ta-dah!&lt;/p&gt;

&lt;p&gt;Now go forth and secure your enterprise(!) Nancy Apps!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/damianh/NancyAAD"&gt;Full code for this is on Github&lt;/a&gt;. &lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2013/12/uniqueness-in-a-ddd-cqrs-es-application/</id><title type="text">Uniqueness in a DDD-CQRS-ES application</title><summary type="html">&lt;p&gt;A while back I blogged that set based concerns, i.e. duplicates, &lt;a href="http://dhickey.ie/post/2011/12/08/Questions-to-ask-whenever-set-based-business-concerns-come-up-in-a-DDD-CQRS-ES-based-application.aspx"&gt;when analyzed aren't really a problem&lt;/a&gt;. An example of this could be a duplicate product in a catalogue. The worst that can happen is the user sees the item twice when browsing / searching. Everything still works - the customer can still purchase it, you can still ship it. It's low risk, low impact and life goes on.&lt;/p&gt;

&lt;p&gt;There are situations though where global uniqueness is a real requirement, for example, choosing a username during a new user registration process. The problem here is that if a duplicate occurs you may have a security issue, or maybe neither user can log in. It depends on your application of course, but this may be considered high risk.&lt;/p&gt;

</summary><published>2013-12-01T23:00:00Z</published><updated>2013-12-01T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2013/12/uniqueness-in-a-ddd-cqrs-es-application/" /><content type="html">&lt;p&gt;A while back I blogged that set based concerns, i.e. duplicates, &lt;a href="http://dhickey.ie/post/2011/12/08/Questions-to-ask-whenever-set-based-business-concerns-come-up-in-a-DDD-CQRS-ES-based-application.aspx"&gt;when analyzed aren't really a problem&lt;/a&gt;. An example of this could be a duplicate product in a catalogue. The worst that can happen is the user sees the item twice when browsing / searching. Everything still works - the customer can still purchase it, you can still ship it. It's low risk, low impact and life goes on.&lt;/p&gt;

&lt;p&gt;There are situations though where global uniqueness is a real requirement, for example, choosing a username during a new user registration process. The problem here is that if a duplicate occurs you may have a security issue, or maybe neither user can log in. It depends on your application of course, but this may be considered high risk.&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;p&gt;The solution to this is to use the &lt;a href="http://www.rgoarchitects.com/nblog/2009/09/08/SOAPatternsReservations.aspx"&gt;reservation pattern&lt;/a&gt; where we 'reserve' the username from some fully consistent store before creating the user. When the user is successfully created, we then confirm the reservation, via a process manager. Reservations have an expiry so if for some reason the user is not created, e.g. they abandoned the process, the username is eventually released for someone else to use. The worst thing that can happen is that a reserved/unconfirmed username is unavailable to other users for whatever duration you decide to set as an expiry.&lt;/p&gt;
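
&lt;p&gt;&lt;em&gt;To make the reserve / confirm / expire flow concrete, here is a minimal, language-agnostic sketch (in Python, purely for illustration). The class name, the in-memory dictionary and the injectable clock are all assumptions for the demo - a real implementation would sit on a fully consistent store:&lt;/em&gt;&lt;/p&gt;

```python
import time


class ReservationStore:
    """Minimal sketch of the reservation pattern: reserve a username
    before creating the user, confirm it afterwards, and let
    unconfirmed reservations expire. In-memory only, for illustration."""

    def __init__(self, ttl_seconds=3600, clock=time.monotonic):
        self._ttl = ttl_seconds
        self._clock = clock  # injectable clock, handy for testing
        self._reservations = {}  # username -> (expires_at, confirmed)

    def try_reserve(self, username):
        """Return True if the username was reserved, False if taken."""
        now = self._clock()
        entry = self._reservations.get(username)
        if entry is not None:
            expires_at, confirmed = entry
            if confirmed or expires_at > now:
                return False  # confirmed, or still held unconfirmed
        self._reservations[username] = (now + self._ttl, False)
        return True

    def confirm(self, username):
        """Called by the process manager once the user is created."""
        expires_at, _ = self._reservations[username]
        self._reservations[username] = (expires_at, True)
```

&lt;p&gt;&lt;em&gt;An unconfirmed reservation silently lapses once its expiry passes, so an abandoned sign-up eventually frees the name; a confirmed one never does.&lt;/em&gt;&lt;/p&gt;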

&lt;p&gt;A word of caution though - this pattern introduces a single point of failure. If the reservation system is not available, new users won't be able to sign up. Therefore this should only be considered if you &lt;em&gt;absolutely&lt;/em&gt; need to have uniqueness (which is far less often than you think). Of course, you'll keep this component separate from your authentication and application so existing users can still log in, if it does go down :)&lt;/p&gt;

&lt;p&gt;From a disaster recovery perspective, a username reservation system should be rebuildable from domain events.&lt;/p&gt;
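
&lt;p&gt;&lt;em&gt;Such a rebuild is just a fold over the event stream. A rough sketch (again in Python, for illustration only - the event type names and dictionary shape here are hypothetical, not from any real system):&lt;/em&gt;&lt;/p&gt;

```python
def rebuild_reservations(domain_events):
    """Fold a domain event stream back into the set of taken usernames.
    Event shapes ('UserRegistered' / 'UserDeactivated') are hypothetical;
    substitute whatever events your domain actually emits."""
    taken = set()
    for event in domain_events:
        if event["type"] == "UserRegistered":
            taken.add(event["username"])
        elif event["type"] == "UserDeactivated":
            taken.discard(event["username"])
    return taken
```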

&lt;p&gt;In the end, it's all about risk analysis.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2012/05/ensuring-dispose-gets-called/</id><title type="text">Ensuring that Dispose() gets called on your IDisposable types</title><summary type="html">&lt;p&gt;Here is a neat trick to make sure your IDisposable classes have their Dispose method called. In the finalizer, call Debug.Fail and only include it in Debug builds:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class DisposableClass : IDisposable
{
    public void Dispose()
    {
        //The usual cleanup
#if DEBUG
        GC.SuppressFinalize(this);
#endif
    }

#if DEBUG
    ~DisposableClass()
    {
        Debug.Fail(string.Format("Undisposed {0}", GetType().Name));
    }
#endif
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will give you an in-your-face dialog to let you know of your fail:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2012-05-31-ensuring-dispose-gets-called-assertion-failed.png" alt="Assertion Failed" /&gt;&lt;/p&gt;

&lt;p&gt;I can't claim that I came up with this, but I can't remember where I saw it either.&lt;/p&gt;
</summary><published>2012-05-30T22:00:00Z</published><updated>2012-05-30T22:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2012/05/ensuring-dispose-gets-called/" /><content type="html">&lt;p&gt;Here is a neat trick to make sure your IDisposable classes have their Dispose method called. In the finalizer, call Debug.Fail and only include it in Debug builds:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;public class DisposableClass : IDisposable
{
    public void Dispose()
    {
        //The usual cleanup
#if DEBUG
        GC.SuppressFinalize(this);
#endif
    }

#if DEBUG
    ~DisposableClass()
    {
        Debug.Fail(string.Format("Undisposed {0}", GetType().Name));
    }
#endif
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will give you an in-your-face dialog to let you know of your fail:&lt;/p&gt;

&lt;p&gt;&lt;img src="http://dhickey.ie/images/2012-05-31-ensuring-dispose-gets-called-assertion-failed.png" alt="Assertion Failed" /&gt;&lt;/p&gt;

&lt;p&gt;I can't claim that I came up with this, but I can't remember where I saw it either.&lt;/p&gt;
</content></entry><entry><id>http://dhickey.ie/2011/12/questions-to-ask-for-se-based-concerns-in-ddd/</id><title type="text">Questions to ask whenever set based business concerns come up in a DDD-CQRS-ES based application</title><summary type="html">&lt;p&gt;One of the biggest brain speed bumps for people who come from a centralized data-model first \ RDBMS world is the issue of duplicates. "What if we have duplicate names?" "What if we have duplicate accounts?" etc. This can be of special concern in distributed systems, where your aggregate roots may be created in different places.&lt;/p&gt;

&lt;p&gt;Before I engage in any sort of implementation, I have several business questions that must be answered first:&lt;/p&gt;

</summary><published>2011-12-07T23:00:00Z</published><updated>2011-12-07T23:00:00Z</updated><link rel="alternate" href="http://dhickey.ie/2011/12/questions-to-ask-for-se-based-concerns-in-ddd/" /><content type="html">&lt;p&gt;One of the biggest brain speed bumps for people who come from a centralized data-model first \ RDBMS world is the issue of duplicates. "What if we have duplicate names?" "What if we have duplicate accounts?" etc. This can be of special concern in distributed systems, where your aggregate roots may be created in different places.&lt;/p&gt;

&lt;p&gt;Before I engage in any sort of implementation, I have several business questions that must be answered first:&lt;/p&gt;

&lt;!--excerpt--&gt;

&lt;ol&gt;
&lt;li&gt;What is the business impact of something like this happening?&lt;/li&gt;
&lt;li&gt;Under what exact circumstances / business operations could this issue occur?&lt;/li&gt;
&lt;li&gt;What are the chances of this happening? Or, how big is the window of opportunity for this to occur?&lt;/li&gt;
&lt;li&gt;Is it possible to mitigate the chances of it happening through design of the business operations?&lt;/li&gt;
&lt;li&gt;Can it be automatically detected?&lt;/li&gt;
&lt;li&gt;How quickly can it be detected after the issue occurs?&lt;/li&gt;
&lt;li&gt;Can it be compensated (fixed) after occurring?&lt;/li&gt;
&lt;li&gt;Can the compensation be automated and transparent?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;99% of the time, it turns out that set based concerns, aren't.&lt;/p&gt;
</content></entry></feed>