Damian Hickey http://dhickey.ie/ dhickey@gmail.com
http://dhickey.ie/2021/01/24/pulumi-multiple-accounts/Using Pulumi with multiple accounts<p>When you log into Pulumi using <a href="https://www.pulumi.com/docs/reference/cli/pulumi_login/"><code>pulumi login</code></a>, the effect is
machine wide. If, like me, you have multiple accounts, e.g. personal and work,
switching accounts by logging out and logging back in again quickly becomes
tedious. It's also risky when you find yourself logged into the wrong account
and you run a <code>pulumi up</code>...</p>
2021-01-23T23:00:00Z
<!--excerpt-->
<p>The AWS CLI has a concept of <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html">named profiles</a> that lets you select
the profile an AWS CLI command applies to. Unfortunately, Pulumi does not support
such a concept. It does, however, support an environment variable,
<code>PULUMI_ACCESS_TOKEN</code>, that takes precedence over the machine-level access token.
Using this with a function in your PowerShell <code>$profile</code> / Bash <code>.bashrc</code>, we
can switch profiles on the fly for the current terminal session and any
processes spawned from it. This also neatly allows you to use different accounts
in separate terminal sessions at the same time.</p>
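<p>As a quick aside, the reason this is session-scoped is ordinary environment-variable inheritance: an exported variable is visible to the current shell and the processes it spawns, and nothing else. A minimal, Pulumi-free illustration (the token value is a placeholder):</p>

```shell
# An exported variable affects this shell and its children only;
# other terminal sessions never see it.
export PULUMI_ACCESS_TOKEN="pul-demo-token"

# A child process (such as the pulumi CLI) inherits the value:
sh -c 'echo "$PULUMI_ACCESS_TOKEN"'   # prints: pul-demo-token
```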
<p>To set this up, first go to each of your Pulumi accounts' settings and
generate an access token for each profile.</p>
<p>Then, if you are a <strong>PowerShell</strong> user, add this function to your PowerShell <code>$profile</code> (replacing the placeholder tokens with your real ones):</p>
<pre><code>function pulumi-profile([string]$profile) {
    if ($profile -eq "") {
        Write-Host "Profile name argument missing."
        return
    }
    pulumi logout
    $env:PULUMI_ACCESS_TOKEN = ""
    if ($profile -eq "local") {
        pulumi login --local --non-interactive
        return
    }
    elseif ($profile -eq "work") {
        $env:PULUMI_ACCESS_TOKEN = "pul-work-token"
    }
    elseif ($profile -eq "personal") {
        $env:PULUMI_ACCESS_TOKEN = "pul-personal-token"
    }
    else {
        Write-Host "Unknown profile"
        return
    }
    pulumi login --non-interactive
}
</code></pre>
<p>Or, if you are a <strong>Bash</strong> user, add this function to your <code>~/.bashrc</code>:</p>
<pre><code>pulumi-profile() {
    if [ $# -eq 0 ]; then
        echo "Profile name argument missing."
        return
    fi
    pulumi logout
    export PULUMI_ACCESS_TOKEN=""
    if [ "$1" = "local" ]; then
        pulumi login --local --non-interactive
        return
    elif [ "$1" = "work" ]; then
        export PULUMI_ACCESS_TOKEN="pul-work-token"
    elif [ "$1" = "personal" ]; then
        export PULUMI_ACCESS_TOKEN="pul-personal-token"
    else
        echo "Unknown profile"
        return
    fi
    pulumi login --non-interactive
}
</code></pre>
<p>You can then switch Pulumi profiles by calling <code>pulumi-profile <profile-name></code> in your terminal.</p>
<p>Example in PowerShell:</p>
<p><img src="http://dhickey.ie/images/2021-01-24/ps.png" alt="Powershell terminal" /></p>
<p>Example in Bash:</p>
<p><img src="http://dhickey.ie/images/2021-01-24/bash.png" alt="Bash terminal" /></p>
<p>Hope that helps!</p>
<p>Comments or feedback? <a href="https://twitter.com/randompunter/status/1353377441855795200">Respond to this tweet</a>.</p>
http://dhickey.ie/2020/10/23/proxykit-last-release/ProxyKit - Last release and EOL.<p>At the end of last year I pushed version <code>2.3.4</code> of
<a href="https://github.com/ProxyKit/ProxyKit">ProxyKit</a> to nuget.org, which I expect to
be the <a href="https://github.com/ProxyKit/ProxyKit/releases/tag/v2.3.4">final version</a>.</p>
2020-10-22T22:00:00Z
<!--excerpt-->
<p>I had intended to do a new major version with some features and improvements,
expecting Microsoft's <a href="https://github.com/microsoft/reverse-proxy">Reverse
Proxy</a> to only support .NET 5.
However, they have since decided to also support ASP.NET Core 3.1, with an
initial release expected alongside .NET 5, so I don't want to invest any more
time in it. So if you are a ProxyKit user, plan your migration.</p>
<p>I've closed all remaining issues that are not bugs. Apologies to those of you
who were suggesting improvements that I left hanging there for some time
(motivation on this took a hit).</p>
<p>I will take some solace in the fact that it proved that the concept of a
code-first, programmable and embeddable reverse proxy is desirable (and I prefer
C# to Lua anyway).</p>
<p>Side note: The original project name was Dotnet Extensible Reverse Proxy. I
think that might have been a better name than YARP :)</p>
http://dhickey.ie/2020/10/20/aspnet-core-3-nested-apps/AspNet Core 3.1 Nested Applications<p>This is a follow-up to a <a href="http://dhickey.ie/2018/06/09/aspnet-core-nested-apps/">previous post</a>
about how to host multiple isolated ASP.NET Core applications in a single
process with a single HTTP listener/entry point. The techniques used in that post
relied on types and classes, e.g.
<code>Microsoft.AspNetCore.Hosting.Internal.StartupLoader</code>, that have been made
<code>internal</code> in ASP.NET Core 3.x, and thus the approach is no longer viable.</p>
<p>Instead we need to take a different approach that, while it has some caveats, works well.</p>
2020-10-19T22:00:00Z
<!--excerpt-->
<p>The gist of the solution is:</p>
<p><img src="http://dhickey.ie/images/2020-10-20/multiple-apps.png" alt="Nested Apps" /></p>
<ul>
<li><p>Host each web application on their own independent <code>WebHost</code> with their own
HTTP listener on <code>loopback</code>/<code>localhost</code>.</p></li>
<li><p>Run a reverse proxy server (ProxyKit today, or Microsoft's Reverse Proxy in
the future) on a non-loopback IP address and configure it to forward requests
for matching routes to each application.</p></li>
</ul>
<p>However, various defaults in ASP.NET Core assume that it is the host
and entry point. There are a number of things we need to do to make our ASP.NET
Core web applications more "library"-like so they can be composed and hosted:</p>
<ul>
<li><p>Consider where and how static content is discovered depending on execution
scenario (<code>dotnet run</code>/<code>F5</code> vs Tests(ncrunch) vs <code>dotnet publish</code>).</p></li>
<li><p>Ensure each ASP.NET Core web application doesn't discover controllers,
services, etc from the other web applications and register things it shouldn't
know about.</p></li>
<li><p>Ensure the web applications bind to a random port so as not to clash with any
other process that might be using the same port.</p></li>
<li><p>Use typed settings for configuration and only use <code>IConfiguration</code> in the
MainHost.</p></li>
<li><p>Consider security, i.e. other applications on the same machine should
not be able to make HTTP requests to the applications listening on localhost.</p></li>
</ul>
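<p>As an illustration of the random-port point above, one way to do it with the ASP.NET Core 3.1 generic host is to bind Kestrel to port 0 and read back the assigned address after start. This is a sketch, not code from the sample; <code>NestedStartup</code> is a placeholder name:</p>
<pre><code>var host = Host.CreateDefaultBuilder()
    .ConfigureWebHostDefaults(webBuilder =>
    {
        // Port 0 asks the OS for a free port; binding to loopback keeps
        // the listener off the network.
        webBuilder
            .UseUrls("http://127.0.0.1:0")
            .UseStartup<NestedStartup>();
    })
    .Build();

await host.StartAsync();

// Read back the address (and port) the server actually bound to:
var address = host.Services
    .GetRequiredService<IServer>()
    .Features.Get<IServerAddressesFeature>()
    .Addresses.First();
</code></pre>
<p>The resulting address can then be handed to the reverse proxy's route configuration.</p>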
<p>Rather than go into details here, I've posted a complete runnable sample <a href="https://github.com/damianh/lab/tree/master/dotnet/AspNetCoreMultipleApps">on
GitHub</a>
that addresses all of the above and with more details in the implementation notes
in the readme.</p>
http://dhickey.ie/2018/06/09/aspnet-core-nested-apps/AspNet Core Nested Applications<p>Given any application of a reasonable size, to reason about it and manage
complexity one generally applies modular programming along clear and well
defined boundaries. Recently I was seeking to do this with AspNet Core where I
wanted to compose several independent applications, potentially developed
by separate teams, within the one host.</p>
<p><img src="http://dhickey.ie/images/2018-06-09/nested-apps.png" alt="Nested Apps" /></p>
<p>Out of the box this is achieved with middleware; however, this still means there
is a single dependency injection container whose service registration code gets
large and leaky. Occasionally there were classes and configurations that
clashed. While useful for a lot of scenarios, AspNetCore middleware wasn't
giving me the isolation I desired: own controllers, own static content, own auth
middleware settings, independent policy definitions, focused service registration
and more. Basically, I want multiple Startup classes representing the different
applications, connected to a path in a host application.</p>
<p>Prior to AspNet Core this was easy to achieve with OWIN. Nested applications
were just an <code>AppFunc</code> that you built and connected when wiring up via
<code>app.MapPath("path", appFunc)</code>. With AspNet Core we need to take another
approach.</p>
2018-06-08T22:00:00Z
<!--excerpt-->
<h2>Isolated Nested Apps</h2>
<p>From the <a href="https://github.com/aspnet-contrib">aspnet-contrib</a> project <a href="https://github.com/aspnet-contrib/AspNet.Hosting.Extensions/blob/e60bb9a96cce4a0feb61956b95c6f2e84e801bda/src/AspNet.Hosting.Extensions/HostingExtensions.cs">there is
an IApplicationBuilder
extension</a>
that allows adding a nested (isolated) application to a pipeline. This is a
single class file that is small enough to be copied into your code base (as
opposed to adding a dependency). Internally, it creates its own independent
service collection that <a href="https://github.com/aspnet-contrib/AspNet.Hosting.Extensions/blob/e60bb9a96cce4a0feb61956b95c6f2e84e801bda/src/AspNet.Hosting.Extensions/HostingExtensions.cs#L241-L249">registers a minimal set of global
services</a>
resolved from the root/parent container.</p>
<p>Let's look at a minimal example:</p>
<pre><code>public class RootStartup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.IsolatedMap<NestedStartup>("/nested");
        app.Run(async context => await context.Response.WriteAsync("Hello World!"));
    }
}

public class NestedStartup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.Run(async context => await context.Response.WriteAsync("Hello from Nested App!"));
    }
}
</code></pre>
<p>Here, requests to <code>/nested</code> will be routed to the application defined in
<code>NestedStartup</code>. Neat.</p>
<p>There is a small gotcha, though, that isn't obvious in this API. As I was
exploring getting this to work with
<a href="https://github.com/IdentityServer/IdentityServer4">IdentityServer4</a>, I was
getting NullReferenceExceptions from deep inside it with respect to accessing
the HttpContext. When registering IdentityServer, it <a href="https://github.com/IdentityServer/IdentityServer4/blob/2df7b152031605c8311e8f082c57c1a56cb73fcc/src/IdentityServer4/Configuration/DependencyInjection/BuilderExtensions/Core.cs#L44">registers
<code>IHttpContextAccessor</code></a>.
The HttpContext is owned by the root application, and because this service is not
registered in the root startup, nested applications won't receive it either.
To fix this I needed to register the service in the root container for it to
'flow' into nested apps:</p>
<pre><code>public class RootStartup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.TryAddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.IsolatedMap<NestedStartup>("/nested");
        app.Run(async context => await context.Response.WriteAsync("Hello World!"));
    }
}

public class NestedStartup
{
    public void ConfigureServices(IServiceCollection services)
    {
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.Run(async context => await context.Response.WriteAsync("Hello from Nested App!"));
    }
}
</code></pre>
<p>If the root app has <code>IHttpContextAccessor</code> registered, the <code>IsolatedMap</code>
extension will then <a href="https://github.com/aspnet-contrib/AspNet.Hosting.Extensions/blob/e60bb9a96cce4a0feb61956b95c6f2e84e801bda/src/AspNet.Hosting.Extensions/HostingExtensions.cs#L236-L239">register it in the nested app's container</a>.</p>
<p><a href="https://github.com/aspnet/Hosting/issues/793">This issue</a> describes why
<code>IHttpContextAccessor</code> is not registered by default. At this time, I am not aware
of any other services that might need explicit registration in the root app
container to support nested apps.</p>
<h3>Injecting Service into Nested Startup</h3>
<p>Many nested applications will need a service injected
into their startup class. To support this, register the service
in the root app container and it will be injected when the nested app's startup
is activated:</p>
<pre><code>public class RootStartup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.TryAddSingleton<IHttpContextAccessor, HttpContextAccessor>();
        services.AddSingleton(new NestedAppSettings());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.IsolatedMap<NestedStartup>("/nested");
        app.Run(async context => await context.Response.WriteAsync("Hello World!"));
    }
}

public class NestedAppSettings { }

public class NestedStartup
{
    private readonly NestedAppSettings _settings;

    public NestedStartup(NestedAppSettings settings)
    {
        _settings = settings;
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSingleton(_settings);
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.Run(async context => await context.Response.WriteAsync("Hello from Nested App!"));
    }
}
</code></pre>
<h3>Controller discovery</h3>
<p>Something to be aware of is that ASP.NET Core MVC scans assemblies to discover
controllers. What we absolutely don't want is any nested app
discovering controllers from other nested apps. Therefore you must override this behavior
with more explicit registration. Refer to the section <em>Handling MVC</em> in <a href="https://www.strathweb.com/2017/04/running-multiple-independent-asp-net-core-pipelines-side-by-side-in-the-same-application/">this
blog
post</a>, which describes an approach.
That post also describes a mechanism to host independent apps ("Pipelines"); however,
I think the <code>aspnet-contrib</code> approach described in this post <a href="https://github.com/WebApiContrib/WebAPIContrib.Core/pull/139#issuecomment-395741067">is superior</a>.</p>
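<p>For illustration, explicit controller registration can be done with MVC's application parts; something along these lines (a sketch using the <code>ApplicationPartManager</code> API, not the exact code from the linked post):</p>
<pre><code>public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc()
        .ConfigureApplicationPartManager(manager =>
        {
            // Discard everything found by default assembly scanning...
            manager.ApplicationParts.Clear();

            // ...and register only this nested app's own assembly.
            manager.ApplicationParts.Add(new AssemblyPart(typeof(NestedStartup).Assembly));
        });
}
</code></pre>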
<h3>Wrapping up</h3>
<p>My advice is to use this only when your application is sufficiently
large / partitioned. In other words, sparingly. I also think that AspNet Core
should have first-class support for this model.</p>
<p>The sample code is <a href="https://github.com/damianh/lab/tree/master/dotnet/AspNetCoreNestedApps">on GitHub</a>.</p>
<p>Hat tip to <a href="https://twitter.com/PinpointTownes">Kévin Chalet</a> for
developing and publishing this very useful utility.</p>
http://dhickey.ie/2016/01/03/commercial-suicide-integration-at-the-database-level/Commercial Suicide - Integration at the Database Level<blockquote>
<p><em>This post was originally authored by <a href="https://twitter.com/JakCharlton">Jak Charlton</a> in 2009 and was originally hosted at <a href="http://devlicio.us/blogs/casey/archive/2009/05/14/commercial-suicide-integration-at-the-database-level.aspx">devlicio.us</a>. That site appears to be inactive and regularly unavailable so, with permission, I'm re-publishing it here as I think it's a timeless piece.</em></p>
</blockquote>
<p>There are many ways you can commit commercial suicide, but there is possibly no slower and more agonising death than that produced by attempting that great architectural objective, the single authoritative database to which all applications talk.</p>
<p>The theory is good, if we have a single database then we have all our business information in one place, accessible to all, easy to report against, reduced maintenance costs, consistency across all applications, and a host of other good objectives.</p>
<p>However all these noble ideals hide a more fundamental problem, that the single database does not solve any of them, and makes most of them into far bigger problems.</p>
2016-01-02T23:00:00Z
<!--excerpt-->
<h2>All Information In One Place - The Single Authority</h2>
<p>On the face of it this sounds like a great objective - after all developers try to live by the maxim of Don't Repeat Yourself - and data in many place is a clear violation of that principle.</p>
<p>Data that appears across many applications and across many storage mechanisms leads to all sorts of massive problems; inconsistency, duplication, replication, duplicated business logic and code, essentially all boiling down to - you end up with spaghetti data. Spaghetti data is much like spaghetti code - it sprawls, gets tangled up, and becomes hard to pull apart without covering yourself in pasta sauce. This is obviously a "bad thing".</p>
<h3>So what is wrong?</h3>
<p>Well the first and most obvious thing that is wrong is that all applications have different requirements, and different "world views". Although there may in theory be some concept of a "Customer" for the organisation as a whole, even this most basic of data items varies widely between individual departments and even individual applications within a single department.</p>
<p>You could approach this problem, as many database centric people would do, and say "that is the problem right there, we need to standardise all these applications to use the One True Customer". But that is missing the really important word in the definition of the problem ... "requirements" ... this is not accidental that the Customer is different to different parts of the business, it really <em>is</em> different.</p>
<p>Your database guys could say "well your Customer in your application may be different as you have different requirements, but you will just have to fit it into our One True Customer", but this is then like trying to put snakes into a plastic bag - they really don't want to be in the bag, they don't fit too well in the bag, and sooner or later while putting one in some others are going to escape or bite your hand. And when your other department starts putting his snakes in the bag too you will be fighting for who gets to not be bitten.</p>
<p>And worse still - now you are trying to map from your requirements based Customer to the One True Customer, and spend an inordinate amount of time maintaining this translation layer in your application. When that One True Customer changes, as he undoubtedly will as new applications require he is expanded to deal with more data they require, every previous application needs to be revisited, large parts of it need to be re-written, and the whole application needs to be regression tested again. And you have to do this for every single application talking to your single authority database.</p>
<p>You could just skip this stuff, and rely on your applications ignoring this new data, and rely on the database not caring if they correctly updated new data - but this really will come back to bite you - when that bag of snakes starts getting really large and really full - you really don't want to be the one trying to get new snakes into it.</p>
<h3>The Truth About the One True Authority</h3>
<p>There isn't one.</p>
<p>There - I've said it - I have upset all those database guys, probably upset a large number of SOA guys (I'll cover Commercial Suicide - Integration at Service Level in a later post), and have totally disagreed with noble business objectives.</p>
<p>The truth is, data must have context - without context, data is worthless, absolutely and totally worthless. Data stored in a database has no context, and therefore has no value. Context is provided by the applications that read and write that data, and therefore they are the only thing that matters, and their requirements are the only thing that matters. That means, they need data that is specific to their application, structured in a way that makes that application meet the business objectives, and in a way that makes that application meet non-functional requirements like resilience, reliability and consistency.</p>
<h3>So Why Does Anyone Want the One True Authority Database?</h3>
<p>Well, in legacy terms it is easy to understand why database admins and database developers want it - it is their lifeblood, their whole raison d'etre. More importantly, it is the culture in which they were brought up - the data is the important thing, the data is the centre of the universe, the data must be consistent, uniform and pure.</p>
<p>But leaving database developers aside, more importantly why would a corporation want the One True Authority Database (OTADB)? After all, the title of this post says this is "Commercial Suicide", so why hasn't this got through to management?</p>
<p>Well - the promise of OTADB is that it will reduce errors in duplication, reduce waste, reduce duplicated effort and reduce maintenance costs - all highly desirable business objectives. And indeed, from those that advocate this approach (those database admins and developers again), the OTADB sounds mighty attractive. On the face of it, it achieves all of these objectives.</p>
<p>Where it falls down is that this holy grail of software development is always just out of reach; they never quite manage to achieve it. Each application that is developed starts to make the OTADB worse, people start to hack things into it to get it to meet business requirements, not because developers want to hack things together, but precisely because the restriction of the OTADB <em>forces</em> them to do it that way if they are to deliver any kind of functionality at all.</p>
<p>They blame these hacks for ruining their vision of the One True Authority Database, the database admins tell them that they have to fight the application teams to stop them messing up their nice database, but that the OTADB no longer meets the noble objectives as those pesky development teams have messed it up for everyone.</p>
<h3>Wait a Minute - What is the BUSINESS Objective Behind the One True Authority Database?</h3>
<p>If anyone was to step back from those noble objectives and ask a far more fundamental question, the solution might actually be a lot more obvious than it may seem. While they are all noble objectives, largely actually made worse specifically by the OTADB approach, they are not the real business objective.</p>
<p>Underlying all the other requirements, the ultimate business requirement that drives people (in particular database admins) to want a single database is so that they can see what their company looks like, in other words - so they can produce Management Information - reports to you and me.</p>
<p>This is the single and most fundamental requirement for a business - to have a clear, consistent, accurate and up to date picture of what their company looks like. This is what management needs, it is what allows them to make decisions, allows them to identify problems and allows them to spot opportunities.</p>
<p>So, we are going to all this effort, and believe me it is extensive and significant effort, all to support some reporting tools at the end of the day. Reporting tools have problems with data in different formats, with data that is inconsistent, with data that is disparate and distributed. So at some point in the past, the "accepted truth" became "we need one true authority database to be able to produce good reports".</p>
<p>Reporting is a Context - and data only has purpose and relevance in context.</p>
<h3>If the Elephant in the Room is Actually Reporting, How Do We Solve The Elephant Problem?</h3>
<p>This is almost so easy to deal with, it is silly. Perhaps it is because it is so obviously simple that it has been overlooked by many and rejected by others. Especially as it violates another one of those noble objectives ... to provide quality reporting information, we duplicate more of our data.</p>
<p>Yep - we duplicate it - after all reporting data is read only, so it doesn't matter if it is just a copy of other data. Reporting requirements are also very different to transactional requirements too, so we get the added benefit of being able to optimise that duplicated data for the reporting functions.</p>
<p>Data in relational databases is actually very poor for query and reporting purposes, and there is a constant compromise to make it fast for all those applications to write to, that makes it poor to report on - and vice-versa.</p>
<p>How this data gets into the reporting database isn't my direct concern in this blog post, suffice to say the "easy" way is to publish messages with data changes, and have a reporting application pick those up and persist them. My point here - is that splitting the reporting functions from the day-to-day business functions pays massive dividends.</p>
<h3>Now We Have a New Problem</h3>
<p>That still leaves us with one problem - what happens when disparate applications really do need to know about data in other applications? What happens when my call centre operatives are asked to update the address for one of the Customers? Now, as each application has its own view of the world, and its own data stores, my accounting application does not have access to that change.</p>
<p>Well, the solution to the "how does data get in the reporting databases" question is exactly the same one here - you publish messages from your application when you have changes that the rest of the corporation may be interested in. Fire off a message saying "CustomerAddressUpdated" and any other application that is concerned can now listen for that message and deal with it as it sees fit.</p>
<h3>As It Sees Fit</h3>
<p>And this is the real business objective we were trying to achieve in the beginning ... avoiding Commercial Suicide.</p>
<p>When applications are each responsible for their own data, their own actions, and are only responsible for letting the "enterprise" know they have made some changes that other things may need to know about - then you have your solution.</p>
<p>In good development terms, we have proper separation of concerns ... applications are responsible for their data, and their data only. They decide if they care about data from other applications - they are not forced to use it, nor to work around it.</p>
http://dhickey.ie/2015/06/02/capturing-log-output-in-tests-with-xunit2/Capturing Log Output in Tests with XUnit 2<p>xunit 2.x now enables parallel testing by default. <a href="https://xunit.github.io/docs/capturing-output.html">According to the docs</a>, using console to output messages is no longer viable:</p>
<blockquote>
<p>When xUnit.net v2 shipped with parallelization turned on by default, this output capture mechanism was no longer appropriate; it is impossible to know which of the many tests that could be running in parallel were responsible for writing to those shared resources.</p>
</blockquote>
<p>The recommended approach is now to take a dependency on <code>ITestOutputHelper</code> in your test class.</p>
<p>But what if you are using a library with logging support, perhaps a third-party one, and you want to capture its log output related to your test?</p>
<p>Because logging is considered a cross-cutting concern, the <em>typical</em> usage is to declare a logger as a static shared resource in a class:</p>
<pre><code>public class Foo
{
    private static readonly ILog s_logger = LogProvider.For<Foo>();

    public void Bar(string message)
    {
        s_logger.Info(message);
    }
}
</code></pre>
<p>The issue here is that if this class is used concurrently, its log output will be interleaved, in the same way as using the console in tests.</p>
2015-06-01T22:00:00Z2015-06-01T22:00:00Z<p>xunit 2.x now enables parallel testing by default. <a href="https://xunit.github.io/docs/capturing-output.html">According to the docs</a>, using console to output messages is no longer viable:</p>
<blockquote>
<p>When xUnit.net v2 shipped with parallelization turned on by default, this output capture mechanism was no longer appropriate; it is impossible to know which of the many tests that could be running in parallel were responsible for writing to those shared resources.</p>
</blockquote>
<p>The recommend approach is now to take a dependency on <code>ITestOutputHelper</code> on your test class.</p>
<p>But what if you are using a library with logging support, perhaps a 3rd party one, and you want to capture it's log output that is related to your test?</p>
<p>Because logging is considered a cross-cutting concern, the <em>typical</em> usage is to declare a logger as a static shared resource in a class:</p>
<pre><code>public class Foo
{
    private static readonly ILog s_logger = LogProvider.For<Foo>();

    public void Bar(string message)
    {
        s_logger.Info(message);
    }
}
</code></pre>
<p>The issue here is that if this class is used in a concurrent way, its log output will be interleaved, just in the same way as using console in tests.</p>
<!--excerpt-->
<h2>Solution</h2>
<p>The typical approach to message correlation with logging is to use <a href="https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/NDC.html">diagnostic contexts</a>. That is, for each xunit 2.x test collection, we attach a correlation id to each log message and filter + pipe the messages we're interested in to the collection's <code>ITestOutputHelper</code> instance.</p>
<p>In this <a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests">sample solution</a>:</p>
<ol>
<li>Using Serilog, <a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/LoggingHelper.cs#L22-L26">we capture all log output</a> to an <code>IObservable<LogEvent></code>. Note that we must call <code>.Enrich.FromLogContext()</code> for the correlation id to be attached.</li>
<li>When each <a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/Tests.cs#L13">test class is instantiated</a>, we open a unique diagnostic context, <a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/LoggingHelper.cs#L31-L45">subscribe to and <em>filter</em> log messages for that context and pipe them to the test class's <code>ITestOutputHelper</code> instance</a>.</li>
<li>When the test class is disposed, <a href="https://github.com/damianh/CapturingLogOutputWithXunit2AndParallelTests/blob/master/src/Lib.Tests/LoggingHelper.cs#L47-L51">the subscription and the context are disposed</a>.</li>
</ol>
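<p>To make the shape of this concrete, a minimal version of such a <code>Capture</code> helper could look roughly like the following. This is a sketch, not the sample's actual code (the names <code>LoggingHelper</code>, <code>CaptureId</code> and the exact wiring are illustrative; see the linked repository for the real implementation):</p>
<pre><code>public static class LoggingHelper
{
    private static readonly Subject<LogEvent> LogEvents = new Subject<LogEvent>();

    static LoggingHelper()
    {
        // Capture *all* log output into a single observable stream.
        Log.Logger = new LoggerConfiguration()
            .Enrich.FromLogContext() // required for the correlation id to be attached
            .WriteTo.Observers(events => events.Subscribe(LogEvents.OnNext))
            .CreateLogger();
    }

    public static IDisposable Capture(ITestOutputHelper outputHelper)
    {
        var captureId = Guid.NewGuid();

        // Attach a unique correlation id to every event logged from
        // this test's logical call context...
        var context = LogContext.PushProperty("CaptureId", captureId);

        // ...and pipe only the matching events to this test's output.
        var subscription = LogEvents
            .Where(logEvent =>
            {
                LogEventPropertyValue value;
                return logEvent.Properties.TryGetValue("CaptureId", out value)
                    && value is ScalarValue
                    && captureId.Equals(((ScalarValue)value).Value);
            })
            .Subscribe(logEvent => outputHelper.WriteLine(logEvent.RenderMessage()));

        // Disposing tears down both the subscription and the context.
        return new CompositeDisposable(subscription, context);
    }
}
</code></pre>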
<p>A test class will look like this:</p>
<pre><code>public class TestClass1 : IDisposable
{
    private readonly IDisposable _logCapture;

    public TestClass1(ITestOutputHelper outputHelper)
    {
        _logCapture = LoggingHelper.Capture(outputHelper);
    }

    [Fact]
    public void Test1()
    {
        //...
    }

    public void Dispose()
    {
        _logCapture.Dispose();
    }
}
</code></pre>
<p>Notes:</p>
<ol>
<li>While we used <a href="https://github.com/damianh/LibLog">LibLog</a> in the sample library, the same approach applies to any library that defines its own logging abstraction or has a dependency on a logging framework.</li>
<li>While we used <a href="http://serilog.net">Serilog</a> to wire up the observable sink, we could do similar with another logging framework (NLog, Log4Net etc).</li>
</ol>
http://dhickey.ie/2015/04/17/stepping-down-from-neventstore/Stepping down as NEventStore coordinator<p>I can't remember exactly when it happened, but for the last few years at least, I have been the core maintainer / coordinator of <a href="https://github.com/NEventStore/NEventStore">NEventStore</a>. Built by Jonathan Oliver and just known as EventStore at the time (not related to the other <a href="https://geteventstore.com">EventStore</a>), it provided me and many people with a really easy way to get up and running with event sourcing. For unknown reasons the core maintainers back then stepped back, and since I was heavily invested in it, I offered to take it over.</p>
<p>Since then it has gone through a rename, 2 major versions and a bunch of minor releases. In that process I learned a lot about running an OSS project and connected with a lot of cool and smart people. This is something that I would highly recommend to any developer looking to get into OSS.</p>
<p>In the last 12 months or so I haven't been responsive enough to the community, issues, pull-requests, Google group etc., and it's time to make a statement. While I'm fairly stretched time-wise, the core reason is that as I've learned to build event sourced systems over the years, NEventStore's current design no longer works for me. While I'm using <a href="https://geteventstore.com">GetEventStore</a> in some scenarios, I still, and will continue to, have a need for SQL-backed event stores. How I'd like to see and interact with such stores is significantly different to how NEventStore currently works. I <em>could</em> mould NEventStore into how <em>I'd</em> like it to be, but then the changes would very likely alienate people and break their stuff. Thus it's best that I head off in my own direction.</p>
<p>If you are invested in NEventStore and would like to take over and run a popular OSS project (nearly 800 stars and 250 forks on GitHub, and tens of thousands of NuGet downloads), please <a href="https://twitter.com/randompunter">reach out to me</a> :)</p>
2015-04-16T22:00:00Z2015-04-16T22:00:00Z<p>I can't remember exactly when it happened, but for the last few years at least, I have been the core maintainer / coordinator of <a href="https://github.com/NEventStore/NEventStore">NEventStore</a>. Built by Jonathan Oliver and just known as EventStore at the time (not related to the other <a href="https://geteventstore.com">EventStore</a>), it provided me and many people with a really easy way to get up and running with event sourcing. For unknown reasons the core maintainers back then stepped back, and since I was heavily invested in it, I offered to take it over.</p>
<p>Since then it has gone through a rename, 2 major versions and a bunch of minor releases. In that process I learned a lot about running an OSS project and connected with a lot of cool and smart people. This is something that I would highly recommend to any developer looking to get into OSS.</p>
<p>In the last 12 months or so I haven't been responsive enough to the community, issues, pull-requests, Google group etc., and it's time to make a statement. While I'm fairly stretched time-wise, the core reason is that as I've learned to build event sourced systems over the years, NEventStore's current design no longer works for me. While I'm using <a href="https://geteventstore.com">GetEventStore</a> in some scenarios, I still, and will continue to, have a need for SQL-backed event stores. How I'd like to see and interact with such stores is significantly different to how NEventStore currently works. I <em>could</em> mould NEventStore into how <em>I'd</em> like it to be, but then the changes would very likely alienate people and break their stuff. Thus it's best that I head off in my own direction.</p>
<p>If you are invested in NEventStore and would like to take over and run a popular OSS project (nearly 800 stars and 250 forks on GitHub, and tens of thousands of NuGet downloads), please <a href="https://twitter.com/randompunter">reach out to me</a> :)</p>
http://dhickey.ie/2015/04/17/testing-owin-applications-with-httpclient-and-owinhttpmessagehandler/Testing OWIN applications with HttpClient and OwinHttpMessageHandler<p>Let's take the simplest, littlest, lowest level <a href="http://owin.org">OWIN</a> app:</p>
<pre><code>Func<IDictionary<string, object>, Task> appFunc = env =>
{
    env["owin.ResponseStatusCode"] = 200;
    env["owin.ResponseReasonPhrase"] = "OK";
    return Task.FromResult(0);
};
</code></pre>
<p><em>(In reality you'll probably use <a href="https://katanaproject.codeplex.com/">Microsoft.Owin</a> or <a href="https://github.com/damianh/LibOwin">LibOwin</a> to get a nicely typed wrapper around the OWIN environment dictionary instead of using keys like this.)</em></p>
<p>How do you test this?</p>
<p>Well, you <em>could</em> create your own environment dictionary, invoke the appFunc directly and then assert on the dictionary:</p>
<pre><code>[Fact]
public async Task Should_get_OK()
{
    var env = new Dictionary<string, object>();
    await appFunc(env);
    env["owin.ResponseStatusCode"].Should().Be(200);
    env["owin.ResponseReasonPhrase"].Should().Be("OK");
}
</code></pre>
<p>While this would work, it's not particularly pretty and it'll get messy fairly quickly when you need to assert cookies, deal with chunked responses, perform multiple requests (e.g. a login post-redirect-get), etc.</p>
<p>Wouldn't it be nicer to use <code>HttpClient</code> and leverage its richer API? It would also better represent an actual real-world client. Something like this is desirable:</p>
<pre><code>[Fact]
public async Task Should_get_OK()
{
    using(var client = new HttpClient())
    {
        var response = await client.GetAsync("http://example.com");
        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
</code></pre>
2015-04-16T22:00:00Z2015-04-16T22:00:00Z<p>Let's take the simplest, littlest, lowest level <a href="http://owin.org">OWIN</a> app:</p>
<pre><code>Func<IDictionary<string, object>, Task> appFunc = env =>
{
    env["owin.ResponseStatusCode"] = 200;
    env["owin.ResponseReasonPhrase"] = "OK";
    return Task.FromResult(0);
};
</code></pre>
<p><em>(In reality you'll probably use <a href="https://katanaproject.codeplex.com/">Microsoft.Owin</a> or <a href="https://github.com/damianh/LibOwin">LibOwin</a> to get a nicely typed wrapper around the OWIN environment dictionary instead of using keys like this.)</em></p>
<p>How do you test this?</p>
<p>Well, you <em>could</em> create your own environment dictionary, invoke the appFunc directly and then assert on the dictionary:</p>
<pre><code>[Fact]
public async Task Should_get_OK()
{
    var env = new Dictionary<string, object>();
    await appFunc(env);
    env["owin.ResponseStatusCode"].Should().Be(200);
    env["owin.ResponseReasonPhrase"].Should().Be("OK");
}
</code></pre>
<p>While this would work, it's not particularly pretty and it'll get messy fairly quickly when you need to assert cookies, deal with chunked responses, perform multiple requests (e.g. a login post-redirect-get), etc.</p>
<p>Wouldn't it be nicer to use <code>HttpClient</code> and leverage its richer API? It would also better represent an actual real-world client. Something like this is desirable:</p>
<pre><code>[Fact]
public async Task Should_get_OK()
{
    using(var client = new HttpClient())
    {
        var response = await client.GetAsync("http://example.com");
        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
</code></pre>
<!--excerpt-->
<p>There are three ways you can leverage <code>HttpClient</code> in this way.</p>
<ol>
<li>Start an actual web server using <code>Microsoft.Owin.Hosting.HttpListener</code>, <code>nowin</code> or similar. The issue with this is that because we are using a system resource (a port number), we will very likely get conflicts when running tests in parallel, on CI servers, in NCrunch, etc.</li>
<li>Use <code>Microsoft.Owin.Testing.TestServer</code>; which I'm not fond of because of the strange <code>HttpClient</code> property that is actually a factory, the lack of cookie and redirect support, and the reliance on <code>IAppBuilder</code>.</li>
<li>Use <a href="https://github.com/damianh/OwinHttpMessageHandler"><code>OwinHttpMessageHandler</code></a>!</li>
</ol>
<h3>OwinHttpMessageHandler</h3>
<p>This is an <a href="https://msdn.microsoft.com/en-us/library/system.net.http.httpmessagehandler.aspx"><code>HttpMessageHandler</code></a> that takes an <code>AppFunc</code> or a <code>MidFunc</code> in its constructor. When passed into an <code>HttpClient</code> it allows invoking the <code>AppFunc</code>/<code>MidFunc</code> directly, <strong>in memory and without the need for a listener (+ port)</strong>. This makes it very compatible with CI servers, parallel testing, etc. It is also faster and cleaner from a setup and tear-down perspective.</p>
<p>For example:</p>
<pre><code>[Fact]
public async Task Should_get_OK()
{
    Func<IDictionary<string, object>, Task> appFunc = env => { ... };
    var handler = new OwinHttpMessageHandler(appFunc); // Or pass in a MidFunc
    using(var client = new HttpClient(handler))
    {
        var response = await client.GetAsync("http://example.com");
        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
</code></pre>
<p>Once you have that <code>HttpResponseMessage</code>, you can do all sorts of rich assertions. For example, using <a href="https://github.com/jamietre/CsQuery">CsQuery</a> or <a href="https://github.com/FlorianRappl/AngleSharp">AngleSharp</a> to assert on the returned HTML.</p>
<h3>Options</h3>
<p>There are options that allow you to customize the handler:</p>
<pre><code>var handler = new OwinHttpMessageHandler(appFunc)
{
    // Set this to true (default is false) if you want to
    // automatically handle 301 and 302 redirects
    AllowAutoRedirect = true,
    AutoRedirectLimit = 10,
    // Set these if you want to share a `CookieContainer`
    // across multiple HttpClient instances.
    CookieContainer = _cookieContainer,
    UseCookies = true,
};
</code></pre>
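<p>Combining <code>AllowAutoRedirect</code> with a shared <code>CookieContainer</code> makes multi-step flows straightforward. A login post-redirect-get could be tested along these lines (a sketch only; the <code>/login</code> endpoint, form field names and the redirect behaviour are assumptions about the app under test, not part of OwinHttpMessageHandler itself):</p>
<pre><code>[Fact]
public async Task Should_login_via_post_redirect_get()
{
    var cookieContainer = new CookieContainer();
    var handler = new OwinHttpMessageHandler(appFunc)
    {
        AllowAutoRedirect = true,
        CookieContainer = cookieContainer,
        UseCookies = true
    };
    using(var client = new HttpClient(handler))
    {
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "username", "alice" },
            { "password", "secret" }
        });

        // POST /login; the 302 is followed automatically and the auth
        // cookie set by the app is carried along via the CookieContainer.
        var response = await client.PostAsync("http://example.com/login", form);

        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
</code></pre>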
<h3>Building an AppFunc from a Startup class</h3>
<p>If you are using <code>Microsoft.Owin</code> to construct your pipelines you are probably following the <code>Startup</code> class convention:</p>
<pre><code>public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseStuff();
        // ...
    }
}
</code></pre>
<p>Not many know this, but you can simply create an AppFunc by leveraging <code>Microsoft.Owin.Builder.AppBuilder</code>:</p>
<pre><code>[Fact]
public async Task Should_get_OK()
{
    var app = new AppBuilder();
    new Startup().Configuration(app);
    var appFunc = app.Build();
    var handler = new OwinHttpMessageHandler(appFunc);
    using(var client = new HttpClient(handler))
    {
        var response = await client.GetAsync("http://example.com");
        response.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}
</code></pre>
<h3>Real world examples and other usages</h3>
<ul>
<li><a href="https://github.com/damianh/LimitsMiddleware/tree/master/src/LimitsMiddleware.Tests">LimitsMiddleware</a></li>
<li><a href="https://github.com/damianh/Cedar.CommandHandling/blob/master/src/Cedar.CommandHandling.Tests/CommandHandlingTests.cs">Cedar.CommandHandling</a></li>
<li>... pretty much all of my middleware / web projects on Github.</li>
</ul>
<p>When I am building an application (typically CQRS based) that is HTTP-API-first, if I want to invoke a command, I'll often use <code>HttpClient</code> with <code>OwinHttpMessageHandler</code> in-proc to invoke the API "embedded".</p>
<h3>Acknowledgements</h3>
<p>While I don't use <code>Microsoft.Owin.Testing</code>, they did have a better implementation of a fake in-memory network stream which I pinched. <3 OSS :)</p>
http://dhickey.ie/2015/04/14/introducing-liblog/Introducing LibLog<p>LibLog (Library Logging) has actually been baking for a few years now, since before 2011 if I recall correctly, and <a href="https://www.nuget.org/packages/LibLog">the current version</a> is already at 4.2.1. It's fair to say it's been battle tested at this point.</p>
<p>As a library developer you will often want your library to support logging. The first and easiest route is to simply take a dependency on a specific logging framework. If you work in a small company / team and ship to yourselves, you can probably get away with this. But things get messy fast when the consumers of your library want to use a different framework, and now they have to adapt the output of one to the other or somehow configure them both. Then things get really messy when one of the <a href="http://stackoverflow.com/questions/8743992/how-do-i-work-around-log4net-keeping-changing-publickeytoken">logging frameworks changes its signing key</a> in a patch release, breaking stuff left, right and centre.</p>
<p>I think at one point I had NLog, Log4Net (2 versions) and EntLib Logging all being pulled into a single project. That's when I had enough.</p>
2015-04-13T22:00:00Z2015-04-13T22:00:00Z<p>LibLog (Library Logging) has actually been baking for a few years now, since before 2011 if I recall correctly, and <a href="https://www.nuget.org/packages/LibLog">the current version</a> is already at 4.2.1. It's fair to say it's been battle tested at this point.</p>
<p>As a library developer you will often want your library to support logging. The first and easiest route is to simply take a dependency on a specific logging framework. If you work in a small company / team and ship to yourselves, you can probably get away with this. But things get messy fast when the consumers of your library want to use a different framework, and now they have to adapt the output of one to the other or somehow configure them both. Then things get really messy when one of the <a href="http://stackoverflow.com/questions/8743992/how-do-i-work-around-log4net-keeping-changing-publickeytoken">logging frameworks changes its signing key</a> in a patch release, breaking stuff left, right and centre.</p>
<p>I think at one point I had NLog, Log4Net (2 versions) and EntLib Logging all being pulled into a single project. That's when I had enough.</p>
<!--excerpt-->
<p>Another approach is to have your library depend on a logging abstraction - <a href="https://github.com/net-commons/common-logging">Common.Logging</a>. This way your library is at least independent of a specific logging library. But are you better off? Common.Logging is strongly named and <a href="https://www.nuget.org/packages/Common.Logging">has had a dozen or so releases</a>; so what do you think happens when 2, 3 or more libraries that depend on <em>different</em> versions of Common.Logging are pulled into the same project? A world of pain with assembly loading errors and tweaking binding redirects.</p>
<p>And the adaptor packages aren't in a great state either:
<img src="http://dhickey.ie/images/2015-04-14-introducing-liblog-common-logging-adapters.png" alt="Common Logging Adapters" /></p>
<p>Some libraries, like NEventStore, <a href="https://github.com/NEventStore/NEventStore/blob/master/src/NEventStore/Logging/ILog.cs">define their own logging abstraction</a>. This puts the burden on the consumer to write their own adapter, but it's just a single class file without adding scores of references and dependencies to a project. The adapters can be put in the project's wiki/docs or a gist for users to copy and paste. Simples.</p>
<p>This is my preferred approach and is the foremost thing LibLog aims to make easier for you. As a source code only package (it will never be a dll) it will add a few public interfaces and classes to your project in <code>{ProjectRootNameSpace}.Logging</code> namespace:</p>
<ul>
<li>An <code>ILogProvider</code> that defines 3 methods: getting a logger and opening nested/mapped diagnostic contexts.</li>
<li>A <strong>single</strong> <code>Logger</code> delegate through which <strong>all</strong> logging functionality can be channelled, including structured messages, checking if a log level is enabled, exceptions etc. This is in contrast to Common.Logging's <a href="https://github.com/net-commons/common-logging/blob/master/src/Common.Logging.Core/Logging/ILog.cs"><code>ILog</code></a>, which has about 69 members.</li>
<li>A static <code>LogProvider</code> for your library to get a logger and for consumers to set the LogProvider.</li>
<li>PCL compatibility via the conditional compilation symbol <code>LIBLOG_PORTABLE</code>.</li>
</ul>
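<p>In practice the two sides look something like this (a sketch; <code>Widget</code> is an illustrative library class, and <code>MyCustomLogProvider</code> stands in for any <code>ILogProvider</code> implementation the consumer might supply - explicit wiring is optional, since LibLog auto-detects the installed framework):</p>
<pre><code>// Inside your library: get a logger from the embedded LogProvider.
public class Widget
{
    private static readonly ILog Logger = LogProvider.For<Widget>();

    public void Frob()
    {
        Logger.Info("Frobbing the widget");
    }
}

// In the consuming application there is usually nothing to wire up at
// all; LibLog detects the logging framework in use. A custom provider
// can also be set explicitly:
LogProvider.SetCurrentLogProvider(new MyCustomLogProvider());
</code></pre>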
<h2>The Icing on the Cake</h2>
<p>LibLog gives you more, though. LibLog will detect the presence of Serilog, NLog, Log4Net, LoupeLogging or EntLib Logging (in that preferential order) in your consumer's project and automatically log to it, <em>without the consumer of your library having to do anything at all</em>.</p>
<p>While LibLog uses reflection to do this (and caches the delegates for performance reasons), it turns out these logging frameworks have really stable APIs. <a href="https://github.com/ayende/ravendb/tree/master/Raven.Abstractions/Logging">An early variation has been in RavenDB</a> for a few years and has yet to break because of a new version of NLog or Log4Net. So for me, it feels like a safe enough approach. Don't worry if it does break, though; when that happens LibLog will catch the problem and just disable logging. In that scenario, if you can't update your lib, or your consumer can't update, the consumer can always fall back on supplying their own <code>{YourLibRootNamespace}.Logging.ILog</code> to get things working again.</p>
<p>You can see where else <a href="https://github.com/damianh/LibLog/wiki/LibLog%20in%20the%20wild">LibLog is being used in the wild</a>. If you know of any more places, please update the wiki :)</p>
<p>The <a href="https://github.com/damianh/LibLog">project is on github</a>, is licensed under MIT and further documentation is in the <a href="https://github.com/damianh/LibLog/wiki">wiki</a>.</p>
<p>Footnote: It may seem I'm bashing Common.Logging but I'm not - this is a general problem in .NET, particularly when strong naming comes into play. Let's take a moment to lament the lack of structural typing where maybe none of this would be necessary in the first place.</p>
http://dhickey.ie/2014/11/27/our-open-source-policy-at-evision/Our Open Source policy at eVision<p>There is a problem within our industry, particularly around .NET, known as the "no-contrib culture". That is, companies take and benefit from OSS but don't give anything back and also prevent their employees from doing so. A while ago <a href="http://dhickey.ie/post/2014/06/25/establishing-oss-policies-in-a-proprietary-software-company.aspx">I blogged about establishing an Open Source Software policy at a proprietary software company</a> that happens to consume and depend on a significant number of open source libraries. eVision was a typical company in that the standard employment contract had that catch-all clause that they own everything one does on a computer.</p>
<p>With the proliferation of OSS, the fact that the platform we predominantly build on <a href="http://www.hanselman.com/blog/AnnouncingNET2015NETasOpenSourceNETonMacandLinuxandVisualStudioCommunity.aspx">is going OSS</a>, and the fact that OSS is a viable business strategy (<a href="http://hbswk.hbs.edu/item/6158.html">even within proprietary businesses</a>), this was somewhat short-sighted. Indeed, several employees were simply ignoring it; a situation which was in neither party's interest.</p>
2014-11-26T23:00:00Z2014-11-26T23:00:00Z<p>There is a problem within our industry, particularly around .NET, known as the "no-contrib culture". That is, companies take and benefit from OSS but don't give anything back and also prevent their employees from doing so. A while ago <a href="http://dhickey.ie/post/2014/06/25/establishing-oss-policies-in-a-proprietary-software-company.aspx">I blogged about establishing an Open Source Software policy at a proprietary software company</a> that happens to consume and depend on a significant number of open source libraries. eVision was a typical company in that the standard employment contract had that catch-all clause that they own everything one does on a computer.</p>
<p>With the proliferation of OSS, the fact that the platform we predominantly build on <a href="http://www.hanselman.com/blog/AnnouncingNET2015NETasOpenSourceNETonMacandLinuxandVisualStudioCommunity.aspx">is going OSS</a>, and the fact that OSS is a viable business strategy (<a href="http://hbswk.hbs.edu/item/6158.html">even within proprietary businesses</a>), this was somewhat short-sighted. Indeed, several employees were simply ignoring it; a situation which was in neither party's interest.</p>
<!--excerpt-->
<p>Thus I am delighted to announce that we have established a policy that we believe strikes the right balance and have made it available on our <a href="https://github.com/eVisionSoftware/OpenSourcePolicy/blob/master/OSS-Policy.md">github organisation site</a>. </p>
<p>In layman's terms, this means that our employees are free to create any sort of open source outside of business hours (as long as it doesn't compete with our business), are free to contribute to open source we depend on at any time, and they own the copyright to that work (or whatever the terms are of the project they contribute to). The only real stipulation is that the project's licence must allow us to use it in our commercial software.</p>
<p>We hope this will have the effect of encouraging contribution to the platform we depend on strengthening it to our mutual benefit. We also hope that engagement with the open source community will have a positive learning effect for our engineers.</p>
<p>I personally hope that more organisations and companies adopt our policy and make it known publicly. Thus, we've released this policy under a creative commons licence. Disclaimer: you should have your lawyer check it over!</p>
<p>To all developers out there, when considering whom to work for I would <strong>strongly</strong> suggest that you should only look for organisations that have this or a similar policy in place. These are mature organisations that care for the platform and ecosystem they and their staff build upon.</p>
<p>Any feedback or questions, please contact me dhickey at gmail.com. Thanks to <a href="https://twitter.com/adamralph">Adam Ralph</a>, <a href="https://twitter.com/fransbouma">Frans Bouma</a> and <a href="https://twitter.com/chr_horsdal">Christian Horsdal</a> for their feedback.</p>
<p>On that note, eVision are looking for talented engineers (and more) to work on some challenging, distributed and large scale problems. :)</p>