When Decoupling Goes Bad

I’m currently reading ASP.NET MVC 2 in Action and overall, the book seems solid. I can say this because I’ve read about 20 pages and agree with most of them. However, those 20 pages do contain some advice that seems overdone at best and downright confusing to future developers at worst. The chapter I’m reading is Data Access with NHibernate. I’m working on an application that consists of an ASP.NET web site backed by a PostgreSQL database. Previously, all my applications used MSSQL and therefore were set up using LINQ to SQL as a poor man’s ORM. With PostgreSQL, that’s no longer an option, so I’m in the process of learning NHibernate and Fluent NHibernate, a task that’s long overdue.

I hate learning a new technology by doing everything wrong the first time, so I went looking for best practices and architecture suggestions for setting up NHibernate. This book has an entire chapter on exactly that, so I dove right in. Overall, it’s been very useful. Heavy use of Dependency Injection and Inversion of Control nicely decouples the pieces of the app from one another. However, the authors recommend something that seems a little extreme to me.
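To make the rest of this concrete, here’s a minimal sketch of the style of decoupling the book advocates. The names here (ICustomerRepository, CustomersController, Customer) are my own invention, not the book’s: the controller in the UI project depends only on an interface defined in Core and never names a concrete Infrastructure type.

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Core project: the abstraction the UI is allowed to know about.
public interface ICustomerRepository
{
    IList<Customer> GetAll();
}

// Core project: a domain entity.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// UI project: the controller receives its repository via constructor
// injection; the concrete implementation lives in Infrastructure.
public class CustomersController : Controller
{
    private readonly ICustomerRepository _repository;

    public CustomersController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}
```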

The example solution has a UI project, which is the ASP.NET site; a Core project containing the domain model and code; an Infrastructure project for things like data access; and assorted test projects, including an Integration Test project. The authors point out that the only project that references the Infrastructure project is the Integration Test project. Their rationale is that infrastructure is necessarily fluid, so you don’t want to couple Core or the UI to it. They set this up by using runtime DI to inject dependencies from the Infrastructure project into UI components. Specifically, the data access repositories that certain controllers need are discovered at runtime using settings in the web.config. They claim that this results in a completely decoupled application.
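Reading between the lines, the runtime wiring boils down to something like the following. This is my own simplified version, not the book’s code, and the config key and type names are hypothetical. The concrete type name lives in web.config, and the app resolves it with reflection, so the UI project compiles without ever referencing Infrastructure:

```csharp
using System;
using System.Configuration;

// web.config (hypothetical key):
//   <appSettings>
//     <add key="CustomerRepositoryType"
//          value="Infrastructure.CustomerRepository, Infrastructure" />
//   </appSettings>

public static class RepositoryFactory
{
    // Resolve the concrete repository named in web.config. Note that this
    // only works if the Infrastructure assembly is physically present in
    // the UI project's bin folder at runtime.
    public static ICustomerRepository CreateCustomerRepository()
    {
        string typeName = ConfigurationManager.AppSettings["CustomerRepositoryType"];
        Type repositoryType = Type.GetType(typeName, true);
        return (ICustomerRepository)Activator.CreateInstance(repositoryType);
    }
}
```

That comment about the bin folder is exactly where the trouble starts.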

However, in order for this to work, the UI project needs access to the Infrastructure assemblies and config files at runtime. Normally, this would happen via an explicit reference in Visual Studio, which would cause the necessary files to be copied into the UI project’s output at compile time. Because the UI project doesn’t have that reference, the authors have to get the files there another way. Their solution is a post-build step on the Infrastructure project that copies the necessary files. To me, this only serves to make the reference implicit, something that is likely to cause issues down the road.
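For reference, that mechanism is just a Visual Studio post-build event (Project Properties > Build Events) on the Infrastructure project, something along these lines; the project layout and file names are my guess, not the book’s exact commands:

```
xcopy /Y "$(TargetDir)Infrastructure.dll" "$(SolutionDir)UI\bin\"
xcopy /Y "$(TargetDir)Infrastructure.pdb" "$(SolutionDir)UI\bin\"
```

Nothing in Visual Studio will tell you the UI depends on these commands. Delete the post-build event and the site still compiles; it just fails at runtime when the type can’t be loaded.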

The UI definitely has a dependency on the Infrastructure project. It seems extreme to hide that dependency in a post-build step instead of declaring it explicitly in the website’s project references. It’s one thing to write decoupled code that is easy to test and change. It’s entirely another to force developers to jump through hoops and keep track of idiosyncrasies like implicit project references. However, being a complete newbie to this style of architecture, maybe I’m missing something. Is there a real reason for managing dependencies between projects in this manner? If so, why not manage all of them the same way: do everything at runtime and copy every file via post-build steps? I suspect the answer is that there isn’t a real reason beyond the purity of the architecture, and purity pursued for its own sake is exactly the kind of thing that should immediately be questioned. I’d love to hear any opinions from the experts out there.