(Some) 3rd party dependencies as a design smell

How do you define good software design?

How do you know you’ve arrived at that sweet spot where you can stop refactoring?
Ask a hundred developers this question and you'll get a hundred different answers. I also bet most of them will start with the infamous:

"It depends."

And that's OK. I'd say the same. Why? Because, most of the time, many factors are at play, and one factor is a function of the other(s).
It's all about trade-offs.

The opposite question

How do we know that a given design is sub-optimal?
Are there any smells we can use to drive a design away from the bad end of the spectrum?
Since so many of us cannot agree on what good software design is, I believe that this approach of asking the opposite question can at least help us steer away from bad design.

As many of you hopefully know, there are already some seminal books that explain the reasoning behind why certain code is "smelly". Those books also describe the step-by-step mechanics that can drive us away from bad design by removing those smells.

One thing worth noticing here is that these code smells are defined at the function/class level.
But are there any design smells at a higher level of abstraction, perhaps the assembly/package level?
I think there are, and one of them is the mere fact of having dependencies on (some) 3rd party libraries.


“Programmers know the benefits of everything and the trade-offs of nothing”
― Rich Hickey


An example (IoC containers vs Pure DI)

The Dependency Injection (DI) pattern is quite popular nowadays, and most of us use Inversion of Control (IoC) containers to specify which dependency implementation to use in which client class. As an alternative, you can use Pure DI (previously known as Poor Man's DI), but you could argue that, in that case, adding a new dependency to a given class affects the composition root (i.e. you have to manually pass an instance of the dependency when instantiating the client class) and the code no longer compiles.

For this reason, some people see the benefit of using IoC containers, since adding new dependencies doesn't affect the composition root. If I were to rephrase it: with an IoC container, it's easier and more convenient to add a new dependency.
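To make the difference concrete, here is a minimal sketch in C# (the interfaces, classes, and the container API shown in the comments are invented for illustration). With Pure DI, the composition root constructs the object graph by hand, so adding a constructor parameter breaks compilation immediately; with a container, the registration code keeps compiling, and a missing or wrongly scoped binding only surfaces at run time.

```csharp
public class Order { }

public interface IOrderRepository { void Save(Order order); }
public interface IMailer { void Send(string to, string body); }

public class SqlOrderRepository : IOrderRepository
{
    public void Save(Order order) { /* persist to the database */ }
}

public class SmtpMailer : IMailer
{
    public void Send(string to, string body) { /* send the e-mail */ }
}

public class OrderService
{
    private readonly IOrderRepository repository;
    private readonly IMailer mailer;

    // Adding a third constructor parameter here immediately breaks the
    // hand-wired composition root below -- at compile time.
    public OrderService(IOrderRepository repository, IMailer mailer)
    {
        this.repository = repository;
        this.mailer = mailer;
    }
}

public static class CompositionRoot
{
    // Pure DI: the whole object graph is wired by hand, in one place.
    public static OrderService Compose() =>
        new OrderService(new SqlOrderRepository(), new SmtpMailer());

    // For contrast, container-based registration (a pseudo-API; the exact
    // shape varies by container) keeps compiling even if OrderService gains
    // a new dependency -- a missing registration or a wrong lifetime scope
    // only fails when Resolve() runs:
    //
    //   container.Register<IOrderRepository, SqlOrderRepository>();
    //   container.Register<IMailer, SmtpMailer>();
    //   var service = container.Resolve<OrderService>();
}
```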

My question would be: “Would you really like to make it too easy to add a new dependency?”

This, in my experience, easily gets out of hand and leads to classes with too many dependencies, which, in turn:

  • violate Single Responsibility Principle (SRP)
  • are harder to maintain
  • are harder to reuse
  • are harder to test
  • lead to the big ball of mud anti-pattern

Pain-induced (design) feedback

If I'm able to get design feedback from using Pure DI, then I'd like to leverage that information and address the design smell before it's too late.
Of course, I'm not advocating an "IoC containers considered harmful" stance here. I'm advocating their critical usage and reconsidering the reasons why someone would use them in the first place. Maybe they are, in fact, sweeping some design smells under the rug?


Also, bear in mind that, when using an IoC container, you're in fact circumventing the compiler: if a binding is not correct, you'll only find out at run time. Not to mention how easy it is for developers to get lifetime scopes wrong.
If you have too many classes in your solution, and that's the reason you find Pure DI cumbersome, ask yourself why you have so many classes in the solution in the first place.
Is there too much logic that should be separated into libraries/components/microservices? Are you sliding into a monolith with a bunch of entangled dependencies?
All of these "why" questions lead you to the root cause of the problem.

More examples
Mocking frameworks vs Hand-rolled stubs/mocks

Another example is using various frameworks that "aid" in unit testing. Let's take a look at mocking frameworks (e.g. Moq in C#, Mockito in Java, etc.), which enable you to stub and mock dependencies (actually, their interfaces) at run time. One of the reasons developers use them (myself included) instead of hand-rolled stubs/mocks is that, when you add a new method to an interface, that change doesn't affect the test code.
On the other hand, with hand-rolled stubs/mocks, whenever you add a new method to the interface, you have to implement it in all of the existing implementors (including the hand-rolled stubs/mocks used in test code).
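As a sketch of that difference (the IMailer interface and the class names are hypothetical), a hand-rolled mock records interactions itself, while a framework like Moq generates the test double at run time:

```csharp
public interface IMailer { void Send(string to, string body); }

// Hand-rolled mock: records interactions so the test can assert on them.
public class MailerMock : IMailer
{
    public int SendCount { get; private set; }
    public string LastRecipient { get; private set; }

    public void Send(string to, string body)
    {
        SendCount++;
        LastRecipient = to;
    }

    // If IMailer gains a new method, this class stops compiling until the
    // method is implemented -- exactly the "pain" (feedback) in question.
}

// With Moq, the test double is generated at run time, so adding a method
// to IMailer doesn't touch this test code:
//
//   var mailer = new Mock<IMailer>();
//   var sut = new NotificationSender(mailer.Object);   // hypothetical SUT
//   sut.Notify("bob@example.com");
//   mailer.Verify(m => m.Send("bob@example.com", It.IsAny<string>()), Times.Once());
```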

Thus, some of us would say that maintainability suffers when using hand-rolled stubs compared to mocking frameworks, and I'd agree. But why did we need to add a new method to the interface in the first place? Are we sliding towards the fat interface anti-pattern? Should we, instead, adhere to the Interface Segregation Principle (ISP) and split the fat interface into multiple smaller ones? That way, perhaps, we won't affect unrelated test code.
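A minimal sketch of what such an ISP split might look like (the interfaces are invented for illustration):

```csharp
// Before: a fat interface. Every hand-rolled stub must implement all of
// it, so any new member ripples through unrelated test code.
public interface INotificationService
{
    void SendEmail(string to, string body);
    void SendSms(string number, string text);
    void PushToMobile(string deviceId, string payload);
}

// After: role interfaces, split per client need. A test that only cares
// about e-mail stubs IEmailSender and is untouched when SMS evolves.
public interface IEmailSender { void SendEmail(string to, string body); }
public interface ISmsSender { void SendSms(string number, string text); }
public interface IPushSender { void PushToMobile(string deviceId, string payload); }
```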

Also, maybe I'd go as far as to ask why you need stubs/mocks in the first place. I'm an Outside-In TDD practitioner and highly value the way that approach positively affects design, but there are other approaches which may go one step further. For example, the imperative shell / functional core approach avoids the need for mocks in the first place.
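Roughly, the idea is to push all decision-making into pure functions that take values and return values, so their tests need no test doubles at all; only a thin imperative shell touches I/O. A minimal sketch (the names are invented):

```csharp
using System;

// Functional core: a pure decision, trivially testable without mocks.
public static class DiscountPolicy
{
    public static decimal Apply(decimal orderTotal, bool isLoyalCustomer) =>
        isLoyalCustomer ? orderTotal * 0.9m : orderTotal;
}

// Imperative shell: gathers input, calls the core, performs the effects.
public static class CheckoutShell
{
    public static void Run()
    {
        var total = decimal.Parse(Console.ReadLine());    // I/O at the edge
        var discounted = DiscountPolicy.Apply(total, isLoyalCustomer: true);
        Console.WriteLine($"To pay: {discounted}");       // I/O at the edge
    }
}

// A test for the core needs no stubs or mocks:
//   Assert.Equal(90m, DiscountPolicy.Apply(100m, true));
```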

Feature branching in DVCS vs Trunk-based development

Compared to branching in Centralized Version Control Systems (CVCS), branching in Distributed Version Control Systems (DVCS) is much easier and more convenient to use. I believe this is the main reason why developers like to use a DVCS to implement feature branching in their development process. But.

Feature branching just feels more convenient compared to the trunk-based development approach. With trunk-based development, you're frequently merging your work with someone else's changes. Because of these frequent merges, you might feel that this creates friction in your development. On the other hand, the feature branching functionality of (distributed) version control systems gives you a little sandbox where you can isolate yourself from others and not be interrupted. But what happens at the end of the sprint? All the other developers finish the features they've been developing and merge their changes into trunk. Suddenly, all of you have monster merges to resolve. It's the end of the sprint. The clock is ticking. Pressure to deliver.


If it hurts, do it more frequently, and bring the pain forward.
― Jez Humble


Feature branching just feels convenient, and that convenience is only temporary. It masks and delays integration problems, and yields monster merges at the end of the sprint, when all the functionality is supposed to be completed.

Mocking frameworks vs Dependency injection

Another example of coding by the "more convenient" approach is using static methods. It's "easier" to call a static method than to inject a dependency and call an instance method.
But this leads to (see the sketch after this list):

  • hidden dependencies,
  • inability to test,
  • separation of behavior from data, and
  • an anemic domain model.
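Here is that sketch (the class and interface names are invented for illustration): the static call hides its dependency, while constructor injection makes the dependency explicit and substitutable in tests.

```csharp
using System;

// "Convenient": the static call hides the dependency. The signature of
// PlaceOrder reveals nothing, and a test cannot substitute the clock.
public class OrderProcessorStatic
{
    public DateTime PlaceOrder() => DateTime.UtcNow; // hidden dependency
}

// Injected: the dependency is explicit in the constructor, so a test can
// pass a fixed clock and assert deterministically.
public interface IClock { DateTime UtcNow { get; } }

public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

public class OrderProcessor
{
    private readonly IClock clock;

    public OrderProcessor(IClock clock) => this.clock = clock;

    public DateTime PlaceOrder() => clock.UtcNow; // visible, substitutable
}
```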

If we don't address the root cause, we might go down the rabbit hole of asking sadly popular questions (like this one) that solve the consequence rather than the cause.
Then we introduce mocking frameworks with super exotic, advanced features that solve this issue by being able to mock static methods (like Typemock Isolator in C# or PowerMock in Java).
These frameworks have very few scenarios that justify their use. For frameworks that enable mocking concrete classes, private methods, and static methods, that scenario is precisely when we need to introduce unit tests into legacy code (by definition, code without tests).

Even more examples
  • automocking containers vs SRP – masking/circumventing the design smell of having too many dependencies in a class, while making it easy to stub/mock them
  • automappers vs proper domain modeling – exposing getters/setters, thereby violating encapsulation and domain model invariants, while making it easy to map incoming DTOs to domain classes (see the sketch after this list)
  • etc.
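To illustrate the automapper point, a minimal sketch (the names are invented): mapper-friendly classes need public setters so a tool can copy DTO fields over, which lets any caller bypass the invariants a properly modeled domain class would enforce.

```csharp
using System;

// Mapper-friendly, anemic: public setters so a tool can copy DTO fields
// over -- but nothing stops a caller from creating an invalid account.
public class AccountDto
{
    public decimal Balance { get; set; }
    public string Owner { get; set; }
}

// Properly modeled: invariants are enforced in the constructor and in
// behavior; state cannot be mutated arbitrarily from the outside.
public class Account
{
    public decimal Balance { get; private set; }
    public string Owner { get; }

    public Account(string owner, decimal openingBalance)
    {
        if (string.IsNullOrWhiteSpace(owner))
            throw new ArgumentException("Owner is required.", nameof(owner));
        if (openingBalance < 0)
            throw new ArgumentOutOfRangeException(nameof(openingBalance));

        Owner = owner;
        Balance = openingBalance;
    }

    public void Withdraw(decimal amount)
    {
        if (amount <= 0 || amount > Balance)
            throw new InvalidOperationException("Invalid withdrawal.");
        Balance -= amount;
    }
}
```
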
Summary

There are definitely cases where using a 3rd party library/framework is fine and a justified decision: you find that the advantages outweigh the disadvantages, and you accept the trade-offs and risks of using it. And that's fine.
My point is that you should not only consider the trade-offs that come along with introducing a new 3rd party dependency (e.g. accepting the possibility that a given library will not be supported or maintained in the future), but also ask: what does that need tell you about your design? Where did that need come from?

Agile development is all about small steps and an incremental approach. Why? Because it provides fast feedback about whether you're heading in the right direction.
For that very reason, don't neglect the design feedback you're getting when you feel the need to include a new dependency.
Think twice and ask yourself where that need came from in the first place. Answer those "why" questions and you'll be surprised by how many future problems you'll be able to prevent and how fast you'll be able to deliver.