Keep your code debuggable

Gerrit Stapper · Published in NEW IT Engineering
6 min read · Mar 21, 2021

Bugs are fine as long as you can see them!

When we write code, we never do it perfectly: there will always be errors, and that's totally fine!
That's why we hit the run button of the application or its unit tests every now and then to see whether the code compiles, passes its tests or reaches a certain point in the application flow. After a change, we quickly check how it is reflected in the processing stream. When things get messy and we need to find bugs, we fire up the IDE's debugger and just go ahead.

Imagine not having that: imagine missing the quick feedback cycle of just running your code right away, missing a debugger to dig deep into your code's logic, halt the execution whenever you want and take time to understand. Imagine being short of all these tools.

Working in restricted environments

I hope we all agree that the above scenario should never come true. Unfortunately, it does! Imagine an environment where you don't have the freedom to install whatever software you need on your laptop, or a situation where network separation makes it impossible to connect to two services in different parts of the network at the same time.

In the scenario I faced, we had the following situation:

Our code fetches data from two different databases; one of them is accessed via a web service whose code is not available to us. Further, the databases are not part of the same network, and neither is part of the network my laptop sits in. I can tunnel to one of them at a time, but never to both simultaneously. On top of that, one of the two is an HBase database, which is something we could not install on our machines locally.

Network separation problem

What we ended up doing was compiling the application locally, uploading it to the server, which has access to both databases, and then running the code with our latest changes, hoping it goes through…without a real idea why it would. Yikes! Debugging boiled down to print statements and the feedback cycles were humongous. Double yikes!

Overcoming the restrictions

Our next instinct was: What if we fetch the data for a specific use case on the server once and then create a mechanism that can work with it locally?

This was a valid approach for us, as the same input always meant the same database queries: we could pre-fetch the data for our use cases on the server and then mock the database calls locally.

For this solution, we relied on two Spring mechanisms (they come from the Spring Framework itself, so they are not specific to Spring Boot):

  • Injecting interfaces that are resolved to a corresponding bean at runtime
  • The @Profile annotation to restrict a bean to a specific profile

Additionally, we needed a mechanism to write the results of the database queries to file when running on the server and to read them back when running locally.

Let’s look at them one by one — shall we?

Injecting interfaces

With Spring (Boot), you can declare an interface as a dependency of one of your Spring components and let dependency injection find the corresponding bean that implements this interface at runtime. See the example below:

Injecting interfaces into Spring Boot components
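
The original snippet was published as an image; a minimal sketch of the pattern could look like this (the interface name DatabaseService and all method names are assumptions for illustration):

import java.util.List;
import org.springframework.stereotype.Component;

// DatabaseService.java: the interface our components depend on
public interface DatabaseService {
    List<String> fetchRows(String query);
}

// ProcessingComponent.java: depends only on the interface;
// Spring injects whichever bean implements it at runtime
@Component
public class ProcessingComponent {

    private final DatabaseService databaseService;

    public ProcessingComponent(DatabaseService databaseService) {
        this.databaseService = databaseService;
    }

    public void process(String query) {
        List<String> rows = databaseService.fetchRows(query);
        // ... work with the rows
    }
}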

This has interesting advantages, one of which we will leverage: you can define multiple beans that all implement the same interface and its methods, but in different manners. In our case, one actually queries the real database while the other just reads the data from file:

Two services implementing the same interface
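
Again only a sketch; the class names ServerDatabaseService and LocalDatabaseService are the ones used later in this post, everything else is assumed:

import java.util.Collections;
import java.util.List;
import org.springframework.stereotype.Service;

// ServerDatabaseService.java: talks to the real databases on the server
@Service
public class ServerDatabaseService implements DatabaseService {

    @Override
    public List<String> fetchRows(String query) {
        // placeholder for the real HBase / web service access
        return Collections.emptyList();
    }
}

// LocalDatabaseService.java: replays pre-fetched results from disk
@Service
public class LocalDatabaseService implements DatabaseService {

    @Override
    public List<String> fetchRows(String query) {
        // placeholder: will read the persisted result for this query from file
        return Collections.emptyList();
    }
}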

If we try to run our application like this, it will fail as there is more than one bean that implements the interface:

Spring Boot startup error on ambiguous beans
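
I cannot reproduce the exact screenshot here, but from memory the startup failure report looks roughly like this (the wording differs between Spring Boot versions; the underlying exception is a NoUniqueBeanDefinitionException):

***************************
APPLICATION FAILED TO START
***************************

Description:

Parameter 0 of constructor in ProcessingComponent required a single bean, but 2 were found:
	- localDatabaseService
	- serverDatabaseService

Action:

Consider marking one of the beans as @Primary, or using @Qualifier to identify the bean that should be consumed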

The reason: Spring cannot decide for us which bean to use as the implementation of the interface. We need to tell Spring when to use which, which we will do next.

Profile-scope beans with @Profile

You might have heard of Spring profiles in the context of the application.properties file, which you can extend with profile-specific variants like application-local.properties. This way, if local is the active profile, both property files are considered by the application, with application-local.properties taking precedence, i.e. overwriting identical properties from application.properties.
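
As a small example, using a property we will meet again later in this post:

# application.properties: applies to every profile
debuggable.persist=true

# application-local.properties: only loaded when the "local" profile is active,
# overriding the value above
debuggable.persist=false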

You can leverage the same mechanism for bean instantiation as well. Annotating a Spring component with @Profile(<profile>) tells Spring to only add this bean to the context in case the given profile is actually active.

We can now annotate our local service from above with the local profile to ensure it is only bootstrapped when that profile is active:

Configuring the bean to be active with profile “local”
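
Sketched on the made-up classes from above:

import java.util.Collections;
import java.util.List;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

@Service
@Profile("local") // only part of the application context when the "local" profile is active
public class LocalDatabaseService implements DatabaseService {

    @Override
    public List<String> fetchRows(String query) {
        // placeholder: will read pre-fetched results from file, see below
        return Collections.emptyList();
    }
}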

This alone still bootstraps the other implementation as well, so we need to annotate it with the exact opposite value:

Configuring the bean to be active with every profile but “local”
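
And the counterpart, again as a sketch:

import java.util.Collections;
import java.util.List;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

@Service
@Profile("!local") // active for every profile except "local"
public class ServerDatabaseService implements DatabaseService {

    @Override
    public List<String> fetchRows(String query) {
        // placeholder for the real database access
        return Collections.emptyList();
    }
}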

The ServerDatabaseService will now be bootstrapped for every profile but local, while the LocalDatabaseService will be bootstrapped then and only then.

Inside IntelliJ, for example, you can set the active profile like so (you can set multiple, if you want):

Setting the active profile(s) in IntelliJ
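
Outside the IDE, the same can be achieved with the spring.profiles.active property (or the SPRING_PROFILES_ACTIVE environment variable), for example:

java -jar my-application.jar --spring.profiles.active=local

(my-application.jar is of course a placeholder for your own artifact.)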

Now, if you start your Spring Boot application again, you won’t run into ambiguous beans anymore, cool! We now have our two services for the different environments. The last part is writing to and reading from files.

Pre-fetched data via file access

Next, we extended the code that queries the database on the server node (profile !local) by adding a method that saves whatever result is fetched to file, so that we have the data we need for development available locally:

Writing database results to file in the non-local service
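
Here is a rough sketch of that idea, extending the made-up ServerDatabaseService from above; the property names are the real ones discussed below, while the file format and naming scheme are assumptions:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

@Service
@Profile("!local")
public class ServerDatabaseService implements DatabaseService {

    // both values come from application.properties
    @Value("${debuggable.persist}")
    private boolean persistResults;

    @Value("${debuggable.directory}")
    private String persistDirectory;

    @Override
    public List<String> fetchRows(String query) {
        List<String> rows = queryRealDatabase(query);
        if (persistResults) {
            persistToFile(query, rows);
        }
        return rows;
    }

    private List<String> queryRealDatabase(String query) {
        // placeholder for the actual HBase / web service access
        return Collections.emptyList();
    }

    // writes the result under a file name that can be reproduced from the query alone
    private void persistToFile(String query, List<String> rows) {
        try {
            Path target = Paths.get(persistDirectory, fileNameFor(query));
            Files.createDirectories(target.getParent());
            Files.write(target, rows, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // deterministic name so the local service can find the file again
    static String fileNameFor(String query) {
        return Integer.toHexString(query.hashCode()) + ".txt";
    }
}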

This service is using two properties from the application.properties file:

  • debuggable.persist: Determines whether any results should be written at all
  • debuggable.directory: In case files should be written, determines the base directory to write them to

Application properties to configure writing to file and where
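
For example (the values are placeholders, the keys are the ones the services read):

# application.properties
# whether the non-local service should dump query results to file at all
debuggable.persist=true
# base directory the dumps are written to (and read from locally)
debuggable.directory=/data/debuggable-dumps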

You can now use the same properties in the LocalDatabaseService to read from the same directory. Make sure that the filenames created when writing can be reproduced when reading, so that the local service finds the correct files.
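
A matching sketch for the local side, reusing the deterministic file-name helper from the server-side sketch above (again, everything apart from the class and property names is an assumption):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Profile;
import org.springframework.stereotype.Service;

@Service
@Profile("local")
public class LocalDatabaseService implements DatabaseService {

    @Value("${debuggable.directory}")
    private String persistDirectory;

    @Override
    public List<String> fetchRows(String query) {
        try {
            // same deterministic file name the server-side service used when writing
            return Files.readAllLines(
                    Paths.get(persistDirectory, ServerDatabaseService.fileNameFor(query)),
                    StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}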

Now, whenever you need specific data, you still need to run the example on the server to get the data from the database. After that, however, you have it at hand locally and can debug, rerun the code quickly and make adjustments on the go!

Conclusion

In some restricted environments it can be tough to run your code locally. The presented solution can partially solve this problem by giving you local access to the contents of databases that you normally would not have available.

This solution falls short for the code that actually reads from and writes to the database. In case you need to check that interface in more detail, I'm afraid your best bet is still running the code on the server. For all other scenarios, I hope it helps increase your productivity.

If you have seen similar solutions in other languages or frameworks, I'd be glad to hear about them. Suggestions for improving the solution are also welcome :-)

UPDATE: The respective code repository is currently down as I am refactoring and updating its content as well as the blog post :-) I will add the link again right afterwards.
