Databox is mostly known for its business analytics and visualization capabilities. For the majority of its existence it was a single-product company, and its systems were developed to cater to just that one product. In its early days as a startup, it was convenient to keep the architecture simple and flexible so we could quickly adapt to change and align direction when needed to ensure business success. With a small team and limited resources, this is usually the de facto direction for most companies in their initial stages. Until a few years ago, the ecosystem architecture was relatively simple, as seen in the figure below.
The product frontend solution, the public website, and the mobile app communicated with a monolithic product backend solution and one product-level database. Although some minor features were implemented as microservices, most of the core functionality lived in the monolith and was tightly coupled. The monolith also served as a direct product backend (hence the naming). This blog describes in detail the transformation of a product-facing backend monolith into a multi-layered structure.
As the years passed, the company grew in all aspects, helping the product mature and gain many new business and administrative features while also significantly growing the customer base. From a technical standpoint, this meant that the monolithic approach, which had served us greatly in the past, grew to the point where it was difficult to maintain and develop further while still keeping code quality high. Despite keeping code-quality best practices in mind, fast-paced development and adapting to business needs resulted in accumulated technical debt, which started to show in ever-growing challenges, such as:
The complexity of sticking to a monolithic architectural approach started to show. Our rate of developing new features slowed as we needed more focus to handle the issues mentioned above.

In parallel to this, a new idea started to form. Based on all the data our company has access to, we are in a unique position to give our customers insight into their business success in relation to their competitors. This sparked the idea to form the Benchmark Groups product, enabling users to compare their business results with companies of the same size from the same industry. By introducing a new product, Databox was shifting focus from offering a single product to embracing a new multi-product strategy. As we delved into developing Benchmark Groups, we aimed to incorporate established best practices but quickly realized our architectural approach posed some limitations. This prompted us to reassess and evolve our thinking on architectural strategies.
A new product, together with the growing pains of the current architectural approach, was the final catalyst to revisit our practice and shift it toward a new direction. As we embarked on a multi-product strategy, we wanted an architectural approach that would support our new direction and ensure the flexibility to adapt to future changes and challenges. The following section presents our plan and goals, the considerations we weighed, the challenges we encountered and needed to overcome, and our solution and the reasoning behind it.
In the planning process, we identified areas that would be challenging to overcome. We determined that the new product and all of our possible future products would share the same ecosystem and would need the same base functionalities to make them work together, such as:
The main product’s backend being a monolith made it difficult to implement new products, as all of them would need to either duplicate the implementation or transition users to the main product to conduct some shared operations. Neither option was considered a good solution, as sacrifices would have to be made in either case.
Based on the requirements, we identified that the best way to achieve our goal was to extract these features as separate mini/microservices, available as separate entities for all products and services to use and integrate. These features are essential to our product operations, so we did not want to focus only on moving the existing features from one service to another, but to think thoroughly about the best architectural practices and approaches to find the best possible solution. In practice, we had to think about scalability, security, stability, and future flexibility without suffering significant performance loss. Based on these considerations, we prepared a four-stage plan:
Out of the listed stages, this blog focuses on the first three, which are already implemented to some extent, and highlights the changes to our architectural approach.
To achieve all the goals with high efficiency and minimal impact on functionality, we decided to follow some well-established guidelines in software development. To start, we aimed to follow Clean Architecture, a software design philosophy proposed by Robert C. Martin (Uncle Bob) that promotes the separation of concerns and aims to create a modular, scalable, and testable codebase.
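To make the idea concrete, here is a minimal, hypothetical sketch of Clean Architecture-style layering in TypeScript. The names (Report, ReportRepository, ListReportsForOwner) are illustrative and not taken from our codebase: the use case depends only on an abstract repository interface, so the application core can be tested without a database or HTTP framework.

```typescript
// Hypothetical domain entity (illustrative only).
interface Report {
  id: string;
  title: string;
  ownerId: string;
}

// Port defined by the application core; infrastructure implements it.
interface ReportRepository {
  findByOwner(ownerId: string): Promise<Report[]>;
}

// Use case: pure application logic, testable without a database or HTTP layer.
class ListReportsForOwner {
  constructor(private readonly reports: ReportRepository) {}

  async execute(ownerId: string): Promise<Report[]> {
    const reports = await this.reports.findByOwner(ownerId);
    return reports.sort((a, b) => a.title.localeCompare(b.title));
  }
}

// Outermost layer: a concrete adapter (an in-memory stand-in for a real database).
class InMemoryReportRepository implements ReportRepository {
  constructor(private readonly data: Report[]) {}

  async findByOwner(ownerId: string): Promise<Report[]> {
    return this.data.filter((r) => r.ownerId === ownerId);
  }
}

// Wiring happens at the edge, so the core never imports infrastructure code.
const useCase = new ListReportsForOwner(
  new InMemoryReportRepository([{ id: "1", title: "Revenue", ownerId: "acc-42" }])
);
useCase.execute("acc-42").then((reports) => console.log(reports));
```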
Additionally, we wanted to implement our new solutions in a way that would be easy to maintain and adapt to unforeseeable future needs. We also wanted to reduce the coupling within services and ensure the separation of concerns. To handle this, we introduced the modularization pattern. This enabled us to control the coupling between feature sets more strictly and to form explicit communication channels where communication is needed. Modularization also enables us to quickly decouple a module into a separate microservice on the backend or a feature package on the frontend if needed.
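As a rough illustration of what we mean by modularization, the sketch below shows a domain module that exposes only a narrow public interface; the module and member names are hypothetical. Because other modules can only call through this interface, the implementation behind it can later be swapped for an HTTP or SDK client if the module is promoted into its own microservice.

```typescript
// modules/accounts/index.ts -- the only file other modules are allowed to import.
// All names here are hypothetical.
export interface Account {
  id: string;
  name: string;
  plan: string;
}

// The module's public contract: the single, explicit communication channel.
export interface AccountsModule {
  getAccount(accountId: string): Promise<Account>;
  deactivateAccount(accountId: string): Promise<void>;
}

// Internal implementation stays private to the module; if the module is later
// promoted to a microservice, this class is replaced by an HTTP/SDK client
// implementing the same interface, and callers do not change.
class AccountsModuleImpl implements AccountsModule {
  async getAccount(accountId: string): Promise<Account> {
    // ...load from the module's own repository/persistence layer
    return { id: accountId, name: "Example account", plan: "free" };
  }

  async deactivateAccount(accountId: string): Promise<void> {
    // ...domain logic kept inside the module boundary
  }
}

export const accounts: AccountsModule = new AccountsModuleImpl();
```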
Furthermore, we knew that our frontend solutions were too tightly coupled with the domain-related services. Often, these services were actually acting as a product backend, although this was not their main role. To solve this, we introduced a new layer between the services and our frontend solutions by using the Backends for Frontends pattern. This helped us make our services cater to multiple products and leave the product-specific logic to this new layer. As one of our stages also involved implementing a new product solution, the emphasis here was on ensuring high feature reusability. To achieve this, we embraced the Composite UI pattern, linking our frontend implementation to the associated mini/microservice and ensuring that these full-stack solutions can be reused anywhere in the ecosystem.
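For illustration, here is a hedged Backend-for-Frontend sketch in TypeScript using Express; the service URLs, route, and response shape are assumptions for the example, not our actual API. The BFF fans out to shared platform services and returns only what one product's UI needs, keeping the platform services themselves product-agnostic.

```typescript
import express from "express";

const app = express();

// Hypothetical internal URLs of the shared platform services.
const ACCOUNT_SERVICE = "http://account-service.internal";
const BILLING_SERVICE = "http://billing-service.internal";

// One endpoint shaped for one specific frontend view.
app.get("/bff/dashboard-header/:accountId", async (req, res) => {
  const { accountId } = req.params;
  try {
    // Fan out to the product-agnostic platform services in parallel...
    const [account, subscription] = await Promise.all([
      fetch(`${ACCOUNT_SERVICE}/accounts/${accountId}`).then((r) => r.json()),
      fetch(`${BILLING_SERVICE}/subscriptions/${accountId}`).then((r) => r.json()),
    ]);

    // ...and return only what this product's UI needs, in the shape it needs.
    res.json({
      accountName: account.name,
      plan: subscription.plan,
      trialEndsAt: subscription.trialEndsAt,
    });
  } catch {
    res.status(502).json({ error: "upstream platform service unavailable" });
  }
});

app.listen(3000);
```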
Transitioning from a monolithic architecture is challenging, and the same was true in our case. Below are some of the more challenging aspects of our process:
We started the transition with the development of new reusable services, which formed a new architecture layer in our ecosystem called the Platform layer. This new layer, meant to hold all the shared functionalities, enabled our backend solutions to start transitioning into more of a Backend for Frontend role, handling only the responsibilities specific to their product counterparts.
As the shared services were removed from the product monolith, these functionalities were also moved to a central location, introducing a new account-level product, the Account Management Application. This central hub now enables all our users to conduct account-level operations and administration in one place. Both of these solutions are briefly explained in the following subsections.
The main purpose of this layer is to offer reusable services that contain shared features, used by all of our existing and future products. Based on the list of required functionalities and accompanying dependencies, we have designed and implemented multiple new solutions:
Most of the services were implemented as microservices to ensure scalability. The only exception was the Account Service, which has fewer scalability concerns because its features are used less frequently. For this service, we used the modularization approach to create a module for each domain, ensuring code decoupling and the flexibility to transition the modules into microservices if needed.
We put a lot of effort into access management and security. In this regard, we used the Open Policy Agent (OPA) to ensure that all access management rules are specified in one place and can be used by multiple services. Some of the domain-specific rules still need to be implemented in each service, but unauthorized access is entirely managed by OPA.

It is important to note that all these solutions share best practices in the form of standard packages, project structure, and communication patterns. They are documented using the OpenAPI specification and are directly accessible via HTTP requests or automatically generated SDKs for multiple languages. This ensures easy reusability across all aspects of our ecosystem. A detailed image of this layer is illustrated below.
These services, together with the existing Billing Service, all transitioned to the platform layer and are available for all products and services.
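To illustrate the access-management piece described above, below is a hedged sketch of how a service might delegate an authorization decision to OPA through its standard Data API. The policy path (databox/authz/allow) and the input fields are assumptions made for the example, not our actual policy layout.

```typescript
// URL of the OPA sidecar/service; the policy path below is an assumption.
const OPA_URL = "http://opa.internal:8181";

interface AuthzInput {
  user: string;
  action: string;
  resource: string;
}

// Ask OPA for a decision via its Data API (POST /v1/data/<policy path>).
async function isAllowed(input: AuthzInput): Promise<boolean> {
  const response = await fetch(`${OPA_URL}/v1/data/databox/authz/allow`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input }),
  });
  if (!response.ok) {
    // Fail closed if the policy engine cannot be reached.
    return false;
  }
  const body = (await response.json()) as { result?: boolean };
  return body.result === true;
}

// Usage in a service, before any domain-specific logic runs.
async function handleReadAccount(userId: string, accountId: string): Promise<void> {
  const allowed = await isAllowed({
    user: userId,
    action: "read",
    resource: `account:${accountId}`,
  });
  if (!allowed) {
    throw new Error("403: access denied by policy");
  }
  // ...proceed with domain logic
}
```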
A separate account management application is not a novelty, especially in the context of multi-product support. As web industry leaders have already realized, and as our own direction confirmed, this is the right path. The account management project was intended to tackle most of the challenges listed in the previous sections. The goal was to create one central point (hub) for all Databox products, which would handle all the shared account- and data-source-related operations, ensuring the separation of concerns.
Implementing a new, separate application enabled all products to share their account-related operations and focus on providing the best value for their customers. Consequently, users are now redirected to this new app to conduct account management operations. On the technical side, this gives us a much more manageable approach from an implementation and maintenance viewpoint.
Architecture-wise, we wanted to extend our focus on reusability when implementing new features. As our products still needed some parts of the shared features to reside on the product side, we wanted a simple way of sharing these functionalities between all solutions. To achieve this, we implemented our own variation of the Composite UI pattern, bundling frontend and backend modules together to form standalone feature packages that can be used anywhere in our ecosystem.
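As a rough sketch of such a feature package, the example below bundles a small client for the feature's backing service with a self-contained UI fragment that any host application can mount; the package name, endpoint, and rendering approach are purely illustrative.

```typescript
// feature-packages/data-sources/index.ts -- all names and endpoints are illustrative.
export interface DataSource {
  id: string;
  name: string;
  connected: boolean;
}

// Thin client for the feature's backing mini-service, shipped with the package.
export async function listDataSources(accountId: string): Promise<DataSource[]> {
  const res = await fetch(`/platform/data-sources?accountId=${encodeURIComponent(accountId)}`);
  return res.json();
}

// Self-contained UI fragment: any host product provides an element to mount into.
export async function mountDataSourceList(host: HTMLElement, accountId: string): Promise<void> {
  const sources = await listDataSources(accountId);
  host.innerHTML = `<ul>${sources
    .map((s) => `<li>${s.name} ${s.connected ? "(connected)" : "(not connected)"}</li>`)
    .join("")}</ul>`;
}
```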
The new application, together with our feature packages, resulted in a significant reduction in the code base and in the maintenance and support burden of these features, as ownership moved to the newly formed Platform team. This approach does mean we introduced a single point of entry, which also presents a single point of failure. We plan to mitigate this with special emphasis on ease of maintenance and scalability options. In the future, we plan to introduce more mechanisms to reduce dependencies and implement redundancy techniques (e.g., caching).
The first two stages enabled the implementation of changes across both existing products. For our primary product, this translated into the challenging task of refactoring the monolith solution. Upon closer inspection, it became evident that the current product backend had accumulated a lot of technical debt, tightly coupled code, and complex processes that would require a careful approach to handle properly. The initial phase involved extracting all transferred features and adjusting the implementation of the remaining features to align with the new services, paving the way for a successful release of the Account Management Application, as mentioned earlier.
What followed was careful planning and implementation of all the remaining flows that still do not follow the new architectural approach, with the aim of slowly transitioning the product monolith into a proper Backend for Frontend role. When the changes to the existing services are finished and the code is adapted to the new flows, all the conditions for stage 4 will be met: splitting our main product database into smaller, more domain-oriented databases.
The last stage will focus on separating the main product database, which currently includes data for all products as well as shared data. Although the Benchmark Groups product introduced its own database, most shared data remains in the main product database. The way our services were set up and implemented in stage 1 already accounts for the changes we want to achieve with our databases.
The main challenge in achieving this goal will be avoiding performance issues, as many features need this shared data to some extent. Because it currently all resides in one place, these problems are not yet apparent. As we move to more databases, we must account for data separation between databases and ensure that each product/service holds enough information to function properly and as independently as possible.
The goal of changing our architectural approach was to empower our multi-product strategy and enhance our system's stability, scalability, and security by considering the best industry practices, patterns, and standards. To achieve this, we devised an ambitious four-stage plan. Although we are still in the process, most architectural challenges have been addressed and the solutions successfully implemented. Building on the success of our initial research and planning, we are confident in finalizing our plan and finishing the remaining stage of the process. The results are visible below.
This transformative change not only facilitated unrestricted handling of account-level operations across both products but also allowed for a heightened focus on delivering product-level features. This shift ensures that Databox products can now provide users with the best possible experience and optimal value.
Unveiling a transition to a multi-product strategy is part of a series of technical articles that offer a look into the inner workings of our technology, architecture, and product & engineering processes. The authors of these articles are our product or engineering leaders, architects, and other senior members of our team who are sharing their thoughts, ideas, challenges, or other innovative approaches we’ve taken to constantly deliver more value to our customers through our products.
Boris Ovčjak is the Director of Engineering at Databox. With his extensive experience and leadership skills, Boris plays a crucial role in steering the technological advancements and engineering processes at Databox. To learn more about his journey, read his Playmaker Spotlight.
Stay tuned for a stream of technical insights and cutting-edge thoughts as we continue to enhance our products through the power of data and AI.