When Martin McCann and Mathias Born decided to create Trade Ledger, an Australian lending platform, their plan was to simplify and streamline lending services through cloud-based software for lenders. Their journey provides insights for CIOs in their own development efforts.

When Trade Ledger started to develop its software as a service, Born’s team was focused on how to build and architect a system that wouldn’t be obsolete in a few years. Beyond the technical choices, Born had to consider whether the team could work independently on each piece of functionality, with members managing and maintaining their own portions of the repository.

And, finally, in a business context, Born says it is important to know whether the functionality is modular and can be used in different areas.

“It's sometimes more an art than a science. But breaking those components down into the right context was definitely a challenge,” he says.

The challenges of building modular functions and adopting a microservices architecture

Trade Ledger faced multiple challenges along the way in adopting a microservices architecture.

For organisations building a similar system, Born says it is important to identify the parts of the system that get heavy usage and decouple them, so those functions can be scaled independently.

One technical challenge was that many of Trade Ledger’s engineers were more familiar with traditional SQL, traditional relational databases, and the traditional way of constructing data models. “Putting that together into the document-oriented database is definitely a different way of thinking, but nevertheless the document-oriented database was the right model for us,” Born says.

Another challenge involved an early decision to build the system monolithically.
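The shift in thinking Born describes can be illustrated with a minimal sketch. The schema below is invented for illustration and is not Trade Ledger's actual data model: the same loan-application data is modelled first relationally, with normalised rows joined at query time, and then as a single embedded document that a document store can return in one read.

```python
# Relational-style thinking: normalised rows linked by foreign keys,
# joined at query time. Field names are hypothetical.
applications = [{"id": 1, "borrower_id": 10, "amount": 50_000}]
borrowers = [{"id": 10, "name": "Acme Pty Ltd"}]

def relational_lookup(app_id):
    """Simulate the join a relational query would perform."""
    app = next(a for a in applications if a["id"] == app_id)
    borrower = next(b for b in borrowers if b["id"] == app["borrower_id"])
    return {**app, "borrower": borrower}

# Document-oriented thinking: embed the related data in one document,
# so a single read returns everything with no join.
application_doc = {
    "id": 1,
    "amount": 50_000,
    "borrower": {"id": 10, "name": "Acme Pty Ltd"},
}

print(relational_lookup(1)["borrower"]["name"])  # Acme Pty Ltd
print(application_doc["borrower"]["name"])       # Acme Pty Ltd
```

Both shapes answer the same question; the difference is whether the relationship is resolved at query time or baked into the stored document.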
The first version of Trade Ledger that went to market in 2017 was built by a team of three engineers, despite being a bigger and more complex system than what exists now. That monolithic approach made it difficult to evolve the platform, so Trade Ledger had to switch to a modular, component-based approach.

As a result, the team had to break the modules down into individual components in the next iteration. To do that, Trade Ledger took a phased approach, refactoring certain parts of the system and creating dedicated components.

For example, Trade Ledger initially had a feature that allowed a connection with cloud-based accounting systems like Xero. Now it resides in its own dedicated component, rather than being a function within a monolithic application. “We have taken this piece and moved it into its own connector component, which then allowed us to extend this independently of any other change in the initial monolith,” Born says.

The application architecture itself has to be modular, not simply the coded components, Born notes. “In a microservices-based system, we want to ensure that the services work independently of each other. Otherwise, we risk building a monolithic application using a microservices architecture.”

In a modular, microservices application, temporary inconsistency is expected, and it is important to ensure the data will nonetheless be eventually consistent, because data across the various services can be in different states depending on execution timing.

“Other challenges are leveraging the power of document-oriented thinking,” Born says. “In relational databases, data is typically linked together, and you create joins to query or consolidate the data. In a document-oriented database, you need to think differently and can store a lot of information in an embedded object.
However, if this is information which changes very frequently, then it might not be the best approach to store everything in one single document. Multiple smaller documents may be more efficient.”

These trade-offs, Born suggests, are among the things to look out for.

How Trade Ledger took control of data

For the data itself, Trade Ledger opted for a document-oriented database, which would allow the flexibility needed in the data model and the ability to manage future growth and scalability.

The next step was to identify the right components that would go into a component-based system. “The way we constructed it is that every component owns its own data structure and data tables, so one component cannot talk to the database of the other component directly. That's all handled either by events or APIs, and this allows us in the future to have flexibility,” Born says. This structure gives Trade Ledger a clear separation of data ownership, defining which component can modify which data.

With MongoDB Atlas, MongoDB’s cloud-hosted database service, Trade Ledger was able to configure its database to provide high resilience and high availability to its customers, with MongoDB functioning as the data layer.

“MongoDB is the operational database that is behind the microservices, and it provided us with the flexibility to make the move. While not every service has its own cluster, this model gives us the flexibility, if it were ever needed, to change the services or components that are powering the microservices — including the database — to support different use cases,” Born says.

The database also helped Trade Ledger on the operational side, as the organisation was able to offload many operational activities and focus on domain-specific problems. Now, Trade Ledger can control where the data is hosted, replicate it, set up the availability, and run tests.
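The ownership rule Born describes, where one component never talks to another component's database and changes flow only through events or APIs, can be sketched roughly as follows. The component and event names are hypothetical, and a simple in-process bus stands in for a real event backbone:

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for an event backbone: topic -> list of handlers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

class AccountingConnector:
    """Owns its own data; announces changes instead of sharing tables."""
    def __init__(self, bus):
        self._store = {}   # private to this component
        self._bus = bus

    def import_invoice(self, invoice_id, amount):
        self._store[invoice_id] = amount
        self._bus.publish("invoice.imported",
                          {"id": invoice_id, "amount": amount})

class LendingEngine:
    """Builds its own, eventually consistent view from events it receives."""
    def __init__(self, bus):
        self._receivables = 0   # private copy, never read from the connector
        bus.subscribe("invoice.imported", self._on_invoice)

    def _on_invoice(self, payload):
        self._receivables += payload["amount"]

    def receivables(self):
        return self._receivables

bus = EventBus()
connector = AccountingConnector(bus)
engine = LendingEngine(bus)
connector.import_invoice("inv-1", 1_000)
print(engine.receivables())  # 1000
```

The key property is that `LendingEngine` holds its own copy of the data it needs, updated only via published events, which is also why the temporary inconsistency mentioned earlier is an expected part of the model.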
Born says that 10 years ago he would have needed a team of 10 to 20 people to do that job.

Selecting the right tools and programming languages

As to how MongoDB came to be the final choice, Born says that he looked into offerings like ArangoDB and DynamoDB, as well as a couple of smaller options. One of the key differentiators was that MongoDB Atlas provides a fully managed hosted platform. “It allowed us to manage the database system much more efficiently, with a small team,” he says.

When Trade Ledger was selecting tools and programming languages, it started by looking at the core capabilities of its engineering team, which were primarily in Java, and so it started to build the first components in Java.

For its event server, Trade Ledger chose NATS as one of the core components of the event bus, instead of Apache Kafka. “At the time, there wasn’t a great hosted Kafka solution in the market, and we didn’t have enough capacity to have multiple engineers solely working on Kafka. NATS was a great solution that was optimised for Docker and got us up and running quickly, while still offering a very robust event-messaging solution.”

How Trade Ledger plans to simplify its system

Next for Trade Ledger is a complete change of its user experience. Born says that to ensure the system can continue to expand, a big change needs to be made. As the team itself has experienced while going through a big scaling phase, it has been getting harder to coordinate all the moving pieces.

The engineering team is looking at design-system principles, which it began to elaborate two years ago, but the approach was too complex. The goal now is to make it simpler.

“My strong belief is the best code you can have is no code, because you don't need to maintain it and nothing can go wrong. Obviously, that takes it to an extreme, but it's about making smart decisions on what you code.
That is one of the big learnings: it sometimes pays to give more attention up front, because it probably takes way less time to create a system with less code than to just start coding and create a lot of it,” Born says.

Trade Ledger is now building a no-code solution for customers, where they can implement their own rules without the need for coding. Born says that there are interesting movements around no-code UI platforms with powerful concepts, but that it remains to be proven whether they work as well in more complex systems.
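One common way to realise the no-code idea is to express customer rules as data and evaluate them with a small interpreter, so adding a rule means editing configuration rather than shipping code. The rule format and operators below are invented for illustration and are not Trade Ledger's actual product:

```python
# A tiny rules interpreter: rules live as data (e.g. JSON), not code.
# Operators and field names are hypothetical examples.
OPERATORS = {
    "gt": lambda field, value: field > value,
    "lt": lambda field, value: field < value,
    "eq": lambda field, value: field == value,
}

def evaluate(rules, record):
    """Return True only if the record satisfies every rule."""
    return all(
        OPERATORS[r["op"]](record[r["field"]], r["value"]) for r in rules
    )

# A customer-defined rule set, stored and edited as data.
rules = [
    {"field": "credit_score", "op": "gt", "value": 650},
    {"field": "loan_amount", "op": "lt", "value": 100_000},
]

print(evaluate(rules, {"credit_score": 700, "loan_amount": 50_000}))  # True
print(evaluate(rules, {"credit_score": 600, "loan_amount": 50_000}))  # False
```

Because the rules are plain data, they can be created through a UI, validated, versioned, and changed without a deployment, which is the property Born is after; whether the approach scales to genuinely complex systems is, as he notes, still to be proven.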