You can listen to the full episode of the podcast here: https://javaswag.github.io/episode/78/
Interviewer: Welcome to this special interview, drawing on insights from the recent Java Swag podcast with Filipp, an architect at lektone.op. Filipp, thank you for joining us.
Filipp: Glad to be here.
Interviewer: You’ve had a long and varied career in programming. Could you start by telling us about your journey into Java and how you began writing industrial systems?
Filipp: My first industrial language, when I started building industrial systems in FinTech, was Visual Basic. It was popular for a short time and quickly replaced by C++. I even spent some time as a DBA (database developer), working purely with SQL. Then I was invited to do a complex project that I realised couldn’t be done in SQL alone, so I looked around and discovered Java 1.1 or 1.2. We built a system with a Java applet frontend and a Java-plus-MS SQL Server backend. Since then, I’ve been committed to the JVM stack, including Java and Kotlin. Lately, I’ve been slowly moving towards Kotlin; my last project was mostly in Kotlin. However, I’ve also been thinking about using new Java versions again, as the language has become very interesting.
Interviewer: You’ve touched on databases, and it’s clear you have a deep background there. You’ve mentioned starting with TheBase 2.0, then working extensively with MSSQL Server, and later falling in love with DB2. Could you elaborate on why DB2 stands out as the “best database from a programmer’s perspective” and the “most convenient for development”? What features made it so good, and why did you eventually shift to Postgres?
Filipp: I started with TheBase 2.0 as a student, for an optimisation task; it was a very old, non-SQL database. My main work was with MSSQL Server, for which I still have a “tender love”, doing complex things like FIFO analysis and even rendering pictures inside the database. Then came DB2, which I consider the best from a programmer’s perspective. It’s a very high-quality and reliable product, from the era when IBM was still a great software company. It was one of the first SQL databases. I loved features like automatic Active Standby out of the box, similar to Patroni in Postgres, which allowed switching between modes with a button in the interface and even reverting, something most others still can’t do. Its support was very cheap, making it excellent from an administration and architecture standpoint. For developers, it introduced conveniences like getting the ID of a newly inserted record immediately, and a very user-friendly SQL dialect. When I used it actively, its combination with Sun hardware was incredibly efficient. Unfortunately, in Russia, DB2 is now mostly confined to mainframes.
My shift to Postgres was primarily because it’s a free, cheap, and universal “Swiss Army knife”. For 90% of the projects I see, it’s simpler to start and finish with Postgres: everything is ready, the tooling is good, and you’re unlikely to hit its limitations. When Postgres isn’t enough, you start considering FoundationDB, YandexDB, CockroachDB, or other horizontally scalable, complex, but growth-oriented databases. Recently, I’ve also grown fond of ClickHouse.
Interviewer: Speaking of horizontally scalable databases, you’re a big proponent of FoundationDB for FinTech, despite it not being widely popular. What makes FoundationDB so suitable for FinTech, beyond its support for “normal transactions”?
Filipp: FoundationDB is used extensively inside Apple and is open-source. In Russia, some FinTech companies have used it. Its key advantages are horizontal scalability and ACID guarantees. It allows effective management of money, enables transactional changes across multiple servers, and is simple to support. My favourite story is when the author of Jepsen, the tool that rigorously tests database correctness, said he didn’t need to test FoundationDB because “their tests are more maniacal than mine”. Development started with a testing environment that emulates hardware and network failures. This focus on not losing data is crucial for FinTech.
From Java, you can interact with it via a standard Java driver that links to a local C driver. It’s inherently asynchronous, which makes it a great fit with Kotlin coroutines. We’ve built an intelligent wrapper with index support, automatic partitioning, and main-partition selection based on FoundationDB’s capabilities. We haven’t open-sourced it due to lack of demand, but it’s a very powerful tool.
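The asynchronous style of such a driver can be illustrated with plain `CompletableFuture`. This is a stand-in sketch with a hypothetical in-memory store, not the real FoundationDB API: reads return futures, and a read-modify-write becomes a pipeline rather than a blocking call.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AsyncKv {
    // Hypothetical async store in the style of an asynchronous database
    // driver: every read returns a future instead of blocking a thread.
    private final ConcurrentMap<String, Long> data = new ConcurrentHashMap<>();

    CompletableFuture<Long> get(String key) {
        return CompletableFuture.supplyAsync(() -> data.getOrDefault(key, 0L));
    }

    CompletableFuture<Void> set(String key, long value) {
        return CompletableFuture.runAsync(() -> data.put(key, value));
    }

    // Read-modify-write expressed as a future pipeline: no thread sits
    // blocked while the simulated round-trips are in flight.
    CompletableFuture<Long> increment(String key, long delta) {
        return get(key)
            .thenCompose(v -> set(key, v + delta).thenApply(ignored -> v + delta));
    }

    public static void main(String[] args) {
        AsyncKv kv = new AsyncKv();
        long balance = kv.increment("account:42", 100).join();
        System.out.println(balance);
    }
}
```

In the real driver the whole pipeline would run inside a retryable transaction, making the read-modify-write atomic; this sketch is not, which is exactly the kind of detail a wrapper library has to handle.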
Interviewer: You mentioned Kotlin coroutines. You’ve often highlighted their importance. Why do you believe coroutines are so crucial, and what main advantages do they offer over traditional threads?
Filipp: I remember times when Java servers couldn’t handle many threads. It was easy to run into a “perfect storm”: opening a database connection for every incoming HTTP request, coupled with network delays, would exhaust resources and crash the system. I’ve seen this happen in production due to engineering errors. I dislike reactive programming because its tooling is complex and it’s hard to find developers proficient in it. Coroutines offer a solution; it’s much harder to hit a “perfect storm” with them. You simply have concurrent requests without worrying about exhausting connection pools or threads. For services that receive many concurrent requests, coroutines reduce mental overhead and simplify the system’s mental model for the average developer.
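The “perfect storm” can be held off even with plain threads by admitting only as many requests as there are connections. A minimal Java sketch, using a semaphore as a stand-in for a connection pool (this is not the coroutine approach described above, just the underlying bounding idea):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedRequests {
    /** Runs `tasks` simulated requests against a pool of `poolSize`
     *  "connections" and reports the peak number in flight. */
    static int run(int poolSize, int tasks) {
        Semaphore pool = new Semaphore(poolSize);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxInFlight = new AtomicInteger();
        ExecutorService requests = Executors.newFixedThreadPool(32);
        for (int i = 0; i < tasks; i++) {
            requests.submit(() -> {
                try {
                    pool.acquire();                 // wait for a free "connection"
                    try {
                        int now = inFlight.incrementAndGet();
                        maxInFlight.accumulateAndGet(now, Math::max);
                        Thread.sleep(2);            // simulated query latency
                    } finally {
                        inFlight.decrementAndGet();
                        pool.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        requests.shutdown();
        try {
            requests.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return maxInFlight.get();
    }

    public static void main(String[] args) {
        System.out.println("max in flight with pool of 4: " + run(4, 200));
    }
}
```

However many requests arrive, at most four ever hold a “connection”; the rest queue cheaply on the semaphore instead of crashing the database.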
Interviewer: Java developers are excited about Virtual Threads. What are your thoughts on them?
Filipp: They’re too new for me to have tried them in production. Perhaps I’ll use them in a future large project. I’m hopeful that Kotlin will make effective use of Project Loom’s virtual threads. I like Kotlin for certain advantages, though I dislike some aspects too. For complex business logic, I would still prefer Kotlin. However, Kotlin requires more developer attention. Java, especially around version 1.4, was ideal for industrial development because it was simple and it was hard to write unreadable code in it. Kotlin, in contrast, is difficult to read without an IDE. I initially disliked Kotlin, but coroutines and null safety were the two things that convinced me to adopt it, despite issues like its heavy reliance on extensions. Kotlin has a different development ideology, built around coroutine contexts and extensions, which looks beautiful but can be hard to trace without an IDE. It takes some retraining for Java developers to write idiomatic Kotlin.
Interviewer: You’ve participated in a “battle” comparing Java and GoLang. From your perspective, given Java’s evolution, when would you choose GoLang, and when would Java still be the preferred option (assuming Kotlin is not an option)?
Filipp: My “battle” started by noting that GoLang is very similar to Java 1.1 or 1.2: a simple language with automatic memory management and good concurrency. Back then, Java 1.4 was more interesting, with the first Spring versions appearing. Java today, however, with generics, a hugely developed ecosystem, and an excellent set of libraries, is still my number one choice. The JVM stack, even plain Java, is preferred, with C# as choice number two. Anything else needs a very strong justification. Go’s ecosystem still needs to understand the importance of quality libraries and good frameworks for large projects.
Go lacks meta-programming features beyond simple annotations. I remember Doclets in Java, where you described what you wanted in comments and a special program generated Java code. With the advent of reflection, on-the-fly instrumentation, and bytecode-manipulation libraries like ASM, Java saw a massive rebirth: all code could live in one codebase without compiling generated sources separately. This gave a huge boost to the Java world. Go, however, relies solely on code generation, which for me is an endless source of problems, especially with IDEs, which don’t love it in the absence of standard tooling.
Regarding Garbage Collectors, Go has one, and it “just works,” whereas Java requires tuning and understanding different GCs. While Java’s approach might seem more complex, I see the ability to understand and tune GCs as a huge advantage. It allows you to optimise for specific needs, a possibility Go doesn’t offer. Also, Go’s documentation often suggests that if you need to understand its memory model, “you’re too smart and don’t need to do that,” which I find concerning. The Java world, especially since Java 1.5, expects seniors to understand how multi-threaded code works.
Interviewer: Your FinTech background is extensive. Let’s trace the evolution of the tech stack in FinTech over the past 20 years. What was used then, and what’s common now?
Filipp: 20 years ago, there were more diverse databases. When I started, two-tier architectures were common: a database and an operator’s workstation directly interacting with it via SQL queries or stored procedures. Later, application servers like Inter-Apollo and Diva Beans became popular with Java. Eventually, Spring became dominant, though now new ways are emerging to move towards lighter, simpler frameworks. We’d like to use Kotlin and Ktor due to coroutines.
Today, the landscape is very diverse. My preferred approach involves fairly large services covering entire subdomains, primarily with synchronous interaction and large analytical databases. Others prefer event sourcing, which I find problematic. Then there are those who build with thousands of microservices in the modern style, often leading to system failures. There’s no single “standard architecture” anymore.
System Design of Payment Systems
Interviewer: Let’s focus on your “authorial approach” to designing a payment system. What principles do you follow?
Filipp: I prefer sufficiently large services because they simplify transaction boundaries, reduce distributed transactions, and minimise interaction overhead. They typically correspond to subdomains. I’m also keen on interacting via databases, where reading from a dedicated view can be more efficient and faster than API calls. I generally favour synchronous interactions because, with dozens or hundreds of microservices, they are as reliable as asynchronous ones and simpler to develop.
However, I sometimes use pipes and filters as an architectural style for specific modules, treating them as processing pipelines for data or events. Overall, I’d call my style pragmatic. It somewhat resembles Uber’s domain-oriented architecture, though the underlying ideas are much older.
Interviewer: You’ve discussed patterns like BFF (Backend for Frontend) and workflow actors in your talks. Could you elaborate on these, and also give guidance on the “size” of a microservice, particularly in the context of your subdomain approach?
Filipp: BFF is about isolating complexity between the frontend and backend and concentrating contracts. It ensures your UI or mobile app doesn’t need to know about all your microservices. The frontend team usually proposes BFF requirements, and the backend team implements it, which facilitates parallel development. Regarding microservice size, there are two main approaches. The first comes from systems theory and DDD: decompose a large system into interacting parts, setting microservice boundaries to maximise internal cohesion and minimise external coupling. This is ideal, but the outcome depends on the domain model and can sometimes become subjective.
The second approach is based on non-functional requirements. For example, if you handle card data, you might isolate all card-number operations into a small, non-domain-specific microservice. This simplifies the rest of the system because that small service has much stricter security requirements. Similarly, if one service needs to respond 100,000 times per second while the rest respond once, it should be a separate service due to its different non-functional requirements. There’s no simple algorithm; it involves weighing both the domain model and non-functional requirements. The “two-pizza team” concept was a marketing ploy, not the original definition of microservices.
Workflow is a solution for complex, long-running business processes that involve multiple microservices in various ways. This includes saving data, waiting for responses, and then performing further actions. It’s a way to handle what some call “sagas”, though “saga” itself is a problematic term, as it refers to many different patterns. My workflow approach aims to retry and complete a process, potentially through alternative paths, rather than relying on compensation. This idea was influenced by Uber’s Cadence (and later Temporal).
We implemented our workflow as a library, rather than an external service like Temporal. This involves many intricacies: how to save state in the database, what to do on service restarts, how to parallelise workflows across instances, and how to start slowly when many processes need to resume. It’s complex; one of the best programmers I know found it challenging. Kotlin’s DSL capabilities make it possible to write elegant workflow code.
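The retry-until-complete idea might be sketched like this. All names here are hypothetical, and the in-memory map is a deliberately naive stand-in for the state the real library persists to the database:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

public class RetryingWorkflow {
    // Hypothetical persisted state: which step each process has reached.
    // A real library stores this in the database, so a service restart
    // resumes from the last completed step instead of compensating.
    private final Map<String, Integer> completedStep = new HashMap<>();

    /** Runs the steps in order, retrying each until it succeeds. */
    public void run(String processId, List<Supplier<Boolean>> steps, int maxAttempts) {
        int start = completedStep.getOrDefault(processId, 0);
        for (int i = start; i < steps.size(); i++) {
            boolean done = false;
            for (int attempt = 1; attempt <= maxAttempts && !done; attempt++) {
                done = steps.get(i).get();            // a step returns true on success
            }
            if (!done) {
                throw new IllegalStateException("step " + i + " exhausted retries");
            }
            completedStep.put(processId, i + 1);      // persistence point
        }
    }
}
```

The real intricacies Filipp mentions live exactly at the `completedStep.put` line: making that write transactional with the step’s own side effects is what the database-backed implementation is for.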
Postgres
Interviewer: You’ve built several things on top of Postgres, like your workflow library and queues, rather than introducing external systems. Could you explain this preference for using Postgres as a versatile backend?
Filipp: This choice is driven by necessity, especially when developing boxed solutions. We can’t install and support Temporal for every client. If I were building an in-house solution, I would use Temporal, unless it didn’t meet performance requirements. I wouldn’t build a database on top of a database, or a queue on top of Postgres, if there were a ready-made solution with reliable transactional guarantees. For example, with YandexDB I wouldn’t build anything custom, because it has topics and tables that work in one transaction. My preference is to use existing, reliable, and cost-effective solutions. I don’t love building custom “bicycles” (reinventing the wheel), but I do it when necessary.
Our custom Postgres queue supports millions of individual queues, which is useful for actor models where each client or account has its own event queue. Kafka, in contrast, is designed for fewer topics but many events per topic, requiring efficient parallelism. Our Postgres queue is built on SELECT … FOR UPDATE SKIP LOCKED; this PostgreSQL feature was designed specifically for queues.
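A minimal sketch of such a consumer, with a hypothetical `queue_events` table (the real implementation handles far more: batching, restarts, slow start). SKIP LOCKED lets competing workers claim different rows instead of blocking on each other’s row locks:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;

public class PgQueue {
    // Hypothetical schema: one table holds every queue, keyed by queue_id.
    static final String CLAIM_SQL =
        "SELECT id, payload FROM queue_events " +
        "WHERE queue_id = ? AND processed = false " +
        "ORDER BY id " +
        "LIMIT 1 " +
        "FOR UPDATE SKIP LOCKED";

    static final String ACK_SQL =
        "UPDATE queue_events SET processed = true WHERE id = ?";

    /** Claims and acknowledges one event inside the caller's transaction,
     *  so a crash before commit returns the event to the queue. */
    public static Optional<String> poll(Connection tx, long queueId) throws SQLException {
        try (PreparedStatement claim = tx.prepareStatement(CLAIM_SQL)) {
            claim.setLong(1, queueId);
            try (ResultSet rs = claim.executeQuery()) {
                if (!rs.next()) return Optional.empty();
                long id = rs.getLong("id");
                String payload = rs.getString("payload");
                try (PreparedStatement ack = tx.prepareStatement(ACK_SQL)) {
                    ack.setLong(1, id);
                    ack.executeUpdate();
                }
                return Optional.of(payload);
            }
        }
    }
}
```

Because the claim and the acknowledgement share one transaction, processing an event and marking it done are atomic — the transactional guarantee that motivates building the queue on Postgres in the first place.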
Kafka
Interviewer: Let’s discuss your “love for Kafka”. When do you choose Kafka versus your Postgres-based queues, or even Redis, for queuing? What are the trade-offs?
Filipp: My Postgres-based queue emerged from the need to have millions of active queues, where each client, user, or account has its own queue of events. This is ideal for an actor model. Kafka isn’t suitable for a million topics; it would fall over. Kafka, on the other hand, is for when you have fewer queues (topics) but many events in each, added and processed quickly, requiring efficient parallelism and specific guarantees. As an architect, I evaluate technical solutions based on what they promise and guarantee.
I haven’t used Redis for queues often, primarily because its guarantees aren’t ideal. While Redis can write WALs now, you need to understand the guarantees, the performance implications, and why you’d choose it over Kafka. If losing an event is acceptable, Redis might be faster in some cases. Redis, for me, is more of a general cache between services, and I generally try to avoid inter-service caches unless absolutely necessary.
Interviewer: You often discuss storing complex entities in relational databases using JSONB columns. Could you explain this pattern, with examples of what you typically store in JSONB versus structuring into multiple tables?
Filipp: I store data in JSONB when I need to retrieve it entirely at once. For example, a client’s record with attributes like hair colour, hobbies, name, and surname. If I need all that data together for the system and for front-end display, and it doesn’t change frequently, I’ll pack it into JSONB. This avoids breaking it into 20 tables when I’ll always read it whole. If the parts can change independently, it’s better to normalise them into tables. For example, storing a transaction’s history in JSONB isn’t convenient, but an individual transaction is fine, because it’s usually needed in its entirety. The key is to compare against the alternatives. Gathering the same data from 30 tables via joins might involve fetching more data from disk and complex linking, which can be more expensive than reading one large, packed JSONB column. However, if you have a megabyte-sized JSON and change parts of it 50 times per second, that would be inefficient. In such cases, it’s better to extract the changing part into a separate table or use more efficient partial-update mechanisms.
Ultimately, system design is about trade-offs and compromises. No solution is perfect; you pick the one with the lowest cost for your context.
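The trade-off can be made concrete in SQL terms. The `client` table and its columns below are hypothetical; the point is the contrast between a one-roundtrip whole-document read and a partial update via PostgreSQL’s `jsonb_set`:

```java
public class JsonbClientStore {
    // Hypothetical schema: stable identity stays relational, while the
    // rarely-changing, always-read-together profile is one JSONB column.
    static final String DDL =
        "CREATE TABLE client (" +
        "  id      bigint PRIMARY KEY," +
        "  profile jsonb NOT NULL" +
        ")";

    // The case JSONB is good at: the whole document in one roundtrip,
    // instead of joining 20 tables to reassemble it.
    static final String READ_WHOLE =
        "SELECT profile FROM client WHERE id = ?";

    // Partial update of one field via jsonb_set. If a field changes 50
    // times a second, it belongs in its own column or table instead.
    static final String UPDATE_HOBBIES =
        "UPDATE client SET profile = jsonb_set(profile, '{hobbies}', ?::jsonb) " +
        "WHERE id = ?";
}
```

Note that `jsonb_set` rewrites the whole stored value on update, which is exactly why the hot-field case argues for normalising that field out.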
Being a Team Lead
Interviewer: Let’s switch gears to team leadership and architecture. What was the most challenging aspect of your transition from a pure developer role to team lead, and then to architect?
Filipp: The hardest part was probably accepting the role. I often took on leadership responsibilities without the title or compensation at first, and would then leave for a pure developer role. This cycle repeated until I realised compensation mattered.
Secondly, it was crucial to understand and admit that I was a bad team lead. I initially thought I knew how to manage and delegate, but I was terrible in my early years. I learned to identify my strengths and weaknesses: I’m good at describing processes and building consensus, but less good at tracking team psychology or mediating conflicts between business and development. My team-lead career was a series of endless failures from which I sometimes learned. A team lead’s job isn’t just protecting the team from management but fostering joint development for the company, without infringing on developers’ rights or the business’s needs. Saying “I protect the programmers from these idiot businesspeople” is counterproductive. It’s hard to break that bias towards techies; I still prefer programmers over designers, with rare exceptions.
I generally dislike the “team lead” position. I believe it’s a problem inherited from the early 2000s. There’s no real need to “lead” someone within a development team. A tech lead is important for technical decisions the team can’t make and for developing the team’s technical excellence. Process management should be handled by someone who understands processes, likely across multiple teams. Personal development and psychological support should ideally come from HR or a professional closer to a psychologist. These are all distinct professional roles that are often dumped onto a team lead who might not even understand them.
I advocate for servant leadership, where the manager serves the team by providing context and enabling effective work, rather than being a boss. For example, when production is down, the best a team lead can do is buy coffee and pastries for the team fixing it, and shield them from higher management. It’s a difficult position to transition into.
Hiring
Interviewer: How do you approach hiring Java developers, specifically for mid-level to senior roles? What do you focus on during interviews?
Filipp: I’ve almost stopped asking about language or technology specifics. For a mid-plus hire, an hour to an hour and a half of conversation is usually enough; more doesn’t provide additional useful information. I always hire for a specific team and understand what role I need to fill, whether it’s someone for business logic, database work, or operational skills. I first identify deficits or surpluses in my team. My main focus is understanding the candidate’s personal background: what they’ve done, what interests them, and what they know how to do. I also check for common sense: do they blindly believe what vendors or books tell them, or can they critically evaluate information based on experience? This is crucial for mid- and senior-level roles, as it indicates their ability to work with requirements and write technical specifications. I might ask them to write a very simple piece of code (live coding) just to ensure they’re familiar with an IDE and Java, especially with the rise of AI tools. I also try to gauge their awareness: do they know why their previous company used Postgres over MSSQL, or Redis over Kafka? If they have logical reasoning, that’s interesting; if they just repeat textbook answers, it’s not. If a person genuinely fits, the conversation often flows for an hour and a half or even two hours.
Interviewer: You mentioned a “battle” on system design. Do you believe in asking typical system design questions (e.g., “design Twitter”) during interviews, or do you prefer a different approach?
Filipp: I don’t ask typical system design questions. I’m more interested in whether the developer reflects on their past choices and the trade-offs involved. Do they understand why they chose a particular path, and can they argue for it? I prefer to discuss their real-world experience rather than theoretical problems from books. For example, I’ve never designed Twitter, and neither has the candidate, nor, likely, the author of the book. Asking about it doesn’t reveal much.
Instead, I’d ask why they chose MongoDB over Postgres, or what the trade-offs were. Even if they inherited the choice, I’d ask whether they investigated it. My goal is to see their interest in their tools and their desire to dig deeper than the surface.
Unpopular opinion
Interviewer: Finally, do you have an “unpopular opinion” you’d like to share, perhaps related to development practices?
Filipp: My unpopular opinion is that ORMs (Object-Relational Mappers) are a very harmful practice that significantly hinders project development.
Interviewer: Please expand on that. Why are ORMs harmful?
Filipp: First, in normal products, actual database interaction constitutes a small portion of development time. You rarely write complex queries, even for CRUD operations, so ORMs don’t save much time. Second, ORMs stop you thinking about the system from the perspective of how data is actually stored. They introduce a very leaky abstraction that almost always causes problems as the project grows. It’s much better to use a query builder at worst, or, ideally, work directly with the database. This saves time, gives a better understanding of the system’s structure, and simplifies operations and future development.
For Java, I particularly like JDBC templates: you write direct SQL with minimal boilerplate code. Ideally, I’d love IDE support that tells me when a column doesn’t exist in my test database as I write a query, but that’s not fully realised yet.
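In plain JDBC, the direct-SQL style looks roughly like this (the `client` table and its columns are hypothetical; Spring’s JdbcTemplate trims the remaining boilerplate further):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ClientDao {
    // The SQL is visible and fully under your control - no ORM-generated
    // queries, no leaky mapping layer between you and the database.
    static final String FIND_BY_STATUS =
        "SELECT id, name FROM client WHERE status = ? ORDER BY name";

    public record Client(long id, String name) {}

    public static List<Client> findByStatus(Connection conn, String status)
            throws SQLException {
        List<Client> result = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(FIND_BY_STATUS)) {
            ps.setString(1, status);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.add(new Client(rs.getLong("id"), rs.getString("name")));
                }
            }
        }
        return result;
    }
}
```

The mapping from row to object is a few explicit lines you can read and debug, which is the trade Filipp is arguing for over an ORM’s generated behaviour.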