Deep Generalism
I’ve spent the majority of my career working in a microscopic slice of technology, building client and server applications that power websites for editorial content, education, and ecommerce. From my perspective inside of that bubble (especially at the very beginning), web programming was the universe.
Tutorials, blogs, and video courses on the most popular and marketable skills were ubiquitous, drowning out entire domains of computing rich in theory and history. Bootcamps promised quick routes to financial independence in a fraction of the time of a traditional education. For someone like me who did not go to school for computer science, web programming wasn’t just a way into software; it was software altogether.
But in the last few years, I’ve found myself with more responsibilities: handling software architecture, designing schemas, and weighing trade-offs to arrive at solutions that best suit the problem at hand. These are the kinds of decisions that cut across many aspects of your work. Leaky abstractions don’t always just spread into their vertically adjacent neighbors; they can behave like electromagnetic interference, radiating outward and seeping into anything not adequately insulated.
Take, for example, the choice of a database for a particular application. We have choices among a variety of forms and paradigms: monolithic or distributed; relational, document, columnar, or graph; in the cloud or on-prem; each with its own guarantees around consistency and performance. We have to consider access patterns: whether the workload is write-heavy or read-heavy, whether access is sparse or dense, whether we expect a lot of at-will analytical queries or simpler transactions. We may have compliance requirements and budget considerations. If we’re running the database ourselves, this may influence our decision about the OS, its file system, and the underlying hardware. We may need to consider who will administer the database, whether the team is organized in a way that allows for it, and whether the hiring pool has a reasonable number of candidates who know it. How are backups and snapshots managed? Do we have particular requirements for those backups? We should also think about the applications that intend to use it: are there libraries and drivers that will reliably let us use this database? Are we expecting a lot of concurrent updates to the same data?
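That last question alone shows how far a single consideration can reach. As a rough sketch (in Python, against a hypothetical SQLite-backed inventory table, not any system I’ve actually shipped), this is what optimistic concurrency control looks like when many writers contend for the same row; a database that can’t express the compare-and-set atomically pushes that burden into application code or onto a different design entirely.

```python
import sqlite3

# Toy illustration only: optimistic concurrency over a single row using a
# version counter. The table, column names, and "reserve" helper are all
# hypothetical, chosen just to make the pattern concrete.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER, version INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 100, 0)")
conn.commit()

def reserve(conn, sku, amount):
    """Try to decrement stock; retry if another writer got there first."""
    while True:
        qty, version = conn.execute(
            "SELECT qty, version FROM inventory WHERE sku = ?", (sku,)
        ).fetchone()
        if qty < amount:
            return False  # not enough stock left to reserve
        # The UPDATE only succeeds if nobody else bumped the version in between.
        cur = conn.execute(
            "UPDATE inventory SET qty = qty - ?, version = version + 1 "
            "WHERE sku = ? AND version = ?",
            (amount, sku, version),
        )
        conn.commit()
        if cur.rowcount == 1:
            return True  # we won the race; otherwise loop and re-read

print(reserve(conn, "widget", 3))  # True
```

Every question in the paragraph above pulls on a thread like this one.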
And these are not just immediate needs; you need the experience and vision to predict what will be needed over the next couple of years (at least until you can safely reassess the solution and have the resources to move away from it). It takes coordination with other engineers, teams, and departments to gather as much information as you can before making these choices.
For the most part, it’s enough to have a passing knowledge of most things. Many brilliant, successful folks have been perfectly fine with the shallow heuristic that a NoSQL database usually handles write-heavy workloads better than a relational one. But there’s a deep rationale behind that heuristic, and that rationale may reveal unconsidered benefits or downsides lurking just beneath the surface.
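To gesture at the kind of rationale I mean, here’s a toy sketch (in Python, and emphatically not how any real storage engine is implemented): many write-optimized stores reduce a write to a sequential append on a log plus an in-memory index update, which is part of why they absorb write-heavy workloads so well, and also why the costs resurface elsewhere: in compaction, in memory for the index, in recovery after a crash.

```python
import os

class ToyLogStore:
    """A toy append-only key-value store: writes are sequential appends,
    reads go through an in-memory index of byte offsets. Roughly the shape
    of one family of write-optimized engines, minus everything that makes
    them usable (compaction, durability guarantees, crash recovery,
    concurrency control)."""

    def __init__(self, path):
        self.index = {}            # key -> offset of the latest value
        self.f = open(path, "a+b") # append for writes, seekable for reads

    def put(self, key, value):
        self.f.seek(0, os.SEEK_END)
        offset = self.f.tell()
        self.f.write(f"{key}\t{value}\n".encode())  # the write is just an append
        self.f.flush()
        self.index[key] = offset   # the old value stays on disk; a real engine
                                   # would eventually compact it away

    def get(self, key):
        offset = self.index.get(key)
        if offset is None:
            return None
        self.f.seek(offset)
        _, value = self.f.readline().decode().rstrip("\n").split("\t", 1)
        return value

store = ToyLogStore("/tmp/toy_log_store.db")
store.put("user:1", "ada")
store.put("user:1", "grace")   # an overwrite is another append, not an in-place update
print(store.get("user:1"))     # grace
```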