The technical meat of this post is below. It looks like they mostly eliminated app code wherever its functionality was redundant with the OS, with SQLite, or with the Messenger backend servers in Facebook's datacenters. A few rough sketches of how some of these ideas might look in code follow the quotes.

>One of our main goals was to minimize code complexity and eliminate redundancies. ... To build this unified architecture, we established four principles: Use the OS, reuse the UI, leverage the SQLite database, and push to the server.

>We accomplished this by using the native OS wherever possible, reusing the UI with dynamic templates powered by SQLite, using SQLite as a universal system, and building a server broker to operate as a universal gateway between Messenger and its server features.

>... the existing OS often does much of what’s needed. Actions like rendering, transcoding, threading, and logging [and JSON processing] can all be handled by the OS.

>To simplify and remove redundancies, we constrained the design to force the reuse of the same [UI] structure for different [UI] views. So we needed only a few categories of basic [UI] views, and those could be driven by different SQLite tables.

>Now, ... All the caching, filtering, transactions, and queries are all done in SQLite. The UI merely reflects the tables in the database.

>We developed a single integrated schema for all features. We extended SQLite with the capability of stored procedures, allowing Messenger feature developers to write portable, database-oriented business logic, and finally, we built a platform (MSYS) to orchestrate all access to the database, including queued changes, deferred or retriable tasks, and for data sync support.

>MSYS is a cross-platform library built in C that operates all the primitives we need. ... With MSYS, we have a global view. We’re able to prioritize workloads. Say the task to load your message list should be a higher priority than the task to update whether somebody read a message in a thread from a few days ago; we can move the priority task up in the queue.

>With MSYS, it’s easier to track performance, spot regressions, and fix bugs across all these features at once. In addition, we made this important part of the system exceptionally robust by investing in automated tests, resulting in a (very rare in the industry) 100 percent line code coverage of MSYS logic.

>For anything that doesn’t fit into one of the categories above, we push it to the server instead. We had to build new server infrastructure to support the presence of MSYS’s single integrated data and sync layer on the client.

>Coordinating logic between client and server is very complex and can be error-prone — even more so as the number of features grows. ...

>Similar to MSYS on the client, we built a server broker to support all these scenarios while the actual server back-end infrastructure supports the features.

>[To minimize code-base growth] We also built a system that allows us to understand how much binary weight each feature is bringing in. We hold engineers accountable for hitting their budgets as part of feature acceptance criteria. Completing features on time is important, but hitting quality targets (including but not limited to binary size budgets) is even more important.
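To make the "UI merely reflects the tables" idea concrete, here's a minimal sketch of what a table-driven generic view could look like using the stock sqlite3 C API. The table and column names (inbox_rows, contact_rows, title, subtitle) are invented; the post doesn't describe the actual schema or rendering layer.

    /* Hypothetical sketch: one generic list renderer that only knows how to
     * display (title, subtitle) rows, pointed at whichever SQLite table a
     * given feature fills in. Table/column names are made up. */
    #include <stdio.h>
    #include <sqlite3.h>

    static void render_list_view(sqlite3 *db, const char *table)
    {
        char sql[256];
        sqlite3_stmt *stmt;
        snprintf(sql, sizeof sql,
                 "SELECT title, subtitle FROM %s ORDER BY sort_key;", table);
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
            return;
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s | %s\n",
                   (const char *)sqlite3_column_text(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        sqlite3_finalize(stmt);
    }

    int main(void)
    {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);

        /* Two different "features" populate two different tables... */
        sqlite3_exec(db,
            "CREATE TABLE inbox_rows(sort_key INT, title TEXT, subtitle TEXT);"
            "INSERT INTO inbox_rows VALUES (1, 'Alice', 'See you at 6');"
            "CREATE TABLE contact_rows(sort_key INT, title TEXT, subtitle TEXT);"
            "INSERT INTO contact_rows VALUES (1, 'Bob', 'Active now');",
            NULL, NULL, NULL);

        /* ...and the same view code renders both. */
        render_list_view(db, "inbox_rows");
        render_list_view(db, "contact_rows");

        sqlite3_close(db);
        return 0;
    }

The point is that the renderer carries no feature-specific logic; two features get the same view category by filling two different tables, which is presumably what "driven by different SQLite tables" means in practice.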
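They also say they "extended SQLite with the capability of stored procedures," which stock SQLite doesn't have. The closest off-the-shelf analogue I know of is an application-defined SQL function registered with sqlite3_create_function, so here's a sketch of that; the display_name function and contacts table are invented, and whatever they actually built is surely more elaborate than this.

    /* Sketch of "database-oriented business logic" living next to SQLite,
     * using a stock application-defined SQL function rather than their
     * stored-procedure extension (which isn't public). */
    #include <stdio.h>
    #include <sqlite3.h>

    /* display_name(first, last): trivial logic callable straight from SQL,
     * so query results come back already UI-ready. */
    static void display_name(sqlite3_context *ctx, int argc, sqlite3_value **argv)
    {
        char buf[256];
        (void)argc;
        snprintf(buf, sizeof buf, "%s %s",
                 (const char *)sqlite3_value_text(argv[0]),
                 (const char *)sqlite3_value_text(argv[1]));
        sqlite3_result_text(ctx, buf, -1, SQLITE_TRANSIENT);
    }

    int main(void)
    {
        sqlite3 *db;
        sqlite3_stmt *stmt;
        sqlite3_open(":memory:", &db);
        sqlite3_create_function(db, "display_name", 2, SQLITE_UTF8, NULL,
                                display_name, NULL, NULL);
        sqlite3_exec(db,
            "CREATE TABLE contacts(first TEXT, last TEXT);"
            "INSERT INTO contacts VALUES ('Ada', 'Lovelace');",
            NULL, NULL, NULL);
        sqlite3_prepare_v2(db, "SELECT display_name(first, last) FROM contacts;",
                           -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }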
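And the workload-prioritization bit from the MSYS quote, reduced to a toy: one queue of deferred tasks where loading the message list outranks syncing an old read receipt. Everything here (the Task struct, the priorities, the in-memory queue) is a guess at the shape of the idea, not MSYS's actual API, and the real thing also has to persist tasks and retry them.

    /* Toy version of a single prioritized work queue: because every
     * feature's deferred work goes through one place, a high-priority task
     * can jump ahead of a low-priority one. All names are invented. */
    #include <stdio.h>

    typedef struct {
        const char *name;
        int priority;              /* higher number = more urgent */
        void (*run)(void);
    } Task;

    static void load_message_list(void) { puts("loading message list"); }
    static void mark_old_read(void)     { puts("syncing old read receipt"); }

    #define MAX_TASKS 16
    static Task queue[MAX_TASKS];
    static int queue_len;

    static void enqueue(Task t)
    {
        if (queue_len < MAX_TASKS)
            queue[queue_len++] = t;
    }

    /* Pop the highest-priority task so UI-blocking work drains first. */
    static const Task *pop_next(void)
    {
        static Task out;
        int best = 0, i;
        if (queue_len == 0)
            return NULL;
        for (i = 1; i < queue_len; i++)
            if (queue[i].priority > queue[best].priority)
                best = i;
        out = queue[best];
        queue[best] = queue[--queue_len];
        return &out;
    }

    int main(void)
    {
        const Task *t;
        enqueue((Task){"mark_old_read", 1, mark_old_read});
        enqueue((Task){"load_message_list", 10, load_message_list});
        while ((t = pop_next()) != NULL)
            t->run();               /* message list loads first */
        return 0;
    }

The "global view" they mention seems to be the payoff: priorities can be compared across every feature's work instead of each feature scheduling its own, which is also what makes the cross-feature performance tracking and regression spotting in the next quote possible.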