You'll rarely get the chance to write something from scratch; most code bases are already in place with their structure.<p>But at the heart of it all is always a database. The rest gets built because databases aren't powerful enough for every use case.<p>Let me explain.<p>On the write path, you mostly receive data (in some format: JSON, HTML forms), validate it, cross-reference it against the database and then write it to different tables. On the read path, you do the reverse: check some access rights, combine different pieces, assemble them into a response.<p>But because reading from the database over and over is expensive, you end up adding an in-memory cache such as Redis, Memcached, Varnish etc. That's the first optimisation and layer of complexity.<p>Reads are not only plain lookups either; sometimes users need to search, and databases are not very good at that, or at least it gets very expensive. So now you pair the database with an Information Retrieval system (Elasticsearch, Solr etc), and you need to keep that system in sync whenever something gets written, updated or deleted, and of course you search in that store. That's the next layer of complexity.<p>Then, some read operations are not search but involve lots of assembling, computing, cross-referencing and aggregating of data. These can take minutes or hours, so you have to queue them and process them in the background. That requires a queue (Redis, RabbitMQ, ActiveMQ, SQS etc) and a software layer on top (Sidekiq, Celery etc), so there's that.<p>Lastly, if users upload data that gets accessed by the wider public (such as images), you might need an object store (S3) and a CDN in place, plus the logic to update the content in the CDN.<p>That's about 90% of the backend work in 90% of the shops. Then there are microservices, but those are totally unnecessary in most cases.
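<p>The write path plus its layers can be sketched in a few lines. This is a minimal toy sketch, not a real implementation: plain dicts and a queue.Queue stand in for Postgres, Redis, Elasticsearch and a job queue, and names like save_article are made up for illustration.<p>

```python
import queue

database = {}              # stands in for the relational database
cache = {}                 # stands in for Redis / Memcached
search_index = {}          # stands in for Elasticsearch / Solr
job_queue = queue.Queue()  # stands in for RabbitMQ/SQS + Sidekiq/Celery

def save_article(article_id: str, payload: dict) -> dict:
    # 1. Validate the incoming data (here: a required-field check).
    if "title" not in payload:
        raise ValueError("title is required")
    # 2. Write to the primary database.
    database[article_id] = payload
    # 3. Invalidate the cached copy so the next read is fresh.
    cache.pop(article_id, None)
    # 4. Keep the search index in sync with the write.
    search_index[article_id] = payload["title"].lower()
    # 5. Enqueue slow work (aggregation, notifications) for a background worker.
    job_queue.put(("recompute_stats", article_id))
    return payload

def read_article(article_id: str) -> dict:
    # Serve from the cache when possible; otherwise hit the database and cache it.
    if article_id not in cache:
        cache[article_id] = database[article_id]
    return cache[article_id]

def search(term: str) -> list:
    # Search hits the IR system, not the database.
    return [aid for aid, text in search_index.items() if term.lower() in text]
```

<p>The point of the sketch is step ordering: every write has to touch the database, the cache and the search index, and that fan-out is exactly where the sync bugs described below come from.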
The only end result of them is that a single developer cannot run the whole application on their machine without pulling in some 50+ half-a-GB images, while still requiring access to the company's staging environment (through VPN if working remotely), and everything becomes extremely complicated to debug for no reason.<p>Typical tasks<p>-------------<p>Apart from implementing new features, which entails lots of debate about how to structure the data and how each system gets updated, there are bugs from time to time. The first type of bug is when the PM forgot to specify something, the development team misinterpreted something that was specified in the requirements, or the whole business unit (development team + PM) didn't fully understand a particular use case. IMHO, in good teams I have not seen that kind of bug frequently, because PMs know exactly what they want and get questioned a lot by the development team, so usually what gets delivered is what was required.<p>The other kind of bug to expect comes from the many moving parts: some might not be in sync with the others, e.g. the Information Retrieval system is missing something it should have (or has something it should not), or some background task gets executed multiple times, etc.<p>Last note: the larger the code base, the slower the development pace. A few years down the road, as people come and go, rarely anyone has full insight into how the whole system works.<p>EDIT: Typos + nature of work
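<p>The "background task executed multiple times" bug is typically mitigated by making tasks idempotent: record which task IDs have already run and skip repeats. A minimal sketch under stated assumptions: an in-memory set stands in for a persistent dedup store (a database table or Redis key), and process_upload is a hypothetical task.<p>

```python
processed = set()  # stands in for a persistent dedup store

def run_once(task_id: str, task) -> bool:
    """Run task only if task_id has not been processed before."""
    if task_id in processed:
        return False       # duplicate delivery from the queue: do nothing
    task()
    processed.add(task_id)
    return True

counter = {"runs": 0}

def process_upload():
    counter["runs"] += 1

run_once("upload-42", process_upload)
run_once("upload-42", process_upload)  # re-delivered by the queue; skipped
```

<p>In a real system the "check then record" step has to be atomic (e.g. a unique constraint or SETNX), since two workers can race on the same task ID.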