As a data engineer, I think blaming Excel for this is the wrong framing. The real problem is with the entire process, and specifically the lack of testing.

Excel is great for many use cases, especially if you need people to enter data somewhere. Its UI is unparalleled at quickly giving users something they can understand, mess around with, and verify. A very common follow-on need is to pull the data from a bunch of Excel files into one main repository (a data warehouse). That repository can itself be an Excel file, though more commonly it's a database.

But there are always problems with this process! There can be missing data, there can be weird conversions because the program/language you're using to parse the data and load it into the database reads things differently than Excel intended, there can be weird database issues that cause data loss, etc.

Complex systems *always, always* have bugs.

It is the job of a data engineering team to, among other things, test these systems thoroughly and put checks in place that guard against data loss. It is pretty common, for example, to count the records going into a pipeline and the records that come out, and make sure nothing was lost along the way (a rough sketch of that kind of check is at the end of this comment).

Anyone can make a mistake. Any team, no matter how good, especially when rushed, can take shortcuts, use the wrong technology because it's expedient, or simply have bugs. It is the job of the team, and of project management in general, to do all the manual and automated testing necessary to make sure those mistakes are caught.

The real lesson isn't "Excel is bad". It's not. It's an amazing tool. The real lessons are "data engineering is hard and requires a lot of resources" and "all systems have bugs - testing is mandatory for any critical system".
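
To make the reconciliation idea concrete, here's a minimal sketch in Python using pandas and SQLAlchemy. The file paths, table name, and connection string are all hypothetical, and a real pipeline would reconcile per file and per batch rather than one global count, but the shape of the check is the same:

    # Hypothetical example: load a batch of Excel files into a warehouse table
    # and verify that no rows were lost between input and output.
    import glob

    import pandas as pd
    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///warehouse.db")   # placeholder for the real warehouse
    source_files = glob.glob("incoming/*.xlsx")        # placeholder input directory

    frames = [pd.read_excel(path) for path in source_files]
    rows_in = sum(len(df) for df in frames)

    # Load everything into one table (replace, so the count below is unambiguous).
    pd.concat(frames, ignore_index=True).to_sql(
        "case_reports", engine, if_exists="replace", index=False
    )

    # Count what actually landed in the database and compare to what we read.
    with engine.connect() as conn:
        rows_out = conn.execute(text("SELECT count(*) FROM case_reports")).scalar()

    if rows_in != rows_out:
        raise RuntimeError(f"Data loss: read {rows_in} rows but stored {rows_out}")
    print(f"Reconciliation OK: {rows_in} rows in, {rows_out} rows out")

It's a crude check, and it won't catch rows that were silently truncated or mangled rather than dropped, but even this much would flag an entire file's worth of missing data before it reaches a report.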