Ultimately, programming languages lie on a few spectra. The extremes are annoying, but lack of access to them sometimes is too.<p>The big problem with the extremes, especially the lower one, is the amount of discipline they demand from the user. If your team has anyone who ignores this, you get garbage and bugs, which become hard to fix: either the language's structure constrains you, or the problematic behaviors are depended upon transparently and are thus very hard to factor out.<p>Low-level languages make it easier to commit the latter mistake. In higher-level languages, this usually rears its ugly head as bad design, where you have to replace big chunks of code at the same time, sometimes leaving compatibility glue behind.
You cannot do that in some of the higher-level "discipline" languages, or you have to hack it in nasty ways (hey Java and C# with reflection, C and C++ even with macros; try doing it in Haskell), while in something unstructured you're up a creek without a paddle...<p>On the upside, the DIY nature makes it harder to produce a bad abstraction, or indeed any abstraction, as trying to write anything you do not understand ends poorly.
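For a sense of what those nasty hacks look like in Java, here is a minimal sketch (the Counter class and its field are hypothetical, not from any real library): reflection lets you reach past a private field that the defined interface never exposes.

    import java.lang.reflect.Field;

    // A class whose internals were never meant to be touched from outside.
    class Counter {
        private int count = 0;
        public void increment() { count++; }
    }

    public class ReflectionHack {
        public static void main(String[] args) throws Exception {
            Counter c = new Counter();
            c.increment();

            // The "nasty way": bypass the public interface via reflection.
            Field f = Counter.class.getDeclaredField("count");
            f.setAccessible(true); // can fail under the module system or a security manager
            f.setInt(c, 42);       // silently overwrite private state

            System.out.println(f.getInt(c)); // prints 42, not 1
        }
    }

It works, but it welds your code to an internal detail the class author is free to rename in the next release.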
The problem, then, with getting the debugger in is that it can let you code with an incomplete understanding and depend on things that are not meant to be depended upon. Everything becomes an API. That accidental delay in revision x of the hardware? You can't get rid of it.
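The same thing happens in software; as an illustrative sketch (plain Java, nothing to do with the hardware case), HashMap makes no promise about iteration order, yet code written while eyeballing the debugger's output easily ends up relying on whatever order happened to appear:

    import java.util.HashMap;
    import java.util.Map;

    public class AccidentalApi {
        public static void main(String[] args) {
            Map<String, Integer> scores = new HashMap<>();
            scores.put("alice", 3);
            scores.put("bob", 5);
            scores.put("carol", 1);

            // HashMap's iteration order is explicitly unspecified; it can change
            // with capacity, JDK version, or the keys themselves. Any output or
            // test that bakes in today's order depends on an accident, not an API.
            for (Map.Entry<String, Integer> e : scores.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }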
Used a slower bus in the past? Oops.<p>Defined interfaces are important too, which is why VHDL and Verilog exist even in the hardware world - and even they are on the soft side of "defined".