Is there a succinct summary of what one gains from this being ‘functional’? I find the succinctness of regular awk to be a good advantage, and it feels like some of that comes from it being non-functional.<p>When I think about how I use awk, I think it’s mostly something like:<p><pre><code> awk '!a[$2]++' # first occurrence of each value in the second field
</code></pre>
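For concreteness, on some made-up input this keeps the first line seen for each distinct value of the second field:<p><pre><code> $ printf 'a x 1\nb x 2\nc y 3\n' | awk '!a[$2]++'
  a x 1
  c y 3
</code></pre>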
Or<p><pre><code> awk '{a[$2]+=$3} END {for(x in a) print x, a[x]}' # sum the third field grouped by the second
</code></pre>
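Again on made-up input, with a key in the second field and an amount in the third (the END loop's output order is unspecified):<p><pre><code> $ printf 'r1 a 1\nr2 a 2\nr3 b 3\n' | awk '{a[$2]+=$3} END {for(x in a) print x, a[x]}'
  a 3
  b 3
</code></pre>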
Or just as an advanced version of cut. A fourth example is something that is annoying to do in a streaming way but easy with a shell pipeline: compute a moving average of the second field, grouped by the third field, over a span of size 20 (backwards) in the first field.<p><pre><code> awk '{ print $1, $3, 1, $2; print $1+20, $3, -1, -$2 }' | sort -n | awk '{ a[$2]+=$3; b[$2]+=$4; if (a[$2]) print $1, $2, b[$2]/a[$2] }'
</code></pre>
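As a sanity check, on hypothetical time value group rows the pipeline prints a running windowed mean per group (the if (a[$2]) guard just skips the print once a group's window has emptied, since awk treats division by zero as fatal):<p><pre><code> $ printf '1 10 g\n2 20 g\n3 30 g\n' |
      awk '{ print $1, $3, 1, $2; print $1+20, $3, -1, -$2 }' | sort -n |
      awk '{ a[$2]+=$3; b[$2]+=$4; if (a[$2]) print $1, $2, b[$2]/a[$2] }'
  1 g 10
  2 g 15
  3 g 20
  21 g 25
  22 g 30
</code></pre>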
The above all feel somewhat functional as computations – the first is a folding filter, the second a fold, the third a map, and the fourth is a folding concat map if done online, or a concat map followed by a folding map as written.<p>The awk features that feel ‘non-functional’ to me are less the mutation and more operations like next, or the lack of compositionality: one can’t write an awk program that is, in some sense, made of several awk programs (i.e. sets of pattern–expr rules) joined together. That compositionality is, in my opinion, the main advantage of the ‘functional’ jq, which feels somewhat awk-adjacent. Is there some way to get composition of ja programs without falling back to byte streams in between?
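To make the contrast concrete (my own sketch, with made-up file and field names, nothing from the article): composing two awk rule sets today means two processes joined by a byte-stream pipe, with fields re-split in the middle, whereas jq lets the corresponding filters be named and composed inside one program, passing structured values straight through:<p><pre><code> # awk: the only way to join two rule sets is a pipe between two processes
  awk '!seen[$2]++' data | awk '{sum[$2]+=$3} END {for (k in sum) print k, sum[k]}'

  # jq: roughly the same two steps as named filters composed with |,
  # assuming the input is a JSON array of objects with .key, .group and .value
  jq 'def dedup: unique_by(.key);
      def totals: group_by(.group) | map({group: .[0].group, total: (map(.value) | add)});
      dedup | totals' data.json
</code></pre>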