"Algorithmic Decision System" is not a good term. Some of the systems that are described this way do not make decisions, strictly speaking. For example, facial recognition systems don't make decisions- they make <i>identifications</i>. They are classifiers, yes? Planners, game-playing algorithms, decision trees and decision lists, etc, those are systems that are commonly thought of as making "decisions"- but those are very rarely the subject of scrutiny of AI systems these days.<p>Take for instance a system that is used to determine whether a person is in risk of recidivism. The system will cough up some number, probably a float from 0 to 1. The number will be _interpreted_ as a probability that the person will recidivate. Then, based on this _interpetation_ a decision will be made by the person or persons using the system, whether to treat the person as having a high risk of recidivism or not. The system hasn't decided anything at that point- it's the person using the system that has made a decision.<p>The matter is complicated somewhat by the existence of systems that <i>incorporate</i> AI algorithms in a more general automated decision process. For example, self-driving cars use image recognition algos to identify objects in their path but navigation decisions are not taken by the image recognition algos! However I'd wager that those kinds of integrated systems are not what most people think of when they speak of "algorithms" making "decisions". But I may well be wrong.