I came across a Twitter thread (linked in the comments) showing some interesting results from trying to recolorize historical photos that had been converted to black and white. The recolorized photos are a lot drabber than the originals (example in the comments), and the thread author argues that this gives us a skewed view of the past, making it look more boring than it actually was.

Some related questions that I think could make for good discussion here:

- Should expert opinion be consulted more in the machine learning process? If so, where? (Perhaps setting aside expert systems.)

- Is there too much faith that a result from an ML model is the "right" result? (A phenomenon that maybe isn't specific to ML, but a product of human tendencies?)

- Do ML practitioners have a responsibility to clearly communicate to the general public the limitations of, and degree of confidence in, these systems?

- Am I reading too much into this? Is this colorization model just a fun model to play with, and are the conclusions of the Twitter thread too speculative or conjectural?

- Is this colorization issue just another form of bias that needs to be ironed out?

- The thread concludes by saying that colorization should be left to experts who can use context to pick accurate colors. I think that's too extreme, and that ML systems can incorporate expertise during training or afterwards during evaluation. Are there jobs/problems that ML methods could be applied to but that should be left to experts (some considerations might be safety, privacy, ethics, etc.)?

I know a lot of these questions ultimately boil down to statistics and their interpretation, so I'm not sure exactly where the discussion could/should/will lead, but I'm looking forward to hearing your opinions!
If we're talking about the specific example: DeepAI's API isn't perfect (as the Twitter thread demonstrates), and that's about it.

> Am I reading too much into this

Kind of. If you're trying to generalize to all ML problems from this one example, the discussion is too broad to say anything meaningful. Try applying your questions to the specific example and you'll see how little sense they make:

- Consulting experts: who would the expert be when trying to guess the color of a 19th-century carpet?

- Faith in the results: does anyone care about the specific results in this case? It's a cool API (a sketch of calling it is at the end of this comment), but the end result is purely aesthetic; it's not like something important is happening in the world based on the results.

- Do ML practitioners have a responsibility to communicate to the general public? No. The people writing the ML API have a responsibility to their customers and to the internal product stakeholders in the company. The general public has nothing to do with any of this.

And then there are hundreds of different configurations of [ML application + person creating it + company they work for + user + real-world impact] for which the answers will be completely different.
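For anyone who wants to poke at the specific example themselves, here's a minimal sketch of how you might call a colorization endpoint like DeepAI's from Python. The endpoint URL, the "output_url" response field, the filename, and the API key are my assumptions (based on how the public API is usually documented), so check the current docs before relying on any of it:

    # Minimal sketch -- assumed endpoint and response shape, verify against DeepAI's docs.
    import requests

    resp = requests.post(
        "https://api.deepai.org/api/colorizer",       # assumed colorization endpoint
        files={"image": open("bw_photo.jpg", "rb")},  # hypothetical local black-and-white photo
        headers={"api-key": "YOUR_API_KEY"},          # placeholder API key
    )
    resp.raise_for_status()

    # The response is JSON; "output_url" is where the colorized image usually ends up.
    print(resp.json()["output_url"])

Comparing that output side by side with the original color photo is basically the whole experiment the thread is running.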
My take.

I have a rug in the same style as that rug, and its colors are pretty close to the reconstructed colors, even if the reconstruction is "a little drab." I think there are many copies of that rug with exactly the same colors, but I don't think you could go to a store and find a rug with the same pattern in different colors.

The hue of the garment is completely wrong, but it might be impossible to tell what the true colors were, because I think you could go to a store and find a very similar garment in different hues.

In the case of image recoloring, the "expert" might well be the person supervising the inference process. For many purposes the endpoint might simply be that that person likes the result. If it's really important to you that the hue is right, you need to have an "expert" do research the way people would if they were colorizing a movie.