Java is interesting in that variance is specified at the use site, i.e., at a reference to the type. Contrast that with, say, C#, where variance is specified at a type's declaration site.

In C#, a type `Function<in T, out R>` is always contravariant in `T` and covariant in `R`. An assignment target of `Function<String, Object>` can always accept a `Function<Object, String>`, because the definition of `Function<,>` says that's okay.

In Java, there is no way to impose variance rules at a type's declaration site. An assignment target of `Function<String, Object>` can only (barring unchecked cheats) ever refer to a `Function<String, Object>`. However, I can declare a reference of type `Function<? super String, ? extends Object>`, and suddenly it can be assigned a `Function<Object, String>`.

I have always found this difference interesting. C# is much stricter: all members of an interface or delegate declared with generic variance must obey that variance, or the type will not compile. `IList<T>`, for instance, can only be invariant in `T` because `T` appears both as an input (e.g., in `Add(T)`) and as an output (e.g., `T this[int]`). In Java, enforcement happens at the use site. If I declare a `List<? extends String>`, the compiler will happily let me call methods for which `T` is an *output* (e.g., `get(int)`), but it prohibits me from calling methods for which `T` is an *input* (e.g., `add(T)`).

An interesting consequence of this decision is that Java allows for a kind of 'bivariance': I can declare a variable of type `List<?>` and assign any instantiation of `List<>` to it. Moreover, I can still invoke members that do not mention `T` in their signature (e.g., I can call `int size()`). In C#, this is simply not possible. To work around it, some generic interfaces have non-generic counterparts; for example, every `IEnumerable<>` instantiation is also an instance of the *non-generic* `IEnumerable` type.

I am curious which approach is more common across other programming languages.
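To make the Java side concrete, here is a minimal sketch of the use-site rules described above. The class and variable names are just illustrative; everything else is standard `java.util` / `java.util.function` (Java 9+ for `List.of`), and the commented-out lines are the ones the compiler rejects.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    public class UseSiteVariance {
        public static void main(String[] args) {
            Function<Object, String> describe = Object::toString;

            // Without wildcards, only the exact instantiation is assignable:
            // Function<String, Object> strict = describe;               // does not compile

            // Use-site wildcards opt in to variance at this one reference:
            Function<? super String, ? extends Object> flexible = describe;
            Object result = flexible.apply("hello");

            // ? extends T: T may only flow *out* through this reference.
            List<? extends String> producer = new ArrayList<String>();
            String first = producer.isEmpty() ? null : producer.get(0);  // get(int) is allowed
            // producer.add("x");                                         // add(T) does not compile

            // Unbounded wildcard ('bivariance'): any List<> is assignable,
            // and members that never mention T remain callable.
            List<?> anything = List.of(1, 2, 3);
            anything = List.of("a", "b");
            int size = anything.size();                                   // fine
            // anything.add("c");                                          // does not compile

            System.out.println(result + " / " + first + " / " + size);
        }
    }

Note that `flexible` refers to the same object as `describe`; the wildcard only changes what the compiler lets you do through that particular reference.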