Synonymy is one of the simplest tasks relevant to making computers understand language, despite how useful it is. The WordNet database (<a href="http://wordnet.princeton.edu/" rel="nofollow">http://wordnet.princeton.edu/</a>) actually has a wealth of synonymy and other word-word relations (probably significantly more extensive than Google's). In the future I can imagine that companies like Google will start to use syntactically related phenomena (e.g. syntactic "synonymy" between sentences like "The dog bit John" and "John was bitten by the dog") in place of simple word-word relations, and probably eventually even tackle things like answering questions based on the semantic content of the query plus the websites themselves. There's actually some interesting work by Phil Resnik (<a href="http://www.umiacs.umd.edu/~resnik/" rel="nofollow">http://www.umiacs.umd.edu/~resnik/</a>) here at UMD on "sentiment analysis", whereby you can essentially detect spin/bias in a document by mapping grammatical structures to semantic features and then analyzing those. Quite an interesting future. :D
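To give a flavor of what I mean by syntactic "synonymy": here's a deliberately toy Python sketch that normalizes a simple "X was VERBed by Y" passive back into active form, so the two sentence variants can be treated as equivalent. Everything here (the regex, the tiny participle table) is a made-up illustration; a real system would use an actual parser and a morphological lexicon, not string patterns.

```python
import re

# Toy lookup from past participle to simple past, just for this sketch.
# (A real system would use a morphological lexicon, not a hand-made table.)
PARTICIPLES = {"bitten": "bit", "seen": "saw", "eaten": "ate"}

def normalize_passive(sentence):
    """Rewrite a simple 'X was <participle> by Y' passive as active 'Y <past> x'.

    Purely illustrative: real syntactic analysis needs a parser, not a regex.
    """
    m = re.fullmatch(r"(?i)(.+?) was (\w+) by (.+?)\.?", sentence.strip())
    if m and m.group(2).lower() in PARTICIPLES:
        patient, participle, agent = m.groups()
        # Emit the active-voice paraphrase, lowercasing the moved patient.
        return f"{agent} {PARTICIPLES[participle.lower()]} {patient.lower()}."
    return sentence  # leave anything we don't recognize alone

print(normalize_passive("John was bitten by the dog"))  # -> "the dog bit john."
```

Even this crude version shows the idea: once both surface forms map to one canonical structure, a search engine could match a query phrased one way against a page phrased the other.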