There is a slew of fascinating recent advances in CS, and I discover more with every passing semester, but for brevity I will pick three things that have been occupying my mindspace of late.<p>1. It seems that lazy functional programming languages (like Haskell) may provide a basis for a serious improvement in the robustness of natural language processing. A survey paper: [<a href="http://cs.uwindsor.ca/~richard/PUBLICATIONS/NLI_LFP_SURVEY_DRAFT.pdf" rel="nofollow">http://cs.uwindsor.ca/~richard/PUBLICATIONS/NLI_LFP_SURVEY_D...</a>]<p>2. Semi-Human Instinctive AI, a new dynamic, nondeterministic decision-making process, seems to be the new hotness in robotics/learning algorithms. In it, an agent is given a set of basic behaviors ("instincts") that it hones with both open and closed learning methods in a problem space. [<a href="http://en.wikipedia.org/wiki/Semi_Human_Instinctive_Artificial_Intelligence" rel="nofollow">http://en.wikipedia.org/wiki/Semi_Human_Instinctive_Artifici...</a>]<p>3. Anatoly Shalyto's automata-based programming, which uses finite state machines to describe program behavior, seems to have a lot of potential. It views programs through the lens of engineering control theory, which opens the door to powerful techniques from dynamical systems in mathematics (a small sketch follows below).
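To make item 3 concrete, here is a minimal sketch of the automata-based style in Python: the program's entire observable behavior is an explicit transition table over named states, and the event loop does nothing but look up (state, event) pairs in it. The turnstile is my own toy illustration, not an example taken from Shalyto's work.<p><pre><code>
# Automata-based programming in miniature: behavior lives in the table,
# not in scattered if/else logic. (Hypothetical turnstile example.)
TRANSITIONS = {
    ("locked",   "coin"): ("unlocked", "unlock the arm"),
    ("locked",   "push"): ("locked",   "refuse entry"),
    ("unlocked", "push"): ("locked",   "let one person through, relock"),
    ("unlocked", "coin"): ("unlocked", "return the coin"),
}

def run(events, state="locked"):
    """Drive the machine: every reaction is a pure function of (state, event)."""
    for event in events:
        state, action = TRANSITIONS[(state, event)]
        print(f"{event:>4} -> {state:<8} ({action})")
    return state

run(["coin", "push", "push", "coin"])
</code></pre>
The control-theory angle comes from exactly this separation: the state set and transition relation form an analyzable object, so properties like reachability can be checked on the table itself rather than on arbitrary code.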
In the academic world, the semantic web is pretty much taken for granted. Curiously, it appears that people in the real world have been saying for so long that the semantic web will never happen that they have failed to notice that it already has!<p>Look at this diagram: <a href="http://en.wikipedia.org/wiki/File:Linking-Open-Data-diagram_2008-03-31.png" rel="nofollow">http://en.wikipedia.org/wiki/File:Linking-Open-Data-diagram_...</a>
All these datasets have already been interlinked and are available for you to use. This is the linked open data approach (<a href="http://en.wikipedia.org/wiki/Linked_Data" rel="nofollow">http://en.wikipedia.org/wiki/Linked_Data</a>). The opposite approach is to pull data from a single, already-interlinked source through a unified API, as exemplified by Freebase (<a href="http://freebase.com" rel="nofollow">http://freebase.com</a>), which is more straightforward but perhaps offers less control. I've found these resources invaluable in more than one project I'm working on, and every hacker should at least keep abreast of what is available so they can use it when they need to.
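To show what "available for you to use" means in practice, here is a minimal sketch of pulling data out of one node of that diagram (DBpedia) over SPARQL, using only the Python standard library. The endpoint address, the query-string parameters, and the dbo:ProgrammingLanguage class are my assumptions about the public DBpedia service and may need adjusting; treat this as the shape of the interaction rather than a tested recipe.<p><pre><code>
import json
import urllib.parse
import urllib.request

# Public DBpedia SPARQL endpoint (assumed address).
ENDPOINT = "http://dbpedia.org/sparql"

QUERY = """
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?name WHERE {
  ?lang a dbo:ProgrammingLanguage ;
        rdfs:label ?name .
  FILTER (lang(?name) = "en")
} LIMIT 10
"""

params = urllib.parse.urlencode({
    "query": QUERY,
    "format": "application/sparql-results+json",
})
with urllib.request.urlopen(ENDPOINT + "?" + params) as resp:
    results = json.load(resp)

# Standard SPARQL JSON results layout: results -> bindings -> variable -> value.
for row in results["results"]["bindings"]:
    print(row["name"]["value"])
</code></pre>
The same query shape works against any endpoint in the linked-data cloud; only the prefixes and class names change, which is much of the point of the approach.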
Yep. On a related note, I was recently wondering whether there is any website that lists currently <i>hot</i> and <i>buzz-worthy</i> research papers in CS or other fields. I know Faculty of 1000 does this for biology, but is there anything similar elsewhere?
Well, if you're interested in machine learning, NIPS was last week.<p><a href="http://books.nips.cc/nips21.html" rel="nofollow">http://books.nips.cc/nips21.html</a><p>There were several papers in applied areas such as text classification, breaking audio CAPTCHAs, and even brain-machine interfacing. However, even the theoretical papers usually come with examples (e.g. image classification) that show promising results. If you're doing any learning task, that is definitely the place to find the state of the art.
I'm more interested in using concepts from the academic computer science of 50 years ago. I'm not against good new ideas, but it's not as if we've run out of good old ones yet.
In my opinion, it's obvious what the next big thing is going to be: image recognition, accelerometer integration, multi-touch, and so on. Basically, we're looking at the death of the mouse and keyboard a few years down the line.<p>It's starting now, and it's starting the same way the web started: working poorly, very fragmented, cool but not yet practical. That will change soon.
The Blue Waters project (<a href="http://www.ncsa.uiuc.edu/BlueWaters/" rel="nofollow">http://www.ncsa.uiuc.edu/BlueWaters/</a>) is happening in the building across the street from me; I can see it right out the window of this CS room. It's one of those "off limits" things, although you really need to have a use for it first.
I'm a part of the XMT project @ UMD: <a href="http://www.umiacs.umd.edu/~vishkin/XMT/index.shtml" rel="nofollow">http://www.umiacs.umd.edu/~vishkin/XMT/index.shtml</a><p>Admittedly, the <i>concepts</i> involved are not new, since PRAM theory (<a href="http://en.wikipedia.org/wiki/Parallel_Random_Access_Machine" rel="nofollow">http://en.wikipedia.org/wiki/Parallel_Random_Access_Machine</a>) dates back to the 1970s. However, this project marks the first successful commitment of PRAM theory to silicon.
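For anyone who hasn't met the model: a PRAM is an idealized shared memory touched by many processors in synchronous steps, so an n-element reduction takes O(log n) steps instead of n. The Python snippet below is my own sequential simulation of that schedule, purely to illustrate the idea; it is not XMTC code and has nothing to do with the actual XMT toolchain.<p><pre><code>
# Simulate a PRAM-style parallel sum: each pass of the while loop is one
# synchronous "time step"; on a real PRAM (or XMT-style) machine, every
# iteration of the inner for loop would run on its own virtual processor
# within that step.
def pram_sum(values):
    data = list(values)
    stride = 1
    while stride < len(data):            # O(log n) steps
        for i in range(0, len(data) - stride, 2 * stride):
            data[i] += data[i + stride]  # these all happen "at once"
        stride *= 2
    return data[0] if data else 0

assert pram_sum(range(1, 9)) == 36       # 1 + 2 + ... + 8
</code></pre>
As I understand it, the XMT bet is that hardware can make this kind of fine-grained, step-synchronous parallelism cheap enough to program against directly.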