Ask HN: Interesting recent developments in academic computer science.
37 points by rplevy on Dec 19, 2008 | 32 comments
Curious to learn about any concepts or techniques developed recently in academia that you think should be used more widely in applied settings.


There are a slew of fascinating recent advances in CS and I discover more with every passing semester, but for brevity I will pick three things that have been occupying my mindspace as of late.

1. It seems that lazy functional programming languages (like Haskell) may provide a basis for a serious improvement in more robust natural language processing. A survey paper: [http://cs.uwindsor.ca/~richard/PUBLICATIONS/NLI_LFP_SURVEY_D...]

2. Semi-Human Instinctive AI, a new dynamic, nondeterministic decision-making process, seems to be the new hotness in robotics/learning algorithms. In it, an agent is given a set of basic behaviors ("instincts") that it hones with both open and closed learning methods in a problem space. [http://en.wikipedia.org/wiki/Semi_Human_Instinctive_Artifici...]

3. Anatoly Shalyto's Automata-based programming, using finite state machines to describe program behavior, seems to have a lot of potential. It attempts to view programs from the context of engineering control theory, which opens the door to the use of powerful techniques from dynamical systems in mathematics.
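
If the automata-based style is unfamiliar, here's a minimal sketch of the idea in Python (toy state and event names of my own, not Shalyto's notation): the program's entire behavior is an explicit transition table, which is what makes it tractable for the kind of control-theoretic analysis mentioned above.

    # A minimal sketch of the automata-based style: program behavior is an
    # explicit finite state machine driven by a transition table, rather
    # than ad hoc flags and nested conditionals.

    # States and events for a toy turnstile controller (illustrative names).
    TRANSITIONS = {
        ("locked",   "coin"): "unlocked",
        ("locked",   "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    def run(events, state="locked"):
        """Step the machine through a sequence of events, yielding each state."""
        for event in events:
            state = TRANSITIONS[(state, event)]
            yield state

    print(list(run(["coin", "push", "push", "coin"])))
    # -> ['unlocked', 'locked', 'locked', 'unlocked']

The point is that there is no hidden state scattered across boolean flags; the table *is* the specification, so it can be inspected, checked, or drawn as a state diagram directly.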


Number 3 seems interesting...

If it catches on, architecture classes might get higher precedence in curricula. Moreover, it might unify (to some extent) the theoretical background around hardware and software.


In the academic world, the semantic web is pretty much taken for granted. Curiously, it appears that people in the real world have been saying for so long that the semantic web will never happen that they have failed to notice that it has already happened!

Look at this diagram: http://en.wikipedia.org/wiki/File:Linking-Open-Data-diagram_... All these datasets have already been interlinked and are available for you to use. This is the linked open data approach (http://en.wikipedia.org/wiki/Linked_Data). The opposite approach is to use data from a single already-interlinked source through a unified API, exemplified by Freebase (http://freebase.com), which is more straightforward but perhaps offers less control. I've found these resources invaluable in more than one project that I'm working on, and every hacker should at least keep abreast of what is available so they can use it when they need to.
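
To make "already interlinked and available for you to use" concrete, here's a minimal sketch of pulling data out over the standard SPARQL protocol. I'm assuming DBpedia's public endpoint (http://dbpedia.org/sparql) is up and accepts the usual query/format parameters; any of the endpoints in that diagram should work the same way.

    # A minimal sketch of querying one of the linked open data sets over
    # the SPARQL protocol (assumes DBpedia's public endpoint is reachable
    # and accepts the standard query/format parameters).
    import json
    import urllib.parse
    import urllib.request

    query = """
    SELECT ?label WHERE {
      <http://dbpedia.org/resource/Semantic_Web>
          <http://www.w3.org/2000/01/rdf-schema#label> ?label .
      FILTER (lang(?label) = "en")
    } LIMIT 1
    """

    url = "http://dbpedia.org/sparql?" + urllib.parse.urlencode({
        "query": query,
        "format": "application/sparql-results+json",
    })
    with urllib.request.urlopen(url) as response:
        results = json.load(response)

    # Standard SPARQL JSON results: head / results / bindings.
    for row in results["results"]["bindings"]:
        print(row["label"]["value"])

Nothing exotic is going on: it's an HTTP GET and some JSON, which is what makes these datasets easy to drop into a project.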


You've linked to a PNG file and said "see! semantic web!"

I tried to dig into it, looking for some data to see what you are talking about, and I finally found a piece of RDF, real semantic web stuff: http://dbtune.org:3030/sparql/?query=describe%20%3Chttp://db...

Um, ok. Now what? This is a short XML file containing links, half of which are dead. The biggest problems with the SW are that no one has agreed on the labels, inputs, and outputs, and that there are no mechanisms for data preservation or trust.

How have those been solved now?

(edit) I'm not hating on the idea, btw. It just doesn't seem to be a technological problem. It's a social one. The second you find a way to get people to structure their data for fun & profit, the SW will blossom. And then it will be spammed. And then someone will find a way to index it and filter out the spam, and by then it will be something good, but quite different from what was intended.

I am genuinely curious to know what has changed in the last few years that academics now take SW for granted.


Fluidinfo seems to have an interesting handle on the problem.


Can you elaborate? All I see is that I have to listen to Robert Scoble talk for an hour about it, and a blog post explaining that it takes an hour to explain it.


In the academic world, the semantic web is pretty much taken for granted.

This is absolutely not the case. Among the Semantic Web community, the SW might be taken for granted (to some extent), but among the wider CS community, there is a lot of scepticism about whether the SW is feasible. Most of the people I know in the academic database research community dismiss the Semantic Web, for example.


I'm a skeptic. I don't believe in the idea that everyone will do their little piece and someday this magical thing will emerge. Deriving benefit depends on the use of logic, but the web is an illogical mess. The semantic web doesn't scratch any itch that I have. Sure, I would like something better than Google to bring my info to me, but right now I am finding collective smartness to be more relevant than the semantic web. It will have some local success, like Linda or CORBA, but in the end something else will be the next great answer.


Yep, on a related note, I was wondering recently if there is any website which lists currently hot and buzz-worthy research papers in CS or other fields. I know Faculty of 1000 does that for biology, but is there any other website?


I like http://hnr.dnsalias.net/wordpress/ . And just reading the papers that won awards at VLDB and SOSP.


I don't know about buzz-worthy, but here are a few buzzword-y ones: semantic web, ubiquitous computing, context-awareness, the study of social networks... feel free to add more.


I like www.hunch.net


Lambda the Ultimate, no?


I'm more interested in using interesting concepts in academic computer science from 50 years ago. I'm not against new good ideas, but it's not as if we've run out of old good ideas already.


I'm interested in both. I'm a fan of Common Lisp, but also of Clojure.


You mean AI. :-)


Well, if you're interested in machine learning, NIPS was last week.

http://books.nips.cc/nips21.html

There were several papers in applied areas like text classification, breaking audio CAPTCHAs, and even brain-machine interfacing. However, even the theoretical papers usually come with examples (e.g. image classification) that show optimistic results. If you're doing any learning task, that is definitely the place to find the state of the art.


In my opinion, it's obvious what the next big thing is going to be. Image recognition, accelerometer integration, multi-touch and so on. Basically, we're looking at the death of the mouse and keyboard a few years down the line.

It's starting now, and it's starting the same way the web started - working poorly, very fragmented, cool but not yet practical. This will change soon.


Given that multitouch has been around since the '80s or so (at least the technology), I don't think we're going to see the death of the keyboard. It's just too good a method of input.


I don't think it's that great... Most importantly, keyboards require a flat surface to be really good. When on the move, keyboards really suck. There's plenty of room for improvement in portable input devices. Even something with a lot less expressiveness, but that, for instance, can be used with your hand in your pocket (perhaps while wearing a HUD of some sort) would be a huge improvement.


It depends on what you're trying to do. Most phones I've used can be operated with one hand, blindly, to do common tasks (raise/lower volume, silence a call).

The biggest problem, though, is that modern portable devices /need/ the expressiveness of a keyboard, which is why devices like the BlackBerry, the Sidekick, etc. took off.

The touchscreen changes some of that, but, from what I hear, it doesn't quite work as well.


Why do they 'need' it? If voice recognition were completely perfect, they would not need it.


The power of editing and revising printed text with a keyboard is not something that can be easily duplicated using voice. Consider trying to write code that way. It's easy to say words, but typing is faster when you need custom spellings and character-level editing. It seems awkward to have to edit text with speech (easier to type something like C-space down down C-s quote . right C-space C-k M-x end-of-buffer C-y than to say it). Maybe some combination of keyboard and voice would work. Voice is not ideal for a workplace situation, unless it is subvocal silent speech, which sounds like a technology that is almost ready.


For an interesting take on this, see Ghost in the Shell: Stand Alone Complex. Many of the cyborgs and secretary-bots have expandable spider fingers that type at lightning speed. People without augmentation can still use the keyboards, so it's device- and space-efficient; keyboards are dirt cheap and the keys can take a lot of pounding.


Which looks good on film, but in reality I rather suspect that a cyborg with a USB interface would be more efficient.


QWERTY is a more stable protocol. You'd have to do less rip and replace of hardware (read: body parts). You could even handle Dvorak with only a software upgrade ;-)


The Blue Waters project (http://www.ncsa.uiuc.edu/BlueWaters/) is being done in the building across the street from me; I can see it right out the window from this CS room. It's one of those "off limits" things, although you really need to have a use for it first.


Distributed computing will be the next huge thing. I can see it right through the window too.

Anyway, the Blue Waters project will be done by 2011. It will be the fastest computer then.


No, it won't be. There was a while when we had 200 MHz computers. Then we moved to 2 GHz computers. Looking at all the innovations that happened within that time period, very few were due to supercomputers getting faster.

Distributed computing just means computers getting faster. There is no killer app for this. Yes, you may say cancer research or flight simulations and so on, but those are not the next big things - not the way the web was.


1000 core 2GHz machines for the desktop are 5 years out. Parallel is the way forward.


I'm a part of the XMT project @ UMD http://www.umiacs.umd.edu/~vishkin/XMT/index.shtml

Admittedly, the concepts involved are old, since PRAM theory (http://en.wikipedia.org/wiki/Parallel_Random_Access_Machine) dates to the '70s. However, this project marks the first successful commitment of PRAM theory to silicon.
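
For a flavor of what PRAM-style algorithms look like, here's a sketch of the classic O(log n)-round parallel prefix sum, simulated sequentially in Python. XMT has its own programming model (XMTC), so this is only an illustration of the idea, not how you'd actually write it for the hardware.

    # A generic sketch of a classic PRAM-style algorithm: inclusive prefix
    # sums in O(log n) rounds, where every element would be updated by its
    # own processor in each round. (Sequential simulation only.)

    def prefix_sums(values):
        xs = list(values)
        n = len(xs)
        step = 1
        while step < n:
            # On a real PRAM, all of these updates happen in one parallel round.
            xs = [xs[i] + (xs[i - step] if i >= step else 0) for i in range(n)]
            step *= 2
        return xs

    print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))
    # -> [3, 4, 8, 9, 14, 23, 25, 31]

On a PRAM each pass through the list is one parallel round with a processor per element, so the whole computation takes about log n rounds instead of n sequential additions.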


You should certainly look into research related to Google's MapReduce and BigTable.
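
For anyone who hasn't read the papers, the appeal is how little you have to write. Here's a toy word count in the MapReduce shape (illustrative Python, not Google's actual API): you supply only the map and reduce functions, and the framework worries about distributing them and grouping the intermediate keys across machines.

    # A toy sketch of the MapReduce shape (not Google's actual API):
    # map each record to (key, value) pairs, group by key, then reduce
    # each group. The framework's job is to distribute these two phases.
    from collections import defaultdict

    def map_phase(document):
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(word, counts):
        return word, sum(counts)

    def mapreduce(documents):
        groups = defaultdict(list)
        for doc in documents:
            for key, value in map_phase(doc):
                groups[key].append(value)   # the "shuffle": group by key
        return dict(reduce_phase(k, v) for k, v in groups.items())

    print(mapreduce(["the cat sat", "the cat ran"]))
    # -> {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}

BigTable is the complementary piece on the storage side: a sparse, distributed sorted map indexed by row key, column key, and timestamp, which jobs like this can read from and write to.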



