Hacker News | zweben's comments

I'm one of the few non-programmers here, and I figured this would be a good place to ask: Right now, few applications are coded to utilize so many cores. Is this simply a matter of programmers transitioning to coding for multi-core computers, or are some types of software not good candidates for taking full advantage of so many cores?

I have an 8-core Mac Pro, and I was hoping to see its performance improve over time as software took better advantage of the hardware, but I don't see that happening yet.


Both. Not all tasks are equally parallelizable. Some, like the graphics rendering pipeline, data mining, and search indexing, can reap the benefits and already do. However, many tasks don't parallelize as easily: the programmer ends up waiting for one synchronous task after another due to dependencies. Even if you did run the tasks in parallel, one would need to block and wait until the other completed.

In some cases the programming languages are just ill suited to performing concurrent operations free of side effects. Programs written in those languages would have to be rewritten (not likely for many apps). Parallel Studio, by Intel, attempts to make it easier for programmers working in C++/Windows to identify such cases. Others have proposed and attempted to create a layer beneath the programming language that automatically detects code that can be safely executed concurrently without side effects. Still, there's no silver bullet at this point that will suddenly give significant gains without modifications to existing code.
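To make the distinction concrete, here's a minimal sketch in Python. The `work` function and its inputs are made up for illustration: the first pattern fans independent, side-effect-free tasks across cores, while the second is a dependency chain where each step must block on the previous one regardless of how many cores you have.

```python
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # An independent, side-effect-free task: parallelizes cleanly.
    return sum(i * i for i in range(n))

def pipeline(n):
    # Dependent steps: the second needs the first's result, so it
    # must wait for it no matter how many cores are available.
    a = work(n)
    return a + work(n // 2)

if __name__ == "__main__":
    inputs = [10_000, 20_000, 30_000, 40_000]
    with ProcessPoolExecutor() as pool:
        # Each input is handed to a separate process (one per core).
        parallel_results = list(pool.map(work, inputs))
    print(parallel_results)
```

The parallel version only helps because the calls to `work` don't depend on each other; rewriting `pipeline` the same way would gain nothing, since the pool would still serialize on the dependency.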


I have an 8-core HP Z800 with 24GB RAM.

For me it's not about improving the speed of end user applications.

My usage is simple, virtualisation.

I have 6 VMs currently fired up, 1 with Oracle installed, a couple of Linux boxes to run an ESB and the remainder run SharePoint on Windows Server.

I wanted each of those to perform pretty well and not be a dev bottleneck (waiting for stuff to happen), so I've assigned at least 1 core and at least 2GB RAM per VM (Oracle gets more of both). I try to balance my handing out of hardware resources to reduce the amount of scheduling the system has to perform between virtual machines... hence, I wanted as many cores as possible and a good chunk of RAM to go with it.

It was cheaper to buy a single high powered workstation than it was to buy 6 cheap servers. Another huge factor was that the running costs (power) and environment (heat generated in the front room + the cabling and physical space + volume in decibels) of a single workstation beat those of 6 cheap boxes.


This is a good question, and the answer is that it's both.

It requires a completely different approach to programming, so many programmers haven't made the leap. Additionally, some kinds of software problems seem to be inherently unparallelizable.
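Amdahl's law puts a rough number on that limit: if a fraction p of a program parallelizes and the rest is serial, the best speedup on n cores is 1 / ((1 - p) + p / n). A quick sketch (the fractions below are illustrative, not measured):

```python
def amdahl_speedup(p, n):
    # p: fraction of the work that can run in parallel (0..1)
    # n: number of cores
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, 8 cores give well under 8x:
print(round(amdahl_speedup(0.95, 8), 2))  # 5.93
# A half-serial program barely benefits at all:
print(round(amdahl_speedup(0.50, 8), 2))  # 1.78
```

This is why throwing more cores at existing software yields diminishing returns unless the serial portions are also reworked.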

Having said that, there is an advantage to a many-core machine right now: you can run many tasks (programs) simultaneously. Even if a program can only use one core, your OS should be smart enough to run it on its own core.

Of course then they might contend for other resources, like disk or video throughput.


One of the bigger traditional market segments for these high-end Apple desktop workstations is graphics and film processing, and the tools there have made some progress on parallelization. A bunch of Photoshop transforms and film-editing tools will happily eat up any cores you throw at them (although Photoshop has also been increasingly moving stuff to the GPU as well).


Multicore programming is expensive, and it violates some very common paradigms and libraries. It takes a lot of work to make a program multicore and actually gain performance.


This is probably because it's not shipping yet. They are still selling the older model in the store. It'll probably get a "Now Shipping" front page spot when the time comes.


Nope. I'm a web designer and I have some interest in programming, but all I know is a little Actionscript.

I come here for the high quality discussion on the articles I do understand, and I read the occasional programming story just to see if I can get anything out of it.


Have you noticed any desire to program more since you started coming here?


Oddly, I'd say it has the opposite effect to a certain extent. (Similarly, I'm pretty familiar with front-end web development but am most definitely not a hacker.)

StackOverflow produces concise and clear answers; Rails webcasts insist (sometimes not entirely convincingly) that everything is quick and easy. Hacker News is frequented far more by programmers with serious breadth and depth of interest, and far less by people looking for a quick fix or help with the learning curve. The resulting fondness for arcane languages and esoteric solutions can at times be overwhelming, even for the motivated novice.

That said, I think it's important even for non-tech people interested in the startup ecosystem to gain a basic understanding of how hackers think and the choices they face, which is my chief motivation for reading a lot of the programming articles posted here.


Not really. I've had a slight interest in programming for years, but I'm more interested in design.

I'd like to develop an iPhone game, but I don't think I have enough motivation or focus to make it up the steep learning curve. I made a Pong clone in Actionscript, that was as far as I got. The only project idea I have right now is pretty complex; I think I'd need a more modest goal to start with.


(I'm in a similar situation) I've learned bits of PHP and Java in the past, but I wouldn't call it being a programmer.

Thanks to HN, I've decided to properly learn to program.

I'm buying a book on Python (http://amzn.to/bUMPCP), and I'm taking C++ in the upcoming semester.


1) Learn Python with these two resources: a. This one is for absolute beginners: http://ocw.mit.edu/courses/electrical-engineering-and-comput....

b. Do these exercises for some extra oomph - http://code.google.com/edu/languages/google-python-class/exe.... There are some video lectures if required.

They're much better and more practical than any ole paperback.


Finishing the first link with .../electrical-engineering-and-computer-science/ works (although Google also shows a much longer link for a Python course at the same URL). http://code.google.com/edu/languages/google-python-class/ works for the second. Thanks.


Cool, hope your journey goes well.


I have been getting messages that say "Reddit is currently under heavy load. Please try again later." or something to that effect.

I'm pretty sure they are just having trouble dealing with the level of traffic they're getting.


Page 11? I only see two pages. What am I missing?


Sorry, it's the 1st page, under "2. Entire Agreement" (I was looking at the whole lawsuit - http://www.scribd.com/doc/34239119/Ceglia-v-Zuckerberg-compl... ).


This is actually just a couple of pages from the whole document. It's pages 11 and 12:

http://www.scribd.com/doc/34239119/Ceglia-v-Zuckerberg-compl...


I sort of think that if the keyboard were going to disappear completely any time soon (~10 years), it would've disappeared already. We have a lot of different technologies for text input these days, but I don't feel like any of them would be better than a keyboard for working at a desk, even if they worked perfectly.

-Touchscreens can eliminate the need for typing to interact with a computer, but people will always want to communicate with each other in writing. On a touchscreen only device, this means virtual keyboards. Those are a convenient tradeoff for portable devices, but they're not ideal, and when you have space for as much screen as you need and a keyboard, I can't think of a good reason to get rid of physical keys.

-100% accurate voice recognition would be nice for some things, but I think it'd still be too slow, tiring, disruptive, or not private enough for most uses.

-Silent speech recognition (http://en.wikipedia.org/wiki/Silent_speech_interface) is a really interesting option, I think. A device good enough at measuring tiny 'subvocal' muscle movements could seem very similar to mind reading, while also being less intimidating and possibly less invasive. NASA did some interesting work on this (http://www.nasa.gov/home/hqnews/2004/mar/HQ_04093_subvocal_s...).

-I think true mind reading devices also have potential in the long term, but I'd be highly surprised if we had anything practical in less than a decade.


If everything goes well, he won't have to teach you much at all. That's the exciting thing about the direction the iPad is bringing computing; it's very capable, but at the same time, there's not that much to learn.


My guess is they're going to announce they started putting a coating on new phones to fix the issue, and will offer free exchanges or cases to existing iPhone 4 customers.

Either that, or Jobs is going to come out on stage and demonstrate how to hold the phone the right way.


I've got a 2560x1600 monitor, and honestly, the 4k video looked worse to me than 1080p. Blocky all over the place. If they can't raise the bitrate on the 4k videos, there's really no point to it.


How are losses per unit sold relevant to anything? Overall profits and losses are what matter. I don't know how much Microsoft made off each one, so I'll be generous and guess $400 average revenue after subsidy payments.

1,000,000,000 - ($400 x 503) = $999,798,800 lost

1,000,000,000 - ($400 x 8810) = $996,476,000 lost

The difference is negligible.
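The figures above are easy to reproduce; note the $400 revenue per unit is the commenter's guess, not a reported number:

```python
# Back-of-envelope check of the loss figures above.
revenue_per_unit = 400          # assumed average revenue per unit
total_loss = 1_000_000_000      # the $1B figure from the comment

for units in (503, 8810):
    net_loss = total_loss - revenue_per_unit * units
    print(units, net_loss)
# 503  -> 999798800
# 8810 -> 996476000
```

Either way, unit revenue recovers less than 0.4% of the loss, which is the point being made.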


I'm pretty sure he was joking.

