This should excel in the "computation per joule" metric.
I am cautiously hopeful that the funding will succeed. I plan to use mine together with a broadband radio front end to simultaneously receive and decode a large number of FM voice channels at a remote, solar-powered location with long periods of cloud cover.
(I have existing ARM boards whose GPU hardware might have been useful, but they are not openly accessible.)
I was trying to figure this out. Where did you find the information confirming that the cores are general purpose? I had inferred that they were GPU-like from the use of the OpenCL programming language. Perhaps ignorantly.
Well, multi-GPU debugging is terrible. You need different cards (you can't use the one that powers the display), and there is only one company that counts there, Nvidia.
Nvidia is married to Microsoft, and the only intuitive tool you can use for debugging is Windows-only, with no Mac or Linux support.
No UNIX support in a pro tool is a big no-no for me.
Another problem is that the technology evolved from graphics, so you have to use graphics concepts whether you need them or not.
The good side of doing that is that we can take advantage of the economies of scale of game tech to get good prices.
The bad side is that you can't use it as a stand-alone tool for what you actually want, like chemical or physical problems.
I take it you've never used CUDA or OpenCL before? Because everything you said is complete bullshit.
>Well, multi GPU debugging is terrible. You need different cards(you can't use the one that powers the display) and there is only one company that counts there, Nvidia.
You can run computations on the same card that drives the display. You can also compile to software emulation to debug your kernel logic on the CPU.
>Nvidia is married with Microsoft, and the only intuitive tool you can use for debugging is Windows-only, no mac or Linux support.
CUDA works on Windows and Linux; not sure how good the Mac support is.
>Another problem is that it evolves from graphics and you need to use graphic concepts whether you need it or not.
You don't need to understand any graphics concepts. It's parallel programming concepts you need.
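To put it concretely, here's roughly what a complete CUDA program looks like: plain C/C++ plus a kernel and a launch. This is a minimal SAXPY sketch (assumes a CUDA-capable card and the CUDA toolkit installed; build with `nvcc`). Notice there isn't a single graphics concept in it, just threads, blocks, and array indices:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles exactly one array element: pure data-parallel
// thinking, no textures, shaders, or other graphics machinery.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory: accessible from both CPU and GPU (CUDA 6+).
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // each element is 2*1 + 2 = 4
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The only "new" things relative to ordinary C are the `__global__` qualifier, the `<<<blocks, threads>>>` launch syntax, and the block/thread index arithmetic.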
CUDA development on UNIX is arguably easier than on Windows. I have no clue where you got your information, but it definitely isn't accurate. None of it. Where did you get the idea that you can't use the card that powers the display? That works just fine, and the driver support is good. You don't need to use graphics concepts either; yes, there are still some remnants of that heritage (in naming conventions, for instance), but for the most part GPUs are best described as coprocessors that happen to be able to drive a display.
You can use them for chemical or physical problems just fine (provided you are willing to do the programming).
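As an illustration of the kind of "physical problem" I mean, here is a hypothetical sketch of a semi-implicit Euler integrator for particles falling under gravity, written as an ordinary CUDA kernel (one thread per particle, nothing graphics-related anywhere):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Semi-implicit Euler step: update each particle's velocity, then its
// position using the new velocity. One thread per particle.
__global__ void euler_step(int n, float dt, float g,
                           float *pos, float *vel) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        vel[i] -= g * dt;        // dv = -g * dt
        pos[i] += vel[i] * dt;   // dx = v * dt, with the updated v
    }
}

int main() {
    const int n = 4096;
    float *pos, *vel;
    cudaMallocManaged(&pos, n * sizeof(float));
    cudaMallocManaged(&vel, n * sizeof(float));
    for (int i = 0; i < n; ++i) { pos[i] = 100.0f; vel[i] = 0.0f; }

    // Integrate 1 second of free fall in 1 ms steps.
    for (int step = 0; step < 1000; ++step)
        euler_step<<<(n + 255) / 256, 256>>>(n, 0.001f, 9.81f, pos, vel);
    cudaDeviceSynchronize();

    printf("pos[0] after 1 s: %f\n", pos[0]);
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```

A real simulation would of course have inter-particle forces (which is where the GPU's parallelism really pays off), but the programming model is exactly this: map elements to threads and go.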
Not true. I'm running OpenCL GPU code while reading this on the same machine, with a single AMD 6570 GPU, right now.
> only Nvidia
Not completely true. Nvidia is doing much more, but AMD's cards are more than capable and OpenCL works on them. AMD was/is certainly the favourite of the Bitcoin miners.
Other than the obvious one of running a standard OS...