
>> Accidentally (?) she managed to create the best representation of AI I have seen in art: all that counts is that you call it AI even if it is a simple algorithm.

Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".

So are many other AI algorithms, some of which are simple enough, and so well understood, that most people don't recognise them as AI anymore: search algorithms like depth-first, breadth-first, or best-first search; game-playing algorithms like alpha-beta minimax; and gradient descent/hill climbing are the examples that readily come to mind.
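To give a sense of just how simple some of these are, here is a minimal sketch of gradient descent in Python (all names here are illustrative, not from any particular library):

```python
# A minimal sketch of gradient descent, one of the "simple" AI algorithms
# mentioned above: minimise f(x) = (x - 3)^2 by repeatedly stepping
# against the gradient.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill along the gradient
    return x

# f(x) = (x - 3)^2 has gradient f'(x) = 2 * (x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges towards the minimum at x = 3
```

That is the whole algorithm; everything else in a real system is bookkeeping around it.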

I think the above article and your comment are assuming that, for an algorithm to be "AI", it must be very complicated and difficult to understand. This is common enough to have a name: "the AI effect". A few years down the line I bet people will say that "this is not AI, it's just deep learning".

There's no reason for AI algorithms to be complicated. Very simple algorithms can create enormous complexity, even infinite complexity. The state of deterministic systems with even a couple of parameters can become impossible to predict after a small number of steps if they have the chaos property. Language seems to be the application of a finite set of rules on a finite vocabulary to produce an infinite set of utterances. Complexity arises from very simple sources, in nature.
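To make the chaos point concrete, here is a sketch using the logistic map, a deterministic system with a single parameter (names illustrative):

```python
# The chaos property in a few lines: the logistic map x -> r*x*(1-x) is
# fully deterministic with one parameter, yet at r = 4 two starting
# points differing by one part in ten billion diverge completely.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.2)
b = logistic(0.2 + 1e-10)  # a tiny perturbation of the initial state
print(abs(a - b))  # after 50 steps the two trajectories have decorrelated
```

After roughly 35 iterations the initial difference has been amplified to order one, so long-term prediction is impossible despite the rule being trivial.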



The point was that her PCB wasn't connected to anything at all. She claimed there were pumps and sensors, but there was literally nothing. There were cables etc., and it certainly would fool someone who has no idea about circuit design and electronics, but I happen to know a bit about it, and the circuit almost certainly didn't do what she claimed it did.


Ah, I see. I must have misread your comment. I thought you meant that the PCB didn't have anything like (a hardware implementation of?) an AI algorithm on it, not that it had nothing at all on it.


> Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".

As time rolls on and we see more articles like this one calling out the "AI BS" - which I agree should be called out...

I worry that a new "winter" will set in, and funding will be cut, and research towards how biological neural networks actually work vis-a-vis artificial neural networks will suffer.

Because from what I understand, we currently don't know how such biological networks actually "learn" - because there isn't a "mechanism" for backpropagation to occur.

IIRC, there are still questions about how information is propagated through biological networks; our artificial representations of them are constrained to approximately a single dimension of the real thing (and even that doesn't capture the biology, hence the idea of "spiking neural networks"), but there may be other avenues of information diffusion that are important as well, still to be revealed in the biological makeup.

We know for certain we are missing something fundamental: even if you could scale some of today's best deep learning systems up to data center scale, they wouldn't approximate anything close to what goes on in the human brain, given its size and power constraints.

Figuring this out could be set back, when funding becomes scarce once more.


A lot of people think it will mirror how the internet was. Lots of hype, people threw money at it without really understanding it, but when the profits didn't roll in, the winter came. People forget how desolate tech was, and it wasn't just 2001; probably from 2001 to somewhere in 2005. Anyhow, those who figured out how to make use of the internet well were incredibly successful. Once the winter was over, tech has been among the hottest industries for a long time.

AI might end up the same and enter a winter. People will talk about how silly the endeavours into AI were, but a few companies will really figure it out, a huge explosion will occur, and people will wonder how anyone did anything without AI. Something like that.

For those who forgot what that winter was like: 1999, 2:10: "you're not a pure internet company..." (insinuating that's bad) https://www.youtube.com/watch?v=GltlJO56S1g&t=316s

The dot-com bubble burst around March 2000.

2002, 2:22: "Netflix represents one of the few success stories of an otherwise desolate tech sector" https://www.youtube.com/watch?v=YBLAwGhyV5k

2004: "who in their right mind brings a company to market in the doldrums of August in a down tech..." https://www.youtube.com/watch?v=HxOoeCHc47Q


Back-propagation can't be used in total isolation; it needs data and a neural net to train. So by itself it isn't "AI", even if it is an "AI algorithm".


Sure, backprop is fairly simple, but the thing it produces is somewhat complex, and we seem to find it hard to explain “why” the weights it finds work (even though we understand clearly how it finds weights that do work), right?

That seems sufficiently “black-box-ish” to me?
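As a sketch of that contrast (assuming nothing beyond NumPy, with all names illustrative): the backward pass below is a few lines of chain rule, yet the weights it converges to for XOR resist any simple "why" explanation.

```python
import numpy as np

# A minimal backpropagation sketch: a tiny 2-4-1 network fitting XOR.
# The algorithm is transparent; the weights it produces are not.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
loss_before = ((out - y) ** 2).mean()

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: just the chain rule, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient step
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

loss_after = ((out - y) ** 2).mean()
print(loss_before, loss_after)  # the loss drops, but inspecting W1 and W2
                                # tells you little about *why* they work
```

The "black box" quality lives in the trained weights, not in the training rule.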


> Backpropagation, which most researchers will agree is an AI algorithm, is a "simple algorithm".

Back-propagation is not an "AI" algorithm.

You are ironically doing exactly what this article is about.


Without meaning to sound patronising, I believe I understand your confusion. Allow me to explain.

My comment is making an entirely uncontroversial statement: that "backpropagation is an AI algorithm". Not that "backpropagation is AI". The latter could be taken to mean that backpropagation is itself artificially intelligent, that it exhibits some kind of intelligence (leaving aside for the moment the fact that we have no agreed upon definition of "intelligence", artificial or otherwise). If I understand your comment correctly, this is the interpretation you make of my comment.

However, what my comment says, and this should be clear from the context ("most researchers will agree"), is that backpropagation is an algorithm from the field of research that is known as AI.

In that context, "AI", "Artificial Intelligence", is the field of research that investigates methods to construct "AI", "Artificial Intelligence(s)". Backpropagation is a component of one such method, neural networks.

I think then that the confusion, which is also discussed, and exhibited, in the article, stems from the fact that the same word is used to describe both "artificial intelligence" and the field that researches artificial intelligence.

And I hope this clarifies the confusion.


This is not serious: back-propagation is the origin point of modern AI. It is the algorithm that powers the deep network revolution. It's not the end point, and it's not a magic box, but it is fundamentally an AI algorithm.

Just saying "no it isn't" is just not helpful or useful.


You are missing the entire point of the article if you continue to call these algorithms "AI". Inflating simple things like this to mean "AI" has led to the term being meaningless.

You are the example it is making.


I thought one of the key principles in this discussion is that the goalposts keep moving: before it's a solved problem, it requires Artificial Intelligence; once solved, it's just basic algorithms. The bar for what constitutes 'AI' keeps getting raised.


People have often believed some narrow tasks require general AI (AGI), however it turns out that almost any specific task can be solved without building an AGI first. This does not change the meaning of "AGI" - a system that is able to perform any mental task as well as an average human.


Most of the use of the term 'AI' I see isn't referring to AGI at all.


Ok: define Intelligence. Define Life.

There are no accepted definitions of these terms. Are they meaningless?

AI that does not include backpropagation or logical deduction or GAs or optimisation is... magical thinking. AI without the nuts and bolts from the last 50 years of work is meaningless. The article is heartfelt, and we all agree with the tenet that people pretending they are using AI when they are really using a database isn't a good thing, but if you take any current system and look right down inside it, all you will find is a Shannon-type implementation of Church-Turing.


There are tonnes of other approaches to machine learning that don't involve backprop! There are also other approaches to neural networks that don't use backprop (have a look at Numenta's stuff, for example). I suggest you watch Patrick Winston's brilliant MIT AI lectures to see how huge the range of techniques is.


I direct you to lectures 12a and 12b.



