Hacker News | danbruc's comments

How much energy does it take to pump the heat from a primary loop at a temperature tolerated by the silicon to a secondary loop at 900 K? If we pick 300 K for simplicity, would we not need twice as much energy as we want to get rid of just to raise the temperature? 2 MW to raise 1 MW from 300 K to 900 K?

Yep, the COP goes down as the temperature goes up, and at a certain point it's not worthwhile increasing the temperature.
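A quick sanity check of the numbers above against the ideal (Carnot) limit, assuming a perfectly reversible heat pump (real machines do worse), confirms the 2 MW figure:

```python
# Ideal (Carnot) limit for pumping heat from a cold to a hot reservoir.
T_cold = 300.0  # K, primary loop at a temperature the silicon tolerates
T_hot = 900.0   # K, secondary loop

# Cooling COP: heat removed from the cold side per unit of work input.
cop_cooling = T_cold / (T_hot - T_cold)  # = 0.5

Q_cold = 1.0  # MW of heat to remove from the silicon
work = Q_cold / cop_cooling  # MW of work required, even in the ideal case

print(cop_cooling)      # 0.5
print(work)             # 2.0 MW, matching the back-of-the-envelope figure
print(Q_cold + work)    # 3.0 MW delivered into the 900 K loop
```

So even an ideal machine needs 2 MW of work to move 1 MW of heat across that temperature gap, and any real machine needs more.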

I am in favor of returning America to the Indigenous peoples of the Americas. And while we are at it, let us also return Australia to the Aboriginal Australians. We probably also have to return Europe, Asia, and Africa - or at least some parts - to someone.

What fraction of the population of your average country has done some serious thinking about UFOs? What fraction of those thinks at least one of those unexplained events involved aliens?

Argumentum ad Populum.

No, I was only wondering how many people believe we were visited by aliens for somewhat reasonable reasons. I would guess quite a few people would say they believe at least one of the UFO sightings was an actual alien craft, but I would also guess that most people are informed only by headlines or History Channel documentaries, and only relatively few have dedicated a non-trivial amount of time to looking into the topic the way you would for other topics that interest you.

I mean, when I was younger I thought "maybe angels and demons and all that stuff was aliens", but it was probably mostly just hallucinations.

I would assume they are just drawing the outline, not performing any distance calculations, and the differences are just a result of different linejoin choices. [1]

[1] https://www.w3.org/TR/fill-stroke-3/#stroke-linejoin


I'd imagine that at some point during the text rendering process, they have to generate an SDF of the text they want to render (that's what I did when I wanted to manually render text, anyway). If they do, then they can generate the extra text-width lines basically for free: just fill everything with distance less than the specified width.

I may be entirely wrong though, I don't know in detail how browsers render stuff
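A toy sketch of the SDF idea above, using a circle in place of a glyph (the exact field a renderer would use for text is more involved): given a signed distance field, the fill is where the distance is negative, and an outline of width w is just the band where the absolute distance is below w/2.

```python
import numpy as np

# Exact SDF of a circle of radius r on a sample grid; a glyph SDF would
# be generated from the font outline instead.
n, r, w = 256, 0.5, 0.05
ys, xs = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
d = np.hypot(xs, ys) - r          # signed distance: negative inside

fill = d < 0                      # the shape itself
outline = np.abs(d) < w / 2       # stroke band of width w, "for free"

print(fill.sum(), outline.sum())
```

Thresholding the same field at different values is all it takes to get strokes of different widths, which is why the outline comes nearly free once the SDF exists.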


If an AI driven car drives off the side of the road, I want to know why it did that. I could take the software developer to court, but I would much rather take the AI to court.

How would that work? You have the AI explain its reasoning - and trust that this is accurate - and then you decide whether that is acceptable behavior. If not, you ban the AI from driving because it will deterministically or at least statistically repeat the same behavior in similar scenarios? Fine, I guess, that will at least prevent additional harm. But is this really all that you want? The AI - at least as we have them today - did not create itself and choose any of its behaviors, the developers did that. Would you not want to hold them responsible if they did not properly test the AI before releasing it, if they cut corners during development? In the same way you might hold parents responsible for the action of their children in certain circumstances?


That'd be great for the corporations. Take the AI to court, not us. The AI then gets punished (whatever that means... let's say banned) and the corporation continues without accountability. They could then create another AI and do the same thing all over again.

Or maybe the accountability flows upward from the AI to the corp that created it? Sounds nice, but we know that accountability doesn't work that way in practice.

I think I'd rather have the corporation primarily accountable in the first place rather than have the AI take the bulk of the blame and then hope the consequences fall into place appropriately.


The source code is not the specification, the source code is an implementation of the specification. The specification tells you what happens, the source code tells you how it happens. Ideally you also have some additional documentation for the why.

As any four-year-old can tell you, ‘why’ is infinitely recursive. ‘What’ from the perspective of level n is ‘how’ looking down from level n+1 and ‘why’ looking up from level n-k.

That usually does not matter in practice because you quickly reach a level of sufficient understanding.

We usually use UUIDs for this type of object, but we have to send those objects to the legacy system XYZ, which only supports IDs with up to sixteen characters and is case insensitive, so we generate sixteen-character random alphanumeric strings with uppercase letters, which provides about 82 bits of entropy.

Could you go deeper? Sure. Why do we have to send those objects to XYZ? Why does the legacy system still exist? Why does it not support UUIDs? Why is there no secondary key specifically for that system? Why are we using UUIDs?

But most likely you do not have to spell all those out. The point of a why is to explain why something is not what one would expect; you explain on top of some common knowledge. Everyone involved might know what XYZ does and why some objects have to get sent there. If not, that is probably written down elsewhere. Why is the system using UUIDs? Maybe that is written down in the design for the persistence layer.
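The hypothetical ID scheme from the example above could be sketched like this (the names and the "legacy system XYZ" constraint are just the example's givens):

```python
import math
import secrets
import string

# Sixteen random characters from the 36-symbol uppercase alphanumeric
# alphabet, as the legacy system only handles short, case-insensitive IDs.
ALPHABET = string.ascii_uppercase + string.digits  # 36 symbols
ID_LENGTH = 16

def make_legacy_id() -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(ID_LENGTH))

# Entropy: 16 * log2(36) ≈ 82.7 bits, the "82 bits" quoted above.
entropy_bits = ID_LENGTH * math.log2(len(ALPHABET))
print(make_legacy_id(), round(entropy_bits, 1))
```

Using `secrets` rather than `random` matters if the IDs must be unguessable and not merely unique.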


Sure, I'm not suggesting we need to go into infinite regress for every explanation! I'm saying that you should bear in mind that you _are_ in the middle of an infinite stack, and what is a ‘how’, a ‘what’ or a ‘why’ is just a function of your current position in it relative to the thing you're talking about. In the ID generation code you might want to explain why you're using this weird format here instead of a more standard format (because it needs to be passed to legacy system XYZ). But if you go up a step or two to where the ID is passed to XYZ in code, that ‘why’ has become a ‘what’ — the calling code acts as a ‘specification’ for the behaviour of that ID generation code.

Why would you want to approximate tanh for use in neural networks? Every smoothed step function will do, so if your concern is speed, why not design something for speed? Who cares whether it is an established mathematical function? Because you might also need the derivative, and tanh(x) has quite a nice one, 1 - tanh²(x), which is cheap to compute if you already have tanh(x)?
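Both halves of the argument above can be checked numerically: the derivative identity for tanh, and, as a hypothetical "designed for speed" alternative, the softsign x / (1 + |x|), whose derivative 1 / (1 + |x|)² is also cheap and needs no exp() at all.

```python
import math

# d/dx tanh(x) = 1 - tanh(x)^2, so the derivative is nearly free
# once tanh(x) is already in hand; verify against a central difference.
x, h = 0.7, 1e-6
numeric = (math.tanh(x + h) - math.tanh(x - h)) / (2 * h)
analytic = 1 - math.tanh(x) ** 2
assert abs(numeric - analytic) < 1e-8

# Softsign: a smoothed step with an equally cheap closed-form derivative.
def softsign(v: float) -> float:
    return v / (1 + abs(v))

def softsign_grad(v: float) -> float:
    return 1 / (1 + abs(v)) ** 2

numeric = (softsign(x + h) - softsign(x - h)) / (2 * h)
assert abs(numeric - softsign_grad(x)) < 1e-8
```

So tanh is not unique in having a cheap derivative; the choice between it and something like softsign comes down to shape and convention rather than cost alone.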


Complete tangent, but what is going on with this image [1]? Render? AI? Too much post-processing? It has a computer-game-graphics look to me, but I cannot quite put my finger on what seems off.

[1] https://images.blackmagicdesign.com/images/products/davincir...


For years now all their images have this look, everything sharp at all distances. I enjoy it because it goes against the shallow depth of field trend that has been dominant, it’s refreshing. I think they achieve it by focus stacking, compositing multiple images focused at different distances.


Oh, neat! Wachowski 'Speed Racer' but as its own aesthetic.


I’m not sure if it’s AI so much as a composition of dozens of images stacked on top of each other. The shadows of different objects seem to be going in different directions.


The camera and headphones are composited in, pretty sure the skyline is shopped in as well (the shadows on the desk should be much harsher given the bright sky), same with what's on screen. The displays being mirrored for no reason doesn't exactly help sell the reality of it either.

The bookshelf is looking sus too.


That lens (Sigma Cine 18-35/F2) is a big lens, but it looks almost too big there, like it was composited in, or the perspective is somehow strange.


I thought it was 3D.

A further bit of a tangent, but anyway: what really strikes me is the choice of such an image to represent whatever they're trying to convey. It feels bland, and there's a kind of underlying sadness to it... the books, the small sculpture, the shelf, the desk... it all drags me down.

I'm pretty sure the "fakeness" is intentional. The image seems designed to appeal to a specific target audience (when I look at their 'AI erase/replace tool' example I get a clear idea).


Ridiculously engineered studio lighting and HDR, I would suppose. Stuff can start looking very artificial when you start bringing in good equipment.


We may be witnessing a fascinating trend: AI images are making professional-grade imagery look like spam, and natural lighting and blurry images are becoming the new "human" aesthetic.


The last two generations (nine months) of image models do natural lighting and blurry images really well too.


Softbox lighting and it looks off because obviously no one lights their work desk like they would for a professional photo shoot.


Looks like a rendered scene, yes.


Where do you see exponential blow-up? If you replace every function in an expression tree with a tree of eml functions, that is a size increase by a constant factor. And the factor does not seem unreasonable, probably somewhere in the range 10 to 100.


The exponential blowup is in the symbolic regression section, where to search among depth K trees requires 2^K parameters.

As an example, searching for sqrt(x) would require a tree of depth ~40 which is in the trillion-parameter regime.


But that is not an increase in the expression size; that is the effort of searching for an expression tree that fits some target function. And that is no different from searching for an expression based on common functions, which is of course also exponential in the expression tree height. The difference is that an eml-based tree will have a larger height - by some constant factor - than a tree based on common functions. On the other hand, each vertex in an eml-tree can only be eml, one, or an input variable, whereas in a tree based on common functions each vertex can be any of the supported basis functions - counting constants and input variables as nullary functions.


Ah I see I misunderstood your point, thanks for clarifying.

I think you are right: each node in an expression tree formed of primitives from some set Prim would be expanded by at most max(nodes(expr) for expr in Prim) nodes.

That's essentially what the EML compiler is doing from what I understand.


Yes, and even this search doesn't actually require trillions of parameters, since the switching parameters will be sparse, which means you can apply a FakeParameter trick: suppose I want a trillion sparse parameters; that's a million by a million. Let's just model those parameters as inner products of a million vectors, each of some dimension N. Now it's in the regime of megabytes or a GB.

For extreme regularization, one can even go down to 10 arbitrary precision numbers: if we have a single vector of 10 dimensions, we can re-order the components 10! different ways.

10! = 3 628 800

so we can retrieve ~3M vectors from it, and we can form about 10 T inner products.
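A scaled-down sketch of the factored-parameter trick described above (m = 1000 stands in for the "million by a million" case, and N is the small per-vector dimension): instead of storing an m-by-m matrix, keep two sets of m vectors and materialise any entry on demand as an inner product.

```python
import numpy as np

m, N = 1000, 8
rng = np.random.default_rng(0)
U = rng.standard_normal((m, N))  # "row" vectors
V = rng.standard_normal((m, N))  # "column" vectors

def param(i: int, j: int) -> float:
    # W[i, j] materialised on demand; the dense matrix is never stored.
    return float(U[i] @ V[j])

dense_count = m * m          # 1_000_000 numbers if stored directly
factored_count = 2 * m * N   # 16_000 numbers actually kept
print(dense_count, factored_count)
```

This is just a low-rank (rank-N) factorisation, so it only works to the extent that the effective parameter matrix really is low-rank or sparse, which is the assumption the comment makes about the switching parameters.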


The big mistake was attacking a state in violation of international law.


Any country that can veto a UN resolution is, effectively, immune to international law.


So is everyone with enough power; every law requires enforcement. But even without enforcement, or with the ability to outright block laws, being in violation of international law still matters. It informs others whether you truly believe in a rules-based order or whether you only use it as a tool when it benefits you, and they will adjust their behavior accordingly. Also, if you want support from others while you are in violation of international law, they will think twice about whether they should support you.

