f1notformula1's comments | Hacker News

I found this linked from the article https://movio.co/en/blog/migrate-Scala-to-Go/

So if the alternative was Scala I can see why Go may have helped tighten things up a bit.


> everyone understands a different part of git and has slightly different ideas of how things should be done

This was a big problem that bugged me too, so for every team I've worked with I've created a few scripts for the team's most common version control operations.

Most devs, including me, are pretty lazy, so they'd all rather run this script than go to Stack Overflow to figure out git arcana.

This helps standardize conventions too: feature branches, linear DAGs, topic branches, dev branches, prod branches, whatever weird thing a team does, everyone just does it through the script.
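For what it's worth, one of these wrapper scripts can be tiny. This is a hypothetical sketch of a "ship a feature" flow for a linear-history convention; the branch names and the exact sequence of steps are purely illustrative, not a recommendation:

```python
import subprocess

def ship_commands(feature_branch, mainline="main"):
    """Return the git commands a hypothetical 'ship' script would run,
    in order: rebase the feature onto the mainline, then fast-forward
    merge, keeping the history linear."""
    return [
        ["git", "fetch", "origin"],
        ["git", "rebase", f"origin/{mainline}", feature_branch],
        ["git", "checkout", mainline],
        ["git", "merge", "--ff-only", feature_branch],
        ["git", "push", "origin", mainline],
    ]

def ship(feature_branch, mainline="main"):
    # check=True stops the script at the first failing git command,
    # so a failed rebase never turns into a bad push.
    for cmd in ship_commands(feature_branch, mainline):
        subprocess.run(cmd, check=True)
```

Keeping the command list separate from the execution makes the convention easy to review and test without touching a real repository.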


Replace with:

(1) I am from $region_X and I went to $region_Y

(2) $region_Y'ers do things in a way I find unusual and/or distressing

(3) The cause must be a character flaw shared by all $region_Y'ers (<insert stereotypes of choice here>).

and your statement is still valid about this formula.

I think we tend to preferentially notice them when we're in one of the two regions involved, but in general the world (and so the internet) is full of this sort of semantically null comparison about culture and stereotypes. I've started seeing these as idle chatter akin to "Hey, look at that weirdly painted bus". At most I'd respond with "oh heh cool", but I'm not going to spend any more of my energy trying to explain choices of bus colors to anyone.


PHP dev? ;)


Another stereotype? It could be Perl or Bash.


Or Ruby, or TCL...


I didn't know that there are still people out there who program in Perl or Tcl!


Booking.com is mostly Perl and going gangbusters. Tcl is, for example, the lingua franca for F5 load balancers. I have more examples, but both are far from dead.


Tcl is a household name in the field of EDA tool scripting.

Virtually all software tools designed by Cadence, Synopsys, and Mentor include a Tcl REPL/console.

Any task performed in the GUI is actually executed in the REPL, making it easy to replicate a specific flow by just copy/pasting the commands displayed in the REPL into a Tcl script.

To load a script, you simply pass it as an argument when running the software.


There are. There's even a Perl 6 that is production ready: https://perl6.org


or PowerShell


I thought Perl was the only language that allows ' as a valid character in variable names (along with the $), but Ada allows the quote, too.


First thing that came to my mind too! But it could also be bash...


Replying here because this matched my train of thought well.

I'm actually really curious to know if anyone has first-hand knowledge of a tinkerer (by this blog's definition) getting their work published.

As the GP said, understanding the rules is one thing, but demonstrating their understanding is another. And as you said, matching the style seems to be more important than the value of work that is demonstrated.

I see the value in matching terms, jargon, style, etc., just so the reviewers can standardize their thought processes. But as someone who's been a tinkerer for ages now, I find it hard to change styles for no immediate benefit.

Maybe some examples will inspire me to try :)


So my first attempt was 'this is what we do': a descriptive paper, documenting a technique. The reviews that came back were basically a mixture of 'so what', 'what does it show', and 'show something of significance'. I had tried to write it as the ur-paper, to establish a technique, and only described its applicability in general terms. They wanted a lot more demonstrable outcomes. I walked away. I found peer review very upsetting. It felt like nobody actually cared about what I was trying to say.

My second attempt was 'this is a specific thing it can do', combined with a much more rigorous academic 'this is the analytical technique' and 'this is a polemic about the lack of statistical rigour in results; here is our data, you repeat it'. That, interestingly, got panned as 'too much tutorial, too much argument, more results', which of course this time we addressed instead of walking away. The result? We didn't get into the first journal/conference, but we made the second. I'm reasonably content, but this feels a lot more like 'learn the rules of the game' than 'say something of merit you find personally interesting'.

Oh, and 'this technique is interesting' doesn't seem to cut it as a paper subject.


Sorry if this comes across as harsh but I'm really not understanding your grievance.

> They wanted a lot more demonstrable outcomes. I walked away. I found peer review very upsetting. It felt like nobody actually cared about what I was trying to say.

> Oh, and 'this technique is interesting' doesn't seem to cut it as a paper subject.

Well, yeah? To invoke an HN cliche, "ideas are cheap". Why should anyone else care about your idea if you can't be bothered to show it actually does something interesting on some specific problem(s) or even motivate why it might be expected to do something interesting in light of what's already out there? Without any expectation of experimental validation, conferences would basically be giant circle-jerks filled with completely inconsequential "interesting ideas".

And it sounds like you took the feedback from your first round of peer review, revised your work in light of those critiques, and got your resubmission accepted. That seems like a pretty good experience to me, knowing many academics with multiple experiences of resubmitting work 3+ times (with new results and revisions each iteration) before acceptance. I'm not saying that any peer review process is perfect by any means, but it's a very important filter, and in this case the criticism you got when your paper was rejected honestly sounds pretty fair...


I have no grievance. I gave an experience summary. I think my expectations did not match reality and I was reset.


Assuming you're talking about academic computer science papers, you're right that it is not "say something of merit you find personally interesting". It is more "convince the reader that something you find personally interesting has merit."


This thread is somewhat framing this as ethics vs salary, but what I found most useful in your statement was the (somewhat implicit?) thought that for some programmers there is often no ethical way to earn a salary.

While this has not been true in my personal experience, I must say that I've been very fortunate in life. I can totally understand and see how this may be true for some programmers.

I wonder if, as an industry, there is a better way for us to handle such situations.


This is the most intuitive introduction to Kalman filtering that I've ever seen. Even just reading that part was the most enlightening two minutes of my entire month. Kudos to the author!


Much appreciated! :)


Agreed with this as well as the GP comment.

Personally I think this also has the unfortunate side effect of colleges turning into de facto gatekeepers of 'earned' status/success/wealth.


I don't understand enough about the ad business to answer this myself. If there's a legitimate reason to allow 3P scripts to run code, it would seem that a domain-specific language that Google safely translates into JS would be so much better. Allowing 3Ps to run arbitrary JS just seems so shockingly wrong.

No amount of manual auditing can reliably catch malicious code. It's way too complex for a human to parse.

Is there a legitimate business need that anyone's aware of to have code run in Ads? If so, why not use a DSL?


I used to work for https://www.interpolls.com/ in 2007, so the tech may have changed quite a bit, but here's how the business worked then:

We were allowed 30kb of JS file to load which could (depending on the ad network) serve ~300kb of a Flash SWF file. We ran Cold Fusion hooks in the SWF to radio home to our JS file to trigger 1x1 pixels for 3rd party trackers. We scraped our raw Akamai HTTP request logs after the fact on a CRON job to create our reporting system. There was a small cluster of FreeBSD servers that crunched the HTTP request logs. Every mouse-over / click was registered via these pixel HTTP GETs. We had timers too that would trigger every few seconds. The reporting system probably had a 2-3 hour delay due to the immense amount of traffic we received.
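The log-crunching step described above might have looked something like this toy sketch; the log format and the `ev=` query parameter are invented for illustration, and the real FreeBSD cluster obviously did far more:

```python
import re
from collections import Counter

# Count tracking-pixel hits per event type from raw HTTP access logs.
# The /px.gif path and the "ev=<event>" parameter are hypothetical.
PIXEL_RE = re.compile(r'GET /px\.gif\?[^ ]*\bev=(\w+)')

def crunch(log_lines):
    counts = Counter()
    for line in log_lines:
        m = PIXEL_RE.search(line)
        if m:
            counts[m.group(1)] += 1  # one pixel GET = one tracked event
    return counts

sample = [
    '1.2.3.4 - - [05/Jan/2008] "GET /px.gif?ad=42&ev=mouseover HTTP/1.1" 200',
    '1.2.3.5 - - [05/Jan/2008] "GET /px.gif?ad=42&ev=click HTTP/1.1" 200',
    '1.2.3.6 - - [05/Jan/2008] "GET /index.html HTTP/1.1" 200',
]
print(crunch(sample))  # counts per event type; plain page hits ignored
```

Run on a schedule against the raw logs, an aggregation like this is all a delayed reporting system fundamentally needs.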

We specialized in "polls", which were plain old HTML radio buttons on top of the SWFs; after you answered, you got a quick result and sometimes digital takeaways in the popup answer window (icons, wallpapers, etc.).

At the time all of our ads were handmade: we had a design team and a programming team that would create these together and code them specifically to the client's request. By the time I left we had started to automate it into a drag-and-drop system for clients.

Sidenote: The biggest job screw-up I've ever done was not putting the correct 3rd-party tracking pixel into a 300x250 that ran for 1 day on the AOL.com homepage. It ended up being fine since we recovered the results for the typo from the raw logs, but it could have been a $200k mistake!


I don't know anything about JS-based cryptomining, but I wonder whether you can stop such ads without breaking 90% of legit ads.

I mean, it all probably boils down to number crunching? So the DSL you are envisioning would have to block really basic language parts, like loops and math operations.

If I'm wrong and mining actually could be easily blocked on language level using some DSL, I'm all ears.


It would be nice if things could be blocked by CPU usage... even if you're not mining cryptocurrency, if your ad uses more than 5% of my CPU it should be killed.


...and total CPU Time.

Interesting, I just noticed that watching a Udemy course uses 98% CPU (in Activity Monitor on a MBP), even while playback is paused. I wonder if they're doing something similar or it's just a lame implementation?


I've seen some software video decoders do that at times, though if it's happening even while paused it's a little unusual (maybe decoding buffered frames?).


yep, i've had the same with udemy app..


Maybe they're polling for input?


yeah, good point, but it's hard to differentiate between a malicious ad and, say, some widget displaying a real-time NASDAQ chart.

Sometimes I want some pages (or even iframes in those pages) to consume my CPU and be as smooth as possible.


Once a widget has run say 100 million instructions, suspend it if it comes from a different domain than the main page, mark it visually and provide a button on it to enable high CPU usage.

We used to do something like that with Flash: make the user click it if they want it to run.
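The suspend-after-a-budget idea above can be sketched in Python with a trace hook; this is only a toy stand-in for what a JS engine would do natively (counting real instructions), and the budget numbers are arbitrary:

```python
import sys

class BudgetExceeded(Exception):
    pass

def run_with_budget(func, budget=100_000):
    """Run func, suspending it once it exceeds a budget of trace events
    (roughly, Python lines executed)."""
    ticks = 0

    def tracer(frame, event, arg):
        nonlocal ticks
        ticks += 1
        if ticks > budget:
            raise BudgetExceeded(f"suspended after {budget} events")
        return tracer  # keep tracing line events in this frame

    sys.settrace(tracer)
    try:
        return func()
    finally:
        sys.settrace(None)

def busy():
    n = 0
    while True:  # a miner-like loop that never yields
        n += 1

try:
    run_with_budget(busy, budget=10_000)
except BudgetExceeded as e:
    print(e)  # the hostile loop gets suspended

print(run_with_budget(lambda: 2 + 2))  # cheap code runs to completion
```

At the point where the exception fires, a browser could grey out the widget and show the "click to enable high CPU usage" button.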


I hope that doesn't happen, as it would break our site and the usability for our users.

Hopefully a more sophisticated solution, one that measures GPU usage against actual UX updates, would be better.

WebGL games and heavy animations can run on the GPU, but if a script is burning cycles without updating the interface, perhaps that can be used to spot something nefarious.


That's exactly the reasoning that is the root cause of all these problems. You designed a website and you don't want it to be broken. Fair enough. But as a user, I don't really care about your website - what matters to me is whether I can prevent it from taking over my CPU or not. This option should be there and it should be configurable. The browser makers have already figured out this is a problem and have some rudimentary mechanisms preventing total abuse ("this window/tab became unresponsive...") but if users get more control over it, it completely changes the rules of the game. Having a configurable option "if a script consumes more than N% of CPU, turn it off" would save many people the time spent looking for the culprit, sometimes hidden among tabs. Fortunately many people have an auditory clue when JS is being abused: the fan noise.

Designers and developers need to understand that allowing them to run their code on my computer is a privilege, not an absolute right. Like every privilege, it must not be abused. If it's abused, it will be terminated. Google finding and disabling these Coinhive miners on YT is just treating the symptoms, not the root cause.


I've come to decide that the only ads I consider "legit" are where the site owner strikes a deal with another business that is interested in advertising on their site, the site owner hosts the ad on their own server, as a picture banner or text or perhaps a nice block in a side column that says "sponsored content" or whatever, and just links to the other business.

Site owner controls all the content. Any tracking will be done mainly via server logs, if the site owner wants to they can use a bit of script to quickly shove in a redirect onmousedown, in order to track exactly when the user clicked what link. But frankly I've found even that technique a privacy insult ever since I noticed Google doing this in their own search results.

This is analogous to how paper newspapers used to manage their ad space. No third party shit, and if the magazine was proud of itself it would curate the ads to only deal with advertisers that wouldn't annoy their reader base (too much).

A bit of a hassle maybe, but it shows your readers that you actually care about what content is displayed on your site (let alone what code is run). But most importantly, no adblocker will block these kinds of ads. Because they're just image links, after all. Adblocker can't see if that's an ad banner or just a thumbnail linking to an external domain. And I would maybe even bother to whitelist those if they did (right until one shows me crap I don't want to see, like being confronted with nudity or sex when I'm not in the mood for it).


>I've come to decide that the only ads I consider "legit" are where the site owner strikes a deal with another business that is interested in advertising on their site, the site owner hosts the ad on their own server, as a picture banner or text or perhaps a nice block in a side column that says "sponsored content" or whatever, and just links to the other business.

I agree, when it comes to ads for niche content (blogs, forums, etc.). The ad industry sells online ads like they're TV commercials, but the companies buying the ads should be looking at them more like partially sponsoring a race team in exchange for your logo showing up in front of people who are interested in your type of products/services.


Admittedly, blocking math is probably not workable.

However, mining is useless without a way to send results back out to the network. I doubt ads need networking capabilities - so just block that bit. That should do it, as far as I can tell.


I know a bit about modern ad-tech.

Ads absolutely need networking capabilities, for tracking stuff like "viewability", or for "anti-fraud, brand safety and independent measurement" by some third-party provider. In fact, you can't get a serious marketing budget from a reputable brand without having your, err... their ad wrapped inside some JS which makes networking calls. Brands want to audit each impression you, as an ad-tech firm, serve on their behalf.


Part of what I find frustrating as a user is that I don't like ANY of those features :)

While I'd prefer a model without any advertising (and am willing to pay for it), I can put up with unobtrusive ads, without tracking, like Daringfireball uses.

I've worked in adtech before, and I know that these techniques make money, and are important to advertisers. But as a user, I find them intrusive, and they are why I run adblockers.


Yeah, it's a race to the bottom. Fraud in online advertising is estimated at tens of billions of USD yearly, so brands require more and more "brand safety" and "measurement"; each ad calls, like, 4 different vendors calculating some metrics, and this, in turn, fuels adblocker growth.

Interestingly enough, "walled gardens" like Facebook are big and important enough to bully advertisers into playing by Facebook's rules, accepting FB measurement standards without calls to 3rd-party vendors.

It's only the open web that gets more polluted each year.


Shouldn't Google be big enough to bully advertisers into the same sort of deal? It seems like a business opportunity for Google here.


Well, it's a complicated question, and I'm not that well educated.

To my best knowledge...

1. You can't make a single cent if your bot visited facebook.com 100 million times. You can make some serious money if your bot visited some-exciting-domain.com, which belongs to you, and there were 5 ads displayed on each visit.

With this incentive you have every reason to make your bot very human-like (think headless Chrome, realistic mouse movements, having old cookies, etc.), so fighting fraud gets extremely hard.

It's easier to serve the ads and let the advertiser figure out anti-fraud measures himself. Being responsible for measurement and lack of fraud on the open web is a huge PITA without a clear path to a huge uplift in revenue.

2. Facebook optimizes UX (or claims to), and calls to other servers make a site slower, especially on mobile, lowering user engagement. This argument obviously does not work for some-exciting-domain.com. So, you can call whatever you want from your ad on some-exciting-domain.com, but on facebook.com you play by Facebook's rules.

In fact, some-exciting-domain.com could probably ban ads which call other domains, but that would just kill its revenue (programmatic systems will label it as "non-performing", because nothing is properly measured, and stop buying ads there).


> "It's only open web which is polluted more and more each year."

Is the water still polluted if more and more people are willing to chug it?


I do not think that metaphor holds at all, if "water" is the open web and "chugging" means spending advertising dollars there.

I don't have a link on hand, but GOOG and FB captured something like 95% of digital advertising growth in 2017. In other words, out of each new $1 shifted to digital from TV and print, 95 cents went to the duopoly.

And the advertisers that are still "chugging" the open web are installing more and more "filters" and "purifiers" (different anti-fraud and measurement providers).

People in digital media are talking about a digital media crash. [0] Buzzfeed missed its revenue goal and fired 100 employees. [1] Mashable was sold for peanuts. [2] Business Insider, granddaddy of ad-monetized clickbait, is pushing more and more articles under "BI Prime", which means paid access.

A lot of people were clamoring for the death of ad-supported publishing on the open web. Well, the future is almost here.

[0] https://talkingpointsmemo.com/edblog/theres-a-digital-media-... [1] https://www.recode.net/2017/11/29/16715350/buzzfeed-lays-off... [2] https://www.recode.net/2017/12/5/16735262/ziff-davis-mashabl...


Chugging means the general public is not at all concerned about what it's being subjected to, be it privacy/tracking or the visual pollution of ads.

If the market is the decider, then I think the market is saying - quite loudly - that the water is great to drink.


> Business Insider, granddaddy

Now I feel like Methuselah, who played with dinosaurs before the Ark.


As a user I love all of those features. Every single one of them makes my adblockers more effective, not to mention they drive more people to use adblockers. :)


Ads absolutely do not need this. One day these capabilities will be taken from the networks and they will survive.

The advertisers will take anything they can. It’s up to others to set the limits.


Taken by whom? As long as we have publishers and ad networks allowing this behaviour, money will flow there.


The audience is clawing back its rights using ad blockers, and the browser vendors are already limiting tracking by eliminating APIs used for tracking and restricting cookies.

Ultimately the browser vendors and the users make the rules, not advertisers.


But one of the biggest browsers is owned by one of the biggest advertisers in the world. You'd expect a conflict of interest at best, and anticompetitive behaviour at worst, from them.


Some ads do use networking capabilities. How do you think they do cross-domain tracking?

I have also seen some ads with chat boxes to talk either to the seller or to other people viewing the ad.


Ads could run on something like the Ethereum virtual machine, having a limited amount of "gas" (instructions) to execute.


Looks like https://developers.google.com/caja/ could be used for this purpose.

Disclaimer: I work for Google, but not on the ads side, and I have limited knowledge of Javascript or general frontend stuff.


It was only last week or so that I read about a security researcher pulling some tricks to bypass part of Caja's sandbox (while looking for something else, even). Sure, this was a whitehat researcher and they got a (very) nice bug bounty from Google Project Zero. But if they use this to secure the 3rd-party scripts that are apparently allowed these days in Google Ads, they're being hugely irresponsible.

I had never heard of Caja until a few weeks ago, though apparently it started in 2007 - could be that I'd just forgotten about it. Back in 2007 I was still a frontend web developer with a very keen eye on JS security and all the XSS/CSRF problems of those days. Back then JS/ECMAScript did not have sufficiently advanced features to properly sandbox code; it was a bit of a fool's errand. By now the language has gained a lot more of these features, mainly revolving around protecting the super-flexible fluid JS objects from modification and abuse. But I really wonder whether that locks everything down watertight. Browsers are going to have unofficial/proprietary features, and you need just one of them to get this slightly wrong, hand out a reference to a non-sandboxed Window object via-via-via, and it all falls apart.

You don't let untrusted people run code on stuff you serve. Code is just too slippery, Turing complete, etc., and ECMAScript perhaps even more so than many other languages. Can't we just take that as a given by now, instead of trying to be cleverer than the previous smart person who failed at it?


Sort of: I don't think it's meant to restrict time/space usage, just access to capabilities. If you don't give the ad-code access to the network, it'd have no way to access the blockchain, but it could still chew up your CPU cycles.


Thanks for this - That actually looks like a much more pragmatic approach than an all-new DSL.


Ultimately, it's because the advertisers don't trust Google or other middle men and insist on running code from third party vendors that grabs more information and promises better metrics, or to determine if the site or user are in some way fraudulent.


I'm not a legal expert and don't understand EULAs at all. I expected "Datacenter" to be defined at some point, but I didn't see a definition in the EULA.

How should one interpret this? Is a research cluster a datacenter? How about a rack of a few dozen machines I built by hand? I've been in university research groups that had both options.

Somewhat maddening to have a single line in the EULA that's so open to interpretation.


IANAL, but in law there's a doctrine called contra proferentem: https://en.wikipedia.org/wiki/Contra_proferentem Basically, it means that ambiguous terms are interpreted against the drafter of the contract.


You get that pretty much everywhere that inherited British common law, IIRC, and it's why contracts frequently define seemingly minor and silly terms, even obvious ones such as "the undersigned" and all manner of the seemingly banal. In essence the contract is first and foremost an agreement on a shared language, to express a commitment which ideally all parties are fully cognizant of.

Edit: spelling


Also, there are no punitive damages in UK law. So what's the worst that could happen? Would you be put back in the position you would have been in had the contract not existed, i.e. give the cards back plus a contribution towards depreciation? It would be interesting if someone with legal knowledge could chime in.


They could potentially claim damages for the difference in price between the GeForce and the equivalent Tesla, on the argument that using the GeForce in a DC cost them that amount in lost revenue.

Of course, a counterclaim for existing owners would be that the product they purchased was licensed for DC usage at time the contract was entered into, and so the original rights cannot be unilaterally revoked without consent.


They don't want commercial cloud, bare-metal, and "machine learning as a service" platforms to be based on GeForce cards. Such a service is either insignificant in size or based in a datacenter.

I would not stress about this for my own hardware for personal or company-internal use, even if it is a full rack or more in a datacenter. Nvidia is not going to be able to tell the difference from a rack in my basement.


Undefined terms like that speak to the weakness of the EULA as an enforceable contract.


When you're not a legal expert and you feel like you might be about to break a term in someone else's license agreement, the right avenue is to consult your company or university legal department.


"Medrano worked with Urton over the next several months and the two compiled their findings into a paper which will be published in the peer-reviewed journal Ethnohistory in January. Medrano is the first author on the paper, indicating he contributed the bulk of the research, something Urton notes is extremely rare for an undergraduate student"

I read that to mean there's more, but it'll be published in Ethnohistory.

