> So I'm assuming Europe's goal is not to compete or break American moat but to force them to be polite and to preserve national sovereignty on important national security aspects.
When push comes to shove, a US company will always prioritize US interests. If you want to stay under the US umbrella, by all means. But honestly it looks very short-sighted to me.
You have only one option: grow alternatives. Fund your own companies. China managed to fund its local market without picking winners. If European countries really care, they need to do the same for tech.
If they don't, they will forever stay under the influence of another Big Brother. It is the US today, but it could be China tomorrow.
It's true. DEI is genuinely hard to do well. Your company has to look closely at itself and ask questions that it may not want to answer: Do we have a diverse team? Do we have a diverse customer base? If not, why not?
And as America's motto goes: if it's hard, it's not worth doing. Instead, we can weaponize our incompetence until people stop asking us to make things better.
> Interestingly the developers predict that AI will make them faster, and continue to believe that it did make them faster, even after completing the task slower than they otherwise would!
In this case, clearly, anecdotes are not enough. If that quote from the article is accurate, it shows that you cannot trust developers' time perception.
I agree, it's only one study and we should not take it as the final answer. It definitely justifies doing a few follow-up evaluations to see if this effect replicates.
> If that quote from the article is accurate, it shows that you cannot trust developers' time perception.
The scientific method goes right out the window when it comes to true believers. It reminds me of weed-smokers who insist getting high makes them deep thinkers: it feels that way in the moment, but if you've ever been a sober person caught up in a "deep" discussion among people high on THC, oh boy...
I did not say to trust it. I do not need to trust it.
If I run my own tests on my own codebase, I will definitely use both an objective time-measurement method and a subjective one. I really want to know if there is a big difference.
I really wonder if it's just the individual's bias showing: if you are pro-AI you might overestimate the speedup, and if you are against it you might underestimate it.
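Concretely, something like this minimal sketch is what I have in mind; the harness and task names are hypothetical, just to illustrate recording both numbers:

```python
import time

def run_timed_task(label, work):
    """Run one coding task; record wall-clock time plus my felt estimate."""
    start = time.monotonic()
    work()  # the actual coding task goes here
    measured_min = (time.monotonic() - start) / 60
    felt_min = float(input(f"How long did '{label}' feel, in minutes? "))
    return {"task": label, "measured_min": measured_min, "felt_min": felt_min}

# Over a batch of tasks with and without AI assistance, a mean
# felt/measured ratio far from 1.0 would expose exactly this bias.
```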
Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.
The issue came when he promised that every car would become a robotaxi. That means he either has to retrofit them all with lidar or solve it with the current sensor set. It might be ego as well, but adding lidar would also expose them to class-action suits.
The promise that contributed to the soaring valuation now looks like a curse that stops him from changing anything. It feels a bit poetic.
> Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.
But radar and ultrasound did not cost a lot, and he got rid of those too, suggesting it was more than cost that made him go vision-only.
Heck, they even use vision for rain sensing instead of the cheap and more effective sensor everyone else uses: a few infrared LEDs and photodiodes that measure the change in internal reflection at the outer surface of the windshield, since the critical angle changes when the glass gets wet.
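The physics is easy to sanity-check. A rough sketch with assumed refractive indices (glass ~1.5, air 1.0, water ~1.33):

```python
import math

def critical_angle_deg(n_inside, n_outside):
    """Smallest angle of incidence giving total internal reflection."""
    return math.degrees(math.asin(n_outside / n_inside))

print(critical_angle_deg(1.5, 1.00))  # ~41.8: dry glass reflects the IR beam back
print(critical_angle_deg(1.5, 1.33))  # ~62.5: water raises the critical angle,
                                      # light leaks out, photodiode signal drops
```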
> But radar and ultrasound did not cost a lot, and he got rid of those too, suggesting it was more than cost that made him go vision-only.
They did get rid of the radar at a moment when there was a parts shortage. They had a choice: ship now without the part, or wait and ship fewer cars.
Maybe that was always the plan, and the shortage only accelerated the decision.
I don't want to defend Tesla, but ... The problem with LIDAR is a human problem. The real issue is that LIDAR has fundamentally different limitations than human senses, and this makes any decision based on it extremely unpredictable ... and humans act on predictions.
A LIDAR can get near-exact distances between objects with error margins of something like 0.2%, even 100 m away. It takes an absolute expert human to accurately judge the distance between themselves and an object even 5 meters away. You can see this in the YouTube videos of the "Tesla beep". It used to be the case that if the Tesla autopilot judged a collision between 2 objects inevitable, it emitted a characteristic beep.
The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen; then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then safely stops. Humans report that this is somewhere between creepy and a horror-like situation.
But worse yet is when the reverse happens. Distance judgement is the strength of LIDAR, but it has weaknesses that humans don't have: angular resolution, especially in 3D. Unlike human eyes, a LIDAR sees nothing in between its pixels, and because the 3D world is so big, even 2 meters away the distance between pixels is already in the multiple-centimeter range. Think of a lidar as a ball with infinitely thin laser beams coming out of it; each pixel gives you the distance at which one beam hits something. Because of how waves work, that means any object that is, in one plane, smaller than 5 centimeters is totally invisible to lidar at 2 meters distance. At 10 meters it's already over 25 cm. You know what object is smaller than 25 cm in one plane? A human standing up, or walking. Never mind a child. If you look at the sensor data you see them appear and disappear, exactly the way you'd expect sensor noise to act.
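The geometry is a one-liner to check; a quick sketch, assuming roughly 1.4 degrees between beams, which is the spacing the numbers above imply (real lidars differ per axis and model):

```python
import math

def beam_gap_cm(distance_m, angular_res_deg=1.4):
    """Gap between adjacent lidar returns at a given range."""
    return 100 * distance_m * math.tan(math.radians(angular_res_deg))

for d in (2, 5, 10, 20):
    print(f"{d} m -> {beam_gap_cm(d):.1f} cm")  # 4.9, 12.2, 24.4, 48.9
```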
You can disguise this limitation by purposefully putting your lidar at an angle, but that angle can't be very big.
The net effect of this limitation is that a LIDAR doesn't miss a small dog at 20 meters, but fails to see a child (or anything roughly pole-shaped, like a traffic sign) at 3 to 5 meters. The same goes for things composed of thin beams without a big reflective surface somewhere ... like a bike. A bike at 5 meters is totally invisible to a LIDAR. Oh, and perhaps even worse, a LIDAR just doesn't see cliffs. It doesn't see staircases going down, or that the surface you're on ends somewhere in front of you. It's strange: a LIDAR that can perfectly track every bird, even at a kilometer's distance, cannot see a child at 5 meters. And with walking robots, LIDAR-guided ones have a very peculiar behavior: they walk into ... an open door, rather than through it, 10% of the time. It makes perfect sense if you look at the LIDAR data they see, but it's very weird when you see it happen.
Worse yet is how humans respond to this. We all know this: how does a human react when they're in a queue and the person in front of them (or the car in front of their car) stops ... and they cannot tell why? We all know what follows is an immediate and very aggressive reaction. Well, you cannot predict what a lidar sees, so robots with lidars constantly get into that situation. Or, if a lidar robot attempts to go through a door, you predict it'll avoid running into anything. Then the robot hits the door frame ... and you hit the robot ... and the person behind you hits you.
> It takes an absolute expert human to accurately judge the distance between themselves and an object even 5 meters away.
Huh? The most basic skill of any driver is the ability to see whether you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters, and I'm likely vastly underestimating the distance. It is very apparent when this is the case. I can't tell if the distance between us is 45 vs 51 meters, but that is information with zero relevance to anything.
> The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen; then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then safely stops. Humans report that this is somewhere between creepy and a horror-like situation.
This is a non-issue and certainly not horror-like. All you've got to do is train yourself to slow down or brake when you hear the beep. And you're trying to paint this extremely useful safety feature as something bad?
> Worse yet is how humans respond to this. We all know this: how does a human react when they're in a queue and the person in front of them (or the car in front of their car) stops ... and they cannot tell why? We all know what follows is an immediate and very aggressive reaction.
What are you trying to say here? If the car in front of me brakes, I brake too. I do not need to know the reason it braked; I simply brake too, because I have to. It works out fine every time because I have to drive in such a way that I can stop in time in case the car in front of me applies 100% braking at any time. Basic driving.
Generally, what you're describing as predicting is more accurately called assuming: assuming that things will go how one wants them to go. I call that sort of driving optimistic: optimistically assuming that the car in front of me will keep going forward and that there is nothing behind that huge truck blocking my view of the upcoming intersection, so I can freely gas it through.
That mindset is of course wrong; we must drive pessimistically, assuming that any car may apply max braking at any time, and that if any part of our line of sight is obstructed, the worst-case scenario is happening behind it: a high-speed object on a collision course that will reveal itself from behind the obstruction at the last second. Therefore, we must slow down when coming around a line-of-sight obstruction.
> Huh? The most basic skill of any driver is the ability to see whether you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters, and I'm likely vastly underestimating the distance. It is very apparent when this is the case. I can't tell if the distance between us is 45 vs 51 meters, but that is information with zero relevance to anything.
That's probably because, for things moving in straight lines at constant velocity, you don't need to measure distance accurately at all to figure out if they are on a collision course. You just need to be able to tell whether the distance is decreasing.
First, note whether their angular position is changing. If it is, they are not on a collision course.
If the angular position is not changing, check whether the distance is decreasing. If it is, they are on a collision course; if it is not, they aren't.
If you take advantage of the fact that cars generally have distinctly different front and back ends, and that most of the time cars travel forward, you don't even have to estimate distance. If the angular position is not changing, just note whether the car is pointing with its front closer to you than its back. If its front is closer, it is on a collision course; otherwise not.
You will need to make some adjustments because cars have volume: a near miss for point cars could still be a collision for cars with volume, but this should be fairly easy to deal with.
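Put into code, it's obvious how little measurement the rule needs; a minimal sketch, assuming we can sample bearing and range at two instants, with the tolerance standing in for the volume adjustment:

```python
def on_collision_course(bearing_t0_deg, bearing_t1_deg,
                        range_t0_m, range_t1_m,
                        bearing_tol_deg=0.5):
    """Constant-bearing, decreasing-range rule from the comment above."""
    bearing_steady = abs(bearing_t1_deg - bearing_t0_deg) <= bearing_tol_deg
    closing = range_t1_m < range_t0_m
    return bearing_steady and closing

# Same bearing while the gap shrinks: threat.
print(on_collision_course(30.0, 30.1, 50.0, 46.0))  # True
# Bearing drifting across your view: it will pass ahead or behind.
print(on_collision_course(30.0, 35.0, 50.0, 46.0))  # False
```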
> Huh? The most basic skill of any driver is the ability to see whether you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters
Can you tell me the distance between 2 objects, each 50 meters away from you, down to 1 cm? That's the superhuman part. Even judging the distance between you and an object 10 meters away down to a few millimeters is impossible for a human.
That precision matters because 10 such measurements per second can predict with great accuracy where every object in the scene will go for the next few seconds ... except for the small children, or bikes, or whatever else the LIDAR cannot see.
It can also tell you, 4-5 seconds before it happens, which objects are going to collide. Not just which object YOU are going to collide with, but any collision between any 2 objects within the range of the LIDAR.
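That look-ahead is just constant-velocity extrapolation over tracked objects; a minimal closest-point-of-approach sketch (the positions and velocities are made up):

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and miss distance at closest approach, constant velocities."""
    dp, dv = p2 - p1, v2 - v1
    speed2 = dv @ dv
    t = 0.0 if speed2 == 0 else max(0.0, -(dp @ dv) / speed2)
    return t, float(np.linalg.norm(dp + dv * t))

p1, v1 = np.array([0.0, 0.0]),   np.array([10.0, 0.0])   # track A: 10 m/s east
p2, v2 = np.array([40.0, 15.0]), np.array([0.0, -5.0])   # track B: 5 m/s south
t, miss = closest_approach(p1, v1, p2, v2)
print(t, miss)  # ~3.8 s out, ~4.5 m apart: warn well before it happens
```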
But LIDARs see fundamentally different things than humans do. So humans will never work together nicely with LIDAR-guided robots.
You don't need a license for most of what people do with traditional, physical copies of copyrighted works: read them, play a DVD at home, etc. Those things are outside the scope of copyright. But you do need a license to make copies, and ebooks generally come with licensing agreements, again because to read an ebook you must first make a brand-new copy of it. As a result, physical books just don't have "licenses" to begin with, and if publishers tried to attach them they'd be unenforceable, since you don't need to "agree" to any "terms" to read a book.
> If a publisher adds a "no AI training" clause to their contracts?
This ruling doesn't say anything about the enforceability of a "don't train AI on this" contract, so even if the logic of this ruling became binding precedent (trial court rulings aren't), such clauses would be as valid afterwards as they are today. But contracts only affect people who are parties to the contract.
Also, the damages calculations for breach of contract are different from those for copyright infringement; infringement allows actual damages and infringer's profits (or statutory damages, if greater than the provable amount of the others), but breach of contract would usually be limited to actual damages ("disgorgement" is possible, but unlike infringer's profits in copyright, it requires showing special circumstances).
Fair Use and similar protections are there to protect the end user from predatory IP holders.
First, I don't think publishers of physical books in the US get the right to impose a contract: the book can be resold, for instance, and that right cannot be diminished. Second, adding more cruft to the distribution of something that the end user has a right to transform isn't going to diminish that right.
Fair use "overrides" licensing in the sense that one doesn't need a copyright license if fair use applies. But fair use itself isn't a shield against breach of contract. If you sign a license contract saying you won't train on the thing you've licensed, the licensor still has remedies for breach of contract, just not remedies for copyright infringement (assuming the act is fair use).
I am not going to sign a contract at the bookstore, and anyone who tries to get me to sign one is just going to lose book sales. IIRC the case involved Anthropic literally feeding physical books into scanners. Your proposed solution sounds like it's just going to make books worse, not AI better.
I'm not proposing any kind of solution, just stating what the law currently is. A book purchased at a store is a purchase; content obtained from online services like Bloomberg or LexisNexis is typically licensed; more and more of these license contracts include AI-focused restrictions.
I suspect IP like text is going to follow the college virtual textbook model where DRMed software is needed to access it and physical copies won't exist. Maybe some HDCP-like protection to stop screen scraping.
To access them, institutions do have to sign contracts, along with abiding by licensing terms.
I know, but the article mentions that a separate ruling will be made about that pirating.
> “We will have a trial on the pirated copies used to create Anthropic’s central library and the resulting damages,” Judge Alsup wrote in the decision. “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for theft but it may affect the extent of statutory damages.”
This tells me Anthropic acquired these books legally afterwards. I was asking whether, during that purchase, the seller could add a no-training clause to the sales contract.
> The doctrine was first recognized by the Supreme Court of the United States in 1908 (see Bobbs-Merrill Co. v. Straus) and subsequently codified in the Copyright Act of 1909. In the Bobbs-Merrill case, the publisher, Bobbs-Merrill, had inserted a notice in its books that any retail sale at a price under $1.00 would constitute an infringement of its copyright. The defendants, who owned Macy's department store, disregarded the notice and sold the books at a lower price without Bobbs-Merrill's consent. The Supreme Court held that the exclusive statutory right to "vend" applied only to the first sale of the copyrighted work.
> Today, this rule of law is codified in 17 U.S.C. § 109(a), which provides:
> Notwithstanding the provisions of section 106 (3), the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.
---
If I buy a copy of a book, you can't limit what I do with it beyond what copyright already restricts.
"When aid funds are used to substitute for government spending, then few, maybe even no one, has actually been helped unless the government uses the freed-up money for other projects of benefit to the general population. Of course, they don’t. They use the money to shore up their political position and the loyalty of their essential backers."
So there is definitely an argument to be made that if you are not careful with how you dispense foreign aid, you will empower corrupt politicians.
The book discusses examples where foreign aid was used more effectively.
Yeah, I'd be open to "more effective" rather than wholesale cuts.
But as for starving the poor to effect the change you want, I think history has shown you just end up starving the powerless (folks in power don't care) and change doesn't necessarily happen. So who are you punishing?
I am not even sure this administration is cutting it with the goal of causing populations to rebel against corrupt politicians.
> I think history has shown you just end up starving the powerless (folks in power don't care) and change doesn't necessarily happen.
I know. I was not specifically arguing that it was definitely going to happen in all, or even any, current scenarios. I just wanted to make the case that giving aid to countries with corrupt governments can definitely keep those governments in power longer.
I am not making the case that we should sacrifice starving people in the hopes of triggering a revolution. We should just stay cognizant of the fact that foreign aid can influence the internal political balance between factions. After analysis, it might be that we have to accept helping a dictator by stabilizing his country, if doing so saves more lives.
And with skin sales. Remember that Valve charges a tax on every item sale: every time you sell an item like a skin on the Steam market, Valve takes a cut. If they crack down on the gambling, it will impact their bottom line.
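For scale, a back-of-envelope sketch; the fee rates here are the commonly cited ones (5% Steam fee plus a 10% game-specific fee for CS items), so treat them as assumptions rather than official constants:

```python
def valve_take(sale_price, steam_fee=0.05, game_fee=0.10):
    """Rough marketplace cut per skin sale (assumed fee rates)."""
    return sale_price * (steam_fee + game_fee)

# A $100 knife earns Valve ~$15 every time it changes hands,
# and gambling churn makes the same items change hands constantly.
print(valve_take(100.0))  # 15.0
```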
As you say, Valve does not directly promote gambling products. They are not like EA with its predatory FIFA Ultimate Team.
Still, a lot of people, including journalists, think Valve could do more to protect against underage gambling.
I guess it will sound like I'm being a Valve advocate here, but it's not just about the bottom line. Valve simply doesn't have the headcount to crack down on many things, and it doesn't have the headcount to do lots of predatory stuff either.
Valve is under 400 people, and the vast majority of them do not work on Steam or on a specific game like CS:GO. Likely each project's support team is 30-50 people at most.
To compare, headcounts at other companies in 2023-2024:
* Epic Games - 4,000
* Nintendo - 7,724
* Sony Interactive Entertainment - 12,700
* Take-Two Interactive - 12,371
* Electronic Arts - 13,700
* Activision Blizzard - 17,000
* Microsoft Gaming - 20,100
Maybe they do have to hire 100 more people to solve this problem, and maybe it's fully their fault, but the expectations many people have of this certainly rich, but small, company are not realistic.
Also, looking at things like pirate gambling sites, it would be a never-ending process. Close one and lock its users' inventories, and 3 will pop up somewhere else. It is unlikely to be a solvable issue.
Just because you cannot "solve" an issue does not mean you should do nothing. By that logic Valve should not implement VAC, because after all, cheaters will always find a way.
They play the anti-cheat game of cat and mouse, because if they do not, users will stop playing. No one wants to play with cheaters unless they are cheating.
They could definitely invest some resources into this. But they have no monetary incentive, thus they do nothing. I fully expect legal action, or the fear of it, will eventually make them do something.
> As you say, Valve does not directly promote gambling products.
Counter-Strike cases are a gambling product. They cost money to buy, they cost money to open, and they reward you with an item worth real money. This is indisputable, and arguing otherwise is either bad faith or ignorance of the platform and the surrounding ecosystem.
Valve wouldn’t be making over a billion dollars a year on case openings alone if the outcome of opening a case was worthless 100% of the time.
> When push comes to shove, a US company will always prioritize US interests. If you want to stay under the US umbrella, by all means. But honestly it looks very short-sighted to me.
After seeing this news https://observer.co.uk/news/columnists/article/the-networker..., how can you have any faith that they will play nice?