The split would likely be Douyin (TikTok China) vs. TikTok for the rest of the world. The business is already structured that way: ByteDance operates Douyin directly, while TikTok operates through a subsidiary.
The subsidiary exists for legal purposes only; all of the tech is built and operated from China and is subject to Chinese law.
In fact, even after divestiture this will be the case for some time. The Chinese entity holds the IP to the algorithms and other underlying technology that make TikTok as viral as it is.
Betteridge's law of headlines:
"Any headline that ends in a question mark can be answered by the word 'no'."
tl;dr of the article:
- Autonomous vehicles drove fewer miles in the state of California in 2017, but maybe made up for that in miles driven elsewhere.
- The disengagement rate will probably need to improve considerably before AVs are ready for widespread deployment
- Waymo's disengagement rate barely improved year over year, but that may have been because they are placing the cars in more difficult scenarios (their blog post suggests that is indeed the case)
Surprisingly few people in the tech media seem to be aware of this.
But I'm not sure I'm too mad about someone trying to take Google down a notch. My worry is not that people no longer believe Silicon Valley companies can do no wrong, but that the government will exploit the situation to force backdoors, censorship, and other such things on them. And we're already seeing that: Feinstein and others are taking advantage of the "hate on tech companies" mood to push for encryption backdoors, arguing that tech companies are "out of control", an argument that is starting to resonate with people.
I've probably mentioned this a few times in the past few years, but Google, Facebook, and the others should not be taking advantage of people's goodwill to constantly maximize their profits. Eventually that goodwill is going to run out, and then they'll be in Uber's situation, where few people defend them anymore. And then they're in trouble, because if the people don't defend them, the government will have free rein against them.
But they've always ignored this, because they've seen such warnings as coming only from a "vocal minority", so they didn't care. I remember comments even here, post-Snowden, about how Google wouldn't bother with end-to-end encryption just to gain the trust of a few HNers. But they've forgotten that Chrome built a reputation and a fanbase "on the backs" of people like that. Without people like that preaching how much better Chrome was than Firefox and IE, Chrome might have been relegated to Opera status.
Perhaps instead of seeing tech enthusiasts as a "vocal minority", Google should see them as an "army of unpaid PR agents", working every day either to raise them up or bring them down if they start doing nasty stuff. That might change their perspective a bit on how to approach the criticism coming from enthusiasts.
Google is a conglomerate like many before. At some point people will realise that it's better to split the company up and allow individual parts to focus more. Google's individual businesses often have very little overlap. But that's not only for Google. Amazon, for example, has no reason to keep AWS in the same company as ecommerce (except to subsidise businesses). Quite the opposite, keeping them apart would bring some clients back that don't want to support Amazon (such as Walmart). I'm not sure how long it will take but I'd be very surprised if they remain connected in the longer term.
Conglomerates are usually split up when they're under pressure. Tech companies haven't seen a crisis in the past decade; when the first one comes, investors will probably push harder.
Interestingly, Facebook so far remains quite focused. The companies they bought (WhatsApp, Instagram) are very similar to their core product. That means they remain extremely reliant on the success of Facebook itself, but it also means there's no natural way of splitting up the company.
On your point, it might be worth mentioning that Facebook is starting to lose that focus with acquisitions such as Oculus, which Zuckerberg considers a major strategic move.
While well-constructed C programs are as minimal and straightforward as programs come, the language is certainly full of pitfalls (just look at the underhanded C contest). Keep in mind that most computer science students in their first intro course have not yet developed the mental models that experienced programmers take for granted. Students will try doing all sorts of things which don't make much sense, and C is overly permissive of these things unless you introduce additional tooling, e.g. valgrind, which can become very overwhelming for intro-to-CS students who are still feeling lost.
I agree that C0 on its own is overly simplistic, but the way it is used at CMU is as 'C with training wheels' - strict type checking, no potentially-unsafe pointers to the stack, dynamic array bounds checking, etc. Then, two thirds into the semester, students transition to real C, and learn how to correctly manage those potential pitfalls.
This is how students at CMU have been learning C since 2010, and graduates don't seem to be any worse off for it.
>... have been learning C since 2010, and graduates don't seem to be any worse off for it.
[Citation needed]
> C is overly permissive of these things unless you introduce additional tooling, e.g. valgrind, which can become very overwhelming for intro-to-CS students who are still feeling lost.
Sure, but isn't that the POINT of college? To push you into unfamiliar territory? To force you out of your comfort zone, and then hand you the tools to learn something new?
It may just be my arrogance speaking, having already taught myself C. But I taught myself C, and when I had to fix a bug, I taught myself asan and valgrind. None of that was very overwhelming. Complicated? Very. Confusing? Occasionally. But once I learned them, they became easy, intuitive, and helpful.
I think the problem I have with your argument is that when you apply it to math, it becomes painful:
"You don't want to learn algebra, it's too complicated; let's learn this subset of algebra, without irrational or imaginary numbers."
I don't get the analogy. Every algebra course goes over the material in order of complexity. You don't go over real analysis until the students have mastered the basics.
The problem with C is that the advanced things get in the way of the basics. You get students asking why their if statements didn't work, and to answer you need to teach them GDB, valgrind, different compiler flags, and how the stack works. This is a recipe for students who don't end up learning either thing. And it slows everything down, so by the end of the year you can't cover everything properly and end up with mediocre C programmers you would never trust near a real C codebase.
---
Another thing to remember is that we should be very careful about the drill-sergeant mentality that intro to CS should be hard and painful. This advantages students who had the opportunity to program before university, and it often ends up turning women and other minorities away from the field.
You actually managed to change my mind and I now agree C isn't a good first language. Mixing C with data structures / algorithms in the very first programming course is probably not a good idea for the reasons you mention.
Although I am kind of miffed by the last sentence. I admit I'm one of the people who programmed WAY before college, but in my experience most CS majors who didn't simply chose not to, as opposed to having lacked the opportunity. You're dismissing the thousands of hours I spent coding as a child (more than most CS majors invest in programming during college...), without the internet or anyone to teach me anything, as "opportunity".
Why stop expectations for self-study at the programming part of the degree program? Math textbooks are available from any library. Rather than continuing mathematics from where high school left off, we could skip past the beginner stuff. The students who really care about computer science will have already learned the basics of discrete math, linear algebra and theory of computation.
Where did you get the idea that I said we should skip anything or that there should be "expectations" for self-study?
I have an issue with someone dismissing thousands of hours of invested effort as "having an opportunity". And the same is of course true for someone who went to the library and learned first year math.
I didn't "choose" to program before college because I didn't decide to major in CS until I applied to college. And I suspect that unless "WAY" means high-school, you programmed before college because of your parents.
Nah, my parents are barely computer-literate even today. I found QBASIC, which came with DOS, when I was about 10; it had the source code for two simple games, and I decided I wanted to learn how to make stuff like that. A few years later I read about Linux somewhere and found someone who burned and mailed me Slackware CDs for a small fee, and then I had gcc.
If you're learning C as a basic intro to programming concepts... you're gonna have a bad time. Try Python. However, if you're learning C because you'll need it later for [any reason], you should learn gdb and valgrind (asan has replaced valgrind for most of my usage; valgrind is often my last resort now) on day one... I guess I mean to say: if you have better things to teach than gdb, then C isn't the language you should be using.
I would gladly hand some C code to someone who's really interested in programming but has only ever written Python. I can't say the same about someone who's only known Java. So experience with C isn't really a good bar to judge programmers by, not by itself anyway. You don't need to know how the stack works for most issues in C, and after you can make your stuff compile without asking questions is the time to learn about compiler flags.
So if that's the only argument for using an obscure/broken version of C, then it's a crap argument. If you've been doing CTF for years, the school needs to offer you CBE, not force you into a class where you'll only break the curve, and still be bored.
Also, I'd gladly turn away women and minorities both. If you're handed a problem you know has a solution and you give up because it's hard, your skin color or sex is irrelevant; if you're the type that gives up when it's hard, you're worthless as a programmer. (But that might just be my obsessive need to solve any problem I don't understand, so I might not be the best judge.)
I would agree the point of college is to push you into unfamiliar territory.
But that's why we have dedicated classes that challenge you. This language is meant for a course that tries to bring everyone to the same level. Some students enter the course having done CTFs since middle school, and others just took the AP exam. C0 is used for only one course, and the course that follows it uses C in depth.
I am no expert, but it sounds to me like this headline is an oversimplification.
The abstract of the paper suggests that the authors intend to support the claim that "not all quantum systems can be simulated efficiently using classical computational resources". Essentially that it's impractical to simulate complex nondeterministic quantum behavior using deterministic algorithms.
But what about quantum computers? It would seem to me like their inherent nondeterministic behavior would be an ideal fit for simulating other nondeterministic quantum systems.
In any case, it doesn't sound like they found any real theoretical limitation, just a practical one.
Why does it matter whether it can be simulated efficiently? If each Planck time, or "frame", took a huge amount of "real" time to simulate, we would not know it.
Think of a Pixar render farm that takes hours to render one frame: the characters inside would not know that; their world unfolds in real time to them.
> But what about quantum computers? It would seem to me like their inherent nondeterministic behaviour would be an ideal fit for simulating other nondeterministic quantum systems.
true, but in this case you either need a computer the size of the universe to simulate the universe in real time, or you can simulate a universe at (incredibly small) fractions of real-time speed on a system smaller than the universe.
this doesn't even get around the state-storage problem - storing the state of a simulated universe (assuming literally zero overhead) takes the entirety of the universe too.
> In any case, it doesn't sound like they found any real theoretical limitation, just a practical one.
A prior paper contributed to by Ms. Flack (https://arxiv.org/ftp/arxiv/papers/1406/1406.7720.pdf) presents a framework for modeling the dynamics between individuals - I would assume that is the abstract "social coordinate space" to which she's referring, which places individuals within "Markovian, probabilistic, 'social' circuits" rather than into some Euclidean space.
And Ms. Flack, though her research interests extend beyond the realm of what is traditionally considered hard science, is most certainly a scientist - an evolutionary biologist specifically.
I assume you are linking to her LinkedIn profile to point out her "Doctor of Philosophy" degree? That's just the full, formal name for a PhD :P
Just to get the full quote from original article: “their metric space is a social coordinate space. It’s not Euclidean.”
If the space being considered is a probability space, as you say, then there is no metric; there is a measure.
---
Why am I so pedantic?
Because I feel that the liberal arts mindset of writing is not serious enough for subjects where there is a ground truth to uncover.
In the article you link, where is the empirical study that compares how this model predicts reality?
I wish the people who publish in SIAM-, ASA-, AMS-, or IEEE-affiliated journals were the ones trending on HN and getting interviewed by journalists. A pessimist would say those people are too busy with their work for publicity, and that an empty barrel echoes loudest.
The problem is not competition from traditional taxi companies (they do indeed replace taxi companies' management structure) but from other ride-sharing companies, i.e. Lyft. Uber has only managed to stay ahead of Lyft by undercutting Lyft's prices, incurring severe losses in the hope that once they have self-driving cars they will be able to become profitable by eliminating one of their main expenses, the drivers themselves, while maintaining lower prices than Lyft.
Without self-driving cars, that strategy isn't sustainable, and so eventually Uber will find itself unable to maintain its advantage over Lyft.
> Uber has only managed to stay ahead of Lyft by undercutting Lyft's prices, incurring severe losses in the hope that as soon as they have self-driving cars they will be able to become profitable by eliminating one of their main expenses
I think someone should pause to note that, if true, this is one of the dumbest long term business strategies in the history of high finance.
This idea, apparently, is a bet on a technology that not only doesn't exist but is extremely heavily regulated, that the company has literally no demonstrated core competency in, that hasn't even been successfully prototyped, that represents the hardest and most complex use case of the technology, and that is obviously years away at best. Yet it supposedly justifies a policy of losing billions of dollars in the present just to gain market share, when the costs of switching brands are so non-existent that a typical customer often switches several times in a single evening out.
Maybe it's my old age and having lived through the first dotcom crash, but it seems to me that even when you feel like the only person who sees that the underlying business logic is nonsensical magical thinking, it's still quite possible you're correct.
You aren't alone in that thought. Being privately held the financials are opaque enough that no outsider knows for sure how much of the spend is subsidizing the low prices, versus being used for expansion and R&D.
The leaked data from Naked Capitalism is really the only data outsiders have at their disposal: http://www.nakedcapitalism.com/2016/11/can-uber-ever-deliver... The use of EBITAR (vs EBITDA) makes it difficult to draw real conclusions, though the fact that they use EBITAR at all is suggestive on its own.
However, something as epic as self-driving cars is one of those things that come along once every two generations. SDCs, if achieved, have the potential to change the entire American way of life and massively disrupt society. Start-ups love to disrupt, and this would be the mother of all disruptions.
So, for that reason alone, I think firms are willing to bet a sliver of their portfolio. If I was a family office I certainly would do so.
Right. But it's the second half of the business strategy I outlined above that's key to the discussion, i.e. the idea that paying billions of dollars for market share in this field makes sense. That's literally crazy: the network effect and lock-in at scale are modest at best.
Sure the more cars you have the better the service can be, but it's trivially easy for a competitor to come along at any time and attack your most profitable market segment in a given city, and trivially easy for any customer to switch as easily as clicking on a different app and glancing at the estimated time and price. That's going to be true forever, this isn't a market that will ever have a defensible monopoly position.
Investing some amount to hedge on self-driving cars could conceivably be defensible, but looking at what's going on, it feels like that's more of a rationalization for their present behavior.
They've been lighting money on fire subsidizing rides for a few years, and have hunted around for a plausible excuse. One is the "pool" functions, as that has a slightly more plausible network effect story, and the other is self-driving cars.
Both appear to be post-hoc rationalizations designed to provide some plausible story for why they need to borrow another couple billion dollars.
I wouldn't say it's trivial for a competitor to take over.
The Uber app is still best-in-class, and all my friends now say "Get an uber" instead of "Get a cab" because the UX is so much smoother.
Generally it's extremely difficult to dislodge an incumbent from a market slot. Newcomers can only compete on price, UX, and brand recognition. Ideally all three need to be significantly better than the incumbent to have a hope of taking over. Without all three the best a newcomer can hope for is a small slice of the pie.
So market share definitely has value in the abstract. Unfortunately in Uber's case it has negative economic value because of the costs/subsidies.
Then again I suppose it's possible Uber has always been a cunning plot to take VC money and spend it on subsidised transport. If so, it's definitely been a success - for now, at least.
What you are saying is right about Uber's current business. But that is not true for self-driving cars.
The infrastructure costs of deploying a fleet of cars to compete with Uber if they gain market share will be massive, all but assuring they will hold a monopoly position for years unless a decentralized competitor could actually become reliable quickly (I doubt it).
Once you have a self-driving fleet sure, you can start printing money, but it makes no sense to sacrifice revenue today by starting an unsustainable price war long before the tech is ready. All this does is shorten your runway, and you don't even know how long a runway you will need, given that reliable, fully autonomous driving is so hard.
> Once you have a self-driving fleet sure, you can start printing money
Why?
Are drivers who make $10-15 an hour so ludicrously expensive that saving that money fundamentally changes the business?
Are the carrying costs and maintenance costs and depreciation costs of self-driving cars likely to be lower or higher than the cost of a 2017 Toyota Camry? How about the regulatory and insurance costs?
Is there likely to be some magic secret that allows one company to dominate self-driving cars, or will it resemble the historical markets for transportation devices, whereby there are dozens of companies who make different offerings of similar technology, and a network of component suppliers and hardware and software companies that contribute?
I guess it captures a unique place in our imaginations, but this topic has an unusually severe infestation of magical thinking for some reason.
I can see an argument that a vertically integrated car manufacturer and taxi service provider would be hard to beat. Still makes Tesla and GM the companies to worry about, not Uber. The manufacturing part is much harder than the app part.
> Is there likely to be some magic secret that allows one company to dominate self-driving cars [...]?
I actually agree with you that this is not a given, but it wasn't clear from my phrasing. I should have started my comment with "Even assuming you will print money with a self-driving fleet, etc".
That said, whoever deploys a self-driving fleet first should enjoy a significant cost advantage against human-driven taxis, at least for the initial period before competitors finalize their own transition. Insurance costs should get lower if SD cars prove to be safer (if not, they wouldn't pass regulation), and taxi customers are very price sensitive, so the $10-15 per hour cost advantage will certainly matter. Taxi drivers hate the price competition from upstarts already. If anybody figures out how to make those prices economical, they win, at least until the market commoditizes itself at a lower price level.
Given your $10-15/hr rate, the monthly insurance costs for a vehicle are paid for in less than a day. As SD cars prove themselves, they may even have lower insurance costs than humans.
Since drivers can't work 24 hours per day, you are paying depreciation on ~three 2017 Toyota Camrys, not one.
It's déjà vu all over again. Remember "eyeballs" from 1999?
Research into "autonomous driving" began in 1987 with the Prometheus programme consortium; then came C2C and C2X.
We have "autonomous driving" the day an automotive CEO is happily blindfolded on the back seat, alone, chauffeured a random journey through Seoul on a morning commute in monsoon season or a scooter-mania evening on Friday in Milan.
So far, Uber et al. have operated on the principle of pitching easily disposable worker bees against each other, at the mercy of an opaque rating system, with zero rights to appeal, burdening them with all costs of doing business (car, insurance, maintenance, etc.) while providing nothing more than an app with server-farm back end - the classic founder/VC/underwriter/IPO-seller benefit narrative.
> Without self-driving cars, that strategy isn't sustainable, and so eventually Uber will find itself unable to maintain its advantage over Lyft.
I see the "Uber is a bet on self-driving cars" coming up in every discussion about their business plan. But even if that's true, why is there an assumption that Uber would have a monopoly on self-driving cars?
Google, every car manufacturer, and a whole series of startups and universities are working on self-driving technology. If for example Google perfects it first, they will sell licenses to car manufacturers, who in turn will sell cars to Uber, Lyft, and every taxi company in every city of the world. Even if Uber develops the technology first and keeps it to themselves - then others cannot be too far behind, the potential payoff is just too great.
I totally agree with you. I don't think Uber would have a monopoly on self-driving cars. In terms of the tech, they are definitely not ahead of e.g. Google. According to this TechCrunch article [1], their self-driving cars require human intervention every minute.