Hacker News | maxst's comments

In 1998, the idea seemed so ridiculous that The Onion mocked it:

https://theonion.com/new-5-000-multimedia-computer-system-do...


At the time, the mocking was well deserved. I remember downloading trailers for movies over my dial-up connection. It took the entire night for 3 minutes of video. Can’t imagine paying $5k for that privilege.

Today though, the mocking doesn’t make sense and is confusing. I haven’t ever owned a TV.


By '99 it wasn't that bad. I remember screaming along with a V.92 56k modem. Futurama episodes were about 50 MB encoded as RealVideo and took a mere two and a half hours to download o.0

(and it really was v.92; I still have the double-bong towards the end of the handshake emblazoned in my memory)


Realmovies were the new hotness. The evolution of video piracy was:

>vivoactive is the OG (stream-only format, like 50x50 pixels, NO KEYFRAMES - no fast forward, rewind, or seeking), talking about 1995 here

>realmovies - higher quality, seeking, around 1998

>DIVX (the format, not the discs, which also existed at the same time) - mindblowing quality update, around 2000

>VCDs - concurrent with DIVX, around 2000

>XVID (DIVX backwards) - arose as DIVX failed, 2001

>then wherever we are now, 9999 formats and VLC supports them all


I downloaded episodes of South Park using eMule over dial-up. It took days.


Well, back then there was a huge difference in the Internet experience between people at universities and other places with T1s and other fast connections, and everyone else on dial-up. There was a lot of full-length video downloading at universities by 2000. But even on dial-up I seem to remember RealPlayer and other UDP dumps being pretty popular around this time.


Picking 300MB as a ridiculous amount of data to download dates that nicely without needing to look at the article header.

Though using the codecs and hardware of that time, I doubt the quality at even that size would be great. Compare an old 349 MB cap of a Stargate episode (sized to fit two on a CD-R/-RW; likely 480p, though smaller wasn't uncommon) picked up in the early/mid 2000s to a similarly sized file compressed using H.265, or even H.264, on modern hardware.


I recall Xvid rips of SD television content being just fine quality-wise, even at the 350 MB per episode that ‘the scene’ used. A modern encoding at 480p might have slightly better compression in dark areas, but SD television is kinda janky compared to HD.

H.265 or H.264 would absolutely crush Xvid for compressing HD content, both in size and quality.
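The scene's 350 MB target implies a specific average bitrate. A quick back-of-the-envelope sketch (the 42-minute episode length is my assumption, and audio overhead is ignored for simplicity):

```python
# Rough average video bitrate implied by fitting one ~42-minute TV
# episode into a 350 MB "scene standard" file. Audio overhead ignored.

def target_bitrate_kbps(size_mb: float, minutes: float) -> float:
    """Average bitrate in kilobits/second for a file of size_mb
    mebibytes spanning `minutes` minutes of video."""
    bits = size_mb * 1024 * 1024 * 8   # MiB -> bits
    seconds = minutes * 60
    return bits / seconds / 1000

rate = target_bitrate_kbps(350, 42)
print(f"{rate:.0f} kb/s")  # roughly 1165 kb/s
```

About 1.2 Mb/s was workable for Xvid at SD resolution, but would be very thin for HD, which is why the modern codecs pull so far ahead there.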


I appreciate the usage of SG-1 as an example, as I definitely still have several seasons of SG-1 episodes of that size floating around old hard drives somewhere. XVID, of course.


I wonder if the 6000 series from Nvidia will finally be able to deliver on the prognostication of being able to make toast with a PC?


You can make a flambé with Nvidia’s new 12VHPWR connectors


Haha that article is wild. Thanks for sharing


> Demonstrating the technology, Welborne stood proudly beside a prototype of the Presario 6000 as it displayed an eight-minute segment from a recent 3rd Rock From The Sun episode, downloaded from an NBC server in under 75 minutes.

lol

If you went to Blockbuster, you could move 4.7 GB to your home in half the time (unless your family was involved in choosing the movie, which would slow you down).


Dragonfly's landing site is near the equator (Selk crater, 7°N), but all the lakes' coordinates seem to be close to the poles; see "Lakes of Titan".

Too bad. It would be wonderful to see these rivers and lakes up close...


It wasn't a big deal in 2012. The popular opinion was that Russia "no longer posed the threat" and could even be viewed as "an ally" of the US. See: https://www.politico.com/blogs/burns-haberman/2012/04/hillar...


Screaming what? "How dare Iranian people use the same tech we use!" or something like that?


> That makes no sense. First of all most of the sanctions are not "punishment", they prohibit certain transactions and are not aimed at people

Visa and MasterCard suddenly blocked all transactions for all credit/debit cards issued in Russia, and you claim such actions are "not aimed at people"? Come on.


"The New York electorate seems to be inured to corruption, they seem to believe it's normal and expected behavior"

"According to a poll released last week by Quinnipiac University, 45 percent of New Yorkers consider corruption to be a “very serious” problem, but 48 percent of voters said it’s as bad as anywhere else"

"You'll be hard pressed to find New Yorkers who don't think government corruption is a problem in New York State,” poll analyst Mary Snow said. “Yet, it's not the defining issue in the race.”[0]

[0]: https://www.politico.com/states/new-york/city-hall/story/201...


Will there be simple jobs on a Mars base? Someone to sweep the floor, someone to bake a cake, etc.? Will they produce enough value to justify the enormous resources needed to keep them alive in such a hostile environment?

And even the people with important jobs would work maybe 50-60 hours a week, but consume resources 24/7.

On average, the value all humans produce in a week might not be enough to keep them all alive for a week.


That's a big one. Probably hundreds of contributors. But what percentage of FFmpeg's code would they actually run?


So many algorithms insist on the "single RGB camera" approach, when it would be much more practical to use two cameras.


The power is the existing giant corpus of video. When I'm in front of a computer I'm going to run this up against the dancing of James Brown and Michael Jackson. Should be interesting.

Maybe an Olympic gymnast as well.


Practical for an algorithm implementer, maybe - deeply impractical for real-world use. Stereo cameras are rare and nontrivial (you have to synchronize shutters). Monocular algorithms can be applied to the millions of hours of existing footage, or used with the billions of cameras, smartphones, robots, drones, and fancy doorbells that already exist right now.


If you can calibrate the time-delay between the two cameras, can you not just interpolate one or the other signal backward or forward in time so that it aligns with the other? (By "interpolation", here, I mean the sort of thing the Oculus does on the display side, generating frames "between" frames, to smooth motion during head rotation. Take one real frame from one camera, and build an interpolated frame between two real frames from the other camera to match it.)
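A minimal sketch of that idea, assuming the fractional time offset between the two shutters is already known. `align_frame` is a hypothetical helper name, and this is a plain per-pixel cross-fade, not the motion-compensated interpolation a headset's timewarp does:

```python
import numpy as np

def align_frame(prev_frame: np.ndarray, next_frame: np.ndarray,
                offset: float) -> np.ndarray:
    """Synthesize a frame from camera B at the capture time of a frame
    from camera A, by linearly blending the two B frames that bracket it.

    `offset` in [0, 1] is the fraction of B's frame interval by which
    A's shutter lags `prev_frame`. A crude sketch: per-pixel blending
    smears fast motion; real systems use motion-compensated warping.
    """
    blended = ((1.0 - offset) * prev_frame.astype(np.float64)
               + offset * next_frame.astype(np.float64))
    return blended.astype(prev_frame.dtype)
```

With `offset=0.25`, a pixel that goes from 0 to 100 between B's two frames comes out as 25 in the aligned frame, i.e. a quarter of the way along.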


For a “slow”-moving object like the human body, does a synchronised shutter matter that much, and if so, are there any tricks to compensate if synchronisation is not possible?


You can do reasonable software sync with identical cameras and threading - it gets you to within a few milliseconds.

Even for slow objects it's a problem because being a few pixels off might make the difference between matching and not.


There are dozens of 360 cameras on the market, so I think shutter synchronization is not that difficult to implement.


360 cameras produce spherical panoramas from a single point, so they would not help capture a person on a stage any better than a regular camera.


But 360° cams try to do something entirely different than, say, a Kinect.

I would be surprised if all 360° cameras had synchronized shutters.


Only for near-field. Further out than roughly an arm-span, your brain itself doesn't use binocular vision for 3D estimation because there's not that much information in the parallax.


Do you have a source for that? Because my source (one closed eye) tells me very clearly that I do use binocular vision for 3D, 3D being how far something is.

You don't need 'much' information. You know the distance between the eyes, and then you have the two lines from each eye to the object. That's called a triangle. Do you know how to calculate the height of a triangle? Because that's your distance.


If you read my comment again, possibly all of it this time, you'll see that I'm not saying that we don't have or use binocular vision. I'm saying that there's a limit to how far out it's useful. That means adding a second camera is only going to be useful for a small number of tasks.

Messing about with triangles is called convergence. The accuracy falls off quite quickly with distance, and it's completely useless out past about 10m. Your brain has much better sources of depth cues before that point.
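That falloff is easy to see with a small-angle version of the triangle. Assuming a 6.5 cm baseline (typical interpupillary distance) and roughly one arcminute of vergence-angle resolution (both figures are my assumptions, not from the thread), the depth error grows with the square of distance:

```python
import math

def vergence_depth_error(distance_m: float,
                         baseline_m: float = 0.065,
                         angular_res_rad: float = math.radians(1 / 60)) -> float:
    """Approximate depth uncertainty from binocular convergence.

    Small-angle model: the vergence angle is theta ~= baseline/distance,
    so an angular error d_theta maps to a depth error of about
    distance**2 * d_theta / baseline. Quadratic in distance: doubling
    the distance quadruples the error.
    """
    return distance_m ** 2 * angular_res_rad / baseline_m

for d in (1, 10, 30):
    print(f"{d:>2} m -> ±{vergence_depth_error(d):.3f} m")
```

Under these assumptions the error is millimetres at arm's length, roughly half a metre at 10 m, and several metres at 30 m, which matches the "useless past about 10m" claim.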

There are at least another 12 mechanisms humans use for depth perception, only one (arguably two) of which uses both eyes. I'll let you do the googling.


I don't need to 'google' what closing one eye does. That's the whole point of my comment. You claim it's unreasonable when people use their eyes for basic information; I say having to google what your own eyes tell you is what's unreasonable.


And they would know that "maintaining that lie for N years" would prove useless as soon as the powerful cameras in lunar orbit photographed the landing sites. Who would participate in such a lie knowing with 100% certainty that it would be exposed eventually?

