Hacker News
Mozilla will add H.264 to Firefox as Cisco makes push for WebRTC’s future (gigaom.com)
218 points by gz5 on Oct 30, 2013 | hide | past | favorite | 62 comments


No matter what codec will eventually be widely adopted, I hope native video support for browsers will lead to better video players. Right now, I have issues with YouTube (loading speed, quality switching back and forth between 480p, 720p and 240p), Vimeo (still can't jump to a desired position, loads incredibly slowly), various Flash embedded players (won't load, or will reset to 0:00 when trying to change the current position), auto-playing videos, lack of controls, usability issues...

I usually end up downloading the video with a Firefox add-on to watch it later. Or just stop bothering and close the tab.


This has nothing to do with the codec; the quality issues are bandwidth-related. Most providers do not want to fork out cash to buffer your videos anymore.


They most certainly are player-related. I can easily avoid every problem the parent has described on a low-quality DSL connection by downloading the video directly and watching the partial download in a native video player.

The state of video players on the internet is just flat out terrible. Given how restrictive the policies and philosophies are around downloading the videos and watching them directly, browser players aren't actually competing with native players. I don't think normal people realize just how terrible they are. I can't imagine this situation ever getting better.


Not that I disagree, but the argument that it's not bandwidth-related isn't refuted by saying you don't have issues by downloading the video and then playing it with another player.


I said I don't have issues downloading the video and playing it with another player in low-bandwidth scenarios.


But you said you partially downloaded the video, and then played it? Perhaps I'm missing something, but it sounds like the equivalent of pausing the Flash player and waiting for it to buffer.


Something's gone wrong with communication here:

It is the exact equivalent.

Except it never discards the buffer (they do this all the time), I can seek with impunity (I do this all the time), it doesn't consume huge amounts of CPU (they do this all the time), it goes full screen reliably and quickly (on Linux they fail frequently), if there are problems with the connection I won't lose everything I've downloaded (online video players are so terrible about this), I can run it at any speed I want (I do this all the time), I can supplement it with third-party subtitles (I do this all the time), etc. etc. etc.

Further, no bizarre bandwidth saving gets involved. If the connection is very bad, I can just let it download. It won't stop after 10 seconds and wait for me to play, and I get to decide when it starts playing. I can resume failed downloads, I can respond to bad connections, I can use web technologies that have existed for decades and are specifically there to deal with bad connections and route around poor network conditions.

Video players on the web are just death by a thousand cuts. They're all different unique little snowflakes, and all worse than Windows Media Player from 2001. We are literally swimming in methodologies to deal with poor network connections, and the inter

but other than that they are equivalent.


It is yet another hidden price of DRM. The W3C's embrace of DRM won't help either.


I've never seen any of those issues -- are you sure your Internet provider isn't throttling video packets with DPI or something?


I get the same, and it's not dependent on ISP. Many (most?) flash players are buggy resource hogs, and the html5 implementations I've seen don't feel ready for primetime yet.


(Former) Flash nerd here. The problems with playback and scrubbing are absolutely the fault of the Flash player, and there's really nothing you can do about it short of hacking your own video handling into the SWF format. There are a lot of low-level network-related problems in Flash.

I have zero real proof, but I know for a long time Adobe (Macromedia) was pushing their streaming server and didn't really put a lot of effort into handling streaming/buffering without it and I suspect this just never got fixed as things were "good enough".


To be fair, there weren't many (any?) standardized alternatives back then. There was no particular incentive for Adobe to support, say, Red5 when there was no standard to gather around. Apple's HLS is a pretty recent attempt at moving toward a documented standard.

If you're talking about streaming .flvs off of a regular Apache setup, then it shouldn't be surprising that scrubbing and seeking are troublesome: static file servers weren't built for that use case.

What would you have preferred Adobe to have done?


As others have said in answer, it's absolutely the fault of the various Flash embedded players.

It's somewhat easy to prove if you write your own player (not in Flash) to bypass the Flash player of sites like YouTube, Twitch, etc. The experience becomes perfectly smooth (as long as you have the average bandwidth), the video starts quickly, jumping to another point in the video is almost instant.


If you're using a provider like Time Warner that has well-known and longstanding issues between it and Google's caching servers, that could be your issue. Myself and fellow Time Warnians in much of NYC spent about 2 years nearly unable to play anything above 480p. Google was sending everyone from certain blocks of Time Warner to the same caching server and it would get overloaded. There were a couple tricks you could implement at the OS level that would block the overloaded Google servers and get better video, but if you weren't using a router that supported those blocks, you were out of luck on your Xbox, Playstation, etc.

Note that things seem to be working better this year, though.
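For the curious, the OS-level trick generally amounted to a firewall rule rejecting traffic to the overloaded cache netblock, which forces the player to retry against a different Google server. A sketch with iptables; the address range below is a documentation placeholder (TEST-NET-3), not the actual block that was overloaded:

```shell
# Reject connections to the (hypothetical) overloaded cache range so the
# YouTube player falls back to a healthier server. Requires root.
iptables -A OUTPUT -d 203.0.113.0/24 -j REJECT

# Undo the rule once the congestion clears:
iptables -D OUTPUT -d 203.0.113.0/24 -j REJECT
```

As the comment notes, a rule like this only helps devices behind a router that can apply it; consoles on the same network were out of luck.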



The best experience is from a Safari extension called Youtube-5. It uses so much less CPU power and is so much smoother, it really puts Flash based players to shame, and shows how good the experience could, and should be.


I had all those problems until I dropped my ATT DSL for a local provider.


Use Chrome, install a YouTube buffering plugin that buffers all the way.


Good move by Cisco. Mozilla's response seems lukewarm, though?

Mozilla CTO Brendan Eich:

“Although H.264 will become available to Firefox users thanks to Cisco’s move, the codec still comes with restrictive licensing that we believe is not in the long-term best interests of users and the Web, compared to the situation with a truly free and unrestricted codec.”


Here's the announcement from Eich himself:

https://brendaneich.com/2013/10/ciscos-h-264-good-news/

I wouldn't say that the reaction is lukewarm, but rather reserved. Mozilla's never been a fan of H.264, but supporting it has long since transformed from a philosophical choice to a political necessity. This move helps Mozilla, no question, but it doesn't advance their agenda of patent-unencumbered web standards. Let's hope that VP9 and Daala can do better on this front in the coming years.


That makes sense to me. This is a reasonable compromise for Mozilla, but it's still a compromise. What they want is a free (as in FOSS) codec. What they got is a single free of charge implementation of a patent encumbered codec.


It's a "move" but not a solution, just a temporary workaround. The solution is to keep working on Daala.

See http://xiphmont.livejournal.com/61927.html


Of course the response is lukewarm, this still represents a failure of open web usage. As a practical measure this is a good move, but that doesn't make this a principled measure.


It seems that Mozilla is torn between doing what is right for pushing the Web forward for end users and doing what is right for the open Web. It's nice to see them come down on the side of end users, but I doubt Mozilla will stop looking for a more open alternative.


Not only has Moz not stopped looking for something better, they continue to actively develop something better (i.e. Daala)


Personally I'm overjoyed by that lukewarm response.


I find myself using 'youtube-dl' to grab clips increasingly often. The advantages are that YouTube sometimes only buffers a little bit before stopping, and this way I don't need to care about keeping tabs open or about codecs.

Everything always plays fine with VLC, and I can mess with the playback speed by simply pressing '[' or ']'.

Should probably automate this. Maybe independently of the browser: just sniff my own traffic and MITM myself (the request goes to youtube-dl, and the browser gets a page that closes itself immediately).
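A minimal sketch of that workflow, assuming youtube-dl and VLC are on the PATH (the wrapper function is hypothetical; the YTDL/PLAYER variables just make the commands swappable):

```shell
# Download a video once, then play the local file in VLC at a chosen
# speed. youtube-dl's -o and VLC's --rate are their real flags; the
# grab_and_play wrapper itself is made up for illustration.
grab_and_play() {
  url="$1"
  out="${2:-clip.mp4}"
  rate="${3:-1.0}"
  "${YTDL:-youtube-dl}" -o "$out" "$url" || return 1
  "${PLAYER:-vlc}" --rate "$rate" "$out"
}
```

Something like `grab_and_play 'http://www.youtube.com/watch?v=...' talk.mp4 1.5` then survives bad connections the way the comment describes: if the download dies, rerunning it lets youtube-dl continue from the partial file.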


I would be interested in developments regarding this. I was thinking perhaps some extension to Firefox could trigger youtube-dl. I don't think Firefox would provide that kind of system-level access to plugins, however.


Wow, this seems like a very backwards move. Just when the truly open source VP8 was set to go into WebRTC, we're reopening the discussion about putting H.264 in there again? Why?!

This seems like a very short-sighted move.


That's just it: it's not apparent that the IETF will mandate VP8 in WebRTC even without Cisco's offer here. Nor can Mozilla go it alone; Chrome has already announced they will also support H264.

As Monty himself put it on his personal blog, in the matter of VP8 vs. H264, VP8 lost. This is an appreciated measure by Cisco to mitigate the damage until Daala can be fielded to proactively displace whatever MPEG LA tries to push after H264.


Chrome has not said it will support H264 in WebRTC.


What guarantee is there that a codec which doesn't even exist yet won't lose to Microsoft/Cisco/Nokia/et al. the next time we need to standardize an MTI codec in some spec? This seems like wishful thinking.


No guarantee, but the same team was successful at getting Opus (audio codec) adopted as MTI for WebRTC. http://www.ietf.org/proceedings/84/slides/slides-84-rtcweb-6...

On a purely technical level, Opus' performance blew the pants off the other options, which surely helped. If Daala can repeat that performance I think they have a good chance.


Well, the fact that Skype and Cisco were already on board and using it helped a bit, and audio codecs don't have the kind of deployment challenge that displacing H264 does.

Surely the argument is going to come around in 2015, or whenever Daala ships, that H264/H265 hardware is too widely deployed on billions of mobile devices to justify a new codec.

Does Cisco really want WebRTC to succeed when it clearly commodifies their WebEx offering? The question is: what are they getting by ensuring H264 is MTI?

What Mozilla and Google were going to get out of VP8 winning was pretty clear. So one has to ask what their motivation is for spending so much money to keep H264 as MTI.


Yeah, I don't think they had to compromise this much. Using the H.264 decoder on the host OS was good enough.


>Just when the truly open source VP8 was set to go in WebRTC

That's not really a fair summary of the situation.


This still does not make it better than VP8. Yes, Cisco will release a binary for most platforms, but not all. They can snub platforms for lack of resources or for strategic reasons. Furthermore, Cisco has not committed to keep paying the royalties until all the patents expire.

I suppose it is an OK move by Cisco, but it still does not make H.264 acceptable when we have a completely free, open source alternative.


We also have VP9 out in the wild, and Daala sometime in Q1 of next year, we hope.


So according to this thread:

https://news.ycombinator.com/item?id=6640324

Mozilla is going to add a Cisco blob to their distributed browser, rather than a binary they compile from the actual source. Any clarification on that?


Cisco will release the source code, but according to the H264 licensing terms, Cisco has to be the one to distribute the resulting binary if Cisco is to pay for the license.

It would be good for others to independently verify that Cisco's source compiles to Cisco's binary, but that still wouldn't give legal permission for those others to distribute the binary -- only Cisco can do that.


What's more interesting is whether this will allow end users to compile a binary for personal use. If the build scripts work as well as, e.g., most C extensions for Python (so that you can do the Debian/Mercurial equivalent of: apt-get build-dep mercurial; virtualenv --no-site-packages test; cd test; ./bin/pip install mercurial -- and have it just work), a real "loophole" might actually present itself in the licensing.

It still wouldn't be quite as easy on Windows (even if it builds with a free and/or gratis compiler there), but on all platforms, building a plugin (automagically) should be much more feasible than having "everyone" build Firefox (that thing is a monster).


Unfortunately, as far as I understand it, you're only "licensed" to use the codec if you receive the binary directly from Cisco -- even if you manage to compile a bit-for-bit identical equivalent. You'd surely get away with it as a personal-use thing, but I doubt distros would automate it for you.


I'm thoroughly confused by this press release. I have an app that needs cross browser video support and I was required to use either flash/webm/vorbis to get my videos working in FF. As of a few weeks ago, h264 videos started working in the latest version of FF. Once I discovered that I ditched webm and vorbis and now have it default to mp4.

Then I see this press release... Does FF officially support h264 right now? I'm using FF v24 and it seems to already support h264. Is that because I'm using a mac and it falls back on a system codec?


>Does FF officially support h264 right now? I'm using FF v24 and it seems to already support h264. Is that because I'm using a mac and it falls back on a system codec?

It's using system codecs. Your stuff won't work on Windows XP, which is still a significant percentage of users.


It will work since it falls back on flash, but that's obviously not ideal.


Right now Firefox uses system codecs where available. In the future it will also have the Cisco codec as an option.


So they will pay what? When? With what guarantee? What about the production cost of videos? Do we need a license? That doesn't smell very good...


It's a brilliant hack, they pay exactly zero for this.

You get a licence from Cisco, and they pay royalties to the MPEG LA for that licence. Now, the H.264 licence fees are per copy distributed, but there's a cap on the fees, and Cisco's own usage puts them _way_ over the cap anyhow, so distributing this thing freely for Mozilla to use costs them literally nothing at all. Plus, they're apparently really distributing binaries for any and all platforms you can think of (someone mentioned S/360).
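The arithmetic behind that cap, as a rough sketch. The tier rates and the $6.5M annual cap below are the figures from MPEG LA's published AVC licence summary for the 2011-2015 term as best I recall them; treat the exact numbers as approximate:

```shell
# AVC per-unit royalties (approximate): first 100k units/year free,
# $0.20/unit up to 5M units, $0.10/unit beyond that, with an annual
# cap of $6.5M. At Cisco-scale volumes the cap binds, so additional
# free downloads cost them nothing extra.
units=100000000                                 # say, 100M copies/year
fee=$(( (5000000 - 100000) * 20 / 100 ))        # $0.20 tier
fee=$(( fee + (units - 5000000) * 10 / 100 ))   # $0.10 tier
cap=6500000
[ "$fee" -gt "$cap" ] && fee=$cap
echo "annual royalty: \$$fee"                   # prints: annual royalty: $6500000
```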


The Cisco binary should include encoding as well as decoding for it to be usable in WebRTC. Production systems could download Cisco's binary and use it.



Yes! Windows XP will get native video playback after all (it doesn't support the current Microsoft framework workaround).

Thanks Mozilla!


Only baseline profile is supported. Don't expect most videos to work.


April 8, 2014 is approaching rather fast...


So how will I know that the binaries my Firefox downloads from Cisco don't include backdoors that aren't in the source code they also provide?


Well, anybody can build from source and compare to the Cisco-distributed binary, no?


This is more complicated than you'd think: https://madiba.encs.concordia.ca/~x_decarn/truecrypt-binarie...
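The naive check is a byte-for-byte comparison of the two binaries; the write-up linked above documents why even an honest rebuild rarely matches exactly (compilers embed timestamps, paths, and toolchain versions). A sketch of the naive version, with hypothetical file names:

```shell
# Succeeds only if the two files are byte-identical. A match proves the
# binaries are the same; a mismatch does NOT by itself prove tampering,
# for the reasons in the linked TrueCrypt analysis.
same_binary() {
  [ "$(sha256sum < "$1")" = "$(sha256sum < "$2")" ]
}
```

e.g. `same_binary my-build/libopenh264.so downloaded/libopenh264.so && echo match`.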


IMHO Mozilla should build it from source and give it to Cisco for distribution.


Can someone explain to me what the purpose of WebRTC is? Why does the browser need some kind of new protocol in a browser to do "real time communication?" I mean, don't we already have TCP/IP?


WebRTC allows developers to write the next Skype-type application without having to worry about the underlying voice and video codecs, or about signal-processing components like acoustic echo cancellation and bandwidth estimation, because these components are now baked into the browser and exposed through high-level JavaScript APIs. There is also a data-channels API that allows peer-to-peer data transfer.

http://www.html5rocks.com/en/tutorials/webrtc/basics/


In addition to what pigubrco said, browser developers consider it dangerous to give Web pages direct access to TCP/IP because that would make it very easy to create botnets. Instead, browsers expose high-level APIs like XHR, WebSocket, and WebRTC and the implementations take care to not be vectors for new attacks.


Browsers currently can't do any "hosting" so there is no way to directly connect to another browser. WebRTC solves this problem.


As far as I understand, it is a way to do 1-to-1 communication between browsers, with no middleman. So when I talk to another browser it is (browser<->browser2), NOT (browser<->Google<->browser2).



