Hacker News

The other big factor is that pipes only work because of the power of plain text. The output of nearly every UNIX command is plain text, and most of the time it's text with columns delimited by \t and rows delimited by \n. And there are command-line utilities like awk, cut, sed, & xargs for parsing & rearranging that text, and funneling it into formats that other commands understand.
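A minimal sketch of that convention (the data here is made up for illustration): tab-delimited rows go in, awk splits on the delimiter, and a filtered column comes out, ready to feed the next command.

```shell
# Two tab-delimited rows: name, count. awk splits on \t and
# prints the first field of every row whose count exceeds 10.
printf 'alice\t42\nbob\t7\n' | awk -F'\t' '$2 > 10 { print $1 }'
```

The same stream could just as well be fed to cut, sort, or xargs; the only contract between the stages is "columns are \t, rows are \n".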

The web has a variety of other content types - images, videos, applications, structured data - that don't map well to this model. Before you can have interoperable webapps, you need to define common data formats for them to interoperate with.



I disagree with that. Pipes seem to work in spite of plain text, which only makes them incredibly inefficient. A much better approach is to send data structures through them (for which, by the way, the modern web is perfectly well suited, given the present dominance of JSON) - you can always render the data to text if you need to, but you don't have to write arcane, bug-laden shotgun parsers with sed and awk, because every step of the UNIX polka means throwing away metadata.
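The structured-data version of the awk idiom can be sketched with jq (which comes up downthread); the records here are invented for illustration. The filter addresses fields by name rather than by guessed column position, so no metadata is lost between stages.

```shell
# The same name/count data as JSON. select() filters on the field
# by name; -r emits the raw string rather than a quoted JSON value.
echo '[{"name":"alice","count":42},{"name":"bob","count":7}]' \
  | jq -r '.[] | select(.count > 10) | .name'
```

If a later stage needs the full record instead of one field, it can simply drop the `.name` projection - the structure is still there.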

Also: ditch plain text for structured data and suddenly handling other content types becomes much, much easier - they map perfectly onto this model.


I pipe things in and out of ffmpeg, curl, convert (and the rest of imagemagick), mysql, tar, netcat, jq, etc pretty frequently, and that's all images, video, audio, applications and structured data. It seems to map quite well for all that.
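A tiny demonstration that the pipe itself is byte-agnostic (gzip standing in for the heavier tools above): the intermediate stream is compressed binary, not text, and it passes through untouched.

```shell
# gzip's output is arbitrary binary; the pipe carries it to gunzip
# without any text-oriented mangling.
printf 'hello\n' | gzip -c | gunzip -c
```

The same pattern scales up to `tar -cf - dir | netcat host port` or piping raw frames into ffmpeg.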



