I don't feel the analogy is entirely accurate. Imagine having to learn 5 "if" statements, or 10 ways to say "++". In a spoken conversation, you don't get to look up the API docs if you forgot something. However, most programmers are likely at least bilingual because most programming books are in English and many companies in non-English countries use English as a common language.
However, I do feel there's some similarities between constructing a possibly-correct sentence and seeing if it actually works, and constructing some possibly-correct code and seeing if it compiles.
> most programmers are likely at least bilingual because most programming books are in English and many companies in non-English countries use English as a common language.
Reading English is a smallish subset of fluency in English, i.e. being able to also speak, listen to, and write English. Technical specs often use simple written English, relying on example code. So I'm not sure if what non-English programmers do can be called "bilingualism".
I work at a company that makes MMOs. If this guy's statements are true, then he was making significantly more money exploiting an MMO than I do by programming one.
There's tons of people making more within the Microsoft/Oracle/SAP/Apple/etc ecosystems than employees of those companies, too. This is different insofar as the market was not intentionally created, but it's hard to blame him for "exploiting" the circumstances.
I use "exploiting" in a more technical sense here. An exploit is the use of a bug in the game for personal gain; in an always-on multiplayer game with an economy that affects all players, the permanent effects of an exploit can be much worse. That's not what's happening here, because no bugs are necessarily being exploited (though maybe at one point they were; a buggy drop rate resulting in excessively high yields for farmers, for example). But the farmers tend to get lumped in with the exploiters in the developer's mind, because the exploit and fraud departments of customer service work pretty closely together.
Gold farming has generally been considered a fraud of sorts, because it can seriously damage the in-game economy or the player perception of the economy, and many gold farmers perform actual fraud (creating accounts with fake or stolen CC numbers).
The fact that we even have these problems is both amazing and wonderful.
The fact that there are cartels within computer games hiring low cost foreign workers to manipulate a virtual experience for real monetary gain, and there are investigative units trying to track them down is fucking awesome.
I was an original beta player for UO and we had a fantastic run exploiting bugs and high speed Internet connections from the Intel game lab we ran, with multiple accounts to dominate and reach wealth and fame.
Those were my golden years of gaming, actually. Now I may play an hour or two of Skyrim a week if I'm lucky.
Back then I was making nearly $70k to play UO from a sick lab 12+ hours a day.
> The fact that we even have these problems is both amazing and wonderful.
> The fact that there are cartels within computer games hiring low cost foreign workers to manipulate a virtual experience for real monetary gain, and there are investigative units trying to track them down is fucking awesome.
From one point of view, sure. It really is a bummer for the player experience, since the reality is that a company can and will protect itself better than its users in aggregate will protect themselves. So you just end up with the current WoW situation: a huge stream of hacked accounts draining the game company's resources and destroying the play experience of the players. I'm not sure where any of this is wonderful, really. It isn't like the myth of the gold farmer hiring out warehouses of employees to play the game, building up "new gold" in the player economy; it's just keyloggers and trojans and ten million potential victims.
$100,000 is a lot higher than you would get as a game programmer in most places outside of a handful of really hot markets (Seattle, Vancouver, SV, etc.).
Generally speaking, expect a 30% pay cut compared to a non-games programmer (varying by what kind of games; Zynga-style stuff is highly paid, IIRC).
I immediately thought of this too. Could there possibly be a revenue opportunity here that these game developers are missing? Should the developers design the gameplay with economics in mind, so that it's an objective and not just a byproduct?
Yes. The US MMORPG companies resisted the Korea/China Free2Play model, where you sell advancement fairly directly for hard currency, for cultural reasons. (My read is that their business guys were totally clueless and their devs had passionate hatred for selling advancement. They wanted you to earn advancement the proper way: by dropping out of school and devoting your life to the game.)
Every gold farmer was competing on price with someone who had INSERT ... INTO ITEMS; available to them and the manual labor was winning.
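The "INSERT ... INTO ITEMS" point can be made concrete: whoever runs the database mints supply at zero marginal cost, while the farmer adds one item per grind. A toy sqlite3 sketch in Python, with an entirely made-up schema (no real game's item table looks like this):

```python
import sqlite3

# Hypothetical item table, in memory; the schema is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (owner TEXT, item TEXT, qty INTEGER)")

# What the farmer does: one stack per however many hours of grinding.
conn.execute("INSERT INTO items VALUES ('farmer', 'gold_stack', 1)")

# What the operator could do: arbitrary supply at zero marginal cost.
conn.executemany(
    "INSERT INTO items VALUES ('shop', 'gold_stack', ?)",
    [(1000,)] * 10,
)
conn.commit()

shop_qty = conn.execute(
    "SELECT SUM(qty) FROM items WHERE owner = 'shop'"
).fetchone()[0]
print(shop_qty)  # 10000
```

The farmer's "cost of goods" is human labor per unit; the operator's is a single statement, which is the asymmetry being described.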
There were many folks who said that there were solid commercial reasons for this refusal at the time. A lot of them boiled down to "I don't know about those crazy Asians but sophisticated American consumers don't want to do this thing that we're spending thousands of man-hours a month unsuccessfully trying to prevent Americans from doing." Those have been pretty decisively proven wrong since a) when AAA US MMORPGs experiment with item sales they make serious bank and b) the Free2Play model empirically has worked very, very well in the US, especially when you compare the revenue per user and in aggregate against the HUGE premium embedded in development costs of the AAA subscription RPGs.
Oh, also, Zynga ROFLstomped the entire industry. (10~50% of the revenues of the most successful MMORPGs for 2~3% of the development costs of the median AAA MMORPG... and they did it more times in a three year period than there have been successful AAA MMORPGs in history.)
As you say, Zynga proves it can work for some games.
But I can also understand the resistance to it. I only just started playing my first MMORPG (SW:TOR), and as a player I think it would bother me if they openly sold advancement to the people who would be in PvP matches with me. I know a certain amount in the background is unavoidable, but that is far easier to tolerate than having the company sanction it, much less be involved in it.
So, if the amount of money they would gain from the purchasers exceeds the amount they would lose from people with my opinions dropping the game, then from a business perspective they should do it, but it comes at a cost. Of course the middle ground is to do it on some servers and consider outside purchases cheating on other servers, but even that might turn off some potential players that see it as distasteful.
Actually, it's not Zynga; I think the original was a Korean MMORPG which set the tone for other AAA MMOs.
The F2P model was tested with a lot of other games, both in China and Korea. I think GunZ Online was one of them.
Anyhoo, the major American brands resisted because of the cultural idea behind an MMO at the time: a world, which you subscribed to be in, and act within.
Being able to pay money to advance faster than other people broke the golden rule of immersion the original founders/artists had in mind.
Don't forget, this is now a bygone age of gaming, where people created games as an extension of their imagination and a hope to enact cool things in character.
Today those drives are still there, but completely leashed to the need to ensure profitability.
It's led to more attempts and more games, but you no longer have things like the sprawling empty wastelands of the Barrens in WoW. Every square inch has been maximized to ensure it has no dead spaces, or environments which let people advance at anything beyond the average rate decided by the designers.
Most designers for MMOs will avoid selling advancement enhancing goods - it destroys the game in the long run.
The model they use is one where if you aren't paying, you are the content. They need to make sure it is egalitarian at a gameplay level.
To be fair, Zynga is a parasite on Facebook's infrastructure. So while Zynga's direct development costs are low, the real cost of developing the platform Zynga requires is over a billion dollars (Facebook has stated its network infrastructure investment has been $1B).
Also, Zynga is an idea-thieving bandit of a company, so their costs are also far lower due to their hiring people who can copy, as opposed to the thought leaders who would have come up with original content.
But if the only metric you choose to measure them on is revenue versus expense, then by that myopic, unethical lens, sure, they look good.
They are Facebook's largest advertiser, IIRC. Additionally, I believe Facebook has invested in Zynga, so the relationship is more symbiotic than parasitic. I am anti-Zynga, but it's important to be accurate in our critiques.
Ahh, no. Sales of credits to fund RMT in games like Zynga's are where Facebook's largest growth in revenue comes from: about 30% of the sale price of the credits goes to Facebook. Games like Zynga's make FB users more "sticky" and make them spend more time on the site, which helps raise advertising "hits".
In what way does that refute what I said: that Zynga is wholly dependent on Facebook's infrastructure?
This doesn't mean they don't have their own infrastructure, but it is different from an MMO company like SWG/WoW, which has its own infrastructure for 100% of the game.
The point made was that Zynga is doing what MMOs do at far lower cost. I said the cost was offset because Facebook foots the bill for the platform on which Zynga depends.
Actually, there's a major shift in the industry (to "free to play") that companies have been making. Basically, the company provides an in-game item shop where players can use real money to buy in-game items. These shops accomplish pretty much what this guy was doing, but keep the money in the company purse, and circumvent many of his limitations: there's no need to farm the game when you make the game, and you can sell items which farmers could never obtain.
In almost all cases, it has meant a higher amount of revenue, either for brand-new games or for games which transition to the F2P model or an in-between hybrid model. It looks like, generally, big-name MMORPGs are still starting out with a subscription model, but most games at other tiers are launching free-to-play with an item shop. Lots of long-running MMOs are transitioning to a hybrid model to add revenue to a product from which little is expected but also little is being spent on (just enough money to keep the servers running and keep a slow flow of updates).
I think it would be much harder these days for a player to make money through farming, except with the old guard of MMORPGs which still don't have an item shop. I could be wrong; I'm not watching these economies closely, so I don't know if the farmers are actually able to provide items and prices competitive with the developers'.
EVE focuses much more (it's industry-leading here) on in-game economics as a gameplay aspect, but not exceptionally much on converting game money to real money. Second Life has a rather sophisticated and officially supported exchange between its currency and out-of-game currency (dollars).
Slightly tangential, but I believe one of the online poker websites (Absolute Poker) was accused of having a rogue insider who would fix tournaments.
MMO designers far too often create the need, and people like the guy from the story naturally arise to fill it. This is usually done by having items purchasable from NPCs for absurd sums. Examples are pets, mounts, movement speed, costumes, and the like. Designers reason that not everyone will need or want them, but fail to take into account human emotions like greed, envy, and fear. These drive competitiveness for many a player, and item sellers provide a simple and effective means to remedy it.
The costs of these items or services in game helps to establish the value of what is traded among players, either directly or through an auction house.
Actually, the game designers completely understand those human emotions like greed, envy, and fear.
They design these things into the game *on purpose*.
The economics of video games, in terms of GDP, is simple: people constantly produce things from a never-ending well of digital resource nodes.
As a result of crafting + drops + trash sales, the total gold on a server always goes up, causing unending price inflation.
It's a pretty cool example of what would happen in a world of plenty.
Game designers create gold sinks to give people - especially the stupidly rich - something to dump their gold into.
I was on the WoW servers soon after vanilla launched; some gamers (and some human beings in general) are just wired to enjoy amassing wealth and gaming the economy.
These sinks were created to ensure that those players end up doing minimal damage to the average player.
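The faucet-and-sink dynamic described above can be put into a toy simulation (all numbers are invented for illustration): faucets (drops, crafting, vendor trash) add gold every tick, while a sink drains a fraction of the outstanding supply. Without a sink the supply grows without bound; with one, it converges.

```python
# Toy model of an MMO money supply: faucets add gold each tick,
# a sink removes a fixed fraction of the circulating supply.
def simulate(ticks, faucet_per_tick, sink_rate):
    gold = 0.0
    for _ in range(ticks):
        gold += faucet_per_tick   # new gold enters the world
        gold -= gold * sink_rate  # sinks drain a share of the supply
    return gold

no_sink = simulate(1000, 100.0, 0.0)
with_sink = simulate(1000, 100.0, 0.05)
print(no_sink)    # 100000.0: supply grows linearly forever
print(with_sink)  # ~1900.0: converges to faucet*(1-rate)/rate
```

The fixed point follows from setting gold equal before and after a tick: g = (g + f)(1 - r), so g = f(1 - r)/r. That is the sense in which a well-tuned sink caps the inflation the average player feels.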
I believe NYC is implementing a bike-share program like the ones in Montreal, London, and other cities. Basically, you pay a subscription fee (or you can just pay as you go, of course) to get a bike from a docking station, ride it to another docking station closest to your destination, and push it into the station and forget about it.
The biggest problem, which does not at all make the system impractical, seems to be the availability of working bikes and open docks at the stations you choose. The city (Montreal) has trucks that go around and redistribute bikes fairly often, based on the real-time data the stations give them. Still, there have been several situations where, during peak hours, favorite "pick-up" and "drop-off" points mean there are no bikes for you to pick up, and if you do get a bike, there is no space at the station where you want to drop it off. There is usually a bike or two broken down at each station, so it's important to put every bike through a few tests to avoid frustration.
For me, the times it was most useful were either commuting, supposing a bike was actually available, or getting home after a late night out, after the metro had closed. I hate taxis and will only take one as a last resort, so it was awesome to just take a bike back home.
I liked it overall, but it's difficult for me to justify the added cost, since I'm not about to get rid of my metro pass. I would probably save money if I replaced my pass with bikes and packs of metro tickets, but that wouldn't work during the winter, when the stations go away.
This sounds like the most common reaction to users accidentally causing errors in data: lock everything down and only allow a very specific operation, and complain loudly and specifically (i.e., incomprehensibly) if they get out of line, in order to protect the data. Did someone find a new way to screw up? Add another restriction.
I sometimes wonder why these interfaces are so freaking common for anything not facing the general public.
Yep, and sites like this get the back-button treatment when they're public, but you don't find out about it until you start working there, when it's intranet crud.
There is a cool German word for companies like Facebook or Google, which collect mounds of information about their users:
Datenkrake: http://de.wikipedia.org/wiki/Datenkrake
The German word for octopus is indeed "Krake", and looking at the pictures in the article linked by the GP, I'd bet on the former, rather than the latter.
There's a cultural connotation to the octopus: its tentacles reach everywhere. So, a likely translation for "Datenkrake" would be something like "data octopus". We could also try "Big Data", in reference to "Big Pharma".
Video games allow the user to interact with the media and cause reactions to input. They allow us to explore causality at a very personal level, even though the systems and entities involved are entirely virtual.
Live theater allows the user to control what to focus on during each scene. If there is a conversation, we can choose who the "camera" is pointing at. We are also looking at real human beings interacting with each other. The experience is mildly interactive.
Passive screen-based media gives us neither set of choices.
How is looking at an image of live human beings "worse" than looking at live human beings (which is ultimately an image too)?
If the child is given a remote control, does that count as interactivity since he can choose what to look at while exploring causality?
By the way, I'm not arguing that TV is good for kids. I'm just trying to analyze the argument in favor of preventing infants from watching passive screen media.
You can't compare seeing human beings interacting in a "true 3D", real-life setting to the same thing happening on a screen. You don't get depth perception, and there are artifacts of recording in video and audio (even in modern HD shows) that make a real-life scene distinctly different. I don't know scientifically how that's important, I just know there's a difference. You know there's a difference if you ever see someone on TV and then see them in real life for the first time.
A remote doesn't explore causality inside the media; it only explores "when I press a certain button it will switch to a different show, which is not of my choosing and which I can't predict". The child might correlate pressing the same numbers with the same show at the same time of day. The "camera angle" interactivity in live theater doesn't get into causality at all, just different ways you can look at or listen to a scene; in video games, by contrast, causality is rarely so random.
Think of it at a very basic, I-don't-know-what-TV-or-videogames-are level: I press the channel up button on the remote, the image suddenly changes to something entirely different. If I do this a few hours later, the former image and the latter image are entirely different from before. Let's even assume I'm watching Netflix and I've figured out how to navigate menus: the menu is interactive, but the media I'm watching doesn't give me any control over what's happening inside the media. In a videogame, if I press a button a character will move, a gun will shoot, a menu will open. If I do the same thing a few hours from now, the same thing will happen. A different thing might happen in a predictable context: if my dude is in front of a wall he might climb the wall rather than jump when I press A, it's generally bad design to allow otherwise. The link between cause and effect is much more clear, and my role as an agent of cause is much more clear as well.
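The predictability being described, where the same button always produces an effect determined by visible context rather than by hidden randomness, can be sketched minimally (the context names and actions here are invented, not from any particular game):

```python
# Context-sensitive but deterministic input mapping: same input plus the
# same visible context always yields the same action, now or hours later.
def on_press_a(context):
    if context == "facing_wall":
        return "climb"   # context changes the action, predictably
    if context == "in_menu":
        return "select"
    return "jump"        # default action in the open world

print(on_press_a("facing_wall"))  # climb
print(on_press_a("open_field"))   # jump
```

Channel surfing lacks this property: the mapping from button to outcome changes hour to hour, so the viewer never becomes a clear agent of cause.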
I've always been curious, regarding "speaking with a Southern drawl, with a stuffed-up nose from a bad cold" ... are there non-native English speakers here who have had particular difficulty using speech-to-text because of an accent? Have the British, Irish, Scottish, Australians, Canadians, Jamaicans, or South Africans had trouble because of their dialect?
Speech-to-text sounds like it will be a constantly tough problem. Even humans aren't 100% accurate, or even 99% accurate, depending on the circumstances.