Charging by the hour is a great way of making users minimize use of your software, which is exactly what you don't want your users to do. Some idiot will try this, but will quickly learn and revert or go under. It's especially dumb with something like MS Office, where the whole system is designed to keep users in the system 24x7 and reliant, rather than encourage alternate solutions for bits and pieces. Slash your software licensing costs by pasting text from Notepad rather than typing it in MS Office, survive this month's budget cuts, and eventually realize you can do without the software entirely.
> Some idiot will try this, but will quickly learn and revert or go under.
Or, more than likely, every major software company in a certain domain will coincidentally release metered software in the same time period and small indie developers will be trying to compete with products that have 30+ years of familiarity (and not really succeeding).
Then other software industries will adopt the system within months and it'll be inescapable.
This is precisely what happened with subscription services.
This is true; per-user pricing can result in stuff like customers sharing accounts. Although it's easier to counter that effect under per-user pricing by adding things like audit logs, which are very valuable (and even required) in some industries.
Doubt it. Not for Office. But for niche software like specialty engineering software, this is already the case. In the case of the software I build for a living, the billing granularity is minutes (and this is for traditional heavy desktop software, not some web SaaS).
Yes. With monthly billing you are either using the software and paying, or not using it and not paying. You wouldn't likely use it for one month and then have a month off before paying again. So there is no stress to optimise costs.
Well... I can tell you that engineers are not particularly stressed about turning off EC2 VMs they aren't using, or ever ever ever deleting useless junk from S3.
When it's someone else's dollar... most people don't sweat it too hard.
Statements like this make me feel like I've stepped into bizarro world,.. because I obsess over that.
To be clear though, this isn't wrong. The number of times I've gotten into shouting matches with people over deallocating cloud resources is not insignificant.
MS is pushing its cloud version of Office heavily. I doubt there will be a desktop version by then, and if there is, it will not have all the features of the cloud 365 version.
Office 365 Online exists to compete against GDocs, which has an incredible feature set for light/basic users.
But cloud Office is absolutely abysmal if you stray from the core features or need to do anything semi-difficult.
Online Word fails to faithfully reproduce complex layouts, and dealing with Asian languages results in weird font inconsistencies as well as layout errors.
Online Excel also fails on non-ASCII text. Complex Excel workbooks that are already slow are even slower online. And of course you can't access local resources if a workbook reaches outward like that.
The cloud cannot replace Office for contexts where the data is not allowed in the cloud. Physical custody of data remains a hard requirement for quite a few sectors. So does offline operation. It's difficult to imagine Microsoft simply writing off entire government departments around the world that are current customers.
I wouldn't be too surprised if it still all moves to subscription pricing though.
IIRC while on an internship at Broadcom, lots of EDA tools were utilized by sending a script to a (Sun Solaris?) grid compute cluster to save on license costs or due to licenses being billed per minute or cpu-minute. I do recall some internal "wall of shame" dashboard listing users with the most (idle?) minutes using an EDA tool.
It is more likely that the company has a fixed number of licenses for some product, and is using a grid to maximize the amount of work they get out of those licenses. That is the usual practice with expensive EDA software, to the extent that the vendors will help customers set it up.
Not the OP, but our product falls into the 'specialty engineering software' category. We build an enterprise-ready tool for teams working with Apache Kafka (https://kpow.io) and offer an hourly price.
The deliverable is a single Docker container (not a desktop app!) and right from the start, nearly three years now, we've been selling on the AWS Marketplace for 0.16c/hr.
We have a couple of pricing models, but on the consumption based model we stick to that clear, transparent hourly price. It's good for customers that choose that model, and good for us because we're engineers and we don't care for opaque enterprise sales practices.
We do a fair number of enterprise sales too, they require a negotiation because metering usage at an enterprise level is rarely practical. Even then the price is some calculation of usage/value + support. Keeps it simple, lets us attack a big market as a small team and stay focused on technical delivery, shortens the sale/negotiation time, and our customers know they're basically paying the same price as anyone else.
I'd rather not, but it's a specialty design/construction calculation software for the building industry. Userbase is ~1000 and the development effort is in the order of several hundred man years.
Nope, won't happen. In fact, their argument is straight-up incorrect:
> The first software was designed for, or sold to individual hardware manufacturers and distributed free with the hardware. In the next stage it was sold directly to end users. Next, it was sold under a subscription model to end users. In five years, it will be metered by the nanosecond to end users. This is not only predictable, but inevitable.
The kind of software they are talking about (operating systems, word processors and other enterprise productivity software) was never given out for free, and the volume sold directly to end users was (and is) negligible. It was always a per-user license negotiated between sales and procurement teams since back when business software first became a thing, and will be so for the foreseeable future.
I seriously doubt this. They can make way more money billing everyone per month than by the second. They might make more from the top percent of heavy users, but I bet most users use it less than an hour a month. There is no way that they can find a price that is worth charging someone who only uses it for a couple of minutes while also keeping it affordable for someone who uses it all day every day.
> They can make way more money billing everyone per month than by the second.
Based on what? Obviously the price per second will be higher than a fixed monthly rate. See Lambda vs EC2 pricing for example. AWS’ whole infrastructure is pay-per-use and they’re banking.
Software isn't a gym, though; it can be "used" by just keeping it in the background for multiple sessions in a tab somewhere, especially if the software does some monitoring so it's always active.
Whether this makes sense though depends on the software.
For me personally, I wouldn't terribly mind paying for Photoshop by the minute, because I only need it for an hour per year. That's extra cash that Adobe currently just doesn't see. Most professionals, though, would still keep paying monthly because they use it a lot.
How much are you willing to pay for your hour of usage per year? $1? $10? $100?
Is the amount of work it would take to implement a metered rate at a rate users would pay worth it to capture the people who don't use it as a regular part of their job?
We're dealing with a video streaming service, and the per-minute billing is interesting. We're trying to see if we can negotiate pricing; it turns out it gets expensive if the app gets used hourly or all day -- pricing is based on the number of participants and archive quality/duration.
There are other options, or you can build your own, but you pay for it in reintegration cost or quality.
Yeah, the ideal is to get customers (business or consumer) on a monthly subscription regardless of how much they use the software. There's just no advantage to metered billing in the vast majority of cases.
Yes, to me this incentivizes customers to use your software as little as possible, which seems like the opposite of what you would want to encourage: lock in.
Agreed. The division of Oracle I worked at had revenue that was close to impossible to predict, and they ended up making huge cuts to revenue just to convert it from usage-based to monthly recurring. At the time they also did not know whether the division was profitable, because it was so difficult to track usage-based revenue. Spoiler alert: they were gushing losses and had layoffs/reorgs for a long time.
In AWS, if you aren't using it, someone else could be. With SaaS office software this is not the case, or at least capacity is much more divorced from usage.
AWS isn't a software product as defined in the post though. It goes more along the lines of Office 365 or Adobe's tools, but those are just standard subscriptions and I doubt that'll change any time soon.
The difference with AWS (e.g. EC2) is that the people who run their workload in a few hours on an instance are the minority. Most people are going to have a base number of servers. The hourly billing tries to encourage customers to spend more to scale their resources with the demand on their service. Maybe having an extra N nodes on standby all the time isn't affordable for a customer, but they could afford those extra N nodes during peak hours.
For most or all AWS resources it makes sense to bill by relatively small time increments because you really are using hardware for those defined time increments. AWS charges by time increments because that is exactly what they are selling.
Most software runs locally. "What you say, jerf? It's 2022, man, keep up." By that I mean, in terms of resource consumption, most software uses mostly local resources. Your Slack server footprint is tiny compared to the resources eaten by your slack client. Websites explode in RAM and CPU compared to what it took to serve the resources to your browser. Streamed video is expensive to encode once, but will be decoded somewhere locally millions of times. Especially for systems large enough to be worth optimizing and where they're not some guy's first Ruby on Rails project (built before he really got how not to make 15,000 queries per web page) or something. Servers for Slack or Office 365 or a streaming video service are certainly Big Iron and expensive, but compared to the sum total of resources sitting on the client side they're tiny.
Charging by the second for Office 365 wouldn't be smart, because their marginal costs aren't all that related to how many seconds of use there is. (They're certainly correlated, obviously, but I can find you signals that are much more strongly correlated than simply seconds of use, like document size and how many times they reload the page.) So you get all the disadvantages of your user sitting there thinking "How fast can I get this done? Every second I delay I'm paying more!", but you're not saving the resources or anything.
Pure time-based costs make sense if you're selling truly time-based resources, but are questionable in other places. For a lot of services, if not the majority, the true nature of the marginal costs are really hard to divine, and the users don't much care anyhow, and the marginal costs are much smaller than you want to charge anyhow, so it's much easier to just charge them $10/month and get on with it, rather than do all the complicated math to discover that this user's marginal cost was $0.33 this month.
(If my choice of example numbers confuses you, be sure you know what a marginal cost is: https://www.shopify.com/encyclopedia/marginal-cost . There's a lot more to pricing a cloud service than marginal costs. With modern hardware, networking, and even a cursory stab at some optimization by the developers, the marginal cost of a lot of useful cloud services can be tiny. There's useful cloud services out there where the marginal cost of a user is probably less than a penny per month. All the other costs can be quite non-trivial, though!)
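The flat-fee-vs-marginal-cost point above can be made concrete with a toy calculation; the $10 flat price and the $0.33 outcome echo the comment, while the per-hour cost is a made-up assumption:

```python
# Toy comparison of a flat monthly price against a tiny metered
# marginal cost. The $10 flat price and the $0.33 result echo the
# comment above; the per-hour cost is a made-up assumption.

FLAT_PRICE = 10.00              # $/month, what you'd just charge
MARGINAL_COST_PER_HOUR = 0.03   # hypothetical $ cost to serve one active hour

def metered_cost(active_hours: float) -> float:
    """Marginal cost of serving a user for the given active hours."""
    return active_hours * MARGINAL_COST_PER_HOUR

for hours in (1, 11, 300):
    print(f"{hours:>4} h active: cost ${metered_cost(hours):.2f} vs flat ${FLAT_PRICE:.2f}")
```

Under these assumed numbers, even a 300-hour power user only costs $9.00 to serve, so a flat $10/month covers everyone without any metering machinery.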
> most software uses mostly local resources. Your Slack server footprint is tiny
That’s completely irrelevant for “most software.” What I pay as a company is for the value it provides (and for the people who work on it). For Office specifically the resources are minimal on both sides but it still generates millions for Microsoft, whether it’s native or web based.
You sound like you think you're disagreeing with me, but that's actually my point. There's no point in charging by a metric so disconnected from either the costs or the benefits of the software. It makes sense for AWS to charge that way, because it is in fact directly connected to how their resources are being used.
Metering software by the second seems worse for both the developer (whose revenue becomes less predictable than e.g. monthly billing) and the typical customer (who now needs to think about whether the particular document they want to write warrants paying for Office, or whether it's fine to just do it in GDocs or LibreOffice).
It benefits someone who just uses Excel for a short time to do their budgeting each month (because it saves them money), but it also decreases the value of that customer so much that Microsoft probably doesn't care that they get (wild guess) 5x as many of them.
It would make sense in fringe cases, but not for mainstream use.
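A back-of-envelope sketch of that trade-off, where every figure below is invented purely for illustration (only the "5x as many light users" multiplier comes from the comment above):

```python
# Hypothetical numbers: compare flat-subscription revenue from light
# users against metered revenue from 5x as many of them.

flat_price = 7.00    # $/month flat subscription (assumed)
per_minute = 0.02    # $/minute metered rate (assumed)

light_users = 1000   # light users under the flat model (assumed)
light_minutes = 30   # minutes/month a budgeting-only user spends (assumed)

revenue_flat = light_users * flat_price
revenue_metered = (5 * light_users) * light_minutes * per_minute

print(f"flat:    ${revenue_flat:,.2f}/month")
print(f"metered: ${revenue_metered:,.2f}/month")
```

Under these assumptions the metered pool brings in less than half the flat-rate revenue despite being five times larger, which is the sense in which each light customer "decreases in value".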
Purchasing access to exclusive content with pre-paid minutes is not a new thing, after all, but most people want either buy-once-pay-once or a "manageable monthly/annual rate", not a highly variable and potentially very spendy financial timebomb ticking away on their computers.
The vast majority (if not all) of the HN replies seem to be betting against it. Let me be the usual contrarian. The prediction is very specifically about business software, not any software (as the headline suggests). I actually think this isn't so bad, provided the following:
You get freemium usage every month. In that case, everyone gets to use Microsoft Office for free. And that is a huge lock in if it isn't already.
You have caps, just like AWS; the capped price will work out cheaper than straight per-second billing.
The subscription per month stays in place. Because companies like OpeX, but the usage model gives them a lot of flexibility. Especially in terms of cross selling to other software within the same company.
The only problem is that I only see this being possible if the software is run on a server and streamed to clients. Otherwise I don't see how they could meter it, and we are just back to the old crackz or warez era.
But who does AWS make the most money from? Those with cloud-native solutions, running almost entirely on lambda and dynamodb? Or those who have lift-and-shift-ed legacy solutions, running almost entirely on reserved EC2 and RDS instances? AWS has been saying for yonks that they'd love for all their customers to be cloud-native and to pay less. But they must be banking on many of their customers (especially the enterprise / government ones) not actually making that switch anytime soon.
An EC2 instance costs the same whether I use its RAM or not (the unused capacity is what gets sold as spot instances). I pay for Lambda only when it executes, and only exactly for what I used.
While true, it's not the whole story. Lambda is also approximately 1000x more expensive than EC2 per unit of compute time. So if your Lambda is only running 0.1% of the time or less (12 seconds per every 3 hours 20 minutes) you break even. If it runs more than that, Lambda is more expensive.
This is just a price comparison. There are other reasons -- convenience, visibility, integration with other services -- to choose Lambda sometimes and EC2 other times.
It's important to profile and know what your use case is!
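The break-even arithmetic above can be sketched directly; note the 1000x ratio is the rough figure from the comment, not a quoted AWS price, and real ratios vary by region, instance type, and memory size:

```python
# Break-even utilization for metered (Lambda-style) vs always-on
# (EC2-style) pricing, using the rough 1000x cost ratio cited above.

def break_even_utilization(price_ratio: float) -> float:
    """If active compute costs `price_ratio` times more per second than
    an always-on instance, costs are equal at 1/price_ratio utilization."""
    return 1.0 / price_ratio

u = break_even_utilization(1000.0)
print(f"break-even utilization: {u:.1%}")

# 0.1% of a 3h20m window is 12 seconds, matching the figure above.
window_s = 3 * 3600 + 20 * 60   # 12,000 seconds
print(f"active seconds per window: {window_s * u:.0f}")
```

Run your Lambda more than that fraction of the time and the always-on instance wins on price alone.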
Source on the price comparison? Or did you mean that Lambda is 1000x cheaper than EC2 [1][2]? It's obviously load dependent but functions-style code is going to typically be cheaper than even the smallest EC2 instance while handling more load (provided your RAM/CPU usage fits within the envelope it's optimized around).
Disclaimer: I work at Cloudflare on R2 and Workers is even cheaper than Lambda@Edge (provided your workload fits within the compute envelope). Billing is based on CPU time not wall time [3]. Workers Bundled pricing can be more cost effective in some cases if you only need <= 50ms of compute.
Yeah, but that's mostly for stuff that's supposed to always be available, like web services. I'll be damned if I'm going to start worrying about leaving my PC on because my OS charges by the second, or leaving certain windows open, or what background tasks are running. Not only would it be unmanageable, it'd be a constant source of tension no one would want in their lives.
The vast majority of AWS revenue comes from fixed-price multi-year reserve contracts. Billing by the second/minute/hour really only happens for hobbyists and small fish.
AWS do not mess around with price tiers and cost breaks for customers who can commit to some resources for a year or more, and I'm sure those will also be their bigger institutional customers. I think you nailed it.
Longbet: most popular software becomes completely free. Paid software simply can't compete and companies are better off getting their money through other means.
However, you are paying for the energy required to run the software, and energy prices may also spike and become non-negligible. So technically, you are already paying by the second.
So, if I keep my word document open on my computer and start doing other things, I still get billed? Not sure end users will like this or think it's easier.
The way I see the future of computing: You have what basically amounts to a keyboard, a screen, and a wifi chip, so that it can be sold at astonishingly low prices. No excess memory, no excess processing power, because all memory and storage will be rented out or extracted for data collection. Privacy will become a thing of the past since everything you do passes through a corporation that's doing the real computing. You will own absolutely none of your data or your programs, you are beholden to the whims of a corporation that can raise prices whenever they feel like, and it will become the norm because "wow cool $90 computer that can run any program".
People will absolutely buy inferior products if it's cheap enough and marketed well enough.
Based on historical trends it seems more likely to me that $90 computers of the future will simply be able to run the most advanced consumer software locally and thin clients will just be unnecessary complexity.
The hardware will probably have the capabilities to do so, but I can see consumer software reaching a point where much of the compute is performed remotely, and we're just being streamed a rendered interface.
That would require networks to be massively more reliable than they are currently. It's all well and good if you live somewhere where you have a rock-solid connection, but most of the world doesn't. And is unlikely to anytime soon.
Some very popular CAD software already works this way (Autodesk Fusion 360 and OnShape, possibly others). The people with money to spend on such things tend to have good network connectivity.
If the hardware has the capability of doing all my calculations locally, then why would I want to do it all remotely and just receive a rendered interface? So that I can have the "added convenience" of having my software unavailable during the occasional network outage? And the pleasure of network latency, even when there isn't an outage?
We've gradually been having control of the software we use wrestled away from us.
The (unfortunate) idea is that network latency will be low enough, and rendered interfaces will be good enough, that companies can make "streaming software" a compelling enough experience for the masses, to the point where it becomes normalised.
Another of the "you will own nothing, and be happy" predictions? Let's hope they don't come true (or at the least, fight against it as much as you can)...
> In five years, it will be metered by the nanosecond to end users. This is not only predictable, but inevitable.
The only people working at nanosecond precision are likely to want a long-term project budget to allow for budget planning.
> [..] specifically MS Office for business and Avid Pro Tools for film, will be metered in seconds, rather than distributed in a sales or monthly subscription model.
Businesses have budgets to manage, and those are typically allocated on the order of months or years. Not only that, but metering the seconds spent in these pieces of software would incentivize users to keep closing them, breaking their workflow.
I suspect we will see more license pools shared at companies for expensive software, with the possibility of pulling out additional licenses on-the-fly at an additional cost.
On the other hand, an industry that may want per-minute billing is law, where lawyers typically bill their clients by the minute. That said, it's hardly worth the effort and is just considered part of the operational overhead of an employee.
Metering hardware by the minute makes way more sense though, as it roughly correlates with wear. Things like electric scooters now charge per minute. But this is essentially the taxi model, which is obvious.
Restating your claim is not an argument, even if you tack on that it is "inevitable" without any evidence or actual argument to support that claim.
The answer to future shitty software that sucks ass is to use or write different software. I understand that lots of companies use Outlook/Exchange and won't switch anytime soon.
I think you meant to reply in thread to me, since I was the only commenter who said "inevitable" before your post. That being said, I said this was the inevitable state of full decentralization, not that full decentralization was itself inevitable. I cannot speak to how the market will ultimately decide on this topic, but I can build towards it, as I believe it to be an ideal state: the computing power of the world is currently sub-optimally used, and reaching an equilibrium would be beneficial overall.
The word "inevitable" is from the Longbet prediction itself
> "... In five years, it will be metered by the nanosecond to end users. This is not only predictable, but inevitable."
And that's the extent of the rationale for the prediction. The right thing to do when people insist their unfounded conclusions are "inevitable" is to take their money, but that's not an option here (the Long Now foundation keeps what they make on investing any "bet" for the duration, and the capital is paid to a charity of the winner's choice).
Fair, I missed the word inevitable in their prediction. I do personally believe it to be an inevitability, hence my investing my free time, energy and money into being a part of shaping that future. Ultimately any prediction in technology is defined by the whim of the market, so either my belief is correct and I can help in creating that future in a fairer way than some hyper-PaaS which does not impart an ability to earn or be an active part of that market segment (a future where AWS _is_ the core of computing is a terrifying future to me), or I am wrong, but the cryptography and distributed computing research I'm investing my time into will have beneficial uses regardless.
expect something more like open source platforms + file formats, commercially licensed plugins for non-commodity pieces of the puzzle
office <-> avid is a pretty broad spectrum. I'm sure there are ways in which office is not a commodity available for free on every platform, and as a libreoffice user I feel the limitations of libreoffice sometimes, but office software is mostly a commodity.
video + graphics software OTOH doesn't have a strong oss contender IMO
also unclear how AI will change the game -- will we sell software or trained models. will plugins from different training sets be 'incompatible' in some way that favors one-stop-shop vendors
I don't have much input on whether the gist of this is solid or not, but it's a very poorly structured wager. I can see unlimited ways in which the outcome could be disputed, such as Microsoft having tiers of Office with different names, none of which match what is listed, with some charging that way and some not.
This favors the challenger because they’re more likely to get a clear win.
If I were this person I’d do a lot of clarification.
Also, why Avid Pro Tools? That choice of something so niche makes me feel like the author knows something about that one.
I doubt this will happen for regular consumers, mainly because per-second billing is overly complicated and potentially easy to game by editing the reported seconds downward; and also because companies often have subscribers who never touch their software in a given month but remain subscribed, and those customers would be worth nothing under metering.
At the risk of being reductive: Make more of our software copyleft? Even weak copyleft?
As a thought experiment, imagine how the modern web might've played out had KHTML been released under the MIT license instead of the LGPL.
Bearing in mind: Google has other ways of controlling the market (search, android, youtube, gmail, "switch to chrome" everywhere... don't call it anti-competitive...), but they've had to work much harder to exert control than they would've -- IMO -- if blink/webkit/khtml had a more "corporate friendly" license.
Maybe I'm off base with my reasoning, but I see it as being about friction. We can't stop the inhuman profit-seeking machine from doing what it does, but we do have some (underutilized) tools to slow it down.
Form a "software engineering association" with fees paying for full time lawyers and lobbyists and try to pass laws that would prevent the scenario above. But expect a very organised and well funded opposition: the anti-privacy trends have big sponsors at the top.
The desktop version of O365 isn't really cloud software. It doesn't run in the cloud; it just stores its files there if you choose to do so, and checks its licences there.
The web version is, but it does have a lot of drawbacks.
The point of recurring pricing like subscriptions is to create stability in the business; billing software once a month is probably based on companies' financial cycles.
I don’t doubt this at all, but I suspect it will extend to all aspects of software, including the OS — it is the inevitable state of full decentralization. However, I expect also that the usage of software in being metered will also be countered by in-kind contributions, in that the power you feed back to the decentralized grid will compensate your use of the software you need.
Disclaimer: I am working on exactly this notion of a decentralized PaaS
I guess, except that the market for printers is beginning to show signs of swinging away from this, and not just at the pro level -- witness Epson's EcoTank range.
Personally I think this long bet is questionable, not least because computing power that can be sold by the second effectively already is (serverless functions, or cloud hosting, which is already sold in fractions of hours), but also because many of our computation requirements will be able to be satisfied by the dirt cheap microcontrollers of 2027, let alone the phones.
The specific software in his bet has a large enough setup and configuration burden that it seems unlikely to me there's much value in selling it in units of time shorter than a month.
Office applications in particular have an inevitable outright purchase or long term subscription model. You need them for all the things you do; this cost is baked into very ordinary overheads.
There are some narrow functions _in_ applications that you might want to pay for per use or per short burst of time. I can see the relevant app developers lowering the cost of their applications so they can sell CPU time for specific AI-driven or database-driven functions (advanced autotracers in Illustrator, advanced video processing calculations). But not the applications themselves.
Can someone on Hacker News please just win a lottery and start a decent printer company? Call it "Ethical Printing Corporation" (EPiC) and make something that doesn't take a million drivers, have no replacement parts, break after two years, and require those darn proprietary cartridges in a million variations.
It is really notable that there are successful open source, modular or standardised 3D printers but the projects to produce an equivalent document printer have struggled.
Printers appear to be an unusual niche in that everyone needs them cheaply, but they also need to be highly precise to print documents and graphics well. So you have these cheap devices that pull off something very difficult -- thousands of tiny tiny droplets of ink per inch, precisely aligned, again and again, very quickly. But with plastic shells, mass-produced injection-moulded mechanics, and a few high cost, enclosed components.
This is why they tend to have no replaceable parts, and integrated print heads. Because user-replaceable parts mean human-level tolerances, not factory machine tolerances, and human-level tolerances mean badly calibrated prints.
In short, nobody wants to have to learn to tune their document printer the way everyone with a 3D printer expects to have to do.
There are a few things that your EPiC printer could focus on -- standardised ink delivery, standardised print heads, modular linear rails. But it will be hard, because it is difficult to solve for all of those things cheaply.
If we could go back to e.g. the limitations of a golfball printer, we'd have ethical/open/modular implementations left, right and centre. Otherwise, be careful what you wish for, unless you really want to tinker with your printer.
My understanding is that creating the printheads themselves is fiendishly difficult to do, such that only a few companies have the technology. I imagine there are lots of patents involved too.
I have to say almost every major printer company now has models with ink tanks in the printer which you can just fill with a bottle. The printers are sold at a premium but I think the price for the hardware is more realistic.
There's a trend of manufacturing scarcity for bits. Users don't like it but companies will use it as a way to goose earnings.
Once a company finds a way to do it then all other companies will start to do it.
We've started to see something similar with all the paywalls that have gone up in the last decade.
A good example is wireless companies. Cell phone carriers could have gone the unlimited-subscription route many years ago, yet they continue to limit use. It's not a straight per-minute model, but you can divide the monthly charges by the minutes and it works out the same way.
The way to fight it is to go open source and the expansion of community projects like Wikipedia and such. But getting people to do work for the public good without compensation in the long term is next to impossible.
Charging by the period of time is coming. I'm 100% sure.
This will depend heavily on the use case. Generally, you want your users to be as engaged with your software as possible - meaning they actively use it as often as possible.
If you charge per second then they will find alternatives, and they'll have to think of ways to pause and resume the software when they are between tasks. This adds real costs and disincentivizes the customer from using your product. When they compare that billing to a per-user license, they might perceive the per-user license as lower cost.