It's also widely misunderstood. Just because the spot price of electricity is set by the price of gas doesn’t mean the consumer pays that price for all of their electricity.
A lot of wind and solar are on Contracts for Difference. That means when market prices go above the agreed level, the generator pays the difference back through the scheme, which reduces supplier costs rather than the generator simply keeping the whole windfall.
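To make the mechanism concrete, here's a minimal sketch of how a CfD settles. The strike price and volumes are made-up numbers, not real contract terms:

```python
# Hypothetical sketch of Contract for Difference (CfD) settlement.
# Strike price and volume below are illustrative, not real figures.

def cfd_settlement(market_price, strike_price, mwh):
    """Payment to the generator per settlement period.
    Positive = top-up paid to the generator; negative = generator pays back."""
    return (strike_price - market_price) * mwh

# Wind farm with an agreed strike of £50/MWh selling 100 MWh:
print(cfd_settlement(market_price=40, strike_price=50, mwh=100))   # 1000: market below strike, generator topped up
print(cfd_settlement(market_price=120, strike_price=50, mwh=100))  # -7000: gas-driven price spike, windfall paid back
```

Either way the generator effectively receives the strike price, so a gas-driven spike in the spot market doesn't flow through to the consumer for this output.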
This is particularly relevant when, for example, the price of gas spikes due to the Iran war: it doesn't mean the consumer ends up paying more for the energy from wind.
> Presumably because the price is volatile, and storage gives you more flexibility around when you buy.
I will give credit to the person who got there before me. :)
Smoothing out price volatility is a big one.
But also it gives you options:
You can buy it "today" when it's cheap and store it for when you need it (e.g. the winter months).
You can also trade on that basis. For example, you can make a future-dated commitment to buy gas, knowing you have the storage available to take delivery. If the situation changes and you later find you don't need it, you can sell that contract to someone else (or still take delivery and re-sell the gas). But you can't do any of that without the ability to take delivery, because the person who sold you that future-dated contract wants both your money and to get the gas they sold you off their hands.
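The optionality described above can be sketched as a toy model. All prices here are invented for illustration:

```python
# Toy model (made-up prices) of why storage enables forward trading:
# you can only commit to future delivery if you can physically take it.

def forward_outcome(forward_price, later_spot, have_storage):
    """Per-unit result when a forward gas purchase comes due for delivery."""
    if not have_storage:
        return None  # can't take delivery, so the trade was never open to you
    # Take delivery into storage, then either burn the gas as planned
    # or re-sell it at whatever the spot price turns out to be.
    return later_spot - forward_price

print(forward_outcome(forward_price=30, later_spot=45, have_storage=True))  # 15: re-sell at a gain
print(forward_outcome(forward_price=30, later_spot=25, have_storage=True))  # -5: worst case, use it as planned
```

The point is that storage turns the forward contract into an option: the downside is bounded by your planned consumption, while the upside is open.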
Because if you have enough renewables and storage to eliminate gas from the mix you are no longer paying gas prices. The more often that happens, the cheaper your bills get.
For me, it's beyond doubt these tools are an essential skill in any SWE's toolkit. By which I mean, knowing their capabilities, how they're valuable and when to use them (and when not to).
As with any other skill, lacking it can be frustrating for your peers. I don't want colleagues wasting time on things that are automatable.
I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like sending one off in an agentic loop to produce a minimal reprex of a bug, or pinning down a performance regression by testing code on different branches, then you may be hampering the team's productivity. These are examples of things where I now have a higher expectation of precision, because it's so much easier to do more thorough analysis automatically.
There's always caveats, but I think the point stands that people generally like working with other people who are working as productively as possible.
One thing missing but important to understand is the energy embodied in buying 'stuff'. At a very rough approximation, a high percentage of the cost of stuff, especially cheaply manufactured consumer goods, is energy.
When you look at people's energy usage, quite a lot of it ends up being the embodied energy in the stuff they buy. For quite a lot of people, it's probably the largest category of energy consumption. I once had a very rough go at calculating this here: https://www.robinlinacre.com/energy_usage/
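A back-of-envelope version of that calculation might look like the following. The energy-intensity figure and annual spend are rough placeholders I've invented for illustration, not measured data; substitute your own estimates:

```python
# Back-of-envelope estimate of embodied energy in purchases.
# Both numbers below are assumed placeholders, not measured values.
EMBODIED_KWH_PER_GBP = 0.5    # assumed energy intensity of cheap consumer goods
annual_spend_on_stuff = 3000  # £/year, hypothetical household

embodied_kwh = annual_spend_on_stuff * EMBODIED_KWH_PER_GBP
print(f"~{embodied_kwh:.0f} kWh/year embodied in purchases")
```

Even with conservative placeholder numbers, this lands in the same order of magnitude as typical direct household electricity use, which is why it can end up being the largest category.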
One gram of finished 3nm packaged semiconductor is roughly equivalent to half a kilogram of refined aluminum in terms of energy cost. If you want to spend a lot of energy for not much mass, photolithography is fantastic.
Love this! Would be super interested in any details the author could share on the data engineering needed to make this work. The vis is super impressive but I suspect the data is the harder thing to get working.
Most of the time and energy has gone into getting my head around the source data [0] and its industry-specific nuances.
In terms of stack I have a self-hosted Dagster [1] data pipeline that periodically dumps the data onto Cloudflare R2 as parquet files. I then have a self-hosted NodeJS API that uses DuckDB to crunch the raw data and output everything you see on the map.
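For a flavour of the DuckDB step, an aggregation over the parquet dumps might look something like this. The bucket path and column names are hypothetical (R2 is exposed through its S3-compatible API, which DuckDB's httpfs extension can read):

```sql
-- Hypothetical query: real bucket, table and column names will differ.
SELECT region, sum(volume) AS total_volume
FROM read_parquet('s3://my-bucket/dumps/*.parquet')
GROUP BY region
ORDER BY total_volume DESC;
```

The appeal of this setup is that DuckDB can query the parquet files in place, so the API layer needs no separate database server.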
My wife's old company, a fairly significant engineering consultancy, ran its entire time/job management and invoicing system from a company-wide, custom-developed Microsoft Access app called 'Time'.
It was developed by a single guy in the IT department and she liked it.
About 5 years ago the company was acquired, and they had to move to their COTS 'enterprise' system (Maconomy).
All staff from the old company had to do a week-long (!) training course in how to use it, and she hates it.
In future I think there will be more things like 'Time' (though presumably not MS Access based!)
> In future I think there will be more things like 'Time' (though presumably not MS Access based!)
That's my assertion: things like 'Time' can be developed by an AI precisely because they don't require an existing community of practitioners to hire from.
It's an example of a small ERP system - no consultants, no changes, no community, etc.
Large systems (Sage, SAP, Syspro, etc) are purchased based on the existing pool of contractors that can be hired.
Right now, if you had a competing SAP/Syspro system freshly developed, that had all the integrations that a customer needs, how on earth will they deploy it if they cannot hire people to deploy it?
Not to mention Sage's midline - Sage100 is incredibly cheap and effective for its cost. I mean, it's ridiculous what mature software can do. Everything under the sun, basically, for a pittance.
It's certainly not "SAP 10 million dollar deployments". We rarely see implementations run into six figures for SMB distributors and manufacturing firms. That's less than most of their yearly budget for buying new fleet vehicles or equipment.
I still think MS Access was awesome. In the small companies I worked at, it was used successfully by moderately tech-savvy directors and support employees to manage ERP, license generation, invoices, etc.
The most common gripe was concurrent access to the database file, but I think that was solved by backing the forms with ODBC connections to a proper database server.
It looked terrible but also was highly functional.
Agreed! The first piece of software I built was a simple inventory and sales management system, around 2000. I was 16 and it was just about my first experience programming.
It was for school, and I recently found the write up and was surprised how well the system worked.
Ever since I've marvelled at how easy it was to build something highly functional that could incorporate complex business logic, and wished there was a more modern equivalent.
Grist[1] is great for this stuff. At first glance it's a spreadsheet, but that spreadsheet is backed by a SQLite database, and you can put an actual UI on top of it without leaving the tool, or write full-blown plugins in JavaScript and HTML if you need to go further than that.
Just another yay for Grist here! I've been looking for an Access alternative for quite a while and nothing really comes close. You can try hacking it together with various BI tools, but nothing really feels as accessible as the original Access. While it's not a 1:1 mapping and the graphical report building is not really there, you can still achieve what you need. It's like Access 2.0 to me.
Access as a front end for MS SQL Server ran great in a small shop. Seems like there was a wizard that imported the Access tables easily into SQL Server.
I've not seen anything as easy to use as the Access visual query builder and drag-n-drop report builder thing.
Agree. Much of the value of devs is understanding the thing they're working on so they know what to do when it breaks, and knows what new features it can easily support. Doesn't matter whether they wrote the code, a colleague wrote it, or an AI.