I'm thinking of moving my homelab over to SSDs sometime, maybe when my HDDs start failing. I remember when I could get a 1TB HDD for about 40 EUR, but now they're up to 55-70 EUR over here, whereas SSDs with the same capacity go for 60-65 EUR, and the lack of moving parts plus the performance increase are both hard to argue with. At higher capacities HDDs are still more affordable, but I wonder for how long. For now I still have enough SATA ports and PCIe SATA expansion cards. Current local prices, for comparison:
- 4TB SATA (2.5") SSD: Silicon Power Ace A55, 230 EUR
- 4TB NVMe (M.2) SSD: Crucial P3, 260 EUR
- 4TB SATA HDD: Seagate Barracuda, 108 EUR
so HDDs are still more affordable for larger storage, assuming you're okay with the shortcomings. I think the differences remain similar at higher capacities too.
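To put those listings on a common scale, here's a quick EUR-per-TB sketch (prices are just the snapshot quoted above and will drift):

```python
# Price-per-TB comparison for the drives listed above.
# Prices are the quoted local snapshot, not current market rates.
drives = {
    "Silicon Power Ace A55 (SATA SSD)": (230, 4),
    "Crucial P3 (NVMe SSD)": (260, 4),
    "Seagate Barracuda (SATA HDD)": (108, 4),
}

for name, (price_eur, capacity_tb) in drives.items():
    print(f"{name}: {price_eur / capacity_tb:.1f} EUR/TB")
```

So at 4TB the HDD is still under half the per-TB price of either SSD.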
The 12TB listing shows up as "Currently unavailable" for me, and the others show up as more expensive, so instead of 66 USD for a 4TB HDD I get 172 EUR on amazon.co.uk (Seagate Enterprise ST4000NM0035). The prices in the post above were from an e-commerce store in my country, so they also differ from Amazon's.
It's not brand new but refurbished if you actually click the Amazon link. The same goes for all the HDD listings there. If you only look at Seagate or WD, they're all $15/TB and up.
Interesting. For a Backblaze-type device with 60 drives, that's 3.6PB of raw storage.
Backblaze's site is blocked at work, but aren't they up to using 22TB drives? And I think Seagate announced 30TB spinning drives this year. So that's 1.3PB or 1.8PB with 3.5" drives.
Now if you doubled the number of drives (assuming those new Samsung drives are 2.5"), used the proposed 122TB drives, and could cool and power it all, you'd have ~14.6PB!
Amazing since the original design needed like 15 pods for a single petabyte.
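The back-of-the-envelope math in this subthread, as a quick sketch (drive sizes are the ones mentioned above; 120 bays assumes the 2.5" drives let you double up in the same chassis):

```python
# Raw capacity of a Backblaze-style pod: bays x drive size.
def raw_pb(bays: int, drive_tb: int) -> float:
    return bays * drive_tb / 1000  # decimal TB -> PB

print(raw_pb(60, 60))    # 3.6 PB with 60TB drives
print(raw_pb(60, 22))    # 1.32 PB with 22TB spinning drives
print(raw_pb(60, 30))    # 1.8 PB with 30TB spinning drives
print(raw_pb(120, 122))  # 14.64 PB doubled up with 122TB drives
```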
>Interesting. For a Backblaze-type device with 60 drives, that's 3.6PB of raw storage.
Backblaze storage pods are 4U, so that's 0.9PB per U. We've been able to store 1PB per U since ~2020 with EDSFF, so the density achievement isn't exactly new.
I don't see a price point here. It's probably around 5000 if it's competitive. This makes me wonder what other storage tech could achieve at similar price points.
Exciting times are ahead for anyone with technology for better non-volatile memory or better-scaling volatile memory.
Are we anywhere close to that? I just picked up a 14TB HDD for under $300.
I am also unclear on how much of a concern charge leakage is on an SSD. If I leave it disconnected on a shelf for a year, will it have lost data? I have seen some reports that SSDs need to be refreshed regularly, which makes them ill-suited for cold storage (e.g. leaving an offsite backup at my parents').
Where do you get them? Any kind of owner confirmation, or do you just roll the dice? I feel a bit queasy buying used drives, imagining one has been redlined at 95C for three years before I come along. That being said, I suppose the market of people buying such drives new probably implies they were treated appropriately.
Modern high-density HDDs don't have a great cold shelf life either. Depending on the HDD or SSD, you can see claims of 5-25 years of cold storage retention. You have to test it and actually see whether it holds, though :)
5+ years is already approaching my comfort limit on drives anyway, so that's not really a problem. I just want to know that I can stick a drive on a shelf for two years without undue stress, something which is not currently advertised for SSDs.
I take an annual backup and ship it offsite. Current wisdom says that the SSD is more perilous to recover than the HDD if I need to restore it within the next year or two.
Dunno that I've ever heard conventional wisdom saying NAND flash was particularly short-lived on the shelf vs other options. E.g. I've never heard of mass deaths of files on USB drives from sitting in a drawer for a year or two; it's usually just typical endurance/usage deaths from cheap flash.
Hell, I just took my very first SSD (60 GB OCZ Vertex 2) and booted the old install fine after it had sat on a shelf for ~7 years. Of course that just had the Windows boot files on NTFS, so I can't say for sure no bitrot had occurred. From the same nostalgia bin I also had an old 2 GB USB 2.0 flash drive with Ubuntu 9.10 on it that did validate the fs image fine.
I remember this guy (https://www.youtube.com/watch?v=igJK5YDb73w) was doing some intentional tests with modern cheapo drives, but it hasn't even reached time for the year-2 check quite yet.
PDF[0], but this industry presentation said that for a consumer drive kept at 25C, one should expect 58 weeks of unpowered data retention (page 27), and significantly worse at higher temps. I appreciate those are probably bare-minimum threshold numbers which have likely changed since, but that makes me uncomfortable vs an HDD, where I have not heard such a qualifier. Given the cost per TB, I will continue to use HDDs for offline backups.
Since at least 2010 there have been flash standards that define a retention floor, and no such standard exists at all for HDDs. Choose what you want, of course, but I'll take a 15-year-old minimal-standards guarantee over no standards guarantee at all.
Prices used to drop, up until last year, and then they went up 50%. Something about Samsung raising prices on a part that's used in most SSDs these days.
I'm confused. NVMe supports either SATA or PCIe connections depending on the key of the slot. And the drive in question appears to support PCIe connection to the housing based on the text of the article. What situation are you in that neither of these options works?
FYI, it's the M.2 connector that can support SATA or PCIe (or even USB, with an A or E key). NVMe is a protocol/command set that runs over some physical transport, like PCIe, RDMA, FC, or TCP.
So you can have M.2 SATA (not NVMe) or M.2 PCIe (NVMe) drives, both in the bubblegum-stick form factor. The drive from the article uses a U.2 connector, which also provides SATA and PCIe (some desktop boards did come with such a connector, alongside the M.2 slot).
I have a consumer motherboard. I have a hard time getting large drives that are fast enough if I am not willing to spend a lot of money on server hardware.
First you need a U.2 card (or whatever the newer standards are) that works, and then you need actual U.2 drives, which are friggin expensive.
With storage density reaching tens of terabytes per drive, SATA 6Gbit/s is not enough.
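As a rough sketch of why (assuming ~550 MB/s usable throughput for SATA 6Gbit/s and ~7000 MB/s for a PCIe 4.0 x4 NVMe drive, both ballpark figures):

```python
# Time for one full sequential pass over a drive at interface speed,
# e.g. for a rebuild or scrub. Throughputs are ballpark assumptions.
def full_read_hours(capacity_tb: float, mb_per_s: float) -> float:
    return capacity_tb * 1e6 / mb_per_s / 3600  # TB -> MB, then s -> h

print(f"24TB over SATA: {full_read_hours(24, 550):.1f} h")   # ~12 h
print(f"24TB over NVMe: {full_read_hours(24, 7000):.1f} h")  # ~1 h
```

A resilver that takes half a day per drive at best is a real operational cost, and it only grows with capacity.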