because of how proof of storage is performed by a distributed network of computers. Proof of storage by a decentralized network is nearly identical to proof of work: the stored data is repeatedly encrypted to generate a random string, and a computer then has to find a challenge string that, when hashed with the encrypted data file, produces an output beginning with a string of zeroes (the proof). Essentially, storage is proved over a period of work rather than a period of time. Additional computational power for a higher hash rate doesn't mean the data was stored longer, only that the server storing it had more processing power. If the function is being performed with storage in mind, it's better to use CPUs.
(1 kB / 30 seconds) * (1 second / 10000 MH)
...file stored for 30 seconds
(1 kB / 30 seconds) * (1 second / 100 GH)
...file still only stored for 30 seconds
the work performed is redundant in proof of storage
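The loop described above can be sketched as follows. This is a minimal illustration, not any real protocol's implementation: it uses SHA-256 in place of the encryption step, a hypothetical `find_proof` helper, and an arbitrary difficulty of four leading zero hex digits. The point it demonstrates is the one made above: a faster machine finds the challenge sooner, but that says nothing about how long the file was actually stored.

```python
import hashlib

def find_proof(data: bytes, difficulty_zeros: int = 4) -> int:
    """Brute-force search for a challenge nonce whose hash with the
    stored data begins with the required number of zero hex digits --
    structurally the same loop as proof of work."""
    target = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# The search time depends only on hash rate; whether it took a CPU
# ten seconds or an ASIC ten milliseconds, the file itself was held
# for the same duration either way.
stored_file = b"example file contents"
proof_nonce = find_proof(stored_file, difficulty_zeros=4)
```

More hashing hardware shrinks the search time without ever touching the disk again, which is why the work is redundant as evidence of storage.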