An Anonymous Coward writes:
MP3 decoding was already free and was recently included in Fedora. Now encoding is free as well, according to the Fraunhofer Institute for Integrated Circuits IIS: "On April 23, 2017, Technicolor's mp3 licensing program for certain mp3 related patents and software of Technicolor and Fraunhofer IIS has been terminated." The Wikipedia MP3 article confirms this.
So, do you still use an MP3 library or have you switched to another format or means of listening to music such as (spying built-in) streaming services?
Market segmentation is one reason why more than one audio format survives. An organisation supplying audio streams will want to minimise bandwidth costs, so it will use a format that is just good enough to retain most of its customers. That leaves room for a better format to be sold at a price premium - which is exactly what you see with Tidal.
From a practical point of view, as the cost of storage drops, it makes sense to store uncompressed audio - especially as you don't have to spend CPU cycles and power to decompress it. I suspect that a future audio format will actually be expanded relative to the source by the addition of error-correcting codes. This would keep the stored data stable and mitigate bit rot. It is already done transparently on CDs - they do not store WAVs; they use Reed-Solomon error-correcting codes (out of necessity) to allow the audio data to be stored 'losslessly' on CD media. Anyone with an old CD collection knows how well, or not, that works - errors are hidden until they can't be any more, at which point everything suddenly collapses.
- Explanation of Reed-Solomon: https://www.usna.edu/Users/math/wdj/_files/documents/reed-sol.htm [usna.edu]
- Some background on error correction and CD ripping: http://docs.linn.co.uk/wiki/index.php/CD_Ripping_Terminology#Is_the_CIRC_error_detection.2Fcorrection_process_perfect.3F [linn.co.uk]
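Real Reed-Solomon works over finite fields and can correct many errors or erasures; as a much simpler stand-in, a single XOR parity block (RAID-5 style) shows the basic idea of adding redundancy so that one lost block can be rebuilt. The function names below are my own invention for illustration:

```python
def add_parity(blocks):
    """Append one parity block: the XOR of all data blocks.

    Toy stand-in for Reed-Solomon - real RS codes use finite-field
    arithmetic and can tolerate multiple erasures, not just one.
    """
    parity = 0
    for b in blocks:
        parity ^= b
    return blocks + [parity]

def recover(stored, missing_index):
    """Rebuild the single erased block by XOR-ing all the others.

    XOR is its own inverse, so the surviving blocks cancel out and
    leave exactly the missing one.
    """
    val = 0
    for i, b in enumerate(stored):
        if i != missing_index:
            val ^= b
    return val

data = [0x41, 0x42, 0x43]          # three "audio" bytes
stored = add_parity(data)          # [0x41, 0x42, 0x43, 0x40]
print(recover(stored, 1))          # rebuilds 0x42 even though it was lost
```

The CIRC scheme on CDs layers two interleaved Reed-Solomon codes, which is why scattered errors are corrected silently right up until the damage exceeds the code's budget.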
I suspect that a future audio format will actually be expanded relative to the source, by the addition of error-correcting codes.
The future is old news [wikipedia.org]. But yeah, I also expect that to become standard in the future, and not just for those of us into long-term archival.
I wasn't explicitly thinking of PAR (which I knew about), but rather something more like a raptor code ( https://en.wikipedia.org/wiki/Raptor_code [wikipedia.org] ). I don't know if it is possible, but it would be good if you could preserve the characteristic of being able to recover a full copy of a file from any (sufficiently large) subset of the encoded blocks AND make the recovery probability 1 once you have a subset above a certain size. At the moment, recovery appears to be probabilistic, so with a raptor code you can only minimise the probability of unrecoverable error, not make it zero for a reasonable subset.
What appeals to me about raptor codes is that the error-correcting redundancy is spread across the totality of the blocks - there isn't a separate parity file (just as Reed-Solomon doesn't have a separate parity file). This makes the encoding robust against both the loss of a contiguous run of blocks and the loss of a random scattering of blocks throughout the file, which I think is useful.
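Raptor codes are built on LT (fountain) codes: each transmitted block is the XOR of a random subset of source blocks, and a "peeling" decoder unravels them. A toy sketch of that idea, with function names of my own choosing, integer blocks for brevity, and a crude uniform degree distribution where a real LT code uses a robust soliton distribution:

```python
import random

def encode(blocks, n_encoded, seed=0):
    """Produce n_encoded fountain-coded blocks, each the XOR of a
    random subset of the k source blocks (plus the subset's indices)."""
    rng = random.Random(seed)
    k = len(blocks)
    out = []
    for _ in range(n_encoded):
        degree = rng.randint(1, k)          # crude; not a soliton distribution
        idxs = set(rng.sample(range(k), degree))
        val = 0
        for i in idxs:
            val ^= blocks[i]
        out.append((idxs, val))
    return out

def decode(encoded, k):
    """Peeling decoder: resolve degree-1 blocks, subtract recovered
    sources from the rest, repeat.  Returns the k source blocks, or
    None if this particular subset of encoded blocks is insufficient."""
    pending = [[set(idxs), val] for idxs, val in encoded]
    known = {}
    progress = True
    while progress and len(known) < k:
        progress = False
        for item in pending:
            idxs, val = item
            for i in [j for j in idxs if j in known]:
                idxs.discard(i)             # subtract an already-known source
                val ^= known[i]
            item[1] = val
            if len(idxs) == 1:              # degree 1: source block revealed
                (i,) = idxs
                if i not in known:
                    known[i] = val
                    progress = True
    return [known[i] for i in range(k)] if len(known) == k else None
```

Any sufficiently large subset of the encoded blocks will usually decode, regardless of which blocks survive - but success is probabilistic, which is exactly the limitation described above: you can drive the failure probability down, not to zero.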