I am not an expert on programming, algorithms, or the NMEA stream, but my concern was that the parser might strip out and delete too much raw data, which would limit the pool an algorithm has to work with.
Would it really add that much server load to send all the streams up and parse them server-side? They could be processed once using the currently accepted method, and if the parsing theory ever changes, the raw source data would still be there.
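To make that concrete, here is a rough sketch of what I have in mind, in Python purely for illustration (the names `RawSentence`, `parse_v3`, and `PARSER_VERSION` are mine, not from any existing project): archive every raw sentence untouched, validate checksums rather than discarding anything, and tag parsed output with a parser version so the whole archive can be re-parsed whenever the accepted method changes.

```python
import re
from dataclasses import dataclass

@dataclass
class RawSentence:
    received_at: float  # arrival timestamp
    raw: str            # untouched NMEA sentence, checksum and all

def checksum_ok(sentence: str) -> bool:
    """Verify the NMEA checksum (XOR of chars between '$' and '*')
    without modifying or discarding the raw sentence."""
    match = re.fullmatch(r"\$(.*)\*([0-9A-Fa-f]{2})\s*", sentence)
    if not match:
        return False
    body, expected = match.groups()
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return calc == int(expected, 16)

# Bump this whenever the "currently accepted method" changes.
PARSER_VERSION = 3

def parse_v3(sentence: RawSentence) -> dict:
    """Current parsing pass; the raw archive is never altered,
    so a future parse_v4 can simply re-run over the same data."""
    fields = sentence.raw.split("*")[0].lstrip("$").split(",")
    return {
        "talker_type": fields[0],
        "fields": fields[1:],
        "parser_version": PARSER_VERSION,
        "checksum_ok": checksum_ok(sentence.raw),
    }

# Re-deriving results later is just replaying the archive:
#   results = [parse_v3(s) for s in load_all_raw_sentences()]
```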
If this concept isn't penny-wise and pound-foolish, I would be willing to fund all the CPU power we need to give algorithm developers the best chance of success. At the same time, we don't need to pay to store and process junk.