Every so-called "true random noise" source will have limitations. This is obvious once you apply the many different test criteria: localized randomness, Shannon entropy (n, a power of 2), the autocorrelation histogram, Markov-chain analysis, and so on; e.g. the number of consecutive heads in a coin toss, the maximum run length, or the interval coefficient.
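Two of the simple tests mentioned above (empirical Shannon entropy and longest run of consecutive heads) can be sketched in a few lines; this is an illustrative sketch, not a full test suite:

```python
import math
from itertools import groupby

def shannon_entropy_bits(bits):
    """Empirical Shannon entropy per symbol of a 0/1 sequence."""
    n = len(bits)
    h = 0.0
    for v in (0, 1):
        p = bits.count(v) / n
        if p > 0:
            h -= p * math.log2(p)
    return h

def max_run_length(bits):
    """Longest run of identical symbols, e.g. consecutive heads."""
    return max(len(list(g)) for _, g in groupby(bits))

coin = [1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
print(shannon_entropy_bits(coin))  # near 1.0 bit/symbol for a fair coin
print(max_run_length(coin))        # 3
```

A real qualification would use a standardized battery (e.g. the NIST SP 800-22 tests) rather than these two statistics alone.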
I asked for a spec on your criteria. "Monte Carlo" is a method, not a criterion.
For example, we know that avalanche noise is a stochastic process with a multiplication factor set by the ratio of sufficient charge rate, or "threshold current", to noise pulse current (using the inverse ratio). The quantum-mechanical tunneling of carriers through the bandgap is limited by channel length and thickness, and by the conflicting requirements of low doping versus doping that yields a finite multiplication factor for the avalanche-to-leakage ratio, or the ratio of the energy of charge flow from excitation to the escape of a charged particle.
The above limitations result in a limited bandwidth range, in decades or powers of 2, for Shannon entropy. Although thermal avalanche noise can be non-repeating, memory-less, and said to have infinite entropy, it will have a limited bandwidth ratio. A simple PN junction is often limited to 6 decades by the RC time constants of the diode capacitance, using the leakage-resistance-to-ESR ratio, e.g. 10 MΩ / 10 Ω = 10^6.
This bandwidth ratio is equivalent to a Shannon entropy of roughly 3 bits per decade, i.e. ~18 bits for 6 decades (more precisely log2(10) ≈ 3.32 bits per decade, so ~20 bits). This is due to mass mobility and the RC time constants inherent in the physical channel.
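The decades-to-bits conversion above is a one-liner; here is the arithmetic for the 10 MΩ / 10 Ω example, using the log2(10) ≈ 3.32 bits-per-decade relation:

```python
import math

r_leakage = 10e6   # leakage resistance, ohms (example from the text)
esr = 10.0         # equivalent series resistance, ohms

bandwidth_ratio = r_leakage / esr       # 1e6
decades = math.log10(bandwidth_ratio)   # 6 decades
bits = math.log2(bandwidth_ratio)       # ~19.9 bits; log2(10) ~ 3.32 bits/decade
print(decades, bits)
```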
Thus the frequency span in decades will be finite.
Thus you need to define the entropy of your data, the frequency ratio or spectral density, the span of the distribution, or some measurable/testable criteria defined by experts. We know there are many solutions and ways to analyze this, but it depends on your specs... besides overlap and entropy of 120.
For example, it would be easy to make an LFSR of length greater than the 3DES key, i.e. 3 × 56 = 168 bits, and then shift out a new result for each processor. The memory requirement is simply the Shannon entropy value in bits, with an XOR function of the known MLS feedback bits. This could be sent to each CPU in a round-robin approach over real-time dual-channel DMA memory, with the time interval defined by the core CPU time to process the result, or perhaps in a faster, more clever way.
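The XOR-of-feedback-taps idea can be illustrated with a small maximal-length Fibonacci LFSR. This sketch uses the well-known 16-bit register with taps 16, 14, 13, 11 (polynomial x^16 + x^14 + x^13 + x^11 + 1, period 2^16 − 1) purely for illustration; the >168-bit variant suggested above would need a primitive polynomial of that degree, which I have not specified here:

```python
def lfsr16_bits(seed=0xACE1):
    """16-bit maximal-length Fibonacci LFSR (taps 16, 14, 13, 11).
    Yields one output bit per shift; the feedback bit is the XOR of
    the tap positions, and the sequence repeats every 2^16 - 1 steps."""
    lfsr = seed
    while True:
        # XOR the tap bits (positions 16, 14, 13, 11 from the MSB end)
        bit = (lfsr ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1
        # shift right and feed the new bit back into the top
        lfsr = (lfsr >> 1) | (bit << 15)
        yield bit

gen = lfsr16_bits()
first_bits = [next(gen) for _ in range(8)]
print(first_bits)
```

Each CPU in the round-robin scheme would simply consume the next block of bits from such a generator. Note that an LFSR alone is linear and predictable, so it is a whitener/spreader, not a cryptographic source by itself.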
There are better tools, but I need better requirements. Encryption is hard, but it is best defined by simple rules: the more complex it is, the greater the chance for holes in the randomness. Decoding by brute force is not always the best solution by Monte Carlo rules.
Rolling codes use a one-time key that changes on each iteration, as does the method of scrambling.
Do you know anything about the algorithm, spectral density, or packet size, or are you just trying to hack DES bank information codes?
https://en.m.wikipedia.org/wiki/File:Data_Encription_Standard_Flow_Diagram.svg