They are run through a simulation of thousands of rolls. Zadmar builds his own apps to test all the Savage Worlds rules (as he does his own design-work for the system).
I believe it's a probability aggregate based on the fact that we're talking about splitting a difference of literally 1.4%. In his simulations, two Fudge dice apparently make up that difference.
Uhhh, "simulations" are completely statistically invalid (no matter how many times you run one). The math of the fudge dice should be calculable, which will determine if there are any benefits to them. But, no matter how you program it, a result from a "simulation" is always an anecdote.
You might want to double check the definitions of "statistics" and "anecdote".
Simulations don't give you exact probabilities. But if the simulation doesn't contain any errors, and we can discount any weirdness in the pseudo-random number generator (both of which are fairly safe bets for something as trivial as simulating dice rolls), it generates approximations based on a large sample set, i.e. statistics, not anecdotes.
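To make the distinction concrete: a dice simulation in this sense is just brute-force sampling, and you can check it against the exact answer. Here's a minimal sketch; the specific question (the chance that two Fudge dice total +1 or better) is my own illustrative choice, not necessarily the combination Zadmar tested:

```python
import random

def simulate(trials, seed=0):
    """Estimate P(2dF total >= +1) by rolling two Fudge dice
    (faces -1, 0, +1) 'trials' times and counting successes."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    faces = (-1, 0, 1)
    hits = sum(1 for _ in range(trials)
               if rng.choice(faces) + rng.choice(faces) >= 1)
    return hits / trials

# Exact answer for comparison: of the 9 equally likely face pairs,
# 3 total +1 or more (0+1, 1+0, 1+1), so the true probability is 1/3.
if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} trials -> {simulate(n):.4f}  (exact: {1/3:.4f})")
```

The estimate wanders around 1/3, and the wandering shrinks as the trial count grows; that convergence is the whole argument for (and the limitation of) simulation.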
There is never a forest so thick that you can't miss it for the trees, is there? Your linguistic pedantry aside (which is incorrect, anyway), I'll try to frame the point for you as simply as possible.

First, as the saying goes, the plural of anecdote is not data. An anecdote is a single experience; data is information about a representative sample. Hence my statement. Multiple runs of a simulation of dice are multiple single experiences. If the dice (or computerized representations of them) are actually fair, then by definition any throw of the dice is truly random and disconnected from any other throw. The key term there is "disconnected." One throw, ten throws, ten thousand throws, it doesn't matter.

We assume that, as the number of throws approaches infinity, the mean of those throws will approach some number, but... and pay close attention here... that is not necessarily true for ANY finite number of throws. So, while we can expect that our average will regress toward the mean over many throws of the dice, that is NOT guaranteed, because each throw is disconnected from the others (see "gambler's fallacy"). So, you cannot make meaningful assertions from a "simulation" of a thousand, or ten thousand, or one million dice throws, because there is always the possibility that your sample is skewed. Hence your throws are not "data" (part of the definition of "statistic"), they are anecdotes. They would need to be connected to be data, which they cannot be for fair dice.
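For whatever it's worth, the "how skewed can a finite sample be" question is itself quantifiable: you can repeat a simulation many times and measure the spread of the estimates. A quick sketch (a fair d6 and the target face "6" are just my stand-in example):

```python
import random
import statistics

def estimate(n, rng):
    """One simulation: fraction of n fair-d6 rolls that show a 6."""
    return sum(rng.randint(1, 6) == 6 for _ in range(n)) / n

# Repeat the whole simulation 200 times at each sample size and look
# at the spread of the estimates around the true value, 1/6.
rng = random.Random(42)
for n in (100, 10_000):
    runs = [estimate(n, rng) for _ in range(200)]
    print(f"n={n:>6}: mean={statistics.mean(runs):.4f}, "
          f"spread={statistics.stdev(runs):.4f}")
```

The spread shrinks roughly like 1/sqrt(n), so a badly skewed sample becomes increasingly unlikely as the trial count grows, even though it is never strictly impossible.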
So, as I said above, the only way to determine the accurate statistics for this combination of dice is via mathematics (the limit of the mean as the number of throws approaches infinity, etc.). So, I am not convinced by someone's "simulation" that the issues discussed above are ameliorated by the addition of fudge dice.
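The exact calculation being called for here is indeed easy to do by enumeration rather than sampling, since the outcome space is tiny. A minimal sketch for two Fudge dice (my own illustrative choice, not necessarily the exact combination under discussion):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

def exact_2df():
    """Exact distribution of the sum of two Fudge dice, found by
    enumerating all 9 equally likely face pairs (no sampling)."""
    faces = (-1, 0, 1)
    counts = Counter(a + b for a, b in product(faces, repeat=2))
    total = sum(counts.values())  # 9 outcomes in all
    return {s: Fraction(c, total) for s, c in sorted(counts.items())}

# -2: 1/9, -1: 2/9, 0: 3/9, +1: 2/9, +2: 1/9
print(exact_2df())
```

Because it enumerates every outcome and uses exact fractions, this gives the true probabilities directly, with no sampling error at all, which is exactly the kind of answer the objection above is asking for.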