The documentation for adviseLong (Machine Learning) states:

"+BALANCED - enforce the same number of positive and negative target values by replication ..." and also in the Remarks:

"... negative and positive Objective values should be equally distributed. If in doubt, add +BALANCED to the method; this will simply copy samples until balance is reached."

This sounds like standard upsampling. However, a simple experiment shows that this is not the case: when I run one of the example scripts with the SIGNALS method and without BALANCED, the generated data has 58061 samples with the following objective stats:

Objective     -1      0      1
Count      28769    587  28705

Assuming that 0 counts as negative (which seems to be the case, yes?), the imbalance is (28769 + 587) - 28705 = 29356 - 28705 = 651 more negatives than positives. Now I run the exact same script, but with +BALANCED, and the data has 76241 samples with these objective stats:

Objective     -1      0      1
Count      37257    864  38120

So the result is indeed balanced: 37257 + 864 = 38121 negatives versus 38120 positives.

This means that 76241 - 58061 = 18180 samples were added, whereas plain upsampling by replication would only have added the 651 missing positives (see the sketch below).
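For reference, this is what I would have expected +BALANCED to do based on the wording "copy samples until balance is reached": plain upsampling by replication of the minority class. The following is only a minimal sketch in plain C (not lite-C, and not actual Zorro internals, just my assumption), using the counts from my run and assuming that 0 counts as negative:

```c
#include <stdio.h>

int main(void)
{
    /* Counts from my run without +BALANCED (assuming 0 counts as negative) */
    int negatives = 28769 + 587;            /* 29356 */
    int positives = 28705;
    int total     = negatives + positives;  /* 58061 */

    /* Plain upsampling by replication: copy 'deficit' minority samples
       until both classes are equal, so the total grows by exactly 'deficit'. */
    int deficit        = negatives - positives;  /* 651 */
    int expected_total = total + deficit;        /* 58712 */

    printf("deficit        = %d\n", deficit);        /* 651   */
    printf("expected total = %d\n", expected_total); /* 58712 */
    printf("observed total = %d\n", 76241);          /* actual +BALANCED output */
    return 0;
}
```

Under that assumption the balanced data set should have 58712 samples, not the 76241 that +BALANCED actually produced.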

What is going on? This is *critical* for training ML models, and I can give a specific example if anyone wants.