Originally Posted by Lapsa

One issue with measuring performance is that I'm constantly tinkering with it.
However - I do believe in such an approach. If you keep bashing squares into circular holes - eventually they become circles.
Backtest shows PF 1.7 (on a flat bet) but I see PF 0.9, maybe PF 1.2.


I don't know what your strategies and workflow look like, but "tinkering" here sounds a lot like overfitting, doesn't it? If you're adjusting parameters or adding new mechanisms because they increase the AR in the backtest, you can introduce all sorts of biases even with OOS testing - every time you look at the OOS results and then go back and change something, that data stops being truly out of sample.
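For what it's worth, the only thing that has kept me halfway honest is scoring each parameter change on data it never saw during tuning, walk-forward style. A rough sketch of what I mean in Python (run_backtest, param_grid and the candle list are just placeholders for whatever your setup uses, nothing standard):

```python
# Minimal walk-forward sketch. Assumptions: a hypothetical run_backtest(params, candles)
# that returns a profit factor, and candles as a plain list of bars.
from typing import Callable, Sequence

def walk_forward(candles: Sequence, param_grid: list[dict],
                 run_backtest: Callable[[dict, Sequence], float],
                 n_folds: int = 4, train_frac: float = 0.7) -> list[float]:
    """Split history into consecutive folds; optimize on the train part of each
    fold, then record the profit factor on the untouched test part."""
    fold_len = len(candles) // n_folds
    oos_results = []
    for i in range(n_folds):
        fold = candles[i * fold_len:(i + 1) * fold_len]
        split = int(len(fold) * train_frac)
        train, test = fold[:split], fold[split:]
        # Pick the parameter set that looks best in-sample...
        best = max(param_grid, key=lambda p: run_backtest(p, train))
        # ...and score it exactly once on data it has never seen.
        oos_results.append(run_backtest(best, test))
    return oos_results
```

If the out-of-sample profit factors across the folds look nothing like the in-sample ones, the tinkering is probably fitting noise rather than finding an edge.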

Originally Posted by Lapsa


About 600% of those ~1200% are from the May crypto crash, which is a black swan and doesn't repeat monthly.
I do think it's important to include the ability to surf such waves.


You have to be confident that your strategy wins this black swan event not by chance, though. I've also played around with the MATIC coin for some time, and the backtests were often dominated by that burst of volatility in May. But when it's only 2-3 trades that capture those insane price jumps, I just don't know how to tell whether that's random luck or whether the script can actually be on the right side of these moves reliably. And since I'm pessimistic, I'd personally rather exclude those periods from the backtest. I also don't include any mechanism in my scripts so far that specifically reacts to these moments.
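When I did want to put a number on it, I just recomputed the profit factor with and without the suspect window and compared. Rough Python sketch (the trade list here is made up purely to show the shape of the check):

```python
# Quick check of how much a handful of trades drive the result: recompute the
# profit factor with and without a chosen window (e.g. the May 2021 crash).
# The trades are hypothetical (close_time, pnl) tuples, not real results.
from datetime import datetime

def profit_factor(trades):
    gross_win = sum(pnl for _, pnl in trades if pnl > 0)
    gross_loss = -sum(pnl for _, pnl in trades if pnl < 0)
    return float("inf") if gross_loss == 0 else gross_win / gross_loss

def profit_factor_excluding(trades, start, end):
    # Drop every trade whose close time falls inside [start, end).
    return profit_factor([(t, pnl) for t, pnl in trades if not (start <= t < end)])

trades = [
    (datetime(2021, 4, 12), 35.0),
    (datetime(2021, 5, 19), 420.0),   # crash-day outlier
    (datetime(2021, 5, 20), 310.0),   # crash-day outlier
    (datetime(2021, 6, 3), -80.0),
    (datetime(2021, 6, 18), -55.0),
]
print(profit_factor(trades))                       # with the crash trades
print(profit_factor_excluding(trades,
      datetime(2021, 5, 18), datetime(2021, 5, 22)))  # without them
```

In this toy example the profit factor drops from roughly 5.7 to about 0.26 once the two crash-day trades are removed - which is exactly the kind of result where I wouldn't trust the backtest.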