I just fail to see it.

I mean, I do see some benefit in such an approach.
Some rules indeed work only temporarily.

But I also feel it raises a bunch of new concerns and uncertainties.

Just wanted to highlight one of them: actually picking the OOS data.
No matter what your approach to robustness measurements is, they would be based on OOS data.
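
To make that concrete, here's roughly what I mean by "picking OOS data": either a plain chronological split or a rolling walk-forward. A minimal sketch under my own assumptions (a pandas DataFrame of bars or trades, a 30% OOS fraction, names I made up), not anyone's standard recipe - and whichever split you pick, every robustness number downstream inherits that choice.

```python
# Minimal sketch: two common ways to carve out OOS data.
# Assumes a pandas DataFrame of bars/trades ordered in time; the names and
# the 30% OOS fraction are arbitrary placeholder choices.
import pandas as pd

def chronological_split(df: pd.DataFrame, oos_fraction: float = 0.3):
    """Reserve the most recent oos_fraction of rows as out-of-sample."""
    split_idx = int(len(df) * (1 - oos_fraction))
    return df.iloc[:split_idx], df.iloc[split_idx:]

def walk_forward_windows(df: pd.DataFrame, is_len: int, oos_len: int):
    """Yield successive (in-sample, out-of-sample) windows, stepping by oos_len rows."""
    start = 0
    while start + is_len + oos_len <= len(df):
        yield (df.iloc[start:start + is_len],
               df.iloc[start + is_len:start + is_len + oos_len])
        start += oos_len
```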

I think my biggest problem is with the definition of over-fitting itself.
I don't know how to measure it (!) precisely.
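
The closest I can get to a number is something crude like the degradation between in-sample and OOS performance. A sketch under my own assumptions (per-period return arrays, Sharpe as the yardstick, 252 periods per year) - it's a proxy, not a definition of over-fitting.

```python
# Crude over-fitting proxy: how much does the Sharpe ratio degrade out-of-sample?
# A ratio near 1 suggests the edge survived; near 0 (or negative) suggests the
# rules mostly fit the past. Sharpe and 252 periods/year are assumptions.
import numpy as np

def sharpe(returns: np.ndarray, periods_per_year: int = 252) -> float:
    """Annualised Sharpe ratio of per-period returns (risk-free rate ignored)."""
    sd = returns.std(ddof=1)
    return 0.0 if sd == 0 else float(np.sqrt(periods_per_year) * returns.mean() / sd)

def sharpe_degradation(is_returns: np.ndarray, oos_returns: np.ndarray) -> float:
    """OOS Sharpe divided by in-sample Sharpe; lower means more degradation."""
    is_sharpe = sharpe(is_returns)
    return float("nan") if is_sharpe == 0 else sharpe(oos_returns) / is_sharpe
```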

I have seen (and made) algos that are obviously over-fitted and fail immediately on any different data.

But with such heaps of data and thousands of trades, I don't really believe it's that easy to over-fit.
More and more, the results legitimize themselves as The Unicorn.

--------

Btw, on actual performance - you might argue it's not half bad.
Sort of a break-even-ish month. One week was great (dunno, +30% or something), flattened out by the others.

Given the circumstances we currently live in, it's not really that much out of line.
In 2021, it shows about 3 flat months in a row. The Ulcer Index might not like it, but hey - that's the path I chose!
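
For reference, the Ulcer Index is the root-mean-square of percentage drawdowns from the running equity peak, so a long stretch parked below a previous high keeps it elevated even if the account ends flat. A tiny sketch, assuming just an array of equity values; the function name is mine.

```python
# Ulcer Index: root-mean-square of percentage drawdowns from the running peak.
# A flat stretch that sits below an earlier high still counts as drawdown the
# whole time, which is why the index "doesn't like" months like those.
import numpy as np

def ulcer_index(equity: np.ndarray) -> float:
    running_peak = np.maximum.accumulate(equity)
    drawdown_pct = 100.0 * (equity - running_peak) / running_peak
    return float(np.sqrt(np.mean(drawdown_pct ** 2)))
```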

--------

Much of the frustration comes from how hard it actually is to live trade.

You need bravery - sending out a bunch of money and putting it on the line isn't exactly easy.
Foolishness can help too (and backstab you later on).

You need resilience - hitting the stop loss like 5 times in a row may or may not tell you anything.

You need patience - those hours go by slowly. It's even worse when your gut feel keeps making bad predictions for days / weeks.

And then the stuff automagically happens and you are TOO LATE.
Either it failed or you are left with constant reminders that success ain't given freely and may very well disappear next week.

Ridiculous of me to initially think that sound alerts were a good idea.