Aren't you scared of brute-force optimising?
How often is the system retrained?
How many parameters are trained?
Are the trained parameters within a small or a wide range?
How do you define "system receives a hit"?
I address these questions
here and
here.
Executive summary: I'm not concerned, because of the large volume of stock data the system is trained against and the nature of the parameters being trained (mostly broad risk-management parameters). I'd also refer you again to the N=1000 shuffle test, which was performed entirely on out-of-sample data.
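The original posts don't spell out the mechanics of the shuffle test, so here is a minimal sketch of one common form: a permutation test that checks whether the strategy's P&L could plausibly have arisen by chance, by breaking the alignment between signals and returns N=1000 times. The function name `shuffle_test` and the toy data are my own illustration, not the author's actual implementation.

```python
import numpy as np

def shuffle_test(signals, returns, n_shuffles=1000, seed=0):
    """Permutation test: what fraction of random signal orderings
    match or beat the real strategy on the same return series?
    (Illustrative sketch, not the author's exact procedure.)"""
    rng = np.random.default_rng(seed)
    signals = np.asarray(signals, dtype=float)
    returns = np.asarray(returns, dtype=float)
    actual = float(np.sum(signals * returns))  # real strategy P&L
    beats = 0
    for _ in range(n_shuffles):
        shuffled = rng.permutation(signals)    # destroy signal/return alignment
        if np.sum(shuffled * returns) >= actual:
            beats += 1
    # empirical p-value: small means the edge is unlikely to be luck
    return beats / n_shuffles

# toy example: a hindsight signal that is long exactly on the up days
returns = np.array([0.01, -0.02, 0.03, -0.01, 0.02, -0.015, 0.025, -0.005])
signals = (returns > 0).astype(float)
p = shuffle_test(signals, returns)
print(p)
```

Run on out-of-sample data, a low p-value here is evidence the system's edge survives outside the training window, which is the point of the N=1000 test mentioned above.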