Let's compare the two strategy development cycles in a nutshell.
Yours: take the full data set (in-sample) for tuning > skip out-of-sample testing > go straight to live market data > weeks later you realize that it performs poorly (most cases) or you have a unicorn.
The traditional way: take 75-90% of your data for tuning > test out-of-sample on the remaining part of your data set > you realize almost right away that it performs poorly (most cases) or you have a unicorn > once you have that unicorn, you run it on live market data for a final test ride (better safe than sorry).
Advantages of the traditional method: you save a lot of time, and you can compare out-of-sample results across multiple strategies / tuning settings (very important).
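To make the split concrete, here is a minimal sketch of that workflow in Python. Everything in it is illustrative: a random-walk price series stands in for your data, and a toy SMA filter (hold when price is above its moving average) stands in for your strategy. The point is the structure: tune the lookback only on the in-sample 80%, then check the winner on the held-out 20% before you ever think about live trading.

```python
import random

def sma(prices, n, i):
    """Simple moving average of the n prices ending at index i (inclusive)."""
    return sum(prices[i - n + 1 : i + 1]) / n

def strategy_return(prices, lookback):
    """Toy strategy: hold the asset whenever price is above its SMA.
    Returns the cumulative simple return over the given price series."""
    equity = 1.0
    for i in range(lookback, len(prices) - 1):
        if prices[i] > sma(prices, lookback, i):
            equity *= prices[i + 1] / prices[i]
    return equity - 1.0

# Synthetic random-walk prices (placeholder for real market data)
random.seed(42)
prices = [100.0]
for _ in range(1000):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

split = int(len(prices) * 0.8)          # 80% in-sample, 20% out-of-sample
in_sample, out_sample = prices[:split], prices[split:]

# Tune the lookback on the in-sample segment ONLY
best = max(range(5, 100, 5), key=lambda lb: strategy_return(in_sample, lb))

print(f"best lookback (in-sample): {best}")
print(f"in-sample return:     {strategy_return(in_sample, best):+.2%}")
print(f"out-of-sample return: {strategy_return(out_sample, best):+.2%}")
```

If the out-of-sample number collapses relative to the in-sample one, you have your overfitting answer in seconds instead of weeks of live trading.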
Your strategy shows a lot of potential when I look at the in-sample results, but your tuning method leads to overfitting. That's the main weakness you need to fix.