Train and Test Performance Reports #454329
09/04/15 12:37
Giorm Offline OP
Newbie
Joined: Sep 2014
Posts: 22
When I use this code (or when using WFO):
if (Train) set (SKIP3);
if (Test) set (SKIP1+SKIP2);

I would like to check and compare the train and test performance of my strategy. How and where can I find both reports?

Last edited by Giorm; 09/04/15 21:54.
Re: Train and Test Performance Reports [Re: Giorm] #454555
09/10/15 10:06
jcl Offline
Chief Engineer
Joined: Jul 2000
Posts: 27,977
Frankfurt
Training has no performance in that sense. But if you want to compare in-sample and out-of-sample, just change the line to:

if (Test) set (SKIP3);

and compare both performance reports.
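
For reference, a minimal lite-C sketch of those two test runs could look like the following; the placeholder strategy comment and the idea of swapping the active line between the two runs are just illustration, not something prescribed above:

// Sketch: run [Test] twice and compare the two performance reports,
// once with line A active (in-sample) and once with line B (out-of-sample).
function run()
{
  if(Train) set(SKIP3);          // train on 2 of every 3 weeks
  if(Test) set(SKIP3);           // A: test on the same (in-sample) weeks
//if(Test) set(SKIP1+SKIP2);     // B: test on the skipped (out-of-sample) weeks

  // ... strategy code with its optimize() calls goes here ...
}

Each run then produces its own performance report, and the two can be compared directly.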

Re: Train and Test Performance Reports [Re: jcl] #454567
09/10/15 13:24
Giorm Offline OP
Newbie
Joined: Sep 2014
Posts: 22
Thanks jcl,

I have some issues with this approach because I would have to change the code several times to achieve my objective. Let me try to explain why (there is probably a better way to do it).

I would like to implement a sort of complexity analysis similar to the one in the U. Jaekle / E. Tomasini book (Trading Systems: How to Develop a Trading Strategy), section 5.4 "Optimisation and over-fitting", in order to try to detect overfitting.

To implement this method I need to collect the in-sample and out-of-sample performance for every optimization step (at each step a new parameter optimization is added).
I thought it would be simple to collect both with a single Zorro run per parameter optimization (other software does this).

If that were the case and I had, for example, 5 parameters, I could execute 5 runs (as in TradeStation) and collect both the in-sample and out-of-sample performance for each optimization. But with Zorro, following your suggestion, I need to run 5*2 separate steps to collect all the reports that I need.

I am also wondering what method I could use to check the in-sample and out-of-sample performance when I use rolling WFO.

Anyway, if this is not possible in Zorro right now, do you think this functionality could be included in a future version?

Thanks

Re: Train and Test Performance Reports [Re: Giorm] #454570
09/10/15 15:41
jcl Offline
Chief Engineer
Joined: Jul 2000
Posts: 27,977
Frankfurt
You can do that in both cases, using NumTotalCycles = 2. In the first cycle set SKIP1+SKIP2, in the second cycle set SKIP3. Then you can calculate the ratio of the results or whatever you want to do with them. In the case of WFO, do a normal WFO run in the first cycle, and do a WFO run with StartDate set back by the length of the test period in the second cycle.

You have to write a few lines of code for this; there is no flag or option that does it automatically. I also do not really see what information comparing in-sample and out-of-sample results would reveal about the system. But if you have questions about coding it, just ask here.
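
A minimal lite-C sketch of that two-cycle setup might look like the following. The use of TotalCycle as the current cycle number, EXITRUN to detect the end of a cycle, and WinTotal/LossTotal as a crude profit readout are my assumptions for illustration; the actual strategy code and what you calculate from the two results is up to you:

// Sketch: compare out-of-sample and in-sample profit in a single [Test] run.
var ProfitOOS = 0;  // result of cycle 1 (out-of-sample weeks)
var ProfitIS = 0;   // result of cycle 2 (in-sample weeks)

function run()
{
  NumTotalCycles = 2;              // simulate the whole period twice
  if(TotalCycle == 1)
    set(SKIP1+SKIP2);              // cycle 1: only the out-of-sample weeks
  else {
    reset(SKIP1+SKIP2);            // in case flags persist between cycles
    set(SKIP3);                    // cycle 2: only the in-sample weeks
  }

  // ... strategy code with its optimized parameters goes here ...

  if(is(EXITRUN)) {                // end of the current cycle
    if(TotalCycle == 1)
      ProfitOOS = WinTotal - LossTotal;
    else {
      ProfitIS = WinTotal - LossTotal;
      printf("\nIS %.0f  OOS %.0f  ratio %.2f",
        ProfitIS, ProfitOOS, ProfitOOS/max(ProfitIS,1.));
    }
  }
}

For the WFO case the same two-cycle skeleton would apply, except that instead of the SKIP flags the second cycle shifts StartDate back by the length of the test period, as described above.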

Re: Train and Test Performance Reports [Re: jcl] #454577
09/10/15 19:10
Giorm Offline OP
Newbie
Joined: Sep 2014
Posts: 22
I attach some text and one image from the book that explain the logic better.
Do you think this method could help to detect overfitting?

[...]
The meaning of the trading system’s complexity

Figure 5.11: Finding an optimal rule complexity for system LUXOR for British pound/US dollar (FOREX) training and test period. The system’s input parameters are optimised for maximum total net profit, from left to right, one after another, within the training period 21/10/2002-28/2/2007 (blue line = training results). Then results are checked in test data range 1/3/2007-4/7/2008 (green line = test results).

How can you now interpret this behaviour of the trading system?
Let’s start within the diagram (Figure 5.11) from the left side when no system parameter is optimised. There the trading system has a very low complexity, since the applied rules are quite simple and not optimised at all. If you start to optimise the first parameter the system’s performance changes markedly. The raw and simple trading logic used so far can easily be made better. Interestingly when not many optimised parameters have been introduced the behaviour within the test range sometimes changes more than within the training range. The change can be much worse, but it can be better than the improvement that takes place in the training range. The reason for this behaviour is that a system that has only been optimised a little reacts very sensitively to parameter changes because there are not many parameters in place yet. The rule complexity and the predictability for your test set is low.
Furthermore, keep in mind that much of what happens in different market phases and areas is accidental and also depends on the market sample bias. It can be that the out-of-sample data period is more “friendly” to our trading system logic in a certain stage than the training data period. With further parameters being optimised or added the changes in the system’s reaction become smaller but still performance improves in the out-of-sample test data range. With the first three parameters being optimised our trading system reaches an important point: it reaches its optimal complexity.
From this point on every further optimised parameter (risk stop, trailing stop, profit target) decreases the system’s performance in the test region although the results still improve in the training region. You now have the situation of curve over-fitting. Every new optimised parameter improves the fit of the system to the training area but what happens here is more an adjustment to the existing market noise than an improvement in predictive capability. Thus the net profit within the test region does not become bigger with further optimised parameters but instead it decreases from the fourth parameter onwards. You now again have an out-of-sample deterioration.

[...]

Attached Files Fig.5.11.png
Re: Train and Test Performance Reports [Re: Giorm] #454594
09/11/15 09:24
jcl Offline
Chief Engineer
Joined: Jul 2000
Posts: 27,977
Frankfurt
I have that book too, but what they wrote in this section is in fact wrong.

A system normally cannot deteriorate from overfitting its parameters; only its test results become more inaccurate. This is not intuitive, but it is mathematically true under normal conditions. A system with the highest profit in a certain period also has the highest profit expectancy in another period. Of course this does not mean that it is profitable at all.

This is also true for the Luxor system from the book. The green performance peak in the image at "optimal complexity" is in fact more overfitted than the performance at "Rules too complex". The reason is that the whole system was adapted to the historical period covered by the book. Thus, adding or optimizing rules with in-sample data simply reduces the system's fit to the out-of-sample data. That's why you see lower performance there, not because of some overfitting effect.

The Luxor system rules were carefully selected to achieve the maximum return with the data period of the book. The system has no out-of-sample period at all. You should see all its results in that light.

Re: Train and Test Performance Reports [Re: jcl] #454599
09/11/15 10:41
Giorm Offline OP
Newbie
Joined: Sep 2014
Posts: 22
Thanks,
you have saved me a lot of work (I was starting to develop exactly that sort of complexity analysis, which is indeed wrong!)
smile

In the end, in your opinion, what is the best tool for detecting overfitting, and the simplest to implement with Zorro?

Bootstrap Reality Check?
White's Reality Check?
...

Re: Train and Test Performance Reports [Re: Giorm] #454600
09/11/15 11:07
jcl Offline
Chief Engineer
Joined: Jul 2000
Posts: 27,977
Frankfurt
White's Reality Check is the best. A system that survives that test really has an edge.

WRC is unfortunately impractical when you manually develop a system. It can be used when a system is developed solely by some mechanical process, without any human intervention or pre-selection. That's what I'm currently attempting on the "Financial Hacker" blog.

But for normal system development there is no perfect solution. The best is a combination of WFA and common sense, i.e. using only rules that are rational and exploit a real inefficiency.

