In your example, you must retrain every 25 weeks, as this is the length of a test cycle. It should be the time span 26.02.2009-22.01.2014 divided by 10. Your calculation was wrong because the cycles overlap.
Thank you! I think I'm finally grasping this. And I see that you're already providing the reTrain period on the Perf Report (25 weeks), so it seems I really don't need to calculate anything.
So according to Wolfram Alpha, the (test) period from 26.02.2009-22.01.2014 is a time span of:
1791 days
or 1279 weekdays
1791 days / 10 cycles ≈ 179 days per cycle, and 179 days / 7 days per week ≈ 25 weeks
1279 weekdays / 10 cycles ≈ 128 weekdays per cycle, and 128 weekdays / 5 weekdays per week ≈ 25 weeks
Therefore, my understanding is that, in order to keep parameter re-optimization on the same schedule as the one used in the WFO simulation, I would want to reTrain the strategy 25 weeks after the latest test date, 22.01.2014.
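As a quick sanity check of the arithmetic above, here is a small Python sketch (dates taken straight from the post; the truncating integer division mirrors how 179/7 rounds down to the 25 weeks shown on the Perf Report):

```python
from datetime import date, timedelta

start = date(2009, 2, 26)   # WFO test period start (26.02.2009)
end = date(2014, 1, 22)     # WFO test period end (22.01.2014)
cycles = 10                 # number of WFO test cycles

span_days = (end - start).days            # total test span -> 1791 days
cycle_weeks = (span_days // cycles) // 7  # days per cycle, truncated to weeks -> 25

# Next reTrain falls one cycle length after the latest test date
next_retrain = end + timedelta(weeks=cycle_weeks)
print(span_days, cycle_weeks, next_retrain)  # 1791 25 2014-07-16
```

Under this reading, the first live reTrain would be due around 16.07.2014, i.e. 25 weeks after 22.01.2014.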
That brings up a few followup questions I'd love to hear your thoughts on:
1) If the human reTrains earlier than the 25 weeks, do you believe there could be a negative or different impact? I believe there could be, for the same reason that 11 WFO cycles may not produce the same results as 12 WFO cycles, for example. I've found in testing that there is a performance "sweet spot" depending on how often a logic's parameters are re-optimized;
2) I think we briefly discussed this before... but is it advisable to simply hardcode an automated reTrain into the strategy logic? For example, if the simulation EndDate is 22.01.2014, the strategy could be scripted to initiate a reTrain at every subsequent 25-week interval. Is there any reason this should not be done? I would want this not because I would leave the bot running unattended for long periods, but because, as a human, I would likely fail to remember the bot's required reTrain schedule.
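For what it's worth, the trigger logic I have in mind for question 2 could look something like the sketch below. This is purely illustrative: `next_retrain_due` is a name I'm inventing, and the actual re-optimization call would be whatever the trading platform provides, so it's only stubbed out in a comment here.

```python
from datetime import date, timedelta

RETRAIN_INTERVAL = timedelta(weeks=25)  # cycle length from the Perf Report

def next_retrain_due(last_trained: date, today: date) -> bool:
    """True once a full WFO cycle has elapsed since the last (re)train."""
    return today - last_trained >= RETRAIN_INTERVAL

# Hypothetical driver: check once per session whether a reTrain is due.
last_trained = date(2014, 1, 22)  # simulation EndDate
for today in (date(2014, 7, 15), date(2014, 7, 16)):
    if next_retrain_due(last_trained, today):
        # platform-specific re-optimization call would go here
        last_trained = today
```

The point is just that the schedule lives in code rather than in my memory; the strategy itself decides when the 25-week interval has elapsed.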
THANKS