I figured something out today that I'm excited to start playing with. I think it could have a positive impact on the robots I'm working on.
I am using the optimizer in ways that probably don't align with its original purpose, which I understand is to select the most robust value in a range. Usually this translates to the center of a broad hill. That is smart, because if the optimal parameters shift slightly over time, they should still land in a profitable zone. The theory, as I understand it, is that the "highest" value in a given range may not be the "best" value, especially if it is an outlier or spike. Ideally we want to see a broad hill of high objective values.
However, in some circumstances I want the optimizer to choose the highest value, not the most robust value. This is useful for functions where neighboring parameter values are not necessarily related to each other. For example, in my marketOpenCombo() function, I want to select the best combination of days for trading. There are 15 combinations to choose from, but each is unique and not necessarily related to its neighbors. Therefore, while a broad hill could appear in the opt chart, it may not. Even worse, the best combination of days could be ignored because it looks like an outlier or spike.
By observing visually how Zorro chooses optimizer values, I noticed that in cases where no broad hill exists (for example, when most of the values are good), Zorro prefers to have at least a handful of good values to choose from. It occurred to me that in an opt chart stepping from 1 to 15, I could possibly trick the optimizer by simply elongating the return values for each point of interest.
In other words... instead of only taking readings at 1, 2, 3, 4, ... 15, I wanted to see if I could simulate this instead: 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, ... 15, 15, 15.
What I didn't realize was how easy this actually is to accomplish. Zorro's optimizer works on float values, while my purposes require int values. When the conversion takes place, it simply truncates the decimal portion and keeps the whole number. Therefore the following sequence accomplishes exactly what I want: 1.05, 1.25, 1.65, 2.07, 2.32, 2.78 ... The effect is that the optimizer now sees three readings per whole number, and apparently that qualifies as a hill and exempts them from being treated as an outlier or spike.
Look at these example screenshots to see better what I'm talking about. All that is required is a slight variation of the optimize() call to get Zorro to choose the highest value instead of the most robust value (which may or may not be the same).
original call (test values 1 through 15, step 1 between each): optimize(15,1,15,1);
adjusted call (test values 1 through 15, with 3 steps per whole number): optimize(15,1,15.99,.33);
Re: Tricking optimizer to pick highest value instead of most robust
[Re: dusktrader]
#437173 - 02/11/14 12:42
This is interesting. I read your original work on using bitwise operations to optimise days of the week, and I was concerned about this very issue of the meaninglessness of adjacency.
I too have struggled to factor the day of the week into my analysis, but my approach was to make separate algos for each day of the week, optimise their parameters separately to adjust for different ranges, and let OptimalF weed out the less profitable days.
I'd appreciate views on the best way to deal with day of the week effects.
Last edited by swingtraderkk; 02/11/14 12:43.
Re: Tricking optimizer to pick highest value instead of most robust
[Re: swingtraderkk]
#437180 - 02/11/14 14:10
I agree, the adjacency of some of these bitwise combinations is meaningless. Even if they do herd together, I think it could be coincidence. For that reason, any "broad hills" that surface are probably also coincidental.
I want to use Zorro to be able to shift with a changing market. I don't know why some logics seem to work better at some times and worse at others. I could speculate that they work better during the London-New York overlap, for example, because of increased volumes. But then that seems like an almost fundamental view, and the data may not even support that. I would rather let the computer deduce these connections and just choose the best dynamically. I don't need an explanation of why it works.
Some of my logics seem to do really well with one or more ongoing bitwise adjustments. I think I should also focus on creating more variations of them (I need to brainstorm some ideas; maybe further variations like early-London-session, late-London-session, etc.). But it has been a concern of mine that the actual bit-neighbors are not necessarily related to each other. I did not want to modify Zorro's standard optimize() function, so this seems like it could be a happy medium. (Sidenote: one obvious drawback is that using step .33 implies roughly triple the Train time, since the optimizer now evaluates about three times as many parameter values.)
From a philosophy standpoint... I try to always keep in mind that what works today may no longer work tomorrow. Time is of the essence. I want to focus on rapid development of edge logics that work today... and then be quick to discard them when they fade away. The markets work in waves and cycles, so what was in favor yesterday may cycle back to favor tomorrow.
Re: Tricking optimizer to pick highest value instead of most robust
[Re: dusktrader]
#437478 - 02/18/14 11:12
FYI, I did some more thorough testing of this concept and determined that it does not make the bitwise functions more profitable. While it may choose the highest value in an optimize run, for whatever reason the resulting combination is rather consistently less profitable than an identical strategy that uses the traditional stepping of 1.
There may be some use for this trick, but apparently not in this case. I don't always know how to explain the "why".