WFO Training with parallel cores Zorro64
#489207 - 02/23/26 10:49
Martin_HH (OP, Junior Member) - Joined: May 2023 - Posts: 50 - Hamburg, Germany
When I do WFO with multiple parallel cores, I notice a recurring behaviour: all instances run quite fast, but the last instance (not the master one) runs endlessly. Even though I am running a 13900K with 24 cores and, via Process Lasso, am using the performance cores only, it doesn't change. The more parameters there are to optimize, the greater the issue. RAM is not an issue either (64 GB). I am using Zorro64, and it seems to be even slower than Zorro32. I have set the NumCores variable to fixed values and to n-1; no difference.
Does anyone recognize the same behaviour?
M.
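A common cause of one straggling worker in a parallel batch is uneven work distribution: if one cycle's optimization is much slower than the rest, total wall time is bounded by that single cycle no matter how many cores are free. A minimal stdlib Python sketch of the effect (the per-cycle times are hypothetical stand-ins, not anything Zorro reports):

```python
import concurrent.futures as cf
import time

# Hypothetical per-cycle training times in seconds; the last cycle is
# assumed to be much heavier (larger data segment, slower convergence).
cycle_times = [0.1, 0.1, 0.1, 0.1, 0.5]

def train_cycle(seconds):
    """Stand-in for one WFO training cycle."""
    time.sleep(seconds)
    return seconds

start = time.perf_counter()
with cf.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_cycle, cycle_times))
elapsed = time.perf_counter() - start

# Wall time is dominated by the single slow cycle (~0.5 s), even though
# four of the five cycles finish in ~0.1 s each.
print(f"wall time: {elapsed:.2f} s")
```

If the real workload is shaped like this, adding cores or pinning affinity cannot shorten the run; only the slowest cycle itself can.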
Last edited by Martin_HH; 02/23/26 10:50.
Re: WFO Training with parallel cores Zorro64 [Re: jcl]
#489209 - 02/23/26 15:29
Martin_HH (OP)
Yes, I always do it like this, but I see this behaviour in many scripts, which is why I am asking. The last thread takes ages to finalize the training. It does finish eventually, but it takes 5 to 10 times longer than the other parallel threads.
Last edited by Martin_HH; 02/23/26 15:30.
Re: WFO Training with parallel cores Zorro64 [Re: Martin_HH]
#489228 - 02/24/26 19:51
Martin_HH (OP)
I have been playing around with NumCores. Despite having 32 logical cores, I am now running Zorro64 with NumCores = 3, and it runs a little more stably, but still not well. I am also using Process Lasso to allocate Zorro to the performance cores of the Intel i9. Frankly, I suspect this is a Windows 11 thing: parallelization seems to become an issue when it gets complex. Linux might be better prepared for this.
Has anyone recognised similar behaviour?
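One way to sanity-check the NumCores choice is to derive it from the physical P-core count rather than the 32 logical processors Windows reports, since hyper-threaded siblings and E-cores add little to a compute-bound optimization. A small sketch (the 8-P-core figure for the i9-13900K is an assumption about this machine, not something Zorro reports):

```python
import os

logical = os.cpu_count() or 1  # 32 on an i9-13900K (8 P-cores with HT + 16 E-cores)
p_cores = 8                    # ASSUMPTION: physical performance cores on this CPU

# Leave one P-core free for the master instance and the OS, mirroring
# the "all cores minus one" convention of NumCores = -1.
suggested = max(1, p_cores - 1)
print(f"logical processors: {logical}, suggested NumCores: {suggested}")
```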
Last edited by Martin_HH; 02/24/26 20:37.
Re: WFO Training with parallel cores Zorro64 [Re: jcl]
#489261 - 1 hour ago
Martin_HH (OP)
Yes, what I see is that the last thread takes endlessly long. Process Lasso and Task Manager show that this last instance consumes much less RAM than the others, while its CPU consumption is even slightly higher.
I am running a strategy that uses the Python bridge and calculates an HMM (Hidden Markov Model) and PCA (Principal Component Analysis). What puzzles me is that the other instances run fast.
I tried different numbers of cores. At the moment I am running Zorro only on the Intel i9 performance cores, but nothing has changed. I run this strategy with NumWFOCycles = 5, and each parallel run gets an isolated core with no other activity. I put all other processes (non-Zorro/Python) on cores 17 to 32.
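Since the heavy lifting happens on the Python side, it may help to time each bridge call per WFO cycle to see whether the last cycle's HMM/PCA fit is genuinely slower (for example, more iterations needed to converge on that data segment) rather than Zorro's scheduling being at fault. A stdlib-only instrumentation sketch, with `fit_models` as a hypothetical stand-in for the actual HMM/PCA routine:

```python
import functools
import time

def timed(func):
    """Print the wall time of each call, keyed by WFO cycle number."""
    @functools.wraps(func)
    def wrapper(cycle, *args, **kwargs):
        start = time.perf_counter()
        result = func(cycle, *args, **kwargs)
        print(f"cycle {cycle}: {time.perf_counter() - start:.3f} s")
        return result
    return wrapper

@timed
def fit_models(cycle, data):
    # Stand-in for the real HMM + PCA fit called over the Python bridge;
    # here it just averages the data so the sketch stays self-contained.
    return sum(data) / len(data)

# Per-cycle timings reveal whether one cycle dominates the run.
for cycle, segment in enumerate([[1.0, 2.0], [3.0, 5.0]], start=1):
    fit_models(cycle, segment)
```

If the log shows the last cycle's fit itself taking 5-10x longer, the bottleneck is the model on that data segment, not Windows or Zorro's thread handling.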
Last edited by Martin_HH; 1 hour ago.