28 minutes ago
Zorro 2.70 is now available on https://zorro-project.com/download.php under "New versions". It comes with the brand new Z6+ trading system, a strategy that exploits the excess volatility of some Forex pairs. More new features: https://zorro-project.com/manual/en/new.htm. This release candidate will become the official release if no serious bugs are found in the meantime. Please test everything and report any issues here! As usual, anyone who finds a new and serious bug in the release candidate will get a free Zorro S subscription or support extension.
31 minutes ago
You need not convert them; the new Zorro version will accept the old format.
09/27/25 17:07
Thank you AndrewAMD for your reply. Unfortunately your suggestion didn't work; it ran the optimization for the first asset only. I read the manual many times and made the following change to the code. The manual says: "If asset/algo specific optimization is not desired at all, don't use loop, but enumerate the assets in a simple for loop, f.i. for(used_assets) ..."
So I changed while(loop(asset... to for(used_assets) ... and optimize the parameters outside the for loop. The manual also says: "Make sure in that case to select all assets before the first optimize call;
otherwise the optimizer will assume a single-asset strategy."
var threshold; // global optimize parameter

function run()
{
  set(LOGFILE|PLOTNOW);
  set(PARAMETERS);
  threshold = optimize(a,b,c,1); // a,b,c = my parameter bounds

  for(used_assets) // first loop
  {
    // needs no optimization
  }
  for(used_assets) // second loop
  {
    // needs no optimization
  }
  for(used_assets) // third loop
  {
    // threshold decides how many assets to trade:
    // without optimize, go 3 assets long;
    // with optimize, go 1, 2, 3, 4 or 5 assets long
  }
}
The optimize runs on the last asset only. The manual says: "Make sure in that case to select all assets before the first optimize call."
How do I select all assets before the first optimize call?
David
***** UPDATE ***** The above code works OK; I got the optimization for the global var "threshold". Interestingly, the performance degraded after the optimization: always holding 3 assets gave the best performance.
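For readers with the same question, here is a minimal sketch of the pattern the manual quote describes; the asset names and the optimize bounds are placeholders, not the actual values used in the strategy above:

var Threshold; // global parameter shared by all assets

function run()
{
  set(PARAMETERS|LOGFILE);

  // select every asset once BEFORE the first optimize call,
  // so the optimizer treats this as a multi-asset strategy
  asset("EUR/USD"); // placeholder asset names
  asset("GBP/USD");
  asset("USD/JPY");

  Threshold = optimize(3,1,5,1); // placeholder bounds

  for(used_assets) // trade loop over the previously selected assets
  {
    // entry/exit logic using Threshold goes here
  }
}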
09/27/25 10:05
continuation...  // ----------------------------------------------------------------------
// J) Heavy per-bar update slice (uses rolling G_UpdatePos cursor)
// ----------------------------------------------------------------------
var f_affine(var x, var lam, var mean, var E, var P, var i, var c){
return x + lam*mean + E + P + i + c; // small helper used inside nonlins
}
var nonlin1(int i, int n1, var lam, var mean, var E, var P){
var x = G_State[n1];
var arg = (var)A1x[i]*x + (var)A1lam[i]*lam + (var)A1mean[i]*mean + (var)A1E[i]*E + (var)A1P[i]*P + (var)A1i[i]*i + (var)A1c[i];
return arg;
}
var nonlin2(int i, int n2, var lam, var mean, var E, var P){
var x = G_State[n2];
var arg = (var)A2x[i]*x + (var)A2lam[i]*lam + (var)A2mean[i]*mean + (var)A2E[i]*E + (var)A2P[i]*P + (var)A2i[i]*i + (var)A2c[i];
return arg;
}
// returns 1 if a full heavy-update pass finishes, else 0
int heavyUpdateChunk(var lambda, var mean, var energy, var power, int batch){
int N = G_N;
if(N <= 0) return 0;
if(batch < UPDATE_MIN_BATCH) batch = UPDATE_MIN_BATCH;
if(G_UpdatePos >= N) G_UpdatePos = 0;
int i0 = G_UpdatePos;
int i1 = i0 + batch; if(i1 > N) i1 = N;
// projection may be reused by multiple chunks within the same bar
computeProjection();
int i;
for(i=i0;i<i1;i++){
// --- neighbors (safe) ---
int n1 = adjSafe(i,0);
int n2 = ifelse(G_D>=2, adjSafe(i,1), n1);
// --- DTREE ensemble term (also returns top meta) ---
int topEq = -1; var topW = 0;
var treeT = dtreeTerm(i, &topEq, &topW);
G_TreeTerm[i] = (fvar)treeT;
G_TopEq[i] = (i16)topEq;
G_TopW[i] = (fvar)topW;
// --- advisor (data-driven) ---
var adv = adviseEq(i, lambda, mean, energy, power);
// --- nonlinear pair terms controlled by Mode ---
var a1 = nonlin1(i,n1,lambda,mean,energy,power);
var a2 = nonlin2(i,n2,lambda,mean,energy,power);
var t1, t2;
if(G_Mode[i]==1){ t1 = tanh(a1); t2 = sin(a2);
} else if(G_Mode[i]==2){ t1 = cos(a1); t2 = tanh(a2);
} else { t1 = sin(a1); t2 = cos(a2); }
// --- global couplings & momentum ---
var glob1 = tanh( (var)G1mean[i]*mean + (var)G1E[i]*energy );
var glob2 = sin ( (var)G2P[i]*power + (var)G2lam[i]*lambda );
var mom = (G_State[i] - G_Prev[i]);
// --- next state synthesis ---
var xnext =
(var)G_WSelf[i]*G_State[i]
+ (var)G_WN1[i]*t1
+ (var)G_WN2[i]*t2
+ (var)G_WGlob1[i]*glob1
+ (var)G_WGlob2[i]*glob2
+ (var)G_WMom[i]*mom
+ (var)G_WTree[i]*treeT
+ (var)G_WAdv[i]*adv;
// --- stability clamp & book-keeping ---
xnext = clamp(xnext, -10, 10);
G_Prev[i] = G_State[i];
G_State[i]= xnext;
G_StateSq[i] = xnext*xnext;
// --- keep last advisor output for hit-rate scoring next bar ---
G_AdvPrev[i] = (fvar)adv;
// --- lightweight per-eq meta logging (sampled) ---
if(!G_LogsOff && (Bar % LOG_EVERY)==0 && (i < LOG_EQ_SAMPLE)){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* tnode = treeAt(tid);
int nodeDepth = 0;
if(tnode) nodeDepth = tnode->d;
var rate = (var)TBeta[i]; // any per-eq scalar to inspect quickly
var pred = predByTid(tid);
// last parameter must be a string (avoid ternary; lite-C friendly)
string expr = 0;
if(LOG_EXPR_TEXT){
if(G_Sym) expr = G_Sym[i];
else expr = 0;
}
appendEqMetaLine(Bar, G_Epoch, G_CtxID,
i, n1, n2, tid, nodeDepth, rate, pred, adv, G_Prop[i], (int)G_Mode[i],
(var)G_WAdv[i], (var)G_WTree[i], G_MCF_PBull, G_MCF_Entropy, (int)G_MCF_State,
expr);
}
}
// advance rolling cursor
G_UpdatePos = i1;
// full pass completed?
if(G_UpdatePos >= N){
G_UpdatePos = 0;
G_UpdatePasses += 1;
return 1;
}
return 0;
}
// ----------------------------------------------------------------------
// K) Cycle tracker: pick leader eq on ring and update phase/speed
// ----------------------------------------------------------------------
void updateEquationCycle() {
if(!G_EqTheta){ G_CycPh = wrapPi(G_CycPh); return; }
// Leader = argmax Prop[i]
int i, bestI = 0; var bestP = -1;
for(i=0;i<G_N;i++){
var p = (var)G_Prop[i];
if(p > bestP){ bestP = p; bestI = i; }
}
var th = ifelse(G_EqTheta != 0, G_EqTheta[bestI], 0);
// angular speed (wrapped diff)
var d = angDiff(G_LeadTh, th);
// EW smoothing for speed
G_CycSpd = 0.9*G_CycSpd + 0.1*d;
// integrate phase, keep wrapped
G_CycPh = wrapPi( G_CycPh + G_CycSpd );
G_LeadEq = bestI;
G_LeadTh = th;
}
// ----------------------------------------------------------------------
// L) Markov orchestration per bar (5m every bar; 1H & Relation on close)
// ----------------------------------------------------------------------
int is_H1_close(){ return (Bar % TF_H1) == 0; }
void updateAllMarkov(){
// Don’t touch Markov until all chains are allocated
if(!MC_Count || !MC_RowSum || !ML_Count || !ML_RowSum || !MR_Count || !MR_RowSum)
return;
// low TF always
updateMarkov_5M();
// on 1H close, refresh HTF & relation
if(is_H1_close()){
updateMarkov_1H();
updateMarkov_REL();
}
// expose HTF features to DTREE (the legacy MC_* are HTF via MH_*)
G_MCF_PBull = MH_PBullNext;
G_MCF_Entropy = MH_Entropy;
G_MCF_State = MH_Cur;
}
// ----------------------------------------------------------------------
// M) Rewire scheduler (chunked): decide batch and normalize periodically
// ----------------------------------------------------------------------
void maybeRewireNow(var lambda, var mean, var energy, var power){
int mb = mem_mb_est();
// Near budget? shrink batch or skip
if(mb >= UPDATE_MEM_HARD) return;
// choose batch by bar type
int batch = ifelse(is_H1_close(), REWIRE_BATCH_EQ_H1, REWIRE_BATCH_EQ_5M);
// soften by memory
if(mb >= REWIRE_MEM_SOFT) batch = (batch>>1);
if(batch < REWIRE_MIN_BATCH) batch = REWIRE_MIN_BATCH;
int finished = rewireEpochChunk(lambda,mean,energy,power,batch);
// Normalize proportions after completing a full pass (and every REWIRE_NORM_EVERY passes)
if(finished && (G_RewirePasses % REWIRE_NORM_EVERY) == 0){
normalizeProportions();
// write a header once and roll context id every META_EVERY full passes
writeEqHeaderOnce();
if((G_RewirePasses % META_EVERY) == 0){
// refresh context hash using adjacency (same as in rewireEpoch)
int D = G_D, i, total = G_N * D;
unsigned int h = 2166136261u;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
}
}
// ----------------------------------------------------------------------
// N) Heavy update scheduler (chunked) for each bar
// ----------------------------------------------------------------------
void runHeavyUpdates(var lambda, var mean, var energy, var power){
int mb = mem_mb_est();
// Near hard ceiling? skip heavy work this bar
if(mb >= UPDATE_MEM_HARD) return;
int batch = ifelse(is_H1_close(), UPDATE_BATCH_EQ_H1, UPDATE_BATCH_EQ_5M);
if(mb >= UPDATE_MEM_SOFT) batch = (batch>>1);
if(batch < UPDATE_MIN_BATCH) batch = UPDATE_MIN_BATCH;
heavyUpdateChunk(lambda,mean,energy,power,batch);
}
// ----------------------------------------------------------------------
// O) Hit-rate scorer (EW average of 1-bar directional correctness)
// ----------------------------------------------------------------------
void updateHitRates(){
if(is(INITRUN)) return;
if(Bar <= LookBack) return;
int i;
var r = G_Ret1; // realized 1-bar return provided by outer loop
var sgnR = sign(r);
for(i=0;i<G_N;i++){
var a = (var)G_AdvPrev[i]; // last bar's advisor score (-1..+1)
var sgnA = ifelse(a > HIT_EPS, 1, ifelse(a < -HIT_EPS, -1, 0));
var hit = ifelse(sgnR == 0, 0.5, ifelse(sgnA == sgnR, 1.0, 0.0));
G_HitEW[i] = (fvar)((1.0 - HIT_ALPHA)*(var)G_HitEW[i] + HIT_ALPHA*hit);
G_HitN[i] += 1;
}
}
// ----------------------------------------------------------------------
// P) Lambda/Gamma blend & accuracy sentinel
// ----------------------------------------------------------------------
var blendLambdaGamma(var lambda_raw, var gamma_raw){
// adapt blend weight a bit with entropy: more uncertainty -> lean on gamma
var w = clamp(G_FB_W + 0.15*(0.5 - G_MCF_Entropy), 0.4, 0.9);
var x = w*lambda_raw + (1.0 - w)*gamma_raw;
acc_update(lambda_raw, gamma_raw);
return x;
}
// ----------------------------------------------------------------------
// Q) Per-bar orchestrator (no orders here; main run() will call this)
// ----------------------------------------------------------------------
// ----------------------------------------------------------------------
// Q) Per-bar orchestrator (no orders here; main run() will call this)
// Hardened for warmup & init guards (safe if called before LookBack)
// ----------------------------------------------------------------------
void alpha12_step(var ret1_now /*1-bar realized return for scoring*/)
{
// If somehow called before init completes, do nothing
if(!ALPHA12_READY) return;
// 1) Markov update & expose HTF features (always safe)
updateAllMarkov();
// --- Warmup guard ---------------------------------------------------
// Some environments (or external callers) may invoke this before LookBack.
// In warmup we only maintain projections and exit; no rewires/heavy updates.
if(Bar < LookBack){
// keep projection cache alive if arrays exist
computeProjection();
G_Ret1 = ret1_now; // harmless bookkeeping for scorer
// optionally adapt MC threshold very slowly even in warmup
{
var h = 0.5;
int i; var acc = 0;
for(i=0;i<G_N;i++) acc += (var)G_HitEW[i];
if(G_N > 0) h = acc/(var)G_N;
var target = MC_ACT
+ 0.15*(0.55 - h)
+ 0.10*(G_MCF_Entropy - 0.5);
target = clamp(target, 0.20, 0.50);
G_MC_ACT = 0.95*G_MC_ACT + 0.05*target;
}
return; // <-- nothing heavy before LookBack
}
// --------------------------------------------------------------------
// 2) Compute lambda from current projection snapshot
var lambda = 0.0;
{
computeProjection();
int K = keffClamped(); // clamped effective projection dimension
int k;
var e = 0;
var pwr = 0;
for(k = 0; k < K; k++){
var z = (var)G_Z[k];
e += z;
pwr += z*z;
}
var mean = 0;
var energy = pwr; // total energy = sum of squares
var power = 0;
if(K > 0){
mean = e / (var)K;
power = pwr / (var)K;
}
// local "lambda" = trend proxy mixing price-like aggregates
lambda = 0.7*tanh(mean) + 0.3*tanh(0.05*power);
// 3) Maybe rewire a slice this bar (uses same features)
maybeRewireNow(lambda, mean, energy, power);
// 4) Heavy updates for a slice
runHeavyUpdates(lambda, mean, energy, power);
}
// 5) Gamma from coarse network projection (stable, uses whole state)
var gamma = projectNet();
// 6) Blend & store accuracy sentinel
var x = blendLambdaGamma(lambda, gamma);
// 7) Update ring / equation-cycle tracker
updateEquationCycle();
// 8) Score previous advisors against realized 1-bar return
G_Ret1 = ret1_now;
updateHitRates();
// 9) Depth manager & elastic growth controller (memory-aware)
depth_manager_runtime();
edc_runtime();
// 10) Adapt MC acceptance threshold by hit-rate/entropy
{
var h = 0.0;
int i;
for(i = 0; i < G_N; i++) h += (var)G_HitEW[i];
if(G_N > 0) h /= (var)G_N; else h = 0.5;
var target = MC_ACT
+ 0.15*(0.55 - h)
+ 0.10*(G_MCF_Entropy - 0.5);
target = clamp(target, 0.20, 0.50);
G_MC_ACT = 0.9*G_MC_ACT + 0.1*target;
}
// silence unused warning if trading block is removed
x = x;
}
// ==================== Part 4/4 — Runtime, Trading, Init/Cleanup ====================
// ---- globals used by Part 4
var G_LastSig = 0; // blended lambda/gamma used for trading view
int G_LastBarTraded = -1;
// ---- small guards for optional plotting
void plotSafe(string name, var v){
if(ENABLE_PLOTS && !G_ChartsOff) plot(name, v, NEW|LINE, 0);
}
// ---- lite-C compatible calloc replacement ----
void* xcalloc(int count, int size) // removed 'static' (lite-C)
{
int bytes = count*size;
void* p = malloc(bytes);
if(p) memset(p,0,bytes);
else quit("Alpha12: OOM in xcalloc");
return p;
}
// ======================= Markov alloc/free =======================
void allocMarkov()
{
int NN = MC_STATES*MC_STATES;
int bytesMat = NN*sizeof(int);
int bytesRow = MC_STATES*sizeof(int);
// --- HTF (1H) chain (legacy MC_*) ---
MC_Count = (int*)malloc(bytesMat);
MC_RowSum= (int*)malloc(bytesRow);
if(!MC_Count || !MC_RowSum) quit("Alpha12: OOM in allocMarkov(MC)");
memset(MC_Count, 0, bytesMat);
memset(MC_RowSum, 0, bytesRow);
// --- LTF (5M) chain ---
ML_Count = (int*)malloc(bytesMat);
ML_RowSum= (int*)malloc(bytesRow);
if(!ML_Count || !ML_RowSum) quit("Alpha12: OOM in allocMarkov(ML)");
memset(ML_Count, 0, bytesMat);
memset(ML_RowSum, 0, bytesRow);
// --- Relation chain (links 5M & 1H) ---
MR_Count = (int*)malloc(bytesMat);
MR_RowSum= (int*)malloc(MR_STATES*sizeof(int)); // MR_STATES == MC_STATES
if(!MR_Count || !MR_RowSum) quit("Alpha12: OOM in allocMarkov(MR)");
memset(MR_Count, 0, bytesMat);
memset(MR_RowSum, 0, MR_STATES*sizeof(int));
// --- initial states & defaults ---
MC_Prev = MH_Prev = -1; MC_Cur = MH_Cur = 0;
ML_Prev = -1; ML_Cur = 0;
MR_Prev = -1; MR_Cur = 0;
MC_PBullNext = 0.5; MC_Entropy = 1.0;
ML_PBullNext = 0.5; ML_Entropy = 1.0;
MR_PBullNext = 0.5; MR_Entropy = 1.0;
}
void freeMarkov(){
if(MC_Count) free(MC_Count);
if(MC_RowSum)free(MC_RowSum);
if(ML_Count) free(ML_Count);
if(ML_RowSum)free(ML_RowSum);
if(MR_Count) free(MR_Count);
if(MR_RowSum)free(MR_RowSum);
MC_Count=MC_RowSum=ML_Count=ML_RowSum=MR_Count=MR_RowSum=0;
}
// ======================= Alpha12 init / cleanup =======================
void Alpha12_init()
{
if(ALPHA12_READY) return;
// 1) Session context first
asset(ASSET_SYMBOL);
BarPeriod = BAR_PERIOD;
set(PLOTNOW); // plotting gated by ENABLE_PLOTS at call sites
// 2) Warmup window
LookBack = max(300, NWIN);
// 3) Clamp effective projection size and reset projection cache
if(G_Keff < 1) G_Keff = 1;
if(G_Keff > G_K) G_Keff = G_K;
G_ProjBar = -1;
G_ProjK = -1;
// 4) Core allocations
allocateNet();
allocMarkov();
// 5) Depth LUT + initial tree + indexing
if(!G_DepthW) G_DepthW = (var*)malloc(DEPTH_LUT_SIZE*sizeof(var));
if(!Root) Root = createNode(MAX_DEPTH);
G_RT_TreeMaxDepth = MAX_DEPTH;
refreshDepthW();
reindexTreeAndMap(); // sizes pred cache & ring angles
// 6) Bootstrap: RP, projection, one full rewire pass (also sets proportions & CtxID)
rewireInit();
computeProjection();
rewireEpoch(0,0,0,0);
// 7) Logging header once
writeEqHeaderOnce();
// 8) Reset rolling cursors / exposed Markov defaults
G_RewirePos = 0; G_RewirePasses = 0;
G_UpdatePos = 0; G_UpdatePasses = 0;
G_MCF_PBull = 0.5; G_MCF_Entropy = 1.0; G_MCF_State = 0;
// 9) Done
ALPHA12_READY = 1;
printf("\n[Alpha12] init done: N=%i D=%i K=%i (Keff=%i) Depth=%i est=%i MB",
G_N, G_D, G_K, G_Keff, G_RT_TreeMaxDepth, mem_mb_est());
}
void Alpha12_cleanup(){
freeMarkov();
if(Root){ freeTree(Root); Root=0; }
freeNodePool();
if(G_DepthW){ free(G_DepthW); G_DepthW=0; }
freeNet();
ALPHA12_READY = 0;
}
// ======================= Helpers for realized 1-bar return =======================
var realizedRet1(){
// Basic 1-bar return proxy from close series
vars C = series(priceClose());
if(Bar <= LookBack) return 0;
return C[0] - C[1];
}
// ======================= Trading gate =======================
// Combines blended network signal with Markov PBull gate.
// Returns signed signal in [-1..1].
var tradeSignal(){
// --- EARLY GUARDS ---
if(!ALPHA12_READY) return 0; // init not completed
if(!G_RP || !G_Z || !G_StateSq) return 0; // projection buffers not allocated
// Recompute a lightweight lambda/gamma snapshot for display/decisions.
// (Alpha12_step already ran heavy ops; this is cheap.)
computeProjection();
int Keff = keffClamped(); // clamped effective projection size
if(Keff <= 0) return 0; // nothing to project yet; be safe
int k;
var e = 0;
var pwr = 0;
for(k = 0; k < Keff; k++){
var z = (var)G_Z[k];
e += z;
pwr += z*z;
}
// --- NO TERNARY: explicit guards for lite-C ---
var mean = 0;
var power = 0;
if(Keff > 0){
mean = e / (var)Keff;
power = pwr / (var)Keff;
}
var lambda = 0.7*tanh(mean) + 0.3*tanh(0.05*power);
var gamma = projectNet();
var x = blendLambdaGamma(lambda, gamma);
G_LastSig = x;
// Markov (HTF) directional gating (no ternaries)
var gLong = 0;
var gShort = 0;
if(G_MCF_PBull >= PBULL_LONG_TH) gLong = 1.0;
if(G_MCF_PBull <= PBULL_SHORT_TH) gShort = 1.0;
// Symmetric gate around x (no ternary)
var s = 0;
if(x > 0) s = x * gLong;
else s = x * gShort;
// Modulate by relation chain confidence (lower entropy -> stronger)
var conf = 1.0 - 0.5*(MR_Entropy); // 0.5..1.0 typically
s *= conf;
return clamp(s, -1, 1);
}
// ======================= Position sizing & risk =======================
var posSizeFromSignal(var s){
// Simple linear sizing, capped
var base = 1;
var scale = 2.0 * abs(s); // 0..2
return base * (0.5 + 0.5*scale); // 0.5..1.5 lots (example)
}
void placeOrders(var s){
// Basic long/short logic with soft handoff
if(s > 0){
if(!NumOpenLong) enterLong(posSizeFromSignal(s));
if(NumOpenShort) exitShort();
} else if(s < 0){
if(!NumOpenShort) enterShort(posSizeFromSignal(s));
if(NumOpenLong) exitLong();
}
// if s==0 do nothing (hold)
}
// ======================= Main per-bar runtime =======================
void Alpha12_bar(){
// 1) Provide last realized return to the engine scorer
var r1 = realizedRet1();
// 2) Run the engine step (updates Markov, rewires slices, heavy updates, etc.)
alpha12_step(r1);
// 3) Build trading signal & place orders (once per bar)
var s = tradeSignal();
placeOrders(s);
// 4) Plots (guarded)
plotSafe("PBull(1H)", 100*(G_MCF_PBull-0.5));
plotSafe("PBull(5M)", 100*(ML_PBullNext-0.5));
plotSafe("PBull(Rel)", 100*(MR_PBullNext-0.5));
plotSafe("Entropy(1H)", 100*(G_MCF_Entropy));
plotSafe("Sig", 100*G_LastSig);
}
// ---- Zorro hooks (after macros!) ----
function init(){ Alpha12_init(); }
function run()
{
// keep it lean; do NOT change BarPeriod/asset here anymore
if(Bar < LookBack){
updateAllMarkov();
return;
}
Alpha12_bar();
}
function cleanup(){ Alpha12_cleanup(); }
09/27/25 10:02
Consensus Gate Orchestrator

The system follows a gate-and-flow pattern. It begins by compressing raw, fast-moving observations into a small alphabet of archetypes—a compact context that says “what the moment looks like” right now. From the rolling stream of these archetypes it infers two quiet dials: a lean (directional tendency for the immediate next step) and a clarity (how decisive that tendency appears). Those two dials form a permission gate: sometimes it opens, sometimes it holds; sometimes it opens in one direction but not the other. The gate is conservative by design and adjusts as evidence accumulates or disperses.

Beneath the gate, a soft influence field evolves continuously. Many small units—lightweight, partially independent—carry a trace of their own past, listen to a few peers, and absorb coarse summaries from the broader environment across multiple horizons. Signals are intentionally bounded to prevent spikes from dominating. Attention is rationed: weight is allocated in proportion to agreement and reliability, so faint, inconsistent voices naturally recede while convergent evidence rises to the surface.

Connections among these units are reshaped in measured slices. Rather than restarting from scratch, the system refreshes “who listens to whom” and how strongly, favoring simple, stable pairings and rhythm-compatible neighbors. Structure molts; scaffold stays. The goal is to remain adaptive without becoming erratic.

Capacity breathes with circumstances. When resources tighten or extra detail stops helping, the system trims depth where it matters least. When there’s headroom and a demonstrable benefit, it adds a thin layer. Changes are tentative and reversible: growth is trialed, scored after a delay, and rolled back if utility falls. Utility balances quality of alignment with a mild cost for complexity.

Decisions happen only when permission and the influence field agree meaningfully. Timing and size of action (in any application) scale with consensus strength; ambiguity elevates patience. “Do nothing” is first-class, not failure. A compact diary records the moment’s archetype, the two gate dials, and terse sketches of how influences combined to justify the current posture. It favors clarity over detail, enabling auditability without exposing internals.

What emerges is coherence without rigidity. Groups move together when rhythms align; solos fade when clarity drops. Adaptation is maintained through many small adjustments, not dramatic overhauls, so behavior tracks structural change while staying steady between regimes.
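The prose above maps loosely onto the gating used in the listing that follows (PBull playing the role of the "lean", one minus entropy the "clarity"). A minimal, self-contained sketch of such a permission gate, with hypothetical thresholds and no connection to the actual Alpha12 functions below, might look like this:

// Illustrative permission gate: lean and clarity in 0..1, thresholds are hypothetical.
// Returns +1 (act long), -1 (act short) or 0 (hold); "do nothing" is a valid outcome.
int gateDirection(var lean, var clarity)
{
  if(clarity < 0.25) return 0; // too ambiguous: hold
  if(lean >= 0.60) return 1;   // open in the long direction
  if(lean <= 0.40) return -1;  // open in the short direction
  return 0;                    // otherwise hold
}

// ======================================================================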
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// with runtime memory shaping, selective depth pruning,
// elastic accuracy-aware depth growth, and equation-cycle time series.
// ======================================================================
// ================= USER CONFIG =================
#define ASSET_SYMBOL "EUR/USD"
#define BAR_PERIOD 5
#define TF_H1 12
// ... (rest of your USER CONFIG defines)
// ---- Forward declarations (needed by hooks placed early) ----
void Alpha12_init();
void Alpha12_bar();
void Alpha12_cleanup();
void updateAllMarkov();
#define MC_ACT 0.30 // initial threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH 0.60 // Markov gate for long
#define PBULL_SHORT_TH 0.40 // Markov gate for short
// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS 0 // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH 0 // 0 = disable watch() probes; 1 = enable
// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES 3
#define MAX_DEPTH 4
#define NWIN 256
#define NET_EQNS 100
#define DEGREE 4
#define KPROJ 16
#define REWIRE_EVERY 127
#define CAND_NEIGH 8
// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE 1 // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT 0 // 0: omit full expression (store signature only); 1: include text
#define META_EVERY 4 // write META every N rewires
#define LOG_EQ_SAMPLE NET_EQNS
#define EXPR_MAXLEN 512
#define LOG_FLOAT_TRIM
#define LOG_EVERY 16
#define MC_EVERY 1
// ---- DTREE feature sizes (extended: adds cycle + multi-TF features) ----
#define ADV_EQ_NF 19 // CHANGED: was 15, now +4 (5M + Relation)
#define ADV_PAIR_NF 12 // <— RESTORED: used by buildPairFeatures()
// ================= Candles -> 122-state Markov =================
#define MC_NPAT 61
#define MC_STATES 123 // 1 + 2*MC_NPAT
#define MC_NONE 0
#define MC_LAPLACE 1.0 // kept for reference; runtime uses G_MC_Alpha
// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB 50
#define MEM_HEADROOM_MB 5
#define DEPTH_STEP_BARS 16
#define KEEP_CHILDREN_HI 2
#define KEEP_CHILDREN_LO 1
#define RUNTIME_MIN_DEPTH 2
// ===== Chunked rewire settings =====
#define REWIRE_BATCH_EQ_5M 24 // equations to (re)build on 5m bars
#define REWIRE_BATCH_EQ_H1 64 // bigger chunk when an H1 closes
#define REWIRE_MIN_BATCH 8 // floor under pressure
#define REWIRE_NORM_EVERY 1 // normalize after completing 1 full pass
// If mem est near budget, scale batch down
#define REWIRE_MEM_SOFT (MEM_BUDGET_MB - 4)
#define REWIRE_MEM_HARD (MEM_BUDGET_MB - 1)
// ===== Chunked update settings (heavy DTREE/advisor in slices) =====
// (Added per your patch)
#define UPDATE_BATCH_EQ_5M 32 // heavy updates on 5m bars
#define UPDATE_BATCH_EQ_H1 96 // larger slice when an H1 closes
#define UPDATE_MIN_BATCH 8
#define UPDATE_MEM_SOFT (MEM_BUDGET_MB - 4)
#define UPDATE_MEM_HARD (MEM_BUDGET_MB - 1)
// runtime flag used by alpha12_step()
int ALPHA12_READY = 0; // single global init sentinel (int)
int G_ShedStage = 0; // 0..2
int G_LastDepthActBar = -999999;
int G_ChartsOff = 0; // gates plot()
int G_LogsOff = 0; // gates file_append cadence
int G_SymFreed = 0; // expression buffers freed
int G_RT_TreeMaxDepth = MAX_DEPTH;
// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var G_AccCorr = 0; // [-1..1]
var G_AccBase = 0; // first seen sentinel
int G_HaveBase = 0;
// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS 64 // start a growth trial this often (when memory allows)
#define TUNE_DELAY_BARS 64 // evaluate the trial after this many bars
var G_UtilBefore = 0, G_UtilAfter = 0;
int G_TunePending = 0;
int G_TuneStartBar = 0;
int G_TuneAction = 0; // +1 grow trial, 0 none
// ======================================================================
// Types & globals used by memory estimator
// ======================================================================
// HARMONIC D-TREE type
typedef struct Node {
var v;
var r;
void* c;
int n;
int d;
} Node;
// ====== Node pool (upgrade #2) ======
typedef struct NodeChunk {
struct NodeChunk* next;
int used; // 4 bytes
int _pad; // 4 bytes -> ensures nodes[] starts at 8-byte offset on 32-bit
Node nodes[256]; // each Node contains doubles; keep this 8-byte aligned
} NodeChunk;
NodeChunk* G_ChunkHead = 0;
Node* G_FreeList = 0;
Node* poolAllocNode() {
if(G_FreeList){
Node* n = G_FreeList;
G_FreeList = (Node*)n->c;
n->c = 0;
n->n = 0;
n->d = 0;
n->v = 0;
n->r = 0;
return n;
}
if(!G_ChunkHead || G_ChunkHead->used >= 256){
NodeChunk* ch = (NodeChunk*)malloc(sizeof(NodeChunk));
if(!ch) { quit("Alpha12: OOM allocating NodeChunk (poolAllocNode)"); return 0; }
memset(ch, 0, sizeof(NodeChunk));
ch->next = G_ChunkHead;
ch->used = 0;
G_ChunkHead = ch;
}
if(G_ChunkHead->used < 0 || G_ChunkHead->used >= 256){
quit("Alpha12: Corrupt node pool state");
return 0;
}
return &G_ChunkHead->nodes[G_ChunkHead->used++];
}
void poolFreeNode(Node* u){
if(!u) return;
u->c = (void*)G_FreeList;
G_FreeList = u;
}
void freeNodePool() {
NodeChunk* ch = G_ChunkHead;
while(ch){
NodeChunk* nx = ch->next;
free(ch);
ch = nx;
}
G_ChunkHead = 0;
G_FreeList = 0;
}
// Minimal globals needed before mem estimator
Node* Root = 0;
Node** G_TreeIdx = 0;
int G_TreeN = 0;
int G_TreeCap = 0;
var G_DTreeExp = 0;
// ---- (upgrade #1) depth LUT for pow() ----
#define DEPTH_LUT_SIZE (MAX_DEPTH + 1) // <- keep constant for lite-C
var* G_DepthW = 0; // heap-allocated LUT
var G_DepthExpLast = -1.0; // sentinel as var
Node G_DummyNode; // treeAt() can return &G_DummyNode
// Network sizing globals (used by mem estimator)
int G_N = NET_EQNS;
int G_D = DEGREE;
int G_K = KPROJ;
// Optional expression buffer pointer (referenced by mem estimator)
string* G_Sym = 0;
// Forward decls that reference Node
var nodePredictability(Node* t); // fwd decl (needed by predByTid)
var nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();
// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);
// ----------------------------------------------------------------------
// === Adaptive knobs & sentinels (NEW) ===
var G_FB_W = 0.70; // (1) dynamic lambda/gamma blend weight 0..1
var G_MC_ACT = MC_ACT; // (2) adaptive candlestick acceptance threshold
var G_AccRate = 0; // (2) EW acceptance rate of (state != 0)
// (3) advisor budget per bar (replaces the macro)
int G_AdviseMax = 16;
// (6) Markov Laplace smoothing (runtime)
var G_MC_Alpha = 1.0;
// (7) adaptive candidate breadth for adjacency search
int G_CandNeigh = CAND_NEIGH;
// (8) effective projection dimension (= KPROJ or KPROJ/2)
int G_Keff = KPROJ;
// (5) depth emphasis hill-climber
var G_DTreeExpStep = 0.05;
int G_DTreeExpDir = 1;
// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_ROTATE 1 // 1 = rotate which equations get DTREE each bar
int allowAdvise(int i) {
if(ADVISE_ROTATE){
int groups = NET_EQNS / G_AdviseMax;
if(groups < 1) groups = 1;
return ((i / G_AdviseMax) % groups) == (Bar % groups);
} else {
return (i < G_AdviseMax);
}
}
// ======================================================================
// A) Tight-memory switches and compact types
// ======================================================================
#define TIGHT_MEM 1 // turn on compact types for arrays
// consolidated EQ CSV -> don't enable extra meta
// (no #if available; force meta OFF explicitly)
#ifdef TIGHT_MEM
typedef float fvar; // 4B instead of 8B 'var' for large coefficient arrays
typedef short i16; // -32768..32767 indices
typedef char i8; // small enums/modes
#else
typedef var fvar;
typedef int i16;
typedef int i8;
#endif
// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u) {
if(!u) return 0;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int sz_node = 2*SZV + SZP + 2*SZI;
int total = sz_node;
if(u->n > 0 && u->c) total += u->n * SZP;
int i;
for(i=0;i<u->n;i++) total += tree_bytes(((Node**)u->c)[i]);
return total;
}
// ======================================================================
// Optimized memory estimator & predictability caches
// ======================================================================
// ===== Memory estimator & predictability caches =====
int G_MemFixedBytes = 0; // invariant part (arrays, Markov + pointer vec + expr opt)
int G_TreeBytesCached = 0; // current D-Tree structure bytes
var* G_PredNode = 0; // length == G_TreeN; -2 = not computed this bar
int G_PredLen = 0;
int G_PredCap = 0; // (upgrade #5)
int G_PredCacheBar = -1;
void recalcTreeBytes(){
G_TreeBytesCached = tree_bytes(Root);
}
void computeMemFixedBytes() {
int N = G_N, D = G_D, K = G_K;
int SZV = sizeof(var), SZF = sizeof(fvar), SZI16 = sizeof(i16), SZI8 = sizeof(i8), SZP = sizeof(void*);
int b = 0;
// --- core state (var-precision) ---
b += N*SZV*2; // G_State, G_Prev
// --- adjacency & ids ---
b += N*D*SZI16; // G_Adj
b += N*SZI16; // G_EqTreeId
b += N*SZI8; // G_Mode
// --- random projection ---
b += K*N*SZF; // G_RP
b += K*SZF; // G_Z
// --- weights & params (fvar) ---
b += N*SZF*(8); // G_W* (WSelf, WN1, WN2, WGlob1, WGlob2, WMom, WTree, WAdv)
b += N*SZF*(7 + 7); // A1*, A2*
b += N*SZF*(2 + 2); // G1mean,G1E,G2P,G2lam
b += N*SZF*(2); // TAlpha, TBeta
b += N*SZF*(1); // G_TreeTerm
b += N*(SZI16 + SZF); // G_TopEq, G_TopW
// --- proportions ---
b += N*SZF*2; // G_PropRaw, G_Prop
// --- per-equation hit-rate bookkeeping ---
b += N*SZF; // G_HitEW
b += N*SZF; // G_AdvPrev
b += N*sizeof(int); // G_HitN
// --- Markov storage (unchanged ints) ---
b += MC_STATES*MC_STATES*sizeof(int) + MC_STATES*sizeof(int);
// pointer vector for tree index (capacity part)
b += G_TreeCap*SZP;
// optional expression buffers
if(LOG_EXPR_TEXT && G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN;
G_MemFixedBytes = b;
}
void ensurePredCache() {
if(G_PredCacheBar != Bar){
if(G_PredNode){
int i, n = G_PredLen;
for(i=0;i<n;i++) G_PredNode[i] = -2;
}
G_PredCacheBar = Bar;
}
}
var predByTid(int tid) {
if(!G_TreeIdx || tid < 0 || tid >= G_TreeN || !G_TreeIdx[tid]) return 0.5;
ensurePredCache();
if(G_PredNode && tid < G_PredLen && G_PredNode[tid] > -1.5) return G_PredNode[tid];
Node* t = G_TreeIdx[tid];
var p = 0.5;
if(t) p = nodePredictability(t);
if(G_PredNode && tid < G_PredLen) G_PredNode[tid] = p;
return p;
}
// ======================================================================
// Conservative in-script memory estimator (arrays + pointers) - O(1)
// ======================================================================
int mem_bytes_est(){ return G_MemFixedBytes + G_TreeBytesCached; }
int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }
int memMB(){ return (int)(memory(0)/(1024*1024)); }
// light one-shot shedding
void shed_zero_cost_once() {
if(G_ShedStage > 0) return;
set(PLOTNOW|OFF);
G_ChartsOff = 1;
G_LogsOff = 1;
G_ShedStage = 1;
}
void freeExprBuffers() {
if(!G_Sym || G_SymFreed) return;
int i;
for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym);
G_Sym = 0;
G_SymFreed = 1;
computeMemFixedBytes();
}
// depth manager (prune & shedding)
void depth_manager_runtime() {
int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
int mb = mem_mb_est();
if(mb < trigger) return;
if(G_ShedStage == 0) shed_zero_cost_once();
if(G_ShedStage <= 1){
if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
G_ShedStage = 2;
}
int overBudget = (mb >= MEM_BUDGET_MB);
if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS)) return;
while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH) {
int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
mb = mem_mb_est();
printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);
if(mb < trigger) break;
}
G_LastDepthActBar = Bar;
}
// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
int n = 0;
#define ADD(Name, Call) do{ var v = (Call); if(out) out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)
ADD("CDL2Crows", CDL2Crows());
ADD("CDL3BlackCrows", CDL3BlackCrows());
ADD("CDL3Inside", CDL3Inside());
ADD("CDL3LineStrike", CDL3LineStrike());
ADD("CDL3Outside", CDL3Outside());
ADD("CDL3StarsInSouth", CDL3StarsInSouth());
ADD("CDL3WhiteSoldiers", CDL3WhiteSoldiers());
ADD("CDLAbandonedBaby", CDLAbandonedBaby(0.3));
ADD("CDLAdvanceBlock", CDLAdvanceBlock());
ADD("CDLBeltHold", CDLBeltHold());
ADD("CDLBreakaway", CDLBreakaway());
ADD("CDLClosingMarubozu", CDLClosingMarubozu());
ADD("CDLConcealBabysWall", CDLConcealBabysWall());
ADD("CDLCounterAttack", CDLCounterAttack());
ADD("CDLDarkCloudCover", CDLDarkCloudCover(0.3));
ADD("CDLDoji", CDLDoji());
ADD("CDLDojiStar", CDLDojiStar());
ADD("CDLDragonflyDoji", CDLDragonflyDoji());
ADD("CDLEngulfing", CDLEngulfing());
ADD("CDLEveningDojiStar", CDLEveningDojiStar(0.3));
ADD("CDLEveningStar", CDLEveningStar(0.3));
ADD("CDLGapSideSideWhite", CDLGapSideSideWhite());
ADD("CDLGravestoneDoji", CDLGravestoneDoji());
ADD("CDLHammer", CDLHammer());
ADD("CDLHangingMan", CDLHangingMan());
ADD("CDLHarami", CDLHarami());
ADD("CDLHaramiCross", CDLHaramiCross());
ADD("CDLHignWave", CDLHignWave());
ADD("CDLHikkake", CDLHikkake());
ADD("CDLHikkakeMod", CDLHikkakeMod());
ADD("CDLHomingPigeon", CDLHomingPigeon());
ADD("CDLIdentical3Crows", CDLIdentical3Crows());
ADD("CDLInNeck", CDLInNeck());
ADD("CDLInvertedHammer", CDLInvertedHammer());
ADD("CDLKicking", CDLKicking());
ADD("CDLKickingByLength", CDLKickingByLength());
ADD("CDLLadderBottom", CDLLadderBottom());
ADD("CDLLongLeggedDoji", CDLLongLeggedDoji());
ADD("CDLLongLine", CDLLongLine());
ADD("CDLMarubozu", CDLMarubozu());
ADD("CDLMatchingLow", CDLMatchingLow());
ADD("CDLMatHold", CDLMatHold(0.5));
ADD("CDLMorningDojiStar", CDLMorningDojiStar(0.3));
ADD("CDLMorningStar", CDLMorningStar(0.3));
ADD("CDLOnNeck", CDLOnNeck());
ADD("CDLPiercing", CDLPiercing());
ADD("CDLRickshawMan", CDLRickshawMan());
ADD("CDLRiseFall3Methods", CDLRiseFall3Methods());
ADD("CDLSeperatingLines", CDLSeperatingLines());
ADD("CDLShootingStar", CDLShootingStar());
ADD("CDLShortLine", CDLShortLine());
ADD("CDLSpinningTop", CDLSpinningTop());
ADD("CDLStalledPattern", CDLStalledPattern());
ADD("CDLStickSandwhich", CDLStickSandwhich());
ADD("CDLTakuri", CDLTakuri());
ADD("CDLTasukiGap", CDLTasukiGap());
ADD("CDLThrusting", CDLThrusting());
ADD("CDLTristar", CDLTristar());
ADD("CDLUnique3River", CDLUnique3River());
ADD("CDLUpsideGap2Crows", CDLUpsideGap2Crows());
ADD("CDLXSideGap3Methods", CDLXSideGap3Methods());
#undef ADD
return n; // 61
}
// ================= Markov storage & helpers =================
static int* MC_Count; // [MC_STATES*MC_STATES] -> we alias this as the 1H (HTF) chain
static int* MC_RowSum; // [MC_STATES]
static int MC_Prev = -1;
static int MC_Cur = 0;
static var MC_PBullNext = 0.5;
static var MC_Entropy = 0.0;
#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))
int MC_stateFromCDL(var* cdl /*len=61*/, var thr) {
int i, best=-1;
var besta=0;
for(i=0;i<MC_NPAT;i++){
var a = abs(cdl[i]);
if(a>besta){ besta=a; best=i; }
}
if(best<0) return MC_NONE;
if(besta < thr) return MC_NONE;
int bull = (cdl[best] > 0);
return 1 + 2*best + bull; // 1..122
}
int MC_isBull(int s){
if(s<=0) return 0;
return ((s-1)%2)==1;
}
void MC_update(int sPrev,int sCur){
if(sPrev<0) return;
MC_Count[MC_IDX(sPrev,sCur)]++;
MC_RowSum[sPrev]++;
}
// === (6) Use runtime Laplace alpha (G_MC_Alpha) ===
var MC_prob(int s,int t){
var num = (var)MC_Count[MC_IDX(s,t)] + G_MC_Alpha;
var den = (var)MC_RowSum[s] + G_MC_Alpha*MC_STATES;
if(den<=0) return 1.0/MC_STATES;
return num/den;
}
// === (6) one-pass PBull + Entropy
void MC_rowStats(int s, var* outPBull, var* outEntropy) {
if(s<0){
if(outPBull) *outPBull=0.5;
if(outEntropy) *outEntropy=1.0;
return;
}
int t;
var Z=0, pBull=0;
for(t=1;t<MC_STATES;t++){
var p=MC_prob(s,t);
Z+=p;
if(MC_isBull(t)) pBull+=p;
}
if(Z<=0){
if(outPBull) *outPBull=0.5;
if(outEntropy) *outEntropy=1.0;
return;
}
var H=0;
for(t=1;t<MC_STATES;t++){
var p = MC_prob(s,t)/Z;
if(p>0) H += -p*log(p);
}
var Hmax = log(MC_STATES-1);
if(Hmax<=0) H = 0; else H = H/Hmax;
if(outPBull) *outPBull = pBull/Z;
if(outEntropy) *outEntropy = H;
}
// ==================== NEW: Multi-TF Markov extensions ====================
// We keep the legacy MC_* as the HTF (1H) chain via aliases:
#define MH_Count MC_Count
#define MH_RowSum MC_RowSum
#define MH_Prev MC_Prev
#define MH_Cur MC_Cur
#define MH_PBullNext MC_PBullNext
#define MH_Entropy MC_Entropy
// ---------- 5M (LTF) Markov ----------
static int* ML_Count; // [MC_STATES*MC_STATES]
static int* ML_RowSum; // [MC_STATES]
static int ML_Prev = -1;
static int ML_Cur = 0;
static var ML_PBullNext = 0.5;
static var ML_Entropy = 0.0;
// ---------- Relation Markov (links 5M & 1H) ----------
#define MR_STATES MC_STATES
static int* MR_Count; // [MR_STATES*MC_STATES]
static int* MR_RowSum; // [MR_STATES]
static int MR_Prev = -1;
static int MR_Cur = 0;
static var MR_PBullNext = 0.5;
static var MR_Entropy = 0.0;
// Relation state mapping (agreement only)
// sL, sH in [0..122], 0 = none; returns a value in [0..122], 0 = no-agreement
int MC_relFromHL(int sL, int sH) {
if(sL <= 0 || sH <= 0) return MC_NONE;
int idxL = (sL - 1)/2; int bullL = ((sL - 1)%2)==1;
int idxH = (sH - 1)/2; int bullH = ((sH - 1)%2)==1;
if(idxL == idxH && bullL == bullH) return sL; // same shared state
return MC_NONE; // no-agreement bucket
}
// Small helpers reused for all three chains
void MC_update_any(int* C, int* R, int sPrev, int sCur) {
if(sPrev<0) return;
C[MC_IDX(sPrev,sCur)]++;
R[sPrev]++;
}
// Ultra-safe row stats for any Markov matrix (Zorro lite-C friendly)
void MC_rowStats_any(int* C, int* R, int s, var alpha, var* outPBull, var* outEntropy)
{
// Defaults
if(outPBull) *outPBull = 0.5;
if(outEntropy) *outEntropy = 1.0;
// Guards
if(!C || !R) return;
if(!(alpha > 0)) alpha = 1.0; // also catches NaN/INF
if(s <= 0 || s >= MC_STATES) return; // ignore NONE(0) and OOB
// Row must have observations
{
int rs = R[s];
if(rs <= 0) return;
}
// Precompute safe row slice
int STATES = MC_STATES;
int NN = STATES * STATES;
int rowBase = s * STATES;
if(rowBase < 0 || rowBase > NN - STATES) return; // paranoid bound
int* Crow = C + rowBase;
// Denominator with Laplace smoothing
var den = (var)R[s] + alpha * (var)STATES;
if(!(den > 0)) return;
// Pass 1: mass and bull mass
var Z = 0.0, pBull = 0.0;
int t;
for(t = 1; t < STATES; t++){
var num = (var)Crow[t] + alpha;
var p = num / den;
Z += p;
if(MC_isBull(t)) pBull += p;
}
if(!(Z > 0)) return;
// Pass 2: normalized entropy
var H = 0.0;
var Hmax = log((var)(STATES - 1));
if(!(Hmax > 0)) Hmax = 1.0;
for(t = 1; t < STATES; t++){
var num = (var)Crow[t] + alpha;
var p = (num / den) / Z;
if(p > 0) H += -p*log(p);
}
if(outPBull) *outPBull = pBull / Z;
if(outEntropy) *outEntropy = H / Hmax;
}
// --------------- 5M chain (every 5-minute bar) ---------------
void updateMarkov_5M()
{
// arrays must exist
if(!ML_Count || !ML_RowSum) return;
// compute LTF candlestick state
static var CDL_L[MC_NPAT];
buildCDL_TA61(CDL_L, 0);
int s = MC_stateFromCDL(CDL_L, G_MC_ACT); // 0..MC_STATES-1 (0 = NONE)
// debug/guard: emit when state is NONE or out of range (no indexing yet)
if(s <= 0 || s >= MC_STATES) printf("\n[MC] skip s=%d (Bar=%d)", s, Bar);
// update transitions once we have enough history
if(Bar > LookBack) MC_update_any(ML_Count, ML_RowSum, ML_Prev, s);
ML_Prev = s;
// only compute stats when s is a valid, in-range state and the row has mass
if(s > 0 && s < MC_STATES){
if(ML_RowSum[s] > 0)
MC_rowStats_any(ML_Count, ML_RowSum, s, G_MC_Alpha, &ML_PBullNext, &ML_Entropy);
ML_Cur = s; // keep last valid state; do not overwrite on NONE
}
// else: leave ML_Cur unchanged (sticky last valid)
}
// --------------- 1H chain (only when an H1 bar closes) ---------------
void updateMarkov_1H()
{
// arrays must exist
if(!MC_Count || !MC_RowSum) return;
// switch to 1H timeframe for the patterns
int saveTF = TimeFrame;
TimeFrame = TF_H1;
static var CDL_H[MC_NPAT];
buildCDL_TA61(CDL_H, 0);
int sH = MC_stateFromCDL(CDL_H, G_MC_ACT); // 0..MC_STATES-1
// debug/guard: emit when state is NONE or out of range (no indexing yet)
if(sH <= 0 || sH >= MC_STATES) printf("\n[MC] skip sH=%d (Bar=%d)", sH, Bar);
if(Bar > LookBack) MC_update(MH_Prev, sH);
MH_Prev = sH;
// only compute stats when sH is valid and its row has mass
if(sH > 0 && sH < MC_STATES){
if(MH_RowSum[sH] > 0)
MC_rowStats(sH, &MH_PBullNext, &MH_Entropy); // HTF uses legacy helper
MH_Cur = sH; // keep last valid HTF state
}
// else: leave MH_Cur unchanged
// restore original timeframe
TimeFrame = saveTF;
}
// --------------- Relation chain (agreement-only between 5M & 1H) ---------------
void updateMarkov_REL()
{
// arrays must exist
if(!MR_Count || !MR_RowSum) return;
// relation state from current LTF state and last HTF state
int r = MC_relFromHL(ML_Cur, MH_Cur); // 0 = no agreement / none
// debug/guard: emit when relation is NONE or out of range (no indexing yet)
if(r <= 0 || r >= MC_STATES) printf("\n[MC] skip r=%d (Bar=%d)", r, Bar);
if(Bar > LookBack) MC_update_any(MR_Count, MR_RowSum, MR_Prev, r);
MR_Prev = r;
// only compute stats when r is valid and row has mass
if(r > 0 && r < MC_STATES){
if(MR_RowSum[r] > 0)
MC_rowStats_any(MR_Count, MR_RowSum, r, G_MC_Alpha, &MR_PBullNext, &MR_Entropy);
MR_Cur = r; // keep last valid relation state
}
// else: leave MR_Cur unchanged
}
// ================= HARMONIC D-TREE ENGINE =================
// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){
if(u<-1) u=-1;
if(u>1) u=1;
var t=0.5*(u+1.0);
return lo + t*(hi-lo);
}
// ---- safety helpers ----
var safeNum(var x) {
if(invalid(x)) return 0; // 0 for NaN/INF
return clamp(x,-1e100,1e100); // hard-limit range
}
void sanitize(var* A,int n){
int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]);
}
var sat100(var x){ return clamp(x,-100,100); }
// ===== EQC-0: Equation-cycle angle helpers =====
var pi() { return 3.141592653589793; }
var wrapPi(var a) {
while(a <= -pi()) a += 2.*pi();
while(a > pi()) a -= 2.*pi();
return a;
}
var angDiff(var a, var b) { return wrapPi(b - a); }
// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap) {
if(!dst || !src || cap <= 0) return;
int dl = strlen(dst);
int sl = strlen(src);
int room = cap - 1 - dl;
if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
int i;
for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
dst[dl + i] = 0;
}
int countSubStr(string s, string sub){
if(!s || !sub) return 0;
int n=0; string p=s; int sublen = strlen(sub); if(sublen<=0) return 0;
while((p=strstr(p,sub))){ n++; p += sublen; }
return n;
}
// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
int h = 5381, c, i = 0;
if(!s) return h;
while((c = s[i++])) h = ((h<<5)+h) ^ c; // h*33 ^ c
return h & 0x7fffffff; // force non-negative
}
// ---- tree helpers ----
int validTreeIndex(int tid){
if(!G_TreeIdx) return 0;
if(tid<0||tid>=G_TreeN) return 0;
return (G_TreeIdx[tid]!=0);
}
Node* treeAt(int tid){
if(validTreeIndex(tid)) return G_TreeIdx[tid];
return &G_DummyNode;
}
int safeTreeIndexFromEq(int eqi){
int denom = ifelse(G_TreeN>0, G_TreeN, 1);
int tid = eqi;
if(tid < 0) tid = 0;
if(denom > 0) tid = tid % denom;
if(tid < 0) tid = 0;
return tid;
}
// ---- tree indexing ----
void pushTreeNode(Node* u){
if(G_TreeN >= G_TreeCap){
int newCap = G_TreeCap*2;
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes();
}
G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){
if(!u) return;
pushTreeNode(u);
int i;
for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]);
}
// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
if(!G_TreeIdx) return;
if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
int newCap = (G_TreeCap >> 1);
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes();
}
}
// ---- depth LUT helper (upgrade #1) ----
void refreshDepthW() {
if(!G_DepthW) return;
int d; for(d=0; d<DEPTH_LUT_SIZE; d++) G_DepthW[d] = 1.0 / pow(d+1, G_DTreeExp);
G_DepthExpLast = G_DTreeExp;
}
// ---- tree create/eval (with pool & LUT upgrades) ----
Node* createNode(int depth) {
Node* u = poolAllocNode();
if(!u) return 0;
u->v = random();
u->r = 0.01 + 0.02*depth + random(0.005);
u->d = depth;
if(depth > 0){
u->n = 1 + (int)random(MAX_BRANCHES);
u->c = malloc(u->n * sizeof(void*));
if(!u->c){ u->n = 0; u->c = 0; return u; }
int i;
for(i=0;i<u->n;i++){
Node* child = createNode(depth - 1);
((Node**)u->c)[i] = child;
}
} else {
u->n = 0; u->c = 0;
}
return u;
}
var evaluateNode(Node* u) {
if(!u) return 0;
var sum = 0; int i;
for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
if(G_DepthExpLast < 0 || abs(G_DTreeExp - G_DepthExpLast) > 1e-9) refreshDepthW();
var phase = sin(u->r * Bar + sum);
var weight = G_DepthW[u->d];
u->v = (1 - weight)*u->v + weight*phase;
return u->v;
}
int countNodes(Node* u){
if(!u) return 0;
int c=1,i;
for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]);
return c;
}
void freeTree(Node* u) {
if(!u) return;
int i;
for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]);
if(u->c) free(u->c);
poolFreeNode(u);
}
// =========== NETWORK STATE & COEFFICIENTS ===========
var* G_State;
var* G_Prev;
var* G_StateSq = 0;
i16* G_Adj;
fvar* G_RP;
fvar* G_Z;
i8* G_Mode;
fvar* G_WSelf;
fvar* G_WN1;
fvar* G_WN2;
fvar* G_WGlob1;
fvar* G_WGlob2;
fvar* G_WMom;
fvar* G_WTree;
fvar* G_WAdv;
fvar* A1x; fvar* A1lam; fvar* A1mean; fvar* A1E; fvar* A1P; fvar* A1i; fvar* A1c;
fvar* A2x; fvar* A2lam; fvar* A2mean; fvar* A2E; fvar* A2P; fvar* A2i; fvar* A2c;
fvar* G1mean; fvar* G1E; fvar* G2P; fvar* G2lam;
fvar* G_TreeTerm;
i16* G_TopEq;
fvar* G_TopW;
i16* G_EqTreeId;
fvar* TAlpha;
fvar* TBeta;
fvar* G_PropRaw;
fvar* G_Prop;
// --- Per-equation hit-rate (EW average of 1-bar directional correctness) ---
#define HIT_ALPHA 0.02
#define HIT_EPS 0.0001
fvar* G_HitEW; // [N] 0..1 EW hit-rate
int* G_HitN; // [N] # of scored comparisons
fvar* G_AdvPrev; // [N] previous bar's advisor output (-1..+1)
var G_Ret1 = 0; // realized 1-bar return for scoring
// ===== Markov features exposed to DTREE (HTF) =====
var G_MCF_PBull; // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State; // 0..122
// ===== EQC-1: Equation-cycle globals =====
var* G_EqTheta = 0; // [G_N] fixed angle per equation on ring (0..2*pi)
int G_LeadEq = -1; // last bar's leader eq index
var G_LeadTh = 0; // leader's angle
var G_CycPh = 0; // wrapped cumulative phase (-pi..pi)
var G_CycSpd = 0; // smoothed angular speed (delta-theta EMA)
// epoch/context & feedback
int G_Epoch = 0;
int G_CtxID = 0;
var G_FB_A = 0.7;
var G_FB_B = 0.3;
// ---------- predictability ----------
var nodePredictability(Node* t) {
if(!t) return 0.5;
var disp = 0; int n = t->n, i, cnt = 0;
if(t->c){
for(i=0;i<n;i++){
Node* c = ((Node**)t->c)[i];
if(c){ disp += abs(c->v - t->v); cnt++; }
}
if(cnt > 0) disp /= cnt;
}
var depthFac = 1.0/(1 + t->d);
var rateBase = 0.01 + 0.02*t->d;
var rateFac = exp(-25.0*abs(t->r - rateBase));
var p = 0.5*(depthFac + rateFac);
p = 0.5*p + 0.5*(1.0 + (-disp));
if(p<0) p=0; if(p>1) p=1;
return p;
}
var nodeImportance(Node* u) {
if(!u) return 0;
var amp = abs(u->v); if(amp>1) amp=1;
var p = nodePredictability(u);
var depthW = 1.0/(1.0 + u->d);
var imp = (0.6*p + 0.4*amp) * depthW;
return imp;
}
// ====== Elastic growth helpers ======
Node* createLeafDepth(int d) {
Node* u = poolAllocNode();
if(!u) return 0;
u->v = random();
u->r = 0.01 + 0.02*d + random(0.005);
u->d = d; // record the depth so LUT weighting and pruning see the correct level
u->n = 0;
u->c = 0;
return u;
}
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK) {
if(!u) return;
if(u->d == frontierDepth){
int want = addK; if(want <= 0) return;
int oldN = u->n; int newN = oldN + want;
Node** Cnew = (Node**)malloc(newN * sizeof(void*));
if(oldN>0 && u->c) memcpy(Cnew, u->c, oldN*sizeof(void*));
int i;
for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
if(u->c) free(u->c);
u->c = Cnew;
u->n = newN;
return;
}
int j;
for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}
void freeChildAt(Node* parent, int idx) {
if(!parent || !parent->c) return;
Node** C = (Node**)parent->c;
freeTree(C[idx]);
int i;
for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
parent->n--;
if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK) {
if(!u) return;
if(u->d == targetDepth-1 && u->n > 0){
int n = u->n, i, kept = 0;
int mark[16]; for(i=0;i<16;i++) mark[i]=0;
int iter;
for(iter=0; iter<keepK && iter<n; iter++){
int bestI = -1; var bestImp = -1;
for(i=0;i<n;i++){
if(i<16 && mark[i]==1) continue;
var imp = nodeImportance(((Node**)u->c)[i]);
if(imp > bestImp){ bestImp = imp; bestI = i; }
}
if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
}
for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
return;
}
int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}
// ----- refresh fixed ring angles per equation (0..2*pi) -----
void refreshEqAngles() {
if(!G_EqTheta){ G_EqTheta = (var*)malloc(G_N*sizeof(var)); }
int i; var twoPi = 2.*pi(); var denom = ifelse(G_TreeN>0,(var)G_TreeN,1.0);
for(i=0;i<G_N;i++){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var u = ((var)tid)/denom; // 0..1
G_EqTheta[i] = twoPi*u; // 0..2*pi
}
}
// ---------- reindex (sizes pred cache; + refresh angles) ----------
void reindexTreeAndMap() {
G_TreeN = 0;
indexTreeDFS(Root);
if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
{ int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = (i16)(i % G_TreeN); }
G_PredLen = G_TreeN; if(G_PredLen <= 0) G_PredLen = 1;
if(G_PredLen > G_PredCap){
if(G_PredNode) free(G_PredNode);
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
}
G_PredCacheBar = -1;
// NEW: compute equation ring angles after mapping
refreshEqAngles();
maybeShrinkTreeIdx();
recalcTreeBytes();
}
// ====== Accuracy sentinel & elastic-depth controller ======
void acc_update(var x /*lambda*/, var y /*gamma*/) {
var a = 0.01; // ~100-bar half-life
ACC_mx = (1-a)*ACC_mx + a*x;
ACC_my = (1-a)*ACC_my + a*y;
ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
var vx = ACC_mx2 - ACC_mx*ACC_mx;
var vy = ACC_my2 - ACC_my*ACC_my;
var cv = ACC_mxy - ACC_mx*ACC_my;
if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy);
else G_AccCorr = 0;
if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}
var util_now() {
int mb = mem_mb_est();
var mem_pen = 0;
if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB;
else mem_pen = 0;
return G_AccCorr - 0.5*mem_pen;
}
int apply_grow_step() {
int mb = mem_mb_est();
if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
int newFrontier = G_RT_TreeMaxDepth;
growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
G_RT_TreeMaxDepth++;
reindexTreeAndMap();
printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
return 1;
}
void revert_last_grow() {
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}
void edc_runtime() {
if((Bar % DEPTH_TUNE_BARS) == 0){
var U0 = util_now();
var trial = clamp(G_DTreeExp + G_DTreeExpDir*G_DTreeExpStep, 0.8, 2.0);
var old = G_DTreeExp;
G_DTreeExp = trial;
if(util_now() + 0.005 < U0){
G_DTreeExp = old;
G_DTreeExpDir = -G_DTreeExpDir;
}
}
int mb = mem_mb_est();
if(G_TunePending){
if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
G_UtilAfter = util_now();
var eps = 0.01;
if(G_UtilAfter + eps < G_UtilBefore){ revert_last_grow(); }
else { printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter); }
G_TunePending = 0;
G_TuneAction = 0;
}
return;
}
if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
G_UtilBefore = util_now();
if(apply_grow_step()){
G_TunePending = 1;
G_TuneAction = 1;
G_TuneStartBar = Bar;
}
}
}
// Builds "Log\\Alpha12_eq_###.csv" into outName (must be >=64)
void buildEqFileName(int idx, char* outName /*>=64*/) {
strcpy(outName, "Log\\Alpha12_eq_");
string idxs = strf("%03i", idx);
strcat(outName, idxs);
strcat(outName, ".csv");
}
// ===== consolidated EQ log =====
void writeEqHeaderOnce() {
static int done=0; if(done) return; done=1;
file_append("Log\\Alpha12_eq_all.csv",
"Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}
void appendEqMetaLine(
int bar, int epoch, int ctx,
int i, int n1, int n2, int tid, int depth, var rate, var pred, var adv, var prop, int mode,
var wadv, var wtree, var pbull, var ent, int mcstate, string expr)
{
if(i >= LOG_EQ_SAMPLE) return;
// Lightweight expression stats (safe if expr == 0)
int eLen = 0, eHash = 0, cT = 0, cS = 0, cC = 0;
if(expr){
eLen = (int)strlen(expr);
eHash = (int)djb2_hash(expr);
cT = countSubStr(expr,"tanh(");
cS = countSubStr(expr,"sin(");
cC = countSubStr(expr,"cos(");
} else {
eHash = (int)djb2_hash("");
}
// One trimmed CSV line; order matches writeEqHeaderOnce()
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.4f,%.4f,%.4f,%.4f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth,
rate, pred, adv, prop, mode, wadv, wtree, pbull, ent,
mcstate, eLen, eHash, cT, cS, cC));
}
// --------- allocation ----------
void randomizeRP() {
int K=G_K,N=G_N,k,j;
for(k=0;k<K;k++)
for(j=0;j<N;j++)
G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}
// === (8/9) Use effective K + per-bar guard ===
int G_ProjBar = -1;
int G_ProjK = -1;
int keffClamped(){
int K = G_Keff;
if(K < 0) K = 0;
if(K > G_K) K = G_K;
return K;
}
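// Z[k] = sum_j RP[k][j] * StateSq[j] for the first Keff rows; memoized per (Bar, K) so
// repeated calls within the same bar return the cached projection.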
void computeProjection()
{
if(!G_RP || !G_Z || !G_StateSq) return;
int K = keffClamped();
if(G_ProjBar == Bar && G_ProjK == K) return;
int N = G_N, k, j;
for(k = 0; k < K; k++){
var acc = 0;
for(j = 0; j < N; j++) acc += (var)G_RP[k*N + j] * G_StateSq[j];
G_Z[k] = (fvar)acc;
}
G_ProjBar = Bar;
G_ProjK = K;
}
// D) Compact allocate/free
void allocateNet() {
int N = G_N, D = G_D, K = G_K;
// core
G_State = (var*)malloc(N*sizeof(var));
G_Prev = (var*)malloc(N*sizeof(var));
G_StateSq = (var*)malloc(N*sizeof(var));
// graph / projection
G_Adj = (i16*) malloc(N*D*sizeof(i16));
G_RP = (fvar*)malloc(K*N*sizeof(fvar));
G_Z = (fvar*)malloc(K*sizeof(fvar));
G_Mode= (i8*) malloc(N*sizeof(i8));
// weights & params
G_WSelf = (fvar*)malloc(N*sizeof(fvar));
G_WN1 = (fvar*)malloc(N*sizeof(fvar));
G_WN2 = (fvar*)malloc(N*sizeof(fvar));
G_WGlob1 = (fvar*)malloc(N*sizeof(fvar));
G_WGlob2 = (fvar*)malloc(N*sizeof(fvar));
G_WMom = (fvar*)malloc(N*sizeof(fvar));
G_WTree = (fvar*)malloc(N*sizeof(fvar));
G_WAdv = (fvar*)malloc(N*sizeof(fvar));
A1x = (fvar*)malloc(N*sizeof(fvar));
A1lam=(fvar*)malloc(N*sizeof(fvar));
A1mean=(fvar*)malloc(N*sizeof(fvar));
A1E=(fvar*)malloc(N*sizeof(fvar));
A1P=(fvar*)malloc(N*sizeof(fvar));
A1i=(fvar*)malloc(N*sizeof(fvar));
A1c=(fvar*)malloc(N*sizeof(fvar));
A2x = (fvar*)malloc(N*sizeof(fvar));
A2lam=(fvar*)malloc(N*sizeof(fvar));
A2mean=(fvar*)malloc(N*sizeof(fvar));
A2E=(fvar*)malloc(N*sizeof(fvar));
A2P=(fvar*)malloc(N*sizeof(fvar));
A2i=(fvar*)malloc(N*sizeof(fvar));
A2c=(fvar*)malloc(N*sizeof(fvar));
G1mean=(fvar*)malloc(N*sizeof(fvar));
G1E =(fvar*)malloc(N*sizeof(fvar));
G2P =(fvar*)malloc(N*sizeof(fvar));
G2lam =(fvar*)malloc(N*sizeof(fvar));
TAlpha=(fvar*)malloc(N*sizeof(fvar));
TBeta =(fvar*)malloc(N*sizeof(fvar));
G_TreeTerm=(fvar*)malloc(N*sizeof(fvar));
G_TopEq=(i16*) malloc(N*sizeof(i16));
G_TopW=(fvar*)malloc(N*sizeof(fvar));
G_PropRaw=(fvar*)malloc(N*sizeof(fvar));
G_Prop =(fvar*)malloc(N*sizeof(fvar));
if(LOG_EXPR_TEXT) G_Sym = (string*)malloc(N*sizeof(char*));
else G_Sym = 0;
// tree index
G_TreeCap = 128;
G_TreeIdx = (Node**)malloc(G_TreeCap*sizeof(Node*));
G_TreeN = 0;
G_EqTreeId = (i16*)malloc(N*sizeof(i16));
// initialize adjacency
{ int t; for(t=0; t<N*D; t++) G_Adj[t] = -1; }
// initialize state and parameters
{
int i;
for(i=0;i<N;i++){
G_State[i] = random();
G_Prev[i] = G_State[i];
G_StateSq[i]= G_State[i]*G_State[i];
G_Mode[i] = 0;
G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2;
G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05;
G_WTree[i]=0.15; G_WAdv[i]=0.15;
A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
G1mean[i]=1.0; G1E[i]=0.001;
G2P[i]=0.6; G2lam[i]=0.3;
TAlpha[i]=0.8; TBeta[i]=25.0;
G_TreeTerm[i]=0;
G_TopEq[i]=-1; G_TopW[i]=0;
G_PropRaw[i]=1; G_Prop[i]=1.0/G_N;
if(LOG_EXPR_TEXT){
G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
if(G_Sym[i]) strcpy(G_Sym[i],"");
}
}
}
// --- Hit-rate state ---
G_HitEW = (fvar*)malloc(N*sizeof(fvar));
G_HitN = (int*) malloc(N*sizeof(int));
G_AdvPrev = (fvar*)malloc(N*sizeof(fvar));
{ int i; for(i=0;i<N;i++){ G_HitEW[i] = 0.5; G_HitN[i] = 0; G_AdvPrev[i] = 0; } }
computeMemFixedBytes();
if(G_PredNode) free(G_PredNode);
G_PredLen = G_TreeN; if(G_PredLen<=0) G_PredLen=1;
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
G_PredCacheBar = -1;
}
void freeNet() {
int i;
if(G_State)free(G_State);
if(G_Prev)free(G_Prev);
if(G_StateSq)free(G_StateSq);
if(G_Adj)free(G_Adj);
if(G_RP)free(G_RP);
if(G_Z)free(G_Z);
if(G_Mode)free(G_Mode);
if(G_WSelf)free(G_WSelf);
if(G_WN1)free(G_WN1);
if(G_WN2)free(G_WN2);
if(G_WGlob1)free(G_WGlob1);
if(G_WGlob2)free(G_WGlob2);
if(G_WMom)free(G_WMom);
if(G_WTree)free(G_WTree);
if(G_WAdv)free(G_WAdv);
if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean);
if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean);
if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
if(G1mean)free(G1mean); if(G1E)free(G1E);
if(G2P)free(G2P); if(G2lam)free(G2lam);
if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
if(G_TreeTerm)free(G_TreeTerm);
if(G_TopEq)free(G_TopEq);
if(G_TopW)free(G_TopW);
if(G_EqTreeId)free(G_EqTreeId);
if(G_PropRaw)free(G_PropRaw);
if(G_Prop)free(G_Prop);
if(G_Sym){
for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym);
}
if(G_TreeIdx)free(G_TreeIdx);
if(G_PredNode)free(G_PredNode);
if(G_HitEW)free(G_HitEW); if(G_HitN)free(G_HitN); if(G_AdvPrev)free(G_AdvPrev); // hit-rate state allocated in allocateNet()
if(G_EqTheta) free(G_EqTheta); // NEW: free ring angles
}
// --------- DTREE feature builders ----------
var nrm_s(var x) { return sat100(100.*tanh(x)); }
var nrm_scl(var x, var s) { return sat100(100.*tanh(s*x)); }
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var pred, var* S /*ADV_EQ_NF*/) {
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
// equation-cycle alignment
var th_i = ifelse(G_EqTheta!=0, G_EqTheta[i], 0);
var dphi = angDiff(G_CycPh, th_i);
var alignC = cos(dphi); // +1 aligned, -1 opposite
var alignS = sin(dphi); // quadrature
S[0] = nrm_s(G_State[i]);
S[1] = nrm_s(mean);
S[2] = nrm_scl(power,0.05);
S[3] = nrm_scl(energy,0.01);
S[4] = nrm_s(lambda);
S[5] = sat100(200.0*(pred-0.5));
S[6] = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
S[7] = sat100(1000.0*t->r);
S[8] = nrm_s(G_TreeTerm[i]);
S[9] = sat100( (200.0/3.0) * (var)( (int)G_Mode[i] ) - 100.0 );
// HTF (1H)
S[10] = sat100(200.0*(G_MCF_PBull-0.5));
S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
S[12] = sat100(200.0*((var)G_HitEW[i] - 0.5));
S[13] = sat100(100.*alignC);
S[14] = sat100(100.*alignS);
// NEW: 5M & Relation Markov features
S[15] = sat100(200.0*(ML_PBullNext - 0.5)); // 5M PBull
S[16] = sat100(200.0*(ML_Entropy - 0.5)); // 5M Entropy
S[17] = sat100(200.0*(MR_PBullNext - 0.5)); // Relation PBull
S[18] = sat100(200.0*(MR_Entropy - 0.5)); // Relation Entropy
sanitize(S,ADV_EQ_NF);
}
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/) {
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* ti = treeAt(tid_i);
Node* tj = treeAt(tid_j);
var predi = predByTid(tid_i);
var predj = predByTid(tid_j);
P[0]=nrm_s(G_State[i]);
P[1]=nrm_s(G_State[j]);
P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
P[4]=sat100(1000.0*ti->r);
P[5]=sat100(1000.0*tj->r);
P[6]=sat100(abs(P[2]-P[3]));
P[7]=sat100(abs(P[4]-P[5]));
P[8]=sat100(100.0*(predi+predj-1.0));
P[9]=nrm_s(lambda);
P[10]=nrm_s(mean);
P[11]=nrm_scl(power,0.05);
sanitize(P,ADV_PAIR_NF);
}
// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
int N = G_N, D = G_D;
if(!G_Adj || N <= 1 || D <= 0) return 0;
if(d < 0) d = 0;
if(d >= D) d = d % D;
int v = G_Adj[i*D + d];
if(v < 0 || v >= N || v == i) v = (i + 1) % N;
return v;
}
void sanitizeAdjacency(){
if(!G_Adj) return;
int N = G_N, D = G_D, i, d;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
i16 *p = &G_Adj[i*D + d];
if(*p < 0 || *p >= N || *p == i){
int r = (int)random(N);
if(r == i) r = (r+1) % N;
*p = (i16)r;
}
}
if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
int r2 = (G_Adj[i*D+1] + 1) % N;
if(r2 == i) r2 = (r2+1) % N;
G_Adj[i*D+1] = (i16)r2;
}
}
}
// --------- advisor helpers (NEW) ----------
var adviseSeed(int i, var lambda, var mean, var energy, var power) {
static int seedBar = -1;
static int haveSeed[NET_EQNS];
static var seedVal[NET_EQNS];
if(seedBar != Bar){
int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
seedBar = Bar;
}
if(i < 0) i = 0; if(i >= NET_EQNS) i = i % NET_EQNS;
if(!allowAdvise(i)) return 0;
if(!haveSeed[i]){
seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
haveSeed[i] = 1;
}
return seedVal[i];
}
var mix01(var a, int salt){
var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
return tanh(0.75*z);
}
// --------- advise wrappers (single-equation only) ----------
var adviseEq(int i, var lambda, var mean, var energy, var power) {
if(!allowAdvise(i)) return 0;
if(is(INITRUN)) return 0;
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
if(tight) return 0;
if(G_HitN[i] > 32){
var h = (var)G_HitEW[i];
var gate = 0.40 + 0.15*(1.0 - MC_Entropy);
if(h < gate){
if(random() >= 0.5) return 0;
}
}
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var pred = predByTid(tid);
var S[ADV_EQ_NF];
buildEqFeatures(i,lambda,mean,energy,power,pred,S);
var obj = 0;
if(Train){
obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));
var prior = 0.75 + 0.5*((var)G_HitEW[i] - 0.5); // 0.5..1.0
obj *= prior;
// --- EQC-5: cycle priors (reward aligned & non-stalled rotation)
{ var th_i = ifelse(G_EqTheta!=0, G_EqTheta[i], 0);
var dphi = angDiff(G_CycPh, th_i);
var align = 0.90 + 0.20*(0.5*(cos(dphi)+1.0));
var spdOK = 0.90 + 0.20*clamp(abs(G_CycSpd)/(0.15), 0., 1.);
obj *= align * spdOK;
}
}
int objI = (int)obj;
var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
return a/100.;
}
var advisePair(int i,int j, var lambda, var mean, var energy, var power) {
return 0;
}
// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power) {
int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
Node *ni = treeAt(ti), *nj = treeAt(tj);
var simD = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
var dr = 50.0*abs(ni->r - nj->r);
var simR = 1.0 / (1.0 + dr);
var predi = predByTid(ti);
var predj = predByTid(tj);
var pred = 0.5*(predi + predj);
var score = 0.5*pred + 0.3*simD + 0.2*simR;
return 2.0*score - 1.0;
}
// --------- adjacency selection (heuristic only) ----------
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power) {
int N=G_N, D=G_D, i, d, c, best, cand;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
// avoid duplicate neighbors
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){
do{ best = (int)random(N);} while(best==i);
}
G_Adj[i*D + d] = (i16)best;
}
}
}
// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }
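// A single DTREE advice value per equation (adviseSeed) is expanded into the mode, all weights,
// the nonlinearity coefficients and the raw proportion by mixing the seed with different integer
// salts (mix01) and mapping each result into its own range; the proportion is then boosted by
// the equation's hit rate.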
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power) {
var seed = adviseSeed(i,lambda,mean,energy,power);
G_Mode[i] = (int)(abs(1000*seed)) & 3;
G_WSelf[i] = (fvar)mapA(mix01(seed, 11), 0.15, 0.85);
G_WN1[i] = (fvar)mapA(mix01(seed, 12), 0.05, 0.35);
G_WN2[i] = (fvar)mapA(mix01(seed, 13), 0.05, 0.35);
G_WGlob1[i] = (fvar)mapA(mix01(seed, 14), 0.05, 0.30);
G_WGlob2[i] = (fvar)mapA(mix01(seed, 15), 0.05, 0.30);
G_WMom[i] = (fvar)mapA(mix01(seed, 16), 0.02, 0.15);
G_WTree[i] = (fvar)mapA(mix01(seed, 17), 0.05, 0.35);
G_WAdv[i] = (fvar)mapA(mix01(seed, 18), 0.05, 0.35);
A1x[i] = (fvar)(randsign()*mapA(mix01(seed, 21), 0.6, 1.2));
A1lam[i] = (fvar)(randsign()*mapA(mix01(seed, 22), 0.05,0.35));
A1mean[i]= (fvar) mapA(mix01(seed, 23),-0.30,0.30);
A1E[i] = (fvar) mapA(mix01(seed, 24),-0.0015,0.0015);
A1P[i] = (fvar) mapA(mix01(seed, 25),-0.30,0.30);
A1i[i] = (fvar) mapA(mix01(seed, 26),-0.02,0.02);
A1c[i] = (fvar) mapA(mix01(seed, 27),-0.20,0.20);
A2x[i] = (fvar)(randsign()*mapA(mix01(seed, 31), 0.6, 1.2));
A2lam[i] = (fvar)(randsign()*mapA(mix01(seed, 32), 0.05,0.35));
A2mean[i]= (fvar) mapA(mix01(seed, 33),-0.30,0.30);
A2E[i] = (fvar) mapA(mix01(seed, 34),-0.0015,0.0015);
A2P[i] = (fvar) mapA(mix01(seed, 35),-0.30,0.30);
A2i[i] = (fvar) mapA(mix01(seed, 36),-0.02,0.02);
A2c[i] = (fvar) mapA(mix01(seed, 37),-0.20,0.20);
G1mean[i] = (fvar) mapA(mix01(seed, 41), 0.4, 1.6);
G1E[i] = (fvar) mapA(mix01(seed, 42),-0.004,0.004);
G2P[i] = (fvar) mapA(mix01(seed, 43), 0.1, 1.2);
G2lam[i] = (fvar) mapA(mix01(seed, 44), 0.05, 0.7);
TAlpha[i] = (fvar) mapA(mix01(seed, 51), 0.3, 1.5);
TBeta[i] = (fvar) mapA(mix01(seed, 52), 6.0, 50.0);
G_PropRaw[i] = (fvar)(0.01 + 0.99*(0.5*(seed+1.0)));
{ // reliability boost
var boost = 0.75 + 0.5*(var)G_HitEW[i];
G_PropRaw[i] = (fvar)((var)G_PropRaw[i] * boost);
}
}
void normalizeProportions() {
int N=G_N,i;
var s=0;
for(i=0;i<N;i++) s += G_PropRaw[i];
if(s<=0) {
for(i=0;i<N;i++) G_Prop[i] = (fvar)(1.0/N);
return;
}
for(i=0;i<N;i++) G_Prop[i] = (fvar)(G_PropRaw[i]/s);
}
// H) dtreeTerm gets predictabilities on demand
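// Returns a weighted average of the other equations' states: weights decay with tree-depth and
// rate distance (exp(-alpha*|di-dj|) * exp(-beta*|ri-rj|)) and are boosted by joint predictability,
// proportions and the heuristic pair score; the strongest contributor is reported via outTopEq/outTopW.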
var dtreeTerm(int i, int* outTopEq, var* outTopW) {
int N=G_N,j;
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* ti=treeAt(tid_i);
int di=ti->d; var ri=ti->r;
var predI = predByTid(tid_i);
var alpha=TAlpha[i], beta=TBeta[i];
var sumw=0, acc=0, bestW=-1; int bestJ=-1;
for(j=0;j<N;j++){
if(j==i) continue;
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* tj=treeAt(tid_j);
int dj=tj->d; var rj=tj->r;
var predJ = predByTid(tid_j);
var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
var predBoost = 0.5 + 0.5*(predI*predJ);
var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
w *= predBoost * propBoost;
var pairAdv = scorePairSafe(i,j,0,0,0,0);
var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
w *= pairBoost;
sumw += w;
acc += w*G_State[j];
if(w>bestW){bestW=w; bestJ=j;}
}
if(outTopEq) *outTopEq = bestJ;
if(outTopW) *outTopW = ifelse(sumw>0, bestW/sumw, 0);
if(sumw>0) return acc/sumw;
return 0;
}
// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2) {
if(LOG_EXPR_TEXT){
string s = G_Sym[i];
s[0]=0;
string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A1x[i], n1, (var)A1lam[i], (var)A1mean[i], (var)A1E[i], (var)A1P[i], (var)A1i[i], (var)A1c[i]);
string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A2x[i], n2, (var)A2lam[i], (var)A2mean[i], (var)A2E[i], (var)A2P[i], (var)A2i[i], (var)A2c[i]);
strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*x[i] + ", (var)G_WSelf[i]), EXPR_MAXLEN);
if(G_Mode[i]==1){
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else if(G_Mode[i]==2){
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else {
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
}
strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ",
(var)G_WGlob1[i], (var)G1mean[i], (var)G1E[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ",
(var)G_WGlob2[i], (var)G2P[i], (var)G2lam[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ", (var)G_WMom[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("Prop[i]=%.4f; ", (var)G_Prop[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DT(i) + ", (var)G_WTree[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DTREE(i)", (var)G_WAdv[i]), EXPR_MAXLEN);
}
}
// ======================= NEW: Range builders for chunked rewires =======================
// Rewire adjacency for i in [i0..i1), keeps others unchanged
void rewireAdjacency_DTREE_range(int i0,int i1, var lambda, var mean, var energy, var power) {
int N=G_N, D=G_D, i, d, c, best, cand;
if(i0<0) i0=0; if(i1>N) i1=N;
for(i=i0;i<i1;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){
do{ best = (int)random(N);} while(best==i);
}
G_Adj[i*D + d] = (i16)best;
}
}
}
// Synthesize equations only for [i0..i1)
void synthesizeEquation_range(int i0,int i1, var lambda, var mean, var energy, var power) {
int i; if(i0<0) i0=0; if(i1>G_N) i1=G_N;
for(i=i0;i<i1;i++) synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
}
// Build expr text only for [i0..i1) — guarded at runtime for lite-C compatibility
void buildSymbolicExpr_range(int i0,int i1) {
if(!LOG_EXPR_TEXT) return; // 0 = omit; 1 = build
int i; if(i0<0) i0=0; if(i1>G_N) i1=G_N;
for(i=i0;i<i1;i++){
int n1 = adjSafe(i,0);
int n2 = ifelse(G_D >= 2, adjSafe(i,1), n1);
buildSymbolicExpr(i,n1,n2);
}
}
// ======================= NEW: Rolling rewire cursor state =======================
int G_RewirePos = 0; // next equation index to process
int G_RewirePasses = 0; // #completed full passes since start
int G_RewireBatch = REWIRE_BATCH_EQ_5M; // effective batch for this bar
// ======================= NEW: Rolling cursor for heavy per-bar updates =======================
int G_UpdatePos = 0; // next equation index to do heavy work
int G_UpdatePasses = 0; // #completed full heavy passes
// ======================= NEW: Chunked rewire orchestrator =======================
// Run part of a rewire: only a slice of equations this bar.
// Returns 1 if a full pass just completed (we can normalize), else 0.
int rewireEpochChunk(var lambda, var mean, var energy, var power, int batch) {
int N = G_N;
if(N <= 0) return 0;
if(batch < REWIRE_MIN_BATCH) batch = REWIRE_MIN_BATCH;
if(G_RewirePos >= N) G_RewirePos = 0;
int i0 = G_RewirePos;
int i1 = i0 + batch; if(i1 > N) i1 = N;
// Adapt neighbor breadth by entropy (your original heuristic)
G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH);
// Rewire only the target slice
rewireAdjacency_DTREE_range(i0,i1, lambda,mean,energy,power);
sanitizeAdjacency(); // cheap; can keep global
synthesizeEquation_range(i0,i1, lambda,mean,energy,power);
buildSymbolicExpr_range(i0,i1);
// advance cursor
G_RewirePos = i1;
// Full pass finished?
if(G_RewirePos >= N){
G_RewirePos = 0;
G_RewirePasses += 1;
return 1;
}
return 0;
}
// ---------- one-time rewire init (call central reindex) ----------
void rewireInit() {
randomizeRP();
computeProjection();
reindexTreeAndMap(); // ensures G_PredNode sized before any use
}
// ----------------------------------------------------------------------
// I) Trim rewireEpoch -> now used for one-shot/initialization full pass
// ----------------------------------------------------------------------
void rewireEpoch(var lambda, var mean, var energy, var power) {
// Backward compatibility: do one full pass immediately
int done = 0;
while(!done){
done = rewireEpochChunk(lambda,mean,energy,power, REWIRE_BATCH_EQ_H1);
}
// After full pass, normalize proportions once (exact)
normalizeProportions();
// Context hash (unchanged)
{
int D = G_D, i, total = G_N * D;
unsigned int h = 2166136261u;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
}
// coarse projection-based driver for gamma
var projectNet() {
int N=G_N,i;
if(N < 2) return 0; // guard: mean and corr below divide by N and N-1
var sum=0,sumsq=0,cross=0;
for(i=0;i<N;i++){
sum+=G_State[i];
sumsq+=G_State[i]*G_State[i];
if(i+1<N) cross+=G_State[i]*G_State[i+1];
}
var mean=sum/N, corr=cross/(N-1);
return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}
// ----------------------------------------------------------------------
// J) Heavy per-bar update slice (uses rolling G_UpdatePos cursor)
// ----------------------------------------------------------------------
var f_affine(var x, var lam, var mean, var E, var P, var i, var c){
return x + lam*m
105
31,123
Read More
|
|
09/26/25 17:53
The manual also says: All optimize calls must be placed either in the run function or in functions that are called from the run function. Since parameters are identified by the order of their optimize calls, the order must not change between test and training or from one run to the next.
You made an optimize call disappear in subsequent run calls. Not good.
Then there's this quote: For optimizing additional global parameters that do not depend on asset and algo, place their optimize call in the very first loop run only, and store the parameter in a static or global variable outside the loop.
Probably not the clearest verbiage, but I think it's saying that optimize() should only be called once for the global variables inside the first loop() call, not the first run() call. (That is, "the first time loop() is run".) If that's the case, you need to say: if(this is the first loop call) then global variable = optimize().
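For illustration, a minimal lite-C sketch of that pattern (the asset names and the optimize range are placeholders, not the original strategy): the global parameter is optimized only in the very first loop() iteration, so the optimize call keeps the same position in every run and gets assigned to the first asset.
var Threshold = 0; // global parameter shared by all assets
function run()
{
  set(PARAMETERS);
  NumWFOCycles = 15;
  int LoopNum = 0;
  while(asset(loop("EUR/USD","USD/JPY"))) // placeholder portfolio
  {
    LoopNum++;
    if(LoopNum == 1) // very first loop iteration only
      Threshold = optimize(3,1,5,1); // global parameter, attached to the first asset
    // ... per-asset optimize() calls and trade logic here ...
  }
}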
2
79
Read More
|
|
09/26/25 13:29
Hello, I'm trying to optimize two global parameters in my strategy. Workshops 5/6 optimize per asset/algo, which is not my case. I read the manual and probably misunderstood it, because I got an error: Zorro 2.66.3 (c) oP group Germany 2025
rs compiling............ WFO: rs 2020..2025 Error 111: Crash in run: run() at bar 1
From the manual I read: In portfolio strategies, parameters should be normally optimized separately for any component. Use one or two loop() calls to cycle through all used assets and algos. For optimizing asset/algo specific parameters, place the optimize calls inside the inner loop, and make sure to select asset and algo before. For optimizing additional global parameters that do not depend on asset and algo, place their optimize call in the very first loop run only, and store the parameter in a static or global variable outside the loop. The global parameters are then assigned to the first asset and/or algo. If asset/algo specific optimization is not desired at all, don't use loop, but enumerate the assets in a simple for loop, f.i. for(used_assets) ..., and optimize the parameters outside the for loop. Make sure in that case to select all assets before the first optimize call; otherwise the optimizer will assume a single-asset strategy.
From the bolded phrase I understood the following:
// Once a week, the strategy decides which 3 assets out of a bank of assets to buy. It's a long-only strategy.
// var threshold should optimize how many assets to hold
var threshold; // this is a global var outside the loop which I wish to optimize
function run()
{
set(PARAMETERS);
NumWFOCycles = 15; // activate WFO
if (FIRSTINITRUN) // I tried FIRSTRUN too, same error
threshold = optimize(N_OF_ASSETS-2,N_OF_ASSETS-3,N_OF_ASSETS,1);
while( loop( ..... { // first loop over the assets
} // end of while loop
while(loop(.... { // second loop over the assets, run after the first loop was finished
} // end of while loop
for( over all assets ... {
Each asset is compared to the threshold; after optimization it should decide to take 1, 2, 3, 4 or 5 assets instead of the fixed 3 assets used before the optimize.
} // end of for loop
The second parameter I want to optimize is the built-in LookBack variable. Any insight on global parameter optimization would be appreciated. David
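For comparison, a minimal lite-C sketch of the other variant from the quoted manual passage, with placeholder asset names and an arbitrary optimize range: all assets are selected before the first optimize() call (so the optimizer does not assume a single-asset strategy), the global parameter is optimized once outside any asset loop, and for(used_assets) then enumerates the selected assets.
var Threshold = 0; // global parameter
function run()
{
  set(PARAMETERS);
  NumWFOCycles = 15;
  // select every traded asset once, before the first optimize() call
  asset("EUR/USD");
  asset("GBP/USD");
  asset("USD/JPY");
  Threshold = optimize(3,1,5,1); // optimized once, outside the asset loop
  for(used_assets) // enumerate the previously selected assets
  {
    // compare each asset's score to Threshold and decide whether to hold it
  }
}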
2
79
Read More
|
|
09/23/25 20:48
Hello,
I saw it is planned to replace the format of the training result files from .par to .csv. I already have a huge database of trained strategies with .par files. Will there be a way to convert the .par files into .csv? And what about the optimal-f .fac files, will they change too?
Martin
1
80
Read More
|
|
09/23/25 01:38
Hello folks, what is the best practice for organizing intraday futures data? For example, should tick data for different futures expiries/contracts of the same futures class, for the same month or day, be stored in the same .t8 file? Should tick data for ESU25 and ESZ25 for the month of September (both contracts trade) be stored in the same ES_202509.t8 file? Or is it better practice to store files by day instead, to avoid overlap?
Given I have bid, ask and trade data, how should those be stored? It seems .t8 doesn't care about the last trade; should we store trades in a separate .t2 file? Should bid and ask data be stored in the same file, with negative bid prices? If so, can they have the same time stamp, or should one be offset by 1 ms from the other?
What about bar data? Should those be stored in a .t6, and if so how do we manage contracts: does each contract get its own .t6? Is there any management of related contracts for .t6 based on the name, or does it have to be handled explicitly in code? Are contract data generally first-class citizens in the different scripts, or is it better to just store everything under the contract symbol (e.g. ESU25) and manually manage rollovers, expiry dates, etc.?
Generally speaking, if I care about the spread and may want to simulate using both trades (e.g. stop activation) and bid/ask (limit order activation) with the real spread, do I need to save all three data series separately? And for a dynamic spread, do I need to load the bid series and compare timestamps myself, or is there a way to specify both bid and ask files during a run?
Thank you!
0
82
Read More
|
|
|