Regime-Responsive Graph Rewiring of Influences

A. A small market lexicon
The pattern alphabet now self-calibrates. It keeps a steady share of “active” moments even as volatility changes, and it smooths early uncertainty generously, sharpening as evidence accumulates. The two dials, lean and clarity, stay comparable across regimes, so permission means the same thing in quiet and in storm.
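
Concretely, that rate-stable calibration is the small update below, condensed from run() in the listing at the end of this post (G_AccRate tracks the share of bars with a nonzero pattern state, G_MC_ACT is the acceptance threshold):

    // steer the threshold so roughly a constant share (~35%) of bars is "active"
    var aEW = 0.01;                                   // ~100-bar memory
    G_AccRate = (1 - aEW)*G_AccRate + aEW*(MC_Cur != 0);
    G_MC_ACT  = clamp(G_MC_ACT + 0.02*(G_AccRate - 0.35), 0.15, 0.60);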

B. A soft landscape of influences
The continuous field underneath reallocates attention as conditions evolve. Its effective dimensionality breathes: higher when structure is clean, lower when noise rises. Multiple drivers are blended with a bias toward whichever one is currently more predictive. Effort is budgeted—more attention when order emerges, less when the tape is muddled or resources are tight. Signals remain bounded; only the emphasis moves.
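
That reallocation is only a few lines in the listing below: the lambda/gamma blend drifts toward whichever component the accuracy sentinel says is tracking better, and the effective projection dimension halves when regime entropy is high (condensed sketch):

    var w  = 0.5 + 0.5*G_AccCorr;                     // 0..1 from the accuracy sentinel
    G_FB_W = clamp(0.9*G_FB_W + 0.1*w, 0.2, 0.9);
    lambda = G_FB_W*lambda + (1.0 - G_FB_W)*gamma;
    G_Keff = ifelse(MC_Entropy < 0.45, KPROJ, KPROJ/2);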

C. Occasional reshaping of who listens to whom
Connectivity refresh widens its search when the environment is organized and narrows it when it’s chaotic. The refresh can also trigger early after a utility dip, helping the structure realign quickly after shocks without constant churn.
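
In the listing below this is the rewire trigger plus the candidate-breadth knob, condensed from run() and rewireEpoch() (U_prev holds the previous bar's utility as a static var there):

    int doRewire = ((Bar % REWIRE_EVERY) == 0);        // fixed cadence
    if(util_now() + 0.01 < U_prev) doRewire = 1;       // utility dipped -> realign early
    G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH); // wider search in orderly tape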

D. Capacity that breathes, with guardrails
Form adjusts to usefulness. When added detail stops paying or resources tighten, it trims the deepest, least helpful nuance; when there’s headroom and benefit, it adds a thin layer. Changes are small, tested, and reversible. Depth emphasis also adapts, shifting weight between shallow and deep context as regimes change.
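
The grow-test-rollback cycle lives in edc_runtime() in the listing below; a condensed sketch of that step:

    // grow one tree level only with memory headroom, then keep it only if utility holds
    if(mem_mb_est() <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH){
        G_UtilBefore = util_now();
        if(apply_grow_step()){ G_TunePending = 1; G_TuneStartBar = Bar; }
    }
    if(G_TunePending && Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
        if(util_now() + 0.01 < G_UtilBefore) revert_last_grow();   // roll the growth back
        G_TunePending = 0;
    }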

E. Permission meets timing and size
Action still requires both a clear lean and sufficient clarity, plus harmony from the broader landscape. Because the lexicon stays rate-stable and sharpens with evidence, and because drivers are blended by current informativeness, timing improves around transitions. Position size tracks agreement strength; ambiguity defaults to patience.
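
In code, the permission gate is the pair of Markov-gated entries at the end of run() in the listing below (direction needs the Markov gate and a strong blended lean):

    if( MC_PBullNext > PBULL_LONG_TH  && lambda >  0.7 )  enterLong();
    if( MC_PBullNext < PBULL_SHORT_TH && lambda < -0.7 )  enterShort();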

F. A brief, human-readable diary
The ledger stays compact and comparable across regimes: current archetype, the two dials, and a terse sketch of how influences combined. It aims for oversight-grade clarity without sacrificing speed.
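
The diary itself is just a decimated CSV append; for example, the per-bar Markov line from run() in the listing below:

    file_append("Log\\Alpha12_markov.csv",
        strf("%i,%i,%.6f,%.6f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));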

G. What tends to emerge
In ordered tape: broader search, richer projection, more active guidance, and a tilt toward deeper context. In choppy tape: narrower search, leaner projection, tighter guidance, and a tilt toward shallower, more robust cues. The posture glides between modes via small, distributed adjustments.

H. Risk doctrine as a controlling atmosphere
Exposure respects caps at all times. When noise rises or resources get tight, the system automatically de-emphasizes fine detail, focuses on the strongest agreements, and lets activity drift toward neutral rather than forcing trades—keeping drawdown sensitivity in check.
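
In the listing below that doctrine is the shedding ladder: cheap items go first (charts, log cadence), then the deepest tree children are pruned while the estimate stays over budget (condensed from run() and depth_manager_runtime()):

    if(mem_mb_est() >= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_ShedStage == 0)
        shed_zero_cost_once();    // charts off, logs decimated
    depth_manager_runtime();      // prunes least important deep children if still tight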

I. Memory, recency, and drift
Assessments use decaying memory so recent tape matters more while stale evidence fades. Permission and landscape both learn continuously, producing controlled drift—no whiplash, no stickiness.
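
The decaying memory is a set of exponentially weighted moments; condensed from acc_update() in the listing below (x is lambda, y is gamma):

    var a = 0.01;                                     // ~100-bar memory
    ACC_mx  = (1-a)*ACC_mx  + a*x;   ACC_my  = (1-a)*ACC_my  + a*y;
    ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
    // correlation = (ACC_mxy - ACC_mx*ACC_my) / sqrt(var_x * var_y)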

J. Separation of roles

The lexicon offers a compact, discrete view of context and stable permission.

The landscape provides a continuous, multi-horizon view that shapes timing and size.

The reshaper keeps connections healthy, widening or narrowing search as needed.

The capacity governor ensures useful complexity under constraints.
Together they reduce overreaction to noise while preserving responsiveness to structural change.

K. Practical trading implications

Expect fewer, stronger actions when clarity is low; more decisive engagement when agreement is broad.

Expect size to follow consensus strength, not single-indicator extremes.

Expect quicker realignment after shocks, but without perpetual reshuffling.

Expect behavior to remain stable across regime shifts, with measured changes rather than leaps.

L. Philosophy in one line
Trade when the story is both clear and corroborated; keep the model light, the adjustments small, and the diary open.



Code
// ======================================================================
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// with runtime memory shaping, selective depth pruning, 
// and elastic accuracy-aware depth growth + 8 adaptive improvements.
// ======================================================================

// ================= USER CONFIG =================
#define ASSET_SYMBOL   "EUR/USD"
#define BAR_PERIOD     60
#define MC_ACT         0.30       // initial threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH  0.60       // Markov gate for long
#define PBULL_SHORT_TH 0.40       // Markov gate for short

// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS   0    // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH   0    // 0 = disable watch() probes; 1 = enable

// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES    3
#define MAX_DEPTH       4
#define NWIN            256
#define NET_EQNS        100
#define DEGREE          4
#define KPROJ           16
#define REWIRE_EVERY    127
#define CAND_NEIGH      8

// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE   1    // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT        0    // 0: omit full expression (store signature only); 1: include text
#define META_EVERY           4    // write META every N rewires
#define LOG_EQ_SAMPLE        NET_EQNS // limit number of equations logged
#define EXPR_MAXLEN          512  // cap expression string

// decimate Markov log cadence
#define LOG_EVERY            16

// ---- DTREE feature sizes (extended for Markov features) ----
#define ADV_EQ_NF       12   // per-equation features
#define ADV_PAIR_NF     12   // per-pair features (kept for completeness; DTREE pair disabled)

// ================= Candles -> 122-state Markov =================
#define MC_NPAT    61
#define MC_STATES  123   // 1 + 2*MC_NPAT
#define MC_NONE    0
#define MC_LAPLACE 1.0   // kept for reference; runtime uses G_MC_Alpha

// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB        50
#define MEM_HEADROOM_MB       5
#define DEPTH_STEP_BARS      16
#define KEEP_CHILDREN_HI      2
#define KEEP_CHILDREN_LO      1
#define RUNTIME_MIN_DEPTH     2

int  G_ShedStage        = 0;        // 0..2
int  G_LastDepthActBar  = -999999;
int  G_ChartsOff        = 0;        // gates plot()
int  G_LogsOff          = 0;        // gates file_append cadence
int  G_SymFreed         = 0;        // expression buffers freed
int  G_RT_TreeMaxDepth  = MAX_DEPTH;

// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var  ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var  G_AccCorr = 0;      // [-1..1]
var  G_AccBase = 0;      // first seen sentinel
int  G_HaveBase = 0;

// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS   64   // start a growth “trial” this often (when memory allows)
#define TUNE_DELAY_BARS   64   // evaluate the trial after this many bars

var  G_UtilBefore = 0, G_UtilAfter = 0;
int  G_TunePending = 0;
int  G_TuneStartBar = 0;
int  G_TuneAction   = 0;  // +1 grow trial, 0 none

// ======================================================================
//  (FIX) Move the type and globals used by mem_bytes_est() up here
// ======================================================================

// HARMONIC D-TREE type (we define it early so globals below compile fine)
typedef struct Node { var v; var r; void* c; int n; int d; } Node;

// Minimal globals needed before mem_bytes_est()
Node*  Root = 0;
Node** G_TreeIdx = 0; 
int    G_TreeN = 0; 
int    G_TreeCap = 0; 
var    G_DTreeExp = 0;

Node   G_DummyNode;   // defined early so treeAt() can return &G_DummyNode

// Network sizing globals (used by mem_bytes_est)
int   G_N  = NET_EQNS;
int   G_D  = DEGREE;
int   G_K  = KPROJ;

// Optional expression buffer pointer (referenced by mem_bytes_est)
string* G_Sym = 0;

// Forward decls that reference Node
var  nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();

// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);

// ----------------------------------------------------------------------
// === Adaptive knobs & sentinels (NEW) ===
// ----------------------------------------------------------------------
var G_FB_W = 0.70;     // (1) dynamic lambda/gamma blend weight 0..1
var G_MC_ACT = MC_ACT; // (2) adaptive candlestick acceptance threshold
var G_AccRate = 0;     // (2) EW acceptance rate of (state != 0)

// (3) advisor budget per bar (replaces the macro)
int G_AdviseMax = 16;

// (6) Markov Laplace smoothing (runtime alpha)
var G_MC_Alpha = 1.0;

// (7) adaptive candidate breadth for adjacency search
int G_CandNeigh = CAND_NEIGH;

// (8) effective projection dimension (<= KPROJ)
int G_Keff = KPROJ;

// (5) depth emphasis hill-climber
var G_DTreeExpStep = 0.05;
int  G_DTreeExpDir  = 1;

// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_ROTATE    1   // 1 = rotate which equations get DTREE each bar

int allowAdvise(int i)
{
    if(ADVISE_ROTATE){
        int groups = NET_EQNS / G_AdviseMax; 
        if(groups < 1) groups = 1;
        return ((i / G_AdviseMax) % groups) == (Bar % groups);
    } else {
        return (i < G_AdviseMax);
    }
}

// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u)
{
    if(!u) return 0;
    int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
    int sz_node = 2*SZV + SZP + 2*SZI;
    int total = sz_node;
    if(u->n > 0 && u->c) total += u->n * SZP;
    int i;
    for(i=0;i<u->n;i++)
        total += tree_bytes(((Node**)u->c)[i]);
    return total;
}

// ======================================================================
// Conservative in-script memory estimator (arrays + pointers)
// ======================================================================
int mem_bytes_est()
{
    int N = G_N, D = G_D, K = G_K;
    int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
    int b = 0;

    b += N*SZV*(3 + 8 + 7 + 7 + 4 + 2 + 2 + 2 + 2);
    b += N*SZI*(3);                    // G_Mode, G_TopEq, G_EqTreeId
    b += N*D*SZI;                      // G_Adj
    b += K*N*SZV;                      // G_RP
    b += K*SZV;                        // G_Z
    b += G_TreeCap*SZP;                // G_TreeIdx pointer vector
    if(G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN; // optional expression buffers
    b += MC_STATES*MC_STATES*SZI + MC_STATES*SZI; // Markov
    b += tree_bytes(Root);                            // include D-Tree
    return b;
}

int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }

// === total memory (Zorro-wide) in MB ===
int memMB(){ return (int)(memory(0)/(1024*1024)); }

// light one-shot shedding
void shed_zero_cost_once()
{
    if(G_ShedStage > 0) return;
    set(PLOTNOW|OFF); G_ChartsOff = 1;  // stop chart buffers
    G_LogsOff = 1;                      // decimate logs (gated later)
    G_ShedStage = 1;
}

void freeExprBuffers()
{
    if(!G_Sym || G_SymFreed) return;
    int i; for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
    free(G_Sym); G_Sym = 0; G_SymFreed = 1;
}

// depth manager (prune & shedding)
void depth_manager_runtime()
{
    int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
    int mb = mem_mb_est();
    if(mb < trigger) return;

    if(G_ShedStage == 0) shed_zero_cost_once();

    if(G_ShedStage <= 1){
        if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
        G_ShedStage = 2;
    }

    int overBudget = (mb >= MEM_BUDGET_MB);
    if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS))
        return;

    while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH)
    {
        int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
        pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
        G_RT_TreeMaxDepth--;
        reindexTreeAndMap();

        mb = mem_mb_est();
        printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);

        if(mb < trigger) break;
    }

    G_LastDepthActBar = Bar;
}

// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
    int n = 0;
    #define ADD(Name, Call) do{ var v = (Call); out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)

    ADD("CDL2Crows",              CDL2Crows());
    ADD("CDL3BlackCrows",         CDL3BlackCrows());
    ADD("CDL3Inside",             CDL3Inside());
    ADD("CDL3LineStrike",         CDL3LineStrike());
    ADD("CDL3Outside",            CDL3Outside());
    ADD("CDL3StarsInSouth",       CDL3StarsInSouth());
    ADD("CDL3WhiteSoldiers",      CDL3WhiteSoldiers());
    ADD("CDLAbandonedBaby",       CDLAbandonedBaby(0.3));
    ADD("CDLAdvanceBlock",        CDLAdvanceBlock());
    ADD("CDLBeltHold",            CDLBeltHold());
    ADD("CDLBreakaway",           CDLBreakaway());
    ADD("CDLClosingMarubozu",     CDLClosingMarubozu());
    ADD("CDLConcealBabysWall",    CDLConcealBabysWall());
    ADD("CDLCounterAttack",       CDLCounterAttack());
    ADD("CDLDarkCloudCover",      CDLDarkCloudCover(0.3));
    ADD("CDLDoji",                CDLDoji());
    ADD("CDLDojiStar",            CDLDojiStar());
    ADD("CDLDragonflyDoji",       CDLDragonflyDoji());
    ADD("CDLEngulfing",           CDLEngulfing());
    ADD("CDLEveningDojiStar",     CDLEveningDojiStar(0.3));
    ADD("CDLEveningStar",         CDLEveningStar(0.3));
    ADD("CDLGapSideSideWhite",    CDLGapSideSideWhite());
    ADD("CDLGravestoneDoji",      CDLGravestoneDoji());
    ADD("CDLHammer",              CDLHammer());
    ADD("CDLHangingMan",          CDLHangingMan());
    ADD("CDLHarami",              CDLHarami());
    ADD("CDLHaramiCross",         CDLHaramiCross());
    ADD("CDLHignWave",            CDLHignWave());
    ADD("CDLHikkake",             CDLHikkake());
    ADD("CDLHikkakeMod",          CDLHikkakeMod());
    ADD("CDLHomingPigeon",        CDLHomingPigeon());
    ADD("CDLIdentical3Crows",     CDLIdentical3Crows());
    ADD("CDLInNeck",              CDLInNeck());
    ADD("CDLInvertedHammer",      CDLInvertedHammer());
    ADD("CDLKicking",             CDLKicking());
    ADD("CDLKickingByLength",     CDLKickingByLength());
    ADD("CDLLadderBottom",        CDLLadderBottom());
    ADD("CDLLongLeggedDoji",      CDLLongLeggedDoji());
    ADD("CDLLongLine",            CDLLongLine());
    ADD("CDLMarubozu",            CDLMarubozu());
    ADD("CDLMatchingLow",         CDLMatchingLow());
    ADD("CDLMatHold",             CDLMatHold(0.5));
    ADD("CDLMorningDojiStar",     CDLMorningDojiStar(0.3));
    ADD("CDLMorningStar",         CDLMorningStar(0.3));
    ADD("CDLOnNeck",              CDLOnNeck());
    ADD("CDLPiercing",            CDLPiercing());
    ADD("CDLRickshawMan",         CDLRickshawMan());
    ADD("CDLRiseFall3Methods",    CDLRiseFall3Methods());
    ADD("CDLSeperatingLines",     CDLSeperatingLines());
    ADD("CDLShootingStar",        CDLShootingStar());
    ADD("CDLShortLine",           CDLShortLine());
    ADD("CDLSpinningTop",         CDLSpinningTop());
    ADD("CDLStalledPattern",      CDLStalledPattern());
    ADD("CDLStickSandwhich",      CDLStickSandwhich());
    ADD("CDLTakuri",              CDLTakuri());
    ADD("CDLTasukiGap",           CDLTasukiGap());
    ADD("CDLThrusting",           CDLThrusting());
    ADD("CDLTristar",             CDLTristar());
    ADD("CDLUnique3River",        CDLUnique3River());
    ADD("CDLUpsideGap2Crows",     CDLUpsideGap2Crows());
    ADD("CDLXSideGap3Methods",    CDLXSideGap3Methods());

    #undef ADD
    return n; // 61
}

// ================= Markov storage & helpers =================
static int* MC_Count;   // [MC_STATES*MC_STATES]
static int* MC_RowSum;  // [MC_STATES]
static int  MC_Prev = -1;
static int  MC_Cur  = 0;
static var  MC_PBullNext = 0.5;
static var  MC_Entropy   = 0.0;
static string MC_Names[MC_NPAT];

#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))

int MC_stateFromCDL(var* cdl /*len=61*/, var thr)
{
    int i, best=-1; var besta=0;
    for(i=0;i<MC_NPAT;i++){
        var a = abs(cdl[i]);
        if(a>besta){ besta=a; best=i; }
    }
    if(best<0) return MC_NONE;
    if(besta < thr) return MC_NONE;
    int bull = (cdl[best] > 0);
    return 1 + 2*best + bull;  // 1..122
}
int MC_isBull(int s){ if(s<=0) return 0; return ((s-1)%2)==1; }

void MC_update(int sPrev,int sCur){ if(sPrev<0) return; MC_Count[MC_IDX(sPrev,sCur)]++; MC_RowSum[sPrev]++; }

// === (6) Use runtime Laplace alpha (G_MC_Alpha) ===
var MC_prob(int s,int t){
    var num = (var)MC_Count[MC_IDX(s,t)] + G_MC_Alpha;
    var den = (var)MC_RowSum[s] + G_MC_Alpha*MC_STATES;
    if(den<=0) return 1.0/MC_STATES;
    return num/den;
}

var MC_nextBullishProb(int s){
    if(s<0) return 0.5;
    int t; var pBull=0, pTot=0;
    for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); pTot+=p; if(MC_isBull(t)) pBull+=p; }
    if(pTot<=0) return 0.5;
    return pBull/pTot;
}
var MC_rowEntropy01(int s){
    if(s<0) return 1.0;
    int t; var H=0, Z=0;
    for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); Z+=p; }
    if(Z<=0) return 1.0;
    for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t)/Z; if(p>0) H += -p*log(p); }
    var Hmax = log(MC_STATES-1);
    if(Hmax<=0) return 0;
    return H/Hmax;
}

// ================= HARMONIC D-TREE ENGINE =================

// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){ if(u<-1) u=-1; if(u>1) u=1; var t=0.5*(u+1.0); return lo + t*(hi-lo); }

// ---- safety helpers ----
var safeNum(var x){ if(x!=x) return 0; if(x > 1e100) return 1e100; if(x < -1e100) return -1e100; return x; }
void sanitize(var* A,int n){ int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]); }
var sat100(var x){ return clamp(x,-100,100); }

// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap)
{
    if(!dst || !src || cap <= 0) return;
    int dl = strlen(dst);
    int sl = strlen(src);
    int room = cap - 1 - dl;
    if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
    int i; for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
    dst[dl + i] = 0;
}

int countSubStr(string s, string sub){
    if(!s || !sub) return 0;
    int n=0; string p=s;
    int sublen = strlen(sub);
    if(sublen<=0) return 0;
    while((p=strstr(p,sub))){ n++; p += sublen; }
    return n;
}

// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
    int h = 5381, c, i = 0;
    if(!s) return h;
    while((c = s[i++])) h = ((h<<5)+h) ^ c;  // h*33 ^ c
    return h & 0x7fffffff;                   // force non-negative
}

// ---- tree helpers ----
int  validTreeIndex(int tid){ if(!G_TreeIdx) return 0; if(tid<0||tid>=G_TreeN) return 0; return (G_TreeIdx[tid]!=0); }
Node* treeAt(int tid){ if(validTreeIndex(tid)) return G_TreeIdx[tid]; return &G_DummyNode; }
int safeTreeIndexFromEq(int eqi){
    int denom = ifelse(G_TreeN>0, G_TreeN, 1);
    int tid = eqi;
    if(tid < 0) tid = 0;
    if(denom > 0) tid = tid % denom;
    if(tid < 0) tid = 0;
    return tid;
}

// ---- tree indexing ----
void pushTreeNode(Node* u){
    if(G_TreeN >= G_TreeCap){
        int newCap = G_TreeCap*2;
        if(newCap < 64) newCap = 64;
        G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
        G_TreeCap = newCap;
    }
    G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }

// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
    if(!G_TreeIdx) return;
    if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
        int newCap = (G_TreeCap >> 1);
        if(newCap < 64) newCap = 64;
        G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
        G_TreeCap = newCap;
    }
}

// ---- tree create/eval ----
Node* createNode(int depth)
{
    Node* u = (Node*)malloc(sizeof(Node));
    u->v = random();
    u->r = 0.01 + 0.02*depth + random(0.005);
    u->d = depth;
    if(depth > 0){
        u->n = 1 + (int)random(MAX_BRANCHES);
        u->c = malloc(u->n * sizeof(void*));
        int i; for(i=0;i<u->n;i++) ((Node**)u->c)[i] = createNode(depth - 1);
    } else { u->n = 0; u->c = 0; }
    return u;
}
var evaluateNode(Node* u)
{
    if(!u) return 0;
    var sum=0; int i; for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
    var phase  = sin(u->r * Bar + sum);
    var weight = 1.0 / pow(u->d + 1, G_DTreeExp);
    u->v = (1 - weight)*u->v + weight*phase;
    return u->v;
}
int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u){ if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]); if(u->c) free(u->c); free(u); }

// =========== NETWORK STATE & COEFFICIENTS ===========
var*  G_State; var*  G_Prev; var*  G_Vel;
int*  G_Adj;
var*  G_RP; var*  G_Z;
int*  G_Mode;
var*  G_WSelf; var*  G_WN1; var*  G_WN2; var*  G_WGlob1; var*  G_WGlob2; var*  G_WMom; var*  G_WTree; var*  G_WAdv;
var*  A1x; var*  A1lam; var*  A1mean; var*  A1E; var*  A1P; var*  A1i; var*  A1c;
var*  A2x; var*  A2lam; var*  A2mean; var*  A2E; var*  A2P; var*  A2i; var*  A2c;
var*  G1mean; var*  G1E; var*  G2P; var*  G2lam;
var*  G_TreeTerm; int*  G_TopEq; var*  G_TopW; int*  G_EqTreeId; var*  TAlpha; var*  TBeta;
var*  G_Pred; var*  G_AdvScore;
var*  G_PropRaw; var*  G_Prop;

// ===== Markov features exposed to DTREE =====
var G_MCF_PBull;   // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State;   // 0..122

// epoch/context & feedback
int    G_Epoch = 0;
int    G_CtxID = 0;
var    G_FB_A = 0.7;  // kept (not used in blend now)
var    G_FB_B = 0.3;  // kept (not used in blend now)

// ---------- predictability ----------
var nodePredictability(Node* t)
{
    if(!t) return 0.5;
    var disp=0; int n=t->n, i;
    for(i=0;i<n;i++){ Node* c=((Node**)t->c)[i]; disp += abs(c->v - t->v); }
    if(n>0) disp /= n;
    var depthFac = 1.0/(1+t->d);
    var rateBase = 0.01 + 0.02*t->d;
    var rateFac  = exp(-25.0*abs(t->r - rateBase));
    var p = 0.5*(depthFac + rateFac);
    p = 0.5*p + 0.5*(1.0 - disp);
    if(p<0) p=0; if(p>1) p=1;
    return p;
}

// importance for selective pruning
var nodeImportance(Node* u)
{
    if(!u) return 0;
    var amp = abs(u->v); if(amp>1) amp=1;
    var p = nodePredictability(u);
    var depthW = 1.0/(1.0 + u->d);
    var imp = (0.6*p + 0.4*amp) * depthW;
    return imp;
}

// ====== Elastic growth helpers ======

// create a leaf at depth d (no children)
Node* createLeafDepth(int d){
    Node* u = (Node*)malloc(sizeof(Node));
    u->v = random();
    u->r = 0.01 + 0.02*d + random(0.005);
    u->d = d;
    u->n = 0;
    u->c = 0;
    return u;
}

// add up to addK new children to all nodes at frontierDepth
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK)
{
    if(!u) return;
    if(u->d == frontierDepth){
        int want = addK;
        if(want <= 0) return;
        int oldN = u->n;
        int newN = oldN + want;
        Node** Cnew = (Node**)malloc(newN * sizeof(void*));
        int i;
        for(i=0;i<oldN;i++) Cnew[i] = ((Node**)u->c)[i];
        for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
        if(u->c) free(u->c);
        u->c = Cnew; u->n = newN;
        return;
    }
    int j; for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}

// keep top-K children by importance at targetDepth, drop the rest
void freeChildAt(Node* parent, int idx)
{
    if(!parent || !parent->c) return;
    Node** C = (Node**)parent->c;
    freeTree(C[idx]);
    int i;
    for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
    parent->n--;
    if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK)
{
    if(!u) return;

    if(u->d == targetDepth-1 && u->n > 0){
        int n = u->n, i, kept = 0;
        int mark[16]; for(i=0;i<16;i++) mark[i]=0;

        int iter;
        for(iter=0; iter<keepK && iter<n; iter++){
            int bestI = -1; var bestImp = -1;
            for(i=0;i<n;i++){
                if(i<16 && mark[i]==1) continue;
                var imp = nodeImportance(((Node**)u->c)[i]);
                if(imp > bestImp){ bestImp = imp; bestI = i; }
            }
            if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
        }
        for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
        return;
    }

    int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}

void reindexTreeAndMap()
{
    G_TreeN = 0;
    indexTreeDFS(Root);
    if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
    int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
    maybeShrinkTreeIdx(); // Fix #3
}

// ====== Accuracy sentinel & elastic-depth controller ======

void acc_update(var x /*lambda*/, var y /*gamma*/)
{
    var a = 0.01; // ~100-bar half-life
    ACC_mx  = (1-a)*ACC_mx  + a*x;
    ACC_my  = (1-a)*ACC_my  + a*y;
    ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
    ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
    ACC_mxy = (1-a)*ACC_mxy + a*(x*y);

    var vx = ACC_mx2 - ACC_mx*ACC_mx;
    var vy = ACC_my2 - ACC_my*ACC_my;
    var cv = ACC_mxy - ACC_mx*ACC_my;
    if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy); else G_AccCorr = 0;
    if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}

// utility to maximize: accuracy minus gentle memory penalty
var util_now()
{
    int mb = mem_mb_est();
    var mem_pen = 0;
    if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB; else mem_pen = 0;
    return G_AccCorr - 0.5*mem_pen;
}

// apply a +1 “grow one level” action if safe memory headroom
int apply_grow_step()
{
    int mb = mem_mb_est();
    if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
    if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
    int newFrontier = G_RT_TreeMaxDepth;
    growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
    G_RT_TreeMaxDepth++;
    reindexTreeAndMap();
    printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
    return 1;
}

// revert last growth (drop newly-added frontier children)
void revert_last_grow()
{
    pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
    G_RT_TreeMaxDepth--;
    reindexTreeAndMap();
    printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}

// main elastic-depth controller; call once per bar (after acc_update)
void edc_runtime()
{
    // (5) slow hill-climb on G_DTreeExp
    if((Bar % DEPTH_TUNE_BARS) == 0){
        var U0 = util_now();
        var trial = clamp(G_DTreeExp + G_DTreeExpDir*G_DTreeExpStep, 0.8, 2.0);
        var old  = G_DTreeExp;
        G_DTreeExp = trial;
        if(util_now() + 0.005 < U0){
            G_DTreeExp = old;
            G_DTreeExpDir = -G_DTreeExpDir;
        }
    }

    int mb = mem_mb_est();

    if(G_TunePending){
        if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
            G_UtilAfter = util_now();
            var eps = 0.01;
            if(G_UtilAfter + eps < G_UtilBefore){
                revert_last_grow();
            } else {
                printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter);
            }
            G_TunePending = 0; G_TuneAction = 0;
        }
        return;
    }

    if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
        G_UtilBefore = util_now();
        if(apply_grow_step()){
            G_TunePending = 1; G_TuneAction = 1; G_TuneStartBar = Bar;
        }
    }
}

// filenames (legacy; still used if LOG_EQ_TO_ONE_FILE==0)
void buildEqFileName(int idx, char* outName /*>=64*/)
{
    strcpy(outName, "Log\\Alpha12_eq_");
    string idxs = strf("%03i", idx);
    strcat(outName, idxs);
    strcat(outName, ".csv");
}

// ===== consolidated EQ log =====
void writeEqHeaderOnce()
{
    static int done=0; if(done) return; done=1;
    file_append("Log\\Alpha12_eq_all.csv",
        "Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}

void appendEqMetaLine(
    int bar, int epoch, int ctx, int i, int n1, int n2, int tid, int depth, var rate,
    var pred, var adv, var prop, int mode, var wadv, var wtree,
    var pbull, var ent, int mcstate, string expr)
{
    if(i >= LOG_EQ_SAMPLE) return;

    // ---- SAFE: never call functions inside ifelse; handle NULL explicitly
    int eLen = 0, eHash = 0, cT = 0, cS = 0, cC = 0;
    if(expr){
        eLen  = (int)strlen(expr);
        eHash = (int)djb2_hash(expr);
        cT    = countSubStr(expr,"tanh(");
        cS    = countSubStr(expr,"sin(");
        cC    = countSubStr(expr,"cos(");
    } else {
        eHash = (int)djb2_hash("");
    }

    file_append("Log\\Alpha12_eq_all.csv",
    strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,%.4f,%.4f,%.6f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
        bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth, rate,
        pred, adv, prop, mode, wadv, wtree,
        pbull, ent, mcstate, eLen, eHash, cT, cS, cC));
}

// --------- allocation ----------
void randomizeRP()
{
    int K=G_K,N=G_N,k,j;
    for(k=0;k<K;k++)
        for(j=0;j<N;j++)
            G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}

// === (8) Use effective K (G_Keff) ===
void computeProjection(){
    int K=G_Keff, N=G_N, k, j;
    for(k=0;k<K;k++){
        var acc=0; 
        for(j=0;j<N;j++) acc += G_RP[k*N+j]*(G_State[j]*G_State[j]);
        G_Z[k]=acc;
    }
}

void allocateNet()
{
    int N=G_N, D=G_D, K=G_K;
    G_State=(var*)malloc(N*sizeof(var));  G_Prev=(var*)malloc(N*sizeof(var));  G_Vel=(var*)malloc(N*sizeof(var));
    G_Adj=(int*)malloc(N*D*sizeof(int));
    G_RP=(var*)malloc(K*N*sizeof(var));   G_Z=(var*)malloc(K*sizeof(var));
    G_Mode=(int*)malloc(N*sizeof(int));
    G_WSelf=(var*)malloc(N*sizeof(var));  G_WN1=(var*)malloc(N*sizeof(var));   G_WN2=(var*)malloc(N*sizeof(var));
    G_WGlob1=(var*)malloc(N*sizeof(var)); G_WGlob2=(var*)malloc(N*sizeof(var));
    G_WMom=(var*)malloc(N*sizeof(var));   G_WTree=(var*)malloc(N*sizeof(var)); G_WAdv=(var*)malloc(N*sizeof(var));
    A1x=(var*)malloc(N*sizeof(var)); A1lam=(var*)malloc(N*sizeof(var)); A1mean=(var*)malloc(N*sizeof(var));
    A1E=(var*)malloc(N*sizeof(var)); A1P=(var*)malloc(N*sizeof(var));   A1i=(var*)malloc(N*sizeof(var)); A1c=(var*)malloc(N*sizeof(var));
    A2x=(var*)malloc(N*sizeof(var)); A2lam=(var*)malloc(N*sizeof(var)); A2mean=(var*)malloc(N*sizeof(var));
    A2E=(var*)malloc(N*sizeof(var)); A2P=(var*)malloc(N*sizeof(var));   A2i=(var*)malloc(N*sizeof(var)); A2c=(var*)malloc(N*sizeof(var));
    G1mean=(var*)malloc(N*sizeof(var)); G1E=(var*)malloc(N*sizeof(var));
    G2P=(var*)malloc(N*sizeof(var));    G2lam=(var*)malloc(N*sizeof(var));
    G_TreeTerm=(var*)malloc(N*sizeof(var)); G_TopEq=(int*)malloc(N*sizeof(int)); G_TopW=(var*)malloc(N*sizeof(var));
    TAlpha=(var*)malloc(N*sizeof(var));     TBeta=(var*)malloc(N*sizeof(var));
    G_Pred=(var*)malloc(N*sizeof(var)); G_AdvScore=(var*)malloc(N*sizeof(var));
    G_PropRaw=(var*)malloc(N*sizeof(var));  G_Prop=(var*)malloc(N*sizeof(var));

    if(LOG_EXPR_TEXT){
        G_Sym=(string*)malloc(N*sizeof(char*));
    } else {
        G_Sym=0;
    }

    G_TreeCap=128; // was 512 (Fix #3: start smaller; still grows if needed)
    G_TreeIdx=(Node**)malloc(G_TreeCap*sizeof(Node*)); G_TreeN=0;
    G_EqTreeId=(int*)malloc(N*sizeof(int));

    // Pre-init adjacency to safe value
    int tInit; for(tInit=0; tInit<N*D; tInit++) G_Adj[tInit] = -1;

    int i;
    for(i=0;i<N;i++){
        G_State[i]=random();
        G_Prev[i]=G_State[i]; G_Vel[i]=0;
        G_Mode[i]=0;
        G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2; G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;
        A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
        A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
        G1mean[i]=1.0; G1E[i]=0.001; G2P[i]=0.6; G2lam[i]=0.3;
        TAlpha[i]=0.8; TBeta[i]=25.0;
        G_TreeTerm[i]=0; G_TopEq[i]=-1; G_TopW[i]=0;
        G_Pred[i]=0.5;   G_AdvScore[i]=0;
        G_PropRaw[i]=1;  G_Prop[i]=1.0/G_N;

        if(LOG_EXPR_TEXT){
            G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
            if(G_Sym[i]) strcpy(G_Sym[i], "");
        }
    }
}

void freeNet()
{
    int i;
    if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_Vel)free(G_Vel);
    if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z);
    if(G_Mode)free(G_Mode); if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
    if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2); if(G_WMom)free(G_WMom);
    if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);
    if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
    if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
    if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);
    if(G_TreeTerm)free(G_TreeTerm); if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
    if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
    if(G_Pred)free(G_Pred); if(G_AdvScore)free(G_AdvScore);
    if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);
    if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
    if(G_TreeIdx)free(G_TreeIdx); if(G_EqTreeId)free(G_EqTreeId);
}

// --------- DTREE feature builders ----------
var nrm_s(var x){ return sat100(100.0*tanh(x)); }
var nrm_scl(var x,var s){ return sat100(100.0*tanh(s*x)); }

void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var* S /*ADV_EQ_NF*/)
{
    int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
    Node* t = treeAt(tid);

    S[0]  = nrm_s(G_State[i]);
    S[1]  = nrm_s(mean);
    S[2]  = nrm_scl(power,0.05);
    S[3]  = nrm_scl(energy,0.01);
    S[4]  = nrm_s(lambda);
    S[5]  = sat100(200.0*(G_Pred[i]-0.5));
    S[6]  = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
    S[7]  = sat100(1000.0*t->r);
    S[8]  = nrm_s(G_TreeTerm[i]);
    S[9]  = sat100(200.0*((var)G_Mode[i]/3.0)-100.0);
    S[10] = sat100(200.0*(G_MCF_PBull-0.5));
    S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
    sanitize(S,ADV_EQ_NF);
}

// (Kept for completeness; not used by DTREE anymore)
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
    int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
    int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
    Node* ti = treeAt(tid_i);
    Node* tj = treeAt(tid_j);

    P[0]=nrm_s(G_State[i]); P[1]=nrm_s(G_State[j]);
    P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
    P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
    P[4]=sat100(1000.0*ti->r); P[5]=sat100(1000.0*tj->r);
    P[6]=sat100(abs(P[2]-P[3]));
    P[7]=sat100(abs(P[4]-P[5]));
    P[8]=sat100(100.0*(G_Pred[i]+G_Pred[j]-1.0));
    P[9]=nrm_s(lambda); P[10]=nrm_s(mean); P[11]=nrm_scl(power,0.05);
    sanitize(P,ADV_PAIR_NF);
}

// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
    int N = G_N, D = G_D;
    if(!G_Adj || N <= 1 || D <= 0) return 0;
    if(d < 0) d = 0;
    if(d >= D) d = d % D;
    int v = G_Adj[i*D + d];
    if(v < 0 || v >= N || v == i){
        v = (i + 1) % N;
    }
    return v;
}

void sanitizeAdjacency(){
    if(!G_Adj) return;
    int N = G_N, D = G_D;
    int i, d;
    for(i=0;i<N;i++){
        for(d=0; d<D; d++){
            int *p = &G_Adj[i*D + d];
            if(*p < 0 || *p >= N || *p == i){
                int r = (int)random(N);
                if(r == i) r = (r+1) % N;
                *p = r;
            }
        }
        if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
            int r2 = (G_Adj[i*D+1] + 1) % N;
            if(r2 == i) r2 = (r2+1) % N;
            G_Adj[i*D+1] = r2;
        }
    }
}

// --------- advisor helpers (NEW) ----------

// cache one advisor value per equation per bar
var adviseSeed(int i, var lambda, var mean, var energy, var power)
{
    static int seedBar = -1;
    static int haveSeed[NET_EQNS];
    static var seedVal[NET_EQNS];

    if(seedBar != Bar){
        int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
        seedBar = Bar;
    }
    if(i < 0) i = 0;
    if(i >= NET_EQNS) i = i % NET_EQNS;

    // Respect advisor budget/rotation for seed too
    if(!allowAdvise(i)) return 0;

    if(!haveSeed[i]){
        seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
        haveSeed[i] = 1;
    }
    return seedVal[i];
}

// simple deterministic mixer for diversity in [-1..1] without extra advise calls
var mix01(var a, int salt){
    var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
    return tanh(0.75*z);
}

// --------- advise wrappers (single-equation only) ----------
// Use estimator to halt when tight; respect rotation budget.
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
    if(!allowAdvise(i)) return 0;

    var S[ADV_EQ_NF];
    buildEqFeatures(i,lambda,mean,energy,power,S);

    if(is(INITRUN)) return 0;

    // stop early based on our estimator, not memory(0)
    int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
    if(tight) return 0;

    var obj = 0;
    if(Train && !tight)
        obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));

    int objI = (int)obj;
    var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
    return a/100.;
}

// --------- advisePair disabled: never call DTREE here ----------
var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
    return 0;
}

// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power)
{
    int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
    int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
    Node *ni = treeAt(ti), *nj = treeAt(tj);
    var simD  = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
    var simR  = 1.0 / (1.0 + 50.0*abs(ni->r - nj->r));
    var pred  = 0.5*(G_Pred[i] + G_Pred[j]);
    var score = 0.5*pred + 0.3*simD + 0.2*simR;
    return 2.0*score - 1.0;
}

// --------- adjacency selection (heuristic only) ----------
// safer clash check using prev>=0
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
    int N=G_N, D=G_D, i, d, c, best, cand;
    for(i=0;i<N;i++){
        for(d=0; d<D; d++){
            var bestScore = -2; best = -1;
            // (7) adaptive candidate breadth
            for(c=0;c<G_CandNeigh;c++){
                cand = (int)random(N);
                if(cand==i) continue;
                int clash=0, k;
                for(k=0;k<d;k++){
                    int prev = G_Adj[i*D+k];
                    if(prev>=0 && prev==cand){ clash=1; break; }
                }
                if(clash) continue;
                var s = scorePairSafe(i,cand,lambda,mean,energy,power);
                if(s > bestScore){ bestScore=s; best=cand; }
            }
            if(best<0){ do{ best = (int)random(N);} while(best==i); }
            G_Adj[i*D + d] = best;
        }
    }
}

// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }

void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
    var seed = adviseSeed(i,lambda,mean,energy,power);
    G_Mode[i] = (int)(abs(1000*seed)) & 3;

    // derive weights & params deterministically from the single seed
    G_WSelf[i]  = mapA(mix01(seed, 11), 0.15, 0.85);
    G_WN1[i]    = mapA(mix01(seed, 12), 0.05, 0.35);
    G_WN2[i]    = mapA(mix01(seed, 13), 0.05, 0.35);
    G_WGlob1[i] = mapA(mix01(seed, 14), 0.05, 0.30);
    G_WGlob2[i] = mapA(mix01(seed, 15), 0.05, 0.30);
    G_WMom[i]   = mapA(mix01(seed, 16), 0.02, 0.15);
    G_WTree[i]  = mapA(mix01(seed, 17), 0.05, 0.35);
    G_WAdv[i]   = mapA(mix01(seed, 18), 0.05, 0.35);

    A1x[i]   = randsign()*mapA(mix01(seed, 21), 0.6, 1.2);
    A1lam[i] = randsign()*mapA(mix01(seed, 22), 0.05,0.35);
    A1mean[i]=                  mapA(mix01(seed, 23),-0.30,0.30);
    A1E[i]   =                  mapA(mix01(seed, 24),-0.0015,0.0015);
    A1P[i]   =                  mapA(mix01(seed, 25),-0.30,0.30);
    A1i[i]   =                  mapA(mix01(seed, 26),-0.02,0.02);
    A1c[i]   =                  mapA(mix01(seed, 27),-0.20,0.20);

    A2x[i]   = randsign()*mapA(mix01(seed, 31), 0.6, 1.2);
    A2lam[i] = randsign()*mapA(mix01(seed, 32), 0.05,0.35);
    A2mean[i]=                  mapA(mix01(seed, 33),-0.30,0.30);
    A2E[i]   =                  mapA(mix01(seed, 34),-0.0015,0.0015);
    A2P[i]   =                  mapA(mix01(seed, 35),-0.30,0.30);
    A2i[i]   =                  mapA(mix01(seed, 36),-0.02,0.02);
    A2c[i]   =                  mapA(mix01(seed, 37),-0.20,0.20);

    G1mean[i] =                  mapA(mix01(seed, 41), 0.4, 1.6);
    G1E[i]    =                  mapA(mix01(seed, 42),-0.004,0.004);
    G2P[i]    =                  mapA(mix01(seed, 43), 0.1, 1.2);
    G2lam[i]  =                  mapA(mix01(seed, 44), 0.05, 0.7);

    TAlpha[i] =                  mapA(mix01(seed, 51), 0.3, 1.5);
    TBeta[i]  =                  mapA(mix01(seed, 52), 6.0, 50.0);

    G_PropRaw[i] = 0.01 + 0.99*(0.5*(seed+1.0));
}

void normalizeProportions()
{
    int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
    if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = 1.0/N; return; }
    for(i=0;i<N;i++) G_Prop[i] = G_PropRaw[i]/s;
}

var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
    int N=G_N,j;
    int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
    Node* ti=treeAt(tid_i); int di=ti->d; var ri=ti->r;
    var alpha=TAlpha[i], beta=TBeta[i];
    var sumw=0, acc=0, bestW=-1; int bestJ=-1;
    for(j=0;j<N;j++){
        if(j==i) continue;
        int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
        Node* tj=treeAt(tid_j); int dj=tj->d; var rj=tj->r;
        var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
        var predBoost = 0.5 + 0.5*(G_Pred[i]*G_Pred[j]);
        var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
        w *= predBoost * propBoost;
        var pairAdv = scorePairSafe(i,j,0,0,0,0);
        var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
        w *= pairBoost;
        sumw += w; acc += w*G_State[j];
        if(w>bestW){bestW=w; bestJ=j;}
    }
    if(outTopEq) *outTopEq = bestJ;
    if(outTopW)  *outTopW  = ifelse(sumw>0, bestW/sumw, 0);
    if(sumw>0) return acc/sumw; return 0;
}

// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
    if(LOG_EXPR_TEXT){
        string s = G_Sym[i]; s[0]=0;
        string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
                         A1x[i], n1, A1lam[i], A1mean[i], A1E[i], A1P[i], A1i[i], A1c[i]);
        string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
                         A2x[i], n2, A2lam[i], A2mean[i], A2E[i], A2P[i], A2i[i], A2c[i]);

        strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
        strlcat_safe(s, strf("%.3f*x[i] + ", G_WSelf[i]), EXPR_MAXLEN);

        if(G_Mode[i]==1){
            strlcat_safe(s, strf("%.3f*tanh%s + ", G_WN1[i], a1), EXPR_MAXLEN);
            strlcat_safe(s, strf("%.3f*sin%s + ",  G_WN2[i], a2), EXPR_MAXLEN);
        } else if(G_Mode[i]==2){
            strlcat_safe(s, strf("%.3f*cos%s + ",  G_WN1[i], a1), EXPR_MAXLEN);
            strlcat_safe(s, strf("%.3f*tanh%s + ", G_WN2[i], a2), EXPR_MAXLEN);
        } else {
            strlcat_safe(s, strf("%.3f*sin%s + ",  G_WN1[i], a1), EXPR_MAXLEN);
            strlcat_safe(s, strf("%.3f*cos%s + ",  G_WN2[i], a2), EXPR_MAXLEN);
        }

        strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", G_WGlob1[i], G1mean[i], G1E[i]), EXPR_MAXLEN);
        strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ",   G_WGlob2[i], G2P[i],   G2lam[i]), EXPR_MAXLEN);
        strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ",         G_WMom[i]), EXPR_MAXLEN);
        strlcat_safe(s, strf("Prop[i]=%.4f; ",                   G_Prop[i]), EXPR_MAXLEN);
        strlcat_safe(s, strf("%.3f*DT(i) + ",                    G_WTree[i]), EXPR_MAXLEN);
        strlcat_safe(s, strf("%.3f*DTREE(i)",                    G_WAdv[i]), EXPR_MAXLEN);
    }
}

// --------- one-time rewire init ----------
void rewireInit()
{
    randomizeRP(); computeProjection();
    G_TreeN=0; indexTreeDFS(Root);
    if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
    int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
}

// probes & unsigned context hash
// ----------------------------------------------------------------------
// rewireEpoch (SAFE: no functions inside ifelse)
// ----------------------------------------------------------------------
void rewireEpoch(var lambda, var mean, var energy, var power)
{
    int i;

    if(ENABLE_WATCH) watch("?A");   // before predictability
    for(i=0;i<G_N;i++){
        int  tid = safeTreeIndexFromEq(G_EqTreeId[i]);
        Node* t  = treeAt(tid);
        G_Pred[i] = nodePredictability(t);
    }

    if(ENABLE_WATCH) watch("?B");   // after predictability, before adjacency

    // (7) adapt adjacency sampling breadth by regime entropy
    G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH);

    rewireAdjacency_DTREE(lambda,mean,energy,power);

    if(ENABLE_WATCH) watch("?C");   // after adjacency, before synthesize
    sanitizeAdjacency();

    for(i=0;i<G_N;i++)
        synthesizeEquationFromDTREE(i,lambda,mean,energy,power);

    if(ENABLE_WATCH) watch("?D");   // before normalize / ctx hash
    normalizeProportions();

    // Unsigned context hash of current adjacency (+ epoch) for logging
    {
        int D = G_D;
        unsigned int h = 2166136261u;
        int total = G_N * D;
        for(i=0;i<total;i++){
            unsigned int x = (unsigned int)G_Adj[i];
            h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
        }
        G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
    }

    // Optional expression text (only when LOG_EXPR_TEXT==1)
    for(i=0;i<G_N;i++){
        int n1 = adjSafe(i,0);
        int n2 = n1;
        if(G_D >= 2) n2 = adjSafe(i,1);
        if(LOG_EXPR_TEXT) buildSymbolicExpr(i,n1,n2);
    }
}

var projectNet()
{
    int N=G_N,i; var sum=0,sumsq=0,cross=0;
    for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
    var mean=sum/N, corr=cross/(N-1);
    return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}

// ----------------------------------------------------------------------
// updateNet (SAFE: no functions inside ifelse for neighbor indices)
// ----------------------------------------------------------------------
void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
    int N = G_N, D = G_D, i;

    var sum = 0, sumsq = 0;
    for(i = 0; i < N; i++){
        sum   += G_State[i];
        sumsq += G_State[i]*G_State[i];
    }
    var mean   = sum / N;
    var energy = sumsq;
    var power  = sumsq / N;

    for(i = 0; i < N; i++){
        int  tid = safeTreeIndexFromEq(G_EqTreeId[i]);
        Node* t  = treeAt(tid);
        G_Pred[i] = nodePredictability(t);
    }

    for(i = 0; i < N; i++){
        int n1 = adjSafe(i,0);
        int n2 = n1;
        if(D >= 2) n2 = adjSafe(i,1);

        var xi   = G_State[i];
        var xn1  = G_State[n1];
        var xn2  = G_State[n2];
        var mom  = xi - G_Prev[i];

        int topEq = -1;
        var topW  = 0;
        var dt    = dtreeTerm(i, &topEq, &topW);
        G_TreeTerm[i] = dt;
        G_TopEq[i]    = topEq;
        G_TopW[i]     = topW;

        // call advisor only when allowed
        var adv = 0;
        if(allowAdvise(i))
             adv = adviseEq(i, driver, mean, energy, power);
 
        G_AdvScore[i] = adv;

        var arg1 = A1x[i]*xn1 + A1lam[i]*driver + A1mean[i]*mean + A1E[i]*energy + A1P[i]*power + A1i[i]*i + A1c[i];
        var arg2 = A2x[i]*xn2 + A2lam[i]*driver + A2mean[i]*mean + A2E[i]*energy + A2P[i]*power + A2i[i]*i + A2c[i];

        var nl1, nl2;
        if(G_Mode[i] == 0){ nl1 = sin(arg1);  nl2 = cos(arg2); }
        else if(G_Mode[i] == 1){ nl1 = tanh(arg1); nl2 = sin(arg2); }
        else if(G_Mode[i] == 2){ nl1 = cos(arg1);  nl2 = tanh(arg2); }
        else { nl1 = sin(arg1); nl2 = cos(arg2); }

        var glob1 = tanh(G1mean[i]*mean + G1E[i]*energy);
        var glob2 = sin (G2P[i]*power + G2lam[i]*driver);

        var xNew =
            G_WSelf[i]*xi +
            G_WN1[i]*nl1 +
            G_WN2[i]*nl2 +
            G_WGlob1[i]*glob1 +
            G_WGlob2[i]*glob2 +
            G_WMom[i]*mom +
            G_WTree[i]*dt +
            G_WAdv[i]*adv;

        G_Prev[i]  = xi;
        G_Vel[i]   = xNew - xi;
        G_State[i] = clamp(xNew, -10, 10);

        if(writeMeta && (G_Epoch % META_EVERY == 0) && !G_LogsOff){
            int  tid2 = safeTreeIndexFromEq(G_EqTreeId[i]);
            Node* t2  = treeAt(tid2);
            int  nn1  = adjSafe(i,0);
            int  nn2  = nn1;
            if(G_D >= 2) nn2 = adjSafe(i,1);

            if(LOG_EQ_TO_ONE_FILE){
                string expr = "";
                if(LOG_EXPR_TEXT) expr = G_Sym[i];
                appendEqMetaLine(
                    Bar, G_Epoch, G_CtxID, i, nn1, nn2, tid2, t2->d, t2->r,
                    G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
                    MC_PBullNext, MC_Entropy, MC_Cur, expr
                );
            } else {
                char fname[64];
                buildEqFileName(i, fname);
                string expr2 = "";
                if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
                file_append(fname,
                    strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
                        G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid2, t2->d, t2->r,
                        G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
                        MC_PBullNext, MC_Entropy, MC_Cur, expr2));
            }
        }
    }

    if(outMean)   *outMean   = mean;
    if(outEnergy) *outEnergy = energy;
    if(outPower)  *outPower  = power;
}

// ----------------- MAIN -----------------
function run()
{
    static int initialized = 0;
    static var lambda;
    static int fileInit = 0;

    BarPeriod = BAR_PERIOD;
    if(LookBack < NWIN) LookBack = NWIN;
    if(Train) Hedge = 2;

    // Plots are opt-in via ENABLE_PLOTS
    set(RULES|LEAN);
    if(ENABLE_PLOTS) set(PLOTNOW);
    asset(ASSET_SYMBOL);

    if(is(INITRUN) && !initialized){

        // init dummy node
        G_DummyNode.v = 0;
        G_DummyNode.r = 0;
        G_DummyNode.c = 0;
        G_DummyNode.n = 0;
        G_DummyNode.d = 0;

        // allocate Markov matrices (zeroed)
        MC_Count  = (int*)malloc(MC_STATES*MC_STATES*sizeof(int));
        MC_RowSum = (int*)malloc(MC_STATES*sizeof(int));
        int k;
        for(k=0;k<MC_STATES*MC_STATES;k++) MC_Count[k]=0;
        for(k=0;k<MC_STATES;k++) MC_RowSum[k]=0;

        // capture pattern names (optional)
        var tmp[MC_NPAT];
        buildCDL_TA61(tmp, MC_Names);

        // build tree + network
        Root = createNode(MAX_DEPTH);
        allocateNet();

        // engine params
        G_DTreeExp = 1.10 + random(0.50);   // [1.10..1.60)
        G_FB_A     = 0.60 + random(0.25);   // [0.60..0.85) (kept)
        G_FB_B     = 1.0 - G_FB_A;

        randomizeRP();
        computeProjection();
        rewireInit();

        G_Epoch = 0;
        rewireEpoch(0,0,0,0);

        // Header setup (consolidated vs legacy)
        if(LOG_EQ_TO_ONE_FILE){
            writeEqHeaderOnce();
        } else {
            char fname[64];
            int i2;
            for(i2=0;i2<NET_EQNS;i2++){
                buildEqFileName(i2,fname);
                file_append(fname,
                    "Bar,lambda,gamma,i,State,n1,n2,mean,energy,power,Vel,Mode,WAdv,WSelf,WN1,WN2,WGlob1,WGlob2,WMom,WTree,Pred,Adv,Prop,TreeTerm,TopEq,TopW,TreeId,Depth,Rate,PBull,Entropy,MCState\n");
            }
        }

        // Markov CSV header
        if(!fileInit){
            file_append("Log\\Alpha12_markov.csv","Bar,State,PBullNext,Entropy,RowSum\n");
            fileInit=1;
        }

        // initial META dump (consolidated or legacy)
        int i;
        for(i=0;i<G_N;i++){
            int n1 = adjSafe(i,0);
            int n2 = n1;
            if(G_D >= 2) n2 = adjSafe(i,1);
            int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
            Node* t = treeAt(tid);

            if(LOG_EQ_TO_ONE_FILE){
                string expr = "";
                if(LOG_EXPR_TEXT) expr = G_Sym[i];
                appendEqMetaLine(
                    Bar, G_Epoch, G_CtxID, i, n1, n2, tid, t->d, t->r,
                    G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
                    MC_PBullNext, MC_Entropy, MC_Cur, expr
                );
            } else {
                char fname2[64];
                buildEqFileName(i,fname2);
                string expr2 = "";
                if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
                file_append(fname2,
                    strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
                        G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
                        G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
                        MC_PBullNext, MC_Entropy, MC_Cur, expr2));
            }
        }

        initialized=1;
        printf("\nRoot nodes: %i | Net equations: %i (degree=%i, kproj=%i)",
               countNodes(Root), G_N, G_D, G_K);
    }

    // early zero-cost shedding when approaching cap
    if(mem_mb_est() >= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_ShedStage == 0)
        shed_zero_cost_once();

    // ==== Runtime memory / depth manager (acts only when near the cap)
    depth_manager_runtime();

    // ====== Per bar: Candles -> Markov
    static var CDL[MC_NPAT];
    buildCDL_TA61(CDL,0);

    // (2) adaptive threshold for Markov state acceptance
    MC_Cur = MC_stateFromCDL(CDL, G_MC_ACT);

    if(Bar > LookBack) MC_update(MC_Prev, MC_Cur);
    MC_Prev = MC_Cur;

    // (6) alpha decays with row support to sharpen PBull as rows fill
    var rs = (var)MC_RowSum[MC_Cur];
    G_MC_Alpha = clamp(1.0 / (1.0 + rs/256.0), 0.05, 1.0);

    MC_PBullNext = MC_nextBullishProb(MC_Cur);
    MC_Entropy   = MC_rowEntropy01(MC_Cur);

    // expose Markov features
    G_MCF_PBull   = MC_PBullNext;
    G_MCF_Entropy = MC_Entropy;
    G_MCF_State   = (var)MC_Cur;

    // (2) EW acceptance rate of nonzero states -> adapt threshold toward target rate
    {
        var aEW = 0.01; // ~100-bar half-life
        G_AccRate = (1 - aEW)*G_AccRate + aEW*(MC_Cur != 0);
        var target = 0.35; // aim for ~35% nonzero states
        G_MC_ACT = clamp(G_MC_ACT + 0.02*(G_AccRate - target), 0.15, 0.60);
    }

    // ====== Tree driver lambda
    lambda = evaluateNode(Root);

    // ====== Rewire cadence (4) + epoch work
    {
        int doRewire = ((Bar % REWIRE_EVERY) == 0);

        // (4) early rewire when utility falls
        static var U_prev = 0;
        var U_now = util_now();
        if(U_now + 0.01 < U_prev) doRewire = 1;
        U_prev = U_now;

        if(doRewire){
            G_Epoch++;

            int ii;
            var sum=0;
            for(ii=0;ii<G_N;ii++) sum += G_State[ii];
            var mean = sum/G_N;

            var energy=0;
            for(ii=0;ii<G_N;ii++) energy += G_State[ii]*G_State[ii];
            var power = energy/G_N;

            rewireEpoch(lambda,mean,energy,power);
        }

        // (8) adapt effective projection K each bar and recompute projection once
        G_Keff = ifelse(MC_Entropy < 0.45, KPROJ, KPROJ/2);
        computeProjection();

        // (3) dynamic advisor budget per bar (before updateNet so it applies now)
        int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
        G_AdviseMax = ifelse(tight, 12, ifelse(MC_Entropy < 0.45, 32, 16));

        // Update net this bar (write META only if rewired and not shedding logs)
        var meanB, energyB, powerB;
        updateNet(lambda, &meanB, &energyB, &powerB, doRewire);

        // Feedback: compute ensemble projection
        var gamma = projectNet();

        // --- Accuracy sentinel update & elastic depth controller ---
        acc_update(lambda, gamma);
        edc_runtime();

        // (1) Adaptive feedback blend toward the more informative component
        var w = 0.5 + 0.5*G_AccCorr;                 // 0..1
        G_FB_W = clamp(0.9*G_FB_W + 0.1*w, 0.2, 0.9);
        lambda  = G_FB_W*lambda + (1.0 - G_FB_W)*gamma;

        // Plot/log gating
        int doPlot = (ENABLE_PLOTS && !G_ChartsOff);
        int doLog = ifelse(G_LogsOff, ((Bar % (LOG_EVERY*4)) == 0), ((Bar % LOG_EVERY) == 0));

        // Plots
        if(doPlot){
            plot("lambda", lambda, LINE, 0);
            plot("gamma",  gamma,  LINE, 0);
            plot("P_win",  powerB, LINE, 0);
            plot("PBullNext", MC_PBullNext, LINE, 0);
            plot("MC_Entropy", MC_Entropy, LINE, 0);
            plot("MemMB", memory(0)/(1024.*1024.), LINE, 0);
            plot("Allocs", (var)memory(2), LINE, 0);
        }

        // Markov CSV log (decimated; further decimated when shedding)
        if(doLog){
            file_append("Log\\Alpha12_markov.csv",
                strf("%i,%i,%.6f,%.6f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));
        }

        // ====== Entries (Markov-gated) ======
        if( MC_PBullNext > PBULL_LONG_TH && lambda > 0.7 )  enterLong();
        if( MC_PBullNext < PBULL_SHORT_TH && lambda < -0.7 ) enterShort();
    }
}

// Clean up memory
function cleanup()
{
    if(Root) freeTree(Root);
    if(MC_Count)  free(MC_Count);
    if(MC_RowSum) free(MC_RowSum);
    freeNet();
}
