Gate-and-Flow Adaptive Navigator
[Re: TipmyPip]
#488876
09/06/25 00:26
Joined: Sep 2017
Posts: 164
TipmyPip
OP
Member
Gate-and-Flow Adaptive Navigator

A. A small market lexicon
Market moments are compressed into a compact set of archetypes. Think of it as a modest alphabet describing what the tape “looks like” right now. From the stream of past archetypes, the system develops expectations about what tends to follow what, and how tightly those follow-ups cluster. Each moment it reads two quiet dials: a lean (directional tilt for the next step) and a clarity (how decisive that tilt appears). This pair forms a permission gate; sometimes it opens wide, sometimes it holds.

B. A soft landscape of influences
Running beneath the gate is a smooth, continuously evolving field of interacting influences. Many small components co-exist; each carries a trace of its own past, listens to a couple of peers, feels market pulse across multiple horizons, and heeds coarse crowd summaries (central tendency, dispersion, co-movement). A hint of recent change is included so the field can tip faster when the tape turns. All signals are bounded. Attention is scarce—capital and focus are rationed proportionally, so louder agreement earns more weight while lone, weak voices fade.

C. Occasional reshaping of who listens to whom
At planned intervals, the “seating chart” is refreshed: which components attend to which others, how much weight each pathway receives, and which simple bends or transforms are active. Selection favors regular, well-behaved contributors, kindred pairings along a rhythm ladder (slow with slow, fast with fast when it helps), and compact combinations that don’t overfit. The structure molts rather than resets—the scaffold persists while connections and strengths are updated, preserving continuity.

D. Capacity that breathes, with guardrails
Model size is flexible. When resources are tight or added detail stops paying, the form trims the deepest, least helpful nuance. When there’s headroom and clear benefit, it adds a thin layer. Each change is tentative and reversible: small growth trials are evaluated after a delay; harmful expansions are rolled back. Utility balances two things: quality of alignment (signal usefulness) and a mild cost for resource consumption. The system seeks useful complexity, not maximal complexity.

E. Permission meets timing and size
Trades happen only when permission (lean & clarity) is convincing and the soft landscape hums in the same key. The landscape then suggests when to act and how much to commit. Strength of agreement raises size; disagreement or ambiguity pushes toward patience. “No trade” is treated as a first-class decision, not a failure mode.

F. A brief, human-readable diary
The engine keeps a compact ledger for audit and research: the current archetype, the two dials of the gate, and terse sketches of how influences combined to justify recent posture. The emphasis is on explainability without revealing recipes—clear enough for oversight, lean enough for speed.

G. What tends to emerge
Coherence without rigidity. When rhythms align, groups of components move as a unit; when clarity fades, solos recede. Adaptation is maintained through small, distributed adjustments rather than dramatic overhauls. The result is regime-following behavior that can stand down gracefully between regimes.

H. Risk doctrine as a controlling atmosphere
Exposure is capped at both gross and net levels; incremental size responds to confidence and recent variability. When the environment becomes noisy or resources get tight, the system de-emphasizes fine detail, concentrates on the few strongest agreements, and allows activity to fall toward neutral rather than force action. This keeps drawdown sensitivity and operational risk in check.

I. Memory, recency, and drift
Assessments use decaying memory so recent tape action matters more while older evidence fades. Because the gate and the landscape both learn from the stream, their alignment naturally drifts with the market: when relationships change, the system doesn’t snap—it glides toward the new posture at a controlled pace.

J. Separation of roles
The lexicon gives a compact, discrete view of market context and provides the permission signal. The landscape offers a continuous, multi-horizon view that shapes timing and size. The reshaper keeps connections healthy and simple. The capacity governor ensures the form remains useful under constraints. Together they reduce overreaction to noise while still allowing timely response to structural change.

K. Practical trading implications
Expect fewer, stronger actions when the tape is muddled; more decisive engagement when the lexicon and landscape agree. Expect the sizing to track consensus strength, not single-indicator extremes. Expect the structure to age well: it refreshes itself without discarding accumulated context, aiming for stable behavior across regime shifts.

L. Philosophy in one line
Trade when the story is both clear and corroborated; keep the model light, the adjustments small, and the diary open.

// ======================================================================
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// (with runtime memory shaping, selective depth pruning, and elastic accuracy-aware depth growth)
// ======================================================================
// ================= USER CONFIG =================
#define ASSET_SYMBOL "EUR/USD"
#define BAR_PERIOD 60
#define MC_ACT 0.30 // threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH 0.60 // Markov gate for long
#define PBULL_SHORT_TH 0.40 // Markov gate for short
// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS 0 // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH 0 // 0 = disable watch() probes; 1 = enable
// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES 3
#define MAX_DEPTH 4
#define NWIN 256
#define NET_EQNS 100
#define DEGREE 4
#define KPROJ 16
#define REWIRE_EVERY 127
#define CAND_NEIGH 8
// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE 1 // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT 0 // 0: omit full expression (store signature only); 1: include text
#define META_EVERY 4 // write META every N rewires
#define LOG_EQ_SAMPLE NET_EQNS // limit number of equations logged
#define EXPR_MAXLEN 512 // cap expression string
// decimate Markov log cadence
#define LOG_EVERY 16
// ---- DTREE feature sizes (extended for Markov features) ----
#define ADV_EQ_NF 12 // per-equation features
#define ADV_PAIR_NF 12 // per-pair features (kept for completeness; DTREE pair disabled)
// ================= Candles -> 122-state Markov =================
#define MC_NPAT 61
#define MC_STATES 123 // 1 + 2*MC_NPAT
#define MC_NONE 0
#define MC_LAPLACE 1.0
// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB 50
#define MEM_HEADROOM_MB 5
#define DEPTH_STEP_BARS 16
#define KEEP_CHILDREN_HI 2
#define KEEP_CHILDREN_LO 1
#define RUNTIME_MIN_DEPTH 2
int G_ShedStage = 0; // 0..2
int G_LastDepthActBar = -999999;
int G_ChartsOff = 0; // gates plot()
int G_LogsOff = 0; // gates file_append cadence
int G_SymFreed = 0; // expression buffers freed
int G_RT_TreeMaxDepth = MAX_DEPTH;
// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var G_AccCorr = 0; // [-1..1]
var G_AccBase = 0; // first seen sentinel
int G_HaveBase = 0;
// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS 64 // start a growth "trial" this often (when memory allows)
#define TUNE_DELAY_BARS 64 // evaluate the trial after this many bars
var G_UtilBefore = 0, G_UtilAfter = 0;
int G_TunePending = 0;
int G_TuneStartBar = 0;
int G_TuneAction = 0; // +1 grow trial, 0 none
// ======================================================================
// (FIX) Move the type and globals used by mem_bytes_est() up here
// ======================================================================
// HARMONIC D-TREE type (we define it early so globals below compile fine)
typedef struct Node { var v; var r; void* c; int n; int d; } Node;
// Minimal globals needed before mem_bytes_est()
Node* Root = 0;
Node** G_TreeIdx = 0;
int G_TreeN = 0;
int G_TreeCap = 0;
var G_DTreeExp = 0;
Node G_DummyNode; // defined early so treeAt() can return &G_DummyNode
// Network sizing globals (used by mem_bytes_est)
int G_N = NET_EQNS;
int G_D = DEGREE;
int G_K = KPROJ;
// Optional expression buffer pointer (referenced by mem_bytes_est)
string* G_Sym = 0;
// Forward decls that reference Node
var nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();
// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);
// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_MAX_EQ 16 // how many equations may use DTREE per bar
#define ADVISE_ROTATE 1 // 1 = rotate which equations get DTREE each bar
int allowAdvise(int i)
{
if(ADVISE_ROTATE){
int groups = NET_EQNS / ADVISE_MAX_EQ;
if(groups < 1) groups = 1;
return ((i / ADVISE_MAX_EQ) % groups) == (Bar % groups);
} else {
return (i < ADVISE_MAX_EQ);
}
}
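The rotation can be checked on its own. Below is a standalone portable-C sketch of the same schedule (constants mirror NET_EQNS and ADVISE_MAX_EQ; the function name is illustrative, not part of the script): bar t serves equation group (t mod groups), so each equation gets its DTREE call once every few bars.

```c
#include <assert.h>

/* Standalone sketch of the rotation in allowAdvise(): with N_EQ
   equations and a per-bar budget of BUDGET, bar t serves group
   (t mod groups). Constants mirror NET_EQNS/ADVISE_MAX_EQ. */
enum { N_EQ = 100, BUDGET = 16 };

int allow_advise(int i, int bar)
{
    int groups = N_EQ / BUDGET;   /* 100/16 -> 6 groups */
    if(groups < 1) groups = 1;
    return ((i / BUDGET) % groups) == (bar % groups);
}
```

Note one quirk inherited from integer division: with 100 equations and a budget of 16, equations 96..99 fall into a 7th partial group that aliases onto group 0, so the bar serving group 0 actually advises 20 equations.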
// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u)
{
if(!u) return 0;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int sz_node = 2*SZV + SZP + 2*SZI;
int total = sz_node;
if(u->n > 0 && u->c) total += u->n * SZP;
int i;
for(i=0;i<u->n;i++)
total += tree_bytes(((Node**)u->c)[i]);
return total;
}
// ======================================================================
// Conservative in-script memory estimator (arrays + pointers)
// ======================================================================
int mem_bytes_est()
{
int N = G_N, D = G_D, K = G_K;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int b = 0;
b += N*SZV*(3 + 8 + 7 + 7 + 4 + 2 + 2 + 2 + 2);
b += N*SZI*(3); // G_Mode, G_TopEq, G_EqTreeId
b += N*D*SZI; // G_Adj
b += K*N*SZV; // G_RP
b += K*SZV; // G_Z
b += G_TreeCap*SZP; // G_TreeIdx pointer vector
if(G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN; // optional expression buffers
b += MC_STATES*MC_STATES*SZI + MC_STATES*SZI; // Markov
b += tree_bytes(Root); // include D-Tree
return b;
}
int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }
// === total memory (Zorro-wide) in MB ===
int memMB(){ return (int)(memory(0)/(1024*1024)); }
// light one-shot shedding
void shed_zero_cost_once()
{
if(G_ShedStage > 0) return;
set(PLOTNOW|OFF); G_ChartsOff = 1; // stop chart buffers
G_LogsOff = 1; // decimate logs (gated later)
G_ShedStage = 1;
}
void freeExprBuffers()
{
if(!G_Sym || G_SymFreed) return;
int i; for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym); G_Sym = 0; G_SymFreed = 1;
}
// depth manager (prune & shedding)
void depth_manager_runtime()
{
int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
int mb = mem_mb_est();
if(mb < trigger) return;
if(G_ShedStage == 0) shed_zero_cost_once();
if(G_ShedStage <= 1){
if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
G_ShedStage = 2;
}
int overBudget = (mb >= MEM_BUDGET_MB);
if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS))
return;
while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH)
{
int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
mb = mem_mb_est();
printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);
if(mb < trigger) break;
}
G_LastDepthActBar = Bar;
}
// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
int n = 0;
#define ADD(Name, Call) do{ var v = (Call); out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)
ADD("CDL2Crows", CDL2Crows());
ADD("CDL3BlackCrows", CDL3BlackCrows());
ADD("CDL3Inside", CDL3Inside());
ADD("CDL3LineStrike", CDL3LineStrike());
ADD("CDL3Outside", CDL3Outside());
ADD("CDL3StarsInSouth", CDL3StarsInSouth());
ADD("CDL3WhiteSoldiers", CDL3WhiteSoldiers());
ADD("CDLAbandonedBaby", CDLAbandonedBaby(0.3));
ADD("CDLAdvanceBlock", CDLAdvanceBlock());
ADD("CDLBeltHold", CDLBeltHold());
ADD("CDLBreakaway", CDLBreakaway());
ADD("CDLClosingMarubozu", CDLClosingMarubozu());
ADD("CDLConcealBabysWall", CDLConcealBabysWall());
ADD("CDLCounterAttack", CDLCounterAttack());
ADD("CDLDarkCloudCover", CDLDarkCloudCover(0.3));
ADD("CDLDoji", CDLDoji());
ADD("CDLDojiStar", CDLDojiStar());
ADD("CDLDragonflyDoji", CDLDragonflyDoji());
ADD("CDLEngulfing", CDLEngulfing());
ADD("CDLEveningDojiStar", CDLEveningDojiStar(0.3));
ADD("CDLEveningStar", CDLEveningStar(0.3));
ADD("CDLGapSideSideWhite", CDLGapSideSideWhite());
ADD("CDLGravestoneDoji", CDLGravestoneDoji());
ADD("CDLHammer", CDLHammer());
ADD("CDLHangingMan", CDLHangingMan());
ADD("CDLHarami", CDLHarami());
ADD("CDLHaramiCross", CDLHaramiCross());
ADD("CDLHignWave", CDLHignWave());
ADD("CDLHikkake", CDLHikkake());
ADD("CDLHikkakeMod", CDLHikkakeMod());
ADD("CDLHomingPigeon", CDLHomingPigeon());
ADD("CDLIdentical3Crows", CDLIdentical3Crows());
ADD("CDLInNeck", CDLInNeck());
ADD("CDLInvertedHammer", CDLInvertedHammer());
ADD("CDLKicking", CDLKicking());
ADD("CDLKickingByLength", CDLKickingByLength());
ADD("CDLLadderBottom", CDLLadderBottom());
ADD("CDLLongLeggedDoji", CDLLongLeggedDoji());
ADD("CDLLongLine", CDLLongLine());
ADD("CDLMarubozu", CDLMarubozu());
ADD("CDLMatchingLow", CDLMatchingLow());
ADD("CDLMatHold", CDLMatHold(0.5));
ADD("CDLMorningDojiStar", CDLMorningDojiStar(0.3));
ADD("CDLMorningStar", CDLMorningStar(0.3));
ADD("CDLOnNeck", CDLOnNeck());
ADD("CDLPiercing", CDLPiercing());
ADD("CDLRickshawMan", CDLRickshawMan());
ADD("CDLRiseFall3Methods", CDLRiseFall3Methods());
ADD("CDLSeperatingLines", CDLSeperatingLines());
ADD("CDLShootingStar", CDLShootingStar());
ADD("CDLShortLine", CDLShortLine());
ADD("CDLSpinningTop", CDLSpinningTop());
ADD("CDLStalledPattern", CDLStalledPattern());
ADD("CDLStickSandwhich", CDLStickSandwhich());
ADD("CDLTakuri", CDLTakuri());
ADD("CDLTasukiGap", CDLTasukiGap());
ADD("CDLThrusting", CDLThrusting());
ADD("CDLTristar", CDLTristar());
ADD("CDLUnique3River", CDLUnique3River());
ADD("CDLUpsideGap2Crows", CDLUpsideGap2Crows());
ADD("CDLXSideGap3Methods", CDLXSideGap3Methods());
#undef ADD
return n; // 61
}
// ================= Markov storage & helpers =================
static int* MC_Count; // [MC_STATES*MC_STATES]
static int* MC_RowSum; // [MC_STATES]
static int MC_Prev = -1;
static int MC_Cur = 0;
static var MC_PBullNext = 0.5;
static var MC_Entropy = 0.0;
static string MC_Names[MC_NPAT];
#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))
int MC_stateFromCDL(var* cdl /*len=61*/, var thr)
{
int i, best=-1; var besta=0;
for(i=0;i<MC_NPAT;i++){
var a = abs(cdl[i]);
if(a>besta){ besta=a; best=i; }
}
if(best<0) return MC_NONE;
if(besta < thr) return MC_NONE;
int bull = (cdl[best] > 0);
return 1 + 2*best + bull; // 1..122
}
int MC_isBull(int s){ if(s<=0) return 0; return ((s-1)%2)==1; }
void MC_update(int sPrev,int sCur){ if(sPrev<0) return; MC_Count[MC_IDX(sPrev,sCur)]++; MC_RowSum[sPrev]++; }
var MC_prob(int s,int t){
var num = (var)MC_Count[MC_IDX(s,t)] + MC_LAPLACE;
var den = (var)MC_RowSum[s] + MC_LAPLACE*MC_STATES;
if(den<=0) return 1.0/MC_STATES;
return num/den;
}
var MC_nextBullishProb(int s){
if(s<0) return 0.5;
int t; var pBull=0, pTot=0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); pTot+=p; if(MC_isBull(t)) pBull+=p; }
if(pTot<=0) return 0.5;
return pBull/pTot;
}
var MC_rowEntropy01(int s){
if(s<0) return 1.0;
int t; var H=0, Z=0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); Z+=p; }
if(Z<=0) return 1.0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t)/Z; if(p>0) H += -p*log(p); }
var Hmax = log(MC_STATES-1);
if(Hmax<=0) return 0;
return H/Hmax;
}
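The smoothing behavior of MC_prob() is easiest to see in miniature. This is a toy portable-C version with add-one (Laplace) smoothing over all S states; unlike the script it does not exclude state 0, and the sizes and names are illustrative only. An empty row starts uniform and sharpens as transitions accumulate, while the row always sums to 1.

```c
#include <assert.h>
#include <math.h>

/* Toy Laplace-smoothed Markov row: P(a->b) = (count+1)/(rowsum+S).
   Simplified relative to MC_prob(): smooths over all S states. */
enum { S = 5 };
static int Cnt[S][S];
static int Row[S];

void mc_upd(int a, int b){ Cnt[a][b]++; Row[a]++; }

double mc_p(int a, int b){ return (Cnt[a][b] + 1.0) / (Row[a] + (double)S); }
```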
// ================= HARMONIC D-TREE ENGINE =================
// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){ if(u<-1) u=-1; if(u>1) u=1; var t=0.5*(u+1.0); return lo + t*(hi-lo); }
// ---- safety helpers ----
var safeNum(var x){ if(x!=x) return 0; if(x > 1e100) return 1e100; if(x < -1e100) return -1e100; return x; }
void sanitize(var* A,int n){ int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]); }
var sat100(var x){ return clamp(x,-100,100); }
// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap)
{
if(!dst || !src || cap <= 0) return;
int dl = strlen(dst);
int sl = strlen(src);
int room = cap - 1 - dl;
if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
int i; for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
dst[dl + i] = 0;
}
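For reference, the same bounded-append logic restated in portable C (a sketch for testing outside Zorro; behavior is intended to match the lite-C version above, with cap counting the terminator):

```c
#include <assert.h>
#include <string.h>

/* strlcat_safe() restated in portable C: append src to dst but
   never write past cap bytes, terminator included. */
void strlcat_safe_c(char* dst, const char* src, int cap)
{
    if(!dst || !src || cap <= 0) return;
    int dl = (int)strlen(dst), sl = (int)strlen(src);
    int room = cap - 1 - dl;                /* bytes left for payload */
    if(room <= 0){ dst[cap-1] = 0; return; }
    int i;
    for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
    dst[dl + i] = 0;                        /* always re-terminate */
}
```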
int countSubStr(string s, string sub){
if(!s || !sub) return 0;
int n=0; string p=s;
int sublen = strlen(sub);
if(sublen<=0) return 0;
while((p=strstr(p,sub))){ n++; p += sublen; }
return n;
}
// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
int h = 5381, c, i = 0;
if(!s) return h;
while((c = s[i++])) h = ((h<<5)+h) ^ c; // h*33 ^ c
return h & 0x7fffffff; // force non-negative
}
// ---- tree helpers ----
int validTreeIndex(int tid){ if(!G_TreeIdx) return 0; if(tid<0||tid>=G_TreeN) return 0; return (G_TreeIdx[tid]!=0); }
Node* treeAt(int tid){ if(validTreeIndex(tid)) return G_TreeIdx[tid]; return &G_DummyNode; }
int safeTreeIndexFromEq(int eqi){
int denom = ifelse(G_TreeN>0, G_TreeN, 1);
int tid = eqi;
if(tid < 0) tid = 0;
if(denom > 0) tid = tid % denom;
if(tid < 0) tid = 0;
return tid;
}
// ---- tree indexing ----
void pushTreeNode(Node* u){
if(G_TreeN >= G_TreeCap){
int newCap = G_TreeCap*2;
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
}
G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }
// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
if(!G_TreeIdx) return;
if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
int newCap = (G_TreeCap >> 1);
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
}
}
// ---- tree create/eval ----
Node* createNode(int depth)
{
Node* u = (Node*)malloc(sizeof(Node));
u->v = random();
u->r = 0.01 + 0.02*depth + random(0.005);
u->d = depth;
if(depth > 0){
u->n = 1 + (int)random(MAX_BRANCHES);
u->c = malloc(u->n * sizeof(void*));
int i; for(i=0;i<u->n;i++) ((Node**)u->c)[i] = createNode(depth - 1);
} else { u->n = 0; u->c = 0; }
return u;
}
var evaluateNode(Node* u)
{
if(!u) return 0;
var sum=0; int i; for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
var phase = sin(u->r * Bar + sum);
var weight = 1.0 / pow(u->d + 1, G_DTreeExp);
u->v = (1 - weight)*u->v + weight*phase;
return u->v;
}
int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u){ if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]); if(u->c) free(u->c); free(u); }
// =========== NETWORK STATE & COEFFICIENTS ===========
var* G_State; var* G_Prev; var* G_Vel;
int* G_Adj;
var* G_RP; var* G_Z;
int* G_Mode;
var* G_WSelf; var* G_WN1; var* G_WN2; var* G_WGlob1; var* G_WGlob2; var* G_WMom; var* G_WTree; var* G_WAdv;
var* A1x; var* A1lam; var* A1mean; var* A1E; var* A1P; var* A1i; var* A1c;
var* A2x; var* A2lam; var* A2mean; var* A2E; var* A2P; var* A2i; var* A2c;
var* G1mean; var* G1E; var* G2P; var* G2lam;
var* G_TreeTerm; int* G_TopEq; var* G_TopW; int* G_EqTreeId; var* TAlpha; var* TBeta;
var* G_Pred; var* G_AdvScore;
var* G_PropRaw; var* G_Prop;
// ===== Markov features exposed to DTREE =====
var G_MCF_PBull; // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State; // 0..122
// epoch/context & feedback
int G_Epoch = 0;
int G_CtxID = 0;
var G_FB_A = 0.7;
var G_FB_B = 0.3;
// ---------- predictability ----------
var nodePredictability(Node* t)
{
if(!t) return 0.5;
var disp=0; int n=t->n, i;
for(i=0;i<n;i++){ Node* c=((Node**)t->c)[i]; disp += abs(c->v - t->v); }
if(n>0) disp /= n;
var depthFac = 1.0/(1+t->d);
var rateBase = 0.01 + 0.02*t->d;
var rateFac = exp(-25.0*abs(t->r - rateBase));
var p = 0.5*(depthFac + rateFac);
p = 0.5*p + 0.5*(1.0 - disp); // blend in the dispersion penalty
if(p<0) p=0; if(p>1) p=1;
return p;
}
// importance for selective pruning
var nodeImportance(Node* u)
{
if(!u) return 0;
var amp = abs(u->v); if(amp>1) amp=1;
var p = nodePredictability(u);
var depthW = 1.0/(1.0 + u->d);
var imp = (0.6*p + 0.4*amp) * depthW;
return imp;
}
// ====== Elastic growth helpers ======
// create a leaf at depth d (no children)
Node* createLeafDepth(int d){
Node* u = (Node*)malloc(sizeof(Node));
u->v = random();
u->r = 0.01 + 0.02*d + random(0.005);
u->d = d;
u->n = 0;
u->c = 0;
return u;
}
// add up to addK new children to all nodes at frontierDepth
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK)
{
if(!u) return;
if(u->d == frontierDepth){
int want = addK;
if(want <= 0) return;
int oldN = u->n;
int newN = oldN + want;
Node** Cnew = (Node**)malloc(newN * sizeof(void*));
int i;
for(i=0;i<oldN;i++) Cnew[i] = ((Node**)u->c)[i];
for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
if(u->c) free(u->c);
u->c = Cnew; u->n = newN;
return;
}
int j; for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}
// keep top-K children by importance at targetDepth, drop the rest
void freeChildAt(Node* parent, int idx)
{
if(!parent || !parent->c) return;
Node** C = (Node**)parent->c;
freeTree(C[idx]);
int i;
for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
parent->n--;
if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK)
{
if(!u) return;
if(u->d == targetDepth-1 && u->n > 0){
int n = u->n, i, kept = 0;
int mark[16]; for(i=0;i<16;i++) mark[i]=0;
int iter;
for(iter=0; iter<keepK && iter<n; iter++){
int bestI = -1; var bestImp = -1;
for(i=0;i<n;i++){
if(i<16 && mark[i]==1) continue;
var imp = nodeImportance(((Node**)u->c)[i]);
if(imp > bestImp){ bestImp = imp; bestI = i; }
}
if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
}
for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
return;
}
int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}
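The selection pass inside pruneSelectiveAtDepth() is a repeated arg-max: mark the highest-scoring unmarked child until keepK are chosen, then free everything unmarked. A standalone portable-C sketch of just that pass (scores stand in for nodeImportance(); names are illustrative):

```c
#include <assert.h>

/* Keep-top-K by score: mark[i] = 1 for the keepK best entries.
   Returns the number actually kept (min of keepK and n). */
int keep_top_k(const double* score, int n, int keepK, int* mark)
{
    int i, iter, kept = 0;
    for(i = 0; i < n; i++) mark[i] = 0;
    for(iter = 0; iter < keepK && iter < n; iter++){
        int bestI = -1; double bestImp = -1;
        for(i = 0; i < n; i++){
            if(mark[i]) continue;
            if(score[i] > bestImp){ bestImp = score[i]; bestI = i; }
        }
        if(bestI >= 0){ mark[bestI] = 1; kept++; }
    }
    return kept;
}
```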
void reindexTreeAndMap()
{
G_TreeN = 0;
indexTreeDFS(Root);
if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
maybeShrinkTreeIdx(); // Fix #3
}
// ====== Accuracy sentinel & elastic-depth controller ======
void acc_update(var x /*lambda*/, var y /*gamma*/)
{
var a = 0.01; // ~100-bar half-life
ACC_mx = (1-a)*ACC_mx + a*x;
ACC_my = (1-a)*ACC_my + a*y;
ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
var vx = ACC_mx2 - ACC_mx*ACC_mx;
var vy = ACC_my2 - ACC_my*ACC_my;
var cv = ACC_mxy - ACC_mx*ACC_my;
if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy); else G_AccCorr = 0;
if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}
// utility to maximize: accuracy minus gentle memory penalty
var util_now()
{
int mb = mem_mb_est();
var mem_pen = 0;
if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB; else mem_pen = 0;
return G_AccCorr - 0.5*mem_pen;
}
// apply a +1 "grow one level" action if there is safe memory headroom
int apply_grow_step()
{
int mb = mem_mb_est();
if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
int newFrontier = G_RT_TreeMaxDepth;
growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
G_RT_TreeMaxDepth++;
reindexTreeAndMap();
printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
return 1;
}
// revert last growth (drop newly-added frontier children)
void revert_last_grow()
{
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}
// main elastic-depth controller; call once per bar (after acc_update)
void edc_runtime()
{
int mb = mem_mb_est();
if(G_TunePending){
if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
G_UtilAfter = util_now();
var eps = 0.01;
if(G_UtilAfter + eps < G_UtilBefore){
revert_last_grow();
} else {
printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter);
}
G_TunePending = 0; G_TuneAction = 0;
}
return;
}
if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
G_UtilBefore = util_now();
if(apply_grow_step()){
G_TunePending = 1; G_TuneAction = 1; G_TuneStartBar = Bar;
}
}
}
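The two decisions driving this controller can be stated on their own (a portable-C sketch; function names are illustrative): utility() mirrors util_now(), accuracy correlation minus a gentle penalty once the memory estimate exceeds budget, and keep_growth() mirrors the delayed trial check, where a growth step survives unless utility fell by more than eps.

```c
#include <assert.h>
#include <math.h>

/* util_now() as a pure function: penalize only memory overage. */
double utility(double acc_corr, int mb, int budget_mb)
{
    double pen = 0;
    if(mb > budget_mb) pen = (double)(mb - budget_mb)/budget_mb;
    return acc_corr - 0.5*pen;
}

/* The rollback rule: keep growth unless utility dropped > eps. */
int keep_growth(double u_before, double u_after, double eps)
{
    return !(u_after + eps < u_before);
}
```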
// filenames (legacy; still used if LOG_EQ_TO_ONE_FILE==0)
void buildEqFileName(int idx, char* outName /*>=64*/)
{
strcpy(outName, "Log\\Alpha12_eq_");
string idxs = strf("%03i", idx);
strcat(outName, idxs);
strcat(outName, ".csv");
}
// ===== consolidated EQ log =====
void writeEqHeaderOnce()
{
static int done=0; if(done) return; done=1;
file_append("Log\\Alpha12_eq_all.csv",
"Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}
void appendEqMetaLine(
int bar, int epoch, int ctx, int i, int n1, int n2, int tid, int depth, var rate,
var pred, var adv, var prop, int mode, var wadv, var wtree,
var pbull, var ent, int mcstate, string expr)
{
if(i >= LOG_EQ_SAMPLE) return;
// lite-C ifelse() evaluates both branches, so guard strlen(0) explicitly
int eLen = 0;
int eHash = djb2_hash("");
int cT = 0; int cS = 0; int cC = 0;
if(expr != 0){
eLen = (int)strlen(expr);
eHash = djb2_hash(expr);
cT = countSubStr(expr,"tanh(");
cS = countSubStr(expr,"sin(");
cC = countSubStr(expr,"cos(");
}
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,%.4f,%.4f,%.6f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth, rate,
pred, adv, prop, mode, wadv, wtree,
pbull, ent, mcstate, eLen, eHash, cT, cS, cC));
}
// --------- allocation ----------
void randomizeRP()
{
int K=G_K,N=G_N,k,j;
for(k=0;k<K;k++)
for(j=0;j<N;j++)
G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}
void computeProjection(){ int K=G_K,N=G_N,k,j; for(k=0;k<K;k++){ var acc=0; for(j=0;j<N;j++) acc+=G_RP[k*N+j]*(G_State[j]*G_State[j]); G_Z[k]=acc; }}
void allocateNet()
{
int N=G_N, D=G_D, K=G_K;
G_State=(var*)malloc(N*sizeof(var)); G_Prev=(var*)malloc(N*sizeof(var)); G_Vel=(var*)malloc(N*sizeof(var));
G_Adj=(int*)malloc(N*D*sizeof(int));
G_RP=(var*)malloc(K*N*sizeof(var)); G_Z=(var*)malloc(K*sizeof(var));
G_Mode=(int*)malloc(N*sizeof(int));
G_WSelf=(var*)malloc(N*sizeof(var)); G_WN1=(var*)malloc(N*sizeof(var)); G_WN2=(var*)malloc(N*sizeof(var));
G_WGlob1=(var*)malloc(N*sizeof(var)); G_WGlob2=(var*)malloc(N*sizeof(var));
G_WMom=(var*)malloc(N*sizeof(var)); G_WTree=(var*)malloc(N*sizeof(var)); G_WAdv=(var*)malloc(N*sizeof(var));
A1x=(var*)malloc(N*sizeof(var)); A1lam=(var*)malloc(N*sizeof(var)); A1mean=(var*)malloc(N*sizeof(var));
A1E=(var*)malloc(N*sizeof(var)); A1P=(var*)malloc(N*sizeof(var)); A1i=(var*)malloc(N*sizeof(var)); A1c=(var*)malloc(N*sizeof(var));
A2x=(var*)malloc(N*sizeof(var)); A2lam=(var*)malloc(N*sizeof(var)); A2mean=(var*)malloc(N*sizeof(var));
A2E=(var*)malloc(N*sizeof(var)); A2P=(var*)malloc(N*sizeof(var)); A2i=(var*)malloc(N*sizeof(var)); A2c=(var*)malloc(N*sizeof(var));
G1mean=(var*)malloc(N*sizeof(var)); G1E=(var*)malloc(N*sizeof(var));
G2P=(var*)malloc(N*sizeof(var)); G2lam=(var*)malloc(N*sizeof(var));
G_TreeTerm=(var*)malloc(N*sizeof(var)); G_TopEq=(int*)malloc(N*sizeof(int)); G_TopW=(var*)malloc(N*sizeof(var));
TAlpha=(var*)malloc(N*sizeof(var)); TBeta=(var*)malloc(N*sizeof(var));
G_Pred=(var*)malloc(N*sizeof(var)); G_AdvScore=(var*)malloc(N*sizeof(var));
G_PropRaw=(var*)malloc(N*sizeof(var)); G_Prop=(var*)malloc(N*sizeof(var));
if(LOG_EXPR_TEXT){
G_Sym=(string*)malloc(N*sizeof(char*));
} else {
G_Sym=0;
}
G_TreeCap=128; // was 512 (Fix #3: start smaller; still grows if needed)
G_TreeIdx=(Node**)malloc(G_TreeCap*sizeof(Node*)); G_TreeN=0;
G_EqTreeId=(int*)malloc(N*sizeof(int));
// Pre-init adjacency to safe value
int tInit; for(tInit=0; tInit<N*D; tInit++) G_Adj[tInit] = -1;
int i;
for(i=0;i<N;i++){
G_State[i]=random();
G_Prev[i]=G_State[i]; G_Vel[i]=0;
G_Mode[i]=0;
G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2; G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;
A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
G1mean[i]=1.0; G1E[i]=0.001; G2P[i]=0.6; G2lam[i]=0.3;
TAlpha[i]=0.8; TBeta[i]=25.0;
G_TreeTerm[i]=0; G_TopEq[i]=-1; G_TopW[i]=0;
G_Pred[i]=0.5; G_AdvScore[i]=0;
G_PropRaw[i]=1; G_Prop[i]=1.0/G_N;
if(LOG_EXPR_TEXT){
G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
if(G_Sym[i]) strcpy(G_Sym[i], "");
}
}
}
void freeNet()
{
int i;
if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_Vel)free(G_Vel);
if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z);
if(G_Mode)free(G_Mode); if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2); if(G_WMom)free(G_WMom);
if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);
if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);
if(G_TreeTerm)free(G_TreeTerm); if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
if(G_Pred)free(G_Pred); if(G_AdvScore)free(G_AdvScore);
if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);
if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
if(G_TreeIdx)free(G_TreeIdx); if(G_EqTreeId)free(G_EqTreeId);
}
// --------- DTREE feature builders ----------
var nrm_s(var x){ return sat100(100.0*tanh(x)); }
var nrm_scl(var x,var s){ return sat100(100.0*tanh(s*x)); }
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var* S /*ADV_EQ_NF*/)
{
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
S[0] = nrm_s(G_State[i]);
S[1] = nrm_s(mean);
S[2] = nrm_scl(power,0.05);
S[3] = nrm_scl(energy,0.01);
S[4] = nrm_s(lambda);
S[5] = sat100(200.0*(G_Pred[i]-0.5));
S[6] = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
S[7] = sat100(1000.0*t->r);
S[8] = nrm_s(G_TreeTerm[i]);
S[9] = sat100(200.0*((var)G_Mode[i]/3.0)-100.0);
S[10] = sat100(200.0*(G_MCF_PBull-0.5));
S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
sanitize(S,ADV_EQ_NF);
}
// (Kept for completeness; not used by DTREE anymore)
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* ti = treeAt(tid_i);
Node* tj = treeAt(tid_j);
P[0]=nrm_s(G_State[i]); P[1]=nrm_s(G_State[j]);
P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
P[4]=sat100(1000.0*ti->r); P[5]=sat100(1000.0*tj->r);
P[6]=sat100(abs(P[2]-P[3]));
P[7]=sat100(abs(P[4]-P[5]));
P[8]=sat100(100.0*(G_Pred[i]+G_Pred[j]-1.0));
P[9]=nrm_s(lambda); P[10]=nrm_s(mean); P[11]=nrm_scl(power,0.05);
sanitize(P,ADV_PAIR_NF);
}
// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
int N = G_N, D = G_D;
if(!G_Adj || N <= 1 || D <= 0) return 0;
if(d < 0) d = 0;
if(d >= D) d = d % D;
int v = G_Adj[i*D + d];
if(v < 0 || v >= N || v == i){
v = (i + 1) % N;
}
return v;
}
void sanitizeAdjacency(){
if(!G_Adj) return;
int N = G_N, D = G_D;
int i, d;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
int *p = &G_Adj[i*D + d];
if(*p < 0 || *p >= N || *p == i){
int r = (int)random(N);
if(r == i) r = (r+1) % N;
*p = r;
}
}
if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
int r2 = (G_Adj[i*D+1] + 1) % N;
if(r2 == i) r2 = (r2+1) % N;
G_Adj[i*D+1] = r2;
}
}
}
// --------- advisor helpers (NEW) ----------
// cache one advisor value per equation per bar
var adviseSeed(int i, var lambda, var mean, var energy, var power)
{
static int seedBar = -1;
static int haveSeed[NET_EQNS];
static var seedVal[NET_EQNS];
if(seedBar != Bar){
int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
seedBar = Bar;
}
if(i < 0) i = 0;
if(i >= NET_EQNS) i = i % NET_EQNS;
// Fix #2: obey advisor budget/rotation for seed too
if(!allowAdvise(i)) return 0;
if(!haveSeed[i]){
seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
haveSeed[i] = 1;
}
return seedVal[i];
}
// simple deterministic mixer for diversity in [-1..1] without extra advise calls
var mix01(var a, int salt){
var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
return tanh(0.75*z);
}
// --------- advise wrappers (single-equation only) ----------
// Use estimator to halt when tight; respect rotation budget.
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
if(!allowAdvise(i)) return 0;
var S[ADV_EQ_NF];
buildEqFeatures(i,lambda,mean,energy,power,S);
if(is(INITRUN)) return 0;
// Fix #2: stop early based on our estimator, not memory(0)
if(mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB) return 0;
var obj = 0;
if(Train) // memory cannot be tight here, past the early return above
obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));
int objI = (int)obj;
var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
return a/100.;
}
// --------- advisePair disabled: never call DTREE here ----------
var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
return 0;
}
// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power)
{
int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
Node *ni = treeAt(ti), *nj = treeAt(tj);
var simD = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
var simR = 1.0 / (1.0 + 50.0*abs(ni->r - nj->r));
var pred = 0.5*(G_Pred[i] + G_Pred[j]);
var score = 0.5*pred + 0.3*simD + 0.2*simR;
return 2.0*score - 1.0;
}
// --------- adjacency selection (heuristic only) ----------
// safer clash check using prev>=0
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
int N=G_N, D=G_D, i, d, c, best, cand;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<CAND_NEIGH;c++){
cand = (int)random(N);
if(cand==i) continue;
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){ do{ best = (int)random(N);} while(best==i); }
G_Adj[i*D + d] = best;
}
}
}
// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
var seed = adviseSeed(i,lambda,mean,energy,power);
G_Mode[i] = (int)(abs(1000*seed)) & 3;
// derive weights & params deterministically from the single seed
G_WSelf[i] = mapA(mix01(seed, 11), 0.15, 0.85);
G_WN1[i] = mapA(mix01(seed, 12), 0.05, 0.35);
G_WN2[i] = mapA(mix01(seed, 13), 0.05, 0.35);
G_WGlob1[i] = mapA(mix01(seed, 14), 0.05, 0.30);
G_WGlob2[i] = mapA(mix01(seed, 15), 0.05, 0.30);
G_WMom[i] = mapA(mix01(seed, 16), 0.02, 0.15);
G_WTree[i] = mapA(mix01(seed, 17), 0.05, 0.35);
G_WAdv[i] = mapA(mix01(seed, 18), 0.05, 0.35);
A1x[i] = randsign()*mapA(mix01(seed, 21), 0.6, 1.2);
A1lam[i] = randsign()*mapA(mix01(seed, 22), 0.05,0.35);
A1mean[i]= mapA(mix01(seed, 23),-0.30,0.30);
A1E[i] = mapA(mix01(seed, 24),-0.0015,0.0015);
A1P[i] = mapA(mix01(seed, 25),-0.30,0.30);
A1i[i] = mapA(mix01(seed, 26),-0.02,0.02);
A1c[i] = mapA(mix01(seed, 27),-0.20,0.20);
A2x[i] = randsign()*mapA(mix01(seed, 31), 0.6, 1.2);
A2lam[i] = randsign()*mapA(mix01(seed, 32), 0.05,0.35);
A2mean[i]= mapA(mix01(seed, 33),-0.30,0.30);
A2E[i] = mapA(mix01(seed, 34),-0.0015,0.0015);
A2P[i] = mapA(mix01(seed, 35),-0.30,0.30);
A2i[i] = mapA(mix01(seed, 36),-0.02,0.02);
A2c[i] = mapA(mix01(seed, 37),-0.20,0.20);
G1mean[i] = mapA(mix01(seed, 41), 0.4, 1.6);
G1E[i] = mapA(mix01(seed, 42),-0.004,0.004);
G2P[i] = mapA(mix01(seed, 43), 0.1, 1.2);
G2lam[i] = mapA(mix01(seed, 44), 0.05, 0.7);
TAlpha[i] = mapA(mix01(seed, 51), 0.3, 1.5);
TBeta[i] = mapA(mix01(seed, 52), 6.0, 50.0);
G_PropRaw[i] = 0.01 + 0.99*(0.5*(seed+1.0));
}
void normalizeProportions()
{
int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = 1.0/N; return; }
for(i=0;i<N;i++) G_Prop[i] = G_PropRaw[i]/s;
}
var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
int N=G_N,j;
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* ti=treeAt(tid_i); int di=ti->d; var ri=ti->r;
var alpha=TAlpha[i], beta=TBeta[i];
var sumw=0, acc=0, bestW=-1; int bestJ=-1;
for(j=0;j<N;j++){
if(j==i) continue;
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* tj=treeAt(tid_j); int dj=tj->d; var rj=tj->r;
var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
var predBoost = 0.5 + 0.5*(G_Pred[i]*G_Pred[j]);
var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
w *= predBoost * propBoost;
var pairAdv = scorePairSafe(i,j,0,0,0,0);
var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
w *= pairBoost;
sumw += w; acc += w*G_State[j];
if(w>bestW){bestW=w; bestJ=j;}
}
if(outTopEq) *outTopEq = bestJ;
// ifelse() is a function, so bestW/sumw would be evaluated even when sumw==0;
// branch explicitly to avoid the division by zero
if(outTopW){ if(sumw>0) *outTopW = bestW/sumw; else *outTopW = 0; }
if(sumw>0) return acc/sumw;
return 0;
}
// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
if(LOG_EXPR_TEXT){
string s = G_Sym[i]; s[0]=0;
string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
A1x[i], n1, A1lam[i], A1mean[i], A1E[i], A1P[i], A1i[i], A1c[i]);
string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
A2x[i], n2, A2lam[i], A2mean[i], A2E[i], A2P[i], A2i[i], A2c[i]);
strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*x[i] + ", G_WSelf[i]), EXPR_MAXLEN);
if(G_Mode[i]==1){
strlcat_safe(s, strf("%.3f*tanh%s + ", G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin%s + ", G_WN2[i], a2), EXPR_MAXLEN);
} else if(G_Mode[i]==2){
strlcat_safe(s, strf("%.3f*cos%s + ", G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*tanh%s + ", G_WN2[i], a2), EXPR_MAXLEN);
} else {
strlcat_safe(s, strf("%.3f*sin%s + ", G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*cos%s + ", G_WN2[i], a2), EXPR_MAXLEN);
}
strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", G_WGlob1[i], G1mean[i], G1E[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ", G_WGlob2[i], G2P[i], G2lam[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ", G_WMom[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("Prop[i]=%.4f; ", G_Prop[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DT(i) + ", G_WTree[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DTREE(i)", G_WAdv[i]), EXPR_MAXLEN);
}
}
// --------- one-time rewire init ----------
void rewireInit()
{
randomizeRP(); computeProjection();
G_TreeN=0; indexTreeDFS(Root);
if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
}
// probes & unsigned context hash
void rewireEpoch(var lambda, var mean, var energy, var power)
{
int i;
if(ENABLE_WATCH) watch("?A"); // before predictability
for(i=0;i<G_N;i++){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
G_Pred[i] = nodePredictability(t);
}
if(ENABLE_WATCH) watch("?B"); // after predictability, before adjacency
rewireAdjacency_DTREE(lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?C"); // after adjacency, before synthesize
sanitizeAdjacency();
for(i=0;i<G_N;i++)
synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?D"); // before normalize / ctx hash
normalizeProportions();
// Unsigned context hash of current adjacency (+ epoch) for logging
{
int D = G_D;
unsigned int h = 2166136261u;
int total = G_N * D;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
// Optional expression text (only when LOG_EXPR_TEXT==1)
for(i=0;i<G_N;i++){
int n1 = adjSafe(i,0);
int n2 = (int)ifelse(G_D >= 2, adjSafe(i,1), n1);
if(LOG_EXPR_TEXT) buildSymbolicExpr(i,n1,n2);
}
}
var projectNet()
{
int N=G_N,i; var sum=0,sumsq=0,cross=0;
for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
var mean=sum/N, corr=cross/(N-1);
return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}
void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
int N = G_N, D = G_D, i;
var sum = 0, sumsq = 0;
for(i = 0; i < N; i++){
sum += G_State[i];
sumsq += G_State[i]*G_State[i];
}
var mean = sum / N;
var energy = sumsq;
var power = sumsq / N;
for(i = 0; i < N; i++){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
G_Pred[i] = nodePredictability(t);
}
for(i = 0; i < N; i++){
int n1 = adjSafe(i,0);
int n2 = (int)ifelse(D >= 2, adjSafe(i,1), n1);
var xi = G_State[i];
var xn1 = G_State[n1];
var xn2 = G_State[n2];
var mom = xi - G_Prev[i];
int topEq = -1;
var topW = 0;
var dt = dtreeTerm(i, &topEq, &topW);
G_TreeTerm[i] = dt;
G_TopEq[i] = topEq;
G_TopW[i] = topW;
// Fix #2: call advisor only when allowed
var adv = 0;
if(allowAdvise(i))
adv = adviseEq(i, driver, mean, energy, power);
G_AdvScore[i] = adv;
var arg1 = A1x[i]*xn1 + A1lam[i]*driver + A1mean[i]*mean + A1E[i]*energy + A1P[i]*power + A1i[i]*i + A1c[i];
var arg2 = A2x[i]*xn2 + A2lam[i]*driver + A2mean[i]*mean + A2E[i]*energy + A2P[i]*power + A2i[i]*i + A2c[i];
var nl1, nl2;
if(G_Mode[i] == 0){ nl1 = sin(arg1); nl2 = cos(arg2); }
else if(G_Mode[i] == 1){ nl1 = tanh(arg1); nl2 = sin(arg2); }
else if(G_Mode[i] == 2){ nl1 = cos(arg1); nl2 = tanh(arg2); }
else { nl1 = sin(arg1); nl2 = cos(arg2); }
var glob1 = tanh(G1mean[i]*mean + G1E[i]*energy);
var glob2 = sin (G2P[i]*power + G2lam[i]*driver);
var xNew =
G_WSelf[i]*xi +
G_WN1[i]*nl1 +
G_WN2[i]*nl2 +
G_WGlob1[i]*glob1 +
G_WGlob2[i]*glob2 +
G_WMom[i]*mom +
G_WTree[i]*dt +
G_WAdv[i]*adv;
G_Prev[i] = xi;
G_Vel[i] = xNew - xi;
G_State[i] = clamp(xNew, -10, 10);
if(writeMeta && (G_Epoch % META_EVERY == 0) && !G_LogsOff){
int tid2 = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t2 = treeAt(tid2);
int nn1 = adjSafe(i,0);
int nn2 = (int)ifelse(G_D >= 2, adjSafe(i,1), nn1);
if(LOG_EQ_TO_ONE_FILE){
string expr = "";
if(LOG_EXPR_TEXT) expr = G_Sym[i];
appendEqMetaLine(
Bar, G_Epoch, G_CtxID, i, nn1, nn2, tid2, t2->d, t2->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr
);
} else {
char fname[64];
buildEqFileName(i, fname);
string expr2 = "";
if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
file_append(fname,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid2, t2->d, t2->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
}
}
}
if(outMean) *outMean = mean;
if(outEnergy) *outEnergy = energy;
if(outPower) *outPower = power;
}
// ----------------- MAIN -----------------
function run()
{
static int initialized = 0;
static var lambda;
static int fileInit = 0;
BarPeriod = BAR_PERIOD;
if(LookBack < NWIN) LookBack = NWIN;
if(Train) Hedge = 2;
// Plots are opt-in via ENABLE_PLOTS
set(RULES|LEAN);
if(ENABLE_PLOTS) set(PLOTNOW);
asset(ASSET_SYMBOL);
if(is(INITRUN) && !initialized){
// init dummy node
G_DummyNode.v = 0;
G_DummyNode.r = 0;
G_DummyNode.c = 0;
G_DummyNode.n = 0;
G_DummyNode.d = 0;
// allocate Markov matrices (zeroed)
MC_Count = (int*)malloc(MC_STATES*MC_STATES*sizeof(int));
MC_RowSum = (int*)malloc(MC_STATES*sizeof(int));
int k;
for(k=0;k<MC_STATES*MC_STATES;k++) MC_Count[k]=0;
for(k=0;k<MC_STATES;k++) MC_RowSum[k]=0;
// capture pattern names (optional)
var tmp[MC_NPAT];
buildCDL_TA61(tmp, MC_Names);
// build tree + network
Root = createNode(MAX_DEPTH);
allocateNet();
// engine params
G_DTreeExp = 1.10 + random(0.50); // [1.10..1.60)
G_FB_A = 0.60 + random(0.25); // [0.60..0.85)
G_FB_B = 1.0 - G_FB_A;
randomizeRP();
computeProjection();
rewireInit();
G_Epoch = 0;
rewireEpoch(0,0,0,0);
// Header setup (consolidated vs legacy)
if(LOG_EQ_TO_ONE_FILE){
writeEqHeaderOnce();
} else {
char fname[64];
int i2;
for(i2=0;i2<NET_EQNS;i2++){
buildEqFileName(i2,fname);
file_append(fname,
"Bar,lambda,gamma,i,State,n1,n2,mean,energy,power,Vel,Mode,WAdv,WSelf,WN1,WN2,WGlob1,WGlob2,WMom,WTree,Pred,Adv,Prop,TreeTerm,TopEq,TopW,TreeId,Depth,Rate,PBull,Entropy,MCState\n");
}
}
// Markov CSV header
if(!fileInit){
file_append("Log\\Alpha12_markov.csv","Bar,State,PBullNext,Entropy,RowSum\n");
fileInit=1;
}
// initial META dump (consolidated or legacy)
int i;
for(i=0;i<G_N;i++){
int n1 = adjSafe(i,0);
int n2 = (int)ifelse(G_D >= 2, adjSafe(i,1), n1);
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
if(LOG_EQ_TO_ONE_FILE){
string expr = "";
if(LOG_EXPR_TEXT) expr = G_Sym[i];
appendEqMetaLine(
Bar, G_Epoch, G_CtxID, i, n1, n2, tid, t->d, t->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr
);
} else {
char fname2[64];
buildEqFileName(i,fname2);
string expr2 = "";
if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
file_append(fname2,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
}
}
initialized=1;
printf("\nRoot nodes: %i | Net equations: %i (degree=%i, kproj=%i)",
countNodes(Root), G_N, G_D, G_K);
}
// Fix #4: earlier zero-cost shedding when approaching cap
if(mem_mb_est() >= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_ShedStage == 0)
shed_zero_cost_once();
// ==== Runtime memory / depth manager (acts only when near the cap)
depth_manager_runtime();
// ====== Per bar: Candles -> Markov
static var CDL[MC_NPAT];
buildCDL_TA61(CDL,0);
MC_Cur = MC_stateFromCDL(CDL, MC_ACT);
if(Bar > LookBack) MC_update(MC_Prev, MC_Cur);
MC_Prev = MC_Cur;
MC_PBullNext = MC_nextBullishProb(MC_Cur);
MC_Entropy = MC_rowEntropy01(MC_Cur);
// expose Markov features
G_MCF_PBull = MC_PBullNext;
G_MCF_Entropy = MC_Entropy;
G_MCF_State = (var)MC_Cur;
// ====== Tree driver lambda
lambda = evaluateNode(Root);
// Rewire epoch?
{
int doRewire = ((Bar % REWIRE_EVERY) == 0);
if(doRewire){
G_Epoch++;
int ii;
var sum=0;
for(ii=0;ii<G_N;ii++) sum += G_State[ii];
var mean = sum/G_N;
var energy=0;
for(ii=0;ii<G_N;ii++) energy += G_State[ii]*G_State[ii];
var power = energy/G_N;
rewireEpoch(lambda,mean,energy,power);
}
// Update net this bar (write META only if rewired and not shedding logs)
var meanB, energyB, powerB;
updateNet(lambda, &meanB, &energyB, &powerB, doRewire);
// Feedback blend
var gamma = projectNet();
lambda = G_FB_A*lambda + G_FB_B*gamma;
// --- Accuracy sentinel update & elastic depth controller ---
acc_update(lambda, gamma);
edc_runtime();
// Plot/log gating
int doPlot = (ENABLE_PLOTS && !G_ChartsOff);
int doLog = ifelse(G_LogsOff, ((Bar % (LOG_EVERY*4)) == 0), ((Bar % LOG_EVERY) == 0));
// Plots
if(doPlot){
plot("lambda", lambda, LINE, 0);
plot("gamma", gamma, LINE, 0);
plot("P_win", powerB, LINE, 0);
plot("PBullNext", MC_PBullNext, LINE, 0);
plot("MC_Entropy", MC_Entropy, LINE, 0);
plot("MemMB", memory(0)/(1024.*1024.), LINE, 0);
plot("Allocs", (var)memory(2), LINE, 0);
}
// Markov CSV log (decimated; further decimated when shedding)
if(doLog){
file_append("Log\\Alpha12_markov.csv",
strf("%i,%i,%.6f,%.6f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));
}
// ====== Entries (Markov-gated) ======
if( MC_PBullNext > PBULL_LONG_TH && lambda > 0.7 ) enterLong();
if( MC_PBullNext < PBULL_SHORT_TH && lambda < -0.7 ) enterShort();
}
}
// Clean up memory
function cleanup()
{
if(Root) freeTree(Root);
if(MC_Count) free(MC_Count);
if(MC_RowSum) free(MC_RowSum);
freeNet();
}
Last edited by TipmyPip; 09/06/25 01:08.
Regime-Responsive Graph Rewiring of Influences
[Re: TipmyPip]
#488877
09/06/25 19:35
Joined: Sep 2017
Posts: 164
TipmyPip
OP Member
Regime-Responsive Graph Rewiring of Influences

A. A small market lexicon
The pattern alphabet now self-calibrates. It keeps a steady share of “active” moments even as volatility changes, and it lets early uncertainty be smoothed more generously while sharpening as evidence accumulates. The two dials—lean and clarity—stay comparable across regimes, so permission means the same thing in quiet and in storm.

B. A soft landscape of influences
The continuous field underneath reallocates attention as conditions evolve. Its effective dimensionality breathes: higher when structure is clean, lower when noise rises. Multiple drivers are blended with a bias toward whichever one is currently more predictive. Effort is budgeted—more attention when order emerges, less when the tape is muddled or resources are tight. Signals remain bounded; only the emphasis moves.

C. Occasional reshaping of who listens to whom
Connectivity refresh widens its search when the environment is organized and narrows it when it’s chaotic. The refresh can also trigger early after a utility dip, helping the structure realign quickly after shocks without constant churn.

D. Capacity that breathes, with guardrails
Form adjusts to usefulness. When added detail stops paying or resources tighten, it trims the deepest, least helpful nuance; when there’s headroom and benefit, it adds a thin layer. Changes are small, tested, and reversible. Depth emphasis also adapts, shifting weight between shallow and deep context as regimes change.

E. Permission meets timing and size
Action still requires both a clear lean and sufficient clarity, plus harmony from the broader landscape. Because the dictionary stays rate-stable and sharpens with evidence, and because drivers are blended by current informativeness, timing improves around transitions. Position size tracks agreement strength; ambiguity defaults to patience.

F. A brief, human-readable diary
The ledger stays compact and comparable across regimes: current archetype, the two dials, and a terse sketch of how influences combined. It aims for oversight-grade clarity without sacrificing speed.

G. What tends to emerge
In ordered tape: broader search, richer projection, more active guidance, and a tilt toward deeper context. In choppy tape: narrower search, leaner projection, tighter guidance, and a tilt toward shallower, more robust cues. The posture glides between modes via small, distributed adjustments.

H. Risk doctrine as a controlling atmosphere
Exposure respects caps at all times. When noise rises or resources get tight, the system automatically de-emphasizes fine detail, focuses on the strongest agreements, and lets activity drift toward neutral rather than forcing trades—keeping drawdown sensitivity in check.

I. Memory, recency, and drift
Assessments use decaying memory so recent tape matters more while stale evidence fades. Permission and landscape both learn continuously, producing controlled drift—no whiplash, no stickiness.

J. Separation of roles
The lexicon offers a compact, discrete view of context and stable permission. The landscape provides a continuous, multi-horizon view that shapes timing and size. The reshaper keeps connections healthy, widening or narrowing search as needed. The capacity governor ensures useful complexity under constraints. Together they reduce overreaction to noise while preserving responsiveness to structural change.

K. Practical trading implications
Expect fewer, stronger actions when clarity is low; more decisive engagement when agreement is broad. Expect size to follow consensus strength, not single-indicator extremes. Expect quicker realignment after shocks, but without perpetual reshuffling. Expect behavior to remain stable across regime shifts, with measured changes rather than leaps.

L. Philosophy in one line
Trade when the story is both clear and corroborated; keep the model light, the adjustments small, and the diary open.
// ======================================================================
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// with runtime memory shaping, selective depth pruning,
// and elastic accuracy-aware depth growth + 8 adaptive improvements.
// ======================================================================
// ================= USER CONFIG =================
#define ASSET_SYMBOL "EUR/USD"
#define BAR_PERIOD 60
#define MC_ACT 0.30 // initial threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH 0.60 // Markov gate for long
#define PBULL_SHORT_TH 0.40 // Markov gate for short
// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS 0 // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH 0 // 0 = disable watch() probes; 1 = enable
// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES 3
#define MAX_DEPTH 4
#define NWIN 256
#define NET_EQNS 100
#define DEGREE 4
#define KPROJ 16
#define REWIRE_EVERY 127
#define CAND_NEIGH 8
// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE 1 // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT 0 // 0: omit full expression (store signature only); 1: include text
#define META_EVERY 4 // write META every N rewires
#define LOG_EQ_SAMPLE NET_EQNS // limit number of equations logged
#define EXPR_MAXLEN 512 // cap expression string
// decimate Markov log cadence
#define LOG_EVERY 16
// ---- DTREE feature sizes (extended for Markov features) ----
#define ADV_EQ_NF 12 // per-equation features
#define ADV_PAIR_NF 12 // per-pair features (kept for completeness; DTREE pair disabled)
// ================= Candles -> 122-state Markov =================
#define MC_NPAT 61
#define MC_STATES 123 // 1 + 2*MC_NPAT
#define MC_NONE 0
#define MC_LAPLACE 1.0 // kept for reference; runtime uses G_MC_Alpha
// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB 50
#define MEM_HEADROOM_MB 5
#define DEPTH_STEP_BARS 16
#define KEEP_CHILDREN_HI 2
#define KEEP_CHILDREN_LO 1
#define RUNTIME_MIN_DEPTH 2
int G_ShedStage = 0; // 0..2
int G_LastDepthActBar = -999999;
int G_ChartsOff = 0; // gates plot()
int G_LogsOff = 0; // gates file_append cadence
int G_SymFreed = 0; // expression buffers freed
int G_RT_TreeMaxDepth = MAX_DEPTH;
// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var G_AccCorr = 0; // [-1..1]
var G_AccBase = 0; // first seen sentinel
int G_HaveBase = 0;
// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS 64 // start a growth “trial” this often (when memory allows)
#define TUNE_DELAY_BARS 64 // evaluate the trial after this many bars
var G_UtilBefore = 0, G_UtilAfter = 0;
int G_TunePending = 0;
int G_TuneStartBar = 0;
int G_TuneAction = 0; // +1 grow trial, 0 none
// ======================================================================
// (FIX) Move the type and globals used by mem_bytes_est() up here
// ======================================================================
// HARMONIC D-TREE type (we define it early so globals below compile fine)
typedef struct Node { var v; var r; void* c; int n; int d; } Node;
// Minimal globals needed before mem_bytes_est()
Node* Root = 0;
Node** G_TreeIdx = 0;
int G_TreeN = 0;
int G_TreeCap = 0;
var G_DTreeExp = 0;
Node G_DummyNode; // defined early so treeAt() can return &G_DummyNode
// Network sizing globals (used by mem_bytes_est)
int G_N = NET_EQNS;
int G_D = DEGREE;
int G_K = KPROJ;
// Optional expression buffer pointer (referenced by mem_bytes_est)
string* G_Sym = 0;
// Forward decls that reference Node
var nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();
// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);
// ----------------------------------------------------------------------
// === Adaptive knobs & sentinels (NEW) ===
// ----------------------------------------------------------------------
var G_FB_W = 0.70; // (1) dynamic lambda->gamma blend weight 0..1
var G_MC_ACT = MC_ACT; // (2) adaptive candlestick acceptance threshold
var G_AccRate = 0; // (2) EW acceptance rate of (state != 0)
// (3) advisor budget per bar (replaces the macro)
int G_AdviseMax = 16;
// (6) Markov Laplace smoothing (runtime alpha)
var G_MC_Alpha = 1.0;
// (7) adaptive candidate breadth for adjacency search
int G_CandNeigh = CAND_NEIGH;
// (8) effective projection dimension (<= KPROJ)
int G_Keff = KPROJ;
// (5) depth emphasis hill-climber
var G_DTreeExpStep = 0.05;
int G_DTreeExpDir = 1;
// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_ROTATE 1 // 1 = rotate which equations get DTREE each bar
int allowAdvise(int i)
{
if(ADVISE_ROTATE){
int groups = NET_EQNS / G_AdviseMax;
if(groups < 1) groups = 1;
return ((i / G_AdviseMax) % groups) == (Bar % groups);
} else {
return (i < G_AdviseMax);
}
}
// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u)
{
if(!u) return 0;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int sz_node = 2*SZV + SZP + 2*SZI;
int total = sz_node;
if(u->n > 0 && u->c) total += u->n * SZP;
int i;
for(i=0;i<u->n;i++)
total += tree_bytes(((Node**)u->c)[i]);
return total;
}
// ======================================================================
// Conservative in-script memory estimator (arrays + pointers)
// ======================================================================
int mem_bytes_est()
{
int N = G_N, D = G_D, K = G_K;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int b = 0;
b += N*SZV*(3 + 8 + 7 + 7 + 4 + 2 + 2 + 2 + 2);
b += N*SZI*(3); // G_Mode, G_TopEq, G_EqTreeId
b += N*D*SZI; // G_Adj
b += K*N*SZV; // G_RP
b += K*SZV; // G_Z
b += G_TreeCap*SZP; // G_TreeIdx pointer vector
if(G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN; // optional expression buffers
b += MC_STATES*MC_STATES*SZI + MC_STATES*SZI; // Markov
b += tree_bytes(Root); // include D-Tree
return b;
}
int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }
// === total memory (Zorro-wide) in MB ===
int memMB(){ return (int)(memory(0)/(1024*1024)); }
// light one-shot shedding
void shed_zero_cost_once()
{
if(G_ShedStage > 0) return;
set(PLOTNOW|OFF); G_ChartsOff = 1; // stop chart buffers
G_LogsOff = 1; // decimate logs (gated later)
G_ShedStage = 1;
}
void freeExprBuffers()
{
if(!G_Sym || G_SymFreed) return;
int i; for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym); G_Sym = 0; G_SymFreed = 1;
}
// depth manager (prune & shedding)
void depth_manager_runtime()
{
int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
int mb = mem_mb_est();
if(mb < trigger) return;
if(G_ShedStage == 0) shed_zero_cost_once();
if(G_ShedStage <= 1){
if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
G_ShedStage = 2;
}
int overBudget = (mb >= MEM_BUDGET_MB);
if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS))
return;
while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH)
{
int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
mb = mem_mb_est();
printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);
if(mb < trigger) break;
}
G_LastDepthActBar = Bar;
}
// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
int n = 0;
#define ADD(Name, Call) do{ var v = (Call); out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)
ADD("CDL2Crows", CDL2Crows());
ADD("CDL3BlackCrows", CDL3BlackCrows());
ADD("CDL3Inside", CDL3Inside());
ADD("CDL3LineStrike", CDL3LineStrike());
ADD("CDL3Outside", CDL3Outside());
ADD("CDL3StarsInSouth", CDL3StarsInSouth());
ADD("CDL3WhiteSoldiers", CDL3WhiteSoldiers());
ADD("CDLAbandonedBaby", CDLAbandonedBaby(0.3));
ADD("CDLAdvanceBlock", CDLAdvanceBlock());
ADD("CDLBeltHold", CDLBeltHold());
ADD("CDLBreakaway", CDLBreakaway());
ADD("CDLClosingMarubozu", CDLClosingMarubozu());
ADD("CDLConcealBabysWall", CDLConcealBabysWall());
ADD("CDLCounterAttack", CDLCounterAttack());
ADD("CDLDarkCloudCover", CDLDarkCloudCover(0.3));
ADD("CDLDoji", CDLDoji());
ADD("CDLDojiStar", CDLDojiStar());
ADD("CDLDragonflyDoji", CDLDragonflyDoji());
ADD("CDLEngulfing", CDLEngulfing());
ADD("CDLEveningDojiStar", CDLEveningDojiStar(0.3));
ADD("CDLEveningStar", CDLEveningStar(0.3));
ADD("CDLGapSideSideWhite", CDLGapSideSideWhite());
ADD("CDLGravestoneDoji", CDLGravestoneDoji());
ADD("CDLHammer", CDLHammer());
ADD("CDLHangingMan", CDLHangingMan());
ADD("CDLHarami", CDLHarami());
ADD("CDLHaramiCross", CDLHaramiCross());
ADD("CDLHignWave", CDLHignWave());
ADD("CDLHikkake", CDLHikkake());
ADD("CDLHikkakeMod", CDLHikkakeMod());
ADD("CDLHomingPigeon", CDLHomingPigeon());
ADD("CDLIdentical3Crows", CDLIdentical3Crows());
ADD("CDLInNeck", CDLInNeck());
ADD("CDLInvertedHammer", CDLInvertedHammer());
ADD("CDLKicking", CDLKicking());
ADD("CDLKickingByLength", CDLKickingByLength());
ADD("CDLLadderBottom", CDLLadderBottom());
ADD("CDLLongLeggedDoji", CDLLongLeggedDoji());
ADD("CDLLongLine", CDLLongLine());
ADD("CDLMarubozu", CDLMarubozu());
ADD("CDLMatchingLow", CDLMatchingLow());
ADD("CDLMatHold", CDLMatHold(0.5));
ADD("CDLMorningDojiStar", CDLMorningDojiStar(0.3));
ADD("CDLMorningStar", CDLMorningStar(0.3));
ADD("CDLOnNeck", CDLOnNeck());
ADD("CDLPiercing", CDLPiercing());
ADD("CDLRickshawMan", CDLRickshawMan());
ADD("CDLRiseFall3Methods", CDLRiseFall3Methods());
ADD("CDLSeperatingLines", CDLSeperatingLines());
ADD("CDLShootingStar", CDLShootingStar());
ADD("CDLShortLine", CDLShortLine());
ADD("CDLSpinningTop", CDLSpinningTop());
ADD("CDLStalledPattern", CDLStalledPattern());
ADD("CDLStickSandwhich", CDLStickSandwhich());
ADD("CDLTakuri", CDLTakuri());
ADD("CDLTasukiGap", CDLTasukiGap());
ADD("CDLThrusting", CDLThrusting());
ADD("CDLTristar", CDLTristar());
ADD("CDLUnique3River", CDLUnique3River());
ADD("CDLUpsideGap2Crows", CDLUpsideGap2Crows());
ADD("CDLXSideGap3Methods", CDLXSideGap3Methods());
#undef ADD
return n; // 61
}
// ================= Markov storage & helpers =================
static int* MC_Count; // [MC_STATES*MC_STATES]
static int* MC_RowSum; // [MC_STATES]
static int MC_Prev = -1;
static int MC_Cur = 0;
static var MC_PBullNext = 0.5;
static var MC_Entropy = 0.0;
static string MC_Names[MC_NPAT];
#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))
int MC_stateFromCDL(var* cdl /*len=61*/, var thr)
{
int i, best=-1; var besta=0;
for(i=0;i<MC_NPAT;i++){
var a = abs(cdl[i]);
if(a>besta){ besta=a; best=i; }
}
if(best<0) return MC_NONE;
if(besta < thr) return MC_NONE;
int bull = (cdl[best] > 0);
return 1 + 2*best + bull; // 1..122
}
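The encoder above folds the strongest candle score into a single integer: 0 means no pattern cleared the threshold, and pattern `k` with bull flag `b` becomes `1 + 2*k + b` (1..122 for 61 patterns). A standard-C sketch of the same packing, with illustrative names:

```c
#include <assert.h>
#include <math.h>

#define NPAT 61
#define NONE 0

/* Encode the strongest score above `thr` as 1 + 2*pattern + bull;
   return NONE (0) when nothing clears the threshold. */
int state_from_scores(const double* s, int n, double thr){
    int i, best = -1; double besta = 0;
    for(i = 0; i < n; i++){
        double a = fabs(s[i]);
        if(a > besta){ besta = a; best = i; }
    }
    if(best < 0 || besta < thr) return NONE;
    return 1 + 2*best + (s[best] > 0);
}

/* Decode helpers mirroring MC_isBull's parity trick */
int state_pattern(int st){ return (st - 1)/2; }
int state_is_bull(int st){ return st > 0 && ((st - 1) % 2) == 1; }

int demo_state(void){
    double s[3] = {0.1, -0.9, 0.4}; /* pattern 1 wins, bearish */
    return state_from_scores(s, 3, 0.3);
}
```

`MC_isBull` recovers the direction from the same parity, so no separate lookup table is needed.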
int MC_isBull(int s){ if(s<=0) return 0; return ((s-1)%2)==1; }
void MC_update(int sPrev,int sCur){ if(sPrev<0) return; MC_Count[MC_IDX(sPrev,sCur)]++; MC_RowSum[sPrev]++; }
// === (6) Use runtime Laplace alpha (G_MC_Alpha) ===
var MC_prob(int s,int t){
var num = (var)MC_Count[MC_IDX(s,t)] + G_MC_Alpha;
var den = (var)MC_RowSum[s] + G_MC_Alpha*MC_STATES;
if(den<=0) return 1.0/MC_STATES;
return num/den;
}
var MC_nextBullishProb(int s){
if(s<0) return 0.5;
int t; var pBull=0, pTot=0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); pTot+=p; if(MC_isBull(t)) pBull+=p; }
if(pTot<=0) return 0.5;
return pBull/pTot;
}
var MC_rowEntropy01(int s){
if(s<0) return 1.0;
int t; var H=0, Z=0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); Z+=p; }
if(Z<=0) return 1.0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t)/Z; if(p>0) H += -p*log(p); }
var Hmax = log(MC_STATES-1);
if(Hmax<=0) return 0;
return H/Hmax;
}
// ================= HARMONIC D-TREE ENGINE =================
// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){ if(u<-1) u=-1; if(u>1) u=1; var t=0.5*(u+1.0); return lo + t*(hi-lo); }
// ---- safety helpers ----
var safeNum(var x){ if(x!=x) return 0; if(x > 1e100) return 1e100; if(x < -1e100) return -1e100; return x; }
void sanitize(var* A,int n){ int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]); }
var sat100(var x){ return clamp(x,-100,100); }
// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap)
{
if(!dst || !src || cap <= 0) return;
int dl = strlen(dst);
int sl = strlen(src);
int room = cap - 1 - dl;
if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
int i; for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
dst[dl + i] = 0;
}
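`strlcat_safe` follows the BSD `strlcat` contract: never write more than `cap` bytes total (terminator included) and truncate silently. A standard-C equivalent with a truncation check:

```c
#include <assert.h>
#include <string.h>

/* Append src to dst, capped at cap bytes including the terminator. */
void cat_bounded(char* dst, const char* src, int cap){
    if(!dst || !src || cap <= 0) return;
    int dl = (int)strlen(dst);
    int room = cap - 1 - dl;
    if(room <= 0){ dst[cap-1] = 0; return; }
    int i, sl = (int)strlen(src);
    for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
    dst[dl + i] = 0;
}

/* Only 4 of the 7 source chars fit into the 8-byte buffer. */
int demo_truncates(void){
    char buf[8] = "abc";
    cat_bounded(buf, "defghij", 8);
    return strcmp(buf, "abcdefg") == 0 && strlen(buf) == 7;
}
```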
int countSubStr(string s, string sub){
if(!s || !sub) return 0;
int n=0; string p=s;
int sublen = strlen(sub);
if(sublen<=0) return 0;
while((p=strstr(p,sub))){ n++; p += sublen; }
return n;
}
// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
int h = 5381, c, i = 0;
if(!s) return h;
while((c = s[i++])) h = ((h<<5)+h) ^ c; // h*33 ^ c
return h & 0x7fffffff; // force non-negative
}
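The hash here is the djb2 XOR variant (`h*33 ^ c`) with the sign bit masked off so the result is usable as a non-negative lite-C `int`. The same function in standard C:

```c
#include <assert.h>

/* djb2 XOR variant, masked non-negative as in the script. */
int djb2(const char* s){
    int h = 5381, c, i = 0;
    if(!s) return h;
    while((c = s[i++])) h = ((h<<5) + h) ^ c; /* h*33 ^ c */
    return h & 0x7fffffff;
}
```

The empty string hashes to the seed 5381; any single-character change in the last position changes the result, which is what the expression-logging columns rely on.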
// ---- tree helpers ----
int validTreeIndex(int tid){ if(!G_TreeIdx) return 0; if(tid<0||tid>=G_TreeN) return 0; return (G_TreeIdx[tid]!=0); }
Node* treeAt(int tid){ if(validTreeIndex(tid)) return G_TreeIdx[tid]; return &G_DummyNode; }
int safeTreeIndexFromEq(int eqi){
int denom = ifelse(G_TreeN>0, G_TreeN, 1);
int tid = eqi;
if(tid < 0) tid = 0;
if(denom > 0) tid = tid % denom;
if(tid < 0) tid = 0;
return tid;
}
// ---- tree indexing ----
void pushTreeNode(Node* u){
if(G_TreeN >= G_TreeCap){
int newCap = G_TreeCap*2;
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
}
G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }
// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
if(!G_TreeIdx) return;
if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
int newCap = (G_TreeCap >> 1);
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
}
}
// ---- tree create/eval ----
Node* createNode(int depth)
{
Node* u = (Node*)malloc(sizeof(Node));
u->v = random();
u->r = 0.01 + 0.02*depth + random(0.005);
u->d = depth;
if(depth > 0){
u->n = 1 + (int)random(MAX_BRANCHES);
u->c = malloc(u->n * sizeof(void*));
int i; for(i=0;i<u->n;i++) ((Node**)u->c)[i] = createNode(depth - 1);
} else { u->n = 0; u->c = 0; }
return u;
}
var evaluateNode(Node* u)
{
if(!u) return 0;
var sum=0; int i; for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
var phase = sin(u->r * Bar + sum);
var weight = 1.0 / pow(u->d + 1, G_DTreeExp);
u->v = (1 - weight)*u->v + weight*phase;
return u->v;
}
int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u){ if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]); if(u->c) free(u->c); free(u); }
// =========== NETWORK STATE & COEFFICIENTS ===========
var* G_State; var* G_Prev; var* G_Vel;
int* G_Adj;
var* G_RP; var* G_Z;
int* G_Mode;
var* G_WSelf; var* G_WN1; var* G_WN2; var* G_WGlob1; var* G_WGlob2; var* G_WMom; var* G_WTree; var* G_WAdv;
var* A1x; var* A1lam; var* A1mean; var* A1E; var* A1P; var* A1i; var* A1c;
var* A2x; var* A2lam; var* A2mean; var* A2E; var* A2P; var* A2i; var* A2c;
var* G1mean; var* G1E; var* G2P; var* G2lam;
var* G_TreeTerm; int* G_TopEq; var* G_TopW; int* G_EqTreeId; var* TAlpha; var* TBeta;
var* G_Pred; var* G_AdvScore;
var* G_PropRaw; var* G_Prop;
// ===== Markov features exposed to DTREE =====
var G_MCF_PBull; // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State; // 0..122
// epoch/context & feedback
int G_Epoch = 0;
int G_CtxID = 0;
var G_FB_A = 0.7; // kept (not used in blend now)
var G_FB_B = 0.3; // kept (not used in blend now)
// ---------- predictability ----------
var nodePredictability(Node* t)
{
if(!t) return 0.5;
var disp=0; int n=t->n, i;
for(i=0;i<n;i++){ Node* c=((Node**)t->c)[i]; disp += abs(c->v - t->v); }
if(n>0) disp /= n;
var depthFac = 1.0/(1+t->d);
var rateBase = 0.01 + 0.02*t->d;
var rateFac = exp(-25.0*abs(t->r - rateBase));
var p = 0.5*(depthFac + rateFac);
p = 0.5*p + 0.5*(1.0 - disp); // fold in dispersion: tighter children => more predictable
if(p<0) p=0; if(p>1) p=1;
return p;
}
// importance for selective pruning
var nodeImportance(Node* u)
{
if(!u) return 0;
var amp = abs(u->v); if(amp>1) amp=1;
var p = nodePredictability(u);
var depthW = 1.0/(1.0 + u->d);
var imp = (0.6*p + 0.4*amp) * depthW;
return imp;
}
// ====== Elastic growth helpers ======
// create a leaf at depth d (no children)
Node* createLeafDepth(int d){
Node* u = (Node*)malloc(sizeof(Node));
u->v = random();
u->r = 0.01 + 0.02*d + random(0.005);
u->d = d;
u->n = 0;
u->c = 0;
return u;
}
// add up to addK new children to all nodes at frontierDepth
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK)
{
if(!u) return;
if(u->d == frontierDepth){
int want = addK;
if(want <= 0) return;
int oldN = u->n;
int newN = oldN + want;
Node** Cnew = (Node**)malloc(newN * sizeof(void*));
int i;
for(i=0;i<oldN;i++) Cnew[i] = ((Node**)u->c)[i];
for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
if(u->c) free(u->c);
u->c = Cnew; u->n = newN;
return;
}
int j; for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}
// keep top-K children by importance at targetDepth, drop the rest
void freeChildAt(Node* parent, int idx)
{
if(!parent || !parent->c) return;
Node** C = (Node**)parent->c;
freeTree(C[idx]);
int i;
for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
parent->n--;
if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK)
{
if(!u) return;
if(u->d == targetDepth-1 && u->n > 0){
int n = u->n, i, kept = 0;
int mark[16]; for(i=0;i<16;i++) mark[i]=0; // keep-flags; supports up to 16 children per node
int iter;
for(iter=0; iter<keepK && iter<n; iter++){
int bestI = -1; var bestImp = -1;
for(i=0;i<n;i++){
if(i<16 && mark[i]==1) continue;
var imp = nodeImportance(((Node**)u->c)[i]);
if(imp > bestImp){ bestImp = imp; bestI = i; }
}
if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
}
for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
return;
}
int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}
void reindexTreeAndMap()
{
G_TreeN = 0;
indexTreeDFS(Root);
if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
maybeShrinkTreeIdx(); // Fix #3
}
// ====== Accuracy sentinel & elastic-depth controller ======
void acc_update(var x /*lambda*/, var y /*gamma*/)
{
var a = 0.01; // EMA smoothing; ~100-bar memory (half-life ~69 bars)
ACC_mx = (1-a)*ACC_mx + a*x;
ACC_my = (1-a)*ACC_my + a*y;
ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
var vx = ACC_mx2 - ACC_mx*ACC_mx;
var vy = ACC_my2 - ACC_my*ACC_my;
var cv = ACC_mxy - ACC_mx*ACC_my;
if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy); else G_AccCorr = 0;
if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}
// utility to maximize: accuracy minus gentle memory penalty
var util_now()
{
int mb = mem_mb_est();
var mem_pen = 0;
if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB; else mem_pen = 0;
return G_AccCorr - 0.5*mem_pen;
}
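`util_now` trades accuracy against memory: the penalty is zero under budget and grows linearly with the relative overshoot. The same objective in standard C, with illustrative parameters:

```c
#include <assert.h>
#include <math.h>

/* Utility = accuracy minus half the relative memory overshoot. */
double util(double acc_corr, int mb, int budget_mb){
    double pen = 0;
    if(mb > budget_mb) pen = (double)(mb - budget_mb)/budget_mb;
    return acc_corr - 0.5*pen;
}
```

Because the penalty only activates above budget, growth steps are free until the estimator crosses `MEM_BUDGET_MB`, after which each extra 2x-budget megabyte chunk costs half a correlation point.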
// apply a "grow one level" step when there is safe memory headroom
int apply_grow_step()
{
int mb = mem_mb_est();
if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
int newFrontier = G_RT_TreeMaxDepth;
growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
G_RT_TreeMaxDepth++;
reindexTreeAndMap();
printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
return 1;
}
// revert last growth (drop newly-added frontier children)
void revert_last_grow()
{
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}
// main elastic-depth controller; call once per bar (after acc_update)
void edc_runtime()
{
// (5) slow hill-climb on G_DTreeExp
if((Bar % DEPTH_TUNE_BARS) == 0){
var U0 = util_now();
var trial = clamp(G_DTreeExp + G_DTreeExpDir*G_DTreeExpStep, 0.8, 2.0);
var old = G_DTreeExp;
G_DTreeExp = trial;
if(util_now() + 0.005 < U0){
G_DTreeExp = old;
G_DTreeExpDir = -G_DTreeExpDir;
}
}
int mb = mem_mb_est();
if(G_TunePending){
if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
G_UtilAfter = util_now();
var eps = 0.01;
if(G_UtilAfter + eps < G_UtilBefore){
revert_last_grow();
} else {
printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter);
}
G_TunePending = 0; G_TuneAction = 0;
}
return;
}
if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
G_UtilBefore = util_now();
if(apply_grow_step()){
G_TunePending = 1; G_TuneAction = 1; G_TuneStartBar = Bar;
}
}
}
// filenames (legacy; still used if LOG_EQ_TO_ONE_FILE==0)
void buildEqFileName(int idx, char* outName /*>=64*/)
{
strcpy(outName, "Log\\Alpha12_eq_");
string idxs = strf("%03i", idx);
strcat(outName, idxs);
strcat(outName, ".csv");
}
// ===== consolidated EQ log =====
void writeEqHeaderOnce()
{
static int done=0; if(done) return; done=1;
file_append("Log\\Alpha12_eq_all.csv",
"Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}
void appendEqMetaLine(
int bar, int epoch, int ctx, int i, int n1, int n2, int tid, int depth, var rate,
var pred, var adv, var prop, int mode, var wadv, var wtree,
var pbull, var ent, int mcstate, string expr)
{
if(i >= LOG_EQ_SAMPLE) return;
// ---- SAFE: never call functions inside ifelse; handle NULL explicitly
int eLen = 0, eHash = 0, cT = 0, cS = 0, cC = 0;
if(expr){
eLen = (int)strlen(expr);
eHash = (int)djb2_hash(expr);
cT = countSubStr(expr,"tanh(");
cS = countSubStr(expr,"sin(");
cC = countSubStr(expr,"cos(");
} else {
eHash = (int)djb2_hash("");
}
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,%.4f,%.4f,%.6f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth, rate,
pred, adv, prop, mode, wadv, wtree,
pbull, ent, mcstate, eLen, eHash, cT, cS, cC));
}
// --------- allocation ----------
void randomizeRP()
{
int K=G_K,N=G_N,k,j;
for(k=0;k<K;k++)
for(j=0;j<N;j++)
G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}
// === (8) Use effective K (G_Keff) ===
void computeProjection(){
int K=G_Keff, N=G_N, k, j;
for(k=0;k<K;k++){
var acc=0;
for(j=0;j<N;j++) acc += G_RP[k*N+j]*(G_State[j]*G_State[j]);
G_Z[k]=acc;
}
}
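`computeProjection` is a signed random projection: each output is a ±1-weighted sum of squared states, a cheap sketch of how energy is distributed across equations. Isolated in standard C:

```c
#include <assert.h>
#include <math.h>

/* Z[k] = sum_j RP[k][j] * x[j]^2 with RP entries in {-1,+1}. */
void project(const double* RP, const double* x, double* Z, int K, int N){
    int k, j;
    for(k = 0; k < K; k++){
        double acc = 0;
        for(j = 0; j < N; j++) acc += RP[k*N + j]*x[j]*x[j];
        Z[k] = acc;
    }
}

double demo_proj(void){
    double RP[6] = {1,-1,1, -1,-1,1}; /* K=2, N=3 */
    double x[3] = {1, 2, 3};
    double Z[2];
    project(RP, x, Z, 2, 3);
    return Z[0]; /* 1 - 4 + 9 = 6 */
}
```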
void allocateNet()
{
int N=G_N, D=G_D, K=G_K;
G_State=(var*)malloc(N*sizeof(var)); G_Prev=(var*)malloc(N*sizeof(var)); G_Vel=(var*)malloc(N*sizeof(var));
G_Adj=(int*)malloc(N*D*sizeof(int));
G_RP=(var*)malloc(K*N*sizeof(var)); G_Z=(var*)malloc(K*sizeof(var));
G_Mode=(int*)malloc(N*sizeof(int));
G_WSelf=(var*)malloc(N*sizeof(var)); G_WN1=(var*)malloc(N*sizeof(var)); G_WN2=(var*)malloc(N*sizeof(var));
G_WGlob1=(var*)malloc(N*sizeof(var)); G_WGlob2=(var*)malloc(N*sizeof(var));
G_WMom=(var*)malloc(N*sizeof(var)); G_WTree=(var*)malloc(N*sizeof(var)); G_WAdv=(var*)malloc(N*sizeof(var));
A1x=(var*)malloc(N*sizeof(var)); A1lam=(var*)malloc(N*sizeof(var)); A1mean=(var*)malloc(N*sizeof(var));
A1E=(var*)malloc(N*sizeof(var)); A1P=(var*)malloc(N*sizeof(var)); A1i=(var*)malloc(N*sizeof(var)); A1c=(var*)malloc(N*sizeof(var));
A2x=(var*)malloc(N*sizeof(var)); A2lam=(var*)malloc(N*sizeof(var)); A2mean=(var*)malloc(N*sizeof(var));
A2E=(var*)malloc(N*sizeof(var)); A2P=(var*)malloc(N*sizeof(var)); A2i=(var*)malloc(N*sizeof(var)); A2c=(var*)malloc(N*sizeof(var));
G1mean=(var*)malloc(N*sizeof(var)); G1E=(var*)malloc(N*sizeof(var));
G2P=(var*)malloc(N*sizeof(var)); G2lam=(var*)malloc(N*sizeof(var));
G_TreeTerm=(var*)malloc(N*sizeof(var)); G_TopEq=(int*)malloc(N*sizeof(int)); G_TopW=(var*)malloc(N*sizeof(var));
TAlpha=(var*)malloc(N*sizeof(var)); TBeta=(var*)malloc(N*sizeof(var));
G_Pred=(var*)malloc(N*sizeof(var)); G_AdvScore=(var*)malloc(N*sizeof(var));
G_PropRaw=(var*)malloc(N*sizeof(var)); G_Prop=(var*)malloc(N*sizeof(var));
if(LOG_EXPR_TEXT){
G_Sym=(string*)malloc(N*sizeof(char*));
} else {
G_Sym=0;
}
G_TreeCap=128; // was 512 (Fix #3: start smaller; still grows if needed)
G_TreeIdx=(Node**)malloc(G_TreeCap*sizeof(Node*)); G_TreeN=0;
G_EqTreeId=(int*)malloc(N*sizeof(int));
// Pre-init adjacency to safe value
int tInit; for(tInit=0; tInit<N*D; tInit++) G_Adj[tInit] = -1;
int i;
for(i=0;i<N;i++){
G_State[i]=random();
G_Prev[i]=G_State[i]; G_Vel[i]=0;
G_Mode[i]=0;
G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2; G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;
A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
G1mean[i]=1.0; G1E[i]=0.001; G2P[i]=0.6; G2lam[i]=0.3;
TAlpha[i]=0.8; TBeta[i]=25.0;
G_TreeTerm[i]=0; G_TopEq[i]=-1; G_TopW[i]=0;
G_Pred[i]=0.5; G_AdvScore[i]=0;
G_PropRaw[i]=1; G_Prop[i]=1.0/G_N;
if(LOG_EXPR_TEXT){
G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
if(G_Sym[i]) strcpy(G_Sym[i], "");
}
}
}
void freeNet()
{
int i;
if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_Vel)free(G_Vel);
if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z);
if(G_Mode)free(G_Mode); if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2); if(G_WMom)free(G_WMom);
if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);
if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);
if(G_TreeTerm)free(G_TreeTerm); if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
if(G_Pred)free(G_Pred); if(G_AdvScore)free(G_AdvScore);
if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);
if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
if(G_TreeIdx)free(G_TreeIdx); if(G_EqTreeId)free(G_EqTreeId);
}
// --------- DTREE feature builders ----------
var nrm_s(var x){ return sat100(100.0*tanh(x)); }
var nrm_scl(var x,var s){ return sat100(100.0*tanh(s*x)); }
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var* S /*ADV_EQ_NF*/)
{
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
S[0] = nrm_s(G_State[i]);
S[1] = nrm_s(mean);
S[2] = nrm_scl(power,0.05);
S[3] = nrm_scl(energy,0.01);
S[4] = nrm_s(lambda);
S[5] = sat100(200.0*(G_Pred[i]-0.5));
S[6] = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
S[7] = sat100(1000.0*t->r);
S[8] = nrm_s(G_TreeTerm[i]);
S[9] = sat100(200.0*((var)G_Mode[i]/3.0)-100.0);
S[10] = sat100(200.0*(G_MCF_PBull-0.5));
S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
sanitize(S,ADV_EQ_NF);
}
// (Kept for completeness; not used by DTREE anymore)
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* ti = treeAt(tid_i);
Node* tj = treeAt(tid_j);
P[0]=nrm_s(G_State[i]); P[1]=nrm_s(G_State[j]);
P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
P[4]=sat100(1000.0*ti->r); P[5]=sat100(1000.0*tj->r);
P[6]=sat100(abs(P[2]-P[3]));
P[7]=sat100(abs(P[4]-P[5]));
P[8]=sat100(100.0*(G_Pred[i]+G_Pred[j]-1.0));
P[9]=nrm_s(lambda); P[10]=nrm_s(mean); P[11]=nrm_scl(power,0.05);
sanitize(P,ADV_PAIR_NF);
}
// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
int N = G_N, D = G_D;
if(!G_Adj || N <= 1 || D <= 0) return 0;
if(d < 0) d = 0;
if(d >= D) d = d % D;
int v = G_Adj[i*D + d];
if(v < 0 || v >= N || v == i){
v = (i + 1) % N;
}
return v;
}
void sanitizeAdjacency(){
if(!G_Adj) return;
int N = G_N, D = G_D;
int i, d;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
int *p = &G_Adj[i*D + d];
if(*p < 0 || *p >= N || *p == i){
int r = (int)random(N);
if(r == i) r = (r+1) % N;
*p = r;
}
}
if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
int r2 = (G_Adj[i*D+1] + 1) % N;
if(r2 == i) r2 = (r2+1) % N;
G_Adj[i*D+1] = r2;
}
}
}
// --------- advisor helpers (NEW) ----------
// cache one advisor value per equation per bar
var adviseSeed(int i, var lambda, var mean, var energy, var power)
{
static int seedBar = -1;
static int haveSeed[NET_EQNS];
static var seedVal[NET_EQNS];
if(seedBar != Bar){
int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
seedBar = Bar;
}
if(i < 0) i = 0;
if(i >= NET_EQNS) i = i % NET_EQNS;
// Respect advisor budget/rotation for seed too
if(!allowAdvise(i)) return 0;
if(!haveSeed[i]){
seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
haveSeed[i] = 1;
}
return seedVal[i];
}
// simple deterministic mixer for diversity in [-1..1] without extra advise calls
var mix01(var a, int salt){
var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
return tanh(0.75*z);
}
// --------- advise wrappers (single-equation only) ----------
// Use estimator to halt when tight; respect rotation budget.
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
if(!allowAdvise(i)) return 0;
var S[ADV_EQ_NF];
buildEqFeatures(i,lambda,mean,energy,power,S);
if(is(INITRUN)) return 0;
// stop early based on our estimator, not memory(0)
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
if(tight) return 0;
var obj = 0;
if(Train && !tight)
obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));
int objI = (int)obj;
var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
return a/100.;
}
// --------- advisePair disabled: never call DTREE here ----------
var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
return 0;
}
// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power)
{
int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
Node *ni = treeAt(ti), *nj = treeAt(tj);
var simD = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
var simR = 1.0 / (1.0 + 50.0*abs(ni->r - nj->r));
var pred = 0.5*(G_Pred[i] + G_Pred[j]);
var score = 0.5*pred + 0.3*simD + 0.2*simR;
return 2.0*score - 1.0;
}
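`scorePairSafe` blends mean predictability with two similarity kernels, a depth kernel `1/(1+|di-dj|)` and a rate kernel `1/(1+50|ri-rj|)`, then maps the [0,1] score to [-1,1]. The bare formula in standard C, with the tree fields passed in directly:

```c
#include <assert.h>
#include <math.h>

/* Heuristic pair affinity: predictability + depth/rate similarity. */
double pair_score(int di, int dj, double ri, double rj, double pi, double pj){
    double simD = 1.0/(1.0 + fabs((double)di - dj));
    double simR = 1.0/(1.0 + 50.0*fabs(ri - rj));
    double pred = 0.5*(pi + pj);
    return 2.0*(0.5*pred + 0.3*simD + 0.2*simR) - 1.0;
}
```

Identical depth and rate with perfect predictability saturates at +1; dissimilar, unpredictable pairs go negative, which is what steers `rewireAdjacency_DTREE` toward "slow with slow, fast with fast" wiring.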
// --------- adjacency selection (heuristic only) ----------
// safer clash check using prev>=0
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
int N=G_N, D=G_D, i, d, c, best, cand;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
// (7) adaptive candidate breadth
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){ do{ best = (int)random(N);} while(best==i); }
G_Adj[i*D + d] = best;
}
}
}
// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
var seed = adviseSeed(i,lambda,mean,energy,power);
G_Mode[i] = (int)(abs(1000*seed)) & 3;
// derive weights & params deterministically from the single seed
G_WSelf[i] = mapA(mix01(seed, 11), 0.15, 0.85);
G_WN1[i] = mapA(mix01(seed, 12), 0.05, 0.35);
G_WN2[i] = mapA(mix01(seed, 13), 0.05, 0.35);
G_WGlob1[i] = mapA(mix01(seed, 14), 0.05, 0.30);
G_WGlob2[i] = mapA(mix01(seed, 15), 0.05, 0.30);
G_WMom[i] = mapA(mix01(seed, 16), 0.02, 0.15);
G_WTree[i] = mapA(mix01(seed, 17), 0.05, 0.35);
G_WAdv[i] = mapA(mix01(seed, 18), 0.05, 0.35);
A1x[i] = randsign()*mapA(mix01(seed, 21), 0.6, 1.2);
A1lam[i] = randsign()*mapA(mix01(seed, 22), 0.05,0.35);
A1mean[i]= mapA(mix01(seed, 23),-0.30,0.30);
A1E[i] = mapA(mix01(seed, 24),-0.0015,0.0015);
A1P[i] = mapA(mix01(seed, 25),-0.30,0.30);
A1i[i] = mapA(mix01(seed, 26),-0.02,0.02);
A1c[i] = mapA(mix01(seed, 27),-0.20,0.20);
A2x[i] = randsign()*mapA(mix01(seed, 31), 0.6, 1.2);
A2lam[i] = randsign()*mapA(mix01(seed, 32), 0.05,0.35);
A2mean[i]= mapA(mix01(seed, 33),-0.30,0.30);
A2E[i] = mapA(mix01(seed, 34),-0.0015,0.0015);
A2P[i] = mapA(mix01(seed, 35),-0.30,0.30);
A2i[i] = mapA(mix01(seed, 36),-0.02,0.02);
A2c[i] = mapA(mix01(seed, 37),-0.20,0.20);
G1mean[i] = mapA(mix01(seed, 41), 0.4, 1.6);
G1E[i] = mapA(mix01(seed, 42),-0.004,0.004);
G2P[i] = mapA(mix01(seed, 43), 0.1, 1.2);
G2lam[i] = mapA(mix01(seed, 44), 0.05, 0.7);
TAlpha[i] = mapA(mix01(seed, 51), 0.3, 1.5);
TBeta[i] = mapA(mix01(seed, 52), 6.0, 50.0);
G_PropRaw[i] = 0.01 + 0.99*(0.5*(seed+1.0));
}
void normalizeProportions()
{
int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = 1.0/N; return; }
for(i=0;i<N;i++) G_Prop[i] = G_PropRaw[i]/s;
}
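`normalizeProportions` projects the raw weights onto the probability simplex and falls back to uniform when total mass is zero. In standard C:

```c
#include <assert.h>
#include <math.h>

/* Normalize raw weights to sum to 1; uniform fallback for zero mass. */
void normalize(const double* raw, double* out, int n){
    double s = 0; int i;
    for(i = 0; i < n; i++) s += raw[i];
    if(s <= 0){ for(i = 0; i < n; i++) out[i] = 1.0/n; return; }
    for(i = 0; i < n; i++) out[i] = raw[i]/s;
}

double demo_sum(void){
    double raw[4] = {1, 2, 3, 4}, out[4];
    double s = 0; int i;
    normalize(raw, out, 4);
    for(i = 0; i < 4; i++) s += out[i];
    return s;
}

double demo_uniform(void){
    double raw[3] = {0, 0, 0}, out[3];
    normalize(raw, out, 3);
    return out[0];
}
```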
var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
int N=G_N,j;
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* ti=treeAt(tid_i); int di=ti->d; var ri=ti->r;
var alpha=TAlpha[i], beta=TBeta[i];
var sumw=0, acc=0, bestW=-1; int bestJ=-1;
for(j=0;j<N;j++){
if(j==i) continue;
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* tj=treeAt(tid_j); int dj=tj->d; var rj=tj->r;
var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
var predBoost = 0.5 + 0.5*(G_Pred[i]*G_Pred[j]);
var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
w *= predBoost * propBoost;
var pairAdv = scorePairSafe(i,j,0,0,0,0);
var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
w *= pairBoost;
sumw += w; acc += w*G_State[j];
if(w>bestW){bestW=w; bestJ=j;}
}
if(outTopEq) *outTopEq = bestJ;
if(outTopW) *outTopW = ifelse(sumw>0, bestW/sumw, 0);
if(sumw>0) return acc/sumw; return 0;
}
// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
if(LOG_EXPR_TEXT){
string s = G_Sym[i]; s[0]=0;
string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
A1x[i], n1, A1lam[i], A1mean[i], A1E[i], A1P[i], A1i[i], A1c[i]);
string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
A2x[i], n2, A2lam[i], A2mean[i], A2E[i], A2P[i], A2i[i], A2c[i]);
strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*x[i] + ", G_WSelf[i]), EXPR_MAXLEN);
if(G_Mode[i]==1){
strlcat_safe(s, strf("%.3f*tanh%s + ", G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin%s + ", G_WN2[i], a2), EXPR_MAXLEN);
} else if(G_Mode[i]==2){
strlcat_safe(s, strf("%.3f*cos%s + ", G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*tanh%s + ", G_WN2[i], a2), EXPR_MAXLEN);
} else {
strlcat_safe(s, strf("%.3f*sin%s + ", G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*cos%s + ", G_WN2[i], a2), EXPR_MAXLEN);
}
strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", G_WGlob1[i], G1mean[i], G1E[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ", G_WGlob2[i], G2P[i], G2lam[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ", G_WMom[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("Prop[i]=%.4f; ", G_Prop[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DT(i) + ", G_WTree[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DTREE(i)", G_WAdv[i]), EXPR_MAXLEN);
}
}
// --------- one-time rewire init ----------
void rewireInit()
{
randomizeRP(); computeProjection();
G_TreeN=0; indexTreeDFS(Root);
if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
}
// probes & unsigned context hash
// ----------------------------------------------------------------------
// rewireEpoch (SAFE: no functions inside ifelse)
// ----------------------------------------------------------------------
void rewireEpoch(var lambda, var mean, var energy, var power)
{
int i;
if(ENABLE_WATCH) watch("?A"); // before predictability
for(i=0;i<G_N;i++){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
G_Pred[i] = nodePredictability(t);
}
if(ENABLE_WATCH) watch("?B"); // after predictability, before adjacency
// (7) adapt adjacency sampling breadth by regime entropy
G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH);
rewireAdjacency_DTREE(lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?C"); // after adjacency, before synthesize
sanitizeAdjacency();
for(i=0;i<G_N;i++)
synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?D"); // before normalize / ctx hash
normalizeProportions();
// Unsigned context hash of current adjacency (+ epoch) for logging
{
int D = G_D;
unsigned int h = 2166136261u;
int total = G_N * D;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
// Optional expression text (only when LOG_EXPR_TEXT==1)
for(i=0;i<G_N;i++){
int n1 = adjSafe(i,0);
int n2 = n1;
if(G_D >= 2) n2 = adjSafe(i,1);
if(LOG_EXPR_TEXT) buildSymbolicExpr(i,n1,n2);
}
}
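The context ID computed at the end of `rewireEpoch` is an order-sensitive mixing hash (Jenkins-style `h ^= x + 0x9e3779b9 + (h<<6) + (h>>2)`, seeded with the FNV offset basis 2166136261) over the flattened adjacency, XORed with the shifted epoch and masked non-negative. Extracted into standard C:

```c
#include <assert.h>

/* Order-sensitive mixing hash over an adjacency array, folded with epoch. */
int ctx_hash(const int* adj, int total, int epoch){
    unsigned int h = 2166136261u;
    int i;
    for(i = 0; i < total; i++){
        unsigned int x = (unsigned int)adj[i];
        h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
    }
    return (int)((h ^ ((unsigned int)epoch<<8)) & 0x7fffffff);
}

int demo_e0(void){ int a[3] = {1,2,3}; return ctx_hash(a, 3, 0); }
int demo_e1(void){ int a[3] = {1,2,3}; return ctx_hash(a, 3, 1); }
```

Folding in the epoch guarantees a fresh ID each rewire even when the adjacency happens to repeat, so log rows remain attributable to a specific wiring generation.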
var projectNet()
{
int N=G_N,i; var sum=0,sumsq=0,cross=0;
for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
var mean=sum/N, pairAvg=cross/(N-1); // mean and average adjacent-pair product (not a true correlation)
return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(pairAvg);
}
// ----------------------------------------------------------------------
// updateNet (SAFE: no functions inside ifelse for neighbor indices)
// ----------------------------------------------------------------------
void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
int N = G_N, D = G_D, i;
var sum = 0, sumsq = 0;
for(i = 0; i < N; i++){
sum += G_State[i];
sumsq += G_State[i]*G_State[i];
}
var mean = sum / N;
var energy = sumsq;
var power = sumsq / N;
for(i = 0; i < N; i++){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
G_Pred[i] = nodePredictability(t);
}
for(i = 0; i < N; i++){
int n1 = adjSafe(i,0);
int n2 = n1;
if(D >= 2) n2 = adjSafe(i,1);
var xi = G_State[i];
var xn1 = G_State[n1];
var xn2 = G_State[n2];
var mom = xi - G_Prev[i];
int topEq = -1;
var topW = 0;
var dt = dtreeTerm(i, &topEq, &topW);
G_TreeTerm[i] = dt;
G_TopEq[i] = topEq;
G_TopW[i] = topW;
// call advisor only when allowed
var adv = 0;
if(allowAdvise(i))
adv = adviseEq(i, driver, mean, energy, power);
G_AdvScore[i] = adv;
var arg1 = A1x[i]*xn1 + A1lam[i]*driver + A1mean[i]*mean + A1E[i]*energy + A1P[i]*power + A1i[i]*i + A1c[i];
var arg2 = A2x[i]*xn2 + A2lam[i]*driver + A2mean[i]*mean + A2E[i]*energy + A2P[i]*power + A2i[i]*i + A2c[i];
var nl1, nl2;
if(G_Mode[i] == 0){ nl1 = sin(arg1); nl2 = cos(arg2); }
else if(G_Mode[i] == 1){ nl1 = tanh(arg1); nl2 = sin(arg2); }
else if(G_Mode[i] == 2){ nl1 = cos(arg1); nl2 = tanh(arg2); }
else { nl1 = sin(arg1); nl2 = cos(arg2); }
var glob1 = tanh(G1mean[i]*mean + G1E[i]*energy);
var glob2 = sin (G2P[i]*power + G2lam[i]*driver);
var xNew =
G_WSelf[i]*xi +
G_WN1[i]*nl1 +
G_WN2[i]*nl2 +
G_WGlob1[i]*glob1 +
G_WGlob2[i]*glob2 +
G_WMom[i]*mom +
G_WTree[i]*dt +
G_WAdv[i]*adv;
G_Prev[i] = xi;
G_Vel[i] = xNew - xi;
G_State[i] = clamp(xNew, -10, 10);
if(writeMeta && (G_Epoch % META_EVERY == 0) && !G_LogsOff){
int tid2 = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t2 = treeAt(tid2);
int nn1 = adjSafe(i,0);
int nn2 = nn1;
if(G_D >= 2) nn2 = adjSafe(i,1);
if(LOG_EQ_TO_ONE_FILE){
string expr = "";
if(LOG_EXPR_TEXT) expr = G_Sym[i];
appendEqMetaLine(
Bar, G_Epoch, G_CtxID, i, nn1, nn2, tid2, t2->d, t2->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr
);
} else {
char fname[64];
buildEqFileName(i, fname);
string expr2 = "";
if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
file_append(fname,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid2, t2->d, t2->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
}
}
}
if(outMean) *outMean = mean;
if(outEnergy) *outEnergy = energy;
if(outPower) *outPower = power;
}
// ----------------- MAIN -----------------
function run()
{
static int initialized = 0;
static var lambda;
static int fileInit = 0;
BarPeriod = BAR_PERIOD;
if(LookBack < NWIN) LookBack = NWIN;
if(Train) Hedge = 2;
// Plots are opt-in via ENABLE_PLOTS
set(RULES|LEAN);
if(ENABLE_PLOTS) set(PLOTNOW);
asset(ASSET_SYMBOL);
if(is(INITRUN) && !initialized){
// init dummy node
G_DummyNode.v = 0;
G_DummyNode.r = 0;
G_DummyNode.c = 0;
G_DummyNode.n = 0;
G_DummyNode.d = 0;
// allocate Markov matrices (zeroed)
MC_Count = (int*)malloc(MC_STATES*MC_STATES*sizeof(int));
MC_RowSum = (int*)malloc(MC_STATES*sizeof(int));
int k;
for(k=0;k<MC_STATES*MC_STATES;k++) MC_Count[k]=0;
for(k=0;k<MC_STATES;k++) MC_RowSum[k]=0;
// capture pattern names (optional)
var tmp[MC_NPAT];
buildCDL_TA61(tmp, MC_Names);
// build tree + network
Root = createNode(MAX_DEPTH);
allocateNet();
// engine params
G_DTreeExp = 1.10 + random(0.50); // [1.10..1.60)
G_FB_A = 0.60 + random(0.25); // [0.60..0.85) (kept)
G_FB_B = 1.0 - G_FB_A;
randomizeRP();
computeProjection();
rewireInit();
G_Epoch = 0;
rewireEpoch(0,0,0,0);
// Header setup (consolidated vs legacy)
if(LOG_EQ_TO_ONE_FILE){
writeEqHeaderOnce();
} else {
char fname[64];
int i2;
for(i2=0;i2<NET_EQNS;i2++){
buildEqFileName(i2,fname);
file_append(fname,
"Bar,lambda,gamma,i,State,n1,n2,mean,energy,power,Vel,Mode,WAdv,WSelf,WN1,WN2,WGlob1,WGlob2,WMom,WTree,Pred,Adv,Prop,TreeTerm,TopEq,TopW,TreeId,Depth,Rate,PBull,Entropy,MCState\n");
}
}
// Markov CSV header
if(!fileInit){
file_append("Log\\Alpha12_markov.csv","Bar,State,PBullNext,Entropy,RowSum\n");
fileInit=1;
}
// initial META dump (consolidated or legacy)
int i;
for(i=0;i<G_N;i++){
int n1 = adjSafe(i,0);
int n2 = n1;
if(G_D >= 2) n2 = adjSafe(i,1);
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
if(LOG_EQ_TO_ONE_FILE){
string expr = "";
if(LOG_EXPR_TEXT) expr = G_Sym[i];
appendEqMetaLine(
Bar, G_Epoch, G_CtxID, i, n1, n2, tid, t->d, t->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr
);
} else {
char fname2[64];
buildEqFileName(i,fname2);
string expr2 = "";
if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
file_append(fname2,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
}
}
initialized=1;
printf("\nRoot nodes: %i | Net equations: %i (degree=%i, kproj=%i)",
countNodes(Root), G_N, G_D, G_K);
}
// early zero-cost shedding when approaching cap
if(mem_mb_est() >= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_ShedStage == 0)
shed_zero_cost_once();
// ==== Runtime memory / depth manager (acts only when near the cap)
depth_manager_runtime();
// ====== Per bar: Candles -> Markov states
static var CDL[MC_NPAT];
buildCDL_TA61(CDL,0);
// (2) adaptive threshold for Markov state acceptance
MC_Cur = MC_stateFromCDL(CDL, G_MC_ACT);
if(Bar > LookBack) MC_update(MC_Prev, MC_Cur);
MC_Prev = MC_Cur;
// (6) alpha decays with row support to sharpen PBull as rows fill
var rs = (var)MC_RowSum[MC_Cur];
G_MC_Alpha = clamp(1.0 / (1.0 + rs/256.0), 0.05, 1.0);
MC_PBullNext = MC_nextBullishProb(MC_Cur);
MC_Entropy = MC_rowEntropy01(MC_Cur);
// expose Markov features
G_MCF_PBull = MC_PBullNext;
G_MCF_Entropy = MC_Entropy;
G_MCF_State = (var)MC_Cur;
// (2) EW acceptance rate of nonzero states -> adapt threshold toward target rate
{
var aEW = 0.01; // ~100-bar EW memory (time constant 1/aEW)
G_AccRate = (1 - aEW)*G_AccRate + aEW*(MC_Cur != 0);
var target = 0.35; // aim for ~35% nonzero states
G_MC_ACT = clamp(G_MC_ACT + 0.02*(G_AccRate - target), 0.15, 0.60);
}
// ====== Tree driver lambda
lambda = evaluateNode(Root);
// ====== Rewire cadence (4) + epoch work
{
int doRewire = ((Bar % REWIRE_EVERY) == 0);
// (4) early rewire when utility falls
static var U_prev = 0;
var U_now = util_now();
if(U_now + 0.01 < U_prev) doRewire = 1;
U_prev = U_now;
if(doRewire){
G_Epoch++;
int ii;
var sum=0;
for(ii=0;ii<G_N;ii++) sum += G_State[ii];
var mean = sum/G_N;
var energy=0;
for(ii=0;ii<G_N;ii++) energy += G_State[ii]*G_State[ii];
var power = energy/G_N;
rewireEpoch(lambda,mean,energy,power);
}
// (8) adapt effective projection K each bar and recompute projection once
G_Keff = ifelse(MC_Entropy < 0.45, KPROJ, KPROJ/2);
computeProjection();
// (3) dynamic advisor budget per bar (before updateNet so it applies now)
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
G_AdviseMax = ifelse(tight, 12, ifelse(MC_Entropy < 0.45, 32, 16));
// Update net this bar (write META only if rewired and not shedding logs)
var meanB, energyB, powerB;
updateNet(lambda, &meanB, &energyB, &powerB, doRewire);
// Feedback: compute ensemble projection
var gamma = projectNet();
// --- Accuracy sentinel update & elastic depth controller ---
acc_update(lambda, gamma);
edc_runtime();
// (1) Adaptive feedback blend toward the more informative component
var w = 0.5 + 0.5*G_AccCorr; // 0..1
G_FB_W = clamp(0.9*G_FB_W + 0.1*w, 0.2, 0.9);
lambda = G_FB_W*lambda + (1.0 - G_FB_W)*gamma;
// Plot/log gating
int doPlot = (ENABLE_PLOTS && !G_ChartsOff);
int doLog = ifelse(G_LogsOff, ((Bar % (LOG_EVERY*4)) == 0), ((Bar % LOG_EVERY) == 0));
// Plots
if(doPlot){
plot("lambda", lambda, LINE, 0);
plot("gamma", gamma, LINE, 0);
plot("P_win", powerB, LINE, 0);
plot("PBullNext", MC_PBullNext, LINE, 0);
plot("MC_Entropy", MC_Entropy, LINE, 0);
plot("MemMB", memory(0)/(1024.*1024.), LINE, 0);
plot("Allocs", (var)memory(2), LINE, 0);
}
// Markov CSV log (decimated; further decimated when shedding)
if(doLog){
file_append("Log\\Alpha12_markov.csv",
strf("%i,%i,%.6f,%.6f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));
}
// ====== Entries (Markov-gated) ======
if( MC_PBullNext > PBULL_LONG_TH && lambda > 0.7 ) enterLong();
if( MC_PBullNext < PBULL_SHORT_TH && lambda < -0.7 ) enterShort();
}
}
// Clean up memory
function cleanup()
{
if(Root) freeTree(Root);
if(MC_Count) free(MC_Count);
if(MC_RowSum) free(MC_RowSum);
freeNet();
}
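The long/short entries above fire only when the Markov gate and the blended oscillator agree. A minimal standalone sketch of that two-condition gate (the helper `gate_signal` and its parameter names are illustrative, not part of the engine; the thresholds mirror PBULL_LONG_TH, PBULL_SHORT_TH, and the |lambda| > 0.7 filter):

```c
#include <assert.h>

/* Illustrative two-condition entry gate: +1 = long, -1 = short, 0 = stand aside.
   Mirrors the PBULL_LONG_TH / PBULL_SHORT_TH / |lambda| > lamTh logic above. */
int gate_signal(double pBull, double lambda,
                double longTh, double shortTh, double lamTh)
{
    if(pBull > longTh  && lambda >  lamTh) return  1;  /* both bullish */
    if(pBull < shortTh && lambda < -lamTh) return -1;  /* both bearish */
    return 0; /* oracle and breath disagree: keep vigil */
}
```

Note that the gate is deliberately asymmetric-neutral: a strong lambda with an indifferent PBull (or vice versa) produces no trade.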
Last edited by TipmyPip; 09/06/25 20:26.
Canticle of the Rewoven Mandala
[Re: TipmyPip]
#488909
09/14/25 12:15
TipmyPip
OP Member
Joined: Sep 2017
Posts: 164
Canticle of the Rewoven Mandala

1) Proem Time arrays itself as a lattice of finite breaths, and each breath inscribes a glyph upon the quiet. Most glyphs are rumor; a few ring like bells in stone cloisters. The ear that listens must learn to weigh rumor without banishing it, to honor bell without worshiping echo. Thus a measure awakens between noise and sign, a small invariance that keeps the pulse when weather changes.

2) The Alphabet of Signs A compact alphabet approaches the gate of discernment, where amplitude is baptized into meaning. The gate narrows under fog and widens under stars, so that a constant portion of passage remains sacred. Soft priors cradle the rare and the first, then yield as witness accumulates. In this way, significance is renormalized without losing humility.

3) The Harmonic Tree Depth vows restraint: each rung bears less authority by a gentle power, rather than by decree. Branches breathe in sines and breathe out memory, their phases braided by quiet sums. Weight drifts until usefulness nods, then lingers as if remembering why it came. The crown does not command the root; they meet in an average that forgives their difference.

4) Sun and Moon Two lights shepherd the pilgrim number—one hewn from structure, one echoed by multitudes. A small scribe keeps the concordance of their songs, counting when agreement appears more than once. The weight shifts toward the truer singer of the hour, and no light is shamed for dimming. So guidance becomes a balance of reverence and revision.

5) The Reweaving At chosen beats the web of attention is unspooled and rewoven, the knots reconsidered without rancor. In settled air the cast opens wide; in crosswinds the mesh draws close to the mast. Threads avoid duplicating their crossings, and stale tangles are quietly undone. Each new pattern is signed by a modest seal so the loom remembers where it has wandered.

6) Poverty and Plenty Form owns only what it can bless. When the bowl brims, leaves fall from distant branches with gratitude; when there is room and reason, a single ring of buds appears. Every addition is a trial, and every trial keeps a door for return. Thus growth is reversible, and thrift becomes a geometry.

7) A Single Measure Merit is counted on one bead: clarity earned minus burden carried. The bead rolls forward of its own accord, and the mandala bends to meet it, not the reverse. When the bead stalls, the slope is gently reversed so the climb resumes by another path. No trumpet announces this; the stone merely remembers the foot.

8) Seeds of Counsel Advice arrives as a seed and is rationed like lamp oil in winter. Some nights a few wicks suffice; some dawns welcome a small festival of flame. Diversity is invited by lawful dice, so surprise is shaped, not squandered. The seed is single, but the harvest wears many faces.

9) Proportions as Offering Every voice brings an offering to the altar, and the offerings are washed until their sum is one. No bowl is allowed to overflow by insistence, and none is left dry by shyness. The sharing is impartial to volume, partial to coherence. Thus chorus replaces clamor without silencing the small.

10) Neighbor Grace Affinity is a distance remembered in three tongues: depth, tempo, and temper. Near does not mean same, and far is not foreign; kinship is a gradient, not a border. Trust is earned by steadiness and granted in weights that bow to it. From many neighbors a single counsel is poured, proportioned by grace rather than by force.

11) Fading Without Forgetting Incense rises; ash remains: so the near past perfumes more than the far, and the far is not despised. Memory decays as a hymn, not as a fall, allowing drift without lurch. The ledger of moments is tempered by forgetting that remembers why it forgets. In this way continuity holds hands with change.

12) The Small Chronicle Each day leaves a narrow chronicle: the hour, the season, a modest seal of the tapestry, and the tilt of the lights. Numbers are trimmed like candles so the wax does not drown the flame. The script favors witness over spectacle, sufficiency over excess. It is enough that a future monk can nod and say, “Yes, I see.”

13) Postures That Emerge When order visits, nets widen, depth speaks, counsel is generous, and steps chime with the stones. When weather breaks, meshes tighten, the surface steadies, counsel turns frugal, and stillness earns its wage. The glide between these postures is by small hinges, not by leaps. Thus resilience ceases to be a tactic and becomes a habit.

14) The Rule in One Sentence Move only when the oracle and the breath agree; otherwise keep vigil. Let the form stay light, the changes small, and the diary honest. Spend attention where harmony gathers and thrift where it frays. In all things, prefer the reversible path that leaves the meadow unscarred.

// ======================================================================
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// with runtime memory shaping, selective depth pruning,
// and elastic accuracy-aware depth growth + 10 performance upgrades.
// ======================================================================
// ================= USER CONFIG =================
#define ASSET_SYMBOL "EUR/USD"
#define BAR_PERIOD 60
#define MC_ACT 0.30 // initial threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH 0.60 // Markov gate for long
#define PBULL_SHORT_TH 0.40 // Markov gate for short
// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS 0 // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH 0 // 0 = disable watch() probes; 1 = enable
// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES 3
#define MAX_DEPTH 4
#define NWIN 256
#define NET_EQNS 100
#define DEGREE 4
#define KPROJ 16
#define REWIRE_EVERY 127
#define CAND_NEIGH 8
// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE 1 // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT 0 // 0: omit full expression (store signature only); 1: include text
#define META_EVERY 4 // write META every N rewires
#define LOG_EQ_SAMPLE NET_EQNS // limit number of equations logged
#define EXPR_MAXLEN 512 // cap expression string
#define LOG_FLOAT_TRIM
// decimate Markov log cadence
#define LOG_EVERY 16
// Optional: cadence for candle scan/Markov update (1 = every bar)
#define MC_EVERY 1
// ---- DTREE feature sizes (extended for Markov features) ----
#define ADV_EQ_NF 13 // +1: per-eq hit-rate feature (PATCH A)
#define ADV_PAIR_NF 12 // per-pair features (kept for completeness; DTREE pair disabled)
// ================= Candles -> 122-state Markov =================
#define MC_NPAT 61
#define MC_STATES 123 // 1 + 2*MC_NPAT
#define MC_NONE 0
#define MC_LAPLACE 1.0 // kept for reference; runtime uses G_MC_Alpha
// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB 50
#define MEM_HEADROOM_MB 5
#define DEPTH_STEP_BARS 16
#define KEEP_CHILDREN_HI 2
#define KEEP_CHILDREN_LO 1
#define RUNTIME_MIN_DEPTH 2
int G_ShedStage = 0; // 0..2
int G_LastDepthActBar = -999999;
int G_ChartsOff = 0; // gates plot()
int G_LogsOff = 0; // gates file_append cadence
int G_SymFreed = 0; // expression buffers freed
int G_RT_TreeMaxDepth = MAX_DEPTH;
// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var G_AccCorr = 0; // [-1..1]
var G_AccBase = 0; // first seen sentinel
int G_HaveBase = 0;
// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS 64 // start a growth trial this often (when memory allows)
#define TUNE_DELAY_BARS 64 // evaluate the trial after this many bars
var G_UtilBefore = 0, G_UtilAfter = 0;
int G_TunePending = 0;
int G_TuneStartBar = 0;
int G_TuneAction = 0; // +1 grow trial, 0 none
// ======================================================================
// Types & globals used by memory estimator
// ======================================================================
// HARMONIC D-TREE type
typedef struct Node { var v; var r; void* c; int n; int d; } Node;
// ====== Node pool (upgrade #2) ======
typedef struct NodeChunk {
struct NodeChunk* next;
int used; // 4 bytes
int _pad; // 4 bytes -> ensures nodes[] starts at 8-byte offset on 32-bit
Node nodes[256]; // each Node contains doubles; keep this 8-byte aligned
} NodeChunk;
NodeChunk* G_ChunkHead = 0;
Node* G_FreeList = 0;
Node* poolAllocNode()
{
if(G_FreeList){
Node* n = G_FreeList;
G_FreeList = (Node*)n->c;
n->c = 0; n->n = 0; n->d = 0; n->v = 0; n->r = 0;
return n;
}
if(!G_ChunkHead || G_ChunkHead->used >= 256){
NodeChunk* ch = (NodeChunk*)malloc(sizeof(NodeChunk));
if(!ch) { quit("Alpha12: OOM allocating NodeChunk (poolAllocNode)"); return 0; }
// ensure clean + alignment-friendly start
memset(ch, 0, sizeof(NodeChunk));
ch->next = G_ChunkHead;
ch->used = 0;
G_ChunkHead = ch;
}
if(G_ChunkHead->used < 0 || G_ChunkHead->used >= 256){
quit("Alpha12: Corrupt node pool state");
return 0;
}
return &G_ChunkHead->nodes[G_ChunkHead->used++];
}
void poolFreeNode(Node* u){ if(!u) return; u->c = (void*)G_FreeList; G_FreeList = u; }
void freeNodePool()
{
NodeChunk* ch = G_ChunkHead;
while(ch){ NodeChunk* nx = ch->next; free(ch); ch = nx; }
G_ChunkHead = 0; G_FreeList = 0;
}
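The pool above threads freed nodes onto a singly linked free list through their own `c` pointer, so a freed node is recycled before any fresh chunk allocation. A minimal standalone sketch of the same idea (the `PNode`/`pool_alloc`/`pool_free` names are hypothetical, simplified from `poolAllocNode`/`poolFreeNode`, without the chunking):

```c
#include <stdlib.h>

/* Free-list sketch: freed nodes are pushed onto a singly linked list
   through their first pointer field and handed back before any fresh
   allocation, as the node pool above does. */
typedef struct PNode { void* c; double v; } PNode;
static PNode* g_free = 0;

PNode* pool_alloc(void)
{
    if(g_free){                      /* reuse most recently freed node */
        PNode* n = g_free;
        g_free = (PNode*)n->c;       /* pop from free list */
        n->c = 0; n->v = 0;          /* scrub before reuse */
        return n;
    }
    return (PNode*)calloc(1, sizeof(PNode));
}

void pool_free(PNode* n)
{
    if(!n) return;
    n->c = (void*)g_free;            /* link into free list */
    g_free = n;
}
```

The design choice to overload the child-pointer field as the free-list link is what makes `poolFreeNode` allocation-free, at the cost of requiring every reused node to be scrubbed on the way out.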
// Minimal globals needed before mem estimator
Node* Root = 0;
Node** G_TreeIdx = 0;
int G_TreeN = 0;
int G_TreeCap = 0;
var G_DTreeExp = 0;
// ---- (upgrade #1) depth LUT for pow() ----
#define DEPTH_LUT_SIZE (MAX_DEPTH + 1) // <- keep constant for lite-C
var* G_DepthW = 0; // heap-allocated LUT
var G_DepthExpLast = -1.0; // sentinel as var
Node G_DummyNode; // treeAt() can return &G_DummyNode
// Network sizing globals (used by mem estimator)
int G_N = NET_EQNS;
int G_D = DEGREE;
int G_K = KPROJ;
// Optional expression buffer pointer (referenced by mem estimator)
string* G_Sym = 0;
// Forward decls that reference Node
var nodePredictability(Node* t); // fwd decl (needed by predByTid)
var nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();
// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);
// ----------------------------------------------------------------------
// === Adaptive knobs & sentinels (NEW) ===
var G_FB_W = 0.70; // (1) dynamic lambda/gamma blend weight 0..1
var G_MC_ACT = MC_ACT; // (2) adaptive candlestick acceptance threshold
var G_AccRate = 0; // (2) EW acceptance rate of (state != 0)
// (3) advisor budget per bar (replaces the macro)
int G_AdviseMax = 16;
// (6) Markov Laplace smoothing (runtime)
var G_MC_Alpha = 1.0;
// (7) adaptive candidate breadth for adjacency search
int G_CandNeigh = CAND_NEIGH;
// (8) effective projection dimension (= KPROJ or KPROJ/2)
int G_Keff = KPROJ;
// (5) depth emphasis hill-climber
var G_DTreeExpStep = 0.05;
int G_DTreeExpDir = 1;
// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_ROTATE 1 // 1 = rotate which equations get DTREE each bar
int allowAdvise(int i)
{
if(ADVISE_ROTATE){
int groups = NET_EQNS / G_AdviseMax;
if(groups < 1) groups = 1;
return ((i / G_AdviseMax) % groups) == (Bar % groups);
} else {
return (i < G_AdviseMax);
}
}
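With rotation on, `allowAdvise` splits the N equations into `N / budget` groups and enables one group per bar, so every equation gets a DTREE call once per cycle instead of the first `budget` equations hogging the advisor. A standalone mirror of that schedule (`allow_advise` is an illustrative copy with the globals passed as parameters):

```c
#include <assert.h>

/* Round-robin advisor budget: with nEq equations and a per-bar budget,
   bar b enables group (b % groups), so each equation is advised once
   every `groups` bars. Mirrors allowAdvise() above. */
int allow_advise(int i, int bar, int nEq, int budget)
{
    int groups = nEq / budget;
    if(groups < 1) groups = 1;       /* degenerate: advise everyone */
    return ((i / budget) % groups) == (bar % groups);
}
```

With nEq = 100 and budget = 25 there are 4 groups; equation 0 is advised on bars 0, 4, 8, … and equation 99 on bars 3, 7, 11, …. When nEq is not a multiple of the budget the last partial group shares its slot with group 0, so coverage is still complete, just slightly uneven.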
// ======================================================================
// A) Tight-memory switches and compact types
// ======================================================================
#define TIGHT_MEM 1 // turn on compact types for arrays
// lite-C's precompiler doesn't support '#if' expressions, so a presence
// test is used instead: LOG_EQ_TO_ONE_FILE *defined* selects single-file mode.
// Note that defining it as 0 still counts as defined here - to get per-eq
// files plus KEEP_TOP_META, comment the #define out entirely.
#ifdef LOG_EQ_TO_ONE_FILE
/* consolidated EQ CSV -> don't enable extra meta */
#else
#define KEEP_TOP_META
#endif
#ifdef TIGHT_MEM
typedef float fvar; // 4B instead of 8B 'var' for large coefficient arrays
typedef short i16; // -32768..32767 indices
typedef char i8; // small enums/modes
#else /* not TIGHT_MEM */
typedef var fvar;
typedef int i16;
typedef int i8;
#endif
// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u)
{
if(!u) return 0;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int sz_node = 2*SZV + SZP + 2*SZI;
int total = sz_node;
if(u->n > 0 && u->c) total += u->n * SZP;
int i;
for(i=0;i<u->n;i++)
total += tree_bytes(((Node**)u->c)[i]);
return total;
}
// ======================================================================
// Optimized memory estimator & predictability caches
// ======================================================================
// ===== Memory estimator & predictability caches =====
int G_MemFixedBytes = 0; // invariant part (arrays, Markov + pointer vec + expr opt)
int G_TreeBytesCached = 0; // current D-Tree structure bytes
var* G_PredNode = 0; // length == G_TreeN; -2 = not computed this bar
int G_PredLen = 0;
int G_PredCap = 0; // (upgrade #5)
int G_PredCacheBar = -1;
void recalcTreeBytes(){ G_TreeBytesCached = tree_bytes(Root); }
//
// C) Updated memory estimator (matches compact types).
// Includes pointer vector & optional expr into the "fixed" baseline.
// Note: we refresh this when capacity/logging changes.
//
void computeMemFixedBytes()
{
int N = G_N, D = G_D, K = G_K;
int SZV = sizeof(var), SZF = sizeof(fvar), SZI16 = sizeof(i16), SZI8 = sizeof(i8), SZP = sizeof(void*);
int b = 0;
// --- core state (var-precision) ---
b += N*SZV*2; // G_State, G_Prev
// --- adjacency & ids ---
b += N*D*SZI16; // G_Adj
b += N*SZI16; // G_EqTreeId
b += N*SZI8; // G_Mode
// --- random projection ---
b += K*N*SZF; // G_RP
b += K*SZF; // G_Z
// --- weights & params (fvar) ---
b += N*SZF*(8); // G_W*
b += N*SZF*(7 + 7); // A1*, A2*
b += N*SZF*(2 + 2); // G1mean,G1E,G2P,G2lam
b += N*SZF*(2); // TAlpha, TBeta
b += N*SZF*(1); // G_TreeTerm
#ifdef KEEP_TOP_META
b += N*(SZI16 + SZF); // G_TopEq, G_TopW
#endif
// --- proportions ---
b += N*SZF*2; // G_PropRaw, G_Prop
// --- per-equation hit-rate bookkeeping --- (PATCH C)
b += N*SZF; // G_HitEW
b += N*SZF; // G_AdvPrev
b += N*sizeof(int); // G_HitN
// --- Markov storage (unchanged ints) ---
b += MC_STATES*MC_STATES*sizeof(int) + MC_STATES*sizeof(int);
// pointer vector for tree index (capacity part)
b += G_TreeCap*SZP;
// optional expression buffers
if(LOG_EXPR_TEXT && G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN;
G_MemFixedBytes = b;
}
void ensurePredCache()
{
if(G_PredCacheBar != Bar){
if(G_PredNode){
int i, n = G_PredLen; // use allocated length, not G_TreeN
for(i=0;i<n;i++) G_PredNode[i] = -2;
}
G_PredCacheBar = Bar;
}
}
var predByTid(int tid)
{
if(!G_TreeIdx || tid < 0 || tid >= G_TreeN || !G_TreeIdx[tid]) return 0.5;
ensurePredCache();
// Guard reads/writes by the allocated cache length
if(G_PredNode && tid < G_PredLen && G_PredNode[tid] > -1.5)
return G_PredNode[tid];
Node* t = G_TreeIdx[tid];
var p = 0.5;
if(t) p = nodePredictability(t);
if(G_PredNode && tid < G_PredLen)
G_PredNode[tid] = p;
return p;
}
// ======================================================================
// Conservative in-script memory estimator (arrays + pointers) - O(1)
// ======================================================================
int mem_bytes_est()
{
// With the updated computeMemFixedBytes() counting pointer capacity
// and optional expr buffers, only add current tree structure here.
return G_MemFixedBytes + G_TreeBytesCached;
}
int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }
// === total memory (Zorro-wide) in MB ===
int memMB(){ return (int)(memory(0)/(1024*1024)); }
// light one-shot shedding
void shed_zero_cost_once()
{
if(G_ShedStage > 0) return;
set(PLOTNOW|OFF); G_ChartsOff = 1; // stop chart buffers
G_LogsOff = 1; // decimate logs (gated later)
G_ShedStage = 1;
}
void freeExprBuffers()
{
if(!G_Sym || G_SymFreed) return;
int i; for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym); G_Sym = 0; G_SymFreed = 1;
computeMemFixedBytes(); // refresh baseline
}
// depth manager (prune & shedding)
void depth_manager_runtime()
{
int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
int mb = mem_mb_est();
if(mb < trigger) return;
if(G_ShedStage == 0) shed_zero_cost_once();
if(G_ShedStage <= 1){
if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
G_ShedStage = 2;
}
int overBudget = (mb >= MEM_BUDGET_MB);
if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS))
return;
while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH)
{
int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
mb = mem_mb_est();
printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);
if(mb < trigger) break;
}
G_LastDepthActBar = Bar;
}
// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
int n = 0;
#define ADD(Name, Call) do{ var v = (Call); if(out) out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)
ADD("CDL2Crows", CDL2Crows());
ADD("CDL3BlackCrows", CDL3BlackCrows());
ADD("CDL3Inside", CDL3Inside());
ADD("CDL3LineStrike", CDL3LineStrike());
ADD("CDL3Outside", CDL3Outside());
ADD("CDL3StarsInSouth", CDL3StarsInSouth());
ADD("CDL3WhiteSoldiers", CDL3WhiteSoldiers());
ADD("CDLAbandonedBaby", CDLAbandonedBaby(0.3));
ADD("CDLAdvanceBlock", CDLAdvanceBlock());
ADD("CDLBeltHold", CDLBeltHold());
ADD("CDLBreakaway", CDLBreakaway());
ADD("CDLClosingMarubozu", CDLClosingMarubozu());
ADD("CDLConcealBabysWall", CDLConcealBabysWall());
ADD("CDLCounterAttack", CDLCounterAttack());
ADD("CDLDarkCloudCover", CDLDarkCloudCover(0.3));
ADD("CDLDoji", CDLDoji());
ADD("CDLDojiStar", CDLDojiStar());
ADD("CDLDragonflyDoji", CDLDragonflyDoji());
ADD("CDLEngulfing", CDLEngulfing());
ADD("CDLEveningDojiStar", CDLEveningDojiStar(0.3));
ADD("CDLEveningStar", CDLEveningStar(0.3));
ADD("CDLGapSideSideWhite", CDLGapSideSideWhite());
ADD("CDLGravestoneDoji", CDLGravestoneDoji());
ADD("CDLHammer", CDLHammer());
ADD("CDLHangingMan", CDLHangingMan());
ADD("CDLHarami", CDLHarami());
ADD("CDLHaramiCross", CDLHaramiCross());
ADD("CDLHignWave", CDLHignWave());
ADD("CDLHikkake", CDLHikkake());
ADD("CDLHikkakeMod", CDLHikkakeMod());
ADD("CDLHomingPigeon", CDLHomingPigeon());
ADD("CDLIdentical3Crows", CDLIdentical3Crows());
ADD("CDLInNeck", CDLInNeck());
ADD("CDLInvertedHammer", CDLInvertedHammer());
ADD("CDLKicking", CDLKicking());
ADD("CDLKickingByLength", CDLKickingByLength());
ADD("CDLLadderBottom", CDLLadderBottom());
ADD("CDLLongLeggedDoji", CDLLongLeggedDoji());
ADD("CDLLongLine", CDLLongLine());
ADD("CDLMarubozu", CDLMarubozu());
ADD("CDLMatchingLow", CDLMatchingLow());
ADD("CDLMatHold", CDLMatHold(0.5));
ADD("CDLMorningDojiStar", CDLMorningDojiStar(0.3));
ADD("CDLMorningStar", CDLMorningStar(0.3));
ADD("CDLOnNeck", CDLOnNeck());
ADD("CDLPiercing", CDLPiercing());
ADD("CDLRickshawMan", CDLRickshawMan());
ADD("CDLRiseFall3Methods", CDLRiseFall3Methods());
ADD("CDLSeperatingLines", CDLSeperatingLines());
ADD("CDLShootingStar", CDLShootingStar());
ADD("CDLShortLine", CDLShortLine());
ADD("CDLSpinningTop", CDLSpinningTop());
ADD("CDLStalledPattern", CDLStalledPattern());
ADD("CDLStickSandwhich", CDLStickSandwhich());
ADD("CDLTakuri", CDLTakuri());
ADD("CDLTasukiGap", CDLTasukiGap());
ADD("CDLThrusting", CDLThrusting());
ADD("CDLTristar", CDLTristar());
ADD("CDLUnique3River", CDLUnique3River());
ADD("CDLUpsideGap2Crows", CDLUpsideGap2Crows());
ADD("CDLXSideGap3Methods", CDLXSideGap3Methods());
#undef ADD
return n; // 61
}
// ================= Markov storage & helpers =================
static int* MC_Count; // [MC_STATES*MC_STATES]
static int* MC_RowSum; // [MC_STATES]
static int MC_Prev = -1;
static int MC_Cur = 0;
static var MC_PBullNext = 0.5;
static var MC_Entropy = 0.0;
#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))
int MC_stateFromCDL(var* cdl /*len=61*/, var thr)
{
int i, best=-1; var besta=0;
for(i=0;i<MC_NPAT;i++){
var a = abs(cdl[i]);
if(a>besta){ besta=a; best=i; }
}
if(best<0) return MC_NONE;
if(besta < thr) return MC_NONE;
int bull = (cdl[best] > 0);
return 1 + 2*best + bull; // 1..122
}
int MC_isBull(int s){ if(s<=0) return 0; return ((s-1)%2)==1; }
void MC_update(int sPrev,int sCur){ if(sPrev<0) return; MC_Count[MC_IDX(sPrev,sCur)]++; MC_RowSum[sPrev]++; }
// === (6) Use runtime Laplace alpha (G_MC_Alpha) ===
var MC_prob(int s,int t){
var num = (var)MC_Count[MC_IDX(s,t)] + G_MC_Alpha;
var den = (var)MC_RowSum[s] + G_MC_Alpha*MC_STATES;
if(den<=0) return 1.0/MC_STATES;
return num/den;
}
// === (6) row stats: PBull + normalized entropy in one helper
void MC_rowStats(int s, var* outPBull, var* outEntropy)
{
if(s<0){ if(outPBull) *outPBull=0.5; if(outEntropy) *outEntropy=1.0; return; }
int t; var Z=0, pBull=0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); Z+=p; if(MC_isBull(t)) pBull+=p; }
if(Z<=0){ if(outPBull) *outPBull=0.5; if(outEntropy) *outEntropy=1.0; return; }
var H=0;
for(t=1;t<MC_STATES;t++){
var p = MC_prob(s,t)/Z;
if(p>0) H += -p*log(p);
}
var Hmax = log(MC_STATES-1);
if(Hmax<=0) H = 0; else H = H/Hmax;
if(outPBull) *outPBull = pBull/Z;
if(outEntropy) *outEntropy = H;
}
// ================= HARMONIC D-TREE ENGINE =================
// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){ if(u<-1) u=-1; if(u>1) u=1; var t=0.5*(u+1.0); return lo + t*(hi-lo); }
// ---- safety helpers ----
inline var safeNum(var x)
{
if(invalid(x)) return 0; // 0 for NaN/INF
return clamp(x,-1e100,1e100); // hard-limit range
}
void sanitize(var* A,int n){ int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]); }
var sat100(var x){ return clamp(x,-100,100); }
// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap)
{
if(!dst || !src || cap <= 0) return;
int dl = strlen(dst);
int sl = strlen(src);
int room = cap - 1 - dl;
if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
int i; for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
dst[dl + i] = 0;
}
int countSubStr(string s, string sub){
if(!s || !sub) return 0;
int n=0; string p=s;
int sublen = strlen(sub);
if(sublen<=0) return 0;
while((p=strstr(p,sub))){ n++; p += sublen; }
return n;
}
// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
int h = 5381, c, i = 0;
if(!s) return h;
while((c = s[i++])) h = ((h<<5)+h) ^ c; // h*33 ^ c
return h & 0x7fffffff; // force non-negative
}
// ---- tree helpers ----
int validTreeIndex(int tid){ if(!G_TreeIdx) return 0; if(tid<0||tid>=G_TreeN) return 0; return (G_TreeIdx[tid]!=0); }
Node* treeAt(int tid){ if(validTreeIndex(tid)) return G_TreeIdx[tid]; return &G_DummyNode; }
int safeTreeIndexFromEq(int eqi){
int denom = ifelse(G_TreeN>0, G_TreeN, 1);
int tid = eqi;
if(tid < 0) tid = 0;
if(denom > 0) tid = tid % denom;
if(tid < 0) tid = 0;
return tid;
}
// ---- tree indexing ----
void pushTreeNode(Node* u){
if(G_TreeN >= G_TreeCap){
int newCap = G_TreeCap*2;
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes(); // pointer vector size changed
}
G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }
// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
if(!G_TreeIdx) return;
if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
int newCap = (G_TreeCap >> 1);
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes(); // pointer vector size changed
}
}
// ---- depth LUT helper (upgrade #1) ----
void refreshDepthW()
{
if(!G_DepthW) return;
int d;
for(d=0; d<DEPTH_LUT_SIZE; d++)
G_DepthW[d] = 1.0 / pow(d+1, G_DTreeExp);
G_DepthExpLast = G_DTreeExp;
}
// ---- tree create/eval (with pool & LUT upgrades) ----
Node* createNode(int depth)
{
Node* u = poolAllocNode();
if(!u) return 0; // safety
u->v = random();
u->r = 0.01 + 0.02*depth + random(0.005);
u->d = depth;
if(depth > 0){
u->n = 1 + (int)random(MAX_BRANCHES); // 1..MAX_BRANCHES (cast ok)
u->c = malloc(u->n * sizeof(void*));
if(!u->c){
// Could not allocate children array; keep leaf instead
u->n = 0; u->c = 0;
return u;
}
int i;
for(i=0;i<u->n;i++){
Node* child = createNode(depth - 1);
((Node**)u->c)[i] = child; // ok if child==0, downstream code guards
}
} else {
u->n = 0; u->c = 0;
}
return u;
}
var evaluateNode(Node* u) // upgrade #1
{
if(!u) return 0;
var sum = 0; int i;
for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
if(G_DepthExpLast < 0 || abs(G_DTreeExp - G_DepthExpLast) > 1e-9)
refreshDepthW();
var phase = sin(u->r * Bar + sum);
var weight = G_DepthW[u->d];
u->v = (1 - weight)*u->v + weight*phase;
return u->v;
}
int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u) // upgrade #2
{
if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]);
if(u->c) free(u->c);
poolFreeNode(u);
}
// =========== NETWORK STATE & COEFFICIENTS ===========
var* G_State; var* G_Prev; // keep as var (precision)
var* G_StateSq = 0; // upgrade #3
i16* G_Adj;
fvar* G_RP; fvar* G_Z;
i8* G_Mode;
fvar* G_WSelf; fvar* G_WN1; fvar* G_WN2; fvar* G_WGlob1; fvar* G_WGlob2; fvar* G_WMom; fvar* G_WTree; fvar* G_WAdv;
fvar* A1x; fvar* A1lam; fvar* A1mean; fvar* A1E; fvar* A1P; fvar* A1i; fvar* A1c;
fvar* A2x; fvar* A2lam; fvar* A2mean; fvar* A2E; fvar* A2P; fvar* A2i; fvar* A2c;
fvar* G1mean; fvar* G1E; fvar* G2P; fvar* G2lam;
fvar* G_TreeTerm;
#ifdef KEEP_TOP_META
i16* G_TopEq;
fvar* G_TopW;
#endif
i16* G_EqTreeId;
fvar* TAlpha; fvar* TBeta;
fvar* G_PropRaw; fvar* G_Prop;
// --- Per-equation hit-rate (EW average of 1-bar directional correctness) (PATCH B)
#define HIT_ALPHA 0.02 // EW smoothing (~50-bar memory)
#define HIT_EPS 0.0001 // ignore tiny advisor values
fvar* G_HitEW; // [N] 0..1 EW hit-rate
int* G_HitN; // [N] # of scored comparisons
fvar* G_AdvPrev; // [N] previous bar's advisor output (-1..+1)
var G_Ret1 = 0; // realized 1-bar return for scoring
// ===== Markov features exposed to DTREE =====
var G_MCF_PBull; // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State; // 0..122
// epoch/context & feedback
int G_Epoch = 0;
int G_CtxID = 0;
var G_FB_A = 0.7; // kept (not used in blend now)
var G_FB_B = 0.3; // kept (not used in blend now)
// ---------- predictability ----------
var nodePredictability(Node* t)
{
if(!t) return 0.5;
var disp = 0;
int n = t->n, i, cnt = 0;
if(t->c){
for(i=0;i<n;i++){
Node* c = ((Node**)t->c)[i];
if(c){ disp += abs(c->v - t->v); cnt++; }
}
if(cnt > 0) disp /= cnt;
}
var depthFac = 1.0/(1 + t->d);
var rateBase = 0.01 + 0.02*t->d;
var rateFac = exp(-25.0*abs(t->r - rateBase));
var p = 0.5*(depthFac + rateFac);
p = 0.5*p + 0.5*(1.0 - disp);
if(p<0) p=0; if(p>1) p=1;
return p;
}
// importance for selective pruning
var nodeImportance(Node* u)
{
if(!u) return 0;
var amp = abs(u->v); if(amp>1) amp=1;
var p = nodePredictability(u);
var depthW = 1.0/(1.0 + u->d);
var imp = (0.6*p + 0.4*amp) * depthW;
return imp;
}
// ====== Elastic growth helpers ======
// create a leaf at depth d (no children) — upgrade #2
Node* createLeafDepth(int d)
{
Node* u = poolAllocNode();
if(!u) return 0; // safety
u->v = random();
u->r = 0.01 + 0.02*d + random(0.005);
u->d = d;
u->n = 0;
u->c = 0;
return u;
}
// add up to addK new children to all nodes at frontierDepth (with memcpy) — upgrade #4
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK)
{
if(!u) return;
if(u->d == frontierDepth){
int want = addK; if(want <= 0) return;
int oldN = u->n; int newN = oldN + want;
Node** Cnew = (Node**)malloc(newN * sizeof(void*));
if(!Cnew) return; // OOM safety: keep the old child array intact
if(oldN>0 && u->c) memcpy(Cnew, u->c, oldN*sizeof(void*)); // memcpy optimization
int i; for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
if(u->c) free(u->c);
u->c = Cnew; u->n = newN; return;
}
int j; for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}
// keep top-K children by importance at targetDepth, drop the rest
void freeChildAt(Node* parent, int idx)
{
if(!parent || !parent->c) return;
Node** C = (Node**)parent->c;
freeTree(C[idx]);
int i;
for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
parent->n--;
if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK)
{
if(!u) return;
if(u->d == targetDepth-1 && u->n > 0){
int n = u->n, i, kept = 0;
int mark[16]; for(i=0;i<16;i++) mark[i]=0;
int iter;
for(iter=0; iter<keepK && iter<n; iter++){
int bestI = -1; var bestImp = -1;
for(i=0;i<n;i++){
if(i<16 && mark[i]==1) continue;
var imp = nodeImportance(((Node**)u->c)[i]);
if(imp > bestImp){ bestImp = imp; bestI = i; }
}
if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
}
for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
return;
}
int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}
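The keep-top-K logic inside pruneSelectiveAtDepth is a greedy selection: repeatedly take the best unmarked child. A minimal stand-alone C sketch (hypothetical names; `imp` stands in for `nodeImportance`, and the 16-slot cap is dropped for clarity):

```c
#include <assert.h>

/* Greedy top-K selection, mirroring the mark[] loop in
   pruneSelectiveAtDepth: K passes, each marking the best unmarked index. */
void select_top_k(const double* imp, int n, int keepK, int* mark /*[n], zeroed*/)
{
    int iter, i;
    for(iter = 0; iter < keepK && iter < n; iter++){
        int bestI = -1; double bestImp = -1;
        for(i = 0; i < n; i++){
            if(mark[i]) continue;
            if(imp[i] > bestImp){ bestImp = imp[i]; bestI = i; }
        }
        if(bestI >= 0) mark[bestI] = 1;
    }
}
```

Unmarked indices are then the pruning candidates, exactly as `freeChildAt` is applied to `mark[i]==0` children above.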
// ---------- reindex (sizes pred cache without ternary) ----------
void reindexTreeAndMap()
{
G_TreeN = 0;
indexTreeDFS(Root);
if(G_TreeN<=0){
G_TreeN=1;
if(G_TreeIdx) G_TreeIdx[0]=Root;
}
// map equations to tree nodes
int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = (i16)(i % G_TreeN);
// resize predictability cache safely (upgrade #5)
G_PredLen = G_TreeN; if(G_PredLen <= 0) G_PredLen = 1;
if(G_PredLen > G_PredCap){
if(G_PredNode) free(G_PredNode);
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
}
G_PredCacheBar = -1; // force refill next bar
maybeShrinkTreeIdx();
recalcTreeBytes();
}
// ====== Accuracy sentinel & elastic-depth controller ======
void acc_update(var x /*lambda*/, var y /*gamma*/)
{
var a = 0.01; // EW smoothing (~100-bar memory, 1/a)
ACC_mx = (1-a)*ACC_mx + a*x;
ACC_my = (1-a)*ACC_my + a*y;
ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
var vx = ACC_mx2 - ACC_mx*ACC_mx;
var vy = ACC_my2 - ACC_my*ACC_my;
var cv = ACC_mxy - ACC_mx*ACC_my;
if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy); else G_AccCorr = 0;
if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}
// utility to maximize: accuracy minus gentle memory penalty
var util_now()
{
int mb = mem_mb_est();
var mem_pen = 0;
if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB; else mem_pen = 0;
return G_AccCorr - 0.5*mem_pen;
}
// apply a +1 “grow one level” action if safe memory headroom
int apply_grow_step()
{
int mb = mem_mb_est();
if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
int newFrontier = G_RT_TreeMaxDepth;
growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
G_RT_TreeMaxDepth++;
reindexTreeAndMap();
printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
return 1;
}
// revert last growth (drop newly-added frontier children)
void revert_last_grow()
{
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}
// main elastic-depth controller; call once per bar (after acc_update)
void edc_runtime()
{
// (5) slow hill-climb on G_DTreeExp
if((Bar % DEPTH_TUNE_BARS) == 0){
var U0 = util_now();
var trial = clamp(G_DTreeExp + G_DTreeExpDir*G_DTreeExpStep, 0.8, 2.0);
var old = G_DTreeExp;
G_DTreeExp = trial;
if(util_now() + 0.005 < U0){
G_DTreeExp = old;
G_DTreeExpDir = -G_DTreeExpDir;
}
}
int mb = mem_mb_est();
if(G_TunePending){
if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
G_UtilAfter = util_now();
var eps = 0.01;
if(G_UtilAfter + eps < G_UtilBefore){
revert_last_grow();
} else {
printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter);
}
G_TunePending = 0; G_TuneAction = 0;
}
return;
}
if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
G_UtilBefore = util_now();
if(apply_grow_step()){
G_TunePending = 1; G_TuneAction = 1; G_TuneStartBar = Bar;
}
}
}
// Builds "Log\\Alpha12_eq_###.csv" into outName (must be >=64 bytes)
void buildEqFileName(int idx, char* outName /*>=64*/)
{
strcpy(outName, "Log\\Alpha12_eq_");
string idxs = strf("%03i", idx);
strcat(outName, idxs);
strcat(outName, ".csv");
}
// ===== consolidated EQ log =====
void writeEqHeaderOnce()
{
static int done=0; if(done) return; done=1;
file_append("Log\\Alpha12_eq_all.csv",
"Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}
void appendEqMetaLine(
int bar, int epoch, int ctx, int i, int n1, int n2, int tid, int depth, var rate,
var pred, var adv, var prop, int mode, var wadv, var wtree,
var pbull, var ent, int mcstate, string expr)
{
if(i >= LOG_EQ_SAMPLE) return;
int eLen = 0, eHash = 0, cT = 0, cS = 0, cC = 0;
if(expr){
eLen = (int)strlen(expr);
eHash = (int)djb2_hash(expr);
cT = countSubStr(expr,"tanh(");
cS = countSubStr(expr,"sin(");
cC = countSubStr(expr,"cos(");
} else {
eHash = (int)djb2_hash("");
}
#ifdef LOG_FLOAT_TRIM
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.4f,%.4f,%.4f,%.4f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth,
rate, pred, adv, prop, mode, wadv, wtree, pbull, ent,
mcstate, eLen, eHash, cT, cS, cC));
#else
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,%.4f,%.4f,%.6f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth,
rate, pred, adv, prop, mode, wadv, wtree, pbull, ent,
mcstate, eLen, eHash, cT, cS, cC));
#endif
}
// --------- allocation ----------
void randomizeRP()
{
int K=G_K,N=G_N,k,j;
for(k=0;k<K;k++)
for(j=0;j<N;j++)
G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}
// === (8/9) Use effective K + per-bar guard ===
int G_ProjBar = -1; int G_ProjK = -1;
void computeProjection(){
if(G_ProjBar == Bar && G_ProjK == G_Keff) return; // guard (upgrade #9)
int K=G_Keff, N=G_N, k, j;
for(k=0;k<K;k++){
var acc=0;
for(j=0;j<N;j++) acc += (var)G_RP[k*N+j]*G_StateSq[j]; // reuse squares (upgrade #3)
G_Z[k]=(fvar)acc;
}
G_ProjBar = Bar; G_ProjK = G_Keff;
}
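randomizeRP plus computeProjection implement a sign random projection: K summaries, each a ±1-weighted sum over the N (squared) states. A stand-alone C sketch with the sign matrix passed in (hypothetical names; the ±1 entries correspond to `G_RP`, `x` to `G_StateSq`):

```c
#include <assert.h>

/* Sign random projection: z[k] = sum_j sign_rp[k][j] * x[j]. */
void project(const double* sign_rp /*K*N of ±1*/, const double* x,
             int K, int N, double* z /*K*/)
{
    int k, j;
    for(k = 0; k < K; k++){
        double acc = 0;
        for(j = 0; j < N; j++) acc += sign_rp[k*N + j]*x[j];
        z[k] = acc;
    }
}
```

The per-bar guard in computeProjection (`G_ProjBar`/`G_ProjK`) simply avoids recomputing this K*N loop more than once per bar for the same effective K.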
// D) Compact allocate/free
void allocateNet()
{
int N = G_N, D = G_D, K = G_K;
// core
G_State = (var*)malloc(N*sizeof(var));
G_Prev = (var*)malloc(N*sizeof(var));
G_StateSq= (var*)malloc(N*sizeof(var));
// graph / projection
G_Adj = (i16*) malloc(N*D*sizeof(i16));
G_RP = (fvar*) malloc(K*N*sizeof(fvar));
G_Z = (fvar*) malloc(K*sizeof(fvar));
G_Mode = (i8*) malloc(N*sizeof(i8));
// weights & params
G_WSelf = (fvar*)malloc(N*sizeof(fvar));
G_WN1 = (fvar*)malloc(N*sizeof(fvar));
G_WN2 = (fvar*)malloc(N*sizeof(fvar));
G_WGlob1= (fvar*)malloc(N*sizeof(fvar));
G_WGlob2= (fvar*)malloc(N*sizeof(fvar));
G_WMom = (fvar*)malloc(N*sizeof(fvar));
G_WTree = (fvar*)malloc(N*sizeof(fvar));
G_WAdv = (fvar*)malloc(N*sizeof(fvar));
A1x = (fvar*)malloc(N*sizeof(fvar));
A1lam = (fvar*)malloc(N*sizeof(fvar));
A1mean= (fvar*)malloc(N*sizeof(fvar));
A1E = (fvar*)malloc(N*sizeof(fvar));
A1P = (fvar*)malloc(N*sizeof(fvar));
A1i = (fvar*)malloc(N*sizeof(fvar));
A1c = (fvar*)malloc(N*sizeof(fvar));
A2x = (fvar*)malloc(N*sizeof(fvar));
A2lam = (fvar*)malloc(N*sizeof(fvar));
A2mean= (fvar*)malloc(N*sizeof(fvar));
A2E = (fvar*)malloc(N*sizeof(fvar));
A2P = (fvar*)malloc(N*sizeof(fvar));
A2i = (fvar*)malloc(N*sizeof(fvar));
A2c = (fvar*)malloc(N*sizeof(fvar));
G1mean= (fvar*)malloc(N*sizeof(fvar));
G1E = (fvar*)malloc(N*sizeof(fvar));
G2P = (fvar*)malloc(N*sizeof(fvar));
G2lam = (fvar*)malloc(N*sizeof(fvar));
TAlpha= (fvar*)malloc(N*sizeof(fvar));
TBeta = (fvar*)malloc(N*sizeof(fvar));
G_TreeTerm = (fvar*)malloc(N*sizeof(fvar));
#ifdef KEEP_TOP_META
G_TopEq = (i16*) malloc(N*sizeof(i16));
G_TopW = (fvar*)malloc(N*sizeof(fvar));
#endif
G_PropRaw = (fvar*)malloc(N*sizeof(fvar));
G_Prop = (fvar*)malloc(N*sizeof(fvar));
if(LOG_EXPR_TEXT) G_Sym = (string*)malloc(N*sizeof(char*)); else G_Sym = 0;
// tree index
G_TreeCap = 128;
G_TreeIdx = (Node**)malloc(G_TreeCap*sizeof(Node*));
G_TreeN = 0;
G_EqTreeId = (i16*)malloc(N*sizeof(i16));
// initialize adjacency
{ int t; for(t=0; t<N*D; t++) G_Adj[t] = -1; }
// initialize state and parameters
{
int i;
for(i=0;i<N;i++){
G_State[i] = random();
G_Prev[i] = G_State[i];
G_StateSq[i]= G_State[i]*G_State[i];
G_Mode[i] = 0;
G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2;
G_WGlob1[i]=0.1; G_WGlob2[i]=0.1;
G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;
A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
G1mean[i]=1.0; G1E[i]=0.001; G2P[i]=0.6; G2lam[i]=0.3;
TAlpha[i]=0.8; TBeta[i]=25.0;
G_TreeTerm[i]=0;
#ifdef KEEP_TOP_META
G_TopEq[i]=-1; G_TopW[i]=0;
#endif
G_PropRaw[i]=1; G_Prop[i]=1.0/G_N;
if(LOG_EXPR_TEXT){
G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
if(G_Sym[i]) strcpy(G_Sym[i],"");
}
}
}
// --- Hit-rate state --- (PATCH D)
G_HitEW = (fvar*)malloc(N*sizeof(fvar));
G_HitN = (int*) malloc(N*sizeof(int));
G_AdvPrev = (fvar*)malloc(N*sizeof(fvar));
{
int i;
for(i=0;i<N;i++){
G_HitEW[i] = 0.5; // neutral start
G_HitN[i] = 0;
G_AdvPrev[i] = 0; // no prior advice yet
}
}
computeMemFixedBytes();
if(G_PredNode) free(G_PredNode);
G_PredLen = G_TreeN; if(G_PredLen<=0) G_PredLen=1;
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
G_PredCacheBar = -1;
}
void freeNet()
{
int i;
if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_StateSq)free(G_StateSq);
if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z); if(G_Mode)free(G_Mode);
if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2);
if(G_WMom)free(G_WMom); if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);
if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);
if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
if(G_TreeTerm)free(G_TreeTerm);
#ifdef KEEP_TOP_META
if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
#endif
if(G_EqTreeId)free(G_EqTreeId);
if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);
if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
if(G_TreeIdx)free(G_TreeIdx); if(G_PredNode)free(G_PredNode);
}
// --------- DTREE feature builders ----------
// MEMORYLESS normalization to avoid series misuse in conditional paths
inline var nrm_s(var x) { return sat100(100.*tanh(x)); }
inline var nrm_scl(var x, var s) { return sat100(100.*tanh(s*x)); }
// F) Features accept 'pred' (no G_Pred[])
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var pred, var* S /*ADV_EQ_NF*/)
{
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
S[0] = nrm_s(G_State[i]);
S[1] = nrm_s(mean);
S[2] = nrm_scl(power,0.05);
S[3] = nrm_scl(energy,0.01);
S[4] = nrm_s(lambda);
S[5] = sat100(200.0*(pred-0.5));
S[6] = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
S[7] = sat100(1000.0*t->r);
S[8] = nrm_s(G_TreeTerm[i]);
S[9] = sat100( (200.0/3.0) * (var)( (int)G_Mode[i] ) - 100.0 );
S[10] = sat100(200.0*(G_MCF_PBull-0.5));
S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
S[12] = sat100(200.0*((var)G_HitEW[i] - 0.5)); // NEW: reliability feature (PATCH G)
sanitize(S,ADV_EQ_NF);
}
// (Kept for completeness; not used by DTREE anymore)
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* ti = treeAt(tid_i);
Node* tj = treeAt(tid_j);
var predi = predByTid(tid_i);
var predj = predByTid(tid_j);
P[0]=nrm_s(G_State[i]); P[1]=nrm_s(G_State[j]);
P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
P[4]=sat100(1000.0*ti->r); P[5]=sat100(1000.0*tj->r);
P[6]=sat100(abs(P[2]-P[3]));
P[7]=sat100(abs(P[4]-P[5]));
P[8]=sat100(100.0*(predi+predj-1.0));
P[9]=nrm_s(lambda); P[10]=nrm_s(mean); P[11]=nrm_scl(power,0.05);
sanitize(P,ADV_PAIR_NF);
}
// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
int N = G_N, D = G_D;
if(!G_Adj || N <= 1 || D <= 0) return 0;
if(d < 0) d = 0; if(d >= D) d = d % D;
int v = G_Adj[i*D + d];
if(v < 0 || v >= N || v == i) v = (i + 1) % N;
return v;
}
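The fallback rules in adjSafe can be expressed as a small pure C function (hypothetical signature; the adjacency table is passed in rather than read from the global `G_Adj`):

```c
#include <assert.h>

/* Neighbor lookup with adjSafe's fallbacks: out-of-range, negative,
   or self-referencing entries fall back to the ring neighbor (i+1)%N. */
int adj_safe(const short* adj, int N, int D, int i, int d)
{
    if(!adj || N <= 1 || D <= 0) return 0;
    if(d < 0) d = 0;
    if(d >= D) d = d % D;
    int v = adj[i*D + d];
    if(v < 0 || v >= N || v == i) v = (i + 1) % N;
    return v;
}
```

This guarantees every lookup returns a valid, non-self index even before sanitizeAdjacency has run.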
void sanitizeAdjacency(){
if(!G_Adj) return;
int N = G_N, D = G_D, i, d;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
i16 *p = &G_Adj[i*D + d];
if(*p < 0 || *p >= N || *p == i){
int r = (int)random(N);
if(r == i) r = (r+1) % N;
*p = (i16)r;
}
}
if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
int r2 = (G_Adj[i*D+1] + 1) % N;
if(r2 == i) r2 = (r2+1) % N;
G_Adj[i*D+1] = (i16)r2;
}
}
}
// --------- advisor helpers (NEW) ----------
// cache one advisor value per equation per bar
var adviseSeed(int i, var lambda, var mean, var energy, var power)
{
static int seedBar = -1;
static int haveSeed[NET_EQNS];
static var seedVal[NET_EQNS];
if(seedBar != Bar){
int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
seedBar = Bar;
}
if(i < 0) i = 0;
if(i >= NET_EQNS) i = i % NET_EQNS;
if(!allowAdvise(i)) return 0;
if(!haveSeed[i]){
seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
haveSeed[i] = 1;
}
return seedVal[i];
}
// simple deterministic mixer for diversity in [-1..1] without extra advise calls
var mix01(var a, int salt){
var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
return tanh(0.75*z);
}
// --------- advise wrappers (single-equation only) ----------
// upgrade #7: early exit on tight memory BEFORE building features
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
if(!allowAdvise(i)) return 0;
if(is(INITRUN)) return 0;
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
if(tight) return 0;
// --- Patch L: Prefer advising reliable equations; explore a bit for the rest
if(G_HitN[i] > 32){ // wait until some evidence
var h = (var)G_HitEW[i];
var gate = 0.40 + 0.15*(1.0 - MC_Entropy); // uses the Markov entropy directly
if(h < gate){
if(random() >= 0.5) return 0; // ~50% exploration
}
}
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var pred = predByTid(tid);
var S[ADV_EQ_NF];
buildEqFeatures(i,lambda,mean,energy,power,pred,S);
// --- Patch 4: reliability-weighted DTREE training objective
var obj = 0;
if(Train){
obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));
// Reliability prior (0.5..1.0) to bias learning toward historically better equations
var prior = 0.75 + 0.5*((var)G_HitEW[i] - 0.5); // 0.5..1.0
obj *= prior;
}
int objI = (int)obj;
var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
return a/100.;
}
// --------- advisePair disabled: never call DTREE here ----------
var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
return 0;
}
// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power)
{
int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
Node *ni = treeAt(ti), *nj = treeAt(tj);
var simD = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
var dr = 50.0*abs(ni->r - nj->r); // upgrade #10
var simR = 1.0 / (1.0 + dr);
var predi = predByTid(ti);
var predj = predByTid(tj);
var pred = 0.5*(predi + predj);
var score = 0.5*pred + 0.3*simD + 0.2*simR;
return 2.0*score - 1.0;
}
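scorePairSafe blends mean predictability with depth and rate similarity, then maps the [0,1] blend to [-1,1]. A stand-alone C sketch (hypothetical names; predictabilities passed in instead of looked up via `predByTid`):

```c
#include <assert.h>
#include <math.h>

/* Heuristic pair score: 0.5*pred + 0.3*depth-similarity
   + 0.2*rate-similarity, remapped from [0,1] to [-1,1]. */
double score_pair(int di, int dj, double ri, double rj,
                  double predi, double predj)
{
    double simD = 1.0/(1.0 + fabs((double)di - dj));
    double simR = 1.0/(1.0 + 50.0*fabs(ri - rj));   /* upgrade #10 scaling */
    double pred = 0.5*(predi + predj);
    double score = 0.5*pred + 0.3*simD + 0.2*simR;
    return 2.0*score - 1.0;
}
```

Identical depth and rate with perfect predictability saturate the score at +1; dissimilar, unpredictable pairs drift toward -1.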
// --------- adjacency selection (heuristic only) ----------
// safer clash check using prev>=0
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
int N=G_N, D=G_D, i, d, c, best, cand;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){ do{ best = (int)random(N);} while(best==i); }
G_Adj[i*D + d] = (i16)best;
}
}
}
// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
var seed = adviseSeed(i,lambda,mean,energy,power);
G_Mode[i] = (int)(abs(1000*seed)) & 3;
// derive weights & params deterministically from the single seed
G_WSelf[i] = (fvar)mapA(mix01(seed, 11), 0.15, 0.85);
G_WN1[i] = (fvar)mapA(mix01(seed, 12), 0.05, 0.35);
G_WN2[i] = (fvar)mapA(mix01(seed, 13), 0.05, 0.35);
G_WGlob1[i] = (fvar)mapA(mix01(seed, 14), 0.05, 0.30);
G_WGlob2[i] = (fvar)mapA(mix01(seed, 15), 0.05, 0.30);
G_WMom[i] = (fvar)mapA(mix01(seed, 16), 0.02, 0.15);
G_WTree[i] = (fvar)mapA(mix01(seed, 17), 0.05, 0.35);
G_WAdv[i] = (fvar)mapA(mix01(seed, 18), 0.05, 0.35);
A1x[i] = (fvar)(randsign()*mapA(mix01(seed, 21), 0.6, 1.2));
A1lam[i] = (fvar)(randsign()*mapA(mix01(seed, 22), 0.05,0.35));
A1mean[i]= (fvar) mapA(mix01(seed, 23),-0.30,0.30);
A1E[i] = (fvar) mapA(mix01(seed, 24),-0.0015,0.0015);
A1P[i] = (fvar) mapA(mix01(seed, 25),-0.30,0.30);
A1i[i] = (fvar) mapA(mix01(seed, 26),-0.02,0.02);
A1c[i] = (fvar) mapA(mix01(seed, 27),-0.20,0.20);
A2x[i] = (fvar)(randsign()*mapA(mix01(seed, 31), 0.6, 1.2));
A2lam[i] = (fvar)(randsign()*mapA(mix01(seed, 32), 0.05,0.35));
A2mean[i]= (fvar) mapA(mix01(seed, 33),-0.30,0.30);
A2E[i] = (fvar) mapA(mix01(seed, 34),-0.0015,0.0015);
A2P[i] = (fvar) mapA(mix01(seed, 35),-0.30,0.30);
A2i[i] = (fvar) mapA(mix01(seed, 36),-0.02,0.02);
A2c[i] = (fvar) mapA(mix01(seed, 37),-0.20,0.20);
G1mean[i] = (fvar) mapA(mix01(seed, 41), 0.4, 1.6);
G1E[i] = (fvar) mapA(mix01(seed, 42),-0.004,0.004);
G2P[i] = (fvar) mapA(mix01(seed, 43), 0.1, 1.2);
G2lam[i] = (fvar) mapA(mix01(seed, 44), 0.05, 0.7);
TAlpha[i] = (fvar) mapA(mix01(seed, 51), 0.3, 1.5);
TBeta[i] = (fvar) mapA(mix01(seed, 52), 6.0, 50.0);
G_PropRaw[i] = (fvar)(0.01 + 0.99*(0.5*(seed+1.0)));
// Reliability-aware proportion boost (0.75..1.25 multiplier) (PATCH I)
{
var boost = 0.75 + 0.5*(var)G_HitEW[i];
G_PropRaw[i] = (fvar)((var)G_PropRaw[i] * boost);
}
}
void normalizeProportions()
{
int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = (fvar)(1.0/N); return; }
for(i=0;i<N;i++) G_Prop[i] = (fvar)(G_PropRaw[i]/s);
}
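normalizeProportions is a standard sum-normalization with a uniform fallback when the raw weights are degenerate. A C sketch with the same fallback (hypothetical names):

```c
#include <assert.h>
#include <math.h>

/* Proportion normalization: divide by the sum, or fall back to
   uniform 1/n when the raw weights sum to <= 0. */
void normalize_props(const double* raw, int n, double* out)
{
    int i; double s = 0;
    for(i = 0; i < n; i++) s += raw[i];
    if(s <= 0){ for(i = 0; i < n; i++) out[i] = 1.0/n; return; }
    for(i = 0; i < n; i++) out[i] = raw[i]/s;
}
```

The output always sums to 1, which is what lets G_Prop act as an attention/capital budget across equations.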
// H) dtreeTerm gets predictabilities on demand
var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
int N=G_N,j;
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* ti=treeAt(tid_i); int di=ti->d; var ri=ti->r;
var predI = predByTid(tid_i);
var alpha=TAlpha[i], beta=TBeta[i];
var sumw=0, acc=0, bestW=-1; int bestJ=-1;
for(j=0;j<N;j++){
if(j==i) continue;
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* tj=treeAt(tid_j); int dj=tj->d; var rj=tj->r;
var predJ = predByTid(tid_j);
var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
var predBoost = 0.5 + 0.5*(predI*predJ);
var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
w *= predBoost * propBoost;
var pairAdv = scorePairSafe(i,j,0,0,0,0);
var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
w *= pairBoost;
sumw += w; acc += w*G_State[j];
if(w>bestW){bestW=w; bestJ=j;}
}
if(outTopEq) *outTopEq = bestJ;
if(outTopW) *outTopW = ifelse(sumw>0, bestW/sumw, 0);
if(sumw>0) return acc/sumw; return 0;
}
// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
if(LOG_EXPR_TEXT){
string s = G_Sym[i]; s[0]=0;
string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A1x[i], n1, (var)A1lam[i], (var)A1mean[i], (var)A1E[i], (var)A1P[i], (var)A1i[i], (var)A1c[i]);
string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A2x[i], n2, (var)A2lam[i], (var)A2mean[i], (var)A2E[i], (var)A2P[i], (var)A2i[i], (var)A2c[i]);
strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*x[i] + ", (var)G_WSelf[i]), EXPR_MAXLEN);
if(G_Mode[i]==1){
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else if(G_Mode[i]==2){
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else {
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
}
strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", (var)G_WGlob1[i], (var)G1mean[i], (var)G1E[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ", (var)G_WGlob2[i], (var)G2P[i], (var)G2lam[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ", (var)G_WMom[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("Prop[i]=%.4f; ", (var)G_Prop[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DT(i) + ", (var)G_WTree[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DTREE(i)", (var)G_WAdv[i]), EXPR_MAXLEN);
}
}
// ---------- one-time rewire init (call central reindex) ----------
void rewireInit()
{
randomizeRP();
computeProjection();
reindexTreeAndMap(); // ensures G_PredNode sized before any use
}
// ----------------------------------------------------------------------
// I) Trim rewireEpoch (no G_Pred sweep; same behavior)
// ----------------------------------------------------------------------
void rewireEpoch(var lambda, var mean, var energy, var power)
{
int i;
if(ENABLE_WATCH) watch("?A"); // before adjacency
// (7) adapt breadth by regime entropy
G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH);
rewireAdjacency_DTREE(lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?C"); // after adjacency
sanitizeAdjacency();
for(i=0;i<G_N;i++)
synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?D");
normalizeProportions();
// Unsigned context hash of current adjacency (+ epoch) for logging
{
int D = G_D;
unsigned int h = 2166136261u;
int total = G_N * D;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
if(LOG_EXPR_TEXT){
for(i=0;i<G_N;i++){
int n1, n2;
n1 = adjSafe(i,0);
if(G_D >= 2) n2 = adjSafe(i,1); else n2 = n1;
buildSymbolicExpr(i,n1,n2);
}
}
}
var projectNet()
{
int N=G_N,i; var sum=0,sumsq=0,cross=0;
for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
var mean=sum/N, corr=0; if(N>1) corr = cross/(N-1); // guard N==1
return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}
Last edited by TipmyPip; 09/19/25 04:45.
Canticle of the Rewoven Mandala
[Re: TipmyPip]
#488911
09/14/25 12:28
TipmyPip
OP
Member
Joined: Sep 2017
Posts: 164
The code continues from the previous post.
// ----------------------------------------------------------------------
// J) Tighten updateNet (local pred, no G_AdvScore, log directly)
// ----------------------------------------------------------------------
void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
int N = G_N, D = G_D, i;
var sum = 0, sumsq = 0;
for(i = 0; i < N; i++){ sum += G_State[i]; sumsq += G_State[i]*G_State[i]; }
var mean = sum / N;
var energy = sumsq;
var power = sumsq / N;
for(i = 0; i < N; i++){
int n1, n2;
n1 = adjSafe(i,0);
if(D >= 2) n2 = adjSafe(i,1); else n2 = n1;
var xi = G_State[i];
var xn1 = G_State[n1];
var xn2 = G_State[n2];
var mom = xi - G_Prev[i];
// --- EW hit-rate update from previous bar's advice vs this bar's realized return (PATCH H)
{
int canScore = 1;
if(is(INITRUN)) canScore = 0;
if(Bar <= LookBack) canScore = 0;
if(abs((var)G_AdvPrev[i]) <= HIT_EPS) canScore = 0;
if(canScore){
int sameSign = 0;
if( ( (var)G_AdvPrev[i] > 0 && G_Ret1 > 0 ) || ( (var)G_AdvPrev[i] < 0 && G_Ret1 < 0 ) )
sameSign = 1;
G_HitEW[i] = (fvar)((1.0 - HIT_ALPHA)*(var)G_HitEW[i] + HIT_ALPHA*(var)sameSign);
if(G_HitN[i] < 0x7fffffff) G_HitN[i] += 1;
}
}
int topEq = -1; var topW = 0;
var dt = dtreeTerm(i, &topEq, &topW);
G_TreeTerm[i] = (fvar)dt;
#ifdef KEEP_TOP_META
G_TopEq[i] = (i16)topEq;
G_TopW[i] = (fvar)topW;
#endif
{
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var pred = predByTid(tid); // local predictability if you need it for features
var adv = 0;
if(allowAdvise(i))
adv = adviseEq(i, driver, mean, energy, power);
// Reliability gating of advisor by hit-rate (0.5..1.0) (PATCH H)
var wHit = 0.5 + 0.5*(var)G_HitEW[i];
var advEff = adv * wHit;
var arg1 = (var)(A1x[i])*xn1 + (var)(A1lam[i])*driver + (var)(A1mean[i])*mean + (var)(A1E[i])*energy + (var)(A1P[i])*power + (var)(A1i[i])*i + (var)(A1c[i]);
var arg2 = (var)(A2x[i])*xn2 + (var)(A2lam[i])*driver + (var)(A2mean[i])*mean + (var)(A2E[i])*energy + (var)(A2P[i])*power + (var)(A2i[i])*i + (var)(A2c[i]);
var nl1, nl2;
if(G_Mode[i] == 0){ nl1 = sin(arg1); nl2 = cos(arg2); }
else if(G_Mode[i] == 1){ nl1 = tanh(arg1); nl2 = sin(arg2); }
else if(G_Mode[i] == 2){ nl1 = cos(arg1); nl2 = tanh(arg2); }
else { nl1 = sin(arg1); nl2 = cos(arg2); }
var glob1 = tanh((var)G1mean[i]*mean + (var)G1E[i]*energy);
var glob2 = sin ((var)G2P[i]*power + (var)G2lam[i]*driver);
var xNew =
(var)G_WSelf[i]*xi +
(var)G_WN1[i]*nl1 +
(var)G_WN2[i]*nl2 +
(var)G_WGlob1[i]*glob1 +
(var)G_WGlob2[i]*glob2 +
(var)G_WMom[i]*mom +
(var)G_WTree[i]*dt +
(var)G_WAdv[i]*advEff; // <-- changed to advEff (PATCH H)
G_Prev[i] = xi;
G_State[i] = clamp(xNew, -10, 10);
// Keep last advisor output for next-bar scoring (PATCH H)
G_AdvPrev[i] = (fvar)adv;
}
if(writeMeta && (G_Epoch % META_EVERY == 0) && !G_LogsOff){
int tid2, nn1, nn2;
Node* t2;
tid2 = safeTreeIndexFromEq(G_EqTreeId[i]);
t2 = treeAt(tid2);
nn1 = adjSafe(i,0);
if(G_D >= 2) nn2 = adjSafe(i,1); else nn2 = nn1;
if(LOG_EQ_TO_ONE_FILE){
string expr = "";
if(LOG_EXPR_TEXT) expr = G_Sym[i];
appendEqMetaLine(
Bar, G_Epoch, G_CtxID, i, nn1, nn2, tid2, t2->d, t2->r,
predByTid(tid2), 0, G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr
);
} else {
char fname[64];
buildEqFileName(i, fname);
string expr2 = "";
if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
#ifdef LOG_FLOAT_TRIM
file_append(fname,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.4f,Pred=%.4f,Adv=%.4f,Prop=%.4f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid2, t2->d, t2->r,
predByTid(tid2), 0.0, (var)G_Prop[i], G_Mode[i], (var)G_WAdv[i], (var)G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
#else
file_append(fname,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid2, t2->d, t2->r,
predByTid(tid2), 0.0, (var)G_Prop[i], G_Mode[i], (var)G_WAdv[i], (var)G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
#endif
}
}
}
// refresh squared state once per bar for projection (upgrade #3)
int jj; for(jj=0; jj<N; jj++) G_StateSq[jj] = G_State[jj]*G_State[jj];
if(outMean) *outMean = mean;
if(outEnergy) *outEnergy = energy;
if(outPower) *outPower = power;
}
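The hit-rate bookkeeping woven through updateNet (PATCH H) is a one-line exponentially weighted average of directional correctness: score last bar's advice against this bar's realized return, and skip scoring when the advice was effectively zero. Isolated as a pure C function (hypothetical name; `alpha` and `eps` correspond to HIT_ALPHA and HIT_EPS):

```c
#include <assert.h>
#include <math.h>

/* EW directional hit-rate: hit <- (1-a)*hit + a*[sign(adv)==sign(ret)];
   unscored (unchanged) when |adv| is below the epsilon threshold. */
double hit_update(double hit, double adv_prev, double ret,
                  double alpha, double eps)
{
    if(fabs(adv_prev) <= eps) return hit;        /* nothing to score */
    int same = (adv_prev > 0 && ret > 0) || (adv_prev < 0 && ret < 0);
    return (1.0 - alpha)*hit + alpha*(double)same;
}
```

Starting from the neutral 0.5, each correct call nudges the rate up and each wrong call nudges it down, which is what the reliability gate `wHit = 0.5 + 0.5*G_HitEW[i]` then feeds on.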
// ----------------- MAIN -----------------
function run()
{
static int initialized = 0;
static var lambda;
static int fileInit = 0;
BarPeriod = BAR_PERIOD;
if(LookBack < NWIN) LookBack = NWIN;
if(Train) Hedge = 2;
// Plots are opt-in via ENABLE_PLOTS
set(RULES|LEAN);
if(ENABLE_PLOTS) set(PLOTNOW);
asset(ASSET_SYMBOL);
// --- 1-bar realized return for scoring (Close_t - Close_{t-1}) (PATCH F)
{
static var *S_Close;
S_Close = series(priceClose());
if(Bar > LookBack)
G_Ret1 = S_Close[0] - S_Close[1];
else
G_Ret1 = 0;
}
if(is(INITRUN) && !initialized){
// init dummy node
G_DummyNode.v = 0;
G_DummyNode.r = 0;
G_DummyNode.c = 0;
G_DummyNode.n = 0;
G_DummyNode.d = 0;
// allocate Markov matrices (zeroed)
MC_Count = (int*)malloc(MC_STATES*MC_STATES*sizeof(int));
MC_RowSum = (int*)malloc(MC_STATES*sizeof(int));
{
int k;
for(k=0;k<MC_STATES*MC_STATES;k++) MC_Count[k]=0;
for(k=0;k<MC_STATES;k++) MC_RowSum[k]=0;
}
// Candlestick list (names not needed)
buildCDL_TA61(0, 0);
// build tree + network
// Pre-warm node pool so first allocation is guaranteed aligned & ready
if(!G_ChunkHead){
NodeChunk* ch0 = (NodeChunk*)malloc(sizeof(NodeChunk));
if(!ch0) quit("Alpha12: OOM preallocating NodeChunk");
memset(ch0, 0, sizeof(NodeChunk));
ch0->next = 0; ch0->used = 0;
G_ChunkHead = ch0;
}
Root = createNode(MAX_DEPTH);
recalcTreeBytes();
allocateNet();
// ---- depth LUT allocation (heap) ----
G_DepthW = (var*)malloc(DEPTH_LUT_SIZE * sizeof(var));
{ int d; for(d=0; d<DEPTH_LUT_SIZE; d++) G_DepthW[d] = 0; }
G_DepthExpLast = -1.0; // force first refresh
// engine params
G_DTreeExp = 1.10 + random(0.50); // [1.10..1.60)
G_FB_A = 0.60 + random(0.25); // [0.60..0.85)
G_FB_B = 1.0 - G_FB_A;
refreshDepthW(); // prefill LUT
randomizeRP();
computeProjection();
rewireInit();
G_Epoch = 0;
rewireEpoch(0,0,0,0);
// Header setup (consolidated vs legacy)
if(LOG_EQ_TO_ONE_FILE){
writeEqHeaderOnce();
} else {
char fname[64];
int i2;
for(i2=0;i2<NET_EQNS;i2++){
buildEqFileName(i2,fname);
file_append(fname,
"Bar,lambda,gamma,i,State,n1,n2,mean,energy,power,Vel,Mode,WAdv,WSelf,WN1,WN2,WGlob1,WGlob2,WMom,WTree,Pred,Adv,Prop,TreeTerm,TopEq,TopW,TreeId,Depth,Rate,PBull,Entropy,MCState\n");
}
}
// Markov CSV header
if(!fileInit){
file_append("Log\\Alpha12_markov.csv","Bar,State,PBullNext,Entropy,RowSum\n");
fileInit=1;
}
// initial META dump (consolidated or legacy)
{
int i;
for(i=0;i<G_N;i++){
int n1, n2, tid;
Node* t;
var pred, adv;
n1 = adjSafe(i,0);
if(G_D >= 2) n2 = adjSafe(i,1); else n2 = n1;
tid = safeTreeIndexFromEq(G_EqTreeId[i]);
t = treeAt(tid);
pred = predByTid(tid);
adv = 0; // no advising during INITRUN
if(LOG_EQ_TO_ONE_FILE){
string expr = "";
if(LOG_EXPR_TEXT) expr = G_Sym[i];
appendEqMetaLine(
Bar, G_Epoch, G_CtxID, i, n1, n2, tid, t->d, t->r,
pred, adv, G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr
);
} else {
char fname2[64];
buildEqFileName(i,fname2);
string expr2 = "";
if(LOG_EXPR_TEXT) expr2 = G_Sym[i];
#ifdef LOG_FLOAT_TRIM
file_append(fname2,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.4f,Pred=%.4f,Adv=%.4f,Prop=%.4f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
pred, adv, (var)G_Prop[i], G_Mode[i], (var)G_WAdv[i], (var)G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
#else
file_append(fname2,
strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
pred, adv, (var)G_Prop[i], G_Mode[i], (var)G_WAdv[i], (var)G_WTree[i],
MC_PBullNext, MC_Entropy, MC_Cur, expr2));
#endif
}
}
}
initialized=1;
printf("\nRoot nodes: %i | Net equations: %i (degree=%i, kproj=%i)",
countNodes(Root), G_N, G_D, G_K);
}
// early zero-cost shedding when approaching cap
if(mem_mb_est() >= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_ShedStage == 0)
shed_zero_cost_once();
// ==== Runtime memory / depth manager (acts only when near the cap)
depth_manager_runtime();
// ====== Per bar: Candles -> Markov (with optional cadence)
{
static var CDL[MC_NPAT];
if((Bar % MC_EVERY) == 0){
buildCDL_TA61(CDL,0);
MC_Cur = MC_stateFromCDL(CDL, G_MC_ACT);
if(Bar > LookBack) MC_update(MC_Prev, MC_Cur);
MC_Prev = MC_Cur;
{
var rs = (var)MC_RowSum[MC_Cur];
G_MC_Alpha = clamp(1.0 / (1.0 + rs/256.0), 0.05, 1.0);
}
// one-pass stats (upgrade #6)
MC_rowStats(MC_Cur, &MC_PBullNext, &MC_Entropy);
}
// expose Markov features
G_MCF_PBull = MC_PBullNext;
G_MCF_Entropy = MC_Entropy;
G_MCF_State = (var)MC_Cur;
// adaptive acceptance rate -> adjust threshold
{
var aEW = 0.01; // ~100-bar half-life
G_AccRate = (1 - aEW)*G_AccRate + aEW*(MC_Cur != 0);
{
var target = 0.35; // aim for ~35% nonzero states
G_MC_ACT = clamp(G_MC_ACT + 0.02*(G_AccRate - target), 0.15, 0.60);
}
}
}
// ====== Tree driver lambda
lambda = evaluateNode(Root);
// ====== Rewire cadence (4) + epoch work
{
int doRewire = ((Bar % REWIRE_EVERY) == 0);
// --- Patch N: If reliability is weak, allow a light early rewire
{
var HitAvg = 0; int ii;
for(ii=0; ii<G_N; ii++) HitAvg += (var)G_HitEW[ii] * (var)G_Prop[ii];
if(G_N <= 0) HitAvg = 0.5;
// At most 1/4 the normal cadence, only when weak
if(HitAvg < 0.46 && (Bar % (REWIRE_EVERY/4)) == 0) doRewire = 1;
}
// (4) early rewire when utility falls
{
static var U_prev = 0;
var U_now = util_now();
if(U_now + 0.01 < U_prev) doRewire = 1;
U_prev = U_now;
}
if(doRewire){
G_Epoch++;
{
int ii;
var sum=0;
for(ii=0;ii<G_N;ii++) sum += G_State[ii];
{
var mean = sum/G_N;
var energy=0;
for(ii=0;ii<G_N;ii++) energy += G_State[ii]*G_State[ii];
var power = energy/G_N;
rewireEpoch(lambda,mean,energy,power);
}
}
}
// (8/9) adapt effective projection K each bar and recompute projection once
G_Keff = ifelse(MC_Entropy < 0.45, KPROJ, KPROJ/2);
computeProjection();
// (3) dynamic advisor budget per bar (before updateNet so it applies now)
{
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
G_AdviseMax = ifelse(tight, 12, ifelse(MC_Entropy < 0.45, 32, 16));
}
// Update net this bar (write META only if rewired and not shedding logs)
{
var meanB, energyB, powerB;
updateNet(lambda, &meanB, &energyB, &powerB, doRewire);
var gamma = 0;
// Feedback: compute ensemble projection
{
gamma = projectNet();
// --- Accuracy sentinel update & elastic depth controller ---
acc_update(lambda, gamma);
edc_runtime();
// (1) Adaptive feedback blend toward the more informative component
{
var w = 0.5 + 0.5*G_AccCorr; // 0..1
G_FB_W = clamp(0.9*G_FB_W + 0.1*w, 0.2, 0.9);
lambda = G_FB_W*lambda + (1.0 - G_FB_W)*gamma;
}
}
// Plot/log gating
{
int doPlot = (ENABLE_PLOTS && !G_ChartsOff);
int doLog = ifelse(G_LogsOff, ((Bar % (LOG_EVERY*4)) == 0), ((Bar % LOG_EVERY) == 0));
// Plots
if(doPlot){
plot("lambda", lambda, LINE, 0);
plot("gamma", gamma, LINE, 0);
plot("P_win", powerB, LINE, 0);
plot("PBullNext", MC_PBullNext, LINE, 0);
plot("MC_Entropy", MC_Entropy, LINE, 0);
plot("MemMB", memory(0)/(1024.*1024.), LINE, 0);
plot("Allocs", (var)memory(2), LINE, 0);
plot("HitEW_7", (var)G_HitEW[7], LINE, 0); // (PATCH K) watch eq #7
}
// Markov CSV log (decimated; further decimated when shedding)
if(doLog){
#ifdef LOG_FLOAT_TRIM
file_append("Log\\Alpha12_markov.csv",
strf("%i,%i,%.4f,%.4f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));
#else
file_append("Log\\Alpha12_markov.csv",
strf("%i,%i,%.6f,%.6f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));
#endif
// Optional: per-eq hit snapshot (throttled by LOG_EQ_SAMPLE) (PATCH J)
{
int ii;
for(ii=0; ii<G_N && ii<LOG_EQ_SAMPLE; ii++){
file_append("Log\\Alpha12_hits.csv",
strf("%i,%i,%.4f,%i,%.3f,%.6f\n",
Bar, ii, (var)G_HitEW[ii], G_HitN[ii], (var)G_AdvPrev[ii], G_Ret1));
// Columns: Bar,i,HitEW,HitN,PrevAdv,Ret1
}
}
}
}
}
// --- Patch M: reliability-weighted ensemble hit -> position sizing
{
var HitAvg = 0;
int ii;
for(ii=0; ii<G_N; ii++) HitAvg += (var)G_HitEW[ii] * (var)G_Prop[ii];
if(G_N <= 0) HitAvg = 0.5;
// Map 0..1 -> 0.5..2.0 lots, gently
Lots = clamp(1.0 + 2.0*(HitAvg - 0.5), 0.5, 2.0);
}
// ====== Entries (Markov-gated) ======
if( MC_PBullNext > PBULL_LONG_TH && lambda > 0.7 ) enterLong();
if( MC_PBullNext < PBULL_SHORT_TH && lambda < -0.7 ) enterShort();
}
}
// Clean up memory
function cleanup()
{
if(Root) freeTree(Root);
freeNodePool(); // upgrade #2: release pool chunks
if(MC_Count) free(MC_Count);
if(MC_RowSum) free(MC_RowSum);
if(G_DepthW) free(G_DepthW); // free LUT
freeNet();
}
Last edited by TipmyPip; 09/19/25 04:47.
Consensus Gate Orchestrator
[Re: TipmyPip]
#488924
Yesterday at 10:02
Joined: Sep 2017
Posts: 164
TipmyPip
OP
Member
Consensus Gate Orchestrator

The system follows a gate-and-flow pattern. It begins by compressing raw, fast-moving observations into a small alphabet of archetypes—a compact context that says “what the moment looks like” right now. From the rolling stream of these archetypes it infers two quiet dials: a lean (directional tendency for the immediate next step) and a clarity (how decisive that tendency appears). Those two dials form a permission gate: sometimes it opens, sometimes it holds; sometimes it opens in one direction but not the other. The gate is conservative by design and adjusts as evidence accumulates or disperses.

Beneath the gate, a soft influence field evolves continuously. Many small units—lightweight, partially independent—carry a trace of their own past, listen to a few peers, and absorb coarse summaries from the broader environment across multiple horizons. Signals are intentionally bounded to prevent spikes from dominating. Attention is rationed: weight is allocated in proportion to agreement and reliability, so faint, inconsistent voices naturally recede while convergent evidence rises to the surface.

Connections among these units are reshaped in measured slices. Rather than restarting from scratch, the system refreshes “who listens to whom” and how strongly, favoring simple, stable pairings and rhythm-compatible neighbors. Structure molts; scaffold stays. The goal is to remain adaptive without becoming erratic.

Capacity breathes with circumstances. When resources tighten or extra detail stops helping, the system trims depth where it matters least. When there’s headroom and a demonstrable benefit, it adds a thin layer. Changes are tentative and reversible: growth is trialed, scored after a delay, and rolled back if utility falls. Utility balances quality of alignment with a mild cost for complexity. Decisions happen only when permission and the influence field agree meaningfully.
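The two dials described above correspond to the Markov outputs in the listing below (MC_PBullNext as the lean, MC_Entropy as the clarity proxy, lambda as the flow signal). A minimal stand-alone C sketch of such a permission gate; the thresholds are illustrative assumptions, not the engine's tuned values:

```c
#include <assert.h>

/* Hedged sketch: permission gate built from a directional lean (pBull in 0..1)
   and a clarity proxy (entropy in 0..1, lower = more decisive).
   Threshold values here are illustrative only. */
typedef enum { HOLD = 0, OPEN_LONG = 1, OPEN_SHORT = -1 } GateAction;

GateAction gate(double pBull, double entropy, double flow)
{
    double clarityMax = 0.80;            /* refuse to act on a near-uniform row */
    if(entropy > clarityMax) return HOLD;
    if(pBull > 0.60 && flow >  0.7) return OPEN_LONG;
    if(pBull < 0.40 && flow < -0.7) return OPEN_SHORT;
    return HOLD;                         /* "do nothing" is first-class */
}
```

This mirrors the entry logic in run(), where enterLong()/enterShort() fire only when the Markov gate and the lambda flow agree.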
Timing and size of action (in any application) scale with consensus strength; ambiguity elevates patience. “Do nothing” is first-class, not failure. A compact diary records the moment’s archetype, the two gate dials, and terse sketches of how influences combined to justify the current posture. It favors clarity over detail, enabling auditability without exposing internals.

What emerges is coherence without rigidity. Groups move together when rhythms align; solos fade when clarity drops. Adaptation is maintained through many small adjustments, not dramatic overhauls, so behavior tracks structural change while staying steady between regimes.

// ======================================================================
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// with runtime memory shaping, selective depth pruning,
// elastic accuracy-aware depth growth, and equation-cycle time series.
// ======================================================================
// ================= USER CONFIG =================
#define ASSET_SYMBOL "EUR/USD"
#define BAR_PERIOD 5
#define TF_H1 12
// ... (rest of your USER CONFIG defines)
// ---- Forward declarations (needed by hooks placed early) ----
void Alpha12_init();
void Alpha12_bar();
void Alpha12_cleanup();
void updateAllMarkov();
#define MC_ACT 0.30 // initial threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH 0.60 // Markov gate for long
#define PBULL_SHORT_TH 0.40 // Markov gate for short
// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS 0 // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH 0 // 0 = disable watch() probes; 1 = enable
// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES 3
#define MAX_DEPTH 4
#define NWIN 256
#define NET_EQNS 100
#define DEGREE 4
#define KPROJ 16
#define REWIRE_EVERY 127
#define CAND_NEIGH 8
// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE 1 // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT 0 // 0: omit full expression (store signature only); 1: include text
#define META_EVERY 4 // write META every N rewires
#define LOG_EQ_SAMPLE NET_EQNS
#define EXPR_MAXLEN 512
#define LOG_FLOAT_TRIM
#define LOG_EVERY 16
#define MC_EVERY 1
// ---- DTREE feature sizes (extended: adds cycle + multi-TF features) ----
#define ADV_EQ_NF 19 // CHANGED: was 15, now +4 (5M + Relation)
#define ADV_PAIR_NF 12 // <— RESTORED: used by buildPairFeatures()
// ================= Candles -> 122-state Markov =================
#define MC_NPAT 61
#define MC_STATES 123 // 1 + 2*MC_NPAT
#define MC_NONE 0
#define MC_LAPLACE 1.0 // kept for reference; runtime uses G_MC_Alpha
// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB 50
#define MEM_HEADROOM_MB 5
#define DEPTH_STEP_BARS 16
#define KEEP_CHILDREN_HI 2
#define KEEP_CHILDREN_LO 1
#define RUNTIME_MIN_DEPTH 2
// ===== Chunked rewire settings =====
#define REWIRE_BATCH_EQ_5M 24 // equations to (re)build on 5m bars
#define REWIRE_BATCH_EQ_H1 64 // bigger chunk when an H1 closes
#define REWIRE_MIN_BATCH 8 // floor under pressure
#define REWIRE_NORM_EVERY 1 // normalize after completing 1 full pass
// If mem est near budget, scale batch down
#define REWIRE_MEM_SOFT (MEM_BUDGET_MB - 4)
#define REWIRE_MEM_HARD (MEM_BUDGET_MB - 1)
// ===== Chunked update settings (heavy DTREE/advisor in slices) =====
// (Added per your patch)
#define UPDATE_BATCH_EQ_5M 32 // heavy updates on 5m bars
#define UPDATE_BATCH_EQ_H1 96 // larger slice when an H1 closes
#define UPDATE_MIN_BATCH 8
#define UPDATE_MEM_SOFT (MEM_BUDGET_MB - 4)
#define UPDATE_MEM_HARD (MEM_BUDGET_MB - 1)
// runtime flag used by alpha12_step()
int ALPHA12_READY = 0; // single global init sentinel (int)
int G_ShedStage = 0; // 0..2
int G_LastDepthActBar = -999999;
int G_ChartsOff = 0; // gates plot()
int G_LogsOff = 0; // gates file_append cadence
int G_SymFreed = 0; // expression buffers freed
int G_RT_TreeMaxDepth = MAX_DEPTH;
// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var G_AccCorr = 0; // [-1..1]
var G_AccBase = 0; // first seen sentinel
int G_HaveBase = 0;
// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS 64 // start a growth trial this often (when memory allows)
#define TUNE_DELAY_BARS 64 // evaluate the trial after this many bars
var G_UtilBefore = 0, G_UtilAfter = 0;
int G_TunePending = 0;
int G_TuneStartBar = 0;
int G_TuneAction = 0; // +1 grow trial, 0 none
// ======================================================================
// Types & globals used by memory estimator
// ======================================================================
// HARMONIC D-TREE type
typedef struct Node {
var v;
var r;
void* c;
int n;
int d;
} Node;
// ====== Node pool (upgrade #2) ======
typedef struct NodeChunk {
struct NodeChunk* next;
int used; // 4 bytes
int _pad; // 4 bytes -> ensures nodes[] starts at 8-byte offset on 32-bit
Node nodes[256]; // each Node contains doubles; keep this 8-byte aligned
} NodeChunk;
NodeChunk* G_ChunkHead = 0;
Node* G_FreeList = 0;
Node* poolAllocNode() {
if(G_FreeList){
Node* n = G_FreeList;
G_FreeList = (Node*)n->c;
n->c = 0;
n->n = 0;
n->d = 0;
n->v = 0;
n->r = 0;
return n;
}
if(!G_ChunkHead || G_ChunkHead->used >= 256){
NodeChunk* ch = (NodeChunk*)malloc(sizeof(NodeChunk));
if(!ch) { quit("Alpha12: OOM allocating NodeChunk (poolAllocNode)"); return 0; }
memset(ch, 0, sizeof(NodeChunk));
ch->next = G_ChunkHead;
ch->used = 0;
G_ChunkHead = ch;
}
if(G_ChunkHead->used < 0 || G_ChunkHead->used >= 256){
quit("Alpha12: Corrupt node pool state");
return 0;
}
return &G_ChunkHead->nodes[G_ChunkHead->used++];
}
void poolFreeNode(Node* u){
if(!u) return;
u->c = (void*)G_FreeList;
G_FreeList = u;
}
void freeNodePool() {
NodeChunk* ch = G_ChunkHead;
while(ch){
NodeChunk* nx = ch->next;
free(ch);
ch = nx;
}
G_ChunkHead = 0;
G_FreeList = 0;
}
// Minimal globals needed before mem estimator
Node* Root = 0;
Node** G_TreeIdx = 0;
int G_TreeN = 0;
int G_TreeCap = 0;
var G_DTreeExp = 0;
// ---- (upgrade #1) depth LUT for pow() ----
#define DEPTH_LUT_SIZE (MAX_DEPTH + 1) // <- keep constant for lite-C
var* G_DepthW = 0; // heap-allocated LUT
var G_DepthExpLast = -1.0; // sentinel as var
Node G_DummyNode; // treeAt() can return &G_DummyNode
// Network sizing globals (used by mem estimator)
int G_N = NET_EQNS;
int G_D = DEGREE;
int G_K = KPROJ;
// Optional expression buffer pointer (referenced by mem estimator)
string* G_Sym = 0;
// Forward decls that reference Node
var nodePredictability(Node* t); // fwd decl (needed by predByTid)
var nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();
// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);
// ----------------------------------------------------------------------
// === Adaptive knobs & sentinels (NEW) ===
var G_FB_W = 0.70; // (1) dynamic lambda/gamma blend weight 0..1
var G_MC_ACT = MC_ACT; // (2) adaptive candlestick acceptance threshold
var G_AccRate = 0; // (2) EW acceptance rate of (state != 0)
// (3) advisor budget per bar (replaces the macro)
int G_AdviseMax = 16;
// (6) Markov Laplace smoothing (runtime)
var G_MC_Alpha = 1.0;
// (7) adaptive candidate breadth for adjacency search
int G_CandNeigh = CAND_NEIGH;
// (8) effective projection dimension (= KPROJ or KPROJ/2)
int G_Keff = KPROJ;
// (5) depth emphasis hill-climber
var G_DTreeExpStep = 0.05;
int G_DTreeExpDir = 1;
// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_ROTATE 1 // 1 = rotate which equations get DTREE each bar
int allowAdvise(int i) {
if(ADVISE_ROTATE){
int groups = NET_EQNS / G_AdviseMax;
if(groups < 1) groups = 1;
return ((i / G_AdviseMax) % groups) == (Bar % groups);
} else {
return (i < G_AdviseMax);
}
}
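As a sanity check on this rotation scheme, the following stand-alone C sketch re-implements the same rule with the bar index and budget passed as explicit parameters (an assumption for illustration; the engine reads Bar and G_AdviseMax globally) and confirms each equation is serviced exactly once per cycle of `groups` bars:

```c
/* Sketch of the round-robin advisor budget: N equations split into
   N/budget groups; group g gets the heavy DTREE call on bars where
   bar % groups == g, so cost per bar stays bounded by 'budget'. */
int allow_advise(int i, int bar, int n_eq, int budget)
{
    int groups = n_eq / budget;
    if(groups < 1) groups = 1;          /* budget >= n_eq: always allowed */
    return ((i / budget) % groups) == (bar % groups);
}
```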
// ======================================================================
// A) Tight-memory switches and compact types
// ======================================================================
#define TIGHT_MEM 1 // turn on compact types for arrays
// consolidated EQ CSV -> don't enable extra meta
// (no #if available; force meta OFF explicitly)
#ifdef TIGHT_MEM
typedef float fvar; // 4B instead of 8B 'var' for large coefficient arrays
typedef short i16; // -32768..32767 indices
typedef char i8; // small enums/modes
#else
typedef var fvar;
typedef int i16;
typedef int i8;
#endif
// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u) {
if(!u) return 0;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int sz_node = 2*SZV + SZP + 2*SZI;
int total = sz_node;
if(u->n > 0 && u->c) total += u->n * SZP;
int i;
for(i=0;i<u->n;i++) total += tree_bytes(((Node**)u->c)[i]);
return total;
}
// ======================================================================
// Optimized memory estimator & predictability caches
// ======================================================================
// ===== Memory estimator & predictability caches =====
int G_MemFixedBytes = 0; // invariant part (arrays, Markov + pointer vec + expr opt)
int G_TreeBytesCached = 0; // current D-Tree structure bytes
var* G_PredNode = 0; // length == G_TreeN; -2 = not computed this bar
int G_PredLen = 0;
int G_PredCap = 0; // (upgrade #5)
int G_PredCacheBar = -1;
void recalcTreeBytes(){
G_TreeBytesCached = tree_bytes(Root);
}
void computeMemFixedBytes() {
int N = G_N, D = G_D, K = G_K;
int SZV = sizeof(var), SZF = sizeof(fvar), SZI16 = sizeof(i16), SZI8 = sizeof(i8), SZP = sizeof(void*);
int b = 0;
// --- core state (var-precision) ---
b += N*SZV*2; // G_State, G_Prev
// --- adjacency & ids ---
b += N*D*SZI16; // G_Adj
b += N*SZI16; // G_EqTreeId
b += N*SZI8; // G_Mode
// --- random projection ---
b += K*N*SZF; // G_RP
b += K*SZF; // G_Z
// --- weights & params (fvar) ---
b += N*SZF*(8); // G_W* (WSelf, WN1, WN2, WGlob1, WGlob2, WMom, WTree, WAdv)
b += N*SZF*(7 + 7); // A1*, A2*
b += N*SZF*(2 + 2); // G1mean,G1E,G2P,G2lam
b += N*SZF*(2); // TAlpha, TBeta
b += N*SZF*(1); // G_TreeTerm
b += N*(SZI16 + SZF); // G_TopEq, G_TopW
// --- proportions ---
b += N*SZF*2; // G_PropRaw, G_Prop
// --- per-equation hit-rate bookkeeping ---
b += N*SZF; // G_HitEW
b += N*SZF; // G_AdvPrev
b += N*sizeof(int); // G_HitN
// --- Markov storage (unchanged ints) ---
b += MC_STATES*MC_STATES*sizeof(int) + MC_STATES*sizeof(int);
// pointer vector for tree index (capacity part)
b += G_TreeCap*SZP;
// optional expression buffers
if(LOG_EXPR_TEXT && G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN;
G_MemFixedBytes = b;
}
void ensurePredCache() {
if(G_PredCacheBar != Bar){
if(G_PredNode){
int i, n = G_PredLen;
for(i=0;i<n;i++) G_PredNode[i] = -2;
}
G_PredCacheBar = Bar;
}
}
var predByTid(int tid) {
if(!G_TreeIdx || tid < 0 || tid >= G_TreeN || !G_TreeIdx[tid]) return 0.5;
ensurePredCache();
if(G_PredNode && tid < G_PredLen && G_PredNode[tid] > -1.5) return G_PredNode[tid];
Node* t = G_TreeIdx[tid];
var p = 0.5;
if(t) p = nodePredictability(t);
if(G_PredNode && tid < G_PredLen) G_PredNode[tid] = p;
return p;
}
// ======================================================================
// Conservative in-script memory estimator (arrays + pointers) - O(1)
// ======================================================================
int mem_bytes_est(){ return G_MemFixedBytes + G_TreeBytesCached; }
int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }
int memMB(){ return (int)(memory(0)/(1024*1024)); }
// light one-shot shedding
void shed_zero_cost_once() {
if(G_ShedStage > 0) return;
set(PLOTNOW|OFF);
G_ChartsOff = 1;
G_LogsOff = 1;
G_ShedStage = 1;
}
void freeExprBuffers() {
if(!G_Sym || G_SymFreed) return;
int i;
for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym);
G_Sym = 0;
G_SymFreed = 1;
computeMemFixedBytes();
}
// depth manager (prune & shedding)
void depth_manager_runtime() {
int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
int mb = mem_mb_est();
if(mb < trigger) return;
if(G_ShedStage == 0) shed_zero_cost_once();
if(G_ShedStage <= 1){
if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
G_ShedStage = 2;
}
int overBudget = (mb >= MEM_BUDGET_MB);
if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS)) return;
while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH) {
int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
mb = mem_mb_est();
printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);
if(mb < trigger) break;
}
G_LastDepthActBar = Bar;
}
// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
int n = 0;
#define ADD(Name, Call) do{ var v = (Call); if(out) out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)
ADD("CDL2Crows", CDL2Crows());
ADD("CDL3BlackCrows", CDL3BlackCrows());
ADD("CDL3Inside", CDL3Inside());
ADD("CDL3LineStrike", CDL3LineStrike());
ADD("CDL3Outside", CDL3Outside());
ADD("CDL3StarsInSouth", CDL3StarsInSouth());
ADD("CDL3WhiteSoldiers", CDL3WhiteSoldiers());
ADD("CDLAbandonedBaby", CDLAbandonedBaby(0.3));
ADD("CDLAdvanceBlock", CDLAdvanceBlock());
ADD("CDLBeltHold", CDLBeltHold());
ADD("CDLBreakaway", CDLBreakaway());
ADD("CDLClosingMarubozu", CDLClosingMarubozu());
ADD("CDLConcealBabysWall", CDLConcealBabysWall());
ADD("CDLCounterAttack", CDLCounterAttack());
ADD("CDLDarkCloudCover", CDLDarkCloudCover(0.3));
ADD("CDLDoji", CDLDoji());
ADD("CDLDojiStar", CDLDojiStar());
ADD("CDLDragonflyDoji", CDLDragonflyDoji());
ADD("CDLEngulfing", CDLEngulfing());
ADD("CDLEveningDojiStar", CDLEveningDojiStar(0.3));
ADD("CDLEveningStar", CDLEveningStar(0.3));
ADD("CDLGapSideSideWhite", CDLGapSideSideWhite());
ADD("CDLGravestoneDoji", CDLGravestoneDoji());
ADD("CDLHammer", CDLHammer());
ADD("CDLHangingMan", CDLHangingMan());
ADD("CDLHarami", CDLHarami());
ADD("CDLHaramiCross", CDLHaramiCross());
ADD("CDLHignWave", CDLHignWave());
ADD("CDLHikkake", CDLHikkake());
ADD("CDLHikkakeMod", CDLHikkakeMod());
ADD("CDLHomingPigeon", CDLHomingPigeon());
ADD("CDLIdentical3Crows", CDLIdentical3Crows());
ADD("CDLInNeck", CDLInNeck());
ADD("CDLInvertedHammer", CDLInvertedHammer());
ADD("CDLKicking", CDLKicking());
ADD("CDLKickingByLength", CDLKickingByLength());
ADD("CDLLadderBottom", CDLLadderBottom());
ADD("CDLLongLeggedDoji", CDLLongLeggedDoji());
ADD("CDLLongLine", CDLLongLine());
ADD("CDLMarubozu", CDLMarubozu());
ADD("CDLMatchingLow", CDLMatchingLow());
ADD("CDLMatHold", CDLMatHold(0.5));
ADD("CDLMorningDojiStar", CDLMorningDojiStar(0.3));
ADD("CDLMorningStar", CDLMorningStar(0.3));
ADD("CDLOnNeck", CDLOnNeck());
ADD("CDLPiercing", CDLPiercing());
ADD("CDLRickshawMan", CDLRickshawMan());
ADD("CDLRiseFall3Methods", CDLRiseFall3Methods());
ADD("CDLSeperatingLines", CDLSeperatingLines());
ADD("CDLShootingStar", CDLShootingStar());
ADD("CDLShortLine", CDLShortLine());
ADD("CDLSpinningTop", CDLSpinningTop());
ADD("CDLStalledPattern", CDLStalledPattern());
ADD("CDLStickSandwhich", CDLStickSandwhich());
ADD("CDLTakuri", CDLTakuri());
ADD("CDLTasukiGap", CDLTasukiGap());
ADD("CDLThrusting", CDLThrusting());
ADD("CDLTristar", CDLTristar());
ADD("CDLUnique3River", CDLUnique3River());
ADD("CDLUpsideGap2Crows", CDLUpsideGap2Crows());
ADD("CDLXSideGap3Methods", CDLXSideGap3Methods());
#undef ADD
return n; // 61
}
// ================= Markov storage & helpers =================
static int* MC_Count; // [MC_STATES*MC_STATES] -> we alias this as the 1H (HTF) chain
static int* MC_RowSum; // [MC_STATES]
static int MC_Prev = -1;
static int MC_Cur = 0;
static var MC_PBullNext = 0.5;
static var MC_Entropy = 0.0;
#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))
int MC_stateFromCDL(var* cdl /*len=61*/, var thr) {
int i, best=-1;
var besta=0;
for(i=0;i<MC_NPAT;i++){
var a = abs(cdl[i]);
if(a>besta){ besta=a; best=i; }
}
if(best<0) return MC_NONE;
if(besta < thr) return MC_NONE;
int bull = (cdl[best] > 0);
return 1 + 2*best + bull; // 1..122
}
int MC_isBull(int s){
if(s<=0) return 0;
return ((s-1)%2)==1;
}
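The `1 + 2*best + bull` encoding packs the pattern index and its direction into one state id, with 0 reserved for NONE. The inverse mappings used throughout (MC_isBull here, and (s-1)/2 later in MC_relFromHL) can be verified with a small stand-alone roundtrip in C:

```c
#include <assert.h>

/* Sketch: encode/decode roundtrip for the 122-state id,
   s = 1 + 2*pattern + bull, matching MC_stateFromCDL and MC_isBull. */
int mc_encode(int pattern, int bull) { return 1 + 2*pattern + bull; }
int mc_pattern(int s)                { return (s - 1) / 2; }
int mc_is_bull(int s)                { return s > 0 && ((s - 1) % 2) == 1; }
```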
void MC_update(int sPrev,int sCur){
if(sPrev<0) return;
MC_Count[MC_IDX(sPrev,sCur)]++;
MC_RowSum[sPrev]++;
}
// === (6) Use runtime Laplace alpha (G_MC_Alpha) ===
var MC_prob(int s,int t){
var num = (var)MC_Count[MC_IDX(s,t)] + G_MC_Alpha;
var den = (var)MC_RowSum[s] + G_MC_Alpha*MC_STATES;
if(den<=0) return 1.0/MC_STATES;
return num/den;
}
// === (6) one-pass PBull + Entropy
void MC_rowStats(int s, var* outPBull, var* outEntropy) {
if(s<0){
if(outPBull) *outPBull=0.5;
if(outEntropy) *outEntropy=1.0;
return;
}
int t;
var Z=0, pBull=0;
for(t=1;t<MC_STATES;t++){
var p=MC_prob(s,t);
Z+=p;
if(MC_isBull(t)) pBull+=p;
}
if(Z<=0){
if(outPBull) *outPBull=0.5;
if(outEntropy) *outEntropy=1.0;
return;
}
var H=0;
for(t=1;t<MC_STATES;t++){
var p = MC_prob(s,t)/Z;
if(p>0) H += -p*log(p);
}
var Hmax = log(MC_STATES-1);
if(Hmax<=0) H = 0; else H = H/Hmax;
if(outPBull) *outPBull = pBull/Z;
if(outEntropy) *outEntropy = H;
}
// ==================== NEW: Multi-TF Markov extensions ====================
// We keep the legacy MC_* as the HTF (1H) chain via aliases:
#define MH_Count MC_Count
#define MH_RowSum MC_RowSum
#define MH_Prev MC_Prev
#define MH_Cur MC_Cur
#define MH_PBullNext MC_PBullNext
#define MH_Entropy MC_Entropy
// ---------- 5M (LTF) Markov ----------
static int* ML_Count; // [MC_STATES*MC_STATES]
static int* ML_RowSum; // [MC_STATES]
static int ML_Prev = -1;
static int ML_Cur = 0;
static var ML_PBullNext = 0.5;
static var ML_Entropy = 0.0;
// ---------- Relation Markov (links 5M & 1H) ----------
#define MR_STATES MC_STATES
static int* MR_Count; // [MR_STATES*MC_STATES]
static int* MR_RowSum; // [MR_STATES]
static int MR_Prev = -1;
static int MR_Cur = 0;
static var MR_PBullNext = 0.5;
static var MR_Entropy = 0.0;
// Relation state mapping (agreement only)
// sL, sH in [0..122], 0 = none; returns a state in [0..122], 0 = no-agreement
int MC_relFromHL(int sL, int sH) {
if(sL <= 0 || sH <= 0) return MC_NONE;
int idxL = (sL - 1)/2; int bullL = ((sL - 1)%2)==1;
int idxH = (sH - 1)/2; int bullH = ((sH - 1)%2)==1;
if(idxL == idxH && bullL == bullH) return sL; // same shared state
return MC_NONE; // no-agreement bucket
}
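MC_relFromHL returns a non-zero relation state only when both timeframes flag the same pattern with the same direction; everything else lands in the no-agreement bucket. A small stand-alone check in C, with state ids following the same 1 + 2*pattern + bull scheme:

```c
#include <assert.h>

/* Sketch of the agreement-only relation state: non-zero only when
   pattern AND direction match across the two timeframes. */
int rel_from_hl(int sL, int sH)
{
    if(sL <= 0 || sH <= 0) return 0;            /* NONE propagates */
    if((sL - 1)/2 != (sH - 1)/2) return 0;      /* different pattern */
    if(((sL - 1)%2) != ((sH - 1)%2)) return 0;  /* different direction */
    return sL;                                  /* shared state id */
}
```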
// Small helpers reused for all three chains
void MC_update_any(int* C, int* R, int sPrev, int sCur) {
if(sPrev<0) return;
C[MC_IDX(sPrev,sCur)]++;
R[sPrev]++;
}
// Ultra-safe row stats for any Markov matrix (Zorro lite-C friendly)
void MC_rowStats_any(int* C, int* R, int s, var alpha, var* outPBull, var* outEntropy)
{
// Defaults
if(outPBull) *outPBull = 0.5;
if(outEntropy) *outEntropy = 1.0;
// Guards
if(!C || !R) return;
if(!(alpha > 0)) alpha = 1.0; // also catches NaN/INF
if(s <= 0 || s >= MC_STATES) return; // ignore NONE(0) and OOB
// Row must have observations
{
int rs = R[s];
if(rs <= 0) return;
}
// Precompute safe row slice
int STATES = MC_STATES;
int NN = STATES * STATES;
int rowBase = s * STATES;
if(rowBase < 0 || rowBase > NN - STATES) return; // paranoid bound
int* Crow = C + rowBase;
// Denominator with Laplace smoothing
var den = (var)R[s] + alpha * (var)STATES;
if(!(den > 0)) return;
// Pass 1: mass and bull mass
var Z = 0.0, pBull = 0.0;
int t;
for(t = 1; t < STATES; t++){
var num = (var)Crow[t] + alpha;
var p = num / den;
Z += p;
if(MC_isBull(t)) pBull += p;
}
if(!(Z > 0)) return;
// Pass 2: normalized entropy
var H = 0.0;
var Hmax = log((var)(STATES - 1));
if(!(Hmax > 0)) Hmax = 1.0;
for(t = 1; t < STATES; t++){
var num = (var)Crow[t] + alpha;
var p = (num / den) / Z;
if(p > 0) H += -p*log(p);
}
if(outPBull) *outPBull = pBull / Z;
if(outEntropy) *outEntropy = H / Hmax;
}
// --------------- 5M chain (every 5-minute bar) ---------------
void updateMarkov_5M()
{
// arrays must exist
if(!ML_Count || !ML_RowSum) return;
// compute LTF candlestick state
static var CDL_L[MC_NPAT];
buildCDL_TA61(CDL_L, 0);
int s = MC_stateFromCDL(CDL_L, G_MC_ACT); // 0..MC_STATES-1 (0 = NONE)
// debug/guard: emit when state is NONE or out of range (no indexing yet)
if(s <= 0 || s >= MC_STATES) printf("\n[MC] skip s=%i (Bar=%i)", s, Bar);
// update transitions once we have enough history
if(Bar > LookBack) MC_update_any(ML_Count, ML_RowSum, ML_Prev, s);
ML_Prev = s;
// only compute stats when s is a valid, in-range state and the row has mass
if(s > 0 && s < MC_STATES){
if(ML_RowSum[s] > 0)
MC_rowStats_any(ML_Count, ML_RowSum, s, G_MC_Alpha, &ML_PBullNext, &ML_Entropy);
ML_Cur = s; // keep last valid state; do not overwrite on NONE
}
// else: leave ML_Cur unchanged (sticky last valid)
}
// --------------- 1H chain (only when an H1 bar closes) ---------------
void updateMarkov_1H()
{
// arrays must exist
if(!MC_Count || !MC_RowSum) return;
// switch to 1H timeframe for the patterns
int saveTF = TimeFrame;
TimeFrame = TF_H1;
static var CDL_H[MC_NPAT];
buildCDL_TA61(CDL_H, 0);
int sH = MC_stateFromCDL(CDL_H, G_MC_ACT); // 0..MC_STATES-1
// debug/guard: emit when state is NONE or out of range (no indexing yet)
if(sH <= 0 || sH >= MC_STATES) printf("\n[MC] skip sH=%i (Bar=%i)", sH, Bar);
if(Bar > LookBack) MC_update(MH_Prev, sH);
MH_Prev = sH;
// only compute stats when sH is valid and its row has mass
if(sH > 0 && sH < MC_STATES){
if(MH_RowSum[sH] > 0)
MC_rowStats(sH, &MH_PBullNext, &MH_Entropy); // HTF uses legacy helper
MH_Cur = sH; // keep last valid HTF state
}
// else: leave MH_Cur unchanged
// restore original timeframe
TimeFrame = saveTF;
}
// --------------- Relation chain (agreement-only between 5M & 1H) ---------------
void updateMarkov_REL()
{
// arrays must exist
if(!MR_Count || !MR_RowSum) return;
// relation state from current LTF state and last HTF state
int r = MC_relFromHL(ML_Cur, MH_Cur); // 0 = no agreement / none
// debug/guard: emit when relation is NONE or out of range (no indexing yet)
if(r <= 0 || r >= MC_STATES) printf("\n[MC] skip r=%i (Bar=%i)", r, Bar);
if(Bar > LookBack) MC_update_any(MR_Count, MR_RowSum, MR_Prev, r);
MR_Prev = r;
// only compute stats when r is valid and row has mass
if(r > 0 && r < MC_STATES){
if(MR_RowSum[r] > 0)
MC_rowStats_any(MR_Count, MR_RowSum, r, G_MC_Alpha, &MR_PBullNext, &MR_Entropy);
MR_Cur = r; // keep last valid relation state
}
// else: leave MR_Cur unchanged
}
// ================= HARMONIC D-TREE ENGINE =================
// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){
if(u<-1) u=-1;
if(u>1) u=1;
var t=0.5*(u+1.0);
return lo + t*(hi-lo);
}
// ---- safety helpers ----
var safeNum(var x) {
if(invalid(x)) return 0; // 0 for NaN/INF
return clamp(x,-1e100,1e100); // hard-limit range
}
void sanitize(var* A,int n){
int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]);
}
var sat100(var x){ return clamp(x,-100,100); }
// ===== EQC-0: Equation-cycle angle helpers =====
var pi() { return 3.141592653589793; }
var wrapPi(var a) {
while(a <= -pi()) a += 2.*pi();
while(a > pi()) a -= 2.*pi();
return a;
}
var angDiff(var a, var b) { return wrapPi(b - a); }
// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap) {
if(!dst || !src || cap <= 0) return;
int dl = strlen(dst);
int sl = strlen(src);
int room = cap - 1 - dl;
if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
int i;
for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
dst[dl + i] = 0;
}
int countSubStr(string s, string sub){
if(!s || !sub) return 0;
int n=0; string p=s; int sublen = strlen(sub); if(sublen<=0) return 0;
while((p=strstr(p,sub))){ n++; p += sublen; }
return n;
}
// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
int h = 5381, c, i = 0;
if(!s) return h;
while((c = s[i++])) h = ((h<<5)+h) ^ c; // h*33 ^ c
return h & 0x7fffffff; // force non-negative
}
// ---- tree helpers ----
int validTreeIndex(int tid){
if(!G_TreeIdx) return 0;
if(tid<0||tid>=G_TreeN) return 0;
return (G_TreeIdx[tid]!=0);
}
Node* treeAt(int tid){
if(validTreeIndex(tid)) return G_TreeIdx[tid];
return &G_DummyNode;
}
int safeTreeIndexFromEq(int eqi){
int denom = ifelse(G_TreeN>0, G_TreeN, 1);
int tid = eqi;
if(tid < 0) tid = 0;
if(denom > 0) tid = tid % denom;
if(tid < 0) tid = 0;
return tid;
}
// ---- tree indexing ----
void pushTreeNode(Node* u){
if(G_TreeN >= G_TreeCap){
int newCap = G_TreeCap*2;
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes();
}
G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){
if(!u) return;
pushTreeNode(u);
int i;
for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]);
}
// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
if(!G_TreeIdx) return;
if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
int newCap = (G_TreeCap >> 1);
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes();
}
}
// ---- depth LUT helper (upgrade #1) ----
void refreshDepthW() {
if(!G_DepthW) return;
int d; for(d=0; d<DEPTH_LUT_SIZE; d++) G_DepthW[d] = 1.0 / pow(d+1, G_DTreeExp);
G_DepthExpLast = G_DTreeExp;
}
// ---- tree create/eval (with pool & LUT upgrades) ----
Node* createNode(int depth) {
Node* u = poolAllocNode();
if(!u) return 0;
u->v = random();
u->r = 0.01 + 0.02*depth + random(0.005);
u->d = depth;
if(depth > 0){
u->n = 1 + (int)random(MAX_BRANCHES);
u->c = malloc(u->n * sizeof(void*));
if(!u->c){ u->n = 0; u->c = 0; return u; }
int i;
for(i=0;i<u->n;i++){
Node* child = createNode(depth - 1);
((Node**)u->c)[i] = child;
}
} else {
u->n = 0; u->c = 0;
}
return u;
}
var evaluateNode(Node* u) {
if(!u) return 0;
var sum = 0; int i;
for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
if(G_DepthExpLast < 0 || abs(G_DTreeExp - G_DepthExpLast) > 1e-9) refreshDepthW();
var phase = sin(u->r * Bar + sum);
var weight = G_DepthW[u->d];
u->v = (1 - weight)*u->v + weight*phase;
return u->v;
}
int countNodes(Node* u){
if(!u) return 0;
int c=1,i;
for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]);
return c;
}
void freeTree(Node* u) {
if(!u) return;
int i;
for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]);
if(u->c) free(u->c);
poolFreeNode(u);
}
// =========== NETWORK STATE & COEFFICIENTS ===========
var* G_State;
var* G_Prev;
var* G_StateSq = 0;
i16* G_Adj;
fvar* G_RP;
fvar* G_Z;
i8* G_Mode;
fvar* G_WSelf;
fvar* G_WN1;
fvar* G_WN2;
fvar* G_WGlob1;
fvar* G_WGlob2;
fvar* G_WMom;
fvar* G_WTree;
fvar* G_WAdv;
fvar* A1x; fvar* A1lam; fvar* A1mean; fvar* A1E; fvar* A1P; fvar* A1i; fvar* A1c;
fvar* A2x; fvar* A2lam; fvar* A2mean; fvar* A2E; fvar* A2P; fvar* A2i; fvar* A2c;
fvar* G1mean; fvar* G1E; fvar* G2P; fvar* G2lam;
fvar* G_TreeTerm;
i16* G_TopEq;
fvar* G_TopW;
i16* G_EqTreeId;
fvar* TAlpha;
fvar* TBeta;
fvar* G_PropRaw;
fvar* G_Prop;
// --- Per-equation hit-rate (EW average of 1-bar directional correctness) ---
#define HIT_ALPHA 0.02
#define HIT_EPS 0.0001
fvar* G_HitEW; // [N] 0..1 EW hit-rate
int* G_HitN; // [N] # of scored comparisons
fvar* G_AdvPrev; // [N] previous bar's advisor output (-1..+1)
var G_Ret1 = 0; // realized 1-bar return for scoring
// ===== Markov features exposed to DTREE (HTF) =====
var G_MCF_PBull; // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State; // 0..122
// ===== EQC-1: Equation-cycle globals =====
var* G_EqTheta = 0; // [G_N] fixed angle per equation on ring (0..2π)
int G_LeadEq = -1; // last bar's leader eq index
var G_LeadTh = 0; // leader's angle
var G_CycPh = 0; // wrapped cumulative phase (-π..π)
var G_CycSpd = 0; // smoothed angular speed (Δφ EMA)
// epoch/context & feedback
int G_Epoch = 0;
int G_CtxID = 0;
var G_FB_A = 0.7;
var G_FB_B = 0.3;
// ---------- predictability ----------
var nodePredictability(Node* t) {
if(!t) return 0.5;
var disp = 0; int n = t->n, i, cnt = 0;
if(t->c){
for(i=0;i<n;i++){
Node* c = ((Node**)t->c)[i];
if(c){ disp += abs(c->v - t->v); cnt++; }
}
if(cnt > 0) disp /= cnt;
}
var depthFac = 1.0/(1 + t->d);
var rateBase = 0.01 + 0.02*t->d;
var rateFac = exp(-25.0*abs(t->r - rateBase));
var p = 0.5*(depthFac + rateFac);
p = 0.5*p + 0.5*(1.0 + (-disp));
if(p<0) p=0; if(p>1) p=1;
return p;
}
var nodeImportance(Node* u) {
if(!u) return 0;
var amp = abs(u->v); if(amp>1) amp=1;
var p = nodePredictability(u);
var depthW = 1.0/(1.0 + u->d);
var imp = (0.6*p + 0.4*amp) * depthW;
return imp;
}
// ====== Elastic growth helpers ======
Node* createLeafDepth(int d) {
Node* u = poolAllocNode();
if(!u) return 0;
u->v = random();
u->r = 0.01 + 0.02*d + random(0.005);
u->n = 0;
u->c = 0;
return u;
}
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK) {
if(!u) return;
if(u->d == frontierDepth){
int want = addK; if(want <= 0) return;
int oldN = u->n; int newN = oldN + want;
Node** Cnew = (Node**)malloc(newN * sizeof(void*));
if(!Cnew) return; // allocation failed: keep the node unchanged
if(oldN>0 && u->c) memcpy(Cnew, u->c, oldN*sizeof(void*));
int i;
for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
if(u->c) free(u->c);
u->c = Cnew;
u->n = newN;
return;
}
int j;
for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}
void freeChildAt(Node* parent, int idx) {
if(!parent || !parent->c) return;
Node** C = (Node**)parent->c;
freeTree(C[idx]);
int i;
for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
parent->n--;
if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK) {
if(!u) return;
if(u->d == targetDepth-1 && u->n > 0){
int n = u->n, i, kept = 0;
int mark[16]; for(i=0;i<16;i++) mark[i]=0;
int iter;
for(iter=0; iter<keepK && iter<n; iter++){
int bestI = -1; var bestImp = -1;
for(i=0;i<n;i++){
if(i<16 && mark[i]==1) continue;
var imp = nodeImportance(((Node**)u->c)[i]);
if(imp > bestImp){ bestImp = imp; bestI = i; }
}
if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
}
for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
return;
}
int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}
// ----- refresh fixed ring angles per equation (0..2π) -----
void refreshEqAngles() {
if(!G_EqTheta){ G_EqTheta = (var*)malloc(G_N*sizeof(var)); }
int i; var twoPi = 2.*pi(); var denom = ifelse(G_TreeN>0,(var)G_TreeN,1.0);
for(i=0;i<G_N;i++){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var u = ((var)tid)/denom; // 0..1
G_EqTheta[i] = twoPi*u; // 0..2π
}
}
// ---------- reindex (sizes pred cache; + refresh angles) ----------
void reindexTreeAndMap() {
G_TreeN = 0;
indexTreeDFS(Root);
if(G_TreeN<=0){ G_TreeN=1; if(G_TreeIdx) G_TreeIdx[0]=Root; }
{ int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = (i16)(i % G_TreeN); }
G_PredLen = G_TreeN; if(G_PredLen <= 0) G_PredLen = 1;
if(G_PredLen > G_PredCap){
if(G_PredNode) free(G_PredNode);
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
}
G_PredCacheBar = -1;
// NEW: compute equation ring angles after mapping
refreshEqAngles();
maybeShrinkTreeIdx();
recalcTreeBytes();
}
// ====== Accuracy sentinel & elastic-depth controller ======
void acc_update(var x /*lambda*/, var y /*gamma*/) {
var a = 0.01; // ~100-bar half-life
ACC_mx = (1-a)*ACC_mx + a*x;
ACC_my = (1-a)*ACC_my + a*y;
ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
var vx = ACC_mx2 - ACC_mx*ACC_mx;
var vy = ACC_my2 - ACC_my*ACC_my;
var cv = ACC_mxy - ACC_mx*ACC_my;
if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy);
else G_AccCorr = 0;
if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}
var util_now() {
int mb = mem_mb_est();
var mem_pen = 0;
if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB;
else mem_pen = 0;
return G_AccCorr - 0.5*mem_pen;
}
int apply_grow_step() {
int mb = mem_mb_est();
if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
int newFrontier = G_RT_TreeMaxDepth;
growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
G_RT_TreeMaxDepth++;
reindexTreeAndMap();
printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
return 1;
}
void revert_last_grow() {
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}
void edc_runtime() {
if((Bar % DEPTH_TUNE_BARS) == 0){
var U0 = util_now();
var trial = clamp(G_DTreeExp + G_DTreeExpDir*G_DTreeExpStep, 0.8, 2.0);
var old = G_DTreeExp;
G_DTreeExp = trial;
if(util_now() + 0.005 < U0){
G_DTreeExp = old;
G_DTreeExpDir = -G_DTreeExpDir;
}
}
int mb = mem_mb_est();
if(G_TunePending){
if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
G_UtilAfter = util_now();
var eps = 0.01;
if(G_UtilAfter + eps < G_UtilBefore){ revert_last_grow(); }
else { printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter); }
G_TunePending = 0;
G_TuneAction = 0;
}
return;
}
if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
G_UtilBefore = util_now();
if(apply_grow_step()){
G_TunePending = 1;
G_TuneAction = 1;
G_TuneStartBar = Bar;
}
}
}
// Builds "Log\\Alpha12_eq_###.csv" into outName (must be >=64)
void buildEqFileName(int idx, char* outName /*>=64*/) {
strcpy(outName, "Log\\Alpha12_eq_");
string idxs = strf("%03i", idx);
strcat(outName, idxs);
strcat(outName, ".csv");
}
// ===== consolidated EQ log =====
void writeEqHeaderOnce() {
static int done=0; if(done) return; done=1;
file_append("Log\\Alpha12_eq_all.csv",
"Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}
void appendEqMetaLine(
int bar, int epoch, int ctx,
int i, int n1, int n2, int tid, int depth, var rate, var pred, var adv, var prop, int mode,
var wadv, var wtree, var pbull, var ent, int mcstate, string expr)
{
if(i >= LOG_EQ_SAMPLE) return;
// Lightweight expression stats (safe if expr == 0)
int eLen = 0, eHash = 0, cT = 0, cS = 0, cC = 0;
if(expr){
eLen = (int)strlen(expr);
eHash = (int)djb2_hash(expr);
cT = countSubStr(expr,"tanh(");
cS = countSubStr(expr,"sin(");
cC = countSubStr(expr,"cos(");
} else {
eHash = (int)djb2_hash("");
}
// One trimmed CSV line; order matches writeEqHeaderOnce()
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.4f,%.4f,%.4f,%.4f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth,
rate, pred, adv, prop, mode, wadv, wtree, pbull, ent,
mcstate, eLen, eHash, cT, cS, cC));
}
// --------- allocation ----------
void randomizeRP() {
int K=G_K,N=G_N,k,j;
for(k=0;k<K;k++)
for(j=0;j<N;j++)
G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}
// === (8/9) Use effective K + per-bar guard ===
int G_ProjBar = -1;
int G_ProjK = -1;
int keffClamped(){
int K = G_Keff;
if(K < 0) K = 0;
if(K > G_K) K = G_K;
return K;
}
void computeProjection()
{
if(!G_RP || !G_Z || !G_StateSq) return;
int K = keffClamped();
if(G_ProjBar == Bar && G_ProjK == K) return;
int N = G_N, k, j;
for(k = 0; k < K; k++){
var acc = 0;
for(j = 0; j < N; j++) acc += (var)G_RP[k*N + j] * G_StateSq[j];
G_Z[k] = (fvar)acc;
}
G_ProjBar = Bar;
G_ProjK = K;
}
// D) Compact allocate/free
void allocateNet() {
int N = G_N, D = G_D, K = G_K;
// core
G_State = (var*)malloc(N*sizeof(var));
G_Prev = (var*)malloc(N*sizeof(var));
G_StateSq = (var*)malloc(N*sizeof(var));
// graph / projection
G_Adj = (i16*) malloc(N*D*sizeof(i16));
G_RP = (fvar*)malloc(K*N*sizeof(fvar));
G_Z = (fvar*)malloc(K*sizeof(fvar));
G_Mode= (i8*) malloc(N*sizeof(i8));
// weights & params
G_WSelf = (fvar*)malloc(N*sizeof(fvar));
G_WN1 = (fvar*)malloc(N*sizeof(fvar));
G_WN2 = (fvar*)malloc(N*sizeof(fvar));
G_WGlob1 = (fvar*)malloc(N*sizeof(fvar));
G_WGlob2 = (fvar*)malloc(N*sizeof(fvar));
G_WMom = (fvar*)malloc(N*sizeof(fvar));
G_WTree = (fvar*)malloc(N*sizeof(fvar));
G_WAdv = (fvar*)malloc(N*sizeof(fvar));
A1x = (fvar*)malloc(N*sizeof(fvar));
A1lam=(fvar*)malloc(N*sizeof(fvar));
A1mean=(fvar*)malloc(N*sizeof(fvar));
A1E=(fvar*)malloc(N*sizeof(fvar));
A1P=(fvar*)malloc(N*sizeof(fvar));
A1i=(fvar*)malloc(N*sizeof(fvar));
A1c=(fvar*)malloc(N*sizeof(fvar));
A2x = (fvar*)malloc(N*sizeof(fvar));
A2lam=(fvar*)malloc(N*sizeof(fvar));
A2mean=(fvar*)malloc(N*sizeof(fvar));
A2E=(fvar*)malloc(N*sizeof(fvar));
A2P=(fvar*)malloc(N*sizeof(fvar));
A2i=(fvar*)malloc(N*sizeof(fvar));
A2c=(fvar*)malloc(N*sizeof(fvar));
G1mean=(fvar*)malloc(N*sizeof(fvar));
G1E =(fvar*)malloc(N*sizeof(fvar));
G2P =(fvar*)malloc(N*sizeof(fvar));
G2lam =(fvar*)malloc(N*sizeof(fvar));
TAlpha=(fvar*)malloc(N*sizeof(fvar));
TBeta =(fvar*)malloc(N*sizeof(fvar));
G_TreeTerm=(fvar*)malloc(N*sizeof(fvar));
G_TopEq=(i16*) malloc(N*sizeof(i16));
G_TopW=(fvar*)malloc(N*sizeof(fvar));
G_PropRaw=(fvar*)malloc(N*sizeof(fvar));
G_Prop =(fvar*)malloc(N*sizeof(fvar));
if(LOG_EXPR_TEXT) G_Sym = (string*)malloc(N*sizeof(char*));
else G_Sym = 0;
// tree index
G_TreeCap = 128;
G_TreeIdx = (Node**)malloc(G_TreeCap*sizeof(Node*));
G_TreeN = 0;
G_EqTreeId = (i16*)malloc(N*sizeof(i16));
// initialize adjacency
{ int t; for(t=0; t<N*D; t++) G_Adj[t] = -1; }
// initialize state and parameters
{
int i;
for(i=0;i<N;i++){
G_State[i] = random();
G_Prev[i] = G_State[i];
G_StateSq[i]= G_State[i]*G_State[i];
G_Mode[i] = 0;
G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2;
G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05;
G_WTree[i]=0.15; G_WAdv[i]=0.15;
A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
G1mean[i]=1.0; G1E[i]=0.001;
G2P[i]=0.6; G2lam[i]=0.3;
TAlpha[i]=0.8; TBeta[i]=25.0;
G_TreeTerm[i]=0;
G_TopEq[i]=-1; G_TopW[i]=0;
G_PropRaw[i]=1; G_Prop[i]=1.0/G_N;
if(LOG_EXPR_TEXT){
G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
if(G_Sym[i]) strcpy(G_Sym[i],"");
}
}
}
// --- Hit-rate state ---
G_HitEW = (fvar*)malloc(N*sizeof(fvar));
G_HitN = (int*) malloc(N*sizeof(int));
G_AdvPrev = (fvar*)malloc(N*sizeof(fvar));
{ int i; for(i=0;i<N;i++){ G_HitEW[i] = 0.5; G_HitN[i] = 0; G_AdvPrev[i] = 0; } }
computeMemFixedBytes();
if(G_PredNode) free(G_PredNode);
G_PredLen = G_TreeN; if(G_PredLen<=0) G_PredLen=1;
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
G_PredCacheBar = -1;
}
void freeNet() {
int i;
if(G_State)free(G_State);
if(G_Prev)free(G_Prev);
if(G_StateSq)free(G_StateSq);
if(G_Adj)free(G_Adj);
if(G_RP)free(G_RP);
if(G_Z)free(G_Z);
if(G_Mode)free(G_Mode);
if(G_WSelf)free(G_WSelf);
if(G_WN1)free(G_WN1);
if(G_WN2)free(G_WN2);
if(G_WGlob1)free(G_WGlob1);
if(G_WGlob2)free(G_WGlob2);
if(G_WMom)free(G_WMom);
if(G_WTree)free(G_WTree);
if(G_WAdv)free(G_WAdv);
if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean);
if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean);
if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
if(G1mean)free(G1mean); if(G1E)free(G1E);
if(G2P)free(G2P); if(G2lam)free(G2lam);
if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
if(G_TreeTerm)free(G_TreeTerm);
if(G_TopEq)free(G_TopEq);
if(G_TopW)free(G_TopW);
if(G_EqTreeId)free(G_EqTreeId);
if(G_PropRaw)free(G_PropRaw);
if(G_Prop)free(G_Prop);
if(G_Sym){
for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym);
}
if(G_TreeIdx)free(G_TreeIdx);
if(G_PredNode)free(G_PredNode);
if(G_EqTheta) free(G_EqTheta); // NEW: free ring angles
}
// --------- DTREE feature builders ----------
var nrm_s(var x) { return sat100(100.*tanh(x)); }
var nrm_scl(var x, var s) { return sat100(100.*tanh(s*x)); }
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var pred, var* S /*ADV_EQ_NF*/) {
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
// equation-cycle alignment
var th_i = ifelse(G_EqTheta!=0, G_EqTheta[i], 0);
var dphi = angDiff(G_CycPh, th_i);
var alignC = cos(dphi); // +1 aligned, -1 opposite
var alignS = sin(dphi); // quadrature
S[0] = nrm_s(G_State[i]);
S[1] = nrm_s(mean);
S[2] = nrm_scl(power,0.05);
S[3] = nrm_scl(energy,0.01);
S[4] = nrm_s(lambda);
S[5] = sat100(200.0*(pred-0.5));
S[6] = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
S[7] = sat100(1000.0*t->r);
S[8] = nrm_s(G_TreeTerm[i]);
S[9] = sat100( (200.0/3.0) * (var)( (int)G_Mode[i] ) - 100.0 );
// HTF (1H)
S[10] = sat100(200.0*(G_MCF_PBull-0.5));
S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
S[12] = sat100(200.0*((var)G_HitEW[i] - 0.5));
S[13] = sat100(100.*alignC);
S[14] = sat100(100.*alignS);
// NEW: 5M & Relation Markov features
S[15] = sat100(200.0*(ML_PBullNext - 0.5)); // 5M PBull
S[16] = sat100(200.0*(ML_Entropy - 0.5)); // 5M Entropy
S[17] = sat100(200.0*(MR_PBullNext - 0.5)); // Relation PBull
S[18] = sat100(200.0*(MR_Entropy - 0.5)); // Relation Entropy
sanitize(S,ADV_EQ_NF);
}
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/) {
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* ti = treeAt(tid_i);
Node* tj = treeAt(tid_j);
var predi = predByTid(tid_i);
var predj = predByTid(tid_j);
P[0]=nrm_s(G_State[i]);
P[1]=nrm_s(G_State[j]);
P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
P[4]=sat100(1000.0*ti->r);
P[5]=sat100(1000.0*tj->r);
P[6]=sat100(abs(P[2]-P[3]));
P[7]=sat100(abs(P[4]-P[5]));
P[8]=sat100(100.0*(predi+predj-1.0));
P[9]=nrm_s(lambda);
P[10]=nrm_s(mean);
P[11]=nrm_scl(power,0.05);
sanitize(P,ADV_PAIR_NF);
}
// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
int N = G_N, D = G_D;
if(!G_Adj || N <= 1 || D <= 0) return 0;
if(d < 0) d = 0;
if(d >= D) d = d % D;
int v = G_Adj[i*D + d];
if(v < 0 || v >= N || v == i) v = (i + 1) % N;
return v;
}
void sanitizeAdjacency(){
if(!G_Adj) return;
int N = G_N, D = G_D, i, d;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
i16 *p = &G_Adj[i*D + d];
if(*p < 0 || *p >= N || *p == i){
int r = (int)random(N);
if(r == i) r = (r+1) % N;
*p = (i16)r;
}
}
if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
int r2 = (G_Adj[i*D+1] + 1) % N;
if(r2 == i) r2 = (r2+1) % N;
G_Adj[i*D+1] = (i16)r2;
}
}
}
// --------- advisor helpers (NEW) ----------
var adviseSeed(int i, var lambda, var mean, var energy, var power) {
static int seedBar = -1;
static int haveSeed[NET_EQNS];
static var seedVal[NET_EQNS];
if(seedBar != Bar){
int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
seedBar = Bar;
}
if(i < 0) i = 0; if(i >= NET_EQNS) i = i % NET_EQNS;
if(!allowAdvise(i)) return 0;
if(!haveSeed[i]){
seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
haveSeed[i] = 1;
}
return seedVal[i];
}
var mix01(var a, int salt){
var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
return tanh(0.75*z);
}
// --------- advise wrappers (single-equation only) ----------
var adviseEq(int i, var lambda, var mean, var energy, var power) {
if(!allowAdvise(i)) return 0;
if(is(INITRUN)) return 0;
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
if(tight) return 0;
if(G_HitN[i] > 32){
var h = (var)G_HitEW[i];
var gate = 0.40 + 0.15*(1.0 - MC_Entropy);
if(h < gate){
if(random() >= 0.5) return 0;
}
}
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var pred = predByTid(tid);
var S[ADV_EQ_NF];
buildEqFeatures(i,lambda,mean,energy,power,pred,S);
var obj = 0;
if(Train){
obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));
var prior = 0.75 + 0.5*((var)G_HitEW[i] - 0.5); // 0.5..1.0
obj *= prior;
// --- EQC-5: cycle priors (reward aligned & non-stalled rotation)
{ var th_i = ifelse(G_EqTheta!=0, G_EqTheta[i], 0);
var dphi = angDiff(G_CycPh, th_i);
var align = 0.90 + 0.20*(0.5*(cos(dphi)+1.0));
var spdOK = 0.90 + 0.20*clamp(abs(G_CycSpd)/(0.15), 0., 1.);
obj *= align * spdOK;
}
}
int objI = (int)obj;
var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
return a/100.;
}
var advisePair(int i,int j, var lambda, var mean, var energy, var power) {
return 0;
}
// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power) {
int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
Node *ni = treeAt(ti), *nj = treeAt(tj);
var simD = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
var dr = 50.0*abs(ni->r - nj->r);
var simR = 1.0 / (1.0 + dr);
var predi = predByTid(ti);
var predj = predByTid(tj);
var pred = 0.5*(predi + predj);
var score = 0.5*pred + 0.3*simD + 0.2*simR;
return 2.0*score - 1.0;
}
// --------- adjacency selection (heuristic only) ----------
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power) {
int N=G_N, D=G_D, i, d, c, best, cand;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
// avoid duplicate neighbors
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){
do{ best = (int)random(N);} while(best==i);
}
G_Adj[i*D + d] = (i16)best;
}
}
}
// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power) {
var seed = adviseSeed(i,lambda,mean,energy,power);
G_Mode[i] = (int)(abs(1000*seed)) & 3;
G_WSelf[i] = (fvar)mapA(mix01(seed, 11), 0.15, 0.85);
G_WN1[i] = (fvar)mapA(mix01(seed, 12), 0.05, 0.35);
G_WN2[i] = (fvar)mapA(mix01(seed, 13), 0.05, 0.35);
G_WGlob1[i] = (fvar)mapA(mix01(seed, 14), 0.05, 0.30);
G_WGlob2[i] = (fvar)mapA(mix01(seed, 15), 0.05, 0.30);
G_WMom[i] = (fvar)mapA(mix01(seed, 16), 0.02, 0.15);
G_WTree[i] = (fvar)mapA(mix01(seed, 17), 0.05, 0.35);
G_WAdv[i] = (fvar)mapA(mix01(seed, 18), 0.05, 0.35);
A1x[i] = (fvar)(randsign()*mapA(mix01(seed, 21), 0.6, 1.2));
A1lam[i] = (fvar)(randsign()*mapA(mix01(seed, 22), 0.05,0.35));
A1mean[i]= (fvar) mapA(mix01(seed, 23),-0.30,0.30);
A1E[i] = (fvar) mapA(mix01(seed, 24),-0.0015,0.0015);
A1P[i] = (fvar) mapA(mix01(seed, 25),-0.30,0.30);
A1i[i] = (fvar) mapA(mix01(seed, 26),-0.02,0.02);
A1c[i] = (fvar) mapA(mix01(seed, 27),-0.20,0.20);
A2x[i] = (fvar)(randsign()*mapA(mix01(seed, 31), 0.6, 1.2));
A2lam[i] = (fvar)(randsign()*mapA(mix01(seed, 32), 0.05,0.35));
A2mean[i]= (fvar) mapA(mix01(seed, 33),-0.30,0.30);
A2E[i] = (fvar) mapA(mix01(seed, 34),-0.0015,0.0015);
A2P[i] = (fvar) mapA(mix01(seed, 35),-0.30,0.30);
A2i[i] = (fvar) mapA(mix01(seed, 36),-0.02,0.02);
A2c[i] = (fvar) mapA(mix01(seed, 37),-0.20,0.20);
G1mean[i] = (fvar) mapA(mix01(seed, 41), 0.4, 1.6);
G1E[i] = (fvar) mapA(mix01(seed, 42),-0.004,0.004);
G2P[i] = (fvar) mapA(mix01(seed, 43), 0.1, 1.2);
G2lam[i] = (fvar) mapA(mix01(seed, 44), 0.05, 0.7);
TAlpha[i] = (fvar) mapA(mix01(seed, 51), 0.3, 1.5);
TBeta[i] = (fvar) mapA(mix01(seed, 52), 6.0, 50.0);
G_PropRaw[i] = (fvar)(0.01 + 0.99*(0.5*(seed+1.0)));
{ // reliability boost
var boost = 0.75 + 0.5*(var)G_HitEW[i];
G_PropRaw[i] = (fvar)((var)G_PropRaw[i] * boost);
}
}
void normalizeProportions() {
int N=G_N,i;
var s=0;
for(i=0;i<N;i++) s += G_PropRaw[i];
if(s<=0) {
for(i=0;i<N;i++) G_Prop[i] = (fvar)(1.0/N);
return;
}
for(i=0;i<N;i++) G_Prop[i] = (fvar)(G_PropRaw[i]/s);
}
// H) dtreeTerm gets predictabilities on demand
var dtreeTerm(int i, int* outTopEq, var* outTopW) {
int N=G_N,j;
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* ti=treeAt(tid_i);
int di=ti->d; var ri=ti->r;
var predI = predByTid(tid_i);
var alpha=TAlpha[i], beta=TBeta[i];
var sumw=0, acc=0, bestW=-1; int bestJ=-1;
for(j=0;j<N;j++){
if(j==i) continue;
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* tj=treeAt(tid_j);
int dj=tj->d; var rj=tj->r;
var predJ = predByTid(tid_j);
var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
var predBoost = 0.5 + 0.5*(predI*predJ);
var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
w *= predBoost * propBoost;
var pairAdv = scorePairSafe(i,j,0,0,0,0);
var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
w *= pairBoost;
sumw += w;
acc += w*G_State[j];
if(w>bestW){bestW=w; bestJ=j;}
}
if(outTopEq) *outTopEq = bestJ;
if(outTopW) *outTopW = ifelse(sumw>0, bestW/sumw, 0);
if(sumw>0) return acc/sumw;
return 0;
}
// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2) {
if(LOG_EXPR_TEXT){
string s = G_Sym[i];
s[0]=0;
string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A1x[i], n1, (var)A1lam[i], (var)A1mean[i], (var)A1E[i], (var)A1P[i], (var)A1i[i], (var)A1c[i]);
string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A2x[i], n2, (var)A2lam[i], (var)A2mean[i], (var)A2E[i], (var)A2P[i], (var)A2i[i], (var)A2c[i]);
strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*x[i] + ", (var)G_WSelf[i]), EXPR_MAXLEN);
if(G_Mode[i]==1){
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else if(G_Mode[i]==2){
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else {
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
}
strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ",
(var)G_WGlob1[i], (var)G1mean[i], (var)G1E[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ",
(var)G_WGlob2[i], (var)G2P[i], (var)G2lam[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ", (var)G_WMom[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("Prop[i]=%.4f; ", (var)G_Prop[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DT(i) + ", (var)G_WTree[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DTREE(i)", (var)G_WAdv[i]), EXPR_MAXLEN);
}
}
// ======================= NEW: Range builders for chunked rewires =======================
// Rewire adjacency for i in [i0..i1), keeps others unchanged
void rewireAdjacency_DTREE_range(int i0,int i1, var lambda, var mean, var energy, var power) {
int N=G_N, D=G_D, i, d, c, best, cand;
if(i0<0) i0=0; if(i1>N) i1=N;
for(i=i0;i<i1;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){
do{ best = (int)random(N);} while(best==i);
}
G_Adj[i*D + d] = (i16)best;
}
}
}
// Synthesize equations only for [i0..i1)
void synthesizeEquation_range(int i0,int i1, var lambda, var mean, var energy, var power) {
int i; if(i0<0) i0=0; if(i1>G_N) i1=G_N;
for(i=i0;i<i1;i++) synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
}
// Build expr text only for [i0..i1) — guarded at runtime for lite-C compatibility
void buildSymbolicExpr_range(int i0,int i1) {
if(!LOG_EXPR_TEXT) return; // 0 = omit; 1 = build
int i; if(i0<0) i0=0; if(i1>G_N) i1=G_N;
for(i=i0;i<i1;i++){
int n1 = adjSafe(i,0);
int n2 = ifelse(G_D >= 2, adjSafe(i,1), n1);
buildSymbolicExpr(i,n1,n2);
}
}
// ======================= NEW: Rolling rewire cursor state =======================
int G_RewirePos = 0; // next equation index to process
int G_RewirePasses = 0; // #completed full passes since start
int G_RewireBatch = REWIRE_BATCH_EQ_5M; // effective batch for this bar
// ======================= NEW: Rolling cursor for heavy per-bar updates =======================
int G_UpdatePos = 0; // next equation index to do heavy work
int G_UpdatePasses = 0; // #completed full heavy passes
// ======================= NEW: Chunked rewire orchestrator =======================
// Run part of a rewire: only a slice of equations this bar.
// Returns 1 if a full pass just completed (we can normalize), else 0.
int rewireEpochChunk(var lambda, var mean, var energy, var power, int batch) {
int N = G_N;
if(N <= 0) return 0;
if(batch < REWIRE_MIN_BATCH) batch = REWIRE_MIN_BATCH;
if(G_RewirePos >= N) G_RewirePos = 0;
int i0 = G_RewirePos;
int i1 = i0 + batch; if(i1 > N) i1 = N;
// Adapt neighbor breadth by entropy (your original heuristic)
G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH);
// Rewire only the target slice
rewireAdjacency_DTREE_range(i0,i1, lambda,mean,energy,power);
sanitizeAdjacency(); // cheap; can keep global
synthesizeEquation_range(i0,i1, lambda,mean,energy,power);
buildSymbolicExpr_range(i0,i1);
// advance cursor
G_RewirePos = i1;
// Full pass finished?
if(G_RewirePos >= N){
G_RewirePos = 0;
G_RewirePasses += 1;
return 1;
}
return 0;
}
// ---------- one-time rewire init (call central reindex) ----------
void rewireInit() {
randomizeRP();
computeProjection();
reindexTreeAndMap(); // ensures G_PredNode sized before any use
}
// ----------------------------------------------------------------------
// I) Trim rewireEpoch -> now used for one-shot/initialization full pass
// ----------------------------------------------------------------------
void rewireEpoch(var lambda, var mean, var energy, var power) {
// Backward compatibility: do one full pass immediately
int done = 0;
while(!done){
done = rewireEpochChunk(lambda,mean,energy,power, REWIRE_BATCH_EQ_H1);
}
// After full pass, normalize proportions once (exact)
normalizeProportions();
// Context hash (unchanged)
{
int D = G_D, i, total = G_N * D;
unsigned int h = 2166136261u;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
}
// coarse projection-based driver for gamma
var projectNet() {
int N=G_N,i;
var sum=0,sumsq=0,cross=0;
for(i=0;i<N;i++){
sum+=G_State[i];
sumsq+=G_State[i]*G_State[i];
if(i+1<N) cross+=G_State[i]*G_State[i+1];
}
var mean=sum/N, corr=cross/(N-1);
return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}
// ----------------------------------------------------------------------
// J) Heavy per-bar update slice (uses rolling G_UpdatePos cursor)
// ----------------------------------------------------------------------
var f_affine(var x, var lam, var mean, var E, var P, var i, var c){
return x + lam*mean + E + P + i + c; // generic affine helper (nonlin1/nonlin2 below apply their own per-equation coefficients)
}
var nonlin1(int i, int n1, var lam, var mean, var E, var P){
var x = G_State[n1];
var arg = (var)A1x[i]*x + (var)A1lam[i]*lam + (var)A1mean[i]*mean + (var)A1E[i]*E + (var)A1P[i]*P + (var)A1i[i]*i + (var)A1c[i];
return arg;
}
var nonlin2(int i, int n2, var lam, var mean, var E, var P){
var x = G_State[n2];
var arg = (var)A2x[i]*x + (var)A2lam[i]*lam + (var)A2mean[i]*mean + (var)A2E[i]*E + (var)A2P[i]*P + (var)A2i[i]*i + (var)A2c[i];
return arg;
}
// returns 1 if a full heavy-update pass finishes, else 0
int heavyUpdateChunk(var lambda, var mean, var energy, var power, int batch){
int N = G_N;
if(N <= 0) return 0;
if(batch < UPDATE_MIN_BATCH) batch = UPDATE_MIN_BATCH;
if(G_UpdatePos >= N) G_UpdatePos = 0;
int i0 = G_UpdatePos;
int i1 = i0 + batch; if(i1 > N) i1 = N;
// projection may be reused by multiple chunks within the same bar
computeProjection();
int i;
for(i=i0;i<i1;i++){
// --- neighbors (safe) ---
int n1 = adjSafe(i,0);
int n2 = ifelse(G_D>=2, adjSafe(i,1), n1);
// --- DTREE ensemble term (also returns top meta) ---
int topEq = -1; var topW = 0;
var treeT = dtreeTerm(i, &topEq, &topW);
G_TreeTerm[i] = (fvar)treeT;
G_TopEq[i] = (i16)topEq;
G_TopW[i] = (fvar)topW;
// --- advisor (data-driven) ---
var adv = adviseEq(i, lambda, mean, energy, power);
// --- nonlinear pair terms controlled by Mode ---
var a1 = nonlin1(i,n1,lambda,mean,energy,power);
var a2 = nonlin2(i,n2,lambda,mean,energy,power);
var t1, t2;
if(G_Mode[i]==1){ t1 = tanh(a1); t2 = sin(a2);
} else if(G_Mode[i]==2){ t1 = cos(a1); t2 = tanh(a2);
} else { t1 = sin(a1); t2 = cos(a2); }
// --- global couplings & momentum ---
var glob1 = tanh( (var)G1mean[i]*mean + (var)G1E[i]*energy );
var glob2 = sin ( (var)G2P[i]*power + (var)G2lam[i]*lambda );
var mom = (G_State[i] - G_Prev[i]);
// --- next state synthesis ---
var xnext =
(var)G_WSelf[i]*G_State[i]
+ (var)G_WN1[i]*t1
+ (var)G_WN2[i]*t2
+ (var)G_WGlob1[i]*glob1
+ (var)G_WGlob2[i]*glob2
+ (var)G_WMom[i]*mom
+ (var)G_WTree[i]*treeT
+ (var)G_WAdv[i]*adv;
// --- stability clamp & book-keeping ---
xnext = clamp(xnext, -10, 10);
G_Prev[i] = G_State[i];
G_State[i]= xnext;
G_StateSq[i] = xnext*xnext;
// --- keep last advisor output for hit-rate scoring next bar ---
G_AdvPrev[i] = (fvar)adv;
// --- lightweight per-eq meta logging (sampled) ---
if(!G_LogsOff && (Bar % LOG_EVERY)==0 && (i < LOG_EQ_SAMPLE)){
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* tnode = treeAt(tid);
int nodeDepth = 0;
if(tnode) nodeDepth = tnode->d;
var rate = (var)TBeta[i]; // any per-eq scalar to inspect quickly
var pred = predByTid(tid);
// last parameter must be a string (avoid ternary; lite-C friendly)
string expr = 0;
if(LOG_EXPR_TEXT){
if(G_Sym) expr = G_Sym[i];
else expr = 0;
}
appendEqMetaLine(Bar, G_Epoch, G_CtxID,
i, n1, n2, tid, nodeDepth, rate, pred, adv, G_Prop[i], (int)G_Mode[i],
(var)G_WAdv[i], (var)G_WTree[i], G_MCF_PBull, G_MCF_Entropy, (int)G_MCF_State,
expr);
}
}
// advance rolling cursor
G_UpdatePos = i1;
// full pass completed?
if(G_UpdatePos >= N){
G_UpdatePos = 0;
G_UpdatePasses += 1;
return 1;
}
return 0;
}
// ----------------------------------------------------------------------
// K) Cycle tracker: pick leader eq on ring and update phase/speed
// ----------------------------------------------------------------------
void updateEquationCycle() {
if(!G_EqTheta){ G_CycPh = wrapPi(G_CycPh); return; }
// Leader = argmax Prop[i]
int i, bestI = 0; var bestP = -1;
for(i=0;i<G_N;i++){
var p = (var)G_Prop[i];
if(p > bestP){ bestP = p; bestI = i; }
}
var th = G_EqTheta[bestI]; // G_EqTheta null-checked at function entry; ifelse() evaluates both branches, so it adds no safety here
// angular speed (wrapped diff)
var d = angDiff(G_LeadTh, th);
// EW smoothing for speed
G_CycSpd = 0.9*G_CycSpd + 0.1*d;
// integrate phase, keep wrapped
G_CycPh = wrapPi( G_CycPh + G_CycSpd );
G_LeadEq = bestI;
G_LeadTh = th;
}
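The cycle tracker leans on wrapPi()/angDiff(), which are defined elsewhere in the script. A plausible standalone reconstruction of their behavior, keeping an angle in (-pi, pi] and taking the shortest signed step:

```c
#include <assert.h>
#include <math.h>

/* Plausible standalone forms of the wrapPi()/angDiff() helpers used
   by updateEquationCycle() (actual definitions live elsewhere in the
   script): wrap an angle into (-pi, pi] and take the shortest signed
   angular difference from one phase to another. */
static const double TWO_PI = 6.283185307179586;

double wrap_pi(double a)
{
    while(a >   3.141592653589793) a -= TWO_PI;
    while(a <= -3.141592653589793) a += TWO_PI;
    return a;
}

double ang_diff(double from, double to)
{
    return wrap_pi(to - from);  /* shortest signed step from -> to */
}
```

The tracker then smooths this wrapped step with the 0.9/0.1 exponential filter above before integrating it into the phase, so a single noisy leader change cannot spike the measured cycle speed.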
// ----------------------------------------------------------------------
// L) Markov orchestration per bar (5m every bar; 1H & Relation on close)
// ----------------------------------------------------------------------
int is_H1_close(){ return (Bar % TF_H1) == 0; }
void updateAllMarkov(){
// Don’t touch Markov until all chains are allocated
if(!MC_Count || !MC_RowSum || !ML_Count || !ML_RowSum || !MR_Count || !MR_RowSum)
return;
// low TF always
updateMarkov_5M();
// on 1H close, refresh HTF & relation
if(is_H1_close()){
updateMarkov_1H();
updateMarkov_REL();
}
// expose HTF features to DTREE (the legacy MC_* are HTF via MH_*)
G_MCF_PBull = MH_PBullNext;
G_MCF_Entropy = MH_Entropy;
G_MCF_State = MH_Cur;
}
// ----------------------------------------------------------------------
// M) Rewire scheduler (chunked): decide batch and normalize periodically
// ----------------------------------------------------------------------
void maybeRewireNow(var lambda, var mean, var energy, var power){
int mb = mem_mb_est();
// Near budget? shrink batch or skip
if(mb >= UPDATE_MEM_HARD) return;
// choose batch by bar type
int batch = ifelse(is_H1_close(), REWIRE_BATCH_EQ_H1, REWIRE_BATCH_EQ_5M);
// soften by memory
if(mb >= REWIRE_MEM_SOFT) batch = (batch>>1);
if(batch < REWIRE_MIN_BATCH) batch = REWIRE_MIN_BATCH;
int finished = rewireEpochChunk(lambda,mean,energy,power,batch);
// Normalize proportions after completing a full pass (and every REWIRE_NORM_EVERY passes)
if(finished && (G_RewirePasses % REWIRE_NORM_EVERY) == 0){
normalizeProportions();
// write a header once and roll context id every META_EVERY full passes
writeEqHeaderOnce();
if((G_RewirePasses % META_EVERY) == 0){
// refresh context hash using adjacency (same as in rewireEpoch)
int D = G_D, i, total = G_N * D;
unsigned int h = 2166136261u;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
}
}
// ----------------------------------------------------------------------
// N) Heavy update scheduler (chunked) for each bar
// ----------------------------------------------------------------------
void runHeavyUpdates(var lambda, var mean, var energy, var power){
int mb = mem_mb_est();
// Near hard ceiling? skip heavy work this bar
if(mb >= UPDATE_MEM_HARD) return;
int batch = ifelse(is_H1_close(), UPDATE_BATCH_EQ_H1, UPDATE_BATCH_EQ_5M);
if(mb >= UPDATE_MEM_SOFT) batch = (batch>>1);
if(batch < UPDATE_MIN_BATCH) batch = UPDATE_MIN_BATCH;
heavyUpdateChunk(lambda,mean,energy,power,batch);
}
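maybeRewireNow() and runHeavyUpdates() share one throttle pattern: skip entirely at the hard memory ceiling, halve the batch above the soft one, and floor at a minimum. Isolated as a sketch (threshold values in the test are illustrative, not the script's constants):

```c
#include <assert.h>

/* The shared memory throttle of maybeRewireNow()/runHeavyUpdates():
   returns 0 (skip this bar) near the hard ceiling, otherwise a batch
   size halved under soft pressure and floored at a minimum. */
int throttled_batch(int mem_mb, int hard_mb, int soft_mb, int base, int min_batch)
{
    if(mem_mb >= hard_mb) return 0;      /* near hard ceiling: do nothing */
    int batch = base;
    if(mem_mb >= soft_mb) batch >>= 1;   /* soften by memory pressure */
    if(batch < min_batch) batch = min_batch;
    return batch;
}
```

Because the cursors (G_RewirePos, G_UpdatePos) roll across bars, a halved batch only stretches a full pass over more bars; it never drops work.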
// ----------------------------------------------------------------------
// O) Hit-rate scorer (EW average of 1-bar directional correctness)
// ----------------------------------------------------------------------
void updateHitRates(){
if(is(INITRUN)) return;
if(Bar <= LookBack) return;
int i;
var r = G_Ret1; // realized 1-bar return provided by outer loop
var sgnR = sign(r);
for(i=0;i<G_N;i++){
var a = (var)G_AdvPrev[i]; // last bar's advisor score (-1..+1)
var sgnA = ifelse(a > HIT_EPS, 1, ifelse(a < -HIT_EPS, -1, 0));
var hit = ifelse(sgnR == 0, 0.5, ifelse(sgnA == sgnR, 1.0, 0.0));
G_HitEW[i] = (fvar)((1.0 - HIT_ALPHA)*(var)G_HitEW[i] + HIT_ALPHA*hit);
G_HitN[i] += 1;
}
}
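One scorer step can be isolated: compare the sign of last bar's advisor score with the realized return (flat bars count half) and fold the outcome into an exponentially weighted average. A minimal sketch, with eps/alpha as illustrative stand-ins for HIT_EPS/HIT_ALPHA:

```c
#include <assert.h>
#include <math.h>

/* One updateHitRates() step for a single equation: dead-zone the
   advisor score by eps, score directional agreement with the realized
   return (0.5 on flat bars), and EW-average with weight alpha. */
double hit_ew_step(double ew, double adv_prev, double ret1,
                   double eps, double alpha)
{
    double sgnA = 0, hit;
    if(adv_prev >  eps) sgnA =  1;
    if(adv_prev < -eps) sgnA = -1;
    double sgnR = (ret1 > 0) - (ret1 < 0);
    if(sgnR == 0)         hit = 0.5;    /* flat bar: no information */
    else if(sgnA == sgnR) hit = 1.0;    /* called the direction */
    else                  hit = 0.0;    /* missed it */
    return (1.0 - alpha)*ew + alpha*hit;
}
```

The dead zone matters: a near-zero advisor score counts as "no call" and is scored as a miss on a directional bar, which pushes the gate toward equations that actually commit.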
// ----------------------------------------------------------------------
// P) Lambda/Gamma blend & accuracy sentinel
// ----------------------------------------------------------------------
var blendLambdaGamma(var lambda_raw, var gamma_raw){
// adapt blend weight a bit with entropy: more uncertainty -> lean on gamma
var w = clamp(G_FB_W + 0.15*(0.5 - G_MCF_Entropy), 0.4, 0.9);
var x = w*lambda_raw + (1.0 - w)*gamma_raw;
acc_update(lambda_raw, gamma_raw);
return x;
}
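The blend above leans toward gamma when the Markov entropy is high. As a standalone sketch (base_w stands in for G_FB_W; the 0.15 tilt and 0.4..0.9 clamp mirror the code above):

```c
#include <assert.h>
#include <math.h>

/* blendLambdaGamma() weight logic in isolation: more entropy means
   less trust in lambda, with the weight clamped to [0.4, 0.9]. */
double blend(double lambda, double gamma_sig, double base_w, double entropy)
{
    double w = base_w + 0.15*(0.5 - entropy);
    if(w < 0.4) w = 0.4;
    if(w > 0.9) w = 0.9;
    return w*lambda + (1.0 - w)*gamma_sig;
}
```

At entropy 0.5 the base weight passes through unchanged; fully uncertain regimes shave up to 0.075 off the lambda weight.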
// ----------------------------------------------------------------------
// Q) Per-bar orchestrator (no orders here; main run() will call this)
// Hardened for warmup & init guards (safe if called before LookBack)
// ----------------------------------------------------------------------
void alpha12_step(var ret1_now /*1-bar realized return for scoring*/)
{
// If somehow called before init completes, do nothing
if(!ALPHA12_READY) return;
// 1) Markov update & expose HTF features (always safe)
updateAllMarkov();
// --- Warmup guard ---------------------------------------------------
// Some environments (or external callers) may invoke this before LookBack.
// In warmup we only maintain projections and exit; no rewires/heavy updates.
if(Bar < LookBack){
// keep projection cache alive if arrays exist
computeProjection();
G_Ret1 = ret1_now; // harmless bookkeeping for scorer
// optionally adapt MC threshold very slowly even in warmup
{
var h = 0.5;
int i; var acc = 0;
for(i=0;i<G_N;i++) acc += (var)G_HitEW[i];
if(G_N > 0) h = acc/(var)G_N;
var target = MC_ACT
+ 0.15*(0.55 - h)
+ 0.10*(G_MCF_Entropy - 0.5);
target = clamp(target, 0.20, 0.50);
G_MC_ACT = 0.95*G_MC_ACT + 0.05*target;
}
return; // <-- nothing heavy before LookBack
}
// --------------------------------------------------------------------
// 2) Compute lambda from current projection snapshot
var lambda = 0.0;
{
computeProjection();
int K = keffClamped(); // clamped effective projection dimension
int k;
var e = 0;
var pwr = 0;
for(k = 0; k < K; k++){
var z = (var)G_Z[k];
e += z;
pwr += z*z;
}
var mean = 0;
var energy = pwr; // total energy = sum of squares
var power = 0;
if(K > 0){
mean = e / (var)K;
power = pwr / (var)K;
}
// local "lambda" = trend proxy mixing price-like aggregates
lambda = 0.7*tanh(mean) + 0.3*tanh(0.05*power);
// 3) Maybe rewire a slice this bar (uses same features)
maybeRewireNow(lambda, mean, energy, power);
// 4) Heavy updates for a slice
runHeavyUpdates(lambda, mean, energy, power);
}
// 5) Gamma from coarse network projection (stable, uses whole state)
var gamma = projectNet();
// 6) Blend & store accuracy sentinel
var x = blendLambdaGamma(lambda, gamma);
// 7) Update ring / equation-cycle tracker
updateEquationCycle();
// 8) Score previous advisors against realized 1-bar return
G_Ret1 = ret1_now;
updateHitRates();
// 9) Depth manager & elastic growth controller (memory-aware)
depth_manager_runtime();
edc_runtime();
// 10) Adapt MC acceptance threshold by hit-rate/entropy
{
var h = 0.0;
int i;
for(i = 0; i < G_N; i++) h += (var)G_HitEW[i];
if(G_N > 0) h /= (var)G_N; else h = 0.5;
var target = MC_ACT
+ 0.15*(0.55 - h)
+ 0.10*(G_MCF_Entropy - 0.5);
target = clamp(target, 0.20, 0.50);
G_MC_ACT = 0.9*G_MC_ACT + 0.1*target;
}
// silence unused warning if trading block is removed
x = x;
}
// ==================== Part 4/4 — Runtime, Trading, Init/Cleanup ====================
// ---- globals used by Part 4
var G_LastSig = 0; // blended lambda/gamma used for trading view
int G_LastBarTraded = -1;
// ---- small guards for optional plotting
void plotSafe(string name, var v){
if(ENABLE_PLOTS && !G_ChartsOff) plot(name, v, NEW|LINE, 0);
}
// ---- lite-C compatible calloc replacement ----
void* xcalloc(int count, int size) // removed 'static' (lite-C)
{
int bytes = count*size;
void* p = malloc(bytes);
if(p) memset(p,0,bytes);
else quit("Alpha12: OOM in xcalloc");
return p;
}
// ======================= Markov alloc/free =======================
void allocMarkov()
{
int NN = MC_STATES*MC_STATES;
int bytesMat = NN*sizeof(int);
int bytesRow = MC_STATES*sizeof(int);
// --- HTF (1H) chain (legacy MC_*) ---
MC_Count = (int*)malloc(bytesMat);
MC_RowSum= (int*)malloc(bytesRow);
if(!MC_Count || !MC_RowSum) quit("Alpha12: OOM in allocMarkov(MC)");
memset(MC_Count, 0, bytesMat);
memset(MC_RowSum, 0, bytesRow);
// --- LTF (5M) chain ---
ML_Count = (int*)malloc(bytesMat);
ML_RowSum= (int*)malloc(bytesRow);
if(!ML_Count || !ML_RowSum) quit("Alpha12: OOM in allocMarkov(ML)");
memset(ML_Count, 0, bytesMat);
memset(ML_RowSum, 0, bytesRow);
// --- Relation chain (links 5M & 1H) ---
MR_Count = (int*)malloc(bytesMat);
MR_RowSum= (int*)malloc(MR_STATES*sizeof(int)); // MR_STATES == MC_STATES
if(!MR_Count || !MR_RowSum) quit("Alpha12: OOM in allocMarkov(MR)");
memset(MR_Count, 0, bytesMat);
memset(MR_RowSum, 0, MR_STATES*sizeof(int));
// --- initial states & defaults ---
MC_Prev = MH_Prev = -1; MC_Cur = MH_Cur = 0;
ML_Prev = -1; ML_Cur = 0;
MR_Prev = -1; MR_Cur = 0;
MC_PBullNext = 0.5; MC_Entropy = 1.0;
ML_PBullNext = 0.5; ML_Entropy = 1.0;
MR_PBullNext = 0.5; MR_Entropy = 1.0;
}
void freeMarkov(){
if(MC_Count) free(MC_Count);
if(MC_RowSum)free(MC_RowSum);
if(ML_Count) free(ML_Count);
if(ML_RowSum)free(ML_RowSum);
if(MR_Count) free(MR_Count);
if(MR_RowSum)free(MR_RowSum);
MC_Count=MC_RowSum=ML_Count=ML_RowSum=MR_Count=MR_RowSum=0;
}
// ======================= Alpha12 init / cleanup =======================
void Alpha12_init()
{
if(ALPHA12_READY) return;
// 1) Session context first
asset(ASSET_SYMBOL);
BarPeriod = BAR_PERIOD;
set(PLOTNOW); // plotting gated by ENABLE_PLOTS at call sites
// 2) Warmup window
LookBack = max(300, NWIN);
// 3) Clamp effective projection size and reset projection cache
if(G_Keff < 1) G_Keff = 1;
if(G_Keff > G_K) G_Keff = G_K;
G_ProjBar = -1;
G_ProjK = -1;
// 4) Core allocations
allocateNet();
allocMarkov();
// 5) Depth LUT + initial tree + indexing
if(!G_DepthW) G_DepthW = (var*)malloc(DEPTH_LUT_SIZE*sizeof(var));
if(!Root) Root = createNode(MAX_DEPTH);
G_RT_TreeMaxDepth = MAX_DEPTH;
refreshDepthW();
reindexTreeAndMap(); // sizes pred cache & ring angles
// 6) Bootstrap: RP, projection, one full rewire pass (also sets proportions & CtxID)
rewireInit();
computeProjection();
rewireEpoch(0,0,0,0);
// 7) Logging header once
writeEqHeaderOnce();
// 8) Reset rolling cursors / exposed Markov defaults
G_RewirePos = 0; G_RewirePasses = 0;
G_UpdatePos = 0; G_UpdatePasses = 0;
G_MCF_PBull = 0.5; G_MCF_Entropy = 1.0; G_MCF_State = 0;
// 9) Done
ALPHA12_READY = 1;
printf("\n[Alpha12] init done: N=%i D=%i K=%i (Keff=%i) Depth=%i est=%i MB",
G_N, G_D, G_K, G_Keff, G_RT_TreeMaxDepth, mem_mb_est());
}
void Alpha12_cleanup(){
freeMarkov();
if(Root){ freeTree(Root); Root=0; }
freeNodePool();
if(G_DepthW){ free(G_DepthW); G_DepthW=0; }
freeNet();
ALPHA12_READY = 0;
}
// ======================= Helpers for realized 1-bar return =======================
var realizedRet1(){
// Basic 1-bar return proxy from close series
vars C = series(priceClose());
if(Bar <= LookBack) return 0;
return C[0] - C[1];
}
// ======================= Trading gate =======================
// Combines blended network signal with Markov PBull gate.
// Returns signed signal in [-1..1].
var tradeSignal(){
// --- EARLY GUARDS ---
if(!ALPHA12_READY) return 0; // init not completed
if(!G_RP || !G_Z || !G_StateSq) return 0; // projection buffers not allocated
// Recompute a lightweight lambda/gamma snapshot for display/decisions.
// (Alpha12_step already ran heavy ops; this is cheap.)
computeProjection();
int Keff = keffClamped(); // clamped effective projection size
if(Keff <= 0) return 0; // nothing to project yet; be safe
int k;
var e = 0;
var pwr = 0;
for(k = 0; k < Keff; k++){
var z = (var)G_Z[k];
e += z;
pwr += z*z;
}
// --- NO TERNARY: explicit guards for lite-C ---
var mean = 0;
var power = 0;
if(Keff > 0){
mean = e / (var)Keff;
power = pwr / (var)Keff;
}
var lambda = 0.7*tanh(mean) + 0.3*tanh(0.05*power);
var gamma = projectNet();
var x = blendLambdaGamma(lambda, gamma);
G_LastSig = x;
// Markov (HTF) directional gating (no ternaries)
var gLong = 0;
var gShort = 0;
if(G_MCF_PBull >= PBULL_LONG_TH) gLong = 1.0;
if(G_MCF_PBull <= PBULL_SHORT_TH) gShort = 1.0;
// Symmetric gate around x (no ternary)
var s = 0;
if(x > 0) s = x * gLong;
else s = x * gShort;
// Modulate by relation chain confidence (lower entropy -> stronger)
var conf = 1.0 - 0.5*(MR_Entropy); // 0.5..1.0 typically
s *= conf;
return clamp(s, -1, 1);
}
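The gate logic in tradeSignal() can be condensed: longs need PBull above one threshold, shorts below another, and the relation chain's entropy scales confidence. A sketch with 0.6/0.4 standing in for PBULL_LONG_TH/PBULL_SHORT_TH (whose actual values are defined elsewhere in the script):

```c
#include <assert.h>
#include <math.h>

/* Condensed tradeSignal() gate: the blended signal x passes only when
   the HTF Markov probability agrees with its direction, then relation
   entropy (assumed in [0,1]) scales confidence into [0.5, 1]. The
   0.6/0.4 thresholds are illustrative stand-ins. */
double gated_signal(double x, double pbull, double rel_entropy)
{
    double g_long = 0, g_short = 0;
    if(pbull >= 0.6) g_long  = 1.0;
    if(pbull <= 0.4) g_short = 1.0;
    double s;
    if(x > 0) s = x*g_long;
    else      s = x*g_short;
    s *= 1.0 - 0.5*rel_entropy;          /* confidence in [0.5, 1] */
    if(s >  1) s =  1;
    if(s < -1) s = -1;
    return s;
}
```

In the indecisive PBull band both gates stay shut, so the system holds rather than trading a signal the regime model does not confirm.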
// ======================= Position sizing & risk =======================
var posSizeFromSignal(var s){
// Simple linear sizing, capped
var base = 1;
var scale = 2.0 * abs(s); // 0..2
return base * (0.5 + 0.5*scale); // 0.5..1.5 lots (example)
}
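The sizing map above is linear in |s|: a signal magnitude in [0, 1] becomes a lot multiplier in [0.5, 1.5] around a base of 1 lot. The same arithmetic in portable C:

```c
#include <assert.h>
#include <math.h>

/* posSizeFromSignal() arithmetic: |s| in [0,1] -> 2|s| in [0,2]
   -> 0.5 + 0.5*(2|s|) in [0.5, 1.5] lots around a 1-lot base. */
double pos_size(double s)
{
    double base = 1.0;
    double scale = 2.0*fabs(s);          /* 0..2 */
    return base*(0.5 + 0.5*scale);       /* 0.5..1.5 */
}
```

Even a zero-strength (but gated-through) signal still sizes at half a lot; only the gate in tradeSignal(), not the sizer, can take exposure to zero.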
void placeOrders(var s){
// Basic long/short logic with soft handoff
if(s > 0){
if(!NumOpenLong) enterLong(posSizeFromSignal(s));
if(NumOpenShort) exitShort();
} else if(s < 0){
if(!NumOpenShort) enterShort(posSizeFromSignal(s));
if(NumOpenLong) exitLong();
}
// if s==0 do nothing (hold)
}
// ======================= Main per-bar runtime =======================
void Alpha12_bar(){
// 1) Provide last realized return to the engine scorer
var r1 = realizedRet1();
// 2) Run the engine step (updates Markov, rewires slices, heavy updates, etc.)
alpha12_step(r1);
// 3) Build trading signal & place orders (once per bar)
var s = tradeSignal();
placeOrders(s);
// 4) Plots (guarded)
plotSafe("PBull(1H)", 100*(G_MCF_PBull-0.5));
plotSafe("PBull(5M)", 100*(ML_PBullNext-0.5));
plotSafe("PBull(Rel)", 100*(MR_PBullNext-0.5));
plotSafe("Entropy(1H)", 100*(G_MCF_Entropy));
plotSafe("Sig", 100*G_LastSig);
}
// ---- Zorro hooks (after macros!) ----
function init(){ Alpha12_init(); }
function run()
{
// keep it lean; do NOT change BarPeriod/asset here anymore
if(Bar < LookBack){
updateAllMarkov();
return;
}
Alpha12_bar();
}
function cleanup(){ Alpha12_cleanup(); }
Last edited by TipmyPip; Yesterday at 10:05.