OP
Member
Joined: Sep 2017
Posts: 164
Canticle of the Rewoven Mandala

1) Proem
Time arrays itself as a lattice of finite breaths, and each breath inscribes a glyph upon the quiet. Most glyphs are rumor; a few ring like bells in stone cloisters. The ear that listens must learn to weigh rumor without banishing it, to honor bell without worshiping echo. Thus a measure awakens between noise and sign, a small invariance that keeps the pulse when weather changes.

2) The Alphabet of Signs
A compact alphabet approaches the gate of discernment, where amplitude is baptized into meaning. The gate narrows under fog and widens under stars, so that a constant portion of passage remains sacred. Soft priors cradle the rare and the first, then yield as witness accumulates. In this way, significance is renormalized without losing humility.

3) The Harmonic Tree
Depth vows restraint: each rung bears less authority by a gentle power, rather than by decree. Branches breathe in sines and breathe out memory, their phases braided by quiet sums. Weight drifts until usefulness nods, then lingers as if remembering why it came. The crown does not command the root; they meet in an average that forgives their difference.

4) Sun and Moon
Two lights shepherd the pilgrim number—one hewn from structure, one echoed by multitudes. A small scribe keeps the concordance of their songs, counting when agreement appears more than once. The weight shifts toward the truer singer of the hour, and no light is shamed for dimming. So guidance becomes a balance of reverence and revision.

5) The Reweaving
At chosen beats the web of attention is unspooled and rewoven, the knots reconsidered without rancor. In settled air the cast opens wide; in crosswinds the mesh draws close to the mast. Threads avoid duplicating their crossings, and stale tangles are quietly undone. Each new pattern is signed by a modest seal so the loom remembers where it has wandered.

6) Poverty and Plenty
Form owns only what it can bless. When the bowl brims, leaves fall from distant branches with gratitude; when there is room and reason, a single ring of buds appears. Every addition is a trial, and every trial keeps a door for return. Thus growth is reversible, and thrift becomes a geometry.

7) A Single Measure
Merit is counted on one bead: clarity earned minus burden carried. The bead rolls forward of its own accord, and the mandala bends to meet it, not the reverse. When the bead stalls, the slope is gently reversed so the climb resumes by another path. No trumpet announces this; the stone merely remembers the foot.

8) Seeds of Counsel
Advice arrives as a seed and is rationed like lamp oil in winter. Some nights a few wicks suffice; some dawns welcome a small festival of flame. Diversity is invited by lawful dice, so surprise is shaped, not squandered. The seed is single, but the harvest wears many faces.

9) Proportions as Offering
Every voice brings an offering to the altar, and the offerings are washed until their sum is one. No bowl is allowed to overflow by insistence, and none is left dry by shyness. The sharing is impartial to volume, partial to coherence. Thus chorus replaces clamor without silencing the small.

10) Neighbor Grace
Affinity is a distance remembered in three tongues: depth, tempo, and temper. Near does not mean same, and far is not foreign; kinship is a gradient, not a border. Trust is earned by steadiness and granted in weights that bow to it. From many neighbors a single counsel is poured, proportioned by grace rather than by force.
11) Fading Without Forgetting
Incense rises; ash remains: so the near past perfumes more than the far, and the far is not despised. Memory decays as a hymn, not as a fall, allowing drift without lurch. The ledger of moments is tempered by forgetting that remembers why it forgets. In this way continuity holds hands with change.

12) The Small Chronicle
Each day leaves a narrow chronicle: the hour, the season, a modest seal of the tapestry, and the tilt of the lights. Numbers are trimmed like candles so the wax does not drown the flame. The script favors witness over spectacle, sufficiency over excess. It is enough that a future monk can nod and say, “Yes, I see.”

13) Postures That Emerge
When order visits, nets widen, depth speaks, counsel is generous, and steps chime with the stones. When weather breaks, meshes tighten, the surface steadies, counsel turns frugal, and stillness earns its wage. The glide between these postures is by small hinges, not by leaps. Thus resilience ceases to be a tactic and becomes a habit.

14) The Rule in One Sentence
Move only when the oracle and the breath agree; otherwise keep vigil. Let the form stay light, the changes small, and the diary honest. Spend attention where harmony gathers and thrift where it frays. In all things, prefer the reversible path that leaves the meadow unscarred.

// ======================================================================
// Alpha12 - Markov-augmented Harmonic D-Tree Engine (Candlestick 122-dir)
// with runtime memory shaping, selective depth pruning,
// and elastic accuracy-aware depth growth + 10 performance upgrades.
// ======================================================================
// ================= USER CONFIG =================
#define ASSET_SYMBOL "EUR/USD"
#define BAR_PERIOD 60
#define MC_ACT 0.30 // initial threshold on |CDL| in [-1..1] to accept a pattern
#define PBULL_LONG_TH 0.60 // Markov gate for long
#define PBULL_SHORT_TH 0.40 // Markov gate for short
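// Illustrative sketch only (the trade logic of the full script is not shown in
// this post): one plausible way the two gates are meant to combine is that the
// candle scan must accept a pattern (|CDL| >= the acceptance threshold) and the
// Markov row probability must lean far enough to one side. MC_Cur, MC_PBullNext
// and MC_NONE are the globals defined further down.
/*
if(MC_Cur != MC_NONE){
	if(MC_PBullNext > PBULL_LONG_TH) enterLong();
	else if(MC_PBullNext < PBULL_SHORT_TH) enterShort();
}
*/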
// ===== Debug toggles (Fix #1 - chart/watch growth off by default) =====
#define ENABLE_PLOTS 0 // 0 = no plot buffers; 1 = enable plot() calls
#define ENABLE_WATCH 0 // 0 = disable watch() probes; 1 = enable
// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES 3
#define MAX_DEPTH 4
#define NWIN 256
#define NET_EQNS 100
#define DEGREE 4
#define KPROJ 16
#define REWIRE_EVERY 127
#define CAND_NEIGH 8
// ===== LOGGING CONTROLS (memory management) =====
#define LOG_EQ_TO_ONE_FILE 1 // 1: single consolidated EQ CSV; 0: per-eq files
#define LOG_EXPR_TEXT 0 // 0: omit full expression (store signature only); 1: include text
#define META_EVERY 4 // write META every N rewires
#define LOG_EQ_SAMPLE NET_EQNS // limit number of equations logged
#define EXPR_MAXLEN 512 // cap expression string
#define LOG_FLOAT_TRIM
// decimate Markov log cadence
#define LOG_EVERY 16
// Optional: cadence for candle scan/Markov update (1 = every bar)
#define MC_EVERY 1
// ---- DTREE feature sizes (extended for Markov features) ----
#define ADV_EQ_NF 13 // +1: per-eq hit-rate feature (PATCH A)
#define ADV_PAIR_NF 12 // per-pair features (kept for completeness; DTREE pair disabled)
// ================= Candles -> 122-state Markov =================
#define MC_NPAT 61
#define MC_STATES 123 // 1 + 2*MC_NPAT
#define MC_NONE 0
#define MC_LAPLACE 1.0 // kept for reference; runtime uses G_MC_Alpha
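// Layout of the 123 states, for reference: state 0 = "no pattern"; pattern p
// (0..60) maps to state 1+2*p for a bearish hit and 1+2*p+1 for a bullish hit,
// so the directional states run 1..122 (see MC_stateFromCDL/MC_isBull below).
// Worked example:
//   int s = 1 + 2*5 + 1;        // pattern #5, bullish -> state 12
//   int p = (s-1)/2;            // 5
//   int isBull = ((s-1)%2)==1;  // 1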
// ================= Runtime Memory / Accuracy Manager =================
#define MEM_BUDGET_MB 50
#define MEM_HEADROOM_MB 5
#define DEPTH_STEP_BARS 16
#define KEEP_CHILDREN_HI 2
#define KEEP_CHILDREN_LO 1
#define RUNTIME_MIN_DEPTH 2
int G_ShedStage = 0; // 0..2
int G_LastDepthActBar = -999999;
int G_ChartsOff = 0; // gates plot()
int G_LogsOff = 0; // gates file_append cadence
int G_SymFreed = 0; // expression buffers freed
int G_RT_TreeMaxDepth = MAX_DEPTH;
// ---- Accuracy sentinel (EW correlation of lambda vs gamma) ----
var ACC_mx=0, ACC_my=0, ACC_mx2=0, ACC_my2=0, ACC_mxy=0;
var G_AccCorr = 0; // [-1..1]
var G_AccBase = 0; // first seen sentinel
int G_HaveBase = 0;
// ---- Elastic depth tuner (small growth trials with rollback) ----
#define DEPTH_TUNE_BARS 64 // start a growth trial this often (when memory allows)
#define TUNE_DELAY_BARS 64 // evaluate the trial after this many bars
var G_UtilBefore = 0, G_UtilAfter = 0;
int G_TunePending = 0;
int G_TuneStartBar = 0;
int G_TuneAction = 0; // +1 grow trial, 0 none
// ======================================================================
// Types & globals used by memory estimator
// ======================================================================
// HARMONIC D-TREE type
typedef struct Node { var v; var r; void* c; int n; int d; } Node;
// ====== Node pool (upgrade #2) ======
typedef struct NodeChunk {
struct NodeChunk* next;
int used; // 4 bytes
int _pad; // 4 bytes -> ensures nodes[] starts at 8-byte offset on 32-bit
Node nodes[256]; // each Node contains doubles; keep this 8-byte aligned
} NodeChunk;
NodeChunk* G_ChunkHead = 0;
Node* G_FreeList = 0;
Node* poolAllocNode()
{
if(G_FreeList){
Node* n = G_FreeList;
G_FreeList = (Node*)n->c;
n->c = 0; n->n = 0; n->d = 0; n->v = 0; n->r = 0;
return n;
}
if(!G_ChunkHead || G_ChunkHead->used >= 256){
NodeChunk* ch = (NodeChunk*)malloc(sizeof(NodeChunk));
if(!ch) { quit("Alpha12: OOM allocating NodeChunk (poolAllocNode)"); return 0; }
// ensure clean + alignment-friendly start
memset(ch, 0, sizeof(NodeChunk));
ch->next = G_ChunkHead;
ch->used = 0;
G_ChunkHead = ch;
}
if(G_ChunkHead->used < 0 || G_ChunkHead->used >= 256){
quit("Alpha12: Corrupt node pool state");
return 0;
}
return &G_ChunkHead->nodes[G_ChunkHead->used++];
}
void poolFreeNode(Node* u){ if(!u) return; u->c = (void*)G_FreeList; G_FreeList = u; }
void freeNodePool()
{
NodeChunk* ch = G_ChunkHead;
while(ch){ NodeChunk* nx = ch->next; free(ch); ch = nx; }
G_ChunkHead = 0; G_FreeList = 0;
}
// Minimal globals needed before mem estimator
Node* Root = 0;
Node** G_TreeIdx = 0;
int G_TreeN = 0;
int G_TreeCap = 0;
var G_DTreeExp = 0;
// ---- (upgrade #1) depth LUT for pow() ----
#define DEPTH_LUT_SIZE (MAX_DEPTH + 1) // <- keep constant for lite-C
var* G_DepthW = 0; // heap-allocated LUT
var G_DepthExpLast = -1.0; // sentinel as var
Node G_DummyNode; // treeAt() can return &G_DummyNode
// Network sizing globals (used by mem estimator)
int G_N = NET_EQNS;
int G_D = DEGREE;
int G_K = KPROJ;
// Optional expression buffer pointer (referenced by mem estimator)
string* G_Sym = 0;
// Forward decls that reference Node
var nodePredictability(Node* t); // fwd decl (needed by predByTid)
var nodeImportance(Node* u); // fwd decl (uses nodePredictability below)
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK);
void reindexTreeAndMap();
// Forward decls for advisor functions (so adviseSeed can call them)
var adviseEq(int i, var lambda, var mean, var energy, var power);
var advisePair(int i,int j, var lambda, var mean, var energy, var power);
// ----------------------------------------------------------------------
// === Adaptive knobs & sentinels (NEW) ===
var G_FB_W = 0.70; // (1) dynamic lambda/gamma blend weight 0..1
var G_MC_ACT = MC_ACT; // (2) adaptive candlestick acceptance threshold
var G_AccRate = 0; // (2) EW acceptance rate of (state != 0)
// (3) advisor budget per bar (replaces the macro)
int G_AdviseMax = 16;
// (6) Markov Laplace smoothing (runtime)
var G_MC_Alpha = 1.0;
// (7) adaptive candidate breadth for adjacency search
int G_CandNeigh = CAND_NEIGH;
// (8) effective projection dimension (= KPROJ or KPROJ/2)
int G_Keff = KPROJ;
// (5) depth emphasis hill-climber
var G_DTreeExpStep = 0.05;
int G_DTreeExpDir = 1;
// ---- Advise budget/rotation (Fix #2) ----
#define ADVISE_ROTATE 1 // 1 = rotate which equations get DTREE each bar
int allowAdvise(int i)
{
if(ADVISE_ROTATE){
int groups = NET_EQNS / G_AdviseMax;
if(groups < 1) groups = 1;
return ((i / G_AdviseMax) % groups) == (Bar % groups);
} else {
return (i < G_AdviseMax);
}
}
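// Worked example of the rotation above, assuming the defaults in this post:
// with NET_EQNS = 100 and G_AdviseMax = 16 there are 6 groups of 16 equations,
// so on Bar%6 == 0 equations 0..15 (and the remainder 96..99) may call DTREE,
// on Bar%6 == 1 equations 16..31, and so on; each equation is advised roughly
// once every 6 bars instead of every bar.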
// ======================================================================
// A) Tight-memory switches and compact types
// ======================================================================
#define TIGHT_MEM 1 // turn on compact types for arrays
// lite-C precompiler doesn't support '#if' expressions.
// Use presence test instead (LOG_EQ_TO_ONE_FILE defined = single-file mode).
#ifdef LOG_EQ_TO_ONE_FILE
/* consolidated EQ CSV -> don't enable extra meta */
#else
#define KEEP_TOP_META
#endif
#ifdef TIGHT_MEM
typedef float fvar; // 4B instead of 8B 'var' for large coefficient arrays
typedef short i16; // -32768..32767 indices
typedef char i8; // small enums/modes
#else /* not TIGHT_MEM */
typedef var fvar;
typedef int i16;
typedef int i8;
#endif
// ---- tree byte size (counts nodes + child pointer arrays) ----
int tree_bytes(Node* u)
{
if(!u) return 0;
int SZV = sizeof(var), SZI = sizeof(int), SZP = sizeof(void*);
int sz_node = 2*SZV + SZP + 2*SZI;
int total = sz_node;
if(u->n > 0 && u->c) total += u->n * SZP;
int i;
for(i=0;i<u->n;i++)
total += tree_bytes(((Node**)u->c)[i]);
return total;
}
// ======================================================================
// Optimized memory estimator & predictability caches
// ======================================================================
int G_MemFixedBytes = 0; // invariant part (arrays, Markov + pointer vec + expr opt)
int G_TreeBytesCached = 0; // current D-Tree structure bytes
var* G_PredNode = 0; // length == G_TreeN; -2 = not computed this bar
int G_PredLen = 0;
int G_PredCap = 0; // (upgrade #5)
int G_PredCacheBar = -1;
void recalcTreeBytes(){ G_TreeBytesCached = tree_bytes(Root); }
//
// C) Updated memory estimator (matches compact types).
// Includes pointer vector & optional expr into the "fixed" baseline.
// Note: we refresh this when capacity/logging changes.
//
void computeMemFixedBytes()
{
int N = G_N, D = G_D, K = G_K;
int SZV = sizeof(var), SZF = sizeof(fvar), SZI16 = sizeof(i16), SZI8 = sizeof(i8), SZP = sizeof(void*);
int b = 0;
// --- core state (var-precision) ---
b += N*SZV*2; // G_State, G_Prev
// --- adjacency & ids ---
b += N*D*SZI16; // G_Adj
b += N*SZI16; // G_EqTreeId
b += N*SZI8; // G_Mode
// --- random projection ---
b += K*N*SZF; // G_RP
b += K*SZF; // G_Z
// --- weights & params (fvar) ---
b += N*SZF*(8); // G_W*
b += N*SZF*(7 + 7); // A1*, A2*
b += N*SZF*(2 + 2); // G1mean,G1E,G2P,G2lam
b += N*SZF*(2); // TAlpha, TBeta
b += N*SZF*(1); // G_TreeTerm
#ifdef KEEP_TOP_META
b += N*(SZI16 + SZF); // G_TopEq, G_TopW
#endif
// --- proportions ---
b += N*SZF*2; // G_PropRaw, G_Prop
// --- per-equation hit-rate bookkeeping --- (PATCH C)
b += N*SZF; // G_HitEW
b += N*SZF; // G_AdvPrev
b += N*sizeof(int); // G_HitN
// --- Markov storage (unchanged ints) ---
b += MC_STATES*MC_STATES*sizeof(int) + MC_STATES*sizeof(int);
// pointer vector for tree index (capacity part)
b += G_TreeCap*SZP;
// optional expression buffers
if(LOG_EXPR_TEXT && G_Sym && !G_SymFreed) b += N*EXPR_MAXLEN;
G_MemFixedBytes = b;
}
void ensurePredCache()
{
if(G_PredCacheBar != Bar){
if(G_PredNode){
int i, n = G_PredLen; // use allocated length, not G_TreeN
for(i=0;i<n;i++) G_PredNode[i] = -2;
}
G_PredCacheBar = Bar;
}
}
var predByTid(int tid)
{
if(!G_TreeIdx || tid < 0 || tid >= G_TreeN || !G_TreeIdx[tid]) return 0.5;
ensurePredCache();
// Guard reads/writes by the allocated cache length
if(G_PredNode && tid < G_PredLen && G_PredNode[tid] > -1.5)
return G_PredNode[tid];
Node* t = G_TreeIdx[tid];
var p = 0.5;
if(t) p = nodePredictability(t);
if(G_PredNode && tid < G_PredLen)
G_PredNode[tid] = p;
return p;
}
// ======================================================================
// Conservative in-script memory estimator (arrays + pointers) - O(1)
// ======================================================================
int mem_bytes_est()
{
// With the updated computeMemFixedBytes() counting pointer capacity
// and optional expr buffers, only add current tree structure here.
return G_MemFixedBytes + G_TreeBytesCached;
}
int mem_mb_est(){ return mem_bytes_est() / (1024*1024); }
// === total memory (Zorro-wide) in MB ===
int memMB(){ return (int)(memory(0)/(1024*1024)); }
// light one-shot shedding
void shed_zero_cost_once()
{
if(G_ShedStage > 0) return;
reset(PLOTNOW); G_ChartsOff = 1; // stop chart buffers
G_LogsOff = 1; // decimate logs (gated later)
G_ShedStage = 1;
}
void freeExprBuffers()
{
if(!G_Sym || G_SymFreed) return;
int i; for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]);
free(G_Sym); G_Sym = 0; G_SymFreed = 1;
computeMemFixedBytes(); // refresh baseline
}
// depth manager (prune & shedding)
void depth_manager_runtime()
{
int trigger = MEM_BUDGET_MB - MEM_HEADROOM_MB;
int mb = mem_mb_est();
if(mb < trigger) return;
if(G_ShedStage == 0) shed_zero_cost_once();
if(G_ShedStage <= 1){
if(LOG_EXPR_TEXT==0 && !G_SymFreed) freeExprBuffers();
G_ShedStage = 2;
}
int overBudget = (mb >= MEM_BUDGET_MB);
if(!overBudget && (Bar - G_LastDepthActBar < DEPTH_STEP_BARS))
return;
while(G_RT_TreeMaxDepth > RUNTIME_MIN_DEPTH)
{
int keepK = ifelse(mem_mb_est() < MEM_BUDGET_MB + 2, KEEP_CHILDREN_HI, KEEP_CHILDREN_LO);
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, keepK);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
mb = mem_mb_est();
printf("\n[DepthMgr] depth=%i keepK=%i est=%i MB", G_RT_TreeMaxDepth, keepK, mb);
if(mb < trigger) break;
}
G_LastDepthActBar = Bar;
}
// ----------------------------------------------------------------------
// 61 candlestick patterns (Zorro spellings kept). Each returns [-100..100].
// We rescale to [-1..1] for Markov state construction.
// ----------------------------------------------------------------------
int buildCDL_TA61(var* out, string* names)
{
int n = 0;
#define ADD(Name, Call) do{ var v = (Call); if(out) out[n] = v/100.; if(names) names[n] = Name; n++; }while(0)
ADD("CDL2Crows", CDL2Crows());
ADD("CDL3BlackCrows", CDL3BlackCrows());
ADD("CDL3Inside", CDL3Inside());
ADD("CDL3LineStrike", CDL3LineStrike());
ADD("CDL3Outside", CDL3Outside());
ADD("CDL3StarsInSouth", CDL3StarsInSouth());
ADD("CDL3WhiteSoldiers", CDL3WhiteSoldiers());
ADD("CDLAbandonedBaby", CDLAbandonedBaby(0.3));
ADD("CDLAdvanceBlock", CDLAdvanceBlock());
ADD("CDLBeltHold", CDLBeltHold());
ADD("CDLBreakaway", CDLBreakaway());
ADD("CDLClosingMarubozu", CDLClosingMarubozu());
ADD("CDLConcealBabysWall", CDLConcealBabysWall());
ADD("CDLCounterAttack", CDLCounterAttack());
ADD("CDLDarkCloudCover", CDLDarkCloudCover(0.3));
ADD("CDLDoji", CDLDoji());
ADD("CDLDojiStar", CDLDojiStar());
ADD("CDLDragonflyDoji", CDLDragonflyDoji());
ADD("CDLEngulfing", CDLEngulfing());
ADD("CDLEveningDojiStar", CDLEveningDojiStar(0.3));
ADD("CDLEveningStar", CDLEveningStar(0.3));
ADD("CDLGapSideSideWhite", CDLGapSideSideWhite());
ADD("CDLGravestoneDoji", CDLGravestoneDoji());
ADD("CDLHammer", CDLHammer());
ADD("CDLHangingMan", CDLHangingMan());
ADD("CDLHarami", CDLHarami());
ADD("CDLHaramiCross", CDLHaramiCross());
ADD("CDLHignWave", CDLHignWave());
ADD("CDLHikkake", CDLHikkake());
ADD("CDLHikkakeMod", CDLHikkakeMod());
ADD("CDLHomingPigeon", CDLHomingPigeon());
ADD("CDLIdentical3Crows", CDLIdentical3Crows());
ADD("CDLInNeck", CDLInNeck());
ADD("CDLInvertedHammer", CDLInvertedHammer());
ADD("CDLKicking", CDLKicking());
ADD("CDLKickingByLength", CDLKickingByLength());
ADD("CDLLadderBottom", CDLLadderBottom());
ADD("CDLLongLeggedDoji", CDLLongLeggedDoji());
ADD("CDLLongLine", CDLLongLine());
ADD("CDLMarubozu", CDLMarubozu());
ADD("CDLMatchingLow", CDLMatchingLow());
ADD("CDLMatHold", CDLMatHold(0.5));
ADD("CDLMorningDojiStar", CDLMorningDojiStar(0.3));
ADD("CDLMorningStar", CDLMorningStar(0.3));
ADD("CDLOnNeck", CDLOnNeck());
ADD("CDLPiercing", CDLPiercing());
ADD("CDLRickshawMan", CDLRickshawMan());
ADD("CDLRiseFall3Methods", CDLRiseFall3Methods());
ADD("CDLSeperatingLines", CDLSeperatingLines());
ADD("CDLShootingStar", CDLShootingStar());
ADD("CDLShortLine", CDLShortLine());
ADD("CDLSpinningTop", CDLSpinningTop());
ADD("CDLStalledPattern", CDLStalledPattern());
ADD("CDLStickSandwhich", CDLStickSandwhich());
ADD("CDLTakuri", CDLTakuri());
ADD("CDLTasukiGap", CDLTasukiGap());
ADD("CDLThrusting", CDLThrusting());
ADD("CDLTristar", CDLTristar());
ADD("CDLUnique3River", CDLUnique3River());
ADD("CDLUpsideGap2Crows", CDLUpsideGap2Crows());
ADD("CDLXSideGap3Methods", CDLXSideGap3Methods());
#undef ADD
return n; // 61
}
// ================= Markov storage & helpers =================
static int* MC_Count; // [MC_STATES*MC_STATES]
static int* MC_RowSum; // [MC_STATES]
static int MC_Prev = -1;
static int MC_Cur = 0;
static var MC_PBullNext = 0.5;
static var MC_Entropy = 0.0;
#define MC_IDX(fr,to) ((fr)*MC_STATES + (to))
int MC_stateFromCDL(var* cdl /*len=61*/, var thr)
{
int i, best=-1; var besta=0;
for(i=0;i<MC_NPAT;i++){
var a = abs(cdl[i]);
if(a>besta){ besta=a; best=i; }
}
if(best<0) return MC_NONE;
if(besta < thr) return MC_NONE;
int bull = (cdl[best] > 0);
return 1 + 2*best + bull; // 1..122
}
int MC_isBull(int s){ if(s<=0) return 0; return ((s-1)%2)==1; }
void MC_update(int sPrev,int sCur){ if(sPrev<0) return; MC_Count[MC_IDX(sPrev,sCur)]++; MC_RowSum[sPrev]++; }
// === (6) Use runtime Laplace alpha (G_MC_Alpha) ===
var MC_prob(int s,int t){
var num = (var)MC_Count[MC_IDX(s,t)] + G_MC_Alpha;
var den = (var)MC_RowSum[s] + G_MC_Alpha*MC_STATES;
if(den<=0) return 1.0/MC_STATES;
return num/den;
}
// === (6) one-pass PBull + Entropy
void MC_rowStats(int s, var* outPBull, var* outEntropy)
{
if(s<0){ if(outPBull) *outPBull=0.5; if(outEntropy) *outEntropy=1.0; return; }
int t; var Z=0, pBull=0;
for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); Z+=p; if(MC_isBull(t)) pBull+=p; }
if(Z<=0){ if(outPBull) *outPBull=0.5; if(outEntropy) *outEntropy=1.0; return; }
var H=0;
for(t=1;t<MC_STATES;t++){
var p = MC_prob(s,t)/Z;
if(p>0) H += -p*log(p);
}
var Hmax = log(MC_STATES-1);
if(Hmax<=0) H = 0; else H = H/Hmax;
if(outPBull) *outPBull = pBull/Z;
if(outEntropy) *outEntropy = H;
}
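// Minimal per-bar sketch, assumptions only (the run() section of the script is
// cut off at the end of this post): the count tables must be allocated once,
// then each bar the candle scan feeds the chain and the row stats are refreshed.
/*
if(!MC_Count){
	MC_Count = (int*)malloc(MC_STATES*MC_STATES*sizeof(int));
	MC_RowSum = (int*)malloc(MC_STATES*sizeof(int));
	memset(MC_Count,0,MC_STATES*MC_STATES*sizeof(int));
	memset(MC_RowSum,0,MC_STATES*sizeof(int));
}
var cdl[MC_NPAT];
buildCDL_TA61(cdl,0);                      // rescaled pattern values in [-1..1]
MC_Cur = MC_stateFromCDL(cdl, G_MC_ACT);   // strongest accepted pattern -> state
MC_update(MC_Prev, MC_Cur);                // count the transition
MC_Prev = MC_Cur;
MC_rowStats(MC_Cur, &MC_PBullNext, &MC_Entropy);
*/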
// ================= HARMONIC D-TREE ENGINE =================
// ---------- utils ----------
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){ if(u<-1) u=-1; if(u>1) u=1; var t=0.5*(u+1.0); return lo + t*(hi-lo); }
// ---- safety helpers ----
inline var safeNum(var x)
{
if(invalid(x)) return 0; // 0 for NaN/INF
return clamp(x,-1e100,1e100); // hard-limit range
}
void sanitize(var* A,int n){ int k; for(k=0;k<n;k++) A[k]=safeNum(A[k]); }
var sat100(var x){ return clamp(x,-100,100); }
// ---- small string helpers (for memory-safe logging) ----
void strlcat_safe(string dst, string src, int cap)
{
if(!dst || !src || cap <= 0) return;
int dl = strlen(dst);
int sl = strlen(src);
int room = cap - 1 - dl;
if(room <= 0){ if(cap > 0) dst[cap-1] = 0; return; }
int i; for(i = 0; i < room && i < sl; i++) dst[dl + i] = src[i];
dst[dl + i] = 0;
}
int countSubStr(string s, string sub){
if(!s || !sub) return 0;
int n=0; string p=s;
int sublen = strlen(sub);
if(sublen<=0) return 0;
while((p=strstr(p,sub))){ n++; p += sublen; }
return n;
}
// ---------- FIXED: use int (lite-C) and keep non-negative ----------
int djb2_hash(string s){
int h = 5381, c, i = 0;
if(!s) return h;
while((c = s[i++])) h = ((h<<5)+h) ^ c; // h*33 ^ c
return h & 0x7fffffff; // force non-negative
}
// ---- tree helpers ----
int validTreeIndex(int tid){ if(!G_TreeIdx) return 0; if(tid<0||tid>=G_TreeN) return 0; return (G_TreeIdx[tid]!=0); }
Node* treeAt(int tid){ if(validTreeIndex(tid)) return G_TreeIdx[tid]; return &G_DummyNode; }
int safeTreeIndexFromEq(int eqi){
int denom = ifelse(G_TreeN>0, G_TreeN, 1);
int tid = eqi;
if(tid < 0) tid = 0;
if(denom > 0) tid = tid % denom;
if(tid < 0) tid = 0;
return tid;
}
// ---- tree indexing ----
void pushTreeNode(Node* u){
if(G_TreeN >= G_TreeCap){
int newCap = G_TreeCap*2;
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes(); // pointer vector size changed
}
G_TreeIdx[G_TreeN++] = u;
}
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }
// ---- shrink index capacity after pruning (Fix #3) ----
void maybeShrinkTreeIdx(){
if(!G_TreeIdx) return;
if(G_TreeCap > 64 && G_TreeN < (G_TreeCap >> 1)){
int newCap = (G_TreeCap >> 1);
if(newCap < 64) newCap = 64;
G_TreeIdx = (Node**)realloc(G_TreeIdx, newCap*sizeof(Node*));
G_TreeCap = newCap;
computeMemFixedBytes(); // pointer vector size changed
}
}
// ---- depth LUT helper (upgrade #1) ----
void refreshDepthW()
{
if(!G_DepthW) return;
int d;
for(d=0; d<DEPTH_LUT_SIZE; d++)
G_DepthW[d] = 1.0 / pow(d+1, G_DTreeExp);
G_DepthExpLast = G_DTreeExp;
}
// ---- tree create/eval (with pool & LUT upgrades) ----
Node* createNode(int depth)
{
Node* u = poolAllocNode();
if(!u) return 0; // safety
u->v = random();
u->r = 0.01 + 0.02*depth + random(0.005);
u->d = depth;
if(depth > 0){
u->n = 1 + (int)random(MAX_BRANCHES); // 1..MAX_BRANCHES (cast ok)
u->c = malloc(u->n * sizeof(void*));
if(!u->c){
// Could not allocate children array; keep leaf instead
u->n = 0; u->c = 0;
return u;
}
int i;
for(i=0;i<u->n;i++){
Node* child = createNode(depth - 1);
((Node**)u->c)[i] = child; // ok if child==0, downstream code guards
}
} else {
u->n = 0; u->c = 0;
}
return u;
}
var evaluateNode(Node* u) // upgrade #1
{
if(!u) return 0;
var sum = 0; int i;
for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
if(G_DepthExpLast < 0 || abs(G_DTreeExp - G_DepthExpLast) > 1e-9)
refreshDepthW();
var phase = sin(u->r * Bar + sum);
var weight = G_DepthW[u->d];
u->v = (1 - weight)*u->v + weight*phase;
return u->v;
}
int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u) // upgrade #2
{
if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]);
if(u->c) free(u->c);
poolFreeNode(u);
}
// =========== NETWORK STATE & COEFFICIENTS ===========
var* G_State; var* G_Prev; // keep as var (precision)
var* G_StateSq = 0; // upgrade #3
i16* G_Adj;
fvar* G_RP; fvar* G_Z;
i8* G_Mode;
fvar* G_WSelf; fvar* G_WN1; fvar* G_WN2; fvar* G_WGlob1; fvar* G_WGlob2; fvar* G_WMom; fvar* G_WTree; fvar* G_WAdv;
fvar* A1x; fvar* A1lam; fvar* A1mean; fvar* A1E; fvar* A1P; fvar* A1i; fvar* A1c;
fvar* A2x; fvar* A2lam; fvar* A2mean; fvar* A2E; fvar* A2P; fvar* A2i; fvar* A2c;
fvar* G1mean; fvar* G1E; fvar* G2P; fvar* G2lam;
fvar* G_TreeTerm;
#ifdef KEEP_TOP_META
i16* G_TopEq;
fvar* G_TopW;
#endif
i16* G_EqTreeId;
fvar* TAlpha; fvar* TBeta;
fvar* G_PropRaw; fvar* G_Prop;
// --- Per-equation hit-rate (EW average of 1-bar directional correctness) (PATCH B)
#define HIT_ALPHA 0.02 // EW smoothing (~50-bar memory)
#define HIT_EPS 0.0001 // ignore tiny advisor values
fvar* G_HitEW; // [N] 0..1 EW hit-rate
int* G_HitN; // [N] # of scored comparisons
fvar* G_AdvPrev; // [N] previous bar's advisor output (-1..+1)
var G_Ret1 = 0; // realized 1-bar return for scoring
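// G_Ret1 is read in updateNet() below but assigned in the part of the script
// cut off at the end of this post; a minimal per-bar assignment, assuming
// close-to-close returns, would look like:
/*
G_Ret1 = (priceClose(0) - priceClose(1))/priceClose(1);
*/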
// ===== Markov features exposed to DTREE =====
var G_MCF_PBull; // 0..1
var G_MCF_Entropy; // 0..1
var G_MCF_State; // 0..122
// epoch/context & feedback
int G_Epoch = 0;
int G_CtxID = 0;
var G_FB_A = 0.7; // kept (not used in blend now)
var G_FB_B = 0.3; // kept (not used in blend now)
// ---------- predictability ----------
var nodePredictability(Node* t)
{
if(!t) return 0.5;
var disp = 0;
int n = t->n, i, cnt = 0;
if(t->c){
for(i=0;i<n;i++){
Node* c = ((Node**)t->c)[i];
if(c){ disp += abs(c->v - t->v); cnt++; }
}
if(cnt > 0) disp /= cnt;
}
var depthFac = 1.0/(1 + t->d);
var rateBase = 0.01 + 0.02*t->d;
var rateFac = exp(-25.0*abs(t->r - rateBase));
var p = 0.5*(depthFac + rateFac);
p = 0.5*p + 0.5*(1.0 + (-disp));
if(p<0) p=0; if(p>1) p=1;
return p;
}
// importance for selective pruning
var nodeImportance(Node* u)
{
if(!u) return 0;
var amp = abs(u->v); if(amp>1) amp=1;
var p = nodePredictability(u);
var depthW = 1.0/(1.0 + u->d);
var imp = (0.6*p + 0.4*amp) * depthW;
return imp;
}
// ====== Elastic growth helpers ======
// create a leaf at depth d (no children) — upgrade #2
Node* createLeafDepth(int d)
{
Node* u = poolAllocNode();
if(!u) return 0; // safety
u->v = random();
u->r = 0.01 + 0.02*d + random(0.005);
u->d = d;
u->n = 0;
u->c = 0;
return u;
}
// add up to addK new children to all nodes at frontierDepth (with memcpy) — upgrade #4
void growSelectiveAtDepth(Node* u, int frontierDepth, int addK)
{
if(!u) return;
if(u->d == frontierDepth){
int want = addK; if(want <= 0) return;
int oldN = u->n; int newN = oldN + want;
Node** Cnew = (Node**)malloc(newN * sizeof(void*));
if(oldN>0 && u->c) memcpy(Cnew, u->c, oldN*sizeof(void*)); // memcpy optimization
int i; for(i=oldN;i<newN;i++) Cnew[i] = createLeafDepth(frontierDepth-1);
if(u->c) free(u->c);
u->c = Cnew; u->n = newN; return;
}
int j; for(j=0;j<u->n;j++) growSelectiveAtDepth(((Node**)u->c)[j], frontierDepth, addK);
}
// keep top-K children by importance at targetDepth, drop the rest
void freeChildAt(Node* parent, int idx)
{
if(!parent || !parent->c) return;
Node** C = (Node**)parent->c;
freeTree(C[idx]);
int i;
for(i=idx+1;i<parent->n;i++) C[i-1] = C[i];
parent->n--;
if(parent->n==0){ free(parent->c); parent->c=0; }
}
void pruneSelectiveAtDepth(Node* u, int targetDepth, int keepK)
{
if(!u) return;
if(u->d == targetDepth-1 && u->n > 0){
int n = u->n, i, kept = 0;
int mark[16]; for(i=0;i<16;i++) mark[i]=0;
int iter;
for(iter=0; iter<keepK && iter<n; iter++){
int bestI = -1; var bestImp = -1;
for(i=0;i<n;i++){
if(i<16 && mark[i]==1) continue;
var imp = nodeImportance(((Node**)u->c)[i]);
if(imp > bestImp){ bestImp = imp; bestI = i; }
}
if(bestI>=0 && bestI<16){ mark[bestI]=1; kept++; }
}
for(i=n-1;i>=0;i--) if(i<16 && mark[i]==0) freeChildAt(u,i);
return;
}
int j; for(j=0;j<u->n;j++) pruneSelectiveAtDepth(((Node**)u->c)[j], targetDepth, keepK);
}
// ---------- reindex (sizes pred cache without ternary) ----------
void reindexTreeAndMap()
{
G_TreeN = 0;
indexTreeDFS(Root);
if(G_TreeN<=0){
G_TreeN=1;
if(G_TreeIdx) G_TreeIdx[0]=Root;
}
// map equations to tree nodes
int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = (i16)(i % G_TreeN);
// resize predictability cache safely (upgrade #5)
G_PredLen = G_TreeN; if(G_PredLen <= 0) G_PredLen = 1;
if(G_PredLen > G_PredCap){
if(G_PredNode) free(G_PredNode);
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
}
G_PredCacheBar = -1; // force refill next bar
maybeShrinkTreeIdx();
recalcTreeBytes();
}
// ====== Accuracy sentinel & elastic-depth controller ======
void acc_update(var x /*lambda*/, var y /*gamma*/)
{
var a = 0.01; // ~100-bar EW memory (time constant 1/a)
ACC_mx = (1-a)*ACC_mx + a*x;
ACC_my = (1-a)*ACC_my + a*y;
ACC_mx2 = (1-a)*ACC_mx2 + a*(x*x);
ACC_my2 = (1-a)*ACC_my2 + a*(y*y);
ACC_mxy = (1-a)*ACC_mxy + a*(x*y);
var vx = ACC_mx2 - ACC_mx*ACC_mx;
var vy = ACC_my2 - ACC_my*ACC_my;
var cv = ACC_mxy - ACC_mx*ACC_my;
if(vx>0 && vy>0) G_AccCorr = cv / sqrt(vx*vy); else G_AccCorr = 0;
if(!G_HaveBase){ G_AccBase = G_AccCorr; G_HaveBase = 1; }
}
// utility to maximize: accuracy minus gentle memory penalty
var util_now()
{
int mb = mem_mb_est();
var mem_pen = 0;
if(mb > MEM_BUDGET_MB) mem_pen = (mb - MEM_BUDGET_MB)/(var)MEM_BUDGET_MB; else mem_pen = 0;
return G_AccCorr - 0.5*mem_pen;
}
// apply a +1 “grow one level” action if safe memory headroom
int apply_grow_step()
{
int mb = mem_mb_est();
if(G_RT_TreeMaxDepth >= MAX_DEPTH) return 0;
if(mb > MEM_BUDGET_MB - 2*MEM_HEADROOM_MB) return 0;
int newFrontier = G_RT_TreeMaxDepth;
growSelectiveAtDepth(Root, newFrontier, KEEP_CHILDREN_HI);
G_RT_TreeMaxDepth++;
reindexTreeAndMap();
printf("\n[EDC] Grew depth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
return 1;
}
// revert last growth (drop newly-added frontier children)
void revert_last_grow()
{
pruneSelectiveAtDepth((Node*)Root, G_RT_TreeMaxDepth, 0);
G_RT_TreeMaxDepth--;
reindexTreeAndMap();
printf("\n[EDC] Reverted growth to %i (est %i MB)", G_RT_TreeMaxDepth, mem_mb_est());
}
// main elastic-depth controller; call once per bar (after acc_update)
void edc_runtime()
{
// (5) slow hill-climb on G_DTreeExp
if((Bar % DEPTH_TUNE_BARS) == 0){
var U0 = util_now();
var trial = clamp(G_DTreeExp + G_DTreeExpDir*G_DTreeExpStep, 0.8, 2.0);
var old = G_DTreeExp;
G_DTreeExp = trial;
if(util_now() + 0.005 < U0){
G_DTreeExp = old;
G_DTreeExpDir = -G_DTreeExpDir;
}
}
int mb = mem_mb_est();
if(G_TunePending){
if(Bar - G_TuneStartBar >= TUNE_DELAY_BARS){
G_UtilAfter = util_now();
var eps = 0.01;
if(G_UtilAfter + eps < G_UtilBefore){
revert_last_grow();
} else {
printf("\n[EDC] Growth kept (U: %.4f -> %.4f)", G_UtilBefore, G_UtilAfter);
}
G_TunePending = 0; G_TuneAction = 0;
}
return;
}
if( (Bar % DEPTH_TUNE_BARS)==0 && mb <= MEM_BUDGET_MB - 2*MEM_HEADROOM_MB && G_RT_TreeMaxDepth < MAX_DEPTH ){
G_UtilBefore = util_now();
if(apply_grow_step()){
G_TunePending = 1; G_TuneAction = 1; G_TuneStartBar = Bar;
}
}
}
// Builds "Log\\Alpha12_eq_###.csv" into outName (must be >=64 bytes)
void buildEqFileName(int idx, char* outName /*>=64*/)
{
strcpy(outName, "Log\\Alpha12_eq_");
string idxs = strf("%03i", idx);
strcat(outName, idxs);
strcat(outName, ".csv");
}
// ===== consolidated EQ log =====
void writeEqHeaderOnce()
{
static int done=0; if(done) return; done=1;
file_append("Log\\Alpha12_eq_all.csv",
"Bar,Epoch,Ctx,EqCount,i,n1,n2,TreeId,Depth,Rate,Pred,Adv,Prop,Mode,WAdv,WTree,PBull,Entropy,MCState,ExprLen,ExprHash,tanhN,sinN,cosN\n");
}
void appendEqMetaLine(
int bar, int epoch, int ctx, int i, int n1, int n2, int tid, int depth, var rate,
var pred, var adv, var prop, int mode, var wadv, var wtree,
var pbull, var ent, int mcstate, string expr)
{
if(i >= LOG_EQ_SAMPLE) return;
int eLen = 0, eHash = 0, cT = 0, cS = 0, cC = 0;
if(expr){
eLen = (int)strlen(expr);
eHash = (int)djb2_hash(expr);
cT = countSubStr(expr,"tanh(");
cS = countSubStr(expr,"sin(");
cC = countSubStr(expr,"cos(");
} else {
eHash = (int)djb2_hash("");
}
#ifdef LOG_FLOAT_TRIM
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.4f,%.4f,%.4f,%.4f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth,
rate, pred, adv, prop, mode, wadv, wtree, pbull, ent,
mcstate, eLen, eHash, cT, cS, cC));
#else
file_append("Log\\Alpha12_eq_all.csv",
strf("%i,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,%.4f,%.4f,%.6f,%i,%.3f,%.3f,%.4f,%.4f,%i,%i,%i,%i,%i,%i\n",
bar, epoch, ctx, NET_EQNS, i, n1, n2, tid, depth,
rate, pred, adv, prop, mode, wadv, wtree, pbull, ent,
mcstate, eLen, eHash, cT, cS, cC));
#endif
}
// --------- allocation ----------
void randomizeRP()
{
int K=G_K,N=G_N,k,j;
for(k=0;k<K;k++)
for(j=0;j<N;j++)
G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);
}
// === (8/9) Use effective K + per-bar guard ===
int G_ProjBar = -1; int G_ProjK = -1;
void computeProjection(){
if(G_ProjBar == Bar && G_ProjK == G_Keff) return; // guard (upgrade #9)
int K=G_Keff, N=G_N, k, j;
for(k=0;k<K;k++){
var acc=0;
for(j=0;j<N;j++) acc += (var)G_RP[k*N+j]*G_StateSq[j]; // reuse squares (upgrade #3)
G_Z[k]=(fvar)acc;
}
G_ProjBar = Bar; G_ProjK = G_Keff;
}
// D) Compact allocate/free
void allocateNet()
{
int N = G_N, D = G_D, K = G_K;
// core
G_State = (var*)malloc(N*sizeof(var));
G_Prev = (var*)malloc(N*sizeof(var));
G_StateSq= (var*)malloc(N*sizeof(var));
// graph / projection
G_Adj = (i16*) malloc(N*D*sizeof(i16));
G_RP = (fvar*) malloc(K*N*sizeof(fvar));
G_Z = (fvar*) malloc(K*sizeof(fvar));
G_Mode = (i8*) malloc(N*sizeof(i8));
// weights & params
G_WSelf = (fvar*)malloc(N*sizeof(fvar));
G_WN1 = (fvar*)malloc(N*sizeof(fvar));
G_WN2 = (fvar*)malloc(N*sizeof(fvar));
G_WGlob1= (fvar*)malloc(N*sizeof(fvar));
G_WGlob2= (fvar*)malloc(N*sizeof(fvar));
G_WMom = (fvar*)malloc(N*sizeof(fvar));
G_WTree = (fvar*)malloc(N*sizeof(fvar));
G_WAdv = (fvar*)malloc(N*sizeof(fvar));
A1x = (fvar*)malloc(N*sizeof(fvar));
A1lam = (fvar*)malloc(N*sizeof(fvar));
A1mean= (fvar*)malloc(N*sizeof(fvar));
A1E = (fvar*)malloc(N*sizeof(fvar));
A1P = (fvar*)malloc(N*sizeof(fvar));
A1i = (fvar*)malloc(N*sizeof(fvar));
A1c = (fvar*)malloc(N*sizeof(fvar));
A2x = (fvar*)malloc(N*sizeof(fvar));
A2lam = (fvar*)malloc(N*sizeof(fvar));
A2mean= (fvar*)malloc(N*sizeof(fvar));
A2E = (fvar*)malloc(N*sizeof(fvar));
A2P = (fvar*)malloc(N*sizeof(fvar));
A2i = (fvar*)malloc(N*sizeof(fvar));
A2c = (fvar*)malloc(N*sizeof(fvar));
G1mean= (fvar*)malloc(N*sizeof(fvar));
G1E = (fvar*)malloc(N*sizeof(fvar));
G2P = (fvar*)malloc(N*sizeof(fvar));
G2lam = (fvar*)malloc(N*sizeof(fvar));
TAlpha= (fvar*)malloc(N*sizeof(fvar));
TBeta = (fvar*)malloc(N*sizeof(fvar));
G_TreeTerm = (fvar*)malloc(N*sizeof(fvar));
#ifdef KEEP_TOP_META
G_TopEq = (i16*) malloc(N*sizeof(i16));
G_TopW = (fvar*)malloc(N*sizeof(fvar));
#endif
G_PropRaw = (fvar*)malloc(N*sizeof(fvar));
G_Prop = (fvar*)malloc(N*sizeof(fvar));
if(LOG_EXPR_TEXT) G_Sym = (string*)malloc(N*sizeof(char*)); else G_Sym = 0;
// tree index
G_TreeCap = 128;
G_TreeIdx = (Node**)malloc(G_TreeCap*sizeof(Node*));
G_TreeN = 0;
G_EqTreeId = (i16*)malloc(N*sizeof(i16));
// initialize adjacency
{ int t; for(t=0; t<N*D; t++) G_Adj[t] = -1; }
// initialize state and parameters
{
int i;
for(i=0;i<N;i++){
G_State[i] = random();
G_Prev[i] = G_State[i];
G_StateSq[i]= G_State[i]*G_State[i];
G_Mode[i] = 0;
G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2;
G_WGlob1[i]=0.1; G_WGlob2[i]=0.1;
G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;
A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
G1mean[i]=1.0; G1E[i]=0.001; G2P[i]=0.6; G2lam[i]=0.3;
TAlpha[i]=0.8; TBeta[i]=25.0;
G_TreeTerm[i]=0;
#ifdef KEEP_TOP_META
G_TopEq[i]=-1; G_TopW[i]=0;
#endif
G_PropRaw[i]=1; G_Prop[i]=1.0/G_N;
if(LOG_EXPR_TEXT){
G_Sym[i] = (char*)malloc(EXPR_MAXLEN);
if(G_Sym[i]) strcpy(G_Sym[i],"");
}
}
}
// --- Hit-rate state --- (PATCH D)
G_HitEW = (fvar*)malloc(N*sizeof(fvar));
G_HitN = (int*) malloc(N*sizeof(int));
G_AdvPrev = (fvar*)malloc(N*sizeof(fvar));
{
int i;
for(i=0;i<N;i++){
G_HitEW[i] = 0.5; // neutral start
G_HitN[i] = 0;
G_AdvPrev[i] = 0; // no prior advice yet
}
}
computeMemFixedBytes();
if(G_PredNode) free(G_PredNode);
G_PredLen = G_TreeN; if(G_PredLen<=0) G_PredLen=1;
G_PredNode = (var*)malloc(G_PredLen*sizeof(var));
G_PredCap = G_PredLen;
G_PredCacheBar = -1;
}
void freeNet()
{
int i;
if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_StateSq)free(G_StateSq);
if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z); if(G_Mode)free(G_Mode);
if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2);
if(G_WMom)free(G_WMom); if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);
if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);
if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
if(G_TreeTerm)free(G_TreeTerm);
#ifdef KEEP_TOP_META
if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
#endif
if(G_EqTreeId)free(G_EqTreeId);
if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);
if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
if(G_TreeIdx)free(G_TreeIdx); if(G_PredNode)free(G_PredNode);
}
// --------- DTREE feature builders ----------
// MEMORYLESS normalization to avoid series misuse in conditional paths
inline var nrm_s(var x) { return sat100(100.*tanh(x)); }
inline var nrm_scl(var x, var s) { return sat100(100.*tanh(s*x)); }
// F) Features accept 'pred' (no G_Pred[])
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var pred, var* S /*ADV_EQ_NF*/)
{
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* t = treeAt(tid);
S[0] = nrm_s(G_State[i]);
S[1] = nrm_s(mean);
S[2] = nrm_scl(power,0.05);
S[3] = nrm_scl(energy,0.01);
S[4] = nrm_s(lambda);
S[5] = sat100(200.0*(pred-0.5));
S[6] = sat100(200.0*((var)t->d/MAX_DEPTH)-100.0);
S[7] = sat100(1000.0*t->r);
S[8] = nrm_s(G_TreeTerm[i]);
S[9] = sat100( (200.0/3.0) * (var)( (int)G_Mode[i] ) - 100.0 );
S[10] = sat100(200.0*(G_MCF_PBull-0.5));
S[11] = sat100(200.0*(G_MCF_Entropy-0.5));
S[12] = sat100(200.0*((var)G_HitEW[i] - 0.5)); // NEW: reliability feature (PATCH G)
sanitize(S,ADV_EQ_NF);
}
// (Kept for completeness; not used by DTREE anymore)
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* ti = treeAt(tid_i);
Node* tj = treeAt(tid_j);
var predi = predByTid(tid_i);
var predj = predByTid(tid_j);
P[0]=nrm_s(G_State[i]); P[1]=nrm_s(G_State[j]);
P[2]=sat100(200.0*((var)ti->d/MAX_DEPTH)-100.0);
P[3]=sat100(200.0*((var)tj->d/MAX_DEPTH)-100.0);
P[4]=sat100(1000.0*ti->r); P[5]=sat100(1000.0*tj->r);
P[6]=sat100(abs(P[2]-P[3]));
P[7]=sat100(abs(P[4]-P[5]));
P[8]=sat100(100.0*(predi+predj-1.0));
P[9]=nrm_s(lambda); P[10]=nrm_s(mean); P[11]=nrm_scl(power,0.05);
sanitize(P,ADV_PAIR_NF);
}
// --- Safe neighbor helpers & adjacency sanitizer ---
int adjSafe(int i, int d){
int N = G_N, D = G_D;
if(!G_Adj || N <= 1 || D <= 0) return 0;
if(d < 0) d = 0; if(d >= D) d = d % D;
int v = G_Adj[i*D + d];
if(v < 0 || v >= N || v == i) v = (i + 1) % N;
return v;
}
void sanitizeAdjacency(){
if(!G_Adj) return;
int N = G_N, D = G_D, i, d;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
i16 *p = &G_Adj[i*D + d];
if(*p < 0 || *p >= N || *p == i){
int r = (int)random(N);
if(r == i) r = (r+1) % N;
*p = (i16)r;
}
}
if(D >= 2 && G_Adj[i*D+0] == G_Adj[i*D+1]){
int r2 = (G_Adj[i*D+1] + 1) % N;
if(r2 == i) r2 = (r2+1) % N;
G_Adj[i*D+1] = (i16)r2;
}
}
}
// --------- advisor helpers (NEW) ----------
// cache one advisor value per equation per bar
var adviseSeed(int i, var lambda, var mean, var energy, var power)
{
static int seedBar = -1;
static int haveSeed[NET_EQNS];
static var seedVal[NET_EQNS];
if(seedBar != Bar){
int k; for(k=0;k<NET_EQNS;k++) haveSeed[k] = 0;
seedBar = Bar;
}
if(i < 0) i = 0;
if(i >= NET_EQNS) i = i % NET_EQNS;
if(!allowAdvise(i)) return 0;
if(!haveSeed[i]){
seedVal[i] = adviseEq(i, lambda, mean, energy, power); // trains (once) in Train mode
haveSeed[i] = 1;
}
return seedVal[i];
}
// simple deterministic mixer for diversity in [-1..1] without extra advise calls
var mix01(var a, int salt){
var z = sin(123.456*a + 0.001*salt) + cos(98.765*a + 0.002*salt);
return tanh(0.75*z);
}
// --------- advise wrappers (single-equation only) ----------
// upgrade #7: early exit on tight memory BEFORE building features
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
if(!allowAdvise(i)) return 0;
if(is(INITRUN)) return 0;
int tight = (mem_mb_est() >= MEM_BUDGET_MB - MEM_HEADROOM_MB);
if(tight) return 0;
// --- Patch L: Prefer advising reliable equations; explore a bit for the rest
if(G_HitN[i] > 32){ // wait until some evidence
var h = (var)G_HitEW[i];
var gate = 0.40 + 0.15*(1.0 - MC_Entropy); // uses the Markov entropy directly
if(h < gate){
if(random() >= 0.5) return 0; // ~50% exploration
}
}
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var pred = predByTid(tid);
var S[ADV_EQ_NF];
buildEqFeatures(i,lambda,mean,energy,power,pred,S);
// --- Patch 4: reliability-weighted DTREE training objective
var obj = 0;
if(Train){
obj = sat100(100.0*tanh(0.6*lambda + 0.4*mean));
// Reliability prior (0.5..1.0) to bias learning toward historically better equations
var prior = 0.75 + 0.5*((var)G_HitEW[i] - 0.5); // 0.5..1.0
obj *= prior;
}
int objI = (int)obj;
var a = adviseLong(DTREE, objI, S, ADV_EQ_NF);
return a/100.;
}
// --------- advisePair disabled: never call DTREE here ----------
var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
return 0;
}
// --------- heuristic pair scoring ----------
var scorePairSafe(int i, int j, var lambda, var mean, var energy, var power)
{
int ti = safeTreeIndexFromEq(G_EqTreeId[i]);
int tj = safeTreeIndexFromEq(G_EqTreeId[j]);
Node *ni = treeAt(ti), *nj = treeAt(tj);
var simD = 1.0 / (1.0 + abs((var)ni->d - (var)nj->d));
var dr = 50.0*abs(ni->r - nj->r); // upgrade #10
var simR = 1.0 / (1.0 + dr);
var predi = predByTid(ti);
var predj = predByTid(tj);
var pred = 0.5*(predi + predj);
var score = 0.5*pred + 0.3*simD + 0.2*simR;
return 2.0*score - 1.0;
}
// --------- adjacency selection (heuristic only) ----------
// safer clash check using prev>=0
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
int N=G_N, D=G_D, i, d, c, best, cand;
for(i=0;i<N;i++){
for(d=0; d<D; d++){
var bestScore = -2; best = -1;
for(c=0;c<G_CandNeigh;c++){
cand = (int)random(N);
if(cand==i) continue;
int clash=0, k;
for(k=0;k<d;k++){
int prev = G_Adj[i*D+k];
if(prev>=0 && prev==cand){ clash=1; break; }
}
if(clash) continue;
var s = scorePairSafe(i,cand,lambda,mean,energy,power);
if(s > bestScore){ bestScore=s; best=cand; }
}
if(best<0){ do{ best = (int)random(N);} while(best==i); }
G_Adj[i*D + d] = (i16)best;
}
}
}
// --------- DTREE-created coefficients, modes & proportions ----------
var mapA(var a,var lo,var hi){ return mapUnit(a,lo,hi); }
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
var seed = adviseSeed(i,lambda,mean,energy,power);
G_Mode[i] = (int)(abs(1000*seed)) & 3;
// derive weights & params deterministically from the single seed
G_WSelf[i] = (fvar)mapA(mix01(seed, 11), 0.15, 0.85);
G_WN1[i] = (fvar)mapA(mix01(seed, 12), 0.05, 0.35);
G_WN2[i] = (fvar)mapA(mix01(seed, 13), 0.05, 0.35);
G_WGlob1[i] = (fvar)mapA(mix01(seed, 14), 0.05, 0.30);
G_WGlob2[i] = (fvar)mapA(mix01(seed, 15), 0.05, 0.30);
G_WMom[i] = (fvar)mapA(mix01(seed, 16), 0.02, 0.15);
G_WTree[i] = (fvar)mapA(mix01(seed, 17), 0.05, 0.35);
G_WAdv[i] = (fvar)mapA(mix01(seed, 18), 0.05, 0.35);
A1x[i] = (fvar)(randsign()*mapA(mix01(seed, 21), 0.6, 1.2));
A1lam[i] = (fvar)(randsign()*mapA(mix01(seed, 22), 0.05,0.35));
A1mean[i]= (fvar) mapA(mix01(seed, 23),-0.30,0.30);
A1E[i] = (fvar) mapA(mix01(seed, 24),-0.0015,0.0015);
A1P[i] = (fvar) mapA(mix01(seed, 25),-0.30,0.30);
A1i[i] = (fvar) mapA(mix01(seed, 26),-0.02,0.02);
A1c[i] = (fvar) mapA(mix01(seed, 27),-0.20,0.20);
A2x[i] = (fvar)(randsign()*mapA(mix01(seed, 31), 0.6, 1.2));
A2lam[i] = (fvar)(randsign()*mapA(mix01(seed, 32), 0.05,0.35));
A2mean[i]= (fvar) mapA(mix01(seed, 33),-0.30,0.30);
A2E[i] = (fvar) mapA(mix01(seed, 34),-0.0015,0.0015);
A2P[i] = (fvar) mapA(mix01(seed, 35),-0.30,0.30);
A2i[i] = (fvar) mapA(mix01(seed, 36),-0.02,0.02);
A2c[i] = (fvar) mapA(mix01(seed, 37),-0.20,0.20);
G1mean[i] = (fvar) mapA(mix01(seed, 41), 0.4, 1.6);
G1E[i] = (fvar) mapA(mix01(seed, 42),-0.004,0.004);
G2P[i] = (fvar) mapA(mix01(seed, 43), 0.1, 1.2);
G2lam[i] = (fvar) mapA(mix01(seed, 44), 0.05, 0.7);
TAlpha[i] = (fvar) mapA(mix01(seed, 51), 0.3, 1.5);
TBeta[i] = (fvar) mapA(mix01(seed, 52), 6.0, 50.0);
G_PropRaw[i] = (fvar)(0.01 + 0.99*(0.5*(seed+1.0)));
// Reliability-aware proportion boost (0.75..1.25 multiplier) (PATCH I)
{
var boost = 0.75 + 0.5*(var)G_HitEW[i];
G_PropRaw[i] = (fvar)((var)G_PropRaw[i] * boost);
}
}
void normalizeProportions()
{
int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = (fvar)(1.0/N); return; }
for(i=0;i<N;i++) G_Prop[i] = (fvar)(G_PropRaw[i]/s);
}
// H) dtreeTerm gets predictabilities on demand
var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
int N=G_N,j;
int tid_i = safeTreeIndexFromEq(G_EqTreeId[i]);
Node* ti=treeAt(tid_i); int di=ti->d; var ri=ti->r;
var predI = predByTid(tid_i);
var alpha=TAlpha[i], beta=TBeta[i];
var sumw=0, acc=0, bestW=-1; int bestJ=-1;
for(j=0;j<N;j++){
if(j==i) continue;
int tid_j = safeTreeIndexFromEq(G_EqTreeId[j]);
Node* tj=treeAt(tid_j); int dj=tj->d; var rj=tj->r;
var predJ = predByTid(tid_j);
var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
var predBoost = 0.5 + 0.5*(predI*predJ);
var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
w *= predBoost * propBoost;
var pairAdv = scorePairSafe(i,j,0,0,0,0);
var pairBoost = 0.75 + 0.25*(0.5*(pairAdv+1.0));
w *= pairBoost;
sumw += w; acc += w*G_State[j];
if(w>bestW){bestW=w; bestJ=j;}
}
if(outTopEq) *outTopEq = bestJ;
if(outTopW) *outTopW = ifelse(sumw>0, bestW/sumw, 0);
if(sumw>0) return acc/sumw; return 0;
}
// --------- expression builder (capped & optional) ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
if(LOG_EXPR_TEXT){
string s = G_Sym[i]; s[0]=0;
string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A1x[i], n1, (var)A1lam[i], (var)A1mean[i], (var)A1E[i], (var)A1P[i], (var)A1i[i], (var)A1c[i]);
string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
(var)A2x[i], n2, (var)A2lam[i], (var)A2mean[i], (var)A2E[i], (var)A2P[i], (var)A2i[i], (var)A2c[i]);
strlcat_safe(s, "x[i]_next = ", EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*x[i] + ", (var)G_WSelf[i]), EXPR_MAXLEN);
if(G_Mode[i]==1){
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else if(G_Mode[i]==2){
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*tanh%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
} else {
strlcat_safe(s, strf("%.3f*sin%s + ", (var)G_WN1[i], a1), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*cos%s + ", (var)G_WN2[i], a2), EXPR_MAXLEN);
}
strlcat_safe(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", (var)G_WGlob1[i], (var)G1mean[i], (var)G1E[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ", (var)G_WGlob2[i], (var)G2P[i], (var)G2lam[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*(x[i]-x_prev[i]) + ", (var)G_WMom[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("Prop[i]=%.4f; ", (var)G_Prop[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DT(i) + ", (var)G_WTree[i]), EXPR_MAXLEN);
strlcat_safe(s, strf("%.3f*DTREE(i)", (var)G_WAdv[i]), EXPR_MAXLEN);
}
}
// ---------- one-time rewire init (call central reindex) ----------
void rewireInit()
{
randomizeRP();
computeProjection();
reindexTreeAndMap(); // ensures G_PredNode sized before any use
}
// ----------------------------------------------------------------------
// I) Trim rewireEpoch (no G_Pred sweep; same behavior)
// ----------------------------------------------------------------------
void rewireEpoch(var lambda, var mean, var energy, var power)
{
int i;
if(ENABLE_WATCH) watch("?A"); // before adjacency
// (7) adapt breadth by regime entropy
G_CandNeigh = ifelse(MC_Entropy < 0.45, CAND_NEIGH+4, CAND_NEIGH);
rewireAdjacency_DTREE(lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?C"); // after adjacency
sanitizeAdjacency();
for(i=0;i<G_N;i++)
synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
if(ENABLE_WATCH) watch("?D");
normalizeProportions();
// Unsigned context hash of current adjacency (+ epoch) for logging
{
int D = G_D;
unsigned int h = 2166136261u;
int total = G_N * D;
for(i=0;i<total;i++){
unsigned int x = (unsigned int)G_Adj[i];
h ^= x + 0x9e3779b9u + (h<<6) + (h>>2);
}
G_CtxID = (int)((h ^ ((unsigned int)G_Epoch<<8)) & 0x7fffffff);
}
if(LOG_EXPR_TEXT){
for(i=0;i<G_N;i++){
int n1, n2;
n1 = adjSafe(i,0);
if(G_D >= 2) n2 = adjSafe(i,1); else n2 = n1;
buildSymbolicExpr(i,n1,n2);
}
}
}
var projectNet()
{
int N=G_N,i; var sum=0,sumsq=0,cross=0;
for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
var mean=sum/N, corr=cross/(N-1);
return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}
// ----------------------------------------------------------------------
// J) Tighten updateNet (local pred, no G_AdvScore, log directly)
// ----------------------------------------------------------------------
void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
int N = G_N, D = G_D, i;
var sum = 0, sumsq = 0;
for(i = 0; i < N; i++){ sum += G_State[i]; sumsq += G_State[i]*G_State[i]; }
var mean = sum / N;
var energy = sumsq;
var power = sumsq / N;
for(i = 0; i < N; i++){
int n1, n2;
n1 = adjSafe(i,0);
if(D >= 2) n2 = adjSafe(i,1); else n2 = n1;
var xi = G_State[i];
var xn1 = G_State[n1];
var xn2 = G_State[n2];
var mom = xi - G_Prev[i];
// --- EW hit-rate update from previous bar's advice vs this bar's realized return (PATCH H)
{
int canScore = 1;
if(is(INITRUN)) canScore = 0;
if(Bar <= LookBack) canScore = 0;
if(abs((var)G_AdvPrev[i]) <= HIT_EPS) canScore = 0;
if(canScore){
int sameSign = 0;
if( ( (var)G_AdvPrev[i] > 0 && G_Ret1 > 0 ) || ( (var)G_AdvPrev[i] < 0 && G_Ret1 < 0 ) )
sameSign = 1;
G_HitEW[i] = (fvar)((1.0 - HIT_ALPHA)*(var)G_HitEW[i] + HIT_ALPHA*(var)sameSign);
if(G_HitN[i] < 0x7fffffff) G_HitN[i] += 1;
}
}
int topEq = -1; var topW = 0;
var dt = dtreeTerm(i, &topEq, &topW);
G_TreeTerm[i] = (fvar)dt;
#ifdef KEEP_TOP_META
G_TopEq[i] = (i16)topEq;
G_TopW[i] = (fvar)topW;
#endif
{
int tid = safeTreeIndexFromEq(G_EqTreeId[i]);
var pred = predByTid(tid); // local predictability if you need it for features
var adv = 0;
if(allowAdvise(i))
adv = adviseEq(i, driver, mean, energy, power);
// Reliability gating of advisor by hit-rate (0.5..1.0) (PATCH H)
var wHit = 0.5 + 0.5*(var)G_HitEW[i];
var advEff = adv * wHit;
var arg1 = (var)(A1x[i])*xn1 + (var)(A1lam[i])*driver + (var)(A1mean[i])*mean + (var)(A1E[i])*energy + (var)(A1P[i])*power + (var)(A1i[i])*i + (var)(A1c[i]);
var arg2 = (var)(A2x[i])*xn2 + (var)(A2lam[i])*driver + (var)(A2mean[i])*mean + (var)(A2E[i])*energy + (var)(A2P[i])*power + (var)(A2i[i])*i + (var)(A2c[i]);
var nl1, nl2;
if(G_Mode[i] == 0){ nl1 = sin(arg1); nl2 = cos(arg2); }
else if(G_Mode[i] == 1){ nl1 = tanh(arg1); nl2 = sin(arg2); }
else if(G_Mode[i] == 2){ nl1 = cos(arg1); nl2 = tanh(arg2); }
else { nl1 = sin(arg1); nl2 = cos(arg2); }
var glob1 = tanh((var)G1mean[i]*mean + (var)G1E[i]*energy);
var glob2 = sin ((var)G2P[i]*power + (var)G2lam[i]*driver);
var xNew =
(var)G_WSelf[i]*xi +
(var)G_WN1[i]*nl1 +
(var)G_W
Last edited by TipmyPip; 09/19/25 04:45.