Market Manipulation Index (MMI) for Zorro [Re: TipmyPip] #488816
07/09/25 13:41
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
Market Manipulation Index (MMI): A Powerful Tool to Detect Market Irregularities
In today's highly automated markets, price movements often don't reflect pure supply and demand. Instead, they can be influenced by large players and algorithmic manipulation. The Market Manipulation Index (MMI) is a technical indicator designed to expose such behavior by quantifying market irregularity.

What Is MMI?

The MMI measures how predictable or structured a market’s behavior is, based on:

Sine wave deviation: Measures how well price movements align with cyclical patterns.

Predictability analysis: Uses linear regression to see how predictable prices are based on past data.

Spectral energy analysis (optional): Assesses noise vs. structure via smoothed price bands.

What Does MMI Tell You?

Low MMI (< 0.3): A clean, trending, or mean-reverting market — easier to trade.

High MMI (> 0.7): A noisy or manipulated market — harder to predict and trade reliably.

How Is MMI Calculated?

At its core, MMI uses:

Rolling volatility (variance)

EMA-smoothed error estimates

Deviation from sine waves and linear predictions

It normalizes results to a 0–1 scale, highlighting when the market departs from natural structures.
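
In formula terms (mirroring the lite-C code further down): the raw index is the plain average of the three components, MMI_raw = (sineMI + predictabilityMI + spectralEnergy) / 3, with each component clamped to 0..1, and the plotted value is the smoothed MMI = EMA(MMI_raw, 5).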

How To Use MMI

As a filter: Only take trades when MMI is low (e.g., < 0.3); a short sketch follows this list.

As a warning: Avoid entering trades during high MMI spikes.

Combined with VWAP: Use VWAP-based MMI to detect price distortions around fair value.

With other signals: Use MMI to confirm or reject breakout or reversal signals.
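
For illustration, here is a minimal filter sketch (hypothetical strategy logic, not part of the indicator script below). It assumes 'cmi' holds the smoothed MMI value computed in run(), and uses a simple 50-bar SMA purely as an example trend signal:

Code
// hypothetical usage sketch: gate example entries with the MMI thresholds
vars Closes = series(priceClose());
var Trend = SMA(Closes, 50);          // example trend proxy, not part of the MMI itself
if(cmi < 0.3) {                       // clean, structured market: allow trades
	if(priceClose(0) > Trend) enterLong();
	else enterShort();
}
else if(cmi > 0.7)                    // noisy or possibly manipulated: stand aside
	exitLong();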

Practical Trading Tip

Pair MMI with volume or VWAP:

If price deviates strongly from VWAP and MMI is high, manipulation may be in play.

If price returns to VWAP and MMI drops, the market may stabilize — a good entry zone.

Available in Zorro Trader

This indicator can be implemented in lite-C for Zorro, fully compatible with live trading and backtesting. You can:

Visualize MMI over time

Trigger signals from MMI zones

Customize sensitivity via adaptive smoothing and windows

Conclusion

MMI is not just an indicator — it’s a market integrity scanner. When combined with volume, VWAP, and structure-aware strategies, it becomes a powerful filter to protect traders from erratic or manipulated conditions.

Use MMI to step away from noise — and trade only the moments that matter.

Code
var clamp(var value, var min, var max)
{
	if (value < min) return min;
	if (value > max) return max;
	return value;
}

var adaptiveWinLength()
{
	var atr_val = ATR(14);
	return max(10, round(50 * (atr_val / priceClose()) * 1.0));
}

var sineWave()
{
	return sin(2 * PI * Bar / 20); // SINE_LEN = 20
}

var spectralEnergy(int baseWin)
{
	static var* lowSeries;
	static var* highSeries;

	var lowBand = EMA(priceClose(), 34) - EMA(priceClose(), 89);
	var highBand = priceClose() - EMA(priceClose(), 8);

	lowSeries = series(lowBand);
	highSeries = series(highBand);

	var energyLow = Variance(lowSeries, baseWin);
	var energyHigh = Variance(highSeries, baseWin);

	var spectralRatio = energyHigh / (energyHigh + energyLow + 0.000001);
	return clamp(spectralRatio, 0, 1);
}

var predictabilityMI(int window)
{
	static var* priceSeries;
	priceSeries = series(priceClose());

	var x1 = priceClose(1);
	var x2 = priceClose(2);
	var y = priceClose();

	var sum_x1 = EMA(x1, window);
	var sum_x2 = EMA(x2, window);
	var sum_y = EMA(y, window);

	var sum_x1x1 = EMA(x1 * x1, window);
	var sum_x2x2 = EMA(x2 * x2, window);
	var sum_x1x2 = EMA(x1 * x2, window);
	var sum_x1y = EMA(x1 * y, window);
	var sum_x2y = EMA(x2 * y, window);

	var denom = sum_x1x1 * sum_x2x2 - sum_x1x2 * sum_x1x2;
	var a = ifelse(denom != 0, (sum_x2x2 * sum_x1y - sum_x1x2 * sum_x2y) / denom, 0);
	var b = ifelse(denom != 0, (sum_x1x1 * sum_x2y - sum_x1x2 * sum_x1y) / denom, 0);

	var y_hat = a * x1 + b * x2;
	var residual = y - y_hat;
	var mse_pred = EMA(pow(residual, 2), window);
	var var_price = Variance(priceSeries, 50);

	return clamp(mse_pred / var_price, 0, 1);
}

var sineMI(int window)
{
	static var* priceSeries;
	priceSeries = series(priceClose());

	var sine = sineWave();
	var price = priceClose();
	var sine_dev = sine - EMA(sine, window);
	var price_dev = price - EMA(price, window);
	var mse_sine = EMA(pow(price_dev - sine_dev, 2), window);
	var var_price = Variance(priceSeries, 50);

	return clamp(mse_sine / var_price, 0, 1);
}

// === Main Indicator Function ===

function run()
{
	set(PLOTNOW);

	if (Bar < 60) return;

	int win = adaptiveWinLength();

	var mi_sine = sineMI(win);
	var mi_pred = predictabilityMI(win);
	var mi_spec = spectralEnergy(50);

	var cmi_raw = (mi_sine + mi_pred + mi_spec) / 3;
	var cmi = EMA(cmi_raw, 5); // SMOOTH = 5

	plot("MMI", cmi, LINE, RED);
	plot("Low", 0.3, LINE, GREEN);
	plot("High", 0.7, LINE, ORANGE);
}

enhMMI indicator [Re: TipmyPip] #488828
07/20/25 06:53
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
I believe we have arrived at a very significant waypoint in our contribution to the Zorro Project, and we are about to make a massive step forward.

This machine learning–enhanced version of the MMI indicator makes it adaptive, rather than relying on fixed smoothing and parameters.

Here’s what it adds:

Self-learning behavior

The indicator looks at its last 5 values and uses a small neural network (perceptron) to predict the next change in MMI.

It learns as the market evolves, adapting automatically without manual parameter tuning.

Dynamic responsiveness

If the ML model predicts stable, low-noise conditions, the indicator reduces smoothing so it reacts faster to market changes.

If the model predicts choppy or manipulated conditions, the indicator smooths itself more, filtering out noise.

Better visualization of market regimes

The MMI line now responds differently to quiet vs. volatile markets, making the high and low zones (0.3/0.7) more reliable as signals.

Code
// === Config ===
#define INPUT_SIZE 5       // Number of past MMI points as ML features
#define TRAIN_BARS 500     // Training period for ML model

// === Globals ===
int PeriodMax = 0;

// === Helper Functions ===
var clamp(var value, var minv, var maxv)
{
    if (value < minv) return minv;
    if (value > maxv) return maxv;
    return value;
}

int adaptiveWinLength()
{
    var p = priceClose();
    if (p == 0) return 20; // safety
    var atr_val = ATR(14);
    int w = round(50 * (atr_val / p));
    return max(10, w);
}

var sineWave()
{
    return sin(2*PI*Bar/20.); // SINE_LEN = 20
}

var spectralEnergy(int baseWin)
{
    static var* lowSeries;
    static var* highSeries;

    var lowBand  = EMA(priceClose(),34) - EMA(priceClose(),89);
    var highBand = priceClose() - EMA(priceClose(),8);

    lowSeries  = series(lowBand);
    highSeries = series(highBand);

    var eLow  = Variance(lowSeries, baseWin);
    var eHigh = Variance(highSeries, baseWin);

    var denom = eHigh + eLow + 1e-6;
    var spectralRatio = eHigh / denom;
    return clamp(spectralRatio,0,1);
}

var predictabilityMI(int window)
{
    static var* priceSeries;
    priceSeries = series(priceClose());

    var x1 = priceClose(1);
    var x2 = priceClose(2);
    var y  = priceClose();

    var s_x1   = EMA(x1, window);
    var s_x2   = EMA(x2, window);
    var s_y    = EMA(y,  window);

    var s_x1x1 = EMA(x1*x1, window);
    var s_x2x2 = EMA(x2*x2, window);
    var s_x1x2 = EMA(x1*x2, window);
    var s_x1y  = EMA(x1*y,  window);
    var s_x2y  = EMA(x2*y,  window);

    var denom = s_x1x1*s_x2x2 - s_x1x2*s_x1x2;
    var a = ifelse(denom != 0, (s_x2x2*s_x1y - s_x1x2*s_x2y)/denom, 0);
    var b = ifelse(denom != 0, (s_x1x1*s_x2y - s_x1x2*s_x1y)/denom, 0);

    var y_hat = a*x1 + b*x2;
    var residual  = y - y_hat;
    var mse_pred  = EMA(residual*residual, window);
    var var_price = Variance(priceSeries, 50);

    var ratio = ifelse(var_price > 0, mse_pred/var_price, 0);
    return clamp(ratio,0,1);
}

var sineMI(int window)
{
    static var* priceSeries;
    priceSeries = series(priceClose());

    var s       = sineWave();
    var price   = priceClose();
    var s_dev   = s     - EMA(s,     window);
    var p_dev   = price - EMA(price, window);
    var mse_sin = EMA((p_dev - s_dev)*(p_dev - s_dev), window);
    var var_pr  = Variance(priceSeries, 50);

    var ratio = ifelse(var_pr > 0, mse_sin/var_pr, 0);
    return clamp(ratio,0,1);
}

// === Main Indicator (Adaptive MMI) ===
function run()
{
    set(PLOTNOW);

    // Ensure enough history for all components and ML
    LookBack = max( max(90, 50), INPUT_SIZE + 6 ); // EMA(89) & Variance(50) & features depth
    BarPeriod = 60; // example; set to your actual bar size

    if (Bar < max(LookBack, TRAIN_BARS)) return;

    int win = adaptiveWinLength();

    // --- Base MMI Components ---
    var mi_sine = sineMI(win);
    var mi_pred = predictabilityMI(win);
    var mi_spec = spectralEnergy(50);
    var cmi_raw = (mi_sine + mi_pred + mi_spec)/3;

    // --- Store MMI history for ML ---
    static var* mmiSeries;
    mmiSeries = series(EMA(cmi_raw,5));

    // Make sure series depth is sufficient
    if (mmiSeries[INPUT_SIZE] == 0 && Bar < LookBack + INPUT_SIZE + 2) return;

    // --- Build ML Features (past INPUT_SIZE values) ---
    var features[INPUT_SIZE];
    int i;
    for(i=0; i<INPUT_SIZE; i++)
        features[i] = mmiSeries[i+1]; // strictly past values

    // --- Predict the next change in MMI using ML ---
    // adviseLong(Method, Objective, Signals, NumSignals): here a PERCEPTRON, as described above.
    // The prediction comes back in the -100..100 range and is scaled to -1..1.
    var predicted_delta = adviseLong(PERCEPTRON, 0, features, INPUT_SIZE)/100.;

    // --- Normalize and control adaptivity ---
    var norm_delta = clamp(predicted_delta, -1, 1);    // keep it bounded
    var adaptFactor = clamp(1 - fabs(norm_delta), 0.4, 0.9);

    // Integer, bounded smoothing period for EMA
    int adaptPeriod = (int)round(5./adaptFactor);
    adaptPeriod = max(2, min(50, adaptPeriod));        // guard rails

    // --- Compute the adaptive MMI (bounded 0-1) ---
    var adaptiveMMI = clamp(EMA(cmi_raw, adaptPeriod), 0, 1);

    // --- Plot the indicator ---
    plot("Adaptive MMI", adaptiveMMI, LINE, RED);
    plot("Predicted ?",  norm_delta, LINE, BLUE);
    plot("Low",  0.3, LINE, GREEN);
    plot("High", 0.7, LINE, ORANGE);
}

Last edited by TipmyPip; 08/08/25 19:28.
Re: enhanced MMI [Re: TipmyPip] #488850
08/06/25 17:15
dBc (Newbie, Joined: Jan 2017, Posts: 15, Israel)
Thank you for sharing this code.

I tried to run it with one of my assets' price series, and the MMI indicator oscillates between 0 and 1.
When I run the previously posted MMI (not adaptive), the MMI value is continuous.

Can you please provide the name of one asset you ran the enhanced MMI on?

Many thanks

Attached Files enhMMI_5100151.png
Last edited by dBc; 08/06/25 17:22.
Re: enhanced MMI [Re: dBc] #488853
08/08/25 18:56
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
You should try the new one... I have improved it a bit since then, but you will need to experiment with different assets.

Last edited by TipmyPip; 08/08/25 19:30.
multi-timeframe “Market Mode Index” [Re: TipmyPip] #488854
08/09/25 01:33
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
This is a multi-timeframe “Market Mode Index” that blends a fast and a slow version of the same measure, then adapts the blend dynamically using machine learning.

Let’s break it down step-by-step:

1. Core Idea: Market Mode Index (MMI)
MMI here is a composite metric built from three sub-indicators:

sineMI() – Compares the price’s deviation from a pure sine wave.

Measures cyclicity — whether the market is behaving like a smooth oscillation.

predictabilityMI() – Uses a simple linear regression on lagged prices.

Measures linear predictability — can future price be explained by the past two bars?

spectralEnergy() – Compares “low-band” vs. “high-band” volatility.

Measures energy distribution — is the market dominated by slow or fast components?

These are averaged and normalized between 0 and 1, where:

Low MMI (~0.3 or less) → Market is more trending or predictable.

High MMI (~0.7 or more) → Market is more noisy or mean-reverting.

2. Two Timeframes
You calculate MMI in:

Fast timeframe → Base BarPeriod (e.g., 60-min bars).

Slow timeframe → SLOW_FACTOR * BarPeriod (e.g., 4h bars if SLOW_FACTOR=4).

This produces:

mmiFast = Short-term market mode.

mmiSlow = Longer-term market mode (re-projected to the base TF so they align).

3. Machine Learning Forecast
The script builds a feature vector from past values of both MMI series:

First 5 values from mmiFast (lagged 1..5 bars)

First 5 values from mmiSlow (lagged 1..5 bars)

That 10-dimensional vector goes into:

adviseLong(PERCEPTRON, 0, features, MAX_FEATURES)
This uses Zorro's built-in machine learning (default: a perceptron) to predict Δ (the change) in MMI from this combined history.

4. Adaptive Blending
The predicted Δ (direction & magnitude) controls:

How much weight to give mmiFast vs. mmiSlow in the final blend.

How fast the blended MMI is smoothed (adaptive EMA period).

This way:

If the ML thinks fast MMI is stable → more weight on the fast component.

If the ML thinks change is coming → more weight on the slow component, slower smoothing.

5. Outputs
It plots:

Adaptive MMI (red) → The blended, ML-weighted index.

Fast MMI (blue)

Slow MMI (black, projected to base TF)

Pred Δ (purple) → Normalized ML forecast of MMI change.

Low (green line at 0.3) → Often a trend-friendly zone.

High (orange line at 0.7) → Often a range/noise zone.

How to Use It
Traders often interpret MMI like this:

Below low threshold (~0.3) → Favors trend-following systems.

Above high threshold (~0.7) → Favors mean-reversion systems.

Between thresholds → No clear bias.

Here, you have an adaptive, multi-TF version that tries to smooth noise and anticipate regime changes rather than reacting only to raw MMI.
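
As a rough illustration of the regime logic above (a hypothetical sketch, not part of the indicator script that follows): 'adaptiveMMI' is assumed to be the blended value computed in run() below, and the trend/mean-reversion legs are simple placeholders:

Code
// hypothetical regime switch driven by the adaptive MMI thresholds
vars Closes = series(priceClose());
vars Trend  = series(SMA(Closes,100));       // example trend baseline
var  atr    = ATR(14);
var  dev    = priceClose(0) - SMA(Closes,50);

if(adaptiveMMI < 0.3) {                      // trend-friendly regime
    if(crossOver(Closes,Trend))  enterLong();
    if(crossUnder(Closes,Trend)) enterShort();
}
else if(adaptiveMMI > 0.7) {                 // range/noise regime: fade stretches from the mean
    if(dev >  2*atr) enterShort();
    if(dev < -2*atr) enterLong();
}
// between 0.3 and 0.7: no clear bias, no new positions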

Code
// === Config ===
#define INPUT_SIZE     5       // past points per TF
#define TRAIN_BARS     500
#define SLOW_FACTOR    4       // slow TF = SLOW_FACTOR * BarPeriod
#define MAX_FEATURES   10      // 2 * INPUT_SIZE
#define LONGEST_EMA    89      // longest fixed EMA length in code

// === Safety Helpers ===
int safeWin(int requested)
{
    return min(requested, max(2, Bar - 1)); // clamp to available bars
}

var clamp(var value, var min, var max)
{
    if (value < min) return min;
    if (value > max) return max;
    return value;
}

// === Adaptive Window ===
var adaptiveWinLength()
{
    static var* atrSeries;
    atrSeries = series(ATR(14));
    var atr_val = atrSeries[0];
    return max(10, round(50 * (atr_val / priceClose()) * 1.0));
}

// === Indicators ===
var sineWave()
{
    return sin(2 * PI * Bar / 20); // SINE_LEN = 20
}

var spectralEnergy(int baseWin)
{
    static var* pClose;
    static var* lowBandSeries;
    static var* highBandSeries;

    pClose = series(priceClose());

    var ema34 = EMA(pClose, safeWin(34));
    var ema89 = EMA(pClose, safeWin(LONGEST_EMA));
    var lowBand  = ema34 - ema89;

    var ema8  = EMA(pClose, safeWin(8));
    var highBand = pClose[0] - ema8;

    lowBandSeries  = series(lowBand);
    highBandSeries = series(highBand);

    var energyLow  = Variance(lowBandSeries, safeWin(baseWin));
    var energyHigh = Variance(highBandSeries, safeWin(baseWin));

    var spectralRatio = energyHigh / (energyHigh + energyLow + 0.000001);
    return clamp(spectralRatio, 0, 1);
}

var predictabilityMI(int window)
{
    static var* p;
    static var* p1;
    static var* p2;
    static var* p1_sq;
    static var* p2_sq;
    static var* p1p2;
    static var* p1p;
    static var* p2p;
    static var* res_sq;

    p     = series(priceClose());
    p1    = series(priceClose(1));
    p2    = series(priceClose(2));
    p1_sq = series(p1[0]*p1[0]);
    p2_sq = series(p2[0]*p2[0]);
    p1p2  = series(p1[0]*p2[0]);
    p1p   = series(p1[0]*p[0]);
    p2p   = series(p2[0]*p[0]);

    var sum_x1 = EMA(p1, safeWin(window));
    var sum_x2 = EMA(p2, safeWin(window));
    var sum_y  = EMA(p,  safeWin(window));

    var sum_x1x1 = EMA(p1_sq, safeWin(window));
    var sum_x2x2 = EMA(p2_sq, safeWin(window));
    var sum_x1x2 = EMA(p1p2,  safeWin(window));
    var sum_x1y  = EMA(p1p,   safeWin(window));
    var sum_x2y  = EMA(p2p,   safeWin(window));

    var denom = sum_x1x1 * sum_x2x2 - sum_x1x2 * sum_x1x2;
    var a = ifelse(denom != 0, (sum_x2x2 * sum_x1y - sum_x1x2 * sum_x2y) / denom, 0);
    var b = ifelse(denom != 0, (sum_x1x1 * sum_x2y - sum_x1x2 * sum_x1y) / denom, 0);

    var y_hat    = a * p1[0] + b * p2[0];
    var residual = p[0] - y_hat;

    res_sq = series(pow(residual, 2));

    var mse_pred  = EMA(res_sq, safeWin(window));
    var var_price = Variance(p, safeWin(50));

    return clamp(mse_pred / var_price, 0, 1);
}

var sineMI(int window)
{
    static var* p;
    static var* s;
    static var* sine_dev_sq;

    p = series(priceClose());
    s = series(sineWave());

    var sine_dev  = s[0] - EMA(s, safeWin(window));
    var price_dev = p[0] - EMA(p, safeWin(window));

    sine_dev_sq = series(pow(price_dev - sine_dev, 2));

    var mse_sine  = EMA(sine_dev_sq, safeWin(window));
    var var_price = Variance(p, safeWin(50));

    return clamp(mse_sine / var_price, 0, 1);
}

var computeMMI(int win)
{
    var mi_sine = sineMI(win);
    var mi_pred = predictabilityMI(win);
    var mi_spec = spectralEnergy(50);
    return clamp((mi_sine + mi_pred + mi_spec) / 3, 0, 1);
}

// === Feature readiness ===
int featuresReady(var* fast, var* slow, int size)
{
    int i;
    for(i = 0; i < size; i++)
        if(fast[i+1] == 0 || slow[i+1] == 0)
            return 0; 
    return 1;
}

// === Main ===
function run()
{
    set(PLOTNOW);
    BarPeriod = 60; 
    LookBack  = max(LONGEST_EMA * 2, 2*INPUT_SIZE + 12);

    // Global warm-up guard
    int slowBars = Bar / SLOW_FACTOR;
    int minSlowBars = max(LONGEST_EMA * 2, 2*INPUT_SIZE + 12);
    if (Bar < TRAIN_BARS || slowBars < minSlowBars)
        return;

    // ===== FAST TF =====
    int fastWin = adaptiveWinLength();
    static var* mmiFast;
    mmiFast = series(EMA(series(computeMMI(fastWin)), safeWin(5)));

    // ===== SLOW TF =====
    int tfKeep = TimeFrame;
    TimeFrame  = SLOW_FACTOR;

    int slowWin = max(10, round(fastWin / SLOW_FACTOR));
    static var* mmiSlowTF;
    mmiSlowTF = series(EMA(series(computeMMI(slowWin)), safeWin(5)));

    var mmiSlowNow = mmiSlowTF[0];
    TimeFrame = tfKeep;

    static var* mmiSlowOnBase;
    mmiSlowOnBase = series(mmiSlowNow);

    // ===== Feature guard =====
    if (!featuresReady(mmiFast, mmiSlowOnBase, INPUT_SIZE))
        return;

    // ===== ML Features =====
    var features[MAX_FEATURES];
    int i;
    for(i=0; i<INPUT_SIZE; i++)
        features[i] = mmiFast[i+1];
    for(i=0; i<INPUT_SIZE; i++)
        features[INPUT_SIZE + i] = mmiSlowOnBase[i+1];

    var predicted_delta = adviseLong(PERCEPTRON, 0, features, MAX_FEATURES)/100.; // perceptron advice, scaled from -100..100 to -1..1
    var norm_delta = clamp(predicted_delta, -1, 1);

    var adaptFactor  = clamp(1 - fabs(norm_delta), 0.4, 0.9);
    int adaptPeriod  = max(2, min(50, (int)round(5./adaptFactor)));

    var w_fast = clamp(0.5 + 0.5*fabs(norm_delta), 0.4, 0.9);
    var w_slow = 1 - w_fast;
    var cmi_blend = w_fast*mmiFast[0] + w_slow*mmiSlowOnBase[0];

    var adaptiveMMI = clamp(EMA(series(cmi_blend), safeWin(adaptPeriod)), 0, 1);

    // ===== Plots =====
    plot("Adaptive MMI", adaptiveMMI, LINE, RED);
    plot("Fast MMI",     mmiFast[0],  LINE, BLUE);
    plot("Slow MMI",     mmiSlowOnBase[0], LINE, BLACK);
    plot("Pred ?",       norm_delta,  LINE, PURPLE);
    plot("Low",  0.3, LINE, GREEN);
    plot("High", 0.7, LINE, ORANGE);
}

Murrey Math Lines [Re: TipmyPip] #488862
08/21/25 05:43
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
Murrey Math Lines are price-based support and resistance levels derived from a mathematical structure similar to Gann theory. They aim to identify significant price zones for:

Reversals

Breakouts

Support/Resistance

Trend identification

The price range is divided into 8 intervals (called "octaves") and extended to cover 13 Murrey Math Lines from [-2/8] to [+2/8].


How Traders Use MML

Level    Meaning
[+2/8]   Extreme resistance – reversal zone
[+1/8]   Weak resistance
[8/8]    Major resistance – strong reversal
[4/8]    Key pivot point – balance level
[0/8]    Major support – strong reversal
[-1/8]   Weak support
[-2/8]   Extreme support – reversal zone
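
A minimal usage sketch (illustrative only, not part of the indicator script below): it assumes 'mm' is the 13-element buffer filled by MurreyMath() in run(), and simply reacts to the extreme and pivot levels from the table above:

Code
// hypothetical reversal logic around the Murrey levels
var c = priceClose(0);
var step = mm[5] - mm[6];              // one eighth of the octave ([5/8] - [4/8])
if(c <= mm[10]) enterLong();           // at/below [0/8]: major support zone
else if(c >= mm[2]) enterShort();      // at/above [8/8]: major resistance zone
else if(abs(c - mm[6]) < 0.1*step) {   // hugging the [4/8] pivot: no edge
    exitLong(); exitShort();
}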


Code
// MurreyMath.c
// Murrey Math Channel for Zorro Lite-C

// --- Determine fractal size ---
var DetermineFractal(var value)
{
    if(value <= 250000 && value > 25000)     return 100000;
    if(value <= 25000  && value > 2500)      return 10000;
    if(value <= 2500   && value > 250)       return 1000;
    if(value <= 250    && value > 25)        return 100;
    if(value <= 25     && value > 12.5)      return 12.5;
    if(value <= 12.5   && value > 6.25)      return 12.5;
    if(value <= 6.25   && value > 3.125)     return 6.25;
    if(value <= 3.125  && value > 1.5625)    return 3.125;
    if(value <= 1.5625 && value > 0.390625)  return 1.5625;
    if(value <= 0.390625 && value > 0)       return 0.1953125;
    return 0;
}

// --- Murrey Math calculation ---
// Fills "levels[13]" with values from [+2/8] to [-2/8]
function MurreyMath(vars PriceHigh, vars PriceLow, int Period, var* levels)
{
    if(Bar < Period + 1) return; // Not enough bars yet

    var min = MinVal(PriceLow, Period);
    var max = MaxVal(PriceHigh, Period);

    var fractal = DetermineFractal(max);
    var range   = max - min;
    var sum     = floor(log(fractal / range) / log(2));
    var octave  = fractal * pow(0.5, sum);
    var mn      = floor(min / octave) * octave;
    var mx      = mn + (2 * octave);
    if((mn + octave) >= max)
        mx = mn + octave;

    // Resistance determination
    var x1 = 0, x2 = 0, x3 = 0, x4 = 0, x5 = 0, x6 = 0;
    if ((min >= (3*(mx-mn)/16 + mn)) && (max <= (9*(mx-mn)/16 + mn))) x2 = mn + (mx - mn)/2;
    if ((min >= (mn - (mx - mn)/8)) && (max <= (5*(mx - mn)/8 + mn)) && (x2 == 0)) x1 = mn + (mx - mn)/2;
    if ((min >= (mn + 7*(mx - mn)/16)) && (max <= (13*(mx - mn)/16 + mn))) x4 = mn + 3*(mx - mn)/4;
    if ((min >= (mn + 3*(mx - mn)/8)) && (max <= (9*(mx - mn)/8 + mn)) && (x4 == 0)) x5 = mx;
    if ((min >= (mn + (mx - mn)/8)) && (max <= (7*(mx - mn)/8 + mn)) && (x1 == 0) && (x2 == 0) && (x4 == 0) && (x5 == 0))
        x3 = mn + 3*(mx - mn)/4;
    if ((x1 + x2 + x3 + x4 + x5) == 0) x6 = mx;
    var resistance = x1 + x2 + x3 + x4 + x5 + x6;

    // Support determination
    var y1 = 0, y2 = 0, y3 = 0, y4 = 0, y5 = 0, y6 = 0;
    if (x1 > 0) y1 = mn;
    if (x2 > 0) y2 = mn + (mx - mn)/4;
    if (x3 > 0) y3 = mn + (mx - mn)/4;
    if (x4 > 0) y4 = mn + (mx - mn)/2;
    if (x5 > 0) y5 = mn + (mx - mn)/2;
    if ((resistance > 0) && ((y1 + y2 + y3 + y4 + y5) == 0)) y6 = mn;
    var support = y1 + y2 + y3 + y4 + y5 + y6;

    var divide = (resistance - support) / 8;

    levels[12] = support - 2*divide;   // [-2/8]
    levels[11] = support - divide;     // [-1/8]
    levels[10] = support;              // [0/8]
    levels[9]  = support + divide;     // [1/8]
    levels[8]  = support + 2*divide;   // [2/8]
    levels[7]  = support + 3*divide;   // [3/8]
    levels[6]  = support + 4*divide;   // [4/8]
    levels[5]  = support + 5*divide;   // [5/8]
    levels[4]  = support + 6*divide;   // [6/8]
    levels[3]  = support + 7*divide;   // [7/8]
    levels[2]  = resistance;           // [8/8]
    levels[1]  = resistance + divide;  // [+1/8]
    levels[0]  = resistance + 2*divide;// [+2/8]
}

// --- Main script ---
function run()
{
	 set(PLOTNOW);
    BarPeriod = 1440; // Daily bars
    LookBack  = 200;  // Required lookback

    // Use correct series() syntax: returns `vars` (pointer array)
    vars PriceHigh = series(priceHigh(), LookBack);
    vars PriceLow  = series(priceLow(), LookBack);

    static var mm[13]; // Buffer for Murrey levels
    MurreyMath(PriceHigh, PriceLow, 64, mm);

    // Plotting the Murrey channels
    plot("MM +2/8", mm[0], LINE, BLUE);
    plot("MM +1/8", mm[1], LINE, BLUE);
    plot("MM 8/8",  mm[2], LINE, BLACK);
    plot("MM 7/8",  mm[3], LINE, RED);
    plot("MM 6/8",  mm[4], LINE, RED);
    plot("MM 5/8",  mm[5], LINE, GREEN);
    plot("MM 4/8",  mm[6], LINE, BLACK);
    plot("MM 3/8",  mm[7], LINE, GREEN);
    plot("MM 2/8",  mm[8], LINE, RED);
    plot("MM 1/8",  mm[9], LINE, RED);
    plot("MM 0/8",  mm[10], LINE, BLUE);
    plot("MM -1/8", mm[11], LINE, BLUE);
    plot("MM -2/8", mm[12], LINE, BLUE);
}

The Strategy of Spiritual Love. [Re: TipmyPip] #488868
09/01/25 17:20
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
In the name of spiritual love, there is a hidden essence to the following code which makes it possible to create really complex strategies, while considering applications vast in their nature... and in their complexity...

Code
#define MAX_BRANCHES 3
#define MAX_DEPTH 4

typedef struct Node {
    var v;      // scalar value
    var r;      // intrinsic rate
    void* c;    // array of child Node* (cast on access)
    int n;      // number of children
    int d;      // depth
} Node;

Node* Root;

Node* createNode(int depth)
{
    Node* u = (Node*)malloc(sizeof(Node));
    u->v = random();
    u->r = 0.01 + 0.02*depth + random()*0.005;
    u->d = depth;

    if(depth > 0) {
        u->n = (int)(random()*MAX_BRANCHES) + 1;
        u->c = malloc(u->n * sizeof(void*));  // allocate array of Node*
        int i;
        for(i = 0; i < u->n; i++)
            ((Node**)u->c)[i] = createNode(depth - 1);
    } else {
        u->n = 0;
        u->c = 0;
    }
    return u;
}

var evaluate(Node* u)
{
    if(!u) return 0;

    var sum = 0;
    int i;
    for(i = 0; i < u->n; i++)
        sum += evaluate(((Node**)u->c)[i]);

    var phase  = sin(u->r * Bar + sum);
    var weight = 1.0 / pow(u->d + 1, 1.25);
    u->v = (1 - weight)*u->v + weight*phase;

    return u->v;
}

int countNodes(Node* u)
{
    if(!u) return 0;
    int count = 1, i;
    for(i = 0; i < u->n; i++)
        count += countNodes(((Node**)u->c)[i]);
    return count;
}

void printTree(Node* u, int indent)
{
    if(!u) return;

    string pad = " ";
    int i;
    for(i = 0; i < indent; i++)
        pad = strf("%s ", pad);

    printf("\n%s[Node] d=%i n=%i v=%.3f", pad, u->d, u->n, u->v);

    for(i = 0; i < u->n; i++)
        printTree(((Node**)u->c)[i], indent + 1);
}

void freeTree(Node* u)
{
    if(!u) return;
    int i;
    for(i = 0; i < u->n; i++)
        freeTree(((Node**)u->c)[i]);
    if(u->c) free(u->c);
    free(u);
}

function run()
{
    static int initialized = 0;
    static var lambda;

    if(is(INITRUN) && !initialized) {
        Root = createNode(MAX_DEPTH);
        initialized = 1;
        printf("\nRoot initialized with %i nodes", countNodes(Root));
    }

    lambda = evaluate(Root);
    printf("\nlambda = %.5f", lambda);

    if(Bar % 100 == 0)
        printTree(Root, 0);

    if(lambda > 0.75)
        enterLong();
}

// Called automatically at end of session/backtest; safe place to free memory.
function cleanup()
{
    if(Root) freeTree(Root);
}

Last edited by TipmyPip; 09/01/25 21:40.
The Breach of Algorithms [Re: TipmyPip] #488869
09/01/25 18:14
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
In a forest of clocks, a root hums to its kin,
each branch answering with a quieter echo.
The night counts in bars; a pale moon lifts a number,
lets it ring the hush, then folds it back into itself.

Windows open on measured breath—
square whispers gathered, their weight made gentle.
Sometimes the canopy speaks its whole shape,
most nights it keeps the lattice veiled.

When an unseen gate brightens, the path inclines;
footsteps lean forward, then vanish like careful hands
untying threads at dawn—no trace, only the hush
remembering how it almost said its name.


Code
#define MAX_BRANCHES 3
#define MAX_DEPTH    4
#define NWIN         256   // window length for energy/power estimates

typedef struct Node {
    var  v;   // state
    var  r;   // intrinsic rate
    void* c;  // array of child Node* (cast on access)
    int  n;   // number of children
    int  d;   // depth
} Node;

Node* Root;

// ---- Discrete-time helpers for energy/power -------------------------------

// Sum of squares over the last N samples of a series (Data[0] = most recent)
var sumsq(vars Data, int N)
{
    var s = 0;
    int i;
    for(i = 0; i < N; i++)
        s += Data[i]*Data[i];
    return s;
}

// ---- Tree construction / evaluation ---------------------------------------

Node* createNode(int depth)
{
    Node* u = (Node*)malloc(sizeof(Node));
    u->v = random();
    u->r = 0.01 + 0.02*depth + random()*0.005;
    u->d = depth;

    if(depth > 0) {
        u->n = (int)(random()*MAX_BRANCHES) + 1;
        u->c = malloc(u->n * sizeof(void*));  // array of Node*
        int i;
        for(i = 0; i < u->n; i++)
            ((Node**)u->c)[i] = createNode(depth - 1);
    } else {
        u->n = 0;
        u->c = 0;
    }
    return u;
}

var evaluateNode(Node* u)
{
    if(!u) return 0;

    var sum = 0;
    int i;
    for(i = 0; i < u->n; i++)
        sum += evaluateNode(((Node**)u->c)[i]);

    // depth-attenuated phase response
    var phase  = sin(u->r * Bar + sum);
    var weight = 1.0 / pow(u->d + 1, 1.25);
    u->v = (1 - weight)*u->v + weight*phase;
    return u->v;
}

int countNodes(Node* u)
{
    if(!u) return 0;
    int count = 1, i;
    for(i = 0; i < u->n; i++)
        count += countNodes(((Node**)u->c)[i]);
    return count;
}

void printTree(Node* u, int indent)
{
    if(!u) return;
    string pad = " ";
    int i;
    for(i = 0; i < indent; i++)
        pad = strf("%s ", pad);
    printf("\n%s[Node] d=%i n=%i v=%.3f", pad, u->d, u->n, u->v);
    for(i = 0; i < u->n; i++)
        printTree(((Node**)u->c)[i], indent + 1);
}

void freeTree(Node* u)
{
    if(!u) return;
    int i;
    for(i = 0; i < u->n; i++)
        freeTree(((Node**)u->c)[i]);
    if(u->c) free(u->c);
    free(u);
}

// ---- Main bar loop ---------------------------------------------------------

function run()
{
    static int initialized = 0;
    static var E_cum = 0;          // cumulative energy of lambda
    static var lambda;             // field projection per bar

    if(is(INITRUN) && !initialized) {
        // ensure series buffer supports our window NWIN
        if(LookBack < NWIN) LookBack = NWIN;
        Root = createNode(MAX_DEPTH);
        initialized = 1;
        printf("\nRoot initialized with %i nodes", countNodes(Root));
    }

    // 1) Evaluate harmonic field -> lambda[n]
    lambda = evaluateNode(Root);

    // 2) Build a series of lambda for windowed energy/power
    vars LamSeries = series(lambda);       // LamSeries[0] == current lambda

    // 3) Windowed energy & power (discrete-time)
    var E_win = sumsq(LamSeries, NWIN);    // sum_{k=0..NWIN-1} |lambda|^2
    var P_win = E_win / NWIN;              // average power over the window

    // 4) Cumulative energy (to date)
    E_cum += lambda*lambda;

    // 5) Output / optional plot
    printf("\nlambda=%.6f  E_win(%i)=%.6f  P_win=%.6f  E_cum=%.6f",
        lambda, NWIN, E_win, P_win, E_cum);

    plot("lambda",lambda,LINE,0);
    plot("P_win",P_win,LINE,0);

    if(Bar % 100 == 0)
        printTree(Root, 0);

    // Optional symbolic trigger
    if(lambda > 0.75)
        enterLong();
}

// Called automatically at end of session/backtest; free memory.
function cleanup()
{
    if(Root) freeTree(Root);
}


Last edited by TipmyPip; 09/01/25 21:39.
Proportional Rule-Switching Agents (PRSA) [Re: TipmyPip] #488870
09/01/25 21:35
TipmyPip (OP, Member, Joined: Sep 2017, Posts: 164)
Proportional Rule-Switching Agents (PRSA)

Imagine a living manuscript written by a hundred hands at once. Each hand writes a short line, then pauses, then writes again—never alone, always listening to the murmur of the others. The manuscript isn’t fixed; it’s an evolving chorus of lines that lean toward one another, soften their edges, and find a shared cadence.

At the center is a quiet pulse. It doesn’t command; it suggests. Think of it as a tide moving through a branching shoreline. The shoreline is a tree of small decisions—depths and rates, paths and forks—whose shape guides how the tide spreads. Higher branches respond lightly, closer roots sway more; together they create a rhythmic backdrop the whole chorus can feel.

Each line in the chorus follows a simple grammar: a memory of where it just was, a curiosity for two neighboring lines, a sensitivity to the tide, and an ear for the room’s overall hum. The neighbors are not chosen by proximity on a page, but by a subtle kinship: branches of similar depth, currents of similar speed. That kinship becomes a weight—stronger for close cousins on the tree, lighter for distant relatives. In this way, the manuscript prefers coherence without requiring uniformity.

But there is also a conductor, and it is not a person. It’s a small, rule-making mechanism that learns how the chorus tends to sing. It listens to compact snapshots of the room—the average tone, the collective energy, the pulse—and proposes how each line should bend its next note. These proposals are not arbitrary. They are piecewise: “in rooms like this, with tones like these, bend this way.” Over time, the conductor not only adjusts the lines; it also redesigns the seating chart—who listens to whom—and even assigns proportions, a fair share of influence, so that the ensemble does not tilt toward a single loud voice.

There is a discipline to this play. Every adjustment is bounded; every tendency is balanced by a counter-tendency. Momentum smooths sudden jolts. Proportions are normalized so that attention remains a scarce resource, not a runaway gift. The results are logged, line by line, in separate books—one book per voice—yet each book quotes the others. Open any page and you’ll find a self-contained verse that still points outward, referencing the tide it felt, the neighbors it heard, the seat it was given, and the short rule that shaped its choice.

Sometimes, at appointed moments, the seating, the rules, and the proportions are reconsidered. The chorus does not dissolve; it molts. Kinships are re-weighed; alliances shift; the grammar is rewritten in the margins. This is not chaos—more like seasons. The same tree; new leaves. The same tide; a different shore.

What emerges is not a single melody, but a texture: local phrases that brighten or darken together, clusters that coordinate without collapsing, solos that rise only when the room invites them. The manuscript remains legible because it keeps explaining itself—every verse carries its own recipe—yet it stays surprising because the recipes are learned, not imposed.

In the end, the system is a study in measured togetherness. It suggests how separate lines can become mutually informative without losing their character; how guidance can be learned rather than declared; how proportion can prevent dominance; how memory can soften change. It does not promise an endpoint. It promises a way of moving—iterative, attentive, shaped by a shared structure yet open to revision—so that the whole becomes more than a sum, and the path, though unknowable in advance, feels quietly inevitable as it unfolds.


Code
// ================= PARAMETERS =================
#define MAX_BRANCHES    3
#define MAX_DEPTH       4
#define NWIN            256
#define NET_EQNS        100
#define DEGREE          4
#define KPROJ           16
#define REWIRE_EVERY    127
#define LOG_EVERY       1

// DTREE-driven rewiring candidates per neighbor slot
#define CAND_NEIGH      8

// Feature sizes for DTREE calls
#define ADV_EQ_NF       10   // per-equation features
#define ADV_PAIR_NF     12   // pair features

// ================ HARMONIC D-TREE (structural context) ================
typedef struct Node {
    var  v;     // state
    var  r;     // intrinsic rate
    void* c;    // array of child Node* (cast on access)
    int  n;     // number of children
    int  d;     // depth
} Node;

Node* Root;

// D-tree index
Node** G_TreeIdx;   // [cap]
int    G_TreeN;     // count
int    G_TreeCap;   // capacity
var    G_DTreeExp;  // exponent for evaluateNode() attenuation

// --------- helpers ----------
// Zorro: random(Max) -> uniform [0..Max), abs() for absolute value, clamp() builtin.

// uniform integer in [lo..hi]
int randint(int lo, int hi)
{
    return lo + (int)random(hi - lo + 1);
}

// uniform var in [a..b)
var randu(var a, var b)
{
    return a + random(b - a);
}

// return ±1 with 50/50 probability (guaranteed nonzero)
var randsign()
{
    return ifelse(random(1) < 0.5, -1.0, 1.0);
}

// map u in [-1,1] to [lo,hi]
var mapUnit(var u, var lo, var hi)
{
    if(u < -1) u = -1;
    if(u >  1) u =  1;
    var t = 0.5*(u + 1.0);
    return lo + t*(hi - lo);
}

void pushTreeNode(Node* u){ if(G_TreeN < G_TreeCap) G_TreeIdx[G_TreeN++] = u; }
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }

Node* createNode(int depth)
{
    Node* u = (Node*)malloc(sizeof(Node));
    u->v = 2*random(1) - 1;                           // [-1..1)
    u->r = 0.01 + 0.02*depth + random(1)*0.005;       // small positive
    u->d = depth;

    if(depth > 0){
        u->n = randint(1, MAX_BRANCHES);
        u->c = malloc(u->n * sizeof(void*));
        int i; for(i=0;i<u->n;i++) ((Node**)u->c)[i] = createNode(depth - 1);
    } else { u->n = 0; u->c = 0; }
    return u;
}

var evaluateNode(Node* u)
{
    if(!u) return 0;
    var sum=0; int i; for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
    var phase  = sin(u->r * Bar + sum);
    var weight = 1.0 / pow(u->d + 1, G_DTreeExp);
    u->v = (1 - weight)*u->v + weight*phase;
    return u->v;
}

int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u){ if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]); if(u->c) free(u->c); free(u); }

// =========== NETWORK STATE & COEFFICIENTS ===========
int   G_N  = NET_EQNS;
int   G_D  = DEGREE;
int   G_K  = KPROJ;

// states
var*  G_State;    // [G_N]
var*  G_Prev;     // [G_N]
var*  G_Vel;      // [G_N]

// sparse adjacency
int*  G_Adj;      // [G_N*G_D]

// random projection + features
var*  G_RP;       // [G_K*G_N]
var*  G_Z;        // [G_K]

// weights (will be DTREE-synthesized each epoch)
int*  G_Mode;     // 0..3 selects nonlinearity combo
var*  G_WSelf;    // self
var*  G_WN1;      // neighbor 1
var*  G_WN2;      // neighbor 2
var*  G_WGlob1;   // global term 1
var*  G_WGlob2;   // global term 2
var*  G_WMom;     // momentum
var*  G_WTree;    // DTree-coupling weight
var*  G_WAdv;     // built-in DTREE advice weight

// argument coefficients for the two nonlinearities
var*  A1x;  var*  A1lam;  var*  A1mean;  var*  A1E;  var*  A1P;  var*  A1i;  var*  A1c;
var*  A2x;  var*  A2lam;  var*  A2mean;  var*  A2E;  var*  A2P;  var*  A2i;  var*  A2c;

// global-term coeffs
var*  G1mean;   var*  G1E;
var*  G2P;      var*  G2lam;

// DTree (structural) coupling diagnostics & parameters
var*  G_TreeTerm;  // DT(i) numeric
int*  G_TopEq;     // strongest partner index
var*  G_TopW;      // strongest partner normalized weight
int*  G_EqTreeId;  // eq -> tree node id
var*  TAlpha;      // per-eq depth penalty
var*  TBeta;       // per-eq rate  penalty

// predictability and DTREE advice score
var*  G_Pred;       // [0..1]
var*  G_AdvScore;   // [-1..1]

// DTREE-created proportions (sum to 1 across equations)
var*  G_PropRaw;
var*  G_Prop;

// symbolic equation string per equation
string* G_Sym;

// epoch/context & feedback
int    G_Epoch = 0;
int    G_CtxID = 0;
var    G_FB_A = 0.7;
var    G_FB_B = 0.3;

// ---------- predictability from D-tree (0..1) ----------
var nodePredictability(Node* t)
{
    if(!t) return 0.5;
    var disp=0; int n=t->n, i;
    for(i=0;i<n;i++){ Node* c=((Node**)t->c)[i]; disp += abs(c->v - t->v); }  // abs(var)
    if(n>0) disp /= n;
    var depthFac = 1.0/(1+t->d);
    var rateBase = 0.01 + 0.02*t->d;
    var rateFac  = exp(-25.0*abs(t->r - rateBase));
    var p = 0.5*(depthFac + rateFac);
    p = 0.5*p + 0.5*(1.0/(1.0 + disp));
    return clamp(p,0,1);  // built-in clamp
}

// filenames
void buildEqFileName(int idx, char* outName /*>=64*/)
{
    strcpy(outName, "Log\\Alpha01_eq_");
    string idxs = strf("%03i", idx);
    strcat(outName, idxs);
    strcat(outName, ".csv");
}

// --------- allocation ----------
void allocateNet()
{
    int N=G_N, D=G_D, K=G_K;

    G_State=(var*)malloc(N*sizeof(var));  G_Prev=(var*)malloc(N*sizeof(var));  G_Vel=(var*)malloc(N*sizeof(var));
    G_Adj=(int*)malloc(N*D*sizeof(int));
    G_RP=(var*)malloc(K*N*sizeof(var));   G_Z=(var*)malloc(K*sizeof(var));

    G_Mode=(int*)malloc(N*sizeof(int));
    G_WSelf=(var*)malloc(N*sizeof(var));  G_WN1=(var*)malloc(N*sizeof(var));   G_WN2=(var*)malloc(N*sizeof(var));
    G_WGlob1=(var*)malloc(N*sizeof(var)); G_WGlob2=(var*)malloc(N*sizeof(var));
    G_WMom=(var*)malloc(N*sizeof(var));   G_WTree=(var*)malloc(N*sizeof(var)); G_WAdv=(var*)malloc(N*sizeof(var));

    A1x=(var*)malloc(N*sizeof(var)); A1lam=(var*)malloc(N*sizeof(var)); A1mean=(var*)malloc(N*sizeof(var));
    A1E=(var*)malloc(N*sizeof(var)); A1P=(var*)malloc(N*sizeof(var));   A1i=(var*)malloc(N*sizeof(var)); A1c=(var*)malloc(N*sizeof(var));
    A2x=(var*)malloc(N*sizeof(var)); A2lam=(var*)malloc(N*sizeof(var)); A2mean=(var*)malloc(N*sizeof(var));
    A2E=(var*)malloc(N*sizeof(var)); A2P=(var*)malloc(N*sizeof(var));   A2i=(var*)malloc(N*sizeof(var)); A2c=(var*)malloc(N*sizeof(var));

    G1mean=(var*)malloc(N*sizeof(var)); G1E=(var*)malloc(N*sizeof(var));
    G2P=(var*)malloc(N*sizeof(var));    G2lam=(var*)malloc(N*sizeof(var));

    G_TreeTerm=(var*)malloc(N*sizeof(var)); G_TopEq=(int*)malloc(N*sizeof(int)); G_TopW=(var*)malloc(N*sizeof(var));
    TAlpha=(var*)malloc(N*sizeof(var));     TBeta=(var*)malloc(N*sizeof(var));

    G_Pred=(var*)malloc(N*sizeof(var)); G_AdvScore=(var*)malloc(N*sizeof(var));

    G_PropRaw=(var*)malloc(N*sizeof(var));  G_Prop=(var*)malloc(N*sizeof(var));

    G_Sym=(string*)malloc(N*sizeof(string));

    int i;
    for(i=0;i<N;i++){
        G_State[i]=2*random(1)-1; G_Prev[i]=G_State[i]; G_Vel[i]=0;

        // initialize; will be overwritten by DTREE synthesis
        G_Mode[i]=0;
        G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2; G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;

        A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
        A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;

        G1mean[i]=1.0; G1E[i]=0.001;
        G2P[i]=0.6;    G2lam[i]=0.3;

        TAlpha[i]=0.8; TBeta[i]=25.0;

        G_TreeTerm[i]=0; G_TopEq[i]=-1; G_TopW[i]=0;
        G_Pred[i]=0.5;   G_AdvScore[i]=0;

        G_PropRaw[i]=1;  G_Prop[i]=1.0/G_N;

        G_Sym[i]=(char*)malloc(1024); strcpy(G_Sym[i],"");
    }

    // D-tree index & mapping buffers
    G_TreeCap=512; G_TreeIdx=(Node**)malloc(G_TreeCap*sizeof(Node*)); G_TreeN=0;
    G_EqTreeId=(int*)malloc(N*sizeof(int));
}

void freeNet()
{
    int i;
    if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_Vel)free(G_Vel);
    if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z);
    if(G_Mode)free(G_Mode); if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
    if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2); if(G_WMom)free(G_WMom);
    if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);

    if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
    if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);

    if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);

    if(G_TreeTerm)free(G_TreeTerm); if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
    if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);

    if(G_Pred)free(G_Pred); if(G_AdvScore)free(G_AdvScore);

    if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);

    if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
    if(G_TreeIdx)free(G_TreeIdx); if(G_EqTreeId)free(G_EqTreeId);
}

// --------- random projection ----------
void randomizeRP(){
    int K=G_K,N=G_N,k,j;
    for(k=0;k<K;k++)
        for(j=0;j<N;j++)
            G_RP[k*N+j] = ifelse(random(1) < 0.5, -1.0, 1.0);  // unbiased ±1
}
void computeProjection(){ int K=G_K,N=G_N,k,j; for(k=0;k<K;k++){ var acc=0; for(j=0;j<N;j++) acc+=G_RP[k*N+j]*(G_State[j]*G_State[j]); G_Z[k]=acc; }}

// --------- build features for DTREE ----------
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var* S /*ADV_EQ_NF*/)
{
    Node* t=G_TreeIdx[G_EqTreeId[i]];
    S[0]=G_State[i];
    S[1]=mean;
    S[2]=power;
    S[3]=energy;
    S[4]=lambda;
    S[5]=G_Pred[i];
    S[6]=t->d;
    S[7]=t->r;
    S[8]=G_TreeTerm[i];
    S[9]=G_Mode[i];
}

void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
    Node* ti=G_TreeIdx[G_EqTreeId[i]];
    Node* tj=G_TreeIdx[G_EqTreeId[j]];
    P[0]=G_State[i]; P[1]=G_State[j];
    P[2]=ti->d; P[3]=tj->d;
    P[4]=ti->r; P[5]=tj->r;
    P[6]=abs(P[2]-P[3]); P[7]=abs(P[4]-P[5]); // abs(var)
    P[8]=G_Pred[i]*G_Pred[j];
    P[9]=lambda; P[10]=mean; P[11]=power;
}

// --------- DTREE advice wrappers ----------
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
    var S[ADV_EQ_NF]; buildEqFeatures(i,lambda,mean,energy,power,S);
    var a = adviseLong(DTREE, 0, S, ADV_EQ_NF); // ~[-100,100]
    return a/100.;
}

var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
    var P[ADV_PAIR_NF]; buildPairFeatures(i,j,lambda,mean,energy,power,P);
    var a = adviseLong(DTREE, 0, P, ADV_PAIR_NF);
    return a/100.;
}

// --------- DTREE-driven adjacency selection ----------
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
    int N=G_N, D=G_D, i, d, c, best, cand;
    for(i=0;i<N;i++){
        for(d=0; d<D; d++){
            var bestScore = -2; best = -1;
            for(c=0;c<CAND_NEIGH;c++){
                cand = randint(0,N-1);
                if(cand==i) continue;
                // avoid duplicates already chosen for this row
                int clash=0, k; for(k=0;k<d;k++) if(G_Adj[i*D+k]==cand){clash=1; break;}
                if(clash) continue;

                var s = advisePair(i,cand,lambda,mean,energy,power); // [-1,1]
                if(s > bestScore){ bestScore=s; best=cand; }
            }
            if(best<0){ // fallback
                do{ best = randint(0,N-1);} while(best==i);
            }
            G_Adj[i*D + d] = best;
        }
    }
}

// --------- DTREE-created coefficients, modes & proportions ----------
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
    // multiple advice calls; each mapped to a coefficient range
    var a_mode = adviseEq(i,lambda,mean,energy,power);
    G_Mode[i] = (int)(abs(a_mode*1000)) & 3;

    var a_wself = adviseEq(i,lambda,mean,energy,power);
    var a_wn1   = adviseEq(i,lambda,mean,energy,power);
    var a_wn2   = adviseEq(i,lambda,mean,energy,power);
    var a_g1    = adviseEq(i,lambda,mean,energy,power);
    var a_g2    = adviseEq(i,lambda,mean,energy,power);
    var a_mom   = adviseEq(i,lambda,mean,energy,power);
    var a_tree  = adviseEq(i,lambda,mean,energy,power);
    var a_adv   = adviseEq(i,lambda,mean,energy,power);

    G_WSelf[i]  = mapUnit(a_wself, 0.15, 0.85);
    G_WN1[i]    = mapUnit(a_wn1,   0.05, 0.35);
    G_WN2[i]    = mapUnit(a_wn2,   0.05, 0.35);
    G_WGlob1[i] = mapUnit(a_g1,    0.05, 0.30);
    G_WGlob2[i] = mapUnit(a_g2,    0.05, 0.30);
    G_WMom[i]   = mapUnit(a_mom,   0.02, 0.15);
    G_WTree[i]  = mapUnit(a_tree,  0.05, 0.35);
    G_WAdv[i]   = mapUnit(a_adv,   0.05, 0.35);

    // argument coefficients (range chosen to be stable)
    var a1 = adviseEq(i,lambda,mean,energy,power);
    var a2 = adviseEq(i,lambda,mean,energy,power);
    var a3 = adviseEq(i,lambda,mean,energy,power);
    var a4 = adviseEq(i,lambda,mean,energy,power);
    var a5 = adviseEq(i,lambda,mean,energy,power);
    var a6 = adviseEq(i,lambda,mean,energy,power);
    var a7 = adviseEq(i,lambda,mean,energy,power);

    A1x[i]   = randsign()*mapUnit(a1, 0.6, 1.2);
    A1lam[i] = randsign()*mapUnit(a2, 0.05,0.35);
    A1mean[i]= mapUnit(a3,-0.30,0.30);
    A1E[i]   = mapUnit(a4,-0.0015,0.0015);
    A1P[i]   = mapUnit(a5,-0.30,0.30);
    A1i[i]   = mapUnit(a6,-0.02,0.02);
    A1c[i]   = mapUnit(a7,-0.20,0.20);

    // second nonlinearity args
    var b1 = adviseEq(i,lambda,mean,energy,power);
    var b2 = adviseEq(i,lambda,mean,energy,power);
    var b3 = adviseEq(i,lambda,mean,energy,power);
    var b4 = adviseEq(i,lambda,mean,energy,power);
    var b5 = adviseEq(i,lambda,mean,energy,power);
    var b6 = adviseEq(i,lambda,mean,energy,power);
    var b7 = adviseEq(i,lambda,mean,energy,power);

    A2x[i]   = randsign()*mapUnit(b1, 0.6, 1.2);
    A2lam[i] = randsign()*mapUnit(b2, 0.05,0.35);
    A2mean[i]= mapUnit(b3,-0.30,0.30);
    A2E[i]   = mapUnit(b4,-0.0015,0.0015);
    A2P[i]   = mapUnit(b5,-0.30,0.30);
    A2i[i]   = mapUnit(b6,-0.02,0.02);
    A2c[i]   = mapUnit(b7,-0.20,0.20);

    // global-term coeffs
    var c1 = adviseEq(i,lambda,mean,energy,power);
    var c2 = adviseEq(i,lambda,mean,energy,power);
    var d1 = adviseEq(i,lambda,mean,energy,power);
    var d2 = adviseEq(i,lambda,mean,energy,power);
    G1mean[i] = mapUnit(c1, 0.4, 1.6);
    G1E[i]    = mapUnit(c2,-0.004,0.004);
    G2P[i]    = mapUnit(d1, 0.1, 1.2);
    G2lam[i]  = mapUnit(d2, 0.05,0.7);

    // per-equation alpha/beta penalties (for structural DTree kernel)
    var e1 = adviseEq(i,lambda,mean,energy,power);
    var e2 = adviseEq(i,lambda,mean,energy,power);
    TAlpha[i] = mapUnit(e1, 0.3, 1.5);
    TBeta[i]  = mapUnit(e2, 6.0, 50.0);

    // DTREE-created raw proportion; normalized later
    var p = adviseEq(i,lambda,mean,energy,power);      // [-1,1]
    G_PropRaw[i] = 0.01 + 0.99 * (0.5*(p+1.0));        // in (0.01..1.0]
}

// normalize proportions so sum_i Prop[i] = 1
void normalizeProportions()
{
    int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
    if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = 1.0/N; return; }
    for(i=0;i<N;i++) G_Prop[i] = G_PropRaw[i]/s;
}

// --------- DTree proportional coupling: DT(i) with Proportion & Predictability ----------
var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
    int N=G_N,j;
    int tid_i=G_EqTreeId[i]; Node* ti=G_TreeIdx[tid_i]; int di=ti->d; var ri=ti->r;
    var alpha=TAlpha[i], beta=TBeta[i];

    var sumw=0, acc=0, bestW=-1; int bestJ=-1;
    for(j=0;j<N;j++){
        if(j==i) continue;
        int tid_j=G_EqTreeId[j]; Node* tj=G_TreeIdx[tid_j]; int dj=tj->d; var rj=tj->r;

        var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
        var predBoost = 0.5 + 0.5*(G_Pred[i]*G_Pred[j]);
        var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );  // favors high-proportion participants
        w *= predBoost * propBoost;

        // Optional: DTREE pair advice boost
        var pairAdv = advisePair(i,j,0,0,0,0);  // safe call; if untrained -> ~0
        w *= (0.75 + 0.25*(0.5*(pairAdv+1.0))); // 0.75..1.0 range

        sumw += w; acc += w*G_State[j];
        if(w>bestW){bestW=w; bestJ=j;}
    }
    if(outTopEq) *outTopEq = bestJ;
    if(outTopW)  *outTopW  = ifelse(sumw>0, bestW/sumw, 0);
    if(sumw>0) return acc/sumw; return 0;
}

// --------- symbolic expression builder (now includes Prop[i]) ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
    string s = G_Sym[i]; strcpy(s,"");
    string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
                     A1x[i], n1, A1lam[i], A1mean[i], A1E[i], A1P[i], A1i[i], A1c[i]);
    string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
                     A2x[i], n2, A2lam[i], A2mean[i], A2E[i], A2P[i], A2i[i], A2c[i]);

    strcat(s, "x[i]_next = ");
    strcat(s, strf("%.3f*x[i] + ", G_WSelf[i]));
    if(G_Mode[i]==0){ strcat(s, strf("%.3f*sin%s + ",  G_WN1[i], a1)); strcat(s, strf("%.3f*cos%s + ",  G_WN2[i], a2)); }
    else if(G_Mode[i]==1){ strcat(s, strf("%.3f*tanh%s + ", G_WN1[i], a1)); strcat(s, strf("%.3f*sin%s + ",  G_WN2[i], a2)); }
    else if(G_Mode[i]==2){ strcat(s, strf("%.3f*cos%s + ",  G_WN1[i], a1)); strcat(s, strf("%.3f*tanh%s + ", G_WN2[i], a2)); }
    else { strcat(s, strf("%.3f*sin%s + ",  G_WN1[i], a1)); strcat(s, strf("%.3f*cos%s + ",  G_WN2[i], a2)); }

    strcat(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", G_WGlob1[i], G1mean[i], G1E[i]));
    strcat(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ",   G_WGlob2[i], G2P[i],   G2lam[i]));
    strcat(s, strf("%.3f*(x[i]-x_prev[i]) + ",         G_WMom[i]));
    strcat(s, strf("Prop[i]=%.4f; ",                    G_Prop[i]));
    strcat(s, strf("%.3f*DT(i) + ",                    G_WTree[i]));
    strcat(s, strf("%.3f*DTREE(i)",                    G_WAdv[i]  ));
}

// --------- one-time rewire init (build mapping) ----------
void rewireInit()
{
    randomizeRP(); computeProjection();

    // Build D-tree index and eq->tree mapping
    G_TreeN=0; indexTreeDFS(Root);
    int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
}

// --------- full "rewire epoch": adjacency by DTREE + coefficients by DTREE + proportions ----------
void rewireEpoch(var lambda, var mean, var energy, var power)
{
    // 1) refresh predictability before synthesis
    int i;
    for(i=0;i<G_N;i++){ Node* t=G_TreeIdx[G_EqTreeId[i]]; G_Pred[i]=nodePredictability(t); }

    // 2) topology chosen by DTREE
    rewireAdjacency_DTREE(lambda,mean,energy,power);

    // 3) coefficients/modes/penalties/proportions created by DTREE
    for(i=0;i<G_N;i++) synthesizeEquationFromDTREE(i,lambda,mean,energy,power);

    // 4) normalize proportions across equations
    normalizeProportions();

    // 5) update context id (from adjacency)
    int D=G_D; int h=0; for(i=0;i<G_N*D;i++) h = (h*1315423911) ^ G_Adj[i];
    G_CtxID = (h ^ (G_Epoch<<8)) & 0x7FFFFFFF;

    // 6) rebuild symbolic strings with current neighbors
    for(i=0;i<G_N;i++){ int n1=G_Adj[i*G_D+0], n2=G_Adj[i*G_D+1]; buildSymbolicExpr(i,n1,n2); }
}

// --------- update step (per bar) ----------
var projectNet()
{
    int N=G_N,i; var sum=0,sumsq=0,cross=0;
    for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
    var mean=sum/N, corr=cross/(N-1);
    return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}

void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
    int N=G_N, D=G_D, i;

    // aggregates for this bar (before update)
    var sum=0,sumsq=0; for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; }
    var mean=sum/N, energy=sumsq, power=sumsq/N;

    // refresh predictability & (optional) cached DT advice per equation
    for(i=0;i<N;i++){ Node* t=G_TreeIdx[G_EqTreeId[i]]; G_Pred[i]=nodePredictability(t); }

    // update each equation
    for(i=0;i<N;i++){
        int n1=G_Adj[i*D+0], n2=G_Adj[i*D+1];
        var xi=G_State[i], xn1=G_State[n1], xn2=G_State[n2], mom=xi-G_Prev[i];

        // structural consensus first (uses proportions & predictability internally)
        int topEq=-1; var topW=0;
        var dt = dtreeTerm(i,&topEq,&topW);
        G_TreeTerm[i]=dt; G_TopEq[i]=topEq; G_TopW[i]=topW;

        // built-in DTREE advice from current features
        var adv = adviseEq(i, driver, mean, energy, power);
        G_AdvScore[i] = adv;

        // nonlinear arguments (from DTREE-generated coeffs)
        var arg1=A1x[i]*xn1 + A1lam[i]*driver + A1mean[i]*mean + A1E[i]*energy + A1P[i]*power + A1i[i]*i + A1c[i];
        var arg2=A2x[i]*xn2 + A2lam[i]*driver + A2mean[i]*mean + A2E[i]*energy + A2P[i]*power + A2i[i]*i + A2c[i];

        var nl1,nl2;
        if(G_Mode[i]==0){ nl1=sin(arg1); nl2=cos(arg2); }
        else if(G_Mode[i]==1){ nl1=tanh(arg1); nl2=sin(arg2); }
        else if(G_Mode[i]==2){ nl1=cos(arg1);  nl2=tanh(arg2); }
        else { nl1=sin(arg1); nl2=cos(arg2); }

        var glob1=tanh(G1mean[i]*mean + G1E[i]*energy);
        var glob2=sin (G2P[i]*power + G2lam[i]*driver);

        var xNew =
            G_WSelf[i]*xi +
            G_WN1[i]*nl1 +
            G_WN2[i]*nl2 +
            G_WGlob1[i]*glob1 +
            G_WGlob2[i]*glob2 +
            G_WMom[i]*mom +
            G_WTree[i]*dt +
            G_WAdv[i] *adv;

        G_Prev[i]=xi; G_Vel[i]=xNew-xi; G_State[i]=xNew;

        // META on rewire bars
        if(writeMeta){
            char fname[64]; buildEqFileName(i,fname);
            int tid=G_EqTreeId[i]; Node* t=G_TreeIdx[tid];
            int nn1=G_Adj[i*D+0], nn2=G_Adj[i*D+1];
            file_append(fname,
                strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,\"%s\"\n",
                    G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid, t->d, t->r,
                    G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i], G_Sym[i]));
        }
    }

    if(outMean) *outMean=mean; if(outEnergy) *outEnergy=energy; if(outPower) *outPower=power;
}

// ----------------- MAIN -----------------
function run()
{
    static int initialized=0;
    static var lambda;
    if(is(INITRUN) && !initialized){
        if(LookBack < NWIN) LookBack = NWIN;

        Root=createNode(MAX_DEPTH);
        allocateNet();

        G_DTreeExp = randu(1.10,1.60);
        G_FB_A     = randu(0.60,0.85);
        G_FB_B     = 1.0 - G_FB_A;

        randomizeRP(); computeProjection();

        // Build tree index + mapping once
        rewireInit();

        // First epoch synthesis (uses current states as context)
        G_Epoch = 0;
        rewireEpoch(0,0,0,0);

        // Prepare files: header per equation
        char fname[64]; int i;
        for(i=0;i<NET_EQNS;i++){
            buildEqFileName(i,fname);
            file_append(fname,
                "Bar,lambda,gamma,i,State,n1,n2,mean,energy,power,Vel,Mode,WAdv,WSelf,WN1,WN2,WGlob1,WGlob2,WMom,WTree,Pred,Adv,Prop,TreeTerm,TopEq,TopW,TreeId,Depth,Rate\n");
        }

        // Initial META dump (epoch 0)
        for(i=0;i<G_N;i++){
            int n1=G_Adj[i*G_D+0], n2=G_Adj[i*G_D+1]; int tid=G_EqTreeId[i]; Node* t=G_TreeIdx[tid];
            char fname2[64]; buildEqFileName(i,fname2);
            file_append(fname2,
                strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,\"%s\"\n",
                    G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
                    G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i], G_Sym[i]));
        }

        initialized=1;
        printf("\nRoot nodes: %i | Net equations: %i (deg=%i, kproj=%i)", countNodes(Root), G_N, G_D, G_K);
    }

    // 1) Tree -> lambda
    lambda = evaluateNode(Root);

    // 2) Rewire epoch?
    int doRewire = ((Bar % REWIRE_EVERY) == 0);
    if(doRewire){
        G_Epoch++;
        // Use current aggregates as context for synthesis
        // quick pre-aggregates for better guidance
        int i; var sum=0; for(i=0;i<G_N;i++) sum += G_State[i];
        var mean = sum/G_N;
        var energy=0; for(i=0;i<G_N;i++) energy += G_State[i]*G_State[i];
        var power = energy/G_N;

        rewireEpoch(lambda,mean,energy,power);
    }

    // 3) Update net this bar (write META only if rewired)
    var meanB, energyB, powerB;
    updateNet(lambda, &meanB, &energyB, &powerB, doRewire);

    // 4) Feedback blend
    var gamma = projectNet();
    lambda = G_FB_A*lambda + G_FB_B*gamma;

    // 5) Plots
    plot("lambda", lambda, LINE, 0);
    plot("gamma",  gamma,  LINE, 0);
    plot("P_win",  powerB, LINE, 0);

    // 6) Numeric logging
    if(Bar % LOG_EVERY == 0){
        char fname[64]; int i;
        for(i=0;i<NET_EQNS;i++){
            int n1=G_Adj[i*G_D+0], n2=G_Adj[i*G_D+1]; int tid=G_EqTreeId[i]; Node* t=G_TreeIdx[tid];
            buildEqFileName(i,fname);
            file_append(fname,
                strf("%i,%.9f,%.9f,%i,%.9f,%i,%i,%.9f,%.9f,%.9f,%.9f,%i,%.6f,%.6f,%.6f,%.6f,%.6f,%.6f,%.6f,%.6f,%.4f,%.4f,%.6f,%.9f,%i,%.6f,%i,%i,%.6f\n",
                    Bar, lambda, gamma, i, G_State[i], n1, n2,
                    meanB, energyB, powerB, G_Vel[i], G_Mode[i],
                    G_WAdv[i], G_WSelf[i], G_WN1[i], G_WN2[i], G_WGlob1[i], G_WGlob2[i], G_WMom[i], G_WTree[i],
                    G_Pred[i], G_AdvScore[i], G_Prop[i], G_TreeTerm[i], G_TopEq[i], G_TopW[i], tid, t->d, t->r));
        }
    }

    if(lambda > 0.9) enterLong();
}

// Clean up memory
function cleanup()
{
    if(Root) freeTree(Root);
    freeNet();
}

Last edited by TipmyPip; 09/04/25 16:58.
Gate-and-Field Adaptive Engine (GFAE) [Re: TipmyPip] #488874
09/04/25 16:56
Joined: Sep 2017
Posts: 164
TipmyPip Online OP
Member

Gate-and-Field Adaptive Engine (GFAE)

A. Finite language of situations
Each moment is mapped to a single symbol from a small alphabet. From experience, the system remembers which symbols tend to follow which, and how concentrated those follow-ups are. Two summaries are read each moment: a lean (which way the next step tilts) and a clarity (how decisive that tilt is). They serve as a gate—sometimes permissive, sometimes not.
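
In code terms, the gate reduces to two numbers read from one row of a transition-count table. Below is a minimal sketch, assuming Laplace-smoothed counts and the even-index-means-bullish convention used further down; rowLean, rowClarity, rowCount and nStates are illustrative names, and the complete listing later in this post does the same job with MC_nextBullishProb() and MC_rowEntropy01().

Code
// Sketch only: lean = share of probability mass on bullish follow-up states,
// clarity = 1 - normalized entropy of the row. Counts get +1 Laplace smoothing.
var rowLean(int* rowCount, int nStates)
{
  int t; var pBull = 0, pTot = 0;
  for(t = 1; t < nStates; t++){
    var p = rowCount[t] + 1.0;
    pTot += p;
    if(t % 2 == 0) pBull += p;   // assumption: even state ids are the bullish ones
  }
  if(pTot <= 0) return 0.5;
  return pBull/pTot;             // 0.5 = no tilt, 1 = strong bullish lean
}

var rowClarity(int* rowCount, int nStates)
{
  int t; var Z = 0, H = 0;
  for(t = 1; t < nStates; t++) Z += rowCount[t] + 1.0;
  for(t = 1; t < nStates; t++){
    var p = (rowCount[t] + 1.0)/Z;
    if(p > 0) H -= p*log(p);
  }
  var Hmax = log(nStates - 1);
  if(Hmax <= 0) return 0;
  return 1.0 - H/Hmax;           // 1 = decisive row, 0 = uniform (murky)
}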

B. Continuous field of influences
Alongside the gate runs a smooth field of interacting elements. Each element updates by blending:

a trace of itself, a couple of neighbor signals passed through simple bends, a soft background rhythm spanning slow to fast, coarse crowd summaries, a touch of recent change, and a bias toward kindred elements.

All ingredients are bounded; attention is a scarce resource shared proportionally.
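
A condensed sketch of one element's update follows, with illustrative weights and argument shapes; it mirrors the structure of updateNet() in the full listing (self-trace, two bent neighbor signals, a crowd/rhythm term, recent change, and a hard bound), not its exact coefficients.

Code
// Sketch: one element's next value as a bounded blend of simple ingredients.
var blendElement(var self, var prevSelf, var n1, var n2, var mean, var power)
{
  var wSelf = 0.5, wN1 = 0.2, wN2 = 0.2, wGlob = 0.1, wMom = 0.05; // illustrative weights
  var next =
    wSelf*self +                  // trace of itself
    wN1*sin(n1) +                 // neighbor signal through one bend
    wN2*tanh(n2) +                // second neighbor through another bend
    wGlob*sin(0.6*power + mean) + // coarse crowd summary / background rhythm
    wMom*(self - prevSelf);       // touch of recent change
  return clamp(next, -1e6, 1e6);  // keep the field bounded
}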

C. Periodic seat-reshaping
At appointed intervals the system revisits who listens to whom, how much weight each path receives, and which bends are in play. Preference goes to regular, well-behaved parts, kindred pairings along the rhythm ladder, and compact formulas. The structure molts rather than resets: same scaffold, refreshed connections.
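
As a bare scaffold, the reshaping is just a bar-count gate around a routine that re-deals connections and weights while leaving states alone; RESHAPE_EVERY, maybeReshape and the uniform re-picking here are illustrative assumptions, and in the full listing this role is played by rewireEpoch(), called every REWIRE_EVERY bars with DTREE-guided choices instead of random ones.

Code
// Sketch, illustrative names: periodic reshaping refreshes wiring, keeps states.
#define RESHAPE_EVERY 127               // how often the seats are re-dealt (bars)

void maybeReshape(int bar, int nElems, int* adjacency, var* weights)
{
  if(bar % RESHAPE_EVERY != 0) return;  // not an appointed interval
  int i;
  for(i = 0; i < nElems; i++){
    adjacency[i] = (int)random(nElems);     // re-pick who element i listens to
    weights[i]   = 0.05 + 0.30*random(1);   // re-weight the path, small and positive
  }
  // element states live elsewhere and are deliberately untouched: molt, not reset
}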

D. Permission meets timing
Actions arise only when the gate’s lean is convincing and the field’s rhythm agrees. The field then chooses the when and the how-much.
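
A minimal sketch of the two-key entry rule, with illustrative thresholds; the clarity cut-off is an extra added here for illustration, while the listing below gates only on the lean (MC_PBullNext vs. PBULL_LONG_TH/PBULL_SHORT_TH) together with the field driver lambda.

Code
// Sketch: act only when the gate's lean is convincing AND the field's timing agrees.
void gatedEntry(var lean, var clarity, var fieldDriver)
{
  if(clarity < 0.2) return;                             // gate too murky: no permission
  if(lean > 0.60 && fieldDriver >  0.7) enterLong();    // permission up + timing up
  if(lean < 0.40 && fieldDriver < -0.7) enterShort();   // permission down + timing down
}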

E. Self-explanation
As it runs, the system writes short, human-readable snippets: the current symbol, the gate’s lean and clarity, and concise sketches of how elements combined their inputs. The result is a living ledger of conditions and responses.
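
A toy sketch of that ledger, assuming an illustrative file name and column set; the full listing writes richer META and per-bar CSV rows through the same file_append()/strf() calls.

Code
// Sketch: append one human-readable line per moment to a running ledger.
void ledgerWrite(int bar, int symbol, var lean, var clarity, string formula)
{
  file_append("Log\\gfae_ledger.csv",   // illustrative path
    strf("%i,state=%i,lean=%.3f,clarity=%.3f,\"%s\"\n",
      bar, symbol, lean, clarity, formula));
}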

F. What emerges
Coherence without uniformity: clusters coordinate when the rhythm invites them, solos recede when clarity fades, and adaptability is maintained through small, proportional adjustments spread across the whole.


Code
// ======================================================================
// Markov-augmented Harmonic D-Tree Engine (Candlestick 122-directional)
// ======================================================================

// ================= USER CONFIG =================
#define ASSET_SYMBOL   "EUR/USD"   // symbol to trade
#define ALGO_NAME      "Alpha10b"  // algo tag (keeps models/files separated)

// Markov gating thresholds
#define MC_ACT         0.30        // min |CDL| ([-1..1]) to mark a pattern active
#define PBULL_LONG_TH  0.60        // Markov gate for long entries
#define PBULL_SHORT_TH 0.40        // Markov gate for short entries

// ================= ENGINE PARAMETERS =================
#define MAX_BRANCHES    3
#define MAX_DEPTH       4
#define NWIN            256
#define NET_EQNS        100
#define DEGREE          4
#define KPROJ           16
#define REWIRE_EVERY    127
#define LOG_EVERY       1

// DTREE-driven rewiring candidates per neighbor slot
#define CAND_NEIGH      8

// ---- DTREE feature sizes (extended with Markov features) ----
#define ADV_EQ_NF       12
#define ADV_PAIR_NF     12

// ================= Candles -> 122-state Markov =================
#define MC_NPAT    61
#define MC_STATES  (1 + 2*MC_NPAT)  // 0=NONE, 1..122 directional
#define MC_NONE    0
#define MC_LAPLACE 1.0

// ---------- helpers ----------
var clamp01(var x){ if(x<0) return 0; if(x>1) return 1; return x; }

int isInvalid(var x)
{
  if(x != x) return 1;                // NaN
  if(x > 1e100 || x < -1e100) return 1; // ±INF or astronomic values
  return 0;
}

var safeSig(var x){
  if(x != x) return 0;            // NaN -> 0
  if(x >  999.) return  999.;
  if(x < -999.) return -999.;
  return x;
}

// ========== Heuristic 61-candle feature builder ==========
int buildCDL_TA61(var* out, string* names)
{
  int i; for(i=0;i<MC_NPAT;i++){ out[i]=0; if(names) names[i]="UNUSED"; }

  var O = priceOpen();
  var H = priceHigh();
  var L = priceLow();
  var C = priceClose();

  var rng = H - L; if(rng <= 0) rng = 1e-8;
  var body = C - O;
  var dir  = ifelse(body >= 0, 1.0, -1.0);
  var bodyFrac  = clamp(body/rng, -1, 1);                  // [-1..1]
  var upperFrac = clamp01( (H - max(O,C)) / rng );         // [0..1]
  var lowerFrac = clamp01( (min(O,C) - L) / rng );         // [0..1]
  var absBody   = abs(body)/rng;                           // [0..1]

  // 0: body direction & size
  out[0] = bodyFrac; if(names) names[0] = "BODY";

  // 1..2: upper/lower dominance (signed)
  out[1] = clamp( upperFrac - lowerFrac, -1, 1 ); if(names) names[1] = "UPPER_DOM";
  out[2] = clamp( lowerFrac - upperFrac, -1, 1 ); if(names) names[2] = "LOWER_DOM";

  // 3: doji-ish (very small body), signed by direction
  var dojiT = 0, thresh = 0.10;
  if(absBody < thresh) dojiT = 1.0 - absBody/thresh;   // 0..1
  out[3] = dir * dojiT; if(names) names[3] = "DOJI";

  // 4: marubozu-ish (both shadows tiny), signed by direction
  var shadowSum = upperFrac + lowerFrac;
  var maru = 0; if(shadowSum < 0.10) maru = 1.0 - shadowSum/0.10;
  out[4] = dir * clamp01(maru); if(names) names[4] = "MARUBOZU";

  // 5: hammer-ish (long lower, tiny upper)
  var hamm = 0; if(lowerFrac > 0.60 && upperFrac < 0.10) hamm = (lowerFrac - 0.60)/0.40;
  out[5] = dir * clamp01(hamm); if(names) names[5] = "HAMMER";

  // 6: shooting-star-ish (long upper, tiny lower), bearish by shape
  var star = 0; if(upperFrac > 0.60 && lowerFrac < 0.10) star = (upperFrac - 0.60)/0.40;
  out[6] = -clamp01(star); if(names) names[6] = "SHOOTING";

  // 7: long body strength (signed)
  var longB = 0; if(absBody > 0.70) longB = (absBody - 0.70)/0.30;
  out[7] = dir * clamp01(longB); if(names) names[7] = "LONG_BODY";

  // 8: small body (spinning top-ish)
  out[8] = dir * (1.0 - absBody); if(names) names[8] = "SPIN_TOP";

  // 9: momentum-ish scalar from body size (signed)
  out[9] = dir * (2.0*absBody - 1.0); if(names) names[9] = "BODY_MOM";

  // 10..60 left as zero/UNUSED
  return 61;
}

// ===== Markov storage =====
#define MC_IDX(s,t) ((s)*MC_STATES + (t))
static int*  MC_Count;        // size MC_STATES*MC_STATES
static int*  MC_RowSum;       // size MC_STATES
static int   MC_Prev = -1;
static int   MC_Cur  = 0;
static var   MC_PBullNext = 0.5;
static var   MC_Entropy   = 0.0;
static string MC_Names[MC_NPAT];

void MC_alloc()
{
  int i;
  MC_Count  = (int*)malloc(MC_STATES*MC_STATES*sizeof(int));
  MC_RowSum = (int*)malloc(MC_STATES*sizeof(int));
  for(i=0;i<MC_STATES*MC_STATES;i++) MC_Count[i]=0;
  for(i=0;i<MC_STATES;i++) MC_RowSum[i]=0;
  MC_Prev = -1; MC_Cur = 0; MC_PBullNext = 0.5; MC_Entropy = 0.;
}
void MC_free(){ if(MC_Count){free(MC_Count);MC_Count=0;} if(MC_RowSum){free(MC_RowSum);MC_RowSum=0;} }

int MC_stateFromCDL(var* cdl /*len=61*/, var thr)
{
  int i, best=-1; var besta=0;
  for(i=0;i<MC_NPAT;i++){ var a=abs(cdl[i]); if(a>besta){ besta=a; best=i; } }
  if(best<0) return MC_NONE;
  if(besta < thr) return MC_NONE;
  int bull = (cdl[best] > 0);
  return 1 + 2*best + bull;
}
int MC_indexFromState(int s){ if(s<=0) return -1; return (s-1)/2; }
int MC_isBull(int s){ if(s<=0) return 0; return ((s-1)%2)==1; }

void MC_update(int sPrev,int sCur)
{
  if(sPrev<0) return;
  MC_Count[MC_IDX(sPrev,sCur)] += 1;
  MC_RowSum[sPrev]             += 1;
}

var MC_prob(int s,int t)
{
  var num = (var)MC_Count[MC_IDX(s,t)] + MC_LAPLACE;
  var den = (var)MC_RowSum[s] + MC_LAPLACE*MC_STATES;
  if(den<=0) return 1.0/MC_STATES;
  return num/den;
}

var MC_nextBullishProb(int s)
{
  if(s<0) return 0.5;
  int t; var pBull=0, pTot=0;
  for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); pTot += p; if(MC_isBull(t)) pBull += p; }
  if(pTot<=0) return 0.5;
  return pBull/pTot;
}

var MC_rowEntropy01(int s)
{
  if(s<0) return 1.0;
  int t; var H=0, Z=0;
  for(t=1;t<MC_STATES;t++){ var p=MC_prob(s,t); Z+=p; }
  if(Z<=0) return 1.0;
  for(t=1;t<MC_STATES;t++){
    var p=MC_prob(s,t)/Z;
    if(p>0) H += -p*log(p);
  }
  var Hmax = log(MC_STATES-1);
  if(Hmax<=0) return 0;
  return H/Hmax;
}

// ================= HARMONIC D-TREE (engine) =================
typedef struct Node { var v; var r; void* c; int n; int d; } Node; // value, phase rate, children, child count, depth
Node* Root;
Node** G_TreeIdx;  int G_TreeN; int G_TreeCap; var G_DTreeExp;

// ------- helpers (built-ins used) -------
int randint(int lo,int hi){ return lo + (int)random(hi - lo + 1); }      // [lo..hi]
var randu(var a,var b){ return a + random(b - a); }                       // [a..b)
var randsign(){ return ifelse(random(1) < 0.5, -1.0, 1.0); }
var mapUnit(var u,var lo,var hi){ u = clamp(u,-1.,1.); var t=0.5*(u+1.0); return lo + t*(hi-lo); }

void pushTreeNode(Node* u){ if(G_TreeN < G_TreeCap) G_TreeIdx[G_TreeN++] = u; }
void indexTreeDFS(Node* u){ if(!u) return; pushTreeNode(u); int i; for(i=0;i<u->n;i++) indexTreeDFS(((Node**)u->c)[i]); }

Node* createNode(int depth)
{
  Node* u = (Node*)malloc(sizeof(Node));
  u->v = 2*random(1)-1;                             // [-1..1)
  u->r = 0.01 + 0.02*depth + random(1)*0.005;       // small positive
  u->d = depth;
  if(depth > 0){
    u->n = randint(1, MAX_BRANCHES);
    u->c = malloc(u->n * sizeof(void*));
    int i; for(i=0;i<u->n;i++) ((Node**)u->c)[i] = createNode(depth - 1);
  } else { u->n = 0; u->c = 0; }
  return u;
}
var evaluateNode(Node* u)
{
  if(!u) return 0;
  var sum=0; int i; for(i=0;i<u->n;i++) sum += evaluateNode(((Node**)u->c)[i]);
  var phase  = sin(u->r * Bar + sum);
  var weight = 1.0 / pow(u->d + 1, G_DTreeExp);
  u->v = (1 - weight)*u->v + weight*phase;
  return u->v;
}
int countNodes(Node* u){ if(!u) return 0; int c=1,i; for(i=0;i<u->n;i++) c += countNodes(((Node**)u->c)[i]); return c; }
void freeTree(Node* u){ if(!u) return; int i; for(i=0;i<u->n;i++) freeTree(((Node**)u->c)[i]); if(u->c) free(u->c); free(u); }

// =========== NETWORK STATE & COEFFICIENTS ===========
int   G_N  = NET_EQNS;
int   G_D  = DEGREE;
int   G_K  = KPROJ;

var*  G_State; var*  G_Prev; var*  G_Vel;
int*  G_Adj;  var*   G_RP;   var*  G_Z;

int*  G_Mode;
var*  G_WSelf; var*  G_WN1; var*  G_WN2; var*  G_WGlob1; var*  G_WGlob2; var*  G_WMom; var*  G_WTree; var*  G_WAdv;

var*  A1x;  var*  A1lam;  var*  A1mean;  var*  A1E;  var*  A1P;  var*  A1i;  var*  A1c;
var*  A2x;  var*  A2lam;  var*  A2mean;  var*  A2E;  var*  A2P;  var*  A2i;  var*  A2c;

var*  G1mean; var*  G1E; var*  G2P; var*  G2lam;

var*  G_TreeTerm; int*  G_TopEq; var*  G_TopW; int*  G_EqTreeId; var*  TAlpha; var*  TBeta;
var*  G_Pred; var*  G_AdvScore;

var*  G_PropRaw; var*  G_Prop;
string* G_Sym;

// Markov features exposed to DTREE
var G_MCF_PBull, G_MCF_Entropy, G_MCF_State;

// epoch/context & feedback
int    G_Epoch = 0;
int    G_CtxID = 0;
var    G_FB_A = 0.7;
var    G_FB_B = 0.3;

// ---------- predictability from D-tree ----------
var nodePredictability(Node* t)
{
  if(!t) return 0.5;
  var disp=0; int n=t->n, i;
  for(i=0;i<n;i++){ Node* c=((Node**)t->c)[i]; disp += abs(c->v - t->v); }
  if(n>0) disp /= n;
  var depthFac = 1.0/(1+t->d);
  var rateBase = 0.01 + 0.02*t->d;
  var rateFac  = exp(-25.0*abs(t->r - rateBase));
  var p = 0.5*(depthFac + rateFac);
  p = 0.5*p + 0.5*(1.0/(1.0 + disp));
  return clamp(p,0,1);
}

// filenames
void buildEqFileName(int idx, char* outName /*>=64*/) { strcpy(outName, strf("Log\\%s_eq_%03i.csv", ALGO_NAME, idx)); }

// --------- allocation ----------
void allocateNet()
{
  int N=G_N, D=G_D, K=G_K;
  G_State=(var*)malloc(N*sizeof(var));  G_Prev=(var*)malloc(N*sizeof(var));  G_Vel=(var*)malloc(N*sizeof(var));
  G_Adj=(int*)malloc(N*D*sizeof(int));
  G_RP=(var*)malloc(K*N*sizeof(var));   G_Z=(var*)malloc(K*sizeof(var));
  G_Mode=(int*)malloc(N*sizeof(int));
  G_WSelf=(var*)malloc(N*sizeof(var));  G_WN1=(var*)malloc(N*sizeof(var));   G_WN2=(var*)malloc(N*sizeof(var));
  G_WGlob1=(var*)malloc(N*sizeof(var)); G_WGlob2=(var*)malloc(N*sizeof(var));
  G_WMom=(var*)malloc(N*sizeof(var));   G_WTree=(var*)malloc(N*sizeof(var)); G_WAdv=(var*)malloc(N*sizeof(var));
  A1x=(var*)malloc(N*sizeof(var)); A1lam=(var*)malloc(N*sizeof(var)); A1mean=(var*)malloc(N*sizeof(var));
  A1E=(var*)malloc(N*sizeof(var)); A1P=(var*)malloc(N*sizeof(var));   A1i=(var*)malloc(N*sizeof(var)); A1c=(var*)malloc(N*sizeof(var));
  A2x=(var*)malloc(N*sizeof(var)); A2lam=(var*)malloc(N*sizeof(var)); A2mean=(var*)malloc(N*sizeof(var));
  A2E=(var*)malloc(N*sizeof(var)); A2P=(var*)malloc(N*sizeof(var));   A2i=(var*)malloc(N*sizeof(var)); A2c=(var*)malloc(N*sizeof(var));
  G1mean=(var*)malloc(N*sizeof(var)); G1E=(var*)malloc(N*sizeof(var));
  G2P=(var*)malloc(N*sizeof(var));    G2lam=(var*)malloc(N*sizeof(var));
  G_TreeTerm=(var*)malloc(N*sizeof(var)); G_TopEq=(int*)malloc(N*sizeof(int)); G_TopW=(var*)malloc(N*sizeof(var));
  TAlpha=(var*)malloc(N*sizeof(var));     TBeta=(var*)malloc(N*sizeof(var));
  G_Pred=(var*)malloc(N*sizeof(var)); G_AdvScore=(var*)malloc(N*sizeof(var));
  G_PropRaw=(var*)malloc(N*sizeof(var));  G_Prop=(var*)malloc(N*sizeof(var));
  G_Sym=(string*)malloc(N*sizeof(string));
  G_TreeCap=512; G_TreeIdx=(Node**)malloc(G_TreeCap*sizeof(Node*)); G_TreeN=0;
  G_EqTreeId=(int*)malloc(N*sizeof(int));

  int i;
  for(i=0;i<N;i++){
    G_State[i]=2*random(1)-1; G_Prev[i]=G_State[i]; G_Vel[i]=0;
    G_Mode[i]=0;
    G_WSelf[i]=0.5; G_WN1[i]=0.2; G_WN2[i]=0.2; G_WGlob1[i]=0.1; G_WGlob2[i]=0.1; G_WMom[i]=0.05; G_WTree[i]=0.15; G_WAdv[i]=0.15;
    A1x[i]=1; A1lam[i]=0.1; A1mean[i]=0; A1E[i]=0; A1P[i]=0; A1i[i]=0; A1c[i]=0;
    A2x[i]=1; A2lam[i]=0.1; A2mean[i]=0; A2E[i]=0; A2P[i]=0; A2i[i]=0; A2c[i]=0;
    G1mean[i]=1.0; G1E[i]=0.001; G2P[i]=0.6; G2lam[i]=0.3;
    TAlpha[i]=0.8; TBeta[i]=25.0;
    G_TreeTerm[i]=0; G_TopEq[i]=-1; G_TopW[i]=0;
    G_Pred[i]=0.5;   G_AdvScore[i]=0;
    G_PropRaw[i]=1;  G_Prop[i]=1.0/G_N;
    G_Sym[i]=(char*)malloc(1024); strcpy(G_Sym[i],"");
  }
}
void freeNet()
{
  int i;
  if(G_State)free(G_State); if(G_Prev)free(G_Prev); if(G_Vel)free(G_Vel);
  if(G_Adj)free(G_Adj); if(G_RP)free(G_RP); if(G_Z)free(G_Z);
  if(G_Mode)free(G_Mode); if(G_WSelf)free(G_WSelf); if(G_WN1)free(G_WN1); if(G_WN2)free(G_WN2);
  if(G_WGlob1)free(G_WGlob1); if(G_WGlob2)free(G_WGlob2); if(G_WMom)free(G_WMom);
  if(G_WTree)free(G_WTree); if(G_WAdv)free(G_WAdv);
  if(A1x)free(A1x); if(A1lam)free(A1lam); if(A1mean)free(A1mean); if(A1E)free(A1E); if(A1P)free(A1P); if(A1i)free(A1i); if(A1c)free(A1c);
  if(A2x)free(A2x); if(A2lam)free(A2lam); if(A2mean)free(A2mean); if(A2E)free(A2E); if(A2P)free(A2P); if(A2i)free(A2i); if(A2c)free(A2c);
  if(G1mean)free(G1mean); if(G1E)free(G1E); if(G2P)free(G2P); if(G2lam)free(G2lam);
  if(G_TreeTerm)free(G_TreeTerm); if(G_TopEq)free(G_TopEq); if(G_TopW)free(G_TopW);
  if(TAlpha)free(TAlpha); if(TBeta)free(TBeta);
  if(G_Pred)free(G_Pred); if(G_AdvScore)free(G_AdvScore);
  if(G_PropRaw)free(G_PropRaw); if(G_Prop)free(G_Prop);
  if(G_Sym){ for(i=0;i<G_N;i++) if(G_Sym[i]) free(G_Sym[i]); free(G_Sym); }
  if(G_TreeIdx)free(G_TreeIdx); if(G_EqTreeId)free(G_EqTreeId);
}

// --------- random projection ----------
void randomizeRP(){ int K=G_K,N=G_N,k,j; for(k=0;k<K;k++) for(j=0;j<N;j++) G_RP[k*N+j]=ifelse(random(1)<0.5,-1.0,1.0); }
void computeProjection(){ int K=G_K,N=G_N,k,j; for(k=0;k<K;k++){ var acc=0; for(j=0;j<N;j++) acc+=G_RP[k*N+j]*(G_State[j]*G_State[j]); G_Z[k]=acc; }}

// --------- build features for DTREE (EXTENDED with Markov) ----------
void buildEqFeatures(int i, var lambda, var mean, var energy, var power, var* S /*ADV_EQ_NF*/)
{
  Node* t=G_TreeIdx[G_EqTreeId[i]];
  S[0]=safeSig(G_State[i]);   S[1]=safeSig(mean);     S[2]=safeSig(power); S[3]=safeSig(energy);
  S[4]=safeSig(lambda);       S[5]=safeSig(G_Pred[i]); S[6]=safeSig(t->d);  S[7]=safeSig(t->r);
  S[8]=safeSig(G_TreeTerm[i]); S[9]=safeSig(G_Mode[i]);
  S[10]=safeSig(G_MCF_PBull);  S[11]=safeSig(G_MCF_Entropy);
}
void buildPairFeatures(int i,int j, var lambda, var mean, var energy, var power, var* P /*ADV_PAIR_NF*/)
{
  Node* ti=G_TreeIdx[G_EqTreeId[i]];
  Node* tj=G_TreeIdx[G_EqTreeId[j]];
  P[0]=safeSig(G_State[i]); P[1]=safeSig(G_State[j]);
  P[2]=safeSig(ti->d);      P[3]=safeSig(tj->d);
  P[4]=safeSig(ti->r);      P[5]=safeSig(tj->r);
  P[6]=safeSig(abs(P[2]-P[3])); P[7]=safeSig(abs(P[4]-P[5]));
  P[8]=safeSig(G_Pred[i]*G_Pred[j]);
  P[9]=safeSig(lambda); P[10]=safeSig(mean); P[11]=safeSig(power);
}

// --------- DTREE advice wrappers ----------
var adviseEq(int i, var lambda, var mean, var energy, var power)
{
  var S[ADV_EQ_NF]; buildEqFeatures(i,lambda,mean,energy,power,S);
  var a = adviseLong(DTREE+RETURNS, 0, S, ADV_EQ_NF); // RETURNS => use next trade return as target in Train
  return a/100.;
}
var advisePair(int i,int j, var lambda, var mean, var energy, var power)
{
  var P[ADV_PAIR_NF]; buildPairFeatures(i,j,lambda,mean,energy,power,P);
  var a = adviseLong(DTREE+RETURNS, 0, P, ADV_PAIR_NF);
  return a/100.;
}

// --------- DTREE-driven adjacency selection ----------
void rewireAdjacency_DTREE(var lambda, var mean, var energy, var power)
{
  int N=G_N, D=G_D, i, d, c, best, cand;
  for(i=0;i<N;i++){
    for(d=0; d<D; d++){
      var bestScore = -2; best = -1;
      for(c=0;c<CAND_NEIGH;c++){
        cand = randint(0,N-1);
        if(cand==i) continue;
        int clash=0, k; for(k=0;k<d;k++) if(G_Adj[i*D+k]==cand){clash=1; break;}
        if(clash) continue;
        var s = advisePair(i,cand,lambda,mean,energy,power);
        if(s > bestScore){ bestScore=s; best=cand; }
      }
      if(best<0){ do{ best = randint(0,N-1);} while(best==i); }
      G_Adj[i*D + d] = best;
    }
  }
}

// --------- DTREE-created coefficients, modes & proportions ----------
void synthesizeEquationFromDTREE(int i, var lambda, var mean, var energy, var power)
{
  var a_mode = adviseEq(i,lambda,mean,energy,power);
  G_Mode[i] = (int)(abs(a_mode*1000)) & 3;

  var a_wself = adviseEq(i,lambda,mean,energy,power);
  var a_wn1   = adviseEq(i,lambda,mean,energy,power);
  var a_wn2   = adviseEq(i,lambda,mean,energy,power);
  var a_g1    = adviseEq(i,lambda,mean,energy,power);
  var a_g2    = adviseEq(i,lambda,mean,energy,power);
  var a_mom   = adviseEq(i,lambda,mean,energy,power);
  var a_tree  = adviseEq(i,lambda,mean,energy,power);
  var a_adv   = adviseEq(i,lambda,mean,energy,power);

  G_WSelf[i]  = mapUnit(a_wself, 0.15, 0.85);
  G_WN1[i]    = mapUnit(a_wn1,   0.05, 0.35);
  G_WN2[i]    = mapUnit(a_wn2,   0.05, 0.35);
  G_WGlob1[i] = mapUnit(a_g1,    0.05, 0.30);
  G_WGlob2[i] = mapUnit(a_g2,    0.05, 0.30);
  G_WMom[i]   = mapUnit(a_mom,   0.02, 0.15);
  G_WTree[i]  = mapUnit(a_tree,  0.05, 0.35);
  G_WAdv[i]   = mapUnit(a_adv,   0.05, 0.35);

  var a1=adviseEq(i,lambda,mean,energy,power);
  var a2=adviseEq(i,lambda,mean,energy,power);
  var a3=adviseEq(i,lambda,mean,energy,power);
  var a4=adviseEq(i,lambda,mean,energy,power);
  var a5=adviseEq(i,lambda,mean,energy,power);
  var a6=adviseEq(i,lambda,mean,energy,power);
  var a7=adviseEq(i,lambda,mean,energy,power);

  A1x[i]   = randsign()*mapUnit(a1, 0.6, 1.2);
  A1lam[i] = randsign()*mapUnit(a2, 0.05,0.35);
  A1mean[i]= mapUnit(a3,-0.30,0.30);
  A1E[i]   = mapUnit(a4,-0.0015,0.0015);
  A1P[i]   = mapUnit(a5,-0.30,0.30);
  A1i[i]   = mapUnit(a6,-0.02,0.02);
  A1c[i]   = mapUnit(a7,-0.20,0.20);

  var b1=adviseEq(i,lambda,mean,energy,power);
  var b2=adviseEq(i,lambda,mean,energy,power);
  var b3=adviseEq(i,lambda,mean,energy,power);
  var b4=adviseEq(i,lambda,mean,energy,power);
  var b5=adviseEq(i,lambda,mean,energy,power);
  var b6=adviseEq(i,lambda,mean,energy,power);
  var b7=adviseEq(i,lambda,mean,energy,power);

  A2x[i]   = randsign()*mapUnit(b1, 0.6, 1.2);
  A2lam[i] = randsign()*mapUnit(b2, 0.05,0.35);
  A2mean[i]= mapUnit(b3,-0.30,0.30);
  A2E[i]   = mapUnit(b4,-0.0015,0.0015);
  A2P[i]   = mapUnit(b5,-0.30,0.30);
  A2i[i]   = mapUnit(b6,-0.02,0.02);
  A2c[i]   = mapUnit(b7,-0.20,0.20);

  var c1=adviseEq(i,lambda,mean,energy,power);
  var c2=adviseEq(i,lambda,mean,energy,power);
  var d1=adviseEq(i,lambda,mean,energy,power);
  var d2=adviseEq(i,lambda,mean,energy,power);
  G1mean[i] = mapUnit(c1, 0.4, 1.6);
  G1E[i]    = mapUnit(c2,-0.004,0.004);
  G2P[i]    = mapUnit(d1, 0.1, 1.2);
  G2lam[i]  = mapUnit(d2, 0.05,0.7);

  var e1=adviseEq(i,lambda,mean,energy,power);
  var e2=adviseEq(i,lambda,mean,energy,power);
  TAlpha[i] = mapUnit(e1, 0.3, 1.5);
  TBeta[i]  = mapUnit(e2, 6.0, 50.0);

  var p = adviseEq(i,lambda,mean,energy,power);
  G_PropRaw[i] = 0.01 + 0.99 * (0.5*(p+1.0));
}

void normalizeProportions()
{
  int N=G_N,i; var s=0; for(i=0;i<N;i++) s += G_PropRaw[i];
  if(s<=0) { for(i=0;i<N;i++) G_Prop[i] = 1.0/N; return; }
  for(i=0;i<N;i++) G_Prop[i] = G_PropRaw[i]/s;
}

// --------- DTree proportional coupling ----------
var dtreeTerm(int i, int* outTopEq, var* outTopW)
{
  int N=G_N,j;
  int tid_i=G_EqTreeId[i]; Node* ti=G_TreeIdx[tid_i]; int di=ti->d; var ri=ti->r;
  var alpha=TAlpha[i], beta=TBeta[i];

  var sumw=0, acc=0, bestW=-1; int bestJ=-1;
  for(j=0;j<N;j++){
    if(j==i) continue;
    int tid_j=G_EqTreeId[j]; Node* tj=G_TreeIdx[tid_j]; int dj=tj->d; var rj=tj->r;

    var w = exp(-alpha*abs(di-dj)) * exp(-beta*abs(ri-rj));
    var predBoost = 0.5 + 0.5*(G_Pred[i]*G_Pred[j]);
    var propBoost = 0.5 + 0.5*( (G_Prop[i] + G_Prop[j]) );
    w *= predBoost * propBoost;

    var pairAdv = advisePair(i,j,0,0,0,0);
    w *= (0.75 + 0.25*(0.5*(pairAdv+1.0)));

    sumw += w; acc += w*G_State[j];
    if(w>bestW){bestW=w; bestJ=j;}
  }
  if(outTopEq) *outTopEq = bestJ;
  if(outTopW)  *outTopW  = ifelse(sumw>0, bestW/sumw, 0);
  if(sumw>0) return acc/sumw; return 0;
}

// --------- symbolic expression builder ----------
void buildSymbolicExpr(int i, int n1, int n2)
{
  string s = G_Sym[i]; strcpy(s,"");
  string a1 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
                   A1x[i], n1, A1lam[i], A1mean[i], A1E[i], A1P[i], A1i[i], A1c[i]);
  string a2 = strf("(%.3f*x[%i] + %.3f*lam + %.3f*mean + %.5f*E + %.3f*P + %.3f*i + %.3f)",
                   A2x[i], n2, A2lam[i], A2mean[i], A2E[i], A2P[i], A2i[i], A2c[i]);

  strcat(s, "x[i]_next = ");
  strcat(s, strf("%.3f*x[i] + ", G_WSelf[i]));
  if(G_Mode[i]==0){ strcat(s, strf("%.3f*sin%s + ", G_WN1[i], a1)); strcat(s, strf("%.3f*cos%s + ", G_WN2[i], a2)); }
  else if(G_Mode[i]==1){ strcat(s, strf("%.3f*tanh%s + ", G_WN1[i], a1)); strcat(s, strf("%.3f*sin%s + ", G_WN2[i], a2)); }
  else if(G_Mode[i]==2){ strcat(s, strf("%.3f*cos%s + ", G_WN1[i], a1)); strcat(s, strf("%.3f*tanh%s + ", G_WN2[i], a2)); }
  else { strcat(s, strf("%.3f*sin%s + ", G_WN1[i], a1)); strcat(s, strf("%.3f*cos%s + ", G_WN2[i], a2)); }

  strcat(s, strf("%.3f*tanh(%.3f*mean + %.5f*E) + ", G_WGlob1[i], G1mean[i], G1E[i]));
  strcat(s, strf("%.3f*sin(%.3f*P + %.3f*lam) + ",   G_WGlob2[i], G2P[i],   G2lam[i]));
  strcat(s, strf("%.3f*(x[i]-x_prev[i]) + ",         G_WMom[i]));
  strcat(s, strf("Prop[i]=%.4f; ",                    G_Prop[i]));
  strcat(s, strf("%.3f*DT(i) + ",                    G_WTree[i]));
  strcat(s, strf("%.3f*DTREE(i)",                    G_WAdv[i]  ));
}

// --------- one-time rewire init ----------
void rewireInit()
{
  randomizeRP(); computeProjection();
  G_TreeN=0; indexTreeDFS(Root);
  int i; for(i=0;i<G_N;i++) G_EqTreeId[i] = i % G_TreeN;
}

// --------- epoch rewire ----------
void rewireEpoch(var lambda, var mean, var energy, var power)
{
  int i;
  for(i=0;i<G_N;i++){ Node* t=G_TreeIdx[G_EqTreeId[i]]; G_Pred[i]=nodePredictability(t); }
  rewireAdjacency_DTREE(lambda,mean,energy,power);
  for(i=0;i<G_N;i++) synthesizeEquationFromDTREE(i,lambda,mean,energy,power);
  normalizeProportions();
  int D=G_D; int h=0; for(i=0;i<G_N*D;i++) h = (h*1315423911) ^ G_Adj[i];
  G_CtxID = (h ^ (G_Epoch<<8)) & 0x7FFFFFFF;
  for(i=0;i<G_N;i++){ int n1=G_Adj[i*G_D+0], n2=G_Adj[i*G_D+1]; buildSymbolicExpr(i,n1,n2); }
}

// --------- compact driver ----------
var projectNet()
{
  int N=G_N,i; var sum=0,sumsq=0,cross=0;
  for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; if(i+1<N) cross+=G_State[i]*G_State[i+1]; }
  var mean=sum/N, corr=cross/(N-1);
  return 0.6*tanh(mean + 0.001*sumsq) + 0.4*sin(corr);
}

// --------- per-bar update ----------
void updateNet(var driver, var* outMean, var* outEnergy, var* outPower, int writeMeta)
{
  int N=G_N, D=G_D, i;

  var sum=0,sumsq=0; for(i=0;i<N;i++){ sum+=G_State[i]; sumsq+=G_State[i]*G_State[i]; }
  var mean=sum/N, energy=sumsq, power=sumsq/N;

  for(i=0;i<N;i++){ Node* t=G_TreeIdx[G_EqTreeId[i]]; G_Pred[i]=nodePredictability(t); }

  for(i=0;i<N;i++){
    int n1=G_Adj[i*D+0], n2=G_Adj[i*D+1];
    var xi=G_State[i], xn1=G_State[n1], xn2=G_State[n2], mom=xi-G_Prev[i];

    int topEq=-1; var topW=0;
    var dt = dtreeTerm(i,&topEq,&topW);
    G_TreeTerm[i]=dt; G_TopEq[i]=topEq; G_TopW[i]=topW;

    var adv = adviseEq(i, driver, mean, energy, power);
    G_AdvScore[i] = adv;

    var arg1=A1x[i]*xn1 + A1lam[i]*driver + A1mean[i]*mean + A1E[i]*energy + A1P[i]*power + A1i[i]*i + A1c[i];
    var arg2=A2x[i]*xn2 + A2lam[i]*driver + A2mean[i]*mean + A2E[i]*energy + A2P[i]*power + A2i[i]*i + A2c[i];

    var nl1,nl2;
    if(G_Mode[i]==0){ nl1=sin(arg1); nl2=cos(arg2); }
    else if(G_Mode[i]==1){ nl1=tanh(arg1); nl2=sin(arg2); }
    else if(G_Mode[i]==2){ nl1=cos(arg1);  nl2=tanh(arg2); }
    else { nl1=sin(arg1); nl2=cos(arg2); }

    var glob1=tanh(G1mean[i]*mean + G1E[i]*energy);
    var glob2=sin (G2P[i]*power + G2lam[i]*driver);

    var xNew =
      G_WSelf[i]*xi +
      G_WN1[i]*nl1 +
      G_WN2[i]*nl2 +
      G_WGlob1[i]*glob1 +
      G_WGlob2[i]*glob2 +
      G_WMom[i]*mom +
      G_WTree[i]*dt +
      G_WAdv[i] *adv;
		
    // prevent runaway values
    if(xNew != xNew) xNew = 0;        // NaN -> 0
    else {
      if(xNew >  1e6) xNew =  1e6;
      if(xNew < -1e6) xNew = -1e6;
    }

    G_Prev[i]=xi; G_Vel[i]=xNew-xi; G_State[i]=xNew;

    if(writeMeta){
      char fname[64]; buildEqFileName(i,fname);
      int tid=G_EqTreeId[i]; Node* t=G_TreeIdx[tid];
      int nn1=G_Adj[i*D+0], nn2=G_Adj[i*D+1];
      file_append(fname,
        strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
          G_Epoch, G_CtxID, NET_EQNS, i, nn1, nn2, tid, t->d, t->r,
          G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
          G_MCF_PBull, G_MCF_Entropy, MC_Cur, G_Sym[i]));
    }
  }
  if(outMean) *outMean=mean; if(outEnergy) *outEnergy=energy; if(outPower) *outPower=power;
}

// ----------------- MAIN -----------------
function run()
{
  // ===== required for ML training / auto-test =====
  NumWFOCycles = 5;                 // WFO is recommended for ML
  set(RULES|TESTNOW|PLOTNOW);       // generate rules; auto-test after Train
  if(Train){                        // RETURNS target = next trade's P/L
    Hedge   = 2;                    // allow simultaneous L/S during training
    LifeTime= 1;                    // 1-bar horizon for return labeling
  } else {
    MaxLong = MaxShort = 1;         // clean behavior in Test/Trade
  }

  // ===== init once =====
  static int initialized=0;
  static var lambda;
  static int fileInit=0;

  if(LookBack < NWIN) LookBack = NWIN;
  asset(ASSET_SYMBOL);
  algo(ALGO_NAME);

  if(is(INITRUN) && !initialized){
    seed(365);  // <<< ensure deterministic Train/Test advise order

    var tmp[MC_NPAT]; buildCDL_TA61(tmp, MC_Names);

    Root=createNode(MAX_DEPTH);
    allocateNet();
    MC_alloc();

    G_DTreeExp = randu(1.10,1.60);
    G_FB_A     = randu(0.60,0.85);
    G_FB_B     = 1.0 - G_FB_A;

    randomizeRP(); computeProjection();
    rewireInit();

    // First epoch synthesis
    G_Epoch = 0;
    rewireEpoch(0,0,0,0);

    // Prepare per-equation CSVs
    char fname[64]; int i;
    for(i=0;i<NET_EQNS;i++){
      buildEqFileName(i,fname);
      file_append(fname,
        "Bar,lambda,gamma,i,State,n1,n2,mean,energy,power,Vel,Mode,WAdv,WSelf,WN1,WN2,WGlob1,WGlob2,WMom,WTree,Pred,Adv,Prop,TreeTerm,TopEq,TopW,TreeId,Depth,Rate,PBull,Entropy,MCState\n");
    }
    if(!fileInit){
      file_append(strf("Log\\%s_markov.csv",ALGO_NAME),"Bar,State,PBullNext,Entropy,RowSum\n");
      fileInit=1;
    }

    // Initial META
    for(i=0;i<G_N;i++){
      int n1=G_Adj[i*G_D+0], n2=G_Adj[i*G_D+1]; int tid=G_EqTreeId[i]; Node* t=G_TreeIdx[tid];
      char fname2[64]; buildEqFileName(i,fname2);
      file_append(fname2,
        strf("META,%i,%i,%i,%i,%i,%i,%i,%i,%.6f,Pred=%.4f,Adv=%.4f,Prop=%.6f,Mode=%i,WAdv=%.3f,WTree=%.3f,PBull=%.4f,Ent=%.4f,State=%i,\"%s\"\n",
          G_Epoch, G_CtxID, NET_EQNS, i, n1, n2, tid, t->d, t->r,
          G_Pred[i], G_AdvScore[i], G_Prop[i], G_Mode[i], G_WAdv[i], G_WTree[i],
          G_MCF_PBull, G_MCF_Entropy, MC_Cur, G_Sym[i]));
    }
    initialized=1;
    printf("\nRoot nodes: %i | Net equations: %i (deg=%i, kproj=%i)", countNodes(Root), G_N, G_D, G_K);
  }

  // ====== Per bar: Candles -> Markov
  static var CDL[MC_NPAT];
  buildCDL_TA61(CDL,0);
  MC_Cur = MC_stateFromCDL(CDL, MC_ACT);
  if(Bar > LookBack) MC_update(MC_Prev, MC_Cur);
  MC_Prev = MC_Cur;

  MC_PBullNext = MC_nextBullishProb(MC_Cur);
  MC_Entropy   = MC_rowEntropy01(MC_Cur);

  // expose global Markov features
  G_MCF_PBull   = MC_PBullNext;
  G_MCF_Entropy = MC_Entropy;
  G_MCF_State   = (var)MC_Cur;

  // ====== Tree driver lambda
  lambda = evaluateNode(Root);

  // Rewire epoch?
  int doRewire = ((Bar % REWIRE_EVERY) == 0);
  if(doRewire){
    G_Epoch++;
    int i; var sum=0; for(i=0;i<G_N;i++) sum += G_State[i];
    var mean = sum/G_N;
    var energy=0; for(i=0;i<G_N;i++) energy += G_State[i]*G_State[i];
    var power = energy/G_N;
    rewireEpoch(lambda,mean,energy,power);
  }

  // Update net this bar (write META only if rewired)
  var meanB, energyB, powerB;
  updateNet(lambda, &meanB, &energyB, &powerB, doRewire);

  // Feedback blend
  var gamma = projectNet();
  lambda = G_FB_A*lambda + G_FB_B*gamma;

  // --- safe plotting (after LookBack only) ---
  if(!is(LOOKBACK)){
    var lam = safeSig(lambda);
    var gam = safeSig(gamma);
    var pw  = safeSig(powerB);
    var pb  = clamp01(MC_PBullNext);
    var ent = clamp01(MC_Entropy);

    plot("lambda",     lam, LINE, 0);
    plot("gamma",      gam, LINE, 0);
    plot("P_win",      pw,  LINE, 0);
    plot("PBullNext",  pb,  LINE, 0);
    plot("MC_Entropy", ent, LINE, 0);
  }

  // Markov CSV log
  if(Bar % LOG_EVERY == 0){
    file_append(strf("Log\\%s_markov.csv",ALGO_NAME),
      strf("%i,%i,%.6f,%.6f,%i\n", Bar, MC_Cur, MC_PBullNext, MC_Entropy, MC_RowSum[MC_Cur]));
  }

  // ====== Entries ======
  if(Train){
    // Ensure samples for RETURNS training (hedged & 1-bar life set above)
    if(NumOpenLong  == 0) enterLong();
    if(NumOpenShort == 0) enterShort();
  } else {
    // Markov-gated live logic
    if( MC_PBullNext > PBULL_LONG_TH  && lambda >  0.7 ) enterLong();
    if( MC_PBullNext < PBULL_SHORT_TH && lambda < -0.7 ) enterShort();
  }
}

// Clean up memory
function cleanup()
{
  if(Root) freeTree(Root);
  MC_free();
  freeNet();
}

Last edited by TipmyPip; 09/04/25 18:18.