The War of Shifting Fronts [Re: TipmyPip] #488565
01/30/25 05:31
Joined: Sep 2017
Posts: 146
TipmyPip Offline OP
Member
[Linked Image]

The War

The War of Shifting Fronts

Prologue: The Endless Struggle

The war had no beginning. It had no end. It simply was.
Two great factions—the Legion of the Everlasting Bastion and the Phantom Raiders—had waged battle for so long that neither side remembered why they fought. The battlefield itself was fluid, ever-changing, as though it had a will of its own.
The Everlasting Bastion built fortresses, great walls that stood firm against any siege. Their architects were masterful, each new fortress stronger than the last, each barricade more impenetrable than the one before it.
The Phantom Raiders were not an army in the traditional sense. They were shadows, warriors who struck where the walls were weakest, never lingering, always adapting. No matter how high the Bastion built, no matter how thick their defenses, the Raiders always found a way through.

The war was not one of brute force. It was something more intricate, something that felt less like battle and more like… a pattern.
And patterns, once formed, are nearly impossible to break.

Chapter One: The General’s Burden

High General Oren stood atop the walls of Fortress Aegis, surveying the endless landscape of trenches, towers, and barricades. The engineers below worked ceaselessly, reinforcing weak points, expanding the fortifications, ensuring that this stronghold would be the one to hold.

But deep down, Oren knew the truth.
They had said the same thing about Fortress Halcyon before it fell. And Citadel Varn before that. And Bastion Ironhold before that.
No matter how strong they built, the Raiders always adapted.
He turned to Master Architect Lysara, the woman whose mind had designed countless fortresses, each one an evolution of the last. Lysara had long since abandoned the arrogance of thinking any fortress could last forever.
“The Raiders are learning faster,” Oren said.

Lysara nodded. “It’s as if they already know how to dismantle what we haven’t even finished building.”
That statement chilled Oren.
If that were true, then this war was not just a battle of walls and sieges. It was something far more insidious.
He gazed beyond the walls, into the distant haze where the Phantom Raiders gathered. He imagined them not as an enemy force, but as something more abstract—an echo of their own decisions, coming back to haunt them.
And then the thought occurred to him.
What if our own fortresses are creating the very weaknesses they exploit?

Chapter Two: The Phantom’s Shadow

Far from the great walls of Aegis, Phantom Lord Kael stood with his commanders, studying the latest defenses erected by the Everlasting Bastion.
At a glance, the fortress was impenetrable.

But Kael knew better.
The Raiders never attacked blindly. They didn’t storm walls with brute force. They didn’t waste men in pointless battles. They studied, they adapted, they exploited. They did not simply react to fortifications; they used them.
His second-in-command, Shadowmaster Veylen, spoke softly.
“The weak point will emerge soon. It always does.”
Kael turned his gaze to the towering walls and traced an unseen path in his mind.
He had done this before.
Find the opening. Strike. Move on.
But something about this battle felt different. He had spent years leading the Phantom Raiders, and yet… he had never asked himself a simple question.
Why do the weak points always emerge?
His forces never broke through the same way twice. Every siege was different. Every assault required new tactics. Yet, no matter how much the Bastion evolved…
The flaws were always there.
And then the thought occurred to him.
What if the Bastion isn’t just reacting to us? What if it’s shaping itself around our attacks?
The realization shook him. It was as if the battlefield was alive, responding to each decision before it was even made.
And if that were true… then neither side was actually in control.

Chapter Three: The Fractured Reality

Both generals, on opposite sides of the war, reached the same conclusion.
The battle was no longer about fortresses and invasions.
It was something deeper. Something neither side fully understood.
Oren and Lysara devised an unconventional plan.
“What if we don’t reinforce our defenses?” Oren asked.
Lysara’s brow furrowed. “That’s suicide.”
“Not if we’re right,” Oren said. “If the Phantom Raiders only attack because we strengthen our walls, then what happens if we stop strengthening them?”
Lysara’s breath caught. The idea was madness. But what if it worked?
Meanwhile, Kael and Veylen devised their own experiment.

“What if we stop attacking weak points?” Kael asked.
Veylen’s eyes narrowed. “That’s the only way we win.”

“Is it?” Kael asked. “What if attacking is what’s causing new weaknesses to appear? What if, instead of exploiting the gaps, we reinforce them?”
The Raiders had always been shadows. But this time, Kael issued an order that no Raider had ever given.
“Hold your ground.”

And at the same moment, Oren issued an order that no Bastion had ever given.
“Stand down.”

The battlefield froze.
For the first time in history… nothing happened.
No walls were built.
No attacks were launched.

It was as if reality itself hesitated—as if the very existence of the war had depended on the cycle continuing.
And then… something began to collapse.

Chapter Four: The Breaking of the Cycle

As the fortresses stood untouched and the Raiders remained motionless, the landscape itself began to unravel.
The ground beneath them fractured. The sky above them wavered, as if it had only ever been a painted canvas stretched too thin.
Oren and Kael both watched in horror as the truth revealed itself.
This was not a war.

It was a pattern.
One that had been running for so long that neither side had realized they were trapped inside it.
The Bastion had never truly been building fortresses—they had been shaping the conditions for their own downfall.
The Raiders had never truly been finding weaknesses—they had been carving them into existence with every assault.
It had never been about strength or strategy.
It had been about the structure of the war itself.

And now that they had both stepped outside of it—refused to play their roles—the entire system was breaking apart.

Epilogue: The Silent Observer

As the last walls crumbled and the last shadows vanished, something watched from beyond the battlefield.
Something outside the war, something that had never taken part in the endless cycle but had been watching all along.
Something waiting.
Because if a pattern collapses, it does not simply end—it creates the conditions for something new to begin.
And just before the battlefield dissolved completely, both generals heard the same whispered voice in their minds:

“Now the true war begins.”

Last edited by TipmyPip; 01/30/25 06:38.
Re: Zorro Trader GPT [Re: TipmyPip] #488566
01/30/25 07:12
Joined: Aug 2018
Posts: 101
OptimusPrime Offline
Member
I love it! I dabbled in chess programming for several years and created a private aggressive port of Stockfish for my own use. (I called it Primordial.) Very nice! These interests definitely help when it comes to coding for wealth accumulation. I am using ZorroGPT right now and I absolutely love it.


Thanks so much,

OptimusPrime

Hard Question [Re: OptimusPrime] #488567
01/30/25 07:37
Joined: Sep 2017
Posts: 146
TipmyPip Offline OP
Member
Do you see something in the story that you didn't see before? It always seems that if you look deeper, you might find your solution goes unnoticed, because we are all limited by the light we create in order to see. But it is not the only solution that hides within your own creation; I suppose that because you love it, it would be quite a struggle to see why two parts are actually related to our common interests.

Last edited by TipmyPip; 01/30/25 07:41.
Re: Hard Question [Re: TipmyPip] #488568
01/30/25 10:43
Joined: Jul 2000
Posts: 28,019
Frankfurt
jcl Offline
Chief Engineer
I like it.

The War of Shifting Fronts (Part 2) [Re: TipmyPip] #488570
01/30/25 12:58
Joined: Sep 2017
Posts: 146
TipmyPip Offline OP
Member
[Linked Image]

Abstract War

Abstract Mathematical Construct
Let 𝒰 be an abstract process space, where elements undergo transformations based on evolving conditions.

1. Core Elements & Definitions
We define a mapping over 𝒰, denoted as:

𝒳 : 𝒰 × ℝ × ℕ → 𝒰

where each instance 𝒰ₙ consists of a triple:

𝒰𝑛=(𝒫𝑛,𝒮𝑛,𝒴𝑛)

where:

𝒫𝑛 ∈ ℝ represents a continuous scalar adjustment.
𝒮𝑛 ∈ ℕ represents a discrete state magnitude.
𝒴𝑛 ∈ 𝒰 represents an evolving reference structure.

2. Transformation Rule
A process 𝒜 applies adjustments to 𝒰, evolving it under a conditionally propagated mapping:

𝒳(𝒰𝑛, 𝒫𝑛, 𝒮𝑛) = ∅                 if 𝒮𝑛 ≤ 0
𝒳(𝒰𝑛, 𝒫𝑛, 𝒮𝑛) = (𝒫𝑛, 𝒮𝑛, 𝒴𝑛)     otherwise
This transformation continues under the presence of a binary condition.

3. Conditional Evolution
A transition function 𝒯 is introduced, acting within a probabilistic structure:

𝒰𝑛+1 = 𝒳(𝒰𝑛, 𝒫𝑛 − 𝛿, ⌊𝒮𝑛/2⌋)     if 𝑋𝑛 = 1
𝒰𝑛+1 = 𝒰𝑛                          if 𝑋𝑛 = 0
where:

𝒫𝑛 undergoes a gradual decrement by δ.
𝒮𝑛 undergoes quantized contraction.
𝑋𝑛 ∈ {0,1} is determined by an independent stochastic event.

4. Underlying Structure
The transformation 𝒯 ensures a structured evolution, yet never explicitly defines iteration or recursion.

∃ 𝑛0 ∈ ℕ such that ∀ 𝑛 > 𝑛0 : 𝑃(𝒰𝑛 = ∅) = 1

This ensures that, over an extended progression, the transformation reaches a terminal state, albeit through non-deterministic yet structured steps.
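The almost-sure termination claimed above is easy to sanity-check with a minimal Python simulation (my own sketch; the values of δ and p are arbitrary placeholders, since the construct leaves them abstract): the state only changes when the Bernoulli event 𝑋𝑛 = 1 fires, and 𝒮𝑛 contracts to ⌊𝒮𝑛/2⌋ on each firing, so the terminal state ∅ is reached after finitely many firings.

```python
import random

def evolve(P, S, delta=0.1, p=0.5, rng=None):
    """Run the conditionally propagated mapping until S_n <= 0.

    With probability p (the event X_n = 1), P decreases by delta and S
    undergoes the quantized contraction floor(S/2); otherwise the state
    is unchanged. Returns the number of steps to the terminal state.
    """
    rng = rng or random.Random(42)
    steps = 0
    while S > 0:
        if rng.random() < p:  # X_n = 1: apply the transformation
            P -= delta
            S //= 2           # quantized contraction of the state magnitude
        steps += 1
    return steps

# With p = 1 every step fires, so S_0 = 1024 needs exactly 11 contractions.
print(evolve(P=1.0, S=1024, p=1.0))  # -> 11
```

For p < 1 the step count is random, but the expected number of steps is simply the contraction count divided by p.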

1. Fundamental Definitions
Let 𝒵 be a class of structures evolving over successive transformative interactions, denoted as:

𝒵𝑛=(𝒫𝑛,𝒬𝑛,𝒵𝑛−1)

where:

𝒫𝑛 ∈ ℝ represents a principal scalar undergoing progressive adjustments.
𝒬𝑛 ∈ ℝ represents an external perturbation affecting state transitions.
𝒵𝑛−1 ∈ 𝒵 provides an implicit reference to prior evolutionary states.
A transformation 𝒮 governs the system, dynamically modifying 𝒫𝑛 under structured dependencies.

2. Evolutionary Process: Perturbation-Driven Adaptation
We define an adjustment operator 𝒯 acting over 𝒵, modifying the system based on a decaying propagative rule:

𝒯(𝒵𝑛, 𝒫𝑛, 𝒬𝑛) = 𝒫𝑛                                             if 𝑛 = 0
𝒯(𝒵𝑛, 𝒫𝑛, 𝒬𝑛) = 𝒯(𝒵𝑛−1, 𝒫𝑛−1, 𝒬𝑛−1) + (Δ − 𝜀𝒵𝑛) − 𝒬𝑛          if 𝑛 > 0

where:

𝒫𝑛 recursively inherits the prior state 𝒵𝑛−1.
𝒬𝑛 is an external stochastic perturbation, influencing transitions.
Δ represents a structured bias introduced in every step.
𝜀 scales the internal transformation based on prior conditions.
This formulation inherently adapts based on preceding influences while adjusting dynamically due to probabilistic perturbations.

3. Probabilistic Interference Mechanism
A perturbation generator 𝒫𝒳 : ℝ → {0,1} defines interference based on an uncertain external process, akin to selective disruption mechanisms:

𝒬𝑛 = 𝜆   with probability 𝑃(𝑋𝑛 = 1) = 𝑝
𝒬𝑛 = 0   with probability 𝑃(𝑋𝑛 = 0) = 1 − 𝑝

where:

𝒫𝒳 enforces an external intervention with probability p.
The scalar λ dictates the intensity of modification when intervention occurs.
The process introduces non-deterministic fluctuations influencing the evolution.

4. Emergent Behavior & Structured Adaptation
By applying repeated transformations, the structure of 𝒵 evolves in a way that balances prior adjustments while reacting to perturbative influences. The final form expresses a regulated adaptive process, where the outcome reflects both historical dependencies and external interactions.

For sufficiently large n, the process asymptotically stabilizes under:

lim (𝑛→∞) 𝒫𝑛 = ∑ (𝑘 = 1 to ∞) (Δ − 𝜀𝒵𝑘 − 𝒬𝑘)

where the cumulative perturbations regulate the ultimate adjustment.
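The operator 𝒯 and the limit above can be sketched the same way. 𝒵𝑘 is left abstract in the formulation, so this sketch (mine, with placeholder values for Δ, 𝜀, 𝜆, and 𝑝) takes it as a caller-supplied sequence of scalars:

```python
import random

def adapt(P0, z_seq, delta=0.05, eps=0.01, lam=0.2, p=0.3, rng=None):
    """Unroll the perturbation-driven rule: each step adds the structured
    bias delta, subtracts eps * z_k, and with probability p is hit by an
    external perturbation of size lam (the event X_n = 1).
    Returns the trajectory [P_0, P_1, ..., P_n]."""
    rng = rng or random.Random(7)
    traj = [P0]
    for z in z_seq:
        q = lam if rng.random() < p else 0.0  # Q_n: Bernoulli interference
        traj.append(traj[-1] + (delta - eps * z) - q)
    return traj

# With the perturbation switched off (p = 0), P_n - P_0 is exactly the
# running sum of (delta - eps * z_k), matching the limit expression.
print(round(adapt(0.0, [1.0] * 100, p=0.0)[-1], 6))  # -> 4.0
```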

Who among the minds that wander sees war not as blood and steel,
But as a silent drift of shifting states, where choice and chance congeal?
Who discerns in walls that crumble the weight of forms unseen,
Where every strike, a measured shift, shapes fate's unwritten scheme?

Who shall trace, in veiled equations, the battlefield's silent code,
Where power bends, where fate unfolds, yet none escape the road?

don't try this
Code
(=!BA#9]7<"`z2Vxwv4Ts+Oqponm.-j*,)(/&%cba`_^] , (=!BA#9]7<"`z2Vxwv4Ts+Oqponm.-j*,)(/&%cba`_^]

because at run time they are completely different.

Last edited by TipmyPip; 01/30/25 14:00.
The War of Shifting Fronts (Part 3) [Re: TipmyPip] #488571
01/30/25 14:13
Joined: Sep 2017
Posts: 146
TipmyPip Offline OP
Member
[Linked Image]

It is hidden within the War

Code
import matplotlib.pyplot as plt
import networkx as nx

# Create a directed graph
G = nx.DiGraph()

# Recursive function to generate the war recursion tree with unique labels
def add_war_nodes(graph, parent, depth, max_depth):
    if depth > max_depth:
        return

    # Generate unique battle names
    left_battle = f"Battle-{depth}L"
    right_battle = f"Battle-{depth}R"

    # Add edges to represent battles branching from strategic decisions
    graph.add_edge(parent, left_battle)
    graph.add_edge(parent, right_battle)

    # Recursively create deeper battle strategies
    add_war_nodes(graph, left_battle, depth + 1, max_depth)
    add_war_nodes(graph, right_battle, depth + 1, max_depth)

# Root node representing the beginning of war decisions
G.add_node("War Begins")

# Generate recursive war strategy tree
add_war_nodes(G, "War Begins", 1, 5)  # Depth of recursion is 5

# Set up figure size
plt.figure(figsize=(14, 10))

# Recalculate positions for better label visibility
pos = nx.spring_layout(G, seed=42)  

# Draw nodes and edges without labels to prevent overlap
nx.draw(G, pos, with_labels=False, node_color="black", edge_color="gray", node_size=200)

# Draw labels separately in red with a white background for clarity
for node, (x, y) in pos.items():
    plt.text(x, y + 0.03, node, fontsize=9, ha='center', color="red", 
             bbox=dict(facecolor="white", edgecolor="none", boxstyle="round,pad=0.2"))

# Set the title of the plot
plt.title("The War of Shifting Fronts - Hidden Recursion Tree with Clear Labels")

# Show the final visualization
plt.show()


It would surprise you but every game is the ending of conditions for a probable game that inspires the rules to fall apart.
This code can create a highly complex adaptive strategy.

Code
// Structure representing a war decision node
typedef struct {
    char name[50];  // Fixed-size character array for name
    int strength;   // Hidden game-theoretic influence
    int risk;       // Determines adaptability to opposition
} BattleNode;

// Global statistics variables
int battleCount = 0;
var totalStrength = 0;
var totalRisk = 0;
int maxStrength = 0;
int minRisk = 100;  // Start with a high value to track lowest risk

// Function to generate a pseudo-random number in a range (Zorro's method)
int randomInt(int min, int max) {
    return min + (int)((max - min) * random()); 
}

// Recursive function to simulate war strategies with game-theoretic decisions
void simulateBattle(BattleNode* battle, int depth, int maxDepth) {
    if (depth > maxDepth) return;  // Base case: stop recursion

    // Generate random strategy values using Zorro's `random()` function
    battle->strength = randomInt(1, 100);
    battle->risk = randomInt(1, 100); // Ensuring no negative values

    // Update statistics
    battleCount++;
    totalStrength += battle->strength;
    totalRisk += battle->risk;
    
    if (battle->strength > maxStrength) maxStrength = battle->strength;
    if (battle->risk < minRisk) minRisk = battle->risk;

    // Debugging: Print battle details to log
    printf("\n[Battle %d] %s | Strength: %d | Risk: %d", 
           battleCount, battle->name, battle->strength, battle->risk);

    // Hidden recursive expansion influenced by game theory
    if (battle->strength > battle->risk) {
        battle->strength += randomInt(1, 50);
        battle->risk -= randomInt(1, 20);
    } else {
        battle->strength -= randomInt(1, 30);
        battle->risk += randomInt(1, 10);
    }

    // Ensure risk does not go negative
    if (battle->risk < 0) battle->risk = 0;

    // Recursively simulate further battles
    if (depth + 1 <= maxDepth) {
        BattleNode nextBattle;
        sprintf(nextBattle.name, "%s-%d", battle->name, depth);
        simulateBattle(&nextBattle, depth + 1, maxDepth);
    }
}

// Function to display final statistics
void displayStatistics() {
    printf("\n--- War Simulation Statistics ---");
    printf("\nTotal Battles Simulated: %d", battleCount);

    if (battleCount > 0) {
        printf("\nAverage Strength: %.2f", totalStrength / (var)battleCount);
        printf("\nAverage Risk: %.2f", totalRisk / (var)battleCount);
    }
    printf("\nMax Strength Encountered: %d", maxStrength);
    printf("\nMin Risk Encountered: %d", minRisk);
    printf("\n--------------------------------\n");
}

// Main function to trigger the war simulation in Zorro
void run() {  
    if (is(INITRUN)) {  // Run only once at the start
        BattleNode root;
        strcpy(root.name, "War Begins");
        simulateBattle(&root, 1, 5);  // Start recursive war simulation
        
        // Display statistics at the end
        displayStatistics();
    }
}


Here are some statistics from a test run:

Code
[Battle 4] War Begins-1-2-3 | Strength: 82 | Risk: -13
[Battle 5] War Begins-1-2-3-4 | Strength: 77 | Risk: 17
--- War Simulation Statistics ---
Total Battles Simulated: 5
Average Strength: -2.00
Average Risk: -1.80
Max Strength Encountered: 82
Min Risk Encountered: -79
--------------------------------


Last edited by TipmyPip; 01/31/25 12:18.
Re: Zorro Trader GPT [Re: TipmyPip] #488574
01/30/25 17:24
Joined: Sep 2017
Posts: 146
TipmyPip Offline OP
Member
[Linked Image]

The War Begins


Recursive Market Maker Algorithm in Algorithmic Trading

In an algorithmic trading system, we need to design a market maker algorithm that balances liquidity provision and adversarial trading risk. The algorithm must predict optimal bid/ask spreads while ensuring resilience to adverse selection.

We define:

A recursive graph-based order book model, where orders form a tree of limit order dependencies.
A recursive game-theoretic strategy, where the market maker competes against adversarial high-frequency traders (HFTs).
Each function depends on the other recursively, making an iterative solution nearly impossible.

Graph Theory Component: Recursive Order Book Graph
We define an order book as a directed graph, where:

Nodes represent limit orders.
Edges represent dependencies (e.g., order A at $100 depends on order B at $99).
The graph grows recursively, as new limit orders are placed based on past orders.

Function 1: Recursive Order Book Growth

Code
typedef struct Order {
    double price;
    int size;
    double spread;  
    int parentIndex;  
} Order;

void addOrder(Order* orderBook, int index, double price, int size, int totalOrders) {
    if (index >= totalOrders) return;  

    price = randomizePrice(price + random() * 5, VOLATILITY * (0.8 + random() * 0.6));  
    size = generateOrderSize(MAX_ORDER_SIZE);
    double spread = clamp(0.007 + random() * 0.025, 0.007, 0.04);  //  Spread cannot be negative

    orderBook[index].price = price;
    orderBook[index].size = size;
    orderBook[index].spread = spread;

    if (index > 0)
        orderBook[index].parentIndex = index - 1;  
    else
        orderBook[index].parentIndex = -1;  

    print(TO_LOG, " Order Added: Index %d | Price: %.4f | Size: %d | Spread: %.5f", index, price, size, spread);

    if (random() < 0.75 && index + 1 < totalOrders) {  
        int newSize = generateOrderSize(size * (0.8 + random() * 0.3));
        addOrder(orderBook, index + 1, price - 0.01 * randomInt(1, 12), newSize, totalOrders);
    }
}


Game Theory Component: Recursive Market Maker Pricing
Now, we must determine optimal bid/ask spreads to balance market efficiency and profitability.

A recursive game-theoretic function models how:

The market maker sets spreads recursively.
HFTs attack weak spreads by executing trades that maximize adverse selection.

Function 2: Recursive Bid/Ask Adjustment Based on Game Theory

Code
double calculateSpread(Order* orderBook, int index) {
    if (index < 0 || index >= MAX_ORDERS) return 0.007;  //  Prevent invalid indices

    double baseSpread = clamp(0.007 + random() * 0.02, 0.007, 0.04);  
    double hftPressure = ifelse(random() < 0.5, 0.002 + (random() - 0.5) * 0.005, 0.001);  
    double volatilityFactor = (random() - 0.5) * 0.03;  

    double spread = baseSpread + (0.025 - orderBook[index].price * 0.0003) - hftPressure + volatilityFactor;
    return clamp(spread, 0.007, 0.04);  
}


3. Entanglement Between Recursions
Now, the recursive order book structure directly affects the recursive pricing strategy:

More orders in the book → tighter spreads (more liquidity).
HFT attacks → wider spreads (risk mitigation).
Spreads impact future orders, creating a recursive feedback loop.
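The loop in the three points above can be made concrete with a toy Python model (invented parameters throughout; this is not the lite-C simulator, just an illustration of the entanglement): book depth tightens the spread, simulated HFT attacks widen it, and the resulting spread decides whether the book grows or shrinks.

```python
import random

def simulate_feedback(steps=50, rng=None):
    """Toy entangled recursion: depth -> spread -> depth.

    Each step: resting orders grant a liquidity discount on the spread,
    an HFT attack (probability 0.3) adds pressure, and the clamped
    spread then decides whether a new order joins or leaves the book.
    """
    rng = rng or random.Random(1)
    depth, history = 1, []
    for _ in range(steps):
        liquidity_discount = 0.0005 * depth            # more orders -> tighter
        attack = 0.02 if rng.random() < 0.3 else 0.0   # HFT pressure -> wider
        spread = min(max(0.007 + attack - liquidity_discount, 0.007), 0.04)
        if spread < 0.02:       # tight market attracts liquidity
            depth += 1
        elif depth > 1:         # wide market drives orders away
            depth -= 1
        history.append((depth, spread))
    return history

for depth, spread in simulate_feedback()[:3]:
    print(depth, round(spread, 4))
```

The clamp to [0.007, 0.04] mirrors the bounds used in `calculateSpread`.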
Final Algorithm: Recursive Market Maker Strategy.


Here is working code. Anyone who wants to improve the simulator, please do, and share your suggestions. Thank you.

Code
#define MIN_ORDERS 5  
#define MAX_ORDERS 15  
#define MIN_ORDER_SIZE 5  
#define MAX_ORDER_SIZE 100  
#define VOLATILITY 0.09  //  Slightly increased to ensure better price fluctuations
#define MAX_DEPTH 10  

void initializeRandomSeed() {
    seed(random() * timer());
}

int randomInt(int min, int max) {
    return clamp(min + (int)((max - min + 1) * random()), min, max);
}

double clamp(double x, double min, double max) {
    return ifelse(x < min, min, ifelse(x > max, max, x));
}

//  Ensures Order Sizes Are Always in a Valid Range
int generateOrderSize(int maxSize) {
    double randFactor = random();
    int size = MIN_ORDER_SIZE + (int)((randFactor * randFactor * (maxSize - MIN_ORDER_SIZE)) * (0.9 + random() * 0.5));  
    return clamp(size, MIN_ORDER_SIZE, MAX_ORDER_SIZE);
}

//  Fully Randomized Order Prices Within a Safe Range
double randomizePrice(double baseValue, double volatility) {
    double direction = ifelse(random() < 0.5, -1, 1);  
    double priceChange = direction * (random() * volatility * 18 + random() * 12);  
    double newPrice = baseValue + priceChange;
    return clamp(newPrice, 50, 200);
}

typedef struct Order {
    double price;
    int size;
    double spread;  
    int parentIndex;  
} Order;

//  Ensures Safe Order Creation with Valid Values
void addOrder(Order* orderBook, int index, double price, int size, int totalOrders) {
    if (index >= totalOrders) return;  

    price = randomizePrice(price + random() * 5, VOLATILITY * (0.8 + random() * 0.6));  
    size = generateOrderSize(MAX_ORDER_SIZE);
    double spread = clamp(0.007 + random() * 0.025, 0.007, 0.04);  //  Spread cannot be negative

    orderBook[index].price = price;
    orderBook[index].size = size;
    orderBook[index].spread = spread;

    if (index > 0)
        orderBook[index].parentIndex = index - 1;  
    else
        orderBook[index].parentIndex = -1;  

    print(TO_LOG, " Order Added: Index %d | Price: %.4f | Size: %d | Spread: %.5f", index, price, size, spread);

    if (random() < 0.75 && index + 1 < totalOrders) {  
        int newSize = generateOrderSize(size * (0.8 + random() * 0.3));
        addOrder(orderBook, index + 1, price - 0.01 * randomInt(1, 12), newSize, totalOrders);
    }
}

//  Ensures Safe and Realistic Spread Calculation
double calculateSpread(Order* orderBook, int index) {
    if (index < 0 || index >= MAX_ORDERS) return 0.007;  //  Prevent invalid indices

    double baseSpread = clamp(0.007 + random() * 0.02, 0.007, 0.04);  
    double hftPressure = ifelse(random() < 0.5, 0.002 + (random() - 0.5) * 0.005, 0.001);  
    double volatilityFactor = (random() - 0.5) * 0.03;  

    double spread = baseSpread + (0.025 - orderBook[index].price * 0.0003) - hftPressure + volatilityFactor;
    return clamp(spread, 0.007, 0.04);  
}

//  Main Trading Simulation Function
void run() {
    set(LOGFILE);  
    Verbose = 2;

    initializeRandomSeed();

    int totalOrders = randomInt(MIN_ORDERS, MAX_ORDERS);  //  Ensures a dynamic number of orders

    static Order orderBook[MAX_ORDERS];  

    int i;
    for (i = 0; i < totalOrders; i++) {  
        orderBook[i].price = clamp(100.0 + random() * 8, 50, 200);  
        orderBook[i].size = generateOrderSize(MAX_ORDER_SIZE);  
        orderBook[i].spread = clamp(0.007 + random() * 0.02, 0.007, 0.04);  
        orderBook[i].parentIndex = -1;  
    }

    addOrder(orderBook, 0, 100.00 + random() * 6, generateOrderSize(85), totalOrders);

    vars SpreadSeries = series(0);
    vars PriceSeries = series(0);

    for (i = 1; i <= MAX_DEPTH; i++) { 
        int orderIndex = randomInt(0, totalOrders - 1);  //  Ensures valid index selection
        double spread = calculateSpread(orderBook, orderIndex);  
        SpreadSeries[0] = spread;
        PriceSeries[0] = orderBook[orderIndex].price;

        plotBar("Spreads", i, spread * 10000, 1, SUM + BARS, RED);
        plot("Price", PriceSeries[0], LINE, BLUE);
        plotBar("Order Sizes", i, orderBook[orderIndex].size, 1, SUM + BARS, GREEN);

        print(TO_LOG, " Depth %d: Order Index = %d | Spread = %.5f | Price = %.4f | Order Size = %d",
              i, orderIndex, spread, PriceSeries[0], orderBook[orderIndex].size);
    }
}

Last edited by TipmyPip; 01/31/25 14:38.
Risk Diversification in Portfolio Optimization [Re: TipmyPip] #488581
01/31/25 10:58
Joined: Sep 2017
Posts: 146
TipmyPip Offline OP
Member
Risk Diversification in Portfolio Optimization Using XOR-Based Asset Selection

In financial markets, we want to select two assets from a given portfolio that exhibit the highest volatility divergence while maintaining decorrelation to maximize risk-adjusted returns.

Mathematical Formulation
1. Portfolio Assets Representation Let 𝐴 be a set of assets in a portfolio:

𝐴 = {𝑎1, 𝑎2, ..., 𝑎𝑛}

Each asset 𝑎𝑖 is associated with a historical return vector 𝑅𝑖, where:

𝑅𝑖 = (𝑟𝑖,1, 𝑟𝑖,2, ..., 𝑟𝑖,𝑇) for 𝑇 time periods.

Each asset return sequence is represented in binary encoding (Bitwise representation of normalized returns), denoted by:

𝐵𝑖 = 𝑓(𝑅𝑖), 𝐵𝑖 ∈ {0, 1, ..., 2^𝑇 − 1}, where 𝑓(𝑅𝑖) is a transformation that converts returns into binary sequences.
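The post does not pin down 𝑓; one minimal choice (a sketch only, reusing the same 0.05 threshold that the lite-C `convertReturnsToBinary` below uses) maps each period's return to one bit and packs the bits into an integer:

```python
def returns_to_bits(returns, up=0.05):
    """One possible encoding f(R_i): bit t is 1 when the return in
    period t exceeds `up`, 0 otherwise; bit 0 is the first period."""
    b = 0
    for t, r in enumerate(returns):
        if r > up:
            b |= 1 << t
    return b

print(returns_to_bits([0.10, -0.02, 0.07, -0.08]))  # bits 0 and 2 set -> 5
```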

2. Objective: Find the Maximum XOR Pair We define the XOR distance metric between two asset return sequences:

XOR(𝐵𝑖,𝐵𝑗) = 𝐵𝑖⊕𝐵𝑗
​
where ⊕ represents the bitwise XOR operation.

The objective is to maximize the XOR value over all pairs (𝑖, 𝑗):

(𝑖*, 𝑗*) = arg max {𝑖, 𝑗 ∈ 𝐴, 𝑖 ≠ 𝑗} XOR(𝐵𝑖, 𝐵𝑗)

subject to: Corr(𝑅𝑖*, 𝑅𝑗*) < 𝜏

where Corr(𝑅𝑖,𝑅𝑗) is the correlation between asset return sequences, and 𝜏 is a pre-defined correlation threshold.

Computational Finance Insight
The higher the XOR value, the greater the return divergence between the two assets.
This ensures that choosing assets based on MaxXor selects assets that move in highly uncorrelated ways, which improves risk diversification.
The problem can be efficiently solved using a Trie-based MaxXor algorithm in O(N·B) time, where B is the number of bits per value (effectively O(N) for fixed-width integers), instead of the O(N²) brute force.
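As a compact cross-check of the Trie idea (dictionary nodes standing in for the lite-C `TrieNode` structs), the maximum-XOR pair search fits in a few lines of Python:

```python
def max_xor_pair(values, bits=32):
    """Trie-based maximum-XOR pair: O(N * bits) instead of O(N^2)."""
    trie = {}

    def insert(x):
        node = trie
        for i in range(bits - 1, -1, -1):
            node = node.setdefault((x >> i) & 1, {})

    def query(x):
        node, best = trie, 0
        for i in range(bits - 1, -1, -1):
            b = (x >> i) & 1
            if (1 - b) in node:      # prefer the opposite bit to maximize XOR
                best |= 1 << i
                node = node[1 - b]
            else:
                node = node[b]
        return best

    insert(values[0])
    best = 0
    for v in values[1:]:
        best = max(best, query(v))   # best XOR against all earlier values
        insert(v)
    return best

print(max_xor_pair([3, 10, 5, 25, 2, 8]))  # -> 28 (from the pair 5 ^ 25)
```

Each value is queried against the values inserted so far and then inserted itself, so every pair is implicitly considered exactly once.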

Here is an example for the above problem. For all those who didn't have the patience for my childish play at producing proper lite-C code, I promise you: my childish play is improving very fast.

Code
#define INT_BITS 32
#define MAX_ASSETS 1000  

typedef struct TrieNode {
    struct TrieNode* bit[2];
} TrieNode;

// Dynamically Allocate Trie Nodes
TrieNode* newTrieNode() {
    TrieNode* newNode = (TrieNode*)malloc(sizeof(TrieNode));  
    if (!newNode) {
        printf("\n Memory allocation failed in newTrieNode()");
        return NULL;
    }
    newNode->bit[0] = NULL;
    newNode->bit[1] = NULL;
    return newNode;
}

// Free Trie Memory After Use
void freeTrie(TrieNode* root) {
    if (!root) return;
    freeTrie(root->bit[0]);
    freeTrie(root->bit[1]);
    free(root);
}

// Find the Highest Bit Position in the Dataset
int findHighestBit(int* numbers, int size) {
    int highestBit = 0;
    int i;
    for (i = 0; i < size; i++) {
        int num = numbers[i];
        int bitPos = 0;
        while (num) {
            bitPos++;
            num >>= 1;
        }
        if (bitPos > highestBit) highestBit = bitPos;
    }
    return highestBit - 1;  // Only use as many bits as needed
}

// Insert a Number into the Trie
void insert(TrieNode* root, int number, int highestBit) {
    if (!root) {
        printf("\n Error: Root is NULL in insert()! Skipping...\n");
        return;
    }

    TrieNode* current = root;
    int i;
    for (i = highestBit; i >= 0; i--) {
        int bit = (number >> i) & 1;
        if (!current->bit[bit]) {
            current->bit[bit] = newTrieNode();
            if (!current->bit[bit]) {
                printf("\n Error: Trie Node Creation Failed at Bit %d! Skipping...\n", i);
                return;
            }
        }
        current = current->bit[bit];
    }
}

// Find the Maximum XOR for a Given Number
int findMaxXOR(TrieNode* root, int number, int highestBit) {
    TrieNode* current = root;
    int maxXOR = 0;
    int i;
    for (i = highestBit; i >= 0; i--) {
        int bit = (number >> i) & 1;
        if (current->bit[1 - bit]) {
            maxXOR |= (1 << i);
            current = current->bit[1 - bit];
        } else {
            current = current->bit[bit];
        }
    }
    return maxXOR;
}

// Brute Force XOR Calculation
void maxXorBruteForce(int* assetReturns, int size) {
    int maxXOR = 0, best1 = 0, best2 = 0;
    int i, j;
    var start = timer();  // Start Timing

    for (i = 0; i < size; i++) {
        for (j = i + 1; j < size; j++) {
            int currentXOR = assetReturns[i] ^ assetReturns[j];
            if (currentXOR > maxXOR) {
                maxXOR = currentXOR;
                best1 = assetReturns[i];
                best2 = assetReturns[j];
            }
        }
    }

    var execTime = (timer() - start) * 1000;  // End Timing
    printf("\n Brute Force XOR: (%d, %d) -> XOR: %d | Time: %.3f ms", best1, best2, maxXOR, execTime);
}

// Optimized Max XOR Function (Trie)
void maxXorOptimized(int* assetReturns, int size) {
    TrieNode* root = newTrieNode();
    if (!root) return;

    int highestBit = findHighestBit(assetReturns, size);
    insert(root, assetReturns[0], highestBit);

    int maxXOR = 0, best1 = 0, best2 = 0;
    int i;
    var start = timer();  // Start Timing

    for (i = 1; i < size; i++) {
        int currentXOR = findMaxXOR(root, assetReturns[i], highestBit);
        if (currentXOR > maxXOR) {
            maxXOR = currentXOR;
            best1 = assetReturns[i];
            best2 = best1 ^ maxXOR;
        }
        insert(root, assetReturns[i], highestBit);
    }

    var execTime = (timer() - start) * 1000;  // End Timing
    printf("\n Optimized Trie XOR: (%d, %d) -> XOR: %d | Time: %.3f ms", best1, best2, maxXOR, execTime);

    freeTrie(root);  // Free Memory
}

// Generate Proper Random Asset Returns
void generateRandomAssetReturns(var* assetReturns, int numAssets, int numBars) {
    int i, j;
    printf("\n Debugging Random Values Before Conversion:\n");

    for (i = 0; i < numAssets; i++) {
        vars RandomSeries = series(0);  // Create a series to maintain randomness
        
        printf("Asset %d: ", i + 1);
        
        for (j = 0; j < numBars; j++) {
            if (j == 0)
                RandomSeries[j] = random();  // First value is random
            else
                RandomSeries[j] = RandomSeries[j - 1] + random() - 0.5;  //  Follow series logic
            
            assetReturns[i * numBars + j] = RandomSeries[j];  //  Store random values
            printf("%.5f ", assetReturns[i * numBars + j]);  //  Print values for debugging
        }
        printf("\n");
    }
}

// Convert Asset Returns to Binary Representation
int convertReturnsToBinary(var* returns, int length) {
    int binaryValue = 0;
    int i;

    for (i = 0; i < length; i++) {
        if (returns[i] > 0.05) binaryValue |= (1 << i);  
        else if (returns[i] < -0.05) binaryValue |= (1 << (i + length));  
    }

    return binaryValue;
}

// Lite-C Main Function
function run() {
    if (is(INITRUN)) {
        int numAssets = 1000;
        int numBars = 5;
        int i;
        int assetBinaryReturns[MAX_ASSETS];
        var* assetReturns = (var*)malloc(numAssets * numBars * sizeof(var));

        if (!assetReturns) {
            printf("\n Memory Allocation Failed for assetReturns! Exiting...\n");
            return;
        }

        generateRandomAssetReturns(assetReturns, numAssets, numBars);

        printf("\n Debugging Binary Conversions:\n");

        for (i = 0; i < numAssets; i++) {
            assetBinaryReturns[i] = convertReturnsToBinary(&assetReturns[i * numBars], numBars);
            printf("Asset %d Binary: %d\n", i + 1, assetBinaryReturns[i]);  //  Print binary values
        }

        //  Compare Brute Force and Optimized Trie Method
        maxXorBruteForce(assetBinaryReturns, numAssets);
        maxXorOptimized(assetBinaryReturns, numAssets);

        free(assetReturns);
    }
}


For those of you who have any doubts about my progress, please run the code and see the performance:
(I believe that even if you had any doubts about my abilities, your doubts were a blessing in disguise for my progress and contribution to you. Thank you.)

Code
Asset 994 Binary: 775
Asset 995 Binary: 992
Asset 996 Binary: 992
Asset 997 Binary: 961
Asset 998 Binary: 961
Asset 999 Binary: 961
Asset 1000 Binary: 992
Brute Force XOR: (31, 992) -> XOR: 1023 | Time: 4095.300 ms
Optimized Trie XOR: (992, 31) -> XOR: 1023 | Time: 488.400 ms

The Language of Symbols [Re: TipmyPip] #488588
02/02/25 17:56
TipmyPip Offline OP
Member
Joined: Sep 2017
Posts: 146

The Paradox of Uncertainty: A Symbolic Conundrum

In a realm where every asset is an abstract entity, denote each asset by 𝐴ᵢ, where 𝑖 ∈ {1, 2, …, 𝑁}. Each asset 𝐴ᵢ possesses an intrinsic risk quantified by a parameter σᵢ². Rather than being mere numbers, these σᵢ² values embody the very essence of uncertainty, capturing the variability inherent in each asset’s behavior.

The challenge is to allocate a finite resource among these assets using weights 𝑤ᵢ. Intuitively, assets with lower uncertainty should be favored. One natural (yet abstract) idea is to consider the “attractiveness” of an asset as inversely related to its risk—that is, proportional to 1⁄σᵢ². In a simplified view, one might define the allocation by

  𝑤ᵢ = (1⁄σᵢ²) ⁄ ∑ⱼ (1⁄σⱼ²).

Here, ∑ⱼ (1⁄σⱼ²) represents the aggregated “stability” measure across all assets, ensuring that the 𝑤ᵢ sum to unity.

However, in our abstract universe the σᵢ² are not static. They fluctuate over time and may be influenced by interdependencies among assets. Let δᵢⱼ denote the interdependence (or abstract correlation) between assets 𝐴ᵢ and 𝐴ⱼ. These δᵢⱼ may alter the natural hierarchy suggested by the simple inversion 1⁄σᵢ², introducing nonlinearities or “dissonances” that complicate the picture.

Your challenge is as follows:

Conceptual Reflection:
Without resorting to concrete numerical examples, discuss the theoretical rationale for favoring assets with lower σᵢ² in your allocation. How does the inversion 1⁄σᵢ² serve as a symbolic beacon of stability in a world of uncertainty?

Dynamic Interdependencies:
Reflect on the implications of introducing δᵢⱼ into the mix. In what abstract ways might these interdependencies (δᵢⱼ ≠ 0) challenge the naïve allocation suggested by the pure inversion? Consider how these relationships could be integrated into the symbolic framework without losing the simplicity of the initial idea.

Towards an Adaptive Framework:
Propose a conceptual framework—or a set of guiding principles—that reconciles the symbols σᵢ², 1⁄σᵢ², 𝑤ᵢ, and δᵢⱼ. How might the interplay between individual asset uncertainty (σᵢ²), its inversion, and the overall aggregation (∑) be modified to create an allocation strategy that is both resilient and adaptive in a dynamic, interdependent environment?

Philosophical and Computational Contemplation:
Finally, ponder the trade-offs between the simplicity of the symbolic inversion and the complexity introduced by evolving risk interdependencies. How does this abstract approach help—or hinder—our understanding of risk management in a modern financial setting?

Your answer should weave these symbols and abstract concepts into a coherent narrative, demonstrating deep insight into both the theoretical and computational aspects of resource allocation under uncertainty.

Here we go:

Reflections on the Paradox of Uncertainty
In our abstract universe of assets, where each entity 𝐴ᵢ is imbued with its own mysterious risk measure, σᵢ², we find ourselves confronted with a challenge that is as much philosophical as it is computational. The proposition of allocating a finite resource via weights 𝑤ᵢ—where 𝑤ᵢ is ideally proportional to 1⁄σᵢ²—serves as a metaphor for the eternal struggle between order and chaos, stability and volatility. Yet, as we peer deeper into this symbolic landscape, we quickly discover that the seemingly simple inversion of risk is beset by twists and nuances that complicate the matter beyond mere arithmetic.

1. The Lure of the Inversion: A Beacon of Stability
At first glance, the notion that an asset’s attractiveness should be proportional to 1⁄σᵢ² is both elegant and seductive. In our abstract setting, each σᵢ² is more than a mere statistic—it represents the inherent uncertainty of asset 𝐴ᵢ. A low σᵢ² signals that 𝐴ᵢ is relatively stable, a quiet harbor in the storm of market fluctuations. When we take the reciprocal, 1⁄σᵢ², we transform this quiet stability into a luminous beacon, suggesting that the less erratic the asset, the more capital it deserves. The very act of inversion is symbolic: it turns the traditional fear of risk on its head, proclaiming that risk, when low, becomes a virtue to be rewarded.

In our symbolic formula, 𝑤ᵢ = (1⁄σᵢ²) / ∑ⱼ (1⁄σⱼ²), the denominator—∑ⱼ (1⁄σⱼ²)—acts as an aggregator, a cosmic ledger that tallies the “stability credits” of all assets in the universe. Here, every asset’s inverted risk is pooled, and each weight 𝑤ᵢ is essentially a fraction of this total stability. In this sense, the inversion functions as a kind of alchemy: converting uncertainty into a measure of desirability. This transformation is our first brush with the abstract—a mechanism that, on the surface, appears to neatly order the chaotic fabric of financial markets.

2. The Dynamic Dance: When Uncertainty Pulsates
Yet, as we delve deeper, the simplicity of the inversion is threatened by the restless nature of uncertainty. The σᵢ² values, far from being immutable, pulse and evolve over time. They are not isolated monoliths but are intertwined with the ebb and flow of market forces. In this dynamic environment, the very stability signaled by a low σᵢ² today may be undermined by hidden currents tomorrow.

To complicate matters further, consider the notion of interdependence, symbolized by δᵢⱼ. These terms represent the subtle, often non-linear interactions between pairs of assets 𝐴ᵢ and 𝐴ⱼ. In our idealized allocation formula, we imagine that each asset’s risk is assessed in isolation. But in the real—and abstract—world, risks do not exist in a vacuum. The fortunes of one asset can be inexplicably linked to those of another; their uncertainties may coalesce, diverge, or even counterbalance one another.

Imagine that δᵢⱼ is not zero. Instead, each pair of assets is connected by invisible threads of correlation, anti-correlation, or even some exotic non-linear relationship that defies simple categorization. The existence of these δᵢⱼ values introduces a profound twist to our original intuition. Now, the straightforward inversion 1⁄σᵢ² might no longer be the best beacon of stability. For instance, an asset that appears stable in isolation (a low σᵢ²) might be entangled in a network of interdependencies that amplify its effective risk. In such a scenario, simply rewarding it with a high allocation could be akin to ignoring a hidden danger lurking beneath the surface.

Thus, the interplay between σᵢ² and δᵢⱼ forces us to confront a deeper question: How do we reconcile the raw, individual measure of uncertainty with the emergent, collective behavior that arises from interdependence? It is here that our mind must wander beyond the confines of a neat formula and embrace a more nuanced, adaptive perspective.

3. Toward an Adaptive Framework: Reconciling Symbols and Reality
In light of the evolving nature of σᵢ² and the confounding effects of δᵢⱼ, one might ask: Is there a way to refine our allocation strategy so that it remains resilient amid dynamic uncertainty? One theoretical perspective is to allow the allocation weights, 𝑤ᵢ, to be determined not by a static inversion, but by an adaptive mechanism that continuously updates in response to changing risk measures and interdependencies.

Imagine a framework where, rather than a single snapshot inversion, the allocation is derived from an iterative process. In this process, the “raw” inversion 1⁄σᵢ² serves as an initial guess—a first approximation of each asset’s attractiveness. Then, through an iterative refinement procedure, the interdependencies δᵢⱼ are gradually incorporated, adjusting the weights until a balanced equilibrium is reached. In this adaptive view, 𝑤ᵢ is not fixed; it is a function that evolves over time, responsive to both the inherent risk of each asset and the shifting tapestry of their relationships.

Consider a metaphor: In a crowded ballroom, each dancer (asset 𝐴ᵢ) has a unique rhythm (σᵢ²), and their movements are subtly influenced by the proximity and motions of others (δᵢⱼ). A static allocation might assign dance partners based solely on each dancer’s individual rhythm, but true harmony is achieved only when one accounts for the interplay between dancers. The adaptive mechanism is akin to an ongoing choreography—a dynamic balancing act where each dancer adjusts their steps in response to the group, leading to a harmonious performance.

This adaptive perspective does not offer a neat, closed-form solution; rather, it invites computational exploration. Techniques such as iterative optimization, simulation-based adjustments, or even heuristic learning methods can be employed to “fine-tune” the allocations. The idea is to allow the system to evolve, to learn from the interplay between individual risk measures and interdependencies, until it settles into an equilibrium that is robust to the turbulence of uncertainty.

4. Computational Reflections: Bridging Abstraction and Practice
In the realm of computational finance, the abstract ideas we have discussed must eventually be translated into algorithms that run on high-speed computers. The inversion 1⁄σᵢ² is computationally trivial, but when interdependencies δᵢⱼ enter the fray, the problem quickly becomes non-linear and high-dimensional. Computational strategies such as iterative optimization and Monte Carlo simulations provide one way forward. These techniques allow us to simulate many “what if” scenarios, thereby gaining insight into how the abstract symbols—σᵢ², 1⁄σᵢ², wᵢ, δᵢⱼ—interact over time.

Imagine an algorithm that periodically re-estimates the σᵢ² values from recent market data, recalculates the raw inversions, and then adjusts the weights using a feedback loop that accounts for measured correlations (δᵢⱼ) among assets. In each iteration, the system “learns” from the market’s latest behavior, nudging the allocations toward a state where the overall portfolio risk is minimized while maintaining a healthy diversity of exposure. Such a strategy is computationally intensive, yet it reflects the true complexity of the market—an environment in which risk is a moving target and interdependencies are the norm rather than the exception.

From a computational perspective, one might also consider heuristic approaches—algorithms that do not guarantee a global optimum but can find “good enough” solutions in a reasonable time frame. These heuristics may incorporate techniques from machine learning, such as reinforcement learning, where the algorithm is rewarded for achieving a balanced portfolio over time. The key is that the algorithm learns to interpret the abstract symbols in a way that is both adaptive and resilient, even if it cannot fully eliminate the inherent uncertainty.

5. Philosophical Musings: The Trade-Off Between Simplicity and Realism
At its core, the abstract inversion of risk—allocating resources in proportion to 1⁄σᵢ²—offers a powerful and elegant idea. It promises simplicity: a single, transparent rule that transforms raw uncertainty into a measure of desirability. Yet, as we have seen, this simplicity is illusory in a world where uncertainty is dynamic and interdependent. The introduction of δᵢⱼ shatters the neat ordering, forcing us to acknowledge that real markets are a tangled web of correlations, contagions, and non-linear effects.

This tension between simplicity and realism is at the heart of computational finance. On one hand, simple models provide clarity and computational efficiency; they are the first stepping stones in our journey to understand market behavior. On the other hand, the very complexity of financial markets demands that we move beyond these simplified models and embrace the full spectrum of uncertainty. The challenge is to balance these opposing forces—maintaining the elegance of symbolic abstraction while not oversimplifying the reality that those symbols represent.

In a twisted sense, the elegance of the inversion formula is both its strength and its weakness. Its beauty lies in its ability to distill the multifaceted nature of risk into a single reciprocal value. However, when confronted with the messy, dynamic interplay of real-world factors, this beauty becomes a starting point rather than an end. The real challenge is to build upon this elegant core with additional layers of adaptation and feedback—essentially, to let the symbolic inversion evolve into a living system that continuously learns and adjusts.

6. Synthesis: A Harmonious, Yet Uncertain, Vision
To summarize, the journey from the simple inversion 1⁄σᵢ² to an adaptive allocation strategy that incorporates interdependencies (δᵢⱼ) is a voyage from abstraction to complexity—a transformation that mirrors the evolution of markets themselves. In our symbolic universe, each asset 𝐴ᵢ, with its risk measure σᵢ², is not an island but a node in a vast network of uncertainty. The raw inversion of risk offers an initial, illuminating insight: lower volatility should command greater allocation. Yet, the presence of nonzero δᵢⱼ introduces a twist—a reminder that the interplay of market forces is inherently non-linear and that risk, once abstracted, may reveal hidden layers of complexity.

From a computational finance perspective, this twisted vision challenges us to design algorithms that are both simple in their core idea and sophisticated in their execution. It is not enough to merely compute 1⁄σᵢ² and normalize; one must also account for the evolving correlations among assets, adapt the allocations in real time, and embrace the inherent uncertainty that defies static modeling. The result is a dynamic, iterative process—a dance of numbers and symbols that seeks to reconcile the opposing forces of stability and volatility.

In our final reflection, we recognize that the true power of this symbolic framework lies not in providing definitive answers but in inspiring questions. How do we quantify uncertainty in a world that is perpetually in flux? How can we design allocation strategies that are both robust and agile, capable of withstanding the shocks of market turbulence while seizing fleeting opportunities? And, perhaps most intriguingly, can we ever capture the full complexity of financial markets within the elegant simplicity of a symbolic formula?

The answer, as in much of computational finance, is that we must always be prepared to revise our models, challenge our assumptions, and embrace the twists and turns of an unpredictable universe. The symbolic inversion of risk is a starting point—a beacon that illuminates the path forward, even as it reminds us that the journey is as important as the destination. (It is working code. Use it wisely, because it can change your present future... :-)

Code
#define MIN_ORDERS 5  
#define MAX_ORDERS 15  
#define MIN_ORDER_SIZE 5  
#define MAX_ORDER_SIZE 100  
#define BASE_VOLATILITY 0.09  
#define MAX_DEPTH 10  
#define MAX_RETURNS 10  //  Track the last 10 returns

double eurusd_returns[MAX_RETURNS];  //  Store past returns
int return_index = 0;                //  Index for updating returns

//  Function to Calculate Variance of EUR/USD Returns
double calculate_variance(double* returns, int n) {
    double mean = 0.0;
    int i;
    for (i = 0; i < n; i++) {  
        mean += returns[i];
    }
    mean /= n;
    
    double variance = 0.0;
    for (i = 0; i < n; i++) {  
        variance += pow(returns[i] - mean, 2);
    }
    return variance / n;
}

//  Compute Inverse-Variance Portfolio Weight
double compute_inverse_variance_weight(double variance) {
    return ifelse(variance > 0, 1.0 / variance, 1.0);  // Prevent division by zero
}

//  Initialize Random Seed
void initializeRandomSeed() {
    seed(random() * timer());
}

//  Generate Random Integer
int randomInt(int min, int max) {
    return clamp(min + (int)((max - min + 1) * random()), min, max);
}

//  Clamp Values to a Range
double clamp(double x, double min, double max) {
    return ifelse(x < min, min, ifelse(x > max, max, x));
}

//  Generate Order Size as a Random Value between MIN_ORDER_SIZE and MAX_ORDER_SIZE
int generateOrderSize(int maxSize) {
    return randomInt(MIN_ORDER_SIZE, maxSize);
}

//  Generate a Random Spread between 0.002 and 0.009
double updateSpread(double baseSpread, int orderIndex) {
    // Directly generate a random spread within the prescribed limits.
    double newSpread = 0.002 + random() * (0.009 - 0.002);
    return clamp(newSpread, 0.002, 0.009);
}

//  Struct for Order Book
typedef struct Order {
    double price;
    int size;
    double spread;  
    int parentIndex;  
} Order;

//  Ensure Unique Spread by Adjusting Until Different from Previous Order
void addOrder(Order* orderBook, int index, double price, int totalOrders) {
    if (index >= totalOrders)
        return;  

    price = price + (random() * 5);  
    int size = generateOrderSize(MAX_ORDER_SIZE);
    // Generate a random spread within 0.002 to 0.009.
    double spread = 0.002 + random() * (0.009 - 0.002);
    spread = clamp(spread, 0.002, 0.009);

    // If there's a previous order, regenerate the spread until it differs.
    if (index > 0) {
        while (spread == orderBook[index - 1].spread) {
            spread = 0.002 + random() * (0.009 - 0.002);
            spread = clamp(spread, 0.002, 0.009);
        }
    }

    orderBook[index].price = price;
    orderBook[index].size = size;
    orderBook[index].spread = spread;
    orderBook[index].parentIndex = ifelse(index > 0, index - 1, -1);

    print(TO_LOG, " Order Added: Index %d | Price: %.4f | Size: %d | Spread: %.5f", 
          index, price, size, spread);

    if (random() < 0.75 && index + 1 < totalOrders) {  
        addOrder(orderBook, index + 1, price - 0.01 * randomInt(1, 12), totalOrders);
    }
}

//  Update Returns for Variance Calculation (Simulating Market Returns)
void updateReturns(double new_return) {
    double randomFactor = 1 + (random() - 0.5) * 1.5;  // Introduce more randomness
    eurusd_returns[return_index] = new_return * randomFactor;  
    return_index = (return_index + 1) % MAX_RETURNS;
}

//  Smart Order Selection for Market Depth
int selectRandomOrder(int totalOrders) {
    int index = randomInt(0, totalOrders - 1);
    return ifelse(random() < 0.3 && index > 0, index - 1, index);
}

//  Main Trading Simulation with Updates to All Order Parameters
void run() {
    set(LOGFILE | PLOTNOW);  
    Verbose = 2;
    initializeRandomSeed();

    int totalOrders = randomInt(MIN_ORDERS, MAX_ORDERS);  
    static Order orderBook[MAX_ORDERS];  

    int i;
    // Initialize the order book with random values.
    for (i = 0; i < totalOrders; i++) {  
        orderBook[i].price = clamp(100.0 + random() * 8, 50, 200);  
        orderBook[i].size = generateOrderSize(MAX_ORDER_SIZE);  
        orderBook[i].spread = updateSpread(0.002 + random() * 0.007, i);  
        orderBook[i].parentIndex = -1;  
    }

    // Create a recursive series of orders.
    addOrder(orderBook, 0, 100.00 + random() * 6, totalOrders);

    vars SpreadSeries = series(0);
    vars PriceSeries = series(0);
    vars OrderSizeSeries = series(0);

    // Update loop: re-randomize price, size, and spread for a randomly selected order.
    for (i = 1; i <= MAX_DEPTH; i++) { 
        int orderIndex = selectRandomOrder(totalOrders);  

        // Update price and size randomly.
        orderBook[orderIndex].price = clamp(100.0 + random() * 8, 50, 200);
        orderBook[orderIndex].size = generateOrderSize(MAX_ORDER_SIZE);

        // Update spread while ensuring it is different from the previous spread.
        double newSpread = updateSpread(orderBook[orderIndex].spread, orderIndex);
        orderBook[orderIndex].spread = newSpread;  

        SpreadSeries[0] = newSpread;
        PriceSeries[0] = orderBook[orderIndex].price;
        OrderSizeSeries[0] = orderBook[orderIndex].size;

        plotBar("Spreads", i, newSpread * 10000, 1, SUM + BARS, RED);
        plot("Price", PriceSeries[0], LINE, BLUE);
        plotBar("Order Sizes", i, OrderSizeSeries[0], 1, SUM + BARS, GREEN);

        updateReturns((random() - 0.5) * 0.005);
    }
}

Last edited by TipmyPip; 02/02/25 17:58.
Portfolio Dynamically Allocates Capita [Re: TipmyPip] #488598
02/11/25 15:37
TipmyPip Offline OP
Member
Joined: Sep 2017
Posts: 146
Puzzle:
Your performance-based portfolio strategy dynamically allocates capital across 28 currency pairs based on their distance from a smoothed equity curve.

Question:
If the EUR/USD component consistently outperforms all other pairs, its distance metric (dist) remains positive and high, while USD/JPY struggles with a negative distance.

The strategy caps any single component’s weight at 0.3 (30% of total capital).
After several runs, the Total_Dist across all pairs is 200 and EUR/USD’s distance is 90.
Can you calculate the actual capital allocation for EUR/USD, given that the total capital is $100,000?

Here is working code :-)

Code
// AlgoVars for component-specific parameters
#define dist AlgoVar[0]
#define component_weight AlgoVar[1]

// Global variables accessible by all components
var Total_Dist = 0;
var Max_Weight = 0.3;

void updateDist()
{
    /* Calculate distance metric from equity curve */
    var old_dist = dist; // Store component's previous dist value
    vars EquityCurve = series(EquityLong + EquityShort); // Create component equity curve
    vars EquityFilt = series(LowPass(EquityCurve, 100)); // Create component filtered equity curve
    dist = (EquityCurve[0] - EquityFilt[0]) * PIP; // Calculate new dist value

    if (dist <= 0)
    {
        if (old_dist > 0) Total_Dist = Total_Dist - old_dist; 
    }
    else if (dist > 0)
    {
        if (old_dist <= 0) Total_Dist = Total_Dist + dist; 
        if (old_dist > 0) Total_Dist = Total_Dist - old_dist + dist; 
    }

    // Plots
    plot("Component_Eq", EquityCurve, NEW, BLUE);
    plot("Filtered_Eq", EquityFilt, 0, RED);
}

void componentWeight()
{
    if (dist <= 0) 
    {
        Lots = 0.01; // Set the lot size to 0.01
        Margin = 0.025 * Max_Weight * Capital;
        component_weight = 0; 
    }
    else if (dist > 0)
    {
        component_weight = ifelse(Total_Dist > 0, dist / Total_Dist, 0); // Prevent division by zero
        if (component_weight > Max_Weight) component_weight = Max_Weight; // Limit max weight
        Lots = 0.01; // Turn off phantom trading
        Margin = 0.025 * component_weight * Capital; // Set margin according to weight
    }

    // Plots
    plot("dist", dist, NEW | BARS, BLUE);
    plot("Total_Dist", Total_Dist, NEW, RED);
    plot("wgt", component_weight, NEW, BLACK);
}

void tradeRSI()
{
    TimeFrame = 4;
    vars PriceH4 = series(price());
    vars Filter = series(LowPass(PriceH4, 200));

    TimeFrame = 1;
    vars PriceH1 = series(price());
    vars rsi = series(RSI(PriceH1, 14));

    int overbought = optimize(70, 60, 90, 5);
    int oversold = optimize(30, 10, 40, 5);

    Stop = 4 * ATR(100);
    Trail = Stop;
    TakeProfit = optimize(4, 1, 12, 1) * ATR(100);

    if (crossOver(rsi, overbought) && PriceH1[0] < Filter[0] && NumOpenShort == 0)
    {
        enterShort();
    }
    if (crossUnder(rsi, oversold) && PriceH1[0] > Filter[0] && NumOpenLong == 0)
    {
        enterLong();
    }
}

void tradeDigi()
{
    vars Price = series(price());
    vars filter = series(Roof(Price, 50, 100));

    Stop = optimize(3, 1, 6, 0.5) * ATR(100);
    Trail = 0.5 * Stop;
    TrailLock = 10;
    TrailSpeed = 200;

    if (valley(filter)) 
    {
        MaxLong = 1;
        enterLong();
    }
    if (peak(filter))
    {
        MaxShort = 1;
        enterShort();
    }
}

function run()
{
    set(TESTNOW | PLOTNOW | PARAMETERS);
    StartDate = 20231231;
    EndDate = 2025;
    NumWFOCycles = 10;
    BarPeriod = 60;
    LookBack = 150;
    Capital = 1000;

    // Full Asset List
    string My_Assets[28]; 
    My_Assets[0] = "EUR/USD"; My_Assets[1] = "GBP/USD"; My_Assets[2] = "USD/JPY"; 
    My_Assets[3] = "USD/CHF"; My_Assets[4] = "USD/CAD"; My_Assets[5] = "AUD/USD"; 
    My_Assets[6] = "NZD/USD"; My_Assets[7] = "EUR/GBP"; My_Assets[8] = "EUR/JPY"; 
    My_Assets[9] = "EUR/CHF"; My_Assets[10] = "GBP/JPY"; My_Assets[11] = "GBP/CHF";
    My_Assets[12] = "AUD/JPY"; My_Assets[13] = "AUD/CHF"; My_Assets[14] = "NZD/JPY";
    My_Assets[15] = "NZD/CHF"; My_Assets[16] = "CAD/JPY"; My_Assets[17] = "CAD/CHF";
    My_Assets[18] = "CHF/JPY";
    My_Assets[19] = "EUR/AUD"; My_Assets[20] = "EUR/NZD"; My_Assets[21] = "EUR/CAD";
    My_Assets[22] = "GBP/AUD"; My_Assets[23] = "GBP/NZD"; My_Assets[24] = "GBP/CAD";
    My_Assets[25] = "AUD/NZD"; My_Assets[26] = "GBP/CHF"; My_Assets[27] ="NZD/CAD";

    string My_Algos[2]; 
    My_Algos[0] = "rsi"; 
    My_Algos[1] = "digi";

    // Update dist metric and Total_Dist for all components
    int i, j;
    for (i = 0; i < 28; i++) 
    {
        for (j = 0; j < 2; j++) 
        {
            asset(My_Assets[i]); 
            algo(My_Algos[j]);
            updateDist();
        }
    }

    // Update component weights and trade
    while (asset(loop(
        "EUR/USD", "GBP/USD", "USD/JPY", "USD/CHF", "USD/CAD", "AUD/USD", "NZD/USD",
        "EUR/GBP", "EUR/JPY", "EUR/CHF", "GBP/JPY", "GBP/CHF", "AUD/JPY", "AUD/CHF", "GBP/CHF", "NZD/CAD",
        "NZD/JPY", "NZD/CHF", "CAD/JPY", "CAD/CHF", "CHF/JPY",
        "EUR/AUD", "EUR/NZD", "EUR/CAD", "GBP/AUD", "GBP/NZD", "GBP/CAD", "AUD/NZD")))
    {
        while (algo(loop("rsi", "digi")))
        {
            componentWeight();
            if (Algo == "rsi") tradeRSI();
            else if (Algo == "digi") tradeDigi();
        }
    }

    PlotWidth = 600;
    PlotHeight1 = 400;
}


(It is quite hard to develop strategies of higher complexity when the documentation of parameters and flags is limited regarding how they are to be used. It would be a considerable advantage to extend the Zorro Trader manual with detailed information on the use of these parameters.)
It is also possible to use a dynamic threshold for a more sophisticated strategy:

Code
#define dist AlgoVar[0]
#define component_weight AlgoVar[1]  // Each pair-algo has its own weight stored in AlgoVar

var Total_Dist = 0;
var Max_Weight = 0.3;
var MLsignals[8];

#define condition1 MLsignals[0]
#define condition2 MLsignals[1]
#define condition3 MLsignals[2]
#define condition4 MLsignals[3]
#define condition5 MLsignals[4]
#define component_weight_signal MLsignals[5]
#define dynamicThreshold_RSI MLsignals[6]     
#define dynamicThreshold_Digi MLsignals[7]    

var totalWeight = 0;  // Global to store total weights

void updateDist() {
    vars EquityCurve = series(EquityLong + EquityShort);
    vars EquityFilt = series(LowPass(EquityCurve, 100));
    dist = (EquityCurve[0] - EquityFilt[0]) * PIP;

    vars rsiSeries = series(RSI(series(price()), 14));
    vars atrSeries = series(ATR(100));
    condition1 = rsiSeries[0];
    condition2 = atrSeries[0];
    condition3 = EquityCurve[0];
    condition4 = EquityFilt[0];
    condition5 = dist;
    component_weight_signal = component_weight;

    if (dist > 0) Total_Dist += dist;
}

void componentWeight() {
    if (dist <= 0) {
        component_weight = 0;
    } else {
        component_weight = ifelse(Total_Dist > 0, dist / Total_Dist, 0);
        if (component_weight > Max_Weight) component_weight = Max_Weight;

        var perceptronOutput = adviseLong(PERCEPTRON+RETURNS, 2, MLsignals, 8);  
        if (perceptronOutput > 0) {
            Margin = 0.025 * component_weight * Capital * (1 + perceptronOutput / 100);
        } else {
            Margin = 0.025 * component_weight * Capital * (1 + perceptronOutput / 200);
        }
    }
    totalWeight += component_weight;  // Accumulate total weight during the loop
    plot("dist", dist, NEW | BARS, BLUE);
    plot("wgt", component_weight, NEW, BLACK);
}

void tradeRSI() {
    // note: both price series run on the base BarPeriod (60 min); the H4/H1
    // names are only labels unless TimeFrame is changed
    vars PriceH4 = series(price());
    vars Filter = series(LowPass(PriceH4, 200));
    vars PriceH1 = series(price());
    vars rsi = series(RSI(PriceH1, 14));
    var Objective = priceClose(0) - priceClose(5);  // 5-bar momentum as training objective

    // trade only in the direction of the lowpass trend filter
    if (adviseLong(DTREE+RETURNS, Objective, MLsignals, 5) > dynamicThreshold_RSI && PriceH1[0] > Filter[0]) 
        enterLong();
    if (adviseShort(DTREE+RETURNS, Objective, MLsignals, 5) > dynamicThreshold_RSI && PriceH1[0] < Filter[0]) 
        enterShort();
}

void tradeDigi() {
    vars Price = series(price());
    vars filter = series(Roof(Price, 50, 100));  // bandpass (roofing) filter
    var Objective = priceClose(0) - priceClose(5);

    // enter at swings of the filtered curve, gated by the pattern classifier
    if (valley(filter) && adviseLong(PATTERN+RETURNS, Objective, MLsignals, 5) > dynamicThreshold_Digi) 
        enterLong();
    if (peak(filter) && adviseShort(PATTERN+RETURNS, Objective, MLsignals, 5) > dynamicThreshold_Digi) 
        enterShort();
}

function run() {
    set(PARAMETERS | RULES | PLOTNOW | TESTNOW);  // train parameters and ML rules, then test
    StartDate = 20231231;
    EndDate = 2025;     // through end of 2025
    NumWFOCycles = 10;  // walk-forward analysis
    BarPeriod = 60;     // 1-hour bars
    LookBack = 150;
    Capital = 1000;

    while (asset(loop(
        "EUR/USD", "GBP/USD", "USD/JPY", "USD/CHF", "USD/CAD", "AUD/USD", "NZD/USD",
        "EUR/GBP", "EUR/JPY", "EUR/CHF", "GBP/JPY", "GBP/CHF", "AUD/JPY", "AUD/CHF", "NZD/CAD",
        "NZD/JPY", "NZD/CHF", "CAD/JPY", "CAD/CHF", "CHF/JPY",
        "EUR/AUD", "EUR/NZD", "EUR/CAD", "GBP/AUD", "GBP/NZD", "GBP/CAD", "AUD/NZD")))
    {
        while (algo(loop("rsi","digi"))) {
            updateDist();
            componentWeight();
            if (Algo == "rsi") tradeRSI();
            else if (Algo == "digi") tradeDigi();
        }
    }

    // Normalize weights after all pairs and algos are processed
    while (asset(loop(
        "EUR/USD", "GBP/USD", "USD/JPY", "USD/CHF", "USD/CAD", "AUD/USD", "NZD/USD",
        "EUR/GBP", "EUR/JPY", "EUR/CHF", "GBP/JPY", "GBP/CHF", "AUD/JPY", "AUD/CHF", "NZD/CAD",
        "NZD/JPY", "NZD/CHF", "CAD/JPY", "CAD/CHF", "CHF/JPY",
        "EUR/AUD", "EUR/NZD", "EUR/CAD", "GBP/AUD", "GBP/NZD", "GBP/CAD", "AUD/NZD")))
    {
        while (algo(loop("rsi","digi"))) {
            component_weight = component_weight / totalWeight;  // Normalize
            plot(strf("Weight_%s_%s", Asset, Algo), component_weight, NEW, RED);
        }
    }

    PlotWidth = 600;
    PlotHeight1 = 400;
}
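One caveat with the normalization pass above: component_weight and totalWeight are single globals shared by every asset/algo component, and totalWeight is never reset between bars, so the second loop repeatedly divides one stale value by an ever-growing sum rather than normalizing each component's own weight. A minimal sketch of a per-component version, assuming Zorro's per-component AlgoVar array is used to persist each weight (the asset lists and functions are the same as above):

```c
// store each component's raw weight where it survives the component loop
void componentWeight()
{
    if (dist <= 0)
        component_weight = 0;
    else {
        component_weight = ifelse(Total_Dist > 0, dist/Total_Dist, 0);
        if (component_weight > Max_Weight) component_weight = Max_Weight;
    }
    AlgoVar[0] = component_weight;  // AlgoVar is kept per asset/algo component
    totalWeight += component_weight;
}

// inside run(), once per bar:
totalWeight = 0;  // reset before the component loop, not once per test
while (asset(loop( /* same asset list as above */ )))
while (algo(loop("rsi","digi"))) {
    updateDist();
    componentWeight();
    // ... trade functions as above
}

// second pass: normalize each component's stored weight
while (asset(loop( /* same asset list as above */ )))
while (algo(loop("rsi","digi")))
    if (totalWeight > 0)
        AlgoVar[0] /= totalWeight;
```

This is only a sketch of the bookkeeping, not a drop-in replacement; the Margin/perceptron logic from the original componentWeight would stay in place around it.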

Last edited by TipmyPip; 02/14/25 17:37.