How to Build a Trading Robot (Part 1): Core Architecture of an Algorithmic Trading System
“Advisor” vs “Robot”
Recently a friend reached out - he trades crypto as a hobby - and asked me to “write a trading robot” for him.
This is where terminology matters. In casual conversation people call all of this “robots”, but in practice what you usually want first is a trading advisor.
An advisor analyzes the market and produces recommendations: where to enter, where to place a stop, and where it’s better to do nothing.
A robot takes the next step for you - and opens/closes positions on its own.
And that’s why “let’s go straight to a robot” is usually a bad idea. Until the algorithm is genuinely tested, the probability of losing money is at its highest: a mistake in analysis, a bug in logic, wrong candle or order handling - and the robot will calmly do exactly what you would never do by hand.
So the sane path is almost always the same: start with an advisor (analysis + signals), then automate execution - and only once you have statistics and risk controls in place.
What This Article Is About: Advisor Architecture
No “signals that print money” and no return promises - just theory: what a trading advisor is made of, what its architecture looks like, and why clear responsibility boundaries matter.
We won’t dive deep into analysis algorithms - that’s not the goal here. The goal is to show which components an advisor usually has and how they connect.
For the “brain” that interprets data and produces conclusions, today people often pick off-the-shelf LLM services: ChatGPT, Gemini, Grok. You can fine-tune or replace them with your own model, but architecturally that’s secondary.
The nuance stays the same: the model can’t read your mind. It needs a prompt - clear, structured, and repeatable.
A prompt is the text instruction (plus context) you send to the model: what data it is looking at, what you want it to do, and what format the answer must follow.
And of course you need the data you’ll be working with.
The key question that follows is: what exactly should go into the prompt (candles, indicators, order book, trades, risk context) and in what shape.
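To make this concrete, here is a minimal sketch of a structured, repeatable prompt builder in Python. All field names (`candles`, `indicators`, `risk`, `answer_format`) are illustrative assumptions, not a fixed schema - the point is that the same inputs should always produce the same prompt text:

```python
import json

def build_prompt(candles, indicators, risk_context):
    """Assemble a structured prompt payload for the model.

    Field names here are illustrative, not a standard schema.
    """
    payload = {
        "task": "Analyze the market data and suggest entry, stop, or no-trade.",
        "candles": candles,        # e.g. rows of [time, open, high, low, close, volume]
        "indicators": indicators,  # e.g. {"rsi_14": 41.2, "ema_50": 67250.0}
        "risk": risk_context,      # e.g. {"max_loss_pct": 1.0}
        "answer_format": {"action": "buy|sell|hold", "stop": "price", "why": "one sentence"},
    }
    # Serialize with sorted keys so identical inputs always yield identical prompts
    return json.dumps(payload, sort_keys=True)

prompt = build_prompt(
    candles=[[1700000000, 67000.0, 67500.0, 66800.0, 67400.0, 1234.5]],
    indicators={"rsi_14": 41.2},
    risk_context={"max_loss_pct": 1.0},
)
```

Determinism matters here: if the prompt wording drifts from call to call, you can’t compare the model’s answers across time or collect meaningful statistics.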
Data and API
We’ll talk about prompts later. For now - data.
The entry-level flow is simple:
- Fetch price data (quotes).
- Combine it with the request context.
- Send it into the model prompt - via the API.
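The three steps above can be sketched as a single pipeline function. `fetch_quotes` and `call_model` are stand-ins (assumed names) for a real exchange client and a real LLM API call:

```python
def fetch_quotes(symbol):
    # Stand-in for an exchange API call; returns recent candles.
    return [[1700000000, 67000.0, 67500.0, 66800.0, 67400.0, 1234.5]]

def call_model(prompt):
    # Stand-in for an LLM API call; returns the model's raw answer.
    return '{"action": "hold", "why": "no clear setup"}'

def analyze(symbol, user_context):
    candles = fetch_quotes(symbol)                   # 1. fetch price data
    prompt = (                                       # 2. combine it with the request context
        f"Context: {user_context}\n"
        f"Candles: {candles}\n"
        "Respond with a JSON signal."
    )
    return call_model(prompt)                        # 3. send it to the model via the API

signal = analyze("BTCUSDT", "swing trading, low risk")
```

Swapping the stubs for real clients doesn’t change the shape of the flow - that separation is exactly what the rest of the architecture builds on.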
Almost every exchange exposes quotes programmatically. And yes - through an API.
Let’s pause for a minute and clarify what we mean by “API”, since we’ve already used the word several times and we’ll keep using it.
API (Application Programming Interface) is an interface for programs to communicate with each other.
In practice, it’s “the rules of the game”:
- which requests you can make (for example: get candles, get the order book);
- which data format to use (often JSON);
- what the response looks like (field structure, errors, limits, authentication).
In other words: an API lets you use an exchange (or a model) as a set of functions without caring about its internal implementation.
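As a concrete illustration, here is how a candle response from an exchange API might be parsed. The sample payload is modeled on the common “array of arrays” kline format (Binance’s `/api/v3/klines` looks like this, with prices as strings); exact fields vary per exchange, so always check its docs:

```python
import json

# Sample payload shaped like a typical klines endpoint response:
# [open_time, open, high, low, close, volume, close_time, ...], prices as strings.
raw = '[[1700000000000, "67000.0", "67500.0", "66800.0", "67400.0", "1234.5", 1700003599999]]'

candles = [
    {
        "open_time": row[0],
        "open": float(row[1]),
        "high": float(row[2]),
        "low": float(row[3]),
        "close": float(row[4]),
        "volume": float(row[5]),
    }
    for row in json.loads(raw)
]
```

Normalizing raw exchange payloads into your own dict (or dataclass) early keeps the rest of the pipeline independent of any single exchange’s field order.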
User Layer: Showing the Result
So at this stage we already have two core components:
- data (fetched from an exchange);
- an analyst (the model that receives the data and returns a conclusion/signal).
But an application that fetches and analyzes data “inside itself” is useless until a human can actually see the result.
So you need a third component - an interface.
The simplest and most universal option is a web interface: a chart, recent signals, a brief “why”, and risk settings.
As an option - messenger notifications (for example, Telegram): a short signal plus a link to details in the web UI.
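For the Telegram option, a minimal sketch of preparing a `sendMessage` call to the Telegram Bot API (a real endpoint; the signal dict and its fields are illustrative assumptions). Actually sending the request is left to whatever HTTP client you prefer:

```python
import json

TELEGRAM_API = "https://api.telegram.org/bot{token}/sendMessage"

def build_notification(token, chat_id, signal):
    """Build the URL and JSON body for a Telegram sendMessage call.

    `signal` is an assumed dict: {"action": ..., "symbol": ..., "why": ...}.
    """
    text = f"{signal['action'].upper()} {signal['symbol']} - {signal['why']}"
    url = TELEGRAM_API.format(token=token)
    body = json.dumps({"chat_id": chat_id, "text": text})
    return url, body

url, body = build_notification(
    "YOUR_BOT_TOKEN", 123456789,
    {"action": "buy", "symbol": "BTCUSDT", "why": "breakout above range"},
)
```

Keeping the message short and linking to the web UI for details matches the split described above: the messenger is for alerts, the interface is for context.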
Context Beyond Price: News
For a base version, this is enough. But if you want more depth, you can add another context layer: news.
The idea is straightforward: collect relevant news articles and include them in the prompt. Usually not as raw text, but as short summaries (so you don’t bloat context and burn tokens).
This is not a mandatory part of the architecture, but in some modes it noticeably improves advice quality: the model starts factoring in reasons behind moves, not only chart shape.
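A small sketch of the “summaries, not raw text” idea: keep only the freshest few items and truncate each one before it goes into the prompt. The article dict shape (`published`, `summary`) is an assumption for illustration:

```python
def compress_news(articles, max_items=5, max_chars=200):
    """Keep only the most recent summaries, truncated, to control prompt size.

    `articles` is an assumed list of {"published": <timestamp>, "summary": <str>} dicts.
    """
    latest = sorted(articles, key=lambda a: a["published"], reverse=True)[:max_items]
    return [a["summary"][:max_chars] for a in latest]
```

The limits are the point: every extra character of news is tokens you pay for on every analysis run, so cap both the count and the length.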
How to Tie It All Together
Okay - the modules are clear. The practical question is: how do you assemble this into a single application?
In reality you usually end up with two modes.
On-demand analysis
The user clicks a button (or calls an endpoint) → you fetch fresh quotes → build the prompt → get the model’s response → show the signal.
- Pros: cheaper in tokens/resources, easier to debug.
- Cons: you will miss events between requests - and along with them, potential entries.
Scheduled, periodic market analysis
The user sets a cadence (e.g., every 5 minutes/hour/day) → the system runs the pipeline and pushes the result.
- Pros: steady signals, you keep a “market pulse”, easier to collect statistics.
- Cons: more expensive (tokens/infra); you need limits and protection from self-inflicted damage (rate limiting, retries, signal deduplication); and you need monitoring and logging - otherwise you won’t know why everything went silent at 03:17.
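One of those protections, signal deduplication, fits in a few lines. A minimal sketch (class and method names are illustrative): skip pushing a signal if an identical one was already sent within a cooldown window:

```python
import hashlib
import time

class SignalDeduplicator:
    """Suppress repeats of an identical signal within a cooldown window."""

    def __init__(self, cooldown_sec=3600):
        self.cooldown = cooldown_sec
        self.last_sent = {}  # signal fingerprint -> timestamp of last send

    def should_send(self, signal_text, now=None):
        now = time.time() if now is None else now
        key = hashlib.sha256(signal_text.encode()).hexdigest()
        last = self.last_sent.get(key)
        if last is not None and now - last < self.cooldown:
            return False  # same signal sent too recently - drop it
        self.last_sent[key] = now
        return True
```

Without something like this, a scheduled pipeline running every 5 minutes will happily page you with the same “buy” twelve times an hour.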
In both modes, the base component set is the same:
- web server (UI and user-facing API);
- exchange data module (quotes, candles, and optionally order book/trades);
- LLM module (prompt building → model call → response parsing);
- storage (user settings, signal history, quote cache, analysis results).
And you need “glue” that ties it together: an application-level entity - call it core/service - that owns dependencies, lifecycle, and scenarios (on-demand vs scheduled).
Long-term it’s almost always worth designing for plugins:
- plugins for quote sources (add a new exchange without rewriting everything);
- plugins for LLM providers (a new model/pricing appears - swap an adapter, not the architecture).
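The plugin idea can be expressed with structural interfaces. A sketch using Python’s `typing.Protocol` - all class and method names here are assumptions, not a prescribed design:

```python
from typing import Protocol

class QuoteSource(Protocol):
    """Contract any exchange adapter must satisfy."""
    def get_candles(self, symbol: str, interval: str, limit: int) -> list: ...

class LLMProvider(Protocol):
    """Contract any model adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...

class StaticSource:
    """Trivial quote source stand-in (a real one would call an exchange API)."""
    def get_candles(self, symbol, interval, limit):
        return [[1700000000, 67000.0, 67500.0, 66800.0, 67400.0]]

class EchoLLM:
    """Trivial provider stand-in (a real one would call a model API)."""
    def complete(self, prompt):
        return f"analyzed: {prompt[:40]}"

class AdvisorCore:
    """The 'glue': owns dependencies and scenarios; plugins swap without rewrites."""
    def __init__(self, source: QuoteSource, llm: LLMProvider):
        self.source = source
        self.llm = llm

    def run_once(self, symbol: str) -> str:
        candles = self.source.get_candles(symbol, "1h", 50)
        return self.llm.complete(f"Candles for {symbol}: {candles}")

core = AdvisorCore(StaticSource(), EchoLLM())
```

Adding a new exchange or switching LLM providers then means writing one new adapter class and passing it to `AdvisorCore` - the core and the UI never change.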
From here you can “land” into implementation: interfaces between modules, data structures, and where the boundary is between an “advisor” and a “robot”.
For additional technical explanations and common implementation questions, see the related Trading Automation FAQ.