Building a Trading Robot (Part 2): Market Data Pipelines and Signal Generation

In part one we walked through the minimal architecture: quotes module → prompt building → model call → result storage → web UI and notifications. We also mentioned an optional context layer: news that can be added to the prompt as short summaries.

In part two there will be less “in general” and more practice: which data you actually need at the start, where exchange APIs bite, what to store, how not to smash into limits, and why without caching, logging, and guardrails your advisor will fail exactly when you rely on it.

What market data to start with

At the beginning you have three main classes of market data you can fetch from an exchange via API. The most common and easiest is OHLCV candles: Open/High/Low/Close plus volume per interval. This is a “compressed” view of price that’s convenient to store, cache, and pass into a model as a stable context slice (for example, the last N candles of a chosen timeframe¹).

If you want to see what happens inside a candle, you can add the trade tape (trades) — a stream of executions: price, size, time (sometimes side). This layer adds more granularity about activity and impulses and is usually used as a “finer” source than candles.

The third option is the order book: bid/ask levels and sizes at each level. These are market microstructure and liquidity data: not “what happened”, but “what is currently queued”. People add it as an extra context layer when they want to account for depth, bid/ask imbalance, and reactions around levels.

For now we’ll stick to the simplest option — OHLCV candles. In your own implementation you can add trades and the order book, but the goal of this article is educational: build a clear skeleton and avoid drowning in details.
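To make the “compressed view” idea concrete, here is a toy sketch of how trades within one interval collapse into a single OHLCV candle. The trade format, a `(timestamp, price, size)` tuple, is an assumption for illustration, not any particular exchange’s schema:

```python
# Sketch: aggregating a stream of trades into one OHLCV candle.
# Assumed trade format: (timestamp_sec, price, size).
def trades_to_candle(trades):
    """Collapse trades from one interval into an OHLCV dict."""
    prices = [price for _, price, _ in trades]
    return {
        "open": prices[0],
        "high": max(prices),
        "low": min(prices),
        "close": prices[-1],
        "volume": sum(size for _, _, size in trades),
    }

trades = [
    (1700000000, 100.0, 1.5),
    (1700000020, 101.2, 0.5),
    (1700000040, 99.8, 2.0),
]
candle = trades_to_candle(trades)
# candle["open"] == 100.0, candle["high"] == 101.2, candle["volume"] == 4.0
```

The same logic, run per interval, is essentially what the exchange does for you when it serves candles of a given timeframe.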

A unified data format and history

As soon as you start pulling quotes from more than one source, an unpleasant reality shows up: different sources return the same data in different formats. Somewhere a candle is an array of numbers, somewhere it’s an object with fields, somewhere timestamps are milliseconds, somewhere seconds — and field names and ordering can differ even across “similar” APIs.

If you don’t introduce a unified internal format, the system quickly turns into a pile of special cases: every new source brings its own parsers, exceptions, and “one-off” handling. That’s why the quotes module almost always needs a common internal representation — the minimal set required for downstream analysis. At our current level it’s typically: instrument, timeframe¹, timestamp², Open/High/Low/Close, and Volume.

It’s also worth calling out historical data. “The current market state” is nice, but without history you can’t expand analysis, test hypotheses, or even explain to a user why a signal looks the way it does. So even in a simple implementation it’s worth storing candle history (or being able to load it fast) and recording which exact inputs formed a particular signal.
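As an illustration, here is a minimal sketch of such a unified representation. Both source formats below are invented, but the mismatches they model (milliseconds vs seconds, array vs object, different field names) are exactly the ones described above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candle:
    """Unified internal candle: the minimal set downstream code relies on."""
    instrument: str
    timeframe: str
    ts: int          # candle open time, Unix seconds, UTC
    open: float
    high: float
    low: float
    close: float
    volume: float

# Two hypothetical source formats; the mismatch pattern is the point.
def from_array_source(instrument, timeframe, row):
    """Source A returns [ts_ms, open, high, low, close, volume]."""
    ts_ms, o, h, l, c, v = row
    return Candle(instrument, timeframe, ts_ms // 1000,
                  float(o), float(h), float(l), float(c), float(v))

def from_object_source(instrument, timeframe, obj):
    """Source B returns {"t": ts_sec, "o": ..., "h": ..., "l": ..., "c": ..., "vol": ...}."""
    return Candle(instrument, timeframe, int(obj["t"]),
                  float(obj["o"]), float(obj["h"]), float(obj["l"]),
                  float(obj["c"]), float(obj["vol"]))

a = from_array_source("BTCUSDT", "1h", [1700000000000, 100, 105, 99, 104, 12.5])
b = from_object_source("BTCUSDT", "1h",
                       {"t": 1700000000, "o": 100, "h": 105, "l": 99, "c": 104, "vol": 12.5})
assert a == b  # same candle regardless of the source format
```

Everything downstream (prompt building, storage, UI) only ever sees `Candle`, so a new source costs exactly one adapter function.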

Rate limits: why they matter

Almost every exchange has limits on how frequently you can call the API — rate limits. The reason is simple: if everyone could pull quotes endlessly, the API would fall over even without any DDoS.

For an advisor this is doubly important. Requests are not continuous — they run on a timer — and if you pick the wrong cadence or multiply it by the number of instruments, you’ll hit the limit very fast. The outcome is usually the same: 429 errors / key bans, gaps in data, and silence exactly when the market moves the most.

The pragmatic approach is to budget requests up front (cadence × instruments × data types), cache results, avoid fetching the same thing twice, and handle limits properly (backoff / retry after a pause instead of spamming requests).
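A rough sketch of that budgeting and backoff logic. The limit and cadence numbers are assumptions for illustration, and the `RuntimeError` stands in for whatever throttling error your HTTP client actually raises on a 429:

```python
import random
import time

# Budget check: cadence × instruments × data types vs the exchange limit.
# All numbers below are assumptions, not any real exchange's limits.
REQUESTS_PER_MIN_LIMIT = 1200   # hypothetical limit
instruments = 20
data_types = 1                  # candles only, for now
polls_per_min = 2               # one poll every 30 seconds

used = instruments * data_types * polls_per_min
assert used <= REQUESTS_PER_MIN_LIMIT, "polling plan exceeds the rate limit"

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry a throttled call with exponential backoff plus jitter,
    instead of spamming the API the moment it says no."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:  # stand-in for a 429 / throttling error
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

The budget arithmetic is worth doing on paper before writing any fetching code: it tells you immediately whether your planned cadence even fits the limit.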

Prompts and models: same API, different rules

With data fetching covered, we can move to the next level: prompts and working with models.

At first glance it looks very similar to pulling quotes: again an API, again rate limits, again you need to think about retries, timeouts, and caching. The only difference is that instead of “give me candles” your request becomes “here is the data, here is context — return a signal and an explanation”.

Then the differences begin. For an exchange the answer is a data structure; for a model it’s an interpretation. That’s why the prompt becomes a contract, not just “text”, and you should treat it like a data format: version it, validate it, constrain it, and log it.

A small feature that makes integration much easier: you can predefine the response structure in the prompt. For example, you can ask the model to return strictly valid JSON with fields like signal, confidence, and reasons. It’s not “magic”, but it disciplines the output format and lets your application parse and validate responses automatically instead of scraping free-form text.
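A hedged sketch of such validation. The field names (`signal`, `confidence`, `reasons`) follow the example above; the allowed signal values and confidence range are assumptions you would define in your own contract:

```python
import json

ALLOWED_SIGNALS = {"buy", "sell", "hold"}  # assumption: your contract's vocabulary

def parse_signal(raw: str) -> dict:
    """Validate a model response against the expected contract:
    {"signal": str, "confidence": number in [0, 1], "reasons": [str, ...]}.
    Raises ValueError on any deviation, so malformed output never
    reaches storage or the UI."""
    data = json.loads(raw)  # JSONDecodeError is a ValueError subclass
    if data.get("signal") not in ALLOWED_SIGNALS:
        raise ValueError(f"unexpected signal: {data.get('signal')!r}")
    confidence = data.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be a number in [0, 1]")
    reasons = data.get("reasons")
    if not isinstance(reasons, list) or not all(isinstance(r, str) for r in reasons):
        raise ValueError("reasons must be a list of strings")
    return data

ok = parse_signal('{"signal": "hold", "confidence": 0.62, "reasons": ["range-bound market"]}')
```

Treating a rejected response the same way as a failed API call (retry, then fail loudly) keeps one bad generation from silently corrupting your signal history.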

It’s also worth mentioning MCP (Model Context Protocol): an approach/protocol that helps standardize how models receive context and how external data sources and tools are connected. Even if today you call a specific provider API directly, keeping an MCP-like layer in mind is useful — it forces better boundaries and makes future expansion easier.

And yes — in the long run the “analyst” does not have to be an external service. If you later have resources and motivation, this module can be replaced with your own model (or a fine-tuned open-source one) without rewriting the rest of the system — as long as interfaces were separated correctly from the start.

Storage: simpler than it looks

At this point we already have at least two “buckets” of data: market candles and analysis results. Looking ahead, you’ll add news history (and summaries), plus stored prompts and model responses for debugging and reruns. In other words, there is already plenty of data, and it appears at different stages of the pipeline.

So it makes sense to think about unified storage early. The nature of this data often means you don’t need a full SQL schema: in many cases an embedded database with a simple key-value approach is enough.

A KV store (key-value) is a model where data is stored as “key → value” pairs: by key you quickly fetch the object you need (for example, candles for an instrument and timeframe¹ for a period, or an analysis result for a specific request).
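A minimal sketch of such a KV store built on `sqlite3` from the standard library, so it stays embedded with no server to run. The key scheme shown is an assumption, not a requirement; the point is that one `put`/`get` pair covers candles, signals, and prompts alike:

```python
import json
import sqlite3

class KV:
    """Minimal embedded key-value store on top of sqlite3.
    Assumed key scheme: "candles:{instrument}:{timeframe}:{period}"
    or "signal:{instrument}:{timeframe}:{request_id}"."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key, value):
        # JSON keeps values human-readable when debugging with the sqlite CLI.
        self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                        (key, json.dumps(value)))
        self.db.commit()

    def get(self, key, default=None):
        row = self.db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return json.loads(row[0]) if row else default

store = KV()
store.put("candles:BTCUSDT:1h:2024-01-01",
          [[1704067200, 100, 105, 99, 104, 12.5]])
candles = store.get("candles:BTCUSDT:1h:2024-01-01")
```

Swapping `:memory:` for a file path gives you persistence for free, and the same interface can later be backed by a dedicated KV database without touching callers.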

Interface: the minimum without which the advisor is useless

And finally — the interface. We won’t dive into UI/UX right now, but the basic point from part one still stands: you need at least a web UI, and as a nice addition — Telegram notifications.

Practically, a user needs only a few things. First, to see current signals and alerts (and quickly understand what’s happening). Second, to request analysis on demand — pull fresh data and get a signal in one click. And third, to configure the system: choose instruments and data sources, manage API keys, and — importantly — edit prompts and analysis parameters without touching code.
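As an illustration of the notification side, here is a sketch that formats a signal into a payload for Telegram’s `sendMessage` endpoint. The endpoint URL and the `chat_id`/`text` parameters match the Bot API; the token, chat id, and signal fields here are placeholders, and the actual HTTP send is left out:

```python
# Sketch: turning a stored signal into a Telegram notification payload.
# Token, chat id, and the signal dict's fields are illustrative placeholders.
def build_notification(token: str, chat_id: str, signal: dict):
    """Return (url, payload) for Telegram's sendMessage endpoint."""
    text = (f"{signal['instrument']} {signal['timeframe']}: "
            f"{signal['signal'].upper()} "
            f"(confidence {signal['confidence']:.0%})")
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    return url, {"chat_id": chat_id, "text": text}

url, payload = build_notification(
    "TOKEN", "CHAT_ID",
    {"instrument": "BTCUSDT", "timeframe": "1h",
     "signal": "hold", "confidence": 0.62},
)
# payload["text"] == "BTCUSDT 1h: HOLD (confidence 62%)"
```

Keeping message formatting separate from sending makes it trivial to test, and to reuse the same text in the web UI.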

Core: “the ring” that binds everything

Tolkien had “one ring to rule them all… and in the darkness bind them”. An advisor architecture has its own ring too — without the drama: the core that keeps modules together.

The core is not responsible for analysis itself. It owns lifecycle: start/stop, dependency initialization, a scheduler, request routing (on demand and on schedule), and clean shutdown. This is where you decide what runs when, where data is written, and how components talk to each other without turning into a knot.

If you want to add new quote sources, swap LLM providers, plug in news, and avoid rewriting everything each time, the core must be simple but strict: modules behind interfaces, dependencies explicit, configuration centralized, and lifecycle predictable.
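One way to express “modules behind interfaces” in Python is `typing.Protocol`. The sketch below is illustrative and all names are assumptions; the point is that the core wires components together and routes one cycle without knowing any implementation details:

```python
from typing import Protocol

# Interfaces the core depends on; implementations live in their own modules.
class QuoteSource(Protocol):
    def fetch(self, instrument: str, timeframe: str) -> list: ...

class Analyst(Protocol):
    def analyze(self, candles: list) -> dict: ...

class Storage(Protocol):
    def put(self, key: str, value) -> None: ...

class Core:
    """Owns wiring and routing; knows nothing about analysis itself."""

    def __init__(self, quotes: QuoteSource, analyst: Analyst, storage: Storage):
        # Dependencies are explicit: no module reaches for globals.
        self.quotes = quotes
        self.analyst = analyst
        self.storage = storage

    def run_once(self, instrument: str, timeframe: str) -> dict:
        """One cycle (on demand or from a scheduler): fetch -> analyze -> store."""
        candles = self.quotes.fetch(instrument, timeframe)
        result = self.analyst.analyze(candles)
        self.storage.put(f"signal:{instrument}:{timeframe}", result)
        return result
```

Swapping an LLM provider or adding a quote source then means writing one new class that satisfies the protocol, with the core untouched.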

This is a good place to stop and be honest: the advisor “skeleton” is already visible. Data comes in, prompts are built, the model responds, results are stored and shown — and it all stands on a core that manages lifecycle. In the next article we’ll go one level deeper into implementation details: what a unified interface for quote sources looks like, how to design caching and retries, where to store history, how to validate model outputs (including JSON), and what to do so the system doesn’t break because of limits, time, and “rare” failures.

For additional technical explanations and common implementation questions, see the related Trading Automation FAQ section dedicated to this topic.

Footnotes

  1. A timeframe is the duration of one candle (the aggregation interval): for example 1m, 5m, 1h, 1d. It defines how the raw price stream is “compressed” into OHLCV.
  2. A timestamp is the time marker (usually UTC) a candle/trade is attached to. A classic mistake is mixing seconds and milliseconds (or local time and UTC), which shifts data and breaks analysis.


Frequently Asked Questions

What market data does a trading robot need?

A trading robot primarily relies on market data streams that describe how prices evolve over time. The most common format used in trading systems is OHLCV data, which includes the open, high, low, close prices and trading volume for each time interval. This structure is simple, standardized, and widely supported by exchanges, making it the foundation of most trading pipelines. In addition to OHLCV candles, more advanced trading systems may ingest trade tape data, which represents every individual trade executed in the market. Another optional but powerful data source is the order book, which provides information about buy and sell liquidity at different price levels. Some trading systems also enrich market data with external signals, such as technical indicators, macroeconomic news, or sentiment analysis. The key architectural requirement is that all incoming data sources must be processed and transformed into a consistent internal format that downstream modules can consume reliably.

Why is historical data important for a trading robot?

Historical data is essential because every trading decision must be evaluated against past market behavior. Without historical datasets, it is impossible to test whether a trading strategy would have performed well before deploying it in a live environment. Traders and engineers rely on historical candles to run backtesting simulations, which measure profitability, drawdowns, and risk characteristics of a strategy. In addition to strategy evaluation, historical data is useful for debugging and explaining why a system generated a specific signal. For AI-driven trading systems, historical datasets can also serve as input for model training or validation experiments. Storing prompts and model responses alongside market data creates an audit trail that allows developers to analyze how decisions were produced. This is particularly important in automated systems where human supervision is limited. As a result, most trading infrastructures treat historical data as a critical asset rather than a temporary cache.

Why do trading systems need a unified internal data format?

Every exchange provides market data through its own API, and these APIs often return different field names, timestamp formats, and data structures. For example, one exchange may return timestamps in milliseconds while another uses seconds, or they may represent candles with slightly different schemas. If this raw data is passed directly into strategy or AI modules, it creates inconsistency and increases the risk of incorrect signals. To prevent this, trading systems implement a normalization layer that converts all incoming data into a unified internal schema. A normalized structure typically includes fields such as instrument, timeframe, timestamp, open, high, low, close, and volume. Once data is normalized, downstream components such as signal generation engines or machine learning models can operate without exchange-specific logic. This architectural pattern dramatically simplifies system maintenance and enables a trading robot to support multiple exchanges without rewriting core logic. In practice, normalization acts as a boundary that separates unreliable external APIs from stable internal system components.

What interface does a trading robot need?

At minimum, a trading robot should provide a dashboard that displays current positions, signals, and system status. This interface helps operators verify that the system is functioning as expected. Many developers also implement notification channels such as Telegram or email alerts for important events. For example, alerts can notify users about executed trades, unusual market conditions, or system errors. The interface does not need to be complex, especially during early development stages. A simple web panel that displays signals and logs can be sufficient. As the system grows, the interface may evolve into a more advanced monitoring platform. The key objective is to maintain transparency and allow operators to quickly detect abnormal system behavior.

Why should AI model outputs be structured?

Structured outputs ensure that trading decisions can be interpreted consistently by automated systems. When an AI model produces plain text responses, it becomes difficult for downstream components to parse and validate the results. To avoid ambiguity, trading architectures often require the model to return responses in a strict JSON format. This structure typically includes fields such as signal type, confidence score, and explanatory reasoning. Because the format is predictable, the execution system can safely extract the relevant values and trigger the appropriate actions. Structured outputs also make it easier to store and analyze model decisions later. Engineers can build dashboards or analytics pipelines that examine signal distributions, accuracy, or model performance over time. In production trading environments, structured responses are critical for maintaining reliability and traceability of automated decisions.

How are AI models integrated into a trading robot?

AI models can be integrated into a trading robot as a signal generation component that analyzes market conditions and produces trading recommendations. Instead of relying purely on deterministic indicators, an AI model can process complex patterns in market data and infer potential trends. In many modern systems, the AI module receives structured input such as recent market candles, derived indicators, or contextual information. The model then generates an output that represents a possible trading action, such as buy, sell, or hold. For automation to work reliably, the output must be structured in a machine-readable format rather than natural language. The trading system interprets this structured output and decides whether to execute a trade. AI modules are typically integrated as stateless services so they can scale independently from other components. This architecture allows engineers to experiment with new models without redesigning the entire trading system.

What does the core controller do?

The core controller is the component responsible for coordinating all other modules in the trading infrastructure. It acts as the system’s central orchestrator, ensuring that different services communicate correctly. When the system starts, the controller initializes data feeds, analysis modules, and execution interfaces. It also schedules periodic tasks such as market data updates or model evaluations. During runtime, the controller routes information between components and manages the lifecycle of each module. If a service fails or produces an error, the controller can restart it or trigger fallback procedures. This design allows the trading robot to operate continuously without manual supervision. By separating orchestration from business logic, the architecture becomes more stable and easier to maintain.

How do trading systems store their data?

Trading systems store several different categories of data, including market candles, trading signals, and system logs. Because market data is naturally chronological, many systems rely on time-series databases that efficiently handle timestamped records. However, some architectures prefer key-value databases due to their simplicity and fast read-write performance. Key-value stores are particularly useful for storing signals, prompts, or model outputs that need to be retrieved quickly. In larger infrastructures, engineers may combine multiple storage solutions to optimize different workloads. For example, a time-series database may store historical candles while a key-value store tracks system state. Storage systems must also support high write throughput because trading bots continuously ingest new data. Proper database selection ensures that the system can scale as the number of instruments and strategies grows. Ultimately, the storage layer acts as the system’s memory and enables long-term analysis of trading behavior.

What are the main components of a trading robot?

A trading robot is usually built as a modular system composed of several independent layers. The first layer collects raw market data from exchange APIs and converts it into standardized internal structures. The second layer performs preprocessing tasks such as aggregation, filtering, or feature extraction from the incoming data streams. The next component is the signal generation engine, which can include algorithmic strategies, statistical models, or AI-based decision systems. After signals are generated, they pass through risk checks or validation modules before reaching the execution layer. The execution layer interacts with exchange APIs to place or cancel orders based on the system’s decisions. All system activity is recorded in a storage layer that keeps historical market data, signals, prompts, and model responses for later analysis. Finally, a user interface or monitoring layer provides visibility into the robot’s activity through dashboards or notifications.

How do trading robots handle API rate limits?

Most cryptocurrency and financial exchanges impose rate limits on API requests to prevent abuse and ensure system stability. If a trading robot sends too many requests within a short time period, the exchange may temporarily block the connection or reject requests. This can lead to missing market updates or failed order submissions, which may directly impact trading performance. To handle this constraint, trading systems implement a request management layer that tracks how many calls are allowed per time window. This layer often includes caching mechanisms so that frequently requested data does not require repeated API calls. Systems may also implement retry strategies with exponential backoff to recover from temporary throttling errors. Some architectures maintain local data buffers to reduce dependency on frequent API queries. Proper rate-limit management is therefore essential for ensuring that the trading robot remains reliable during volatile market conditions.