Prism is built on a multi-pipeline architecture designed to keep data accurate, up to date, and reliable — even at scale. The system is both robust and modular, allowing each data flow to operate independently while maintaining global consistency and performance. While we don’t disclose every implementation detail, here’s an overview of what makes Prism fast, efficient, and resilient.

Prism Multi-Pipeline Architecture

We adopted a multi-pipeline architecture so that each dataset is isolated and can be processed with its own specialized strategy, without interfering with the others.

Real-Time Pipeline

The Real-Time Pipeline is the main entry point for all ingested blockchain data.
It handles data validation, filtering, and incremental updates across all datasets in real time.
Beyond simply inserting or updating data, this pipeline prioritizes the most requested datasets, ensuring they are refreshed as frequently as possible.
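
To make the idea of prioritized incremental updates concrete, here is a minimal sketch of such a loop. Everything in it is an assumption for illustration only: the dataset names, refresh intervals, validation rule, and the poll_source / apply_update callables are hypothetical placeholders, not Prism's actual implementation.

```python
# Minimal sketch of a real-time ingestion loop (illustrative assumptions only).
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class ScheduledRefresh:
    next_run: float
    dataset: str = field(compare=False)

# Assumption: the most requested datasets get the shortest refresh intervals.
REFRESH_INTERVALS = {"balances": 1.0, "transfers": 2.0, "metadata": 30.0}

def validate(record: dict) -> bool:
    # Placeholder validation: drop records missing required fields.
    return {"dataset", "key", "value"} <= record.keys()

def run(poll_source, apply_update):
    """poll_source(dataset) yields raw records; apply_update(record) performs
    the incremental insert/update for one validated record."""
    queue = [ScheduledRefresh(time.monotonic() + interval, ds)
             for ds, interval in REFRESH_INTERVALS.items()]
    heapq.heapify(queue)
    while queue:
        job = heapq.heappop(queue)
        time.sleep(max(0.0, job.next_run - time.monotonic()))
        for record in poll_source(job.dataset):
            if validate(record):          # validation and filtering
                apply_update(record)      # incremental update only
        # Reschedule: "hot" datasets come back sooner than "cold" ones.
        heapq.heappush(queue, ScheduledRefresh(
            time.monotonic() + REFRESH_INTERVALS[job.dataset], job.dataset))
```

The priority queue here simply stands in for the general pattern: frequently requested datasets are rescheduled on shorter intervals, so they receive the freshest data.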

Snapshot Pipeline

The Snapshot Pipeline periodically captures full historical snapshots of each dataset.
These snapshots preserve the state of the data at specific intervals and are stored in the database for retrospective analysis and long-term consistency.
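
As a rough illustration of what capturing and storing a snapshot might look like, the sketch below writes the full state of one dataset to a database table together with a timestamp. The table layout, column names, and use of SQLite are assumptions made for the example, not Prism's actual schema or storage engine.

```python
# Minimal sketch of periodic snapshotting (illustrative assumptions only).
import json
import sqlite3
from datetime import datetime, timezone

def take_snapshot(conn: sqlite3.Connection, dataset: str, rows: list[dict]) -> None:
    """Persist the full current state of one dataset with a timestamp,
    so the data at any past interval can be inspected later."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS snapshots "
        "(dataset TEXT, taken_at TEXT, payload TEXT)"
    )
    conn.execute(
        "INSERT INTO snapshots VALUES (?, ?, ?)",
        (dataset, datetime.now(timezone.utc).isoformat(), json.dumps(rows)),
    )
    conn.commit()
```

A scheduler would call take_snapshot for each dataset at its configured interval, producing the timestamped history used for retrospective analysis.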

History Pipeline

The History Pipeline ensures that all historical datasets remain current and reflect the latest state of the system.
It periodically reprocesses and reconciles past data using new information from the Real-Time and Snapshot pipelines, maintaining a coherent and evolving historical view.
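
One common way to express this kind of reconciliation is to rebuild each historical interval from the nearest snapshot plus the incremental updates that arrived after it. The sketch below follows that pattern; the function names and data shapes are hypothetical placeholders, since the pipeline's real internals are not disclosed.

```python
# Minimal sketch of historical reconciliation (illustrative assumptions only).
from datetime import datetime

def reconcile_interval(dataset: str, interval_end: datetime,
                       load_snapshot, load_updates_since, write_history):
    """Recompute the historical view of `dataset` at `interval_end` whenever
    new information arrives from the Real-Time or Snapshot pipelines."""
    # Start from the latest snapshot taken at or before the interval boundary.
    state, snapshot_time = load_snapshot(dataset, at_or_before=interval_end)
    # Replay incremental updates recorded between the snapshot and the boundary.
    for update in load_updates_since(dataset, snapshot_time, until=interval_end):
        state[update["key"]] = update["value"]
    # Overwrite the stale historical row with the reconciled state.
    write_history(dataset, interval_end, state)
```

Running this for every affected interval keeps the historical datasets consistent with the latest real-time and snapshot data.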

Together, these pipelines form the backbone of Prism’s data infrastructure — balancing speed, accuracy, and resilience while supporting continuous growth.