Senior Quant Developer / Backend - HFT
Crypto Social Trading App
Our Client
Our client is building a new category-defining trading platform — a single app to discover, understand, and trade real-time trends. By fusing perpetual futures with real-time attention data from major social platforms, they are enabling users to measure and trade cultural mindshare at scale. It’s an ambitious and complex product, rethinking how attention can become an asset and how financial products can feel as intuitive and engaging as consumer apps.
About the Team
- A tight-knit team of six full-time employees, all based in New York City
- Fully in-person: engineers, designers, and product minds all work side by side
- Everyone contributes directly to shipping product, shaping vision, and learning from users
Core Values
- Invent, Don't Optimize: The team is focused on first-principles thinking to build something entirely new, not just a better version of what already exists.
- Products That Teach People What They Want: The most powerful ideas often feel inevitable in hindsight. This team is betting on its ability to see around corners.
- No Passengers: Everyone is an owner. No layers, no handoffs. Fast execution, high trust, full commitment, all in one room.
What You’ll Do
- Build ultra-low-latency data pipelines to ingest, normalize, and process high-frequency trading data, including order books, trade events, and real-time market signals.
- Engineer robust C++ systems for managing real-time and historical financial data feeds, ensuring performance, reliability, and fault tolerance.
- Automate analytics workflows using Python for scripting, data processing, and reporting.
- Analyze market microstructure data to identify patterns in liquidity, spreads, slippage, and order flow, supporting alpha research and risk monitoring.
- Collaborate with trading and quant research teams to integrate data pipelines into core trading and analytics platforms.
- Ensure data accuracy and integrity by writing unit and integration tests and validating outputs to avoid downstream issues.
- Optimize time-series data storage and access, tackling challenges of scale, latency, and memory usage in production environments.
- Troubleshoot pipeline issues in real time, minimizing downtime and ensuring consistent data availability across systems.