How I Built Bulkbeat TV: A Telegram Bot for AI-Filtered NSE Market Intelligence

Bulkbeat TV was built around a problem that is simple to state but hard to solve: financial data is everywhere, but actionable signal is rare.
As Project Manager at Sitekraft.dev, I delivered a Telegram-first product that continuously watches market disclosures and related news sources, then decides which events deserve to become alerts. The result was a production system that behaves less like a generic news bot and more like a selective intelligence layer for traders.
The Real Problem Was Not Data Collection
Fetching updates from NSE and market websites is not the interesting part. Many bots can scrape headlines and forward them. The real engineering problem starts after ingestion:
- which events are important enough to interrupt a user
- how to read filings that arrive as messy PDFs
- how to control duplicate or weak alerts
- how to keep a bot reliable on a modest VPS
That framing changed the whole architecture. Instead of optimizing for message count, I optimized for signal quality.
The Product Workflow
The system follows a multi-stage flow:
- pull updates from multiple market-facing sources
- normalize and deduplicate the incoming events
- enrich documents through PDF parsing and OCR where necessary
- score the event for likely relevance using an AI layer
- apply strict rules before alerting anyone
- format and deliver the final alert through Telegram
Each stage exists to remove noise before the user sees anything.
Why Async Architecture Mattered
This bot needed to do many small tasks repeatedly: poll sources, download documents, parse content, score events, check payments, run scheduled jobs, and respond inside Telegram. A blocking design would have spent most of its time idle in network waits while still tying up memory.
Using a fully async Python architecture made the runtime practical for a small VPS deployment. It allowed the bot to keep moving across concurrent network-heavy tasks — polling, downloading, parsing, scoring, and responding — without turning the system into a heavyweight backend.
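The payoff of the async approach is easiest to see in miniature. In this sketch, three simulated network waits overlap under `asyncio.gather`, so the total wall time is roughly the longest single wait rather than the sum; the coroutine names are illustrative stand-ins for the bot's recurring jobs.

```python
import asyncio

# Minimal sketch of the concurrency pattern, not the production bot.
# Each coroutine stands in for one recurring network-bound job.

async def poll_source(name: str, delay: float) -> str:
    await asyncio.sleep(delay)      # simulates waiting on a network response
    return f"{name}: fetched"

async def main() -> list[str]:
    # The three waits overlap instead of running back to back, so total
    # time is roughly max(delays), not their sum.
    return await asyncio.gather(
        poll_source("nse_filings", 0.2),
        poll_source("market_news", 0.2),
        poll_source("payments_check", 0.2),
    )

results = asyncio.run(main())
print(results)
```

On a small VPS this matters: one event loop and a handful of coroutines replace what would otherwise be a thread pool or multiple worker processes.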
OCR Was Not a Fancy Add-On
In this kind of workflow, OCR is not just a nice feature. It is a reliability feature.
Some filings are image-heavy or difficult to extract cleanly. If the enrichment layer fails, the AI layer receives weak context, and weak context leads to weak alerts. Adding OCR fallback made the pipeline much more resilient because it improved the quality of text available for scoring and formatting.
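The fallback logic itself is simple. Here is a sketch of the idea with injected extractor callables; in a real pipeline those might wrap libraries such as pdfplumber and pytesseract (a hypothetical pairing, not confirmed from the original system), and the character threshold is illustrative.

```python
from typing import Callable

MIN_CHARS = 50  # below this, treat direct extraction as failed (illustrative)

def enrich(pdf_bytes: bytes,
           extract_text: Callable[[bytes], str],
           ocr_text: Callable[[bytes], str]) -> str:
    """Return usable text for the scoring layer, falling back to OCR."""
    text = extract_text(pdf_bytes).strip()
    if len(text) >= MIN_CHARS:
        return text                     # clean extraction succeeded
    return ocr_text(pdf_bytes).strip()  # image-heavy filing: use OCR instead

# Toy stand-ins to show the control flow:
good = enrich(b"...", lambda b: "x" * 80, lambda b: "ocr result")
bad = enrich(b"...", lambda b: "", lambda b: "recovered via OCR")
print(bad)
```

The important design choice is that the fallback is judged by output quality (enough text to score), not by whether the extraction call raised an error.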
The Most Important Design Decision: Send Less
One of the strongest lessons from this project was that a premium alerting product should usually send fewer messages, not more.
To enforce that, I leaned on:
- deduplication
- source-aware filtering
- cooldown windows
- conservative gating before alert delivery
That choice shaped the whole product. It made the bot more trustworthy and reduced the chance of users tuning it out.
Operations Needed Their Own Interface
User features alone were not enough. A live product also needs operational visibility.
That is why I built separate admin-side controls for tasks like:
- health monitoring
- subscriber access management
- broadcast workflows
- support-friendly control paths
This prevented the system from becoming a black box after deployment.
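The core of the admin side is just access control in front of operational commands. This framework-agnostic sketch shows the shape of that gate; the user IDs, command names, and status line are hypothetical, and a real Telegram bot would wire this into its handler layer.

```python
# Framework-agnostic sketch of admin gating: operational commands are
# honored only for whitelisted user IDs. All values are illustrative.

ADMIN_IDS = {111, 222}  # hypothetical Telegram user IDs

def handle_command(user_id: int, command: str) -> str:
    admin_commands = {"health", "broadcast", "grant_access"}
    if command in admin_commands and user_id not in ADMIN_IDS:
        return "not authorized"
    if command == "health":
        return "ok: queues drained, last poll 12s ago"  # illustrative status
    return f"ran {command}"

print(handle_command(111, "health"))      # admin sees operational status
print(handle_command(999, "broadcast"))   # non-admin is refused
```

Keeping these paths inside the bot, rather than in SSH sessions on the VPS, is what makes the system supportable after launch.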
Engineering Lessons I Took Forward
Bulkbeat TV reinforced a few ideas I care about deeply:
- the best automation systems are opinionated, not just fast
- AI becomes far more useful when the input pipeline is disciplined
- OCR and document handling are often make-or-break in real products
- reliability work is product work, not afterthought work
Why I Added This Project to My Portfolio
This project represents the kind of engineering I want to keep doing: product-focused backend systems that combine automation, AI, document processing, user delivery, and operational thinking in one deployable workflow.
It also reflects something important about my approach. I enjoy building systems where the hard part is not just making the code run, but deciding how the product should behave under real constraints.
