Explore the technical architecture, engineering decisions, and modern tech stack that power Idlyze's intelligent business idea analysis.
This project started as a fun idea: a tool that tells you whether you should use AI in your project. Later, I shifted its focus to something more meaningful: analysing business ideas and providing insights.
Now it has become an intelligent tool that evaluates your business idea and gives you insights such as feasibility, market potential, cost, and risk analysis. It also provides actionable outputs, such as an implementation timeline. The core of the tool is its multi-layer analysis process, which makes the results more reliable.
Building a tool like this is not just about experimenting with AI. It shows how modern systems can combine AI, traditional rules, and data-driven methods to give practical insights.
The project matters because:
In AI systems, especially when using LLMs, answers can vary a lot between runs because of randomness (temperature, prompt interpretation, context drift, etc.). By building a multi-layer pipeline (heuristics + vector DB + ML + LLM), you reduce this randomness. That means if two users enter the same idea, the system doesn't produce wildly different insights each time — the process is repeatable.
When working on an idea, I often think: what if this grows into a company with dozens of engineers adding features? I wanted a setup that is predictable, robust, developer-friendly, and easy to scale.
In one word: shareability. If multiple apps in your project share utilities, components, or types, a monorepo helps keep them in one place. It also keeps project configurations centralised so developers can focus on features instead of setting up the same configuration again and again.
Of course, setting up a monorepo adds complexity, but I felt the trade-off was worth it.
I chose Bun because it is fast, has great support for modern features (especially catalog workspaces), and, since my backend runs on Bun, the setup felt natural.
Modern React-based frontend with excellent developer experience
Fast, type-safe backend with modern JavaScript runtime
Intelligent analysis with vector search and caching
Scalable cloud infrastructure with automated deployments
Comprehensive error tracking and user analytics
The system is built in layers so that the final output is not just a raw LLM response but a well-reasoned result based on rules, data, and AI models.
The user enters their business idea. The frontend handles form validation with React Hook Form and sends the request to the backend through an API.
Receives the request from the frontend. Routes it to different services (preprocessing, analysis, LLM). Makes sure authentication, validation, and logging are in place.
Cleans and normalises the input text. Extracts key entities, domains, and complexity signals. Prepares the data so later stages can make more accurate decisions.
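To make this stage concrete, here is a minimal sketch of what preprocessing could look like. The `preprocessIdea` helper and the `DOMAIN_KEYWORDS` map are illustrative assumptions, not the production code.

```typescript
// Hypothetical sketch of the preprocessing stage: normalise the raw idea
// text and extract coarse signals for the later scoring layers.
interface PreprocessedIdea {
  normalised: string;       // lower-cased, whitespace-collapsed text
  tokens: string[];         // simple word tokens
  wordCount: number;        // rough complexity signal
  domains: string[];        // matched domain keywords (assumed list)
}

// Assumed domain keyword map — the real service would use a richer model.
const DOMAIN_KEYWORDS: Record<string, string[]> = {
  fintech: ["payment", "banking", "loan"],
  health: ["clinic", "patient", "fitness"],
  ai: ["ai", "ml", "model", "llm"],
};

function preprocessIdea(raw: string): PreprocessedIdea {
  const normalised = raw.toLowerCase().replace(/\s+/g, " ").trim();
  const tokens = normalised.split(" ").filter(Boolean);
  const domains = Object.entries(DOMAIN_KEYWORDS)
    .filter(([, words]) => words.some((w) => tokens.includes(w)))
    .map(([domain]) => domain);
  return { normalised, tokens, wordCount: tokens.length, domains };
}
```

The structured output feeds every later layer, so a mistake here would compound downstream; keeping this stage simple and deterministic helps the whole pipeline stay repeatable.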
Uses simple rules and keyword checks to quickly judge whether the idea looks AI-heavy. Acts as a baseline scoring system.
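A baseline like this fits in a few lines; the keyword patterns and weights below are invented for illustration.

```typescript
// Hypothetical baseline heuristic: keyword checks that score how "AI-heavy"
// an idea looks. Patterns and weights are illustrative assumptions.
const AI_SIGNALS: Array<[RegExp, number]> = [
  [/\b(recommendation|personali[sz]ed)\b/, 0.3],
  [/\b(chatbot|assistant|llm)\b/, 0.4],
  [/\b(predict|forecast|classif)\w*/, 0.3],
];

// Returns a score in [0, 1]; the fusion layer treats it as one vote of many.
function heuristicAiScore(idea: string): number {
  const text = idea.toLowerCase();
  const score = AI_SIGNALS.reduce(
    (sum, [pattern, weight]) => (pattern.test(text) ? sum + weight : sum),
    0,
  );
  return Math.min(score, 1);
}
```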
Stores embeddings of past ideas and contexts. Finds similar problems or projects to compare with the new idea. Returns context that strengthens the analysis.
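The production store is Pinecone; this dependency-free sketch shows the retrieval idea with plain cosine similarity over an in-memory array (names and shapes are assumptions).

```typescript
// In production the embeddings live in Pinecone; this in-memory sketch
// shows the retrieval idea with plain cosine similarity.
interface StoredIdea { id: string; embedding: number[]; summary: string; }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the topK most similar past ideas to strengthen the analysis context.
function similarIdeas(query: number[], store: StoredIdea[], topK = 3): StoredIdea[] {
  return [...store]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}
```

In the real system the returned summaries are injected as context for the later LLM stage, which is what "strengthens the analysis".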
Runs on structured features extracted from the idea. Uses decision-tree-like logic to give another perspective. Adds another confidence score (separate from heuristics and vector DB).
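A hand-rolled sketch of the idea, with invented features and thresholds standing in for the trained model:

```typescript
// Sketch of the feature-based classifier: decision-tree-style rules over
// structured features. Feature names and thresholds are illustrative.
interface IdeaFeatures {
  wordCount: number;        // from preprocessing
  aiKeywordHits: number;    // from heuristics
  similarIdeaScore: number; // best match from the vector DB, 0..1
}

// Returns an independent confidence, separate from heuristics and vector DB.
function mlClassifierScore(f: IdeaFeatures): number {
  if (f.aiKeywordHits >= 2) {
    return f.similarIdeaScore > 0.8 ? 0.9 : 0.75;
  }
  if (f.similarIdeaScore > 0.6) return 0.6;
  return f.wordCount > 50 ? 0.4 : 0.25;
}
```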
Combines results from heuristics, vector DB, and ML classifier. Uses weighted scoring to avoid bias from any single method. Produces a balanced "yes/no + confidence score" output.
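A minimal sketch of the fusion step, assuming fixed illustrative weights for the three layers; a deterministic weighted average is what keeps the verdict repeatable for identical inputs.

```typescript
// Sketch of decision fusion: a fixed weighted average over the three layers.
// The weights are assumptions chosen so no single method dominates.
const WEIGHTS = { heuristic: 0.25, vector: 0.35, ml: 0.4 };

interface FusionResult { verdict: "yes" | "no"; confidence: number; }

function fuseScores(scores: { heuristic: number; vector: number; ml: number }): FusionResult {
  const confidence =
    scores.heuristic * WEIGHTS.heuristic +
    scores.vector * WEIGHTS.vector +
    scores.ml * WEIGHTS.ml;
  return { verdict: confidence >= 0.5 ? "yes" : "no", confidence };
}
```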
Takes the fused score and reasoning. Generates a human-readable explanation, risks, and possible roadmap. Ensures the insights are practical, not just raw numbers.
Stores user submissions, results, and analysis history. Helps in tracking improvements over time.
Tracks errors, performance bottlenecks, and user interactions. Feeds back into improving the product.
User submits an idea on the frontend.
API Gateway sends the idea to the preprocessing service.
The cleaned input passes through heuristic rules and embedding search in Pinecone.
In parallel, the ML classifier runs feature-based scoring.
Decision Fusion combines these results into a single score.
LLM adds explanation, risks, and next steps.
Final structured output is returned to the frontend and stored in the database.
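The seven steps above can be sketched end to end as one async pipeline. Every stage below is a stub standing in for the real service, with heuristics and the vector lookup running in parallel as described.

```typescript
// End-to-end sketch of the analysis flow. All stage bodies are stubs;
// only the shape of the orchestration mirrors the description above.
async function analyseIdea(rawIdea: string): Promise<{ score: number; report: string }> {
  const cleaned = rawIdea.trim().toLowerCase();            // 2) preprocessing
  const [heuristic, vector] = await Promise.all([
    Promise.resolve(cleaned.includes("ai") ? 0.8 : 0.3),   // 3a) heuristic stub (naive substring check)
    Promise.resolve(0.6),                                  // 3b) Pinecone lookup stub
  ]);
  const ml = 0.7;                                          // 4) classifier stub
  const score = (heuristic + vector + ml) / 3;             // 5) fusion (equal weights here)
  const report =                                           // 6) LLM explanation stub
    `Verdict: ${score >= 0.5 ? "promising" : "weak"} (${score.toFixed(2)})`;
  return { score, report };                                // 7) returned and persisted
}
```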
Key technical achievements and engineering decisions that make Idlyze robust, scalable, and maintainable.
The backend streams data section by section in JSON format. The frontend consumes this stream and renders insights in real time, improving UX for long-running requests.
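One way to sketch this, assuming newline-delimited JSON with one line per completed section (the section names here are invented):

```typescript
// Sketch of section-by-section streaming: the backend yields each finished
// analysis section as an NDJSON line; the client renders as lines arrive.
type Section = { name: string; body: unknown };

async function* streamAnalysis(): AsyncGenerator<string> {
  const sections: Section[] = [
    { name: "feasibility", body: { score: 0.7 } },
    { name: "market", body: { size: "large" } },
    { name: "risks", body: ["competition"] },
  ];
  for (const section of sections) {
    yield JSON.stringify(section) + "\n"; // one NDJSON line per section
  }
}

// Consumer side: parse and render each section as soon as it arrives,
// instead of waiting for the whole long-running analysis to finish.
async function renderStream(): Promise<string[]> {
  const rendered: string[] = [];
  for await (const line of streamAnalysis()) {
    const section = JSON.parse(line) as Section;
    rendered.push(section.name);
  }
  return rendered;
}
```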
Unified error handling on both frontend and backend. All errors are logged and reported to Sentry for monitoring and alerting. This ensures consistent debugging and faster issue resolution.
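A sketch of the pattern, with an invented error dictionary and the Sentry call stubbed out; codes, statuses, and messages are illustrative.

```typescript
// Sketch of a unified error layer: one central dictionary of app errors
// shared by both sides, and one handler every failure flows through.
const ERROR_DICTIONARY = {
  IDEA_TOO_SHORT: { status: 422, message: "Idea description is too short." },
  RATE_LIMITED: { status: 429, message: "Too many analyses; try again soon." },
  ANALYSIS_FAILED: { status: 500, message: "Analysis pipeline failed." },
} as const;

type ErrorCode = keyof typeof ERROR_DICTIONARY;

class AppError extends Error {
  constructor(public code: ErrorCode) {
    super(ERROR_DICTIONARY[code].message);
  }
}

// Single place errors are shaped (and, in production, reported to Sentry).
function toErrorResponse(err: unknown): { status: number; code: string; message: string } {
  const code: ErrorCode = err instanceof AppError ? err.code : "ANALYSIS_FAILED";
  // Sentry.captureException(err);  // real reporting would happen here
  const { status, message } = ERROR_DICTIONARY[code];
  return { status, code, message };
}
```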
The same schema is used across frontend and backend for validating queries, params, request bodies, and responses. Response validation schemas are inferred back into frontend types, so developers always work with the correct type.
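The project presumably uses a validation library such as Zod for this; the dependency-free sketch below shows the core idea of one runtime validator whose TypeScript type is inferred once and reused on both sides.

```typescript
// Sketch of a shared schema: the parse function validates at runtime, and
// its return type is inferred so frontend and backend types cannot drift.
// Field names and values are illustrative.
const analyseRequestSchema = {
  parse(input: unknown): { idea: string; depth: "quick" | "full" } {
    const obj = input as Record<string, unknown>;
    if (typeof obj?.idea !== "string" || obj.idea.length === 0) {
      throw new Error("idea must be a non-empty string");
    }
    if (obj.depth !== "quick" && obj.depth !== "full") {
      throw new Error("depth must be 'quick' or 'full'");
    }
    return { idea: obj.idea, depth: obj.depth as "quick" | "full" };
  },
};

// Inferred once, imported everywhere — developers always get the right type.
type AnalyseRequest = ReturnType<typeof analyseRequestSchema.parse>;
```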
Secure session-based authentication with the ability to revoke sessions. Can be extended easily to log out users from multiple devices at once. Adds flexibility and tighter security compared to plain token-based flows.
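A sketch of why server-side sessions enable revocation, with an in-memory `Map` standing in for the real session store: revoking one session, or all of a user's sessions, is a simple store update, which plain stateless tokens cannot offer.

```typescript
import { randomUUID } from "node:crypto";

// Sketch of revocable sessions; the Map stands in for the database.
interface Session { id: string; userId: string; revoked: boolean; }

const sessions = new Map<string, Session>();

function createSession(userId: string): Session {
  const session = { id: randomUUID(), userId, revoked: false };
  sessions.set(session.id, session);
  return session;
}

function isValid(sessionId: string): boolean {
  const s = sessions.get(sessionId);
  return !!s && !s.revoked;
}

// Log the user out of every device at once.
function revokeAllForUser(userId: string): void {
  for (const s of sessions.values()) {
    if (s.userId === userId) s.revoked = true;
  }
}
```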
Fully automated CI/CD pipelines built with GitHub Actions. Frontend deployed on AWS Amplify. Backend deployed on EC2 with Docker and load balancing. Optimised for zero-downtime deployments and scaling.
Every engineering decision was made with scalability in mind. From the monorepo architecture to the multi-layer AI pipeline, Idlyze is designed to grow from a personal project to a production-ready platform that can serve thousands of users.
Every engineering challenge is an opportunity to build something better. Here are the key obstacles we overcame to create a robust, scalable platform.
Designing secure multi-device, role-based authentication with session revocation.
Creating a centralised error dictionary for consistent debugging and monitoring.
Achieving data consistency across heuristics, ML, vector DB, and LLM layers.
Reducing LLM randomness by introducing a deterministic fusion pipeline.
Setting up a Bun monorepo with AWS CI/CD pipelines despite limited ecosystem maturity.
Each roadblock taught us valuable lessons about system design, security, and scalability. These challenges ultimately made Idlyze more robust and production-ready.
Every technical decision involves trade-offs. Here are the key choices we made and the reasoning behind each decision.
Chose monorepo for shareability and scalability across multiple apps and packages.
Picked Bun for speed and modern developer experience with better TypeScript support.
Went with session-based for stronger security and multi-device support capabilities.
Fusion chosen for reliability and explainability over raw LLM responses.
Streaming chosen for real-time user experience and better perceived performance.
Each trade-off was evaluated based on scalability, maintainability, developer experience, and long-term project goals. The decisions prioritise reliability and future growth.
We optimised for scalability, reliability, and user trust. Here's how our technical choices support these core principles.
Ensures shared code and developer velocity across all applications and packages.
Makes insights repeatable and explainable by combining multiple analysis methods.
Provides enterprise-grade security and observability for production environments.
Enhances UX with real-time feedback during long-running analysis processes.
Gives full control and room to scale into a production-ready platform.
Every technical decision was made with long-term scalability in mind. From the monorepo architecture to the multi-layer AI pipeline, Idlyze is designed to evolve from a personal project to an enterprise-ready platform that can serve thousands of users while maintaining reliability and performance.
Honest assessment of current constraints and areas that need improvement. Understanding limitations is the first step toward building better solutions.
Heuristics and semantic layer are generic and hard-coded, limiting adaptability to specific domains.
Analysis depth limited to high-level insights due to time constraints and resource limitations.
Acknowledging limitations is crucial for honest technical communication. These constraints represent opportunities for future improvements and help set realistic expectations for current capabilities.
The roadmap for evolving Idlyze into an even more powerful and comprehensive business analysis platform.
Develop more intelligent heuristics and better semantic context understanding.
Implement deeper market, financial, and technical analysis capabilities.
Add charts and graphs for richer data visualisation and insights presentation.
Implement export options (CSV/docs) and public sharing capabilities.
Idlyze is designed to grow and adapt. These planned improvements represent our commitment to building a platform that evolves with user needs and technological advances. Each enhancement is carefully planned to maintain the reliability and performance that users expect.
Our evolution from a reliable, repeatable system to an intelligent, self-improving AI assistant that learns and adapts continuously.
Multi-layer fusion (heuristics + ML + vector DB + LLM)
Domain-specialised semantic layer for industry-aware analysis
Produces structured, explainable, and repeatable insights
Continuous retraining of the ML classifier on real submissions and feedback
Dynamic heuristics that evolve with new industry patterns
Expanding vector DB with every new idea, improving contextual recall
User feedback loop → ratings on insights automatically adjust decision weights
Automated fusion optimisation → weights between heuristics, ML, and vector DB recalibrate over time
Self-maintaining pipeline → heuristics, embeddings, and models update continuously without manual tweaks
Each phase builds upon the previous one, creating a system that becomes more intelligent and valuable over time. The roadmap ensures Idlyze evolves from a reliable tool to an indispensable AI assistant that understands context, learns from interactions, and continuously improves its analysis capabilities.