Harshitha's
AI/ML Roadmap
Built on the Five Laws of Learning. Tailored to an IIT Dharwad CSE final-year student interning at Dassault Systèmes. Depth over breadth — always.
Before the roadmap, here is what's actually true about your situation.
IIT Credential + Live AI Internship
You're not a beginner. You have a premium institution badge AND active exposure to production AI at Dassault Systèmes. Most applicants have neither. This changes how you should position yourself entirely.
Outrageous Networking Confidence
A 9/10 networking score is genuinely rare. Your biggest hiring friction — getting past resume screening — can be completely bypassed by reaching out directly to hiring managers and ML leads. You have the confidence. Use it.
Strong Mathematical Foundation
Linear algebra, statistics, probability — already solid. This means you can go deep on ML theory immediately. You won't hit the math wall that stops most learners at Month 2.
Surface-Level Knowledge — No Real Depth
You said it yourself: "I know most topics but lack depth." Using ChatGPT to generate logistic regression code is not the same as understanding it. Interviewers will test depth. You need to own every line you write.
Nothing Deployed — Only Notebooks
4/10 on deploying real projects. Companies don't care about your Jupyter notebooks. They want to click a URL and see something running. Every project in this roadmap ends with a deployed URL or a GitHub repo that runs in one command.
Leverage Dassault Systèmes Now
You are inside an AI team at a major company. Whatever you're working on — document it, write about it (within NDA bounds), extract learnings. This is your proof. Hiring managers trust "worked on X at Dassault" over any personal project.
⚡ Critical Reframe
Your fear is: "I'll just consume videos and not build anything."
The real problem is: You've been learning topics instead of learning to solve problems.
This roadmap has one rule: no topic is "done" until you have a deployed artifact, a GitHub commit, or a LinkedIn post with code.
Emotion Creates Judgment — the law you know works for you — means every concept needs a real stake attached to it.
You don't need more video time. You need more commit time.
Before learning anything new, find exactly where your depth breaks down.
⚠ Do This Before Anything Else
Most people skip this and waste months. You said you "know most topics but lack depth." This sprint reveals precisely which topics you understand vs which ones you just recognise. The roadmap then focuses only where depth is missing.
Law 1: Prediction Before Explanation
Before each test below, write your prediction: "I think I know X because..." Then test. The gap between prediction and test result is your actual learning target.
7-Day Depth Test Protocol
Close the Depth Gap
You have breadth. Now you need to own each topic cold — without AI assistance. Focus: Python comprehension, classical ML from scratch, and your first deployed project.
Law 4: Emotion Creates Judgment — Your Anchor Law
You identified this as what used to work for you. Every topic in Phase 1 must have a stake: a deployed thing, a published post, or a benchmark. If there is no public output, the topic isn't done. This is non-negotiable for you specifically.
Law 3: Compression Beats Coverage — Skip What You Already Know
Do NOT restart from Python basics. Use your diagnostic results. If you passed the Day 1 logistic regression test, you skip to Month 2 content. This roadmap compresses to YOUR gaps — not to a generic beginner's curriculum.
Python — Code Reading, Not Writing
NumPy + Pandas — Deep Mastery
Classical ML — From-Scratch Implementations
PyTorch Fundamentals + Backprop by Hand
Git + FastAPI + Docker — First Deployment
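"From scratch" should mean exactly this: the full training loop in plain NumPy, no scikit-learn. A minimal sketch of logistic regression via gradient descent on cross-entropy — the synthetic data and hyperparameters are illustrative, not a prescription:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Batch gradient descent on binary cross-entropy loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)       # predicted probabilities
        grad_w = X.T @ (p - y) / n   # dL/dw for cross-entropy
        grad_b = (p - y).mean()      # dL/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic, roughly linearly separable toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b = train_logreg(X, y)
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

If you can write this without looking anything up, derive each gradient line by hand, and explain why the update direction is `-(p - y)`, the Day 1 depth test is passed.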
Phase 1 Projects
ML Performance Comparison — But Deployed
You already did a project comparing ML models. Rebuild it: clean code you wrote yourself, FastAPI endpoint, Docker container, live URL. Write a case study on what the models actually fail at — not just accuracy scores.
Open-Source Contribution (Small)
Pick a Python ML repo with "good first issue" tags. Fix a bug or improve documentation. Submit a PR. Getting it merged is the goal — even one line. This proves you can read and change other people's code.
Build Hireable Skills
The market is hiring for GenAI and MLOps. This phase targets the exact skills that appear in job descriptions at AI-first companies in India and UK. Every topic ends with something you can show.
Law 2: Failure Modes Over Features
For every technology in Phase 2, learn it by breaking it first. RAG breaks on chunking strategy. LLMs break on hallucination. Vector databases break on recall. Know exactly how each one fails BEFORE you learn to build it — that's what senior engineers actually know.
Transformers — Deep Understanding
Embeddings + Vector Databases
RAG Systems — Build One End to End
Prompt Engineering + LLM Cost Maths
MLOps Basics — Experiment Tracking + Model Registry
System Design for ML — Basics
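The retrieval half of RAG fits in a few lines once you see it. A toy sketch using bag-of-words vectors and cosine similarity — in a real system an embedding model and a vector database replace both, and this corpus and query are made up:

```python
import numpy as np
from collections import Counter

corpus = [
    "FastAPI serves ML models over REST",
    "Docker packages the model and its dependencies",
    "RAG retrieves relevant chunks before the LLM answers",
]

vocab = sorted({w for doc in corpus for w in doc.lower().split()})

def embed(text):
    """Toy bag-of-words vector; a real system uses an embedding model."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query, k=1):
    q = embed(query)
    # Cosine similarity between the query and every chunk
    sims = doc_vecs @ q / (
        np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q) + 1e-9)
    )
    return [corpus[i] for i in np.argsort(sims)[::-1][:k]]

print(retrieve("which chunks does RAG retrieve"))
```

Everything hard about RAG lives in what this sketch omits: chunking strategy, embedding quality, recall at scale — exactly the failure modes listed above.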
Phase 2 Flagship Project
Domain RAG System — Dassault-Adjacent
Build a RAG system over a public technical dataset related to your internship domain (engineering, CAD, manufacturing, 3D simulation — whatever you can reference without violating NDA). Deploy it. Write a full case study: architecture decisions, failure modes found, cost analysis, what you'd do differently.
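The cost-analysis part of the case study is simple arithmetic worth scripting once. A sketch — the per-million-token prices below are hypothetical placeholders; substitute your provider's current rates:

```python
def monthly_llm_cost(queries_per_day, prompt_tokens, completion_tokens,
                     price_in_per_mtok, price_out_per_mtok):
    """Rough monthly spend, given per-million-token prices."""
    per_query = (prompt_tokens * price_in_per_mtok
                 + completion_tokens * price_out_per_mtok) / 1_000_000
    return per_query * queries_per_day * 30

# Hypothetical rates: $0.50/M input tokens, $1.50/M output tokens
cost = monthly_llm_cost(queries_per_day=1000,
                        prompt_tokens=3000,    # RAG context is token-heavy
                        completion_tokens=500,
                        price_in_per_mtok=0.50,
                        price_out_per_mtok=1.50)
print(f"${cost:.2f}/month")
```

Note the asymmetry: retrieved context inflates prompt tokens, so RAG cost is dominated by input pricing — a point worth calling out explicitly in the case study.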
Open-Source Contribution (Real)
Target: LangChain, LlamaIndex, or any AI/ML library with active issues. Fix a real bug or add a documented feature. The goal: your name on a merged PR in a repo with 1000+ GitHub stars. This is the resume equivalent of a gold badge.
Read, Contribute, Tear Apart
Your stated goal: read difficult code, point out bugs, suggest improvements, and tear it apart. This phase makes that real. You move from user of libraries to contributor to critic.
Law 5: AI Accelerates, Humans Judge
Use AI to read code faster — Claude, Copilot, GPT-4. But the judgment — "this is a bad abstraction", "this will fail under load", "this is the wrong tradeoff" — must come from you. AI finds what; you decide so what.
Open-Source Target Repos
Pick 2 Repos — Study Deeply
Bug Hunt — Find Real Issues
Write a Technical Deep-Dive Post
Model Interpretability + SHAP
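Before reaching for SHAP, implement its simpler cousin — permutation importance — by hand; it builds the intuition SHAP formalises. A sketch with a stand-in model and toy data (SHAP itself attributes per-prediction, not just globally, so this is a warm-up, not a substitute):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Only feature 0 matters; features 1 and 2 are pure noise
y = (X[:, 0] > 0).astype(int)

def predict(X):
    """Stand-in model: thresholds feature 0 (plug a real model in here)."""
    return (X[:, 0] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10):
    base = (predict(X) == y).mean()          # baseline accuracy
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break column j's link to y
            drops.append(base - (predict(Xp) == y).mean())
        importances[j] = np.mean(drops)       # mean accuracy drop
    return importances

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 should dominate
```

If permuting a column barely moves accuracy, the model isn't using it — the same question SHAP answers with a more principled (Shapley-value) attribution.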
Get Past the Resume. Every Time.
Your 9/10 networking score is your weapon. The Offer Framework says: make NOT hiring you feel like a risk. Here is exactly how to do that for India and UK markets.
Your Target Market (Offer Framework: Ex 1)
🇮🇳 India — Primary Market
- AI-first product startups (Series A–B, Bangalore / Delhi / Mumbai / Pune)
- Fintech companies with ML teams (Zerodha, Razorpay, Groww, and companies adjacent to them)
- Mid-size tech companies building AI products (not IT services)
- Defence / Industrial AI companies (your Dassault background is directly relevant)
🇬🇧 UK — Secondary Market
- UK visa reality check: the Graduate Route requires a UK degree, so you'd need Skilled Worker sponsorship. Target sponsoring startups in Edinburgh, Manchester, London.
- AI-first startups via Y Combinator UK / Seedcamp portfolio
- Engineering-adjacent AI companies (manufacturing AI, simulation AI — your Dassault angle)
- Note: get your visa eligibility confirmed before investing heavily here
📋 Your Value Proposition (Ex 2)
- "Get production-ready AI features deployed in 60 days without hiring a ₹30L+ senior engineer"
- IIT credentialed + Dassault Systèmes production AI experience = proof, not promise
- Can navigate complex industrial codebases — most junior candidates cannot
🚫 Your Hiring Frictions (Ex 4) & Fixes
- Friction: No deployed projects → Fix: 3 live URLs before Month 4
- Friction: Surface-level knowledge → Fix: From-scratch implementations on GitHub
- Friction: Resume rejected → Fix: Bypass with direct outreach (your 9/10 advantage)
- Friction: Can't prove production thinking → Fix: Published architecture case studies
The Bypass Play — Using Your 9/10 Networking
Do not wait until Month 6 to start this. Start outreach in Month 2 as soon as you have one deployed project. This is your primary resume bypass strategy.
"Hi [Name], I noticed [Company] is building [specific AI feature from their job description or product page]. I'm a final-year CSE student at IIT Dharwad, currently interning at Dassault Systèmes on their AI team working on [general area, no NDA details]. I built a [RAG system / deployed ML API / open-source contribution] recently — link: [URL]. I'm genuinely curious how your team handles [specific technical challenge they'd face]. Happy to share what I learned at Dassault if it's useful — or just to hear how you've solved it."
This works because: (1) you're not asking for a job, (2) you have a real link to click, (3) IIT + Dassault is a credible combination, (4) it shows specific technical curiosity about their work.
What To Apply To — A Decision Framework
⚠ Your Q38 Answer — "What should I apply to?"
You asked: "What jobs should I apply to and how do I know what the hiring manager wants me to showcase?" Here is the decision filter.
✅ Apply If the JD Contains
- "Deploy ML models to production" — you can prove this
- "RAG", "LLM integration", "GenAI" — you build this in Phase 2
- "Python, FastAPI, Docker" — your stack
- "Work independently with minimal supervision" — highlight Dassault
- "Open-source contributions welcome" — you have these
❌ Skip (For Now)
- FAANG / big tech — 12–24 months of competitive preparation needed
- Pure research roles without production component — mismatch
- Roles requiring 2+ years of experience — you don't qualify yet; skip them
- IT services companies labelling ML work as "AI" — these aren't AI roles
Reading the Hiring Manager's Mind
Hiring managers at AI-first startups are asking exactly ONE question: "Can this person ship without me babysitting them?" Everything you build, write, and post must answer that question.
Approximate hiring manager confidence boost from each proof asset. A live URL is your single most powerful credential.
You're at Dassault Systèmes from roughly 8AM to 7PM. That leaves weekday evenings and full weekends. Here is how to use them without burning out.
Weekday Evening (7PM–10PM)
Weekend (Saturday + Sunday)
⚡ Your 7-Day Litmus Test
At the end of every week, ask: "What did I ship this week?"
If the answer is "I watched videos and read articles" — the week failed, regardless of how much you studied.
If the answer is "I committed X, published Y, or deployed Z" — the week succeeded.
Ship something every week. No exceptions.
Concrete, measurable, undeniable.
First Deployed URL
A real ML model served over a REST API. Docker container. Public GitHub. You understand every line.
First PR Merged
Any contribution to an open-source ML repo with real users. Your name in someone else's codebase.
RAG System Live
Full GenAI project deployed with documented failure modes and cost analysis. Case study written and published.
10 Hiring Manager Conversations
Using your 9/10 networking score. Not applications — direct conversations. At least 3 that turn into interviews.
First Offer
AI Engineer or ML Engineer role at an AI-first company. India or UK. You proved you can ship — not just talk.
Open Source Identity
You read production code and find bugs. You can say so publicly with receipts. This compounds forever.
💬 To Your Q40: "Will this work? Can I keep up?"
You scored 9/10 on following through for 6 months. You already said you'd jump straight in this week. You're at IIT Dharwad. You're inside an AI team at Dassault Systèmes. You network like a senior engineer. You build in public at 10/10. The only version of this that fails is if you keep consuming instead of building. The roadmap is designed around your real constraint: depth, not coverage. One topic at a time. One deployment at a time. You've done harder things. This is just consistent reps.