7 Days GenAI Learning Challenge Task History
Season 1: Apr 2026
Day 1 Task:
Python: Variables, Lists, and Dicts for AI data handling.
Learn the topic above with Claude and create multiple code snippets in your pynotes. Then write an article on your personal blog (username.github.io).
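A minimal sketch of the kind of snippet you might put in your pynotes, showing variables, lists, and dicts on a tiny hypothetical dataset of prompt/response records (the field names here are illustrative, not prescribed by the task):

```python
# Hypothetical mini-dataset: prompt/response pairs an AI app might log.
samples = [
    {"prompt": "What is FastAPI?", "response": "A Python web framework.", "tokens": 12},
    {"prompt": "Define GGUF.", "response": "A model file format for llama.cpp.", "tokens": 15},
]

# Variables hold scalars; lists hold ordered records; dicts give named fields.
total_tokens = sum(s["tokens"] for s in samples)
avg_tokens = total_tokens / len(samples)

# List comprehension: pull one field out of every record.
prompts = [s["prompt"] for s in samples]

# Dict as an index: look records up by prompt instead of scanning the list.
by_prompt = {s["prompt"]: s for s in samples}

print(total_tokens)                          # 27
print(by_prompt["Define GGUF."]["tokens"])   # 15
```

These three structures are the raw material for everything later in the challenge: lists of dicts are exactly what you will serve as JSON and store in MongoDB.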
Publish a post on LinkedIn.
Come and validate your learning with our mentors. They will help you get it done.
It will take only 15–60 minutes.
Ref: Python Fundamentals
Day 2 Task:
FastAPI: Creating your first GET & POST endpoints.
Yesterday you handled data like a pro. Today you make it live. You’re building a real HTTP server — the same kind that powers AI products and agent backends. FastAPI is what modern AI engineers actually use. And you’re writing it today.
What you’re doing:
- Build a GET endpoint — serve your Python data as JSON
- Build a POST endpoint — accept input, validate it, respond to it
- Open Swagger UI — test everything without touching a frontend
Your deliverables:
- Code snippets in your pynotes
- Article on your GitHub Pages blog
- LinkedIn post — share your win publicly
- Mentor validation session
15–60 minutes. That’s it.
Ref: FastAPI Basics
Drop your server screenshot below when it’s running. Let’s go!
Day 3 Task:
Goal: Build a complete end-to-end AI pipeline from scratch: generate text with a local llama.cpp model, control its sampling behavior and output format, and manage the results in MongoDB — updating, deleting, and querying entries by date.
What you’re doing:
- llama.cpp: Loading a GGUF model and generating text
- llama.cpp: Controlling temperature, top-p & max tokens
- MongoDB: Updating and deleting documents, field operators
- MongoDB: Storing entries by date, querying date ranges
- llama.cpp: Forcing output format — top text / bottom text JSON
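For the MongoDB items above, it helps to remember that filters and update documents are plain Python dicts, so you can build and inspect them before you ever touch a live server. A sketch, with illustrative field names (`status`, `views`, `created_at`) and a hypothetical `apply` helper that would run them through a real pymongo collection handle:

```python
from datetime import datetime, timedelta, timezone

# Field operators: $set overwrites a field, $inc adjusts a numeric counter.
update = {"$set": {"status": "reviewed"}, "$inc": {"views": 1}}

# Date-range filter: entries from the last 7 days, assuming each document
# stores a datetime in a "created_at" field.
since = datetime.now(timezone.utc) - timedelta(days=7)
date_filter = {"created_at": {"$gte": since}}

def apply(collection):
    # With a real pymongo collection handle, these run server-side:
    collection.update_many(date_filter, update)   # update all matching docs
    collection.delete_many({"status": "draft"})   # delete by field value
    # newest first within the date range
    return list(collection.find(date_filter).sort("created_at", -1))
```

Storing `created_at` as a real datetime (not a string) is what makes `$gte`/`$lte` range queries work correctly.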
Your deliverables:
- Code snippets in your pynotes
- Article on your GitHub Pages blog
- LinkedIn post — share your win publicly
- Mentor validation session
15–60 minutes. That’s it.
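For the llama.cpp side of Day 3, a sketch using the llama-cpp-python bindings. The model path is a placeholder you must supply yourself, and the JSON-forcing call assumes a recent llama-cpp-python version that supports `response_format`; treat both functions as a starting point, not the required solution.

```python
# Sampling settings: temperature flattens or sharpens the token distribution,
# top_p caps the nucleus of cumulative probability, max_tokens bounds length.
SAMPLING = {"temperature": 0.7, "top_p": 0.9, "max_tokens": 128}

def generate(prompt: str, model_path: str = "model.gguf"):
    # Lazy import so this sketch loads even where llama-cpp-python isn't installed.
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    out = llm(prompt, **SAMPLING)
    return out["choices"][0]["text"]

def generate_meme_json(idea: str, model_path: str = "model.gguf"):
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
    resp = llm.create_chat_completion(
        messages=[{"role": "user",
                   "content": f'Return JSON with keys "top_text" and "bottom_text" for: {idea}'}],
        response_format={"type": "json_object"},  # constrains output to valid JSON
        **SAMPLING,
    )
    return resp["choices"][0]["message"]["content"]
```

Lower `temperature` (e.g. 0.2) makes output more deterministic; higher `top_p` admits more unlikely tokens. The JSON constraint is what lets you store the result straight into MongoDB as a structured document.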
Day 4 Task:
MongoDB Connection & Persistent Storage
Goal: Connect Python to a real MongoDB instance — Atlas or local — and build the persistence layer that all your future AI pipelines will rely on. By the end of this session, your app saves, retrieves, and manages data without losing it when the process dies.
What you’re doing:
- MongoDB Atlas setup — Free tier cluster, IP whitelist, connection string
- PyMongo connection from Python — MongoClient, db/collection handles, ping check
- insert_one / insert_many — Save structured dicts, capture inserted_id
- find_one / find with filters — Query by field, equality, and projection
- Environment variable secrets — python-dotenv, .env file, never hardcode credentials
- Saving AI output to MongoDB — Persist llama.cpp response + prompt + timestamp as one document
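The bullets above can be sketched as one small persistence layer. This assumes a `.env` file containing a `MONGODB_URI` connection string from Atlas (or a local instance); the database and collection names are hypothetical:

```python
import os
from datetime import datetime, timezone

def make_record(prompt: str, response: str) -> dict:
    # One document per generation: prompt + model output + timestamp.
    return {
        "prompt": prompt,
        "response": response,
        "created_at": datetime.now(timezone.utc),
    }

def save(record: dict):
    # Lazy imports keep the sketch loadable without pymongo/python-dotenv installed.
    from dotenv import load_dotenv
    from pymongo import MongoClient

    load_dotenv()  # reads MONGODB_URI from .env so credentials stay out of code
    client = MongoClient(os.environ["MONGODB_URI"])
    client.admin.command("ping")  # fail fast if the cluster is unreachable
    collection = client["ai_notes"]["generations"]  # hypothetical db/collection names
    result = collection.insert_one(record)
    return result.inserted_id

# Later, retrieve by field with a filter and projection, e.g.:
#   collection.find_one({"prompt": "Define GGUF."}, {"response": 1, "_id": 0})
```

Because the document carries a real datetime, the Day 3 date-range queries work unchanged against everything you save here — and nothing is lost when the process dies.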
Your deliverables:
- ☐ Code snippets in your pynotes
- ☐ Article on your GitHub Pages blog
- ☐ LinkedIn post — share your win publicly
- ☐ Mentor validation session
15–60 minutes. That’s it.