Like most of us, towards the end of the year I get serious about my health (remember that New Year’s resolution?). I start planning goals to lose weight and get fitter, which usually includes joining a gym and managing my diet.

While starting the gym is easy (a purely physical task requiring no apps, though I do use AI as my personal assistant now—more on this next time), tracking calories is where you need digital help.

The Problem with Current Apps

There are many nutrition tracking apps in the market, like Fittr, MyFitnessPal, or HealthifyMe. But as many of you know, if you use their free versions, you are inundated with subscription prompts or advertisements. Even if you pay, you often get upsold on personal coaching.

Worse, most of these apps have so many features and screens, yet they are barely customizable. You end up clicking ten times just to log a coffee, or get stuck in their preset boxes. As an example, a person like me who usually skips breakfast and goes straight to brunch has no option to add “Brunch” as a category.

In fact, I think the major reason I never stick to nutrition tracking is the clunkiness of these apps.

I did find a workaround earlier in the year when I simply started using ChatGPT to log my meals. I would input text or a picture, and ChatGPT would extract the macros and log them. But hallucinations became a problem quickly. After 2-3 days, the summary would be wrong or padded with food I had eaten the day before. Unless I deleted the thread every day, it wasn’t a stable solution, and since you need weekly views to track trends, this method fizzled out quickly.

The Vision: AI-Native & Simple

What I needed was a simple, AI-native app where I could just type what I ate or take a picture, and it would automatically log it for me while showing a running history and trends for the last few days.

So, after writing so much about “VibeWriting” in the last couple of weeks, I decided to give its better-known cousin, “VibeCoding”, a try. As you can see on YouTube, everyone and their uncle is making apps using AI, so why not me?

No, I am not a programmer, nor have I ever written code in my life. But I know English, and I know how to prompt. Just like the Vibe Writing approach, I decided that if I supply the requirements (Intent), AI can handle the code (Syntax) under my direction.

This was also deeply personal to me: during my school days, I envisioned becoming a computer engineer who could build cool software. But thanks to my poor fit with our educational system (where you must pass advanced math, physics, and chemistry to learn to code), it never materialised. Using AI to actually build a usable piece of software was a dream come true.

Thus, I decided to put my theory to the test and build an app that I would actually want to use.

Vibe Coding Isn’t Magic—It’s a Tool

Let me be clear: Irrespective of what AI influencers or companies will have you believe, Vibe Coding platforms aren’t magic wands that fulfill wishes at a simple command (unless you are building a simple calculator).

However, it is equally wrong to say that Vibe Coding cannot help build a production-ready app. It just requires you to work as you would in standard software development: Build a product, test the hell out of it, fix the bugs, and then deploy.

Here is the operational breakdown of how I built TrackMacros, a mobile-based nutrition-tracking app: the traps I avoided, the critical “hallucinations” I had to debug, and why every Ops Leader needs to understand the difference between prompting AI and architecting it.

The Founder’s Trap (And Lessons Learned)

To be honest, this weekend was not my first attempt. I actually had this idea and worked on it last Sunday too. I used Gemini Pro, gave it a prompt, and it started generating an app. It even taught me how to use Netlify to deploy it.

It was great—I liked it so much that I immediately fell into the trap I avoid in my day job as a Program Lead: Scope Creep.

I decided to upscale the product. I took it to Google AI Studio and started adding features and iterating. However, with each iteration, I found the AI builder was rewriting the entire code. While it would fix one thing, it would break another. By the end of the day, my functional MVP had turned into a broken mess that ate all my free credits.

The Lesson: Simple prompting works for an initial prototype, but “hope” is not a strategy. For an app I could really use, I needed structure. I had to treat this like a real product development cycle: create a requirement document, perform UAT, and separate the tools used for development vs. debugging.

The Team & The Journey (Attempt #2)

For my second attempt this weekend, I put on my thinking hat & spent time listing requirements—down to button placement and copy—in a single notes document. I refined it to “Must Haves” vs. “Good to Haves.” I even gave it to Gemini (my Chief of Staff) to pare down into a final PRD (Product Requirement Document) before handing it to the team:

  • Junior Developer (Google AI Studio): I gave the final prompt to Google AI Studio. Out came V1—it looked great and matched my design.
  • Deployment (GitHub + Vercel): To avoid the manual upload issues I faced with Netlify, I upskilled myself on using GitHub and Vercel quickly. Now, any changes I made would auto-push to the phone app.
  • Senior Developer (Gemini 3 Model): For user testing and debugging, I stopped using the Studio and used the Gemini 3 model directly to guide me through manual code fixes.

Believe me, there were many bugs that needed fixing & I won’t bore you with all of them. But here are the interesting ones:

The “Time Travel” Paradox

The first version was beautiful but functionally broken. I called it the “Time Travel Bug.”

If I logged “Had 2 eggs for breakfast at 9 AM,” the AI (running on a UTC server) would log it as 3:30 AM the previous day. If I said “Yesterday’s dinner,” it would hallucinate a future date.

Reason: This is a classic GenAI trap. Large Language Models (LLMs) are excellent at Reasoning (understanding “eggs” are protein) but terrible at Calculation (understanding that “Yesterday” relative to IST is a specific timestamp).

The Fix: “Split-Brain” Architecture
Instead of asking AI to do everything, we separated the workflow:

  • The Brain (AI): Extracts intent only. “Did the user mention a specific time? Yes/No.”
  • The Calculator (Client): The JavaScript on the phone handles the math.

If I type “Coffee,” it now uses the phone’s time. If I type “Coffee yesterday,” the AI flags the word “yesterday,” and the app subtracts 24 hours locally.

Result: Zero hallucinations.
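For the curious, here is a minimal sketch of the client-side “Calculator” half of that split. The function name and the intent flags (relativeDay, explicitTime) are my illustrative stand-ins, not the app’s actual code:

```javascript
// Client-side half of the "Split-Brain" design (illustrative sketch).
// The AI returns only an intent flag, e.g. { relativeDay: -1 } for "yesterday"
// or { explicitTime: "09:00" } for "at 9 AM"; all date math happens on the phone.
function resolveTimestamp(intent, now = new Date()) {
  const ts = new Date(now); // copy so we never mutate the caller's Date

  // Shift whole days locally instead of asking the model to compute dates
  if (intent.relativeDay) {
    ts.setDate(ts.getDate() + intent.relativeDay); // -1 for "yesterday"
  }

  // Apply an explicit clock time in the phone's own timezone, not UTC
  if (intent.explicitTime) {
    const [hours, minutes] = intent.explicitTime.split(":").map(Number);
    ts.setHours(hours, minutes, 0, 0);
  }

  return ts.toISOString(); // stored alongside the macros in LocalStorage
}

// "Coffee"           -> resolveTimestamp({})                       // phone's current time
// "Coffee yesterday" -> resolveTimestamp({ relativeDay: -1 })      // same time, previous day
// "2 eggs at 9 AM"   -> resolveTimestamp({ explicitTime: "09:00" })
```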

The Supply Chain Shock

Mid-way through, my tests started failing. Reason: My free API key had run out of calls.

Gemini suggested moving to the 1.5 Flash model (1,500 calls/day). However, upon changing the code, the app broke completely. Instead of blindly following Gemini’s instruction to add more code to find the root cause, I decided to do a Google search, which revealed that 1.5 Flash had been disabled in November; 2.5 Flash was the current stable model. The AI was hallucinating based on old training data instead of doing a live search.

The Fix: Optimised prompts (reduced history sent per call), created a new API key, and stuck to supported models.

The Lesson: AI models are not static software; verify externally & stay agile.
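If you want to automate that “verify externally” step, a startup check like the sketch below can pick a currently supported model instead of hard-coding one that may have been retired. It uses the public Gemini REST model-listing endpoint; the preference order is my assumption:

```javascript
// Sketch: ask the Gemini API which models this key can actually use, then pick
// the first match from a preferred list, instead of hard-coding a model name
// that may have been retired since the code was written.
const PREFERRED_MODELS = ["gemini-2.5-flash", "gemini-2.0-flash"]; // assumed preference order

async function pickWorkingModel(apiKey) {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`
  );
  if (!res.ok) throw new Error(`Model listing failed: ${res.status}`);

  const { models = [] } = await res.json();
  // Names come back as "models/gemini-2.5-flash", so strip the prefix
  const available = new Set(models.map((m) => m.name.replace("models/", "")));

  const chosen = PREFERRED_MODELS.find((name) => available.has(name));
  if (!chosen) throw new Error("No supported model found for this API key");
  return chosen;
}
```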

The “Bureaucracy” Bug (Handling Vision)

I wanted to snap a photo of a label and get macros. The initial version tried to save every photo to a database, which would have bloated storage costs.

The Fix: A “Look and Forget” policy. The app sends the image to Gemini, extracts the JSON data (Protein, Carbs, etc.), and immediately discards the image.

Operational Efficiency: We store intelligence, not pixels.
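As a rough sketch of that “Look and Forget” flow (the prompt wording, storage key, and macro field names are illustrative; the call is the standard Gemini generateContent REST endpoint):

```javascript
// "Look and Forget": send the photo to Gemini, keep only the parsed macros,
// and let the base64 image go out of scope instead of writing it anywhere.
async function extractMacrosFromPhoto(base64Jpeg, apiKey, model = "gemini-2.5-flash") {
  const url = `https://generativelanguage.googleapis.com/v1beta/models/${model}:generateContent?key=${apiKey}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: "Return only JSON with calories, protein, carbs, fats and sugar for this food." },
          { inline_data: { mime_type: "image/jpeg", data: base64Jpeg } },
        ],
      }],
    }),
  });

  const data = await res.json();
  const text = data.candidates?.[0]?.content?.parts?.[0]?.text ?? "{}";
  // Strip anything around the JSON object (e.g. markdown fences) before parsing
  const macros = JSON.parse(text.slice(text.indexOf("{"), text.lastIndexOf("}") + 1));

  // Only the numbers survive; the image is never written to storage or a database.
  localStorage.setItem("lastPhotoMacros", JSON.stringify(macros));
  return macros;
}
```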

The Final Result: TrackMacros

Six hours after the first prompt (& extensive testing), I deployed the final version & got myself a production-ready mobile app running on my phone. I can log my daily meals, add custom recipes with keywords for my repetitive meals, & keep a 10-day trend always visible, with the benefits below:

  • Zero Latency: Loads instantly.
  • Privacy First: Runs on my personal API key. No data sold.
  • Cost: Approx ₹10 ($0.12) per month (free for the first 90 days).
  • Aesthetic: Clean “Dark Mode” built with Tailwind CSS. Designed for speed, not engagement.
Watch the app in action: I log my customised recipe for coffee & the app processes the macros locally with zero latency.
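That “customised recipe” lookup is a local keyword match against LocalStorage, which is why it feels instant: the app only falls back to Gemini when nothing matches. A minimal sketch of the idea (the storage key name and recipe shape are illustrative):

```javascript
// Sketch of the "Saved Meals" recall: before calling the AI at all, check
// whether the input text matches a saved recipe keyword in LocalStorage.
function recallSavedMeal(inputText) {
  const saved = JSON.parse(localStorage.getItem("savedMeals") || "[]");
  // e.g. [{ keyword: "my coffee", calories: 90, protein: 4, carbs: 8, fats: 4 }]
  const hit = saved.find((meal) =>
    inputText.toLowerCase().includes(meal.keyword.toLowerCase())
  );
  return hit || null; // null means fall through to the Gemini call
}

// "my coffee at 8 AM" -> returns the saved coffee macros with no API call,
// which is why repeat meals log with effectively zero latency.
```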

I had stopped using HealthifyMe not because it was bad software, but because it was built for the average user. I needed a tool built for my specific workflow, and now I have one.

The Abhay Perspective: From Consumer to Orchestrator

The real takeaway isn’t that I built a nutrition app. It’s that the barrier to entry for custom operational tools has collapsed—but only if you know how to lead. This is true for consumers and the corporate world alike.

For the last decade, Ops Leaders had two choices:

  1. Wait 6 months for IT.
  2. Buy expensive SaaS bloatware.

Today, there is a third option. We are entering the era of the “Corporate Centaur.” You don’t need to be a full-stack developer to solve technical problems anymore. You need to be a Chief of Staff to AI Agents. You need architectural thinking to spot “Time Travel” bugs and strategic agility to handle model deprecations.

If you can define the logic, manage the constraints, and audit the output, you can build your own solutions.

Stop waiting for the “perfect” AI tool. Go build the one you need.

Found this useful? I’ll be breaking down more practical strategies for operationalizing AI in future editions of The Abhay Perspective. Subscribe below, & also to my newsletter on LinkedIn, to get more such updates.

Quick Tech Rundown for the Techies:

  • Frontend: HTML/CSS/JavaScript
  • Styling: Tailwind CSS
  • AI: Google Gemini 2.5 Flash
  • Deployment: GitHub + Vercel
  • Platform: Progressive Web App (PWA) on iPhone

🎁 Bonus: Build This Yourself (The Master Prompt)

Want to build your own version? Copy and paste the prompt below into Google AI Studio, Gemini Advanced, or ChatGPT Canvas. I have sanitized it so you can add your own app name and style.

[BEGIN PROMPT]

Role: Act as an Expert Full-Stack Developer and UI/UX Designer. Goal: Build a Progressive Web App (PWA) for daily nutrition tracking.

1. App Identity

  • App Name: [Insert Your App Name, e.g., MyTracker]
  • Logo: [Describe your logo, e.g., Blue Lightning Bolt]
  • Theme: [Insert preference, e.g., Dark Mode with Neon Blue accents]

2. Core Functionality (The Logic)

  • Multimodal Input: Create a single input box that accepts both Text and Images (Camera/Gallery).
  • Logic: The AI must analyze the text or image to extract nutritional macros (Calories, Protein, Carbs, Fats, Sugar).
  • Composite Entry: Allow multiple photo uploads for one meal (e.g., Photo A + Photo B) and append the values to a single total before logging.
  • Time Travel Logic:
    • If the user specifies a time (e.g., “Ate eggs at 9 AM”), log it for that specific time/date.
    • If no time is specified, use the current system timestamp.
  • Calendar: Allow users to view, add, or delete logs for previous dates.
  • Saved Meals (The “Chef’s Hat”):
    • Create a “Saved Meals” tab to store frequently used items.
    • Recall: If I input a keyword matching a saved meal, auto-fill the macros from memory.

3. Data & Privacy

  • Client-Side Only: All data (logs, settings, API keys) must be stored in the phone’s LocalStorage. No external database.
  • BYO-Key: On the first launch, force a “Setup Screen” asking for the user’s Google API Key. Store this securely locally.
  • Model Auto-Detection: The app should automatically test and select the working model (e.g., Gemini 2.5 Flash) using the key.

4. UI/UX Structure

  • Dashboard: Display a “Total Calories” counter and a 10-day trend line graph (Date vs. Calories).
  • Input Area: Place the text box and camera icon at the bottom for easy thumb access.
  • Settings: Include options for Diagnostics, Factory Reset, and API Key management.
  • Footer: [Optional] Display text: “Built by [Your Name]” at the bottom of the page.

5. Technical Constraints

  • Ensure the code is a single HTML/React structure that is easy to deploy on Vercel or Netlify.
  • Prioritize low latency and zero-cost operation.

[END PROMPT]
