I rebuilt my personal site last week. New design, new stack, new everything. It looked great. Dark mode worked. The newsletter signup was hooked up.
But when I ran a GEO audit on it, I realized something. AI search engines like ChatGPT, Claude, Gemini, and Perplexity had no idea I existed.
No robots.txt. No structured data. No llms.txt. No sitemap. If someone asked an AI "who is Vinod Sharma" or "product builders using AI coding tools," my site would never show up in the answer.
This is the story of how I fixed that in one build session using Claude Code.
What is GEO?
GEO stands for Generative Engine Optimization. It is like SEO, but for AI search engines instead of Google.
When someone asks ChatGPT a question, it pulls information from websites it has crawled. If your site is not structured in a way that AI crawlers can read and understand, you are invisible.
There are 10 things that matter for AI search visibility:
- Content Structure (25%) - Clear headings, lists, FAQ sections
- Content Depth (20%) - Statistics, quotes, 1500+ words
- Technical Setup (15%) - Server-rendered HTML, clean URLs
- AI Crawler Access (10%) - robots.txt rules, llms.txt file
- Structured Data (10%) - JSON-LD schema markup
- Trust Signals (8%) - Author info, credentials, social links
- Meta Tags (5%) - Title, description, OG images
- Navigation (3%) - Sitemap, breadcrumbs, internal links
- Geographic (2%) - Location data for local queries
- Voice Assistant (2%) - Speakable content, FAQ schema
I scored about 15 out of 100 before I started. Here is what I did to get to 78.
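To make the weighting concrete: the overall score is just the sum of (category score / 10) × weight across the ten categories. Here is a minimal TypeScript sketch of that arithmetic — the helper is illustrative, not GEOScore's actual implementation, and the per-category scores are example values:

```typescript
// Weighted GEO score: each category contributes (score / 10) * weight.
// Weights are the percentages listed above; scores are example values out of 10.
const categories: Array<{ name: string; weight: number; score: number }> = [
  { name: "Content Structure", weight: 25, score: 8 },
  { name: "Content Depth", weight: 20, score: 6 },
  { name: "Technical Setup", weight: 15, score: 9 },
  { name: "AI Crawler Access", weight: 10, score: 10 },
  { name: "Structured Data", weight: 10, score: 8 },
  { name: "Trust Signals", weight: 8, score: 8 },
  { name: "Meta Tags", weight: 5, score: 9 },
  { name: "Navigation", weight: 3, score: 6 },
  { name: "Geographic", weight: 2, score: 4 },
  { name: "Voice Assistant", weight: 2, score: 7 },
];

function overallScore(cats: typeof categories): number {
  const total = cats.reduce((sum, c) => sum + (c.score / 10) * c.weight, 0);
  return Math.round(total); // → 78 for the example scores above
}
```

The takeaway from the weights: a perfect robots.txt (10%) cannot compensate for thin content (45% combined), which is why the content work matters most.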
The Before: What Was Missing
My old site (vinodsharma.co) was a simple portfolio built with Next.js and Tailwind. It had my name, my startups, some testimonials, and a contact section.
What it did NOT have:
- No robots.txt file. AI crawlers had no explicit permission to index the site.
- No llms.txt file. No way for AI models to understand who I am or what I do.
- No JSON-LD structured data. No Person schema, no FAQPage schema, nothing machine-readable.
- No sitemap. Search engines had to guess which pages existed.
- No OG images or Twitter cards configured properly.
- No canonical URLs set.
- No security headers.
- No RSS feed.
The content was there. The infrastructure to make it discoverable was completely missing.
Stage 1: Start with a Strong Foundation
I did not build the SEO infrastructure from scratch. I had already built it for sucana.ai (the company I am building with my co-founders Virgil and Victor). That site had everything: robots.txt, sitemap, llms.txt, JSON-LD schemas, RSS feed, security headers, IndexNow integration.
So I cloned the sucana.ai codebase and used it as my starting point.
In Claude Code, I said:
"I made a copy of www.sucana.ai at vinodsharma.co-new. Help me strip the Sucana content and rebrand it for my personal site."
This gave me all the SEO/GEO infrastructure for free. Dynamic sitemap, robots.txt with AI crawler rules, llms.txt, RSS feed, security headers, blog system with MDX support. All working out of the box.
Why this matters: Building SEO infrastructure from scratch takes days. Starting from a working codebase that already has it saves you all that time.
Stage 2: Add robots.txt with AI Crawler Rules
The robots.txt file tells crawlers what they can and cannot access on your site. Most personal sites either do not have one (which means default allow) or have a basic one that does not mention AI crawlers at all.
My new robots.txt explicitly allows every major AI crawler by name:
User-agent: GPTBot
Allow: /
User-agent: ClaudeBot
Allow: /
User-agent: PerplexityBot
Allow: /
User-agent: Google-Extended
Allow: /
User-agent: Amazonbot
Allow: /
User-agent: CCBot
Allow: /
There are 12 AI crawlers listed in total, including OAI-SearchBot, Claude-SearchBot, ChatGPT-User, Perplexity-User, and Applebot-Extended.
The sitemap URL is also referenced at the bottom:
Sitemap: https://vinodsharma.co/sitemap.xml
Why this matters: Explicitly naming AI crawlers tells them "yes, you are welcome here." Some sites block AI crawlers by default. Naming them removes any ambiguity.
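If you would rather generate the file than hand-write it, the rules reduce to a simple string builder. A sketch — the `buildRobotsTxt` helper and the crawler list are my own naming (the list covers the crawlers named in this post, not a complete registry):

```typescript
// Hypothetical helper that emits robots.txt text allowing named AI crawlers.
const AI_CRAWLERS = [
  "GPTBot", "OAI-SearchBot", "ChatGPT-User",
  "ClaudeBot", "Claude-SearchBot",
  "PerplexityBot", "Perplexity-User",
  "Google-Extended", "Applebot-Extended",
  "Amazonbot", "CCBot",
];

function buildRobotsTxt(sitemapUrl: string): string {
  // One "User-agent / Allow" block per crawler, sitemap reference at the end.
  const blocks = AI_CRAWLERS.map((bot) => `User-agent: ${bot}\nAllow: /`);
  return [...blocks, `Sitemap: ${sitemapUrl}`].join("\n\n");
}
```

In Next.js App Router, the same output can come from an `app/robots.ts` file that returns a `MetadataRoute.Robots` object, which keeps the crawler list in code instead of a static file.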
Stage 3: Add Person JSON-LD Schema
JSON-LD is structured data that helps search engines and AI models understand what your page is about. For a personal site, the most important schema is Person.
I added this to my homepage:
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Vinod Sharma",
  "url": "https://vinodsharma.co",
  "image": "https://vinodsharma.co/images/profile.png",
  "jobTitle": "Product Builder",
  "description": "Product builder who shipped 9+ products using AI coding tools. 25+ years in technology.",
  "knowsAbout": [
    "AI coding", "product development", "Next.js",
    "TypeScript", "Tailwind CSS", "startup building",
    "micro-SaaS", "Claude Code", "Vercel"
  ],
  "alumniOf": [
    {
      "@type": "CollegeOrUniversity",
      "name": "Webster University",
      "location": "St Louis, MO"
    },
    {
      "@type": "CollegeOrUniversity",
      "name": "North Maharashtra University"
    }
  ],
  "sameAs": [
    "https://x.com/VinodSharma10x",
    "https://linkedin.com/in/vinodsharma10x",
    "https://www.youtube.com/@vinod.sharma",
    "https://vinodsharma.substack.com/"
  ]
}
This tells AI models: here is a person named Vinod Sharma, he is a Product Builder, he knows about these 9 topics, he went to these universities, and here are his social profiles.
Why this matters: When someone asks an AI "who builds micro-SaaS products with Claude Code," the knowsAbout field makes it more likely your name comes up.
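For the schema to be picked up, it has to be serialized into a `<script type="application/ld+json">` tag in the page's HTML. A framework-agnostic sketch — the `jsonLdScript` helper is my own naming:

```typescript
// Serialize a schema.org object into the <script> tag crawlers look for.
// Escaping "<" as \u003c prevents "</script>" inside string values from
// breaking out of the tag (the same trick Next.js uses when serializing).
function jsonLdScript(schema: Record<string, unknown>): string {
  const json = JSON.stringify(schema).replace(/</g, "\\u003c");
  return `<script type="application/ld+json">${json}</script>`;
}

const person = {
  "@context": "https://schema.org",
  "@type": "Person",
  name: "Vinod Sharma",
  url: "https://vinodsharma.co",
};
```

In a React component you would render the same string via `dangerouslySetInnerHTML`; the escaping above is what makes that safe to do.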
Stage 4: Add FAQPage Schema
I already had a FAQ section on my homepage with 6 questions and answers. But without FAQPage schema markup, AI models treat it as regular text.
Adding the schema wraps each question and answer in a structured format:
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How do you build startups in 2 hours a day?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "The key is extreme focus and leveraging AI tools..."
      }
    }
  ]
}
Research shows that pages with FAQ sections are 2x more likely to be cited by AI models. The Q&A format matches how people ask questions to AI assistants.
Why this matters: FAQ content is one of the highest-impact things you can add for AI visibility. It directly matches the question-answer format that AI search uses.
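If your FAQ content already lives in an array (mine does, since the section is rendered from data), the schema can be generated rather than hand-maintained. A sketch, assuming a simple `{ q, a }` shape for each entry — the `faqPageSchema` helper is my own naming:

```typescript
// Build FAQPage schema from a list of question/answer pairs.
type Faq = { q: string; a: string };

function faqPageSchema(faqs: Faq[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map(({ q, a }) => ({
      "@type": "Question",
      name: q,
      acceptedAnswer: { "@type": "Answer", text: a },
    })),
  };
}
```

This keeps the visible FAQ section and the machine-readable schema in sync from a single source of truth, so editing a question never leaves the markup stale.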
Stage 5: Add llms.txt
The llms.txt file is a relatively new standard. It is a plain text file at the root of your site that tells AI models who you are and what your site is about. Think of it as a cover letter for AI crawlers.
Here is what mine looks like (abbreviated):
# Vinod Sharma
> Vinod Sharma is a product builder based in Florida who builds
> startups using AI coding tools. He came back to coding after
> 12 years in management and has shipped 9+ products while
> working full-time.
## What I Build
- Sucana — AI-powered analytics platform for marketing agencies
- Part Time Founders — Community for aspiring entrepreneurs
- GEOScore — AI search visibility analyzer (open source)
## Expertise
- AI-assisted product development (Claude Code, Cursor, V0, Bolt)
- Next.js, TypeScript, Tailwind CSS, Vercel
## Newsletter
Build Notes — I am building Sucana and sharing everything I learn
through my newsletter and YouTube channel.
## Contact
- Twitter/X: https://x.com/VinodSharma10x
- LinkedIn: https://linkedin.com/in/vinodsharma10x
- YouTube: https://www.youtube.com/@vinod.sharma
I also added a <link rel="llms" href="/llms.txt" /> tag in the HTML head so crawlers can discover it.
Why this matters: llms.txt has no proven citation impact yet (the standard is too new), but it costs nothing to add and gives AI models a clean, curated summary of who you are.
Stage 6: Add a Dynamic Sitemap
A sitemap tells search engines which pages exist on your site and how important they are. My sitemap is generated dynamically in Next.js:
import type { MetadataRoute } from "next";

export default function sitemap(): MetadataRoute.Sitemap {
  return [
    { url: SITE_URL, priority: 1.0, changeFrequency: "weekly" },
    { url: `${SITE_URL}/blog`, priority: 0.8, changeFrequency: "daily" },
    { url: `${SITE_URL}/about`, priority: 0.7, changeFrequency: "monthly" },
    { url: `${SITE_URL}/resources`, priority: 0.7, changeFrequency: "monthly" },
    ...blogEntries,
  ];
}
Every time I add a blog post, it automatically appears in the sitemap. The sitemap URL is referenced in robots.txt so crawlers find it immediately.
I also set up an IndexNow integration that pings Bing daily with all my URLs, so new content gets indexed faster.
Why this matters: Without a sitemap, crawlers have to discover pages by following links. A sitemap gives them the complete map upfront.
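On the IndexNow piece: the ping is a single POST to api.indexnow.org with your host, key, and URL list. A sketch of the payload — `buildIndexNowPayload` is my own naming, and the key and key-file location are per-site values you generate yourself:

```typescript
// IndexNow submission payload; POST it as JSON to https://api.indexnow.org/indexnow
function buildIndexNowPayload(host: string, key: string, urls: string[]) {
  return {
    host,
    key,
    // By convention the key is proven by hosting a {key}.txt file on your domain.
    keyLocation: `https://${host}/${key}.txt`,
    urlList: urls,
  };
}

// Usage sketch (run from a daily cron; the key here is a placeholder):
// await fetch("https://api.indexnow.org/indexnow", {
//   method: "POST",
//   headers: { "Content-Type": "application/json; charset=utf-8" },
//   body: JSON.stringify(
//     buildIndexNowPayload("vinodsharma.co", "your-key", ["https://vinodsharma.co/blog"])
//   ),
// });
```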
Stage 7: Meta Tags and Open Graph
I added full Open Graph and Twitter Card metadata to every page:
- og:title, og:description, og:image (1200x630)
- twitter:card: summary_large_image
- twitter:creator: @VinodSharma10x
- Canonical URLs on every page
- Proper robots directives
These do not directly affect AI search, but they improve how your site appears when shared on social media and in traditional search results. They also signal to crawlers that the site is well-maintained.
Stage 8: Security Headers
Security headers are not directly about AI visibility, but they signal quality. I added:
- Content Security Policy (CSP)
- Strict-Transport-Security (HSTS)
- X-Frame-Options
- X-Content-Type-Options
- Referrer-Policy
- Permissions-Policy
Well-maintained sites with proper security headers tend to rank better across all search engines, including AI-powered ones.
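These headers can be expressed as a single key/value map and applied to every route. A sketch — the values below are common defaults, not the only valid ones, and the CSP in particular is a placeholder you would tune for your own scripts and analytics:

```typescript
// Security headers as key/value pairs. In Next.js you would return these
// from the async headers() function in next.config for source "/(.*)".
const SECURITY_HEADERS: Record<string, string> = {
  // Placeholder policy: a real CSP needs entries for your scripts, fonts, etc.
  "Content-Security-Policy": "default-src 'self'",
  "Strict-Transport-Security": "max-age=63072000; includeSubDomains; preload",
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
  "Referrer-Policy": "strict-origin-when-cross-origin",
  "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
};
```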
The After: GEO Score 78/100
After implementing all these changes, I ran a GEO audit using GEOScore (an open-source tool I built at geoscore.sucana.ai).
Here are the results:
| Category | Score | Weight |
| --- | --- | --- |
| Content Structure | 8/10 | 25% |
| Content Depth | 6/10 | 20% |
| Technical Discoverability | 9/10 | 15% |
| AI Crawler Access | 10/10 | 10% |
| Structured Data | 8/10 | 10% |
| E-E-A-T Signals | 8/10 | 8% |
| Meta Tags & OG | 9/10 | 5% |
| Navigation | 6/10 | 3% |
| Geographic | 4/10 | 2% |
| Voice & Assistant | 7/10 | 2% |
| Overall | 78/100 | |
AI Crawler Access scored a perfect 10. All 12 major AI crawlers are explicitly allowed. llms.txt exists. Sitemap is referenced.
The weakest area is Content Depth at 6/10. The homepage has about 1,200 words, which is below the 1,500 word target. Adding blog posts (like this one) will push that score higher.
What I Would Do Next
To push from 78 to 85+, here is what is still on my list:
- Add blog posts. The blog system is built and ready, but it had zero posts until now. Content is the single biggest driver of AI citations.
- Add statistics as visible text. Numbers like "9+ products shipped" and "25+ years in tech" exist in my schema but not as visible text on the homepage. Adding them would give AI models more to cite.
- Add BreadcrumbList schema. A simple addition that helps AI models understand site hierarchy.
- Add case studies to the sitemap. I have two case studies (Nintex and Trend Micro) that are not yet in the sitemap.
- Enrich product descriptions. Each product card has one sentence. Adding metrics and details would make them more citable.
The Stack
Here is everything I used for this build:
- Claude Code for all coding and implementation
- Next.js 16 (App Router) for the site framework
- Tailwind CSS 4 for styling
- GEOScore (geoscore.sucana.ai) for the before/after audit
- Vercel for hosting and deployment
- ConvertKit (Kit) for newsletter integration
The Bottom Line
Adding AI search visibility to a personal site is not hard. The infrastructure (robots.txt, schema, llms.txt, sitemap) takes a few hours to set up. The hard part is the content. You need depth, statistics, and structured information that AI models want to cite.
If you want to check your own site's AI visibility, try GEOScore. It is free, open source, and gives you a score out of 100 with copy-paste fixes.