
How to Add SEO to Lovable/Replit/AI Coded Sites

Vibe coded your site? Here's how to add SEO to it.

Updated this week

Rank on Google with any AI coded project using these exact strategies + prompts.

How SEO works in AI coded sites

SEO is fully possible in AI coded sites once your project is structured for it.

By default, providers like Lovable and Replit give you:

• Fast load times
• Clean code
• Mobile-first layouts

That means Google can crawl your pages relatively easily.

What doesn’t happen automatically:

• Page intent

• Keyword targeting

• Metadata strategy

• Content hierarchy

If you don’t structure pages for SEO, Google has nothing to rank.

Think of the AI coding tool as the rendering engine. You still control how search engines interpret your pages.

Page titles & meta descriptions

Google reads these elements first.

Every page should have:

• A unique <title>

• A clear meta description

• Keyword intent that matches the page content

Prompt your AI like this:

“Add custom SEO fields for title and meta description per page. Keep titles under 60 characters and meta descriptions under 160.”

Best practice:

• Home → brand + core value
• Product → what it does + who it’s for
• Blog → problem + outcome

Never duplicate titles or descriptions across pages.
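For example, a reusable per-page SEO component might look like this. This is a minimal sketch assuming the React + TypeScript stack Lovable and Replit typically generate, plus the react-helmet-async package — the component and prop names are placeholders, so adapt it to whatever head management your project already uses:

```tsx
// Sketch only: assumes react-helmet-async is installed and a <HelmetProvider>
// wraps the app root. "Seo" and its props are placeholder names.
import { Helmet } from "react-helmet-async";

type SeoProps = {
  title: string;       // keep under ~60 characters
  description: string; // keep under ~160 characters
};

export function Seo({ title, description }: SeoProps) {
  return (
    <Helmet>
      <title>{title}</title>
      <meta name="description" content={description} />
    </Helmet>
  );
}

// Example usage on a pricing page:
// <Seo
//   title="Acme – Simple Pricing for Small Teams"
//   description="Compare Acme's plans and find the one that fits your team."
// />
```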

Clean URLs

Messy URLs kill crawlability.

Bad example: /page?id=8392xk

Good examples: /pricing, /blog/seo-with-lovable


Prompt your AI:

“Generate clean, readable, SEO-friendly slugs for all pages instead of random IDs.”

Short, descriptive URLs are generally better.
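If you want to see what a “clean slug” means in code, here is a small sketch — the slugify helper below is hypothetical, not something Lovable or Replit ships:

```ts
// Turns a page or post title into a short, readable, SEO-friendly path segment.
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize("NFKD")               // split accented characters
    .replace(/[\u0300-\u036f]/g, "") // drop diacritics
    .replace(/[^a-z0-9\s-]/g, "")    // remove punctuation
    .trim()
    .replace(/\s+/g, "-")            // spaces -> hyphens
    .replace(/-+/g, "-");            // collapse repeated hyphens
}

// slugify("SEO with Lovable")   -> "seo-with-lovable"
// slugify("  Pricing & Plans ") -> "pricing-plans"
```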


Headings

Search engines don’t “read” design. They read structure.


Correct hierarchy:

• H1 → main keyword (1 per page)
• H2 → supporting sections
• H3 → sub-points


Prompt your AI:

“Generate a semantic heading hierarchy with exactly one H1 per page and structured H2/H3 sections.”
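Here is roughly what that hierarchy looks like in the generated markup — illustrative TSX with placeholder copy:

```tsx
// One H1 per page, H2s for supporting sections, H3s for sub-points.
export function GuidePage() {
  return (
    <article>
      {/* Exactly one H1, carrying the main keyword */}
      <h1>How to Add SEO to AI-Coded Sites</h1>

      <section>
        {/* H2 for a supporting section */}
        <h2>Page titles and meta descriptions</h2>
        {/* H3s for sub-points within that section */}
        <h3>Title length</h3>
        <h3>Description length</h3>
      </section>

      <section>
        <h2>Clean URLs</h2>
      </section>
    </article>
  );
}
```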

Schema markup

Schema tells Google what your page is.

This is how you get rich results such as:

• FAQ dropdowns
• Star ratings
• Pricing snippets

Prompt Lovable:

“Add JSON-LD schema based on page type: BlogPosting, Product, Organization, or FAQ.”


This is one of the highest leverage SEO upgrades you can make.
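For example, a blog post page might ship JSON-LD like this. It is a sketch with placeholder values, and it assumes your project can render a script tag into the HTML that actually gets served (via react-helmet-async, a framework head API, or build-time templating — whichever your project already supports):

```tsx
// Sketch only: all field values below are placeholders.
const blogPostingSchema = {
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  headline: "How to Add SEO to Lovable/Replit/AI Coded Sites",
  description: "Rank on Google with any AI-coded project.",
  author: { "@type": "Organization", name: "Your Company" },
  datePublished: "2025-01-01",
};

export function BlogPostingSchema() {
  // React needs dangerouslySetInnerHTML to embed raw JSON inside a <script> tag.
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(blogPostingSchema) }}
    />
  );
}
```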

Site speed (important)

Google rewards fast sites.

Software like Lovable/Replit already helps here, but you can push it further.

Do this:

• Optimize images (compress large files)

• Lazy-load media (defer offscreen images and video)

• Avoid unnecessary scripts

Prompt your AI:

“Optimize all images for web delivery, target under 100kb when possible, and enable lazy loading.”
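In the generated code, lazy loading and compressed assets can be as simple as standard HTML attributes — nothing Lovable/Replit specific. In the sketch below, the file name and dimensions are placeholders:

```tsx
// - src points at a pre-compressed asset (ideally well under 100 kB)
// - explicit width/height prevent layout shift
// - loading="lazy" defers offscreen images with no extra JavaScript
export function HeroImage() {
  return (
    <img
      src="/images/hero-compressed.webp"
      alt="Product dashboard screenshot"
      width={1200}
      height={630}
      loading="lazy"
      decoding="async"
    />
  );
}
```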


Speed improves:

• SEO

• Conversions

• Retention


Internal linking

Every page should link to other relevant pages on your site.

Why is this important?

• Helps Google discover content

• Distributes authority

• Keeps users engaged longer


Prompt your AI:

“Auto-suggest internal links to related posts, features, or products on every content page.”
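One simple way to implement this is a “related links” block on every content page. This is a hypothetical sketch — the component and prop names are placeholders:

```tsx
// Rendering a block like this on every content page gives crawlers internal
// paths to follow and passes authority between related pages.
type RelatedLink = { href: string; label: string };

export function RelatedLinks({ links }: { links: RelatedLink[] }) {
  if (links.length === 0) return null;

  return (
    <nav aria-label="Related pages">
      <h2>Related</h2>
      <ul>
        {links.map((link) => (
          <li key={link.href}>
            <a href={link.href}>{link.label}</a>
          </li>
        ))}
      </ul>
    </nav>
  );
}

// Example usage:
// <RelatedLinks
//   links={[
//     { href: "/pricing", label: "Pricing" },
//     { href: "/blog/seo-with-lovable", label: "SEO with Lovable" },
//   ]}
// />
```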


Content strategy

High-performing pages usually do one of two things well:

• Answer a specific question

• Solve one clear problem

Structure content like this:

• Problem → explanation → solution
• Clear headings
• Skimmable sections
• Actionable takeaways


Lovable and Replit make this easy because content and structure live together. You can rank just like any custom-coded site.

Important technical note: crawlers vs client-side rendering

Many web scrapers and SEO crawlers do not wait for client-side JavaScript to run and hydrate the page.


They:

• Fetch the initial HTML
• Parse immediately
• Move on


This means anything applied after page load (client-side JS, late metadata injection, dynamic content) may never be seen.

If any of the following are injected after the page is served, crawlers may miss them entirely:

• Meta tags
• Headings
• Schema
• Core content

That’s why this matters in Lovable/Replit: SEO-critical elements must exist in the initial server-rendered HTML, not be added later.


Unless you’re using:

• Server-side rendering
• A prerendering proxy (e.g. a static snapshot served to bots)

client-only SEO tweaks are unreliable.

Best practice in Lovable/Replit:

• Define titles, meta, headings, and schema at build/render time
• Avoid “after-load” SEO logic
• Treat Google like a scraper, not a user
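A quick way to verify this is to fetch the raw HTML the way a non-rendering crawler does and check for your SEO-critical tags. The sketch below assumes Node 18+ (built-in fetch), and the URL is a placeholder:

```ts
// Grabs the HTML exactly as served — before any client-side JavaScript runs —
// and checks whether SEO-critical tags are already present.
async function checkInitialHtml(url: string): Promise<void> {
  const res = await fetch(url);
  const html = await res.text();

  const checks = {
    title: /<title>[^<]+<\/title>/i.test(html),
    metaDescription: /<meta[^>]+name=["']description["']/i.test(html),
    h1: /<h1[\s>]/i.test(html),
    jsonLdSchema: /application\/ld\+json/i.test(html),
  };

  console.log(checks);
  // Anything reported false here only exists after client-side rendering,
  // and many crawlers will never see it.
}

checkInitialHtml("https://example.com/").catch(console.error);
```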
