
Search intelligence API for AI agents

  • 10 supported search surfaces in one client.
  • Structured results plus LLM-ready Markdown and HTML for top matches.
  • Normalized fields for prices, ratings, coordinates, and citations.
  • Proxy-backed requests from the first call.

One API for recurring search workflows.

Search keeps the output consistent so monitoring jobs, SEO tooling, and AI agents need less parser logic.

Google Search

Organic results with related searches, people also ask, and knowledge graph entities.

const google = require('@microlink/google')({
  apiKey: process.env.MICROLINK_API_KEY
})

const page = await google('site:developer.mozilla.org fetch', {
  type: 'search'
})

console.log(page.results)
[
  {
    "title": "Fetch API - MDN Web Docs",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API",
    "description": "The Fetch API provides an interface for fetching resources (including across the network). It is a more powerful and flexible replacement for XMLHttpRequest."
  },
  {
    "title": "Using the Fetch API - MDN Web Docs",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch",
    "description": "The Fetch API provides a JavaScript interface for making HTTP requests and processing the responses. Fetch is the modern replacement for XMLHttpRequest."
  },
  {
    "title": "Request - Web APIs - MDN Web Docs",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Request",
    "description": "The Request interface of the Fetch API represents a resource request. You can create a new Request object using the Request() constructor."
  },
  {
    "title": "Using Deferred Fetch - Web APIs | MDN",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Deferred_Fetch",
    "description": "The fetchLater() API extends the Fetch API to allow setting fetch requests up in advance. These deferred fetches can be updated before they have ..."
  },
  {
    "title": "Response - Web APIs - MDN Web Docs - Mozilla",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Response",
    "description": "The Response interface of the Fetch API represents the response to a request. You can create a new Response object using the Response() constructor."
  },
  {
    "title": "Window: fetch() method - Web APIs | MDN",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Window/fetch",
    "description": "The fetch() method of the Window interface starts the process of fetching a resource from the network, returning a promise that is fulfilled once the response ..."
  },
  {
    "title": "Background Fetch API - MDN Web Docs",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/Background_Fetch_API",
    "description": "The Background Fetch API provides a method for managing downloads that may take a significant amount of time such as movies, audio files, ..."
  },
  {
    "title": "Web APIs - MDN Web Docs - Mozilla",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API",
    "description": "Below is a list of all the APIs and interfaces (object types) that you may be able to use while developing your Web app or site."
  },
  {
    "title": "ServiceWorkerGlobalScope: fetch event - Web APIs | MDN",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/ServiceWorkerGlobalScope/fetch_event",
    "description": "The fetch event of the ServiceWorkerGlobalScope interface is fired in the service worker's global scope when the main app thread makes a network request."
  },
  {
    "title": "WorkerGlobalScope: fetch() method - Web APIs | MDN",
    "url": "https://developer.mozilla.org/en-US/docs/Web/API/WorkerGlobalScope/fetch",
    "description": "The fetch() method of the WorkerGlobalScope interface starts the process of fetching a resource from the network, returning a promise that is fulfilled once ..."
  }
]

Built for retrieval loops, not just result pages.

Search stays lightweight on the first pass so technical workflows can stay fast under real production load.

  • A. The .markdown() helper

    Ship LLM-ready Markdown

    RAG pipelines rarely want raw HTML. They want cleaner text that is easier to embed, rerank, cite, and pass into prompts without wasting context on navigation or markup noise.

    • Use .markdown() when the model needs readable, prompt-ready context.
    • Keep .html() for DOM-aware extraction or custom downstream parsing.
  • B. The two-step retrieval model

    Lazy-load the web

    Search works best as a two-step system: lightweight results first, deeper content second. That keeps the browse step snappy, then spends the heavier extraction cost only where confidence is already high.

    • Browse structured results at roughly search latency instead of fetching every page in full up front.
    • Shortlist the top 3 sources, then call .markdown() or .html() only for those winners.
    • Keep recurring jobs faster and cheaper because enrichment is opt-in, not mandatory.
  • C. Advanced operators

    Turn Search into a document discovery engine

    Combine operators like site: and filetype: to hunt for papers, docs, filings, changelogs, or PDFs before you enrich anything. That gives technical teams much higher precision from the first query.

    • site:arxiv.org "deep learning" filetype:pdf

    The Google Search example at the top of this page takes the same approach: an operator-driven query makes the workflow read like real technical research instead of a generic web search.
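
    The two-step model above can be sketched in a few lines. This is an illustrative helper, not part of the client: `enrichTopResults` is a hypothetical name, and it assumes the client call shape and the per-result `.markdown()` helper shown elsewhere on this page.

    ```javascript
    // Hypothetical helper sketching the browse-then-enrich loop.
    // `searchClient` stands for an initialized @microlink/google client.
    async function enrichTopResults (searchClient, query, limit = 3) {
      // Step 1: browse lightweight structured results at search latency.
      const page = await searchClient(query, { type: 'search' })

      // Step 2: shortlist the winners and pay the extraction cost only there.
      const shortlist = page.results.slice(0, limit)
      return Promise.all(shortlist.map(result => result.markdown()))
    }

    // Usage sketch (assumes MICROLINK_API_KEY is set):
    // const google = require('@microlink/google')({ apiKey: process.env.MICROLINK_API_KEY })
    // const docs = await enrichTopResults(google, 'site:arxiv.org "deep learning" filetype:pdf')
    ```

    Dependency injection of the client keeps the shortlist logic reusable across all supported surfaces.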

Integrate Search without scraper debt.

Initialize once, choose the surface you need, then paginate or enrich only when a workflow needs more context.

STEP 01

Install and initialize

Install `@microlink/google`, add your Microlink API key, and create one client you can reuse across every supported search surface.

pnpm add @microlink/google

export MICROLINK_API_KEY=your_api_key

STEP 02

Run the first query

Choose the surface you need with the `type` option and keep the same client shape for search, news, images, maps, shopping, and more.

const google = require('@microlink/google')({
  apiKey: process.env.MICROLINK_API_KEY
})
      
const page = await google('technical seo checklist', {
  type: 'search',
  location: 'us',
  period: 'week'
})

STEP 03

Lazy-load the web

Keep the first pass fast, then enrich only the winners. Browse lightweight result pages first and call `.markdown()` or `.html()` only for the top matches that deserve deeper inspection.

  • Any result with a URL exposes `.markdown()` for LLM-ready Markdown on demand.
  • Call `.html()` only when your workflow actually needs raw page markup.
  • Call `.next()` on any result page to fetch the next one.
  • Scan results at ~1s latency, then enrich only the top 3 matches.
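
Chained pagination can be sketched as a small loop. `collectPages` is an illustrative helper of our own; it only assumes that each result page exposes `results` and a `.next()` that resolves to the following page (or a falsy value when exhausted), as described above.

```javascript
// Illustrative sketch: gather results across pages by chaining .next().
// `firstPage` stands for any result page returned by the client.
async function collectPages (firstPage, maxPages = 3) {
  const all = []
  let page = firstPage
  for (let i = 0; i < maxPages && page; i++) {
    all.push(...page.results)      // keep the lightweight structured results
    page = await page.next()       // lazily fetch the following page
  }
  return all
}
```

Capping `maxPages` keeps recurring jobs predictable: pagination stays opt-in instead of crawling every page by default.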

Paid from the first request.

Search has no free tier because reliable result collection depends on managed proxy capacity, regional routing, and production safeguards on every call.

Pro

€39
/month
46,000 requests/month
Managed proxy-backed requests
10 supported search surfaces
Structured normalized results
Location and period controls
Pagination with .next()
Optional page Markdown or HTML via .markdown() and .html()

Pick a plan, then plug Search into your workflow.

Install the client, add your API key, and start shipping rank tracking, news monitoring, local research, or agent enrichment flows without building proxy infrastructure in-house.

Paid from day one
Managed proxy layer included
Built for SEO and AI workflows

Product Information

Everything you need to know about Microlink Search, pricing, and supported search surfaces.

Microlink Search is a paid search intelligence API for querying and normalizing public results from multiple Google surfaces through one product.
@microlink/google is the Node.js client for integrating Search into your own SEO tooling, monitoring jobs, and AI workflows.

Is this an official Google product?

No. Search is an independent Microlink product that works on top of public Google surfaces.
It is not affiliated with, endorsed by, or provided by Google.

Why is there no free tier?

Search starts on paid plans because reliable public-result collection depends on managed proxy capacity from the first request.
That cost is part of the product itself, so even small workloads use the same proxy-backed delivery model as production workloads.

Which surfaces are supported?

You can query Google Search, Google News, Google Images, Google Videos, Google Places, Google Maps, Google Shopping, Google Scholar, Google Patents, and Google Autocomplete.
Each one keeps the same client shape so teams can ship faster with less parser logic and less provider-specific branching.

What makes this different from a generic SERP API?

Search is designed around normalized output and reusable primitives instead of raw provider-specific payloads.
That means less cleanup for your codebase and faster handoff into rank tracking, market research, or agent pipelines.

How do pagination and HTML enrichment work?

Every result page can call `.next()` to fetch the following page, so pagination can be chained naturally.
Any result containing a URL can also expose `.html()` so you only fetch page markup when a workflow actually needs it.
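
Combining the two primitives looks roughly like this. `htmlForDomain` is a hypothetical helper for illustration; it assumes only the `results`, `.next()`, and per-result `.html()` shapes described in this answer.

```javascript
// Sketch: walk pages with .next() and fetch markup only for a match.
// `page` stands for any result page returned by the client.
async function htmlForDomain (page, domain) {
  while (page) {
    const hit = page.results.find(result => result.url.includes(domain))
    if (hit) return hit.html()   // enrich only the matching result
    page = await page.next()     // otherwise keep browsing lightweight pages
  }
  return null                    // domain never appeared
}
```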

Is it a fit for SEO and AI workflows?

Yes. Teams use Search for rank tracking, news monitoring, local research, query clustering, citation discovery, and agent enrichment.
The value is consistent structured output plus proxy-backed delivery for recurring public-result collection.

Can I run international or local queries?

Yes. You can use options like `location` and `period` to tune regional intent and recency for multilingual SEO and geo-specific analysis.
That makes the same integration model useful for local search intelligence as well as broader monitoring workflows.

Google is a trademark of Google LLC. Microlink Search is an independent product and is not affiliated with or endorsed by Google.