DeepAgent Course Lesson 1: Introduction to Autonomous Reasoning

Nam Hoang / Feb 04, 2026

8 min read

Welcome to the first practical lesson of our DeepAgent series. In this guide, we are moving beyond basic "question-and-answer" chatbots. We are going to build an autonomous researcher—an agent that doesn't just give you a pre-trained response but actually goes out to the internet, gathers fresh data, and organizes it for you.

To achieve this, we will use the DeepAgents library combined with Google’s Gemini 2.0 model. This setup allows the agent to utilize a "reasoning loop," where it creates a plan, executes searches, and synthesizes a final report. This is the foundation of building AI that can handle complex, multi-step tasks without constant human intervention.

By the end of this article, you will have a fully functional project structure capable of running advanced research queries. We will split our code into two parts: the search tool and the agent logic.

I. Prerequisites and Environment Setup

Before we write any code, we need to set up our environment. DeepAgents requires a model that supports tool calling, and we need an API key for search capabilities. For this lesson, we will use Tavily as our search engine.

First, install the necessary dependencies in your project:

npm install deepagents @langchain/core @langchain/google-genai @langchain/tavily zod dotenv

Next, create a .env file in your root directory and add your API keys. Never share these keys or commit them to version control.

GEMINI_API_KEY="your_gemini_api_key_here"
TAVILY_API_KEY="your_tavily_api_key_here"
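A missing key typically only surfaces later, deep inside a tool call, so it can help to fail fast at startup. Here is a minimal sketch, assuming dotenv has already populated process.env — requireEnv is a hypothetical helper of our own, not part of any library, and the snippet seeds a placeholder value so it runs standalone:

```typescript
// Hypothetical helper (not part of DeepAgents or LangChain): fail fast when
// a required key is missing instead of failing mid-run inside a tool call.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Seed a placeholder value so this snippet runs standalone; in your project,
// dotenv loads the real key from .env instead.
process.env.TAVILY_API_KEY ??= 'demo_key';
console.log(requireEnv('TAVILY_API_KEY'));
```

Calling requireEnv('TAVILY_API_KEY') and requireEnv('GEMINI_API_KEY') at the top of your entry script turns a cryptic runtime failure into a clear startup error.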

II. Creating the Internet Search Tool

In DeepAgents, a "Tool" is a specialized function that the AI can choose to run. We will create a search tool that allows the agent to browse the web, wrapping it with the tool helper from @langchain/core/tools so the agent understands how to call this function with structured arguments.

Create a file at scripts/tools/search.ts:

// scripts/tools/search.ts
import { tool } from '@langchain/core/tools';
import { TavilySearch } from '@langchain/tavily';
import { z } from 'zod';

export const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = 'general',
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: 'general' | 'news' | 'finance';
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    // Use the public invoke() method rather than the internal _call()
    return await tavilySearch.invoke({ query });
  },
  {
    name: 'internet_search',
    description: 'Run a web search to gather real-time information.',
    schema: z.object({
      query: z.string().describe('The search query'),
      maxResults: z
        .number()
        .optional()
        .default(5)
        .describe('Maximum number of results to return'),
      topic: z
        .enum(['general', 'news', 'finance'])
        .optional()
        .default('general')
        .describe('Search topic category'),
      includeRawContent: z
        .boolean()
        .optional()
        .default(false)
        .describe('Whether to include raw content'),
    }),
  },
);
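One detail worth noting in the tool above: defaults are applied in two places, once in the zod schema (which is what the model sees) and once in the destructuring of the function arguments. The destructuring layer can be exercised on its own; this sketch mirrors it (resolveArgs and SearchArgs are our names, for illustration only):

```typescript
// Mirror of the tool's argument handling: destructuring defaults fill in
// anything the caller (here, the model) omits.
type SearchArgs = {
  query: string;
  maxResults?: number;
  topic?: 'general' | 'news' | 'finance';
  includeRawContent?: boolean;
};

function resolveArgs({
  query,
  maxResults = 5,
  topic = 'general',
  includeRawContent = false,
}: SearchArgs) {
  return { query, maxResults, topic, includeRawContent };
}

// Only the required field is supplied; the rest fall back to defaults.
const resolved = resolveArgs({ query: 'What is LangGraph?' });
console.log(resolved.maxResults, resolved.topic); // 5 'general'
```

Keeping the two layers in sync matters: if the schema default and the destructuring default ever disagree, the effective behavior depends on whether the model passes that argument explicitly.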

III. Initializing the DeepAgent with Gemini

Now that our tool is ready, we need to build the agent. This script imports the internetSearch tool and configures the ChatGoogleGenerativeAI model. We also provide "System Instructions" to tell the agent exactly how to behave.

Create your main entry file at scripts/deep-agent.ts:

// scripts/deep-agent.ts
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
import { createDeepAgent } from 'deepagents';
import 'dotenv/config';
import { internetSearch } from './tools/search';

// Instructions to guide the agent's behavior
const researchInstructions = `You are an expert researcher. Your job is to conduct thorough research and then write a polished report.

You have access to an internet search tool as your primary means of gathering information.

## internet_search

Use this to run an internet search for a given query. You can specify the max number of results to return and the topic.
`;

const agent = createDeepAgent({
  model: new ChatGoogleGenerativeAI({
    model: 'gemini-2.0-flash', // Using Gemini 2.0 for high speed and reasoning
    apiKey: process.env.GEMINI_API_KEY,
  }),
  tools: [internetSearch],
  systemPrompt: researchInstructions,
});

// Running the agent with a sample query
const result = await agent.invoke({
  messages: [{ role: 'user', content: 'What is langgraph?' }],
});

// Print the agent's final synthesized response
console.log(result.messages[result.messages.length - 1].content);
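The result returned by invoke holds the full message history, not just the final answer, so you can inspect which tools the agent decided to call. Since a real run needs live API keys, this sketch uses a hand-mocked messages array with a deliberately simplified shape — the tool_calls field here is illustrative; actual LangChain message objects are richer:

```typescript
// Simplified, mocked shape of an agent run's message history (illustrative
// only; real LangChain messages carry more fields).
type Msg = { role: string; content: string; tool_calls?: { name: string }[] };

const mockResult: { messages: Msg[] } = {
  messages: [
    { role: 'user', content: 'What is langgraph?' },
    { role: 'assistant', content: '', tool_calls: [{ name: 'internet_search' }] },
    { role: 'tool', content: 'LangGraph is a library for stateful agent workflows.' },
    { role: 'assistant', content: 'LangGraph is a framework for building agents.' },
  ],
};

// Collect the names of every tool the agent invoked during the run.
const toolsUsed = mockResult.messages
  .flatMap((m) => m.tool_calls ?? [])
  .map((t) => t.name);
console.log(toolsUsed); // toolsUsed = ['internet_search']
```

Logging this list while developing is a quick way to confirm the agent is actually searching rather than answering from its training data.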

IV. Understanding the Internal Reasoning

When you run the code above, the agent doesn't just call the Gemini API once. Behind the scenes, the DeepAgent architecture performs several steps:

  1. Planning: The agent realizes it doesn't know what "LangGraph" is. It uses its internal planning tool to write a to-do list: "1. Search for LangGraph, 2. Summarize findings."
  2. Execution: It calls the internet_search tool you built in Step II.
  3. Analysis: It reads the data returned from Tavily. If the data is incomplete, it might decide to run a second, more specific search.
  4. Reporting: Once its internal "To-Do" list is cleared, it compiles all the information into the final message you see in your console.
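The four steps above can be sketched as a loop. This is a deliberately simplified, hypothetical model of the behavior, not the actual DeepAgents internals — plan, runLoop, and the fake search function are all ours, for illustration:

```typescript
// Hypothetical sketch of a plan-execute-report loop; the real DeepAgents
// implementation delegates planning and analysis to the model itself.
type Step = { action: 'search' | 'report'; input: string };

function plan(question: string): Step[] {
  // Planning: a real agent asks the model for a to-do list;
  // here we hard-code the two steps from the example above.
  return [
    { action: 'search', input: question },
    { action: 'report', input: question },
  ];
}

function runLoop(question: string, search: (q: string) => string): string {
  const notes: string[] = [];
  for (const step of plan(question)) {
    if (step.action === 'search') {
      // Execution: call the tool and keep its findings.
      notes.push(search(step.input));
    } else {
      // Reporting: compile the collected notes into a final answer.
      return `Report on "${question}": ${notes.join(' ')}`;
    }
  }
  return notes.join(' ');
}

// Stand-in for the internet_search tool, so the sketch runs offline.
const fakeSearch = (q: string) => `Found results for ${q}.`;
const report = runLoop('What is langgraph?', fakeSearch);
console.log(report);
```

The real loop is more dynamic: the Analysis step can push new search steps onto the plan when the first results are incomplete, which is what makes the agent autonomous rather than a fixed pipeline.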

V. Running the Agent

To execute your agent, ensure your environment variables are set and run the script with tsx, which executes TypeScript directly without a separate compile step.

# Run the script using npx and tsx for TypeScript support
npx tsx scripts/deep-agent.ts

You should see the agent's output, which will be a detailed report on LangGraph based on the most recent information found on the web. This marks the completion of your first autonomous DeepAgent! In the next lesson, we will explore how to give our agent "Persistent Memory" so it can remember its research across multiple sessions.