## Building AI agents with Frontegg AI SDK

This guide walks you through building a functional AI agent application using the Frontegg AI SDK, inspired by the [Frontegg AI Agent example](https://github.com/frontegg/frontegg-ai-agent-example) reference project.

### This guide covers:

* Adding authentication
* Integrating third-party tools
* Building a React frontend to interact with your agent

The Frontegg AI SDK ([TypeScript](https://github.com/frontegg/frontegg-ai-typescript-sdk)/[Python](https://github.com/frontegg/frontegg-ai-python-sdk)) simplifies adding secure authentication, user context, and tool integrations to your AI agents, allowing you to focus on the agent's core logic.

### What you'll build:

* An Express.js backend powering an AI agent.
* Secure authentication and user context management via Frontegg.
* Secure OAuth integration with third-party tools (such as Slack and Notion).
* Agent logic powered by LangChain and OpenAI.
* A React frontend with a chat interface to interact with the agent.

### Prerequisites

* [Node.js](https://nodejs.org/) (v18 or later recommended)
* [npm](https://www.npmjs.com/)
* A [Frontegg account](https://portal.frontegg.com/)
* An [OpenAI API key](https://platform.openai.com/api-keys)
* Credentials for any third-party tools you want the agent to use

### 1. Frontegg configuration

Set up Frontegg to handle authentication for your agent:

1. Navigate to [ENVIRONMENT] → AI Agents → Agents.
2. Click **Create Agent**.

![AI 1](/assets/ai-agent-1.1821227b500bdb19fd0b12796a844ac4c3c5663b643e98e19e5180a1eeec8fd4.3fd2b53c.png)

3. Fill in the following fields:
   - **Name** – A human-readable name for the agent.
   - **Description** – A short summary of the agent's purpose.
   - **Model Provider** – Select `OpenAI`.
   - **Orchestration Platform** – Select `LangChain`.
   - **Agent URL** – The URL your users are redirected to after authentication.
     Set this to `http://localhost:3001`.

![AI 2](/assets/ai-agent-2.d340de7b67b5baf0d502741a7b357b9a9c85cf3a6b7afa3c55319a4fdc4ca301.3fd2b53c.png)

4. Click **Save** to create the agent.
5. Open the Agent page and note down the **Agent ID** for the following step.

![AI 3](/assets/ai-agent-3.2ddff16c06b903e99f00fa6e65a6203458ea2d81be2a200ce020ff45d6cb1bdc.3fd2b53c.png)

### 2. Environment variables

Create a `.env` file in the project root (you can copy from `.env.example` if it exists). Fill it with your credentials:

```sh
# Shared backend and frontend vars
VITE_FRONTEGG_CLIENT_ID=YOUR_FRONTEGG_ENV_CLIENT_ID  # Frontegg Client ID
VITE_FRONTEGG_AGENT_ID=YOUR_FRONTEGG_AGENT_ID        # Frontegg Agent ID

# Backend only vars
FRONTEGG_CLIENT_SECRET=YOUR_FRONTEGG_ENV_API_KEY     # Frontegg Application Secret Key
OPENAI_API_KEY=YOUR_OPENAI_API_KEY                   # OpenAI API Key

# Frontend vars
VITE_API_BASE_URL=http://localhost:3001              # Your backend API URL
VITE_FRONTEGG_BASE_URL=YOUR_FRONTEGG_BASE_URL        # Frontegg Base URL (e.g., https://app-xxxx.stg.frontegg.com)
```

> **Security:** Never commit your `.env` file or expose secret keys in your frontend code. The `VITE_` prefix makes variables accessible in the Vite frontend; all other variables are for the backend only.

Obtain the values for the `.env` file as described below.

#### Client ID and Secret Key (API Key)

To obtain the Client ID and API key:

1. Navigate to [ENVIRONMENT] → Configurations → Keys & domains.
2. Copy the values of the Client ID and API key from the **Settings** tab.

![AI 4](/assets/ai-agent-4.c2f2e206cb37292e9112195eb6bc249fc77f4e56a55503c0650c9e88e9f5704a.3fd2b53c.png)

#### Frontegg base URL

To obtain the Frontegg base URL:

1. Navigate to [ENVIRONMENT] → Configurations → Keys & domains.
2. Go to the **Domains** tab.
3. Copy the value from the **Domain name** field. For example, `https://app-xxxx.frontegg.com`.

![AI 5](/assets/ai-agent-5.58d86341dfd375f5baf304214f18a32245d29bbbd3dc69fb45aab726234e5125.3fd2b53c.png)

#### Agent ID

To obtain the Agent ID:

1. Navigate to [ENVIRONMENT] → AI Agents → Agents.
2. Open the agent created in the previous step.
3. Copy the ID displayed under the agent's name or in the **ID** box.

![AI 3](/assets/ai-agent-3.2ddff16c06b903e99f00fa6e65a6203458ea2d81be2a200ce020ff45d6cb1bdc.3fd2b53c.png)

### 3. Backend implementation (Express.js, LangChain, OpenAI, Frontegg)

The backend is responsible for agent logic, authentication, and tool execution. In this project, the backend is built with Express.js and uses a custom `LLMAgent` class that leverages LangChain, OpenAI, and the Frontegg AI SDK.

**Key features:**

- **Singleton agent:** The agent is initialized once and reused for all requests, with robust error handling and async initialization.
- **LangChain & OpenAI:** The agent uses LangChain's agent framework and OpenAI's GPT models for reasoning and tool use.
- **Frontegg AI SDK:** Handles user authentication, user context, and dynamic tool access based on the user's permissions.
- **JWT authentication:** After authenticating, the frontend sends a JWT in the `Authorization` header, which is used to set the user context for the agent.

#### Backend structure

- `src/server.ts`: Sets up the Express server, handles agent initialization, and defines the `/api/agent` endpoint.
- `src/services/llm-agent.ts`: Defines the `LLMAgent` class, which manages conversation history, integrates with Frontegg, and orchestrates the LangChain/OpenAI logic.
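The "initialize once, reuse for all requests" pattern behind the singleton agent can be sketched as follows. This is a hypothetical, self-contained sketch rather than the project's actual code: `Agent` and `initAgent` stand in for `LLMAgent` and the Frontegg/LangChain initialization.

```typescript
// Hypothetical sketch of lazy async singleton initialization.
// Concurrent first requests share one in-flight initialization promise;
// a failed initialization is cleared so the next request can retry.
type Agent = { ready: boolean };

let agentPromise: Promise<Agent> | null = null;

async function initAgent(): Promise<Agent> {
  // Stand-in for createLLMAgent() + initializeFronteggAIAgentsClient()
  return { ready: true };
}

function getAgent(): Promise<Agent> {
  if (!agentPromise) {
    agentPromise = initAgent().catch((err) => {
      // Clear the cached promise on failure so a later request retries
      agentPromise = null;
      throw err;
    });
  }
  return agentPromise;
}
```

Every request handler calls `getAgent()` instead of constructing a new agent, so expensive initialization happens at most once per process.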
#### Server example

```typescript
import 'dotenv/config';
import express from 'express';
import { createLLMAgent } from './services/llm-agent';

const app = express();
app.use(express.json());

// Initialize the singleton agent with Frontegg credentials
const agent = createLLMAgent();

app.post('/api/agent', async (req, res) => {
  const { message, history } = req.body;
  // Strip the "Bearer " prefix the frontend sends in the Authorization header
  const userJwt =
    (req.headers['authorization'] as string | undefined)?.replace(/^Bearer /, '') ?? null;

  // Pass the prompt, JWT, and conversation history to the agent
  const result = await agent.processRequest(message, userJwt, history);
  res.json({ response: result?.output || result });
});

// Make sure the Frontegg client is ready before accepting traffic
agent.initializeFronteggAIAgentsClient().then(() => {
  app.listen(3001, () => console.log('Agent backend listening on port 3001'));
});
```

#### LLM agent example

```typescript
import { ChatOpenAI } from '@langchain/openai';
import { logger } from '../utils/logger';
import { AgentExecutor, createOpenAIFunctionsAgent } from 'langchain/agents';
import { ChatPromptTemplate, MessagesPlaceholder } from '@langchain/core/prompts';
import { Environment, FronteggAiClient } from '@frontegg/ai-sdk';

export class LLMAgent {
  private model: ChatOpenAI;
  private agent: AgentExecutor | null = null;
  private conversationHistory: { role: string; content: string }[] = [];
  private systemMessage: string;
  private fronteggAiClient: FronteggAiClient | undefined;

  constructor() {
    this.model = new ChatOpenAI({
      model: 'gpt-4o',
      temperature: 0.7,
      openAIApiKey: process.env.OPENAI_API_KEY,
    });
    this.systemMessage = `You are Jenny, an autonomous B2B agent...`;
  }

  public async initializeFronteggAIAgentsClient(): Promise<boolean> {
    this.fronteggAiClient = await FronteggAiClient.getInstance({
      agentId: process.env.VITE_FRONTEGG_AGENT_ID!,
      clientId: process.env.VITE_FRONTEGG_CLIENT_ID!,
      clientSecret: process.env.FRONTEGG_CLIENT_SECRET!,
      environment: Environment.EU,
    });
    return true;
  }

  // Simple heuristic to detect whether the user is asking to log in
  private isLoginIntent(request: string): boolean {
    return /\b(log\s?in|sign\s?in)\b/i.test(request);
  }

  private async createAgent(tools: any[]) {
    const messages = [
      { role: 'system', content: this.systemMessage },
      ...this.conversationHistory,
      new MessagesPlaceholder('agent_scratchpad'),
    ];
    const prompt = ChatPromptTemplate.fromMessages(messages);
    const openAIFunctionsAgent = await createOpenAIFunctionsAgent({
      llm: this.model as any,
      tools: tools as any,
      prompt: prompt as any,
    });
    this.agent = new AgentExecutor({
      agent: openAIFunctionsAgent as any,
      tools: tools as any,
      verbose: true,
    });
    logger.info('LangChain agent created/updated successfully');
  }

  public async processRequest(
    request: string,
    userJwt: string | null,
    history?: { role: string; content: string }[]
  ) {
    if (!userJwt) {
      return this.isLoginIntent(request)
        ? { output: "Great! I'll redirect you to the login page now." }
        : {
            output:
              'I apologize, but I need you to log in first before I can help you with that. Would you like to log in now?',
          };
    }
    if (!this.fronteggAiClient) throw new Error('Frontegg client not initialized');

    if (history) this.conversationHistory = history;
    this.conversationHistory.push({ role: 'human', content: request });

    // Scope the Frontegg client to this user, then fetch the tools they may use
    await this.fronteggAiClient.setUserContextByJWT(userJwt);
    const tools = await this.fronteggAiClient.getToolsAsLangchainTools();
    await this.createAgent(tools);

    const result = await this.agent?.invoke({ input: request });
    this.conversationHistory.push({ role: 'assistant', content: result?.output || '' });
    return result;
  }
}

export function createLLMAgent(): LLMAgent {
  return new LLMAgent();
}
```

#### How it works

- On startup, the backend initializes the agent (Frontegg client, LangChain, and OpenAI).
- For each `/api/agent` call, the backend:
  1. Extracts the user's JWT from the `Authorization` header.
  2. Sets the user context in Frontegg (enabling user-specific context and tool access).
  3. Fetches all tools the user is authorized to use from Frontegg and passes them to the LLM agent.
  4. Passes the prompt to the agent, which uses LangChain and OpenAI to reason and call tools as needed.
  5. Returns the agent's response to the frontend.
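With the backend running, the flow above can be smoke-tested from the command line before the frontend exists. This is a sketch: `USER_JWT` is a placeholder you would fill with a Frontegg access token (for example, copied from an authenticated browser session).

```shell
# Assumes the backend from the server example is listening on port 3001
# and USER_JWT holds a valid Frontegg access token (placeholder).
curl -s -X POST http://localhost:3001/api/agent \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $USER_JWT" \
  -d '{"message": "Which tools can you use on my behalf?"}'
```

A successful call returns a JSON body of the form `{"response": "..."}`; with no token, the agent instead replies asking you to log in.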
### 4. Frontend implementation (React + Vite)

The frontend provides the user interface for user authentication and interaction with the agent.

#### Key frontend points

* **`FronteggProvider`:** Wraps the application to provide authentication context. `contextOptions` connects it to your Frontegg application, and `hostedLoginBox={true}` redirects users to Frontegg for login.
* **`useAuth` hook:** Provides authentication state (`isAuthenticated`, `isLoading`) and user information (`user`).
* **API call with access token:** The UI calls the `/api/agent` endpoint with the authenticated user's Frontegg access token in the `Authorization` header. This enables secure, user-specific backend processing.
* **UI components:** Components like `ChatMessage` and `PromptInput` handle the display and input elements of the chat.

#### FronteggProvider wrapper

```typescript
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import './index.css';
import { FronteggProvider } from '@frontegg/react'; // Frontegg React SDK

const contextOptions = {
  baseUrl: import.meta.env.VITE_FRONTEGG_BASE_URL,
  clientId: import.meta.env.VITE_FRONTEGG_CLIENT_ID,
};

// Wrap the app so every component can read the Frontegg auth context
ReactDOM.createRoot(document.getElementById('root')!).render(
  <FronteggProvider contextOptions={contextOptions} hostedLoginBox={true}>
    <App />
  </FronteggProvider>,
);
```

#### Agent chat

```typescript
import React, { useState, useRef } from 'react';
import { ChatMessage, Message } from './ChatMessage';
import { PromptInput } from './PromptInput';
import { ContextHolder, useAuth, useLoginWithRedirect } from '@frontegg/react';

export function AgentChat() {
  const { isAuthenticated, user } = useAuth();
  const loginWithRedirect = useLoginWithRedirect();
  const [messages, setMessages] = useState<Message[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const messagesEndRef = useRef<HTMLDivElement>(null);

  // If not authenticated, immediately redirect to login
  React.useEffect(() => {
    if (!isAuthenticated) {
      loginWithRedirect();
    }
  }, [isAuthenticated, loginWithRedirect]);

  const handleSubmit = async (prompt: string) => {
    if (!prompt.trim()) return;
    setMessages(prev => [...prev, { role: 'user', content: prompt }]);
    setIsLoading(true);
    try {
      const response = await fetch(`${import.meta.env.VITE_API_BASE_URL}/api/agent`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${ContextHolder.default().getAccessToken()}`,
        },
        body: JSON.stringify({ message: prompt, history: messages }),
      });
      const data = await response.json();
      setMessages(prev => [...prev, { role: 'assistant', content: data.response }]);
    } catch (error) {
      setMessages(prev => [
        ...prev,
        {
          role: 'assistant',
          content: 'I apologize, but I encountered an error processing your request. Please try again.',
        },
      ]);
    } finally {
      setIsLoading(false);
    }
  };

  // Minimal reconstruction of the chat markup; the example project's
  // styling and layout may differ.
  return (
    <div className="agent-chat">
      {messages.map((message, index) => (
        <ChatMessage key={index} message={message} />
      ))}
      {isLoading && <div className="thinking">Jenny is thinking...</div>}
      <div ref={messagesEndRef} />
      <PromptInput onSubmit={handleSubmit} disabled={isLoading} />
    </div>
  );
}
```

### Conclusion

By leveraging the Frontegg AI SDK, you can significantly accelerate the development of secure, authenticated AI agents. The SDK handles the complexities of user authentication, authorization, and context management, allowing you to integrate powerful AI capabilities with robust security and user-specific interactions. This project structure provides a solid foundation for building sophisticated agents connected to various third-party tools.

Explore the Frontegg AI SDK ([TypeScript](https://github.com/frontegg/frontegg-ai-typescript-sdk)/[Python](https://github.com/frontegg/frontegg-ai-python-sdk)) documentation for more advanced features like fine-grained authorization, advanced tool configuration, and custom context injection.