Generate a text and call tools for a given prompt using a language model.

This function does not stream the output. If you want to stream the output, use `streamText` instead.

@param model - The language model to use.
@param tools - Tools that are accessible to and can be called by the model. The model needs to support calling tools.
@param toolChoice - The tool choice strategy. Default: 'auto'.
@param system - A system message that will be part of the prompt.
@param prompt - A simple text prompt. You can either use `prompt` or `messages` but not both.
@param messages - A list of messages. You can either use `prompt` or `messages` but not both.
@param maxOutputTokens - Maximum number of tokens to generate.
@param temperature - Temperature setting. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.
@param topP - Nucleus sampling. The value is passed through to the provider. The range depends on the provider and model. It is recommended to set either `temperature` or `topP`, but not both.
@param topK - Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use `temperature`.
@param presencePenalty - Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. The value is passed through to the provider. The range depends on the provider and model.
@param frequencyPenalty - Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. The value is passed through to the provider. The range depends on the provider and model.
@param stopSequences - Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.
@param seed - The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.
@param maxRetries - Maximum number of retries. Set to 0 to disable retries. Default: 2.
@param abortSignal - An optional abort signal that can be used to cancel the call.
@param timeout - An optional timeout in milliseconds. The call will be aborted if it takes longer than the specified timeout.
@param headers - Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.
@param experimental_generateMessageId - Generate a unique ID for each message.
@param onStepFinish - Callback that is called when each step (LLM call) is finished, including intermediate steps.
@param onFinish - Callback that is called when all steps are finished and the response is complete.
@returns A result object that contains the generated text, the results of the tool calls, and additional information.
```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
// Composio SDK packages (package names assumed: @composio/core, @composio/vercel)
import { Composio } from '@composio/core';
import { VercelProvider } from '@composio/vercel';
```
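To make the parameters above concrete, here is a minimal sketch of a plain `generateText` call (no tools), reusing the imports above and exercising a few of the documented options; the model id and prompt are placeholder values:

```typescript
// Minimal sketch: plain text generation with a few of the options
// documented above. The model id and prompt are placeholders.
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Summarize the plot of Hamlet in two sentences.',
  maxOutputTokens: 200,
  temperature: 0.3, // set either `temperature` or `topP`, not both
  maxRetries: 2, // the default; set to 0 to disable retries
  abortSignal: AbortSignal.timeout(30_000), // cancel the call after 30s
});
console.log(result.text);
```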
Creates a new instance of the Composio SDK.
The constructor initializes the SDK with the provided configuration options,
sets up the API client, and initializes all core models (tools, toolkits, etc.).
@param config - Configuration options for the Composio SDK
@param config.apiKey - The API key for authenticating with the Composio API
@param config.baseURL - The base URL for the Composio API (defaults to production URL)
@param config.allowTracking - Whether to allow anonymous usage analytics
@param config.provider - The provider to use for this Composio instance (defaults to OpenAIProvider)
@example
```typescript
// Initialize with default configuration
const composio = new Composio();
// Initialize with custom API key and base URL
const composio = new Composio({
apiKey: 'your-api-key',
baseURL: 'https://api.composio.dev'
});
// Initialize with custom provider
const composio = new Composio({
apiKey: 'your-api-key',
provider: new CustomProvider()
});
```
`provider?: VercelProvider | undefined` - The tool provider to use for this Composio instance.
@example `new OpenAIProvider()`
new VercelProvider({ strict }?: { strict?: boolean;}): VercelProvider
Creates a new instance of the VercelProvider.
This provider enables integration with the Vercel AI SDK,
allowing Composio tools to be used with Vercel AI applications.
@example
```typescript
// Initialize the Vercel provider
const provider = new VercelProvider();
// Use with Composio
const composio = new Composio({
apiKey: 'your-api-key',
provider: new VercelProvider()
});
// Use the provider to wrap tools for Vercel AI SDK
const vercelTools = provider.wrapTools(composioTools, composio.tools.execute);
```
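The constructor signature above also accepts an optional `strict` flag. Its semantics are not spelled out on this page; as a sketch only, assuming it tightens how tool parameter schemas are enforced when tools are wrapped for the Vercel AI SDK:

```typescript
// `strict` is accepted per the signature above; its exact behavior is an
// assumption here (stricter tool schema enforcement). Check the Composio docs.
const strictProvider = new VercelProvider({ strict: true });
```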
```typescript
const composio = new Composio({
  provider: new VercelProvider(),
});

// create an auth config for gmail
// then create a connected account with an external user id that identifies the user
const externalUserId = "your-external-user-id";

// Fetch the user's Gmail tool wrapped for the Vercel AI SDK.
// (The GMAIL_SEND_EMAIL slug is an assumption; the original call was truncated.)
const tools = await composio.tools.get(externalUserId, "GMAIL_SEND_EMAIL");
```
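The two comments above refer to setup steps that are not shown in this snippet. As a rough, hypothetical sketch (the `toolkits.authorize` and `waitForConnection` names are assumptions, not API confirmed by this page), connecting a user's Gmail account could look like:

```typescript
// Hypothetical sketch of the setup referenced by the comments above.
// Method names are assumptions; consult the Composio docs for the real API.
const connectionRequest = await composio.toolkits.authorize(externalUserId, 'gmail');
console.log('Visit this URL to authorize Gmail access:', connectionRequest.redirectUrl);
await connectionRequest.waitForConnection(); // resolves once the user completes OAuth
```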
Get a specific tool by its slug.
This method fetches the tool from the Composio API and wraps it using the provider.
@param userId - The user id to get the tool for
@param slug - The slug of the tool to fetch
@param options - Optional provider options including modifiers
@returns The wrapped tool
@example
```typescript
// Get a specific tool by slug
const hackerNewsUserTool = await composio.tools.get('default', 'HACKERNEWS_GET_USER');
// Get a tool with schema modifications
const tool = await composio.tools.get('default', 'GITHUB_GET_REPOS', {
modifySchema: (toolSlug, toolkitSlug, schema) => {
// Customize the tool schema
return {...schema, description: 'Custom description'};
}
});
```
```typescript
const { text } = await generateText({
  // The language model to use: the default OpenAI provider instance
  model: openai("gpt-5"),
  // A list of messages (use either `prompt` or `messages`, not both)
  messages: [
    {
      role: "user",
      content: `Send an email to soham.g@composio.dev with the subject 'Hello from composio 👋🏻' and the body 'Congratulations on sending your first email using AI Agents and Composio!'`,
    },
  ],
  // The tools that the model can call; the model needs to support tool calling
  tools,
});
log("Email sent successfully!", { text: stringtext });