  • Sandbox SDK adds file permission control

    Vercel Sandbox SDK 1.9.0 now supports setting file permissions directly when writing files.

    By passing a mode property to the writeFiles API, you can define permissions in a single operation.

    This eliminates the need for an additional chmod execution round-trip when creating executable scripts or managing access rights inside the sandbox.

    sandbox.writeFiles([{
      path: 'run.sh',
      content: '#!/bin/bash\necho "ready"',
      mode: 0o755
    }]);
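The mode value uses standard POSIX octal permission bits. As a local illustration (plain Node fs, not the Sandbox SDK), 0o755 grants the owner read/write/execute and everyone else read/execute:

```typescript
import { writeFileSync, statSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Write a script with explicit permissions: 0o755 = owner rwx (7),
// group r-x (5), others r-x (5). The file name is illustrative.
const path = join(tmpdir(), `run-${process.pid}.sh`);
writeFileSync(path, '#!/bin/bash\necho "ready"', { mode: 0o755 });

// Mask off the file-type bits to read back just the permission bits.
const perms = statSync(path).mode & 0o777;
console.log(perms.toString(8)); // '755' on POSIX systems
```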

    See the documentation to learn more.

  • MiniMax M2.7 is live on AI Gateway

    MiniMax M2.7 is now available on Vercel AI Gateway in two variants: standard and high-speed. M2.7 is a major step up from previous M2-series models in software engineering, agentic workflows, and professional office tasks.

    The model natively supports multi-agent collaboration, complex skill orchestration, and dynamic tool search for building agentic workflows. M2.7 also improves on production debugging and end-to-end project delivery.

    The high-speed variant delivers the same performance at 2x the cost of standard, running at ~100 tokens per second for latency-sensitive use cases.

    To use M2.7, set model to minimax/minimax-m2.7 or minimax/minimax-m2.7-highspeed in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'minimax/minimax-m2.7-highspeed',
      prompt: `Analyze the production alert logs from the last hour,
    correlate them with recent deployments, identify the
    root cause, and submit a fix with a non-blocking
    migration to restore service.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.

  • v0 now includes diff view to review code changes

    v0 now includes a dedicated diff view to review code changes directly within the interface. You can see exactly what was modified file by file, complete with line addition and deletion counts.

    Check out v0.app today or read the documentation to learn more.

    Ty Zhang, Sahaj Jain

  • End-to-end encryption for Vercel Workflow

    Vercel Workflow now encrypts all user data end-to-end without requiring any code changes. Workflow inputs, step arguments, return values, hook payloads, and stream data are automatically encrypted before being written to the event log.

    This makes it safe to pass sensitive data, such as API keys, tokens, or user credentials, across workflow boundaries. The event log only ever stores ciphertext; your workflow and step functions work exactly as before, and all data flowing through the event log is encrypted automatically.

    Each Vercel deployment receives a unique encryption key. The key derivation and encryption stack works as follows:

    • Each workflow run derives its own key via HKDF-SHA256

    • Data is encrypted with AES-256-GCM to ensure confidentiality and integrity

    • Encrypted fields display as locked placeholders in the dashboard until decrypted
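The derivation-plus-encryption stack above can be sketched with Node's built-in crypto module. This is a hypothetical illustration of the approach (HKDF-SHA256 key derivation feeding AES-256-GCM), not Vercel's actual implementation; names like `deploymentKey` and the info string are assumptions:

```typescript
import { hkdfSync, createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Illustrative inputs: a unique per-deployment key and a run identifier.
const deploymentKey = randomBytes(32);
const runId = 'run_abc123';

// Derive a per-run key via HKDF-SHA256: HKDF(sha256, ikm, salt, info, 32 bytes).
const runKey = Buffer.from(
  hkdfSync('sha256', deploymentKey, Buffer.alloc(0), `workflow-run:${runId}`, 32)
);

// Encrypt a payload with AES-256-GCM; the auth tag gives integrity protection.
function encrypt(plaintext: string): { iv: Buffer; ciphertext: Buffer; tag: Buffer } {
  const iv = randomBytes(12); // 96-bit nonce, the recommended GCM size
  const cipher = createCipheriv('aes-256-gcm', runKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt({ iv, ciphertext, tag }: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): string {
  const decipher = createDecipheriv('aes-256-gcm', runKey, iv);
  decipher.setAuthTag(tag); // GCM verifies integrity before releasing plaintext
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString('utf8');
}

// Only the ciphertext tuple would ever be written to the event log.
const sealed = encrypt('{"apiKey":"sk-secret"}');
console.log(decrypt(sealed)); // round-trips to the original payload
```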

    You can access encrypted data through two methods:

    Web dashboard: Click the Decrypt button in the run detail panel. Decryption happens entirely in the browser via the Web Crypto API, so the observability server never sees your plaintext data.
    CLI: Add the --decrypt flag to the inspect command.

    npx workflow inspect run <run-id> --decrypt --withData

    Decryption follows the same permissions model as project environment variables, meaning you cannot access workflow data if you lack permission to view environment variables. Each decryption request is recorded in your Vercel audit log, providing your team with full visibility into access events.

    While end-to-end encryption is built into the Vercel platform, custom World implementations can opt into this feature. You can provide your own getEncryptionKeyForRun() method, which the core runtime uses automatically.

    Learn more in the Workflow DevKit documentation.

  • New GitHub App permissions for Actions and Workflows

    The Vercel GitHub App now requests two additional repository permissions on install: Actions (read) and Workflows (read & write).

    These permissions enable Vercel Agent to read workflow run logs to help diagnose CI failures and configure CI workflow files on your behalf. This also allows v0 to create complete, production-ready repositories with configured CI/CD pipelines. To use these features, accept the updated permissions in your GitHub organization or account settings.

    For full details on all permissions requested by the Vercel GitHub App, see the documentation.

  • Introducing the Vercel plugin for coding agents

    Claude Code and Cursor can now better understand Vercel projects using the new Vercel plugin and a full platform knowledge graph.

    The plugin observes real-time activity, including file edits and terminal commands, to dynamically inject Vercel knowledge into the agent's context. Key capabilities include:

    • Platform knowledge: Access 47+ skills covering the Vercel platform, including Next.js, AI SDK, Turborepo, Vercel Functions, and Routing Middleware, powered by a relational knowledge graph

    • Specialized tooling: Use three specialist agents (AI Architect, Deployment Expert, Performance Optimizer) and five slash commands (/bootstrap, /deploy, /env, /status, /marketplace)

    • Context management: An injection engine and project profiler rank, deduplicate, and budget-control loaded context

    • Code validation: PostToolUse validation catches deprecated patterns, sunset packages, and stale APIs in real time

    Instead of standard retrieval, the plugin compiles pattern matchers at build time and runs a priority-ranked injection pipeline across seven lifecycle hooks. Skills fire when glob patterns, bash regexes, import statements, or prompt signals match, and are then deduplicated across the session to ensure accurate agent responses.

    The plugin currently supports Claude Code and Cursor, with OpenAI Codex support coming soon.

    Install the plugin via npx:

    npx plugins add vercel/vercel-plugin

    Directly in Claude Code via the official marketplace:

    /plugin install vercel

    Or directly in Cursor:

    /add-plugin vercel

    Explore the source code in the Vercel plugin repository.

  • Updates to Terms of Service

    Agents are reshaping the tools developers use, the applications they build, and the infrastructure that runs them. We’ve updated our Terms of Service and Privacy Policy to reflect how Vercel uses data to support agentic features, improve our platform, and contribute to the AI ecosystem.

    What is changing?

    Agentic infrastructure capabilities

    We are developing features that allow Vercel to do more to keep your apps running efficiently, including:

    • Proactively investigating and mitigating incidents

    • Analyzing web app performance data and suggesting improvements

    • Identifying where your spend is going and creating PRs to optimize usage

    Vercel may also use data to help improve our tools to fight fraud and abuse of the Vercel platform. 

    Optional AI model training

    You may choose whether to allow Vercel to:

    • Use your code and Vercel agent chats to improve Vercel models

    • Share your code and Vercel agent chats with AI model providers

    Defaults by plan for optional AI model training:

    • Hobby (including Trial Pro): Opted in for AI model training by default, with self-serve opt-out in Team and Project Settings

    • Pro (paid): Opted out of AI model training by default, with self-serve opt-in in Team and Project Settings

    • Enterprise: Opted out of any AI model training

    Sharing this data helps improve the performance of agentic tools for everyone. Participating in this model training program is fully optional, with easy opt-out in Team Settings → Data Preferences. If you opt out by March 31st, 2026 at 11:59:59 PST, Vercel will not use your data to train AI or share it with third parties. If you opt out after that deadline, your data will not be used or shared from that point forward.

    If you are opted in, the training datasets would include:

    • Code and Vercel agent chats

    • Build and deployment telemetry data and build errors

    • Aggregate traffic stats

    All personal information, account details, environment variables, API keys, and other sensitive content is anonymized and redacted before use or sharing. 

    Other changes to our Terms of Service include updated dispute resolution processes, billing practices, and provisions to reflect compliance with the latest data protection laws. While arbitration has always been our method of resolving disputes with international and Enterprise customers, it now also applies to U.S.-based customers. The opt-out process described in Section 21 of our Terms of Service is unchanged.

    Frequently Asked Questions

  • Use GPT-5.4 Mini and Nano on AI Gateway

    GPT-5.4 Mini and GPT-5.4 Nano from OpenAI are now available on Vercel AI Gateway. Both models deliver state-of-the-art performance for their size class in coding and computer use, and are built for sub-agent workflows where multiple smaller models coordinate on parts of a larger task.

    The models also support the verbosity and reasoning level parameters, giving you control over response detail and how much the model reasons before answering.

    GPT-5.4 Mini

    GPT-5.4 Mini handles code generation, tool orchestration, and multi-step browser interactions more reliably than previous mini-tier models. It's a strong default for agentic tasks that need to balance capability and cost. To use this model, set model to openai/gpt-5.4-mini in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.4-mini',
      prompt: `Scaffold a new Next.js API route that connects to our
    Postgres database, validates the incoming webhook payload,
    and writes the event to the audit_logs table.`,
    });

    GPT-5.4 Nano

    GPT-5.4 Nano performs close to GPT-5.4 Mini in evaluations at a lower price point. The model is well-suited for high-volume use cases like sub-agent workflows where cost scales with the number of parallel calls. To use this model, set model to openai/gpt-5.4-nano in the AI SDK.

    import { streamText } from 'ai';

    const result = streamText({
      model: 'openai/gpt-5.4-nano',
      prompt: `Check each file in the PR diff for unused imports,
    flag any that can be removed, and return the results
    as a JSON array with file path and line number.`,
    });

    AI Gateway provides a unified API for calling models, tracking usage and cost, and configuring retries, failover, and performance optimizations for higher-than-provider uptime. It includes built-in observability, Bring Your Own Key support, and intelligent provider routing with automatic retries.

    Learn more about AI Gateway, view the AI Gateway model leaderboard, or try it in our model playground.