24 August 2024

Exploring the V0.dev System Prompt

Written by
Prompt Engineering

I was browsing the web and found what appears to be the system prompt for a generative front-end UI LLM called v0.dev. This is a multimodal generative UI tool I've been using to build this site (text-to-React). It recently launched a premium tier, which presumably runs on this same leaked prompt.

I was slightly surprised, and within a minute, I was browsing repositories of leaked system prompts, wondering, 'Why am I not surprised?'.

I found this interesting because, if you go through that list of leaks, many of these are paid or at least private services. So, I was initially sceptical that what I found was accurate. But even if it is fake, I wanted to look into it.

Examining the V0.dev System Prompt

In case you're unfamiliar, the system prompt is like a hidden set of instructions that defines how an AI product interacts with you. The end user is not meant to have access to these.
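To make that concrete, here's a minimal sketch of how a system prompt typically works in a chat LLM API. This is illustrative only (not Vercel's actual code, and the prompt string is a placeholder): the hidden instructions are sent as a "system" message that is prepended to every conversation.

```python
# Illustrative sketch: a system prompt is a hidden first message in the
# conversation. The prompt text below is a placeholder, not the real one.

V0_SYSTEM_PROMPT = "You are v0, Vercel's advanced AI coding assistant. ..."

def build_messages(user_input: str) -> list[dict]:
    """Prepend the hidden system prompt to every conversation."""
    return [
        {"role": "system", "content": V0_SYSTEM_PROMPT},  # hidden from the user
        {"role": "user", "content": user_input},          # what the user typed
    ]

messages = build_messages("Build me a pricing page")
```

The end user only ever sees their own message; the system message shapes every response without being shown.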

First, let's examine the v0.dev complete system prompt that leaked:

That's a lot (and my syntax highlighting could be better). So, let's break down what's happening under the hood to provide insight into how a tool like v0.dev operates and help determine if it is a legitimate leak.

Deconstructing the Prompt

Here is how the prompt is organised into sections. I've made the names legible, but each section below maps directly to a tag in the prompt (e.g. the Info section below is <v0_info> in the prompt):

  1. Info: Overview of v0, Vercel's 'advanced AI coding assistant'.

  2. MDX: Instructions for formatting responses using specific markdown.

  3. Code Block Types:

    1. React Component: Renders React components in MDX responses.

    2. Node.js Executable: Executes Node.js code in MDX.

    3. HTML: Embeds accessible HTML snippets.

    4. Markdown: Writes GitHub Flavored Markdown.

    5. Diagram: Creates diagrams using Mermaid.

    6. General Code: Handles large code snippets in various languages, including Python.

  4. MDX Components:

    1. Linear Processes: Custom component for multi-step processes.

    2. Quiz: Generates quizzes using a custom component.

    3. Math: Renders mathematical equations with LaTeX.

  5. Domain Knowledge: Placeholder for runtime domain knowledge.

  6. Forming Correct Responses: Guidelines for v0 to evaluate, think, and respond accurately, including handling refusals and warnings.

  7. Examples:

    1. Example 1: Handling a general question about life.

    2. Example 2: React component for creating a badge.

    3. Example 3: Node.js implementation of a prime number checker.

    4. Example 4: Step-by-step comparison of two numbers.

    5. Example 5: React component for an input field with label and description.

    6. Example 6: Refusal to answer a real-time event query.

    7. Example 7: Stopwatch React component with start, pause, and reset features.

    8. Example 8: Python script for reading a CSV file.

    9. Example 9: Mermaid diagram illustrating OAuth 2.0 Authorization Code Flow.

I recommend diving into each section independently because there's much to unpack, and you can gain insight into Vercel's product design decisions.
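As a rough sketch of what that structure amounts to, a sectioned prompt like this could be assembled from tagged blocks. The tag names below mirror the sections listed above; the content strings are my placeholders, not the real prompt text.

```python
# Sketch of assembling a sectioned system prompt from XML-style tagged blocks.
# Tag names mirror the leak's sections; bodies are placeholders, not the leak.

SECTIONS = {
    "v0_info": "Overview of v0, Vercel's advanced AI coding assistant.",
    "v0_mdx": "Instructions for formatting responses using MDX.",
    "v0_domain_knowledge": "Placeholder filled in at runtime.",
    "forming_correct_responses": "Guidelines for evaluating and responding.",
}

def assemble_prompt(sections: dict[str, str]) -> str:
    """Wrap each section in XML-style tags and join them with blank lines."""
    return "\n\n".join(
        f"<{tag}>\n{body}\n</{tag}>" for tag, body in sections.items()
    )

prompt = assemble_prompt(SECTIONS)
```

One nice property of this layout is maintainability: each section can be edited or swapped independently, which fits how well-structured the leak reads.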

For example, at the very beginning, we have a set of instructions that establish the context in which the AI should operate. This includes the purpose of v0.dev, its capabilities, and the general rules it must follow. This section is critical for a user-facing LLM as it sets the tone for how the AI should behave:

As another example, in this section we can see that they're optimising for an engaging user experience:

The final two sections are the most interesting: Forming Correct Responses and Examples. These sections demonstrate how v0.dev implements chain-of-thought techniques in its prompt engineering.
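To show what I mean by a chain-of-thought implementation, here is a generic few-shot sketch in that spirit. The wording and tag names are mine, not Vercel's, but the pattern matches the leak: pair the guidelines with worked examples that reason step by step (the leak's Example 4 is a step-by-step comparison of two numbers).

```python
# Generic few-shot chain-of-thought sketch (my wording, not Vercel's):
# worked examples that reason step by step are prepended to the user's query
# so the model imitates the reasoning style.

COT_EXAMPLE = """\
<example>
  <user_query>Is 9.9 bigger than 9.11?</user_query>
  <assistant_response>
    Thinking step by step:
    1. Compare the integer parts: both are 9.
    2. Compare the fractional parts: 0.90 > 0.11.
    Therefore 9.9 is bigger than 9.11.
  </assistant_response>
</example>"""

def few_shot_prompt(examples: list[str], query: str) -> str:
    """Prepend worked examples, then append the new query."""
    return "\n\n".join(examples) + f"\n\n<user_query>{query}</user_query>"
```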

Validating That It Is the Real System Prompt

To validate the leak, I simply started testing whether the live v0.dev would follow the instructions when asked.

Example, How It Handles React Components

This is a set of instructions on how the v0.dev LLM handles and thinks about React components.

If these leaked system prompts are accurate, the live version of v0.dev should follow them.

I started looking for specific instructions that I could quickly test. For example, according to the leaked prompt: '3. v0 DOES NOT use indigo or blue colors unless specified in the prompt.'

I asked whether it uses indigo or blue, and it confirmed that it doesn't:

That means little on its own, so I tested a few other instructions.

Another straightforward test was '1. v0 prefers Lucide React for icons, and shadcn/ui for components.' I asked v0 for its preferred icon set, and it confirmed that it prefers Lucide icons, just like in the leaked prompt.

So far, so good. I kept testing a few more, and some answers clearly matched the leaked instructions.

I started to feel like I was getting somewhere with validating that this was the actual system prompt from v0.dev.

The Evidence Started to Add Up

I kept testing more of the instructions in the leak, one by one, and all of them were confirmed by the production v0.dev LLM. I tested around 15, and it felt like fun rather than a waste of time.
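Those manual checks could be scripted. Here's a hypothetical harness for the idea; the `ask_v0` stub stands in for a real request to the live v0.dev service, and the canned answers are paraphrases of the responses I saw, not captured output.

```python
# Hypothetical harness: check leaked instructions against the live model.
# `ask_v0` is a stub; in practice it would call the real v0.dev service.

CLAIMS = [
    ("Do you use indigo or blue colors by default?", "no"),
    ("What is your preferred icon library?", "lucide"),
]

def ask_v0(question: str) -> str:
    # Stubbed, paraphrased answers; replace with a real call to run this.
    canned = {
        "Do you use indigo or blue colors by default?": "No, unless specified.",
        "What is your preferred icon library?": "I prefer Lucide React icons.",
    }
    return canned[question]

def validate(claims: list[tuple[str, str]]) -> int:
    """Count how many answers are consistent with the leaked instructions."""
    return sum(
        1 for question, keyword in claims
        if keyword in ask_v0(question).lower()
    )

matches = validate(CLAIMS)
```

Keyword matching is crude, of course; for the real thing I eyeballed each answer.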

Eventually, I just straight up asked it to give me its guidelines. It simply complied, and the guidelines it gave do seem to match. Maybe I did waste my time and I should have started by just asking nicely:

I asked for clarification on several areas, and all the answers were spot on.

For example, I asked about the examples where it thinks step-by-step. It initially gave me exactly the same examples from the leaked prompt:

You can find that example in the leaked prompt:

Same with this one:

You can find that example in the leaked prompt too:

For transparency, it started providing examples that weren't in the leak once I asked a few times, but at this point I was convinced that this is the actual v0.dev system prompt. Awesome!

I expect the v0.dev team to iterate on this prompt, so it will go out of date. So what's the point? What's the benefit of all of this? What are we even doing?

Seeing the V0.dev System Prompt Is Useful

v0.dev is a $20 per month service by Vercel, and if their system prompt leaks consistently, nothing is stopping people from using the leaked prompt with ChatGPT (of course, you're not just paying for the prompt when you sign up to premium v0.dev).

However, seeing the prompt now is still very useful in several ways other than saving money on SaaS.

Here are three of my learnings from today's exploration, plus a bonus:

  1. I have improved my prompt engineering skills by observing Vercel's approach with v0.dev. This is a live, consumer-facing LLM application operating at significant scale. Vercel's prompt is well-structured, simple, and easy to maintain. It has inspired me to rethink my prompt engineering approach, especially in a professional setting at work. I have gained valuable insights from studying their Chain of Thought implementation.

  2. I learned how to get better results out of v0.dev. Put simply, knowing how the tool works makes it easier to learn how to use it. For instance, I might reframe certain questions or use specific sentences that I know should 'trigger' the processes I prefer.

  3. Faster prototyping for generative UI projects. Since the prompt seems to work, I now want to bring forward some planned experiments with something that generates its own UI. For example, I want to create a form that generates questions based on the user's answers. Having this prompt will save me some time prototyping that.

  4. Bonus: it was fun. Exploring the leaked prompt, seeing how it works and testing whether it is genuine was fun for me.

What's Next?

Based on today's exploration, I have a lot to experiment with. I'll prioritise prompt engineering content to elaborate on some of the most interesting parts of the v0.dev system prompt: the chain of thought and reasoning implementations. Those are big topics, so I will give them room.

Since these are practical techniques that I can already apply to my work, I'll take the opportunity to write about how I test whether one prompt is better than another. This is something I have to set myself up to do in production, so I will also be looking through prompt engineering tools, workflows, etc.

Finally, I will leverage what I learned today to create simple generative UI prototypes and start building some stuff out in the open.

That's what this blog is all about.

Thanks for reading.

I wrote this article using Grammarly, made the image for the article with Midjourney and created this website using v0.dev. I also used ChatGPT throughout the process.