LLMs 💖 Markdown (and so do we!)
I write instruction prompts for LLMs all the time, and one thing that became obvious pretty early on is that the formatting matters. At first it was just an observation I made (the generated outputs seemed to be of better quality), and because I couldn’t measure the real impact, it simply felt “right” to structure the prompts using Markdown. I like using Visual Studio Code and Obsidian, so I was already familiar with the syntax.
Now there is a study that suggests my feeling was somewhat right: Does Prompt Formatting Have Any Impact on LLM Performance? The researchers found that formatting does indeed have an impact on LLM performance. It is not (yet) clear which format works best for which model, but according to the study, GPT-4 models perform better with Markdown inputs.
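To make this concrete, here is a minimal sketch of what I mean by “structuring a prompt with Markdown”: building the prompt from named sections with headings instead of one wall of text. The section names (`Role`, `Task`, `Constraints`) are just an example layout I like, not anything prescribed by the study.

```python
# A hypothetical prompt layout: each named section becomes a Markdown heading.
sections = {
    "Role": "You are a concise technical writer.",
    "Task": "Summarize the input text in three bullet points.",
    "Constraints": "- Keep each bullet under 20 words.\n- Use plain language.",
}

# Join the sections into one Markdown document, separated by blank lines.
prompt = "\n\n".join(f"## {name}\n{body}" for name, body in sections.items())
print(prompt)
```

The nice side effect of assembling prompts like this is that the structure stays readable in VSCode’s Markdown preview, and each section can be tweaked independently between iterations.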
So Markdown helps with structuring the content and also with reading it. But there is one other thing that I think is a big plus when using VSCode to design prompts: you can store the prompts in a code repository like GitHub and track changes over time. Because the prompt has a significant impact on the generated output, being able to iterate and see what you have already tried is super helpful.
An additional benefit is that you can use the excellent GitHub Copilot to help you with the development. 😉