Upgrading Elf Help to GPT 3.5 Turbo

In my last post, I mentioned that ElfHelp.ai was not on gpt-3.5-turbo because the prompt is designed to get the proper JSON formatting from text-davinci-003.

Now it is! Meaning that the results are a bit faster and much cheaper. My hunch is they are slightly better as well, but not by enough to say for sure without deeper investigation. (If the product became popular, we would do proper side-by-side evaluation or live experimentation.)

I mentioned that it was previously asking text-davinci-003 to generate a JSON object. We did this by ending our prompt with:

Recommendation should be in json array format:

[{ "idea": idea, "reason": reason, "product_name_amazon": product name split by a plus sign, "product_name_etsy": product name split by %20  }, ... ]
example [{ "idea": "A book", "reason": "they like to read", "product_name_amazon": "The+Great+Gatsby", "product_name_etsy": "The%20Great%20Gatsby" }, ...]

Besides being a bit verbose, gpt-3.5-turbo would not consistently generate the JSON array correctly. For example, on one occasion it left off the leading [. (My guess is in a few months this will feel like a very old problem, like when you couldn't get five fingers on a hand in MidJourney.)
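(For context, one band-aid we could have tried instead is patching common truncations before parsing. A rough sketch of that idea, assuming the response is otherwise valid JSON; `parseJsonArrayLeniently` is a hypothetical helper, not code we shipped:)

```typescript
// Hypothetical guard: patch a missing leading "[" (or trailing "]")
// before handing the model's output to JSON.parse.
function parseJsonArrayLeniently(raw: string): unknown[] {
  let text = raw.trim();
  if (!text.startsWith("[")) text = "[" + text;
  if (!text.endsWith("]")) text = text + "]";
  return JSON.parse(text);
}
```

We opted to change the output format instead, which avoids the problem at the source.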

Fixing this was also a good opportunity to reduce the size of the prompt, which affects latency and cost. Now the prompt ends with:

Respond in the following format with no other punctuation or details.

idea|reason|product name||

For example:
A book|because they like to read and you can never have too many books|the great gatsby||A tennis racket|so they can get exercise, which is heart healthy and fun|beginner tennis racket||

This terse custom tuple syntax shaves 45 characters off the prompt (305 vs. 350). The savings mostly come from combining both product names into one field and writing logic on our side to expand it into the '+' and '%20' variants, but removing all the quotes and brackets helps too. We've also slipped in a second example to give gpt-3.5-turbo a better sense of what we're looking for. In a few manual tests, it seemed to make results a bit better.
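The parsing side can be sketched roughly like this (`parseRecommendations` and the field names are illustrative, not the actual Elf Help code):

```typescript
// Hypothetical parser for the pipe-delimited format above:
// fields separated by "|", records separated by "||".
interface Recommendation {
  idea: string;
  reason: string;
  productNameAmazon: string; // words joined with "+"
  productNameEtsy: string; // words joined with "%20"
}

function parseRecommendations(raw: string): Recommendation[] {
  return raw
    .split("||") // one record per double pipe
    .map((record) => record.trim())
    .filter((record) => record.length > 0)
    .map((record) => {
      const [idea, reason, productName] = record.split("|");
      const words = (productName ?? "").trim().split(/\s+/);
      return {
        idea: (idea ?? "").trim(),
        reason: (reason ?? "").trim(),
        productNameAmazon: words.join("+"),
        productNameEtsy: words.join("%20"),
      };
    });
}
```

Deriving both link formats from one field is what lets the prompt carry a single product name per record.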

(You can probably tell the prompt is still not highly optimized. When Elf Help becomes open source, we'll be eager to see if anyone has ideas to improve the prompt further.)

A few other changes were also required, included below for completeness.

/* utils/OpenAIStream.ts */

// requested URL is different
// const res = await fetch("https://api.openai.com/v1/completions", {
const res = await fetch("https://api.openai.com/v1/chat/completions", {

  /* ... */

  // returned object has a different structure
  //const text = json.choices[0].text || "";
  const text = json.choices[0].delta?.content || "";

  /* ... */

}
/* pages/api/generate.js */

/* ... */

  // new model with slightly altered object structure
  // full file is available at https://github.com/Nutlope/twitterbio/blob/main/pages/api/generate.ts
  // model: "text-davinci-003",
  model: "gpt-3.5-turbo",
  // prompt: prompt,
  messages: [{ role: "user", content: prompt }],
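Putting those pieces together, the request body now looks roughly like this. The `temperature`, `max_tokens`, and `stream` values here are illustrative placeholders, not necessarily what Elf Help uses:

```typescript
// Example prompt stands in for the real Elf Help prompt.
const prompt = "Suggest a gift for someone who likes tennis.";

// Sketch of a chat completions request body; tuning parameters
// below are example values only.
const payload = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: prompt }],
  temperature: 0.7,
  max_tokens: 500,
  stream: true,
};
```

The key structural difference from the old completions API is that `prompt` becomes a `messages` array of role/content objects.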