December 10: Support for OpenAI GPT-4 Turbo, Llama 2 and Mistral models; query by example, bug fixes

New Features

  • Added query by example to the contents query. Developers can specify one or more example contents, and the query will use vector embeddings to return similar contents (illustrated after this list).

  • Added query by example to the conversations query. Developers can specify one or more example conversations, and the query will use vector embeddings to return similar conversations.

  • Added vector search support to the conversations query. Developers can provide search text, which will be matched via vector embeddings to return similar conversations.

  • Added promptSpecifications mutation for directly prompting multiple models. This can be used to evaluate prompts against multiple models, or to compare different specification parameters, in parallel (see the sketch after this list).

  • Added promptStrategy field to Specification, which supports multiple strategy types for preprocessing the prompt before it is sent to the LLM. For example, the REWRITE prompt strategy asks the LLM to rewrite the incoming user prompt based on the previous conversation messages (a specification example follows this list).

  • Added suggestConversation mutation, which returns a list of suggested follow-up questions based on the specified conversation and related contents. This can be used to auto-suggest questions for chatbot users (example below).

  • Added new summarization types: CHAPTERS, QUESTIONS and POSTS. See usage examples in the "LLMs for Podcasters" blog post.

  • Added versioned model enums such as GPT4_0613 and GPT35_TURBO_16K_1106. When an unversioned enum such as GPT35_TURBO_16K is specified, Graphlit will use the latest production model version, as defined by the LLM vendor (shown in the specification example below).

  • Added lookupContents query to get multiple contents by ID in a single query (example below).
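
As a rough sketch of query by example and conversation vector search, the operations below assume the contents and conversations queries accept a filter with example references and search text. The filter field names (searchType, contents, search) and result shape are illustrative assumptions, not the verified schema; check the API reference for the actual input types.

```graphql
# Illustrative sketch only: the filter input fields below (searchType, contents,
# search) and the results wrapper are assumed and may differ from the actual schema.
query ContentsByExample {
  contents(filter: {
    searchType: VECTOR                  # assumed enum for vector similarity search
    contents: [{ id: "CONTENT_ID" }]    # one or more example contents to match against
  }) {
    results { id name }
  }
}

query ConversationsBySearchText {
  conversations(filter: {
    searchType: VECTOR                  # assumed enum for vector similarity search
    search: "pricing questions from last week"
  }) {
    results { id name }
  }
}
```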
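
The promptSpecifications mutation can be sketched as below, prompting two specifications in parallel to compare their responses. The argument names (prompt, ids) and the result fields are assumptions; consult the API reference for the actual signature.

```graphql
# Illustrative sketch only: argument names and result fields are assumed.
mutation PromptAcrossModels {
  promptSpecifications(
    prompt: "Summarize the key findings in two sentences."
    ids: ["SPECIFICATION_ID_1", "SPECIFICATION_ID_2"]  # one specification per model to compare
  ) {
    specification { id }   # which specification produced this result
    message                # the model's completion
  }
}
```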
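
To show how the promptStrategy field and the versioned model enums fit together, here is a hypothetical createSpecification call. The input fields (type, serviceType, openAI.model) are assumptions about how specifications are configured, not a verified schema; only the GPT4_0613 enum and the REWRITE strategy type come from the notes above.

```graphql
# Illustrative sketch only: the specification input fields shown here are assumed.
mutation CreateRewriteSpecification {
  createSpecification(specification: {
    name: "GPT-4 (June 2023) with prompt rewriting"
    type: COMPLETION
    serviceType: OPEN_AI
    openAI: { model: GPT4_0613 }        # versioned model enum; an unversioned enum tracks the latest version
    promptStrategy: { type: REWRITE }   # rewrite the user prompt using prior conversation messages
  }) {
    id
    name
  }
}
```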
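
The suggestConversation mutation might be called as sketched below; the count argument and the suggestions result field are assumed names.

```graphql
# Illustrative sketch only: 'count' and 'suggestions' are assumed names.
mutation SuggestFollowups {
  suggestConversation(id: "CONVERSATION_ID", count: 3) {
    suggestions   # suggested follow-up questions to offer the chatbot user
  }
}
```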
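
And lookupContents can fetch several contents in one round trip, roughly as follows; the ids argument and results wrapper are assumed field names.

```graphql
# Illustrative sketch only: the 'ids' argument and 'results' wrapper are assumed.
query LookupMultipleContents {
  lookupContents(ids: ["CONTENT_ID_1", "CONTENT_ID_2"]) {
    results { id name state }
  }
}
```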

Bugs Fixed

  • GPLA-1725: Should ignore RSS.xml from web feed sitemap

  • GPLA-1726: GPT-3.5 Turbo 16k LLM is adding "Citation #" to response

  • GPLA-1698: Workflow not applied to link-crawled content

  • GPLA-1692: Mismatched project storage total size, when some content has errored

  • GPLA-1237: Add relevance threshold for semantic search
