September 26: Support for Google AI and Cerebras models, and latest Groq models
New Features
💡 Graphlit now supports the Cerebras model service, which offers the `LLAMA_3_1_70B` and `LLAMA_3_1_8B` models.

💡 Graphlit now supports the Google AI model service, which offers the `GEMINI_1_5_PRO` and `GEMINI_1_5_FLASH` models.

We have added support for the latest Groq Llama 3.2 preview models, including `LLAMA_3_2_1B_PREVIEW`, `LLAMA_3_2_3B_PREVIEW`, `LLAMA_3_2_11B_TEXT_PREVIEW`, and `LLAMA_3_2_90B_TEXT_PREVIEW`. We have also added support for the Llama 3.2 multimodal model `LLAMA_3_2_11B_VISION_PREVIEW`.

We have added a new `specification` parameter to the `promptConversation` mutation. You can now specify the initial specification for a new conversation, or update the specification of an existing conversation, without requiring additional API calls.

⚡ We have changed the retrieval behavior of the `promptConversation` mutation. Now, if no relevant content is found via vector-based semantic search (given the user prompt), we fall back to any relevant content from the messages in the conversation. If there is no conversation content to fall back to, we fall back to the last ingested content in the project. This solves an issue where a first prompt like 'Summarize this' would find no relevant content; it will now fall back to retrieving the last ingested content.

⚡ We have renamed the Groq model enum from `LLAVA_1_5_7B` to `LLAVA_1_5_7B_PREVIEW`.
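A minimal sketch of the new `specification` parameter in use, assuming a specification has already been created; the `id` value and response fields shown here are illustrative placeholders, not a definitive schema:

```graphql
mutation {
  # Prompt a new conversation, passing the specification inline
  # instead of creating/updating the conversation in a separate call.
  promptConversation(
    prompt: "Summarize this"
    specification: { id: "YOUR_SPECIFICATION_ID" }
  ) {
    conversation {
      id
    }
    message {
      message
    }
  }
}
```

With the new fallback behavior, a first prompt like the one above retrieves the last ingested content in the project even when semantic search over the prompt returns nothing.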
Bugs Fixed
GPLA-3083: Not sending custom instructions/guidance with extraction prompt
GPLA-3146: Filtering Persons by email not working
GPLA-3171: Not failing on deprecated OpenAI model
GPLA-3158: Summarization not using revision strategy