April 26: Support for OpenAI image generation, GPT-4.1, o3 and o4-mini models, bug fixes
Graphlit now supports the latest OpenAI image generation models. As part of the publishing functionality, you can now publish contents to an image format, similar to how we already support publishing to ElevenLabs audio formats.
We have added support for the Gemini 2.5 Flash Preview model, with the GEMINI_2_5_FLASH_PREVIEW model enum.
We have added support for the OpenAI GPT-4.1 (full), GPT-4.1 mini, and GPT-4.1 nano models, with the model enums GPT41_1024K, GPT41_MINI_1024K, and GPT41_NANO_1024K.
We have added support for the OpenAI o3 and o4-mini models, with the model enums O3_200K and O4_MINI_200K.
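As a rough illustration of how one of these new model enums might be selected when configuring a specification, here is a minimal Python sketch. The payload field names (`serviceType`, `openAI`, `model`) and the `make_specification` helper are illustrative assumptions, not the exact Graphlit schema; only the enum values come from the release notes above.

```python
# Hypothetical sketch: choosing one of the newly added OpenAI model enums
# when building a specification payload. Field names are illustrative
# assumptions, not the exact Graphlit API schema.

NEW_OPENAI_MODELS = {
    "GPT41_1024K",       # GPT-4.1 (full)
    "GPT41_MINI_1024K",  # GPT-4.1 mini
    "GPT41_NANO_1024K",  # GPT-4.1 nano
    "O3_200K",           # o3
    "O4_MINI_200K",      # o4-mini
}

def make_specification(name: str, model: str) -> dict:
    """Build an illustrative specification payload for an OpenAI model."""
    if model not in NEW_OPENAI_MODELS:
        raise ValueError(f"unknown model enum: {model}")
    return {
        "name": name,
        "serviceType": "OPEN_AI",
        "openAI": {"model": model},
    }

spec = make_specification("Summarizer", "O4_MINI_200K")
print(spec["openAI"]["model"])  # O4_MINI_200K
```

The enum values themselves are what matter here; consult the Graphlit API reference for the actual specification shape.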
We have added support for injecting an image content source into a RAG conversation. Previously, you couldn't upload an image and then start a RAG conversation with a prompt like "Describe this image". Now the conversation will reference the recently uploaded image properly.
We have changed the semantics of the 'contents' field in the ContentFilter object. When this was originally implemented, we retrieved contents similar to the vector embeddings of those provided in the 'contents' field. However, the other fields like 'collections', 'feeds', etc. do not use similarity search, which had been confusing for many users. The 'contents' field now filters by a list of contents, and we have added a new field called 'similarContents' which provides the legacy similarity search by contents.
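To make the before/after semantics concrete, here is a simplified Python sketch of the two filter fields. The `apply_filter` function and the dict shapes are stand-ins invented for illustration; only the field names 'contents' and 'similarContents' come from the ContentFilter change described above, and the similarity search itself is deliberately stubbed out.

```python
# Illustrative sketch of the ContentFilter semantics change.
# 'contents' now filters by an explicit list (like 'collections' and
# 'feeds'); 'similarContents' carries the legacy similarity-search
# behavior. The logic below is a simplified stand-in, not the
# Graphlit implementation.

def apply_filter(all_contents: list[dict], content_filter: dict) -> list[dict]:
    """Filter a list of content dicts by a ContentFilter-like dict."""
    ids = {c["id"] for c in content_filter.get("contents", [])}
    if ids:
        # New behavior: plain membership filtering by content ID.
        return [c for c in all_contents if c["id"] in ids]
    if content_filter.get("similarContents"):
        # Legacy behavior: vector-embedding similarity search
        # (not reproduced in this sketch).
        raise NotImplementedError("similarity search requires embeddings")
    return all_contents

docs = [{"id": "a"}, {"id": "b"}, {"id": "c"}]
filtered = apply_filter(docs, {"contents": [{"id": "b"}]})
print([c["id"] for c in filtered])  # ['b']
```

Callers relying on the old similarity behavior should move their content list from 'contents' to 'similarContents'.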
GPLA-4075: Failed to parse large email payload. (Now truncating to 917504 characters.)
GPLA-3865: Wasn't capping screenshot height on HTML image extraction.
GPLA-4161: AskGraphlit: Unable to format conversation; exceeding token budget.