Hi Alex, an interesting question there. We want a model specialized in converting text into the appropriate numerical representation (an embedding in vector space), and embedding models such as sentence-transformers or text-embedding-ada-002 (by OpenAI) are built specifically to capture that semantic relatedness. Furthermore, these are typically small models that are flexible and easy to deploy and run anywhere. From your experience, have you been using a single LLM to generate both the embeddings and the summaries from document chunks? Do you see a benefit of using a single LLM to generate embeddings over the classic embedding models?
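
To make it concrete, here's a minimal sketch of what I mean by a dedicated embedding model, assuming the sentence-transformers package with all-MiniLM-L6-v2 as an example checkpoint:

```python
# Minimal sketch: embedding document chunks with a small, dedicated embedding
# model rather than a full LLM. Assumes the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

# all-MiniLM-L6-v2 is a small, fast embedding model; swap in whatever fits your stack
model = SentenceTransformer("all-MiniLM-L6-v2")

chunks = [
    "The quarterly report shows revenue grew 12% year over year.",
    "Revenue increased by twelve percent compared to last year.",
    "The office cafeteria now serves vegetarian options.",
]

# encode() maps each chunk to a fixed-size dense vector
embeddings = model.encode(chunks, normalize_embeddings=True)

# Cosine similarity reflects semantic relatedness between chunks
scores = util.cos_sim(embeddings, embeddings)
print(scores)  # the first two chunks should score much higher with each other than with the third
```

Because the model is small, this kind of encoding step is cheap to run close to your data, which is part of why I'd reach for it before an LLM for the embedding side of the pipeline.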