What is the best embedding model for OpenWebUI?

I am currently using Alibaba-NLP/gte-base-en-v1.5 but it's not very good.

As I understand it, embedding models are used to retrieve relevant parts of PDFs, text documents, etc. according to the user's prompt. So I imported some Harry Potter books (.txt files) and asked the AI (Qwen2.5 32B), "Can you recall the first paragraph of Chapter 10?", but it says, "The provided context does not contain the text from Chapter 10, so I cannot recall the first paragraph. Could you provide more details or clarify your request based on available information?" And when I checked the retrievals, they're completely different from what I want.

https://preview.redd.it/8uu0bztn33ce1.png?width=1909&format=png&auto=webp&s=19fa9a7980794f404452ea2746b2bb2d3aa16188

https://preview.redd.it/bywttxtn33ce1.png?width=1918&format=png&auto=webp&s=74c6be33fb3ccc798370c9b860caf91b2d88f338

The settings I used for the "Top K" value and the "RAG Template" are from this article.

https://preview.redd.it/o52ndjk443ce1.png?width=897&format=png&auto=webp&s=fd73b09de45a791a446ed17466079fc3c4f4168f

These are the retrievals, and not a single one of them includes a paragraph; all of them are just "CHAPTER (chapter number)" and "(chapter name)".
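A plausible explanation for this (an assumption on my part, I haven't read OpenWebUI's actual splitter code): if the document gets chunked on blank lines, a chapter heading surrounded by blank lines ends up as its own tiny chunk, and the string "CHAPTER 10" matches a "chapter 10" query far better than any paragraph text does. A minimal sketch of that effect:

```python
def naive_chunks(text: str) -> list[str]:
    # Hypothetical splitter: real RAG pipelines are more elaborate,
    # but splitting on blank lines has the same isolating effect.
    return [c.strip() for c in text.split("\n\n") if c.strip()]

book = "CHAPTER TEN\n\nThe Marauder's Map\n\nMadam Pomfrey insisted on keeping Harry..."
print(naive_chunks(book))
# ['CHAPTER TEN', "The Marauder's Map", 'Madam Pomfrey insisted on keeping Harry...']
```

So the heading and title each become standalone chunks with no paragraph text attached, which matches the retrievals in the screenshots above.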

Update (for anyone who finds this post): I went into my .txt file and removed the blank lines between the chapter name/number and the paragraphs, and now it retrieves the paragraph correctly, so I think that part is solved! But my Qwen2.5 still isn't able to use the paragraph even though it's the top retrieval (relevance of about 53%). I think I'll try different LLMs.
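If you'd rather script the edit than do it by hand, here's a rough sketch of the fix above (the `CHAPTER` heading pattern and the heading / blank / title / blank / paragraph layout are assumptions about how these .txt files are formatted; adapt the regex to yours). It glues each heading and its title line onto the first paragraph so the chunker keeps them together:

```python
import re

def glue_headings(text: str) -> str:
    """Drop the blank lines between a chapter heading, its title line,
    and the first paragraph, so all three land in the same chunk.
    Assumes 'CHAPTER <number>' / blank / <title> / blank / <paragraph>."""
    heading = re.compile(r"^\s*CHAPTER\b", re.IGNORECASE)
    lines = text.splitlines()
    out, i = [], 0
    while i < len(lines):
        out.append(lines[i])
        if heading.match(lines[i]):
            glued = 0  # glue the next two non-blank lines: title, then paragraph start
            i += 1
            while i < len(lines) and glued < 2:
                if lines[i].strip():
                    out.append(lines[i])
                    glued += 1
                i += 1  # blank separator lines are skipped, not emitted
            continue
        i += 1
    return "\n".join(out)
```

Run it over the book file and feed the output to OpenWebUI instead of the original; everything after the first paragraph is left untouched.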

Update 2: I have solved it. The fix was to remove the blank lines between the chapter name and the paragraph in my .txt file; once I'd done that, retrieval works. But my LLM still wasn't able to use the retrievals, so I went to the model settings in the admin panel and changed the context length in the advanced parameters from 2048 to 32k. (I don't think it needs to be 32k; it just makes my model slow, so I'll probably change it to 10k.) Now everything is working perfectly. Here are my settings and results:

https://preview.redd.it/umcbvs0etqce1.png?width=1912&format=png&auto=webp&s=2477b4fde140a35f880132c44dbb2357118fb768

https://preview.redd.it/0dxvfs3etqce1.png?width=1111&format=png&auto=webp&s=b7cdfe21eb6bf3b3181c0d639a21a9d34a57ea01

https://preview.redd.it/rfa2en3etqce1.png?width=1616&format=png&auto=webp&s=dd616eb3e7708a7e6d6834f2ec39eb76b456b783

IMPORTANT NOTE: Make sure to first start the chat with anything (e.g. "hi", "hello", etc.), because sometimes the LLM doesn't answer or takes forever otherwise.