I have uploaded entire books to the latest Gemini and had the model reliably and accurately answer specific questions requiring knowledge of multiple chapters.
I think it works for information retrieval but not so well for instructions/guidance. That's why the standard advice is to put instructions at the start and repeat them at the end.
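A minimal sketch of that sandwiching pattern, assuming a generic prompt-building step (the function name and delimiters here are just illustrative, not any particular API):

```python
def build_prompt(instructions: str, document: str) -> str:
    """Sandwich the document between two copies of the instructions,
    since models tend to attend best to the start and end of the context."""
    return (
        f"{instructions}\n\n"
        f"--- DOCUMENT START ---\n{document}\n--- DOCUMENT END ---\n\n"
        f"Reminder of the instructions:\n{instructions}"
    )
```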
Or, under the covers, they're just putting all the text you fed in into a RAG database and doing an embedding search to find relevant snippets and answer your questions when asked directly. Which is a different approach than recalling instructions.
That’s pretty typical, though not especially reliable. (Although in my experience, Gemini currently performs slightly better than ChatGPT for my use case.)
In one repetitive workflow, for example, I process long email threads, large Markdown tables (a format from hell), stakeholder maps, and broader project context such as roles, mailing lists, and related metadata. I feed all of that into the LLM, which determines the required response type (from a given set), selects appropriate email templates, drafts replies, generates documentation, and outputs a JSON table.
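Roughly, the shape of that pipeline looks like the sketch below. The `call_llm` function and the response types are placeholders for whatever API and categories you actually use:

```python
import json

# Example response types; substitute your own fixed set.
RESPONSE_TYPES = ["status_update", "escalation", "meeting_request", "fyi"]

def call_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError

def process_thread(thread: str, tables: str, context: str) -> dict:
    prompt = (
        f"Project context:\n{context}\n\n"
        f"Data tables:\n{tables}\n\n"
        f"Email thread:\n{thread}\n\n"
        f"1. Pick the response type from {RESPONSE_TYPES}.\n"
        "2. Draft a reply using the matching template.\n"
        "3. Return a JSON object with keys: response_type, draft, documentation."
    )
    return json.loads(call_llm(prompt))
```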
It gets it right on the first try about 75% of the time, easily saving me an hour a day, often more.
Unfortunately, about 10% of the time the responses look excellent but are fundamentally flawed in some way. Just so it doesn't get boring.
Try reformatting the data from the Markdown table into a JSON or YAML list of objects. You may find that repeating the keys for every value gives you more reliable results.
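For example, a small helper like this (a sketch for simple pipe-delimited tables, not tied to any particular library) produces the repeated-key form:

```python
import json

def markdown_table_to_json(md: str) -> str:
    """Convert a simple pipe-delimited Markdown table into a JSON
    list of objects, repeating the column names for every row."""
    lines = [l.strip() for l in md.strip().splitlines() if l.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(header, cells)))
    return json.dumps(rows, indent=2)

table = """
| name  | role     |
|-------|----------|
| Alice | sponsor  |
| Bob   | engineer |
"""
print(markdown_table_to_json(table))
# [{"name": "Alice", "role": "sponsor"}, {"name": "Bob", "role": "engineer"}]
```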
Mind if I ask how you’re doing this? I have uploaded short stories of <40,000 words in .txt format, and when I ask questions like “How many chapters are there?” or “What is the last sentence in the story?” it gets them wrong. If I paste a chapter or two at a time and then ask, it works better, but that’s tedious…
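If the manual pasting is the bottleneck, a rough way to automate the chapter-at-a-time approach for chapter-specific questions (assuming a hypothetical `ask_llm` call and chapters marked with a "Chapter N" heading) would be something like:

```python
import re

def ask_llm(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    raise NotImplementedError

def ask_per_chapter(path: str, question: str) -> list[str]:
    text = open(path, encoding="utf-8").read()
    # Split at lines that start a new chapter, keeping each chapter intact.
    chapters = re.split(r"(?m)^(?=Chapter\s+\d+)", text)
    chapters = [c for c in chapters if c.strip()]
    return [
        ask_llm(f"{chapter}\n\nQuestion: {question}")
        for chapter in chapters
    ]
```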