Running off the rails never worked out for me. I personalize differently for different LLMs. Truly getting an LLM to suit your personal style takes time, and you often have to redo it with every major version release. The gist of my common instructions: make it ask clarifying questions, put a TL;DR at the top of every response, be direct, and reward it for showing honest confidence scores with every response (this becomes critical in long conversations).
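The instructions above can be sketched as a reusable system prompt. This is a minimal illustration, not the commenter's actual wording; the prompt text, constant name, and helper function are all hypothetical:

```python
# Hypothetical custom instructions bundling the four habits described above:
# clarifying questions, a TL;DR up top, directness, and confidence scores.
CUSTOM_INSTRUCTIONS = """\
- If my request is ambiguous, ask clarifying questions before answering.
- Start every response with a one-line TL;DR.
- Be direct; skip filler and hedging boilerplate.
- End every response with an honest confidence score (0-100%).
"""

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the custom instructions as a system message, in the
    chat-message format most LLM APIs accept."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize this repo's build pipeline.")
print(messages[0]["role"])  # system
```

In practice this kind of prompt lives in the tool's settings (e.g. a custom-instructions field or a project-level notes file) rather than in code, and usually needs re-tuning after each major model release.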
> Once Claude is familiar with your repo (ask it to make copious notes), async coding is the next level of productivity. I now ask it to write a lot of the repetitive code paths.
Can you share your approach to context engineering for this?
My confidence in Mozilla was renewed after watching their CTO's interview about the early days: early decisions, solving bottlenecks, Google being a frenemy, a major bug in Bugzilla, etc.
This may be an unpopular stance, but when I read to learn something new, or read a fictional story, I've noticed that AI summaries miss the nuances of good writing that our brains intuitively pick up.
For some reason (maybe there's a psychological explanation), my information retention is better when I read from the source rather than from an AI summary.
The reward of learning something new in its original form vastly outweighs the ‘time saved’ by AI summaries.
What works for me: pass 1 is scanning through the original text, pass 2 is digging into the areas that need deeper understanding, and pass 3 is assessing my understanding using AI summaries.
The direction of travel here is less about summaries and more about creating time for the user to actually read the content in full.
We made some changes to that effect this week based on your feedback, too. E.g., now when you receive the briefing, you can go through the original content side by side to actually deep-read it.
The Beginning of Infinity by David Deutsch is in the same category. I haven't finished reading it yet; at best I get through 5 pages in one sitting. It's jam-packed with fascinating facts.