-
Need to know, old boy. Principle of Minimum Access for LLMs
The Principle of Minimum Access for LLMs can be illustrated by Timothy Dalton as James Bond in „The Living Daylights”. Bond says “Sorry old boy, section 26, paragraph 5, need to know.” to a fellow agent and drives off escorting a VIP – a very important target. Behind the scenes is a practical idea that fits modern AI systems very well: an LLM or agent (as in the movie) should only be given the minimum access it needs to do its job. Not more, not less. Bare minimum. Of course Mythos probably could jailbreak anyway, but still… control is the best form of trust? This is not just a…
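The idea can be sketched in a few lines. This is a hypothetical toy, not any real agent framework: the tool names, the `AGENT_PERMISSIONS` table and the `tools_for` helper are all made up for illustration, assuming an agent runtime that hands each agent a toolbox.

```python
# Hypothetical sketch of minimum access for LLM agents:
# each agent is handed only the tools it strictly needs.
ALL_TOOLS = {
    "read_file": lambda path: f"contents of {path}",
    "write_file": lambda path, data: f"wrote {len(data)} bytes to {path}",
    "run_shell": lambda cmd: f"ran {cmd}",
}

AGENT_PERMISSIONS = {
    "doc-summarizer": {"read_file"},            # read-only, nothing else
    "release-bot": {"read_file", "run_shell"},  # still no write access
}

def tools_for(agent: str) -> dict:
    """Return only the tools this agent is allowed to call."""
    allowed = AGENT_PERMISSIONS.get(agent, set())  # default: no access at all
    return {name: fn for name, fn in ALL_TOOLS.items() if name in allowed}

toolbox = tools_for("doc-summarizer")
print(sorted(toolbox))  # the summarizer never even sees write_file or run_shell
```

The point of the sketch: an unknown agent gets an empty toolbox by default, which is the "need to know" default stance.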
-
Structure for instructions, agents and skills
This is all to provide a nice, clean idea of how to store your files so You can make Your AI assistant / LLM network understand what to do and how. According to Your more or less strict rules. This should help You achieve more repeatable results. The problem: generic answers. Do not confuse with generic functions – those rock! Out of the box, any LLM (here Copilot) creates generic code. You could call it 'vanilla' flavour. It doesn't know your conventions, library preferences, patterns / anti-patterns. This results in something that might work but is hard to maintain, totally different from the rest of the lot and…
-
Protect your work with poison ?
Protect your work with poison? Is there no other way around? Robots.txt? Nobody cares. Copyrights? LOL. Fair use policy? As long as I don't get caught. Protecting your work with poison is the oldest trick in the book, used especially by plants. Most of them have to be cooked to be eaten and digested with benefit for us. Why not do it with our work? I am thinking about a healthy amount of protection for our work. Does it mean anyone making a Rembrandt-style picture should pay the author of that style? For 70 years? How much and how long? Why money at all?…
-
Yet another basic AI glossary part 1
AI & Machine Learning Glossary for Beginners. Yet another basic AI glossary, part 1. This is the base of what I need to learn and understand better about all that „AI” and „LLMs”. Feel free to go through all of it and dive deeper into those subjects. Defined here are AI concepts, ideas, math functions, slang and anything that might be helpful in better understanding „the whole lot”. 1. Logit A logit is the raw output we get from a model before any functions are applied – before the softmax function. During classification, logits show the confidence of the model about every possible output. 2. Logit Definition (Mathematical View) In math, logits are real numbers output…
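The logit-to-probability step described above can be shown with a minimal softmax in plain Python. The three logit values are made-up toy numbers, not outputs of any real model:

```python
import math

def softmax(logits):
    """Turn raw logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 3-class example: the higher the logit, the higher the probability.
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print(probs)
```

Note that softmax preserves the ordering of the logits; it only rescales them into a probability distribution.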
-
How to reduce GitHub Copilot's premium requests usage and maximize efficiency
How to reduce GitHub Copilot's premium requests usage and maximize efficiency? Make a plan, a kaizen plan at best. Instruct precisely, cover edge cases, allow all tools to execute and pray the LLM will understand You. I want to share my simple methodology that can not only save money but also ease and smooth out the workflow. RTFM! As always, You could benefit from RTFM! Reading the foqing / friendly / flopsy manual. I know You never read it because real men don't do it (how about real women?). God knows if gamers did not have to go through the tutorial, they would…
-
Static VS AI code analysis ~13 tools
Static VS AI code analysis works best using the pros from both worlds. Go hybrid! Static tools understand the syntax and hardcoded parameters and are very strict. On the other hand, AI understands context, can figure out business logic and adapt to the codebase. Logic flaws or performance bottlenecks that rule-based scanners might miss – AI will put more effort into those. Static analysis limits Static tools scan for syntax errors, style violations, and basic security patterns using fixed rules. Always consistent and very fast, but they might generate false positives, ignore business logic, and require manual rule overrides. How often did You use @typescript-error 🙂 Do You code for the linter to pass, logic to…
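A made-up illustration of the gap described above: the function below is clean to any linter or type checker, yet the business logic is wrong in a way only context-aware review (human or AI) would flag. `apply_discount` is a hypothetical name invented for this example:

```python
# This passes syntax, style and type checks -- a rule-based scanner
# has no way to know that a discount should never exceed the price.
def apply_discount(price: float, discount: float) -> float:
    """Apply an absolute discount to a price."""
    return price - discount  # bug: can return a negative price

print(apply_discount(10.0, 25.0))  # -15.0: syntactically fine, semantically broken
```

This is exactly the class of logic flaw the hybrid approach is meant to catch.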
-
Code mode for MCP servers and LLMs
Code mode for MCP servers is about the LLM writing and calling code that uses the proper MCP tool instead of calling it directly with the whole context. It makes the call a lot smaller: no overhead is passed, just the basics required to call the proper MCP method. Just as You do in code – a method or a function, proper parameters, everything validated and… BAM! We return a context that the LLM uses in further stages. Anthropic wrote… This looks really nice, but only for huge models with 1M tokens of context. We need to remember that this is not possible on any kind…
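A toy sketch of the pattern described above. Everything here is hypothetical – `mcp_call` stands in for a real MCP transport and `query_database` is an invented wrapper – the point is only the shape: the model emits a small, validated call and only the small result re-enters the context.

```python
# Toy "code mode" sketch: instead of dumping the whole context into a
# direct MCP tool call, the LLM emits a short piece of code that calls
# a thin wrapper with only the validated parameters it needs.

def mcp_call(server: str, method: str, params: dict) -> dict:
    """Stand-in for a real MCP transport; returns a fake result."""
    return {"server": server, "method": method, "rows": [params["user_id"]]}

def query_database(user_id: int) -> list:
    """Wrapper the LLM-generated code calls: validate, then forward."""
    if user_id <= 0:
        raise ValueError("user_id must be positive")
    result = mcp_call("db-server", "get_user", {"user_id": user_id})
    return result["rows"]  # only this small slice re-enters the context

# The kind of one-liner the model would write in code mode:
print(query_database(42))
```

The validation step is the "proper parameters, everything validated" part of the teaser: bad input fails in code, before any MCP round-trip happens.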
-
How to use the Context7 MCP server
What is Context7? Context7 is an open-source MCP server that provides real-time, version-specific documentation and code examples for over 50,000 libraries. It integrates seamlessly with AI tools like GitHub Copilot, Claude, or custom LLM agents. Why use it? How to use the Context7 MCP server? You can play around on the main page of the project or simply use curl to fetch the data You wish, so look up: context7.com/api/v2/docs/code/vuejs/pinia?topic=log&tokens=666 How to Configure Context7 Using Context7 is really simple: just add a proper config entry and the plugin You use should pick it up instantly. Maybe reload the app if needed. 1. Install the Context7…
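The example URL from the teaser can be built programmatically. This sketch only assembles the URL shown in the text (library path `vuejs/pinia`, topic `log`, token budget `666`); whether other paths or parameters exist on the API is not assumed here, and the actual HTTP GET is left to curl or your client of choice.

```python
from urllib.parse import urlencode

# Base path taken from the example URL in the text.
BASE = "https://context7.com/api/v2/docs/code"

def context7_url(library_path: str, topic: str, tokens: int) -> str:
    """Build a docs-lookup URL in the shape shown in the article."""
    query = urlencode({"topic": topic, "tokens": tokens})
    return f"{BASE}/{library_path}?{query}"

url = context7_url("vuejs/pinia", "log", 666)
print(url)
```

Fetching that URL with curl is exactly the "play around" option the teaser mentions.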
-
Tokenization and embedding of song lyrics „We will, we will…”
Tokenization and embedding of song lyrics „We will, we will…” – I know you know how it ends, but have You wondered what an LLM would say? Let us find out. I want to ask Claude Sonnet 4.5 about the embeddings, tokenization and the probability of figuring out the lyrics for the „We will, we will…” prompt 🙂 Can you show me the tokenization, embeddings, metadata and probabilities for „We will, we will …” Tokenization Any kind of machine learning uses numbers behind the scenes. The text is split into tokens, each mapped to an ID (example values): Token ID Token Text Type Position Length 1234 We…
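The token-to-ID-to-vector pipeline from the teaser can be mocked up in a few lines. This is a toy, not a real tokenizer: the vocabulary, the ID values (reusing the example ID 1234 from the text) and the 3-dimensional vectors are all invented; real LLMs do the same lookup with vocabularies of ~100k tokens and vectors of thousands of dimensions.

```python
# Toy vocabulary: token string -> integer ID (made-up values).
VOCAB = {"We": 1234, "will": 1235, ",": 1236, "...": 1237}

# Toy embedding table: ID -> made-up 3-dimensional vector.
EMBEDDINGS = {
    1234: [0.12, -0.40, 0.88],
    1235: [0.55, 0.10, -0.23],
    1236: [-0.07, 0.31, 0.02],
    1237: [0.91, -0.15, 0.44],
}

def tokenize(tokens):
    """Map token strings to their integer IDs."""
    return [VOCAB[t] for t in tokens]

ids = tokenize(["We", "will", ",", "We", "will", "..."])
vectors = [EMBEDDINGS[i] for i in ids]
print(ids)  # [1234, 1235, 1236, 1234, 1235, 1237]
```

Note how the repeated "We will" maps to the same IDs, and therefore the same vectors – repetition is visible to the model at the embedding level.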
-
Popular LLMs' training data – what do they use?
Popular LLMs' training data seems to be universal and generic. This is why such models are so popular: they more or less know an answer to everything. But how do they arrive at those answers? What is the source? Where do they get the data from? Let's search the web the old-fashioned way and find out. Popular LLMs' training data types The training data for these models comes from all around the world. We humans are the ones who provide it. It is our work that is pushed into a model. LLMs' training data reflects huge, carefully curated datasets designed to provide high…

























