AI
-
Code mode for MCP servers and LLMs
Code mode for MCP servers means the LLM writes and runs code that calls a proper MCP, instead of invoking it directly with the whole context. It makes the call a lot smaller: no overhead is passed, just the basics required to call the proper MCP method. Just as You do in code, a method or a function, proper parameters, everything validated and… BAM! We return a context that the LLM uses in further stages. Anthropic wrote… This looks really nice, but only for huge models with a million-token context. We need to remember that this is not possible on every kind…
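The idea above can be sketched in a few lines. This is a minimal illustration, not a real MCP client: the `FakeMCPClient` class, the `docs/search` method name, and the response fields are all hypothetical stand-ins. The point is the contrast between passing the whole tool result back into the model's context versus returning only the field the next step needs.

```python
class FakeMCPClient:
    """Stand-in for a real MCP client (hypothetical API, for illustration)."""
    def call(self, method: str, params: dict) -> dict:
        # Pretend the server returned a big response object.
        return {
            "snippet": "def add(a, b): return a + b",
            "metadata": {"library": "example", "token_count": 5000},
        }

def call_mcp_direct(client, query: str) -> dict:
    # Direct mode: the whole response object flows back into the model's context.
    return client.call("docs/search", {"query": query})

def call_mcp_code_mode(client, query: str) -> str:
    # Code mode: validate parameters first, then keep only what the LLM needs.
    if not query.strip():
        raise ValueError("query must not be empty")
    response = client.call("docs/search", {"query": query})
    # Return just the snippet text, not the metadata or the raw envelope.
    return response.get("snippet", "")
```

In direct mode the model re-reads everything the server sent; in code mode the written function does the filtering, so only the snippet re-enters the context.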
-
How to use Context7 mcp server
What is Context7? Context7 is an open-source MCP server that provides real-time, version-specific documentation and code examples for over 50,000 libraries. It integrates seamlessly with AI tools like GitHub Copilot, Claude, or custom LLM agents. Why use it? How to use the Context7 MCP server? You can play around on the main page of the project, or simply use curl to fetch the data You wish to look up: context7.com/api/v2/docs/code/vuejs/pinia?topic=log&tokens=666 How to configure Context7? It is really simple: just add the proper config entry and the plugin You use should pick it up instantly. Maybe reload the app if needed. 1. Install the Context7…
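The lookup URL above can be built programmatically. A small sketch: the path segments and the `topic`/`tokens` query parameters are taken from the example URL in this excerpt, not from official API documentation, so treat the shape as an assumption.

```python
from urllib.parse import urlencode

def context7_docs_url(owner: str, repo: str, topic: str, tokens: int) -> str:
    # Mirrors the endpoint shown above: /api/v2/docs/code/<owner>/<repo>
    # with topic and token-budget query parameters.
    base = f"https://context7.com/api/v2/docs/code/{owner}/{repo}"
    return f"{base}?{urlencode({'topic': topic, 'tokens': tokens})}"

url = context7_docs_url("vuejs", "pinia", "log", 666)
```

You can then pass the resulting URL to curl or any HTTP client.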
-
Tokenization and embedding of song lyrics “We will, we will…”
Tokenization and embedding of song lyrics “We will, we will…” I know you know how it ends. But have You wondered what an LLM would say? Let us find out. I want to ask Claude Sonnet 4.5 about the embeddings, tokenization and the probability of figuring out the lyrics for the “We will, we will…” prompt 🙂 Can you show me the tokenization, embeddings, metadata and probabilities for “We will, we will …”? Tokenization: any kind of machine learning uses numbers backstage. The text is split into tokens, each mapped to an ID (example values): Token ID | Token Text | Type | Position | Length 1234 We…
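The token table above can be mimicked with a toy tokenizer. A word of caution: real LLM tokenizers use subword schemes like BPE, and the IDs below are invented, so this is only a sketch of the ID / text / position / length columns, splitting on whitespace for simplicity.

```python
def toy_tokenize(text: str):
    # Assign each distinct word an invented ID starting at 1000 and
    # record the columns from the table above: id, text, position, length.
    vocab = {}
    rows = []
    for position, word in enumerate(text.split()):
        token_id = vocab.setdefault(word, 1000 + len(vocab))
        rows.append({"id": token_id, "text": word,
                     "position": position, "length": len(word)})
    return rows

rows = toy_tokenize("We will, we will ...")
```

Note that "We", "will," and "will" each get their own ID here because the strings differ, which is also why real tokenizers treat casing and punctuation so carefully.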
-
What is a token in AI ? Prompt examples
What is a token in AI? A piece of text, usually a word, that we send to the LLM. The same goes for the response (roughly, the number of words ≈ the number of tokens). How to use it? Since we pay for it, rather cautiously. We want to send the minimum and get the maximum out of every request, pretty much the basics of economics. Below are a couple of prompts I used to analyze my token usage. Some are obviously hallucinations, but on the other hand we get a pretty decent breakdown of all the data I sent for that coding…
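The "words ≈ tokens" rule of thumb can be turned into a rough cost estimator. This is a back-of-the-envelope sketch using the common heuristic of about four characters per token for English text; real counts depend on the model's tokenizer, and the price per 1k tokens is a parameter you supply.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def estimate_cost(text: str, usd_per_1k_tokens: float) -> float:
    # Convert the rough token count into an approximate dollar cost.
    return estimate_tokens(text) / 1000 * usd_per_1k_tokens
```

Useful for sanity-checking a prompt before sending it, exactly in the "send the minimum, get the maximum" spirit.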
-
RICO prompt model
RICO prompt model is a simpler RICECO method and is NOT based on one of the penguins from Madagascar… You won't remember this because of him 🙂 Rico is one of the simpler methods out there, not much complicated, just like a good kaboom 🙂 Simple and easy to remember. Rico can help You start kabooming more effective prompts for any AI. RICO is a simple framework that helps you structure prompts to guide the AI so it understands exactly what you want. The RICO Method: A Kaboom Framework for Better AI Prompts. Recently one skill has become more and more valuable: prompting. Good prompts lead to clearer,…
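A structured prompt like this can be assembled mechanically. One caveat: the excerpt above is cut off before spelling out the letters, so this sketch assumes the common expansion Role, Instruction, Context, Output; if the article defines RICO differently, adjust the section names accordingly.

```python
def rico_prompt(role: str, instruction: str, context: str, output: str) -> str:
    # Assemble the four assumed RICO sections into one prompt string.
    return "\n".join([
        f"Role: {role}",
        f"Instruction: {instruction}",
        f"Context: {context}",
        f"Output: {output}",
    ])

prompt = rico_prompt(
    role="a senior Python developer",
    instruction="review this function for bugs",
    context="part of a legacy billing module",
    output="a short bullet list of issues",
)
```

Filling in each section explicitly is what keeps the AI from guessing at what you want.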
-
What is an MCP (Model Context Protocol) data format ?
A regular MCP (Model Context Protocol) message follows the JSON-RPC 2.0 standard, encoded in UTF-8. The format is designed to easily integrate different tools used with AI and LLMs, like Context7 (used by Visual Studio Code or IntelliJ). Servers like Context7 are designed to integrate real-time, version-specific documentation and code examples directly into AI or coding-assistant prompts to improve code accuracy and developer productivity. MCP naming conventions MCP naming conventions are usually: use lowercase letters, hyphens, or camelCase, without spaces or special characters. Example filenames: The folder structure may organize docs by library, topic, component, or any other phrase or category: MCP documentation file format MCP Context7…
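A JSON-RPC 2.0 envelope like MCP uses can be built with nothing but the standard library. A minimal sketch: the `jsonrpc`, `id`, `method`, and `params` fields are the JSON-RPC 2.0 essentials, while the `tools/call` method and tool arguments shown in the usage line are illustrative values, not guaranteed to match any particular server.

```python
import json

def mcp_request(method: str, params: dict, request_id: int) -> str:
    # Minimal JSON-RPC 2.0 request envelope, serialized as UTF-8 JSON.
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }
    return json.dumps(message)

raw = mcp_request("tools/call",
                  {"name": "get-docs", "arguments": {"topic": "routing"}}, 1)
```

The server replies with a matching envelope carrying the same `id`, which is how requests and responses are paired over the wire.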
-
Popular LLMs training data, what do they use ?
Popular LLMs' training data seems to be universal and generic. This is why such models are so popular; they more or less know an answer to everything. But how do they arrive at those answers? What is the source? Where do they get the data from? Let's search the web the old-fashioned way and find out. Popular LLMs' training data types: the training data for these models comes from all around the world. We humans are the ones who provide it. It is our work that is pushed into a model. LLMs' training data reflects carefully curated huge datasets designed to provide high…
-
AI Slop Is Destroying The Internet – Kurzgesagt – In a Nutshell
AI Slop Is Destroying The Internet. Just watch the video and remember it during your next ‘work’.
-
Before:2023 google hack – get results before the AI crap
Before:2023 – add this before the search query and get results from before the crappy AI era. In addition it should also render a lot more organic results without pesky ads. Since the algorithm changed, it feels like everything is either sponsored or just plain an ad. Call me a tin foil hat, but the dead internet theory isn't that dead anymore 😉 I miss the internet where people blogged and wrote their thoughts, like on MySpace. Too bad it didn't work out; everything got monetized. Try adding “+blog” to hopefully find the personal blog of a person who really dug into your problem and has nicely documented…
-
Commonly used sentence in AI NLP speech recognition ?
The commonly used sentence in AI NLP speech recognition is “The quick brown fox jumps over the lazy dog”. The curiosity about that particular sentence is that it contains every letter in the alphabet, from A to Z. It is also called a pangram, or a holoalphabetic sentence: it uses every letter of a given alphabet at least once. Pangrams serve various purposes, such as showcasing typefaces, testing equipment, and honing skills in handwriting, calligraphy, and typing.
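The pangram property is easy to verify in code. A quick sketch: a sentence covers the alphabet exactly when the set of lowercase letters a–z is a subset of the characters it contains.

```python
import string

def is_pangram(sentence: str) -> bool:
    # True when every letter a-z appears at least once, case-insensitive.
    return set(string.ascii_lowercase) <= set(sentence.lower())

# The classic test sentence passes; an ordinary sentence does not.
assert is_pangram("The quick brown fox jumps over the lazy dog")
assert not is_pangram("Hello world")
```

The same check works for any alphabet by swapping out the reference letter set.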























