-
Dealing with GitHub Copilot errors
Dealing with GitHub Copilot errors is pretty irritating, especially when you are doing something more complex and suddenly, bam: a red error message. The easiest thing to do? Switching the model family often works, because different models sit in different capacity pools or have stricter preview limits. GitHub documents vaguely that if you are rate limited, you can wait and try again. Just type "continue" and keep your fingers crossed. Otherwise, check your usage patterns, change the model, or contact support. Common GitHub Copilot errors Some are run of the mill. You probably know them already; they are just put together here as a 'review': Why does changing the model help? Changing from one model family (vendor) to another…
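The "wait and try again" advice boils down to plain exponential backoff. A minimal, generic sketch in Python; the error type, delays, and function names are illustrative, not a documented Copilot API:

```python
import random
import time

def with_backoff(request, retries=4, base=1.0):
    """Retry a rate-limited call, waiting longer after each failure."""
    for attempt in range(retries):
        try:
            return request()
        except RuntimeError:  # stand-in for whatever rate-limit error you see
            # Exponential backoff with a little jitter, capped at 30 seconds
            time.sleep(min(base * 2 ** attempt + random.random() * base, 30))
    raise RuntimeError("still rate limited after retries")
```

The jitter spreads retries out so many clients hitting the same limit do not all retry at the same instant.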
-
Markmap vs Mermaid for Spec-Driven Development (SDD)
Markmap vs Mermaid for Spec-Driven Development is a choice you will face sooner or later. When you have to go "full on AI", documentation stops being a nice-to-have and becomes a mandatory part of the product. We need to convey a lot of information with as little text as possible, so we can easily read and digest it. Two popular options for diagram features in Markdown documentation are Markmap and Mermaid. They solve different problems, and the best choice depends on whether you want fast idea mapping or more structured diagrams for flows. When to use Markmap Markmap turns Markdown headings and bullet points into an interactive mind map. Useful for…
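To make the comparison concrete, this is the kind of plain Markdown outline Markmap consumes (the content is a made-up example); the heading and bullet hierarchy becomes the branches of the rendered mind map:

```markdown
# Release plan
## Backend
- API freeze
- migration scripts
## Frontend
- new dashboard
- accessibility pass
```

Mermaid, by contrast, needs its own diagram syntax inside a fenced block, which is more work but gives you flowcharts, sequence diagrams, and the like.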
-
AI and LLM articles – links to read
Some links and articles I think are worth your while, so you can read them and form your own opinion. The Top 100 Gen AI Consumer Apps — 6th Edition | Andreessen Horowitz When Small Models Outperform the Giant: A Practical Guide to Picking AI Brains – DEV Community https://builtin.com/data-science/step-step-explanation-principal-component-analysis How to run MCP Inspector modelcontextprotocol/inspector: Visual testing tool for MCP servers And some more for bedtime reading 🙂 https://techtrenches.dev/p/the-great-software-quality-collapse Vertical Slice Architecture PQ4R Method: 6 Steps to Learn Effectively | 1Focus
-
Psychological safety vs productive stress. How not to go #toxic.
Psychological safety vs productive stress is a conflict of interests. People usually think that too much "safety" can lead to laziness. Don't rest on your laurels, as they say? On the other hand, stress and some level of danger motivate us to work harder. Workplaces need a certain level of pressure to move forward. Deadlines, feedback, and responsibility all matter. But pressure is not the same as panic, and motivation is not the same as fear. The real challenge is to create an environment where people feel safe enough to speak honestly, while still being stretched enough to grow (or just get exploited and run into the ground?). Safety leads…
-
Jevons paradox in the AI workplace
Jevons paradox in the AI workplace starts when AI is introduced at work with a simple promise: do more in less time. In practice, the results are messy and the work stacks up. Jevons paradox is the idea that when something becomes more efficient, people and organizations often end up using more of it. In the AI workplace that can mean faster tools and automated workflows that do not always reduce the workload. They can also expand expectations, volume, and ambition to utilize the improved (AI-ed?) processes. Be aware At first glance, this sounds contradictory and just wrong. If a team can draft emails, summarize meetings, and generate reports in minutes,…
-
Need to know, old boy. Principle of Minimum Access for LLMs
The Principle of Minimum Access for LLMs can be illustrated by Timothy Dalton as James Bond in "The Living Daylights". Bond says "Sorry old boy, section 26, paragraph 5, need to know." to a fellow agent and drives off escorting a VIP, a very important target. Behind the scenes is a practical idea that fits modern AI systems very well: an LLM or agent (as in the movie) should only be given the minimum access it needs to do its job. Not more, not less. The bare minimum. Of course Mythos probably could jailbreak anyway, but still… control is the best form of trust? This is not just a…
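In code, "need to know" often reduces to an explicit allowlist checked before every tool call. A minimal sketch in Python, with all agent and tool names invented for illustration:

```python
# Hypothetical tool implementations an agent might be allowed to call.
TOOLS = {
    "read_document": lambda doc_id: f"contents of {doc_id}",
    "create_event": lambda title: {"title": title},
}

# Each agent gets the bare minimum set of tools for its job, nothing more.
ALLOWED = {
    "summarizer": {"read_document"},
    "scheduler": {"create_event"},
}

def call_tool(agent, tool, *args):
    """Refuse any tool the agent was not explicitly granted."""
    if tool not in ALLOWED.get(agent, set()):
        raise PermissionError(f"{agent}: need to know only, no access to {tool}")
    return TOOLS[tool](*args)
```

The key design choice is default-deny: an agent missing from the allowlist gets an empty set, not everything.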
-
WebExtension manifest.json permissions options list
The WebExtension manifest.json permissions options list is quite long. Let us see what we can access with our plugin for Firefox (FF). This should inspire you to write your own extension for quality of life and ease of doing things… or for avoiding them altogether. For avoidance I would highly recommend looking into a proper ad-block filter and creating / adding your own rules. Rule: do not scare users. Remember when your app requires and asks for access: explain to the user in detail what each permission is for, and go with the bare minimum. Just as a rule of thumb, so you are not accused of harvesting and selling data. It is always scary…
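As a reference point before going through the full list, a minimal manifest that follows the "bare minimum" rule might look like this. The name and host pattern are placeholders; `storage` and `activeTab` are real permission keys, and `host_permissions` is the Manifest V3 home for host access:

```json
{
  "manifest_version": 3,
  "name": "example-qol-extension",
  "version": "1.0",
  "permissions": ["storage", "activeTab"],
  "host_permissions": ["https://example.com/*"]
}
```

`activeTab` in particular is a good citizen: it grants access only to the tab the user is actively invoking the extension on, instead of `<all_urls>`.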
-
Structure for instructions, agents and skills
All of this is to provide a clean idea of how to store your files so you can make your AI assistant / LLM network understand what to do and how, according to your more or less strict rules. This should help you achieve more repeatable results, as expected. The problem: generic answers. Do not confuse these with generic functions; those rock! Out of the box, any LLM (here Copilot) creates generic code. You could call it 'vanilla' flavour. It doesn't know your conventions, library preferences, or patterns / anti-patterns. This results in something that might work but is hard to maintain, totally different from the rest of the lot, and…
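One possible layout for such a structure is sketched below. The exact paths are only a suggestion, except `.github/copilot-instructions.md`, which Copilot does read as repository-wide custom instructions:

```text
.github/
  copilot-instructions.md    # project-wide rules: style, libraries, anti-patterns
  instructions/              # narrower, task-specific instruction files
docs/
  agents/                    # one file per agent role and its boundaries
  skills/                    # reusable how-to notes an agent can be pointed at
```

Keeping rules in small, single-purpose files makes it easier to feed only the relevant ones into a given task.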
-
Protect your work with poison?
Protect your work with poison? Is there no other way around it? Robots.txt? Nobody cares. Copyrights? LOL. Fair use policy? As long as I don't get caught. Protecting your work with poison is the oldest trick in the book, used especially by plants. Most of them have to be cooked to be eaten and digested with benefit for us. Why not do it with our work? I am thinking about a healthy amount of protection for our work. Does it mean anyone making a Rembrandt-style picture should pay the author of that style? For 70 years? How much and how long? Why money at all?…
-
Yet another basic AI glossary part 1
AI & Machine Learning Glossary for Beginners Yet another basic AI glossary, part 1. This is the base of what I need to learn and understand better about all that "AI" and "LLMs". Feel free to go through all of it and dive deeper into those subjects. Defined here are AI concepts, ideas, math functions, slang, and anything else that might be helpful in better understanding "the whole lot". 1. Logit A logit is the raw output we get from a model before any normalizing function is applied, i.e. before the softmax function. During classification, the logits show the model's confidence about every possible output. 2. Logit Definition (Mathematical View) In math, logits are real numbers output…
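Since the glossary defines a logit as the raw pre-softmax output, a tiny Python sketch shows the relationship (the logit values are made up):

```python
import math

def softmax(logits):
    """Turn raw logits into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Raw model outputs (logits) for three hypothetical classes
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)  # larger logit -> larger probability, same ordering
```

Note that softmax preserves the ordering of the logits: the class with the largest logit always gets the largest probability.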