Markdown Was Never Meant for This
A formatting shortcut for bloggers now underpins how we talk to AI, build websites, and store knowledge. Markdown's accidental rise tells us something important about how infrastructure really gets built.
Practical writing about artificial intelligence — what it can do, how to use it well, and what it means for the rest of us.
Most AI writing falls into two camps: breathless hype from people selling the future, or dense research from people studying it. Kindred Intelligence is neither. This site is written by a working IT professional who builds and maintains real systems, and who uses AI tools every day alongside code, compliance frameworks, and the other unglamorous stuff that keeps organisations running.
The writing here is grounded in actual use, not press releases. Reviews are based on months of daily work, not a quick demo. Guides are written for people who need to get something done. And when a tool has problems, or raises questions about cost, access, or who really benefits, those things get said clearly.
If you are looking for honest, practical thinking about AI, written for people who build things rather than people who invest in things, you are in the right place.
Reviews — Hands-on assessments of AI tools and platforms, tested over weeks and months of real use. Every review includes an honest verdict, pricing, and a clear sense of who the tool is and is not for.
Guides — Step-by-step resources for integrating AI into your work. Written for intelligent people who are not necessarily technical, with enough depth to be useful for those who are.
Blog — Analysis, opinion, and practical takes on where AI is heading and what it means for the people using it. No hype. No jargon without explanation. No pretending this technology is simple when it is not.
The biggest threat from deepfakes is not the fakes themselves. It is the permission they give everyone to dismiss real evidence as fabricated.
Everything you need to start writing in Markdown, from zero to confident, in about an hour of practice.
AI isn't laying off junior workers. It's ensuring they never get hired in the first place, and the long-term consequences could be devastating.
An AI agent applied to 278 jobs in a week without anyone's meaningful consent. The governance frameworks that should prevent this do not exist yet, and the ones being built solve the wrong problem.
AI tools were supposed to make work easier. New research suggests they are making it harder, and the people hit worst are the ones least able to push back.
The AI industry runs on a hidden workforce of data labellers and content moderators, many earning less than $2 an hour in conditions designed to stay invisible. A recent UN report makes that invisibility harder to maintain.
The Pentagon blacklisted Anthropic for refusing to remove safety guardrails. Hours later, OpenAI signed a deal with nearly identical red lines. The contradiction tells us everything about where AI ethics stands in 2026.
Companies are cutting jobs based on what AI might do, not what it actually does. Workers are paying the price for efficiency gains that have not materialised.