The Closing Window

Everything is Energy, Everything in Berlin is Beautiful

Essays

TL;DR

Evil is the opposite of love, and it's winning. Not just in the Epstein files or the wars started on a whim, but in every moment we choose the couch over action. Walking through Berlin on a beautiful afternoon, surrounded by people completely unaware of the converging crises ahead, I realized the opportunity: when the crash comes, and it will, we can organize. The work starts now — projecting forward to the breaking point and working backwards to chronicle how we got there.

We are all energy. Everything is energy. All these energies interact with each other. This is not an all-encompassing description of how this universe behaves, but it's an accurate one.

I got to the topic of energy because I was trying to fight the desperation I feel as I consume news that reveals all the depraved acts described in the Epstein files. As I watch innocent children murdered, dismembered, disfigured, and traumatized as a result of the violent acts of greedy and misguided men. Starting wars as easily as we choose our outfits for the day. Maybe I was just not conscious of this earlier in my life. Maybe now that I have kids, I am more sensitive to all of it. But at no point in my life have I felt more certain that evil is winning.

Read More


Spec Driven Development Fans, You'll Love sdd-flow

AI Insights

TL;DR

Built a Claude Code skill that enforces research, spec, implementation, critical review, and code review phases for one-shotting features. Used it to build Redakt (GDPR PII anonymizer) with six features in two days. The skill and plugin are open source.

Last week I shipped Redakt, an open-source GDPR compliance tool. Six features for a web UI and REST API to help us better handle PII (Personally Identifiable Information).

I had been using the spec driven development (SDD) process for months through a plugin I built. When I saw that Claude's context window had expanded to a million tokens, I thought, "Why not push the limits of how I was developing software?" The SDD process was created to help with context management. It follows a traditional, rigorous software development cycle, but it was slow relative to how fast things move in the AI space. I decided to try one-shotting a feature using a skill to orchestrate the phases of a development cycle, and it worked surprisingly well.

The skill was not the only thing that optimized the process. Anthropic's release of --permission-mode auto last week was the push that I needed to remove the diapers. I was excited to try it out. This mode is limited for now to Teams accounts. It was flaky initially, but it gave me a taste of the forbidden fruit. I didn't want to go back to confirming every request. At home, I knew there was another option, a dangerous one, that I had avoided until now: --dangerously-skip-permissions. After close to a year of using Claude without a single "oh shit" moment, I decided to remove all guardrails and bypass permissions. It felt like the g-forces you see pushing a fighter pilot's face back in cockpit videos... and the feeling is addictive.

Read More


Meet Redakt: Practical GDPR Compliance for AI Teams

AI Insights

TL;DR

Telling employees "don't enter personal data into AI tools" doesn't work without giving them a way to comply. Redakt is an open-source PII anonymizer built on Microsoft Presidio that sits between your employees and their AI tools. Paste text in, get an anonymized version with placeholders. Paste the AI's response back, get the original values restored. The server never stores PII. It runs on your infrastructure, inside your network. No additional data processing agreements needed. The tool is free, the code is open.
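The anonymize-and-restore round trip can be sketched in a few lines. This is not Redakt's implementation (Redakt builds on Microsoft Presidio's recognizers for many PII types); it is a toy illustration of the placeholder idea, detecting only email addresses with a regex:

```python
import re

# Toy sketch of the anonymize/restore round trip described above.
# Real PII detection (names, addresses, order numbers, ...) needs a
# library like Microsoft Presidio; a regex only catches emails.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace each email with a placeholder; return text plus reverse mapping."""
    mapping: dict[str, str] = {}

    def sub(match: re.Match) -> str:
        placeholder = f"<EMAIL_{len(mapping) + 1}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    return EMAIL_RE.sub(sub, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Swap placeholders in the AI's response back to the original values."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe, mapping = anonymize("Contact anna@example.com about order 4711.")
print(safe)                    # Contact <EMAIL_1> about order 4711.
print(restore(safe, mapping))  # Contact anna@example.com about order 4711.
```

Note that the mapping never leaves the caller's side, which is the same property that lets Redakt's server avoid storing PII.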

Earlier this month I wrote about shadow AI and the compliance gap, discussing how employees using unapproved AI tools with personal data are creating quiet GDPR liability across Europe, and how the gap between what the law requires and what companies actually do grows wider every day.

The response made one thing clear: people know they have a problem. Current measures feel like we are all playing whack-a-mole.

The advice that post ended with was honest but incomplete. Department-specific guidelines, approved tool lists, clearer communication — all necessary, all insufficient. Because even with perfect policies, you still have the same fundamental problem: an employee sitting in front of ChatGPT with a paragraph of text containing a customer's name, email, and order history, and no practical way to strip it out before hitting enter.

So I built something.

Read More


Meet Sift: A Knowledge Base for Everything That Isn't a Note

AI Insights

TL;DR

Sift is a personal knowledge base I built over three or four months that ingests anything — URLs, PDFs, bookmarks, web pages, video and audio files — and makes it all searchable by meaning, not just keywords, with awareness of when things were saved. It runs on your own hardware, answers questions using your own saved material as sources, and sits alongside Obsidian rather than replacing it. I've open sourced it at github.com/pablooliva/sift. This post is the story of why it exists, what I learned building it, and what to expect if you clone it.


My Bookmarks Saved Me Money

A few weeks ago, I was designing a multimedia publishing pipeline for this site. I needed voice synthesis and video generation tools, and I had a rough budget in mind, something like $56/month across a few SaaS subscriptions.

Before committing, I searched my Sift knowledge base. It surfaced two ComfyUI integrations I'd bookmarked a week earlier: Qwen3-TTS for local voice cloning and LTX-2.3 for open-source video generation with portrait support. I'd saved them as simple URL bookmarks with short descriptions — the lowest-effort form of ingestion possible.

Those two results shifted the entire pipeline from a paid SaaS stack to a $0 local approach.

Read More


Agent: Do You Understand the Words Coming Out of My Mouth?

AI Insights

TL;DR

Your website is probably invisible to AI answer engines like Perplexity. By adding a handful of static files (llms.txt, per-post markdown files, JSON feeds) and some HTML tags (Schema.org JSON-LD, hreflang links, sitemap discovery), you can make your content easily discoverable, parseable, and citable by AI agents. None of it requires a framework or third-party service — just templates that run once and cover every future post.
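As an illustration of one of those static artifacts, here is a minimal Schema.org BlogPosting JSON-LD object of the kind you would embed in a `<script type="application/ld+json">` tag. The date and author values below are placeholders, not taken from the actual site; a publishing template would fill them in per post:

```python
import json

# Minimal Schema.org JSON-LD for a blog post. Values marked as
# placeholders should come from your publishing template.
post = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "Agent: Do You Understand the Words Coming Out of My Mouth?",
    "datePublished": "2026-01-01",  # placeholder date
    "inLanguage": "en",
    "author": {"@type": "Person", "name": "Author Name"},  # placeholder
}

print(json.dumps(post, indent=2))
```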

A few weeks ago, as I started to write more about AI, I figured that what I was posting to the web should be easily accessible by the technology I was writing about. I had Claude Code review my website structure and give me suggestions on how to make it more AI agent-friendly.

I also added automated translations to three other languages in my publishing process. Beyond making the content more accessible to more people, it has the side benefit of surfacing in AI agent searches in those languages.

This morning I came across Julia Solorzano's post on answer engine optimization, which gave me some additional ideas. Her post approaches it from a slightly different angle, and that made me realize this information might be useful to others.

Read More

Finding Meaning in Your Notes with CK Search

AI Insights

TL;DR

CK Search adds local semantic search to any directory, with a built-in MCP server for AI agent integration. Semantic search overcomes the limits of keyword search by finding notes based on meaning, not string matching. This post walks through the dead simple installation, configuration, and explains how semantic search finds what keyword search can't. It's not magic, and I'll be honest about where it falls short.

When Search Can't See What You Mean

In the first post, I described the moment when good organization still isn't enough. You can tag perfectly and still not find what you need, because you're searching for a concept and your note used different words. That's the ceiling of keyword search.

I have around 1,900 markdown files across three Obsidian vaults. If you're not an Obsidian user, you can just think of this as 1,900 text files in three different folders. They're tagged, linked, and structured with frontmatter, or more simply put, page properties. The organization is solid. But when I search for "authentication," I don't find my note about "session token management." When I search for "AI assistant," I miss the note about "Claude Desktop." The information is there. The search just can't see it.
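The ranking step that lets "authentication" find "session token management" can be illustrated with cosine similarity over embedding vectors. The three-dimensional vectors below are hand-made stand-ins; CK Search's actual embedding model produces vectors with hundreds of dimensions and is not shown here:

```python
import math

# Toy illustration of semantic ranking: notes and the query are embedded
# as vectors, and results are ordered by cosine similarity rather than
# keyword overlap. These 3-d vectors are hand-made stand-ins for real
# embeddings.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

notes = {
    "session token management": [0.9, 0.8, 0.1],  # semantically near "authentication"
    "gardening schedule":       [0.1, 0.0, 0.9],  # unrelated topic
}
query = [0.85, 0.75, 0.05]  # pretend embedding of "authentication"

ranked = sorted(notes, key=lambda n: cosine(query, notes[n]), reverse=True)
print(ranked[0])  # session token management
```

No string in the top note matches the query; only the vectors are close, which is exactly what keyword search cannot see.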

Read More


From Folders to Knowledge Base: How I Made My Notes Work for Me

AI Insights

Series: AI-Powered Knowledge Management — Post 1 of 9

TL;DR

I spent years meticulously building folder structures for my notes. First in OneNote, then in Obsidian. It felt productive and effective. It wasn't. The turning point was realizing that folders impose a single hierarchy on information that doesn't have one. What followed was a slow evolution: folders to tags, tags to relationships, relationships to semantic search, and finally to a system where I can ask my notes a question and get an answer with sources. This post is about that journey and what I learned at each stage.

The Folder Trap

I was a heavy OneNote user for years. It was perfect for me, at that time. I had nested sections inside sections inside notebooks, color-coded and carefully maintained. When I needed something, I navigated a hierarchy I'd built from memory. It worked as long as I remembered where I'd put things, and as long as the information fit neatly into one category.

It often failed to fit into just one category. This was frustrating, but I lived with it.

Read More


Feed the AI - Digitize Everything

AI Insights

Why I'm Feeding an AI Every Piece of My Work Life — and What It Actually Needs to Function


TL;DR

AI agents aren't limited by capability — they're limited by context. The models are already good enough; what's missing is the information they need to act on your behalf without constantly asking. By digitizing your meetings, decisions, tasks, and relationships into a queryable format, you give an AI the situational awareness a seasoned human assistant would develop over months. The most effective AI deployments aren't happening in enterprise systems — they're happening in individual workflows built by people who stopped waiting and started feeding the machine.

A few weeks ago, I wrote a post titled I Am No Longer Needed, about how I had automated so much of my project management role as an AI lead that I could clearly see the shape of what was coming next. That post was uncomfortably honest about the emotional weight of seeing so much change so unexpectedly quickly.

But I realized afterward that it didn't fully answer the question underneath all of it: why? Why am I so obsessed with capturing and digitizing information? Why does it matter that my meeting notes, my task lists, my requirements gathering, and my ideas all exist in a queryable format? Why did I spend months building a personal knowledge base?

This post is that answer.

Read More


FolderTether: Obsidian Breaks Out of the Vault

AI Insights

TL;DR

FolderTether is an Obsidian plugin that creates a bidirectional link between a note and any directory on your filesystem. From Obsidian, a single click opens the linked folder in Finder. From Finder, a .url file sitting in the folder opens the note directly in Obsidian. The vault stops being a walled garden and becomes a navigational index over your entire digital workspace.
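The Finder-to-Obsidian half of that bridge can be sketched using Obsidian's documented obsidian://open URI scheme and the plain InternetShortcut format that .url files use. This is an illustration under those assumptions, not FolderTether's source; the vault and note names are placeholders:

```python
from pathlib import Path
from urllib.parse import quote

# Sketch of the .url side of the bridge: a small file dropped into a
# project folder that, when double-clicked, opens the linked note via
# Obsidian's obsidian://open URI.

def write_note_link(folder: Path, vault: str, note: str) -> Path:
    uri = f"obsidian://open?vault={quote(vault)}&file={quote(note)}"
    link = folder / f"{note}.url"
    # .url files use the simple InternetShortcut ini format.
    link.write_text(f"[InternetShortcut]\nURL={uri}\n")
    return link

link = write_note_link(Path("."), "Work Vault", "Project Alpha")
print(link.read_text())
```

The other direction (Obsidian to Finder) would live in the plugin itself, shelling out to the OS file manager with the linked path.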

A colleague and I were talking about Markdown files and how they've become the default format for working in AI environments. Naturally the conversation drifted to Obsidian because I am a big fan. He mentioned something that stuck with me: he couldn't get comfortable with the vault concept, Obsidian's way of containing everything in a single managed folder, and it was holding him back from the tool entirely. He'd dabbled with this great note-taking system, but his actual work didn't live in it. His code was somewhere else. His documents were somewhere else. His Obsidian notes were somewhere else. Three separate places, no connection between them.

This is a problem almost every Obsidian user runs into, and most people just accept it. I had become so used to working in the Obsidian environment that I had become blind to this disconnect. Now that it was visible again, I didn't want to accept it.

Read More


The Notes Setup That Actually Works with AI

AI Insights

TL;DR

A decade of notes locked in OneNote, then a switch to Markdown files I thought was about portability. It turned out to be about something bigger: Markdown's combination of plaintext, embedded metadata, and explicit relationships makes it the right substrate for AI agents to work with your knowledge. The cloud tools with the beautiful interfaces made a bet that didn't pay off. The boring file-on-disk approach did.

I was a Microsoft OneNote user for a long time. More than ten years. I could not even begin to count how many notes I had built up in that time — meeting notes, project research, code snippets, random ideas. A decade of thinking, captured in a proprietary format on Microsoft's servers.

As a developer, I knew there was a principle about managing data well: keep content and configuration in the same file. Keep it portable. Don't let your tools hold your data hostage. I knew this principle. I applied it in code. And then I ignored it completely for my own notes, for over a decade.

I eventually made the switch — to Markdown files, and specifically to Obsidian. What surprised me was the timing. I moved before the current AI wave hit. I wasn't chasing a trend. I was just finally acting on something I already believed.

What I didn't anticipate was how right the decision would turn out to be, and for reasons that had nothing to do with personal productivity and everything to do with how AI agents actually work.

Read More


Shadow AI and the Compliance Gap That Won't Close Itself

AI Insights

TL;DR

Shadow AI — employees using AI tools the company hasn't approved — is quietly creating GDPR liability across Europe. Every prompt containing personal data triggers two regulatory frameworks simultaneously: GDPR and the EU AI Act. Most companies don't know this, and the gap between what the law requires and what employees actually do is growing every day. The August 2026 deadline for full EU AI Act compliance is five months away. Most companies haven't started.

Every time an employee pastes a customer name into ChatGPT, runs a vendor contract through DeepL, or asks Copilot to summarize their inbox using a free or unapproved account, they are operating in the dark — using tools the company hasn't sanctioned, with data the company hasn't authorized, under poorly understood legal frameworks. This is shadow AI: a quiet, daily habit that has become one of the most underestimated compliance risks in Europe.

I came to understand this not as a lawyer, but as an AI engineer. About two months into rolling out AI tools and creating guidelines and policies at work, I realized we had a problem: the guidance we'd worked hard to communicate to employees was missing something fundamental. Nobody had explained the difference between "confidential data" and "personal data," why that distinction matters when using any of the easily accessible AI tools, and how it still applies even when working in a B2B market. Prior to the ChatGPT era, most of us found it sufficient to use standardized office applications to get our daily work done. Obviously, our day-to-day work has now changed significantly.

The policies had to be redrafted. Not updated — redrafted. We needed specific language covering GDPR requirements and how they overlapped with the EU AI Act, examples employees could actually apply, and clear distinctions between the tools they could use freely and the ones that required approved enterprise accounts. What made the gap visible wasn't an audit or an incident. It was the approval requests. As employees started coming to us to review, vet, and approve the AI tools they were already using — or wanted to use — a pattern emerged: most of them had no framework for thinking about what data they could share and with what. Shadow AI is a reality at most companies, and I can tell you that the liability it creates is not being addressed with nearly enough urgency — especially as free tools continue proliferating and competing for everyone's attention.

Read More


The Workplace Blind Spot

AI Insights

TL;DR

The silence after I published a post about automating my own role told me something. Most people assume their job's complexity will protect them from AI — I held the same belief about my own work until I was proven wrong. The models don't need to get better; better harnessing of what exists today is enough to automate most of what companies do. What's coming isn't gradual.

A few weeks ago, I published a post about automating my own role. I timed it deliberately — sharing it with colleagues before a casual lunch with management, hoping to spark some real conversation about what's happening with AI and what it means for our work.

The response was almost nothing.

A couple of colleagues joked nervously that it made them uncomfortable. Management said nothing. It's been two weeks, and still no substantial feedback. Everyone is busy, life is full, there's no shortage of terrible things demanding our attention. Maybe they haven't read it.

But the silence got me thinking. And what I landed on is something I've started calling the workplace blind spot.

Read More


Close Enough to See

Essays

Throughout a large portion of my life, I have been at the center of, or at least close to, issues that have shaped American society and are now rippling outward across the Western world.

This isn't a credential. It's a pattern I can't unsee.

The Bubbles

During my college years in the late 1990s and early 2000s, I was studying computer science and working side jobs as a web designer. Technologically, that meant having design skills and being able to string together HTML — simple stuff. But because I was inside that world, I could see what was coming. I remember the moment clearly: sitting around after smoking a joint, thinking about how much of a scam the dot-com bubble was. The details have faded, but the clarity hasn't. It was all so exaggerated, so fake. And it was all going to come crumbling down. The money crashed, but the technology lived on.

After college, I worked in what was essentially a boiler room — think The Wolf of Wall Street, but on a much smaller scale. I didn't make any real money there, but I spent several years getting a visceral introduction to the brutality of the financial markets and the type of people attracted to that environment. I also got a glimpse of the type of person you eventually become if you stay.

Read More


I Am No Longer Needed

AI Insights

How I Automated My Own Role as AI Lead — and What That Actually Feels Like


This past week, I realized something uncomfortable: I had automated myself out of a significant portion of my job as a project manager. Not theoretically. Not as an exercise. I had actually done it — piece by piece, meeting by meeting, analysis by analysis — until I looked at what was left of my daily work and thought, this doesn't need me anymore.

I'm an AI lead at a mid-sized company in Germany. My job is to evaluate, implement, and manage AI solutions across the business. The irony is not lost on me that the tools I was hired to champion are the same tools that are now doing chunks of my job better than I was.

Let me be honest: AI has not done everything perfectly. There have been failures and outputs that needed heavy editing. But those failures are small compared to the sheer volume and quality of what it has delivered. The balance sheet tips overwhelmingly toward the machine.

Here's how it happened.

Read More


Which AI Should You Use in February 2026?

AI Insights

Source: A Guide to Which AI to Use in the Agentic Era by Ethan Mollick (Feb 18, 2026)

Practical Guide: Which AI Should You Use in February 2026?

The original article by Prof. Ethan Mollick goes much deeper into each of these topics with examples and comparisons — it's well worth reading in full — but here's a practical summary of the key takeaways.

The Big Picture

AI is no longer just chatbots. You used to type a question and get an answer. Now AI can actually do tasks for you — research, create spreadsheets, build presentations, organize files. To pick the right tool, you need to understand three simple concepts:

3 Things That Matter

  • Model: the AI "brain" (how smart it is). Analogy: the engine in a car.
  • App: the website or program you use to talk to the AI. Analogy: the car itself.
  • Harness: the tools the AI can use to get work done. Analogy: what's attached to the car (trailer, plow, etc.).

Read More


AI's Social Trap

AI Insights

Source: The "Innovator's Dilemma" of our society by DocIsInDaHouse (Feb 11, 2026)

Core Thesis

The article applies Clayton Christensen's "Innovator's Dilemma" framework — where successful companies fail by optimizing the present while missing disruptive shifts — to society as a whole in the age of AI.

The key problem: a posteriori thinking. Drawing on Kant's distinction between a posteriori (knowledge from experience) and a priori (knowledge through reason), the author argues that our society, politics, and ethics boards operate almost exclusively reactively. We regulate social media only after it destabilizes democracies. We address data privacy only after massive abuse. This reactive approach worked when innovation was linear, but is fatal with exponential, disruptive technologies like AI.

Read More


The Software Factory: AI-Driven Development Without Human Code Review

AI Insights

Source: Software Factory by Simon Willison (Feb 7, 2026)

Simon Willison covers how StrongDM's AI team has implemented what Dan Shapiro calls the "Dark Factory" level of AI adoption — where no human writes or even reviews the code that coding agents produce. Their full writeup: Software Factories and the Agentic Moment.

Core Principles

StrongDM's AI team (founded July 2025, just 3 people) operates under radical constraints:

  • Code must not be written by humans
  • Code must not be reviewed by humans
  • If you haven't spent at least $1,000 on tokens/day per engineer, your software factory has room for improvement

The catalyst: with Claude 3.5 Sonnet revision 2 (October 2024), long-horizon agentic coding workflows began to compound correctness rather than compound errors. The November 2025 inflection point (Claude Opus 4.5, GPT 5.2) further improved reliability.

Read More


You Spent Your Whole Life Getting Good at the Wrong Thing

AI Insights

Source: The Algorithmic Bridge by Alberto Romero (Feb 7, 2026)

Core Thesis

AI agents (Codex with GPT-5.3, Claude Code with Opus 4.6) collapse execution into wishing. The bottleneck shifts from how to do things to what to do. Most people have spent their entire lives optimizing for the "how" and are now holding a genie lamp while still spending 99% of their effort on execution.

Read More