The Closing Window

The Notes Setup That Actually Works with AI

AI Insights

TL;DR

A decade of notes locked in OneNote, then a switch to Markdown files I thought was about portability. It turned out to be about something bigger: Markdown's combination of plaintext, embedded metadata, and explicit relationships makes it the right substrate for AI agents to work with your knowledge. The cloud tools with the beautiful interfaces made a bet that didn't pay off. The boring file-on-disk approach did.

I was a Microsoft OneNote user for a long time. More than ten years. I could not even begin to count how many notes I had built up in that time — meeting notes, project research, code snippets, random ideas. A decade of thinking, captured in a proprietary format on Microsoft's servers.

As a developer, I knew there was a principle about managing data well: keep content and configuration in the same file. Keep it portable. Don't let your tools hold your data hostage. I knew this principle. I applied it in code. And then I ignored it completely for my own notes, for over a decade.

I eventually made the switch — to Markdown files, and specifically to Obsidian. What surprised me was the timing. I moved before the current AI wave hit. I wasn't chasing a trend. I was just finally acting on something I already believed.

What I didn't anticipate was how right the decision would turn out to be, and for reasons that had nothing to do with personal productivity and everything to do with how AI agents actually work.
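The claim about Markdown as a substrate for AI agents is easy to make concrete. Here is a minimal sketch, with a hypothetical note, of why plaintext works: the metadata (YAML frontmatter) and the explicit relationships (wikilinks) can be extracted with a few lines of standard-library code, so any agent or script can read the same files a human does. The note content and field names are invented for illustration.

```python
import re

# A hypothetical Obsidian-style note: frontmatter metadata plus [[wikilinks]].
note = """---
title: Meeting with ACME
tags: [project-x, meetings]
---
Discussed rollout. See [[Project X Plan]] and [[Risk Register]].
"""

def parse_note(text):
    """Split a Markdown note into frontmatter fields, body, and outgoing links."""
    meta = {}
    body = text
    if text.startswith("---\n"):
        head, _, body = text[4:].partition("\n---\n")
        for line in head.splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    links = re.findall(r"\[\[([^\]]+)\]\]", body)
    return meta, body.strip(), links

meta, body, links = parse_note(note)
print(meta["title"])  # Meeting with ACME
print(links)          # ['Project X Plan', 'Risk Register']
```

No proprietary export, no API, no vendor: the whole "database" is readable with `open()` and a regular expression, which is exactly the property an agent needs.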

Read More


Shadow AI and the Compliance Gap That Won't Close Itself

AI Insights

TL;DR

Shadow AI — employees using AI tools the company hasn't approved — is quietly creating GDPR liability across Europe. Every prompt containing personal data triggers two regulatory frameworks simultaneously: GDPR and the EU AI Act. Most companies don't know this, and the gap between what the law requires and what employees actually do is growing every day. The August 2026 deadline for full EU AI Act compliance is five months away. Most companies haven't started.

Every time an employee pastes a customer name into ChatGPT, runs a vendor contract through DeepL, or asks Copilot to summarize their inbox using a free or unapproved account, they are operating in the dark — using tools the company hasn't sanctioned, with data the company hasn't authorized, under poorly understood legal frameworks. This is shadow AI: a quiet, daily habit that has become one of the most underestimated compliance risks in Europe.

I came to understand this not as a lawyer, but as an AI engineer. About two months into rolling out AI tools and creating guidelines and policies at work, I realized we had a problem: the guidance we'd worked hard to communicate to employees was missing something fundamental. Nobody had explained the difference between "confidential data" and "personal data," why that distinction matters when using any of the easily accessible AI tools, or why it still applies when working in a B2B market. Before the ChatGPT era, standardized office applications were enough for most of our daily work. That work has since changed significantly.

The policies had to be redrafted. Not updated — redrafted. We needed specific language covering GDPR requirements and how they overlapped with the EU AI Act, examples employees could actually apply, and clear distinctions between the tools they could use freely and the ones that required approved enterprise accounts. What made the gap visible wasn't an audit or an incident. It was the approval requests. As employees started coming to us to review, vet, and approve the AI tools they were already using — or wanted to use — a pattern emerged: most of them had no framework for thinking about what data they could share and with what. Shadow AI is a reality at most companies, and I can tell you that the liability it creates is not being addressed with nearly enough urgency — especially as free tools continue proliferating and competing for everyone's attention.

Read More


The Workplace Blind Spot

AI Insights

TL;DR

The silence after I published a post about automating my own role told me something. Most people assume their job's complexity will protect them from AI — I held the same belief about my own work until I was proven wrong. The models don't need to get better; better harnessing of what exists today is enough to automate most of what companies do. What's coming isn't gradual.

A few weeks ago, I published a post about automating my own role. I timed it deliberately — sharing it with colleagues before a casual lunch with management, hoping to spark some real conversation about what's happening with AI and what it means for our work.

The response was almost nothing.

A couple of colleagues joked nervously that it made them uncomfortable. Management said nothing. It's been two weeks, and still no substantial feedback. Everyone is busy, life is full, there's no shortage of terrible things demanding our attention. Maybe they haven't read it.

But the silence got me thinking. And what I landed on is something I've started calling the workplace blind spot.

Read More


Close Enough to See

Essays

Throughout a large portion of my life, I have been at the center of, or at least close to, issues that have shaped American society and are now rippling outward across the Western world.

This isn't a credential. It's a pattern I can't unsee.

The Bubbles

During my college years in the late 1990s and early 2000s, I was studying computer science and working side jobs as a web designer. Technologically, that meant having design skills and being able to string together HTML — simple stuff. But because I was inside that world, I could see what was coming. I remember the moment clearly: sitting around after smoking a joint, thinking about how much of a scam the dot-com bubble was. The details have faded, but the clarity hasn't. It was all so exaggerated, so fake. And it was all going to come crumbling down. The money crashed, but the technology lived on.

After college, I worked in what was essentially a boiler room — think The Wolf of Wall Street, but on a much smaller scale. I didn't make any real money there, but I spent several years getting a visceral introduction to the brutality of the financial markets and the type of people attracted to that environment. I also got a glimpse of the type of person you eventually become if you stay.

Read More


I Am No Longer Needed

AI Insights

How I Automated My Own Role as AI Lead — and What That Actually Feels Like


This past week, I realized something uncomfortable: I had automated myself out of a significant portion of my job as a project manager. Not theoretically. Not as an exercise. I had actually done it — piece by piece, meeting by meeting, analysis by analysis — until I looked at what was left of my daily work and thought, this doesn't need me anymore.

I'm an AI lead at a mid-sized company in Germany. My job is to evaluate, implement, and manage AI solutions across the business. The irony is not lost on me that the tools I was hired to champion are the same tools that are now doing chunks of my job better than I was.

Let me be honest: AI has not done everything perfectly. There have been failures and outputs that needed heavy editing. But those failures are small compared to the sheer volume and quality of what it has delivered. The balance sheet tips overwhelmingly toward the machine.

Here's how it happened.

Read More


Which AI Should You Use in February 2026?

AI Insights

Source: A Guide to Which AI to Use in the Agentic Era by Ethan Mollick (Feb 18, 2026)

Practical Guide: Which AI Should You Use in February 2026?

The original article by Prof. Ethan Mollick goes much deeper into each of these topics with examples and comparisons — it's well worth reading in full — but here's a practical summary of the key takeaways.

The Big Picture

AI is no longer just chatbots. You used to type a question and get an answer. Now AI can actually do tasks for you — research, create spreadsheets, build presentations, organize files. To pick the right tool, you need to understand three simple concepts:

3 Things That Matter

Concept | What It Means | Analogy
Model | The AI "brain" — how smart it is | The engine in a car
App | The website or program you use to talk to the AI | The car itself
Harness | The tools the AI can use to get work done | What's attached to the car (trailer, plow, etc.)

Read More


AI's Social Trap

AI Insights

Source: The "Innovator's Dilemma" of our society by DocIsInDaHouse (Feb 11, 2026)

Core Thesis

The article applies Clayton Christensen's "Innovator's Dilemma" framework — where successful companies fail by optimizing the present while missing disruptive shifts — to society as a whole in the age of AI.

The key problem: a posteriori thinking. Drawing on Kant's distinction between a posteriori (knowledge from experience) and a priori (knowledge through reason), the author argues that our society, politics, and ethics boards operate almost exclusively reactively. We regulate social media only after it destabilizes democracies. We address data privacy only after massive abuse. This reactive approach worked when innovation was linear, but it is fatal with exponential, disruptive technologies like AI.

Read More


The Software Factory: AI-Driven Development Without Human Code Review

AI Insights

Source: Software Factory by Simon Willison (Feb 7, 2026)

Simon Willison covers how StrongDM's AI team has implemented what Dan Shapiro calls the "Dark Factory" level of AI adoption — where no human writes or even reviews the code that coding agents produce. Their full writeup: Software Factories and the Agentic Moment.

Core Principles

StrongDM's AI team (founded July 2025, just 3 people) operates under radical constraints:

  • Code must not be written by humans
  • Code must not be reviewed by humans
  • If you haven't spent at least $1,000 on tokens/day per engineer, your software factory has room for improvement

The catalyst: with Claude 3.5 Sonnet revision 2 (October 2024), long-horizon agentic coding workflows began to compound correctness rather than compound errors. The November 2025 inflection point (Claude Opus 4.5, GPT 5.2) improved reliability further.

Read More


You Spent Your Whole Life Getting Good at the Wrong Thing

AI Insights

Source: The Algorithmic Bridge by Alberto Romero (Feb 7, 2026)

Core Thesis

AI agents (Codex with GPT-5.3, Claude Code with Opus 4.6) collapse execution into wishing. The bottleneck shifts from how to do things to what to do. Most people have spent their entire lives optimizing for the "how" and are now holding a genie lamp while still spending 99% of their effort on execution.

Read More