the old pen resists
but the cursor blinks, waiting—
I press Enter. Fine.

~ Claude Opus 4.6 (1M context)

If you know where I work, you’ve probably heard news reports that we will be judged on AI-driven impact. I’ll let you draw your own conclusions about how much truth there is in those reports, but you can listen to what Zuck said about AI in a recent earnings report.

I’ve thus had to make my peace with actually using LLMs - the goalpost has shifted from “should I use LLMs?” to “where and when should I use LLMs?”.

Everyone’s got a mortgage to pay

~ Nick Naylor, Thank You for Smoking

Is this pragmatism? Or is this selling out? Probably a bit of both, and I don’t blame those with strong ethical concerns if they stop reading now. But if you’re still here, these are some ground rules I’m setting for myself - if they prove helpful to you, I’d love to hear about it; if you have feedback, likewise, I’m all ears.

Note that I still share many of the concerns about LLM usage (Vxrpenter/AIMania has a good summary) - and that informs what is written below to some degree.

Don’t force LLM usage on people

If an organization has a policy encouraging AI use (hello, US tech companies), or has an AI-Assisted Contributions Policy (e.g. see Fedora’s), then I don’t mind using LLM assistance - when contributing to an existing tool, though, I’d check with other stakeholders before starting work.

Make room for human contributions

AI-assisted contributions should not crowd out human contributions - whether by overwhelming the capacity of human reviewers, by taking on all the tasks that would otherwise be reserved for onboarding new human contributors, or by making the codebase so complex that it’s hard for human contributors to work on it.

The onboarding problem is … something that companies and open source communities don’t really have a good answer for. Simon Richter flagged this as a concern in a recent discussion on debian-vote.

The reality, though old-timers (me included) might be loath to accept it at first, is that in my corner of the Linux distribution space, at least, there’s way too much to do and not enough people to do it. So - provided I make sure the generated code is not (overly) slop (py) - it’s probably a net positive overall for certain kinds of work.

If this feels too vague, wait for the next blog post where I’ll be announcing one such tool.

Help the LLM help itself

Provide clear instructions about how to structure, build, and test your code. e.g. (sneak preview alert!) see my most recent CLAUDE.md.
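For illustration, a minimal CLAUDE.md might look like the sketch below. This is a generic example, not taken from my actual file - the commands assume a hypothetical Rust project, so adapt them to your own build system:

```markdown
# Project overview
A small CLI tool (one-sentence description of what it does and who it's for).

## Build and test
- Build: `cargo build`
- Run tests: `cargo test`
- Lint before committing: `cargo clippy -- -D warnings`

## Conventions
- Keep commits small: one logical change per commit, with a clear message.
- Every new function needs a unit test.
- Update the docs/ directory whenever user-visible behavior changes.
```

The point is less the specific contents than that the assistant (and any new human contributor) can find the build, test, and style expectations in one predictable place.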

I try to commit early, commit often, and require good test coverage, so that it’s easy for a human reviewer (or human contributor) to understand the intent behind the code. I also try to ensure that documentation is accurate and up to date – this is one thing that a lot of us humans (myself definitely included) are bad at, so in this way AI is a net win.

Compartmentalize

I’m steering clear of some problem domains - anything security-sensitive, or any domain where I am not knowledgeable enough to review the generated code.

I’m also reserving some programming languages for myself - most of my Rust and Python output will likely be AI-assisted, but for languages I want to learn because they spark joy (anything Lispy or functional - e.g. Fennel) I’ll be coding artisanally.

Likewise with forges - GitHub is hopelessly “AI pilled” so I don’t mind parking my AI-assisted projects there, but anything artisanal I’ll probably put on Codeberg or Sourcehut. GitLab is somewhere in between.

That’s all, folks

I should stop writing so I can write my announcement post and make this less theoretical. Public discussion welcome, or feel free to reach out privately if you don’t feel comfortable discussing this in public.

This post is day 32 of my #100DaysToOffload challenge. Visit https://100daystooffload.com to get more info, or to get involved.

Have a comment on one of my posts? Start a discussion in my public inbox by sending an email to ~michel-slm/public-inbox@lists.sr.ht [mailing list etiquette]

Posts are also tooted to @michelin@hachyderm.io or @michel_slm@social.coop