
Coding with AI

14 Feb 2026

Robot writing code via a computer. Generated with Nano Banana

In this post I’d like to share my thoughts on coding with AI and how it has affected me. Everyone is talking about this and I don’t have anything new to say, but I want to centralize the points I’ve heard and read so far and document this moment in time. It might be fun to come back to this after a while.


A New Era

I kept hearing leadership say that AI would fundamentally change how we work, and I initially dismissed it as hyperbole. Then I started using Claude skills last December. I started getting value from them even before the introduction of Opus 4.5: I created a skill to carry out a refactoring that would have taken me days, and it finished the work in a couple of hours with little interaction needed from me.

Now I’m at a point where I don’t type code in the editor anymore; I do everything through Claude Code or the terminal. I plan to write more details on how I use AI for my day-to-day work in a separate post.

Loss of Craft

According to Daniel Pink, the three pillars of intrinsic motivation are autonomy, purpose, and mastery. Delegating coding to AI has eliminated much of the mastery aspect of writing code. I’ve seen a lot of people at work who are sad or feel lost about this change.

I thought I’d be in the same boat because I love coding! I participated in ICPC competitions during college (I spent many school breaks happily practicing), I can do Leetcode for fun, and I still solve programming puzzles like Advent of Code. So I was surprised to find myself excited about writing prompts all day.

Reflecting on why, I realize that most coding in a day-to-day job is not that interesting. It revolves around writing boilerplate, fixing compilation and type errors, adhering to conventions, and working around human-induced complexity and abstractions. I don’t enjoy any of that. I still plan to keep solving programming puzzles for fun.

Skill Atrophy

Some scientific studies show that people who rely on GPS have worsened spatial navigation abilities. The decline is not permanent, but if you get lost during a hike, you won’t be able to rely on those skills in the moment.

I think a similar pattern will emerge for programming. Most people will have trouble writing code by hand, and once we delegate code reviews to AI, we’ll have trouble reading it too. This will be even worse for people joining the workforce now, since they won’t even have long-term memory to fall back on.

So in the event of an emergency blackout where we can’t rely on AI, very few people would be able to help, but maybe that’s ok.

Overwork

I saw this blog post by Simon Willison that resonated with me, claiming that people work more with AI. It seems like a paradox, given that AI is supposed to automate a lot of our work.

To me, the major reason for overworking is that I can now get work done in smaller chunks of time. Before, if I had 30 free minutes I’d browse my phone or do something else. Now that’s enough time to write a prompt and have Claude Code fix a small issue. @darkzuckerberg on Threads uses the term casual productivity for this.

This also increases the perceived opportunity cost: I feel like I’m constantly thinking of ways I could be leveraging AI to advance some project. This sentiment is also shared by Philip Su:

In honesty, I don’t even use the bathroom these days before prompting several AIs with work while I’m gone 120 seconds.

This also means that my attention span, which is already not great (see Context Switching), is going to get worse.

Career Expectations

The part that I worry about the most is whether this new paradigm will benefit me more or less than the average engineer.

I don’t enjoy writing documents and spending days in meetings to get alignment. While I don’t feel like I need to write the code myself, I do like contributing more directly to projects, which so far has translated into doing the work myself or mentoring junior engineers and interns, where my contribution is at the task level. Currently AI seems to provide the most value at this junior level, and I’m able to leverage it.

That suggests junior engineers will be at a disadvantage, because any engineer can use AI to do the same work, and junior engineers often lack the knowledge and experience to know what to build and to verify the result. On the other hand, they are more open to adopting new technologies and are less expensive for the company, so who knows.

The senior engineer who typically delegates concrete tasks to junior engineers can now delegate them to AI agents and get the work done faster. But the technical lead can do the same, and as AI becomes able to handle more complex workflows, it will be possible to operate at a higher level of abstraction, such as writing architecture documents.

My prediction is that the more AI advances, the more it will benefit higher-level engineers. I expect it will widen the impact gap between the levels, and I wonder whether this will be reflected in compensation. It might also expose high-level engineers who are good at people skills but not at engineering, because now they might be expected to produce concrete output, even if only prototypes.

We’ll see. I’m part anxious, part excited about how things will change in the next year or so.

Mind the Gap

Speaking of widening the impact gap, I expect that – at least while things are being figured out – some people will be vastly more productive with AI than others. Without AI, the ratio between the fastest typist and the slowest is probably not much more than 2x. It’s said that the mythical 10x engineer is often more efficient because they choose to solve a problem in a different way, or realize the problem doesn’t need to be solved at all. But it’s really hard to measure this efficiency objectively.

With AI, someone skilled at prompting might be able to orchestrate a large project without much intervention. They might keep it churning during sleep hours or weekends. I think this could lead to a 100x difference between the most and least productive person, at least until the tools mature.

Now that prototyping is cheap, people can also be compared more objectively. A company could have teams A and B work on a major project in parallel with different approaches. Someone from another team might vibe code your project over a weekend because they’re more skilled at prompting. This is pretty anxiety-inducing and also feeds into overwork (see Overwork above).

Context Switching

Over my career, a consistent piece of feedback I’ve received is that I work on too many projects in parallel. It takes a lot of effort for me to correct course and focus on fewer things, but I’m just excited about too many things and invariably end up relapsing.

So far I’ve found that with AI I can operate in this mode more effectively. I have a main project that I prioritize, but when Claude is “Shimmying…” I can context switch to a side quest. I’ve read that some people struggle with this, but since I’ve worked this way forever, it feels natural to me.

Relatedly, Boris Cherny, the creator of Claude Code, recently did a Q&A and mentioned that we’re in a golden age for ADHD. I’ve never been diagnosed with ADHD, but I check a lot of the boxes.

Jevons Paradox

One interesting aspect of individuals becoming 2x or 10x more efficient is what companies will end up doing: reducing the workforce proportionally or increasing the number and scope of projects. Jevons paradox suggests the latter: historically, when a resource becomes more efficient to use, total consumption of it tends to go up, not down.

So far it seems like most of the productivity gains are in coding, while code review and operational oversight still require human intervention. Also, many more changes landing means the surface area that needs to be supported increases.

I think it’s a matter of time before most human oversight is taken out of the loop, and that will disrupt the software engineering profession. Whether this cascades to other professions remains to be seen.

Coding Style Changes

It’s said that code is written once and read many times, so we should optimize for readability. I think many people do the right thing for stuff like naming variables, but I’ve found that when it comes to abstraction we often optimize for writing. The DRY (Don’t Repeat Yourself) principle is often used as justification for excessive abstraction, which makes the code short and dense but hard to follow.

AI has no trouble generating tons of code, so it can be as verbose as we want. It’s also more consistent and thorough than humans, so the argument for avoiding duplicated code (that it has to be updated in multiple places) might not be as strong now. Multiple layers of useless abstraction might also force the AI to bring in more context, degrading its performance.
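
To make this concrete, here’s a toy sketch (the functions and data shapes are hypothetical, not from any real codebase) contrasting a DRY-style generic helper with the duplicated but explicit versions an AI can cheaply generate and keep consistent:

```python
# DRY version: one generic helper. Short and dense, but to understand a
# call site you have to mentally expand the combination of flags it uses.
def process(items, key, reverse=False, unique=False, limit=None):
    result = sorted(items, key=key, reverse=reverse)
    if unique:
        seen, deduped = set(), []
        for item in result:
            k = key(item)
            if k not in seen:
                seen.add(k)
                deduped.append(item)
        result = deduped
    return result[:limit]

# Verbose version: some logic is duplicated, but each function reads top
# to bottom and says exactly what it does at the call site.
def cheapest_products(products, n):
    return sorted(products, key=lambda p: p["price"])[:n]

def newest_posts(posts, n):
    return sorted(posts, key=lambda p: p["date"], reverse=True)[:n]
```

The first version is what DRY pushes us toward; the second is the kind of repetition that used to be costly to maintain by hand but is cheap when an AI applies a change consistently across both functions.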

In a similar vein, I’m changing my mind about comments. I used to prefer sparse comments because it’s hard to keep them consistent with the code, but AI is pretty good at keeping them up to date, so I feel it’s okay to have more of them now. One thing AI is still bad at is explaining the “why”: it’s very good at explaining the “what”, but it doesn’t explain why it decided to implement things a certain way unless you ask. It probably copied this behavior from its training data.
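
Here’s a contrived example of the difference, with hypothetical code of my own making:

```python
import random
import time

def fetch_with_retry(fetch, retries=5):
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            # "What" comment, the typical AI output: sleep before retrying.
            # "Why" comment, what a reader actually needs: exponential
            # backoff with jitter, so that many clients recovering from the
            # same outage don't retry in lockstep and overload the server.
            time.sleep(2 ** attempt + random.random())
    raise ConnectionError("all retries failed")
```

The “what” comment just restates the next line; the “why” comment records a decision that isn’t recoverable from the code alone.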