Can Wang Yin's Courses Help Programmers Resist the Impact of AI?

2025-09-20

Recently, nearly all the code in my work has been written by AI. I no longer need to write code myself, not even for small changes.

All I need to do is tell the AI what I want changed and what result I expect. It does the job very well: efficient and tireless. The advancement of AI technology has truly delivered on the promise of freeing up our hands.

This naturally raises a question: Will AI replace programmers? And can Wang Yin’s courses help programmers resist this wave brought by AI?

The Capabilities of AI

If you’ve used AI to write code, you’ll know that current AI still can’t truly replace programmers. There are many things it can’t do and many things it does incorrectly, requiring human judgment.

But even just six months ago, handing an entire project over to AI was unimaginable. Back then, I was still manually copying AI-written code at work, hesitant to let AI touch the project directly. Now, AI's Agent mode is actually usable. As I recall, Agent mode only emerged this year.

Compared to two years ago, the speed of AI's development is terrifying. Two years ago we were still in the GPT-3.5 era: no image recognition, and plenty of hallucinations even in plain text. Two years of progress later, GPT-5 is astonishingly capable.

So while AI is already very capable, what’s truly alarming is the speed at which it evolves.

The AI I Use Most

Subjectively, I find the GPT-5 model the most reliable.

Since I have real work to do and little time for trial and error, Claude Sonnet 4 feels too eager to act: it writes piles of code before a decision has even been made. It often proposes three options (A, B, and C) and writes code for all of them while still explaining the trade-offs. GPT-5, by contrast, is more deliberate; it asks which approach to take before writing any code. I haven't used Gemini 2.5 Pro much.

I use VS Code as my editor, with GitHub Copilot and ChatGPT Codex plugins, both set to use GPT-5. Codex handles complex tasks, and Copilot deals with lightweight changes.

So my experience with AI programming is based on this setup. If you use different models or tools, your experience may differ.

Clarifying the Topic

Back to the main question: Can Wang Yin’s courses help resist the impact of AI?

First, let's define which programmers are most affected by AI. The biggest impact falls on those with less than three years of experience: programmers who lack the judgment to weigh business context or choose a tech stack, who mainly execute tasks assigned by a supervisor, and whose work is measured by how much code they produce.

AI excels when the business context and the task's goal are clear. A team lead used to spend time explaining requirements to a junior; now they can describe the same requirements to AI and the job gets done. On top of that, AI is often more polite, more skilled, and more professional than a human expert.

Currently, AI can’t replace programmers who work in complex business scenarios and make decisions about tech stacks or architectural solutions. But these roles are already beyond basic technical positions and usually involve less hands-on coding.

So to refine the question: For programmers who still need to write code themselves, can Wang Yin’s courses enhance their competitiveness and at least delay AI’s impact—help them be eliminated later than others?

Wang Yin’s Courses

Following the “one exercise per day” plan, I’ve just completed the linked list section in my third round of practice.

(Side note: think about how ridiculous it is that interviews ask candidates to write code on the spot. Hand the interviewer one of Wang Yin's exercises and see whether they can solve it themselves. My suggestion: reject any company whose interview process includes live coding, regardless of the outcome.)

Four years ago, I wrote Linked List Reversal myself, and the code was clunky: it operated on the list by juggling previous and next pointers by hand.
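
For reference, here is a minimal sketch of that pointer-juggling style (Python, with names of my own choosing; this is an illustration, not the original code):

```python
# Illustrative sketch of the imperative style, not the original code.
# Reversal works by walking the list and re-wiring each node's pointer by hand.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse_in_place(head):
    prev = None
    while head is not None:
        nxt = head.next    # remember the rest of the list
        head.next = prev   # point the current node backwards
        prev = head        # the reversed prefix grows by one node
        head = nxt         # move on to the remembered rest
    return prev            # prev is now the head of the reversed list
```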

In Wang Yin's course, the linked list problems are solved with pure functional programming and recursion. The code is elegant and concise; there is no need to think about previous or next pointers at all. Even though this was my third time through the problem, and I had forgotten how I solved it before, I still hesitated: can you really reverse a list in just one or two lines? After working it out again, the answer was yes, it really can be that simple.
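
To give a flavor of what that looks like, here is my own sketch in Python over a cons-cell representation (not the course's actual solution or language): the whole reversal is a one-line recursion with an accumulator.

```python
# Illustrative sketch, not the course's actual solution.
# A list is either None (empty) or a pair (head, tail); no pointer bookkeeping.
def reverse(lst, acc=None):
    return acc if lst is None else reverse(lst[1], (lst[0], acc))

# Example: (1, (2, (3, None))) becomes (3, (2, (1, None)))
print(reverse((1, (2, (3, None)))))
```

Whether or not this matches the course's exact solution, the point stands: once the list is an immutable value, the pointer bookkeeping simply disappears.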

So my point is: Wang Yin’s course teaches more than just surface-level knowledge—it emphasizes deep thinking.

A beginner and an expert might both write working code to reverse a list, but the results are fundamentally different. When dealing with more complex system design problems, should we not aim for solutions that are simple, elegant, clear, and reliable?

Once you’ve seen how clean a linked list reversal can be, you realize there are many messy ways to do it—and that the cleaner ones are better. When solving more complex tasks, you may carry that same mindset: is there a more elegant way?

You need to have seen better solutions to know what better even means.

AI can do the work, but it can’t replace your skill level.

If you’ve never seen good code, you won’t know what it is, and you won’t even be able to tell the AI to generate it. It might generate three versions, and none satisfy you. Eventually, the AI asks, “What do you mean by ‘good code’?”—and you don’t know.

Just like I discussed in A Compatibility Layer Integrating Geth and CometBFT, AI can do the work, but it can’t replace your understanding.

Do You Need to Understand?

You might ask: Isn’t it enough for AI to do the work? Why do I need to understand? I don’t care about code quality—neither do the bosses or clients. Only those arrogant yet powerless senior employees or team leads care.

This is exactly the point—Wang Yin’s course isn’t just about code; it’s about thinking.

Honestly, the coding rules in Wang Yin’s course can be summarized in a few sentences. Most of them were already mentioned in his article “The Wisdom of Programming” (which I’ve noticed has since been deleted—he seems to have removed many valuable posts).

What truly matters is not the surface-level rules, but how the course simplifies complex knowledge—and why some approaches are better while others are worse.

That thinking can transfer to other problems: what is good, and what is bad?

You can ignore code quality, but you’ll still have to deal with data structures, algorithms, system architecture, complexity, performance, etc.—unless you truly never face any of that.

Can AI solve these problems? Yes, but you need to clearly explain the requirements, understand the AI’s responses, and make decisions. If you can’t understand what AI is telling you, then you can’t control it.

Again: AI can’t replace your understanding.

Wang Yin’s Perspective

I keep saying, “AI can’t replace your understanding”—that’s based on my experience using AI for coding.

Wang Yin expressed a similar core idea in his old article “The Limitations of Artificial Intelligence” and recently on Weibo: “AI has no mind-reading capability.” This matches everyday AI programming experience: you have to articulate clearly what you want before the AI will deliver it. If you fob the AI off with a vague request, it will fob you off with a vague result. If you can't express yourself clearly, the AI can't act clearly.

Can Wang Yin’s courses help programmers resist AI’s impact? Maybe… and if not, then what?