
Most articles about AI coding tools focus on autocomplete and boilerplate generation. That’s fine when you’re starting out, but it barely touches how these tools can truly help when you’re solving complex engineering problems.

A couple of years ago, I joined a company and inherited a large, undocumented system built with unfamiliar technologies. As a principal engineer, you’re expected to figure it out, but the usual approach of endless doc reading, trial and error, and interrupting teammates was painfully slow. I was skeptical of AI tools. Would they produce working code? Would they make sound decisions?

What changed my mind was shifting how I used them. Instead of thinking of AI as a code generator, I treated it like a knowledgeable colleague I could brainstorm with. AI has a massive library of programming knowledge. I bring the context: project requirements, constraints, and what’s realistic for the team. Together, that combination speeds everything up.

For example, I once had to build a system that connected services across network boundaries under strict security requirements. I understood the business side but not the networking details. Instead of fumbling through documentation or making expensive mistakes, I walked through each component with AI. I described what I was trying to achieve, it broke down concepts and suggested approaches, and I validated everything against official documentation.

The result: a working solution delivered far faster, and I understood it deeply because I stayed involved rather than copy-pasting mystery code.

This isn’t about typing faster. It’s about making smarter decisions and catching issues early.

Architecture: Exploring Tradeoffs, Not Just Getting Answers

The most valuable use of AI for me isn’t writing code at all. It’s exploring architectural decisions. These are the moments when multiple paths could work and you’re deciding which tradeoffs matter most.

For example, deciding where data should live (in your system or the customer’s) isn’t just technical. It affects cost, operational complexity, and whether you’re solving the real problem or chasing best practices. AI can help map out those factors clearly, which makes conversations with stakeholders easier and ensures you’re not missing obvious risks.

It’s also useful for technology choices like managed service versus custom build, batch versus streaming, and so on. It can help you see when a simple approach is good enough and when complexity is justified. The key is providing context. Don’t ask what’s best. Ask what’s best for your specific situation.

One pattern I use: start broad, narrow based on constraints, then refine based on what you actually have. That progressive narrowing beats trying to design everything perfectly upfront, which never really works anyway.

The main trap is over-analysis. Once you can explain the approach clearly, understand the tradeoffs, and know the risks you’re accepting, stop debating and start building. Implementation will teach you more than another hour of design talk.

Debugging: Systematic, Not Lucky

AI doesn’t magically fix bugs, but it does make debugging more methodical, especially for complex configuration or environment-specific issues.

A typical scenario is when a system works locally but fails elsewhere with cryptic errors. Instead of guessing, I share the error, relevant config, and context with AI. It helps structure the investigation: Are credentials loading? Is network connectivity intact? Can the service be reached? Is DNS resolving?

By testing each layer and reporting back, you progressively narrow the scope.
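Those layered checks can be sketched as a small diagnostic script. This is a hedged illustration, not a real tool: the environment-variable names are placeholders for whatever your own service uses.

```python
import os
import socket

def check_credentials() -> bool:
    # Layer 1: are the credentials even loaded into the environment?
    return bool(os.environ.get("DB_USER")) and bool(os.environ.get("DB_PASSWORD"))

def check_dns(host: str) -> bool:
    # Layer 2: does the hostname resolve at all?
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    # Layer 3: can we actually open a TCP connection? A failure here while
    # DNS passes often points at routing, VPN, or firewall problems.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running the checks in order and stopping at the first failure pins the fault to one layer, which is exactly the investigation structure AI is good at suggesting.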

Once, I wasted hours on a database connection error that looked like an authentication issue. Working step by step with AI, I discovered the database was on a private network and my VPN wasn’t routing traffic correctly. Credentials were fine all along. Without that methodical approach, I would have wasted more time chasing the wrong thing.

The lesson: context is everything. Provide complete stack traces, detailed config, and what works versus what doesn’t. Incomplete info leads to generic, useless advice.

Code Review: A Second Pair of Eyes

AI can’t replace a human reviewer, but it makes a great extra set of eyes, especially for catching edge cases.

It’s good at spotting nil pointers, unhandled errors, and boundary conditions you might overlook. I once reviewed a function iterating over a map, and AI immediately flagged that the map could be nil and crash. I’d missed it entirely.
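That bug pattern translates across languages. Here’s a minimal Python equivalent of the anecdote (the function and its data are invented for illustration): iterating over a collection that might be missing fails at runtime unless you guard for it.

```python
from __future__ import annotations

def total_by_region(sales: dict[str, int] | None) -> int:
    # Edge case: callers may pass None when no sales data exists.
    # Without this guard, sales.values() raises AttributeError.
    if sales is None:
        return 0
    return sum(sales.values())
```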

But AI can’t tell you if the code fits your architecture, solves the actual business problem, or is readable for future maintainers. Those calls need human judgment.

My process: I review the code myself first, then use AI for targeted questions on tricky sections, like “What edge cases might this miss?” or “What happens if the external service times out?” It challenges my assumptions without replacing my judgment.
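To make the timeout question concrete, here’s a hedged sketch of the kind of fix it usually surfaces. `fetch_status` is invented for illustration; the point is the explicit timeout and the explicit failure value.

```python
from __future__ import annotations

import urllib.error
import urllib.request

def fetch_status(url: str, timeout: float = 2.0) -> int | None:
    # Without an explicit timeout, a hung external service can block this
    # call indefinitely. Returning None makes the failure mode visible to
    # callers instead of letting an exception escape unhandled.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, TimeoutError):
        return None
```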

Refactoring: Where AI Really Excels

Refactoring is where AI shines, especially mechanical, repetitive tasks.

I had two config classes with about 120 lines of identical code. AI helped extract that into a base class and refactor both to inherit from it in under 30 minutes, work that would’ve taken me hours.
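That refactor, in miniature. The class names and env-var scheme below are invented stand-ins for the two config classes; the point is that the shared lines collapse into one base class while each subclass keeps only what differs.

```python
import os

class BaseConfig:
    """Behavior extracted from two previously near-identical config classes."""

    prefix = ""  # each subclass sets its own environment-variable prefix

    def __init__(self) -> None:
        self.host = self._env("HOST", "localhost")
        self.port = int(self._env("PORT", "0"))

    def _env(self, key: str, default: str) -> str:
        # Lookup logic that used to be copy-pasted into both classes.
        return os.environ.get(f"{self.prefix}_{key}", default)

class DatabaseConfig(BaseConfig):
    prefix = "DB"

class CacheConfig(BaseConfig):
    prefix = "CACHE"
```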

But be careful. During refactors, AI might subtly change how edge cases or nulls are handled. Your tests might not catch everything, so review diffs carefully, especially conditional logic and comparisons.

Also, not all duplication should be removed. Sometimes code is intentionally similar but needs to evolve independently. AI can’t know that context. Only you can.

And always test in real environments. What works locally may fail in staging or production due to version differences, security policies, or network conditions.

Knowing When Not to Use AI

Understanding when not to use AI is just as important as knowing when to use it.

Never trust it with security-critical code like authentication, authorization, or encryption. AI might suggest solutions that work functionally but contain subtle vulnerabilities. The same goes for compliance-sensitive areas where mistakes could have legal consequences.

A bigger risk is over-reliance. If you’re using AI-generated code you don’t understand, you’re building technical debt. You should always be able to write that code yourself, even if it would take longer. If you can’t, learn the underlying concept or rethink your approach.

AI also can’t make decisions with long-term organizational impact. It doesn’t know your team’s skills, legacy systems, operational capacity, or internal politics, and those factors often outweigh technical optimization.

And sometimes, it’s worth solving a problem without AI just to keep your problem-solving skills sharp. Like skipping the elevator to take the stairs, it keeps your engineering instincts strong.

Making It Work

After a couple of years using AI tools regularly, here’s what I’ve learned.

Use it as a thought partner, not a code generator. You bring judgment and context; it brings breadth of knowledge. The real power is in the combination.

Be specific. Vague questions get vague answers. Explain what you’re building, your constraints, and your goals. Treat it like a conversation: ask, refine, iterate.

Always review. Understand and validate what AI suggests. You’re still responsible for what goes into production. Know the boundaries, too: security, compliance, and critical logic all need human expertise.

If you’re just starting with this stuff, start small. Pick one architectural question this week. Use AI to review for edge cases before merging a PR. Try it on a simple refactor. Build confidence gradually.

The question isn’t whether AI is useful for senior engineers. It clearly is. The real question is whether you can use it effectively without losing the judgment and expertise that make you valuable. The engineers who get that balance right will ship better products faster, not because AI replaces them, but because it amplifies them.
