AI has dominated tech discourse for the past few years. Its reception in the programming space has been mixed, ranging from hardcore vibe coders to those who won't even let the impure thought of AI enter their cerebrums. Personally, I've found AI useful for reducing tedious tasks so I can spend more time on the parts of development I enjoy. Below, I'll walk through practical examples and principles that help make AI genuinely useful in everyday programming work.
"I asked AI to build me a billion-dollar app and it didn't work"
The most common failure point in using AI is poor communication. People underestimate how much context and clarity AI needs. It’s not magic—it’s a language model predicting the next word based on probability.
Here are a few principles that help get the best out of AI:
Overall, do not treat an LLM like a human. Take the mindset that it cannot think for itself: assume it will make every mistake imaginable, and write your prompt to explicitly instruct it to avoid those mistakes. For example, rather than "add caching to my app", ask it to "write a function that memoizes the results of this API call with a 60-second TTL". The bigger the task you give it, the harder this is to do. Therefore, focus on small tasks that require minimal context.
Every project has little inefficiencies here and there: you always need to create a particular file when booting up the server, setting up a new microservice means spinning up four containers through docker compose, or you have to delete the files in some S3 bucket to reset the end-to-end test for a feature you're developing. These are perfect candidates for automation, but each one feels too minor to justify scripting from scratch.
However, these are perfect tasks to give to an LLM. They're small, focused, and fast to write if you know what you're doing. I've done this myself while working with a client, where I constantly needed to port-forward to services to debug behaviours in live environments, across numerous Kubernetes contexts. I had the LLM create a bash script that took in a JSON file listing the Kubernetes context, pod name, namespace, etc., then walked down the list and did the port forwarding for me.
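To make that concrete, here is a minimal sketch of the kind of script I mean. Mine was a bash script; the Python below captures the same idea, and the forwards.json filename and field names are hypothetical stand-ins for whatever your setup needs.

```python
import json
import subprocess
import sys

# Hypothetical input format (forwards.json): a list of targets, e.g.
# [
#   {
#     "context": "staging-cluster",
#     "namespace": "payments",
#     "pod": "payments-api-0",
#     "local_port": 8080,
#     "remote_port": 80
#   }
# ]
with open(sys.argv[1] if len(sys.argv) > 1 else "forwards.json") as f:
    targets = json.load(f)

procs = []
for t in targets:
    cmd = [
        "kubectl", "--context", t["context"],
        "--namespace", t["namespace"],
        "port-forward", t["pod"],
        f'{t["local_port"]}:{t["remote_port"]}',
    ]
    print("starting:", " ".join(cmd))
    # Launch each port-forward as a background process.
    procs.append(subprocess.Popen(cmd))

try:
    # Keep the script alive until the user interrupts it.
    for p in procs:
        p.wait()
except KeyboardInterrupt:
    for p in procs:
        p.terminate()
```

The point isn't the code itself; it's that an LLM can produce something like this in seconds, and it's small enough to verify at a glance.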
I have done this for many smaller things too, such as scripts to:
These aren’t hard problems, but LLMs save you minutes that add up fast.
One of the most useful features of LLMs is summarization. Poorly formatted logs, messy JSON, exception stacks—they’re a pain to parse manually. Instead of scanning a long blob of JSON for an error, I ask the LLM to extract the exception, identify the line number, and provide a rough diagnosis. It’s much faster and often more accurate.
Then there's the work that just requires a lot of typing but is trivial to complete.
For example:
These are all trivial to do by hand, but letting the LLM handle them can turn 10 minutes of effort into 10 seconds.
Some may let out an audible gasp at this suggestion: letting AI write your tests. But writing good, isolated tests often requires writing a lot of code. Scaffolding, mocks, and data set-up can be an extraordinary amount of busywork. Most of it is repetitive, and it can be a lot to keep in your head for complex features. If the AI can generate a test and you can read it and verify it checks what you expect, why not let it? Especially when you have to meet code-coverage metrics, which often push you into testing things that aren't of much value anyway.
A tip for getting the LLM to generate tests in a consistent style is to give it some guidelines. Fine-tuning those guidelines can itself be tedious, but you can use an LLM to help write them too, for instance with an LLM prompt generator.
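For a sense of what that scaffolding looks like, here's a minimal pytest-style sketch. The OrderService, its dependencies, and the behaviour under test are all hypothetical, but the shape of it (fixtures, mocks, set-up, assertions) is exactly the repetitive part an LLM handles well:

```python
from unittest.mock import Mock

import pytest

# Hypothetical service under test: an OrderService that depends on a
# repository and a payment client, both mocked out here.
from orders.service import OrderService


@pytest.fixture
def order_repo():
    repo = Mock()
    repo.get.return_value = {"id": "order-1", "total": 42.0, "status": "pending"}
    return repo


@pytest.fixture
def payment_client():
    client = Mock()
    client.charge.return_value = {"ok": True}
    return client


def test_checkout_charges_the_order_total(order_repo, payment_client):
    service = OrderService(repo=order_repo, payments=payment_client)

    service.checkout("order-1")

    payment_client.charge.assert_called_once_with("order-1", 42.0)


def test_checkout_marks_order_as_paid(order_repo, payment_client):
    service = OrderService(repo=order_repo, payments=payment_client)

    service.checkout("order-1")

    order_repo.update_status.assert_called_once_with("order-1", "paid")
```

None of this is hard to write, but multiply it across every method and edge case and it becomes exactly the kind of typing you're happy to delegate, then review.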
LLMs can sometimes be more effective than Google searches. When you don't quite know the proper words to Google, the LLM can fill this gap. It is effective to ask the LLM to "find people who have had a similar problem online and tell me how they fixed it". Using this, I have found people on StackOverflow and GitHub who had reported my specific issue and found a solution, resources that were surprisingly hard to find manually.
This can also be useful when the LLM is assisting you with a problem and gets stuck, no longer making sense; asking it to search for others who have hit the same issue can get it back on track. When I tried to Google for the same problem, it was much harder for me to find the resources that the LLM had found.
LLMs can be a good resource for discovering new solutions, technologies, and approaches to problems. I think we've all made Google searches for "real-time databases", "web socket alternatives", etc., where the top results are full of generic "top 10 real-time databases 2025 (published in 2024)" and "you won't believe these aren't web sockets!" listicles. LLMs can cut out this noise and surface established solutions without making you shovel through what the top Google results serve up. Beyond that, you can prompt the LLM to ask you questions you may not have considered, and even to highlight pitfalls before you start implementing.
AI isn't magic, and it won’t build your dream app from a single prompt. But if you learn to use it effectively—by breaking tasks down, being specific, and focusing on small wins—it can be a serious time-saver.
Used well, AI isn't just hype. It’s a tool that helps you focus on what matters and skip what doesn’t.