My AI Journey
It's now evident that the world has shifted into a new workflow, especially for software engineers. The adoption of LLMs has been widespread, and those who don't embark on this journey will fall behind. In this post, I intend to share my vision and current experience with AI tools.
In 2025, companies started pushing hard for engineers to adopt AI tools in their workflows. I was skeptical at the beginning because my first experiences with AI in 2024 were frustrating. Initially, I tried GitHub Copilot, and it was helpful for small function implementations. People got very excited about things like implementing regex validations. At the time, I didn't proceed with it since I felt it wasn't worth paying for what it offered.
However, in the past year, these buzzwords stopped being just buzzwords. I noticed an increase in job postings requesting experience with AI tools. Demand for LLM skills is growing, and terms like MCP servers, RAG, and fine-tuning are becoming common.
I started studying machine learning approximately twelve years ago. At the time, there weren't many courses available. Machine learning and deep learning were already producing excellent results for specific problems, but their accessibility was limited. I tried the Machine Learning course from Andrew Ng on Coursera. However, I faced a recurrent problem: these courses start slowly, and after two or three weeks, the lectures become too magical. I call it magic because you see R or Python scripts doing a great deal based on parameters that influence your model's behavior, and that's it.
Hence, I lost my enthusiasm very quickly because my statistics background fell short. So, I decided to take a course focused on Fundamentals of Statistics for Data Science. And again, I felt the same thing. We started studying regression, supervised and unsupervised machine learning, and suddenly the concepts became too complex for me to grasp. As a result, adopting AI into my workflow took a bit longer.
Then, to my surprise, when I started studying RAG, LLMs, Anthropic, and OpenAI, the story was quite different. The concepts were easier to grasp, and the APIs were easier to understand. In principle, the only things you had to specify were the temperature and the top_p, so even though the abstraction was arguably higher, my understanding was better. Of course, each model has its own parameters, but you can pick them up as soon as you need them, and to me, that felt better.
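To make that concrete, here is a minimal sketch of the kind of call I mean, using the OpenAI Python SDK. The model name is only illustrative, and it assumes an `OPENAI_API_KEY` is set in the environment.

```python
# Minimal sketch: a single chat completion where only temperature and top_p
# are tuned. Model name is illustrative; OPENAI_API_KEY is assumed to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize RAG in one sentence."}],
    temperature=0.2,      # lower values make the output more deterministic
    top_p=0.9,            # nucleus sampling cutoff
)

print(response.choices[0].message.content)
```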
Then, I started coding with Cursor and Claude Sonnet, and I immediately saw value in the code suggestions it delivered. I don't vibe code all the time, only when I need to solve specific tasks. Sometimes I use it to validate that my reasoning is correct or to discuss architectural changes.
In July, I participated in a hackathon. Focusing only on asking LLMs to do your job is not productive. Despite their ability to scaffold many things, they often deviate from what you need. So I found myself repeatedly asking Cursor to stop and prompting it to focus on a specific, smaller task. I was able to ramp up on an AI project where a friend had implemented the basic architecture, so I only had to plug in an agent and some tools. With Elixir and macros, it was also very straightforward. We didn't use LangChain but a bespoke base architecture relying on system prompts and configurable tool prompts, roughly in the spirit of the sketch below. The experience was astounding, and I truly felt the power of AI.
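For illustration only, here is a rough sketch of that idea: a base system prompt plus a configurable prompt block per tool, with a simple dispatcher. The names are hypothetical, and this is Python rather than the Elixir macros we actually used, so treat it as a sketch of the shape, not the real implementation.

```python
# Hypothetical sketch of an agent built from a system prompt plus
# configurable per-tool prompts; the original project used Elixir macros.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    prompt: str                # snippet describing the tool to the model
    run: Callable[[str], str]  # executes the tool with a raw string argument


def build_system_prompt(base_prompt: str, tools: list[Tool]) -> str:
    """Compose the system prompt from a base prompt plus one block per tool."""
    tool_sections = "\n".join(f"- {t.name}: {t.prompt}" for t in tools)
    return f"{base_prompt}\n\nAvailable tools:\n{tool_sections}"


def dispatch(tools: list[Tool], name: str, argument: str) -> str:
    """Route a model-requested tool call to the matching implementation."""
    for tool in tools:
        if tool.name == name:
            return tool.run(argument)
    return f"Unknown tool: {name}"


if __name__ == "__main__":
    tools = [Tool("echo", "Repeats the given text back.", lambda arg: arg)]
    print(build_system_prompt("You are a helpful assistant.", tools))
    print(dispatch(tools, "echo", "hello"))
```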
If you ask me, today I get the most out of AI when I have a clear understanding of what I need to do, a clear vision of the steps I have to implement, and a clear scope to limit the tool or agent to. More than that, I get more from AI not by being lazy, but by being smarter about what I ask it.
I foresee that this area will continue to grow, but to date, what matters most still comes down to two things:
- Being able to assess the outcome of those AI tools
- Being able to provide a clear description of your problem and how you want to tackle it
Although hallucinations have improved significantly, when YOU don't know what to do, the AI won't be as beneficial.
In conclusion, despite all the fear and the apocalyptic mood many people are stirring up, today I value more than ever:
- Architectural experience
- Assessment and judgment experience
- The ability to break down problems into small tasks
Those, in combination with AI, will lead you to a smarter and more successful workflow when building your applications.