AI's impact on software engineering (as I see it)

When I first started using AI (Cursor, to be more specific) for coding, I was very impressed to see how it could generate such high-quality code, and I understand why it's now one of the most widely used tools for software engineers. As I continued to use these tools more regularly, I realized they are far from perfect. Their effectiveness depends heavily on how they are used and the context in which they are applied. In this blog post, I'd like to share more about my daily use of AI coding tools and where I find them truly useful.
Using Cursor for code navigation
Code navigation is the feature I find most helpful. Every mature organization has some form of monolithic codebase, and navigating through it isn't easy, especially when you are new to the team. If you know what you are looking for, AI can provide highly accurate explanations and guide you to the right files, functions, patterns, etc. When I joined ilert in June 2025, I found Cursor's code navigation and flow explanations very useful, and they made building context about the monolith much smoother. Without them, I would have had to put in much more effort and would have been more dependent on teammates to clarify my doubts and questions.
Boilerplate code and unit tests
In terms of code generation, AI is very effective at generating boilerplate code and writing unit tests. Cursor builds context for the entire project and understands existing coding patterns and styles. So when you want something trivial, like creating new DB tables and entities, generating test data, setting up tests, or developing mocks, it can easily do that by modelling it on the existing code. Similarly, it can generate a good number of unit tests.
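As a concrete illustration of the kind of boilerplate I mean, below is a minimal sketch of a JPA entity in the style Cursor can produce by modelling existing code. The entity, table, and column names are hypothetical, invented for this post.

```java
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

// Typical boilerplate AI handles well: an entity that mirrors the naming and
// annotation conventions already present in the codebase.
// All names here are illustrative, not from a real project.
@Entity
@Table(name = "alert_source")
public class AlertSource {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "name", nullable = false)
    private String name;

    @Column(name = "integration_key", unique = true)
    private String integrationKey;

    // Getters and setters omitted for brevity.
}
```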
For more complex tests, Cursor can also be helpful, but so far, my experience has been that it may not generate accurate results. Since AI takes care of boilerplate generation, coding and writing tests have become significantly faster. An important caveat is that you do need to review the code it creates, especially in business-critical areas, and verify its correctness. I am also extra careful with generated code in applications that are security-sensitive or critical.
Accelerating learning of newer tech stacks

Another place I find AI handy is when dealing with newer tech. AI reduces the time needed to master new technologies. Here are a few examples.
ServiceNow app
I was working on building a marketplace app for ServiceNow, a platform I had never worked with before. Getting acquainted with ServiceNow can be time-consuming. When I started, the only thing I knew was the task itself, and no technical details about ServiceNow, its apps, or the marketplace. With AI, you simply specify the type of app you need and mention that you are new to ServiceNow app development. The AI then provides steps to get started: it outlines different ways to develop an app, details the type of code you may need to write, and explains how to create an app using only configurations. Without AI tools, I would have eventually learned all these concepts after exhaustive Google searches and reading multiple sources, but with AI, it was faster, easier, and more efficient. ChatGPT and ServiceNow's internal coding assistant (similar to Cursor) helped me understand the platform in far less time, and I was able to create the POC before the deadline.
Learning Rust
Similarly, I had to pick up Rust for my work, and I found that ChatGPT and Cursor lowered the barrier to entry. For anyone not familiar with it, Rust is a fairly complicated language for beginners, especially if you are coming from Java. Rust's unique memory management and the concept of borrowing can be intimidating.
Generally, to learn any programming language, you need to understand its syntax, keywords, control flow, data types, etc. It was easy to map the basics of syntax and data types from Java. Once you have grasped the basics, you want to get your hands dirty with coding exercises: run into errors, understand why they occurred, and fix them.
This is where ChatGPT and Cursor were super helpful:
- Error decoding: Instead of hunting for answers on Stack Overflow, I could get detailed explanations of why an error occurred.
- Proactive learning: AI was able to list common roadblocks other developers face, on top of answering my own questions. It understood that I was new to Rust, and I found it very useful to learn about common pitfalls even before I encountered them.
- Efficient search: The internet is a sea of information; you can eventually find your answer after an exhaustive search across multiple websites, but AI gets you to an answer for your specific error much faster.
AI not only helps you code, but it also helps you evolve. It lowers the barrier to entry for complex technologies, allowing developers to remain polyglots in a fast-changing industry.
Learnings

1. Provide enough context for more accurate results
Providing AI with context about your needs is critical. Unlike humans, AI doesn't ask follow-up questions. When the request is vague, AI relies on generic public data and produces results that are far from accurate. If, instead, you provide better context, such as edge cases, preferred libraries, and more descriptive business requirements, AI produces better results. It's largely about how you ask: how precisely you frame your questions and how much information you give about your problem.
Example 1. File processing standards
In my previous workplace, we were implementing a file-processing workflow. The requirement was to read a file, process it, and move it to an archive in S3. The AI generated code that read files using Java's NIO Path API, whereas our standard was to use FileReader. This is a subtle but important example of how AI can produce results that aren't consistent with organizational standards.
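To make the gap concrete, here is a minimal sketch of both approaches. The class and method names are mine, invented for illustration; this is not the actual project code.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FileReadExample {

    // What the AI generated: the NIO Path API (perfectly valid Java).
    static List<String> readWithNio(String file) throws IOException {
        return Files.readAllLines(Path.of(file));
    }

    // What our internal standard expected: the classic FileReader.
    static String readWithFileReader(String file) throws IOException {
        StringBuilder content = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new FileReader(file))) {
            String line;
            while ((line = reader.readLine()) != null) {
                content.append(line).append(System.lineSeparator());
            }
        }
        return content.toString();
    }
}
```

Both are valid Java; the point is that without context, the AI had no way of knowing which one the team's standard required.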
Example 2. Unit testing: missing business context
Similarly, for unit testing, if you give an instruction like "write a unit test for this method," AI will generate basic tests that cover the obvious decision branches and happy paths. Without explicitly stated expectations, such as business rules, edge cases, and failure scenarios, it cannot determine which cases truly matter, and the generated tests often miss business-specific scenarios. As a result, the tests may look complete but provide limited confidence in real-world projects. The sketch below illustrates the difference.
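Here is a sketch of that difference in JUnit 5. The discount rule and all the names are hypothetical, inlined so the example is self-contained.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical method under test, inlined to keep the sketch self-contained.
    static double apply(double price, int percent) {
        if (percent > 50) {
            // Business rule: discounts above 50% require manager approval.
            throw new IllegalArgumentException("Discount above 50% needs approval");
        }
        return price * (100 - percent) / 100.0;
    }

    // What AI typically produces from "write a unit test for this method":
    // a happy-path test covering the obvious branch.
    @Test
    void appliesDiscountForRegularOrder() {
        assertEquals(90.0, apply(100.0, 10), 0.001);
    }

    // What you only get after stating the business rule in the prompt.
    @Test
    void rejectsDiscountAboveApprovalThreshold() {
        assertThrows(IllegalArgumentException.class, () -> apply(100.0, 60));
    }
}
```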
Providing context is essential to getting accurate results. Even if you don't do it initially, you will end up providing it eventually, because you won't be satisfied with the results. Investing time in sharing precise, well-defined information isn't extra work; it's simply good practice. Clear context enables AI to generate code that is more usable and production-ready.
2. AI can hallucinate; verification is important
By hallucinations, we usually mean cases where AI generates code or explanations that appear valid but are incorrect. I encountered this multiple times while building the ServiceNow application, and it made me realize that you can't blindly depend on AI's responses; verification and testing are essential.
Example 1: Sealed objects and ServiceNow constraints
In one scenario, the application needed to make an external REST call. ServiceNow provides the sn_ws object for this purpose. The AI-generated code used the object correctly in theory and aligned with common REST invocation patterns.
However, the implementation failed at runtime with the error: “Cannot change the property of a sealed object.” Despite several iterations, the AI was unable to diagnose the root cause. Further investigation revealed that certain ServiceNow objects are sealed and restricted to specific execution contexts. These objects cannot be instantiated or modified; they must be used within platform components. This is a platform-specific constraint that isn’t obvious from generic examples, and AI was unable to handle it.
Example 2: Cyclical suggestions
In another case, an AI-provided solution didn't work. Subsequent prompts produced alternative results, none of which resolved the issue. After several iterations, the AI began repeating previously suggested approaches, as if stuck in a loop. At that point, I had to fall back on the official API documentation and a deeper examination of the platform components to resolve it.
AI can generate invalid results, suggest libraries with known vulnerabilities, and so on. Therefore, it's crucial to validate its output, especially when you are dealing with security-sensitive or business-critical code.
3. AI can be very descriptive; ask it to be concise
AI systems tend to produce highly descriptive responses by default. While this can be useful for learning or exploration, it isn’t always ideal for day-to-day software engineering work. In real-world environments, we are often working under tight deadlines where speed is more important than detailed explanations. When using AI as a coding assistant, concise output is usually more effective. Long explanations, excessive comments, or multiple alternative approaches can slow you down. Explicitly asking for a concise response makes AI produce results that are quicker to evaluate and easier to use.
This becomes especially important during routine tasks such as writing small utility methods, refactoring existing code, generating unit tests, and exploring existing projects. In these cases, we typically want actionable code, not a tutorial. A prompt such as “Provide a concise solution with minimal explanation” can significantly improve results and save time.
Being descriptive isn't bad, but it isn't always effective. By asking for concise output, you guide the AI to produce exactly what you want, more efficiently.
Conclusion
AI has significantly changed the way I work as a software engineer. It has helped me with code navigation, learning newer technologies, writing documentation, and being more productive overall. It's not perfect, but I am confident that it will improve significantly. I see it as a handy assistant, another tool in your repertoire.

