Speeding Up Unit Test Writing with Neural Networks
Imagine the following situation: you’ve written clean, readable code to implement a new feature according to your work ticket. You’ve smoke-tested the solution, fixed a few inconsistencies, and double-checked the acceptance criteria. Everything seems fine! You’re ready to create a pull request for your team… but then, like a bolt from the blue — there are no unit tests for the newly written code! It’s frustrating to realize that you can’t wrap up your work just yet.
Back in the old days, a situation like this meant spending another hour (or more) working on the same ticket. Fortunately, we now live in a world where routine tasks can be solved with AI — and writing unit tests is no exception. However, like any tool ever invented by humans, it requires both knowledge and attention to use effectively. So let’s dive into the world of work optimization!
There are several facts you should keep in mind when using LLM-based neural networks, such as ChatGPT, for tasks where precision matters:
- Write good prompts for an LLM to get accurate responses: the prompt should include not just a straightforward request, but also detailed context.
- Don’t expect it to comprehend the structure of your problem: LLMs lack “model vision” (as I personally call it), the ability to build up and keep track of the structural organization of any real or abstract thing.
- Don’t assume an LLM knows niche terms or specialized traits of the domain that your task belongs to; be ready to provide domain-specific definitions.
- Repeat the “game rules” to an LLM, especially in long conversations: LLMs tend to give higher priority to information provided in recent messages, while gradually “forgetting” earlier prompts.
Alright! But how do these rules apply to the task of writing unit tests? Let me share my personal experience on this subject, explaining each point in detail.
Accurate prompts are key to accurate responses
A prompt is a message you write when interacting with an LLM. Use the following pattern when writing a prompt, especially when accuracy is important in your task:
As [role], do [detailed action] for [purpose] using [input data]
Not every part of this template is always required, but in general, the more detailed the context you provide, the higher the quality of the response you can expect.
Below is an example of two consecutive prompts for generating unit tests.
As a Blazor & .NET software engineer, suggest a list of possible key unit tests for the following UI component, without writing actual test code.
HTML markup:
(contents of .razor file without sensitive data)
Code-behind:
(contents of .razor.cs file without sensitive data)
Suppose the LLM responds with a list of unit tests. You take a quick expert glance, remove redundant tests, and adjust some others. Then you follow up with a second prompt that adds more implementation context.
As a Blazor & .NET software engineer, write NUnit tests for the UI component based on the list below, strictly following the naming pattern [Element]_Should[Action]_When[Condition]
Required unit tests:
(refined list of unit tests)
HTML markup:
(contents of .razor file without sensitive data)
Code-behind:
(contents of .razor.cs file without sensitive data)
Even with this level of detail, the result will likely need some polishing — but it will already provide significant value and save you time.
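For illustration, here is roughly what one of the generated tests might look like. This is a minimal sketch under my own assumptions: the component under test is the stock Counter.razor from the Blazor project template, and the project pairs NUnit with bUnit; neither name comes from the original prompts.

```csharp
using Bunit;
using NUnit.Framework;

[TestFixture]
public class CounterTests
{
    [Test]
    public void IncrementButton_ShouldIncreaseCount_WhenClicked()
    {
        // bUnit's TestContext hosts and renders the component under test;
        // fully qualified to avoid a clash with NUnit's own TestContext.
        using var ctx = new Bunit.TestContext();

        // Counter is the stock component from the Blazor project template
        var cut = ctx.RenderComponent<Counter>();

        // Simulate a user click on the increment button
        cut.Find("button").Click();

        // The paragraph that displays the count should now show 1
        Assert.That(cut.Find("p").TextContent, Does.Contain("1"));
    }
}
```

Note how the test name follows the [Element]_Should[Action]_When[Condition] pattern that the prompt demanded; this is exactly the kind of convention an LLM will respect only if you spell it out.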
LLM stands for “Lacks Logical Mapping”
Just joking! Of course, LLM means “Large Language Model.” Still, it’s well known that neural networks struggle to maintain a correct understanding of the structure of whatever they’re working on.

I call it “model vision” (yes, not quite an accurate term, but it works), and it’s where we humans still have an advantage over AI. Humans are capable of deep, complex abstract thinking, and we can combine it with other mental operations while solving a task. Although LLMs have improved in this respect, they still tend to overlook structure and logical relationships within the subject of a task.
But how can we work around this limitation when using LLMs in software development? The best approach I’ve found so far is to split tasks with complex context into a series of smaller, focused prompts. Is your class so large and complicated that the LLM goes “blind”? Just feed it smaller parts of your code! …And, well, maybe also take a moment to revisit the Single Responsibility Principle in your project, because yes, oversized classes really are hard to maintain. 😉
Neural Networks are not Stack Overflow
Specialized libraries, manufacturer-provided packages, and your own helper classes: these are just a few examples of what an LLM was (most probably) not trained on. As a result, the generated code may call non-existent methods, attach attributes from unrelated libraries, and produce other oddities, much like a poorly prepared student who pretends to know the material during an exam.
When I request unit tests based on bUnit (a testing library for Blazor), I regularly see the [Fact] attribute in the generated tests.

However, FactAttribute actually belongs to xUnit and is not present in bUnit at all. Also, in this example the LLM messed up the test name, but that’s a different story!
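To make the fix concrete, here is a minimal sketch of what the correction looks like in an NUnit project; the class and test names are hypothetical:

```csharp
using NUnit.Framework;

[TestFixture]
public class SaveButtonTests
{
    // The LLM emitted xUnit's [Fact] here; in an NUnit project the
    // equivalent attribute is [Test].
    // [Fact]  <- wrong framework for this project
    [Test]  // <- correct for NUnit
    public void SaveButton_ShouldBeDisabled_WhenFormIsInvalid()
    {
        // Body omitted; the attribute swap is the only point here.
        Assert.Pass();
    }
}
```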
So, how can we address such issues when using LLMs for tasks involving exotic or lesser-known libraries? You can try any of the following:
- Provide an example of correctly written code that uses that library.
- Share the external contract exposed by the library, such as important interfaces, attributes, or methods that the LLM may not know (see the sketch after this list).
- Make sure to specify the test framework, so that the LLM won’t simply pick the most popular one.
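For the second point, the “contract” can be as simple as a short declaration-only snippet pasted straight into the prompt. Here is a sketch of what I mean; the signatures below are a simplified, trimmed-down approximation of bUnit’s surface, not its real API:

```csharp
// Reference snippet to paste into the prompt so the LLM sticks to real
// bUnit APIs instead of inventing them. Simplified sketch: signatures are
// trimmed to what our tests actually call, not bUnit's full contract.
namespace PromptReference
{
    public abstract class TestContext : System.IDisposable
    {
        // Renders a Blazor component and returns a handle to its output
        public abstract IRenderedComponent<TComponent> RenderComponent<TComponent>();

        public abstract void Dispose();
    }

    public interface IRenderedComponent<TComponent>
    {
        // Raw rendered HTML of the component
        string Markup { get; }

        // Find(cssSelector) returns the first element matching the selector
    }
}
```

A few lines like these are usually enough to stop the LLM from calling methods the library never had.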
Now let’s finally move to the last point of our topic!
Fresh Prompts: Best Served One Message at a Time

LLMs chat like humans, but they’re still just bots, and unfortunately the same goes for their memory. When you provide a detailed prompt with clear rules, your preferences may be “forgotten” after a few responses, even if you explicitly say, “Use this ruleset till the end of our conversation!”
When using neural networks for technical tasks, I recommend copying the rules and specifications from your initial prompt and pasting them at the start of each new prompt. It will save you time and help the LLM to stay focused on the context of your task!
A Few Final Words
This isn’t the first time humanity has experienced a technological breakthrough that reaches into almost every area of our lives. Life isn’t a calm pond where you settle into a fixed spot once and for all; it’s a wild stream where conditions are constantly changing! Don’t be afraid of the changes neural networks bring to the professional environment. Instead, find your own unique way to use them to your advantage. Innovation is our true strength!
Our team at Swan Software Solutions is always excited to use new tools and technologies to provide our clients with reliable, scalable, and affordable solutions. To discover more about how we could help your company, schedule a free assessment.