Humans + AI = A New Era of Development?


Category: Technologies

Posted by Kyrylo, May 14, 2026

Last month, I was asked to write an article for the company website. Since I hadn’t written anything in quite a while – other than short posts – I had to think about what the article could be about. The answer was so obvious that all that was left was to set aside time to write it.

There is no other topic in my professional field that interests me more right now than artificial intelligence.

A New Era?

Many in the industry believe that the arrival of large language models on the market is so revolutionary an event that it could eclipse the harnessing of electricity. And yet electricity is the foundation of civilization as we know it; without it, we humans would be set back a long way in our development.

In modern warfare, opposing sides view the enemy’s energy infrastructure as a priority target. The media reports on such attacks as attempts to drive the enemy back to the Stone Age. Everyone understands that a prolonged lack of electricity would lead to irreparable consequences – traffic lights would stop working, there would be nowhere to charge gadgets, and there would be no internet or mobile connectivity. Industry, transportation, and other areas of human activity would be unable to continue as they are today without this ubiquitous, inexpensive resource that we perceive as an integral part of our daily reality. In a world without electricity, most of our professional skills would be useless; we would all have to learn new skills, such as blacksmithing or chimney sweeping.

If we compare our present day to the era when electricity first entered service for humanity, we are arguably now at the beginning of the mass adoption of AI in manufacturing, offices, and the military. The electrification of manufacturing dates back to the beginning of the last century; the process took decades and made this form of energy ubiquitous. Try tackling any everyday task, such as making a cup of coffee, while deliberately avoiding the use of electricity. You will immediately need special equipment and resources: a gas burner, a manual coffee grinder, drinking water from a well, and so on. In a market economy, all of this would immediately affect the price of that cup of coffee, and we wouldn’t like it.

Yesterday, while browsing my YouTube channel, I stumbled upon an old video I recorded on my phone at a music festival in 2015. It features a band whose name even the festival organizers didn’t know. I pasted the YouTube link into an LLM chat, and within a second the mystery was solved: I learned the exact names of both the band and the song! I still can’t shake the feeling that there’s some magic at work here, and I marvel like a child.

Just a few months ago, I was writing, debugging, and reviewing other people’s code on my own, manually. I studied hard and persistently, honing my programming skills for years. Now, most of that work is done for me by another program whose inner workings I barely understand. My productivity has increased significantly: very complex tasks that used to take weeks are now completed within a single workday.

Has the new era of AI already arrived? In my opinion – yes. We simply don’t yet know where it will lead, since we are at the very beginning of it. People can still get by without AI; we still retain our old skills, but with each passing year, as we gradually lose them, we are moving toward a future where we will be helpless without AI.

A Bit of History

It has been over 30 years since I first began encountering the term “artificial intelligence” from time to time.

Back in 1991, the Research Institute for Problems of Artificial Intelligence was established in my hometown of Donetsk. And although it has yet to present the world with any discoveries or technological breakthroughs, the term has become widely known.

My first encounter with an AI specialist took place in the mid-90s, when I was still a student at a technical university. My father, an electronics engineer, introduced me to his acquaintance and former colleague, Petya Tankiev, presenting him as an AI specialist. That was also when I first heard an explanation of what this so-called artificial intelligence actually was. I didn’t understand much of it at the time, though I’m sure the term “neural networks” never came up. It was hard to grasp anything: neither these concepts nor these processes existed in my world, and my brain simply had nothing to latch onto in Petya’s story.

Back then, AI was an abstraction, the preserve of the select few, something akin to plasma physics research; today, it is a working tool that increases my productivity many times over.

Using AI in Development

The revolutionary aspect of LLMs is that they provide an interface for formulating tasks and receiving responses in a form as clear and accessible to humans as possible. In other words, I can formulate a task consisting of several points, containing all the information necessary for its completion in the form of so-called artifacts (text, hyperlinks, images, etc.) – and do so without any special programming language. For a developer, interacting with the model is akin to working with a project colleague: you assign a task, explain and show what exists and where changes are needed, answer clarifying questions, verify the result, and ultimately accept the work. For professional use of LLMs in development, you don’t need to learn a new programming language; studying the tool’s data structures and interfaces is not mandatory, but highly recommended. There is an input prompt, and the most important skill to learn is formulating the task as accurately and precisely as possible, providing the necessary context. This is exactly what my daily work routine looks like.
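As an illustration, here is a hypothetical sketch of such a task formulation; the file names, log excerpt, and requirements are all invented for the example:

```markdown
Task: add retry logic to the payment client.

Context (artifacts):
- Relevant file: src/payments/client.ts
- Error log excerpt: "ECONNRESET after 30s on POST /charge"
- Link to the provider's rate-limit policy (attached)

Requirements:
1. Retry failed POST /charge calls up to 3 times.
2. Use exponential backoff starting at 500 ms.
3. Never retry when the response is a 4xx error.

Before writing any code, ask clarifying questions if anything is unclear.
```

Notice that this reads like a message to a colleague, not like code: the artifacts supply the context, and the numbered points define what “done” means.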

As a result, I’ve gained a colleague who knows far more than any human, is ready to work 24/7 for relatively little money, and is progressing very quickly as a professional. Six months ago it was a junior; now it’s a senior; and tomorrow, the LLM may well reach a level of expertise unattainable by any human.

Meanwhile, the person whose profession is still called “software engineer” is, in fact, becoming both a project manager and an architect rolled into one.

With such a useful, skilled, and tireless assistant, the developer’s main task is no longer writing code. In my opinion, the main thing now is to establish clear rules for the team’s work. The more clearly and precisely those rules and their criteria are formulated, the faster and better tasks will be completed.

There are many possible approaches to achieving this goal. In this article, I will focus on one of them: spec-driven development.

Spec-Driven Development

This is a development approach where the task specification is the primary artifact, not the code.

The core idea is that, before writing any code, a detailed document is created that describes what needs to be built or changed, why, and how. Only then does implementation begin.

A typical development cycle would look like this:

Idea → mockup → specification → specification review → code → code review → tests → deployment

instead of the usual:

Idea → mockup → code → code review → tests → deployment

As a result, the team is left with not only the code but also a documented history of decisions. This is now an integral part of the project repository, which can be revisited at any time.

A specification typically consists of the following sections:

– Goal – what problem we are solving.

– Requirements – what the system must do.

– Design – architecture, API contracts, data schemas, mockups.

– Edge cases – what should happen in non-standard situations.

– Acceptance criteria – how to know that everything is ready and we can move forward.
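To make this concrete, here is a hypothetical, heavily abridged specification following the sections above; the feature and every detail in it are invented for illustration:

```markdown
# Spec: Password reset via email

## Goal
Users who forget their password currently have to contact support;
we want a self-service reset flow.

## Requirements
- A "Forgot password?" link on the login page sends a one-time reset link.
- The link expires after 30 minutes and can be used only once.

## Design
- New endpoint: POST /auth/reset-request (accepts an email address).
- Reset tokens are stored hashed in a `password_resets` table.

## Edge cases
- Unknown email: respond exactly as for a known one (no account enumeration).
- Expired or reused token: show an error and offer to restart the flow.

## Acceptance criteria
- A user can reset a password end-to-end in the staging environment.
- No more than one reset email per minute is sent to the same address.
```

A document like this is short enough to review in minutes, yet it gives both human reviewers and an AI agent the same unambiguous definition of the task.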

Why It Matters

Not long ago, in March 2026, Amazon experienced a series of high-profile service disruptions. On March 2, the website was down for nearly six hours – during that time, 120,000 orders were lost and 1.6 million errors were recorded. Three days later, an even more serious outage occurred: order volume in the U.S. dropped by 99%, resulting in approximately 6.3 million lost orders. Both incidents were linked to code changes generated by AI and deployed to production without proper review. Following this, Amazon launched a 90-day code security audit across 335 critical systems, and all changes created with AI involvement are now required to undergo review by senior engineers before deployment.

The main principle of the approach discussed here is simple: finding and fixing an error in a Markdown document is incomparably easier and cheaper than fixing the same error in code that has already been written! The more complex the system being developed, and the later a problem is detected, the more expensive it is to fix. In addition, a single document becomes the single source of truth for the entire team – whether it consists of people or AI agents. As a result, the likelihood of misinterpreting the task is minimized, and everyone relies on clearly formulated documentation with defined criteria.

Impact on Working with AI Tools

Today, when code is generated by LLMs from specifications, the result is significantly more predictable and reproducible. Humans don’t lose control of the project and can always answer the question, “Why did we do this in the first place?”

It is hard not to recall the old adage: a well-posed question is half the answer. In the world of AI, this takes on a literal meaning.

Remember the supercomputer Deep Thought from *The Hitchhiker’s Guide to the Galaxy*? It honestly gave the answer to the ultimate question of life, the universe, and everything – 42. The problem wasn’t the answer, but the fact that no one had properly formulated the question itself.

Today, in the era of vibe coding and agent-based systems, history risks repeating itself. A good specification is, in fact, a question asked correctly. And that means half the work is already done.

Conclusion

I don’t know what the future holds for humanity armed with AI, just as our ancestors at the beginning of the last millennium could not have imagined that their descendants would fly in airplanes, drive cars, talk on the phone, and watch movies at home. An optimistic scenario is possible, where we live in comfort and abundance thanks to the automation of labor. Or a pessimistic one, in which the owners of neural networks establish a digital dictatorship of unprecedented scale. We shouldn’t dismiss the dystopian scenario in which machines decide to get rid of unnecessary “meat bags,” but as an optimist, I don’t want to dwell on that.

What I know for sure is that the world is constantly changing, and we are forced to change along with it. Therefore, regardless of whether we like these changes or not, we must constantly adapt to new realities. At the very least, to keep up and not become obsolete. After all, as we know, to stay in place, you have to move forward.

At Swan Software Solutions, we are always learning new tools and technologies to improve our skills. To discover more about how our team could help with your company’s technology needs, schedule a free assessment.