
Looking Back at EuroSTAR 2025: What Did We Really Learn?

2025-12-22

Written by

Juha Randelin
DevOps Specialist
Teemu Moisanen
DevOps Specialist


EuroSTAR 2025 showed that generative AI is no longer experimental. Real value comes from understanding how LLMs work, writing disciplined prompts, and treating AI as an engineering tool—efficient, precise, and responsible.

At EuroSTAR 2025, one message stood out: generative AI has matured from experimentation to real engineering. To work with it effectively, we need to understand how LLMs operate under the hood, what their “memory” really is, and why small things, like polite wording, can disrupt context and performance. We also need practical prompting habits that deliver accurate results and an awareness of the environmental impact behind every token we generate. Effective AI is about mechanics, clarity, and responsibility.

Understanding the Logic Behind LLMs

Large Language Models (LLMs) like ChatGPT aren’t magic; they’re pure math. Built on transformer architectures and trained on trillions of tokens, they rely on six main components that determine everything they do:

  1. Tokenizer – Splits input into subword tokens and maps them to unique IDs.

  2. Embedding Layer – Converts tokens into high-dimensional vectors and adds positional encodings.

  3. Transformer Layers – Apply self-attention and feedforward mechanisms to model relationships.

  4. Context Window – Limits how many tokens can be processed at once. For GPT-4o, this is 128,000 tokens. Since a token averages roughly 0.75 English words, a 128k context window holds about 96,000 words (roughly 200 pages of text) in one go.

  5. Output Layer – Uses a softmax function to predict next-token probabilities.

  6. Decoding Strategy – Determines whether outputs are deterministic (greedy decoding always picks the most likely token) or varied (sampling, tuned by parameters such as temperature and top-p).
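The last two stages of that pipeline can be sketched in miniature. This is a toy illustration, not a real model: the vocabulary, logits, and helper names are invented, but the softmax output layer and the greedy-versus-sampling decoding choice work the same way in practice.

```python
import math
import random

def softmax(logits):
    """Output layer: convert raw scores into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(probs):
    """Deterministic decoding: always pick the highest-probability token."""
    return max(range(len(probs)), key=lambda i: probs[i])

def sample_decode(logits, temperature, rng):
    """Stochastic decoding: temperature reshapes the distribution before
    sampling. Lower temperature -> sharper distribution -> closer to greedy."""
    probs = softmax([x / temperature for x in logits])
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy vocabulary and raw next-token scores (both invented for illustration)
vocab = ["the", "a", "cat", "dog"]
logits = [2.0, 1.0, 0.5, 0.1]

probs = softmax(logits)
print(vocab[greedy_decode(probs)])  # always "the" -- deterministic
print(vocab[sample_decode(logits, 0.7, random.Random(0))])  # varies with seed
```

The same trade-off applies when you call a real model: temperature 0 (or greedy decoding) gives reproducible output, higher temperatures give variety.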

The Truth About “Memory” in AI

Many people think LLMs remember everything you tell them. They don’t. What they have is a context window: a fixed-size buffer that holds both your input and the model’s response. Once you exceed that limit, older tokens simply fall off. This creates what’s called token drift, where earlier instructions lose influence as new ones take over.
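A fixed-size buffer with drop-off behaves like a bounded queue. The sketch below uses a tiny 8-token window and naive whitespace tokenization (real models use subword tokenizers and far larger windows, e.g. 128k tokens), but it shows the mechanism: an early instruction literally falls out of the window.

```python
from collections import deque

# Toy context window: a fixed-size buffer with an assumed 8-token limit.
context = deque(maxlen=8)

def add_turn(text):
    """Tokenize naively (whitespace split) and push tokens into the window.
    Once the deque is full, it silently drops the oldest tokens -- token drift."""
    for token in text.split():
        context.append(token)

add_turn("always answer in French please")       # early instruction
add_turn("now summarize this very long report")  # later input pushes it out

print(list(context))
# → ['French', 'please', 'now', 'summarize', 'this', 'very', 'long', 'report']
# The instruction "always answer in" has already fallen out of the window.
```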

Prompt Hygiene: Why Being Polite Hurts Performance

Adding “please” and “thank you” can actually make your results worse. Research shows that polite phrases introduce vector noise, shifting the model’s focus away from the core task. Your 'Hello, could you please summarize this text?' might perform worse than 'Summarize this text.' When clarity is the goal, precision beats politeness.
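At minimum, politeness costs tokens. The rough count below uses whitespace splitting rather than a real subword tokenizer, so the numbers are only indicative, but the overhead is visible: the polite phrasing more than doubles the prompt length for the same task.

```python
def count_tokens(prompt):
    """Very rough token estimate: real tokenizers split into subwords,
    but whitespace splitting is enough to show the relative overhead."""
    return len(prompt.split())

polite = "Hello, could you please summarize this text?"
direct = "Summarize this text."

print(count_tokens(polite), count_tokens(direct))  # → 7 3
```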

How to Make Prompts That Actually Work

If you care about accuracy, reproducibility, and efficiency, here are some habits worth following: skip greetings, start directly with the task, define the output format, and control randomness with decoding strategies.
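Those habits are easy to enforce in code. The helper below is hypothetical (not from any library): it builds a prompt that starts directly with the task and pins down the output format, so every prompt your team sends follows the same discipline. Randomness is then controlled separately, by passing temperature 0 (or a fixed seed) to whichever model API you call.

```python
def build_prompt(task, output_format, constraints=()):
    """Hypothetical helper applying the habits above: no greeting,
    task first, explicit output format, optional extra constraints."""
    lines = [task.strip(), f"Output format: {output_format}"]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    task="Summarize the attached incident report.",
    output_format="3 bullet points, max 15 words each",
    constraints=("Plain language, no jargon",),
)
print(prompt)
# → Summarize the attached incident report.
#   Output format: 3 bullet points, max 15 words each
#   Constraint: Plain language, no jargon
```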

The Green Side of AI: Efficiency = Responsibility 

A single GPT-4 query uses roughly 10x more energy than a Google search. If 17 million users each sent just one polite prompt a week, that’s the same energy as 20 days of electricity for the entire city of Washington, D.C. Brevity isn’t just good engineering — it’s sustainable.

Summary: AI Systems Deserve Technical Discipline

The takeaway from EuroSTAR 2025 is straightforward: AI becomes genuinely useful when treated as an engineering tool grounded in clear logic and disciplined inputs. Understanding how LLMs process context, how their “memory” actually works, and how prompt structure affects output allows engineering teams to apply AI with far more precision.

With well-designed prompts, everyday tasks—like generating CI/CD configurations, analyzing logs, documenting systems, or reviewing infrastructure code—become faster and more consistent.

By approaching AI with the same care used in pipelines and production systems, professionals can turn it into a dependable part of their workflow. It reduces manual overhead, sharpens operational clarity, and fits naturally into environments where accuracy, repeatability, and efficiency matter most.
