Artificial intelligence (AI) has fundamentally changed the way we program. AI agents can generate code, optimize it, and even assist with debugging. However, there are some limitations that programmers need to keep in mind when working with AI.
At first glance, it appears that AI can effortlessly write code. Simple functions and scripts are often generated without issues. But once a project consists of multiple files and folders, problems arise. AI struggles to maintain consistency and structure in a larger codebase. This can lead to issues such as missing or incorrect links between files and inconsistencies in the implementation of functions.
AI agents also have difficulty with the correct ordering of code. For example, they might place initializations at the end of a file, causing runtime errors. Additionally, AI will readily define multiple versions of the same class or function within a project, leading to conflicts and confusion.
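The ordering problem can be made concrete with a small sketch. The module below (a hypothetical example, not real generated output) uses a function and a constant before they are defined, which is exactly the kind of AI-generated bug that only surfaces at runtime:

```python
# Hypothetical AI-generated module: the function and constant are used
# before they are defined, so execution fails with a NameError.
source = """
result = connect(DB_URL)          # used here...
DB_URL = "postgres://localhost"   # ...but defined afterwards
def connect(url):
    return "connected to " + url
"""

error = None
try:
    exec(source, {})
except NameError as exc:
    error = str(exc)

print(error)  # NameError: name 'connect' is not defined
```

A human reviewer spots this immediately; an AI agent regenerating one file at a time often does not.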
One solution is to use AI code platforms that can manage memory and project structures. This helps maintain consistency in complex projects. Unfortunately, these features are not always applied consistently. As a result, AI can lose track of the coherence of a project and introduce unwanted duplications or incorrect dependencies during programming.
Most AI coding platforms work with so-called tools that the large language model can invoke. These tools are based on an open standard, the Model Context Protocol (MCP). This makes it possible to connect an AI coding agent to an IDE such as Visual Studio Code. Optionally, you can run a local LLM with llama.cpp or Ollama and choose an MCP server to integrate with. Models can be found on Hugging Face.
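To give an idea of what such a tool looks like to the model: MCP servers advertise their tools as named descriptors with a JSON Schema for the inputs. The sketch below is illustrative only; the tool name and fields are our own assumptions, not a real server's API:

```python
import json

# Hypothetical sketch of a tool descriptor, roughly in the shape an MCP
# server advertises to a client: a name, a description, and a JSON Schema
# describing the expected input. Names here are illustrative assumptions.
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from the workspace and return its contents.",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

# The client serializes such descriptors for the LLM, which can then ask
# for an invocation by name, with arguments matching the schema.
payload = json.dumps({"tools": [read_file_tool]})
print(payload[:60])
```

The key point is that the LLM never executes anything itself: it only requests invocations, and the client mediates.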
To better manage AI-generated code, developers can use IDE extensions that monitor code correctness. Tools such as linters, type checkers, and advanced code analysis tools help detect and correct errors early. They form an essential complement to AI-generated code to ensure quality and stability.
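A lightweight version of such a check can even be scripted with Python's standard library. The sketch below (our own illustration, not a real linter) parses an AI-generated snippet, rejects syntax errors, and flags duplicate top-level function definitions, one of the failure modes described above:

```python
import ast

def check_snippet(code: str) -> list[str]:
    """Return a list of problems in an AI-generated snippet: syntax
    errors, plus duplicate top-level function definitions."""
    problems = []
    try:
        tree = ast.parse(code)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg} (line {exc.lineno})"]
    seen = set()
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            if node.name in seen:
                problems.append(f"duplicate definition of {node.name}()")
            seen.add(node.name)
    return problems

snippet = "def load():\n    pass\n\ndef load():\n    pass\n"
print(check_snippet(snippet))  # → ['duplicate definition of load()']
```

Real linters and type checkers go much further, but the principle is the same: verify generated code mechanically before it enters the codebase.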
One of the main reasons AI agents keep repeating errors lies in how AI interprets APIs. AI models need context and a clear role description to generate effective code. This means prompts must be complete: they should not only include functional requirements but also explicitly state the expected outcome and constraints. To facilitate this, you can save prompts in a standard format (MDC) and always send them along to the AI. This is especially useful for generic programming rules you apply and the functional and technical requirements and structure of your project.
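One way to keep prompts complete is a reusable template with fixed sections for role, requirements, constraints, and expected outcome. The section names below are our own convention, a sketch rather than a formal standard:

```python
# A hedged sketch of a reusable prompt template. The section names are
# our own convention for keeping prompts complete, not a formal spec.
PROMPT_TEMPLATE = """\
ROLE: {role}
FUNCTIONAL REQUIREMENTS:
{requirements}
CONSTRAINTS:
{constraints}
EXPECTED OUTCOME:
{outcome}
"""

prompt = PROMPT_TEMPLATE.format(
    role="Senior Python developer on a Flask project",
    requirements="- Add an endpoint /health that returns a JSON status",
    constraints="- No new dependencies\n- Follow the existing blueprint layout",
    outcome="A single diff touching app/routes.py only",
)
print(prompt.splitlines()[0])  # → ROLE: Senior Python developer on a Flask project
```

Saving such templates alongside the project means every AI interaction starts from the same rules and requirements.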
Products such as FAISS and LangChain offer solutions to help AI handle context better. FAISS, for example, assists with efficiently searching and retrieving relevant code snippets, while LangChain helps structure AI-generated code and maintain context within a larger project. Here too, you can optionally set things up locally, using retrieval-augmented generation (RAG) with a vector database.
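What FAISS does at scale can be sketched in a few lines of plain Python: embed each code snippet as a vector, then return the snippet whose embedding is closest (by cosine similarity) to the query. The two-dimensional "embeddings" below are toy values, not real model output:

```python
import math

# Toy retrieval sketch: what FAISS does at scale, in miniature.
snippets = ["def add(a, b): ...", "class UserRepo: ...", "def parse_csv(path): ..."]
embeddings = [[1.0, 0.1], [0.1, 1.0], [0.9, 0.3]]  # toy vectors, not real

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query):
    """Return the snippet whose embedding is closest to the query vector."""
    scores = [cosine(e, query) for e in embeddings]
    return snippets[scores.index(max(scores))]

print(retrieve([1.0, 0.0]))  # closest to the 'add' snippet
```

In a real setup, an embedding model produces the vectors and FAISS indexes millions of them; the retrieval idea is identical.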
AI is a powerful tool for programmers and can help speed up development processes. However, it is not yet truly capable of independently designing and building a more complex codebase without human oversight. Programmers should view AI as an assistant that can automate tasks and generate ideas but still requires guidance and correction to achieve a good result.
Contact us for help setting up a development environment that lets teams get the most out of AI and focus more on requirements engineering and design than on debugging and writing code.
Artificial Intelligence (AI) continues to develop further in 2025 and has an increasingly significant impact on our daily lives and business. The main trends in AI show how this technology is reaching new heights. Here we discuss some key developments that will shape the future of AI.
Below are the 7 most important trends in Artificial Intelligence for 2025
Agentic AI refers to systems capable of making decisions independently within predefined boundaries. In 2025, AI systems are becoming increasingly autonomous, with applications in areas such as autonomous vehicles, supply chain management, and even healthcare. These AI agents are not only reactive but also proactive, relieving human teams and increasing efficiency.
With the growth of AI applications in real-time environments, such as speech recognition and augmented reality, inference time compute becomes a crucial factor. In 2025, much attention is given to hardware and software optimizations to make AI models faster and more energy-efficient. Think of specialized chips like tensor processing units (TPUs) and neuromorphic hardware that support inference with minimal latency.
Since the introduction of models like GPT-4 and GPT-5, very large models continue to grow in size and complexity. In 2025, these models are not only larger but also optimized for specific tasks, such as legal analysis, medical diagnostics, and scientific research. These hypercomplex models deliver unprecedented accuracy and contextual understanding but also bring challenges in infrastructure and ethics.
At the other end of the spectrum, we see a trend of very small models specifically designed for edge computing. These models are used in IoT devices, such as smart thermostats and wearable health devices. Thanks to techniques like model pruning and quantization, these small AI systems are efficient, secure, and accessible for a wide range of applications.
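Quantization, one of the techniques mentioned above, can be sketched very simply: map floating-point weights to small integers with a single scale factor. Real toolchains add per-channel scales, zero points, and calibration, but the core idea fits in a few lines:

```python
# Minimal sketch of post-training 8-bit quantization: map float weights
# to integers in [-127, 127] using one scale factor. Real frameworks add
# per-channel scales, zero points, and calibration on sample data.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    scale = max(abs(w) for w in weights) / 127
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.82, -0.41, 0.05, -1.27]
q, scale = quantize(weights)
restored = dequantize(q, scale)
print(q)         # small integers instead of 32-bit floats
print(restored)  # close to the original weights
```

The quantized weights need a quarter of the memory of 32-bit floats, which is exactly what makes these models fit on IoT-class hardware.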
AI applications in 2025 go beyond traditional domains such as image and speech recognition. Think of AI supporting creative processes, such as designing fashion and architecture or composing music. Breakthroughs are also being seen in domains like quantum chemistry, where AI helps discover new materials and medicines, as well as in the management of complete IT systems, in software development, and in cybersecurity.
Through the integration of cloud technology and advanced data management systems, AI systems have access to what almost feels like infinite memory. This makes it possible to maintain long-term context, essential for applications such as personalized virtual assistants and complex customer service systems. This capacity enables AI to provide consistent and context-aware experiences over extended periods. In effect, the AI remembers every conversation it has ever had with you. Whether you want that is another question, of course, so there must also be an option to reset parts of this memory, or all of it.
Although AI is becoming increasingly autonomous, the human factor remains important. Human-in-the-loop augmentation ensures that AI systems are more accurate and reliable through human supervision in critical decision-making phases. This is especially important in sectors such as aviation, healthcare, and finance, where human experience and judgment remain crucial. Strikingly, trials in which 50 doctors made diagnoses showed that the AI on its own performed better, and that the doctors only reached their best performance when assisted by AI. So we mainly need to learn to ask the right questions.
With the arrival of O1, OpenAI took the first step towards a reasoning LLM. That step was soon surpassed by O3. But competition also comes from an unexpected corner: DeepSeek R1, an open-source reasoning and reinforcement-learning model that is many times cheaper than its American competitors, both in energy use and in hardware requirements. Because this had an immediate impact on the stock value of AI-related companies, the tone is set for 2025.
How NetCare can help with this topic
NetCare has a proven track record in implementing digital innovations that transform business processes. With our extensive experience in IT services and solutions, including managed IT services, IT security, cloud infrastructure, and digital transformation, we are well equipped to support companies in their AI initiatives.
Our approach includes:
Which goals you should set
When implementing AI, it is important to set clear and achievable goals that align with your overall business strategy. Here are some steps to help you define these goals:
By following these steps and working with an experienced partner like NetCare, you can maximize the benefits of AI and position your organization for future success.
The trends in AI in 2025 show how this technology is becoming increasingly intertwined with our daily lives and solving complex problems in ways that were unimaginable just a few years ago. From advanced agentic AI to almost infinite memory capacity, these developments promise a future where AI supports, enriches, and enables us to push new boundaries. Be sure to also read the fascinating news about O3, OpenAI's new LLM.
Artificial intelligence (AI) continues to have a huge impact on how we work and innovate. OpenAI introduces O3, a groundbreaking new technology that enables companies to operate smarter, faster, and more efficiently. What does this advancement mean for your organization, and how can you leverage this technology? Read on to find out.
OpenAI O3 is the third generation of OpenAI’s advanced AI platform. It combines state-of-the-art language models, powerful automation, and advanced integration capabilities. While previous versions were already impressive, O3 takes performance to the next level with a focus on:
OpenAI O3 is designed to add value to a wide range of business processes. Here are some ways it can be used:
With O3, you can deploy intelligent chatbots and virtual assistants to support customers. These systems understand natural language better than ever before, enabling them to help customers faster and more effectively.
Businesses can use O3 to analyze large amounts of data, generate reports, and share insights. This makes it easier to make data-driven decisions.
O3 helps marketers generate compelling content, from blog posts to advertisements. The model can even provide personalized recommendations based on user preferences.
Large language models are very good at developing software: O3 can generate, review, and debug code, making it a capable assistant for development teams.
One of the most notable features of OpenAI O3 is its focus on user-friendliness. Even companies without extensive technical expertise can benefit from the power of AI. Thanks to comprehensive documentation, API support, and training modules, implementation is straightforward.
Additionally, great attention has been paid to ethical guidelines. OpenAI has added new features to prevent misuse, such as content filters and stricter controls on the model’s output.
At NetCare, we understand how important technology is to your company’s success. That’s why we offer support in:
With our expertise, we ensure that your organization immediately benefits from the possibilities OpenAI O3 offers.
OpenAI O3 represents a new milestone in AI technology. Whether it’s improving customer experience, streamlining processes, or generating new insights, the possibilities are endless. Want to learn more about how OpenAI O3 can strengthen your business? Contact NetCare and discover the power of modern AI.
The future of organizations consists of digital twins: Transform with artificial intelligence and strengthen sectors such as healthcare and finance. Artificial Intelligence (AI) is more than just ChatGPT. Although 2023 brought AI into the public consciousness thanks to the breakthrough of OpenAI’s chatbot, AI has been evolving quietly for decades, waiting for the right moment to shine. Today, it is a very different kind of technology—capable of simulating, creating, analyzing, and even democratizing, pushing the boundaries of what is possible in virtually every industry.
But what exactly can AI do, and how should companies integrate it into their strategies? Let’s dive into the potential, use cases, and challenges of AI from an IT strategic perspective.
AI is capable of incredible achievements, such as simulating reality (through Deep Learning and Reinforcement Learning), creating new content (with models like GPT and GANs), and predicting outcomes by analyzing enormous datasets. Sectors such as healthcare, finance, and security are already feeling the impact:
These examples are just the tip of the iceberg. From real estate and insurance to customer service and the legal system, AI has the ability to revolutionize almost every aspect of our lives.
One of the most intriguing applications of AI is the creation of digital twins. By simulating reality with operational data, companies can safely explore the impact of AI before deploying it on a large scale. Digital twins can represent a pilot, judge, or even a digital credit assessor, allowing companies to mitigate risks and gradually integrate AI into their operations.
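The digital-twin idea can be sketched in miniature: a software model mirrors a physical asset, is updated with operational data, and answers "what if" questions before any change touches the real system. The pump example below is entirely hypothetical, our own illustration of the pattern:

```python
# A minimal, hypothetical sketch of a digital twin: a model mirroring a
# physical pump, fed with sensor data, and queried with what-if scenarios
# so risks are explored in software rather than on the real asset.

class PumpTwin:
    def __init__(self, max_rpm: int):
        self.max_rpm = max_rpm
        self.rpm = 0
        self.temperature = 20.0

    def ingest(self, rpm: int, temperature: float) -> None:
        """Update the twin from sensor readings of the real pump."""
        self.rpm, self.temperature = rpm, temperature

    def simulate_speedup(self, extra_rpm: int) -> bool:
        """What-if: would raising the speed stay within safe limits?"""
        return self.rpm + extra_rpm <= self.max_rpm

twin = PumpTwin(max_rpm=3000)
twin.ingest(rpm=2600, temperature=48.5)
print(twin.simulate_speedup(300))  # True: 2900 rpm is within limits
print(twin.simulate_speedup(600))  # False: 3200 rpm would exceed the limit
```

A twin of a pilot, judge, or credit assessor follows the same pattern, with an AI model in place of the simple rule shown here.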
When companies want to embrace AI, they must consider questions such as “buy, use open source, or build ourselves?” and “how do we empower our current employees with AI tools?” It is crucial to see AI as a way to enhance human skills—not replace them. The ultimate goal is to create augmented advisors that support decision-making without sacrificing the human aspect.
With great power comes great responsibility. The EU AI Act came into effect in 2024 and aims to balance innovation with fundamental rights and safety. Companies must proactively consider bias in AI models, data privacy, and the ethical implications of deploying such technologies.
Consider using synthetic data generated by GANs to address bias, and utilize tools like SHAP or LIME to build more explainable AI systems. We need AI that supports human goals and values—technology that can improve lives rather than endanger them.
AI is already shaping how we live and work. According to Gartner, six of the top ten technology trends for 2024 are related to AI. Forrester predicts that the AI market will reach a value of $227 billion by 2030. Companies must now figure out how to take AI out of the labs and apply it in practical use cases.
The future is not about replacing people, but about creating a world where personal AIs collaborate with enterprise AIs, augment human capabilities, and transform industries. The vision is clear—embrace AI responsibly and harness its power for a more efficient and enriched future.
How NetCare Can Help with This Topic
NetCare conceived and developed this strategy long before major companies like Oracle and Microsoft came up with the idea. This offers a strategic advantage in terms of speed, approach, and future vision.
Goals You Should Set
When implementing a digital twin, it is important to set clear and measurable goals. Consider the following steps:
Why NetCare
NetCare distinguishes itself by combining AI with a customer-focused approach and deep IT expertise. The focus is on delivering tailored solutions that meet the unique needs of your organization. By working with NetCare, you can trust that your AI initiatives are strategically planned and effectively executed, leading to sustainable improvements and competitive advantage.
Faster, Smarter and More Sustainable

In the world of software development, outdated code can be an obstacle to innovation and growth. Legacy code is often built up from decades of patches, workarounds, and updates that were once functional but are now difficult to maintain.
Fortunately, there is a new player that can help development teams modernize this code: artificial intelligence (AI). Thanks to AI, companies can clean up, document, and even convert legacy code to more modern programming languages faster, more efficiently, and more accurately.
Legacy code, written in outdated languages or with outdated structures, brings several challenges:
Modernizing legacy code with AI not only offers companies the chance to benefit from new technologies but also to minimize risks and save costs. With AI, it is possible to gradually transform a legacy codebase into a modern, future-proof infrastructure without losing the underlying functionality.
In a world where technology evolves rapidly, companies can build a valuable advantage through AI by renewing outdated code and positioning themselves as innovative players in their field. Modernizing legacy code is now not only feasible but also cost- and time-efficient.
Need help coaching and implementing AI to modernize legacy code? Fill in the contact form and I will gladly explain more. On average, a modernization project with AI goes 5 times faster.
The world of generative AI (genAI) is developing at a rapid pace. Where we once only dreamed of technology that could match human creativity, today we see applications that surprise and inspire us. From text generation to artificial image and video production: genAI opens doors to new possibilities in various sectors, from marketing and entertainment to healthcare and education. In this article, we discuss the most groundbreaking developments and look at what the future may hold.
The latest genAI models such as GPT-4 from OpenAI and DALL-E have become multimodal. This means they can combine different types of input, such as text and images, to generate more complex and creative outputs. With DALL-E, for example, you can now generate images based on text descriptions, helping creative professionals visualize their ideas directly. These multimodal models make it easier to push boundaries between different creative disciplines.
In-context learning means that AI models get better at understanding the context and nuances of what you ask without needing additional training. This makes them immediately applicable in real-time situations, such as customer service. Adaptive AI, which can adjust based on feedback and usage patterns, ensures that AI continuously improves in delivering personalized responses and services.
The genAI community is becoming increasingly open, with companies like Meta and Hugging Face making their models public. This allows developers to experiment with these advanced AI systems themselves and contribute to improvements. The open-source community plays an important role in addressing issues such as bias and ethical concerns by incorporating input from diverse users worldwide.
Traditionally, powerful AI models like genAI require a lot of computing power and energy. Innovations in AI architectures, such as more efficient neural networks and specialized AI chips, make it possible to run large AI models on a smaller scale and at lower costs. This makes genAI solutions more accessible to smaller companies and individual users.
Where genAI was previously mainly applied to text, the latest developments in image and video technology are impressive. Models like Midjourney and Runway offer users the ability to generate high-quality images and even video clips. This is particularly useful for marketing and advertising, where visually appealing content plays a major role. New AIs can even mimic human movements, allowing actors or animated characters to move lifelike in generated environments.
With the rise of powerful genAI models, ethical issues also arise, such as copyright, privacy, and the impact of AI on jobs. More and more companies and governments are working on guidelines to ensure responsible use of AI. OpenAI, for example, introduced features like ‘safeguarding’ to prevent unintended results in image generation. There is also a focus on making AI more transparent for users so they know when and how AI is being used.
GenAI is increasingly finding its way into everyday software tools, such as word processors, design software, and browsers. Google and Microsoft are integrating AI features into their Google Workspace and Microsoft Office suites, respectively, helping users work smarter and faster. This integration ensures that AI support is directly available in the workflow of millions of people, which can significantly boost productivity.
With the speed at which genAI is developing, we can expect even more groundbreaking applications soon. Think of AI assistants that not only respond but also proactively help by taking over tasks, advanced holographic images that are almost indistinguishable from real ones, and AIs that collaborate to solve complex problems.
Companies will also increasingly apply AI in business processes. A company can train multiple agents with a specific task and have them work together as a team. Currently, AI is primarily a very suitable assistant—one that works quickly and is, for example, very good at writing, checking, and debugging computer code.
Generative AI is now indispensable and plays a crucial role in the future of technology and creativity. Whether it is companies using genAI to create innovative products or individuals wanting to increase their productivity, the possibilities are endless and the future looks promising.
NetCare has also created its own genAI application, which we call AIR: a cost-effective LLM that can be used for multiple applications, from programming to customer-service agents, and also as a translator for websites. Various websites, like this one, are translated by AIR. Of course, we also had the plugin built by AIR, with a little help from Gerard 🙂
The developments in the field of artificial intelligence (AI) raise questions about what lies ahead. A recent whitepaper by Leopold Aschenbrenner paints a fascinating picture of the current situation and what may await us. Here are some key insights shaping the future of AI, based on an analysis of trends and challenges.
The progress in AI is unprecedented. In just a few years, we have evolved from GPT-2, which was comparable to a toddler in understanding, to GPT-4, which has the capabilities of a smart high school student. This development has been driven by exponential growth in computing power, algorithmic efficiency, and innovative techniques such as reinforcement learning. The expectation is that this trend will continue, potentially leading to AI systems functioning as professional researchers or engineers by 2027.
After human levels of intelligence, the next step is superintelligence. This transition can be accelerated by AI’s ability to improve itself. The implications are enormous: from economic transformations to existential risks. Aschenbrenner emphasizes that this intelligence explosion could be a turning point, where control and safety are crucial to prevent disasters.
The enormous infrastructure needed for these AI systems is already being prepared. Companies are investing billions in data centers, GPUs, and electricity to provide the required computing power. This mobilization of resources marks an industrial shift comparable to historical war efforts, but now focused on technological dominance.
The economic implications of AI are profound. AI sectors are expected to drive a large part of global economic growth, particularly through automation, productivity increases, and the creation of new markets. At the same time, there is a risk of significant economic inequality, with countries and companies without access to advanced AI falling behind. According to Aschenbrenner, governments and companies must collaborate to bridge this gap by promoting education, innovation, and fair distribution of resources.
A major challenge is the security of AI models and data. The risk that sensitive technologies fall into the wrong hands, such as hostile states, poses a significant threat. The document calls for stricter security measures and better policies to mitigate such risks.
One of the greatest scientific challenges is developing methods to ensure AI systems act in accordance with human values, even when they become much smarter than we are. This is referred to as "superalignment." Failing to achieve superalignment could lead to unforeseen and potentially catastrophic consequences.
Besides technological challenges, there is a geopolitical dimension. Countries like China and the United States compete for dominance in AI. Whoever wins this race will have a decisive advantage not only economically but also militarily. It is therefore vital that democratic societies collaborate to ensure a free and stable world order.
The prospects outlined in this document are both exciting and concerning. They call for attention, action, and cooperation. To harness the opportunities of AI and manage the risks, we must invest in research, policy, and international collaboration. As the document states: the future is not just something that happens to us—it is something we shape together.
What do you think? Are we ready for the challenges and opportunities that AI brings? Read more here.
Data obviously plays a crucial role in companies that are digitizing. But while the demand for high-quality and large amounts of data is increasing, we often encounter challenges such as privacy restrictions and a lack of sufficient data for specialized tasks. This is where the concept of synthetic data emerges as a groundbreaking solution.
Synthetic data are data that are artificially generated rather than derived from real events or processes. These data are often created using algorithms and techniques from artificial intelligence (AI), such as machine learning models. The goal of synthetic data is to mimic real data as accurately as possible in terms of statistical properties and patterns.
Example: A synthetically generated room
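For tabular data, the core idea can be sketched very simply: fit basic statistics on real records, then sample new records from the fitted distribution. Real generators (GANs, copulas, diffusion models) capture far richer structure, but the principle is the same. The ages below are made-up example data:

```python
import random
import statistics

# Sketch of synthetic data generation: fit simple statistics on real data,
# then sample new records from that fitted distribution. The "real" ages
# here are made-up example values.
random.seed(42)

real_ages = [34, 29, 41, 38, 25, 47, 31, 36, 44, 28]
mu = statistics.mean(real_ages)
sigma = statistics.stdev(real_ages)

synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# The synthetic sample mimics the statistical shape, not the individuals.
print(round(statistics.mean(synthetic_ages), 1))
```

Because the generated values come from a distribution rather than from real people, no individual's record can be traced back, which is exactly the privacy benefit described above.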
While it offers many advantages, there are also challenges. Ensuring the quality and accuracy of this data is crucial: inaccurate synthetic datasets can lead to misleading results and decisions. It is also important to strike a balance between synthetic and real data to obtain a complete and accurate picture. Furthermore, synthetic data can be used to reduce imbalances (bias) in a dataset. Large language models rely on generated data because they have essentially already read the entire Internet and need even more training data to improve.
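Reducing imbalance with generated records can be sketched as follows: the minority class is topped up with slightly perturbed copies, a crude stand-in for SMOTE-style or GAN-based generation. All records here are randomly generated toy data:

```python
import random

# Minimal sketch of rebalancing a dataset with generated records: the
# minority class is topped up with slightly perturbed copies (a crude
# stand-in for SMOTE-style or GAN-based generation). Toy data throughout.
random.seed(0)

majority = [{"income": random.randint(20, 90), "label": 0} for _ in range(95)]
minority = [{"income": random.randint(20, 90), "label": 1} for _ in range(5)]

def synthesize(sample: dict) -> dict:
    noisy = dict(sample)
    noisy["income"] += random.randint(-2, 2)  # small perturbation
    return noisy

needed = len(majority) - len(minority)
synthetic = [synthesize(random.choice(minority)) for _ in range(needed)]
balanced = majority + minority + synthetic

print(len(balanced), sum(r["label"] for r in balanced))  # → 190 95
```

A model trained on the balanced set no longer learns to ignore the rare class, which is the bias reduction referred to above.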
Synthetic data are a promising development in the world of data analysis and machine learning. They offer a solution to privacy issues and improve data availability. They are also invaluable for training advanced algorithms. As we further develop and integrate this technology, it is essential to ensure the quality and integrity of the data so that we can fully harness the potential of synthetic data.
Need help applying AI effectively? Make use of our consultancy services
For years, industrial robots have made it possible to automate simple tasks. So far, this has not led to higher unemployment, but the assertion is that this is about to change.
With the advent of drones and self-driving cars, the entire transportation sector, police, and military will also be robotized. At the same time, GenAI and artificial intelligence in general will slowly but surely make the jobs of all highly educated people redundant. Normally, additional prosperity would lead to the creation of new jobs higher up in the value chain. Artificial Intelligence will counteract this process because AI can also deliver value here.
The extra prosperity will therefore end up with a few, the owners and managers of (large) companies. Initially, the gap between rich and poor will therefore increase further. First, the lower educated will lose their jobs, and no replacements will come for them. In the Netherlands, they will end up in the safety net of unemployment benefits and social assistance. In other countries like the USA, this will much more quickly lead to bitter poverty. It is not difficult to imagine that this could lead to enormous dissatisfaction and perhaps even revolutions. Hopefully, this is only a transitional period during which policymakers make adjustments so that everyone can benefit from increased prosperity. Drafting and implementing effective policy is crucial to shaping this transition.
But ultimately, this development cannot be stopped, simply because it is possible and a lot of money and power can be gained through AI and robotization.
So if ultimately even the highly educated are forced into unemployment by artificial intelligence, the government will be forced to intervene. This can be done by redistributing wealth between the (by then) super-rich and the unemployed. Because the national government will no longer have sufficient influence over multinationals, this requires cooperation. Let’s assume the positive and that this will eventually be achieved. We then live with much freedom, free time, and prosperity until the moment the last job is replaced by smarter robots. At that moment or just before, the economy as we know it disappears and everything becomes free. Robots make everything, including the extraction of raw materials, and because they demand no compensation, they do this at no cost, 24 hours a day, 365 days a year. The prices of products and services therefore continue to fall until they eventually reach zero.
The economy has disappeared, being rich no longer makes sense because everything is free.
Will a shadow economy arise, as currently exists between the underworld and upper world, or will we try to distinguish ourselves in other ways? At the moment, I do not know, but what I do know is that the above scenario is realistic and that we must be prepared both for the period between now and the disappearance of the economy and for the period thereafter.
But if we handle it well, we can actually achieve exactly what we have always wanted. More free time and enough income to lead a beautiful and fulfilling life. That thought is, therefore, worth continuing to invest in innovation.
In the world of artificial intelligence, one of the biggest challenges is developing AI systems that are not only intelligent but also act according to ethical norms and values that align with those of humans. One approach to this is training AI using legal codes and case law as a foundation. This article explores this method and looks at additional strategies to create an AI with human-like norms and values. I also made this suggestion on behalf of the Dutch AI Coalition in a strategy paper we wrote for the Ministry of Justice & Security, commissioned by the ministry.
The idea of training an AI on legal codes and case law rests on the premise that laws reflect the collective norms and values of a society. By having an AI analyze these legal texts, the system can gain insight into what is socially acceptable and which behaviors are prohibited.
Using GANs to Identify Gaps
Generative Adversarial Networks (GANs) can serve as a tool to discover gaps in legislation. By generating scenarios that fall outside existing laws, GANs can reveal potential ethical dilemmas or unaddressed situations. This enables developers to identify and address these gaps, giving the AI a more complete ethical dataset to learn from. Of course, we also need lawyers, judges, politicians, and ethicists to fine-tune the model.
While training on legislation provides a solid starting point, there are some important considerations:
To develop an AI that truly resonates with human ethics, a more holistic approach is needed.
1. Integration of Cultural and Social Data
By exposing the AI to literature, philosophy, art, and history, the system can gain a deeper understanding of the human condition and the complexity of ethical issues.
2. Human Interaction and Feedback
Involving experts from ethics, psychology, and sociology in the training process can help refine the AI. Human feedback can provide nuance and correct where the system falls short.
3. Continuous Learning and Adaptation
AI systems must be designed to learn from new information and adapt to changing norms and values. This requires an infrastructure that allows for ongoing updates and retraining.
4. Transparency and Explainability
It is crucial that AI decisions are transparent and explainable. This not only facilitates user trust but also enables developers to evaluate ethical considerations and adjust the system where necessary.
Training an AI based on legal codes and case law is a valuable step toward developing systems with an understanding of human norms and values. However, to create an AI that truly acts ethically in a way comparable to humans, a multidisciplinary approach is needed. By combining legislation with cultural, social, and ethical insights, and integrating human expertise into the training process, we can develop AI systems that are not only intelligent but also wise and empathetic. Let’s see what the future may bring.
Additional resources: