Let me start with a confession: as a kid, I was a classic nerd, drawn not to soccer but to the exact sciences. Math and physics weren’t just subjects—they were my playground.
During my education to become a teacher in Physics and Mathematics, I discovered something even more captivating: programming. It started with my first Apple IIe, where I tackled the challenge of programming with limited memory using machine language, analogue/digital interfaces, Pascal, and C.
Later, I turned to Visual Basic and C++, writing programs to simulate math scenarios, automate AutoCAD tasks, and later develop solutions on top of SmarTeam.
It was not just work; it was how I relaxed and structured my thinking. Would we call it vibe coding now?
The upside of this experience? Technical and physical concepts never intimidated me – they helped me to see the bigger picture. I was wired to think deeply, patiently, and persistently—skills that have stayed with me ever since.
The switch to humans
But then I got involved in training and mediating in PLM implementations, where I discovered that technical skills were needed; more important, however, were understanding human behavior (not software), communication, and PLM methodology skills.
Many implementations at that time stalled because everyone started with great enthusiasm until the results failed to materialize. The solution was not as expected, too unstable, or not possible, and from the users’ point of view it was too complex and frustrating. You can read one of my experiences from that time: Where is my ROI, Mister Voskuil?

One of those many interesting discussions
But the budget was often finished, and the enthusiasm was gone. One of my favorite quotes at that time was:
“You never get a second first impression.”
indicating that from the start you need to anticipate user acceptance: don’t think of a big-bang approach, and start with understanding and agreeing on the big picture before diving into the details.
How many of you have been in this situation?
Although the majority of people in the PLM community agree that human behavior can make or break a PLM implementation, most discussions and focus still target tools and technologies.
Organizational Change Management is often considered too soft to address, particularly in so-called result-driven organizations. Shut up and do the work!
Recently, some PLM software vendors have mentioned OCM as an important activity, sometimes even provided by them. Their business model is to sell as many software licenses as possible, and therefore they promise best-case scenarios and full coverage of your business scenarios.
Would you buy your PLM software from a company that says:
“Our software is great; however, you also need to address a business change program.”
Or would you buy from: “We are a market leader in your business, and thousands of users are currently working happily with our software.”
Based on my experience as a PLM coach, I believe that every PLM implementation should be a people and business discussion first – preferably sponsored at C-level – before jumping to solutions.
The challenge is that a human-centric approach depends on people and is therefore hard to scale; it is a people business, not a software tools business.
Digital Transformation is failing
While preparing for the upcoming Share PLM summit in Jerez on May 19-20, I was looking back at why real digital transformation in the PLM domain is still failing – we keep on working mostly in a linear document-driven operating model.
My opinion at this moment: for existing organizations, the move from coordinated to connected is too complex for humans.
Despite a great white paper from McKinsey on how organisations could move away from a linear, often document-driven organisation to an organisation working in multidisciplinary product teams, there is no real progress in most organisations.
Changing the organizational structure appears to be difficult, and this relates to Conway’s Law, which states that systems reflect the organizational structure that designs them – making it hard to determine where to start.
Not starting means not failing. And failing is the worst thing you can do at the C-level.
And now there is “product memory.”
Is the “product memory” based on an agentic AI layer and an underlying ontology, the next big thing after the connected digital enterprise? Initially formulated by Benedict Smith and later translated to a more PLM-specific scope by Martin Eigner and Oleg Shilovitsky, we are trying to combine the (boring) systems of record data with all the reasoning and decision-making – that’s where the knowledge is sitting.
Benedict shared his journey exploring AI and PLM through his True Intelligence newsletter, which I recommend you subscribe to. What I admire about Benedict is the fact that he does his research based on experiments and dialogues with others, without a commercial drive to sell a product or service (at the moment).
You can follow the thought experiments when reading the True Intelligence newsletters from the start.
A theme that came up also in other “the future of PLM” discussions was that traditional PLM only stores the results of a development and delivery process, but the reasoning is missing.
In my opinion, Colab Software was one of the first PLM-complementary startups, focusing on capturing the discussions and decisions during a design review, as the older image below shows – Colab Software is now much more advanced, with an AI-supported infrastructure.
Still, the image shows the value: the reasoning captured from the communication between different stakeholders during design reviews in the product development process.
More in the traditional PLM domain, Martin and Oleg started developing the concept of an agentic AI enterprise, driven by a graph-based layer on top of existing enterprise systems, as Martin’s image below illustrates.
Where Oleg stays (for me) closer to the traditional PLM enterprise world – e.g., his post Product Memory Architecture: How PLM Loses Engineering Knowledge and What Comes Next – Martin zoomed in on his day-to-day customer base in Germany when writing his post The Actual Concept of Product Memory based on a Digital Thread, with a vision for the upcoming five years.
In addition, less PLM-focused but very data-driven, Jan Bosch wrote a complementary post on his blog related to the agentic AI approach: From Copilot to Colleague – the rise of agentic AI.
An interesting quote from this post, valid for us all:
Agent systems require investment in data architecture, workflow mapping, governance frameworks and operational monitoring. Those investments compound. The organization that has deployed agents across its revenue cycle, supply chain and finance operations simultaneously develops deep operational expertise in running agentic systems, which is itself a form of competitive advantage.
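To make the idea of such a graph-based “product memory” layer a little more tangible, here is a thought-experiment sketch in Python. All names, node kinds, and relations are my own illustrative assumptions, not Martin’s, Oleg’s, or any vendor’s actual data model; the point is only that linking system-of-record items to the decisions behind them lets an agent answer “why?” by walking the graph:

```python
# Hypothetical sketch of a "product memory" graph layer.
# All identifiers are illustrative assumptions, not a real product's model.
from dataclasses import dataclass, field


@dataclass
class Node:
    kind: str          # e.g. "part", "decision", "requirement"
    label: str
    links: list = field(default_factory=list)  # (relation, Node) pairs


def link(src, relation, dst):
    src.links.append((relation, dst))


# System-of-record data: a part as it sits in the eBOM.
bracket = Node("part", "Bracket-A, rev B")

# The reasoning layer: why revision B exists at all.
review = Node("decision", "Design review 2025-03: switch to aluminium")
requirement = Node("requirement", "Weight reduction target -15%")

link(bracket, "justified_by", review)
link(review, "driven_by", requirement)


def why(node, depth=0):
    """Walk the reasoning chain behind a node, indenting per level."""
    lines = ["  " * depth + f"{node.kind}: {node.label}"]
    for relation, target in node.links:
        lines.append("  " * (depth + 1) + f"({relation})")
        lines.extend(why(target, depth + 2))
    return lines


print("\n".join(why(bracket)))
```

The sketch deliberately keeps the record data (the part) boring and puts the knowledge in the edges – which is exactly where traditional PLM systems have nothing to store.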
And while finalizing this post, there was an interesting discussion related to product memory at The Future of PLM: Introducing Product Memory, organized by Fino, also known as Michael Finocchiaro.
As a “techie,” I was able to enjoy and follow the discussion about a future infrastructure related to product knowledge. The term “product memory” seems a little overhyped, as if information that is not directly accessible through agents is a cause of failure. The big elephant in the room is where and how to start.
Enjoy the dialogue here:
What about a product memory trauma?
In the past, when discussing knowledge graphs, I already posed the question:
“How can knowledge graphs unlearn?”
In the techie world, there was always a hypothetical answer to this question, but will it work in a product memory environment where not everything is 100 percent exact and correct? Patrick Hillberg, one of the few PLM teachers, can educate you about seemingly small mistakes with a big impact.
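To make the unlearning question concrete, here is a minimal sketch (my own illustration, not any existing product) of why retracting a fact is harder than adding one: every conclusion derived from that fact has to be found and invalidated too:

```python
# Minimal illustration of the "unlearning" problem in a fact store.
# All facts and names are hypothetical.
facts = {"material(bracket, steel)", "supplier(bracket, acme)"}

# Derived conclusions remember which base facts they were built on.
derived = {
    "weight(bracket, heavy)": {"material(bracket, steel)"},
}


def retract(fact):
    """Remove a fact and invalidate everything derived from it."""
    facts.discard(fact)
    for conclusion, basis in list(derived.items()):
        if fact in basis:
            # The conclusion is no longer supported; it must be
            # re-derived later, if it is still true at all.
            del derived[conclusion]


retract("material(bracket, steel)")
print(facts, derived)
```

In a real knowledge graph, the dependency bookkeeping is far messier, which is exactly why “unlearning” tends to get only hypothetical answers.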
During the product memory discussion, I heard a statement that only validated data is allowed to be part of the memory.
Has anyone thought about the utopia of this statement?
The ambitious statement that product memory would lead to a single source of truth is, for me, also a utopia. 100 percent correct data does not exist, nor will 100 percent accurate decisions exist. It will be the most likely truth for the moment.
Now compare this with the human brain: when a serious accident happens, the person involved might suffer trauma from it. Then you need a psychiatrist to fix the trauma, meaning creating other memory constructs – rewiring the brain.
While watching this interesting dialogue with Rob Ferrone (the original Product Data PLuMber) about how Quick Release became a significant consultancy firm with a pragmatic focus on making the data flow (old image below), I had a new thought.

With Rob’s entrepreneurial skills, he might soon start a new company fixing product memory traumas – as data governance becomes a commodity.
Will the product data plumber become the first product memory shrink?
Conclusion
We are experiencing a fast-moving convergence on future PLM concepts, where the image from Martin Eigner nicely represents such a possible architecture based on “product memory”. The challenge I see is whether we will be able to implement such an architecture so that it is reliable and supported by humans. Because humans still have their old hardware, the limbic brain, which will try to escape from the perfect world with a single source of truth – they like their own truth.
That was 2025 – this year: same atmosphere, more experience, a bigger crowd, and more to discuss.




Last week I listened to a Dutch podcast that gave me unexpected inspiration.



I believe our brain is a muscle. Like any muscle, it needs resistance to stay strong. You do not become a better cyclist by riding an eBike everywhere — the motor does the work, and your legs lose the strength you need when the motor is not there. The same applies to cognitive effort.
It is not the first time a transformative technology arrived with enormous promise and created a deeply unequal outcome. The Industrial Revolution reduced most workers to resources while a few became extraordinarily wealthy.
The PLM vendors benefited from selling the dream, the consultants benefited from its complexity, and the users – initially engineers and later more stakeholders in the product lifecycle – often suffered under rigid processes and complex systems. As the systems were designed to store information, user-friendliness was not a priority.
There is an interesting discussion ongoing about the future of PLM infrastructures, well described recently by 
As individuals, we need to keep training our brain muscles without AI where the muscle matters. As the Dutch podcast mentioned: write your first draft before asking Claude to improve it, think through a problem before asking ChatGPT to solve it, and read a 100-page book.









Within the PGGA, everyone is welcome to share their perspective — with respect for those who see it differently. It’s not about being right or wrong. It’s about the dialogue, and about finding paths forward to a future that’s sustainable not just for the planet, but for businesses and the people within them.
My 2015 blog post has the same title:
ERP always had a strong voice at the management level—boxes on an org chart, reporting lines, clear ownership and KPIs flowing upward. You could see how the company was performing.
In many of my engagements, the company’s management often struggles to understand the value of collaboration because there is no direct line between collaboration and immediate performance. Revenue can be measured. Cycle times can be measured. Defects can be measured. Even employee turnover can be measured.
The problem is not that collaboration has no impact on performance – look at the introduction of email in companies. Did your company make a business case for that?
The return on investment on collaboration is real, but it does not show up as a clean, linear metric.
“We need better platforms.”

For companies, it is easier to celebrate the hero who fixes a late-stage integration disaster than the quiet team that prevented it months earlier through cross-functional dialogue.
Note: shared experiences are not the same as planned online web meetings, which became popular during and after COVID. Those follow a rigid regime of enforced collaboration, scheduled back-to-back in many companies, and most of the time lack the typical “coffee machine” experiences.
The question is not whether collaboration is valuable. The question is whether we are willing to adjust our vertical incentives to make it possible.


I enjoyed my role as the “Flying Dutchman,” travelling around the world to support PLM implementations and discussions. Flying was simply part of the job. Real communication meant being in the same room; early phone and video calls were expensive, awkward, and often ineffective. PLM was — and still is — a human business.









This definition needs to be resolved and adapted for a specific plant with its local suppliers and resources. PLM systems often support the transformation from the eBOM to a proposed mBOM and, if done more completely, a Bill of Process.
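A minimal sketch of that eBOM-to-mBOM resolution step, with invented part numbers and an invented local-substitution table (a real transformation also restructures the BOM and attaches a Bill of Process; this shows only the part-resolution idea):

```python
# Hypothetical eBOM -> plant-specific mBOM proposal.
# Part numbers and the substitution table are invented examples.
ebom = {
    "assembly-100": ["part-10", "part-20", "part-30"],
}

# Plant-local knowledge: which engineering parts map to
# locally sourced manufacturing parts.
local_substitutes = {"part-20": "part-20-localsupplier"}


def propose_mbom(ebom, substitutes):
    """Resolve each eBOM part to its local equivalent, if one exists."""
    mbom = {}
    for assembly, parts in ebom.items():
        mbom[assembly] = [substitutes.get(p, p) for p in parts]
    return mbom


print(propose_mbom(ebom, local_substitutes))
```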

The challenge for these companies is that there is a lot of guesswork to be done, as the service business was not planned in their legacy business. A quick and dirty solution was to use the mBOM in ERP as the source of information. However, the ERP system usually does not provide any context information, such as where the part is located and what potential other parts need to be replaced—a challenging job for service engineers.






In early December, it became clear that Rich would no longer be able to support the PGGA for personal reasons. We respect his decision and thank Rich for the energy and private money he has put into setting up the website, pushing the moderators to remain active and publishing the newsletter every month. From the frequency of the newsletter over the last year, you might have noticed Rich struggled to be active.
When you launch a product or start an alliance, the name can be excellent at the start, but later it might work against you. I believe we are facing this situation too with our PGGA (PLM Green Global Alliance).
Whether a business delivers products or services, most of the environmental impact is locked in during the design phase—often quoted at close to 80%. That makes design a strategic responsibility not only for engineering.
Green has gradually acquired a negative connotation, weakened by early marketing hype and repeated greenwashing exposures. For many, green has lost its attractiveness.

When reading or listening to the news, it seems that globalization is over and imperialism is back with a primary focus on economic control. For some countries, this means even control over people’s information and thoughts, by restricting access to information, deleting scientific data and meanwhile dividing humanity into good and bad people.

December is the last month when daylight is getting shorter in the Netherlands, and with the end of the year approaching, this is the time to reflect on 2025.
It was already clear that AI-generated content was going to drown the blogging space. The result: original content became less and less visible, and a self-reinforcing flood of generic messages further reduced the excitement.
Therefore, if you are still interested in content that has not been generated with AI, I recommend subscribing to my blog and interacting directly with me through the comments, either on LinkedIn or via a direct message.
It was PeopleCentric first at the beginning of the year, with the 

Who are going to be the winners? Currently, the hardware, datacenter and energy providers, not the AI-solution providers. But this can change.
Many of the current AI tools allow individuals to perform better at first sight. Suddenly, someone who could not write understandable (email) messages, draw images, or create structured presentations now has a better connection with others. The question to ask is whether these improved individual efficiencies will also result in business benefits for an organization.
Looking back at the introduction of email – with Lotus Notes, for example – email repositories became information silos and did not really improve people’s intellectual behavior.
As a result of this, some companies tried to reduce the usage of individual emails and work more and more in communities with a specific context. Also, due to COVID and improved connectivity, this led to the success of
For many companies, the chatbot is a way to reduce the number of people active in customer relations, either sales or services. I believe that, combined with the usage of LLMs, an improvement in customer service can be achieved. Or at least that perception, as so far I do not recall any interaction with a chatbot being specific enough to solve my problem.




Remember, the first 50 – 100 years of the Industrial Revolution made only a few people extremely rich. 


Note: I try to avoid the abbreviation PLM, as many of us in the field associate PLM with a system, where, for me, the system is more of an IT solution, where the strategy and practices are best named as product lifecycle management.



















Combined with the traditional dinner in the middle, it was again a great networking event to charge the brain. We still need the brain besides AI. Some of the highlights of day 1 in this post.








However, as many of the other presentations on day 1 also stated: “data without context is worthless – then they become just bits and bytes.” For advanced and future scenarios, you cannot avoid working with ontologies, semantic models and graph databases.
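The “data without context is worthless” point can be illustrated with a tiny example of my own (all identifiers invented): a bare number is just bits and bytes until semantic triples, the building blocks of ontologies and graph databases, attach its context:

```python
# A bare value: just bits and bytes, meaningless on its own.
value = 42.5

# The same value with semantic context, expressed as simple
# subject-predicate-object triples (all identifiers are invented).
triples = [
    ("measurement-001", "hasValue", 42.5),
    ("measurement-001", "hasUnit", "kg"),
    ("measurement-001", "property", "mass"),
    ("measurement-001", "ofPart", "Bracket-A"),
]


def describe(subject, triples):
    """Assemble a human-readable statement from a subject's triples."""
    ctx = {p: o for s, p, o in triples if s == subject}
    return f"{ctx['property']} of {ctx['ofPart']} = {ctx['hasValue']} {ctx['hasUnit']}"


print(describe("measurement-001", triples))
# -> mass of Bracket-A = 42.5 kg
```

Ontologies and semantic models standardize exactly these predicates, so the context survives across systems instead of living in one engineer’s head.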








The panel discussion at the end of day 1 was free of people jumping on the hype. Yes, benefits are envisioned across the product lifecycle management domain, but to be valuable, the foundation needs to be more structured than it has been in the past.