
People, wherever you are, we are in a kind of lockdown, some countries more restricted than others. Still, the challenge for most of us will be how to survive two, perhaps three, months of being locked in our homes and make the best of it. As I am not a virus expert, I will not give you any recommendations on that topic. As a PLM geek, I want to share with you the opportunities I see for the upcoming months.

A crisis is an opportunity

Most of us can count ourselves lucky that we do not live in the situation of twenty years ago. At that time, internet connectivity was expensive and slow, meaning working from home meant isolation from the rest of the world. The positive point now is that we can be connected virtually without travel and without face-to-face meetings – and we are pushed to do so. This external push is an interesting point for me.

The traditional attitude in my PLM engagements was that face-to-face meetings are crucial for creating a human connection and trust. Now I ask myself: is this a behavior of the past that should become obsolete in the future? We probably cannot afford this approach anymore if we take sustainability and the environment into consideration. We live in a globally connected world, but should we still act in the old way?

Perhaps not. Let's look at some examples that show it is time to shift behaviors.

In the Western world, we might think we know it all due to our dominance over the past hundred years. However, when you study history, you see civilizations come to power and, after hundreds of years, lose it again because they destroy themselves internally. Apparently a typical human trait that will not disappear – still interesting to analyze when considering a globally connected world. Where is the center of gravity today?

Interestingly, the ancient Chinese already understood that a crisis is an opportunity, or so I am told. The Chinese characters for crisis mean danger and opportunity, respectively, according to Wiki – see the image above. Joe Barkai was one of the first in my network to take action, explaining that instead of focusing on the losses of what is happening now, we should take the opportunity to be better prepared for the future. You can read his post here: The Corona virus and your company's brand. And these kinds of messages are popping up more frequently now. Let's stay safe while thinking about and preparing for the future.

Now a PLM-related example.

Remember what the FFF is happening?

Two or three weeks ago, we had a vivid discussion in our PLM and CM community around the famous FFF mnemonic. What the FFF is happening was my post sharing my point of view, and it drew a lot of reactions from different people.

The purpose of my post was to explain that the whole discussion was based on the paradigm that drawings define the part. Because of that, we have a methodology to decide whether we need a new part number or a revision. To me, this practice should no longer be a discussion.

A part has a unique identifier, and a document has a unique identifier. In PLM-systems, the information is managed by relations, no longer by identifiers – who knows the exact unique identifier anyway? In a PLM-system, information is connected, and the attributes of the part and the document tell you the details of the type of information. "Intelligent or meaningful" identifiers are no longer relevant in such an environment. Think about that…
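
To make this tangible, here is a minimal sketch (in Python, with hypothetical names and structures, not any vendor's API) of relation-based identification: the identifiers only guarantee uniqueness, the attributes carry the meaning, and relations connect the information.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Item:
    """A part or a document: the ID is meaningless, the attributes carry the meaning."""
    attributes: dict
    id: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique, not "intelligent"

# Relations, not identifiers, tie the information together
part = Item({"type": "part", "name": "Bracket", "material": "AlMg3"})
drawing = Item({"type": "document", "kind": "drawing", "status": "released"})
relations = [(part.id, "is-defined-by", drawing.id)]

# Nobody needs to know or decode an identifier - you navigate the relations
for source, relation, target in relations:
    if source == part.id and relation == "is-defined-by":
        print(f"Definition of {part.attributes['name']} found via relation: document {target}")
```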

In the comments on my post, Jesse Leal confirmed this statement:

This in contrast to Joe Brouwer who, as you might have noticed, keeps broadcasting his opinion that the good old days of the draftsman are gone, that Boeing made a tremendous mistake, and that PLM is fake – all combined with hyperlinks to his products and opinions. The comment below says it all:

Two points to observe in this response:

"Hey, Bob, send me the new digital identifier."

First, this statement assumes that if a person needs to retrieve information from someone else, they need to contact this person (Bob).

Bob then needs to drop his current work, answer the request, and send the latest version of the drawing? This is old school. In a PLM-system, information should be connected, and once Bob has released his latest drawing (no matter whether it is FFF), any user can find the latest approved version, without even looking at the identifier (which could be meaningless), by following the relations between products, parts, and documents.

This is PLM!

One of the benefits: Bob does not get disturbed during the day by these kinds of questions and can focus on his critical work as an expert.

Second, if you need to sit with a designer to understand PLM, then you are probably talking with the wrong person. Designers work in the context of PDM. When we speak about PLM, we are talking about a broader scope beyond engineering and design.

This is a common mistake in a lot of marketing stories. Companies that focus only on the design space – some EBOM-integrations with CAD-systems – are most of the time delivering PDM. When Agile PLM came out (later Oracle E9), and later Aras, without CAD-integrations, these companies were focusing on the flow of information inside the company, not necessarily driven by CAD. Of course, the traditional PLM companies combine CAD-integration with other capabilities. Dassault Systèmes, Siemens, and PTC all have a strong relationship with their native CAD-systems. However, their offerings go way beyond CAD-integration, e.g., end-to-end governance, change processes, and an item-centric backbone.

The diagram above explains the basics for the future. In push-mode, the person in the middle has the responsibility to distribute information and ensure it remains accurate for all stakeholders. This makes this person crucial (good job security) but extremely inefficient compared to people working in pull-mode, who are responsible for getting the accurate data themselves. It may be clear that the pull-mode is the model of a digital enterprise.
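
A minimal sketch of the difference, with hypothetical classes (not a real system): in push-mode, the person in the middle must know every stakeholder and gets interrupted by every release; in pull-mode, stakeholders query the single released source themselves.

```python
# Push-mode: Bob distributes copies and must keep every stakeholder informed
class Stakeholder:
    def __init__(self):
        self.inbox = []  # copies pile up here and silently go stale

class Bob:
    def __init__(self, stakeholders):
        self.stakeholders = stakeholders  # Bob must know and maintain this list

    def release(self, drawing):
        for person in self.stakeholders:  # every release (and question) interrupts Bob
            person.inbox.append(drawing)

# Pull-mode: one released source of truth, consumers retrieve data when they need it
class PLMSystem:
    def __init__(self):
        self.released = {}

    def release(self, part_id, drawing):
        self.released[part_id] = drawing  # single, always-accurate source

    def get_latest(self, part_id):
        return self.released.get(part_id)  # anyone pulls; Bob is not disturbed

plm = PLMSystem()
plm.release("O122", "drawing rev B")
print(plm.get_latest("O122"))  # a stakeholder helps themselves
```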

So if you have the time now, use it to rethink how ready your company is for a digital future. Companies that currently rely on Bob are in trouble, as Bob is currently sitting at home. Companies that have learned to shift from push-mode to pull-mode can continue working as planned, as they do not need Bob. And don't worry about your job: if you are in Bob's position, you will lose your job over time anyway. However, if you keep on evolving, learning, and adding value to your company, you will always be needed – don't lock yourself in.

If you want to be inspired more in this area, read Jan Bosch's post: This is not the end. Here Jan mentions the opportunity to move to digital practices (and more) – get out of our traditional patterns.

What can you do?

Even though COVID-19 has, and will have, a dramatic impact on our society, this is also the moment to rewire some of our processes, because there was never time to think and act due to the running business. It reminds me of the financial crisis in 2008, when the market for PLM vendors was terrible: no significant sales for them, as companies could not invest.

However, for me, 2008 was an extremely busy year, thanks to all kinds of government regulations. There was time and budget to support employees in raising their skills, and PLM was one of these domains. That year I conducted many workshops. It was also the year that I started my blog virtualdutchman.com.

Now we are in a similar situation, and probably a worse one, as we are locked in our homes. However, we are also better connected. Imagine this situation without the internet. Now we can learn even better.

So let's benefit from this connectivity and use the lockdown time to learn, think, and discuss with peers. Challenge and involve the management of your company on how they see and lead the way to the future.


In that context, I am happy to spend on average one day per week on free conference calls if you need clarification or support for your PLM-related ideas.

Contact me through a personal message on LinkedIn, and we will find a way to connect.

Conclusion

This decade will be decisive for many of us. At the beginning of this year, I wrote PLM 2020 – The next decade (4 challenges). With my narrow PLM-mind, I overlooked viruses. Bill Gates did not, as you can see from his 2015 TED talk: The next outbreak? We're not ready. Bill also explains that our traditional thinking patterns should change in a globally connected world.

I wish you all the time to think, educate yourself, and prepare for a changed future. Stay safe inside, stay healthy, knowing that for some of you this will be a big challenge.

One week ago, Yoann Maingon wrote an innocent post with the question: Has FFF killed? The question was raised in relation to a 2014 problem at GM, where a changed part caused fatal accidents.

The discussion was started by Yoann; here is my short extract. Assuming this problem was a configuration management issue, Yoann indicated that the problem might be related to the fact that ERP-systems do not carry a revision on the part number, leading to an unnoticed change. Therefore, he assumes there is a disconnect between the PLM-side (where we have parts with multiple lifecycle states and revisions) and ERP (where we have an industrial lifecycle – prototype/production).

He posted his thoughts, and then LinkedIn exploded (currently 116 comments), which means it is a topic of significant concern in our community. If you read the comments, there are different viewpoints:

  • What does FFF really imply?
  • What about revisions of parts?
  • What are the best practices?

Let's investigate these viewpoints with some comments.

What does FFF really imply?

When we talk about FFF in engineering, we mean Form, Fit and Function – the three primary characteristics to describe a part (source: Wikipedia):

  • Form refers to such characteristics as external dimensions, weight, size, and visual appearance of a part or assembly. This is the element of FFF that is most affected by an engineer’s aesthetic choices, including enclosure, chassis, and control panel, that become the outward “face” of the product.
  • Fit refers to the ability of the part or feature to connect to, mate with, or join to another feature or part within an assembly. The “fit” allows the part to meet the required assembly tolerances to be useful.
  • Function is a criterion that is met when the part performs its stated purpose effectively and reliably. In an electronics product, for example, a function can depend on the solid-state components used, the software or firmware, and quite often on the features of the electronics enclosure selected.

One of the comments on Yoann's post referred to Safe/Unsafe as a potential functional characteristic. I think this addition is not needed: safety should be a requirement for the part, not a characteristic.

FFF was, and still is, an approach for engineers to decide whether a new, improved version of the part gets a revision or needs a new part number.

I think before we dive deeper into the other viewpoints, it is crucial to define the part number a little more.

In a correct PLM data model, there are two types of part numbers. First, there is the internal part number that your company uses inside its engineering Bill of Materials to identify a part. This part number can be meaningless, only providing uniqueness inside the company.

In 2015, I wrote several posts related to best practices and data modeling for PLM. The most relevant posts for this discussion are here:

The part number can specify a part that needs to be manufactured according to specification, or a part that needs to be purchased from an available supplier/manufacturer. The second type of part number is the manufacturer part number, which is, most of the time, a meaningful number (6 – 7 characters), as these parts need to be ordered by your company; the manufacturer part number is the SKU for the manufacturer. As you can imagine, there is no revision mentioned in the manufacturer's catalog. In graphics, see the image below:

Your company might sell Product MP-323121 (note: the ID is meaningful to help the customer to order the product).

Internally, there is a related EBOM that specifies the product. The EBOM top part is O122 (note: here we can use a meaningless identifier, as everything is digitally connected).

For the manufacturing of O122, we need to resolve the EBOM according to its specifications. Therefore, for Part O124, the company needs to decide to purchase either part ABC-21231 or part XYZ-88818 from its approved manufacturers (note: again a meaningful ID, as these companies are not digitally connected).

Now coming back to the FFF-discussion. For the orange parts, with a meaningful ID, no revision exists. However, as long as Assembly O122 remains 100% FFF-compatible, the Product ID MP-323121 will not change. This allows your company to optimize the EBOM and/or MBOM while keeping 100% compatibility with the outside world. (Note: the same principle applies to the two manufacturers of Part O124.)

And in case Top Assembly O122 gets new or changed parts – what should happen then?

At that moment, the definition has changed. The definitions, most of the time described in documents/drawings/models, are information related to the BOM. Therefore, Top Assembly O122 should get a new identifier. There is no need to call it a revision; it is a new data set in the PLM-system, again with a meaningless identifier, as we are digitally connected.
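
To illustrate the example above, here is a small sketch (Python, a hypothetical structure, not a vendor data model): the internal EBOM nodes carry meaningless identifiers, the purchased part references meaningful manufacturer part numbers, and only a non-FFF-compatible change produces a new data set.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class BomNode:
    id: str                                                # internal, meaningless
    children: list = field(default_factory=list)
    manufacturer_pns: list = field(default_factory=list)   # meaningful IDs, no revisions

# Product MP-323121 (meaningful, customer-facing) points to EBOM top part O122
o124 = BomNode("O124", manufacturer_pns=["ABC-21231", "XYZ-88818"])  # approved sources
o122 = BomNode("O122", children=[o124])
product = {"product_id": "MP-323121", "ebom_top": o122.id}

def apply_change(top: BomNode, fff_compatible: bool) -> str:
    """Return the identifier of the EBOM top after a change."""
    if fff_compatible:
        return top.id          # optimize EBOM/MBOM; the outside world sees no change
    return uuid.uuid4().hex    # a new data set with a fresh meaningless identifier

print(apply_change(o122, fff_compatible=True))   # still O122, MP-323121 unaffected
print(apply_change(o122, fff_compatible=False))  # new identifier, no "revision" needed
```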

What about revisions of parts?

Of course, the management of changes existed long before PLM-systems were introduced.

The specifications of a part were defined in drawings. The drawing contained all the information: not only the geometry definitions, but also the specifications on how to manufacture the part.

For complex products, a considerable set of consistently related drawings would be released to manufacturing – a release process with physical signatures on it.

At the same time, there was no discussion: the drawing represents the part. And as there was no digital connection, part numbers/drawing numbers were meaningful, often with the format of the drawing as part of the identifier.

In case changes were needed – for example, fixing a dimension or tolerance discovered during manufacturing – the drawing had to be revised to remain consistent. First, the issue or change was marked in red in the original drawing (redlining). Then engineering had to create a new version of the drawing.

Depending on the impact of the change (here the FFF-principle comes in as well), people decided whether a new part number was needed (an FFF-change) or whether the change only required an update of the drawing(s), meaning a revision. If the difference was small (for example, adding a missing annotation), it could be called a minor change, all to be reflected in the drawing number, which equals the part number in this approach. So when we talk about revisions of parts, we are talking about a document change.

A lousy practice coming from that approach: manufacturing often just redlines a drawing and keeps the redlined drawing as their source, because it is too time-consuming or difficult to update the source drawing(s) through a change process. Engineering is not aware of this change, and when a later change comes through from engineering, these "fixes" might be missed, as there is no traceability.

Generic example of a PLM data model and its relations

When PLM-systems were introduced, companies of course did not want to disrupt their existing ways of working. Therefore, they asked the PLM-editors to enable revisions on parts, and so the PLM-editors did (and still do).

Decoupling of parts and documents in a PLM data model

However, if you want to use the PLM-system in the best manner, you need to "decouple" the concept that the part number equals the drawing number, and start using meaningless identifiers, as the relations between parts and drawings are managed in the PLM-system through relational links.
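
A minimal sketch of this decoupling (hypothetical, simplified): parts and documents live in their own tables, and a many-to-many relation table replaces the old "part number equals drawing number" convention.

```python
# Decoupled model: neither identifier needs to encode the other
parts = {"P-001": {"name": "Bracket"}}                       # IDs may be meaningless
documents = {"D-777": {"kind": "drawing", "revision": "B", "status": "released"}}
part_document_links = [("P-001", "D-777")]                   # many-to-many relation

def released_drawings_of(part_id: str) -> list:
    """All released drawings of a part, found via relations, not via the identifier."""
    return [doc_id for pid, doc_id in part_document_links
            if pid == part_id and documents[doc_id]["status"] == "released"]

print(released_drawings_of("P-001"))  # -> ['D-777']
```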

Relevant posts related to the PLM data model are:

What are the best practices?

As some people mentioned in their comments to Yoann's post: why do we have to answer this question when everything is already well understood and described in best practices? I agree with that statement. Best practices exist – so how do you obtain them?

First, there is the whole framework of Configuration Management, which existed long before PLM-systems were introduced. If you follow its methodology, you can be (almost) guaranteed that your information is consistent and correct. Configuration Management is crucial in areas where the impact of an error is enormous, like the GM-example Yoann referred to. Companies in the Aerospace and Defense industries are typically the ones that have strict configuration management in place.

Configuration management does not come for free. It requires an investment in skills, potentially a change in ways of working, and it creates overhead. Manufacturing companies that create less "risky" products often focus more on optimizing (= reducing) the cost of their internal processes instead of investing in proper methodologies to manage consistency.

If you want to learn more about CM, investigate the Institute of Process Excellence (IPX), the founders of the CM2 framework for Enterprise Configuration Management, and much more. Note: their knowledge does not come for free, which I can understand. However, it also creates a barrier for a company's further investment in CM, as this kind of strategic investment is hard to sell at management level by individuals in a company.

In the context of CM, I advise you to follow Martijn Dullaart, who is quite active in our social community. His latest blog post related to this thread is: It's about Interchangeability and Traceability.

With the introduction of PLM-systems, these companies and the PLM-editors created the opportunity to implement configuration management in their systems.

The data inside the system would be the "single version of the truth." Unfortunately, this was most of the time just a sales strategy, falsely giving the impression that information is now under control. Last year I wrote several posts related to the relation between PLM and CM, starting with PLM and Configuration Management – a happy marriage?

If you are interested in another resource for information related to these topics, have a look at the website of Jörg Eisenträger, who also collected his best practices for PLM and CM for sharing (thanks to Paul van der Ree for the link).

Don't expect best practices from your PLM-vendors, as their role is to sell software. It is the continuous discussion between:

  • A PLM-system that forces companies to work according to embedded methodology (hard to sell/implement but idealistically correct)

And

  • A flexible PLM-system that allows you to build and configure anything (easy to sell/challenging to implement correctly, depending on “wise” decisions)

The Future

Even though most companies work drawing-centric, with or without a linked PLM-backbone for BOM-management, the next upcoming challenge is to evolve to model-based practices. The current CM-practices still talk about documents, although in that context documents are already electronic datasets. The future model-based enterprise, however, evolves around connected models – 3D models, but also simulation and software models – with different lifecycles and paces of change. For the model-based enterprise, we need to develop digital best practices that guarantee the same level of quality, executed and/or supported by Artificial Intelligence (AI). AI is needed, as human beings cannot physically analyze and understand the full impact of a change in such an environment.

Conclusion

The FFF-discussion illustrates that building a consistent framework within PLM is not an easy goal to achieve. My blog buddy Oleg Shilovitsky would claim that we consultants create the complexity. PLM-editors will never solve this complexity; it is up to your company to invest in the knowledge to understand why and how to reduce it. With this post and the related links and discussions, I hope more clarity will help you to make "wise" decisions.

This post is based on a mix of interactions I had in my network during the last two weeks, mainly on LinkedIn. First, I enjoyed the discussion that started around Yoann Maingon's post: Thoughts about PLM Business models. Yoann is quite seasoned in PLM, as you can see from his LinkedIn profile, and we have had interesting discussions in the past, recently about Ganister PLM, a new PLM-system he is developing based on a flexible graph database.

Perhaps in that context, Yoann was exploring the various business models. Do you pay for the software (and maintenance)? Do you pay through a subscription? What about a modular approach or a full license for all the functionality? All these questions made me think about the various business models I have encountered and how hard it is for a customer to choose the optimal solution. And is there space for a new type of PLM? Is there space for free PLM? Some of my thoughts here:

PLM vendors need to be profitable

One of the most essential points to consider: whatever PLM solution you aim to buy, make sure your PLM vendor has a profitable business model. Once you have started with a PLM solution, it is your company's IP that will be stored in this environment, and you do not want to change your PLM system every few years. Switching PLM systems would only be affordable if PLM systems stored their data in a standard format – I will share a more in-depth link under PLM and standards.

For the moment, you cannot state that PLM vendors endorse standards. None of the real PLM vendors have a standardized data model. Perhaps closest to standards is Eurostep, who have based their ShareAspace solution on top of the PLCS (ISO 10303) standard. However, ShareAspace is positioned more as a type of middleware, connecting OEMs/Owners/Operators and their suppliers to benefit from standardized connectivity.

Coming back to the statement: PLM vendors need to be profitable, as this is the first step in guaranteeing a future for your company's data. The major PLM vendors are profitable now, as during a consolidation phase starting 15 years ago, a lot of non-profitable PLM vendors disappeared. MatrixOne, Agile, and Eigner & Partner are the best-known companies that were bought for either their technology or their market share. In that context, you might also look at Onshape.

Would they be profitable as a separate company, or would investors give up? To survive, you need to be profitable, so giving software away for free is not a good sign (see the PLM for free? paragraph below), as a company needs continuity.

PLM startups

In the past 10 years, I have seen and evaluated several new PLM companies. None of them really changed the PLM paradigm; most of them were still focusing on being an engineering collaboration tool. Several of these companies have in their visionary statement that they are going to be the "Excel killer." We all know Excel has the best user interface and capabilities to manipulate a collection of metadata.

Very popular is the BOM in Excel, extracted from the CAD-system (no need for an "expensive" PDM or PLM), or the BOM used to share with suppliers and stakeholders (ERP is too rigid; purchasing does not work with PDM).

The challenge I see here is that these startups do not bring real new value. The cost of manipulating Excel files is a hidden cost, and companies relying on Excel communication are the type of companies that do not have a strategic point of view. This is typical for small and medium businesses, where execution ("let's do it") gets all the attention.

PLM startups often collect investors' money because they promise to kill Excel, but is Excel the real problem? Modern PLM is about data sharing, which is an attitude change, not necessarily a technology change from Excel tables to (cloud) shared tables. However, will one of these new "Excel killer" PLMs be disruptive? I don't think so.

PLM disruption?

A week ago, I read an interview with Clayton Christensen (thanks, Hakan Karden), which I shared on LinkedIn. Clayton Christensen is the father of the theory of Disruptive Innovation, and I have cited him several times in my blogs. His theory is, in my opinion, fundamental to understanding how traditional businesses can be disrupted. The interview took place shortly before he died, at the age of 67, due to complications caused by leukemia.

A favorite part of this interview is where he restates what Disruptive Innovation really is, as we often talk about disruption without understanding the context, just echoing other people:

Christensen: Disruptive innovation describes a process by which a product or service powered by a technology enabler initially takes root in simple applications at the low end of a market — typically by being less expensive and more accessible — and then relentlessly moves upmarket, eventually displacing established competitors. Disruptive innovations are not breakthrough innovations or “ambitious upstarts” that dramatically alter how business is done but, rather, consist of products and services that are simple, accessible, and affordable. These products and services often appear modest at their outset but over time have the potential to transform an industry.

Many of the PLM startups dream of and position themselves as the new disruptor. Will they succeed? I do not believe so if they only focus on replacing Excel; a different paradigm is needed. Voice control and analysis, perhaps ("Hey PLM, if I change Part XYZ, what will be affected?").

This would be disruptive and open new options. I think PLM startups should focus here if they want my investment money.

PLM for free?

There are some voices that PLM should be free, in analogy to software management and collaboration tools. There are so many open-source software management tools – why not use them for PLM? I think there are two issues here:

  • PLM data is not like software data. A lot of PLM data is based on design models (3D CAD / simulation), which are different from software. Designs are often not as modular as software, for various reasons. Companies want to be modular in their products, but do they have the time and resources to reinvent their existing products? For software, these costs are much lower, as it is only a brain exercise. For hardware, the impact is significant – bringing me to the second point.
  • The cost of change for hardware is entirely different from that for software. Changing software does not have an impact on existing stock or suppliers and can therefore be implemented once tested for its purpose. A hardware change impacts the existing production process: do we first use up the old parts before introducing the change, or do we accept the cost of scrap? Are our supply chain and our production tools ready to deliver continuity for the new version? Hardware changes are costly, and you want to avoid them. Software changes are cheap; therefore, design your products to be configurable through software (for example, Tesla's software controlling which features are allowed).

Now imagine that, with enough funding, you could provide PLM for free. Because of ease of deployment, this would very likely be a cloud offering: easy and scalable. However, all your IP is in that cloud too. And let's imagine that the cloud is safer than on-premise, so it does not matter in which country your data is hosted (does it?).

Next, after five years, the "free" PLM provider starts asking a small service fee, as the promised ROI of the model hasn't delivered enough value for the shareholders, and they become anxious. Of course, you do not like to pay the fee. However, where is your data, and what happens when you do not pay?

If the PLM provider switches you off, you are without your IP. If you ask the PLM provider to hand over your data, what will you get? A blob of XML-files – anything you can use?

In general, this is a challenge for all cloud solutions.

  • What if you want to stop your subscription?
  • What is the allowed Exit-strategy?

Here I believe customers should ask for clarity, and perhaps these questions will lead to a renewed understanding that we need standards.

PLM and standards

We had a vivid discussion about this in the blogging community in September last year. You can read more related to this topic in my post: PLM and the need for standards, which describes the aspects of lock-in and the need for openness.

Finally, a remark related to the PLM-acronym. Another interesting discussion started around Joe Barkai's post: Why I do not do PLM. Read the comments and the various viewpoints on PLM there. It is clear that the word PLM unites us all; however, the interpretation differs.

If someone in the street asks me what my profession is, I never mention I do PLM. I say: "I assist mainly manufacturing companies in redesigning their business processes using best practices and modern digital technologies." The focus is on the business value, not on the ultimate definition of PLM.

Conclusion

There are many business aspects related to PLM to consider. Yoann Maingon’s post started the thinking process, and we ended up with the PLM-definition. It all illustrates that being involved in PLM is never a boring journey. I am curious to learn about your journey and where we meet.

At the beginning of this week, I attended the 9th edition of the PI conference in London. Where it started as a popular conference with 300 – 400 attendees at its best, we were now back to a smaller number of approximately 100 attendees.

It illustrates that PLM as a standalone topic no longer attracts a broad audience, as Marketkey (the organizer of the conference) confirms. The intention is that future conferences will focus on the broader scope of PLM, where business transformation will be one of the main streams.

In this post, I will share my highlights of the conference, knowing that other sessions might have been valuable too, but I had to make a choice.

It is about people

Armin Prommersberger, CTO of DIRAC and the chairman of the conference, made a great point: "What we will discuss in the upcoming two days is all about people, not about technology."

I am not sure if this opening influenced the mood of the conference, but when I look back at its central theme: it was all about how we deal with people when explaining, implementing, and justifying PLM.

AI at the Forefront of a Digital Transformation

Muhannad Alomari from R2 Data Labs, a separate unit within Rolls-Royce set up to explore and provide data innovation, started with his keynote speech sharing the AI initiatives within his team.

He talked about several projects where AI will become crucial.

For example, the EHM program related to engine behavior: how to detect anomalies, how to establish predictive maintenance, and how to maximize the time an airplane engine is in operation. Interesting to mention is that Muhannad explained that most simulation models are simplified, not accurate enough to discover anomalies.

Modeling in the PLM world with feedback from reality

Machine learning and feedback loops are crucial to optimize the models both for the discovery of irregularities and, of course, to improve understanding of the engine behavior and predict maintenance. Currently, maintenance is defined based on the worst-case scenario for the engine, which in reality, of course, will not be the case for most engines. There is a lot (millions) to gain here for a company.

Interesting to mention is that Muhannad gave a realistic view of the current status of Artificial Intelligence (AI). AI is currently still dumb – it is a set of algorithms that need to be adapted whenever new patterns are discovered. Deep learning is still not there – currently, we still need human beings for that.

This was in contrast with the later session from Kalypso, titled Supercharge your PLM with advanced analytics, a typical example of how big the difference is between a realistic story (R2 Data Labs) and what is sold by PLM vendors or implementers. Kalypso introduced Product Lifecycle Intelligence (PLI) – you can see the dream on the left (click on the image to enlarge).

Combine PLM with analytics, and you have intelligence. My main comment: knowing from the field that in most companies the first three phases lack data quality and consistency, any "intelligence" will probably be based on unreliable sources. Not an issue if you are working in the domain of politics; however, when it comes to direct cost and quality implications, it can be a significant risk. We still have a way to go before we have a reliable PLM data backbone for analytics.

Keeping PLM Momentum after a Successful Campaign

Susanna Mäentausta from Kemira in Finland gave an exciting update on their PLM project. Where in 2019 she shared their PLM roadmap with us (see my 2019 post: The weekend after PI PLMx London 2019), this time Susanna shared how they are keeping the PLM momentum.

Often PLM implementations are started based on a hypothetical business case (I talked about this in my post The PLM ROI Myth). But then, when you implement PLM, you need to make sure you provide proof points to motivate the management. And this is exactly what the PLM team at Kemira has been doing. Management often believes that after the first investment the project is done ("We bought the software, so we are done"); however, the business and process changes that will deliver the value are not reported.

Susanna shared with us how they defined measurable KPIs for two reasons. First, to show the management that there is business progress and there are benefits; however, it is a journey. And second, the facts are used to kill the legends that "before PLM we were much faster or more efficient." These types of legends are often expressed loudly by persons who consider PLM an overhead (killing their freedom) instead of a way to be more efficient in business. In the end, for a company, the business is more important than one person's belief.

On the question of what she would have done better with hindsight, Susanna answered: "Communicate, communicate, communicate." A response I fully support, as PLM teams are often too busy completing their day-to-day work to have spare time left for communication – crucial to achieving a business change.

My takeaway: a PLM implementation and its support need to be fact-based, combined with the understanding that we are dealing with people and their emotions too. Both need full attention.

Accelerating Digitalization at Stora Enso

Samuli Savo, Chief Digital Officer at Stora Enso, explained the principles of innovation related to digitalization at his company. Stora Enso, a Swedish/Finnish company, historically one of the largest forestry companies in the world as well as one of the most significant paper and packaging producers, is working on a transformation to become the renewable materials company. For me, he made two vital points on how Stora Enso's digitalization journey is organized.

He pleads for experimentation funded by corporate, as in the experimental stage it does not make sense to have a business case. First DO and then ANALYZE, where many companies have the policy to first ANALYZE and then DO, killing innovative thinking.

The second point was the active process of challenging startups to solve business challenges they foresee. Combined with a governance process for startups, this allows these companies to be supported and become embedded within member companies of the Combient Foundry, like Stora Enso. By doing this in a structured way, the outcome must lead to innovation.

I was thinking about the hybrid enterprise model that I have been explaining in the past. Great story.

Cyber-security and Future Mobility

Out of interest, I followed the session by Madeline Cheah, Cybersecurity Innovation Lead at HORIBA MIRA. She gave an excellent and well-structured overview. Madeline leads the cybersecurity research program, and part of this job is investigating ways to prevent vehicles from being attacked, in particular when it comes to connected and autonomous vehicles: how to keep them secure.

She discussed the known gaps and the cybersecurity implications of future mobility, so extensive that I even doubted whether there will ever be an autonomous vehicle on the road – so much still to define and explore. She looked at it from the perspective of the Internet of Everything, where Everything is divided into Things, Data, Processes, and People. Still a lot of work to do; see the image below.

Good Times Ahead: Delay Mitigation Through a Plan for Every Part

Ian Quest, director at Quick Release, gave an overview of what their company aims to be. You could translate it as: the plumbers of the automotive industry. Where in the ideal world information should flow from design to release, there are many bottlenecks, leakages, and hiccups that need to be resolved, as the image shows.

Where their customers often do not have the time and expertise to fix these issues, Quick Release brings in various skillsets and common sense. For example, how to deal with the Bill of Materials, Configuration Management, and many other areas that you need to address with methodology first instead of (vendor-based) technology. I believe there is a significant need for this type of company in the PLM-domain.

The second part, presented by Nick Solly with a focus on their QRonos tool, was perhaps a little too focused on the capabilities of the tool. Ian Quest, in his introduction, had already made the correct statement:

The QRonos tool, which is more or less a reporting tool, illustrates again that when people care about reliable data (planning, tasks, parts, deliverables, …), you can improve your business significantly by creating visibility of delays or bottlenecks. The value lies in measurable activities, and from there you can learn to predict or enhance – see R2 Data Labs, Kemira, and the PLI dream.

Conclusion

It is clear that a typical PLM conference is no longer a technology festival – it is about people: people trying to change or improve their business, trying to learn from each other, knowing that the technical concepts and technology are there.

I am looking forward to the upcoming PI events where this change will become more apparent.

Last week I shared my thoughts related to my observation that the ROI of PLM is not directly visible or measurable, and I explained why. I also explained that the alignment of an organization requires a myth to make it happen. A majority of readers agreed with these observations. Some others either misinterpreted the headlines or twisted the story in favor of their own opinion.

A few of those came from Oleg Shilovitsky, and as Oleg is quite open in his discussions, this allows me to follow up on his statements. Other people might share similar thoughts but haven't had the time or opportunity to be vocal. Feel free to share your thoughts/experiences too.

Some misinterpretations from Oleg’s post: PLM circa 2020 – How to stop selling Myths

  • The title “How to stop selling Myths” is the first misinterpretation.
    We are not selling myths – more below.
  • "Jos Voskuil's recommendation is to create a myth. In his PLM ROI Myths article, he suggests that you should not work on a business case, value, or even technology" is the second misinterpretation: you still need a business case, you need value, and you need technology.

And I got some feedback from Lionel Grealou, whose post was a catalyst for me to write the PLM ROI Myth post. I agree I took some shortcuts based on his blog post. You can read his comments here. The misinterpretation is:

  • "Good luck getting your CFO approve the business change or PLM investment based on some "myth" propaganda :-)" – it is the opposite: make your plan, support your plan with a business case, and then use the myth to align.

I am glad about these statements as they allow me to be more precise, avoiding misperceptions/myth-perceptions.

A Myth is bad

Some people might think that a myth is bad, as a myth is most of the time abstract. I think these people do not realize how many myths they are following themselves; it is typical social human behavior to respond to myths. Some myths:

  • How can you be religious without believing in myths?
  • In this country/world, you can become anything if you want
  • In the past, life was better
  • I will make this country great again

The reason human beings need myths is that without them, it is impossible to align people around abstract themes. Try, for each of the myths above, to create an end-to-end logical story based on factual and concrete information. Impossible!

Read Yuval Harari's book Sapiens about the power of myths. Read Steven Pinker's book Enlightenment Now to understand that statistics show a lot of current myths are false. However, this does not mean a myth is bad. Human beings are driven by social influences and myths – it is how our brain works.

Unless you have no social interaction, you might be immune to myths. Which brings me to quoting Oleg one more time:

“A long time ago when I was too naive and too technical, I thought that the best product (or technology) always wins. Well… I was wrong. “

I went through the same experience. Having studied physics and mathematics makes you think extremely logically, something I enjoyed while developing software. Later, when I started my journey as the virtualdutchman, mediating in PLM implementations, I discovered that logic alone does not work in businesses. The majority of decisions are made based on "gut feelings," still presented as reasonable cases.

Unless you have an audience of Vulcans, like Mr. Spock, you need to deal with the human brain. Consider the myth as the envelope to pass the PLM-project to the management. C-level acts on myths, as so far I haven't seen C-level management spending serious time on understanding PLM. I will end with a quote from Paul Empringham:

I sometimes wish companies would spend 6 months+ to educate themselves on what it takes to deliver incremental PLM success BEFORE engaging with software providers

You don’t need a business case

Lionel is also skeptical about some "Myth-propaganda," and I agree with him. The myth is the envelope; inside, there needs to be something valuable: the strategy, the plan, and the business case. Here I want to stress one more time that most business cases for PLM focus on tool and collaboration efficiency and project benefits from there. However, how well can we predict the future?

If you replace a process – let's assume you replace BOM-collaboration done with Excel by BOM-collaboration based on an Excel-on-the-cloud-like solution – you can measure the differences, assuming you can measure people's efficiency. I guess this is what Oleg means when he explains that OpenBOM has a real business case.

However, if you change the way people are intended to work – for example, consulting your supplier or manufacturing earlier in the design process – you touch human behavior. Why should I consult someone before I finish my job, when I am measured on output, not on collaboration or proactive response? Here is the real ROI challenge.

I have participated in dozens of business cases, and in the end, they all look like the graph below:

The ROI is fantastic: after a little more than two years, we have a positive ROI, and it only grows from there. So if you trust the numbers, you would be a fool not to approve this project. Right?
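
For illustration only, with made-up numbers: such a business-case graph is essentially a cumulative cash-flow calculation like the sketch below, which breaks even shortly after year two – exactly the curve every PLM business case seems to produce.

```python
# Hypothetical business case: one-time investment plus yearly costs vs. projected benefits
investment = 500_000                 # licenses and implementation in year 0
yearly_cost = 100_000                # maintenance and support from year 1 onward
projected_benefits = [0, 250_000, 400_000, 500_000, 500_000]  # the uncertain part

cumulative = -investment
for year, benefit in enumerate(projected_benefits):
    cumulative += benefit - (yearly_cost if year > 0 else 0)
    print(f"Year {year}: cumulative cash flow {cumulative:+,}")
# Year 2 ends at -50,000, year 3 at +350,000: break-even just after year two,
# provided you trust the projected benefits
```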

And here comes the C-level gut-feeling. If I have a positive feeling (I follow the myth), then I will approve. If I do not like it, I will say I do not trust the numbers.

Needless to say, if there were a business case without a positive ROI, we would not need to meet the C-level at all. Unless – and it happens incidentally – at C-level the decision was already made that we need PLM from Vendor X, because we played golf together, we are condemned together, or we believe in the same myths.

In reality, the old Gartner graph of realized benefits says it all. The impact of culture, processes, and people can make or break a plan.

You do not need an abstract story for PLM

Some people believe PLM on its own is a myth: you just need the right technology, and people will start using it, spread it, and show how the business has improved. Sometimes email is used as an example. Email is popular because, with limited effort, you can collaborate with people no matter where they are. Now, twenty years later, companies are complaining about the lack of traceability and the lack of knowledge and understanding related to their products and processes.

PLM will always have the complexity of supporting traceability combined with real-time collaboration. If you focus only on traceability, people will complain that they are not counter clerks. If you focus solely on collaboration, you miss the knowledge build-up and traceability.

That's why PLM is a mix of governance and optimized processes to guarantee quality and collaboration, combined with a methodology to tune the existing processes, implemented in tools that allow people to be confident and efficient. You cannot translate a business strategy into a function-feature list for a tool.

Conclusion

Myths are part of the human social alignment of large groups of people. Whether a myth is true or false, I will not judge. You can use the myth as an envelope to package your business case. The business case should always be a combination of new ways of working (organizational change), optimized processes and, finally, the best tools. A PLM tool-only business case is, in my opinion, far from realistic.

Now preparing for PI PLMx London on 3-4 February – discussing Myths, Single BOMs and the PLM Green Alliance.

For me, the joint conference from CIMdata and Eurostep is always a conference to look forward to. The conference is not as massive as PLM-vendor conferences (slick presentations and happy faces); it is more a collection of PLM-practitioners (this time 100+) with the intent to discuss and share their understanding and challenges, independent from specific vendor capabilities or features. And because of its size, it is a great place to network with everyone.

Day 1 offered more of a business/methodology view on PLM, and Day 2 went more in-depth, focusing on standards and BIM. In this post, the highlights from the first day.

The State of PLM

Peter Bilello, CIMdata's president, kicked off with a review of the current state of the PLM industry. Peter mentioned that the PLM-market grew by 9.4% to $47.8 billion (more than the expected 7%). Good for the PLM vendors and implementers.

However, Peter also mentioned that despite the higher spending, PLM is still considered a solution for engineering, often implemented as PDM/CAD data management. Traditional organizational structures – marketing, engineering, manufacturing, quality – were defined in the previous century and are measured as such.

This traditional approach blocks the roll-out of PLM across these disciplines. Who is the owner of PLM, and where does the responsibility for a certain dataset lie? These are questions to solve. PLM needs to transform to deliver end-to-end support instead of remaining the engineering silo. Will we still be talking about PLM in the future? See Peter's takeaways below:

We do not want to open the discussion of whether the name PLM should change – too many debates; however, unfortunately there was too much framing in the past too.

The Multi-View BOM

Fred Feru from Airbus presented the status of a project the Aerospace & Defense PLM Action Group is working on: how to improve and standardize a PLM solution for multi-view BOM management, in particular the interaction between the EBOM and MBOM. See below:

You might think this is a topic already solved when you speak with your PLM-vendor. However, all existing solutions at the participants' implementations rely on customizations and vary per company. The target is to come up with common requirements that need to be addressed in a standard methodology. Initial alignment on terminology was already a first required step: before you standardize, you need a common dictionary. Moreover, this is a typical situation in EVERY PLM implementation.

An initial version was shared with the PLM editors for feedback, with the aim, after iterations and agreement, to come to a solution that can be implemented without customization. If you are interested in the details, you can read the current status here, with Appendix A and Appendix B.

Enabling the Circular Economy for Long Term Prosperity

Graham Aid gave a fascinating presentation on the potential and the flaws of creating a circular economy. Graham was not a PLM-expert (until he left this conference); he is the Strategy and Innovation Coordinator for the Ragn-Sells Group, which performs environmental services and recycling across Sweden, Norway, Denmark, and Estonia. Have a look at their website here.

Graham shared with us the fact that despite the logical arguments for a circular economy – it is more profitable in the end – our short-term thinking and biases block us from doing the right things for future generations.

Look at the missing link for a closed resource-lifecycle view below.

Graham shared weird examples where scarce materials for the future are currently getting cheaper, and therefore there is no desire to recycle them. A sound barrier filled with rubble can contain more copper than the copper ore in a mine.

In the PLM-domain, there is also an opportunity for supporting and working on more sustainable products and services. It is a mindset and can be a profitable business model. At the PDT 2014 conference, there was a session on circular product development, with Xerox as the best example. Circular product development, but also Product as a Service, can be activities that contribute to a more sustainable world. Graham's presentation was inspiring for our PLM community and hopefully planted a few seeds for the future, as it is all about thinking long-term.

With the PLM Green Alliance, I hope we will be able to create a larger audience and participation for a sustainable future. More about the PLM Green Alliance next week.

The Fundamental Role of PLM in Data-driven Product Portfolio Management

Hannu Hannila (Polar) presented his study related to data-driven product portfolio management and why it should be connected to PLM. For many companies, it is a challenge to understand which products are performing well and where to invest. These choices are often supported by "Data Damagement," as Hannu called it.

An example below:

The result of this fragmented approach is that organizations make their decisions on subjective data and emotions. Where the assumption is that 20% of the products a company sells generate 80% of the revenue, Hannu found in his research companies where only 10% of the products contributed to the revenue, as PPM (Product Portfolio Management) is often based on big emotions – a who-shouts-the-loudest mentality, influenced by the company's pet products and by the HIPPO (HIghest Paid Person in the Office). So how do you get a better rationale?

Hannu explained a data-driven framework that would provide the right analytics at management level, depending on overall data governance covering all disciplines and systems. See below:

I liked Hannu's conclusions, as they align with my findings:

  • To be data-driven, you need Master Data Management and Data Governance
  • Product Portfolio Management is the driving discipline for PLM, and in a modern digital enterprise, it should be connected.

Sponsor sessions

Sponsors are always needed to keep a conference affordable for the attendees. The sponsor sessions on day 1 were of good quality. Here is a quick overview and a link if you want to investigate further:

  • Configit – explaining the value of a configurator that connects marketing, technical, and sales, introducing CLM (Configuration Lifecycle Management) – a new TLA
  • Aras – explaining their view on what we consider the digital thread
  • Variantum – explaining their CPQ solution as part of a larger suite of cloud offerings
  • Quick Release – bringing common sense to PLM implementations, similar to what I am doing as a PLM coach – focusing on the flow of information
  • SAP – explaining the change in focus when a company moves toward a product-as-a-service model
  • SharePLM – a unique company addressing the importance of PLM training, delivered through eLearning

Conclusion

The first day was an easy-to-digest conference with presentations of good quality. I only shared 50% of the sessions, as we have already reached 1000+ words. In the evening, I enjoyed the joint dinner, where I could network and discuss in depth with participants, and finished with a social networking event organized by SharePLM. Next week, part 2.

The usage of standards has been a recurring topic during the past 10 months; it probably came back to the surface at PI PLMx Chicago during the PLM Leaders panel discussion. If you want to refresh the debate, Oleg Shilovitsky posted an overview: What vendors are thinking about PLM standards – Aras, Dassault Systemes, Onshape, Oracle PLM, Propel PLM, SAP, Siemens PLM.

It is clear to vendors that when they actively support standards, they reduce their competitive advantage; after all, you are opening your systems to connect to other vendors' solutions, reducing the chance to sell adjacent functionality. We call it vendor lock-in. If you think this approach only counts for PLM, I would suggest you pick up your Apple (iPhone) and think about vendor lock-in for a moment.

Vendors will only adhere to standards when pushed by their customers, and that is why we have a wide variety of standards in the engineering domain.

Take the example of JT as a standard viewing format, heavily pushed by Siemens for the German automotive industry to be able to work downstream with CATIA and NX models. There was a JT-version (v9.5) that reached ISO 14306 alignment, but after that, Siemens changed JT (v10) again to optimize their own exchange scenarios, and the standard was lost.

And as customers did not complain (too much), the divergence continued. So it is clear vendors will not maintain standards out of charity, as your business does not work for charity either (or does it?). And I do not blame them if there is no push from their customers to maintain them.

What about standards?

The discussion related to standards flared up around the IpX ConX19 conference and a debate between Oleg and Hakan Karden (Eurostep), where Hakan suggested that PLCS could be a standard data model for the digital thread – you can read Oleg's view here: Do we need a standard like PLCS to build a digital thread.

Oleg's opening sentence, below, made me immediately stop reading further, as more and more I am tired of this type of framing when you want a serious discussion based on arguments. In politics in particular, we see the bad examples of framing.

Standards are like toothbrushes, a good idea, but no one wants to use anyone else’s. The history of engineering and manufacturing software is full of stories about standards.

This opening sentence says it all about the mindset related to standards. It is a one-liner, not a fact. It could have been a tweet in this society of experts.

Still, later I read the blog post and learned that Oleg has no arguments to depreciate PLCS; however, as he does not know the details, he will probably not use it. This is the main challenge of standards: you need to spend time to understand them and agree on following them. Otherwise, you get the same divergence as with JT, or similar examples.

However, I might have been wrong in my conclusion, as Oleg did some thinking on a Sunday and came up with an excellent post: What would happen if PLM Vendors agree about data standards. Here Oleg makes the comparison with a standard in the digital world established by Google, Microsoft, Yahoo, and Yandex: Schema.org: Evolution of Structured Data on the Web.

There is a need for semantic mapping and understanding in the day-to-day world, and this understanding makes you realize the same is needed for PLM. That was one of the reasons why I wrote, back in 2015, a series of posts related to the importance of a PLM data model:

All these posts were aimed at helping companies and implementers to make the right choices for an item-centric PLM implementation. At that time (2015), item-centric was the current PLM best practice. I learned from my engagements over the past 15 years, in particular with flexible modeling tools like SmarTeam or, nowadays, Aras, that making the right data model decisions is crucial for future growth.

Who needs standards?

First of all, as long as you stay in your controlled environment, you do not need standards. In particular in the Aerospace and Automotive industries, the OEMs defined the software versions to be used, and the supply chain had to adhere to their chosen formats. Even this narrow definition was not complete, as a 3D CAD model still needed to be exported for simulation or manufacturing purposes, and no single vendor covered the full CAD model definition at that time. So the need for standards emerged from the need to exchange data.

Data exchange is the driving force behind standards.

In a second stage, neutral-format data storage also became an important point: how do you preserve an aircraft definition for 75 years?

Oil & Gas / Building – Construction

These two industries both had a need for standards. The Oil & Gas industry relies on EPC (Engineering / Procurement / Construction) companies that build plants or platforms. Then the owner/operator takes over the operation and needs a hand-over of all the relevant information. However, if this information were delivered in the application-specific formats the EPC companies used, the owner/operator would need various software environments and skills just to access the data.

Therefore, if the data is delivered in a standard format (ISO 15926) and the exchange follows CFIHOS (Capital Facilities Information Hand Over Specification), the hand-over between the EPC and owner/operator environments can be automated to a large degree, leading to a lower overall cost of delivering and maintaining the information, combined with higher quality. For that reason, the Oil & Gas industry has invested in standards for a long time already, as their plants/platforms have a long lifecycle.

And the same is happening in the construction industry. Initially, Autodesk and Bentley fought to become the vendor standard; ultimately the IFC standard, which has taken a lot from the Autodesk world, became a neutral standard for all parties involved in a construction project to share and exchange data. In particular for the construction industry, the cloud has been an accelerator for collaboration.

So standards are needed where companies/people exchange information.

For the same reason, English became the standard language in most global companies. If you had to learn all the languages spoken in a worldwide organization, you would have no time left for business. Therefore, everyone making some effort to communicate in one standard language is the best way to operate.

And the same holds for a future data-driven environment: we cannot afford to convert every exchange to the native format of the receiver or source. Common neutral (or winning) standards will ultimately also emerge in the world of manufacturing data exchange and IoT.

Companies need to push

This is probably the blocking issue for standards. Developing and using standards requires an effort without immediate ROI. So why not use vendor formats/models and create custom point-to-point interfaces, as we only need one or two? The catch is that point-to-point interfaces do not scale, as the sketch below shows. Companies delivering products with a long lifecycle know that current data formats are not guaranteed for the future, so they push for standards (aerospace/defense, oil & gas, construction, infrastructure).
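A quick back-of-the-envelope calculation shows why "one or two interfaces" rarely stays that way: connecting n systems point-to-point needs an interface per pair, while a shared neutral format needs only one mapping per system.

```python
def point_to_point(n: int) -> int:
    """Interfaces needed when every pair of systems is connected directly."""
    return n * (n - 1) // 2

def via_standard(n: int) -> int:
    """Mappings needed when every system maps to/from one neutral standard."""
    return n

for n in (2, 5, 10):
    print(f"{n} systems: point-to-point {point_to_point(n)}, via a standard {via_standard(n)}")
# 2 systems:  1 vs  2  -> custom interfaces win, hence the temptation
# 5 systems: 10 vs  5
# 10 systems: 45 vs 10 -> the eco-system case, where a standard pays off
```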

(image: 3D PDF model)

Other companies look for short-term results, and standards slow them down. However, as soon as they need to exchange data with their eco-system (suppliers/customers), an existing standard makes their business more scalable. The lack of standards is one of the inhibitors for Model-Based Definition or the Model-Based Enterprise – see also my post on this topic: Model-Based – Connecting Engineering and Manufacturing.

When we imagine the Digital Enterprise of the future, information will be connected through data streams and models. In a digital enterprise, file conversions and proprietary formats impede the flow of data and create non-value-added work. For example, if we look at current "Digital Twin" concepts, the 3D representation of the twin is recreated instead of relying on neutral 3D-model continuity. This is because companies currently work in a coordinated manner. In perhaps 10 years from now, we will reach the maturity of a model-based enterprise, which can only exist based on standards. Whether those standards will be based on one dominating platform or on a merger of standards remains the question.

To discuss this question, and how to bridge from the past to the future, I am looking forward to meeting you at the upcoming PLM Roadmap & PDT 2019 EMEA conference on 13-14 November in Paris, France. Download the program here: PLM for Professionals – Product Lifecycle Innovation.

Conclusion

I believe PLM standards will emerge when building and optimizing a digital enterprise. We need to keep pushing and actively working for meaningful standards, as they are crucial to avoid a lock-in of your data, which potentially creates dead-ends and massive inefficiencies. The future is about connected eco-systems, and the leanest companies will survive. Standards do not need to be extraordinarily well-defined; they can start from a high-level alignment, as we saw with schema.org. Keep investing in and contributing to standards and the related discussion to create a shared learning path.

Thanks, Oleg Shilovitsky, for keeping the topic alive.

p.s. I did not yet have time to read and process your PLM Data Commoditization post.

 

Last week I read Verdi Ogewell's article PTC puts the Needle to the Digital Thread on Engineering.com, where Verdi raised the question (and concluded) who is the most visionary PLM CEO: Bernard Charlès from Dassault Systèmes or Jim Heppelmann from PTC. Unfortunately, it is again an advertorial, creating more haziness around modern PLM than adding value.

People need education and Engineering.com is/was a respected site for me, as they state in their Engineering.com/about statement:

Valuable Content for Busy Engineers. Engineering.com was founded on the simple mission to help engineers be better.

Unfortunately, this is no longer the case in the PLM domain. In June, we saw an article related to the failing PLM migration at Ericsson – see The PLM migration dilemma. Besides the fact that a big-bang migration had failed at Ericsson, the majority of the article was based on rumors and suggestions, putting the sponsor of this article in a better perspective.

Of course, Engineering.com needs sponsoring to host their content, and vendors are willing to spend marketing money on that. However, it would be fairer to mention in a footnote who sponsored the article – although you can guess it per article. More sincere editors or bloggers mention the sponsoring that might have influenced their opinion.

Now, why did the article PTC puts the Needle to the Digital Thread make me react?

Does a visionary CEO pay off?

It can be great to have a visionary CEO; however, does a visionary CEO make the company and its products/services more successful? For every successful visionary CEO, there are perhaps ten failing ones, as the stock market or their customers did not catch the vision.

There is no lack of PLM vision, as Peter Bilello mapped in 2014 when imagining the gaps between vision, available technology, and implementations at companies (leaders and followers). See below:

The tremendous gap between vision and implementations is the topic that concerns me the most. Modern PLM is about making data available across the enterprise or even across the company's ecosystem. It is about data democratization, which allows information to flow and be presented in context, without the need to recreate this information.

And here the marketing starts. Verdi writes:

PTC’s Internet of Things (IoT), Industrial Internet of Things (IIoT), digital twin and augmented reality (AR) investments, as well as the collaboration with Rockwell Automation in the factory automation arena, have definitely placed the company in a leading position in digital product realization, distribution and aftermarket services

With this marketing sentence, we are eager to learn why:

“With AR, for example, we can improve the quality control of the engines,” added Volvo Group’s Bertrand Felix, during an on-stage interview by Jim Heppelmann. Heppelmann then went down to a Volvo truck with the engine lifted out of its compartment. Using a tablet, he was able to show how the software identified the individual engine, the parts that were included, and he could also pick up the 3D models of each component and at the same time check that everything was included and in the right place.

Impressive – is it real?

The point is that this is the whole chain for digital product realization–development and manufacturing–that the Volvo Group has chosen to focus on. Sub-components have been set up that will build the chain, much is still in the pilot stage, and a lot remains to be done. But there is a plan, and the steps forward are imminent.

OK, so it is a pilot, and a lot remains to be done – but there is a plan. I am curious about the details of that plan, as a little later, we learn from the CAD story:

The Pro/ENGINEER "inheritor" Creo (engine, chassis) is mainly used for CAD and creation of digital twins, but as previously noted, Dassault Systèmes' CATIA is also still used. Just as in many other large industrial organizations, Autodesk's AutoCAD is also represented for simpler design solutions.

There goes the efficient digital dream. Design data coming from CATIA needs to be recreated in Creo for digital twin support. Data conversion or recreation is an expensive exercise and needs to be reliable and affordable as the value of the digital twin is gone once the data is incorrect.

In a digital enterprise, you do not want silos working with their own formats; you want a digital thread based on (neutral) models that share metadata/parameters from design to service.
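To make "sharing metadata/parameters" concrete, here is a minimal sketch of the idea: each tool's attribute names are mapped once to a neutral parameter set, so downstream consumers never depend on a tool-specific format. All tool and attribute names below are invented for illustration.

```python
# Map each tool's native attribute names once to a neutral vocabulary,
# so every downstream consumer (simulation, manufacturing, digital twin)
# reads the same parameters. Names are invented for illustration.
NEUTRAL_MAPPING = {
    "cad_tool_a": {"PartNo": "part_id", "Mass_kg": "mass_kg"},
    "cad_tool_b": {"ITEM_NUMBER": "part_id", "WEIGHT": "mass_kg"},
}

def to_neutral(source_tool: str, attributes: dict) -> dict:
    """Translate tool-specific attributes into the neutral parameter set."""
    mapping = NEUTRAL_MAPPING[source_tool]
    return {mapping[k]: v for k, v in attributes.items() if k in mapping}

# Two different native exports end up as the same neutral record:
print(to_neutral("cad_tool_a", {"PartNo": "HP-200", "Mass_kg": 12.5}))
print(to_neutral("cad_tool_b", {"ITEM_NUMBER": "HP-200", "WEIGHT": 12.5}))
```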

So I dropped the article and noticed Oleg had already commented faster than me in his post: Does PLM industry need a visionary pageant? Oleg also refers to CIMdata, who confirmed in 2018 that the concept of a product innovation platform (PIP), the "beyond PLM", is far from reality in companies. Most of the time, a PLM implementation is mainly a "beyond PDM" environment, not really delivering product data downstream.

I am wholly aligned with Oleg's technical conclusion:

What is my(Oleg’s) conclusion? PLM industry doesn’t need another round of visionary pageants. I’d call democratization, downstream usage and openness as biggest challenges and opportunities in PLM applications. Recent decades of platform development demonstrated the important role network platforms played in the development of global systems and services. PLM paradigm change from isolated vertical platforms to open network services required to bring PLM to the next level. Just my thoughts..

My comments to Oleg’s post:

(Jos) I fully agree we do not need more visionary PLM pageants. It is not about technology and therefore I have to disagree with your point about Aras. You call it democratization and openness of data a crucial point – and here I agree – be it that we probably disagree about how to reach this – through standards or through more technology. My main point to be made (this post ) is that we need visionary companies that implement and rethink their processes and are willing to invest resources in that effort. Most digital transformation projects related to PLM fail because the existing status quo/ middle management has no incentive to change. More thoughts to come

And this is the central part of my argumentation: it is not about technology (only).

Organizational structures are blocking digital transformation

Since 2014, I have been following several larger manufacturing companies on their path from pushing products to the market in a linear mode towards becoming a customer-driven, more agile, fast-responding enterprise. As this is done by taking advantage of digital technologies, we call this process digital transformation.

(image depicting GE’s digital thread)

What I have learned from these larger enterprises, with both Volvo Trucks and GE as examples, is that there is a vision for an end result. For GE, it is the virtual twin of their engines, monitored and improved through their Predix platform. For Volvo Trucks, we saw the vision in the quote from Verdi's article above.

However, these companies are failing to create a horizontal mindset inside their organizations. Data can only be used efficiently downstream if there is a willingness to collect the relevant data upstream and deliver this information in an accessible format, preferably data-driven.

The Middle Management Dilemma

And this leads to my reference to middle management. Middle managers learn about the C-level vision and are pushed to make this vision happen. However, they are measured and driven to meet these demands mainly within their own division or discipline. Yes, they might create goodwill for others, but when it comes to money spent or changing people's responsibilities, the status quo remains.

I wrote about this challenge in The Middle Management dilemma. Digital transformation is, of course, enabled by digital technologies, but that does not mean the technology creates the transformation. The crucial part lies in making companies more flexible in their operations while establishing better and new contacts with customers.

It is interesting to see that the future of business lies in agile, multidisciplinary teams that can deliver incremental innovations to the company's portfolio – somehow going back to the startup culture inside a larger enterprise. Having worked with several startups, I have seen the outcome-focus of the whole in the beginning: everyone contributes. Then, when the company grows, middle management is introduced, and most likely silos are created, as middle management gets its own profit & loss targets.

Digital Transformation myths debunked

This week Helmut Romer (thanks, Helmut) pointed me to the following HBR article: Digital does not need to be disruptive, where the following myths are debunked:

  1. Myth: Digital requires radical disruption of the value proposition.
    Reality: It usually means using digital tools to better serve the known customer need.
  2. Myth: Digital will replace physical.
    Reality: It is a "both/and."
  3. Myth: Digital involves buying start-ups.
    Reality: It involves protecting start-ups.
  4. Myth: Digital is about technology.
    Reality: It's about the customer.
  5. Myth: Digital requires overhauling legacy systems.
    Reality: It's more often about incremental bridging.

If you want to understand these five debunked myths, take your time to read the full article; it is very much aligned with my argumentation, albeit that my focus is more on the PLM domain.

Conclusions

Vendor sponsoring at Engineering.com has not improved the quality of their PLM articles and creates misleading messages, especially as the sponsor is not mentioned and the sponsor is selling technology. The gap between vision and reality is too big to compete on vision alone.

Transforming companies to take advantage of new technologies requires an end-to-end vision and a mindset based on achievable, incremental learning steps. The way your middle management is managed and measured needs to be reworked, as the focus shifts to horizontal flow and an understanding of customer/market-oriented processes.

 

This is the moment of the year when, at least in my region, most people take some time off to disconnect from their day-to-day business. For me, it is never a full disconnect, as PLM became my passion, and you should never switch off your passion.

On August 1st, 1999, I started my company TacIT, the same year the acronym PLM was born. I wanted to focus on knowledge management, hence the name TacIT. Being dragged into the SmarTeam world, with a unique position interfacing between R&D, implementers, and customers, I found a sweet spot that helped me see all aspects of PLM: the vendor's position, the implementer's view, and the customer's end-user and management views.

It has been, and still is, 20 years of learning, most of which I have shared through my blog over the past ten years. What I have learned is that the more you know, the more you understand that situations are not black and white. See one of my favorite blog pictures below.

So there is enough to think over during the holidays. Some of my upcoming points:

From coordinated to connected

Instead of using the over-hyped term Digital Transformation, I believe companies should learn to work in a connected mode, which has become the standard in our daily life. Connected means that information needs to be stored in databases somewhere, combined with openness and standards to make the data accessible. For more transactional environments, like CRM, MES, and ERP, the connected mode is not new.

In the domain of product development and sales, we still have a long learning path to go, as the majority of organizations rely on documents, be it Excel files, drawings (PDF), or reports. The fact that these are stored in electronic file formats does not mean they are accessible. Manpower is still needed to create these artifacts or to extract the required information from them.

The challenge for modern PLM is to establish new best practices around a model-based approach for systems engineering (MBSE), for engineering to manufacturing (MBD/MBE), and for operations (Digital Twins). Ultimately, all these best practices should be generic and connected. I wrote about these topics in the past; have a look at:

PLM vendors are showing pieces of the puzzle, but it is up to the implementers to assemble the puzzle, without knowing in detail what the end result will be. This is the same journey as Columbus's: he had a boat and a course towards the unknown. He discovered a country with a small population; nowadays, it is a country full of immigrants who call themselves natives.

However, the result was an impressive transformation.

Reading about transformation

Last year I read several books to get more insight into what motivates us and how we can motivate people to change. In one way, it is disappointing to learn that we civilized human beings most of the time do not make rational decisions but act based on our prehistoric brain.

 

Thinking, Fast and Slow by Daniel Kahneman was one of the first books in that direction, a must-read to understand our personal thinking and decision processes.


I read Idiot Brain: What Your Head Is Really Up To by Dean Burnett, where he explains how our brain appears to be sabotaging our life and what on earth it is really up to. Interesting to read, but it could have been a little more comprehensive.

 

I got more excited by Dan Ariely's book Predictably Irrational: The Hidden Forces That Shape Our Decisions, as it is structured around topics where we behave completely irrationally, yet predictably. And this predictability is used by people (sales/politicians/management) to drive your actions. Useful to realize when you recognize the situation.

 

These three books also illustrate the flaws of our modern time: we communicate fast (preferably through tweets) and we decide fast based on our gut feelings, so you realize what kind of world we are heading towards. Going through a transformation should be considered a slow learning process. Like reading a book, it takes time to digest.

Once you are aiming at a business transformation for your company or supporting a company in its transformation, the following books were insightful:

Leading Digital: Turning Technology into Business Transformation by George Westerman, Didier Bonnet, and Andrew McAfee is maybe not the most inspiring book; however, as it stays close to what we experience in our day-to-day life, it is for sure a book to read for a foundational understanding of business transformation.

 

The book I liked most recently was Leading Transformation: How to Take Charge of Your Company's Future by Nathan Furr, Kyle Nel, and Thomas Zoega Ramsoy, as it gives examples of transformations that address parts of the irrational brain to build a transformation story. I believe in storytelling instead of business cases for transformation. I wrote about it in my blog post PLM Measurable or a myth, referring to Yuval Harari's book Sapiens.

Note: I am starting my holidays now with a small basket of e-books. If you have any recommendations for books that I must read, please share them in the comments of this blog.

Discussing transformation

After the summer holidays, I plan to have fruitful discussions around topics close to PLM. I am working on a post, and starting a conversation, related to PLM, PIM, and Master Data Management, as the borders between these domains are perhaps getting vaguer in a digital enterprise.

Further, I am looking forward to a discussion around the value of PLM in assisting companies to develop sustainable products. A sustainable and probably circular economy is required to keep this earth a place to live for everybody. The whole discussion around climate change, however, is worrying, as we should be thinking – not fast and slow – but balanced.

A circular economy has been a topic several times during the joint CIMdata PLM Roadmap and PDT conferences, which brings me to the final point.

On 13 and 14 November this year, I will again participate in the upcoming PLM Roadmap and PDT conference, this time in La Défense, Paris, France. I will share my experiences from working with companies trying to understand and implement pieces of a digital transformation related to PLM.

There will be inspiring presentations from other speakers, all working on aspects of moving towards a connected enterprise. It is not a marketing event; it is organized by professionals, serving professionals. Therefore, if you are passionate about the new aspects of PLM, no matter how you label them, come and join, discuss, and most of all, learn.

Conclusion

 

Modern life is about continuous learning – make it a habit. Even a holiday is again a way to learn: to disconnect.

How disconnected I was you will see after the holidays.


After my previous post about the PLM migration dilemma, I had several discussions with peers in the field about why this PLM bad news creates so much debate. For every PLM vendor, I could publish a failure story if I wanted to. However, the reality is that the majority of PLM implementations do not fail.

Yes, they can cause discomfort or friction in an organization, as implementing the tools often forces people to work differently. Working differently is often not anticipated by (middle) management and therefore causes a mismatch in the people, process & tools paradigm.

So we love bad news in real life. We talk about terrorism while, meanwhile, far more people die from guns, cars, and even the biggest killer: mosquitos. Fear stories sell better than success stories, and in particular in the world of PLM vendors, every failure of the competition is enlarged. However, there are more actors involved in a PLM implementation, and if PLM systems were that bad, they would not exist anymore and would have been replaced by ………?

Who to blame – the vendor?

Of course, it is easiest to blame the vendor, as their marketing promises to solve all problems. However, when you look at the traditional PLM vendor community from a distance, you see they are in a rat race to deliver the latest and greatest technology ahead of their competition, often driven by a few significant customers.

Their customers buy the vision and expect it to be ready and industrialized, which is not the case – look at the digital twin hype or AI (Artificial Intelligence). Released PLM software does not have the same maturity as office applications. Office applications do not innovate as much, have thousands of users during a beta cycle, and have no dependency on processes.

Most PLM vendors are happy when a few customers jump on their latest release, and implementing the most recent version is not yet a push of a button. This might change in the long term if PLM vendors can deliver cloud-based solutions.

PLM implementations within the same industry might look the same but often vary a lot due to existing practices, which will not change due to the tool – so there is a need for customization or configuration.

PLM systems with strong business rules inside their core might develop more and more towards configuration, whereas PLM toolkit-like systems might focus on ease of customization. Both approaches have their pros and cons (in another blog post perhaps).

Another reason to blame the vendor is a lack of openness. You hear it in many discussions: if vendor X were open, they would not lock in the data – a typical marketing slogan. If PLM vendors were completely open, to which standards should they adhere? Every PLM vendor has its preferred collection of tools; if you stay within their portfolio, you have a minimum of compatibility or interface issues.

This logic started already with SAP in the previous century. For PLM vendors, there is no business model for openness. For example, the SmarTeam APIs for connecting and extracting data are available free of charge, leading to no revenue for the vendor and significant revenue for service providers: without any license costs, they can build any type of interface/solution. In the end, when a PLM vendor has no sustainable revenue, the vendor will disappear, as we saw between 2000 and 2010, when several stand-alone PLM systems disappeared.

So yes, we can blame PLM vendors for raising impossible expectations; coming to realistic expectations related to capabilities and openness is probably the biggest challenge.

Who to blame – the implementer?

The second partner in a PLM implementation is the implementation partner, often a specialized company related to the PLM vendor. There are two types of implementation partners – the strategic partners and the system integrators.

Let’s see where we can blame them.

Strategic partners, the consultancy firms, often have a good relationship with management; they help the company shape its future strategy, including PLM. You can blame this type of company for their lack of connection to the actual business: what is the impact on the organization of implementing a specific strategy, and what does this mean for current or future PLM?

Strategic partners should be the ones to support business change management, as they are likely to have experience from other companies. Unfortunately, this type of company does not have significant skills in PLM, as the PLM domain is just a small subset of the whole potential business strategy.

You can blame them for being useful in building a vision/strategy but failing to create a consistent connection to the field.

Implementation partners, the system integrators, are most of the time specialized in one or two PLM vendors' software suites; the smaller the implementation partner, the less broad their implementation skills. These implementation partners sometimes have built their own PLM best practices for a specific vendor and use this as a sales argument. Others just blindly follow what the vendor is promoting or what the customer is asking for.

They will do anything you request, as long as they get paid for it. The larger ones have loads of resources for offshore deliveries – the challenge you see here is that it might look cheap; however, it becomes expensive if there is no apparent convergence of the deliverables.

As I mentioned before, they will never say No to a customer and will claim to fill all the "gaps" there are in the PLM environment.

You can blame implementation partners for focusing on making money from services. And they are right: to remain in business, a company needs to be profitable. It is like lawyers: they will invoice you based on their efforts. And the less you take on your own plate, the more they will do for you.

The challenge for both consultancy partners and system integrators is to find a balance between experienced people, who really make it happen, and educating juniors to become experts too. Often the customer pays for the education of these juniors.

Who to blame – your company?

If your company is implementing PLM, then the perception is probably that you made every effort to make it successful. You followed the advice of the strategic consultants, you selected the best PLM vendor and system integrator, and you created a budget – so what could go wrong?

This all depends on your company’s ambition and scope for PLM.

Implementing the as-is processes

If your PLM implementation is just there to automate existing practices and store data in a central location, this might work out, and this is when PLM implementations are most often successful: you know what to expect, and your system integrator knows what to expect.

This type of project can run close to budget, and some system integrators might be tempted to offer a fixed price. I am not a fan of fixed-price projects, as you never know exactly what needs to be done. The system integrator might raise the target price by 20-40% to cover their risk, or you as a company might select the cheapest bid – another guarantee for failure. A PLM implementation is not a one-time project; it is an ongoing journey. Therefore, your choice needs to be sustainable.

My experience with this type of implementation is that it is easy to blame the companies here too. Often the implementation becomes an IT project, as business people are too busy running their day-to-day jobs and therefore only incidentally support the PLM project. The result is that at some point, the users confronted with the system do not feel connected to it – it was better in the past. In particular, configuration management and change processes can become watertight, leaving no freedom for the users. Then the blaming starts: first the software, then the implementer.

But what if you have an ambitious PLM project as part of a business transformation?

In that case, the PLM platform is just one of the elements to consider. It will be the enabler for new ways of working, supporting customer-centric processes, multi-discipline collaboration, and more – all related to a digital transformation of the enterprise. Therefore, I say PLM platform instead of PLM system: future enterprises run on data through connected platforms. The better you can connect your disciplines, the more efficiently and the faster your company will operate. This, as opposed to the coordinated approach, which I have addressed several times in the past.

A business transformation requires an end-to-end understanding of what to change, from the management vision down to the execution in the field. And as there is no out-of-the-box template for business transformation, it is crucial that a company experiments, evaluates, and, when successful, scales up new habits.

Therefore, it is hard to define upfront all the effort for the PLM platform and the implementation resources. What is sure is that your company is responsible for that, not an external party. So if it fails, your company is to blame.

Is everyone to blame?

You might have the feeling that everyone is to blame when a PLM implementation fails, and I believe that is indeed the case. If you know in advance where all players have their strengths and weaknesses, a PLM implementation should not fail but be balanced with the right resources. Depending on the scope of your PLM implementation – is it a consolidation or a transformation? – you should take care that all stakeholders participate in the anti-blame game.

The anti-blame game is an exercise where you make sure that the other parties in the game cannot blame you.

  • If you are a vendor – do not over commit
  • If you are a consultant or system integrator – learn to say NO
  • If you are the customer – make sure enough resources are assigned – you own the project. It is your project/transformation.

This has been my job several times in the past, when I was asked to mediate in a stalling PLM implementation. Most of the time, it had already become a blame game, missing the target of finding a solution that makes sense. Here, coaching from experienced PLM consultants makes sense.

 

Conclusion

Most of the time, PLM implementations are successful if the scope is well understood and not transformative. You will not hear a lot about these projects in the news as we like bad news.

To avoid bad news, challenging PLM implementations should make sure that all parties involved challenge the others to remain realistic and to invest enough. The role of an experienced external coach can help here.
