
This time it is again about learning. Last week, I read John Stark’s book: Products2019: A project to map and blueprint the flow and management of products across the product lifecycle: Ideation; Definition; Realisation; Support of Use; Retirement and Recycling. John, a well-known PLM consultant and writer of academic books related to PLM, wrote this book during his lockdown due to the COVID-19 virus. The challenge with PLM (books) is that, from the outside, they are in a way boring. Remember my post: How come PLM and CM are boring? (reprise)?

This time John wrapped the “boring” part into a story related to Jane from Somerset, who, as part of her MBA studies, is performing a research project for Josef Mayer Maschinenfabrik. The project is to describe for the newly appointed CEO what happens with the company’s products all along the lifecycle.

A story with a cliffhanger:

What happened to Newt from Cleveland?

 

Seven years in seven weeks

Poor Jane: in seven weeks, she is interviewing people at three sites – two in Germany and one in France – and she is doing over a hundred interviews on her own. I realized that, thanks to my relation with SmarTeam at that time, it probably took me seven years to get in front of all these stakeholders in a company.

I had much more fun most of the time, as you can see below. My engagements were teamwork, with some additional social relief after work. Jane even works at the weekends.

However, there are also many similarities, such as her daily rhythm during working days. Gasthaus Adler reflects many of the typical guesthouses that I have visited. People staying there with a laptop were a sign of the new world. Like Jane, I enjoyed the weissbier and noticed that sometimes overhearing other guests is not good for their company’s reputation. A lot of personal and human experiences are wrapped into the storyline.

Spoiler: Tarzan meets Jane!

Cultural differences

The book also illustrates the cultural difference between countries (Germany/France/US) nicely and even between regions (North & South). Just check the breakfast at your location to see it.

Although most of the people interviewed by Jane contributed to her research, she also meets people who, either for personal or political reasons, do not cooperate.

Having worked worldwide, including in Asian countries, I learned that understanding people and culture is crucial for successful PLM engagements.

John did an excellent job of merging cultural and human behavior in the book. I am sure we share many similar experiences, as both this book and my blog posts do not mention particular tools. It is about the people and the processes.

Topics to learn

You will learn that 3D CAD is not the most important topic, as perhaps many traditional vendor-related PDM consultants might think.

Portfolio Management is a topic well addressed. In my opinion, it should be addressed in every PLM roadmap, as this is where the business goals get connected to the products.

New Product Introduction, a stage-gate governance process, and the importance of Modularity are also topics that pop up in several cases.

The need for innovation, the Industry 4.0 and AI (Artificial Intelligence) buzz, the world of software development and the “War for Talent” can all be found in the book.

And I was happy that even product Master Data Management was addressed. In my opinion, not enough companies realize that a data-driven future requires data quality and data governance. I wrote about this topic last year: PLM and PIM – the complementary value in a digital enterprise.

There are fantastic technology terms, like APIs, microservices, Low Code platforms. They all rely on reliable and sharable data.

What’s next

Products2019 is written as the starting point for a sequel. In this book, you quickly learn all the aspects of a linear product lifecycle, as the image below shows.

I see an opportunity for Products2020 (or later). What is the roadmap for a company in the future?

How do companies become more data-driven and more agile in their go-to-market strategy, as software will increasingly define the product’s capabilities?

How do we move from a linear, siloed approach towards a horizontal flow of information, market-driven and agile?

Perhaps we will learn what happened with Newt from Cleveland?

Meanwhile, we have to keep on learning to build the future.

My learning continues this week with PI DX USA 2020. Usually, this would be a conference I would not attend, as traveling to the USA would have too much impact on my budget and time. Now I can hopefully learn and get inspired – you can do the same! Feel free to apply for a free registration if you are a qualified end-user – check here.

And there is more to learn, already mentioned in my previous post:

Conclusion

John Stark wrote a great book to understand what is currently in most people’s heads in mid-size manufacturing companies. If you are relatively new to PLM, or if you have only been active in PDM, read it – it is affordable! With my series Learning from the past, I also shared twenty years of experience, more as a quick walkthrough, and a more specialized view on some of the aspects of PLM. Keep on learning!

After the series about “Learning from the past,” it is time to start looking towards the future.  I learned from several discussions that I am probably working most of the time with advanced companies. I believe looking into the future would motivate the companies that lag behind even more.

If you look into the future for your company, you need new or better business outcomes. That should be the driver for your company. A company does not need PLM or a Digital Twin. A company might want to reduce its time to market or improve collaboration between all stakeholders. These objectives can be realized by different ways of working and an IT-infrastructure that allows these processes to become digital and connected.

That is the “game”. Coming back to the future of PLM.  We do not need a discussion about definitions; I leave this to the academics and vendors. We will see the same applies to the concept of a Digital Twin.

My statement: The digital twin is not new. Everybody can have their own digital twin, as long as they interpret the definition in their own way. Does this sound like the PLM definition?

The definition

I like to follow the Gartner definition:

A digital twin is a digital representation of a real-world entity or system. The implementation of a digital twin is an encapsulated software object or model that mirrors a unique physical object, process, organization, person, or other abstraction. Data from multiple digital twins can be aggregated for a composite view across a number of real-world entities, such as a power plant or a city, and their related processes.

As you see, not a narrow definition. Now we will look at the different types of interpretations.

Single-purpose siloed Digital Twins

  1. Simple – data only

One of the most straightforward applications of a digital twin is, for example, my Garmin Connect environment. When cycling, my device registers performance parameters (speed, cadence, power, heartbeat, location). After every trip, I can analyze my performance. I can see changes in my overall performance and compare my performance with others in my category (weight, age, sex).

Based on that, I can decide if I want to improve my performance. My personal business goal is to maintain and improve my overall performance, knowing I cannot stop aging by upgrading my body.
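To make the “data only” idea concrete, below is a minimal sketch in Python. All field names and values are hypothetical; the point is that such a twin is nothing more than the recorded measurements of the physical “me” plus some analysis on top of them.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class RideRecord:
    """One recorded trip of the physical rider - speed in km/h, power in watts."""
    date: str
    avg_speed: float
    avg_power: float
    avg_heart_rate: int

class SimpleDataTwin:
    """A data-only digital twin: just the collected measurements plus analysis."""
    def __init__(self) -> None:
        self.rides: list[RideRecord] = []

    def register(self, ride: RideRecord) -> None:
        self.rides.append(ride)

    def performance_trend(self, window: int = 5) -> float:
        """Compare average power of the most recent rides with the overall average."""
        recent = mean(r.avg_power for r in self.rides[-window:])
        overall = mean(r.avg_power for r in self.rides)
        return recent - overall  # positive = improving

twin = SimpleDataTwin()
twin.register(RideRecord("2020-10-01", 28.5, 185, 142))
twin.register(RideRecord("2020-10-08", 29.1, 192, 140))
print(f"Trend vs. overall average: {twin.performance_trend():+.1f} W")
```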

On November 4th, 2020, I am participating in the (almost virtual) Digital Twin conference organized by Bits&Chips in the Netherlands. In the context of human performance, I look forward to Natal van Riel’s presentation: Towards the metabolic digital twin – for sure, this direction is not simple. Natal is a full professor at the Technical University in Eindhoven, the “smart city” in the Netherlands.

  2. Medium – data and operating models

Many connected devices in the world use the same principle. An airplane engine, an industrial robot, a wind turbine, a medical device, and a train carriage all track their performance based on this connection between the physical and the virtual, using some sort of digital connectivity.

The business case here is also about monitoring performance, predicting maintenance, and upgrading the product when needed.

This is the domain of Asset Lifecycle Management, a practice that has existed for decades. Based on financial and performance models, the optimal balance between maintenance and overhaul has to be found. Repairs are disruptive and can be extremely costly. A manufacturing site that cannot produce can cost millions per day. Connecting data between the physical and the virtual model allows us to have real-time insights and be proactive. It becomes a digital twin.
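As a hedged sketch of what “data plus an operating model” can mean in practice: telemetry from the physical asset is evaluated against a simple degradation model to decide whether maintenance should be planned before a costly failure occurs. The thresholds and names below are illustrative only; real Asset Lifecycle Management models are far richer.

```python
from dataclasses import dataclass

@dataclass
class TurbineTelemetry:
    operating_hours: float
    bearing_temperature: float  # degrees Celsius
    vibration_rms: float        # mm/s

@dataclass
class OperatingModel:
    """Simplified model of acceptable behavior - limits are illustrative only."""
    max_temperature: float = 85.0
    max_vibration: float = 7.1
    service_interval_hours: float = 8000.0

def maintenance_advice(data: TurbineTelemetry, model: OperatingModel) -> str:
    """Combine real-time data with the operating model to be proactive."""
    if data.bearing_temperature > model.max_temperature or \
       data.vibration_rms > model.max_vibration:
        return "Schedule inspection now - behavior outside the model limits"
    if data.operating_hours > model.service_interval_hours:
        return "Plan regular maintenance at the next production stop"
    return "No action - asset operates within its model"

print(maintenance_advice(TurbineTelemetry(8500, 78.0, 5.2), OperatingModel()))
```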

  3. Advanced – data and connected 3D model

The digital twin we see the most in marketing videos is a virtual twin, using a 3D-representation for understanding and navigation. The 3D-representation provides a Virtual Reality (VR) environment with connected data. When pointing at the virtual components, information might appear, or some animation takes place.

Building such a virtual representation is a significant effort; therefore, there needs to be a serious business case.

The simplest business case is to use the virtual twin for training purposes. A flight simulator provides a virtual environment and behavior as if you are flying the physical airplane – the behavior model behind the simulator should match the real behavior as closely as possible. However, as it is a model, it will never be 100 % reality and requires updates when new findings or product changes appear.

A virtual model of a platform or plant can be used for training on Standard Operating Procedures (SOPs). In the physical world, there is no place or time to conduct such training. Here the complexity might be lower. There is a 3D Model; however, serious updates can only be expected after a major maintenance or overhaul activity.

These practices are not new either and are used in places where the physical training cannot be done.

More challenging is the Augmented Reality (AR) use case. Here the virtual model, most of the time a lightweight 3D Model, connects to real-time data coming from other sources. For example, AR can be used when an engineer has to service a machine. The AR-environment might project actual data from the machine and indicate service points and service procedures.

The positive side of the business case is clear for such an opportunity, ensuring service engineers always work with the right information in a real-time context. The main obstacle for implementing AR, in reality, is the access to data, the presentation of the data and keeping the data in the AR-environment matching the reality.

And although there are 3D Models in use, they are, to my knowledge, always created in silos, not yet connected to their design sources. Have a look at the Digital Twin conference from Bits&Chips, as mentioned before.

Several of the cases mentioned above will be discussed there. The conference’s target is to share real cases, concluded by Q&A sessions – crucial for a virtual event.

Connected Virtual Twins along the product lifecycle

So far, we have been discussing the virtual twin concept, where we connect a product/system/person in the physical world to a virtual model. Now let us zoom in on the virtual twins relevant for the early parts of the product lifecycle, the manufacturing twin, and the development twin. This image from Siemens illustrates the concept:

On their slides, they imagine a completely integrated framework, which is the future vision. Let us first zoom in on the individual connected twins.

The digital production twin

This is the area of virtual manufacturing and creating a virtual model of the manufacturing plant. Virtual manufacturing planning is not a new topic. DELMIA (Dassault Systèmes) and Tecnomatix (Siemens) have been offering virtual manufacturing planning solutions for a long time.

At that time, the business case was based on the fact that defining a manufacturing plant and process virtually allows you to optimize the plant before investing in physical assets.

Saving money as there is no costly prototype phase to optimize production. In a virtual world, you can perform many trade-off studies without extra costs. That was the past (and for many companies still the current situation).

With the need to be more flexible in manufacturing to address individual customer orders without increasing the overhead of delivering these customer-specific solutions, there is a need for a configurable plant that can produce these individual products (batch size 1).

This is where the virtual plant model comes into the picture again. Instead of having a virtual model to define the ultimate physical plant, now the virtual model remains an active model to propose and configure the production process for each of these individual products in the physical plant.

This is partly what Industry 4.0 is about. Using a model-based approach to configure the plant and its assets in a connected manner. The digital production twin drives the execution of the physical plant. The factory has to change from a static factory to a dynamic “smart” factory.
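As a sketch of this idea (all station names and rules below are hypothetical, not any vendor’s model): the virtual plant model derives a routing for each individual customer order and could then push that configuration to the execution level, instead of being a one-time planning study.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerOrder:
    order_id: str
    options: set = field(default_factory=set)  # e.g. {"laser_engraving", "coating"}

# Hypothetical virtual plant model: which option requires which work station
PLANT_MODEL = {
    "base_assembly": None,       # always required
    "laser_station": "laser_engraving",
    "coating_line": "coating",
    "final_inspection": None,    # always required
}

def configure_routing(order: CustomerOrder) -> list[str]:
    """The production twin proposes a routing for this specific order (batch size 1)."""
    routing = []
    for station, required_option in PLANT_MODEL.items():
        if required_option is None or required_option in order.options:
            routing.append(station)
    return routing

order = CustomerOrder("ORD-0042", {"coating"})
print(configure_routing(order))
# ['base_assembly', 'coating_line', 'final_inspection']
```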

In the domain of Industry 4.0, companies are reporting progress. However, in my experience, the main challenge is still that the product source data is not yet built in a model-based, configurable manner, therefore requiring manual rework. This is the area of Model-Based Definition, and I have been writing about this aspect several times. Latest post: Model-Based: Connecting Engineering and Manufacturing.

The business case for this type of digital twin, of course, is to be able to deliver customer-specific products with extremely competitive speed and reduced cost compared to standard products. It could be your company’s survival strategy. Although it is hard to predict the future, as we see from COVID-19, it is still crucial to anticipate it instead of waiting.

The digital development twin

Before a product gets manufactured, there is a product development process. In the past, this was purely mechanical with some electronic components. Nowadays, many companies are actually manufacturing systems, as the software controlling the product plays a significant role. In this context, model-based systems engineering is the upcoming approach to defining and testing a system virtually before committing to the physical world.

Model-Based Systems Engineering can define a single complex product and perform all kinds of analysis on the system even before there is a physical system in place.  I will explain more about model-based systems engineering in future posts. In this context, I want to stress that having a model-based system engineering environment combined with modularity (do not confuse it with model-based) is a solid foundation for dealing with unique custom products. Solutions can be configured and validated against their requirements already during the engineering phase.
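A minimal sketch of that last point, assuming hypothetical module and requirement names: a customer-specific solution is configured from modules and checked against its requirements while still in the virtual, engineering phase.

```python
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    max_load_kg: float
    power_kw: float

@dataclass
class Requirement:
    text: str
    check: callable  # function evaluating the configured system

def validate(modules: list[Module], requirements: list[Requirement]) -> list[str]:
    """Validate a configured (virtual) system against its requirements."""
    system = {
        "load": sum(m.max_load_kg for m in modules),
        "power": sum(m.power_kw for m in modules),
    }
    return [req.text for req in requirements if not req.check(system)]

configuration = [Module("lift_unit_M2", 500, 4.0), Module("drive_unit_D1", 0, 7.5)]
requirements = [
    Requirement("System shall lift at least 450 kg", lambda s: s["load"] >= 450),
    Requirement("Total power shall stay below 10 kW", lambda s: s["power"] < 10),
]
print(validate(configuration, requirements) or "All requirements satisfied")
```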

The business case for the digital development twin is easy to make. Shorter time to market, improved and validated quality, and reduced engineering hours and costs compared to traditional ways of working. To achieve these results,  for sure, you need to change your ways of working and the tools you are using. So it won’t be that easy!

For those interested in Industry 4.0 and the Model-Based Systems Engineering approach, join me at the upcoming PLM Road Map 2020 and PDT 2020 conference on 17-18-19 November. As you can see from the agenda, there is a lot of attention to the Digital Twin and Model-Based approaches.

Three digital half-days with hopefully a lot to learn while keeping our feet on the ground. In particular, I am looking forward to Marc Halpern’s keynote speech: Digital Thread: Be Careful What you Wish For, It Just Might Come True.

Conclusion

It has been very noisy on the internet related to product features and technologies, probably due to COVID-19 and the resulting disrupted interactions between all of us – vendors, implementers and companies trying to adjust their future. The Digital Twin concept is an excellent framing for a concept that everyone can relate to. Choose your business case and then look for the best matching twin.

I believe we are almost at the end of learning from the past. We have seen how, from an initial serial CAD-driven approach with PDM, we evolved to PLM-managed structures, the EBOM and the MBOM. Or to illustrate this statement, look at the image below, where I use a Tech-Clarity image from Jim Brown.

The image on the left shows the typical PDM-approach: PDM feeding ERP in a linear process. The image on the right, which I believe is from 2004, describes perfectly the complementary roles of PLM and ERP and shows the best practice before digital transformation: PLM supporting product innovation in an iterative approach, pushing released information to ERP for execution.

As I think in images, I like the concept of a circle for PLM and an arrow for ERP. I am always using those two images in discussions with my customers when we want to understand if a particular activity should be in the PLM or ERP-domain.

Ten years ago, the PLM-domain was conceptually further extended by introducing support for products in operations and service. Similar to the EBOM (engineering) and the MBOM (manufacturing), the SBOM (service) was introduced to support product information for products in operation. In theory, a fully connected circle.

Asset Lifecycle Management

At the same time, I was promoting PLM-practices for owners/operators to enhance Asset Lifecycle Management. My first post from June 2010, called PLM for Asset Lifecycle Management and Asset Development, introduces this approach.

Conceptually, the SBOM and Asset Lifecycle Management have a lot in common. There is a designed product, in this case an asset (plant, machine), running in the field, and we need to make sure operators have the latest information about the asset. And in case of asset changes, which can be a maintenance operation, a repair or a complete overhaul, we need to be sure the changes are based on the correct information from the as-built environment. This requires full configuration management.

Asset changes can be based on extensive projects that need to be treated like new product development projects, with a staged approach that can take weeks, months, sometimes years. These activities are typical activities performed in PLM-systems, not in MRO-systems that are designed to manage the actual operation. Again here we see the complementary roles of PLM (iterative) and MRO (execution).

Since 2008, I have worked a lot in this environment, mainly in the nuclear and process industry. If you want to learn more about this aspect of PLM, I recommend looking at the PLMpartner website, where Bjørn Fidjeland, in cooperation with SharePLM, published a course on Plant Information Management. We worked together in several projects, and Bjørn has made a great effort to describe the logical model to be used instead of a function-feature story.

Ten years ago, we were not calling this concept the “Digital Twin,” as the aim was to provide end-to-end support of asset information from engineering, procurement, and construction towards operation in a coordinated manner. The breaking point in the relation between the EPCs and Owner/Operators is the data-handover – how much of your IP can/do you expose and what is needed. Nowadays, we would call striving for end-to-end data continuity the Digital Thread.

Hot from the press in this context, CIMdata just published a commentary Managing the Digital Thread in Global Value Chains describing Eurostep’s ShareAspace capabilities and experiences in managing an end-to-end information flow (Digital Thread) in a heterogeneous environment based on exchange standards like ISO 10303-239 PLCS.  Their solution is based on what I consider a more modern approach for managing digital continuity compared to the traditional approach I described before. Compare the two images in this paragraph. The first image represents the old/current way with a disconnected handover; the second represents the ShareAspace connected approach based on a real digital thread.

The Service BOM

As discussed with Asset Lifecycle Management, there is a disconnect between the engineering disciplines and operations in the field, looking from the point of view of an Asset owner/operator.

Now when we look from the perspective of a manufacturing company that produces assets to be serviced, we can identify a different dataflow and a new structure, the Service BOM (SBOM).

The SBOM provides information on how a product needs to be serviced. What are the parts that require service, and what are the service kits that are possible for that product? For that reason, service engineering should be done in parallel to product engineering. When designing a product, the engineer needs to identify which are the wearing parts (which always require service in time) and which parts might be serviceable.

There are different ways to look at the SBOM. Conceptually, the SBOM could be created in close relation with the EBOM. At the moment you define your product, you should also specify how the product will be serviced. See the image below.

From this example, it is clear that part standardization and modularization have a considerable benefit for service downstream. What if you have only one serviceable part that applies to many products? The number of parts to keep in stock will be strongly reduced, instead of having many similar parts that each fit only a single product.

Depending on the type of product, the SBOM can be generic, serving many products in the field. In that case, the company has to deal with catalogs, to be defined in PLM. Or the SBOM can be aligned with the As-Built of a capital product in the field. In that case, the concepts of Asset Lifecycle Management apply. Click on the image to see a clear picture.

The SBOM on its own,  in such an environment, will have links to specific documents, service instructions, operating manuals.

If your PLM-system allows it, extending the EBOM and MBOM with an SBOM is not a complex effort. What is crucial to understand is that the SBOM has its own lifecycle, which can even last longer than the active product sold. So sometimes, manufacturing specifications related to service parts need to be maintained too, creating a link between the SBOM and potential MBOM(s).
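To illustrate the data model discussed here (a sketch only, with hypothetical identifiers and attributes): an SBOM item is a separate object with its own lifecycle state, related to the EBOM part it services and, where needed, to the MBOM specification that must be kept alive for producing the service part.

```python
from dataclasses import dataclass, field

@dataclass
class EBOMPart:
    part_id: str
    description: str

@dataclass
class SBOMItem:
    """Service view of a part: its own lifecycle, linked to EBOM/MBOM objects."""
    service_part_id: str
    serviced_ebom_part: EBOMPart
    wearing_part: bool                   # always requires service over time
    lifecycle_state: str = "In Service"  # independent of the EBOM part's state
    related_mbom_ids: list[str] = field(default_factory=list)

pump_seal = EBOMPart("P-10231", "Pump shaft seal")
service_kit = SBOMItem(
    service_part_id="SK-0456",
    serviced_ebom_part=pump_seal,
    wearing_part=True,
    related_mbom_ids=["MBOM-0789"],  # manufacturing spec maintained for service
)
# The product may become obsolete while the service kit is still "In Service"
print(service_kit.lifecycle_state, "for", service_kit.serviced_ebom_part.description)
```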

ECM = Enterprise Change Management

When I discussed ECM in my previous post in the context of Engineering Change Management, I got the feedback that nowadays, everyone talks about Enterprise Change Management. Engineering Change Management is old school.

In the past, and even in a 2014 benchmark, a customer had two change management systems: one in PLM and one in ERP, and companies were looking into connecting these two processes. Like the BOM-interaction between PLM and ERP, this is, technology-wise, never a real problem.

The real problem in such situations was to come to a logical flow of events. Many times the company insisted that every change should start from the ERP-system, because “we like to standardize”. This meant that even an engineering change had to be registered first in the ERP-system.

Luckily the reach of PLM has grown. PLM is no longer the engineering tool (IT-system thinking). PLM has become the information backbone for product information all along the product lifecycle. Having the MBOM and SBOM available through a PLM-infrastructure allows organizations to streamline their processes.

Aras – digital thread through connected structures

And in this modern environment, enterprise change management might take place mostly in a PLM-infrastructure. A PLM-infrastructure providing a digital thread, as the Aras picture above illustrates, delivers the full traceability to support configuration management.

However, we still have to remember that configuration management and engineering change management, first of all, are based on methodology and processes. Next, the combination of tools to be used will vary.

I like to conclude this topic with a quote from Lee Perrin’s comment on my previous blog post

I would add that aerospace companies implemented CM, to avoid fatal consequences to their companies, but also to their flying customers.

PLM provides the framework within which to carry out Configuration Management. CM can indeed be carried out without PLM, as was done in the old paper-based days. As you have stated, PLM makes the whole CM process much more efficient. I think more transparent too.

Conclusion

After nine posts around the theme Learning from the past to understand the future, I walked through the history of CAD, PDM and PLM in a fast mode, pointing to practices and friction points. In the blogging space, it is hard to find this information, as most blog posts come from software vendors explaining why their tool is needed. Hopefully, this series has helped many of you to understand a broader context. Now I want to focus on the future again in my upcoming blog posts.

Still, feel free to contact me and discuss methodology topics.

Picture by Christi Wijnen – a good friend and photographer in the Netherlands

In the previous seven posts of learning from the past to understand the future, we have seen the evolution from manual 2D drawing handling. Next came the emergence of ERP and CAD, followed by data management systems (PDM/PLM) and methodology (EBOM/MBOM), to create an infrastructure for product data from concept towards manufacturing.

Before discussing the extension to the SBOM-concept, I first want to discuss Engineering Change Management and Configuration Management.

ECM and CM – are they the same?

Often when you talk with people in my PLM bubble, the terms Change Management and Configuration Management are mixed or not well understood.

When talking about Change Management, we should clearly distinguish between OCM (Organizational Change Management) and ECM (Engineering Change Management). In this post, I will focus on Engineering Change Management (ECM).

When talking about Configuration Management, here too we find two interpretations.

The first one is a methodology describing technically how, in your PLM/CAD-environment, you can build connected data structures in the most efficient way, representing all product variations. This technology varies per PLM/CAD-vendor, and therefore I will not discuss it here. The other interpretation of Configuration Management is described on Wiki as follows:

Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.

This is also the area I will focus on this time.

And as if great minds think alike and are synchronized, I was happy to see Martijn Dullaart’s recent blog post, referring to a poll and follow-up article on CM.

Here Martijn touches precisely the topic I address in this post. I recommend you read his post: Configuration Management done right = Product-Centric, and then follow with the rest of this article.

Engineering Change Management

Initially, engineering change management was a departmental activity performed by engineering to manage the changes in a product’s definition. Other stakeholders are often consulted when preparing a change, which can be minor (affecting, for example, only engineering) or major (affecting engineering and manufacturing).

The way engineering change management has been implemented varies a lot. Over time, companies all around the world have defined their change methodology, and there is a lot of commonality between these approaches. However, terminology such as revision, version, major change, and minor change might all vary.

I described the generic approach for engineering change processes in my blog post: ECR / ECO for Dummies from 2010.

The fact that companies have defined their own engineering change processes is not an issue when it works and is done manually. The real challenge came with PDM/PLM-systems that need to provide support for engineering change management.

Do you leave the methodology 100 % open, or do you provide business logic?

I have seen implementations where an engineer with a right-click could release an assembly without any constraints. Related drawings might not exist, parts in the assembly are not released, and more. To obtain a reliable engineering change management process, the company had to customize the PLM-system to its desired behavior.

An excellent exercise for a system integrator, as there was always a discussion with end-users who do not want to be restricted in case of an emergency (“we will complete the definition later” / “too many clicks” / “do I have to approve 100 parts?”). In many cases, the system integrator kept on customizing the system to adapt to all wishes. Often the engineering change methodology on paper was not complete or contained contradictions when trying to digitize the processes.

For that reason, the PLM-vendors that aim to provide Out-Of-The-Box solutions have been trying to predefine certain behaviors in their system. For example, you cannot release a part when its specifications (drawings/documents) are not released. Or, you cannot update a released assembly without creating a new revision.
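A sketch of such predefined business logic (the rules, states and revision scheme below are illustrative only, not any specific vendor’s behavior):

```python
def can_release_part(part_status: str, document_statuses: list[str]) -> bool:
    """Rule 1: a part can only be released when all its specifications are released."""
    return part_status == "In Work" and all(s == "Released" for s in document_statuses)

def update_assembly(assembly: dict) -> dict:
    """Rule 2: updating a released assembly forces a new revision back in work."""
    if assembly["status"] == "Released":
        assembly = {**assembly,
                    "revision": chr(ord(assembly["revision"]) + 1),
                    "status": "In Work"}
    return assembly

print(can_release_part("In Work", ["Released", "In Work"]))  # False - drawing not released yet
print(update_assembly({"id": "ASSY-1", "revision": "A", "status": "Released"}))
# {'id': 'ASSY-1', 'revision': 'B', 'status': 'In Work'}
```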

These rules speed up the implementation; however, they require more OCM (Organizational Change Management), as naming and methodology probably have to change within the company. This is the continuous battle in PLM-implementations, in particular where the company has a strong legacy or a lack of business understanding when implementing PLM.

There is an excellent webcast in this context on Minerva PLM TV – How to Increase IT Project Success with Organizational Change Management.

Click on the image or link to watch this recording.

Configuration Management

When we talk about configuration management, we have to think about managing the consistency of product data along the whole product lifecycle, as we have seen from the Wiki-definition before.

Wiki – the configuration Activity Model

Configuration management existed long before we had IT-systems. Therefore, configuration management is more a collection of activities (see diagram above) to ensure the consistency of information is correct for any given product. Consistent during design, where requirements match product capabilities. Consistent with manufacturing, where the manufacturing process is based on the correct engineering specifications. And consistent with operations, meaning that we have the full definition of product in the field, the As-Built, in correct relation to its engineering and manufacturing definition.

Source: Configuration management in aerospace industry

This consistency is crucial for products where the cost of an error can have a massive impact on the manufacturer. The first industries that invested heavily in configuration management were the Aerospace and Defense industries. Configuration management is needed in these industries as the products are usually complex, and failure can have a fatal impact on the company. Combined with many regulatory constraints, managing the configuration of a product and the impact of changes is a discipline on its own.

Other industries have also introduced configuration management nowadays. The nuclear power industry and the pharmaceutical industry use configuration management as part of their regulatory compliance. The automotive industry requires configuration management partly for compliance, mainly driven by quality targets. An accident or a recall can be costly for a car manufacturer. Other manufacturing companies all have their own configuration management strategies, mainly depending on their own risk assessment. Configuration management is a pro-active discipline – it costs money, time, people and potential tools to implement it. In my experience, many of these companies try to do “some” configuration management, always hoping that a real disaster will not (or cannot) happen. Proper configuration management allows you to perform a reliable impact analysis for any change (image above).

What happens in the field?

When introducing PLM in mid-market companies, the dream often was that with the new PLM-system, configuration management would be there too.

Management believes the tools will fix the issue.

Partly because configuration management deals with a structured approach on how to manage changes, there was always confusion with engineering change management. Modern PLM-systems all have an impact analysis capability. However, most of the time, this impact analysis only reaches the content that is in the PLM-system. Configuration Management goes further.

If you think that configuration management is crucial for your company, start educating yourselves first before implementing anything in a tool. There are several places where you can learn all about configuration management.

  • Probably the best-known organization is IpX (Institute for Process Excellence), teaching the CM2 methodology. Have a look here: CM2 certification and courses
  • Closely related to IpX, Martijn Dullaart shares his thoughts coming from the field as Lead Architect for Enterprise Configuration Management at ASML (one of the Dutch crown jewels) in his blog: MDUX
  • CMstat, a configuration and data management solution provider, provides educational posts from their perspective. Have a look at their posts, for example, PLM or PDM or CM
  • If you want to have a quick overview of Configuration Management in general, targeted for the mid-market, have a look at this (outdated) course: Training for Small and Medium Enterprises on CONFIGURATION MANAGEMENT. Good for self-study to get an understanding of the domain.

 

To summarize

In regulated industries, Configuration Management and PLM are a must to ensure compliance and quality. Configuration management and (engineering) change management are, first of all, required methodologies that guarantee the quality of your products. The more complex your products are, the higher the need for change and configuration management.

PLM-systems require embedded engineering change management – part of the PDM domain. Performing Engineering Change Management in a system is something many users do not like, as it feels like overhead. Too much administration or too many mouse clicks.

So far, there is no golden egg that performs engineering change management automatically. Perhaps in a data-driven environment, algorithms can speed-up change management processes. Still, there is a need for human decisions.

Similar to configuration management. If you have a PLM-system that connects all the data from concept, design, and manufacturing in a single environment, it does not mean you are performing configuration management. You need to have processes in place, and depending on your product and industry, the importance will vary.

Conclusion

In the first seven posts, we discussed the design and engineering practices, from CAD to EBOM, ending with the MBOM. Engineering Change Management and, in particular, Configuration Management are methodologies to ensure the consistency of data along the product lifecycle. These methodologies are connected and need to be fit for the future – more on this when we move to modern model-based approaches.

Closing note:

While finishing this blog post today, I read Jan Bosch’s post: Why you should not align. Jan touches on the same topic that I try to describe in my series Learning from the Past…, as my intention is to make us aware that by holding on to practices from the past, we are blocking our future. I highly recommend reading his post – a quote:

The problem is, of course, that every time you resist change, you get a bit behind. You accumulate some business, process and technical debt. You become a little less “fitting” to the environment in which you’re operating

In my last post related to Learning from the past to understand the future, I discussed what happened when 3D CAD became available for the mid-market. In the large automotive or aerospace & defense companies, 3D CAD has been introduced along the path of defining processes and selecting tools. In the mid-market 3D CAD started from the other side, first as a productivity tool, not thinking further to change methodologies or processes.

The approach of starting with 3D CAD without changing processes has created several complexities. Every company that is aiming to move towards a digital future needs to reduce complexity to remain competitive. Now let us focus on the relation between the 3D CAD-structure and a BOM.

The 3D CAD-structure

When building a product in a 3D CAD system, the concept is that you have individual parts designed in 3D.  Every single part has a unique identifier.

If possible, the (file) name would equal the physical part number.

Next, a group of parts could be stored as a subassembly. Such an assembly is sometimes called a phantom assembly, in case it only groups together several 3D parts. The usage of this type of assembly increased CAD productivity. For data management reasons, these assemblies need to have a unique identifier, preferably not within the same numbering scheme as physical part numbers, as that would consume part numbers that would never be used during manufacturing.

Note: in the early days of connecting 3D CAD to ERP, there was a considerable debate about which system could generate the part number.

ERP has always been the leading system for part definition – why change? And why generate part numbers that might not be used later in production? “Wasting” part numbers was a bad practice, as historically the part number was like a catalog number: 6 to 7 digits.

Next, there is also another group of subassemblies that represent one or more primary components of a product. For example, a pump assembly might be the combination of the pump, the motor, and the base frame. This type of assembly appears most of the time high in the CAD-structure. It can be considered a phantom assembly too, regarding the required identifier for this subassembly.

Finally, there might be parts in the CAD-structure that will not exist in reality as a part but are created during the manufacturing process. Sheet metal parts are created during the manufacturing process. Cappings, strips and cables shown in the CAD-structure might come from materials that are purchased in standardized sizes (1 meter / 2 meter / 10 meter) and need to be cut during manufacturing. Here the instances in the CAD-structure will have a unique identifier. What type of identifier to use depends on the manufacturing process. It might be a physical part number, as it is a half-fabricate, or it remains a unique identifier for the CAD-structure only.

The reason I am coming back to these identifiers is that as described before, companies wanted to keep a relation between the part number and the file name.

There was a problem with flexible parts. A rubber hose with a specific length could be shaped differently in an assembly based on its connection. Two different shapes would create two files and therefore break the rule that a part number equals the file name. The 3D CAD vendors “solved” this issue by storing configurable views of the same part inside one file and allowing the user to select the active view.

Later we will see that managing views inside the 3D CAD model is not a wrong choice. This is contrary to managing different configurations of a part/product inside a single file, which creates complexity in the PLM domain.

In the end, the product became an assembly with several levels of subassemblies. At that time, when I worked a lot with CAD-integrations, the average depth of 3D CAD-structures was 6 to 7 levels deep, with exceptions in both directions.

The entire product CAD-structure is mainly used for a final digital mock-up, to allow engineers to analyze the full product behavior.  One of my favorite YouTube movies is the one from Airbus – seven years ago, they described the power of a full digital mock-up used for the A380.

In ETO-processes, the 3D CAD-structure is unique for a given customer solution – like the A380.

In the case of large assemblies with a lot of parts and subassemblies, there were situations where the full product could not be resolved anymore. For Airbus a must, for the mid-market not always easy to reach. Graphics memory, combined with the way graphics were represented, was the major constraint. This performance issue is resolved in the gaming world; however, there the 3D representation no longer has the required accuracy or definition.

The Version pop-up problem

Working with a 3D CAD structure created a new problem when designers were sharing parts and assemblies between themselves and suppliers. The central storage of the files required a versioning mechanism, supported by a check-in and check-out mechanism.

Depending on the type of 3D CAD integration, the PDM-system generated a new minor revision of the file after each check-in. In this way, there was full traceability of the changes before release. The image below shows an example of how SmarTeam was dealing with minor and major revisions combined with lifecycle stages.

When revising a part, all assemblies that contain the changed part need to be updated too, in case you want to have traceability and prevent others from overwriting your version, making sure the assembly file points to the right file again. In the case of a 6-level deep CAD-structure, this has led to a lot of methodology problems on how to deal (or not to deal) with file changes.
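A sketch of this methodology problem (the structure and revision counters are hypothetical, not any vendor’s actual behavior): when a part gets a new revision, every parent assembly referencing it has to be revised as well so that it points to the correct file version, and this ripples up the multi-level structure.

```python
from collections import defaultdict

# Hypothetical 3-level CAD structure: parent -> children it references
structure = {
    "TopAssembly": ["SubAssyA", "SubAssyB"],
    "SubAssyA": ["Part01"],
    "SubAssyB": ["Part01", "Part02"],
}
revisions = defaultdict(lambda: 1)  # every file starts at minor revision 1

def parents_of(item: str) -> list[str]:
    return [parent for parent, children in structure.items() if item in children]

def revise(item: str) -> None:
    """Create a new revision and propagate the update to all parent assemblies."""
    revisions[item] += 1
    for parent in parents_of(item):
        revise(parent)  # parent must point to the new file version

revise("Part01")
# TopAssembly even gets revised twice, once per affected subassembly
print(dict(revisions))
# {'Part01': 2, 'SubAssyA': 2, 'TopAssembly': 3, 'SubAssyB': 2}
```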

In the case of a unique delivery for a customer, the ETO-process, the issue might not be so big. As everything in the 3D CAD-structure is work in progress, you only need to be sure during the release process of the 3D CAD-structure that all parts and assemblies are resolved to the latest version (and verified).

Making changes on an existing product is way more complicated, as assemblies are released, and parts exist in production.  In that case, the Bill of Material is the leading structure to control the versions and the change impact, as we will see.

Note: Most CAD- and PLM-vendors loved to show you their demos, where starting from the CAD-structure, a product gets created (the ETO-process). The reality is that most companies do not start from the CAD-structure, but from an existing Bill of Material. In 2010, I wrote a few posts, discussing the relation between CAD and the BOM:

to explain there is more than a CAD-driven scenario.

The EBOM

In most PDM-systems with CAD-integrations, it is possible to create a Bill of Materials from the 3D CAD-structure. The Bill of Materials will be based on the parts inside the 3D CAD-structure. There is often the option to filter out phantom assemblies.

The structures are not the same. The 3D CAD-structure is instance-based, whereas the extracted Bill of Materials summarizes the part quantities on the same level. See the image below: there are four Wheel instances in the CAD-structure, while in the EBOM-structure we have only one Wheel reference with quantity 4.
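The difference can be expressed in a few lines. This is a simplified sketch of a single level only; real CAD-BOM integrations also deal with phantom filtering, revisions and multi-level roll-ups.

```python
from collections import Counter

# Instance-based CAD structure of one level: every instance listed separately
cad_level = ["Wheel", "Wheel", "Wheel", "Wheel", "Axle", "Frame"]

def to_ebom_level(instances: list[str]) -> list[tuple[str, int]]:
    """Summarize CAD instances into EBOM lines: one reference plus a quantity."""
    return list(Counter(instances).items())

print(to_ebom_level(cad_level))
# [('Wheel', 4), ('Axle', 1), ('Frame', 1)]
```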

I named the structure on the right the EBOM as the structure represents the Bill of Materials from the engineering point of view. This definition is a little arbitrary, as we will see. In companies that started to develop products based on a conceptual BOM, often, this conceptual BOM was an “early” EBOM that had to be developed further. This EBOM was more representing a logical or modular structure driving the design, instead of an extract from the 3D CAD-structure. In the next post, I will zoom in on these differences. I want to conclude this time with a critical methodology needed to manage the 3D CAD structure changes in relation to an EBOM.

Breaking the rule Drawing ID (Model ID)  = Part ID

Although I have been writing mostly about the 3D CAD structure, I want to remind us that the 3D Model in the mid-market is mainly used for design purposes. The primary delivery for manufacturing or a supplier is still a 2D-drawing for most companies. The 3D Model might be “nice to have” for CAM- or quality usage. Still, in case of a dispute, the 2D Drawing will be leading.

For that reason, in many mid-market companies, there was the following relation below:

In an environment without file versioning through check-in/check-out, this relation was easy to maintain. In the electronic world, every change in the 3D Model (which could be an assembly) triggers a new file version and, therefore, most of the time, a new version of the drawing and the physical part. However, you do not want to have a physical part with many revisions, in particular when this part could be again part of a Bill of Material.

To solve this issue, the Physical Part and the related Drawing/Model should have different lifecycles. The relation between the Physical Part and the Drawing/Model should no longer be based on numbers but on a relation in the PDM/PLM-system. One of the main characteristics of a PDM/PLM-system is that it allows users to navigate through relations to find information in context. For example, solving a Where-Used question takes a (few) mouse-click(s) in a PDM/PLM-system.
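A sketch of that relation-based approach (all identifiers are hypothetical): the physical part and its drawing/model are separate objects with their own revisions, connected by explicit relations, and a Where-Used question becomes a simple traversal of those relations.

```python
from collections import defaultdict

# Separate objects, each with their own lifecycle - no longer coupled by a shared number
parts = {"PRT-100": "Bracket", "PRT-200": "Frame assembly"}
documents = {"DRW-555": "Bracket drawing, rev C", "MOD-556": "Bracket 3D model, rev D"}

# Explicit relations managed by the PDM/PLM-system
defined_by = {"PRT-100": ["DRW-555", "MOD-556"]}        # part -> defining documents
used_in = defaultdict(list, {"PRT-100": ["PRT-200"]})   # part -> parent items

def where_used(part_id: str) -> list[str]:
    """The 'few mouse-clicks' Where-Used question: in which items is this part used?"""
    return used_in[part_id]

print(where_used("PRT-100"))                        # ['PRT-200']
print([documents[d] for d in defined_by["PRT-100"]])  # drawing and model revise independently
```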

Click on the image to see the details.

Breaking this one-to-one numbering rule is a must if you want to evolve to an item-centric or data-driven PLM-environment. When to introduce this change and how to implement this new behavior is a methodology exercise, not an implementation of a new tool.

There is a lot to read about this topic as it is related to the Form-Fit-Function-discussion we had earlier this year. A collection of information can be found in these two LinkedIn-posts, where the comments provide the insights:

 

I will not dive deeper into this theme (reached 1700 words ☹) – next time I will zoom in on the EBOM and leave the world of 3D CAD behind (for a while)

 

 

This time a short post (for me), as I am in the middle of the series “Learning from the past to understand the future” and currently collecting information for next week’s post. However, recently Rob Ferrone, the original Digital Plumber, pointed me to an interesting post from Scott Taylor, the Data Whisperer.

In code: The Virtual Dutchman discovered the Data Whisperer thanks to the original Digital Plumber.

Scott’s article with the title: “Data Management Hasn’t Failed, but Data Management Storytelling Has” matches precisely the discussion we have in the PLM community.

Please read his article, and just replace the words Data Management with PLM, and it could have been written for our community. In a way, PLM is a specific application of data management, so not a real surprise.

Scott’s conclusions give food for thought in the PLM community:

To win over business stakeholders, Data Management leadership must craft a compelling narrative that builds urgency, reinvigorates enthusiasm, and evangelizes WHY their programs enable the strategic intentions of their enterprise. If the business leaders whose support and engagement you seek do not understand and accept the WHY, they will not care about the HOW. When communicating to executive leadership, skip the technical details, the feature functionality, and the reference architecture and focus on:

  • Establishing an accessible vocabulary
  • Harmonizing to a common voice
  • Illuminating the business vision

When you tell your Data Management story with that perspective, it can end happily ever after.

It all resonates well with what I described in the PLM ROI Myth – it is clear that when people hear the word Myth, it has a bad connotation; the same, by the way, for PLM.

The fact that we still need to learn storytelling is because most of us are so much focused on technology and sometimes on discovering the new name for PLM in the future.

Last week I pointed to a survey from the PLMIG (PLM Interest Group) and Xlifecycle.com, inviting you to help define the future definition of PLM.

You are still welcome here: Towards a digital future: the evolving role of PLM in the future digital world.

Also, I saw a great interview with Martin Eigner on Minerva PLM TV, conducted by Jennifer Moore. Martin is well known in the PLM world and has done foundational work for our community. According to Jennifer, he is considered The Godfather of PLM. This title fits nicely in today’s post. Those who have seen his presentations in recent years will remember Martin is talking about SysLM (System Lifecycle Management) as the future for PLM.

It is an interesting recording to watch – click on the image above to see it. Martin explains nicely why we often do not get the positive feedback from PLM implementations – starting at minute 13 for those who cannot wait.

In the interview, you will discover that we often talk too much about our discipline capabilities, where the real discussion should be about the business. Strategy and objectives are discussed and decided at the management level of a company. By using storytelling, we can connect to these business objectives.

The end result will more likely be that a company understands why to invest significantly in PLM, as PLM is now part of its competitiveness and future continuity.

Conclusion

I shared links to two interesting posts from the last weeks. Studying them will help you to create a broader view. We have to learn to tell the right story. People do not want PLM – they have personal objectives. Companies have business objectives, and they might lead to the need for a new and changing PLM. Connecting to the management in an organization, therefore, is crucial.

Next week again more about learning from the past to understand the future

To understand our legacy in the PLM-domain – the types of practices we created – I started this series of posts: Learning from the past to understand the future. My first post (The evolution of the BOM) focused on the disconnected world between engineering – the generation of drawings as a deliverable – and execution (MRP/ERP) – the first serious IT-systems in a company.

At that time, due to minimal connectivity, small and medium-sized companies had, most of the time, an informal connection between engineering and manufacturing. I remember a statement from that time, when PLM was just introduced. One person during a conference claimed:

“You guys make our lives so difficult with your systems. If we have a problem, we gather around the machine, and we fix it.”

PLM started at large enterprises

Of course, large enterprises could not afford such behavior as they operate globally. The leading enterprises for PDM/PLM were the Aerospace & Defense and Automotive companies. They needed consistent processes and formal ways of working to guarantee quality output.

In that sense, I was happy with the reaction from Jean-Jacques Urban-Galindo, who shared in the LinkedIn comments a reference to a relevant chapter of John Stark’s PLM book: a pdf describing the evolution of CAD / PDM / PLM at PSA. Jean-Jacques was responsible at that time for the re-engineering of the Product & Process Engineering processes using digital tools (CAD/CAM, DMU, and more).

Read the PSA story here: PLM at GROUPE PSA. It describes nicely where 3D CAD and the EBOM come in. In a large enterprise like PSA, the need for tools is driven by the processes. When you read it to the end, you will also see the need for a design and a manufacturing view, a topic I will touch on in future posts too.

The introduction of 3D CAD in the mid-market

Where large automotive and aerospace companies already invested in (expensive) 3D CAD hard and software, for the majority of the midsize companies, the switch from 2D CAD (AutoCAD mainly) towards 3D CAD (SolidWorks, Solid Edge, Inventor) started at the end of the 20th century.

It was the time that Microsoft Windows NT became a serious platform besides the existing mainframe and minicomputer-based CAD-systems. The switch to PCs went so fast that the disruption of DEC (Digital Equipment Corporation) is one of the cases discussed by Clayton Christensen in his groundbreaking book: The Innovator’s Dilemma.

3D CAD brought a lot of new capabilities, like DMU (Digital Mock-Up) for clash detection and, above all, a better understanding of a product’s behavior. It also introduced a new set of challenges to be resolved.

For example, the concept of reusing 3D CAD parts. Mid-market companies, most of the time, are buying productivity tools. Can I design my product faster and with higher quality in 3D instead of using only the 2D definitions?

Mid-market companies usually do not redesign their business processes – there are no people available for strategy – and the pain of a lack of strategy is felt in a different way compared to large enterprises: a crucial differentiator for the future of PLM.

Reuse of (3D) CAD parts / Assemblies

In the 2D CAD world, there was not so much reuse of CAD parts. Standard parts were saved in libraries or generated on demand by parametric libraries. Now with 3D CAD, designers might spend more time to define the part. The benefits come from the reuse of small sub-assemblies (modules) into a larger product assembly. Something not relevant in the 2D CAD world.

As every 3D CAD part had to have a file name, it became difficult to manage the file names without a system. How do you ensure that a file with the name Part01.xxx is unique? Another designer might also create an assembly, where the 3D CAD tool would suggest Part01.xxx as the name. And what about revisions? Do you store them in the filename, and how do you know you have the correct and latest version of the file?

Companies already had part naming rules for drawings, often related to the part’s usage, similar to the “intelligent” numbers I mentioned in my previous post.

With 3D CAD it became a little more complicated as now in electronic formats, companies wanted to maintain the relation:

Drawing ID = Part ID = File Name

The need for a PDM-system

If you look at the image on the left, which I found in one of my old SmarTeam files, there is a part number combined with additional flags A-A-C, which also have a meaning (which I don’t know ☹), and a description.

 

The purpose of these meaningful flags was to maintain the current ways of working. Without a PDM-system, parts of the assembly could be shared with an OEM or a supplier. File-based 3D CAD without using a PDM-system was not a problem for small and medium enterprises.

The 3D CAD-system maintained the relations in the assembly files, including relations to the 2D Drawings. Despite the introduction of 3D CAD, the 2D Drawing remained the deliverable the rest of the company or supply chain was waiting for – preferably a drawing containing a parts list and balloon numbers, the same as it was done before. Why would you need a PDM-system?

PDM for traceability and reuse

If you were working in your 3D CAD-system for a single product, or on individual projects for OEMs, there was no significant benefit from a PDM-system. All deliverables needed for the engineering department were in the 3D CAD environment. Assembly files and drawing files are already like small databases, containing references to the source files of the parts (image above).

A PDM-system at this stage could help you build traceability and prevent people from overwriting files. The ROI for this part only depends on the cost and risks of making mistakes.

However, when companies started to reuse parts or subassemblies, there was a need for a system that could manage the 3D models separately. This had an impact on the design methodology.

Now parts could be used in various products. How do you discover parts for reuse, and how do you know you have the latest released version? For sure, their naming can no longer be related to a single product or project (a practice still used a lot).

This is where PDM-systems came in. Additional attributes per file, combined with relations between parts, allowed companies to structure and deliver more details related to a part. A detailed description for internal usage, a part type (classification), and the part material were commonly used attributes – and not to forget the status and revision.
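As a simple illustration of such a record (the attribute names and values are just examples): the PDM-system stores metadata and relations next to the file, so parts can be searched and classified independently of the file name.

```python
from dataclasses import dataclass, field

@dataclass
class PdmPartRecord:
    """Metadata the PDM-system keeps next to the CAD-file."""
    file_name: str
    description: str
    classification: str            # part type, e.g. "Fastener"
    material: str
    status: str = "In Work"        # lifecycle status
    revision: str = "A"
    used_in: list[str] = field(default_factory=list)  # relations to assemblies

record = PdmPartRecord(
    file_name="Part01.sldprt",
    description="Hex bolt M8x40, zinc plated",
    classification="Fastener",
    material="Steel 8.8",
    used_in=["Assy-Pump-12"],
)
# Reuse and discovery now work on attributes and relations, not on the file name
print(record.classification, record.material, record.status, record.revision)
```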

For reuse, it was important that the creators of content had a strategy to define a part for future reuse or discovery. Engineers were not used to providing such services; filling in data in a PDM-system was seen as overhead – bureaucracy.

As they were measured on the number of drawings they produced, why do extra work with no immediate benefits?

The best compromise was to have the designer fill in properties in the CAD-file when creating a part. The CAD-integration with the PDM-system could then be used to fill the corresponding attributes in the PDM-system.

This “beautifully” simple concept later led to a lot of complexity.

Is the CAD-model the source of the data, meaning designers should always start from CAD when designing a product? If someone adds or modifies data in the PDM-system, should we open the CAD-file to update its properties? Changing a file means creating a new version. And what happens if the CAD-file is already released, and I update some connected attributes in PDM?
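
To illustrate the dilemma, here is a minimal sketch of one possible synchronization policy – an assumption for illustration only, not a recommendation: PDM attribute changes are only pushed back into the CAD-file while the file is still in work; once released, the PDM record and the CAD properties are allowed to diverge.

    def update_attribute(item: dict, name: str, value: str) -> None:
        item[name] = value
        if item.get("status") == "In Work":
            push_property_to_cad(item["file_name"], name, value)
        else:
            print(f"{item['part_id']} is released - attribute change kept in PDM only")

    def push_property_to_cad(file_name: str, name: str, value: str) -> None:
        # placeholder for a CAD-integration call; every real integration behaves differently
        print(f"Writing {name}={value} into {file_name} (this creates a new file version)")

    part = {"part_id": "P000123", "status": "Released", "file_name": "P000123.prt"}
    update_attribute(part, "material", "Steel S355")

Whatever policy a company picks, somebody has to pick it – which is precisely where the complexity started.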

To summarize this topic: companies missed the opportunity here to implement data governance. However, none of the silos (manufacturing preparation, service) recognized the need. Implementing new tools (3D CAD and PDM) did not change the company’s way of working.

Instead of addressing people, processes and tools, the only focus was on the new tools, satisfying the people within the same process.

Of course, when introducing PDM, which happened for mid-market companies at the beginning of this century, there was no PLM vision. Talking about lifecycle support was a waste of time for management. As we will discover in future posts, large enterprises and small and medium enterprises have the same PLM needs. However, there is already a fundamentally different starting point. Where large enterprises analyze and design business processes, small and medium enterprises buy tools to improve their current ways of working.

The Future?

Although we have many steps to take in the upcoming posts, I want to draw your attention to an initiative from the PLM Interest Group together with Xlifecycle.com. The discussion is about what PLM’s role will be in digital transformation.

As you might have noticed, some people say the word PLM no longer covers the right context, and all kinds of alternatives have been suggested. I recommend giving your opinion without my personal guidance. Feel free to answer the questionnaire, and we will all be looking forward to the results.

Find the survey here: Towards a digital future: the evolving role of PLM in the future digital world

 

Conclusion

We are going slowly. In this post, we discovered the split in strategy between large enterprises (process focus) and small and medium enterprises (tool focus) when introducing 3D CAD. This different focus, at that time for PDM, is one of the reasons why vendors create functions and features that still require a methodology to make them work – however, who will provide the methodology?

Next time, more on 3D CAD structures and the EBOM.

One week ago, Yoann Maingon wrote an innocent post with the question: Has FFF killed? The question was raised in relation to a 2014 problem at GM, where a changed part caused fatal accidents.

The discussion was started by Yoann, and here is my short extract. Assuming this problem was a configuration management issue, Yoann indicated that the problem might be related to the fact that ERP-systems do not carry a revision on the part number – leading to an unnoticed change. Therefore, he assumes there is a disconnect between the PLM-side (where we have parts with multiple lifecycle states and revisions) and ERP (where we have an industrial lifecycle – prototype/production).

He posted his thoughts, and then LinkedIn exploded (currently 116 comments), which means it is a topic of significant concern in our community. If you read the comments, you will find different viewpoints:

  • What does FFF really imply?
  • What about revisions of parts?
  • What are the best practices?

Let’s investigate these viewpoints with some comments

What does FFF really imply?

When we talk about FFF in engineering, we mean Form, Fit and Function – the three primary characteristics to describe a part (source: Wikipedia):

  • Form refers to such characteristics as external dimensions, weight, size, and visual appearance of a part or assembly. This is the element of FFF that is most affected by an engineer’s aesthetic choices, including enclosure, chassis, and control panel, that become the outward “face” of the product.
  • Fit refers to the ability of the part or feature to connect to, mate with, or join to another feature or part within an assembly. The “fit” allows the part to meet the required assembly tolerances to be useful.
  • Function is a criterion that is met when the part performs its stated purpose effectively and reliably. In an electronics product, for example, a function can depend on the solid-state components used, the software or firmware, and quite often on the features of the electronics enclosure selected.

One of the comments in Yoann’s post referred to Safe/Unsafe as a potential functional characteristic. I think this addition is not needed. Safety should be a requirement for the part, not a characteristic.

FFF was and still is an approach for engineers to decide if a new, improved version of a part gets a revision or needs a new part number.
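
As a minimal sketch (purely illustrative – real change boards weigh far more factors than three booleans), the decision rule looks like this:

    def next_identifier(part_number: str, revision: str,
                        form_changed: bool, fit_changed: bool,
                        function_changed: bool) -> tuple:
        """Return (part_number, revision) after an engineering change."""
        if form_changed or fit_changed or function_changed:
            # no longer interchangeable: the part needs a new part number
            return new_part_number(), "A"
        # fully FFF-compatible change: keep the number, bump the revision
        # (naive single-letter revision scheme, just for this sketch)
        return part_number, chr(ord(revision) + 1)

    def new_part_number() -> str:
        return "P000124"   # hypothetical: in practice issued by the PDM/PLM-system

    print(next_identifier("P000123", "A", False, False, False))   # ('P000123', 'B')
    print(next_identifier("P000123", "B", False, True, False))    # ('P000124', 'A')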

I think before we dive deeper into the other viewpoints, it is crucial to define the part number a little more.

In a correct PLM data model, there are two types of part numbers. First, the internal part number that your company uses inside its engineering Bill of Materials to identify a part. This part number can be a meaningless number, only there to provide uniqueness inside the company.

In 2015 I wrote several posts related to best practices and data modeling for PLM. The most relevant posts to this discussion are here:

The internal part number can specify a part that needs to be manufactured according to specification, or a part that needs to be purchased from an available supplier/manufacturer. The second type is the manufacturer part number, which is, most of the time, a meaningful number (6 – 7 characters), as these parts need to be ordered by your company. The manufacturer part number is the SKU for the manufacturer. As you can imagine, in the manufacturer’s catalog there is no revision mentioned. In graphics, see the image below:

Your company might sell Product MP-323121 (note: the ID is meaningful to help the customer to order the product).

Internally there is a related EBOM that specifies the product. The EBOM top part is O122 (note: here, we can use a meaningless identifier as all is digitally connected).

For the manufacturing of O122, we need to resolve the EBOM according to its specifications. Therefore, for Part O124, the company needs to decide to purchase from their approved manufacturers either part ABC-21231 or XYZ-88818 (note: again, a meaningful ID as these companies are not digitally connected).

Now, coming back to the FFF-discussion: for the orange parts, with a meaningful ID, no revision exists. However, as long as Assembly O122 is 100% FFF compatible, the Product ID MP-323121 will not change. It allows your company to optimize the EBOM and/or MBOM while keeping 100% compatibility with the outside world (note: the same principle applies to the two manufacturers for Part O124).
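
Using the identifiers from the image, a minimal sketch of how the meaningless internal EBOM resolves against the approved manufacturers could look like this (the data structures are illustrative assumptions, not a real PLM schema):

    product_catalog = {"MP-323121": "O122"}         # sellable product -> internal EBOM top
    ebom = {"O122": ["O124"]}                       # top assembly and its child parts
    approved_manufacturers = {"O124": ["ABC-21231", "XYZ-88818"]}   # meaningful external IDs

    def resolve_for_manufacturing(product_id: str, prefer: int = 0) -> dict:
        """Pick a concrete source for every part under the product's EBOM top."""
        top = product_catalog[product_id]
        resolved = {}
        for child in ebom[top]:
            sources = approved_manufacturers.get(child)
            # make-parts have no manufacturer entry; buy-parts get one of the approved sources
            resolved[child] = sources[prefer] if sources else "make to specification"
        return resolved

    print(resolve_for_manufacturing("MP-323121"))   # {'O124': 'ABC-21231'}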

In case Top Assembly O122 has new or changed parts – what should happen there?

At that moment, the definition has changed. The definitions, most of the time described in documents/drawings/models, are information related to the BOM. Therefore, the Top Assembly O122 should get a new identifier. There is no need to call it a revision; it is a new data set in the PLM-system, again with a meaningless identifier, as we are connected digitally.

What about revisions of parts?

Of course, the management of changes existed long before PLM-systems were introduced.

The specifications of a part were defined in drawings. The drawing contained all the information, not only the geometry definitions, but also specifications on how to manufacture the part.

For complex products, a considerable set of consistently related drawings would be released to manufacturing – a release process with physical signatures on it.

At the same time, there was no discussion: the drawing represents the part. And as there was no digital connection, part numbers/drawing numbers were meaningful, often with the format of the drawing as part of the identifier.

In case changes were needed, for example, fixing a dimension or tolerance as discovered during manufacturing, the drawing had to be revised to remain consistent. First, in the original drawing, the issue or change was marked in red (redlining). Then engineering had to create a new version of the drawing.

Depending on the impact of the change (here the FFF-principle comes in again), people decided if a new part number was needed (an FFF-change) or if the change only required an update of the drawing(s), meaning a revision. If the difference was small (for example, adding a missing annotation), it could be called a minor change, all to be reflected in the drawing number, which equals the part number in this approach. So, when we talk about revisions of parts, we are talking about a document change.

A lousy practice resulting from that approach is that manufacturing often just redlines a drawing and keeps the redlined drawing as their source. It is too time-consuming or difficult to update the source drawing(s) through a change process. Engineering is not aware of this change, and when a later change comes through from engineering, these “fixes” might be missed, as there is no traceability.

Generic example of a PLM data model and its relations

When PLM-systems were introduced, of course, companies did not want to disrupt their existing ways of working. Therefore, they asked the PLM-editors to enable revisions on parts, and so the PLM-editors did (and still do).

Decoupling of parts and documents in a PLM data model

However, if you want to use the PLM-system in the best manner, you need to “decouple” the concept that the part number equals the drawing number, combined with the possibility of starting to use meaningless identifiers, as the relations between parts and drawings are managed in the PLM-system through relational links.
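
A minimal sketch of this decoupled concept, with hypothetical identifiers and field names: parts and documents are separate records, and the only thing connecting them is a link managed by the PLM-system.

    parts = {
        "P000200": {"description": "Housing"},       # part record with a meaningless identifier
    }

    documents = {
        "D000900": {"type": "drawing", "revision": "B", "file": "housing.dwg"},
        "D000901": {"type": "3d-model", "revision": "C", "file": "housing.prt"},
    }

    links = [
        ("P000200", "D000900", "is defined by"),
        ("P000200", "D000901", "is defined by"),
    ]

    def documents_for_part(part_id: str) -> list:
        """Follow the relational links instead of parsing meaning out of the numbers."""
        return [doc for part, doc, _ in links if part == part_id]

    print(documents_for_part("P000200"))   # ['D000900', 'D000901']

Note that in this sketch only the documents carry a revision; whether the part itself needs one is exactly the discussion above.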

Relevant posts related to the PLM data model are:

What are the best practices?

As some people mentioned in their comments to Yoann’s post, why do we have to answer this question when everything is already well understood and described in best practices? I agree with that statement: best practices exist – so how do you obtain them?

First, there is the whole framework of Configuration Management, which existed long before PLM-systems were introduced. If you follow their methodology, you can be (almost) guaranteed your information is consistent and correct. Configuration Management is crucial in areas where the impact of an error is enormous, like the GM-example Yoann referred to. Also, companies in the Aerospace and Defense industry are the ones that have strict configuration management in place.

Configuration management does not come for free. It requires an investment in skills, potentially a change in ways of working, and it creates overhead. Manufacturing companies that create less “risky” products often focus more on optimizing (= reducing) the cost of their internal processes instead of investing in proper methodologies to manage consistency.

If you want to learn more about CM, investigate the Institute of Process Excellence (IPX), the founders of the CM2 framework for Enterprise Configuration Management, and much more. Note: their knowledge does not come for free, which I can understand. However, it also creates a barrier for a company’s further investment in CM, as this kind of strategic investment is hard to sell at the management level by individuals in a company.

In the context of CM, I advise you to follow Martijn Dullaart, who is quite active in our social community. His latest blog post related to this thread is: It’s about Interchangeability and Traceability

With the introduction of PLM-systems, these companies and the PLM-editors created the opportunity to implement configuration management in their systems.

The data inside the system would be the “single version of the truth.” Unfortunately, this was, most of the time, just a sales strategy, falsely giving the impression that the information is now under control. Last year I wrote several posts related to the relation between PLM and CM, starting from PLM and Configuration Management – a happy marriage?

If you are interested in another resource for information related to these topics, have a look at the website of Jörg Eisenträger, who also shares his collected best practices for PLM and CM (thanks, Paul van der Ree, for the link).

Don’t expect best practices from your PLM-vendors as their role is to sell software. It is the continuous discussion between:

  • A PLM-system that forces companies to work according to embedded methodology (hard to sell/implement but idealistically correct)

And

  • A flexible PLM-system that allows you to build and configure anything (easy to sell/challenging to implement correctly, depending on “wise” decisions)

The Future

Even though most companies are working drawing-centric, with or without a linked PLM-backbone for BOM-management, the next challenge is to evolve to model-based practices. The current CM-practices still talk about documents, although documents are already electronic datasets in that context. The future model-based enterprise, however, evolves around connected models – 3D models, but also simulation and software models – with different lifecycles and paces of change. For the model-based enterprise, we need to develop digital best practices that guarantee the same level of quality, however executed and/or supported by Artificial Intelligence (AI). AI is needed as human beings cannot physically analyze and understand the full impact of a change in such an environment.

Conclusion

The FFF-discussion illustrates that building a consistent framework within PLM is not an easy goal to achieve. My blog buddy Oleg Shilovitsky would claim that we consultants create the complexity. PLM-editors will never solve this complexity; it is up to your company to invest in the knowledge to understand why and how to reduce it. With this post and the related links and discussions, I hope more clarity will help you to make “wise” decisions.

This post is based on a mix of interactions I had over the last two weeks in my network, mainly on LinkedIn. First, I enjoyed the discussion that started around Yoann Maingon’s post: Thoughts about PLM Business models. Yoann is quite seasoned in PLM, as you can see from his LinkedIn profile, and we have had interesting discussions in the past – recently about the new PLM-system he is developing, Ganister PLM, based on a flexible graph database.

Perhaps in that context, Yoann was exploring the various business models. Do you pay for the software (and maintenance), do you pay through a subscription, and what about a modular approach or a full license for all the functionality? All these questions made me think about the various business models that I have encountered and how hard it is for a customer to choose the optimal solution. And is there space for a new type of PLM? Is there space for free PLM? Some of my thoughts here:

PLM vendors need to be profitable

One of the most essential points to consider is that, whatever PLM solution you are aiming to buy, you should make sure that your PLM vendor has a profitable business model. Once you have started with a PLM solution, it is your company’s IP that will be stored in this environment, and you do not want to change your PLM system every few years. Switching PLM systems would only be affordable if PLM systems stored their data in a standard format – I will share a more in-depth link under PLM and standards.

For the moment, you cannot state that PLM vendors endorse standards. None of the real PLM vendors have a standardized data model; perhaps closest to standards is Eurostep, who have based their ShareAspace solution on top of the PLCS (ISO 10303) standard. However, ShareAspace is positioned more as a type of middleware, connecting OEMs/Owners/Operators and their suppliers to benefit from standardized connectivity.

Coming back to the statement: PLM vendors need to be profitable to provide a guarantee for the future of your company’s data – that is the first step. The major PLM vendors are now profitable, as during a consolidation phase that started 15 years ago, a lot of non-profitable PLM vendors disappeared. Matrix One, Agile and Eigner & Partner PLM are the best-known companies that were bought for either their technology or their market share. In that context, you might also look at OnShape.

Would they be profitable as a separate company, or would investors give up? To survive, you need to be profitable, so giving software away for free is not a good sign (see the PLM for free paragraph below), as a company needs continuity.

PLM startups

In the past 10 years, I have seen and evaluated several new PLM companies. None of them really changed the PLM paradigm; most of them were still focused on being engineering collaboration tools. Several of these companies state in their vision that they are going to be the “Excel killer.” We all know Excel has the best user interface and capabilities for manipulating a collection of metadata.

Very popular is the BOM in Excel, extracted from the CAD-system (no need for an “expensive” PDM or PLM), or the BOM used to share with suppliers and stakeholders (ERP is too rigid, purchasing does not work with PDM).

The challenge I see here is that these startups do not bring real new value. The cost of manipulating Excels is a hidden cost, and companies relying on Excel communication are the type of companies that do not have a strategic point of view. This is typical for Small and Medium businesses where execution (“let’s do it”) gets all the attention.

PLM startups often collect investors’ money because they promise to kill Excel, but is Excel the real problem? Modern PLM is about data sharing, which is an attitude change, not necessarily a technology change from Excel tables to (cloud) shared tables. However, will one of these “new Excel killer” PLMs be disruptive? I don’t think so.

PLM disruption?

A week ago, I read an interview with Clayton Christensen (thanks, Hakan Karden), which I shared on LinkedIn. Clayton Christensen is the father of the Disruptive Innovation theory, and I have cited him several times in my blogs. His theory is, in my opinion, fundamental to understanding how traditional businesses can be disrupted. The interview took place shortly before he died at the age of 67, due to complications caused by leukemia.

A favorite part of this interview is where he restates what Disruptive Innovation really is, as we often talk about disruption without understanding the context, just echoing other people:

Christensen: Disruptive innovation describes a process by which a product or service powered by a technology enabler initially takes root in simple applications at the low end of a market — typically by being less expensive and more accessible — and then relentlessly moves upmarket, eventually displacing established competitors. Disruptive innovations are not breakthrough innovations or “ambitious upstarts” that dramatically alter how business is done but, rather, consist of products and services that are simple, accessible, and affordable. These products and services often appear modest at their outset but over time have the potential to transform an industry.

Many of the PLM startups dream of and position themselves as the new disruptor. Will they succeed? I do not believe so if they only focus on replacing Excel; a different paradigm is needed. Voice control and analysis, perhaps (“Hey PLM, if I change Part XYZ, what will be affected?”).

This would be disruptive and open new options. I think PLM startups should focus here if they want my investment money.

PLM for free?

There are some voices saying that PLM should be free, in analogy to software management and collaboration tools. There are so many open-source software management tools, so why not use them for PLM? I think there are two issues here:

  • PLM data is not like software data. A lot of PLM data is based on design models (3D CAD / simulation), which is different from software. Designs are often not as modular as software, for various reasons. Companies want to be modular in their products, but do they have the time and resources to reinvent their existing products? For software, these costs are much lower, as it is only a brain exercise. For hardware, the impact is significant – which brings me to the second point.
  • The cost of change for hardware is entirely different compared to software. Changing software does not have an impact on existing stock or suppliers and, therefore, can be implemented once tested for its purpose. A hardware change impacts the existing production process. Do we first use up the old parts before introducing the change, or do we accept the cost of scrap? Is our supply chain, and are our production tools, ready to deliver continuity for the new version? Hardware changes are costly, and you want to avoid them. Software changes are cheap; therefore, design your products to be configurable through software (for example, Tesla’s software controlling which features are enabled).

Now imagine that, with enough funding, you could provide a PLM for free. Because of ease of deployment, this would very likely be a cloud offering, easy and scalable. However, all your IP is in that cloud too. And let’s imagine that the cloud is safer than on-premise, so it does not matter in which country your data is hosted (does it?).

Next, the “free” PLM provider starts asking a small service fee after five years, as the promised ROI of the model hasn’t delivered enough value and the shareholders become anxious. Of course, you do not want to pay the fee. However, where is your data, and what happens when you do not pay?

If the PLM provider switches you off, you are without your IP. If you ask the PLM provider to provide your data, what will you get? A blob of XML-files, anything you can use?

In general, this is a challenge for all cloud solutions.

  • What if you want to stop your subscription?
  • What is the allowed Exit-strategy?

Here I believe customers should ask for clarity, and perhaps these questions will lead to a renewed understanding that we need standards.

PLM and standards

We had a vivid discussion in the blogging community in September last year. You can read more related to this topic in my post PLM and the need for standards, which describes the aspects of lock-in and the need for openness.

Finally, a remark related to the PLM-acronym. Another interesting discussion started around Joe Barkai’s post: Why I do not do PLM. Read the comments and the various viewpoints on PLM there. It is clear that the word PLM unites us all; however, the interpretations differ.

If someone in the street asks me what my profession is, I never mention I do PLM. I say: “I assist mainly manufacturing companies in redesigning their business processes using best practices and modern digital technologies.” The focus is on the business value, not on the ultimate definition of PLM.

Conclusion

There are many business aspects related to PLM to consider. Yoann Maingon’s post started the thinking process, and we ended up with the PLM-definition. It all illustrates that being involved in PLM is never a boring journey. I am curious to learn about your journey and where we meet.

At the beginning of this week, I attended the 9th edition of the PI conference in London. Where it started as a popular conference with 300 – 400 attendees at its peak, we were now down to a smaller number of approximately 100 attendees.

It illustrates that PLM as a standalone topic no longer attracts a broad audience, as Marketkey (the organizer of the conference) confirms. The intention is that future conferences will focus on the broader scope of PLM, where business transformation will be one of the main streams.

In this post, I will share my highlights of the conference, knowing that other sessions might have been valuable too, but I had to make a choice.

It is about people

Armin Prommersberger, CTO from DIRAC and the chairman of the conference, made a great point: “What we will discuss in the upcoming two days, it is all about people not about technology.”

I am not sure if this opening influenced the mood of the conference, but when I look back at the central theme, it was indeed all about how we deal with people when explaining, implementing and justifying PLM.

AI at the Forefront of a Digital Transformation

Muhannad Alomari from R2 Data Labs, a separate unit within Rolls-Royce set up to explore and provide data innovation, started with his keynote speech, sharing the AI initiatives within his team.

He talked about several projects where AI will become crucial.

For example, the EHM program related to engine behavior: how to detect anomalies, how to establish predictive maintenance and how to maximize the time an airplane engine is in operation. Interesting to mention is that Muhannad explained that most simulations are based on simplified models, not accurate enough to discover anomalies.

Modeling in the PLM world with feedback from reality

Machine learning and feedback loops are crucial to optimizing the models, both for the discovery of irregularities and, of course, to improve the understanding of engine behavior and predict maintenance. Currently, maintenance is defined based on the worst-case scenario for the engine, which, in reality, will not be the case for most engines. There is a lot (millions) to gain here for a company.
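
To make the idea of such a feedback loop tangible, here is a purely illustrative sketch – in no way Rolls-Royce’s actual method – of flagging readings that deviate from a model’s expectation and re-tuning the detection threshold with confirmed feedback:

    import statistics

    def detect_anomalies(readings, expected, threshold):
        # flag every reading whose deviation from the model's expectation exceeds the threshold
        residuals = [abs(r - e) for r, e in zip(readings, expected)]
        return [i for i, res in enumerate(residuals) if res > threshold]

    def retune_threshold(confirmed_normal_residuals, k: float = 3.0) -> float:
        # feedback step: adapt the threshold to residuals engineers confirmed as normal
        return statistics.mean(confirmed_normal_residuals) + k * statistics.stdev(confirmed_normal_residuals)

    readings = [501, 498, 530, 502, 499]     # hypothetical sensor values
    expected = [500, 500, 500, 500, 500]     # what the (simplified) model predicts
    print(detect_anomalies(readings, expected, threshold=10))   # [2]
    print(retune_threshold([1, 2, 2, 1, 3]))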

Interesting to mention is that Muhannad gave a realistic view of the current status of Artificial Intelligence (AI). AI is currently still dumb – it is a set of algorithms that need to be adapted whenever new patterns are discovered. Deep learning is still not there – currently, we still need human beings for that.

This was in contrast with the later session from Kalypso with the title: Supercharge your PLM with advanced analytics. It was a typical example of how a realistic story (R2 Data Labs) differs from what is sold by PLM vendors or implementers. Kalypso introduced Product Lifecycle Intelligence (PLI) – you can see the dream on the left (click on the image to enlarge).

Combine PLM with Analytics, and you have Intelligence. My main comment is, knowing from the field, that in most companies the first three phases lack data quality and consistency. Therefore, any “Intelligence” will probably be based on unreliable sources. Not an issue if you are working in the domain of politics; however, when it comes to direct cost and quality implications, it can be a significant risk. We still have a way to go before we have a reliable PLM data backbone for analytics.

 

Keeping PLM Momentum after a Successful Campaign

Susanna Mäentausta from Kemira in Finland gave an exciting update on their PLM project. Where in 2019 she shared their PLM roadmap with us (see my 2019 post: The weekend after PI PLMx London 2019), this time Susanna shared how they are keeping the PLM momentum.

Often, PLM implementations are started based on a hypothetical business case (I talked about this in my post The PLM ROI Myth). But then, when you implement PLM, you need to take care to provide proof points to motivate the management. And this is exactly what the PLM team at Kemira has been doing. Often, management believes that after the first investment the project is done (“We bought the software – so we are done”); however, the business and process change that will deliver the value is not reported.

Susanna shared with us how they defined measurable KPIs for two reasons. First, to show the management that there is business progress and there are benefits – however, it is a journey. And second, the facts are used to kill the legends that “Before PLM, we were much faster or more efficient.” These types of legends are often expressed loudly by people who consider PLM an overhead (killing their freedom) instead of a way to be more efficient in business. In the end, for a company, the business is more important than a person’s belief.

On the question of what she would have done better with hindsight, Susanna answered: “Communicate, communicate, communicate.” A response I fully support, as PLM teams are often so busy completing their day-to-day work that there is no spare time left for communication – crucial to achieving a business change.

I agree: PLM needs to be fact-based during implementation and support, combined with the understanding that we are dealing with people and their emotions too. Both need full attention.

Accelerating Digitalization at Stora Enso

Samuli Savo, Chief Digital Officer at Stora Enso, explained the principles of innovation related to digitalization at his company. Stora Enso, a Swedish/Finnish company, historically one of the largest forestry companies in the world as well as one of the most significant paper and packaging producers, is working on a transformation to become the renewable materials company. For me, he made two vital points on how Stora Enso’s digitalization journey is organized.

He pleads for experimentation funded by corporate, as in the experimental stage it does not make sense to have a business case. First DO and then ANALYZE, whereas many companies have the policy to first ANALYZE and then DO, killing innovative thinking.

The second point was the active process of challenging startups to solve business challenges they foresee, combined with a governance process that allows these startups to be supported and become embedded within member companies of the Combient Foundry, like Stora Enso. By doing this in a structured way, the outcome must lead to innovation.

I was thinking about the hybrid enterprise model that I have been explaining in the past. Great story.

Cyber-security and Future Mobility

Out of interest, I followed the session from Madeline Cheah, Cybersecurity Innovation Lead at HORIBA MIRA. She gave an excellent and well-structured overview. Madeline leads the cybersecurity research program. Part of this job is investigating ways to prevent vehicles from being attacked, in particular when it comes to connected and autonomous vehicles: how to keep them secure.

She discussed what the known gaps are, and the cybersecurity implications of future mobility are so extensive that I even doubted whether there will ever be an autonomous vehicle on the road. So much to define and explore. She looked at it from the perspective of the Internet of Everything, where Everything is divided into Things, Data, Processes, and People. Still a lot of work to do – see the image below.

Good Times Ahead: Delay Mitigation Through a Plan for Every Part

Ian Quest, director at Quick Release, gave an overview of what their company aims to be. You could translate it as the plumbers of the automotive industry. Where in an ideal world information should flow from design to release, there are many bottlenecks, leakages and hiccups that need to be resolved, as the image shows.

Where their customers often do not have the time and expertise to fix these issues, Quick Release brings in various skillsets and common sense. For example, how to deal with the Bill of Materials, Configuration Management, and many other areas that you need to address with methodology first instead of (vendor-based) technology. I believe there is a significant need for this type of company in the PLM-domain.

The second part, presented by Nick Solly, focused on their QRonos tool and was perhaps a little too much about the capabilities of the tool. Ian Quest, in his introduction, had already made the correct statement:

The QRonos tool, which is more or less a reporting tool, illustrates again that when people care about reliable data (planning, tasks, parts, deliverables, …..), you can improve your business significantly by creating visibility of delays or bottlenecks. The value lies in measurable activities and, from there, learning to predict or enhance – see R2 Data Labs, Kemira and the PLI dream.

Conclusion

It is clear that a typical PLM conference is no longer a technology festival – it is about people. People are trying to change or improve their business. Trying to learn from each other, knowing that the technical concepts and technology are there.

I am looking forward to the upcoming PI events where this change will become more apparent.

 
