
In my last post, My four picks from PLMIF, I ended with the remark that the discussion related to the Multiview BOM concept was not complete. The session presented by James Roche focused on the Aerospace & Defense domain and only touched the surface. There is a lot of confusion related to best practices for BOM handling, sometimes created to promote unique vendor capabilities or to hide system complexity.

Besides, we need to consider the past because, in particular for PLM, the burden of legacy processes and data is significant. Some practices even come from the previous, paper-based century, later mixed with behavior from 3D CAD-systems.

Therefore, to understand the future, I will take you through the past to explain why certain practices were established. In a few upcoming posts, I want to describe the evolution of BOM practices: how each new technology step introduced capabilities that enabled companies to improve their product delivery process.

I will describe the drawing approach (for PLM – the past), the item-centric approach (for PLM – the current), and the model-driven approach (for PLM – the future). How long this series will become is not clear at this stage.

Whenever I come close to 1200 – 1500 words, I will stop and conclude. Based on my to-do list and your remarks, I will continue in a follow-up post. The target is a vendor-neutral collection of information that helps you position your business and identify possible next steps.

Working with drawings

MRP/ERP – the first IT-system

For this approach, I go back fifty years in time, when companies were starting to work with their first significant IT-system, the MRP-system. MRP stands for Material Requirements Planning. This system became the heart of the company, scheduling production. The extension to ERP (Enterprise Resource Planning) soon after made it possible to schedule other resources and, essential for management, to report financials. Now execution could be monitored by generating all kinds of reports.

Still, the MRP/ERP-system was wholly disconnected from the engineering world, as the image below shows. Let us have a look at how this worked at that time.

The concept

Products have never been designed from scratch by jumping straight to drawings. In the concept phase, a product was analyzed, mainly on its mechanical behavior. Was there anything else at that time? Many companies owe their existence to a launching product that someone, most of the time the founder of the company, invented in a workshop. The company then improved and enriched this product by starting from the core product and creating enhancements in various areas of applicability.

These new ideas were shared through sketches and prototypes.

The design

The detail design of a product is delivered as a technical documentation set, often a package of manufacturing drawings: assembly drawings with instructions and a parts list on the drawing. Balloon numbers are used to indicate parts in an assembly or section view. In addition, there are the related fabrication drawings. The challenge of this approach is that all definitions must be unique and complete to avoid ambiguity, which could lead to manufacturing errors.

The parts list contains make-parts, supplier parts, and standard parts. The make-parts are specified again by manufacturing drawings, identified by a number that uniquely identifies the correct drawing version. A habit here: Part number = Drawing Number (+ revision)

As the part is identified by a drawing, the part most of the time got an “intelligent” part number and a revision. Intelligent, to support easy recognition, and with a revision because, in the end, we do not want to generate a new part number for every evolution of the part. Read more about this in What the FFF is happening and “Intelligent” part numbers?
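
To make this tangible, here is a minimal sketch of how such an “intelligent” number might be decoded. The format and its fields are purely hypothetical, as every company invented its own convention:

    import re

    # Hypothetical convention: <type><size>-<sequence>-<revision>
    # e.g., "BR20-0042-C" = bracket, 20 mm, serial 0042, revision C
    PATTERN = re.compile(r"^(?P<type>[A-Z]{2})(?P<size>\d{2})-(?P<seq>\d{4})-(?P<rev>[A-Z])$")

    def decode(part_number: str) -> dict:
        """Recover the meaning a human would read from the number itself."""
        match = PATTERN.match(part_number)
        if match is None:
            raise ValueError(f"{part_number} does not follow the numbering convention")
        return match.groupdict()

    print(decode("BR20-0042-C"))
    # {'type': 'BR', 'size': '20', 'seq': '0042', 'rev': 'C'}

The point is that the meaning lives in people's heads and in a convention document, not in the system – which is exactly why these schemes break down later.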

The standardized parts can be either company standard parts or external standard parts. There is a difference between them.

A company standard part could be a certain bracket or a frame – anything the company decided to standardize on for its own products. Company standard parts are treated like make parts; they have an identifier related to their manufacturing drawings. Again, the same habit: Part Number = Drawing Number (+ revision)

A supplier part comes from a supplier that manufactures it based on the supplier's or market specifications. You can specify this part by using the supplier's catalog number or by referring to the standard.

Take, for example, a part specified under a certain ISO/ANSI/DIN-standard: a stainless-steel bolt M8 x 1,25 x 20, meaning a metric bolt with a nominal thread diameter of 8 mm, a thread pitch of 1,25 mm, and a length of 20 mm. You specify the standard part according to the standard; purchasing will decide where to buy this part.

Manufacturing Preparation

This is the most inefficient stage when working in a traditional drawing approach. At this stage, the information provided in drawings needs to be entered into the MRP/ERP-system to start production. This is the place where, as some might say, information is thrown over the wall.

This means a person needs to create process steps in the ERP-system based on the drawing information. For each manufacturing step, there needs to be a reference to the right drawing. Most ERP-systems have a placeholder where you can type the drawing number(s). Later, when companies were using CAD, there could be a reference to a file.

The part number in the ERP-system might be the same as the drawing number; however, the ERP-system requires unique numbers. In the beginning, ERP-systems were the number generators for new parts. The unique number was often 6 to 7 digits long, because that fits in our human short-term memory.

The parts lists on the drawings had to be entered in the ERP-system too – a manual operation that often required additional research by the manufacturing engineer. As the designer might have specified the SS Bolt M8 x 1,25 x 20 as such, manufacturing preparation had to search the ERP-system for the corresponding company part number.

Suppliers have to be sourced for outside-manufactured make parts. In case you do not want to depend on a single supplier, you have to send drawings and specifications to the suppliers before the product is released. The supplier will receive a drawing number with a revision and a status warning.

If everything worked well the first time, there would be no iterations between engineering and manufacturing preparation. However, this is a utopia: prototype changes and potential manufacturing issues will require changes in the drawings, which will lead to new versions. How do you keep consistency between all identifiers?

Manufacturing

During manufacturing, orders are processed based on information from the ERP-system. The shop floor gets the drawing through the reference in the ERP-system. Sometimes there are issues during manufacturing. In coordination with engineering, some adaptations will be made to the manufacturing process, e.g., a changed fit or tolerance. Instead of going back to engineering for a new documentation set, the relevant drawings are redlined. Engineering will update these drawings whenever they touch them in the future (yeah, yeah).

Configuration Management

But will they update them? Perhaps a new version already existed due to the product's evolution. Everything needs to be coordinated manually. Smaller companies rely heavily on people knowing things and talking together.

Larger companies cannot work in the same manner; therefore, they introduce procedures to guarantee that the information flow is consistent and accurate. Here the practices from configuration management come in.

There are many flavors of configuration management. Formal CM was first used in the 1950s to control the technical documentation for complex space and weapons systems (source: ESA CM initiative for SMEs – © 2000). We will see it come back in future posts dealing with more complex products and the usage of computer systems.

Last year I wrote a few times about PLM and configuration management (PLM and CM – a happy marriage?) – not relevant at this moment in our story, as there is no PLM yet.

Where is the BOM?

As you might have noticed, there was no mention of a BOM so far. At this stage, there is only one Bill of Materials, managed in the ERP-system. The source of the BOM is the various parts lists on the drawings, completed with manual additions.

At this stage, nobody talks about an EBOM or MBOM, as there is only one BOM, a kind of hybrid BOM, where manufacturing steps drive the way parts are grouped. Because the information was processed step by step, why would you need a multilevel BOM or a BOM tree?
Note: The image on the left was one of my first images in 2008 when I started my blog.

Summary

Working with drawings introduced “intelligent” part numbers because the documents had to be identified and interpreted by humans. The intelligence of the part number was there to prevent people from making mistakes, as the number already was a kind of functional identifier. Combined with a revision and versioning in the number, nothing could go wrong if handled consistently.

The disadvantage was that new employees had to master the numbering system. Next, there was the risk that employees would keep using a released drawing without noticing its status had changed; only manual actions (retract/replace) would avoid mistakes. And then, there are the disconnected redlined drawings.

The “drawing number equals part number” relation created a constraint that will be hard to maintain in the future. Therefore, you should worry if you still work according to the above principles.

Conclusion

I have reached the 1500 words – a long story – probably far from complete. I encourage readers to provide enhancements that might be relevant in the comments. This post might look like a post for dummies. However, to understand what is applicable to the future, we first need to understand why certain practices have been defined in the past.
I am looking forward to your comments and enhancements to make this a relevant stream of public information for all.

One week ago, Yoann Maingon wrote an innocent post with the question: Has FFF killed? The question was raised in relation to a 2014 problem at GM, where a changed part caused fatal accidents.

The discussion was started by Yoann, and here is my short extract. Assuming this problem was a configuration management issue, Yoann indicated that the problem might be related to the fact that ERP-systems do not carry a revision on the part number – leading to an unnoticed change. Therefore, he assumes there is a disconnect between the PLM-side (where we have parts with multiple lifecycle states and revisions) and ERP (where we have an industrial lifecycle – prototype/production).

He posted his thoughts, and then LinkedIn exploded (currently 116 comments), which means it is a topic that is of significant concern in our community. Next, if you read the comments, there are different viewpoints:

  • What does FFF really imply?
  • What about revisions of parts?
  • What are the best practices?

Let's investigate these viewpoints with some comments.

What does FFF really imply?

When we talk about FFF in engineering, we mean Form, Fit and Function – the three primary characteristics used to describe a part (source: Wikipedia).

  • Form refers to such characteristics as external dimensions, weight, size, and visual appearance of a part or assembly. This is the element of FFF that is most affected by an engineer’s aesthetic choices, including enclosure, chassis, and control panel, that become the outward “face” of the product.
  • Fit refers to the ability of the part or feature to connect to, mate with, or join to another feature or part within an assembly. The “fit” allows the part to meet the required assembly tolerances to be useful.
  • Function is a criterion that is met when the part performs its stated purpose effectively and reliably. In an electronics product, for example, a function can depend on the solid-state components used, the software or firmware, and quite often on the features of the electronics enclosure selected.

One of the comments in Yoann’s post referred to Safe/Unsafe as a potential functional characteristic. I think this addition is not needed. Safety should be a requirement for the part, not a characteristic.

FFF was and still is an approach for engineers to decide whether a new, improved version of the part gets a revision or needs a new part number.

I think before we dive deeper into the other viewpoints, it is crucial to define the part number a little more.

In a correct PLM data model, there are two types of part numbers. First, there is the internal part number that your company uses inside its engineering Bill of Materials to identify a part. This part number can be a meaningless number, only providing uniqueness inside the company.

In 2015 I wrote several posts related to best practices and data modeling for PLM. The most relevant posts to this discussion are here:

The part number can specify a part that needs to be manufactured according to specification, or it can specify a part that needs to be purchased from an available supplier/manufacturer. The manufacturer part number is, most of the time, a meaningful number (6 – 7 characters), as these parts need to be ordered by your company. The manufacturer part number is the SKU for the manufacturer. As you can imagine, the manufacturer's catalog does not mention a revision. In graphics, see the image below:

Your company might sell Product MP-323121 (note: the ID is meaningful to help the customer to order the product).

Internally there is a related EBOM that specifies the product. The EBOM top part is O122 (note: here, we can use a meaningless identifier as all is digitally connected).

For the manufacturing of O122, we need to resolve the EBOM according to its specifications. Therefore, for Part O124, the company needs to decide to purchase from their approved manufacturers either part ABC-21231 or XYZ-88818 (note: again, a meaningful ID as these companies are not digitally connected).

Now coming back to the FFF-discussion. For the orange parts, with a meaningful ID, no revision exists. However, as long as Assembly O122 remains 100% FFF-compatible, the Product ID MP-323121 will not change. This allows your company to optimize the EBOM and/or MBOM while keeping 100% compatibility with the outside world. (Note: the same principle applies to the two manufacturers for Part O124.)

In case Top Assembly O122 has new or changed parts – what should happen there?

At that moment, the definition has changed. The definitions, most of the time described in documents/drawings/models, are related information to the BOM. Therefore, the Top Assembly O122 should get a new identifier. There is no need to call it a revision; it is a new data set in the PLM-system, again with a meaningless identifier, as we are connected digitally.
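
A minimal sketch of this split – meaningless identifiers inside, stable meaningful IDs outside – could look like the code below. All names and IDs are illustrative; this is not any vendor's data model:

    from dataclasses import dataclass, field
    import uuid

    @dataclass
    class InternalPart:
        # Meaningless, system-generated identifier: never reused, never interpreted
        uid: str = field(default_factory=lambda: uuid.uuid4().hex[:8])

    @dataclass
    class CatalogItem:
        product_id: str            # meaningful and stable towards the outside world
        realized_by: InternalPart  # the current internal definition

    def evolve(item: CatalogItem, new_def: InternalPart,
               fff_compatible: bool, new_product_id: str = "") -> CatalogItem:
        """Swap the internal definition; keep the external ID only if FFF-compatible."""
        if fff_compatible:
            return CatalogItem(item.product_id, new_def)  # outside world sees no change
        if not new_product_id:
            raise ValueError("a non-FFF-compatible change needs a new external product ID")
        return CatalogItem(new_product_id, new_def)

    product = CatalogItem("MP-323121", InternalPart())
    product = evolve(product, InternalPart(), fff_compatible=True)  # MP-323121 survives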

What about revisions of parts?

Of course, the management of changes existed long before PLM-systems were introduced.

The specifications of a part were defined in drawings. The drawing contained all the information, not only the geometry definitions, but also specifications on how to manufacture the part.

For complex products, a considerable set of consistently related drawings would be released to manufacturing – a release process with physical signatures on it.

At the same time, there was no discussion: the drawing represents the part. And as there was no digital connection, part numbers/drawing numbers were meaningful, often with the format of the drawing as part of the identifier.

In case changes were needed, for example, fixing a dimension or tolerance as discovered during manufacturing, the drawing had to be revised to remain consistent. First, in the original drawing, the issue or change was marked in red (redlining). Then engineering had to create a new version of the drawing.

Depending on the impact of the change (here the FFF-principle comes in as well), people decided whether a new part number was needed (an FFF-change) or whether the change only required an update of the drawing(s), meaning a revision. If the difference was small (for example, adding a missing annotation), it could be called a minor change, all to be reflected in the drawing number, which equals the part number in this approach. So, when we talk about revisions of parts, we are talking about a document change.
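
In pseudo-logic, that classic decision could be sketched as below. This is a deliberate simplification; real CM procedures add many more criteria:

    def classify_change(form_changed: bool, fit_changed: bool,
                        function_changed: bool, documentation_only: bool) -> str:
        """Classic FFF decision: new part number, revision, or minor change."""
        if form_changed or fit_changed or function_changed:
            return "new part number"   # interchangeability is broken
        if documentation_only:
            return "minor change"      # e.g., adding a missing annotation
        return "revision"              # same FFF, updated definition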

A lousy practice stemming from that approach is that manufacturing often just redlines a drawing and keeps the redlined drawing as their source. It is too time-consuming or difficult to update the source drawing(s) through a change process. Engineering is not aware of this change, and when a later change comes through from engineering, these “fixes” might be missed, as there is no traceability.

Generic example of a PLM data model and its relations

When PLM-systems were introduced, of course, companies did not want to disrupt their existing ways of working. Therefore, they asked the PLM-editors to enable revisions on parts, and so the PLM-editors did (and do).

Decoupling of parts and documents in a PLM data model

However, if you want to use the PLM-system in the best manner, you need to decouple the concept “part number equals drawing number” and start using meaningless identifiers, as relations between parts and drawings are managed in the PLM-system through relational links.
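
A hedged sketch of this decoupled model is shown below: parts and documents are separate records connected by a relational link instead of a shared number (class and field names are mine, for illustration only):

    from dataclasses import dataclass, field
    import uuid

    @dataclass
    class Document:
        doc_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
        revision: str = "A"

    @dataclass
    class Part:
        part_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])
        specified_by: list = field(default_factory=list)  # relational links to Documents

    bracket = Part()
    bracket.specified_by.append(Document())  # drawing number no longer equals part number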

Relevant posts related to the PLM data model are:

What are the best practices?

As some people mentioned in their comments to Yoann's post: why do we have to answer this question, as everything is already well understood and described in best practices? I agree with that statement: best practices exist – so how do you obtain them?

First, there is the whole framework of Configuration Management, which existed long before PLM-systems were introduced. If you follow their methodology, you can be (almost) guaranteed your information is consistent and correct. Configuration Management is crucial in areas where the impact of an error is enormous, like the GM-example Yoann referred to. Also, companies in the Aerospace and Defense industry are the ones that have strict configuration management in place.

Configuration management does not come for free. It requires an investment in skills, potentially a change in ways of working, and it introduces overhead. Manufacturing companies that create less “risky” products often focus more on optimizing (= reducing) the cost of their internal processes instead of investing in proper methodologies to manage consistency.

If you want to learn more about CM, investigate the Institute of Process Excellence (IPX), the founders of the CM2 framework for Enterprise Configuration Management, and much more. Note: their knowledge does not come for free, which I can understand. However, it also creates a barrier to a company's further investment in CM, as this kind of strategic investment is hard to sell at the management level by individuals in a company.

In the context of CM, I advise you to follow Martijn Dullaart, who is quite active in our social community. His latest blog post related to this thread is: It’s about Interchangeability and Traceability

With the introduction of PLM-systems, these companies and the PLM-editors created the opportunity to implement configuration management in their systems.

The data inside the system would be the “single version of the truth.” Unfortunately, this was, most of the time, just a sales strategy, falsely giving the impression that information was now under control. Last year I wrote several posts related to the relation between PLM and CM, starting from PLM and Configuration Management – a happy marriage?

If you are interested in another resource for information related to these topics, have a look at the website of Jörg Eisenträger, who also shares his collected best practices for PLM and CM (thanks, Paul van der Ree, for the link).

Don’t expect best practices from your PLM-vendors as their role is to sell software. It is the continuous discussion between:

  • A PLM-system that forces companies to work according to embedded methodology (hard to sell/implement but idealistically correct)

And

  • A flexible PLM-system that allows you to build and configure anything (easy to sell/challenging to implement correctly, depending on “wise” decisions)

The Future

Even though most companies still work drawing-centric, with or without a linked PLM-backbone for BOM-management, the next upcoming challenge is to evolve to model-based practices. Current CM-practices still talk about documents, although documents are already electronic data sets in that context. The future, however, is the model-based enterprise, which evolves around connected models – 3D models, but also simulation and software models – with different lifecycles and paces of change. For the model-based enterprise, we need to develop digital best practices that guarantee the same level of quality, executed and/or supported by Artificial Intelligence (AI). AI is needed because human beings cannot physically analyze and understand the full impact of a change in such an environment.

Conclusion

The FFF-discussion illustrates that building a consistent framework within PLM is not an easy goal to achieve. My blog buddy Oleg Shilovitsky would claim that we consultants create the complexity. PLM-editors will never solve this complexity; it is up to your company to invest in knowledge to understand why and how to reduce it. With this post and the related links and discussions, I hope more clarity will help you to make “wise” decisions.

This time, a post that has been on the table for a long time already – the importance of having established processes, in particular when implementing PLM. By nature, most people hate processes, as processes might suggest that their personal creativity is limited, whereas large organizations love processes as, for them, they are the way to guarantee predictable performance. So let's have a more in-depth look.

Where processes shine

In a transactional world, processes can be implemented like algorithms, assuming the data to be processed has the right quality. That is why MRP (Material Requirements Planning) and ERP (Enterprise Resource Planning) don't have the mindset of personal creativity. It is about optimized execution driven by financial and quality goals.

When I started my career in the early days of data management, before it was called PDM/PLM, I learned that there is a need for communication related to product data. Terms like revisions and versions started to pop up, combined with change processes. Some companies began to talk about configuration management.

Companies were not thinking PLM along the whole lifecycle. It was more PDM for engineering and ERP for manufacturing. Where PDM was ultimately a document-control environment, ERP was the execution engine relying on documented content, but not necessarily connected. Unfortunately, this is still the case at many companies, and it has to do with the mindset. Traditionally, a company's performance has been measured based on financial reporting coming from the ERP-system. Engineering was an unmanageable cost in the eyes of the manufacturing company's management and the ERP-software vendors.

In the middle of the nineties (the previous century now!), I had a meeting with an ERP country manager to discuss a potential partnership. The challenge was that he had no clue about the value of and complementary need for PLM. Even after discussing with him the differences between iterative product development (with revisioning) and linear execution (on the released product), his statement was:

“Engineers are just resources that do not want to be managed, but we will get them”

Meanwhile, I can say this company has changed its strategy, giving PLM a space in their portfolio combined with excellent slides about what could be possible.

To conclude: for linear execution, the meaning of processes is more or less close to algorithms, and where there is no algorithm, the individual steps in place are predictable, with their own KPIs.

Process certification

As I mentioned in the introduction, processes were established to guarantee a predictable outcome, in particular when it comes to quality. For that reason, in the previous century, when globalization started, companies were somehow forced to get ISO 900x certified. The idea behind these certifications was that a company had processes in place to guarantee an expected outcome, and when they failed, they would have procedures in place to fix these gaps. Companies were doing this because there was no social internet yet to name and shame bad companies. Having ISO 900x certification was the guarantee of delivering quality. In the same perspective, we can see configuration management as a system of best practices to guarantee that product information is always correct.

Certification was and is heaven for specialized external auditors and consultants. To get certification, you needed to invest people and time to describe your processes, and once these processes were defined, there were regular external audits to ensure the quality system was being followed. The beauty of this system: the described procedures were more or less “best intentions,” not enforced. When the auditor came, the company had to play some theater to show the processes were followed; the auditor would find some improvements for next year, and management was happy the certification was passed.

This changed early this century. In particular, mid-market companies were no longer motivated to keep up this charade. The quality process manual remained as a source of inspiration, but external audits were no longer needed. Companies were globally connected and reviewed, so reputation could be checked easily.

The result: there are documented quality procedures, and there is reality. The more disconnected employees became in a company due to mergers or growth, the more individual best practices became the way to deliver the right product and quality, combined with accepted errors and fixes downstream or later. The hidden cost of poor quality is still a secret within many companies. Talking with employees, you find they all have examples where their company lost a lot of money due to quality mistakes. Yet, in less regulated industries, there is no standard approach, like CAPA (Corrective And Preventive Actions), APQP, or 8D, to solve it.

Configuration Management and Change Management processes

When it comes to managing the exact definition of a product, either an already manufactured product or a product currently being made, there is a need for Configuration Management. Before there were PLM-systems, configuration management was done through procedures defining configurations based on references to documents with revisions and versions. In the aerospace industry, separate systems for configuration management were developed to ensure the exact configuration of an aircraft could be retrieved at any time. Less regulated industries used a more document-based, procedural approach, as strict as possible. You can read about the history of configuration management and PLM in an earlier blog post: PLM and Configuration Management – a happy marriage?

With the introduction of PDM- and PLM-systems, more and more companies wanted to implement their configuration management, and in particular their change management, inside the system, as changes are always related to product information that can reside in a PLM-system. A change to a part can be proposed (ECR), analyzed, and approved, leading to an implementation of the change (ECO), which is based on changed specifications, designs (3D models / drawings), and more. You can read the basics here: The Issue and ECR/ECO for Dummies (Reprise)
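
As a hedged sketch, the classic ECR-to-ECO flow can be reduced to a simple state machine, as below. Real PLM workflows add reviewers, impact analyses, and fast-track variants; the states here are illustrative:

    from enum import Enum, auto

    class ChangeState(Enum):
        ECR_PROPOSED = auto()
        ECR_ANALYZED = auto()
        ECR_APPROVED = auto()
        ECO_IN_WORK = auto()
        ECO_RELEASED = auto()

    # Each state has exactly one happy-path successor in this simplification
    TRANSITIONS = {
        ChangeState.ECR_PROPOSED: ChangeState.ECR_ANALYZED,
        ChangeState.ECR_ANALYZED: ChangeState.ECR_APPROVED,
        ChangeState.ECR_APPROVED: ChangeState.ECO_IN_WORK,   # request becomes an order
        ChangeState.ECO_IN_WORK: ChangeState.ECO_RELEASED,   # changed definitions released
    }

    def advance(state: ChangeState) -> ChangeState:
        """Move a change to its next lifecycle state."""
        return TRANSITIONS[state]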

The Challenge (= Problem) of Digital Processes

More and more companies are implementing change processes fully in PLM, and this is the point that creates the most friction in a PLM implementation. The beauty of digital change processes is that they can be foolproof. No change goes unnoticed, as everyone is forced to follow the predefined procedures: either a type of fast track in case of lightweight (= low-risk) changes, or the full change process when the product is already in a mature state.

Like with the ISO 900x processes, the PLM-implementer often plays the role of the consultancy firm that needs to recommend to the company how to implement configuration management and change processes. The challenge here is that the company, most of the time, does not have a standard view of its change processes, and for sure, the standard change management inside PLM is not identical to their processes.

Here the battle starts….

Management believes that digital change processes, preferably out-of-the-box, are crucial to implement, whereas users feel their job becomes more administrative than creative. Users who create information don't want to be bothered with the decisions for numbering and revisioning.

They expect the system to do that easily for them – which does not happen, as old procedures, responsibilities, and methodologies do not align with the system. Users are not measured or challenged on data quality; they are measured on the work they deliver that is needed now. Let's first get the work done before we make sure everything is consistently defined in the PLM-system.

Digital Transformation allows companies to redefine the responsibilities for users related to the data they produce. It is no longer a 3D Model or a drawing, but a complete data set with properties/attributes that can be shared and used for analysis and automation.

Conclusion

Implementing digital processes for PLM is the most painful, but required, step for a successful implementation. As long as data and processes are not consistent, we can keep on dreaming about automation in PLM. Therefore, digital transformation inside PLM should focus on new methods and responsibilities to create a foundation for the future. Without agreement on the digital processes, there will be growing inefficiency in the future.


Image: waitbutwhy.com

Two weeks ago I wrote about the simplification discussion around PLM – Why PLM never will be simple. There I focused on the fact that even sharing information in a consistent, future-proof way of working is already challenging, despite easy-to-use communication tools like email or social communities.

I mentioned that sharing PLM data is even more challenging due to its potential revisions, versions, statuses, and context. This brings us to the topic of configuration management, needed to manage the consistency of information – a challenge with increasingly sophisticated products and systems. Simple tools will never fix this complexity.

To manage the consistency of a product, configuration management (CM) is required. Two weeks ago I read the following interesting post from CMstat: A Brief History of Configuration Management Software.

An excellent introduction if you want to know more about the roots of CM, be it that, towards the end, the post starts to list all the disadvantages and reasons why you should not think about CM using PLM-systems.

The following part amused me:

 The Reality of Enterprise PLM

It is no secret that PLM solutions were often sold based in good part on their promise to provide full-lifecycle change control and systems-level configuration management across all functions of the enterprise for the OEM as well as their supply and service chain partners. The appeal of this sales stick was financial; the cost and liability to the corporation from product failures or disasters due to a lack of effective change control was already a chief concern of the executive suite. The sales carrot was the imaginary ROI projected once full-lifecycle, system-level configuration control was in effect for the OEM and supply chain.

Less widely known is that for many PLM deployments, millions of budget dollars and months of calendar time were exhausted before reaching the point in the deployment road map where CM could be implemented. It was not uncommon that before the CM stage gate was reached in the schedule, customer requirements, budget allocations, management priorities, or executive sponsors would change. Or if not these disruptions within the customer’s organization, then the PLM solution provider, their software products or system integrators had been changed, acquired, merged, replaced, or obsoleted. Worse yet for users who just had a job to do was when solutions were “reimagined” halfway through a deployment with the promise (or threat) of “transforming” their workflow processes.

Many project managers were silently thankful for all this as it avoided anyone being blamed for enterprise PLM deployment failures that were over budget, over schedule, overweight, and woefully underwhelming. Regrettably, users once again had to settle for basic change control instead of comprehensive configuration management.

I believe the CMstat-writer is generalizing too much and preaching for their own parish. Although my focus lies on PLM, I have also learned the importance of CM, and for that reason, I will share a view on CM from the PLM side:

Configuration Management is not a target for every company

The origins of Configuration Management lie in the Aerospace and Defense (A&D) industries. These industries have high quality, reliability, and traceability constraints. In simple words: you need to prove your product works correctly, as specified, in all described circumstances, and keep this consistent along the lifecycle of the product.

Moreover, imagine you delivered the perfect product; implementing changes then requires a full understanding of the impact of the change. What is the impact of the change on the behavior or performance? In A&D, the question is: is it still safe and reliable?

Somehow, PLM and CM are enemies. The main reason why PLM-systems are used is Time to Market – bringing a product as fast as possible to the market with acceptable quality. Being first is sometimes more important than high quality. CM is considered a process that slows down Time to Market, as managing consistency and continuous validation take time and effort.

Configuration Management in aviation is crucial, as everyone understands that you cannot afford to discover a severe problem during a flight. All the required verification and validation efforts make CM a costly process along the product lifecycle. Airplane parts are 2 – 3 times more expensive than potentially the same parts used in other industries. The main reason: airplane parts are tested and validated for all expected conditions along their lifecycle. Other industries do not spend so much time on validation. They validate only where issues can hurt the company, either for liability or for costs.

Time to Market even impacts the aviation industry, as we can see from the commercial aircraft battle(s) between Boeing and Airbus. Who delivers the best airplane (size/performance) at the right moment in the global economy? The Airbus A380 seems to have missed its future targets – too big, not flexible enough. The Boeing 737 MAX appeared to target a market sweet spot (fuel economy); however, the recent tragic accidents with this plane seem to have been caused by Time to Market pressure to certify the aircraft too early. Or is the complexity of a modern airplane unmanageable?

CM based on PLM-systems

Most companies had their configuration management practices long before they started to implement PLM. These practices were, most of the time, documented in procedures, leading to all kinds of coding systems for these documents. Drawing numbers (the specification of a part/product), specifications, and parts lists all had a meaningful identifier combined with a version/revision and status. For example, the Philips 12NC coding system is famous in the Netherlands and is still used among spin-offs of Philips and their suppliers, as it offers a consistent framework to manage configurations.

Storing these documents in a PDM/PLM-system to provide centralized access was not a big problem; however, companies also expected the PLM-system to provide automation and functionality supporting their configuration management procedures.

A challenge for many implementers for several reasons:

  • PLM-systems do not offer a standard way of working – if they would do so, they could only serve a small niche market – so it needs to be “configured/customized.”
  • Company configuration management rules sometimes cannot be mapped to the provided PLM data model and its internal business logic. This has led to costly customizations where, in the best case, implementer and company agreed somewhere in the middle. Worst case, as the writer of the CM blog mentions, it becomes an expensive, painful project.
  • Companies do not have a consistent configuration management framework, as Time to Market is leading – “we will fix CM later” is the idea, and they let their PLM-implementer configure the PLM-system as well as possible. Still, at the management level, the value of CM is not recognized.
    (see also: PLM-CM-ALM – not sexy ?)

In companies that I worked with, those who were interested in a standardized configuration management approach were trained in CMII. CMII (or CM2) is a framework supported by most PLM-systems, sometimes even as a pre-configured template to speed up the implementation. Still, as PLM-systems serve multiple industries, I would not expect any generic PLM-vendor to offer Commercial Off-The-Shelf (COTS) CM-capabilities – there are too many legacy approaches. You can find a good and more in-depth article related to CMII here: Towards Integrated Configuration Change Management (CMII) from Lionel Grealou.


What’s next?

Current configuration management practices are very much based on the concept of managing documents. However, products are more and more described in a data-driven, model-based approach. You can find all the reasons why we are moving to a model-based approach in my blog post from last year. It is important to realize that current CM practices in PLM were designed with mechanical products and their lifecycles as a base. With the combination of hardware and software, integrated and with different lifecycles, CM has to be reconsidered with a new holistic concept. The Institute of Process Excellence provides CM2 training but is also active in developing concepts for the digital enterprise.

Martijn Dullaart, Lead Architect Configuration Management @ ASML & Chair @ IPE/CM2 Global Congress, has published several posts related to CM and a model-based approach – you can find them via his LinkedIn profile. As you can read from his articles, organizations are trying to find a new, consistent approach.

Perhaps CM as a service to a Product Innovation Platform, as the CMstat blog post suggests? (quote from the post below)

In Part 2 of this CMsights series on the future of CM software we will examine the emerging strategy of “Platform PLM” where functional services like CM are delivered via an open, federated architecture comprised of rapidly-deployable industry-configured applications.

I am looking forward to Part 2 of CMsights – an approach that makes sense to me, as system boundaries will disappear in a digital enterprise. It will be more critical in the future to create consistent data flows in the right context and based on data with the right quality.

Conclusion

Simple tools and complexity need to be addressed in the right order. Aligning people and processes efficiently to support a profitable enterprise remains the primary challenge for every business. Complex products, more dependent on software than hardware, require new ways of working to stay competitive. Digitization can help to implement these new ways of working. Experienced PLM/CM experts know the document-driven past. Now it is time for a new generation of PLM and CM experts to start from a digital concept and build consistent and workable frameworks. Then the simple tools can follow.


The digital thread according to GE

In my earlier posts, I have explored the incompatibility between current PLM practices and the future needs of digital PLM. Digital PLM is one of the terms I am using for future concepts. Actually, in a digital enterprise, system borders become vague; it is more about connected platforms and digital services. Current PLM practices can be considered Coordinated, where the future of PLM aims at Connected information. See also Coordinated or Connected.

Moving from current PLM practices towards modern ways of working is a transformation for several reasons.

  • First, because the scope of current PLM implementations is, most of the time, focused on engineering. Digital PLM aims to offer product information services along the whole product lifecycle.
  • Second, because the information in current PLM implementations is mainly stored in documents – drawings still being the leading format. In advanced PLM implementations, there are BOM-structures: the EBOM and MBOM are information structures, again relying on related specification documents, either CAD- or Office-files.

So let's review the transformation challenges related to moving from current PLM to digital PLM.

Current PLM – document management

The first PLM implementations were, most of the time, advanced cPDM implementations, targeted at sharing CAD models and drawings. Deployments started in the engineering department, with the aim to centralize product design information. Integrations with mechanical CAD-systems had the highest priority, including engineering change processes. Multidisciplinary collaboration was enabled by introducing the concept of the Engineering Bill of Materials (EBOM). Every discipline – mechanical, electrical, and sometimes (embedded) software teams – linked their information to the EBOM. The product release process was driven by the EBOM: if the EBOM is released, the product is fully specified and can be manufactured.

Although people complain that implementing PLM is complex, this type of implementation is relatively simple. The only added mental effort you are demanding from the PLM user is to work in a structured way, with a more controlled (rigid) way of working compared to a directory-structure approach. For many people, this controlled way of working is already considered a limitation of their freedom. However, companies are not profitable because their employees are all artists working in full freedom. They become successful if they can deliver products with consistent quality in an efficient way. In a competitive, global market, there is no room anymore for inefficient ways of working, as labor costs add to the price.

The way people work in this cPDM environment is coordinated, meaning that, based on business processes, the various stakeholders agree to offer complete sets of information (read: documents) that contribute to the full product definition. Whether all contributions are consistent depends on the time and effort people spend to verify and validate consistency. Often this is not done thoroughly, and errors are only discovered during manufacturing or later in the field. Costly, but accepted, as it has always been the case.

Next Step PLM – coordinated document management / item-centric

When the awareness exists that data needs to flow through an organization in a consistent manner, the next step of PLM implementations comes into the picture. Here, I would state, we are really talking about PLM, as the target is to share product data outside the engineering department.

The first logical extension for PLM is moving information from an EBOM view (engineering) towards a Manufacturing Bill of Materials (MBOM) view. The MBOM aims to represent the manufacturing definition of the product and becomes a placeholder to link directly with the ERP-system and suppliers. Having an integrated EBOM / MBOM process with your ERP-system is already a big step forward, as it creates an efficient way of working to connect engineering and manufacturing.

As all the information is now related to the EBOM and MBOM, this approach is often called the item-centric approach. The Item (or Part) is the information carrier, linked to its specification documents.
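
The sketch below illustrates the item-centric idea with the O122 example from earlier: items carry links to their specification documents, and an MBOM view is derived from the EBOM. It is deliberately naive – real systems add effectivity, quantities, plants, and real restructuring rules:

    from dataclasses import dataclass, field

    @dataclass
    class Item:
        item_id: str
        documents: list = field(default_factory=list)  # links to specifications
        children: list = field(default_factory=list)   # child Items

    # EBOM: how engineering defines the product
    ebom = Item("O122", documents=["DRW-O122"], children=[
        Item("O123", documents=["DRW-O123"]),
        Item("O124", documents=["SPEC-O124"]),
    ])

    def derive_mbom(ebom_item: Item) -> Item:
        """Naive EBOM-to-MBOM pass-through; real restructuring regroups parts per
        manufacturing step and resolves purchased parts to manufacturer parts."""
        return Item(ebom_item.item_id + "-M",
                    documents=list(ebom_item.documents),
                    children=[derive_mbom(c) for c in ebom_item.children])

    mbom = derive_mbom(ebom)  # the placeholder structure to hand over to ERP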


Managing the right version of the information in relation to a specific version of the product is called configuration management. And the better you have your configuration management processes in place, the more efficiently and with higher confidence you can deliver and support your products. Configuration Management is again a typical example of a coordinated approach to managing products and documents.

Implementing this type of PLM is already more complex, as it needs different disciplines to agree on a collective process across various (enterprise) systems. ERP-integrations are technically not complicated; it is the agreement on a leading process that makes it difficult, as the holistic view is often missing.

Next, next step PLM – the Digital Thread

Continued reading might give you the impression that the next step in PLM evolution is the digital thread, and this can be the case, depending on your definition of the digital thread. Oleg Shilovitsky recently published an article, Digital Thread – A new catchy phrase to replace PLM?, related to his observations from ConX18, illustrating that there are many viewpoints to this concept. And of course, some vendors promote their perfect fit based on their unique definition. In general, I would classify the idea of the Digital Thread in two approaches:

The Digital Thread – coordinated

In the Digital Thread – coordinated approach, we are not revolutionizing the way of working in an enterprise. In the coordinated approach, the PLM environment is connected with another overlay, combining data from various disciplines into an environment where the dependencies are traceable. This can be the Aras overlay approach (here explained by Oleg Shilovitsky), the PTC Navigate approach, or others, using an extra layer to connect the various discipline data and create traceability in a more or less non-intrusive way. Similar, but less intrusive, concepts can be realized through Business Intelligence applications, although they are more read-only than a system approach.

The Digital Thread – connected

In the Digital Thread – connected approach, the idea is that information is stored in an extremely granular way and shared among disciplines. Instead of the coordinated way, where every discipline can have its own data sources, here the target is to be data-driven (neutral/standard formats). I described this approach in the various aspects of the model-based enterprise. The challenge of a connected enterprise is the standardized data definition needed to make information available to all stakeholders.

Working in a connected enterprise is extremely difficult, in particular for people educated in the old-fashioned ways of working. If you have learned to work with shared documents, like Google Docs or Office documents in sharing mode, you will understand the mental change you have to go through: continuously sharing information instead of waiting until you feel your part is complete.

In the software domain, companies are used to working this way and to integrating data in a continuous stream. We have to learn to apply these practices to a complete product lifecycle, where the product consists of hardware and software.

Still, the connected way of working is the vision digital enterprises should aim for, as it dramatically reduces information conversion, overhead, and ambiguity. How we will implement it in the context of PLM / Product Innovation is a learning process, where we should not be blocked by our echo chamber, as Jan Bosch states in his latest post: Don't Get Stuck In Your Company's Echo Chamber

Jan Bosch comes from the software world, promoting the Software-Centric Systems conference SC2 as a conference to open up your mind. I recommend taking part in upcoming PLM-related events: CIMdata's PLM Roadmap Europe combined with PDT Europe on October 24/25th in Stuttgart, or, if you live in the US, the upcoming PI PLMx CHICAGO 2018 on Nov 5/6th.

Conclusion

Learning and understanding are crucial and take time. A digital transformation has many aspects to learn – keep in mind the difference between coordinated (relatively easy) and connected (extraordinarily challenging but promising). Unfortunately, there is no populist way to become digital.

Note:
If you want to continue learning, please read this post – The True Impact of Industry 4.0 Revealed  -and its internal links to reference information from Martijn Dullaart – so relevant.


In my earlier posts, I described a generic PLM data model and practices related to Products, BOMs, and recently the EBOM and (CAD) Documents. This time I want to elaborate a little bit more on the various EBOM characteristics.


The EBOM is the place where engineering teams collaborate and define the product. A released EBOM is supposed to give the full engineering specification of how a product should behave, including material quality and tolerances. This makes it different from the MBOM, which contains the specification of how this product should be manufactured, based on exact components and materials.

Depending on the type of product, there are several EBOM best practices, which I will discuss here (briefly) in alphabetical order:

EBOM & Buy Part

Usually, an EBOM consists of Make and Buy parts – an attribute on the EBOM part indicates the preferred approach. Make parts are typically sourced from qualified suppliers, where Buy parts can be more generic and based on qualified vendors. Engineering specifies who the approved Manufacturers for the part are (AML); purchasing decides who the approved Vendors for this part are (AVL). In general, Buy parts do not need engineering effort every time the part is used in a product.
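
As a sketch, the AML/AVL split could be modeled as below – the AML being an engineering decision, the AVL a purchasing decision. Class and field names are mine, for illustration:

    from dataclasses import dataclass, field
    from enum import Enum

    class SourcingType(Enum):
        MAKE = "make"
        BUY = "buy"

    @dataclass
    class EBOMPart:
        part_id: str
        sourcing: SourcingType
        approved_manufacturers: list = field(default_factory=list)  # AML (engineering)
        approved_vendors: list = field(default_factory=list)        # AVL (purchasing)

    bolt = EBOMPart("O124", SourcingType.BUY,
                    approved_manufacturers=["ABC", "XYZ"],
                    approved_vendors=["Distributor-1", "Distributor-2"])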

EBOM & CAD related

My previous post already discussed some of the points related to the EBOM and CAD Documents. Here I want to extend this a little, addressing the close relation between MCAD parts and EBOM parts. In particular, in the Engineering To Order industry, there is, most of the time, no standard product to relate to. In that case, mechanical CAD can be the driver for the EBOM definition, and usually, EBOM Make parts are designed uniquely. The challenge is to recognize similar parts that might already exist and reuse them. Classification (an old post here) and geometric search capabilities support the modern engineer. I will come back to classification in a later post.

EBOM – Configuration Item

In case a product is designed for mass production throughout a longer lifetime, it becomes necessary to manage the product configuration over time: how is the product defined today, while avoiding a complete EBOM to manage for each product variant? The EBOM can be structured with Options and Variants. In that case, having Configuration Items in the EBOM is crucial. The Configuration Item is the top part that is versioned and controlled. Parts below the Configuration Item, mostly standard parts, do not impact the version of the Configuration Item as long as the Form-Fit-Function of the Configuration Item does not change. Configuration Management is a topic on its own, and some people believe PLM systems were invented to support Configuration Management.
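
A minimal sketch of this versioning behavior is shown below – swapping a standard part below the Configuration Item only bumps the CI version when the change breaks Form-Fit-Function (an illustrative simplification, not a vendor model):

    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationItem:
        ci_id: str
        version: int = 1
        children: list = field(default_factory=list)  # mostly standard parts

        def change_child(self, old: str, new: str, fff_compatible: bool) -> None:
            """Swap a child part; only an FFF-breaking change bumps the CI version."""
            self.children[self.children.index(old)] = new
            if not fff_compatible:
                self.version += 1

    ci = ConfigurationItem("CI-100", children=["STD-BOLT-M8"])
    ci.change_child("STD-BOLT-M8", "STD-BOLT-M8-V2", fff_compatible=True)  # version stays 1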

EBOM – Company Standard Part

Company standard parts are often designed parts that should be used across various products or product lines. The advantage of company standard parts is that they reduce costs throughout the whole product lifecycle: less design time, less manufacturing setup time, less material sourcing effort, and potentially lower material costs thanks to higher volumes. Any EBOM part could, at a certain moment, become a company standard part, and it is recommended to use a classification for these parts. Otherwise, they will not be found again. As mentioned before, I will come back to classification.

EBOM – Functional group

Sometimes during the design of a product, several parts are logically grouped together from the design point of view, either because they are modular or because they always appear as a group of parts.

The EBOM, in that case, can contain phantom parts, which do not represent an end item. These phantom parts assist the company in understanding the impact of changing one of the individual parts in this functional group.

EBOM – Long Lead

In typical Engineering to Order or Build to Order deliveries, there are components on the critical path of the product delivery. Components with a long lead time should be identified and ordered as early as possible during the delivery process. Often the EBOM is not complete or mature enough to pass all the information through to ERP. Therefore, Long Lead items require a fast track towards ERP and a special status in the EBOM reflecting their ordering status. Long Lead items are the example where a company can benefit from a precise interaction between PLM and ERP, with various status handshakes and approvals during the delivery process.

EBOM – Make parts

Make parts in an EBOM are usually specified by their related model and drawings. Therefore, Make parts usually have revisions, but be aware that they do not follow the same versioning as the related model or drawing. A Make part is in an In Work status as long as the EBOM is not released. Once the model is approved, the EBOM part can be approved or released. Often companies do not want to release the data as long as manufacturing is not completed. This is to make sure that the first revision comes out at the first delivery of the product.

EBOM – Materials

In many mechanical assemblies, the designer specifies materials with a particular length, for example, a rubber strip or tubing / piping. When extracting the information from the 3D CAD assembly, this material instance will get a unique identifier. Here, it is important that the Material part has an attribute that describes the material specification. In the ideal data model, this is a reference to a materials library. Next, when manufacturing engineering defines the MBOM, they can decide on the material quantities to purchase for the EBOM Material.
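
A sketch of that ideal data model could look like this – the EBOM material instance references a materials-library entry instead of copying it, and carries its length (all names are illustrative):

    from dataclasses import dataclass

    @dataclass
    class MaterialSpec:
        """Entry in the materials library."""
        spec_id: str
        description: str

    @dataclass
    class EBOMMaterial:
        """Material instance in the EBOM: a reference plus a length, not a copy."""
        instance_id: str
        spec: MaterialSpec
        length_mm: float

    rubber = MaterialSpec("MAT-RUB-01", "EPDM rubber strip, 20 x 3 mm")
    strip = EBOMMaterial("O125", rubber, length_mm=340.0)  # MBOM later decides purchase qty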

EBOM – Part Number

This could be a post on its own. Do we need intelligent part numbers, or can we use randomly generated unique numbers? I have a black-and-white opinion about that. If you want to achieve a digital enterprise, you should aim for randomly generated unique numbers, because in a digital enterprise, data is connected without human transfer. The PLM and ERP link is unambiguous. Part recognition at the shop floor can be done with labels and scanning at the workstation. There is no need for a person to remember or transfer information from one system or location by understanding the part number. The uniquely generated number makes sure every person will look at the digital metadata available online, therefore immediately seeing a potential status change or upcoming engineering change. Supporting the intelligent numbering approach allows people to work disconnected again, therefore not guaranteeing that an error-free activity takes place. People make mistakes; machines usually do not.
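
A minimal sketch of that digital approach: generate a meaningless identifier and encode a link to the live PLM record in a scannable label. The URL scheme is hypothetical:

    import uuid

    def new_part_id() -> str:
        """Meaningless, system-generated identifier for a digitally connected enterprise."""
        return uuid.uuid4().hex

    def label_payload(part_id: str, base_url: str = "https://plm.example.com/part/") -> str:
        """Data to encode in a QR label at the workstation (illustrative URL)."""
        return base_url + part_id

    pid = new_part_id()
    print(label_payload(pid))  # scanned on the shop floor -> live metadata, status, changes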

EBOM – Service Parts

It is important to identify already in the EBOM which parts need to be serviced in operation, and engineering should relate the service information to the EBOM part upfront. This could be the same single part with different packaging, or it could be a service kit plus instructions linked to the part. In a PLM environment, it is important that this activity is done upfront by engineering, to avoid having to retrieve the data later and work on service information again. A sensitive point here is that, in the classical approach, engineers are not measured on the benefits they deliver downstream when the products are in the field. Too many companies work in silos here.

EBOM – Standard Parts

Finally, as I have already reached the 1000 words, a short statement about EBOM standard parts. These standard parts, based on international or commercial standards, do not need a revision, and often they have a specification sheet, not necessarily a 3D model for visualization. Classification is crucial for standard parts, and I will write a separate post about dealing with standard parts, both mechanical and electrical.

Concluding: in this post we have seen that the EBOM has many facets, and depending on the type of EBOM part, different behavior is expected. It made me realize that PLM is not as simple as I thought. In general, when defining an EBOM data model, you should try to minimize the number of specific classes for the EBOM part. Where possible, solve it with attributes (Make/Buy – Long Lead – Service – etc.) and use classification to store the specific attributes per part type, as sketched below. Classification will be my next topic, as it appears.
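As a closing illustration, a minimal sketch of such a data model (the field and classification names are my own assumptions): one EBOM part class, with attributes for the behavior flags and a classification reference for the type-specific attributes:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EBOMPart:
    number: str
    make_or_buy: str = "make"              # "make" or "buy"
    long_lead: bool = False
    service_part: bool = False
    classification: Optional[str] = None   # e.g. "standard.fastener.bolt"
    attributes: dict = field(default_factory=dict)  # class-specific attributes

# One class for all EBOM parts; differences in behavior are driven by
# attribute values and classification, not by separate subclasses.
bolt = EBOMPart("B-001", make_or_buy="buy",
                classification="standard.fastener.bolt",
                attributes={"thread": "M8", "length_mm": 30})
```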

Feel free to jump on any of the EBOM characteristics for an extended discussion.

note: images borrowed from the internet contain links to the original location where I found them. The context there is not always relevant for this post.

I believe that PLM, with its roots in automotive, aerospace and discrete manufacturing, is accepted as a vital technology / business strategy to make a company more competitive and guarantee its future. Writing this sentence feels like marketing, trying to generalize a lot of information in one sentence.

Some questions you might raise:

  • Is PLM a technology or business strategy?
  • Are companies actually implementing PLM or is it extended PDM?
  • Does PLM suit every company?

My opinion:

  • PLM is a combination of technology (you need the right IT-infrastructure / software to start from), and the implementation is a business approach (it should be a business transformation). PLM vendors will tell you that it is their software that makes it happen; implementers have their preferred software and methodology to differentiate themselves. It is not a single simple solution. Interestingly enough, Stephen Porter wrote about this topic this week in the Zero Wait-State blog: Applying the Goldilocks Principle to PLM – finding balance. Crucial for me is that PLM is about sharing data (not only/just documents) with status and context. Sharing data is the only way to break down the (information) silos in a company and provide each person with a more adequate understanding.
  • Most companies that claim to have implemented PLM have implemented just extended PDM, which means adding other engineering data and processes on top of the CAD software. This was also mentioned by Prof Eigner in his speech during PLM Innovation early this year in Munich. PLM is still considered by management as an engineering tool, and on the other side they have ERP. Again, sharing all product IP with all its iterations and maturity (PLM) and pushing execution to ERP is still a unique approach for more traditional companies. See also a nice discussion from my blog buddy Oleg: BOM: Apple of Discord between PLM and ERP?
  • Not every business needs the full PLM capabilities that are available. Larger companies might focus more on standardized processes across the enterprise; smaller companies might focus more on sharing the data. In my opinion, there is no system that suits all. One point everyone is dreaming of: usability. And as PLM decisions in small companies are made more bottom-up, the voice of the user is stronger there. Therefore I might stick to my old post PLM for the mid-market: mission impossible?

However, the title of this blog post is: PLM for all industries. Therefore, I will not go deeper on the points above. Topics for the future perhaps.

PLM for all industries ?

This time I will share with you some observations and experiences based on interactions with companies that do not necessarily think about PLM. I have been working with these companies for the past five years, some with success, some still in an awareness phase. I strongly believe the companies described below would benefit a lot from PLM technology and practices.

Apparel

In July, I wrote about my observations during the Product Innovation Apparel event in London. I am not a fashion expert, and here I discovered that, in a sense, PLM in Apparel is much closer to the modern vision of PLM than classic PLM. They depend on data sharing in a global model, across disciplines and suppliers, driven by their crazy short time to market and the vast amount of interactions in a short time; otherwise they would not be competitive anymore and would disappear.

This figure represents modern PLM

PLM in Apparel is still in the early stages. The classic PLM vendors try to support Apparel with their traditional systems, which are often too complicated or not user-friendly enough. The niche PLM vendors in Apparel have a more lightweight entry level, simple and easy, sometimes cloud-based. They miss the long-term experience of building all the required technology, scalability and security into their products, assuring future upgradability. For sure this market will evolve, and we will see consolidation.

Owner / Operators nuclear

For nuclear plants it is essential to have configuration management in place, which in short means that the plant as it operates (as-built) is the same as specified by its specifications (as-designed). In fact, this is hardly ever the case. A lot of legacy data in paper or legacy document archives does not reflect the actual state; it is stored and duplicated, disconnected from everything else. In parallel, the MRO system (SAP PM and Maximo are major systems) runs in an isolated environment, only dealing with actual data (that might be validated).

In the past five years I have been working and talking with owners/operators of nuclear plants to discuss and improve support for their configuration management.

The main obstacles encountered are:

  • The boiling frog syndrome – it is not that bad
    (and even if it is bad, we won't tell you)
  • An IT-department that believes configuration management is about document management – they set the standards for the tools (Documentum / SharePoint – no business focus)
  • An aging generation, very knowledgeable in their current work, but averse to new ways of information management and highly insistent on keeping the status quo until they retire
  • And the “If it works, do not touch it” approach, somehow related to the boiling frog syndrome.

Meanwhile, the business value of a change towards a PLM infrastructure has been identified. With a PLM environment complementing the operational environment, an owner/operator can introduce coordinated changes to the plant, reduce downtime and improve the quality of information for the future. One week less downtime could provide a benefit of millions of euros.

However, with the currently falling electricity prices in Europe, the profits of owners/operators are under pressure, and they are not motivated to invest in a long-term project at this time. First satisfy the shareholders.


Owner / Operators other process oriented plants

In the nuclear industry, safety is priority one and is required by the authorities; therefore, there is high pressure for data quality and configuration management. For other industries the principles remain the same. Here, depending on the plant lifetime, the criticality of downtime and the risk of catastrophes, the interest in a PLM-based plant information management platform varies. The main obstacles are similar to the nuclear ones:

  • An even bigger boiling frog: we already have SAP PM – so what else would we need?
  • IT standardizes on a document management solution
  • The aging workforce and higher labor costs are not yet identified as threats for the future, when competing against cheaper and more modern plants in the emerging markets – the boiling frog again.

The benefits of a PLM-based infrastructure are less directly visible; still, ROI estimates predict that break-even can be reached after two years. Too long for shareholder-driven companies, although in 10 years' time the plant might need to close due to inefficiencies.


EPC companies

EPC (Engineering, Procurement and Construction) and EPCIC (Engineering, Procurement, Construction, Installation and Commissioning) companies exist in many industries: nuclear new build, oil & gas, chemical, civil construction, and building construction.

They all work on commission for owners/operators, and internally they are looking for ways to improve their business performance. To increase their margin, they need to work more efficiently, faster and often globally, to make use of the best (cheapest) resources around the world. A way to improve quality and margin is through more reuse and modularization. This is a mind-shift, as most EPC companies have a single project / single customer per project in mind, and every owner/operator also pushes their own standards and formats.

In addition, when you start to work on reuse and knowledge capturing, you need a way to control and capture your IP. And EPCs want to protect their IP and not expose too much to their customers, to maintain a dependency on their solution.

The last paragraph should sound familiar: these are the challenges the automotive and aerospace supply chains had to face 15 years ago, and they were the reasons why PLM was introduced. So why do EPC companies not jump on PLM?

  • They have their home-grown systems – hard to replace, as everyone likes their own babies (even when they reach adolescence or show retirement symptoms)
  • Integrated process thinking needs to be developed instead of departmental thinking
  • As they are project-centric, an innovation strategy can only be budgeted inside a huge project, where they can write off the investment to their customer project. However, this makes them less competitive in their bid – so let's not do it
  • Lack of data and exchange standards. Where in the automotive and aerospace industries CATIA was the driving 3D standard, such a 3D standard is not yet available for other industries. ISO 15926 for the process industry is reasonably mature; BIM for the construction industry is still in its discovery phase in many countries.
  • Extremely loose supplier relations compared to automotive and aerospace, which, combined with the lack of data exchange standards, contributes to low investment in information infrastructure.

Conclusion

In the past five years I have been focusing on explaining the significance of PLM infrastructure and concepts to the industries mentioned above. The value lies in sharing data instead of working in silos. If needed, do not call it PLM; call it online collaboration, or controlled Excel on the cloud.
Modern web technologies and infrastructure make this all achievable; however, it is a business change to start sharing. Besides Excel, the boiling frog syndrome dominates everywhere.

  • What do you think?
  • Do you have examples of companies that took advantage of modern PLM capabilities to change their business?

I am looking forward to learning more.


Some weeks ago PLMJEN asked my opinion on Peter Schroer's post and invitation to an ARAS webinar called: Change Management: One Size Will Never Fit All. Change Management is actually a compelling topic, and I realized I had never written a dedicated post on such an essential subject. The introduction from Peter was excellent:

Change management is the toughest thing inside of PLM. It’s also the most important.

For the rest, the post elaborated further on software capabilities and the value of having template processes for various industry practices. I share that opinion when talking to companies that are starting to establish their processes. It is extremely rare that an existing company will change its processes towards the more standard processes delivered by the PLM system when implementing a new system. The rule of thumb is People, Processes and Tools. This is all nicely explained by Stephen Porter in his latest blog post Beware the quick fix successful plm deployment strategies. As I was not able to attend the webinar, here are my more general thoughts on change management and why it is essential for PLM.

Change Management has always been there

It is not that PLM invented change management. Before companies started to use ERP and PDM systems, every company had to deal with managing changes. At that time, their business was mostly local and, compared with today, slow. “Time to market” was more a “time to region” issue. Engineering and Manufacturing operated from the same location, and change management was a personal responsibility, supported by (paper) documents and individuals. Only with the growing complexity of products, growing and global customer demands, and increasing regulatory constraints did it become impossible to manage change in an unstructured manner.

Survival of the fittest change organization

I have worked with several companies where change management was a running Excel business. Running can be interpreted in two ways: the current operation could not stop to step back and look into an improvement cycle, and a lot of people were running around to collect, check and validate information in order to make change estimates and take decisions based on the collected data.

When a lot of people are running, it means your business is at risk. A lot of people means the costs of data (re)search and handling are higher than at competitors who can do this automatically. Even in countries with low labor costs, a lot of people running becomes a threat at a certain moment. In addition, running people can make mistakes or provide insufficient information, which leads to wrong decisions.

Wrong decisions can be costly. Your product may become too expensive; your project may be delayed significantly because it was based on conflicting information between disciplines or suppliers. Additional iterations to fix these issues lead to a longer time to market. Late discoveries can lead to severely high costs; for certain, when the product has already been released to the market, the costs might be tremendous.

From the other side, if making changes becomes difficult because the data has to be collected from various sources through human intervention, organizations might try to avoid making changes.

Somehow this is also an indirect death sentence. The future belongs to companies that are able to react quickly at any time and implement changes.

The analogy is with a commercial aircraft and a fighter plane. Let's take the Airbus A380 in mind and a modern fighter jet, the Joint Strike Fighter (JSF). The Airbus A380 brings you comfortably from A to B, as long as A and B are well-prepared places to land. The flight is comfortable as the plane is extremely stable: a well-planned trip with an aversion to changes of trajectory.

The JSF is by definition an unstable plane. It is only through its computerized steering control that the plane behaves stably in the air. The built-in instability makes it possible to react as quickly as possible to unforeseen situations, preferably faster than the competition. This is a solution designed for change.

Based on your business, you should admire the JSF concept and try to understand where it is needed in your organization.

Why is change management integrated in PLM so important?

If we consider where changes appear the most, it is evident that most changes occur early in the lifecycle of the product. And as long as they happen in the virtual world, with costs not yet committed to the product, they are relatively cheap. To my surprise, many engineering companies and engineering departments apply change management only outside their own environment. Historically this is because outside their environment, connected to prototyping or production, the costs of change are the highest. And the existing ERP system has an Engineering Change process – so let's use that.

Meanwhile, engineering is used to working with the best-so-far information. At any moment, every discipline stores its data in a central repository. This could be a directory structure or a PDM system. Everyone is looking at the latest data. Files are overwritten with the latest versions. Data in the PDM system shows the latest version to all users. Hallelujah.

And this is where it goes wrong. A mechanical engineer has overlooked a requirement in the specification that has been changed. Yes, the latest version of the 20-page document is there. An electrical engineer has defined a new control system for the engine, but has not noticed that the operating parameters of the motor have been changed. Typical examples where a best-so-far environment creates visibility, but the individual user can no longer understand the impact of a change (especially when additional sites perform the engineering work).

Here comes the value of change management in PLM. Change management in PLM can be lightweight in the early design phases, providing checks on changes (baselines) and notifications to the disciplines involved. Approval processes are then more agreements on the changes to implement and their impact on all disciplines.

PLM supports the product definition through the whole product lifecycle, and change management at each stage can have its own particular behavior: in the early stages a focus on notifications and visibility of change, later checking the impact based on the maturity of the various disciplines, and finally, when moving into production with committed materials, a strict and organized change mechanism. It is only in a PLM system that this gradual flow can be supported seamlessly.
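A small sketch of how such stage-dependent behavior could be configured (the rules table, stage names and the `notify` / `request_approval` / `open_change_order` calls are hypothetical illustrations):

```python
# How strict change handling is per lifecycle stage.
CHANGE_RULES = {
    "concept":    {"approval_required": False, "notify_disciplines": True},
    "design":     {"approval_required": False, "notify_disciplines": True},
    "detail":     {"approval_required": True,  "notify_disciplines": True},
    "production": {"approval_required": True,  "notify_disciplines": True,
                   "formal_change_order": True},
}

def handle_change(item, stage, change_board, subscribers):
    """Apply the change policy that matches the item's lifecycle stage."""
    rules = CHANGE_RULES[stage]
    if rules.get("notify_disciplines"):
        for person in subscribers:
            person.notify(f"{item} changed in stage '{stage}'")
    if rules.get("approval_required"):
        change_board.request_approval(item)
    if rules.get("formal_change_order"):
        change_board.open_change_order(item)  # strict, organized mechanism
```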

Change Management and ERP

As mentioned before, most manufacturing companies have implemented change management in ERP, as the costs of change are highest once the product capabilities are committed. However, the ERP system is not the place to explore and iterate towards further improved solutions. The ERP system can be the trigger for a change process based on production issues, but the full implementation of the change requires a change in the product definition, the area where PLM is strong.

NOTE: on purpose I am not mentioning a change in the engineering definition, as in some cases the engineering definition might remain the same and only the manufacturing process or materials need to be adapted. PLM supports these iterations; they are not an ERP execution matter.

Change Management and Configuration Management

So far we have been discussing how the manufacturing system can offer products based on the right engineering definition. As each specific product might not have an individually verified definition at any point in time, there is a need for configuration management (CM). Properly implemented configuration management assures there is a consistent relationship between how the product is specified and defined and the way it is produced. Read a refined and precise explanation on the wiki.
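A minimal sketch of what such a consistency check boils down to, assuming both configurations can be flattened to a mapping from position tag to (part number, revision):

```python
def cm_consistency_report(as_designed: dict, as_built: dict) -> list:
    """Compare the designed configuration with the built/operated one.

    Both arguments map a position tag to a (part number, revision) tuple,
    e.g. {"P-101": ("PUMP-552", "B")}. Returns human-readable deviations.
    """
    issues = []
    for tag, (part, rev) in as_designed.items():
        if tag not in as_built:
            issues.append(f"{tag}: designed {part} rev {rev}, nothing installed")
        elif as_built[tag] != (part, rev):
            found_part, found_rev = as_built[tag]
            issues.append(f"{tag}: designed {part} rev {rev}, "
                          f"installed {found_part} rev {found_rev}")
    for tag in as_built.keys() - as_designed.keys():
        issues.append(f"{tag}: installed but not in the design baseline")
    return issues
```

An empty report is exactly what configuration management tries to guarantee: the as-built plant matches the as-designed definition.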

In one of my following posts I will focus on configuration management practices and on why PLM systems and configuration management are like Siamese twins.

Conclusion:

Storing your data in a (PLM) system only has value if you are able to keep the status of the information and its context current. Only then can a person make the right decisions immediately and with the right accuracy. The more systems or manual data handling in between, the less competitive your company will be. Integrated and lean change management means survival!

Last week I started my final preparation for the PLM Innovation Congress 2012 on February 22nd and 23rd in Munich, where I will speak about Making the Case for PLM. I am looking forward to two intensive days of knowledge sharing and discussion.

It came to my mind that when you make the case for PLM, you must also be clear about what you mean by PLM. And here I started to struggle a little. I have my perception of PLM, but I am also aware that everyone has a different perception of the meaning of PLM.

I wrote about it last year, triggered by a question in the CMPIC group (configuration management) on LinkedIn. The question was: Aren't CM and PLM the same thing? There was a firm belief among some of the members that PLM was the IT platform to implement CM.

A few days ago Inge Craninckx posted a question in the PDM PLM CAD network group about the definition of PLM, based on a statement from the PLMIG. In short:

“PDM is the IT platform for PLM.” Or, expressed from the opposite viewpoint: “PLM is the business context in which PDM is implemented.”

The response from Rick Franzosa caught my attention, and I extracted the following text:

The reality is that most PLM systems are doing PDM, managing product data via BOM management, vaulting and workflow. In that regard, PDM [read BOM management, vaulting and workflow], IS the IT platform for the, in some ways, unfulfilled promise of PLM.

I fully agree with Rick's statement, and coming back to my introduction about making the case for PLM, we need to differentiate how we implement PLM. We also have to keep in mind that no vendor – so also no PLM vendor – will undersell their product. They are all promising.

Two different types of PLM implementation

PLM originally started in 1999 by extending the reach of product data outside the engineering department. However, besides just adding extra functionality to extend the coverage of the lifecycle, PLM also created the opportunity to do things differently. And here I believe you can follow two different definitions and directions for PLM.

Let's start with the non-disruptive approach, which I call the extended PDM approach.

Extended PDM

When I worked six years ago with SmarTeam on the Express approach, the target was to provide an OOTB (Out of the Box) generic scenario for mid-market companies. The main messages were around quick implementation and extending CAD data management with BOM and workflow. Several vendors at that time promoted their quick-start packages for the mid-market, all avoiding one word: change.

I was a great believer in this approach, but the first benchmark project that I governed demonstrated that if you want to do it right, you need to change the way people work, and this takes time (it took 2+ years). For the details, see A PLM success story with ROI from 2009.


Cloud-based solutions have now become the packaging for this OOTB approach, enriched with ease of deployment – no IT investment needed (and everyone avoids the word change again).

If you do not want to change too much in your company, the easiest way to make PDM available to the enterprise is to extend this environment with an enterprise PLM layer for BOM management, manufacturing definition, program management, compliance and more.

Ten years ago, big global enterprises started to implement this approach, using local PDM systems mainly for engineering data management and a PLM system for the enterprise. See the picture below:

clip_image002

This approach has now been adopted by the Autodesk PLM solution, and ARAS is marketing itself in the same direction. You have a CAD data management environment and, without changing much in that area, you connect the other disciplines and lifecycle stages of the product lifecycle by implementing an additional enterprise layer.

The advantage of this approach is that you get a shared and connected data repository of your product data, and you are able to extend it with common best practices: BOM management (all the variants – EBOM/MBOM/SBOM, …) but also connecting the market opportunities and the customer (portfolio management, systems engineering).

The big three, Dassault Systemes, Siemens PLM and PTC, provide the above functionality as a complete set of capabilities – either as a single platform or as a portfolio of products (check the difference between marketing and reality).

Oracle and SAP also fight for the enterprise layer from the ERP side, providing their enterprise PLM functionality as an extension of their ERP functionality. Here, too, in two different ways: as a single platform or as a portfolio of products. As their nature is efficient execution, I would position these vendors as the ones that drive for efficiency in a company, assuming all activities can somehow be scheduled and predicted.

My statement is that extended PDM leads to more efficiency and more quality (as you standardize your processes), and for many companies this approach is a relatively easy way to get into PLM (extended PDM). If your company exists because of bringing new products quickly to the market, I would start the implementation from the PDM/PLM side.

The other PLM – innovative PLM


Most PLM vendors associate the word PLM in their marketing language with Innovation. In the previous paragraph I avoided the word Innovation on purpose. How do PLM vendors believe they contribute to Innovation?

This is something you do not hear so much about. Yes, in marketing terms it works, but in reality? Only a few companies have implemented PLM in a different way, most of the time because they do not carry years of history, numbering systems and standard procedures to consider or to change. They can implement PLM differently, as they are open to change.

If you want to be innovative, you need to implement PLM in a more disruptive manner, as you need to change the way your organization is triggered – see the diagram below:

PLM_flow

The whole organization works around the market and the customer. Understanding the customer and the market needs at every moment in the organization is key to making a change. For me, an indicator of innovative PLM is the way concept development is connected with the after-sales market and the customers. Is there a structured, powerful connection in your company between these people? If not, you are doing extended PDM, not innovative PLM.

Innovative PLM requires a change in business, as I described in my series around PLM 2.0. Personally, I am a big believer that this type of PLM is the lifesaver for companies, but I also realize it is the hardest to implement, as you need people who have the vision and power to change the company. And as I described in my PLM 2.0 series, the longer a company exists, the harder it is to make a fundamental change.

Conclusion

There are two main directions possible for PLM: the first and oldest approach, which is an extension of PDM, and the second approach, a new customer-centric approach driving innovation. It is your choice to make the case for one or the other, based on your business strategy.

Looking forward to an interesting discussion, and see you in Munich, where I will make the case.


Recently I noticed two different discussions. One was on LinkedIn in the CMPIC® Configuration Management Trends group, where Chris Jennings started with the following statement:

Product Lifecycle Management (PLM) vs CM

An interesting debate has started up here about PLM vs CM. Not surprisingly it is revealing a variety of opinions on what each mean. So I’m wondering what sort of reaction I might get from this erudite community if I made a potentially provocative statement like …
“Actually, PLM and CM are one and the same thing” ?


It became a very active discussion, and it was interesting to see that some of the respondents saw PLM as the tool to implement CM. Later the discussion moved more towards systems engineering, with a focus on requirements management. Of course, requirements management is key for CM; you could say CM starts with the capturing of requirements.


There was some discussion about the real definition of PLM, and this triggered my post. Is the definition of PLM secured in a book – and if so, in which book? Historically we have learned that when the truth comes from one book, there is discussion.

But initially, in the early days of PLM, requirements management was not part of the focus for PLM vendors. Yes, requirements and specifications existed in their terminology, but they were not fully integrated. The vendors focused more on the 'middle part' of the product lifecycle – digital mock-up and virtual manufacturing planning. Only a few years later did PLM vendors start to address requirements management (and systems engineering) as part of their portfolio, either through acquisitions of products or by adding it natively.

For me this demonstrates that PLM and CM are not the same. CM initially had a wider scope than early PLM systems supported, although in various definitions of PLM you will see that CM is a key component of PLM practices.

Still, PLM and CM have a lot in common; I wrote about this a year ago in my post PLM, CM and ALM; not sexy! Both are fighting to get enough management support and investment. There is another open discussion in the CMPIC group with the title: What crazy CM quotes have you heard? You could easily apply these quotes to the current opinion of PLM as well. Read them (if you have access) and have fun.

But the same week another post caught my interest: Oleg's post about Inforbix and Product Data Management. I am aware that other vendors are also working on concepts to provide end users with data without requiring data management effort. Alcove9 and Exalead are products with a similar scope, and my excuses to all the companies not mentioned here.

What you see is the trend to make PLM simpler by trying to avoid the CM practices that are often considered 'non-value-add', 'bureaucracy' and more negative terms. I will be curious to learn how CM practices will be adhered to by this 'New Generation of PDM' vendors, as I believe you need CM to manage your products proactively.

What is your opinion about CM and PLM – can modern PLM change the way CM is done?
