
In my last post, My four picks from PLMIF, I ended with the remark that the discussion related to the Multiview BOM concept was not complete. The session presented by James Roche focused on the Aerospace & Defense domain and only touched the surface. There is a lot of confusion related to best practices for BOM handling, sometimes created to promote unique vendor capabilities or to hide system complexity.

Besides, we need to consider the past, as the burden of legacy processes and data is significant, in particular for PLM. Some practices even come from the previous, paper-based century, later mixed with behavior from 3D CAD-systems.

Therefore, to understand the future, I will take you through the past and explain why certain practices were established. Next, in a few upcoming posts, I want to explain the evolution of BOM practices: how each new technology step introduced capabilities that enabled companies to improve their product delivery process.

I will describe the drawing approach (for PLM – the past), the item-centric approach (for PLM – the current), and the model-driven approach (for PLM – the future). How long this series will become is not clear at this stage.

Whenever I come close to 1200 – 1500 words, I will stop and conclude. Based on my to-do list and your remarks, I will continue in a follow-up post. The target is a vendor-neutral collection of information that helps you identify where your business stands and the next possible steps.

Working with drawings

MRP/ERP – the first IT-system

For this approach, I go back fifty years in time, when companies were starting to work with their first significant IT-system, the MRP-system. MRP stands for Material Requirements Planning. This system became the heart of the company, scheduling production. The extension to ERP (Enterprise Resource Planning) soon after made it possible to schedule other resources and, essential for the management, to report financials. Now execution could be monitored by generating all kinds of reports.

Still, the MRP/ERP-system was wholly disconnected from the engineering world, as the image below shows. Let us have a look at how this worked at that time.

The concept

Products have never been designed from scratch by jumping to drawings. In the concept phase, a product was analyzed, mainly on its mechanical behavior. Was there anything else at that time? Many companies owe their existence to a launching product which someone, most of the time the founder of the company, invented in a workshop. The company then improved and enriched this product by starting from the core product, creating enhancements in various areas of applicability.

These new ideas were shared through sketches and prototypes.

The design

The detail design of a product is delivered as a technical documentation set, often a package of manufacturing drawings containing a parts list and assembly instructions. Balloon numbers are used to indicate parts in an assembly or section view. In addition, there are the related fabrication drawings. The challenge for this approach is that all definitions must be unique and complete to avoid ambiguity, which could lead to manufacturing errors.

The parts list contains make-parts, supplier parts, and standard parts. The make-parts are specified again by manufacturing drawings, identified by a number that uniquely identifies the correct drawing version. A habit here: Part number = Drawing Number (+ revision)

As the part is identified by a drawing, the part most of the time got an “intelligent” part number and a revision. Intelligent to support easy recognition, and revisions because, in the end, we do not want to generate a new part number for every evolution of the part. Read more about this in What the FFF is happening and “Intelligent” part numbers?
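To make this tangible, below is a minimal sketch of how such an “intelligent” number could be decoded. The format (functional group, drawing size, sequence number, revision letter) is hypothetical – every company had its own scheme:

```python
import re

# Hypothetical "intelligent" part number, e.g. "BRK-A3-020-C":
# BRK = functional group (bracket), A3 = drawing format/size,
# 020 = sequence number, C = revision letter.
PART_NUMBER_PATTERN = re.compile(
    r"^(?P<group>[A-Z]{3})-(?P<fmt>[A-Z]\d)-(?P<seq>\d{3})-(?P<rev>[A-Z])$"
)

def decode_part_number(number: str) -> dict:
    """Split a part number into the meaning encoded in its segments."""
    match = PART_NUMBER_PATTERN.match(number)
    if not match:
        raise ValueError(f"{number} does not follow the numbering scheme")
    return match.groupdict()

print(decode_part_number("BRK-A3-020-C"))
# {'group': 'BRK', 'fmt': 'A3', 'seq': '020', 'rev': 'C'}
```

The scheme only works as long as everyone assigns and interprets the segments consistently – exactly the weakness discussed later in this post.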

The standardized parts can be either company standard parts or external standard parts. There is a difference between them.

A company standard part could be a certain bracket or a frame – anything that the company decided to standardize on for its own products. Company standard parts are treated like make-parts; they have an identifier related to their manufacturing drawings. Again, here the habit: Part Number = Drawing Number (+ revision)

A supplier part comes from a supplier that manufactures this part based on supplier or market specifications. You can specify this part by using the supplier’s catalog number or by referring to the standard.

For example, a part that has been specified under a certain ISO/ANSI/DIN-standard: a stainless-steel bolt M8 x 1,25 x 20, meaning a metric bolt with a nominal thread diameter of 8 mm, a thread pitch of 1,25 mm, and a length of 20 mm. You specify the standard part according to the standard. Purchasing will decide where to buy this part.

Manufacturing Preparation

This is the most inefficient stage when working in a traditional drawing approach. At this stage, the information provided in drawings needs to be entered into the MRP/ERP-system to start production. This is the place where, as some might say, information is thrown over the wall.

This means a person needs to create process steps in the ERP-system based on the drawing information. For each manufacturing step, there needs to be a reference to the right drawing. Most ERP-systems have a placeholder where you can type the drawing number(s). Later, when companies were using CAD, there could be a reference to a file.

The part number in the ERP-system might be the same as the drawing number; however, the ERP-system requires unique numbers. In the beginning, ERP-systems were the number generator for new parts. The unique number was often 6 to 7 digits in size because that fits human short-term memory.

The parts list on the drawings had to be entered in the ERP-system too – a manual operation that often required additional research by the manufacturing engineer. As the designer might have specified the SS Bolt M8 x 1,25 x 20 as such, manufacturing preparation had to search the ERP-system for the company’s part number.

Suppliers have to be sourced for make-parts manufactured outside. In case you do not want to depend on a single supplier, you have to send drawings and specifications to the suppliers before the product is released. The suppliers will receive a drawing number with revision and a status warning.

If everything worked well the first time, there would be no iterations between engineering and manufacturing preparation. However, this is a utopia: prototype changes and potential manufacturing issues will require changes in the drawings, which will lead to new versions. How do you keep consistency between all identifiers?

Manufacturing

During manufacturing, orders are processed based on information from the ERP-system. The shop floor gets the drawing referenced in the ERP-system. Sometimes there are issues during manufacturing. In coordination with engineering, some adaptations will be made to the manufacturing process, e.g., a changed fit or tolerance. Instead of going back to engineering for a new documentation set, the relevant drawings are redlined. Engineering will update these drawings whenever they touch them in the future (yeah, yeah).

Configuration Management

But will they update them? Perhaps a new version already existed due to the product’s evolution. Everything needs to be coordinated manually. Smaller companies rely heavily on people knowing things and talking together.

Larger companies cannot work in the same manner; therefore, they introduce procedures to guarantee that the information flow is consistent and accurate. Here the practices from configuration management come in.

There are many flavors of configuration management. Formal CM was first used in the 1950s to control the technical documentation for complex space and weapons systems. (Source: ESA CM initiative for SMEs – © 2000) We will see it come back in future posts dealing with more complex products and the usage of computer systems.

Last year I wrote a few times about PLM and configuration management (PLM and CM – a happy marriage?) – not relevant at this moment, as there is no PLM yet in this story.

Where is the BOM?

As you might have noticed, there has been no mention of a BOM so far. At this stage, there is only one Bill of Materials, managed in the ERP-system. The source of the BOM is the various parts lists on the drawings, completed with manual additions.

Nobody talks at this stage about an EBOM or MBOM, as there is only one BOM – a kind of hybrid BOM, where manufacturing steps drive the way parts are grouped. Because the information was processed step by step, why would you want a multilevel BOM or a BOM tree?
Note: The image on the left was one of my first images in 2008 when I started my blog.
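To make the hybrid BOM tangible, here is a minimal sketch (my simplification, not a real MRP schema): a flat parts list grouped by manufacturing step, with the drawing number as the only link back to engineering:

```python
# The single, hybrid ERP BOM of that era as a flat list of
# (operation step, part number, quantity, drawing reference),
# grouped by manufacturing step instead of by engineering structure.
hybrid_bom = [
    (10, "1023456", 1, "D-1023456 rev B"),  # make-part: frame
    (10, "1023789", 4, None),               # standard part: SS bolt M8 x 1,25 x 20
    (20, "1024001", 1, "D-1024001 rev A"),  # make-part: cover
]

# Processed step by step on the shop floor - no multilevel tree needed.
for step, part, qty, drawing in hybrid_bom:
    reference = f" (see {drawing})" if drawing else ""
    print(f"Step {step}: {qty} x part {part}{reference}")
```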

Summary

Working with drawings introduced “intelligent” part numbers, as the documents had to be identified by manual interpretation. The intelligence of the part number was there to prevent people from making mistakes, as the number already was a kind of functional identifier. Combined with a revision and versioning in the number, nothing could go wrong if handled consistently.

The disadvantage was that new employees had to master the numbering system. Next, there was the risk for all employees that a released drawing in circulation would not reflect a later status change; only manual actions (retract/replace) could avoid mistakes. And then, there are the disconnected redlined drawings.

The “drawing number equals part number” relation created a constraint that will be hard to maintain in the future. Therefore, you should worry if you still work according to the above principles.

Conclusion

I reached the 1500 words – a long story – probably far from complete. I encourage readers to provide enhancements that might be relevant in the comments. This post might look like a post for dummies. However, to understand what is applicable to the future, we first need to understand why certain practices were defined in the past.
I am looking forward to your comments and enhancements to make this a relevant stream of public information for all.

One week ago, Yoann Maingon wrote an innocent post with the question: Has FFF killed? The question was raised in relation to a 2014 problem at GM, where a changed part was causing fatal accidents.

Yoann started the discussion, and here is my short extract. The problem was assumed to be a configuration management issue, and Yoann indicated that it might be related to the fact that ERP-systems do not carry a revision on the part number – leading to an unnoticed change. Therefore, he assumes there is a disconnect between the PLM-side (where we have parts with multiple lifecycle states and revisions) and ERP (where we have an industrial lifecycle – prototype/production).

He posted his thoughts, and then LinkedIn exploded (currently 116 comments), which means it is a topic of significant concern in our community. Next, if you read the comments, there are different viewpoints:

  • What does FFF really imply?
  • What about revisions of parts?
  • What are the best practices?

Let’s investigate these viewpoints with some comments.

What does FFF really imply?

When we talk about FFF in engineering, we mean Form, Fit and Function – the three primary characteristics to describe a part (source: Wikipedia):

  • Form refers to such characteristics as external dimensions, weight, size, and visual appearance of a part or assembly. This is the element of FFF that is most affected by an engineer’s aesthetic choices, including enclosure, chassis, and control panel, that become the outward “face” of the product.
  • Fit refers to the ability of the part or feature to connect to, mate with, or join to another feature or part within an assembly. The “fit” allows the part to meet the required assembly tolerances to be useful.
  • Function is a criterion that is met when the part performs its stated purpose effectively and reliably. In an electronics product, for example, a function can depend on the solid-state components used, the software or firmware, and quite often on the features of the electronics enclosure selected.

One of the comments in Yoann’s post referred to Safe/Unsafe as a potential functional characteristic. I think this addition is not needed. Safety should be a requirement for the part, not a characteristic.

FFF was and still is an approach for engineers to decide whether a new, improved version of the part gets a revision or needs a new part number.
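In pseudo-logic, the classic rule of thumb looks like the sketch below – a simplification, as real interchangeability decisions involve more nuance:

```python
def needs_new_part_number(form_changed: bool,
                          fit_changed: bool,
                          function_changed: bool) -> bool:
    """Classic FFF rule of thumb: if Form, Fit or Function changes,
    the part is no longer interchangeable and needs a new part number;
    otherwise the change can be carried as a revision of the same number."""
    return form_changed or fit_changed or function_changed

# A tolerance fix that keeps the part fully interchangeable -> revision
print(needs_new_part_number(False, False, False))  # False: revise the part
# A changed mating dimension breaks Fit -> new part number
print(needs_new_part_number(False, True, False))   # True: new part number
```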

I think before we dive deeper into the other viewpoints, it is crucial to define the part number a little more.

In a correct PLM data model, there are two types of part numbers. First, the internal part number that your company uses inside its engineering Bill of Materials to identify a part. This part number can be meaningless, only providing uniqueness inside the company.

In 2015 I wrote several posts related to best practices and data modeling for PLM. The most relevant posts to this discussion are here:

The part number can specify a part that needs to be manufactured according to specification, or a part that needs to be purchased from an available supplier/manufacturer. The manufacturer part number is, most of the time, a meaningful number (6 – 7 characters), as these parts need to be ordered by your company. The manufacturer part number is the SKU for the manufacturer. As you can imagine, in the manufacturer’s catalog there isn’t a revision mentioned. In graphics, see the image below:

Your company might sell Product MP-323121 (note: the ID is meaningful to help the customer to order the product).

Internally there is a related EBOM that specifies the product. The EBOM top part is O122 (note: here, we can use a meaningless identifier as all is digitally connected).

For the manufacturing of O122, we need to resolve the EBOM according to its specifications. Therefore, for Part O124, the company needs to decide to purchase from their approved manufacturers either part ABC-21231 or XYZ-88818 (note: again, a meaningful ID as these companies are not digitally connected).

Now coming back to the FFF-discussion. For the orange parts, with a meaningful ID, no revision exists. However, if a change keeps Assembly O122 100% FFF compatible, the Product ID MP-323121 will not change. It allows your company to optimize the EBOM and/or MBOM while keeping 100% compatibility with the outside world. (Note: the same principle applies to the two manufacturers for Part O124.)

In case Top Assembly O122 has new or changed parts – what should happen there?

At that moment, the definition has changed. The definitions, most of the time described in documents/drawings/models, are related information to the BOM. Therefore, Top Assembly O122 should get a new identifier. There is no need to call it a revision; it is a new data set in the PLM-system, again with a meaningless identifier, as we are connected digitally.
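A minimal sketch of the example above – the identifiers come from the story, while the single-child structure and the helper function are my simplification:

```python
# Meaningless identifiers inside the company, meaningful identifiers
# toward the outside world (customers and manufacturers).
internal_parts = {
    "O122": {"type": "assembly", "children": ["O124"]},
    "O124": {"type": "purchase",
             # approved manufacturer parts - meaningful IDs, no revision
             "approved_manufacturers": ["ABC-21231", "XYZ-88818"]},
}
products = {"MP-323121": "O122"}  # meaningful product ID -> internal top part

def resolve_ebom(part_id: str, indent: int = 0) -> None:
    """Walk the EBOM; for purchase parts, list the approved manufacturer parts."""
    part = internal_parts[part_id]
    print("  " * indent + f"{part_id} ({part['type']})")
    for child in part.get("children", []):
        resolve_ebom(child, indent + 1)
    for mpn in part.get("approved_manufacturers", []):
        print("  " * (indent + 1) + f"buy as {mpn}")

resolve_ebom(products["MP-323121"])
```

If a non-FFF-compatible change occurs, "O122" is simply replaced by a new meaningless identifier, while MP-323121 changes only when compatibility toward the customer is broken.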

What about revisions of parts?

Of course, the management of changes existed long before PLM-systems were introduced.

The specifications of a part were defined in drawings. The drawing contained all the information, not only the geometry definitions, but also specifications on how to manufacture the part.

For complex products, a considerable set of consistently related drawings would be released to manufacturing – a release process with physical signatures on it.

At the same time, there was no discussion: the drawing represents the part. And as there was no digital connection, part numbers/drawing numbers were meaningful, often with the format of the drawing as part of the identifier.

In case changes were needed, for example, fixing a dimension or tolerance as discovered during manufacturing, the drawing had to be revised to remain consistent. First, in the original drawing, the issue or change was marked in red (redlining). Then engineering had to create a new version of the drawing.

Depending on the impact of the change (here the FFF-principle also comes in), people decided whether a new part number was needed (FFF-change) or the change only required an update of the drawing(s), meaning a revision. If the difference was small (for example, adding a missing annotation), it could be called a minor change, all to be reflected in the drawing number, which equals the part number in this approach. So, when we talk about revisions of parts, we are talking about a document change.

A lousy practice coming from that approach is that manufacturing often just redlines a drawing and keeps the redlined drawing as their source. It is too time-consuming or difficult to update the source drawing(s) through a change process. Engineering is not aware of this change, and when a later change comes through from engineering, these “fixes” might be missed as there is no traceability.

Generic example of a PLM data model and its relations

When PLM-systems were introduced, of course, companies did not want to disrupt their existing ways of working. Therefore, they were asking the PLM-editors to enable revisions on parts, and so the PLM-editors did (or do).

Decoupling of parts and documents in a PLM data model

However, if you want to use the PLM-system in the best manner, you need to “decouple” the concept that part number equals drawing number, combined with the possibility to start using meaningless identifiers, as relations between parts and drawings are managed in the PLM-system through relational links.
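A sketch of what this decoupling could look like in a PLM data model – simplified, as real systems add lifecycle states, effectivity and more:

```python
from dataclasses import dataclass

@dataclass
class Part:
    part_id: str      # meaningless identifier, unique inside the PLM-system
    description: str

@dataclass
class Document:
    doc_id: str       # its own identifier, no longer equal to the part number
    revision: str

# The "is defined by" relation lives in the system as a link,
# not inside the numbering scheme of either object.
part = Part("P000123", "mounting bracket")
drawing = Document("D000456", "B")
defined_by = [(part.part_id, drawing.doc_id)]
print(defined_by)  # [('P000123', 'D000456')]
```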

Relevant posts related to the PLM data model are:

What are the best practices?

As some people mentioned in their comments to Yoann’s post: why do we have to answer this question, as all is already well understood and described in best practices? I agree with that statement: best practices exist – so how do you obtain them?

First, there is the whole framework of Configuration Management, which existed long before PLM-systems were introduced. If you follow its methodology, you can be (almost) guaranteed that your information is consistent and correct. Configuration Management is crucial in areas where the impact of an error is enormous, like the GM-example Yoann referred to. Companies in the Aerospace and Defense industry are typically the ones that have strict configuration management in place.

Configuration management does not come for free. It requires an investment in skills, potentially a change in ways of working, and it introduces overhead. Manufacturing companies that create less “risky” products often focus more on optimizing (= reducing) the cost of their internal processes instead of investing in proper methodologies to manage consistency.

If you want to learn more about CM, investigate the Institute of Process Excellence (IPX), the founders of the CM2 framework for Enterprise Configuration Management, and much more. Note: their knowledge does not come for free, which I can understand. However, it also creates a barrier for a company’s further investment in CM, as this kind of strategic investment is hard to sell at the management level by individuals in a company.

In the context of CM, I advise you to follow Martijn Dullaart, who is quite active in our social community. His latest blog post related to this thread is: It’s about Interchangeability and Traceability

With the introduction of PLM-systems, these companies and the PLM-editors created the opportunity to implement configuration management in their systems.

The data inside the system would be the “single version of the truth.” Unfortunately, this was, most of the time, just a sales strategy, falsely giving the impression that information was now under control. Last year I wrote several posts related to the relation between PLM and CM, starting from PLM and Configuration Management – a happy marriage?

If you are interested in another resource for information related to these topics, have a look at the website of Jörg Eisenträger, who also collected his best practices for PLM and CM for sharing (thanks, Paul van der Ree, for the link).

Don’t expect best practices from your PLM-vendors as their role is to sell software. It is the continuous discussion between:

  • A PLM-system that forces companies to work according to embedded methodology (hard to sell/implement but idealistically correct)

And

  • A flexible PLM-system that allows you to build and configure anything (easy to sell/challenging to implement correctly, depending on “wise” decisions)

The Future

Even though most companies are working drawing-centric, with or without a linked PLM-backbone for BOM-management, the next upcoming challenge is to evolve to model-based practices. The current CM-practices still talk about documents, although documents are already electronic datasets in that context. The future, however, in a model-based enterprise, revolves around connected models – 3D models, but also simulation and software models – with different lifecycles and paces of change. For the model-based enterprise, we need to develop digital best practices that guarantee the same level of quality, however executed and/or supported by Artificial Intelligence (AI). AI is needed as human beings cannot physically analyze and understand the full impact of a change in such an environment.

Conclusion

The FFF-discussion illustrates that building a consistent framework within PLM is not an easy goal to achieve. My blog buddy Oleg Shilovitsky would claim that we consultants create the complexity. PLM-editors will never solve this complexity; it is up to your company to invest in knowledge to understand why and how to reduce it. With this post and the related links and discussions, I hope more clarity will help you to make “wise” decisions.

This time a post that has been on the table already for a long time – the importance of having established processes, in particular when implementing PLM. By nature, most people hate processes, as they might give the idea that personal creativity is limited, whereas large organizations love processes, as for them this is the way to guarantee predictable performance. So let’s have a more in-depth look.

Where processes shine

In a transactional world, processes can be implemented like algorithms, assuming the data to be processed has the right quality. That is why MRP (Material Requirement Planning) and ERP (Enterprise Resource Planning) don’t have the mindset of personal creativity. It is about optimized execution driven by financial and quality goals.

When I started my career in the early days of data management, before it was called PDM/PLM, I learned that there is a need for communication related to product data. Terms like revisions and versions started to pop up, combined with change processes. Some companies began to talk about configuration management.

Companies were not thinking PLM along the whole lifecycle. It was more PDM for engineering and ERP for manufacturing. Where PDM was ultimately a document-control environment, ERP was the execution engine relying on documented content, but not necessarily connected. Unfortunately, this is still the case at many companies, and it has to do with the mindset. Traditionally, a company’s performance has been measured based on financial reporting coming from the ERP-system. Engineering was an unmanageable cost in the eyes of the manufacturing company’s management and the ERP-software vendors.

In the middle of the nineties (the previous century now!), I had a meeting with an ERP-country manager to discuss a potential partnership. The challenge was that he had no clue about the value of and complementary need for PLM. Even after discussing with him the differences between iterative product development (with revisioning) and linear execution (on the released product), his statement was:

“Engineers are just resources that do not want to be managed, but we will get them”

Meanwhile, I can say this company has changed its strategy, giving PLM a space in their portfolio combined with excellent slides about what could be possible.

To conclude: for linear execution, the meaning of processes is more or less close to algorithms, and where there is no algorithm, the individual steps in place are predictable, with their own KPIs.

Process certification

As I mentioned in the introduction, processes were established to guarantee a predictable outcome, in particular when it comes to quality. For that reason, in the previous century, when globalization started, companies were somehow forced to get ISO 900x certified. The idea behind these certifications was that a company had processes in place to guarantee an expected outcome, and when they failed, they would have procedures in place to fix the gaps. Companies were doing this because there was no social internet yet to name and shame bad companies. Having an ISO 900x certification was the guarantee of delivering quality. In the same perspective, we could see configuration management as a system of best practices to guarantee that product information was always correct.

Certification was and is heaven for specialized external auditors and consultants. To get certified, you needed to invest in people and time to describe your processes, and once these processes were defined, there were regular external audits to ensure the quality system was followed. The beauty of this system: the described procedures were more or less “best intentions,” not enforced. When the auditor came, the company had to play some theater to show the processes were followed; the auditor would find some improvements for next year, and management was happy the certification was passed.

This changed early this century. In particular, mid-market companies were no longer motivated to keep up this charade. The quality process manual remained as a source of inspiration, but external audits were no longer needed. Companies were globally connected and reviewed, so reputation could be sourced easily.

The result: there are documented quality procedures, and there is reality. The more disconnected employees became in a company due to mergers or growth, the more individual best practices became the way to deliver the right product and quality, combined with accepted errors and fixes downstream or later. The hidden cost of poor quality is still a secret within many companies. Talking with employees, they all have examples where their company lost a lot of money due to quality mistakes. Yet in less regulated industries, there is no standard approach, like CAPA (Corrective And Preventive Actions), APQP or 8D, to solve it.

Configuration Management and Change Management processes

When it comes to managing the exact definition of a product, either an already manufactured product or a product currently being made, there is a need for Configuration Management. Before there were PLM-systems, configuration management was done through procedures defining configurations based on references to documents with revisions and versions. In the aerospace industry, separate systems for configuration management were developed to ensure the exact configuration of an aircraft could be retrieved at any time. Less regulated industries used a more document-based, procedural approach, as strict as possible. You can read about the history of configuration management and PLM in an earlier blog post: PLM and Configuration Management – a happy marriage?

With the introduction of PDM- and PLM-systems, more and more companies wanted to implement their configuration management, and in particular their change management, inside the system, as changes are always related to product information that can reside in a PLM-system. A change of a part can be proposed (ECR), analyzed and approved, leading to an implementation of the change (ECO), which is based on changed specifications, designs (3D models / drawings) and more. You can read the basics here: The Issue and ECR/ECO for Dummies (Reprise)
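As a sketch, the ECR/ECO flow described above can be seen as a simple state machine – simplified, as real change processes add routing, approval roles and effectivity:

```python
# Minimal sketch of the ECR/ECO flow; state names are illustrative.
ALLOWED_TRANSITIONS = {
    "ECR proposed": {"ECR analyzed"},
    "ECR analyzed": {"ECR approved", "ECR rejected"},
    "ECR approved": {"ECO created"},
    "ECO created": {"ECO implemented"},  # changed specs, models, drawings
}

def advance(state: str, next_state: str) -> str:
    """Move the change to its next state, refusing illegal shortcuts."""
    if next_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"Cannot go from '{state}' to '{next_state}'")
    return next_state

state = "ECR proposed"
for step in ("ECR analyzed", "ECR approved", "ECO created", "ECO implemented"):
    state = advance(state, step)
print(state)  # ECO implemented
```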

The Challenge (= Problem) of Digital Processes

More and more companies are implementing change processes fully in PLM, and this is the point that creates the most friction in a PLM implementation. The beauty of digital change processes is that they can be foolproof. No change goes unnoticed, as everyone is forced to follow the predefined procedures – either a type of fast track in case of lightweight (= low-risk) changes or the full change process when the product is already in a mature state.

Like with the ISO 900x processes, the PLM-implementer often plays the role of the consultancy firm that needs to advise the company on how to implement configuration management and change processes. The challenge here is that the company, most of the time, does not have a standard view of its change processes, and for sure the standard change management inside PLM is not identical to their processes.

Here the battle starts….

Management believes that digital change processes, preferably out-of-the-box, are crucial to implement, whereas users feel their job becomes more administrative than creative. Users that create information don’t want to be bothered with decisions about numbering and revisioning.

They expect the system to do that easily for them – which does not happen, as old procedures, responsibilities, and methodologies do not align with the system. Users are not measured or challenged on data quality; they are measured on the work they deliver that is needed now. Let’s first get the work done before we make sure all is consistently defined in the PLM-system.

Digital Transformation allows companies to redefine the responsibilities for users related to the data they produce. It is no longer a 3D Model or a drawing, but a complete data set with properties/attributes that can be shared and used for analysis and automation.

Conclusion

Implementing digital processes for PLM is the most painful, but required, step for a successful implementation. As long as data and processes are not consistent, we can keep on dreaming about automation in PLM. Therefore, digital transformation inside PLM should focus on new methods and responsibilities to create a foundation for the future. Without an agreement on the digital processes, there will be growing inefficiency in the future.

 

Image: waitbutwhy.com

Two weeks ago I wrote about the simplification discussion around PLM – Why PLM never will be simple. There I focused on the fact that even sharing information in a consistent, future-proof way of working is already challenging, despite easy-to-use communication tools like email or social communities.

I mentioned that sharing PLM data is even more challenging due to its potential revision, version, status, and context. This brings us to the topic of configuration management, needed to manage the consistency of information – a challenge that grows with increasingly sophisticated products and systems. Simple tools will never fix this complexity.

To manage the consistency of a product, configuration management (CM) is required. Two weeks ago I read the following interesting post from CMstat: A Brief History of Configuration Management Software.

An excellent introduction if you want to know more about the roots of CM, be it that, at the end, the post starts to list all the disadvantages and reasons why you should not think about CM using PLM-systems.

The following part amused me:

 The Reality of Enterprise PLM

It is no secret that PLM solutions were often sold based in good part on their promise to provide full-lifecycle change control and systems-level configuration management across all functions of the enterprise for the OEM as well as their supply and service chain partners. The appeal of this sales stick was financial; the cost and liability to the corporation from product failures or disasters due to a lack of effective change control was already a chief concern of the executive suite. The sales carrot was the imaginary ROI projected once full-lifecycle, system-level configuration control was in effect for the OEM and supply chain.

Less widely known is that for many PLM deployments, millions of budget dollars and months of calendar time were exhausted before reaching the point in the deployment road map where CM could be implemented. It was not uncommon that before the CM stage gate was reached in the schedule, customer requirements, budget allocations, management priorities, or executive sponsors would change. Or if not these disruptions within the customer’s organization, then the PLM solution provider, their software products or system integrators had been changed, acquired, merged, replaced, or obsoleted. Worse yet for users who just had a job to do was when solutions were “reimagined” halfway through a deployment with the promise (or threat) of “transforming” their workflow processes.

Many project managers were silently thankful for all this as it avoided anyone being blamed for enterprise PLM deployment failures that were over budget, over schedule, overweight, and woefully underwhelming. Regrettably, users once again had to settle for basic change control instead of comprehensive configuration management.

I believe the CMstat-writer is generalizing too much and preaching for their own parish. Although my focus lies on PLM, I have also learned the importance of CM, and for that reason I will share a view on CM from the PLM side:

Configuration Management is not a target for every company

The origins of Configuration Management come from the Aerospace and Defense (A&D) industries. These industries have high quality, reliability and traceability constraints. In simple words, you need to prove your product works correctly as specified in all described circumstances, and you need to keep this consistent along the lifecycle of the product.

Moreover, imagine you delivered the perfect product; next, implementing changes requires a full understanding of the impact of the change. What is the impact of the change on behavior or performance? In A&D, the question is: is it still safe and reliable?

Somehow, PLM and CM are enemies. The main reason why PLM-systems are used is Time to Market – bringing a product as fast as possible to the market with acceptable quality. Being first is sometimes more important than high quality. CM is considered a process that slows down Time to Market, as managing consistency and continuously validating take time and effort.

Configuration Management in aviation is crucial, as everyone understands that you cannot afford to discover a severe problem during a flight. All the required verification and validation efforts make CM a costly process along the product lifecycle. Airplane parts are 2 – 3 times more expensive than potentially the same parts used in other industries. The main reason: airplane parts are tested and validated for all expected conditions along their lifecycle. Other industries do not spend so much time on validation. They validate only where issues can hurt the company, either for liability or for costs.

Time to Market even impacts the aviation industry, as we can see from the commercial aircraft battle(s) between Boeing and Airbus. Who delivers the best airplane (size/performance) at the right moment in the global economy? The Airbus A380 seems to have missed its future targets – too big, not flexible enough. The Boeing 737 MAX appeared to target a market sweet spot (fuel economy); however, the recent tragic accidents with this plane seem to have been caused by Time to Market pressure to certify the aircraft too early. Or is the complexity of a modern airplane unmanageable?

CM based on PLM-systems

Most companies had their configuration management practices long before they started to implement PLM. These practices were most of the time documented in procedures, leading to all kinds of coding systems for these documents. Drawing numbers (the specification of a part/product), specifications, parts lists – all had a meaningful identifier combined with a version/revision and status. For example, the Philips 12NC coding system is famous in the Netherlands and is still used among spin-offs of Philips and their suppliers, as it offers a consistent framework to manage configurations.

Storing these documents in a PDM/PLM-system to provide centralized access was not a big problem; however, companies also expected the PLM-system to provide automation and functionality supporting their configuration management procedures.

A challenge for many implementers for several reasons:

  • PLM-systems do not offer a standard way of working – if they did, they could only serve a small niche market – so they need to be “configured/customized.”
  • Company configuration management rules sometimes cannot be mapped to the provided PLM data model and its internal business logic. This has led to costly customizations where, in the best case, implementer and company agreed somewhere in the middle. Worst case, as the writer of the CM blog mentions, it becomes an expensive, painful project.
  • Companies do not have a consistent configuration management framework, as Time to Market is leading – “we will fix CM later” is the idea, and they let their PLM-implementer configure the PLM-system as well as possible. Still, at the management level, the value of CM is not recognized.
    (see also: PLM-CM-ALM – not sexy ?)

In companies that I worked with, those who were interested in a standardized configuration management approach were trained in CMII. CMII (or CM2) is a framework supported by most PLM-systems, sometimes even with a pre-configured template to speed up the implementation. Still, as PLM-systems serve multiple industries, I would not expect any generic PLM-vendor to offer Commercial Off-The-Shelf (COTS) CM-capabilities – there are too many legacy approaches. You can find a good and more in-depth article related to CMII here: Towards Integrated Configuration Change Management (CMII) from Lionel Grealou.

 

What’s next?

Current configuration management practices are very much based on the concepts of managing documents. However, products are more and more described in a data-driven, model-based approach. You can find all the reasons why we are moving to a model-based approach in my blog post from last year. It is important to realize that current CM practices in PLM were designed with mechanical products and lifecycles as a base. With the combination of hardware and software, integrated and with different lifecycles, CM has to be reconsidered with a new holistic concept. The Institute of Process Excellence provides CM2 training but is also active in developing concepts for the digital enterprise.

Martijn Dullaart, Lead Architect Configuration Management @ ASML & Chair @ IPE/CM2 Global Congress, has published several posts related to CM and a model-based approach – you can find them via his LinkedIn profile. As you can read from his articles, organizations are trying to find a new consistent approach.

Perhaps CM as a service to a Product Innovation Platform, as the CMstat blog post suggests? (quote from the post below)

In Part 2 of this CMsights series on the future of CM software we will examine the emerging strategy of “Platform PLM” where functional services like CM are delivered via an open, federated architecture comprised of rapidly-deployable industry-configured applications.

I am looking forward to Part 2 of CMsights – an approach that makes sense to me, as system boundaries will disappear in a digital enterprise. It will be more critical in the future to create consistent data flows, in the right context and based on data with the right quality.

Conclusion

Simple tools and complexity need to be addressed in the right order. Aligning people and processes efficiently to support a profitable enterprise remains the primary challenge for every enterprise. Complex products, more dependent on software than hardware, are requiring new ways of working to stay competitive. Digitization can help to implement these new ways of working. Experienced PLM/CM experts know the document-driven past. Now it is time for a new generation of PLM and CM experts to start from a digital concept and build consistent and workable frameworks. Then the simple tools can follow.

 

Shaping the PLM platform of the Future

In this post, my observations from the PDT 2014 Europe conference, which was hosted in the Microsoft Conference Center in Paris and organized by Eurostep and CIMdata.

It was the first time I attended this event. I was positively surprised by the audience and content. Where other PLM conferences often focus more on current business issues, here a smaller audience (130 persons) was looking in more detail at the future of PLM. Themes like PLM platforms, the circular economy, open standards and longevity of data were presented and discussed.

The emergence of the PLM platform

Peter Bilello from CIMdata kicked off with his presentation: The emergence of the PLM platform. Peter explained we have to rethink our PLM strategy for two main reasons:

1.  The product lifecycle will become more and more circular due to changing business models, and in parallel the different usage/availability of materials will have an impact on how we design and deliver products

2.  The change towards digital platforms at the heart of our economy (the Digital Revolution, as I also wrote about in previous posts) will impact organizations dramatically.

Can current processes and tools support today’s complexity? And what about tomorrow? According to a CIMdata survey, there is a clear difference in profit and performance between leaders and followers, and the gap is increasing faster. “Can you afford to be a follower?” is a question companies should ask themselves.

Rethinking the PLM platform does not bring a 2-3% efficiency benefit but can bring benefits of 20% and more.

Peter sees a federated platform as a must for companies to survive. I particularly liked his statement:

The new business platform paradigm is one in which solutions from multiple providers must be seamlessly deployed using a resilient architecture that can withstand rapid changes in business functions and delivery modalities

Industry voices on the Future PLM platform

Auto

Steven Vetterman from ProSTEP talked about PLM in the automotive industry. Steven started by describing the change in the automotive industry, quoting Heraclitus: Τα πάντα ρεί – the only constant is change. Steven described two major changes in the automotive industry:

1.  The effect of globalization, technology and laws & ecology

2.  The change of the role of IT and the impact of culture & collaboration

An interesting observation is that the preferred automotive market will shift to the BRIC countries. In 2050, more than 50 percent of the world population (an estimated almost 10 billion people at that time) will be living in Asia, 25 percent in Africa. Europe and Japan are aging. They will not invest in new cars.

For Steven, it was clear that current automotive companies are not yet organized to support and integrate modern technologies (systems engineering / electrical / software) beyond mechanical designs. Neither are they open to a true global collaboration between all players in the industry. Some of the big automotive companies are still struggling with their rigid PLM implementations. There is a need for open PLM, not driven from a single PLM-system, but based on a federated environment of information.

Aero

Yves Baudier spoke on behalf of the aerospace industry about the standardization effort of their Strategic Standardization Group around Airbus and some of its strategic suppliers, like Thales, Safran, BAE Systems and more. If you look at the ASD Radar, you might get a feeling for the complexity of standards that exist and are relevant for the Airbus group.

standards at airbus

It is a complex network of evolving standards, all providing (future) benefits in some domains. Yves was talking about through-lifecycle support, which strives for creating data once and reusing it many times during the lifecycle. The conclusion from Yves, like from all the previous speakers, is that the PLM platform of the future will be federative, and standards will enable PLM interoperability.

Energy and Marine

Shefali Arora from Wärtsilä spoke on behalf of the energy and marine sector and gave an overview of the current trends in their business and the role of PLM in Wärtsilä. With PLM, Wärtsilä wants to capitalize on its knowledge, drive costs down and, above all, improve business agility, as the future is in flexibility. Shefali gave an overview of their PLM roadmap covering the aspects of PDM (with Teamcenter), ERP (SAP) and a PLM backbone (Share-A-space), the PLM backbone providing connectivity of data between all lifecycle stages and external partners (customers / suppliers) based on the PLCS standard. Again, another session demonstrating that the future of PLM is in an open and federated environment.

Intermediate conclusion:
The future PLM platform is a federated platform which adheres to standards, provides openness of interfaces that permit the platform to be reliable over multiple upgrade cycles, and is able to integrate third parties (Peter Bilello)

Systems Engineering

In the afternoon, I followed the Systems Engineering track. Peter Bilello gave an overview of Model-Based Systems Engineering and illustrated, based on a CIMdata survey, that even though many companies have a systems engineering strategy in place, it is not applied consistently. And indeed, several companies I have been dealing with recently expressed their desire to integrate systems engineering into their overall product development strategy. Often this approach is confused by believing that requirements management and product development equal systems engineering. Still a way to go.

Dieter Scheithauer presented his vision that Systems Engineering should be a part of PLM, and he gave a very decent, academic overview of how all is related. Important for companies that want to go in that direction: you need to understand what you are aiming at. I liked his comparison of a system product structure and a physical product structure, helping companies to grasp the difference between a virtual, system view and a physical product view:

system and product

More Industry voices

Construction industry

The afternoon session started with Christophe Castaing, explaining BIM (Building Information Modeling) and the typical characteristics of the construction industry. Although many construction companies focus on the construction phase, of 100 pieces of information/exchange to be managed during the full life cycle, only 5 will be managed during the initial design phase (BIM), 20 will be managed during the construction phase (BAM) and finally 75 will be managed during the operation phase (BOOM). I wrote about PLM and BIM last year: Will 2014 become the year the construction industry will discover PLM?

Christophe presented the themes from the French MINnD project, where the aim is, starting from an information model, to come to a platform, supporting and integrated with the particular civil and construction standards, like IFC and CityGML, but also the PLCS standard (ISO 10303-239).

Consumer Products

Amir Rashid described the need for PLM in the consumer product markets, stating the circular economy as one of the main drivers. Especially in consumer markets, product waste can be extremely high due to the short lifetime of the product, and everything is scrapped to landfill afterward. Interesting quote from Amir: Sustainability’s goal is to create possibilities, not to limit options. He illustrated how Xerox has had sustainability as part of its product development since 1984. The diagram below demonstrates how the circular economy can impact all business today when well-orchestrated.

circular economy

Marc Halpern closed the tracks with his presentation around Product Innovation Platforms, describing how product design and PLM might evolve in the upcoming digital era. Gartner believes that future PLM platforms will provide insight (understand and analyze Big Data), adaptability (flexible to integrate and maintain through an open, service-oriented architecture), reuse (identifying similarity based on metadata and geometry), discovery (the integration of search, analysis and simulation) and finally community (using the social paradigm).

If you look at current PLM-systems, most of them are far from this definition, and if you support Gartner’s vision, there is still a lot of work for PLM-vendors to do.

Interestingly, Marc also identified five significant risks that could delay or prevent implementing this vision:

  • inadequate openness (pushing back open collaboration)
  • incomplete standards (blocking implementation of openness)
  • uncertain cloud performance (the future is in cloud services)
  • the steep learning curve (it is a big mind shift for companies)
  • Cyber-terrorism (where is your data safe?)

After Marc’s session, there was an interesting panel discussion with some of the speakers from that day, briefly discussing questions from the audience. As the presentations had been fairly technical, it was logical that the first question that came up was: what about change management?

A topic that could fill the rest of the week, but the PDT dinner was waiting – a good place to network and digest the day.

DAY 2

Day 2 started with two interesting topics. The first presentation was a joint presentation from Max Fouache (IBM) and Jean-Bernard Hentz (Airbus – CAD/CAM/PDM R&T and IT Backbones). The topic was the obsolescence of information systems: hardware and PLM applications. In the aerospace industry, some data needs to be available for 75 years. You can imagine that during 75 years, a lot can change in hardware and software systems. At Airbus, there are currently 2500 applications, provided by approximately 600 suppliers, that need to be maintained. IBM and Airbus presented a proof of concept done with virtualization of different platforms supporting CATIA V4/V5 using Linux, Windows XP, W7 and W8 – which covers just a small part of all the data.

The conclusion from this session was:

To benefit from the PLM of the future, the PLM of the past has to be managed. Migration is not the only answer. Look for solutions that exist to mitigate risks and reduce the costs of PLM obsolescence. Usage of and compliance with standards is crucial.

Standards

Next, Howard Mason, Corporate Information Standards Manager, took us on a nice journey through the history of standards developed in his business. I loved his statement: Interoperability is a right, not a privilege.

In the systems engineering track, Kent Freeland talked about Nuclear Knowledge Management and CM in Systems Engineering. As this is one of my favorite domains, we had a good discussion on the need for proactive knowledge management, which somehow implies a CM approach through the whole lifecycle of a plant. Knowledge management is not equal to storing information in a central place. It is about building and providing data in context so that it can be used.

Ontology for systems engineering

Leo van Ruijven provided a session for insiders: an ontology for Systems Engineering based on ISO 15926-11. His simplified approach, compared to ISO 15288, led to several discussions between supporters and opponents during lunchtime.

Master Data Management

After lunch, Marc Halpern gave his perspective on Master Data Management (MDM), a new buzzword or discipline needed to orchestrate enterprise collaboration.

Based on the types of information companies want to manage in relation to each other, supported by various applications (PLM, ERP, MES, MRO, …), this can be a complex exercise, and Marc ended with recommendations and an action plan for the MDM lead. In my customer engagements, I also see more and more that digital transformation leads to MDM questions: can we replace Excel files with mastered data in a database?


Almost at the end of the day, I was speaking about the PLM platform for the people of the future. Here I highlighted the fundamental change in skills that is upcoming. Where my generation was trained to own and capture as much information as possible in your brain (or cabinet), future generations are trained and skilled in finding data and building information out of it. Owning (information) is not crucial for them – perhaps because the world is moving fast. See the nice YouTube movie at the end.


Ella Jamsin ended the conference on behalf of the Ellen MacArthur Foundation, explaining the need to move to a circular economy and that PLM should play a role in it. No longer is PLM from cradle-to-grave; PLM should support the lifecycle from cradle-to-cradle.

Unfortunately, I could not attend all sessions, as there were several parallel sessions. Neither have I written about all sessions I attended. The PDT Europe conference – a conference for people who care about the details of future PLM concepts and the usage of standards – is a must for future strategists.

Some weeks ago, PLMJEN asked my opinion on Peter Schroer’s post and invitation to an ARAS webinar called: Change Management: One Size Will Never Fit All. Change management is actually a compelling topic, and I realized I had never written a dedicated post on such an essential topic. The introduction from Peter was excellent:

Change management is the toughest thing inside of PLM. It’s also the most important.

For the rest, the post elaborated further on software capabilities and the value of having template processes for various industry practices. I share that opinion when talking to companies that are starting to establish their processes. It is extremely rare that an existing company will change its processes towards more standard processes delivered by the PLM-system when implementing a new system. The rule of thumb is People, Processes and Tools. This is all nicely explained by Stephen Porter in his latest blog post: Beware the quick fix – successful PLM deployment strategies. As I was not able to attend the webinar, here are my more general thoughts related to change management and why it is essential for PLM.

Change Management has always been there

It is not that PLM invented change management. Before companies started to use ERP and PDM systems, every company had to deal with managing changes. At that time, their business was mostly local and, compared with today, slow. “Time to market” was more a “time to region” issue. Engineering and manufacturing were operating from the same location. Change management was a personal responsibility supported by (paper) documents and individuals. Only with the growing complexity of products, growing and global customer demands, and increasing regulatory constraints did it become impossible to manage change in an unstructured manner.

Survival of the fittest change organization

I have worked with several companies where change management was a running Excel business. Running can be interpreted in two ways: the current operation could not stop to step back and look into an improvement cycle, and a lot of people were running around to collect, check and validate information in order to make change estimates and decisions based on the collected data.

When a lot of people are running, it means your business is at risk. A lot of people means the costs for data (re)search and handling are higher than at competitors that can do this automatically. Even in countries with low labor costs, a lot of people running becomes a threat at a certain moment. In addition, running people can make mistakes or provide insufficient information, which leads to wrong decisions.

Wrong decisions can be costly. Your product may become too expensive; your project may be delayed significantly because decisions were based on conflicting information between disciplines or suppliers. Additional iterations to fix these issues lead to a longer time to market. Late discoveries can lead to severely high costs. For certain, when the product has been released to the market, the costs might be tremendous.

From the other side, if making changes becomes difficult because the data has to be collected from various sources through human intervention, organizations might try to avoid making changes.

Somehow, this is also an indirect death penalty. The future is for companies that are able to react quickly at any time and implement changes.

Consider the analogy of a commercial aircraft and a fighter plane, say the Airbus A380 and a modern fighter jet, the Joint Strike Fighter (JSF). The Airbus A380 brings you comfortably from A to B, as long as A and B are well-prepared places to land. The flight is comfortable because the plane is extremely stable. It is a well-planned trip with an aversion to changes in the trajectory.

The JSF, by design, is an unstable plane. Only through its computerized flight controls does the plane behave stably in the air. The built-in instability makes it possible to react as quickly as possible to unforeseen situations, preferably faster than the competition. This is a solution designed for change.

Whatever your business, you should admire the JSF concept and try to understand where it is needed in your organization.

Why is change management integrated in PLM so important?

If we consider where changes appear the most, it is evident that most changes occur early in the product lifecycle. And as long as they happen in the virtual world, with no costs yet committed to the product, they are relatively cheap. To my surprise, many engineering companies and engineering departments apply change management only outside their own environment. Historically, this is because outside their environment, connected to prototyping or production, the costs of change are the highest. And the existing ERP system has an Engineering Change process – so let's use that.

Meanwhile, engineering is used to working with the best-so-far information. At any moment, every discipline stores its data in a central repository, which could be a directory structure or a PDM system. Everyone is looking at the latest data. Files are overwritten with the latest versions. The PDM system shows the latest version to all users. Hallelujah!

And this is where it goes wrong. A mechanical engineer has overlooked a changed requirement in the specification – yes, the latest version of the 20-page document is there. An electrical engineer has defined a new control system for the engine but has not noticed that the operating parameters of the motor have changed. These are typical examples where a best-so-far environment creates visibility, but the individual user can no longer understand the impact of a change (especially when additional sites perform the engineering work).

Here comes the value of change management in PLM. Change management in PLM can be lightweight in the early design phases, providing checks on changes (baselines) and notifications to the disciplines involved. Approval processes are more agreements on which changes to implement and on their impact across all disciplines.

PLM supports the product definition through the whole product lifecycle, and change management at each stage can have its own particular behavior: in the early stages, a focus on notifications and visibility of change; later, checking the impact based on the maturity of the various disciplines; and finally, when moving into production and materials commitment, a strict and organized change mechanism. It is only in a PLM system that this gradual flow can be supported seamlessly.
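To make that gradual flow concrete, here is a minimal sketch in Python. All names (Stage, ChangeRequest, process_change) are hypothetical illustrations, not any PLM system's API; the point is only that the same change event triggers a lighter or stricter mechanism depending on the lifecycle stage.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    CONCEPT = "concept"                  # virtual world, costs uncommitted
    DETAILED_DESIGN = "detailed design"  # disciplines maturing in parallel
    PRODUCTION = "production"            # materials committed

@dataclass
class ChangeRequest:
    item_id: str
    description: str
    affected_disciplines: list[str]      # e.g. ["mechanical", "electrical"]

def notify(disciplines: list[str], change: ChangeRequest) -> None:
    """Lightweight visibility: tell every affected discipline."""
    for d in disciplines:
        print(f"[{d}] change on {change.item_id}: {change.description}")

def process_change(change: ChangeRequest, stage: Stage) -> str:
    """Pick the change mechanism appropriate for the lifecycle stage."""
    if stage is Stage.CONCEPT:
        notify(change.affected_disciplines, change)  # visibility is enough
        return "notified"
    if stage is Stage.DETAILED_DESIGN:
        notify(change.affected_disciplines, change)
        return "impact analysis required"            # check against baseline
    return "formal change workflow (ECR/ECO)"        # strict and organized

# Example: the same change, handled differently per stage
cr = ChangeRequest("motor-001", "operating parameters changed",
                   ["mechanical", "electrical"])
print(process_change(cr, Stage.CONCEPT))
print(process_change(cr, Stage.PRODUCTION))
```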

Change Management and ERP

As mentioned before, most manufacturing companies have implemented change management in ERP, as the costs of change are highest once the product capabilities are committed. However, the ERP system is not the place to explore and iterate toward improved solutions. The ERP system can trigger a change process based on production issues, but the full implementation of the change requires a change in the product definition, the area where PLM is strong.

NOTE: on purpose, I am not saying a change in the engineering definition, as in some cases the engineering definition remains the same and only the manufacturing process or materials need to be adapted. Supporting such iterations is a PLM matter, not an ERP execution matter.

Change Management and Configuration Management

So far, we have been discussing how the manufacturing system can deliver products based on the right engineering definition. As each specific product might not have its individual definition checked at any time, there is a need for configuration management (CM). Properly implemented configuration management assures a consistent relationship between how the product is specified and defined and the way it is produced. Read a refined and precise explanation on Wikipedia.
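As a minimal illustration of that consistency, here is a sketch in Python. The names (Configuration, consistency_report) are hypothetical, not any CM tool's API; it only shows the essence of comparing what was specified and defined against what was actually produced.

```python
from dataclasses import dataclass, field

@dataclass
class Configuration:
    name: str                                   # e.g. "as-specified" or "as-built"
    items: dict[str, str] = field(default_factory=dict)  # part number -> revision

def consistency_report(specified: Configuration, built: Configuration) -> list[str]:
    """Report every part where the produced configuration deviates
    from the specified and defined configuration."""
    issues = []
    for part, rev in specified.items.items():
        built_rev = built.items.get(part)
        if built_rev is None:
            issues.append(f"{part}: specified at rev {rev}, missing as built")
        elif built_rev != rev:
            issues.append(f"{part}: specified rev {rev}, built rev {built_rev}")
    for part in built.items.keys() - specified.items.keys():
        issues.append(f"{part}: built, but never specified")
    return issues

# Example: one revision mismatch, one missing part, one undocumented part
spec = Configuration("as-specified", {"P-100": "B", "P-200": "A"})
built = Configuration("as-built", {"P-100": "C", "P-300": "A"})
for issue in consistency_report(spec, built):
    print(issue)
```

In practice a CM system tracks far more (effectivity, approvals, traceability to requirements), but the core promise is exactly this kind of verifiable consistency.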

In one of my following posts, I will focus on configuration management practices and why PLM systems and configuration management are like Siamese twins.

Conclusion:

Storing your data in a (PLM) system only has value if you are able to keep the information and its context up to date. Only then can a person immediately make the right decisions with the right accuracy. The more disconnected systems or manual data handling, the less competitive your company will be. Integrated and lean change management means survival!

Recently, I noticed two different discussions. One was on LinkedIn, in the CMPIC® Configuration Management Trends group, where Chris Jennings started with the following statement:

Product Lifecycle Management (PLM) vs CM

An interesting debate has started up here about PLM vs CM. Not surprisingly it is revealing a variety of opinions on what each mean. So I’m wondering what sort of reaction I might get from this erudite community if I made a potentially provocative statement like …
“Actually, PLM and CM are one and the same thing” ?


It became a very active discussion, and it was interesting to see that some of the respondents saw PLM as the tool to implement CM. Later, the discussion moved more toward systems engineering, with a focus on requirements management. Of course, requirements management is key for CM; you could say CM starts with the capturing of requirements.


There was also some discussion about the real definition of PLM, and this is what triggered my post. Is the definition of PLM secured in a book – and if so, in which book? Historically, we have learned that when the truth comes from one book, there is discussion.

Initially, in the early days of PLM, requirements management was not part of the focus for PLM vendors. Yes, requirements and specifications existed in their terminology, but they were not fully integrated. Vendors focused more on the 'middle part' of the product lifecycle – digital mock-up and virtual manufacturing planning. Only a few years later did PLM vendors start to address requirements management (and systems engineering) as part of their portfolio, either by acquiring products or by adding the capability natively.

For me, this demonstrates that PLM and CM are not the same. CM initially had a wider scope than early PLM systems supported, although in various definitions of PLM you will see that CM is a key component of PLM practices.

Still, PLM and CM have a lot in common – I wrote about this a year ago in my post PLM, CM and ALM: not sexy! – and both are fighting to get enough management support and investment. In the CMPIC group there is another open discussion with the title: What crazy CM quotes have you heard? You could easily apply these quotes to current opinions about PLM. Read them (if you have access) and have fun.

The same week, another post caught my interest: Oleg's post about Inforbix and Product Data Management. I am aware that other vendors are also working on concepts to provide end users with data without requiring data-management effort. Alcove9 and Exalead are products with a similar scope – my apologies to all companies not mentioned here.

What you see is a trend to make PLM simpler by trying to avoid the CM practices that are often labeled 'non-value-add', 'bureaucracy' and worse. I am curious to learn how CM practices will be upheld by this 'new generation of PDM' vendors, as I believe you need CM to manage your products proactively.

What is your opinion about CM and PLM – can modern PLM change the way CM is done?

This time it is hard to write my blog post. First of all, because tomorrow is the soccer final between Holland and Spain, and as a Virtual Dutchman I still dream of a real cup for the Dutch team.

Besides that, I had several discussions around PLM (Product Lifecycle Management), CM (Configuration Management) and ALM (Asset Lifecycle Management), in which all insiders agreed that it is hard to explain and sell the value and best practices, because it is boring, because it is not sexy, etc., etc.

So why am I still doing this job…..

Product Lifecycle Management (PLM)


If you look at trade shows and major events of PLM vendors, the eye-catching stuff is 3D (CAD).

In 2006, Dassault Systemes introduced 3DLive as the 3D collaboration layer for all users, with the capability to provide online, role-specific information from different information sources in a 3D manner (see what you mean). Recently, Siemens introduced HD PLM, which, as far as I understood, brings decision-making capabilities (and fun) to the user.

Both user interfaces focus on providing information in a user-friendly and natural way. This is sexy to demonstrate, but one question is never asked: "Where does the information come from?"

And this is the boring but required part of PLM. Without data stored in or connected to the PLM system, there is no way these sexy dashboards can provide the right information. The challenge is for PLM systems to extract this information from various applications, and for users to have the discipline to enter the needed data.

The software vendors who find an invisible way to capture the required information hold the key to success. Will it be through more social collaboration with a lot of fun? I am afraid not. The main issue is that the people who need to enter the data are not rewarded for doing it. It is downstream in the organization, further along the product lifecycle, that other people benefit from the complete information. I even suspect that in some organizations there are people who do not want to share data, to make sure they remain indispensable to the organization – see also Some users do not like the single version of the truth.


So who can reward these users and make them feel important? I believe this is a management job, and no sexy (3D) environment will help here.

 

Configuration Management (CM)

Although it is considered a part of PLM, I added configuration management to my post as a separate bullet. Two weeks ago, I attended the second day of the CMII Europe conference in Amsterdam. What I learned from this event is that the members of the CMII community are a group of enthusiastic people with much the same vision as PLM missionaries.

Quoting the organization:  “CMII is about changing faster and documenting better. It is about accommodating change and keeping requirements clear, concise and valid.” 

It was interesting to listen to the members' speeches. As with PLM, everyone is convinced that configuration management brings a lot of value to a company, yet they too are fighting for acknowledgement. Not sexy, is what I learned here; and here as well, the people responsible for data accuracy are not necessarily the ones who benefit (the most).

As with PLM, but even more so with configuration management, the cultural change should not be neglected. Companies are used to having a certain level of "configuration management", often based on manual processes that are not always efficient, clear or well understood, yet satisfactory for the management – until something happens due to incorrect information.


Of course, the impact of an error differs per industry; a problem caused by wrong information in an airplane is something different from a problem with a sound system.

So the investment in configuration management pays off for complex products with critical behavior and in countries where labor costs are high. It was interesting to learn that a CM maturity assessment showed most companies score below average on management support, while they score above average on the tools they have in place.

This demonstrates to me that for configuration management, too, companies believe tools will implement the change without a continuous management push. I remember that in several PLM selection processes, prospects asked for all kinds of complex configuration management capabilities, like complex filtering of a product structure. Perhaps pushed by a competitor, as in the end it was never implemented 😦

Asset Lifecycle Management (ALM)

In some previous posts, I wrote about the benefits a PLM system can bring when used as the core system for all asset-related information. For nuclear plants, the IAEA (International Atomic Energy Agency) recommends using configuration management best practices, and I have met an owner/operator of a nuclear plant who recognized that a PLM system brings the right infrastructure, unlike, for example, SAP, which focuses more on operational data.

I also had a meeting with another owner/operator, who was used to managing their asset data in a classical manner – documents in an as-built environment and changes to documents in various project environments.

When discussing ALM best practices based on a PLM system, the benefits were clear, but we also realized that implementing these concepts would require a conceptual revolution. People would need to start thinking asset-centric (with lifecycle behavior) instead of document-centric, with only revisions.
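To illustrate the difference, here is a minimal sketch with hypothetical names (Document, Asset, LifecycleState), not any vendor's data model: in a document-centric world the only history is a revision trail, while an asset-centric model carries lifecycle state alongside its describing documents.

```python
from dataclasses import dataclass, field
from enum import Enum

# Document-centric: the only notion of history is a trail of revisions.
@dataclass
class Document:
    number: str
    revisions: list[str] = field(default_factory=list)   # e.g. ["A", "B", "C"]

class LifecycleState(Enum):
    AS_DESIGNED = "as designed"
    AS_BUILT = "as built"
    IN_OPERATION = "in operation"
    DECOMMISSIONED = "decommissioned"

# Asset-centric: the asset itself carries lifecycle behavior;
# documents describe the asset instead of being the master record.
@dataclass
class Asset:
    tag: str                                   # e.g. the plant tag of a pump
    state: LifecycleState
    documents: list[Document] = field(default_factory=list)

    def transition(self, new_state: LifecycleState) -> None:
        """Move the asset through its lifecycle, independent of document revisions."""
        print(f"{self.tag}: {self.state.value} -> {new_state.value}")
        self.state = new_state

# Example: the pump stays the same asset while its state evolves
pump = Asset("P-4711", LifecycleState.AS_DESIGNED, [Document("D-100", ["A", "B"])])
pump.transition(LifecycleState.AS_BUILT)
```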

This kind of change requires a management vision, a clear explanation of the benefits and a lot of attention for the user. Only when these changes have been implemented and the data is available in a single repository do the fun and sexy environments become available for use.

Conclusion

PLM, CM and ALM are not sexy, especially for the users who need to provide the data. But they provide the basis for sexy applications in which users have instant access to complete information to make the right decisions. To get there, a cultural change is required. Management needs to realize that the company must become proactive (avoiding errors) instead of reactive (trying to contain errors): investing upfront without ever knowing what the losses would have been had an error occurred.

Not sexy; however, the benefits this approach brings allow employees and companies to continue doing their work with a secure future.

 

And now … time to close as the final is near

