
In my last post in this series, The road to model-based and connected PLM, I mentioned that perhaps it is time to talk about SLM instead of PLM when discussing popular TLAs for our domain of expertise. There have not been many encouraging statements for SLM so far.

For me, SLM could mean Solution Lifecycle Management, considering that companies’ offerings are more and more a mix of products and services. Or SLM could mean System Lifecycle Management, pushing the idea that more and more products interact with the outside world and could therefore be considered systems. Products are (almost) dead.

In addition, I mentioned that the typical product lifecycle and related configuration management concepts need to change in the SLM domain, as there is hardware and software with different lifecycles and change processes.

It is a topic I want to explore further. I am curious to learn more from Martijn Dullaart, who will be lecturing at the PLM Road Map and PDT 2021 fall conference in November. I hope my expectations are not too high, knowing it is a topic of interest for Martijn. Feel free to join this discussion.

In this post, it is time to follow up on my third statement related to what data-driven implies:

Data-driven means that we need to manage data in a much more granular manner. We have to look differently at data ownership. It becomes more about data accountability per role, as the data can be used and consumed throughout the product lifecycle.

On this topic, I have a list of points to consider; let’s go through them.

The dataset

In this post, I will often use the term dataset (you are also allowed to write “data set”, I understood).

A dataset means a predefined number of attributes and values that belong logically to each other. Datasets should be defined based on the purpose and, if possible, designated for a single goal. In this way, they can be stored in a database.

Combined with other datasets, they can result in relevant business information. Note that a dataset is not only transactional data; a dataset could also describe geometry.

Identify the dataset

In the document-based world, a lot of information could be stored in a single file. In a data-driven world, we should define a dataset that contains a specific piece of information, logically belonging together. To be more precise, a part would have various related datasets that make up its definition. These definitions could be:

  • Core identification attributes like ID, Name, Type and Status
  • The Type could define a set of linked information. For example, a valve would have different characteristics than a resistor. Through classification, we can link datasets to the core definition of a part.
  • The part can have engineering-specific data (CAD and metadata), manufacturing-specific data, supplier-specific data, and service-specific data. Each of these datasets needs to be defined as a unique element in a data-driven environment (see the sketch after this list).
  • CAD is a particular case, as most current CAD systems don’t treat geometry as a single dataset. In a file-based world, many other datasets are stored in the file (e.g., engineering or manufacturing details). In a data-driven environment, we want the CAD definition to be treated as a dataset. Dassault Systèmes with their CATIA V6 and 3DEXPERIENCE platform or PTC with Onshape are examples of this approach. Having CAD as separate datasets makes sharing and collaboration so much easier, as we can see from these solutions. The concept of CAD stored in a database is not new, and this approach has been used in various disciplines. Mechanical CAD was always a challenge.
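
To make the idea of granular, linked datasets a bit more tangible, here is a minimal sketch in Python. All names, attributes and values are illustrative assumptions, not the data model of any particular PLM system.

```python
# Minimal sketch of a part defined by several linked datasets.
# All identifiers, attributes and values are illustrative assumptions.

part_core = {              # core identification dataset
    "id": "P-000123",      # unique identifier
    "name": "Valve housing",
    "type": "valve",       # classification, driving additional linked attributes
    "status": "released",
}

# Classification-specific dataset, linked to the core definition through the part id
valve_characteristics = {
    "part_id": "P-000123",
    "nominal_diameter_mm": 25,
    "max_pressure_bar": 16,
}

# Discipline-specific datasets, each managed as its own element
engineering_data = {"part_id": "P-000123", "cad_model": "M-789", "material": "316L"}
manufacturing_data = {"part_id": "P-000123", "process": "casting", "work_center": "WC-12"}
service_data = {"part_id": "P-000123", "wearing_part": False, "service_kit": None}
```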

Thanks to Moore’s Law (approximately every two years, processor power doubles – click on the image for the details) and higher network connection speeds, it starts to make sense to have mechanical CAD also stored in a database instead of a file.

An important point to consider is a kind of standardization of datasets. In theory, there should be a kind of minimum agreed collection of datasets. Industry standards provide these collections in their dictionary. Whenever you optimize your data model for a connected enterprise, make sure you look first into the standards that apply to your industry.

They might not be perfect or complete, but inventing your own new standard is a guarantee for legacy issues in the future. This remark is also valid for the software vendors in this domain. A proprietary data model might give you a competitive advantage.

Still, in the long term, there is always the need to connect with outside stakeholders.

 

Identify the RACI

To ensure a dataset is complete and well maintained, the concept of RACI could be used. RACI is the abbreviation for Responsible, Accountable, Consulted and Informed, and a simplification of the RASCI model; see also responsibility assignment matrix.

In a data-driven environment, there is no data ownership anymore like you have for documents. The main reason that data ownership can no longer be used is that datasets can be consumed by anyone in the ecosystem – no longer only by your department or the manufacturing or service department.

Datasets in a data-driven environment bring value when connected with other datasets in applications or dashboards.

A dataset describing the specification attributes of a part could be used in a spare part app and a service app. Of course, the dataset will be used in a different context – still, we need to ensure we can trust the data.

Therefore, each identified dataset should be governed by a kind of RACI concept. The RACI concept is a way to break the silos in an organization.
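
As a thought experiment, such a RACI assignment per dataset type could itself be captured as data. The roles and dataset types below are assumptions for illustration only, not a recommended organizational setup.

```python
# Hypothetical RACI assignment per dataset type (roles and dataset names are assumptions).
raci_per_dataset = {
    "part_core":          {"R": "engineering",   "A": "product_manager",
                           "C": ["manufacturing"], "I": ["service", "purchasing"]},
    "manufacturing_data": {"R": "manufacturing", "A": "operations_manager",
                           "C": ["engineering"],   "I": ["service"]},
    "service_data":       {"R": "service",       "A": "service_manager",
                           "C": ["engineering"],   "I": ["sales"]},
}

def accountable_for(dataset_type: str) -> str:
    """Return the role that is Accountable (A) for a given dataset type."""
    return raci_per_dataset[dataset_type]["A"]

print(accountable_for("service_data"))  # -> service_manager
```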

Identify Inside / outside

There is a lot of fear that a connected, data-driven environment will expose Intellectual Property (IP). It came up in recent discussions. If you like storytelling and technology, read my old SmarTeam colleague Alex Bruskin’s post: The Bilbo Baggins Threat to PLM Assets. Alex has written some “poetry” with a deep technical message behind it.

It is true that if your data set is too big, you have the challenge of exposing IP when connecting this dataset with others. Therefore, when building a data model, you should make it possible to have datasets pure for internal usage and datasets for sharing.

When you use the concept of RACI, the difference should be defined by the I (Informed) – is it PLM-data or PIM-data, for example?

Tracking relations

Suppose we follow up on the concept of datasets. In that case, it becomes clear that the relations between the datasets are as crucial as the datasets themselves. In traditional PLM applications, these relations are often predefined as part of the core data model.

For example, the EBOM parts have relationships between themselves and specification data – see image.

The MBOM parts have links with the supplier data or the manufacturing process.

The prepared relations in a PLM system allow people to implement the system relatively quickly to map their approaches to this taxonomy.

However, traditional PLM systems are based on a document-based (or file-based) taxonomy combined with related metadata. In a model-based and connected environment, we have to get rid of the document-based type of data.

Therefore, the datasets will be more granular, and there is a need to manage exponentially more relations between datasets.

This is why you see the graph database coming up as a needed infrastructure for modern connected applications. If you haven’t heard of a graph database yet, you are probably far from technology hypes. To understand the principles of a graph database, you can read this article from Neo4j: Graph Databases for Beginners: Why graph technology is the future
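
To give a feeling for why a graph fits this problem, here is a minimal sketch of datasets as nodes and typed relations as edges, in plain Python without any graph database. The node and relation names are invented for illustration; a real implementation would rely on a graph database such as the Neo4j mentioned above.

```python
# Minimal sketch: datasets as nodes, typed relations as edges (all names are illustrative).
relations = [
    ("part:P-000123",  "defined_by",   "cad:M-789"),
    ("part:P-000123",  "specified_by", "requirement:R-001"),
    ("mbom-item:MI-1", "realizes",     "part:P-000123"),
    ("mbom-item:MI-1", "supplied_by",  "supplier-part:ABC-21231"),
]

def neighbours(node: str) -> list[tuple[str, str]]:
    """Return all (relation, other node) pairs touching a node - a tiny impact analysis."""
    out = [(rel, dst) for src, rel, dst in relations if src == node]
    out += [(rel, src) for src, rel, dst in relations if dst == node]
    return out

print(neighbours("part:P-000123"))
```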

As you can see from the 2020 Gartner Hype Cycle for Artificial Intelligence, this technology is at the top of the hype and conceptually the way to manage a connected enterprise. The discussion in this post also demonstrates that besides technology, there is a lot of additional conceptual thinking needed before it can be implemented.

Although software vendors might handle the relations and datasets within their platform, the ultimate challenge will be sharing datasets with other platforms to get a connected ecosystem.

For example, the digital web picture shown above and introduced by Marc Halpern at the 2018 PDT conference shows this concept. Recently CIMdata discussed this topic in a similar manner: The Digital Thread is Really a Web, with the Engineering Bill of Materials at Its Center
(Note I am not sure if CIMdata has published a recording of this webinar – if so I will update the link)

Anyway, these are signs that we have started to find the right visuals to imagine new concepts. The traditional digital thread pictures, like the one below, are, for me, impressions of the past, as they are too rigid and focus on some particular value streams.

From a distance, it looks like a connected enterprise should work like our brain. We store information on different abstraction levels. We keep incredibly many relations between information elements. As the brain is a biological organ, connections degrade or get lost. Or the opposite: other relations become so strong that we cannot change them anymore. (“I know I am always right”)

Interestingly, the brain does not use the “single source of truth”-concept – there can be various “truths” inside a brain. This makes us human beings with all the good and the harmful effects of that.

As long as we realize there is no single source of truth.

In business and our technological world, we sometimes need the undisputed truth. Blockchain could be the basis for securing the right connections between datasets to guarantee the result is valid. I am curious if blockchain can scale to complex connected situations, although Moore’s Law might ultimately help us here too (if still valid).

The topic is not new – in 2014 I wrote a post with the title PLM is doomed unless …, where I introduced the topic of owning and sharing in the context of the human brain. In the post, I refer to the book On Intelligence by Jeff Hawkins, who tries to analyze what human intelligence is and how we could apply it to our technology concepts. Still a fascinating book, worth reading if you have the time and opportunity.

 

Conclusion

A data-driven approach requires a more granular definition of information, leading to the concepts of datasets and managing the relations between datasets. This is a fundamental difference compared to the past, where we were operating systems containing the information. Now we are heading towards connected platforms that provide a filtered set of real-time data to act upon.

I am curious to learn more about how people have solved the connected challenges and in what kind of granularity. Let us know!

 

 

My previous post introducing the concept of connected platforms created some positive feedback and some interesting questions. For example, the question from Maxime Gravel:

Thank you, Jos, for the great blog. Where do you see Change Management tool fit in this new Platform ecosystem?

is one of the questions I try to understand too. You can see my short comment in the comments here. However, while discussing with other experts in the CM-domain, we should paint the path forward. Because if we cannot solve this type of question, the value of connected platforms will be disputable.

It is essential to realize that a digital transformation in the PLM domain is challenging. No company or vendor has the perfect blueprint available to provide an end-to-end answer for a connected enterprise. In addition, I assume it will take 10 – 20 years until we are familiar with the concepts.

It took a generation to move from drawings to 3D CAD. It will take another generation to move from a document-driven, linear process to data-driven, real-time collaboration in an iterative manner. Perhaps we can move faster, as the Automotive, Aerospace & Defense, and Industrial Equipment industries are not the most innovative industries at this time. Other industries or startups might lead us faster into the future.

Although I prefer discussing methodology, I believe I need to clarify some more technical points before moving into that area. My apologies for writing it in such a simple manner; this information should be accessible to the majority of readers.

What does data-driven mean?

I often mention a data-driven environment, but what do I mean precisely by that? For me, a data-driven environment means that all information is stored in datasets that each contain a single aspect of information in a standardized manner, so it becomes accessible by outside tools.

A document is not a dataset, as often it includes a collection of datasets. Most of the time, the information it exposes is not standardized in such a manner that a tool can read and interpret the exact content. We will see that a dataset needs an identifier, a classification, and a status.

  • An identifier, to be able to create connections with other datasets – traceability or, in modern words, a digital thread.
  • A classification, as the classification will determine the type of information the dataset contains and potentially a set of mandatory attributes.
  • A status, to understand if the dataset is stable or still in work.

Examples of a data-driven approach – the item

The most common dataset in the PLM world is probably the item (or part) in a Bill of Material. The identifier is the item number (ID + revision if revisions are used). Next, the classification will tell you the type of part it is.

Part classification can be a topic on its own, and every industry has its taxonomy.

Finally, the status is used to identify if the dataset is shareable in the context of other information (released, in work, obsolete), allowing tools to expose only relevant information.

In a data-driven manner, a part can occur in several Bill of Materials – an example of a single definition consumed in other places.

When the part information changes, the accountable person has to analyze the relations to the part, which is easy in a data-driven environment. It is normal to find this functionality in a PDM or ERP system.

When the part would change in a document-driven environment, the effort is much higher.

First, all documents need to be identified where this part occurs. Then the impact of change needs to be managed in document versions, which will lead to other related changes if you want to keep the information correct.
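
To illustrate the data-driven side of this comparison, here is a minimal where-used sketch. Each BOM line is treated as a small dataset linking a parent assembly to a child part; the structures and identifiers are assumptions for illustration, not the data model of any particular PDM or ERP system.

```python
# Hypothetical where-used lookup: each BOM line is a dataset linking a parent to a child part.
bom_lines = [
    {"parent": "ASM-100", "child": "PRT-555", "qty": 2},
    {"parent": "ASM-200", "child": "PRT-555", "qty": 1},
    {"parent": "ASM-200", "child": "PRT-777", "qty": 4},
]

def where_used(part_id: str) -> list[str]:
    """Return all parent assemblies that consume the given part."""
    return [line["parent"] for line in bom_lines if line["child"] == part_id]

print(where_used("PRT-555"))  # -> ['ASM-100', 'ASM-200']
```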

Examples of a data-driven approach – the requirement

Another example illustrating the benefits of a data-driven approach is implementing requirements management, where requirements become individual datasets.  Often a product specification can contain hundreds of requirements, addressing the needs of different stakeholders.

In addition, several combinations of requirements need to be handled by other disciplines: mechanical, electrical, software, quality and legal, for example.

As requirements need to be analyzed and ranked, a specification document would never be frozen. Trade-off analysis might lead to dropping or changing a single requirement. It is almost impossible to manage this all in a document, although many companies use Excel. The disadvantages of Excel are known, in particular in a dynamic environment.

The advantage of managing requirements as datasets is that they can be grouped. So, for example, they can be pushed to a supplier (as a specification).

Or requirements could be linked to test criteria and test cases, without the need to manage documents and make sure you work with the last updated document.

As you will see, requirements also need to have an identifier (to manage digital relations), a classification (to allow grouping) and a status (in work / released / dropped).
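
A small sketch of what requirements as datasets could look like; the requirement texts, classifications and identifiers below are invented for illustration.

```python
# Hypothetical requirement datasets: identifier, classification and status allow grouping
# and linking without freezing a complete specification document.
requirements = [
    {"id": "REQ-001", "classification": "mechanical", "status": "released",
     "text": "Housing shall withstand 16 bar."},
    {"id": "REQ-002", "classification": "software",   "status": "in work",
     "text": "Valve position shall be reported every 100 ms."},
    {"id": "REQ-003", "classification": "mechanical", "status": "dropped",
     "text": "Housing shall be painted blue."},
]

# Group released mechanical requirements, e.g. to push to a supplier as a specification
supplier_spec = [r for r in requirements
                 if r["classification"] == "mechanical" and r["status"] == "released"]

# Link a requirement to a test case through identifiers instead of documents
test_links = [{"requirement": "REQ-001", "test_case": "TC-010"}]
```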

Data-driven and Models – the 3D CAD model

3D PDF Model

When I launched my series related to the model-based approach in 2018, the first comments I got came from people who believed that model-based equals the usage of 3D CAD models – see Model-based – the confusion. 3D Models are indeed an essential part of a model-based infrastructure, as the 3D model provides an unambiguous definition of the physical product. Just look at how most vendors depict the aspects of a virtual product using 3D (wireframe) models.

Although we use a 3D representation at each product lifecycle stage, most companies do not have a digital continuity for the 3D representation. Design models are often too heavy for visualization and field services support. The connection between engineering and manufacturing is usually based on drawings instead of annotated models.

I wrote about modern PLM and Model-Based Definition, supported by Jennifer Herron from Action Engineering – read the post PLM and Model-Based Definition here.

If your company wants to master a data-driven approach, this is one of the most accessible learning areas. You will discover that connecting engineering and manufacturing requires new technology, new ways of working and much more coordination between stakeholders.

Implementing Model-Based Definition is not an easy process. However, it is probably one of the best steps to get your digital transformation moving. The benefits of connected information between engineering and manufacturing have been discussed in the blog post PLM and Model-Based Definition

It is essential to realize that all these exciting capabilities linked to Industry 4.0 require a data-driven, model-based connection between engineering and manufacturing.

If this is not the case, the projected game-changers will not occur as they become too costly.

Data-driven and mathematical models

To manage complexity, we have learned that we have to describe behavior in models to make logical decisions. This can be done in an abstract model, purely based on mathematical equations and relations. For example, suppose you look at climate models, weather models or COVID infection models.

In that case, we see they all lead to discussions from so-called experts that believe a model should be 100 % correct and any exception shows the model is wrong.

It is not that the model is wrong; the expectations are false.

For less complex systems and products, we also use models in the engineering domain. For example, logical models and behavior models are all descriptive models that allow people to analyze the behavior of a product.

For example, how software code impacts the product’s behavior. Usually, we speak about systems when software is involved, as the software will interact with the outside world.

There can be many models related to a product, and if you want to get an impression, look at this page from the SEBoK wiki: Types of Models. The current challenge is to keep the relations between these models by sharing parameters.

The sharable parameters then again should be datasets in a data-driven environment. Using standardized diagrams, like SysML or UML, enables the objects used in the diagrams to become datasets.
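
A sketch of the idea that a shared parameter becomes a dataset consumed by several models; the parameter name, value and model references are assumptions for illustration.

```python
# Hypothetical shared parameter dataset, consumed by several models.
parameter = {
    "id": "PAR-042",
    "name": "max_operating_pressure",
    "value": 16.0,
    "unit": "bar",
    "status": "released",
}

# Different models refer to the same parameter dataset instead of copying its value
consumers = [
    {"model": "sysml:ValveBlockDiagram", "uses": "PAR-042"},
    {"model": "cad:HousingModel",        "uses": "PAR-042"},
    {"model": "simulation:BurstTest",    "uses": "PAR-042"},
]
```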

I will not dive further into the modeling details as I want to remain at a high level.

It is essential to realize that digital models should connect to a data-driven infrastructure by sharing relevant datasets.

What does data-driven imply?

 

I want to conclude this time with some statements to elaborate on further in upcoming posts and discussions.

  1. Data-driven does not imply there needs to be a single environment, a single database that contains all information. As I mentioned in my previous post, it will be about managing connected datasets in a federated manner. It is no longer about owning the data; it is about access to reliable data.
  2. Data-driven does not mean we do not need any documents anymore (read: electronic files). Likely, document sets will still be the interface to non-connected entities, suppliers, and regulatory bodies. These document sets can be considered a configuration baseline.
  3. Data-driven means that we need to manage data in a much more granular manner. We have to look differently at data ownership. It becomes more about data accountability per role, as the data can be used and consumed throughout the product lifecycle.
  4. Data-driven means that you need to have an enterprise architecture, data governance and a master data management (MDM) approach. So far, the traditional PLM vendors have not been active in the MDM domain as they believe their proprietary data model is leading. Read also this interesting McKinsey article: How enterprise architects need to evolve to survive in a digital world
  5. A model-based approach with connected datasets seems to be the way forward. Managing data in documents will become inefficient as they cannot contribute to any digital accelerator, like applying algorithms. Artificial Intelligence relies on direct access to qualified data.
  6. I don’t believe in Low-Code platforms that provide ad-hoc solutions on demand. The ultimate result after several years might be again a new type of spaghetti. On the other hand, standardized interfaces and protocols will probably deliver higher, long-term benefits. Remember: Low code: A promising trend or a Pandora’s Box?
  7. Configuration Management requires a new approach. The current methodology is very much based on hardware products with labor-intensive change management. However, the world of software products has different configuration management and change procedures. Therefore, we need to merge them in a single framework. Unfortunately, this cannot be the BOM framework due to the dynamics of software changes. An interesting starting point for discussion can be found here: Configuration management of industrial products in PDM/PLM

 

Conclusion

Again, a long post, slowly moving into the future with many questions and points to discuss. Each of the seven points above could be a topic for another blog post, a further discussion and debate.

After my summer holiday break in August, I will follow up. I hope you will join me in this journey by commenting and contributing with your experiences and knowledge.

 

 

 

 

I believe we are almost at the end of learning from the past. We have seen how, from an initial serial CAD-driven approach with PDM, we evolved to PLM-managed structures, the EBOM and the MBOM. Or to illustrate this statement, look at the image below, where I use a Tech-Clarity image from Jim Brown.

The image on the left shows the typical PDM approach: PDM feeding ERP in a linear process. The image on the right, which I believe is from 2004, perfectly describes the complementary roles of PLM and ERP and shows the best practice before digital transformation. PLM is supporting product innovation in an iterative approach, pushing released information to ERP for execution.

As I think in images, I like the concept of a circle for PLM and an arrow for ERP. I am always using those two images in discussions with my customers when we want to understand if a particular activity should be in the PLM or ERP-domain.

Ten years ago, the PLM-domain was conceptually further extended by introducing support for products in operation and service. Similar to the EBOM (engineering) and the MBOM (manufacturing), the SBOM (service) was introduced to support product information for products in operation. In theory, a fully connected circle.

Asset Lifecycle Management

At the same time, I was promoting PLM-practices for owners/operators to enhance Asset Lifecycle Management. My first post from June 2010, PLM for Asset Lifecycle Management and Asset Development, introduces this approach.

Conceptually, the SBOM and Asset Lifecycle Management have a lot in common. There is a designed product, in this case an asset (plant, machine) running in the field, and we need to make sure operators have the latest information about the asset. And in case of asset changes, which can be a maintenance operation, a repair or a complete overhaul, we need to be sure the changes are based on the correct information from the as-built environment. This requires full configuration management.

Asset changes can be based on extensive projects that need to be treated like new product development projects, with a staged approach that can take weeks, months, sometimes years. These activities are typical activities performed in PLM-systems, not in MRO-systems that are designed to manage the actual operation. Again here we see the complementary roles of PLM (iterative) and MRO (execution).

Since 2008, I have worked a lot in this environment, mainly in the nuclear and process industry. If you want to learn more about this aspect of PLM, I recommend looking at the PLMpartner website, where Bjørn Fidjeland, in cooperation with SharePLM, published a course on Plant Information Management. We worked together on several projects, and Bjørn has made a great effort to describe the logical model to be used, instead of a function-feature story.

Ten years ago, we were not calling this concept the “Digital Twin,” as the aim was to provide end-to-end support of asset information from engineering, procurement, and construction towards operation in a coordinated manner. The breaking point in the relation between the EPCs and Owner/Operators is the data-handover – how much of your IP can/do you expose and what is needed. Nowadays, we would call striving for end-to-end data continuity the Digital Thread.

Hot from the press in this context, CIMdata just published a commentary, Managing the Digital Thread in Global Value Chains, describing Eurostep’s ShareAspace capabilities and experiences in managing an end-to-end information flow (Digital Thread) in a heterogeneous environment based on exchange standards like ISO 10303-239 PLCS. Their solution is based on what I consider a more modern approach for managing digital continuity compared to the traditional approach I described before. Compare the two images in this paragraph: the first image represents the old/current way with a disconnected handover, the second represents ShareAspace’s connected approach based on a real digital thread.

The Service BOM

As discussed with Asset Lifecycle Management, there is a disconnect between the engineering disciplines and operations in the field, looking from the point of view of an Asset owner/operator.

Now when we look from the perspective of a manufacturing company that produces assets to be serviced, we can identify a different dataflow and a new structure, the Service BOM (SBOM).

The SBOM provides information on how a product needs to be serviced. What are the parts that require service, and what are the service kits that are possible for that product? For that reason, service engineering should be done in parallel to product engineering. When designing a product, the engineer needs to identify which are the wearing parts (they always require service in time) and which parts might be serviceable.

There are different ways to look at the SBOM. Conceptually, the SBOM could be created in close relation with the EBOM. At the moment you define your product, you should also specify how the product will be serviced. See the image below.

From this example, it is clear that part standardization and modularization have a considerable benefit for services downstream. What if you have only one serviceable part that applies to many products? The number of parts to have in stock will be strongly reduced, instead of having many similar parts that each fit only a single product.

Depending on the type of product, the SBOM can be generic, serving many products in the field. In that case, the company has to deal with catalogs, to be defined in PLM. Or the SBOM can be aligned with the As-Built of a capital product in the field. In that case, the concepts of Asset Lifecycle Management apply. Click on the image to see a clear picture.

The SBOM on its own, in such an environment, will have links to specific documents, service instructions, operating manuals.

If your PLM-system allows it, extending the EBOM and MBOM with an SBOM is not a complex effort. What is crucial to understand is that the SBOM has its own lifecycle, which can even last longer than the active product sold. So sometimes manufacturing specifications related to service parts need to be maintained too, creating a link between the SBOM and potential MBOM(s).
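
As a rough illustration of how an SBOM could sit next to the EBOM and MBOM with its own lifecycle, here is a minimal sketch; the structures and identifiers are assumptions, not a prescription for any PLM system.

```python
# Hypothetical parallel structures for the same product (identifiers are illustrative).
ebom = {"id": "EBOM-PUMP-01", "status": "released",
        "lines": ["PRT-IMPELLER", "PRT-SEAL", "PRT-HOUSING"]}

mbom = {"id": "MBOM-PUMP-01", "realizes": "EBOM-PUMP-01", "status": "released",
        "lines": ["PRT-IMPELLER", "PRT-SEAL", "PRT-HOUSING", "PKG-BOX"]}

# The SBOM has its own lifecycle: it can stay active after the product is no longer sold,
# and its service kits may still point to manufacturing specifications of spare parts.
sbom = {"id": "SBOM-PUMP-01", "serves": "EBOM-PUMP-01", "status": "active",
        "service_kits": [{"kit": "KIT-SEAL", "parts": ["PRT-SEAL"],
                          "manufacturing_ref": "MBOM-PUMP-01"}]}
```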

ECM = Enterprise Change Management

When I discussed ECM in my previous post in the context of Engineering Change Management, I got the feedback that nowadays, everyone talks about Enterprise Change Management. Engineering Change Management is old school.

In the past, and even in a 2014 benchmark, a customer had two change management systems: one in PLM and one in ERP, and companies were looking into connecting these two processes. Like the BOM-interaction between PLM and ERP, this is, technology-wise, never a real problem.

The real problem in such situations was to come to a logical flow of events. Many times the company insisted that every change should start from the ERP-system, “as we like to standardize”. This meant that even an engineering change had to be registered first in the ERP-system.

Luckily the reach of PLM has grown. PLM is no longer the engineering tool (IT-system thinking). PLM has become the information backbone for product information all along the product lifecycle. Having the MBOM and SBOM available through a PLM-infrastructure allows organizations to streamline their processes.

Aras – digital thread through connected structures

And in this modern environment, enterprise change management might take place mostly in a PLM-infrastructure. A PLM-infrastructure that provides a digital thread, as the Aras picture above illustrates, gives the full traceability to support configuration management.

However, we still have to remember that configuration management and engineering change management, first of all, are based on methodology and processes. Next, the combination of tools to be used will vary.

I would like to conclude this topic with a quote from Lee Perrin’s comment on my previous blog post:

I would add that aerospace companies implemented CM, to avoid fatal consequences to their companies, but also to their flying customers.

PLM provides the framework within which to carry out Configuration Management. CM can indeed be carried out without PLM, as was done in the old paper-based days. As you have stated, PLM makes the whole CM process much more efficient. I think more transparent too.

Conclusion

After nine posts around the theme Learning from the past to understand the future, I walked through the history of CAD, PDM and PLM in a fast mode, pointing to practices and friction points. In the blogging space, it is hard to find this information as most blog posts are coming from software vendors explaining why their tool is needed. Hopefully, these series have helped many of you to understand a broader context. Now I want to focus on the future again in my upcoming blog posts.

Still, feel free to contact me and discuss methodology topics.

Picture by Christi Wijnen – a good friend and photographer in the Netherlands

In the previous seven posts, learning from the past to understand the future, we have seen the evolution from manual 2D drawing handling, next the emergence of ERP and CAD, followed by data management systems (PDM/PLM) and methodology (EBOM/MBOM), to create an infrastructure for product data from concept towards manufacturing.

Before discussing the extension to the SBOM-concept, I first want to discuss Engineering Change Management and Configuration Management.

ECM and CM – are they the same?

Often when you talk with people in my PLM bubble, the terms Change Management and Configuration Management are mixed or not well understood.

When talking about Change Management, we should clearly distinguish between OCM (Organizational Change Management) and ECM (Engineering Change Management). In this post, I will focus on Engineering Change Management (ECM).

When talking about Configuration Management, here too we find two interpretations of it.

The first one is a methodology describing technically how, in your PLM/CAD-environment, you can build connected data structures in the most efficient way, representing all product variations. This technology varies per PLM/CAD-vendor, and therefore I will not discuss it here. The other interpretation of Configuration Management is described on Wiki as follows:

Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.

This is also the area I will focus on this time.

And as if great minds think alike and are synchronized, I was happy to see Martijn Dullaart’s recent blog post, referring to a poll and follow-up article on CM.

Here Martijn precisely touches the topic I address in this post. I recommend reading his post, Configuration Management done right = Product-Centric, first and then continuing with the rest of this article.

Engineering Change Management

Initially, engineering change management was a departmental activity performed by engineering to manage the changes in a product’s definition. Other stakeholders are often consulted when preparing a change, which can be minor (affecting, for example, only engineering) or major (affecting engineering and manufacturing).

The way engineering change management has been implemented varies a lot. Over time, companies all around the world have defined their own change methodology, and there is a lot of commonality between these approaches. However, terminology such as revision, version, major change and minor change might all vary.

I described the generic approach for engineering change processes in my blog post: ECR / ECO for Dummies from 2010.

The fact that companies have defined their own engineering change processes is not an issue when it works and is done manually. The real challenge came with PDM/PLM-systems that need to provide support for engineering change management.

Do you leave the methodology 100 % open, or do you provide business logic?

I have seen implementations where an engineer, with a right-click, could release an assembly without any constraints. Related drawings might not exist, parts in the assembly might not be released, and more. To obtain a reliable engineering change management process, the company had to customize the PLM-system to its desired behavior.

An excellent exercise for a system integrator, as there was always a discussion with end-users who do not want to be restricted in case of an emergency (“we will complete the definition later” / “too many clicks” / “do I have to approve 100 parts?”). In many cases, the system integrator kept on customizing the system to adapt to all wishes. Often, the engineering change methodology on paper was not complete or contained contradictions when trying to digitize the processes.

For that reason, the PLM-vendors that aim to provide Out-Of-The-Box solutions have been trying to predefine certain behaviors in their systems. For example, you cannot release a part when its specifications (drawings/documents) are not released. Or, you cannot update a released assembly without creating a new revision.

These rules speed up the implementation; however, they require more OCM (Organizational Change Management), as naming and methodology probably have to change within the company. This is the continuous battle in PLM-implementations, in particular where the company has a strong legacy or lacks business understanding when implementing PLM.

There is an excellent webcast in this context on Minerva PLM TV – How to Increase IT Project Success with Organizational Change Management.

Click on the image or link to watch this recording.

Configuration Management

When we talk about configuration management, we have to think about managing the consistency of product data along the whole product lifecycle, as we have seen from the Wiki-definition before.

Wiki – the configuration Activity Model

Configuration management existed long before we had IT-systems. Therefore, configuration management is more a collection of activities (see diagram above) to ensure the consistency of information for any given product. Consistent during design, where requirements match product capabilities. Consistent with manufacturing, where the manufacturing process is based on the correct engineering specifications. And consistent with operations, meaning that we have the full definition of the product in the field, the As-Built, in correct relation to its engineering and manufacturing definition.

Source: Configuration management in aerospace industry

This consistency is crucial for products where the cost of an error can have a massive impact on the manufacturer. The first industries that invested heavily in configuration management were the Aerospace and Defense industries. Configuration management is needed in these industries as the products are usually complex, and failure can have a fatal impact on the company. Combined with many regulatory constraints, managing the configuration of a product and the impact of changes is a discipline on its own.

Other industries have also introduced configuration management nowadays. The nuclear power industry and the pharmaceutical industry use configuration management as part of their regulatory compliance. The automotive industry requires configuration management partly for compliance, but mainly driven by quality targets; an accident or a recall can be costly for a car manufacturer. Other manufacturing companies all have their own configuration management strategies, mainly depending on their own risk assessment.

Configuration management is a proactive discipline – it costs money: time, people and potentially tools to implement it. In my experience, many of these companies try to do “some” configuration management, always hoping that a real disaster will not (or cannot) happen. Proper configuration management allows you to perform a reliable impact analysis for any change (image above).

What happens in the field?

When introducing PLM in mid-market companies, often the dream was that with the new PLM-system, configuration management would be there too.

Management believes the tools will fix the issue.

Partly because configuration management deals with a structured approach on how to manage changes, there was always confusion with engineering change management. Modern PLM-systems all have an impact analysis capability. However, most of the time, this impact analysis only reaches the content that is in the PLM-system. Configuration Management goes further.

If you think that configuration management is crucial for your company, start educating yourselves first before implementing anything in a tool. There are several places where you can learn all about configuration management.

  • Probably the best-known organization is IpX (Institute for Process Excellence), teaching the CM2 methodology. Have a look here: CM2 certification and courses
  • Closely related to IpX, Martijn Dullaart shares his thoughts coming from the field as Lead Architect for Enterprise Configuration Management at ASML (one of the Dutch crown jewels) in his blog: MDUX
  • CMstat, a configuration and data management solution provider, provides educational posts from their perspective. Have a look at their posts, for example, PLM or PDM or CM
  • If you want to have a quick overview of Configuration Management in general, targeted for the mid-market, have a look at this (outdated) course: Training for Small and Medium Enterprises on CONFIGURATION MANAGEMENT. Good for self-study to get an understanding of the domain.

 

To summarize

In regulated industries, Configuration Management and PLM are a must to ensure compliance and quality. Configuration management and (engineering) change management are, first of all, required methodologies that guarantee the quality of your products. The more complex your products are, the higher the need for change and configuration management.

PLM-systems require embedded engineering change management – part of the PDM domain. Performing Engineering Change Management in a system is something many users do not like, as it feels like overhead. Too much administration or too many mouse clicks.

So far, there is no golden egg that performs engineering change management automatically. Perhaps in a data-driven environment, algorithms can speed up change management processes. Still, there is a need for human decisions.

Similar to configuration management. If you have a PLM-system that connects all the data from concept, design, and manufacturing in a single environment, it does not mean you are performing configuration management. You need to have processes in place, and depending on your product and industry, the importance will vary.

Conclusion

In the first seven posts, we discussed the design and engineering practices, from CAD to EBOM, ending with the MBOM. Engineering Change Management and, in particular, Configuration Management are methodologies to ensure the consistency of data along the product lifecycle. These methodologies are connected and need to be fit for the future – more on this when we move to modern model-based approaches.

Closing note:

While finishing this blog post today, I read Jan Bosch’s post: Why you should not align. Jan touches the same topic that I try to describe in my series Learning from the Past …, as my intention is to make us aware that by holding on to practices from the past, we are blocking our future. Highly recommended to read his post – a quote:

The problem is, of course, that every time you resist change, you get a bit behind. You accumulate some business, process and technical debt. You become a little less “fitting” to the environment in which you’re operating

In my last post, My four picks from PLMIF, I ended with the remark that the discussion related to the Multiview BOM concept was not complete. The session presented by James Roche focused on the Aerospace & Defense domain and touched the surface. There is a lot of confusion related to best practices associated with BOM-handling, sometimes created to promote unique vendor capabilities or to hide system complexity.

Besides, we need to consider the past as, in particular for PLM, the burden of legacy processes and data is significant. Some practices even come from the previous, paper-based century, later mixed with behavior from 3D CAD-systems.

Therefore, to understand the future, I will take you through the past to understand why certain practices were established. Next, in a few upcoming posts, I want to explain the evolution of BOM-practices. How each new technology step introduced new capabilities that enabled companies to improve their product delivery process.

I will describe the drawing approach (for PLM – the past), the item-centric approach (for PLM – the current), and the model-driven approach (for PLM – the future). How big this sequence will become is not clear at this stage.

Whenever I come close to 1200 – 1500 words, I will stop and conclude. Based on my To-do list and your remarks, I will continue in a follow-up post.  The target will be to have a vendor-neutral collection of information to help you identify your business and the next possible steps.

Working with drawings

MRP/ERP – the first IT-system

For this approach, I go back fifty years in time, when companies were starting to work with their first significant IT-system, the MRP-system. MRP stands for Material Requirements Planning. This system became the heart of the company, scheduling production. The extension to ERP (Enterprise Resource Planning) quickly after made it possible to schedule other resources and, essential for the management, to report financials. Now execution could be monitored by generating all kinds of reports.

Still, the MRP/ERP-system was wholly disconnected from the engineering world, as the image below shows. Let us have a look at how this worked at that time.

The concept

Products have never been designed from scratch by jumping to drawings. In the concept phase, a product was analyzed, mainly on its mechanical behavior. Was there anything else at that time? Many companies owe their existence to a launching product which someone, most of the time the founder of the company, invented in a workshop. The company then improved and enriched this product by starting from the core product, creating enhancements in various areas of applicability.

These new ideas were shared through sketches and prototypes.

The design

The detail design of a product is delivered by a technical documentation set, often a package of manufacturing drawings containing a parts list on the drawing and assembly instructions. Balloon numbers are used to indicate parts in an assembly or section view. In addition, there are the related fabrication drawings. The challenge of this approach is that all definitions must be unique and complete to avoid ambiguity, which could lead to manufacturing errors.

The parts list contains make-parts, supplier parts, and standard parts. The make-parts are specified again by manufacturing drawings, identified by a number that uniquely identifies the correct drawing version. A habit here: Part Number = Drawing Number (+ revision).

As the part is identified by a drawing, the part most of the time got an “intelligent” part number and a revision – intelligent to support easy recognition, and a revision because, in the end, we do not want to generate a new part number when there is an evolution of the part. Read more about this in What the FFF is happening and “Intelligent” part numbers?

The standardized parts can be either company standard parts or external standard parts. There is a difference between them.

A company standard part could be a certain bracket or a frame – anything that the company decided to standardize on for its own products. Company standard parts are treated like make-parts; they have an identifier related to their manufacturing drawings. Again, here the habit: Part Number = Drawing Number (+ revision).

The supplier part is coming from a supplier that manufactures this part based on the supplier or market specifications. You can specify this part by using the supplier’s catalog number or refer to the standard.

For example, a part that has been specified under a certain ISO/ANSI/DIN-standard, such as a stainless-steel bolt M8 x 1,25 x 20, meaning a metric bolt with a nominal diameter of 8 mm, a thread pitch of 1,25 mm, and a length of 20 mm. You specify the standard part according to the standard. Purchasing will decide where to buy this part.

Manufacturing Preparation

This is the most inefficient stage when working in a traditional drawing approach. At this stage, the information provided in drawings needs to be entered into the MRP/ERP-system to start production. This is the place where information is thrown over the wall, as some might say.

This means a person needs to create process steps in the ERP-system based on the drawing information. For each manufacturing step, there needs to be a reference to the right drawing. Most ERP-systems have a placeholder where you can type the drawing number(s). Later, when companies were using CAD, there could be a reference to a file.

The part number in the ERP-system might be the same as the drawing number; however, the ERP-system requires unique numbers. In the beginning, ERP-systems were the number-generator for new parts. The unique number was often 6 to 7 digits in size, because it fits in our human short-term memory.

The parts list on the drawings had to be entered in the ERP-system too. A manual operation that often required additional research from the manufacturing engineer. As the designer might have specified the SS Bolt M8 x 1,25 x 20 as such, manufacturing preparation has to search in the ERP-system for the company’s part number.

Suppliers have to be sourced for outside manufactured make-parts. In case you do not want to depend on a single supplier, you have to send drawings and specifications to the supplier before the product is released. The supplier will receive a drawing number with revision and status warning.

If everything worked well the first time, there would be no iterations between engineering and manufacturing preparation. However, this is a utopia: prototype changes, potential manufacturing issues will require changes in the drawings. These changes require updates in the drawings, which will lead to new versions. How do you keep consistency between all identifiers?

Manufacturing

During manufacturing, orders are processed based on information from the ERP-system. The shop floor gets the drawing provided through the link in ERP. Sometimes there are issues during manufacturing. In coordination with engineering, some adaptations will be made to the manufacturing process, e.g., a changed fit or tolerance. Instead of going back to engineering to provide a new documentation set, the relevant drawings are redlined. Engineering will update these drawings whenever they touch them in the future (yeah, yeah).

Configuration Management

But will they update them? Perhaps already a new version existed due to the product’s evolution. Everything needs to be coordinated manually. Smaller companies heavily rely on people knowing things and talking together.

Larger companies cannot work in the same manner; therefore, they introduce procedures to guarantee that the information flow is consistent and accurate. Here the practices from configuration management come in.

There are many flavors of configuration management. Formal CM was first used in the 1950s to control the technical documentation for complex space and weapons systems. (Source ESA CM initiative for SME’s – © 2000) We will see it come back in future posts dealing with more complex products and the usage of computer systems.

Last year I wrote a few times about PLM and configuration management (PLM and CM – a happy marriage?), not relevant at this moment as there is no PLM yet.

Where is the BOM?

As you might have noticed, there has been no mention of a BOM so far. At this stage, there is only one Bill of Materials, managed in the ERP-system. The source of the BOM is the various parts lists on the drawings, completed with manual additions.

Nobody talks in this stage about an EBOM or MBOM as there is only one BOM, a kind of hybrid BOM, where manufacturing steps were driving the way parts are grouped. Because the information was processed step by step, why would you like to have a multilevel BOM or a BOM tree?
Note: The image on the left was one of my first images in 2008 when I started my blog.

Summary

Working with drawings introduced “intelligent” part numbers, as the documents had to be identified by manual interpretation. The intelligence of the part number was there to prevent people from making mistakes, as the number already was a kind of functional identifier. Combined with a revision and versioning in the number, nothing could go wrong if handled consistently.

The disadvantage was that new employees had to master the numbering system. Next, there was the risk for all employees that a released drawing would not change its status; only manual actions (retract/replace) would avoid mistakes. And then, there are the disconnected redlined drawings.

The “drawing number equals part number” relation created a constraint that will be hard to maintain in the future. Therefore, you should worry if you still work according to the above principles.

Conclusion

I reached the 1500 words – a long story – probably far from complete. I encourage readers to provide enhancements that might be relevant in the comments. This post might look like a post for dummies. However, to understand what is applicable to the future, we first need to understand why certain practices have been defined in the past.
I am looking forward to your comments and enhancements to make this a relevant stream of public information for all.

One week ago, Yoann Maingon wrote an innocent post with the question: Has FFF killed? The question was raised related to a 2014 problem at GM, where a changed part was causing fatal accidents.

The discussion was started by Yoann, and here is my short extract. Assuming this problem was a configuration management issue, Yoann somehow indicated that the problem might be related to the fact that ERP-systems do not carry a revision on the part number, leading to an unnoticed change. Therefore, he assumes there is a disconnect between the PLM-side (where we have parts with multiple lifecycle states and revisions) and ERP (where we have an industrial lifecycle – prototype/production).

He posted his thoughts, and then LinkedIn exploded (currently 116 comments), which means it is a topic that is of significant concern in our community. Next, if you read the comments, there are different viewpoints:

  • What does FFF really imply?
  • What about revisions of parts?
  • What are the best practices?

Let’s investigate these viewpoints with some comments.

What does FFF really imply?

When we talk about FFF in engineering, we mean Form, Fit and Function – the three primary characteristics to describe a part (source: Wikipedia)

  • Form refers to such characteristics as external dimensions, weight, size, and visual appearance of a part or assembly. This is the element of FFF that is most affected by an engineer’s aesthetic choices, including enclosure, chassis, and control panel, that become the outward “face” of the product.
  • Fit refers to the ability of the part or feature to connect to, mate with, or join to another feature or part within an assembly. The “fit” allows the part to meet the required assembly tolerances to be useful.
  • Function is a criterion that is met when the part performs its stated purpose effectively and reliably. In an electronics product, for example, a function can depend on the solid-state components used, the software or firmware, and quite often on the features of the electronics enclosure selected.

One of the comments in Yoann’s post referred to Safe/Unsafe as a potential functional characteristic. I think this addition is not needed. Safety should be a requirement for the part, not a characteristic.

FFF was and still is an approach for engineers to decide if a new, improved version of the part would get a revision or needs a new part number.

I think before we dive deeper into the other viewpoints, it is crucial to define the part number a little more.

In a correct PLM data model, there are two types of part numbers. First, the internal part number that your company uses inside its engineering Bill of Materials to identify a part. This part number can be a meaningless number, only there to provide uniqueness inside the company.

In 2015 I wrote several posts related to best practices and data modeling for PLM. The most relevant posts to this discussion are here:

The part number can specify a part that needs to be manufactured according to specification, or it can be a part that needs to be purchased from an available supplier/manufacturer. The manufacturer part number is, most of the time, a meaningful number (6 – 7 characters), as these parts need to be ordered by your company. The manufacturer part number is the SKU for the manufacturer. As you can imagine, in the manufacturer’s catalog there isn’t a revision mentioned. In graphics, see the image below:

Your company might sell Product MP-323121 (note: the ID is meaningful to help the customer to order the product).

Internally there is a related EBOM that specifies the product. The EBOM top part is O122 (note: here, we can use a meaningless identifier as all is digitally connected).

For the manufacturing of O122, we need to resolve the EBOM according to its specifications. Therefore, for Part O124, the company needs to decide to purchase from their approved manufacturers either part ABC-21231 or XYZ-88818 (note: again, a meaningful ID as these companies are not digitally connected).

Now coming back to the FFF-discussion. For the orange parts, with a meaningful ID, no revision exists. However, if Assembly O122 changes but remains 100% FFF compatible, the Product ID MP-323121 will not change. It allows your company to optimize the EBOM and/or MBOM, meanwhile keeping 100% compatibility to the outside world. (Note: the same principle applies to the two manufacturers for Part O124.)
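
To make the distinction between meaningless internal identifiers and meaningful external ones a bit more concrete, here is a small sketch reusing the identifiers from the example above; the structure itself is an assumption for illustration.

```python
# Internal EBOM with meaningless identifiers (digitally connected, no intelligence needed).
product = {"sales_id": "MP-323121", "ebom_top": "O122"}   # sales id is meaningful for customers

ebom = {
    "O122": {"children": ["O123", "O124"]},
    "O123": {"children": []},
    "O124": {"children": []},
}

# Approved manufacturers for purchased part O124: meaningful external IDs, no revisions.
approved_manufacturers = {
    "O124": ["ABC-21231", "XYZ-88818"],
}

def resolve_for_manufacturing(part_id: str) -> list[str]:
    """Return the approved manufacturer parts that can fulfill a purchased part."""
    return approved_manufacturers.get(part_id, [])

print(resolve_for_manufacturing("O124"))  # -> ['ABC-21231', 'XYZ-88818']
```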

In case Top Assembly O122 has new or changed parts – what should happen there?

At that moment, the definition has changed. The definitions, most of the time described in documents/drawings/models, are related information to the BOM. Therefore, the Top Assembly O122 should get a new identifier. There is no need to call it a revision; it is a new dataset in the PLM-system, again with a meaningless identifier, as we are connected digitally.

What about revisions of parts?

Of course, the management of changes existed long before PLM-systems were introduced.

The specifications of a part were defined in drawings. The drawing contained all the information, not only the geometry definitions, but also specifications on how to manufacture the part.

For complex products, a considerable set of consistently related drawings would be released to manufacturing – a release process with physical signatures on it.

At the same time, there was no discussion: the drawing represents the part. And as there was no digital connection, part numbers/drawing numbers were meaningful, often with the format of the drawing as part of the identifier.

In case changes were needed, for example, fixing a dimension or tolerance as discovered during manufacturing, the drawing had to be revised to remain consistent. First, in the original drawing, the issue or change was marked in red (redlining). Then engineering had to create a new version of the drawing.

Depending on the impact of the change (here the FFF-principle comes in), people decided if a new part number was needed (an FFF-change) or if the change only required an update of the drawing(s), meaning a revision. If the difference was small (for example, adding a missing annotation), it could be called a minor change, all to be reflected in the drawing number, which equals the part number in this approach. So, when we talk about revisions of parts, we are talking about a document change.
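The decision rule from this drawing-centric era can be summarized in a few lines of illustrative Python. The enumeration values and function name are mine, purely to show the logic; the rule itself is the one described above.

```python
from enum import Enum, auto

class ChangeImpact(Enum):
    FFF_AFFECTED = auto()        # form, fit or function no longer interchangeable
    DEFINITION_CHANGED = auto()  # e.g. a corrected dimension or tolerance
    COSMETIC = auto()            # e.g. a missing annotation added

def change_result(impact: ChangeImpact) -> str:
    """Drawing-centric rule: drawing number == part number,
    so a 'revision of a part' is in reality a document change."""
    if impact is ChangeImpact.FFF_AFFECTED:
        return "new part number (and therefore a new drawing number)"
    if impact is ChangeImpact.DEFINITION_CHANGED:
        return "revision of the drawing(s)"
    return "minor revision of the drawing(s)"
```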

A lousy practice stemming from that approach is that manufacturing often just redlines a drawing and keeps the redlined drawing as their source, because it is too time-consuming or difficult to update the source drawing(s) through a change process. Engineering is not aware of this change, and when a later change comes through from engineering, these “fixes” might be missed as there is no traceability.

Generic example of a PLM data model and its relations

When PLM-systems were introduced, of course, companies did not want to disrupt their existing ways of working. Therefore, they asked the PLM-editors to enable revisions on parts, and so the PLM-editors did (and still do).

Decoupling of parts and documents in a PLM data model

However, if you want to use the PLM-system in the best manner, you need to “decouple” the concept that the part number equals the drawing number, combined with the possibility to start using meaningless identifiers, as relations between parts and drawings are managed in the PLM-system through relational links.
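A sketch of what this decoupling could look like as a data model is given below. It is deliberately simplified and not based on any specific PLM system; all identifiers are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    doc_id: str           # meaningless identifier, independent of any part number
    doc_type: str         # e.g. "3D model", "drawing", "specification"
    revision: str = "A"   # the document can be revised without touching the part

@dataclass
class Part:
    part_id: str                                                        # meaningless identifier
    describing_documents: List[Document] = field(default_factory=list)  # relational links

# One part can be described by several documents, and one document (e.g. an assembly
# drawing) can be linked to several parts - the relation carries the meaning, not the number.
housing = Part("P-000812",
               describing_documents=[Document("D-104233", "3D model"),
                                     Document("D-104234", "drawing")])
```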

Relevant posts related to the PLM data model are:

What are the best practices?

As some people mentioned in their comments to Yoann’s post: why do we have to answer this question, as all is already well understood and described in best practices? I agree with that statement: Best Practices exist – so how do we obtain them?

First, there is the whole framework of Configuration Management, which existed long before PLM-systems were introduced. If you follow their methodology, you can be (almost) guaranteed your information is consistent and correct. Configuration Management is crucial in areas where the impact of an error is enormous, like the GM-example Yoann referred to. Also, companies in the Aerospace and Defense industry are the ones that have strict configuration management in place.

Configuration management does not come for free. It requires an investment in skills, potentially a change in ways of working, and it introduces overhead. Manufacturing companies that create less “risky” products often focus more on optimizing (= reducing) the cost of their internal processes instead of investing in proper methodologies to manage consistency.

If you want to learn more about CM, investigate the Institute of Process Excellence (IPX), the founders of the CM2 framework for Enterprise Configuration Management, and much more. Note: their knowledge does not come for free, which I can understand. However, it also creates a barrier for a company’s further investment in CM, as these kinds of strategic investments are hard for individuals in a company to sell at the management level.

In the context of CM, I advise you to follow Martijn Dullaart, who is quite active in our social community. His latest blog post related to this thread is: It’s about Interchangeability and Traceability

With the introduction of PLM-systems, these companies and the PLM-editors created the opportunity to implement configuration management in their system.

The data inside the system would be the “single version of the truth.” Unfortunately, this was most of the time just a sales strategy, falsely giving the impression that information is now under control. Last year I wrote several posts related to the relation between PLM and CM, starting from PLM and Configuration Management – a happy marriage?

If you are interested in another resource for information related to these topics, have a look at the website of Jörg Eisenträger, who also collected his best practices for PLM and CM for sharing (thanks to Paul van der Ree for the link).

Don’t expect best practices from your PLM-vendors as their role is to sell software. It is the continuous discussion between:

  • A PLM-system that forces companies to work according to embedded methodology (hard to sell/implement but idealistically correct)

And

  • A flexible PLM-system that allows you to build and configure anything (easy to sell/challenging to implement correctly, depending on “wise” decisions)

The Future

Even though most companies are working drawing-centric, with or without a linked PLM-backbone for BOM-management, the next upcoming challenge is to evolve to model-based practices. The current CM-practices still talk about documents, although documents are already electronic datasets in that context. The future, however, of the model-based enterprise evolves around connected models – 3D models, but also simulation and software models – with different lifecycles and a different pace of change. For the model-based enterprise, we need to develop digital best practices that guarantee the same level of quality, however executed and/or supported by Artificial Intelligence (AI). AI is needed as human beings cannot physically analyze and understand the full impact of a change in such an environment.

Conclusion

The FFF-discussion illustrates that building a consistent framework within PLM is not an easy goal to achieve. My blog buddy Oleg Shilovitsky would claim that we consultants create the complexity. PLM-editors will never solve this complexity; it is up to your company to invest in knowledge to understand why and how to reduce it. With this post and the related links and discussions, I hope more clarity will help you to make “wise” decisions.

This time a post that has been on the table for a long time already – the importance of having established processes, in particular when implementing PLM. By nature, most people hate processes as they might give the idea that their personal creativity is limited, whereas large organizations love processes, as for them this is the way to guarantee predictable performance. So let’s have a more in-depth look.

Where processes shine

In a transactional world, processes can be implemented like algorithms, assuming the data to be processed has the right quality. That is why MRP (Material Requirement Planning) and ERP (Enterprise Resource Planning) don’t have the mindset of personal creativity. It is about optimized execution driven by financial and quality goals.

When I started my career in the early days of data management, before it was called PDM/PLM, I learned that there is a need for communication related to product data. Terms like revisions and versions started to pop up, combined with change processes. Some companies began to talk about configuration management.

Companies were not thinking PLM along the whole lifecycle. It was more PDM for engineering and ERP for manufacturing. Where PDM was ultimately a document-control environment, ERP was the execution engine relying on documented content, but not necessarily connected. Unfortunately, this is still the case at many companies, and it has to do with the mindset. Traditionally, a company’s performance has been measured based on financial reporting coming from the ERP system. Engineering was an unmanageable cost in the eyes of the manufacturing company’s management and the ERP-software vendors.

In the middle of the nineties (previous century now!), I had a meeting with an ERP country manager to discuss a potential partnership. The challenge was that he had no clue about the value of and complementary need for PLM. Even after discussing with him the differences between iterative product development (with revisioning) and linear execution (on the released product), his statement was:

“Engineers are just resources that do not want to be managed, but we will get them”

Meanwhile, I can say this company has changed its strategy, giving PLM a space in their portfolio combined with excellent slides about what could be possible.

To conclude: for linear execution, the meaning of processes is more or less close to algorithms, and where there is no algorithm, the individual steps in place are predictable, each with their own KPIs.

Process certification

As I mentioned in the introduction, processes were established to guarantee a predictable outcome, in particular when it comes to quality. For that reason, in the previous century when globalization started, companies were somehow forced to get ISO 900x certified. The idea behind these certifications was that a company had processes in place to guarantee an expected outcome, and for when they failed, they would have procedures in place to fix these gaps. Companies were doing this because there was no social internet yet to name and shame bad companies; having ISO 900x certification would be the guarantee of delivering quality. In the same perspective, we could see configuration management: a system of best practices to guarantee that product information was always correct.

Certification was and is heaven for specialized external auditors and consultants. To get certified, you needed to invest people and time to describe your processes, and once these processes were defined, there were regular external audits to ensure the quality system was being followed. The beauty of this system: the described procedures were more or less “best intentions”, not enforced. When the auditor came, the company had to play some theater that processes were followed, the auditor would find some improvements for next year, and management was happy that certification was passed.

This has changed early this century. In particular, mid-market companies were no longer motivated to keep up this charade. The quality process manual remained as a source of inspiration, but external audits were no longer needed. Companies were globally connected and reviewed, so reputation could be sourced easily.

The result: there are documented quality procedures, and there is reality. The more disconnected employees became in a company due to mergers or growth, the more individual best practices became the way to deliver the right product and quality, combined with accepted errors and fixes downstream or later. The hidden cost of poor quality is still a secret within many companies. Talking with employees, they all have examples where their company lost a lot of money due to quality mistakes. Yet in less regulated industries, there is no standard approach, like CAPA (Corrective And Preventive Actions), APQP or 8D, to solve it.

Configuration Management and Change Management processes

When it comes to managing the exact definition of a product, either an already manufactured product or products that are currently made, there is a need for Configuration Management. Before there were PLM systems, configuration management was done through procedures defining configurations based on references to documents with revisions and versions. In the aerospace industry, separate systems for configuration management were developed to ensure the exact configuration of an aircraft could be retrieved at any time. Less regulated industries used a more document-based procedural approach, as strict as possible. You can read about the history of configuration management and PLM in an earlier blog post: PLM and Configuration Management – a happy marriage?

With the introduction of PDM and PLM-systems, more and more companies wanted to implement their configuration management, and in particular their change management, inside the system, as changes are always related to product information that can reside in a PLM-system. The change of a part can be proposed (ECR), analyzed and approved, leading to an implementation of the change (ECO), which is based on changed specifications, designs (3D Models / Drawings) and more. You can read the basics here: The Issue and ECR/ECO for Dummies (Reprise)
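As an illustration of such a flow, here is a minimal sketch of an ECR/ECO lifecycle as a state machine. The states and transitions are a simplified example, not a standard; real implementations add analysis tasks, impact matrices and role-based approvals.

```python
from enum import Enum, auto

class ChangeState(Enum):
    ECR_PROPOSED = auto()
    ECR_ANALYZED = auto()
    ECR_APPROVED = auto()
    ECO_IN_WORK = auto()
    ECO_IMPLEMENTED = auto()
    REJECTED = auto()

# Allowed transitions: propose -> analyze -> approve (or reject) -> implement via an ECO
# that carries the changed specifications, designs (3D models/drawings) and more.
TRANSITIONS = {
    ChangeState.ECR_PROPOSED: {ChangeState.ECR_ANALYZED, ChangeState.REJECTED},
    ChangeState.ECR_ANALYZED: {ChangeState.ECR_APPROVED, ChangeState.REJECTED},
    ChangeState.ECR_APPROVED: {ChangeState.ECO_IN_WORK},
    ChangeState.ECO_IN_WORK:  {ChangeState.ECO_IMPLEMENTED},
}

def advance(current: ChangeState, target: ChangeState) -> ChangeState:
    """Move a change to the next state only if the transition is allowed."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"{current.name} -> {target.name} is not a valid transition")
    return target
```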

The Challenge (= Problem) of Digital Processes

More and more companies are implementing change processes fully in PLM, and this is the point that creates the most friction in a PLM implementation. The beauty of digital change processes is that they can be foolproof. No change goes unnoticed, as everyone is forced to follow the predefined procedures: either a type of fast track in case of lightweight (= low-risk) changes, or the full change process when the product is already in a mature state.

Like with the ISO 900x processes, the PLM-implementer often plays the role of the consultancy firm that needs to advise the company on how to implement configuration management and change processes. The challenge here is that the company most of the time does not have a standard view of their change processes, and for sure the standard change management inside PLM is not identical to their processes.

Here the battle starts….

Management believes that digital change processes, preferably out-of-the-box, are crucial to implement, whereas users feel their job becomes more an administrative job than a creative job. Users that create information don’t want to be bothered with the decisions for numbering and revisioning.

They expect the system to do that easily for them – which does not happen, as old procedures, responsibilities, and methodologies do not align with the system. Users are not measured or challenged for data quality; they are measured on the work they deliver that is needed now. Let’s first get the work done before we make sure all is consistently defined in the PLM-system.

Digital Transformation allows companies to redefine the responsibilities for users related to the data they produce. It is no longer a 3D Model or a drawing, but a complete data set with properties/attributes that can be shared and used for analysis and automation.
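To give a feeling for what “a complete data set with properties/attributes” could mean in practice, here is a tiny, hypothetical illustration. Attribute names and roles are invented; the point is only that each piece of data can carry its own accountable role instead of living anonymously inside a file.

```python
from dataclasses import dataclass

@dataclass
class DatasetAttribute:
    name: str
    value: str
    accountable_role: str   # who is accountable for this piece of data, not who "owns the file"

# A part described as granular, shareable attributes rather than one monolithic document:
part_attributes = [
    DatasetAttribute("mass_kg", "0.42", accountable_role="mechanical engineer"),
    DatasetAttribute("surface_treatment", "anodized", accountable_role="manufacturing engineer"),
    DatasetAttribute("unit_cost_eur", "3.80", accountable_role="procurement"),
]
```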

Conclusion

Implementing digital processes for PLM is the most painful, but required step for a successful implementation. As long as data and processes are not consistent, we can keep on dreaming about automation in PLM. Therefore, digital transformation inside PLM should focus on new methods and responsibilities to create a foundation for the future. Without an agreement on the digital processes there will be a growing inefficiency for the future.

 

Image: waitbutwhy.com

Two weeks ago I wrote about the simplification discussion around PLM – Why PLM never will be simple. There I focused on the fact that even sharing information in a consistent, future-proof way of working is already challenging, despite easy-to-use communication tools like email or social communities.

I mentioned that sharing PLM data is even more challenging due to their potential revision, version, status, and context.  This brings us to the topic of configuration management, needed to manage the consistency of information, a challenge with the increasingly sophisticated products or systems. Simple tools will never fix this complexity.

To manage the consistency of a product,  configuration management (CM) is required. Two weeks ago I read the following interesting post from CMstat: A Brief History of Configuration Management Software.

An excellent introduction if you want to know more about the roots of CM, be it that towards the end the post starts to flush out all the disadvantages and reasons why you should not think about CM using PLM systems.

The following part amused me:

 The Reality of Enterprise PLM

It is no secret that PLM solutions were often sold based in good part on their promise to provide full-lifecycle change control and systems-level configuration management across all functions of the enterprise for the OEM as well as their supply and service chain partners. The appeal of this sales stick was financial; the cost and liability to the corporation from product failures or disasters due to a lack of effective change control was already a chief concern of the executive suite. The sales carrot was the imaginary ROI projected once full-lifecycle, system-level configuration control was in effect for the OEM and supply chain.

Less widely known is that for many PLM deployments, millions of budget dollars and months of calendar time were exhausted before reaching the point in the deployment road map where CM could be implemented. It was not uncommon that before the CM stage gate was reached in the schedule, customer requirements, budget allocations, management priorities, or executive sponsors would change. Or if not these disruptions within the customer’s organization, then the PLM solution provider, their software products or system integrators had been changed, acquired, merged, replaced, or obsoleted. Worse yet for users who just had a job to do was when solutions were “reimagined” halfway through a deployment with the promise (or threat) of “transforming” their workflow processes.

Many project managers were silently thankful for all this as it avoided anyone being blamed for enterprise PLM deployment failures that were over budget, over schedule, overweight, and woefully underwhelming. Regrettably, users once again had to settle for basic change control instead of comprehensive configuration management.

I believe the CMstat writer is generalizing too much and is preaching to their own parish. Although my focus lies on PLM, I have also learned the importance of CM, and for that reason I will share a view on CM from the PLM side:

Configuration Management is not a target for every company

The origins of Configuration Management come from the Aerospace and Defense (A&D) industries. These industries have high quality, reliability and traceability constraints. In simple words: you need to prove your product works correctly as specified in all described circumstances, and keep this consistent along the lifecycle of the product.

Moreover, imagine you delivered the perfect product; implementing changes next requires a full understanding of the impact of the change. What is the impact of the change on the behavior or performance? In A&D, the question is: is it still safe and reliable?

Somehow PLM and CM are enemies. The main reason why PLM-systems are used is Time to Market – bringing a product as fast as possible to the market with acceptable quality. Being first is sometimes more important than high quality. CM is considered a process that slows down Time to Market, as managing consistency and continuously validating take time and effort.

Configuration Management in aviation is crucial, as everyone understands that you cannot afford to discover a severe problem during a flight. All the required verification and validation efforts make CM a costly process along the product lifecycle. Airplane parts are 2–3 times more expensive than potentially the same parts used in other industries. The main reason: airplane parts are tested and validated for all expected conditions along their lifecycle. Other industries do not spend so much time on validation. They validate only where issues can hurt the company, either for liability or for costs.

Time to Market even impacts the aviation industry, as we can see from the commercial aircraft battle(s) between Boeing and Airbus. Who delivers the best airplane (size/performance) at the right moment in the global economy? The Airbus 380 seemed to miss its future targets – too big, not flexible enough. The Boeing 737 MAX appears to target a market sweet spot (fuel economy); however, the recent tragic accidents with this plane seem to be caused by Time to Market pressure to certify the aircraft too early. Or is the complexity of a modern airplane unmanageable?

CM based on PLM-systems

Most companies had their configuration management practices long before they started to implement PLM. These practices were most of the time documented in procedures, leading to all kinds of coding systems for these documents. Drawing numbers (the specification of a part/product), specifications, parts lists – all had a meaningful identifier combined with a version/revision and status. For example, the Philips 12NC coding system is famous in the Netherlands and is still used among spin-offs of Philips and their suppliers, as it offers a consistent framework to manage configurations.

Storing these documents into a PDM/PLM-system to provide centralized access was not a big problem; however, companies also expected the PLM-system to support automation and functionality to support their configuration management procedures.

A challenge for many implementers for several reasons:

  • PLM-systems do not offer a standard way of working – if they did, they could only serve a small niche market – so the system needs to be “configured/customized.”
  • Company configuration management rules sometimes cannot be mapped to the provided PLM data model and its internal business logic. This has led to costly customizations where, in the best case, implementer and company agreed somewhere in the middle. Worst case, as the writer of the CM blog mentions, it becomes an expensive, painful project.
  • Companies do not have a consistent configuration management framework as Time to Market is leading – “we will fix CM later” is the idea, and they let their PLM-implementer configure the PLM-system as well as possible. Still, at the management level, the value of CM is not recognized.
    (see also: PLM-CM-ALM – not sexy ?)

In companies that I worked with, those who were interested in a standardized configuration management approach were trained in CMII. CMII (or CM2) is a framework supported by most PLM-systems, sometimes even as a pre-configured template to speed-up the implementation. Still, as PLM-systems serve multiple industries, I would not expect any generic PLM-vendor to offer Commercial Off-The-Shelf (COTS) CM-capabilities – there are too many legacy approaches. You can find a good and more in-depth article related to CMII here: Towards Integrated Configuration Change Management (CMII) from Lionel Grealou.

 

What’s next?

Current configuration management practices are very much based on the concept of managing documents. However, products are more and more described in a data-driven, model-based approach. You can find all the reasons why we are moving to a model-based approach in my blog post from last year. Important to realize is that current CM practices in PLM were designed with mechanical products and their lifecycles as a base. With the combination of hardware and software, integrated and with different lifecycles, CM has to be reconsidered with a new holistic concept. The Institute of Process Excellence provides CM2 training but is also active in developing concepts for the digital enterprise.

Martijn Dullaart, Lead Architect Configuration Management @ ASML & Chair @ IPE/CM2 Global Congress, has published several posts related to CM and a model-based approach – you can find them via his LinkedIn profile. As you can read from his articles, organizations are trying to find a new consistent approach.

Perhaps CM as a service to a Product Innovation Platform, as the CMstat blog post suggests? (quote from the post below)

In Part 2 of this CMsights series on the future of CM software we will examine the emerging strategy of “Platform PLM” where functional services like CM are delivered via an open, federated architecture comprised of rapidly-deployable industry-configured applications.

I am looking forward to Part 2 of CMsights. An approach that makes sense to me, as system boundaries will disappear in a digital enterprise. It will be more critical in the future to create consistent data flows in the right context and based on data with the right quality.

Conclusion

Simple tools and complexity need to be addressed in the right order. Aligning people and processes efficiently to support a profitable enterprise remains the primary challenge for every enterprise. Complex products, more dependent on software than hardware, are requiring new ways of working to stay competitive. Digitization can help to implement these new ways of working. Experienced PLM/CM experts know the document-driven past. Now it is time for a new generation of PLM and CM experts to start from a digital concept and build consistent and workable frameworks. Then the simple tools can follow.

 

Shaping the PLM platform of the Future

In this post, my observations from the PDT 2014 Europe conference, which was hosted in the Microsoft Conference Center in Paris and organized by Eurostep and CIMdata.

It was the first time I attended this event. I was positively surprised by the audience and content. Where other PLM conferences often focus more on current business issues, here a smaller audience (130 persons) looked in more detail at the future of PLM. Themes like PLM platforms, the Circular Economy, Open Standards and longevity of data were presented and discussed here.

The emergence of the PLM platform

Peter Bilello from CIMdata kicked off with his presentation: The emergence of the PLM platform. Peter explained we have to rethink our PLM strategy for two main reasons:

1.  The product lifecycle will become more and more circular due to changing business models, and in parallel the different usage/availability of materials will have an impact on how we design and deliver products

2.  The change towards digital platforms at the heart of our economy (The Digital Revolution as I wrote about also in previous posts) will impact organizations dramatically.

Can current processes and tools support today’s complexity? And what about tomorrow? According to a CIMdata survey, there is a clear difference in profit and performance between leaders and followers, and the gap is increasing faster. “Can you afford to be a follower?” is a question companies should ask themselves.

Rethinking the PLM platform does not bring a 2–3% efficiency benefit but can bring benefits of 20% and more.

Peter sees a federated platform as a must for companies to survive. I in particular liked his statement:

The new business platform paradigm is one in which solutions from multiple providers must be seamlessly deployed using a resilient architecture that can withstand rapid changes in business functions and delivery modalities

Industry voices on the Future PLM platform

Auto

Steven Vetterman from ProSTEP talked about PLM in the automotive industry. Steven started describing the change in the automotive industry by quoting Heraclitus: Τα πάντα ρεί – the only constant is change. Steven described two major changes in the automotive industry:

1.  The effect of globalization, technology and laws & ecology

2.  The change of the role of IT and the impact of culture & collaboration

An interesting observation is that the preferred automotive market will shift to the BRIC countries. In 2050, more than 50% of the world population (an estimated almost 10 billion people at that time) will be living in Asia and 25% in Africa. Europe and Japan are aging; they will not invest in new cars.

For Steven, it was clear that current automotive companies are not yet organized to support and integrate modern technologies (systems engineering / electrical / software) beyond mechanical designs. Neither are they open for a true global collaboration between all players in the industry. Some of the big automotive companies are still struggling with their rigid PLM implementation. There is a need for open PLM, not driven from a single PLM system, but based on a federated environment of information.

Aero

Yves Baudier spoke on behalf of the aerospace industry about the standardization effort at their Strategic Standardization Group around Airbus and some of its strategic suppliers, like Thales, Safran, BAE systems and more. If you look at the ASD Radar, you might get a feeling for the complexity of standards that exist and are relevant for the Airbus group.

standards at airbus

It is a complex network of evolving standards, all providing (future) benefits in some domains. Yves talked about through-lifecycle support, which strives for creating data once and reusing it many times during the lifecycle. The conclusion from Yves, like that of all the previous speakers, is that the PLM Platform of the Future will be federative, and standards will enable PLM interoperability.

Energy and Marine

Shefali Arora from Wärtsilä spoke on behalf of the energy and marine sector and gave an overview of the current trends in their business and the role of PLM at Wärtsilä. With PLM, Wärtsilä wants to capitalize on its knowledge, drive costs down and, above all, improve business agility, as the future is in flexibility. Shefali gave an overview of their PLM roadmap covering the aspects of PDM (with Teamcenter), ERP (SAP) and a PLM backbone (Share-A-space), the PLM backbone providing connectivity of data between all lifecycle stages and external partners (customers/suppliers) based on the PLCS standard. Again, another session demonstrating that the future of PLM is in an open and federated environment.

Intermediate conclusion:
The future PLM platform is a federated platform which adheres to standards, provides openness of interfaces that permit the platform to be reliable over multiple upgrade cycles, and is able to integrate third parties (Peter Bilello)

Systems Engineering

In the afternoon session, I followed the Systems Engineering track. Peter Bilello gave an overview of Model-Based Systems Engineering and illustrated, based on a CIMdata survey, that even though many companies have a systems engineering strategy in place, it is not applied consistently. And indeed, several companies I have been dealing with recently expressed their desire to integrate systems engineering into their overall product development strategy. Often this approach is confused with believing that requirements management plus product development equals systems engineering. Still a way to go.

Dieter Scheithauer presented his vision that Systems Engineering should be a part of PLM, and he gave a very decent, academic overview of how it all is related. Important for companies that want to go in that direction: you need to understand what you are aiming at. I liked his comparison of a system product structure and a physical product structure, helping companies to grasp the difference between a virtual, system view and a physical product view:

system and product

More Industry voices

Construction industry

The afternoon session started with Christophe Castaing, explaining BIM (Building Information Modeling) and the typical characteristics of the construction industry. Although many construction companies focus on the construction phase, of every 100 pieces of information/exchange to be managed during the full life cycle, only 5 will be managed during the initial design phase (BIM), 20 will be managed during the construction phase (BAM) and finally 75 will be managed during the operation phase (BOOM). I wrote about PLM and BIM last year: Will 2014 become the year the construction industry will discover PLM?

Christophe presented the themes from the French MINnD project, where the aim is to start from an Information Model and come to a platform, supporting and integrated with the particular civil and construction standards, like IFC and CityGML, but also the PLCS standard (ISO 10303-239).

Consumer Products

Amir Rashid described the need for PLM in the consumer product markets, stating the circular economy as one of the main drivers. Especially in consumer markets, product waste can be extremely high due to the short lifetime of the product, and everything is scrapped to landfill afterward. Interesting quote from Amir: Sustainability’s goal is to create possibilities, not to limit options. He illustrated how Xerox has had sustainability as part of its product development since 1984. The diagram below demonstrates how the circular economy can impact all business today when well-orchestrated.

circular economy

Marc Halpern closed the tracks with his presentation around Product Innovation Platforms, describing how Product Design and PLM might evolve in the upcoming digital era. Gartner believes that future PLM platforms will provide insight (understand and analyze Big Data), adaptability (flexible to integrate and maintain through an open service-oriented architecture), promotion of reuse (identifying similarity based on metadata and geometry), discovery (the integration of search, analysis and simulation) and finally community (using the social paradigm).

If you look at current PLM systems, most of them are far from this definition, and if you support Gartner’s vision, there is still a lot of work for PLM vendors to do.

Interestingly, Marc also identified five significant risks that could delay or prevent companies from implementing this vision:

  • inadequate openness (pushing back open collaboration)
  • incomplete standards (blocking implementation of openness)
  • uncertain cloud performance (the future is in cloud services)
  • the steep learning curve (it is a big mind shift for companies)
  • Cyber-terrorism (where is your data safe?)

After Marc’s session there was an interesting panel discussion with some of the speakers from that day, briefly answering and discussing questions from the audience. As the presentations had been fairly technical, it was logical that the first question that came up was: What about change management?

A topic that could fill the rest of the week but the PDT dinner was waiting – a good place to network and digest the day.

DAY 2

Day 2 started with two interesting topics. The first presentation was a joint presentation from Max Fouache (IBM) and Jean-Bernard Hentz (Airbus – CAD/CAM/PDM R&T and IT Backbones). The topic was the obsolescence of information systems: hardware and PLM applications. In the aerospace industry, some data needs to be available for 75 years. You can imagine that during 75 years a lot can change to hardware and software systems. At Airbus, there are currently 2500 applications, provided by approximately 600 suppliers, that need to be maintained. IBM and Airbus presented a Proof of Concept done with virtualization of different platforms supporting CATIA V4/V5 using Linux, Windows XP, W7 and W8, which is just a small part of all the data.

The conclusion from this session was:

To benefit from PLM of the future, the PLM of the past has to be managed. Migration is not the only answer. Look for solutions that exist to mitigate risks and reduce costs of PLM Obsolescence. Usage and compliance to Standards is crucial.

Standards

Next, Howard Mason, Corporate Information Standards Manager, took us on a nice journey through the history of standards developed in his business. I loved his statement: Interoperability is a right, not a privilege.

In the systems engineering track, Kent Freeland talked about Nuclear Knowledge Management and CM in Systems Engineering. As this is one of my favorite domains, we had a good discussion on the need for proactive Knowledge Management, which somehow implies a CM approach through the whole lifecycle of a plant. Knowledge management is not the same as storing information in a central place. It is about building and providing data in context so that it can be used.

Ontology for systems engineering

Leo van Ruijven provided a session for insiders: an ontology for Systems Engineering based on ISO 15926-11. His simplified approach, compared to ISO 15288, led to several discussions between supporters and opponents during lunchtime.

Master Data Management

After lunchtime, Marc Halpern gave his perspective on Master Data Management, a new buzzword or a discipline needed to orchestrate enterprise collaboration.

Based on the types of information companies want to manage in relation to each other, supported by various applications (PLM, ERP, MES, MRO, …), this can be a complex exercise, and Marc ended with recommendations and an action plan for the MDM lead. In my customer engagements, I also see more and more that digital transformation leads to MDM questions. Can we replace Excel files with mastered data in a database?


Almost at the end of the day, I spoke about the platform of the future, targeted at the people of the future. Here I highlighted the fundamental change in skills that is upcoming. Where my generation was trained to own and capture as much information as possible in our brain (or cabinet), future generations are trained and skilled in finding data and building information out of it. Owning (information) is not crucial for them, perhaps because the world is moving fast. See the nice YouTube movie at the end.


Ella Jamsin ended the conference on behalf of the Ellen MacArthur Foundation, explaining the need to move to a circular economy and that PLM should play a role in that. No longer is PLM from cradle-to-grave; PLM should support the lifecycle from cradle-to-cradle.

Unfortunately, I could not attend all sessions as there were several parallel sessions, nor have I written about all sessions I attended. The PDT Europe conference, a conference for people who care about the details of future PLM concepts and the usage of standards, is a must for future strategists.

Some weeks ago, PLMJEN asked my opinion on Peter Schroer’s post and invitation to an ARAS webinar called: Change Management: One Size Will Never Fit All. Change Management is actually a compelling topic, and I realized I had never dedicated a post to such an essential subject. The introduction from Peter was excellent:

Change management is the toughest thing inside of PLM. It’s also the most important.

For the rest, the post elaborated further on software capabilities and the value of having template processes for various industry practices. I share that opinion when talking to companies that are starting to establish their processes. It is extremely rare that an existing company will change its processes towards the more standard processes delivered by the PLM system when implementing a new system. The rule of thumb is People, Processes and Tools. This is all nicely explained by Stephen Porter in his latest blog post: Beware the quick fix – successful PLM deployment strategies. As I was not able to attend the webinar, here are my more general thoughts related to change management and why it is essential for PLM.

Change Management has always been there

It is not that PLM invented change management. Before companies started to use ERP and PDM systems, every company had to deal with managing changes. At that time, their business was mostly local and, compared with today, slow. “Time to market” was more a “time to region” issue. Engineering and manufacturing were operating from the same location. Change management was a personal responsibility supported by (paper) documents and individuals. Only with the growing complexity of products, growing and global customer demands, and increasing regulatory constraints did it become impossible to manage change in an unstructured manner.

Survival of the fittest change organization

I have worked with several companies where change management was a running Excel business. Running can be interpreted in two ways: the current operation could not stop, step back and look into an improvement cycle, and a lot of people were running to collect, check and validate information in order to make change estimates and decisions based on the collected data.

When a lot of people are running, it means your business is at risk. A lot of people means the costs for data (re)search and handling are higher than at competitors who can do this automatically. Also in countries with low labor costs, a lot of people running becomes a threat at a certain moment. In addition, running people can make mistakes or provide insufficient information, which leads to the wrong decisions.

Wrong decisions can be costly. Your product may become too expensive; your project may be delayed significantly because decisions were based on conflicting information between disciplines or suppliers. Additional iterations to fix these issues lead to a longer time to market. Late discoveries can lead to severely high costs. For certain, when the product has already been released to the market, the cost might be tremendous.

From the other side, if making changes becomes difficult because the data has to be collected from various sources through human intervention, organizations might try to avoid making changes.

Somehow this is also an indirect death penalty. The future is for companies that are able to react quickly at any time and implement changes.

The analogy is with a commercial aircraft and a fighter plane. Let’s take the Airbus 380 in mind and a modern fighter jet, the Joint Strike Fighter (JSF). The Airbus 380 brings you comfortably from A to B, as long as A and B are well-prepared places to land. The flight is comfortable as the plane is extremely stable. It is a well-planned trip with an aversion to changes of trajectory.

The JSF airplane is by definition an unstable plane. It is only by its computer steering control that the plane behaves stably in the air. The built-in instability makes it possible to react as quickly as possible to unforeseen situations, preferably faster than the competition. This is a solution designed for change.

Based on your business you all should admire the JSF concept and try to understand where it is needed in your organization.

Why is change management integrated in PLM so important?

If we consider where changes appear the most, it is evident that most changes occur in the early lifecycle of the product. And as long as they are in the virtual world, with costs not yet committed to the product, they are relatively cheap. To my surprise, many engineering companies and engineering departments work with change management only outside their own environment. Historically, because outside their environment – connected to prototyping or production – the costs of change are the highest. And our existing ERP system has an Engineering Change process – so let’s use that.

Meanwhile, engineering is used to working with the best-so-far information. At any moment, every discipline stores its data in a central repository. This could be a directory structure or a PDM system. Everyone is looking at the latest data. Files are overwritten with the latest versions. Data in the PDM system shows the latest version to all users. Hallelujah.

And this is the place where it goes wrong. A mechanical engineer has overlooked a requirement in the specification that has been changed. Yes, the latest version of the 20-page document is there. An electrical engineer has defined a new control system for the engine, but has not noticed that the operating parameters of the motor have been changed. Typical examples where a best-so-far environment creates visibility, but the individual user can no longer understand the impact of a change (especially when additional sites perform the engineering work).

Here comes the value of change management in PLM. Change management in PLM can be lightweight in the early design phases, providing checks on changes (baselines) and notifications to the disciplines involved. Approval processes are more about agreeing on which changes to implement and on their impact on all disciplines.

As PLM supports the product definition through the whole product lifecycle, change management at each stage can have its particular behavior: in the early stages a focus on notifications and visibility of change, later checking the impact based on the maturity of the various disciplines, and finally, when running into production and materials commitment, a strict and organized change mechanism. It is only in a PLM system that this gradual flow can be supported seamlessly.
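A simple way to picture this gradual flow is a mapping from product maturity to change-management behavior, as in the illustrative sketch below. The stages and policies are examples of the idea, not a prescription.

```python
from enum import Enum, auto

class Maturity(Enum):
    CONCEPT = auto()
    DETAILED_DESIGN = auto()
    RELEASED = auto()
    IN_PRODUCTION = auto()

def change_policy(maturity: Maturity) -> str:
    """Lightweight in the early, virtual phases; strict once materials are committed."""
    if maturity is Maturity.CONCEPT:
        return "baseline the change and notify the affected disciplines"
    if maturity is Maturity.DETAILED_DESIGN:
        return "check the impact per discipline before agreeing on the change"
    if maturity is Maturity.RELEASED:
        return "formal ECR/ECO with cross-discipline approval"
    return "strict ECO including manufacturing and materials impact"
```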

Change Management and ERP

As mentioned before, most manufacturing companies have implemented change management in ERP, as the costs of change are the highest when the product capabilities are committed. However, the ERP system is not the place to explore and iterate towards further improved solutions. The ERP system can be the trigger for a change process based on production issues; however, the full implementation of the change requires a change in the product definition, the area where PLM is strong.

NOTE: on purpose, I am not mentioning a change in the engineering definition, as in some cases the engineering definition might remain the same and only the manufacturing process or materials need to be adapted. PLM supports these iterations; it is not an ERP execution matter.

Change Management and Configuration Management

So far we have been discussing how the manufacturing system would be able to deliver products based on the right engineering definition. As each specific product might not have its individual definition checked at any time, there is the need for configuration management (CM). Properly implemented configuration management assures there is a consistent relationship between how the product is specified and defined and the way it is produced. Read a refined and precise explanation on the wiki.

In one of my following posts, I will focus on configuration management practices and why PLM systems and Configuration Management are like Siamese twins.

Conclusion:

Storing your data in a (PLM) system only has value if you are able to keep the actual status of the information and its context. Only then can a person make the right decisions immediately and with the right accuracy. The more systems or manual data handling, the less competitive your company will be. Integrated and lean change management means survival!
