You are currently browsing the category archive for the ‘Asset Lifecycle Management’ category.

After the series about “Learning from the past,” it is time to start looking towards the future. I learned from several discussions that I am probably working most of the time with advanced companies. I hope this will motivate companies that lag behind to look into the future even more.

If you look into the future for your company, you need new or better business outcomes. That should be the driver for your company. A company does not need PLM or a Digital Twin. A company might want to reduce its time to market or improve collaboration between all stakeholders. These objectives can be realized by different ways of working and an IT-infrastructure that allows these processes to become digital and connected.

That is the “game.” Coming back to the future of PLM: we do not need a discussion about definitions; I leave this to the academics and vendors. We will see the same applies to the concept of a Digital Twin.

My statement: the digital twin is not new. Everybody can have their own digital twin as long as they interpret the definition their own way. Does this sound like the PLM definition discussion?

The definition

I like to follow the Gartner definition:

A digital twin is a digital representation of a real-world entity or system. The implementation of a digital twin is an encapsulated software object or model that mirrors a unique physical object, process, organization, person, or other abstraction. Data from multiple digital twins can be aggregated for a composite view across a number of real-world entities, such as a power plant or a city, and their related processes.

As you see, not a narrow definition. Now we will look at the different types of interpretations.

Single-purpose siloed Digital Twins

  1. Simple – data only

One of the most straightforward applications of a digital twin is, for example, my Garmin Connect environment. When cycling, my device registers performance parameters (speed, cadence, power, heartbeat, location). After every trip, I can analyze my performance, see changes in my overall performance, and compare my performance with others in my category (weight, age, sex).

Based on that, I can decide if I want to improve my performance. My personal business goal is to maintain and improve my overall performance, knowing I cannot stop aging by upgrading my body.
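To make this “data only” twin tangible, below is a minimal sketch of how such ride data could be analyzed. The field names and numbers are purely illustrative, not the actual Garmin data model:

```python
# A minimal, illustrative sketch (not the actual Garmin data model) of the
# "data only" twin: ride records mirror my physical performance over time.
rides = [
    {"date": "2020-10-01", "avg_power_w": 185, "avg_hr_bpm": 142},
    {"date": "2020-10-10", "avg_power_w": 192, "avg_hr_bpm": 139},
    {"date": "2020-10-24", "avg_power_w": 198, "avg_hr_bpm": 138},
]

def fitness_trend(rides: list) -> list:
    """Watts per heartbeat: a rough proxy for aerobic fitness per ride."""
    return [round(r["avg_power_w"] / r["avg_hr_bpm"], 3) for r in rides]

print(fitness_trend(rides))  # rising values suggest improving performance
```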

On November 4th, 2020, I am participating in the (almost virtual) Digital Twin conference organized by Bits&Chips in the Netherlands. In the context of human performance, I look forward to Natal van Riel’s presentation: Towards the metabolic digital twin – for sure, this direction is not simple. Natal is a full professor at the Eindhoven University of Technology, in Eindhoven, the “smart city” of the Netherlands.

  2. Medium – data and operating models

Many connected devices in the world use the same principle. An airplane engine, an industrial robot, a wind turbine, a medical device, a train carriage: all track their performance through this connection between physical and virtual, using some sort of digital connectivity.

The business case here is also monitoring performance, predicting maintenance, and upgrading the product when needed.

This is the domain of Asset Lifecycle Management, a practice that has existed for decades. Based on financial and performance models, the optimal balance between maintenance and overhaul has to be found. Repairs are disruptive and can be extremely costly. A manufacturing site that cannot produce can cost millions per day. Connecting data between the physical and the virtual model allows us to have real-time insights and be proactive. It becomes a digital twin.
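A minimal sketch of this principle: measured values from the physical asset are checked against the designed operating envelope of its virtual counterpart. The parameter names and limits below are hypothetical:

```python
# A minimal sketch of the "data and operating model" twin: readings from the
# physical asset are checked against its designed operating envelope.
# Parameter names and limits are hypothetical.
DESIGN_ENVELOPE = {"bearing_temp_c": (20, 85), "vibration_mm_s": (0, 4.5)}

def check_asset(readings: dict) -> list:
    """Return the parameters drifting outside their designed range."""
    alerts = []
    for param, value in readings.items():
        low, high = DESIGN_ENVELOPE[param]
        if not low <= value <= high:
            alerts.append(f"{param}={value} outside designed range [{low}, {high}]")
    return alerts

# A proactive signal before the physical asset fails:
print(check_asset({"bearing_temp_c": 91, "vibration_mm_s": 3.2}))
```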

  3. Advanced – data and connected 3D model

The digital twin we see the most in marketing videos is a virtual twin, using a 3D-representation for understanding and navigation. The 3D-representation provides a Virtual Reality (VR) environment with connected data. When pointing at the virtual components, information might appear, or some animation takes place.

Building such a virtual representation is a significant effort; therefore, there needs to be a serious business case.

The simplest business case is to use the virtual twin for training purposes. A flight simulator provides a virtual environment and behavior as if you are flying in the physical airplane – the behavior model behind the simulator should match the real behavior as closely as possible. However, as it is a model, it will never be 100 % reality and requires updates when new findings or product changes appear.

A virtual model of a platform or plant can be used for training on Standard Operating Procedures (SOPs). In the physical world, there is no place or time to conduct such training. Here, the complexity might be lower: there is a 3D Model; however, serious updates can only be expected after a major maintenance or overhaul activity.

These practices are not new either and are used in places where the physical training cannot be done.

More challenging is the Augmented Reality (AR) use case. Here the virtual model, most of the time a lightweight 3D Model, connects to real-time data coming from other sources. For example, AR can be used when an engineer has to service a machine. The AR-environment might project actual data from the machine and indicate service points and service procedures.

The positive side of the business case is clear for such an opportunity, ensuring service engineers always work with the right information in a real-time context. The main obstacles for implementing AR in reality are access to the data, the presentation of the data, and keeping the data in the AR-environment matching reality.

And although there are 3D Models in use, they are, to my knowledge, always created in silos, not yet connected to their design sources. Have a look at the Digital Twin conference from Bits&Chips, as mentioned before.

Several of the cases mentioned above will be discussed there. The conference’s target is to share real cases, concluded with Q&A sessions – crucial for a virtual event.

Connected Virtual Twins along the product lifecycle

So far, we have been discussing the virtual twin concept, where we connect a product/system/person in the physical world to a virtual model. Now let us zoom in on the virtual twins relevant for the early parts of the product lifecycle, the manufacturing twin, and the development twin. This image from Siemens illustrates the concept:

On slides, such a completely integrated framework is presented as the future vision. Let us first zoom in on the individual connected twins.

The digital production twin

This is the area of virtual manufacturing, creating a virtual model of the manufacturing plant. Virtual manufacturing planning is not a new topic; DELMIA (Dassault Systèmes) and Tecnomatix (Siemens) have been offering virtual manufacturing planning solutions for a long time.

At that time, the business case was based on the fact that defining a manufacturing plant and process virtually allows you to optimize the plant before investing in physical assets.

This saves money, as there is no costly prototype phase needed to optimize production. In a virtual world, you can perform many trade-off studies without extra costs. That was the past (and for many companies still the current situation).

With the need to be more flexible in manufacturing to address individual customer orders without increasing the overhead of delivering these customer-specific solutions, there is a need for a configurable plant that can produce these individual products (batch size 1).

This is where the virtual plant model comes into the picture again. Instead of having a virtual model to define the ultimate physical plant, now the virtual model remains an active model to propose and configure the production process for each of these individual products in the physical plant.

This is partly what Industry 4.0 is about. Using a model-based approach to configure the plant and its assets in a connected manner. The digital production twin drives the execution of the physical plant. The factory has to change from a static factory to a dynamic “smart” factory.

In the domain of Industry 4.0, companies are reporting progress. However, in my experience, the main challenge is still that the product source data is not yet built in a model-based, configurable manner, therefore requiring manual rework. This is the area of Model-Based Definition, and I have written about this aspect several times. Latest post: Model-Based: Connecting Engineering and Manufacturing

The business case for this type of digital twin, of course, is to be able to deliver customer-specific products with extremely competitive speed and reduced cost compared to standard approaches. It could be your company’s survival strategy. Although it is hard to predict the future, as we see from COVID-19, it is still crucial to anticipate it instead of waiting.

The digital development twin

Before a product gets manufactured, there is a product development process. In the past, this was purely mechanical, with some electronic components. Nowadays, many companies are actually manufacturing systems, as the software controlling the product plays a significant role. In this context, the model-based systems engineering approach is the upcoming approach to defining and testing a system virtually before committing to the physical world.

Model-Based Systems Engineering can define a single complex product and perform all kinds of analysis on the system even before there is a physical system in place. I will explain more about model-based systems engineering in future posts. In this context, I want to stress that having a model-based systems engineering environment combined with modularity (do not confuse modularity with model-based) is a solid foundation for dealing with unique custom products. Solutions can be configured and validated against their requirements already during the engineering phase.

The business case for the digital development twin is easy to make: shorter time to market, improved and validated quality, and reduced engineering hours and costs compared to traditional ways of working. To achieve these results, for sure, you need to change your ways of working and the tools you are using. So it won’t be that easy!

For those interested in Industry 4.0 and the Model-Based Systems Engineering approach, join me at the upcoming PLM Road Map 2020 and PDT 2020 conference on 17-18-19 November. As you can see from the agenda, there is a lot of attention on the Digital Twin and Model-Based approaches.

Three digital half-days with hopefully a lot to learn while staying with our feet on the ground. In particular, I am looking forward to Marc Halpern’s keynote speech: Digital Thread: Be Careful What you Wish For, It Just Might Come True.

Conclusion

It has been very noisy on the internet related to product features and technologies, probably due to COVID-19 and the resulting disrupted interactions between all of us – vendors, implementers and companies trying to adjust their future. The Digital Twin is an excellent framing for a concept that everyone can relate to. Choose your business case and then look for the best matching twin.

I believe we are almost at the end of learning from the past. We have seen how, from an initial serial CAD-driven approach with PDM, we evolved to PLM-managed structures, the EBOM and the MBOM. To illustrate this statement, look at the image below, where I use a Tech-Clarity image from Jim Brown.

The image on the left shows the typical PDM-approach: PDM feeding ERP in a linear process. The image on the right, which I believe is from 2004, perfectly describes the complementary roles of PLM and ERP, the best practice before digital transformation. PLM supports product innovation in an iterative approach, pushing released information to ERP for execution.

As I think in images, I like the concept of a circle for PLM and an arrow for ERP. I am always using those two images in discussions with my customers when we want to understand if a particular activity should be in the PLM or ERP-domain.

Ten years ago, the PLM-domain was conceptually further extended by introducing support for products in operation and service. Similar to the EBOM (engineering) and the MBOM (manufacturing), the SBOM (service) was introduced to support product information for products in operation. In theory, a fully connected circle.

Asset Lifecycle Management

At the same time, I was promoting PLM-practices for owners/operators to enhance Asset Lifecycle Management. My first post on this topic, from June 2010, PLM for Asset Lifecycle Management and Asset Development, introduces this approach.

Conceptually, the SBOM and Asset Lifecycle Management have a lot in common. There is a designed product, in this case an asset (plant, machine), running in the field, and we need to make sure operators have the latest information about the asset. And in case of asset changes, which can be a maintenance operation, a repair or a complete overhaul, we need to be sure the changes are based on the correct information from the as-built environment. This requires full configuration management.

Asset changes can be based on extensive projects that need to be treated like new product development projects, with a staged approach that can take weeks, months, sometimes years. These are typical activities performed in PLM-systems, not in MRO-systems, which are designed to manage the actual operation. Again, here we see the complementary roles of PLM (iterative) and MRO (execution).

Since 2008, I have worked a lot in this environment, mainly in the nuclear and process industries. If you want to learn more about this aspect of PLM, I recommend looking at the PLMpartner website, where Bjørn Fidjeland, in cooperation with SharePLM, published a course on Plant Information Management. We worked together on several projects, and Bjørn has made a great effort to describe the logical model to be used instead of a function-feature story.

Ten years ago, we were not calling this concept the “Digital Twin,” as the aim was to provide end-to-end support of asset information from engineering, procurement, and construction towards operation in a coordinated manner. The breaking point in the relation between the EPCs and Owner/Operators is the data-handover – how much of your IP can/do you expose and what is needed. Nowadays, we would call striving for end-to-end data continuity the Digital Thread.

Hot from the press in this context: CIMdata just published a commentary, Managing the Digital Thread in Global Value Chains, describing Eurostep’s ShareAspace capabilities and experiences in managing an end-to-end information flow (Digital Thread) in a heterogeneous environment based on exchange standards like ISO 10303-239 PLCS. Their solution is based on what I consider a more modern approach for managing digital continuity compared to the traditional approach I described before. Compare the two images in this paragraph. The first image represents the old/current way with a disconnected handover; the second represents the ShareAspace connected approach based on a real digital thread.

The Service BOM

As discussed with Asset Lifecycle Management, there is a disconnect between the engineering disciplines and operations in the field, looking from the point of view of an Asset owner/operator.

Now when we look from the perspective of a manufacturing company that produces assets to be serviced, we can identify a different dataflow and a new structure, the Service BOM (SBOM).

The SBOM provides information on how a product needs to be serviced. What are the parts that require service, and what are the service kits that are possible for that product? For that reason, service engineering should be done in parallel to product engineering. When designing a product, the engineer needs to identify the wearing parts (which always require service in time) and the parts that might be serviceable.

There are different ways to look at the SBOM. Conceptually, the SBOM could be created in close relation with the EBOM. At the moment you define your product, you should also specify how the product will be serviced. See the image below.

From this example, it is clear that part standardization and modularization have a considerable benefit for services downstream. What if you have only one serviceable part that applies to many products? The number of parts to keep in stock will be strongly reduced, instead of having many similar parts that each fit only a single product.

Depending on the type of product, the SBOM can be generic, serving many products in the field. In that case, the company has to deal with catalogs, to be defined in PLM. Or the SBOM can be aligned with the As-Built of a capital product in the field. In that case, the concepts of Asset Lifecycle Management apply. Click on the image to see a clear picture.

The SBOM on its own, in such an environment, will have links to specific documents, such as service instructions and operating manuals.

If your PLM-system allows it, extending the EBOM and MBOM with an SBOM is not a complex effort. What is crucial to understand is that the SBOM has its own lifecycle, which can even last longer than the active product being sold. So sometimes, manufacturing specifications related to service parts need to be maintained too, creating a link between the SBOM and potential MBOM(s).
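To illustrate the concept (a simplified sketch, not any specific PLM system’s data model), an SBOM service kit could reference EBOM parts while carrying its own lifecycle state:

```python
# A simplified sketch (not a specific PLM system's data model): SBOM service
# kits reference EBOM parts while carrying their own lifecycle state.
from dataclasses import dataclass, field

@dataclass
class Part:
    number: str
    description: str
    wearing: bool = False      # wearing parts always require service in time

@dataclass
class ServiceKit:
    kit_number: str
    lifecycle_state: str       # own lifecycle; may outlive the active product
    parts: list = field(default_factory=list)

seal = Part("P-1001", "Shaft seal", wearing=True)
bearing = Part("P-1002", "Bearing 6205", wearing=True)

# One standardized kit serving many products reduces the parts to keep in stock.
kit = ServiceKit("SK-2001", "Released", [seal, bearing])
print([p.number for p in kit.parts if p.wearing])  # -> ['P-1001', 'P-1002']
```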

ECM = Enterprise Change Management

When I discussed ECM in my previous post in the context of Engineering Change Management, I got the feedback that nowadays, everyone talks about Enterprise Change Management. Engineering Change Management is old school.

In the past, and even in a 2014 benchmark, a typical customer had two change management systems: one in PLM and one in ERP, and companies were looking into connecting these two processes. Like the BOM-interaction between PLM and ERP, this is, technology-wise, never a real problem.

The real problem in such situations was to come to a logical flow of events. Many times, the company insisted that every change should start from the ERP-system, as “we like to standardize.” This means that even an engineering change had to be registered first in the ERP-system.

Luckily the reach of PLM has grown. PLM is no longer the engineering tool (IT-system thinking). PLM has become the information backbone for product information all along the product lifecycle. Having the MBOM and SBOM available through a PLM-infrastructure allows organizations to streamline their processes.

Aras – digital thread through connected structures

And in this modern environment, enterprise change management might take place mostly in a PLM-infrastructure. Such a PLM-infrastructure, providing a digital thread as the Aras picture above illustrates, offers the full traceability to support configuration management.

However, we still have to remember that configuration management and engineering change management, first of all, are based on methodology and processes. Next, the combination of tools to be used will vary.

I like to conclude this topic with a quote from Lee Perrin’s comment on my previous blog post:

I would add that aerospace companies implemented CM, to avoid fatal consequences to their companies, but also to their flying customers.

PLM provides the framework within which to carry out Configuration Management. CM can indeed be carried out without PLM, as was done in the old paper-based days. As you have stated, PLM makes the whole CM process much more efficient. I think more transparent too.

Conclusion

After nine posts around the theme Learning from the past to understand the future, I walked through the history of CAD, PDM and PLM at a fast pace, pointing to practices and friction points. In the blogging space, it is hard to find this information, as most blog posts come from software vendors explaining why their tool is needed. Hopefully, this series has helped many of you to understand the broader context. Now I want to focus on the future again in my upcoming blog posts.

Still, feel free to contact me and discuss methodology topics.

Picture by Christi Wijnen – a good friend and photographer in the Netherlands

To avoid software geeks getting curious about the title: in this context, ALM means Asset Lifecycle Management. In 2008, I was active for SmarTeam to promote PLM concepts relevant for Asset Lifecycle Management. The focus was on PLM being complementary to asset operation management (EAM – Enterprise Asset Management – and MRO – Maintenance, Repair and Overhaul).

This topic has become topical for me in the past two months, having discussed and seen (at PDT) the concepts of a model-based approach for assets and constructions. PLM, ALM, and BIM converge conceptually. Every year I give a one-day update from the field for students doing a master’s in PLM & BIM on top of their engineering/architectural background. Five years ago, there was no mention of BIM; now the ratio of BIM-oriented students has become significant. For me, it is always great to see young students willing to learn PLM or BIM on top of their own skillset. Read more about this particular master class (in French) by clicking on the logo to the left.

In 2012, I started to explain PLM benefits to EPC companies (Engineering Procurement Construction), targeting a more profitable and efficient delivery of their constructions (oil platforms, plants, buildings, infrastructure). The simplified reasoning behind using PLM was related to more efficient, higher-quality multidisciplinary collaboration, reducing costly fixes during construction, and smoothening the intensive process of data handover.

More and more in the process industry, standards like ISO 15926 (process industry) and ISO 19650 (BIM, mainly in the UK) became crucial. At that time, it was difficult to convince companies to focus on a horizontally integrated process instead of dedicated, disconnected tools. Meanwhile, this has changed, thanks to the Digital Twin hype. Let’s have a look.

PLM and ALM

The initial value of using PLM concepts complementary to MRO systems came from the fact that MRO systems mainly focus on plant operations. You could compare these systems with ERP systems for manufacturing companies, focusing on execution and continuous operation. Scheduled maintenance and inspections are also driven by the MRO system. Typical MRO systems are Maximo and SAP PM. PLM could deliver configuration management, linking the design intent to the physical implementation, and therefore provide higher data quality, visibility, and traceability of the asset history.

The SmarTeam data model for Asset Lifecycle Management

In 2010, I shared these concepts in two posts: Asset Lifecycle Management using a PLM-system and PLM for Asset Lifecycle Management and Asset Development, based on lessons learned with some (nuclear) plant owner/operators. They started to discover the need for configuration management to ensure data quality for operations. In 2010-2014, the business case for using PLM complementary to MRO was data quality and therefore reduced downtime when executing large maintenance programs (dependencies between the individual projects were not visible without PLM).

In MRO-systems, like in ERP-systems, the data for execution is based on information coming from various engineering sources – specifications, PFDs, P&IDs.  Questions owner/operators ask themselves are:

  • What are the designed operational settings?
  • Are the asset parameters currently running as designed?
  • What is the optimized maintenance period?
  • Can we stretch maintenance intervals?
  • Can we reduce inspections?
  • Can we reduce downtime for maintenance and overhaul?
  • What about predictive maintenance?

Most of these questions are answered by experts who use their tacit knowledge and experience to give the best answers so far. And when the answers were wrong, they were accepted as new learning points. Next time we won’t make this mistake, and the experts become even more knowledgeable.

Now, these questions could be answered if you can model your asset in a virtual environment. In the virtual world, you would use simulation models, logical models, and 3D Models to describe the asset. This is where Model-Based Systems Engineering practices are used. However, these models need to be calibrated based on reality. And that is where IoT and Asset Operation Monitoring come in, connecting physical behavior with virtual predicted behavior. You can read more about this relationship in my post: Will MBSE be the new PLM instead of IoT?

PLM and BIM

In 2014, I started to discuss PLM concepts with EPC-companies (Engineering, Procurement, and Construction), mainly in the Oil & Gas industry. Here, excellent asset development tools (AVEVA, Intergraph, Bentley) are the standard, and the purpose of an EPC company is to deliver a plant or platform. Each software tool has its purpose, and there is no lifecycle strategy. The value PLM could bring was providing a program overview (complementary with Primavera), standardization, multidisciplinary coordination, and visibility across projects to capture knowledge.

Most of the time, the EPC companies did not see the value of optimizing themselves, as this was accepted in the process. Even while their productivity losses and costs due to poor quality (fixing during construction/commissioning) were absurd (10-20 % of the project budget). Cultural change – think longer instead of fix later – was hard to explain. In the end, the EPC was not responsible for operations, so why bother that much?

My blog posts: PLM for all Industries and 2014 – the year that the construction industry did not discover PLM illustrate the challenge at that time. None of the EPCs and construction companies had the feeling that improving collaboration based on information-continuity (not data-driven yet) could bring significant benefits, despite their relatively low profit margins (1-3 % is considered excellent). Breaking the silos was too hard.

Two recent trends, however, changed the status quo that existed.

First of all, more and more, the owner/operator does not want to be responsible for the maintenance and operations of the asset. The typical EPC-companies have now become DBO-companies (Design, Build and Operate). This requires lifecycle thinking, as most of the costs of an asset occur during its maintenance and operation phase.

Advanced thinking (read: (Model-Based) Systems Engineering) can help these companies shift their focus to a more sustainable design of the asset for the future and get rewarded for that. In the old EPC-model, the target was “just” to deliver as specified.

A second significant trend is the availability of cloud infrastructure for the construction world. A cloud infrastructure does not require considerable investment from the stakeholders in a construction project. By introducing BIM in a common data environment (CDE), an infrastructure comparable to PLM is created, and likely the Maintenance-and-Operate stakeholder is eager to have the full virtual definition here for the future.

Read more about BIM and CDE for example, here: CDE – strategic BIM process tool.

Of course, the technology and standards are there to collaborate. Now it is up to the stakeholders involved to develop new skills for collaboration (learn or hire) and implement them through new ways of working. A learning process can never be pushed by a big bang, so make sure your company operates in two modes while learning.

As I mentioned, the Maintenance-and-Operate stakeholders, or in traditional cases the Owner/Operators, are incredibly interested in a well-defined virtual model of the asset. This allows them to analyze and simulate the implementation of fixes and enhancements for the future with an optimal result. Again, we are talking about a digital twin of the asset here.

Conclusion

Even though the digital twin is at the top of the Gartner Hype Cycle, it has already become a vital principle to implement, in particular for substantial, critical assets. For these precious assets, minor inefficiencies in data continuity can still be afforded while learning. From the moment companies have established digital continuity between their virtual and physical assets, the Digital Twin concept can also be profitable (and required) for other industries, in particular when these companies want to deliver their products as a service.

 

Note: I have been talking a lot this year about the challenges of digital transformation applied to PLM in particular. During PI PLMx London 2020 on February 3 and 4, I will lead a Think Tank session related to the challenge of connecting your PLM transformation to your executives’ vision (and budget). See you there?

This is my concluding post related to the various aspects of the model-driven enterprise. We went through Model-Based Systems Engineering (MBSE), where the focus was on using models (functional / logical / physical / simulations) to define complex products (systems). Next, we discussed Model-Based Definition / Model-Based Enterprise (MBD/MBE), where the focus was on data continuity between engineering and manufacturing by using the 3D Model as a master for design, manufacturing and eventually service information.

And last time, we looked at the Digital Twin from its operational side, where the Digital Twin was applied for collecting data from and tuning physical assets in operation, which is not a typical PLM domain in my opinion.

Now we will focus on two areas where the Digital Twin touches aspects of PLM – the most challenging and the most over-hyped areas, I believe. These two areas are:

  • The Digital Twin used to virtually define and optimize a new product/system or even a system of systems. For example, defining a new production line.
  • The Digital Twin used as the virtual replica of an asset in operation. For example, a turbine or engine.

Digital Twin to define a new Product/System

There might be some conceptual overlap if you compare the MBSE approach and the Digital Twin concept to define a new product or system to deliver. For me, the differentiation would be that MBSE is used to master and define a complex system from the R&D point of view – unknown solution concepts (hardware or software?), unknown constraints – to be refined and optimized in an iterative manner.

In the Digital Twin concept, it is more about defining a system that should work in the field: how to combine various systems into a working solution, where each of the systems already has a pre-defined set of behavioral/operational parameters, which could be 3D-related but also performance-related.

You would define and analyze the new solution virtually to discover the ideal solution for performance, costs, feasibility and maintenance. Working in the context of a virtual model might take more time than traditional ways of working; however, once the models are in place, analyzing and optimizing the solution takes hours instead of weeks, assuming the virtual model is based on a digital thread, not on a sequential process of creating and passing documents/files. Virtual solutions allow a company to optimize the solution upfront, instead of costly fixing during delivery, commissioning and maintenance.

Why aren’t we doing this already? It takes more skilled engineers instead of cheaper fixers downstream. The fact that we are used to fixing it later is also an inhibitor for change. Management needs to trust and understand the economic value instead of trying to reduce the number of engineers as they are expensive and hard to plan.

In the construction industry, companies are discovering the power of BIM (Building Information Model), introduced to enhance the efficiency and productivity of all stakeholders involved. Massive benefits can be achieved if the construction of the building and its future behavior and maintenance can be optimized virtually, compared to fixing issues expensively in reality when they pop up.

The same concept applies to process plants or manufacturing plants, where you could virtually run the (manufacturing) process. If the design is done with all the behavior defined (hardware-in-the-loop and software-in-the-loop simulation), a solution can be virtually tested and rapidly delivered, with no late discoveries and costly fixes.

Of course, it requires new ways of working. Working with digital connected models is not what engineers learn during their education – we have just started this journey. Therefore, organizations should explore on a smaller scale how to create a full Digital Twin based on connected data – this is the ultimate base for the next purpose.

Digital Twin to match a product/system in the field

When you follow the topic of the Digital Twin through the materials provided by the various software vendors, you see all kinds of previews of what is possible: Augmented Reality, Virtual Reality and more. All these presentations show that when clicking somewhere in a 3D Model space, relevant information pops up. Where does this relevant information come from?

Most of the time, the information is re-entered in a new environment, sometimes derived from CAD, but all the metadata comes from people collecting and validating data. This is not the type of work we promote for a modern digital enterprise. These inefficiencies are good for learning and demos, but in a final stage, a company cannot afford silos where data is collected and entered again, disconnected from the source.

The main problem: legacy PLM information is stored in documents (drawings / Excel files) and is not intended to be shared downstream with full quality.
Read also: Why PLM is the forgotten domain in digital transformation.

If a company has already implemented an end-to-end Digital Twin to deliver the solution as described in the previous section, we can understand the data has been entered somewhere during the design and delivery process and, thanks to digital continuity, it is there.

How many companies have done this already? For sure not the companies that have been in business a long time, as their current silos and legacy processes do not cater for digital continuity. By appointing a Chief Digital Officer, the journey might start; the biggest risk is that the Chief Digital Officer will be running yet another silo in the organization.

So where does PLM support the concept of the Digital Twin operating in the field?

For me, the IoT part of the Digital Twin is not the core of PLM. Defining the right sensors, controls and software is the first area where IoT is used to define the measurable/controllable behavior of a Digital Twin. This topic has been discussed in the previous section.

The second part where PLM gets involved is twofold:

  • Processing data from an individual twin
  • Processing data from a collection of similar twins

Processing data from an individual twin

Data collected from an individual twin or a collection of twins can be analyzed to extract or discover failure patterns. An R&D organization is interested in learning what is happening in the field with their products. These analyses lead to better and more competitive solutions.

Predictive maintenance is not necessarily a part of that. When you know that certain parts will fail between 10,000 and 20,000 operating hours, you want to optimize the moment of providing service to reduce downtime of the process, and you do not want to replace parts way too early.
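As a hypothetical illustration of this optimization (all numbers are illustrative, not from a real asset), the sketch below picks the latest planned production stop before the failure window opens:

```python
# A hypothetical sketch: the failure bandwidth is known (10,000-20,000 hours),
# so we service at the latest planned production stop before the window opens.
# All numbers are illustrative, not from a real asset.
FAILURE_WINDOW = (10_000, 20_000)  # earliest and latest expected failure (hours)

def plan_service(current_hours: int, planned_stops: list) -> int:
    """Pick the latest planned stop before the failure window opens."""
    earliest_failure = FAILURE_WINDOW[0]
    candidates = [h for h in planned_stops if current_hours <= h < earliest_failure]
    # If no planned stop fits, schedule a dedicated stop just before the window.
    return max(candidates) if candidates else earliest_failure - 1

print(plan_service(6_500, planned_stops=[7_000, 9_500, 12_000]))  # -> 9500
```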


The R&D part related to predictive maintenance could be that R&D develops sensors inside this serviceable part that signal the need for maintenance within a much smaller time frame – maintenance needed within 100 hours instead of a bandwidth of 10,000 hours. Or R&D could develop new parts that need less service and guarantee a longer up-time.

For an R&D department, the information from an individual Digital Twin might only be relevant if the physical twin is complex to repair and the downtime for each individual asset is too high. Imagine a jet engine, a turbine in a power plant or similar. Here, a Digital Twin will allow service and R&D to prepare maintenance and to simulate and optimize the actions before performing them in the physical world.

The five potential platforms of a digital enterprise

The second part R&D will be interested in is the behavior of similar products/systems in the field, combined with their environmental conditions. In this way, R&D can discover improvement points for the whole range and deliver incremental innovation. The challenge for the R&D organization is to find a logical placeholder in their PLM environment to collect commonalities related to the individual modules or components. This is not an ERP or MES domain.

Concepts of a logical product structure are already known in the oil & gas, process and nuclear industries, and in 2017 I wrote about PLM for Owners/Operators, mentioning that Bjorn Fidjeland has always been active in this domain. You can find his concepts at plmPartner here or as an eLearning course at SharePLM.

To conclude:

  • This post is way too long (sorry)
  • PLM is not dead – it evolves into one of the crucial platforms for the future – The Product Innovation Platform
  • The current BOM-centric approach within PLM is blocking progress towards a full digital thread

More to come after the holidays (a European habit) with additional topics related to the digital enterprise

 

This is almost my last planned post related to the concepts of model-based. After having discussed Model-Based Systems Engineering (needed to develop complex products/systems, including hardware and software) and Model-Based Definition (creating an efficient connection between Engineering and Manufacturing), my last post will be related to the most over-hyped topic: the Digital Twin.

There are several reasons why the Digital Twin is over-hyped. One of the reasons is that the Digital Twin is not necessarily considered a PLM-related topic. Other vendors, like SAP (the network of digital twins), Oracle (digital twins for IoT applications) and GE with their Predix platform, also contributed to the hype around the digital twin. The other reason is that the concept of the Digital Twin is a great idea for marketers to shine above the clouds. A recent comment from Monica Schnitger says it all in her post 5 quick takeaways from Siemens Automation summit. Monica’s takeaway related to the Digital Twin:

The whole digital twin concept is just starting to gain traction with automation users. In many cases, they don’t have a digital representation of the equipment on their lines; they may have some data from the equipment OEM or their automation contractors but it’s inconsistent and probably incomplete. The consensus seemed to be that this is a great idea but out of many attendees’ immediate reach. [But it is important to start down this path: model something critical, gather all the data you can, prove benefit then move on to a bigger project.]

Monica is aiming at the same point I have mentioned several times: there is no digital representation, and the existing data is inconsistent. Don’t wait: The importance of accurate data – act now!

What is a digital twin?

I think there are various definitions of the digital twin, and I do not want to go into a definition debate like we had before with the acronyms MBD/MBE (Model-Based Definition/Enterprise – the confusion) or even the acronym PLM (classical PLM or digital PLM?). Let’s agree on the following high-level statements:

  • A digital twin is a virtual representation of a physical product
  • The virtual part of the digital twin is defined by what you want to analyze, simulate, predict related to the physical product
  • One physical product can have multiple digital twins; only in the ideal world is there potentially a unique digital twin for every physical product in the world
  • When a product interacts with the environment, based on inputs and outputs, we normally call it a system. When I use “product,” it will most of the time be a system, in particular in the context of a digital twin

Given the above statements, I will give some examples of digital twin concepts:

As a cyclist, I am active on platforms like Garmin and Strava, using a tracking device, heart monitor and power meter. During every ride, my device plus the sensors measure my performance, and all the data is uploaded to the platform, providing me with a report of where I drove, how fast, and my heartbeat, cadence and power during the ride. On Strava, I can see the Flybys (other digital twins that crossed my path and their performances), and I can see per segment how I performed compared to others, with filters by age, level, etc.

This is the easiest part of a digital twin. Every individual can monitor and analyze their personal behavior and discover trends. Additionally, the platform owner has all the intelligence about all cyclists around the world: how they perform and what the best performance per location would be. And based on their Premium offering (where you pay), they can give you advanced advice on how to improve. This is the Strava business model, bringing value to the individual while learning from the behavior of thousands. Note: in this scenario, there is no 3D involved.

Another known digital twin story is related to plants in operation. In the past 10 years, I have been advocating for Plant Lifecycle Management (PLM for Owner/Operators), describing the value of a virtual plant model using PLM capabilities combined with Maintenance, Repair and Overhaul (MRO) in order to reduce downtime. In a nuclear environment, the usage of 3D verification, simulation and even control software in a virtual environment can bring great benefit, due to the fact that the physical twin is not always accessible and downtime can cost up to several million per week.

The above examples provide two types of digital twins. I will discuss some characteristics in the next paragraphs.

Digital Twin – performance focus

Companies like GE and SAP focus a lot on the digital twin in relation to asset performance: measuring the performance of assets and comparing it with other similar assets. Based on performance characteristics, the collector of the data can sell predictive maintenance analysis, performance optimization guidance and potentially other value offerings to their customers.

Small improvements in the range of a few percent can have a big impact on the overall net results. The digital twin is crucial in this business model to build up knowledge, collect and analyze it, and sell the knowledge again. This type of scenario is the easiest one. You need products with sensors, and you need an infrastructure to collect the data and to extract and process information in a manner that can be linked to a behavior model with parameters that influence the model.

(Image: SAP blogs)

This is the model-based part of the digital twin. For a single product, there can be different models related to the parameters driving your business, e.g., performance parameters for output, parameters for optimal up-time (preventive maintenance – usage optimization) or parameters related to environmental impact. Building and selling the results of such a model is an add-on business, creating more value for your customer combined with creating more loyalty. Using the digital twin in the context of performance focus does not require a company to totally change the way they are working. Yes, you need new skills, data collection and analysis, and more sensor technology, but a lot of the product development activities can remain the same (for the moment).
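To illustrate what such a parameter-driven behavior model could look like (a toy example, not a real engineering model), consider a simplified wind turbine output model calibrated against one measured data point:

```python
# A toy behavior model, not a real engineering model: one asset, one tunable
# parameter, calibrated against a single measured data point.
def turbine_output_kw(wind_speed_ms: float, efficiency: float) -> float:
    """Simplified power model: output grows with the cube of wind speed."""
    return round(0.5 * efficiency * wind_speed_ms ** 3, 1)

# Calibrate the efficiency parameter so the model matches a measured output.
measured_kw, measured_wind_ms = 540.0, 10.2
efficiency = measured_kw / (0.5 * measured_wind_ms ** 3)

# The calibrated model now predicts performance under other conditions.
print(turbine_output_kw(12.0, efficiency))
```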

As a conclusion for this type of digital twin, I would state: yes, there is some PLM involved, but the main focus is on business execution.

As I have already reached more than 1000 words, I will focus in my next post on the most relevant digital twin for PLM. Here, all disciplines come together: the 3D mechanical model, the behavior models, the embedded and control software, (manufacturing) simulation and more. All to create an almost perfect virtual copy of a real product or system in the physical world. And there we will see that this is not so easy, as the concepts depend on accurate data and reliable models, which is currently not the case in most companies’ engineering environments.

 

Conclusion

The Digital Twin is a marketing hype; however, when you focus only on performance monitoring and tuning, it becomes a reality, as it does not require a company to align in a digital manner across the whole lifecycle. Still, this is just the beginning of a real digital twin.

Where are you in your company with the digital twin journey?

 

The past year, I have written about PLM in the context of digital transformation, relevant for companies that deliver products to the market. Some years ago, I advocated the value of a PLM infrastructure for EPC companies and Owners/Operators of a plant.

EPC stands for Engineering, Procurement, and Construction, a typical name for often large, capital-intensive projects executed by a consortium of companies. Together they create buildings, platforms, plants, infrastructure and more one-off deliveries, which will be under the control of the Owner/Operator after go-live.

Some references:

2014 EPC related: The year the construction industry did not discover PLM

2013 Owner/Operators related: PLM for all industries?

As you can see from the dates, these posts are not the most recent. Meanwhile, EPC-based businesses are discovering the value of a PLM infrastructure. The main component for them is BIM (Building Information Model or Building Information Management), and they use cloud-based collaboration environments to be more cost-efficient. Slowly, these companies are moving to a single repository of data supporting multidisciplinary collaboration related to a BIM model, to guarantee continuity of data and better execution. I am positive about EPC companies that are discovering the value of PLM. It might be slightly different from classical product-selling companies, mainly because data ownership is different. In an EPC environment, many companies are responsible for parts of the data, and each of them keeps the real knowledge as IP (Intellectual Property) for themselves. They only “publish” deliverables. For companies that deliver products to the market, the OEM keeps responsibility for all relevant product information and has a different strategy.

 

I worked in the past with one of my peers, Bjorn Fidjeland (www.plmpartner.com), on PLM for EPCs and Owner/Operators. We share the same passion for bringing PLM outside traditional industries. As Bjorn is now more active than I am in this domain, I recommend reading Bjorn’s posts on this topic. For example:

EPC related 2016: Handover to logistics and supply chain in capital projects

Owner/Operators 2015: Plant Information Management – Information Structures

Bjorn provides a lot of details, which are important, as implementing PLM for EPCs or Owner/Operators requires different data structures. I wrote about these concepts in 2014 in two posts – PLM and/or SLM? post 1 and post 2 – at that time not realizing the virtual twin was becoming popular.

PLM complementary to EAM

This last year, I have explored these concepts together with (potential) Owner/Operators of a plant, where PLM would be complementary to their EAM system. In the world of Owner/Operators, Enterprise Asset Management (EAM) software is the major software these companies use. You can find some of the major EAM players here.

You will discover that all these software suites are good for plant operations, but they all have a challenge supporting data consistency and quality, in particular when dealing with plant changes and efficient, high-quality plant information management. Versioning and status management, typical PLM capabilities, are often not there.

Owner/Operators have challenges with EAM environments as:

  • EAM systems are designed to support an as-operated environment, assuming all data is correct. Support for Maintenance, Repair or Overhaul projects is often rudimentary and depends on document-driven processes. The primary business process of these companies is producing continuously, for example electricity or chemicals. Therefore, typical engineering projects to change or enhance the main production process do not have the same financial focus.
  • A document-driven approach is the de facto standard for these industries, most of the time because the plant has been established through an EPC approach, which was 100 % document-driven due to the different disconnected disciplines/tools working at that time in the EPC project. As the asset information is stored and delivered in documents, most owners/operators keep the document-driven approach for future change projects.

Owners/operators can benefit significantly from a data-driven PLM system as a complementary infrastructure to their EAM system. The PLM system will be the source for accurate asset information, manage the changes and approvals for the assets, and ultimately push the newly released information to the EAM system. The PLM system will offer the full history and traceability of decisions made, important for regulatory bodies or insurance companies.

A data-driven approach for asset information allows owners/operators to benefit from efficient processes, strongly reducing the number of people required to process data (documents) and the time people in maintenance and operations spend searching for data. I found a nice slide from IBM explaining the concept of PLM and EAM collaboration – see below:

[Image: IBM slide on PLM and EAM collaboration]

The same benefits modern digital enterprises get from a data-driven approach will become available for owners/operators. Operational management is supported by the EAM system, combined with real-time capabilities provided by a modern PLM system to analyze, design and deliver changes to the plant, without a costly data conversion process (e.g., compiling new documents) and disconnected processes.

Moving to a virtual twin

Interestingly enough, the digital transformation is bringing the concepts of connecting engineering, manufacturing and operations together into an infrastructure of digital platforms interacting with each other. Where owners/operators historically did not focus on optimizing the engineering process to build and maintain their assets, companies in the “classical” industries were not really focusing on how products behaved in the field after they were delivered. With digital continuity (the digital thread) and IoT, these “classical” companies can now connect to their products in the field. Their products become assets of information, and in case these companies change their business offering into leasing products and services, these assets become managed assets, like the assets owner/operators are managing.

The concept of a virtual twin (or digital twin – image proprietary of GE), where a virtual model-based environment is linked to one or more real instances in operation, is the dream of all industries. Preparing, simulating and verifying changes in a virtual world is so much more efficient and cheaper that it allows for higher-quality products; in the case of plant operators, higher safety will be the number one topic.

Conclusion

What I have learned so far from plant owners/operators is that they are struggling to grasp a modern digital enterprise concept, as their current environment is not model-based but document-driven. Starting with PLM to complement their EAM system could be a first step to understand the value and business benefits of digital continuity. It requires a new way of thinking, which is not a commodity at this time. It will happen in the next 5 to 10 years, driven by the realization of virtual twins in the industry and further BIM maturity. The future is model-based!

p.s. I am happy to announce WordPress provided a new feature for my blog. In the side panel, you can now choose your language (based on Google Translate) if you have difficulties with English. Enjoy!

In my previous post, I wrote about the different ways you could look at Service Lifecycle Management (SLM), which, I believe, should be part of the full PLM vision. The fact that this does not happen is probably because companies buy applications to solve issues instead of implementing a consistent company-wide vision (when and where to start is the challenge). Oleg Shilovitsky just referred one more time to this phenomenon – Why PLM is stuck in PDM.

I see PLM as the enterprise information backbone for product information. I will discuss the logical flow of data that might be required in a PLM data model to support SLM. Of course, all should be interpreted in the context of the kind of business your company is in.

This post is probably not the easiest to digest, as it assumes you are somewhat aware of and familiar with the issues relevant for the ETO (Engineering To Order) / EPC (Engineering Procurement Construction) / BTO (Build To Order) business.

A collection of systems or a single device

The first significant differentiation I want to make is between managing an installation and managing a single device; I will focus only on installations.

An installation can be a collection of systems, subsystems, equipment and/or components, typically implemented by companies that deliver end-to-end solutions to their customers. A system can be an oil rig, a processing production line (food, packages, …) or a plant (processing chemicals, nuclear materials), where maintenance and service can be performed on individual components, providing full traceability.

Most of the time, a customer-specific solution is delivered to a customer, either directly or through installation/construction partners. This is the domain I will focus on.

I will not focus on the other option: a single device (or system) with a unique serial number that needs to be maintained and serviced as a single entity, for example a car or a computer device. Usually this is a product for mass consumption, not to be traced individually.

In order to support SLM at the end of the PLM lifecycle, we will see that a particular data model is required, one which has dependencies on the early design phases.

Let’s go through the lifecycle stages and identify the different data types.

The concept / sales phase

[Image: the concept/sales phase data model]

In the concept/sales phase the company needs to have a template structure to collect and process all the information shared and managed during their customer interaction.

In the implementations that I guided, this was often a kind of folder structure, grouping information into a system view (what do we need), a delivery view (how and when can we deliver), a services view (who does what) and a contractual view (cost, budget, time constraints). Initially, most of these folders had relations to documents. However, the system view was often already based on typical system objects representing the major systems, subsystems and components with metadata.

In the diagram, the colors represent various data types often available as standard in a rich PLM data model. Although it can be simplified by going back to the old folder/document approach shared on a server, you will recognize the functional grouping of the information and its related documents, which can be further detailed into individual requirements if needed and affordable. In addition, a first conceptual system structure can already exist, with links to potential solutions (generic EBOMs) that have been developed before. A PLM system provides the ideal infrastructure to store and manage all data in context of each other.
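As a simple thought experiment (not a particular PLM system’s implementation), such a template structure could be sketched as follows; the view names follow the description above, the items inside are invented:

```python
# A thought experiment, not a particular PLM system's implementation: the
# concept/sales template structure with its four views. Items are invented.
quote_structure = {
    "system view":      ["System S1 (object)", "Subsystem S1.2 (object)"],
    "delivery view":    ["Delivery plan.docx", "Milestone schedule.xlsx"],
    "services view":    ["Installation scope.docx", "Commissioning plan.docx"],
    "contractual view": ["Budget.xlsx", "Terms and conditions.pdf"],
}

for view, items in quote_structure.items():
    print(f"{view}: {items}")
```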

The Design phase

Before the design phase starts, there is an agreement around the solution to be delivered. In that situation, an as-sold system structure will be leading for the project delivery, and later this evolved structure will be the reference structure for the as-maintained and as-serviced environments.

A typical environment at this stage will support a work breakdown structure (WBS), a system breakdown structure (SBS) and a product breakdown structure (PBS). In cases where the location of the systems and subsystems is relevant for the solution, a geographical breakdown structure (GBS) can be used. This last method is often used in shipbuilding (sections / compartments) and plant design (areas / buildings / levels) and is relevant for any company that needs to combine systems and equipment in shared locations.

[Image: the design phase data model]

The benefit of having the system breakdown structure is that it manages the relations between all systems and subsystems. Potentially, when a subsystem is delivered by a supplier, this environment supports the relationship with the supplier and the tracking of the delivery in relation to the full system/project.

Note: the system breakdown structure typically uses a hierarchical tag numbering system as the primary ID for system elements. In a PLM environment, the system breakdown elements should be data objects providing the metadata that describes the performance of the element, including the mandatory attributes required for exchange with MRO (Maintenance, Repair and Overhaul) systems.
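As an illustration, a minimal sketch of such a system breakdown element as a data object; all field names are invented for this post, not the attribute set of any specific PLM or MRO system:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SBSElement:
    # A system breakdown element as a data object, not just a number in a document
    tag: str                       # hierarchical tag number, e.g. "S1.2-M2" (primary id)
    description: str
    performance: Dict[str, str] = field(default_factory=dict)    # e.g. {"power": "11 kW"}
    mro_attributes: Dict[str, str] = field(default_factory=dict) # mandatory attrs for MRO exchange
    supplier: Optional[str] = None # relation to the supplier when a subsystem is outsourced
    children: List["SBSElement"] = field(default_factory=list)

line = SBSElement("S1", "Processing line", children=[
    SBSElement("S1.2", "Mixing subsystem", supplier="Supplier X", children=[
        SBSElement("S1.2-M2", "Mixer motor",
                   performance={"power": "11 kW"},
                   mro_attributes={"criticality": "high"}),
    ]),
])
```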

Working with a system breakdown structure is common in plant design and asset maintenance projects, and this approach will be very beneficial for companies delivering process lines, infrastructure projects and other solutions that are delivered as a collection of systems and equipment.

The delivery phase

During the delivery phase, the system breakdown structure supports the delivery of each component in detail. In the example below you can see the relation between the tag number, the generic part number and the serial number of a component.

The example below demonstrates the situation where two motors (same item – same datasheet) are implemented at two positions in a subsystem, each with a different tag number, a unique serial number and unique test certificates.

The benefit of the system breakdown structure here is that it supports unique information per component, to be delivered and verified on-site. Each system element becomes traceable.

[Figure: delivery phase – tag number, item number and serial number]
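In code, the relation in the figure could be sketched as follows (all identifiers invented for illustration): one generic item, installed at two tagged positions, each with its own serialized unit and certificates:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    part_number: str               # generic part, one datasheet
    datasheet: str

@dataclass
class PhysicalInstance:
    serial_number: str
    test_certificates: List[str]

@dataclass
class TaggedPosition:
    tag: str                       # position in the system breakdown structure
    item: Item                     # which generic part is specified here
    installed: PhysicalInstance    # which physical unit fulfils the position

motor = Item("M-4711", "datasheet-M-4711.pdf")
positions = [
    TaggedPosition("S1.2-M1", motor, PhysicalInstance("SN-001", ["cert-001.pdf"])),
    TaggedPosition("S1.2-M2", motor, PhysicalInstance("SN-002", ["cert-002.pdf"])),
]
```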

The maintenance phase

For the maintenance phase, the system breakdown structure (or a geographical breakdown structure) can be the placeholder to follow the evolution of an installation at a customer site.

Imagine that, in the previous example, the motor with tag number S1.2-M2 appears to be under-dimensioned and needs to be replaced by a more powerful one. The situation after implementing this change would look like the following picture:

[Figure: maintenance phase – replaced motor at tag S1.2-M2]

Through the relationships with the BOM items (not all of them shown in the diagram), it is possible to perform a where-used query and identify other customers with a similar motor at that system position. Perhaps a case for preventive maintenance?

Note: the diagram also demonstrates that the system breakdown structure elements should have their own lifecycle in order to support changes through time (and provide traceability).

From my experience, this is a significant differentiator that PLM systems bring compared to an MRO system. MRO and ERP (Enterprise Resource Planning) systems are designed to work with the latest, actual data only. Bringing in versioning of assets and traceability back to the initial design intent is almost impossible to achieve in these systems (unless you invest in heavy customization).
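To make this concrete, below a small Python sketch of what the where-used query over an installed base could look like, including the element revisions that MRO/ERP systems typically do not keep. All names and structures are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PositionRevision:
    revision: str
    part_number: str   # which generic item is specified at this revision
    reason: str        # change reason, e.g. "under-dimensioned, upgraded motor"

@dataclass
class TaggedPosition:
    tag: str
    revisions: List[PositionRevision] = field(default_factory=list)

    @property
    def current(self) -> PositionRevision:
        return self.revisions[-1]   # MRO/ERP only keeps this view; PLM keeps the history

# Installed base per customer: customer -> tagged positions
installed_base: Dict[str, List[TaggedPosition]] = {
    "Customer A": [TaggedPosition("S1.2-M2", [
        PositionRevision("A", "M-4711", "initial delivery"),
        PositionRevision("B", "M-4712", "under-dimensioned, upgraded")])],
    "Customer B": [TaggedPosition("S7.1-M4", [
        PositionRevision("A", "M-4711", "initial delivery")])],
}

def where_used(part_number: str) -> List[str]:
    """Find customers that currently have the given item installed at some position."""
    return [f"{customer} @ {pos.tag}"
            for customer, positions in installed_base.items()
            for pos in positions
            if pos.current.part_number == part_number]

print(where_used("M-4711"))   # -> ['Customer B @ S7.1-M4'] — a candidate for preventive upgrade
```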

Conclusion

In this post and my previous post, I tried to explain the value of having at least a system breakdown structure as part of the overall PLM data model. This structure supports the early concept phase and connects data from the delivery phase to the maintenance phase.

Where my mission in the past eight years was teaching non-classical PLM industries the benefits of PLM technology and best practices, here you might say the opposite applies: classical BTO (Build To Order) companies can learn from best practices in the process and oil & gas industries.

Note: Oleg just published a new blog post, PLM Best Practices and Henry Ford Mass Production System, in which he claims that PLM vendors, service partners and consultants like to sell Best Practices, yet during implementation still discover that mass customization is needed to become customer-specific; therefore, the age of Best Practices is over.

I agree with that conclusion, as I do not believe an out-of-the-box approach can lead a business change.

Still, Best Practices are needed to explain to a company what could be done, without starting from a blank sheet.

Therefore, I have been sharing this Best Practice (for free).

I believe that PLM, with its roots in automotive, aerospace and discrete manufacturing, is accepted as a vital technology/business strategy to make a company more competitive and guarantee its future. Writing this sentence feels like marketing, trying to generalize a lot of information in one sentence.

Some questions you might raise:

  • Is PLM a technology or business strategy?
  • Are companies actually implementing PLM or is it extended PDM?
  • Does PLM suit every company?

My opinion:

  • PLM is a combination of technology (you need the right IT infrastructure and software to start from) and a business approach (the implementation should be a business transformation). PLM vendors will tell you that it is their software that makes it happen; implementers have their preferred software and methodology to differentiate themselves. It is not a single, simple solution. Interestingly enough, Stephen Porter wrote about this topic this week in the Zero Wait-State blog: Applying the Goldilocks Principle to PLM – finding balance. Crucial for me is that PLM is about sharing data (not only/just documents) with status and context. Sharing data is the only way to break down (information) silos in a company and give each person a more adequate understanding.
  • Most companies that claim to have implemented PLM have implemented just extended PDM, meaning that on top of the CAD software they manage other engineering data and processes. This was also mentioned by Prof. Eigner in his speech during PLM Innovation early this year in Munich. PLM is still considered by management as an engineering tool, while on the other side they have ERP. Managing all product IP with all its iterations and maturity in PLM, and pushing only execution to ERP, is still an unusual approach for more traditional companies. See also a nice discussion from my blog buddy Oleg: BOM: Apple of Discord between PLM and ERP?
  • Not every business needs the full PLM capabilities that are available. Larger companies might focus more on standardized processes across the enterprise; smaller companies might focus more on sharing the data. In my opinion, there is no system that suits all. One point everyone is dreaming of is usability, and as PLM decisions in small companies are made more bottom-up, the voice of the user is stronger there. Therefore I stick to my old post: PLM for the mid-market – mission impossible?

However, the title of this blog post is PLM for all industries, so I will not go deeper into the points above. Topics for the future, perhaps.

PLM for all industries?

This time I will share some observations and experiences from interactions with companies that do not necessarily think about PLM. I have been working with these companies for the past five years, some with success, some still in an awareness phase. I strongly believe the companies described below would benefit a lot from PLM technology and practices.

Apparel

In July, I wrote about my observations from the Product Innovation Apparel event in London. I am not a fashion expert, but there I discovered that, in a sense, PLM in apparel is much closer to the modern vision of PLM than classic PLM is. Apparel companies depend on sharing data in a global model across disciplines and suppliers, driven by their crazy-short time to market and the vast number of interactions in a short time; otherwise they would no longer be competitive and would disappear.

This figure represents modern PLM.

PLM in apparel is still in its early stages. The classic PLM vendors try to support apparel with their traditional systems, which are often too complicated or not user-friendly enough. The niche PLM vendors in apparel offer a more lightweight entry level, simple and easy, sometimes cloud-based; however, they miss the long-term experience of building the required technology, scalability and security into their products, assuring future upgradability. For sure this market will evolve, and we will see consolidation.

Owner / Operators – nuclear

For nuclear plants it is essential to have configuration management in place, which in short means that the plant as it operates (as-built) is the same as its specifications (as-designed). In fact, this is hardly ever the case. A lot of legacy data in paper or legacy document archives does not reflect the actual state; it is stored and duplicated in disconnected places. In parallel, the MRO system (SAP PM and Maximo are the major systems) runs in an isolated environment, dealing only with actual data (which may or may not be validated).

In the past five years I have been working and talking with owner/operators of nuclear plants to discuss and improve support for their configuration management.

The main obstacles encountered are:

  • The boiling frog syndrome – it is not that bad (and even if it is bad, we won't tell you)
  • An IT department that believes configuration management is about document management – they set the standards for the tools (Documentum / SharePoint – no business focus)
  • An aging generation, very knowledgeable in their current work, but averse to new ways of information management and keen to keep the status quo until they retire
  • And the "if it works, do not touch it" approach – somehow related to the boiling frog syndrome

Meanwhile, the business value of a change towards a PLM infrastructure has been identified. With a PLM environment complementing the operational environment, an owner/operator can introduce coordinated changes to the plant, reduce downtime and improve the quality of information for the future. One week less downtime can provide a benefit of millions of euros.

However, with the currently decreasing electricity prices in Europe, the profits of owner/operators are under pressure, and they are not motivated to invest in a long-term project at this time. First satisfy the shareholders…

Owner / Operators – other process-oriented plants

In the nuclear industry, safety is priority one, as required by the authorities; therefore, there is high pressure for data quality and configuration management. For other industries the principles remain the same. Depending on the plant lifetime, the criticality of downtime and the risk of catastrophes, the interest in a PLM-based plant information management platform varies. The main obstacles are similar to the nuclear ones:

  • An even bigger boiling frog: we already have SAP PM – so what else do we need?
  • IT standardizes on a document management solution
  • The aging workforce and higher labor costs are not yet identified as threats for the future, when competing against cheaper, modern plants in upcoming markets – the boiling frog again

The benefits of a PLM-based infrastructure are less directly visible; still, ROI estimates predict that break-even can be reached after two years. Too long for shareholder-driven companies, although in ten years' time the plant might need to close due to inefficiencies.

EPC companies

EPC (Engineering, Procurement and Construction) and EPCIC (Engineering, Procurement, Construction, Installation and Commissioning) companies exist in many industries: nuclear new-build, oil & gas, chemical, civil construction and building construction.

They all work commissioned by owner/operators, and internally they are looking for ways to improve their business performance. To increase their margin they need to work more efficiently, faster and often globally, making use of the best (cheaper) resources around the world. A way to improve quality and margin is through more reuse and modularization. This is a mind-shift, as most EPC companies think in terms of a single project for a single customer, and every owner/operator also pushes their own standards and formats.

In addition, when you start to work on reuse and knowledge capture, you need a way to control and capture your IP. And EPCs want to protect their IP and not expose too much to their customers, to maintain a dependency on their solution.

The last paragraph should sound familiar: these are the challenges automotive and aerospace supply chains faced 15 years ago, and the reasons why PLM was introduced there. Why do EPC companies not jump on PLM?

  • They have their home-grown systems – hard to replace, as everyone likes their own babies (even when they show adolescence or retirement symptoms)
  • Integrated process thinking needs to be developed instead of departmental thinking
  • As they are project-centric, an innovation strategy can only be budgeted inside a huge project, where they can write off the investment against the customer project. However, this makes them less competitive in their bid – so let's not do it
  • Lack of data and exchange standards. Where in the automotive and aerospace industry CATIA was the driving 3D standard, such a standard (and 3D itself) is not yet available for other industries. ISO 15926 for the process industry is reasonably mature; BIM for the construction industry is still in its discovery phase in many countries
  • Extremely loose supplier relations compared to automotive and aerospace, which, combined with the lack of data exchange standards, contributes to low investment in information infrastructure

Conclusion

In the past five years I have been focusing on explaining the significance of PLM infrastructure and concepts to the industries mentioned above. The value lies in sharing data instead of working in silos. If needed, do not call it PLM; call it online collaboration, or controlled Excel in the cloud. Modern web technologies and infrastructure make this all achievable; however, starting to share is a business change. Besides Excel, the boiling frog syndrome dominates everywhere.

  • What do you think?
  • Do you have examples of companies that took advantage of modern PLM capabilities to change their business?

I look forward to learning more.


The problem with a TLA (Three-Letter Acronym) is that there is a limited number of combinations that make sense. And even once you have found the right meaning for a TLA like PLM, you discover many different interpretations.

For PLM I wrote about this in my post PLM misconceptions: PLM = PLM? I can imagine an (un)certain person who wants to learn about PLM might get confused (and should be, if you take it too seriously).

In the end, your company's goal should be to drive innovation, profitability and competitiveness – not to worry about how it is labeled.

As a frequent reader of my blog, you might have noticed I sometimes write about ALM, and here a similar confusion exists, as there are three ALMs that could be meant in the context of my blogging.

Therefore this post, to clarify which ALM I am dedicated to.
First, the other ALMs:

ALM = Application Lifecycle Management

This is an upcoming discipline in the scope of PLM, as embedded software increasingly becomes part of the product in the product development world. And just as PLM manages product data through its lifecycle, this ALM should become a logical part of a modern PLM implementation. Currently, most ALM applications in this context are isolated systems dealing only with the software lifecycle; see this Wiki page.

ALM = Asset Lifecycle Management (operational)

In 2009 I started to focus on (my type of) ALM, called Asset Lifecycle Management, and I discovered the same confusion as when you talk about a BOM. What a BOM really means is only clear when you understand the context. Engineers will usually think of the Engineering BOM (EBOM), representing the product as specified by engineering (managed in PDM). The rest of the organization will usually imagine the Manufacturing BOM (MBOM), representing the product the way it will be produced (managed mostly in ERP).
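As a small aside, a sketch of the same product seen through both BOM lenses; the item names and structures are invented for illustration only:

```python
# Same product, two contexts. The EBOM reflects the design intent;
# the MBOM reflects how the product is actually produced and sourced.
ebom = {
    "Pump P-100": ["Motor M-4711", "Impeller I-220", "Housing H-330", "Seal kit S-440"],
}
mbom = {
    "Pump P-100": ["Sub-assembly SA-1", "Impeller I-220", "Seal kit S-440",
                   "Packaging PKG-9"],          # packaging exists only in the manufacturing view
    "Sub-assembly SA-1": ["Motor M-4711", "Housing H-330"],   # pre-assembled on a separate line
}
```

Same parts, different structures – which is exactly why "the BOM" is ambiguous without context.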

The same is valid for ALM. The majority of people in a production facility, plant or managed infrastructure will consider ALM the way to optimize the lifecycle of assets: optimizing the operation of the plant, deciding when to service or replace an asset and which MRO activities to perform. This sounds a lot like ERP, and as it has a direct, measurable impact on finance, it is the area that gets most of the management's attention.

ALM = Asset Lifecycle Management (information management)

Here we talk about the information management of assets. Maintaining your assets only in an MRO system is similar to a manufacturing company using only an ERP system: you have the data for operations, but no processes in place to manage the change and quality of that data. In the manufacturing world this is done in PDM and PLM systems, and I believe owner/operators of plants can learn from that.

I wrote a few posts about this topic – see Asset Lifecycle Management using a PLM system, PLM CM and ALM – not sexy, and using a PLM system for Asset Lifecycle Management requires a vision – and I am not going to rewrite them here. So get familiar with my thoughts if this is the first time you read about ALM in my blog.

What I wanted to share is that, thanks to modern PLM systems, IT infrastructure/technologies and SBA (Search Based Applications), it becomes achievable for owner/operators to implement an Asset Lifecycle Management vision for their asset information. I am happy to confirm that in my prospect and customer base I see companies investing in and building this ALM vision.

And why do they do this:

  • Reduce maintenance time (incidental and planned) by days or weeks, because people have been working with the right and complete data. Depending on the type of operations, one week less maintenance can bring in millions (power generation, high-demand/high-cost chemicals and more).

  • Reduce failure costs dramatically. As maintenance is often a multi-disciplinary activity, errors due to miscommunication are considered normal in this industry (10% and even more). It is exactly this multi-disciplinary coordination that PLM systems can bring to this world, and the more you can do in the virtual world, the more you can assure you do the right thing during the real maintenance activities. This applies to the industries of the previous bullet, but also to industries using high-cost materials and resources, where the impact of reducing failure costs is high.

  • Improve the quality of data. Often the MRO system contains many operational parameters that were entered at a certain time, by a certain person, with certain skills. Although I used the word certain three times, the result is uncertainty: there is no separate tracing and validation of the parameters per discipline, and an uncertain person looking at the data might not discover an error until things go wrong (see the sketch below). Industries where a human error can be dramatic benefit the most (nuclear, complex chemical processes).
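To illustrate that last bullet, a minimal sketch of what tracing and validating parameters per discipline could look like; the record layout is my own invention, not any MRO system's schema:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ParameterRecord:
    # One operational parameter with its provenance, instead of a bare value in the MRO system
    name: str                  # e.g. "max_operating_pressure"
    value: float
    unit: str
    discipline: str            # e.g. "process", "mechanical", "electrical"
    entered_by: str
    validated_by: Optional[str] = None   # None = never validated: the "uncertain" case

def unvalidated(params: List[ParameterRecord]) -> List[ParameterRecord]:
    """The records an (un)certain reader should question first."""
    return [p for p in params if p.validated_by is None]

params = [
    ParameterRecord("max_operating_pressure", 16.0, "bar", "process", "j.doe", "a.smith"),
    ParameterRecord("motor_power", 11.0, "kW", "electrical", "j.doe"),   # never validated
]
print([p.name for p in unvalidated(params)])   # -> ['motor_power']
```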

Conclusion: PLM-system-based ALM implementations are increasingly becoming reality next to the operational ALM world. After spending more than three years focused on this area, I believe we can see and learn from the first results.

Are you interested in more details, or do you want to share your experience? Please let me know and I will be happy to extend the discussion.

Note: I deliberately used as many TLAs as possible to make this look like a specialist blog, but you can always follow the hyperlink to the wiki explanation where a TLA occurs for the first time.

JOS


May 24th, 2008 was the date I posted my first blog post as the Virtual Dutchman, aiming to share PLM-related topics for the mid-market.

I tried to stay away from technology and function/feature debates and, based on my day-to-day observations, describe the human side of PLM – what people do and why. All from a personal perspective, and always open to discuss and learn more.

Looking back at my 86 posts and 233 comments so far, I would like to share a summary of some of the main topics in my blog.

PLM

In 2008, PLM awareness was much lower – at the time, one of my reasons to start blogging. There was still a need to explain that PLM was a business strategy needed besides ERP and PDM.

PLM brings more efficiency and brings new, innovative products to the market in better quality, thanks to better collaboration between teams and departments.

At that time the big three – Dassault Systemes, Siemens and PTC – were all offering a very CAD-centric, complex approach to PLM. There was no real mid-market offering, although their marketing organizations tried to sell as if one existed. Express, Velocity, ProductPoint – where are these offerings now?

Now, in 2012, PLM awareness is established: everyone is talking about (their interpretation of) PLM, and with Autodesk – a company that knows how to serve the mid-market – acknowledging the need for PLM in their customer base, the term PLM is widespread.

The new PLM providers focus on the disconnect between PDM and PLM; in particular, handling enterprise data outside the PDM scope is a white space for many mid-market companies that need to operate on a global platform.

PLM & ERP

In the relation between PLM and ERP, I haven't seen a big change over the past four years. The two dominant ERP-originated vendors, SAP and Oracle, were already paying attention to PLM in 2008 in their marketing and portfolio approach.

However, in my perception their PLM offerings haven't moved much forward. SAP is selling ERP (and yes, there is a PLM module) and Oracle has PLM systems, but I haven't seen a real, targeted PLM campaign explaining the need for and value of PLM integrated with ERP.

Historically, ERP is the main IT system and gets all the management attention; PLM is considered something for engineering (and gets less focus and budget). Understanding PLM and how it connects to ERP remains a point of attention, and the crucial point of interaction is the manufacturing BOM and the place where it is defined. The two most-read posts on my blog are Where is the MBOM and Bill of Materials for Dummies – ETO, indicating there is a lot of discussion around this topic.

I am happy to announce that in October this year, during PLM Innovation US, I will present and share my thoughts in more detail with the audience, hoping for good discussions.

New trends

Three new trends have become clearer over the past four years.

The first one to mention is the rise of Search Based Applications (SBA). Where PLM systems require structured and controlled data, search-based applications assist the user by "discovering" data anywhere in the organization, often in legacy systems or possibly in modern communication tools.

I believe companies that develop an integrated concept of PLM and SBA can benefit the most, and PLM and ERP vendors should think about combining these two approaches in an integrated offering. I wrote about this combination in my post Social Media and PLM explained for Dummies.
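To illustrate the idea (and not any vendor's product), a toy sketch of how a search-based application differs from a structured PLM query: it indexes whatever sources exist and discovers matches, rather than requiring the data to be structured first. All source names and contents are invented:

```python
from typing import Dict, List

# Invented mini "sources": a legacy archive, an ERP export, a shared drive index
sources: Dict[str, List[str]] = {
    "legacy_archive": ["Pump P-100 datasheet rev A", "Old maintenance memo for S1.2-M2"],
    "erp_export": ["Work order WO-553: replace motor S1.2-M2"],
    "shared_drive": ["Meeting notes: S1.2-M2 under-dimensioned?"],
}

def discover(term: str) -> List[str]:
    """Naive cross-source discovery: return every entry mentioning the term."""
    return [f"{src}: {doc}"
            for src, docs in sources.items()
            for doc in docs
            if term.lower() in doc.lower()]

print(discover("S1.2-M2"))   # hits in all three sources, no prior structuring needed
```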

The second trend is the cloud. Where two to three years ago social media combined with PLM was the hype, a must for product innovation and collaboration, currently the cloud is in focus.

This trend is mainly driven by and coming from the US, where the big marketing engine of Autodesk is making sure it is on the agenda of mid-market companies.

In Europe there is less of a hype at the moment; different countries and many languages to support, plus discussions around security, take the upper hand here.

For me, a cloud solution certainly lowers the threshold for mid-market companies to start implementing PLM. However, how do you make the change in your company? It is not only an IT offering. As in the similar discussion around Open Source PLM, there is still a need to provide the knowledge and the push for change inside a company to implement PLM correctly. Who will provide these skills?

The third trend is the applicability of PLM systems outside the classical manufacturing industries.

I have been writing about the use of PLM systems by owner/operators and in the civil/construction industry, where the PLM system becomes the place to store all plant-related information, connected to assets and with status handling. Currently I am participating in several projects in these new areas, and the results are promising.

People and Change

I believe PLM requires a change in an organization, not only from the IT perspective but, more importantly, in the way people work and the new processes they require.

The change is in sharing information, making it visible and useful for others in order to be more efficient and better informed to make the right decisions much faster.

This is a global trend, and you cannot stay away from it. Keeping data locked within your reach might provide job security, but in the long term it kills all jobs in the company, as competitiveness is gone.

The major task here lies with the management, which should be able to understand and execute a vision beyond their comfort zone. I wrote about this topic in my series around PLM 2.0.

Modern companies with a new generation of workers will have fewer challenges with this change, and I will keep supporting the change with arguments and experiences from the field.

Audience

Since February this year, WordPress provides many more statistics. Interesting is the map below, indicating in which countries my blog is read. As you can see, there are only a few places left on earth where PLM is not studied. Good news!

[Figure: world map of blog readers]

Although most of my observations come from working in Europe, the US provides the most readers (30%), followed by India (9%) and, in third place, the UK (6%).

This might be related to the fact that I write my blog in English (not 100% native English, as someone once commented).

It makes me look forward to being in Atlanta in October for the PLM Innovation US conference, to meet face to face with many of my blog readers and share experiences.

Conclusion

Reading back my posts since 2008 demonstrated to me that the world of PLM is not a static environment. It is so dynamic that some of the posts I wrote in the early days have become obsolete.

At the end of 2008 I predicted the future of PLM in 2050 – so far, we are on the right track.

There is still enough blogging to do without falling into repetition, and I look forward to your opinions, feedback and topics to discuss.

 
