In my last post, I zoomed in on a preferred technical architecture for the future digital enterprise, drawing the conclusion that aiming for a single connected environment is mission impossible. Instead, information will be stored in different platforms, both domain-oriented (PLM, ERP, CRM, MES, IoT) and value-chain-oriented (OEM, Supplier, Marketplace, Supply Chain hub).
In part 3, I posted seven statements that I will be discussing in this series. In this post, I will zoom in on point 2:
Data-driven does not mean we do not need any documents anymore (read: electronic files). Likely, document sets will still be the interface to non-connected entities, suppliers, and regulatory bodies. These document sets can be considered a configuration baseline.
System of Record and System of Engagement
In the image below, a slide from 2016, I show a simplified view of the difference between the current, coordinated approach and the future, connected approach. This picture might create the wrong impression that there are two different worlds – either you are document-driven, or you are data-driven.
In the follow-up to this presentation, I explained that companies will need both environments in the future. The most efficient way of working for operations will be the infrastructure on the right side, the platform-based approach using connected information.
For traceability and disconnected information exchanges, the left side will be there for many years to come. Systems of Record are needed for data exchange with disconnected suppliers and disconnected regulatory bodies, and they are probably crucial for configuration management.
The System of Record will probably remain as a capability in every platform or as a cross-section of platform information. The Systems of Engagement will be the configured real-time environment for anyone involved in active company processes – not only ERP or MES, but all execution.
Introducing SysML and SML
This summer, I received a copy of Martin Eigner’s System Lifecycle Management book, which I am currently reading in my spare moments. I have always enjoyed Martin’s presentations, and in many ways, we share similar ideas. Martin, through his profession, has spent more time on the academic aspects of product and system lifecycle management than I have. On the other hand, I have always been in the field, observing and trying to make sense of what I see and learn in a coherent approach. I am halfway through the book now, and I will certainly come back to it when I have finished.
A first impression: a great and interesting book for all. Martin and I share the same history of data management – read all about it in his second chapter: Forty Years of Product Data Management.
From PDM via PLM to SysLM is a chapter that everyone should read if you haven’t lived through it yourself. It helps you to understand the past (learning from the past to understand the future). When I finish this series about the model-based and connected approach for products and systems, Martin’s book will be highly complementary, given the content he describes.
There is one point on which I am looking forward to feedback from the readers of this blog:
Should we, in our everyday language, better differentiate between Product Lifecycle Management (PLM) and System Lifecycle Management (SysLM)?
In some customer situations, I deliberately talk about System Lifecycle Management to create the awareness that the company’s offering is more than an electro/mechanical product. Or ultimately, in a more circular economy, would we use the term Solution Lifecycle Management, as not only hardware and software might be part of the value proposition?
Martin consistently uses the abbreviation SysLM, where I would prefer the TLA SLM. The problem we both have is that neither abbreviation is unique or explicit enough. SysLM creates confusion with SysML (for dyslexic people or fast readers). SLM already has so many less valuable meanings: Simulation Lifecycle Management, Service Lifecycle Management or Software Lifecycle Management.
For the moment, I will use the abbreviation SLM, leaving open whether it stands for System Lifecycle Management or Solution Lifecycle Management.
How to implement both approaches?
In the long term, I predict that more than 80 percent of the activities related to SLM will take place in a data-driven, model-based environment due to the changing content of the solutions offered by companies.
A solution will be based on hardware, the solid part of the solution, for which we could apply a BOM-centric approach. We can see the BOM-centric approach in most current PLM implementations. It is the logical result of optimizing the product lifecycle management processes in a coordinated manner.
However, the most dynamic part of the solution will be covered by software and services. Changing software or services related to a solution has completely different dynamics than a hardware product.
Software and services implementations are associated with a data-driven, model-based approach.
The management of solutions, therefore, needs to be done in a connected manner. Using the BOM-centric approach to manage software and services would create a Kafkaesque overhead.
Depending on your company’s value proposition to the market, the challenge will be to find the right balance. For example, when you keep on selling “disconnected” hardware, there is probably no need to change your internal PLM processes that much.
However, when you are moving to a “connected” business model providing solutions (connected systems / Outcome-based services), you need to introduce new ways of working with a different go-to-market mindset. No longer linear, but iterative.
A McKinsey concept I have promoted several times illustrates a potential path – note that the article was written not with a PLM mindset but with a business mindset.
What about Configuration Management?
The different datasets defining a solution also challenge traditional configuration management processes. Configuration Management (CM) is well established in the aerospace & defense industry. In theory, proper configuration management should be the target of every industry, to guarantee appropriate performance and to reduce the risk and cost of fixing issues.
The challenge, however, is that configuration management processes are not designed to manage systems or solutions where dynamic updates can be applied, whether or not initiated by the customer.
This is a topic to solve for the modern Connected Car (system) or Connected Car Sharing (solution).
For that reason, I am curious to learn more from Martijn Dullaart’s presentation at the upcoming PLM Roadmap/PDT conference. The title of his session: The next disruption please …
In his abstract for this session, Martijn writes:
From Paper to Digital Files brought many benefits but did not fundamentally impact how Configuration Management was and still is done. The process to go digital was accelerated because of the Covid-19 Pandemic. Forced to work remotely was the disruption that was needed to push everyone to go digital. But a bigger disruption to CM has already arrived. Going model-based will require us to reexamine why we need CM and how to apply it in a model-based environment. Where, from a Configuration Management perspective, a digital file still in many ways behaves like a paper document, a model is something different. What is the deliverable? How do you manage change in models? How do you manage ownership? How should CM adopt MBx, and what requirements to support CM should be considered in the successful implementation of MBx? It’s time to start unraveling these questions in search of answers.
One of the ideas I am currently exploring is that we need a new layer on top of the current configuration management processes, extending validation to software and services. For example, instead of describing every validated configuration, a company might implement the regular configuration management processes only for its hardware.
Next, the systems or solutions in the field will report (or validate) their configuration against validation rules – a topic that requires a long discussion, more than this blog post, potentially a full conference.
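To make the idea a little more tangible, here is a minimal sketch in Python of what such a rule-based validation layer could look like. All names, rules and version numbers below are hypothetical illustrations, not an existing product API: the hardware baseline stays under classical CM, while software versions reported from the field are checked against validated ranges instead of enumerated configurations.

```python
# Hypothetical sketch: validating a field-reported configuration against
# rules instead of enumerating every approved configuration upfront.

from dataclasses import dataclass

@dataclass
class ValidationRule:
    component: str       # e.g. an ECU or service identifier (illustrative)
    min_version: tuple   # lowest software version validated with the hardware baseline
    max_version: tuple   # highest validated version (inclusive)

    def allows(self, version: tuple) -> bool:
        return self.min_version <= version <= self.max_version

# Rules maintained per hardware configuration baseline (classical CM scope).
RULES = {
    "brake-controller-fw": ValidationRule("brake-controller-fw", (2, 0), (2, 9)),
    "infotainment-app":    ValidationRule("infotainment-app", (5, 1), (6, 4)),
}

def validate_reported_config(reported: dict) -> list:
    """Return the deviations for a configuration reported from the field."""
    deviations = []
    for component, version in reported.items():
        rule = RULES.get(component)
        if rule is None:
            deviations.append(f"{component}: no validation rule defined")
        elif not rule.allows(version):
            deviations.append(f"{component}: version {version} outside validated range")
    return deviations

# A connected car reporting its actual software configuration:
print(validate_reported_config({
    "brake-controller-fw": (2, 4),   # within the validated range -> OK
    "infotainment-app": (7, 0),      # newer than validated -> deviation
}))
```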
Therefore, I am looking forward to participating in the CIMdata/PDT Fall conference and picking up the discussions towards a data-driven, model-based future with the attendees. Besides CM, there are several other topics of great interest for the future. Have a look at the agenda here.
Conclusion
A data-driven and model-based infrastructure still needs to be combined with a coordinated, document-driven infrastructure. Where the focus will be depends on your company’s value proposition.
If we discuss hardware products, we should think PLM. When you deliver systems, you should perhaps talk SysLM (or SLM). And maybe it is time to define Solution Lifecycle Management as the term for the future.
Please, share your thoughts in the comments.
Last week I got the following question:
Many companies face challenges related to cooperation and joint ventures and need to integrate their portfolios in a smart way to offer integrated solutions. In the world of sharing and collaboration, this may be a good argument to dig into. Is PLM software ready for this challenge with best-practice solutions, or is this a matter of specific development case by case? Any guidelines?
Some history
When PLM solutions were developed, their core focus was on bringing hardware products to the market in a traditional manner, as shown in the figure below.
Products were pushed to the market based on marketing research and closed innovation. Closed innovation meant companies were dependent on their internal R&D to provide innovative products. And this is the way most PLM systems are implemented: supporting internal development. Thanks to global connectivity, the internal development teams can collaborate, connected to a single PLM backbone/infrastructure.
Third Party Products (TPP) at that time were sometimes embedded in the EBOM, and during the development phase, there would be an exchange of information between the OEM and the TPP provider. Third Party Products were treated in a similar manner as purchased items. And as the manufacturing of the product was often defined in the ERP system, the contractual and financial interactions with the TPP provider were handled there, creating a discontinuity between what had been defined for the product and what was shipped. The disconnect between the engineering intent and the actual delivery to the customer was often managed in Excel spreadsheets or proprietary databases developed to soften the pain.
What is happening now?
In the past 10–15 years, we have seen the growing importance of, first, electronic components and their embedded software, now followed by new go-to-market approaches, where the customer proposition changes from just a product towards a combined offering of hardware, software, and services. Let’s have a look at how this could be done in a PLM environment.
From Products to Solutions
The first step is to manage the customer proposition in a logical manner instead of managing everything in a BOM definition. In traditional businesses, most companies still work around multiple Bills of Materials. For example, read this LinkedIn post: The BOM is King. This approach works when your company only delivers hardware.
Not every PLM system supports a logical structure out-of-the-box. I have seen implementations where this logical structure was stored in an external database (not preferred) or as a customized structure in the PLM system. Even in SmarTeam, this methodology was used to support Asset Lifecycle Management. I wrote about this concept in early 2014, in the context of Service Lifecycle Management (SLM), in two posts: PLM and/or SLM? and PLM and/or SLM (continued). It is no coincidence that the concepts used for connecting SLM to PLM are similar to defining customer propositions.
In the figure to the left, you can see the basic structure to manage a customer proposition and how it would connect to the aspects of hardware, software, and services. In an advanced manner, the same structure could be used with configuration rules to define and create a portfolio of propositions. More about this topic, potentially, in a future blog post.
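To give a rough feel for such a logical structure, the sketch below (in Python, with purely hypothetical class names and example data) models a proposition node that references hardware, software, and service elements instead of forcing everything into one BOM:

```python
# Hypothetical sketch of a logical proposition structure on top of the BOM:
# the proposition node references hardware (BOM-managed), software, and
# service elements rather than squeezing all of them into one BOM.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Element:
    kind: str       # "hardware" | "software" | "service"
    name: str
    reference: str  # e.g. an EBOM root, a software release id, or a service contract id

@dataclass
class Proposition:
    name: str
    elements: List[Element] = field(default_factory=list)

    def by_kind(self, kind: str) -> List[Element]:
        return [e for e in self.elements if e.kind == kind]

# Example: a connected-equipment offering combining all three aspects.
offer = Proposition("Connected Packaging Line", [
    Element("hardware", "Packaging line model X", "EBOM-4711"),
    Element("software", "Line control suite", "SW-REL-2.3"),
    Element("service",  "Preventive maintenance plan", "SRV-GOLD"),
])

for e in offer.by_kind("software"):
    print(f"{offer.name} includes software: {e.name} ({e.reference})")
```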
For hardware, most PLM systems have their best practices based on the BOM, as discussed before. When combining the hardware with embedded software, we enter the world of systems. The proposition is no longer a product; it becomes a system or even an experience.
For managing systems, I see two main additions to the classical PLM approach:
- The need for connected systems engineering. As the behavior of the system is much more complicated than that of a hardware product, companies discover the need to spend more time on understanding all the requirements for the system and its potential use cases in operation – the only way to define the full experience. Systems Engineering practices originating in Automotive & Aerospace are now entering the world of high-tech, industrial equipment, and even consumer goods.
- The need to connect software deliverables. Software introduces a new challenge for companies, no matter if the software is developed internally or embedded through TPP. In both situations, there is the need to manage change in a fast and iterative manner. Classical ECR/ECO processes do not work here anymore. Working agile and managing a backlog becomes the mode of operation. Application Lifecycle Management connected to PLM becomes a need.
In both domains, systems engineering and ALM, PLM vendors have their offerings, and on the marketing side, they might all look the same to you. However, there is a fundamental need that is not always visible on the marketing slides: the need for complete openness.
Openness
To manage a portfolio based on systems, a company can no longer afford to manually check, across multiple management systems, all the dependencies between the product, its components, the software deliverables, and TPPs. Automation, traceability of changes, and notifications are needed in a modern, digital environment, which you might call a product innovation platform. My high-speed blog buddy Oleg Shilovitsky just dedicated a post to “The Best PLM for Product Innovation Platform”, sharing several quotes from CIMdata´s talk about the characteristics of a Product Innovation Platform and stressing the need for openness.
It is true: if you can only manage your hardware (mechanics & electronics) and software in dedicated systems, your infrastructure will be limited and rigid, while the outside world is changing constantly and fast. No single solution or product does it all or will do it all in the future. Therefore, openness is crucial.
Services
In several companies, originally in the Engineering, Procurement & Construction industry, I have seen the need to manage services in the context of the customer delivery too. Highly customized systems and/or disconnected systems were used here. I believe the domain of managing a proposition – a combination of hardware, software, AND services in a connected environment – is still in its early days. Hence the question marks in the diagram.
Conclusion
How Third Party Product management is supported by PLM depends very much on the openness of the PLM system, on how it connects to ALM, and on how the PLM system is able to manage a proposition. If your PLM system has been implemented as a supporting infrastructure for engineering only, you are probably not ready for the modern digital enterprise.
Other thoughts ???
In my previous post, I wrote about the different ways you could look at Service Lifecycle Management (SLM), which, I believe, should be part of the full PLM vision. The fact that this does not happen is probably because companies buy applications to solve issues instead of implementing a consistent company-wide vision (when and where to start is the challenge). Oleg Shilovitsky just referred one more time to this phenomenon – Why PLM is stuck in PDM.
I see PLM as the enterprise information backbone for product information. Here, I will discuss the logical flow of data that might be required in a PLM data model to support SLM. Of course, all of this should be interpreted in the context of the kind of business your company is in.
This post is probably not the easiest to digest, as it assumes you are somewhat aware of and familiar with the issues relevant to the ETO (Engineering To Order) / EPC (Engineering Procurement Construction) / BTO (Build To Order) business.
A collection of systems or a single device
The first significant differentiation I want to make is between managing an installation and managing a single device; I will focus only on installations.
An installation can be a collection of systems, subsystems, equipment and/or components, typically implemented by companies that deliver end-to-end solutions to their customers. Think of an oil rig, a processing production line (food, packaging, …) or a plant (processing chemicals, nuclear materials), where maintenance and service can be performed on individual components, providing full traceability.
Most of the time, a customer-specific solution is delivered, either directly or through installation/construction partners. This is the domain I will focus on.
I will not focus on the other option: a single device (or system) with a unique serial number that needs to be maintained and serviced as a single entity, for example a car or a computer device – usually a product for mass consumption, not traced individually.
In order to support SLM at the end of the PLM lifecycle, we will see that a particular data model is required, one that has dependencies on the early design phases.
Let’s go through the lifecycle stages and identify the different data types.
The concept / sales phase
In the concept/sales phase, the company needs to have a template structure to collect and process all the information shared and managed during the customer interaction.
In the implementations that I guided, this was often a kind of folder structure grouping information into a system view (what do we need), a delivery view (how and when can we deliver), a services view (who does what) and a contractual view (cost, budget, time constraints). Initially, most of these folders related to documents. However, the system view was often already based on typical system objects representing the major systems, subsystems and components, with metadata.
In the diagram, the colors represent various data types, often available as standard in a rich PLM data model. Although it could be simplified back to the old folder/document approach shared on a server, you will recognize the functional grouping of the information and its related documents, which can be further detailed into individual requirements if needed and affordable. In addition, a first conceptual system structure can already exist, with links to potential solutions (generic EBOMs) that have been developed before. A PLM system provides the ideal infrastructure to store and manage all data in context of each other.
The Design phase
Before the design phase starts, there is an agreement on the solution to be delivered. From that point, an as-sold system structure will be leading for the project delivery, and later this evolved structure will be the reference structure for the as-maintained and as-serviced environments.
A typical environment at this stage will support a work breakdown structure (WBS), a system breakdown structure (SBS) and a product breakdown structure (PBS). In cases where the location of the systems and subsystems is relevant for the solution, a geographical breakdown structure (GBS) can be used. This last method is often used in shipbuilding (sections/compartments) and plant design (areas/buildings/levels) and is relevant for any company that needs to combine systems and equipment in shared locations.
The benefit of having the system breakdown structure is that it manages the relations between all systems and subsystems. Potentially, when a subsystem is delivered by a supplier, this environment supports the relationship with the supplier and the tracking of the delivery in relation to the full system/project.
Note: the system breakdown structure typically uses a hierarchical tag numbering system as the primary ID for system elements. In a PLM environment, the system breakdown elements should be data objects, providing the metadata describing the performance of the element, including the mandatory attributes that are required for exchange with MRO (Maintenance, Repair and Overhaul) systems.
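As a minimal illustration of such a data object, here is a sketch in Python; the class, the tag numbers and the attribute set are all hypothetical, as the exact mandatory MRO attributes depend on the target system:

```python
# Hypothetical sketch: a system breakdown element as a data object, with the
# hierarchical tag number as primary id and metadata needed for MRO exchange.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SystemElement:
    tag_number: str        # hierarchical tag, e.g. "S1.2-M2" (illustrative)
    description: str
    attributes: Dict[str, str] = field(default_factory=dict)   # performance metadata
    children: List["SystemElement"] = field(default_factory=list)
    parent: Optional["SystemElement"] = field(default=None, repr=False)

    def add_child(self, child: "SystemElement") -> "SystemElement":
        child.parent = self
        self.children.append(child)
        return child

root = SystemElement("S1", "Conveyor subsystem")
motor = root.add_child(SystemElement(
    "S1.2-M2", "Drive motor, position 2",
    attributes={"power_kW": "7.5", "voltage_V": "400"},  # assumed MRO attributes
))
print(motor.tag_number, motor.attributes)
```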
Working with a system breakdown structure is common for plant design or an asset maintenance project, and this approach will be very beneficial for companies delivering process lines, infrastructure projects and other solutions that need to be delivered as a collection of systems and equipment.
The delivery phase
During the delivery phase, the system breakdown structure supports the delivery of each component in detail. In the example below you can see the relation between the tag number, the generic part number and the serial number of a component.
The example below demonstrates the situation where two motors (same item – same datasheet) are implemented at two positions in a subsystem, each with a different tag number, a unique serial number and unique test certificates.
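In data terms, this situation could be sketched as follows – the item, serial and certificate numbers are made up purely for illustration:

```python
# Hypothetical sketch: one generic item, two physical instances at two tag
# positions, each with its own serial number and test certificate.

from dataclasses import dataclass

@dataclass
class PhysicalInstance:
    tag_number: str        # position in the system breakdown structure
    item_number: str       # generic part number (same datasheet for both motors)
    serial_number: str     # unique per physical unit
    test_certificate: str  # unique certificate, delivered and verified on-site

installed = [
    PhysicalInstance("S1.2-M1", "MOT-7050", "SN-001234", "CERT-88101"),
    PhysicalInstance("S1.2-M2", "MOT-7050", "SN-001235", "CERT-88102"),
]

for unit in installed:
    print(f"{unit.tag_number}: item {unit.item_number}, serial {unit.serial_number}")
```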
The benefit of a system breakdown structure here is that it supports the delivery of unique information per component that needs to be delivered and verified on-site. Each system element becomes traceable.
The maintenance phase
For the maintenance phase, the system breakdown structure (or a geographical breakdown structure) could be the placeholder to follow the evolution of an installation at a customer site.
Imagine that, in the previous example, the motor with tag number S1.2-M2 appears to be under-dimensioned and needs to be replaced by a more powerful one. The situation after implementing this change would look like the following picture:
Through the relationships with the BOM items (not all are shown in the diagram), there is the possibility to perform a where-used query and identify other customers with a similar motor at that system position. Perhaps a case for preventive maintenance?
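Conceptually, such a where-used query over the installed base could look like the hypothetical fragment below (all names and data are illustrative):

```python
# Hypothetical sketch: where-used query across customer installations to find
# every position where a given item is installed, e.g. for preventive maintenance.

from dataclasses import dataclass
from typing import List

@dataclass
class InstalledComponent:
    customer: str
    tag_number: str
    item_number: str
    serial_number: str

fleet = [
    InstalledComponent("Customer A", "S1.2-M2", "MOT-7050", "SN-001235"),
    InstalledComponent("Customer B", "S3.1-M4", "MOT-7050", "SN-002411"),
    InstalledComponent("Customer C", "S2.2-M1", "MOT-9000", "SN-003002"),
]

def where_used(item_number: str, fleet: List[InstalledComponent]) -> List[InstalledComponent]:
    return [c for c in fleet if c.item_number == item_number]

# The under-dimensioned motor type: which other customers run it, and where?
for hit in where_used("MOT-7050", fleet):
    print(f"{hit.customer}: {hit.tag_number} (serial {hit.serial_number})")
```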
Note: the diagram also demonstrates that the system breakdown structure elements should have their own lifecycle in order to support changes through time (and provide traceability).
From my experience, this is a significant differentiator that PLM systems can bring in relation to an MRO system. MRO and ERP (Enterprise Resource Planning) systems are designed to work with the latest and actual data only. Bringing in versioning of assets and traceability towards the initial design intent is almost impossible to achieve in these systems (unless you invest in heavy customization).
Conclusion
In this post and my previous post, I tried to explain the value of having at least a system breakdown structure as part of the overall PLM data model. This structure supports the early concept phase and connects data from the delivery phase to the maintenance phase.
Where my mission in the past 8 years was teaching non-classical PLM industries the benefits of PLM technology and best practices, in this situation you might say it is the classical BTO companies that can learn from best practices from the process and oil & gas industries.
Note: Oleg just published a new blog post, PLM Best Practices and Henry Ford Mass Production System, where he claims that PLM vendors, service partners and consultants like to sell Best Practices, yet during implementation discover that massive customization is needed to become customer-specific; therefore, the age of Best Practices is over.
I agree with that conclusion, as I do not believe in an out-of-the-box approach to lead a business change.
Still, Best Practices are needed to explain to a company what could be done in that context, without starting from a blank sheet.
Therefore, I have been sharing this Best Practice (for free).
Some weeks ago, a vivid discussion started in a PLM LinkedIn group around the need for SLM (Service Lifecycle Management) besides PLM. Of course, the discussion was already simmering in the background in other LinkedIn groups and forums, triggered by PTC´s announcement to focus on SLM and their “observation” that they were probably the only PLM vendor to have observed that need. The Internet of Things is, in one pen stroke, connected with SLM. (Someone still using a pen?)
Of course, it is not that simple, and I will try to bring some logic into the thought process, the potential hype, and the various approaches you could take related to SLM.
SLM
First, SLM as a TLA (Three-Letter Acronym). If you Google the meaning of SLM, the most common result is a greeting: often said on IRC, it is short for “salaam”, or hello.
In the context of PLM, it is a relatively new acronym, and the discussion on LinkedIn was also about whether we need a new TLA at all. What we try to achieve with SLM is the ability to trace and follow existing products at customers and to provide advanced or integrated services to them. In a basic manner, this could be providing documentation and service information (spare parts information). In an advanced manner, this could mean thinking about the Internet of Things: products that connect to the home base and provide information for preventive maintenance, performance monitoring, enhancements, etc.
The topic is not new for companies around the world that have a “what can we do beyond PDM” vision; already in 2001, I was involved in discussions with a large Swiss company providing solutions for the food processing industry. They wanted to leverage their internal customer-centric delivery process and extend it to customer support, using a web interface for relevant content: spare parts lists and documentation.
I am sure one or two readers of this blog post will remember “the spindle case” (the only part in the demo concept that had real data behind it at that time)
For many industries and businesses, customer services (and the margin on spare parts) are the main areas where they make a sustainable profit to secure the company’s future. Most of the time, the initial sale and/or delivery of their products is done at a relatively low margin due to the competitive sales situation they are in during selling. And of course, the sale itself is surrounded by uncertainty, which vendors have to accept.
If they were to ask for more certainty, it would require more detailed research, which is costly for them or considered a disadvantage by their potential customer. As other competing vendors do not insist on further research, your company might be considered not “skilled” enough to estimate a product properly.
The above paragraph implicitly clarifies that we are mainly talking about companies whose primary process is Engineering to Order or Build to Order. For companies where the product is delivered through a Configure to Order or an off-the-shelf approach, there is no need to work in a similar manner. Buying a computer or a car no longer involves sales engineering. There is a clear understanding of the target price, and of course resellers will still focus on differentiating themselves by providing adjacent services.
So, for simplicity, I will focus on companies with a BTO or ETO primary business process.
SLM and ETO
In a real Engineering to Order process, traditionally, the company that delivers the solution will not really be involved in the follow-up of the lifecycle of the delivered products. The delivered product (small machinery, large machinery, or even an installation or plant) is handed over to the customer, and with the commissioning and handover, a lot of information is transferred to the customer, based on the customer’s requirements.
Usually, during this handover, a lot of the intelligence of the information is lost, as the customer does not have the same engineering environment and therefore requires information in “neutral” formats: paper (less and less), PDFs (the majority) and (stripped) CAD data combined with Excel files.
The information battle here between the ETO-delivery company and the customer is that the ETO-delivery company does not want to provide so much information that the customer becomes fully independent, as the service and spare parts business is the area where they can make their margin. The customer, however, often wants ownership of the majority of the data, but is also aware that asking for too much means paying for it (as the engineering company will consider this extra work). So finding the right balance is the point.
However, the balance is changing, and this is where SLM comes in.
More and more, we see that companies who purchased an Engineering to Order product (or even a plant) in the past are changing their business model towards using the product or running the plant, and ask the Engineering to Order company to provide the solution as a service – a kind of operational lease including resources. This means solutions are no longer sold as a collection of products, but as an operational model (40,000 chickens/day, 1 million liters/day, 100,000 tons/year, etc.).
The user of the equipment is no longer the owner, but pays for the service to perform the business. Very similar to SaaS (Software as a Service) solutions: you do not own the software anymore; you pay for using it, no matter what kind of hardware/software architecture is behind the offering.
In that case, the Engineering to Order company can provide much more advanced services when it extends its delivery process with capabilities for the operational phase of the product, as a more integrated approach eliminates the need for the disruptive handover process. Data does not need to be made “stupid” again; it becomes a continuous flow of information.
How this can be done, I will describe in an upcoming, more technical, blog post. This approach brings value to both the Engineering to Order company and the owner/operator of the product / plant.
As it is a continuous flow of information, I would like to conclude this topic by stating that, for Engineering to Order companies, there is no need to think about an extra SLM solution. You could label the last part of the PLM process as the SLM domain.
As the customer data is already unique, it is just a normal continuation of the PLM process.
Two closing notes here:
- I have already seen Engineering to Order companies that provide the whole maintenance and service of the delivered product/plant to their customer, integrated in their data environment (so it is happening!).
- Engineering to Order companies are still discovering the advantages of PLM for getting a cross-project, cross-discipline understanding and working methodology for their delivery process. Historically, they were thinking in isolated projects, where the brains of experienced engineers were the connection between different projects. Now PLM practices are becoming the foundation for sharing and capitalizing on knowledge.
And with this last remark on capitalizing on knowledge, we move from the Engineering to Order industry to Build to Order.
SLM and BTO
In the Build to Order industry, the company that delivers a solution to its customer has tried, in a way, to standardize certain parts of the total solution. These parts can be standardized/configurable machinery or equipment, or, one level higher, standardized systems and subsystems.
More configurable/modular standardization is what most companies are aiming for, as the more you modularize your solution parts, the clearer it becomes that there are two different main processes inside the same organization:
- One process, the main process for the company, fulfilling the customer need. In this process, it is about combining existing solution components and engineering them together into a customer-specific solution. This could be a PLM delivery model like ETO.
- One process to enhance, maintain and develop new solution components, which is a typical R&D process. Here I would state that PLM is indisputably needed, to bring new technology and solutions to the main business process.
So within a company, there might be the need for two different PLM solution processes. From my observations in the past 10 years, companies invest in PDM for their R&D process and try to do a little PLM on top of this PDM implementation for their delivery process. This basic PLM process usually focuses again on the core of the engineering process of delivery, starting somewhere from the specifications up to the delivery of the solution.
So “full” PLM is very rare to find. The front end of the delivery process, systems engineering, is often considered complex, and often the customer does not want to engage fully in the front-end definition of the solution.
“You are the experts, you know best what we want” is often heard.
Ironically, an analogous situation often puts PLM implementations at risk: here, the company expects the PLM implementer to know what they want, without being explicit or understanding what is needed.
To extend the discussion for PLM and SLM, I would like to change the question to a different dimension first:
Do we need two PLM implementations within one company?
One for R&D and one for the delivery process?
Reasons to say No are:
- Simplicity – it is easier to have one system instead of two systems
- The amount of R&D activity is so low compared to the delivery process that the main PLM system can support it.
Reasons to say Yes are:
- The R&D process is extremely important, as is the delivery process
- The R&D process is extremely important, and we have a large customer base to serve
Reading these two options brings some clarity.
If the R&D process is a significant differentiator and you are aiming to serve many customers, it makes sense to have two PLM implementations.
Still, the two PLM implementations could be based on the same PLM infrastructure, and I would challenge readers of this post to explain why it should be a single instance of a PLM infrastructure.
Why two PLM systems?
- Based on the potentially huge amount of data, I believe a single instance would create a data monster, whereas we can see that connected systems (using big data) are the future.
- In other concepts, there is an enterprise PLM and local PDMs, precisely because there is no single system that can do it all in an efficient manner.
Still, I haven’t talked about SLM, which could be part of the delivery process, where you manage customer-specific data. For that, as I will detail in my next blog post, there are some data model constraints for the PLM system.
I would state that you can only use a separate SLM system if you are not interested in data from the early phases of the delivery process. In the early phase, you use conceptual structures to define the product/installation/plant. These conceptual structures are, in my opinion, the connection between the concept phase and the service phase. Usually, tag numbers are used to describe the functional usage of a product or system, and they are the ones referenced by service engineers to start a service operation.
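As a tiny illustration of that reference, the hypothetical fragment below resolves a tag number to the currently installed serial number, so a service operation starts from the functional position rather than from a serial number; all names and values are illustrative:

```python
# Hypothetical sketch: a service engineer starts a service operation from a
# tag number (functional position); the tag resolves to the physical unit
# currently installed at that position.

# Tag number -> currently installed serial number, kept up to date through
# the lifecycle of the system breakdown element (illustrative data).
installed_at_tag = {
    "P-101": "SN-771001",
    "P-102": "SN-771002",
}

def start_service_operation(tag_number: str) -> str:
    serial = installed_at_tag.get(tag_number)
    if serial is None:
        raise KeyError(f"No unit registered at tag {tag_number}")
    return f"Service operation opened for {tag_number}, unit {serial}"

print(start_service_operation("P-101"))
```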
Only when this view or need does not exist can I imagine that a separate SLM system is needed, where, potentially based on serial numbers, services are tracked and monitored and fed back to the R&D environment. The R&D environment would then publish product data into the SLM system.
You might be confused at this point, as I did not bring the various information structures into this post to clarify the data flow for the delivery process. This I will do in my upcoming post.
Why not CTO and SLM?
I haven’t discussed Configure to Order (CTO) here, as I consider CTO a logistical process, which is logically addressed by the ERP system. The definitions of the configurations and their related content will probably be delivered through a PDM/PLM system, so the R&D type of PLM system will exist in the company.
SLM would most logically be performed by the ERP system in this situation, as there is no PLM delivery layer. Having said this, a new religious debate might come up: is SLM a separate discipline, or is it part of the ERP system?
This topic is no discussion for the big ERP vendors – they do it all 🙂 – but it is up to your company whether a Swiss Army knife is the right tool to work with in your organization.
Conclusion
For the moment I would like to conclude:
- PLM and SLM –> No (only Yes in isolated cases)
- PLM and PLM –> Yes (as SLM requires the front end of PLM too)
Do we need SLM? Perhaps yes, as a way to describe a functional domain; no, when we are talking about another silo system. I believe the future lies in the connectivity of data, and in the long term, PLM, ERP and SLM will be functional domains describing how connected data serves particular needs.
Looking forward to your thoughts