
When I started this series in July, I expected to talk mostly about new ways of working, enabled through a data-driven and model-based approach. However, when analyzing what is needed for such a future (part 3), it became apparent that many of these new ways of working are dependent on technology.

From coordinated to connected sounds like a business change; however, it all depends on technology. And here I have to thank Marc Halpern (Gartner’s Research VP, Engineering and Design Technologies) again, who came up with this brilliant scheme below:

So now it is time to address the last point from my starting post:

Configuration Management requires a new approach. The current methodology is very much based on hardware products with labor-intensive change management. However, the world of software products has different configuration management and change procedures. Therefore, we need to merge them into a single framework. Unfortunately, this cannot be the BOM framework due to the dynamics in software changes.

Configuration management at this moment

PLM and CM are often considered overlapping. My March 2019 post, PLM and Configuration Management – a happy marriage?, shares some thoughts related to this point.

Does having PLM or PDM installed mean you have implemented CM? There is this confusion because revision management is considered the same as configuration management. Read my March 2020 post, What the FFF is happening?, based on a vivid discussion launched by Yoann Maingon, CEO and founder of Ganister, an example of a modern, graph database-based, flexible PLM solution.

To hear it from the CM side, I discussed it with Martijn Dullaart in my February 2021 post: PLM and Configuration Management. In this post, we also zoomed in on CM2 as a methodology.

Martijn is the Lead Architect for Enterprise Configuration Management at ASML (Our Dutch national pride) and chairperson of the Industry 4.0 committee of the Integrated Process Excellence (IPX) Congress.

As mentioned in a previous post (part 6), he will be speaking at the PLM Roadmap & PDT Fall conference starting this upcoming week.

In this post, I want to talk about the CM future. For understanding the current situation, you can find a broad explanation here on Wikipedia. Have a look at CM in the context of the product lifecycle, ensuring that the product As-Specified and As-Designed information matches the As-Built and As-Operated product information.

A mismatch or inconsistency between these artifacts can lead to costly errors, particularly in later lifecycle stages. For that reason, CM originated in the Aerospace and Defense industry. However, companies in other industries might have implemented CM practices too, either due to regulations or because they understand that configuration mistakes can cause significant damage to the company.

Historically, configuration management addressed the needs of “slow-moving” products. For example, the design of an airplane could take years before manufacturing started. Tracking changes and ensuring consistency of all referenced datasets was often a manual process.

On purpose, I wrote “referenced datasets,” as the information was not connected in a single environment most of the time. The identifier of a dataset ( an item or a document) was the primary information carrier used for mentally connecting other artifacts to keep consistency.

The Institute of Process Excellence (IPX) has been one of the significant contributors to configuration management methodology. They have been providing (and still offer) CM2 training and certification.

As mentioned before, PLM vendors or implementers suggest that a PLM system could fully support Configuration Management. However, CM is more than change management, release management and revision management.

As the diagram from Martijn Dullaart shows, PLM is one facet of configuration management.

Of course, there are also (a few) separate CM tools focusing on the configuration management process. CMstat’s EPOCH CM tool is an example of such software. In addition, on their website, you can find excellent articles explaining the history and their future thoughts related to CM.

The future will undoubtedly be a connected, model-based, software-driven environment. Naturally, therefore, configuration management processes will have to change. (Impressive buzz word sentence, still I hope you get the message).

From coordinated to connected has a severe impact on CM. Let’s have a look at the issues.

Configuration Management – the future

The transition to a data-driven and model-based infrastructure has raised the following questions:

  • How do we deal with the granularity of data? Each dataset needs to be validated. In the document-based approach, a document (a collection of datasets) is validated as a whole. How do we do this efficiently at the dataset level?
  • The behavior of a product (or system) will depend more and more on software. Product CM practices have been designed for the hardware domain; now, we need a mix of hardware and software CM practices.
  • Due to the increased complexity of products (or systems) and the rapid changes in software versions, how do we guarantee the As-Operated product still matches the As-Designed / As-Certified definitions?

I don’t have answers to these questions. I only share observations and trends I see in my actual world.

Granularity of data

The concept of datasets has been discussed in my post (part 6). Now it is about how to manage the right sets of connected data.

The image on the left, borrowed from Erik Herzog’s presentation at the PLM Roadmap & PDT Fall conference in 2020, is a good illustration of the challenge.

At that time, Erik suggested that OSLC could be the enabler of a digital CM backbone for an enterprise. Therefore, it was a pleasure to see Erik providing an update at the yearly OSLC Fest conference this week.

You can find the agenda and Erik’s presentation here on day 2.

OSLC as a framework seems to be a good candidate for supporting modern CM scenarios. It allows a company to build full traceability between all relevant artifacts (if digitally available). I can see the beauty of the technical infrastructure.

Still, it is about people and processes first. Therefore, I am curious to learn from readers who believe in and experiment with such a federated infrastructure.

More software

Traditionally working companies might believe that software should be treated as part of the Bill of Materials. In this theory, you treat software code as a part, with a part number and revision. In this way, you might believe configuration management practices do not have to change. However, there are some fundamental reasons why we should decouple hardware and software.

First, for the same hardware solution, there might be a whole collection of valid software codes. Just like your computer. How many valid software codes, even from the same application, can you run on this hardware? Managing a computer system and its software through a Bill of Materials is unimaginable.

A computer, of course, is designed for running all kinds of software versions. However, modern products in the field, like cars, machines, electrical devices, all will have a similar type of software-driven flexibility.

For that reason, I believe that companies that deliver software-driven products should design a mechanism to check whether the combination of hardware and software is valid. For a computer system, a software mismatch might not be costly or painful; for an industrial system, it is crucial to ensure invalid combinations cannot exist.
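As a thought experiment (not a reference to any specific product or vendor), such a validity check could be as simple as maintaining the validated hardware/software combinations and verifying a reported configuration against them. The identifiers below are purely illustrative.

```python
# Minimal sketch of a hardware/software validity check.
# All identifiers and approved combinations are illustrative assumptions.

# Validated combinations: hardware revision -> set of approved software versions
APPROVED_COMBINATIONS = {
    "CTRL-100 rev B": {"fw 2.1.0", "fw 2.2.3"},
    "CTRL-100 rev C": {"fw 2.2.3", "fw 3.0.1"},
}

def is_valid_configuration(hardware_rev: str, software_version: str) -> bool:
    """Return True only if this hardware/software pair has been validated."""
    return software_version in APPROVED_COMBINATIONS.get(hardware_rev, set())

# A fielded system could report its as-operated configuration and check it:
print(is_valid_configuration("CTRL-100 rev B", "fw 2.2.3"))  # True
print(is_valid_configuration("CTRL-100 rev B", "fw 3.0.1"))  # False - not validated
```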

Tools like Configit or pure::variants might lead to a solution. In February 2021, in PLM and Configuration Lifecycle Management, I discussed the unique features of their solution with Henrik Hulgaard, the CTO of Configit.

I hope to have a similar post shortly with Pure Systems to understand their added value to configuration management.

Software change management is entirely different from hardware change management. The challenge is to have two different change management approaches under one consistent umbrella without creating needless overhead.

Increased complexity – the digital twin?

With the increased complexity of products and many potential variants of a solution, how can you validate a configuration? Perhaps we should investigate the digital twin concept, with a twin for each instance we want to validate.

Having a complete virtual representation of a product, including the possibility to validate the software behavior on the virtual product, would allow you to run (automated) validation tests to certify and later understand a product in the field.

No need for on-site inspection or test-and-fix upgrades in the physical world. Needed for space systems for sure, but why not for every system in the long term? When we are able to define and maintain a virtual twin of our physical product (on-demand), we can validate it virtually.

I learned about this concept at the 2020 Digital Twin conference in the Netherlands. Bart Theelen from Canon Production Printing explained that they could feed their simulation models with actual customer data to simulate and analyze the physical situation. In some cases, it is even impossible to observe the physical behavior. By tuning the virtual environment, you might understand what happens in the physical world.

An eye-opener and an advocate for the model-based approach. Therefore, I am looking forward to the upcoming PLM Roadmap & PDT Fall conference. Hopefully, Martijn Dullaart will share his thoughts on combining CM and working in a model-based environment. See you there?

Conclusion

Finally, in this series, we have reached the methodology part, particularly the one related to configuration management and traceability in a very granular, digital environment.

After the PLM Roadmap & PDT fall conference, I plan to follow up with three thought leaders on this topic: Martijn Dullaart (ASML), Maxime Gravel (Moog) and Lisa Fenwick (CMstat).  What would you ask them?

In my previous post, I discovered that my header for this series is confusing. Although a future implementation of system lifecycle management (SLM/PLM) will rely on models, the most foundational change needed is a technical one to create a data-driven infrastructure for connected ways of working.

My previous article discussed the concept of the dataset, which led to interesting discussions on LinkedIn and in my personal interactions. Also, this time Matthias Ahrens (HELLA) shared again a relevant but very academic article in this context – how to harmonize company information.

For those who want to dive deeper into the concept of connected datasets, read this article: The euBusinessGraph ontology: A lightweight ontology for harmonizing basic company information.

The article illustrates that the topic is relevant for all larger enterprises (and it is not an easy topic).

This time I want to share my thoughts about the two statements from my introductory post, i.e.:

A model-based approach with connected datasets seems to be the way forward. Managing data in documents will become inefficient as they cannot contribute to any digital accelerator, like applying algorithms. Artificial Intelligence relies on direct access to qualified data.

A model-based approach with connected datasets

We discussed connected datasets in the previous post; now, let’s explore why models and datasets are related. In the traditional CAD-centric PLM domain, most people will associate the word model with a CAD model, to be more precise, the 3D CAD Model. However, there are many other types of models used related to product development, delivery and operations.

A model can be a:

Physical Model

  • A smaller-scale object for the first analysis, e.g., a city or building model, an airplane model

Conceptual Model

  • A conceptual model describes the entities and their relations, e.g., a Process Flow Diagram (PFD)
  • A mathematical model describes a system concept using a mathematical language, e.g., weather or climate models. Modelica and MATLAB would fall in this category
  • A CGI (Computer Generated Imagery) or 3D CAD model is probably the most associated model in the mind of traditional PLM practitioners
  • Functional and Logical Models describing the services and components of a system are crucial in an MBSE

Operational Model

  • A model providing performance analysis based on (real-time) data coming from selected data sources. It could be an operational business model, an asset performance model; even my Garmin’s training performance model is such an operating model.

The list of models above is neither exhaustive nor academically defined. Moreover, some model definitions might overlap, e.g., where would we classify software models or manufacturing models?

All models are a best-so-far approach to describing reality. Based on more accurate data from observations or measurements, the model comes closer to what happens in reality.

A model and its data

Never blame the model when there is a difference between what the model predicts and the observed reality. It is still a model.  That’s why we need feedback loops from the actual physical world to the virtual world to fine-tune the model.

Part of what we call Artificial Intelligence is nothing more than applying algorithms to a model. The more accurate data available, the more “intelligent” the artificial intelligence solution will be.

By using data analysis complementary to the model, the model may get better and better through self-learning. Like our human brain, it starts with understanding the world (our model) and collecting experiences (improving our model).

There are two points I would like to highlight for this paragraph:

  • A model is never 100 % the same as reality – so don’t worry about deviations. There will always be a difference between virtual predicted and physical measured – most of the time because reality has much more influencing parameters.
  • The more qualified data we use in the model, the closer we get to reality – so focus on accurate (and the right) data for your model. As it is most of the time impossible to fully model a system, focus on the most significant data sources.
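To make the feedback loop from the physical to the virtual world a bit more tangible, here is a toy sketch (my own illustration, not any vendor's digital-twin technology) where a simple model is refitted every time new measurements arrive, bringing the virtual prediction closer to observed reality.

```python
import numpy as np

# Toy example: a virtual model predicting an output from one input parameter.
# Each feedback cycle adds new physical measurements and refits the model.

measured_inputs = [1.0, 2.0, 3.0]    # observations from the physical world
measured_outputs = [2.1, 3.9, 6.2]   # e.g., sensor readings

def refit_model(inputs, outputs):
    """Fit a simple linear model (slope, intercept) to all measurements so far."""
    slope, intercept = np.polyfit(inputs, outputs, deg=1)
    return slope, intercept

slope, intercept = refit_model(measured_inputs, measured_outputs)
print(f"prediction for input 4.0: {slope * 4.0 + intercept:.2f}")

# Feedback loop: a new measurement arrives, and the model is tuned again.
measured_inputs.append(4.0)
measured_outputs.append(8.3)
slope, intercept = refit_model(measured_inputs, measured_outputs)
print(f"updated prediction for input 5.0: {slope * 5.0 + intercept:.2f}")
```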

The ultimate goal: THE DIGITAL TWIN

The discussion related to data-driven and the usage of models might feel abstract and complex (and that’s the case). However, the term “digital twin” is well known and even used in board rooms.

The great benefits of a digital twin for business operations and for sustainability are promoted by many software vendors and consultancy firms.

My statement and reason for this series of blog posts: Digital Twins do not run on documents, you need to have a data-driven, model-based infrastructure to efficiently benefit from digital twin concepts.

Unfortunately, a reliable and sustainable implementation of a digital twin requires more than software – it is a learning journey to connect the right data to the right model. It is a puzzle every company has to solve, as there is no 100 percent blueprint at this time.

Are Low Code platforms the answer?

I mentioned the importance of accurate data. Companies have different systems or even platforms managing enterprise data. The digital dream is that by combining datasets from different systems and platforms, we can provide any user with the needed information in real time. My statement from my introductory post was:

I don’t believe in Low-Code platforms that provide ad-hoc solutions on demand. The ultimate result after several years might be again a new type of spaghetti. On the other hand, standardized interfaces and protocols will probably deliver higher, long-term benefits. Remember: Low code: A promising trend or a Pandora’s Box?

Let’s look into some of the low-code platform messages mentioned by Low-Code advocates:

You will have an increasingly hard time finding developers to keep up with global app development demands (reason #1 for PEGA)

This statement reminded me of the early days of SmarTeam implementations. With a Data Model Wizard, a Form Designer, and a Visual Basic COM API, you could create any kind of data management application with SmarTeam, using its built-in behaviors for document lifecycle management, item lifecycle management, and CAD integrations, combined with easy customizations.

The sky was the limit to satisfy end users.  No need for an experienced partner or to be a skilled programmer (this was 2003+). SmarTeam was a low-code platform the marketing department would say now.

A lot of my activities between 2003 and 2010 were related to fixing the problems caused by this flexibility, making sense (again) of customizations. I wrote about this in a 2015 post, The importance of a (PLM) data model, sharing the experiences of “fixing” issues created by flexibility.

Think first

The challenge is that an enthusiastic team creates a (low code) solution rapidly. Immediate success is celebrated by the people involved. However, the future impact of this solution is often forgotten – we did the job,  right?

Documentation and a broader visibility are often lacking when implementing such a solution.

For example, suppose your product data is going to be consumed by another app. In that case, you need to make sure that the information you consume is accurate. Perhaps the information was valid when you created the app.

However, if your friendly co-worker has moved on to another job and someone with different data standards becomes responsible for the data you consume, the reliability might fail. So how do you guarantee its quality?

Easy tools have often led to spaghetti, starting from Clipper (the old days), Visual Basic (the less old days) to highly customizable systems (like Aras is promoting) and future low-code platforms (and Aras is there again).

However, the strength of being highly flexible is also a weakness if not managed and understood correctly. In particular, in a digital enterprise architecture, you need skilled people who guarantee a reliable anchoring of the solution.

The HBR article When Low-Code/No-Code Development Works — and When It Doesn’t mentions the same point:

There are great benefits from LC/NC software development, but management challenges as well. Broad use of these tools institutionalizes the “shadow IT” phenomenon, which has bedeviled IT organizations for decades — and could make the problem much worse if not appropriately governed. Citizen developers tend to create applications that don’t work or scale well, and then they try to turn them over to IT. Or the person may leave the company, and no one knows how to change or support the system they developed.

The fundamental difference: from coordinated to connected

For the moment, I remain skeptical about the low-code hype, because I have seen this kind of hype before. The most crucial point companies need to understand is that the coordinated world and the connected world are incompatible.

Using new tools based on old processes and existing data is not a digital transformation. Instead, a focus on value streams and their needed (connected) data should lead to the design of a modern digital enterprise, not the optimization and connectivity between organizational siloes.
Before buying a tool (a medicine) to reduce the current pains, imagine your future ways of working, discover what is possible with your existing infrastructure and identify the gaps.

Next, you need to analyze if these gaps are so significant that it requires a technology change. Probably it does, as historically, systems were not designed to share data horizontally in an organization.

In this context, have a look at Lionel Grealou’s article for Engineering.com:
Data Readiness in the new age of digital collaboration.

Conclusion

We discussed the crucial relation between models and data. Models have only value if they acquire the right and accurate data (exercise 1).

Next, even the simplest development platforms, like low-code platforms, require brains and a long-term strategy (exercise 2) – nothing is simple at this moment in transformational times.  

The next and final post in this series will focus on configuration management – a new approach is needed. I don’t have the answers, but I will share some thoughts.

A recommended event with an exciting agenda and a good place to validate and share your thoughts.

I will be there and look forward to meeting you at this conference (unfortunately still virtually).

This week I attended the SCAF conference in Jonkoping. SCAF is an abbreviation of the Swedish CATIA User Group. First of all, I was happy to be there as it was a “physical” conference, having the opportunity to discuss topics with the attendees outside the presentation time slot.

It is crucial for me as I have no technical message. Instead, I am trying to make sense of the future through dialogues. What is sure is that the future will be based on new digital concepts, completely different from the traditional approach that we currently practice.

My presentation, which you can find here on SlideShare, was again zooming in on the difference between a coordinated approach (current) and a connected approach (the future).

The presentation explains the concepts of datasets, which I discussed in my previous blog post. Now, I focussed on how this concept can be discovered in the Dassault Systemes 3DExperience platform, combined with the must-go path for all companies to more systems thinking and sustainable products.

It was interesting to learn that the concept of connected datasets, like the spider’s web in the image, reflected the future for many of the attendees.

One of the demos during the conference illustrated that it is no longer about managing the product lifecycle through structures (EBOM/MBOM/SBOM).

Still, it is based on a collection of connected datasets – the path in the spider’s web.

It was interesting to talk with the present companies about their roadmap. How to become a digital enterprise is strongly influenced by their legacy culture and ways of working. Where to start to be connected is the main challenge for all.

A final positive remark: SCAF had renamed itself to SCAF (3DX), showing that even CATIA practices can no longer be considered a niche – the future of business is to be connected.

Now back to the thread that I am following on the series The road to model-based. Perhaps I should change the title to “The road to connected datasets, using models”. The statement for this week to discuss is:

Data-driven means that you need to have an enterprise architecture, data governance and a master data management (MDM) approach. So far, the traditional PLM vendors have not been active in the MDM domain as they believe their proprietary data model is leading. Read also this interesting McKinsey article: How enterprise architects need to evolve to survive in a digital world

Reliable data

If you have been following my story related to the PLM transition from a coordinated to a connected infrastructure, you might have seen the image below:

The challenge of a connected enterprise is that you want to connect different datasets, defined in various platforms, to support any type of context. We called this a digital thread or perhaps even better framed a digital web.

This is new for most organizations because each discipline has been working most of the time in its own silo. They are producing readable information in neutral files – pdf drawings/documents. In cases where a discipline needs to deliver datasets, like in a PDM-ERP integration, we see IT-energy levels rising as integrations are an IT thing, right?

Too much focus on IT

In particular, SAP has always played the IT card (and is still playing it through their Siemens partnership). Historically, SAP claimed that all parts/items should be in their system. Thus, there was no need for a PDM interface, neglecting that the interface moment was now shifted to the designer in CAD. And by using the name Material for what is considered a Part in the engineering world, they illustrated their lack of understanding of the actual engineering world.

There is more to “blame” SAP for when it comes to the PLM domain, or you could state that PLM vendors did not yet understand what enterprise data means. Historically, ERP systems were the first enterprise systems introduced in a company; they have been leading in a transactional “digital” world. The world of product development has never been a transactional process.

SAP introduced the Master Data Management for their customers to manage data in heterogeneous environments. As you can imagine, the focus of SAP MDM was more on the transactional side of the product (also PIM) than on the engineering characteristics of a product.

I have no problem that each vendor wants to see their solution as the center of the world. This is expected behavior. However, when it comes to a single system approach, there is a considerable danger of vendor lock-in, a lack of freedom to optimize your business.

In a modern digital enterprise (to be), the business processes and value streams should be driving the requirements for which systems to use. I was tempted to write “not the IT capabilities”; however, that would be a mistake. We need systems or platforms that are open and able to connect to other systems or platforms. The technology should be there, and more and more, we realize the future is based on connectivity between cloud solutions.

In one of my first posts (part 2), I referred to five potential platforms for a connected enterprise.  Each platform will have its own data model based on its legacy design, allowing it to service its core users in an optimized environment.

When it comes to interactions between two or more platforms, for example, between PLM and ERP, between PLM and IoT, but also between IoT and ERP or IoT and CRM, these interactions should first be based on identified business processes and value streams.

The need for Master Data Management

Defining horizontal business processes and value streams independent of the existing IT systems is the biggest challenge in many enterprises. Historically, we have been thinking around a coordinated way of working, meaning people shifting pieces of information between systems – either as files or through interfaces.

In the digital enterprise, the flow should be leading based on the stakeholders involved. Once people agree on the ideal flow, the implementation process can start.

Which systems are involved, and where do we need a connection between the two systems? Is the relationship bidirectional, or is it a push?

The interfaces need to be data-driven in a digital enterprise; we do not want human interference here, slowing down or modifying the flow. This is the moment Master Data Management and Data Governance come in.

When exchanging data, we need to trust the data in its context, and we should be able to use the data in another context. But, unfortunately, trust is hard to gain.

I can share an example of trust from implementing a PDM system linked to a Microsoft-friendly ERP system. Both systems were able to use Excel as an interface medium – the Excel columns took care of the data mapping between these two systems.

In the first year, engineers produced the Excel with BOM information and manufacturing engineering imported the Excel into their ERP system. After a year, the manufacturing engineers proposed to automatically upload the Excel as they discovered the exchange process did not need their attention anymore – they learned to trust the data.
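As a side note, once such an exchange is automated, the code is usually as simple as the sketch below: the Excel column names are the data mapping between the two systems. The column names and the ERP upload function are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical column mapping between the PDM Excel export and the ERP import.
COLUMN_MAP = {
    "Part Number": "material_number",
    "Quantity": "quantity",
    "Description": "material_description",
}

def read_bom(excel_path: str) -> list:
    """Read the engineering BOM from Excel and rename columns to ERP terminology."""
    bom = pd.read_excel(excel_path)
    bom = bom.rename(columns=COLUMN_MAP)[list(COLUMN_MAP.values())]
    return bom.to_dict(orient="records")

def upload_to_erp(records: list) -> None:
    """Placeholder for whatever import mechanism the ERP system offers."""
    for record in records:
        print("importing", record)

upload_to_erp(read_bom("ebom_export.xlsx"))
```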

How often have you seen similar cases in your company where we insist on a readable exchange format?

When you trust the process(es), you can trust the data. In a digital enterprise, you must assume that specific datasets are used or consumed in different systems. Therefore, a single data mapping as in the Excel example won’t be sufficient.

Master Data Management and standards?

Some traditional standards, like the ISO 15926 or ISO 10303, have been designed to exchange process and engineering data – they are domain-specific. Therefore, they could simplify your master data management approach if your digitalization efforts are in that domain.

To connect other types of data, it is hard to find a global standard that also encompasses different kinds of data or consumers. Think about the GS1 standard, which has more of a focus on the consumer-side of data management.  When PLM meets PIM, this standard and Master Data Management will be relevant.

Therefore I want to point to these two articles in this context:

How enterprise architects need to evolve to survive in a digital world focusing on the transition of a coordinated enterprise towards a connected enterprise from the IT point of view.  And a recent LinkedIn post, Web Ontology Language as a common standard language for Engineering Networks? by Matthias Ahrens exploring the concepts I have been discussing in this post.

To me, it seems that standards are helpful when working in a coordinated environment. However, in a connected environment, we have to rely on master data management and data governance processes, potentially based on a clever IT infrastructure: graph databases to connect anything meaningful, and possibly artificial intelligence to provide quality monitoring.

Conclusion

Standards have great value in exchange processes, which happen in a coordinated business environment. To benefit from a connected business environment, we need an open and flexible IT infrastructure supported by algorithms (AI) to guarantee quality. Before installing the IT infrastructure, we should first have defined the value streams it should support.

What are your experiences with this transition?

In my last post in this series, The road to model-based and connected PLM, I mentioned that perhaps it is time to talk about SLM instead of PLM when discussing popular TLA’s for our domain of expertise. There were not so many encouraging statements for SLM so far.

SLM could mean, for me, Solution Lifecycle Management, considering that a company’s offering is more and more a mix of products and services. Or SLM could mean System Lifecycle Management, in that case pushing the idea that more and more products are interacting with the outside world and therefore could be considered systems. Products are (almost) dead.

In addition, I mentioned that the typical product lifecycle and related configuration management concepts need to change, as in the SLM domain there are hardware and software with different lifecycles and change processes.

It is a topic I want to explore further. I am curious to learn more from Martijn Dullaart, who will be lecturing at the PLM Roadmap and PDT 2021 Fall conference in November. I hope my expectations are not too high, knowing it is a topic of interest for Martijn. Feel free to join this discussion.

In this post, it is time to follow up on my third statement related to what data-driven implies:

Data-driven means that we need to manage data in a much more granular manner. We have to look differently at data ownership. It becomes more about data accountability per role, as the data can be used and consumed throughout the product lifecycle.

On this topic, I have a list of points to consider; let’s go through them.

The dataset

In this post, I will often use the term dataset (you are also allowed to write “data set”, I understood).

A dataset means a predefined number of attributes and values that belong logically to each other. Datasets should be defined based on the purpose and, if possible, designated for a single goal. In this way, they can be stored in a database.

Combined with other datasets, a combination can result in relevant business information. Note a dataset is not only transactional data; a dataset could also describe geometry.
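As a minimal illustration of this definition (the attribute names are an assumption on my side, not a standard), two datasets related to the same part could look like this:

```python
from dataclasses import dataclass

# A dataset: a predefined set of attributes and values that logically belong
# together and serve a single purpose. Attribute names are illustrative only.

@dataclass
class PartIdentification:
    """Dataset holding only the core identification of a part."""
    part_id: str
    name: str
    part_type: str
    status: str

@dataclass
class PartSpecification:
    """A separate dataset: the engineering specification attributes."""
    part_id: str          # the reference connecting it to the identification dataset
    material: str
    mass_kg: float

core = PartIdentification("P-001", "Valve housing", "Valve", "Released")
spec = PartSpecification("P-001", "Stainless steel 316", 0.45)
print(core)
print(spec)
```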

Identify the dataset

In the document-based world, a lot of information could be stored in a single file. In a data-driven world, we should define a dataset that contains a specific piece of information, logically belonging together. If we are more precise, a part would have various related datasets that make up the definition of a part. These definitions could be:

  • Core identification attributes like ID, Name, Type and Status
  • The Type could define a set of linked information. For example, a valve would have different characteristics than a resistor. Through classification, we can link datasets to the core definition of a part.
  • The part can have engineering-specific data (CAD and metadata), manufacturing-specific data, supplier-specific data, and service-specific data. Each of these datasets needs to be defined as a unique element in a data-driven environment
  • CAD is a particular case, as most current CAD systems don’t treat geometry as a single dataset. In a file-based world, many other datasets are stored in the file (e.g., engineering or manufacturing details). In a data-driven environment, we want the CAD definition to be treated like a dataset. Dassault Systèmes with their CATIA V6 and 3DEXPERIENCE platform, or PTC with OnShape, are examples of this approach. Having CAD as separate datasets makes sharing and collaboration so much easier, as we can see from these solutions. The concept of CAD stored in a database is not new, and this approach has been used in various disciplines. Mechanical CAD was always a challenge.

Thanks to Moore’s Law (approximately every two years, processor power doubles) and higher network connection speeds, it starts to make sense to have mechanical CAD also stored in a database instead of a file.

An important point to consider is a kind of standardization of datasets. In theory, there should be a kind of minimum agreed collection of datasets. Industry standards provide these collections in their dictionary. Whenever you optimize your data model for a connected enterprise, make sure you look first into the standards that apply to your industry.

They might not be perfect or complete, but inventing your own new standard is a guarantee for legacy issues in the future. This remark is also valid for the software vendors in this domain. A proprietary data model might give you a competitive advantage.

Still, in the long term, there is always the need to connect with outside stakeholders.

 

Identify the RACI

To ensure a dataset is complete and well maintained, the concept of RACI could be used. RACI is the abbreviation for Responsible, Accountable, Consulted and Informed, and a simplification of the RASCI model – see also a responsibility assignment matrix.

In a data-driven environment, there is no data ownership anymore like you have for documents. The main reason that data ownership can no longer be used is that datasets can be consumed by anyone in the ecosystem – no longer only by your department, or by the manufacturing or service department.

Data sets in a data-driven environment bring value when connected with other datasets in applications or dashboards.

A dataset describing the specification attributes of a part could be used in a spare part app and a service app. Of course, the dataset will be used in a different context – still, we need to ensure we can trust the data.

Therefore, each identified dataset should be governed by a kind of RACI concept. The RACI concept is a way to break the siloes in an organization.
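A minimal sketch of what such governance information per dataset could look like (the role names and dataset names are purely illustrative):

```python
# Illustrative RACI assignment per dataset - roles, not individuals.
RACI_PER_DATASET = {
    "part.identification": {
        "responsible": "Product Data Steward",
        "accountable": "Engineering Manager",
        "consulted": ["Manufacturing Engineer"],
        "informed": ["Service Planner", "Spare Parts App"],
    },
    "part.service_specification": {
        "responsible": "Service Engineer",
        "accountable": "Service Manager",
        "consulted": ["Design Engineer"],
        "informed": ["Customer Portal"],
    },
}

def who_is_accountable(dataset_name: str) -> str:
    """Return the single role that is accountable for a given dataset."""
    return RACI_PER_DATASET[dataset_name]["accountable"]

print(who_is_accountable("part.identification"))
```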

Identify Inside / outside

There is a lot of fear that a connected, data-driven environment will expose Intellectual Property (IP). It came up in recent discussions. If you like storytelling and technology, read my old SmarTeam colleague Alex Bruskin’s post: The Bilbo Baggins Threat to PLM Assets. Alex has written some “poetry” with a deep technical message behind it.

It is true that if your data set is too big, you have the challenge of exposing IP when connecting this dataset with others. Therefore, when building a data model, you should make it possible to have datasets pure for internal usage and datasets for sharing.

When you use the concept of RACI, the difference should be defined by the I (Informed) – is it PLM data or PIM data, for example?

Tracking relations

Suppose we follow up on the concept of datasets. In that case, it becomes clear that the relations between datasets are as crucial as the datasets themselves. In traditional PLM applications, these relations are often predefined as part of the core data model.

For example, the EBOM parts have relationships between themselves and specification data – see image.

The MBOM parts have links with the supplier data or the manufacturing process.

The prepared relations in a PLM system allow people to implement the system relatively quickly to map their approaches to this taxonomy.

However, traditional PLM systems are based on a document-based (or file-based) taxonomy combined with related metadata. In a model-based and connected environment, we have to get rid of the document-based type of data.

Therefore, the datasets will be more granular, and there is a need to manage exponentially more relations between datasets.

This is why you see the graph database coming up as a needed infrastructure for modern connected applications. If you haven’t heard of a graph database yet, you are probably far from technology hypes. To understand the principles of a graph database you can read this article from neo4j:  Graph Databases for Beginners: Why graph technology is the future
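To give a feel for why a graph fits this problem, here is a small sketch using the networkx library (a Python graph library, not a graph database) with made-up datasets and relation types; a real implementation would store these typed relations in a graph database.

```python
import networkx as nx

# Made-up example of connected datasets with typed relations between them.
web = nx.DiGraph()
web.add_edge("EBOM part P-001", "Specification S-001", relation="is specified by")
web.add_edge("MBOM part M-001", "EBOM part P-001", relation="realizes")
web.add_edge("MBOM part M-001", "Supplier dataset SUP-42", relation="is sourced from")
web.add_edge("Service record SR-9", "MBOM part M-001", relation="refers to")

# Traversing the relations answers questions like:
# "Which datasets are (indirectly) connected to this specification?"
print(sorted(nx.ancestors(web, "Specification S-001")))

for source, target, data in web.edges(data=True):
    print(f"{source} --{data['relation']}--> {target}")
```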

As you can see from the 2020 Gartner Hype Cycle for Artificial Intelligence this technology is at the top of the hype and conceptually the way to manage a connected enterprise. The discussion in this post also demonstrates that besides technology there is a lot of additional conceptual thinking needed before it can be implemented.

Although software vendors might handle the relations and datasets within their platform, the ultimate challenge will be sharing datasets with other platforms to get a connected ecosystem.

For example, the digital web picture shown above and introduced by Marc Halpern at the 2018 PDT conference shows this concept. Recently CIMdata discussed this topic in a similar manner: The Digital Thread is Really a Web, with the Engineering Bill of Materials at Its Center
(Note I am not sure if CIMdata has published a recording of this webinar – if so I will update the link)

Anyway, these are signs that we started to find the right visuals to imagine new concepts. The traditional digital thread pictures, like the one below, are, for me, impressions of the past as they are too rigid and focusing on some particular value streams.

From a distance, it looks like a connected enterprise should work like our brain. We store information on different abstraction levels. We keep an incredible number of relations between information elements. As the brain is a biological organ, connections degrade or get lost. Or the opposite: other relations become so strong that we cannot change them anymore (“I know I am always right”).

Interestingly, the brain does not use the “single source of truth”-concept – there can be various “truths” inside a brain. This makes us human beings with all the good and the harmful effects of that.

As long as we realize there is no single source of truth.

In business and our technological world, we sometimes need the undisputed truth. Blockchain could be the basis for securing the right connections between datasets to guarantee the result is valid. I am curious if blockchain can scale to complex connected situations, although Moore’s Law might ultimately help us here too (if still valid).

The topic is not new – in 2014 I wrote a post with the title PLM is doomed unless …., where I introduced the topic of owning and sharing in the context of the human brain. In the post, I refer to the book On Intelligence by Jeff Hawkins, who tries to analyze what human intelligence is and how we could apply it to our technology concepts. Still a fascinating book, worth reading if you have the time and opportunity.

 

Conclusion

A data-driven approach requires a more granular definition of information, leading to the concepts of datasets and managing relations between datasets. This is a fundamental difference compared to the past, where we operated systems filled with information. Now we are heading towards connected platforms that provide a filtered set of real-time data to act upon.

I am curious to learn more about how people have solved the connected challenges and in what kind of granularity. Let us know!

 

 

In my last post, I zoomed in on a preferred technical architecture for the future digital enterprise, drawing the conclusion that it is mission impossible to aim for a single connected environment. Instead, information will be stored in different platforms, both domain-oriented (PLM, ERP, CRM, MES, IoT) and value-chain-oriented (OEM, Supplier, Marketplace, Supply Chain hub).

In part 3, I posted seven statements that I will be discussing in this series. In this post, I will zoom in on point 2:

Data-driven does not mean we do not need any documents anymore. Read electronic files for documents. Likely, document sets will still be the interface to non-connected entities, suppliers, and regulatory bodies. These document sets can be considered a configuration baseline.

 

System of Record and System of Engagement

In the image below, a slide from 2016,  I show a simplified view when discussing the difference between the current, coordinated approach and the future, connected approach.  This picture might create the wrong impression that there are two different worlds – either you are document-driven, or you are data-driven.

In the follow-up of this presentation, I explained that companies need both environments in the future. The most efficient way of working for operations will be infrastructure on the right side, the platform-based approach using connected information.

For traceability and disconnected information exchanges, the left side will be there for many years to come. Systems of Record are needed for data exchange with disconnected suppliers, disconnected regulatory bodies and probably crucial for configuration management.

The System of Record will probably remain as a capability in every platform or cross-section of platform information. The Systems of Engagement will be the configured real-time environment for anyone involved in active company processes – not only ERP or MES, but all execution.

Introducing SysML and SML

This summer, I received a copy of Martin Eigner’s System Lifecycle Management book, which I am currently reading in my spare moments. I always enjoyed Martin’s presentations. In many ways, we share similar ideas. Martin, from his profession, spent more time on the academic aspects of the product and system lifecycle than I did. On the other hand, I have always been in the field, observing and trying to make sense of what I see and learn in a coherent approach. I am halfway through the book now, and for sure, I will come back to it when I have finished.

A first impression: A great and interesting book for all. Martin and I share the same history of data management. Read all about this in his second chapter: Forty Years of Product Data Management

From PDM via PLM to SysLM is a chapter that everyone should read if you haven’t lived it yourself. It helps you to understand the past (learning from the past to understand the future). When I finish this series about the model-based and connected approach for products and systems, Martin’s book will be highly complementary, given the content he describes.

There is one point on which I am looking forward to feedback from the readers of this blog.

Should we, in our everyday language, better differentiate between Product Lifecycle Management (PLM) and System Lifecycle Management(SysLM)?

In some customer situations, I talk on purpose about System Lifecycle Management to create the awareness that the company’s offering is more than an electro/mechanical product. Or ultimately, in a more circular economy, would we use the term Solution Lifecycle Management as not only hardware and software might be part of the value proposition?

Martin uses consistently the abbreviation SysLM, where I would prefer the TLA SLM. The problem we both have is that both abbreviations are not unique or explicit enough. SysLM creates confusion with SysML (for dyslectic people or fast readers). SLM already has so many less valuable meanings: Simulation Lifecycle Management, Service Lifecycle Management or Software Lifecycle Management.

For the moment, I will use the abbreviation SLM, leaving it in the middle if it is System Lifecycle Management or Solution Lifecycle Management.

 

How to implement both approaches?

In the long term, I predict that more than 80 percent of the activities related to SLM will take place in a data-driven, model-based environment due to the changing content of the solutions offered by companies.

A solution will be based on hardware, the solid part of the solution, for which we could apply a BOM-centric approach. We can see the BOM-centric approach in most current PLM implementations. It is the logical result of optimizing the product lifecycle management processes in a coordinated manner.

However, the most dynamic part of the solution will be covered by software and services. Changing software or services related to a solution has completely different dynamics than a hardware product.

Software and services implementations are associated with a data-driven, model-based approach.

The management of solutions, therefore, needs to be done in a connected manner. Using the BOM-centric approach to manage software and services would create a Kafkaesque overhead.

Depending on your company’s value proposition to the market, the challenge will be to find the right balance. For example, when you keep on selling disconnected hardware, there is probably no need to change your internal PLM processes that much.

However, when you are moving to a connected business model providing solutions (connected systems / Outcome-based services), you need to introduce new ways of working with a different go-to-market mindset. No longer linear, but iterative.

A McKinsey concept, I have been promoting several times, illustrates a potential path – note the article was not written with a PLM mindset but in a business mindset.

What about Configuration Management?

The different datasets defining a solution also challenge traditional configuration management processes. Configuration Management (CM) is well established in the aerospace & defense industry. In theory, proper configuration management should be the target of every industry to guarantee an appropriate performance, reduced risk and cost of fixing issues.

The challenge, however, is that configuration management processes are not designed to manage systems or solutions, where dynamic updates can be applied whether or not done by the customer.

This is a topic to solve for the modern Connected Car (system) or Connected Car Sharing (solution)

For that reason, I am inquisitive to learn more from Martijn Dullaart’s presentation at the upcoming PLM Roadmap/PDT conference. The title of his session: The next disruption please …

In his abstract for this session, Martijn writes:

From Paper to Digital Files brought many benefits but did not fundamentally impact how Configuration Management was and still is done. The process to go digital was accelerated because of the Covid-19 Pandemic. Forced to work remotely was the disruption that was needed to push everyone to go digital. But a bigger disruption to CM has already arrived. Going model-based will require us to reexamine why we need CM and how to apply it in a model-based environment. Where, from a Configuration Management perspective, a digital file still in many ways behaves like a paper document, a model is something different. What is the deliverable? How do you manage change in models? How do you manage ownership? How should CM adopt MBx, and what requirements to support CM should be considered in the successful implementation of MBx? It’s time to start unraveling these questions in search of answers.

One of the ideas I am currently exploring is that we need a new layer on top of the current configuration management processes extending the validation to software and services. For example, instead of describing every validated configuration, a company might implement the regular configuration management processes for its hardware.

Next, the systems or solutions in the field will report (or validate) their configuration against validation rules. A topic that requires a long discussion and more than this blog post, potentially a full conference.
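To make the idea of validating against rules, instead of enumerating every possible configuration, a bit more concrete, here is a purely hypothetical sketch; the reported configuration and the rules are illustrative assumptions.

```python
# Hypothetical sketch: a fielded system reports its as-operated configuration,
# which is checked against a small set of validation rules rather than a list
# of every approved configuration.

reported_configuration = {
    "hardware": {"controller": "rev C", "sensor_pack": "rev A"},
    "software": {"control_app": "3.2.0", "edge_agent": "1.4.1"},
}

def rule_controller_matches_app(config: dict) -> bool:
    """Controller rev C requires control_app version 3 or later (illustrative rule)."""
    if config["hardware"]["controller"] == "rev C":
        return int(config["software"]["control_app"].split(".")[0]) >= 3
    return True

def rule_edge_agent_present(config: dict) -> bool:
    """Every connected system must run an edge agent (illustrative rule)."""
    return "edge_agent" in config["software"]

VALIDATION_RULES = [rule_controller_matches_app, rule_edge_agent_present]

violations = [rule.__name__ for rule in VALIDATION_RULES if not rule(reported_configuration)]
print("configuration valid" if not violations else f"violations: {violations}")
```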

Therefore, I am looking forward to participating in the CIMdata/PDT Fall conference and picking up the discussions towards a data-driven, model-based future with the attendees. Besides CM, there are several other topics of great interest for the future. Have a look at the agenda here.

 

Conclusion

A data-driven and model-based infrastructure still needs to be combined with a coordinated, document-driven infrastructure. Where the focus will be depends on your company’s value proposition.

If we discuss hardware products, we should think PLM. When you deliver systems, you should perhaps talk SysLM (or SLM). And maybe it is time to define Solution Lifecycle Management as the term for the future.

Please, share your thoughts in the comments.

 

After a short summer break with almost no mentioning of the word PLM, it is time to continue this series of posts exploring the future of “connected” PLM. For those who also started with a cleaned-up memory, here is a short recap:

In part 1, I rushed through more than 60 years of product development, starting from vellum drawings and ending with the current PLM best practice for product development, the item-centric approach.

In part 2, I painted a high-level picture of the future, introducing the concept of digital platforms, which, if connected wisely, could support the digital enterprise in all its aspects. The five platforms I identified are the ERP and CRM platform (the oldest domains).

Next are the MES and PIP platforms (modern domains to support manufacturing and product innovation in more detail) and the IoT platform (needed to support connected products and customers).

In part 3, I explained what is data-driven and how data-driven is closely connected to a model-based approach. Here we abandon documents (electronic files) as active information carriers. Documents will remain, however, as reports, baselines, or information containers. In this post, I ended up with seven topics related to data-driven, which I will discuss in upcoming posts.

Hopefully, by describing these topics – and for sure, there are more related topics – we will better understand the connected future and make decisions to enable the future instead of freezing the past.

 

Topic 1 for this post:

Data-driven does not imply that there needs to be a single environment, a single database that contains all information. As I mentioned in my previous post, it will be about managing connected datasets in a federated manner. It is no longer about owning the data; it is about access to reliable data.

 

Platform or a collection of systems?

One of the first (marketing) hurdles to take is understanding what a data platform is and what is a collection of systems that work together, sold as a platform.

CIMdata published in 2017 an excellent whitepaper positioning the PIP (Product Innovation Platform):  Product Innovation Platforms: Definition, Their Role in the Enterprise, and Their Long-Term Viability. CIMdata’s definition is extensive and covers the full scope of product innovation. Of course, you can find a platform that starts from a more focused process.

For example, look at OpenBOM (focus on BOM collaboration), OnShape (focus on CAD collaboration) or even Microsoft 365 (historical, document-based collaboration).

The idea behind a platform is that it provides basic capabilities connected to all stakeholders, inside and outside your company. In addition, to avoid that these capabilities are limited, a platform should be open and able to connect with other data sources that might be either local or central available.

From these characteristics, it is clear that the underlying infrastructure of a platform must be based on a multitenant SaaS infrastructure, still allowing local data to be connected and shielded for performance or IP reasons.

The picture below describes the business benefits of a Product Innovation Platform as imagined by Accenture in 2014

Link to CIMdata’s 2014 commentary of Digital PLM HERE

Sometimes vendors sell their suite of systems as a platform. This is a marketing trick because when you want to add functionality to your PLM infrastructure, you need to install a new system and create or use interfaces with the existing systems, not really a scalable environment.

In addition, sometimes, the collaboration between systems in such a marketing platform is managed through proprietary exchange (file) formats.

A practice we have seen in the construction industry before cloud connectivity became available. However, a so-called end-to-end solution that works in PowerPoint requires, when implemented in real life, a lot of human intervention.

 

Not a single environment

There has always been the debate:

“Do I use best-in-class tools, supporting the end-user of the software, or do I provide an end-to-end infrastructure with more generic tools on top of that, focusing on ease of collaboration?”

In the system approach, the focus was most of the time on the best-in-class tools where PLM-systems provide the data governance. A typical example is the item-centric approach. It reflects the current working culture, people working in their optimized siloes, exchanging information between disciplines through (neutral) files.

The platform approach makes it possible to deliver the optimized user interface for the end-user through a dedicated app. Assuming the data needed for such an app is accessible from the current platform or through other systems and platforms.

It might be tempting as a platform provider to add all imaginable data elements to their platform infrastructure as much as possible. The challenge with this approach is whether all data should be stored in a central data environment (preferably cloud) or federated.  And what about filtering IP?

In my post PLM and Supply Chain Collaboration, I described the concept of having an intermediate hub (ShareAspace) between enterprises to facilitate real-time data sharing, however carefully filtered which data is shared in the hub.

It may be clear that storing everything in one big platform is not the future. As I described in part 2, in the end, a company might implement a maximum of five connected platforms (CRM, ERP, PIP, IoT and MES). Each of the individual platforms could contain a core data model relevant for its part of the business. This does not imply there might be no other platforms in the future. Platforms focusing on supply chain collaboration, like ShareAspace or OpenBOM, will have a value proposition too. In the end, the long-term future is all about realizing a digital thread of information within the organization.

Will we ever reach a perfectly connected enterprise or society? Probably not. Not because of technology, but because of politics and human behavior. The connected enterprise might be the most efficient architecture, but will it be social, supporting all of humanity? Predicting the future is impossible, as Yuval Harari described in his book 21 Lessons for the 21st Century. Worth reading, still a collection of ideas.

 

Proprietary data model or standards?

So far, when you are a software vendor developing a system, there is no restriction in how you internally manage your data. In the domain of PLM, this meant that every vendor has its own proprietary data model and behavior.

I have learned from my 25+ years of experience with systems that the original design of a product combined with the vendor’s culture defines the future roadmap. So even if a PLM vendor would rewrite all their software to become data-driven, the ways of working, the assumptions will be based on past experiences.

This makes it hard to come to unified data models and methodology valid for our PLM domain. However, large enterprises like Airbus and Boeing and the major Automotive suppliers have always pushed for standards as they will benefit the most from standardization.

The recent PDT conferences were an example of this, mainly the 2020 Fall conference. Several Aerospace & Defense PLM Action groups reported their progress.

You can read my impression of this event in The weekend after PLM Roadmap / PDT 2020 – part 1 and The next weekend after PLM Roadmap PDT 2020 – part 2.

It would be interesting to see a Product Innovation Platform built upon a data model aligned as much as possible with existing standards. It probably won't happen, as a software vendor does not make money from being open and complying with standards. Still, companies should push their software vendors to support standards, as this is the only way to get larger connected ecosystems.

I do not believe in the toolkit approach where every company can build its own data model based on its current needs. I have seen this flexibility with SmarTeam in the early days. However, it became an upgrade risk when new, overlapping capabilities were introduced that did not match the past customizations.

In addition, a flexible toolkit still requires a robust data model design done by experienced people who have learned from their mistakes.

The benefit of using standards is that they contain the learnings from many people involved.

 

Conclusion

I did not enjoy writing this post so much, as my primary PLM focus lies on people and methodology. Still, understanding future technologies is an important point to consider. Therefore, this time, a not-so-exciting post. There is enough to read on the internet related to PLM technology; see some of the recent articles below. Enjoy!

 

Matthias Ahrens shared:  Integrated Product Lifecycle Management (Google translated from German)

Oleg Shilovitsky wrote numerous articles related to technology – in this context: 3 Challenges of Unified Platforms and System Locking and SaaS PLM Acceleration Trends

So far, I have been discussing PLM experiences and best practices that have changed due to the introduction of electronic drawings and affordable 3D CAD systems for the mainstream: from vellum to PDM to item-centric PLM to manage product designs and manufacturing specifications.

Although the technology has improved, the overall processes haven’t changed so much. As a result, disciplines could continue to work in their own comfort zone, most of the time hidden and disconnected from the outside world.

Now, thanks to digitalization, we can connect and share information in real time. Every stakeholder in the company's business can have almost real-time visibility on what is happening (if allowed). We have seen the benefits of platformization, which come from real-time connectivity within an ecosystem.

Apple, Amazon, Uber and Airbnb are the non-manufacturing examples. Companies are trying to replicate these models for other businesses, connecting the concept owner (the OEM?) with design and manufacturing (services), with suppliers and with customers. All connected through information, managed in data elements instead of documents – I call it connected PLM.

Vendors have already shared their PowerPoints, movies, and demos showing how the future would look in an ideal world using their software. The reality, however, is that implementing such solutions requires new business models, a new type of organization and probably new skills.

The last point is vital, as in schools and organizations, we tend to teach what we know from the past as this gives some (fake) feeling of security.

The reality is that most of us will have to go through a learning path, where skills from the past might become obsolete; however, knowledge of the past might be fundamental.

In the upcoming posts, I will share with you what I see, what I deduct from that and what I think would be the next step to learn.

I firmly believe connected PLM requires the usage of various models. Not only the 3D CAD model, as there are so many other models needed to describe and analyze the behavior of a product.

I hope that some of my readers can help us all further on the path of connected PLM (with a model-based approach). This series of posts will be limited by the maximum size per post (around 1500 words) and based on the ideas and contributions coming from you and me.

What is platformization?

In our day-to-day life, we are more and more used to direct interaction between resellers and services providers on one side and consumers on the other side. We have a question, and within 24 hours, there is an answer. We want to purchase something, and potentially the next day the goods are delivered. These are examples of a society where all stakeholders are connected in a data-driven manner.

We don't have to create documents or specialized forms. An app or a digital interface allows us to connect. To enable this type of connectivity, there is a need for an underlying platform that connects all stakeholders. Amazon and Salesforce are examples of commercial activities, Facebook for social activities and, in theory, LinkedIn for professional job activities.

The platform is responsible for direct communication between all stakeholders.

The same applies to businesses. Depending on the products or services they deliver, they could benefit from one or more platforms. The image below shows five potential platforms that I identified in my customer engagements. Of course, they have a PLM focus (in the middle), and the grouping can be made differently.

Five potential business platforms

The 5 potential platforms

The ERP platform
is mainly dedicated to the company's execution processes – Human Resources, Purchasing, Finance, Production scheduling, and potentially many more services – as platforms try to connect as many stakeholders as possible. The ERP platform might contain CRM capabilities, which might be sufficient for several companies. However, when the CRM activities become more advanced, it would be better to connect the ERP platform to a CRM platform. The same logic is valid for a Product Innovation Platform and an ERP platform. Examples of ERP platforms are SAP and Oracle (and they will claim they are more than ERP).

Note: Historically, most companies started with an ERP system, which is not the same as an ERP platform.  A platform is scalable; you can add more apps without having to install a new system. In a platform, all stored data is connected and has a shared data model.
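As a minimal illustration of what "a shared data model" means in practice (all class and attribute names below are invented, not taken from any vendor), two apps on the same platform act on one and the same item record instead of exchanging copies or files:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    """One item record shared by every app on the platform."""
    item_id: str
    revision: str
    lifecycle_state: str = "In Work"
    open_issues: List[str] = field(default_factory=list)

def release_app(item: Item) -> None:
    # A change-management app promotes the shared record in place.
    item.lifecycle_state = "Released"

def quality_app(item: Item, issue: str) -> None:
    # A quality app annotates the very same record - no file exchange needed.
    item.open_issues.append(issue)

bracket = Item(item_id="P-100234", revision="B")
release_app(bracket)
quality_app(bracket, "Surface finish deviation on first batch")
print(bracket)
```

Adding another app on a platform therefore means adding behavior on top of the same connected data, not installing another database to synchronize.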

The CRM platform

a platform that mainly focuses on customer-related activities; as you can see from the diagram, there is an overlap with capabilities of the other platforms. So again, depending on your core business and products, you might use these capabilities or connect to other platforms. Examples of CRM platforms are Salesforce and Pega, both providing a platform to further extend capabilities related to core CRM.

The MES platform
In the past, we had PDM and ERP and what happened in detail on the shop floor was a black box for these systems. MES platforms have become more and more important as companies need to trace and guide individual production orders in a data-driven manner. Manufacturing Execution Systems (and platforms) have their own data model. However, they require input from other platforms and will provide specific information to other platforms.

For example, if we want to know the serial number of a product and the exact production details of this product (used parts, quality status), we would use an MES platform. Examples of MES platforms (from non-PLM/ERP-related vendors) are Parsec and Critical Manufacturing.

The IoT platform

these platforms are new and are used to monitor and manage connected products. For example, if you want to trace the individual behavior of a product or a process, you need an IoT platform. The IoT platform provides the product user with performance insights and alerts.

However, it also provides the product manufacturer with the same insights for all their products. This allows the manufacturer to offer predictive maintenance or optimization services based on the experience of a large number of similar products. Examples of IoT platforms (from non-PLM/ERP-related vendors) are Hitachi and Microsoft.
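To illustrate the idea of learning from a large number of similar products, here is a minimal sketch that flags individual units whose temperature readings deviate strongly from the rest of the fleet; the telemetry values and the alert threshold are invented for the example.

```python
from statistics import median

# Hypothetical last-hour average temperature (degrees C) per installed product.
fleet_telemetry = {
    "SN-0001": 61.2, "SN-0002": 59.8, "SN-0003": 60.5,
    "SN-0004": 74.9,  # running hot
    "SN-0005": 60.1,
}

fleet_median = median(fleet_telemetry.values())
ALERT_MARGIN = 10.0  # degrees above the fleet median - an invented threshold

# Units far above the fleet median become candidates for predictive maintenance.
alerts = [sn for sn, temp in fleet_telemetry.items() if temp > fleet_median + ALERT_MARGIN]
print(alerts)  # ['SN-0004']
```

The individual owner only sees the alert for their unit; the manufacturer sees the pattern across the whole fleet.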

The Product Innovation Platform (PIP)

All the above platforms would not have a reason to exist if there were no environment where products are invented, developed, and managed. The Product Innovation Platform (PIP) – as described by CIMdata – is the place where Intellectual Property (IP) is created, where companies decide on their portfolio, and more.

The PIP contains the traditional PLM domain. It is also a logical place to manage product quality and technical portfolio decisions, like what kind of product platforms and modules a company will develop. Like all previous platforms, the PIP cannot exist without the other platforms and requires connectivity with them where applicable.

Look below at the CIMdata definition of a Product Innovation Platform.

You will see that most of the historical PLM vendors aim to be a PIP (each with their different flavor): Aras, Dassault Systèmes, PTC and Siemens.

Of course, several vendors sell more than one platform or even create the impression that everything is connected as a single platform. Usually, this is not the case, as each platform has its specific data model and combining them in a single platform would hurt the overall performance.

Therefore, the interaction between these platforms will be based on standardized interfaces or ad-hoc connections.

Standard interfaces or ad-hoc connections?

Suppose your role and information needs can be satisfied within a single platform. In that case, most likely, the platform will provide you with the right environment to see and manipulate the information.

However, it might be different if your role requires access to information from other platforms. For example, it could be as simple as an engineer analyzing a product change who needs to know the actual stock of materials to decide how and when to implement a change.

This would be a PIP/ERP platform collaboration scenario.

Or even more complex, it might be a product manager wanting to know how individual products behave in the field to decide on enhancements and new features. This could be a PIP, CRM, IoT and MES collaboration scenario if traceability of serial numbers is needed.

To support such a role, the company might decide to build a custom app or dashboard, combining real-time data from the relevant platforms. This can be done using standard interfaces (preferred) or using APIs, web services, REST services and microservices (for specialists), or the currently fashionable low-code development platforms, which allow users to combine data services from different platforms without being coding experts.
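As a minimal sketch of such a cross-platform dashboard – all URLs and field names below are hypothetical, and a real implementation would rely on the platforms' documented APIs or a standard interface – the idea is simply to pull the data each platform owns and merge it for the decision-maker:

```python
import requests

PIP_URL = "https://pip.example.com/api/changes"  # hypothetical PIP endpoint
ERP_URL = "https://erp.example.com/api/stock"    # hypothetical ERP endpoint

def change_impact_view(change_id: str, part_number: str) -> dict:
    """Merge change data (PIP) with actual stock data (ERP) into one dashboard payload."""
    change = requests.get(f"{PIP_URL}/{change_id}", timeout=10).json()
    stock = requests.get(f"{ERP_URL}/{part_number}", timeout=10).json()
    return {
        "change": change_id,
        "change_status": change.get("status"),
        "affected_part": part_number,
        "on_hand_quantity": stock.get("on_hand"),
        "open_purchase_orders": stock.get("open_po_count"),
    }

if __name__ == "__main__":
    print(change_impact_view("ECO-2021-042", "P-100234"))
```

The value is not in these few lines of code, but in agreeing beforehand which platform is the master of which data element – exactly the architecture discussion of the next paragraph.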

Without going too deep into technology, the topics in this paragraph require an enterprise architecture and vision. It is overly optimistic to think that your existing environment will evolve smoothly into a digital highway for the future by "fixing" demands per user. Your infrastructure is much more likely to end up as congested spaghetti.

In that context, last week I read an interesting post: Low code: A promising trend or Pandora's box. Have a look and decide for yourself.

I am less focused on technology, more on methodology. Therefore, I want to come back to the theme of my series: The road to model-based and connected PLM. For sure, in the ideal world, the platforms I mentioned, or other platforms that run across these five platforms, are cloud-based and open to connect to other data sources. So, this is the infrastructure discussion.

In my upcoming blog post, I will explain why platforms require a model-based approach and, therefore, cause a challenge, particularly in the PLM domain.

It took us more than fifty years to get rid of vellum drawings. It took us more than twenty years to introduce 3D CAD for design and engineering, while still primarily relying on drawings. It will surely take us a generation to switch from document-based engineering to model-based engineering.

Conclusion

In this post, I tried to paint a picture of the ideal future based on connected platforms. Such an environment is needed if we want to be highly efficient in designing, delivering, and maintaining future complex products based on hardware and software. Concepts like Digital Twin and Industry 4.0 require a model-based foundation.

In addition, we will need Digital Twins to reach our future sustainability goals efficiently. So, there is work to do.

Your opinion, Your contribution?


After the first article discussing "The Future of PLM," here is again a post in the category of PLM and complementary practices/domains, on a topic that has been on the radar for a long time: Model-Based Definition. I am glad to catch up with Jennifer Herron, founder of Action Engineering, one of the thought leaders related to Model-Based Definition (MBD) and Model-Based Enterprise (MBE).

In 2016 I spoke with Jennifer after reading her book: “Re-Use Your CAD – The Model-Based CAD Handbook”. At that time, the discussion was initiated through two articles on Engineering.com. Action Engineering introduced OSCAR seven years later as the next step towards learning and understanding the benefits of Model-Based Definition.

Therefore, it is a perfect moment to catch up with Jennifer. Let’s start.

 

Model-Based Definition

Jennifer, first of all, can you bring some clarity to the terminology? When I discussed the various model-based approaches, the first response I got was that model-based is all about 3D models and that a lot of the TLAs are just marketing terminology.
Can you clarify which parts of the model-based enterprise you focus on, and with the proper TLAs?

Model-Based means many things to many different viewpoints and systems of interest. All these perspectives lead us down many rabbit holes, and we are often left confused when first exposed to the big concepts of model-based.

At Action Engineering, we focus on Model-Based Definition (MBD), which uses and re-uses 3D data (CAD models) in design, fabrication, and inspection.

There are other model-based approaches, and the use of the word “model” is always a challenge to define within the proper context.

For MBD, a model is 3D CAD data that comes in both native and neutral formats.

Another model-based approach is Model-Based Systems Engineering (MBSE). The term “model” in this context is a formalized application of modeling to support system requirements, design, analysis, verification and validation activities beginning in the conceptual design phase and continuing throughout development and later lifecycle phases.

<Jos> I will come back on Model-Based Systems Engineering in future posts

Sometimes MBSE is about designing widgets, and often it is about representing the entire system and the business operations. For MBD, we often focus our education on the ASME Y14.47 definition that MBD is an annotated model and associated data elements that define the product without a drawing.
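<Jos> To make the native/neutral distinction a bit more tangible for readers, here is a minimal sketch of my own (not Action Engineering's tooling) that checks whether a part's model-based data package contains both a native CAD file and at least one neutral derivative. The file extensions are assumptions for the example.

```python
from pathlib import Path

NATIVE_EXTENSIONS = {".prt", ".sldprt", ".catpart"}            # e.g. Creo, SOLIDWORKS, CATIA
NEUTRAL_EXTENSIONS = {".stp", ".step", ".jt", ".qif", ".pdf"}  # e.g. STEP AP242, JT, QIF, 3D PDF

def check_mbd_package(folder: str) -> dict:
    """Report whether a part folder delivers native CAD plus a neutral derivative."""
    extensions = {p.suffix.lower() for p in Path(folder).iterdir() if p.is_file()}
    return {
        "has_native": bool(extensions & NATIVE_EXTENSIONS),
        "has_neutral": bool(extensions & NEUTRAL_EXTENSIONS),
    }

# Example: check_mbd_package("P-100234_revB") -> {'has_native': True, 'has_neutral': True}
```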

Model-Based Definition for Everybody?

I believe it took many years till 3D CAD design became a commodity; however, I still see the disconnected 2D drawing used to specify a product or part for manufacturing or suppliers. What are the benefits of model-based definition?
Are there companies that will not benefit from the model-based definition?

There’s no question that the manufacturing industry is addicted to their drawings. There are many reasons why, and yet mostly the problem is lack of awareness of how 3D CAD data can make design, fabrication, and inspection work easier.

For most, the person doing an inspection in the shipping and receiving department doesn’t have exposure to 3D data, and the only thing they have is a tabulated ERP database and maybe a drawing to read. If you plop down a 3D viewable that they can spin and zoom, they may not know how that relates to their job or what you want them to do differently.

Today’s approach of engineering championing MBD alone doesn’t work. To evolve information from the 2D drawing onto the 3D CAD model without engaging the stakeholders (machinists, assembly technicians, and inspectors) never yields a return on investment.

Organizations that succeed in transitioning to MBD are considering and incorporating all departments that touch the drawing today.

Incorporating all departments requires a vision from the management. Can you give some examples of companies that have transitioned to MBD, and what were the benefits they noticed?

I’ll give you an example of a small company with no First Article Inspection (FAI) regulatory requirements and a huge company with very rigorous FAI requirements.

 

Note: click on the images below to enjoy the details.

The small company instituted a system of CAD modeling discipline that allowed them to push 3D viewable information directly to the factory floor. The assembly technicians instantly understood engineering’s requirements faster and better.

The positive MBD messages for these use cases are 3D  navigation, CAD Re-Use, and better control of their revisions on the factory floor.

 

The large company has added inspection requirements directly onto their engineering and created a Bill of Characteristics (BOC) for the suppliers and internal manufacturers. They are removing engineering ambiguity, resulting in direct digital information exchange between engineering, manufacturing, and quality siloes.

These practices have reduced error and reduced time to market.

The positive MBD messages for these use cases are unambiguous requirements capture by Engineering, Quality Traceability, and Model-Based PMI (Product and Manufacturing Information).
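<Jos> For readers unfamiliar with a Bill of Characteristics: conceptually, it is a structured list of the characteristics to be verified, derived from the annotated model. Below is a purely illustrative sketch of how such a list could be exported in a machine-readable form for a supplier; the field names and values are invented.

```python
import json

# Purely illustrative characteristics, as if extracted from the annotated 3D model (PMI).
bill_of_characteristics = [
    {"char_id": "C1", "feature": "Hole dia 6.0", "tolerance": "+0.05/-0.00", "inspection": "CMM"},
    {"char_id": "C2", "feature": "Flatness datum A", "tolerance": "0.02", "inspection": "CMM"},
    {"char_id": "C3", "feature": "Surface roughness Ra", "tolerance": "1.6", "inspection": "Profilometer"},
]

# A machine-readable BOC can flow directly from engineering to supplier quality systems,
# removing the manual re-typing of drawing notes.
print(json.dumps({"part": "P-100234", "revision": "B",
                  "characteristics": bill_of_characteristics}, indent=2))
```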

Model-Based Definition and PLM?

How do you see the relation between Model-Based Definition and PLM? Is a PLM system a complication or aid to implement a Model-Based Definition? And do you see a difference between the old and new PLM Vendors?

Model-Based Definition data is complex and rich in connected information, and we want it to be. With that amount of connected data, a data management system (beyond upload/download of documents) must keep all that data straight.

Depending on the size and function of an organization, a PLM may not be needed. However, a way to manage changes and collaboration amongst those using 3D data is necessary. Sometimes that results in a less sophisticated Product Data Management (PDM) system. Large organizations often require PLM.

There is significant resistance to doing MBD and PLM implementations simultaneously because PLM is always over budget and behind schedule. However, doing just MBD or just PLM without the other doesn’t work either. I think you should be brave and do both at once.

I think we can debate why PLM is always over budget and behind schedule. I hear the same about ERP implementations. Perhaps it has to deal with the fact that enterprise applications have to satisfy many users?

I believe that working with model versions and file versions can get mixed in larger organizations, so there is a need for PDM or PLM. Have you seen successful implementations of both interacting together?

Yes, the only successful MBD implementations are those that already have a matured PDM/PLM (scaled best to the individual business).

 

Model-Based Definition and Digital Transformation

In the previous question, we already touched on the challenge of old and modern PLM. How do you see the introduction of Model-Based Definition addressing the dreams of Industry 4.0, the Digital Twin and other digital concepts?

I just gave a presentation at the ASME Digital Twin Summit discussing the importance of MBD for the Digital Twin. MBD is a foundational element that allows engineering to compare their design requirements to the quality inspection results of digital twin data.

The feedback loop between Engineering and Quality is fraught with labor-intensive efforts in most businesses today.

Leveraging the combination of MBD and Digital Twin allows automation possibilities to speed up and increase the accuracy of the engineering to inspection feedback loop. That capability helps organizations realize the vision of Industry 4.0.

And then there is OSCAR.

I noticed you announced OSCAR. First, I thought OSCAR was a virtual aid for model-based definition, and I liked the launching page HERE. Can you tell us more about what makes OSCAR unique?

One thing that is hard with MBD implementation is there is so much to know. Our MBDers at Action Engineering have been involved with MBD for many years and with many companies. We are embedded in real-life transitions from using drawings to using models.

Suppose you start down the model-based path for digital manufacturing. In that case, there are significant investments in time to learn how to get to the right set of capabilities and the right implementation plan guided by a strategic focus. OSCAR reduces that ramp-up time with educational resources and provides vetted and repeatable methods for an MBD implementation.

OSCAR combines decades of Action Engineering expertise and lessons learned into a multi-media textbook of sorts. To kickstart an individual or an organization’s MBD journey, it includes asynchronous learning, downloadable resources, and CAD examples available in Creo, NX, and SOLIDWORKS formats.

CAD users can access how-to training and downloadable resources such as the latest edition of Re-Use Your CAD (RUYC). OSCAR enables process improvement champions to make their case to start the MBD journey. We add content regularly and post what’s new. Free trials are available to check out the online platform.

Learn more about what OSCAR is here:

Want to learn more?

In this post, I believe we only touched the tip of the iceberg. There is so much to learn and understand. What would you recommend to a reader of this blog who got interested?

 

RUYC (Re-Use Your CAD)  is an excellent place to start, but if you need more audio-visual, and want to see real-life examples of MBD in action, get a Training subscription of OSCAR to get rooted in the vocabulary and benefits of MBD with a Model-Based Enterprise. Watch the videos multiple times! That’s what they are for. We love to work with European companies and would love to support you with a kickstart coaching package to get started.

What I learned

First of all, I learned that Jennifer is a very pragmatic person. Her company (Action Engineering) and her experience are a perfect pivot point for those who want to learn and understand more about Model-Based Definition. In particular, in the US, given her strong involvement in the American Society of Mechanical Engineers (ASME).

I am still curious if European or Asian counterparts exist to introduce and explain the benefits and usage of Model-Based Definition to their customers.  Feel free to comment.

Next, and an important observation too, is the fact that Jennifer also describes the tension between Model-Based Definition and PLM. Current PLM systems might be too rigid to support end-to-end scenarios that take advantage of Model-Based Definition.

I have to agree here. PLM vendors mainly support their own MBD (model-based definition), where the ultimate purpose is to efficiently share all product-related information using various models as the main information carriers.

This is a topic we still have to study and solve in the PLM domain, as I described in my technical highlights from the PLM Road Map & PDT Spring 2021 conference.

There is work to do!

Conclusion

Model-Based Definition is, for me, one of the must-do steps of a company to understand the model-based future. A model-based future sometimes incorporates Model-Based Systems Engineering, a real Digital Thread and one or more Digital Twins (depending on your company’s products).

It is a must-do activity because companies must transform themselves to depend on digital processes and digital continuity of data to remain competitive. Document-driven processes relying on the interpretation of a person are not sustainable.

 


Last week I wrote about the recent PLM Road Map & PDT Spring 2021 conference day 1, focusing mainly on technology. There were also interesting sessions related to exploring future methodologies for a digital enterprise. Now on Day 2, we started with two sessions related to people and methodology, indispensable when discussing PLM topics.

Designing and Keeping Great Teams

This keynote speech from Noshir Contractor, Professor of Behavioral Sciences in the McCormick School of Engineering & Applied Science, intrigued me as the subtitle states: Lessons from Preparing for Mars. What Can PLM Professionals Learn from This?

You might ask yourself: is a PLM implementation as difficult and as complex as a mission to Mars? Hoping to find out, I followed Noshir's presentation with great interest.

Noshir started by mentioning that many disruptive technologies have emerged in recent years, like Teams, Slack, Yammer and many more.

The interesting question he asked in the context of PLM is:

As the domain of PLM is all about trying to optimize effective collaboration, this is a fair question.

Structural Signatures

Noshir shared with us that the most crucial point is not people's individual skills but who they know. Measuring who they work with is more important than who they are.

Based on this statement, Noshir showed some network patterns of different types of networks.

Click on the image to see the enlarged picture.

It is clear from these patterns how organizations communicate internally and/or externally. It would be an interesting exercise to perform in a company and to see if the analysis matches the perceived reality.
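To give an idea of how such a network analysis could start in practice – a minimal sketch with an invented collaboration list, not Noshir's actual model – standard network libraries already reveal who bridges otherwise separate groups:

```python
import networkx as nx

# Invented example: each edge means "worked together on a deliverable".
collaborations = [
    ("Anna", "Bram"), ("Anna", "Carlos"), ("Bram", "Carlos"),
    ("Carlos", "Dewi"), ("Dewi", "Elena"),   # Dewi bridges two clusters
    ("Elena", "Frank"), ("Frank", "Dewi"),
]

G = nx.Graph(collaborations)

# Betweenness centrality highlights the people who connect otherwise separate groups.
print(nx.betweenness_centrality(G))
print("network density:", nx.density(G))
```

Feeding such a graph with real data (project assignments, review participation, issue comments) would be the first step to compare the perceived organization with the actual collaboration patterns.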

Noshir’s research was used by NASA to analyze and predict the right teams for a mission to Mars.

Noshir went further by proposing what PLM can learn from teams that are going into space. And here, I was not sure about the parallel. Is a PLM project comparable to a mission to Mars? I hope not! I have always advocated that a PLM implementation is a journey. Still, I never imagined that it could be a journey into the remote unknown.

Noshir explained that they had built tools based on their scientific model to describe and predict how teams could evolve over time. He believes that society can also benefit from these learnings. Many inventions from the past were driven by innovations coming from space programs.

I believe Noshir’s approach related to team analysis is much more critical for organizations with a mission. How do you build multidisciplinary teams?

The proposed methodology is probably best suited for a holacracy-based organization. Holacracy is an interesting concept for getting employees committed; however, it also demands a type of involvement that not every person can deliver. For me, coming back to PLM as a strategy to enable collaboration, the effectiveness of collaboration depends very much on the organizational culture and the structure created.

DISRUPTION – EXTINCTION or still EVOLUTION?

We talk a lot about disruption because disruption is a painful process that you do not like to happen to yourself or your company. In the context of this conference’s theme, I discussed the awareness that disruptive technologies will be changing the PLM Value equation.

However, disruptive technologies are not alone sufficient. In PLM, we have to deal with legacy data, legacy processes, legacy organization structures, and often legacy people.

A disruption like the switch from mini-computers to PCs (killed DEC) or from Symbian to iOS (killed Nokia) is therefore not likely to happen that fast. Still, there is a need to take benefit from these new disruptive technologies.

My presentation was focusing on describing the path of evolution and focus areas for the PLM community. Doing nothing means extinction; experimenting and learning towards the future will provide an evolutionary way.

Starting from acknowledging that there is an incompatibility between most of the data produced now and the data needed in the future, I explained my theme: From Coordinated to Connected. As a PLM community, we should spend more time together in focus groups and conferences, describing and verifying methodology and best practices.

Nigel Shaw (EuroStep) and Mark Williams (Boeing) hinted in this direction during this conference  (see day 1). Erik Herzog (SAAB Aeronautics) brought this topic to last year’s conference (see day 3). Outside this conference, I have comparable touchpoints with Martijn Dullaert when discussing Configuration Management in the future in relation to PLM.

In addition, this decade will probably be the most disruptive decade humanity has known, due to external forces that push companies to change. Sustainability regulations from governments (the Paris Agreement) and the implementation of circular economy concepts, combined with a positive and high Total Shareholder Return, will push companies to adapt themselves more radically than before.

What is clear is that disruptive technologies and concepts, like Industry 4.0, Digital Thread and Digital Twin, can serve a purpose when implemented efficiently, ensuring the business becomes sustainable.

Due to the lack of end-to-end experience, we need focus groups and conferences to share progress and lessons learned. And we do not need to hear the isolated vendor success stories here as a reference, as often they are siloed again and leading to proprietary environments.

You can see my full presentation on SlideShare: DISRUPTION – EXTINCTION or still EVOLUTION?

 

Building a profitable Digital T(win) business

Beatrice Gasser,  Technical, Innovation, and Sustainable Development Director from the Egis group, gave an exciting presentation related to the vision and implementation of digital twins in the construction industry.

The Egis group serves both as a consultancy firm and as an asset management organization. You can see a wide variety of activities on their website or have a look at their perspectives.

Historically, the construction industry has been lagging behind, with low productivity due to fragmentation, risk aversion and, more recently, the lack of digital talent. In addition, some construction companies make their money from claims instead of having a smooth and profitable business model.

Without innovation in the construction industry, companies working the traditional way would lose market share and investor-focused attention, as we can see from the BCG diagram I discussed in my session.

The digital twin of construction is an ideal concept for the future. It can be built in the design phase to align all stakeholders, validate and integrate solutions, and simulate the building's operational scenarios at almost zero materials cost. Egis estimates that by using a digital twin during construction, the engineering and construction costs of a building can be reduced by between 15 and 25%.

More importantly, the digital twin can also be used to first simulate operations and optimize energy consumption. The connected digital twin of an existing building can serve as a new common data environment for future building stakeholders. This could be the asset owner, service companies, and even the regulatory authorities needing to validate the building’s safety and environmental impact.

Beatrice ended with five principles essential to establishing a digital twin:

I think the construction industry has vast potential to disrupt itself – faster than the traditional manufacturing industries, due to its current need to work in a connected manner.

Next, there is almost no legacy data to deal with for these companies. Every new construction or building is a unique project on its own. The key differentiators will be experience and efficient ways of working.

It is about the belief, the guts and the skilled people that can make it work – all for a more efficient and sustainable future.

 

 

Leveraging PLM and Cloud Technology for Market Success

Stan Przybylinski, Vice President of CIMdata, reported on their global survey related to the cloud, completed in early 2021. Stan also typified Industry 4.0 as a connected vision, with cloud and digital thread as enablers to implement this vision.

The companies interviewed showed a lot of goodwill to make progress – click on the image to see the details. CIMdata is also working with PLM vendors to learn and better describe the areas of benefit. I remain curious about who will come up with a realization and business case that is future-proof. This will define our new PLM Value Equation.

 

Conclusion

These were two exciting days with enough mention of disruptive technologies. Our challenge in the PLM domain will be to give them a purpose. A purpose is likely driven by external factors related to the need for a sustainable future. Efficiency and effectiveness must come from learning to work in connected environments (digital twin, digital thread, Industry 4.0, Model-Based (Systems) Engineering).

Note: You might have seen the image below already – a nice link between sustainability and the mission to Mars
