
Ontology example: description of the business entities and their relationships

In my recent posts, I have talked a lot about the model-based enterprise, and already after my first post, Model-Based – an introduction, I received a lot of feedback in which most of the audience automatically associated the words Model-Based with a 3D CAD model.
Trying to clarify this through my post Why Model-Based – the 3D CAD Model stirred up the discussion even more, leading to Model-Based: The confusion.

A Digital Twin of the Organization

At that time, I briefly touched on business models and business processes that also need to be reshaped and built for a digital enterprise. Business modeling is necessary if you want to understand and streamline large enterprises, where nobody can oversee the company as a whole. This approach is similar to systems engineering, where we try to understand and simulate complex systems.

With this post, I want to close the Model-Based series and focus on the aspects of the business model. I was caught by this catchy article: How would you like a digital twin of your organization?, which provides a nice introduction to this theme. Also, I met with Steve Dunnico, creator and co-founder of Clearvision, a Swedish startup company focusing on modern ways of business modeling.

 

Introduction

Jos (VirtualDutchman): Steve, can you give us an introduction to your company and tell us which parts of the model-based enterprise you are addressing with Clearvision?

Steve (Clearvision): Clearvision started as a concept over two decades ago – modeling complex situations across multiple domains needed a simple approach to create a copy of the complete ecosystem. Along the way, technology advancements have opened up big data to everyone, and now we have Clearvision as a modeling tool/SaaS that creates a digital business ecosystem, enabling better visibility to deliver transformation.

As we all know, change is constant, so we must transition from the old silo projects and programs to a business world of continuous monitoring and transformation.
Clearvision enables this by connecting the disparate parts of an organization into a model linking people, competence, technology services, data flow, organization, and processes.
Complex inter-dependencies can be visualized, showing impact and opportunity to deliver corporate transformation goals in measured minimum viable transformation – many small changes, with measurable benefit, delivered frequently.  This is what Clearvision enables!

Jos: What is your definition of business modeling?

Steve: Business modeling has historically been the domain of financial experts – taking the “business model” of the company (such as production, sales, support) and looking at cost, profit and margins for opportunities, then remodeling to suit. Now, with the availability of increased digital data about many dimensions of a business, it is possible to model more than the financials.

This is the business modeling that we (Clearvision) work with – connecting all the entities that define a business so that a change is connected to process, people, data, technology and other dimensions such as cost, time, quality.  So if we change a part, all of the connected parts are checked for impact and benefit.
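To make this connected-entities idea concrete, here is a minimal sketch in Python of a business ecosystem as a dependency graph, where changing one entity surfaces every connected element for impact checking. The entity names are invented for illustration; this is not Clearvision's actual implementation.

```python
from collections import defaultdict

# A toy business ecosystem: entities (people, processes, systems, data)
# connected by dependency edges. All names are illustrative.
ecosystem = defaultdict(set)

def connect(a, b):
    """Register a bidirectional dependency between two entities."""
    ecosystem[a].add(b)
    ecosystem[b].add(a)

connect("Order process", "CRM system")
connect("Order process", "Sales team")
connect("CRM system", "Customer data")
connect("Customer data", "Invoicing process")

def impact_of_change(entity, depth=2):
    """Walk the graph to find everything potentially affected by a change."""
    affected, frontier = set(), {entity}
    for _ in range(depth):
        frontier = {n for e in frontier for n in ecosystem[e]} - affected - {entity}
        affected |= frontier
    return affected

print(impact_of_change("CRM system"))
# e.g. {'Order process', 'Customer data', 'Sales team', 'Invoicing process'}
```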

Jos: What are the benefits of business modeling?

Steve: Connecting the disparate entities of a business opens up limitless opportunities to analyze “what is affected if I change this?”. This can be applied to anything from simple, static “as-is” gap analyses to the more advanced studies needed to forecast the future and move into predictive rather than reactive planning.

The benefits of using a digital model of the business ecosystem apply to the whole organization. The “C-suite” team gets heat-maps not only for technology-project deliveries but can also use workforce-culture maps to assess the company’s understanding and adoption of new ways of working and the achievement of strategic goals. Meanwhile, at an operational level, teams can collaborate more effectively, knowing which parts of the ecosystem help or hinder their deliveries, and vice versa.

Jos: Is business modeling applicable to any type or size of company?

Steve: The complexity of business has driven us to silo our ways of working, to simplify tasks in order to achieve our own goals, and it is larger organizations that can benefit from modeling their business ecosystems. On that basis, it is unlikely that a standalone small business would engage in its own digital ecosystem model. However, as a supplier to a larger organization, it can be beneficial for the larger organization to model its smaller suppliers to ensure a holistic view of its ecosystem.

The core digital business ecosystem model delivers integrated views of dependencies, clashes, hot-spots to support transformation

Jos: How is business modeling related to digital transformation?

Steve: Digital transformation is an often-heard topic in large corporations. The implication is that we should take advantage of the digital data we generate and collect in our businesses and connect it, so that we benefit from the whole rather than working in silos. Therefore, using a digital model of a business ecosystem will help identify the areas of connectivity and collaboration that can deliver the best benefit – through Minimum Viable Transformation, not a multi-year program with a big-bang output (which sometimes misses its goals…).

Today’s digital technology brings new capabilities to businesses and is driving competence changes in organizations and their partner companies. So another use of business modeling is to map the competences of internal/external resources to the capabilities needed for digital transformation. Mapping competences rather than roles brings a better fit of resources to support transformation. Understanding which competencies we have and where the gaps are is a prerequisite for planning and delivering transformation.

Jos: Then perhaps to close: what is Clearvision’s mission, and where do you fit (uniquely)?

Steve: Having worked on early digital business ecosystem models in the late 90s, we cut our teeth on slow processing times, difficult-to-change data relationships and poor access to data, combined with a very siloed work mentality. Clearvision is now positioned to help organizations realize that the value of the whole of their business is greater than the sum of its parts (silos), by enabling a holistic view of their business ecosystem that can be used to deliver measured transformation on a continual basis.

Jos: Thanks, Steve, for your contribution, which completes the series of posts related to the model-based enterprise and its various facets. I am aware this post gives the opinion of one company describing the importance of a model-based business in general. There are no commercial relations between the two of us, and I recommend you explore this topic further if it is relevant to your situation.

Conclusion

Companies and their products are becoming more and more complex; much of it is happening now, and a lot more will happen in the near future. To understand and manage this complexity, models are needed to virtually define and analyze the real world without the high cost of making prototypes or changes in the real world. This applies to organizations, to systems, to engineering and manufacturing coordination and, finally, to in-field operating systems. They can all be described by – connected – models. This is the future of the model-based enterprise.

Coming up next time: CIMdata PDM Roadmap Europe and PDT Europe. You can still register and meet a large group of people who care about the detailed aspects of a digital enterprise.

 


What I want to discuss this time is the challenging transformation related to product data that needs to take place.

The top image of this post illustrates the current PLM world on the left and, on the right, the potential future positioning of PLM in a digital enterprise. How the right side will behave is still vague – it can be a collection of platforms or a vast collection of small services, all contributing to the performance of the company. Some vendors might dream that all these capabilities are defined in one system of systems, like the human body: all functions available and connected.

Coordinated or connected?

This is THE big question for a future digital enterprise. In the current PLM approach, there are governance structures that allow people to share data along the product lifecycle in a structured way.

These governance structures can be project breakdown structures, where a phase-gate approach guides the full delivery. Deliverables related to tasks and gates make sure information is stored and available for every stakeholder. For example, a well-known process in the automotive industry, the Advanced Product Quality Planning (APQP) process, is a standardized approach to make sure parts or products are introduced with the right quality for the customer.

Deliverables at any stage in the process can be reviewed or consumed by another stakeholder. The result is, most of the time, a collection of approved documents (Office-type, design & test files) stored centrally. This is what I would call a coordinated data approach.

In complex environments, besides the project governance, there will be product structures and Bills of Materials, where each object in such a structure is the placeholder for related information. In the case of a product structure, this can be the specifications per component; in the case of a Bill of Materials, it can be the design specification (usually in CAD models) and, in the case of an MBOM, the manufacturing specifications.

An example of structures used in Enovia

Although these structures contain information about the product composition themselves, it is the related information that makes the content understandable and realizable.

Again it is a coordinated approach, and most PLM systems and implementations are focusing on providing these structures.

Sometimes this is done with their own system only – you need to follow the vendor portfolio to get the full benefit – and sometimes the system is positioned as an overlay on top of the existing systems in the company, and is therefore less invasive.

Presentation from Martin Eigner – explaining the overlay concept based on Aras

Providing the single version of the truth is often associated with this approach. The question is: Is the green bin on the left the single version of the truth?

The Coordinated – Single Version of the Truth – problem

The challenge of a coordinated approach is that there is no thorough consistency checking of whether the data delivered represents the real truth. Through serious review procedures, we do our best to make sure every deliverable has the required content and quality. As the information inside these deliverables is not connected to the outside world, there will be discrepancies between reality and what has been stored. Still, we feel comfortable enough as an organization to pretend we know where the risks are. Until the costly “impossible” happens!

The connected enterprise

The ultimate dream of a digital enterprise is that everything relevant is connected in context. This means no more documents or files but a very granular information model for linking data and keeping it in context. We can apply algorithms and automation to connected data and use Artificial Intelligence to make sense of massive amounts of data.

Connected data allows us to share combined sets of information that are relevant to a particular role. Real-time dashboarding is one of the benefits of such an infrastructure. There are still a lot of challenges with this approach. How do we know which information is valid in the context of other information? What are the rules that describe a valid product or project baseline at a particular time?

Although all data is stored as unique information objects in a network of information, we cannot always apply the old mechanisms of a coordinated approach. Generated reports from a connected environment can still serve as baselines or records related to a specific state. For example, when the design is approved for manufacturing, we can generate approved Product Baseline structures or Bill of Materials structures.
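A minimal sketch of that idea in Python – the data, statuses and rule are invented for illustration: in a connected environment, a baseline is not a stored document but a snapshot generated on demand from the linked data.

```python
from datetime import datetime, timezone

# Hypothetical connected data: every object is a unique information element.
product_data = [
    {"id": "P-001", "rev": "B", "status": "released", "type": "part"},
    {"id": "P-002", "rev": "A", "status": "in-work",  "type": "part"},
    {"id": "R-010", "rev": "C", "status": "released", "type": "requirement"},
]

def generate_baseline(objects, state="released"):
    """Snapshot every object that satisfies the baseline rule at this moment."""
    return {
        "created": datetime.now(timezone.utc).isoformat(),
        "rule": f"status == {state}",
        "members": [(o["id"], o["rev"]) for o in objects if o["status"] == state],
    }

print(generate_baseline(product_data))
# {'created': '...', 'rule': 'status == released', 'members': [('P-001', 'B'), ('R-010', 'C')]}
```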

However, this linearity in the lifecycle for passing information through an enterprise will no longer exist. There might be various design alternatives, and the delivery process is already part of the design phase. Through integrated virtual simulation and testing, we reach a state where the product satisfies the market for that moment, and the delivery process is known at the same time.

Almost immediately, and based on the first experiences from the field, new features can be added, virtually tested and validated for the next stage. We need to design new PLM infrastructures that can support this granularity and therefore this complexity.

The connected – Single Version of the Truth – problem

The concepts I described related to the connected enterprise made me realize that this is analogous to how the brain works. Our brain is a giant network of connected information, dynamically maintaining associations, having different abstraction levels and always pretending there is one truth.

If you want to understand a potential model of the brain, please read On Intelligence by Jeff Hawkins. With the possible arrival of the quantum computer, we might be able to create performant brain models.

In my earlier post, Are we blocking our future, I referred to the book The Idiot Brain: What Your Head Is Really Up To by Dean Burnett, in which Dean states that, due to the complexity of stored information, our brain continuously adapts “non-compliant” information to make sure the owner of the brain feels comfortable.

What we think is the truth might be just a creation of the brain, combining the positive parts into a compelling story and suppressing or deleting information that does not fit. Although it sounds absurd, I believe that if we are able to create a connected digital enterprise, we will face the same symptoms. Due to the complexity of connected information, we will be looking for the most suitable version, and as everything has become so complex, ordinary human beings will no longer be able to distinguish this.

 

Conclusion:

As part of the preparation for the upcoming PDT Europe 2018, I was investigating the topics coordinated and connected enterprise to discover potential transformation steps. We all need to explore the future with an open mind, and the challenge is: WHERE and HOW FAST can we transform from coordinated to connected? I am curious if you have experiences or thoughts on this topic.

 

 

During my holiday, I read some interesting books: some for the beauty of imagination and some to enrich my understanding of the human brain.

Why the human brain? It is the foundation and motto of my company: The Know-How to Know Now.
In 2012, I wrote a post, Our brain blocks PLM acceptance, followed by a post in 2014, PLM is doomed, unless ……, both based on observations and inspired by books that are a must-read if you are interested in more than just PLM practices and technology.

In 2014, Digital Transformation was not so clear. We talked about disruptors, but disruption happened outside our PLM comfort zone.

Now, six years later, disruption or significant change in the way we develop and deliver solutions to the market has become visible in the majority of companies. To stay competitive or meaningful in a global market with changing customer demands, old ways of working no longer bring in enough revenue to sustain the business. The impact of software as part of the solution has significantly changed the complexity and lifecycle(s) of solutions on the market.

Most of my earlier posts in the past two years are related to these challenges.

What is blocking Model-Based Definition?

This week I had a meeting in the Netherlands with three Dutch peers, all interested and involved in Model-Based Definition – either from the coaching point of view or the “victim” point of view. We compared MBD challenges with Joe Brouwer’s AID (Associated Information Documents) approach and found a lot of commonalities.

No matter which method you use, it is about specifying unambiguously how a product should be manufactured – this is a skill and craftsmanship, not a technology. We agreed that a model-based approach, where information (PMI) is stored as intelligent data elements in a Technical Data Package (TDP), will be crucial for the multidisciplinary usage of a 3D model and its associated information.

If we were to store the information again as dumb text in a view, it would need human rework, leading to potentially out-of-sync parallel information and therefore creating communication and quality issues. Unfortunately, it was a short meeting; the intention is to follow up this discussion in the Netherlands with a broader audience. I believe this is what everyone interested in learning and understanding the needs and benefits of an (unavoidable) model-based approach should do: get connected around the table and share and discuss.

We realized that human beings indeed are often the reason blocking the introduction of new ways of working. Twenty-five years ago, we had this discussion when moving from 2D to 3D for design. Now, due to the maturity of the solutions and the education of new engineers, this is no longer an issue. Now we are in the next wave, using the 3D model as the base for the manufacturing definition, and again a new mindset is needed.

There are a few challenges here:

  • MBD is still in progress – standards like AP242 still need enhancements
  • There is a lack of visibility on real reference stories to motivate others.
    (Vendor-driven stories often are too good to be true or too narrow in scope)
  • There is no education in (modern) business processes related to product development and manufacturing. Engineers with new skills are dropped into organizations with traditional processes and silo thinking.

Educate, or our brain will block the future!

The above points need to be addressed, and here the human brain comes into the picture again. Our unconscious, reptilian brain is continuously busy spending the least amount of energy, as described in Thinking, Fast and Slow. Currently, I am reading The Idiot Brain: What Your Head Is Really Up To by Dean Burnett, another book confirming that our brain is not a logical engine making wise decisions.

And then there is the Dunning-Kruger effect, explaining that the people with the lowest skills often have the most outspoken opinions and are not even aware of this flaw. We see this phenomenon in particular now on social media, where people push their opinions as if they were facts.

So how can we learn the new model-based approaches – and here I mean all the model-based aspects I have discussed recently, i.e., Model-Based Systems Engineering, Model-Based Definition/Model-Based Enterprise and the Digital Twin? We cannot learn them from a book, as we are entering a new era.

First, you might want to understand that there is a need for new ways of working related to complex products. If you have time, listen to Xin Guo Zhang’s opening keynote, titled Co-Evolution of Complex Aeronautical Systems & Complex SE. It takes 30 minutes, so force yourself to think slow and comprehend the message related to the paradigm shift needed for systems engineering towards model-based systems engineering.

Also, we have to believe that model-based is the future. If not, we will find, for every issue on our path, a reason not to work towards the ultimate goal.

You can see this in the comments of my earlier post on LinkedIn, where Sami Grönstrand writes:

I warmly welcome the initiative to “clean up” these concepts (it is time to clean up our model-based problem) and, above all, await live examples of transformations – even partial ones – coupled with reasonable business value identification.

There are two kinds of amazing places: those you have first to see before you can believe they exist.
And then those kinds that you have to believe in first before you can see them…

And here I think we need to simplify and enhance the Model-Based myth. According to Yuval Harari in his book Sapiens, the power of the human race came from creating myths to align people, getting long-term, forward-looking changes accepted by our reptilian brain. We are designed to believe in myths – therefore the need for a Model-Based myth. In my post PLM as a myth? from 2017, I discussed this topic in more detail.

Conclusion

There are so many proof points that our human brain is not as reliable as we think it is. Not knowing about these effects makes it even harder to make progress towards a digital future. This post, with all its embedded links, can keep your brain active for a few hours. Try it: avoid thinking fast, and avoid assuming you know it all. Your thoughts?

 

Learning & Discussing more?
Still time to register for CIMdata PLM Roadmap and PDT Europe

 

 

At this moment, there are two approaches to implementing PLM. The most common practice is item-centric; model-centric will potentially be the best practice for the future. Perhaps your company is still using a method from the previous century called drawing-centric. In that case, you should read this post with even more attention, as there are opportunities to improve.

 

The characteristics of item-centric

In an item-centric approach, the leading information carrier is an item, also known as a part. The term part is sometimes confusing in an organization as it is associated with a 3D CAD part. In SAP terminology, the item is called Material, which is sometimes confusing for engineering, as they consider Material to be the raw material. Item-centric is an approach where items are managed and handled through the whole lifecycle. In theory, an item can be a conceptual item (for early estimates), a design item (describing the engineering intent), a manufacturing item (defining how an item is consumed) and potentially a service item.

The picture below illustrates the various stages of an item-centric approach. Don’t focus on the structure, it’s an impression.

It is clear that these three structures are different and can contain different item types. To read more about the details of an EBOM/MBOM approach, read the related posts on my blog.

Back to item-centric. This approach means that the item is the leading authority for the product/part. The ID and revision describe the unique object in the database, and the status of the item tells you its current lifecycle stage. In some cases, where your company makes configurable products, the relation between two items can also define effectivity characteristics, like date effectivity, serial number effectivity and more. From an item structure, you can find its related information in context. The item points to the correct CAD model, the assembly or related manufacturing drawings, and the specifications. In the case of an engineering item, it might point towards approved manufacturers or approved manufacturing items.
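A minimal sketch in Python of what such an item-centric record carries – the field names and values are illustrative, not taken from any specific PLM system:

```python
from dataclasses import dataclass, field

# A simplified impression of an item-centric record: the ID and revision
# identify the unique object, the status gives the lifecycle stage, and the
# relations point to information in context.
@dataclass
class Item:
    item_id: str          # unique, together with revision identifies the object
    revision: str
    status: str           # current lifecycle stage, e.g. "in-work", "released"
    related: dict = field(default_factory=dict)  # links to information in context

bearing = Item(
    item_id="100234",
    revision="B",
    status="released",
    related={
        "cad_model": "bearing_housing.prt",
        "drawing": "100234-B.pdf",
        "approved_manufacturers": ["AMS-007", "AMS-012"],
    },
)
print(bearing.item_id, bearing.revision, bearing.status)
```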

Releasing an item or a BOM means the related information in context needs to be validated and frozen too. If your company works with drawings for manufacturing, these drawings need to be created, correct and released, which can sometimes be an issue due to last-minute changes. The above figure just gives an impression of the potential data related to an item. It is important to mention that reports, which are also considered documents, do not need approval, as they are more a snapshot of the characteristics at the moment of generation.

The advantages of an item-centric approach are:

  • End-to-end traceability of information
  • Can be implemented in an evolutionary approach after PDM-ERP without organizational changes
  • It enables companies to support sharing of information
  • Sharing of information forces companies to think about data governance
    (not sure whether a company wants to invest in that topic)

The main disadvantages of an item-centric approach are:

  • Related information on the item is not in context and therefore requires its own management and governance to ensure consistency
  • Related information is contained in documents, where availability and access are not always guaranteed

Still, the item-centric approach brings big benefits to a company that was working in a classical drawing-driven PDM-ERP approach. An additional remark needs to be made: not every company will benefit from an item-centric approach, as, typically, Engineering-to-Order companies might find this method creates too much overhead.

The characteristics of Model-Centric

A model-centric approach is considered the future approach for modern enterprises, as it brings efficiency, speed, multidisciplinary collaboration and support for incremental innovation in an agile way. When talking about a model-centric approach, I do not mean a 3D CAD model-centric approach. Yes, once the product is mature, there will be a 3D model serving as a base for the physical realization of the product.

However, in the beginning, the model can still be a functional or logical model. In particular for complex products, model-based systems engineering might be the base for defining the solution. Actually, when we talk about products that interact with the outside world through software, we tend to call them systems. This explains why model-based systems engineering is more and more becoming the recommended approach to make sure the product works as expected, fulfills all the needs for the product, and creates a foundation for incremental innovation without starting from scratch.

Where the model-based architecture provides a framework for all stakeholders, the 3D CAD model will be the base for a digital thread towards manufacturing. By linking parameters from the logical and functional models to the physical model, a connection is created without the need to create documents or input files for other disciplines. Adding 3D annotations to the 3D CAD model, and relating manufacturing process steps to the model, provides a direct connection to the manufacturing process.
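An illustrative sketch of such a parameter link in Python – the parameter names and the derivation formula are invented placeholders, not real engineering: a value in the logical model propagates to the physical model without any document or export file in between.

```python
# Hypothetical models: one logical/functional parameter drives one
# physical-model parameter through a registered link.
logical_model = {"pump.max_flow_l_min": 120}
physical_model = {"impeller.diameter_mm": None}

# Link table: (source parameter, target parameter, derivation rule).
# The 0.9 factor is a placeholder, not real pump engineering.
links = [("pump.max_flow_l_min", "impeller.diameter_mm",
          lambda flow: round(0.9 * flow))]

def propagate(logical, physical, link_table):
    """Push logical values through the links into the physical model."""
    for src, dst, fn in link_table:
        physical[dst] = fn(logical[src])
    return physical

print(propagate(logical_model, physical_model, links))
# {'impeller.diameter_mm': 108}
```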

The primary challenge of this future approach is to have all these data elements (requirements, functions, components, 3D design instances, manufacturing processes & resources) connected in a federated environment (the product innovation platform). Connecting, versioning and baselining are crucial for a model-centric approach. This is what initiatives like Industry 4.0 are now exploring through demonstrators and prototypes, to get to a coherent collection of managed data.

Once we are able to control this collection of managed data, concepts like the digital twin or even the virtual twin can be exploited, linking data to a single product instance in the field.

Also, the model can serve as the foundation for introducing incremental innovation, bringing in new features. As the model-based architecture provides direct visibility of change impact (there are no documents to study), it will be extremely lean and cost-efficient to innovate on an existing product.

Advantages of model-centric

  • End-to-end traceability of all data related to a product
  • Extremely efficient in data-handling – no overhead on data-conversions
  • Providing high-quality understanding of the product with reduced effort compared to drawing-centric or item-centric approaches
  • It is scalable to include external stakeholders directly (suppliers/customers), leading to potentially different, more beneficial business models
  • Foundation for Artificial Intelligence at any lifecycle step.

Disadvantages of model-centric

  • It requires a fundamentally different way of working compared to the past. Legacy departments, legacy people, and legacy data do not fit directly into the model-centric approach. A business transformation is required, not an evolution.
  • It is all about sharing data, which requires an architecture that is built to share information across disciplines – not through a service bus but as a (federated) platform of information.
    A platform requires strong data governance, both for the data dictionary and for the authorizations defining which discipline is leading or following.
  • There is no qualified industrial solution from any vendor yet. There is advanced technology, there are demos, but to my knowledge there is no 100% model-centric enterprise yet. We are all learning, trying to distinguish reality from the hype.

 

Conclusions

The item-centric approach is the current best practice for most PLM implementations. However, it has the disadvantage that it is not designed for a data-driven approach, the foundation of a digital enterprise. The model-centric approach is new, and some facets already exist. However, for the total solution, companies, vendors, consultants and implementers are all learning step-by-step how it all connects. The future of model-centric is promising and crucial for survival.

Do you want to learn where we are now related to a model-centric approach?
Come to PDT2017 in Gothenburg on 18-19th October and find out more from the experts and your peers.

Some weeks ago I wrote a post about non-intelligent part numbers (here), and this was (as expected) one of the topics that fired up other people to react. Thanks to Oleg Shilovitsky (here), Ed Lopategui (here) and David Taber (here) for your contributions to this debate. For me, the interesting conclusion was that nobody denies the advantages of non-intelligent part numbers anymore. Five to ten years ago, this discussion would have been more of a debate between defenders of the old “intelligent” methodology and supporters of non-intelligent numbers. Now it was more about how to deal with, wait for, or anticipate the future. Great progress!

 

Non-intelligent part number benefits

Again a short summary for those who have not read the posts referenced in the introduction. Non-intelligent part numbers provide the following advantages:

  • Flexibility towards the future in case of mergers, new products, and technologies with number ranges that were not foreseen. Reduced risk of changes and maintenance for part numbers in the future.
  • Less reliance on “brain-based connectivity” between systems (error-prone) and better support for automated connectivity (interfaces / digital scanning devices), minimizing mistakes and learning time.

 

What’s next?

So, when a company decides to move towards non-intelligent part numbers, there are still some more actions to take. As the part number becomes irrelevant for human beings, there is a need for more human-readable properties, provided as metadata on screens or attributes in a report.

CLASSIFICATION: The first obvious need is to apply a part classification to your parts. Intelligent part numbers were often a kind of classification in themselves, based on the codes and positions of numbers and characters inside the intelligent ID: information about the type of part, perhaps the drawing format, the project, or the year it was first issued. You do not want to lose this information; therefore, make sure it is captured in attributes (e.g. part type / creation date) or in related information (e.g. drawing properties, model properties, customer, project). In a modern PLM system, all the intelligence of a part number needs to be stored at least as metadata and relations.

Which classification to use is hard to tell. It depends on your industry and the product you are making. Each industry has its standards, which are probably the optimal target when you work in that industry. Classifications like UNSPSC might be too generic. However, when you classify, do not invent a new classification yourself. People have spent thousands of hours (millions perhaps) on building the best classification for your industry – don’t try to be smarter, unless you are a clever startup.

And next, do not rely on a single classification. Make sure your parts can adhere to multiple classifications, as this is the best way to stay flexible for the future. Multiple classifications can offer support for a marketing view, a technology view (design and IP usage), a manufacturing view and so on, as sketched below.
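A minimal Python sketch of that principle – all classification values are invented for illustration: the part keeps its meaningless number, while several classification views hang off it as metadata, and any view can be queried without decoding the number.

```python
# Illustrative only: a part with a non-intelligent number carrying several
# classification views at once, so no single taxonomy locks you in.
part = {
    "id": "100234",                      # non-intelligent, unique
    "description": "Bearing, deep groove, 35mm bore",
    "classifications": {
        "technology": "Bearings/Ball/Deep groove",   # design & IP view
        "manufacturing": "Purchased/Standard part",  # make-or-buy view
        "marketing": "Drive train components",       # portfolio view
    },
}

def find_parts(parts, view, value):
    """Query any classification view without touching the part number."""
    return [p["id"] for p in parts
            if p["classifications"].get(view, "").startswith(value)]

print(find_parts([part], "technology", "Bearings"))  # ['100234']
```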

Legacy parts should be classified using analytic tools and custom data manipulation to complete the part metadata in the future environment. There are standard tools in the market to support data discovery and quality improvement. Part-similarity discovery can be done with Exalead’s OnePart; for more specific tools, read Dick Bourke’s article on Engineering.com.

DOWNSTREAM USAGE: As Mathias Högberg commented on my post, the challenge of non-intelligent part numbers has its impact downstream, on the shop floor. Production-line scheduling for variants, or production process steps for semi-finished products, often depends on the intelligence of the part number. When moving to non-intelligent numbers, these capabilities have to be addressed too, either by additional attributes immediately identifying product families, or by adding a more standardized description based on the initial attributes of the classification. Also, David Taber in his post talked about two identifiers: one meaningless and fixed, and a second one used for the outside world, which could be built by a concatenation of attributes and can change during the part lifecycle.

In the latter case, you might say: we remove intelligence from the part number, and we bring intelligence back in the description. This is correct. Still, human beings are better at mapping a description in their mind than a number.

Do you know Jos Voskuil (a.k.a. virtualdutchman) or
Do you know NL 13.012.789 / 56 ?
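A sketch of David Taber's two-identifier idea in Python – the attribute names and values are invented: a fixed, meaningless ID for systems to join on, plus a human-readable display code concatenated from current attribute values, which may change over the part's lifecycle.

```python
# Two identifiers per part: "id" never changes; the display code is derived.
part = {
    "id": "af39c2e1",            # meaningless, fixed, used by systems
    "part_type": "BRG",          # classification-derived attributes
    "standard": "33K3",
    "project": "P95",
    "revision": "A",
}

def display_code(p):
    """Build the outward-facing identifier from current attribute values."""
    return f'{p["part_type"]}-{p["standard"]}-{p["project"]}.{p["revision"]}'

print(part["id"], "->", display_code(part))   # af39c2e1 -> BRG-33K3-P95.A
```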

 

Quality of data

Moving from “intelligent” part numbers towards meaningless part numbers, enriched with classification and a standardized description, allows companies to gain significant benefits from part reuse alone. This is what current enterprises are targeting. Discovering and eliminating similar parts already justifies this process. I consider this a tactical advantage. The real strategic advantage will come in the next ten years, when we move more and more towards a digital enterprise. In a digital enterprise, algorithms will play a significant role (see Gartner), reducing the amount of human interpretation and delays. However, algorithms only work on data with certain properties and a reliable quality.

Conclusion

Introducing non-intelligent part numbers has its benefits and ROI in staying flexible for the future. However, consider it also a strategic step for the long-term future, when information needs to flow through the enterprise in an integrated way with a minimum of human handling.

 

Happy New Year to all of you; I wish you all an understandable and digital future. This year, I hope to entertain you again with a mix of future trends related to PLM combined with old PLM basics. This time, one of the topics that pops up in almost every PLM implementation: numbering schemes. Do we use numbers with a meaning, so-called intelligent numbers, or can we work with insignificant numbers? And, of course, the question of what the impact is of changing from meaningful numbers towards unique, meaningless numbers.

Why did we create “intelligent” numbers?

Intelligent part numbers were used to help engineers and people on the shop floor, for two different reasons. In the early days, the majority of design work was mechanical design. Often companies had a one-to-one relation between the part and the drawing. This implied that the part number was identical to the drawing number. An intelligent part number could have the following format: A4-95-BE33K3-007.A

Of course, I invented this part number, as the format of an intelligent part number is only known to local experts. In my case, I was thinking about a part that was created in 1995, drawn on A4, probably a bearing of the 33K3 standard (another intelligent code), with index 007 (checked in a numbering book). The version of the drawing (part) is A.
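A toy Python decoder for the invented format above makes the fragility visible: the meaning lives entirely in positions that only local experts know, the two-digit year breaks after 1999, and any new product type or merger breaks the scheme.

```python
# Decode the invented intelligent part number A4-95-BE33K3-007.A.
def decode(part_number):
    body, version = part_number.rsplit(".", 1)
    sheet, year, code, index = body.split("-")
    return {
        "drawing_format": sheet,          # "A4"
        "year_issued": 1900 + int(year),  # "95" -> 1995 (fails after 1999!)
        "type_code": code,                # "BE33K3" - bearing, 33K3 standard
        "index": index,                   # "007", checked in a numbering book
        "version": version,               # "A"
    }

print(decode("A4-95-BE33K3-007.A"))
```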

A person working in production, assembling the product and reading the BOM, immediately knows which part to use by its number and drawing. Of course, the word “immediately” is only valid for people who have experience with using this part. And in the previous century this was not as painful as it is now: products were not as sophisticated as they are now, and variation in products was limited.

Later, when information became digital, intelligent numbers were also used by engineering to classify their parts. The classification digits would assist the engineer in finding similar parts in a drawing directory or drawing list.

And if the world had not changed, there would still be intelligent part numbers.

Why no more intelligent part numbers?

There are several reasons why you would not use intelligent part numbers anymore.

  1. An intelligent numbering scheme works in a perfect world where nothing changes. In real life, companies merge with other companies, and then the question comes up: do we introduce a new numbering scheme, or is one of the existing schemes going to be the perfect scheme for the future? If this has happened a few times, a company might think: do we have to go through this again and again? Especially as topic #2 has probably also occurred.
  2. The numbering scheme no longer supports current products and their complexity. Products change from mechanical products towards systems, containing electronic components and embedded software. The original numbering system never catered for that. Is there an overarching numbering standard? It is getting complicated; perhaps we should change? And here #3 comes in.
  3. As we are now able to store information in a digital manner, we are able to link to this complex part number a few descriptive attributes that help us identify the component. Here the number becomes less important, still serving as access to the unique metadata. Consider it like a bar code on a product: nobody reads the bar code without a device anymore, and the device, connected to an information system, provides the right information. This brings us to the last point, #4.
  4. In a digital enterprise, where data is flowing between systems, we need unique identifiers to connect datasets between systems. The most obvious example is the part master data. Related to a unique ID, you will find in the PDM or PLM system the attributes relevant for overall identification (description, revision, status, classification) and further attributes relevant for engineering (weight, material, volume, dimensions).
    In the ERP system, you will find a dataset with the same ID and master attributes. However, here they are extended with attributes related to logistics and finance. The unique identifier provides the guarantee that data is connected in the correct manner and that information can flow or be connected between systems without human interpretation or human-spent processing time – see the sketch after this list.
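A minimal sketch of that unique-identifier principle in Python – the attribute names and values are illustrative: the same ID keys the engineering dataset in PLM and the logistics/finance dataset in ERP, so the two can be combined without anyone decoding a part number.

```python
# Hypothetical PLM and ERP datasets sharing one unique identifier.
plm_record = {"id": "100234", "revision": "B", "status": "released",
              "weight_kg": 0.45, "material": "steel 42CrMo4"}

erp_records = {"100234": {"lead_time_days": 12, "standard_cost": 8.75,
                          "safety_stock": 200}}

def combined_view(plm, erp_by_id):
    """Merge both datasets on the shared unique ID - no number decoding needed."""
    return {**plm, **erp_by_id[plm["id"]]}

print(combined_view(plm_record, erp_records))
```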

And this is one of the big benefits of a digital enterprise: reducing overhead in data handling, often cutting the cost of data handling by 50% or more (people / customizations).

 

What to do now in your company?

There is no business justification for renumbering parts just for future purposes. You need a business reason; otherwise, it will only increase costs and create a potential for migration errors. Moving to meaningless part numbers is best done at the moment a change is required – for example, when you implement a new PLM system or when your company merges with another company. At these moments, part numbering should be considered with the future in mind.

And the future is no longer about memorizing part classifications and numbers, even if you are from the generation that used to structure and manage everything inside your brain. Future businesses rely on digitally connected information, where a person, based on machine interpretation of a unique ID, will get the relevant and meaningful data. Augmented reality (picture above) is becoming more and more available. It is now human beings that need to get ready for a modern future.

 

Conclusion

Intelligent part numbers are a best practice from the previous century. Start to think digital and connected, and try to reduce the dependency on understanding the part number in all your business activities. Move towards providing the relevant data for a user. This can be an evolution, smoothing a future PLM implementation step.

 

Looking forward to discussing this topic and many other PLM-related practices with you face to face during the Product Innovation conference in Munich. I will talk about the PLM identity change and lead a focus-group session about PLM and ERP integration: looking from the high level while working in the real world – the challenge of every PLM implementation.

This time, I would like to receive some feedback from my readers, as I believe the topic I am discussing here might be similar to a PLM/ERP discussion – a discussion between religions. For the past two years, I have preached a more data-centric approach for PLM instead of file management, and related to this data-centric approach, the concept of a PLM platform (Business Platform – CIMdata / Innovation Platform – Gartner) becomes clear.

What’s the issue?

As I wrote in my earlier post (random PLM future thoughts), I realized that talking about platforms is not that straightforward when meeting companies with their own history and terminology. Some claim they are already using a business platform; others have no clue what makes a platform different from their current PLM implementation. Therefore, I will summarize the different approaches I have seen in my network and give a non-academic opinion as a base for discussion. Looking forward to your opinions.

The platform approach

My definition of a PLM platform:

  • A central repository of data based on a core data model. Information is stored as data in a unique way
  • On top of this repository, applications can run, using a subset of the overall data elements and providing dedicated functionality and a user interface to a particular user/role (see the sketch after this list)
  • Access to the platform is provided through web-technology. Storage could be on the cloud.
  • External applications and data can be connected through an open (standardized?) API, embedded or federated
  • The PLM platform can be a collection of services and functionality coming from various vendors / suppliers – the app store concept
  • The platform approach is THE DREAM for business, being flexible to combine and edit data in any desired context in dedicated apps / environments
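A highly simplified Python sketch of this platform idea – the data and app names are invented, not any vendor's API: one core data repository, with "apps" that each expose a filtered, re-shaped slice of the shared data model to a particular role.

```python
# A toy core repository: every object is stored once, in a unique way.
core_repository = [
    {"id": "R-01", "kind": "requirement", "text": "Max weight 2kg", "cost": None},
    {"id": "P-01", "kind": "part", "text": "Housing", "cost": 14.2},
    {"id": "P-02", "kind": "part", "text": "Cover", "cost": 3.8},
]

def app_view(repo, kinds, fields):
    """A dedicated app = a filtered, re-shaped slice of the shared core model."""
    return [{f: obj[f] for f in fields}
            for obj in repo if obj["kind"] in kinds]

engineer_app = app_view(core_repository, {"requirement", "part"}, ["id", "text"])
cost_app = app_view(core_repository, {"part"}, ["id", "cost"])
print(engineer_app)
print(cost_app)
```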

In the PLM world, Dassault Systèmes with their 3DExperience approach is following this trend, although here you might argue about the ease of adding external apps to this platform – is it open? Aras and Autodesk might also claim they have a PLM platform, where you might question the same, and whether the depth of the data model and the solutions provided on top of it are mature enough. Finally, SAP can also be considered a platform, but I would not call it a PLM platform at this moment in time. An important question for me would be: how can we achieve openness of a PLM platform?
Your thoughts?

The PLM backbone approach

My definition of a PLM backbone:

  • The core PLM functionality is provided by a single, proprietary PLM system
  • Additional functionality that is not part of the core development (acquisitions) is connected to the backbone through proprietary interfaces
  • External authoring tools are linked to the backbone through integrations or interfaces which could be developed by third parties
  • External systems can interface with the PLM backbone through open interfaces
  • The PLM backbone is THE DREAM for engineering, as historically this was the domain where PLM started to be implemented

I would consider Siemens and PTC (see picture) the best examples of a PLM backbone approach with their PLM portfolios. Teamcenter and Windchill are both rich PLM systems, further connected to several systems covering the product lifecycle. I am not expert enough to state that the same conclusion is valid for Oracle’s Agile, where I believe the backbone is bigger than the PLM system. What do you think? Will these PLM vendors also move to a platform approach? And what will be the platform?

The Service Bus approach

My understanding of the Service Bus (I am not an IT-expert):

  • The Service Bus has a standardized interface to request data or to post data that needs to be stored in other systems
  • The Service Bus approach reduces the amount of (custom) interfaces between systems by requiring standardized inputs and outputs per system
  • To provide a user with information that is not entirely available in a single system, the service bus needs to acquire the data from other systems, which might not give the high performance expected by business people
  • The Service Bus is the IT DREAM, as it simplifies the complexity for IT of managing point-to-point connections between systems and makes an upgrade strategy easier to support (a toy sketch follows below).
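My (non-IT-expert, in the same spirit as the list above) toy rendering of the service bus idea in Python – system names and payloads are invented: systems register standardized handlers on the bus, and consumers ask the bus rather than each other, while the data stays persisted in the owning systems.

```python
class ServiceBus:
    def __init__(self):
        self.handlers = {}

    def register(self, topic, handler):
        """Each system exposes one standardized input/output per topic."""
        self.handlers[topic] = handler

    def request(self, topic, payload):
        # The bus routes the request; data stays persisted in the owning system.
        return self.handlers[topic](payload)

bus = ServiceBus()
bus.register("part.master", lambda q: {"id": q["id"], "description": "Housing"})
bus.register("part.cost", lambda q: {"id": q["id"], "standard_cost": 14.2})

# A consumer composes a view from several systems through one interface.
# Note the multiple round-trips, which can hurt perceived performance.
view = {**bus.request("part.master", {"id": "100234"}),
        **bus.request("part.cost", {"id": "100234"})}
print(view)
```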

From a very high-level view, the service bus approach has some similarities to a platform. The service bus concept allows the business to select the systems they like the most (provided they connect to the service bus) – image property of IBM.com.

The main difference would be the persistence of information: where is the real data stored? I came across the service bus approach more often in the past, where the target was most of the time to integrate PDM functionality (PLM as an enterprise solution was never in scope here).

For the Service Bus approach, I am curious to learn its relevance for future PLM implementations, as the challenge would be to provide any user in the company with the relevant information in context. Is the service bus going to be replaced by the platform? Who would be the major players here?

The Business Intelligence approach

I discovered this method in project-centric companies (Oil & Gas companies, EPCs, construction companies) but, strangely enough, also at some manufacturing companies, where I would assume integration of systems would bring large benefits.

  • Each type of information is managed only in one single system avoiding interfaces or duplication of data.
  • Only where needed, data will be pushed from one system to other systems
  • Business Intelligence applications extract information from the relevant systems and present it in context to the user, giving him/her a better understanding
  • Business users will have to work in multiple systems to complete their tasks
  • The BI approach is the ULTIMATE IT DREAM, as it simplifies IT’s work dramatically and shuts down business demands.

I have seen an example where IT dictated that for document management we use product ABC (a well-known content management system). Next, for internal documents we use SharePoint. For CAD, we use product PQR as much as possible (heavily adapted) or AutoCAD 2D (to support the minimum). For ERP, the standard system is XYZ (a famous ERP system – you do not lose your job by selecting them), and of course everyone uses Excel as the common interface of information between people.

It was impossible in this company to have a business view on the solution landscape. As you can imagine, this company’s margins are not (yet) under pressure as their industry is very conservative.

What do you think?

Is the future for PLM in platforms? If Yes, what about openness? Who are the candidates to offer such a platform? Or will lack of industry standards and openness block wider adoption? If No, will there be a massive PLM system in the future, connected to other enterprise systems (ERP/CRM)? Or will PLM be implemented as a collection of smaller systems communicating through an enterprise service bus?

I am looking forward to discussing this topic here, and soon during the upcoming Product Innovation conference in Düsseldorf.


Human beings are a strange kind of creature. We think we make decisions based on logic, and we think we act based on logic. In reality, however, we do not like to change if it does not feel good, and we are lazy in changing our habits.

Disclaimer: this is a generalization which is valid for 99% of the population. So if you feel offended by the previous statement, be happy, as you are one of the happy few.

Our inability to change can be seen in the economy (only the happy few share). We see it in relation to global climate change. We see it in territorial fights all around the world.

Owning instead of sharing?

The cartoon below gives an interesting insight into how personal interests are perceived as more important than the general interest.


It is our brain!

More and more, I realize that the success of PLM is also related to human behavior: we like to own and find it difficult to share. PLM is primarily about sharing data through all stages of the lifecycle. A valid reason why sharing is rare is that current PLM systems and their infrastructures are still too complex to deliver shared information with ease. However, the potential benefits are clear when a company is able to transform its business into a sharing model and can therefore react to and anticipate the outside world much faster.

But sharing is not in our genes, as:

  • In current business, knowledge is power. Companies fight for their IP; individuals fight for their job security by keeping some specific IP to themselves.
  • As a biological organism, composed of a collection of cells, we are focused on survival of our genes. Own body/family first is our biological message.

Breaking these habits is difficult, and I will give some examples that I noticed in the past few weeks. Of course, this is not a complete surprise for readers of my blog, as a large number of my recent posts are related to the complexity of change. Some are related to human behavior:

August 2012: Our brain blocks PLM acceptance
April 2014: PLM and Blockers

Ed Lopategui, an interesting PLM blogger (see http://eng-eng.com), wrote a long comment on my PLM and Blockers post. The (long) quote below describes exactly what makes PLM difficult to implement within a company full of blockers:

“I also know that I was focused on doing the right thing – even if cost me my position; and there were many blockers who plotted exactly that. I wore that determination as a sort of self-imposed diplomatic immunity and would use it to protect my team and concentrate any wrath on just myself. My partner in that venture, the chief IT architect admitted on several occasions that we wouldn’t have been successful if I had actually cared what happened to my position – since I had to throw myself and the project in front of so many trains. I owe him for believing in me.

But there was a balance. I could not allow myself to reach a point of arrogance; I would reserve enough empathy for the blockers to listen at just the right moments, and win them over. I spent more time in the trenches than most would reasonably allow. It was a ridiculously hard thing and was not without an intellectual and emotional cost.

In that crucible, I realized that finding people with such perspective (putting the ideal above their own position) within each corporation is *exceptionally* rare. People naturally don’t like to jump in front of trains. It can be career-limiting. That’s kind of a problem, don’t you think? It’s a limiting factor without a doubt, and not one that can be fulfilled with consultants alone. You often need someone with internal street cred and long-earned reputation to push through the tough parts”

Ed concludes that it is exceptionally rare to find people putting the ideal above their own position – again referring to the opening statement that only a (happy) few are advocates for change.

Now let’s look at some facts about why it is exceptionally rare, so we can feel less guilty.

On Intelligence

Last month I read the book On Intelligence by Jeff Hawkins, written together with Sandra Blakeslee. (Thanks to Joost Schut from KE-Works for pointing me to this book.)

Although it was not the easiest book to read during a holiday, it is well written considering the complexity of the topic discussed. Jeff describes how the information architecture of the brain could work, based on the layering of the neocortex.

In his model, he describes how the brain processes information from our senses, first in a specific manner but then more and more in an invariant way. You have to read the book to get the full meaning of this model. The eye-opener for me was that Jeff describes the brain as a prediction engine: all the time, the brain anticipates what is going to happen, based on years of learning. That is why we need to learn and practice, building and enriching this information model.

And the more specialized you are in a particular topic – it can be knowledge, but it can also be a motor skill – the deeper in the neocortex this pattern is anchored. This makes it hard to change (bad) practices.

The book goes much further, and I was reading it more in the context of how artificial intelligence, or brain-like intelligence, could support the boring PLM activities. I got nice insights from it. However, the main side observation was: it is hard to change our patterns. So if you are not aware of it, your subconscious will always find reasons to reject a change. Follow the predictions!

Thinking Fast and Slow

And this is exactly the connection with another book I had read before: Thinking, Fast and Slow by Daniel Kahneman. Daniel explains that our brain runs its activities on two systems:

System 1 makes fast and automatic decisions based on stereotypes and emotions. System 1 is what we use most of the time, often running in subconscious mode. It does not cost us much energy to run in this mode.

System 2 takes more energy and time; therefore, it is slow and pushes us to be conscious and alert. Still, System 2 can be influenced by various external, subconscious factors.

Thinking, Fast and Slow nicely complements On Intelligence: System 1, described by Daniel Kahneman, is similar to the system Jeff Hawkins describes as the prediction engine. It runs in a subconscious mode, with optimal energy consumption, allowing us to survive most of the time.

Fast thinking leads to boiling frogs

And this links again to the boiling frog syndrome. If you are not familiar with the term, follow the link. In general, it means that people (and businesses) do not react to (life-threatening) outside change when it happens slowly, but would react immediately if they were confronted with the end result (no more business / no longer a competitive situation).

Conclusion: our brain by default wants to keep business in predictive mode, so implementing a business change is challenging, as all changes are painful and against our subconscious system.

So PLM is doomed, unless we change our brain’s behavior?

The fact that we are not living in caves anymore illustrates that there have always been those happy few who took a risk and the next step into the future by questioning and changing comfortable habits. Daniel Kahneman’s System 2, and also Jeff Hawkins, talk about the energy it takes to change habits and to learn new predictive mechanisms. But it can be done.

I see two major trends that will force the classical PLM to change:

  • The amount of connected data is becoming so huge that it no longer makes sense to store and structure the information in a single system. The time required to structure data does not deliver enough ROI in a fast-moving society. The old “single system that stores all” concept is dying.
  • The newer generations (generation Y and beyond) grew up with the notion that it is impossible to learn, capture and own specific information. They developed different skills to interpret data available from various sources, not necessarily owning and managing it all.

These two trends lead to the point where it becomes clear that thinking in terms of single systems is becoming obsolete. The future will be about connectivity and interpretation of connected data, used by apps running on a platform. The openness of a platform towards other platforms is crucial and will be the weakest link.

Conclusion:

The PLM vision is not doomed, and with a new generation of knowledge workers, the “brain change” has started. The challenge is to implement the vision across systems and silos in an organization. For that, we need to be aware that it can be done, and allocate the “happy few” in your company to enable it.

 


What do you think?
