
PDT Europe is over, and this year it was a surprisingly aligned conference, showing that ideas and concepts for modern PLM align more and more. Håkan Kårdén opened the conference, mentioning the event was fully booked: about 160 attendees from over 19 countries. With a typical attendance of approx. 120 participants, this showed that the theme of the conference, Continuous Transformation of PLM to support the Lifecycle Model-Based Enterprise, was very attractive and real. You can find a history of tweets by following the hashtag #pdte17.

Setting the scene

Peter Bilello from CIMdata kicked off by bringing some structure to the various Model-Based areas and the Digital Thread. Peter started by mentioning that technology is the least important issue, as organizational culture, changing processes, and adapting people's skills are more critical factors for a successful adoption of modern PLM. Other speakers would repeatedly confirm this point during the conference.

Peter presented a nice slide bringing the Model-Based terminology together on one page. Next, Peter took us through the various digital threads in the different stages of the product lifecycle. He concluded with the message that we are still in a learning process, redefining optimal processes for PLM using Model-Based approaches and Digital Threads, and that thanks (or due) to digitalization, these changes will be rapid. He ended with an overall conclusion that we should keep in mind:


It isn't about what we call digitalization; it is about delivering value to customers and all other stakeholders of the enterprise.

Next, Marc Halpern busted the myth of Digital Twins (according to his session title) and looked into realistically planning for them. I am not sure if Marc smashed any myths, although it is certain the Digital Twin is at the top of the hype cycle and we are all starting to look for practical implementations. A digital twin can take many forms, depending on its usage. For sure it is not just a virtual 3D model.

There are still many areas to consider when implementing a digital twin for your products. Depending on what you connect between the virtual and the physical model, and how, you have to consider where your vendor really stands in maturity and avoid lock-in to its approach. In particular, in these early stages, you cannot be sure which technology will last, and data ownership and confidentiality will play an important role. And rather than going for quick wins, make sure your digital twin is open and uses open standards as much as possible to stay open for the future, which also means keep aiming to work with multiple vendors.

Industry sessions

Next, we had industry-focused sessions related to a lifecycle Model-Based enterprise and later in the afternoon a session from Outotec with the title: Managing Installed Base to Unlock Service opportunities.

The first presentation came from Väino Tarandi, professor in IT in Construction at KTH Sweden, who presented his findings related to BIM and GIS in the context of the lifecycle: a test bed where PLCS meets IFC. Interesting, as I have been involved in BIM Level 3 discussions in the UK, which was already an operational challenge for stakeholders in the construction industry and is now extended with the concept of the lifecycle. So far these projects are at the academic level, and I am still waiting for companies to push ahead and discover the full benefits of an integrated approach.

Concepts for the industrial approach could be learned from Outotec, as you might understand later in this post. Of course, the difference is that Outotec is aiming for data ownership along the lifecycle, whereas in the construction industry each silo is often handled by a different contractor.

Fredrik Ekström from the Swedish Transport Administration shared his challenges of managing assets for both road and railway transport. I have worked around this domain in the Netherlands, where asset management for road infrastructure and for rail infrastructure is handled by two different organizations. I believe Fredrik (and similar organizations) could learn from the concepts in other industries. Again, Outotec's example is about having relevant information to increase service capabilities, exactly where the Swedish Transport Administration is aiming to have the right data for its services. When you look at the challenges reported by Fredrik, I assume he can find the answers in other industries' concepts.

Outotec's presentation on managing the installed base to unlock service opportunities, delivered by Sami Grönstrand and Helena Guiterrez, was entertaining, easy to digest, and well-paced. Without being academic, they explained the challenges of a company with existing systems in place moving toward digital twin concepts and the related data management and quality issues. Their practical example illustrated that a clear target (understanding a customer-specific environment better in order to sell better services) can be achieved by rational thinking and doing: a typical Finnish approach. All of this included the "bi-modal approach" and people change management.

Future Automotive

Ivar Hammarstadt, Senior Analyst Technology Intelligence at Volvo Cars Corporation, entertained us with a projection toward the future based on 160 years of automotive industry. Interesting, as electric did not seem to be the only way to go for a sustainable future, depending on operational performance demands.

 

Next, Jeanette Nilsson and Daniel Adin from Volvo Group Truck shared their findings from an evaluation project of more than one year, in which they evaluated the major PLM vendors (Dassault Systemes / PTC / Siemens) on their out-of-the-box capabilities related to 3D product documentation and manufacturing.

They concluded that none of the vendors was able to support the full Volvo Truck complexity in an out-of-the-box manner. It was also a good awareness project for the Volvo Trucks organization, showing that a common system for 3D geometry reduces the need for data transfers and manual data validation. Cross-functional iterations can start earlier, and more iterations can be performed, which supports shorter lead times and improved product quality. Personally, I believe this was a rather expensive approach to create awareness of such a conclusion, pushing PLM vendors into a competitive pre-sales position for so much detail.

Future Aerospace

Kenny Swope from Boeing talked us through Boeing's potential journey towards a Model-Based Enterprise. Boeing has always challenged itself and its partners to deliver environments close to what is possible. Look at the Boeing journey and you can see that already in 2005 they were aiming for an approach that most current manufacturing enterprises cannot meet. And now they are planning their future state.

To approach the future state, Boeing aims to align its business with a single architecture for all aspects of the company. Starting with collecting capabilities (over 400 in 6 levels) and defining value streams (strategic/operational), the next step is mapping the capabilities to the value streams. Part of the process would be to look at the components of a value stream to see if they could be fulfilled by a service. In this way you design your business for a service-oriented architecture, still independent of any system constraints. As Kenny stated, the aerospace and defense industry has a long history and is therefore slow to change, as its culture is rooted in the organization. It will be interesting to learn from Kenny next year how much (mandatory) progress towards a model-based enterprise has been achieved and which values have been confirmed.

Gearing up for day 2

Martin Eigner took us at high speed through his vision and experience of working in a bi-modal approach with Aras, supporting legacy environments plus a modern federated layer to handle the complexity of a digital enterprise where the system architecture is leading. I will share more details on these concepts in my next post, as during day 2 of PDT Europe both Marc Halpern and I talked about this topic, and I will combine it into a more extended story.

The last formal presentation of day one came from Nigel Shaw of Eurostep Ltd, who took us through the journey of challenges for a model-based enterprise. As there will not be a single model that defines everything, it is clear that various models and derived models will exist for a product/system. Interesting was Nigel's slide showing the different models disciplines can have of an airplane (1948), similar to the famous "swing" cartoon, illustrating that every single view of the product can be entirely different, depending on its purpose.

The next challenge is whether these models are consistent and still describe the same initially specified system. On top of that, even the use of various modeling techniques and tools will lead to differences in the system. And the final challenge is managing change over the system's lifecycle. From here Nigel stepped into the need for digital threads to govern relations between the various views per discipline and lifecycle stage, not only for the physical and the virtual twin. When comparing the needs of a model-based enterprise through its lifecycle, Nigel concluded that using PLCS as a framework provides an excellent fit to manage such complexity.

Finally, after a panel discussion, which was more a collection of opinions, as the target was not necessarily to align in such a short time, it was time for the PDT dinner, always an excellent way to share thoughts and verify them with your peers.

Conclusion

Day 1 was over before you knew it, without a moment of boredom, and I hope the same goes for this post. Next week I will close my review of the PDT conference with some more details about my favorite topics.

 


During my summer holidays, I read some fantastic books to relax the brain. Confessions from Jaume Cabré was an impressive novel, and I finished Yuval Noah Harari’s book Sapiens.

However, to get my PLM-twisted brain back on track, I also decided to read the book "The Death of Expertise" by Tom Nichols, with the thought-provoking subtitle: "The Campaign Against Established Knowledge and Why it Matters."

I wanted to read it and understand if and how this would apply to PLM.

Tom Nichols is an American, so you understand he has many examples from his own experience to support his statement, like the anti-vaccination "experts", the climate change "hoax" and an "expert" tweeting president in his country who knows everything. Beyond these obvious examples, Tom explains in a structured way how, due to broader general education and the internet, the distance between an expert and an average person has disappeared, and facts and opinions seem to have become interchangeable. I talked about this phenomenon during the Product Innovation conference in Munich in 2016: The PLM identity crisis.

Further into the book, Tom becomes a little grumpy and starts to complain about the internet, Google and even Wikipedia. These information resources so often provide fake or skin-deep information, not scientifically validated by experts. It reminded me of a conference that I attended in the early nineties of the previous century. An engineering society had organized this conference to discuss the issue that finite element analysis was becoming more and more available to laymen. The affordable simulation software would be used by non-trained engineers, and they would make the wrong decisions. Constructions would collapse, machines would fail. Looking back now, we can see that the democratization of finite element analysis has led to wider use of simulation technology, providing better products, and when really needed, experts are still involved.

I hold the same opinion about the internet, Google, and Wikipedia. They provide information rapidly. Still, you need to do fact-checking and look at multiple sources, even if you have already found the answer you liked. Usually, when I do my "research" using the internet, I try to find different sources with different opinions, if possible also from various countries. What you will discover is that the internet often holds detailed information, but not in the headlines of these pages. To get down to the details, we will need experts for certain cases, but we cannot turn the clock back to the previous century.

What about PLM Expertise?

In the case of PLM, it is hard to find real expertise. Although PLM is recognized as a business strategy / a domain / an infrastructure, PLM has so many faces depending on the industry and its application. It is hard to find an expert who understands it all, and I assume headhunters can confirm this. A search for "PLM Consultant" on LinkedIn gives me almost 4000 hits, and when searching for "PLM Expert," this number is reduced to fewer than 200. With only one source of information (LinkedIn), these figures do not really give an in-depth result (as expected!).

However, what is a PLM expert? Recently I wrote a post sharing the observation that a lot of PLM product- or IT-focused discussions miss the point of education (see PLM for Small and Medium Enterprises – It is not the software). In this post, I referred to an initiative from John Stark striving for the recognition of a PLM professional. You can read John's follow-up on this activity here: How strong is the support for Professional PLM? Would a PLM Professional bring expertise?

I believe that when a company understands the need for PLM, it has to build this knowledge internally. Building knowledge is a challenge for small and medium enterprises. It is a long-term investment contributing to the viability of the company. Support from a PLM professional can help. However, like the job of a teacher, it is about the skill set (subjects, experience) and the motivational power of the person. A certificate alone won't help you select a qualified person.

Conclusion

We still need PLM expertise, and it takes time to build it. Expertise is something different from an (internet) opinion. When gaining PLM expertise, use the internet and other resources wisely. Do not settle for the headlines of an internet page. Go deeper than the marketing pages of PLM-related companies (vendors/implementers). Take time, and hire experts to help you, not to release you from your responsibility to collect the expertise.

 

Note: If you want to meet PLM experts and get a vendor-independent taste of PLM, join me at PDT Europe 2017 on 18-19 October in Gothenburg. The theme of the conference: Continuous transformation of PLM to support the Lifecycle Model-Based Enterprise. The conference is preceded on 17th October by CIMdata's PLM Roadmap Europe 2017. Looking forward to meeting you there!

 

 

In 1999, I started my company TacIT in order to focus on knowledge management. The name TacIT came from the term tacit knowledge: the knowledge an expert has, combining knowledge from different domains and making the right decision based on his or her experience and intuition. Tacit knowledge is the opposite of explicit knowledge, which you can define in rules. In particular, large companies are always looking for ways to capture and share knowledge to raise the tacit knowledge of their employees.

When I analyzed knowledge management in 1999, many businesses thought it was just about installing an intranet. At that time, it became fashionable to have an internal website where people published their knowledge. Wikipedia had not yet launched. Some people got excited about the intranet capabilities; however, a lot of information remained locked or hidden. What was clear to me at that time was that knowledge management as a bottom-up approach would not work in an organization, for the following reasons:

  • In 1999 knowledge was power, so voluntarily sharing your knowledge was considered to more or less reduce your job security. Others might become as skilled as you. A friend of mine was trying to capture knowledge from experts in his domain, and only people close to retirement were willing to speak with him. Has this attitude changed in the meantime?
  • It takes time to share your knowledge, and in particular for busy experts this is a burden. They want (or need) to move on to the next job, not spend "useless" time describing what they have learned.

My focus on knowledge management disappeared in 2000 as I got dragged into PLM, with the excuse in mind that PLM should be a kind of knowledge management too.

No knowledge management in PLM

In theory, the picture representing PLM is a circle, where through iterations organizations learn to improve their products and better understand how their products are perceived and how they perform in the market. However, the reality was that PLM was used as an infrastructure to transfer and share information mainly within the engineering disciplines. Each department had its own tools and demands. Most companies have silos for PDM, ERP, and Services, and people have no clue which information exists within the organization. Most of the time, they only know their own system and, even worse, they are the only ones who know where their data is stored (or hidden, when you talk to colleagues).

When PLM became more and more accepted as the backbone for product information in a company, there was more attention for a structured manner of knowledge management in the context of lessons learned. Quality systems like ISO 900x provide guidance for processes of quality improvement. Various industries have their own quality methodologies (APQP, 8D, CAPA), all meant to ensure quality improves in a learning organization. 8D and CAPA are examples of issue management, which is a must-do for every PLM implementation. It is the first step in sharing and discovering commonalities and trends related to your product, your processes, and your customers. When issues are solved by email and phone calls, the content and lessons learned often remain hidden from the rest of the organization.

Still, storing all information in one PLM system is not what I would call knowledge management. After all, my garbage bin (I had a huge one) contains all my written notes and thoughts. Would anyone be able to work with my environment? No!

Knowledge Management is an attitude

When organizations really care about knowledge, it should be a top-down guided process. And knowledge is more than storing data in a static manner in a central place. Let's have a look at how modern knowledge management could work:

Structured information

In a PLM system you will find mainly structured information, i.e., Bills of Materials containing parts, documents/CAD models/drawings describing products, catalogs with standard parts, suppliers, and in modern environments perhaps even the issues related to these information objects and all the change processes that have been performed on parts, products, or documents.

This information already becomes valuable if companies spend time upfront on planning and creating the context of the information. This means attributes are important, and so is maintaining relationships between the different types of information. This is the value a PLM system can bring beyond a standard document management system or a parts database: information in the right context brings much more value.

For example, a "where used" of a part, not only in the context of a BOM but also in the context of suppliers, all issues, all ECRs/ECOs, projects, or the customers where it is implemented. It could be any relation, starting from any relevant information object, as the sketch below illustrates.
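To make this tangible, here is a minimal sketch in Python of the idea of typed relationships around an information object. All part numbers, relation types, and records below are hypothetical assumptions for illustration; a real PLM system would resolve such a query against its database, not an in-memory dictionary.

```python
from collections import defaultdict

# Every information object is a node; typed relationships
# (BOM usage, supplier, issue, ECO, ...) are directed edges.
relations = defaultdict(list)  # (relation_type, source) -> [targets]

def link(relation_type, source, target):
    relations[(relation_type, source)].append(target)

# Hypothetical sample data for one part
link("bom", "bracket-100", "assembly-A")      # used in an assembly
link("supplier", "bracket-100", "ACME Corp")  # sourced from a supplier
link("issue", "bracket-100", "ISSUE-42")      # a reported quality issue
link("eco", "bracket-100", "ECO-2017-008")    # a change order touching it

def where_used(part, relation_types=("bom", "supplier", "issue", "eco")):
    """Collect every context the part appears in, across relation types."""
    return {t: relations[(t, part)] for t in relation_types if relations[(t, part)]}

print(where_used("bracket-100"))
# {'bom': ['assembly-A'], 'supplier': ['ACME Corp'], 'issue': [...], 'eco': [...]}
```

The point of the sketch is that the value sits in the typed links, not in the objects themselves: one query starting from any object surfaces its full context.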

Creating rich data in context does not happen without a business change. People creating the relationships and attribute values need to be rewarded for that. Often it is the opposite.

“Do your job as fast as possible and do only what is necessary to deliver now” is often the message from a short-sighted manager who believes that spending time on the “NOW” is more important than spending time on the “FUTURE.”

Managing information so that it becomes valuable in the future is an investment that needs to be made in the world of structured data. Once done, a company will discover that this investment improves the overall performance of the company, as time spent searching drops (from 20%+ to 5%) and people are enabled to reuse instead of reinventing things, or worse, re-experiencing issues.

There is more structured information out there.

Of course, companies cannot wait a few years until structured information becomes usable. Most of the time there is already a lot of information in the various systems a company uses. Emails, the ERP system, the PDM system, and file directories may already contain relevant information. Here, modern search-based applications like Exalead or Conweaver (for sure there are more apps in the market; these are the two I am familiar with) will help to collect and connect information in context coming from various systems, as sketched below. This allows users to see information across disciplines and across the lifecycle of a product.
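A minimal sketch of this search-based-application idea, assuming the simplest possible case: records crawled from several source systems are connected on a shared key (here a hypothetical part number), so that one lookup shows the cross-system context. The system names and records are illustrative only, not how any of the mentioned products actually work.

```python
# Illustrative extracts from three hypothetical source systems
erp_records = [{"part": "bracket-100", "stock": 42}]
pdm_records = [{"part": "bracket-100", "drawing": "DRW-0815.pdf"}]
mail_records = [{"part": "bracket-100", "subject": "Corrosion complaint"}]

def build_index(sources):
    """Merge records from all source systems into one context per part."""
    index = {}
    for system, records in sources.items():
        for record in records:
            context = index.setdefault(record["part"], {})
            context.setdefault(system, []).append(record)
    return index

index = build_index({"ERP": erp_records, "PDM": pdm_records, "Email": mail_records})
print(index["bracket-100"])
# one part, with its context gathered across three disconnected systems
```

The hard part in practice is of course the matching key: real systems rarely share clean identifiers, which is exactly the gap these search-based applications try to close.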

Still, these capabilities are not really knowledge management, as they do not increase the tacit knowledge of a company.

How to collect tacit knowledge?

Static information collection does not contribute to tacit knowledge; it provides some visibility into what exists and might help with explicit knowledge. Tacit knowledge can only be collected through an active process. People in an organization need to be motivated and stimulated to share their story, which is more than just sharing information. It is the reasoning behind why certain decisions were taken that helps others learn. Innovation and learning come from associating information from different domains and creating opportunity and excitement to share stories. This is what modern companies like Google and Apple do, and it is somewhat the same way information is shared at the coffee machine. This is the primary challenge: instead of an opportunistic approach to knowledge sharing, you want a reliable process of knowledge sharing. The process of capturing and sharing tacit knowledge could be improved by assigning knowledge agents in a company.

Image courtesy of www.atlassian.com

A knowledge agent has the responsibility to capture and translate lessons learned. For that reason, a knowledge agent should be somebody who can capitalize on information, storing and publishing it in a manner that allows the information to be found again in various contexts. The advantage of such a process is that knowledge is obtained in a structured manner. In the modern world, a knowledge agent could be a community owner/moderator actively sharing and publishing information. Strangely, knowledge agents are often considered overhead, as their immediate value is not directly visible (as with many PLM activities), although the job of a knowledge agent does not need to be a full-time one.

I found a helpful link related to the knowledge management agent here: 7 knowledge management tips. The information is not in the context of product development. However, it is generic enough to consider.

https://www.atlassian.com/it-service/7-knowledge-management-tips

Conclusion

Many companies talk about PLM and Knowledge Management as equivalents of each other. It should be clear that they are different, although partly overlapping in purpose. It is important to understand that both PLM knowledge and general Knowledge Management will only happen with a top-down strategy and motivation for the organization, either by assigning individual people to become knowledge agents or by having common processes for all to follow.

I am curious to learn:

  • Is knowledge management on your company's agenda?

and if Yes

  • How is knowledge management implemented?