
In my previous post, the PLM blame game, I briefly mentioned that there are two delivery models for PLM. One approach is based on a PLM system that contains predefined business logic and functionality, promoting out-of-the-box (OOTB) usage of the system as much as possible, which somehow drives toward a certain rigidness. The other approach is one where the PLM capabilities need to be developed on top of a customizable infrastructure, providing more flexibility. I believe this topic has been debated for more than 15 years without a decisive conclusion. Therefore, I will take you through the pros and cons of both approaches, illustrated by examples from the field.

PLM started as a toolkit

The initial cPDM/PLM systems were toolkits for several reasons. In the early days, scalable connectivity was not available, or way too expensive for a standard collaboration approach. Engineering information, mostly design files, needed to be shared globally in an efficient manner, and the PLM backbone was often a centralized repository for CAD data. Bill of Materials handling in PLM was often at a basic level, as either the ERP system (mostly Aerospace/Defense) or home-grown BOM systems (Automotive) were in place for manufacturing.

Depending on the business needs of the company, the target was to connect as many engineering data sources as possible to the PLM backbone – PLM originated from engineering and is still considered by many people as an engineering solution. For this connectivity, interfaces and integrations needed to be developed at a time when application integration frameworks were primitive and complicated. This made PLM implementations complex and expensive, so only the large automotive and aerospace/defense companies could afford to invest in such systems. And a lot of tuition fees were spent to achieve results. Many of these environments are still operational, as they became too risky to touch, as I described in my post: The PLM Migration Dilemma.

The birth of OOTB

Around the year 2000, the first OOTB PLM offerings were developed. There was Agile (later acquired by Oracle), focusing on the high-tech and medical industries. Instead of document management, they focused on the scenario of bringing the BOM from engineering to manufacturing based on a relatively fixed process – therefore fast to implement and fast to validate. The last point, in particular, is crucial in regulated medical environments.

At that time, I was working with SmarTeam on the development of templates for various industries, with a similar mindset. A predefined template would lead to faster implementations and therefore reduce the implementation costs. The challenge with SmarTeam, however, was that it was very easy to customize, being based on Microsoft technology with wizards for data modeling and UI design.

This was not a benefit for OOTB delivery, as SmarTeam was implemented through Value Added Resellers, whose major revenue came from providing services to their customers. So it was easy to reprogram the concepts of the templates and use them as your unique selling points towards a customer. A similar situation is now happening with Aras – the primary implementation skills are at the implementing companies, and their revenue does not come from software (maintenance).

The result is that each implementer considers the other implementers as competitors and is not willing to give up their IP to the software company.

SmarTeam resellers were not eager to deliver their IP back to SmarTeam to get it embedded in the product, as it would reduce their unique selling points. I assume the same is currently happening in the Aras channel – it might be called Open Source; however, what is shared is probably only high-level infrastructure.

Around 2006, many of the main PLM vendors had their various mid-market offerings, and I contributed at that time to the SmarTeam Engineering Express – a preconfigured solution that was rapid to implement if you wanted to.

Although the SmarTeam Engineering Express was an excellent sales tool, the resellers that started to implement the software began to customize the environment as fast as possible in their own preferred manner, for two reasons: the customer most of the time had different current practices, and secondly, the money came from services. So why say No to a customer if you can say Yes?

OOTB and modules

Initially, the mid-market templates of the leading PLM vendors were not just aiming at the mid-market. All companies wanted to have a standardized PLM system with as few customizations as possible. This meant that the PLM vendors had to package their functionality into modules, sometimes addressing industry-specific capabilities, sometimes areas of interfaces (CAD and ERP integrations), or generic governance capabilities like portfolio management, project management, and change management.

The principle behind the modules was that they needed to deliver data model capabilities combined with business logic/behavior. Otherwise, the value of the module would not be relevant. And this causes a challenge. The more business logic a module delivers, the more the company that implements the module needs to adapt to generic practices. This requires business change management; people need to be motivated to work differently. And who is eager to make people work differently? Almost nobody, as it is an intensive coaching job that cannot be done by the vendors (they sell software), often cannot be done by the implementers (they do not have the broad set of skills needed), nor by the companies (they do not have the free resources for that). Precisely the principles behind the PLM Blame Game.

OOTB modularity advantages

The first advantage of modularity in the PLM software is that you only buy the software pieces that you really need. However, most companies do not see PLM as a journey, so they agree on a budget to start, and then every module that was not identified before becomes a cost issue. The main reason is that the implementation teams focus on delivering capabilities at that stage, not on providing value-based metrics.

The second potential advantage of PLM modularity is the fact that these modules are supposed to be complementary to the other modules, as they should have been developed in the context of each other. In reality, this is not always the case. Yes, the modules fit nicely on a single PowerPoint slide; however, when it comes to reality, they are separate systems with a minimum of integration with the core. Still, the advantage is that the PLM software provider now becomes responsible for the upgradability and extendibility of the provided functionality, which is a serious point to consider.

The third advantage of the OOTB modular approach is that it forces the PLM vendor to invest in your industry and future needed capabilities, for example, digital twins, AR/VR, and model-based ways of working. Some skeptical people might say PLM vendors create solutions for problems that do not exist yet; optimists might say they invest in imagining the future, which can only happen by trial-and-error. In a digital enterprise, it is: think big, start small, fail fast, and scale quickly.

OOTB modularity disadvantages

Most of the OOTB modularity disadvantages will be advantages in the toolkit approach and are therefore discussed in the next paragraph. One downside of the OOTB modular approach is the disconnect between the people developing the modules and the implementers in the field. Often modules are developed based on some leading customer experiences (the big ones), whereas the majority of usage in the field targets smaller companies where people have multiple roles, the typical SMB approach. SMB implementations are often not visible at the PLM vendor R&D level, as they are hidden behind the Value Added Reseller network and/or usually too small to become apparent.

Toolkit advantages

The most significant advantage of a PLM toolkit approach is that the implementation can be a journey. Start with a clear business need, for example, in modern PLM, creating a digital thread, and then, once this is achieved, dive deeper into areas of the lifecycle that require improvement. And increased functionality is only linked to the number of users, not to extra costs for a new module.

However, if the development of additional functionality becomes massive, you have the risk that low license costs are nullified by development costs.

The second advantage of a PLM toolkit approach is that the implementer and users will have a better relationship in delivering capabilities and therefore, a higher chance of acceptance. The implementer builds what the customer is asking for.

However, as Henry Ford said: if I had asked my customers what they wanted, they would have asked for faster horses.

Toolkit considerations

There are several points where a PLM toolkit can be an advantage but also a disadvantage, very much depending on various characteristics of your company and your implementation team. Let’s review some of them:

Innovative: a toolkit does not provide an innovative way of working immediately. The toolkit may have an infrastructure to deliver innovative capabilities, even as small demonstrations; however, the implementation and the methodology to implement this innovative way of working need to come from either your company’s resources or your implementer’s skills.

Uniqueness: with a toolkit approach, you can build a unique PLM infrastructure that makes you more competitive than the others. Don’t share your IP and best practices, to remain more competitive. This approach can be valid if you truly have a competitive plan here. Otherwise, the risk is that you are creating a legacy for your company that will slow you down later in time.

Performance: this is a crucial topic if you want to scale your solution to the enterprise level. I spent a lot of time in the past analyzing and supporting SmarTeam implementers and template developers on their journey to optimize their solutions. Choosing the right algorithms and making the right data modeling choices are crucial.

Sometimes I came into a situation where the customer blamed SmarTeam because customizations were possible – you can read about such an example in an old LinkedIn post: the importance of a PLM data model.

Experience: when you plan to implement PLM “big” with a toolkit approach, experience becomes crucial, as initial design decisions and scope are significant for future extensions and maintainability. Beautiful implementations can become a burden after five years when design decisions were not documented or analyzed. Having experience, or an experienced partner/coach, can help you in these situations. In general, it is rare for a company to have experienced PLM implementers internally, as implementing PLM is not their core business. Experienced PLM implementers vary in size and skills – make the right choice.


Conclusion

After writing this post, I still cannot give a final verdict on which is the best approach. Personally, I like the PLM toolkit approach, as I have been working in the PLM domain for twenty years, seeing and experiencing good and best practices. The OOTB approach represents many of these best practices and is therefore a safe path to follow. The undecided points are who the people involved are and what your business model is. It needs to be an end-to-end coherent approach, no matter which option you choose.


I don’t know if it is the time of the year, but suddenly there is again a discussion in the PLM world related to the theme of flexibility (or the lack of flexibility). And I do not refer to some of the PLM supplier lock-in situations discussed recently. In a group discussion on LinkedIn, we talked about the two worlds of PLM-ERP and that somehow we have a status quo here, due to the fact that companies won’t change the way they manage their BOM unless they are forced to or see the value.

Stephen Porter from Zero Wait-State wrote an interesting blog post about using PLM to model business processes, and I liked his thoughts. Here, I brought the topic of flexibility into the discussion.

Then Mark Lind from Aras responded to this post and referred to his own post on Out-Of-The-Box (OOTB) PLM, which ended in a call for flexibility.

However, reading his post, I wanted to bring some different viewpoints to the discussion, and as my response became too long, I decided to publish it on my blog. So please read Stephen’s post, read Mark’s post, and keep the word flexibility in the back of your mind.


My European view

As I have been involved in several OOTB attempts with various PDM/PLM suppliers, I tend to have a somewhat different opinion about the purpose of OOTB.

It is all about what you mean by OOTB and what type and size of company you are talking about. My focus is not on the global enterprises – they are too big to even consider OOTB (too many opinions – too much politics).

But the mid-market companies, which in Europe practice a lot of PLM without having a PLM system, are my major target. They improve their business with tools fitting their environment, and when they decide to use a PLM system, it is often closely related to their CAD or ERP system.

In this perspective, Mark’s statement:

Now stop and think… the fundamental premise of OOTB enterprise software is that there’s an exact match between your corporate processes and the software. If it’s not an exact match, then get ready to customize (and it won’t be OOTB anymore). This is why the concept of OOTB enterprise PLM is absurd.

I see this as a simplification – yes, customers want to use OOTB systems, but as soon as you offer flexibility, customers want to adapt the system. And the challenge of each product is to support as many different scenarios as possible, through configuration or through tuning (you can call it macros or customization). Microsoft Excel is still the best tool in this area.

But let’s focus on PLM. Mark’s next statement:

It doesn’t matter if we’re talking about Industry Accelerators or so called ‘best practice’ templates

Again, this simplifies the topic. Most of the companies I have been working with had no standard processes or PLM practices, as much of the work was done outside a controlled system. And in situations where there was no Accelerator or Best Practice, you were trapped in a situation where people started to discuss their processes and to-be practices (losing time, concluding the process was not as easy as they thought, and in the end blaming the PLM system as it takes so long to implement – and you need someone or something to blame). Also here, Stephen promotes the functionality in PLM to assist in modeling these processes.

PLM is a learning process for companies, and with learning I mean understanding that the way of working can be different and that change is difficult. That’s why a second, new PLM implementation in the same company is often easier to do. At this stage, a customer is able to realize which customizations were nice to have but did not contribute to the process, and which customizations could now be replaced by standard capabilities (or configured capabilities). A happy target for PLM vendors, when the customer changes PLM vendor, as they can then claim the success of the second implementation. However, I have also seen re-implementations with the same software and the same vendor, with the same results: faster implementation, less customization and more flexibility.

I fully agree with Mark’s statement that PLM implementations should be flexible, and for me, this means: during implementations, make sure you stay close to the PLM standards (yes, there are no ‘official’ standards, but every PLM implementation is built around a similar data model).
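To illustrate what I mean by that similar data model, here is a minimal, hypothetical sketch: items with revisions, documents attached to items, and a BOM structure linking parent items to child items with quantities. All class and field names are my own illustration, not any vendor’s actual schema.

```python
# Hypothetical sketch of the data model most PLM implementations share.
# Names (Item, Document, add_bom_line, flat_bom) are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Document:
    number: str          # e.g. a CAD file or a specification
    revision: str = "A"

@dataclass
class Item:
    number: str                                    # part/item identifier
    revision: str = "A"
    documents: list = field(default_factory=list)  # attached Documents
    children: list = field(default_factory=list)   # (Item, quantity) BOM lines

    def add_bom_line(self, child, quantity=1):
        self.children.append((child, quantity))

    def flat_bom(self, multiplier=1):
        """Explode the BOM into (item number, total quantity) pairs."""
        lines = []
        for child, qty in self.children:
            lines.append((child.number, qty * multiplier))
            lines.extend(child.flat_bom(qty * multiplier))
        return lines

# Example: a bicycle with two wheels, each wheel with 32 spokes
bike = Item("BIKE-001")
wheel = Item("WHL-010")
spoke = Item("SPK-100")
wheel.add_bom_line(spoke, 32)
bike.add_bom_line(wheel, 2)
print(bike.flat_bom())   # [('WHL-010', 2), ('SPK-100', 64)]
```

Whatever the system, these few concepts – items, revisions, documents and BOM relations – keep reappearing, which is why staying close to them keeps an implementation maintainable.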

As the metadata and the created files represent the most value for the customer, this is where you should focus. Processes to change, review, collaborate on or approve information should always be flexible, as they will change. And when you implement these processes to speed up time-to-market or communication between departments/partners, do an ROI and risk analysis before you decide to customize.

I still see the biggest problem for PLM being that people believe it is an IT project, like their ERP project in the past. Looking at PLM in the same way does not reflect the real PLM challenge of being flexible to react. This is one of my aversions to SAP PLM – these two trigrams just don’t go together – SAP is not flexible, while PLM should be flexible.

Therefore, this time a short blog post, or rather a long response. Looking forward to your thoughts.
