16/11/2019

PLM: What was that about?

A decade ago I started this column by explaining what PLM is meant for. In the more recent episodes I have focused mainly on the details of PLM. This first column after the summer holiday is a good opportunity to take a wider view again.

The story started about 30 years ago, when data was mainly kept on floppy discs. Before that, drawings were neatly stored in archives. You had to, because those big rolls filled a cabinet quickly. With the advent of CAD systems, that did not really change: you made a print of the 2D model that went neatly into the archive, while the file itself was written to a floppy that ended up in a box in your cabinet. As the number of boxes grew, a problem arose. When the drawing had to be changed, the change had to be made in the file on the floppy, but in which box was that floppy stored?

By the mid-1980s, the price per byte of hard disk storage had dropped to the point that storing large numbers of CAD files became affordable, and various software vendors came up with the idea of offering digital archives in which CAD files could be systematically stored and managed. They called it EDM: Engineering Data Management.

Very soon, at the beginning of the 1990s, it became clear that not only engineering but also production, purchasing and other departments could benefit from such a system. That software was an order of magnitude more expensive, because it had to serve many more users, and it therefore needed a new name: Product Data Management, PDM for short.

As the diversity of users increased, so did the number of functions in the software. The different PDM products grew rapidly, and suppliers were in need of yet another name. All sorts of names were coined, but around the turn of the century it became clear that PLM (Product Lifecycle Management) was going to win. That name has now been broadly accepted for 20 years, and it looks like it will remain in use for a while. Note that the word ‘data’ has disappeared: it is no longer the product data but the product life cycle that is managed.

The desire remains to make all data around the product accessible through one system, and new developments in IT are opening up new possibilities to gather even more data. The Internet of Things (IoT) can collect data about the use of individual products through the product’s built-in, internet-connected chips. We have wanted that for a long time; now it is possible. The only question is: how do you make that gigantic stream of data accessible and secure without conflicting with the customer’s privacy?

With simulation of 3D models, you can run a simulation model in parallel with the physical product. The model then becomes a digital twin of that product, and you can analyze product behavior based on the product’s actual condition. But keeping the model consistent over the product’s entire life, through all maintenance operations, and doing so for every individual product, is not a trivial task.

Another problem is that, as the scope of PLM expands, more and more of the data comes from other software. Linking data between different systems has always been a problem, and it still is. The easiest, though not necessarily the cheapest, solution is still to leave the coupling problems to the software supplier by sourcing the whole system from a single vendor. In practice, there are all sorts of good reasons for not wanting that.

PLM remains a complex challenge for many companies. If you are a big player, the problem is that your ambitions run ahead of what the supplier can deliver. If you are a small business, the big suppliers offer much more than you need. They all have a small-business version at a reasonable price, but every company has its peculiarities, and covering those costs a fortune in additional licenses.

Strange, actually, that PLM is so hard to implement. Why not develop an app for each PLM function, so that Google can playfully collect all the data the user types in? Remember, however, that Google gives not one answer to a question but thousands, and the first is not always the correct one. Also, you have no idea for whose questions your data will end up being used. It will remain a bit tricky, but PLM is indispensable.

Henk Jan Pels
Associate professor of business informatics, TU Eindhoven