Webcast on-demand
In this IEEE Spectrum webcast, presented in conjunction with Derek Piette of PTC, you will learn ways to rethink your systems engineering approach to combat complexity in your product development environment.
For more on Systems Engineering visit our resource center.
Engineering executives are increasingly challenged to accelerate the delivery of innovative products and manage increasing product variants with fewer resources, while improving quality. The presence and increasing importance of software in many products introduces additional complexities that cannot be managed without an effective Systems Engineering approach and supporting technologies.
Companies that have an effective Systems Engineering approach in place have achieved:
- Continuous requirements management processes.
- Low levels of rework late in the development cycle.
- One single source of information and high levels of traceability.
- High levels of product compliance and safety protecting brand loyalty.
- Predictable and on target delivery.
Desktop Engineering: Check it out - "Managing Product Complexity with Systems Engineering"
Systems Engineering with PTC – Watch the video
Video Transcript
Hello and welcome to the IEEE Spectrum online presentation, “Rethink Your Systems Engineering Approach to Combat Complexity in Product Development.” I’m Dexter Johnson and I’ll be moderating this presentation.
Now I would like to introduce our presenter for today, Derek Piette, Director of Product Management at PTC. Derek is currently responsible for the product direction of PTC’s Systems Engineering solution within the ALM segment. Mr. Piette also manages the integration between Windchill and PTC Integrity. He has spoken at numerous conferences and events on the topics of hardware/software interaction, electronics, high-tech, and PTC’s product direction. Since joining PTC in 2003, Mr. Piette has held several positions within product management involving CAD Data Management, CAD Integrations, Requirements Management, and complete Product Structures. Prior to joining PTC, Mr. Piette worked as an Electrical and Systems Engineer for the semiconductor capital equipment supplier KLA-Tencor. Mr. Piette attended WPI, where he earned his Bachelor’s degree in electrical engineering. With that, Derek, if you’d like to begin, feel free.
Thanks, Dexter. Good morning and good day, everyone. As Dexter mentioned, today’s presentation and webcast will be about rethinking or thinking about how to improve your system engineering approach to combat complexity in product development. First, let me just introduce a little bit about PTC for those of you who don’t know PTC very well.
First of all, we have a number of different product brands down the left hand side of the screen. The primary ones focused on system engineering are Windchill, PTC Integrity, Mathcad, Creo, and some portions of Arbortext, but these are all the brands that PTC offers.
We offer solutions in a number of different areas. Some of them are around corporate management up at the top, which are more enterprise or engineering enterprise-centric. Then the areas down the pillars on the left center and right of the corporate management side, are focused around hardware and software engineering, supply chain and manufacturing, and sales and service, where we offer a wide variety of opportunities for our customers to develop and design their product information, capture that product information, interact with suppliers and manufacturers, as well as support in the sales and service organization for their product. All of these are different solution offerings that PTC provides our customers including the product brands to the left.
Today’s focus will obviously be on the systems engineering aspect, but you’ll notice that a lot of these other solutions in this space also interface or interact well with systems engineering to provide a holistic approach to systems engineering methodologies.
With that, I want to talk a little bit about some of the challenges today in product development. Then, after those challenges are discussed, I want to go through some of PTC’s methodologies and approaches for helping today to solve these challenges.
Let me give you an example from the industrial equipment arena. This is a pretty large tractor. Tractors today are considered high-tech agricultural equipment. They’re no longer the machines of yesteryear, where a single operator would get in, drive, figure out how to plough, how to work the field, and how to do the things needed to get significant yield out of their soil and their land.
Today, many of these machines are highly complex, software-intensive, systems-engineered products, where the operators themselves are secondary backstops, individuals supported by the complete agricultural farming design. These machines are GPS controlled, software driven, and defined across the industry for how to provide the best yield and how the whole ecosystem is involved.
The systems engineering methodology comes into play here: not just the hardware, but also the software, the users, the environment, the system as a whole. All kinds of input and information are brought together to arrive at the best methodologies and the best results.
When you think about systems engineering in high-tech areas, a lot of times we think about things like airplanes or automobiles, where software is being included at a higher rate of change and with a growing number of lines of code. In this case, the same is true even of agricultural equipment. This is becoming a more and more pervasive pattern across the products that are developed, at least among our customers.
Let’s talk about the next level: when you design these products, the methodologies for improving or maintaining quality while handling the significant complexity of these designs. You’ll notice here on the left a timeline showing where risks or issues can be either resolved or left unresolved. From the time you identify an issue to the actual cost to resolve, repair, or replace on the right hand side, a number of significant things can happen to your products: you may get bad press, sales may be impacted, your stock price could drop, and eventually your brand loyalty can really take a hit. All of these are challenges you need to be concerned about when developing products, because they can significantly impact the bottom line and, eventually, the company itself.
As you’ll notice here, we have some information on federally mandated automotive recalls over a period of history. Typically, you’ll notice that a lot of vehicles are recalled, a lot of pieces of equipment are recalled on a yearly basis, and a number of child safety seats are in this space as well. All of these equate to the need for rapid resolution of issues, once they are identified, through either replacement or repair.
As we move along, the goal here is really to identify that systems engineering approaches and methodologies need to be taken into consideration to solve these problems. There are a number of significant market drivers here on the left: quality, as we just talked about, profitability, globalization, and product variation. Product variation is becoming increasingly important, as customers want to build common product platforms and deliver variations on those platforms quickly, easily, and rapidly, meeting time-to-market and quality goals and, of course, gaining the significant benefit of beating their competitors to those markets.
On the right hand side, the challenges are there. How does the visibility of the system really become known to customers? How do you handle the velocity of change, that is, the number of different changes happening across various disciplines? How do those changes interact with one another? That leads into the collaboration aspect. Lastly, on the supply chain, how do you include your suppliers within the engineering and design processes to develop and maintain these complex systems? All of these areas are contributors. These are only four of the major ones; there is obviously a significant amount that could be added to this list, but these are the four where we’ve identified significant value in being able to address them.
As we move along here, I want to talk a little bit on these challenges and get you to think about and consider methodologies of how to improve and spend less time reworking and more time innovating, based on some of these issues.
Today, you’ll notice that many companies are focused on what we identify as the reengineering approach. This is the typical, traditional engineering methodology that a lot of companies are still following today: a design, build, and fix methodology. With this, there is a lot of heavy product integration or system integration testing on the right hand side of the V-Model. I’ll get into the V-Model a little bit, because I know systems engineers really identify with the V-Model. This is a simplistic view of the V-Model, where product definition and integration, verification, and validation come together.
You’ll notice a lot of that occurs on the right hand side, where reworking the issues, changes, or problems that are found costs significant revenue to resolve and fix. The ideal approach, and the methodology toward which we want to drive our customers, is to engineer upfront and not reengineer: to focus on the upfront analysis and design approach, where continual verification and validation of product information happens before you actually go out and build a product, commit hardware, cut physical metal, or define physical components in your system.
Focus more on improving quality at a lower cost early in that process: through collaboration, through iteration, through general connectivity of the different disciplines of hardware, software, and systems. Really focus on the definition and analysis approach. I’m going to go through some areas of where that fits and how those kinds of symptoms can be identified.
As we look at this space, one of the ways to identify ineffective systems engineering processes is to ask: do you have disconnected or fragmented requirements, whether continually changing, continually modified, missed, untested, unvalidated, or unverified? Who is looking at them? Who can see them? How do we know that they’re traceable to other information, that they’ve been effectively covered, and that they’ve been effectively communicated to the individuals in the organization who need them?
How do you do true requirements management and engineering, so that you spend more time on the left side of this V-Model, at the upfront design stages, and less time on the back end, where you find yourself saying, “We wrote these requirements and we implemented the products, but we didn’t really design to those requirements, so now we need to go back and update our requirements documents to match”? That’s the wrong approach to managing and controlling your requirements definition and management processes.
The next one here is about levels of rework late in the development cycle. To reiterate the previous slide: spend less time on the back end reworking and continually changing things because you pushed off those changes, or waited for them, until later in the development or implementation.
The last one here is a more general issue of traceability, whether from a reporting perspective, to prove to governing bodies that we validated our requirements and our product through this level of traceability, either via reporting or by physically looking in the system, or traceability for future product direction, future implementation, and future ideas for developing new products. How do you know what you did previously and either repeat it or avoid repeating it, depending on the results of the implementation?
Some more implications of that approach are really about things that you cannot do. Let’s look at a list of things that become challenges or issues with an ineffective approach. First, effective tradeoff analysis: which direction do you want to go? Do you choose direction A or direction B? Which one is the right approach and the right methodology for the products you’re developing, whether one is more effective and efficient in a given amount of time, or another will yield higher revenue from your customers? Then, how do you collaborate across engineering disciplines? There is a challenge today: many times information is thrown over the wall from one discipline to the next, without truly being able to connect those two disciplines to one another when the product is released or needs to be modified.
This idea of reusing information: how do you effectively reuse data? This is really about turning reused information into assets, so not just taking a document and highlighting it or crossing things out, but realistically leveraging each item or element within your environment, whether it be your physical CAD data, your simulation requirements, or your tests. All of these areas can really improve your ability to deliver new variations or new options for products on common platforms.
The next one, which is a huge, growing trend especially in a number of different verticals where we see our customers, is innovation through software. A lot of hardware these days is very static or stable, but software variation or software updates can really change and improve product performance and capabilities without ever physically changing hardware.
Then lack of integration, verification, or validation, so continual verification and validation throughout your lifecycle. You have this challenge of finding errors upfront, but you can’t really solve them or eliminate them until later on in the design cycle, so really being able to improve and handle verification and validation early.
Then this idea of rapidly managing change driven by software. This is similar to the innovation one, but focused more around constantly changing software. Software is never completed and never done. It’s not like hardware, where you can build something, it’s static, and you may need to tweak it based on some criteria. Software is ever-evolving and ever-changing. By the time you release your product to market, you may already have a software update ready, because those software changes come late. How do you handle and manage that change of software late in the process?
Then there’s the inability to detect or communicate requirements changes, which is usually a significant one. You get a late requirement in from your supplier or from your customer, but you’re already a significant way down the development process, so how do you take that change, validate its impact, and communicate it to whoever needs to interact with or act on it?
Then lastly, this ability to close the loop of what information is delivered. This validation and verification of, “Here’s what we said we were going to design or here’s what we designed to or planned to design to, and here’s what we physically delivered,” so these kinds of communication processes.
There’s then this next idea. What if you could optimize this design tradeoff information to ensure that these requirements are fully met, taking into consideration a number of different factors? Product development involves a lot of different tradeoffs when you design your products, whether from compliance and regulation, from improving your product costs, or from shortening your time to market. Green products today are also significant drivers for defining what tradeoff decisions you need to make. These are all different areas of information you need to take into consideration when driving innovation or making sure you can optimize your design based on a number of factors. Taking all of these pieces into consideration is typical for many companies.
In addition to that point, you have a number of different engineering disciplines that need to be identified. Here you’ll notice three: hardware, which can include both mechanical and electrical; software, which could be embedded software, IT software, or systems software; and systems, which could be simulation, modeling, or architecture, among other things. All of these disciplines really need to communicate with one another when you talk about effective program and project management: identifying issues, investigating those issues, and managing changes across all these different areas. Each of these R&D disciplines has processes that are affected by, or interact with, other disciplines in other areas. Really, we’re talking about collaboration and communication across these areas when you think about change and process management.
Let’s talk now about reusing product information. We talked about all of the challenges; now what about methodologies for reusing product information? Here you may have some mechanical product and its requirements information. You have some validation and verification methodology that you’ve done. You also have some attributes or field information and, of course, the history of how it was designed. You have all of this information identified, but you need to leverage it for the next version of the project. How do you take that information in context? I’m within this design and I want to make a number of choices; I want to understand how it was designed and why it was designed, identify all that information, and reuse it for the next version or next iteration of the project.
In addition, we talk about how you handle two domain spaces. Think about when I’m validating or verifying a product: I have the problem that’s easy to fix but hard to find, because it lives in a virtual world, or I have the problem that’s hard to fix but easy to find, because I can physically, tangibly connect to it. If you draw a line between the identification of an issue and the time or cost to fix it, you have a roughly linear relationship. Defining and identifying the problem early, if you can, and fixing it then will obviously reduce the cost and the rework later on. Identifying and managing traceability ensures things are satisfied.
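The relationship Derek describes between when a defect is found and what it costs to fix can be sketched numerically. The phase names and multipliers below are illustrative assumptions in the spirit of the commonly cited "rule of ten" for defect-fix costs, not figures from the webcast:

```python
# Illustrative only: hypothetical relative cost multipliers per development
# phase, in the spirit of the "rule of ten" (not figures from the webcast).
PHASE_COST_FACTOR = {
    "requirements": 1,
    "design": 10,
    "implementation": 100,
    "field": 1000,
}

def relative_fix_cost(phase: str, base_cost: float = 1.0) -> float:
    """Estimated relative cost of fixing a defect discovered in `phase`."""
    return base_cost * PHASE_COST_FACTOR[phase]

# A defect caught during requirements analysis versus one found in the field:
print(relative_fix_cost("requirements"))  # 1.0
print(relative_fix_cost("field"))         # 1000.0
```

The exact multipliers vary by industry; the point is only that the cost curve rises steeply the later a problem is detected, which is why the talk keeps stressing verification and validation on the left side of the V.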
How do you tie in changes of requirements? A lot of this is requirements and the verification of those requirements. Testing early and testing often of everything, whether that is software testing, simulation of models, simulation of systems, or simulation of the interfaces between those systems, is highly critical to ensure product quality early in this process. And then reusing and leveraging those validation and verification methodologies when you reuse your system models or physical models throughout the system.
Then, lastly here, when we talk about other ways of looking at product innovation or product features, we talk about software. Another challenge is that if you look at each one of these verticals, they all have challenges in the improvement or inclusion of software.
If I take one example here, in the automotive space, 10, 20, 30 years ago, the amount of software within an automobile was almost insignificant. Today, in the high-end class of vehicles, things like Mercedes, Lexus, and so forth, there are over 100 million lines of operating code, if you take every embedded system and add them all up. That’s a significant amount of software, and it really drives innovation, because there are a lot of common physical components, but software is where innovation and modification happen in those spaces.
The same is true in a number of different areas. Another example: software drives aerospace and defense. The Joint Strike Fighter contains a huge amount of software-controlled systems; it’s not just a mechanical device anymore. Software-driven automation and software-driven systems are really driving how these products are delivered to market.
How do you improve your product development processes by taking into consideration, and leveraging, software as a significant driver in that innovation process? Instead of developing a new version of hardware for the system, take the same hardware, leverage it across many platforms, and drive differentiation with different software versions.
Now that we’ve done that, let’s talk about some of those challenges and how they can be addressed today. Let’s look more holistically at how PTC thinks about systems engineering: what are the different areas, how do we bucket these different pieces, and how do we address them with our product portfolio?
We look at requirements management. We look at verification and validation. We look at cross-discipline design. We look at modeling and simulation. Those are kind of the four pillars when I think about and when we think about what are the core aspects within system engineering.
Within that, there are a number of different lifecycle processes that support or connect to these systems. Things like traceability: how do you get traceability from the requirements to the models, to the simulation results, to the test results, to the design itself, and change management across all these different areas? How do you determine change? How do you determine the configuration management aspects of all of these different pieces, whether it be the requirements themselves, the CAD information, or the product data? Product-line engineering, or variation: how do you have different variants of product platforms and define those product platforms? And, obviously, reporting on and compliance of this information.
Taking that into consideration, these lifecycles processes, as I’ve identified here, just give you a general idea of where each one of these fits and how they’re supported. Managing trace relationships across, managing change, managing configuration, obviously I mentioned product line, metrics, and compliance. These compliance levels could be standards or governing bodies of information.
Next, you take those different pieces and reorganize them a bit to identify how they interact. In the center here, you have a number of different system engineering core components. These are the green bars going across where we talk about requirements, architecture, and design allocation and collaboration or design collaboration, and at the right hand side verification and validation.
The point here is to show that verification and validation is not just at the end of the process. You’re verifying and validating the requirements early on in that process, architecturally throughout when the architecture is done through either simulation or verification, through some level of validation processes, or test processes. Collaboration throughout the design. How do you verify that this mechanical design or this electrical design is appropriately satisfied within the software space, or that you have collaborated the interfaces in this software space?
Obviously these lifecycle processes are the foundation for all of these areas where there’s a constant connectivity and process-flow connecting, and including upfront the piece that I haven’t really talked about much. It is this idea of program and/or product and portfolio planning to identify how these product and portfolios really then drive into the actual core disciplines that you actually need to do the work. How do you identify your portfolio information?
All of these are really bounded within identifying the product development of the entire system for managing and capturing the information of your systems engineering approach. This is a methodology of a process, how things flow. It isn’t linear, so it’s not to say that requirements flow to architecture and flow into design, although you may think of it that way; there’s a continual loop. That’s what the circular loops here in the center-right show. Not just validation methodologies, but also communication and collaboration back and forth: “We’ve done some requirements and we’ve maybe started our modeling, but we need to adjust the models appropriately based on some simulation or results, and feed that back into the requirements.” The point here is to identify the continuous, ever-flowing validation and rework or updates going on within these methodologies.
Derek, this is our first poll question. Today’s question is, “What is the most important area of systems engineering improvement to your business?” Is it requirements management and engineering, system modeling and architecture, verification and validation, simulation, or detailed development?
Great. As I review this information, I see a number of different companies focusing on what do you see as the most important, so definitely requirements. That makes a lot of sense, where requirements management and engineering is kind of the leader. It’s basically what I kind of expected in that space, to identify how requirements engineering is really the most important area in your business, as well as system modeling and architecture. I think that makes a lot of sense.
I’m a little surprised that detail development falls at the bottom, but maybe that’s because we’re talking about system engineering and detailed development is a natural feeder into that space. I think this is great information. I appreciate the audience giving that information.
I think with that, let’s just move ahead to the slides and continue on. I think it gives us a good input as to which one is identified as the highest or the least important, so I think we’ve got our information.
Now let’s look at how the complexity of the product information really fits and how do you support it with an infrastructure?
I’m going to go into a little bit of detail here regarding this V-Model, which you can all see here, I’m sure, and identify different methodologies. What we’ve done is taken this V-Model and broken it down into the different disciplines: the software side and, on the hardware side, the electrical and mechanical sides, to identify the different pieces of how it’s connected.
If you take a look at this, those of you who are systems engineers understand the V-Model fairly well. For those of you who don’t, I’ll give a brief overview. It really starts up at the top, where you think at the systems level: how do we do some level of requirements analysis, requirements design, and system design, understanding what it is we want to develop?
Then that feeds into the mechanical and/or electrical and/or software requirements: how do you take them and bring them down to a subsystem or sub-discipline level? That then feeds into analysis and validation of the design, and later into the actual bottom of the V, which we identify as component or detailed development. Then you start to move up the right hand side, where you have component-level testing and verification, up through subsystem and system integration testing, and finally, at the top, full system-level testing, with cross-discipline testing along the way. The pieces here really show that you need cross-discipline change management, cross-discipline validation, and continuous loops of information.
One thing this V-Model doesn’t show is a way for you to continually have these, I’ll call them, small, concentric circles of validation processes throughout. That’s what these little circles here attempt to show: just because it’s a V-Model doesn’t mean it flows linearly from the top left, down to the bottom, and back up to the top right. We want to continually validate that information, not make it a linear methodology and a linear process. That validation process has to be continuous, interconnected, and inter-organized. You need an infrastructure that can really support this increasing complexity in the nature of systems engineering approaches. That’s really the idea here, of how that looks.
As we move on, let’s talk about those four pillars and how we, as a company within PTC, really help support these four pillars. Just to remind you, the four pillars that I covered early on were requirements management, modeling and simulations, cross-discipline design and collaboration, and verification and validation. I’m going to go through each one of those four individually, kind of give you a little more of a deep-dive first into requirements management, and then also show about how we physically support them with our product portfolios.
First of all, let’s talk about requirements management, where the need is focused around decomposing customer requirements, or customer needs if you will, into functional information. How do we know what we want to do? How do you handle those requirements and decompose them appropriately to the various disciplines, optimize or figure out the design tradeoffs, and flow those requirements down to those disciplines? Whether there are design choices or optionality connected there, or failure-mode effects that need to be identified, you get those requirements in context, including some level of parameterization.
Many requirements these days use the term shall or will, but many of them are technically binding because they have constraints or parameterization embedded within them. This engine must produce no more than X horsepower, or this fan will move X CFM, cubic feet per minute. Whatever the methodology for defining those requirements, those are parameters that need to be captured and flowed down to the different discipline areas. How do you handle those pieces?
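The parameterized-requirement idea, defining a requirement once as an asset and binding different parameter values per project, can be sketched in a few lines. The `Requirement` class and field names below are hypothetical illustrations, not PTC Integrity's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    """A reusable requirement template (hypothetical sketch)."""
    req_id: str
    text: str  # template text with {placeholders} for parameters

    def instantiate(self, **params) -> str:
        """Bind project-specific parameter values into the shared text."""
        return self.text.format(**params)

# Define the requirement once as an asset, then reuse it across projects.
engine_req = Requirement(
    "REQ-042", "The engine shall produce no more than {hp} horsepower."
)
print(engine_req.instantiate(hp=250))  # base tractor model
print(engine_req.instantiate(hp=400))  # high-output variant
```

The shared template is what makes the requirement an asset rather than a copied document: every project that instantiates it stays traceable back to the single `REQ-042` definition.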
What we have done as a company is provide an integral requirements management solution, with PTC Integrity as a significant and robust authoring application for managing those requirements: capturing rich text, embedding images, embedding OLE objects, handling tables, and providing rich text authoring management. Managing the reuse of those requirements down at the granular level, and handling parameterization so I can use a requirement as an asset across many different projects, where I define a requirement once but it may have multiple parameters, and I can bind those parameters differently in different projects. Having connectivity to other systems, whether through Word, Excel, or other authoring systems in the requirements space. And supporting a standard exchange format for supply chain collaboration through the RIF or ReqIF exchange formats.
It also provides traceability of those requirements, not just to the software information, but to the physical product and hardware data. You can take a requirement, trace it to a physical product, and as those requirements change, identify which artifacts or which systems in your environment are suspect or need to be flagged for change. It’s really about identifying and collaborating on managing the requirements, tracing them to product information, and identifying change. That holistic, combined offering is what PTC provides with the combination of our two product offerings, Windchill and PTC Integrity.
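As an illustration of the suspect-flagging idea described above, here is a small sketch. The artifact IDs and trace links are invented for the example, and a real tool would store these links in a database rather than a dictionary:

```python
from collections import defaultdict, deque

# Trace links: each artifact maps to the downstream artifacts that satisfy or verify it.
traces = defaultdict(list)
traces["REQ-101"] = ["DESIGN-7", "TEST-22"]
traces["DESIGN-7"] = ["PART-550"]

def mark_suspect(changed_id, traces):
    """Return every downstream artifact made suspect by a change,
    via breadth-first traversal of the trace links."""
    suspect, queue = set(), deque([changed_id])
    while queue:
        for downstream in traces[queue.popleft()]:
            if downstream not in suspect:
                suspect.add(downstream)
                queue.append(downstream)
    return suspect

print(sorted(mark_suspect("REQ-101", traces)))
# → ['DESIGN-7', 'PART-550', 'TEST-22']
```

Changing one requirement flags its design, its part, and its test case, which is exactly the "what is now suspect" question the traceability is meant to answer.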
Moving on to the next set, we talk about modeling and simulation. Here the idea is representing models and requirements within the same ecosystem. Many companies are looking at developing models and using models to capture and manage requirements instead of text-based requirements. That’s an evolving trend that we see a lot of companies wanting to move to, but it’s still growing. It’s really about identifying and building out functional, logical, and physical structures, and being able to identify how they decompose.
It’s the adoption of SysML and UML methodologies for defining these system models. SysML is a fairly new standard that has really been gaining adoption over the last few years. UML has been around a long time, obviously, in the software space. Parameterization, again, is another key factor in taking that information. Then, obviously, reuse: how do you take your models, design upfront for that space, and identify the reuse perspective?
Within our system, from a modeling and model-traceability offering in our PTC Integrity environment, you can capture these models from modeling environments. PTC does not have its own system modeling application, but we work with tools in the industry like Sparx, Atego’s Artisan Studio, Rhapsody from IBM, and other applications in that vein.
How do you capture those modeling environments, capture that data, truly manage the information in those models, and have those models be identified and traced to other requirements that are not captured in a model, that may be captured textually or exchanged through a provider or supplier, and really be able to handle that information?
Then there’s managing the change of those requirements within a change process and a change methodology, such that if the models must change, you can identify which requirements must change and who needs to react to those changes. Also, a change in requirements or a change in models identifies which downstream artifacts are then considered suspect.
Really, how do you handle changes to model elements and their connectivity to requirements, and how do you identify which downstream artifacts in the trace are affected? And how do you capture validation of those requirements with test artifacts and test information, which I’m going to get into in a couple of minutes?
The next one is this idea of cross-discipline collaboration, where you talk about having access to the entire design, having a full-level bill of materials for your products, managing cross-discipline change, capturing those models and flowing them down to the system and physical models of a product, and obviously collaborating on the different systems as they evolve.
Here we have the ability to connect the full product information. Having software related into the bill of materials has been a challenge for many companies. Windchill’s bill of materials management ensures that you have the correct information within the correct configuration of the product, lets you identify the correct product information as it evolves over the lifecycle, and captures the change methodologies and change processes for all of this interconnected data so you can manage cohesive change management.
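One way to picture software as a first-class item in the bill of materials is a sketch like the following. The part numbers, revisions, and structure are hypothetical, invented for the example:

```python
# A minimal BOM sketch where firmware sits alongside hardware in the product structure.
bom = {
    "part": "CTRL-UNIT-100", "rev": "C",
    "children": [
        {"part": "PCB-MAIN-12", "rev": "B", "children": []},
        {"part": "FW-ENGINE-CTRL", "rev": "4.2.1", "children": []},  # firmware in the BOM
    ],
}

def flatten(node, level=0):
    """Walk the structure and list every item with its indentation level and revision."""
    rows = [(level, node["part"], node["rev"])]
    for child in node["children"]:
        rows.extend(flatten(child, level + 1))
    return rows

for level, part, rev in flatten(bom):
    print("  " * level + f"{part} rev {rev}")
```

Because the firmware carries a revision in the same structure as the board, a single configuration of the product pins down both the hardware and the software it ships with.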
In addition, there’s the ability to visualize interconnected structures of information. How do you view structures that are connected or interconnected, not at an element-by-element level, but at a structure-by-structure level? Here you may have a product structure, a document-level structure, a mechanical assembly structure, a requirements structure, and a test case structure identified in your system. Being able to see and trace the relationship information between those different disciplines really gives users and the system visibility into, if a change needs to be made, the downstream or upstream impacts of that change.
If a tradeoff study needs to be done, say we want to modify a requirement, update a variation of this information, and produce a new version of this information or a new version of this product, what are the pieces that are all interconnected that need to be reused or could be leveraged in the next version of that system? It’s really about showing cross-product traceability of information.
Then we get into verification and validation. When we say that, it really covers both test cases and test simulation. It’s really about how we handle verification and validation of those requirements, whether through the actual physical implementation or through validation methodologies of test or simulation, as well as capturing and tracing those dashboards and results so we can identify and report information from that analysis process.
Similarly, we have the ability to integrate with a tool. This is one example of verification or validation; we have other methodologies, but a common one in the systems engineering space is MATLAB and Simulink, where customers want to define their system model in a more granular approach, not as a high-level block diagram, but getting to a more physical implementation of that model in the MATLAB/Simulink environment. That means capturing and managing the configuration of that model, putting it under change control, and having it traced, similar to the system model I mentioned earlier, but at a more granular level, so that within the simulation environment I can see which artifacts have changed.
I get visual indications so users know that something has changed. I’m able to capture simulation inputs and outputs. I’m even able to drive that information directly if we’re doing automated code generation, and manage the code generation and its configuration management. You get full traceability of all of these artifacts of information and can identify which pieces need to change, when that change needs to happen, and what could be impacted by that change. That could be based on the simulation results that come out of this, or it could be based on new requirements that come in. It’s about capturing that information, as well as handling and managing repeatable processes. With that, let’s move to our next poll question.
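A rough illustration of keeping simulation runs under configuration control might look like the following. The hashing scheme and the `simulate` stand-in are assumptions made for the sketch, not how any particular tool works:

```python
import hashlib
import json

def run_record(model_name, inputs, simulate):
    """Run a simulation and record inputs/outputs under a content hash,
    so a result can always be traced to the exact configuration that produced it.
    `simulate` stands in for a real solver (e.g. an exported model)."""
    config_id = hashlib.sha256(
        json.dumps({"model": model_name, "inputs": inputs}, sort_keys=True).encode()
    ).hexdigest()[:12]
    outputs = simulate(inputs)
    return {"config": config_id, "inputs": inputs, "outputs": outputs}

# Toy stand-in for a fan model: airflow scales linearly with speed.
record = run_record("fan_model", {"rpm": 1200}, lambda p: {"cfm": p["rpm"] * 0.5})
print(record["config"], record["outputs"])
```

Because the configuration ID is derived from the inputs themselves, rerunning the same model with the same inputs yields the same ID, which is the property that makes results reproducible and traceable.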
Yes, Derek, this is another poll question. The question is “in which areas do you plan to make investments in the next one to two years?” The same setup as the previous poll question, requirements management, systems modeling, architecture, verification and validation, simulation, or finally detailed development.
Here we’re looking at any short-term investments that you’re looking to make in these areas.
Yes, one to two years. I’m going to push out these results for you, Derek. There you go.
Great. This is actually good information. The fact that many of the companies out there are looking at how to invest in system modeling and architecture and in requirements is pretty consistent with many of our customers, where system modeling and architecture and requirements management are the two core areas where companies are looking to invest. That’s over half of the audience; almost 60% of the audience is looking to focus in those two areas. I think that really shows those two areas matter the most. Similarly, I think it would be great to get information on longer-term investments. How does it look from a longer-term investment perspective?
Again, we have a new poll question. This is for the next three to five years.
Similarly to the last one, I think that is probably true in that investing in requirements management and system modeling looks like a longer-term process as well as a shorter-term one, and it’s probably ever-evolving. I think that makes a lot of sense in that companies are really trying to figure out how to handle those. One of the takeaways is that it’s not a simple one- to two-year investment. It’s both a shorter- and a longer-term investment that needs to be made in both of these spaces. I think that makes a lot of sense. That’s great information. I really appreciate the audience responding to that.
With that, we have about 10 minutes left, so I’m going to go into the last couple of slides. From a summary perspective, the way that PTC looks at it is this: in order to really deliver product innovation, which we believe leads to increased product complexity, and complexity obviously makes product quality much more difficult, you need a holistic systems engineering methodology. These lifecycle processes are all the foundation, but you really need investments in focused areas: requirements, simulation, modeling, verification, cross-discipline design, and so forth. Those four key areas drive or support significant innovation, product complexity, and product quality. I think that’s the main message coming out of here, as well as improving and validating that information upfront within a design.
Focus on late-stage changes and rework: you really need to reduce that aspect to reduce product development costs and accelerate time to market. There have been studies done by INCOSE and other groups indicating that if you find a problem really early in the process, there’s a significant saving. Spending an extra 20% upfront on the system model, the system design, or the requirements can save 50% to 60% or more at the end from a cost perspective. That’s really the goal: significantly reducing late-stage rework and really driving that innovation and that validation upfront.
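Taking the talk’s rough figures at face value, the arithmetic behind that claim can be sketched as follows. The absolute numbers are invented for the example; only the ratios matter:

```python
# Illustrative arithmetic only, using the rough figures from the talk:
# spend ~20% more up front on system design to cut ~50% of late-stage rework cost.
base_upfront = 100.0   # nominal up-front engineering spend
base_rework = 400.0    # nominal late-stage rework cost without early validation

without = base_upfront + base_rework
with_early_validation = base_upfront * 1.20 + base_rework * (1 - 0.50)

print(f"without early validation: {without:.0f}")
print(f"with early validation:    {with_early_validation:.0f}")
# 120 + 200 = 320 versus 500, roughly a 36% total saving in this toy scenario
```

The point of the sketch is that a modest up-front premium is leveraged against the much larger rework bucket, which is why the net saving can be large even though up-front spend goes up.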
The next one is this: in order to really define a common platform, or what we call product variants, product line engineering, or options and variants within your product portfolio, you need to do it within a systems engineering approach on a modular product architecture. That drives modular product information, which eventually drives modular bills of materials and the product information that goes out to your customers. Many companies are doing this to improve their market share and their profitability.
With that, as a kind of call to action for many companies, I ask you to look at things as a phased approach, to really think about and assess what the best next steps are for you as a company. These are some of the takeaways. Your requirements management processes: when you look at them, how well are they defined, how well are they connected, or are they disconnected and fragmented? The complexity within your product offering: is there a lot of software-driven innovation driving it? Is it supported by your current technologies and processes? Maybe it isn’t yet. Maybe there’s a way to drive and improve profitability, or improve time to market, based on that and on common platform information.
Then there’s verifying the design and analyzing the process. How difficult is that in your methodologies? How much pain is there in that process, and can you do it without a significant amount of late-stage rework?
Similarly, is the right side of your engineering V too heavy? Is it overweighted? Do you spend too much time there? Yes, you need to do system testing and system validation, of course, and there will always be some level of rework on the right-hand side; that will never go away. But you need the right balance, and the right balance means you really need to start driving more of that work to the left side of the V.
With that, I thank you for your time on the presentation today. I think we’re going to switch over now to our Q&A session.
Yes, that’s right, Derek. I have a question right at the top here. How do you achieve the design collaboration between a lot of people? It seems that in some companies, once a decision is made on a development, then whoever proposed it has to figure out how to do it, pretty much alone.
Yeah, that’s a good question. I think there is definitely a lot of credence to that and a lot of challenges within companies. The answer is somewhat different depending on the size of the company as well. You’ll find that in many smaller companies there is one person doing multiple jobs, wearing multiple hats, and when they need to collaborate there may only be two or three people in that little engineering group or engineering team, and they collaborate with one another.
Let’s talk about it on a larger scale, a larger-sized company where collaboration is challenging. In that case, when you need collaboration between many people, decisions about development processes and how things get done may suffer from a lack of understanding or a lack of defined roles within an organization. Are there systems engineers in your company? Are they defining or identifying what the architecture is? Does that architecture then get some kind of cross-feed or cross-pollination back from the detailed development teams?
I realize that maybe you have an issue where whoever proposed something has to figure out how to deliver it alone, but I think if you have traceability and connectivity from the system information to the detailed information to the validation information, and you can see and identify problems, there’s an easier way to trace back up the chain to identify where the challenges are and who owns them. With that kind of development and collaboration methodology, you know whom to communicate with or contact, and you can readily identify and figure out the potential impact of a change.
Is it possible to optimize the design by just using a semiautomatic system? In other words, can users search some information and the software can also retrieve information automatically?
Yeah, that’s a good question. I think automation of a lot of different aspects is coming into play and gaining ground in this space. Whether it is automation of system simulation or automation of verification and validation processes, I think that’s probably where you’re going, right? You’re looking at how to identify system validation. You could also be talking about system interaction as well.
I think there are a number of different tools out in this space that support this. We, as a company, have the ability to handle automation of verification and validation or execution of verification and validation. We have the ability to connect to, let’s say, test systems. Let’s call it a system test environment that runs a system’s information that you’re continually updating software to and you need to run a validation test.
Let’s use an example of an engine controller. That engine controller is constantly getting new software updates, because we want to maximize the performance and get the performance results we want, so we’re constantly pushing new software to that system. How do we automate that? We have a methodology wherein customers take the software from our system, do the software development, and then run the execution tests, which push out the software, run the execution, and feed back the results. There are systems that can be automated in this way to improve that validation process.
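The push-software, run-tests, feed-back-results loop described here can be sketched as follows. The deploy and test functions are toy stand-ins for a real test rig, and the pass/fail rule is invented purely for the example:

```python
def deploy(build):
    """Toy stand-in for pushing a software build to the test system."""
    return {"build": build, "deployed": True}

def run_tests(target, cases):
    """Toy stand-in for executing a test suite against the deployed build.
    Here a case passes if the build number meets the case's required level."""
    return {case: target["build"] >= required for case, required in cases.items()}

def validation_cycle(build, cases):
    """Deploy a build, run the suite, and feed back the results."""
    target = deploy(build)
    return run_tests(target, cases)

results = validation_cycle(build=42, cases={"idle_speed": 40, "max_torque": 45})
print(results)
# → {'idle_speed': True, 'max_torque': False}
```

Each new software drop reruns the same cycle automatically, so the feedback on what regressed arrives with the build rather than weeks later.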
Great. Another question here is how do you synchronize the different lead times between hardware and software? Software deliveries can be weekly, whereas electronics iterations can take up to months. How do you collaborate between the disciplines in this case?
Yeah, that’s a good one. That’s an ever-evolving problem. By its nature, as you mentioned, software is constantly and rapidly changing, on a weekly or even daily basis at this point, whereas hardware itself is pretty static. A lot of that comes down to, which I didn’t really cover here because I was focused a little more on engineering, the program management or project management collaboration aspects: analyzing when certain deliverables or timelines are met so you can manage the project release schedule. Managing the release schedule, I agree, is completely important and needed so that you can identify, “Here is the hardware deadline, or the hardware components,” which may also have long lead times because of suppliers, late-stage hardware implementations, or a long wait for components, while the software is constantly changing.
In a lot of companies, the way that they focus on that is they try to do their hardware as early on in the process as possible, and a lot of the software comes a little bit later. That’s not necessarily the best approach and best case. Realistically, it comes down to managing timeline and project deliverables of information so that you can have complete visibility into it if there’s a delay, or when the next version or supported version of software is coming out, or if there’s a delay on the hardware side.
Realistically, it’s about visibility into how the project teams are collaborating. Instead of having a software project or a software team in one system and a hardware team in another system, the goal is really to have transparency into how the project teams are developing and where they are in that development cycle.
Great. Another question here: does PTC Integrity have its own module for managing different software and firmware versions, or only a gateway to standard software configuration management tools?
Actually, PTC Integrity itself does have the ability to manage software configuration within the system. Just like a software configuration management system you may be familiar with, like Subversion or ClearCase, PTC Integrity has a configuration management system to capture software code at a fully granular level. It also has the ability to provide traceability between a specific line of code in a feature and the requirements that are also captured in the system, so we have very deep, detailed-level granularity. In addition, it has interfaces and integrations to other software configuration management systems, like the ones I mentioned.
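One simple, convention-based way to picture code-to-requirement traceability is sketched below, using an invented `REQ-###` tagging convention in commit messages rather than PTC Integrity's actual mechanism. The commit hashes and messages are made up for the example:

```python
import re

# Convention for the sketch: requirement IDs appear in commit messages as "REQ-<number>".
REQ_PATTERN = re.compile(r"\bREQ-\d+\b")

commits = [
    ("a1f9", "Add overspeed guard (REQ-101)"),
    ("b2c0", "Refactor logging"),
    ("c3d7", "Tune limiter per REQ-101 and REQ-205"),
]

def requirements_touched(commits):
    """Map each requirement ID to the commits that reference it."""
    index = {}
    for sha, message in commits:
        for req in REQ_PATTERN.findall(message):
            index.setdefault(req, []).append(sha)
    return index

print(requirements_touched(commits))
# → {'REQ-101': ['a1f9', 'c3d7'], 'REQ-205': ['c3d7']}
```

Even this crude index answers the core traceability question: given a requirement, which code changes implement it, and given a code change, which requirements does it serve?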
Great. How would you avoid spending time in late rework, especially if it is necessary?
Yeah, in some respects you can never avoid all late-stage changes, right? There is always something, or more than likely something, that’s going to come in late. I think one of the goals, or ways to try to improve that late-stage rework, is really in identifying how you want to organize the different functional areas in your system.
One example that we have found for a lot of companies is that software comes very late in the process. Rework of software is definitely less costly than rework of hardware. Just as in one of the previous questions about continual software change, we have one customer of ours in the high-tech industry whose hardware is done long in advance while the software continually evolves. The software actually keeps getting updated, even on the manufacturing line, until it’s completely done.
Although that’s late-stage rework and it’s definitely necessary, it’s a less costly kind of rework. If you organize your product platform and product so that software can easily be added at the end, with new features or fixes going in very easily and less dependency on hardware changes late in the process, that’s a significant way customers are looking at improving late-stage rework.
I guess we’ll take a couple more questions. First one is how can we make design feasible enough for a minimum amount of investments in testing and maintenance?
That’s a good question about identifying investments in testing. One way is, obviously, that there are a lot of open-source capabilities on the market today. One methodology you could look at, if you’re looking at minimal investment, is open source as one example. However, you need to balance that investment cost against the ability to have full support and full traceability or full information from the provider.
A minimal amount of investment, I agree, is a challenge. When we talk about testing and maintenance, I think the best way to approach them is really to go back to this idea of early validation. How do you do early validation and early support? A lot of this leads back to the software space. I think if you can have robust product information, make the upfront design really solid from a hardware perspective, and continually update software, that goes a long way toward minimizing testing and maintenance.
Let’s use a common example that everybody knows about. Apple is probably the king of this space: they have their hardware platform with minimal changes, and they push out software on a regular basis. They make the product itself extremely flexible and configurable from an application perspective or a modular perspective. In that case, their maintenance level is probably pretty low, in the sense that maintenance is just software development cost versus hardware implementation or hardware update cost.
That’s one way of really trying to help reduce and minimize your investment. Outsourcing is another way of minimizing investment, if you don’t want to do the design yourself. Finding off-the-shelf components is another way of minimizing investment costs.
I think we have time for one more question. In project definition, how prevalent is the use of UML among system engineering participants today?
UML, from the customers that we talk to, is pretty much pervasive everywhere software development is done. I think the SysML aspects, or the advantages of using SysML and the methodologies for using it, are still growing. They are gaining ground, but UML is pretty pervasive in all the companies we’ve talked to. Every customer we talk to uses some level of UML in defining their software development, but not so much in defining their system.
SysML, as I mentioned, they’re kind of looking at its adoption. There’s a lot more education going on, with people trying to leverage it and wanting to leverage it, but they haven’t really gotten there yet. I think they’re trying to use UML in some aspects of systems, but companies are definitely using UML where they’re doing software development.
I’d like to thank our presenter today for making this such an informative hour, our sponsor, and thank you, our audience, for participating in today’s session. We hope you found today’s event valuable and will return for future IEEE Spectrum webcasts. Thank you.
