Doug: Hello. Welcome to Better Business Decisions by Design: A Systems Engineering Case Study. This IEEE Spectrum Tech Insider Webcast is sponsored by PTC. I’m Doug McCormick.
Today’s presenters are Mills Ripley and Derek Piette. Derek Piette is a Product Management Director at PTC, responsible for the company’s application lifecycle management segment and its systems requirements and validation solutions. Derek also manages integration between the PTC Windchill and PTC Integrity solutions. He’s a frequent speaker on hardware/software interaction, electronics and high-tech, and PTC’s product direction. Derek earned his bachelor’s degree in electrical engineering from Worcester Polytechnic Institute.
Mills Ripley is North America Applications Life-Cycle Management Customer Solutions Director for PTC. He has more than 25 years of software and systems engineering experience in both technical and management positions. Mills has worked as a software engineer and systems consultant at Digital Equipment Corporation, in the office of the CTO at Xerox, and as a technical lead and technical sales manager at IBM Rational Software.
Mills is a PMI-certified project management professional and a certified Scrum master. He holds a bachelor’s of science degree in computer engineering from the University of New Mexico, and a Master of Science degree in computer information systems from Regis University.
Now it’s my pleasure to turn the virtual podium over to Mills Ripley to start today’s discussion of Better Business Decisions by Design: A Systems Engineering Case Study.
Mills: Thank you, Doug. As mentioned, I’m here with Derek Piette. We’re both part of the ALM, Application Life-Cycle Management, segment at PTC, and systems engineering solutions are a big part of both of our jobs. A real challenge for any engineering practice is developing a shared view of alternative courses of action, including risks, costs, schedules, and architectural tradeoffs. And in order to withstand the test of time, systems analysis needs to be thorough and dynamic enough to overcome changing conditions in the problem space while leading business and technology teams to that shared understanding.
Each year, INCOSE, the International Council on Systems Engineering, provides vendors such as us with a systems engineering challenge designed to exercise these principles using a hypothetical scenario. In today’s presentation, we explore the PTC solution to a system of systems challenge posed by the INCOSE team.
In the course of solving this challenge, we will illustrate how to apply fundamental system engineering principles to identify shared goals, manage requirements, identify options, design and model alternative solutions, analyze the trade-offs associated with each solution, and select the best alternative. As you’ll see, the best business decisions are not always the most obvious ones. By following the principles of system engineering, we’ll illustrate an approach to problem analysis that can be applied to many complex systems, resulting in more fact-based decision-making and ultimately better business outcomes.
We’re also going to ask you for some introductions by way of some polling questions that are coming up in a little while. We’re going to talk about some major forces of transformation that are impacting how we engineer products. We’re going to talk about systems of smart, connected products. We’re going to go through the INCOSE Tool Vendor Challenge and our solution to that challenge, and we left time at the end for discussion and Q&A.
PTC, our goal, and this is still by way of introduction, is to give customers a product and service advantage. We provide technology solutions that transform how products are created and serviced. You’ll notice there are a few highlighted words there. Transform, that speaks to process change, and we are in the business of process optimization. Create and service, this is both the upfront knowledge needed to design these products correctly along with the manufacturing and servicing of those products.
We have identified seven major forces, some of which are long-standing while others are more recent, that are really transforming product development. In recent years, the advantage from production-centric strategies has begun to diminish, in part because they’ve become commonplace. Optimization of manufacturing production processes is simply becoming the price of entry to compete.
The path of competitive advantage, then, in the modern era, requires a rethinking of pretty much everything from how products are designed, built, and serviced to the underlying business models. Causing the need for this strategic realignment is a set of market forces which are now converging towards a tipping point of fundamental transformation. If you look around the outside of this diagram here, you’ll see this. These are connected as well. They overlap, they interrelate, etc.
Digitization, replacing analog products and service information with full virtual representations that can be leveraged across the value chain: engineering, factory floor, service department, etc. Globalization, the general shrinking of the world by technology that eliminates or lowers economic and geographical divisions and barriers and really opens new markets. Regulation, the enforcement of government rules, non-governmental organizational policies, industry standards, things related to environment, health, safety, and trade.
Personalization, we know we’re getting spoiled. People’s expectations are changing based on interaction with personal devices such as smartphones. They expect the same degree of interaction and tailoring of products and services to accommodate their personal preferences that they get on those everyday devices.
Software-intensive products, we’re talking about integrated systems of hardware and software capable of sophisticated human-machine interaction, diagnostics, and service-data capture. Servitization, this is really a fundamental business model shift where products evolve into integrated bundles of services capable of delivering new value continuously throughout the customer experience.
Finally, and probably most all-encompassing here, is connectivity. A pervasive network of things, often mobile, but embedded with sensors and individually addressable to enable monitoring, control, and communication.
Those are the forces we see that are really starting to come to bear here that are driving towards this transformation. We do have a poll question as promised here. Doug, could you please put that up? Let’s get some introduction from our audience.
Doug: Yes, the poll should be up on your screens. The question is: which three of these forces are the primary influences on your organization? Digitization, regulation, personalization, globalization and connectivity, software-intensive products, and servitization. Please select just three and let us know what your answers are. I will keep you on the line here until the numbers start to stabilize a bit. I think we’re ready to go. Thank you very much.
Mills: Yeah, regulation is big, globalization and connectivity, followed by digitization with servitization bringing up the rear. That’s fairly common. It’s a well-regulated world. Let’s just leave it at that. Again, it’s one of the challenges that you combine with things like globalization and connectivity, software-intensive products that are really driving some change. I appreciate that.
I said we were going to talk about systems of smart, connected products. What’s changed in the industry is that the bar for product complexity has risen. Products are now connected. They’re software-intensive systems of systems. They’re well beyond just mechatronics. This graphic illustrates how combine harvesters, for instance, coordinate via GPS in order to optimize the harvesting process. The agricultural lifecycle goes well beyond what’s illustrated there.
Also part of this system are ground sensors, vehicle sensors and subsystem actuators, interfaces with weather systems, and communications that are not simply command-and-control but also peer-to-peer.
In this example, agricultural systems of systems are not limited to harvesting but encompass the entire agricultural lifecycle from what crops you plant to where you plant them to how you feed them and water them, how you protect them from pests, etc., all with an eye towards maximizing yield-per-acre while optimizing water, pesticide, fertilizer, and fuel use.
IEEE Spectrum Magazine did an entire issue on agricultural productivity in June of 2013. You should take a look at that if you’ve got access to it. One article that really stands out, because they talk about it from a personal perspective, is called “Farming by the Numbers.” It talks about how precision technologies are really being brought to bear in modern agriculture.
One other thing on this graphic you see here, this illustration is really the cover of PTC’s “Internet of Things” e-book and that is available for download at our website. Probably the quickest way to do it is just Google “PTC Internet of Things e-book” and you’ll get pointed right to it.
Smart, connected products. Let’s talk about the evolution of these smart, connected products and the different types of products we have seen out there. The first type is a physical product, and it’s defined by products comprised of mechanical and electrical components. For this product type, the user interacts with the product through mechanical controls.
After the product is sold, the service organization (that is, the manufacturer, the manufacturer’s partner, a dealer, or an independent third party) interacts with the physical product only through discrete and reactive service events. Physical diagnosis and manual part replacement, that’s what we’re talking about, and the service organization further needs to forecast its spare parts inventory based on historical demand. That’s the best information available with this type of product.
The second product type in the product continuum introduces a new aspect to the physical product, and that’s a digital component. The product has expanded to include sensors and computing capacity and embedded software and a user interface. In so doing, the product has become what is commonly referred to as “smart.” The smart product is capable of adjusting the product’s performance based on the data it is collecting. Think of your car’s cruise control, for example. Once you set the speed of the vehicle, it automatically accelerates or decelerates depending on the physical environment. That is, you’re going uphill, you’re going downhill, etc.
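To make the cruise-control example concrete, here is a minimal sketch of how a smart product adjusts its own performance from sensed data. This is an illustrative proportional-control toy model, not any vendor’s actual algorithm; the gain, the grade disturbance, and the simplistic vehicle dynamics are all assumptions.

```python
# Illustrative sketch of a "smart" product adjusting itself from sensed data:
# a proportional cruise controller. All dynamics here are hypothetical.

def cruise_step(current_speed: float, set_speed: float, grade: float,
                kp: float = 0.5) -> float:
    """One control step: throttle correction proportional to the speed error,
    with road grade (uphill positive) acting as a disturbance."""
    error = set_speed - current_speed
    throttle = kp * error                    # proportional control term
    return current_speed + throttle - grade  # very simplified vehicle model

speed = 60.0
for _ in range(20):  # climbing a steady hill; the controller compensates
    speed = cruise_step(speed, set_speed=65.0, grade=1.0)
# Settles near 63: proportional-only control leaves a small
# steady-state error while fighting the uphill disturbance.
print(round(speed, 1))
```

The point of the sketch is the feedback loop itself: the product senses its state, compares it to a target, and acts, which is exactly the behavior that distinguishes a smart product from a purely physical one.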
Smart products have implications for how you deliver services. By adding this digital component, service teams can now plug into the product’s CPU and memory and assess what has happened in the environment the product has been used in over a period of time. Connectivity is still only periodic, but it allows faster diagnosis and much better pinpointing of problems.
The manufacturer interacts with its product, still, only when it is brought in for service. That is, in a reactive mode. As was the case with purely physical products, service providers can only respond and react to problems once they occur. In addition, they continue to forecast spare parts inventory based on historical demand.
Smart products confer great benefits to users: improved comfort, safety, efficiency of physical products, etc., but benefits also accrue to the manufacturers. For example, smart products often yield a premium in the marketplace because of that added functionality. To capture that value, manufacturers need to invest in additional R&D resources, primarily in software engineers to design and code the software as well as systems engineers to integrate the digital components seamlessly with the physical components of the product. The ability to capture and analyze data about the product is constrained by the fact that they’re designed only for periodic connectivity.
Let’s talk about the third type, the smart, connected product. The third type in this continuum has the potential to be the most disruptive. Once a product has been made smart, in other words it’s been embedded with software or a CPU or other smarts if you will, it’s really a matter of connecting it to a network and getting all interested parties transparency into what happens to the product during its useful life.
This connectivity taps an entirely new vein of value-creation opportunities. At the most basic level, it enables the user, the manufacturer, and especially the service organization, and in some cases qualified third parties, to monitor, control, and deliver enhancements and value-added services to the product during its useful life. This fundamentally changes the nature of the product from a thing whose value is optimized for the point-of-sale to a platform over which value can be exchanged between all relevant parties over time.
This connectivity can take a lot of forms. First, there’s the connectivity between the product and the user. Think about a remote control of a home security system via the web. There’s connectivity between the product and the manufacturer: remote diagnostics and troubleshooting of farm equipment in our previous example. Connectivity between products themselves: two automobiles that avoid a collision because they sense proximity and take corrective action. This is peer-to-peer communication.
This connectivity also extends between a product and the third-party ecosystem. Probably the most famous third-party ecosystem is the App Store for the Apple iPhone. The iPhone is the product, but it’s really the apps that you add to it that give it its personality and provide the functionality that’s correct for you as a user.
The benefits of smart, connected products are wide, they’re varied, and in many respects they’re still completely untapped. Among the broader category of benefits related to the product’s performance are the provisioning of extended services and the delivery of the product itself as a service, that is value consumed via subscription as opposed to purchase.
Of course this is where big data comes in. You’ve got a huge volume of data coming in. You need to turn that into information that’s applicable to your business so that your service department, your engineering, your manufacturing and your marketing can take advantage of it.
We do have another poll coming up here. Doug, if you would be so kind?
Doug: Thank you. The question is: which of these smart-connected product strategies does your organization currently have in place or plan to invest in? Improve product service information or serviceability, live monitoring of product information or performance, offer in-product enhancements or features via software, deliver software corrective updates, or if you’re not sure. Please pick one. If you’re doing several of them, please pick the one that is most important to your organization.
Mills: It looks like the live monitoring of product information or performance as well as improving product service information and serviceability and not sure. Well, it’s not surprising that there are a good number of people that are not sure because these are very new forces that are really impacting our ability to engineer systems of systems in very different ways. That’s not surprising. The live monitoring of product information and performance is something where the benefits are pretty apparent. It’s a matter of how we optimize that.
Speaking of that, again, we’re driving transformational change here. These market forces we discussed combined with the competitive pressures that have always been there but are now accelerating are really driving a need to evolve how we engineer our systems. We’ve got new types of systems and subsystems interacting in new types of ways which are driving some fundamental change.
PTC has developed a system of solutions that’s been engineered to work together to provide a closed-loop process to help transform the way you create and service your products. We’ve developed solutions that alone can drive process optimization within a function, but the value grows dramatically as you add solutions within and across the enterprise. We’re going to touch on a few of these solutions during the INCOSE Tool Vendor Challenge piece which is coming up now.
I’m going to set it up with what is the INCOSE Tool Vendor Challenge and then pass it over to Derek.
There are a lot of commercial, off-the-shelf tools that support system engineering activities. Some tools support a generic systems engineering process while other tools support specific parts of the process. Without direct experience, though, it’s hard for systems engineers to understand how a given tool can fit their specific needs and contribute to the success of their activities. As a result, many useful and efficient tools, especially newly-developed ones, remain unused, on the shelf, or don’t really penetrate the market while practitioners still rely on legacy or in-house tools, which while they may be familiar may also no longer be as effective or implement modern systems engineering practices.
The industry is lacking an independent benchmark for assessing and comparing the various system engineering tools, and that’s where the INCOSE Tool Vendor Challenge comes in. It was initially introduced at the INCOSE International Symposium in 2004 and it’s been going on ever since. It offers tool vendors a common use case derived from a practical problem for them to solve and demonstrate.
With that, I’d like to turn it over to Derek Piette to walk you through our solution to the INCOSE Tool Vendor Challenge.
Derek: Thanks, Mills. First, let me explain a little bit about the challenge INCOSE posed, which we participated in in 2013. The challenge was built around a natural disaster, motivated by the natural disasters that have affected people’s lives over the past few years. While this is not a product-centric challenge, there are a number of systems, or systems of systems, that need to interact in order to resolve or mitigate the effects of a natural disaster, and the question INCOSE wanted to explore is: how do you best respond?
The challenge started with a natural disaster affecting several thousand people in a rural town, spanning roughly a 100-mile radius. It happened during the summer months, so it was fairly warm, in the 70 to 100 degrees Fahrenheit range. This natural disaster destroyed homes and upended families’ lives, along with everything around them. It destroyed their shelters, it killed their power, it crippled their communications, and transportation in and out of the site was heavily impacted as well. Roads were considered impassable. The region needed an immediate response.
The goal of the challenge is: how do you best respond to this, both within a short time window, because obviously lives are at stake, and in a way that can be sustained over time?
The final part of the challenge was a set of factors and requirements that came into play: in order to sustain life, keep medicine and food stable, and support the people within the devastated area, ice had to be delivered quickly and efficiently to the region.
There are a number of factors, obviously, to take into account in that case, and we need to understand how to satisfy this challenge, which leads us to the goals. The goals give each vendor a way to show how they would respond to these challenges. INCOSE gives recommendations about how to satisfy these types of goals, meaning: how do you show different alternative options for how you’re going to respond to this disaster?
In this case, how do you show alternative ways of delivering the ice to the region? Through some procurement method and transportation? Or do you decide to set up ice-making facilities within that region? The latter means you need electricity, obviously you need people and power, and then water. All of those things are devastated, so there’s really some analysis that needs to be done here.
You need to take the requirements of the disaster and all the scenarios brought to bear: temperature, the scale of the tragedy, the amount of ice needed per person per day to sustain life, to keep medicine stable, to keep food stable, to provide shelter. A number of requirements need to be developed, along with other considerations. Those requirements need to be understood and decomposed so that each component is effectively and efficiently satisfied.
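The kind of sizing arithmetic behind "ice per person per day" can be sketched roughly like this. Every number below is a hypothetical assumption for illustration, not a figure from the INCOSE challenge.

```python
# Hypothetical sizing calculation for the ice-delivery scenario.
# All figures below are illustrative assumptions, not challenge values.

POPULATION = 5_000          # "several thousand people" in the affected town
ICE_LBS_PER_PERSON_DAY = 8  # assumed: drinking water plus medicine/food cooling
RESPONSE_DAYS = 14          # assumed initial response window
TRUCK_PAYLOAD_LBS = 40_000  # assumed refrigerated-truck payload

daily_demand = POPULATION * ICE_LBS_PER_PERSON_DAY
total_demand = daily_demand * RESPONSE_DAYS
trucks_per_day = -(-daily_demand // TRUCK_PAYLOAD_LBS)  # ceiling division

print(f"Daily ice demand: {daily_demand:,} lbs")
print(f"Two-week demand:  {total_demand:,} lbs")
print(f"Truckloads/day:   {trucks_per_day}")
```

Decomposing the operational requirement into quantities like these is what lets each derived requirement (payload, trip frequency, refrigeration) be traced and verified individually.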
Then you need to compare the operational scenarios. What are the ways we’re going to deliver? How are we going to execute on the alternatives we’ve selected? Compare and contrast which one is best. Identify and analyze the operational scenarios, much like a tradeoff analysis in product or system development, and determine which one is the best approach.
From that, you can then develop, “What is the architecture that we’re going to use to satisfy this? How do we take the use cases, the actors, the requirements into consideration, the scenario that we choose, and really develop the approach for how you determine the alternatives and what other things are going to be needed to satisfy this challenge?”
Then finally, it’s really about analyzing the system, either as it executes or through continual feedback, to confirm that the decisions you made were the right ones, and that your architecture, your scenario, and your requirements are all being effectively and sufficiently satisfied, so you can provide the best possible results to this challenge.
When we look at how PTC responded to this challenge, we looked at a number of steps along the way, shown by these chevrons or stages down below. We’ve broken it down into six, using those goals as a guide.
What we looked at is: how do we develop the operational requirements? How do we decompose those requirements? From that, how do we organize the system that satisfies them? Then we analyze the project and portfolio we need to leverage, in order to identify, “Now that we’ve taken these requirements into consideration, what’s our analysis approach?”
Then decompose that further into the system requirements after we’ve made a selection. The operational requirements are at a higher level, but we need detailed or decomposed requirements from that based on the analysis and selection. There’ll be different requirements depending on the approach that is delivered.
Then finally, how do we effectively choose the right approach and handle the delivery or configuration setup? That means identifying the right approach for satisfying the system requirements and then validating that those requirements are in fact satisfied.
What we’ve done here is take those tasks and wrap them around our portfolio in the center here. To help satisfy these five areas, requirements, the system model, system test, project analysis, and selection of physical delivery vehicles, these are the product portfolios we offer that can help manage these areas.
In conjunction with that, many systems engineering environments use applications from multiple providers. In this case, we also connected third-party tools for requirements and architecture, to allow configuration connections, traceability, and the openness needed for connectivity to other applications. There are many tools within the systems engineering process flow, and we chose a few here to connect to.
Now, I should note that the time limit for presenting a response to the challenge is really not a long one. It’s only 20-odd minutes of presentation and demonstration to the audience, so the goal is really to pick the key areas you want to focus on. Covering the entire challenge in that time window isn’t realistic; instead, you present your best approach and what you recommend in the near term for how your portfolio can satisfy these challenges.
That was the intention here, to really identify a number of our key products that helped satisfy this challenge and obviously give a response to that analysis.
If we move to the next one, we’re going to step through each of these stages on a one-by-one basis and really talk about how PTC satisfies them. You’ll notice in the upper right-hand corner that stage that we’re in as we move along this journey.
Let’s take these operational requirements. Operational requirements are captured in our system and decomposed at a granular level, so that you can identify each requirement and make sure each one is effectively captured. This could be graphical, textual, attribute-based, or parameter information, but at a detailed level of granularity. Again, these are the operational requirements, so their level of detail may not be as granular as the technical specification. Nevertheless, being able to identify each one individually allows effective reuse, effective connectivity, and intelligence that can either be distributed for analysis or combined to give great visibility into the overall project.
Although we can manage requirements within our own application, there are a number of common requirements applications in the industry that we also interface with. Being common and open for the interchange of requirements matters: maybe the government, in this example, gave us the requirements in one format or one application, and we need to inherit those requirements, then decompose and visualize them within our own ecosystem. That is the common OEM and supplier communication protocol, I’ll call it, and it illustrates the flexibility in the ways requirements can be captured, managed, and interacted with.
The next stage is the architecture: taking those requirements and capturing them in a model. Here we utilized a third-party application from IBM to define and capture that architecture, showing the different actors, roles, functions, and applications involved. It’s really a decomposition of those requirements, and the model elements are also connected directly back to the original requirements identified at the operational level.
Here you can see the requirements in the requirements management system tied, imported, and visualized directly, so the architect defining the architecture can really see, “These are all the requirements and what they are.” However, it’s only a limited view, not a detailed view of every single requirement. If I’m an architect living only in the architecture application, with very little visibility into the requirements tool, maybe it’s a different role or a different organization, then I need some visibility and really more granular information.
What happens from this level is the user takes the ability to say, “I need more details about that requirement. I need more information. Maybe there are some parameters that are not captured in the current way that the model shows it.” I need to go and visualize that information.
Here, the user can drill down quickly from that architecture directly into those requirements and get visibility into what the architecture is connected to. What is the traceability of those requirements? Is there more granularity needed? Maybe there are additional requirements tied to this architectural model, newly added or modified, that I also need to take into consideration. Being able to identify additional or modified requirements is a key aspect of capturing that architecture.
We provide connectivity and close interaction between the requirements in the system and the architectural model, to allow transparency and a communication layer between the different disciplines or the different applications that may exist in the organization.
Now that we’ve built our architecture and decomposed our requirements into it, we’re going to work through some operational scenarios. Here we’re going to do some what-if analysis. We’ve analyzed a number of different factors, similar to what you might do in a spreadsheet, but in this case we’re managing the data at a granular level so that we can tune the data and the choices effectively.
Here, we have a two-pronged approach, graphically shown here. On the left-hand side are things we want to analyze. These are the cost basis, the number of units, the amount of ice, and the quantity of information that we need to capture and analyze. On the right-hand side is the graphical representation or the output of that dashboard so that we can visually see what the differences are.
What we’ve done is analyze the two options. Really, do we take the ice to the shelters, or to the region that has been devastated, set it up there, and distribute it effectively? Or do we not procure the ice at all, but instead develop and deliver it locally: setting up ice-making machines, water, and electricity at that site?
So we compared the two options, trucking the ice in over the terrain versus setting up locally. Based on that criteria, we chose to truck the ice in. We judged it more efficient, quicker, and better suited to sustaining human life in the short term than longer-term infrastructure changes, and we can adjust as conditions evolve. To get an immediate response, the goal was to deliver the ice effectively, and that was the analysis made at the time.
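A trade-off analysis like the one just described is often formalized as a weighted scoring matrix. Here is a minimal sketch; the criteria, weights, and scores are purely illustrative assumptions, not values from the actual PTC dashboard.

```python
# Hypothetical weighted trade-off analysis of the two delivery alternatives.
# Criteria, weights, and scores are illustrative assumptions only.

criteria = {                 # weights express relative importance (sum to 1.0)
    "speed_of_response": 0.4,
    "cost": 0.2,
    "reliability": 0.2,
    "scalability": 0.2,
}

# Scores 1-10 for each alternative against each criterion (assumed values).
alternatives = {
    "truck_ice_in":        {"speed_of_response": 9, "cost": 6,
                            "reliability": 7, "scalability": 5},
    "produce_ice_locally": {"speed_of_response": 3, "cost": 7,
                            "reliability": 5, "scalability": 9},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion score times criterion weight."""
    return sum(criteria[c] * scores[c] for c in criteria)

ranked = sorted(alternatives,
                key=lambda a: weighted_score(alternatives[a]), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(alternatives[name]):.2f}")
```

With these assumed weights, trucking wins because speed of response dominates; changing the weights (say, for a long-duration crisis) could flip the ranking, which is exactly why the analysis data needs to stay tunable.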
Now that we’ve done that, we’re going to decompose these requirements into the actual system-level requirements. Here, we’ve taken those requirements and we need to choose how we’re going to get the ice to the devastated region, identify which vehicle is most effective to deliver it, and handle the challenges that come along with that. There are a number of different challenges that may come up and need to be identified.
Nevertheless, we need traceability between the requirements captured here on the left and the product information, the detailed information about the vehicle. If, for example, I need to carry a certain amount of ice, I need to know what the payload of that vehicle is. If it has to be kept at a certain temperature, I need to know if that vehicle has a refrigeration unit, and if so, whether it satisfies the requirement to keep the ice at temperature for the length of time it will be on the road.
There are different ways of being able to identify this, but really tying those requirements to the physical, in this case the physical vehicle, that’s going to deliver the ice.
Since we can support the ability to capture vehicle configurations, here we have a way of identifying a common platform for these vehicles.
In this example, we need to ship the ice. A number of vehicles have been developed that handle transportation. Some of them, as you’ll see in a minute, are equipped for people, some for payload, and some for other factors across the range of configurations that are needed. They’re all built on a common platform, in this case a common platform with configurable options that can be tailored to the challenges that need to be faced.
When we analyzed the requirements, we identified a certain set of needs. Now we’re going to figure out which components need to be selected to satisfy those needs. To do this, I’m going to show you a short video clip, about a minute long, that leads you through the configuration. You’ll hear my voice narrating the clip, and then I’ll come back and follow up with the last few slides before we get into some Q&A.
Video: The correct transportation vehicle is needed to effectively deliver ice to the devastated region. In this case, a number of vehicle configurations have been developed on a common platform and can be selected to meet the needed requirements. The project management team can filter through the different selections to visualize, compare, and determine which configuration is most appropriate.
There’s a command variant which has different payload options. There’s a utility variant where a non-refrigerated vehicle can be explored as an alternative mode of delivery. The team can perform a tradeoff analysis on this configuration and consider if this mode is more effective than the others. There is a military variant where the vehicle is equipped with peace-keeping ordnance in the event insurgents are encountered and try to disrupt the delivery. Again, the team can weigh the need to arm the vehicle if there are reports of ice theft, rioting, or looting within the region.
Finally, after reviewing the numerous options, the team decides on the most appropriate vehicle to satisfy the mission. They select this one and move forward.
Derek: I hope that was informative. You’ll notice the idea of being able to see the different configurations of the vehicle that we went through. A number of different configurations exist in the system, and we analyzed the options and choices to identify which one was the right one.
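The variant filtering shown in the video can be sketched as a simple filter over configurations on a common platform. The variant names loosely follow the video; the option fields and values are assumptions made for the sketch.

```python
# Illustrative filter over vehicle variants on a common platform.
# Option fields and values are hypothetical.

VARIANTS = [
    {"name": "command", "refrigerated": False, "payload_kg": 1500, "armed": False},
    {"name": "utility", "refrigerated": False, "payload_kg": 3000, "armed": False},
    {"name": "military", "refrigerated": True, "payload_kg": 2000, "armed": True},
    {"name": "cargo", "refrigerated": True, "payload_kg": 2500, "armed": False},
]

def matching_variants(needs: dict) -> list:
    """Keep only variants whose options satisfy every stated need."""
    return [v["name"] for v in VARIANTS
            if all(v.get(k) == want for k, want in needs.items())]
```

For example, asking for a refrigerated, unarmed variant narrows the platform down to a single candidate, which is the compare-and-select step the project management team performs.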
Now that we’ve taken that into consideration, we’ve selected our transportation and our vehicle that’s going to deliver ice effectively to the region. Now we’re going to go through and do some verification.
Now, for the verification process, we’re going to verify that the vehicle is the right one. Before we ever deliver anything to the site, we want to make sure the choice we’ve made is the right selection. A number of tests may be run against it, and there’s a variety of test types: environmental testing, inspection testing, and so on. Each of these tests can be categorized individually or in a common-configuration approach.
The question is how you best tie the verification aspects back to the requirements. What we’re talking about here is really requirements-based testing against the actual selection, in this case the product configuration: identifying which things have been tested, whether we satisfied each test, and, if corrective actions were needed, tying those into this space as well.
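The requirements-based testing idea here can be sketched as a coverage report linking each test to the requirement it verifies. The test IDs, requirement IDs, and verdicts are illustrative assumptions.

```python
# Sketch of requirements-based testing: each test links to the
# requirement it verifies, so coverage and open corrective actions
# can be reported. All IDs and verdicts are hypothetical.

TESTS = [
    {"id": "T-01", "verifies": "SYS-001", "verdict": "pass"},
    {"id": "T-02", "verifies": "SYS-002", "verdict": "fail"},
]
REQUIREMENT_IDS = ["SYS-001", "SYS-002", "SYS-003"]

def coverage_report(tests: list, requirement_ids: list) -> dict:
    """Report requirements with no test and requirements whose test failed."""
    tested = {t["verifies"] for t in tests}
    passed = {t["verifies"] for t in tests if t["verdict"] == "pass"}
    return {
        "untested": sorted(set(requirement_ids) - tested),
        "needs_corrective_action": sorted(tested - passed),
    }
```

The report surfaces exactly the two gaps the transcript mentions: requirements nothing has tested yet, and tested requirements still awaiting corrective action.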
Obviously, as I mentioned, we can’t cover every single piece of information, but we’re more than happy to have follow-on conversations about any areas I didn’t touch on today.
Lastly, we’ve talked about a lot of different configurations of data. Visibility of that data can be siloed and discrete, but it can also be pervasive. We know it’s interconnected: we’ve taken the operational requirements and decomposed them into the architecture, taken the architecture and decomposed it into system, physical, and detailed requirements, and tied those physical requirements to the physical configuration information. Then there’s the whole verification process as well.
There are a lot of different siloed sets of data, but they’re all interconnected in some way, and we provide visibility into that connectivity. That allows traceability and additional what-if analysis, whether after release, during product design, or even for the next product configuration you’re going to focus on. What if I make a change to one of these configurations or one of these items? What is the potential impact on the other pieces?
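The what-if impact analysis described here can be sketched as a walk over a trace graph: starting from a changed item, follow the downstream links to list everything potentially affected. The item names and link structure below are invented to mirror the decomposition chain in the transcript.

```python
# Illustrative what-if impact analysis over a trace graph.
# Item names and links are hypothetical.

from collections import deque

# item -> items derived from it (operational requirement -> architecture
# -> system requirements -> product configuration -> verification test)
TRACE = {
    "OP-REQ-1": ["ARCH-1"],
    "ARCH-1": ["SYS-001", "SYS-002"],
    "SYS-001": ["CONFIG-VEHICLE"],
    "SYS-002": ["CONFIG-VEHICLE"],
    "CONFIG-VEHICLE": ["TEST-ENV-1"],
}

def impacted_by(changed: str) -> set:
    """Breadth-first walk of downstream trace links from a changed item."""
    seen, queue = set(), deque([changed])
    while queue:
        for nxt in TRACE.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

Changing the architecture item, for instance, flags both system requirements, the vehicle configuration, and its verification test as potentially impacted, which is the kind of change-impact question the transcript poses.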
Derek: I know we’ve got a few minutes left for Q&A, but let me just summarize the tool vendor challenge and how we did against the original goals that were set.
The first one is showing alternatives. We did this in a couple of different domain spaces. We talked about alternative ways to deliver the ice, whether shipping it in by truck or setting up production at the target location. That analysis was done more at the spreadsheet or project level.
We also showed alternatives in the physical vehicle configurations: which was the right selection that we monitored, identified, and chose?
The requirements were done in a couple of different areas, both at the operational level and at the subsystem level, identifying those requirements and decomposing them effectively, because they cover different domains, different use cases, and different things that need to be satisfied.
Compare and select the operational scenario: that was really the project management level, making those decisions on financial, physical, and operational grounds to best satisfy the need or challenge that was given. For the architecture, we developed it to show the actors, the roles, and the functions involved, with tight connectivity to those requirements to allow traceability between the architecture and the requirements informing it.
Lastly, for system analysis, we did a couple of different things. We did a virtual trace of connections of information, but the focus was also on the verification and validation process, making sure we effectively and correctly chose the vehicle to satisfy the original requirements and needs of this particular challenge.
With that, I hope you found the presentation, video, and information from both Mills and myself informative. I’d like to open it up now to the audience for Q&A. I’m sure there are a number of different questions in the queue that we can sort through. If for some reason we don’t get to them in the timeframe, we’ll definitely follow up in the future.
Doug: Thank you very much, Derek, and thank you, Mills. I’m just going to intrude for a moment to remind people that there’s still time to get your questions in through the Q&A panel in the lower left. Now I’m going to turn it back over to Mills for the Q&A session.
Mills: Thank you, Doug. We’ve already got quite a list, Derek, so be ready. The first one I see regards risk management. The question is, “I haven’t heard any mention of an important aspect of systems engineering, risk management. Can you comment on this?”
Derek: Yes, that’s a good question. I probably alluded to this a little bit in my talk. Risk management is a key aspect of systems engineering, and the need to identify, track, and mitigate risks is by all means valid to include within the INCOSE challenge and the tool vendor challenge.
In this case, we simply made a conscious decision not to focus on it within this particular response; there are only so many things we can show. PTC does offer a Windchill Quality Solutions offering that is focused on handling quality, mitigating risks, and tying failure mode and effects analysis and fault-tree analysis methodologies to those risks and analyzing them effectively. While we didn’t capture it in this year’s or last year’s INCOSE challenge, it could easily have been a component of the response.
Mills: Thanks, Derek. Another question here about information exchange. “One of the large challenges we have is the ability to exchange information easily using industry standards. How can you help improve this?”
Derek: Yeah, another good question. Exchange of information is always a challenge in a large organization, whether across different applications, with your partners or contractors, or across different domain organizations.
We, as a company, support a variety of different standards. We sit on boards of organizations that support standards: we’re heavily involved in the ProSTEP iViP organization for identifying uses of standards there, and in RIF, the Requirements Interchange Format, where we’re on the implementer forum. We’re also involved in the OASIS governing body, which manages the standards approach, and we’re engaged on the OSLC front, because OSLC seems to be growing on the radar as a way of communicating between disparate applications.
At PTC we definitely support numerous standards. We also are involved heavily in a number of standards. We’re always looking to satisfy and support standard interfaces and standard exchanges of data with our applications. We want to make sure those standards are utilized, they’re effective, and they’re well-informed so that they can be adopted by both our customers and obviously the applications that we need to interface with as well.
Mills: Thank you. The next one is with regards to leveraging systems engineering practices. Do you see any specific industries trending towards leveraging systems engineering in their product development process?
Derek: Yeah, another good one. It’s probably no surprise to many of you that systems engineering really grew out of aerospace and defense, the mil-aero space, however your organization couches it. Automotive is second in adoption, but it’s trending strongly toward systems engineering approaches. Many of the customers I talk to in both aerospace and defense and automotive are always talking about a systems engineering approach or effectively supporting systems engineering methodologies. Those are definitely the two primary industries that are focusing on and expanding their processes, or have mature processes, in the systems engineering arena.
Beyond that, the other uptick we’ve seen is heavily around the medical device industry, where things are truly safety-critical. That’s true for automotive and aerospace and defense as well. For safety-critical needs, traceability, and proving traceability for compliance reasons, is becoming more and more important for customers to satisfy: showing a governing body that yes, I’ve satisfied this requirement and tested it through this test case, or implemented it in this product configuration, in this specific device, or even in this line of code in the software.
Granular-level traceability and compliance really seem to be at the forefront of leveraging systems engineering approaches and methodologies. That’s not to say other industries aren’t adopting them; those are just the ones we see leading the trends.
Mills: I’ve got a question on modeling here. How do you view the adoption of SysML with your customers?
Derek: Yeah, it’s funny. A few years ago, when we really started engaging customers on the systems engineering process, on the requirements side and the architecture side of things, SysML was still lightly touched, somewhat immature in adoption, or where it was used, it was used by only a few people within an organization.
Fast forward to today, and it’s becoming a more and more regular conversation with customers. They’re all trying to think about how to architect their product effectively upfront, whether using SysML or some other modeling approach.
I think that comes out of building modularity into their architecture and identifying challenges and issues early in the process, not waiting for the later stages when the physical design, whether in 2D CAD, 3D CAD, or physical prototypes, is done. They’re trying to define the architecture upfront and tie it to the requirements, to leverage early verification and validation through that architecture.
It’s also about effectively communicating downstream: “Here’s how we’re going to develop it. We’re going to allocate this component or this area to software, this one to hardware, identifying particular interfaces.” Having that notation, an architecture definition upfront rather than in spreadsheets and other artifacts where it’s harder to identify, is where I see a lot of uptick.
It’s funny. I was talking to some other people who do training on SysML, and for them, SysML training has also boomed over the last year or two, because more and more customers are starting to ask about it and think about it.
As I mentioned before, INCOSE is really at the forefront of this; they’re really driving it forward. In fact, many of you may not know that France is actually one of the leaders in adoption of SysML. There, when an individual earns a degree in a systems engineering domain, one of the requirements is learning to leverage SysML. Again, different organizations, whether in this country or other areas of the world, are really starting to pick up the usage of this notation and the idea of architectural definition.
Mills: Let’s see. We have another one here. “In the creation of your solution design, what challenges did you find to which there were no readily-available answers and needs for further research?” I assume this is specific to the INCOSE Challenge solution.
Derek: Yeah, that’s a good question. The INCOSE Challenge is kind of interesting. We’re a product development company, and many of the challenges INCOSE puts forth are natural disaster scenarios, while most of the customers we interface with aren’t really engaged in that space. We’re helping customers build planes, trains, automobiles, and other physical devices. So one challenge is taking what INCOSE puts forth and identifying how our products and applications fit well within it.
In this case, as I laid out a little bit, we focused on the physical piece of things. There’s a whole other system-of-systems component around the personnel and the people and other areas that could have been taken into consideration that we left out. We looked at the physical domain, physically delivering the vehicles and the tangible product configuration and product information around them, but there are definitely other pieces of the challenge we chose not to focus on. Hazards to individuals, for example: we could have built into our model what hazards would impact people.
In retrospect, we just had to focus on a certain set of areas, but that doesn’t mean there aren’t other pieces we could have done. That’s one.
The second one is really trying to cram it all into a short window of what we can deliver and effectively communicate to the audience. That’s just a timing issue.
Mills: I have another question here about tests. Someone in the audience would like to know: what is the difference between simulation tests and inspection tests?
Derek: In this case, it’s really about the different steps involved in testing. Inspection testing may be a physical inspection: a user going up and trying to identify whether this dimension matches the original drawing, what the physical display looks like, whether it’s functioning or not turning on, and so forth. Those are visual ways of validating it.
Simulation tests involve actually running physical simulations or executions of the test. This can be hardware in the loop, software in the loop, or a model being executed with inputs and outputs while gathering the data.
There are different types of tests, and those tests can go through different process flows. An inspection test may involve a user or feedback input. Simulation tests are probably more automated or tool-centric, where the application itself gives you the result and those results are tied to the test verdict, as opposed to an inspection test, where a user is more directly involved.
Doug: Thank you very much. I’m afraid that we are out of time. I’m sorry to interrupt. Thanks, again, very much Derek and Mills.
Mills: Thanks, everybody. Thank you, Doug.
Derek: Thank you. Have a good day.