Quantification of cost and risk during product development


Computers & Industrial Engineering 76 (2014) 183–192


Shan Zhao, Arman Oduncuoglu, Onur Hisarciklilar, Vince Thomson*
Mechanical Engineering, McGill University, Montreal, Canada

Article history: Received 10 October 2013; Received in revised form 5 May 2014; Accepted 23 July 2014; Available online 13 August 2014.

Keywords: Dependency modelling; Complex systems; Decision support system; Risk assessment; Change management

Abstract

In response to customers' rising demands for more customization and quality, companies are making more frequent changes to their products. A framework for a decision support system (DSS) which helps product development managers to understand the cost and risk of change is described and illustrated with a simple example of a thermoflask. The DSS assesses project performance metrics, such as development effort, development time, product cost and revenue, customer satisfaction, profit margin, and risk. The system allows these performance metrics to be recalculated when engineering change occurs during the creation of new design solutions. Different design solutions can then be assessed by comparing the change in performance metrics. The DSS integrates methods from quality function deployment, functional analysis, and risk assessment to increase product knowledge during design stages, to calculate the effects of engineering change, and thus to support design managers in decision making.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Ever increasing competitiveness along with fast changing environments in current markets forces companies to seek ways to produce higher quality products at the lowest cost in the shortest amount of time. The ability to make changes to existing designs quickly, in other words, agility in product development and production, has become a crucial strength for staying at the leading edge, or for simply surviving, in today's market. One of the most prominent ways for companies to remain competitive is to avoid time and cost overruns when making changes to existing designs while maintaining quality.

In order to handle the risk generated by uncertainty in large, complex projects, companies have adopted 'incremental innovation', which gradually introduces new functions, performance characteristics and technologies that rely on existing product information. As a result, adaptive design, where existing products are modified to produce new solutions that satisfy a change in needs, new requirements or the desire for improvement, constitutes 75–85% of new product development projects (Duhovnik & Tavcar, 2002). In order to deal efficiently and effectively with the introduction of a new product solution, it is paramount that the impact of engineering change, i.e., effort, span time, technical difficulty, quality, fulfilment of customer requirements, and cost, be identified and assessed as early as possible during the product life cycle. On the other hand, evidence from empirical investigations (Crossland, Williams, & McMahon, 2003) and from the literature (Keller, Eckert, & Clarkson, 2009) shows that 70–80% of total product cost is decided during early design stages, while 56% of change occurs after the initial phase, of which 39% is avoidable. This situation indicates a high potential for productivity improvement.

A survey of the literature shows that there are four main groups of change management methods and tools: those which record and report change, e.g., change management effects analysis (CMEA) (Bouikni, Desrochers, & Rivest, 2006) and quality function deployment (QFD) (ReVelle, Moran, & Cox, 1998); those which control and minimize change impact, e.g., modularization (Dahmus, Gonzalez-Zugasti, & Otto, 2001), design for changeability (Fricke & Schulz, 2005), and frontloading (Oppenheim, 2004); those which fix change once it occurs, e.g., engineering change request management (Kocar & Akgunduz, 2010) and workflow data management; and those which model and analyze engineering change (Clarkson, Simons, & Eckert, 2004; Cohen, Peak, & Fulton, 1999; Silver & Weck, 2007). Engineering change analysis and assessment are paramount since they are the basis for decision making about whether to proceed with change and how to implement it. Decisions consist of identifying how a change affects other components in a product, of assessing the risk and impact of the change (Clarkson et al., 2004), and of supporting the decision to implement a change (Flanaghan, 2006).

* Corresponding author. Address: Mechanical Engineering, McGill University, 817 Sherbrooke Street West, Montreal, QC H3A 2K6, Canada. Tel.: +1 613 425 2713; fax: +1 514 398 7365. E-mail address: [email protected] (V. Thomson). http://dx.doi.org/10.1016/j.cie.2014.07.023. 0360-8352/© 2014 Elsevier Ltd. All rights reserved.


As the feasibility and the cost related to implementing change are highly affected by the product development phase in which they occur (Pikosz & Malmqvist, 1998; Reidelbach, 1991), it is sensible that a change evaluation system consider a multi-layer information model, including product, requirements, functions, detailed design description as well as function interdependencies (Riviere, Féru, & Tollenaere, 2003). The interdependencies within and across different subsystems (functions, specifications, parameters, constraints) affect how change occurs and propagates throughout different subsystems as well as the ability of the system to adapt (Lindemann, Maurer, & Braun, 2008; Maurer, 2007). Adaptability is the capability to identify change, to apply change actions, then to effect the actions in order to derive new outcomes, and finally, to be able to repeat the change cycle. Being able to move through the cycle quickly and effectively requires the analysis of the impact of one element on another within the same or dependent systems (Browning & Ramasesh, 2007). Although many methods have been proposed for change analysis (Gärtner, Rohleder, & Schlick, 2008; Kocar & Akgunduz, 2010; Reidelbach, 1991), most of them only consider change from the perspective of the product or its attributes, and do not consider the aspects of risk assessment, technical feasibility, fulfilment of customer requirements, etc., that are necessary for the implementation of alternative solutions. Hence, there is a lack of a holistic method that provides guidance as to additional cost and risk when design parameters change as projects proceed. This paper describes a framework for a comprehensive decision support system (DSS) to allow managers and designers to have better project information to help them identify the best solution as early as possible in the design process. 
Good decision making is especially important in early stages of new product development, since it is during these stages that the success of a project is determined (Crossland et al., 2003; Keller et al., 2009). Moreover, rework or change is more costly the further a design progresses. The proposed DSS integrates methods from QFD, functional analysis, and risk assessment to calculate the effects of engineering change where several dependency matrices are derived to make the analysis. The main analysis tool used in the DSS is the Cost and Risk Analysis Method (CRAM), where costs and risks are analyzed at the design function level and aggregated through a Functional Analysis System Technique (FAST) diagram. An Integrated Evaluation Matrix allows the comparison of total target cost, total risk level, and total design cost across alternative design solutions. The CRAM does not develop any new capability in QFD, function analysis or risk assessment, but uses these tools together in a new way to facilitate decision making about design change. The contribution to research is a tool that can quantify product cost, development cost and risk at any point during product development. The remainder of this paper is organized as follows: Section 2 briefly provides the background information required to understand the proposed method. Section 3 introduces the DSS framework and illustrates it with a thermoflask example. Discussion and conclusion follow in Section 4.

2. Quantitative methods for productivity and risk assessment during design

In this section, a brief background is provided on the methods for productivity and risk assessment during early design stages that are utilized in the DSS. First, two main concepts are defined: value and risk.

2.1. Value

There are many definitions of value, from an exchange of worth to the level of importance of an object to stakeholders. Among the more quantitative definitions, Johansson and Pendlebury (1993) propose that value be quantified in terms of the relationship between quality, service and sale price, and lead time. Park (1998) considers value to be based on product functionality and cost, whereas Weinstein and Johnson (1999) define value as a benefit to cost ratio for products, as perceived by consumers. We use Weinstein and Johnson's definition of value, benefit/cost, in the DSS. In determining value, quantitative parameters such as quality, function, need, sale price, cost, lead time and level of satisfaction are considered. It is also important to consider risk when determining value, since risk may reduce value when product performance, development time, manufacturing cost, etc., impact product function, quality, cost or delivery.

2.2. Risk

Risk can be defined as the "variation in outcomes around an expectation"; in other words, risk refers to how life differs from what is expected (Young & Tippins, 2000). In addition, according to Wang and Roush (2000), the central feature of any engineering project is to produce a result that leads to customer satisfaction within the scheduled span time and budgeted cost. During an engineering design activity, risk events include late design changes, product defects, manufacturing variability, structural and technical failures (materials, tools, process procedures, installation specifications), wear failures, severe environments, human factors, etc., that can lead to customer dissatisfaction and to time and budget overruns. At the start of any design project, there are uncertainties for each of these factors, and the associated risks need to be managed.
Risk management can help to mitigate the potential for negative consequences arising from possible risk events and to maximize the possibility that outcomes will be better than target values. In the research in this paper, an investigation of relevant risk attributes was conducted. Among these attributes, the following risks can be listed: technical (new technology), development cost (effort overrun), timeliness, quality, procurement, financial, and program (e.g., regulatory changes). In this paper, the focus is on development cost risk, technical risk and timeliness risk, since they are the key factors that affect product outcomes (technical risk) and project outcomes (development cost and timeliness risk).

2.3. Product value assessment methods

2.3.1. Functional performance specification

SAVE International (the Society of American Value Engineers) defines value engineering similarly to Park (1998):

 identifying the functions of a product or service,
 establishing a monetary value for the functions, and
 providing the required functions at the lowest overall cost.

Functional performance specification (FPS) is a methodology used to detect, formulate and justify the functional requirements of a product or a service without specifying any solutions. It concentrates on the functions of a product rather than product parameters, and aims to satisfy these functional requirements with the lowest cost, effort, time, etc. (Donais, 2001). The FAST diagram (Fig. 1) is one of the key functional performance specification methods. The FAST diagram allows a functional hierarchy to be developed from left to right where functions are broken down into

S. Zhao et al. / Computers & Industrial Engineering 76 (2014) 183–192

185

simpler ones when asking the question "how". For instance, when asking the question: How to "transport liquid"?, the answer is "secure lid", "easy to use", etc. How to "secure lid"? evokes the response with subfunctions "stop contents", etc. Asking why "stop contents"? gives "secure lid". Thus, one can move from one side of the diagram to the other with the questions how and why. Once subfunctions are defined at a suitable level of granularity, specific devices or components that can perform the functions are given along with associated costs at the right of the diagram (Fig. 1).

Fig. 1. A segment of the FAST diagram for a thermoflask with sample product data.

In the evaluation phase of FPS, relative weights of importance are assigned to each function (Masson, 2001). Subfunctions at level 2 are classified as design characteristics (DC) and subfunctions at level 3 are classified as design functions (DF). The relative weights of importance of DCs (d), the relative weights of importance of DFs (f), and the fraction of effort required for DFs (e) are obtained through discussions with product domain experts. The use of relative weights of importance for DFs and DCs is an extension of the FPS methodology (Masson, 2001). Each set of relative weights is normalized such that its sum is 1. Each design function listed at level 3 in Fig. 1 is independent of the other functions and is satisfied by specific components, so that the costs on the far right are independent as well and can be added to obtain a total cost. The total part cost is typically an all-inclusive cost of purchase, manufacturing and assembly. All these values are used in the CRAM for analysis and are explained later in the paper in conjunction with the thermoflask example.

2.3.2. Quality function deployment

QFD, which was developed by Akao (1997), translates customer requirements into product specifications before manufacturing. According to ReVelle et al. (1998), QFD is driven by the following objectives:

 converting user needs (customer requirements) into design/quality characteristics at the design stage, and
 deploying the design/quality characteristics identified at the design stage into production activities.

When these two objectives are satisfied, the resulting product can meet customer needs. Among the many quality tables used in QFD, the House of Quality (HoQ) is the most basic one. Given project needs, an HoQ links customer requirements with design characteristics; in other words, it transforms the WHATs of design into the HOWs of implementation (Cross, 2000). Iranmanesh and Thomson (2008) demonstrated how the DCs along with relative weights from an HoQ could be used to optimize the cost of a product. The HoQ for the thermoflask example is shown in Fig. 4 and is explained later in Section 3.2.1.

2.3.3. Product complexity model

In order to estimate the actual required effort related to the definition of design functions, Bashir and Thomson (2001) developed a product complexity model. This method is based on the weighted sum of the number of functions with respect to their level in a FAST diagram. Its calculation is given by Eq. (1).

PC = \sum_{j=0}^{l} F_j \, j    (1)

where PC is product complexity; F_j is the number of functions at level of decomposition j; and l is the number of levels.

In order to estimate the total effort for different designs, a best-fit curve is generated by using regression analysis on effort versus complexity data from previous projects, and the coefficients in Eq. (2) are determined.

\hat{E} = a \, PC^{b} \, D_1^{c_1} D_2^{c_2} \cdots D_m^{c_m}    (2)

where \hat{E} is the estimated design effort in hours; PC is product complexity; D_1, ..., D_m are the different effort factors (difficult to achieve requirements, difficulty to expertise ratio, etc.); and a, b, c_1, ..., c_m are constants (weights) that are estimated from historical data. The historical data is typically from similar projects done in a similar way in the same company in order to derive meaningful parameters.

The total cost for a project can be calculated as

TC = \hat{E} \cdot AC    (3)

where \hat{E} is the estimated total effort; and AC is the average cost per unit effort. In estimating the effect of change, the difference in effort between two designs can be obtained by calculating the total effort for each design and then subtracting one from the other. Furthermore, project effort can be translated into an estimate of span time by using Norden's model (Bashir & Thomson, 2001). Norden (1964, 1970) states that the entire project development cycle can be approximated by the following equation.

y' = 2 \hat{E} a t e^{-a t^2}    (4)

where y' is the manpower required in time period t; \hat{E} is the total estimated design effort from Eq. (2); a is a shape parameter defined by the point in time at which y' reaches its maximum; t is time; and e is the base of the natural logarithm. After rearranging parameters and integrating Eq. (4), the duration of a project (t_d) can be obtained as:

t_d = \sqrt{\ln(1/x) / a} + 1    (5)

where x is the average ratio of the total effort spent during the last month of a project to the total design effort of the project; and t_d is the project duration.

2.3.4. Risk assessment methods

Risk management is any measure taken to evaluate hazards and to control their potential impact. The risk management process is composed of three steps: risk identification, risk analysis and assessment, and risk control (Fossnes, Forsberg, Wray, Fisher, & Mackey, 2004). In risk analysis and assessment, the data collected in the risk identification phase is converted into information describing the five key elements of risk: probability, frequency, impact, importance, and exposure. These elements need to be well defined to control risk. Knowledge from past experience or past product designs and their association with component characteristics is usually used in new designs to mitigate risk.
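The effort, cost, and duration pipeline of Eqs. (1)–(5) can be sketched in a few lines of Python. The function counts, regression coefficients (a, b, c), effort factors, labor rate, and the x and shape-parameter values below are hypothetical placeholders for illustration, not values taken from the paper.

```python
import math

def product_complexity(functions_per_level: list[int]) -> float:
    # Eq. (1): PC = sum over levels j of F_j * j, where F_j is the
    # number of functions at decomposition level j of the FAST diagram.
    return sum(j * f for j, f in enumerate(functions_per_level))

def design_effort(pc: float, factors: list[float], a: float, b: float,
                  c: list[float]) -> float:
    # Eq. (2): E_hat = a * PC**b * D1**c1 * ... * Dm**cm.
    # a, b, c are regression coefficients fitted to historical
    # effort-versus-complexity data (values used below are made up).
    effort = a * pc ** b
    for d, ci in zip(factors, c):
        effort *= d ** ci
    return effort

def total_cost(effort_hours: float, avg_cost_per_hour: float) -> float:
    # Eq. (3): TC = E_hat * AC.
    return effort_hours * avg_cost_per_hour

def project_duration(x: float, a_shape: float) -> float:
    # Eq. (5): t_d = sqrt(ln(1/x) / a) + 1, where x is the fraction of
    # total effort spent in the last month and a is Norden's shape
    # parameter from Eq. (4).
    return math.sqrt(math.log(1.0 / x) / a_shape) + 1.0

# Hypothetical FAST diagram: 1 top function, 4 design characteristics
# (level 1), 12 design functions (level 2).
pc = product_complexity([1, 4, 12])          # 0*1 + 1*4 + 2*12 = 28
effort = design_effort(pc, factors=[1.2], a=50.0, b=0.8, c=[1.0])
cost = total_cost(effort, avg_cost_per_hour=75.0)
months = project_duration(x=0.05, a_shape=0.08)
print(f"PC={pc}, effort={effort:.0f} h, cost=${cost:,.0f}, "
      f"duration={months:.1f} months")
```

With real historical data, a and the c weights would come from the regression described in the text rather than being chosen by hand.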


Designers make changes to design characteristics to adapt them to the new requirements. Hence, newly introduced design functions generate change in existing design characteristics, which in turn implies different cost and risk. The assessment of the risk of change and its impact on product value (functions, cost and technical risk) is of great value in systematically indicating changed functions and imputed cost. The DSS considers differential cost, quality and risk levels between consecutive design changes, which are more relevant than the exact amount of cost, quality or risk for each design solution. The integration of QFD, value engineering and risk assessment methods provides an easy to read summary of the 'big picture' for engineers. Furthermore, the impact of possible risks which arise due to changes is quantified by comparing alternatives to an original solution. The risks considered by the DSS deal with product characteristics, such as component cost and technical risk, and some project risks, e.g., development cost and on time delivery.

This section reviewed some of the quantitative methods used by QFD, functional performance specification, and risk estimation to assess quality, cost and risk. Section 3 presents a DSS framework that integrates these methods into a complete productivity and risk assessment method for design, and gives a thermoflask example to illustrate the framework.

3. Decision support system framework

3.1. Background

Quantitative parameters are defined and deployed into matrices to establish the relative importance of customer requirements (CRs), design characteristics (DCs), design functions (DFs), and risk attributes (RAs), as well as their relationships to each other with respect to cost, risk and customer satisfaction. The method starts with customer requirements and design characteristics from a House of Quality as input to the CRAM matrix and to the FAST diagram (Fig. 2), in which the target unit cost is first set for the current product. Cost and risk are analyzed at the design function level, and values are assigned for each component at the lowest level of the FAST diagram. The DSS allows comparison of actual to planned cost ratios and risk assessment levels based on the different criteria for the design functions. The total risk for all design characteristics is calculated for the current design based on all product specific risk attributes. The use of the CRAM is repeated for each possible design solution or change scenario; then, the most feasible solution is chosen. To make comparisons between the current design and proposed solutions, cost and risk related information is organized in the evaluation matrix. The alternative with the highest customer satisfaction and the best benefit to the company is chosen. Although the CRAM matrix is a computerized tool that facilitates the many calculations in the method, much reliance is put on the experts in the product development team who provide the data from the QFD, function and risk analysis.

Fig. 2. Simplified building blocks of the DSS: QFD House of Quality, VE FAST diagram, Cost and Risk Analysis Matrix (CRAM), CRAM for alternative solutions, product complexity model (effort estimation), Norden's model (span time estimation), evaluation of alternatives, and evaluation of configurations & optimization.

3.2. DSS and thermoflask example

The framework of the DSS integrates a product complexity model for effort and span time estimation with the extended quantitative QFD method described in Section 3.1. It also introduces a new set of project performance metrics to evaluate different alternatives. The building blocks of the DSS (Fig. 2) are explained through the CRAM, using an illustrative thermoflask example, to give the reader a better understanding. The initial design for the thermoflask (basic design) uses glass as the container material and assumes that the sales duration of the product life cycle (PLC) is 1 year with a sales volume of 200,000 units after entry into the market. The target cost, including development and parts, is $8.00. In Section 3.3, two alternative container materials, steel and plastic, are considered to further illustrate the use of the CRAM.

3.2.1. Cost and Risk Analysis Method (CRAM)

Although the method starts with a FAST diagram along with an HoQ matrix, the CRAM matrix is presented as the main point of discussion to help explain the procedure. Fig. 3 presents the CRAM matrix with its columns numbered from 1 to 17. The following subsections explain the CRAM matrix and introduce the content of the columns.

Design characteristics (DC) and the relative weights of importance associated with product design characteristics (d) are columns 1 and 2 in the CRAM matrix. The design characteristics are obtained from the second level of the FAST diagram (Fig. 1). The relative weights of importance for design characteristics are derived by evaluating how well the DCs satisfy customer requirements (CR) using an HoQ (Fig. 4). The relative weights of importance of CRs (column 2, Fig. 4) and the importance values for each customer requirement with respect to a design characteristic (columns 3–9, Fig. 4) are obtained from discussions with product domain experts. Using the HoQ matrix, the weights of importance for the DCs (D) are calculated (row 11, Fig. 4) by multiplying the relative weight of importance (ci) of the CRs and the evaluation values for the DCs that were obtained from product domain experts. These values are then normalized such that their sum equals one to obtain the relative weights of importance for the DCs (d) (row 12, Fig. 4), which are then put into column 2 of the CRAM matrix (Fig. 3). The values for d are also shown in the FAST diagram (Fig. 1).

3.2.1.1. Functional analysis. In the FAST diagram for the thermoflask (Fig. 1), DFs are matched to DCs. For instance, DC1 (secure lid) relates to 5 design functions: DF11 (stop interior contents), DF12 (allow mouth open), DF13 (keep temperature), DF14 (contain contents), and DF15 (satisfy temperature and pressure). These DFs and their weights of importance (f) are then entered into the CRAM matrix (Fig. 3, columns 3 and 4). The elements of interest are the outcomes of the analysis of both design complexity and design characteristics, which consist of the fraction of effort per design function and contribute to the actual product component cost. The former is derived using the previously defined functional performance specification when answering the question: "what is the importance of a DF in generating effort during design?", with the result put into column 5 of the CRAM matrix (Fig. 3).

3.2.1.2. Cost analysis. The fraction of effort values in column 5 of the CRAM matrix (Fig. 3) are multiplied by the relative weights of importance of DCs (column 2) and the total design cost (DCot) that appears in row 20 (column 6) of the CRAM matrix to derive the


Fig. 3. CRAM matrix.
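Two of the CRAM columns can be sketched from the worked thermoflask numbers in the text: the target cost of a design function (column 8) and the product cost improvement index (column 9). The PCII formula below, actual cost divided by target cost, is inferred from the worked example; the paper's exact formula appears in Fig. 3 and also involves the design engineering cost, so this is a simplified reading.

```python
# Sketch of CRAM columns 8 and 9 (Fig. 3), as described in
# Section 3.2.1.2. Function names are ours, not the paper's.

def target_cost(d: float, f: float, target_unit_cost: float) -> float:
    # TCF = (relative weight of DC) * (relative weight of DF)
    #       * (target unit cost set by marketing)
    return d * f * target_unit_cost

def pcii(actual_cost: float, tcf: float) -> float:
    # PCII > 1: actual cost exceeds target -> cost reduction needed.
    # PCII < 1: room to improve the function within the given budget.
    # Simplified: omits the design engineering cost term of Fig. 3.
    return actual_cost / tcf

# DF11 'stop contents' from the thermoflask example:
# d = 0.170 (DC1, secure lid), f = 0.30, target unit cost = $8.00.
tcf_11 = target_cost(d=0.170, f=0.30, target_unit_cost=8.00)
print(round(tcf_11, 2), round(pcii(0.50, tcf_11), 2))  # 0.41 1.23
```

With the actual product cost of DF11 ($0.50 from the FAST diagram), this reproduces the $0.41 target cost and the PCII of 1.23 quoted in the text.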

Fig. 4. House of quality for the thermoflask showing the relationship between customer requirements (CR) and design characteristics (DC). [Data recoverable from the original figure: customer requirements and relative weights (ci): good insulation to keep hot 0.35; durability (unbreakable) 0.15; easy to fill and serve 0.10; easy to open 0.09; comfortable to handle 0.08; stable base 0.08; does not spill 0.06; nice look (appearance) 0.05; easy to clean 0.04. The seven design characteristics include secure lid (DC1), easy to use, high quality insulation level, attractive body material, exterior color & texture, secure holding, and stable base; their weights of importance (D, row 11) are 3.28, 2.66, 4.15, 3.15, 2.35, 2.1, 1.57, with relative weights (d, row 12) of 0.17, 0.14, 0.22, 0.16, 0.12, 0.11, 0.08. The individual CR-to-DC evaluation scores are not reliably recoverable from the extracted text.]
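The HoQ calculation behind Fig. 4, weight each evaluation score by the CR weight, sum per design characteristic, then normalize, can be sketched as follows. The CR weights are from Fig. 4; the 3x3 score matrix is a hypothetical excerpt for illustration, not the paper's full 9x7 matrix.

```python
# Sketch of the D and d rows of the HoQ (Section 3.2.1): the weight
# of importance of DC j is D_j = sum_i c_i * score[i][j]; relative
# weights d_j are D_j normalized so that sum(d) == 1.

cr_weights = [0.35, 0.15, 0.10]   # insulation, durability, fill/serve (Fig. 4)
scores = [                        # rows: CRs, columns: DCs (hypothetical 1-10 scores)
    [3, 1, 10],
    [3, 10, 1],
    [1, 8, 1],
]

D = [sum(c * row[j] for c, row in zip(cr_weights, scores))
     for j in range(len(scores[0]))]
total = sum(D)
d = [Dj / total for Dj in D]      # normalized relative weights

print([round(x, 2) for x in D], [round(x, 3) for x in d])
```

In the DSS, the resulting d values feed column 2 of the CRAM matrix and the FAST diagram.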

design cost for functions (column 6). The total design cost (DCot) is estimated using the product complexity model described in Section 2.3.3, i.e., Eqs. (1)–(3). The estimated span time (days) is calculated with Eq. (5) and entered in the last row of column 6. Actual product cost values (PrCo), which are derived from the functional performance specification as provided by product domain experts and shown in the FAST diagram (last column, Fig. 1), are entered into column 7 of the CRAM matrix. In column 8, the target unit cost, which is $8.00 for the thermoflask (glass bottle) and is set by marketing, is entered into the last row. The target cost of a DF (TCF) is obtained by multiplying the target unit cost by the relative weight of importance of the DC and the relative weight of importance of the DF for each function. For instance, the target cost of DF11 (stop contents) is calculated by multiplying the relative weight of the DC (0.170) by the relative weight of the DF (0.30) and by the target unit cost ($8.00) to obtain $0.41.

In addition, design engineering, actual product, and target costs are used to calculate the product cost improvement index (PCII) in column 9. The formula used in the calculation of the PCII is presented in the third row of column 9 in Fig. 3 and corresponds to the definition value = benefit/cost. The PCII indicates the potential improvement opportunities for a DF. If the PCII is greater than 1, the actual cost exceeds the target cost of the DF; in other words, a PCII greater than 1 indicates an opportunity for cost reduction, meaning effort should be directed towards reducing product cost. The PCII value for DF11 (stop contents, 1.23; row 2 in the CRAM matrix) indicates that if the cost is not reduced, it is very likely that there will be a budget overrun for this function. On the other hand, if the PCII value is smaller than 1, there is an opportunity for performance improvement of the DF within the given budget. The PCII for DF27 (avoid spill, 0.55; row 8 in the CRAM matrix) indicates that within the given target cost of $0.55, this function can be improved to increase customer satisfaction.

3.2.1.3. Risk analysis. Starting from column 10, risk attributes (RAs) relevant to a particular design are listed. These RAs are highly project specific, and their number and focus can vary from project to project. RAs frequently encountered in new product development include, but are not limited to, development cost, technical and timeliness risk, as shown for the thermoflask example; they represent the maximum unexpected (worst case scenario) increase in the development and product costs, and the maximum unexpected loss of sales due to a delay, respectively. In this context, risk represents the uncertainty for a particular cash


flow. As more knowledge and information become available during the design process, the uncertainty with regard to delivering a required product characteristic goes down. Thus, the risk level decreases and the certainty with regard to predicting cash flow goes up. The risk level (RA) is assessed for each function relative to each of the risk attributes on a scale of 100% in collaboration with the design and marketing people. In this case, a three point distribution function was used, which is frequently done when information regarding outcomes is scarce. In a three point distribution, only the extremities (the lowest, highest and most likely points) of a probability density function are defined. Although the shape of the probability density function is not completely known, these defined points are sufficient for analysis in most cases. The monetization process, which is explained in the following section, uses the worst case values.

3.2.1.4. Monetized risk analysis. Each of the risk attributes is monetized by using the results obtained so far in the analysis. As previously mentioned, the sales duration of the PLC of the thermoflask is assumed to be 1 year with a volume of 200,000 units. In column 13 of Fig. 3, the development cost risk is monetized by multiplying the design cost of the DF (column 6) by the risk level values in column 10 for each design function. For instance, the development cost risk of stop contents (DF11) (row 2 in the CRAM matrix) is $35.49 and is obtained by multiplying DCo11 ($147.87) by RA11 (0.24). The technical risk in column 14 is monetized by multiplying the actual product cost values (column 7), the expected sales volume (in units per year), and the duration of the PLC (in years) by the risk level values in column 11. In the case of the thermoflask, the expected sales volume is 200,000 units/year and the duration of the PLC is 1 year. The technical risk of stop contents (DF11) is $16,500.00, calculated by multiplying PrCo11 ($0.50) and 200,000 units/year over 1 year by RA11 (0.17). Finally, the timeliness risk in column 15 is estimated by considering the expected sales in the time window for a possible delay. Assuming a uniform sales volume for the thermoflask, the timeliness risk can be estimated by multiplying the percentage delay by the duration of the PLC and the expected revenue for the PLC. For a high volume consumer product, loss of sales constitutes the highest risk. For the thermoflask example, the timeliness risk is $109,090.91, estimated by multiplying the risk level (0.05) in column 12 and the sales duration of the PLC (1 year) by the expected revenue ($2,200,000). This represents the loss of sales for 18 days. In column 16, the monetized risk levels for the DFs are obtained by summing the monetized risk values in columns 13, 14 and 15. In column 17, the monetized risk estimation process is taken down to the design characteristic level by summing the monetized


Integrated Evaluation Matrix (IEM) - Basic Design and Alternative Solutions

Customer evaluation scores (1-9), rows 1-9:

  Customer requirement (CRi)       Weight (ci)   Basic (Glass)   Alt. 1 (Steel)   Alt. 2 (Plastic)
  Good insulation (hot or cold)    0.35          7               8                6
  Durability (unbreakable)         0.15          2               9                6
  Easy to fill and serve           0.10          6               8                8
  Easy to open                     0.09          7               9                9
  Comfortable to handle            0.08          8               6                8
  Stable base                      0.08          8               8                8
  Does not spill                   0.06          8               9                7
  Nice look (appearance)           0.05          7               8                9
  Easy to clean                    0.04          6               8                7

Project metrics, rows 10-19:

  Total customer satisfaction value (CSL)          6.33         8.14         7.04
  Average sales/year (S) (units/year)              200,000      257,188      222,433
  Target unit cost (TCFt) ($/unit)                 8.00         11.00        7.00
  Total cost of product units/year (Ct) ($/year)   1,600,000    2,829,068    1,557,030
  Sale price/unit ($/unit)                         11.00        14.50        9.00
  Yearly benefits (YB) ($/year)                    2,200,000    3,729,226    2,001,896
  Benefits/cost (YB/Ct)                            1.38         1.32         1.29
  Monetized risk, worst case (MR) ($/year)         484,983      576,039      527,934
  Total certainty level for all DCs (DCet)         0.780        0.846        0.736
  Integrated evaluation index (IEI)                1.07         1.11         0.95

Normalized project metrics (risk adjusted), rows 20-25, with relative weights of importance (wPMf):

  Project metric (PMf)                       wPMf    Basic    Alt. 1    Alt. 2
  Differential cost improvement (NdCIIv)     0.05    1.00     0.18      1.97
  Design effort (NDCov)                      0.20    1.00     0.98      1.11
  Span time (NSPv)                           0.10    1.00     0.91      1.10
  Profit (NPrfv)                             0.20    1.00     1.63      0.70
  Total satisfaction [N(CSL+IEI)v]           0.35    1.00     1.16      1.00
  Certainty level (NDCev)                    0.10    1.00     1.08      0.94

Expected profit range, rows E1-E3 ($/year):

  Minimum expected profit        115,017    324,119    -83,069
  Maximum expected profit        600,000    900,158    444,866
  Most likely expected profit    372,366    646,787    62,123

Fig. 5. Integrated Evaluation Matrix with thermoflask data.
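The monetized risk components of Section 3.2.1.4 can be sketched in code. This is a minimal sketch using the thermoflask figures; the function names are illustrative rather than part of the published CRAM spreadsheet, and the 18/363 delay fraction is an assumption chosen because it reproduces the published timeliness figure of $109,090.91.

```python
# Monetized risk for one design function (DF11, "stop contents") and for the
# timeliness of the whole product, using the thermoflask figures from the text.

def development_cost_risk(design_cost, risk_level):
    # Column 13: design cost of the DF times its development cost risk level.
    return design_cost * risk_level

def technical_risk(product_cost, units_per_year, plc_years, risk_level):
    # Column 14: product cost over the whole PLC times the technical risk level.
    return product_cost * units_per_year * plc_years * risk_level

def timeliness_risk(expected_revenue, plc_years, delay_fraction):
    # Column 15: expected sales lost during a possible delay,
    # assuming a uniform sales volume over the PLC.
    return expected_revenue * plc_years * delay_fraction

print(round(development_cost_risk(147.87, 0.24), 2))      # DCo11 x RA11 = 35.49
# The paper reports $16,500.00; with the rounded RA11 of 0.17 this gives ~17,000.
print(round(technical_risk(0.50, 200_000, 1, 0.17), 2))
print(round(timeliness_risk(2_200_000, 1, 18 / 363), 2))  # 18 days of lost sales
```

A DF's total monetized risk (column 16) is then the sum of these three components.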


S. Zhao et al. / Computers & Industrial Engineering 76 (2014) 183–192

Table 1
Summary of normalized project metrics (superscript v denotes alternative v = 1, 2, ..., q; superscript 0 denotes the basic design).

Differential cost improvement index (NdCII^v): relative measure of PCII as a percentage of the performance of PCII for the basic design.
  NdCII^v = Abs[(PCII_t^v / PCII_t^0) x (DCe_t^v / DCe_t^0) - 1]

Design effort (NDCo^v): required design effort as a percentage of the required design effort for the basic design.
  NDCo^v = (DCo_t^v / DCe_t^v) / (DCo_t^0 / DCe_t^0)

Span time (NSP^v): required span time as a percentage of the required span time for the basic design.
  NSP^v = (SP_t^v / DCe_t^v) / (SP_t^0 / DCe_t^0)

Profit (NPrf^v): expected profit as a percentage of the expected profit for the basic design.
  NPrf^v = [(YB^v - C_t^v) x DCe_t^v] / [(YB^0 - C_t^0) x DCe_t^0]

Total satisfaction [N(CSL+IEI)^v]: expected total satisfaction as a percentage of the expected total satisfaction for the basic design.
  N(CSL+IEI)^v = (CSL^v / CSL^0 + IEI^v / IEI^0) / 2

Certainty level (NDCe^v): relative total certainty as a percentage of the total certainty for the basic design.
  NDCe^v = DCe_t^v / DCe_t^0
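The profit, total satisfaction, and certainty formulas of Table 1 can be checked against the Alt. 1 (steel bottle) column of the Integrated Evaluation Matrix; this is a sketch with illustrative function names, using the published Fig. 5 values.

```python
# Risk-adjusted normalized metrics from Table 1, checked against the
# Alt. 1 (steel bottle) column of Fig. 5. Suffix _0 = basic design (glass),
# suffix _v = alternative v.

def n_profit(yb_v, ct_v, dce_v, yb_0, ct_0, dce_0):
    # NPrf^v = ((YB^v - Ct^v) * DCe^v) / ((YB^0 - Ct^0) * DCe^0)
    return ((yb_v - ct_v) * dce_v) / ((yb_0 - ct_0) * dce_0)

def n_total_satisfaction(csl_v, iei_v, csl_0, iei_0):
    # N(CSL+IEI)^v = (CSL^v / CSL^0 + IEI^v / IEI^0) / 2
    return (csl_v / csl_0 + iei_v / iei_0) / 2

def n_certainty(dce_v, dce_0):
    # NDCe^v = DCe^v / DCe^0
    return dce_v / dce_0

print(round(n_profit(3_729_226, 2_829_068, 0.846,
                     2_200_000, 1_600_000, 0.780), 2))   # 1.63
print(round(n_total_satisfaction(8.14, 1.11, 6.33, 1.07), 2))  # 1.16
print(round(n_certainty(0.846, 0.780), 2))               # 1.08
```

The three results match rows 23-25 of Fig. 5 for the steel bottle (1.63, 1.16, 1.08).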

[Spider chart, "Comparison of Risk Integrated Project Metrics": the six normalized project metrics (differential cost improvement index NdCIIv, design effort NDCov, span time NSPv, profit NPrfv, total satisfaction N(CSL+IEI)v, and certainty level NDCev) plotted for the basic design (glass bottle), Alt. 1 (steel bottle), and Alt. 2 (plastic bottle); resource metrics are grouped to be minimized and benefit metrics to be maximized.]

Fig. 6. Comparison of normalized project metrics for thermoflask alternative solutions.

development cost and technical risk levels with the maximum of the monetized timeliness risk for each design characteristic. Finally, the total monetized risk level of the design (last row of column 17), which represents the worst case scenario, is obtained by summing all the monetized design cost and technical risk levels with the maximum monetized timeliness risk for the whole product. After the completion of the CRAM matrix for the basic design, the method is repeated for each alternative design solution. For the thermoflask example, the alternatives substitute a steel and a plastic bottle for the glass one. The results are presented in an evaluation matrix, which is explained in the following section.

[Bar chart of minimum, maximum, and most likely expected profit ($/year, from -$200,000 to $1,000,000) for the basic design (glass bottle), Alt. 1 (steel bottle), and Alt. 2 (plastic bottle).]

Fig. 7. Expected profit range chart for thermoflask alternative solutions.

3.3. Evaluation of alternatives

As explained at the beginning of Section 3.2, two alternative thermoflask solutions with steel and plastic containers are analyzed. Both the FAST diagram and the CRAM steps are repeated for these alternative solutions. Thus, the target total cost, total risk level, and total design cost are determined. These values are shown in the Integrated Evaluation Matrix (Fig. 5); all three solutions are evaluated against the customer requirements in rows 1 through 9. The weighted average of the customer evaluation scores is calculated to obtain the customer satisfaction level for each alternative (row 10). The customer satisfaction level values are used to


estimate the average sales per year for each solution. By using these values as well as the target unit cost and the sale price, the benefit over cost value is calculated (row 16). In row 18, the total certainty level is calculated by subtracting the ratio of monetized risk to yearly benefits from 1. In row 19, the integrated evaluation index, which represents company satisfaction, is obtained by multiplying the benefit over cost by the total certainty level. At this point, the index shows that the steel bottle is the best solution.
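The chain from customer scores to the integrated evaluation index can be sketched for the basic design (glass bottle); the variable names are illustrative, and the weights, scores, and dollar amounts are the published Fig. 5 values.

```python
# Company-satisfaction chain of Fig. 5 for the basic design (glass bottle):
# weighted CSL (row 10), benefit/cost (row 16), certainty (row 18), IEI (row 19).

weights = [0.35, 0.15, 0.10, 0.09, 0.08, 0.08, 0.06, 0.05, 0.04]  # ci, rows 1-9
glass_scores = [7, 2, 6, 7, 8, 8, 8, 7, 6]  # customer evaluation scores, 1-9

csl = sum(w * s for w, s in zip(weights, glass_scores))  # weighted average

yearly_benefits = 2_200_000   # YB ($/year)
total_cost = 1_600_000        # Ct ($/year)
monetized_risk = 484_983      # MR, worst case ($/year)

benefit_over_cost = yearly_benefits / total_cost       # row 16
certainty = 1 - monetized_risk / yearly_benefits       # row 18
iei = benefit_over_cost * certainty                    # row 19

print(round(csl, 2))                # 6.33
print(round(benefit_over_cost, 2))  # 1.38
print(round(certainty, 2))          # 0.78
print(round(iei, 2))                # 1.07
```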

metrics could also be taken into consideration, depending on the project at hand. In order to perform an assessment relative to the basic design, the project metrics are normalized and sorted into two categories: resources required and benefits expected. A comparison of the design alternatives for the thermoflask example is shown in Fig. 6, where alternative 1 (steel bottle) emerges as the best solution since it requires the fewest resources and provides the highest benefits. Additionally, a chart of the expected profit for the different alternatives is constructed from the best, most likely, and worst case scenarios used in the risk estimation. The expected profits for the three scenarios are provided at the bottom of the Integrated Evaluation Matrix (rows E1-E3, Fig. 5) and shown in

3.3.1. Normalized project metrics The characteristics of successful development projects were presented in Section 2.1. The Integrated Evaluation Matrix specifically addresses these characteristics with six metrics (rows 20 to 25, Fig. 5) that are defined in detail in Table 1. Additional project

Development Matrix (DM) - Different Configurations of Best Solution (Steel Bottle)

Design characteristics (DCj) with relative weights of importance (dj), their design functions (DFjk) with relative weights (fjk), and base target costs (TCFjk) for configuration DConf11 (total target cost $11.00); the target costs in columns DConf12 through DConf19 scale proportionally as the total target cost (TCFt) rises to $13.01:

  secure lid (DC1), d1 = 0.170: stop contents (0.30, $0.56); allow mouth open (0.30, $0.56); keep temperature (0.20, $0.37); contain contents (0.10, $0.19); satisfy temp. & press. (0.10, $0.19)
  easy to use (DC2), d2 = 0.138: allow flow (0.50, $0.76); avoid spill (0.50, $0.76)
  high insulation level (DC3), d3 = 0.215: keep temperature (1.00, $2.37)
  quality body material (DC4), d4 = 0.163: avoid breakage (0.50, $0.90); satisfy press. with light weight (0.30, $0.54); clean easily (0.15, $0.27); keep temperature (0.05, $0.09)
  secure holding (DC5), d5 = 0.122: grip bottle (0.70, $0.94); be comfortable (0.30, $0.40)
  stable base (DC6), d6 = 0.110: stay stable (1.00, $1.21)
  attractive ext. color & texture (DC7), d7 = 0.081: attract visually (0.60, $0.54); satisfy feeling of touch (0.40, $0.36)

  Total target cost (TCFt) for DConf11 through DConf19 ($): 11.00, 11.18, 11.48, 11.79, 11.99, 12.29, 12.50, 12.70, 13.01
  (Colour coding in the matrix flags the change type of each modification: ripple, blossom, or avalanche.)

Development Matrix - Integrated Evaluation Matrix Section

Customer evaluation scores (1-9) and summary rows for the basic design and the steel bottle configurations:

  Row / Configuration                    Glass   DConf11 DConf12 DConf13 DConf14 DConf15 DConf16 DConf17 DConf18 DConf19
  Target unit cost ($)                   8.00    11.00   11.18   11.48   11.79   11.99   12.29   12.50   12.70   13.01
  1 Good insulation (hot or cold)        7       8       8       8.2     8.5     8.8     9       9       9       9
  2 Durability (unbreakable)             2       9       9       9       9       9       9       9       9       9
  3 Easy to fill and serve               6       8       8.2     8.3     8.7     9       9       9       9       9
  4 Easy to open                         7       9       9       9       9       9       9       9       9       9
  5 Comfortable to handle                8       6       6.2     6.6     7.2     8       8.5     8.5     9       9
  6 Stable base                          8       8       8       8.1     8.3     8.6     8.8     9       9       9
  7 Does not spill                       8       9       9       9       9       9       9       9       9       9
  8 Nice look (appearance)               7       8       8.1     8.3     8.6     9       9       9       9       9
  9 Easy to clean                        6       8       8       8.2     8.5     8.7     9       9       9       9
  10 Total customer satisfaction (CSL)   6.33    8.14    8.18    8.32    8.56    8.81    8.94    8.96    9.00    9.00
  11 Sale price/unit ($)                 11.00   14.50   14.70   15.00   15.30   15.50   15.80   16.00   16.20   16.50
  12 Benefits/cost (B/TCF)               1.38    1.32    1.31    1.31    1.30    1.29    1.29    1.28    1.28    1.27
  13 Total certainty level (DCet)        0.78    0.85    0.89    0.93    0.91    0.90    0.88    0.84    0.81    0.78

Normalized project metrics (rows 14-16), each normalized to the maximum value in its row:

  14 Total customer satisfaction (CSL)   0.703   0.904   0.909   0.924   0.951   0.978   0.994   0.996   1.000   1.000
  15 Profit margin (PrfM)                1.043   1.000   0.997   0.991   0.984   0.981   0.975   0.971   0.968   0.962
  16 Certainty level (NDCe)              0.835   0.905   0.949   1.000   0.976   0.968   0.940   0.903   0.863   0.835

Fig. 8. Development matrix and Integrated Evaluation Matrix for different steel bottle configurations.
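The target cost figures in the development matrix are consistent with allocating the total target cost down the hierarchy, i.e. TCFjk = TCFt x dj x fjk. This allocation rule is inferred from the published numbers rather than stated explicitly; the sketch below reproduces a few cells of the matrix under that assumption.

```python
# Allocation of a configuration's total target cost (TCFt) to a design
# function: split first across design characteristics by weight d_j,
# then across that characteristic's design functions by weight f_jk.
# Inferred rule; it reproduces the published development matrix cells.

def df_target_cost(tcf_total, dc_weight, df_weight):
    return tcf_total * dc_weight * df_weight

# "stop contents" under "secure lid" (d1 = 0.170, f = 0.30)
print(round(df_target_cost(11.00, 0.170, 0.30), 2))  # 0.56 for DConf11
print(round(df_target_cost(13.01, 0.170, 0.30), 2))  # 0.66 for DConf19

# "grip bottle" under "secure holding" (d5 = 0.122, f = 0.70)
print(round(df_target_cost(11.00, 0.122, 0.70), 2))  # 0.94 for DConf11
```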


Fig. 7, where the steel bottle is far better than the other solutions with a most likely profit of about $650,000 and a worst case profit of $325,000.
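The expected profit ranges in rows E1-E3 appear to follow directly from the yearly benefits, total cost, and monetized risk of Fig. 5: the maximum expected profit is YB minus Ct, and the minimum subtracts the worst-case monetized risk; the most likely value would use the most likely monetized risk, which is not tabulated. This inference, with an illustrative function name, is checked below for two of the designs.

```python
# Expected profit range (rows E1-E3 of Fig. 5), inferred relationship:
# maximum = yearly benefits - total cost; minimum = maximum - worst-case
# monetized risk. The most likely value (not computed here) would use the
# most likely risk estimate from the three-point distributions.

def profit_range(yearly_benefits, total_cost, worst_case_risk):
    maximum = yearly_benefits - total_cost
    minimum = maximum - worst_case_risk
    return minimum, maximum

print(profit_range(2_200_000, 1_600_000, 484_983))  # basic:  (115017, 600000)
print(profit_range(3_729_226, 2_829_068, 576_039))  # steel:  (324119, 900158)
```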

3.4. Evaluation of different configurations

Different configurations of the best solution can be evaluated using a development matrix and the corresponding Integrated Evaluation Matrix, as shown in Fig. 8. In the first part of the development matrix, configurations of the best solution, the steel bottle, with different target cost structures are shown. In column 5, the target cost structure of the basic design (glass bottle) is presented as a reference, so that decision makers can easily compare the different configurations (DConf). Columns 6-14 (Fig. 8) show target cost structures ranging from $11.00 to $13.01; the differences among these configurations are minor changes to the details of the steel bottle in the thermoflask design. Following the approach presented in the previous section, each configuration is evaluated against the customer requirements in the Integrated Evaluation Matrix section of Fig. 8. The data in rows 10 through 13 are estimated in the same way as in Section 3.3. With changing cost structures, the risk analysis may need to be repeated to accurately capture design certainty.

In rows 14, 15, and 16 of the Integrated Evaluation Matrix (Fig. 8), the three performance metrics are normalized with respect to the maximum value in each category, and the best performing normalized value for each metric is highlighted in grey. For instance, the normalized customer satisfaction value for design configuration DConf11 is obtained by dividing 8.14 by 9.00, the maximum value in the same row. These metrics are presented in raw form rather than as risk-integrated data to prevent the bias that modifying the data could introduce. The metrics are plotted simultaneously against the target cost of each configuration in Fig. 9; in this way, managers can easily see which configuration should be developed further in the detailed design phase.

In the case of the thermoflask example, the company aims to obtain a balance between customer and company satisfaction to maintain a positive image and a sustainable market share, while taking medium risk. Thus, the intersection point of the customer and enterprise satisfaction curves, which also has a medium risk of 0.97, is selected as the optimal point. The closest configuration to the optimal point is DConf15, which is chosen for development in the detail design phase.

4. Discussion and conclusion

In response to customers' rising demands for more customization and quality, companies are making more frequent changes to their products. In this paper, a framework for a comprehensive DSS that provides managers with information about the impact of design changes was described and illustrated with a simple example of a thermoflask. The DSS helps product development managers to assess project performance metrics, such as development effort, development time, product cost and revenue, customer satisfaction, profit margin, and risk. The system allows these performance metrics to be recalculated when an engineering change occurs during the creation of new design solutions; thus, different design solutions can be assessed by comparing the differences in their performance metrics. The DSS integrates the House of Quality (HoQ) from QFD, the Functional Analysis System Technique (FAST), and risk assessment. The research does not change these three methods, but integrates them into a new tool for product and project assessment during product development. The goal is to increase product knowledge during the different stages of design by calculating the effects of engineering change, and thus, to support decision making by design managers. The DSS quantitatively integrates product cost, development cost, development time, development risk, and other quantified performance indicators using the Cost and Risk Analysis Method (CRAM). The CRAM organizes these performance metrics as normalized values in a matrix with respect to a baseline design, which makes comparison with other design solutions possible in the Integrated Evaluation Matrix. Through the use of a spider diagram, these performance metrics can easily be compared to those of the basic design. The DSS also makes extensive use of other visual aids, such as colour coding and information charts, to significantly reduce the overall effort needed to interpret different design solutions. 
The relative comparison of project performance metrics allows the tracking of cost and risk as change occurs during

[Line chart, "Project Metrics vs. TCF - Configurations of Best Alternative Solution": the normalized total customer satisfaction value (CSL), profit margin (PrfM), and certainty level (NDCe) of configurations DConf11 through DConf19, plotted against the target cost TCF from $11.00 to $13.01.]

Fig. 9. Evaluation of different steel bottle configurations (TCF1) for project metrics: customer satisfaction level (CSL), profit margin, and certainty level versus total cost of functions.


product design. Furthermore, the information from previous projects forms a valuable database that can be used in tracking design change. The CRAM requires a large amount of product data for its calculations. Much of this data can be obtained from product data management systems (PDMS) or product lifecycle management (PLM) systems, and integration of a spreadsheet tool into corporate systems is straightforward. Likewise, the results of the calculations, such as the risk analysis, can be exported for use with other business systems. A computerized tool like the CRAM is not used in isolation; it fits into a collaborative product development environment. As noted earlier in the paper, much of the input data (function analysis, relative weights, risk analysis, etc.) is the result of good teamwork among the experts doing the product development, so the decision making supported by the CRAM is a collaborative affair. It should be noted that the method can take into account situations with changing environments. Through the use of statistical techniques, the subjectivity in risk assessment is significantly lowered. The assessment method increases objectivity in order to decrease both inaccuracy and bias in the estimates, and thus, to enhance sensitivity analysis. Each organization tries to obtain a balance between customer and enterprise satisfaction levels; by using the CRAM, an appropriate balance can easily be determined at different design stages. However, to assure good results, risk indicators need to be defined very carefully. Although the approach provides generic qualities for assessing the risk of change, the implementation of the tool is highly product specific and requires effort to obtain the appropriate data. One area for future research is to develop standard calculation procedures to reduce the amount of data manipulation by the user. 
Since the tool is meant to be used continuously during product development, easing data lookup and calculation will encourage its use. The CRAM tool is currently available in MS Excel as freeware in a beta version; contact the authors for a copy.

Acknowledgement

The authors would like to acknowledge the funding received from the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Consortium for Research and Innovation in Aerospace in Quebec (CRIAQ) used to support the research described in this article.
