Tuesday, March 28, 2006

Design For Six Sigma Tools

What are the main tools used in DFSS? That's a huge question with no single answer. DFSS places strong emphasis on customer analysis, the translation of customer needs and requirements down into process requirements, and minimizing defects, costs, and time. A few tools are common to any DFSS methodology, but the use of other tools will vary according to whether the design is of a product, a service, or a process, the specific context, and the priorities.

A book of this scope cannot present all of the DFSS tools in sufficient detail. Most of the tools in this chapter and the next easily warrant a chapter in themselves; in fact, some have filled entire books. Our purpose here is simply to provide brief explanations of the more important, generally used DFSS tools. The tools are presented here in roughly the order in which they are typically used, although that order varies according to the version of DFSS, the project, and the context. As we've noted in Chapter 3, some tools are useful throughout the DFSS process.

To go beyond what we present here, perhaps the best resources, other than your MBB and BBs, are two books by Thomas Pyzdek, The Six Sigma Handbook (McGraw-Hill, 2000) and The Six Sigma Project Planner (McGraw-Hill, 2003), and Design for Six Sigma in Technology and Product Development by Clyde M. Creveling, Jeffrey Lee Slutsky, and David Antis Jr. (Prentice Hall, 2002).

Phase-Gate Project Reviews

As mentioned in Chapter 3, it's important to set up a phase-gate review for each phase of a new project; this is also known as a stage-gate review, a project review, a phase review, a project tollgate review, or a project milestone review. Through this mechanism, a multi-functional management team, often consisting of vice presidents or directors, reviews and assesses the project at the end of each phase. According to the plan set forth in the project charter and the criteria defined for the project, the managers review timelines, check deliverables, and then decide whether the project is successful enough to merit further expenditure of resources.

Each stage-gate review should have well-defined entry criteria or deliverables, an objective, and an agenda. All members of the project team should understand the purposes of the reviews and prepare for them appropriately. Not only must the phase-gate process be set up, but the organization also needs the discipline to use it. We frequently find organizations whose processes are well thought through and well documented but not rigorously followed. If an activity is identified as necessary to complete a phase, then it should be done, not waived, as we sometimes see.

Charles Waxer, in "Successful Six Sigma Project Reviews" (www.isixsigma.com/library/content/c010520a.asp), emphasizes the importance of communication in Six Sigma projects: "Communication starts with the project charter and lives through project reviews." He states that project reviews are "simply status checks"— occasions for evaluating the status of the project relative to the plan set forth in the project charter, reviewing timelines, validating proper use of Six Sigma tools, and monitoring key progress deliverables.

He offers the following suggestions for the team leader, to ensure that the team gains maximum benefit from the review:
Monitor the progress of the project. Make sure that all members of the team are focused on the phase objectives and working together.
Provide guidance. A good leader guides the team members without giving the impression of doubting their competence and undermining their confidence and morale.
Align activities with the objectives. Working together necessitates communication among team members, to make sure that all are making appropriate progress.
Show support. The team leader motivates and energizes team members by facilitating and by providing advice and resources.
Eliminate obstacles. Project reviews allow the opportunity for team members to identify obstacles for the champion to eliminate.
Share lessons learned. Team members and leaders of other project teams can share and discuss ways to work more effectively and efficiently.
Recognize results. Treat each phase as a mini project, treat progress as a partial success, and recognize the team members for their contributions.

It's sometimes necessary to kill a project. Sometimes a project that looked good early on turns out to have flaws. Many organizations have a very difficult time dropping projects. But killing a project early is smarter than putting a "dog" out in the marketplace.

Benchmarking

Benchmarking in DFSS can be part of selecting a project and also a means of eliminating potential failure modes from the design. As mentioned in Chapter 4, there are basically three types of benchmarking:

• internal—identifying the best business practices within your own organization and sharing them across the functional areas
• competitive—researching the products, services, and processes of direct and/or best-in-class competitors, to identify and describe them as a basis of comparison for your own products, services, and processes
• functional—identifying and examining the activities of organizations considered to be the best in class in some function

As noted in Chapter 3, benchmarking best practices may not help in developing a design, and may actually hurt, since sometimes comparisons—even with the best—can hinder creativity.

Knowing that something is the best can be intimidating.

Measurement System Analysis

Good measurement is absolutely fundamental to Six Sigma projects. When the project team identifies channels for process capability data, it should conduct a measurement system analysis (MSA) on the means of measuring process capability. After all, if you do not ensure that your measurement system is both accurate and precise every time it is used, you risk working with flawed data. And your results are only as good as your data.

There are three key concepts in measurement:

• Validity: A measurement or a method of measuring is valid if it represents the measured feature usefully or appropriately.
• Precision: A measurement or a method of measuring is precise if repeated measurements of the same feature vary little or not at all.
• Accuracy: A measurement or a method of measuring is accurate (or unbiased) if measurements on average produce true or correct values.

Precision is largely an intrinsic property of a measurement method or device. There is not really any way to adjust for poor precision or to remedy it except to overhaul or replace measurement technology or to average multiple measurements.
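To see why averaging helps, here is a minimal sketch, not from the book, with an assumed true value and an assumed single-reading standard deviation: the spread of an average of n independent readings shrinks by roughly the square root of n.

# Minimal sketch: averaging n repeated readings reduces random
# measurement error by roughly sqrt(n). All values are assumed.
import random, statistics

random.seed(0)
true_value = 10.0
sigma = 0.5              # standard deviation of a single reading (assumed)

def reading():
    return random.gauss(true_value, sigma)

for n in (1, 4, 16):
    averages = [statistics.mean(reading() for _ in range(n)) for _ in range(2000)]
    print(n, round(statistics.stdev(averages), 3))   # roughly sigma / sqrt(n)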

As discussed in Chapter 3, MSA consists of procedures using an experimental, mathematical method for detecting and quantifying the extent to which any variation within the measurement process contributes to overall process variability. An MSA generally involves the following factors:

repeatability: The degree of variation among repeated measurements of the same characteristic by the same appraiser with the same instrument; the ability of an instrument to reproduce a reading consistently.
reproducibility: The degree of variation in the averages among appraisers repeatedly measuring the same characteristic of a single part.
stability: The degree of variation in measurements when a known, constant input (a single characteristic) is measured over an extended time span.
linearity: The difference in bias values (or repeatability) across the expected operating range of the measuring instrument; in other words, the consistency of the measurement system across its range.
bias: The difference between the observed average of measurements and the reference value. This offset or drift from "zero" can be compensated for by applying a consistent "bias factor" to all measurements.
discrimination: The ability to distinguish differences, which requires enough increments for differences to be measured. It's ideal to have 10 possible values between limits; it's only marginally useful to have five possible values; and fewer possible values do not allow adequate discrimination.

Voice of the Customer (VOC)

Design should begin with the customers. DFSS focuses on determining what customers require and value. "Voice of the customer" (VOC) is a term that describes stated and unstated customer needs or requirements. It's important to keep in mind that customer voices are diverse.

Within any organization, there are multiple customer voices: the procuring unit, the user, and the supporting maintenance unit. Within any of those units, there may also be multiple customer voices. The project team must consider these diverse voices in developing designs.

As mentioned in Chapter 3, there are many ways to capture the voice of the customer (VOC): surveys, focus groups, one-on-one interviews, contextual inquiry, customer specifications, field reports, complaint logs, conjoint analysis, and so on. The next step is to process that input: to document, organize, and analyze the information. Again, project teams have a wide range of tools from which to choose; here we'll touch upon a few of the most commonly used.

How Much Is Too Much?

Acceptability of measurement errors revealed through MSA will depend on many factors, particularly the nature of the specifications and the customer. But you should be familiar with any guidelines used in your specific industry. Here, for example, are the guidelines for measurement system acceptability advocated by the Automotive Industry Action Group (AIAG), applied in the brief sketch that follows the list:

• under 10% error is acceptable
• 10% to 30% error may be acceptable, depending on the importance of the application, the cost of the measurement device, the cost of repair, and other factors
• over 30% error is unacceptable; you should improve your measurement system
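As a rough illustration of how these guidelines are applied (a sketch with assumed variance figures, not a full AIAG gage R&R study), the percentage error is typically the measurement-system variation expressed as a share of the total observed variation:

# Minimal sketch: express gage R&R as a percentage of total variation
# and apply the AIAG guidelines quoted above. Variance figures are assumed.
import math

var_repeatability = 0.0004    # equipment variation (assumed)
var_reproducibility = 0.0002  # appraiser variation (assumed)
var_part_to_part = 0.0100     # true part-to-part variation (assumed)

var_grr = var_repeatability + var_reproducibility
var_total = var_grr + var_part_to_part
pct_grr = 100 * math.sqrt(var_grr / var_total)

if pct_grr < 10:
    verdict = "acceptable"
elif pct_grr <= 30:
    verdict = "conditionally acceptable (depends on application, cost, risk)"
else:
    verdict = "unacceptable -- improve the measurement system"

print(f"%GRR = {pct_grr:.1f}% -> {verdict}")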

Customer Survey Analysis

A survey is used to gather information from a sample of individuals, usually a fraction of the population being studied. In a bona fide survey, the sample is scientifically chosen so that each person in the population will have a measurable chance of selection. Surveys can be conducted in various ways, including over the telephone, by mail, and in person. Focus groups and one-on-one interviews are popular types of in-person surveys.

Without surveying the customers adequately, it's difficult to know which features of a product or a service will contribute to its success or failure—or to understand why.

A well-designed customer survey should ask:
• What was expected or wanted?
• What was experienced?
• What is your level of satisfaction with the product?
• What is the relative importance of this variable?

Surveys are useful in some situations, but they're weak in terms of getting the types of data necessary for a new design.

That's why teams are using contextual inquiry, a newly popular approach to getting at root customer needs.

Contextual inquiry, as defined in Chapter 3, is a structured qualitative market research method that uses a combination of techniques to discover customer needs through observing and interviewing people in the context of using the product or service.

Briefly, contextual inquiry involves having small teams observe how customers or potential customers interact with products. (This is sometimes referred to as "living with the customer.")

The team makes notes of the likes, dislikes, and frustrations that arise from these interactions. Customers are often eager to suggest technical solutions to the problems they're experiencing, and it's important to make note of any suggested solution.

However, customers typically do not know what the best technical solutions are, so it's better to leave that to your engineers. But make sure the engineers grasp the fundamental, underlying need that the customers want satisfied.

Affinity Diagramming

After the project team has gathered information on customer requirements and expectations, it needs to organize the information. If it needs to consolidate disparate items from interview notes, survey results, market research, and so forth into a selection of essential statements, affinity diagramming can be a useful tool. Team members transcribe onto cards brief statements of customer requirements and expectations. They then organize these cards into logical groupings of related needs. This makes it easier to identify similarities and redundancies and ensure that key needs are represented. This is also known as the KJ method, after its inventor, anthropologist Kawakita Jiro.

Quality Function Deployment (QFD)

Quality function deployment (QFD) is a natural part of most DFSS strategies. It's a systematic approach for translating the VOC into design requirements. The project team uses QFD to translate what customers need and want into critical technical characteristics and ultimately into critical-to-quality characteristics (CTQs) at each stage, from research through to marketing, sales, and distribution.

QFD was developed in the late 1960s in Japan by Professors Shigeru Mizuno and Yoji Akao. At the time, statistical quality control was gaining acceptance as a means of improving quality, and Mizuno and Akao wanted to develop a quality assurance method that could be applied in the design process.

To benefit more fully from QFD analysis, the project team begins with the Kano model, then uses the VOC table (VOCT), and follows up with the house of quality (HOQ).

Kano Model

Developed in the 1980s by Professor Noriaki Kano, the Kano model is based on concepts of customer quality and provides a simple ranking system that distinguishes between essential and differentiating attributes. Simply put, the Kano Model is used to assess levels of customer satisfaction.

It's a conceptual model for understanding that customers have a range of requirements for any product or service. It divides those requirements into three categories, according to customer input on Kano questionnaires:

• basic requirements (unspoken), dissatisfiers
• variable requirements (spoken), satisfiers
• excitement attributes (unspoken), latent requirements, delighters

Kano model A diagram in the form of a cross in which each customer requirement for a product or a service is placed in one of three classes: basic requirements or dissatisfiers, variable requirements or satisfiers, and excitement attributes or latent requirements or delighters. This method was devised by quality expert Noriaki Kano.

Basic requirements must be present in order for the product to be successful. These minimum requirements can be viewed as the price of entry. Variable requirements are directly correlated with customer satisfaction. Attributes in this category will please the customer more or less depending on the degree to which they are present in the product or service. Finally, excitement attributes are features that go beyond what customers might ordinarily expect, that give them great satisfaction, and for which they're willing to pay a premium price. In designing a product or a service, the DFSS team must account for all three categories to ensure a successful business strategy.

According to the Kano Model, not all characteristics are equal in the eyes of the customer.

Furthermore, just responding to customer complaints and spoken requirements is not enough. We must be proactive in seeking out customer needs and we must remain aware that those needs will change over time.

Driving Forces

If you're designing a car, you might consider the following among hundreds of features (a small tabulation sketch follows the list):

• engine, brakes, wheels: basic requirements or dissatisfiers—A car must have these minimum features or customers will be dissatisfied.
• price, performance, choices of color: variable requirements or satisfiers—A car will satisfy customers more or less according to how low it's priced, how well it performs, and how many colors the company offers.
• talking controls, satellite navigation system: latent requirements or delighters—A car will delight or excite customers if it offers either or both of these features.
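The sketch below simply tabulates the car example in Kano terms; the feature names and categories restate the list above, and in practice the categories would come from customer answers to Kano questionnaires.

# Minimal sketch: tabulating Kano categories for a handful of car features.
KANO_CATEGORIES = {
    "basic": "unspoken requirement; its absence dissatisfies",
    "variable": "spoken requirement; more is better",
    "excitement": "latent requirement; its presence delights",
}

features = [
    ("engine", "basic"),
    ("brakes", "basic"),
    ("price", "variable"),
    ("performance", "variable"),
    ("choice of colors", "variable"),
    ("talking controls", "excitement"),
    ("satellite navigation", "excitement"),
]

for name, category in features:
    print(f"{name:22s} {category:10s} ({KANO_CATEGORIES[category]})")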

Voice of the Customer Table

This tool allows a project team to record information about customer needs in a way that captures the context of those needs, enabling the team to better understand explicit and implicit customer requirements. The VOCT is useful not only for recording customer needs in context but also as a preliminary exercise before the team builds a QFD house of quality.

For each customer statement, the team enters a customer I.D. and demographic information (sex, age, location, etc.) and information about product or service use. The information is categorized in terms of basic questions—what, when, where, why, how—that provide a context for analyzing and understanding the customer statements. The team also indicates "external" and "internal," to distinguish between information gathered directly from customers and information generated within the organization. (See Figure 7-1.) The second part of the VOC table is for translating statements into requirements.
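As a minimal sketch of how one row of such a table might be captured electronically (the field names and sample values here are illustrative assumptions, not the book's template), each entry records the statement, its context, its source, and the translated requirement:

# Minimal sketch: one row of a voice-of-the-customer table.
from dataclasses import dataclass

@dataclass
class VocEntry:
    customer_id: str
    demographics: str        # e.g., sex, age, location
    statement: str           # verbatim customer statement
    what: str
    when: str
    where: str
    why: str
    how: str
    source: str              # "external" (from customers) or "internal"
    translated_requirement: str = ""   # filled in during the second part of the VOCT

entry = VocEntry(
    customer_id="C-104",
    demographics="female, 34, Chicago",
    statement="The printer jams when I rush a big job",
    what="print large documents",
    when="end of quarter",
    where="shared office printer",
    why="deadline reporting",
    how="batch printing from desktop",
    source="external",
)
entry.translated_requirement = "Paper path handles high-volume jobs without jamming"
print(entry)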

House of Quality

The house of quality (HOQ) helps the team members structure their thinking, reach a consensus about where to focus the attention of the organization, and communicate this information throughout the organization. This tool helps ensure that they don't leave anything out when they identify CTQs that are the source of customer satisfaction (in contrast with FMEA, in which they identify CTQs that are the source of customer dissatisfaction) at the system, subsystem, and component levels.

The HOQ is a process for ensuring that customer needs drive the entire development process, a tool for planning and organizing CTQs at all levels of the design, and a method for providing early focus and alignment for the development team.

It's an L-shaped matrix that allows a project team to quantitatively analyze the relationship between customer needs and design attributes. The HOQ is an assembly of deployment hierarchies and tables:

• the Demanded Quality Hierarchy (rows)
• the Quality Characteristics Hierarchy (columns)
• the relationship matrix, which uses one of several distribution methods (center of the house)
• the correlation matrix (roof)
• the Quality Planning Table (room at right)
• the Design Planning Table (bottom room)

There are four key phases in QFD, and the HOQ is the matrix for the first of them:

Phase 1. Product Planning: Translate what the customer wants into prioritized design requirements or top company measures.
Phase 2. Part Planning: Translate design requirements or company measures from phase 1 into part characteristics.
Phase 3. Process Planning: Translate part characteristics from phase 2 into key process characteristics that optimize the design.
Phase 4. Production Planning: Translate process characteristics from phase 3 into production and process requirements that will maximize your ability to provide the product or the service most efficiently.
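As a rough sketch of the arithmetic inside the house (the needs, characteristics, importance ratings, and relationship strengths below are all assumed for illustration), the relationship matrix lets the team roll customer importance ratings up into priorities for the quality characteristics:

# Minimal sketch: rolling customer importance through an HOQ relationship
# matrix to rank quality characteristics. All numbers are assumed.
customer_needs = {          # demanded quality (rows) and importance (1-5)
    "easy to carry": 5,
    "long battery life": 4,
    "rugged housing": 3,
}
characteristics = ["weight (g)", "battery capacity (mAh)", "drop-test height (m)"]

# Relationship strengths, conventionally 9 = strong, 3 = moderate, 1 = weak, 0 = none.
relationships = {
    "easy to carry":      [9, 1, 0],
    "long battery life":  [0, 9, 0],
    "rugged housing":     [3, 0, 9],
}

priorities = [0] * len(characteristics)
for need, importance in customer_needs.items():
    for j, strength in enumerate(relationships[need]):
        priorities[j] += importance * strength

for name, score in sorted(zip(characteristics, priorities), key=lambda p: -p[1]):
    print(f"{name:25s} {score}")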

QFD: Current Use and Potential

QFD is a valuable tool for ensuring that customer needs drive the entire design development process. It's effective for focusing and aligning the project team very early in the Identify phase of DFSS, identifying gaps and targets, and planning and organizing requirements at all levels of the design.

We find many organizations have had bad experiences with QFD. Here are recommendations for avoiding some common errors we've seen in the use of QFD:

• Don't assume you already know customer needs and skip the Concept Engineering steps.
• Don't assume you already know how important each CTC is to customers and decide not to survey customers for this information.
• Make sure your project teams are cross-functional.
• Make sure to do "the roof" of the house of quality.
• Don't mix up CTQs and CTCs.
• Don't use CTQs that are not measurable.
• Don't attempt to conduct initial QFDs without a skilled facilitator.
• Don't try to itemize everything in the QFD. A rule of thumb is to limit your items to 20 x 20. In Design for Six Sigma in Technology and Product Development, Clyde Creveling, Jeffrey Lee Slutsky, and David Antis, Jr. suggest NUD: considering only new, unique, and difficult items in the QFD.


Project teams could make even greater use of QFD, according to two advocates. Stefan Schurr, MD of Qualica Software GmbH of Munich, Germany, and Craig T. Smith, vice president of Rath & Strong, recently proposed a more comprehensive QFD approach ("Using Comprehensive QFD Including Function, Reliability, and Cost as the Backbone for a Design for Six Sigma Strategy," International Symposium on QFD 2002).

They suggest using a system of 15 matrices to integrate and link QFD with Six Sigma scorecards, TRIZ, value analysis, analytic hierarchy process, experiment planning, functional analysis, target costing, New Concept Selection, and matrix-based FMEA. (Bob King of GOAL/QPC was using a version of this "matrix of matrices" in the early 1980s.)

Schurr and Smith suggest that when a project team builds a system of matrices to document relationships between needs and solutions, QFD becomes "a repository of product planning knowledge forming the central nervous system of DFSS." QFD and DFSS together can add control to any design process to ensure continually meeting customer requirements.

Analytic Hierarchy Process (AHP)

The Analytic Hierarchy Process is a tool for multi-criteria analysis that enables people to explicitly rank tangible and intangible factors against each other in order to establish priorities.

The first step is to decide on the relative importance of the criteria, comparing each one against each of the others. Then some simple calculations determine the weight assigned to each criterion: each weight is between 0 and 1, and the weights of all criteria sum to 1.

The second step is to evaluate each factor on each criterion.

All options are paired separately for each criterion, for comparison.

Users are often encouraged to express comparisons through verbal statements that express relative merit or interest:

e.g., "In terms of [criterion], how many times more is option A preferable to option B or option B preferable to option A?"

These paired comparisons of options and of criteria are then subjected to matrix mathematics, which assigns numbers ("weights") to both the options and the criteria to show priorities.

This tool, developed by mathematician Thomas L. Saaty, a professor at the University of Pittsburgh and author of The Analytic Hierarchy Process (New York; London: McGraw-Hill International, 1980), consists of four steps:

1. The decision problem is broken down into criteria and factors.
2. Criteria are evaluated.
3. Factors are evaluated among themselves in terms of the criteria.
4. The priorities are synthesized.

This tool for multi-criteria analysis has another benefit for DFSS project teams. By breaking down the steps in the selection process, AHP reveals the extent to which team members understand and can evaluate factors and criteria. The team leaders can use it to stimulate discussion of alternatives.
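To make the weight calculation concrete, here is a minimal sketch with assumed criteria and assumed pairwise judgments; it uses a simple column-normalization approximation of the priority vector rather than Saaty's full eigenvector calculation.

# Minimal sketch: deriving criterion weights from a pairwise comparison matrix.
# a[i][j] says how many times more important criterion i is than criterion j;
# the matrix is reciprocal (a[j][i] = 1 / a[i][j]). Values are assumed.
criteria = ["cost", "reliability", "ease of use"]
a = [
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [0.5, 1/4, 1.0],
]

# Approximate the priority vector: normalize each column, then average each row.
n = len(criteria)
col_sums = [sum(a[i][j] for i in range(n)) for j in range(n)]
weights = [sum(a[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

for name, w in zip(criteria, weights):
    print(f"{name:12s} {w:.3f}")    # weights fall between 0 and 1 and sum to 1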

DFSS Scorecards
Scorecards provide a systematic way for the project team to do the following:
• Set goals.
• Predict results.
• Calculate capability.
• Identify gaps.
• Track performance as progress toward the goals.

Computer programs allow a project team to create scorecards that fit the specifics of any particular project and context. DFSS scorecards generally include at least the following headings:
• CTQ
• items and component items
• base DPM
• improve %
• goal DPM
• data (Y/N)
• source
• data DPM

For designs that consist of several levels, the team may need hierarchical scorecards, consisting of several levels of scorecards. The basic scorecard might break down into an input variables scorecard and a supplier scorecard, for example.

Figure 7-2 shows an example of a scorecard.
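As a rough sketch of the arithmetic behind the base DPM, improve %, and goal DPM columns (the CTQs and numbers below are hypothetical; real scorecards usually live in spreadsheet or scorecard software):

# Minimal sketch: a few rows of a DFSS scorecard, computing the goal DPM
# from a baseline DPM and a planned percentage improvement. Numbers are assumed.
rows = [
    # (CTQ, base defects per million, planned improvement %)
    ("on-time delivery", 45000, 80),
    ("first-pass yield", 12000, 60),
    ("call-answer time", 230000, 90),
]

print(f"{'CTQ':18s}{'base DPM':>10s}{'improve %':>11s}{'goal DPM':>10s}")
for ctq, base_dpm, improve_pct in rows:
    goal_dpm = base_dpm * (1 - improve_pct / 100)
    print(f"{ctq:18s}{base_dpm:>10,}{improve_pct:>10d}%{goal_dpm:>10,.0f}")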

Pugh Concept Selection Technique

For evaluating design alternatives, a project team can make two passes. The first evaluation centers on the Pugh concept selection technique: the team selects the best alternatives or improves upon them. On the second pass, the team uses concept FMEA. FMEA should be done concurrently with Pugh, but only on the last few design concepts, in the step just before convergence.

FMEA takes time; using the above approach, the team needs to do FMEA only on the final few concepts.

The Pugh concept selection technique is a method for concept selection using a scoring matrix called the Pugh matrix or the criteria-based matrix. This is a form of prioritization matrix usually associated with the QFD method.

This technique, also known as concept screening, is based on a method developed by the late Stuart Pugh in the 1980s (Total Design: Integrated Methods for Successful Product Engineering, Boston: Addison-Wesley, 1991). It is an iterative evaluation method in which options are assigned scores relative to criteria; the selection is then made based on the consolidated scores.

The Pugh technique is effective for comparing alternative concepts—and most effective if each member of the team performs it independently and the team then compares the results.

Here are the steps for applying the Pugh concept selection technique (Figure 7-3):

1. Select the requirements, the evaluation criteria.
2. Individual team members or small groups form lists of 10-20 criteria. These are frequently based upon the technical requirements of phase I of QFD. Robustness, cost, design for manufacture (DFM), and complexity are good candidates for any Pugh analysis.
3. The team members discuss, merge, and prioritize the criteria on their lists, usually reducing the lists to fewer than 20 criteria. At the beginning of the first working session, they also discuss the criteria to better understand them.
4. The team chooses one of the design alternatives as the baseline (datum). Note: This is best done in the early stages, frequently completed with detailed sketches.
5. The team sets up a matrix with the criteria down the left side and concepts across the top.
6. The team members rate each design alternative by each of the requirement criteria relative to the baseline (datum), using the following scale:

+ clearly better than the baseline
S about the same as the baseline
- clearly worse than the baseline

Note: As the team completes its comparison, disagreements may arise stemming from different interpretations of the selection criteria. If this happens, clarify the item and begin again. For each cell of the matrix, the team must really know whether each alternative is better than, worse than, or the same as the datum. (It may need to use DOE or FMEA.)

7. After all comparisons are completed, the team totals the +'s and -'s for each alternative.

8. The team will then usually choose to focus on the alternative with the most pluses and fewest minuses. However, instead of simply selecting this alternative, the team may work at removing the minuses by examining alternatives that do better by the criterion in question. Through the concept selection process, the team may be able to create an alternative that is superior to any initially used in the comparison. The scores can be used to rank the concepts and help the team make decisions—to continue considering, to drop from consideration, to revise, or to combine with other concept(s). This process is repeated until an alternative wins out; it may take several sessions.

Figure 7-3. Pugh matrix. Rows list the criteria (for example, ease of use, ease of transport, reliability, accuracy, durability, aesthetics), followed by summary rows for each concept: sum of +, sum of 0, sum of -, score, rank, and whether to continue with the concept.

There are variations on scoring. For greater discrimination among choices, for example, the team could use a seven-level scale:
+3 meets criterion far better than baseline
+2 meets criterion much better than baseline
+1 meets criterion better than baseline
0 meets criterion as well as baseline
-1 meets criterion not as well as baseline
-2 meets criterion much worse than baseline
-3 meets criterion far worse than baseline

The scores should be used for guidance in making decisions, not treated as absolutes. If the two top scores are very similar, then the team should examine the concepts more closely.

If it's impossible to make a comparison, the team should get more information on the concepts.
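Here is a minimal sketch of the tallying step, with assumed concepts, criteria, and ratings (the datum column scores zero by definition); it works equally well with the simple scale or the seven-level scale above.

# Minimal sketch: totaling a Pugh matrix. Ratings are relative to the datum
# concept: +1 better, 0 about the same, -1 worse (or use the -3..+3 scale).
criteria = ["ease of use", "ease of transport", "reliability",
            "accuracy", "durability", "aesthetics"]

ratings = {                      # one rating per criterion, all assumed
    "concept A": [+1,  0, -1, +1, +1,  0],
    "concept B": [ 0, -1, +1,  0, +1, -1],
    "concept C": [+1, +1,  0, -1,  0, +1],
}

for concept, scores in ratings.items():
    plus  = sum(1 for s in scores if s > 0)
    same  = sum(1 for s in scores if s == 0)
    minus = sum(1 for s in scores if s < 0)
    print(f"{concept}: +{plus}  0:{same}  -{minus}  net score {sum(scores)}")

# The concept with the most pluses and fewest minuses is usually the focus,
# but the team may also revisit its minuses and try to design them out.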

Design for X (DFx)

Traditionally, design starts with sketches of parts and assemblies.

These sketches are then passed to the manufacturing and assembly engineers, who are responsible for optimizing the processes used to produce the final product. Frequently, manufacturing and assembly problems are discovered at this stage and requests for changes are made. The later in the development cycle the change occurs, the more expensive the change becomes. That's why DFx is such an important part of DFSS, both in the Identify phase, to select a product concept, and in the Design phase, to assess and manage risks. These early design reviews improve quality, lower costs, and reduce time to market.

Design for X is the value-added service of using best practices in the design stage to improve X, where X is one of a proliferation of purposes or concerns. Here are many of the members of the growing DFx family now in use:

• DFM—Design for Manufacturability/Manufacture/Manufacturing
• DFA—Design for Assembly
• DFMA—Design for Manufacturability/Manufacture/Manufacturing and Assembly
• DFR—Design for Reliability
• DFT—Design for Testability/Testing
• DFC—Design for Cost
• DFS—Design for Serviceability/Service
• DFQ—Design for Quality
• DFF—Design for Fabrication
• DFD—Design for Disassembly
• DFD—Design for Diagnosis
• DFI—Design for Inspection/Design for International
• DFG—Design for Green

Design for Manufacture (DFM) is a methodology for designing product components in a way that facilitates their fabrication.

Design for Assembly (DFA) simplifies the product structure, reduces the number of parts, and thereby reduces the total cost of parts. Thus, DFA reduces not only assembly costs, but also manufacturing costs.
Design for Manufacture and Assembly (DFMA)® (Boothroyd Dewhurst, Inc.) combines DFM and DFA to design products that optimize the relationship between design function, manufacturability, and ease of assembly. This helps reduce manufacturing cycle time and ultimately the cost of manufacture. DFMA requires collaboration among all engineers, beginning with product conception all the way through to delivery.

DFMA Cuts Parts, Time, and Costs

Ingersoll-Rand successfully used DFMA to reduce the product development time of a portable compressor from two years to one.

Moreover, the design team was able to reduce the number of parts in the product's radiator and oil cooler from 80 to 29, the number of fasteners from 38 to 20, and the time for total assembly from 18.5 minutes to 6.5 minutes.

Design for Reliability (DFR) is one of the most critical elements of DFSS. Reliability is the ability of a product to perform its designated function satisfactorily over its intended lifetime in the hands of the customer.

It has been referred to as "quality over time" and is a major bottom-line assessment of product performance in customers' minds, especially at the time of repurchase.
Design for Testability (DFT) encourages the early involvement of test engineering or quality assurance functions in the design process. This can minimize the cost of developing or acquiring new equipment as well as the cost of testing the product at each stage of production. Where production volumes are high, automated test equipment may be cost-effective. Developing testing equipment and procedures at the same time the product is being designed can reduce lead time. Designing products to use standardized equipment can further reduce the costs of test equipment and reduce the lead time for testing.

Design for Cost (DFC) is an approach with a narrow, obvious focus that uses such product design tools as Quality Function Deployment, Design for Manufacturing and Assembly, and Value Analysis/Function Analysis.

Design for Serviceability/Service (DFS) involves paying attention to the serviceability or maintenance of products early in the design process. Those responsible for product maintenance and service need to be consulted early on to share their concerns and requirements. The design of support processes should proceed in parallel with the design of the product. This can result in lower life cycle costs and a product design that considers serviceability.

With high-reliability, low-cost, or consumable products, maintenance is less important than with products that have a long life cycle or with products that have parts that are subject to wear.

For the latter products, the following design rules can be helpful:
• Use unique connectors with electrical parts, to avoid misconnections.
• Identify parts that are subject to wear. Design these parts so that they can be accessed, removed, and replaced easily.
• Use common hand tools and a minimum of hand tools for assembly and disassembly.
• Place the items most likely to fail, wear out, or need replacement in a small number of modules or assemblies.
• Eliminate or reduce the need for adjustment.
• Employ quick fastening and unfastening mechanisms for serviceable parts.
• Use standard replacement parts.

Design for International (DFI) means designing products that are suitable for worldwide distribution. This means designing products that can be quickly adapted to each country or market.

This requires attention to varying standards, differing power requirements, differing safety and environmental standards, and differences in language and culture.

Design for Green (DFG) or Design for Environment (DFE) shows concern for producing in environmentally conscious ways. With environmental regulations increasing and concern for the environment growing, manufacturers are reevaluating their manufacturing processes to minimize the impact on our environment.

Manager's Checklist for Chapter 7
❏ It's important to set up for each phase of a DFSS project a phase-gate review, during which a multi-functional management team reviews and assesses the project according to the plan set forth in the project charter.
❏ Benchmarking in DFSS can be part of selecting a project and also a means of eliminating potential failure modes from the design.
❏ When the project team identifies channels for process capability data, it should conduct a measurement system analysis. This generally involves six factors: repeatability, reproducibility, stability, linearity, bias, and discrimination.
❏ DFSS focuses on determining what customers require and value, through a range of tools, including customer survey analysis, affinity diagramming, quality function deployment (QFD) and the house of quality, the Kano model, the voice of the customer table, and analytic hierarchy process.
❏ DFSS scorecards provide a systematic way for the project team to set goals, predict results, calculate capability, identify gaps, and track performance as progress toward the goals.
❏ For evaluating design alternatives, a project team can use the Pugh concept selection technique and failure modes and effects analysis (FMEA).
❏ A project team can use any of a family of design reviews known as Design for X (DFx) to select a product concept and to assess and manage risks. These early design reviews improve quality, lower costs, and reduce time to market.
