
Quality barometer of product function: “functional reliability”

When designing a product, we first write product specifications.
Those specifications are then expanded into the product’s functions, but if the quality of the functions themselves cannot be maintained, the specifications will not be satisfied either.
Here I would like to talk about how “functional reliability” relates to design.

To increase functional reliability, it is necessary to design with the concept of maintaining each function in mind, and to make design changes accordingly.

Relationship between specifications and functions

Before talking about “functional reliability”, let me first talk about the relationship between functions and specifications.

As an example, consider a “knock-type (retractable) pen”.

The “specification” is “the pen tip can be extended with just the thumb of one hand”,
and the “function” is “pressing the top of the pen extends the pen tip”.

In other words:

The “specification” is what the product should be.
The “function” is the means of realizing what the “specification” demands.

That is the relationship between them.

A “specification” describes what the product should be; a “function” is the means of realizing it.

What is “functional reliability”?


Simply put, “functional reliability” is roughly the “quality of a function”.

Consider the “knock-type pen” from the earlier example.
Its functions were “pressing the top of the pen extends the pen tip” and “pressing it again retracts the pen tip”.

“Functional reliability” is, for example, the degree to which “pressing the top of the pen extends the pen tip” and “pressing it again retracts it” are satisfied successfully 10,000 times in a row.

In other words, it is the degree to which the function itself can be satisfied after the function has been expanded from the specification.
“Functional reliability” ≈ “quality of function” → “quality of specification”

The quality of a function consists of the following:

· The grade of the function
· The dependability of the function (what we call functional reliability here)

“Functional reliability” is the degree to which the functions developed from the specification are satisfied.

What happens when “functional reliability” drops?


As mentioned earlier, “functional reliability” leads directly to the “quality of the specifications”.

Degraded “functional reliability”

If the product is a model change of an existing product, quite a few specifications should be carried over from the previous model.
In particular, specifications that are extensions of previous products are expected to be even more reliable than before.

If the reliability is low (if there are many failures that do not satisfy the specifications), customers will be dissatisfied, because what they regarded as “the way it should be” is not being met.
Satisfying other aspects will not make up for that dissatisfaction.
For the relationship between satisfaction and dissatisfaction, see “satisfied / dissatisfied”.

“Functional reliability” leads to the “reliability” of the product itself.
Strictly speaking it applies only to that product, but customers tend to conflate it with the “reliability” of the company as a whole.

As a result, if “functional reliability” declines and customers notice it, the “reliability” of the entire company will be seen as having declined.
Whether the product is already on the market or not, low “functional reliability” ends up being read as low “reliability”.

The manufacturer’s way of thinking

Even if the manufacturer’s failure rate is low (say, one in a million units), for the customer who bought that one unit, the product is 100% defective.
For this reason, the Japanese manufacturing industry in particular tends to push “functional reliability” up.
That is also why products are manufactured with process capability built in against the product standards.
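As a rough illustration of “building in process capability against product standards”, the sketch below computes the common capability index Cpk for a measured characteristic against its specification limits. The measurements and the 3.8–4.2 mm spec are invented example values, not data from the article.

```python
# Hypothetical sketch: estimating process capability (Cpk) for a measured
# characteristic against its specification limits.
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma)
import statistics

def cpk(samples, lower_spec, upper_spec):
    """Return the process capability index Cpk for the given samples."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min(upper_spec - mean, mean - lower_spec) / (3 * sigma)

# Made-up pen-tip stroke lengths (mm) against an assumed 3.8-4.2 mm spec
measurements = [4.01, 3.98, 4.03, 4.00, 3.97, 4.02, 3.99, 4.01]
print(round(cpk(measurements, 3.8, 4.2), 2))
```

A Cpk well above 1.33 is conventionally read as a capable process; a low Cpk warns that the function will not be reliably satisfied even before failures reach the customer.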

Differences in the concept of functional reliability by industry

Everything has “functional reliability”.
However, how much of that reliability reaches the market differs by industry.

High functional reliability:
Manufacturing industry
→ because a recall means enormous rework costs

Relatively low functional reliability:
Service and software industries
→ because rework can be done through upgrades and the like at little cost

If you misjudge the required “functional reliability”, the value of the enterprise will also decline.

The bathtub curve (failure rate curve): grasping reliability trends

The failure rate trend is represented by a graph called the bathtub curve (failure rate curve), in which the vertical axis shows the “failure rate” and the horizontal axis shows “elapsed time”.
It is called a “bathtub curve” because the curve is shaped like a bathtub.
[Figure: bathtub curve]

Over time, it is divided into an initial failure period, a random failure period, and a wear-out failure period.

· Decreasing failure rate curve (DFR)
· Constant failure rate curve (CFR)
· Increasing failure rate curve (IFR)

When the decreasing failure rate (DFR) tendency is strong

[Figure: decreasing failure rate curve]
These are failures caused by defects in manufacturing.
This tendency also appears when quality escapes exist, for example when an inspection process is missing or inadequate.
This type of failure rate decreases with the passage of time.

When the constant failure rate (CFR) tendency is strong

[Figure: constant failure rate curve]
These are failures that happen by chance.
They are related to cases where the standard’s tolerance range is too loose, or where the product is used beyond its assumed range.
This type is unrelated to time and fails at a constant rate.

When the increasing failure rate (IFR) tendency is strong

[Figure: increasing failure rate curve]
The position of this curve changes with changes in design, materials, and so on.
This is the type of mechanical failure, such as wear, that increases with the passage of time.

These three curves together form the bathtub curve.
When you are planning a model change or a countermeasure, it is easier to respond if you first judge which tendency is strongest.
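The three tendencies can be sketched with the Weibull hazard function, a common way (not specific to this article) of modeling the bathtub curve: the shape parameter beta selects the trend, with beta < 1 giving a decreasing rate, beta = 1 a constant rate, and beta > 1 an increasing rate. The parameter values below are illustrative assumptions.

```python
# Sketch of the three bathtub-curve regions with the Weibull hazard
# h(t) = (beta/eta) * (t/eta)**(beta - 1).
def weibull_hazard(t, beta, eta=1.0):
    """Instantaneous failure rate at time t for shape beta, scale eta."""
    return (beta / eta) * (t / eta) ** (beta - 1)

for beta, label in [(0.5, "DFR: early failures"),
                    (1.0, "CFR: random failures"),
                    (3.0, "IFR: wear-out failures")]:
    rates = [round(weibull_hazard(t, beta), 3) for t in (0.5, 1.0, 2.0)]
    print(label, rates)
```

Fitting beta to observed failure times is one way to judge which tendency a given function leans toward.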

In particular, judge which curve each function tends toward.

The following is an easy-to-understand book on quality engineering and the judgment of functions. Please refer to it if you want to know more about functions and quality.
これでわかった! 超実践 品質工学 ~絶対はずしてはいけない 機能・ノイズ・SN比の急所~

Measures to improve reliability

Reliability can fail to improve for various reasons.
There are cases where it cannot be improved because of “human error”, or because the issue crosses the “planning”, “design”, and “manufacturing” departments.

Approaches to improving reliability:

· “Elimination”: removing the cause of mistakes
· “Substitution”: not inducing mistakes
· “Simplification”: making mistakes less likely
· “Anomaly detection”: noticing mistakes
· “Horizontal deployment”: not letting mistakes spread

· “Elimination”: removing the cause of mistakes
Do not create the cause of a mistake; eliminate it in some way.

· “Substitution”: not inducing mistakes
To avoid mistakes, substitute something else so that the task is no longer difficult.

· “Simplification”: making mistakes less likely
Simplify the work items so that mistakes become as few as possible.

· “Anomaly detection”: noticing mistakes
Notice mistakes through anomaly-detection devices, inspection, and so on, and stop the process.

· “Horizontal deployment”: not letting mistakes spread
Make sure that similar mistakes do not happen elsewhere.

On top of thinking about these, we need to carry out a “risk assessment”.
Here, that means thinking about what the risks are when reliability is lost.
Quality standards vary by country and by purchasing segment, so the risk assessment will also differ.

These ideas are the same as in risk assessment.
Carry out a risk assessment and design accordingly; see the article on risk assessment for details.

As mentioned, the risk assessment will differ case by case.
It is best to investigate the contents of the evaluation beforehand, using methods and tools such as those described later, before starting countermeasures.

Functions, too, must be risk-assessed.

The important part is “Redundancy”

Even with a risk assessment, truly important parts need to be made “redundant” (duplicated).

“Redundancy” means duplicating a function so that the circuit or system keeps operating without problems even if one copy fails, because the other remains.
This is a very good way to increase functional reliability. However, it costs correspondingly more.

Dual circuits and systems with backups are used in many places:

· Airplanes
· Power plants
· Financial systems
· Safety circuits
etc.

Select redundancy according to the risk assessment.
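The reliability gain from duplication can be put in numbers. The sketch below is a minimal example, with an invented per-copy reliability of 0.99: a 1-out-of-2 parallel pair fails only when both copies fail.

```python
# Minimal sketch: reliability of a duplicated (redundant) function versus
# a single one. If each copy works with probability r, a 1-out-of-2
# parallel pair fails only when both copies fail: R = 1 - (1 - r)**2.
def parallel_reliability(r, copies=2):
    """System reliability when any one of `copies` identical parts suffices."""
    return 1 - (1 - r) ** copies

r = 0.99  # illustrative per-copy reliability
print(parallel_reliability(r))     # duplicated pair
print(parallel_reliability(r, 1))  # single copy, for comparison
```

Duplication turns a 1% failure probability into roughly 0.01%, which is why it appears in aircraft, power plants, and safety circuits despite the doubled cost.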

Tools for identifying countermeasures

As mentioned in the section on measures to improve reliability, quality standards vary by country and purchasing segment. The following tools help clarify the relationships:

· Quality function deployment (QFD): relating product quality to functions
· Design review based on failure mode (DRBFM): change-point analysis for prevention
· Failure mode and effects analysis (FMEA): analyzing failures from component failure modes
· Fault tree analysis (FTA): analyzing the causes of failures

Making full use of these will clarify the situation.
The details will be explained in the articles on each item.

Use the tools to clarify the situation and feed it into the risk assessment.



Start with the biggest factor: the Pareto chart

In quality control (QC), the “Pareto chart” is one of the most frequently used display methods.
I would like to explain what a Pareto chart looks like, how to read it, and how it can be used effectively.

What is a Pareto chart?


A Pareto chart can be drawn as shown in the figure below.

[Figure: a typical Pareto chart]

It is a graph in which the magnitude (quantity) of each factor is shown by a bar graph, and the ratio (cumulative percentage) is shown by a line graph.
By convention, the bars are arranged in descending order of magnitude, from the left.

This lets you see at a glance each factor’s share of the whole.
Because the items are sorted in descending order, the graph makes it visually easy to see which items account for a large proportion.

A Pareto chart is easy to grasp visually.

How to use Pareto chart

Pareto charts are often used in quality control (QC), and there are reasons for that.
The situations where a Pareto chart is useful are as follows:

· When there are too many kinds of factors to deal with them all
· When you do not know the priority order of responses and countermeasures
· When you want to produce larger results with fewer measures
· When you want to make a factor persuasive in a presentation, etc.

Even when there are many kinds of factors, sorting them in descending order reveals their ratios, so if you do not know the priorities, you can simply start with the leftmost item in the graph.
Also, because you know each item’s ratio, you can judge how far down the list you need to go, as a percentage.

Because it is easy to understand at a glance, you can instantly judge magnitudes and ratios.
It is often used in presentations to add persuasiveness to “the details of a response/countermeasure”, “the order of responses/countermeasures”, and “the results of responses/countermeasures”.

Use a Pareto chart when you want to prioritize and produce results efficiently.

How to read the Pareto chart

I will explain how to read the chart, following the uses described above.

· Reading the “value (magnitude)” directly from a “ratio”, or the “ratio” directly from a “value (magnitude)”

[Figure: reading a Pareto chart, example 2]
From a Pareto chart, “values” and “ratios” can be read off immediately.
If you draw a line from the “cumulative ratio” axis and read the corresponding “magnitude” value, that is the value for that ratio.
Conversely, if you draw a line from the “magnitude” axis and read the “cumulative ratio” value, that is the ratio for that value.

In this example, you can see that the magnitude is “526” at a ratio of “50%”.

· Reading the summed “value (magnitude)” and “ratio” of the top items immediately

[Figure: reading a Pareto chart, example 1]
You can also immediately read the summed “magnitude” and the “cumulative ratio” of the top items.
Draw a vertical line at the right edge of the last item you want to include in the total.
From the point where that line crosses the cumulative-ratio line, draw a horizontal line to the “magnitude” axis to read the summed value of “magnitude”;
draw it to the “cumulative ratio” axis to read the combined “ratio” of the top items.

In this example, we sum the top 3 items.
The summed value is “791”, and the combined ratio of the three is “75.2%”.
Since the value is marked on the “cumulative ratio” line here, you can read the ratio even without drawing a line to the axis.

· Noticing the “factor” behind the “magnitude” from a “common point” among the top items

[Figure: reading a Pareto chart, example 3]
If you can find a common point among the top items, you may be able to identify the factor behind their “magnitude”.
In practice, finding a common point can lead you to the root cause of the “magnitude”.

In this example, 4 of the top 5 items, “apple”, “mandarin”, “grape”, and “pear”, have something in common: they are all fruits.
Among all 10 items there are five fruits and five vegetables.
Yet fruits are concentrated at the top.
In this case, you should suspect that being a “fruit” is related to the “magnitude” of the top items.

Once you learn how to read a Pareto chart, you can make decisions instantly.

The following book is useful as a guide to presenting graphs in reports and presentations.
Please have a look.
レポート・プレゼンに強くなるグラフの表現術 (講談社現代新書)

How to draw a Pareto chart (Excel version)

Drawing a Pareto chart takes a bit of effort.
I will explain how to do it in Excel.

· Create a table (sorted in descending order of factors)

First, make the table from which to create the Pareto chart.
Prepare the values you want to show in the bar graph (below, the values to show in the Pareto chart).
Arrange the factors in descending order.

① Select the range you want to sort (including the item names).
② From the menu, choose “Home” → “Sort & Filter” → “Custom Sort”, and sort by “Size”.

After this, to create the graph: calculate the “cumulative values” → derive the “cumulative ratio” → create the “graph”.

· Find the cumulative values
Calculate the “cumulative values” needed to obtain the “cumulative ratio”.
[Figure: cumulative values]

① The first “cumulative value” is the same as the first “Size”.
② From the second row on, calculate it as “previous cumulative value” + “size of the current item”.
(The cumulative value of the bottom (last) item equals the total of the sizes.)

· Find the cumulative ratio
Calculate the values for the “cumulative ratio” line graph.
[Figure: cumulative ratio]

① Leave the row above the top item empty.
② Enter “0” in the cumulative-ratio column of that row (the first value of the cumulative ratio).
(Nothing needs to be written in the factor column.)
③ Calculate each cumulative ratio as “cumulative value of the item” / “cumulative value of the last item” × 100.
(The “cumulative value of the last item” equals the total of the sizes.)

The reason for putting 0 at the top is that we want the line graph of the Pareto chart to start from the 0 position.
Each lower cell divides the cumulative value by the total and multiplies by 100, which gives the cumulative percentage [%].

The “$” attached to the reference to the last cumulative cell is an absolute-reference mark that keeps the reference from shifting when the formula is copied and pasted.
Pressing “F4” adds the mark to the selected reference.

· Create the bar graph (size)

Next, create the graph.

The following shortcut keys are used in graph creation:
[Figure: shortcut keys]

① Hold “Ctrl” and select the “size” and “cumulative ratio” columns (including the 0).
② Press “Alt” + “F1” to create the graph.
(If you want to put the graph on a separate sheet, press “F11” instead.)

Specify the graph range like this, including the 0 of the cumulative ratio.
[Figure: selecting the range for the graph]
Of course, there is no problem with selecting a bar graph from the menu instead.

· Create the line graph (cumulative ratio)

Next, change the cumulative-ratio series into a line graph.

[Figure: changing the cumulative-ratio graph]

① Select the “cumulative ratio” bars, right-click, and choose “Change Series Chart Type” → “Line with Markers”.
② The “cumulative ratio” changes from a bar graph to a line graph.

· Add an axis for the line graph

At this point the vertical axis shows only the size.
Next, add a secondary axis to serve as the vertical axis for the “cumulative ratio”.

[Figure: adding a Y axis for the cumulative ratio]

① Select the “cumulative ratio” line on the graph.
② Select “Format” in the chart menu (red frame on the right of the figure).
③ Select “Format Selection” (red frame on the left of the figure).
④ In the series options of the Format Data Series pane, check “Secondary Axis” (red frame in the center of the figure).

A vertical axis for the “cumulative ratio” should now appear on the right side of the graph.

· Adjust the axes

Adjust the scales of the “size” and “ratio” axes (so that values can be judged visually at once).

First, adjust the scale for the bar graph.
① Select the vertical axis on the left side of the graph (the “size” axis).
② Right-click and select “Format Axis”.
③ In the axis options, set the “minimum” to 0 and the “maximum” to the total of “size” (the last item in the cumulative column).

[Figure: setting the axis values]

Similarly, adjust the scale for the line graph.
① Select the vertical axis on the right side of the graph (the “cumulative ratio” axis).
② Right-click and select “Format Axis”.
③ In the axis options, set the “minimum” to 0 and the “maximum” to 100 (the maximum ratio is 100%).

· Format the chart

· Change the width of the bars
Widen the bars in preparation for aligning the line markers with the upper-right corners of the bars.

① Select the bar graph (size).
② Right-click and select “Format Data Series”.
③ Set the “gap width” of the series options to 0%.
(If the touching bars are hard to tell apart, change it to “1 to 5%”, or add a border to the bars.)

[Figure: changing the bar width]

· Change the position of the line markers
Change the axis position to align the line markers with the upper-right corners of the bars.

① Select the line graph (cumulative ratio).
② In the chart tools menu, select “Chart Design” → “Axes” → “Secondary Horizontal Axis”.
③ Select “Format Axis”.
④ In the axis options, set “major tick marks” to “None”, “minor tick marks” to “None”, and “axis labels” to “None” (center of the figure).
⑤ Under axis position, check “On tick marks”.

[Figure: aligning the line graph]

· Others
Add the values above the line markers, and finish formatting the graph with a chart title, axis names, and so on.

It takes time and effort to make the first one, but once created you can copy and reuse the graph.
Please experiment.

Once you make a template Pareto chart, you can reuse it in the same way.



Reduce the number of tests (design of experiments)

When testing for quality or to extract factors, we try out combinations of several conditions. The straightforward approach is to test every combination exhaustively (a full factorial), which is certainly thorough. But when there are many kinds of conditions, or many values per condition, the number of exhaustive combinations grows multiplicatively, and so does the number of tests. The design of experiments is a way to suppress that growth as much as possible.

Rather than running every combination, use statistics to reduce the number of runs and express the results probabilistically.

Difference from the full factorial

Suppose, for example, you want to find the best conditions for germinating a plant seed. As the kinds of conditions, take “type of soil” and “amount of water”. For the contents of each condition, let “type of soil” be “red soil”, “black soil”, or “normal soil”, and “amount of water” be “less than 100 ml”, “100 ml to less than 500 ml”, or “500 ml or more”. The full-factorial count and the table of combination patterns are as follows.
[Figure: full factorial, example 1]

The full-factorial (test) count is 3 × 3 = 9 runs.

Now add one more kind of condition: “temperature”, with contents “less than 10 °C”, “10 °C to 20 °C”, and “20 °C or more”. In this case, the full-factorial count and table are as follows.

[Figure: full factorial, example 2]

The full-factorial (test) count is 3 × 3 × 3 = 27 runs. It keeps multiplying...
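The combination counts above can be reproduced by enumerating the full factorial with `itertools.product`; the condition labels are the ones from the germination example.

```python
# Enumerate the full-factorial (exhaustive) combinations:
# 3 x 3 = 9 runs, and 3 x 3 x 3 = 27 runs once "temperature" is added.
from itertools import product

soil = ["red soil", "black soil", "normal soil"]
water = ["< 100 ml", "100-500 ml", ">= 500 ml"]
temperature = ["< 10 C", "10-20 C", ">= 20 C"]

print(len(list(product(soil, water))))               # 9
print(len(list(product(soil, water, temperature))))  # 27
```

Each additional 3-level condition multiplies the run count by 3, which is exactly the growth the design of experiments is meant to avoid.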

However, with the design of experiments, you do not need to run the full factorial. “What are you talking about?” you might think. It is represented in the table below.

[Figure: design-of-experiments pattern]

You probably noticed that some patterns are simply not run. In the design of experiments, the contribution of each condition’s contents (the main effects) is estimated using analysis of variance. Put briefly, it is a technique for establishing relevance statistically. That means the results must be computed, but if you prepare the calculation in Excel or similar beforehand, it is no problem.

Benefits:
The number of tests decreases, and the contribution of each condition’s contents can be identified.
Drawbacks:
The best combination is harder to see directly, and calculation is necessary.
(You can still see which items are strongly related through the contents of each condition.)

 

Two things can be done with the design of experiments:

1. Reducing the number of tests (the subject of this title).
2. Analyzing the data: you can tell statistically how strongly each condition’s contents are related.

 

Concept of the design of experiments

In the design of experiments, the kind of condition mentioned earlier is called a “factor”, and the contents of a condition are called “levels”.
Writing the factors as A, B, C and the levels as 1, 2, 3 (A1, A2, A3), the full factorial can be written as formulas as follows.
[Figure: design-of-experiments concept]

It is not simple, but I will briefly explain.

· Way of thinking

If you rewrite the formulas above as follows, you will find that there are related (common) parts.
Using this relationship, the test results appear in the common parts even for combinations that are not run directly, so the whole set can be evaluated without testing every combination. Fundamentally, we think in terms of variance (variation).
[Figure: design-of-experiments concept 2]

Therefore, the combination is very important. For the combinations we use an orthogonal table (orthogonal array), derived from a table called a Latin square. By using the orthogonal table, the effect of each factor can be evaluated without running the full factorial (without combining all patterns). This combination is the key.

· What is a Latin square?

Explained simply, a Latin square is an arrangement in which each element appears exactly once in each row and each column. It is used to spread the combinations evenly.
[Figure: Latin square]

An orthogonal table is obtained from Latin squares in a way that balances the variance.
The following is the orthogonal table used in this case.
[Figure: orthogonal table]

“The theory is hard!” Well, I feel the same.
The theory is difficult, but if you apply the correct pattern as-is, there is no problem.
For those who want to know more, the following book will be helpful.

Experimental Design

Next I will show the orthogonal tables to use.

The important thing is simply to assign the right orthogonal table for the levels and factors you want to handle.

Reduce the number of tests by exploiting the fact that the test results appear in the common parts.
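The defining property of an orthogonal table can be checked mechanically. The sketch below uses the smallest standard 2-level table (4 runs, 3 factors, often called L4) and verifies that in every pair of columns, each combination of levels appears equally often; this balance is what lets the common parts carry the test results.

```python
# Check the orthogonality property of the L4 table (4 runs, 3 two-level
# factors): every pair of columns contains each level pairing equally often.
from itertools import combinations, product

L4 = [  # levels coded as 0 and 1
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

for c1, c2 in combinations(range(3), 2):
    pairs = [(row[c1], row[c2]) for row in L4]
    counts = {p: pairs.count(p) for p in product([0, 1], repeat=2)}
    assert all(n == 1 for n in counts.values())  # each pairing appears once
print("L4 is orthogonal")
```

The same check works for the larger tables listed below; only the table and the column count change.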

Types of orthogonal tables

First, the notation, before showing the kinds.
An orthogonal table is written as follows.
[Figure: orthogonal table notation]
What is this? It means an orthogonal table for 2-level factors that can handle up to 7 factors in 8 test runs.
[Figure: orthogonal table notation 2]

Orthogonal table: 2 levels, 3 factors
[Figure: 2-level, 3-factor orthogonal table]

Orthogonal table: 2 levels, 7 factors
[Figure: 2-level, 7-factor orthogonal table]

Orthogonal table: 2 levels, 15 factors
[Figure: 2-level, 15-factor orthogonal table]

Orthogonal table: 2 levels, 31 factors
[Figure: 2-level, 31-factor orthogonal table]

Orthogonal table: 3 levels, 4 factors
[Figure: 3-level, 4-factor orthogonal table]

Orthogonal table: 3 levels, 13 factors
[Figure: 3-level, 13-factor orthogonal table]

If the number of factors you need is smaller than the table provides, simply leave the surplus columns unassigned.
These are representative tables; there are many others, and they can also be derived.
There are also mixed-level orthogonal tables (combinations of 2-level and 3-level factors, etc.), but we will skip those here.
For details, see the reference material at the end.

In any case, since the number of experiments grows as the number of factors and levels grows, keep them to the minimum necessary.

There are various kinds of orthogonal tables.
Minimize the factors and levels before conducting the test.

How to evaluate in Excel

Now that the kinds of orthogonal tables are clear, I will explain concretely how to use one.

· Pass/fail evaluation

Here, knowing the pass/fail result of each run, we work out which combinations are implicated.

First, select the orthogonal table to use. Here we use the 8-run table (2 levels, 7 factors).
As mentioned earlier, it can be used even when the number of factors is smaller (for example, with two or six factors).
In that case, simply ignore the unused columns.

· Way of thinking (pass/fail)

The idea is to solve the following simultaneous equations.
Compare the characteristic value (predicted value, average value) μ with the results S, and look at the changes in the coefficients a to g.
If the characteristic value and the result are the same, a coefficient becomes 0; when they differ, changes appear in the coefficients involved.
[Figure: characteristic values]

· How to do it in Excel (pass/fail)

First, lay out the necessary orthogonal table, and add a column for the experimental characteristic value (predicted value, average value) μ as the last column of the table.
For a two-valued characteristic such as OK/NG, represent it with 1 and 0.
[Figure: adding the characteristic values]

First, create the inverse matrix needed to solve the simultaneous equations.
In Excel, use the “MINVERSE function”.
① Select a range of the same size as the orthogonal table + characteristic-value column, where you want the inverse matrix to appear.
② Type MINVERSE in the formula bar.
③ Select the range to invert (the orthogonal table + characteristic-value column) as the argument.
④ Confirm it as an array formula: in Excel, pressing “Shift + Ctrl + Enter” makes the formula an array formula rather than a single value.
This completes the inverse-matrix table.
[Figure: the inverse matrix in Excel]

Next, perform the matrix multiplication to display the changes in the coefficients a to g.
In Excel, use the “MMULT function”.
① In the same way as above, select a range of the number of factors plus 1 (for the characteristic value), where you want the matrix product to appear.
② Type MMULT in the formula bar.
③ Select the ranges to multiply (the inverse matrix from above, and the results column).
④ Confirm it as an array formula with “Shift + Ctrl + Enter”.
This completes the matrix-product table (the values of the coefficients a to g).
[Figure: the matrix product in Excel]

From these you can see which factors are related:
· Factors whose evaluation (coefficient) changed can be said to influence (change) the characteristic value.

Of course, if the characteristic value and the result are exactly the same, the evaluation (coefficient) will be 0.
In a pass/fail evaluation, any nonzero evaluation means a failure is involved.
If the characteristic (pass) value is 1, factors with negative coefficients are implicated in the rejection.
In this case, it can be said that factors E and G cause the result to differ from the characteristic value.
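The MINVERSE/MMULT calculation can be sketched in numpy. This is an illustration, not the article's worksheet: the L8 columns are generated from three base contrasts coded as ±1 (one standard construction), and the pass/fail results are invented. Inverting the square design matrix and multiplying by the results yields the coefficients in one step, exactly as the two Excel array formulas do.

```python
# numpy sketch of the Excel MINVERSE + MMULT step on an L8-style table.
import numpy as np

# Three base columns of the 8-run, 2-level table, coded +1 / -1;
# the interaction columns a*b, a*c, b*c, a*b*c fill out the table.
a = np.array([1, 1, 1, 1, -1, -1, -1, -1])
b = np.array([1, 1, -1, -1, 1, 1, -1, -1])
c = np.array([1, -1, 1, -1, 1, -1, 1, -1])
X = np.column_stack([np.ones(8), a, b, a * b, c, a * c, b * c, a * b * c])

# Made-up pass/fail results (1 = pass, 0 = fail) for the 8 runs
y = np.array([1, 1, 1, 1, 1, 0, 1, 0])

coef = np.linalg.inv(X) @ y  # MINVERSE then MMULT, in one line
print(np.round(coef, 3))
```

Columns whose coefficient stays at 0 had no influence on the result; nonzero coefficients point at the implicated factors, mirroring the E/G reading above.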

Evaluation and the relevance of factors can be judged in Excel.
If you prepare the formulas in Excel in advance, you can easily find the related factors.

Evaluating the maximum or minimum combination

Here I will explain how to find the combination that makes the measured value a maximum or a minimum.
First, select the orthogonal table to use. Here we use the 4-run table (2 levels, 3 factors).

· Way of thinking (combination estimation)

We use the orthogonal table to obtain evaluation values.
Assign each run’s result to the levels of each factor that the run used.
Divide each factor-level total by the number of results assigned to that factor and level, and compare.

If there are multiple results per test, measurement error is averaged out and accuracy improves.
If the test result is D, the relationship between the true measured value S and the error N is:
D = S + N

With a single measurement that is all there is, but with repeated measurements the error shrinks as you average.
If a test is measured 3 times (D1, D2, D3), the mean D̄ of the test can be expressed as:
D̄ = (D1 + D2 + D3) / 3 = S + (N1 + N2 + N3) / 3

When the error (variation) is comparable to the differences between true measured values, it is hard to distinguish those differences from the means alone.
So we use the idea of dispersion.
We take the expected value of the mean of squares:
σ^2 = (D1^2 + D2^2 + D3^2) / 3 = (1st measurement^2 + 2nd measurement^2 + 3rd measurement^2) / 3

This value by itself has no direct meaning, but as long as the error is even slightly smaller than the true measured value, the true value dominates it.
In the multiple-results case, because the dispersion is a sum of squares, the digits can become large and hard to read, so we use LOG to compress the scale.
Error (variation) is explained in “Variance and process capability”.

· When there is a single test result (one measurement)

I will explain the case where each test run yields one result.
In reality there is noise (variation), so although the design of experiments reduces the number of runs, the results are more accurate if you measure multiple times.
The multiple-results case is explained later; here we handle one result per run.

In this example we look for the combination (of factors and levels) that maximizes the test result.
We use a 2-level, 3-factor orthogonal table as below, and take one result per run, 4 runs (4 patterns) in total.
[Figure: a single result per pattern]

With this list of test counts in order, rearrange the test order and test results of each factor as follows.
As you can see from the table, we just assigned the test results to the factor test order.
パターンに対して単一の結果 並び替えを行い因子水準に割り当てる

After that, we will issue total test results at the level of each factor.
Also we will count the number of factors that each level of factor has come up in the test.
Because of the test combination of the orthogonal table, the number of items appearing in the test of the level of each factor differs depending on the table.
(Figure: single result per pattern, levels from the orthogonal table totaled)

Take the averages to estimate the value of each factor level.
Since we want the maximum combination here, the best combination takes the level with the largest average for each factor.
(If you want the minimum, choose the level with the smallest average for each factor.)
(Figure: single result per pattern, determined from the averages)

The maximal combination here is "A1, B0, C0" or "A1, B0, C1".
The same procedure works even with a different orthogonal table.
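The whole single-result procedure above (assign results to factor levels, total, average, pick the best level) can be sketched in Python. The L4 orthogonal array and the four results below are hypothetical illustration data, not the article's figures:

```python
# L4(2^3) orthogonal array: 4 runs x 3 factors at 2 levels (0/1).
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
results = [4.0, 2.0, 9.0, 6.0]  # hypothetical single result per run

def level_means(array, values, n_factors=3, n_levels=2):
    """Average the per-run values over the runs where each factor
    was set to each level (the 'total, then divide by count' step)."""
    means = []
    for f in range(n_factors):
        means.append([
            sum(v for row, v in zip(array, values) if row[f] == lv) /
            sum(1 for row in array if row[f] == lv)
            for lv in range(n_levels)
        ])
    return means

means = level_means(L4, results)
# Maximizing case: pick, for each factor, the level with the larger mean.
best = [max(range(2), key=lambda lv: m[lv]) for m in means]
print(best)  # [1, 0, 1] for this data
```

For the minimizing case, replace `max` with `min` in the last step, exactly as the text describes.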

· When the test result is multiple (n results per run)

I will explain the case where each run produces multiple results (multiple measurements are taken).
As mentioned earlier, there are errors in practice, so although the design-of-experiments method reduces the number of runs, the results become more accurate if you repeat the measurements a few times.

We use the same 2-level, 3-factor orthogonal table as below and take 3 results per run, for 4 runs (4 patterns), 12 results in total.
Up to this point it is the same as "When the test result is single".
(Figure: multiple results for each pattern)

Starting from this ordered list of runs, distribute the run order and the three results of each run under each factor and rearrange them as follows.
As mentioned earlier, we take the expected value of the sample variance from the test results:
Variance (σ^2) = (result 1^2 + … + result n^2) / n

Here we take the logarithm with LOG.
When using Excel, try the LOG10 function.
As the sorted table shows, the per-run variances simply repeat under each factor.
(Figure: multiple results per pattern, rearranged and assigned to the factor levels)

Next, total the variances for each level of each factor.
Also count how many times each level of each factor appeared in the runs.
(Figure: multiple results per pattern, levels from the orthogonal table totaled)

Take the averages to estimate the value of each factor level.
Since we want the maximum combination here, the best combination takes the level with the largest average for each factor.
(If you want the minimum, choose the level with the smallest average for each factor.)
(Figure: multiple results per pattern, determined from the variances)

The maximal combination here is "A1, B0, C1".
Likewise, the same procedure works even with a different orthogonal table.
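The multiple-result variant only changes the per-run value: instead of the raw result, each run is summarized by log10 of its mean squared result, and those per-run metrics are then averaged by level as before. A sketch with hypothetical repeated measurements (three per run):

```python
import math

L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
# Hypothetical: three repeated results per run (12 values in total).
runs = [
    [4.1, 3.9, 4.0],
    [2.2, 1.8, 2.0],
    [9.1, 8.9, 9.0],
    [6.0, 6.1, 5.9],
]

def run_metric(ys):
    """Per-run metric from the article: log10 of (sum of squares / n)."""
    return math.log10(sum(y * y for y in ys) / len(ys))

metrics = [run_metric(ys) for ys in runs]

def level_means(array, values, n_factors=3, n_levels=2):
    """Average the per-run metrics at each level of each factor."""
    return [[
        sum(v for row, v in zip(array, values) if row[f] == lv) /
        sum(1 for row in array if row[f] == lv)
        for lv in range(n_levels)] for f in range(n_factors)]

# Maximizing case: pick the level with the larger average metric.
best = [max(range(2), key=lambda lv: m[lv]) for m in level_means(L4, metrics)]
print(best)  # [1, 0, 0] for this data
```

The log10 step is exactly the scale compression mentioned earlier; it does not change which level wins, only how readable the totals are.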

The best combination can be estimated easily in Excel

Points to note in testing

Although this is not limited to design of experiments, care is needed with measurement during testing.
Make sure the data are not distorted by biased conditions. For example, when humans take the measurements, non-logical elements such as preconceptions or memory of earlier readings can creep in. Running the tests in random order reduces such factors.

1. Measure several times

Measure several times to suppress variation in the measured values. This can be omitted if the results do not vary.

2. Keep everything other than the factors constant

To isolate the effect of the factors under study, external conditions must be held constant at all times.

3. Do in a random order

Run the tests in random order to eliminate habituation and preconceptions about the test.

Although the number of tests can be reduced, effort is still needed to minimize external influences other than the factors and levels

 

Design of experiments in the field of economics (conjoint analysis)

In conjoint analysis, a factor is called an item and a level a category. The basic idea is the same: find out which element (which category within an item) has the strongest effect.

Since the way of thinking is the same as already explained, I omit the details.
Unlike the tests described so far, this is a method of making judgments from data you already have.
Until now the goal was to reduce the number of experiments.
Conjoint analysis instead uses the framework to see how strongly each factor (here, each element) is related to the outcome.

It is used in many fields besides economics, under a variety of names.
Besides reducing the number of experiments, it is often used to examine relationships, as in conjoint analysis.

· Analysis of factors and causes (checking the relevance of elements)
· Tests for product defect checks

It is used for probabilistic judgments about elements and results in many fields.

The design-of-experiments method is used when the relationships between elements are unknown. Running experiments on content whose relationships are already understood only adds work (though it can be worthwhile as confirmation). Experts in a field often "somehow know" what is related; this is what we call experience or intuition. Try applying the method to all sorts of things.

For those who want to learn more, this book is helpful.

Partial addition 2017/03/30

Categories
Manufacturing terms

Variability and process capability


Manufacturing and process capability are inseparable. You may not have heard the term; simply put, process capability expresses the probability that a product will fall within its specifications when it is made.
Based on the capability index, we estimate product losses and decide the inspection frequency. The same way of thinking is used and applied to probabilities outside manufacturing as well. Before explaining that, let me talk about the product standards used to make products.

For variation and bias in business decisions, see "Noise of intention in decision" and "Bias of intention in decision".

What are product standards?

There are standards for manufacturing a product. For example, suppose the product is a 100 cm stick. If every piece were exactly 100 cm there would be no problem, but in practice 99 cm or 101 cm pieces will also appear.
(Figure: product variation)
However, if, say, anything from 99 cm to 101 cm may be shipped, then a product of 100 cm ± 1 cm can go out into the world. This "± 1 cm" is the standard. A 98 cm piece is a defective product and becomes a manufacturing loss.
By measuring several products and treating their variation against this standard statistically, we can estimate the number of lost products and support quality assurance. A normal distribution is used to derive the count from the probability.

The product standard is the acceptance range within which products may be released to the world

Normal distribution and variation

What is a normal distribution?

The probability distribution describing how data spread around the mean value is called the normal distribution.
Other articles touch on it briefly, but here is a short explanation for those wondering what a normal distribution is.
The normal distribution describes variation around the average (the center). Probability and statistical theory use the fact that the degree of variation converges to the shape of its curve.
Not every phenomenon follows it, but in the absence of disturbances the data really do converge to that shape. For process capability with a reasonably large sample, we assume the variation is distributed with probability along this normal distribution.
(Figure: standard deviation)

The figure is a graph of the normal distribution; each normal distribution can be drawn as a single peaked curve (0 on the X axis is the center). The figure shows two normal distributions whose slopes differ. Assuming variation follows this normal shape, the process capability described later can be derived.
Now that the shape of the variation is clear, we explain the standard deviation, the reference value for variation.

The shape of a distribution in probability. Distributions in the natural world converge to this shape when there is no disturbance.

What is the standard deviation?

When values are distributed as a normal distribution, it is hard to state how much variation there is (how steep the curve is). The reference for this is the standard deviation. The standard deviation is measured in units of σ; its value changes as the slope of the graph changes, but the proportion of the distribution lying between ±σ is always fixed at 68.2% of the whole. That is why it is commonly used as the reference value for variation.

(Figure: relationship between the standard deviation and the normal distribution)

Given n measured values (99 cm, 101 cm in the earlier example), simply adding, subtracting, or averaging them does not capture the shape of the graph. So how do we obtain it?
To judge how far each value lies from the center, subtract each value from the center value μ:
(μ - 99), (μ - 100), … n terms

Added as they are, these terms just cancel back to the center value, so we weight the way the values spread by squaring each term:

(μ - 99)^2, (μ - 100)^2, … n terms

Add all the values and divide by (n - 1). This is the average weight of the spread:

((μ - 99)^2 + (μ - 100)^2 + … n terms) / (n - 1)

We call this the variance, written σ^2. Its units do not match the graph's axis, so it cannot be plotted directly; take the square root to return to the units of the x axis:

σ=√(σ^2)

This is called the standard deviation, the value that serves as the measure of variation.
Next we use this value to see how many products fit within the standard.

The standard deviation is an index of the slope of the graph
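The derivation above translates directly into code. A minimal sketch with the example values from the text (the function name is mine):

```python
import math

def stdev_sample(values):
    """Standard deviation exactly as derived above: deviations from the
    mean, squared, averaged with (n - 1), then square-rooted."""
    mu = sum(values) / len(values)
    variance = sum((x - mu) ** 2 for x in values) / (len(values) - 1)
    return math.sqrt(variance)

print(stdev_sample([99.0, 100.0, 101.0]))  # 1.0
```

This matches what Excel's STDEV function computes, which is used later in the Excel section.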

 
This way of thinking is statistical analysis; see the following reference if you want to know more.
Process Capability Indices
 

Process capability index CP value, CPK value

Process capability index CP value

It is hard to get an intuitive feel from the variation (standard deviation) alone. To read it as a probability, or as the number of pieces falling within the standard, it must be compared against a reference value. The value to compare is the "CP value".

USL: upper standard (specification) limit
LSL: lower standard (specification) limit
σ: standard deviation

(Figure: Cp calculation: CP = (USL - LSL) / 6σ)

The calculation is expressed as above; its meaning is that when Cp = 1, the width μ ± 3σ coincides exactly with the standard width. A standard width equal to ±3σ corresponds to a probability of 27 defects (out-of-standard pieces) per 10,000.
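The Cp formula can be written out directly. A minimal sketch; the 100 cm ± 1.5 cm stick is a made-up illustration, not a value from the article:

```python
def cp(usl, lsl, sigma):
    """Two-sided process capability: standard width divided by 6 sigma.
    Cp = 1 means the +/-3 sigma width exactly fills the standard width."""
    return (usl - lsl) / (6.0 * sigma)

print(cp(101.5, 98.5, 0.5))  # 1.0: +/-3 sigma just fits the standard
```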

What is one-sided standard?

It is an index that, unlike the two-sided standard, considers only the upper side or only the lower side from the center. The calculation therefore uses half of the width, 3σ.

USL: upper standard (specification) limit
LSL: lower standard (specification) limit
μ: average value
σ: standard deviation

Upper-limit one-sided standard
(Figure: Cp one-sided standard, upper limit: (USL - μ) / 3σ)

Lower-limit one-sided standard

(Figure: Cp one-sided standard, lower limit: (μ - LSL) / 3σ)

The CP value assumes the average coincides with the center of the standard, or ignores the center altogether. But defects also occur when the mean is off the standard center. The "CPK value", discussed next, gives the probability and count taking the shift of the center into account. (The difference from the CP value is the position of the center.)

The "CP value" is a number to be compared against a reference value

Process capability index CPK value

The way of thinking is the same, but this value takes the shift of the center into account. The smaller of the two one-sided values becomes the CPK value.
(Figure: Cpk value)

For each company or product, quality is measured, the content of quality inspection decided, and product losses judged by whether the CP or CPK value exceeds the set value.

The "CPK value" also takes the shift of the center into account
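The "smaller of the two one-sided values" rule can be sketched as follows; the limits and means are hypothetical illustration values:

```python
def cpk(usl, lsl, mu, sigma):
    """Cpk: the smaller of the two one-sided capabilities, which
    penalizes a mean that has drifted from the standard center."""
    upper = (usl - mu) / (3.0 * sigma)
    lower = (mu - lsl) / (3.0 * sigma)
    return min(upper, lower)

# Centered process: Cpk equals Cp.
print(cpk(101.5, 98.5, 100.0, 0.5))  # 1.0
# Shifted mean: Cpk drops even though sigma is unchanged.
print(cpk(101.5, 98.5, 101.0, 0.5))  # 0.333...
```

Note that Cpk can never exceed Cp; the two are equal only when the mean sits exactly at the standard center.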

As a way of thinking, it can be summarized as follows.

· CP and CPK values are large → the defect rate is small; pieces rarely fall out of specification in manufacturing.
· CP and CPK values are small → the defect rate is large; pieces easily fall out of specification in manufacturing.
· CP value < set value, CPK value < set value: variation is large, and the mean may also be off the standard center; or the set value is too severe.
· CP value > set value, CPK value < set value: variation is acceptable, but the mean is shifted from the standard center; or the set value is too severe.
(Since CPK ≤ CP always holds, a CP value below the set value with a CPK value above it cannot occur.)

You can judge the extent of the defect rate against the set value. Next, I explain the set values against which the CP and CPK values are judged.

The CP value shows how well products fit within the specification width; the CPK value shows how well they fit relative to both the specification width and its center

 

Six Sigma and PPM

Before discussing the set values for the CP and CPK values, let me briefly explain the related terms Six Sigma (6σ) and PPM.
Both are used often in the manufacturing industry.

What is PPM?

PPM stands for parts per million (one part in a million) and is often used for defect rates. For example, a defect rate of 3 PPM means 3 failures per 1,000,000. The manufacturing industry commonly expresses defect rates this way.

Parts per million (one part in a million).
It is often used for rates smaller than a percent

What is Six Sigma (6σ)?

In the manufacturing industry, Six Sigma became a slogan for lowering the defect rate and making good products: "we accept up to 3.4 defects (3.4 PPM) per million." Strictly speaking, 6σ statistically corresponds to about 2 defects per billion, a far lower probability than the slogan. The Six Sigma idea starts from 6σ and allows 1.5σ for drift, such as the mean shifting from the standard center; the remaining 4.5σ gives the tolerance of "3.4 defects (3.4 PPM) per million".

4.5σ is the value after allowing 1.5σ for drift such as the mean shifting from the standard center

CP value, set value of CPK value

The reference for the CP value was ±3σ (when μ ± 3σ equals the standard width, Cp = 1).
The table below compares this with the PPM idea. Note that these are ideal one-sided defect rates; for a two-sided standard, the defect rate doubles.

Specification width | CP value (one-sided) | Rejection rate (out-of-standard rate) | Defect rate [PPM]
3σ   | 1    | 1.4/1000     | 1350
4σ   | 1.33 | 3.2/100000   | 32
4.5σ | 1.5  | 3.4/1000000  | 3.4
5σ   | 1.67 | 2.8/10000000 | 0.28
6σ   | 2    | 2/1000000000 | 0.002

As noted under Six Sigma, 4.5σ corresponds to 3.4 PPM. In practice, set values in the manufacturing industry commonly use a CP value of 1.33 (4σ, 32 PPM) or 1.67 (5σ, 0.28 PPM).

The criterion for the defect rate can be determined from the set values of the CP and CPK values
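The PPM figures in the table come straight from the tail probability of the normal distribution, so they can be reproduced with the error function. A sketch (the function name is mine):

```python
import math

def defect_ppm(sigma_level, two_sided=False):
    """Normal tail probability beyond the given sigma level, in PPM."""
    one_side = 0.5 * math.erfc(sigma_level / math.sqrt(2.0))
    return (2.0 if two_sided else 1.0) * one_side * 1e6

# Cp (one-sided) = sigma_level / 3, so Cp 1.33 ~ 4 sigma, Cp 1.5 ~ 4.5 sigma.
print(round(defect_ppm(3.0)))     # 1350 PPM  (Cp = 1)
print(round(defect_ppm(4.0)))     # 32 PPM    (Cp = 1.33)
print(round(defect_ppm(4.5), 1))  # 3.4 PPM   (the Six Sigma figure)
```

Passing `two_sided=True` doubles the rate, matching the note that a two-sided standard doubles the defect rate.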


How to use Excel

Some people may think, "the calculation is too hard, so I cannot use it." With Excel functions, however, it is easy. Here is how.

As an example, suppose the product is a "100 cm stick" and we measure 10 pieces, entered in rows 2 to 11 of column C. We then compute the process capability CP and CPK values. Column D shows the formulas used in column C.
(Figure: computing process capability in Excel)

First, compute the average. For the mean μ we use Excel's AVERAGE function (cell C12). You probably know this one already; the range is the 10 measured values.

Next, the standard deviation. For σ we use Excel's STDEV function (cell C13), which computes what was explained earlier in "Normal distribution and variation". As with AVERAGE, the range is the 10 measured values.

Then compute the process capability CP value: the standard width "upper limit - lower limit" divided by 6σ, as explained earlier in "Process capability index CP value" (cell C14).

Then compute the CPK value. As explained in "Process capability index CPK value", it is the smaller of the one-sided values "(upper limit - average) / 3σ" and "(average - lower limit) / 3σ" (cell C15).

To summarize, compute in the following order.

  1. Mean value “μ”
  2. Standard deviation “σ”
  3. Process capability “CP value”
  4. Process capability “CPK value”


 

Process capability in the management field

Likewise, with some ingenuity it can be used in management.
For example:
· production quantity
· standard work in process
· subordinates' workload (overtime)
and so on.

However, it can only be managed on the assumption that the variation follows a normal distribution.
In other words, if the data appear normally distributed, you can use it accurately.

A statistical method that lets process capability be used in various fields

 

Relationship with educational scholastic ability deviation value

You often hear about deviation values, don't you? The deviation value itself may seem hard to understand, but from the idea of the normal distribution you can easily infer where you stand in the group.
The deviation value follows the same thinking as before: it is a value representing your position within the whole.
As said earlier, this inference works if the distribution is normal (a single peak).

Academic deviation values are calculated as follows.

Deviation value = ((score - average score) / standard deviation) × 10 + 50

That is, the standard deviation is scaled to 10 points and the average to 50.
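The formula above translates to a one-liner plus the statistics it needs. A sketch with hypothetical exam scores (the function name is mine):

```python
import math

def deviation_value(score, scores):
    """Hensachi: map the group mean to 50 and one (population)
    standard deviation to 10 points."""
    mu = sum(scores) / len(scores)
    sigma = math.sqrt(sum((s - mu) ** 2 for s in scores) / len(scores))
    return (score - mu) / sigma * 10 + 50

scores = [40, 50, 60]               # hypothetical exam scores
print(deviation_value(50, scores))  # 50.0: the average score maps to 50
```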

Here is a simple table of deviation values and ranks.
(This concept may exist only in Japan.)
 

Deviation value | Rank (counted from the top)
90 | 0.000032 × total number
80 | 0.001350 × total number
70 | 0.022750 × total number
60 | 0.158655 × total number
55 | 0.308538 × total number
50 | 0.500000 × total number
45 | 0.691462 × total number
40 | 0.841345 × total number
30 | 0.977250 × total number
20 | 0.998650 × total number
10 | 0.999968 × total number
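The rank fractions in the table are the upper-tail probabilities of the normal distribution, so they can be reproduced with the error function. A sketch (the function name is mine):

```python
import math

def rank_fraction(hensachi):
    """Fraction of the group ranked at or above a deviation value,
    assuming scores are normally distributed."""
    z = (hensachi - 50) / 10.0  # convert back to sigma units
    return 0.5 * math.erfc(z / math.sqrt(2.0))

print(round(rank_fraction(70), 6))  # 0.02275: the top ~2.3% of the group
```

Multiplying by the total number of examinees gives the estimated rank, exactly as in the table.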

The following books are helpful; please refer to them if you want to know more.

Layout correction 2017/03/22

Categories
Manufacturing terms

Innovator theory and chasm theory


In manufacturing, innovation (explosive popularization, or a product that achieves it) is an important word and goal.
Indeed, if you can achieve innovation in manufacturing, can you not call yourself a manufacturing success?

This time I explain the theories that serve as indicators for innovation.
One is innovator theory.
The other is chasm theory.

The word "theory" sounds difficult and off-putting, but I would like to explain it briefly.

Innovator theory

First, innovator theory. What is innovator theory in the first place?

It classifies purchasers by how they respond when a product is released, along a normal distribution.
(What is a normal distribution? Briefly, for those wondering: it is the distribution of variation around the average, the center. Probability and statistical theory use the fact that the degree of variation converges to the shape of its curve. If there is no disturbance, data really do converge to that shape, surprisingly enough; well, perhaps not everything does. The center of the X axis of the graph is the average.)

So how is it used here? It represents the distribution of the time from a product's release until each person purchases it. Whether purchases really distribute this way I leave to the scholars; let us look instead at how the theory divides the curve.

· Innovator (buys anything new)

(2.5% of the total)
People who buy a new product as soon as it comes out.

· Early Adopter (initial purchaser)

(13.5% of the total)
People who evaluate products and purchase according to their own values. They are relatively attuned to social values and can lead the adoption of a product. Also called opinion leaders.

· Early Majority (early follower)

(34.0% of the total)
People who are relatively cautious about new things, but who purchase under the influence of the Early Adopters.

· Late Majority (late follower)

(34.0% of the total)
People who are not very interested in new things, and who buy with the feeling of using the same thing as everyone else because the majority already owns it.

· Laggard (buys only the traditional)

(16% of the total)
Especially conservative people, who do not touch new things and purchase only after a product has matured over many years. This also includes people with no intention of purchasing at all.

(Figure: innovator theory distribution)

It can be divided like this. Why divide it this way? For those who wondered, a brief explanation: the theory uses the standard normal distribution. The spread is then fixed (in units of the standard deviation σ), so the boundaries indicate how far out, and where, each group lies. In terms of the standard deviation, the groups are as follows. (The percentages given earlier are probably easier to grasp; take whichever view you prefer.)

· Innovator: (below -2σ)
· Early Adopter: (-2σ to -1σ)
· Early Majority: (-1σ to 0)
· Late Majority: (0 to +1σ)
· Laggard: (above +1σ)
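The σ boundaries above map directly onto normal-distribution probabilities; the exact values (about 2.3%, 13.6%, 34.1%) are what the theory rounds to 2.5%, 13.5%, and 34%. A sketch of the mapping:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

# Share of each segment between the sigma boundaries listed above.
segments = {
    "innovator":      phi(-2),            # below -2 sigma  (~2.3%)
    "early_adopter":  phi(-1) - phi(-2),  # -2 to -1 sigma  (~13.6%)
    "early_majority": phi(0) - phi(-1),   # -1 sigma to 0   (~34.1%)
    "late_majority":  phi(1) - phi(0),    # 0 to +1 sigma   (~34.1%)
    "laggard":        1.0 - phi(1),       # above +1 sigma  (~15.9%)
}
```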

Products become known to the market in this order after a new product comes out. If the majority know and purchase the product, it can be said to be "popular".
In particular, if a product is accepted by the Early Adopters (i.e., becomes known beyond the initial purchasers), it is said that the majority will come to know it and it will "spread".
The 16% acceptance threshold matters because the purchase motives differ: Early Adopters buy from their own interests and values, while the Early Majority buys for convenience.

Therefore the marketing approach must change between Early Adopters and the Early Majority. For the Early Adopters, offer brand-new features and the high value society seeks. For the Early Majority, convey the product's popularity and its sense of security and stability.

From here on is my own view, but suppose the predicted and the actual normal distributions have the same shape. Suppose the current product's adoption is predicted to be midway between the Early Adopters and Early Majority (the -1σ position), but is actually at the peak of the distribution. Then the assumed average number of purchasers (the height of the center) is very different, and naturally the total number of purchasers will differ considerably as well.
Accurately grasping the current state is, I think, genuinely difficult, and perhaps this is what leads to chasm theory.
(Figure: prediction vs. actual)

The paradigm-shift sense of "innovator" is covered in this article.

For a product to spread explosively, there is a point that is not easily passed (the 16% penetration-rate wall).
To spread beyond it, the approach must change.

Categories
Manufacturing terms

Digital Fabrication “New Equipment”


What is Digital Fabrication?


It is equipment that automatically shapes digital design data once material is supplied, reproducing the digital data without human hands. Work that once required many experts is done in one pass, which makes creation time and quality manageable and consistent.

Benefits of digital fabrication
1. Complexity adds little cost
2. Quick response to design changes and the like
3. Flexible adaptation of production methods
4. Cost does not change much with production volume

Below is one example of digital fabrication: a 3D printer.

(Figure: 3D printing)

For now it only does shaping. But if it becomes possible to combine various materials in complex ways, could it not create most of the products around us? And if such printers become personal possessions, as document printers are today, a market for product data seems likely to grow. It is equipment that needs few kinds of parts, and whose production methods will keep expanding in the future.

Digital fabrication: something like a miniaturized manufacturing factory

Subtractive Manufacturing Equipment

Subtractive manufacturing equipment forms parts by machining the material itself, as a lathe does. It appears relatively earlier in production processes than additive manufacturing equipment.
Example: laser cutter

Let us take up one example product.

· Assembly-kit laser processing machine FABOOL Laser Mini
An assemble-it-yourself laser cutter from smartDIYs. Aimed at individuals; the main unit costs about 60,000 yen.

The material itself is cut or scraped (shaped by removal)

additive manufacturing equipment

Additive manufacturing equipment creates the form from the material, changing the material itself into the shape, as in casting or resin die molding.
It is this additive equipment that is making remarkable progress now. With these techniques, everything from simple resin moldings to houses people live in is being made.
Example: 3D printer

More and more varied 3D printers are appearing; for now, let us take up a few example products.

· High-speed, high-precision 3D printer Carbon 3D
It exploits the fact that photo-curable resin does not cure where oxygen is present (normally a weak point of resin-and-light systems), letting the product rise out of the resin tank.
It can print up to 100 times faster than ordinary 3D printers, and because there are no layer seams, it develops strength similar to injection molding.

· Low-priced 3D printer Da Vinci Jr.
A low-priced 3D printer for individuals from XYZprinting. The main unit costs around 50,000 yen.

· Low-priced metal 3D printer 3D Printer S1
A low-priced 3D printer from Aurora Labs that uses metal powder. It forms parts integrally from data by laser sintering, fusing the powder with a laser.
The main unit costs around 500,000 yen.

· Construction 3D printer Delta WASP
A building-scale 3D printer made by WASP. A gigantic metal frame about 6 m across supports the nozzle, which builds up clay and mud as material.

Material is generated and built up into shape (shaped by transformation)

Categories
Manufacturing terms

"A new combination of invention and market" Innovation

Innovation

I will explain innovation.

o Innovation

 
Even with existing technology, when a product begins to sell explosively, that is called "innovation".

What is innovation?
"A new combination of invention and market" (an explosively popular new product)

 

o Disruptive innovation

 

What is disruptive innovation?
"A phenomenon in which conventional products are destroyed by a disruptive technology" (a new, explosively popular product drives out existing products)

It is a phenomenon in which innovation occurs and conventional products are destroyed and replaced.
The phenomenon itself is called a paradigm shift.

Disruptive innovation is said to arise especially easily from products that, compared with conventional ones, are:

· cheaper
· smaller
· easier to use

Examples: (music) the cassette tape appearing in the record era, the CD appearing in the cassette era
(telephone) the smartphone appearing in the feature-phone era
Next, I would like to explain innovator theory and chasm theory.