got a question concerning the rule of seven, which states that when 7 consecutive data points fall on one side of the mean, the process is considered "out of control" and the PM should have a look.
Now, 7 consecutive data points on one side of the mean might not be an unlikely event at all - in fact, depending on your production output, it can occur quite frequently:
Let's assume a symmetrical distribution, i.e. the probability that a data point falls above the mean is 0.5, and likewise for below.
Let's also assume that events are not correlated (which should be the case for a perfectly functioning production line), i.e. the outcome of a measurement does not depend on the previous measurement.
In this case, the probability that 7 consecutive data point values are greater than the mean is (0.5)^7 = 1/128 - that's a bit less than 1%.
The probability that 7 consecutive data points are all on the same side of the mean (either side) is twice as large, namely 1/64 - or about 1.5%. (Choose any data point: the probability that the next 6 points fall on the same side as it is 0.5^6 = 1/64.)
In other words: on average every 64th data point in your diagram starts a sequence which breaks the rule of seven. If you produce 10 bicycles a day, that's not much of an issue. But if you produce a zillion screws a day, you'll run into this situation continuously.
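The 1/64 figure above is easy to verify empirically. Here's a quick Monte Carlo sketch (illustrative only, not part of any real control-chart tool): each data point is a fair coin flip for "above" or "below" the mean, and we count how often a point starts a run of 7 on the same side, i.e. how often the 6 points following it land on its side.

```python
import random

random.seed(42)  # fixed seed so the result is reproducible

N = 1_000_000
# True = above the mean, False = below; fair coin per the symmetry assumption
flips = [random.random() < 0.5 for _ in range(N)]

run_starts = 0
for i in range(N - 6):
    # does this point start a run of 7 on the same side?
    if all(flips[i + j] == flips[i] for j in range(1, 7)):
        run_starts += 1

rate = run_starts / (N - 6)
print(f"empirical rate: {rate:.4f}  (expected 1/64 = {1/64:.4f})")
```

With a million simulated points the empirical rate lands within a fraction of a percent of 1/64, confirming the back-of-the-envelope calculation.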
So, under my assumptions this rule doesn't make sense - which means there must be additional assumptions I'm missing.
Since the Rule of 7 is simply a heuristic (rule of thumb) approach to identifying when a process is "out of control", it does NOT necessarily guarantee that corrective action needs to happen, but does trigger an investigation into the variance. It also doesn't imply or suggest that the frequency at which a process becomes out of control is acceptable or not.
Increasing the sample size (bikes in your case) to a 'zillion' is irrelevant.
If I understand your question correctly, when you say "doesn't make sense" are you asking whether or not the Rule of 7 is "practical"?
If so, your example is not considering any "practical" specification limits or control limits which could influence what the data points are. (eg. data points may be a sample of 100 bikes from that zillion instead of all zillion of them.)
Your math looks correct to me -- for a random distribution, you should see seven consecutive data points on the same side of the mean only about 1% of the time.
Fundamentally, I agree with Jeremy -- the Rule of Seven is a rule of thumb, so it is not as if it is a mathematical law -- it is more of a guideline as to when it would be a good idea to investigate further.
It is very common in production and manufacturing to spot check -- often, you only test one unit out of many thousands that are produced. In fact, sometimes testing a unit requires that it be destroyed. This is all the more reason to minimize the amount of testing that you do, as it directly eats into the bottom line.
Take for example model rocket motors. These have specifications regarding the amount of thrust they are supposed to produce, over a specific duration. The only way to test this is to fire the engine, which destroys it. So obviously the manufacturer would want to test a bare minimum number of engines in order to gain sufficient confidence that their product was within spec.
thanks for your answers - the distinction between practicality and statistics helped.
In fact I was questioning the "sense" (being not a native speaker, I hope that's the correct term) of the rule. The probability of ~1.5% for a false positive seems too large, even for a rule of thumb.
That's where it links to practicality. If we leave aside the cost of testing (or the question of how to non-destructively test a firecracker), the number of tests per day determines whether the rule is practical or not. Do several hundred tests a day and you'll have too many false positives for the rule to be practical. Check one item a day and you're ok.
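To put numbers on that trade-off: with a false-alarm probability of 1/64 per data point, the expected number of false alarms per day scales linearly with the number of checks. A minimal sketch (the daily check counts are illustrative assumptions, not from any real production line):

```python
# Probability that any given data point starts a run of 7 on the same
# side of the mean, per the earlier calculation (0.5^6 = 1/64).
p_false_alarm = 1 / 64

# Expected false alarms per day for a few illustrative sampling rates
for checks_per_day in (1, 10, 100, 1000):
    expected_alarms = checks_per_day * p_false_alarm
    print(f"{checks_per_day:>5} checks/day -> ~{expected_alarms:.2f} false alarms/day")
```

At one check a day you'd see a false alarm roughly every two months; at a thousand checks a day you'd chase more than fifteen spurious alarms daily, which is exactly why the rule stops being practical at high sampling rates.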
Guess that does it for me.
Thanks for your replies.