Channel: Lean Math » Mark R. Hamel

Applied EPEI [guest post]


Every part every interval (EPEI) is my favorite lean metric for high mix/low volume (HMLV) value streams and probably the least known. It’s especially helpful when changeovers consume a significant portion of capacity, as is frequently the case with machine-oriented operations.

As the title implies, I want to share how I’ve used EPEI in lean transformation rather than establish its usefulness or provide the math. These are snippets of how I’ve used and adapted EPEI to address the challenges of high mix/low volume value streams – more to get you thinking than to be exhaustive.

Favorite Non-math Definition of EPEI. EPEI is the time it takes to produce every member of a product family, including the changeovers between products.

Planned EPEI. Much like takt time is a planning parameter used in the design of a future state, EPEI can be used in the same way. In HMLV operations, mix and volume can change radically, sometimes daily. I determine a Planned EPEI by taking representative mix and volume patterns from historical time periods and then calculating the EPEI for each time slice. Then we can make a decision about how much mix and volume we plan to support.
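
The classic EPEI calculation described above might be sketched as follows. This is a minimal, illustrative version: the function name, data layout, and all numbers are my assumptions, not from the post.

```python
# Hedged sketch of a classic EPEI calculation: EPEI (in days) is the time
# needed to cycle through every product once, given how much of each day is
# left for changeovers after running the daily demand.
def epei_days(products, available_min_per_day):
    """products: list of (daily_demand_units, cycle_time_min, changeover_min)."""
    run_time_per_day = sum(d * ct for d, ct, _ in products)
    total_changeover = sum(co for _, _, co in products)
    time_for_changeovers = available_min_per_day - run_time_per_day
    if time_for_changeovers <= 0:
        raise ValueError("No capacity left for changeovers")
    return total_changeover / time_for_changeovers

# Three invented products sharing one machine, 420 available minutes per day
family = [
    (100, 1.0, 30),  # product A: 100/day, 1.0 min each, 30 min changeover
    (50, 2.0, 45),   # product B
    (40, 1.5, 45),   # product C
]
print(round(epei_days(family, 420), 2))  # 0.75 days
```

Recomputing this for each historical time slice, as described above, yields the distribution from which a Planned EPEI can be chosen.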

Dynamic EPEI. It’s all well and good to set a Planned EPEI for each process, but volume and mix changes all the time. I recalculate EPEI for each period using the expected mix and volume. That can be based on revised forecasts or actual orders. The idea of a dynamic EPEI supports the reality of varying demand patterns.

Using EPEI to Visualize Capacity. The ratio of the Dynamic EPEI to the Planned EPEI is then an expression of required capacity vs. planned capacity. Using a ratio allows a simple visualization of capacity requirements for a specific mix and volume across all processes in a value stream even when the Planned EPEI differs by process.

In this example, the red line at the top represents 100% of Planned EPEI. We split out productive time (in green) from non-productive time (in red), which includes changeover, downtime, yield loss, rework, and load/unload. The remainder is available capacity (in yellow) and is what we tell our sales force to go sell. To complete the picture, we separate machine capacity from labor capacity.

Every Ordered Part Every Interval. The classic EPEI calculation assumes that all products are ordered for each period. When a value stream must support a collection of runners, repeaters and strangers, that’s not the case. The dynamic EPEI allows us to adjust the classic definition to “Every Ordered Part Every Interval”. When the value stream must support hundreds to thousands of product variants, clearly they aren’t all going to be ordered in every time period.

Different Types of Changeovers. Since EPEI is one of the few lean metrics that include changeover, it forces us to focus there. Not all changeovers are created equal. The normal changeover that we all think of is the time between when the last piece of one product is produced and when the first good piece of the next product is produced. A second type of changeover is what I’ve termed a “major changeover,” where there is some additional changeover requirement for a group of products. This is reflective of pattern production. For example, in a paint line, there may be no or very low changeover between individual products of the same color but a longer changeover when a color change is required. Combining the concept of changeover groups with a preferred sequence of changeover groups allows sequencing – that could be from light to dark, wide to narrow, thick to thin, lower temperature to higher temperature, whatever. The third type of changeover is what I call a “useful life” changeover. That may reflect tooling wearing down, chemicals being depleted, or maintenance requirements. This is a changeover that may occur even within a longer run of the same product. I’ve seen these kinds of changeovers driven purely by time but also by quantity. For example, one machine must be re-calibrated and cleaned every 8 hours regardless of what was run, but another machine must have its tooling changed out every 15,000 strikes.

EPEI and Fixed Interval Scheduling. EPEI sets the time period for heijunka scheduling or fixed interval scheduling. In this approach, we plan to run our operation in a series of fixed schedules of the same length. My starting point typically is to set a target of a week, but I want to move to as short a period as can be sustained. That would be the Planned EPEI. Then, as actual orders come in, we use the Dynamic EPEI to determine whether a period is oversold or undersold and invoke our buffering strategy to match capacity to actual order requirements. If we have a finished goods supermarket and are oversold, then we use our capacity to build what we can and pull the rest. Similarly, if we are undersold and have available time, then we can use that time to replenish the supermarket to targeted stock levels.
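
The oversold/undersold check for a given fixed interval can be sketched in a few lines. The function name and the strict 1.0 thresholds are my assumptions; the underlying idea is the Dynamic-to-Planned EPEI ratio described earlier.

```python
# Hedged sketch: classify one fixed interval using the ratio of
# Dynamic EPEI (required capacity) to Planned EPEI (planned capacity).
def interval_status(dynamic_epei, planned_epei):
    ratio = dynamic_epei / planned_epei
    if ratio > 1.0:
        return "oversold"   # build what we can, pull the rest from the supermarket
    if ratio < 1.0:
        return "undersold"  # use the spare time to replenish the supermarket
    return "balanced"

print(interval_status(4.5, 5.0))  # undersold
```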

By running with fixed-length intervals, we can move beyond a simplistic EPEI formula, which requires you to supply a planned number of changeovers, to a more comprehensive approach where the number and types of changeovers can be specifically determined in advance.

Driving EPEI for Continuous Improvement. The goal is to reduce EPEI to as low a level as possible. This moves us closer to our ultimate goal of one-piece flow. We want to use all of our available capacity to run as many changeovers as possible so we can reduce our batch sizes and free up as much inventory as possible. So we can use SMED to drive setup reduction, TPM to reduce downtime, or Six Sigma to reduce defects. Since EPEI is a composite of a number of performance factors, anything that improves a factor contributes to its reduction. An orientation to reducing EPEI is a useful strategy in our quest for perfection.

If it isn’t obvious by now, software support for all of this is absolutely essential. High mix/low volume value streams are awash in data. Software is beginning to emerge that helps manage the variation and complexity found in real-world operations.

_______________________________________________________________

This post was authored by Phil Coy, Managing Director, Strategic Services for mcaConnect, responsible for the Manufacturing Excellence practice. His professional experience of over 30 years includes more than 25 ERP implementations and over 13 years of lean experience. Phil specializes in high mix/low volume lean implementation and complex manufacturing. He is the designer of mcaConnect’s Areteium lean transformation software supporting future state design, modeling and planning for complex manufacturing. Integrating standard lean principles and tools with ERP solutions and then extending them to support increasing variation and complexity is his passion. His industry experience includes industrial equipment, specialty metals, chemicals (process), electronics, medical devices, food and beverage, and consumer packaged goods. Phil blogs at www.mcaconnect.net/blog.

The post Applied EPEI [guest post] appeared first on Lean Math.


Multi-Voting Math (or N/3)


Multi-voting, also known as N/3 multi-voting, N/3 voting, or dot-voting, and sometimes mistakenly thought to be identical to the nominal group technique, is a technique used by small groups to quickly select a subset from a broader set of options. This democratic approach allows team members to cast a finite number of votes, with few restrictions (e.g., individuals can’t “plump” all of their votes on a single candidate), for their options of choice. Ultimately, the process yields a rank order. Some options, the ones with zero or few(er) votes, are de-selected so that the team’s attention can be focused on the surviving options. Multi-voting can be done iteratively to further winnow down the options.

Most multi-voting math is reflected in the “N/3” moniker, which represents the formula for determining the number of votes each team member is allocated. This number is purposely less than the number of total candidates. The math follows:    

[Figure: N/3 formula – each member’s vote allocation = N ÷ 3, where N is the total number of candidate options]

Some considerations:

  • If the number of calculated votes gets too large, for example 20 or more, then it may be prudent to artificially limit the total. Voting (and tabulating) can become unwieldy when such a large number of votes are extended against the number of voting team members.
  • Good N/3 math does not eliminate the need for good selection criteria and rigorous, critical discussion before voting. And remember, the N/3 technique is about (rough) prioritization and reduction of the candidate population; it is not a magical means of picking a “winner.”
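
The allocation math above can be sketched briefly. Rounding up and the cap of 20 are my choices (echoing the first consideration); some teams round N/3 differently.

```python
import math

# Hedged sketch of N/3 vote allocation per team member.
def votes_per_member(n_candidates, cap=20):
    """Roughly one-third of the candidate count, capped to keep voting manageable."""
    return min(math.ceil(n_candidates / 3), cap)

print(votes_per_member(10))  # 4 votes each for 10 candidate options
print(votes_per_member(75))  # capped at 20
```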

The post Multi-Voting Math (or N/3) appeared first on Lean Math.

Triangle Kanban Sizing


Triangle kanban, while one of three types of signal kanban, is unique in that there is only a single kanban per part number or stock keeping unit. Accordingly, kanban sizing math has nothing to do with determining the number of kanban – that’s obviously fixed.

Instead, the math is around determining the total manufacturing lot size, which is the total kanban size, and the appropriate replenishment trigger point or re-order point. Re-order point is addressed in a separate entry by the same title.

There are two basic methods of sizing triangle kanbans. We will call these: 1) product-specific lot size, and 2) universal lot size. The first applies a universal every part every interval (EPEI) to all parts made by the supplying operation. This yields lot sizes that are unique to each part because they are based upon each part’s own demand. The second method applies EOQ-type thinking and/or a more simplistic one-size-fits-all approach to determine a universal lot size for all parts. For example, management may determine that the supplying operation will produce every part number in 500-piece lots, no matter what each part number’s specific average demand level may be.

The product-specific lot size method better matches production with demand and thus minimizes inventory levels, although it is a bit harder to manage. The universal lot size method, while easier to manage (same lot size for every part), will generally require more inventory.
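
The two sizing methods can be contrasted with a short sketch. The function names, the 5-day EPEI, and the demand figures are my illustrations; only the 500-piece universal lot echoes the example above.

```python
# Hedged sketch of the two triangle kanban sizing methods.
def product_specific_lot(avg_daily_demand, epei_days):
    """Method 1: the lot covers the part's own demand over one EPEI cycle."""
    return avg_daily_demand * epei_days

def universal_lot(avg_daily_demand, lot_size=500):
    """Method 2: one lot size for every part, regardless of its demand."""
    return lot_size

epei = 5  # assume the supplying operation cycles through every part in 5 days
for part, demand in {"part_a": 120, "part_b": 40}.items():
    print(part, product_specific_lot(demand, epei), universal_lot(demand))
```

Note how the universal method hands the low-demand part the same 500-piece lot, which is where the extra inventory comes from.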


The post Triangle Kanban Sizing appeared first on Lean Math.

Making the Lean Learning Community More Valuable to You


Recently, a handful of us fellow-lean bloggers had the opportunity to chat about the voice of the customer (VOC). This was not an abstract discussion about someone else’s customers. We were focused on our own – the folks who comprise our blogging community, the lean learning community.

Yes, we were talking about YOU…and whether we were (at least) meeting your needs. Hopefully, your ears weren’t burning.

Understanding and successfully responding to the VOC is wholly consistent with lean. So, in an effort to get to part one (understanding), we’ve developed a quick survey that we sincerely hope you will take. Consider this as the necessary “check” part of PDCA for us.  

The survey is composed of 10 short questions around things like the type of information you are looking for, what’s missing, preferred blog post frequency, etc. We promise that the survey is anonymous and that no telemarketers will stalk you because of your participation.

Your responses will help us improve our offerings. Of course, after we compile the survey results, we will share them on our respective blogs.

Thanks for your time!

Click here for the survey and here for a link to download a zip file of free materials from Jeff Hajek, Chad Walters, and Matt Wrye as a thank you for taking our survey.

The post Making the Lean Learning Community More Valuable to You appeared first on Lean Math.

Available Time for Changeovers


Available time for changeovers per period (Ta∆), also called available time for (internal) set-ups, represents the time per a given period (day, shift, week, etc.) during which a machine, piece of equipment, or resource (e.g., a room) can be changed over from one product to another, prepared for a different medical procedure, cleaned for another customer, etc. Ta∆ is foundational to every part every interval (EPEI), changeover distribution, and kanban sizing calculations.

One principal lean objective is to continuously shrink changeover times using set-up reduction strategies. Here, we are primarily concerned with internal set-up, the time during the set-up when a given resource is not available to produce or serve the customer. This is synonymous with “changeover time” (the time elapsed between the last part of the prior run and the first good part immediately following the changeover), unless external set-ups (the activity conducted before and/or after the internal set-up) from one changeover “crash” into another, or there are insufficient human resources to execute the external set-up such that it becomes internal. The smaller the internal changeover, the higher the potential changeover frequency, the smaller the batch size, the shorter the lead time, etc.

While Ta∆ can be exploited with shorter changeover times, Ta∆ can often be expanded by attacking the elements that consume the balance of available time within a day or shift. For example, the targeted resource can be used or run during breaks, lunches, etc., and/or those planned unavailable times can be modified. Also, unplanned time losses can be countered through the application of total productive maintenance (TPM), and yields can be improved through mistake-proofing, etc. Additionally, cycle times can be reduced through machine kaizen.

The following conceptual discussion about the “purist view” versus the “rationalist view” is useful.

Purist view. The lean purist’s perspective is that the time incurred for “unplanned” losses (such as machine breakdowns, adjustments, and idling) should not be excluded from Ta∆. In other words, the calculated takt time (Tt) should presume these losses are minimal. It is also presumed that scrap is negligible.

If/when a process cannot repeatedly and regularly satisfy customer requirements, then the losses, if they are indeed the root causes of missed Tt, will be appropriately highlighted and aggressively addressed through kaizen. As previously mentioned, losses are often reduced through the application of set-up reduction, TPM, variation reduction kaizen, etc.

Losses that go beyond very minor unplanned interruptions – such as reduced speeds, changeovers and adjustments, tool changes, IT system crashes, phone outages, rework, and equipment breakdowns – tend to be “hidden” when Ta∆ is reduced to accommodate them. The same notion applies when Ta∆ is reduced to accommodate increased production quantities that offset yield losses.

A reduced Ta∆, let’s call it “Rationalized Ta∆,” accommodates less frequent changeovers and recognizes the waste of stoppage and yield losses, while allowing the build-up of inventory or queues. In the end, the process may satisfy its quota, but it will do so with hidden waste and unevenness and, sometimes, very public and painful overburden. We know that “hidden” is the antithesis of lean; it essentially accommodates the waste, obscures it…and allows it to live on.

Rationalist’s view. Lean is all about taking care of the customer. Ultimately, the lean practitioner must defer to common sense in the pursuit of taking care of the customer. Accordingly, it may be prudent to temporarily rationalize Ta∆ to reflect the impact of unplanned losses, including yield losses (and facilitate the satisfaction of customer requirements). Purists can, for the most part, grudgingly accept this accommodation IF the situation is plainly acknowledged (i.e., not “hidden”) AND leadership actively and effectively pursues the root causes of the losses.
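
The contrast between the two views might be sketched like this. Parameter names and all numbers are illustrative assumptions; the purist leaves observed unplanned losses out of the subtraction, while the rationalist temporarily subtracts them.

```python
# Hedged sketch: available time for changeovers (Ta delta) under the purist
# view (losses presumed minimal, so not subtracted) vs. the rationalist view
# (observed unplanned losses temporarily subtracted).
def ta_changeover(available_min, required_run_min, unplanned_loss_min=0,
                  rationalist=False):
    losses = unplanned_loss_min if rationalist else 0
    return available_min - required_run_min - losses

print(ta_changeover(450, 380))                                           # 70
print(ta_changeover(450, 380, unplanned_loss_min=30, rationalist=True))  # 40
```

The 30-minute gap between the two answers is exactly the waste the purist insists must stay visible.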

Some considerations:

  • It is pragmatic to calculate Ta∆ using weekly, monthly, or even quarterly data in order to “smooth out” variation in demand, unplanned downtime, adjustment losses, etc. However, the lean practitioner is obligated to understand the variation, address it appropriately, and understand the implications for EPEI, replenishment intervals, and kanban sizing calculations.
  • If demand is dynamic, the available time calculation should be refreshed periodically.
  • A rule-of-thumb Ta∆ target is approximately 10% of available time, although this will vary based upon the value stream’s characteristics.

Related posts: Available Time, Every Part Every Interval (EPEI), Applied EPEI [guest post]


The post Available Time for Changeovers appeared first on Lean Math.

Plan Versus Actual Math


The plan versus actual chart is one of the most powerful and simple visual process performance metrics. In fact, it’s a sort of Swiss Army knife of charts in that it not only provides insight into process performance but, by virtue of its comment field, begs and shares information as to when and why there is a variance from plan. Ultimately, it is about problem identification.

The chart is often positioned at the pacemaker process or at the output end of a line or cell (which can be the same thing). It goes by a number of different names: production analysis board, day-by-the-hour chart, ahead or beyond chart, production control board, etc.

Typically, the discrete time increments reflected on the chart are hourly, but this is not always appropriate depending upon the takt image (pitch) of the process. The plan should reflect the customer demand as communicated by a pre-set schedule or, when the resource is scheduled by downstream pull signals, the actual pulled quantity serves as the plan. In any event, the plan quantities are intended to reflect and accommodate takt time and, when appropriate, allow for standard internal changeover times.

The plan versus actual math is extremely simple (the challenge lies in the discipline and problem solving). While there are a number of derivatives, the figure below is a relatively standard design. It contains two alpha “call-outs” by which the math is explained.

[Figure: plan versus actual chart with two alpha call-outs explaining the math]

Some considerations:

  • Many lean practitioners will add two columns immediately to the right of the cumulative columns. These additional columns represent the delta between the hourly (or pitch) plan versus actual and the delta between the cumulative plan versus actual. For example, within the figure’s 9:20 to 10:20 a.m. row, the hourly delta would be -13 and the cumulative delta would be -21. Clearly, this helps the viewer of the chart more quickly identify the delta, but it does require some quick figuring and writing by the person maintaining the chart…and most folks can easily do the math in their head.
  • Not all production quantities are equal. This is true in a mixed model environment where the cycle times for the various products, transactions, or services are significantly different. In such a situation, the plan versus actual may have to use common units for both planned and actual outputs.
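
The two delta columns described in the first consideration can be sketched as follows. The interval data is invented, except the 9:20 to 10:20 deltas, which echo the figure’s example.

```python
# Hedged sketch of the hourly and cumulative plan-vs-actual deltas.
rows = [  # (interval, planned, actual)
    ("8:20-9:20", 60, 52),
    ("9:20-10:20", 60, 47),
]

deltas = []
cum_plan = cum_actual = 0
for interval, plan, actual in rows:
    cum_plan += plan
    cum_actual += actual
    # hourly delta, then cumulative delta
    deltas.append((interval, actual - plan, cum_actual - cum_plan))

print(deltas[-1])  # ('9:20-10:20', -13, -21)
```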

Related posts: Pitch: Takt Image Math, Plan Vs. Actual – The Swiss Army Knife of Charts

The post Plan Versus Actual Math appeared first on Lean Math.

Value Stream Mapping Math: Rolled Throughput Yield


Value stream analysis is an effective way to identify improvement opportunities within a product or service family’s value stream, envision a leaner future state and develop an actionable value stream improvement plan to achieve the future state. It’s bread and butter stuff for the lean practitioner.

Most folks are well acquainted with the value stream map’s lead time ladder. And many people are familiar with the concept of rolled throughput yield. However, based upon my humble observation of hundreds of value stream maps, there are precious few who incorporate a rolled throughput yield (RTY) line within their maps.

It’s a shame.

The process yield data is typically already captured in each process’ data box. From there, it doesn’t take too much effort to build out the RTY line.

Know that yield represents the percentage of process outputs – fabricated parts, assemblies, analyses, transactions, reports, etc. – that do not require any sort of rework or replacement, at any time. In the service and healthcare industries, this includes completeness and accuracy the first time.

Granted, the yield data contained within a current state map is often in the SWAG (scientific wild a** guess) or plain old WAG accuracy category; it still can generate a very insightful RTY line.

It is painful, but not surprising, to see current state RTYs in the single-digit range (i.e., ≤9%)…or worse. Think of that as opportunity.

The RTY line captures the discrete process yield directly below the related process box and lower lead time ladder rung. As you can see in the figures, it is “boxed in.”

The discrete process yield boxes are connected via a horizontal line. In between each box is recorded the cumulative or rolled throughput yield. By the very end of the RTY line, we have the full RTY for the value stream.
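
The cumulative math behind the RTY line is just a running product of the discrete process yields. The yields below are invented for illustration.

```python
from functools import reduce

# Minimal sketch: rolled throughput yield is the product of the discrete
# process yields along the value stream.
def rolled_throughput_yield(yields):
    return reduce(lambda acc, y: acc * y, yields, 1.0)

process_yields = [0.95, 0.80, 0.70, 0.60]
print(round(rolled_throughput_yield(process_yields), 4))  # 0.3192
```

Four processes that each look respectable on their own combine to pass fewer than a third of outputs through defect-free, which is exactly the insight the RTY line makes visible.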

Things can get a little bit funky when there are branches within the map. See the second figure for an example. It essentially applies a weighted average yield.

In summary, make your value stream maps more useful. Add an RTY line!

Related post: Value Stream Mapping Math: Lead Time Ladder Process “Branch”

The post Value Stream Mapping Math: Rolled Throughput Yield appeared first on Lean Math.

Graphs Are Math: Visual Process Performance


In the words of my friend and colleague Larry Loucka, “graphs are math.”

Graphs often serve as effective visual process performance tools. Typically, these types of graphs fall into the metric category. As reflected in the supporting concepts of the fourth dimension of the Shingo Prize model, good metrics should:
1. “measure what matters,”
2. “align behaviors with performance,” and
3. “identify cause and effect relationships.”

Real lean drives measurable operational and financial performance improvement. Results are typically enjoyed first at the operational level and then, as the transformation matures, the financial benefits follow.

[Figure: operational and financial metrics]

Lean thinkers pragmatically measure critical results and the actionable drivers of those results. Measurement is for the purpose of vertical and horizontal alignment within the organization and the encouragement of desired behaviors within the context of a lean management system. It focuses stakeholders on breakthrough and daily improvement through the characterization of current and target conditions and provides insight into past and present performance. Performance metrics are therefore integral to plan-do-check-act.

More simply put, continuous improvement, whatever the scale, is a never ending cycle of measure, improve, and then measure again.

[Figure: true north metrics]

So, what operational stuff should we measure?

The fourfold “true north” metrics, reflected in the figure immediately above, represent the critical few metrics that drive business performance and human development. The phrase “true north” captures the organization’s long-term direction, not simply its daily, weekly, monthly, quarterly, and annual performance. Consistent with that thinking, the ultimate targets for true north could be construed as follows:

  • quality improvement – zero defects,
  • delivery – 100% value-adding time,
  • productivity – 100% value-adding steps in all work, and
  • people – 100% of the workforce contributing to improving the work.

When contrasting the four true north metrics against the often used “SQDCIM” battery (safety, quality, delivery, cost, innovation and morale), it may initially appear that this true north is incomplete. This is not necessarily the case.

“Human development” encompasses both safety and morale, and innovation is addressed through the improvement of the four metrics when they are applied to the new product/service offering development process.

However, there is one category that could reasonably be added to these metrics – growth. Lean, especially in concert with relevant strategic initiatives, drives business growth. A growth metric category explicitly captures performance relative to things like the penetration of existing business accounts and addition of new accounts. The Shingo Prize’s fourth internal measurement area is customer satisfaction (we can slip that under quality). Certainly, that is consistent with growth.

In the future, we will share some insights into visual process performance design considerations, related math (surprise!), as well as some examples. And, once in a great while, we may mix in the occasional financial metric.

Related posts: Balancing Two Types of Visual Controls within the Context of Lean Management (Gemba Tales), Plan Versus Actual Math

The post Graphs Are Math: Visual Process Performance appeared first on Lean Math.


Full Time Equivalent Math


Full time equivalent(s), commonly referred to as FTE(s), represents the number of equivalent employees working full time. One full time equivalent is equal to one employee working full time. Typically, FTEs are measured to one or two decimal places.

FTEs are NOT people. Rather, an FTE is a ratio of the time worked within a specific scope, like a department, to the number of full-time working hours during a given period. As such, an FTE count often does not equate to the number of employees actually on staff.

FTEs are a mathematical tool used to compare and help understand workloads, and the fragmentation of those workloads, across processes, teams, departments, value streams, and businesses. This is especially relevant in environments where employees work multiple processes, are shared among multiple teams, work odd schedules, and/or work part time. Managers use FTE insight for things like:

  • calculating current and future state staffing requirements,
  • calculating real or potential labor savings from process improvement,
  • understanding resource requirements for projects, and
  • normalizing staff count for the purpose of generating performance metrics such as revenue per person and productivity per person per hour (where the FTE is used as a mathematical proxy for “person”)

Some FTE math follows.
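
A minimal sketch of the basic FTE ratio follows. The 40-hour full-time week is an assumption; substitute whatever your organization defines as full time for the period.

```python
# Hedged sketch: FTEs are a ratio of worked time to full-time hours,
# not a headcount.
def fte(worked_hours, full_time_hours_per_period=40.0):
    return round(worked_hours / full_time_hours_per_period, 2)

# e.g., several part-timers logging 132 hours in a 40-hour week
print(fte(132))  # 3.3
```

Note that 3.3 FTEs might be anywhere from four to a dozen actual people, which is why FTE-based conclusions need the care described below.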

[Figure: FTE calculation examples]

As with any measure (no pun intended) of Lean Math, there are at least a few things to be mindful of:

  • Just because the math “works” doesn’t mean that FTE-based conclusions are sound. As previously stated, FTEs are NOT people; they are ratios. Real people do the work. As lean practitioners try to understand improvement opportunities, and the chance, for example, to redeploy a worker to a process that needs additional capacity, they must pragmatically and respectfully consider things like how the work is or will be designed, standardized (which means understanding steps, sequence, and cycle time), balanced among team members, “levelized” in the context of volume and mix, and apportioned given cross-training gaps, gap closure opportunities and limitations, and political dynamics.
  • The math required to determine optimal staffing is specific to balancing line staffing for a given product or service (or family of products or services) and is rarely the same thing as FTE math. This is, among other things, because optimal staffing can be calculated for multiple “playbook” scenarios based upon different demand rates and, simply put, optimal staffing is often different than actual staffing.

Related post: Work Content

The post Full Time Equivalent Math appeared first on Lean Math.

Days Inventory on Hand


Days inventory on hand, also known as days of supply, is, along with inventory turns, a measure of inventory investment. While turns may be one of the most basic measures of an organization’s “leanness,” days inventory on hand perhaps helps lean practitioners better visualize the magnitude of (excess) inventory and its impact on a value stream’s lead time. This is especially applicable when the notion of inventory extends beyond parts and finished goods to transactional (e.g., files, contracts) and healthcare (e.g., tests, reports) value streams.

There are two basic approaches to calculate days inventory on hand: 1) divide the number of days that the value stream is operating by the inventory turns, or 2) divide average inventory by daily usage. Mathematically, it gets you to the same place. It is often more actionable and meaningful if the days inventory on hand is not only calculated with total inventory, but also by raw material and finished goods and even by other inventory sub-categories.

Like with many of the Lean Math entries, some math convention considerations bear discussion:

  • Number of days. Financial folks will often use 365 or 360 days as their numerator. That reflects reality IF the value stream is in operation virtually every day of the year, like Walmart®. However, most value streams are working something less than that – often 250 days a year or so. The purpose of the measure is to provide insight into how much cholesterol is really accumulating in the value stream. Use a number that mirrors the value stream’s available days during the year, or use the second basic approach of dividing average inventory by daily usage (of course, apply the same logic when determining daily usage). Bottom line – understand your math convention and that of whomever you might be benchmarking against.
  • Inventory value versus inventory units. Inventory value is often used to calculate inventory turns and, as reflected in the separate inventory turns entry, it has its pros and cons. A unit-based approach does eliminate much of the “noise” that inventory valuation methods and high mix may introduce. Furthermore, units, especially in the area of finished goods, are what the customer “feels,” and the value stream experiences. See below for examples using value and units.
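
Both approaches from the entry can be sketched in a few lines, using invented numbers and the 250 operating days discussed above rather than 365.

```python
# Hedged sketch of the two equivalent days-inventory-on-hand calculations.
def doh_from_turns(operating_days_per_year, inventory_turns):
    """Approach 1: operating days divided by inventory turns."""
    return operating_days_per_year / inventory_turns

def doh_from_usage(avg_inventory_units, daily_usage_units):
    """Approach 2: average inventory divided by daily usage."""
    return avg_inventory_units / daily_usage_units

print(doh_from_turns(250, 10))    # 25.0 days on hand
print(doh_from_usage(5000, 200))  # 25.0 days on hand
```

As the entry notes, the two approaches get you to the same place when the conventions (operating days, units vs. value) are consistent.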

Related post: Inventory Turns Calculation

The post Days Inventory on Hand appeared first on Lean Math.

Cpk and the Mystery of Estimated Standard Deviation [guest post]

$
0
0

It all started when my colleague and I noted that we had used the same data to calculate Cpk, but ended up with different results. This led us down an Alice in Wonderland-like path of Google searching, Wikipedia reading, and blogosphere scanning.

After several days of investigation, we determined that there was no consensus on how to properly calculate estimated standard deviation.

Knowing that there must be a misunderstanding and that this should be purely an effort based on science, we decided to get to the bottom of this. My colleague and I decided that there was a need for a simple, accurate tool that anyone could use and afford. We wanted to break the economic and educational barriers that got in the way of conducting needed process capability studies. More on that in a bit.

Our investigation revealed that the biggest confusion out there was with the following two symbols.

s vs. σ̂ – or, regular sample standard deviation vs. estimated standard deviation (sporting that little hat over the sigma).

Regular sample standard deviation is used to calculate process performance, or Pp/Ppk. It is based on the actual data – how your process has actually performed in current reality (overall performance).

Estimated standard deviation is used to calculate process capability, or Cp/Cpk. In other words, what is your process capable of when at its current “best” state (within subgroups)?

This leads us to the simple tool that I referenced above.

There’s an App for that

The creation of the “Cpk Calculator App” has been a long and winding road with a lot of research and validation (also known as PDCA). But, in the end, we created a tool that automatically calculates standard deviation in one of three ways, depending on data set characteristics (the biggest dilemma on the web):

1. If data is in one large group, we use the regular sample standard deviation calculation:

[Image: sample standard deviation formula]

Many people use the calculation above to compute standard deviation and call the result Cpk, when in reality what they are calculating is Pp or Ppk, since they are not using estimated standard deviation. Ppk is definitely the more conservative of the two because it's based on the actual standard deviation, but for whatever reason Cpk has become the more famous.

And, they are often confused.

2/3. If you collect your data in subgroups, there are two preferred methods of estimating standard deviation using unbiasing constants:

[Image: Rbar/d2 estimated standard deviation formula]

Rbar/d2 is used to estimate standard deviation when the subgroup size is at least two but not more than four. The average of the subgroup ranges is divided by the d2 constant. This calculation is best when you have many small subgroups of data.
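In symbols (standard SPC notation, not taken from the original image), with subgroup ranges R_j over k subgroups:

```latex
\hat{\sigma} = \frac{\bar{R}}{d_2},
\qquad
\bar{R} = \frac{1}{k} \sum_{j=1}^{k} R_j
```

Common d2 values are 1.128 (subgroup size 2), 1.693 (size 3), and 2.059 (size 4).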

[Image: estimated standard deviation for unequal subgroup sizes]

The calculation shown above reflects another way to estimate standard deviation; it should be used when subgroup sizes are unequal, or when subgroups are larger than four data points.
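One common formulation for this case uses the pooled standard deviation with the c4 unbiasing constant; this is a standard approach used by many SPC packages, offered here as a sketch rather than a reproduction of the original image:

```latex
\hat{\sigma} = \frac{S_p}{c_4(d+1)},
\qquad
S_p = \sqrt{\frac{\sum_j \sum_i (x_{ij} - \bar{x}_j)^2}{d}},
\qquad
d = \sum_j (n_j - 1)
```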

Please see these links for the Cpk Calculator App on Google Play (for Android) and the Apple App Store.

More about Cp, Cpk vs. Pp, Ppk

Pp and Ppk are based on actual, “overall” performance regardless of how the data is subgrouped, and use the normal standard deviation calculation of all data (n − 1). Cp and Cpk are based on variation within subgroups and use estimated standard deviation; they show statistical capability based on multiple subgroups. Without getting into too much detail on the difference in calculations, think of the estimated standard deviation as the average of all of the subgroups' standard deviations, and “regular” standard deviation as the standard deviation of all data collected.
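A minimal sketch of the two calculations, using hypothetical data and made-up specification limits, shows why the same data can yield different indices:

```python
import statistics

# Hypothetical subgrouped measurements (4 subgroups of size 3);
# the specification limits below are assumptions for illustration.
subgroups = [[10.1, 10.3, 10.2], [10.4, 10.2, 10.3],
             [9.9, 10.0, 10.1], [10.2, 10.4, 10.3]]
LSL, USL = 9.5, 10.9

d2 = 1.693  # unbiasing constant for subgroup size n = 3

all_data = [x for g in subgroups for x in g]
mean = statistics.mean(all_data)

# "Regular" overall sample standard deviation (n - 1) -> Pp/Ppk
s_overall = statistics.stdev(all_data)

# Estimated standard deviation from the average subgroup range -> Cp/Cpk
r_bar = statistics.mean(max(g) - min(g) for g in subgroups)
sigma_hat = r_bar / d2

ppk = min(USL - mean, mean - LSL) / (3 * s_overall)
cpk = min(USL - mean, mean - LSL) / (3 * sigma_hat)
print(f"Ppk = {ppk:.2f}, Cpk = {cpk:.2f}")
```

Because the within-subgroup estimate ignores drift between subgroups, Cpk here comes out higher than Ppk, which is exactly the discrepancy that started this investigation.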

Cp (process capability). The amount of variation that you have versus how much variation you’re allowed based on statistical capability. It doesn’t tell you how close you are to the center, but it tells you the range of variation. Note that nowhere in this formula is the average of your actual data referenced.

[Image: Cp formula]

Cpk (process capability index). Tells you how centered your process capability range is in relation to your specification limits. This only accounts for variation within subgroups and does not account for differences between subgroups. Cpk is “potential” capability because it presumes that there is no variation between subgroups (how good you are when you're at your best). When your Cpk and Ppk are the same, it suggests that your process is in statistical control.
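In standard notation (these are the conventional definitions, not reproductions of the original images), with the grand mean and estimated standard deviation:

```latex
C_p = \frac{USL - LSL}{6\hat{\sigma}},
\qquad
C_{pk} = \min\!\left( \frac{USL - \bar{\bar{x}}}{3\hat{\sigma}},\;
                      \frac{\bar{\bar{x}} - LSL}{3\hat{\sigma}} \right)
```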

[Image: Cpk formula]

Pp (process performance). The amount of variation that you have versus how much variation you're allowed, based on actual performance. It doesn't tell you how close you are to the center, but it tells you the range of variation.

[Image: Pp formula]

Ppk (process performance index). Ppk indicates how centered your process performance range is in relation to your specification limits (how well you are performing currently).
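The performance indices take the same conventional form, but use the overall sample mean and the regular sample standard deviation s:

```latex
P_p = \frac{USL - LSL}{6 s},
\qquad
P_{pk} = \min\!\left( \frac{USL - \bar{x}}{3 s},\;
                      \frac{\bar{x} - LSL}{3 s} \right)
```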

[Image: Ppk formula]

What’s a “Good” Cpk?

A Cpk of 1.00 will produce a 0.27% fail rate, or a theoretical 2,700 defects per million parts produced. A Cpk of 1.33 will produce roughly a 0.007% fail rate, or a theoretical 66 defects per million parts produced. In reality, the acceptable Cpk depends on your particular industry standard. As a rule of thumb, a Cpk of 1.33 is traditionally considered a minimum standard.
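These fail rates can be checked against the standard normal distribution. A small sketch, assuming a perfectly centered, normally distributed process:

```python
import math

def expected_fallout_ppm(cpk):
    """Theoretical two-sided fallout (parts per million) for a
    centered, normally distributed process with the given Cpk."""
    # P(|Z| > 3 * Cpk), where Phi is the standard normal CDF
    phi = 0.5 * (1 + math.erf(3 * cpk / math.sqrt(2)))
    return 2 * (1 - phi) * 1_000_000

print(round(expected_fallout_ppm(1.00)))  # about 2,700 ppm
```

A Cpk of 1.00 puts the nearest specification limit three standard deviations from the mean, hence the familiar 2,700 ppm figure.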

Confidence Interval

The confidence interval shows the statistical range of your capability (Cpk) based on sample size. Basically, the larger the sample size, the tighter the range. The confidence interval says that there is an x% confidence that your true capability lies between “a” and “b.” The higher the confidence level, the wider the range.

For example, if we report a Cpk of 1.26, what we are really saying is something like, “I don't know the true Cpk, but based on a sample of n = 145, I am 95% confident that it is between a Cpk of 1.10 and 1.41.” The more data you collect, the more accurate your measurement of actual process capability or performance. In most calculations, 90% or 95% confidence is required, but a confidence interval can be calculated at any level; just remember that the fewer the data points, the wider the confidence interval.
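As a sketch, a commonly used approximation for the Cpk confidence interval (Bissell's formula; it is an assumption that this is what the app uses) reproduces the example above to within rounding:

```python
import math

def cpk_confidence_interval(cpk, n, z=1.96):
    """Approximate confidence interval for Cpk (Bissell's
    approximation); z = 1.96 gives roughly 95% confidence."""
    half_width = z * math.sqrt(1 / (9 * n) + cpk ** 2 / (2 * (n - 1)))
    return cpk - half_width, cpk + half_width

low, high = cpk_confidence_interval(1.26, 145)
print(f"95% CI: ({low:.2f}, {high:.2f})")
```

The upper bound here lands at about 1.42 rather than 1.41 — a rounding difference, or possibly a slightly different formula behind the reported numbers.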

[Image: Cpk confidence interval formula]

Real Life Application

During the creation and testing of the Cpk Calculator App, we had the opportunity to test every scenario we encountered in the real world. In one real-life scenario, a routine hourly check of a “widget's” thickness determined that the part was out of specification. After 15 minutes of data collection and testing on the floor using the app, we found that our process, which normally had a Cpk of 1.3, now reflected a Cpk of 0.80. This led us to discover that the machine operator had reduced the cutting machine's cycle time in an attempt to improve throughput and productivity. With that in mind, we reset the machine to its original settings to confirm that we had found the root cause. Subsequently, we used the Cpk calculator as we gradually reduced cycle time as much as possible without negatively affecting process capability. In the end, we confirmed root cause and implemented a new and improved cycle time for the piece of equipment.

________________________________________________________
[Photo: Levi McKenzie]

This post was authored by Levi McKenzie, a continuous improvement kind of guy who enjoys exploring new facets of lean methodology, facts, data, and making things faster and better. Levi is a co-founder of Brown Belt Institute, a mobile app development company that focuses on providing useful lean six sigma tools that are inexpensive and easy to use for the “blue collar brown belt” sector.

The post Cpk and the Mystery of Estimated Standard Deviation [guest post] appeared first on Lean Math.

Pitch Interval for Same Pitch Products


Pitch interval (Ip) can be thought of in two ways: 1) as a unit of time representing the (usually) smallest common pitch shared among a range of products, services, or transactions that are being produced, conveyed, performed, or executed by a given resource(s), and 2) as a count of the number of intervals of a common pitch over a period of time, typically a shift or day.

Ip often defines the time intervals reflected in the typical design of heijunka, leveling, or scheduling boxes or boards, in which instruction or withdrawal kanban are loaded within the heijunka sequence (as accommodated by actual demand).

Figure 1 captures the Ip math. This post is specific to products that share the same pitch. A future post will address Ip for products that have different pitches. Figure 2 provides some insight into heijunka box design and loading in the context of Ip.

Figure 1. Pitch interval formula tree

Where:
Ta = available time for the period, typically a shift or day and expressed in seconds or minutes
P = pitch for the resource(s) related to Ta and expressed in the same unit of time.
GCD notation represents the greatest common divisor, also known as greatest common factor or highest common factor, for a given set (a, b…)
Pn = each non-equal pitch amongst the various products for the resource(s) related to Ta and expressed in the same unit of time.
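The formula tree in Figure 1 is an image; reconstructed from the definitions above, the math is:

```latex
% Same-pitch case (this post):
I_p = \frac{T_a}{P}
% Different-pitch case (future post):
I_p = \frac{T_a}{\gcd(P_1, P_2, \ldots)}
```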

Same Pitch Example:
There are three products (A, B, and C), all of which share the same 20 minute pitch. See table below for the pitch calculation.

Table 1. Pitch calculation

[Image: pitch interval calculation]

See Figure 2 for an example heijunka box loaded in an ABACABACAB sequence (a.k.a. heijunka cycle).

Figure 2. Heijunka box reflecting 20 minute intervals

Remember, life is messy…and sometimes the math is too. The lean practitioner often needs to use his or her judgment about whether to round and how to round (up or down). Unfortunately, the math rarely comes out perfect (as it magically does in most lean books). When addressing things like pitch intervals, know that rounding has practical implications. For example, rounding up the number of pitch intervals may require shortening the pitch (remember, Ip x P should closely approximate Ta) equally across all intervals or across some of them. In the example above, by rounding up to 21 intervals, we are artificially speeding up takt time by two seconds per unit, and thus shortening our pitch by 20 seconds. The cumulative effect is that the last pitch interval of the day actually finishes up 5 minutes early. The lean practitioner has some options here: 1) don't sweat it and do nothing about the 5 minutes, 2) use a 21 minute pitch after every third interval, or 3) tinker with something else. As you may discern from Figure 2, we think option one is fine. Know that the road to figuring out the best option often requires a good bit of PDCA.
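A minimal sketch of the rounding math, assuming an available time of 415 minutes per day (the original Table 1 is an image, so Ta here is inferred to be consistent with the "5 minutes early" result):

```python
import math

# Assumed values consistent with the example: three products share
# a 20-minute pitch; Ta is an assumption (Table 1 not reproduced).
available_time_min = 415   # Ta
pitch_min = 20             # P

ip_exact = available_time_min / pitch_min   # 20.75 intervals
ip = math.ceil(ip_exact)                    # round up -> 21 intervals

# Running 21 full 20-minute intervals would need 420 minutes, 5 more
# than are available -- hence the last interval finishing early once
# the pitch is effectively shortened.
shortfall_min = ip * pitch_min - available_time_min

print(f"Ip = {ip} intervals (exact {ip_exact}); shortfall {shortfall_min} min")
```

Whether to absorb that shortfall evenly, lengthen a few intervals, or ignore it is exactly the judgment call discussed above.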

Related posts: Available Time, Heijunka Cycle

The post Pitch Interval for Same Pitch Products appeared first on Lean Math.
