
Lecture-05
Law of Manufacturing-I

(Refer Slide Time: 00:15)

Welcome to session number 5. In the previous session, we started our discussion on the laws of
manufacturing, and we will continue with that discussion. If you recall, we began talking about
matching demand and supply in a variable and uncertain environment. We gave the example of
a dam to show how to match supply and demand: we need some kind of buffers, and those
buffers become levers to match demand and supply.

I gave the example of a control structure like a dam because the rainfall supply is highly uncertain.
It comes for a very brief period of the year, but demand is more or less constant. You
match demand and supply by putting in that control structure as a buffer. The same logic
is applicable in a manufacturing or services setting. We normally have three kinds of
buffers: the inventory buffer, the capacity buffer, and the time buffer.

Let me explain. Think of the check-in counters at an airport. You have seen check-in
counters: there would be at least 3 or 4 of them, and in front of those counters you can see
a long queue. Now, when I say some kind of buffer, the ideal scenario would be
that when a passenger arrives, a counter is immediately available.

Usually, that does not happen, because passenger arrivals are highly random, and the service
time at those counters is also random. With the combination of these randomnesses, you will
see either unused capacity, which means that for some period not even a single customer arrives.

You see those counters empty, or a lot of passengers arrive at the same time, which means
you see long queues. So when I say capacity, it means you hold some kind of capacity
buffer that remains unused or unutilized for some time, but that improves your service levels:
as soon as a customer or a passenger comes in, there is capacity available to serve
that passenger.

But that is expensive; remember that capacity is not free. In this case, the machine, the server,
or the check-in counter is waiting. When there is a queue, the opposite holds: assume you have only
one check-in counter and there is a long queue in front of it. In that case, demand is waiting. In
a typical manufacturing process, there is a similar possibility: something is getting assembled at a
particular point.

The work-in-process is waiting to get processed at that machine. In the airport example I gave,
the inventory issue does not come up, so either the customer waits or the capacity
waits. These are examples of buffers used to match supply and demand in an environment
that is variable and uncertain.
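The trade-off above can be shown with a small simulation. The sketch below is illustrative rather than anything from the lecture: the arrival rate, service rate, and exponential timing are my own assumptions. With random arrivals and random service times, a single check-in counter accumulates both passenger waiting time (the time buffer) and counter idle time (the unused capacity buffer); randomness makes it impossible to drive both to zero.

```python
import random

random.seed(42)

def simulate_counter(arrival_rate, service_rate, n_passengers):
    """Simulate one check-in counter with random (exponential)
    inter-arrival and service times. Returns total passenger waiting
    time (time buffer) and total counter idle time (capacity buffer)."""
    t_arrival = 0.0        # clock time of the next passenger arrival
    counter_free_at = 0.0  # clock time when the counter next becomes free
    total_wait = 0.0
    total_idle = 0.0
    for _ in range(n_passengers):
        t_arrival += random.expovariate(arrival_rate)
        if t_arrival >= counter_free_at:
            # capacity was waiting: the counter sat idle until this arrival
            total_idle += t_arrival - counter_free_at
            start = t_arrival
        else:
            # demand was waiting: the passenger queues until the counter frees up
            total_wait += counter_free_at - t_arrival
            start = counter_free_at
        counter_free_at = start + random.expovariate(service_rate)
    return total_wait, total_idle

wait, idle = simulate_counter(arrival_rate=0.8, service_rate=1.0,
                              n_passengers=10_000)
print(f"total passenger waiting time: {wait:.0f}")
print(f"total counter idle time:      {idle:.0f}")
```

Both totals come out strictly positive: even with average capacity exceeding average demand, variability alone creates both queues and idle servers.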

In this setting, what we have to see is how technology can allow us to reduce the impact of the
environment in matching demand and supply. The question that comes in is: can machine
intelligence do that? I will use AI and machine intelligence more or less synonymously.
Can machine intelligence allow me to understand the environment better, so that I
can match demand and supply at a cheaper cost?

So, maybe the role of buffers becomes limited. That could take multiple forms. Machine
intelligence may allow me to make a better forecast of demand, or to see whether any
constraints or disruptions are coming in the supply chain. All of these possibilities mean that I
can understand the environment better, and that allows me to match demand and
supply at a cheaper cost. (Refer Slide Time: 05:58)

With this narrative, I will continue: supply chains are becoming more and more complex.
Post-2000, we have seen the emergence of global supply chains, as the
transportation and logistics costs have come down. You see more trade with more distant
players, and this is the whole idea of looking for cheap suppliers.

The cost arbitrage comes in, and one of the objectives was always to make things faster, cheaper,
better. So, for cheaper: instead of making it here, let me outsource it to countries like China
or even India. You look for cheaper suppliers, and we have already seen the benefits that come
along with outsourcing in terms of economies of scale and the experience curve. But the lead times
have gone up.

The complexity of managing supply chains has also gone up. We are still not talking
about mass personalization, but if I think of variety, even for toothpaste
you may have multiple variants available. The complexity of the product is
increasing because, in fact, you want to personalize the product. Think of a typical
automotive example, where the technology content keeps increasing. You see more and more
digitization; now you may be talking about autonomous vehicles, so you may have a laser camera
that can scan the environment. More complexity means the number of components keeps
increasing. Moreover, it is not just about far distances, variety, or complexity. Supply chains
are also becoming longer and longer, and not just in terms of the
distances.

In 2011, when the tsunami hit Japan, companies like Intel tried to map the risk in
their supply chains. They could not go beyond tier 2 or tier 3, because after that the supply chain
was highly opaque. Can technology allow us, even if the complexity remains, to
mitigate the risk that comes along with that complexity?
(Refer Slide Time: 08:38)
(Refer Slide Time: 08:38)

I am going to spend some time on IoT when we talk about technologies of the future. Can a
technology like the sensor-enabled Internet of Things actually map what is happening in the supply
chain, and get that information into a centralized form, maybe a centralized information
warehouse where all of it gets collated? Then I can do some kind of computation and find
out what is going to happen.

All these possibilities come along with technology. Moreover, I talked about moving from pipelines to
platforms. Instead of having dedicated suppliers, is it possible to think of
supply chains in the form of platforms? I can give you the example of companies like Ariba. They
have become a procurement platform.
(Refer Slide Time: 09:54)

Can the technology be such that, instead of even thinking about an organization (I think I mentioned
this), I just ask whether I need to make this product today? I may
not have any supplier; can I just get it on demand? Is that possible, and can technology be
the facilitator in doing that? I am talking at a very macro level, but this is something that has
already been discussed as part of the future.

We have talked about economies of scale. Now we are talking about economies of
unscale: your organization is unscaled to such an extent that everything you want is
available on demand, and platforms will certainly play that role. When I talk about Ariba, I am
looking at something similar; in fact, in India we have the Government e-Marketplace.

These are nothing but procurement platforms, which in the future may be true for the manufacturing
setting also: I do not need to own anything; I have a manufacturing platform where whatever I want
is available. This is something we are going to discuss as part of this course, along with
the technology that enables it. Now, if you recall, we were always
looking at faster, cheaper, better, and diverse.

I think we are now emphasizing more how you cope with that complexity, because the
products are becoming more and more complex. I still have those 4 objectives in mind, but with
increasing complexity, how can you still achieve those 4 objectives? Let me add one more, which
is becoming more and more significant these days: the notion of sustainability.

It should not be confined to saying that I am making the product faster, cheaper, better, and diverse. It
should also be sustainable in the environmental sense. These days there is a lot of discussion on
that: you manufacture the product in such a manner that it is easily recyclable, or you can
reuse it in some form. You can think of the computers we are using.

Is it possible to remanufacture, recycle, or reuse them? All those possibilities should
be part of your manufacturing thinking. It is not just about faster, cheaper, better, and diverse; let
it also be sustainable.
(Refer Slide Time: 13:10)

With this, I will start going slightly into the math side. Till this point, our discussion was mainly
in terms of the wisdom of manufacturing. We have seen economies of scale; we may be talking
about economies of unscale. Now the point is: how do you quantify variation? Some of you have
seen probability theory, so I am talking about a probability density function f(x), where μ is
the mean value of that distribution.

The mean could be anywhere; I have just put μ somewhere arbitrarily. Now what we want is to
quantify the variation. The typical formula for the variance,

σ² = ∫ (x − μ)² f(x) dx,

says that, given the mean, it measures how the mass is distributed with respect to that mean. If
you have seen the center of gravity or the moment of inertia, they do the same thing for a rigid body.

I can say that the mean is nothing but the center of gravity of a given density function. The
variance tells how the mass is distributed with respect to that center of gravity, and you square
the deviation so that deviations on either side do not cancel. This is just one measure, a
measure of dispersion, and it tells me how the mass is distributed.
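As a small illustration, the variance formula can be computed directly for a discrete distribution. The values and probabilities below are made up for the example; they are not from the lecture. The mean is the center of gravity of the probability mass, and the variance is the squared dispersion about it.

```python
# A made-up discrete distribution: possible values and their probabilities.
values = [2.0, 4.0, 6.0, 8.0]
probs  = [0.1, 0.4, 0.4, 0.1]

# Mean: the "center of gravity" of the probability mass.
mu = sum(p * x for x, p in zip(values, probs))

# Variance: mass-weighted squared deviation about that center of gravity.
var = sum(p * (x - mu) ** 2 for x, p in zip(values, probs))

print(round(mu, 2), round(var, 2))  # → 5.0 2.6
```

Squaring the deviations (x − μ) is what keeps the left-side and right-side deviations from cancelling each other out.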
(Refer Slide Time: 15:23)

I do not want to just directly use the formula. The point is: what wisdom does it give me
to cope with this variability and uncertainty? I may again talk about some
laws of manufacturing, or maybe some laws of nature. Let me talk about the simple mean. Suppose
I give you some numbers, sampled from some density function, say a normal distribution.

x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ

Where,

x̄ : sample mean
n : number of points that are sampled from the distribution
xᵢ : points that are sampled from the distribution

In this case, we take the points from a normal density function with a given mean and variance:

xᵢ ~ N(μ, σ²)

Where,

xᵢ : points that are sampled from a normal distribution
N : normal density function
μ : population mean
σ² : variance of the population distribution

I sample some numbers from the distribution and I compute the mean; I call it the sample mean.

Because it is a function of which numbers we sample, x̄ itself will not be a constant: I am just
sampling some n numbers.

If I do one more round of sampling, I will get a different x̄. Keep on sampling, and you
will keep on getting different values of x̄. So x̄ itself has a distribution.

Now, since xᵢ ~ N(μ, σ²), that is, the points have been sampled from a normal distribution, the
distribution of x̄, the sample mean, will also be normal.

I can say this because I have taken the xᵢ as normal. Some of you have seen the CLT, the central
limit theorem: even if the xᵢ come from some arbitrary distribution, x̄ would still be
approximately normal, but that is only true for large n. That is the concept from the central limit
theorem; for the time being, I am just taking the xᵢ as normally distributed.

x̄ will also be normally distributed, with the same mean as that of the xᵢ, but with a lower
variance.

x̄ ~ N(μ, σ²/n)

Where,

x̄ : sample mean
N : normal density function
μ : population mean
σ²/n : variance of the sampling distribution of the sample mean
n : number of points that are sampled from the distribution

If n goes to infinity, you can easily see that x̄ would not have any variation at all, and this
wisdom is in fact going to play a very key role when we talk about matching demand and
supply, because it gives us some kind of a law. I can call it a law of nature: if I
aggregate, I am actually reducing the variability, and you can see that. x̄ is nothing but
some kind of aggregation, and aggregation is reducing the variability. This becomes a
significant piece of wisdom for matching demand and supply.
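This law of nature can be checked with a quick simulation. The parameters below (μ = 100, σ = 20, the sample sizes, and the trial count) are my own choices for illustration: the empirical spread of the sample mean x̄ shrinks roughly as σ/√n as we aggregate more points.

```python
import random
import statistics

random.seed(0)

def sample_mean_sd(mu, sigma, n, trials=5000):
    """Empirical standard deviation of the sample mean x-bar of n points
    drawn from N(mu, sigma^2). Theory predicts sigma / sqrt(n)."""
    means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

for n in (1, 4, 16, 64):
    print(n, round(sample_mean_sd(mu=100, sigma=20, n=n), 2))
# the printed spread shrinks roughly as 20 / sqrt(n): ~20, ~10, ~5, ~2.5
```

Aggregating 16 demand points instead of 1 cuts the variability of the average by roughly a factor of 4, which is exactly the lever used when pooling demand across products or locations.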

The question is: is it possible for us to reduce demand variability? Demand is more or less
exogenous; I cannot change too much on the demand side. Demand comes with its own
variation, but is it possible to use some strategies that actually reduce
the variability of demand?

We will discuss some of these ideas as part of the matching-demand-and-supply strategy, and
the same thing can be used in the manufacturing context. The wisdom should be there that
aggregation reduces variability. It comes with a caveat; for the time being, I am ignoring
that caveat and just asking you to believe what it says: aggregation reduces variability.
(Refer Slide Time: 19:56)

From quantifying variation, we come to process parameters. We have already talked about the
job shop as a manufacturing process, and about the continuous, assembly, and batch processes.
All of these are processes. A typical manufacturing process takes some input and gives you
some output.

A manufacturing process can have multiple sub-processes; I can even call a factory
itself a process with multiple sub-processes. Now, there are some process
parameters. One comes in the form of inventory, which I represent as "I"; I am talking about
the inventory within a process, which I call work-in-process.

There is the throughput R, the rate at which output is made, and there is the cycle time T, which is
nothing but the clock time taken in the process, the time from input to output.

These are the process parameters, and there is some relationship among these 3: between
the work-in-process, the rate at which output is made, and the clock time taken in the process.
(Refer Slide Time: 21:43)

If this is clear, let me start thinking about some laws. The first law, which some of
you might have seen in Queueing theory settings, is called Little's law.

I = R × T

Where,

I : minimum work-in-process (WIP) inventory
R : throughput
T : cycle time

You can see that the original paper was published in 1961, so the notation there is different:

L = λ × W

Where,

L : minimum work-in-process (WIP) inventory
λ : throughput
W : cycle time

The notation may be different, but the idea is that the inventory equals the throughput times
the cycle time. You can say this is the minimum WIP you actually need, for a given cycle time,
to maintain a throughput rate of R. I will explain what that means. Now, on the slide, there are
3 settings given.

Let this be an automotive manufacturing process, and these are the work-in-process
levels. Think of an example where, from input to output, there are 10 different sub-processes,
and let us not bring in any variability. In fact, if you read more about Little's law, it is
independent of the distribution; the process times and everything have been
averaged out.

In fact, I can state it as: the minimum average WIP is equal to the average cycle time multiplied by
the average throughput. Now, say there are 10 sub-processes, and each takes
10 minutes, so from input to output the total cycle time is 100 minutes. Let us assume that
this particular process produces 240 units per day, and that there is only an 8-hour shift.

8 hours = 8 × 60 = 480 minutes

So,

R = 240/480 (units/minute) = 0.5 units/minute

It means that in this case R = 0.5 and T = 100, so the minimum inventory, the WIP I need in
this case, is

I = R × T = 0.5 × 100 = 50 units

You can see that. Now, in the first case on the slide, the inventory is more than 50, so it
is more than what is actually needed. If the cycle time remains 100
minutes, and I want a throughput of 0.5, I only need 50 WIP.

In the last case it is less than 50, so the process may starve because you do not have ample
inventory. If you want to maintain the throughput of 0.5, you should have at least 50. Your
system can work with more than 50 also, but that is not needed, because then you have more
inventory than what is actually required. Now you can take this as a wisdom: what happens if
I reduce the cycle time by some means?

If you recall, when we moved from mass production to mass customization, we talked about
reducing the setup time from one day to a few minutes. If we reduce the
setup time to such an extent, the cycle time also reduces. In fact, Ford's idea of not
having variety was precisely because he did not want changeovers: the setup time would
increase the cycle time, and that would certainly impact the overall system.

In this case, if by some means I reduce the cycle time, the inventory also gets
reduced for the same throughput. So if, instead of 100, I make the cycle time 50 minutes by
some means, then the minimum WIP needed is

I = R × T = 0.5 × 50 = 25 units

Remember that this law was not actually invented by Little; he only proved it. It
is a folk law.
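Little's law is easy to verify numerically. Here is a minimal sketch of the lecture's worked example; the function name is my own.

```python
def min_wip(throughput_per_min, cycle_time_min):
    """Little's law, I = R * T: the minimum WIP inventory needed to
    sustain throughput R when the cycle time is T."""
    return throughput_per_min * cycle_time_min

# Worked example from the lecture: 240 units over an 8-hour (480-minute)
# shift gives R = 0.5 units/minute.
R = 240 / 480

print(min_wip(R, 100))  # T = 100 min -> 50.0 units of WIP
print(min_wip(R, 50))   # T =  50 min -> 25.0 units of WIP
```

Halving the cycle time halves the WIP needed for the same throughput, which is exactly the argument for cutting setup and changeover times.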

So, no one can claim that this law was first given by any one person; the law existed even before
the paper by J. D. C. Little. He only proved it, in a paper published in 1961 in
Operations Research. But the wisdom should be very clear: it tells us the
minimum WIP needed to sustain a particular throughput for a given cycle time, and if by some
means, maybe technology or anything else, I reduce the cycle time, the amount of
inventory or WIP also comes down. This is important, and it forms one
of the major laws of a manufacturing process, or we can just call it one of the major laws
of manufacturing. Thank you.