Supervised by: Chinese Academy of Sciences
Sponsored by: Chinese Society of Optimization, Overall Planning and Economical Mathematics
   Institutes of Science and Development, Chinese Academy of Sciences

Chinese Journal of Management Science ›› 2016, Vol. 24 ›› Issue (6): 61-69. doi: 10.16381/j.cnki.issn1003-207x.2016.06.008


Research on Optimal Production-Inventory Control Policy for Erlang Assemble-To-Order System

LI Zhi1, TAN De-qing2   

  1. School of Management, Tianjin Polytechnic University, Tianjin 300387, China;
    2. School of Economics & Management, Southwest JiaoTong University, Chengdu 610031, China
  Received: 2014-08-19  Revised: 2015-06-24  Online: 2016-06-20  Published: 2016-07-05

Abstract: In today's business environment, with increasing competition in the global market, many manufacturing companies adopt a hybrid operations strategy to deal with a variety of market conditions. The assemble-to-order (ATO) system has emerged and become increasingly popular. Most of the literature focuses on ATO systems with only one type of demand, namely product demand. In real markets, however, individual component demand also exists. For example, in the computer market, users need not only complete PCs but also individual components such as a monitor, keyboard, or other hardware to repair or upgrade their computers. That is the background of this paper. Here, an ATO system with two types of demand is considered: product demand and individual component demand, both of which are lost sales. The system produces multiple components for a single end product. Production times follow an Erlang distribution, and demands arrive continuously over time according to independent Poisson processes. The objective is to find the optimal control policy and to characterize the influence of the production stage on production and inventory-allocation decisions. The problem is formulated as a Markov decision process (MDP) and solved by stochastic dynamic programming. A numerical study is then carried out to identify the optimal policy, with Matlab used to compute the system's average cost per period. To obtain more general results, a three-component, four-demand ATO system is chosen as the example. The behavior of the optimal production and allocation policies is studied for a variety of cases, each with a different combination of system parameters. For instance, the lost-sales cost rates satisfy c0 > c1 + c2 + c3, and the loading rates must satisfy μk > λ0 + λk for k = 1, 2, 3. The holding cost rate of each component should be less than the lost-sales cost rates, that is, hk < ck and hk < c0 for k = 1, 2, 3. Dynamic programming theory, optimal control theory and numerical methods are used to establish the existence of the optimal control policy and to compute the optimal value. The following results are obtained: the optimal policy of the Erlang-production-time system can be characterized by two thresholds for each component, a production base-stock level and an inventory rationing level. For any component, both the base-stock level and the rationing level are non-increasing in the production stage. Moreover, the production stage has a significant influence on the components' base-stock levels and on the average cost. The impact of different production stages and system parameters on the system's average cost is also investigated. In this study, a decision model that corresponds more closely to practical ATO systems is built, the underlying theory is established, and numerical validation is carried out. To our knowledge, this is the first study of the optimal policy for an Erlang-production-time ATO system with both individual component demand and end-product demand. The results for the Erlang-distribution system are useful for further research, such as ATO systems with batch production and batch demands. In addition, since the Erlang production-time distribution is closer to real production times in manufacturing systems, this work provides a new perspective and a new method for studying ATO systems in a more general setting.
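To make the modelling approach in the abstract concrete, the sketch below sets up a heavily simplified, single-component variant of the system and solves it as an average-cost MDP via uniformization and relative value iteration. It is written in Python for illustration (the paper's numerical study uses Matlab), and all parameter values (K, mu, lam0, lam1, h, c0, c1, N) are hypothetical assumptions, not figures from the paper; the full model has three components and product demand that consumes one unit of each. The sketch only illustrates the structural features the abstract describes: Erlang production stages, lost sales of two demand classes, rationing of component demand, and stage-dependent produce/idle thresholds.

```python
import numpy as np

# Illustrative single-component sketch: Erlang(K, mu) production, Poisson product
# demand (rate lam0, higher priority) and Poisson component demand (rate lam1,
# may be rationed), both lost sales.  All numbers are assumptions for this sketch.
K    = 3       # number of Erlang production stages
mu   = 3.0     # completion rate of each stage
lam0 = 1.0     # product-demand rate
lam1 = 0.5     # individual component-demand rate
h    = 1.0     # holding cost rate per unit per unit time
c0   = 20.0    # lost-sales cost per product demand
c1   = 6.0     # lost-sales cost per component demand
N    = 20      # inventory truncation level for the numerical grid

Lam = mu + lam0 + lam1          # uniformization rate

# Relative value function indexed by (on-hand inventory n, completed stages s).
V = np.zeros((N + 1, K))
ref = (0, 0)                    # reference state for relative value iteration

def bellman(V):
    """One application of the uniformized Bellman operator."""
    W = np.empty_like(V)
    for n in range(N + 1):
        for s in range(K):
            hold = h * n / Lam                      # expected holding cost per period
            # Production decision: idle (self-loop) vs. work on the next stage;
            # finishing the last stage moves one unit into stock.
            advance = V[min(n + 1, N), 0] if s == K - 1 else V[n, s + 1]
            prod = (mu / Lam) * min(V[n, s], advance)
            # Product demand: served whenever stock is on hand, otherwise lost.
            d0 = (lam0 / Lam) * (V[n - 1, s] if n > 0 else c0 + V[n, s])
            # Component demand: rationing decision, serve from stock or reject.
            serve = V[n - 1, s] if n > 0 else np.inf
            d1 = (lam1 / Lam) * min(serve, c1 + V[n, s])
            W[n, s] = hold + prod + d0 + d1
    return W

g = 0.0
for _ in range(50000):
    W = bellman(V)
    g = W[ref]                  # average cost per uniformized period
    V_new = W - g
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

print("approximate long-run average cost per unit time:", g * Lam)

# Read off stage-dependent produce/idle thresholds from the converged values:
# produce at (n, s) whenever advancing the stage is strictly cheaper than idling.
for s in range(K):
    produce_at = [n for n in range(N)
                  if (V[min(n + 1, N), 0] if s == K - 1 else V[n, s + 1]) < V[n, s]]
    print(f"stage {s}: produce while inventory <= {max(produce_at) if produce_at else -1}")
```

Under such a parameter choice, the recovered produce/idle thresholds behave like the stage-dependent base-stock levels described in the abstract, and the serve-or-reject comparison for component demand plays the role of the inventory rationing level; the paper's three-component model and its proofs of the threshold structure are, of course, more involved than this sketch.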

Key words: assemble-to-order (ATO), optimal control, Markov decision processes (MDP)
