Supervised by: Chinese Academy of Sciences
Sponsored by: Chinese Society of Optimization, Overall Planning and Economic Mathematics
   Institutes of Science and Development, Chinese Academy of Sciences

Table of Contents

    20 October 2015, Volume 23 Issue 10
    Articles
    Nelson-Siegel Dynamic Term Structure Model with Double Slope Factors and Its Applications
    SHEN Gen-xiang, CHEN Ying-zhou
    2015, 23 (10):  1-10.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.001
    Abstract ( 2619 )   PDF (1949KB) ( 2132 )
    A four-factor dynamic term structure model is built in this paper to substantially improve on the widely used Nelson-Siegel model and the three-factor dynamic Nelson-Siegel model. An additional slope factor is added to the three-factor dynamic model to make it more flexible in fitting and forecasting the short end of the yield curve. The model nests the three-factor dynamic Nelson-Siegel model as a special case, so the two models can be compared directly by a likelihood ratio test. The model is formulated in state-space form and Kalman filtering is employed to construct the likelihood. The empirical study is conducted using data from the China interbank bond market, and the results show that the double-slope-factor model captures the dynamics of the short end of the yield curve more accurately and therefore fits and forecasts better than the three-factor dynamic Nelson-Siegel model. The likelihood ratio test justifies the need for the additional slope factor. The model extends the term structure literature and can be used in other empirical studies.
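As a rough illustration of the four-factor structure, the sketch below builds the loading matrix of a Nelson-Siegel curve with two slope factors, assuming (as one plausible reading) that the second slope carries its own decay rate lambda2; the paper's exact specification, decay values, and state dynamics are not given here, and all numbers are hypothetical.

```python
import numpy as np

def ns_double_slope_loadings(tau, lam1=0.6, lam2=3.0):
    """Loadings for [level, slope1, slope2, curvature] at maturities tau (years)."""
    tau = np.asarray(tau, dtype=float)
    slope1 = (1 - np.exp(-lam1 * tau)) / (lam1 * tau)
    slope2 = (1 - np.exp(-lam2 * tau)) / (lam2 * tau)  # second, faster-decaying slope
    curv = slope1 - np.exp(-lam1 * tau)
    level = np.ones_like(tau)
    return np.column_stack([level, slope1, slope2, curv])

maturities = np.array([0.25, 0.5, 1, 2, 5, 10])
B = ns_double_slope_loadings(maturities)
factors = np.array([0.035, -0.01, -0.005, 0.002])  # hypothetical [L, S1, S2, C]
print(B @ factors)  # fitted yields at each maturity
```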
    Modeling Dynamic Financial Higher Moments: A Comparison Study Based on Generalized-t Distribution and Gram-Charlier Expansion
    HUANG Zhuo, LI Chao
    2015, 23 (10):  11-18.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.002
    Abstract ( 1948 )   PDF (1290KB) ( 1344 )
    Dynamic higher moments are a stylized feature of financial returns. The empirical performance of the popular Generalized-t distribution (GT) and the Gram-Charlier series expansion of the Gaussian density (GCE) is compared under a GJR-GARCH framework, in terms of their capacity to fit time-varying higher moments and to forecast Value-at-Risk. Using the daily returns of the S&P 500 index in the U.S. and the CSI 300 index in China, it is shown that both return series exhibit time variation and persistence in conditional higher moments, with skewness persistence parameters as high as 0.9. According to various statistical criteria, both the GT and GCE distributions perform well empirically. GT models slightly outperform GCE models in fitting the return distribution and forecasting extreme out-of-sample Value-at-Risk, despite some modeling advantages of the GCE.
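A minimal sketch of the GCE density mentioned above: the Gaussian density is expanded with third- and fourth-order Hermite polynomials weighted by skewness and excess kurtosis. The GJR-GARCH dynamics are omitted and the parameter values are hypothetical; note the well-known drawback that the expansion can turn negative in the tails.

```python
import numpy as np
from scipy.stats import norm

def gram_charlier_pdf(z, skew, exkurt):
    """Gram-Charlier expansion of the standard normal density."""
    h3 = z**3 - 3 * z            # Hermite polynomial H3
    h4 = z**4 - 6 * z**2 + 3     # Hermite polynomial H4
    return norm.pdf(z) * (1 + skew / 6 * h3 + exkurt / 24 * h4)

z = np.linspace(-4, 4, 9)
print(gram_charlier_pdf(z, skew=-0.5, exkurt=1.0))
```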
    Comparing Estimators of the High-Frequency Volatility Matrix in the Presence of Non-synchronous Trading and Market Microstructure Noise
    ZHAO Shu-ran, JIANG Ya-ping, REN Pen-min
    2015, 23 (10):  19-29.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.003
    Abstract ( 2011 )   PDF (1800KB) ( 2290 )
    High-frequency volatility matrix estimators can effectively solve some bottleneck problems faced by traditional low-frequency estimators. However, because of non-synchronous trading and market microstructure noise, the naive estimator suffers from the Epps effect and substantial bias. Three synchronization methods for non-synchronous tick-by-tick high-frequency data and five noise-reduction methods for the traditional realized volatility matrix are therefore considered in this paper, and the two families of methods are compared in depth through both simulation and empirical analysis. The results suggest that the refresh time method retains the largest amount of data among the synchronization methods considered; the realized volatility matrix exhibits the Epps effect and serious bias; the multivariate realized kernel, the two-scales realized volatility matrix estimator, and the modulated realized volatility matrix estimator reduce noise effectively; while the pre-averaging HY and HY estimators perform somewhat poorly. These results provide a useful reference and methodological guidance for further research and applications in related fields.
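The refresh-time scheme the abstract singles out can be sketched in a few lines: a refresh point is recorded each time every asset has traded at least once since the previous point, and the last observed price of each asset at that instant would then be used. The timestamps below are made up.

```python
import numpy as np

def refresh_times(times_list):
    """Refresh-time synchronisation for sorted arrays of trade timestamps."""
    idx = [0] * len(times_list)
    out = []
    while all(i < len(t) for i, t in zip(idx, times_list)):
        rt = max(t[i] for i, t in zip(idx, times_list))  # all assets have traded by rt
        out.append(rt)
        # advance each asset strictly past the current refresh time
        idx = [np.searchsorted(t, rt, side="right") for t in times_list]
    return np.array(out)

a = np.array([0.9, 2.1, 3.0, 5.6])
b = np.array([1.2, 1.9, 4.4, 5.0])
print(refresh_times([a, b]))  # -> [1.2 2.1 4.4 5.6]
```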
    An Analysis of the Systemic Importance and Systemic Risk Contagion Mechanism of China's Financial Institutions Based on Network Analysis
    OUYANG Hong-bing, LIU Xiao-dong
    2015, 23 (10):  30-37.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.004
    Abstract ( 2237 )   PDF (1178KB) ( 3302 )
    Applying the Minimum Spanning Tree (MST) and Planar Maximally Filtered Graph approaches to construct and analyze financial market networks makes it possible to dynamically identify the systemic importance of the nodes in the network, and the uniqueness of the MST allows the contagion mechanism of systemic risk to be analyzed completely and straightforwardly. Using data from the inter-bank borrowing market, the effectiveness and robustness of this method are demonstrated. In particular, the MST provides an intuitive and effective way to identify potential contagion paths of systemic risk and supports the macro-prudential regulation of systemic risk.
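A minimal sketch of the MST construction on simulated return data, using Mantegna's correlation distance d = sqrt(2(1 - rho)); node degree in the tree serves as a crude proxy for systemic importance. The data are randomly generated, not the inter-bank borrowing data used in the paper.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 5))      # 250 days x 5 hypothetical institutions
corr = np.corrcoef(returns.T)
dist = np.sqrt(2 * (1 - corr))               # Mantegna's correlation distance
mst = minimum_spanning_tree(dist)            # sparse matrix holding the N-1 tree edges
rows, cols = mst.nonzero()
degree = np.bincount(np.concatenate([rows, cols]), minlength=corr.shape[0])
print(list(zip(rows, cols)))                 # tree edges = potential contagion paths
print(degree)                                # high-degree nodes = central institutions
```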
    Study of Change Points in Shanghai Composite Index Based on Least Absolute Deviation Criterion
    ZHOU Ying-hui, NI Zhong-xin, ZHU Ping-fang
    2015, 23 (10):  38-46.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.005
    Abstract ( 2079 )   PDF (1185KB) ( 1752 )
    The series of the Shanghai Composite Index not only has structural changes but is also heavy-tailed. For example, the kurtosis of the log returns of its daily closing prices from 2006/1/4 to 2011/12/31 is 5.61, much greater than that of the normal distribution, so the log returns are clearly heavy-tailed. Moreover, the stock market in this period switched from a big bull market to a big bear market, which produces many structural changes. However, most existing studies do not consider the influence of the heavy-tailed feature on the estimation of change points. To solve this problem, an approach is proposed in this paper to estimate change points in heavy-tailed data based on the least absolute deviation (LAD) criterion, which is more robust than the least squares criterion and fits heavy-tailed data well. Simulation studies show that the estimates of the number and locations of change points based on the LAD criterion are more accurate than those based on the least squares criterion when the simulated data are heavy-tailed, indicating that the former is more efficient than the latter. The log returns of the daily closing prices of the Shanghai Composite Index from 2006/1/4 to 2011/12/31 are then used for an empirical study. The empirical results indicate that the change points estimated by the LAD criterion differ from those estimated by the least squares criterion, and that the former describes the structural changes of the Chinese stock market well. Hence, both the simulation studies and the empirical analysis show that it is necessary to account for the heavy-tailed feature when estimating structural changes in heavy-tailed data.
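For intuition, the sketch below estimates a single change point under the LAD criterion (segment cost = sum of absolute deviations from the segment median) on simulated heavy-tailed t-distributed data with a level shift. The paper also estimates the number of change points, which this one-change sketch does not attempt.

```python
import numpy as np

def lad_cost(x):
    """LAD segment cost: absolute deviations from the segment median."""
    return np.abs(x - np.median(x)).sum()

def best_split_one_change(x):
    """Locate a single change point by minimising total LAD cost."""
    n = len(x)
    costs = [lad_cost(x[:k]) + lad_cost(x[k:]) for k in range(2, n - 1)]
    k = int(np.argmin(costs)) + 2
    return k, costs[k - 2]

rng = np.random.default_rng(1)
# heavy-tailed (t with 3 df) data with a mean shift at t = 120
x = np.concatenate([rng.standard_t(3, 120), rng.standard_t(3, 80) + 2.0])
print(best_split_one_change(x))  # estimated location should be near 120
```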
    A Study on High-Frequency Futures Trading Strategy Based on Variable Selection and Genetic Network Programming
    CHEN Yan, WANG Xuan-cheng
    2015, 23 (10):  47-56.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.006
    Abstract ( 2127 )   PDF (2288KB) ( 3752 )
    In this paper, a high-frequency futures trading strategy is built with the LASSO variable selection method and genetic network programming (GNP). The strategy uses LASSO as a variable selection method, which is able to select the most effective variables from a large number of technical indicators. The selected indicators are then treated as the judgment functions in GNP to determine the buying and selling points. Five-minute high-frequency trading data of gold, aluminum, and rubber futures are used for backtesting. The results show that: first, compared with the optimal subset method, LASSO selects the smallest number of indicators while showing almost the same prediction accuracy and better robustness; the selected indicators mainly describe trend and oscillation. Second, GNP achieves higher search efficiency and builds a simple and effective trading strategy when combined with Q-learning. The proposed method outperforms the buy-and-hold strategy on different futures contracts and obtains excess returns, which shows its practical value in quantitative trading.
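A minimal sketch of the LASSO selection step on synthetic data: the L1 penalty zeroes out most coefficients of a large indicator set, and the surviving columns would feed GNP's judgment functions. Indicator construction, the GNP evolution, and the Q-learning layer are omitted; all data and the penalty level are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 30))   # 30 hypothetical technical indicators
# next-period return actually driven by indicators 0 and 3 only
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(500)

model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)  # indicators surviving the L1 penalty
print(selected)
```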
    A Game Analysis of Supply Chain Cooperative R&D and Government Behavior under a Low-Carbon Background
    ZHANG Han-jiang, ZHANG Jia-yu, LAI Ming-yong
    2015, 23 (10):  57-66.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.007
    Abstract ( 1988 )   PDF (1203KB) ( 2037 )
    Against a low-carbon background, the optimal emission reductions for a vertical supply chain with a government-levied carbon tax are studied. Three models are considered: no emission-reduction R&D, emission-reduction R&D by the manufacturer alone, and cooperative emission-reduction R&D. Backward induction is used to solve the problem: first the manufacturer decides the optimal emission reductions; then the supplier and the manufacturer set their product prices respectively and obtain their optimal profits. The equilibrium analysis shows that the optimal emission reductions under cooperative R&D exceed those under the manufacturer's R&D alone, and that the optimal emission reductions are negatively related to the R&D cost coefficient and positively related to the cost-sharing proportion. Finally, a numerical analysis shows that a reasonable carbon tax rate not only maximizes emission reductions but is also conducive to cooperative emission-reduction R&D in the supply chain.
    Two-stage Robust-Stochastic Decision Model for Relief Allocation Based on Disaster Scenario Information Update
    CHEN Tao, HUANG Jun, ZHU Jian-ming
    2015, 23 (10):  67-77.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.008
    Abstract ( 2532 )   PDF (1031KB) ( 1740 )
    With the deepening of emergency management research, the emergency decision-making process is characterized by the dynamics and uncertainty of disaster information and the complexity of the decision-making environment. At the same time, decisions over the whole emergency rescue need to be adjusted continually as scenario information is updated. Building on domestic and foreign research, a two-stage decision model for relief allocation with disaster scenario information updating is proposed in this paper. Starting from the characteristics of actual relief allocation, a division of the emergency rescue response into stages is put forward, combined with the way disaster information is updated. On this basis, a two-stage robust-stochastic optimization model is established so that adjustment decisions are free of aftereffects: the results of the first-stage allocation affect the second-stage decisions through the definition of a dummy repository and a dummy temporary distribution center. According to the characteristics of robust optimization and stochastic programming, corresponding primal-dual and L-shaped algorithms are designed to solve the model. Finally, the models are solved with CPLEX on numerical examples under generated scenarios, and comparison with other methods demonstrates the superiority of the proposed models under disaster scenario information updating.
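A toy version of the two-stage idea, written as the deterministic equivalent of a small scenario program: first-stage prepositioned stock trades off against expected second-stage shortages. The robust counterpart, the information-updating structure, and the primal-dual/L-shaped machinery of the paper are not reproduced; all costs and scenarios are invented.

```python
import numpy as np
from scipy.optimize import linprog

probs = np.array([0.3, 0.5, 0.2])        # scenario probabilities
demand = np.array([100., 150., 300.])    # scenario relief demand
c_pre, c_short = 1.0, 4.0                # prepositioning cost vs. shortage penalty

S = len(probs)
# decision variables: x (prepositioned stock), z_s (shortage in scenario s)
obj = np.concatenate([[c_pre], c_short * probs])
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0   # -x - z_s <= -d_s, i.e. x + z_s >= d_s
res = linprog(obj, A_ub=A_ub, b_ub=-demand, bounds=[(0, None)] * (1 + S))
print(res.x[0], res.fun)  # optimal first-stage stock (150 here), expected total cost
```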
    The Impact of Control on Performance of Enterprise Information Technology Project in the Risk Environment: The View from Project Managers and User Liaisons
    LIU Min, LIU Shan, ZHANG Jin-long
    2015, 23 (10):  78-87.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.009
    Abstract ( 1869 )   PDF (1155KB) ( 1464 )
    Although project management capability has improved continuously, the success rate of enterprise information systems projects remains low. A lack of role orientation and mutual understanding, and a lack of effective risk control mechanisms, are key factors that negatively influence project performance. To explore the joint effect of risk and control on information systems project performance from different perspectives, 65 project managers and 63 user liaisons from 128 projects are surveyed. Empirical analysis is conducted using a structural equation model, and the hypotheses are tested with hierarchical regression analysis. The results show that both formal and informal controls positively affect project performance; however, formal control is more effective for project managers, while informal control is more effective for user liaisons. In addition, organizational risk and technology risk suppress the effect of formal and informal controls on performance from the perspectives of both project managers and user liaisons. Therefore, the choice of control should be based not only on the characteristics of the project but also on the roles of stakeholders. Moreover, project performance depends not only on success or failure factors but also on their balance. The results provide a new conceptual framework and decision support for controlling risks in information systems projects.
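A minimal sketch of the kind of moderated (hierarchical) regression the hypotheses call for, on simulated survey scores: a negative control-by-risk interaction is one way the "suppression" effect described above would show up. Variable names, effect sizes, and the data-generating process are all hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 128
df = pd.DataFrame({
    "formal": rng.standard_normal(n),    # formal control score
    "informal": rng.standard_normal(n),  # informal control score
    "risk": rng.standard_normal(n),      # organizational/technology risk score
})
df["perf"] = (0.4 * df.formal + 0.3 * df.informal
              - 0.3 * df.formal * df.risk      # risk suppresses the control effect
              + rng.standard_normal(n))

# second step of a hierarchical regression: main effects plus interaction
m = smf.ols("perf ~ formal + informal + risk + formal:risk", data=df).fit()
print(m.params)  # the formal:risk coefficient should come out negative
```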
    Resource Integration Decisions for the Online Shopping Supply Chain Based on Service Capacity Equilibrium
    YAO Jian-ming
    2015, 23 (10):  88-97.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.010
    Abstract ( 1695 )   PDF (2131KB) ( 1650 )
    The key to improving the customer value and competitiveness of an online shopping enterprise is to provide satisfying individualized service, and success depends on the rational and effective integration of the supply chain resources running in the background to supply the required service capacity. How to integrate supply chain resources effectively under the different individualized service modes of online shopping, realize efficient utilization of those resources, and solve the special problems of online shopping is therefore an important question for the enterprise. Based on an analysis of the characteristics of supply chain resource integration in online shopping, and starting from the dynamic coordination and equilibrium between the supply and demand of service capacity, the dominant factors of resource integration are identified by analyzing the individualized service modes, and an integration decision optimization model together with an improved ant colony algorithm is put forward to solve the decision process. A case based on simulated data shows that the proposed resource integration decision optimization method is reasonable and feasible. An entry point for strategically harmonizing and balancing supply and demand capacity in online shopping is put forward, and the research ideas and methods provide an important reference for supply chain resource integration and supply chain scheduling optimization.
    Study on Vehicle Routing Problem and Tabu Search Algorithm under Low-carbon Environment
    LI Jin, FU Pei-hua, LI Xiu-lin, ZHANG Jiang-hua, ZHU Dao-li
    2015, 23 (10):  98-106.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.011
    Abstract ( 1724 )   PDF (1166KB) ( 1674 )
    From the new perspective of saving energy and reducing emissions, a vehicle routing problem under a low-carbon environment is studied in which transportation services are provided by a third party. The costs of energy, carbon emissions, and vehicle leasing are considered simultaneously; they depend not only on distance but also on client demands and vehicle speed. An energy consumption calculation method is proposed that takes the vehicle's weight and speed into account. Using the peddling shipment strategy, a low-carbon routing model named LCRP is built. A tabu search algorithm named RS-TS, based on a route-splitting method, is then designed to solve the model; it introduces a novel route encoding and decoding scheme named WSS and adopts three neighborhood search methods. Computational results on benchmark instances verify that the algorithm finds satisfactory solutions effectively, and they shed light on the trade-offs among distance, energy, travel time, and other parameters. The experimental analysis shows that the low-carbon routing arrangement is more economical and environmentally friendly, and that selecting a medium or low travel speed is better for saving energy and reducing carbon emissions.
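A sketch of a load- and speed-dependent energy term of the kind the abstract describes: a rolling component growing with total weight plus a drag component growing with the square of speed, summed over the legs of a peddling route as the load drops at each client. The functional form and coefficients are hypothetical, not the paper's calibration.

```python
def leg_energy(distance_km, load_t, speed_kmh,
               tare_t=3.0, alpha=0.2, beta=0.05, gamma=1e-4):
    """Illustrative energy use of one route leg (arbitrary units)."""
    weight = tare_t + load_t                      # vehicle tare plus remaining cargo
    per_km = alpha + beta * weight + gamma * speed_kmh**2
    return per_km * distance_km

# a peddling route dropping 2 t at each of three clients: (km, load on leg, km/h)
legs = [(20, 6.0, 60), (15, 4.0, 60), (25, 2.0, 60)]
print(sum(leg_energy(d, l, v) for d, l, v in legs))
```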
    Optimization and Coordination for Short-life-cycle Product Supply Chain with Double Time-varying Parameters of Forecast Accuracy and Cost
    SU Ju-ning, LIU Chen-guang, YIN Yong
    2015, 23 (10):  107-112.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.012
    Abstract ( 1532 )   PDF (951KB) ( 1651 )
    For a short-life-cycle product supply chain with a controllable lead time, compressing the lead time raises the distributor's demand forecast accuracy but increases the manufacturer's production cost. Supply chain management therefore has to determine the optimal ordering point and the optimal ordering quantity, as well as how to coordinate the distributor and the manufacturer. Decision models for the decentralized and centralized supply chains are proposed, and the optimal decisions of each party in both systems are solved. Further, an incentive scheme is developed to facilitate coordination between the two parties. The results show that a dynamic wholesale price contract based on an order rebate and penalty policy can effectively coordinate the supply chain with the two time-varying parameters, and that this contract achieves system efficiency while improving the profits of all parties through tuning of the contract parameter α.
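For intuition on the stated trade-off, the sketch below searches over ordering points, assuming (hypothetically) that forecast error grows linearly with lead time while unit production cost falls with it, and scoring each candidate with the standard newsvendor expected profit; the paper's actual time-varying functions and contract analysis are not reproduced.

```python
import numpy as np
from scipy.stats import norm

p, s, mu = 10.0, 2.0, 1000.0       # price, salvage value, mean demand

def expected_profit(L):
    sigma = 50.0 + 2.0 * L         # forecast error grows with lead time L
    c = 6.0 + 2.0 / L              # lead-time compression makes production dearer
    cr = (p - c) / (p - s)         # newsvendor critical ratio
    q = mu + sigma * norm.ppf(cr)  # optimal order quantity at this ordering point
    z = (q - mu) / sigma
    exp_sales = mu - sigma * (norm.pdf(z) - z * (1 - norm.cdf(z)))  # E[min(q, D)]
    return p * exp_sales + s * (q - exp_sales) - c * q

Ls = np.arange(1, 41)
best = Ls[np.argmax([expected_profit(L) for L in Ls])]
print(best)  # interior profit-maximising ordering point under these assumptions
```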
    Differential Price Closed-Loop Supply Chain Coordination with Contract under Production Cost Disruptions
    LI Xin-ran, HE Qi, MU Zong-yu
    2015, 23 (10):  113-124.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.013
    Abstract ( 1710 )   PDF (965KB) ( 1195 )
    For a closed-loop supply chain (CLSC) with differential prices for new and remanufactured products, the optimal strategies of the centralized CLSC and the coordination of the decentralized CLSC by a revenue-sharing contract are analyzed when the costs of new and remanufactured products are disrupted simultaneously. The research shows that, in the centralized CLSC, the original production plan has some robustness when the costs of both products are only slightly disrupted. When the cost of at least one product is largely disrupted, the original production plan has to be revised: the product whose cost is largely disrupted is adjusted in the direction opposite to its cost disruption, while the adjustment of the other product depends on the cost disruption of the largely disrupted product, the substitution coefficient, and the unit extra production cost or disposal cost. In the decentralized CLSC, the original revenue-sharing contract designed for the static environment still works when the costs of both products are only slightly disrupted; however, the original contract must be adjusted when the production cost of at least one product is largely disrupted. The improved revenue-sharing contract also remains effective in the static environment. Finally, numerical examples verify the rationality of the model and the validity of the contract.
    Optimization Model and Algorithms of Truck Appointment in Container Terminals
    ZENG Qing-cheng, CHEN Wen-hao, HU Xiang-pei
    2015, 23 (10):  125-130.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.014
    Abstract ( 2246 )   PDF (1385KB) ( 1710 )
    Truck congestion at container terminals is an important issue: it increases truck waiting time, decreases the operational efficiency of the terminal, and increases carbon emissions. Truck appointment is one of the effective methods to alleviate gate congestion. In this paper, the optimization of truck appointments is addressed and an optimization model for appointment quotas is developed, in which the appointment quota for each time period is optimized subject to quota-adjustment constraints. A non-stationary queuing model is used to describe the time-dependent character of truck arrivals. To solve the model, a method based on a genetic algorithm and the Pointwise Stationary Fluid Flow Approximation (PSFFA) is designed: the genetic algorithm searches for the optimal solution and PSFFA calculates the truck queuing time. To illustrate the validity of the proposed model and algorithms, numerical experiments are conducted with data from a container terminal in Shenzhen from July 28 to August 2, 2007. The results indicate that the proposed model can decrease truck queuing time and that PSFFA solves the non-stationary queuing problem efficiently. The model can provide a basis for truck appointment decision-making, deepen the theory of truck appointment, and guide its practice.
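The PSFFA step can be sketched compactly for an M(t)/M/1 queue: the mean number in system L(t) follows dL/dt = λ(t) − μ·L/(1+L), integrated here with Euler steps. The arrival profile is invented and the genetic algorithm layer that optimizes the quotas is omitted.

```python
import numpy as np

def psffa_mm1(lam, mu, dt=0.01):
    """PSFFA for an M(t)/M/1 queue: dL/dt = lam(t) - mu * L / (1 + L)."""
    L = np.zeros(len(lam))
    for t in range(1, len(lam)):
        L[t] = L[t-1] + dt * (lam[t-1] - mu * L[t-1] / (1 + L[t-1]))
    return L

hours = np.arange(0, 24, 0.01)
lam = 8 + 6 * np.sin(np.pi * hours / 12) ** 2   # time-varying truck arrivals per hour
L = psffa_mm1(lam, mu=12.0)
wait = L / lam          # Little's law, approximate under time-varying arrivals
print(wait.max())       # worst mean time in system over the day
```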
    An Information Integration Framework and Algorithm for Generic Comprehensive Evaluation and Its Applications
    YI Ping-tao, LI Wei-wei, GUO Ya-jun
    2015, 23 (10):  131-138.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.015
    Abstract ( 1671 )   PDF (1875KB) ( 1508 )
    For problems with multi-source and diverse information, the traditional evaluation model is extended and a new method called generic comprehensive evaluation is proposed. The method centers on the fusion of complex information and its associated algorithm. In particular, a framework is built to integrate information of various types and structures, and stochastic simulation is used to derive the framework's algorithm. Because the cost of the algorithm increases with the complexity of the information in the framework, a simplified algorithm is further proposed, whose validity is illustrated by numerical examples; with it, generic comprehensive evaluation can be carried out easily. The case of "performance evaluation of a region's development by multiple participants" illustrates that generic evaluation makes democratic decisions among different interests possible. The results can support questions such as the discovery of collective wisdom in the big-data context and the analysis of democratic decision-making results.
    Grey Clustering Method with Fixed Weights Based on Interval Grey Numbers
    WANG Jun-jie, DANG Yao-guo, LI Xue-mei, CUI Jie
    2015, 23 (10):  139-146.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.016
    Abstract ( 2148 )   PDF (972KB) ( 1819 )
    As society and the economy develop, more and more problems involve interval grey numbers. Since existing approaches require the turning points of the whitenization weight function to be real numbers, this paper discusses how to build and compute the function when the turning points are interval grey numbers. First, a standardization method for interval grey numbers is defined and a whitenization weight function with interval grey numbers is constructed: the standardized interval grey numbers replace the real-valued turning points, and the expression of the whitenization weight function with interval grey numbers is given. Second, the cases in which only one turning point on a piece of the function is an interval grey number, or both endpoints are interval grey numbers, are discussed, and the expression of the whitenization weight function fjk(⊗) when all four turning points are interval grey numbers is given. Finally, the interval grey clustering model is used to evaluate the core competencies of eight private enterprises in Xuchang; the data were obtained by surveying the eight enterprises, and twenty experts were asked to build the whitenization weight functions. The results indicate that the model is effective and reasonable.
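A minimal sketch of a trapezoidal whitenization weight function whose turning points may be interval grey numbers, standardized here (as one simple choice) by their midpoints; the paper's standardization method and the clustering step itself are not reproduced.

```python
def kernel(g):
    """Reduce an interval grey number [a, b] to a real value (midpoint here)."""
    a, b = g
    return 0.5 * (a + b)

def whitenization(x, turning_points):
    """Whitenization weight with four turning points, each a real number
    or an interval grey number given as a (lower, upper) pair."""
    t1, t2, t3, t4 = [kernel(g) if isinstance(g, (tuple, list)) else g
                      for g in turning_points]
    if x <= t1 or x >= t4:
        return 0.0
    if t2 <= x <= t3:
        return 1.0
    if x < t2:
        return (x - t1) / (t2 - t1)   # rising edge
    return (t4 - x) / (t4 - t3)       # falling edge

# turning points partly given as interval grey numbers
print(whitenization(6.0, [(2, 4), 5, (7, 9), 10]))  # -> 1.0
```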
    Inconsistency Judgment Matrix Ranking Method Based on Manifold Learning
    WANG Hong-bo, LUO He, YANG Shan-lin
    2015, 23 (10):  147-155.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.017
    Abstract ( 1917 )   PDF (1012KB) ( 1515 )
    To address the problem that the traditional AHP requires judgment matrices to satisfy a consistency condition, the reasons for the consistency requirement in AHP are studied and an inconsistency judgment matrix ranking method based on manifold learning is proposed in this paper. In the ranking process, the neighbor distance matrices of the data sets corresponding to the judgment matrices are first constructed on the basis of neighbor distances. Each data point is then mapped to a low-dimensional global coordinate system based on linear representations of its neighboring points, yielding the low-dimensional embeddings corresponding to the judgment matrices. The ranking conclusion is obtained by analyzing the superiority ordering of the elements according to the low-dimensional embeddings computed at each hierarchy level. Finally, a numerical example illustrates that the proposed method has a high level of effectiveness and practicability.
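A rough sketch of the embedding step using locally linear embedding from scikit-learn on the rows of a judgment matrix: the one-dimensional global coordinates induce a ranking. The sign of the embedding axis is arbitrary, so in practice the direction would be oriented against a known dominant element; the (mildly inconsistent) matrix below is invented and the paper's exact neighbor-distance construction is not reproduced.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# a mildly inconsistent 6x6 pairwise judgment matrix; rows are the data points
A = np.array([[1,   2,   3,   4,   5,   6],
              [1/2, 1,   2,   3,   4,   5],
              [1/3, 1/2, 1,   2,   3,   4],
              [1/4, 1/3, 1/2, 1,   2,   3],
              [1/5, 1/4, 1/3, 1/2, 1,   2],
              [1/6, 1/5, 1/4, 1/3, 1/2, 1]])

lle = LocallyLinearEmbedding(n_neighbors=3, n_components=1, eigen_solver="dense")
coord = lle.fit_transform(A).ravel()   # 1-D global coordinates of the elements
print(np.argsort(coord))               # ordering of alternatives along the axis
```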
    Dynamic Evaluation Method Based on TOPSIS
    LI Mei-juan, CHEN Guo-hong, LIN Zhi-bing, XU Lin-ming
    2015, 23 (10):  156-161.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.018
    Abstract ( 2376 )   PDF (873KB) ( 1873 )
    A dynamic evaluation method is proposed to compare multiple systems at different times. Two-dimensional data with a time dimension are extended to three-dimensional data, and a dynamic evaluation method based on TOPSIS is put forward that considers both the difference degree and the growth degree of the evaluation index values. The method yields the values and rankings of the evaluated objects at a given time as well as their overall values and rankings over a period of time. In an application, a dynamic evaluation of the independent innovation capabilities of the provinces of east China is conducted. The evaluation indicators for regional technological innovation capability come from the China Statistical Yearbook, the China Statistical Yearbook on Science and Technology, and the website of the Ministry of Science and Technology of the People's Republic of China, which provides a database of the major science and technology indicators for China. The findings prove practically effective. The proposed method offers an approach to dynamic evaluation, complements comprehensive evaluation methods, and facilitates research in related fields.
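A minimal static TOPSIS sketch for one time slice; the dynamic method would run this on each slice of the three-dimensional data and then aggregate the closeness coefficients over time (for example with time weights reflecting the growth degree), which is not shown here. The data, weights, and criterion directions are hypothetical.

```python
import numpy as np

def topsis(X, w, benefit):
    """TOPSIS closeness scores: X is (alternatives x criteria), w the weights,
    benefit[j] True if criterion j is 'larger is better'."""
    V = X / np.linalg.norm(X, axis=0) * w              # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)          # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)           # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                     # closeness coefficient

X = np.array([[8., 36., 4.], [7., 30., 2.], [9., 40., 5.]])
w = np.array([0.5, 0.3, 0.2])
print(topsis(X, w, benefit=np.array([True, True, False])))
```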
    Customer Targeting Model Based on Improved GMDH
    XIAO Jin, TANG Jing, LIU Dun-hu, XIE Ling, WANG Shou-yang
    2015, 23 (10):  162-169.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.019
    Abstract ( 1802 )   PDF (1315KB) ( 1393 )
    In recent years, database marketing has become a hot topic in customer relationship management (CRM), and customer targeting modeling is one of its most important issues. Essentially, customer targeting modeling is a binary classification problem: customers are divided into those who respond to the company's marketing activities and those who do not. This study combines group method of data handling (GMDH) neural networks, a re-sampling technique, and the Logistic regression classification algorithm to construct the customer targeting model LogGMDH-Logistic. The model consists of three phases. (1) To handle the highly imbalanced class distribution of the training set, a new re-sampling method (hybrid sampling) is proposed to balance the class distribution. (2) To select key features from the large number of characteristics describing customers, the GMDH neural network is introduced and a new feature selection algorithm, Log-GMDH, is presented, which improves the traditional GMDH neural network in both the transfer function and the external criterion: the linear transfer function is replaced by the non-linear Logistic regression function, and the regularization criterion is replaced by the hit rate, which suits customer targeting modeling. (3) The training set is mapped according to the selected feature subset, the Logistic regression classifier is trained, and the response probabilities of potential customers are predicted. Experiments on the customer targeting dataset of a car insurance company from the CoIL2000 prediction competition show that the LogGMDH-Logistic model is superior to some existing customer targeting models in both performance and interpretability. CRM involves many similar customer classification problems, such as customer churn prediction and customer credit scoring, so the proposed model can also be applied to them and is expected to achieve satisfactory classification performance.
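A sketch of phase (1) plus the final Logistic step on synthetic data, with a deliberately naive hybrid scheme (undersample the majority class, duplicate the minority class); the paper's actual hybrid sampling and the Log-GMDH feature selection of phase (2) are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def hybrid_sample(X, y, rng):
    """Naive hybrid sampling: shrink the majority and duplicate the minority
    until both classes have equal size (an assumption, not the paper's scheme)."""
    pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
    n = (len(pos) + len(neg)) // 2
    keep_neg = rng.choice(neg, size=n, replace=False)   # undersample majority
    keep_pos = rng.choice(pos, size=n, replace=True)    # oversample minority
    idx = np.concatenate([keep_neg, keep_pos])
    return X[idx], y[idx]

rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 10))
y = (rng.random(2000) < 0.06).astype(int)     # roughly 6% responders
Xb, yb = hybrid_sample(X, y, rng)
clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
print(clf.predict_proba(X[:5])[:, 1])          # predicted response probabilities
```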
    An Application of Information Fusion and Data Mining Techniques to Financial Early Warning
    ZHANG Liang, ZHANG Ling-ling, CHEN Yi-bing, TENG Wei-li
    2015, 23 (10):  170-176.  doi: 10.16381/j.cnki.issn1003-207x.2015.10.020
    Abstract ( 1702 )   PDF (1005KB) ( 2251 )
    Different data mining methods for classification can produce different results, and a single pass of data mining often cannot deliver well-supported decisions, so information fusion is introduced to fuse the different results into an optimal solution. In this paper, an information fusion technique is used to build a financial early-warning model on top of data mining methods such as SVM and the Logistic model; it integrates the respective strengths of the different methods to improve prediction accuracy, fusing their results into a single prediction for reliable decision-making. A real dataset of Chinese listed manufacturing companies is used to predict financial risk with the fusion of SVM and the Logistic model, and a higher prediction accuracy is obtained than with either method alone.
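A minimal sketch of the simplest possible fusion rule, averaging the posterior probabilities of an SVM and a Logistic model on synthetic data; the paper's actual fusion technique may weight or combine the classifiers differently, and the data here are invented rather than the listed-company dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.standard_normal((600, 8))        # hypothetical financial ratios
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(600) > 0).astype(int)

svm = SVC(probability=True).fit(X[:400], y[:400])
logit = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])

# fuse the two base classifiers by averaging their posterior probabilities
p = 0.5 * svm.predict_proba(X[400:])[:, 1] + 0.5 * logit.predict_proba(X[400:])[:, 1]
print(((p > 0.5).astype(int) == y[400:]).mean())  # fused out-of-sample accuracy
```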