Supervised by: Chinese Academy of Sciences
Sponsored by: Chinese Society of Optimization, Overall Planning and Economic Mathematics
   Institutes of Science and Development, Chinese Academy of Sciences

Table of Contents

    30 May 2020, Volume 28 Issue 5
    Articles
    Testing the Nonlinear Cointegration Relation of Monetary Models of Exchange Rate Determination ——An Analysis Based on the Deep GRU Neural Network
    LU Xiao-qin, FENG Ling, DING Jian-ping
    2020, 28 (5):  1-13.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.001
    The monetary model of exchange rate determination has long been a focus of academic research, and many scholars have subjected it to linear cointegration tests, with unsatisfactory results. In this paper, three versions of the monetary model of exchange rate determination (the Flexible-Price, Forward-Looking and Real Interest Differential Models) are tested for six selected countries with floating exchange rate regimes by applying nonlinear Johansen cointegration tests facilitated by the Gated Recurrent Unit (GRU) neural network technique. The GRU technique has the advantages of intelligent memory, autonomous learning and strong approximation ability in deep learning. Based on country-by-country analysis, evidence of nonlinear cointegration between exchange rates and macroeconomic fundamentals is found. This supports the validity of monetary models and demonstrates the advantage of advanced deep learning tools in testing economic theory.
    The concrete steps of the nonlinear cointegration test are as follows. First, the long memory characteristics of the series are tested, because a nonlinear cointegration relationship among data series requires that the series exhibit long memory. Second, the deep GRU neural network is used to construct the nonlinear cointegration function; the GRU has a gated memory-transfer mechanism that ordinary neural networks lack, which gives it an advantage in mining time series data. Finally, it is tested whether the residuals of the fitted GRU model form a short memory sequence. If they do, the GRU has extracted the nonlinear relationship between the series, which proves the existence of a nonlinear cointegration relationship between them.
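    As an illustration of the three-step procedure described above, the following minimal Python sketch (assuming NumPy and TensorFlow/Keras are available) tests long memory with a rescaled-range Hurst estimate, fits a small GRU regression of the exchange rate on fundamentals, and re-tests the residuals. The toy data, window length, network size and the choice of the R/S statistic are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np
import tensorflow as tf

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent (H > 0.5 suggests long memory)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes = np.unique(np.logspace(np.log10(min_chunk), np.log10(n // 2), 10).astype(int))
    rs = []
    for s in sizes:
        vals = []
        for i in range(0, n - s + 1, s):
            c = x[i:i + s]
            dev = np.cumsum(c - c.mean())
            sd = c.std()
            if sd > 0:
                vals.append((dev.max() - dev.min()) / sd)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

def make_windows(X, y, lags=12):
    """Stack the last `lags` observations of the fundamentals for each target point."""
    Xw = np.stack([X[i - lags:i] for i in range(lags, len(X))])
    return Xw, y[lags:]

# toy data standing in for normalized exchange rates and fundamentals (illustrative only)
rng = np.random.default_rng(0)
fund = rng.normal(size=(400, 3)).cumsum(axis=0)              # I(1)-like regressors
rate = 0.5 * fund.sum(axis=1) + rng.normal(scale=0.1, size=400)

Xw, yw = make_windows(fund, rate, lags=12)
model = tf.keras.Sequential([
    tf.keras.layers.GRU(16, input_shape=Xw.shape[1:]),        # gated recurrent layer
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(Xw, yw, epochs=50, verbose=0)

resid = yw - model.predict(Xw, verbose=0).ravel()
print("Hurst of levels   :", round(hurst_rs(rate), 3))        # long memory expected (> 0.5)
print("Hurst of residuals:", round(hurst_rs(resid), 3))       # near 0.5 indicates short memory
```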
    The results show that: (1) After normalization, the original series exhibit long memory, so they are suitable for analysis under nonlinear cointegration theory. (2) The residual of the nonlinear cointegration model constructed with the GRU neural network is a short memory sequence, which provides evidence of nonlinear cointegration between exchange rates and macroeconomic fundamentals.
    In this paper, the advanced deep GRU technique is introduced into nonlinear cointegration testing, which may attract further contributions that expand the toolbox of empirical cointegration tests.
    The Evaluation of Systemically Important Financial Institution of China: Based on Multivariate Extreme Value Theory
    LI Hong-quan, HE Min-yuan, HUANG Ying-ying
    2020, 28 (5):  14-24.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.002
    After the 2008 financial crisis, the too-big-to-fail problem of financial institutions and the associated systemic risk have attracted increasing attention. In this paper, based on multivariate extreme value theory (EVT) and copula functions, multidimensional evaluation indexes are proposed and a comprehensive assessment of systemic importance is made for 26 listed financial institutions in China. The empirical results show that: (1) The banking sector in China contributes greatly to systemic risk and plays an important role in the financial system; (2) Scale is the main factor in the assessment of systemic importance, but other factors also affect it, so some joint-equity commercial banks and city commercial banks should also be included in the focus of financial regulation.
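    As a rough illustration of the kind of tail co-movement that multivariate EVT and copula methods formalize, the sketch below computes an empirical upper-tail dependence coefficient between an institution's losses and system-wide losses. The simulated data and the 95% threshold are illustrative assumptions; this is not the paper's actual systemic-importance index.

```python
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    """Empirical upper-tail dependence: P(x above its u-quantile | y above its u-quantile).
    A crude stand-in for copula/EVT-based tail-dependence measures."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    x_ext = x > np.quantile(x, u)
    y_ext = y > np.quantile(y, u)
    if y_ext.sum() == 0:
        return np.nan
    return (x_ext & y_ext).sum() / y_ext.sum()

# toy usage: hypothetical institution losses vs. system-wide losses (replace with real data)
rng = np.random.default_rng(5)
common = rng.standard_t(df=3, size=4000)
inst = 0.7 * common + 0.3 * rng.standard_t(df=3, size=4000)
system = 0.8 * common + 0.2 * rng.standard_t(df=3, size=4000)
print(round(upper_tail_dependence(inst, system, u=0.95), 3))
```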
    An Empirical Study of Home Bias in Online Investments: Evidence from Online Financing Market
    GUO Li-huan, GUO Dong-qiang
    2020, 28 (5):  25-38.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.003
    The rise of online financing has contributed to the prosperity of the crowdfunding market. Investors' participation in a crowdfunding project is influenced not only by economic factors but also by psychological factors, among which home bias is one. Investors' preference for local resources has been shown to be widespread in many other industries, but it lacks in-depth discussion in the crowdfunding field. Home bias is often found in traditional transactions, leading the locations of buyers and sellers to converge. However, since both the information presentation and the investment in crowdfunding campaigns are usually accomplished online, crowdfunding theoretically breaks the limitation of geographical space, and home bias may not be significant. To verify home bias in online financing, the following three questions are proposed: (1) Is there home bias in the crowdfunding market, and how does it differ across levels? (2) What is the impact of home bias on investment behavior and pledge results? (3) How can investors' home bias be explained? Kickstarter is investigated with empirical models of home bias at the country, regional and micro levels, respectively. The distance between fundraisers and backers is calculated from the equations C = sin(LatA)*sin(LatB)*cos(LonA-LonB) + cos(LatA)*cos(LatB) and Distance = Arccos(C)*R*Pi/180. Meanwhile, a dyadic analysis model is employed to estimate the economic impact of home bias on pledge results, with main model prob(Investor_i backs Founder_j) = β*SamePlace_ij + f(InvestorInfo_i, FounderInfo_j, ProjectInfo) + ε_ij. The analysis proceeds as follows: first, whether there is conclusive evidence of home bias in crowdfunding investment behavior is examined; then, if home bias is unequivocal, its impact on crowdfunding financing results is assessed, that is, whether home bias promotes financing success or the reverse; finally, home bias in online financing is explained from the psychological, behavioral and economic perspectives. The results indicate that home bias does exist in crowdfunding investment, but it shows diverse patterns across levels and project categories. Drama projects are the most influenced by home bias, followed by food projects, while game projects are the least affected. A superimposed effect of home bias is also found: for example, when a campaign is launched outside the founder's hometown, the founder is able to gain support from investors in both areas and thus obtains a higher success ratio. As for the distance between investors and founders, drama projects have the shortest average distance at 1474 km, while game projects have the largest at 4624 km. This study enriches research on Internet finance and behavior patterns and provides new perspectives for crowdfunding research and practice. It demonstrates that home bias exists and plays a role in online financing, making up for a deficiency in the existing theoretical research. Beyond e-commerce and other trading areas, investors in online financing also consider the geographical location of a project, which paves the way for future theoretical research and practice.
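    For concreteness, the sketch below implements the fundraiser-backer distance via the spherical law of cosines; it is equivalent to the abstract's formula for C when LatA and LatB there are read as colatitudes (90 degrees minus latitude). The Earth radius and the sample coordinates are illustrative assumptions.

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; the paper's exact R is not stated

def great_circle_km(lat_a, lon_a, lat_b, lon_b):
    """Great-circle distance in kilometres via the spherical law of cosines."""
    phi_a, phi_b = math.radians(lat_a), math.radians(lat_b)
    d_lon = math.radians(lon_a - lon_b)
    c = math.sin(phi_a) * math.sin(phi_b) + math.cos(phi_a) * math.cos(phi_b) * math.cos(d_lon)
    c = max(-1.0, min(1.0, c))            # guard against floating-point overshoot
    return EARTH_RADIUS_KM * math.acos(c)

# hypothetical backer/founder coordinates: Shanghai to Beijing, roughly 1070 km
print(round(great_circle_km(31.23, 121.47, 39.90, 116.40), 1))
```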
    Trading Strategies and Extreme Return Predictions based on the Recurrence Interval Analysis
    WU Jing, JIANG Zhi-qiang, ZHOU Wei-xing
    2020, 28 (5):  39-51.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.004
    Predicting such extreme financial events as market crashes, bank failures, and currency crises is of great importance to investors and policy makers, because these events destabilize the financial system and can greatly shrink asset values. A number of models have been developed to predict the occurrence of financial distress. Here, an early warning model is built to predict the recurrence of financial extremes based on the distribution of recurrence intervals between consecutive historical extremes. Extreme returns are determined according to quantile thresholds of 95%, 97.5%, and 99%. By taking into account the time at which extremes occur, the prediction of extreme returns is based on the hazard probability W(Δt|t), which measures the probability that, given an elapsed time t since the last extreme return, another extreme return occurs within an additional waiting time Δt: W(Δt|t) = ∫_t^{t+Δt} P(τ)dτ / ∫_t^{∞} P(τ)dτ, where P(τ) is the recurrence interval distribution. In this paper, three common functions are employed to fit the recurrence interval distributions, and it is found that the recurrence intervals follow a q-exponential distribution. Using the hazard probability, an extreme-return-prediction model is developed for forecasting imminent financial extreme events: when the hazard probability exceeds a hazard threshold, the model warns that an extreme event is about to occur. The hazard threshold is obtained by maximizing the usefulness of extreme forecasts. To test the validity of the extreme-return-prediction model, a recurrence interval analysis of financial extremes in the Shanghai Composite Index from 1990 to 2016 is performed. The data before each turbulent period are used to calibrate the model, and each turbulent period that follows is used for out-of-sample forecasting, which yields three turbulent evaluation periods: 2000-2002, 2006-2009 and 2014-2016. It is found that the recurrence intervals exhibit positive skewness, fat-tailed distributions, and positive autocorrelation. Both in-sample and out-of-sample tests indicate that the model has strong predictive power. Two trading strategies, a long strategy and a short strategy, are further designed to check whether the model can deliver significant profits. For extreme positive returns, a buy signal is generated when the hazard probability exceeds the threshold, and a sell signal occurs when the hazard probability falls below the positive threshold after Δt; this defines the long strategy. A short strategy is defined analogously for extreme negative returns. In addition to the Shanghai Composite Index, four stock indexes, CAC40, FTSE, HSI and N225, are also examined. The back tests reveal that the two trading strategies can efficiently avoid decline stages, and the long strategy earns higher profits.
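    The hazard probability can also be estimated directly from the empirical recurrence intervals, as in the minimal sketch below. The paper instead fits a q-exponential distribution; the simulated returns, the threshold quantile and the chosen t and Δt here are illustrative assumptions.

```python
import numpy as np

def recurrence_intervals(returns, quantile=0.95):
    """Waiting times (in observations) between returns exceeding a quantile threshold."""
    returns = np.asarray(returns, dtype=float)
    threshold = np.quantile(returns, quantile)
    extreme_idx = np.flatnonzero(returns >= threshold)
    return np.diff(extreme_idx)

def hazard_probability(intervals, t, dt):
    """Empirical W(dt|t): probability the next extreme arrives within dt more steps,
    given that t steps have already elapsed since the last extreme."""
    intervals = np.asarray(intervals, dtype=float)
    alive = intervals[intervals >= t]
    if alive.size == 0:
        return np.nan
    return np.mean(alive < t + dt)

# toy usage with simulated returns (replace with index returns; the paper uses 95%/97.5%/99% thresholds)
rng = np.random.default_rng(1)
r = rng.standard_t(df=4, size=5000) * 0.01
iv = recurrence_intervals(r, quantile=0.99)
# a long-strategy buy signal would fire when this exceeds a calibrated hazard threshold
print(hazard_probability(iv, t=20, dt=5))
```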
    Dynamic Product Differentiation Competitive Strategy Before and After IPO: Introducing the Stock-Price Information
    HU Zhi-qiang, HU Yuan, DI Chen-chen
    2020, 28 (5):  52-61.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.005
    A product differentiation decision-making model incorporating a learning mechanism from stock price information is constructed, and the micro decision-making mechanism of product differentiation and the dynamic characteristics of changes in enterprise strategy before and after IPO are discussed. The theoretical analysis shows that the product differentiation strategy is only a short-term strategic choice for enterprises after their IPOs: the effect of the differentiation strategy weakens year by year, and the stock prices of competing enterprises can affect the intensity with which the differentiation strategy is implemented. The empirical sample comprises 996 A-share listed companies with IPOs from 2005 to 2015. The empirical test, based on measuring the degree of product differentiation and matching enterprises with homogeneous products, supports the above conclusions. This paper extends the existing product market competition mechanism based on product differentiation. The conclusions shed light on the internal linkage between the domestic securities market and the product competition market, and on the reference value of stock prices for enterprise decision-making.
    The Study of High-dimensional Volatility Estimators and Forecasting Models based on Volatility Timing Performance
    QU Hui, ZHANG Yi
    2020, 28 (5):  62-70.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.006
    The choice of high-frequency-data-based covariance matrix estimator and of forecasting model jointly influences the forecasting performance of the covariance matrix, and therefore the performance of volatility timing portfolio strategies. When the number of assets is large, much of the intraday data is not utilized by common high-frequency covariance estimators because of non-synchronous trading, implying a loss of efficiency in information usage. Therefore, the KEM estimator, which employs all of the intraday price information, is used to construct a high-dimensional covariance matrix estimator in China's stock market, and its performance is compared with that of two commonly used estimators. In addition, each of the three estimators is combined with five forecasting models, namely the multivariate heterogeneous autoregressive model, the exponentially weighted moving average model, and the short-term, medium-term and long-term moving average models, and their economic performance under three risk-based portfolio strategies is compared. Empirical experiments with tick-by-tick high-frequency data for 20 constituent stocks of the SSE 50 index show that: (1) The long-term moving average model is the best choice for forecasting high-dimensional covariance matrices, since it achieves the lowest cost and the highest return for all the volatility timing strategies, whether the market is stable or extremely volatile. (2) The KEM estimator is the best choice for estimating the high-dimensional covariance matrix when the market is stable, since it then achieves the lowest cost and the highest return for all the volatility timing strategies; when the market is extremely volatile, it only achieves the lowest cost. (3) Among the volatility timing strategies, the equal risk contribution portfolio strategy always achieves the lowest cost, while the global minimum variance portfolio strategy always achieves the highest return, regardless of the market condition. This research is the first to evaluate the effectiveness of the KEM estimator in common volatility timing strategies and the first to show empirically that the easily implemented long-term moving average model has a significant advantage in high-dimensional covariance matrix forecasting, which is of practical importance for applications such as investment decision-making and risk management.
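    The long-term moving average forecast and the global minimum variance weights referred to above are straightforward to implement; the sketch below is illustrative only, with simulated daily covariance estimates standing in for KEM or other high-frequency estimators and an arbitrary 60-day window.

```python
import numpy as np

def moving_average_forecast(cov_history, window=60):
    """Long-term moving-average forecast: mean of the last `window` daily covariance matrices."""
    return np.mean(cov_history[-window:], axis=0)

def gmv_weights(cov):
    """Global minimum variance weights: w = (cov^{-1} 1) / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# toy usage: 250 days of simulated 5-asset covariance estimates
# (replace with KEM or other high-frequency realized covariance estimators)
rng = np.random.default_rng(2)
covs = []
for _ in range(250):
    a = rng.normal(size=(30, 5)) * 0.01       # 30 intraday returns for 5 assets
    covs.append(a.T @ a / 30)
covs = np.array(covs)

sigma_hat = moving_average_forecast(covs, window=60)
print(gmv_weights(sigma_hat).round(3))
```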
    The Decision of Economic Production Quantity with Quality-Contingent Demand and Perfect Preventative Maintenance
    LU Zhen, XU Jian, YANG Yun-feng
    2020, 28 (5):  71-78.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.007
    Based on the classical EPQ model, and considering quality-contingent demand and a periodic perfect preventive maintenance strategy, a decision model for the economic production quantity is constructed with the goal of maximizing profit per unit time. In constructing the profit-per-unit-time model, the dynamic preventive maintenance cost, recovery cost, defective-item repair cost and product demand rate are modeled according to the practical features of quality-contingent demand and equipment degradation. Because of the complexity of the objective function, a genetic algorithm is used to solve the model numerically, and its rationality is verified by comparison with EPQ decision-making that ignores quality-contingent demand.
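    The optimization step can be illustrated with a generic real-coded genetic algorithm maximizing a profit-per-unit-time objective, as in the sketch below. The placeholder objective uses classical EPQ-style terms with made-up parameters and omits the paper's maintenance, recovery and rework cost terms, so it only demonstrates the solution approach.

```python
import numpy as np

rng = np.random.default_rng(4)

def profit_per_unit_time(q):
    """Placeholder objective: revenue minus setup and holding costs per unit time
    (hypothetical parameters; the paper's objective also embeds maintenance-related costs)."""
    demand, price, setup, hold, prod_rate = 900.0, 12.0, 400.0, 2.0, 2500.0
    return (price * demand
            - setup * demand / q
            - 0.5 * hold * q * (1 - demand / prod_rate))

def genetic_search(obj, lo, hi, pop_size=40, generations=200, mut_sd=0.02):
    """Minimal real-coded GA: tournament selection, arithmetic crossover, Gaussian mutation."""
    pop = rng.uniform(lo, hi, size=pop_size)
    for _ in range(generations):
        fit = np.array([obj(x) for x in pop])
        idx = rng.integers(0, pop_size, size=(pop_size, 2))          # random tournaments
        parents = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], pop[idx[:, 0]], pop[idx[:, 1]])
        partners = rng.permutation(parents)
        w = rng.uniform(size=pop_size)
        children = w * parents + (1 - w) * partners                   # arithmetic crossover
        children += rng.normal(scale=mut_sd * (hi - lo), size=pop_size)
        pop = np.clip(children, lo, hi)
    best = max(pop, key=obj)
    return best, obj(best)

q_star, v_star = genetic_search(profit_per_unit_time, lo=50.0, hi=5000.0)
print(round(q_star, 1), round(v_star, 2))
```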
    Soybean Future Prices Forecasting based on Dynamic Model Averaging
    XIONG Tao, BAO Yu-kun
    2020, 28 (5):  79-88.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.008
    In view of the complexity of soybean futures price fluctuations and the diversity of influencing factors, soybean futures price forecasting is conducted by introducing dynamic model averaging. This technique can dynamically select the explanatory variables and their coefficients, which maximizes the use of available information, effectively controls model and coefficient uncertainty, and ultimately improves forecasting performance. More specifically, an analysis framework for the factors influencing the soybean futures price is proposed; the time-varying characteristics of these factors are identified from the perspectives of the futures market and the economic environment; and a forecasting model for the soybean futures price is then constructed. Furthermore, over an out-of-sample period from July 30, 2009 to June 15, 2017, the forecasting performance of the proposed model is evaluated and compared with six benchmarks on the basis of accuracy measures and the Diebold-Mariano test. The experimental results show that dynamic model averaging can effectively identify the degree of influence of each explanatory variable on the soybean futures price and, at the same time, outperforms Bayesian model averaging, the time-varying parameter model, and the random walk in soybean futures price forecasting. Policymakers should be cognizant of the fact that there are many potential predictors of the Chinese soybean futures price and that their predictive power varies over time.
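    The core of dynamic model averaging is a recursive update of model probabilities with a forgetting factor; the sketch below shows that recursion in a deliberately simplified form, where each candidate model only supplies one-step-ahead means and standard deviations. The two toy predictors and all parameter values are illustrative assumptions, not the paper's time-varying-parameter regressions.

```python
import numpy as np
from scipy.stats import norm

def dma_weights(y, preds, sigmas, alpha=0.99):
    """Simplified dynamic-model-averaging recursion over K candidate models.

    alpha is the forgetting factor (alpha = 1 gives recursive Bayesian model averaging).
    Returns the time series of posterior model probabilities, shape (T, K)."""
    T, K = preds.shape
    post = np.full(K, 1.0 / K)
    history = np.empty((T, K))
    for t in range(T):
        prior = post ** alpha
        prior /= prior.sum()                              # forgetting step
        like = norm.pdf(y[t], loc=preds[t], scale=sigmas)  # one-step predictive likelihoods
        post = prior * like
        post /= post.sum()                                # Bayes update
        history[t] = post
    return history

# toy usage: two naive predictors of a price series (random walk vs. unconditional mean)
rng = np.random.default_rng(3)
price = np.cumsum(rng.normal(size=300)) + 100
preds = np.column_stack([np.roll(price, 1), np.full_like(price, price.mean())])
w = dma_weights(price[1:], preds[1:], sigmas=np.array([1.0, 10.0]), alpha=0.99)
print(w[-1].round(3))   # most recent model probabilities
```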
    Designing Contracts for A Closed-loop Supply Chain with Information Asymmetry under the Government's Reward-penalty Mechanism
    ZHANG Pan, YU Li-ting, XIONG Zhong-kai
    2020, 28 (5):  89-100.  doi: 10.16381/j.cnki.issn1003-207x.2020.05.009
    In order to develop a recycling economy and achieve sustainable development, many countries have issued recycling regulations with reward and penalty provisions. Under reward-penalty regulation, more and more manufacturers begin to collect their waste products through retailers and then remanufacture them. However, the retailer's recycling cost information is private and cannot be observed accurately by the manufacturer, which reduces the performance of recycling operations and of the supply chain. To address this problem, incentive contracts are designed using an information-screening model. In detail, the following questions are investigated: How should the manufacturer design the information-screening contracts? Is it necessary for the manufacturer to provide information-screening contracts to retailers of all cost types? What are the characteristics of the information-screening contracts and the impact of information asymmetry? What are the impacts of the reward-penalty mechanism on the information-screening contracts and the other equilibrium results? A Stackelberg model between the manufacturer and the retailer is constructed. The manufacturer first offers a menu of two-part contracts to the retailer; the retailer then declares its cost type based on its true cost information and decides the sale price and the collection rate. The model is solved by standard backward induction: first the retailer's optimal decisions are derived, and then the manufacturer's optimization problem, which includes the incentive compatibility and individual rationality constraints, is solved. Using principal-agent theory, the revelation principle and optimal control theory, the optimal two-part contract menu and the other equilibrium outcomes are derived. Based on these results, the remaining questions are investigated through comparative and sensitivity analyses. The results show that as the retailer's collection efficiency decreases, the manufacturer increases the wholesale price and decreases the transfer payment. Owing to the asymmetric information about collection cost, the profits of the manufacturer and the supply chain decrease while the retailer's profit increases. Moreover, the execution of the reward-penalty mechanism induces the manufacturer to decrease the wholesale price and increase the transfer payment. Furthermore, when the government's collection requirement is high and the reward-penalty level is low, executing the reward-penalty mechanism decreases the supply chain's profit; otherwise, the supply chain's profit increases.
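    The flavor of an information-screening contract menu under incentive-compatibility (IC) and individual-rationality (IR) constraints can be seen in the textbook two-type example sketched below; it is not the paper's closed-loop supply chain model, and all parameter values are hypothetical.

```python
# Textbook two-type screening (illustrative only, not the paper's model): a principal
# values quantity V(q) = a*q - 0.5*b*q^2, the agent's private marginal cost is
# theta in {theta_L, theta_H}, and the menu {(q_k, T_k)} must satisfy IC and IR.
a, b = 10.0, 1.0
theta_L, theta_H = 2.0, 4.0
p_L = 0.5                                   # probability of the low-cost type
p_H = 1.0 - p_L

q_L = (a - theta_L) / b                                         # efficient quantity for the low-cost type
q_H = (a - theta_H - (p_L / p_H) * (theta_H - theta_L)) / b     # downward-distorted quantity
U_H = 0.0                                   # high-cost type's IR constraint binds
U_L = (theta_H - theta_L) * q_H             # low-cost type's information rent (its IC binds)
T_L, T_H = theta_L * q_L + U_L, theta_H * q_H + U_H

# sanity checks: both incentive-compatibility constraints hold
assert T_L - theta_L * q_L >= T_H - theta_L * q_H - 1e-9
assert T_H - theta_H * q_H >= T_L - theta_H * q_L - 1e-9
print({"q_L": q_L, "q_H": q_H, "T_L": T_L, "T_H": T_H})
```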