
Chinese Journal of Management Science ›› 2025, Vol. 33 ›› Issue (12): 134-145. doi: 10.16381/j.cnki.issn1003-207x.2023.1775  cstr: 32146.14.j.cnki.issn1003-207x.2023.1775


An Optimization Model for Judging Large-scale Innovative Competitions Based on Cross-entropy Expert Weighting

Dongwei Guo1, Yingming Zhu1, Yulei Chen2, Yao Zhang1

  1. School of Economics and Management, Nanjing University of Science and Technology, Nanjing 210094, Jiangsu, China
    2. School of Mathematics and Statistics, Zhoukou Normal University, Zhoukou 466000, Henan, China
  • Received: 2023-10-25  Revised: 2023-12-18  Online: 2025-12-25  Published: 2025-12-25
  • Corresponding author: Yingming Zhu, E-mail: zhuyingming@njust.edu.cn
  • Funding:
    Philosophy and Social Science Planning Project of Henan Province (2022CJY060); Philosophy and Social Science Planning Project of Henan Province (2024CJY075); National Natural Science Foundation of China (42471194)

Optimization Model for Judging Large-scale Innovative Competitions Based on Experts' Weights Determined by Cross-entropy

Dongwei Guo1, Yingming Zhu1, Yulei Chen2, Yao Zhang1

  1. School of Economics and Management, Nanjing University of Science and Technology, Nanjing 210094, China
    2. School of Mathematics and Statistics, Zhoukou Normal University, Zhoukou 466000, China
  • Received: 2023-10-25  Revised: 2023-12-18  Online: 2025-12-25  Published: 2025-12-25
  • Contact: Yingming Zhu, E-mail: zhuyingming@njust.edu.cn

Abstract:

The judging of innovative works is inherently subjective, and the scores given by different experts may differ considerably. To evaluate the entries of large-scale innovative competitions objectively, a judging model based on cross-entropy expert weighting is proposed. First, a mathematical model for the bundled cross-distribution of entries is established. On the one hand, the model guarantees that every entry has a comparable counterpart so that abnormal scores can be corrected; on the other hand, it makes the numbers of entries cross-reviewed by any two experts as equal as possible, so that the scoring characteristics of each expert can be compared and identified, which in turn supports a sound expert-weighting model. Second, following the majority-rule principle, an effective method for adjusting part of the original scores is established; it reduces the unfairness caused by random errors, expert misjudgment, and the like. Third, a cross-entropy-based expert weighting method is established; to some extent, the resulting weights remedy the defect that the premise assumptions of the standardized-score method may not hold in large-scale judging. Finally, experiments verify the effectiveness, reasonableness, and scientific soundness of the proposed method.
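As a concrete illustration of the cross-entropy weighting idea summarized above, the Python sketch below assumes that an expert's weight decreases as the average cross-entropy between that expert's scores and those of peers on jointly reviewed entries increases. The score normalization, the 1/(1 + cross-entropy) weight formula, and all function names are illustrative assumptions, not the paper's "intrinsic information" construction.

import numpy as np

def cross_entropy(p, q, eps=1e-12):
    # H(p, q) = -sum_i p_i * log(q_i) for two discrete distributions
    p = p / p.sum()
    q = q / q.sum()
    return float(-(p * np.log(q + eps)).sum())

def expert_weights(scores):
    # scores[e, w]: score given by expert e to work w; np.nan where e did not review w
    m = scores.shape[0]
    avg_ce = np.zeros(m)
    for e in range(m):
        ces = []
        for f in range(m):
            if f == e:
                continue
            common = ~np.isnan(scores[e]) & ~np.isnan(scores[f])
            if common.sum() < 2:
                continue                              # need jointly reviewed works to compare
            ces.append(cross_entropy(scores[e, common], scores[f, common]))
        avg_ce[e] = np.mean(ces) if ces else 0.0
    raw = 1.0 / (1.0 + avg_ce)                        # lower disagreement with peers -> larger weight
    return raw / raw.sum()                            # normalize weights to sum to 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    s = rng.uniform(60.0, 100.0, size=(5, 12))        # 5 experts, 12 works
    s[rng.random(s.shape) < 0.3] = np.nan             # each expert reviews only some works
    print(expert_weights(s))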

Keywords: large-scale innovative competitions, scoring error, cross-entropy, expert weights

Abstract:

The judging of innovative works is somewhat subjective, and ratings may vary considerably from one expert to another. Establishing a scientific evaluation method for innovative competitions is therefore of great significance for promoting the fairness of competitions and strengthening the motivation to innovate. Several problems of current large-scale innovative competition evaluation schemes are addressed in this paper, including the rational allocation of competition entries, the scientific adjustment of abnormal scores, and the determination of expert weights, and a complete evaluation procedure is established that can effectively reduce the harm caused by scoring errors.

Firstly, a mathematical model of “bundled cross-assignment” of competition entries is developed. On the one hand, the model requires that every two entries be bundled together and distributed to the experts for evaluation, which ensures that each entry has a comparable counterpart so that unreasonable scores can be adjusted. On the other hand, the model requires that the numbers of entries cross-evaluated by any two experts be as equal as possible, which keeps the workload of each expert as close as possible and, at the same time, allows the scoring characteristics of any two experts to be compared and analyzed, providing reliable and objective data for determining the experts' weights. In addition, a property of the optimal solution of the model is discussed, and a greedy algorithm for finding a locally optimal solution is designed.

Secondly, according to the principle of majority rule, a simple and practical model for adjusting the scores of a minority of experts is established based on the ratio of the scores given by the majority of experts to the two bundled entries. The model can reduce the impact of random errors to a certain extent and can effectively attenuate the unfairness caused by individual experts' misjudgment or deliberately inflated or deflated scores.

Thirdly, a cross-entropy-based weighting method is established, which characterizes the experts' judging scores by means of “intrinsic information” and cross-entropy and then derives a reliable formula for calculating the experts' weights. To some extent, the weights remedy the defect that the premise assumptions of the standardized-score method may not hold in the evaluation of large-scale competitions, and at the same time they further narrow the score range for most of the entries.

Finally, to test the effectiveness of the proposed method, the entries of the mathematical modeling competition of H University are used as experimental data, and five evaluation schemes are compared and analyzed with four evaluation indexes: ranking difference degree, Spearman’s rank correlation coefficient, experts' scoring error degree, and works' controversy degree. The results show that the proposed method improves the Spearman’s rank correlation coefficient, the works' controversy degree, and the experts' scoring error degree while decreasing the ranking difference degree, which indicates that it is more scientific and reasonable than the alternatives and can provide a fairer and more objective evaluation and ranking of the competition entries.

Four avenues remain for further investigation. (1) The entry-distribution model proposed in this paper is a mathematical model of a class of assignment problems, and further research is required to develop an algorithm for its optimal solution. (2) Since adjustments are introduced to the original scores, the formulas for transforming these scores (e.g., into standard scores) deserve further discussion. (3) To address systematic and random errors in expert scoring, reasonable methods for assigning weights to experts still need to be explored. (4) Whether the scores given by the t experts to each entry can be converted into interval numbers according to reasonable rules, so that the entries can then be evaluated and ranked using the method proposed in the literature [11], is also worth studying.
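To make the “bundled cross-assignment” balance requirement more concrete, the following Python sketch shows a toy greedy heuristic (not the greedy algorithm designed in the paper): each bundle of two entries is given to t experts, and at every step the expert with the smallest current workload and the fewest added co-review pairings is chosen. The cost criterion, parameter names, and tie-breaking rule are illustrative assumptions only.

from itertools import combinations
import numpy as np

def greedy_bundle_assignment(n_bundles, n_experts, t, seed=0):
    # Assign each bundle (a pair of entries) to t experts while keeping
    # per-expert workloads and pairwise co-review counts as balanced as possible.
    rng = np.random.default_rng(seed)
    load = np.zeros(n_experts, dtype=int)              # bundles handled by each expert
    co = np.zeros((n_experts, n_experts), dtype=int)   # co-review counts for each expert pair
    assignment = []
    for _ in range(n_bundles):
        chosen = []
        for _ in range(t):
            best, best_key = None, None
            for e in rng.permutation(n_experts):       # random order breaks ties
                if e in chosen:
                    continue
                # prefer low workload first, then few added co-reviews with already chosen experts
                key = (load[e], sum(co[e, c] for c in chosen))
                if best_key is None or key < best_key:
                    best, best_key = int(e), key
            chosen.append(best)
        for e in chosen:
            load[e] += 1
        for a, b in combinations(chosen, 2):
            co[a, b] += 1
            co[b, a] += 1
        assignment.append(chosen)
    return assignment, load, co

if __name__ == "__main__":
    assignment, load, co = greedy_bundle_assignment(n_bundles=200, n_experts=10, t=3)
    pair_counts = co[np.triu_indices_from(co, k=1)]
    print("workload per expert:", load)
    print("pairwise co-review counts: min", pair_counts.min(), "max", pair_counts.max())

Under these assumptions the heuristic only yields a locally reasonable assignment; as the abstract notes, an algorithm for the globally optimal solution is left for further research.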

Key words: large-scale innovative competitions, scoring error, cross-entropy, experts’ weights

CLC Number: