[1] Dong Shi, Wang Ping, Abbas Khushnood. A survey on deep learning and its applications[J]. Computer Science Review, 2021, 40: 100379.
[2] Pataranutaporn P, Danry V, Leong J, et al. AI-generated characters for supporting personalized learning and well-being[J]. Nature Machine Intelligence, 2021, 3(12): 1013-1022.
[3] Kochan A, Ong S, Guler S, et al. Social media content of idiopathic pulmonary fibrosis groups and pages on Facebook: cross-sectional analysis[J]. JMIR Public Health and Surveillance, 2021, 7(5): e24199.
[4] Timoshenko A, Hauser J R. Identifying customer needs from user-generated content[J]. Marketing Science, 2019, 38(1): 1-20.
[5] OpenAI. GPT-4 technical report[R]. arXiv preprint arXiv:2303.08774, 2023.
[6] Ouyang Long, Wu J, Jiang Xu, et al. Training language models to follow instructions with human feedback[J]. Advances in Neural Information Processing Systems, 2022, 35: 27730-27744.
[7] Kwong C K, Jiang Huimin, Luo X G. AI-based methodology of integrating affective design, engineering, and marketing for defining design specifications of new products[J]. Engineering Applications of Artificial Intelligence, 2016, 47: 49-60.
[8] Ranjan A, Yi K M, Chang J H R, et al. FaceLit: neural 3D relightable faces[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 8619-8628.
[9] Roose K. An AI-generated picture won an art prize. Artists aren’t happy[N]. The New York Times, 2022-09-02.
[10] Hong J W, Peng Qiyao, Williams D. Are you ready for artificial Mozart and Skrillex? An experiment testing expectancy violation theory and AI music[J]. New Media & Society, 2021, 23(7): 1920-1935.
[11] GPT Generative Pretrained Transformer, Thunström A O, Steingrimsson S. Can GPT-3 write an academic paper on itself, with minimal human input?[J]. HAL preprint, 2022, 〈hal-03701250〉, https://hal.archives-ouvertes.fr/hal-03701250/document.
[12] Wang Cong, Zheng Yifeng, Jiang Jinghua, et al. Toward privacy-preserving personalized recommendation services[J]. Engineering, 2018, 4(1): 21-28.
[13] Mehta N, Devarakonda M V. Machine learning, natural language programming, and electronic health records: the next step in the artificial intelligence journey?[J]. Journal of Allergy and Clinical Immunology, 2018, 141(6): 2019-2021.
[14] Roy K, Jaiswal A, Panda P. Towards spike-based machine intelligence with neuromorphic computing[J]. Nature, 2019, 575(7784): 607-617.
[15] Minsky M. A framework for representing knowledge[Z]. Cambridge, MA: MIT, 1974.
[16] Minsky M. Steps toward artificial intelligence[J]. Proceedings of the IRE, 1961, 49(1): 8-30.
[17] Minsky M, Papert S A. Perceptrons[M]. Cambridge, MA: MIT Press, 1969.
[18] Minsky M. K-Lines: a theory of memory[J]. Cognitive Science, 1980, 4(2): 117-133.
[19] Mccarthy J. Applications of circumscription to formalizing common-sense knowledge[J]. Artificial Intelligence, 1986, 28(1): 89-116.
[20] Mccarthy J. Programs with common sense[Z]. London, 1959.
[21] Mccarthy J, Abrahams P W, Edwards D J, et al. LISP 1.5 programmer’s manual[M]. Cambridge, MA: MIT Press, 1962.
[22] Mccarthy J, Brian D, Feldman G, et al. THOR: a display based time sharing system[C]//Proceedings of the Spring Joint Computer Conference, April 18-20, 1967.
[23] Mccarthy J. The home information terminal: a 1970 view[C]//Man and Computer: Proceedings of the International Conference, Bordeaux, 1970. 2000: 48-57.
[24] Newell A, Simon H A. Human problem solving[M]. Englewood Cliffs, NJ: Prentice-hall, 1972.
[25] Newell A. Unified theories of cognition[M]. Harvard University Press, 1994.
[26] Newell A, Simon H A. Computer science as empirical inquiry: symbols and search[M]//ACM Turing Award Lectures. New York: ACM, 2007.
[27] Simon H A. Models of bounded rationality: Empirically grounded economic reason[M]. MIT Press, 1997.
[28] Feigenbaum E A, Simon H A. EPAM-like models of recognition and learning[J]. Cognitive Science, 1984, 8(4): 305-336.
[29] Lindsay R K, Buchanan B G, Feigenbaum E A, et al. DENDRAL: a case study of the first expert system for scientific hypothesis formation[J]. Artificial Intelligence, 1993, 61(2): 209-261.
[30] Feigenbaum E A. The art of artificial intelligence: Themes and case studies of knowledge engineering[C]//Proceedings of the Fifth International Joint Conference on Artificial Intelligence. Boston, 1977.
[31] Shortliffe E H, Buchanan B G, Feigenbaum E A. Knowledge engineering for medical decision making: a review of computer-based clinical decision aids[J]. Proceedings of the IEEE, 1979, 67(9): 1207-1224.
[32] Buchanan B, Sutherland G, Feigenbaum E A. Heuristic DENDRAL: a program for generating explanatory hypotheses in organic chemistry[J]. Machine Intelligence, 1969, 4: 209-254.
[33] Reddy D R. Speech recognition: invited papers presented at the 1974 IEEE symposium[M]. Elsevier, 1975.
[34] Reddy D R, Erman L D, Fennell R D, et al. The Hearsay-I speech understanding system: an example of the recognition process[J]. IEEE Transactions on Computers, 1976, 25(4): 422-431.
[35] Reddy R. Oral history interview with Raj Reddy[Z]. Pittsburgh, Pennsylvania: Charles Babbage Institute, 1991.
[36] Valiant L G. A theory of the learnable[J]. Communications of the ACM, 1984, 27(11): 1134-1142.
[37] Valiant L G. Three problems in computer science[J]. Journal of the ACM, 2003, 50(1): 96-99.
[38] Valiant L G. The complexity of computing the permanent[J]. Theoretical Computer Science, 1979, 8(2): 189-201.
[39] Valiant L G. A bridging model for parallel computation[J]. Communications of the ACM, 1990, 33(8): 103-111.
[40] Pearl J. Reverend Bayes on inference engines: a distributed hierarchical approach[M]//Probabilistic and Causal Inference: The Works of Judea Pearl. 2022: 129-138.
[41] Kim J H, Pearl J. A computational model for causal and diagnostic reasoning in inference systems[C]// Proceedings of the International Joint Conference on Artificial Intelligence, Karlsruhe, West Germany, August 8-12, 1983.
[42] Pearl J, Verma T S. A theory of inferred causation[J]. Studies in Logic & the Foundations of Mathematics, 1995, 134(6): 789-811.
[43] Pearl J. Heuristics: intelligent search strategies for computer problem solving[M]. Addison-Wesley Longman Publishing Co., Inc., 1984.
[44] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.
[45] Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507.
[46] Bengio Y, Ducharme R, Vincent P, et al. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003, 3: 1137-1155.
[47] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144.
[48] Lecun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[49] Lecun Y, Bengio Y. Convolutional networks for images, speech, and time series[M]//The Handbook of Brain Theory and Neural Networks. Cambridge, MA: MIT Press, 1995.
[50] Cramer P. AlphaFold2 and the future of structural biology[J]. Nature Structural & Molecular Biology, 2021, 28(9): 704-705.
[51] Ramesh A, Dhariwal P, Nichol A, et al. Hierarchical text-conditional image generation with CLIP latents[J]. arXiv preprint arXiv:2204.06125, 2022.
[52] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, CA, USA, December, 2017.
[53] Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems-Volume 2. 2014: 3104-3112.
[54] Devlin J, Chang M W, Lee K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[55] Bao Hangbo, Dong Li, Wei Furu, et al. UniLMv2: pseudo-masked language models for unified language model pre-training[C]//Proceedings of the 37th International Conference on Machine Learning, PMLR, 2020.
[56] Li Yujia, Choi D, Chung Junyoung, et al. Competition-level code generation with AlphaCode[J]. Science, 2022, 378(6624): 1092-1097.
[57] Raffel C, Shazeer N, Roberts A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485-5551.
[58] Fedus W, Zoph B, Shazeer N. Switch transformers: scaling to trillion parameter models with simple and efficient sparsity[J]. The Journal of Machine Learning Research, 2022, 23(1): 5232-5270.
[59] Dong Li, Yang Nan, Wang Wenhui, et al. Unified language model pre-training for natural language understanding and generation[C]//Proceedings of the Annual Conference on Neural Information Processing Systems, Vancouver, Canada, December 8-14, 2019.
[60] Smith S, Patwary M, Norick B, et al. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model[J]. arXiv preprint arXiv:2201.11990, 2022.
[61] Bubeck S, Chandrasekaran V, Eldan R, et al. Sparks of artificial general intelligence: early experiments with GPT-4[J]. arXiv preprint arXiv:2303.12712, 2023.
[62] Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training[R]. OpenAI, 2018, https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
[63] Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9.
[64] Brown T, Mann B, Ryder N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[65] Zeng Wei, Ren Xiaozhe, Su Teng, et al. PanGu-α: large-scale autoregressive pretrained Chinese language models with auto-parallel computation[J]. arXiv preprint arXiv:2104.12369, 2021.
[66] Lin Junyang, Men Rui, Yang An, et al. M6: a Chinese multimodal pretrainer[J]. arXiv preprint arXiv:2103.00823, 2021.
[67] Liu Xiao, Zheng Yanan, Du Zhengxiao, et al. GPT understands, too[J]. arXiv preprint arXiv:2103.10385, 2021.
[68] Jumper J, Evans R, Pritzel A, et al. Highly accurate protein structure prediction with AlphaFold[J]. Nature, 2021, 596(7873): 583-589.
[69] Lin Zeming, Akin Halil, Rao Roshan, et al. Evolutionary-scale prediction of atomic-level protein structure with a language model[J]. Science, 2023, 379(6637): 1123-1130.
[70] Ravuri S, Lenc K, Willson M, et al. Skilful precipitation nowcasting using deep generative models of radar[J]. Nature, 2021, 597(7878): 672-677.
[71] Lam R, Sanchez-Gonzalez A, Willson M, et al. GraphCast: learning skillful medium-range global weather forecasting[J]. arXiv preprint arXiv:2212.12794, 2022.
[72] Espeholt L, Agrawal S, Sønderby C, et al. Deep learning for twelve hour precipitation forecasts[J]. Nature Communications, 2022, 13(1): 5145.
[73] Fawzi A, Balog M, Huang A, et al. Discovering faster matrix multiplication algorithms with reinforcement learning[J]. Nature, 2022, 610(7930): 47-53.
[74] Kauers M, Moosbauer J. The FBHHRBNRSSSHK-algorithm for multiplication in Z_2^{5×5} is still not the end of the story[J]. arXiv preprint arXiv:2210.04045, 2022.
[75] Assael Y, Sommerschield T, Shillingford B, et al. Restoring and attributing ancient texts using deep neural networks[J]. Nature, 2022, 603(7900): 280-283.
[76] Coley C W, Thomas Iii D A, Lummiss J A M, et al. A robotic platform for flow synthesis of organic compounds informed by AI planning[J]. Science, 2019, 365(6453): eaax1566.
[77] Rajpurkar P, Chen E, Banerjee O, et al. AI in health and medicine[J]. Nature Medicine, 2022, 28(1): 31-38.
[78] Schiff L, Migliori B, Chen Ye, et al. Integrating deep learning and unbiased automated high-content screening to identify complex disease signatures in human fibroblasts[J]. Nature Communications, 2022, 13(1): 1590.
[79] Osterrieder J, ChatGPT. A primer on deep reinforcement learning for finance[J]. SSRN preprint, 2023, 4316650.
[80] ChatGPT. Comment l’IA va transformer les métiers des ressources humaines? [How will AI transform human-resources professions?][J]. Management & Datascience, 2023, 7(2), https://management-datascience.org/articles/23428/.
[81] King M R, ChatGPT. A conversation on artificial intelligence, chatbots, and plagiarism in higher education[J]. Cellular and Molecular Bioengineering, 2023, 16(1): 1-2.
[82] ChatGPT. OpenAI’s ChatGPT and the prospect of limitless information: a conversation with ChatGPT[J]. Journal of International Affairs, 2022, 75(1): 379-386.
[83] Stokel-Walker C, Van Noorden R. The promise and peril of generative AI[J]. Nature, 2023, 614(7947): 216.