
Table of Contents

    25 April 2023, Volume 32 Issue 4
    Theory Analysis and Methodology Study
    Multi-shipborne UAV Cooperative Mission Planning Based on Dual Population Optimization Algorithm
    YUE Jianyi, SONG Yexin, CHEN Yang
    2023, 32(4):  1-7.  DOI: 10.12005/orms.2023.0107
    Mission planning for multiple shipborne UAVs carrying out coordinated attacks on maritime targets is one of the key technologies for effectively improving the maritime combat capability of UAVs. Unlike land-based UAVs, shipborne UAVs take off from and land on mobile platforms at sea, so there are more candidate take-off and landing points, which increases the complexity of the solution. Furthermore, shipborne UAVs operate in a complex marine environment: as the ship sails, varying sea conditions mean that excessive waves can affect the state of the ship and pose a safety threat to the departure and landing of the UAV. Therefore, the study of shipborne UAV mission planning is of important military significance. Based on the principles of multi-objective optimization, this paper focuses on modeling and solving the UAV mission planning problem against the background of a shipborne UAV coordinated attack mission on targets at sea.
    In mission planning modeling, multiple take-off and landing points are considered because the take-off and landing platform of a shipborne UAV is mobile, and a time window constraint is added to each take-off and landing point to reflect the complex marine environment in which shipborne UAVs operate. In addition, the shipborne UAV mission planning model is established with the goals of maximizing attack revenue and minimizing UAV damage, combining factors such as the flight time required for the UAV to complete the mission and the survival threat.
    As for the solution, because the model is strongly constrained, feasible solutions are likely to evolve into infeasible solutions when traditional algorithms are applied, which easily traps the algorithm in a local optimum and makes it difficult to find the Pareto-front solution set. This paper therefore adopts a dual population optimization algorithm in which feasible and infeasible solutions evolve in parallel and are not compared directly, so that the algorithm avoids falling into a local optimum. The reasonableness of the algorithm is verified under different hypothetical parameters, and the simulation results, compared with classical multi-objective optimization algorithms such as NSGA-II and SPEA2, show the feasibility and effectiveness of the method.
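    As a purely illustrative sketch of the dual-population idea described above (not the paper's algorithm), the following Python fragment evolves feasible and infeasible individuals in separate pools, keeping non-dominated solutions in the feasible pool and the least-violating solutions in the infeasible pool; the toy objectives, mutation operator and constraint are placeholders for the actual mission-planning model.

```python
import random

def evaluate(x):
    """Toy stand-in for the UAV mission-planning model: two objectives
    (negative attack revenue, UAV damage) and a constraint-violation measure
    (e.g., violated take-off/landing time windows)."""
    f1 = -sum(x)                       # maximize revenue -> minimize -revenue
    f2 = sum(xi ** 2 for xi in x)      # damage proxy
    violation = max(0.0, sum(x) - 3)   # hypothetical time-window budget
    return (f1, f2), violation

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def evolve(pop_size=40, dim=4, generations=100):
    pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        children = [[min(1.0, max(0.0, xi + random.gauss(0, 0.1)))
                     for xi in random.choice(pop)] for _ in range(pop_size)]
        evaluated = [(x, *evaluate(x)) for x in pop + children]
        feasible = [e for e in evaluated if e[2] == 0.0]
        infeasible = [e for e in evaluated if e[2] > 0.0]
        # Feasible pool: keep non-dominated individuals (Pareto pressure).
        feas_keep = [x for x, f, _ in feasible
                     if not any(dominates(g, f) for _, g, _ in feasible)]
        # Infeasible pool: keep the least-violating individuals (repair pressure).
        infeasible.sort(key=lambda e: e[2])
        inf_keep = [x for x, _, _ in infeasible[:pop_size // 2]]
        pop = (feas_keep + inf_keep)[:pop_size] or children
    return [evaluate(x)[0] for x in pop if evaluate(x)[1] == 0.0]

if __name__ == "__main__":
    front = evolve()
    print(f"{len(front)} feasible objective vectors approximating the Pareto front")
```

    Because the two pools are never compared against each other, promising but currently infeasible individuals are not discarded prematurely, which is the mechanism the abstract credits for avoiding premature convergence.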
    A Semivectorial Bilevel Programming Model of Inter-basin Water Transfer-supply Project
    LYU Yibing, WAN Zhongping, HU Tiesong, GUO Xuning
    2023, 32(4):  8-13.  DOI: 10.12005/orms.2023.0108
    Freshwater resources are the basis for the survival and development of human society, as well as basic and strategic resources for the healthy and sustainable development of the national economy. Statistically, China is rich in total freshwater resources, yet it is also one of the countries with the poorest per capita freshwater resources. Therefore, the protection and rational use of freshwater resources has attracted the close attention of the Chinese government. On the other hand, the spatiotemporal distribution of freshwater resources in China is extremely uneven and does not match the distribution of population, cultivated land, minerals and other resources, so the limited freshwater resources must be allocated rationally in order to maximize their effectiveness. At present, the joint operation of inter-basin reservoirs has become a feasible and effective means of addressing the uneven distribution of water resources in time and space in China. In the existing single-objective bilevel programming model for the joint operation of inter-basin water supply reservoirs, established by the authors in earlier work, the maximum weighted sum of the water supply guarantee rates of the reservoir to each water user is taken as the lower-level objective function, with the weights fixed as constants. It should be noted that this approach may not guarantee optimal regulation of water transfer and supply, because different water transfer and supply rules correspond to different weights. To address this defect, this paper constructs a semivectorial bilevel programming model for deriving the inter-basin multi-reservoir operating rule. The model takes minimizing the deviation from the expected water allocation and the abandoned water of the receiving reservoirs as the upper-level objective, and minimizing the differences between the expected and actual water supply indexes of the multi-reservoir system as the lower-level objectives. Following the structural characteristics of the constructed model, a parallel-population hybrid evolutionary particle swarm optimization algorithm is designed to obtain its "optimistic optimal solution". In dealing with the constructed model, the weight of the lower level is regarded as an upper-level variable. Finally, four water transfer scenarios are taken as examples to verify the feasibility of the model and algorithm. The analysis shows that, compared with the existing single-objective bilevel programming model for the joint operation of inter-basin water supply reservoirs, the semivectorial bilevel programming model constructed in this paper can generally reduce the generalized water shortage index of the main water users of each reservoir by 15%~25%, which further confirms the validity of the model for deriving the inter-basin multi-reservoir operating rule. It is worth pointing out that the semivectorial bilevel programming model for the joint operation of inter-basin water supply reservoirs proposed in this paper provides a new idea for the study of inter-basin water transfer and supply problems, and makes it more likely to obtain the optimal water transfer and supply rules. In addition, obtaining the optimal water transfer and supply rules also depends on whether the algorithm used can find the global optimal solution of the model.
The particle swarm optimization algorithm suffers from premature convergence and may only obtain a local optimal solution of the problem. How to design a more effective global optimization algorithm to solve the model is a direction for further research.
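    For readers unfamiliar with the model class, a generic semivectorial bilevel form consistent with the description above can be written as follows, where the lower-level weights λ are treated as an upper-level decision variable (the symbols F, f_k and Y(x) are illustrative and not the paper's exact notation):

```latex
\min_{x,\ \lambda \in \Lambda}\; F(x, y_{\lambda})
\quad \text{s.t.} \quad
y_{\lambda} \in \operatorname*{arg\,min}_{y \in Y(x)} \sum_{k} \lambda_k f_k(x, y),
\qquad
\Lambda = \Bigl\{\lambda \ge 0 : \sum_{k} \lambda_k = 1\Bigr\}.
```

    Here F would correspond to the deviation from the expected water allocation plus the abandoned water of the receiving reservoirs, and the f_k to the differences between expected and actual water supply indexes of the reservoirs; the "optimistic optimal solution" selects the weights λ and the lower-level response most favourable to the upper level.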
    Evolutionary Game Analysis of State-owned Enterprise Audit Supervision Subject Strategy Selection under Blockchain Empowerment
    TAN Chunqiao, DANG Huaping, QIN Zhanhui
    2023, 32(4):  14-22.  DOI: 10.12005/orms.2023.0109
    With the deepening of state-owned enterprise reform in China, problems in the audit supervision mechanism of state-owned enterprises, such as information asymmetry, multiple and repetitive supervision, and the inability to achieve dynamic auditing, have become increasingly prominent. These problems have seriously hindered the realization of the "full audit coverage" goal and the construction of a "grand pattern of state-owned assets supervision" in China. Blockchain technology, with features such as decentralization, transparency and traceability, provides an opportunity to address the challenges faced by state-owned enterprise audit supervision. Only a few studies have combined blockchain technology with auditing, so it is worth analyzing the role of blockchain technology in state-owned enterprise audit supervision and the impact of its cost-sharing ratio on the strategies of audit supervision entities. Moreover, exploring state-owned enterprise audit supervision from the perspective of blockchain is not only a practical exploration of blockchain technology but also a further innovation and improvement of China's audit theory and blockchain theory. The conclusions can provide theoretical guidance and reference for implementing blockchain-enabled audit supervision in the future.
    This paper takes the audit supervision process of state-owned enterprises empowered by blockchain technology as the research object, constructs a blockchain-based state-owned enterprise audit supervision framework, and discusses the role of blockchain technology in the audit supervision process. Using evolutionary game theory under the bounded rationality hypothesis, an evolutionary game model is constructed among three parties: the state-owned enterprise (operation), the auditing institution (audit), and the public (supervision). The corresponding replication dynamic equations and the evolutionary stable points of each party are obtained with Mathematica 12.0; based on the analytical results, the evolutionary phase diagrams of each game player are drawn, and the derivative of each replication dynamic equation is used to identify the stable evolution point of each subject. Combining the replication dynamic equations of the three game players yields a system of equations whose solution gives two equilibrium solutions for the system in the long and short audit supervision processes. We then verify the stability of the equilibrium solutions under different strategies through the Jacobian matrix, and explore the causal relationships of the complex system and the interaction mechanism of strategy selection among the game players through system dynamics theory. Using the system dynamics simulation software Vensim, a three-party SD model of the state-owned enterprise audit process is established. Finally, we verify the rationality and stability of the game model through numerical valuation and simulation analysis, analyze the factors influencing the strategy choices of the three parties, compare the changes in the evolution results of each entity, and combine actual phenomena to explore in depth why each entity implements its strategy.
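    The following minimal Python sketch shows how replication dynamic equations of the kind described above can be iterated numerically; the three payoff-gap expressions and all parameter values are hypothetical placeholders rather than the paper's calibrated audit-supervision payoffs.

```python
def replicator_step(x, y, z, p, dt=0.01):
    """One Euler step of three coupled replicator equations.
    x, y, z: probabilities that the enterprise complies, the auditor audits
    strictly, and the public supervises actively. All payoff terms are
    hypothetical placeholders, not the paper's values."""
    dU_x = -p["compliance_cost"] + y * p["penalty"] + z * p["exposure_loss"]
    dU_y = -p["audit_cost"] + (1 - x) * p["reputation_loss_avoided"]
    dU_z = -p["supervision_cost"] + (1 - x) * p["public_benefit"]
    x += dt * x * (1 - x) * dU_x
    y += dt * y * (1 - y) * dU_y
    z += dt * z * (1 - z) * dU_z
    return x, y, z

params = {"compliance_cost": 1.0, "penalty": 2.0, "exposure_loss": 1.0,
          "audit_cost": 0.5, "reputation_loss_avoided": 1.5,
          "supervision_cost": 0.3, "public_benefit": 1.2}
x, y, z = 0.3, 0.4, 0.5
for _ in range(20000):
    x, y, z = replicator_step(x, y, z, params)
print(f"state after simulation: ({x:.2f}, {y:.2f}, {z:.2f})")
```

    In the paper the same machinery is applied to the audit-supervision payoff matrix, with the Jacobian of the coupled system used to confirm which rest points are evolutionarily stable.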
    Sensitivity analysis of the main influencing factors shows that the state-owned enterprise is sensitive to changes in the penalties for non-compliance, the audit institution is constrained by reputation loss, and the public is affected by the benefits of active supervision. Based on the simulation results, suggestions are given to improve China's audit supervision mechanism: increasing the penalties for state-owned enterprises raises the cost of illegal behavior and suppresses non-compliance; expanding the loss of audit reputation pushes audit institutions to conduct audits lawfully; and increasing the public's monitoring revenue enhances the enthusiasm for general supervision and attracts more people to participate in governance. Introducing the media as a third party in the evolutionary game, or using actual enterprise data to explore the strategy selection and sensitivity of the game players, are possible directions for future research.
    Cooperative Games on A Class of Multi-agent Flow-shop Scheduling Problem
    GONG Hua, SUN Wenjuan, LIU Peng, XU Ke
    2023, 32(4):  23-28.  DOI: 10.12005/orms.2023.0110
    The flow-shop scheduling problem is one of the most widely studied problems in the field of industrial production. It is a simplified model of many actual assembly-line production scheduling problems and is widely used in the discrete manufacturing and process industries. Multi-agent production scheduling refers to a setting in which multiple agents, each serving different customers, compete for equipment resources to process their customers' jobs. Traditional multi-agent production scheduling generally takes the production enterprise as the main body: according to the production environment, delivery time, processing capacity and other constraints, the agents and their jobs are scheduled centrally to optimize overall objectives such as the maximum completion time (makespan), earliness and tardiness penalties, total weighted completion time, etc. In actual production, the jobs of different agents may come from different customers, who have their own needs and optimization goals. When resources are limited, agents or customers may form a coalition through cooperation and optimize scheduling within the coalition, so as to reduce production costs or gain more profit. Therefore, it is of practical significance to use cooperative game theory to study the multi-agent production scheduling problem.
    In the field of production scheduling, Curiel et al. first applied cooperative sequencing games to the single machine scheduling problem. They considered n agents, each having one job to be processed, with each agent's cost a linear function of the job's completion time, and proved that the core of the cooperative sequencing game on the single machine scheduling problem is nonempty. On this basis, this paper extends cooperative sequencing games to the multi-agent flow-shop scheduling problem.
    In the class of multi-agent flow-shop scheduling problems studied in this paper, there are g agents, and each agent has multiple jobs belonging to different customers to be processed. Each job must pass through m processes along the same processing path, and there is only one machine for each process. The processing time of each job on each machine is known and depends only on the process, that is, all jobs have the same processing time at the same process.
    In this paper, we consider that each customer's cost is a linearly weighted function of the job's completion time, and that the agents form an initial processing order σA0 based on their arrival times. Some agents can form a coalition through cooperation, and the total costs of the agents can be reduced by rescheduling within the coalition. Under the premise that customers follow the agent's scheduling, a cooperative game model of the multi-agent flow-shop scheduling problem is established with minimizing total customer cost as the optimization objective. In the game model, the agents are the players, and the maximum cost savings obtained from the agents' cooperation is the worth of a coalition. It is proved that the cooperative games of the multi-agent flow-shop scheduling problem with equal processing times at the same process for each job are superadditive and σA0-component additive, and that the cost savings allocation obtained by the Equal Gain Splitting rule (EGS rule) is a core element of the game. When allocating customers' cost savings, the cost savings obtained from the agents' cooperation are distributed equally to each customer, and the cost savings obtained from the customers' cooperation are also distributed according to the EGS rule, ensuring the fairness of the cost allocation method and the stability of customers' cooperation. Finally, the properties of the cooperative games and the cost allocation method proposed in this paper are verified by an example.
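    For reference, a small sketch of the Equal Gain Splitting rule in its classical single-machine form (Curiel et al.), which the paper extends to the flow-shop setting, is given below; the job data and linear cost rates are invented for illustration, and the flow-shop generalization itself is not reproduced.

```python
def egs_allocation(jobs):
    """jobs: list of (name, processing_time, cost_rate) in the initial order.
    Cost of a job = cost_rate * completion_time. Returns the Equal Gain
    Splitting allocation of the total cost savings achievable by reordering."""
    n = len(jobs)
    gain = {}
    for i in range(n):
        for j in range(i + 1, n):  # i precedes j in the initial order
            _, p_i, a_i = jobs[i]
            _, p_j, a_j = jobs[j]
            # Saving obtained if j should actually precede i (urgency rule).
            gain[(i, j)] = max(0.0, a_j * p_i - a_i * p_j)
    alloc = [0.0] * n
    for (i, j), g in gain.items():
        alloc[i] += g / 2.0   # each pairwise gain is split equally
        alloc[j] += g / 2.0
    return {jobs[k][0]: alloc[k] for k in range(n)}

# Hypothetical initial order with made-up processing times and cost rates.
print(egs_allocation([("A", 3, 1.0), ("B", 1, 2.0), ("C", 2, 1.5)]))
```

    For this toy initial order A, B, C the total achievable saving of 7.5 is split as 3.75, 2.5 and 1.25, and every pairwise gain is shared half-and-half by the two players involved, which is the fairness property the abstract relies on.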
    In the future, we will pay more attention to general flow-shop scheduling problems and other production scheduling problems with more complex shop environments. With the goal of minimizing the nonlinear costs of multiple customers, we will use cooperative game theory to study such scheduling problems, analyze the properties of the cooperative games and design reasonable cost allocation methods.
    Evolutionary Game of Supply Chain Digital Decision-making with Reward and Punishment
    LIU Mingwu, WANG Xiaofei, WANG Yong
    2023, 32(4):  29-34.  DOI: 10.12005/orms.2023.0111
    The rapid advancements in new-generation digital technologies such as blockchain, big data, and artificial intelligence have led to a paradigm shift from the traditional linear supply chain to the digitized supply chain. This transformation has prompted a reorganization of conventional supply chain structures, modes of operation, and individualization. The incorporation of digital technologies can enhance the competitiveness of enterprise supply chains and facilitate the transformation and upgrading of supply chains. Digitization has become the prevailing industry consensus for the future development direction of supply chain management. However, recent findings from Accenture’s “China Enterprise Digital Transformation Index” report reveal that the majority of Chinese manufacturing enterprises are still in the initial stage of their digitalization journey. In order to achieve digital transformation and upgrading of the manufacturing industry, the active collaboration of supply chain node enterprises is requisite. However, the application and utilization of digital technology can significantly influence the operational decisions and actions of enterprises, thereby affecting the development trajectory of China’s manufacturing supply chain. The successful integration of digitalization and industrial and supply chains necessitates comprehensive theoretical investigation. During supply chain digitalization, there can exist spillover effects between suppliers and manufacturers when investing in technologies such as blockchain and big data, potentially hampering the willingness of supply chain enterprises to digitize and impacting the digital acceleration of manufacturing supply chains. From this point of view, an evolutionary game model is constructed based on the digital decision-making of supply chain enterprises. The evolutionary stability strategy of the model is analyzed through the application of evolutionary game theory. The determinants of digital decision-making of supply chain enterprises under different scenarios are examined, and the digital decision-making thresholds and policies of supply chain enterprises under the government’s incentives and penalties mechanism are explored. Lastly, the simulation analysis evaluates the impact of diverse incentives and penalties on the evolutionary outcomes and trends. The results show that the choice of digital investment strategy among suppliers and manufacturers is influenced by investment returns, resulting in seven scenarios where an evolutionary stable strategy exists. Among them, scenarios 2 and 6, as well as scenarios 3 and 5, share the same evolutionary stable strategy. Government incentives and penalties can effectively encourage supply chain companies to choose digital investments. Numerical examples show that under the conditions of satisfying the threshold of government subsidy or penalty policies, diverse incentives and sanctions, as well as initial investment probabilities, have a significant impact on the decision-making strategies of enterprises regarding investment in the digitalization of supply chains. The high cost of digital investment is the fundamental reason inhibiting the digital development of supply chain enterprises, and most Chinese enterprises are still in the early exploration stage of supply chain digitalization. 
In a follow-up study, we will address the challenges of high investment costs for supply chain digitalization and digital decision-making by discussing the specific fiscal and tax policies implemented by the government regarding supply chain digital decision-making. Additionally, we will establish a development-stage model to evaluate the maturity level of supply chain digitalization, helping supply chain node enterprises identify their current digitalization stage, so that the results of our research will be more practical.
    Evolution Game Analysis among Government, Enterprises and Low Carbon Service Providers under Low Carbon Background
    ZHOU Zehui, ZHANG Guitao, YIN Xiaona
    2023, 32(4):  35-40.  DOI: 10.12005/orms.2023.0112
    With increasingly serious problems such as global warming and industrial production pollution, the government attaches more importance to carbon emission reduction, and low-carbon production by enterprises has become an urgent issue. Given the government's urgent call for low-carbon production and consumers' preference for low-carbon products, enterprises are turning to a low-carbon production development mode. However, it is difficult for most enterprises to implement low-carbon production independently, due to a lack of funds or immature carbon emission reduction technology. Therefore, this paper introduces low-carbon service providers to address this problem. The goal of low-carbon service providers is to provide carbon emission reduction technical services for manufacturing enterprises that are unable to carry out low-carbon production on their own but have a strong desire to reduce emissions. To ease this situation, the government, enterprises and low-carbon service providers are taken as the main bodies of an evolutionary game, and a three-party evolutionary game model is constructed. The game process of the three-party stability strategies is analyzed using evolutionary game theory. Finally, numerical simulation in MATLAB is used to analyze the evolution paths of the government, enterprises and low-carbon service providers. The simulation results are intended to expand the research ideas of later scholars, enrich the evolutionary game study of the government, enterprises and low-carbon service providers, promote the development of low-carbon service providers in the low-carbon economy, and provide a theoretical basis for the development of China's low-carbon economy.
    Traditional game theory has two features: first, it assumes that all game players are completely rational throughout the game; second, decisions are made under complete information symmetry, and each player's interest only considers the game among the designated players. By fully combining traditional game theory with Darwinian biological evolution, evolutionary game theory was born. Payoff matrix: the payoff matrix represents the benefits of the different game players in a single matrix according to their strategy combinations. The benefit of a game player in the system is related not only to its own strategy, but also to the strategy selection of the other game players; the payoff matrix is mainly used to describe the strategy selection and payoff situation of all game players in the model, and is an important basis for game players to adjust their strategies. Replication dynamic equation: the replicator dynamic equation was proposed by Taylor and Jonker in 1978 and was initially used to describe how game players in the system adjust their decision-making behavior over time. In evolutionary game theory, game players adjust their strategies according to their interests, giving up low-return strategies and choosing high-return strategies, while other players in the system also adjust their strategies in response. Such rules of change can be described quantitatively, and the tool for this characterization is the replicator dynamic equation. Evolutionary stability strategy: as an important part of evolutionary game theory, the evolutionarily stable strategy plays an important role in its development. In 1973, Smith and Price combined classical biological evolution with traditional game theory to obtain the evolutionarily stable strategy based on population study. An evolutionarily stable strategy means that the game players in the system reach an equilibrium state after repeated learning and adjustment, but this state can be broken: when some factors of the system change, the game players learn and adjust their strategies according to the latest situation to maximize their benefits, and over time the system eventually reaches a steady state again. This steady state corresponds to the evolutionary stability strategy.
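    For concreteness, the replicator dynamic referred to above takes the following standard form for a single population with two strategies (generic symbols, not this paper's specific payoffs):

```latex
\frac{dx}{dt} \;=\; x\left(U_1 - \bar{U}\right) \;=\; x(1-x)\left(U_1 - U_2\right),
\qquad \bar{U} = x\,U_1 + (1-x)\,U_2 ,
```

    where x is the share of players using strategy 1 and U_1, U_2 are the expected payoffs of the two strategies. A rest point is an evolutionarily stable strategy when small deviations from it are driven back, i.e., when the right-hand side has a negative derivative at that point.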
    The cost of government regulation has a negative effect on the government’s choice of regulation strategy. Firms are more sensitive to penalties than subsidies. By reducing the cost of government regulation, increasing the penalty for enterprises that do not implement low-carbon production and reducing the service cost of low-carbon service providers, we can effectively promote the development of the tripartite evolutionary game model to a stable strategy, so as to promote the development of our low-carbon economy.
    At present, there are few studies on low-carbon service providers by scholars, and even fewer studies on low-carbon service providers in the low-carbon evolutionary game. Therefore, when building the evolutionary game model, the consideration of whether there are other interest relationships between the government and low-carbon service providers and whether there are other interest contractual relationships between enterprises and low-carbon service providers may be flawed. The relationship among the government, enterprises and low-carbon service providers should be discussed in the future.
    Study on Dynamic Subgroup Identification and Management Mechanism for Large-scale Group Risk Emergency Decision
    XU Xuanhua, LYU Jie, CHEN Xiaohong
    2023, 32(4):  41-46.  DOI: 10.12005/orms.2023.0000
    The decision-making experts in the decision-making group come from a wide range of sources, with different knowledge backgrounds and emergency experience, and making use of the collective wisdom of the decision-making group is conducive to resolving emergency incidents. However, emergency events are highly complex, uncertain, and subject to dynamic mutation, and how to make good use of the advantages of large decision-making groups while properly dealing with hesitation risks and conflicts of opinion is an urgent problem in emergency decision-making.
    In the literature on the risk of large-group decision-making, most studies do not consider the consensus reaching process. In the context of emergency decision-making, there is much unknown information in the early stage of decision-making; the consensus reaching process is also a process of exchanging opinions and modifying preferences among experts, and a lack of consensus will lead to a decline in the quality of decision-making. In order to obtain results with high decision-making validity, two conditions need to be met: first, the final ranking of alternatives needs to be recognized by most decision-making experts, that is, the consensus level of the large group is high enough; second, the degree of uncertainty of the group decision-making information is low.
    Both consensus and risk affect the quality of decisions in large-group emergency decision-making, yet few articles consider both. In order to obtain more robust decision-making results, a decision-making validity measurement method is first proposed based on the consensus level and the hesitant risk level. The large group is then dynamically divided along the two dimensions of consensus and hesitant risk, yielding four basic subgroups: the core subgroup, the bias subgroup, the risk subgroup and the invalidity subgroup. Finally, with the goal of improving decision-making validity, the idea of "divide and conquer" is used to build corresponding management mechanisms according to the characteristics of each subgroup, so that the decision-making results are more stable.
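    A schematic sketch of this two-dimensional partition is shown below; the thresholds and the assignment of quadrants to subgroup names follow a natural reading of the description above and are illustrative only, since the paper's consensus and hesitant-risk measures are computed from the experts' preference information.

```python
def classify_subgroups(experts, consensus_threshold=0.8, risk_threshold=0.3):
    """experts: dict mapping expert id -> (consensus_level, hesitant_risk_level),
    both assumed to lie in [0, 1]. Threshold values are illustrative placeholders."""
    subgroups = {"core": [], "bias": [], "risk": [], "invalidity": []}
    for eid, (consensus, risk) in experts.items():
        if consensus >= consensus_threshold and risk <= risk_threshold:
            subgroups["core"].append(eid)        # high consensus, low hesitancy
        elif consensus < consensus_threshold and risk <= risk_threshold:
            subgroups["bias"].append(eid)        # opinion deviates, low hesitancy
        elif consensus >= consensus_threshold and risk > risk_threshold:
            subgroups["risk"].append(eid)        # agrees but highly hesitant
        else:
            subgroups["invalidity"].append(eid)  # deviating and hesitant
    return subgroups

print(classify_subgroups({"e1": (0.9, 0.1), "e2": (0.6, 0.2),
                          "e3": (0.85, 0.5), "e4": (0.5, 0.6)}))
```

    Each subgroup can then be handled by its own management mechanism (e.g., preference feedback for the bias subgroup, uncertainty reduction for the risk subgroup), which is the "divide and conquer" idea mentioned above.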
    After the outbreak of the epidemic, the government immediately organized a number of relevant departments, with a total of 20 experts, to discuss plans to control the epidemic. According to the epidemic prevention situation, the candidate plans were a blocking strategy, an influenza-type strategy, reducing the flow of people, and appropriate measures to increase social distance. Using the method proposed in this paper, the blocking strategy is obtained as the optimal plan.
    Compared with research that only considers the level of consensus or the risk of hesitation, this paper aims at decision validity, and proposes a subgroup identification and management method, which can obtain higher quality decision-making results and can be applied in a wider range of decision-making situations.
    In the context of emergency decision-making by risky large groups, many issues concerning how to define and improve decision-making validity are still worthy of in-depth study, such as the impact of minority opinions and of social network relationships on large-group decision-making; future research will further expand the management mechanisms of the subgroups.
    Interactive Dynamic Intuitionistic Fuzzy VIKOR Method Based on Prospect Theory and Its Application in Zero-waste City Construction
    MENG Fanyong, JIANG Lei
    2023, 32(4):  47-52.  DOI: 10.12005/orms.2023.0114
    In view of the complexity and multi-period characteristics of real decision-making problems, traditional multi-attribute decision-making methods are insufficient for solving intricate real decision-making problems. The dynamic multi-attribute group decision-making method, which involves multiple periods and multiple experts, has been proposed as a powerful tool for addressing this type of problem. In the decision-making process, two important aspects influence the final decision results: the weighting information and the bounded rationality of the decision makers.
    With respect to interactive dynamic multi-attribute group decision-making problems in which the weighting information is incompletely known and attribute values are expressed by intuitionistic fuzzy numbers, this paper proposes a hybrid decision-making method: prospect theory + VIKOR. Considering the interactive characteristics of the dynamic multi-attribute group decision-making problem, a generalized Shapley intuitionistic fuzzy Frank-Choquet average operator is introduced to fuse interactive evaluation information. Additionally, prospect intuitionistic fuzzy decision-making matrices are calculated via prospect theory to cope with the bounded rationality of decision makers. How to obtain the weighting information is also studied: optimization models based on the intuitionistic fuzzy cosine similarity measure and the Shapley function are constructed to obtain the optimal weights of experts, periods, and attributes. On this basis, an interactive dynamic intuitionistic fuzzy multi-attribute group decision-making method is proposed.
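    The bounded-rationality treatment mentioned above typically relies on the standard prospect-theory value function of Tversky and Kahneman; its usual form relative to a reference point r is shown below as an illustration, not as the paper's exact specification:

```latex
v(x) \;=\;
\begin{cases}
(x-r)^{\alpha}, & x \ge r,\\
-\lambda\,(r-x)^{\beta}, & x < r,
\end{cases}
\qquad 0<\alpha,\beta\le 1,\ \ \lambda>1 ,
```

    so gains are discounted by diminishing sensitivity while losses are amplified by the loss-aversion coefficient λ.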
    The new method is applied to a case study: evaluating the high-level treatment of general industrial solid waste in zero-waste city construction. A comparison with existing methods shows that the new method has merits for addressing complex decision-making problems, and indicates the importance of considering the interactions among the various weights and the decision makers' bounded rationality.
    To ensure that the final results represent the decision makers' opinions at a preset level, the consensus process can be further incorporated into the new method. In addition, the new method can be extended to more complex settings, including large-scale group decision-making and group decision-making under social networks.
    Dynamic Strategy of Closed-loop Supply Chain Considering Involvement of A Big Data Service Provider
    WANG Daoping, LIANG Sihan, ZHOU Yu
    2023, 32(4):  53-60.  DOI: 10.12005/orms.2023.0115
    Rapid economic development and rapid population growth have exacerbated environmental pollution and resource shortages, and many manufacturing enterprises have begun to incorporate the closed-loop supply chain into their development systems. There are many interference factors in the closed-loop supply chain; enterprise decision-makers must face the impact of these factors and make timely decisions, and improper handling may damage the reputation of the enterprise. In the information age, having valuable information is conducive to enterprise decision-making and planning, and big data service providers have therefore emerged. There are various types of big data service providers, including those that provide technology-based services and those that provide data resources. Big data services have been widely used, and enterprises can choose the service type according to their own needs, such as product recommendation, marketing assistance and recycling assistance. When the closed-loop supply chain faces multiple interference factors and the participation of big data service providers, the strategies of supply chain members become key, and the impact of big data service provider participation on enterprises deserves further study. In the closed-loop supply chain, considering the involvement of a big data service provider and a recovery rate affected by factors such as goodwill and time, this paper studies and compares the equilibrium strategies of supply chain members under three situations: no big data service provider participation, a big data service provider assisting recovery, and a big data service provider assisting marketing. The paper describes the dynamic change of the recovery rate through an Ito process, and analyzes the influence of the big data service fee sharing ratio and of the sensitivity coefficient of goodwill to the recovery rate on supply chain members. The results show that the recovery rate of used products and the decisions and profits of closed-loop supply chain members can reach their expected values in a short time, and fluctuate around these expected values due to the interference of random factors; the retailer is more aware of market changes and profit; and the participation of a big data service provider can, to some extent, improve the recovery rate, the recycling effort level of the manufacturer and the profits of closed-loop supply chain members. When the external environment places more constraints and restrictions on the recycling of waste products, under the same big data service level, assisting marketing is more conducive to improving recycling effort and profit than assisting recycling. The managerial implications are as follows: (1)Although the recovery rate and the decision variables and profits of supply chain members can reach their expected values in a short time, they remain in a fluctuating state due to the interference of random factors; in the Internet era, the rapid spread of information exacerbates this volatility.
Enterprises should pay more attention to changes in the external environment and adjust their strategies in time. (2)As retailers perceive and respond to the market more quickly, manufacturers should maintain a good cooperative relationship with retailers, share business information, jointly discuss the sharing ratio of big data service costs, reduce the negative impact of uncertainty, and enhance the stability of pricing, recovery efforts and profits. (3)In the era of big data, enterprises should have the ability to mine valuable information from massive data and can consider whether they need the assistance of big data service providers. When selecting the type of big data service, they should consider the overall environment, including the external environment (such as the policies of the government and other third-party institutions, the industrial environment and the economic situation) and the internal environment (such as the enterprise strategy and the enterprise's own capabilities); this is conducive to the sustainable development of enterprises and the environment. This study considered the case in which all products are sold offline. However, with the development of the Internet, many enterprises have started an online and offline dual-channel sales model, so considering dual-channel sales together with the participation of big data service providers in the closed-loop supply chain will be the focus of future research.
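    The "dynamic change of the recovery rate through an Ito process" can be illustrated with a simple Euler-Maruyama simulation of a mean-reverting stochastic differential equation; the drift, volatility and parameter values below are invented for illustration and are not the paper's calibrated dynamics.

```python
import random

def simulate_recovery_rate(tau0=0.2, target=0.6, kappa=0.8, sigma=0.05,
                           dt=0.01, steps=2000, seed=42):
    """Euler-Maruyama discretization of a mean-reverting Ito process
    d tau = kappa*(target - tau) dt + sigma dW, truncated to [0, 1].
    'target' stands in for the expected recovery rate driven by goodwill and
    recycling effort; all parameter values are hypothetical."""
    random.seed(seed)
    tau = tau0
    path = [tau]
    for _ in range(steps):
        dW = random.gauss(0.0, dt ** 0.5)
        tau += kappa * (target - tau) * dt + sigma * dW
        tau = min(1.0, max(0.0, tau))   # recovery rate stays a proportion
        path.append(tau)
    return path

path = simulate_recovery_rate()
print(f"recovery rate after {len(path) - 1} steps: {path[-1]:.3f} "
      "(fluctuating around the assumed expected value 0.6)")
```

    The sample path rises quickly towards the expected level and then fluctuates around it because of the noise term, which mirrors the qualitative finding reported in the abstract.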
    Equilibrium Path Analysis of Price Cheating on Remanufactured Products in Closed-loop Supply Chain
    YANG Mingge, SHENG Xin, LIANG Xiaozhen
    2023, 32(4):  61-70.  DOI: 10.12005/orms.2023.0116
    In recent years, the rapid development of the remanufacturing industry has promoted the transformation of the traditional supply chain into the closed-loop supply chain. Due to the lack of market supervision and of corporate integrity, price cheating in the remanufactured product market occurs frequently. In order to obtain more profit, sellers of remanufactured products may spend a certain camouflage cost to disguise remanufactured products as new products and sell them at the price of new products. This behavior of the sellers is price cheating on remanufactured products. It not only severely damages the interests of consumers, but also hinders the healthy development of the remanufactured product market. In order to protect consumers and promote the healthy development of the remanufactured product market, we study price cheating on remanufactured products in the back-to-sale link of the closed-loop supply chain and discuss its internal mechanism and solution. We first consider the camouflage cost in the narrow sense and then in the broad sense, and compare the results of the two cases.
    Firstly, considering the camouflage cost in the narrow sense, we construct an evolutionary game model composed of the seller and the consumer, and analyze the evolutionary stability of mixed strategies through the Jacobian matrix. The results show that when the difference between the camouflaged selling price and the camouflage cost of the remanufactured products is smaller than the non-camouflaged selling price of the remanufactured products, the seller chooses not to camouflage and the consumer chooses to buy; at this point the remanufactured product market is effective. Thus, in the evolutionary game between the seller and the consumer, increasing the camouflage cost can prompt the system to evolve to an effective state in which the seller chooses not to camouflage and the consumer chooses to buy. In fact, an increase in the camouflage cost reduces the seller's income from selling camouflaged remanufactured products. If this income is smaller than the income from selling non-camouflaged remanufactured products, the seller has no incentive to camouflage and chooses not to do so. Once the seller chooses not to camouflage, the consumer chooses to buy. In the end, the two sides evolve into the effective state in which the seller does not camouflage and the consumer buys, forming a benign development.
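    With hypothetical symbols p_n for the camouflaged (new-product) selling price, p_r for the non-camouflaged remanufactured-product price and c for the camouflage cost, the effectiveness condition discussed above can be written compactly as

```latex
p_n - c \;<\; p_r ,
```

    so raising the camouflage cost c until the cheating margin falls below the honest selling price removes the seller's incentive to disguise remanufactured products.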
    Secondly, considering the government's rewards and punishments for the seller, that is, the camouflage cost in the broad sense, we establish an evolutionary game model composed of the seller, the consumer and the government, and analyze the evolutionary stability of mixed strategies through the Jacobian matrix. Specifically, using a step-by-step analysis method, the three-party evolutionary game is decomposed into two two-party evolutionary games, and the evolutionary stability strategy of the three-party game is obtained by comprehensive analysis of their results. The results show that when the difference between the camouflaged selling price and the camouflage cost of the remanufactured products is smaller than the non-camouflaged selling price, the seller chooses not to camouflage, the consumer chooses to buy and the government chooses not to inspect; at this point the market allocation is effective. The result of the three-party evolutionary game is thus the same as that of the two-party game, so the market can in principle be pushed to an effective state by adjusting the camouflage cost alone. However, in the three-party evolutionary game among the seller, the consumer and the government, the retail prices of new and remanufactured products are determined by the market, and the camouflage cost is an uncontrollable parameter. Therefore, in order to push the remanufactured product market to an effective state, the probability of government inspection of the market and the government's rewards and punishments for the seller can be increased. When the government's intervention plays a positive role in promoting the effective allocation of the market, the government itself cannot reach a Nash equilibrium at that time, which shows that the government needs to pay a certain price to promote the healthy development of the remanufactured product market.
    Finally, we verify the correctness of the corresponding results in the above models through numerical simulation. In this paper, the numerical analysis is carried out in the case of pure simulation and does not combine with the actual data of the remanufactured product market. In the future, we can collect big data of the remanufactured product market and use econometrics and data mining to conduct empirical analysis. So we can improve the research on the behavior of relevant participants in the remanufactured product market.
    Dual-channel Supply Chain Pricing and Financing Decisions under Retailers with Capital Constraints and Full Member Risk Aversion
    ZHAO Da, HU Huimin, JI Qingkai
    2023, 32(4):  71-77.  DOI: 10.12005/orms.2023.0117
    In the post-COVID-19 era, as market competition intensifies and the real environment becomes more complex, enterprises face various risks and challenges, and their operating revenue and capacity continue to decline. In China, small and medium-sized enterprises are characterized by small scale and single product forms; in the context of the epidemic, they face many development difficulties (such as financial problems) and decision-making risks, and supply chain decision makers face many risks and challenges. In order to survive and develop stably, some small and medium-sized enterprises choose to avoid risk, and the degree of risk aversion has an increasingly important impact on pricing decisions. At the same time, the international environment is grim: facing the global economy and the global epidemic, no enterprise can stand alone, and competition in the supply chain is increasingly fierce. Meanwhile, contactless distribution and online shopping, which have flourished with the arrival of the epidemic, have had a profound impact on every aspect of people's daily life. Therefore, as the epidemic becomes normalized, exploring the optimal pricing and financing decisions of capital-constrained, risk-averse small and medium-sized enterprises in a dual-channel supply chain becomes an urgent issue. In this paper, we consider a dual-channel supply chain in which all members are risk averse and the retailer is capital constrained. Based on a Stackelberg game model, we study the optimal pricing decisions and financing strategy choice of the supply chain members, and analyze the effects of the financing strategy, the degree of risk aversion and the interest rate on the optimal pricing decision of each member. The utility model is established by introducing a risk coefficient and mean-variance theory. We consider the two main ways retailers raise money. First, deferred payment: at the beginning of the production period, a deferred payment contract is signed, the retailer pays part of the money (the initial amount), and the remainder is paid at a higher wholesale price after sales are realized. Second, borrowing from financial institutions: the retailer borrows from the bank before ordering and repays the principal and interest after sales are realized. The research shows that the risk aversion of all members reduces prices throughout the supply chain, forming a situation of small profits and quick sales. When the deferred payment rate equals the financial lending rate, all subjects in the supply chain prefer the deferred payment strategy as the retailer's risk aversion coefficient increases, but as the manufacturer's risk aversion coefficient increases, the difference between the two financing strategies weakens. Hybrid financing is also an option worth considering for supply chain enterprises; due to space limitations it is not discussed in this paper, but it will be a main direction of further research. In addition, future research will consider ordering decisions and joint ordering and pricing decisions under financial constraints, and discuss the operational decisions of supply chain members under more financing strategies.
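    The risk-averse utility mentioned above is typically a mean-variance criterion; one common form, with k_i denoting member i's risk-aversion coefficient (notation illustrative, and the standard deviation is sometimes replaced by the variance), is

```latex
U_i \;=\; \mathbb{E}[\pi_i] \;-\; k_i\,\sqrt{\operatorname{Var}(\pi_i)}, \qquad k_i \ge 0 ,
```

    so a larger k_i trades expected profit for lower profit volatility, which is what drives the lower equilibrium prices and the preference between financing strategies reported above.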
    Refueling and Freight Revenue Optimization of Liner Shipping Considering Emission Control Area and Multi-time Windows
    LI Dechang, YANG Hualong, SONG Wei, ZHENG Jianfeng
    2023, 32(4):  78-85.  DOI: 10.12005/orms.2023.0118
    With the continuous development of global trade, maritime greenhouse gas emission control has become one of the most challenging environmental issues, attracting great attention from all walks of life. Currently, the International Maritime Organization (IMO) has designated four Emission Control Areas (ECAs) covering the North Sea, the Baltic Sea, the English Channel and the Caribbean coast of North America and the U.S. China has also established three ECAs in the waters of the Pearl River Delta, the Yangtze River Delta and the Bohai Sea Rim. In addition, considering the availability of container terminal resources, liner companies usually have to sign cooperation agreements with ports in order to obtain multiple time windows at the ports. In a circular route consisting of multiple ports, the liner companies deploy multiple ships to call at each port in a regular sequence according to a certain departure frequency (usually weekly) to provide customers with round-trip scheduled liner shipping services from week to week. Under the new situation of the implementation of ECA rules and deep integration of the port and shipping supply chain, liners need to use MGO inside ECAs and LSFO outside ECAs, and adopt different speeds on different legs. On the one hand, this leads to profound changes in the liner's arrival/departure schedule, refueling port selection and refueling volume strategy; on the other hand, it also changes the container cargo loading strategy between port pairs, which in turn affects voyage cargo revenue. Therefore, it is of great practical significance to study the problem of optimizing the refueling and cargo revenue of liner shipping considering ECAs under drastically fluctuating freight and fuel prices.
    This paper extends the study of refueling and cargo revenue optimization against the background of multi-window cooperation agreements signed between liner companies and ports, in order to provide a reference for liner shipping operation optimization decisions under the new situation of maritime greenhouse gas emission control and deep supply chain integration. The impact on fuel consumption of switching fuels between legs inside and outside the ECAs is analyzed. Combining the differences in fuel prices at each port of call with the freight demand and freight rate of each origin-destination pair, a mixed-integer nonlinear programming model is established that maximizes the total freight revenue of the liner shipping route. A piecewise linear secant approximation algorithm is then designed to transform the original model into a mixed-integer linear programming model that can be solved directly with commercial software (e.g., CPLEX). The problem contains the following decision elements: (1)Determining the ship sailing speed on legs inside/outside the ECAs; (2)Selecting the arrival/departure time of the ship; (3)Determining the number of ships deployed on the route; (4)Determining the refueling ports and refueling volumes of LSFO and MGO; (5)Determining the loading strategy of each O-D pair.
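    The piecewise linear secant approximation mentioned above can be illustrated on a cubic speed-fuel relation (daily fuel consumption roughly proportional to the cube of speed, a common assumption in the liner-shipping literature); the breakpoints and cubic coefficient below are placeholders, not the paper's calibration.

```python
def secant_segments(f, breakpoints):
    """Return (a, b, slope, intercept) of the secant line on each interval
    [breakpoints[k], breakpoints[k+1]], approximating f from above when f is
    convex. These linear pieces can replace the nonlinear fuel term so that
    the model becomes a mixed-integer LINEAR program."""
    segments = []
    for a, b in zip(breakpoints, breakpoints[1:]):
        slope = (f(b) - f(a)) / (b - a)
        intercept = f(a) - slope * a
        segments.append((a, b, slope, intercept))
    return segments

def approx(segments, v):
    for a, b, slope, intercept in segments:
        if a <= v <= b:
            return slope * v + intercept
    raise ValueError("speed outside the linearized range")

# Hypothetical fuel curve: tons/day = 0.012 * v^3 for speed v in knots.
def fuel(v):
    return 0.012 * v ** 3

segs = secant_segments(fuel, [14, 16, 18, 20, 22, 24])
for v in (15, 19, 23):
    print(f"v={v} kn: exact {fuel(v):.1f} t/d, secant approx {approx(segs, v):.1f} t/d")
```

    Because the fuel curve is convex, the secant pieces slightly overestimate consumption inside each interval, and adding more breakpoints tightens the approximation at the cost of more binary variables in the resulting MILP.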
    Taking the MEX route of China COSCO Shipping Group Co., Ltd. as an example, the applicability and effectiveness of the model and algorithm are verified. The results of the numerical example show that the joint optimization of refueling and cargo loading can increase the voyage revenue of liner shipping by 4.21% when ECAs and multiple time windows are considered. As the length of the port time window increases, the possibility of the liner arriving late at a port decreases, and the liner can adjust its speed more flexibly, better choose ports with low fuel prices, and more reasonably balance the loading and refueling strategies, thus improving the voyage revenue. The study shows that signing multi-window cooperation agreements with ports, as well as deploying new types of ships with lower fuel consumption coefficients, not only facilitates flexible adjustment of sailing speed and arrival/departure times, but also effectively reduces fuel consumption and improves voyage revenue. Liner companies should sign win-win cooperation agreements with ports, so as to obtain a longer available time window at ports without increasing container terminal resources (i.e., without increasing loading and unloading operation time). In addition, liner companies should take into account the fuel consumption coefficients of different ship types according to the actual demand of liner shipping, and deploy advanced and suitable ship types to ensure the maximum profit of liner shipping. The conclusions can provide a useful reference for liner companies making liner operation decisions under ECA rules.
    This paper investigates the problem of optimizing liner refueling and cargo revenue considering ECA when liner companies deploy the same type of ship on a single route. The study yields a series of important conclusions and management opinions. However, liner companies also deploy heterogeneous ships on some routes, and the next study can consider the optimization problem of liner refueling and cargo revenue for multiple ship types on multiple routes. In addition, robust optimization of liner refueling and cargo revenue is also an interesting direction due to weather, sea conditions and natural disasters that may cause delayed ship arrivals and port disruptions.
    Research on the Optimization of Strategic Behaviors of Related Subjects of Local Environmental Governance under the Vertical Management System
    PAN Feng, LI Yingjie, WANG Lin
    2023, 32(4):  86-92.  DOI: 10.12005/orms.2023.0119
    In 2016, the Central Committee of the Communist Party of China and the State Council issued the "Guiding Opinions on the Pilot Work of the Reform of the Vertical Management System for Monitoring, Supervision and Law Enforcement of Environmental Protection Organizations Below the Province", pointing out that the environmental protection management system of local governments below the provincial level should gradually complete the transformation to a vertical management model. By now, the reform of the vertical management system of environmental protection agencies below the provincial level has been basically completed, and the operation of the system is advancing in an orderly manner. According to the "2020 China Ecological Environment Status Bulletin", although the overall quality of China's ecological environment has improved, ecological environment protection still faces a severe situation. Determining how the vertical management system can play a more effective role in promoting the comprehensive green transformation of economic and social development requires analysis and discussion of local environmental governance under this system. Integrating the local environmental governance entities into one system for study can clarify the interaction among the subjects, the direction of their strategy adjustment, and how the entire system can operate efficiently and stably, which is of great practical significance for further improving the ecological environment and optimizing the strategic behavior of the relevant subjects of local environmental governance. This paper constructs a four-party game model of the Municipal Monitoring Center, the Municipal Ecological Environment Bureau, the Local Government and the Enterprise under the vertical management system. Firstly, the average expected payoff of each subject's strategies is calculated, the four-dimensional dynamical system of the four players is obtained according to the Malthusian equation, and the Jacobian matrix of the system is derived. Secondly, in the stability analysis, Lyapunov's first method is used to determine the conditions required for the local environmental governance subjects to choose the ideal evolutionary stable strategy, by judging the signs of the eigenvalues at each candidate stable point. Finally, based on the relevant data of Chongqing and Xingtai City in Hebei Province, taken from publicly available sources such as the "2019 Xingtai City National Economic and Social Development Statistical Bulletin" and the "2019 Chongqing Municipality National Economic and Social Development Statistical Bulletin", MATLAB software is used to simulate the evolution trend of the system, the control variable method is used to study the interaction between variables, and a numerical simulation empirical analysis is carried out.
The study finds that increasing the local government's interference costs and improving the monitoring and assessment coefficients and rewards for local environmental protection agencies can reduce the indirect interference of local governments in the strict monitoring and law enforcement of local environmental protection agencies. Improving the evaluation coefficient and rewards for the environmental law enforcement of local ecological environment bureaus helps resolve the dual-leadership dilemma of the local ecological environment bureau, but the strategic tendency of the local government needs to be considered. Enterprises weigh the magnitude of the negative impact of an accident and choose either superior production or inferior production. Further analysis shows that collusion between government and enterprises is an important factor affecting the strict performance of duties by each subject; the central government should pay attention to controlling the tendencies of local environmental protection agencies and improve the supervision of and welfare benefits for environmental protection agencies. However, in view of possible government-enterprise collusion, the measures of increasing the interference cost of the local government and the evaluation coefficient and rewards of the municipal environmental bureau should be used cautiously, in order to avoid hindering the ideal state in which environmental protection agencies strictly perform their duties, the government strengthens its main responsibility, and enterprises produce in an environmentally friendly manner.
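    A minimal numerical sketch of the stability test described above (Lyapunov's first method) is given below: linearize a four-dimensional replicator-style system at a candidate equilibrium and check that all Jacobian eigenvalues have negative real parts. The vector field and payoff gaps used here are generic placeholders, not the paper's calibrated four-party model.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of a vector field f at point x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((x.size, x.size))
    fx = np.asarray(f(x))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp)) - fx) / eps
    return J

def is_asymptotically_stable(f, x_star):
    """Lyapunov's first method: stable if every eigenvalue of the Jacobian
    at the equilibrium has a strictly negative real part."""
    eigvals = np.linalg.eigvals(numerical_jacobian(f, x_star))
    return bool(np.all(eigvals.real < 0)), eigvals

# Placeholder 4-D replicator-style field with payoff gaps chosen so that
# (1, 1, 1, 1) -- "all four subjects strictly perform their duties" -- is stable.
def field(s):
    gaps = np.array([1.0 - 0.5 * (1 - s[1]),   # monitoring centre (hypothetical)
                     0.8 - 0.4 * (1 - s[2]),   # ecological environment bureau
                     0.6 - 0.3 * s[3],         # local government
                     0.7 - 0.2 * (1 - s[0])])  # enterprise
    return s * (1 - s) * gaps

stable, eigvals = is_asymptotically_stable(field, [1.0, 1.0, 1.0, 1.0])
print("stable:", stable, "eigenvalues:", np.round(eigvals, 3))
```

    In the paper the same eigenvalue-sign test is applied to the four-dimensional system derived from the calibrated payoffs, which yields the parameter conditions under which the ideal strategy profile is an evolutionary stable point.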
    The Banzhaf Value with Coalition Structures
    SHAN Erfang, LYU Wenrong, SHI Jilei
    2023, 32(4):  93-97.  DOI: 10.12005/orms.2023.0120
    Abstract ( )   PDF (963KB) ( )
    References | Related Articles | Metrics
    The Banzhaf value and the Shapley value are two famous allocation rules in cooperative games with transferable utility, both of which determine each player's payoff by his expected marginal contribution to coalitions. The difference is that the Banzhaf value originates from the voting game and assumes that a player joins any coalition of any size with equal probability, while the Shapley value only assumes that a player joins coalitions of the same size with equal probability. In addition, both the Banzhaf value and the Shapley value assume that any set of players can form a feasible coalition. However, cooperation between players is affected by factors such as geography and intimacy, and players who cooperate more closely within the same region are more likely to form a priori unions. In 1974, Aumann and Drèze first studied cooperative games with coalition structures. They define a coalition structure as a partition of the grand coalition, and each subset in the partition is called an a priori union. In 1977, Owen assumed that any a priori union could cooperate with all or part of the players in other a priori unions, and on this basis proposed the famous Owen value. The Owen value first assigns the Shapley value to each a priori union and then uses the Shapley value again to allocate within each a priori union. Based on the Banzhaf value, Owen later gave another allocation rule for cooperative games with coalition structures, called the Banzhaf-Owen value. In 2009, Kamijo considered a situation different from Owen's assumption, in which the players in each a priori union can only cooperate with other a priori unions as a whole, and under this assumption he proposed a new allocation rule for cooperative games with coalition structures, called the Ka value. To distinguish the two settings, allocation rules based on Owen's assumption are referred to as coalition values, and allocation rules under Kamijo's assumption as collective values.
    In fact, we often need to consider collective values under Kamijo's assumption. For example, a company holds a general meeting of shareholders to make decisions. Because the shareholders hold different numbers of shares, their voting power also differs. Shareholders with the same or similar ideas will form an a priori union in order to achieve a certain purpose. Each a priori union participates in decision-making as a whole, using its larger shareholding to improve its bargaining power in decision-making or negotiation and obtain more benefits, and the vested interests are then distributed within the union. In this case, the collective value can be used to estimate the power index of each shareholder. Since the Banzhaf value originates from the voting game, in order to better estimate the power indices of different participants in such problems, it is particularly important to define a Banzhaf value with coalition structures under Kamijo's assumption.
    In this paper, based on the assumption that the players may form a coalition structure consisting of a priori unions, a new Banzhaf value with a coalition structure, called the C-Banzhaf value, is introduced into cooperative games with coalition structures. First, we show that the C-Banzhaf value is uniquely determined by pairwise merging of the partition and standardness. Secondly, taking the decision-making process of a company's general meeting of shareholders as an example, the C-Banzhaf value is applied to analyze the power index of each shareholder and is compared with other values. The results show that the C-Banzhaf value is a good power evaluation method when shareholders are inclined to form a priori unions in order to seek more benefits through stronger bargaining power.
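    For readers unfamiliar with the underlying index, the sketch below computes the classical (non-normalized) Banzhaf value for a small weighted-majority shareholder game in Python; it does not implement the paper's C-Banzhaf value with coalition structures, and the shareholdings and quota are hypothetical.

```python
from itertools import combinations

def banzhaf(players, v):
    """Classical (non-normalized) Banzhaf value: the average marginal
    contribution of player i over all coalitions not containing i."""
    n = len(players)
    values = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                total += v(frozenset(S) | {i}) - v(frozenset(S))
        values[i] = total / 2 ** (n - 1)
    return values

# Weighted-majority (voting) game: hypothetical shares and a 50% passing quota.
shares = {"A": 40, "B": 30, "C": 20, "D": 10}
def v(S):  # characteristic function: 1 if the coalition controls the quota
    return 1.0 if sum(shares[p] for p in S) > 50 else 0.0

print(banzhaf(list(shares), v))
```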
    Measurement and Decomposition of Additive Cross Efficiency under A Two-stage Analysis Framework
    XUE Junmei, WANG Yingming
    2023, 32(4):  98-104.  DOI: 10.12005/orms.2023.0121
    Abstract ( )   PDF (1035KB) ( )
    References | Related Articles | Metrics
    Data envelopment analysis (DEA) was proposed in 1978 as an efficiency evaluation method with multiple inputs and outputs. It is a "black box" evaluation method that uses objective data with the weights most favorable to the decision-making unit (DMU) itself. Traditional DEA models generally have two problems: first, the "black box" evaluation cannot detect the impact of internal processes on efficiency; second, the influence of other DMUs is not taken into account, so the evaluation is prone to being one-sided. Network DEA is an effective means to solve the first problem: it opens the "black box" for evaluation and uncovers the impact of internal factors on efficiency. The cross-efficiency evaluation method is useful for the second problem, since it fully considers the role of all DMUs in efficiency evaluation. However, neither of them takes into account the influence of decision makers' subjective preferences, and although scholars have made many efforts to improve DEA, research combining cross efficiency and network DEA is still in its infancy. Therefore, this paper combines them and constructs two-stage additive efficiency models based on the most basic two-stage chain network structure. This not only expands the research scope of DEA but is also more in line with reality, providing decision makers with a more comprehensive and in-depth reference.
    Cross-efficiency evaluation mainly relies on "benevolent" and "aggressive" models constructed through secondary goal optimization. The "benevolent" model maximizes the efficiency of the other DMUs as much as possible on the premise of ensuring the DMU's own highest efficiency, so its objective function is maximized; the "aggressive" model, on the contrary, minimizes the objective while the other constraints remain unchanged. Network DEA here considers the most basic chain structure, in which the output of the first stage is also the input of the second stage, and the overall efficiency of the system is equal to the product of the two stage efficiencies. This paper draws on the idea of additive efficiency decomposition, combines network DEA with the cross-efficiency method, considers the preferences of decision makers, and constructs two-stage additive cross-efficiency models by setting priorities for the different processes. By calculating the proportion of each sub-stage's input in the overall input, the variables of the optimization decision are obtained, and three new models for different situations are proposed, namely the overall-priority, stage-1-priority and stage-2-priority models. Finally, the overall efficiency, stage-1 efficiency and stage-2 efficiency of the evaluated units are calculated by the arithmetic average. Two points should be noted. First, when a model is a nonlinear program, the Charnes-Cooper transformation should be used to convert it into a linear program; second, each model is calculated with the weights most favorable to its own process, which are not necessarily optimal for the other processes, so the cross-efficiency matrix may contain peer-evaluation efficiencies greater than the self-evaluation efficiencies.
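    As background for the cross-efficiency idea, the following Python sketch solves the standard single-stage input-oriented CCR multiplier program with scipy and assembles a cross-efficiency matrix; it is a minimal illustration with toy data, not the two-stage additive cross-efficiency models developed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_weights(X, Y, k):
    """Input-oriented CCR multiplier model for DMU k (single stage, for illustration):
    max u'y_k  s.t.  v'x_k = 1,  u'y_j - v'x_j <= 0 for all j,  u, v >= 0."""
    n, m = X.shape                 # n DMUs, m inputs
    s = Y.shape[1]                 # s outputs
    c = np.concatenate([np.zeros(m), -Y[k]])          # minimize -u'y_k
    A_ub = np.hstack([-X, Y])                          # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([X[k], np.zeros(s)]).reshape(1, -1)   # v'x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    v, u = res.x[:m], res.x[m:]
    return v, u

# Toy data: 5 DMUs, 2 inputs, 1 output (hypothetical figures).
X = np.array([[2., 5.], [3., 4.], [4., 3.], [5., 6.], [6., 2.]])
Y = np.array([[1.], [1.2], [1.1], [1.3], [0.9]])

# Cross-efficiency matrix: row d rates every DMU with DMU d's optimal weights.
E = np.zeros((len(X), len(X)))
for d in range(len(X)):
    v, u = ccr_weights(X, Y, d)
    E[d] = (Y @ u) / (X @ v)
print(np.round(E.mean(axis=0), 3))    # average cross efficiency of each DMU
```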
    This paper takes banks, an insurance company and a supply chain as examples to analyze the stage-1-priority, stage-2-priority and overall-priority models respectively. In the bank case, stage 1 is the deposit-taking process, whose inputs are the net value of fixed assets and the number of employees; stage 2 is the profit-making process, whose output is book profit. This example verifies the feasibility of the stage-1-priority model, and the efficiency in this case mainly depends on the efficiency level of stage 1. In the insurance case, stage 1 is the marketing process, with operating expenses and insurance expenses as inputs and direct written premiums and reinsurance expenses as outputs, which in turn serve as the stage-2 inputs; stage 2 is the investment process of the insurance company, with underwriting profit and investment profit as outputs. This example verifies the feasibility of the stage-2-priority model, and the efficiency in this case mainly depends on the efficiency level of stage 2. In the supply chain, stage 1 is selling, with manpower, operating costs and transportation costs as inputs and the number of products shipped as output, which also serves as the stage-2 input; stage 2 is buying, with sales and profit as outputs. It can be seen that the overall efficiency and ranking lie between, or close to, the efficiencies and rankings of the sub-stages, so the overall-priority model is feasible.
    The models can also be applied to environmental efficiency assessment, enterprise performance evaluation and other fields, and to various industries such as the hotel industry and the pharmaceutical industry. At the macro level, the method can be applied to the efficiency evaluation of regional or international production activities; at the meso level, it can be used to analyze the internal development factors of different industries in the same region or of the same industry in different regions; at the micro level, it can be used by organizations such as businesses, banks or hospitals to find the causes of inefficiency.
    Forecast Method for Flights Based on BP Neural Network during Public Emergency
    CHEN Huaqun, WANG Yujue, LIU Yunxi
    2023, 32(4):  105-111.  DOI: 10.12005/orms.2023.0122
    Abstract ( )   PDF (1814KB) ( )
    References | Related Articles | Metrics
    With the outbreak of COVID-19 at the end of 2019 and the subsequent heavy blow to the civil aviation industry, flight prediction under public emergencies has once again attracted the attention of civil aviation practitioners, who seek to minimize the impact of such emergencies on the industry and the economic losses in its future development.
    The biggest feature of public emergencies is the great randomness of when they occur, which leads to uncertainty in their occurrence, prevention and control, and in the degree of damage to the social economy. Numerous historical events show that major public emergencies can disrupt the normal flight operation order and even push the entire civil aviation industry into a sustained downturn. Flight prediction under public emergencies refers to the quantitative assessment of flight trends when disasters that are beyond human expectation occur, cause or may cause serious social harm, and last for a long time.
    An analysis of the current situation shows that research on the impact of public emergencies on flights mostly focuses on qualitative management strategies, and there are few studies using data association analysis, deep quantitative mining and intelligent forecasting. Moreover, traditional time series trend prediction is easily affected by external interference, which makes it difficult to further improve the prediction accuracy. By taking public emergencies as individual guiding characteristics, an ABI flight forecasting mechanism over the life cycle of the event is constructed to improve the resistance of flight forecasting to external interference, overcome the disadvantages of traditional time series trend forecasting, facilitate the formulation of operation strategies in advance and the timely adjustment of capacity deployment, and reasonably ensure the safety and efficiency of flight operation.
    The Agent Based Index (ABI) mechanism is introduced, public emergencies are taken as individual guiding characteristics, and ABI indexes for flights of different operation scopes and types are output to determine the correlation between public emergencies and flight operation trends. SPSS is then used to statistically analyze the impact of historical public emergencies on flights, the Spearman correlation between flights and emergencies is learned, and the direct positive and negative correlations between the two are quantified through correlation tests, excluding irrelevant factors. Next, an improved BP neural network flight prediction model is established: the trends reflected by the historical data are used as input samples, emergencies and flight trend changes are used for the training function, and the improved network is trained on the sample data in Matlab, with the weights and thresholds of each layer of neurons corrected continuously so that the error function descends along the negative gradient direction and the flight projections approach expectations. Finally, 46 months of historical flight data before and after the COVID-19 outbreak, obtained through the information network of the Civil Aviation Administration of China, are used as an example to verify the feasibility and prediction performance of the model and algorithm. To avoid overfitting, 70% of the data are used as the training set, 15% as the validation set to check the training results, and the remaining 15% as the test set for the final model.
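    A minimal Python sketch of the 70%/15%/15% training, validation and test protocol described above is given below, using scikit-learn's multilayer perceptron on synthetic data; the actual study trains an improved BP network in Matlab on ABI features, which are only stand-ins here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical features: lagged monthly flight volumes plus an emergency
# severity index (stand-ins for the ABI features described above).
X = rng.normal(size=(460, 4))
y = X @ np.array([0.6, -0.3, 0.2, 0.5]) + 0.1 * rng.normal(size=460)

# 70% training, 15% validation, 15% test, mirroring the split described above.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=1)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=1)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000, random_state=1)
net.fit(X_train, y_train)
print("validation MSE:", mean_squared_error(y_val, net.predict(X_val)))
print("test MSE:      ", mean_squared_error(y_test, net.predict(X_test)))
```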
    The results show that the computational search of the BP neural network resolves, to a certain extent, the complex nonlinear mapping between events and flights, and the prediction closest to expectation is obtained at the optimal validation-set mean square error. The maximum number of training epochs is set to 1000 and the autoregressive order to 3. Using the BP training mechanism for flight prediction based on the life cycle of public emergencies proposed in this paper, the code run on the Matlab platform converges after 23 iterations and yields predictions of domestic, international, passenger and cargo flights for the next 8 months. With the small COVID-19 outbreak after the National Day holiday in 2020, flight volumes decreased in November and December 2020, with the total number of flights per month expected to be about 300,000. In the four months after January 2022, the total number of flights per month remains at around 400,000.
    Due to the limitation of data acquisition, only domestic epidemic data are selected as independent variables in the correlation analysis, and the impact of changes in foreign epidemics on China's international flights is not counted. To avoid prediction bias caused by the monotonically decreasing trend of the selected sample interval, future work will further analyze the characteristics of the historical data, expand the scope of reference data, and improve the anti-interference mechanism of the prediction.
    Allocation Method of Three-dimensional Warehouse Space Considering Seismic Shock
    ZHANG Shuiwang, FU Linping, WANG Rui, SHAO Lingzhi
    2023, 32(4):  112-117.  DOI: 10.12005/orms.2023.0123
    Abstract ( )   PDF (1591KB) ( )
    References | Related Articles | Metrics
    Traditional research on shelf stability only takes the lowest overall center of gravity in the vertical direction of the three-dimensional space as the optimization objective, and rarely considers shelf collapse caused by external forces such as sudden natural disasters. Whether a warehouse center can withstand an earthquake and continue to operate normally, avoiding huge property losses and reducing safety risks to staff, is a question of practical significance. At the same time, shelves are not always fully loaded in daily production, and the distribution of goods directly affects the forces on, and the safety of, the shelf structure. In order to reduce the risk of shelf collapse caused by earthquakes and improve the stability of storage center shelves, this paper takes three-dimensional shelf space allocation as the research object and puts forward a three-dimensional warehouse space allocation method that considers seismic shock.
    Firstly, this paper takes as objectives the energy consumption of goods entering and leaving storage, the storage of frequent items in nearby locations, and shelf stability under seismic shock, constructs a multi-objective model, and uses the ideal point method to transform the objective function into the fitness function of the proposed algorithm. Then, combining the characteristics of the model, the coding and solution steps of the Artificial Fish Swarm Algorithm (AFSA) are adapted to obtain a solution algorithm suitable for this problem. Next, taking data collected from an e-commerce warehouse as an example, a simulation experiment is carried out: 1) the FP-Tree algorithm is used to mine the sample orders and obtain the frequent item sets; 2) the AFSA is used to solve the three sub-objectives and obtain three ideal points, from which the fitness function of the proposed algorithm is constructed; 3) the algorithm designed in this paper is used to obtain the optimized storage allocation, 1660 customer orders are randomly generated, the order picking distance under the optimized allocation is calculated, and the effectiveness of the algorithm is verified. Finally, several groups of comparison experiments are designed: 1) to verify whether the allocation method considering seismic shock is reasonable, the AFSA is used to optimize the allocation models with and without seismic shock; each simulation is repeated 20 times and the average is taken as the final result to ensure the stability and reliability of the simulation; 2) the horizontal seismic influence coefficient is changed while the other variables of the model and algorithm remain unchanged, the AFSA is used to solve the total objective value under different horizontal seismic influence coefficients, and the influence of this coefficient on the results is analyzed; 3) to verify the superiority of the artificial fish swarm algorithm for this problem, it is compared with the genetic algorithm (GA) and particle swarm optimization (PSO).
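    For illustration, a simple ideal-point scalarization of the three sub-objectives might look as follows; the weighting and normalization are assumptions for this sketch and may differ from the paper's exact fitness construction.

```python
import numpy as np

def ideal_point_fitness(f, f_star, w=None):
    """Scalarize several sub-objectives via the ideal-point idea: the fitness is
    the weighted normalized deviation of each objective value from its ideal
    (separately optimized) value; smaller is better."""
    f, f_star = np.asarray(f, float), np.asarray(f_star, float)
    w = np.ones_like(f) / len(f) if w is None else np.asarray(w, float)
    return float(np.sum(w * np.abs(f - f_star) / np.abs(f_star)))

# Hypothetical values: [in/out energy, frequent-item distance, seismic stability index]
f_candidate = [1250.0, 320.0, 0.82]
f_ideal     = [1100.0, 280.0, 0.95]
print(ideal_point_fitness(f_candidate, f_ideal))
```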
    The results show that: 1) the space allocation method designed under seismic shock does not weaken the optimization of the energy consumption of goods entering and leaving storage or of order-picking efficiency, while it improves the overall stability of the shelves and their ability to withstand seismic shock; 2) the method is applicable to warehouse space optimization in areas of different seismic intensity, reducing the energy consumption of goods entering and leaving storage and improving shelf stability; 3) for this problem, the optimization performance of the artificial fish swarm algorithm is better than that of the genetic algorithm and particle swarm optimization.
    Application Research
    An IO-type Anomaly Interval Detection Method for Interval-valued Time Series and Its Application to Financial Time Series Analysis
    TAO Zhifu, FENG Haoyang, CHEN Huayou
    2023, 32(4):  118-125.  DOI: 10.12005/orms.2023.0124
    Abstract ( )   PDF (1428KB) ( )
    References | Related Articles | Metrics
    With the development of society and technology, more and more types of financial time series are being recorded, and interval-valued time series are one of them. Existing research shows that interval-valued time series contain more information than point-valued time series, and the study of financial interval-valued time series can provide a theoretical basis for investment in and prediction of financial markets. However, the influence of abnormal intervals on interval-valued time series modeling has rarely been noticed in existing research. Therefore, this paper studies the IO-type abnormal interval of interval-valued time series and its detection method.
    Unlike point-valued time series, the numerical level of an interval-valued time series is driven by the upper and lower bound series or by the center and radius series, and the effects of these two component subsequences also interact with each other. Because the center and radius sequences better reflect the changes of interval values, this paper analyzes IO-type abnormal intervals from the perspective of the center and radius sequences. On the basis of the traditional definitions of outliers, three forms of IO-type abnormal intervals in interval-valued time series are given, namely horizontal drift, temporary change and oblique rise, and their mathematical expressions are provided to facilitate subsequent research. This paper then proposes an IO-type anomaly interval detection algorithm for interval-valued time series based on hypothesis testing. Following the statistical approach to interval-valued time series, ARMA models are fitted to the interval center sequence and the interval radius sequence respectively, and the corresponding model residual test statistics at each time point are constructed. The test statistic at each time point is compared with a critical value to determine when an abnormal interval occurs. Since testing every time point in this way would inflate the error rate, the Bonferroni correction is adopted to ensure that the probability of incorrectly identifying an IO-type anomaly interval is at most 5%. Finally, the daily maximum and minimum prices of the Shanghai Composite Index from January 4, 2016 to December 28, 2018 constitute the interval-valued time series, its daily closing prices constitute the point-valued time series, and the proposed method is used to detect abnormal intervals.
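    A minimal Python sketch of this testing idea, assuming ARMA residuals that are approximately standard normal and using statsmodels with synthetic center and radius series, is shown below; the ARMA orders, the standardization and the injected anomaly are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.tsa.arima.model import ARIMA

def outlier_times(series, order=(1, 0, 1), alpha=0.05):
    """Fit an ARMA model, standardize its residuals, and flag times whose
    statistic exceeds the Bonferroni-corrected normal critical value."""
    res = ARIMA(series, order=order).fit()
    z = res.resid / res.resid.std()
    crit = norm.ppf(1 - alpha / (2 * len(series)))   # Bonferroni over all T tests
    return np.where(np.abs(z) > crit)[0], crit

rng = np.random.default_rng(0)
center = 3.0 + rng.normal(0, 0.05, 300)        # toy interval-center series
radius = np.abs(rng.normal(0.20, 0.03, 300))   # toy interval-radius series
center[150] += 0.6                              # injected additive anomaly
idx_c, crit = outlier_times(center)
idx_r, _ = outlier_times(radius)
print("critical value:", round(crit, 3))
print("anomalous intervals:", sorted(set(idx_c) | set(idx_r)))
```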
    Based on an analysis of the statistical significance and actual market behavior behind the detected abnormal intervals, it is concluded that the abnormal intervals detected by the proposed method are valid. At the same time, by comparing the results of interval-valued and point-valued anomaly detection, it is shown that the proposed hypothesis-testing-based detection method for interval-valued time series is more effective than its point-valued counterpart. Comparisons with other types of anomaly interval detection methods further show that the proposed method has clear advantages when facing interval-valued data over the same time period.
    Consequently, this research expands the application field of the principles and techniques of classical outlier detection and enriches the research scope of interval-valued time series analysis. It can provide a reference for detecting outliers in other types of time series data and for further research on prediction theory, and the identification and detection of outliers in interval-valued time series can provide technical support for improving prediction accuracy.
    Diagnosis of Thyroid Nodules Based on Two-stage Multi-criteria Classification
    SUN Hongjun, HE Liang, YU Feihong, XU Haiyan
    2023, 32(4):  126-133.  DOI: 10.12005/orms.2023.0125
    Abstract ( )   PDF (1261KB) ( )
    References | Related Articles | Metrics
    The ultrasonographic manifestations of thyroid nodules are complex and varied, and it is difficult to judge whether a nodule is benign or malignant. Ultrasound examination depends on the performance of the ultrasound equipment and is closely related to the understanding and experience of the sonographer. Different sonographers may interpret the same patient's thyroid nodules differently, and their report conclusions can differ greatly, which causes confusion in clinical treatment. The American College of Radiology published the ACR Thyroid Imaging Reporting and Data System (ACR TI-RADS), which proposed a risk stratification approach and gave five indicators for the diagnosis of thyroid nodules: composition, echogenicity, shape, margin and echogenic foci. The imaging features of each indicator are described and scored in the system, and the total score over the five indicators is used to determine the malignancy risk of a thyroid nodule and assign it to the corresponding category. The diagnostic process has thus become more standardized, and computer-assisted diagnostic classification of thyroid nodules becomes possible.
    The diagnosis of thyroid nodules is a multi-attribute decision-making problem: thyroid nodules are assigned to the corresponding malignancy risk grade according to their imaging features. There are two main types of multi-attribute classification decision methods. One is direct classification, in which decision makers directly give decision parameters such as the utility function, criteria weights and classification thresholds, and use these parameters to classify directly. The other is classification based on case learning, in which decision makers learn from the classification results of a representative set of typical cases and build a decision model to calculate the decision parameters, which are then used to classify all evaluation objects. This paper combines the advantages of both approaches and proposes a two-stage multi-criteria classification method to diagnose and classify thyroid nodules. Firstly, we take cases diagnosed by experienced doctors as the case set and quantify the criteria according to ACR TI-RADS to construct a quantified case set. The classification process is then divided into two stages. In the first stage, decision tree models are built for all cases based on the direct classification criteria given in ACR TI-RADS; each type is regarded in turn as the positive class and the other types as the negative class, the classification accuracy of the models is calculated to identify the most distinguishable classes, and the cases with significant features and good discrimination are classified directly. In the second stage, a multi-attribute classification decision model is constructed. The center of each group in the case set is taken as the reference point, and the distance between a case and the center of its group is defined. The decision objective is to minimize the classification errors within and between groups, and the constraint space is built from the classification distance threshold constraints and the criteria weight constraints. Lingo is used to solve the multi-attribute classification model by learning from the typical cases, and the criteria weights and classification thresholds are obtained, so as to complete the classification of the remaining, more complex cases.
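    The first stage can be illustrated with a small one-vs-rest decision tree experiment in Python; the quantified TI-RADS features and risk categories below are synthetic stand-ins, and the tree depth is an assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical quantified ACR TI-RADS criteria (composition, echogenicity, shape,
# margin, echogenic foci) and risk categories 0-3 for a small case set.
X = rng.integers(0, 4, size=(120, 5)).astype(float)
y = np.digitize(X.sum(axis=1), [4, 8, 12])    # toy risk grade from total score

# Stage 1: one class against the rest; classes whose trees separate well are
# treated as "directly classifiable", the remainder go to the stage-2 model.
for k in np.unique(y):
    yk = (y == k).astype(int)
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, yk)
    acc = accuracy_score(yk, tree.predict(X))
    print(f"class {k}: one-vs-rest training accuracy = {acc:.3f}")
```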
    Data from 16 Chinese electronic medical records of thyroid ultrasound diagnoses covering 4 categories are obtained and quantified. Manhattan distance and Euclidean distance are used as case distances respectively, and classification is carried out according to the two-stage model proposed in this paper. The classification results are compared with classical algorithms such as the logistic regression model, the support vector machine model and the direct multi-attribute decision model. The experimental results show that: (1) using the Euclidean distance as the case distance performs better than using the Manhattan distance; (2) the proposed method is superior to these classical classification algorithms, whose performance depends on learning from large amounts of data, which is difficult to obtain in medical practice; (3) the proposed method overcomes the difficulty of classifying thyroid nodules directly, which arises from the complexity of the classification problem and the cognitive limitations of decision makers, while maintaining computational efficiency. In conclusion, analyzing and comparing the classification results verifies the effectiveness of the two-stage multi-attribute classification method for the classification and diagnosis of thyroid nodules.
    In follow-up studies, we will continue to collect data to improve the accuracy of the proposed model. We will also extend the model to the multi-class diagnosis of other diseases, such as the diagnosis of breast nodules and the staging of hypertension. Finally, we thank the National Natural Science Foundation of China for its support, which enabled this research to proceed smoothly, and all the experts and editors for their suggestions, which allowed this article to be continuously improved and successfully published.
    Research on the Illness of GM (1,N) Model and Its Application in Ecological Innovation
    XIONG Pingping, LI Tiantian, TAN Chengwei, WU Yurui
    2023, 32(4):  134-139.  DOI: 10.12005/orms.2023.0126
    Abstract ( )   PDF (1098KB) ( )
    References | Related Articles | Metrics
    The rapid economic growth model has led to excessive consumption of resources, and a series of ecological problems have become increasingly prominent. Ecological innovation not only relieves the pressure caused by resource and environmental bottlenecks but also promotes the sustainable development of the national economy. However, the amount of data that can be collected for eco-innovation indicators is limited, and the structure of the corporate eco-innovation system is complex, with grey characteristics such as uncertainty and small samples. Therefore, this paper takes Chinese industrial enterprises as the research object and explores grey model forecasting techniques suitable for ecological innovation indicators with multiple variables and few data points, studying the possible ill-conditioning of traditional grey prediction in parameter estimation.
    In actual data, the number of influencing factor sequences may exceed the number of samples, or there may be strong grey correlation between the influencing factors. When the ordinary least squares method is used, ill-conditioning may arise because the covariance matrix is close to singular. Therefore, the model parameters are estimated with an L2 regularization term, and a near-optimal value of the regularization coefficient is found with particle swarm optimization, so as to alleviate the ill-conditioning and improve the prediction accuracy of the grey model. In addition, one reason for the poor prediction of the traditional GM(1,N) model is that the parameters used in the time response function are not derived from the same equation as those estimated, so this paper obtains the time response and parameter estimates directly from the difference equation to resolve this inconsistency.
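    A minimal sketch of the L2-regularized parameter estimation for GM(1,N), assuming the standard 1-AGO and background-value construction and a fixed regularization coefficient (the paper tunes it with particle swarm optimization), is given below.

```python
import numpy as np

def gm1n_ridge(x1, drivers, lam=0.1):
    """Ridge (L2) regularized parameter estimation for GM(1,N):
    x1 is the system characteristic sequence, drivers the related factor
    sequences; lam is the regularization coefficient (fixed here for illustration)."""
    x1_ago = np.cumsum(x1)                                   # 1-AGO of x1
    z1 = 0.5 * (x1_ago[1:] + x1_ago[:-1])                    # background values
    drv_ago = np.cumsum(np.asarray(drivers, float), axis=1)  # 1-AGO of each driver
    B = np.column_stack([-z1, drv_ago[:, 1:].T])             # design matrix
    Y = np.asarray(x1, float)[1:]
    I = np.eye(B.shape[1])
    return np.linalg.solve(B.T @ B + lam * I, B.T @ Y)       # (B'B + lam*I)^(-1) B'Y

x1 = np.array([2.8, 3.1, 3.5, 4.0, 4.6, 5.3])           # toy characteristic series
drivers = np.array([[1.0, 1.2, 1.3, 1.6, 1.8, 2.1],     # toy related factor series
                    [0.9, 1.0, 1.2, 1.3, 1.6, 1.7]])
print(gm1n_ridge(x1, drivers, lam=0.1))
```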
    The GM(1,N) model with the optimized algorithm is applied to predicting the number of patents of industrial enterprises in Jiangsu Province and northern China. The results show that the number of patents of industrial enterprises in both regions increases every year; the total daily treatment capacity of urban sewage in Jiangsu Province and the total current assets of industrial enterprises above the designated size have a large impact on the number of patents, while the number of industrial enterprises above the designated size and the daily treatment capacity of urban sewage are the main influencing sequences in northern China.
    Research on Influencing Factors of Carbon Intensity Based on PDA-IDA Decomposition Method
    AN Qingxian, ZOU Yuqing, XIONG Beibei
    2023, 32(4):  140-146.  DOI: 10.12005/orms.2023.0127
    Abstract ( )   PDF (989KB) ( )
    References | Related Articles | Metrics
    As the top carbon dioxide (CO2) emitter, China has drawn global attention for the accelerated growth of its CO2 emissions over the past few decades, and international efforts to stabilize the global climate depend greatly on China's carbon emission footprint. Recently, government officials, industry entrepreneurs and academic researchers have acknowledged that addressing climate change requires a balance between economic development and environmental sustainability. This balance requirement has led to the aggregate carbon emission intensity (CEI), defined as CO2 emissions per unit of gross domestic product, being used to characterize the overall performance of climate change mitigation. In order to reduce CO2 emissions, China has promised to cut CEI by 60%~65% by 2030 compared with the 2005 level. Furthermore, the Chinese government aims to peak CO2 emissions no later than 2030 and to increase the proportion of non-fossil fuels in primary energy consumption to 20% by 2030. To achieve these goals, we need to know which factors drive the change in CEI and their relative importance. Therefore, how to effectively decompose the driving factors of CEI in China is an urgent problem.
    In this study, a decomposition method for CEI is proposed based on the production-theoretical decomposition method and the index decomposition method. In our approach the CEI change is decomposed into eleven driving factors, including energy mix change, potential economic output structure change, potential energy intensity change, potential carbon emission coefficient change, economic output gap change, and the technical efficiency change and technological change of economic output, of CO2, and of energy inputs. In addition, an efficiency measurement method that respects the materials balance principle is proposed to measure the technical efficiency of energy, economic output and CO2, which quantifies the impact of technical factors on CEI in a scientifically sound way.
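    For reference, the aggregate carbon emission intensity and a generic multiplicative decomposition of its change can be written as follows; the precise definition of each factor follows the PDA-IDA derivation in the paper and is only indicated schematically here.

```latex
\mathrm{CEI}^{t}=\frac{C^{t}}{Y^{t}}, \qquad
\frac{\mathrm{CEI}^{T}}{\mathrm{CEI}^{0}}=\prod_{i=1}^{11} D_{i},
```

where C^t denotes aggregate CO2 emissions in period t, Y^t denotes gross domestic product, and each D_i stands for one of the eleven driving factors listed above.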
    Based on the proposed method, we decompose and analyze the driving factors of the CEI change of the transportation industry in China, using panel data of 30 provinces from 2009 to 2017. The main results are as follows. At the national level, the CEI of the transportation industry declined by 18.65% from 2009 to 2017. Technological change of energy and of CO2 were the most critical factors decreasing CEI. In contrast, potential carbon emission coefficient change, energy technical efficiency change and CO2 technical efficiency change had negative impacts on the CEI decrease. At the provincial level, twenty-two provinces contributed positively to the aggregate CEI decline; Shandong made the largest contribution to the CEI decrease, while Anhui and Xinjiang were the top two contributors to the CEI increase. The specific suggestions are as follows. (1) The transportation sector needs to accelerate the optimization of its energy structure. It should focus on clean energy with low carbon content, gradually change the petroleum-dominated energy structure and form an energy structure in which petroleum remains the main body but multiple clean energy sources are jointly developed. (2) China has room for improvement in reducing CEI through technical efficiency change. The transportation sector should not be limited to investment in hardware technology but should pay more attention to investment in management technology: rationally adjusting the transportation structure, improving transportation management and dispatch, strengthening the technical training of practitioners, and improving operational and transportation efficiency. (3) Technological progress is the essential measure for achieving the CEI targets. The Chinese government can formulate policies to promote the development and application of advanced technologies in the transportation industry and gradually reduce the proportion of vehicles that use diesel and gasoline as their primary energy source. In addition, because technology levels differ across regions, the government can promote technological exchanges among regions and provinces to diversify technologies.
    Research on Optimal Resource Input Strategy for Product Modular R&D Knowledge Coordination
    ZHENG Jiangbo, LI Junting
    2023, 32(4):  147-154.  DOI: 10.12005/orms.2023.0128
    Abstract ( )   PDF (1236KB) ( )
    References | Related Articles | Metrics
    Since product R&D is a relatively complex knowledge-based task, involving the analysis of consumer preferences, market trends, technical routes, process realization and so on, the knowledge required for R&D is distributed among different functional module suppliers. For practical reasons, system integrators can only divide modules according to product functions and hand them over to these suppliers for research and development, which creates coordination requirements based on the division of labor. Modular R&D is therefore a cognitive process in which module suppliers and system integrators, starting from the division of knowledge across product functional modules and the knowledge interdependence between modules, gradually form a consensus on module R&D and integration. The knowledge coordination activities centered on knowledge learning between module suppliers and system integrators play a key role in this process.
    This paper first analyzes the knowledge coordination mechanism of module suppliers and the principle by which their knowledge stock grows. Module suppliers should not only apply and develop the core knowledge of their own module but also master the systematic knowledge about the interaction and integration between modules. Therefore, this paper proposes that module suppliers' knowledge learning includes individual learning and collaborative learning. Individual learning refers to self-learning aimed at the knowledge needs of the module itself, with the purpose of improving the supplier's core knowledge; collaborative learning refers to interactive learning with system integrators aimed at the knowledge interdependence between modules, with the purpose of mastering the systematic knowledge about interaction and integration between modules. With continuous learning, the knowledge depth and knowledge breadth of module suppliers keep changing until they meet the knowledge needs of module development. Based on existing research, this paper holds that knowledge depth is positively correlated with module suppliers' R&D performance, while knowledge breadth has an inverted U-shaped relationship with module R&D performance.
    Both individual learning and collaborative learning require resource input, and changes in a module supplier's knowledge stock (in the two dimensions of knowledge breadth and knowledge depth) affect its R&D capability and performance. Therefore, module suppliers must consider how to dynamically invest limited resources in improving their knowledge breadth and depth, given the knowledge interdependence among R&D modules, and on this basis seek the optimal resource investment strategy. Drawing on optimal control theory, with the goal of maximizing module R&D performance, an optimal control model of the module supplier's resource investment is constructed, and the resource investment strategies in different situations are analyzed and summarized, including the termination times of individual learning and collaborative learning, the main objects of resource input, and the optimal input intensity.
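    A minimal illustrative form of such an optimal control problem, with all functional forms and notation assumed rather than taken from the paper, is:

```latex
\max_{u_1(t),\,u_2(t)\ge 0}\ \int_{0}^{T}\pi\bigl(D(t),B(t)\bigr)e^{-\rho t}\,dt
\quad\text{s.t.}\quad
\dot{D}(t)=f_1\bigl(u_1(t),u_2(t)\bigr),\quad
\dot{B}(t)=f_2\bigl(u_2(t)\bigr),\quad
u_1(t)+u_2(t)\le R,
```

where D and B denote knowledge depth and breadth, u_1 and u_2 the resource inputs to individual and collaborative learning, R the resource budget, and π the R&D performance function (increasing in D, inverted-U in B); candidate optimal policies are then characterized through the Hamiltonian H = π(D,B)e^{-ρt} + λ_D·Ḋ + λ_B·Ḃ.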
    The analysis yields the following conclusions. (1) To achieve effective knowledge coordination in modular R&D, module suppliers should determine the optimal resource investment strategy according to the knowledge interdependence between modules. The strategies include: shifting from mainly individual learning to mainly collaborative learning; shifting from mainly collaborative learning to mainly individual learning; individual learning dominated; and collaborative learning dominated. (2) After determining the optimal resource investment strategy, the module supplier should adjust the intensity of resource input according to the marginal effects of individual and collaborative learning on knowledge depth and knowledge breadth, and the marginal contribution of knowledge breadth. Module suppliers cannot commit excessive resources to a particular type of learning: when the comprehensive marginal return of continued individual learning reaches zero, the resource input to individual learning should be terminated; when the comprehensive marginal return of continued collaborative learning reaches zero, the resource input to collaborative learning should be terminated.
    Optimal Credit Risk Evaluation Index System of Small Business from the Perspective of Likelihood Function
    BAI Xuepeng, ZHAO Zhichong
    2023, 32(4):  155-161.  DOI: 10.12005/orms.2023.0129
    Abstract ( )   PDF (969KB) ( )
    References | Related Articles | Metrics
    Small enterprises play an important role in supporting employment and improving the vitality of national economic development, but financing for small enterprises remains difficult and expensive. A large number of studies have been conducted on the credit risk of small enterprises, and significant results have been achieved in risk control: for example, at the end of 2021 the non-performing loan ratio of Chinese commercial banks was only 1.73%. However, the balance of non-performing loans of Chinese commercial banks was as high as 2.8 trillion yuan, so although risk judgment in this field has had a certain effect, there is still much room for improvement. A reasonable credit risk evaluation of small enterprises helps to ease their financing difficulties, promote financial development and increase employment. The premise of evaluating the credit risk of small enterprises is to establish a reasonable credit risk evaluation index system. This involves two scientific issues: first, how to select indicators that can be used for credit evaluation, that is, indicators with default discrimination ability; second, different combinations of indicators form different index systems, and n indicators can form 2^n-1 index systems, so how should the optimal credit risk evaluation index system be selected? This paper puts forward a new standard that measures the default discrimination ability of indicators and of index systems by the value of the log-likelihood function. Taking the maximum of the log-likelihood function as the objective, a 0-1 integer program is constructed, and the small-enterprise credit risk evaluation index system with the greatest default discrimination ability is obtained with a genetic algorithm. The small-enterprise loan data of a regional Chinese commercial bank in 28 regions, including Beijing, Tianjin, Dalian and Chengdu, are selected as the empirical data; all of these loans have been settled, because for outstanding loans it cannot be determined whether a default will occur. Based on these loan data, a credit evaluation index system covering 17 indicators is established, such as "operating profit margin", "credit situation of small enterprises", "Engel coefficient", "cash recovery rate of all assets", "cash content of net profit", "ratio of net assets to loan balance at the end of the year" and "liquidity of collateral". The index system constructed in this paper is compared with the index systems constructed by the forward search algorithm, the backward search algorithm and the single-indicator screening method. In terms of the overall discrimination accuracy and the Type II error determined by the confusion matrix, the index system constructed by this model has higher default risk discrimination accuracy. Research prospect: by deleting invalid indicators that are not significantly related to small-enterprise default, this paper selects, among the different combinations of the remaining indicators, the combination with the greatest default identification ability to form the optimal credit risk evaluation system for small enterprises.
In this process, the possibility is not considered that a combination of individually invalid indicators may itself be valid, so the next step of research will not delete individually invalid indicators but will instead select the optimal index system among the combinations of all indicators. This research improves the rating theory and methods of credit risk management, establishes a small-enterprise credit evaluation system, and makes up for the deficiencies of the existing bank credit rating systems.
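    A compact Python sketch of this kind of search, assuming a logistic default model whose total log-likelihood serves as the fitness of a 0-1 indicator mask and a basic genetic algorithm with tournament selection, one-point crossover and bit-flip mutation (all data and GA settings here are illustrative), is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17))                       # 17 candidate indicators (toy)
y = (X[:, 0] - 0.8 * X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 1, 500) > 0).astype(int)

def log_likelihood(mask):
    """Fitness of an indicator subset: the log-likelihood of a logistic default
    model fitted on the selected columns (larger = stronger discrimination)."""
    if mask.sum() == 0:
        return -np.inf
    m = LogisticRegression(max_iter=1000).fit(X[:, mask == 1], y)
    p = m.predict_proba(X[:, mask == 1])[:, 1]
    return -log_loss(y, p, normalize=False)           # total log-likelihood

def genetic_search(n_bits=17, pop=30, gens=40, pm=0.05):
    P = rng.integers(0, 2, size=(pop, n_bits))
    for _ in range(gens):
        fit = np.array([log_likelihood(ind) for ind in P])
        # tournament selection
        idx = [max(rng.choice(pop, 2, replace=False), key=lambda i: fit[i]) for _ in range(pop)]
        P = P[idx]
        # one-point crossover between consecutive pairs
        for i in range(0, pop - 1, 2):
            c = rng.integers(1, n_bits)
            P[i, c:], P[i + 1, c:] = P[i + 1, c:].copy(), P[i, c:].copy()
        # bit-flip mutation
        flip = rng.random(P.shape) < pm
        P = np.where(flip, 1 - P, P)
    fit = np.array([log_likelihood(ind) for ind in P])
    return P[fit.argmax()], fit.max()

best_mask, best_ll = genetic_search()
print("selected indicators:", np.flatnonzero(best_mask), "log-likelihood:", round(best_ll, 1))
```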
    Group Evaluation Method and Its Application Based on Subject-object Collaboration with Heterogeneous Preference Structures
    ZHANG Faming, NIU Yufei, WANG Weiming
    2023, 32(4):  162-168.  DOI: 10.12005/orms.2023.0130
    Abstract ( )   PDF (1148KB) ( )
    References | Related Articles | Metrics
    Due to the increasingly complex decision-making environment, the resolution of some large-scale decision-making problems often requires group evaluation methods to improve scientific soundness and decision quality. However, most previous studies have not considered the participation of the object side, and the applicable patterns have not been sufficiently extended. At the same time, most existing group evaluation methods based on subject-object collaboration do not consider the situation in which the two sides prefer to express their opinions with heterogeneous preference information; the interaction between the subject and object sides is also relatively limited, and the comprehensive correction effect on the information is not obvious. To address these problems, this paper proposes a new collaborative group evaluation method based on heterogeneous preference information. After the evaluation opinions of the two sides are expressed with uncertain linguistic variables and interval grey numbers respectively, the interaction and fusion of their opinions are realized through two collaborative processes.
    The goal of the "First Collaboration" between the subject and the object is to correct the evaluation information. Firstly, a collaborative initial score is given to each evaluation value under principles that are consistent for both sides: the calculations are based on the deviation degree of the uncertain linguistic variables and on a relatively accurate integrated value of the interval grey numbers, respectively. Secondly, the "Completeness Degree" and "Trustworthy Degree" of the corresponding information are calculated from the collaborative initial scores, and adjustment factors are used to correct the evaluation information of both sides, completing the first collaboration. Subsequently, the "Second Collaboration" between the two sides is launched, with the goal of aggregating the evaluation information. Firstly, based on the TOPSIS method, the corrected evaluation information of the subject and object sides is preliminarily fused to obtain comprehensive proximity degrees, forming a comprehensive proximity matrix. Secondly, an ordered group clustering method based on row division is proposed, and grey relational clustering is applied to the ordered group clustering of each side's comprehensive proximity matrix. Finally, based on the TDWA operator, the two-dimensional comprehensive proximity information is aggregated to obtain each side's comprehensive evaluation result, and the revised comprehensive opinions of the two sides are then aggregated to obtain the subject-object collaborative evaluation result, completing the second collaboration.
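    As a point of reference, the TOPSIS step used in the second collaboration can be illustrated with a plain crisp TOPSIS routine in Python; the supplier scores and weights below are hypothetical, and the paper applies TOPSIS to the corrected heterogeneous preference information rather than to crisp numbers.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit=None):
    """Plain TOPSIS: vector-normalize, weight, and rank alternatives by their
    relative closeness to the positive ideal solution."""
    A = np.asarray(decision_matrix, float)
    w = np.asarray(weights, float)
    benefit = np.ones(A.shape[1], bool) if benefit is None else np.asarray(benefit, bool)
    V = w * A / np.linalg.norm(A, axis=0)                 # weighted normalized matrix
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0)) # positive ideal solution
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0)) # negative ideal solution
    d_pos = np.linalg.norm(V - pis, axis=1)
    d_neg = np.linalg.norm(V - nis, axis=1)
    return d_neg / (d_pos + d_neg)                        # closeness in [0, 1]

# Toy corrected scores of four supplier candidates on three criteria.
scores = [[0.72, 0.61, 0.80],
          [0.65, 0.70, 0.75],
          [0.80, 0.55, 0.68],
          [0.70, 0.66, 0.73]]
print(np.round(topsis(scores, weights=[0.4, 0.3, 0.3]), 3))
```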
    The proposed method is applied to a practical example of evaluating and selecting medical equipment supplier partners, and the partner with the highest score is the best supplier for the decision maker to select. On this basis, to illustrate the rationality and effectiveness of the method, the collaboration process between the subject and the object is removed and internal comparisons are conducted. The results show that, compared with the independent evaluation results of the subject and the object without collaboration, the two collaborations effectively realize the interaction and fusion of heterogeneous preference opinions. Moreover, to illustrate its advantages, group evaluation methods based on subject-object collaboration from previous studies are introduced for external comparison. The results show that the two collaborative processes effectively enhance the accuracy and depth of the information interaction between the two sides, and the rationality and stability of the evaluation results are also improved. Nevertheless, it should be pointed out that this method is mainly applicable to situations in which the evaluation object is capable of active, conscious behavior; extending it to other application scenarios will be the subject of further in-depth research.
    Agricultural Natural Disaster Insurance Mechanism Design with CVaR Risk Measurement Criterion
    LIN Qiang, LIU Huang, XU Junxin, LIN Xiaogang, ZHOU Yongwu
    2023, 32(4):  169-176.  DOI: 10.12005/orms.2023.0131
    Abstract ( )   PDF (1255KB) ( )
    References | Related Articles | Metrics
    Agricultural natural disasters have become a key factor influencing the incomes of a tremendous number of small farmers, and agricultural insurance has emerged as an effective lever for dealing with them. Insurance companies have successfully promoted insurance types such as output insurance and weather index insurance. Output insurance, one of the most common products in the market, takes the output of the crops planted by the insured as the insured object, and the insurer pays compensation according to the shortfall of the actual output below the insured output. Weather index insurance is an innovation in agricultural insurance that can meet farmers' security demands and effectively protect their interests; it is an insurance mechanism triggered by a weather index, and once the trigger condition is reached, the insurance company pays benefits to the insured according to the weather index regardless of whether the insured has actually suffered a loss. The existing literature has rarely considered using agricultural insurance to transfer such uncertain risks, and the agricultural insurance literature has only considered the impact of a single insurance mechanism, while in practice the variety of agricultural insurance products leaves farmers with a difficult choice. Therefore, from the perspective of farmers, considering how risk-averse farmers transfer natural disaster risk in the production process through agricultural insurance when weather conditions and effort level jointly affect output, this paper analyzes how farmers make insurance participation and selection decisions, as well as the impact of different insurance mechanisms on farmers' production decisions.
    This paper employs the conditional value-at-risk (CVaR) criterion to characterize farmers' risk aversion and builds a Stackelberg game model between risk-averse farmers and a risk-neutral insurance company. Considering that weather conditions and the effort level of farmers' production inputs jointly affect output, we first study the farmers' optimal effort decision and utility under different agricultural natural disaster insurance schemes, and the insurance company's optimal compensation decision and expected income, and then discuss how the degree of risk aversion influences the optimal decisions and the utility or income of both sides. Second, we compare the optimal decisions and utility or expected income of both sides under output insurance and weather index insurance, so that they know how to choose agricultural natural disaster insurance. Finally, through numerical analysis, we examine the effects of the degree of risk aversion, the insured output and the weather index on the farmer's utility, the insurance company's expected income and the insurance type selection.
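    One common Rockafellar-Uryasev-type representation of the CVaR criterion for a risk-averse profit maximizer, written in notation of our own rather than the paper's, is

```latex
\operatorname{CVaR}_{\eta}(\pi)
=\max_{v\in\mathbb{R}}\Bigl\{\,v-\tfrac{1}{\eta}\,\mathbb{E}\bigl[(v-\pi)^{+}\bigr]\Bigr\},
\qquad \eta\in(0,1],
```

where π is the farmer's random profit and η the risk-aversion level: the farmer effectively evaluates the expected profit over the worst η-fraction of outcomes, and η = 1 recovers the risk-neutral expectation.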
    The main results suggest that: (i) output insurance restrains the effort level of farmers' production inputs, while weather index insurance can encourage farmers to raise their effort level and thereby increase the supply of agricultural products; the utility of farmers under weather index insurance, however, is not necessarily higher than that under output insurance; (ii) whether farmers adopt insurance strongly depends on the degree of risk aversion: only when risk aversion is sufficiently salient will farmers adopt agricultural natural disaster insurance, and as the degree of risk aversion increases their optimal choice shifts from output insurance to weather index insurance, which is also confirmed by the numerical analysis.
    Our model has several managerial implications. First, when the degree of risk aversion is low, participating in output insurance better copes with natural disasters, whereas when the degree of risk aversion is high, participating in weather index insurance yields a higher risk-adjusted return. Second, for farmers with low risk aversion, providing output insurance is the better choice for insurance companies; for farmers with a high degree of risk aversion, insurance companies are more willing to provide weather index insurance, which may achieve a win-win outcome for both sides.
    Short-term Forecasting of Stock Index Price Based on Hybrid Model
    GUAN Yongfeng, YU Min
    2023, 32(4):  177-183.  DOI: 10.12005/orms.2023.0132
    Abstract ( )   PDF (1713KB) ( )
    References | Related Articles | Metrics
    With the rapid development of the social economy, the economic environment has become increasingly complex. Trading in stocks, gold and other financial products has captured the attention of more and more investors. Market behavior reflects all available information and exhibits a high degree of randomness and volatility, so stock investment is a high-risk, high-return economic activity. As one of the main markets, China's stock market plays a key role in the global financial market. Accurate prediction of the stock index not only attracts the attention of investors and many scholars but is also of great significance to government regulatory authorities.
    At present, research on stock index price prediction has produced many results, mainly involving time series analysis, machine learning, deep learning and reinforcement learning algorithms. These methods have performed well in stock index prediction. However, the stock market is a complex, nonlinear dynamic system, so a single prediction model struggles to capture all the information contained in stock index data. Before predicting the stock price, it is necessary to make the series more stationary so that the prediction model can achieve better accuracy. Traditional approaches such as differencing lose information about the original data, so some scholars have used multi-scale decomposition algorithms to stabilize the stock index series and have achieved good results.
    To avoid error accumulation in single-model forecasting, this paper adopts a hybrid model that combines an improved empirical mode decomposition algorithm (HF-EMD) with an extreme learning machine (ELM) optimized by the particle swarm optimization algorithm (PSO) for the short-term prediction of stock index prices. Firstly, in data preprocessing, a high-frequency harmonic signal is added to improve EMD. With the aid of the high-frequency harmonic, the extracted signal components are more stable, which effectively reduces the influence of noise in the stock index data. The original stock index series is therefore decomposed by the HF-EMD algorithm into several stable mode components.
Then, considering the defects of traditional neural network models, such as slow convergence, a tendency to fall into local optima, and too many parameters, this paper uses the ELM model to predict the stock index data, since it offers better computational speed and prediction accuracy. When using the ELM model to predict the stock index price, because the initial weights and thresholds are random, one usually only adjusts the number of neurons in the hidden layer, which shortens parameter tuning time and effectively meets the requirements of real-time prediction of the stock index price. However, since the initial weights and thresholds of the ELM are random, the forecasting results are unstable. In this paper, the PSO algorithm is used to optimize the ELM model, which reduces the deviation of the network output and improves the stability and robustness of the model. The PSO-ELM model is therefore used to predict each decomposed component, and the predicted values of all components are summed to obtain the total predicted value of the stock index price.
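As a rough illustration of this pipeline (not the authors' code), the sketch below decomposes a series after injecting a known high-frequency harmonic, fits a small ELM to each mode component with a compact PSO search over its random hidden layer, and sums the component forecasts. The PyEMD package, the lag order, the swarm settings and the synthetic random-walk series are assumptions made here for illustration.

```python
# Minimal sketch of the HF-EMD + PSO-ELM idea (illustrative only, not the paper's code).
# Assumptions: PyEMD ("EMD-signal" package) for EMD; a toy ELM and compact PSO written here;
# a synthetic random-walk series stands in for the real stock index data.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(0)
LAGS = 5

def make_lagged(series, lags=LAGS):
    """Turn a 1-D series into (X, y) pairs using the previous `lags` values as features."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    return X, series[lags:]

def elm_predict(Xtr, ytr, Xte, W, b):
    """ELM: fixed random hidden layer (W, b); output weights solved by least squares."""
    beta = np.linalg.pinv(np.tanh(Xtr @ W + b)) @ ytr
    return np.tanh(Xte @ W + b) @ beta

def pso_elm(Xtr, ytr, Xval, yval, hidden=10, particles=20, iters=30):
    """Tiny PSO over the ELM hidden-layer weights/biases, scored by validation MSE."""
    dim = Xtr.shape[1] * hidden + hidden
    unpack = lambda p: (p[:Xtr.shape[1] * hidden].reshape(Xtr.shape[1], hidden),
                        p[Xtr.shape[1] * hidden:])
    pos = rng.normal(size=(particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.full(particles, np.inf)
    gbest, gbest_f = pos[0].copy(), np.inf
    for _ in range(iters):
        for i in range(particles):
            W, b = unpack(pos[i])
            f = np.mean((elm_predict(Xtr, ytr, Xval, W, b) - yval) ** 2)
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i].copy()
            if f < gbest_f:
                gbest_f, gbest = f, pos[i].copy()
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
    return unpack(gbest)

# Synthetic "stock index" series (placeholder for the real index data).
price = np.cumsum(rng.normal(0, 1, 600)) + 3000.0
# HF-EMD idea: superimpose a known high-frequency harmonic before decomposition.
hf = 5.0 * np.sin(2 * np.pi * 0.45 * np.arange(len(price)))
imfs = EMD().emd(price + hf)

n = len(price) - LAGS
n_tr, n_val = int(0.7 * n), int(0.85 * n)
total_pred = np.zeros(n - n_val)
for imf in imfs:                      # forecast every mode component, then sum
    X, y = make_lagged(imf)
    W, b = pso_elm(X[:n_tr], y[:n_tr], X[n_tr:n_val], y[n_tr:n_val])
    total_pred += elm_predict(X[:n_val], y[:n_val], X[n_val:], W, b)

# Remove the injected harmonic (known exactly) to recover the price forecast.
price_pred = total_pred - hf[LAGS:][n_val:]
true_test = price[LAGS:][n_val:]
print("test MSE of the hybrid forecast:", np.mean((price_pred - true_test) ** 2))
```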
Based on four sets of representative stock index data, including the SSEC and the Hang Seng Index, we show that the hybrid model proposed in this paper can effectively capture the variation patterns of stock index data and has a good prediction effect. This project has been supported by the National Natural Science Foundation of China under Grant 51877161.
    Research on the Influence of Domestic Crude Oil Futures Price from the Perspective of Spillover Effect
    DENG Chao, WU Zhiping, PENG Cheng, YAO Haixiang
    2023, 32(4):  184-191.  DOI: 10.12005/orms.2023.0133
China is the world's largest importer and second largest consumer of crude oil, but its influence on global crude oil pricing is insufficient and disproportionate to its position in the market. Obtaining crude oil pricing power is strategically important for ensuring China's energy stability and enhancing its international status. Moreover, as the world's largest producer, consumer, and importer of commodities, China has established spot and futures markets for major commodities such as agricultural products, metals, and energy. It also launched the RMB-denominated China crude oil futures (INE) in March 2018. Currently, INE is the third largest crude oil futures contract in the world, after the WTI crude oil futures traded on the New York Mercantile Exchange and the Brent crude oil futures traded in London. The establishment of China's crude oil futures market can safeguard national strategic security, improve the pricing mechanism of refined oil products, and take a crucial step toward gaining pricing power in the global crude oil market. In this context, exploring how domestic and foreign crude oil futures prices affect domestic commodity futures can help us understand the status and influence of domestic crude oil futures in the global market, formulate risk prevention and control strategies for China's crude oil futures market, and prevent systemic financial risks. This has significant theoretical and practical implications for hedgers, arbitrageurs, and policy makers in China's commodity futures market.
A review of the relevant literature reveals that current research on the linkage between the crude oil futures market and commodity markets mainly focuses on return and volatility spillovers. However, the COVID-19 pandemic and geopolitical conflicts have caused frequent black swan events in the crude oil futures market and large fluctuations in domestic commodity prices. Thus, it is crucial to examine the correlation of extreme risks between domestic and foreign crude oil futures markets and domestic commodity markets. Moreover, most existing studies on the correlation of China's crude oil futures focus on its price discovery, price fluctuation, and risk spillover with major international crude oil futures markets, while few investigate its risk linkage with other domestic and foreign markets. In fact, changes in commodity prices not only affect other financial markets by influencing the fundamental factors of the real economy, but, as commodities become more financialized, also increase the information transmission and capital flow between commodity and crude oil futures markets. This leads to more cross-market risk contagion and a higher likelihood of financial crises. Therefore, this paper explores the spillover effect of domestic and foreign crude oil futures on domestic commodities.
This paper constructs a two-step model to measure the correlation between domestic and foreign crude oil futures markets and domestic commodity markets: 1) It calculates the return series of each market and uses the AR(1)-GARCH(1,1) model, or another appropriate GARCH-family model, to estimate the conditional volatility series and the extreme risk (VaR) series for each market. 2) It applies the DY spillover index method proposed by Diebold and Yilmaz to compute the spillover indicators among markets based on the returns, volatilities, and extreme risk (VaR) values obtained in the first step. The domestic and foreign crude oil futures include China's crude oil futures (INE) and the major international crude oil futures, Brent and WTI. The Chinese commodities consist of the commodity futures general index (CCFI) and six commodity sector futures indexes. The six commodity sectors are chemicals (CIFI), grains (CRFI), energy (ENFI), non-ferrous metals (NFFI), oils (OOFI), and soft commodities (SOFI). The sample period ranges from March 26, 2018, the initial trading date of China's crude oil futures, to May 14, 2021, totaling 751 trading days of price index data. All data are from the Wind Financial Database (http://www.wind.com.cn/En).
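A compact sketch of this two-step procedure is given below, using the `arch` package for the AR(1)-GARCH(1,1) step and statsmodels for the VAR underlying a Diebold-Yilmaz-style generalized spillover index. The synthetic return data, the placeholder market names, the lag and horizon choices, and the normal-quantile VaR formula are assumptions made for illustration, not the paper's exact specification.

```python
# Sketch of the two-step spillover measurement (illustrative; synthetic data stand in for
# INE/Brent/WTI and the commodity indexes). Step 1: AR(1)-GARCH(1,1) conditional volatility
# and a 5% normal VaR per market. Step 2: a Diebold-Yilmaz-style total spillover index from
# the generalized forecast-error variance decomposition of a VAR.
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
markets = ["INE", "Brent", "WTI", "CCFI"]            # placeholder market names
returns = pd.DataFrame(rng.normal(0, 1, (751, 4)), columns=markets)  # synthetic % returns

vol, var5 = {}, {}
for m in markets:
    res = arch_model(returns[m], mean="AR", lags=1, vol="Garch", p=1, q=1).fit(disp="off")
    vol[m] = res.conditional_volatility
    # 5% VaR under conditional normality: mu - 1.645 * sigma_t (a simplifying assumption)
    var5[m] = res.params["Const"] - 1.645 * res.conditional_volatility
vol, var5 = pd.DataFrame(vol).dropna(), pd.DataFrame(var5).dropna()

def dy_spillover(df, var_lags=2, horizon=10):
    """Total spillover index (%) from the generalized FEVD of a VAR, as in Diebold-Yilmaz."""
    res = VAR(df.values).fit(var_lags)
    psi = res.ma_rep(maxn=horizon - 1)               # MA matrices Psi_0 .. Psi_{H-1}
    sigma = res.sigma_u
    k = df.shape[1]
    theta = np.zeros((k, k))
    for i in range(k):
        denom = sum(psi[h][i] @ sigma @ psi[h][i] for h in range(horizon))
        for j in range(k):
            num = sum((psi[h][i] @ sigma[:, j]) ** 2 for h in range(horizon)) / sigma[j, j]
            theta[i, j] = num / denom
    theta = theta / theta.sum(axis=1, keepdims=True)  # row-normalize the variance shares
    return 100.0 * (theta.sum() - np.trace(theta)) / k

for name, series in [("return", returns), ("volatility", vol), ("extreme risk (VaR)", var5)]:
    print(f"total {name} spillover index: {dy_spillover(series):.1f}%")
```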
Using the DY spillover index model to quantify the spillovers of returns, volatilities, and extreme risks among domestic and foreign crude oil futures markets and Chinese commodity markets, and analyzing their dynamic spillover effects, we find a high risk correlation between crude oil futures markets and Chinese commodity markets. China's crude oil futures market has become a significant information receiver and transmitter in this spillover system, and different types of spillovers show different features: return spillover exhibits the strongest correlation, volatility spillover has the widest fluctuation range, and extreme risk spillover shows a weaker correlation. Moreover, different commodity sectors react differently to changes in domestic and foreign crude oil futures markets: chemicals, grains, and non-ferrous metals react more strongly, while energy, oils, and soft commodities react more weakly, which suggests that the latter commodity futures can provide some risk hedging. Furthermore, in the short term, the risk correlation between commodity markets and crude oil futures markets has remained high, with evident time-varying characteristics. At the same time, given the frequent black swan events in the international crude oil futures market in recent years, extreme changes in international crude oil prices also cause extreme changes in China's futures and commodity markets. Finally, China's crude oil futures have increased their status and influence in the global crude oil market and have gradually come to dominate the impact on China's commodity market.
    Study of the Stock Market Risk Warning Based on GWO-SVM
    ZHANG Heli, CHUN Weide, CHUN Zhengjie, PU Junchong
    2023, 32(4):  192-197.  DOI: 10.12005/orms.2023.0134
With the advancement of China's reform and opening up in the 1980s, the emergence of joint-stock companies brought widespread attention to the construction of a stock market. As more and more joint-stock companies appeared and issued stocks, stock exchanges emerged to deepen the reform and opening up of the financial industry. During 1990 and 1991, the Shanghai Stock Exchange and the Shenzhen Stock Exchange were established, marking the beginning of China's stock market. Although the Chinese stock market started late, it has been continuously improved through ongoing reform. As an important component of the financial market, the stock market not only plays a crucial role for investors and listed companies but also stabilizes the national financial order and improves the ability to withstand risks. Throughout history, many financial crises have been triggered by stock market crashes, such as the Great Crash of 1929 in the US, the 1990 Japanese stock market collapse, the 2015 Chinese stock market plunge, and the global stock market crash of 2020 caused by the COVID-19 pandemic. Studies have also shown that the stock market is both a major transmitter and receiver of risk, and a stock market crash can cause panic among investors, lead to financial crises for listed companies, and even affect the operation of the whole socio-economy. Therefore, it is essential to provide early warning of stock market risk and to improve the risk resilience of investors, listed companies, and the government.
In the big data era, traditional linear prediction methods such as data simplification, the composite index method, and the financial stress index method are no longer accurate enough to describe financial market risks. The emergence of new machine learning algorithms such as Decision Trees (DT), Logistic Regression (LR), Random Forests (RF), Artificial Neural Networks (ANN), Copula-based methods, and the Support Vector Machine (SVM) has gone a long way toward addressing the problems of the big data era. As a machine learning method, SVM is frequently used for data analysis and regression problems because of its strong non-linear fitting ability, simple learning rules, ease of implementation, and ability to reach optimal decisions using a small number of support vectors, and it effectively handles the complexity of indicators in the era of big data. However, the traditional SVM is sensitive to missing data, and the selection of the penalty coefficient C and the kernel parameter g is subjective and empirical, which can consume a large amount of memory and time for large samples. Existing research shows that SVM has been widely used in corporate financial risk warning and financial market warning and has achieved certain results, but it has been applied less often to stock market risk warning. The key to preventing risk lies in constructing a reasonable early warning model, so the suitability of the SVM model for stock market prediction is a subject that needs further research.
Given the importance of risk warning for the stock market, and in response to the problems of the traditional SVM, such as difficult parameter selection and low prediction accuracy, this article proposes a Grey Wolf Optimizer Support Vector Machine (GWO-SVM) stock market risk warning model to improve China's stock market risk warning ability. The effectiveness of the model is tested using the Mean Absolute Error (MAE) and Mean Squared Error (MSE). The Grey Wolf Optimizer is an intelligent optimization algorithm proposed by scholars at Griffith University in Australia in 2014. Inspired by the hunting behavior of grey wolves, it is an optimization search method with strong convergence performance and few parameters that is easy to implement, and it can significantly improve the efficiency and prediction accuracy of SVM. Our paper focuses on the daily returns and volatility of eight major stock market indices in China. Daily returns comprehensively reflect the price changes and trends of stocks, while volatility effectively measures market sentiment and helps managers judge the macro trend of the market. Moreover, these eight indices have broad coverage, can basically represent the operation of the entire stock market, and are often used as benchmark indices of overall market risk. Therefore, the daily rate of return and volatility are selected as the research objects, with data collected from the CSMAR and RESSET databases. The results show that, compared with SVM, GS-SVM, GA-SVM, and PSO-SVM, GWO-SVM improves average runtime efficiency by 330% relative to the other three optimization algorithms. Meanwhile, the GWO-SVM model reduces MAE by 4% and MSE by 5% on average in predicting daily returns, and it also fits the trend of daily volatility closely. The model can therefore effectively improve the accuracy and efficiency of stock market risk prediction. By comparing the original and predicted data, GWO-SVM can accurately predict the fluctuation of the stock index, providing new ideas for stock market risk prediction in China. Future research will focus on characterizing risk indices and further optimizing the model to better analyze and predict stock market risks.
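For intuition, the sketch below shows one common way to couple a grey wolf optimizer to an SVM: the pack searches over the penalty C and the RBF kernel parameter gamma, scored by cross-validated MSE. It is an illustrative reconstruction, not the authors' implementation; the synthetic regression data, the search ranges, the pack size and the iteration count are assumptions.

```python
# Illustrative GWO-SVM sketch: the Grey Wolf Optimizer searches the SVR penalty C and RBF
# parameter gamma, scored by cross-validated MSE. Data, bounds and settings are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

rng = np.random.default_rng(42)
X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

LOW, HIGH = np.array([0.1, 1e-4]), np.array([100.0, 1.0])   # bounds for (C, gamma)

def fitness(pos):
    """Cross-validated MSE of an RBF-SVR with the candidate (C, gamma)."""
    model = SVR(kernel="rbf", C=pos[0], gamma=pos[1])
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

def gwo(n_wolves=10, n_iter=25):
    """Standard grey wolf optimizer: alpha/beta/delta leaders guide the pack."""
    wolves = rng.uniform(LOW, HIGH, size=(n_wolves, 2))
    scores = np.array([fitness(w) for w in wolves])
    for t in range(n_iter):
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]
        a = 2.0 - 2.0 * t / n_iter                       # control parameter decreases 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(2)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(2), rng.random(2)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0        # average of the three leader pulls
            wolves[i] = np.clip(new_pos, LOW, HIGH)
            scores[i] = fitness(wolves[i])
    best = wolves[np.argmin(scores)]
    return best, scores.min()

(best_C, best_gamma), best_mse = gwo()
print(f"GWO-selected C={best_C:.3f}, gamma={best_gamma:.5f}, CV MSE={best_mse:.2f}")
```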
    Parallel Strategy between Innovation and Supervision on Internet Finance
    LYU Xiumei
    2023, 32(4):  198-204.  DOI: 10.12005/orms.2023.0135
Innovation is a major feature of Internet finance. It can not only reduce information asymmetry and improve financial efficiency, but also better guide financial resources toward developing inclusive finance and serving the real economy. However, Internet finance corporations may also innovate in order to circumvent financial regulatory constraints. If regulatory authorities can supervise in a timely and effective manner, regulatory arbitrage and illegal acts can be avoided. There is thus a dynamic closed-loop game of "supervision and innovation" between regulatory authorities and Internet finance corporations. Whether supervision is excessive or insufficient, it is bound to inhibit Internet financial innovation; only moderate supervision can provide a favorable environment for financial innovation. However, the existing literature rarely deals with the dynamic game between Internet financial innovation and supervision, and rarely analyzes the game equilibrium between these two important participants. Therefore, this paper analyzes the impact of various factors on the strategic game between innovation and regulation and derives the possible parallel strategic paths. The paper enriches the research on the innovation and supervision of Internet finance, helps clarify the incentives and constraints facing innovating corporations and regulators, and provides theoretical support and policy inspiration for promoting the healthy development of Internet finance.
Using the evolutionary game method, the paper establishes a dynamic game model between the innovation and supervision of Internet finance and derives the evolutionary path of the parallel strategy. Firstly, we consider the different conditions under which Internet finance corporations and regulatory authorities do or do not participate in the game, and obtain the payoff matrix of the game between them. Secondly, the replicator dynamic equations for Internet finance corporations and regulatory authorities are derived, respectively, and the parallel strategy of the two parties is discussed under different conditions. Thirdly, the parallel strategy is simulated in Matlab to test the explanatory power of the game model in real life. Finally, relevant conclusions and suggestions are presented.
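To make the replicator-dynamics step concrete, the symbolic sketch below derives the two replicator equations and the interior rest point for a generic payoff matrix. The entries are placeholders for the quantities named in this abstract (innovation income, innovation cost, penalty, supervision cost, extra regulatory income), plus an assumed loss term for unsupervised violations; they are not the paper's actual matrix.

```python
# Symbolic sketch of the replicator dynamics for an innovation-supervision game.
# Payoff entries are generic placeholders (innovation income R, innovation cost Ci, penalty F,
# supervision cost Cs, extra regulatory income E, loss L from unsupervised violations);
# they are NOT the paper's matrix.
import sympy as sp

x, y = sp.symbols("x y", nonnegative=True)          # share innovating / share supervising
R, Ci, F, Cs, E, L = sp.symbols("R C_i F C_s E L", positive=True)

# Expected payoff of each pure strategy for the corporation population ...
u_innovate = y * (R - Ci - F) + (1 - y) * (R - Ci)
u_no_innov = 0
# ... and for the regulator population
u_supervise = x * (F + E - Cs) + (1 - x) * (-Cs)
u_no_superv = x * (-L)

# Replicator dynamics: a strategy's share grows in proportion to its payoff advantage.
dx = sp.simplify(x * (1 - x) * (u_innovate - u_no_innov))
dy = sp.simplify(y * (1 - y) * (u_supervise - u_no_superv))
print("dx/dt =", dx)
print("dy/dt =", dy)

# Interior rest point where both payoff advantages vanish (the mixed-strategy fixed point).
interior = sp.solve([sp.Eq(u_innovate, u_no_innov), sp.Eq(u_supervise, u_no_superv)], [x, y])
print("interior rest point:", interior)   # x* = Cs/(E + F + L), y* = (R - Ci)/F
```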
The research shows that the innovation willingness of Internet finance corporations is positively related to the innovation income and to the supervision cost. The innovation probability follows a normal-distribution-shaped (inverted U-shaped) relationship with the innovation cost and with the penalty that Internet finance corporations may pay for violating rules or laws. The regulatory willingness of the regulatory authorities is positively related to the possible innovation penalty and the extra regulatory income, and the regulatory probability follows a normal-distribution-shaped relationship with the regulatory cost. Only when the net income of supervision exceeds the regulatory cost will the regulatory authorities be driven to supervise the innovative financial business of Internet finance corporations, and only when the innovation income of Internet finance corporations exceeds the innovation cost and the possible penalty will these corporations have the motivation to innovate. In addition, the cost of innovation has no effect on the regulatory strategy of the regulatory authorities, but it is negatively correlated with the probability of the non-regulatory strategy. Therefore, Internet finance corporations should take compliance as the premise, improve their innovation income through innovation in methods, models, and products, and reduce innovation costs through FinTech and diversified marketing channel innovation. Regulatory authorities need to reduce regulatory costs by innovating SupTech, optimizing the supervision of Internet finance business, and communicating well with corporations. It is also necessary to take various measures to stimulate and encourage innovation when Internet finance corporations are unwilling to innovate.
Internet finance is known as FinTech 2.0 and has driven the explosive growth of China's finance through innovation in business logic and channels. As FinTech develops, SupTech, which means "regulating technology with technology", is gradually splitting off from FinTech. Therefore, further research can focus on the parallel strategy between FinTech and SupTech. Specifically, we need to study the game strategies of the micro entities that participate in FinTech and SupTech innovation, analyze the impact of their benefits and costs on the choice of game strategy, and propose incentives to encourage financial institutions and regulatory authorities to cooperate in the innovative development of FinTech and SupTech.
    Management Science
    Study of Total Factor Productivity of Environmental Service Enterprises Based on the Three-stage DEA-Malmquist Model
    HU Dongbin, ZHOU Pu, CHEN Xiaohong
    2023, 32(4):  205-211.  DOI: 10.12005/orms.2023.0136
The level of development of the environmental services industry, an important part of the environmental protection industry, is an important indicator of the maturity of a country's environmental protection industry. In developed countries, the market share of environmental services is close to 70%, as their environmental protection equipment markets have become saturated. According to calculations by the China Environmental Protection Industry Association, China's environmental services revenue accounted for 53% of the environmental protection industry in 2016, which still leaves a gap with developed countries. Despite the rapid development of China's environmental service industry, environmental service companies still face problems such as a late start, a single service model, and a lack of innovation capability. As an emerging strategic industry, does the high growth of environmental service enterprises imply high-quality growth? At present, research on environmental service enterprises is relatively scarce, especially on their productivity from a micro perspective, and it lacks systematic study and objective empirical support. This paper aims to answer the following questions: What is the current productivity of environmental service firms? What factors drive firms' productivity levels? What are the barriers to firms' efficiency growth? What are the productivity levels of environmental service firms in different segments?
The main methods for measuring total factor productivity (TFP) are the traditional Solow residual method, the extended Solow model, the stochastic frontier model, and the DEA-Malmquist model. Compared with the other methods, the DEA-Malmquist model does not require specifying functional forms for inputs and outputs, which makes it less restrictive and its measurement results more objective. In addition, it can decompose total factor productivity and explore the drivers behind productivity changes. However, the efficiency values measured by the traditional DEA method are susceptible to environmental and random factors. In this paper, we use the three-stage DEA-Malmquist model to remove the influence of environmental and random factors on productivity. The specific research design is as follows: Firstly, the DEA-Malmquist model is used to measure the total factor productivity of environmental service firms. Secondly, an SFA model is constructed to estimate the effects of environmental variables on the input slack variables, using each input slack variable measured in the first stage as the explained variable and the environmental variables as the explanatory variables. Finally, based on the regression results of the second stage, the environmental and random factors are removed from the input indicators, and the DEA-Malmquist model is applied again to the adjusted input data to measure the total factor productivity of the enterprises.
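The sketch below illustrates the first and third stages only: input-oriented CRS DEA efficiencies solved as linear programs, combined into a Malmquist index between two periods in the usual geometric-mean form. The second-stage SFA adjustment of the input slacks is omitted for brevity, and the panel of inputs and outputs is synthetic rather than the 63 listed firms studied in the paper.

```python
# Sketch of stages 1 and 3 of the procedure above: input-oriented CRS DEA via linear
# programming and the Malmquist TFP index between two periods. The stage-2 SFA adjustment
# is omitted; inputs/outputs are synthetic placeholders.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)

def dea_efficiency(x0, y0, X_ref, Y_ref):
    """min theta s.t. the reference technology can produce y0 using at most theta * x0."""
    n = X_ref.shape[0]
    c = np.r_[1.0, np.zeros(n)]                          # minimize theta
    # inputs:  X_ref' lambda - theta * x0 <= 0
    A_in = np.hstack([-x0.reshape(-1, 1), X_ref.T])
    # outputs: -Y_ref' lambda <= -y0   (i.e. Y_ref' lambda >= y0)
    A_out = np.hstack([np.zeros((Y_ref.shape[1], 1)), -Y_ref.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X_ref.shape[1]), -y0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

def malmquist(X_t, Y_t, X_t1, Y_t1, dmu):
    """Malmquist TFP index for one DMU between period t and t+1 (geometric-mean form)."""
    d_tt   = dea_efficiency(X_t[dmu],  Y_t[dmu],  X_t,  Y_t)    # D^t(x^t, y^t)
    d_t1t1 = dea_efficiency(X_t1[dmu], Y_t1[dmu], X_t1, Y_t1)   # D^{t+1}(x^{t+1}, y^{t+1})
    d_t_t1 = dea_efficiency(X_t1[dmu], Y_t1[dmu], X_t,  Y_t)    # D^t(x^{t+1}, y^{t+1})
    d_t1_t = dea_efficiency(X_t[dmu],  Y_t[dmu],  X_t1, Y_t1)   # D^{t+1}(x^t, y^t)
    return np.sqrt((d_t_t1 / d_tt) * (d_t1t1 / d_t1_t))

# Synthetic panel: 15 firms, 3 inputs, 2 outputs, two periods (a stand-in for the real data).
n_firm = 15
X_t  = rng.uniform(1, 10, (n_firm, 3));  Y_t  = rng.uniform(1, 5, (n_firm, 2))
X_t1 = X_t * rng.uniform(0.9, 1.1, X_t.shape)
Y_t1 = Y_t * rng.uniform(1.0, 1.2, Y_t.shape)                   # mild output growth

mi = [malmquist(X_t, Y_t, X_t1, Y_t1, k) for k in range(n_firm)]
print("mean Malmquist index (>1 means TFP growth):", round(float(np.mean(mi)), 3))
```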
Based on data from 63 listed environmental service enterprises in China from 2015 to 2018, this paper uses the three-stage DEA-Malmquist model to eliminate the influence of environmental and random factors and conducts an empirical analysis of these enterprises' TFP. The results show that the enterprises' TFP is sensitive to the external environment, which verifies the necessity of considering environmental and random factors. In recent years, the TFP of the environmental service industry has shown an improving trend, with environmental monitoring being the fastest-growing sub-sector. The TFP of environmental service enterprises is dominated by technical efficiency: compared with technological innovation, management decision-making ability plays a greater role in improving productivity. Among the sub-sectors, the wastewater treatment industry is clearly driven by both management and technical factors, while both driving forces are weak in the soil remediation industry.
Research on the Influence of Intentional Organizational Forgetting on Enterprise Competitive Advantage ——Case and Empirical Study in the Chinese Context
    HE Yongqing, PAN Jieyi
    2023, 32(4):  212-218.  DOI: 10.12005/orms.2023.0137
How to obtain competitive advantage and remain invincible in market competition is not only a practical problem faced by enterprise management but also an important theoretical problem for management research. A growing body of research holds that effective organizational learning is one of the significant sources of enterprise competitive advantage. However, little attention has been paid to organizational forgetting, the other side of organizational learning. Does organizational forgetting affect enterprise competitive advantage? This paper addresses this question, and its theoretical contribution is as follows. Building on previous studies, a four-quadrant model of organizational forgetting is proposed, which divides organizational forgetting into unintentional and intentional forgetting. The two main types of unintentional organizational forgetting are memory abrasion and the inability to capture knowledge, and the two main types of intentional organizational forgetting are unlearning and avoiding bad habits; the functions of the two kinds of organizational forgetting are then identified. This research further enriches and advances work on organizational forgetting and enterprise competitive advantage in the Chinese context. In addition, the paper offers practical insights for enterprise managers, employees, and organization building, and provides practical guidance for Chinese enterprise management.
This paper starts with case analyses of eight enterprises, namely Suning, Wallace, Haier Group, Waveguide mobile phone, Belle shoes, Galanz, Dali Garden and Tonight Hotel, from which two dimensions of intentional organizational forgetting, unlearning and avoiding bad habits, are revealed from positive and negative perspectives. A model of the relationship between intentional organizational forgetting and enterprise competitive advantage is then constructed, environmental dynamism is introduced as a moderating variable, and four hypotheses are proposed. After that, questionnaires are distributed online and offline to enterprises in Shaanxi Province that have been established for more than one year, and 258 valid questionnaires are collected. To avoid the influence of enterprise characteristics on the results, as in most of the literature, this paper uses firm age, firm size and industry category as control variables. After testing for common method bias and for sample reliability and validity, the paper uses SPSS and AMOS to conduct correlation analysis, hierarchical regression and other analyses of the data, so as to test the main effects and the moderating effect. To cross-validate the hierarchical regression results, the study also uses path analysis to verify the main effects again, ensuring that the conclusions are sound.
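For readers unfamiliar with the analysis design, the short sketch below reproduces the logic of a hierarchical moderated regression (controls first, then main effects, then the interaction with environmental dynamism) on simulated survey-style data. It uses statsmodels in place of SPSS; the variable names, sample values and effect sizes are invented purely for illustration.

```python
# Sketch of the hierarchical moderated regression described above, on simulated survey-style
# data (statsmodels in place of SPSS; variable names and effect sizes are made up).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 258
df = pd.DataFrame({
    "firm_age":  rng.integers(1, 30, n),
    "firm_size": rng.integers(1, 5, n),           # ordinal size category
    "industry":  rng.integers(1, 4, n),
    "unlearning":   rng.normal(0, 1, n),
    "avoid_habits": rng.normal(0, 1, n),
    "env_dynamism": rng.normal(0, 1, n),
})
# Simulated outcome with main effects and a positive moderation term (illustrative only).
df["advantage"] = (0.3 * df.unlearning + 0.4 * df.avoid_habits
                   + 0.2 * df.unlearning * df.env_dynamism + rng.normal(0, 1, n))

# Step 1: controls only; Step 2: add main effects; Step 3: add the interactions (moderation).
m1 = smf.ols("advantage ~ firm_age + firm_size + C(industry)", df).fit()
m2 = smf.ols("advantage ~ firm_age + firm_size + C(industry)"
             " + unlearning + avoid_habits", df).fit()
m3 = smf.ols("advantage ~ firm_age + firm_size + C(industry)"
             " + unlearning * env_dynamism + avoid_habits * env_dynamism", df).fit()

for name, m in [("controls", m1), ("+ main effects", m2), ("+ interactions", m3)]:
    print(f"{name:15s} R2 = {m.rsquared:.3f}")
print(m3.params.filter(like=":"))   # the interaction coefficients test the moderating effect
```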
This paper draws the following conclusions: 1) Unlearning has a significant positive effect on competitive advantage. Elements such as an organization's original culture and core competences are easily reinforced and amplified by previous success, which leads to complacency and closure and gives rise to an organizational "memory trap". If this "success dependence" can be broken, employees are freed from entrenched thinking and the organization forms new cognition. As the management scholar John Wick put it, companies never get into trouble because they forget something; rather, they often fail because they remember too much and rely too heavily on tradition. 2) Avoiding bad habits has a significant positive impact on competitive advantage. In the empirical analysis, the hierarchical regression coefficients and the standardized path coefficients of the structural equation model also suggest that the influence of avoiding bad habits on competitive advantage may be greater than that of unlearning. Therefore, before learning new knowledge, it is vital to evaluate the value of knowledge and discard harmful knowledge in time. 3) Environmental dynamism significantly moderates the relationships of both unlearning and avoiding bad habits with competitive advantage. The more dynamic the environment, the more enterprises need to actively forget old knowledge and abandon old thinking, so as to prevent the organizational knowledge base from becoming "bloated" and impairing the efficiency of knowledge absorption. In other words, in a highly dynamic environment, unlearning is more conducive to the formation of competitive advantage. Similarly, in a dynamic environment, "selective learning" can filter out harmful knowledge in advance and avoid bad habits, which improves the efficiency of knowledge absorption and promotes the formation of competitive advantage.
This paper has the following limitations. First, the sample is cross-industry, and different industries may understand or emphasize intentional organizational forgetting differently; although industry category is controlled for, it may still affect the results. Future research could target specific industries, such as high-tech enterprises or a particular sector. Second, all the sample enterprises are located in Shaanxi Province. Since Shaanxi is a less developed province, enterprises' understanding of intentional organizational forgetting may still be at a preliminary stage, and conclusions drawn only from survey data from Shaanxi Province may not generalize well.
    Evolutionary Game Analysis of Pollution Governance Strategy of Small and Medium-sized Manufacturing Enterprises with Multi-agent Participation
    HE Qilong, TANG Juanhong, LUO Xing
    2023, 32(4):  219-226.  DOI: 10.12005/orms.2023.0138
Since the reform and opening up, China's economic and social development has achieved remarkable results, but at a huge cost in resources and the environment, as illustrated by events such as the "Schaeffler crisis". At present, a large number of small and medium-sized manufacturing enterprises have serious environmental pollution problems and have become a main source of pollution, and many of them have been shut down under escalating government environmental supervision. However, as an important force in the national economy, small and medium-sized manufacturing enterprises play an irreplaceable role in adjusting the industrial structure, promoting economic development, expanding employment and maintaining social stability. Their closure has triggered chain reactions such as the "Schaeffler supply disruption", with serious economic and social consequences. Therefore, how to reasonably and effectively control the pollution of small and medium-sized manufacturing enterprises has become an urgent problem.
At present, academic research on the pollution control strategies of small and medium-sized manufacturing enterprises mostly focuses on the direct participation of multiple entities outside the industrial chain. However, external entities, including the government, ENGOs, the public, financial institutions and scientific research institutions, have limited resources and strength, so the cost for any one of them to participate alone in the pollution control of small and medium-sized manufacturing enterprises is high. Different subjects also free-ride based on their own interests, which leads to insufficient governance motivation and poor governance outcomes. The multi-subject collaborative governance model has therefore gradually become an important policy approach for national and local governments to break out of the dilemma of environmental pollution control. In addition, compared with external entities, information among the entities within a supply chain is more symmetric; in particular, the core enterprises of the supply chain have an innate information advantage regarding upstream and downstream small and medium-sized manufacturing enterprises. The green pressure exerted by upstream and downstream suppliers and customers in the supply chain also significantly promotes enterprises' green innovation behavior. Therefore, internal environmental governance based on the supply chain has entered the research horizon. In terms of mechanism, core enterprises can steer the pollution control behavior of small and medium-sized manufacturing enterprises through environmental contracts, green procurement, audit supervision and other means. Although the participation of core enterprises in the pollution control of small and medium-sized manufacturing enterprises is the fulfillment of a higher level of social responsibility and is not legally binding, the profit-driven nature of enterprises means that core enterprises may lack motivation when leading such pollution control. However, as core enterprises realize that the environmental performance of upstream and downstream suppliers affects their own reputation and market attractiveness, supply chain integration and collaboration can tap the value-creating potential of supply chain social responsibility. Based on the logic of stakeholder cooperation, the co-governance of supply chain corporate social responsibility becomes an important governance choice. This means that the discipline imposed by the government, ENGOs, the public, financial institutions and other subjects on core enterprises, together with the incentive of providing resources, also becomes the main driving force for core enterprises to participate in the environmental governance of small and medium-sized manufacturing enterprises.
In order to solve the pollution problem of small and medium-sized manufacturing enterprises and explore a co-governance mode with multi-subject participation, this paper takes the pollution control of small and medium-sized manufacturing enterprises as the research object and adopts an evolutionary game under the assumption of bounded rationality to study the dynamic evolution of their pollution control behavior. The core is to build evolutionary game models under, respectively, a control mechanism whereby the government and the public exert pressure through core enterprises, and a cooperative mechanism provided by financial institutions, to analyze the pollution governance behavior of core enterprises and upstream small and medium-sized manufacturing enterprises, and to identify the conditions and influencing factors for reaching the optimal stable state in which small and medium-sized manufacturing enterprises control pollution under the leadership of core enterprises. Moreover, Matlab is used for simulation analysis, simulating the strategy selection behavior and dynamic evolution process of core enterprises and small and medium-sized manufacturing enterprises under different initial states to verify the effectiveness of the model.
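A numerical counterpart to that kind of simulation, written in Python rather than Matlab, is sketched below: two-population replicator dynamics for core enterprises (share leading pollution control) and SMEs (share controlling pollution), integrated from a grid of initial states to see which starting points evolve to the cooperative corner. The payoff parameters are placeholders for the quantities named in the abstract (governance costs, reputation benefit, industry gains and losses, penalties, financial support), not the paper's calibration.

```python
# Illustrative simulation of the two-population evolutionary game from many initial states.
# x = share of core enterprises leading pollution control, y = share of SMEs controlling
# pollution; all payoff parameters below are hypothetical placeholders.
import numpy as np

Rep, Cc, P = 4.0, 2.0, 1.0                  # core: reputation gain, governance cost, pressure
G, S, Cs, D, F = 1.0, 2.0, 2.5, 1.0, 2.0    # SME: industry gain, finance, cost, loss, penalty

def replicator_step(x, y, dt=0.01):
    """One Euler step of the two-population replicator dynamics."""
    u_lead = y * (Rep - Cc) + (1 - y) * (-Cc)          # core enterprise leads control
    u_pass = y * 0.0 + (1 - y) * (-P)                  # core enterprise does not lead
    u_ctrl = x * (G + S - Cs) + (1 - x) * (G - Cs)     # SME controls pollution
    u_free = x * (-D - F) + (1 - x) * (-D)             # SME does not control
    x_new = x + dt * x * (1 - x) * (u_lead - u_pass)
    y_new = y + dt * y * (1 - y) * (u_ctrl - u_free)
    return x_new, y_new

def converges_to_cooperation(x0, y0, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        x, y = replicator_step(x, y)
    return x > 0.99 and y > 0.99

grid = np.linspace(0.05, 0.95, 10)
basin = np.mean([[converges_to_cooperation(x0, y0) for x0 in grid] for y0 in grid])
print(f"share of initial states evolving to (lead, control): {basin:.0%}")
```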
The results show that: 1) Reducing the governance costs of core enterprises and small and medium-sized manufacturing enterprises, and increasing the reputational benefits that core enterprises gain from leading pollution control, the positive industry benefits that small and medium-sized manufacturing enterprises gain when they control pollution, and the negative industry consequences they suffer when they do not, can effectively drive the system toward the optimal stable point. 2) Increasing the financial support that small and medium-sized manufacturing enterprises obtain from financial institutions, based on the combined credit of themselves and the core enterprises, contributes to the smooth implementation of the green supply chain. 3) Implementing the two mechanisms simultaneously is more conducive to the system evolving to the optimal stable point.
    Targeted Advertising and Pricing Decision Based on Consumer Privacy Sharing
    HE Xiang, LI Li, ZHANG Hua, ZHU Xingzhen, YANG Wensheng
    2023, 32(4):  227-233.  DOI: 10.12005/orms.2023.0139
The omnipresence of targeted advertising means that more and more consumer privacy information is accessed, transferred or stored, and consumers are becoming increasingly aware that their online activities are being monitored and their private information may be shared. Unlike privacy information obtained directly, privacy information actively shared by consumers makes the online seller's pricing decision more complicated, because the consumer's privacy is no longer a constant for the seller but is flexibly controlled by the consumer. We mainly investigate sellers' optimal price and advertising strategies with respect to the degree of consumer privacy sharing and try to answer the following questions: (1) Do sellers benefit from consumer privacy sharing? (2) How does the degree of consumer privacy sharing affect price decisions? (3) In a competitive market, how does a neighbouring seller's product value affect price decisions?
To address these questions, we consider a Salop circular city model, as it allows several sellers in the market. In this model, consumer privacy is endogenous, and the degree of consumer privacy sharing is actively managed by the consumer. Sellers first decide whether to use consumer privacy information to target their advertising and then bid for the advertising slot. There are two types of advertising: if sellers do not use consumer privacy information, they place a mass advertisement; if they do, they place a targeted advertisement.
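As background for readers, the snippet below computes the textbook symmetric Salop benchmark (unit mass of consumers spread uniformly on a circle of circumference one, linear transport cost, n identical sellers), where the equilibrium price is marginal cost plus t/n and per-seller profit is t/n squared. The paper's extensions (heterogeneous product values, consumer-managed privacy sharing, and bidding for the advertising slot) are not reproduced here.

```python
# Textbook symmetric Salop benchmark underlying the circular-city setup (illustrative only).
def salop_symmetric(n_sellers: int, t: float, c: float, fixed_cost: float = 0.0):
    """Equilibrium price, per-seller demand and profit with n symmetric sellers."""
    price = c + t / n_sellers                       # standard first-order-condition result
    demand = 1.0 / n_sellers
    profit = (price - c) * demand - fixed_cost      # = t / n^2 - fixed_cost
    return price, demand, profit

for n in (2, 3, 5, 10):
    p, q, pi = salop_symmetric(n, t=1.0, c=0.2)
    print(f"n={n:2d}: price={p:.3f}, demand={q:.3f}, profit={pi:.4f}")
```

The print-out shows margins and profits shrinking as more sellers enter, which is the competitive pressure that the advertising and privacy-sharing decisions studied in the paper interact with.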
As a benchmark, we consider the duopoly case where two sellers sell products with the same value. We first compare the profit of the winning seller when it uses or does not use consumer information and conclude that mass advertisements (not using consumer information) perform better than targeted advertisements (using consumer information) in the duopoly case. In addition, the information shared by consumers raises the price to a certain extent, but when the level of consumer information sharing is higher than a threshold, sellers need to reduce the price. Moreover, the effects of consumer information sharing on the pricing strategy differ for dominant and non-dominant sellers. Dominant sellers raise their price until the degree of consumer information sharing exceeds a threshold. For non-dominant sellers, the result is more complicated, as the effect of consumer information sharing on their price differs depending on the segment in which the non-dominant seller's product value falls.
    Finally, in a multi-oligopoly, we find that the dominant seller’s profit is affected by the relative value of its product to that of the neighbour seller’s product, in addition to the degree of consumer information sharing. Specifically, the dominant seller’s profit increases with an increasing degree of consumer information sharing when the value of the seller’s product is sufficiently large. We also conclude that every seller’s demand and profit are affected by their neighbour sellers’ product value.
This paper has studied a Salop circular city model in which consumers actively manage their information sharing. The general conclusion is that using consumer information to target advertising does not benefit the seller in a duopoly; in other words, targeted advertisements might yield lower profits than mass advertisements in a duopoly. However, this changes when there are more sellers in the market. In addition, our results imply that the effect of consumer information sharing on pricing decisions differs across sellers. Furthermore, our analysis shows that in a multi-oligopoly the dominant seller can benefit from a medium degree of consumer information sharing when its product value is sufficiently large. These findings imply that sellers can gain consumer trust by giving customers control over their information.
    The Product Pricing Strategy Considering Consumer Patience and Enterprise Cost Reduction
    GUAN Zhenzhong, DU Huafeng, HE Sanming
    2023, 32(4):  234-232.  DOI: 10.12005/orms.2023.0140
With the rapid development and popularization of new technologies, consumers can obtain timely and accurate product information (such as historical prices), form rational expectations of future prices, and then make the best purchase decision. In anticipation of a significant price drop for a desired product, consumers will delay their purchases, showing a high degree of patience and a willingness to wait until the day of the promotion. This phenomenon was evident in China's Double 11 shopping festival in 2020. Many studies have shown that consumers' strategic waiting behavior has a large negative impact on corporate profits. Therefore, how to choose an appropriate pricing strategy to mitigate the negative impact of strategic consumers has become an urgent problem for senior managers. In practice, the price matching strategy is a typical coping measure: by promising to rebate customers who previously bought the product if the price is later reduced, it can attract more customers to buy early, thus effectively preventing the loss of potential profits. On the other hand, in order to improve market competitiveness, enterprises often carry out various forms of innovation to reduce production costs. Therefore, in a market environment of "price transparency", we are interested in the influence of consumer patience and cost reduction on the optimal choice of product pricing strategy.
To address this problem, we consider a situation in which a monopolist sells a single product to strategic consumers. Following previous studies, we use a standard two-period model. Specifically, for the optimal pricing strategy of the monopolistic enterprise, we combine consumers' strategic behavior with the enterprise's cost reduction and construct two-period game models under the dynamic pricing strategy and the price matching strategy, respectively. The advantages and disadvantages of the two pricing mechanisms are then compared and analyzed. At the same time, we analyze in depth how the degree of consumer patience and the magnitude of cost reduction affect product prices, in order to provide a reference for scientific decision-making by the enterprise. Finally, we study the effect of different pricing strategies on market performance from the perspective of social welfare.
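To illustrate the mechanics of such a two-period model with strategic consumers, the sketch below solves a generic textbook-style version by grid search: consumer valuations are uniform on [0,1], beta is the patience parameter, and the second-period unit cost is reduced by delta. It computes the rational-expectations purchase threshold and the firm's backward-induction prices under dynamic pricing only; the valuation distribution and parameter values are assumptions, and the paper's price matching comparison and welfare analysis are not reproduced.

```python
# Generic two-period dynamic-pricing sketch with strategic consumers (NOT the paper's model):
# valuations uniform on [0,1], consumer patience beta, period-1 cost c, period-2 cost c-delta.
# Consumers buy early iff v - p1 >= beta * (v - expected p2), with expectations consistent
# with the firm's optimal period-2 price.
import numpy as np

def dynamic_pricing(beta, c, delta, grid=2001):
    """Return (p1*, p2*, threshold v_hat, total profit) under no price commitment."""
    c2 = c - delta
    best = (None, None, None, -np.inf)
    for p1 in np.linspace(0.0, 1.0, grid):
        # Indifferent consumer v_hat: v_hat - p1 = beta * (v_hat - p2), where
        # p2 = (v_hat + c2)/2 is the period-2 monopoly price over the remaining consumers.
        v_hat = (p1 - beta * c2 / 2.0) / (1.0 - beta / 2.0)
        v_hat = min(max(v_hat, 0.0), 1.0)
        p2 = (v_hat + c2) / 2.0
        profit = (p1 - c) * (1.0 - v_hat) + (p2 - c2) * max(v_hat - p2, 0.0)
        if profit > best[3]:
            best = (p1, p2, v_hat, profit)
    return best

print(" beta  delta |    p1     p2   profit")
for beta in (0.3, 0.6, 0.9):
    for delta in (0.0, 0.1):
        p1, p2, v_hat, pi = dynamic_pricing(beta, c=0.2, delta=delta)
        print(f" {beta:.1f}   {delta:.1f}  | {p1:.3f}  {p2:.3f}  {pi:.4f}")
```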
Through model construction and analysis, we obtain some interesting conclusions: (i) Compared with the dynamic pricing strategy, the price matching strategy can effectively alleviate consumers' "smart" waiting behavior and, in turn, bring higher profits to the enterprise. In practice, 15-day or 30-day price protection and related measures (such as a payout of 1.2 times the price difference) are use cases of the price matching mechanism. (ii) Compared with the price matching strategy, although the dynamic pricing strategy leads to the loss of potential profits, it benefits consumers or society as a whole under certain conditions (such as a greater degree of patience and a smaller cost reduction). (iii) In the extension, we further consider the case in which strategic consumers exhibit a price reference effect, and find that the optimal decisions do not change in essence. At the same time, the price reference effect is not disadvantageous to the enterprise in all circumstances; its impact also depends on the choice of pricing strategy, the strength of the reference effect and other factors.
Finally, we would like to thank the National Natural Science Foundation of China (NSFC) (71572154), the Service Science and Innovation Key Laboratory of Sichuan Province (KL2209) and the Soft Science Project of Chengdu Science and Technology Bureau (2021-RK00-00087ZF) for their strong support for this paper. We would also like to thank the anonymous peer reviewers and the editors for constructive comments on improving the quality of this paper.