
Table of Contents

    25 September 2023, Volume 32 Issue 9
    Theory Analysis and Methodology Study
    Aerial Targets Threat Evaluation Based on Grey Incidence Method
    FENG Hui, SONG Baojun, ZHANG Chunmei
    2023, 32(9):  1-6.  DOI: 10.12005/orms.2023.0277
    Air target threat evaluation is an important component of air defense command decision-making. It ranks the threat levels of all air targets, distinguishes their priority, and provides a basis for firepower allocation. In current research on the threat assessment of air raid targets, the main approach is to obtain target attribute parameters through sensors. However, owing to the complexity of the battlefield environment, the information obtained by sensors is often random and uncertain to some degree, which can easily bias the evaluation results. It is therefore necessary to incorporate the commander's combat experience into the evaluation process. In response to the shortcomings of existing evaluation methods, a threat assessment index system for air raid targets is constructed.
    The threat assessment factors are generally the target state information obtained by sensors and the target attribute information obtained by information fusion. The factors that affect the threat assessment of air raid targets are numerous, including target type, target speed, target distance, flight altitude, interference ability, maneuvering status, target acceleration, number of targets, route shortcut, target azimuth, target RCS, air raid style, remaining time, and so on. These attributes are not independent of each other and are usually interrelated. Through analysis, the evaluation factors can be divided into two categories: combat capability and attack intent. Combat capability is determined by the attributes of the target itself, while attack intent is mainly determined by the target's flight actions. From this, a threat assessment indicator system based on target type, target speed, route shortcut, flight altitude, maneuvering status, and arrival time is obtained. With the help of combination weight theory, differential weights are assigned to the objective information obtained from sensors and the commander's subjective experience, which greatly improves the accuracy of the evaluation.
    The combination weight method grades the evaluation indicators of the event under assessment, and the weight allocation among indicators at all levels must be considered jointly. It is therefore necessary to connect subjective and objective weights into a comprehensive weight. In the process of threat assessment, owing to the uncertainty of the information obtained by sensors, we adopt the combination weight method to reduce errors and assign differential weights to the subjective weights of decision-makers and the objective weights derived from sensor data. To better reflect the operational experience of combat commanders while eliminating deviations in that experience as far as possible, the entropy weight method is used to obtain the final subjective weight value. The objective weights of the indicators are calculated by applying the entropy method to the objective information obtained by sensors: the greater the difference in an indicator's data, the more significant the role of its entropy value and the larger its corresponding weight.
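    The entropy weighting and combination steps admit a compact sketch. The following is a minimal, hypothetical illustration (the indicator scores and the equal-split combination coefficient are assumptions, not the paper's data): an indicator whose values spread widely across targets has low entropy and therefore receives a large objective weight, while an indicator that is constant across targets receives weight zero.

```python
import math

def entropy_weights(matrix):
    """Objective indicator weights via the entropy method.

    matrix: m rows (targets) x n columns (indicators), positive,
    benefit-oriented values (larger = more threatening).
    """
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col_sum = sum(row[j] for row in matrix)
        p = [row[j] / col_sum for row in matrix]
        # Shannon entropy scaled to [0, 1]; low entropy = high dispersion.
        e = -sum(x * math.log(x) for x in p if x > 0) / math.log(m)
        raw.append(1.0 - e)
    total = sum(raw)
    return [w / total for w in raw]

def combined_weights(subjective, objective, alpha=0.5):
    """Combination weight: convex mix of subjective and objective weights."""
    return [alpha * s + (1 - alpha) * o for s, o in zip(subjective, objective)]

# Hypothetical scores for four targets on three indicators: the first
# indicator varies across targets, the other two are constant.
scores = [[0.9, 0.25, 0.5],
          [0.6, 0.25, 0.5],
          [0.3, 0.25, 0.5],
          [0.1, 0.25, 0.5]]
w_obj = entropy_weights(scores)
```

Here virtually all objective weight falls on the first indicator, since the constant columns carry no discriminating information.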
    On the basis of obtaining subjective and objective weights, an improved grey correlation method is used to establish an air raid target threat assessment model. Through case analysis, it is verified that this method can effectively avoid the problem of result deviation caused by only using objective information for assessment. The assessment results are accurate and can better adapt to the requirements of modern air defense operations. It has certain reference value for commanders' scientific decision-making.
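    The grey incidence step can likewise be sketched: each target is compared against an ideal reference (the column-wise best), and the weighted grey relational grade gives the threat ranking. The resolution coefficient of 0.5 and the sample scores below are conventional illustrative assumptions, not the paper's improved formulation or case data.

```python
def grey_relational_grades(matrix, weights, rho=0.5):
    """Weighted grey incidence of each target to the ideal reference
    (column-wise maximum); rho is the resolution coefficient."""
    m, n = len(matrix), len(matrix[0])
    ref = [max(row[j] for row in matrix) for j in range(n)]
    diffs = [[abs(matrix[i][j] - ref[j]) for j in range(n)] for i in range(m)]
    dmin = min(min(row) for row in diffs)
    dmax = max(max(row) for row in diffs)
    grades = []
    for i in range(m):
        coeffs = [(dmin + rho * dmax) / (diffs[i][j] + rho * dmax)
                  for j in range(n)]
        grades.append(sum(w * c for w, c in zip(weights, coeffs)))
    return grades

# Hypothetical normalized threat scores and combination weights.
scores = [[0.9, 0.8, 0.7],   # target 1: dominates on every indicator
          [0.5, 0.6, 0.4],   # target 2
          [0.2, 0.3, 0.1]]   # target 3
grades = grey_relational_grades(scores, [0.4, 0.35, 0.25])
```

A target that coincides with the ideal reference attains the maximum grade of 1, and grades fall as targets move away from it, which yields the priority ordering for firepower allocation.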
    Civil-military Integration Boundary of National Defense Science and Technology Industry Based on Evolutionary Game Theory
    GAO Yuan, LI Renchuan, ZHANG Xuanhao
    2023, 32(9):  7-14.  DOI: 10.12005/orms.2023.0278
    Demarcating the boundary of civil-military integration is the first step to optimizing the allocation of resources, stimulating the vitality of all parties, and promoting the development of civil-military integration in the national defense science and technology industry. Meanwhile, institutionalizing the civil-military integration boundary of the national defense science and technology industry has become a common practice for countries around the world to ensure the combat effectiveness of troops and improve overall social benefits. Despite in-depth research on civil-military integration in the national defense science and technology industry by scholars globally, few studies focus on the boundary issue, and research on it mostly remains at the qualitative level. Scientifically and systematically determining the civil-military integration boundary of the national defense science and technology industry is not only the basic premise for the development of civil-military integration in this industry but also the basic standard for cooperation and sharing among all participants. Therefore, it is particularly important to demarcate the integration boundary through a combination of qualitative and quantitative methods, based on the core capabilities of the participants and with shared efficiency as the standard.
    There are both common interests and conflicts among the participants of civil-military integration in the national defense science and technology industry. Therefore, the key to determining the boundary of civil-military integration is to find a game equilibrium that can meet the needs of each subject. Moreover, since the determination of this boundary is a dynamic process of repeated games affected by internal and external random factors, each participant conforms to the assumption of bounded rationality and arrives at satisfactory decisions through repeated trial and error, learning and improvement, so evolutionary game theory is applied in this research.
    Based on the current situation, this paper uses evolutionary game theory, takes the sharing of information and resources of the national defense science and technology industry by civil and military scientific research institutions as an example, establishes the integration boundary game model, determines the boundary conditions of civil-military integration by analyzing the game behavior and evolution law of each participating entity, introduces the subsidy-penalty mechanism for comparative analysis, and finally analyzes the game model's conclusions and various influencing factors in depth based on examples. This paper responds to the contradictions and difficulties encountered in the reform and adjustment of the national defense science and technology industry through a combination of qualitative and quantitative methods, and addresses a topic of general interest in civil-military integration theory. The results show: The civil-military integration boundary of the national defense science and technology industry is determined by the risk cost, the synergistic benefit, the subsidy-penalty mechanism and the initial state. At the same time, the factors that affect the cooperative stable state of the system's evolution include the total amount of complementary and sharable resources, the synergistic impact level, the cooperation risk factor and the subsidy-penalty mechanism. The resource conversion and utilization level and the cooperation cost coefficient do not affect the evolution trend. The subsidy-penalty mechanism can effectively promote civil-military cooperation.
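    The evolutionary dynamics described above can be sketched with two-population replicator equations. All payoff parameters below (synergy R, cooperation cost c, subsidy s, penalty p) are illustrative assumptions rather than the paper's calibration; the point of the sketch is the qualitative finding that a subsidy-penalty mechanism can tip a system that would otherwise drift to mutual defection into stable cooperation.

```python
def evolve(x0, y0, R=6.0, c=2.0, s=0.0, p=0.0, steps=4000, dt=0.01):
    """Two-population replicator dynamics, Euler integration.

    x, y: share of cooperators among military and civilian institutions.
    Cooperating yields synergy R scaled by the partner population's
    cooperation rate, minus cost c, plus subsidy s; defectors pay penalty p,
    so s + p enters the cooperate-minus-defect payoff gap.
    """
    x, y = x0, y0
    for _ in range(steps):
        gx = R * y - c + s + p   # payoff gap for military cooperators
        gy = R * x - c + s + p   # payoff gap for civilian cooperators
        x += dt * x * (1.0 - x) * gx
        y += dt * y * (1.0 - y) * gy
    return x, y

# Same low initial cooperation rates, with and without the mechanism.
no_mech = evolve(0.2, 0.2)                 # s = p = 0
with_mech = evolve(0.2, 0.2, s=1.0, p=1.0)
```

Without the mechanism the payoff gap starts negative and both populations converge to defection; with the subsidy and penalty the gap is positive everywhere and both converge to full cooperation, mirroring the paper's conclusion that the initial state and the subsidy-penalty mechanism jointly determine the evolutionary outcome.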
    Banzhaf Value for Digraph Games and Its Applications
    SHAN Erfang, LYU Wenrong, SHI Jilei
    2023, 32(9):  15-20.  DOI: 10.12005/orms.2023.0279
    Banzhaf value is one of the important allocation rules in classical cooperative games with transferable utility, which assumes that any participants can cooperate to form a feasible coalition. However, in reality, cooperation between participants is often constrained by various cooperation structures, resulting in some coalitions that cannot be truly formed. Considering the influence of cooperation structures on participants, Myerson proposed a cooperative game with graph structure, also known as communication situation game, referred to as graph game. The graph game describes the cooperation structure between participants by an undirected graph. The points of the graph represent the participants, and the edges of the graph represent some bilateral connection between the participants. He assumed that only connected coalitions can fully cooperate and produce cooperative utility, while the utility of other coalitions can only be obtained by the sum of the utility of the connected components it contains. Based on this idea, Alonso-Meijide and Fiestras-Janeiro extended Banzhaf value to graph games, and defined Banzhaf value under graph restricted games, namely the graph Banzhaf value. At the same time, they proved that the graph Banzhaf value can be given four axiomatic characterizations by fairness, isolation, merging, component total power and balanced contribution.
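    For concreteness, the classical Banzhaf value averages each player's marginal contribution over all coalitions of the remaining players. A minimal sketch follows; the three-player majority game is a standard textbook example, not an example from the paper, and the direct enumeration below is only practical for small player sets.

```python
from itertools import combinations

def banzhaf(n, v):
    """Banzhaf value of an n-player TU game.

    v: characteristic function mapping a frozenset coalition to its worth.
    Player i's marginal contributions v(S + i) - v(S) are averaged over
    the 2^(n-1) coalitions S of the remaining players.
    """
    result = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                total += v(S | {i}) - v(S)
        result.append(total / 2 ** (n - 1))
    return result

# Three-player majority game: a coalition is worth 1 iff it has >= 2 members.
majority = lambda S: 1.0 if len(S) >= 2 else 0.0
bz = banzhaf(3, majority)
```

The graph Banzhaf value of Alonso-Meijide and Fiestras-Janeiro applies the same averaging to the graph-restricted game, in which a coalition's worth is the sum of the worths of its connected components.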
    Although the graph game reflects some bilateral connections between participants, not all the connections between participants are bilateral. For example, the water cycle in nature has a clear direction; The South-to-North Water Diversion Project transfers water from the Yangtze River, and flows through the northern region in turn, which is also directional; The industrial water system first uses industrial water for the production water subsystem, and then discharges the sewage generated by the production water subsystem into the sewage treatment subsystem. According to the flow direction of the river, the main body of water use in public rivers has an upstream and downstream relationship. Therefore, when studying the resource allocation problem of public rivers, the direction of the model cannot be ignored. In order to describe such problems, scholars have introduced directed graphs into cooperative games and proposed cooperative games with directed graph structure, referred to as directed graph games. Li and Shan studied the definition of feasible coalition in directed graph games. They argue that a strong component can be used as a feasible coalition in a directed graph game, assuming that a strongly connected coalition can obtain full utility, while the utility of a non-strongly connected coalition is realized by the sum of all the strong component utilities it contains. The research of Li and Shan makes it very easy to find feasible coalitions in directed graph games, and also provides convenience for further research on allocation rules in directed graph games. On this basis, studying the allocation rules in directed graph games can provide a theoretical basis for solving directed network problems such as water resources allocation.
    On this basis, we consider the directionality of cooperative networks, extend the Banzhaf value to digraph games, and propose a new allocation rule, called the digraph Banzhaf value. We prove that the digraph Banzhaf value satisfies quasi-isolation, contractibility, fairness, strong component decomposability and strong component total power, and give two axiomatic characterizations. Finally, we discuss an application of the digraph Banzhaf value in wetland water circulation systems and compare it with other values.
    Optimization of Construction Schedule of the Railway Extra-long Tunnel Based on Multiple Working Faces
    ZHOU Guohua, ZHANG Huake
    2023, 32(9):  21-27.  DOI: 10.12005/orms.2023.0280
    With the maturity of the construction technology, the number of extra-long tunnels built in China is gradually increasing. Extra-long tunnels are usually the key activities in a single project due to their long duration, and their duration has a direct impact on the single project. At the same time, the construction cost of long tunnels is high, and balancing the duration and cost is an important goal in the preparation of the construction schedule of long tunnels. In the current situation, the preparation of extra-long tunnel construction schedule is still based on manual experience, and the duration and cost are not yet finely and intelligently managed, which is a pain point that needs to be solved at this time of rapid construction of extra-long tunnels.
    The extra-long tunnel is usually divided into several sections for parallel construction to shorten the construction period, and parallel construction requires additional working faces obtained by excavating auxiliary tunnels outside the main tunnel. Firstly, the number of working faces directly affects the duration and cost of the project. Secondly, when the number of working faces is constant, where to excavate auxiliary tunnels and where to add working faces will also have an impact on the construction period and cost. Extra-long tunnels are usually regarded as key activities, and their construction period cannot be delayed. In this case, how to save costs while meeting the construction period requirements is a problem worthy of attention.
    When the tunnel realizes multi-face construction by excavating auxiliary tunnels, the sequence of each construction unit is not fixed but can be adjusted. The logical sequence of construction is soft logic, and the construction sequence will affect the duration and cost of the project. A trade-off model of duration and cost is constructed to minimize the project cost under the premise of meeting the duration. The discrete time-cost trade-off problem is NP-hard, and the addition of soft logic increases the difficulty of solving it. Therefore, the operators of the genetic algorithm are improved to increase its solution speed and accuracy. Improvements include: (1)Designing adaptive crossover and mutation probabilities and an improved catastrophe operator to keep the algorithm from falling into local optima; (2)Optimizing the mutation strategy to enhance the local search ability of the algorithm. Through investigation and research, data of a railway extra-long tunnel project are collected, including length, construction time and cost, etc.
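    The adaptive-probability idea in improvement (1) can be sketched as follows. The rate bounds and the linear scaling are illustrative assumptions in the spirit of Srinivas-style adaptive genetic algorithms, not the paper's exact operator: above-average individuals receive progressively smaller crossover and mutation probabilities, protecting good schedules, while below-average individuals keep the full rates to preserve diversity.

```python
def adaptive_rates(f, f_avg, f_max,
                   pc=(0.6, 0.9), pm=(0.01, 0.1)):
    """Adaptive crossover and mutation probabilities for a GA individual
    with fitness f, given the population's average and best fitness.
    Returns (crossover_p, mutation_p)."""
    pc_min, pc_max = pc
    pm_min, pm_max = pm
    if f_max <= f_avg or f <= f_avg:
        # Below-average (or degenerate population): explore at full rates.
        return pc_max, pm_max
    scale = (f - f_avg) / (f_max - f_avg)   # 0 at average, 1 at best
    return (pc_max - (pc_max - pc_min) * scale,
            pm_max - (pm_max - pm_min) * scale)
```

A catastrophe operator would complement this by reinitializing part of the population whenever the best fitness stagnates for a fixed number of generations.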
    Based on the example, the model and algorithm are verified to be effective for the optimization problem. In addition, to test the superiority of the improved algorithm, nine calculation examples with different scales and constraints are designed to compare the proposed algorithm with other algorithms horizontally. For each calculation example, the three comparison algorithms are run 20 times respectively. On the one hand, the average and minimum values of the solution results of each algorithm are counted to compare the stability and optimization ability of the algorithm operation. On the other hand, the average running time of 20 times for each algorithm is counted, and the difference in solution speed is analyzed.
    The experimental results show that: (1)The proposed model and algorithm can be effectively applied to the extra-long tunnel construction schedule optimization problem. The compiler only needs to input the information of construction unit length, construction speed and cost, as well as the time and cost of auxiliary tunnels, and then can get the number and locations of auxiliary tunnels, the construction mode of working faces, and the total duration and cost and so on. (2)By comparing the proposed improved genetic algorithm with the standard genetic algorithm and particle swarm algorithm, it can be found that the other two algorithms can easily fall into local optimum when solving the problem as the size of the problem increases. Moreover, the solution obtained by the particle swarm algorithm is more volatile, while the solution obtained by the improved genetic algorithm is superior and more stable. In terms of running speed, the improved genetic algorithm is faster than the standard genetic algorithm. This is due to the fact that the improved genetic algorithm optimizes the search mechanism. The above results demonstrate that the improved genetic algorithm has better global search capability, stability and faster running speed when dealing with large-scale tunnel schedule optimization problems.
    This study can help enterprises realize the intelligent compilation of extra-long tunnel construction schedules and assist them in making scientific decisions on the number and location of auxiliary tunnels. It helps to reduce unfavorable decisions due to reliance on human experience and to control the project duration and cost. This study is applicable to the case where the auxiliary tunnel is an inclined shaft, vertical shaft or horizontal cavern. When the auxiliary tunnel takes other forms, the model needs to be constructed according to its construction characteristics, which will be addressed in further research.
    Robust Project Scheduling Problem Considering Soft Logic
    ZHANG Lihui, LI Yifei, ZOU Xin, CAO Qiangnan
    2023, 32(9):  28-35.  DOI: 10.12005/orms.2023.0281
    Project scheduling in a deterministic environment has been explored by a number of scholars for decades. However, in the existing dramatically changing market environment, complex engineering projects (e.g., “The Belt and Road Initiative” transnational projects, PPP projects, etc.) are facing increasingly severe risks and uncertainties, such as inaccurate estimation of activity duration, untimely supply of resources, machine breakdown, severe weather conditions, and changes in design. These uncontrollable factors can disrupt the orderly execution of project activities and cause confusion in organization and coordination, leading to project delays and cost overruns. Therefore, it is necessary to present efficient approaches to these uncertainties.
    Robust project scheduling develops a schedule with strong anti-interference ability and flexible reactive strategies by taking into account the disturbance of uncertainties, and it has become a crucial research topic in recent decades. Existing robust scheduling studies have assumed that all activities can only be constructed in a fixed logical sequence. In practical engineering, however, the logical construction sequence between many activities can be changed, which is known as soft logic. Research on soft logic mostly focuses on project duration and cost optimization, and there are no studies known to the authors that introduce soft logic into project scheduling in an uncertain environment. In this context, this paper studies robust project scheduling optimization considering soft logic, which has strong theoretical value and practical significance.
    Firstly, we analyze the impact of soft logic on project scheduling, including project duration, solution robustness, and the number of subsequent activities, which provides the theoretical basis for the optimization model proposed in this paper. Soft logic can make project scheduling more flexible. On the one hand, by reasonably adjusting the construction sequence of activities, the critical path of the schedule can be changed, thus affecting project duration. On the other hand, the precedence relationships among activities can also be regulated in the same manner, which in turn affects the free float of activities and ultimately the robustness of the project. We also quantify the specific impact of three types of soft logic on the number of subsequent activities, which is widely considered in the development of robustness measures.
    Secondly, an optimization model is formulated and solved by the ε-constraint algorithm. We improve the robustness of the project schedule by inserting time buffers into the project activities, which extends project duration. Therefore, the objective of the model is to achieve a trade-off optimization between minimized project duration and maximized robustness, taking into account the soft logic relationships between activities and the project deadline. The robustness objective function uses a measure based on free float and accounts for the magnitude of delay risk, allowing time buffers to be prioritized for allocation to activities with higher delay risk and more subsequent activities. Furthermore, an ε-constraint algorithm is specifically developed to identify Pareto optimal solutions to the studied problem. We maximize the schedule robustness with the objective of minimizing the project duration as a constraint. In doing so, the proposed model is transformed into multiple single-objective optimization models, which helps to identify all Pareto solutions in a fairly small number of steps.
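    The ε-constraint scheme can be illustrated on a toy discrete front: sweep the duration bound, maximize robustness subject to that bound, and keep only the non-dominated outcomes. The candidate (duration, robustness) pairs below are hypothetical, and the brute-force maximization stands in for the single-objective solves of the actual algorithm.

```python
def epsilon_constraint_front(candidates):
    """candidates: (duration, robustness) pairs of feasible schedules.
    For each duration bound, maximize robustness subject to
    duration <= bound, then keep only non-dominated solutions
    (shorter duration is better, higher robustness is better)."""
    front = []
    for bound in sorted({d for d, _ in candidates}):
        feasible = [(d, r) for d, r in candidates if d <= bound]
        best = max(feasible, key=lambda s: s[1])
        dominated = any(d <= best[0] and r >= best[1] and (d, r) != best
                        for d, r in front)
        if best not in front and not dominated:
            front.append(best)
    return front

# Hypothetical schedules: (project duration, robustness measure).
schedules = [(10, 5.0), (11, 8.0), (12, 7.0), (13, 9.0)]
pareto = epsilon_constraint_front(schedules)
```

The schedule (12, 7.0) is excluded because (11, 8.0) is both shorter and more robust, which is exactly the dominance screening the ε-constraint sweep performs.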
    Finally, the effectiveness and superiority of the model and algorithm for robust scheduling considering soft logic are verified through a typical practical case and Monte Carlo simulation. We respectively solve the duration-robust scheduling optimization problem under the assumptions of soft logic and traditional fixed logic. The results show that soft logic is more powerful than fixed logic, providing shorter project duration and stronger robustness. We also suggest that there are two approaches by which soft logic strategies can improve robustness without sacrificing project duration. One is to reduce the time buffer formed by the network through soft logic while leaving space for the addition of artificial buffers, and the other is to increase the time buffer formed by the network through soft logic. Furthermore, Monte Carlo simulation results imply that compared with fixed logic, a project schedule considering soft logic is more stable in the execution of its activities and on-time completion and is less vulnerable to disruptions caused by various uncertainties, and that as the level of uncertainty increases, soft logic becomes increasingly superior to fixed logic.
    The research results of this article can provide quantitative support for project managers' decisions to make robust schedules considering soft logic in uncertain environment. Owing to the practical limitations imposed by resource availability, the flexibility afforded by soft logic may be limited. Therefore, a future research direction would be robust resource-constrained project scheduling with soft logic.
    A Data-driven-based Robust Minimum-cost Consensus Model
    HAN Yefan, JI Ying, QU Shaojian
    2023, 32(9):  36-42.  DOI: 10.12005/orms.2023.0282
    The quality and reliability of the composite indicator are directly influenced by the aggregation of individual preference information. Even slight perturbations in the aggregation weights may result in the selection of unreasonable solutions, leading to economic and social losses. Therefore, it is crucial to determine the aggregation weights of decision-makers (DMs) appropriately. Robust optimization (RO) methods have gained significant attention due to their ability to generate uncertainty-immune solutions. However, these methods typically construct uncertainty sets based on experience, which introduces certain conservatism to the model results. In contrast, data-driven RO methods construct uncertainty sets based on uncertain observations, allowing for a reasonable balance between conservatism and robustness in decision outcomes. Consequently, it is necessary to develop a model that can effectively manage the uncertainty associated with aggregation weights using the data-driven RO methods.
    To tackle the uncertainty and randomness of DMs' aggregation weights, a series of data-driven robust minimum-cost consensus models is proposed in this paper. Firstly, a minimum-cost consensus model with consensus constraints is introduced as the foundation of the study. This model ensures that DMs achieve an acceptable level of consistency. Secondly, a kernel density estimation (KDE) method is used to derive probability density functions of the uncertain weights from historical data. These functions are utilized to construct uncertainty intervals with confidence levels, enabling control over the perturbation range of uncertain weights in the aggregation operator. Subsequently, two types of flexible uncertainty sets, namely flexible uncertainty set I and flexible uncertainty set II, are defined, each instantiated in three different shapes: the box set, the ellipsoidal set, and the polyhedral set. By employing these uncertainty sets, the data-driven robust minimum-cost consensus models are developed to address six different uncertain environments.
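    The KDE-based interval construction can be sketched in a few lines. The Silverman bandwidth, the grid resolution, and the sample weight observations below are illustrative assumptions (a real implementation would typically use a statistics library's KDE): the density estimated from historical weight data is integrated numerically to find the central interval holding a given confidence mass.

```python
import math

def gaussian_kde_interval(samples, level=0.9, grid_size=512):
    """Gaussian KDE with Silverman's rule-of-thumb bandwidth, then the
    central interval containing `level` of the estimated probability mass."""
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / (n - 1)) ** 0.5
    h = 1.06 * std * n ** (-0.2)          # Silverman's rule of thumb
    lo, hi = min(samples) - 3 * h, max(samples) + 3 * h
    step = (hi - lo) / grid_size
    grid = [lo + (k + 0.5) * step for k in range(grid_size)]
    dens = [sum(math.exp(-0.5 * ((g - x) / h) ** 2) for x in samples)
            / (n * h * math.sqrt(2 * math.pi)) for g in grid]
    total = sum(dens)
    cdf, lower, upper = 0.0, grid[0], grid[-1]
    for g, d in zip(grid, dens):
        cdf += d / total
        if cdf < (1 - level) / 2:
            lower = g                      # last point below the lower tail
        if cdf <= (1 + level) / 2:
            upper = g                      # last point within the upper mass
    return lower, upper

# Hypothetical historical observations of one DM's aggregation weight.
samples = [0.16, 0.18, 0.19, 0.20, 0.20, 0.21, 0.22, 0.24]
lower, upper = gaussian_kde_interval(samples, level=0.9)
```

The resulting interval bounds the perturbation range of that weight in the aggregation operator; raising the confidence level widens the interval and thus makes the robust model more conservative.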
    Finally, this paper abstracts a group decision-making problem from the carbon quota allocation problem. By estimating the probability density function for the government's allocation of quotas to each enterprise, the confidence-based uncertainty intervals and corresponding data-driven robust models can be established to deal with the uncertainty caused by deviations, which demonstrates the applicability of the proposed models. To analyze the performance of the model, three experiments, including all uncertainties with the same confidence level, different uncertainties with different confidence levels and flexible uncertainty sets versus classical uncertainty sets, are conducted in this paper, and the following meaningful results are obtained: (1)The data-driven method can effectively improve the quality of the aggregation operator by traversing the historical data of weights; (2)The government can choose the appropriate uncertainty set according to the degree of risk preference to make decisions; (3)The proposed model contributes to reducing the price of robustness in data analysis results to a certain extent.
    In summary, the proposed data-driven robust consensus models provide novel insights and approaches for handling uncertainty in the consensus reaching process of group decision-making problems. As decision-making environments grow more complex, involving a greater number of DMs, reaching consensus becomes increasingly challenging. Additionally, DMs' weights will be influenced by more factors such as social relationships. Therefore, in future research, the proposed models can be extended to large-scale group decision-making problems.
    Supply Chain Financing Strategy with Information Disclosure Based on Blockchain Technology under CVaR Criterion
    WANG Daoping, ZHU Mengying, DONG Hanxi
    2023, 32(9):  43-49.  DOI: 10.12005/orms.2023.0283
    In traditional supply chain financing, SMEs often require credit guarantees from core enterprises in order to secure financing. Moreover, supply chain enterprises frequently hold a risk attitude when making operational and financing decisions, balancing revenue maximization with their risk tolerance. As a newly emerging technology, blockchain has the potential to address the challenges present in supply chains. It enables supply chains to ensure authenticity and share information securely. The implementation of smart contracts on the blockchain automates payment, settlement, and financial reconciliation processes, thereby reducing the potential risks associated with human factors. Additionally, blockchain adoption provides an opportunity to enhance consumer trust. Therefore, our study aims to model the roles of blockchain in supply chains, particularly investigating the differences before and after blockchain adoption in a cash-strapped supply chain scenario, providing theoretical support and meaningful references for how supply chains make financing and operational decisions in a blockchain environment.
    We examine two modes: The one is traditional supply chain financing (mode T), in which the retailer obtains financing with the credit guarantee of the core enterprise; the other is blockchain supply chain financing (mode B), in which the retailer can directly request loans from the bank using the data of the blockchain platform as credit. The CVaR criterion is used to portray the risk-averse behavior of decision makers. Considering that a certain proportion of consumers in the market is sensitive to information disclosure based on blockchain technology, quantitative models under the CVaR criterion are used to solve for the optimal wholesale price, order quantity and application degree of blockchain technology. The impact of factors such as the cost coefficient of applying blockchain technology, the proportion of sensitive consumers and the degree of risk aversion of enterprises on the ordering, pricing and blockchain application decisions is investigated. A comparison and analysis of the two financing modes are also conducted to provide further insights.
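    Under the CVaR criterion, a risk-averse decision maker evaluates an action by the expected profit over the worst η-fraction of scenarios rather than by the overall mean. A minimal sketch with equally likely, hypothetical profit scenarios (the scenario values and risk level are assumptions, not the paper's model):

```python
def cvar_profit(profits, eta):
    """CVaR of profit at risk level eta in (0, 1]: the mean of the worst
    eta-fraction of equally likely profit scenarios. eta = 1 recovers the
    risk-neutral expectation; smaller eta means stronger risk aversion."""
    k = max(1, round(eta * len(profits)))
    worst = sorted(profits)[:k]
    return sum(worst) / k

# Ten equally likely profit scenarios (hypothetical).
profits = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
risk_neutral = cvar_profit(profits, 1.0)   # plain expectation
risk_averse = cvar_profit(profits, 0.3)    # mean of the worst 30%
```

In the paper's setting this risk-return value, rather than expected profit, is what the retailer and supplier optimize when choosing wholesale price, order quantity and the blockchain application degree.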
    The results show that: (1)When the cost coefficient of blockchain technology application is low, the optimal wholesale price in mode B is lower than in mode T. However, when the cost coefficient exceeds a specific threshold, the optimal wholesale price in mode B becomes larger. When the proportion of sensitive consumers is high and the cost coefficient of applying blockchain technology is low, the optimal order quantity in mode B is consistently greater, regardless of the supplier's level of risk aversion. (2)For suppliers, the benefits obtained under mode B are always greater than under the traditional mode. If more consumers are sensitive and the cost coefficient of blockchain technology application is low, the optimal risk-return value of the retailer is higher under mode T. If consumers are less sensitive, the production cost of the product meets certain conditions, and moreover the supplier has a higher degree of risk aversion, the traditional financing mode is more beneficial to the retailer.
    This study solely focuses on the application of blockchain technology information disclosure by a single retailer. However, given the ubiquitous nature of competition in the market, future research will explore how the intensity of competition in information disclosure using blockchain influences supply chain financing and operational decisions.
    Modeling and Simulation Study of System Dynamics for Knowledge Co-creation of Service Supply Chain
    ZHU Xuechun, ZHAO Xinran, GONG Wenwei
    2023, 32(9):  50-56.  DOI: 10.12005/orms.2023.0284
    Service is an important source of value creation and increasingly promotes the high-quality development of the economy and society. With the development of the service economy, supply chain management has gradually extended from the manufacturing industry to the service field, and service supply chain management has become the core content of service operation management. The service supply chain focuses on knowledge integration and collaborative service innovation, and realizing knowledge co-creation of the service supply chain is an important goal of service supply chain management. Knowledge co-creation of the service supply chain can not only shift the value creation of the supply chain from tangible products to intangible services but also explore more of the supply chain's potential value. It plays an important role in improving service quality and promoting service innovation, and promotes the healthy and stable development of the service supply chain. Studying knowledge co-creation of the service supply chain and exploring strategies to promote it therefore has significant theoretical and practical value.
    This paper takes the service supply chain as its object and explores the operating mechanism of knowledge co-creation using the system dynamics method. The study analyzes the different roles of service providers, the service integrator, and customers in knowledge co-creation and explains the co-creation process. It then builds a system dynamics model of knowledge co-creation covering service providers, the service integrator, and customers with the Vensim PLE software, and conducts simulation and sensitivity analyses on customer demand knowledge, the knowledge transfer capability of service providers, and the knowledge transfer capability of the service integrator. Finally, the research explores the dynamic mechanisms through which these three factors influence knowledge co-creation in the service supply chain and puts forward corresponding suggestions.
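    The stock-and-flow logic described above can be sketched numerically. The following is a minimal illustrative sketch, not the paper's Vensim PLE model: all stock names, rate equations, and parameter values are assumptions chosen only to show how transfer capabilities drive accumulated co-creation.

```python
# Minimal system-dynamics sketch of knowledge co-creation (Euler integration).
# Stocks, flows, and parameters are illustrative assumptions, not the
# calibrated Vensim PLE model from the paper.

def simulate(transfer_provider=0.3, transfer_integrator=0.4,
             demand_inflow=5.0, creation_rate=0.1, steps=100, dt=0.1):
    customer = 10.0    # customer demand knowledge stock
    provider = 0.0     # service provider knowledge stock
    integrator = 0.0   # service integrator knowledge stock
    co_created = 0.0   # accumulated co-created knowledge
    for _ in range(steps):
        # knowledge is non-rival: transfer does not deplete the source stock
        customer += demand_inflow * dt
        provider += transfer_provider * customer * dt
        integrator += transfer_integrator * customer * dt
        co_created += creation_rate * provider * integrator * dt
    return co_created
```

    Consistent with the sensitivity analysis, raising either transfer capability raises accumulated co-creation; for example, `simulate(transfer_provider=0.5)` exceeds the default run.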
    The results of the study are as follows. Firstly, the service supply chain knowledge co-creation system is a dynamic and complex system comprising a service provider knowledge subsystem, a service integrator knowledge subsystem, and a customer knowledge subsystem. Secondly, customer demand knowledge includes explicit and tacit knowledge; it encourages service providers and the service integrator to acquire knowledge and has a positive influence on knowledge co-creation in the service supply chain. Lastly, as the knowledge transfer capabilities of service providers and the service integrator strengthen, knowledge co-creation in the service supply chain increases continuously.
    The study not only deepens research on the service supply chain and knowledge co-creation but also provides ideas for driving knowledge co-creation in the service supply chain. In practice, however, there are different types of services, such as technology, tourism, and logistics services, and knowledge co-creation may differ across the corresponding supply chains. Future work can study knowledge co-creation in different types of service supply chains and pursue further empirical research and case studies.
    Research on the Evolutionary Game of the Financing Mechanism for the Integrated Development of “Three Cooperatives”
    TANG Dexiang, PENG Tianyu
    2023, 32(9):  57-63.  DOI: 10.12005/orms.2023.0285
    The “Three Cooperatives” are indispensable subjects in the process of China's agricultural and rural development, and the backbone force for promoting the organization and materialization of the rural collective economy. The integrated development of the “Three Cooperatives” takes the high-quality development of China's agriculture and rural areas as its foundation and agricultural services as its core. Through the deep integration of the advantages of farmers' professional cooperatives, supply and marketing cooperatives, and credit cooperatives in production and management, market supply and marketing channels, and credit service support, it forms a joint force to promote the rural collective economy and rationally allocates agriculture-related resources through professional division of labor and efficient cooperation, thereby better advancing agricultural modernization and providing strong support and guarantees for the rural revitalization strategy. However, in this integration process, the financing difficulties of the “Three Cooperatives” cannot be effectively alleviated because the associated financing risks are difficult to control.
    Based on theoretical research and the logic of agricultural supply chain finance, this paper designs a financing mechanism for the integrated development of the “Three Cooperatives”, constructs an evolutionary game model of their integrated development, investigates how different initial probabilities and key parameter changes under different strategies affect the game equilibrium, and conducts a simulation analysis based on the actual situation. The results show that parameters such as the probability of independent repayment by farmers' professional cooperatives, the liquidated damages and credit penalties of agricultural product order contracts, the discount rate on the purchase amount of such contracts, the supervision costs of supply and marketing cooperatives, the loan interest rate of credit cooperatives, and government risk compensation are the key factors affecting the financing game equilibrium. Therefore, supply and marketing cooperatives need to strictly supervise whether the loan funds of farmers' professional cooperatives are used in agricultural production and strictly control supervision costs; they should not reduce the discount rate on the purchase amount of order contracts too much, leaving sufficient profit margins for farmers' professional cooperatives; and they should increase the liquidated damages and credit penalties of order contracts while protecting those cooperatives' profits, so as to strengthen the constraints on farmers' professional cooperatives and ensure that they perform order contracts as promised. In this way, farmers' professional cooperatives always adopt the “Cooperation” strategy and supply and marketing cooperatives always adopt the “Guarantee” strategy.
For credit cooperatives, it is necessary to strengthen the risk protection provided by government risk compensation, appropriately relax the requirements on the loan loss compensation ratio of supply and marketing cooperatives, and increase the probability of independent repayment by farmers' professional cooperatives through credit files, so as to reduce the default risk faced by credit cooperatives and finally reach a game equilibrium in which the “Three Cooperatives” always adopt (cooperation, guarantee, loan).
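    The convergence to the (cooperation, guarantee, loan) equilibrium can be illustrated with replicator dynamics. The payoff-advantage terms below are hypothetical placeholders, not the paper's calibrated parameters; they merely encode the favorable case in which each party's incentive-compatible strategy dominates.

```python
# Replicator-dynamics sketch for a tripartite evolutionary game.
# x, y, z are the probabilities that the farmers' professional cooperative
# cooperates, the supply and marketing cooperative guarantees, and the
# credit cooperative lends. Payoff advantages are illustrative assumptions.

def replicator(x0=0.2, y0=0.2, z0=0.2, steps=5000, dt=0.01):
    x, y, z = x0, y0, z0
    for _ in range(steps):
        dx = x * (1 - x) * (1.5 * y * z + 0.4)  # cooperate vs. default
        dy = y * (1 - y) * (1.0 * x * z + 0.3)  # guarantee vs. refuse
        dz = z * (1 - z) * (2.0 * x * y + 0.2)  # lend vs. not lend
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    return x, y, z
```

    Under these assumed parameters every interior starting point is driven to (1, 1, 1), i.e. the (cooperation, guarantee, loan) profile.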
    Driving Mechanism of Field Workers' Safety Ties Based on Evolutionary Game
    WU Chunlin, YANG Yang, ZHAI Fengyu, ZHAO Mofei
    2023, 32(9):  64-71.  DOI: 10.12005/orms.2023.0286
    The occupational safety situation in China is complex and alarming, with over 38,000 safety accidents occurring nationwide in 2020, resulting in over 27,400 deaths. As one of the most dangerous industries, the construction industry in China sees almost 4,000 deaths at construction sites every year. Therefore, researching and resolving safety issues at construction sites is paramount. Workers' unsafe behavior at the workplace, exhibiting diverse, dynamic, and complex characteristics, is a long-standing weak link in accident prevention and control systems at construction sites and often leads to safety accidents. Traditionally, safety management research and practice focus mainly on the impact of organizational and policy factors on worker behavior. The most direct approach to improving the safety conditions of workers at construction sites typically involves strengthening management and establishing strict rules and regulations to prohibit workers' unsafe behavior. However, research has shown that workers have a higher acceptance of safety reminders from team leaders and colleagues within the same operational group compared to rigid safety regulations. Workers also make autonomous decisions on whether or not to comply with regulatory constraints and whether to adhere fully or partially to the system, which significantly impacts their safety. Ignoring workers' subjective intentions and the complex, dynamic interactions between them can lead to less effective research results.
    Taking into account subjective factors such as workers' characteristics and their differing sense of belonging to the organization, this paper introduces the concept of a “safety bond” and constructs an evolutionary game dynamic model for the formation of safety bonds among workers at the construction site. A payoff matrix for safety bond formation is established, depicting how safety interactions among workers influence the probability that they decide to initiate such interactions. By solving the replicator dynamic equations of the model, we obtain the evolutionary equilibrium points of the game system and analyze them to identify stable equilibrium solutions. We simulate the dynamic game process, described by a system of differential equations, to analyze the stability of the equilibrium points and the evolutionarily stable points. The Jacobian matrix of the dynamical system is derived for the local stability analysis, which verifies whether a combination of game equilibrium points and evolutionarily stable points is the final evolutionarily stable strategy reached by both parties.
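    The local stability check described above can be sketched as follows. The payoff parameters b (bond benefit), r (organizational reward), and c (initiation cost) are illustrative assumptions satisfying r < c < b + r, the bistable case noted in the results; the Jacobian is evaluated numerically by central differences.

```python
import numpy as np

# Replicator dynamics for two worker populations choosing whether to
# initiate a safety interaction. The payoff advantage of initiating,
# against a fraction y of initiators, is taken as b*y + r - c
# (an illustrative form, not the paper's full payoff matrix).
B, R, C = 2.0, 0.5, 1.0  # bond benefit, reward, initiation cost (assumed)

def dynamics(state):
    x, y = state
    return np.array([x * (1 - x) * (B * y + R - C),
                     y * (1 - y) * (B * x + R - C)])

def jacobian(state, eps=1e-6):
    # numerical Jacobian by central differences
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = eps
        J[:, j] = (dynamics(state + e) - dynamics(state - e)) / (2 * eps)
    return J

def is_stable(state):
    # an equilibrium is locally stable if all eigenvalues have negative real part
    return bool(np.all(np.linalg.eigvals(jacobian(state)).real < 0))
```

    With these parameters both corner equilibria (0, 0) and (1, 1) are evolutionarily stable, while the interior equilibrium x = y = (c − r)/b is a saddle, matching the bistable outcome described in the abstract.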
    The results elucidate the evolutionary process of safety bond formation among onsite workers and the mechanisms driving rapid evolution. We find that when the organizational reward for a worker initiating a safety interaction is less than the cost of initiation, or when the organizational reward for both parties initiating a safety interaction is less than the benefit of forming a friendship-based safety bond, both parties eventually evolve towards a stable strategy of either both initiating safety interactions or neither doing so. The evolutionary path of workers' strategy choices and the probability of initiating a safety interaction are closely related to the initial state of the game and the parameters of the payoff matrix. The speed of system evolution is positively correlated with the number of safety interactions initiated by workers and is influenced by the initial values of the system environment parameters. Factors such as workers' safety awareness, organizational belonging, job satisfaction, and the cost of safety interactions are the main factors influencing workers' safety behavior decisions and the rate of system evolution. Finally, considering the reality of onsite workers, the analysis and conclusions, and the characteristics of workers in high-accident industries, we propose four management recommendations to reduce unsafe behavior, promote the transition from “required safety” to “desired safety”, and prevent occupational accidents: 1) Form a core operation team, establish a safety interaction reward mechanism, and provide special rewards for workers who are willing to interact altruistically. 2) Improve the vocational skills training system to encourage workers to initiate safety interactions from an emotional standpoint. 3) Establish a long-term wage payment guarantee mechanism, ensure all workers on the job site are covered by workers' compensation insurance, and make workers feel cared for by the government or the company.
4) Pay attention to and mobilize workers' subjective initiative to initiate safety interactions; during recruitment, select workers who are more proactive in initiating safety interactions to reduce unsafe behavior and accidents; regularly test workers' initiative in initiating safety interactions during operations and reward those with excellent results to motivate them.
    Pythagorean Triangular Fuzzy Number Density Operator and Its Application
    YI Pingtao, WANG Shengnan, LI Weiwei, WANG Lu
    2023, 32(9):  72-78.  DOI: 10.12005/orms.2023.0287
    Multi-attribute decision-making is a decision-making theory and method that uses multiple attributes to help decision makers choose among alternatives, and it has been widely applied in many fields. Besides the attribute characteristics that assist decision-making, the type of attribute value is also an important branch of multi-attribute decision-making research. As the decision-making environment grows more complex, research on multi-attribute decision-making with precise attribute values can no longer meet its demands, so research with fuzzy numbers as attribute values has attracted extensive attention. Fuzzy set theory has developed from fuzzy sets to intuitionistic fuzzy sets and then to Pythagorean fuzzy sets, and the breadth and depth of the decision problems it can describe have improved accordingly. This development depicts the uncertain nature of decision-making more clearly and preserves the authenticity of attribute information. The Pythagorean triangular fuzzy number (PTFN) is an expanded data form. Although it can cope with the complexity and uncertainty of decision problems, the distribution characteristics of the data are not considered during aggregation. In realistic decision problems, however, the decision maker's preference for the density of the data distribution affects the final result; this is the motivation for the density operator. The density aggregation operator re-integrates the results of classical aggregation operators and has received some attention since it was proposed: It is a flexible aggregation method that can effectively enhance the accuracy of information aggregation. Given this, this paper proposes a Pythagorean triangular fuzzy number density weighted operator (PTF-DM), which not only refines the fuzzy data types in multi-attribute decision-making but also accounts for the decision maker's preference for the data distribution.
    This paper mainly addresses the multi-attribute decision-making problem with Pythagorean triangular fuzzy numbers and, considering the density of the attribute information distribution, proposes the Pythagorean triangular fuzzy number density weighted operator. Firstly, Pythagorean triangular fuzzy numbers and the related operations are introduced, and effective clustering of Pythagorean triangular fuzzy numbers is realized using the basic idea of the fuzzy reference ideal method (FRIM) and a score function. Then, the Pythagorean triangular fuzzy number density weighted operator and its composition operators are proposed, and a programming model based on the entropy method is established to determine the final density weights. Finally, an example on the investment choice of manufacturing enterprises illustrates the application of the operator. Based on the Pythagorean triangular fuzzy number data type, this method further expands the practical application range of density operators.
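    To make the data type concrete, the sketch below implements one common representation of a Pythagorean triangular fuzzy number, a score function, and a weighted-average aggregation. The score form and the averaging rules are illustrative textbook-style choices, not the density operator proposed in the paper.

```python
import math

# A PTFN is ((a, b, c), mu, nu): a triangular fuzzy number (a, b, c)
# with Pythagorean membership mu and non-membership nu, mu**2 + nu**2 <= 1.

def score(ptfn):
    # illustrative score: defuzzified triangular value times (mu^2 - nu^2)
    (a, b, c), mu, nu = ptfn
    return (a + 2 * b + c) / 4 * (mu ** 2 - nu ** 2)

def ptfwa(ptfns, weights):
    # weighted average: the triangular parts add linearly; (mu, nu) follow
    # the standard Pythagorean fuzzy weighted-averaging rules
    a = sum(w * t[0][0] for t, w in zip(ptfns, weights))
    b = sum(w * t[0][1] for t, w in zip(ptfns, weights))
    c = sum(w * t[0][2] for t, w in zip(ptfns, weights))
    mu = math.sqrt(1 - math.prod((1 - t[1] ** 2) ** w
                                 for t, w in zip(ptfns, weights)))
    nu = math.prod(t[2] ** w for t, w in zip(ptfns, weights))
    return ((a, b, c), mu, nu)
```

    Clustering by score, as in the FRIM step, then amounts to grouping alternatives with similar score values; the aggregation preserves the Pythagorean constraint mu² + nu² ≤ 1.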
    To verify the effectiveness of the proposed density weighted operator, four variants, PTF-DWAWAA, PTF-DWGAWAA, PTF-DWAOWA and PTF-DWGAOWA, are compared. Considering decision makers' preferences for the density of the data distribution, we obtain the rankings of enterprises' comprehensive values under the same preference with different operators, and the ranking changes under different preferences. The results show that the Pythagorean triangular fuzzy number density weighted operator has good stability and can effectively assist decision makers. In addition, to further verify its effectiveness, this paper compares three Pythagorean triangular fuzzy number aggregation operators with the density weighted operators and finds that the latter are the result of secondary aggregation; the decision maker's preference for the data distribution is the key reason the results of the density weighted operators differ from those of the ordinary aggregation operators. Pythagorean fuzzy sets and density information aggregation are a new trend in information expression and integration, greatly expanding the choice of decision-making tools. In the future, multi-attribute group decision-making under a three-dimensional information structure will be considered to extend research on information expression and aggregation to a wider range of conditions.
    Computation of Joint Signature for A Smart Street Light System Based on Finite Markov Chain Imbedding Approach
    YI He, LI Xiang, LU Jingwen
    2023, 32(9):  79-85.  DOI: 10.12005/orms.2023.0288
    With the development of science and technology, equipment systems are increasingly large-scale and complicated, which greatly increases the risk of system failure, and system reliability has begun to attract attention. As a public infrastructure carrier integrating lighting and sensing equipment, smart street lights can collect road data in real time and monitor road conditions through various sensors, and can also manage vehicle flow in real time through their lighting equipment, which is of great significance to the planning and construction of smart cities in China. Smart street lights can deploy different lighting and sensing modules according to the actual application scenario to provide functions such as lighting, advertising (LED screens), broadcasting (sound columns), Wi-Fi, monitoring, and alarms, and these modules may differ in demand and coverage. The city management department needs to choose the appropriate street light type and layout according to the lane types of different road segments.
    In recent years, there have been numerous studies on smart street light systems in China and abroad, covering hardware and software design and implementation, energy-saving strategies and control algorithms, related technical means, and empirical research. Researchers have realized the importance of reliability in studying smart street light systems, but a large gap remains in reliability-based system modeling and analysis. In fact, to study its reliability, a smart street light system on a given road segment can be regarded as several linear consecutive-k-out-of-n type redundant systems sharing components. To better characterize the structural properties of such systems, this paper presents a computational method for the joint signature of two linear consecutive-k-out-of-n type redundant systems sharing components, based on the finite Markov chain imbedding approach (FMCIA). The method can be used for reliability analysis and structural comparison of such systems and provides a theoretical basis for management decision-making in road planning.
    Signature theory is an important tool for describing system structure in reliability theory. The introduction of signature measures overcomes the difficulty of characterizing the structure of large, complex systems and provides a way to compare system structures through stochastic orders. Computing these measures has always been a hot and difficult problem in the signature field. Existing methods include the definition method, path/cut set method, reliability method, binary decision diagram method, generating function method, Markov process method, and module decomposition method, each with its own advantages, disadvantages, and application scope. Among them, the reliability method uses the one-to-one relationship between the signature and system reliability to transform the signature computation into a reliability computation, so its efficiency depends on that of the reliability computation. For the linear consecutive-k-out-of-n redundant systems sharing components studied in this paper, computing the joint signature by the traditional definition method requires tracking the process of system state change under n! orderings of component failures, which is very inefficient for large n. Therefore, this paper presents a computational method for the joint signature based on the FMCIA: First obtain the joint reliability function of the system using the FMCIA, and then calculate the joint signature from the relationship between the joint signature and the joint reliability function.
    The FMCIA transforms a reliability problem into a finite-state Markov chain problem. With outstanding advantages in system reliability computation and unified analytical expressions, the method is widely used in reliability computation, especially for linear/circular consecutive-k-out-of-n type redundant systems and their derivatives. In recent years, in addition to the common k-out-of-n: F/G systems and (m-)consecutive-k-out-of-n: F/G systems (with sparse d), it has also been used to compute the reliability of many other consecutive-k-out-of-n type redundant systems. When n is very large, the computation can be simplified by eigenvalue decomposition, making the method even more efficient.
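    As a building block, the FMCIA computation for a single linear consecutive-k-out-of-n: F system with i.i.d. components (working probability p) can be sketched as follows. The joint signature of two systems sharing components needs a larger imbedded state space, but follows the same pattern; this is a minimal assumed illustration, not the paper's full construction.

```python
import numpy as np

def consecutive_k_out_of_n_reliability(n, k, p):
    # FMCIA: imbed the failure process in a finite Markov chain whose
    # state j < k is the current run length of consecutively failed
    # components, and state k is the absorbing "system failed" state.
    q = 1.0 - p
    M = np.zeros((k + 1, k + 1))
    for j in range(k):
        M[j, 0] = p        # component works: the failure run resets
        M[j, j + 1] = q    # component fails: the run grows by one
    M[k, k] = 1.0          # absorbing state
    state = np.zeros(k + 1)
    state[0] = 1.0
    for _ in range(n):     # one transition per component
        state = state @ M
    return state[:k].sum() # probability the run never reached k
```

    For n = 3, k = 2, p = 0.9 this yields 0.981, matching direct enumeration: the system fails only if components 1, 2 or components 2, 3 both fail.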
    To sum up, this paper studies the reliability of smart street light systems using a model of linear consecutive-k-out-of-n systems sharing components and presents a new FMCIA-based method to calculate the joint signature. The main contributions are as follows. On the one hand, for smart street light systems composed of lighting and sensing equipment, a model of a linear consecutive-k1-out-of-n system and a consecutive-k2-out-of-n system sharing components is established, which narrows the gap between the practical demand for reliability analysis of smart street light systems and the relevant reliability theory. On the other hand, a computational method based on the finite Markov chain imbedding approach is proposed for the joint signature of these redundant system models; it is more efficient than the traditional definition method, effectively reduces computational complexity, and provides a more applicable theoretical tool for computing the joint signature. Its applications include such systems but are not limited to them: The method can be widely used in system reliability analysis in fields such as wireless communication, pipeline transportation, quality control, and pattern recognition.
    Collaborative Optimization on Assembly Line Balancing and Material Supermarket Planning Based on Genetic Algorithm
    PENG Yunfang, SUN Lumeng, PENG Xuefen, XIA Beixin
    2023, 32(9):  86-92.  DOI: 10.12005/orms.2023.0289
    With the increasing demand for variety and customization, manufacturers should design assembly lines efficiently to enhance the competitiveness of their products. Recently, manufacturers have begun to widely use material supermarkets near assembly lines to ensure just-in-time part supply to stations. In the assembly line design stage, the assembly line balancing problem and the supermarket planning problem are directly interrelated: Decisions taken to balance the assembly line constrain the layout of the material supermarkets, while the supermarket plan affects the logistics efficiency and cost of the assembly line.
    At present, most studies use a hierarchical approach that separates these two interrelated problems: They first balance the assembly line to obtain the optimal number of workstations, and then decide the number and location of material supermarkets. The optimal solution of the first step limits the result of the supermarket planning and may increase total costs. In this paper, a model and algorithm for the collaborative optimization of assembly line balancing and material supermarket planning are proposed. Based on the problem description and related assumptions, an integrated mixed integer programming model is constructed to minimize the total cost, comprising workstation installation cost, material supermarket installation cost, and transportation cost. To solve large-scale problems, an improved genetic algorithm with a novel encoding mode and a new population initialization method is proposed.
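    A minimal genetic algorithm of this flavor can be sketched with a priority-based encoding: each chromosome assigns a priority to every task, and a precedence-respecting decoder packs tasks into stations under the cycle time. The instance, encoding, and operators below are simplified assumptions; the paper's algorithm additionally co-optimizes supermarket locations and uses its own encoding.

```python
import random

TASKS = {"a": 5, "b": 3, "c": 4, "d": 2, "e": 6}   # task durations (assumed)
PREC = {"c": ["a"], "d": ["b"], "e": ["c", "d"]}   # precedence relations
CYCLE = 10                                         # cycle time

def decode(priority):
    # Greedily fill stations; among precedence-ready tasks that still fit
    # in the current station, pick the one with the highest priority gene.
    assigned, stations, current, remaining = set(), [], [], CYCLE
    while len(assigned) < len(TASKS):
        ready = [t for t in TASKS if t not in assigned
                 and all(p in assigned for p in PREC.get(t, []))
                 and TASKS[t] <= remaining]
        if not ready:                      # nothing fits: open a new station
            stations.append(current)
            current, remaining = [], CYCLE
            continue
        t = max(ready, key=lambda t: priority[t])
        current.append(t)
        assigned.add(t)
        remaining -= TASKS[t]
    stations.append(current)
    return stations

def genetic_algorithm(pop_size=20, generations=30, seed=1):
    rnd = random.Random(seed)
    pop = [{t: rnd.random() for t in TASKS} for _ in range(pop_size)]
    fitness = lambda ind: len(decode(ind))  # station count as a cost proxy
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]        # keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            p1, p2 = rnd.sample(elite, 2)   # uniform crossover of priorities
            child = {t: (p1[t] if rnd.random() < 0.5 else p2[t]) for t in TASKS}
            if rnd.random() < 0.2:          # mutation: re-draw one priority
                child[rnd.choice(list(TASKS))] = rnd.random()
            children.append(child)
        pop = elite + children
    return decode(min(pop, key=fitness))
```

    In the full model, fitness would be the total of workstation, supermarket, and transportation costs rather than the station count alone.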
    To evaluate the computational performance of the proposed improved genetic algorithm, problems of different scales are employed in the numerical analyses. The results obtained from the improved genetic algorithm are compared with those solved by CPLEX and by the traditional genetic algorithm. The comparison demonstrates that the improved genetic algorithm performs better as the problem scale increases and can obtain optimal solutions in a short time. Moreover, the comparison shows that the collaborative optimization approach achieves a lower total cost than the hierarchical approach.
    Research on the Problem of Classification Planning Considering Time Sequence Constraints under the New College Entrance Examination
    SUN Zhe, WANG Chong, WU Qinghua
    2023, 32(9):  93-100.  DOI: 10.12005/orms.2023.0290
    Since the launch of the new college entrance examination reform in China in 2014, 29 provinces have published their reform plans in five batches. The reform cancels the division between arts and sciences and adopts a “3+3” or “3+1+2” model, in which Chinese, mathematics, and English are compulsory subjects, while physics, chemistry, and biology, as well as politics, history, and geography, are elective subjects. To safeguard students' right to choose courses freely, the “optional class system” teaching model is becoming increasingly popular. The timetabling problem under this comprehensive reform can be divided into two stages: Class planning and course scheduling. Class planning is the basis and prerequisite of course scheduling. A good class plan can reduce the use of scarce teaching resources and the number of students moving between classes, increase class stability, and reduce the management difficulty of the “optional class system” teaching model. More importantly, an optimized class placement plan can greatly reduce conflicts caused by students attending classes at the same time. It is the basis and prerequisite for producing a feasible, high-quality course timetable and can greatly reduce the difficulty and complexity of course scheduling.
    Aiming at these class planning problems, this paper first summarizes the various classification modes for administrative classes under the new college entrance examination, including the “priority-fixed-three” mode and the “fixed-two-choose-one” mode, and establishes a general mathematical model of the problem. Secondly, the paper summarizes the teaching class division planning problem under the new college entrance examination. To meet all students' subject selections, a sufficient number of teaching classes must be set up, and to maximize the use of class hours, teaching classes must be distributed evenly across classes. Since each student chooses three subjects, the ideal arrangement is for each class to host three courses as teaching classes. To avoid situations where no legal course schedule exists because of conflicts in students' class times, schools generally set three fixed time periods (in which every student can find all courses matching his or her chosen subjects) for unified teaching-class instruction. Therefore, this problem is defined as a teaching class division planning problem with timing constraints, and the corresponding MIP model is established, fully considering classroom and teacher resource constraints.
    CPLEX is used to solve the model and performs well on small examples. For large-scale instances of the teaching class division planning problem, this paper proposes a variable neighborhood search (VNS) algorithm. The algorithm uses Flip Class, Swap Two Class, Exchange Class, and Exchange Combination moves, selecting the best move from the current neighborhood by traversing it. After obtaining intermediate and final solutions, the solutions are further optimized at the level of individual students (with subject-selection combinations as the unit in the neighborhood search) without changing the subjects offered by each class, so as to balance class sizes as much as possible. Finally, the rationality and validity of the model are verified by an actual case of senior-three students at a middle school in Guangzhou and by the analysis of a random example.
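    The VNS skeleton, with the shaking and local-search loop reduced to its essentials, can be sketched on a toy version of the balancing step: assigning subject-selection groups of given sizes to classes so class sizes are even. The two neighborhoods below (relocate one group, swap two groups) are simplified stand-ins for the paper's Flip/Swap/Exchange moves, and the objective is an assumed imbalance measure.

```python
import random

def spread(assign, sizes, k):
    # imbalance objective: max class load minus min class load
    loads = [0] * k
    for item, cls in assign.items():
        loads[cls] += sizes[item]
    return max(loads) - min(loads)

def local_search(assign, sizes, k):
    # first-improvement single-item relocation until no move helps
    improved = True
    while improved:
        improved = False
        for item in sizes:
            base = spread(assign, sizes, k)
            for c in range(k):
                old = assign[item]
                assign[item] = c
                if spread(assign, sizes, k) < base:
                    improved = True
                    break
                assign[item] = old
    return assign

def vns(sizes, k, iters=100, seed=0):
    rnd = random.Random(seed)
    items = list(sizes)
    best = {it: i % k for i, it in enumerate(items)}   # round-robin start
    best_cost, nbhd = spread(best, sizes, k), 1
    for _ in range(iters):
        cand = dict(best)
        if nbhd == 1:                     # shake: relocate a random item
            cand[rnd.choice(items)] = rnd.randrange(k)
        else:                             # shake: swap two items' classes
            a, b = rnd.sample(items, 2)
            cand[a], cand[b] = cand[b], cand[a]
        cand = local_search(cand, sizes, k)
        cost = spread(cand, sizes, k)
        if cost < best_cost:
            best, best_cost, nbhd = cand, cost, 1      # improvement: restart
        else:
            nbhd = 2 if nbhd == 1 else 1               # change neighborhood
    return best, best_cost
```

    Since the incumbent is only replaced on strict improvement, the final imbalance is never worse than the round-robin start, and in this toy instance the search typically balances the classes almost exactly.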
    Building on this research, future work can further consider matching students' learning levels with class levels, the joint planning of administrative and teaching classes, and whether movement between levels is allowed. In dividing teaching classes, the best match between teachers and students can also be considered, that is, arranging for students to be taught by their preferred teachers as much as possible.
    Volatility Prediction Evaluation of GARCH Models Based on Loss Functions
    WANG Susheng, LI Guanglu, WANG Junbo
    2023, 32(9):  101-106.  DOI: 10.12005/orms.2023.0291
    Volatility is the core index for studying the risk of financial assets, as well as one of the bases for pricing financial derivatives. Therefore, research on the volatility of financial assets, especially financial derivatives, has always been a focus and hot spot of academic research, and modeling the volatility risk of financial products is an important topic in the current financial literature. In previous studies, we used different volatility models to forecast the intraday volatility of CSI 300 stock index futures samples, and the models performed differently. How to compare the predictive ability of different volatility models is therefore an important issue. With the sampling frequency fixed, finding the optimal volatility prediction model for CSI 300 stock index futures will help investors grasp market trends and construct appropriate portfolios of financial assets; in theory, it will further complement and improve the theoretical framework for predicting the price volatility of financial derivatives.
    Among volatility prediction methods, the GARCH family of models is simple and extendable, so it is widely applied and extended in theory and practice. When applying GARCH family models, loss functions are often used to measure the accuracy of the prediction model. In this paper, we use a variety of loss functions to evaluate the prediction accuracy of three GARCH (generalized autoregressive conditional heteroskedasticity) models, and try to find the optimal intraday volatility prediction model for stock index futures to assist financial investment. Specifically, building on our previous research results, we take three samples of intra-day one-minute CSI 300 stock index futures returns as the research object, conduct empirical tests with the standard GARCH, eGARCH and RealGARCH models, and use a variety of loss functions to measure the prediction accuracy of the three volatility models from different perspectives.
    First, from the fitting results for the volatility of the three research samples, the GARCH models that can reflect asymmetric fluctuations have more significant fitting coefficients for the samples and smaller loss functions, indicating a better fit. This suggests that the one-minute intraday volatility of CSI 300 stock index futures has an obvious leverage effect: market fluctuations differ for positive and negative external shocks of the same size. Second, for the specific research objects, the prediction accuracy of the eGARCH and RealGARCH models is obviously better than that of the standard GARCH model, and the RealGARCH model is more accurate than the eGARCH model in Sample1 and Sample2. Third, the eGARCH model is more accurate in Sample3. Therefore, when studying the intraday volatility of CSI 300 stock index futures, we should, according to the sample characteristics, give priority to volatility models that can reflect asymmetric features to describe the volatility process and forecast future volatility.
    Problems for further research: the eGARCH and RealGARCH models are mainly aimed at asymmetry, and nonlinear problems can be studied on this basis in the future. Econometric models such as GARCH do have advantages in capturing the linear features of volatility, while current research shows that tools such as machine learning algorithms have begun to find application in volatility prediction because they are better at capturing non-linear features. Future research may consider combining GARCH models with machine learning algorithms to build a combined model and further improve prediction accuracy.
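The loss-function comparison described above can be illustrated with two losses that are standard in volatility forecast evaluation, MSE and QLIKE, applied to made-up realized variances and forecasts (the numbers and "models" below are illustrative, not the paper's CSI 300 results):

```python
import math

def mse_loss(realized, forecast):
    """Mean squared error between realized variance and forecast variance."""
    return sum((r - f) ** 2 for r, f in zip(realized, forecast)) / len(realized)

def qlike_loss(realized, forecast):
    """QLIKE loss: robust to noise in the realized-variance proxy and
    penalizes under-prediction of variance more heavily than over-prediction."""
    return sum(r / f - math.log(r / f) - 1.0
               for r, f in zip(realized, forecast)) / len(realized)

# Illustrative (made-up) realized variances and two competing model forecasts.
realized = [1.2, 0.8, 1.5, 1.1, 0.9]
model_a  = [1.1, 0.9, 1.4, 1.0, 1.0]   # e.g. an asymmetric (eGARCH-style) model
model_b  = [1.0, 1.0, 1.0, 1.0, 1.0]   # e.g. a constant-variance benchmark

better = "A" if qlike_loss(realized, model_a) < qlike_loss(realized, model_b) else "B"
```

Ranking models by several such losses, as the paper does, guards against a single loss function favoring one model's error profile.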
    The reviewers have put forward good suggestions for the revision of this article. We would like to express our sincere thanks to the reviewers, while we take full responsibility for the article. We would also like to thank the Philosophy and Social Sciences Planning Foundation of Shenzhen, Guangdong Province, China for supporting this study.
    Economic Design of EWMA Control Charts with Variable Sampling Intervals for Monitoring Poisson Distributions Based on Preventive Maintenance
    XUE Li, CAO Doudou, WANG Qiuyu
    2023, 32(9):  107-113.  DOI: 10.12005/orms.2023.0292
    The core techniques and methods of modern quality control have always centered on how to improve monitoring efficiency and reduce control costs. Statistical process control is a commonly used quality control method, and the control chart is an important tool for monitoring and ensuring process quality. The design method of a control chart has a large impact on the efficiency of monitoring and controlling the production process and on control costs. The statistical design of control charts is based mainly on statistical criteria; although it takes into account the statistical properties of the chart, it does not consider its economic benefits.
    From an economic perspective, the costs of investigating alarm signals, producing defective products, sampling and testing, and correcting abnormal causes all influence the parameter design of control charts. Therefore, it is logical to design the control chart from an economic point of view. The traditional c chart for the number of defects, proposed on the basis of the Poisson distribution to monitor the number of defects in the product during manufacturing, considers only fixed chart parameters and a purely statistical design. In order to improve the efficiency of defect-count control charts for process monitoring and to reduce the cost of the control process, this paper discusses the economic design of a variable sampling interval (VSI) EWMA (exponentially weighted moving average) control chart with a preventive maintenance strategy under the Poisson distribution.
    Firstly, an economic model of the VSI EWMA chart based on a preventive maintenance strategy and a quality loss function under the Poisson distribution is developed. Secondly, the loss cost function per unit time is minimized and the optimal solution of the economic model is obtained by using a genetic algorithm. Taking the chemical building materials industry as an example, we show how to determine the optimal parameter values based on the VSI Poisson EWMA control chart economic model established under the preventive maintenance strategy. Thirdly, a sensitivity analysis of the developed economic model is performed to examine how the control chart design parameters respond to variation in the model parameters, and the results of the analysis are as follows: (1)The sample size decreases with an increasing shift in the process mean. (2)The lower control limit coefficient decreases with increases in the frequency of abnormal causes and in the average cost of finding and correcting an abnormal cause. (3)The long sampling interval decreases with an increase in the process mean shift. (4)The lower warning limit coefficient decreases with an increasing frequency of abnormal causes. (5)The smoothing coefficient increases as the shift in the process mean decreases. (6)The loss cost function per unit time increases with increases in the sampling cost per sample, the average time of the correction process and the loss when the product is unqualified, and decreases with an increase in process mean fluctuation. Finally, the superiority of the developed economic model is verified by an optimality analysis: the VSI EWMA chart designed with the economic model developed in this paper is superior to the VSI EWMA control chart designed by the statistical method and has a smaller expected loss cost per unit time.
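The VSI EWMA monitoring logic for Poisson counts can be sketched as follows; the limit widths, smoothing coefficient and the two sampling intervals are illustrative placeholders, not the economically optimal values the paper derives with a genetic algorithm:

```python
import math

def poisson_ewma_vsi(counts, mu0, lam=0.2, L=3.0, W=1.5, h_long=2.0, h_short=0.5):
    """EWMA chart for Poisson counts with variable sampling intervals (VSI).

    counts: observed defect counts per sample; mu0: in-control mean.
    lam: EWMA smoothing coefficient; L/W: control/warning limit widths in
    asymptotic-standard-deviation units; h_long/h_short: the two sampling
    intervals. Returns (index of the first signal or None, intervals used).
    """
    sigma = math.sqrt(lam / (2 - lam) * mu0)     # asymptotic std. dev. of the EWMA
    ucl, lcl = mu0 + L * sigma, mu0 - L * sigma
    uwl, lwl = mu0 + W * sigma, mu0 - W * sigma
    z = mu0                                      # EWMA statistic starts at the target
    intervals = []
    for t, c in enumerate(counts):
        z = lam * c + (1 - lam) * z
        if z > ucl or z < lcl:
            return t, intervals                  # out-of-control signal
        # inside the warning limits -> relaxed (long) interval; between the
        # warning and control limits -> tightened (short) interval
        intervals.append(h_long if lwl <= z <= uwl else h_short)
    return None, intervals
```

The VSI rule is what the economic model exploits: sampling less often while the statistic looks in-control lowers sampling cost without delaying the detection of shifts much.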
    Evolutionary Game Analysis of Environmental Governance in the Yellow River Basin Based on Public Participation
    WANG Yiqi, CAO Guoliang, LI Guoping
    2023, 32(9):  114-119.  DOI: 10.12005/orms.2023.0293
    As the mother river of the Chinese nation, the Yellow River Basin constitutes an important ecological barrier and is a key economic zone in our country. It is also a crucial region for building a beautiful China. The Central Committee of the Communist Party of China and the State Council attach great importance to the Yellow River issue and have elevated ecological protection and high-quality development of the Yellow River Basin to a major national strategy. How to carry out practical and effective environmental management actions in the Yellow River Basin, and coordinate the increasingly complex functional conflicts and the contradictions of multiple interest demands, have become urgent problems that need to be addressed.
    From the perspective of evolutionary game, this paper introduces the inspection intensity into the strategy choice of local governments. It constructs two scenarios without and with central government constraints, analyzes the dynamic evolutionary game of local governments, enterprises and the public in these two scenarios in the Yellow River Basin, and explores the behavior and strategy choice of the game players through numerical simulation. The paper reveals the key factors that influence the governance of the ecological environment in the basin.
    The results show that: Firstly, when the parameter values are determined, the speed of strategy evolution of local governments, enterprises and the public in the Yellow River Basin is affected by their own strategy selection probabilities and those of the other two parties. However, no matter how each agent's choice probability changes, it will not alter the agent's final behavior strategy choice, and the system will eventually reach the stable strategy points under the different stability conditions. Secondly, under the constraints of the central government, the tripartite game players can achieve the ideal evolutionary stable equilibrium (strict supervision, water saving and emission reduction, supervision). Thirdly, compared with the scenario without central government constraints, the central government's constraints accelerate the evolution of local governments, enterprises and the public toward strict supervision, water conservation and emission reduction, and supervision under the corresponding stability conditions. This indicates that the central government's constraints promote the game players' choice of strategies that actively manage the basin environment. Based on the above research conclusions, the following suggestions are proposed: Firstly, the central government should increase rewards and punishments, and formulate reasonable policies to strengthen the supervision of and intervention in local governments' environmental governance of the Yellow River Basin. Secondly, the central government should strengthen the environmental responsibility of local governments. Local governments should establish reasonable water resource tax rates and environmental regulatory standards, increase rewards and punishments for water-saving and emission-reducing enterprises, and encourage enterprises to actively conserve water and reduce emissions. Thirdly, local governments should implement corporate responsibility for pollution control. 
Enterprises should enhance their environmental awareness, actively engage in technological innovation, and seek new development paths for water conservation and emission reduction. Fourthly, the governments should improve the mechanism for public participation and supervision. The public should use media and reporting mechanisms to supervise enterprises, and make reasonable use of the rewards provided by the central and local governments for reporting, in order to participate in environmental governance in an orderly manner.
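The tripartite evolutionary dynamic can be sketched with replicator equations integrated by Euler steps. The linear payoff-advantage terms below are made-up placeholders standing in for the paper's payoff matrix; they merely illustrate how the three strategy probabilities co-evolve toward a stable point:

```python
def replicator_step(x, y, z, dt=0.01):
    """One Euler step of a stylized three-party replicator dynamic.

    x, y, z: probabilities that the local government strictly supervises,
    the enterprise saves water and cuts emissions, and the public supervises.
    The coefficients are illustrative, not the paper's payoff parameters.
    """
    # Each party's expected-payoff advantage of its "active" strategy,
    # as a made-up linear function of the other parties' strategies:
    fx = 2.0 * y + 1.0 * z - 1.5      # government's advantage of strict supervision
    fy = 2.5 * x + 0.5 * z - 1.0      # enterprise's advantage of abatement
    fz = 1.0 * x + 1.5 * y - 0.8      # public's advantage of supervising
    x += dt * x * (1 - x) * fx        # replicator equation dx/dt = x(1-x)fx
    y += dt * y * (1 - y) * fy
    z += dt * z * (1 - z) * fz
    return x, y, z

def simulate(x, y, z, steps=5000):
    """Integrate the dynamic from an initial strategy profile."""
    for _ in range(steps):
        x, y, z = replicator_step(x, y, z)
    return x, y, z
```

With these placeholder payoffs, an interior starting point evolves toward the all-active profile (strict supervision, emission reduction, supervision), while boundary points where a strategy has probability zero stay fixed, which is the qualitative pattern the abstract describes.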
    Application Research
    Study on VTS Radar Station Location Optimization Considering Obstacle Occlusion and Radar Attenuation
    HUANG Chuan, LYU Jing, AI Yunfei
    2023, 32(9):  120-127.  DOI: 10.12005/orms.2023.0294
    By 2021, China had 135,679 civil transport vessels, and vessel traffic flow in China's coastal and inland waters had grown rapidly. This rapid growth has led to an increasing number of waterway transportation accidents. Frequent waterway transportation accidents not only endanger the safety of shipping vessels, people and property, but also damage the environment and seriously affect the development and use of water areas. In order to ensure the safety of ship navigation, maritime authorities need to rely on the vessel traffic service (VTS) to regulate and ensure waterway transportation safety.
    The VTS system is the main equipment of China's maritime authorities for waterway safety supervision, and the location of VTS radar stations, an important part of the system, has a significant impact on the safety and efficiency of the whole system. As a result, it is necessary to study the VTS radar station location and configuration model. The traditional VTS radar station location and configuration optimization model is more suitable for scenarios where there are no large objects such as mountains and forests in the station-building environment and the difference in altitude between the VTS radar station and the monitored water area is small. In actual situations, however, many station-building environments contain mountains and forests, which may affect electromagnetic wave propagation to a certain extent and thus degrade the VTS radar's monitoring performance over the whole water area. Meanwhile, the traditional location and configuration model of VTS radar stations is built in the form of one-to-one monitoring.
    With regard to the above factors, this paper takes into account obstacle blockage, the attenuation of radar radio wave propagation, and the need for alternative coverage when a single radar fails. It proposes a multi-objective location and configuration model for VTS radar stations based on the idea of collective coverage, designs a multi-objective particle swarm algorithm with adaptive weights, and introduces the ZDT series of test functions to analyze the performance of the algorithm. An example verification is then conducted based on a VTS radar station building project. The algorithm solves the proposed mathematical model quickly, and the example results verify the feasibility of the proposed model and algorithm. A sensitivity analysis of the maximum coverage radius is then carried out on the example results, which shows that deploying a radar with better performance may be more effective than building more VTS radar stations, although in actual environments decisions need to be made on a case-by-case basis. The proposed model and algorithm also provide solutions for the location and configuration of VTS radar stations in real-world environments.
    However, the location and configuration of VTS radar stations in an actual environment is a comprehensive and systematic project that requires consideration of many factors. The location decision can therefore draw on further factors, such as the siting assumptions commonly shared with oil spill radar, while the algorithm used in this paper can be further improved to increase the accuracy and scientific rigor of the solution.
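The obstacle-occlusion part of the coverage model can be sketched as a line-of-sight test combined with a maximum detection radius; the geometry and the single `r_max` cutoff below are simplifications standing in for the paper's attenuation model:

```python
import math

def covers(radar, point, r_max, terrain, step=1.0):
    """Check whether a radar at `radar` = (x, y, antenna_height) covers a demand
    point (x, y, height): the point must lie within the detection radius r_max
    AND the line of sight must clear the terrain profile.

    terrain(x, y) returns ground elevation at (x, y). The geometry is
    simplified (straight-line sight, no earth curvature or refraction).
    """
    (x0, y0, h0), (x1, y1, h1) = radar, point
    d = math.hypot(x1 - x0, y1 - y0)
    if d > r_max:
        return False                    # beyond the attenuated detection radius
    n = max(1, int(d / step))
    for k in range(1, n):
        t = k / n
        # height of the sight line at fraction t of the way to the target
        los_h = h0 + t * (h1 - h0)
        if terrain(x0 + t * (x1 - x0), y0 + t * (y1 - y0)) > los_h:
            return False                # blocked by a mountain or other obstacle
    return True
```

A location model then counts, for each candidate station set, how many demand points pass this test (including with one radar removed, for the single-failure backup objective).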
    Development Strategy of the Flea Market in the E-closed-loop Supply Chain Considering the Reference Price Effect
    BAI Chunyu, ZHAO Ying, GUAN Zhimin
    2023, 32(9):  128-135.  DOI: 10.12005/orms.2023.0295
    As consumers' environmental awareness continues to grow, many companies are recognizing the value of recycling waste products and selling second-hand products. These practices not only promote sustainability but also offer economic benefits for businesses. The rapid development of e-commerce has led to the emergence of recycling businesses on many e-commerce platforms. Some of the recovered old products are remanufactured by manufacturers, while others are sold directly by the e-commerce platforms as second-hand products. In addition, price is no longer the only criterion affecting consumers' purchase decisions, and consumers often exhibit reference price behavior because of psychological prices. Therefore, enterprises should be fully aware of the impact of reference price effects on consumers' purchasing decisions, and incorporate them into operational decisions scientifically and rationally. Currently, the existing literature on closed-loop supply chains often ignores the impact of consumer reference price effects, and the literature on the reference price effect does not consider its impact on second-hand product sales decisions. Therefore, differing from the existing literature, this study investigates the optimal decision-making problem of closed-loop supply chain members when the e-commerce platform's second-hand market is and is not opened, and considers the impact of reference price effects on decision-making, which has theoretical and practical implications.
    Consider an E-closed-loop supply chain structure in which the manufacturer entrusts the e-commerce platform to sell new products and recycle used products. When the e-commerce platform does not open up the second-hand market, it transfers all the recycled used products to the manufacturer for remanufacturing. When the e-commerce platform opens up the second-hand market, it sells both new and second-hand products to consumers. The manufacturer does not need to pay related fees for second-hand products sold in the second-hand market, and the e-commerce platform is responsible for its own profits and losses there. Three scenarios are considered in this paper: (1)the E-closed-loop supply chain model without opening up the second-hand market (Model A); (2)the E-closed-loop supply chain model with the second-hand market opened up (Model B); and (3)the E-closed-loop supply chain model considering the reference price effect (Model C). In each of the three scenarios, we construct and solve a Stackelberg game model with the e-commerce platform as the leader and the manufacturer as the follower. Then, we analyze the equilibrium results analytically and numerically, and obtain the following important conclusions:
    (1)For consumers, the e-commerce platform's opening up of the second-hand market breaks the manufacturer's monopoly on the market. Competition between new and second-hand products thus results in lower sales prices for new products and higher recycling prices for old products, both of which benefit consumers. In addition, the reference price effect and the enhancement of consumers' preference for second-hand products can further increase the recycling price of old products and reduce the sales price of new products. This increases the consumer's utility from returning an old product and purchasing a new one.
    (2)For the manufacturer, the e-commerce platform opening up the second-hand market is a threat to it. As a result, the manufacturer has to raise the recycling price of old products and lower the sales price of new products to cope with the competition between new and second-hand products in the market. Although the price adjustment will increase the recycling of old products and improve the sales of new products, the profit of the manufacturer will still be lower. In addition, the enhanced consumer reference price effect and preference for second-hand products will lead the e-commerce platform to increase advertising levels and recycle more old products. But this will also lead to lower sales of new products and lower profits for the manufacturer.
    (3)For the e-commerce platform, opening up a second-hand market means it no longer relies solely on commission fees for profit. By pricing second-hand products, the e-commerce platform can control the market to a certain extent and establish a cooperative and competitive relationship with the manufacturer. Opening up the second-hand market gives the platform a new source of revenue, which greatly increases its profits. In addition, the consumer reference price effect and the enhancement of second-hand product preference enable the e-commerce platform to sell more second-hand products, thereby increasing the advertising level for old-product recycling and recycling more old products. This further increases the profit of the e-commerce platform.
    There are still some limitations in this paper. First, it only considers one manufacturer and one e-commerce platform; a supply chain system composed of multiple manufacturers and multiple e-commerce platforms can be a future research object. In addition, this paper only studies a single-period game model; the multi-period dynamic pricing strategy of the closed-loop supply chain can be studied in the future.
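The leader-follower structure of the platform-manufacturer game can be sketched by backward induction on a stylized linear-demand model (a hypothetical commission-rate game with made-up parameters, not the paper's Model A/B/C formulation):

```python
def follower_price(f, a, b, c):
    """Manufacturer's best-response retail price given commission rate f,
    from the first-order condition of maximizing (p*(1-f) - c)*(a - b*p)."""
    return a / (2 * b) + c / (2 * (1 - f))

def platform_profit(f, a, b, c):
    """Leader's profit f*p*D, anticipating the follower's best response."""
    p = follower_price(f, a, b, c)
    demand = max(0.0, a - b * p)
    return f * p * demand

def solve_stackelberg(a=10.0, b=1.0, c=2.0, grid=1000):
    """Backward induction: the platform grid-searches its commission rate,
    plugging in the manufacturer's closed-form best response each time.
    Parameters are illustrative."""
    best_f, best_pi = 0.0, 0.0
    for k in range(1, grid):
        f = k / grid * 0.9              # search f over (0, 0.9)
        pi = platform_profit(f, a, b, c)
        if pi > best_pi:
            best_f, best_pi = f, pi
    return best_f, best_pi
```

Solving the follower's problem first and substituting its response into the leader's objective is exactly the procedure used to derive the equilibrium in Stackelberg models like the paper's.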
    Research on Coal Mine Gas Explosion Risk Evaluation Model Based on Partially Ordered Sets of Game Theory
    LAI Wenzhe, SHAO Liangshan
    2023, 32(9):  136-142.  DOI: 10.12005/orms.2023.0296
    Coal is the main source of energy for many countries and regions, the power base of social development, and one of the world's most important non-renewable energy sources. At present, China is still one of the world's largest coal importers. The importance of coal energy is even more evident given its relatively high share of the energy consumed in China's electricity and industrial production. However, alongside this importance, the dangerous nature of coal mining and the complexity of the mining process cannot be ignored. Coal production has become one of the most dangerous jobs in China, and the whole production process is often accompanied by various hazardous accidents. Among them, gas explosion accidents, with their high mortality rate and strong destructive power, damage social property and pose a great challenge to environmental protection. There is therefore an urgent need for a practical and effective coal mine gas explosion risk evaluation model to overcome the above problems. This study has high research value and significance in the fields of management science and coal mine safety engineering.
    Under the vigorous popularization of intelligent mines, the subordinate enterprises of China's coal industry have basically completed a high degree of integration of automation and information technology. In order to further improve the intelligence of safety management, this study proposes a coal mine gas explosion risk evaluation model based on game theory and partially ordered sets. This fusion model, which draws on techniques such as nonlinear science, multidimensional data fusion and data analysis, can contribute to the development of coal mine safety governance.
    Modeling begins with the classification of risk safety levels based on grading guidelines. Firstly, a comprehensive analysis of the factors affecting gas explosion risk is conducted, and 14 indicators (recorded as y1-y14), such as ventilation facilities and equipment, gas outflow, air supply and demand ratio, and safety education and training, are selected to form the model's indicator set. Then, considering the imbalance between the subjectivity and objectivity of indicator weighting, the game weighting method is used to combine the weights obtained by the entropy method and the analytic hierarchy process to obtain the best balanced weights. Finally, the game-theoretic partially ordered set evaluation model with weight information is obtained by combining this with partially ordered set theory.
    The model is applied to evaluate the risk levels of 20 gas mines with similar hazard characteristics from Yitai Group in Inner Mongolia as the evaluation samples (denoted as A1-A20), and the results show that the Hasse diagram of the model's evaluation results is more accurate. In order to further verify the robustness of the game-theoretic partially ordered set model, this study compares it several times with two other models, and the results show that after optimization by the game theory method the indicator weights in the partially ordered set model are in equilibrium, ensuring the transformation of the partial order relationship. This transformation gives full play to the property that the evaluation results of a partially ordered set evaluation model are robust once the weight order of the indicators is fixed, which overcomes the shortcomings of traditional evaluation methods to a certain extent. The proposed game-theoretic partially ordered set model provides a new idea for the evaluation of coal mine gas explosion risk.
    In follow-up research, we will expand the indicator set and the data set in order to obtain a more scientific evaluation of the overall risk.
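The combination-weighting and partial-order steps can be sketched as follows; the two input weight vectors stand in for the entropy-method and AHP weights, and the combination uses the textbook game-theoretic least-squares formulation (for two weight sources only):

```python
def combine_weights(w1, w2):
    """Game-theoretic combination of a subjective (e.g. AHP) weight vector w1
    and an objective (e.g. entropy-method) weight vector w2: solve
    sum_j a_j (w_i . w_j) = w_i . w_i for the combination coefficients,
    then normalize them to sum to one."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    a11, a12, a22 = dot(w1, w1), dot(w1, w2), dot(w2, w2)
    det = a11 * a22 - a12 * a12              # assumes w1 and w2 are not identical
    alpha1 = (a11 * a22 - a12 * a22) / det
    alpha2 = (a11 * a22 - a12 * a11) / det
    s = alpha1 + alpha2
    alpha1, alpha2 = alpha1 / s, alpha2 / s
    return [alpha1 * x + alpha2 * y for x, y in zip(w1, w2)]

def dominates(a, b):
    """Partial-order relation behind a Hasse diagram: sample a dominates
    sample b if it is at least as safe on every indicator and strictly safer
    on at least one (indicators oriented so that larger = safer)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
```

Pairwise `dominates` checks over the weighted indicator matrix yield the order relation whose transitive reduction is drawn as the Hasse diagram of the mine samples.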
    Study on the Acquisition Methods of Power Batteries for New Energy Car Companies under Green Credit
    LI Jiangxin, LI Jizu, WU Yucheng
    2023, 32(9):  143-149.  DOI: 10.12005/orms.2023.0297
    As a national strategic emerging industry, new energy vehicles have been supported by national subsidy policies. However, although subsidies have continuously expanded the new energy vehicle consumption market, their impact on improving the innovation investment and innovation performance of enterprises is not significant. At the same time, due to imperfect assessment mechanisms and the industry's rapid technological progress, the promotion effect of the “dual credit” policy on innovation in the new energy vehicle industry has not been fully realized. Obviously, the new energy vehicle industry is at a critical stage of transformation from policy-driven to market-driven, enterprises in the industrial chain face severe competition, and capital is the biggest obstacle for enterprises in the development stage.
    With the gradual improvement of the green financial system, green credit will become an essential source of funding for innovation in the new energy vehicle industry chain. However, the role of green credit in the innovative behavior of the new energy vehicle industry has not been as fully valued and implemented as the subsidy policy. In addition, in the early stages of the industry's development, the high entry threshold of the battery industry led most vehicle companies to purchase batteries from battery factories. As battery technology has matured, vehicle companies have become more flexible in their strategies for acquiring batteries. So, under the single modes of outsourcing, equity investment and self-research, and the hybrid modes of outsourcing plus self-research and equity investment plus self-research, what are the R&D innovation effects of vehicle enterprises supported by green credit? Which acquisition mode can better promote R&D and achieve higher returns? And which can better cope with the declining subsidies for new energy vehicles and the problems of the credit market?
    In order to solve the above problems, the article constructs supply chain game models for the different battery acquisition modes of automotive enterprises in non-competitive and competitive situations. With the support of green credit, by comparing the innovation effect, decision values and profit values of each model, the mechanism through which credit points affect new energy vehicle R&D innovation is revealed, and the relationships among the equity financing ratio coefficient, the R&D cost-sharing coefficient and R&D innovation investment are clarified. Suggestions for selecting different modes in the case of reduced subsidies and low credit prices are also proposed. Based on comprehensive consideration of actual data and data obtained from numerical game simulation, the research results show that in competitive situations, the innovation effect and supply chain revenue of the new energy vehicle market both increase significantly. At the same time, cooperation between vehicle companies and battery factories can maximize their own revenue and improve the innovation effect. For automotive companies, the single equity strategy in the absence of competition and the outsourcing strategy under competition can not only maximize their own profits, but also effectively address the current situation of declining subsidies for new energy vehicles and low credit prices. In competitive situations, green credit and subsidy policies not only enhance the competitive advantages of new energy vehicles over traditional fuel vehicles, but also accelerate the updating and iteration of technology by promoting competition within the new energy vehicle market. The support of green credit can not only promote investment in R&D and enhance innovation effects, but also help alleviate the negative effects caused by low credit prices and declining subsidies. 
Therefore, the government should actively promote the development of green finance and give full play to its role in promoting new energy vehicle innovation.
    The deficiency of this paper is that it assumes rival automotive companies develop their own batteries, while in fact they may also adopt outsourcing or other hybrid acquisition methods. In a follow-up study, we will further expand the model to explore how changes in a competing automotive company's battery acquisition method affect the battery acquisition method of another automotive company. In addition, this article only considers green credit financing. In follow-up studies, we will consider financing models such as supply chain finance and commercial credit, and explore how different financing models affect R&D investment through vehicle enterprises' battery acquisition methods, so as to further improve the research.
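The mode comparison at the heart of the paper can be sketched as a simple profit comparison between outsourcing and self-research under a green-credit support share; every parameter below is made up for illustration and stands in for the paper's calibrated game model:

```python
def profit_outsourcing(p, w, demand):
    """Vehicle company's profit when batteries are bought at wholesale price w."""
    return (p - w) * demand

def profit_self_research(p, c, demand, K, s):
    """Profit under self-research: a lower unit battery cost c but an R&D
    capital cost K, of which green-credit support effectively covers share s."""
    return (p - c) * demand - (1 - s) * K

def preferred_mode(p=25.0, w=10.0, c=7.0, demand=1000.0, K=40000.0, s=0.5):
    """Return which acquisition mode earns more under the (made-up) parameters."""
    if profit_self_research(p, c, demand, K, s) > profit_outsourcing(p, w, demand):
        return "self-research"
    return "outsourcing"
```

Varying `s` shows the qualitative point in the abstract: stronger green-credit support lowers the effective R&D burden and can flip the preferred acquisition mode from outsourcing to self-research.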
    Application of Markov Decision Process to the Treatment of Rheumatoid Arthritis
    XU Weifeng, CAO Ping
    2023, 32(9):  150-156.  DOI: 10.12005/orms.2023.0298
    Rheumatoid Arthritis (RA), a highly disabling disease requiring lifelong treatment, not only brings great physical and mental pain, but also imposes a serious economic burden on families and society. Currently, there are about 5 million RA patients in China, but the number of doctors in rheumatology departments is seriously insufficient. Therefore, it is of great practical significance to find the optimal treatment plan for RA from the electronic medical records of RA patients. In addition, during the treatment of RA the patient's health status is not fully observable, so doctors often make multi-stage decisions in an uncertain environment; the Markov Decision Process (MDP) is well suited to modeling such environments, and currently RA patients can only control the development of their condition through lifelong treatment. Therefore, this paper applies an MDP model to the treatment process of RA patients under the infinite-horizon average criterion. The theoretical significance of this paper is to provide a theoretical method and analytical steps based on historical medical data for research on hospital treatment decision-making, and the practical significance is that the treatment policies obtained through the constructed MDP model can serve as a reference for treating RA patients in other hospitals.
    The clinical data used in this paper come from the electronic medical records of patients in the Rheumatology Department of the First Affiliated Hospital of Anhui University of Chinese Medicine. When constructing the MDP model, this paper defines each parameter one by one and infers it from the clinical data in the electronic medical records of RA patients. Firstly, the time points at which doctors give treatment plans are taken as decision epochs. Secondly, the K-modes clustering algorithm with different numbers of clusters is applied to the patients' laboratory indexes as feature variables, ultimately yielding hidden health states that are relatively reasonable and easy to explain. Then, since traditional Chinese medicine is the basic method of clinical treatment in this setting, the traditional Chinese medicine used by a patient between two laboratory index tests is taken as the action of the MDP model. Next, the transition probability is estimated empirically as the ratio of the number of patients who take a given action in a given state and transfer to another state to the total number of patients who take that action in that state. Finally, the improvement in a patient's indexes and the length of hospital stay between two laboratory index tests are taken as the treatment reward and treatment cost, respectively.
    To solve for the optimal policy of the MDP model, this paper uses the relative value iteration algorithm and obtains the corresponding treatment policy, treatment reward, and treatment cost. The experimental results show that the treatment reward obtained by the constructed MDP model is higher than the hospital's, while the treatment cost is lower. Applying the MDP model to the treatment of RA in traditional Chinese medicine therefore has clinical application value.
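Relative value iteration for the average-reward criterion can be sketched on a toy two-state MDP. The "flare/remission" states, actions and rewards below are hypothetical illustrations, not the paper's estimated parameters:

```python
import numpy as np

def relative_value_iteration(P, r, tol=1e-10, max_iter=10000):
    """Relative value iteration for an average-reward (unichain) MDP.

    P: (S, A, S) transition probabilities; r: (S, A) one-step rewards.
    Returns the optimal gain g, bias values h, and a greedy policy.
    """
    S = P.shape[0]
    h = np.zeros(S)
    g = 0.0
    for _ in range(max_iter):
        Q = r + np.einsum('sax,x->sa', P, h)   # one-step lookahead values
        h_new = Q.max(axis=1)
        g = h_new[0]                           # state 0 as reference state
        h_new = h_new - g                      # keep the iterates bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return g, h, Q.argmax(axis=1)

# Toy example: state 0 = "flare", state 1 = "remission";
# action 1 = "treat" moves the patient toward remission.
P = np.array([[[1.0, 0.0], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]])
r = np.array([[0.0, 0.0], [1.0, 1.0]])   # reward 1 while in remission
g, h, policy = relative_value_iteration(P, r)
# g -> 8/9, the long-run average reward of always treating
```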
    In future work, other treatment methods such as fumigation, massage, acupuncture and moxibustion, and Western medicine can be incorporated into the action set of the MDP model, and the treatment process can also be modeled with a partially observable Markov decision process. In addition, we have received much support in completing this paper: We thank Professor XIE Jingui of the Technical University of Munich in Germany for proposing this research question, and the Rheumatology Department of the First Affiliated Hospital of Anhui University of Chinese Medicine for providing the electronic medical records of RA patients.
    Farmland Transfer: Fixed Revenue Mode vs. Revenue Sharing Mode
    GU Bojun, ZHONG Xiaoting, FU Yufang
    2023, 32(9):  157-164.  DOI: 10.12005/orms.2023.0299
    The fourteenth five-year (2021~2025) plan for national economic and social development of the People's Republic of China emphasizes giving priority to the development of agriculture and rural areas and promoting the strategy of rural revitalization in an all-round way. Agricultural modernization is the basis of implementing this strategy; moderate-scale operation of agricultural land is an important path from traditional to modern agriculture; and farmland transfer is, in turn, the premise of moderate-scale operation. Therefore, against the background of limited land and a large population in our country, changing the current concentration of resources at the two ends of urban services and rural agricultural land, and promoting the orderly transfer of agricultural land, is an effective way to advance agricultural modernization and revitalize the countryside. However, the growth rate of the proportion of transferred farmland has slowed in recent years; farmland transfer shows a pattern of "low circulation and high abandonment", and in some areas policy implementation has fallen into wasted resources and inefficiency. There are many reasons for this, but benefit distribution is one of the core problems in farmland transfer. For farmers, agricultural land carries an economic income function, a social security function, and possible value-added income in the future; yet after transfer, farmers often fail to obtain a fair income from it. Therefore, how to optimize the benefit distribution mechanism among the parties to farmland transfer and distribute the transfer income reasonably, so as to increase farmers' income and ease their dependence on the appreciation of agricultural land, has become an urgent problem for promoting orderly farmland transfer and moderate-scale management in our country.
    This paper employs a willingness function to characterize the premise that farmers participate in farmland transfer voluntarily, and builds a Stackelberg game model between the farmers in a region and an agricultural product plantation company. Given a farmland transfer system consisting of the farmers in a region and an agricultural product plantation company, a willingness function of farmland transfer is specified on the premise of voluntary transfer. Two-stage game models of farmland transfer are then established under the fixed revenue mode and the revenue sharing mode, respectively. The optimal decisions and revenues under the two benefit distribution modes are compared, so as to reveal how different modes affect farmland transfer. Finally, the theoretical results are verified and illustrated by numerical examples based on agricultural production data.
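The two-stage models are solved by backward induction. A heavily stylized symbolic sketch (NOT the paper's model): assume the company, as leader, posts a transfer price w, farmers' aggregate land supply follows the hypothetical linear willingness response q(w) = k*w, and the company earns a per-unit agricultural revenue a on transferred land.

```python
import sympy as sp

# All functional forms here are illustrative assumptions.
w, a, k = sp.symbols('w a k', positive=True)
q = k * w                        # hypothetical willingness-to-transfer response
company_profit = (a - w) * q     # leader's profit given the followers' response
# Backward induction: substitute the response, then take the leader's FOC.
w_star = sp.solve(sp.Eq(sp.diff(company_profit, w), 0), w)[0]
# w_star simplifies to a/2
```

The same substitute-then-optimize pattern carries over when a revenue-sharing ratio is added as a second leader variable.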
    The main results suggest that: (1)There is a unique optimal equilibrium strategy for the two-stage game models of farmland transfer, regardless of whether the agricultural revenue is shared after the farmland is transferred. (2)Compared with the fixed revenue mode, the revenue sharing mode not only reduces the transfer price but also increases the willingness to transfer. (3)Relative to the fixed revenue mode, there is a Pareto improvement region under the revenue sharing mode in which both sides of the transfer gain more while the government's welfare remains unchanged. (4)The government subsidy is fully capitalized into the transfer price, which helps raise farmers' expected income; moreover, the government can expand the scope of application of the revenue sharing mode by increasing the subsidy appropriately.
    This paper has guiding significance for promoting orderly farmland transfer and moderate-scale operation of agricultural land. There are three management implications: (1)The revenue sharing mode is a better benefit distribution mechanism for farmland transfer than the fixed revenue mode. (2)The technical efficiency and cost efficiency of planting enterprises engaged in agricultural production are the fundamental guarantee that the benefits to be distributed are realized. (3)A reasonable and effective government subsidy is very important for promoting orderly farmland transfer.
    Government Subsidy Mechanism in Contract-farming Supply Chain Financing under Efforts Effect
    TAN Leping, SONG Ping, YANG Qifeng
    2023, 32(9):  165-172.  DOI: 10.12005/orms.2023.0300
    As the economy enters the new normal and China's supply-side reform deepens, agricultural development has become a focus of reform. For more than a decade, the "Central Document No.1" has centered on the "three rural" issues (agriculture, rural areas, and farmers), promulgating a series of agricultural policies and putting forward the "order agriculture" (contract farming) development model, i.e., the "company + farmers" model. In practice, this model has eased the contradiction of "small production, big market" between agricultural production and sales, and advanced the strategy of producing to order and integrating production with sales. However, as practice deepens, farmers' financial constraints have become increasingly prominent. According to the "three rural" Internet finance blue paper released in August 2016, China's rural financial gap exceeded 3 trillion yuan in 2014; the scale of China's rural Internet finance was 12.5 billion yuan in 2015 and was expected to reach 320 billion yuan by 2020. Agricultural subsidies have thus become an important national policy for protecting and developing agriculture. For the government, however, how to choose the target of subsidies and determine their amount, so as to maximize the incentive for farmers' production inputs and achieve the optimal subsidy effect, is a problem worth studying.
    Based on the above analysis, this paper raises the following questions under the assumption that farmers have financial constraints: (1)What are the optimal decisions and returns of firms and farmers as well as social welfare under different subsidy strategies? (2)What are the effects of government subsidy rates, bank lending rates and output volatility on the optimal decisions and returns of each subject as well as on social welfare? (3)What are the performances of no subsidy, subsidized banks, subsidized firms and subsidized farmers? How should the government choose?
    In order to explore the above questions and provide theoretical references and practical guidance for the formulation of government agricultural subsidy policies, this paper takes as its object of study a two-level contract-farming supply chain composed of a farmer and an agricultural product purchasing company, in which the farmer is financially constrained and agricultural output is stochastic. The purchasing company and a financial institution (bank) form a strategic cooperation to provide financial support to the farmer. To incentivize smooth financing, production and marketing in the agricultural supply chain, the government may subsidize the bank, the company or the farmer; in this context, the optimal decisions, returns and social welfare of the farmer and the company under the four subsidy strategies (including no subsidy) are investigated. Under the assumption that government subsidy support is the same across strategies, the optimal decisions, returns and social welfare under the four strategies are compared and analyzed, and the government's optimal subsidy strategy is explored.
    The study shows that the optimal decisions, returns and social welfare of the contract-farming supply chain under different subsidy strategies are negatively correlated with output volatility and the expected return level of capital, and positively correlated with the consumer sensitivity coefficient and the government subsidy rate; changes in these four parameters do not alter the ranking of the strategies. Regardless of whom the government subsidizes, the optimal decisions, returns and social welfare of the contract-farming supply chain are higher with a subsidy than without one, easing farmers' financing pressure; the farmer's production inputs, the firm's promotional effort, their revenues and social welfare are all largest when the government subsidizes the bank. The expected return level of capital and the consumer sensitivity coefficient affect the government's subsidy rate.
    A Stochastic Hybrid Production Frontier Based on BP Neural Network
    LU Shichang, LIU Yushi, YU Zhilong, LIU Shu
    2023, 32(9):  173-178.  DOI: 10.12005/orms.2023.0301
    As a non-parametric model suitable for multi-input, multi-output settings, Data Envelopment Analysis (DEA) and its extended models avoid reliance on a specific functional form of production efficiency, making them universal and practical in solving real problems. However, DEA computes the production frontier under deterministic assumptions that ignore uncertain factors such as noise, random errors or environmental variables, while actual production data are non-theoretical and often contain errors and other interference. Using DEA to estimate the efficiency of actual production sets therefore biases the production frontier, which is easily influenced by specific data points.
    Machine learning algorithms, which have advantages in handling uncertainty, have become a mainstream approach to data processing. In this paper, we combine DEA with machine learning algorithms to form an extensible integrated model with wider applicability, in order to analyze the multi-feature, uncertain data of actual production sets and reduce reliance on specific parameters or probability distributions of the production efficiency function. As neural network algorithms are well suited to multidimensional, complex data, this paper develops a stochastic hybrid production frontier based on the BP neural network algorithm (BP_SHPF), which can evaluate the efficiency of decision-making units on uncertain frontiers. BP_SHPF consists of four steps: (1)Using DEA to determine the production frontier. (2)Assuming that decision-making units located near the production frontier still have a certain probability of being effective, mixing these units with the effective units to generate a new production frontier. (3)Constructing a three-layer neural network, training it on the dataset, and determining the new position of the production frontier. (4)Estimating each decision-making unit's efficiency against the newly established frontier, where the distance between a unit's actual output and the corresponding output on the frontier represents the unit's inefficient part.
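Step (1), locating the deterministic DEA frontier, can be sketched as an input-oriented CCR linear program per decision-making unit. This is a minimal illustration only; the frontier-mixing and BP-network steps (2)-(4) of BP_SHPF are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """Input-oriented CCR DEA scores for n decision-making units.

    X: (n, m) inputs, Y: (n, p) outputs. A score of 1 marks a unit
    on the deterministic production frontier (step (1) of BP_SHPF).
    """
    n, m = X.shape
    p = Y.shape[1]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                  # minimise theta
        # inputs:  sum_j lam_j * x_j <= theta * x_o
        A_in = np.hstack([-X[o][:, None], X.T])
        # outputs: sum_j lam_j * y_j >= y_o
        A_out = np.hstack([np.zeros((p, 1)), -Y.T])
        b = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[1.0], [2.0], [4.0]])   # single input
Y = np.array([[1.0], [1.0], [2.0]])   # single output
scores = ccr_efficiency(X, Y)
# unit 0 is efficient; units 1 and 2 score 0.5 against the frontier
```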
    This paper uses the Monte Carlo method to verify BP_SHPF on single-input single-output and multi-input multi-output sets. The results show that the MSE and BIAS values of BP_SHPF are lower than those of the control group, decrease as the sample size increases, and increase with the data dimension. Spearman rank correlation analysis shows that the efficiencies of the decision-making units computed by BP_SHPF are consistent with those of the original DEA, indicating that the efficiency rankings of BP_SHPF are reliable. Additionally, this paper uses BP_SHPF to evaluate the efficiency of 107 Chinese rural commercial banks between 2014 and 2018, and finds that rural commercial banks in the eastern region have the best average efficiency during this period. The standard deviation of the efficiency values obtained by BP_SHPF is higher than that obtained by DEA within the same region, highlighting greater variation in efficiency values and better reflecting the efficiency gap between rural commercial banks.
    In all, BP_SHPF not only avoids the limitations of non-machine-learning methods, but also preserves the positional relationship between decision-making units and the production frontier while correcting it. It yields more distinguishable and reasonable efficiency values and rankings when evaluating the efficiency of actual production sets.
    Research on Deposit Insurance Pricing Based on the Loss Distribution of Bank's Unit Assets
    ZHANG Jinbao
    2023, 32(9):  179-185.  DOI: 10.12005/orms.2023.0302
    China officially implemented the deposit insurance system on May 1, 2015. The Deposit Insurance Regulations promulgated by the State Council stipulate that the deposit insurance rate consists of two components: The base rate and the risk differential rate. However, the rate system implemented at present does not yet truly reflect differences in risk across banks, and existing deposit insurance pricing methodologies still cannot provide sufficient theoretical support for determining reasonable risk-based rates. The pricing of deposit insurance usually follows one of two paradigms: One approach prices deposit insurance based on expected losses, which is limited to the small number of commercial banks with external credit ratings. The other uses option theory, which is typically applicable only to listed banks. However, the majority of the 3,996 insured banks in China are unlisted small and medium-sized banks without external credit ratings, so neither pricing method can determine their deposit insurance rates, and existing empirical studies have not covered them. This paper aims to establish a deposit insurance pricing methodology better suited to China's national conditions, helping to calculate premiums for insured banks, especially the numerous small and medium-sized ones. This will provide support for the top-level design of the deposit insurance premium rate system.
    Firstly, this paper extends the expected-loss pricing method based on the relationship between bank loss distribution, capital allocation, and deposit insurance pricing, and provides a pricing formula that takes these factors into account. On this basis, the paper proposes a new method of measuring the deposit insurance premium based on the loss distribution of a bank's unit assets, defined by dividing the loss distribution of the bank's entire assets by their current total amount. This significantly reduces the cost of measuring deposit insurance premiums, as it requires only a statistical sample of the bank's assets rather than the loss distribution of every asset. Secondly, the paper treats the sampled bank assets as a loan portfolio. With reference to the CreditRisk+ model, a well-known approach to credit risk modeling, it constructs the loss distribution of the loan portfolio and solves it using the saddle point method. In doing so, it accounts for the loss given default of the loan portfolio, yielding a more accurate measurement of the unit-asset loss distribution than the CreditRisk+ model. The methodology enriches deposit insurance pricing by incorporating research outcomes from the field of credit risk modeling.
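The basic CreditRisk+ building block — a compound-Poisson portfolio loss distribution over integer exposure bands — can be sketched with the Panjer recursion. This toy version assumes independent defaults and no sector factors, whereas the paper solves the distribution by the saddle point method and incorporates loss given default:

```python
import math
from collections import defaultdict

def creditriskplus_loss_pmf(exposures, pds, n_max):
    """Compound-Poisson loss pmf in the spirit of CreditRisk+.

    exposures: integer exposures in units of a chosen band size;
    pds: default probabilities. Returns P(L = 0), ..., P(L = n_max).
    """
    lam = sum(pds)                     # expected number of defaults
    sev = defaultdict(float)           # severity pmf over loss units
    for v, p in zip(exposures, pds):
        sev[v] += p / lam
    pmf = [math.exp(-lam)]             # P(L = 0)
    for n in range(1, n_max + 1):      # Panjer recursion for Poisson counts
        s = sum(j * sev.get(j, 0.0) * pmf[n - j] for j in range(1, n + 1))
        pmf.append(lam * s / n)
    return pmf

# One loan of 1 exposure unit with PD 0.1 reduces to a Poisson(0.1) check.
pmf = creditriskplus_loss_pmf([1], [0.1], n_max=5)
```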
    The case study shows that: (1)The deposit insurance premium is significantly affected by the loss given default of commercial banks' assets, implying that improving the quality of commercial banks' assets is crucial to reducing the deposit insurance rate. (2)The data used in this article's method are readily available. The data of banks' assets, liabilities, and risk capital are all required to be publicly disclosed by regulation. The National Administration of Financial Regulation (NAFR) usually requires commercial banks to provide data for risk measurement and simulation of the loan portfolio, so sample data for the loan portfolio can be obtained from the supervisory authority. (3)The method does not depend on external rating data and market price data of bank stocks, and is applicable to all banks, so it has broad application prospects.
    Sparse Components in Macro Fundamentals and the Prediction of Stock Market Volatility
    LI Bolong
    2023, 32(9):  186-192.  DOI: 10.12005/orms.2023.0303
    Financial volatility is one of the most fundamental issues in both academic research and market practice, providing valuable reference for participants in investing, hedging and arbitraging. For all its importance, understanding its dynamics is challenging, especially after the 2008 financial crisis, which changed the public's perception of and expectations about financial markets substantially and still influences today's global economy.
    The development of data science in recent years offers methods for analyzing asset volatility in complex situations. In this paper, we investigate the classic volatility forecasting problem in a data-rich environment, focusing on the roles of sparse components in macro fundamentals in determining future stock market volatility. The analysis not only shows the relative performance of predictors in different sparse forms, but also provides access to the dynamics of stock volatility with respect to the macroeconomic environment.
    Formally, the “sparse components” in this paper refer to subsets of predictors extracted from a large set of macro variables. Two kinds of sparse components are considered, depending on the extraction method: The “sparse characteristics” are predictors selected through linear shrinkage techniques, whereas the “sparse factors” are latent factors extracted from the macro variables using principal component analysis. To select the sparse characteristics, regularized regressions with the smoothly clipped absolute deviation (SCAD) penalization are employed; robustness checks based on the least absolute shrinkage and selection operator (LASSO) are also presented. These regularization terms retain only the most relevant predictors in the predictive regressions while eliminating irrelevant variables. Though the latent factors are linear combinations of all the macro variables, they summarize a sufficiently large proportion of the variation in these variables and thus enter the regressions as sparse predictors. These sparse components reflect different types of dimension reduction, and the related results shed light on how macro fundamentals affect stock market volatility.
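The two extraction routes can be sketched on synthetic data. LASSO stands in for SCAD here (it is the paper's robustness check; scikit-learn has no SCAD penalty), and all variable counts are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))            # 20 synthetic "macro variables"
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(200)

Xs = StandardScaler().fit_transform(X)

# "Sparse characteristics": L1 shrinkage keeps only relevant predictors.
lasso = LassoCV(cv=5).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)        # indices of surviving predictors

# "Sparse factors": a few principal components summarising all variables.
factors = PCA(n_components=3).fit_transform(Xs)
```

Either `Xs[:, selected]` or `factors` (plus lags of volatility) would then enter the predictive regression.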
    Following previous research, the problem is studied in linear predictive regressions, with the sparse components as predictors and realized volatility as the proxy for market volatility. Lags of volatility are also included as predictors, and the sparse components consider not only the first but also the second moment of the variables. With a rolling window scheme, the regressions are estimated and the forecasts pushed forward; this dynamic forecasting process allows us to observe the time-varying impact of macro fundamentals on stock market volatility. The relative performance of the two kinds of sparse components is compared according to the mean squared prediction error (MSE).
    The macro variables in this paper reflect different aspects of China's economic environment: Financial market variables such as the average price-earnings ratio of stocks traded on the Shanghai Exchange (PE) and the Fama-French factors; macroeconomic variables such as the growth rate of the consumer price index (CPI) and the growth rate of industrial production (IP); global market variables such as the growth rate of the real effective exchange rate of CNY (FXI) and the growth rate of crude oil prices (OIL); and policy uncertainty indexes for mainland China (CMPU), the United States (USPU) and the world (GPU). The monthly realized volatility is calculated from daily returns of the Shanghai Securities Composite Index. The data are from the CSMAR database, the RESSET database, the CEInet Statistics database, the database of the People's Bank of China, Yahoo Finance, the database of the International Monetary Fund (IMF) and the economic policy uncertainty index website.
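The volatility proxy built from daily returns can be sketched as monthly realised variance (its square root is the realised volatility); the returns and month labels below are made up:

```python
import numpy as np

def monthly_realized_variance(daily_returns, month_ids):
    """Realised variance per month: the sum of squared daily returns."""
    r = np.asarray(daily_returns)
    m = np.asarray(month_ids)
    return {mo: float(np.sum(r[m == mo] ** 2)) for mo in np.unique(m)}

rv = monthly_realized_variance([0.01, -0.02, 0.03],
                               ["2020-01", "2020-01", "2020-02"])
# rv approximately {"2020-01": 0.0005, "2020-02": 0.0009}
```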
    The results show that the forecasting accuracy of sparse factors is superior across predicting horizons. The sparse characteristic equations contain more predictors on average, which increases the forecasting variance and leads to their inferior performance. Forecast accuracy is strongly time-varying and almost negatively correlated with volatility, indicating that the turmoil of China's stock market is, to some degree, independent of information in the fundamentals. The mode of forecasting also differs between characteristics and factors. While both kinds of predictive regressions include lags of volatility, the relative importance of these lags to the sparse components differs: In the characteristic equations, macro variables, especially the price-earnings ratio and the housing selling area, show influential predictive power; in the factor equations, autoregressive terms of volatility are the prominent predictors, with the factors mainly serving as supplements. These outcomes reveal that single macro variables are more closely related to volatility in local time periods, but that this relationship is strongly time-varying and unstable. In the sense of prediction, the overall movements of macro fundamentals are more relevant to stock market volatility than single variables, since they help produce better forecasting performance. The conclusions of the study can inform investment strategy making and financial risk management.
    Analysis of the Influence of Investor Rational Changes on Stock Market Fluctuation
    LI Bohua, ZHAO Baofu, JIA Kaiwei, WU Jinjin
    2023, 32(9):  193-199.  DOI: 10.12005/orms.2023.0304
    In recent years, the volatility of China's stock market has intensified. Stock price fluctuations have not conformed to the traditional definition of a random walk but present irrational states, frequently exhibiting anomalies inconsistent with the assumptions of traditional financial theory, such as the "thousands of stocks hitting limit down" in 2015, the circuit-breaker ("fuse") episode in 2016, and the recent fluctuations driven by the epidemic, all of which have attracted great attention from society. Investors serve as carriers and suppliers of heterogeneous beliefs and investor sentiment, playing a role in information transmission and circulation. Heterogeneous beliefs reflect investors' judgments about future expectations, while investor sentiment reflects investors' psychology, manifested as sensitivity to information. Therefore, this article re-evaluates stock market volatility from the standpoint of systems theory and applies complex systems theory to analyze, from a novel perspective, the impact of changes in investor rationality on stock market volatility.
    The stock market is studied as an open system, and the interactive impact between heterogeneous beliefs and investor sentiment, under the joint effect of external input information and changes in internal factors, is analyzed. Based on individual differences, three types of investors are distinguished: Irrational investors (completely driven by emotions), boundedly rational investors (influenced by the other two types), and rational investors. Investors trade according to expected utility, which affects prices, and price changes then feed back as initial conditions into the next round. The evolution of the stock market system is influenced by many factors with complex interactions. This article treats the entire stock market as a dynamic system and establishes a stochastic dynamical system model with stock prices as the system's explicit feature. The system is divided into three levels by complexity, with the three types of investors as the point of entry and investors' differing degrees of rationality as the defining characteristic, in order to simulate the evolution of investors in the market and analyze mutations of its elements and the system's evolutionary equilibrium. Based on Bayesian rules and the asymptotic response to information flow and noise, the three types of investors transform into one another, constituting three main feedback loops. Based on the randomness of investors entering and exiting the market, a stochastic dynamical system model is built using the method of stochastic system dynamics. It analyzes the relevant influencing factors, simulates the dynamic changes of the three kinds of investors in the market, and dynamically adjusts their numbers, in order to reveal the influence of the structure of investor rationality on stock price fluctuations. The results show that changes in the rational structure of investors form a dynamic equilibrium process; investor rationality is not discrete but continuous, with mutual transformation. The existence of irrational investors in the market is also necessary: They enrich market liquidity, enrich the investment levels of the stock market, and enhance market activity. The dynamic evolution of the degree of rationality of the three types of investors in stock market transactions also constitutes an important driving force for the development of the stock market.
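The mutual-transformation idea can be sketched as a simple deterministic population iteration. The switching probabilities below are hypothetical placeholders, not the paper's estimated dynamics:

```python
import numpy as np

# Hypothetical monthly switching probabilities between investor types
# (irrational, boundedly rational, rational); each row sums to 1.
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.20, 0.75]])

x = np.array([0.6, 0.3, 0.1])   # initial population shares
for _ in range(500):            # iterate x_{t+1} = x_t T to equilibrium
    x = x @ T
# x approximates the stationary mix; all three shares stay positive,
# echoing the finding that irrational investors persist in the market.
```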
    Future work will study the time-varying impact of changes in investors' rational structure on stock market volatility, and conduct a series of empirical studies to reveal the more complex generating mechanisms of these influence paths.
    Multi-scale Dynamic Hedging of CSI 300 Index Futures Based on EMD-DCC-GARCH
    WANG Jia, HE Liuyang, WANG Xu
    2023, 32(9):  200-207.  DOI: 10.12005/orms.2023.0305
    Comprehensively integrating market information to estimate the optimal hedging ratio has always been the key to price risk management. In recent years, with the development of mathematics and econometrics, scholars have proposed many hedging models. However, most existing studies ignore the impact of different time scales on hedging. In practice, market participants have different hedging horizons, and financial time series have both time and frequency characteristics. The traditional hedging model is constructed only from the time-domain perspective and can hardly extract the multi-scale information in the data. This paper uses the Empirical Mode Decomposition (EMD) method to study the multi-scale hedging problem of CSI 300 index futures, helping investors account for the hedging horizon and choose appropriate hedging models and risk measures to estimate the optimal hedging ratio. Meanwhile, the research results offer guidance to policy makers and investors in making full use of the hedging function of futures markets.
    This work selects trading data on CSI 300 index spots and futures from the Wind database. The EMD method is used to decompose the CSI 300 spot and futures returns into short-term, medium-term, and long-term time scales. The return means of spots and futures at different time scales are similar to those of the original returns; the difference among them mainly lies in volatility. The volatility information extracted at the short-term scale is the most important, while the long-term scale represents the long-term trend of the market and contributes little to volatility. Furthermore, combined with the DCC-GARCH model, the multi-scale hedging problem of CSI 300 index futures is studied under the minimum-variance and minimum-CVaR hedging frameworks, respectively. The hedging ratios of the dynamic DCC-GARCH model are estimated, and its hedging performance is compared with that of traditional static models, i.e., simple minimum variance, simple minimum CVaR, ordinary least squares (OLS) and vector autoregression (VAR). The results show that the optimal hedging ratios at the original and short-term scales follow similar trends, and the ratios gradually decrease as the time scale increases. In terms of hedging performance, the DCC-GARCH model outperforms the static models at the original and short-term scales, significantly reducing portfolio VaR and increasing the risk reduction ratio. However, neither the DCC-GARCH model nor the static VAR model is suitable for the medium- and long-term scales. For DCC-GARCH, hedging under the minimum-CVaR criterion performs better than under the minimum-variance criterion.
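The simple minimum-variance benchmark above reduces to a covariance ratio; DCC-GARCH replaces these unconditional moments with time-varying conditional ones, and applying the estimator to each EMD component gives scale-specific ratios. A minimal sketch with made-up return series:

```python
import numpy as np

def min_variance_hedge_ratio(spot_ret, fut_ret):
    """Static minimum-variance hedge ratio h* = Cov(s, f) / Var(f)."""
    s = np.asarray(spot_ret)
    f = np.asarray(fut_ret)
    return np.cov(s, f, ddof=1)[0, 1] / np.var(f, ddof=1)

f = np.array([0.02, -0.01, 0.03, -0.02])   # futures returns (illustrative)
s = 0.5 * f                                # spot moves half as much here
h = min_variance_hedge_ratio(s, f)         # -> 0.5
```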
    In our paper, it is assumed that the estimated parameters of the hedging models are known. In reality, the returns of CSI 300 spots and futures are stochastic and uncertain; both the means and the volatilities of returns carry estimation risk. Studying hedging between spots and futures with uncertain model parameters is closer to the real investment environment and more widely applicable. Further research can introduce Bayesian methods into the hedging problem and construct Bayesian hedging strategies under parameter uncertainty. Another interesting direction is to consider investors' ambiguity aversion from the perspective of behavioral finance, and study optimal hedging strategies under different degrees of ambiguity aversion.
    Economic Policy Uncertainty and Short-term Interest Rate Volatility: An Empirical Study Based on BHK-L-MIDAS Model
    WU Xinyu, YIN Xuebao
    2023, 32(9):  208-214.  DOI: 10.12005/orms.2023.0306
    The short-term interest rate is a key variable in the pricing of fixed-income securities and derivatives. Meanwhile, the volatility of the short-term interest rate has an important impact on investors' asset allocation. The short-term interest rate displays mean reversion, and its empirical distribution exhibits stylized facts such as leptokurtosis and fat tails. In addition, the short-term interest rate volatility changes over time and responds asymmetrically to good and bad news. If these stylized facts are ignored, both the optimal asset allocation for investors and the accuracy of derivatives valuation may be affected. As a consequence, developing a rational model to capture and forecast the short-term interest rate volatility is of great importance.
    This paper contributes to the literature on modelling short-term interest rate volatility in three aspects. Firstly, we develop a new model, namely the BHK-L-MIDAS model, which is able to capture the leverage effect (volatility asymmetry) as well as the impact of economic policy uncertainty (EPU) on short-term interest rate volatility. Secondly, we employ the China EPU index as a proxy for EPU and incorporate it into the BHK-L-MIDAS model to examine the link between China's EPU and short-term interest rate volatility. By doing so, this study provides a new perspective for modelling and forecasting short-term interest rate volatility. Meanwhile, it highlights the importance of incorporating the leverage effect by empirically comparing the performance of the BHK-L-MIDAS and BHK-MIDAS models. Finally, based on various loss functions and the model confidence set (MCS) test, this paper examines the predictive ability of the BHK-L-MIDAS model for short-term interest rate volatility. Furthermore, a VaR analysis is conducted to assess the economic value of the BHK-L-MIDAS model for short-term interest rate market risk measurement.
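MIDAS-family models typically filter a low-frequency driver (here, EPU) through a one-parameter Beta lag polynomial to form a long-run volatility component. The sketch below shows only that generic weighting scheme; the BHK-L-MIDAS variance dynamics and leverage term are not reproduced, and all parameter values are hypothetical:

```python
import numpy as np

def beta_weights(K, omega):
    """One-parameter Beta lag polynomial, the usual MIDAS weighting scheme:
    w_k proportional to (1 - k/(K+1))**(omega - 1); omega > 1 favors recent lags,
    and the weights sum to one."""
    k = np.arange(1, K + 1)
    raw = (1.0 - k / (K + 1)) ** (omega - 1.0)
    return raw / raw.sum()

def midas_long_run(m, theta, omega, epu_lags):
    """Long-run volatility component driven by K lagged EPU observations
    (epu_lags[0] is the most recent lag); m and theta are illustrative parameters."""
    w = beta_weights(len(epu_lags), omega)
    return m + theta * np.dot(w, epu_lags)

epu = [210.0, 180.0, 150.0, 140.0, 130.0, 120.0]   # hypothetical monthly EPU lags
print(round(midas_long_run(m=0.1, theta=0.002, omega=3.0, epu_lags=epu), 4))
```

A negative estimated theta would correspond to the significantly negative EPU impact reported in the empirical results below.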
    An empirical application to Shanghai Interbank Offered Rate (SHIBOR) and Chinese EPU index data based on the BHK-L-MIDAS model shows that the short-term interest rate exhibits a reverse leverage effect, and that EPU has a significantly negative impact on short-term interest rate volatility. The BHK-L-MIDAS model outperforms the BHK and BHK-MIDAS models in in-sample fitting. Furthermore, out-of-sample analysis based on three loss functions and the MCS test suggests that the BHK-L-MIDAS model yields more accurate out-of-sample volatility forecasts than the BHK and BHK-MIDAS models. In particular, the superior forecasting ability of the BHK-L-MIDAS model is robust to different forecasting windows. Finally, an empirical application to VaR estimation confirms the economic value of the BHK-L-MIDAS model for short-term interest rate market risk measurement.
    Our empirical findings provide useful insights for researchers and financial practitioners. Our study matters to researchers who are trying to understand the dynamic nature of short-term interest rate volatility, and it is also of great significance for financial practitioners concerned with applications such as interest rate risk management. It is worth pointing out that our work could be extended in several directions. For example, the BHK-L-MIDAS model can be extended to incorporate jump dynamics in order to capture time-varying jumps in the short-term interest rate. In addition, combining the BHK-L-MIDAS model with the Copula or VAR approach to study the co-movement (spillover effect) of short-term interest rate volatilities between two countries is also worth future research.
    Hypernetwork-based Tags Similarity Measure for Social Tagging Systems
    PAN Xuwei, ZENG Xuemei, LI Tao
    2023, 32(9):  215-221.  DOI: 10.12005/orms.2023.0307
    Social tags express users' preferences in a user-defined way to describe online resources and build connections between users and resources. As a valuable resource, social tags have been exploited in link prediction and personalized recommendation to alleviate information overload in the era of big data. Evaluating the similarity of social tags is the foundational issue of tag-based link prediction and personalized recommendation. Current tag similarity methods, based on representations such as the vector space matrix, bipartite graph, tripartite graph, and tag co-occurrence network, split the internal user-resource-tag relationship of social tagging systems during their transformation processes, resulting in a loss of semantic association among tags to some extent. To overcome this problem, this paper introduces the hyper-network model, which can systematically describe the internal ternary user-resource-tag relationship, and proposes an approach to measuring social tag similarity based on the hyper-network.
    The proposed approach focuses on users' social tagging behaviors to build a social tags hyper-network in which each tagging action is expressed as a hyper-edge and tags are expressed as nodes. The constructed hyper-network links users, resources, and tags in tagging activities by hyper-edges, so that it more accurately depicts users' tagging behavior and preserves the intrinsic semantic association information of the user-resource-tag ternary relationship. Combining the topological structure of the social tags hyper-network with the two fundamentals of proximity relation rules and ternary closure for describing the degree of association and similarity between objects, two basic principles are established for measuring social tag similarity on the constructed hyper-network. One is the principle of common hyper-edges: The more common hyper-edges two tag nodes share, the more similar the two tag nodes are. The other is the principle of the number of nodes in one hyper-edge: The fewer tag nodes a hyper-edge contains, the more similar those tag nodes are. Based on these two principles, a series of social tag similarity measures is established by following the logic of constructing similarity indices between nodes in general complex networks. An experimental study is conducted to verify the constructed similarity measures on data sets from two representative social tagging applications, Delicious and Last.fm, using the AUC and Precision evaluation methods of link prediction.
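The two principles can be illustrated with a toy similarity index that sums 1/(|e| - 1) over the hyper-edges two tags share, so that many shared, small hyper-edges imply high similarity. This is one possible instantiation for illustration, not the paper's exact set of measures:

```python
from collections import defaultdict

def build_incidence(hyperedges):
    """Map each tag to the set of hyper-edge ids it appears in; each hyper-edge
    is the tag set of one tagging action (one user annotating one resource)."""
    inc = defaultdict(set)
    for eid, tags in enumerate(hyperedges):
        for t in tags:
            inc[t].add(eid)
    return inc

def tag_similarity(t1, t2, inc, hyperedges):
    """Sum 1/(|e| - 1) over the hyper-edges shared by t1 and t2:
    many shared, small hyper-edges imply high similarity."""
    common = inc[t1] & inc[t2]
    return sum(1.0 / (len(hyperedges[e]) - 1) for e in common if len(hyperedges[e]) > 1)

edges = [{"jazz", "music"}, {"jazz", "music", "live"}, {"python", "code"}]
inc = build_incidence(edges)
print(tag_similarity("jazz", "music", inc, edges))   # 1/1 + 1/2 = 1.5
```

Here the shared two-tag hyper-edge contributes 1 and the shared three-tag hyper-edge contributes 1/2, so smaller hyper-edges weigh more, as the second principle requires.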
    In terms of the AUC and Precision criteria in link prediction, the experimental results show that the tag similarity measures constructed on the pure common hyper-edge principle, and on the combination of the number-of-nodes and common hyper-edge principles, perform best and are clearly better than the tag similarity index constructed on the tag co-occurrence network. In particular, the distinct improvement in the Top-N Precision evaluation of link prediction has positive significance for improving the accuracy of personalized recommendation. At the same time, the experimental results also show that adding different normalizations of node hyper-degree to the common hyper-edge measures has a certain negative effect on the accuracy of tag similarity measurement.
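The AUC criterion used above can be sketched as a sampling comparison: the probability that a true ("missing") link receives a higher score than a randomly drawn nonexistent link, with ties counted as 0.5. The toy scores below are purely illustrative:

```python
import random

def auc_link_prediction(scores_missing, scores_nonexistent, n_samples=10000, seed=0):
    """AUC by sampling: probability that a true ('missing') link outscores a
    randomly drawn nonexistent link, counting ties as 0.5."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(n_samples):
        m = rng.choice(scores_missing)
        n = rng.choice(scores_nonexistent)
        hits += 1.0 if m > n else (0.5 if m == n else 0.0)
    return hits / n_samples

# Exact expectation for these toy scores is 7.5/9, i.e. about 0.833.
print(round(auc_link_prediction([0.9, 0.8, 0.7], [0.1, 0.2, 0.8]), 3))
```

An AUC of 0.5 corresponds to random scoring, so a measure is informative only to the extent that its AUC exceeds 0.5.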
    The social tag similarity measures in our proposed hyper-network-based approach are built mainly by combining two basic structural features of networks: Common hyper-edges of nodes and the number of nodes in one common hyper-edge. However, the situations and elements affecting tag semantic similarity are complicated. For example, the “weak connection effect” in networks may affect the prediction performance of a method that reflects strong connections through common hyper-edges, which is worth further exploration. In addition, social tags hyper-networks have many other topological features, such as the distance and path between nodes. Further work can explore the relationship between such topological features and the similarity of tag nodes, so as to build more effective social tag similarity measures.
    Management Science
    Research on Improvement of Residual Income Model Based on Enterprise Life Cycle
    WANG Lixia, TANG Yilan, TANG Chao
    2023, 32(9):  222-227.  DOI: 10.12005/orms.2023.0308
    As an important enterprise valuation tool, the residual income model has been widely recognized by financial theorists and practitioners. The classic Ohlson series of residual income models is generally used in both the theoretical and practical circles of finance and financial accounting, and many follow-up researchers have extended the Ohlson series residual income model from various perspectives. However, current research on the residual income model rests on the assumptions of continuous operation and perpetual profit, which makes the research face the obstacle of the continuous operation assumption; these studies also assume that the profit level of the enterprise, or its abnormal return, is constant. Such assumptions are contrary to enterprise life cycle theory. Based on the general residual income model and the theory of the enterprise life cycle, this article states that an enterprise has different levels of income at different stages of its life cycle: In the growth stage, the level of income continues to rise; in the maturity stage, the income level is relatively stable, fluctuating within the industry average income range; in the recession stage, the income level continues to decline. We construct a Life Cycle Residual Income Model (LCRIM) in theory, which is more in line with a company's actual situation. Furthermore, numerical tests are conducted on the constructed LCRIM to verify the effectiveness of the model.
    On the basis of the above extension of the general residual income model, this paper also proposes a new assumption about the return on equity during the maturity period and expands the research on the time point that maximizes the company's value. Under the initial design of the LCRIM, the net profit of the enterprise is assumed to remain unchanged in the mature stage; but as net assets gradually increase, the return on equity in the mature stage would then decrease year by year, which is not reasonable. Therefore, this paper proposes a new assumption that the return on equity remains unchanged during the mature stage, and further derives an improved residual income model. In addition, because the value of a company continues to decline over time once it reaches the recession stage, the derivative of the improved model can further determine the time point that maximizes the company's value, i.e., the optimal operating time.
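The stage-wise valuation logic can be sketched as follows. The sketch assumes full earnings payout (so book value stays constant) and hypothetical stage lengths and rates; it truncates operation once residual income turns negative, which mimics the optimal-operating-time idea, whereas the paper derives that point analytically:

```python
def life_cycle_roe(roe0, g, n_grow, n_mature, d, n_decline):
    """Piecewise ROE path over the life cycle: grows at rate g, stays flat in
    maturity (mirroring the improved constant-ROE assumption), then decays at rate d."""
    path = [roe0 * (1 + g) ** t for t in range(1, n_grow + 1)]
    peak = path[-1]
    path += [peak] * n_mature
    path += [peak * (1 - d) ** t for t in range(1, n_decline + 1)]
    return path

def lcrim_value(b0, roe_path, r):
    """Equity value = book value + PV of residual income (ROE_t - r) * B.
    Full payout keeps book value constant at b0 (a simplifying assumption)."""
    return b0 + sum((roe - r) * b0 / (1 + r) ** t
                    for t, roe in enumerate(roe_path, start=1))

r = 0.10                                            # cost of equity (hypothetical)
roe = life_cycle_roe(0.10, 0.08, 5, 5, 0.15, 10)    # 5y growth, 5y maturity, 10y decline
# Stop operating once residual income (ROE - r) turns negative.
t_star = max(t for t, x in enumerate(roe, 1) if x >= r)
print(t_star, round(lcrim_value(100.0, roe[:t_star], r), 2))
```

Truncating at t_star yields a higher value than operating through the full decline stage, since all later residual income terms are negative, which is the intuition behind the optimal operating time.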
    In order to evaluate the accuracy of the proposed model, this paper finally analyzes the impact of different factors on enterprise value through numerical experiments. By changing the length and the growth and decline rates of the different life cycle stages in which the enterprise finds itself, it can be found that, ceteris paribus, the equity value of the enterprise increases with the length of the growth period, the growth rate during that period, and the length of the maturity stage, and decreases with the decline rate during the recession stage.
    On the one hand, this study considers the situation of an enterprise's non-continuous operation and thus extends the theory of the residual income model; on the other hand, it introduces enterprise life cycle theory to account for the different profitability of enterprises at different stages, improving the practicality of the model. In addition, the LCRIM constructed in this paper can directly calculate the maximum value of the enterprise and its optimal operating time, providing a basis for scientific decision-making and planning. Therefore, the newly constructed residual income model based on the enterprise life cycle has good theoretical and practical value, and can serve as an effective valuation model and decision-making basis for enterprise equity value. At the same time, this study supplements not only existing methods for evaluating the value of corporate equity but also the research literature on corporate investment decision-making.
    Research on PPP Project Management Performance Improvement through Management Maturity Evaluation
    WANG Nannan, LIU Yunfei, HU Chenxu
    2023, 32(9):  228-233.  DOI: 10.12005/orms.2023.0309
    Government and social capital cooperation can effectively improve the quality of public goods and enhance the efficiency of public services. As a result, the PPP model has been widely applied in numerous fields. PPP projects in our country are of significant scale. However, during the implementation of the PPP model, several issues have been identified, including non-compliant PPP projects, difficulties in project financing, implementation challenges, inadequate regulation of PPP projects, and even fraudulent practices. The implementation of the PPP model in our country has not achieved the expected results. Although existing research has made certain achievements, there is still a lack of exploration from a process perspective and the development of an assessment system with the aim of enhancing PPP project management capabilities. Assessing the performance of PPP project management contributes to identifying management deficiencies and improving the project management capabilities of the management team. Moreover, the evaluation of management performance needs to consider both the management process and its outcomes. However, existing research on PPP performance management is mostly outcome-oriented and overlooks the evaluation of the management process.
    This study is based on OPM3 (Organizational Project Management Maturity Model) and aims to establish a performance evaluation model for PPP project management. It comprehensively considers both the process and outcomes of PPP project management and establishes a PPP project indicator evaluation system that meets the requirements. Additionally, drawing on organizational learning theory, the study utilizes the management maturity model and the WSR method to construct a PPP project management performance model, incorporating four physical-level indicators, eleven logical-level indicators, and three human-level indicators. The performance evaluation indicator system for PPP project management is then developed. The expert group scoring method and the matter-element extension method are employed to calculate performance levels, and the two-dimensional quadrant method is used to determine performance improvement paths, thereby enriching the research on PPP project management. The effectiveness of the model's functionality is verified through PPP project case studies, and relevant strategies for enhancing PPP project management performance are provided. From the perspective of regulatory authorities, this research presents a novel approach to evaluating PPP project management performance from a process perspective, offering new insights for related studies. The integration of capability enhancement and performance management broadens the application boundary of the management maturity model in the field of project management. The research results contribute to better government oversight of PPP projects, the establishment of best practices, and enable the private sector to continuously improve their overall project management capabilities through performance evaluation results.
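For the performance-level calculation, a common textbook form of the matter-element extension correlation degree is sketched below; the grade intervals, weights, and indicator scores are hypothetical, and the paper's exact formulation may differ:

```python
def rho(x, a, b):
    """Extension distance from point x to the interval [a, b]."""
    return abs(x - (a + b) / 2.0) - (b - a) / 2.0

def correlation(x, seg, joint):
    """Correlation degree of value x with one grade interval seg, given the
    joint interval spanning all grades (one common textbook formulation)."""
    a, b = seg
    d0 = rho(x, a, b)
    if a <= x <= b:
        return -d0 / (b - a)
    d = rho(x, *joint)
    return d0 / (d - d0) if d != d0 else -d0 / (b - a)

def grade(scores, weights, intervals, joint=(0, 100)):
    """Grade index with the largest weighted correlation degree."""
    totals = [sum(w * correlation(x, seg, joint) for x, w in zip(scores, weights))
              for seg in intervals]
    return max(range(len(totals)), key=totals.__getitem__)

# Hypothetical expert scores, weights, and grade intervals (poor .. excellent).
levels = [(0, 60), (60, 75), (75, 90), (90, 100)]
print(grade([82, 78, 88], [0.4, 0.35, 0.25], levels))   # -> index 2 ("good")
```

The resulting performance level, together with per-indicator correlation degrees, is the kind of input the two-dimensional quadrant method could then use to locate improvement paths.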
    Building on these results, this study proposes strategies for enhancing project management performance from the physical, logical, and human perspectives, and offers insights and policy recommendations for the high-quality development of PPP projects in China. (1) The government should focus on enhancing the management capabilities of the private sector, improve the performance assessment mechanism, and strive for standardization of the performance assessment system. (2) PPP projects place higher requirements on the overall project management capabilities of general contractors, so it is essential to enhance their capabilities throughout the entire project lifecycle. (3) The results of PPP management performance evaluation based on the management maturity model have two practical applications: Firstly, they can identify mature management practices within projects, establishing best practices for the industry and promoting benchmark management in the PPP market; secondly, they can incentivize the evaluated private sector entities to improve their own management capabilities, thereby enhancing the overall quality and sustainable development of the PPP industry.
    Regional Differences of Entrepreneurship from the Perspective of Business Environment
    XUE Jun, WEI Nannan
    2023, 32(9):  234-239.  DOI: 10.12005/orms.2023.0310
    China has a vast territory and a large population, with differences in ecological environment, administrative management, and economic levels among regions. In response to the demands of high-quality development in China's new era, the “Regulation on Optimizing the Business Environment” was officially implemented on January 1, 2020. This is the first administrative regulation specifically formulated for the business environment in China. It not only stipulates the roles and responsibilities that the government and stakeholders should undertake in the construction of the business environment but also emphasizes the originality and differentiation of exploring the business environment within the legal framework. Therefore, based on the demand for differentiated business environments in China's new era, this paper actively explores new approaches to creating high-quality business environments according to local conditions. Starting from the perspective of entrepreneurs, this paper analyzes in depth the differences in entrepreneurial spirit among entrepreneurs in different regions of mainland China. By extracting these differentiated characteristics, it hopes to discover the shortcomings in the development of regional business environments and to provide important references for nurturing entrepreneurial spirit in various regions, optimizing the business environment, and enhancing economic development. In short, this study explores the regional characteristics of entrepreneurial spirit from the perspective of the business environment.
    Firstly, the method of literature research is adopted to construct a quantitative measurement index system for five dimensions of entrepreneurial spirit: innovative spirit, adventurous spirit, professional dedication, sense of responsibility, and learning spirit. This measurement system consists of the above five spirits as the target layer, 13 criteria layers representing the actual performance of entrepreneurial spirit, and 23 quantifiable indicator layers. Panel data from 30 provinces and municipalities in mainland China from 2015 to 2019 are used as research samples to indirectly analyze the entrepreneurial spirit reflected by each province and municipality. The weights of each dimension and indicator are determined using the comprehensive entropy method, and a decision matrix of indicator values for each province and municipality is established to construct a measurement model for the regional characteristics of entrepreneurial spirit in China. Relying on this measurement model and sample data, the scores of the entrepreneurial spirit in each dimension (on a scale of 0~100) are calculated for the 30 provinces and municipalities. This allows for a horizontal and vertical comparison of the different dimensions of entrepreneurial spirit among provinces and municipalities under the same criteria, providing practical references for provinces and municipalities with low scores in certain dimensions of entrepreneurial spirit. Secondly, the provinces and municipalities are divided into six major regions based on universally applicable geographical administrative divisions. A comparative analysis is conducted within each region to explore the scores of entrepreneurial spirit in different regions from the perspectives of geographical location, cultural characteristics, and economic development. The commonalities and characteristics of entrepreneurial spirit development in the same region are revealed. 
Finally, the scores of entrepreneurial spirit in the five dimensions for the six major regions are summarized for horizontal comparison. Combined with the unique development of the business environment and economic conditions in each region, a comprehensive exploration of the regional differences in entrepreneurial spirit is conducted to find breakthroughs in cultivating original regional entrepreneurial spirit.
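The comprehensive entropy weighting step described above can be sketched as follows, assuming indicator values have already been normalized to benefit type; the toy matrix stands in for the 30-province decision matrix:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: rows = provinces, columns = indicators.
    More dispersed indicators carry more information and get larger weights.
    Assumes values are non-negative and already normalized to benefit type."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                                  # province shares per indicator
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.nansum(P * np.log(P), axis=0) / np.log(len(X))
    divergence = 1.0 - ent                                 # low entropy -> high divergence
    return divergence / divergence.sum()

# Toy decision matrix: 3 provinces x 2 indicators (hypothetical values).
X = [[0.9, 0.20],
     [0.8, 0.30],
     [0.1, 0.25]]
w = entropy_weights(X)
scores = 100 * np.dot(X, w)          # composite 0-100 score per province
print(w.round(3), scores.round(1))
```

The first indicator varies far more across the toy provinces, so it receives the larger weight; the same logic, applied per dimension, yields the 0~100 dimension scores compared across regions.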
    The research results show that differences in regional culture and economic level lead to differences in the development of regional entrepreneurial spirit. With the continuous development of China's economy and culture, the overall indicators of entrepreneurial spirit in various regions have increased year on year. Chinese entrepreneurial spirit exhibits significant spatiotemporal heterogeneity, and the specific differences are reflected in its different dimensions. Among them, innovative spirit shows the smallest regional differences, while adventurous spirit shows the largest. In regions with higher economic development, entrepreneurs generally demonstrate outstanding innovative spirit, professional dedication, and learning spirit, while entrepreneurs in less developed areas generally possess a stronger adventurous spirit. Among the five dimensions, learning spirit and innovative spirit have a significant impact on the potential of enterprise and regional economic development, and they are also key areas of focus in optimizing the business environment. In order to stimulate balanced development of entrepreneurial spirit across China's regions, the government and relevant departments should make joint efforts: Adopting targeted cultivation measures based on regional entrepreneurial spirit, and actively creating a favorable business environment to promote its development and contribute to regional economic development, thus achieving a virtuous cycle.