
Table of Contents

    25 June 2024, Volume 33 Issue 6
    Theory Analysis and Methodology Study
    Cascading Failure and Resilience Analysis of Equipment Support Network
    DUI Hongyan, XU Zhe, BAI Guanghan, TAO Junyong
    2024, 33(6):  1-6.  DOI: 10.12005/orms.2024.0173
    To address the heterogeneity of combat-system network units and the interactions among them, this paper establishes a three-layer coupled equipment support network containing combat and support units, dividing it into a kill layer, a maintenance layer, and a storage-and-supply layer, and proposes an importance assessment method based on the network topology and the type of information transferred between nodes. Next, the cascading failure propagation process of the network is studied with a load-capacity model, and a cascading failure model of the equipment support network is established. Finally, to enhance the resilience of the multilayer coupled network, a remaining-assignable-task-volume resilience strategy is proposed. Simulations of network resilience under different attack modes verify that attacks based on node importance degrade the network significantly more than random attacks; simulations with and without the proposed strategy show that it markedly enhances network resilience, reduces the impact of cascading failures, and increases the average performance of the network during the failure process.
    The combat system architecture is divided into a combat subsystem that strikes enemy targets, a maintenance subsystem that repairs our damaged equipment and institutions, and a storage-and-supply subsystem that supplies materials to our equipment and institutions. To describe how these subsystems are coupled under the driving of combat missions, the combat subsystem is modeled as the kill layer (A), the maintenance subsystem as the repair layer (R), and the storage-and-supply subsystem as the storage-and-supply layer (S); together with the command and support relationships among them (ψ), these layers constitute the multilayer coupled equipment support network.
    The cascading failure of the equipment support network is analyzed with a load-capacity model; the failure process is as follows: 1) the enemy launches a deliberate attack based on the importance of our nodes (IXij), and the attacked node fails; 2) the initial load of each network node (RXij(0)) is determined; 3) the load of the failed node is redistributed; 4) node loads are updated and any node whose load exceeds its capacity after redistribution fails; 5) the process repeats until the network is stable.
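    As an illustration of steps 2)-5), the following is a minimal sketch of a load-capacity cascading-failure loop. The proportional load-redistribution rule, the capacity definition C = (1+ω)·R(0), and all names are illustrative assumptions rather than the paper's exact formulation.

        # Hypothetical load-capacity cascade sketch (not the authors' exact model).
        def cascade(adj, load, tolerance=0.2, attacked=None):
            """adj: {node: set of neighbours}; load: {node: initial load R(0)}."""
            capacity = {v: (1.0 + tolerance) * load[v] for v in load}   # C = (1+w)*R(0)
            failed = set(attacked or [])
            frontier = set(failed)
            while frontier:                                   # step 5: stop when stable
                next_frontier = set()
                for v in frontier:                            # step 3: redistribute load
                    alive = [u for u in adj[v] if u not in failed]
                    spare = {u: max(capacity[u] - load[u], 0.0) for u in alive}
                    total = sum(spare.values()) or 1.0
                    for u in alive:
                        load[u] += load[v] * spare[u] / total
                for u in load:                                # step 4: check overloads
                    if u not in failed and load[u] > capacity[u]:
                        failed.add(u)
                        next_frontier.add(u)
                frontier = next_frontier
            return failed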
    The remaining-assignable-task-volume resilience strategy is proposed: after a node is deliberately attacked, its neighboring (original) nodes reconnect, with a certain probability, to the same-type node in the network that has the largest priority recovery factor (H). The strategy is applied once the cascading failures within the attacked network layer have ended, and it aims to mitigate the impact of the cascading failures on the whole network. The ratio of remaining kill chains (PNA*) is used as the network resilience indicator; the number of kill chains measures the redundancy of links for destroying target nodes, so the more kill chains and the higher the redundancy, the less the network is affected and the higher its resilience.
    The simulation analysis is performed on a 90-node multilayer coupled equipment support network generated with the complex-network visualization software Gephi, using information-importance weights α=1/2, β=1/3, γ=1/6, internal and external node load factors σ=1, τ=1.2, topology-information load factor δ=0.3, and load tolerance factor ω=0.2. The resilience of the multilayer coupled network under attacks based on information importance is compared with that under random attacks. The comparison shows that the proposed remaining-assignable-task resilience strategy greatly reduces failures in the multilayer coupled network after an attack, improves network resilience, and increases the average performance of the network during the failure process.
    Optimization of In-Plant Parts Receiving and Transferring Based on Adaptive Neighborhood Simulated Annealing Algorithm
    MAO Zhaofang, SONG Manjin, HUANG Dian, FANG Kan
    2024, 33(6):  7-13.  DOI: 10.12005/orms.2024.0174
    With the continuous development of society and economy, manufacturing plants are required to diversify their product offerings in order to meet the varied needs of customers. The mixed-line production method, which allows for the production of different products with a large number of common basic parts on the same production line, has gained popularity in modern manufacturing industry. However, this demand for mixed-line production puts significant pressure on the logistics system due to the requirement for multiple varieties and small quantities of parts. To enhance efficiency in production logistics operations and effectively implement the just-in-time (JIT) principle, many manufacturing companies have adopted a supermarket logistics model as an intermediate warehouse for nearby workstations’ part requirements. This model utilizes frequent small-volume deliveries through milk-run cycles to deliver parts to the assembly line. Such just-in-time delivery systems are widely used in mixed-line production models such as automotive assembly and agricultural machinery parts plants.
    Managers in charge of production logistics face a primary challenge in optimizing parts receiving and transferring. In factories that implement the JIT principle, efficient parts receiving and transferring involves three aspects: first, coordinating the time windows of both supplier truck deliveries and assembly-line demand; second, solving the vehicle scheduling problem of determining the processing order of vehicles served at each warehouse door; and third, addressing the vehicle allocation problem of assigning appropriate demands to each tractor-trailer. Together, these factors make the problem highly complex.
    To optimize parts receiving and transferring, this study establishes a just-in-time-based parts supermarket logistics model and implements cross-docking in factory logistics management. Compared to the traditional approach, the cross-docking model enables intelligent sorting, reduces storage and retrieval costs, and enhances the management of multi-species and small-lot parts. Building upon this foundation, we investigate the Vehicle Assignment and Scheduling Problem (VASP), which incorporates time window constraints, vehicle scheduling, and vehicle assignment considerations. We formulate and linearize the problem.
    To address this challenge effectively, an Adaptive Neighborhood Simulated Annealing (ANSA) algorithm is proposed in this paper. The ANSA algorithm encompasses various neighborhood operations along with adaptive rules for comparison against state-of-the-art commercial optimization solvers. Extensive computational experiments are conducted to validate the effectiveness of our proposed ANSA algorithm.
    The innovations of this paper are as follows: (1) The concept of cross-docking is applied to optimize production logistics, effectively enhancing the management of multi-species and small-lot parts in logistics. (2) In the cross-docking problem, the outbound vehicle typically corresponds to a customer’s demand. This paper extends this assumption based on practical factory experience, enabling tractor-trailers to simultaneously transport multiple demands from the supermarket. The research findings have significant implications for production practice. (3) This paper designs an efficient meta-heuristic algorithm tailored to the problem characteristics. The experimental results demonstrate its excellent performance across various scale scenarios.
    Improved Marine Predators Algorithm for Large-scale Optimization Problems
    ZHANG Wenyu, YUAN Yongbin, GAO Xue, ZHANG Bingchen
    2024, 33(6):  14-21.  DOI: 10.12005/orms.2024.0175
    Large-scale optimization problems arise in many domains of real life. However, such problems involve high-dimensional variables and complex interdependencies among variables, so traditional optimization algorithms are often ineffective and inefficient. Given that swarm intelligence optimization algorithms possess strong global search capabilities, inherent potential for parallelism, and distributive characteristics, they are better suited to large-scale optimization problems. Nevertheless, such algorithms typically have difficulty balancing the exploration and exploitation stages and are prone to local optima, so it is imperative to investigate improvement strategies for them. The Marine Predators Algorithm (MPA) is a novel swarm intelligence optimization algorithm inspired by the foraging behavior of marine predators in nature. It features simplicity in principle, ease of implementation, and minimal parameter settings. However, like other swarm intelligence algorithms, the MPA has its drawbacks, necessitating corresponding improvements to enhance its effectiveness.
    To address the shortcomings of the MPA, namely low solution accuracy and susceptibility to local optima when solving large-scale optimization problems, this study first uses Lloyd's algorithm to initialize the prey population, ensuring an even distribution of individuals throughout the solution space and thereby enhancing the population's global search capability. Subsequently, the three position update strategies of the MPA are used as actions, the number of offspring individuals surpassing their parents as the state, and the reduction in the optimization objective value as the reward. By employing the Q-learning algorithm, the optimal position update strategy is determined for each iteration, which balances the exploration and exploitation processes of the algorithm and prevents it from being trapped in local optima. Additionally, a reverse operation is applied to each optimized individual after each iteration to obtain the reverse solution of the current population, which expands the search space of the improved MPA (IMPA) algorithm, enhances population diversity, and further prevents the algorithm from falling into local optima.
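    As a rough illustration of the strategy-selection loop described above, the sketch below uses tabular Q-learning to pick one of the three MPA position-update rules at each iteration. The state and reward encodings, the ε value and the learning rates are illustrative assumptions, not the authors' settings.

        # Hypothetical sketch: Q-learning selects an MPA position-update strategy.
        import random

        ACTIONS = (0, 1, 2)       # the three MPA position-update strategies
        Q = {}                    # Q[(state, action)] -> value

        def choose_strategy(state, eps=0.1):
            """Epsilon-greedy choice over the three update strategies."""
            if random.random() < eps:
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

        def update_q(state, action, reward, next_state, alpha=0.1, gamma=0.9):
            """Standard one-step Q-learning update."""
            best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
            q = Q.get((state, action), 0.0)
            Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)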
    Finally, this paper conducts a comparative analysis of the IMPA with several existing improved versions of the MPA across 13 high-dimensional test functions. The analysis includes algorithm complexity analysis, convergence speed, and optimization performance analysis, as well as a statistical characterization based on Wilcoxon tests. The results indicate that the IMPA algorithm outperforms other comparative algorithms in terms of solution accuracy and convergence speed on high-dimensional problems. It demonstrates superior convergence capability and solution accuracy in large-scale optimization problems.
    Currently, the discussion of the performance of the IMPA algorithm is based solely on experiments conducted on high-dimensional test functions without testing the algorithm’s performance on actual large-scale optimization problems. The next step in research primarily involves applying the IMPA algorithm to specific large-scale constrained optimization problems and practical engineering issues.
    Optimization of External Container Trucks Retrieving Container Strategy with Random Arrival Time
    XU Gaoyu, ZHU Huiling, JI Mingjun
    2024, 33(6):  22-27.  DOI: 10.12005/orms.2024.0176
    As throughput grows, so does the total number of containers stored in the container yard, and the lack of yard space becomes a problem. Since the actual arrival time of an external container truck often deviates from its appointment time, the original pick-up plan may be disrupted, resulting in terminal congestion, low work efficiency, and reduced customer satisfaction. How to deal effectively with the uncertainty of external truck arrival times has therefore become a difficult research issue. At present, a first-come-first-served strategy is usually adopted for external truck pick-up operations, and two situations arise when a truck deviates from its appointment time. In the first, the truck arrives at the port too early; under first-come-first-served it is served first, but if its target container is covered by many blocking containers, the relocation operations incur additional cost, which may increase the total cost. In the second, even if the target container is on the top layer of the bay, under first-come-first-served the truck can only wait its turn, which incurs a waiting cost and may also incur relocation costs when the container is moved so that other trucks can pick up theirs. In addition, trucks that do not arrive on time cause congestion in the yard and pollute the environment.
    This paper proposes a dynamic adjustment strategy for the pick-up order (the FCDS strategy) that considers weight and time differences, with the aim of minimizing cost. A discriminant formula is proposed for deciding whether an external truck that arrives at the container yard off its appointment time should start picking up containers immediately. To address the problem, a container-handling scheduling model under random truck arrival times is built to minimize the total pick-up cost, and a genetic algorithm is used to solve the model. Finally, to demonstrate the effectiveness of the proposed strategy, it is compared with the first-come-first-served pick-up strategy commonly used for external trucks, and four sets of experiments are designed under different levels of disturbance. Truck arrival scenarios are generated randomly, and the relocation cost and waiting cost of the two strategies are calculated; these two costs, which constitute the total cost incurred when external trucks pick up containers, are the main focus of this paper.
    Based on the experimental data, we draw a plot to compare the advantages and disadvantages of the two strategies. The results show an obvious difference between the two strategies in the same situation, and the greater the disturbance, the more obvious the difference. The larger the truck's deviation from its appointment time, the better the FCDS strategy performs, because it is more flexible and can arrange the pick-up order reasonably. Finally, the two strategies are compared using the expectation and variance of cost: the expectation reflects the average total cost, and the variance reflects its fluctuation. The numerical results show that, compared with the first-come-first-served strategy, the proposed strategy reduces cost and keeps it stable in various scenarios.
    In summary, when external trucks deviate from their appointment times, a more flexible service strategy can improve the work efficiency of the container yard. The FCDS strategy shows better stability in most scenarios; it takes the needs of both the terminal and the customer into account, improves customer satisfaction, reduces the total cost, and makes external-truck pick-up operations more intelligent. In addition, good working order can reduce the environmental pollution of the port and promote the creation of a green port.
    Multi-strategy Integrated Harris Hawk Algorithm to Solve Global Optimization Problems
    LI Yu, LIN Xiaoxiao, LIU Jingsen
    2024, 33(6):  28-34.  DOI: 10.12005/orms.2024.0177
    In basic science and practical engineering applications, optimization problems must be solved in different dimensions and under multiple constraints, and most conventional methods struggle to handle such problems effectively. Intelligent optimization algorithms can solve many optimization problems where classical optimization techniques fail, because they require relatively little search effort, are computationally flexible, and are widely applicable. With the widespread application of intelligent optimization algorithms in fields such as logistics scheduling, combinatorial optimization, and system control, more and more scholars have begun to study such algorithms. Most of these algorithms are inspired by biological and physical phenomena in nature, and they are increasingly used in engineering because of their simple concepts and easy implementation. Harris Hawk Optimization (HHO), proposed in 2019, is a new intelligent optimization algorithm inspired by the hunting behavior of the Harris hawk. It has the advantages of a simple principle, easy programming, few parameters, high convergence accuracy and fast convergence, and has been applied to design and engineering optimization problems in several disciplines.
    For different types of function optimization problems and engineering applications, the HHO algorithm suffers from slow convergence and insufficient stability of the search. To further improve its performance, this paper proposes a multi-strategy Improved Harris Hawk Optimization (IHHO) algorithm that integrates a good point set, a nonlinear energy escape factor, and Logistic-Cubic cascading chaotic perturbations. Firstly, because the initial population is generated randomly, a good point set strategy is applied to distribute the initial population uniformly and improve its coverage of the search space. Secondly, to reduce the risk of falling into local optima, a nonlinear energy escape factor is proposed based on the characteristics of each stage of the algorithm; the escape factor decreases with the number of iterations, expanding the search range in early iterations to prevent premature convergence and shrinking it in late iterations to accelerate convergence, thereby balancing global and local exploration. Finally, because the search positions tend to converge locally, Logistic-Cubic cascade chaos is introduced to perturb the search positions during the update process, avoiding local optima and improving solution accuracy and convergence speed.
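    The two new ingredients named above can be sketched as follows; the map coefficients and the shape of the energy-decay curve are assumptions for illustration only and may differ from the paper's definitions.

        # Hypothetical sketch of a Logistic-Cubic cascaded chaotic map and a
        # nonlinear escape-energy schedule; coefficients are illustrative.
        def logistic_cubic(x, r=4.0, rho=2.59):
            """One step of a logistic map cascaded into a cubic map on (0, 1)."""
            y = r * x * (1.0 - x)              # logistic stage
            return rho * y * (1.0 - y * y)     # cubic stage

        def escape_energy(t, T, E0):
            """Escape energy that stays large early (global search) and shrinks
            quickly late (local exploitation); one possible nonlinear form."""
            return 2.0 * E0 * (1.0 - (t / T) ** 2)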
    In the simulation experiments, the IHHO algorithm is used to solve 23 test functions with different characteristics; each problem is solved 30 times, and the mean and standard deviation of the results are compared with those of 7 other algorithms. The results are further verified using convergence curves and the Wilcoxon rank-sum test, and they indicate that the IHHO algorithm has stronger optimization performance and solution stability than the other algorithms. The IHHO algorithm is also applied to the three-bar truss design engineering problem, and the results show that it is highly competitive with the comparison algorithms and has the potential to become an effective tool for solving global optimization problems. In the future, intelligent optimization algorithms will be studied further and combined with deep reinforcement learning to solve larger and more complex practical problems.
    Modeling of Carbon Neutrality Based on Optimal Transport Theory
    BAO Pan, GAO Leifu
    2024, 33(6):  35-42.  DOI: 10.12005/orms.2024.0178
    Global warming has become a focus of today's society. Its main cause is the massive emission of greenhouse gases such as carbon dioxide. China has proposed targets for carbon peaking and carbon neutrality, which provide a concrete pathway toward addressing global warming. Carbon peaking means that carbon emissions reach their maximum at a certain time. Carbon neutrality means that the total carbon dioxide or greenhouse gas emissions produced directly or indirectly by a country, enterprise, product, activity or individual within a certain period are offset through afforestation, energy conservation and emission reduction, achieving a relative “zero emission”. Together, the carbon peaking and carbon neutrality targets are referred to as the “double carbon” target.
    How to realize the energy transfer between the sources and sinks of carbon emission and carbon absorption effectively is the core issue in research on carbon neutrality theory. Optimal transport theory seeks the minimum-cost coupling over the joint probability distribution of source and sink, which provides a new perspective for dealing with the carbon neutrality problem.
    In this paper, the Kantorovich formulation in optimal transport theory is taken as the basic model and combined with probability distribution information from mathematical statistics to explore model building and solution methods for carbon neutrality problems. Since comprehensive data on carbon emission and carbon absorption are costly to acquire and involve internal enterprise information, they are generally not published directly. Therefore, based on data from statistical yearbooks, relevant literature and materials, and on the actual development situation, the numerical simulation assumes that carbon emission and carbon absorption follow a Weibull distribution and a normal distribution, respectively.
    In the first part, based on the relationship between carbon emission and its influencing factors, the exponential distribution family is used to describe the distribution of carbon emission. Combining the prior distribution of the parameters constructed with a generalized linear model and the Bayesian posterior idea, an accurate parameter estimate is obtained via the Lagrange function and maximum likelihood estimation, and the marginal probability distribution of carbon emission follows. At the same time, the distribution of the relevant measurable data in the carbon absorption process is obtained from industry data, so that the specific distribution information of the carbon source and the carbon sink is known.
    In the second part, according to the actual change, it is not difficult to find that the balanced development and path selection between carbon emission and carbon absorption can provide a reasonable plan and effective strategy for achieving the target of carbon neutrality, which is consistent with the idea of energy conservation in the optimal transport theory. Therefore, the distribution information of carbon emission and carbon absorption is considered as the marginal distribution, and the form of Kantorovich problem in the optimal transport theory is integrated to construct the optimal transport model with the constraint of carbon neutrality, which can express and deal with carbon related problems in a clearer way.
    In the third part, in order to modify the model, the regression analysis method is used to test the relationship between target system and prediction system, and the system parameter is fed back and adjusted according to the result obtained by minimizing the loss function, so as to obtain a better transmission system.
    In the last part, the feasibility and effectiveness of the proposed model and method are verified by numerical simulation. In this process, the coupling distribution between carbon source and carbon sink and the target optimal value can be obtained by using the Sinkhorn fast algorithm. At the same time, the specific transport plan between them can be visually displayed by the transport diagram.
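    For concreteness, a minimal Sinkhorn iteration for an entropically regularised Kantorovich problem is sketched below, with a discretised emission (source) distribution a, an absorption (sink) distribution b and a cost matrix C. The grid, the distribution shapes and the regularisation value are illustrative assumptions, not the authors' data or implementation.

        # Minimal entropic-regularisation Sinkhorn sketch (illustrative only).
        import numpy as np

        def sinkhorn(a, b, C, reg=0.05, n_iter=500):
            K = np.exp(-C / reg)                 # Gibbs kernel
            u = np.ones_like(a)
            for _ in range(n_iter):
                v = b / (K.T @ u)                # alternating scaling updates
                u = a / (K @ v)
            P = u[:, None] * K * v[None, :]      # coupling, i.e. the transport plan
            return P, float(np.sum(P * C))       # plan and its transport cost

        x = np.linspace(0.01, 1.0, 50)
        a = x ** 0.5 * np.exp(-x); a /= a.sum()              # Weibull-shaped source
        b = np.exp(-(x - 0.6) ** 2 / 0.02); b /= b.sum()     # normal-shaped sink
        C = (x[:, None] - x[None, :]) ** 2                   # squared-distance cost
        plan, cost = sinkhorn(a, b, C)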
    In summary, the optimal transport theoretical model with carbon neutrality as a constraint can describe the correlation between carbon emission and carbon absorption reasonably and effectively. At the same time, the specific transport plan between the two can be solved by using a fast algorithm, which ensures the fitness of the solution for the established model. It is worth noting that the proposed idea provides an effective path selection and concrete implementation strategy for the final realization of carbon neutrality target, which has the theoretical significance and application value for quantitative analysis about related problems. In the following research, the carbon related issue will be considered in combination with the dynamic evolution, so that the distribution information is closer to the actual complex situation of the development and change of carbon emission and carbon absorption.
    Adoption of New Energy Vehicle with Reference Effect
    LI Dongdong, YANG Jingyu
    2024, 33(6):  43-50.  DOI: 10.12005/orms.2024.0179
    Compared with conventional fuel vehicles, new energy vehicles offer significant advantages such as low fuel consumption and reduced emissions, which can effectively alleviate environmental pollution and mitigate the energy crisis. Recognizing these benefits, the Chinese government introduced a subsidy policy for new energy vehicles in 2010 to encourage their widespread adoption. This policy aimed to accelerate the transition to cleaner transportation and reduce the nation’s reliance on fossil fuels. However, the sustainability of these subsidy policies has been called into question due to incidents of “subsidy cheating” by some automakers. As a consequence, the Chinese government decided to phase out the subsidy policy, with a complete expiration set for 2023, marking the beginning of a “post-subsidy” era for new energy vehicles. In this new phase, potential strategies include enhancing regulatory frameworks, investing in research and development, improving infrastructure for electric vehicles, and encouraging market-driven solutions. By focusing on these areas, China can continue to support the growth of the new energy vehicle market, drive technological advancements, and achieve its environmental and energy goals.
    As new energy vehicles are relatively new products developed in the last decade, consumers often lack knowledge about them and tend to compare them with conventional fuel vehicles when making purchase decisions. This behavior, known as the reference effect, can affect the effectiveness of price rebates and green fiscal policies. This paper constructs a game model involving the government, firms and consumers, analyses these two promotional policies taking into account the reference effect, and discusses the optimal policies.
    The results suggest that: (i) Regardless of the strength of the consumer purchase reference effect, implementing the green tax or the price discount policy benefits the promotion of new energy vehicles. (ii) When consumers prefer new energy vehicles, the government's optimal policy choice is unaffected by the reference effect, and the green tax policy is always optimal; when consumers prefer fuel vehicles, the optimal choice depends on the reference effect: the green tax policy is optimal when the reference effect is small, and the price discount policy is optimal when it is large. (iii) The optimal new energy vehicle support policy is affected by the greenness of new energy vehicles and the marginal pollution damage of fuel vehicles, and the relationship between these factors and the optimal policy is moderated by the consumer purchase reference effect.
    Our findings have significant practical implications for the formulation of government policies aimed at promoting sustainable development, particularly in the automotive sector. Firstly, we have determined that both price rebate schemes and green tax policies are effective tools for encouraging the adoption of new energy vehicles. This suggests that governments should conduct thorough and comprehensive studies to establish the optimal tax rate that would maximize the benefits of these policies. Secondly, our research indicates that in the current market context, where the majority of consumers still prefer gasoline vehicles, either a price rebate or a green tax policy can be effectively implemented to promote new energy vehicles. However, as consumer preferences evolve and shift towards new energy vehicles, the green tax policy emerges as the better approach. This implies that while price rebates are beneficial in the short term, in the long run, governments should strategically plan to transition towards green tax policies to sustain and accelerate the growth of the new energy vehicle market. Therefore, it is crucial for government policymakers to anticipate this shift in consumer preferences and prepare to implement green tax policies in the near future. By doing so, they can ensure a smoother transition and continued support for the new energy vehicle industry as subsidies are phased out. This paper provides valuable insights that can guide the promotion of new energy vehicles and support the development of the new energy vehicle industry in the post-subsidy era, ultimately contributing to environmental sustainability and energy conservation. In summary, our findings underscore the importance of adaptive and forward-thinking policy measures. Governments should leverage the dual approach of price rebates and green tax policies based on current market dynamics and future trends, ensuring a robust framework for the promotion and growth of new energy vehicles.
    Research on Online and Offline Problems of Truck-drone Distribution
    YU Haiyan, YE Jing, WU Tengyu, GOU Mengyuan
    2024, 33(6):  51-56.  DOI: 10.12005/orms.2024.0180
    At the beginning of 2020, COVID-19 broke out all over the world and seriously affected people's normal lives. Although the epidemic is now largely under control, small-scale outbreaks still occur in various regions, and society remains in the post-epidemic era. In closed areas under isolation, drone delivery avoids direct contact and helps prevent an increase in cases. Truck-drone distribution can not only expand the scope of delivery but also address the timeliness requirements of orders. Therefore, the online and offline problems of truck-drone distribution are worth researching.
    In the truck-drone mode, a truck carries drones from the distribution center and is regarded as a mobile warehouse along its route; the truck only stops at designated stops to provide materials and charging services for the drones, and all orders are fulfilled by drones. A review of the relevant literature shows that research into truck-drone distribution is still at a preliminary stage. Most studies consider how to dispatch trucks and drones under static conditions, whereas the dynamic case still needs study. Online methods are an effective way to handle dynamic problems, but current research on online methods is limited to traditional vehicle distribution, and online research into truck-drone distribution is still lacking.
    The second part of this paper studies the online truck-drone distribution problem, whose objective is to minimize the total time needed to serve all orders and return to the distribution center. Orders are generated in real time and are highly dynamic. Firstly, it is proved that the lower bound of the competitive ratio of the truck-drone distribution problem is 2−δ. Secondly, an online algorithm, OCOA, which calls the offline TSOOA algorithm, is designed, and it is proved that the upper bound of the competitive ratio of OCOA on a general network is 2.5. The core idea of OCOA is to check whether the truck is at the distribution center and treat the two cases separately: if the truck is at the distribution center, the offline TSOOA algorithm is called directly; if not, the truck first returns to the distribution center along the shortest path and TSOOA is then called.
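    The case analysis above can be restated schematically as below; tsooa() stands for the offline three-stage algorithm, and the truck interface is a hypothetical placeholder rather than the authors' implementation.

        # Schematic restatement of the OCOA decision rule (illustrative pseudocode).
        def ocoa_step(truck, pending_orders, tsooa):
            if not pending_orders:
                return None
            if truck.at_depot():
                # Truck is at the distribution centre: plan all known orders offline.
                return tsooa(pending_orders)
            # Otherwise return to the depot along the shortest path, then replan.
            truck.return_to_depot_along_shortest_path()
            return tsooa(pending_orders)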
    The third part of this paper studies the offline truck-drone distribution problem, which also aims to minimize the total time to serve all orders and return to the distribution center; that is, how to choose truck stops, how to assign orders, and how to plan the truck's route when all order information is known. For the offline problem, a model is established, a three-stage offline algorithm (TSOOA) is designed, and CPLEX is used to solve the linear model. For the same inputs, the relative error between the TSOOA result and the offline optimal solution ranges from 0% to at most 6.69%, which demonstrates the effectiveness of the TSOOA algorithm.
    The fourth part of this paper uses MATLAB to simulate and analyze the OCOA algorithm. To study the effectiveness of the algorithm in different networks, two representative network datasets, a simulated network and a real network, are selected for comparison. For both networks, the ratio between the results of the OCOA algorithm and the offline TSOOA algorithm is calculated to be about 1.75, so the algorithm performs better in real scenarios than the upper bound of its competitive ratio suggests.
    To sum up, under post-epidemic distribution demands and rapid technological development, the truck-drone distribution mode is a new mode that can be chosen for daily-life delivery in the future. Research on the online and offline problems of truck-drone distribution can provide a reference for decision-making and scheduling in real life and expand the scientific study of this mode. In the future, we can study the online truck-drone distribution problem and the truck-drone distribution problem with time-window constraints.
    Environmental Service Supply Chain Decision-making Considering the Preferential Policies of Environmental Tax Reduction
    XU Minli, HE Jiali, JIAN Huiyun
    2024, 33(6):  57-63.  DOI: 10.12005/orms.2024.0181
    With the rapid economic development of our country, the problem of industrial pollution has become increasingly serious. Since the beginning of the 21st century, the limitations of the traditional pollution control model have become apparent, and the third-party governance model based on the “polluter pays” principle has opened up a new approach to environmental governance and quickly attracted the attention of government departments. To further improve the emission reduction level of enterprises, the Environmental Protection Tax Law was officially implemented in 2018; its preferential policies are designed to encourage polluters to cut emissions beyond the required standard, but in practice the incentive effect of these preferential policies on enterprises still needs to be improved. Some scholars have therefore suggested adjusting the tax rate and designing multiple reduction gradients to induce polluters step by step. It is thus of great significance to study how the preferential mechanism of the environmental protection tax affects the decisions and profits of polluters and environmental service providers, and how to improve the incentive effect of the preferential policy.
    Based on the third-party governance model for environmental pollution, and considering a two-echelon environmental service supply chain composed of a polluter and an environmental service provider, supply chain models are constructed under two preferential mechanisms: graded tax reduction and linear tax reduction. Using Stackelberg game theory, the impact of the environmental tax reduction policy and of increasing the number of preferential grades on the production and emission reduction decisions of enterprises is studied quantitatively. The results show that under the graded preferential policy, as the emission reduction ratio increases, the output and profit of the polluter increase and the profit of the environmental service provider increases. However, a “difficult area for emission reduction” appears when the preferential interval is crossed, and the environmental service provider lacks the motivation to upgrade its technology, which inhibits the incentive effect of the environmental tax preferential policy to a certain extent. Compared with the graded preferential policy, the “difficult area for emission reduction” disappears under the linear preferential policy; at the same emission reduction ratio, the output and profit of the polluter increase and the social welfare effect is better, but the profit of the environmental service provider is harmed and its emission reduction cost level increases. Raising the tax rate can ease the impact on environmental service providers.
    Different from previous articles on preferential policies of environmental protection tax, based on the realistic policy background, this paper quantitatively analyzes the impact of two environmental tax reduction mechanisms on polluters, environmental service providers and social welfare through the establishment of mathematical models, and obtains some management implications. This will help the government continue to improve the tax system and collection and administration, guide enterprises to improve the level of emission reduction, and create a healthier third-party governance market environment.
    Evolutionary Game Analysis of Smart City Information Security Service Quality Supervision under Different Reward and Punishment Mechanisms
    GUO Yihang, ZOU Kai, LUO Simin, LIU Xin
    2024, 33(6):  64-70.  DOI: 10.12005/orms.2024.0182
    With the popularization of smart city construction around the world, the issue of information security is receiving more and more attention from all walks of life. At the same time, the demand of organizations and individuals for information security is rising, as are their requirements for the quality of information security services when using smart-city-related products. Owing to information asymmetry and the opportunistic behavior of enterprises, the quality of information security services can be greatly reduced, which means that information security problems cannot be fundamentally solved. For this reason, it is important for the government to control the quality of information security services in smart cities by formulating scientific and reasonable regulatory measures, and balancing the conflicting interests of the various parties while improving regulatory efficiency is the key to keeping smart city information security services operating well. To address this, this paper casts the problem as a game optimization problem: the government and the smart city information security service provider are engaged in a dynamic game in which each side's strategic behavior is affected by, and changes with, the other's. First, what factors do the two sides face in this dynamic game? Second, beyond the necessary factors, what other influencing factors or subjects are specific to the smart city context? Third, is there an effective mechanism for optimizing the game system?
    This paper focuses on the problem of regulating the quality of information security services in smart cities. To solve this problem, we first choose the key variables based on the literature and practice. To be closer to the actual situation, we further consider user participation, because users are important subjects of service quality. To explore the impact of regulatory mechanisms, we model the game under different reward and punishment mechanisms and explore the evolutionarily stable strategies of the game subjects. The analysis shows that among the four reward and punishment mechanisms, dynamic reward with static punishment is optimal. We observe the changes of the game system by adjusting the reward ceiling, the punishment ceiling and the user feedback probability, and propose targeted strategies from both the government and user perspectives. Future research can consider two directions: first, under a dynamic reward and punishment mechanism, the strategy choice of a game subject does not necessarily have a linear relationship with the reward or punishment ceiling, and how to model the relationship between the game subjects and the key variables effectively deserves in-depth exploration; second, besides the two parties studied here, regulating the quality of information security services in smart cities also involves the central government, social media and other subjects whose decision-making behavior affects the outcome, so more participants can be considered in the future.
    Real-time Scheduling Method Based on Reinforcement Learning for Material Handling in Assembly Lines
    XIA Beixin, GU Jiayi, TIAN Tong, YUAN Jie, PENG Yunfang
    2024, 33(6):  71-77.  DOI: 10.12005/orms.2024.0183
    The scheduling of the workshop material handling system is an important part of the production control system of the manufacturing enterprise’s flow workshop. Timely and efficient material scheduling can effectively improve production efficiency and economic benefits. In the actual production process, there may be some random events that make the workshop material handling system dynamic. In order to dynamically respond to changes in the state of the assembly line and effectively balance the production efficiency and energy consumption of mixed flow assembly, this paper proposes a reinforcement learning scheduling model based on Q-learning algorithm.
    The real-time state information of the manufacturing system includes all the state characteristics of the system at a given moment. Because the complexity of the system makes it difficult to cover all system states, and in order to simplify the model, preserve the accuracy of the decision model and allow reinforcement learning to be applied effectively, this paper selects the current real-time information, the forward-looking information of the system and the slack time of each part as the state features used in the scheduling decision model. Five action groups are set up according to the number of transported parts and the transport sequence of multiple parts. The calculation of the transport scheduling plan for each action group of the multi-load trolley is divided into three steps: selecting the transport task, calculating the start time, and coordinating the start time point. The reward and punishment function fed back by the system covers three dimensions, namely out-of-stock time, handling distance and line-side inventory, which are given different weights according to the optimization goal, so as to minimize the travel distance of the multi-load trolley and the line-side inventory of each part while satisfying on-time delivery of parts to the assembly line as far as possible.
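    A rough sketch of the weighted reward described above is given below; the sign convention and the weights are illustrative assumptions, not the values used in the paper.

        # Hypothetical weighted reward over the three feedback dimensions.
        def reward(stockout_time, handling_distance, line_side_inventory,
                   w=(0.5, 0.3, 0.2)):
            """Penalise stock-outs, travel distance and line-side inventory;
            a larger (less negative) value is better."""
            return -(w[0] * stockout_time
                     + w[1] * handling_distance
                     + w[2] * line_side_inventory)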
    To deal with the problem that the Q table becomes too large, this paper proposes an improved two-parameter greedy strategy selection method and, on the basis of the greedy strategy, introduces an LSTM neural network to approximate the Q-value function, so as to balance faster convergence against premature convergence.
    This paper uses Arena simulation software to build a simulation system for an automobile mixed-flow assembly line and compares the performance of different scheduling methods under different product ratios. The simulation results show that the modified Q-learning algorithm outperforms the other scheduling strategies: it effectively reduces the handling distance while ensuring that materials are delivered to the assembly line on time so that maximum output is achieved. At the same time, the computation time the reinforcement learning method needs for one scheduling decision is significantly less than that of the other methods, showing good real-time response capability and meeting the real-time requirements that the actual production environment places on the scheduling of the material handling system.
    Integrated Scheduling of Order Batch Picking and Loading Problems: Considering Cargo Conflict
    ZHANG Jun, ZHANG Yanfang, ZHANG Ning, TANG Shuo
    2024, 33(6):  78-85.  DOI: 10.12005/orms.2024.0184
    As a new e-commerce model, the online-to-offline (O2O) retail model is developing rapidly in China, and one of its most representative forms is the O2O large supermarket. O2O large supermarkets provide online shopping and offline delivery services, breaking down the barriers between offline services and online transactions. In the O2O model, most of the goods ordered by customers are fresh products and daily necessities. To guarantee food safety, order fulfillment must consider cargo conflict: goods with different storage-temperature requirements should not be placed in the same box; for example, normal-temperature goods and frozen goods may not share a box. Moreover, in the O2O model, orders are high-frequency, small-batch and have urgent time windows. Therefore, it is necessary to study the integrated scheduling of order batch picking and loading with cargo conflict taken into account.
    The differences between this paper and the current research are as follows: (1) This study considers the two-dimensional loading problem with the cargo conflict constraint, while the current study does not consider the cargo conflict. (2) In this paper, the loading cost includes two parts: box cost and penalty cost. The higher the loading rate, the lower the penalty cost. The current study does not consider the penalty caused by the loading rate. (3) The current research divides the order picking and loading decisions. In this paper, we integrate the order picking and loading problems.
    To improve the overall order delivery efficiency and ensure food safety and freshness in the O2O supermarket, this paper studies the integrated scheduling of order batching and loading problems with cargo conflict constraint. The mixed integer programming model is constructed. The objective function is to minimize the order picking cost and loading cost. The simulated annealing (SA) algorithm has been widely used in solving complex combinatorial optimization problems. The model considers the constraints of batch capacity and cargo conflict, which makes it difficult for the traditional SA algorithm to jump out of the local optimal solution, and the optimization performance limited. To increase the algorithm’s global search capability, the improved SA algorithm is designed. The improvement parts of SA include: the initial solution generation mechanism, the improved two-dimensional loading algorithm and the heating design. In order to verify the effectiveness of the model and algorithm, this paper implements a series of simulation experiments.
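    As an illustration of the heating design mentioned above, a generic simulated-annealing skeleton with periodic reheating is sketched below; the cooling/reheating schedule, the neighbourhood move and the cost function are placeholders, not the paper's exact algorithm.

        # Generic SA skeleton with reheating (illustrative, not the paper's design).
        import math, random

        def sa_with_reheat(init, cost, neighbour, t0=100.0, t_min=1e-3,
                           alpha=0.95, reheat_every=200, reheat_factor=2.0):
            cur = best = init
            t, it = t0, 0
            while t > t_min:
                it += 1
                cand = neighbour(cur)
                delta = cost(cand) - cost(cur)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    cur = cand
                    if cost(cur) < cost(best):
                        best = cur
                t *= alpha                                    # geometric cooling
                if it % reheat_every == 0:                    # periodic reheating
                    t = min(t0, t * reheat_factor)            # helps escape local optima
            return best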
    The experimental results show that: (1) Under different order types and parameter combinations, the improved SA algorithm reduces the total cost more effectively than the other two algorithms. (2) The improved seed algorithm is better than the improved first-come-first-served (FCFS) rule at generating the initial solution. (3) The improved loading algorithm has lower cost and is more operation-friendly than the traditional loading algorithm. (4) The improved SA with the heating design has better global search and optimization capability than the traditional SA. (5) The cargo conflict constraint keeps conflicting goods apart before loading starts, avoids additional checking and selection costs, and reduces the total cost.
    The limitation of this paper is that order picking and loading problems are often subject to multiple physical constraints in reality. Future work will aim to further study the joint scheduling problem of order picking and three-dimensional packing, which considers the constraints of products’ weight and fragility.
    Determination of Critical Chain Project Buffer Monitoring Cycle Based on Objective and Activity Risk
    ZHANG Junguang, LYU Yue
    2024, 33(6):  86-92.  DOI: 10.12005/orms.2024.0185
    In order to enhance management efficiency, Goldratt proposed the critical chain project management method in 1997. This method applies the theory of constraints, initially used in the production field, to project management. Instead of focusing on the critical path, it highlights the critical chain and places greater emphasis on resource constraints. To mitigate the impact of uncertainty, the method incorporates a buffer into the project timeline, which not only shortens the project duration but also reduces the impact of human behavioral factors on the project. At the same time, the project is monitored and controlled through reasonable measures during its implementation.
    Classical research presumes that buffers can be obtained at any time. However, it is often difficult to achieve in the practical implementation of projects. Excessive monitoring can result in project deviation from the initial plan. To ensure that each buffer monitoring is beneficial for the project implementation, this study proposes a method for determining the monitoring period based on project objectives and activity risks. Different projects have varying priorities regarding schedule, cost, and quality. The relationship between these factors is utilized to determine the appropriate monitoring frequency for different project priorities. The monitoring interval for each activity is then allocated according to the risk level of each activity. This determines the monitoring period for each activity. Consequently, the project can attain optimal benefits with a limited number of buffer monitoring instances.
    The purpose of monitoring buffer consumption is to ensure controllability during project execution and to make timely adjustments when deviations occur. The classical approach to monitoring critical chain projects considers the impact of monitoring triggers and treats buffer monitoring as a continuous state by default. However, frequent monitoring consumes considerable manpower and time and increases project costs, so setting a reasonable monitoring period and frequency is crucial in practice. The main objective in setting the monitoring cycle is to optimise the project's comprehensive benefits. Because various uncertainties affect these benefits during implementation, the monitoring interval should be adjusted according to project objectives and activity risks rather than standardised into a uniform cycle, so that optimal monitoring effects are achieved with an appropriate monitoring effort.
    Each project differs in duration, cost, and quality requirements. These factors interact: shortening the duration increases costs and decreases quality, while improving quality requires extending the duration and increasing costs, although a high-quality project reduces rework costs later. Therefore, the aim is to optimise the project's comprehensive benefits. We determine the overall monitoring frequency based on the project objectives and then analyse the risk impacts of the activities to set the monitoring period. In the planning stage, we estimate multiple risk impacts for each project activity. In the execution stage, we calculate the buffer-consumption risk impact of each activity as the ratio of each type of buffer usage to the planned buffer amount. We analyse the comprehensive risk impacts of the activities with the CRITIC method, calculate the comprehensive risk-impact coefficients, and combine these with the monitoring frequency to set a reasonable monitoring cycle.
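    For reference, the standard CRITIC weighting (presumably what the CRITIC method above refers to, though the paper's exact variant may differ) combines each criterion's dispersion with its conflict against the other criteria:

        C_j = \sigma_j \sum_{k=1}^{m} (1 - r_{jk}), \qquad
        w_j = \frac{C_j}{\sum_{l=1}^{m} C_l},

    where \sigma_j is the standard deviation of criterion j across the activities and r_{jk} is the correlation coefficient between criteria j and k; w_j then plays the role of a comprehensive risk-impact coefficient.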
    A Monte Carlo simulation is carried out in Matlab, assuming that each activity duration follows a lognormal distribution and generating 1,000 random durations. Experimental comparison with the static three-part method and the relative buffer monitoring method shows that the proposed method can shorten the project duration and reduce costs while meeting quality requirements.
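    A minimal sketch of this simulation set-up, under assumed planned durations and an assumed lognormal shape parameter (the paper's actual values are not reproduced here):

        # Illustrative Monte Carlo generation of 1000 lognormal duration vectors.
        import numpy as np

        rng = np.random.default_rng(0)
        planned = np.array([5.0, 8.0, 3.0, 6.0])    # assumed planned durations
        sigma = 0.25                                 # assumed lognormal shape
        mu = np.log(planned) - sigma ** 2 / 2.0      # so E[duration] matches the plan
        samples = rng.lognormal(mean=mu, sigma=sigma, size=(1000, planned.size))
        chain_durations = samples.sum(axis=1)        # serial chain, for illustration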
    Morris Factor Screening Method Based on Robust Statistics
    XIE En, MA Yizhong, LIU Lijun, ZHANG Fengxia
    2024, 33(6):  93-99.  DOI: 10.12005/orms.2024.0186
    The Morris method is usually applied to identify non-influential inputs of a computationally costly mathematical model with a large number of inputs. As a first step, it can be used to simplify a model by identifying low-influence factors that can be fixed. It is widely used in factor screening when the model has many inputs and is expensive to evaluate, because of its high efficiency and model-free nature. It is also known as the Elementary Effect (EE) method, which measures the importance of an input by estimating two statistics of the distribution of its EE, the mean and the standard deviation (or variance). However, in industrial production, outliers caused by measurement errors, instrument failures or noise make data contamination unavoidable. When there are outliers in the output, the mean and standard deviation of the EE deviate from the true central tendency and variation, and the conventional Morris method can no longer accurately identify the effect of an input on the output. Thus, in this paper, we propose to replace the mean and standard deviation with robust estimators of the location and scale parameters of the EE. In this way the improved Morris method can accurately estimate the linear and nonlinear effects of the inputs in the presence of data contamination. The proposed Morris method is also widely applicable regardless of whether the output is normally distributed.
The Morris method is model-free and can identify the linear and nonlinear effects of inputs in complex situations. In this paper, we adopt the median and the Hodges-Lehmann (HL) estimator to estimate factors' linear effects, and employ the Median Absolute Deviation (MAD) and the Shamos estimator to estimate factors' nonlinear effects. The statistical properties of these robust estimators (median, HL, MAD, and Shamos) are investigated, with breakdown points used to measure their robustness. The median and MAD have higher breakdown points than HL and Shamos, while HL and Shamos have higher relative efficiency. The procedure for estimating factors' effects is introduced, and the proposed method does not increase the computational cost compared with the traditional method. Then, two test functions (the Ishigami function and a 20-input Sobol' G function) and a real case (the HyMOD model) are used to verify the robustness and convergence of the proposed Morris method. To verify its validity in the presence of outliers in the output, a certain percentage of the initial outputs are randomly replaced with outliers using a contamination function. For the Ishigami function, we separately investigate the impact of the number and the size of outliers in the output on the experimental results. The maximum percentage of outliers in the output is 0.3, which exceeds the breakdown points of the HL and Shamos estimators. For the Sobol' G function, the convergence speed of the proposed method is compared with that of the traditional Morris method.
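As an illustration of the robust summaries discussed above, the sketch below contrasts the classical mean and standard deviation of a set of elementary effects with the median and MAD on artificially contaminated values. The EE values are simulated directly rather than computed from Morris trajectories, and the contamination level is an assumption for demonstration only.

```python
import numpy as np
from scipy.stats import median_abs_deviation

rng = np.random.default_rng(0)

# Illustrative elementary effects for one input across r repeated trajectories,
# contaminated by a few large outliers (not the paper's test functions).
r = 50
ee = rng.normal(loc=2.0, scale=0.5, size=r)      # underlying EE distribution
ee[:5] += 50.0                                    # 10% contamination

# Classical Morris summaries: sensitive to the outliers.
mu_star, sigma = np.mean(np.abs(ee)), np.std(ee, ddof=1)

# Robust summaries proposed as replacements: median for location, MAD for scale.
med = np.median(np.abs(ee))
mad = median_abs_deviation(ee, scale="normal")    # scaled for consistency with the std

print(f"classical: mu*={mu_star:.2f}, sigma={sigma:.2f}")
print(f"robust   : median={med:.2f}, MAD={mad:.2f}")
```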
Several interesting conclusions are drawn. First, when there are no outliers in the output, or the percentage of outliers does not exceed the breakdown point of the robust estimator, the proposed method can effectively perform factor screening. Second, the number of outliers in the output affects the result: when the percentage of outliers exceeds the breakdown point of the robust estimator, the proposed method fails, but the size of the outliers has no effect on the result. Third, when there are no outliers in the output, the proposed method obtains accurate results at a smaller cost and is more efficient than the traditional Morris method. Moreover, when facing outliers in the output, the number of repeated tests needs to be increased appropriately to obtain accurate screening results. In summary, the improved method has wide applicability and high robustness, performing factor screening regardless of whether outliers are present in the model response, and it possesses higher efficiency than the traditional Morris method.
    Estimation of Product Reliability for Weibull Distribution under Ranked Set Sampling
    ZHANG Liangyong, DONG Xiaofang
    2024, 33(6):  100-105.  DOI: 10.12005/orms.2024.0187
    Asbtract ( )   PDF (977KB) ( )  
    References | Related Articles | Metrics
The ranked set sampling method improves sampling efficiency and is appropriate for situations in which the visual ranking of sampling units is easy but their quantification is difficult. Although the ranked set sampling protocol was first introduced to solve problems in agriculture, it has since been applied to many fields, including reliability engineering, clinical medicine, and the ecological environment. Product reliability is an important metric for describing product lifetime. In recent years, some scholars have studied the estimation of product reliability under ranked set sampling and, by comparing estimation efficiency, have shown that ranked set sampling is superior to simple random sampling. However, most of the product lifetimes studied by these scholars follow an exponential distribution. The Weibull distribution plays a very important role in reliability testing, as it can describe the lifetimes of components, equipment, and other products. When the population distribution is known, maximum likelihood estimation is an important method for obtaining point estimates and has a wide range of applications.
In order to improve the estimation efficiency of product reliability for the Weibull distribution, this paper studies the maximum likelihood estimation of product reliability using ranked set sampling. Firstly, the maximum likelihood estimators of the shape and scale parameters of the Weibull distribution under ranked set sampling are analyzed. On the basis of the invariance of maximum likelihood estimation, the maximum likelihood estimator of product reliability under ranked set sampling is given. Secondly, the Fisher information matrix of the shape and scale parameters of the Weibull distribution under ranked set sampling is computed. The proposed maximum likelihood estimator is shown to have asymptotic normality, and its asymptotic variance is calculated from the Fisher information matrix. Thirdly, according to the asymptotic variances of the two maximum likelihood estimators under ranked set sampling and simple random sampling, their asymptotic relative efficiencies are analyzed. To study the efficiencies of the new estimator and the corresponding estimator under simple random sampling in small samples, a simulation study is performed. Using simulated biases and mean square errors, this paper calculates the simulated relative efficiencies of the two maximum likelihood estimators under ranked set sampling and simple random sampling. Finally, the two maximum likelihood estimators are applied to strength data of single carbon fibers.
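The following Python sketch illustrates the kind of ranked-set-sampling likelihood involved, assuming perfect ranking and a balanced design: each observation is a judgment order statistic, so its density is the corresponding order-statistic density, and reliability is recovered by invariance as R(t) = exp(-(t/scale)^shape). The set size, cycle count, and parameter values are illustrative assumptions, and this is not the authors' code.

```python
import numpy as np
from math import comb
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
k, m = 3, 10                      # set size and number of cycles (illustrative)
shape, scale, t0 = 2.0, 1.5, 1.0  # true parameters and mission time

# Balanced ranked set sample: the r-th judgment order statistic from each
# set of size k, repeated m times (perfect ranking assumed).
rss = np.array([[np.sort(rng.weibull(shape, k) * scale)[r] for r in range(k)]
                for _ in range(m)])

def neg_loglik(theta):
    a, b = np.exp(theta)                       # positivity via log-parameterisation
    ll = 0.0
    for r in range(k):                         # density of the (r+1)-th order statistic
        x = rss[:, r]
        F = weibull_min.cdf(x, a, scale=b)
        logf = weibull_min.logpdf(x, a, scale=b)
        ll += np.sum(np.log(comb(k - 1, r)) + np.log(k)
                     + r * np.log(F) + (k - 1 - r) * np.log1p(-F) + logf)
    return -ll

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
reliability = np.exp(-(t0 / b_hat) ** a_hat)   # invariance: R(t0) = exp(-(t0/b)^a)
print(f"shape={a_hat:.3f}, scale={b_hat:.3f}, R({t0})={reliability:.3f}")
```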
The asymptotic and simulated relative efficiencies show that the proposed estimator is superior to the corresponding estimator based on simple random sampling, and the advantage of the new estimator based on ranked set sampling increases as the set size increases. The analysis of the strength data of single carbon fibers also shows that the new estimator under ranked set sampling is superior to the corresponding estimator under simple random sampling, which verifies the efficiency of the ranked set sampling method.
This paper studies the estimation of product reliability for complete data using ranked set sampling. However, in reliability or survival analysis, the lifetime of interest can often be only partially observed. For instance, in medical studies, a subject may die from a cause unrelated to the study or be lost to follow-up; in such a case, the lifetime of that subject is truncated. Therefore, in forthcoming studies, we will consider the estimation of product reliability for truncated data based on ranked set sampling.
    “Coupling Effect” among Diversified Financial Markets ——Wavelet Local Multiple Correlation
    WANG Zongrun, YANG Miao
    2024, 33(6):  106-111.  DOI: 10.12005/orms.2024.0188
    Asbtract ( )   PDF (1141KB) ( )  
    References | Related Articles | Metrics
The financial system is an important foundation for economic development and an important source of systemic risk in China. As an important window for interaction between the real economy and the financial economy, the real estate market also plays an important role in the formation and spread of risk. To guard against financial risks, it is necessary to understand the coupling mechanism among financial markets and the mechanisms by which financial risk spills over both within and beyond the financial market.
The overall “coupling co-movement” of the financial market reflects a kind of correlation. This study uses wavelet local multiple correlation (WLMC) to explore the mutual influence and co-movement among multiple markets, and the Vine-Copula model to verify the co-movement structure between them. WLMC can not only deal with the time-varying co-movement between two markets but also describe the overall time-varying co-movement among multiple markets, capture the co-movement characteristics of multiple time series at different frequencies, and explain how this co-movement evolves over time. The WLMC model uses the maximal overlap discrete wavelet transform (MODWT) to obtain the wavelet coefficients that serve as the time series input for further analysis. China's important financial markets include the currency, bond, foreign exchange, and gold markets, and its real estate market is closely connected to both the real economy and the financial markets. Therefore, this study selects the stock, bond, currency, foreign exchange, gold, and real estate markets as research objects to explore the overall co-movement structure and dynamic time-frequency co-movement among multiple markets. We use daily return data of these six markets from July 27, 2005 to August 13, 2021 as samples; the data are mainly from the WIND and CSMAR databases. The daily rate of return for each market is obtained by logarithmic processing of the closing price data, and after cleaning, 3888 valid samples are obtained.
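As a simplified illustration of the local multiple correlation idea, the sketch below computes, in a rolling window at a single wavelet scale, the largest multiple correlation obtainable by regressing one series on the others, using the identity R_i^2 = 1 - 1/(R^{-1})_{ii} for a correlation matrix R. The "wavelet coefficients" are simulated here rather than obtained from a MODWT, and the rectangular window replaces the weighting used in the full WLMC methodology, so this is only a sketch of the idea.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative "wavelet coefficients" for 6 markets at one scale: a common factor
# plus idiosyncratic noise (in practice these would come from a MODWT of returns).
T, n = 800, 6
common = rng.normal(size=T)
X = 0.6 * common[:, None] + rng.normal(scale=0.8, size=(T, n))

def local_multiple_correlation(X, window=101):
    """Rolling multiple correlation: at each point, the largest R among regressions
    of one series on all the others, computed from the local correlation matrix."""
    half = window // 2
    out = np.full(len(X), np.nan)
    for t in range(half, len(X) - half):
        R = np.corrcoef(X[t - half:t + half + 1].T)
        # R^2 of regressing series i on the rest is 1 - 1/(R^{-1})_{ii}
        r2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
        out[t] = np.sqrt(r2.max())
    return out

wlmc = local_multiple_correlation(X)
print(f"local multiple correlation: mean={np.nanmean(wlmc):.2f}, "
      f"min={np.nanmin(wlmc):.2f}, max={np.nanmax(wlmc):.2f}")
```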
The research finds the following. First, the financial markets exhibit stable characteristics of low positive co-movement at high frequencies and high positive co-movement at low frequencies. Second, apart from the stock market, there is no stable, high-strength coupling co-movement between the other financial sub-markets and the real estate market. Third, during the sample period the financial system and the real estate market are closely coupled, with the stock market serving as the bridge between them, so the financial system and the real economy form a positive-sum relationship of symbiosis, mutual prosperity, and mutual promotion. To prevent systemic risks, we need to promote the deep integration of finance and the real economy. The co-movement structure of multiple markets and of key hub markets is crucial for blocking risk propagation paths in time and for preventing and resolving financial risks. Therefore, in risk prevention and control, attention must be paid to the spillover of financial market risks to the real estate market and to the real economy. In future work, the sliding-window method can be adopted to identify structural changes in market co-movement before and after a crisis through different time windows, and thereby study changes in systemic risk. In addition, a mixed-frequency model can be used to study the relationship among economic fundamentals, economic policy uncertainty, investor sentiment, and market co-movement characteristics.
    Allocation Model of Non-hydro Renewable Energy Power Quota among Provinces Based on the Bi-level Programming Approach
    WANG Delu, LI Chunxiao, SONG Xuefeng
    2024, 33(6):  112-117.  DOI: 10.12005/orms.2024.0189
    Asbtract ( )   PDF (1079KB) ( )  
    References | Related Articles | Metrics
    The advancement of renewable energy power is a vital impetus for China’s energy structure reform, and the development of a scientific non-hydro renewable energy power quota allocation scheme is an important guarantee for achieving the 2030 carbon peak and 2060 carbon neutrality goals. In June 2020, the National Development and Reform Commission and the National Energy Administration jointly introduced the 2020 renewable energy power consumption responsibility weights for each provincial-level administrative region. In practice, the effectiveness of these policies has been suboptimal. Thus, devising a rational allocation plan for non-hydro renewable energy power quotas is a significant issue that demands concentrated research.
    In recent years, numerous scholars have conducted research on the implementation issues of the renewable portfolio standard policy. From a methodological perspective, existing research primarily employs optimization models, which overlook the heterogeneity of government interests in the allocation process of non-hydro renewable energy power quotas. In reality, the central and local governments, as the formulators and implementers of the renewable portfolio standard, have different objectives and demands. This issue represents a typical leader-follower bi-level optimization problem. From a research perspective, the existing literature mainly focuses on minimizing the total system cost, without considering the impact of subsidy costs and the environmental improvements brought by carbon dioxide emission reductions.
    In light of this, this paper is based on China’s current industry management system and thoroughly considers the diverse interests of the central and local governments. By designating the central government as the upper-level decision-maker and local governments as the lower-level decision-makers, it integrates subsidy costs into the central government’s quota allocation objectives. A provincial non-hydro renewable energy power quota allocation model based on bi-level multi-objective nonlinear programming is developed. Utilizing relevant data from 30 provinces in China and employing a genetic algorithm to solve the model, an optimal allocation scheme that balances cost, environment, and equity is achieved. This allocation scheme’s superiority is validated by comparing it with the current government allocation scheme. Furthermore, this study computes the proportions and execution rates of non-hydro renewable energy power quotas for the 30 renewable energy-generating provinces under various central government objective preference scenarios. The allocation schemes under different scenarios are analyzed and discussed. This will help the government more scientifically optimize the setting of non-hydro renewable energy power quota allocation schemes.
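The sketch below conveys the bi-level structure in a deliberately toy form: an upper-level planner chooses provincial quota shares to trade off subsidy cost, a national target, and a crude fairness term, while a closed-form stand-in for the provincial response plays the lower level. The three provinces, cost figures, weights, and the use of scipy's differential evolution in place of the paper's genetic algorithm are all illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy data for 3 provinces (entirely illustrative): unit renewable generation cost,
# potential capacity, and electricity demand.
cost = np.array([0.30, 0.45, 0.60])      # yuan/kWh above conventional power
cap = np.array([120.0, 80.0, 60.0])      # TWh of non-hydro renewable potential
demand = np.array([300.0, 250.0, 200.0]) # TWh total consumption
subsidy = 0.1                            # yuan/kWh central subsidy

def lower_level(quota_share):
    """Each province meets its quota at minimum cost, capped by its potential
    (a deliberately simple stand-in for the provincial optimisation)."""
    required = quota_share * demand
    return np.minimum(required, cap)

def upper_level_objective(quota_share):
    gen = lower_level(quota_share)
    shortfall = np.maximum(quota_share * demand - gen, 0.0)
    total_cost = np.sum(cost * gen) + subsidy * np.sum(gen)
    fairness = np.std(quota_share)               # crude proxy for a Gini term
    penalty = 50.0 * shortfall.sum()             # discourage infeasible targets
    national_target = 0.15 * demand.sum()        # required renewable share overall
    gap = max(national_target - gen.sum(), 0.0)
    return total_cost + 10.0 * fairness + penalty + 5.0 * gap

res = differential_evolution(upper_level_objective,
                             bounds=[(0.05, 0.40)] * 3, seed=0, tol=1e-6)
print("quota shares:", np.round(res.x, 3), " objective:", round(res.fun, 2))
```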
    The research findings reveal that: (1)Some local governments (e.g., Jilin, Henan, Yunnan) exhibit limited willingness to adhere to the central government’s quota scheme, while others (e.g., Liaoning, Xinjiang, Gansu) display notable enthusiasm, even surpassing their targets. (2)In comparison to the government allocation scheme, the bi-level optimization approach leads to lower subsidy costs, reduced emission reduction costs, and decreased energy substitution costs, while also promoting better equity. (3)Although there are variations in Gini coefficients calculated using different indicators, all Gini coefficients for the bi-level optimization scheme are below 0.2, indicating a high level of fairness. (4)Despite some differences in non-hydro renewable energy power quota allocation schemes under various scenarios, the trends align with real-world conditions. These results suggest that the model exhibits robust internal consistency and can offer valuable insights to the government for policy-making under different circumstances. Future research will further explore topics such as cross-regional power trading to enhance the alignment of renewable portfolio standard studies with practical realities.
    Financing Models in Agricultural Supply Chain with E-commerce Platform Pre-sale Order
    LU Qihui, ZHANG Shaoliang, TAN Qianhong
    2024, 33(6):  118-124.  DOI: 10.12005/orms.2024.0190
    Asbtract ( )   PDF (1108KB) ( )  
    References | Related Articles | Metrics
    A new pre-sale model based on e-commerce platforms has successfully bridged the gap between production and consumption, providing a novel approach to increase farmers’ income. It offers advantages such as securing demand in advance and more accurately predicting consumer needs, which optimizes warehouse management. Additionally, it enables early access to funds from pre-sales, alleviating financial pressures on farmers to some extent. However, given the substantial costs involved in agricultural production and storage for unsold goods, small and medium-sized farmers still face significant financial challenges.
    A review and summary of the existing literature reveals that while significant attention has been paid to product pre-sales and financing issues within agricultural supply chains in recent years, the integration of output uncertainty with product pre-sales and supply chain finance has not been thoroughly examined within the same framework. There is also room for further exploration into the financial value of pre-sales on e-commerce platforms. Therefore, this paper enriches the theoretical study of yield uncertainty and product pre-sale, while providing reference points for reducing risk associated with yield uncertainty and improving pre-sale system management on platforms. Additionally, it delves more deeply than traditional financing models to explore e-commerce-led agricultural supply chain financing strategies, introducing an innovative and proactive e-commerce financing model that enhances related theories in agricultural supply chain finance. This paper aims to contribute new insights and theoretical explorations in the context of e-commerce pre-sales, aiding the financing practices of agricultural supply chain stakeholders.
    As the new business format of e-commerce platform pre-sales continues to thrive, small and medium-sized farmers face an increasingly dynamic and competitive environment, where financing challenges become more evident. This research finds that: (1)Under external bank financing models, farmers’ profits increase with the volume of pre-sales, but when pre-sale volumes are too high, profits tend to plateau. Conversely, under the e-commerce platform’s early payment model, profits initially increase with the volume of pre-sales but eventually decrease. (2)When pre-sale volumes are low and wholesale price discount rates are high, both farmers and e-commerce platforms prefer the early payment model. However, as wholesale discount rates decrease, its attractiveness diminishes, and both parties lean towards reverse factoring financing on the e-commerce platform. (3)High expected capital return rates and wholesale price discount rates make the early payment model favorable for both parties. Yet, as wholesale discount rates decrease, financing preferences diverge; farmers may switch to the e-commerce platform’s reverse factoring financing model.
    Managerial implications are: (1)For farmers, more pre-sale quantities are not always better. Excessive pre-sale volumes can reduce or even negatively impact the alleviation of financial pressures. Farmers should plan their production inputs rationally and avoid blind manufacturing. (2)The choice of financing model is critically influenced by the volume of pre-sale, wholesale price discount rates, and expected capital return rates. Farmers should consider all aspects holistically, and e-commerce platforms should leverage their informational advantages to collect and forecast market pre-sale data, helping farmers make informed production decisions to increase income and align financing preferences. (3)E-commerce platforms, holding a dominant position, should make financing decisions strategically to lock in future capabilities, considering multiple factors such as expected capital return rates and wholesale price discount rates. It is crucial to set reasonable expected capital return rates without excessively pursuing low wholesale prices—moderating expected capital return rates and elevating wholesale discount rates could foster a win-win situation for all members of the agricultural supply chain.
    Research on Small-scale Agricultural Product Price Prediction Based on Decomposition and Integration Method
    LIU Hebing, HUA Mengdi, KONG Yujie, XI Lei, SHANG Junping
    2024, 33(6):  125-131.  DOI: 10.12005/orms.2024.0191
    Asbtract ( )   PDF (2023KB) ( )  
    References | Related Articles | Metrics
The key deployment of agricultural market work shows the country's determination to maintain the stable development of agricultural and rural markets, and also reflects the necessity of studying agricultural product price prediction. The price series of smallholder farmers' products fluctuate over a wide range, and sudden price surges and drops occur frequently, drawing intense public attention and undermining the stable development of the agricultural product market. Because the price series of small-scale agricultural products are markedly nonlinear and non-stationary, a single model predicts them poorly. To this end, this paper proposes a combined prediction model based on “decomposition and integration”.
Firstly, the Whale Optimization Algorithm (WOA) is applied to optimise the parameters of the Variational Mode Decomposition (VMD) algorithm, with Sample Entropy (SampEn) used as the fitness function to screen out the optimal parameters. The optimised variational mode decomposition is then used to perform a multi-mode decomposition of the agricultural product price series, alleviating the mode-mixing problem in complex small-scale agricultural price series and yielding modal components that reflect different characteristics of the original series. Secondly, the decomposed components and the residual sequence are fed into a Long Short-Term Memory (LSTM) neural network as feature inputs derived from the original price series, enhancing the learning ability of the LSTM network and improving the prediction accuracy of the combined model. In this study, the daily average price data of potato, lotus root, white radish, Chinese cabbage, broccoli and cabbage in Henan Province from January 1, 2016 to December 31, 2021 are selected as the research object, and the combined model is used to predict the price series of the six small-scale agricultural products.
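The decompose-forecast-integrate pipeline can be sketched as follows. A simple moving-average split stands in for the WOA-optimised VMD (which would normally supply the modal components), a small Keras LSTM forecasts each component, and the component forecasts are summed. The synthetic price series, the placeholder decompose function, the network size, and the training settings are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(3)

# Illustrative daily price series (the paper uses vegetable prices; this is synthetic).
t = np.arange(600)
price = 3 + 0.5 * np.sin(2 * np.pi * t / 180) + 0.05 * rng.normal(size=t.size).cumsum()

def decompose(x, windows=(30, 7)):
    """Stand-in for WOA-optimised VMD: split the series into smooth components
    via moving averages and keep the remainder as a residual component."""
    comps, resid = [], x.copy()
    for w in windows:
        smooth = np.convolve(resid, np.ones(w) / w, mode="same")
        comps.append(smooth)
        resid = resid - smooth
    comps.append(resid)
    return comps

def make_supervised(x, lookback=14):
    X = np.array([x[i:i + lookback] for i in range(len(x) - lookback)])
    return X[..., None], x[lookback:]

def fit_lstm(x, lookback=14):
    X, y = make_supervised(x, lookback)
    model = keras.Sequential([keras.Input(shape=(lookback, 1)),
                              keras.layers.LSTM(16),
                              keras.layers.Dense(1)])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=20, batch_size=32, verbose=0)
    return model.predict(X, verbose=0).ravel(), y

# Forecast each component separately and integrate by summation.
preds, actuals = [], []
for comp in decompose(price):
    p, y = fit_lstm(comp)
    preds.append(p)
    actuals.append(y)

combined_pred = np.sum(preds, axis=0)
combined_true = np.sum(actuals, axis=0)   # components sum back to the original series
rmse = np.sqrt(np.mean((combined_pred - combined_true) ** 2))
print(f"in-sample RMSE of the decomposition-ensemble forecast: {rmse:.4f}")
```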
The root mean square error (RMSE) and coefficient of determination (R2) are used as evaluation indicators of the models' prediction performance. The experimental results show that the RMSE of the WOA-VMD-LSTM combined model for the six series is 0.292, 0.381, 0.129, 0.125, 0.782 and 0.142, respectively, and the coefficient of determination is 0.755, 0.971, 0.947, 0.907, 0.911 and 0.973, respectively. The EMD-LSTM combined model and the ARIMA model are used to predict the same six price series, and the results of the three models are compared. The RMSE values obtained by the WOA-VMD-LSTM combined model for the lotus root, white radish, Chinese cabbage, broccoli and cabbage series are lower than those of the EMD-LSTM and ARIMA models, and its coefficients of determination are higher than those of the other two models. Although the coefficient of determination obtained by the WOA-VMD-LSTM model for the potato series is not better than that of the ARIMA model, its RMSE is 48.1% lower than that of the EMD-LSTM model and 47.2% lower than that of the ARIMA model. In summary, using the whale optimization algorithm to optimise the variational mode decomposition for sequence decomposition and a neural network to complete the price prediction can effectively improve prediction accuracy.
    This study tries to explore the influence of meteorological temperature, economic policy, crop yield, planting area and other factors on the daily average price data of six agricultural products, but does not achieve good results. Therefore, this paper uses the method of sequence decomposition to achieve the purpose of extracting sequence features. In the subsequent research, the sequence components can be divided according to different frequencies, and the characteristics of high-frequency components can be deeply analyzed to achieve deep noise reduction. The combined prediction model proposed in this study can effectively improve the accuracy of small-scale agricultural product price prediction, which not only stabilizes the supply and demand relationship of agricultural product market, but also protects the interests of agricultural product suppliers and consumers, and has the value of popularization and application.
    Forecast of Concrete Price Movement Based on Time Series and Improved Random Forest Model
    LIU Qing, HUANG Minghao, LEE Woon-Seek
    2024, 33(6):  132-138.  DOI: 10.12005/orms.2024.0192
    Asbtract ( )   PDF (1276KB) ( )  
    References | Related Articles | Metrics
    Ready-mixed concrete is one of the primary materials used in various types of construction, including railways, highways, bridges, tunnels, and buildings. Effectively and accurately predicting the price fluctuation trends of ready-mixed concrete can optimize construction planning, enhance economic benefits for construction enterprises, and hold significant importance for the planning of various construction projects.
    There are two feasible approaches for modeling the prediction of concrete prices: multivariable modeling and univariable modeling. Multivariable modeling involves first analyzing the factors that influence concrete price fluctuations and establishing related multivariate panel data. In contrast, univariable modeling uses historical price data to predict future prices. This method has the advantages of simple data collection and ease of operation, making it widely used in the prediction of various commodity prices.
    Existing research indicates that the random forest model exhibits higher predictive accuracy than other forecasting models. However, different data structures have their own unique characteristics. Optimizing the model for specific data structures can help enhance the algorithm’s performance on particular datasets.
    This paper constructs an autoregressive sequence using concrete price data, transforming the price trend prediction problem into a time series classification (TSC) problem. We then perform logical optimizations on the three core steps of building a random forest model. These enhancements improve the applicability of the random forest model to time series data, thereby increasing its performance in predicting concrete price fluctuation trends.
    Specifically, we first adjust the random sampling used for creating training subsets in the random forest to skewed sampling, strengthening the association between classification categories and classifiers within the random forest. Next, we modify the random feature vector sampling during decision tree splitting to stratified sampling, which helps preserve the temporal characteristics of the time series. Finally, we replace average voting with weighted voting, using the prediction accuracy of each decision tree as its weight. These targeted adjustments enhance the performance of the random forest algorithm in handling TSC tasks.
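The weighted-voting idea can be sketched as follows: an ensemble of decision trees is grown on bootstrap samples and each tree's vote is weighted by its out-of-bag accuracy. The skewed sampling and stratified feature sampling described above are not reproduced here, and the synthetic classification data merely stand in for the lagged-price features, so this is a sketch of the voting mechanism rather than the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for lagged price features labelled with the next price direction.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
n_trees = 50
trees, weights = [], []

for _ in range(n_trees):
    # Bootstrap sample (the paper additionally skews the sampling; uniform here).
    idx = rng.integers(0, len(X_tr), size=len(X_tr))
    oob = np.setdiff1d(np.arange(len(X_tr)), idx)
    tree = DecisionTreeClassifier(max_features="sqrt",
                                  random_state=int(rng.integers(1_000_000)))
    tree.fit(X_tr[idx], y_tr[idx])
    # Weighted voting: each tree's weight is its out-of-bag accuracy.
    acc = tree.score(X_tr[oob], y_tr[oob]) if len(oob) else 0.5
    trees.append(tree)
    weights.append(acc)

weights = np.array(weights) / np.sum(weights)
votes = np.stack([t.predict_proba(X_te) for t in trees])   # (trees, samples, classes)
weighted = np.tensordot(weights, votes, axes=1)            # weighted average of votes
pred = weighted.argmax(axis=1)
print(f"weighted-vote accuracy: {(pred == y_te).mean():.3f}")
```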
    The empirical results indicate that, compared to the original random forest algorithm, the improved model demonstrates significant advantages, achieving a prediction accuracy of 98.4% for changes in ready-mixed concrete prices. The precision, recall, and F1 score of the predictions are 98.7%, 98.2%, and 98.4%, respectively, enabling precise forecasting of price trends for ready-mixed concrete.
    To investigate the robustness of the Improved-RF model, we conduct a comparative analysis of price change predictions for rebar using both the native random forest algorithm and Improved-RF. To further validate the performance of the Improved-RF algorithm, we conduct comparative experiments with various deep learning models. The models selected for these experiments include multilayer neural networks (MLNN), convolutional neural networks (CNN), and long short-term memory networks (LSTM), all of which have demonstrated strong performance in various classification tasks. All models utilize the ReLU activation function and the SoftMax classifier. This study provides valuable insights for various time-series classification tasks and autoregressive-based construction material price predictions.
    Application Research
    Risk Preference and Enterprise Value ——Intermediary Effect Test Based on Leverage Ratio
    SONG Gaoya, LI Quan
    2024, 33(6):  139-144.  DOI: 10.12005/orms.2024.0193
    Asbtract ( )   PDF (957KB) ( )  
    References | Related Articles | Metrics
Risk preference stems from enterprises investing in projects with higher risks but positive expected net present value; it reflects how fully an enterprise exploits its investment opportunities and matters greatly for both individual enterprises and the macro-economy. From a micro perspective, a higher risk preference reflects the risk-taking and innovative spirit of managers, manifested in higher R&D and capital expenditures, which helps enterprises gain competitive advantages. Moreover, the essence of profit is the return on risk: to obtain high returns, corresponding risks must be taken. For the macro-economy, a higher risk preference among enterprises can accelerate the accumulation of social capital, improve production efficiency, and benefit long-term economic growth. In a perfect market, enterprises should choose all projects with positive expected net present value to maximize enterprise value. However, risk aversion causes some enterprises to give up projects with higher risks but positive expected net present value, which is not conducive to maximizing enterprise value. For Chinese enterprises in a period of structural transformation, how to give executives a moderate risk-taking spirit while avoiding excessive risk-taking is worthy of study by both the theoretical and practical circles.
From the perspective of agency cost and precautionary motivation, this paper examines the mechanism linking risk preference and corporate value, and conducts further analysis by the nature of property rights and equity concentration. Using China's A-share listed companies from 2009 to 2020 as a sample, the results show the following. First, risk preference is significantly positively correlated with enterprise value: as risk preference increases, enterprise value also increases. Second, the leverage ratio mediates this positive relationship; as risk preference increases, the leverage ratio rises, thereby promoting corporate value. Furthermore, compared with non-state-owned enterprises and companies with low ownership concentration, the incentive effect of risk preference on corporate value is more pronounced in state-owned enterprises and companies with high ownership concentration. This article explores relatively completely the logic by which risk preference affects corporate value. From the perspective of the leverage ratio, it identifies the channel through which risk preference affects corporate value, refines the research on the relationship between the two, and has implications for corporate investment and financial decision-making.
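A minimal sketch of the stepwise mediation test implied by these findings is given below, using synthetic data in which risk preference raises leverage and both raise firm value. The coefficients, sample size, and absence of controls and fixed effects are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Synthetic firm-level data: risk preference -> leverage -> firm value (illustrative only).
risk = rng.normal(size=n)
leverage = 0.4 * risk + rng.normal(scale=0.9, size=n)
value = 0.3 * risk + 0.5 * leverage + rng.normal(size=n)

X1 = sm.add_constant(risk)
step1 = sm.OLS(value, X1).fit()                         # total effect c
step2 = sm.OLS(leverage, X1).fit()                      # a-path: risk -> leverage
X3 = sm.add_constant(np.column_stack([risk, leverage]))
step3 = sm.OLS(value, X3).fit()                         # direct effect c' and b-path

a, b = step2.params[1], step3.params[2]
print(f"total effect c      = {step1.params[1]:.3f}")
print(f"direct effect c'    = {step3.params[1]:.3f}")
print(f"indirect effect a*b = {a * b:.3f}  (mediated through leverage)")
```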
Unlike the existing literature, which mainly studies the impact on corporate value from the perspectives of asset structure, the allocation of decision-making power and ownership nature, this paper directly analyzes and tests whether corporate risk preference enhances corporate value, deepening value-related research from the perspective of risk preference. In addition, some studies have explored the impact of corporate governance, equity incentives and other factors on corporate risk-taking, with corporate value mentioned only as an economic consequence of risk preference; the mechanism through which risk preference affects corporate value remains under-studied. This article therefore explores the channel from risk preference to corporate value from the perspective of the leverage ratio and details the relationship between the two. Finally, in view of the particularity of China's institutional background and the reality of high ownership concentration, this paper examines the moderating effects of the nature of property rights and ownership concentration. It finds that the incentive effect of risk preference on corporate value arises mostly in state-owned enterprises and is not significant in non-state-owned enterprises, and that, compared with enterprises without concentrated equity, the promoting effect of risk preference is more evident in enterprises with concentrated equity. These findings add new perspectives for theoretical and practical circles seeking to improve enterprise value and manage enterprise risk preference in the Chinese context.
    Default Risk of Chinese Corporate Bonds: From the Perspective of Ownership and Industry
    WANG Guanying, SUN Xiaomei, WU Yilu
    2024, 33(6):  145-150.  DOI: 10.12005/orms.2024.0194
    Asbtract ( )   PDF (952KB) ( )  
    References | Related Articles | Metrics
The Chinese corporate bond market has experienced rapid development since the issuance of the first corporate bond in 2007, becoming the world's second largest credit bond market after the United States. In 2014, “11 Chaori Bond” defaulted, breaking the rigid payment mechanism for Chinese corporate bonds. As of the end of 2020, a total of 590 corporate bonds had defaulted in China, with a total default amount of 520 billion yuan. This motivates us to investigate the determinants of the default risk of Chinese corporate bonds.
This paper selects 5160 corporate bonds listed on the Shanghai and Shenzhen Stock Exchanges from 2008 to 2020, including 590 defaulted bonds; all data are sourced from the Wind Financial Database. The number of defaulted bonds increased from 6 in 2014 to 155 in 2020. This paper studies the determinants of default risk from two aspects: ownership and industry. Bonds issued by state-owned enterprises are more often used for public infrastructure construction or national strategic purposes and have a lower default probability than those of private enterprises. Meanwhile, we find that the manufacturing and comprehensive industries have the highest number of defaults among all industries.
    Macroeconomic state also has a significant impact on debt repayment. When the economy is in a downward cycle, it often leads to declining corporate performance, limited revenue generation, and difficulty in paying back debts on schedule. If the economy improves, rising social demand will lead to better corporate performance and reduce the risk of default. Areas with higher economic development levels have better market environment and development prospects for companies, and we believe that the economic level and default risk in the region are negatively correlated. Many scholars have analyzed the factors of corporate finance and bond characteristics, and this paper takes corporate financial indicators and bond features as control variables.
The empirical results show that, in terms of corporate finance factors, the net asset growth rate, operating income growth rate, net profit margin, inventory turnover ratio, total asset turnover ratio, and cash ratio are negatively correlated with corporate bond defaults, indicating that deteriorating corporate finances increase the risk of default. Both ownership and industry are significantly associated with corporate bond defaults. Credit rating is not significantly related to default, indicating that the quality of China's credit rating market still needs to be improved and that Chinese credit rating agencies cannot effectively measure default risk.
This paper calculates the default probability of corporate bonds and performs an ROC curve test on the model. The area under the ROC curve reaches 0.9608, indicating that the model can accurately predict defaults of Chinese corporate bonds. The average marginal effect test shows that when the issuer is a state-owned enterprise, its bond default probability decreases by 2.36%. The manufacturing and comprehensive industries have the highest default rates, followed by wholesale and retail trade, construction, information technology, transportation and storage, and other sectors. From the perspective of the economic environment, if GDP increases by 1 trillion yuan per year, the issuer's corporate bond default probability decreases by 0.8%.
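The following sketch shows a logit-style default model evaluated with an ROC curve and average marginal effects, in the spirit of the analysis above. The synthetic covariates (a state-ownership dummy, a financial-health score, and a GDP proxy) and their coefficients are assumptions for illustration, not the paper's variables or estimates.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000

# Synthetic bond-level data (illustrative): state ownership dummy, a financial-health
# score, and a regional GDP proxy driving the latent default propensity.
soe = rng.integers(0, 2, size=n)
fin = rng.normal(size=n)
gdp = rng.normal(size=n)
latent = -2.0 - 1.0 * soe - 0.8 * fin - 0.4 * gdp + rng.logistic(size=n)
default = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([soe, fin, gdp]))
logit = sm.Logit(default, X).fit(disp=0)

prob = logit.predict(X)
print(f"ROC AUC: {roc_auc_score(default, prob):.3f}")
print(logit.get_margeff(at="overall").summary())   # average marginal effects
```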
The contributions of this paper are twofold. First, the existing literature predicts default risk mainly from financial conditions and the macroeconomic state; this paper incorporates ownership and industry factors to study the default risk of Chinese corporate bonds. Second, unlike previous research, this paper finds that credit ratings and maturity periods have no significant impact on Chinese corporate bond default risk: Chinese corporate bonds generally carry high credit ratings, and rating adjustments are relatively lagging, so credit ratings cannot effectively measure credit risk.
    Research on Characteristics of China’s Green Economy Efficiency and its Changing Trend in the New Period: Temporal and Spatial Distribution, Regional Differences, and Convergence Characteristics
    TANG Xinmeng, ZHOU Xiaoguang
    2024, 33(6):  151-157.  DOI: 10.12005/orms.2024.0195
    Asbtract ( )   PDF (1669KB) ( )  
    References | Related Articles | Metrics
The Fifth Plenary Session of the 18th Central Committee of the Communist Party of China (CPC) solidified green development as a paramount avenue for China's economic and social transition, crucial for sustainability and human health. Green economic efficiency emerges as a pivotal metric, reflecting the delicate balance between economic progress and environmental stewardship. Hence, understanding the nuances of China's green economic efficiency, especially its recent trends and transformations, bears profound theoretical and policy significance in navigating the intricate interplay between economic growth and environmental protection.
Green economic efficiency signifies the ability of an economic system to generate maximal output while minimizing environmental costs, under stable or reduced input levels. Scholars have predominantly utilized methodologies such as stochastic frontier analysis (SFA) or data envelopment analysis (DEA) to gauge China's green economic efficiency over recent decades. Building upon this foundation, the SSBM-DEA model has been instrumental in addressing input-output slackness and undesirable outputs in green economic efficiency assessments. Moreover, research efforts have delved into localized evaluations of green economic efficiency and its interactions with various factors such as financial agglomeration, economic concentration, and environmental regulations.
    However, a comprehensive review of existing literature underscores several critical gaps. Primarily, the focus has predominantly gravitated towards precise computations of green economic efficiency intensity, overlooking a nuanced understanding of its distinctive characteristics and evolving trends, which is imperative for both theoretical discourse and practical applications. Additionally, research perspectives have tended to be either regionally specific or centered on national aggregates, neglecting the spatial interdependencies between regions, thus constraining the comparative analysis and insights into regional disparities. Moreover, since the inception of China’s “high-quality economic development” paradigm in 2018, there has been a dearth of studies examining the post-2018 dynamics and emergent features of China’s green economic efficiency landscape, thus warranting urgent investigation.
    To address these lacunae, this study comprehensively investigates the spatiotemporal dynamics, spatial disparities, and convergence trends of China’s regional green economic efficiency from 2000 to 2020. Employing a multidimensional approach encompassing Kernel density estimation, standard deviation ellipse analysis, Dagum Gini coefficient and its decomposition, and β-convergence analysis, this research seeks to offer a holistic understanding of China’s green economic efficiency.
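As one concrete piece of the toolkit listed above, the sketch below runs an absolute β-convergence regression on a synthetic efficiency panel: average growth is regressed on the log of the initial level, and a negative slope indicates convergence. The panel dimensions, the convergence process generating the data, and the speed-of-convergence formula applied to the cross-sectional slope are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_regions, T = 30, 21                       # e.g. 30 provinces observed over 21 years

# Synthetic green-efficiency panel that converges towards a common level.
eff = np.empty((n_regions, T))
eff[:, 0] = rng.uniform(0.3, 0.9, size=n_regions)
for t in range(1, T):
    eff[:, t] = eff[:, t - 1] + 0.15 * (0.8 - eff[:, t - 1]) + rng.normal(0, 0.02, n_regions)

# Absolute beta-convergence: average growth regressed on the log of the initial level.
growth = (np.log(eff[:, -1]) - np.log(eff[:, 0])) / (T - 1)
X = sm.add_constant(np.log(eff[:, 0]))
fit = sm.OLS(growth, X).fit()

beta = fit.params[1]
# Implied annual convergence speed under the standard cross-sectional mapping
# beta = -(1 - exp(-lambda*(T-1)))/(T-1).
lam = -np.log(1 + beta * (T - 1)) / (T - 1)
print(f"beta = {beta:.3f} (negative => convergence), implied speed = {lam:.3f}")
```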
    The findings yield crucial insights into China’s green economic landscape. Temporally, China’s green economic efficiency exhibits cyclicality, transitioning towards a new phase of ascent, albeit with concurrent resurgence of regional disparities. Spatially, there is a discernible shift in the centroid of green economic efficiency towards the intersection of the Yellow River Basin and the middle reaches of the Yangtze River, accompanied by an eastward drift since 2018. Moreover, regional disparities emerge as significant drivers of overall disparities, with the northeastern, eastern coastal, Yellow River Basin, and middle reaches of the Yangtze River regions being primary contributors to increased regional disparities since 2018. Furthermore, β-convergence analysis indicates a trend towards convergence to higher values, signifying a period of positive momentum and opportunities for China’s green economic efficiency. Nonetheless, heterogeneity in regional convergence rates and influencing factors necessitates tailored policy interventions to maximize efficiency gains.
    Furthermore, this study underscores the need to address the emerging challenges and leverage the opportunities presented by China’s evolving green economic landscape. Notably, the imperative to foster coordinated efforts across eastern, central, and western regions to achieve balanced and sustainable green economic development cannot be overstated. Policy recommendations include promoting a staggered green economic efficiency development model, addressing regional disparities, and leveraging the momentum of convergence to propel China’s green economic efficiency to higher levels. In conclusion, this study contributes to a nuanced understanding of China’s green economic efficiency landscape, offering valuable theoretical insights and practical policy guidance for navigating the complex nexus between economic development and environmental sustainability in China’s pursuit of high-quality growth.
    Research on the Influence of Spatial Structure of Digital Economy Agglomeration on Energy Carbon Emissions and the Mechanism of Green Innovation
    CHEN Yubin, WANG Sen
    2024, 33(6):  158-164.  DOI: 10.12005/orms.2024.0196
    Asbtract ( )   PDF (960KB) ( )  
    References | Related Articles | Metrics
As a new form leading high-quality economic development, the digital economy is of great significance to achieving the strategic “double carbon” goal. The impact of the digital economy on energy carbon emissions has long been a hot research topic, and the finding that digital economy development drives energy carbon emission reduction has been verified accordingly. However, there is still room for deepening. On the one hand, under a heterogeneous spatial structure of digital economy agglomeration, the expectation that the digital economy drives energy carbon emission reduction may not be effectively realised. Because provinces are composed of urban units that differ in number, scale and endowment, inter-provincial differences are explained not only by scale and quantity but also by the spatial heterogeneity of urban development: at different stages of development, the urban units within a province may display “monocentric” or “polycentric” structures. Which spatial structure of digital economy agglomeration is more conducive to reducing energy carbon emissions, and how should the spatial structure of digital economy agglomeration in different cities be planned to promote emission reduction? On the other hand, most existing studies focus on the impact of the scale of digital economy development on energy carbon emissions, while ignoring the impact of the spatial structure of digital economy agglomeration and its mechanism.
Therefore, from the perspective of spatial heterogeneity and using provincial and city-level data for 2011-2019, this paper empirically analyzes the impact of monocentric and polycentric spatial structures of digital economy agglomeration on energy carbon emissions and the mechanism of green innovation, employing a static panel model, an instrumental variable estimation model and a mediating effect model. The results show that: (1)The scale development of the digital economy significantly inhibits energy carbon emissions. (2)A monocentric spatial structure of digital economy agglomeration promotes energy carbon emissions, whereas a polycentric spatial structure helps to curb them. (3)A monocentric spatial structure promotes an excessive geographical concentration of green innovation elements, which in turn increases energy carbon emissions, whereas the rational spatial allocation of green innovation elements driven by a polycentric structure helps to curb energy carbon emissions.
Compared with previous studies, this paper makes three extensions. First, it takes the heterogeneity of digital economy agglomeration at the city level within the provincial system as its starting point and considers the impact of the spatial structure of digital economy agglomeration on energy carbon emissions. Second, starting from the heterogeneity of polycentricity, it investigates how different degrees of polycentric spatial structure affect energy carbon emissions. Third, it identifies the transmission path through which the spatial structure of digital economy agglomeration affects energy carbon emissions. The conclusions carry policy implications such as increasing investment in digital economy construction, optimizing the spatial structure of digital economy agglomeration and strengthening the green innovation path, thereby enriching theoretical research on the relationship between the digital economy and energy carbon emissions and providing experience for relevant departments in formulating digital economy reform systems and carbon emission reduction strategies.
    Time-varying Spillover Effects and Portfolio Strategies between Clean Energy and Metal Markets
    ZHU Xuehong, DING Qian, CHEN Jinyu
    2024, 33(6):  165-170.  DOI: 10.12005/orms.2024.0197
    Asbtract ( )   PDF (1163KB) ( )  
    References | Related Articles | Metrics
As the world pays increasing attention to climate change, clean energy, an important resource for addressing climate change and achieving the energy transition, has shown strong development momentum. However, compared with traditional energy systems, clean energy systems are more metal-intensive, requiring more critical metals in both variety and quantity. With the vigorous development of the clean energy industry, the consumption of metals in clean energy systems has gradually increased, and changes in supply and demand have reshaped the relationship between the clean energy and metal markets. The increased connectedness between clean energy and metals has intensified the cross-market spillover effects between the two markets, and such risk spillovers will undoubtedly have an adverse impact on the long-term stable development of the green financial market. Therefore, accurately measuring the level of risk spillover between the clean energy and metal markets and analysing the characteristics and paths of risk transmission in depth are crucial for preventing cross-market risk contagion and for investment and risk management decisions.
To provide insight into the financial connectedness and investment strategies between clean energy and metals, this paper integrates the time-varying parameter vector autoregressive (TVP-VAR) model with the DY spillover index method and examines the dynamic spillover effects between 11 clean energy sub-sector markets and the metal markets. It uses complex network methods to construct directional risk spillover networks between the clean energy and metal markets to analyse the contagion characteristics and paths of risk spillovers. Finally, based on the risk spillover analysis, a comprehensive sub-industry analysis of hedging and portfolio optimization between the clean energy and metal markets is conducted to provide a reference for investors to mitigate risks and choose the optimal asset allocation strategy.
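A static, simplified version of the spillover calculation can be sketched as follows: fit a VAR to synthetic returns, build the forecast error variance decomposition from orthogonalised moving-average coefficients (Cholesky identification, a simplification of the generalised FEVD and of the TVP-VAR used in the paper), and report the Diebold-Yilmaz total spillover index as the share of forecast error variance attributable to other markets. The four market labels and the return-generating process are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
T, n = 500, 4                             # e.g. a clean energy index and 3 metal indices

# Synthetic correlated daily returns as a stand-in for the clean-energy/metal series.
chol = np.linalg.cholesky(0.4 * np.ones((n, n)) + 0.6 * np.eye(n))
rets = rng.normal(size=(T, n)) @ chol.T
data = pd.DataFrame(rets, columns=["clean", "base", "precious", "rare_earth"])

res = VAR(data).fit(maxlags=2)
H = 10
theta = res.orth_ma_rep(maxn=H - 1)       # orthogonalised MA coefficients, shape (H, n, n)

# Forecast error variance decomposition at horizon H (Cholesky identification).
num = np.sum(theta ** 2, axis=0)          # contribution of shock j to variable i
fevd = num / num.sum(axis=1, keepdims=True)

# Diebold-Yilmaz total spillover: variance share due to shocks from other markets.
total_spillover = 100 * (fevd.sum() - np.trace(fevd)) / n
print(f"total spillover index at horizon {H}: {total_spillover:.1f}%")
```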
The results show that the spillover effects between the clean energy and metal markets are time-varying and sensitive to financial and economic uncertainty events. The spillover effects between clean energy sub-sector markets and metal markets are heterogeneous: there are strong spillover effects between energy management/energy storage equities and the metal markets, base metals act as spillover transmitters, and rare earth metals act as spillover receivers. The marginal net spillover network indicates that the shock of the COVID-19 pandemic led to a significant increase in risk spillover effects between the clean energy and metal markets. Diversification benefits can be achieved by adding metal assets to most clean energy portfolios, with the cost and effectiveness of hedging depending on the type of clean energy stocks.
    Our research provides important reference for policymakers in developing risk management frameworks and for investors in making optimal portfolio allocation decisions. In the current context where clean energy firm stocks have become the main choice for environmental investors, designing portfolio strategies for clean energy and metal stocks is conducive to effectively exerting the positive environmental and socio-economic impacts on clean energy investment. Future research will further explore the connectedness between more different types of metals and the clean energy market. In addition, the inclusion of traditional energy markets can be further considered in investment portfolio strategies to provide investors with a reference for avoiding investment risks and maintaining financial market stability.
    Subsidy Strategy Research into Unmanned Delivery Vehicle on Cainiao Platform Considering Distribution Frequency
    MENG Xiuli, AN Kun, LIU Bo
    2024, 33(6):  171-177.  DOI: 10.12005/orms.2024.0198
    Asbtract ( )   PDF (1076KB) ( )  
    References | Related Articles | Metrics
Last-kilometre delivery has long been a serious problem for the logistics industry. In some schools and communities, parcels are difficult to deliver to the door because of the large number of parcels, the large number of recipients and the long distances involved, which inconveniences recipients who have little free time or live far from the courier station. As a result, some logistics platforms have introduced driverless delivery services. Amazon's new Robo Runner cloud service, for example, already coordinates deliveries among robots from multiple vendors, and Alibaba's "little donkey" unmanned delivery vehicle, operated by the Cainiao platform, has been used at many universities since 2021. In order to increase its influence and cultivate recipients' usage habits, the platform often provides subsidies to increase demand.
Combined with the characteristics of the Cainiao platform's unmanned delivery vehicles, six subsidy schemes are proposed and models are established according to whether the demand information of the recipient groups is truthful. The optimal distribution frequency, subsidy and profit under the different schemes are solved, with particular attention to how the distribution frequency and the recipient groups' demand information affect the choice of subsidy scheme. The workflow of the Cainiao platform's unmanned delivery vehicles is as follows: the platform presets the routes of the unmanned delivery vehicles and the rules for calculating the delivery frequency. After receiving delivery information, a recipient fills in demand information on the Cainiao App to place an order and reserve delivery to the building. Once the Cainiao platform receives the orders, its back end calculates the stopping points and distribution frequency and arranges unmanned vehicle delivery, so that the recipient receives the package within the corresponding time. The platform first collects the demand information of the recipient groups and then calculates the distribution frequency and subsidy once this information is known. After observing the scheme adopted by the platform, each recipient group decides whether to use the delivery service.
Limited by the authenticity of recipient group information, the customized subsidy scheme with a fixed distribution frequency, the no-subsidy scheme with a customized distribution frequency and the fixed subsidy scheme are not advisable. Under a fixed distribution frequency, the profit of the no-subsidy scheme is always higher than that of the existing subsidy scheme; however, the subsidy scheme allows the platform to regulate market coverage more flexibly and is conducive to improving the utility of the recipient groups. Under a customized delivery frequency, providing customized subsidies does not distort the recipient group information collected by the platform; the platform should offer a higher distribution frequency to recipient groups with high demand and higher subsidies to recipient groups with low demand. During the promotion period of the platform, only the scheme with a fixed distribution frequency and a fixed subsidy enables the platform's delivery service to cover the market fully. In the mature period of platform operation, the choice of subsidy scheme depends on market coverage and profits.
This paper considers a logistics monopoly market composed of a monopoly Cainiao platform and several recipient groups, derives the optimal distribution frequency, subsidy and profit and how they change with recipients' demand, and studies the influence of the distribution frequency on the platform's subsidy scheme. The paper focuses on the subsidy strategy for unmanned delivery vehicles on a monopoly Cainiao platform and does not consider the multi-platform case. With the development of artificial intelligence and the implementation of the national anti-monopoly law, the monopoly position of the Cainiao platform cannot remain unchanged; therefore, in the future we will further study subsidy strategies for unmanned delivery vehicles under multi-platform competition or cooperation.
    Team Orienteering Pickup and Delivery Problem with Electric Vehicles and Pickup-point Selection
    WU Tingying, MENG Ting, TAO Xinyue
    2024, 33(6):  178-184.  DOI: 10.12005/orms.2024.0199
    Asbtract ( )   PDF (1144KB) ( )  
    References | Related Articles | Metrics
Electric vehicles are widely used in logistics distribution as the government pays increasing attention to green logistics. Offline physical chain stores expand online channels to reach more consumers: consumers place orders online and stores deliver the goods offline. Under this consumption model integrating online e-commerce and offline stores, the logistics provider needs to select a pickup point from several stores to pick up the goods and then deliver them to the customers. In actual logistics distribution, delivery resources are often insufficient or delivery to certain customers is uneconomical, so logistics providers can serve only some delivery requests in order to maximise profit. Which customers should be selected to join the delivery route under such resource constraints is exactly what the team orienteering problem studies. Therefore, focusing on the situation in which each request has multiple selectable pickup points and distribution resources are insufficient, this paper studies a team orienteering pickup and delivery problem with electric vehicles and pickup-point selection (TOPDP-EVPS).
A mixed integer programming model is formulated for the problem for the first time, in which a decision variable models the pickup point selection. In this model, it is not necessary to satisfy all delivery requests; the objective is to maximize the total profit collected by the electric vehicles without exceeding a predefined number of vehicles and the maximum time limit of each vehicle. Since this model combines the formulations of the pickup and delivery problem and the team orienteering problem and adds the pickup point selection decision variable, it is hard for an exact algorithm to solve within an acceptable time. Due to this complexity, an improved adaptive large neighborhood search algorithm (IALNS) is designed to solve the problem. The algorithm combines a tabu strategy with the idea of simulated annealing to avoid becoming stuck in a local optimum too early, and multiple destroy and repair operators are designed for charging stations and request nodes, combining classical operators from the literature with two new operators designed for the first time, namely a greedy random repair operator and a minimum spanning tree destroy operator, to improve the performance of the algorithm.
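The adaptive core that an ALNS-type method builds on can be sketched as follows: destroy and repair operators are selected by roulette-wheel weights, candidate solutions are accepted with a simulated-annealing criterion, and the weights of successful operators are reinforced. The toy selection problem, the placeholder operators, and the weight-update constants below are illustrative assumptions and bear no relation to the paper's TOPDP-EVPS operators.

```python
import math
import random

random.seed(0)

# Toy "solution": a subset of at most CAP requests; profit = collected value minus a
# travel proxy. This is purely illustrative, not the TOPDP-EVPS model itself.
values = [random.randint(5, 20) for _ in range(15)]
CAP = 8

def profit(sol):
    return sum(values[i] for i in sol) - 2.0 * len(sol)

def random_removal(sol):
    sol = sol[:]
    for _ in range(min(2, len(sol))):
        sol.pop(random.randrange(len(sol)))
    return sol

def worst_removal(sol):
    return sorted(sol, key=lambda i: values[i])[2:] if len(sol) > 2 else sol[:]

def greedy_insertion(sol):
    sol = sol[:]
    for i in sorted(set(range(len(values))) - set(sol), key=lambda i: -values[i]):
        if len(sol) >= CAP:
            break
        sol.append(i)
    return sol

def random_insertion(sol):
    sol, pool = sol[:], list(set(range(len(values))) - set(sol))
    random.shuffle(pool)
    while pool and len(sol) < CAP:
        sol.append(pool.pop())
    return sol

destroy_ops, repair_ops = [random_removal, worst_removal], [greedy_insertion, random_insertion]
w_d, w_r = [1.0] * len(destroy_ops), [1.0] * len(repair_ops)

current = best = list(range(5))
T_sa, cooling = 10.0, 0.995

for _ in range(2000):
    d = random.choices(range(len(destroy_ops)), weights=w_d)[0]
    r = random.choices(range(len(repair_ops)), weights=w_r)[0]
    candidate = repair_ops[r](destroy_ops[d](current))

    delta = profit(candidate) - profit(current)
    if delta > 0 or random.random() < math.exp(delta / T_sa):  # SA acceptance rule
        current = candidate
        # Adaptive weight update: reward the operators behind accepted/improving moves.
        score = 5.0 if profit(candidate) > profit(best) else 1.0
        w_d[d] = 0.8 * w_d[d] + 0.2 * score
        w_r[r] = 0.8 * w_r[r] + 0.2 * score
    if profit(current) > profit(best):
        best = current[:]
    T_sa *= cooling

print("best toy profit:", profit(best), "requests served:", sorted(best))
```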
    In order to demonstrate the correctness of the TOPDP-EVPS model and the effectiveness of the IALNS algorithm, a number of numerical experiments are conducted. The test instances are generated from existing instances of the pickup and delivery problem with electric vehicles by adding several pickup points to the original pickup point of each delivery point to form the corresponding set of pickup points. At the same time, resource constraints on the number of vehicles and the maximum route time are added to each instance. Comparing the results obtained by the CPLEX solver and the IALNS algorithm on 36 small-scale TOPDP-EVPS instances, we find that the solutions of 13 instances obtained by IALNS yield higher profits than those found by CPLEX. For the remaining 23 instances, the IALNS solutions are the same as those of CPLEX, while the solving time of IALNS is much shorter, indicating that IALNS ensures both solution quality and solution speed. Furthermore, the influence of pickup-point selection on the total profit is analyzed through comparative experiments. The results on large-scale instances show that the IALNS algorithm can stably and effectively solve instances of the three distribution types, and the average total profit of the solutions with pickup-point selection increases by 11.28%, 14.75%, and 14.47%, respectively, compared with those without pickup-point selection. Finally, we evaluate the effectiveness of the two new operators proposed in the algorithm. Comparative experiments show that, with the contribution of the new operators, the average total profit of the solutions for the three kinds of large-scale instances increases by 0.97%, 0.97%, and 1.03%, respectively.
    In the future, building on this problem, further research can consider distribution requests with time windows, as well as nonlinear charging and power consumption of electric vehicles, in order to address logistics and distribution problems that are closer to the real world.
    Internet Information Interaction Network and ESG Quality of Listed Companies
    ZHENG Guanqun, REN Peiyu
    2024, 33(6):  185-191.  DOI: 10.12005/orms.2024.0200
    Abstract ( )   PDF (993KB) ( )  
    References | Related Articles | Metrics
    The transformation of economic development modes and the development of green industries are inherent requirements for high-quality economic development in China. “Environmental, Social, and Governance” (ESG) provides a systematic and quantifiable operational framework for sustainable and green development, gradually becoming a crucial source of non-financial information for assessing enterprises and guiding investments, as well as a focal point for government and regulatory bodies. Some scholars have investigated the impact of public opinion on corporate environmental pollution behaviors, environmental investments, green innovation efficiency, and social responsibility. However, these studies often focus on traditional media, overlooking the potential role of internet social platforms. Internet social platforms play a positive role in the structure and behavior of the capital market, especially as the information interaction networks forming around key opinion leaders may become a significant exogenous factor affecting corporate governance performance. Based on this background, this paper primarily examines the impact of internet information interaction networks on the ESG quality of listed companies.
    This paper utilizes user-relationship and post data from Xueqiu.com, a mainstream financial social network platform in China, to build a complex information interaction network centered on key opinion leaders. The centrality of listed companies in this network is used as a proxy variable for public opinion attention. Subsequently, using annual panel data of Chinese A-share listed companies from 2013 to 2020, a quadratic regression model is constructed and estimated to examine the impact of this attention on the ESG quality of listed companies. Finally, the sample is divided into high and low network centrality groups based on the average centrality, and moderation effect models are constructed and estimated separately in the two subsamples to investigate the mechanisms through which internet public opinion attention affects the ESG quality of listed companies.
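    As a minimal sketch of the quadratic (U-shaped) specification described above, the snippet below regresses ESG quality on network centrality and its square on synthetic data. The column names (esg, centrality, size, leverage, firm_id) and the data-generating step are hypothetical stand-ins; the paper’s actual model presumably adds further controls and firm/year fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-year panel standing in for the real data (columns are hypothetical).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "firm_id": rng.integers(0, 50, n),
    "centrality": rng.uniform(0, 1, n),
    "size": rng.normal(10, 1, n),
    "leverage": rng.uniform(0, 1, n),
})
# Simulate a U-shaped relationship purely for illustration.
df["esg"] = 5 - 4 * df["centrality"] + 4 * df["centrality"] ** 2 + rng.normal(0, 0.5, n)
df["centrality_sq"] = df["centrality"] ** 2

# Pooled-OLS version of the quadratic specification with firm-clustered errors;
# a U shape requires a positive coefficient on centrality_sq and a turning point
# -b_centrality / (2 * b_centrality_sq) inside the observed range of centrality.
model = smf.ols("esg ~ centrality + centrality_sq + size + leverage", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.params)
```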
    The research finds that under the dual influence of supervision and pressure mechanisms, as the centrality of listed companies in the information interaction network increases, their ESG quality initially declines then rises, forming a U-shaped pattern. When the network centrality of the listed companies is low, the pressure mechanism generated by the information interaction network has a stronger effect than the supervision mechanism, suppressing ESG quality. Conversely, when network centrality is high, the supervision mechanism’s effect surpasses that of the pressure mechanism, enhancing ESG quality. As an external channel and informal factor, internet public opinion attention has a more significant impact on the ESG of low-pollution industries and non-state-owned listed companies that are less formally regulated, indicating its supplementary effect on corporate governance. The implications of the above conclusions are as follows: First, for regulatory authorities, it is essential to recognize the role of online social platforms and actively integrate online resources to enhance regulatory governance using internet information interaction networks. This includes strengthening the monitoring of listed companies’ ESG quality and punishing negative behaviors, as well as guiding individual investors to establish correct ESG investment concepts and managing public sentiment effectively. Second, for listed companies, it is advisable to use online platforms to enhance the voluntary disclosure of non-financial information, improve the level and quality of information disclosure, and proactively accept investor supervision to achieve long-term sustainable development.
    Project Portfolio’s Multi-sourced Risk Propagation and Resilience Measurement
    ZOU Xingqi
    2024, 33(6):  192-198.  DOI: 10.12005/orms.2024.0201
    Abstract ( )   PDF (1352KB) ( )  
    References | Related Articles | Metrics
    Due to shared resources, similar technology and process requirements, overlapping target markets, and the diffusion of knowledge or experience among projects, there exist dependencies among the various projects within a project portfolio. Because of these dependencies, risks occurring in one project can be transferred to other projects, potentially leading to the failure of the entire portfolio. Aiming at the multi-sourced risks in complex R&D projects and risk propagation between projects, the paper builds a risk propagation model that accounts for multi-sourced risks and the multi-stated problems caused by propagation, based on an improved Bayesian network model.
    Firstly, the paper analyzes the multi-sourced risks and multi-stated problems in complex R&D projects. Multi-sourced risks refer to the different types of risks that exist during the project portfolio process, such as technological risks, management risks, business risks, and external risks. Multi-stated problems refer to changes in project status caused by the occurrence of risks and their cascading propagation within the project portfolio network. Based on the indicators of “the probability of risk occurrence” and “the probability of risk diffusion”, the paper classifies the projects in the portfolio into four states, specifically: 1)The risk transferrer, which refers to projects with a high probability of risk occurrence and a high probability of risk diffusion, indicating that the project is prone to risks and is likely to transfer them to other projects in the network. 2)The risk terminator, which refers to projects with a high probability of risk occurrence but a low probability of risk diffusion, indicating that the project has encountered risks but its team has excellent risk-handling capabilities and has effectively resolved them; in the process, the project accumulates rich knowledge and experience and ensures that the risks do not transfer to other projects in the network. 3)The risk immunizer, which refers to projects with a low probability of risk occurrence and a low probability of risk diffusion, indicating that the project has not yet encountered risks and, additionally, its team possesses excellent risk-handling capabilities, so that any risks can be resolved within the project without transferring to other projects in the network. 4)The risk susceptible, which refers to projects with a low probability of risk occurrence but a high probability of risk diffusion, indicating that the project has not yet encountered risks, but its team has poor risk-handling capabilities, making it difficult to resolve risks within the project and increasing the likelihood of these risks transferring to other projects in the portfolio. In summary, the probability of risk occurrence depends on the project’s own risk probability and the probability of receiving risks from other dependent projects through risk propagation, while the probability of risk diffusion depends on the risk-handling capabilities of the project’s R&D team after risks occur.
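    A minimal Python sketch of this four-state classification, with illustrative 0.5 thresholds standing in for whatever cut-offs the paper actually uses:

```python
def classify_project(p_occurrence, p_diffusion, tau_o=0.5, tau_d=0.5):
    """Map a project to one of the four states described above.
    p_occurrence: probability of risk occurrence; p_diffusion: probability of
    risk diffusion; tau_o, tau_d: illustrative thresholds (assumptions)."""
    high_occ = p_occurrence >= tau_o
    high_dif = p_diffusion >= tau_d
    if high_occ and high_dif:
        return "risk transferrer"      # prone to risk and likely to spread it
    if high_occ and not high_dif:
        return "risk terminator"       # hit by risk but contains it
    if not high_occ and not high_dif:
        return "risk immunizer"        # unlikely to be hit, able to contain
    return "risk susceptible"          # unlikely to be hit, but would spread it

print(classify_project(0.8, 0.7))      # -> risk transferrer
```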
    Furthermore, the paper analyzes risk propagation in the portfolio network by constructing an improved Bayesian network. A Bayesian network, a commonly used method in machine learning, determines posterior probabilities from known conditional probabilities and prior probabilities. In traditional Bayesian network models, nodes have only two states: Failure (F) and True (T). However, for individual projects within a project portfolio, merely using failure and success to measure project status is inaccurate; it is necessary to measure the impact of “multi-sourced risks” and “multi-stated problems” on risk propagation in the portfolio. Therefore, the paper constructs an improved Bayesian network model to analyze the risk propagation process under multi-sourced risks and multiple states.
    In addition, resilience is an important issue in the field of complex networks; it refers to the ability of the whole network to return to its initial state or a better state when some element of the network is exposed to risk. For the portfolio network, resilience refers to the ability of the entire portfolio to withstand risk and regain its initial performance when a project is exposed to risk. Under risk propagation, the risk influence includes both a direct impact and an indirect impact. The direct impact falls on the project where the risk arises, whose ability to achieve its initial performance may be affected. The indirect impact concerns the other projects in the portfolio: the risk may be transmitted to projects in the network that are directly or indirectly related to the affected project, so that the performance of the whole portfolio is affected. Accordingly, the paper analyzes the resilience of the project portfolio network through the following aspects: 1)The ability of the whole portfolio to achieve its initial performance after the occurrence of risks, that is, the robustness of the project portfolio. 2)How quickly the portfolio can recover from the risk event, i.e., the time required for the portfolio to recover its initial performance. 3)The cost required for the entire project portfolio to regain its initial performance after the occurrence of risks. Finally, an R&D project portfolio is taken as an example to verify the effectiveness of the proposed model and method.
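    The snippet below illustrates one simple, commonly used way to quantify the first of these aspects: an area-based resilience index measuring how much of the initial performance level is retained over an observation window after a risk event. The trajectory and the metric are generic illustrations, not the paper’s specific resilience measure.

```python
import numpy as np

def resilience_index(t, performance, p0):
    """Share of the initial performance p0 retained over [t[0], t[-1]],
    computed as the normalized area under the performance trajectory
    (trapezoidal rule). A generic area-based resilience metric."""
    area = np.sum(0.5 * (performance[1:] + performance[:-1]) * np.diff(t))
    return area / (p0 * (t[-1] - t[0]))

# Illustrative trajectory: performance drops after a risk event at t=2 and recovers by t=5.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
perf = np.array([1.0, 1.0, 0.6, 0.7, 0.85, 1.0, 1.0])
print(round(resilience_index(t, perf, p0=1.0), 3))   # -> 0.858
```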
    IGDT Optimization Model of Project Portfolio Selection Considering Investment Uncertainty
    LI Jinmeng, LI Xingmei, YAN Qingyou, AI Xingbei, LIU Da
    2024, 33(6):  199-206.  DOI: 10.12005/orms.2024.0202
    Abstract ( )   PDF (1165KB) ( )  
    References | Related Articles | Metrics
    With the continuous development of financial markets and industrial policies, the number of investment fields and projects available to enterprises is increasing. For an enterprise, the budget for project investment is usually limited, and selecting the set of projects that can maximize the return on investment under financial constraints has been the focus of attention in previous project portfolio selection problems. However, due to the outbreak of sudden public events, a large number of projects have been damaged or even stalled due to a lack of labor and supply chain disruptions, and countless enterprises have suffered from broken capital chains or even collapsed due to failure to meet expected revenue targets on time. Before such events occur, business managers must adjust revenue targets based on the environment and market conditions and make decisions on the project portfolio most likely to achieve the expected goals. At the same time, because of the policy change and the lack of sufficient knowledge of project investment in emerging areas, managers’ project investment behavior often presents a certain degree of uncertainty. Corporate managers have a particular risk appetite when making portfolio investment decisions. Conservative decision-makers will reduce the investment amount to avoid risky losses; aggressive decision-makers will increase the investment amount to seek risky gains. Therefore, in the face of limited investment budgets and uncertain project returns, how enterprises make project portfolio choices based on risk expectations has become a significant challenge for managers and current research.
    Based on the above issues and context, this study introduces the concept of the net present value bias coefficient to describe managers’ risk preferences. Information gap decision theory (IGDT) is employed to formulate project portfolio selection models based on these risk preferences. Specifically, an envelope constraint is utilized to represent the uncertainty of the investment budget. On this basis, a robust model under the risk-averse strategy and an opportunity model under the opportunity-seeking strategy are constructed according to risk preference. The objective of the robust model is to find the maximum uncertainty in the investment budget under which the expected net present value is still satisfied; the objective of the opportunity model is to find the minimum uncertainty in the investment budget under which the expected net present value can be reached. The constructed bi-level optimization model is transformed into a single-level optimization model for solution by analyzing the relationship between the net present value and the investment budget. The model also considers critical factors in project portfolio selection such as active interruption, flexible periods, and financial constraints.
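    For concreteness, a generic information-gap formulation of the two strategies can be written as follows; the symbols (fractional-error envelope $U$, nominal budget $\tilde{B}$, selection vector $x$, bias coefficient $\delta$) are shorthand for this summary rather than the paper’s exact notation:
$$U(\alpha,\tilde{B})=\{\,B:\ |B-\tilde{B}|\le \alpha\,\tilde{B}\,\},\qquad \alpha\ge 0,$$
$$\hat{\alpha}(x)=\max\Big\{\alpha:\ \min_{B\in U(\alpha,\tilde{B})}\mathrm{NPV}(x,B)\ \ge\ (1-\delta)\,\mathrm{NPV}_{0}\Big\}\quad\text{(robust model)},$$
$$\hat{\beta}(x)=\min\Big\{\beta:\ \max_{B\in U(\beta,\tilde{B})}\mathrm{NPV}(x,B)\ \ge\ (1+\delta)\,\mathrm{NPV}_{0}\Big\}\quad\text{(opportunity model)},$$
    where $\mathrm{NPV}_{0}$ is the net present value of the deterministic optimum. The robust model seeks the largest budget uncertainty that still guarantees the (downward-adjusted) expected return, while the opportunity model seeks the smallest uncertainty that makes the (upward-adjusted) target attainable.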
    This paper uses a set of example data to solve the problem with both the deterministic model and the IGDT models, obtains the optimal project portfolio selection for each model, and conducts a comparative analysis. Additionally, the impact of the net present value bias coefficient is analyzed. The results indicate that, compared with the deterministic model, the optimal solution of the robust model reduces the number of selected projects to 9, decreasing the net present value of the selected portfolio by CNY 13,160, whereas the optimal solution of the opportunity model increases the number of selected projects to 11, increasing the net present value of the selected portfolio by CNY 15,340. Under the premise of meeting the expected net present value, the risk-averse strategy achieves a cost-saving ratio of 16.0%, while the minimum increase in investment under the opportunity-seeking strategy is 11.2%. When the net present value bias coefficient varies within a small range, the best way to meet the manager’s expected net present value is to adjust the investment amount: under the risk-averse strategy, to retain as much investment cost as possible, and under the opportunity-seeking strategy, to minimize additional investment expenditure. Once the bias coefficient exceeds a certain range, reducing the investment cost no longer significantly affects the net present value under the expected returns, and increasing the investment cost leads to a decrease in benefit. The proposed model can assist managers in formulating optimal project portfolio investment strategies under different risk preferences, providing a reference for project risk control and capital management optimization.
    For future research, there are many relevant issues that need to be explored in depth. For example: 1)the cost and value of the project itself fluctuate to a certain extent due to the influence of the environment, and there is a correlation between the two; 2)there is an inflow of external funds in different periods of the project implementation, and these factors directly affect the results of the project portfolio selection. Therefore, in the next stage, we can study the uncertainty of project investment cost, investment return, external funds at various stages, and other factors to extend the IGDT model of project portfolio selection. This would further refine the research on project portfolio selection, making the solution results more practical and applicable.
    Option Pricing for SSE 50ETF Considering Impact of International Financial Risks
    SUN Youfa, YAO Yuhang, GONG Yishan, QIU Zijie, LIU Caiyun
    2024, 33(6):  207-213.  DOI: 10.12005/orms.2024.0203
    Abstract ( )   PDF (1177KB) ( )  
    References | Related Articles | Metrics
    In recent years, the outbreak of COVID-19, fluctuations in energy market prices, and other global events have led to increasingly frequent occurrences of risk contagion across international financial markets. Understanding how international financial risks impact the Chinese market has become a prominent topic in academia. Most existing literature focuses on the stock market, using models such as risk spillover networks to study the mechanism and impact of international financial risks on China’s stock market. However, there has been relatively less focus in the literature on the risk spillover effects from international stock markets to China’s options market.
    Building on existing empirical findings, we select the price fluctuations of the S&P 500 ETF in the US market as an exogenous risk source for China’s options market. Specifically, the initial outbreak of COVID-19 from February to April 2020 serves as the sample period, the better to examine the impact of extreme international financial risks on China’s options market. Inspired by the emerging field of econophysics, we employ the step response function of an underdamped second-order system to capture the turbulent pattern of the S&P 500 ETF during the pandemic. This function is then embedded into the return process of the SSE 50ETF price, yielding a BS model that incorporates international financial risks, referred to as the IFR_BS model.
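    For reference, the step response of an underdamped second-order system with damping ratio $0<\zeta<1$ and natural frequency $\omega_n$ takes the textbook form
$$y(t)=1-\frac{e^{-\zeta\omega_n t}}{\sqrt{1-\zeta^{2}}}\,\sin\!\left(\omega_d t+\varphi\right),\qquad \omega_d=\omega_n\sqrt{1-\zeta^{2}},\quad \varphi=\arccos\zeta,$$
    which produces a damped oscillation around the new level, consistent with the turbulence-then-stabilization pattern described above. How this response is fitted to the S&P 500 ETF and embedded into the SSE 50ETF return process follows the paper’s construction and is not reproduced here.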
    However, financial physics models like the IFR_BS model typically pose challenges in deriving analytical expressions for options. Traditionally, numerical methods such as Monte Carlo simulations or finite difference methods are used for option pricing, but these approaches are limited by low computational efficiency and the inability to calibrate parameters in real-time, thus constraining practical applications. In recent years, Fourier transform-based analytical pricing algorithms, particularly the Fourier-Cosine method, have garnered wide attention due to their efficiency and precision. Nevertheless, this method heavily relies on the availability of characteristic functions. In this paper, we apply a perturbation method to derive a second-order asymptotic expression for the characteristic function of the asset price. We then use the Fourier-Cosine method to obtain an approximate analytical pricing formula for European options.
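    To illustrate the Fourier-Cosine (COS) pricing machinery that the paper builds on, the sketch below prices a European call given a characteristic function, here using the plain Black-Scholes characteristic function as a stand-in; the paper’s second-order asymptotic characteristic function for the IFR_BS model would be substituted in its place. The truncation-range and series-length choices (L, N) are common defaults, not the paper’s settings.

```python
import numpy as np

def cos_call(S0, K, T, r, sigma, N=256, L=10.0):
    """European call via the COS expansion (Fang & Oosterlee, 2008), with the
    Black-Scholes log-return characteristic function as a stand-in for the
    IFR_BS characteristic function."""
    x = np.log(S0 / K)                         # log-moneyness
    c1 = (r - 0.5 * sigma**2) * T              # mean of the log-return
    c2 = sigma**2 * T                          # variance of the log-return
    a, b = x + c1 - L * np.sqrt(c2), x + c1 + L * np.sqrt(c2)   # truncation range

    k = np.arange(N)
    u = k * np.pi / (b - a)

    def chi(c, d):                             # integral of e^y * cos(u(y-a)) over [c, d]
        return (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
                + u * np.sin(u * (d - a)) * np.exp(d)
                - u * np.sin(u * (c - a)) * np.exp(c)) / (1.0 + u**2)

    def psi(c, d):                             # integral of cos(u(y-a)) over [c, d]
        out = np.empty(N)
        out[0] = d - c
        out[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
        return out

    Vk = 2.0 / (b - a) * K * (chi(0.0, b) - psi(0.0, b))   # call payoff coefficients
    phi = np.exp(1j * u * c1 - 0.5 * sigma**2 * u**2 * T)  # BS characteristic function
    terms = np.real(phi * np.exp(1j * u * (x - a))) * Vk
    terms[0] *= 0.5                                        # first term gets weight 1/2
    return np.exp(-r * T) * terms.sum()

print(round(cos_call(S0=100, K=100, T=1.0, r=0.05, sigma=0.2), 4))  # ~10.45 (the BS value)
```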
    The numerical experiments demonstrate that the IFR_BS model can effectively characterize the trend and oscillation characteristics of stock prices and reproduce statistical features of asset returns such as peakedness, fat tails, and skewness. The option prices generated by the IFR_BS model exceed those of the traditional BS model, with larger perturbation parameters leading to higher option prices. The numerical error of the Fourier-Cosine method based on the second-order perturbation of the characteristic function is within 10%, and its efficiency in computing option prices is 40 times that of the Monte Carlo method. The empirical analysis based on SSE 50ETF option data from February to April 2020 shows that the S&P 500-induced volatility in the SSE 50ETF exhibits time-varying characteristics with a positive feedback relationship. The option pricing formula under the IFR_BS model, which accounts for international financial risk premiums, achieves higher overall pricing accuracy than the BS model. Moreover, it remedies the BS model’s weakness in valuing short-maturity options, especially deep out-of-the-money options close to expiry. Given that short-maturity options generally exhibit high trading volumes in practice, the IFR_BS model has greater practical value.
    Credit Risk Mitigation and Medium-small Enterprises Financing: Credit Risk Transfer or Loan Selling
    LIU Zhiyang, MA Yan’an
    2024, 33(6):  214-219.  DOI: 10.12005/orms.2024.0204
    Abstract ( )   PDF (1028KB) ( )  
    References | Related Articles | Metrics
    The financing difficulty of small-and-medium-sized enterprises is a bottleneck of China’s sustainable economic development, and it is also a worldwide problem. Overcoming the financing difficulties of small-and-medium-sized enterprises and private enterprises plays an immeasurable role in the high-quality growth of China’s macro economy. In the face of high-risk loans to small-and-medium-sized enterprises, how commercial banks sign credit derivative contracts to alleviate their credit risk, and thereby achieve maximum utility in supporting the financing of small-and-medium-sized and private enterprises, has become a key issue for commercial banks.
    This paper constructs a theoretical model that combines credit risk transfer tools with loan sales and compares the utility of commercial banks with and without moral hazard; this combination and comparison constitute the main contribution of the paper.
    The analysis of the theoretical model shows the following. (1)When there is no moral hazard, the loan sales market ensures that commercial banks achieve the expected return on loans, and banks tend to use loan sales to transfer credit risk. When they sell loans together with their overall asset quality, loan pricing is independent of the state of the economy, which shows that if banks can accurately evaluate the quality of their loans, they can accurately price the credit risk of loans and reduce the price uncertainty of selling them. (2)In the presence of moral hazard, the loan sales market cannot fully hedge the credit risk of commercial banks. If commercial banks make no effort to manage credit risk, they lose all the benefits of the loan. Because commercial banks do not make decisions based on the overall quality of their assets when selling loans, they do not fully transfer credit risk and continue to bear the credit risk of loans. (3)In the presence of moral hazard, for credit risk transfer tools, the asset management ability of the seller of credit risk mitigation tools is crucial for commercial banks to successfully transfer credit risk, and the signing of credit risk mitigation contracts exhibits state separation. Owing to the seller’s moral hazard, when the seller observes a good expected signal, it has an incentive to manage its assets effectively, because the buyer’s payment probability to the seller increases and effective asset management brings the seller more benefits. Once the seller observes a signal of expected recession, its incentive to manage the assets diminishes, because the income would be used to compensate the buyer’s credit risk loss; moral hazard then becomes very significant, and commercial banks remain exposed to credit risk because they are not paid by the seller.
    In the future, it is necessary to explore how to design targeted financial risk management systems and policies for the moral hazard problems existing in the practice of credit derivatives in alleviating the credit risk of small-and-medium-sized enterprises, so as to give full play to the advantages of credit derivatives in managing credit risk and alleviate the financing difficulties of small-and-medium-sized enterprises.
    Management Science
    Research on the Propagation Mechanism of Miners’ Unsafe Behavior Based on Subject Heterogeneity
    LI Xinchun, QIU Zunxiang, LIU Quanlong, ZHANG Xiaolin, ZHANG Yueqian
    2024, 33(6):  220-226.  DOI: 10.12005/orms.2024.0205
    Asbtract ( )   PDF (1465KB) ( )  
    References | Related Articles | Metrics
    China’s coal mining industry is characterized by high labor intensity, substantial occupational hazards, and elevated accident rates. Although recent stringent national regulations have enhanced the overall safety performance of the industry, major coal mining accidents continue to occur frequently. According to accident causation theory, coal mining accidents result from the complex interaction of multiple hazards, including people, equipment, the environment, and management. Among these, miners’ unsafe behaviors are the primary cause of accidents. Therefore, exploring how to prevent, at the source, production accidents caused by miners’ unsafe behavior is crucial for enhancing the safety production level of the coal mining industry. In recent years, numerous scholars have conducted extensive research into the mechanisms and prevention strategies of unsafe behaviors. These studies focus on individual-level factors, such as employees’ safety knowledge and psychological capital, as well as organizational-level factors, including safety culture and climate. However, few studies have explored the transmission patterns of individual unsafe behavior in interpersonal contexts and the pathways through which individual unsafe behaviors spread into group unsafe behaviors. In addition, studies analyzing miners’ unsafe behaviors often overlook subject heterogeneity, failing to differentiate between the types of subjects involved in coal mine safety production. Considering that subject heterogeneity is a critical factor influencing the transmission patterns and effects of miners’ unsafe behavior, further in-depth investigation is required. On this basis, the present study proposes two research questions: What roles do different types of subjects in coal mining safety production play in the propagation of unsafe behaviors? Under the condition of subject heterogeneity, how do the unsafe behaviors of various types of coal mining safety production subjects affect others and subsequently lead to group unsafe behaviors?
    Research that relies solely on network analysis and simulation to study propagation lacks effective differentiation and validation of unsafe behaviors among heterogeneous subjects. While existing studies suggest that simple network analysis and simulation can reflect the propagation characteristics of miners’ unsafe behaviors, understanding how individual unsafe behaviors spread and evolve within groups requires further consideration of subject heterogeneity and a more in-depth investigation of propagation among different subject types. Against this backdrop, this study first identifies the unsafe behavior characteristics of various production subjects in 350 coal mine accident reports using the Word2Vec method. A propagation network of miners’ unsafe behavior is then constructed based on association rules and complex network theory. Finally, we identify eight core unsafe behaviors and their associated sets through network centrality analysis. These include: inadequate inspection and supervision of safety management work by government regulators, insufficient daily safety oversight by government regulators, insufficient safety inspections by on-site supervisors, failure to promptly eliminate hidden dangers by on-site supervisors, inadequate safety supervision by on-site supervisors, ineffective safety confirmation by on-site supervisors, and illegal operations and risk-taking behaviors by frontline workers. Moreover, based on the accident path analysis, critical paths of unsafe behavior propagation are identified for six types of accidents: roof, gas, transportation, electromechanical, water hazard, and other accidents. The results show that on-site supervisors are the most influential subjects in the propagation of miners’ unsafe behaviors. Insufficient safety inspections and failure to promptly eliminate hidden dangers by on-site supervisors are the critical causes of group unsafe behaviors. The propagation of unsafe behavior is most likely to cause electromechanical accidents, in which the critical propagation link is: inadequate inspection and supervision of safety management work → inadequate safety education and training → ineffective on-site safety confirmation → weak safety awareness among staff → illegal operations.
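    The following toy sketch shows the shape of this pipeline: embedding behavior phrases with Word2Vec and then analyzing centrality on a behavior propagation network with networkx. The token lists and edges are invented placeholders; the paper’s network is mined from 350 Chinese accident reports via association rules.

```python
from gensim.models import Word2Vec
import networkx as nx

# Hypothetical corpus: each accident report reduced to a sequence of
# unsafe-behavior phrases (placeholders, not the paper's extracted phrases).
corpus = [
    ["inadequate_inspection", "inadequate_training", "illegal_operation"],
    ["insufficient_safety_inspection", "failure_to_eliminate_hazard", "illegal_operation"],
]

# Embed behavior phrases so that similar behaviors across reports can be merged.
w2v = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

# Directed propagation network; in the paper the edges come from association
# rules between behaviors of different subjects. These edges are made up.
G = nx.DiGraph()
G.add_edges_from([
    ("inadequate_inspection", "inadequate_training"),
    ("inadequate_training", "illegal_operation"),
    ("insufficient_safety_inspection", "illegal_operation"),
    ("failure_to_eliminate_hazard", "illegal_operation"),
])

# Centrality analysis to flag core unsafe behaviors in the propagation network.
for node, c in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(node, round(c, 3))
```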
    This study utilizes text mining and path analysis of 350 coal mining accident reports to identify and integrate the unsafe behaviors of various coal mining safety production subjects. It further reveals the relationships between different types of subjects in the propagation of unsafe behaviors and clarifies their roles in this process. The research implications of this paper can be categorized into three main aspects: Firstly, at the theoretical level, this study reveals the intrinsic mechanisms of the propagation of individual unsafe behaviors to the group under the condition of subject heterogeneity, thereby expanding the related theories and empirical studies on group unsafe behaviors. Secondly, at the methodological level, this study addresses the limitations of traditional network analysis and simulation methods by integrating Word2Vec, association rules, and complex network approaches, which helps to identify the characteristics and propagation mechanisms of unsafe behaviors among different subjects in coal mining safety production. Lastly, at the practical level, this study provides theoretical guidance and a scientific basis for coal mining enterprises to formulate strategies to prevent the propagation of unsafe behaviors. Additionally, this study has certain limitations. It does not cover all subject types involved in coal mining production accidents and focuses mainly on the mechanisms of unsafe behavior propagation across different subjects. Future research should explore the characteristics of populations susceptible to unsafe behavior propagation and develop strategies to prevent the transition of individual unsafe behaviors into collective phenomena.
    Incentive Mechanism of Cultural Dissemination of Historic Sites from the Perspective of Government Guidance
    JIAN Lirong, ZHANG Jie, ZHENG Zhouzhou
    2024, 33(6):  227-233.  DOI: 10.12005/orms.2024.0206
    Abstract ( )   PDF (1223KB) ( )  
    References | Related Articles | Metrics
    China is a nation with a grand culture, and cultural inheritance is an important function of China’s cultural heritage industry institutions. Cultural industry institutions should make full use of historical resources to disseminate industrial culture, let more people understand the cultural heritage behind the industry, and enhance the industry’s cultural influence. The implementation of cultural industry policies plays an important role in the development of the cultural industry. Currently, research on cultural communication focuses more on technical issues and less on the impact of government policy guidance on cultural communication and on the incentive effects of such guidance. The profits obtained from the integration of cultural heritage institutions and tourism can be used for the protection of cultural relics as well as the inheritance of customs. For example, the Lijiang Mufu uses its revenue to repair and maintain the Mufu complex and to spread Naxi culture, drawing on its original culture to attract visitors and drive the local accommodation and catering industries.
    Given that the behaviors of the government, cultural industry institutions, and consumers are constrained and influenced by one another’s strategic choices, the setting conforms to the “bounded rationality” assumption of evolutionary game theory. The article therefore proposes relevant hypotheses and constructs an evolutionary game model based on the principles of bounded rationality and benefit maximization. In the process of reaching game equilibrium through continuous decision-making among the three entities, the government guides and incentivizes cultural industry institutions to disseminate historical and cultural knowledge; cultural industry institutions disseminate their historical culture to consumers; consumers who visit and purchase cultural and creative products in turn provide feedback on their consumption preferences and preferred promotional methods to cultural industry institutions; this feedback enables cultural industry institutions to continuously adjust and improve their operating methods; and the cultural dissemination of the government and cultural industry institutions enables consumers to gain learning effects. Finally, the article uses Matlab simulation to analyze the impact of various factors on cultural dissemination, simulates the strategic evolution of each subject’s behavior, and conducts a sensitivity analysis of the evolution mechanism of cultural dissemination.
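    As a minimal sketch of how such a three-party evolutionary game can be simulated numerically (the paper uses Matlab; Python is used here for consistency with the other sketches), the snippet below integrates replicator dynamics for the shares of government guiding, institutions disseminating, and consumers visiting. The linear payoff differences are arbitrary placeholders, not the paper’s payoff matrix.

```python
import numpy as np
from scipy.integrate import solve_ivp

def replicator(t, s):
    """Three-population replicator dynamics; s = (x, y, z) are the shares of
    government guiding, institutions disseminating, and consumers visiting.
    The payoff differences below are illustrative placeholders only."""
    x, y, z = s
    du_gov = 0.4 * y + 0.3 * z - 0.2          # payoff gain from guiding vs. not guiding
    du_inst = 0.5 * x + 0.4 * z - 0.3         # payoff gain from disseminating vs. not
    du_cons = 0.6 * y + 0.2 * x - 0.25        # payoff gain from visiting vs. not
    return [x * (1 - x) * du_gov, y * (1 - y) * du_inst, z * (1 - z) * du_cons]

sol = solve_ivp(replicator, (0.0, 50.0), [0.3, 0.3, 0.3])
print(np.round(sol.y[:, -1], 3))              # long-run strategy shares
```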
    The theoretical and numerical simulation analysis leads to the following conclusions. The government can implement tax exemption policies for well-managed cultural industry institutions, guide cultural industry institutions in cultural dissemination, and achieve a three-way Pareto-optimal strategy equilibrium. The amount of material used by the government and cultural industry institutions for cultural dissemination, the expenses incurred by consumers under the influence of cultural dissemination, and the amount of cultural knowledge absorbed by consumers all promote cultural dissemination. The coefficient of return on consumer visits has the greatest impact on the evolution of the tripartite game, reflecting the importance of consumers’ overall post-visit evaluation. Specifically, the sensitivity analysis shows that the cost coefficient of visits by consumers affected by cultural dissemination, the amount of local historical and cultural material transformed by the government, and the perceived value benefits of consumer visits have the greatest impact on the government’s strategic choices; improving these three parameters promotes government cultural promotion. The parameters with the greatest impact on cultural industry institutions are the profit coefficient of consumers’ offline consumption and the cost coefficient of visits by consumers affected by cultural dissemination; improving these two parameters promotes cultural dissemination by cultural industry institutions. The parameter with the greatest impact on consumers’ strategy choices is the coefficient of return on offline visits, which promotes consumer consumption.
    Incentive Contract Design for Sales Forces in Two-sided Markets
    BAO Lei, JUAN Zhiru
    2024, 33(6):  234-239.  DOI: 10.12005/orms.2024.0207
    Abstract ( )   PDF (1085KB) ( )  
    References | Related Articles | Metrics
    Similar to traditional enterprises, the platform enterprise also needs to hire salespersons to carry out promotional activities. However, unlike traditional enterprises, there are cross network effects between users on both sides of the platform. Therefore, whether cross network effects will have impacts on incentive contract design for salespersons in the platform enterprise is an interesting question.
    In this article, a principal-agent model is established in which the platform enterprise decides to hire a salesperson for sales promotion. Specifically, we consider a hardware-software platform like the Apple iPhone: one side consists of end users who wish to purchase hardware devices (i.e., iPhones), and the other side consists of developers who provide software (i.e., various kinds of apps). Because of cross network effects, a user’s utility on one side increases as the number of users on the other side rises. The platform enterprise charges fixed fees (i.e., the price of the hardware) to end users and royalties to developers. In order to increase hardware sales, the platform decides to hire a salesperson, whose explanation and demonstration augment the utility of end users who access the platform. Due to market uncertainty, there is volatility on both sides. In order to motivate the salesperson to exert greater effort, the platform enterprise offers a linear incentive contract consisting of two parts: a fixed wage and commissions.
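    A minimal sketch of the two contract forms studied, in generic notation (the symbols below are shorthand for this summary, not the paper’s exact specification): under the one-sided contract the wage is $w=\alpha+\beta_1 q_1$, while under the two-sided contract it is $w=\alpha+\beta_1 q_1+\beta_2 q_2$, where $q_1$ is hardware sales (affected by the salesperson’s effort $e$ and noise), $q_2$ is the number of developers on the other side, $\alpha$ is the fixed wage, and $\beta_1,\beta_2$ are commission rates. With CARA preferences (risk aversion $\rho$) and quadratic effort cost $\tfrac{1}{2}ce^{2}$, the salesperson chooses $e$ to maximize the certainty equivalent
$$\mathrm{CE}=\alpha+\beta_1\,\mathbb{E}[q_1(e)]+\beta_2\,\mathbb{E}[q_2]-\tfrac{1}{2}ce^{2}-\tfrac{\rho}{2}\,\mathrm{Var}\!\left(\beta_1 q_1+\beta_2 q_2\right),$$
    so that linking pay to $q_2$ changes both the risk borne by the salesperson and, through the cross network effect of $q_2$ on hardware demand, the marginal return to effort.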
    This article investigates two types of incentive contracts: a one-sided incentive contract and a two-sided incentive contract. In the former, the salesperson’s salary is linked only to direct sales (i.e., the sales volume of hardware); in the latter, the platform also includes the quantity on the other side (i.e., the number of developers) in the incentive contract. The results show that if the agent’s salary is linked only to direct sales, that is, if the platform compensates only for the side on which the salesperson exerts effort, the enhancement of cross network effects may not always be beneficial for the platform. However, if the number of developers is also included in the incentive contract, the agent is encouraged to raise his effort level and the platform’s profit increases.
    Unlike classical two-sided market research, this article considers the separation of ownership and management in the platform enterprise, attempts to open the “black box”, and explores the platform enterprise’s internal management. The research reveals that the enhancement of cross network effects is not always profitable for the platform once the selling mechanism is considered. Moreover, incentive contract design in two-sided markets must account not only for the spillover effects and risk transmission generated by cross-group network effects but also for the fee setting. The fact that two-sided contracts enable the platform enterprise to earn a higher profit than one-sided contracts indicates that the design of incentive contracts for salespersons in two-sided markets subverts the traditional idea of “no pain, no gain”. Therefore, based on these results, we suggest that in industries such as smartphones, firms abandon the traditional incentive mode of “fixed salary+direct sales commissions” and include indirect sales (which require no effort from the salesperson) in the incentive contract, adopting a mode of “fixed salary+direct sales commissions+indirect sales”. By allowing salespersons to share risks in both markets of the platform, the contract encourages them to exert greater effort and increases the platform enterprise’s profit.