
Table of Contents

    25 August 2023, Volume 32 Issue 8
    Theory Analysis and Methodology Study
    Supply Chain Incentive Mechanism Design Based on Quantal Response Equilibrium
    JIANG Zewu, ZHAO Xiaobo, XUE Chao, ZHU Wanshan, XIE Jinxing
    2023, 32(8):  1-8.  DOI: 10.12005/orms.2023.0243
    Mechanism design has received attention in supply chain management for its ability to address the inefficiency caused by asymmetric information. Specifically, a retailer may not know a supplier’s exact cost structure, such as raw material and labor costs, whereas the supplier knows its own cost information. To maximize its expected profit, the retailer can design a contract menu comprising multiple contracts, each specifying a wholesale price and an order quantity for a different supplier cost type. The design of the contract menu depends on the rationality of the decision-makers. This study investigates the optimal contract menu design problem for a fully rational retailer facing a boundedly rational supplier.
    Quantal Response Equilibrium (QRE) is an extension of Nash equilibrium under bounded rationality. This study introduces QRE into the model of supply chain incentive mechanism design, establishes a probabilistic choice framework based on response functions, and modifies the incentive compatibility and participation constraints, which otherwise presuppose fully rational decision-makers. First, the optimal strategy under complete rationality is analyzed. Then, the QRE model is introduced to analyze incentive mechanism design under bounded rationality. Finally, incentive mechanism design under QRE with incomplete information is studied. The study further examines the impact of bounded rationality on the performance of the optimal mechanism in the supply chain. Through theoretical analysis and numerical examples, the following results are obtained.
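    The core of the QRE framework is the logit response function: a boundedly rational supplier chooses each contract with a probability that increases in its payoff, governed by a rationality parameter. A minimal sketch (the menu payoffs and the parameter values below are illustrative assumptions, not from the paper):

```python
import math

def logit_choice_probs(payoffs, lam):
    """Logit quantal response: option k is chosen with probability
    proportional to exp(lam * payoff_k). lam = 0 gives uniform random
    choice; as lam grows, the choice approaches the rational best
    response."""
    weights = [math.exp(lam * u) for u in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative menu: the supplier's profit from contract 1, contract 2,
# and the outside option of rejecting both (profit 0).
profits = [3.0, 1.0, 0.0]
for lam in (0.0, 1.0, 10.0):
    print(lam, [round(p, 3) for p in logit_choice_probs(profits, lam)])
```

As the rationality parameter tends to zero, choices become uniform; as it tends to infinity, the best contract is chosen almost surely, recovering full rationality.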
    First, we characterize the optimal strategy under complete rationality: when the market size is small, the optimal mechanism trades only with the low-cost supplier type; when the market size is large, it can trade with both types, always achieving channel optimality against the low-cost type, while the optimal order quantity deviates from channel optimality against the high-cost type. Second, we obtain the optimal strategy under bounded rationality. Under complete information, the optimal order quantity always maintains channel optimality, and the optimal wholesale price first increases and then decreases with the supplier’s degree of rationality, converging to the optimal strategy under complete rationality. Under incomplete information, when the market size is small, the optimal mechanism only offers contracts that generate positive profits for the low-cost type, and the optimal wholesale price again first increases and then decreases with the supplier’s degree of rationality; when the market size is large, as the supplier’s rationality increases, the optimal mechanism successively takes three forms: a single contract generating positive profits for the low-cost type, a single contract generating positive profits for both types, and a contract menu generating positive profits for both types. As the supplier’s degree of rationality tends to infinity, the optimal mechanism converges to the optimal strategy under complete rationality.
    The impact of bounded rationality on supply chain performance under the optimal mechanism is as follows. First, the overall channel profit under bounded rationality is smaller than under complete rationality, indicating that bounded rationality degrades channel performance. Second, under complete information, the expected profits of the retailer and the channel increase monotonically with the degree of rationality, while the supplier’s expected profit is relatively low when its degree of rationality is either extremely low or extremely high. Finally, under incomplete information, the channel’s expected profit increases monotonically with the degree of rationality, the retailer’s expected profit is higher when the supplier’s degree of rationality is extremely low or extremely high, and the supplier’s expected profit is relatively low in both extremes. These results mainly stem from the fact that a supplier with an extremely low degree of rationality may accept contracts with extremely small or even negative profits, lowering its expected profit, whereas a supplier with an extremely high degree of rationality accepts contracts with extremely small profits, allowing the retailer to capture a larger profit margin.
    This study mainly considers the retailer’s design of incentive mechanisms (consisting of wholesale prices and order quantities) under bounded rationality, where the upstream supplier holds private cost information. Many extensions are possible. For example, under bounded rationality one could examine other mechanism types, such as revenue-sharing or buyback contracts, or other types of asymmetric information, such as a downstream retailer holding private market demand information. The behavioral theory of incentive mechanism design in this study provides a reference framework for other mechanism design research that incorporates behavior.
    Inventory Cost Management in Supply Chains Considering Products Experience and Return Policies
    LU Fang, CHEN Zhengxiong, WANG Jing
    2023, 32(8):  9-15.  DOI: 10.12005/orms.2023.0244
    For experiential products, whose value customers must judge through sensory experience, there is a gap between what customers perceive online through text and photographs and what they experience in physical stores, and this gap leads to returns. Therefore, coordinating the optimal storage decisions of retailers and suppliers through customers’ experience preferences and retailers’ return strategies has become a vital management problem that must be handled in the omnichannel operation of experiential products. This paper investigates the impact of product experience and consumer experience preferences on the optimal storage level decisions and expected inventory costs of retailers and suppliers under three return strategies: NAR (not accepting returns), ARNR (accepting returns without reselling returned products), and ARR (accepting returns and reselling returned products). The findings help optimize the inventory costs of retailers and suppliers in the supply chain of experiential products, complement research on consumers’ purchasing experience behavior in uncertain markets, and provide evidence of the value of applying AR/VR technology to the multichannel operation of experiential products.
    The supplier of experiential products places orders with the manufacturer and then wholesales the products to the retailer’s offline experience store, where they are sold to customers. At the same time, the supplier cooperates with the retailer across channels and directly fulfills, at the same price, the orders placed through the retailer’s online flagship store, whether demand arises offline or online. In this study, we assume that the upstream manufacturer has adequate production capacity, and we investigate how the offline experience store and the supplier manage and control inventory levels to maximize profits while satisfying market demand from experience-seeking customers.
    Based on the characteristics of experiential products, the experience coefficient of a product is denoted by β; the larger β is, the more experience-service effort the retailer must exert. On this basis, the service effort level provided by retailers operating experiential products and the market demand function for experiential products are constructed. Given the product’s experiential nature, consumers’ channel choice preference is analyzed. Letting i denote the preference degree with which online interactivity influences consumer behavior, consumers buy directly from the online flagship store with preference probability i and from the offline experience store with preference probability (1-i). Using the experience coefficient and the preference degree as the two basic variables, expected inventory cost models are constructed for the supplier and for the offline experience store under the retailer’s choice among NAR, ARNR, and ARR, for two categories of consumers: those with complete offline experience preference and those with incomplete offline experience preference. The expected inventory costs comprise ordering costs, storage costs, stock-out losses caused by consumers shifting channels, and return-handling fees. A numerical simulation of real examples is then used to analyze which strategy best matches retailers to customers’ experience preferences while keeping supply chain inventory costs, and thus the overall cost, at a minimum.
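    The demand-splitting logic above can be sketched with a small Monte Carlo computation; the demand distribution, cost parameters, and the way β scales offline demand are illustrative assumptions, not the paper’s model:

```python
import random

def expected_costs(beta, i, q_offline, q_online, trials=10000, seed=0):
    """Monte Carlo sketch of expected inventory cost. Primary demand D
    is random; a fraction i of consumers buys from the online flagship
    store and (1 - i) from the offline experience store, whose demand
    is scaled up by the experience coefficient beta. A unit holding
    cost h is charged on leftover stock and a unit shortage cost s on
    unmet demand. All functional forms and values are illustrative."""
    h, s = 1.0, 4.0
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        d = rng.uniform(80, 120)              # primary market demand
        d_off = (1 - i) * (1 + beta) * d      # offline experience store
        d_on = i * d                          # online flagship store
        for q, dem in ((q_offline, d_off), (q_online, d_on)):
            total += h * max(q - dem, 0.0) + s * max(dem - q, 0.0)
    return total / trials

print(round(expected_costs(beta=0.2, i=0.4, q_offline=75, q_online=42), 2))
```

Comparing this quantity across storage levels for each of the three return strategies mirrors the kind of numerical analysis the paper performs.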
    The findings indicate that the supplier’s optimal storage quantity is unrelated to the retailer’s return strategy or to the experiential attributes of the product; instead, it is tied solely to primary market demand. Nevertheless, the supplier’s inventory cost is related to both the product experience and the return strategy. For the same product experience level and incomplete offline experience preference, the retailer’s optimal storage volume reaches its maximum under ARR and its minimum under ARNR. Under the same conditions on product experience and incomplete offline experience preference, the expected inventory cost of the offline experience store is highest when the retailer chooses ARR and lowest when the retailer chooses ARNR.
    To keep the analysis manageable, several essential assumptions are built into the model: (1) Retailers of experiential products operate both online and offline stores, and market demand follows a uniform distribution. (2) The supplier has sufficient capacity to fulfill market requirements. Future research can use computer simulation of a more realistic market demand distribution in a fully stochastic setting to better explore the impact of product experience and consumer channel preferences on retailers’ return problems, and can gradually extend the analysis to the entire supply chain to guide inventory management within the supply chain more effectively.
    Manufacturer’s Sales Mode Choices and Competitive Strategies Based on the Composite E-commerce Platform
    ZHOU Chi, WANG Yixin, YU Jing
    2023, 32(8):  16-23.  DOI: 10.12005/orms.2023.0245
    The accelerated rise of the platform economy has driven e-commerce platforms to develop into composite e-commerce platforms (hereafter, platforms), giving manufacturers more options for collaborative sales models with a platform. A manufacturer can choose the wholesale sales model, wholesaling products to the platform, which resells them to consumers. A manufacturer can instead choose the agency sales model, opening a store on the platform but paying the platform a commission. In addition, a manufacturer can choose the hybrid sales model, selling through both channels. At present, it has become the norm for manufacturers to compete for consumers on the platform. Manufacturers often choose the wholesale sales model, which not only transfers product ownership to the platform but also reduces their operating costs on the platform; consumers’ preference for the wholesale sales model can also increase a manufacturer’s potential demand. However, when facing a rival that adopts the wholesale sales model, if competing manufacturers also choose the wholesale sales model, the rival’s market size and the degree of product competition may strongly affect their profits. If manufacturers choose the agency sales model, they must pay a commission to the platform; if they choose the hybrid sales model, they face the drawbacks of both. Therefore, under a composite e-commerce platform and a competitive environment among manufacturers, how should manufacturers choose the optimal sales model? Can they achieve win-win cooperation with the platform? How do different market factors affect the optimal profits of manufacturers and supply chain members?
    This paper considers two manufacturers, A and B, that produce substitutable products, both of which sell through the platform. Manufacturer B and the platform adopt the wholesale sales model, and we study the sales model selection strategy of manufacturer A. We then construct Stackelberg game models among manufacturers A and B and the platform under the different channel structures. The platform first sets the unit commission, manufacturer B then decides its wholesale price, the platform next sets the retail price of the wholesale channel, and finally manufacturer A decides the retail price of the agency channel. We assume that information is symmetric and that all parties in the supply chain are rational.
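    The four-stage decision sequence above can be solved numerically by backward induction. A minimal sketch under assumed linear substitutable demand (the parameter values, demand form, and coarse price grid are illustrative assumptions, not the paper’s model):

```python
import numpy as np

aA, aB, th = 10.0, 8.0, 0.3          # market sizes, substitutability
G = np.linspace(0.0, 12.0, 13)       # coarse grid of candidate prices

def best_pA(t, pB):
    """Stage 4: manufacturer A prices the agency channel, paying unit
    commission t to the platform."""
    qA = np.maximum(aA - G + th * pB, 0.0)
    return G[int(np.argmax((G - t) * qA))]

def best_pB(t, w):
    """Stage 3: the platform prices the wholesale channel,
    anticipating A's response."""
    def plat(pB):
        pA = best_pA(t, pB)
        qA = max(aA - pA + th * pB, 0.0)
        qB = max(aB - pB + th * pA, 0.0)
        return t * qA + (pB - w) * qB
    return max(G, key=plat)

def best_w(t):
    """Stage 2: manufacturer B sets its wholesale price."""
    def piB(w):
        pB = best_pB(t, w)
        pA = best_pA(t, pB)
        return w * max(aB - pB + th * pA, 0.0)
    return max(G, key=piB)

def solve():
    """Stage 1: the platform sets the unit commission first."""
    def plat(t):
        w = best_w(t)
        pB = best_pB(t, w)
        pA = best_pA(t, pB)
        return (t * max(aA - pA + th * pB, 0.0)
                + (pB - w) * max(aB - pB + th * pA, 0.0))
    t = max(G, key=plat)
    w = best_w(t)
    pB = best_pB(t, w)
    return float(t), float(w), float(pB), float(best_pA(t, pB))

print(solve())
```

Each stage optimizes its own decision while anticipating the optimal responses of all later stages, which is exactly the backward induction logic used to derive the equilibrium.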
    The research results indicate that under the wholesale sales model, manufacturer A’s wholesale price increases with the product’s potential market size, and the platform raises the retail price in response to the higher wholesale price. Correspondingly, manufacturer B lowers its wholesale price in cooperation with the platform, but the platform’s price reduction still cannot increase demand for manufacturer B’s product. When manufacturer A’s potential market size is large, the platform can also benefit; at this point, manufacturer A and the platform can achieve cooperation. When competition among manufacturers intensifies, an increase in one manufacturer’s wholesale price leads to an increase in the other’s. Intuitively, when manufacturer A’s potential market size is high, the platform lowers the retail price of product A, thereby increasing consumers’ motivation to purchase product A. Competition between manufacturers can increase the profits of supply chain members.
    In the agency sales model, manufacturer A’s potential market size increases consumer demand for product A. Manufacturer B responds to market competition by reducing its wholesale price, but this “small profit, quick turnover” strategy still cannot increase its profit. When the unit commission is small, manufacturer A’s profit increases with the potential market size, while the platform’s profit decreases with it. When competition among manufacturers intensifies, both retail and wholesale prices increase; the profits of manufacturer B and the platform then also increase with product competition, while manufacturer A’s profit depends on the unit commission: only when the unit commission is low does manufacturer A’s profit increase with product competition. As the unit commission increases, the product prices of manufacturers A and B both increase; however, manufacturer A’s profit does not always decrease, and the platform’s profit does not always increase.
    In the hybrid sales model, when manufacturer A’s potential market size is small and the platform’s unit commission increases, the cost of selling through the agency channel rises. To increase profits, manufacturer A raises the retail price in the agency channel, prompting consumers to turn to the wholesale channel to purchase product A. On the one hand, manufacturer A benefits from the agency channel through higher unit profits; on the other hand, it benefits from the wholesale channel through higher demand. In this model, however, manufacturer B lowers its wholesale price in order to compete with manufacturer A. At this point, the platform can not only raise both products’ prices and obtain higher profits but also intensify competition between the products.
    Therefore, when channel elasticity is low, the agency sales model is optimal if the platform’s unit commission is low; as the unit commission increases, manufacturers should switch to the wholesale sales model. When channel elasticity is high, manufacturers should choose the hybrid sales model as long as the unit commission is not too low. Furthermore, manufacturers achieve three different modes of cooperation with the platform under different conditions, thereby achieving a win-win situation. This paper considers only a single platform; the model selection problem for manufacturers facing multiple competing platforms is a direction for future research.
    Equilibrium Financing Portfolio Strategies for Dual-channel Supply Chain with Financial Institution Lending and Commercial Credit
    GUO Jinsen, CHEN Zhuo, ZHOU Yongwu
    2023, 32(8):  24-31.  DOI: 10.12005/orms.2023.0246
    Financial institution lending and commercial credit are the two most common financing methods in the supply chain for capital-constrained enterprises. When only upstream or downstream enterprises have capital constraints, they can use the supply chain’s internal commercial credit financing model, solving their capital constraint problem through early or delayed payment. However, when both upstream and downstream enterprises are capital-constrained, the supply chain needs to alleviate the negative impact of bilateral capital constraints through a combination of financial institution lending and commercial credit financing. Different combination financing models may affect enterprises’ operational decisions and profits differently and hence shape their preferences among the models. With the development of e-commerce, many manufacturers, such as Apple, Huawei, and Lenovo, sell their products through a dual-channel model. In a dual-channel sales environment, online and offline channels conflict, and the impact of different combination financing models on each enterprise’s operational decisions and profits is more complex. It is therefore of great significance to study the dual-channel supply chain operation strategy under combined financial institution lending and commercial credit financing with bilateral capital constraints.
    This paper focuses on a dual-channel supply chain in which both upstream and downstream enterprises are capital-constrained, and considers the following combination financing models: (1) The manufacturer adopts a combination of financial institution lending and deferred payment, meeting his capital needs through financial institution lending while allowing the capital-constrained retailer to defer payment for part of the goods. (2) The retailer adopts a combination of financial institution lending and advance payment, meeting his capital needs through financial institution lending and relieving the manufacturer’s shortage of production funds for the offline channel by paying a certain advance. (3) Bilateral bank financing with advance payment, in which, on the one hand, the retailer meets his capital needs through financial institution lending and pays an advance to relieve the manufacturer’s offline production funding shortage, while, on the other hand, the manufacturer also meets his online channel’s capital needs through financial institution lending.
    First, the paper derives the equilibrium of the game by backward induction under each combination financing mode. Then, we analyze the impact of capital scale and loan rate on supply chain operation decisions and profits under the different modes. Finally, we compare the profit levels of the manufacturer and the retailer across modes and discuss their preferences for the different modes. The results show that: (1) The supply chain combination financing model not only effectively solves the enterprises’ capital constraint problem but also allows the retailer to earn a profit higher than that obtained without capital constraints. (2) When the loan rate is relatively low, the retailer prefers the “manufacturer bank financing and deferred payment” mode; otherwise, the “retailer/bilateral bank financing and advance payment” mode dominates. (3) The manufacturer prefers the “retailer/bilateral bank financing and advance payment” mode when the loan rate is relatively low; otherwise, the “manufacturer bank financing and deferred payment” mode dominates.
    Further research can be extended to the stochastic demand environment and the situation of enterprises assuming limited liability, analyzing the impact of market demand fluctuations and bankruptcy risk of enterprises under limited liability on the operational decisions and financing strategies of various entities in the supply chain.
    Dynamic Uncertainty-Optimization of Drug Logistics Multi-center Location
    YUAN Zhiyuan, GAO Jie, YANG Caijun
    2023, 32(8):  32-37.  DOI: 10.12005/orms.2023.0247
    To reduce patients’ medication costs, improve the quality of clinical medication, fundamentally improve the ecology of the pharmaceutical industry, promote its transformation from market-driven to innovation-driven, and help resolve deep-seated institutional problems in the medical service system, the state, with the approval of the Central Comprehensive Deepening Reform Commission, organized centralized volume-based drug procurement starting in 2018. In 2020, the third batch of nationally organized centralized drug procurement reached a scale of tens of billions of yuan, with 189 enterprises participating in bidding; 125 enterprises and 191 drug product specifications were selected, with an average price reduction of 53%. How to use two decades of regional natural disaster data, guided by big data thinking, to scientifically lay out drug logistics centers and delivery routes, minimize delivery risk, and safely and efficiently distribute multi-enterprise, multi-type, large-batch, nationwide, time-critical centrally procured drugs to demand cities has become a pressing new problem for drug logistics enterprises. Starting from the actual needs of the country and enterprises, this article focuses on these new practical problems. It provides a new solution for the multi-type, large-batch, nationwide, time-critical drug distribution problem arising after national centralized drug procurement, improves the scientific soundness and safety of drug logistics center location and distribution routing, and provides a theoretical basis for drug logistics decision-making.
    To improve the timeliness and safety of drug distribution for this multi-type, large-quantity, nationwide, time-critical problem, this article adopts a big data perspective: based on natural disaster records for each candidate drug logistics center location over the past twenty years, the ratio of the number of major sudden natural disasters in that location to the twenty-year window is used as the predicted probability that the area will suffer a major natural disaster in the future. Based on these predicted probabilities, a dynamic uncertainty model for drug logistics multi-center location-routing optimization is constructed, accounting for drug distribution safety, distribution costs, environmental protection costs, time satisfaction, and real-time road conditions. To solve it efficiently, the algorithm design exploits the characteristics of the problem: the fuzzy C-means (FCM) clustering algorithm is simple, broadly applicable, and easy to implement on a computer, but it is sensitive to the initial solution and prone to local optima; particle swarm optimization (PSO) has strong global search ability, a better chance of finding the global optimum, and is easy to implement, accurate, and fast to converge. Using the solution obtained by PSO as the initial solution of FCM improves the computational efficiency of the algorithm. This paper therefore combines the strengths of FCM, PSO, and tabu search (TS) to design a hybrid PSO-FCM-TS algorithm.
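    The PSO-seeded FCM idea can be sketched as follows: particles encode candidate cluster centers, PSO minimizes the fuzzy clustering objective, and the best particle initializes the FCM iterations. The toy 2-D data, swarm settings, and fuzzifier m = 2 are illustrative assumptions, and the TS refinement stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D coordinates standing in for candidate-center / demand-city
# locations (synthetic, not procurement data).
data = np.vstack([rng.normal(c, 0.5, (30, 2))
                  for c in ((0.0, 0.0), (5.0, 5.0), (0.0, 5.0))])
K, m = 3, 2.0                          # number of centers, fuzzifier

def memberships(centers):
    """Fuzzy memberships u[i, k] of point i in cluster k (for m = 2)."""
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
    inv = 1.0 / d2
    return inv / inv.sum(axis=1, keepdims=True)

def fcm_objective(centers):
    """Standard FCM objective: sum of u^m times squared distances."""
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1) + 1e-12
    return float((memberships(centers) ** m * d2).sum())

def pso(n_particles=30, iters=80):
    """PSO over flattened center coordinates, minimizing the FCM
    objective; returns the best particle as a K x 2 center array."""
    lo, hi = float(data.min()), float(data.max())
    pos = rng.uniform(lo, hi, (n_particles, K * 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([fcm_objective(p.reshape(K, 2)) for p in pos])
    gbest = pbest[int(pcost.argmin())].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([fcm_objective(p.reshape(K, 2)) for p in pos])
        better = cost < pcost
        pbest[better], pcost[better] = pos[better], cost[better]
        gbest = pbest[int(pcost.argmin())].copy()
    return gbest.reshape(K, 2)

def fcm(centers, iters=30):
    """FCM center updates starting from the PSO seed."""
    for _ in range(iters):
        u = memberships(centers) ** m
        centers = (u.T @ data) / u.sum(axis=0)[:, None]
    return centers

centers = fcm(pso())                   # the PSO solution seeds FCM
print(np.round(centers, 2))
```

In the paper’s setting, the points would be candidate-center and demand-city coordinates and the objective would be augmented by cost, risk, and time-satisfaction terms, with TS refining the resulting location-routing plan.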
    Based on the results of the second batch of national centralized drug procurement bidding, computations are carried out with the PSO-FCM-TS, AG, TS, and PSO algorithms. The experimental results show that the proposed algorithm converges faster and is more stable than the AG, TS, and PSO algorithms.
    To demonstrate the effectiveness of the constructed dynamic uncertainty drug logistics multi-center location-routing optimization model and the designed PSO-FCM-TS hybrid algorithm, this article uses road conditions data from the Gaode Map software and randomly selected results from the second batch of national centralized drug procurement bidding. The model and algorithm are applied to seek the lowest overall drug distribution cost for pharmaceutical enterprises, together with a distribution plan whose logistics centers face a lower future risk of natural disasters. The empirical results indicate that the model can effectively determine the multi-center location-routing optimization scheme for drug logistics, and that the algorithm converges quickly and is stable.
    Because drug quality control is difficult, delivery deadlines are tight, and the technical content is high, the professional and technical level of drug delivery personnel affects drug delivery. This article does not study the impact of such human factors; the influence of delivery personnel’s professional and technical level on drug delivery will therefore be the authors’ next research topic.
    Milestone Payment Based Multi-mode Multi-project Cash Flow Balance Scheduling Optimization
    HE Yukang, JIA Tao, WANG Nengmin
    2023, 32(8):  38-43.  DOI: 10.12005/orms.2023.0248
    In reality, as projects are implemented, contractors incur cash flows of two forms: cash outflows induced mainly by activity execution, and cash inflows that generally result from payments stipulated in the contract between the contractor and the client. Throughout the projects, maintaining a positive balance between cash outflows and inflows is vital for the contractor: if the outflows cannot be covered by the inflows in time, the contractor may be unable to implement the projects smoothly, or the projects may even fail. However, in the project scheduling literature, although much research takes cash flows into account, most of it focuses on maximizing the net present value of the projects’ cash flows. To the best of our knowledge, the multi-project scheduling problem with the objective of keeping cash flows positively balanced has not yet been studied intensively.
    Based on these facts, this paper investigates a milestone payment based multi-mode multi-project cash flow balance scheduling problem, in which the contractor implements multiple projects concurrently, activities can be performed in one of several discrete modes, and the objective is to minimize the maximal cash flow gap under project deadline constraints. First, building on the problem definition, we construct a nonlinear integer programming model using the defined notation. The decision variables are the execution mode and start time of each activity; the constraints comprise the precedence relations between activities, the project deadlines, the payment amount formulae, and the domains of the decision variables. Through analysis of the model, we propose three properties of the problem, which can be used to determine the maximal cash flow gap of a given schedule conveniently and to reduce this gap by properly adjusting the completion or start times of relevant milestone or non-milestone activities.
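    The key quantity, the maximal cash flow gap of a schedule, is simply the largest cumulative excess of outflows over inflows across all periods. A minimal sketch (the per-period amounts are illustrative assumptions):

```python
def max_cash_flow_gap(outflows, inflows):
    """Maximal cash flow gap of a schedule: the largest cumulative
    excess of outflows over inflows over all periods (0 if inflows
    always keep up)."""
    gap = balance = 0.0
    for out, inc in zip(outflows, inflows):
        balance += out - inc          # net money paid out so far
        gap = max(gap, balance)
    return gap

# Illustrative schedule: activity costs of 10 per period, milestone
# payments of 25 and 40 arriving in periods 3 and 6 (values assumed).
print(max_cash_flow_gap([10, 10, 10, 10, 10, 10], [0, 0, 25, 0, 0, 40]))  # 25
```

Adjusting activity timings moves the periods in which outflows and milestone inflows fall, which is how the proposed properties shrink this gap.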
    Then, given the NP-hardness of the problem, we develop a tabu search algorithm in which the proposed properties are used to improve the generated initial and neighbour solutions and thus enhance search efficiency. In the algorithm, two decision variable sets represent a solution of the problem, and a decoding procedure transforms a solution into the corresponding project schedule. The algorithm starts from an initial solution constructed as follows: under the project deadline constraints, each activity is assigned the lowest-cost execution mode, while the start times of milestone (non-milestone) activities are arranged as early (late) as possible. During the search, neighbour solutions are generated randomly and the two decision variable sets are searched in a nested manner. When the running time of the algorithm reaches a preset value, the algorithm stops and outputs the best solution found.
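    The search loop described above follows the standard tabu search pattern. A generic skeleton on a toy mode-assignment problem (the cost function, neighbourhood, tenure, and iteration budget are illustrative assumptions; the paper’s version adds the property-based solution improvement and the nested search over the two decision variable sets):

```python
import random

def tabu_search(init, neighbours, cost, iters=200, tenure=7, seed=1):
    """Generic tabu search: move to the best non-tabu neighbour each
    iteration, keep a short-term memory of recent solutions, and
    remember the best solution ever visited."""
    rng = random.Random(seed)
    current = best = init
    best_cost = cost(init)
    tabu = [init]
    for _ in range(iters):
        cands = [n for n in neighbours(current, rng) if n not in tabu]
        if not cands:
            continue
        current = min(cands, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy stand-in problem (not the paper's model): pick a mode in
# {0, 1, 2} for each of 5 activities; this separable cost is minimized
# by choosing mode 2 everywhere.
def cost(x):
    return sum((mode - 2) ** 2 + 0.1 * mode for mode in x)

def neighbours(x, rng, k=8):
    """k random single-activity mode changes."""
    out = []
    for _ in range(k):
        y = list(x)
        y[rng.randrange(len(y))] = rng.choice([0, 1, 2])
        out.append(tuple(y))
    return out

best, best_cost = tabu_search((0, 0, 0, 0, 0), neighbours, cost)
print(best, round(best_cost, 2))
```

Accepting the best non-tabu neighbour even when it is worse than the current solution is what lets the method escape local optima, at the price of needing the separate record of the best solution found.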
    Finally, we utilize a real case, which consists of two projects with different activity networks, project deadlines, payment conditions, and activity parameters, to verify the proposed model and algorithm. Two versions of the algorithm, namely the original and improved tabu search algorithms, are compared, and the results indicate that the latter obtains the best solution for the case more quickly than the former, thus validating the contribution of the improvement measure. In addition, for the studied case, the desirable schedule found by the algorithm is remarkably better than the practical schedule, and by comparing the two schedules, we derive the following managerial insights: The contractor can effectively reduce the maximal cash flow gap by properly delaying the completion time of the relevant milestone activity or adjusting the start times of the relevant non-milestone activities based on the occurrence period of the maximal cash flow gap. Moreover, the balance between cash outflows and inflows can be further improved by shifting the schedules of some individual projects in light of the cash flow distribution over the course of the projects.
    A Train Schedule Optimization Method Considering the Time-varying Characteristics of Passenger Flow
    ZHANG Shiyu, YANG Yunchao, YANG Yuhao
    2023, 32(8):  44-50.  DOI: 10.12005/orms.2023.0249
    Urban rail transit has become the first choice for cities to solve the urban congestion problem, with its unique advantages of high capacity, high efficiency, low energy consumption and environmental friendliness. Among the many factors affecting the optimization of train schedules, passenger flow is clearly the most important for the rational allocation of transport capacity and the formulation of operation schedules. The passenger flow of urban rail transit is obviously unbalanced in time and space, and the inbound and outbound passenger flows of stations along the line differ greatly, which makes it difficult to optimize the train schedule. The balanced train mode with equal intervals is widely used for national railway and intercity trains owing to its ease of management; however, given the time-varying characteristics of passenger flow, it inevitably leads to an increase in operating costs or a decrease in service levels, which hinders the improvement of urban rail transit services. There are abundant research results on schedule optimization of rail transit at home and abroad, and they show that schedule optimization should consider not only the temporal imbalance of passenger flow but also its spatial imbalance. Therefore, it is very important to use actual dynamic passenger flow information, scientific modeling methods and optimization algorithms to solve the schedule optimization problem of urban rail transit.
    Building on existing research and aiming at the train schedule optimization problem, the time-varying characteristics of passenger demand in urban rail transit are analyzed, and a method of calculating passenger cost based on passenger transport efficiency is presented. With train departure times as variables; station capacity, train capacity, departure interval, first and last train departure times and the number of spare vehicles as constraints; and the minimum total cost of passengers and operating units as the optimization objective, an optimization model of the urban rail transit train schedule is constructed. According to the characteristics of the model, a two-step genetic algorithm based on simulation is designed. Taking Wuhan Metro Line 1 as an example for empirical analysis, it is found that setting station design capacity and train capacity as strong constraints can improve the efficiency of the solution and shorten the search time for a feasible initial solution in the genetic algorithm. The optimization time interval is one of the important factors determining the optimization result and optimization rate of the train schedule, so three different time intervals (5s, 10s and 30s) are selected to optimize the model. The optimization results begin to converge after 53 generations, and the optimal solution of the model is obtained at the time interval of 5s. At this time, the optimal solution of the model is 566142 yuan, which is 2736 yuan less than the optimal solution when the time interval is 10s, and 12549 yuan less than when it is 30s. Therefore, the optimal operation plan obtained at a time interval of 5 seconds achieves a significant cost reduction compared with the other two time intervals, which verifies the effectiveness of the model and algorithm.
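    The "strong constraint" treatment mentioned above, in which individuals violating capacity limits are rejected outright rather than penalized, can be sketched with a toy genetic algorithm. The horizon, demand profile, capacities and cost coefficients below are illustrative assumptions, not Wuhan Metro Line 1 data, and the hour-level decision variable stands in for the paper's second-level departure times.

```python
import random

HOURS = [0, 1, 2, 3]                  # hypothetical planning horizon
ARRIVALS = [600, 2400, 900, 300]      # time-varying passengers arriving per hour
TRAIN_CAP = 1200                      # train capacity (strong constraint)
COST_PER_TRAIN = 500.0                # operating cost per departure
WAIT_COST = 0.02                      # cost per passenger-minute of waiting

def feasible(trains):
    """Strong constraints: headway limits and enough capacity for each hour's demand."""
    return all(1 <= n <= 12 and n * TRAIN_CAP >= ARRIVALS[h]
               for h, n in enumerate(trains))

def cost(trains):
    """Total cost = operating cost + passenger waiting cost (uniform arrivals)."""
    total = sum(n * COST_PER_TRAIN for n in trains)
    for h, n in enumerate(trains):
        avg_wait_min = (60.0 / n) / 2.0
        total += ARRIVALS[h] * avg_wait_min * WAIT_COST
    return total

def ga(pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    def rand_ind():
        while True:                   # reject infeasible individuals at creation
            ind = [rng.randint(1, 12) for _ in HOURS]
            if feasible(ind):
                return ind
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(HOURS))
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(len(HOURS))] = rng.randint(1, 12)  # mutation
            if feasible(child):                # strong constraint on offspring too
                children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

    Rejecting infeasible chromosomes keeps the whole population inside the feasible region, which is why the search for a feasible initial solution is short.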
    The results show that the two-step genetic algorithm based on simulation has strong convergence and high solving efficiency. The “small and high-density” train operation plan can balance the interests of passengers and operating units. A small platform capacity severely limits the service level of urban rail transit and leads to increased operating costs: the smaller the station capacity is, the higher the operating costs and passenger waiting costs will be. According to the obtained optimal plan, relevant departments can optimize the train operation plan and determine a more reasonable one (including operation time, routing and train use plan, etc.), which will effectively improve passenger travel efficiency, reduce the operating cost of each train at the departure station, improve the service level of urban rail transit, and promote its high-quality development.
    Surgery Allocation and Optimization of Pelvic Fracture Patients Based on Stochastic Recovery Time
    LI Qing, SU Qiang, DENG Guoying
    2023, 32(8):  51-56.  DOI: 10.12005/orms.2023.0250
    In recent years, motor vehicle accidents and industrial accidents have occurred frequently, and pelvic fracture has become a common orthopaedic injury. Pelvic fracture is a severe trauma caused by direct compression of the pelvis, often accompanied by damage to other organs and systems, with a disability rate of up to 50%–60%. Based on the stability of the pelvis, pelvic fractures can be classified into three categories: A, B, and C. Type B and C fractures are usually recommended for surgical treatment. Based on the patient's life state at the time of admission, patients are divided into two types: convalescent patients and scheduled patients. Through examination, the fracture type of a scheduled patient is determined and the surgical plan is made. Convalescent patients have random recovery times and receive surgery once their life state is stable. When both types of patients are in the system, a surgical allocation strategy is developed to maximize the expected benefit. In the context of shared medical resources such as doctors, nurses and operating rooms, optimizing surgical arrangements and rationally allocating medical resources are crucial.
    A Markov decision process model with the objective of maximizing the expected benefit is established, and a backward iterative algorithm is used to obtain the optimal allocation strategy. The parameters of the two types of patients are designed according to the actual situation of the hospital. A case with 8 service periods is considered, and the optimal state path and decision path are obtained. The optimal allocation curves for different scenarios are drawn, and the structural properties of the optimal strategy are analyzed and proven. By changing the recovery time and the number of convalescent patients, the allocation strategies under different scenarios are obtained. Sensitivity analysis is also performed by adjusting the parameters of the two types of patients.
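    The backward iterative scheme can be sketched on a stylized version of the model. All parameters below (queue cap, benefits, holding cost, stabilization probability) are illustrative assumptions rather than the paper's hospital calibration; the state is (c, s) = numbers of waiting convalescent and scheduled patients, and one surgery slot is available per period.

```python
T = 8                 # number of service periods
MAX_Q = 5             # queue cap per patient type
P_READY = 0.6         # prob. a convalescent patient is stable this period
R_CONV, R_SCHED = 10.0, 6.0     # benefit per completed surgery
HOLD = 1.0            # per-period waiting penalty per patient

def value_iteration():
    """Backward induction: V[t][(c, s)] = max expected benefit from period t on."""
    V = [{(c, s): 0.0 for c in range(MAX_Q + 1) for s in range(MAX_Q + 1)}
         for _ in range(T + 1)]
    policy = [dict() for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for c in range(MAX_Q + 1):
            for s in range(MAX_Q + 1):
                best, arg = float("-inf"), "idle"
                for a in ("conv", "sched", "idle"):
                    if (a == "conv" and c == 0) or (a == "sched" and s == 0):
                        continue
                    nc = c - 1 if a == "conv" else c
                    ns = s - 1 if a == "sched" else s
                    r = {"conv": R_CONV, "sched": R_SCHED, "idle": 0.0}[a]
                    r -= HOLD * (nc + ns)
                    if a == "conv":
                        # surgery on a convalescent patient happens only if stable
                        q = P_READY * (r + V[t + 1][(nc, ns)]) \
                            + (1 - P_READY) * (-HOLD * (c + ns) + V[t + 1][(c, ns)])
                    else:
                        q = r + V[t + 1][(nc, ns)]
                    if q > best:
                        best, arg = q, a
                V[t][(c, s)] = best
                policy[t][(c, s)] = arg
    return V, policy
```

    Reading `policy[t]` row by row recovers the switching-curve structure: for each s there is a critical number of convalescent patients above which they are prioritized.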
    (1) With the quadratic penalty function, the optimal allocation curve takes the form of a switching curve. (2) There exists a critical index c*t(s), such that a convalescent patient is selected when c≥c*t(s) and a scheduled patient is selected when c<c*t(s). Moreover, the critical index c*t(s) is monotonically increasing: if s1≥s2, then c*t(s1)≥c*t(s2). (3) Scenarios with the same ncp always share the same allocation policy, which means that they have the same critical values for each s. Exceptions exist when ncp=6 and ncp=7, both of which have two optimal allocation policies. (4) The difference in critical values between scenarios is no more than the difference in ncp between them. The more convalescent patients there are, the greater the priority they receive.
    However, limitations still exist and more work remains to be done in the future. First, we start the system with a random number of patients; in fact, patients who have not been served in the previous planning horizon may still be waiting in the system. Second, one request of each patient type can arrive in every service period; batch arrivals are not considered, so the arrival patterns we use deviate from the practical situation. Third, the surgery time of each patient is assumed to be the same and equal to the length of the service period, whereas due to the different types of pelvic fractures and the distinct condition of each person, the surgery time may vary from patient to patient. Future research includes the following points: First, patients who were not served in the previous time period should be considered, and a reasonable initial value of the state should be set. Second, arrival patterns should be modified to consider more situations, and multi-facility and multi-patient-type problems should be modeled to further approach reality. Third, the planning horizon should be extended to allow continuous and sequential decisions.
    Outsourcing and Authorization Modes Considering Green Preference and Government Subsidies or Carbon Taxes
    FENG Zhangwei, XIAO Tiaojun, MOU Shandong
    2023, 32(8):  57-64.  DOI: 10.12005/orms.2023.0251
    Benefiting from the development of remanufacturing technology and the introduction of government subsidies or carbon tax policies, manufacturers take back used products to produce remanufactured products in closed-loop supply chains. However, although collecting used products and remanufacturing allow manufacturers to participate in sustainable operations, they are not core businesses for many manufacturers, and outsourcing them to professional third-party remanufacturers (3PRs) can be a strategic option. Therefore, in reality a manufacturer has two third-party remanufacturing modes: outsourcing (O) and authorization (A). Mode O entails the manufacturer outsourcing only the remanufacturing process to the 3PR. For example, Land Rover and Caterpillar have an agreement whereby Caterpillar Remanufacturing Services (CRS) acts as Land Rover’s lead global remanufacturing services provider: Land Rover outsources the remanufacturing operations to CRS and sells both new and remanufactured products to consumers. In mode A, by contrast, the manufacturer licenses both the remanufacturing process and the sales operations of remanufactured products to the 3PR. For example, Apple has an agreement with Foxconn whereby the latter acquires the proprietary rights to remanufacture used iPhones and then resells them in the Chinese market.
    To facilitate remanufacturing under these two third-party remanufacturing modes, many countries have introduced financial subsidies or carbon taxes on the remanufacturing process. Against this background, this paper discusses which third-party remanufacturing mode is optimal for a manufacturer, and examines the impacts of behavioral characteristics and government subsidies or carbon taxes on the optimal mode. To this end, this paper constructs a closed-loop supply chain consisting of one manufacturer, one retailer and one 3PR, in which the manufacturer is the leader and chooses the outsourcing or authorization mode.
    This paper mainly obtains the following management insights: (1) Under mode O, although the 3PR's profit is small under the subsidy mechanism, the mechanism is effective in increasing the retailer's profit; under the carbon tax mechanism, the 3PR will not participate in remanufacturing because remanufacturing is unprofitable. (2) Under mode A, both the manufacturer and the 3PR earn their largest profits under the subsidy mechanism, but the retailer and the government benefit less; when the government adopts the carbon tax mechanism, the firms and the government are in a multi-win situation.
    Several directions could be taken in future research. One may investigate the effect of competition between two homogeneous manufacturers. Other directions include considering dual-channel collection and the existence of a green consumer segment (primary and green consumers) to facilitate the investigation of sustainability, or examining how these factors affect the selection of third-party remanufacturing strategies.
    Optimal Aggregate Abatement and Equilibrium Price of Carbon Emission Permit in Emission Trading Market
    LIU Na, SONG Futie
    2023, 32(8):  65-70.  DOI: 10.12005/orms.2023.0252
    Global climate change is the most severe challenge facing humanity today, seriously affecting the sustainable development of the economy and society. Anthropogenic carbon emissions are the mainspring of global climate change. To control carbon emissions, the Chinese government announced a target at the Copenhagen Climate Conference in December 2009 to reduce carbon intensity by 40–45% relative to the 2005 level by 2020. In June 2015, China submitted its “Enhanced Actions on Climate Change—China’s Intended Nationally Determined Contributions”, which set a target of reducing carbon intensity by 60–65% relative to the 2005 level by 2030. To achieve China’s increasingly ambitious nationally determined contributions, a systematic emission reduction plan must be formulated, and a market-based, effective emission control tool such as carbon trading must be used. The national carbon trading market was officially launched in July 2021, and the carbon trading price is the key element of carbon trading. If the carbon trading price is set too low, real-world emissions may be higher than expected, making it difficult to achieve emission reduction targets; if it is set too high, the carbon trading market may not be able to optimize resource allocation. Therefore, it is necessary to formulate the future carbon trading price scientifically. Developing a coherent emission reduction plan and a reasonable carbon trading price can provide a theoretical basis for China to achieve its carbon emission reduction targets through market-based approaches, and provide a pricing benchmark for carbon emission allowances in the national carbon market.
    This article focuses on power companies participating in the national carbon trading market and constructs a stochastic optimal control model that minimizes the total compliance cost, including the cost of clean energy replacing non-clean energy generation, the depreciation cost of emission reduction equipment, the carbon trading cost, and the transaction friction cost. In this model, the optimal emission reduction and trading quantities are control variables, and the expected emission reduction, carbon quota price, and energy conversion price are state variables, with the boundary constraint that the expected emission reduction at the end of the compliance period equals zero, in order to realize the government’s promised emission reduction target from 2021 to 2030. This study uses the Hamilton-Jacobi-Bellman (HJB) equation to convert the optimal control problem into a partial differential equation problem. By solving the HJB equation, the optimal abatement and trading strategy for the enterprise is derived. We then obtain analytical solutions for the equilibrium price of carbon emission permits and the aggregate abatement.
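    The HJB reduction described above can be written schematically. This is a generic template for this class of compliance-cost problems, not the paper's exact specification: the cost terms, state dynamics and boundary condition below are illustrative assumptions.

```latex
% Firm minimizes expected compliance cost over abatement rate a_t and
% permit-trading rate q_t, with remaining abatement obligation X_t:
%   dX_t = -(a_t + q_t)\,dt + \sigma\,dW_t, \qquad X_T = 0 \ \text{(compliance)}.
% The value function V(t,x) satisfies the HJB equation
\min_{a,q}\Big\{\, c(a) + P_t\,q + \varphi(q)
  + \partial_t V - (a+q)\,\partial_x V
  + \tfrac{\sigma^2}{2}\,\partial_{xx} V \,\Big\} = 0 .
% With quadratic abatement cost c(a)=\tfrac{\beta}{2}a^2 and quadratic
% transaction friction \varphi(q)=\tfrac{\theta}{2}q^2, the first-order
% conditions give the optimal controls
a^\ast = \frac{\partial_x V}{\beta}, \qquad
q^\ast = \frac{\partial_x V - P_t}{\theta},
% so the marginal value of remaining abatement, \partial_x V, acts as the
% shadow carbon price that the equilibrium permit price tracks.
```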
    To validate the model, scenario analysis and sensitivity analysis are performed using real-world data. Through scenario analysis, the equilibrium carbon emission allowance price and the socially optimal emission reduction within the compliance period are obtained. Through sensitivity analysis, the marginal emission reduction costs of various clean energy sources replacing non-clean energy generation are compared, and it is found that the priority order for energy fuel usage by power generation companies is hydropower, solar, onshore wind, nuclear, offshore wind, and natural gas. Based on this, further research is needed on energy structure optimization in the power market under the constraints of economic optimization and the dual carbon targets, including the optimal proportions of wind power, solar power, hydropower (subject to water resource constraints), nuclear power, and fossil energy for power generation, the optimal usage ratio of each energy source, and the optimal installation ratio of each energy source for power generation.
    Research on the Scheduling of Heterogeneous Parallel Machines with Limited Intra- and Inter-transport Capacity in Virtual Manufacturing Cell
    GAO Longlong, HAN Wenmin
    2023, 32(8):  71-77.  DOI: 10.12005/orms.2023.0253
    As an important branch of flexible manufacturing, the virtual manufacturing cell (hereafter referred to as the “virtual cell”) has attracted much attention in recent years. The virtual cell refers to the formation of a logical production cell by selecting the required equipment from alternative equipment resources based on the similarity of production tasks and the production conditions or constraints, forming a logically interrelated virtual dynamic production entity connected by the logistics system without changing the physical layout of equipment and resources. It has the advantages of reducing production preparation time and work-in-process inventory and improving equipment utilization. The virtual cell is mostly used in the production of large and complex products, whose production and logistics organization are complex, and the transportation of workpieces is mostly done with large transportation equipment such as flatbed transporters, traveling cranes and forklifts. The effective implementation of the virtual cell thus depends on the effective flow of the production logistics system, and the transportation organization has an important impact on the development and implementation of the scheduling scheme. Existing studies related to virtual cells ignore factors such as intra- and inter-cell transport capacity and non-load transport time, and the heterogeneity of parallel machines is not sufficiently considered. These factors affect the recycling and availability of transportation equipment, which in turn affects the continuity of production logistics and the utilization of processing equipment. Consequently, the scheduling scheme often departs substantially from the actual production organization process or is not even feasible.
    In summary, this paper constructs a joint decision model of transportation organization and heterogeneous parallel machine scheduling under the constraints of intra- and inter-virtual-cell transportation capacity, non-load transportation time and heterogeneous parallel machines, with the objectives of minimizing the maximum completion time and minimizing the total transportation time. Moreover, an improved NSGA-II algorithm is proposed to solve the model. The proposed algorithm combines the crossover and mutation processes of particle swarm optimization and the genetic algorithm to improve the convergence speed. The evolution mechanism of the simulated annealing algorithm is applied to the mutation process of the genetic algorithm, which increases population diversity and prevents the genetic algorithm from easily falling into local optima. Simulations of small-, medium-, and large-scale examples indicate that the proposed algorithm outperforms the standard NSGA-II in terms of the C-metric, the diversity metric, the Inverted Generational Distance (IGD), and computation time. These findings reveal that the proposed algorithm has advantages over the standard NSGA-II in solution quality, diversity, robustness, and convergence.
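    The core ranking step of any NSGA-II variant, fast non-dominated sorting of the bi-objective population (here, makespan and total transportation time), can be sketched as follows. This is the generic textbook routine, not the authors' improved variant.

```python
def fast_nondominated_sort(objs):
    """objs: list of objective tuples to minimize, e.g. (makespan, transport_time).
    Returns fronts as lists of indices; front 0 is the Pareto-optimal set."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # number of solutions dominating i

    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    for i in range(n):
        for j in range(i + 1, n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
                dom_count[j] += 1
            elif dominates(objs[j], objs[i]):
                dominated_by[j].append(i)
                dom_count[i] += 1

    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1           # peel off the current front
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```

    In a full NSGA-II, these fronts are combined with a crowding-distance measure to select the next generation.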
    In actual production, there may be some variability in the transportation times of different workpieces over the same transportation distance, and the priorities of the processing and transportation tasks of each workpiece may differ. In future research, the solving capability of the algorithm can be further improved, while this variability and these priorities can be incorporated into the joint decision problem of transportation organization inside and outside the virtual cell and heterogeneous parallel machine scheduling.
    An Improved Estimation of Distribution Algorithm for A Single Machine Scheduling Problem with Re-entrant and Group Features
    YUAN Shuaipeng, LI Tieke, WANG Bailin, ZHANG Wenxin, ZHANG Zhuolun, YU Nana
    2023, 32(8):  78-84.  DOI: 10.12005/orms.2023.0254
    A single machine scheduling problem with re-entrant and group features is extracted from the realistic hot rolling production process of wide plates in the steel manufacturing industry. In this problem, jobs need to be processed in two processes, and a certain waiting time is required between them. To improve production efficiency, adjacent jobs may be processed in groups. This kind of scheduling problem with re-entrant and group features exists not only in the rolling shops of iron and steel plants, but also in other discrete manufacturing industries. However, to the best of our knowledge, no relevant research results have been reported. Therefore, it is of great significance to study this scheduling problem.
    This realistic production scheduling problem is first formulated as a mixed integer linear programming model with the goal of minimizing the makespan, which enables practitioners to solve small-scale instances using commercial solvers. It is then proven that the problem is strongly NP-hard. Furthermore, two key characteristics of the optimal solution are proven, which lays a solid foundation for the design of the algorithm.
    To solve this problem efficiently, an improved estimation of distribution algorithm (IEDA) is proposed according to the problem characteristics. Unlike other evolutionary algorithms, the IEDA uses neither crossover nor mutation; it generates new offspring according to a probabilistic model learned from a population of parents. First, a problem-specific heuristic is presented to construct the initial population. Then, an effective probabilistic model is designed, in which both the order of each job in the sequence and the similar blocks of jobs present in the selected parents are taken into account. After that, local search procedures based on maximum reduction are designed to quickly guide the IEDA to promising regions. Finally, some individuals in the current population are replaced with newly generated offspring. These steps are repeated until a stopping criterion is met.
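    The learn-and-sample cycle at the heart of an estimation of distribution algorithm can be illustrated with a minimal position-based probability model. The paper's model additionally exploits similar job blocks; that refinement, and the smoothing constant below, are omitted or assumed here for brevity.

```python
import random

def learn_model(elite, n_jobs, smooth=0.1):
    """Position-based probability model: P[pos][job] is proportional to how
    often `job` occupies position `pos` among elite sequences (Laplace-smoothed
    so that no job ever gets zero probability)."""
    P = [[smooth] * n_jobs for _ in range(n_jobs)]
    for seq in elite:
        for pos, job in enumerate(seq):
            P[pos][job] += 1.0
    return P

def sample(P, rng):
    """Sample a new job sequence position by position, without repetition."""
    n = len(P)
    remaining = set(range(n))
    seq = []
    for pos in range(n):
        jobs = list(remaining)
        weights = [P[pos][j] for j in jobs]
        job = rng.choices(jobs, weights=weights)[0]
        seq.append(job)
        remaining.remove(job)
    return seq
```

    In the full IEDA loop, the elite set is re-selected each generation, the model is re-learned, and sampled offspring are improved by the maximum-reduction local search before replacement.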
    The studied problem is NP-hard, so it is solved approximately by heuristic algorithms. Although such algorithms might be effective, knowing a lower bound for the problem enables us to assess how good they are. Therefore, two different lower bounds, both based on a mathematical analysis of the studied problem, are developed to effectively evaluate the performance of the proposed IEDA.
    In the computational experiments, the best combination of parameters for the IEDA is first identified using the Taguchi method. Since no existing algorithm can be applied directly, we select three state-of-the-art meta-heuristics developed for single machine scheduling problems. Extensive experiments are carried out, and the results show that the proposed IEDA not only generates better solutions but also performs more steadily across various problem sizes. We can therefore conclude that the proposed IEDA is superior to the other three algorithms for the considered problem.
    This work only studies the case where two jobs are processed in a group. In actual industrial production, three or more jobs may be processed in a group. In further research, we will study this case and design a more efficient algorithm by mining scheduling rules based on the problem characteristics.
    Research on the Operational Regulation Behavior in WTE PPP Project: Evolutionary Game Analysis Based on Social Participation
    QUAN Xiongwei, ZUO Gaoshan
    2023, 32(8):  85-92.  DOI: 10.12005/orms.2023.0255
    Effective implementation of and compliance with operational regulation in WTE PPP projects is a key factor in the success and sustainable development of such projects. In the game of a WTE PPP project, the two parties are two populations: the local government and social capital. Under conditions of information asymmetry and resource restriction, each party chooses its behavior according to its own short-term interests. The local government may make two choices in the operational regulation of a WTE PPP project: strict or lax supervision. Social capital likewise has two behavior choices: compliance with or violation of regulations.
    To date, most research on WTE projects has focused on key success factors, risk management, benefit distribution, concession pricing and other related issues, while there is little research on operational regulation. Moreover, most existing studies use traditional game theory to conduct static analyses of the strategic behaviors of the subjects, which makes it difficult to describe the interactive process of the complex strategic behaviors of different participants. In view of this, this paper employs evolutionary game theory from the perspective of social participation: a two-dimensional dynamic system model of the game between the local government and social capital is constructed, the behavior selection of both parties in the model is analyzed, and four propositions are put forward and proved. Then, using MATLAB, numerical simulations of the evolution of behavior selection between the local government and social capital in different scenarios are carried out.
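    The two-dimensional dynamic system of this kind of two-population game can be sketched with replicator dynamics. The payoff parameters below (supervision cost, fine, accountability loss, violation gain, reputation loss) are illustrative assumptions, not the paper's calibration, and the simple inspection-game payoff structure is our own simplification.

```python
def replicator_step(x, y, dt=0.01):
    """One Euler step of two-population replicator dynamics.
    x: share of local governments choosing strict supervision;
    y: share of social capital complying with regulations."""
    C_REG = 2.0    # cost of strict supervision
    FINE = 5.0     # fine collected from a caught violator under strict supervision
    ACC = 3.0      # accountability loss for a lax government when violation occurs
    GAIN = 4.0     # extra profit social capital earns by violating
    REP = 3.5      # reputation loss for a caught violator
    # Expected payoffs of each pure strategy against the other population
    u_strict = -C_REG + (1 - y) * FINE
    u_lax = -(1 - y) * ACC
    u_comply = 0.0
    u_violate = GAIN - x * (FINE + REP)
    # Replicator update: strategies above the population mean grow
    xbar = x * u_strict + (1 - x) * u_lax
    ybar = y * u_comply + (1 - y) * u_violate
    x += dt * x * (u_strict - xbar)
    y += dt * y * (u_comply - ybar)
    return min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)

def simulate(x0=0.5, y0=0.5, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        x, y = replicator_step(x, y)
    return x, y
```

    With payoffs of this inspection-game type the trajectory may converge or cycle depending on the parameters, which is exactly what the scenario simulations in the paper explore.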
    The results of this study indicate that, driven by profit, social capital has a strong motivation to violate regulations. The effect of social participation on local government behavior is significantly stronger than its effect on the private sector, and as social participation increases, the local government shows obvious “free riding” behavior. The constraining effect of reputation on the private sector's behavior is related to the degree of social participation. For example, when social participation is high, a large enough reputation loss can induce social capital to comply with regulations. When social participation is low, accountability exerts a certain constraint on the lax behavior of local governments; however, as long as local governments can obtain greater benefits, they still tend toward lax supervision. Higher political praise can effectively motivate and guide local governments to adopt strict supervision, and under this condition, supplemented by the reputation mechanism, the ideal (strict, compliance) state can finally be achieved.
    Evolutionary Game Research on the Collaborative Innovation of Agricultural Machinery Equipment Industry-university-research under Government Regulation
    SHI Huan, LI Hongbo
    2023, 32(8):  93-100.  DOI: 10.12005/orms.2023.0256
    In recent years, in order to implement the Outline of the National Program for Medium- and Long-Term Scientific and Technological Development, China has made remarkable progress and achievements in the scientific and technological innovation of agricultural machinery and equipment. In the meantime, the transformation, upgrading and high-quality development of China's agricultural machinery and equipment industries have accelerated, giving strong support and guidance to modern agriculture. Besides, China has promoted the formation of an industry-university-research alliance for agricultural machinery equipment through system construction and the introduction of relevant policies. Industry-university-research collaborative innovation in the field of agricultural machinery equipment is an important means of guiding all resources to promote the transformation, popularization and diffusion of advanced and applicable agricultural machinery technology. However, there are obvious shortcomings in the current collaborative innovation mechanism, which exposes problems such as the lack of common basic research on agricultural machinery equipment, the lack of government supervision, and the weak matching of talents among industry, university and research partners. How to give full play to the role of policy guidance and provide source power for the independent innovation and upgrading of China's agricultural machinery equipment industry system has become an urgent problem to be solved.
    The research results show that: (1) The industry-university-research collaborative innovation of agricultural machinery equipment is a complex system process involving multiple actors such as agricultural machinery equipment enterprises, academic and research institutions, and government departments. The government plays an active role at every stage of collaborative innovation, but the government alone cannot change the direction of system evolution; agricultural machinery equipment enterprises and academic and research institutions are influenced more by each other's willingness to collaborate than by government guidance. (2) Despite the lag of policy promotion in the process of collaborative innovation, government regulation can still effectively restrict speculation, and agricultural machinery equipment enterprises are more sensitive to subsidy policies than academic and research institutions. At the same time, government subsidies present a “crowding out effect” and a “deviation effect” in the collaborative innovation process. (3) A reasonable income ratio for collaborative innovation is conducive to the optimal state of the system, and agricultural machinery equipment enterprises perceive the income distribution ratio more sensitively; a lack of expected income and profit will weaken their willingness to collaborate. When the income distribution ratio is seriously maladjusted, the academic and research side will decisively choose negative behaviors.
    Structural Reliability Analysis Based on Two-stage Local Sampling Strategy
    XIAO Tianli, MA Yizhong, LIN Chenglong
    2023, 32(8):  101-107.  DOI: 10.12005/orms.2023.0257
    Multi-source uncertainty often exists in engineering practice, which affects the safe operation of engineering structures. Structural reliability analysis accounts for this uncertainty through the failure probability, which provides effective guidance for structural safety design. However, evaluating the failure probability often requires a large number of calls to the actual performance function, leading to unaffordable computational expense, especially for time-consuming models. To solve this problem, Kriging-based reliability analysis methods have attracted much attention in recent years. In these methods, a well-trained Kriging model replaces the actual performance function, thus improving the computational efficiency of failure probability estimation. Note that the number of calls to the actual performance function equals the size of the design of experiment, and the estimation accuracy of the failure probability depends on the approximation quality of the Kriging model. Therefore, the goal of Kriging-based reliability analysis methods is to reduce the number of training samples as much as possible while ensuring the accuracy of the failure probability.
    In this paper, an efficient structural reliability analysis method based on a two-stage local sampling strategy is proposed under the framework of active learning Kriging to improve the estimation accuracy and computational efficiency of failure probability. In the process of active learning, an inaccurate Kriging model is first established based on a small initial design of experiment; the design of experiment is then sequentially enriched and the Kriging model gradually refined with the expected feasibility function and the two-stage local sampling strategy, until the stopping criterion is met. Since new samples are designed based on the prediction information of the Kriging model from previous iterations, the sequential design is more efficient than a one-shot design. Moreover, the two-stage local sampling strategy reduces the size of the candidate pool, which improves the efficiency of searching for new samples. In the first stage, the new sample is selected from a local region whose sampling center is the mean point of the design variables and whose extent is defined by their joint probability density. When the estimated failure probability reaches the stage-division threshold based on a confidence interval, the second-stage sampling starts. In the second stage, the sampling center is located at the most likely failure point, and the sampling region is determined by the target reliability and the nonlinearity of the performance function. In fact, both sampling methods have disadvantages when used alone: the samples selected in the first stage may contribute little to the accuracy of the Kriging model when the sampling center is far from the limit state boundary, while the failure probability estimated in the second stage may be inaccurate when the obtained most likely failure point deviates greatly from the actual one. The proposed local sampling strategy overcomes these shortcomings by switching adaptively between the two stages. Finally, the failure probability is estimated by combining the refined Kriging model with Monte Carlo simulation.
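    As a hedged illustration of two ingredients named above, the expected feasibility function that drives the sequential enrichment and the final Monte Carlo estimate of the failure probability, here is a minimal pure-Python sketch. The EFF form follows the standard definition with ε = 2σ; the surrogate and limit state are toy stand-ins, not the paper’s examples:

```python
import math, random

def norm_pdf(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)

def norm_cdf(t):
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def expected_feasibility(mu, sigma, z_bar=0.0):
    """Expected feasibility function for the limit state g(x) = z_bar, given
    the Kriging prediction mean mu and standard deviation sigma; large values
    mark candidate points worth adding to the design of experiment."""
    eps = 2.0 * sigma
    t0 = (z_bar - mu) / sigma
    tm = (z_bar - eps - mu) / sigma
    tp = (z_bar + eps - mu) / sigma
    return ((mu - z_bar) * (2 * norm_cdf(t0) - norm_cdf(tm) - norm_cdf(tp))
            - sigma * (2 * norm_pdf(t0) - norm_pdf(tm) - norm_pdf(tp))
            + eps * (norm_cdf(tp) - norm_cdf(tm)))

def mc_failure_probability(surrogate_mean, samples):
    """Monte Carlo estimate P_f = P[g(x) <= 0] using the surrogate mean."""
    return sum(surrogate_mean(x) <= 0 for x in samples) / len(samples)

# EFF peaks near the limit state (mean close to 0) and shrinks far from it,
# which is what concentrates new samples around the boundary.
assert expected_feasibility(0.1, 1.0) > expected_feasibility(3.0, 1.0)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100000)]
# Toy limit state g(x) = x + 1.5; the exact Pf is Phi(-1.5) ≈ 0.0668.
pf = mc_failure_probability(lambda x: x + 1.5, xs)
```

In the actual method the surrogate mean would come from the refined Kriging model rather than a closed-form toy function.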
    Three application examples with different dimensions and complexity are employed to verify the performance of the proposed method. Among the compared methods, Monte Carlo simulation has high accuracy when enough samples are generated, so its estimate is regarded as the reference. For fairness, the other three compared methods are all based on active learning Kriging and involve both global and local sampling strategies. The comparison results show that the proposed method balances global and local exploration in the effective sampling area and achieves high accuracy and efficiency in failure probability estimation. Note that the number of Monte Carlo samples may be very large when a small failure probability is involved, which decreases computational efficiency. To accelerate the convergence of the failure probability estimate, advanced simulation techniques such as importance sampling, subset simulation, and line sampling will be incorporated in future work.
    Node Importance Recognition Method of Temporal Network Based on Mutual Information between Layers
    DENG Zhiwen, LI Xinchun, KONG Jie, WANG Dafu
    2023, 32(8):  108-113.  DOI: 10.12005/orms.2023.0258
    With the vigorous development of social network platforms, complex network analysis has gradually received extensive attention from researchers in various disciplines. Complex networks can be used to describe and study the Internet, social networks, scientific research cooperation networks, and paper citation networks. Identifying important nodes is a key part of complex network research, and has been widely applied in fields such as information diffusion mechanisms and control, and advertising and marketing strategy formulation. Existing methods for identifying node importance are mainly based on the global and local structural characteristics of the network, on the location of nodes within it, or on its dynamic characteristics; all of these assume a time-independent static network. Temporal networks carry time information: their edges appear and disappear over time, which differs significantly from the topology of traditional static networks, and the network data include not only nodes and edges but also contact time information. Therefore, methods for identifying node importance in temporal networks also differ significantly from traditional methods.
    A temporal network is built on the time-window model, which divides the network into multiple time slices to form a network sequence. The importance of a node is essentially its influence; from the perspective of information theory, it is the amount of information the node transmits to other nodes. In a temporal network, this amount differs across windows because the network structure of each window differs. Existing research computes the inter-layer mutual information from a global perspective, using the edge probability distribution and the joint probability distribution, to obtain the overall correlation between layers of the network. However, the centrality of a node in a single network slice cannot reflect its importance across the entire sequence; the calculation of node importance should reflect both the local-time characteristics of nodes and their global changes.
    This paper proposes a method to calculate node importance in temporal networks based on inter-layer mutual information. The method first calculates the eigenvector centrality index within each time window. It then designs an algorithm that computes the correlation between nodes from the probability distribution of node edges within a window, and calculates the mutual information of nodes across layers using the edge probability distribution within each window and the joint probability distribution of nodes between adjacent windows. The resulting correlation coefficient matrix is defined as the change in information propagation between nodes across windows, and is taken as the temporal coefficient of each node in each window. Finally, the importance of each node in the temporal network is evaluated by combining the eigenvector centrality index with the temporal coefficient of the time window.
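    The abstract leaves the exact joint-distribution construction implicit. One hedged reading treats each node pair’s edge presence in two adjacent windows as a pair of binary variables and computes their mutual information; a minimal sketch under that assumption:

```python
import math
from itertools import combinations

def interlayer_mutual_information(edges_t, edges_t1, nodes):
    """Mutual information (in bits) between adjacent layers, treating each
    node pair's edge presence as a binary variable in each time window.
    This is one simplified reading of the joint edge-probability idea in
    the abstract, not the paper's exact construction."""
    et = set(map(frozenset, edges_t))
    et1 = set(map(frozenset, edges_t1))
    joint = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    pairs = list(combinations(sorted(nodes), 2))
    for pair in pairs:
        key = (int(frozenset(pair) in et), int(frozenset(pair) in et1))
        joint[key] += 1
    n = len(pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        if c == 0:
            continue
        pab = c / n
        pa = sum(v for k, v in joint.items() if k[0] == a) / n
        pb = sum(v for k, v in joint.items() if k[1] == b) / n
        mi += pab * math.log2(pab / (pa * pb))
    return mi

nodes = [1, 2, 3, 4]
same = [(1, 2), (2, 3), (3, 4)]
# Two identical layers are maximally dependent: 1 bit of mutual information.
print(interlayer_mutual_information(same, same, nodes))  # → 1.0
```

Note that mutual information measures statistical dependence, not similarity, so it stays high whenever one layer is predictable from the other.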
    Finally, the effectiveness of the method is tested on real social datasets from a manufacturing company and Enron. The accuracy of the algorithm is evaluated with a modified SIR model and compared against the temporal K-kernel decomposition method. The experimental results show that the node importance results of this method have certain advantages over other methods. This paper only analyzes changes in node correlation between adjacent time windows, which is a limitation: information hidden across the whole sequence or across multiple windows is lost. How to measure changes in node relationships by combining multiple windows or the whole sequence therefore needs in-depth research.
    Two-period Advertising and Dynamic Pricing Strategy of Omni-channel Considering Consumer Strategy Behavior
    HU Jiao, LI Li, ZHU Xingzhen, ZHANG Hua, YANG Wensheng
    2023, 32(8):  114-121.  DOI: 10.12005/orms.2023.0259
    With the rapid development of the digital economy, the omni-channel retail model, which integrates online and offline channels, is becoming more and more popular, and it is becoming easier for consumers to access product information from different channels. To compete for a larger market share, omni-channel retailers often stage price promotions to attract consumers. Although promotions and discounts can quickly increase sales revenue, they induce more consumers to wait for the discount stage to purchase, making purchasing behavior in the market increasingly strategic. Consumer strategic behavior means that consumers compare the utility of purchasing in different channels and periods (the normal period and the discount period), adjusting the timing of purchase according to their expectation of the product’s future price and waiting for a price reduction. In recent years, the influence of strategic consumer behavior on business operation decisions and profitability has attracted great attention from both academia and practice.
    This study combines the effects of advertising and price on the purchasing utility of strategic consumers, and explores how an omni-channel retailer can develop a two-period (normal and discount period) advertising strategy when facing strategic consumers, and how, given its advertising decisions, the retailer can price dynamically to gain greater market share and profit. To answer these questions, we introduce consumer strategic behavior into the omni-channel retailing scenario and, based on the principle of maximizing consumers’ two-period utility across channels, construct three advertising decision models for omni-channel two-period advertising and dynamic pricing: no ads in either period, ads in the normal period, and ads in the discount period. We then discuss the retailer’s optimal response mechanism, and the models are checked by numerical study.
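    The purchase-timing rule underlying such models can be sketched as a utility comparison. The functional forms and parameters below are hypothetical illustrations, not the paper’s model:

```python
def buy_now(v, p1, p2_expected, theta, alpha=0.0, ad1=0.0, ad2=0.0):
    """A strategic consumer with valuation v buys in the normal period iff
    immediate surplus beats the discounted expected surplus of waiting.
    theta in [0,1] is the strategic (patience) level; alpha scales how much
    advertising in a period lifts perceived utility. All forms hypothetical."""
    u_now = v - p1 + alpha * ad1
    u_wait = theta * (v - p2_expected + alpha * ad2)
    return u_now >= u_wait

def normal_period_demand(p1, p2_expected, theta, n=1000,
                         alpha=0.3, ad1=0.0, ad2=0.0):
    """Share of consumers (valuations uniform on [0,1]) who buy early."""
    buyers = sum(buy_now(i / n, p1, p2_expected, theta, alpha, ad1, ad2)
                 for i in range(1, n + 1))
    return buyers / n

# A more strategic market (larger theta) shifts demand to the discount period,
# which is the mechanism the abstract's results revolve around.
d_low = normal_period_demand(0.6, 0.4, theta=0.2)
d_high = normal_period_demand(0.6, 0.4, theta=0.9)
print(d_low > d_high)  # → True
```

Raising ad1 (normal-period ads) in this toy rule pulls demand forward, consistent with the result below that advertising in either period benefits normal-period sales.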
    The results show that: 1)When the consumer’s strategic level is low, the retailer that advertises in the normal period (discount period) sets a higher (lower) price in the normal period and a lower (higher) price in the discount period. As the consumers’ strategic level increases, the price difference between the two periods gradually decreases. When the strategic level is high, the discount-period price is higher under the no-ads strategy, and the retailer can enlarge the two-period price adjustment space through advertising. 2)The retailer’s optimal advertising level under the normal-period advertising strategy is greater than under the discount-period strategy. Compared with no ads, advertising in the first stage leads to the highest normal-period demand and the lowest discount-period demand; and as the advertising influence coefficient increases, advertising in the second stage also raises normal-period demand and lowers discount-period demand continuously. This suggests that advertising in either period benefits normal-period sales. 3)When the consumer’s strategic level is low or the advertising influence coefficient is low, advertising in the normal period yields the retailer’s best profit; when the strategic level is high and the advertising influence coefficient is moderate, advertising in the discount period is optimal; when both are high, not advertising in either period is the optimal strategy. The research results provide useful theoretical suggestions for two-period advertising and dynamic pricing by omni-channel retailers.
    Application Research
    Green Product Innovation and Government Tariff Policy with Competition against Multinational Firms
    WU Yifan, ZHANG Qian, CHEN Jing
    2023, 32(8):  122-128.  DOI: 10.12005/orms.2023.0260
    With the increasing severity of environmental problems, consumers pay more attention to the greenness of products when purchasing, and environmental protection and sustainable development place higher requirements on green product innovation. Green product tariff policy is an important factor influencing multinational enterprises’ green product innovation decisions: by reducing tariffs on green products, the government allows foreign green products to enter the local market and, through market competition, pushes local enterprises to accelerate green product innovation. How the government can use tariff policy to regulate green innovation input and competition intensity in the local market so as to secure both domestic environmental and economic benefits is an urgent issue. Specifically, the problem has two levels. The first is the micro-level decision-making of enterprises under a given tariff policy: how do multinational and local enterprises make green product innovation and output decisions, and how do market sensitivity and product substitutability affect those decisions? The second, building on firms’ decision-making behavior and the role of tariff regulation, is how tariff policies differ when the government maximizes the overall greenness of the market versus social welfare.
    This paper considers a quantity-based competition model between a local firm and a multinational. Firms carry out green product innovation and R&D and manufacture green products with a certain degree of greenness, and the products they produce are substitutable to a certain degree. A three-stage game model is used to study the green product innovation of multinational and local enterprises and the government’s tariff design in a competitive environment: the impact of government tariff regulation on enterprises’ green product innovation and output decisions is analyzed first, and the government’s tariff decisions are then discussed.
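    For the output stage of such a game, a standard Cournot duopoly in which the foreign firm bears a per-unit tariff has closed-form quantities. The sketch below omits the paper’s green-innovation stage and uses illustrative parameters:

```python
def cournot_with_tariff(a, b, c_d, c_f, t):
    """Closed-form Cournot quantities when the foreign (multinational) firm
    bears a per-unit tariff t. Inverse demand p = a - b*(Qd + Qf); the
    green-innovation effects of the paper's model are omitted here."""
    q_d = (a - 2 * c_d + c_f + t) / (3 * b)
    q_f = (a - 2 * (c_f + t) + c_d) / (3 * b)
    return q_d, q_f

def best_response(a, b, own_c, rival_q):
    """argmax_q (a - b*(q + rival_q) - own_c) * q  =  (a - own_c - b*rival_q)/(2b)."""
    return max(0.0, (a - own_c - b * rival_q) / (2 * b))

a, b, c_d, c_f, t = 10.0, 1.0, 2.0, 1.0, 1.5
q_d, q_f = cournot_with_tariff(a, b, c_d, c_f, t)
# Fixed-point check: each quantity is a best response to the other.
assert abs(best_response(a, b, c_d, q_f) - q_d) < 1e-9
assert abs(best_response(a, b, c_f + t, q_d) - q_f) < 1e-9
# A higher tariff shifts output toward the domestic firm.
q_d2, q_f2 = cournot_with_tariff(a, b, c_d, c_f, t + 1.0)
print(q_d2 > q_d and q_f2 < q_f)  # → True
```

In the full three-stage game the cost terms would themselves depend on the first-stage green innovation choices, which is where the tariff’s effect on innovation incentives enters.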
    First, we study the role of tariffs in firms’ green product innovation input and market competition. It is found that when the market’s green sensitivity is low, a higher tariff strengthens firms’ incentive to invest in green product innovation; compared with multinational companies, domestic companies have a stronger willingness to innovate and respond more to changes in market sensitivity. When market sensitivity is too high, high tariffs simultaneously restrain the willingness of both types of companies to innovate. In terms of government tariff design, this paper finds that when the government aims to maximize market greenness, a zero-tariff policy under high market sensitivity is the optimal decision; when the government aims to maximize social welfare and market sensitivity is higher, the government will raise the tariff to inhibit competition among enterprises, and the tariff increases with market sensitivity and the green substitutability coefficient.
    This paper focuses on the interaction between government tariff regulation and the green innovation decisions of local and multinational enterprises. Future research can consider the impact of government green subsidy policies on enterprises. The model in this paper does not consider market uncertainty; an extended model with market uncertainty and information asymmetry can also be considered in the future. These extensions require further study and demonstration.
    Research on Demand-oriented Incentive Comprehensive Evaluation Method with Double Incentive Critical Points
    GONG Chengju, DU Mingyue, PAN Xia, JIANG Jingui
    2023, 32(8):  129-136.  DOI: 10.12005/orms.2023.0261
    Comprehensive evaluation is an important research direction of management science and engineering, systems engineering, and information science, and it is an important premise of scientific decision-making. At present, research on comprehensive evaluation mainly focuses on obtaining evaluation results that are then used to rank or sort the evaluated objects; measurement is its prominent function. However, to extend its application fields and value contribution, more and more research shifts the function of comprehensive evaluation from measurement to management. A representative line of work integrates incentive effects into the construction of comprehensive evaluation methods so as to guide the evaluated objects.
    Based on existing research results and the definition of incentive, this paper finds that motivating the evaluated objects by addressing the reasonable demands of both the evaluator and the evaluated objects during evaluation greatly improves the effectiveness and acceptability of an incentive evaluation method. Therefore, this paper proposes a demand-oriented incentive comprehensive evaluation method with double incentive critical points. The key feature of this method is that the evaluation demands of the evaluation demander and of the evaluated objects guide the construction of the incentive evaluation method, and different indicators of each evaluated object are motivated separately, which supports accurate guidance of the evaluated objects. First, the paper defines the problem, sets out the demands of the evaluation demanders and the evaluated objects, and gives the idea and technology roadmap for constructing the method. Second, considering the demands of the evaluated objects, and based on the impact of different evaluation indicators on each object’s ranking value, a method for determining each object’s strength and weakness indicators is provided, together with a method for determining the incentive starting point of each type of indicator; on this basis, the incentive amount of each indicator and the total incentive amount are determined. Third, considering the demands of the evaluation demanders, indicators of evaluated objects are permitted to receive no incentive, and, to maximize the balance and differentiation of the overall development of all evaluated objects, a method for determining the double incentive critical points of different indicators is constructed. By setting incentive coefficients and constructing methods for determining them, the incentive amount of each indicator and the total incentive amount under the demanders’ perspective are calculated. The final evaluation result is then obtained by aggregating the objective pre-incentive evaluation results with the incentive amounts computed under the demands of the evaluated objects and under the demands of the evaluation demanders. Finally, an example introduces the application process of the proposed method, and a comparative analysis with existing research is conducted. It is verified that clarifying reasonable demands and incentive methods before applying incentives is crucial to the incentive results, proving the effectiveness and necessity of the proposed method. Compared with existing results, the method places more emphasis on the roles of the evaluated objects and the evaluators, and can diagnose the development status of the evaluated objects in more detail. While improving the evaluated objects’ acceptance of the incentive results, it can also play an important management role in effectively guiding their development.
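    As a rough illustration only (the exact incentive formulas are not given in the abstract, so the piecewise form and coefficients below are hypothetical), a double-critical-point incentive can be sketched as rewarding indicator scores above an upper critical point and penalizing those below a lower one:

```python
def incentive_amount(score, lower_cp, upper_cp, k_pos=0.1, k_neg=0.1):
    """Piecewise incentive with double critical points (hypothetical form):
    reward above the upper point, penalize below the lower, none between."""
    if score >= upper_cp:
        return k_pos * (score - upper_cp)
    if score <= lower_cp:
        return -k_neg * (lower_cp - score)
    return 0.0

def evaluate(scores, weights, lower_cp, upper_cp):
    """Final value = weighted objective score + summed per-indicator incentives."""
    base = sum(w * s for w, s in zip(weights, scores))
    inc = sum(incentive_amount(s, lower_cp, upper_cp) for s in scores)
    return base + inc

weights = [0.4, 0.3, 0.3]
strong = evaluate([0.9, 0.8, 0.85], weights, 0.5, 0.75)
weak = evaluate([0.4, 0.45, 0.5], weights, 0.5, 0.75)
print(strong > weak)  # → True
```

Motivating each indicator separately, as here, is what lets the method diagnose strength and weakness indicators per evaluated object rather than adjusting only the aggregate score.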
    In future research, the proposed method will be extended to dynamic evaluation, group evaluation, and large-scale group evaluation problems. Solutions to incentive evaluation problems that consider the demands of multiple evaluation agents will also be explored, to further promote the management effectiveness of comprehensive evaluation.
    Personal Credit Risk Assessment Based on Improved BS-Stacking
    GU Qinghua, SONG Siyuan, ZHANG Xinsheng, BAO Ziqi
    2023, 32(8):  137-144.  DOI: 10.12005/orms.2023.0262
    Credit risk is the core content of today’s risk management, and a lot of research has been done on practical models that support debtor credit assessment, pricing of credit risk instruments, measurement and control of credit risk exposure, and portfolio credit loss analysis. Personal credit risk refers to the possibility of default due to failure to repay debts or loans in time and in full for various reasons, and its degree directly affects the strength of credit. In the context of increasing personal credit default risk, in order to enable enterprises to accurately identify personal credit risks, this paper presents a personal credit risk assessment method based on the improved BS-Stacking model.
    This paper obtains bank data related to individual credit risk from the German credit dataset published by UCI. The dataset has 1,000 samples, 700 positive and 300 negative, with 24 indicator attributes per sample. According to the characteristics of the personal credit risk data, we first use the improved Borderline SMOTE 2 algorithm to oversample the data, strengthening the identification of the minority-class boundary region while removing noise points, so as to ensure accurate prediction of default samples. In addition, to address the redundancy among classifiers in the Stacking algorithm, which may reduce prediction performance, grid search is used for parameter tuning and logistic regression (LR) is used to analyze the contribution of each individual learner, so as to obtain the optimal combination of individual learners and the best performance of the whole model.
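    The core of Borderline SMOTE, classifying minority samples as safe, danger, or noise by their majority-class neighbor counts and synthesizing new points only around danger samples, can be sketched in a few lines. This is a simplified illustration of the base technique, not the paper’s improved Borderline SMOTE 2 variant:

```python
import math, random

def knn(x, data, k):
    """k nearest (point, label) pairs to x by Euclidean distance."""
    return sorted(data, key=lambda p: math.dist(x, p[0]))[:k]

def borderline_minority(samples, k=3, minority=1):
    """Label each minority sample 'safe', 'danger' (borderline) or 'noise'
    by the share of majority-class points among its k nearest neighbors,
    as in Borderline-SMOTE; only 'danger' samples seed synthetic points."""
    out = {}
    for x, y in samples:
        if y != minority:
            continue
        others = [p for p in samples if p[0] != x]
        maj = sum(1 for _, yy in knn(x, others, k) if yy != minority)
        out[tuple(x)] = ("noise" if maj == k else
                         "danger" if maj >= k / 2 else "safe")
    return out

def synthesize(x, neighbor, rnd):
    """New minority sample on the segment between a danger point and a
    minority neighbor (same interpolation factor on every coordinate)."""
    g = rnd.random()
    return [a + g * (b - a) for a, b in zip(x, neighbor)]

# Toy data: majority cluster near the origin, minority cluster far away,
# plus two minority points sitting on the class boundary.
samples = [([0, 0], 0), ([0.5, 0.5], 0), ([1, 0], 0), ([0, 1], 0), ([1.2, 1.2], 0),
           ([1.1, 1.1], 1), ([1.3, 1.3], 1),
           ([5, 5], 1), ([5.1, 5.1], 1), ([4.9, 4.9], 1), ([5, 4.9], 1)]
out = borderline_minority(samples, k=3)
print(out[(5, 5)], out[(1.1, 1.1)])  # → safe danger
pt = synthesize([1.1, 1.1], [1.3, 1.3], random.Random(0))
print(all(1.1 <= c <= 1.3 for c in pt))  # → True
```

Concentrating synthesis on the danger region, rather than on all minority samples as in plain SMOTE, is what sharpens the decision boundary for default prediction.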
    In this study, Accuracy and AUC measure prediction accuracy, and Precision, Recall, F1 score, and Specificity measure model validity. In the initial algorithm, a total of 8 models, SVC, GBDT, RF, AdaBoost, XGBoost, LightGBM, KNN, and LR, are selected as individual learners, with LR as the meta-learner trained on their outputs. After imbalance processing and comparative experiments, a preliminarily screened model is formed. After analyzing the contribution of each base model and testing the whole model, the optimal combination is obtained and the ensemble reaches its best performance. The experiments demonstrate, from multiple angles, the effectiveness of the imbalance-handling algorithm and the ensemble algorithm, and show that the method achieves high accuracy and robustness in personal credit risk assessment.
    Research on the Effectiveness of Electric Vehicle Quality Assurance Service and Charging Facility Construction Level on Sales Volume
    BAI Hua, TAN Deqing
    2023, 32(8):  145-151.  DOI: 10.12005/orms.2023.0263
    In order to ensure energy security and promote energy conservation and emission reduction, China’s automotive industry is rapidly transitioning to new energy vehicles. Between 2014 and 2019, the production and sales of new energy vehicles grew rapidly under a series of government incentives. However, since the implementation of the new subsidy policy for new energy vehicles in June 2019, sales have continued to decline, indicating that, without policy subsidies, consumer demand for new energy vehicles is clearly insufficient. For electric vehicles, imperfect charging facilities and unstable vehicle quality keep sales far below those of fuel vehicles. To achieve green development in China, it is necessary to vigorously promote electric vehicles and other new energy vehicles. At present, electric vehicle sales are mainly policy-driven; to achieve truly market-driven growth, consumers’ willingness to purchase must increase. Both the construction level of charging facilities and the quality of electric vehicle quality assurance service affect the utility consumers derive from electric vehicles, and thus their purchase intention and market demand. Moreover, the parallel development of electric and fuel vehicles until electric vehicles completely replace fuel vehicles is a long process that requires consideration of time continuity.
    This paper starts from the impact of the charging facility construction level and electric vehicle quality assurance service on consumer utility, and considers consumer heterogeneity. It constructs a differential game model between electric vehicle manufacturers and fuel vehicle manufacturers to study how quality assurance service and the charging facility construction level affect electric vehicle sales when electric vehicle manufacturers maximize profits in a competitive environment. Under the basic assumptions and implicit conditions, the model parameters are assigned values for numerical simulation.
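    The abstract does not give the state equations. As a heavily hedged stand-in, a Nerlove-Arrow-style goodwill dynamic illustrates the time-continuous structure such differential game models build on (all functional forms and parameters are hypothetical, not the paper’s model):

```python
def simulate(T=200, dt=0.05, s=0.6, f=0.4, delta=0.1, rho=0.3):
    """Goodwill sketch: EV brand goodwill G grows with quality-assurance
    service input s and the charging-facility level f, decays at rate delta,
    and sales are taken as proportional to G. Purely illustrative stand-in
    for the paper's differential game."""
    G, sales = 0.0, []
    for _ in range(T):
        G += dt * (s + rho * f - delta * G)  # Euler step of dG/dt
        sales.append(G)
    return sales

base = simulate()
more_service = simulate(s=0.9)
# Higher service input lifts the whole sales trajectory in this toy dynamic.
print(more_service[-1] > base[-1])  # → True
```

The paper’s model would additionally include the fuel vehicle rival’s controls and the threshold effects reported below; this sketch only shows why the analysis must track state variables over time rather than in a one-shot game.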
    The results show that electric vehicle manufacturers can effectively promote sales by increasing quality assurance service input or extending the warranty period only after the critical point of quality assurance service is exceeded, and the two levers have interactive positive effects on sales. At a low level of quality assurance investment, manufacturers cannot improve the effectiveness of the service input strategy by reducing the failure rate of electric vehicles; however, when the number of failures within the warranty period is greater than 1, they can extend the warranty period by reducing the failure rate even at a low service input level. When the construction level of charging facilities is low, electric vehicle manufacturers should cooperate in building charging facilities, which promotes sales more effectively than the quality assurance service strategy; once charging facilities reach a certain level, the quality assurance service strategy can be used effectively to increase market sales. The research results have important reference value for improving the competitiveness of electric vehicles and effectively increasing their sales, and can guide manufacturers in using quality assurance service strategies more effectively at different charging facility levels.
    Supply Chain Finance, Earnings Management and Corporate Financing Efficiency
    GAO Yue, YANG Yi
    2023, 32(8):  152-158.  DOI: 10.12005/orms.2023.0264
    According to the 2020 report on the work of the State Council, multiple measures are needed to promote the market-oriented allocation of factors of production, develop a multi-tiered capital market, and achieve sustainable development of enterprises. Under current market economy conditions, the premise and an important guarantee for enterprises to maintain sustainable development and achieve their goals is the adequacy and liquidity of capital. However, because most Chinese enterprises suffer from weak quality, poor anti-risk ability, and information opacity, the disconnect between financial supply and real demand leads to financial repression of real-economy enterprises, and low financing efficiency remains the main obstacle to their sustainable development. As a useful external financing method, supply chain finance can gradually invigorate funds, shorten the capital operation cycle, and alleviate the difficulty and high cost of financing. Against the background of persistent hidden dangers in financial information in China, earnings management, as an important means by which corporations hide internal bad news, intensifies information asymmetry with external investors and increases corporate financing costs. By combining earnings management and supply chain finance, this paper analyzes the mechanism of corporate financing efficiency, which not only supplements existing financing theories and literature but also provides empirical evidence for solving the problems of financing difficulty and low efficiency, with practical significance for the high-quality development of the Chinese real economy.
    In this paper, non-financial companies listed on the main boards of the Shanghai and Shenzhen A-share markets during 2014—2019 are taken as research samples. To ensure the integrity and accuracy of the data, descriptive statistics and Pearson correlation tests are conducted to observe the overall distribution of the data and the relationships among variables. The real earnings management model and the modified Jones model are used to measure real and accrual earnings management, respectively. Comprehensive indicators are extracted from financing cost, capital allocation efficiency, anti-risk ability, and corporate governance efficiency through principal component analysis to measure financing efficiency fully. Quick ratio, return on total assets, ownership concentration, earnings per share, growth rate of operating revenue, and firm age are selected as control variables, and two dummy variables, industry and year, are introduced to construct a multiple regression model. Through empirical analysis, this paper examines the influence of supply chain finance and earnings management on corporate financing efficiency, as well as their interactive influence on financing efficiency. Further, firm size and ownership attributes are introduced as interaction terms to investigate the mechanism of corporate financing efficiency. Robustness checks of the regression results are also carried out to ensure the reliability of the findings.
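    The modified Jones model mentioned above has a standard regression form: total accruals scaled by lagged assets are regressed on 1/A_{t-1}, (ΔREV - ΔREC)/A_{t-1}, and PPE/A_{t-1}, and the residual is the discretionary (accrual) earnings management proxy. A self-contained sketch with a pure-Python OLS solver:

```python
def ols_residuals(X, y):
    """OLS via normal equations (X'X) b = (X'y), solved by Gaussian
    elimination with partial pivoting; returns the residual vector."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return [y[i] - sum(X[i][c] * beta[c] for c in range(k)) for i in range(n)]

def modified_jones_da(assets_lag, d_rev, d_rec, ppe, total_accruals):
    """Discretionary accruals: regress TA/A_{t-1} on 1/A_{t-1},
    (dREV - dREC)/A_{t-1} and PPE/A_{t-1}; the residual is the DA proxy."""
    X = [[1.0 / a, (dr - dc) / a, p / a]
         for a, dr, dc, p in zip(assets_lag, d_rev, d_rec, ppe)]
    y = [ta / a for ta, a in zip(total_accruals, assets_lag)]
    return ols_residuals(X, y)

# Synthetic check: accruals built exactly from the model should leave
# (near-)zero residuals, i.e. no discretionary component.
assets = [100.0, 200.0, 400.0, 250.0, 320.0, 150.0, 500.0, 80.0]
drev = [10.0, -5.0, 20.0, 0.0, 15.0, -8.0, 30.0, 4.0]
drec = [2.0, -1.0, 5.0, 1.0, 3.0, -2.0, 6.0, 0.5]
ppe = [50.0, 80.0, 120.0, 60.0, 90.0, 40.0, 200.0, 30.0]
ta = [0.5 + 0.2 * (r - c) - 0.1 * p for r, c, p in zip(drev, drec, ppe)]
res = modified_jones_da(assets, drev, drec, ppe, ta)
print(max(abs(e) for e in res) < 1e-8)  # → True
```

In practice the regression is estimated by industry-year cross-sections and the residual for each firm is its accrual earnings management measure; the real earnings management model used alongside it follows an analogous regress-and-take-residuals design.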
    Based on a literature review and theoretical analysis, this paper discusses the relationship among supply chain finance, earnings management, and corporate financing efficiency. The empirical findings are as follows: (1)High-quality supply chain finance is conducive to improving the financing efficiency of enterprises. (2)There is a significant negative correlation between earnings management and financing efficiency: the higher the degree of earnings management, the lower the financing efficiency. Moreover, compared with accrual earnings management, real earnings management has a more significant inhibitory effect on financing efficiency, thereby tightening firms' financing constraints. (3)Under the same conditions, it is more effective for large-scale state-owned enterprises to use supply chain finance to improve their financing efficiency. State-owned enterprises are “implicitly guaranteed” by the government and have obtained strong financing ability with the help of strong financial support from the state; with the support of supply chain finance, they obtain funds faster, at lower cost, more efficiently, and in larger quantity. At the same time, the use of earnings management in large-scale state-owned enterprises also has a more significant inhibiting effect on financing efficiency: state-owned enterprises face “soft” debt constraints and lack enthusiasm for resources in the capital market, which weakens the possibility for enterprises and their managers to obtain benefits by manipulating earnings information, especially accrual earnings management. (4)There is a substitution effect between supply chain finance and earnings management, and the substitution effect between supply chain finance and real earnings management is more significant than that between supply chain finance and accrual earnings management.
(5)Under the same conditions, the substitution effect between supply chain finance and real earnings management in large-scale and state-owned enterprises is stronger than that in small-scale enterprises and non-state-owned enterprises.
    The above conclusions provide a theoretical basis for solving the problems of “expensive financing, difficult financing and low efficiency” in the development of enterprises. The limitations of this paper may lie in the following: (1)Among earnings-quality factors, only the impact of earnings management on financing efficiency is discussed; future studies can comprehensively consider multiple influencing factors such as the quality and timeliness of earnings disclosure. (2)Among external financing methods, this paper only discusses the influence of debt financing on financing efficiency and does not include equity financing; future studies can discuss changes in financing efficiency under different financing structures from the perspective of the overall financing structure.
    Research on Financial Distress Prediction with Financial Network Indicators for Listed Companies of Information Technology
    WU Chong, CHEN Xiaofang, MIAO Bowei
    2023, 32(8):  159-165.  DOI: 10.12005/orms.2023.0265
    With the advent of the big-data era in recent years, concepts such as 5G, Internet+, big data, cloud computing, and blockchain have been put forward, and the rapid rise of Internet companies in particular has brought economic vitality to the country and become an important driver of the high-quality development of China's economy. At the same time, the information technology industry has gradually become a pillar of national economic growth and a pioneering, strategic industry leading national production and daily life. While the IT industry is booming, its high-growth, high-risk characteristics are also becoming more and more prominent. Large capital investment at the time of listing, uncertainty in the process and timing of research and development, demanding technology iteration, short product life cycles, weak solvency, uncertain future earnings, and unstable cash flows make the industry prone to potential financial risks and even financial crises: in 2019 alone, 16 information technology enterprises were specially treated (ST). The industry is therefore in urgent need of financial crisis prediction models that help companies foresee serious financial risks in advance. Traditional financial distress prediction (FDP) models rely mostly on financial indicators, sometimes supplemented by non-financial indicators; yet market information also reflects the operation of information technology enterprises. On this basis, this paper proposes to introduce enterprises' market information into the FDP model.
     Considering that the trend of market changes of information technology listed companies is difficult to capture, while stock return information reflects market changes promptly, this paper uses stock information to construct a financial network and introduces it into the model in the form of network indicators. To give full play to the role of the ensemble algorithm in financial crisis prediction, improve the generalization ability of the model, and overcome the inability of a single classifier to make full use of the data, this paper adopts the lightGBM algorithm to construct the financial crisis prediction model for information technology listed companies and proposes a parameter-tuning ensemble strategy based on lightGBM. Through parameter tuning, the lightGBM model with the highest accuracy is selected as the base model; new models are then obtained by single-parameter tuning of the base model, and the results of the tuned models and the base model are combined by classical voting to obtain the final prediction. A total of 102 listed companies in the information technology industry in China's Shanghai and Shenzhen A-share markets from 2010—2019 are used as the research objects. Financial and non-financial data from year T-3 are selected, together with stock price information from the 500 trading days before year T-3; a network is constructed using complex network theory to extract network indicators such as centrality and PageRank value, which are combined with the enterprises' financial and non-financial indicators to build a comprehensive indicator system.
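The classical voting step of the ensemble strategy can be sketched as a stand-alone function; this is an illustrative implementation (the function name and the tie-breaking rule are our own assumptions), not the paper's code.

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary 0/1 predictions from several classifiers by
    classical majority voting, as in the tuned-model ensemble
    described above.

    predictions: array-like of shape (n_models, n_samples).
    Ties are broken in favour of the distress class (1) here,
    which is an assumption of this sketch.
    """
    votes = np.asarray(predictions)
    n_models = votes.shape[0]
    # a sample is labelled 1 when at least half the models vote 1
    return (votes.sum(axis=0) * 2 >= n_models).astype(int)
```

Each row of `predictions` would hold the 0/1 outputs of the base model or one of its single-parameter-tuned variants.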
     After empirical research and analysis, the following results are obtained: (1)Compared with the basic lightGBM model, the integrated lightGBM model predicts better, with an accuracy rate of over 90% and a recall rate of 91.18%. (2)Compared with the integrated lightGBM model, the accuracy and recall rate of the integrated lightGBM model with financial network indicators added increase by 3.17% and 2.57%, respectively. The prediction performance of the model with financial network indicators is higher than that of the model with only financial and non-financial indicators, as verified with other benchmark models (Logistic Regression, Support Vector Machine, Random Forest). These results show that the parameter-tuning ensemble lightGBM algorithm has higher prediction performance, and that introducing stock information increases the effective information in the FDP model, which benefits enterprise financial crisis prediction. The research broadens the application of the lightGBM algorithm in enterprise financial risk management and provides new ideas for constructing financial crisis prediction models.
     The research in this paper focuses on the indicator set and the ensemble model for FDP. Regarding the indicators, the inclusion of textual information about enterprises, such as their annual reports, can be considered in future research: text-based indicators (e.g., “bad debts”, “dividends”, “investment in fixed assets”) could be extracted from annual reports and combined with financial, non-financial, and market information to build the indicator system. Besides, the data in this study cover Chinese listed companies; since each country has its own legal system and accounting rules, data from companies in other countries can also be studied. As for classifier selection, other ensemble models can be chosen, or other machine learning algorithms improved, to predict financial crises of information technology enterprises.
    Robust Measure of Dynamic Higher Moments Risk and Its Application to Parametric Portfolio Selection
    LIU Shuting, XU Qifa, JIANG Cuixia
    2023, 32(8):  166-173.  DOI: 10.12005/orms.2023.0266
    Skewness and kurtosis are frequently utilized to describe stylized facts within the financial community. However, their conventional moment-based measures are highly sensitive to outliers. To remedy the deficiencies of risk measurement and model optimization in existing higher-moment portfolio selection models, this paper develops a parametric portfolio model, referred to as the B-S-K model, which incorporates dynamic higher-moment risk. Specifically, we first apply the mixed data sampling quantile regression (MIDAS-QR) model to improve the timeliness, accuracy, and robustness of the dynamic higher-moment risk measure by exploiting the rich information contained in high-frequency data. Second, we develop a parametric portfolio method with characteristic variables, dynamic skewness risk, and dynamic kurtosis risk for utility-maximizing investors with an exponential utility function. This parametric method with conditional kurtosis not only takes into account investors' attitudes towards kurtosis, but also implicitly captures the relation between stock characteristics and investors' expected utility, which has been ignored in most of the literature. Furthermore, our approach greatly reduces the number of parameters and improves solution efficiency: we only need to estimate the coefficients of the variables that enter the portfolio weights, instead of the weight of each stock at each time point. Third, a three-step solution scheme is designed for the parametric portfolio with dynamic skewness and kurtosis risks via the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. In the first two steps, the coefficients of stock-specific characteristics and dynamic skewness risk are estimated separately. In the third step, with the coefficients from the first two steps fixed at their estimated values, the remaining dynamic kurtosis risk coefficient is estimated by maximizing the investor's expected utility. By comparing the estimation results of step 2 and step 3, the role of conditional kurtosis in allocating assets and improving portfolio performance can be investigated.
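A single estimation step of such a scheme might look like the sketch below, which maximizes the sample mean of an exponential utility over parametric weights using scipy's BFGS routine. The interface, hyperparameters, and the weight parameterization w = benchmark + (1/N)·θ'x follow the standard parametric-portfolio form and are our own assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_theta(returns, chars, bench_w, gamma=5.0, theta0=None):
    """One step of a parametric-portfolio estimation: choose the
    characteristic coefficients theta to maximize the sample mean of
    the exponential utility U(r) = -exp(-gamma * r), via BFGS.

    returns : (T, N) asset returns
    chars   : (T, N, K) standardized characteristics (e.g. dynamic
              skewness or kurtosis risk); the paper fixes the
              earlier-step coefficients, whereas this sketch
              estimates all K coefficients jointly.
    bench_w : (T, N) benchmark (e.g. equal) weights
    """
    T, N, K = chars.shape
    if theta0 is None:
        theta0 = np.zeros(K)

    def neg_expected_utility(theta):
        w = bench_w + (chars @ theta) / N    # parametric weights
        rp = (w * returns).sum(axis=1)       # portfolio returns
        return np.mean(np.exp(-gamma * rp))  # minimize -E[U]

    res = minimize(neg_expected_utility, theta0, method="BFGS")
    return res.x
```

Because the objective depends on theta only through the weight rule, the number of free parameters is K rather than T×N, which is the dimension reduction the abstract refers to.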
    The empirical analysis is conducted on six single stocks and thirteen industry groups in China. At least three conclusions can be drawn. Firstly, compared with moment-based skewness and kurtosis, the MIDAS-QR based dynamic higher-moment risk measure is indeed robust and effective: it not only captures the time variation in financial risk, but is also much smoother and less sensitive to outliers. Secondly, the price-earnings ratio, book-to-market ratio, and dynamic skewness risk are positively related to portfolio weights, while conditional volatility and dynamic kurtosis risk are negatively related to portfolio weights. The results support the notion that investors prefer stocks characterized by a higher price-to-earnings ratio, higher book-to-market ratio, relatively larger skewness, lower conditional volatility, and smaller conditional kurtosis. Importantly, stocks with higher conditional kurtosis are assigned smaller weights in the optimal portfolio. Once again, investors' aversion to kurtosis is confirmed. These findings offer rational mechanism explanations for investment decisions. Finally, the proposed B-S-K model, which incorporates dynamic higher-moment risk, demonstrates significant and consistent superiority in terms of return, risk, and risk-adjusted return when compared with the equal-weight scheme, the M-V model, the basic (B) model, and the B-S model across various levels of risk aversion. These results highlight the necessity and benefits of incorporating dynamic higher-moment risk into portfolio selection.
    Considering that portfolio selection is affected by many other factors, such as market capitalization and lagged returns, incorporating as many relevant factors as possible into the model helps improve portfolio performance. For future research, we will consider the impact of more factors on portfolio weights and introduce dimension-reduction techniques such as penalized variable selection to identify key factors. This will help produce sensible portfolio weights and provide a list of important factors for investment decision-making.
    A Spatial-temporal Attention Based BiLSTM for Stock Index Prediction
    YANG Mo, WANG Jing
    2023, 32(8):  174-180.  DOI: 10.12005/orms.2023.0267
    In an environment of increasing volatility in financial markets and international capital flows, the accuracy and robustness of forecasts are key factors in financial decision-making. Predicting stock price indices has been an active area of research, and many studies use data mining techniques, including artificial neural networks. However, most studies show that artificial neural networks have certain limitations in learning patterns: stock market data carry huge noise and complex dimensions, there are correlation problems among their external attributes, and external influences in long-term forecasts can increase stock price volatility. Artificial neural networks have excellent learning capabilities, but they often face inconsistent and unpredictable noisy data, and when the amount of data is too large, pattern learning may not work well. In addition, in long-term forecasting, the redundancy of features and the complexity of the model prevent the forecasting model from accurately extracting the relationship between price and time changes. Continuous data and large data volumes pose serious problems for extracting valid information from raw data; reducing and transforming uncorrelated or redundant features can cut run time and produce more general results.
    To solve the above problems, this article uses a bidirectional long short-term memory neural network (BiLSTM) based on attention mechanisms to forecast the closing price of the Hang Seng Index (HSI) in Hong Kong. The data come from the Ruisi Financial Database and cover all trading days with daily trading volume data available up to August 3, 2020; the closing price of the stock index is predicted 1 day (next day), 7 days, 30 days, 60 days, and 120 days ahead. A spatial attention mechanism is used to capture the correlation between input indicators and assign different weights to them, and a temporal attention mechanism is used to describe the temporal correlation of the data, solving the time-dependence problem in long-term prediction by assigning different weights to time steps. The BiLSTM neural network is used to fit the data and build the prediction model. To judge the effectiveness of the proposed model, its performance is compared with that of other popular time-series neural network models for predicting market value: four attention-based neural network methods and six baseline methods.
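The spatial attention step can be illustrated with a toy numpy computation: each input indicator is scored, the scores are normalized with a softmax, and the resulting weights rescale the inputs. The linear scoring layer here is an illustrative stand-in for the learned attention parameters, not the paper's architecture.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def spatial_attention(x, W, b):
    """Toy spatial-attention step.

    x : (batch, n_features) input indicators at one time step
    W : (n_features, n_features), b : (n_features,) -- illustrative
        learned scoring parameters.
    Returns the reweighted indicators and the attention weights.
    """
    scores = x @ W + b       # (batch, n_features) relevance scores
    alpha = softmax(scores)  # attention weights, each row sums to 1
    return alpha * x, alpha
```

A temporal attention layer works the same way, except the softmax is taken over time steps instead of over input indicators.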
    The experimental results show that the spatial attention mechanism proposed in this paper achieves higher accuracy than the traditional principal component analysis dimensionality-reduction method, indicating that the spatial attention mechanism can capture the correlation between feature vectors, effectively simplify the model, and improve its generalization performance. In addition, the temporal attention mechanism proposed in this paper can capture the relationship between stock price and time changes in long-term forecasting, so under the same conditions it consistently performs better in medium- and long-term prediction than models without the temporal attention mechanism.
    Compared with currently popular stock index prediction methods, BiLSTM based on the dual-dimensional attention mechanism achieves more accurate prediction of the HSI closing price in short-, medium- and long-term forecasts. Based on these two attention mechanisms, BiLSTM is not only able to adaptively select the most relevant input features, but also to appropriately capture the long-term temporal correlation of the time series.
    The conclusion of this paper provides investors with a more effective investment strategy, and also provides practical insights and potentially useful directions for further research on how deep learning network can be effectively used in stock market analysis and prediction.
    Asymmetric Multifractal Correlation Based on EMD-MF-DCCA Method: A Case Study of Shenzhen and Shanghai Stock Markets
    ZHANG Hongmei, WANG Qin, WANG Ling, DONG Xin
    2023, 32(8):  181-186.  DOI: 10.12005/orms.2023.0268
    At present, a large number of studies have confirmed the existence of multifractal characteristics in financial markets. Two main factors lead to this phenomenon in financial time series: the fat-tailed distribution of returns, and the different degrees of correlation between large and small fluctuations. Multifractality arising from different degrees of volatility can, to a certain extent, predict the future trend of asset prices. As a typical feature of financial markets, the impact of good and bad news shocks on asset price volatility is inconsistent, resulting in asymmetric multifractality. Therefore, a comprehensive analysis of the asymmetric multifractal characteristics of market volatility can help us better understand market laws and effectively avoid risks.
    The time series of financial markets may be affected by noise, and empirical mode decomposition of high- and low-frequency index values can effectively alleviate noise pollution, non-stationarity, and heteroskedasticity after the series are reconstructed. Therefore, based on the empirical mode decomposition (EMD) method, this paper decomposes and reconstructs the series into high- and low-frequency components, uses a quadratic function to portray the dynamic trend of the market, and proposes for the first time to use the quadratic and linear coefficients as proxy variables of “positive and negative volatility” to classify three trends: up, oscillation, and down. On this basis, a new EMD-MF-DCCA method extending the traditional MF-DCCA is proposed to measure the asymmetric multifractal correlation of financial markets under the three trends. Theoretically, the EMD-MF-DCCA method can portray the asymmetric multifractal correlations of financial markets under different volatility-zone trends and, compared with the traditional MF-DCCA method, portrays multifractality under different degrees of volatility more accurately.
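The trend-labelling idea (quadratic and linear coefficients of a fitted quadratic as proxies for the market's dynamic trend) can be sketched as below. The sign rules and the threshold used to map coefficients to the three trends are our own illustrative assumptions, not the paper's exact classification.

```python
import numpy as np

def classify_trend(prices, tol=1e-3):
    """Label a price window as "up", "down", or "oscillation".

    Fits p(t) = a*t^2 + b*t + c over the window and uses the signs
    of a (quadratic) and b (linear) coefficients, with a small
    tolerance `tol` (our own assumption) to absorb numerical noise.
    """
    t = np.arange(len(prices), dtype=float)
    a, b, _ = np.polyfit(t, prices, deg=2)  # coefficients, highest degree first
    if b > tol and a >= -tol:
        return "up"
    if b < -tol and a <= tol:
        return "down"
    return "oscillation"
```

Windows whose quadratic term dominates (a sign change in slope within the window) fall through to the "oscillation" label.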
    Taking Shanghai and Shenzhen stock markets as the research object, the daily trading data are obtained from “Tongdaxin” trading software. The data interval is from January 4, 2010 to March 19, 2020, and the closing price data of 2418 trading days are obtained. The empirical results show that: During the period of oscillation and decline, the two markets are characterized by long-term correlation in large fluctuations and anti-persistence in small fluctuations, with asymmetric multifractal relationship; In the rising period, there is a multifractal relationship between the two markets with time-varying fluctuations; Compared with the traditional MF-ADCCA method, EMD-MF-DCCA method can describe the multifractal strength of the market more accurately. The above research results provide reasonable suggestions for further studying the complex asymmetric dependence between markets.
    Research on A New Stock Prediction Model Combining LSTM and BLS
    HAN Ying, ZHANG Dong, SUN Kaiqiang, TAN Haoran, LU Chao
    2023, 32(8):  187-192.  DOI: 10.12005/orms.2023.0269
    As a core component of the capital market, the price trends and future trend prediction in the stock market are among the most important concerns for investors. Accurate prediction of stock prices can provide strong technical support for investors. For the past few years, deep learning has been widely used for stock prediction to explore more effective information. However, the nonlinearity, complexity, and multiscale characteristics of stock data make it relatively difficult to extract hidden information, and deep learning prediction models suffer from issues such as vanishing gradients and time delays, making it challenging to fit when the sequence exhibits significant fluctuations. Thus, there is an urgent need to improve prediction accuracy.
    Long Short-Term Memory (LSTM) networks have been widely applied to stock prediction in recent years, but their structural characteristics make them prone to falling into local optima, which affects prediction accuracy. Drawing on the good approximation ability of the Broad Learning System (BLS) in time series forecasting, this study attempts to combine broad learning with deep learning. It constructs an LSTM-BLS model by using LSTM for feature extraction, feeding the extracted features to the mapping nodes of BLS through fully connected layers, generating enhancement nodes, and computing the prediction values. Additionally, to address the non-stationary nature of stock series, Complementary Ensemble Empirical Mode Decomposition (CEEMD) is introduced for denoising; the adaptive decomposition characteristic of CEEMD does not excessively increase model complexity. On this basis, the CEEMD-LSTM-BLS stock prediction model is proposed. The model is implemented with the Keras framework in Python, and empirical research is conducted on closing price data of the Agriculture, Forestry, Animal Husbandry, and Fishery Index (399231) from the China Stock Market & Accounting Research Database (CSMAR).
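The BLS component can be sketched in a few lines of numpy: random feature-mapping nodes, nonlinear enhancement nodes, and a closed-form ridge-regression output layer. Hyperparameters, node counts, and the ridge formulation are illustrative assumptions, not the paper's settings (where the mapping-node inputs come from an LSTM rather than raw features).

```python
import numpy as np

def bls_fit_predict(X_train, y_train, X_test, n_map=20, n_enh=40,
                    reg=1e-3, seed=0):
    """Minimal Broad Learning System sketch.

    Random linear mapping nodes, tanh enhancement nodes, and output
    weights obtained in closed form via ridge regression.
    """
    rng = np.random.default_rng(seed)

    def expand(X, Wm, Wh):
        Z = X @ Wm           # linear feature-mapping nodes
        H = np.tanh(Z @ Wh)  # nonlinear enhancement nodes
        return np.hstack([Z, H])

    Wm = rng.standard_normal((X_train.shape[1], n_map))
    Wh = rng.standard_normal((n_map, n_enh))
    A = expand(X_train, Wm, Wh)
    # ridge solution: W = (A'A + reg*I)^{-1} A'y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]),
                        A.T @ y_train)
    return expand(X_test, Wm, Wh) @ W
```

Because the output weights have a closed form, adding the BLS stage on top of LSTM features costs one linear solve rather than further gradient training.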
    Three evaluation metrics, namely mean absolute error, root mean square error, and coefficient of determination, are selected to assess the performance of the model. By comparing the LSTM-BLS model with baseline models and existing stock prediction models, it is demonstrated that the fusion of Deep Learning and Broad Learning System shows significant improvement in multiple accuracy indicators. Particularly, when comparing the CEEMD-LSTM-BLS model with the CEEMD-LSTM model without BLS integration, it is found that the LSTM model exhibits certain errors in prediction, especially at turning points where the volatility is higher, leading to more pronounced prediction errors. The BLS module in the CEEMD-LSTM-BLS model can address such issues. When the data exhibits significant fluctuations, the proposed new model in this study outperforms existing models in terms of fitting discrepancies and time delays. Therefore, the proposed CEEMD-LSTM-BLS stock closing price prediction model can accurately forecast the market’s ups and downs, providing valuable reference for investors.
    However, this study only considers the single factor of the closing price, which has certain limitations. Future work will therefore build on this study and consider multi-factor input prediction, preventing the information compression caused by single-factor input for highly volatile data, which can hinder the desired prediction accuracy. The aim is to construct a stock prediction model that integrates deep learning and broad learning systems under the influence of multiple variables and factors. Finally, the authors express their gratitude to their mentors for guidance and support throughout this research.
    Risk Spillover Effect and Its Temporal and Spatial Characteristics in China’s Financial Market: Based on Spillover Index Method and DCC-GARCH Model
    LI Boyang, ZHANG Jiawang, HEN Yue
    2023, 32(8):  193-199.  DOI: 10.12005/orms.2023.0270
    With the accelerating pace of financial globalization, financial innovations emerge one after another, financial markets are closely interrelated, and cross-market contagion of financial risks has seriously threatened China's economic and financial operation. Against this background, it is of great theoretical significance and practical value to accurately describe the intensity, scale, and direction of financial market risk spillover effects, to measure the conditional correlation coefficients between financial markets, and to characterize the time-varying, volatile, and asymmetric features of the risk spillover index and the dynamic correlation coefficients. Doing so not only helps us understand the mechanism and contagion characteristics of risk spillover between financial markets, but also helps regulators formulate and take effective measures to prevent a chain reaction of cross-market risk spillovers from triggering systemic risk under the impact of extreme events.
    This paper uses spillover index model and DCC-GARCH model to make a comprehensive analysis of the risk spillover effect and its temporal and spatial characteristics in China’s financial market from July 22, 2005 to August 27, 2021. In this paper, China’s financial market is divided into seven sub-markets, namely, stock market (using CSI 300 index), bond market (using CSI comprehensive net price index), money market (using 7-day interbank offered rate), foreign exchange market (using the central parity of US dollar against RMB exchange rate), commodity market (using Wind commodity comprehensive index), gold market (using spot price of AU9995) and real estate market (using Shenwan real estate industry).
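The Diebold-Yilmaz-style spillover indices underlying such an analysis can be computed from a forecast-error variance decomposition matrix; below is a minimal sketch (the interface is our own, not the paper's code).

```python
import numpy as np

def spillover_indices(theta):
    """Spillover indices from a (generalized) forecast-error
    variance decomposition matrix.

    theta[i, j] = share of market i's forecast-error variance due to
    shocks in market j. Rows are first normalized to sum to 1.
    Returns (total, from_others, to_others), all in percent.
    """
    th = np.asarray(theta, dtype=float)
    th = th / th.sum(axis=1, keepdims=True)  # row-normalize shares
    n = th.shape[0]
    off = th - np.diag(np.diag(th))          # off-diagonal = cross-market
    total = 100.0 * off.sum() / n            # total spillover index
    from_others = 100.0 * off.sum(axis=1)    # risk received by market i
    to_others = 100.0 * off.sum(axis=0)      # risk transmitted by market j
    return total, from_others, to_others
```

Markets with a high `to_others` minus `from_others` are net risk transmitters; the reverse are net receivers, which is the decomposition the results below report for the seven sub-markets.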
    The results show that the risk spillover index of China's financial market fluctuates between 18% and 52% over time and can be roughly divided into three stages. The first stage is 2007 to 2011, during which the risk spillover index fluctuates greatly. The second stage is the post-crisis era, 2012 to 2017, during which China's financial market runs relatively stably and the risk spillover index fluctuates around 33%. In the third stage, from 2018 to the present, the volatility correlation of China's financial market first strengthened under the influence of the large-scale defaults in the Chinese bond market in early 2018 and the Sino-US trade dispute; then, under the impact of the COVID-19 epidemic, the risk spillover index rose sharply in early 2020 through the transmission channels of virus contagion, panic contagion, and cross-market financial risk contagion, and has remained elevated since. The dynamic correlation coefficient varies from 0.09 to 0.31, and its jumps over time are asymmetric: in the face of important domestic and foreign policy and risk events, the dynamic correlation coefficients of China's financial markets grow by leaps within a short period, and after reaching a peak they do not weaken immediately but take some time to return to their pre-shock levels, which shows the asymmetry of the dynamic correlation coefficients in the time dimension.
    In the spatial dimension, the direction of financial market risk spillover is asymmetric. As far as the acceptance risk spillover index is concerned, the acceptance risk spillover index of stock market, real estate market and gold market ranks in the top three, which are 49.60%, 47.90% and 29.70% respectively, indicating that these three markets are extremely vulnerable to risk contagion and have systemic fragility. As far as the external risk spillover index is concerned, the external risk spillover indexes of the stock market, the real estate market and the commodity market are relatively high, which are 51.90%, 51.90% and 28.20% respectively, indicating that these three financial markets are in the leading position of information in China’s financial system, and their information transmission efficiency is high. Once an emergency happens, the risks will be quickly transmitted to the whole financial system, and they are systemically important financial markets in China. Because the stock market and the real estate market have high acceptance risk spillover index and external risk spillover index, they play an intermediary and bridge role in China’s financial system. The stock market, commodity market and real estate market are the net spillers of risks, while the bond market, money market, foreign exchange market and gold market are the net recipients of risks. The risk correlation between stock market and real estate market, commodity market and gold market, and bond market and gold market is relatively large.
    Limited by data and capacity, this paper only analyzes the risk spillover effect among the seven major financial markets in China. In future research, we can consider distinguishing “good fluctuations” and “bad fluctuations” in financial markets. In addition, it is worth exploring to consider the price correlation and risk spillover effect between green financial markets and carbon markets under the background of energy transformation.
    Party Organizations Embedding and Earnings Management in Private Enterprises: Empirical Research Based on A-share Market
    WANG Chunfeng, LIANG Chaowei, YAO Shouyu, CHENG Feiyang, FANG Zhenming
    2023, 32(8):  200-206.  DOI: 10.12005/orms.2023.0271
    Party organization embedding means that enterprises set up Party organizations and embed them into the enterprise's organizational structure as a formal institutional arrangement. The Party plays an important role in guiding and promoting economic development: at the macro level, it influences the formulation and implementation of the government's economic guidelines and policies through its ruling position, while at the micro level its role is realized by embedding Party organizations in enterprises. Therefore, the Party and the government have always attached importance to the construction of Party organizations in enterprises and continue to promote their establishment, with the aim of increasing Party organizations' influence on enterprises. In this context, the Party organization has become a regular part of company governance with a non-negligible impact on enterprise behavior. In recent years, academia has gradually paid attention to this unique governance system and conducted a series of studies on the governance effect of Party organization embedding.
    Private enterprises are an important part of China's socialist market economy and an important link in grassroots Party construction. With the development of private enterprises and the promotion of Party building within them, Party organizations had been established in 1.585 million legal entities of private enterprises (data from the 2018 Intra-Party Statistical Bulletin of the Communist Party of China), and the embedding of Party organizations in private enterprises has become common. However, due to the different nature of these enterprises, the function of Party organizations in private enterprises has its own particularities. Starting from a financial perspective, this paper takes the influence of Party organization embedding on earnings management in private enterprises as its core issue and further explores the governance effect of Party organization embedding in private enterprises.
    Taking private listed enterprises from 2008 to 2018 as research samples, we manually collect data on enterprises' Party organization embedding, calculate enterprises' earnings management (DA) using the Jones model, and carry out the research using a fixed effect model. All financial data in this paper come from the CSMAR and RESSET databases. The empirical results show that the establishment of Party organizations in private enterprises can significantly reduce the level of earnings management, and this conclusion holds for both upward and downward earnings management. To ensure the robustness of the research results, we test them by adjusting the calculation method of the explained variable and applying a multi-period DID model, the instrumental variable method (IV), propensity score matching (PSM) and other methods; the results show that the conclusions of the paper are robust.
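As a rough sketch of the accrual-based measure described above: the Jones (1991) model regresses scaled total accruals on the inverse of lagged assets, the scaled revenue change, and scaled PPE, and takes the residual as discretionary accruals (DA). The firm-year numbers below are fabricated for illustration, not CSMAR data, and the minimal OLS here stands in for whatever estimation routine the paper actually uses.

```python
# Sketch of discretionary accruals (DA) via the Jones (1991) model:
#   TA_t/A_{t-1} = a*(1/A_{t-1}) + b*(dREV_t/A_{t-1}) + c*(PPE_t/A_{t-1}) + e
# DA is the OLS residual e. All inputs below are fabricated toy numbers.

def solve(A, y):
    """Solve the k x k normal equations by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[col][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def jones_da(rows):
    """rows: (total_accruals, lagged_assets, delta_rev, ppe) per firm-year."""
    X = [[1.0 / a, dr / a, p / a] for _, a, dr, p in rows]
    y = [ta / a for ta, a, _, _ in rows]
    k = len(X[0])
    XtX = [[sum(x[r] * x[c] for x in X) for c in range(k)] for r in range(k)]
    Xty = [sum(x[r] * yi for x, yi in zip(X, y)) for r in range(k)]
    beta = solve(XtX, Xty)
    return [yi - sum(b * xi for b, xi in zip(beta, x)) for x, yi in zip(X, y)]

# Fabricated firm-years: (total accruals, lagged assets, revenue change, PPE)
sample = [(-3.0, 100.0, 20.0, 50.0), (-2.0, 200.0, 40.0, 80.0),
          (-11.0, 150.0, 10.0, 60.0), (-4.0, 120.0, 30.0, 70.0)]
da = jones_da(sample)   # one discretionary-accrual estimate per firm-year
```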
    To explore the mechanism by which Party organizations influence corporate behavior, this paper hypothesizes, based on the existing literature, that Party organizations can reshape corporate culture. The Party organization plays a leading role in the enterprise's culture: it can introduce the Party's advanced culture into the enterprise through various activities and influence enterprise behavior through the new corporate culture. This influence is reflected, on the one hand, in reducing the risk preference of management and, on the other hand, in improving the quality of the enterprise's internal control, and it ultimately shows up in the enterprise's financial data. We use the mediation effect model to test this hypothesis and find that it holds.
    To analyze whether the role of Party organizations differs under different conditions, this paper conducts group tests according to heterogeneity in conditions such as the stability of the Party organization, the cultural environment, the level of external supervision and company characteristics, and expands the research results. In addition, this paper further takes real earnings management as the explained variable, and the results show that the role of the Party organization remains valid for real earnings management.
    Finally, based on the research of this paper, suggestions are put forward for enterprises to improve the governance system and enhance the standardization level of the market, so as to provide guidance for practical activities.
    Advertising Mode Choice of Video Platform
    FAN Haowen, ZHANG Yulin
    2023, 32(8):  207-213.  DOI: 10.12005/orms.2023.0272
    Video platforms usually provide consumers with free video services but generate revenue from advertisers. An increasing number of video platforms pursue a skippable advertising mode, which allows consumers to skip advertisements after only a few seconds and directly access the content they are looking for. YouTube, a popular video platform, has had significant success with the skippable advertising mode; it is reported that skippable advertisements account for more than 85% of YouTube advertisements. Skippable advertisements benefit agents on both sides of the video platforms. For one thing, viewers can skip irrelevant advertisements and avoid delaying their content. For another, skippable advertisements help advertisers filter out non-target audiences, since non-target audiences will certainly choose to skip the advertisements; that is to say, advertisers only need to pay for effective viewing.
    Different from skippable advertisements, traditional non-skippable advertisements force consumers to view all advertisements even when consumers are not interested in some of them at all, so that advertisers need to pay for ineffective viewing; that is to say, promoting their products or services may incur a high cost. Unfortunately, small and micro enterprises cannot afford expensive advertisements. Furthermore, small and micro enterprises typically have a small target consumer size and may be unwilling to pay for ineffective exposure of their products. Therefore, skippable advertisements seem to attract more consumers and advertisers; it is reported that small and micro enterprises are becoming a new force supporting the online advertising market. However, non-skippable advertisements guarantee profit for video platforms through forced advertisement viewing. For video platforms, there is thus a trade-off between attracting more agents on both sides and losing advertising exposure, and a question naturally arises: which advertising mode is more profitable for video platforms?
    Skippable advertisements have attracted considerable attention. Most of the existing research focuses on empirical studies of the factors that affect consumers' advertisement-skipping behaviors. To complement existing studies, this work focuses on video platforms' advertising mode choice. Specifically, we seek to answer the following questions: (1)What is the video platform's optimal pricing strategy under different advertising modes? (2)What is the video platform's choice of advertising mode? (3)What are the impacts of advertising modes on advertisers?
    To answer these questions, game theory, sensitivity analysis, and numerical studies are used herein. We build a game model involving a monopoly video platform, heterogeneous advertisers, and consumers to explore the advertising mode choice. We consider two types of advertisers, type L and type H, where type L possesses a smaller target consumer size whereas type H possesses a broader target consumer size. Each type has an incentive to make efforts to increase advertisement quality and decrease consumers' aversion to advertisements. In the non-skippable mode, consumers have to watch the advertisements before watching the content on the platform. In the skippable mode, consumers can skip irrelevant advertisements; moreover, even for relevant advertisements, consumers can jump directly to their desired content if they are not interested. The sequence of events is as follows. In the first stage, the video platform chooses an advertising mode and sets the price for advertisements. In the second stage, the two types of advertisers decide whether to join the platform to place advertisements and how much effort to make to increase the attractiveness of their advertisements.
    The results show that in the non-skippable mode, advertisers join the platform only when their target consumer size exceeds a threshold, and type L advertisers face a higher entry bar when placing advertisements. As for the pricing strategy, the platform sets a low price that allows all advertisers to enter the market if all advertisers possess a relatively large target consumer size; however, if type L advertisers' target consumer size is very small, the platform sets a high price so that only type H advertisers place advertisements. By contrast, in the skippable mode, the video platform raises the advertisement price to make up for the decrease in advertisement exposure. Notably, even with the higher price, the entry barriers for advertisers fall. In addition, type L advertisers prefer the skippable mode because it avoids paying for ineffective advertising, whereas type H advertisers prefer the non-skippable mode as it lowers advertisement prices. The platform's choice of advertising mode depends on the advertiser structure: the skippable mode can cover the market and earn more when the percentage of type L advertisers is high, and when the proportion of type L advertisers is moderate, the platform also prefers the skippable mode provided all advertisers possess a relatively large target consumer size.
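The intuition in this paragraph can be made concrete with a deliberately simple toy computation. This is not the paper's game model: the viewer pool N, per-impression value v, the target shares, and the two candidate prices below are all assumptions, with each price set at the marginal advertiser's willingness to pay.

```python
# Toy comparison (NOT the paper's model) of platform revenue under the two
# advertising modes. Non-skippable: advertisers pay for ALL N impressions.
# Skippable: non-target viewers skip, so advertisers pay only for their
# s*N effective impressions, typically at a higher per-impression price.
N, v = 1000, 0.0625            # viewer pool; value per effective impression
advertisers = [("L", 0.25), ("L", 0.25), ("H", 0.75)]  # (type, target share s)

def platform_revenue(price, per_effective):
    rev = 0.0
    for _, s in advertisers:
        impressions = s * N if per_effective else N
        surplus = v * s * N - price * impressions
        if surplus >= 0:       # an advertiser joins only if it breaks even
            rev += price * impressions
    return rev

# Non-skippable: keeping type L in requires p_n <= v*s_L.
rev_non_skip = platform_revenue(v * 0.25, per_effective=False)
# Skippable: effective impressions can be priced up to v itself.
rev_skip = platform_revenue(v, per_effective=True)
```

With this advertiser mix (two thirds type L), the skippable mode earns more, echoing the finding above that a high share of type L advertisers favors skippable advertising.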
    Management Science
    Literature Review and Enlightenment of Pension Risk Management Based on Lifecycle and Overlapping Generation Analyses
    YAO Haixiang, ZOU Zhiwen, ZHANG Weixuan
    2023, 32(8):  214-219.  DOI: 10.12005/orms.2023.0273
    Pension risk refers to the risk caused by a lack of old-age security and by the contingency and uncertainty of survival risk for the elderly population. Increasing longevity risk and falling mortality are important factors affecting pension risk in China. Although China's pension insurance system is multi-pillar and diversified, the pension problem of Chinese residents is still very prominent and the pension situation remains serious. Relevant data show that in 2019 there were 176 million people aged 65 and above, accounting for 12.6% of the total population, which shows that China's population aging is severe and will bring huge risks to residents' pensions. According to the prediction of the China Pension Actuarial Report 2019-2050, by 2035 China's urban basic endowment insurance fund will be completely exhausted, so the pension situation is very serious. The reason is that pension risk, driven by population aging, makes the pension security system unsustainable; population aging, in turn, stems from the reduced mortality and increased longevity risk brought by improved medical and health conditions.
    From the perspective of individuals and families, investment and consumption decisions, life expectancy, health status and family composition have a direct impact on pension risk. At the social and national level, factors such as population age structure, economic development, the pension security system and fiscal revenue and expenditure indirectly affect pension risk. With the aggravation of population aging, research on pension risk management has gradually penetrated the fields of consumption and investment and derived an analysis framework for pension issues. Academia has conducted in-depth discussions on consumption, investment, policy parameters and the pension insurance system.
    Pension risk management can be studied from many aspects, and life cycle analysis is an important research perspective. According to life cycle theory, analyzing the pension risk caused by longevity factors helps to solve the problem of a smooth transition from work to retirement. In addition, intergenerational pension risk cannot be ignored, and the overlapping generations (OLG) model is helpful for studying intergenerational pension risk management. When the government formulates and implements pension policies such as "delayed retirement" and allowing pension funds to enter the market, it can better avoid pension risks only on the premise of adapting to the individual life cycle and the population age structure. Therefore, this paper systematically reviews the research on the driving factors of pension risk (mortality and longevity risk), introduces the modeling process of the life cycle framework and the OLG model, and reviews the literature on their applications.
    From the literature review and theoretical model research, some deficiencies remain. We find: (1)There is a strong correlation between changes in normal mortality and past changes in fertility, yet in recent years research on the prediction and modeling of mortality in China has been relatively scarce; it is therefore of great significance to explore mortality models in line with the survival characteristics of China's population. (2)In current research on pension risk management based on life cycle theory, there is uncertainty (ambiguity) in the model or its parameters, and the deviation caused by ambiguity will produce greater risks; when studying the investment-consumption problem of coping with pension risk, robust investment-consumption strategies are particularly important. (3)Most previous studies on life insurance decision-making have only considered the case where a child is raised and the beneficiary is the next generation. Further research needs to consider not only expenditure on child support but also expenditure on supporting the older generation.
    By reviewing the technologies and methods of pension risk management at home and abroad, exploring the factors affecting pension risk, and tracing the research context through the life cycle framework and the OLG model, this paper draws the following implications for application: (1)Building a reasonable multi-level, multi-pillar endowment insurance system: this paper suggests that the government optimize and reform the current multi-level, multi-pillar pension structure to ensure that each pillar maintains a reasonable proportion. (2)Further liberalizing the fertility policy to cope with pension risk: fiscal subsidies should be combined to stimulate the desire to have children and reduce the cost of child rearing and education, so as to effectively improve the population growth rate. (3)Given the current life expectancy in China, the country will face huge pension risks; we therefore suggest that the government improve the delayed retirement policy as soon as possible according to China's actual situation, so as to effectively deal with pension risk. (4)Developing the elderly care service industry to stimulate domestic demand: the government should speed up the development of the elderly care service industry and appropriately reduce its tax burden.
    Research on Multi-stage Time Limit Assignment Model for Post-earthquake Initial Stage Based on Rank Belief Degree
    LI Xiaochao, ZHANG Lei
    2023, 32(8):  220-226.  DOI: 10.12005/orms.2023.0274
    Earthquake disasters have occurred frequently all over the world in recent years. On August 14, 2021, a 7.2-magnitude earthquake struck Haiti, killing more than 2,200 people; on September 28, 2018, a 7.5-magnitude earthquake and tsunami struck Indonesia, killing 2,091 people, injuring 10,679, and leaving 680 missing; on April 25, 2015, a 7.8-magnitude earthquake struck Nepal, killing more than 8,800 people; on March 11, 2011, a 9.0-magnitude earthquake and tsunami struck off the northeast coast of Japan, killing nearly 20,000 people; on January 12, 2010, a 7.0-magnitude earthquake struck Haiti, killing as many as 316,000 people; and on May 12, 2008, a 7.9-magnitude earthquake struck eastern Sichuan, China, killing more than 60,000 people. Large-scale earthquake disasters have become a focus and difficulty of research due to their wide range of impacts, huge affected populations, severe economic losses, high degree of uncertainty, and the characteristics of their derivative disasters and evolution. Existing studies have mainly focused on the dispatching and distribution of emergency supplies while ignoring the role of people. In post-disaster emergency rescue, the rapid and effective deployment of rescue teams is the premise and foundation of the rescue work, and an important guarantee for reducing casualties and property losses at the affected sites.
    After a devastating earthquake, decision makers need to consider factors such as the number of rescue teams, the demand for rescue teams at each affected site, and the arrival time of rescue teams at the affected site, so as to make continuous multi-stage dynamic decisions on different dispatch tasks under different scenarios in multiple stages. In the multi-stage emergency decision-making problem, the decision of the previous stage will have an impact on the decision of the later stage, so the decision maker must consider the interactions between decisions made in adjacent stages in a thoughtful manner to ensure that the overall rescue work is carried out efficiently. On the other hand, in order to implement effective rescue operations, decision makers must also make a reasonable assessment of the efficiency of rescue teams relative to each disaster site. However, in the early post-earthquake period, the affected population, the extent of the damage, and the characteristics of the affected areas are often subject to significant uncertainties. Decision makers need to consider a number of factors when assessing the rescue efficiency, such as the number of rescue teams, their experience accumulation, the rescue equipment, the communication equipment, and their tactical level. For example, disaster sites with large populations but low earthquake intensity will have more requirements for the number of personnel in the rescue team than for the rescue equipment. On the contrary, disaster sites with less population but high earthquake intensity will have more requirements for the level of equipment and technology in the rescue team. Therefore, decision makers need to carefully analyze and assess the situation at different affected sites in order to rationalize the allocation of rescue resources and ensure the maximum effectiveness of rescue operations.
    Evidential reasoning (ER) is suitable for uncertain problems with incomplete, imprecise and unknown assessments, and is widely used in risk assessment and emergency management. In view of this, for the continuous dispatch of emergency rescue teams after a disaster, ER theory is introduced into the post-earthquake emergency rescue dispatch model, and the efficiency matrix of rescue teams with respect to the affected sites is constructed by expanding the weight dimension of evidential reasoning according to the initial disaster information of the affected sites. Based on the rescue time constraints, the rescue sub-schemes of each stage are constructed. By taking into account the dynamic feasibility of the sub-schemes of adjacent stages, a multi-stage dynamic time-limit optimization model is established to maximize the efficiency of the whole rescue process, and a solution algorithm is given according to the model's characteristics. Finally, the solution process and the optimal dispatching scheme are determined through examples, from which the following conclusions can be drawn: (1)Single-stage decision-making often fails to meet emergency demand, so it is necessary to consider the rescue time constraints and the dynamic feasibility of stage rescue; (2)The use of the rank belief degree can effectively determine the rescue efficiency of a rescue team relative to the various affected sites, providing the prerequisites for effective rescue; (3)The feasibility and effectiveness of the model and algorithm are verified. The next step will be to study the continuous dynamic assignment of rescue priority in the case of insufficient rescue forces.
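The single-stage core of this dispatch decision can be sketched as a small assignment problem over the ER-derived efficiency matrix. The matrix values below are hypothetical, and a brute-force search stands in for the paper's multi-stage algorithm.

```python
# Sketch of the single-stage core of the dispatch problem: assign rescue
# teams to affected sites to maximize total efficiency, where eff[team][site]
# would come from the evidential-reasoning assessment. The numbers below are
# hypothetical, NOT from the paper's case study.
from itertools import permutations

eff = [
    [0.9, 0.6, 0.4],
    [0.5, 0.8, 0.7],
    [0.3, 0.5, 0.9],
]

def best_assignment(eff):
    """Brute-force optimal one-to-one assignment (fine for small instances)."""
    n = len(eff)
    best, best_val = None, float("-inf")
    for perm in permutations(range(n)):          # perm[team] = site
        val = sum(eff[t][perm[t]] for t in range(n))
        if val > best_val:
            best, best_val = perm, val
    return best, best_val

assignment, total = best_assignment(eff)
```

For realistic instance sizes a Hungarian-algorithm solver would replace the brute-force loop; the multi-stage model additionally links the stages through the time-limit and dynamic-feasibility constraints described above.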
    Research on the Maturity of Construction Risk Management of Oil and Gas Long-distance Pipeline Engineering Based on F-SEM
    ZHANG Peng, JIANG Yifan, ZHAN Yuxin
    2023, 32(8):  227-233.  DOI: 10.12005/orms.2023.0275
    As an important part of the national lifeline, the long-distance oil and gas pipeline project has the characteristics of many field operations, long lines, large regional spans, and many natural obstacles. It faces a large number of uncertain factors, including weather factors, geographical terrain, and construction participants. These factors will affect the construction progress and quality to a certain extent and increase the risks borne by the construction party. Therefore, in order to effectively control and manage accidents, it is necessary to do a good job in pipeline construction risk management, so as to provide sufficient guarantee for the quality of long-distance pipeline construction, the safety of construction personnel and the smooth progress of the later stage of the project. However, the current research on the risk management of long-distance oil and gas pipeline construction mostly focuses on the specific content, influencing factors and measures of management. There is no objective understanding of the relationship between the influencing factors of risk management and the maturity of management level, especially the weak links and promotion strategies of risk management ability of long-distance oil and gas pipeline construction.
    Based on this and the classical capability maturity model, this paper introduces maturity theory into the field of risk management of long-distance oil and gas pipeline construction using the structural equation model and other methods. Based on the structural model of the OPM3 model and the main model of the CMM model, a maturity model covering three dimensions (key risk factors, the risk management process and maturity level) is constructed, and an evaluation index system is established from the key risk factors, including 6 first-level indicators and 17 second-level indicators. SPSS 25.0 is then used to test the reliability and validity of the model. On this basis, AMOS 21.0 is used to analyze the fit between the sample data and the model, the path coefficient of each factor is obtained, and the index weights are calculated, so as to explore the contribution of different indicators to maturity. The analysis shows that the influence of each latent variable on risk management maturity, from large to small, is: risk identification > risk analysis > summary suggestion > risk response > risk supervision > risk management planning. Among them, the risk identification factor (0.204) and the risk analysis factor (0.202) have the greatest impact on maturity, indicating that in the risk management process the identification and analysis of potential risks are very important: only by collecting, as far as possible, all kinds of risk events that affect the project objectives, clarifying the types and nature of the risks, and evaluating their impact on the realization of the project objectives can follow-up management work be carried out well. At the same time, the weight analysis of each observed variable shows that the establishment of a risk management process planning document (0.356), the preparation of a risk identification report (0.362), the analysis of the scope and severity of risks (0.353), the implementation of the risk response plan (0.523), the tracking of risks' dynamic development status (0.369) and the improvement of technical reserves (0.365) have a significant impact on risk management maturity.
    Finally, taking a pipeline construction project as an example, the fuzzy comprehensive evaluation method is used to analyze the collected data. The calculation shows that the risk management maturity of the project lies between the repeatable level and the managed level, which is consistent with the actual development of the industry. From a practical point of view, this verifies the feasibility and effectiveness of the risk management maturity evaluation model for oil and gas long-distance pipeline construction constructed in this paper.
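The final evaluation step can be sketched as the standard weighted-average fuzzy comprehensive evaluation B = W·R. The weight vector and membership matrix below are illustrative stand-ins, not the project's survey data.

```python
# Sketch of fuzzy comprehensive evaluation over CMM-style maturity levels.
# W: indicator weights (in the paper these come from the SEM path analysis).
# R[i][k]: membership degree of indicator i in maturity level k.
# All numbers here are illustrative, NOT the project's survey data.
levels = ["initial", "repeatable", "defined", "managed", "optimized"]
W = [0.36, 0.36, 0.28]
R = [
    [0.05, 0.30, 0.35, 0.25, 0.05],
    [0.10, 0.35, 0.30, 0.20, 0.05],
    [0.05, 0.25, 0.40, 0.25, 0.05],
]

# Weighted-average operator M(*, +): B[k] = sum_i W[i] * R[i][k]
B = [sum(w * row[k] for w, row in zip(W, R)) for k in range(len(levels))]
grade = levels[max(range(len(B)), key=B.__getitem__)]
```

With these illustrative numbers the dominant membership falls on the "defined" level, i.e. between "repeatable" and "managed", the same region reported for the case project.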
    Research on the Impact of Government Incentives and Quality Input on the Equilibrium of Bike Sharing Service Networks
    LI Jianfei, TANG Kun, SHEN Yang, LI Bei
    2023, 32(8):  234-239.  DOI: 10.12005/orms.2023.0276
    In the new era, the sharing economy, characterized by platforms, efficiency, openness, and distribution, has developed under the traction of enterprises' cost reduction and efficiency enhancement and the promotion of new-generation information technology, and bike sharing, as a typical representative of the sharing economy, has become a focus of attention from all walks of life. In recent years, bike-sharing companies have begun to pay attention to issues such as over-input and sharing frequency, and use big data to forecast consumer demand to mitigate the negative impact of over-input. However, practice has proven that demand-side forecasting and scheduling alone can hardly resolve the negative impact of over-input under random demand perturbation. It is necessary to incorporate the relevant subjects, such as manufacturing suppliers, service integrators, the consumer demand side, and the government, into the shared bicycle service system at the service supply chain network level, find strategies to achieve supply-demand balance among the participating subjects, and plan, organize and optimize network resource allocation from the overall service supply and demand system to promote its sustainable development.
    In reality, shared bicycle operation is a service supply chain network, and it is difficult to explain or find the optimal solution for the system as a whole by studying node-to-node associations at a single link or stage alone. Based on the competition among similar members in the closed-loop service supply chain network, this research divides the bike sharing service network into three levels, manufacturing suppliers, service integrators, and the consumer demand side, and establishes an equilibrium model of the bike sharing service network considering government incentives and quality preferences. Combined with a modified projection contraction method, a model-solving procedure is designed for numerical case analysis.
    The research analysis draws the following important conclusions: (1)Only when the manufacturing supplier's new product price equals the sum of the shadow price, the marginal production cost, and the marginal transaction cost of the new product between the manufacturing supplier and the service integrator, and the service integrator is willing to accept that price, will the manufacturing supplier supply at a profit; the same is true for remanufactured products. (2)The service integrator's new product price changes with the average sharing frequency: the higher the average sharing frequency, the lower the market price. The price of the remanufactured product is affected not only by the average sharing frequency but also by the government incentive, and the greater the government incentive, the lower the endogenous price. (3)The endogenous price of the service integrator's recycled products is negatively associated with the service integrator's quality preference (the end-of-life aversion rate of recycled products) and positively associated with the manufacturing supplier's quality preference (the cost per end-of-life product). Meanwhile, as the service integrator's recycling quantity of scrap products in the demand market increases, the average sharing coefficient of both types of products in the shared-service demand market increases, and the service integrator's sharing revenue improves accordingly.
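Conclusion (1)'s participation condition and conclusion (2)'s comparative statics can be expressed as a small check. The functional form and coefficients below are assumptions for illustration, not the paper's equilibrium expressions.

```python
# Toy encoding (NOT the paper's equilibrium model) of two stated results:
# (1) a manufacturing supplier supplies only if the integrator's price covers
#     the shadow price plus marginal production and transaction costs;
# (2) the remanufactured product's endogenous price falls as the average
#     sharing frequency and the government incentive rise.

def supplier_supplies(price, shadow_price, mc_production, mc_transaction):
    return price >= shadow_price + mc_production + mc_transaction

def remanufactured_price(base, avg_share_freq, gov_incentive,
                         k_freq=2.0, k_gov=1.0):
    """Illustrative linear form; the coefficients k_freq, k_gov are assumptions."""
    return base - k_freq * avg_share_freq - k_gov * gov_incentive
```

In the paper these relationships arise endogenously from the variational-inequality equilibrium conditions; the linear form here only mirrors their signs.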
    The main contributions of this study are as follows: Firstly, it enriches the research on the bike sharing supply chain. Secondly, it broadens the research horizon of the bike sharing service chain. Thirdly, combining consumer quality preferences with the bike sharing service supply chain network remedies a shortcoming of previous studies. However, this paper does not consider whether the average sharing coefficient can be defined as the "ecological coefficient" in the shared service supply chain, which is worth exploring in the next stage.