
Table of Contents

    25 July 2023, Volume 32 Issue 7
    Theory Analysis and Methodology Study
    Effect of Order Information on the Fairness Concern of Suppliers under Dual Sourcing: Experiment Study
    XUE Chao, ZHAO Xiaobo, ZHU Wanshan, WU Yan
    2023, 32(7):  1-6.  DOI: 10.12005/orms.2023.0209
    The practice of dual sourcing is common among manufacturers and is considered an effective strategy to deal with the risk of supply disruptions, high costs, capacity limitations, and/or lead time variability. To negotiate better terms, firms often rely more on a primary supplier, resulting in uneven order allocations between suppliers in dual sourcing. However, suppliers may reject orders; that is, a supply shortage can be caused by suppliers' intentional refusal even when their supply capacity is sufficient to meet the order. Fairness concern is one of the major drivers of the observed order rejections. To mitigate this type of behavioral supply risk, we focus on the effect of order information on the fairness concern of suppliers.
    To analyze the effect of order information on the fairness concern of dual-sourcing suppliers, this paper considers a supply chain consisting of a manufacturer and two suppliers. The two suppliers offer an incremental quantity discount policy. To benefit from the discount, the manufacturer orders more from one supplier, which causes dissatisfaction for the other supplier, which receives a smaller order quantity. The small-order supplier's fairness concern may result in order rejection. We first study a normative model of the order allocation game analytically and provide a theoretical benchmark. By designing and conducting behavioral experiments in the laboratory, we compare decision behaviors under two treatments. In one treatment, the order quantity is complete information; that is, each supplier can observe the order quantities that the manufacturer allocates to both suppliers. In the other, the order quantity is private information; that is, each supplier observes only its own order quantity. We collect a total of 880 decisions for each role in the complete information treatment and 1,040 decisions in the incomplete information treatment. We use the one-sample Wilcoxon signed-rank test to compare experimental results with theoretical predictions and the Wilcoxon-Mann-Whitney test to compare the experimental results of the two treatments. To explain the observations in the experiment, we build behavioral models for both treatments. For the complete information treatment, we combine the ERC model with a logit choice model. For the incomplete information treatment, we use perfect Bayesian equilibrium to analyze the decisions of manufacturers and suppliers. Maximum likelihood estimation is applied to estimate the parameters of the decision makers' behavioral preferences.
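The two nonparametric comparisons described above can be sketched with SciPy; the decision data below are randomly generated placeholders, not the paper's experimental data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-subject decisions (NOT the paper's data)
complete_info = rng.normal(loc=0.30, scale=0.10, size=40)    # complete-information treatment
incomplete_info = rng.normal(loc=0.40, scale=0.10, size=40)  # incomplete-information treatment
theory_prediction = 0.20                                     # normative-model benchmark

# One-sample Wilcoxon signed-rank test: observed decisions vs. theoretical prediction
w_stat, w_p = stats.wilcoxon(complete_info - theory_prediction)

# Wilcoxon-Mann-Whitney test: compare the two treatments
u_stat, u_p = stats.mannwhitneyu(complete_info, incomplete_info)

print(f"signed-rank p = {w_p:.4f}, Mann-Whitney p = {u_p:.4f}")
```

With the placeholder effect sizes above, both tests reject at conventional levels; the paper's actual significance claims rest on its own data.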
    Experimental data indicate that the behavioral decisions of subjects deviate from the normative models' predictions in both treatments. In the experiment, the manufacturer allocates higher order quantities to the small-order supplier than predicted, but these orders are frequently rejected. Compared with the complete information treatment, manufacturers allocate higher order quantities to the small-order supplier in the incomplete information treatment, but the orders are rejected more often. All these experimental results are statistically significant. Identifying fairness concern as the main driver, our behavioral models and parameter estimation demonstrate that suppliers show stronger fairness concern in the incomplete information treatment than in the complete information one. The behavioral models capture the subjects' decision process, and accurately predict the manufacturer's order allocation and the supplier's rejection behavior in the experiment. This study shows that manufacturers should focus on the fairness concern of the small-order supplier to reduce order rejection. Keeping order information private, as in the incomplete information treatment, is not an effective way to mitigate the behavioral supply risk caused by fairness concern; order information transparency is more effective.
    Research on the Strategy of Cooperative Emission Reduction in Supply Chain Involving ESCO under Different Capital Structure
    BAI Shizhen, WU Dongxiu, YAN Zhanghua
    2023, 32(7):  7-14.  DOI: 10.12005/orms.2023.0210
    The national unified carbon emissions trading market fosters the growth of contractual energy management and supply chain collaboration to reduce emissions. Cooperating with an energy service company (ESCO) to improve energy efficiency under the carbon cap-and-trade mechanism has become increasingly welcome. In an ESCO contract, the capacity of the two parties to coordinate the allocation of risks and benefits is a significant factor in determining the ultimate emission reduction targets and benefits, and an unreasonable allocation will result in the dissolution of the partnership. Numerous studies have focused on the distribution of risks and benefits of energy management contracts, but they have not considered the structure of the capital contribution as the basis for determining the distribution of risks and benefits. Therefore, it is necessary to examine the capital structure of energy efficiency companies and the impact of different capital contribution methods on the efficiency of emission reduction cooperation. Under uncertainty in the value of the investment risk of Energy Performance Contracting Projects (EPCPs), three game structures, namely no credit support, energy saving benefit sharing, and energy saving cost sharing under the support of green finance, are established to investigate the impact of capital structure on the emission reduction effect and profit. The relationship between the risk and benefit distribution of energy savings is determined by the capital structure of energy efficiency service companies. Exploring mitigation cooperation strategies should therefore begin with the capital structure, but there are few pertinent studies.
    Consequently, this paper examines supply chain emission reduction cooperation strategies from the standpoint of capital structure. Based on the typical business model of energy efficiency service companies, this paper examines the changes in carbon emission reduction levels, supply market shares, and profits under three different capital structure scenarios. We divide the capital structure of the energy efficiency service company into three scenarios: No green credit support, energy efficiency benefit sharing with green credit support, and energy efficiency cost sharing with green credit support. The findings provide additional theoretical references for promoting supply chain cooperation in emission reduction and achieving better cooperation strategies. Two capital structure indicators of the ESCO are first developed to analyze the optimal solutions. Our research indicates that green finance benefits operational decisions regarding carbon emission reduction in the supply chain. In addition, the ESCO's capital structure and revenue share have a substantial impact on supply chain cooperation. The cooperation strategy can be determined by adjusting the two capital structure indicators of the ESCO in order to increase the level of emission reduction, market share, and profits.
    The conclusions are as follows: (1) In the capital constraint scenario, the availability of green credit to ESCOs will affect cooperative emission reduction decisions. This means that providing capital support to ESCOs will not only diversify the risk of investment in emission reduction, but will also have a greater impact on the low-carbon transformation of the entire supply chain and the industry as a whole by increasing the level of emission reduction at nodal enterprises. This provides a solid theoretical foundation for promoting the growth of green credit. (2) Comparing the energy efficiency benefit sharing strategy with the abatement cost sharing strategy reveals that the profit of energy efficiency service providers is highly correlated with the level of carbon emission reduction, whereas the profit of manufacturers is highly correlated with their market share. In the game between the two parties, the manufacturer's equitable allocation of abatement costs can increase the efficiency of abatement cooperation, its market share, and its profits. Customers' current willingness to share the cost of emission reduction in China's contract energy management is low, which provides theoretical support for customers to adequately share the cost of energy savings. (3) Green market preferences create the conditions necessary to reconcile the allocation of risks and benefits of emission reduction between energy efficiency service providers and manufacturers. The conclusions obtained could theoretically support supply chain cooperation to reduce emissions. (4) The return on emission reduction inputs and outputs of energy efficiency service companies is a lever to moderate supply chain emission reduction cooperation, whereas manufacturers and green credit maximize economic and environmental benefits by influencing the capital structure and returns of energy efficiency service companies.
    This study’s contribution is to explain the effect of risk and benefit allocation on the efficacy of cooperation and to provide a theoretical foundation for policy formulation pertaining to contract energy management and even green credit and green supply chains. However, this paper only examines the characteristics of the ESCO’s capital structure, but not the customer’s capital structure and expected financial returns. In conjunction with the contractual energy management contract, additional research can be conducted to investigate a win-win contractual framework in terms of the allocation of risks and benefits for both parties.
    Research on Outsourcing Emission Reduction Strategies Considering Consumer Preference under Carbon Cap-and-Trade Policy
    JIANG Xiaofen, GAO Guangkuo, SUN Hao
    2023, 32(7):  15-22.  DOI: 10.12005/orms.2023.0211
    The emission of greenhouse gases has exacerbated the issue of global warming, prompting the international community to take a series of measures. From the introduction of the concept of “carbon trading” in the 1997 Kyoto Protocol, to the establishment of the EU Emissions Trading System in 2005 and its official operation in 2008, the practice has proven to be an effective way to reduce carbon emissions. Since China’s ratification of the Paris Agreement in 2016, it has gradually put forward a series of green initiatives to fulfill its nationally determined contributions under the agreement. China has committed itself to peaking carbon emissions by 2030 or earlier, and has recently announced that it will strive for carbon neutrality by 2060. From the pilot carbon trading programs in seven provinces in 2010, to the issuance of the Management Measures for Carbon Emissions Trading (Trial) in 2020, the national carbon emissions trading market has already been launched. Under the government’s implementation of carbon quota and carbon trading policies, energy-consuming enterprises often adopt two types of emission reduction methods: Self-reduction and “outsourcing reduction”. This paper considers a two-echelon low-carbon supply chain consisting of a supplier and a manufacturer under the carbon quota trading policy. It establishes a two-stage game model based on different levels of cooperation in the supply chain’s self-reduction mode and a three-stage game model in the outsourcing reduction mode.
    By comparing the equilibrium results, we find that the higher the level of cooperation in the supply chain’s self-reduction mode, the better the emission reduction effect and the higher the supply chain profit. Therefore, under the government’s carbon quota trading policy, supply chain companies can only maximize their profits by strengthening cooperation and achieving better emission reduction effects. Under the outsourcing reduction mode, the emission reduction rate and profit are much higher than the optimal emission reduction rate and profit in the self-reduction mode. Thus, the supply chain has an incentive to choose the outsourcing reduction mode. The difference factor in emission reduction investment is crucial to the choice of the supply chain’s emission reduction mode. When it reaches a certain threshold, the optimal choice for the supply chain is outsourcing reduction. When it is lower than this threshold, the supply chain can obtain greater benefits by choosing self-reduction. Consumer low-carbon preferences have a positive impact on the emission reduction rate and related subjects’ profits in the supply chain, but a negative impact on the share of energy-saving benefits. Also, the more consumers prefer low-carbon products, the more the supply chain tends to choose outsourcing reduction. Therefore, the government can guide consumers to establish low-carbon consumption awareness in the process of energy conservation and emission reduction to achieve the goal of low-carbon production by companies and achieve better emission reduction effects. This also suggests that the government can improve the level of low-carbon production in society by increasing consumer awareness of low-carbon consumption. At the same time, by considering consumer low-carbon preferences, supply chain companies and energy-saving service companies can achieve a win-win situation of maximizing emission reduction and profit. The supply chain’s emission reduction rate and the energy-saving service company’s profit vary inversely with the share of energy-saving benefits, while the total profit of the supply chain shows an inverted U-shaped relationship with that share. Finally, through numerical simulation, this paper analyzes and explains the impact of the difference factor in the supply chain’s emission reduction investment, consumer low-carbon preferences, and the share of energy-saving benefits on the supply chain’s emission reduction mode selection, emission reduction rate, and related subjects’ profits.
    This study assumes that the carbon trading price remains constant and the contract period and equipment lifecycle under the outsourcing reduction mode are single-period. Considering that the supply-demand relationship in the actual carbon trading market often changes, and energy-saving service contracts and equipment lifecycles tend to be multi-period, future research will focus on product markets and carbon emissions trading markets to study the issue of multi-period supply chain cooperation and emission reduction strategies, making the model more realistic.
    Channel Design and Coordination Strategy of the Manufacturer under Platform Selling
    ZHANG Mengying, ZHANG Zihao, WANG Ningning, WU Haihui
    2023, 32(7):  23-29.  DOI: 10.12005/orms.2023.0212
    In recent years, online retail has experienced strong growth, and many manufacturers have chosen to join online retail platforms by paying commissions to obtain the qualifications to sell their products on these platforms. By opening up e-commerce platforms as a sales channel, manufacturers can expand their sales reach, reduce sales costs, and avoid being dominated by traditional retailers. However, joining a platform sales channel not only requires paying a certain commission to the platform but also can lead to serious channel conflicts and competition for manufacturers due to the coexistence of multiple sales channels. Failure to effectively resolve conflicts between the two channels will damage the profits of the members of the supply chain. Therefore, whether to add a platform sales channel in the presence of traditional retail channels and how to deal with the competition and conflicts caused by the increase in channels are issues of concern to manufacturers.
    In this paper, we consider whether a manufacturer with a traditional retail channel should add a platform channel, and incorporate the market expansion, channel competition, and platform fee rate brought by platform selling into supply chain decision-making models. We construct a Stackelberg game between the manufacturer and the traditional retailer, where the manufacturer is the leader and the retailer is the follower. By constructing supply chain decision-making models with and without the platform channel, we analyze the manufacturer’s channel design strategy and its effect on the traditional retailer’s profit. Further, a wholesale-retail price contract with fixed compensation is designed to coordinate the supply chain under the channel conflicts caused by channel addition. This paper derives several main conclusions.
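The backward-induction logic of such a manufacturer-led Stackelberg game can be illustrated with a deliberately simplified single-channel version (linear demand q = a − p; the demand form and all symbols here are illustrative assumptions, not the paper's dual-channel model):

```python
import sympy as sp

w, p, a, c = sp.symbols('w p a c', positive=True)

# Follower (retailer): choose retail price p given the wholesale price w
retailer_profit = (p - w) * (a - p)
p_star = sp.solve(sp.diff(retailer_profit, p), p)[0]      # best response p*(w)

# Leader (manufacturer): anticipate p*(w) and choose w
manufacturer_profit = (w - c) * (a - p_star)
w_star = sp.solve(sp.diff(manufacturer_profit, w), w)[0]  # equilibrium wholesale price

print("p*(w) =", sp.simplify(p_star))   # (a + w)/2
print("w*    =", sp.simplify(w_star))   # (a + c)/2
```

The paper's models follow this same solve-the-follower-first pattern, with the platform channel, fee rate, and fixed-compensation contract layered on top.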
    First, due to the existence of channel competition, when the potential market increment brought by the platform is limited, adding the platform channel will hurt the profits of both the manufacturer and the retailer. As the increment increases, adding the platform channel will benefit one party and hurt the profits of the other. When the increment reaches a certain level, adding the platform channel will increase the profits of both the manufacturer and the retailer. Second, the wholesale-retail price contract can achieve the overall optimal profit of the platform dual-channel supply chain (realizing the coordination of the supply chain), but only in specific cases can it simultaneously improve the profits of the manufacturer and the retailer. In most cases, this contract improves the retailer’s profit but hurts the manufacturer’s profit. Third, on the basis of the wholesale-retail price contract, the retailer pays a certain fixed compensation (a franchise fee) to the manufacturer. When the fixed compensation is within a certain range, the wholesale-retail price contract with fixed compensation can achieve the coordination of the supply chain and simultaneously improve the profits of the manufacturer and the retailer. Fourth, when the proportion of consumers who prefer traditional channels is moderate, the wholesale-retail price contract with fixed compensation has good flexibility, and there is more room for bargaining between the manufacturer and the retailer regarding the fixed compensation. When consumers have a clear channel preference, there is less room for bargaining regarding the fixed compensation.
    This article considers a manufacturer-dominant Stackelberg game; future research could investigate Stackelberg games dominated by traditional retailers or Nash games in which both parties make decisions simultaneously, which may yield different conclusions from those drawn in this article. Additionally, from an ease-of-implementation standpoint, this article designs a wholesale-retail price contract with fixed compensation to coordinate the platform dual-channel supply chain, and future research can explore the coordinating abilities of other contracts in this context. Finally, the decision-making model in this article is based on the assumption of deterministic demand, and future research could expand from the perspective of random market demand.
    Financing Strategy of Closed-loop Supply Chain Retailers under Loss Aversion and Demand Uncertainty
    DING Lili, LU Mengtong, WANG Lei
    2023, 32(7):  30-36.  DOI: 10.12005/orms.2023.0213
    With resource scarcity and environmental degradation, how to balance economic and environmental benefits has become an important issue. The smooth operation of closed-loop supply chains (CLSC) can help solve these problems. However, the problem of the retailer’s capital constraint in the CLSC always exists. The retailer can obtain capital support through external bank loans and internal equity financing. Some studies have examined the operation decisions of the CLSC, financing strategies in the supply chain, and the impact of risk attitude on supply chain decisions, but research incorporating capital constraints and participants’ loss aversion into the CLSC is still relatively scarce. In addition, bank credit and trade credit are still considered the main financing modes in related studies, while internal equity financing within the supply chain, which is commonly used in reality, is ignored. Based on this, this paper studies the green operation decisions of each participant under external bank loans and internal manufacturer investment. Furthermore, this paper considers the impact of the capital demander’s loss aversion preference on financing mode selection. Thus, a three-level CLSC Stackelberg game model is established, including a rational manufacturer, a capital-constrained loss-averse retailer, and a rational recycler.
    In this model, we assume that the manufacturer is the top leader, the recycler is the secondary leader, and the retailer is the follower. The model can be divided into two stages. The first stage is the production and sales stage, where the retailer chooses the order quantity based on the wholesale price determined by the manufacturer. Then, facing uncertain market demand, the retailer earns sales profits. We assume that market demand follows a uniform distribution. To simplify the analysis, the unit retail price is normalized to 1. In this process, there is a capital gap for the retailer, which can be resolved through two channels: Bank loans or accepting investment from the manufacturer. At the end of this stage, the retailer needs to pay the cost of financing. The second stage is the recycling and remanufacturing stage. The recycler recovers used products from consumers or retailers and then sells them to the manufacturer for reproduction. The manufacturer can produce new products and remanufactured products of the same quality, and the production of remanufactured products saves costs. The recycler must establish recycling networks and channels before carrying out recycling activities, and this cost is a quadratic function. A loss aversion utility function is used to describe the retailer’s loss aversion. Based on this model, backward induction is used to obtain the optimal decisions of each party under different financing modes. Then, numerical simulation is used to examine the impact of loss aversion behavior on financing and operational decisions.
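How loss aversion can shrink a capital-constrained retailer's order can be sketched with a toy newsvendor under uniform demand; the parameter values and the piecewise utility below are illustrative assumptions, not the paper's model or calibration:

```python
import numpy as np

def expected_utility(q, w=0.5, lam=2.0, b=1.0, n=200_000, seed=1):
    """Monte-Carlo expected loss-aversion utility of an order quantity q.

    Illustrative assumptions: retail price normalized to 1, wholesale price w,
    demand D ~ U(0, b), profit pi = min(D, q) - w*q, and losses weighted by lam >= 1.
    """
    rng = np.random.default_rng(seed)
    d = rng.uniform(0, b, n)
    profit = np.minimum(d, q) - w * q
    # Piecewise loss-aversion utility: gains count once, losses lam times
    return np.mean(np.where(profit >= 0, profit, lam * profit))

# A more loss-averse retailer (larger lam) chooses a smaller order quantity
qs = np.linspace(0.01, 1.0, 100)
q_neutral = qs[np.argmax([expected_utility(q, lam=1.0) for q in qs])]
q_averse = qs[np.argmax([expected_utility(q, lam=3.0) for q in qs])]
print(f"optimal order: lam=1 -> {q_neutral:.2f}, lam=3 -> {q_averse:.2f}")
```

For this toy setup the risk-neutral optimum is the classical newsvendor quantity q* = 1 − w, and raising lam pulls the order down; the paper embeds this kind of behavior inside the three-level game with financing costs.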
    The results show that financing costs have the same impact on the operating decisions of recyclers and retailers, but the opposite impact on the manufacturer’s wholesale price. Moreover, only when the financing cost crosses a threshold can the retailer and investors realize a Pareto improvement. The higher the retailer’s loss aversion, the larger the range of acceptable loan interest rates and the smaller the range of dividend ratios. When the negotiated dividend ratio is 0.05 and the loan interest rate is relatively small, the retailer prefers investment. In reality, recyclers may also face capital constraints and other parties may have different risk preferences. Thus, analyzing the impact of these factors on the capital-constrained CLSC is also worthy of in-depth research.
    Scheduling Just-in-time Part Supply for Mixed Model Assembly Lines Based on Material Supermarket
    PENG Yunfang, SHAO Wenqing, XIA Beixin
    2023, 32(7):  37-43.  DOI: 10.12005/orms.2023.0214
    The development of mass customization has led to an increasing variety of products. To meet diversified customer needs, mixed model assembly lines that allow different models to be assembled on the same production line have emerged. Tens of thousands of parts need to be delivered to the assembly line on time, which poses an enormous challenge for manufacturing enterprises’ part supply. To meet the part requirements of mixed model assembly lines, frequent small-batch deliveries have become a trend in recent years. A material supermarket is introduced to promote just-in-time part supply for the mixed model assembly line.
    The part feeding problem refers to the logistics process of delivering matched parts to the corresponding stations on time according to part consumption. Combined with the new part-feeding mode based on a material supermarket, this paper proposes a just-in-time part feeding method that differs from traditional periodic part feeding. Based on the problem description and related assumptions, a mixed integer programming model with the goal of minimizing work-in-process inventory is constructed to determine the optimal loading of parts and the schedule of each tour under the constraints of tow train capacity and the prevention of part shortages. A heuristic algorithm is designed to solve large-scale problems, considering the continuity and complexity of modern production. The heuristic algorithm initially arranges the part types and the start time of each tour according to the calculated shortage points. Once the time intervals of two consecutive tours overlap, the start times are coordinated by pushing them backward.
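The shortage-point idea behind such a heuristic can be sketched as follows; the greedy consolidation rule, part data, and capacity are invented for illustration and omit travel times and the backward-pushing coordination step:

```python
def schedule_tours(parts, capacity):
    """Greedy just-in-time tour scheduling sketch (illustrative, not the paper's algorithm).

    parts: list of (name, line_side_inventory, consumption_rate, batch_size).
    A part's shortage point is when its line-side inventory runs out; tours are
    timed at the earliest pending shortage and load parts up to tow-train capacity.
    """
    # Shortage point = current inventory / consumption rate, earliest first
    shortage = sorted(
        (inv / rate, name, batch) for name, inv, rate, batch in parts
    )
    tours, load, start = [], [], None
    for t, name, batch in shortage:
        if start is None:
            start, load = t, [(name, batch)]
        elif sum(b for _, b in load) + batch <= capacity:
            load.append((name, batch))        # consolidate into the current tour
        else:
            tours.append((start, load))       # dispatch full tour, open a new one
            start, load = t, [(name, batch)]
    if load:
        tours.append((start, load))
    return tours

# Hypothetical parts: (name, inventory, consumption per period, delivery batch)
parts = [("A", 20, 4, 5), ("B", 30, 3, 5), ("C", 12, 2, 5)]
print(schedule_tours(parts, capacity=10))
```

Here parts A and C (shortage points 5 and 6) share one tour because the tow train has spare capacity, while B (shortage point 10) triggers a second tour.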
    To evaluate the computational performance of the proposed heuristic algorithm, problems of different scales are employed in the numerical analyses. The results obtained from the proposed heuristic algorithm are compared with those solved by Cplex and a genetic algorithm. The comparison demonstrates that the proposed heuristic obtains all the optimal solutions for the small-sized problems, and it outperforms the genetic algorithm in terms of both solution quality and solution time. Furthermore, the just-in-time part feeding method is analyzed and compared with the widely used periodic part feeding under different tow train capacities as well as different material requirements. The results show that the just-in-time part feeding method proposed in this paper makes the delivery time as close to the shortage time of each part as possible. Compared with the widely used periodic part feeding method, it can more effectively reduce workstation inventory, the occupation of workstation space, and production costs. At the same time, the capacity of the tow train also has an impact on the average inventory, and it needs to be selected reasonably according to the shortage times to reduce the average inventory.
    Although there has been much research on the part feeding problem for mixed model assembly lines, most existing studies focus on periodic part feeding. The research in this paper provides just-in-time scheduling for the tow train that travels between the material supermarket and the assembly line. It is of great significance for enriching and expanding research on the part supply process based on material supermarkets. In the future, we will take multiple tow trains into consideration, whereas this paper allows only one tow train, which simplifies the problem.
    Maintenance Optimization of A K-out-of-N System Considering Common Cause Failure and Load Sharing
    ZHANG Nan, LIU Yu, CAI Kaiquan, ZHANG Jun
    2023, 32(7):  44-48.  DOI: 10.12005/orms.2023.0215
    Nowadays, mechanical equipment is becoming more and more complex, with high requirements for accuracy. The failure modes of systems are increasingly diverse. Classical models that consider only a single failure mode are incapable of describing the failure evolution and developing efficient maintenance strategies for such systems. In this paper, we consider a K-out-of-N system with identical components. Both load sharing and common cause failure effects are investigated. The system may fail when the number of operating components is less than K, or due to a common cause. The common cause failure is a two-stage process: The system first enters a defective state; if the defectiveness is revealed and repaired, no common cause failure will occur; otherwise, the defective state turns into the failure state if no maintenance action is implemented in time. The sojourn times in the perfect working state and the defective state both follow general distributions. An imperfect inspection policy is implemented to reveal the hidden failure of the system. If a system failure occurs and it is revealed by inspection, maintenance is implemented immediately, which restores the system to the as-good-as-new state. Otherwise, if the failure is unrevealed due to inspection error, the system remains in the failure state until the failure is detected. If no failure occurs, the system continues to operate. We also assume that the inspection and maintenance times are non-negligible. The instantaneous availability and the steady-state availability of the system are derived. The inspection interval of the system is also optimized. A numerical example is presented to show the applicability of the proposed model. It can provide a theoretical reference for decision-makers when developing efficient maintenance strategies.
    We utilize probability theory to model the system reliability indices. Renewal theory is used to formulate the stochastic optimization model, where the objective is to minimize the inspection and maintenance cost of the system over the infinite time horizon. The decision parameter is the inspection period and the availability is the constraint. The stochastic optimization problem is solved numerically.
    Theoretically, the expressions for the instantaneous availability and the steady-state availability are given, and the cost rate over the infinite time horizon is derived. Numerically, an example is given to show the variation of the system reliability quantities with respect to different system parameters. It is shown that the system availability decreases with respect to the inspection interval and the inspection error rate. When inspection is infrequent or its efficiency is low, system defectiveness may remain hidden without correction and turn into a system failure with larger probability, which decreases the average system availability. In addition, the sensitivity of the system reliability with respect to K is examined; as expected, the reliability function decreases with K, which is the smallest number of operating components required for system functionality. With respect to the inspection and maintenance cost rate over the infinite time horizon, the optimal inspection interval is presented. The cost rate over the infinite time horizon appears to be a convex function of the inspection interval.
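The convex shape of that cost-rate curve can be reproduced with a deliberately simplified renewal-reward sketch (exponential lifetime, perfect inspections and repairs, invented costs; this is not the paper's two-stage defect model with inspection error):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def cost_rate(T, lam=0.1, c_i=1.0, c_d=20.0, c_r=5.0, n=100_000, seed=2):
    """Long-run cost rate of periodic inspection with interval T (toy model).

    A unit fails silently at an exp(lam) time X; the failure is revealed only at
    the next inspection, after which the unit is renewed. Costs: c_i per
    inspection, c_d per unit of hidden-failure downtime, c_r per renewal.
    """
    rng = np.random.default_rng(seed)
    x = rng.exponential(1 / lam, n)        # simulated failure times
    k = np.floor(x / T) + 1                # inspections until the failure is detected
    cycle_len = k * T                      # cycle ends at the detecting inspection
    downtime = cycle_len - x               # time spent in the hidden-failure state
    cost = c_i * k + c_d * downtime + c_r
    # Renewal-reward: long-run cost rate = E[cycle cost] / E[cycle length]
    return cost.sum() / cycle_len.sum()

res = minimize_scalar(cost_rate, bounds=(0.1, 20.0), method="bounded")
print(f"optimal inspection interval T* ~ {res.x:.2f}")
```

Very frequent inspection inflates the inspection cost term, very rare inspection inflates the downtime term, and the minimizing interval sits in between, mirroring the convexity observed in the paper's numerical study.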
    In this work, both load sharing and common cause failure effects are investigated for a K-out-of-N repairable system. An imperfect inspection policy is implemented to reveal the hidden failure of the system. The instantaneous availability, the steady-state availability, and the long-run maintenance cost rate are derived. Numerical examples are presented to illustrate this study. The optimum of the maintenance cost with respect to the imperfect inspection period, subject to the availability constraint, is discussed. It can provide theoretical references for decision-makers. This work could be further extended in the following three aspects. First, we have assumed that the failure rates of components are constant. It would be more interesting and challenging to relax this assumption by allowing general time-dependent failure rates. Secondly, we have considered only a constant inspection error rate, under which the inspection may fail to reveal the system defectiveness. In future work, we can also consider that the inspection may misidentify the system as being in the defective state when its true state is good. Thirdly, we have assumed that the system is periodically inspected to facilitate the modelling and the corresponding calculation and analysis. Non-periodic inspection policies can be studied, where the inspection times are based on the system health condition or predicted lifetimes, etc. We plan to investigate these issues in future work.
    Optimal Control of Crowdsourcing Logistics Service Quality Considering the Technical Level of Big Data and Supply Competition
    MENG Xiuli, YANG Jing, LIU Bo, TANG Run
    2023, 32(7):  49-55.  DOI: 10.12005/orms.2023.0216
    Abstract ( )   PDF (1762KB) ( )  
    References | Related Articles | Metrics
    Crowdsourcing logistics uses big data technology to match the needs of employers and receivers, thereby saving logistics costs. At the same time, the service demand of crowdsourcing logistics fluctuates randomly: when logistics demand surges, the service platform faces a shortage of receivers. In addition, because receivers independently choose which platform to supply, there is fierce competition among crowdsourcing logistics service platforms. The platforms compete not only in the logistics demand market but also in the supply market of receivers, as illustrated by the poaching of high-quality receivers between JD crowdsourcing and Meituan crowdsourcing. In an operating environment of strong demand and competitive receiver supply, determining the optimal dynamic service competition strategy for a crowdsourcing logistics platform can effectively regulate receiver supply capacity and meet the platform's logistics order demand, which is of great significance for the operation, management, and optimization of crowdsourcing logistics service platforms.
    In view of the surge in crowdsourcing logistics demand, crowdsourcing logistics service platforms face fierce competition under a shortage of receiver supply. Considering a situation in which two service platforms compete for one receiver through commissions and the level of big data technology, a differential game model of crowdsourcing logistics service quality control based on the big data technology level under supply competition is constructed. The changes in quality control level, profits, and service quality of all parties under three situations are analyzed, and the influence of adopting a big data technology improvement strategy on the service platforms and the receiver is discussed. The results show that the higher the quality sensitivity coefficient, the higher the optimal quality control levels of the service platforms and the receiver and the optimal big data technology level: the service platform is willing to invest more in the research and development of big data technology, thereby promoting quality control. The higher the delay cost per unit demand, the lower the service platforms' optimal quality control level, because a higher delay cost means a higher cost of failing to meet demand in time due to insufficient receiver supply, and hence a lower willingness of the service platform to invest in quality control. The profit of the service platform is positively correlated with the commission sensitivity coefficient and negatively correlated with the commission competition coefficient. The higher the initial crowdsourcing logistics service quality, the higher the profits of the service platform and the receiver. Optimizing the cost of achieving a given quality control level increases the platform's enthusiasm for quality control efforts. A service platform that adopts the big data technology strategy improves its own quality control level, but this does not affect the quality control level of the competing platform, and the platform's own profit increases only when the proportion coefficient of big data technology cost optimization meets a certain condition. As for the receiver, under supply competition its quality control level remains unchanged whether or not the service platforms adopt the big data technology strategy.
    Improving the level of big data technology can improve crowdsourcing logistics service quality and give consumers a satisfying experience; to a certain extent, it can promote the development of crowdsourcing logistics platforms and receivers. When there are too few receivers to meet all crowdsourcing logistics needs in a timely manner, the service platform can appropriately increase commissions to attract more receivers and serve the strong demand in the crowdsourcing logistics market. During the delivery process, measures such as strengthening supervision of receivers' service quality and inviting recipients to evaluate their satisfaction with receivers after delivery can be taken to ensure crowdsourcing logistics quality. Future research can apply other suitable mathematical models to provide diversified management suggestions for improving crowdsourcing logistics service quality.
    Research on Tugboat Multi-objective Optimal Scheduling Considering Time and Fuel Consumption
    ZHONG Huiling, ZHANG Yugang, GU Yimiao
    2023, 32(7):  56-62.  DOI: 10.12005/orms.2023.0217
    Abstract ( )   PDF (1438KB) ( )  
    Large ships entering and leaving ports need tugboats to assist in berthing and unberthing, because the influence of ship length, draft, wind and current, and the berth environment prevents them from fully utilizing their own control forces for berthing and unberthing maneuvers. The tugboat is an important part of port resources, and tugboat scheduling is one of the port's important planning tasks. However, the tugboats in a port are limited. Faced with a large number of ships entering and leaving the port during the tide period that require tugboat assistance, effectively dispatching tugboats to serve ships in a timely manner is the key to improving the port's service level. At the same time, the fuel consumption of different types of tugboats varies across operating states, so improving tugboat utilization and reducing fuel consumption during scheduling is also essential for reducing the operating costs of tugboat companies. Therefore, it is necessary to seek scientific and reasonable tugboat scheduling decisions. However, the research literature on tugboat scheduling is still very limited, and most tugboat scheduling models in the existing literature are single-objective optimizations. In fact, tugboat scheduling decisions involve many factors, and there is relatively little research on multi-objective optimization of tugboat scheduling, which motivates the multi-objective tugboat scheduling study in this article.
    To balance completion time and fuel consumption in tugboat scheduling, thereby improving port service levels and reducing tugboat company operating costs, this paper constructs a mixed-integer programming model for multi-objective tugboat scheduling that minimizes both the maximum completion time of the tugboats and their total fuel consumption. The model accounts for the large number of ships entering and leaving a tidal port during the tide and calculates tugboat fuel consumption according to the different states in the scheduling process. To solve the model, NSGA-II (Non-dominated Sorting Genetic Algorithm II) is used. NSGA-II has numerous advantages in solving multi-objective optimization problems and has been widely applied to practical scheduling problems. The algorithm adopts one-dimensional real-number coding, the fitness function is set using an event-modeling idea, and the genetic operators are designed to match the characteristics of tugboat scheduling. The Pareto frontier solutions obtained and the algorithm comparison show the effectiveness of the algorithm.
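    The core step of NSGA-II, fast non-dominated sorting, can be sketched as follows; the objective vectors (maximum completion time, total fuel consumption, both minimized) are hypothetical, and this is only the sorting step, not the full algorithm with crowding distance and genetic operators.

```python
def fast_non_dominated_sort(objs):
    """Split a list of objective vectors (minimisation) into Pareto
    fronts; returns a list of fronts, each a list of indices."""
    n = len(objs)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dominated_by_me = [[] for _ in range(n)]  # solutions i dominates
    counts = [0] * n                          # how many dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by_me[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)               # non-dominated: front 0
    f = 0
    while fronts[f]:
        nxt = []
        for i in fronts[f]:
            for j in dominated_by_me[i]:
                counts[j] -= 1
                if counts[j] == 0:            # only front f dominated j
                    nxt.append(j)
        f += 1
        fronts.append(nxt)
    return fronts[:-1]

# Hypothetical (makespan, fuel) values for five schedules:
fronts = fast_non_dominated_sort([(10, 50), (12, 40), (11, 45),
                                  (13, 60), (10, 40)])
```

On this toy input, `(10, 40)` alone forms the first front, illustrating the makespan-fuel trade-off described in the abstract.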
    Finally, actual operation data of Guangzhou Port is used as an example to verify the feasibility and effectiveness of the model, providing a decision-making basis for port tugboat scheduling plans. The instance shows that the two objectives cannot be minimized simultaneously: in the Pareto frontier solutions, the total fuel consumption of the tugboats decreases as their maximum completion time increases. Port tugboat dispatchers can choose a suitable scheduling plan from the Pareto frontier solutions based on actual needs. The different scheduling plans also indicate that when tugboat resources within an area are limited, tugboats from adjacent areas can come to assist in reducing ship waiting time; however, tugboats sailing across areas at unrestricted speed consume more fuel, so reducing the number of cross-area voyages can reduce tugboats' fuel consumption. Beyond the efficiency-energy trade-off considered in this article, dynamic optimization of tugboat scheduling is also important for scheduling planners, since many uncertain events may occur during actual berthing and unberthing and prevent the original scheduling plan from proceeding as planned. In addition, tugboats are only one part of port resources, and it is worth studying how to combine them with other port resources to make integrated port operations more efficient. These are directions for our future research.
    Two Stage Matching Optimization Method for Emergency Response Team
    YI Yang, ZHU Jianjun, TONG Huagang
    2023, 32(7):  63-69.  DOI: 10.12005/orms.2023.0218
    Abstract ( )   PDF (968KB) ( )  
    Recently, emergencies have deeply damaged social development and attracted attention worldwide. Scholars from diverse areas have studied how to deal with emergency disasters, covering topics such as emergency supplies, emergency rescue, and post-disaster reconstruction. However, one key issue, the formation of teams for emergencies, has been neglected, even though it influences rescue performance. Many examples, such as the Fukushima nuclear meltdown in Japan and the Australian bushfires, have verified the importance of the emergency response team (ERT). There are two main reasons for the ERT's importance. On the one hand, because emergencies arise suddenly, the ERT is formed temporarily without a mature mechanism. On the other hand, experts from different areas are generally required to deal with the emergency together; experts from diverse areas often hold different opinions, and reaching consensus in a short time is difficult. These problems can be mitigated through careful selection of team members: we can design selection rules to build a team with a mature structure that reaches consensus easily. Clearly, how to determine the selection rules is vital for the ERT.
    After full investigation, we propose a two-layer selection mechanism for the ERT. Considering the importance of the leader, the first layer selects the leader. Because each task has its own features, the leader is selected according to the main feature of the emergency task, and, to avoid the confusion of multiple leaders, each emergency task has only one leader. The second layer then selects the team members who serve the leader, for which we design three cooperation rules. The first rule is coverage of specialties: the team members' specialties should differ from one another, and the specialties of the whole team should meet the requirements of the emergency task. Since an emergency task requires experts from several areas to cooperate, and the leader only commands the key specialty, the remaining specialties must be covered by the team members, making up for what the leader lacks. The second rule is complementarity of ability: to finish the emergency task well, the whole team should cooperate, and the leader's weaknesses should be compensated by the team members; this rule is operationalized through the differences between the values of different indexes. The third rule is consistency: the first two rules address cooperation in terms of indexes, while consistency captures the cooperation intention, and a scale measuring cooperation intention is used to represent the cooperation ability of the whole team, with the objective of maximizing the total cooperation intention. In summary, the second layer selects the team members under three rules, namely coverage of specialties, complementarity of ability, and cooperation intention; the first two rules are realized through constraints, and the last through the objective or a constraint. Finally, because the second layer can only be decided after the first, the two layers have a precedence relationship. To capture this relationship, a two-level programming model is proposed, and, considering the difficulty of solving it, a genetic algorithm is designed for the two-level programming model.
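    The two-layer structure above can be illustrated with a brute-force sketch (the paper uses a genetic algorithm for realistic problem sizes); the candidate pool, specialties, index values, and scoring below are entirely hypothetical.

```python
from itertools import combinations

# Hypothetical candidates: (name, specialty, skill index, cooperation intention)
candidates = [
    ("A", "fire", 9, 6), ("B", "medical", 7, 8),
    ("C", "logistics", 6, 9), ("D", "fire", 8, 7),
    ("E", "medical", 5, 9), ("F", "logistics", 8, 5),
]
required_majors = {"fire", "medical", "logistics"}
key_major = "fire"  # main feature of the emergency task

def best_team(team_size=3):
    best = None
    # Upper layer: one leader, matching the task's key feature.
    for leader in (c for c in candidates if c[1] == key_major):
        pool = [c for c in candidates if c is not leader]
        # Lower layer: members must cover the remaining specialties
        # (rule 1) and maximise total cooperation intention (rule 3).
        for members in combinations(pool, team_size - 1):
            majors = {leader[1]} | {m[1] for m in members}
            if majors != required_majors:
                continue
            score = leader[3] + sum(m[3] for m in members)
            cand = (score, leader[0], sorted(m[0] for m in members))
            if best is None or cand[0] > best[0]:
                best = cand
    return best
```

Enumerating leaders at the upper layer and members at the lower layer mirrors the precedence relationship that motivates the two-level programming formulation.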
    To verify the effectiveness of the proposed method, a case study of an aircraft fire is used, and its results demonstrate the advantages of the proposed method.
    Investment Optimization and Resilience Improvement of Regional Core Industrial Enterprises’ Supply Network in Case of Severe Disasters
    WANG Layin, FU Yue, ZHAO Dong
    2023, 32(7):  70-77.  DOI: 10.12005/orms.2023.0219
    Abstract ( )   PDF (1416KB) ( )  
    Industry is the leading sector of most national economies and the most important producer of industrial and consumer goods, providing the essential materials for people's daily lives and for the economic activities of various industries. Recently, major disasters such as earthquakes, floods, and epidemics have caused large-scale shutdowns of industrial enterprises in disaster-stricken cities and damaged intra-regional and inter-regional supply networks. Major disasters often cause significant economic losses, severe material shortages, and inflated prices, threatening economic security and social stability.
    How to balance the stability and economy of the supply system under the constraints of major disasters is a meaningful question. In view of this, this paper systematically links pre-disaster defense with post-disaster response from a meso perspective. Starting from the strength of government relief under major disasters, it divides network resilience into point resilience and line resilience, and explores the multi-level, multi-channel, multi-stage process by which major disasters block the supply networks of core industrial enterprises by establishing a three-stage defense-attack-response game model. The resulting two-stage robust optimization model is decomposed into a master problem and a subproblem using the column-and-constraint generation (C&CG) algorithm and solved by alternating iterations between them. Using city Z as an example, we verify the feasibility of this game model by examining the resilience of the supply network of core industrial enterprises during the response to the COVID-19 epidemic, including the defense plan and the response plan after an epidemic blockage, and propose strategies for improving the resilience of the supply network of core industrial enterprises in city Z under epidemic blockage.
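    The C&CG master-subproblem alternation can be sketched on a toy defender-attacker instance; the damage values, budget, and grid-search master below are illustrative stand-ins, not the paper's actual formulation. The master problem plans defense against the attack scenarios found so far (a lower bound), and the subproblem finds the worst attack against the current plan (an upper bound); scenarios are added until the bounds meet.

```python
import itertools

d = [10.0, 6.0]                      # damage if line i is blocked unprotected
budget = 1.0                         # total protection budget
grid = [i / 20 for i in range(21)]   # candidate protection levels per line

def loss(x, s):
    """Damage when scenario s blocks line s under protection plan x."""
    return d[s] * (1 - x[s])

def master(scenarios):
    """Best worst-case plan against the scenarios found so far
    (grid search stands in for the mixed-integer master problem)."""
    best_x, best_val = None, float("inf")
    for x in itertools.product(grid, repeat=2):
        if sum(x) > budget + 1e-9:
            continue
        val = max(loss(x, s) for s in scenarios)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

def ccg():
    scenarios = {0}                  # start with one attack scenario
    lb, ub = -float("inf"), float("inf")
    while ub - lb > 1e-6:
        x, lb = master(scenarios)    # relaxation: lower bound
        # Subproblem: worst attack against the current plan.
        s = max(range(len(d)), key=lambda s: loss(x, s))
        ub = min(ub, loss(x, s))     # feasible plan: upper bound
        scenarios.add(s)             # add the identified scenario
    return x, lb

x, val = ccg()   # converges in a couple of iterations on this toy instance
```

The alternation mirrors the defense-attack-response structure: each subproblem call plays the attacker, and each master call re-optimizes the defense against all attacks seen so far.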
    The results show that: (1)The game model used to analyze the resilience of energy networks in disaster management can also portray the resilience of industrial enterprises' material supply networks well, and the C&CG algorithm can balance the stability and economy of the supply system through alternating iterations of the master problem and subproblem. (2)Different disaster levels call for different resilience enhancement strategies. (3)Different types of supply lines call for different resilience enhancement strategies. (4)Accurate prediction of major-disaster blocking conditions is the key to enhancing the resilience of the supply network of core industrial enterprises. (5)In deciding material stockpiles, the government should be guided by the disaster defense needs and market forecasts of core industrial enterprises in each category, not only by their regional importance level. Core industrial enterprises can independently adjust their material stockpile levels according to actual defense needs, thus reducing to some extent the investment costs of dealing with major disaster blockage. In this case, even when the actual epidemic situation is worse than expected, the resilience of the regional core industrial enterprises' supply network can still be guaranteed within a certain range. This research is intended to provide theoretical support for regional responses to major disasters and to ensure the uninterrupted production and supply of critical materials.
    Research on Collaborative Delivery Optimization Based on Crowdsourcing and Piggyback Collaboration
    ZHOU Lin, CHEN Yanping, LI Haiyan, ZHU Fangbin
    2023, 32(7):  78-84.  DOI: 10.12005/orms.2023.0220
    Abstract ( )   PDF (1413KB) ( )  
    In recent years, the prosperity of commerce, especially the rapid development of e-commerce, has led to a surge in delivery demand in both urban and rural areas. With the popularity of e-commerce and the dispersed spatial locations of customers, delivery demands are extremely unbalanced across regions. In addition, given the characteristics of small batches, high frequency, and personalized demand, logistics service providers face great challenges in operating cost and customer satisfaction when carrying out delivery services independently. Effectively integrating logistics resources to innovate and intensify delivery models is a crucial measure for improving operational efficiency and reducing operational costs.
    Based on the temporal and spatial distribution of crowdsourcing vehicles in the sharing-economy environment, this paper studies collaborative delivery optimization based on crowdsourcing vehicle piggyback cooperation. The problem can be defined as the traveling salesman problem with time windows under crowdsourcing piggyback collaboration (TSPTW-CSC) and is described as follows. There are two types of vehicles in the system: basic vehicles and available crowdsourcing vehicles. All customers have time windows and may be visited by either basic or crowdsourcing vehicles. Basic vehicles start from the depot, visit customers in turn, transport the requests that need collaboration to designated transfer points, and return to the depot after completing their delivery tasks. The selected crowdsourcing vehicles act as piggyback vehicles: each starts from its origin, receives one or more piggyback requests at the transfer points, completes the deliveries in turn, and returns to its destination. The innovation of this problem lies in three aspects: (i)crowdsourcing vehicles differ in capacity; (ii)requests must be transferred from basic vehicles to crowdsourcing vehicles at the transfer points; (iii)each crowdsourcing vehicle can fulfill multiple delivery requests.
    To solve the problem effectively, a hybrid scatter search algorithm based on variable neighborhood search is designed according to the characteristics of the problem. To construct high-quality initial solutions, a two-phase heuristic of "high-quality seed solution followed by diversified population" is proposed to generate the initial population. The seed solution is built by a three-stage procedure of "basic vehicle route, basic vehicle route optimization, and cooperative delivery routes", and several random construction operators are then applied to build a diversified initial population from the seed solution. In addition, to improve the efficiency of the algorithm, a progressive neighborhood-structure usage strategy based on variable probability is designed.
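    The idea of selecting neighborhood structures with variable probability can be illustrated on a toy routing instance; the distance matrix, the two neighborhoods (swap and 2-opt), and the reward scheme are hypothetical, and time windows and the scatter search framework are omitted.

```python
import random

# Toy symmetric distance matrix over four customers (illustrative)
D = [[0, 4, 8, 9],
     [4, 0, 3, 7],
     [8, 3, 0, 2],
     [9, 7, 2, 0]]

def cost(route):
    """Length of the closed tour visiting route in order."""
    return sum(D[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def swap(route, rng):
    i, j = rng.sample(range(1, len(route)), 2)
    r = route[:]
    r[i], r[j] = r[j], r[i]
    return r

def two_opt(route, rng):
    i, j = sorted(rng.sample(range(1, len(route)), 2))
    return route[:i] + route[i:j + 1][::-1] + route[j + 1:]

def vns_with_adaptive_probs(route, iters=200, seed=1):
    """Local search where each neighborhood's selection probability
    grows when it yields an improvement and shrinks otherwise."""
    rng = random.Random(seed)
    ops, weights = [swap, two_opt], [1.0, 1.0]
    best, best_c = route[:], cost(route)
    for _ in range(iters):
        k = rng.choices(range(len(ops)), weights=weights)[0]
        cand = ops[k](best, rng)
        if cost(cand) < best_c:
            best, best_c = cand, cost(cand)
            weights[k] += 0.5                   # reward success
        else:
            weights[k] = max(0.1, weights[k] - 0.05)
    return best, best_c
```

On this four-customer instance the adaptive search quickly reaches the optimal tour length of 18.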
    The comparison between the constructed TSPTW-CSC instances and the TSPTW benchmark instances shows that crowdsourcing multi-task collaboration can significantly reduce the operating cost, with a maximum cost saving of 23.9%. Meanwhile, the effect of collaboration is significantly affected by the piggyback compensation cost, and selecting crowdsourcing vehicles with low operating costs contributes to better piggyback collaboration. The comparison of algorithm variants with different components shows that the proposed two-stage population construction strategy of "high-quality seed solution followed by diversified population" and the variable-probability progressive neighborhood-structure usage strategy improve solution quality and significantly accelerate convergence.
    The Site Selection Optimization on the Recycling Center of Faulted Shared Bicycles Based on K-means Clustering Algorithm and Center of Gravity Methods
    LIU Quanhong, TANG Fuxing
    2023, 32(7):  85-91.  DOI: 10.12005/orms.2023.0221
    Abstract ( )   PDF (1315KB) ( )  
    Bicycle sharing has not only enriched residents' travel options but also saved time and money while promoting fitness, and it has become an important means of transportation for short-distance trips. However, the number of faulty bicycles caused by normal wear and tear and human damage is very large, and the task of recycling and repairing or scrapping them is heavy, which has become a reverse logistics problem in the bicycle-sharing market.
    Current academic research on the logistics of the bicycle-sharing market focuses more on the placement, scheduling, and distribution of parking points. In terms of recycling logistics, existing research mainly focuses on route planning in the recycling of faulty bicycles. This paper uses the Wuhan bicycle-sharing market as an example to explore an optimal location strategy for recycling centers, based on cluster analysis of faulty-bicycle scrap points and the transportation-cost-oriented center of gravity method, which helps solve the problem of locating shared-bicycle recycling centers.
    This paper argues that optimizing the location of faulty shared-bicycle recycling centers can reduce the operating costs of faulty-bicycle recycling, improve recycling efficiency, and promote the development of reverse logistics in the bicycle-sharing market. The location optimization process is as follows. (1)Faulty bicycle reporting. After a shared bicycle is parked in a designated area, it is flagged for scrapping or overhaul in two circumstances: users mark it as faulty through the APP, or the regional manager, in daily checks of back-office GPS information and vehicle condition, marks faulty bicycles that pose safety hazards or need to be taken out of service. (2)Manual identification of faulty bicycles. The operations back office sends the acquired information to the maintenance supervisor of the designated area, who screens out the vehicles that need to be recycled and overhauled and marks them a second time to facilitate subsequent processing. (3)Clustering of scrap points. The initial location of each faulty vehicle is marked as a scrap point by the back office; based on the processed data, the scrap points in the area served by the recycling center are clustered with the K-means algorithm, and, according to the coordinates of the scrap-point clusters, transport vehicles and workers are arranged daily to collect the vehicles at the scrap points and bring them to the scrap center along established recycling routes. (4)Execution of recycling tasks. The faulty bikes are transferred from the scrap center to the recycling center for overhaul and then put back into service, completing the recycling task.
    In this paper, we first construct a clustering model for the scrap points of faulty bikes. We use the elbow method to determine the best number of clusters, standardize the data, select k initial cluster centers accordingly, and assign each sample to the nearest cluster by the shortest Euclidean distance. The mean of the samples in each cluster is then used as the new cluster center, and these steps are repeated until the cluster centers no longer change, yielding the final clustering of scrap points. The center of gravity method is based on the principle of cost optimality: treating each cluster obtained from the K-means algorithm as the service area of one recycling center, the center of gravity of that system is the best location for the faulty shared-bicycle recycling center.
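    The two steps, K-means clustering of scrap points followed by a weighted center of gravity per cluster, can be sketched as follows; the coordinates and weights are hypothetical, and the sketch omits the elbow method and data standardization.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: assign each point to the nearest center,
    then move each center to the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        centers = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centers[j0]           # keep an empty cluster's center
            for j0, cl in enumerate(clusters)
        ]
    return centers, clusters

def center_of_gravity(points, weights):
    """Cost-weighted site; a weight could be scrap volume times freight rate."""
    w = sum(weights)
    return (sum(wi * x for wi, (x, _) in zip(weights, points)) / w,
            sum(wi * y for wi, (_, y) in zip(weights, points)) / w)
```

In use, each cluster returned by `kmeans` would be fed to `center_of_gravity` with its scrap volumes as weights to site that cluster's recycling center.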
    Simulation validation of the model shows that the optimized recycling-center locations not only reduce the cost of faulty shared-bicycle recycling but also achieve lower overall operating cost and higher recycling efficiency than the three existing, more distantly located recycling centers in Wuhan, while facilitating sub-regional operation and management of shared bicycles. This demonstrates that the K-means clustering algorithm combined with the center of gravity method is simple, feasible, convenient, and fast, and that the model can take many factors into account compared with realistic site-selection methods that consider cost alone. This approach to locating recycling centers is applicable to all areas of the city.
    Research on Pricing and Return Strategy of Dual-channel Retailers Considering Cross-channel Returns
    BI Gongbing, CAO Qing, LYU Jiancheng
    2023, 32(7):  92-98.  DOI: 10.12005/orms.2023.0222
    Abstract ( )   PDF (1371KB) ( )  
    With the continuous expansion of online consumption, the shortcomings of online shopping have become more and more obvious. Only upon receiving the products can consumers determine whether the purchases meet their expectations. Consequently, product returns have become an inevitable part of the sales process, and return policy decisions are very important for the brand image and operations of online retailers. In the context of New Retail, to meet customers' different needs when facing returns, retailers have accelerated the integration of online and offline channels and realized cross-channel product returns through online-offline collaboration, serving customers better and more comprehensively while increasing their own profits. The selection of return channels is very important for retailers, and some companies have already begun offering cross-channel return services. As is often the case, retailers may stimulate online demand and earn profit by adopting a lenient return policy, such as bearing the return freight. However, free return shipping is not as common as a full refund: when the actual number of returns is high enough to threaten profits, paying the return freight may further erode them.
    Focusing on the above concern about product return channels, this paper considers a dual-channel retailer that allows returns, with the goal of maximizing its profit. Firstly, two pricing decision models are constructed for unified pricing and independent pricing across the online and offline channels. Then, we model the market demand and return quantity under the original-channel and cross-channel return strategies respectively, and solve for the retailer's optimal pricing and profit. In addition, the factors that affect the retailer's pricing and return strategy selection, such as consumers' channel preference and the return hassle cost, are analyzed through numerical examples. The retail scenario consists of one retailer and several consumers. The retailer sells products through both an online store and an offline physical store, and can decide whether to allow customers to return products purchased online to the offline store. Under the unified pricing model, the retailer considers the overall interests of the two channels and determines the optimal pricing strategy through cooperation. In this case, the relationship between the optimal pricing and customer channel preference is influenced by the return rate and return hassle cost of each channel, so the retailer should pay attention to both when choosing a pricing strategy. The optimal channel price should be proportional to the degree of customer preference for that channel, coordinating online and offline prices to achieve a win-win outcome for both channels and expand overall profit. Under the independent pricing model, by contrast, the channels compete fiercely, each maximizing its own profit, and pricing power is related to customer channel preference. Given the return hassle cost, when customer channel preference is pronounced, the profit under independent pricing exceeds that under unified pricing. The retailer should therefore choose its channel competition strategy based on market conditions: when the proportion of consumers preferring one channel has a significant advantage, the retailer should adopt independent pricing and encourage competition between the channels; otherwise it should adopt unified pricing. Under the independent pricing model, when customers prefer the online channel, the retailer should adopt a cross-channel return policy; when customers prefer the offline channel, it should adopt the original-channel return policy. In all cases, the retailer should choose pricing and return strategies jointly, considering customer channel preference and return hassle cost, to maximize profit.
    These results not only supplement the literature on dual-channel retailers' return strategies but also extend the theoretical model through numerical examples. They help dual-channel retailers that allow returns to set reasonable return and pricing strategies, and provide inspiration and decision support for enterprise practice under different market conditions. Further work could analyze how the buy-online-pick-up-in-store (BOPS) mode, or returns made at the pickup point, affect the retailer's optimal pricing, return strategy, and profit. In addition, customers may observe and compare the prices and quality of products across the two channels before deciding where to purchase; this is also a topic for future research.
    Research on Promotional Pricing Decisions of Platform Retailer in the Environment of Customers Add-on Items Return
    SONG Sujuan, PENG Wei, WANG Chong, ZHANG Minjie
    2023, 32(7):  99-106.  DOI: 10.12005/orms.2023.0223
    The advent of internet technology has greatly promoted e-commerce retail. To raise transaction turnover, more and more platform retailers implement a “Value Increasing” promotion strategy to stimulate online shopping. As an effective marketing tool, this strategy helps a platform retailer build a user base faster in a fiercely competitive market. At the same time, to strengthen consumers' purchasing confidence when product quality is perceived as uncertain, platform retailers often provide a money-back guarantee (MBG). Although the promotion and the MBG together increase consumers' perceived utility and purchasing confidence, and hence sales, it is precisely their joint implementation that induces speculative consumers' add-on item return behavior. On the one hand, such returns distort the platform retailer's inventory planning. On the other hand, the behavior of customers who choose add-on items deliberately also crowds out the normal online shopping of other consumers, leading to customer loss. Existing research on promotional returns focuses mainly on the impact of consumers' strategic waiting on retailer decision-making, and rarely considers this speculative behavior, a new phenomenon caused by jointly running a “Value Increasing” promotion and offering an MBG.
Therefore, to fill this gap, this paper considers both the improvement in purchasing confidence brought by the joint promotion-and-return strategy and the losses caused by speculative add-on item returns, and studies the platform retailer's promotional pricing decisions. The conclusions offer guidance for setting promotional prices and discount amounts in different situations, and a theoretical basis for adopting (or not adopting) an MBG under specific conditions.
    We analyze the “Value Increasing” promotional pricing problem in a setting with one platform retailer and two consumer types: speculative consumers and ordinary consumers. When their purchase amount falls below the full-reduction threshold, speculative consumers add an extra item to qualify for the price discount and, after payment succeeds, return that add-on item; ordinary consumers do not, making purchasing decisions based on the degree of relevance between the two products. To solve this problem, we construct single-period sales models that account for speculative add-on item returns and derive the optimal promotional pricing in two cases, where the platform retailer does and does not provide a money-back guarantee (MBG). Comparing the two models, we then examine how speculative add-on item returns affect the platform retailer's profit and derive the threshold at which the retailer should (or should not) adopt an MBG.
    Our results show that: (1) In both scenarios we obtain the profit-maximizing promotional price, and the optimal price with an MBG is always higher than the price in the no-return scenario. (2) Without returns, the optimal promotional price depends only on the relevance between the two products; with an MBG, it also varies with the probability of a product being chosen as the add-on and the proportion of speculative consumers. (3) Comparing profits in the no-return and MBG cases shows that profits under an MBG are not always higher, and we give the boundary conditions under which the platform retailer should adopt each mode. (4) Numerical analysis shows that, with an MBG, a higher relevance between the two products suppresses speculative behavior to some extent. Moreover, when speculative behavior holds a given share of the market, a no-return policy brings greater profit when the discount amount allocated to a product is relatively high and the probability of that product being treated as an add-on is relatively low. These conclusions provide a theoretical basis and reference for the platform retailer's optimal decisions.
    Analysis of Government and Enterprise Pollution Control and Optimal Environmental Policy Based on Differential Game
    XU Hao, TAN Deqing
    2023, 32(7):  107-112.  DOI: 10.12005/orms.2023.0224
    Environmental pollution originates from the excessive discharge of pollutants by industrial firms. However, because pollutants have the attributes of externalities and public goods, firms are reluctant to reduce emissions or invest in abatement, resulting in market failure. Governments have therefore adopted command-and-control or market-based environmental policies, such as emission standards, taxes, and permit trading, to govern the environment. China's current policies for managing enterprise pollution rest on command-and-control emission standards together with pilot emissions trading systems in some cities, while the official implementation of the Environmental Protection Tax Law of the People's Republic of China in 2018 marked a shift from purely administrative instruments to combinations of administrative and market-based policies. How a government should select a reasonable environmental policy for its region's specific situation is thus an urgent problem. In this context, this paper takes the government as the core perspective and examines four policy scenarios: emission standards, emission taxes, emissions trading, and a mixed policy with emission-reduction subsidies. It establishes a Stackelberg differential game between the government and enterprises in which enterprises maximize profit and the government maximizes social welfare as the region's environmental pollution capacity changes dynamically. The Nash equilibrium solutions are compared and analyzed, numerical analysis is performed, and management strategies are derived from the results.
    The study finds that when abatement costs are identical across enterprises, emission standards, emission taxes, and emissions trading are equivalent, and their environmental benefits exceed those of the mixed policy; social welfare under the mixed policy exceeds that under the other policies only when the environmental harm is below a certain value. When abatement costs differ across enterprises, emission taxes and emissions trading remain equivalent, the environmental benefits under emission standards fall below those of taxes and trading, and the environmental benefits are lowest under the mixed policy. Finally, numerical simulation traces the dynamic trajectories of pollution capacity and social welfare, providing a scientific basis for effective government pollution control.
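As a minimal illustration of the stock dynamics such differential-game models rest on (a sketch under assumed functional forms and parameter values, not the paper's model), the following simulates the pollution stock when a firm best-responds to a constant emission tax:

```python
# Illustrative sketch (assumed forms, not the paper's model): a firm chooses
# emissions e to maximize pi(e) = a*e - 0.5*b*e**2 - tau*e under an emission
# tax tau, giving e* = (a - tau)/b; the pollution stock P then follows
# dP/dt = e - delta*P, with steady state e*/delta.

def optimal_emission(a, b, tau):
    """Profit-maximizing emission level under a linear emission tax."""
    return max((a - tau) / b, 0.0)

def pollution_trajectory(tau, a=10.0, b=1.0, delta=0.2, P0=0.0, T=50.0, dt=0.1):
    """Euler simulation of the pollution stock under a constant tax tau."""
    e = optimal_emission(a, b, tau)
    P = P0
    path = [P]
    for _ in range(int(T / dt)):
        P += (e - delta * P) * dt
        path.append(P)
    return path

low_tax = pollution_trajectory(tau=2.0)    # e* = 8, steady state 8/0.2 = 40
high_tax = pollution_trajectory(tau=6.0)   # e* = 4, steady state 4/0.2 = 20
print(low_tax[-1], high_tax[-1])
```

A higher tax lowers the firm's optimal emissions and hence the long-run stock, the environmental side of the trade-off that the welfare comparison above weighs against output losses.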
    This study helps improve the government's environmental policy analysis and optimal decision-making in dynamic situations, and has reference value for regional pollution management. However, there is room for improvement. For example, this paper considers only the environmental regulation of enterprises within a single region, whereas more and more regions are affected by emissions from enterprises in neighboring regions, such as the Beijing-Tianjin-Hebei region in China. In that case, the government's regulation should account both for the harm caused by local enterprises' pollution and for the influence of neighboring enterprises' and governments' environmental policies on its decisions. In addition, many scholars argue that the government-enterprise relationship involves not only regulation but also potential collusion; analyzing environmental policy under collusion, together with central government oversight, is a focus of future research.
    Game Analysis of Enterprise Knowledge Governance Input and Knowledge Workers’ Turnover Behavior Based on ERG Theory
    JIANG Fengzhen, YANG Qian, ZHANG Jie, CHENG Hao
    2023, 32(7):  113-120.  DOI: 10.12005/orms.2023.0225
    In the era of the new knowledge economy, knowledge workers, who have received higher education, have rich experience, and rely mainly on mental labor, are the main force of enterprise innovation. However, the frequent turnover of knowledge workers continuously weakens an enterprise's core competitiveness and causes it great losses. Turnover behavior is affected by many factors, and single incentive approaches, such as optimizing leadership style or retaining staff through salary increases, have shown many shortcomings in meeting knowledge workers' needs. Knowledge governance, a diversified combination of governance measures adopted by enterprises in specific circumstances, can satisfy knowledge workers' three core needs of existence, relatedness, and growth. Therefore, our research starts from these core needs, embeds ERG theory into a turnover game model, discusses how knowledge governance addresses the roots of knowledge workers' turnover behavior, and reveals effective incentive mechanisms for retaining them. At the same time, the existing turnover game model is extended to reveal the interaction between the choice of knowledge-governance input strategy and knowledge workers' turnover decisions, providing insights for enterprises' governance of turnover behavior.
    Based on evolutionary game theory, ERG theory, and knowledge governance theory, our research constructs an evolutionary game model of enterprise knowledge-governance input and knowledge workers' turnover behavior, derives the model, and performs stability analysis to examine the strategy choices of enterprises and knowledge workers under different labor market supply and demand conditions. Meanwhile, with the help of AnyLogic 7.0.2, a system dynamics simulation of the game is built, and the sensitivity of the main parameters at the stable equilibrium point to the system's evolution is simulated and analyzed.
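A minimal analogue of such a two-population evolutionary game can be simulated with replicator dynamics. The payoff matrices below are illustrative assumptions, not the paper's calibration:

```python
# Two-population replicator dynamics: x = share of enterprises choosing high
# knowledge-governance input, y = share of knowledge workers choosing to
# stay. All payoff numbers are assumed for illustration.

# Enterprise payoffs: rows = (high input, low input), cols = (stay, leave)
A = [[5.0, -2.0],
     [3.0, -1.0]]
# Worker payoffs: rows = (stay, leave), cols = (high input, low input)
B = [[4.0, 1.0],
     [2.0, 2.0]]

def step(x, y, dt=0.01):
    """One Euler step of the replicator dynamics for both populations."""
    u_high = A[0][0] * y + A[0][1] * (1 - y)    # enterprise: high input
    u_low = A[1][0] * y + A[1][1] * (1 - y)     # enterprise: low input
    v_stay = B[0][0] * x + B[0][1] * (1 - x)    # worker: stay
    v_leave = B[1][0] * x + B[1][1] * (1 - x)   # worker: leave
    x_new = x + x * (1 - x) * (u_high - u_low) * dt
    y_new = y + y * (1 - y) * (v_stay - v_leave) * dt
    return x_new, y_new

x, y = 0.5, 0.5
for _ in range(20000):
    x, y = step(x, y)
# With these assumed payoffs the system converges toward (1, 1): high input
# by enterprises and non-turnover by workers, the ESS of this toy game.
print(round(x, 3), round(y, 3))
```

Varying the payoff entries reproduces the qualitative pattern described below: which corner is evolutionarily stable depends on the relative payoffs under each input level.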
    Our results show that when labor supply exceeds demand, the enterprise's payoff from high knowledge-governance input exceeds that from low input, and the evolutionarily stable strategy (ESS) is high input by the enterprise and non-turnover by knowledge workers. Under low input, the greater the enterprise's payoff when workers stay, the more cautiously the enterprise weighs raising its input, so the system moves toward high input relatively slowly while most workers quickly choose to stay. Under high input, the greater the enterprise's payoff when workers stay, the faster the enterprise settles on high input, and the probability that workers stay, motivated by that input, rises as well. When labor is in short supply, low input yields the greater payoff, and the ESS is low input by the enterprise and turnover by knowledge workers. Meanwhile, under either input level, the greater the return on workers' growth needs when they stay, the more slowly the probability of low input rises and the more slowly the probability of staying falls. The higher the cost of meeting workers' growth needs under high input, the faster the probability of low input tends to one and the probability of staying tends to zero.
When the enterprise's payoff from workers staying under low input is less than thirty, the probability of low input first declines and then rises to one; when it exceeds thirty, that probability rises monotonically. When the enterprise's payoff from workers staying under high input exceeds one hundred, the probability of low input first declines and then rises to one.
    Finally, in order to further investigate, future research can consider both the overall needs of knowledge workers and the different needs of diversified knowledge workers, such as the differences in the needs of research and development personnel and management personnel, so as to conduct differentiated research on the governance of turnover behavior problems for different categories of knowledge workers. In addition, we can not only explore how enterprises can reasonably meet the needs of knowledge workers through knowledge governance input, thus achieving the governance of workers’ turnover behavior problems, but also study the impact of their own attitudes on demand satisfaction from the perspective of workers’ personality traits. For example, from the perspective of positive organizational behavior, we can further explore the governance of knowledge workers’ turnover behavior problems.
    Congestion Analysis Based on the Improved Frontier of DEA
    XU Zelin
    2023, 32(7):  121-127.  DOI: 10.12005/orms.2023.0226
    Input congestion analysis provides another perspective on input-output analysis: output can be increased by reducing redundant input. Congestion is regarded as a state beyond decreasing returns to scale, in which an increase in input not only fails to raise output but actually reduces it, making optimization of the production structure urgent. The world economy has entered a stage of decreasing returns to scale, and some developed economies have even experienced input congestion: the means of economic stimulus are nearly exhausted while the economic burden grows heavier. Under this new normal, China has put forward supply-side structural reform and the goal of “carbon peaking and carbon neutrality”, intended to stimulate green growth by optimizing the production structure that drives economic development. In this context, suitable input congestion analysis methods help optimize the input-output structure of production units and thereby promote healthy economic development.
    At present, the main methods for input congestion analysis of production units are the BCSW and FGL models based on Data Envelopment Analysis (DEA). Both obtain input congestion by comparing the evaluated unit with relatively efficient units on the DEA frontier. However, this approach ignores the data sensitivity of the frontier: a minimal change in the units on the frontier can lead to a huge change in the evaluation results, so the analysis lacks robustness. Data sensitivity chiefly affects the relatively efficient units in the DEA results. A relatively efficient unit is not located at a vertex of the frontier constructed by DEA; lying on the frontier's edge, it still has room for optimization and can be represented as a linear combination of other efficient units. In practice, however, the probability that one production unit can be exactly represented by a linear combination of others is very small; there is always some gap, large or small, which makes the DEA results unstable.
    Input congestion always occurs in inefficient units, and this inefficiency comprises technical inefficiency and input congestion. When using a DEA model for congestion analysis, the two must be separated to determine their specific values. Because of the frontier's data sensitivity, there are very few relatively efficient units, which makes the separation difficult. The typical outcome is that all inefficiency is attributed to input congestion, rendering the congestion analysis results meaningless.
    To avoid such a situation, this paper proposes a heuristic method that improves the DEA frontier itself: by adjusting the frontier within the error range, it alleviates the data sensitivity problem, yields more relatively efficient units, and finds the best frontier, making the model's analysis results more reasonable. Starting from the data sensitivity of the DEA frontier, the paper uses the basic idea of the least squares method to characterize the best frontier, and then builds the heuristic on the ideas and methods of the super-efficiency DEA model to resolve the data sensitivity problem.
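The frontier and super-efficiency ideas can be illustrated in the simplest DEA setting. With one input and one output, CCR efficiency reduces to a ratio comparison, so no linear program is needed; the data here are made-up values for illustration, not the BCSW dataset:

```python
# Single-input, single-output DEA sketch. With one input and one output,
# CCR efficiency is each unit's output/input ratio divided by the best
# ratio on the frontier. Data: unit -> (input, output), all assumed.

units = {"A": (2.0, 4.0), "B": (3.0, 6.0), "C": (4.0, 5.0), "D": (5.0, 4.0)}

def ccr_efficiency(data):
    """Efficiency of each unit against the frontier formed by all units."""
    best = max(y / x for x, y in data.values())
    return {k: (y / x) / best for k, (x, y) in data.items()}

def super_efficiency(data):
    """Evaluate each unit against a frontier built WITHOUT that unit --
    the device borrowed from super-efficiency DEA models."""
    scores = {}
    for k, (x, y) in data.items():
        best_other = max(y2 / x2 for k2, (x2, y2) in data.items() if k2 != k)
        scores[k] = (y / x) / best_other
    return scores

eff = ccr_efficiency(units)
sup = super_efficiency(units)
# A and B tie for the best ratio, so both are rated efficient (score 1.0);
# perturbing either slightly would reshuffle the frontier, which is the
# data sensitivity discussed above.
print(eff)
print(sup)
```

In the full multi-input, multi-output case each score requires solving a linear program, but the sensitivity phenomenon, a frontier pinned down by a handful of units, is already visible here.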
    To verify its effectiveness, this paper uses the dataset from the study in which the BCSW model was first proposed. In the original BCSW results, data sensitivity causes all inefficiency to be attributed to input congestion, so the results carry little meaning. Applying the BCSW model on the optimal frontier determined by the proposed method yields more reasonable and explanatory results on the same data than using the BCSW model alone. This confirms both that the data sensitivity problem exists in the DEA frontier used for input congestion analysis and that solving it improves input congestion analysis methods.
    Influence of Logistics Service Integrator’s Management Strategies on Production Safety Behavior of Providers in LSSC
    MEI Qiang, GAO Lingjie, LIU Suxia
    2023, 32(7):  128-134.  DOI: 10.12005/orms.2023.0227
    Recently, growing logistics demand has led many large logistics enterprises to cooperate with small and medium-sized logistics enterprises in various ways, giving rise to logistics service supply chains composed of functional logistics service providers and a logistics service integrator. However, because the logistics market has a low entry threshold, the production safety of many small and medium-sized logistics enterprises falls short of the standard, making accidents likely. As the core of the supply chain, the integrator then suffers damage to its reputation and customer base. Therefore, as the organizer and leader of logistics activities, the integrator urgently needs effective management strategies to regulate its providers' production safety behavior. Yet theoretical and practical studies that examine the integrator's participation in providers' production safety management from a supply chain perspective are rare. In addition, the nonlinear, dynamic interactions among providers, integrators, and customers make research on providers' production safety behavior a complex challenge.
    Accordingly, this study uses computational experiments to explore how logistics service providers' production safety behaviors evolve under their integrator's management. The aim is to identify effective and economical strategies for integrators' safety management of providers and to offer new ways to help small and medium-sized logistics enterprises escape the dilemma of frequent accidents. The paper first extracts the factors that affect providers' production safety behavior under integrator management in practice, including the safety-factor weight in the integrator's order allocation strategy, the production safety evaluation standard, and the security risk deposit, and on this basis summarizes how each factor influences providers' behavior. Finally, a computational experiment model of providers' production safety behavior under integrator management is constructed. By designing experimental scenarios and setting parameters, NetLogo is used to explore the evolution of providers' production safety behavior and the integrator's revenue in different scenarios. The study draws the following conclusions:
    (1) Integrators can prompt providers to engage actively in production safety activities, and foster healthy competition among them, by increasing the safety-factor weight in the order allocation strategy.
    (2) When customers have only a limited safety preference, integrators find it difficult to keep gaining a revenue advantage by raising the safety-factor weight. They therefore also need to provide technical guidance and assistance that improve providers' output efficiency of production safety inputs.
    (3) When the production safety evaluation standard exceeds the threshold providers can accept, their production safety declines. Integrators should therefore fully consider providers' production safety capabilities and set the standard reasonably.
    (4) Collecting a security risk deposit from providers improves their production safety, but its effect is limited. Integrators may collect deposits in advance but should regard them only as an auxiliary safety-management strategy.
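A stripped-down sketch of the order-allocation channel behind conclusion (1): the integrator scores each provider as a weighted mix of service capacity and production-safety level and splits orders in proportion to the score. Provider attributes and the scoring rule are assumptions for illustration, not the paper's computational-experiment parameters:

```python
# Order allocation by weighted score. All numbers are assumed.

providers = {
    "P1": {"capacity": 0.9, "safety": 0.3},   # fast but unsafe
    "P2": {"capacity": 0.6, "safety": 0.8},   # slower but safer
}

def allocate(total_orders, w_safety):
    """Split orders in proportion to (1 - w)*capacity + w*safety."""
    scores = {name: (1 - w_safety) * p["capacity"] + w_safety * p["safety"]
              for name, p in providers.items()}
    total_score = sum(scores.values())
    return {name: total_orders * s / total_score
            for name, s in scores.items()}

low_w = allocate(100, w_safety=0.1)
high_w = allocate(100, w_safety=0.6)
# Raising the safety-factor weight shifts orders toward the safer provider,
# which is the incentive mechanism the conclusions above describe.
print(low_w)
print(high_w)
```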
    Application Research
    Head-driven Strategy for High-quality Content Generation on UGC Platforms Based on Platform Subsidy
    XU Peilei, PENG Zhengyin
    2023, 32(7):  135-141.  DOI: 10.12005/orms.2023.0228
    Relying on mobile internet technology, user-generated content (UGC) platforms have developed rapidly in recent years. A UGC platform does not produce media content itself; it provides users with the means to generate, collaborate on, distribute, customize, and develop content, and its bilateral users are its core resource. Supported by information technologies such as big data algorithms and artificial intelligence, user participation on UGC platforms has grown explosively: by December 2020, the number of short video users in China had reached 873 million, a usage rate of 88.3%. With the participation of a large number of waist-tail (mid-tier and long-tail) users, problems of homogenized, vulgar, and low-quality content have also gradually emerged.
    UGC platforms have actively fulfilled their “gatekeeper” responsibility, governing content through examination and control of platform information. Since 2018, they have kept expanding their already large audit teams and have actively promoted technical auditing through machine algorithms and artificial intelligence. However, with content volume growing explosively and content forms becoming richer, high labor costs and the value judgments that machine techniques cannot make render passive auditing increasingly difficult; actively stimulating quality content has become a more effective path of content governance. How to exploit the importance of head users' high-quality original content for the platform's content orientation, value establishment, and user guidance, while balancing the extra subsidy cost and traffic loss the platform incurs to achieve this incentive effect, has become an important direction for resolving the UGC platform's quality content cycle dilemma.
    This paper studies the quality content generation system composed of head users and waist-tail users on the content generation side of a UGC platform, and constructs three differential game models: head-user-driven decision-making under platform subsidy, collaborative decision-making between head users and waist-tail users, and decentralized decision-making without a driving effect. To promote ecological optimization of platform content and a Pareto improvement of the platform's overall revenue, the study selects the head users as the game's leading party; under the incentive of the platform's subsidy policy, they adopt a driving strategy and form a Stackelberg differential game with the waist-tail users.
    We solve the objective functions of the three modes: each party's own revenue maximization under decentralized decision-making, overall revenue maximization under collaborative decision-making, and the two-stage objective under driven decision-making, obtaining the optimal strategy, revenue, and quality content level in each mode. On this basis, the three modes are compared in terms of decision behavior, individual revenue, and overall revenue, and theoretical conclusions are drawn. These conclusions are verified by numerical simulation; the example data come from professional scores given by 65 experts, scholars, and senior users in related fields of industry and academia, validated and processed through an orthogonal experiment. The conclusions are: (1) Under the head-driven effect, the platform's high-quality content generation system achieves a significant Pareto improvement. (2) The driven decision mode creates a virtuous circle that strengthens the head users' driving force. (3) The platform's subsidy coefficient is positively correlated with the improvement of the quality content generation system, and its selection interval is controllable. These conclusions provide a theoretical basis and practical guidance for UGC platforms to govern content effectively and promote a healthy user-generated content ecology.
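The leader-follower logic of the driven mode can be illustrated with a static Stackelberg toy model solved by backward induction. The payoff forms, coefficients, and subsidy share below are assumptions; the paper's models are differential games over time:

```python
# Static Stackelberg sketch: the head user (leader) commits to effort e_h,
# waist-tail users (followers) best-respond, and the platform reimburses a
# share of the leader's effort cost. All coefficients are assumed.

def follower_best_response(e_h, beta=0.5, c_w=1.0):
    """Follower maximizes beta*e_h*e_w - 0.5*c_w*e_w**2 -> e_w = beta*e_h/c_w."""
    return beta * e_h / c_w

def leader_payoff(e_h, alpha=2.0, gamma=1.0, c_h=1.0, subsidy=0.3):
    """Leader's return + spillover from followers - subsidized effort cost."""
    e_w = follower_best_response(e_h)
    return alpha * e_h + gamma * e_w - 0.5 * (1 - subsidy) * c_h * e_h ** 2

# Backward induction: substitute the follower's response into the leader's
# payoff, then search the leader's effort on a grid. The analytic optimum is
# (alpha + gamma*beta) / ((1 - subsidy)*c_h) = 2.5/0.7 here.
grid = [i / 100 for i in range(0, 1001)]
e_star = max(grid, key=leader_payoff)
print(e_star, follower_best_response(e_star))
```

Raising the subsidy share lowers the leader's effective cost and so raises both the leader's effort and, through the best response, the followers' effort, the qualitative channel behind conclusion (3).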
    Accuracy Assessment of COVID-19 Pandemic Regional Risk Classification Based on Bayesian Network
    CAI Mei, CAO Jie
    2023, 32(7):  142-148.  DOI: 10.12005/orms.2023.0229
    After the COVID-19 pandemic broke out in China, the country's prevention and control policy consistently adhered to scientific measures and precise control, which maximized the protection of people's lives and health while minimizing the epidemic's impact on economic and social development. The measures of regional classified control achieved good results, but the accuracy of the classification judgments remains open to question. To adjust and optimize prevention and control measures as time and circumstances change, research on the accuracy of COVID-19 regional risk-level classification is of great significance.
    A Bayesian network decision-making model based on imprecise probability is proposed to evaluate the accuracy of regional risk classification during the epidemic. First, imprecise probabilities serve as the network's input parameters; that is, key epidemic events are described by lower and upper probabilities. Then, by analyzing the regional risk-level classification policy for COVID-19, we construct a Bayesian network representing the classification process and give a mathematical description of the classification rules. Based on the evidence collected during prevention and control, the imprecise probabilities of the network nodes are extracted and basic probability assignment functions over three states (true, false, and uncertain) are provided, determining the nodes and their ranges. Finally, an extended Dempster-Shafer (D-S) fusion technique propagates the nodes' uncertainty and derives joint probabilities from the imprecise probabilities, yielding an accuracy estimate of the risk classification.
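For readers unfamiliar with D-S fusion, the classical combination rule at its core can be sketched as follows. The mass assignments are illustrative, and the paper's extended rule for imprecise probabilities is more involved:

```python
# Classical Dempster-Shafer combination over the frame {T, F}, with mass on
# "true", "false", and the whole frame (uncertainty). Masses are assumed.

FRAME = frozenset({"T", "F"})
T, F = frozenset({"T"}), frozenset({"F"})

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize by 1 - conflict."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m1 = {T: 0.6, F: 0.1, FRAME: 0.3}   # evidence source 1
m2 = {T: 0.5, F: 0.2, FRAME: 0.3}   # evidence source 2
fused = combine(m1, m2)
# Agreement on "true" is reinforced while residual uncertainty shrinks.
print({"".join(sorted(k)): round(v, 3) for k, v in fused.items()})
```

The extensions the paper builds on address exactly the cases this classical rule handles poorly, such as high conflict between sources and masses derived from interval-valued (imprecise) probabilities.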
    Following Prevention and Control of COVID-19 (Fifth Edition), we applied relevant data published by the media and clinical data from Central South Hospital of Wuhan University to our Bayesian network decision-making model. The results show that classification accuracy is higher for medium-risk regions and lower for low- and high-risk ones. This suggests the following: (1) The uncertainty of assessments in low-risk areas matters greatly. The vast majority of regions belong to this level, so with such a large base the analysis must be especially precise; otherwise small errors cause large fluctuations. (2) The uncertainty of assessments in high-risk areas is the greatest. Although few areas are designated high-risk, strict controls there significantly burden production and daily life in and around the area, and over-control can undermine the phased achievements of epidemic control. (3) More evidence is needed to improve classification accuracy. On the one hand, accuracy can be improved by refining the evaluation criteria; on the other, adding evidence can make the system's rules more complete, especially for judging low and high risks, for example by confirming cases through multiple nucleic acid tests. As prevention and control become normalized, precise control should also attend to the economic and social order of low- and high-risk areas, meeting the needs of scientific, professional, intelligent, and refined emergency management.
    Compared with the fuzzy Bayesian network method and risk assessment methods based on reliability theory, the proposed method uses extended D-S evidence fusion technology to reduce the amount of calculation, and makes full use of the available information to address the knowledge uncertainty caused by data scarcity, discontinuity, incompleteness and prior ignorance. At the same time, the proposed method can handle information fusion when there are significant conflicts among the evidence, thereby improving the accuracy of decision-making. Our research approaches the emergency management of public health events from another perspective, providing theoretical support for scientific decision-making and precise implementation. However, the regional risk classification of COVID-19 is a dynamic process: as the anti-epidemic situation changes and prevention and control measures are adjusted and optimized, future work could study dynamic Bayesian networks or closed-loop dynamic Bayesian network topologies.
    Impact of Major Disasters and Asset Sell-offs on Insurance Systemic Risk
    XING Tiancai, SONG Xiaotong, LI Xiaoyi
    2023, 32(7):  149-155.  DOI: 10.12005/orms.2023.0230
    In recent years, with the outbreak of the COVID-19 epidemic and frequent natural disasters, the insurance industry has constantly faced new challenges, and the systemic risk of insurance has become a focus of attention from all walks of life. Faced with the realistic characteristics of China's natural disasters, such as wide distribution, high frequency, and huge losses, the “14th Five-Year Plan” clearly emphasizes the importance and necessity of enhancing the security of economic and social development, which is one of the important goals of social development. As an important tool to disperse major disaster risks, insurance is an indispensable means of enhancing the risk resilience of economic and social development. Ensuring that insurance companies remain solvent under the impact of major disasters, and that the insurance mechanism continues to play its buffer role under natural disaster losses, is of great significance for the stable development of China's economy and society.
    This paper takes external shocks to the insurance industry as its starting point and establishes an insurance systemic risk evolution model based on the dual shocks of major disasters and asset sell-offs. To ensure that the simulation results reflect the development status of China's insurance industry, the information disclosure report data of property insurance companies and reinsurance companies in the Chinese insurance market from 2009 to 2020 are used as the basic data, and the RAS algorithm based on the minimum cross-entropy principle, together with the insurance companies' reinsurance preference matrix, is used to construct the reinsurance business transfer network and measure the compensation losses of the insurance business under the impact of different major disasters. Considering the intervention of the insurance protection fund, we further study the impact of major disasters and sell-off behaviors on changes in insurance systemic risk, and analyze the response effects of pre-event and post-event measures.
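    The RAS balancing step can be sketched as iterative proportional fitting, which is known to produce the margin-consistent matrix closest to the prior in the cross-entropy (KL divergence) sense, matching the minimum cross-entropy principle mentioned above. The prior matrix and ceded/assumed totals below are illustrative, not the paper's data:

```python
def ras_balance(x0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """RAS (iterative proportional fitting): alternately rescale rows and
    columns of a non-negative prior matrix until its row and column sums
    match the given targets."""
    x = [row[:] for row in x0]
    n = len(x)
    for _ in range(max_iter):
        for i, target in enumerate(row_targets):      # row scaling
            s = sum(x[i])
            if s > 0:
                x[i] = [v * target / s for v in x[i]]
        for j, target in enumerate(col_targets):      # column scaling
            s = sum(x[i][j] for i in range(n))
            if s > 0:
                for i in range(n):
                    x[i][j] *= target / s
        err = max(abs(sum(x[i]) - row_targets[i]) for i in range(n))
        if err < tol:
            break
    return x

# Prior: zero diagonal (no self-reinsurance), uniform off-diagonal
prior = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
ceded = [10.0, 20.0, 30.0]    # reinsurance ceded by each company (row sums)
assumed = [30.0, 20.0, 10.0]  # reinsurance assumed by each company (column sums)
net = ras_balance(prior, ceded, assumed)
```

The balanced matrix `net` is one way to estimate the bilateral reinsurance transfer network when only each company's ceded and assumed totals are observable.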
    The results show that, within the direct network of insurance companies, larger insurance companies and reinsurance companies are systemically important, ranking high in the industry in both degree (“entry and exit” connections) and betweenness centrality. At the same time, systemically important reinsurance and direct insurance companies perform better in the simulated risk contagion, and their risk of bankruptcy is relatively small, but once bankruptcy occurs it has a huge impact on insurance systemic risk. Under the double impact of reinsurance business losses and asset sell-offs, large numbers of insurance company bankruptcies give rise to systemic risk, which shows that risk contagion caused by external shocks can trigger a systemic crisis in the insurance industry. Among them, small insurance companies are more vulnerable to systemic shocks, while the bankruptcy of large insurance companies causes greater systemic losses. Asset sell-off behavior increases the exposure of insurance companies to systemic risk. By comparing coping strategies under the impact of major disasters, it is found that direct government capital injection and assistance after the event is the most effective, producing synergistic effects with the insurance protection fund and resolving systemic insurance risk. Among the ex-ante countermeasures, the effect of business structure adjustment is more prominent, while reinsurance business adjustment and investment structure adjustment are less effective and can hardly play a strong role when implemented alone. Multiple measures should be taken, making full use of the regulatory system, to prevent insurance systemic risk.
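    Betweenness centrality, one of the systemic-importance measures used above, can be computed with Brandes' algorithm for unweighted graphs; the three-node chain below is a toy cession network for illustration, not the paper's data:

```python
from collections import deque

def betweenness(adj):
    """Brandes' betweenness centrality for an unweighted directed graph
    given as {node: [successors]} (unnormalized pair counts)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate path dependencies from the leaves inward
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Toy directed network: company a cedes to b, b cedes to c
toy = {"a": ["b"], "b": ["c"], "c": []}
bc = betweenness(toy)   # b lies on the only shortest path from a to c
```

Nodes with high betweenness act as intermediaries for many shortest paths, which is why large reinsurers tend to score highly on this measure in the cession network.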
    On the whole, in order to guard against the impact of major disasters on insurance systemic risk, the regulatory requirements for minimum catastrophe risk capital should be raised in the “Second Generation” regulatory system, and insurance companies' ability to pay for major disaster losses should be improved, reducing the possibility of insurance systemic risk. At the same time, vigorously developing insurance protection funds and increasing the stock of fund reserves can directly reduce the level of insurance systemic risk. Further, a catastrophe insurance system should be established quickly, and supporting measures for catastrophe insurance, such as the popularization of catastrophe bonds and standardized processes for government assistance, should be improved to reduce the impact of major disasters on insurance companies. Strong external intervention should be used to prevent the outbreak of insurance companies' internal risks, so as to gradually achieve a high coverage ratio of major disaster loss compensation and protect the property safety of enterprises and residents.
    Investor Sentiment, Stock Liquidity, and Stock Price Bubbles: An Empirical Study Based on the GSADF Test
    GAO Yang, ZHAO Kun, WANG Yaojun
    2023, 32(7):  156-161.  DOI: 10.12005/orms.2023.0231
    Stock price bubbles seriously affect the healthy operation of capital markets. Recently, with the development of behavioral finance and other financial theories, many studies have explored the causes of stock price bubbles from the perspective of investor behavior and market microstructure. Behavioral finance theory holds that investor sentiment plays an essential role in the pricing of financial assets. Investors tend to be influenced by emotions and change their investment behavior, which to a certain extent drives the speed and extent of stock price fluctuations and thus generates stock market bubbles. Moreover, market microstructure theory holds that stock liquidity, as a critical measure of market quality, directly affects the speed and cost of market transactions, eventually acting on stock price fluctuations and leading to stock price bubbles. Therefore, investigating the relationship between investor sentiment, stock liquidity, and stock price bubbles can help maintain the stable development of China's financial markets.
    To explore the mechanisms through which investor sentiment and liquidity affect stock price bubbles, we select nine Shanghai Stock Exchange (SSE) industry indices: Energy, Materials, Industrials, Consumer Discretionary, Consumer Staples, Pharmaceutical, Information Technology, Telecom, and Utilities. Based on dynamic factor analysis, this study then constructs investor sentiment indicators by combining online social media data, such as the Baidu index, with traditional sentiment proxy variables, such as the turnover rate. Subsequently, bubbles in different industries are detected with the Generalized Sup-ADF (GSADF) test. The GSADF test results show that bubbles are detected in all nine industry indices, and the null hypothesis of a unit root is rejected at the 1% significance level except for the Energy industry. Specifically, the highest number of bubbles is four, in the Consumer Staples sector; the second highest is three, in the Industrials industry; and two bubbles exist in each of the Pharmaceutical, Information Technology, and Utilities sectors. In addition, the first bubble mostly lasted from late 2014 to mid-2015, with an average duration of 5 months, consistent with the stock market crash of June 2015. The second or third bubbles lasted for relatively short periods, and their timing varied widely across industries. Furthermore, the relationship between investor sentiment, liquidity, and stock price bubbles is analyzed using a panel Logit model and mediating effect tests. Since liquidity refers to the ability of a large transaction to be executed quickly with a weak impact on the market price, this paper uses two widely used liquidity measures, the Amihud illiquidity ratio and the quoted spread (QS), and presents regression results based on each. This paper also conducts robustness checks by replacing the investor sentiment and liquidity indicators, to rule out bias due to the choice of measures.
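    The GSADF statistic is the supremum of right-tailed ADF statistics over all sample windows above a minimum width. A minimal pure-Python sketch, with no lag augmentation and illustrative simulated series; in practice the statistic is compared against Monte Carlo critical values, which are omitted here:

```python
import math
import random

def adf_tstat(y):
    """t-statistic of rho in the regression dy_t = a + rho * y_{t-1} + e_t
    (no lag augmentation, for illustration only)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    x = y[:-1]
    n = len(dy)
    mx, md = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    rho = sum((xi - mx) * (di - md) for xi, di in zip(x, dy)) / sxx
    a = md - rho * mx
    s2 = sum((di - a - rho * xi) ** 2 for xi, di in zip(x, dy)) / (n - 2)
    return rho / math.sqrt(s2 / sxx)

def gsadf(y, min_window):
    """Supremum of ADF statistics over all windows y[r1:r2] of width >= min_window."""
    return max(adf_tstat(y[r1:r2])
               for r2 in range(min_window, len(y) + 1)
               for r1 in range(0, r2 - min_window + 1))

random.seed(0)
rw = [0.0]       # random walk: no bubble under the null
bubble = [1.0]   # mildly explosive series: bubble episode
for _ in range(79):
    rw.append(rw[-1] + random.gauss(0.0, 1.0))
    bubble.append(1.05 * bubble[-1] + random.gauss(0.0, 0.1))
```

An explosive series yields a far larger supremum than a random walk, which is the basis for rejecting the unit-root null in favor of explosive (bubble) behavior.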
    The results of both the empirical analysis and the robustness tests indicate that investor sentiment and liquidity have significant positive effects on the existence of stock price bubbles, and that investor sentiment can further increase the probability of bubble occurrence by promoting market liquidity. Specifically, investor sentiment and both the Amihud and QS liquidity measures positively impact stock price bubbles at the 5% significance level. When investor sentiment is high, investors' herding behavior keeps pushing up the stock price, causing it to deviate from its intrinsic value. The elevated sentiment then accelerates the transmission of good news, enhances market liquidity, and increases market depth, leading to a rapid pull-up of the stock price and the creation of a bubble. Conversely, when investor sentiment is negative, conservative investment strategies lead to declining stock prices and reduced market depth. Market liquidity is then weakened as transaction costs rise, resulting in a continuous decline in stock prices and the bursting of bubbles.
    Based on the empirical results, this study has important implications for preventing stock price bubble risk and provides a theoretical basis for regulators to strengthen the monitoring of market sentiment. In future research, various deep-learning methods could be applied to predict stock price bubbles based on investor sentiment and liquidity proxies. This study has been supported by the National Natural Science Foundation of China under Grant 72171005.
    Pricing of SSE 50ETF Options Based on Stochastic Volatility Models with Regime Switching Features
    LI Kunhao, QIN Xuezhi
    2023, 32(7):  162-169.  DOI: 10.12005/orms.2023.0232
    Accurate pricing is one of the prerequisites for options to function well in financial markets. Stochastic volatility models are widely used in option pricing because they can generate volatility smiles as well as term structures. However, the shape of the volatility curves generated by one-factor stochastic volatility models is only weakly correlated with the actual level of fluctuation, and cannot accurately reflect the regime-switching characteristics of the volatility process. Adding state variables that describe regime-switching characteristics of the volatility process to stochastic volatility models can hopefully better describe the shapes of volatility surfaces and their dynamics, thus improving pricing accuracy. The application of stochastic volatility models with regime-switching features to European option pricing has received growing attention in the past few years. However, in terms of model construction, existing research typically restricts the relationship between volatility processes and the switching regime, without adapting models to market features or further discussing their performance in specific option markets. In terms of pricing methods, when only the long-term mean is assumed to depend on the switching regime, a closed-form solution for European option pricing can be obtained; for other stochastic volatility models with regime-switching features, closed-form solutions are difficult to obtain. Existing research generally uses perturbation analysis or numerical methods to obtain option prices, failing to balance pricing accuracy and computational efficiency. As for the pricing of SSE 50ETF options, the models and methods in existing research improve pricing accuracy but have not taken the regime-switching features of volatility into account. However, according to iVIX data, the volatility of the SSE 50ETF has regime-switching features. Therefore, we aim to construct a stochastic volatility model with regime-switching features to better reflect the volatility characteristics of the SSE 50ETF, and further study option pricing.
    This paper constructs a series of stochastic volatility models with regime-switching features on the basis of the Heston model, describing the regime of the variance process with a continuous-time Markov chain and allowing all parameters of the Heston variance process to be arbitrary functions of the Markovian regime. By specifying the concrete relationship between parameters and switching regimes, we obtain models fitting different market features. Furthermore, we construct a semi-analytical pricing method for European options under the above models: applying the properties of affine models, we obtain the analytical formula of the conditional characteristic function of the log-price distribution through Fourier transform under each path of the switching regime, and option prices are then computed through Monte Carlo simulation of the regime paths. For parameter calibration, we construct an algorithm for maximum likelihood estimation of the model parameters based on particle filtering, combining stratified sampling and sequential importance resampling, using iVIX and SSE 50ETF price data.
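    A regime path and the corresponding variance and price dynamics can be simulated with a simple Euler scheme. This is only an illustrative sketch: the two-state switching is approximated per time step, all parameter values are invented for illustration, and the paper's semi-analytical step (Fourier inversion of the characteristic function conditional on each regime path) is not reproduced:

```python
import math
import random

def simulate_rs_heston(T=1.0, n=252, q=(2.0, 1.0),
                       kappa=(3.0, 1.0), theta=(0.02, 0.08), sigma=(0.3, 0.6),
                       mu=0.05, v0=0.04, s0=1.0):
    """Euler scheme (full truncation) for a Heston-type variance process whose
    parameters (kappa, theta, sigma) depend on a 2-state Markov regime.
    q[i] is the switching intensity out of regime i."""
    dt = T / n
    s, v, regime = s0, v0, 0
    prices = [s]
    for _ in range(n):
        # Regime switch with probability approximately q[regime] * dt
        if random.random() < q[regime] * dt:
            regime = 1 - regime
        vp = max(v, 0.0)  # full truncation keeps the variance usable
        dw1 = random.gauss(0.0, math.sqrt(dt))
        dw2 = random.gauss(0.0, math.sqrt(dt))
        s *= math.exp((mu - 0.5 * vp) * dt + math.sqrt(vp) * dw1)
        v += kappa[regime] * (theta[regime] - vp) * dt \
             + sigma[regime] * math.sqrt(vp) * dw2
        prices.append(s)
    return prices

random.seed(1)
path = simulate_rs_heston()
```

In the high-volatility regime (larger theta and sigma here) the simulated variance drifts toward a higher long-term mean, which is the mechanism by which the regime state shapes the volatility surface.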
    By analyzing the characteristics of SSE 50ETF prices and volatility, we find significant differences in fluctuation levels across time periods. Therefore, stochastic volatility models with regime-switching features can better describe the dynamics of the logarithmic price and variance of the SSE 50ETF. Furthermore, based on market features, we select various forms of the model constructed above to describe different regime-switching features, and conduct empirical research on their performance in the pricing of SSE 50ETF options. The results show that models in which the long-term mean and the volatility of instantaneous variance depend on the volatility state better describe the volatility characteristics of SSE 50ETF prices and significantly improve the accuracy and robustness of option pricing. In particular, it is necessary to consider the regime-switching features of volatility when the fluctuation level changes violently, especially when it rapidly increases. Among these models, the RS-SV-123 model, whose mean-reversion speed also depends on the switching regime, is prominent in terms of pricing accuracy and robustness during periods of intense fluctuation, while the RS-SV-23 model, whose mean-reversion speed does not depend on the switching regime, is more prominent in terms of pricing robustness over the whole period.
    Heterogeneity of Functional Background of Top Management Team and Corporate Tax Avoidance Behavior: Empirical Evidence from Chinese Capital Markets
    YANG Shuili, TANG Wei, FU Qiang, LIU Yanghui
    2023, 32(7):  170-176.  DOI: 10.12005/orms.2023.0233
    The senior management team plays a crucial role in driving strategic decisions, increasing enterprise value, and contributing to economic and social development. In the current era of global economic change, with slow expansion of external demand and slowing domestic economic growth, it is important to leverage the strengths of senior management teams to promote high-quality enterprise development, a topic that has gained attention in top-level design, academic research, and social practice. Executives' personal characteristics, experience, ability, and other background factors influence their cognitive preferences and problem-solving methods, ultimately affecting the strategic decision-making of the company. This paper aims to clarify the key elements, management mechanisms, and economic consequences related to the background characteristics of senior management teams. It provides new empirical evidence on the governance effects of the functional background characteristics of executive teams and presents a systematic analysis of the impact of functional background heterogeneity on corporate tax avoidance, thereby contributing to research on the economic consequences of executive team characteristics and enriching the literature on the factors that influence corporate tax avoidance.
    This paper conducts an empirical study on the governance effect of functional background heterogeneity of top management teams on corporate tax avoidance using A-share listed companies in Shanghai and Shenzhen from 2008 to 2020. The study employs multiple regression analysis to verify the impact of functional background heterogeneity of top management teams on corporate tax avoidance and explores the differences in this impact under different property rights. Additionally, the study uses a mediating effect test model to examine the mechanism linking the functional background heterogeneity of top management teams to corporate tax avoidance. To address potential multicollinearity in the regressions, this paper uses Blau coefficient-based standardization to construct a heterogeneity index of the functional background of senior management teams.
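    The Blau-based heterogeneity index can be sketched as follows; the standardization divides by the maximum value (K-1)/K attainable with K functional categories, and the team composition below is illustrative:

```python
def blau_index(counts):
    """Blau heterogeneity index 1 - sum(p_i^2) from category counts."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def standardized_blau(counts):
    """Normalize by the maximum (K - 1) / K attainable with K categories,
    making the index comparable across teams with different category counts."""
    k = len(counts)
    return blau_index(counts) / ((k - 1) / k)

# Ten executives spread over five functional backgrounds (illustrative)
team = [4, 3, 1, 1, 1]
h = blau_index(team)   # 1 - (0.16 + 0.09 + 3 * 0.01) = 0.72
```

A fully homogeneous team scores 0, and a team spread evenly over all categories reaches the maximum, so higher values indicate greater functional background heterogeneity.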
    The results indicate that a diverse functional background among top management effectively reduces tax avoidance behavior, particularly in non-state-owned enterprises. Mechanism testing reveals that this heterogeneity primarily alleviates financing constraints as a means of inhibiting corporate tax avoidance. According to further research, the functional background of a top management team has varying effects on corporate tax avoidance, with the production function having the greatest impact, followed by the peripheral function and then the output function. This effect is more pronounced in regions with poor external institutional environments. These findings not only provide empirical evidence for the impact of corporate governance in relation to the heterogeneity of executive team functional backgrounds, but also have important implications for improving the corporate governance environment and reducing the motivation for corporate tax avoidance. Future studies could explore the influence of senior management team’s overseas and financial backgrounds on corporate tax avoidance.
    Optimal Reinsurance-investment Strategy with the Loss-dependent Premium Principle
    ZHANG Xuanzhen, GU Ailing, DENG Baijun
    2023, 32(7):  177-183.  DOI: 10.12005/orms.2023.0234
    Insurers are an important part of the financial market. They not only play a special role in providing security for individuals and enterprises, but also improve the stability and liquidity of the whole financial market and promote the normal operation of the economy and society. Like other financial institutions, insurers operate for profit. On the one hand, to avoid the risk caused by excessive or concentrated claims, an insurer can buy reinsurance to transfer some of its risks; on the other hand, insurers often invest part of their surplus in financial markets to increase profits. Therefore, how to choose the optimal reinsurance and investment strategy to diversify risks and increase returns is a practical problem faced by insurers. In recent years, insurers' reinsurance-investment strategies have also become a hot topic in financial mathematics and actuarial research. This issue is therefore of great significance in both practice and theory.
    In this paper, we study the optimal reinsurance-investment problem for an insurer under the loss-dependent premium principle. Unlike the existing literature, the premium principle here can be dynamically updated based on past losses and an estimate of future losses, which is an extension of the traditional expected-value premium principle. This premium principle is exponentially weighted and has a memory feature, incorporating not only recent losses but all past losses. We assume that the insurer's surplus process follows the diffusion approximation of the C-L (Cramér-Lundberg) model. The insurer can purchase proportional reinsurance or acquire new business to hedge risks or increase profits. It is assumed that the financial and insurance markets are independent of each other. The financial market is assumed to consist of a risk-free asset and a risky asset, where the price process of the risky asset is described by an affine square-root stochastic model. The affine square-root model is a general stochastic volatility model in which the volatility is itself stochastic. In particular, when the model parameters of the risky asset take special values, it degenerates into the CEV (Constant Elasticity of Variance) model, Heston's model, or the GBM (Geometric Brownian Motion) model.
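    Under the classical expected-value premium principle (the special case that the loss-dependent principle extends), the diffusion approximation of the C-L surplus with proportional reinsurance retention q has drift lam*mu1*(theta - eta*(1 - q)) and volatility q*sqrt(lam*mu2), where theta and eta are the insurer's and reinsurer's safety loadings and mu1, mu2 are the first two claim moments. An Euler path sketch with illustrative parameter values:

```python
import math
import random

def simulate_surplus(q, T=1.0, n=1000, lam=10.0, mu1=1.0, mu2=2.0,
                     theta=0.3, eta=0.4, r0=5.0):
    """Euler path of the diffusion-approximated C-L surplus with proportional
    reinsurance retention q in [0, 1] under expected-value premiums."""
    dt = T / n
    drift = lam * mu1 * (theta - eta * (1.0 - q))  # net premium income rate
    vol = q * math.sqrt(lam * mu2)                 # retained claim volatility
    r = r0
    path = [r]
    for _ in range(n):
        r += drift * dt + vol * random.gauss(0.0, math.sqrt(dt))
        path.append(r)
    return path

random.seed(2)
path = simulate_surplus(q=0.5)          # stochastic path with 50% retention
full_cession = simulate_surplus(q=0.0)  # q = 0: volatility vanishes, pure drift
```

Raising q increases both the drift and the volatility of the surplus, which is exactly the risk-return trade-off the optimal reinsurance strategy balances; the loss-dependent principle in the paper additionally makes the premium rate react to the realized loss history.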
    With the goal of maximizing the expected utility of the insurer's terminal wealth under a CARA utility function, we derive explicit expressions for the optimal reinsurance-investment strategy and the value function using dynamic programming and stochastic control methods. When the model parameters take particular values, we obtain explicit expressions for the optimal investment strategies under the CEV, Heston's and GBM models. Finally, numerical examples are given to analyze the influence of model parameters on the optimal reinsurance-investment strategies. Through the analysis of the optimal reinsurance strategy, we find that two important parameters in the loss-dependent premium principle, the inferred intensity (β) and the average value of loss weights (s), have a significant impact on the optimal reinsurance strategy; see Section 3.1 for the detailed analysis. In the numerical examples of optimal investment strategies, we analyze the impact of the model parameters on the optimal investment strategy under both the CEV model and Heston's model.
    This article addresses the problem of optimal reinsurance and investment, but there is room for further work. Future research could consider model ambiguity and the correlation between insurance and financial markets, or study similar problems under the mean-variance criterion.
    Excess Goodwill and Inefficient Investment: An Analysis Based on Risk Taking and Financing Constraints
    LI Bingxiang, SUN Yue, ZHANG Taotao, TAO Rui
    2023, 32(7):  184-189.  DOI: 10.12005/orms.2023.0235
    Mergers and acquisitions (M&A) are an efficient and innovative way for China's listed companies to achieve industrial innovation, enhance growth, and explore new sources of profit growth, and have become popular among enterprises. However, irrational M&A can leave companies facing high goodwill, which not only makes it difficult to meet excess profit expectations but also poses tricky challenges for acquirers. Goodwill consists of a reasonable portion and an overestimated portion; the overestimated portion corresponds to the excess goodwill in this paper. Due to information asymmetry, principal-agent problems, and irrational factors in M&A transactions, acquirers may overestimate post-merger synergies and thus pay high premiums, which gives rise to excess goodwill. In terms of inefficient investment, scarce investment resources crowded out by excess goodwill can force the passive abandonment of high-quality investment projects and reduce investment efficiency. At the same time, excess goodwill puts management under pressure from high expected profits and threats to their jobs, and agency problems motivate them to invest inefficiently to meet excess profit expectations and pursue self-interest. Therefore, the relationship between excess goodwill and inefficient investment in Chinese A-share listed companies deserves further exploration, as does the question of how to avoid the negative economic consequences of excess goodwill.
    To explore the impact of excess goodwill on inefficient investment, this paper empirically tests the relationship from the perspective of risk-taking and financing constraints, based on data on China's A-share listed companies from 2007 to 2018. Specifically, this paper uses multivariate regression analysis to verify the correlation between excess goodwill and inefficient investment, and then tests the mechanism between them using a mediating effect test model. Since excess goodwill is defined as the difference between the actual goodwill and the reasonably expected goodwill of the acquiring firm, a residual measurement model best reflects this definition, so the regression residuals of a goodwill expectation model are used as the excess goodwill variable. In addition, Richardson's expected investment model is used to calculate the expected investment levels of the sample firms, and the residuals of this model are used to measure inefficient investment.
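    Both measures share the same residual logic: regress the observed quantity on its expected determinants and treat the residual as the "excess" or "inefficient" part. A one-regressor sketch (the paper's Richardson-style models use multiple firm-level regressors; the data below are illustrative):

```python
def ols_residuals(y, x):
    """Residuals from regressing y on a constant and a single predictor x.
    In the residual-measurement framework, the residual is the part of y
    not explained by its expected determinants."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    alpha = my - beta * mx
    return [yi - alpha - beta * xi for xi, yi in zip(x, y)]

# Toy data: investment (y) explained by growth opportunities (x)
growth = [0.10, 0.20, 0.30, 0.40, 0.50]
invest = [0.05, 0.09, 0.16, 0.17, 0.28]
resid = ols_residuals(invest, growth)
over = [max(r, 0.0) for r in resid]    # positive residual: over-investment
under = [max(-r, 0.0) for r in resid]  # negative residual: under-investment
```

Splitting the residual by sign is what lets the paper analyze over-investment and under-investment through their different channels (risk-taking versus financing constraints).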
    The results show that excess goodwill has a significantly positive effect on inefficient investment, both over-investment and under-investment. Risk-taking mediates the relationship between excess goodwill and over-investment: excess goodwill enhances a company's risk-taking, thereby exacerbating over-investment. Financing constraints mediate the relationship between excess goodwill and under-investment: excess goodwill exacerbates corporate financing constraints and leads to under-investment. Corporate social responsibility disclosure can weaken the positive correlation between excess goodwill and over-investment. The paper enriches and expands research on the economic consequences, transmission paths and governance measures of excess goodwill, and provides targeted guidance for enterprises to optimize investment efficiency.
    As far as the M&A process is concerned, the expected synergies of the M&A should be reasonably assessed and the M&A premium should be reasonably paid. After M&A, enterprises should actively integrate with the acquired parties to achieve optimal resource allocation and ensure that the synergies from M&A are truly translated into value creation from M&A. In future research, it would be worthwhile to investigate normative policies for accounting information and non-financial information disclosures related to mergers and acquisitions.
    Multiple Major Shareholders and Shadow Banking of Non-financial Enterprises
    HUANG Xianhuan, JIA Min
    2023, 32(7):  190-196.  DOI: 10.12005/orms.2023.0236
    How to govern the shadow banking activities of non-financial enterprises is an important issue in preventing and defusing systemic financial risks in China. However, although multiple major shareholders are a relatively common shareholding structure in corporate governance, no literature has examined their impact on the shadow banking of non-financial enterprises. Therefore, it is of practical significance to reveal the role of this equity structure in the shadow banking of non-financial enterprises and to provide a theoretical reference for further optimizing the shareholding structure in the governance of shadow banking risks.
    Based on this, this paper selects sample data on China's A-share non-financial listed companies from 2007 to 2019, establishes a linear model, and uses Stata 15 to empirically test the influence and mechanism of multiple major shareholders on the shadow banking of non-financial enterprises. The data on multiple major shareholders are manually collected, while the shadow banking data come from the Guotai'an database and the annual reports of listed companies. The study finds that a shareholding structure with multiple major shareholders significantly promotes the shadow banking behavior of non-financial enterprises. The mechanism test finds that, due to coordination costs, the supervision of management by multiple major shareholders fails, aggravating the first type of agency problem and thus promoting the shadow banking of non-financial enterprises. Further research shows that the promoting effect of multiple major shareholders on shadow banking is more pronounced in subsamples with low media attention, non-state ownership, low institutional investor shareholding, a weak regional legal environment, and a low degree of digital finance development.
    The conclusions of this paper have practical value. First, the shareholding structure of non-financial enterprises should be actively optimized to avoid decision-making mistakes caused by excessive ownership concentration, prevent excessive coordination costs among multiple major shareholders, reduce information asymmetry and agency problems, and restrain shadow banking. Second, full play should be given to the inhibitory role of the two external supervision mechanisms, media attention and institutional investors, on the positive relationship between multiple major shareholders and the shadow banking of non-financial enterprises. Finally, the regional legal environment should be actively improved, the positive role of digital finance brought into play, and attention paid to the scale of shadow banking business in non-state-owned enterprises.
    A Comparative Study on the Operating Efficiency and Decomposition Efficiency of Commercial Banks in China Based on a Complex DN-SBM-DEA Model
    ZHU Chuanjin, ZHU Nan
    2023, 32(7):  197-203.  DOI: 10.12005/orms.2023.0237
    Reasonably measuring the internal operational efficiency of commercial banks and exploring the internal causes of operational inefficiency are of great practical significance for promoting the sustainable development of China’s banking industry. From the perspective of the sustainability, liquidity, safety, and profitability of business activities, this article divides the internal business process of commercial banks into three stages: deposit absorption, debt allocation, and asset profitability. We then construct a complex network structure for the operating process of commercial banks, together with a complex DN-SBM-DEA (Dynamic Network SBM-DEA) model. Taking 44 domestic and foreign commercial banks in mainland China from 2013 to 2017 as research samples, this article draws the following main conclusions: (1) Compared with the SBM-DEA and NSBM-DEA models, the DN-SBM-DEA model considers both the internal processes of bank operations and the lagged effects of carry-over variables, so its measurement results are more reasonable. (2) Despite the continuous opening-up of China’s banking industry, the overall operational efficiency of the 44 banks in 2017 declined slightly compared with 2013. This is mainly due to the decrease in efficiency in the asset profitability stage, which is related to the rise in non-performing loan ratios in the banking industry in recent years under the downward pressure on China’s economy. (3) Chinese banks have relatively stronger deposit absorption and asset allocation capabilities, and their inefficiency is mainly due to insufficient asset profitability; foreign banks, by contrast, have relatively stronger asset profitability, and their inefficiency is mainly due to insufficient deposit absorption and asset allocation capabilities.
    Compared with previous studies, the network structure constructed in this article corresponds more closely to the internal processes of bank operations and, in the third stage, accounts for the impact of non-performing loans, an undesirable output, on asset profitability, making it more reasonable. In addition, previous studies have mostly carried the inputs of the first stage over to the second or third stage in full, even though those inputs are not fully utilized there and the true utilization rate is difficult to know; treating them as fully utilized may lead to incorrect evaluation results. This article does not carry the first-stage inputs over to the later stages directly, but instead selects the input and output variables of each stage according to its characteristics. However, this article adopts the mainstream approach of setting equal stage weights and period weights, that is, each stage is considered equally important, and so is each period. This approach has limitations in practice: in the business process of commercial banks, the three stages of deposit absorption, debt allocation, and asset profitability are not necessarily equally important. How to determine reasonable stage and period weights is therefore a direction for future research.
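In the simplest special case, the efficiency score underlying DEA-type models can be sketched without a linear programming solver: with a single input and a single output under constant returns to scale, the CCR efficiency of each unit reduces to its output/input ratio divided by the best ratio in the sample. The sketch below illustrates only this toy case with invented bank data; the paper’s DN-SBM-DEA model additionally handles multiple stages, carry-over variables, and undesirable outputs, and requires an LP solver.

```python
def ccr_efficiency_1x1(inputs, outputs):
    """CCR (constant-returns) efficiency scores for the one-input,
    one-output special case, where the DEA linear program collapses
    to each unit's output/input ratio normalised by the best ratio."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# three hypothetical banks: (input, output) = (1, 1), (2, 1), (4, 3)
scores = ccr_efficiency_1x1([1.0, 2.0, 4.0], [1.0, 1.0, 3.0])
```

Bank 1 lies on the frontier (score 1), while banks 2 and 3 score 0.5 and 0.75, so the score measures the proportional input reduction needed to reach the frontier.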
    Optimal Dividend Strategy in the Compound Poisson Model with Credit-Debit Interest and Transaction Costs and Taxes
    LI Jingwei, LIU Guoxin
    2023, 32(7):  204-210.  DOI: 10.12005/orms.2023.0238
    The ability of insurance companies to pay dividends is an important indicator of a company’s operating conditions and economic strength. The dividend strategy adopted affects not only the interests of shareholders but also the stability of the company’s surplus, the liquidity of its assets, its ability to repay debt, and even its survival. Seeking the optimal dividend strategy is therefore one of the important issues that practitioners and theorists care about, and it is an important means of corporate risk management. DE FINETTI[1] proposed a realistic and economically motivated stability criterion: the management of the company should maximize the expected present value of all dividends paid to shareholders up to the time of ruin. GERBER[2] first showed that the optimal dividend strategy has a band structure in the compound Poisson risk model, and the optimal dividend problem for this model has developed rapidly since then. Due to its practical importance, the absolute ruin problem has also attracted attention in the actuarial literature. Ruin (a negative reserve) does not mean the end of the game but only the necessity of raising additional money, and rescuing a company can be a good investment when the situation is not too serious. We therefore assume that when the surplus is negative, i.e., the company is in deficit, the company can borrow money at a debit interest force β>0 while repaying its debt continuously from its premium income; thus the surplus evolves under the debit interest force β>0 whenever it is negative. A company may continue its business with debt as long as it is profitable. However, when the surplus falls below the level -c/β, we say that absolute ruin occurs: in the compound Poisson risk model, the surplus cannot return to a positive level once it reaches the critical level -c/β, so the value -c/β may be interpreted as the maximum allowable debt for a company. When the surplus is positive, it can be invested to earn credit interest at a fixed rate. When transaction costs are taken into account, the optimal dividend strategy becomes more complex: the fixed cost, however small, can have a large effect on the value function. The optimal dividend problem with fixed transaction costs has yielded fruitful results for diffusion processes, but results for the compound Poisson risk model are relatively rare.
    We consider the optimal dividend problem with transaction costs for the compound Poisson model with credit and debit interest, and control the times and amounts of dividends to maximize the expected cumulative discounted dividend payments until the time of absolute ruin. Because of the transaction costs, the problem is formulated as a stochastic impulse control problem. The necessary and sufficient condition for a strategy to be a stationary Markov strategy is presented first. The associated measure-valued dynamic programming equation (DPE) is derived by virtue of the theory of measure-valued generators. The verification theorem is proved without additional assumptions on the differentiability of the value function. We also show that the optimal dividend strategy constructed in the verification theorem is indeed a stationary Markov strategy. By the Lebesgue decomposition we discuss the relationship between the measure-valued DPE and the quasi-variational inequalities (QVI) satisfied by the value function. We establish the existence of optimal dividend strategies and prove that the optimal strategy is a stationary Markov strategy with a band structure. An algorithm for obtaining a multi-level lump-sum dividend barrier strategy and the corresponding value function is given, and analytical solutions for the value function and the optimal dividend strategy are obtained for exponential claims.
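The surplus dynamics described above can be sketched with a small Monte Carlo simulation. The code below is a discrete-time Euler approximation under assumed, illustrative parameters and a single-level barrier strategy; it is not the multi-level band strategy or the analytical solution derived in the paper.

```python
import math
import random

def simulate_dividends(u0, c, beta, lam, claim_mean, barrier,
                       T, dt=0.01, r=0.05, seed=0):
    """Discounted dividends paid up to absolute ruin (surplus <= -c/beta)
    under a single-level barrier strategy, in a compound Poisson surplus
    process with premium rate c and debit interest force beta in deficit."""
    rng = random.Random(seed)
    u, t, paid = u0, 0.0, 0.0
    while t < T:
        # premiums flow in at rate c; debt accrues interest beta while u < 0
        u += (c + (beta * u if u < 0 else 0.0)) * dt
        if rng.random() < lam * dt:          # Poisson claim in this step
            u -= rng.expovariate(1.0 / claim_mean)
        if u <= -c / beta:                   # absolute ruin: debt unrepayable
            break
        if u > barrier:                      # pay the excess as a dividend
            paid += (u - barrier) * math.exp(-r * t)
            u = barrier
    t += dt if False else 0.0  # (placeholder removed below)
    return paid
```

With no claims (lam = 0) the simulation recovers the deterministic discounted value of the premium stream, which gives a quick correctness check before stochastic runs.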
    Management Science
    Model for Agent Service Scale Optimization of Online Service Center Considering Customer Abandonment
    DAI Tao, ZHANG Ning, WU Yong
    2023, 32(7):  211-218.  DOI: 10.12005/orms.2023.0239
    With the rapid development of the mobile internet and instant messaging technology, the new online customer service center has gradually replaced the traditional call center and become the mainstream form of customer service center in recent years. Its biggest difference from a traditional call center is that one agent can serve several customers at the same time. The maximum number of customers an agent can serve simultaneously, called the service scale, directly affects the efficiency of the online customer service center. When this number increases, the agent has to switch frequently among several customers, which reduces the quality and efficiency of service and causes customers to abandon while waiting for a reply. When the number decreases, more customers are forced to wait for service, which leads to a higher customer abandonment rate and a lower agent utilization rate. Therefore, when setting the number of customers that can be served at the same time, we should consider its impact not only on customers’ average waiting time in the system but also on the customer abandonment rate. Compared with the traditional call center, how to set this upper limit is a new but important problem for the online customer service center.
    The basic “one-to-one” queuing model, in which one agent serves only one customer at a time, is not directly applicable to an online customer service center where one agent can chat with several customers simultaneously. To analyze how to set the upper limit on the number of customers served at the same time, this paper establishes a two-layer queuing model to calculate the average sojourn time, the abandonment rate, and other basic queuing indicators. Specifically, the paper analyzes the characteristics of the online customer service mode, divides its one-to-many service process into two one-to-one service processes, and establishes, by birth-and-death analysis, a two-layer queuing model consisting of a message layer and a customer layer. Given the customer arrival rate, customer patience time, customer consultation frequency, and the average time customers and agents spend preparing each message, the model directly yields the average sojourn time and abandonment rate of customers in the system. The model pays particular attention to customers’ abandonment behavior while waiting to be connected to an agent, and the corresponding sojourn time and abandonment rate during this waiting process can also be calculated. In the numerical experiments, a Flexsim simulation model is first built to verify the correctness of the two-layer queuing model; sensitivity analyses are then conducted with respect to the average customer arrival rate, the average customer patience time, the switching cost between customers, the average message preparation time, and other relevant parameters.
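A one-layer simplification of a service system with abandonment is the M/M/c+M (Erlang-A) queue, whose steady state follows from the same birth-and-death analysis. The sketch below is a rough stand-in under assumed parameters, not the paper’s two-layer message/customer model.

```python
def erlang_a(lam, mu, theta, c, N=500):
    """Abandonment probability and mean sojourn time in an M/M/c+M queue:
    c agents, arrival rate lam, service rate mu per agent, abandonment
    rate theta per waiting customer, state space truncated at N."""
    # birth-death balance: p[n+1] = p[n] * lam / death_rate(n+1)
    p = [1.0]
    for n in range(1, N + 1):
        death = mu * min(n, c) + theta * max(n - c, 0)
        p.append(p[-1] * lam / death)
    z = sum(p)
    p = [x / z for x in p]
    L = sum(n * x for n, x in enumerate(p))  # mean number in system
    # fraction of arrivals that abandon = abandonment flow / arrival rate
    p_abandon = sum(theta * max(n - c, 0) * x for n, x in enumerate(p)) / lam
    W = L / lam                              # Little's law: mean sojourn time
    return p_abandon, W
```

Under a light load (e.g. lam = 1, mu = 10, c = 5) waiting is negligible, so the abandonment probability is near zero and the sojourn time is close to the mean service time 1/mu.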
    The research in this paper has strong practical value. By providing basic parameters such as the customer arrival rate, the number of consultation messages, customer patience, and the average time customers and agents spend preparing each message, an enterprise can use the model to calculate the average sojourn time and abandonment rate of customers in the system, and take these as the basis for setting the optimal service scale. When external parameters change within a certain range, the enterprise needs no extra expenditure: it only needs to adjust the upper limit on the number of customers served simultaneously to optimize the average sojourn time and abandonment rate, which is of great significance for cost-oriented online customer service operations. Enterprises with sufficient funds can adjust this upper limit while also increasing the number of agents. The research also has theoretical significance: it not only provides a reference for building one-to-many queuing models of online customer service with customer abandonment, but also provides a basic model for calculating staffing requirements and studying load distribution as single-agent online customer service centers expand into multi-agent centers.
    Mobile App Promotion Advertising Decisions with Traffic Changes and Transfers
    HE Xiang, LI Li, ZHANG Hua, ZHU Xingzhen, YANG Wensheng
    2023, 32(7):  219-224.  DOI: 10.12005/orms.2023.0240
    As consumer traffic in the mobile app market continues to increase, a growing number of advertisers choose to post promotion ads in the mobile app market. In online marketplaces, a primary concern for sellers planning online promotion advertising has been how to publish it on the basis of consumer traffic. Compared with the offline market, the cost for consumers to enter the mobile app market is lower. As consumers’ movements across online marketplaces become more common, consumer traffic affects sellers in two ways: traffic changes and traffic switches. A traffic change means that the traffic of an online marketplace increases or decreases independently. For example, from January 2019 to January 2020, traffic on TikTok increased by 25% monthly on average; this increase did not involve traffic in other marketplaces. A traffic switch means that traffic transfers from one online marketplace to another, so that the traffic of one marketplace decreases while that of another increases. For example, if a consumer browsing TikTok switches to Amazon the next second, traffic on Amazon increases and traffic on TikTok decreases. Traffic switches thus do not change the overall consumer traffic in the market, but only shift traffic between marketplaces.
    Accordingly, traffic changes and switches make sellers’ decisions on online promotion advertising more complicated, and understanding their effects has become increasingly critical to sellers’ efforts to improve their marketing strategies. In this paper, we investigate sellers’ optimal promotion advertising under traffic changes and switches, trying to answer the following questions: (1) What is the optimal promotion advertising on the basis of online traffic? (2) How should an online seller adjust its promotion advertising as traffic changes and switches among markets? (3) If sellers start with different traffic endowments, how should those with a traffic advantage or disadvantage adjust their promotion advertising?
    We address these questions by developing a Hotelling city model with two sellers and a continuum of consumers. The city is divided into three markets: one competitive market and two non-competitive markets. In the competitive market, the two sellers compete in promotion advertising; in each non-competitive market, only one seller publishes the promotion advertisement.
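For intuition, the equilibrium of the textbook symmetric Hotelling pricing game (unit-length city, transport cost t, marginal cost c) can be computed by best-response iteration, which converges to the well-known prices p* = c + t. This is a simplified illustration only; the paper’s model additionally features advertising levels, three markets, and traffic changes and switches.

```python
def hotelling_prices(t, c=0.0, iters=200):
    """Nash equilibrium prices of a symmetric Hotelling duopoly on a
    unit line with transport cost t and marginal cost c, found by
    best-response iteration.  The first-order condition of profit
    (p1 - c) * (1/2 + (p2 - p1) / (2t)) gives p1 = (p2 + c + t) / 2."""
    p1 = p2 = c + 1.0                    # arbitrary starting point
    for _ in range(iters):
        p1 = (p2 + c + t) / 2.0          # seller 1 best-responds
        p2 = (p1 + c + t) / 2.0          # seller 2 best-responds
    return p1, p2
```

Each update halves the distance to the fixed point, so a couple of hundred iterations suffice for machine precision.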
    The results first show that when consumer traffic changes, the share of the competitive market and the change in consumer traffic interactively affect the optimal promotion price and advertising level; notably, the promotion price and the advertising level do not vary in the same way. Second, when consumer traffic switches between the non-competitive and competitive markets, the seller also needs to consider the share of the competitive market and the initial consumer traffic in the competitive market when adjusting its price promotion advertising strategy. Third, increasing consumer traffic does not always benefit demand and profit. Finally, when the initial traffic in the single markets differs, the paper provides guidance on how sellers with a traffic advantage or disadvantage should set their pricing and advertising level strategies.
    We mainly examine online promotion strategies (including pricing and advertising level decisions) under consumer traffic changes and switches. Our findings offer the following insights for management practice. First, online promotion strategies should be adjusted to market conditions: when consumer traffic changes, the competitive market share and the change in traffic interactively affect the promotion strategy; when traffic switches from one market to another, the amount of traffic in the initial market should also be taken into account. In addition, the pricing and advertising levels do not always move in the same direction: they move in opposite directions when the share of the competitive market is extremely small, in the same direction as that share grows, and in opposite directions again once the share is large enough. These findings provide useful guidelines for adjusting advertising strategies dynamically on the basis of the competitive market share. Moreover, increasing consumer traffic does not always benefit demand and profit; the effect depends jointly on the traffic increase and the share of the competitive market. Finally, comparing the initial consumer traffic across markets provides sellers with managerial insights into how to adjust their price promotion advertising strategies according to their traffic advantage or disadvantage.
    Decision-making Model of Chinese Firms’ Technical Standards Internationalization in “Belt and Road”
    WANG Li, ZHOU Qing, WANG Dongpeng
    2023, 32(7):  225-232.  DOI: 10.12005/orms.2023.0241
    With the “Belt and Road” Initiative, Chinese firms have recognized the significance of strengthening cooperation in standardization with firms along the “Belt and Road” and are devoted to promoting the internationalization of Chinese technical standards in “Belt and Road” markets, a process in which Chinese firms cooperate with firms along the “Belt and Road” on standard setting and application by joining technical standards alliances. Compared with firms along the “Belt and Road”, Chinese firms have the advantages of mastering core technologies in key fields, holding standard-essential patents, and manufacturing high-end products. Therefore, how to give full play to these first-mover advantages to realize technical standard internationalization along the “Belt and Road” has become an outstanding issue that needs to be solved urgently.
    Based on a game model, this paper incorporates the global governance vision of extensive consultation, joint contribution, and shared benefits into Chinese firms’ technical standard internationalization, and investigates Chinese firms’ decisions when confronted with two modes of standardization cooperation: standard adoption and standard collaboration. Specifically, the standard adoption mode refers to the scenario where projects undertaken with firms along the “Belt and Road” are constructed by directly adopting existing Chinese technical standards. Taking “Belt and Road” infrastructure projects as examples, the construction and operation of the Addis Ababa-Djibouti Railway, the Yawan (Jakarta-Bandung) high-speed railway, and the Mongolian railway all adopt Chinese railway technical standards currently in effect. The standard collaboration mode refers to the scenario where technical standards are co-developed and spread by a technical standards alliance composed of Chinese firms and firms along the “Belt and Road”. Typical examples include the Belo Monte UHV transmission project in Brazil by the State Grid Corporation of China, and the energy efficiency standards formed in Pakistan by Haier through localized R&D, manufacturing, and sales.
    Considering that Chinese firms are not only the initiators of the “Belt and Road” but also the leaders in technical standard setting, this paper applies a Stackelberg game model to capture the interaction in standard setting between Chinese firms and firms along the “Belt and Road” as members of the technical standards alliance. The research examines the impacts of bilateral negotiation costs, technical standard spillover effects, market acceptance, and market turbulence on the decision-making of standardization alliances. The optimal standard-setting decisions and the associated profits are derived for each mode, and the thresholds under which one mode dominates the other are also discussed.
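The leader-follower logic can be illustrated with a generic Stackelberg quantity game solved by backward induction: with linear inverse demand p = a - qL - qF, the follower’s best response is qF = (a - c_f - qL)/2, and the leader optimizes against that reply. This is a stylized stand-in under assumed payoffs, not the paper’s standard-setting model with negotiation costs and spillovers.

```python
def stackelberg(a, c_l, c_f, step=1e-3):
    """Backward induction in a Stackelberg quantity game with inverse
    demand p = a - qL - qF and unit costs c_l (leader), c_f (follower).
    The leader grid-searches its quantity against the follower's reply."""
    def follower(qL):                          # follower's best response
        return max(0.0, (a - c_f - qL) / 2.0)
    best_q, best_profit = 0.0, float("-inf")
    for k in range(int(a / step) + 1):
        qL = k * step
        qF = follower(qL)
        profit = (a - qL - qF - c_l) * qL      # leader's payoff
        if profit > best_profit:
            best_q, best_profit = qL, profit
    return best_q, follower(best_q)
```

In the symmetric case the grid search recovers the analytic solution qL = (a - c)/2 and qF = (a - c)/4.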
    Through theoretical analysis, this paper sheds light on how Chinese firms can promote standard internationalization in terms of decision-making mechanisms and the selection of standardization modes for “Belt and Road” markets. In terms of decision-making, Chinese firms should amplify the spillover effects of technical standards, closely follow market trends, reduce negotiation costs, and enhance coordination with firms along the “Belt and Road” to drive the high-quality development of Chinese technical standards in the global market, while remaining mindful of the negative impact of market volatility. In terms of mode selection, Chinese firms should choose according to market conditions and project realities: for supportive livelihood projects, the standard adoption mode is suggested, while for for-profit projects, standard collaboration is recommended. In particular, in markets with limited spillover effects and acceptance, implementing standard collaboration will accelerate the internationalization of Chinese technical standards by alleviating barriers to standard transformation.
    This research deepens the understanding of the process of technical standards internationalization under the “Belt and Road” Initiative and builds a theoretical framework for technical standards formation. Moreover, it provides practical guidance for Chinese firms to establish standard formation strategies and integrate into the trend of standard internationalization.
    Driving Forces of China’s Manufacturing Productivity Growth and the Super Growth Effect of New Enterprises: Based on an Augmented Dynamic Olley-Pakes Productivity Decomposition Method
    XU Yan, ZHENG Guanqun
    2023, 32(7):  233-239.  DOI: 10.12005/orms.2023.0242
    The question of where the drivers of social productivity growth come from is one of the central concerns in the field of economic growth. As research has progressed from the macro to the micro level, researchers have discovered that the efficiency of resource allocation among firms may also be an important factor in aggregate productivity growth. On this basis, many empirical approaches to aggregate productivity decomposition have emerged. Most domestic studies on the drivers of aggregate productivity growth are based on decompositions using micro data on industrial enterprises. Interestingly, however, using the same data and overlapping sample intervals, researchers have obtained very different dominant drivers of productivity growth. Therefore, this paper improves the dynamic Olley-Pakes productivity decomposition (DOP) method. Building on the DOP, this paper not only resolves the measurement bias caused by an inappropriate choice of productivity benchmark when decomposing aggregate productivity change over multiple periods, thus improving the accuracy of the decomposition, but also incorporates the super-growth effect of new entrants and the decline effect of exiting firms into the decomposition framework, enriching its content. At the same time, this paper updates our understanding of the mechanisms driving aggregate productivity among Chinese manufacturing firms. Applying the improved method to decompose the aggregate productivity change of Chinese manufacturing firms, we find that the super-growth effect of new entrants is an important driver of productivity growth in the Chinese manufacturing sector, contributing more than 10% of aggregate manufacturing productivity growth between 1999 and 2006. To our knowledge, this is the first time the contribution of the new-entrant super-growth phenomenon to aggregate productivity growth has been quantified in the literature.
    This paper corrects the bias of the dynamic Olley-Pakes decomposition method in decomposing aggregate productivity change over multiple periods, and incorporates the super-growth effect of new entrants and the decline effect of exiting firms into the decomposition framework, thus offering a new perspective on aggregate productivity growth. Using data from 1998 to 2007, the paper decomposes the change in TFP, employing the “cross-identification method” for sample matching and data cleaning, and the OP method proposed by Olley and Pakes to measure the total factor productivity of the sample firms.
    Decomposing the evolution of China’s manufacturing sector between 1999 and 2006, the paper finds that the super-growth effect of new entrants, i.e., the faster productivity growth of new entrants relative to surviving firms, is an important driver of manufacturing TFP growth, contributing more than 10% of it. The contribution of this effect increases over time and exhibits heterogeneity across ownership types, product characteristics, and regions.
    The research presented in this paper, on the one hand, adds to the literature on aggregate productivity decomposition, complements the existing methodology, and offers a new perspective for explaining aggregate productivity growth; on the other hand, it helps to identify precisely the relative contributions of different factors to aggregate productivity growth. On this basis, researchers can better understand firms’ entry motives at the micro level and the contribution of firm entry and exit dynamics to economic growth at the macro level. For example, researchers can examine the characteristics and influencing factors of changing economic growth dynamics, assess the economic effects of national innovation and entrepreneurship policies and business services policies, or examine the differential effects of local government policies on new versus incumbent businesses, such as investment promotion policies, tax incentives, and subsidies for newly established businesses.
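The core accounting identity behind the dynamic Olley-Pakes (Melitz-Polanec) decomposition, on which the augmented method builds, can be sketched directly: aggregate TFP change splits exactly into a survivor term, an entrant term, and an exiter term. The firm data in the usage check below are invented for illustration.

```python
def dop_decompose(prev, curr):
    """Dynamic Olley-Pakes (Melitz-Polanec) decomposition.  `prev` and
    `curr` map firm id -> (market share, TFP), with shares summing to 1
    within each period.  Returns (survivor growth, entrant contribution,
    exiter contribution), which sum exactly to the aggregate TFP change."""
    def group_avg(d, ids, weight):
        # share-weighted group average, shares renormalised within the group
        return sum(d[i][0] * d[i][1] for i in ids) / weight if weight > 0 else 0.0
    surv = prev.keys() & curr.keys()
    s_entry = sum(curr[i][0] for i in curr.keys() - surv)
    s_exit = sum(prev[i][0] for i in prev.keys() - surv)
    phi_s1 = group_avg(prev, surv, 1.0 - s_exit)
    phi_s2 = group_avg(curr, surv, 1.0 - s_entry)
    phi_e = group_avg(curr, curr.keys() - surv, s_entry)
    phi_x = group_avg(prev, prev.keys() - surv, s_exit)
    return (phi_s2 - phi_s1,                 # within-survivor growth
            s_entry * (phi_e - phi_s2),      # entrants vs. survivors today
            s_exit * (phi_s1 - phi_x))       # exiters vs. survivors yesterday
```

The entrant term is positive whenever entrants’ share-weighted TFP exceeds the survivors’ average, which is the channel the paper augments with entrants’ subsequent super growth.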