
Table of Contents

    25 July 2024, Volume 33 Issue 7
    Theory Analysis and Methodology Study
    Three-tier Emergency Supply Chain Coordination Based on Bi-directional Procurement of Supplies
    AI Yanfang, TIAN Jun, FENG Gengzhong, LIU Yang
    2024, 33(7):  1-7.  DOI: 10.12005/orms.2024.0208
    After a sudden-onset event occurs, the government must respond swiftly to effectively safeguard the lives and property of the people. Timely and efficient transportation of relief supplies to disaster areas is crucial for emergency relief efforts. The interruption of raw material supply poses a stockout risk for the three-tier emergency supply chain. Manufacturing enterprises are unable to procure large quantities of raw materials for capacity expansion and thus struggle to meet production and procurement demands during special periods, severely affecting the efficiency of emergency rescue. At the same time, uncertainty in the demand for emergency supplies can lead to improper resource allocation and waste. To enhance the flexibility of the emergency supply chain and strengthen cooperation among supply chain members, this paper takes the three-tier emergency supply chain system consisting of the government, a manufacturer, and a raw material supplier as its research object and analyzes procurement and buyback strategies for relief supplies. This approach can mitigate the risk of rising raw material prices to a certain extent and address losses caused by shelf-life risks, providing the necessary guarantees for the production and supply of relief supplies.
    The main conclusions are as follows: (1) For the three-tier emergency supply chain composed of the government, a manufacturer, and a raw material supplier, the government can choose different contract types for different regions. In regions with a higher frequency of disasters, call option contracts can be chosen to coordinate the three-tier emergency supply chain while meeting the needs of disaster-affected populations. For regions with a lower frequency of disasters, buyback contracts can be adopted to coordinate the three-tier emergency supply chain and prevent the waste of relief supplies, maximizing their utilization. (2) A comparative analysis with the two-tier emergency supply chain under centralized decision making shows that the participation of raw material suppliers raises the overall ordering level of the three-tier supply chain. Under the call option contract, the three-tier emergency supply chain benefits its members, while under the buyback contract, it enhances the risk resilience of the emergency supply chain system. Conditions are derived for achieving coordination in the three-tier emergency supply chain under the call option contract and for the total ordering quantity under the call option contract to exceed that under the wholesale price contract. (3) In the call option contract, the execution price of relief supplies is particularly sensitive to the spot price, disaster victims' satisfaction, penalty costs, and search probabilities; in the buyback contract, the buyback price of raw materials is particularly sensitive to the same factors. Disaster victims' satisfaction and penalty costs affect the execution and buyback prices in similar ways, and the spot market price has the most significant impact on both. Intervals for setting the ratio of buyback price to execution price, and for the buyback price at which manufacturers participate in three-tier emergency supply chain collaboration, provide theoretical guidance for governments and manufacturers when designing contracts. Finally, the validity of the model is further verified by a numerical example.
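    To make the contrast between the two contract forms concrete, the following minimal newsvendor-style sketch (with entirely hypothetical prices and uniform demand; it is not the paper's model) compares the government's pre-order quantity under a wholesale price contract with the quantity reserved under a call option contract:

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.uniform(50, 150, size=100_000)   # stylized disaster-demand scenarios

p_spot = 20.0          # spot price paid for any shortfall (assumed)
w = 8.0                # wholesale price per unit (assumed)
c_o, e_o = 3.0, 6.0    # option reservation price and execution price (assumed)
salvage = 1.0          # value of unused supplies (assumed)

def cost_wholesale(q):
    """Expected government cost: buy q up front, cover shortages on the spot market."""
    shortage = np.maximum(demand - q, 0).mean()
    leftover = np.maximum(q - demand, 0).mean()
    return w * q + p_spot * shortage - salvage * leftover

def cost_option(q):
    """Expected cost: reserve q options, exercise only what realized demand needs."""
    exercised = np.minimum(demand, q).mean()
    shortage = np.maximum(demand - q, 0).mean()
    return c_o * q + e_o * exercised + p_spot * shortage

qs = np.arange(50, 151)
q_w = qs[np.argmin([cost_wholesale(q) for q in qs])]
q_o = qs[np.argmin([cost_option(q) for q in qs])]
print(q_w, q_o)   # with these numbers the option contract reserves more supplies
```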
    This paper explores procurement and buyback strategies for relief supplies based on a supply chain consisting of the government, a manufacturer, and a raw material supplier. However, it only considers a one-to-one three-tier emergency supply chain system. Future research could focus on the impact of competitive relationships among multiple manufacturers and raw material suppliers on the three-tier emergency supply chain.
    Pricing and Coordination of Construction Machinery Remanufacturing Supply Chain Based on Carbon Quota Repurchase Financing
    CHEN Weida, XING Jie
    2024, 33(7):  8-15.  DOI: 10.12005/orms.2024.0209
    The shortage of resources, intensified market competition, and increased environmental awareness have made emission reduction and resource recycling important issues. Governments around the world have adopted carbon policies to guide enterprises in balancing profits, costs, and emission reduction goals, and as remanufacturing can save production costs and reduce emissions, enterprises have begun to implement it. At the same time, with the rapid development of infrastructure construction and the upgrading of emission standards in China, in-service construction machinery has entered a period of large-scale replacement, and implementing remanufacturing can significantly reduce waste pollution. However, the machinery's large size, the uncertain quality of recycled cores, and the series of remanufacturing steps can easily lead to additional costs and resource consumption, and a shortage of funds often limits the implementation of remanufacturing in the engineering machinery industry. In this case, capital-constrained manufacturers in the engineering machinery remanufacturing supply chain can obtain funds from suppliers through carbon quota repurchase financing and coordinate the supply chain with a revenue-sharing contract.
    We consider the pricing decision of a two-level supply chain consisting of a supplier and an engineering machinery manufacturer, where the supplier provides core components to the manufacturer, and the manufacturer produces both new and remanufactured products, which are sold in the same market. Under carbon quota repurchase financing, the manufacturer initially sells part of the carbon quota allocated by the government to the supplier to obtain financing, uses the financing together with self-owned funds for production, and repurchases the quota from the supplier at the end of the production and sales period. The supplier allows the manufacturer to pay the full wholesale price of the core components at the end of the period, reducing the manufacturer's capital occupation and enabling it to expand its production scale. Then, to improve the overall performance of the remanufacturing supply chain, a revenue-sharing contract is designed to coordinate the supply chain. Finally, the corresponding model is solved using the Lagrange multiplier method and KKT conditions.
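    As a stylized illustration of the kind of capital-constrained production problem that the paper solves with the Lagrange multiplier method and KKT conditions, the sketch below (hypothetical linear inverse demand, discount factor, and cost parameters; not the paper's formulation) maximizes a manufacturer's profit over new and remanufactured output subject to a budget constraint:

```python
from scipy.optimize import minimize

w, c_r = 6.0, 3.0        # unit cost of core component / remanufacturing (assumed)
B = 400.0                # self-owned funds plus quota-repurchase financing (assumed)
a, b = 30.0, 0.5         # inverse demand p = a - b*(q_n + q_r) (assumed)
delta = 0.8              # consumer discount for remanufactured products (assumed)

def neg_profit(q):
    qn, qr = q
    pn = a - b * (qn + qr)          # price of new products
    pr = delta * pn                 # discounted price of remanufactured products
    return -(pn * qn + pr * qr - w * qn - c_r * qr)

cons = [{"type": "ineq", "fun": lambda q: B - (w * q[0] + c_r * q[1])}]  # budget
res = minimize(neg_profit, x0=[10, 10], bounds=[(0, None)] * 2, constraints=cons)
print(res.x, -res.fun)   # optimal (q_n, q_r) and profit under the capital constraint
```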
    Through model analysis and numerical simulation, this study explores the influence of carbon quota repurchase financing and revenue-sharing contracts on the optimal decisions and profits of the engineering machinery remanufacturing supply chain. The results show that: 1) When the manufacturer's initial capital is limited, the output of new products, the total output, and the wholesale price of core components can all be improved by adopting quota repurchase financing, and the profits of both supply chain members and the total supply chain profit are always higher than without carbon quota repurchase financing; in other words, carbon quota repurchase financing promotes manufacturers' production activities. 2) Under carbon quota repurchase financing, as the financing amount increases, the output of new products, the total output, and the wholesale price of core components all increase, while the output of remanufactured products decreases, and all remain unchanged once the optimal decision is reached. 3) When the amount of quota repurchase financing that suppliers are willing to provide is low, supply chain members are more willing to accept revenue-sharing contracts, and for a given revenue-sharing ratio, both sides of the supply chain can achieve a Pareto improvement.
    Effect and Mechanism of External Green Supply Chain Management Practices Driven by Customer Customization
    CHEN Qiujun, JIA Tao, WANG Yu
    2024, 33(7):  16-22.  DOI: 10.12005/orms.2024.0210
    Since China proposed the “dual-carbon” target, firms have faced higher requirements for green transformation. In practice, carbon emissions and environmental impacts vary across different stages of the supply chain, while environmental regulations and natural resource reserves not only change over time but also depend on the geographic locations of those stages. This requires firms to coordinate and manage the environmental impacts of each stage of the supply chain so as to meet, in a timely manner, the customized product demands of customers in different application scenarios. Consequently, firms must shift the focus of green practices from the company level to suppliers and customers at the supply chain level, and review their external green supply chain management practices (ex-GSCMP). Implementing ex-GSCMP has always been considered challenging because it requires coordinating multiple, geographically dispersed suppliers and customers, with highly uncertain outcomes. Especially in the context of global manufacturing supply chain restructuring, collaborative operations among firms at various stages of the supply chain become more complex, further increasing the uncertainty facing firms that implement ex-GSCMP.
    In fact, the relationship between ex-GSCMP and financial performance has been controversial in the literature, and the performance burden resulting from a firm's green transformation remains a concern for decision makers among supply chain members. Based on this, this study further considers the operational context of customer customization and proposes the value of ex-GSCMP for innovation performance from the perspective of knowledge. This study therefore identifies “middle-level knowledge” (i.e., new knowledge regarding the integration of green product design and production with different application scenarios) to analyze the relationship between ex-GSCMP and both financial and innovation performance, as well as the moderating role of environmental regulation. Based on the knowledge-based view and institutional theory, this study proposes four research hypotheses: (1) Ex-GSCMP has a U-shaped relationship with financial performance. (2) Ex-GSCMP is positively related to innovation performance. (3) Environmental regulation strengthens the U-shaped relationship between ex-GSCMP and financial performance. (4) Environmental regulation strengthens the positive relationship between ex-GSCMP and innovation performance. Survey data are collected from 264 firms, and ordinary least squares regression is conducted in SPSS to test the research hypotheses. The results show that there is a U-shaped relationship between ex-GSCMP and financial performance, and ex-GSCMP has a positive effect on innovation performance. Moreover, environmental regulation positively moderates the relationship between ex-GSCMP and financial performance, but does not significantly influence the relationship between ex-GSCMP and innovation performance.
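    A minimal sketch of the corresponding econometric specification is given below: a quadratic term captures the hypothesized U shape, and interaction terms capture the moderating role of environmental regulation. The data are simulated and the variable names are ours, so this illustrates only the test structure, not the paper's results:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 264                               # matches the paper's sample size
gscmp = rng.normal(size=n)            # ex-GSCMP (standardized, simulated)
reg = rng.normal(size=n)              # environmental regulation (standardized, simulated)
fin = 0.4 * gscmp**2 - 0.2 * gscmp + 0.3 * gscmp**2 * reg + rng.normal(size=n)

# Quadratic term tests the U shape; interactions test moderation.
X = np.column_stack([gscmp, gscmp**2, reg, gscmp * reg, gscmp**2 * reg])
model = sm.OLS(fin, sm.add_constant(X)).fit()
print(model.summary())   # a significantly positive gscmp**2 coefficient supports the U shape
```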
    The theoretical contributions of this study are threefold. (1) This study is the first to propose the impact of ex-GSCMP on innovation performance and finds a positive relationship between the two. From the perspective of knowledge, it fills a gap in the existing literature and enriches theoretical research on the effectiveness of ex-GSCMP. (2) Combined with the influence of accumulated “middle-level knowledge” in the context of customer customization, this study uncovers a U-shaped relationship between ex-GSCMP and financial performance, providing a new perspective for understanding the mechanisms linking ex-GSCMP to financial performance. (3) Considering the significant real-world impact of environmental regulation on the relationship between a firm's green practices and their effects, the moderating effect of environmental regulation is identified and verified. This study expands the application of institutional theory, enriching the contingency perspective on the relationship between ex-GSCMP and both financial and innovation performance.
    Based on the research findings, management suggestions are provided for firms to proactively implement ex-GSCMP. (1) Firms should fully leverage the opportunities of implementing ex-GSCMP to acquire external knowledge, accumulate unique “middle-level knowledge”, and develop innovative green products to enhance innovation performance, thereby achieving sustainable development. (2) Firms should make full use of “collaborative platforms” to reduce the inter-firm interaction costs brought about by implementing ex-GSCMP, thereby accelerating the accumulation of “middle-level knowledge”. They should disclose green information to enhance corporate compliance, strengthen inter-firm trust, and improve long-term financial performance. (3) They should track the requirements and changes of environmental regulations, match suitable supplier resources for carrying out ex-GSCMP, and adjust the implementation strategy and direction of ex-GSCMP in a timely manner.
    Based on the content of this study, further research could consider the following three aspects. (1) Future research could collect objective data to test the model. (2) The sample covers machinery, electronics, and other industries, so more industry data can be collected in the future to further validate the findings. (3) Because this study focuses on the moderating effect of environmental regulation, future research could analyze the moderating effects of non-governmental factors such as public attention.
    Financing and Equilibrium Decision of Capital Constrained Supply Chain under Asymmetric Information
    WANG Qiangqiang, ZHANG Bin
    2024, 33(7):  23-29.  DOI: 10.12005/orms.2024.0211
    Many small and medium-sized enterprises (SMEs), especially those in developing countries, face daunting financing challenges. The China Association of Small and Medium Enterprises reports that 81% of Chinese small and micro enterprises struggle with capital shortages. Traditionally, bank financing is considered a solution for many capital-constrained firms. However, according to the enterprise surveys of the World Bank Group, 79.2% of bank loans require collateral, and it is difficult for SMEs to obtain financial resources because they lack collateral and credit histories. To address the creditworthiness issue, guarantee credit financing (GCF) is widely used in practice as an alternative to bank financing. For example, China Minsheng Bank and DRC Bank (a regional bank in China) provide guarantee credit in cooperation with focal companies, and Haier Group provides financial support through its financial holdings company for its dealers to sell Haier's products. With guarantee credit, the retailer easily obtains bank loans to support its operations; at the same time, the supplier can increase its revenue and the bank faces less financial risk.
    Guarantee credit financing refers to bank loans provided to a capital-constrained firm (retailer) for specific procurement, based on a guarantee provided by a focal company (manufacturer) in the supply chain. In theory, GCF is relatively novel to supply chain finance but has attracted increasing attention in recent years. Scholars focus on the impact and role of GCF from the perspective of retailers' and manufacturers' credit and study how the manufacturer (retailer) can optimally structure the terms of the guarantee credit arrangement. Retailers with point-of-sale data are better positioned than suppliers to forecast future market demand. In practice, Haier Group provides GCF to some capital-constrained SMEs, especially its retailers. Most of these capital-constrained retailers use point-of-sale systems to record historical sales data. By documenting and accumulating large amounts of sales data, they gain an information advantage over Haier Group regarding local demand. Retailers' access to demand information may allow them to obtain a favorable guarantee credit amount from the supplier. BABICH et al. (2012) point out that the supplier cannot use a simple contract to earn the maximum first-best channel profit when asymmetric information exists. Thus, it is interesting to investigate the impact of asymmetric demand information on the supplier's decisions and profit under GCF.
    This research aims to address the following questions: (1) How do retailers and suppliers make financial and operational decisions under GCF? Can retailers always obtain their desired credit guarantee, and under what circumstances? (2) How does asymmetric demand information affect the decisions and profits of both retailers and suppliers? Does the retailer's information advantage solely benefit the retailer, or does it harm the supplier? Under symmetric demand information, the supplier's production cost affects the guarantee credit amount, which increases with a broader demand distribution. Retailers inflate reported demand to secure a full guarantee, knowing that reporting the exact demand information leads to a partial guarantee. Under asymmetric demand information, we find that when the supplier's production cost is below a certain threshold, asymmetric demand information has no effect on the retailer's and supplier's decisions: both earn the same profits as under symmetric demand information. When the supplier's production cost lies between two thresholds and the prior probability is below a certain threshold, the retailer's type can be identified by the supplier through an acceptable constraint; the retailer's demand information advantage then enables it to earn extra profits at the expense of the supplier's profit. Otherwise, the supplier provides the same guarantee credit amount to any type of retailer, and asymmetric demand information is detrimental to both the retailer and the supplier.
    This paper represents an initial exploration of GCF under asymmetric demand information, shedding light on how retailers and suppliers determine guarantee credit amounts and the impact of information asymmetry on their profits. Future research should delve deeper into the complexities of GCF and consider other relevant factors to enhance understanding and provide valuable insights for supply chain management.
    Manufacturer's Alliance Strategy Selection in an Online Co-opetitive Supply Chain
    WANG Tongyuan, SUN Kangjia, WANG Xianjia, CHEN Zhensong, HE Peng
    2024, 33(7):  30-36.  DOI: 10.12005/orms.2024.0212
    With the intensification of competition and changes in the market environment, many e-commerce platforms, such as Amazon and JD.com, have gradually switched from the pure mode of distributing manufacturers' products (i.e., reselling mode) or providing a marketplace to sellers (i.e., agency selling mode) to a hybrid mode. The platform not only provides marketplaces for retailers to connect with consumers, but also acts as an e-retailer to distribute manufacturers' products. Meanwhile, with increasingly fierce competition, some supply chain enterprises face the severe prospect of exiting the market or being merged. Firms have gradually realized the importance of alliances: in some fields or markets, long-term and stable profits can be obtained by allying with other supply chain members. For example, P&G allied with Walmart and reduced the operating cost of the whole supply chain, and Midea and Gree reached strategic alliances with JD. However, an improper choice of alliance strategy can harm the profits of both members and eventually rupture the alliance relationship; for example, Nike terminated its alliance with the online retailer Zappos. Therefore, it is of great significance to characterize the manufacturer's alliance strategy in an online retailing supply chain.
    In this paper, we construct an online retailing co-opetitive supply chain consisting of one manufacturer, one retailer, and one platform. The manufacturer sells products to the retailer and the platform at wholesale prices. The retailer opens a franchise store on the platform's marketplace by paying a commission fee. The platform not only provides the marketplace to the retailer, but also builds a self-operated flagship store to resell the manufacturer's products; hence its revenue comes from the retailer's commission fee and the profit from reselling. It is worth noting that, on the one hand, the retailer and the platform both distribute the manufacturer's products and compete horizontally in the terminal market; on the other hand, the retailer sells on the marketplace by paying a commission fee and cooperates vertically with the platform. Thus, there is a co-opetitive relationship between the retailer and the platform. In this supply chain, the manufacturer has three alliance options: no alliance, allying with the retailer, or allying with the platform. Considering differences in sales entities and channels, the retailer and the platform have different initial market demands. Based on the above analysis, this paper examines the impact of different initial market demands on the manufacturer's alliance strategy choice in an online co-opetitive supply chain.
    The main research is arranged as follows: Firstly, we take the no alliance model as a benchmark to explore whether the manufacturer has an incentive to ally with the retailer or platform, and analyze the impact of the alliance on the third-party member's profit. Secondly, we compare and analyze which member the manufacturer is more inclined to ally with. Then, we study the influence of different alliance strategies on the whole supply chain profits, social welfare, and consumer surplus. Finally, we relax the assumptions and expand the discussion on the manufacturer's alliance strategy choice in two situations, i.e., quantity competition, and the platform as the channel leader.
    The results show that: (1) The manufacturer always has an incentive to ally with the retailer or the platform. However, which member the manufacturer is more motivated to ally with depends on the platform's commission rate and the initial market demand ratio of the two channels. Specifically, when the commission rate is low and the initial market demand ratio is large, the manufacturer is more inclined to ally with the retailer; otherwise, it prefers to ally with the platform. (2) The profit of the platform (retailer) does not always decrease when the manufacturer allies with the retailer (platform). Specifically, when the commission rate is within an appropriate range, the platform (retailer) benefits from the alliance between the manufacturer and the retailer (platform) if the initial market demand ratio is large. (3) When the initial market demand ratio is small, the manufacturer allying with the platform achieves a win-win-win outcome for the whole supply chain, consumer surplus, and social welfare; when the ratio is large, allying with the retailer achieves the win-win-win outcome.
    Berth Allocation Optimization of Bulk Cargo Ports Considering Berth Shifting Operation
    ZHENG Hongxing, ZHU Junqiu, FAN Xin
    2024, 33(7):  37-43.  DOI: 10.12005/orms.2024.0213
    In recent years, the close connection of international supply chains and the prosperity of maritime trade have led to a sharp increase in cargo throughput in many bulk ports. Meanwhile, as large ships gradually become mainstream, ports that cannot provide sufficient deep-water berths due to natural conditions constantly face complaints from shipping companies. In this context, the fact that most bulk ports in China still rely primarily on shallow-water berths becomes particularly prominent. To alleviate the waiting pressure of large ships and meet the service needs of small and medium-sized ships, how to efficiently utilize existing berth resources has become an important issue in port management. To this end, this paper conducts an in-depth study of berth allocation optimization considering berth shifting operations. Flexible and efficient berth shifting operations can not only shorten the waiting time of large ships and reduce port congestion, but also better meet the service needs of small and medium-sized ships and achieve balanced utilization of berth resources. This paper first outlines the research background and emphasizes the importance and practical value of studying berth allocation optimization with berth shifting operations. It then reviews existing literature from two aspects: studies of berth allocation that ignore berth shifting operations, which mainly address the basic theory of classic berth allocation problems, and studies that consider berth shifting operations, which pay more attention to their influence on port operation efficiency and service quality. By reviewing and evaluating the existing literature, this paper further elaborates the theoretical value and practical significance of the issue.
    This study focuses on the temporal and spatial constraints of continuous berths within a single terminal in bulk cargo ports, while considering factors such as the draft of arriving vessels and the operational berth shifting requirements of certain ships. It designs an optimal combination scheme for partially arriving vessels involving load-shifting and berth-shifting to reduce the total time spent in port by all arriving vessels. To this end, a berth allocation model is constructed with the objective of minimizing the total time in port, and an improved genetic algorithm incorporating variable neighborhood concepts is designed to solve the model and obtain the optimal berth allocation scheme. In the case study, the effectiveness of this scheme is first validated by comparison with existing schemes such as first-come-first-served and schemes that do not consider berth shifting operations. Secondly, the effectiveness of the proposed algorithm is verified by comparison with the solution process of CPLEX. Finally, two types of sensitivity analysis experiments examine the impact of different proportions of large vessels and different vessel arrival intervals on berth allocation, emphasizing the importance of vessel arrival time intervals and of periods with a higher proportion of large vessels among arrivals. The results indicate that, under limited berth resources, a berth allocation plan encompassing appropriate combinations of load-shifting, berth-shifting, and operational berth shifting can effectively reduce vessels' waiting time in port, thereby enhancing customer satisfaction. To further improve berth resource utilization, it is recommended to coordinate vessel schedules with shipping companies' anticipated arrivals to stagger vessel arrivals as much as possible, and to pre-plan large-vessel allocation based on the scale and workload of arriving vessels, assigning multiple large vessels arriving in clusters to different terminals. These strategies offer new insights and significant reference value for formulating berth allocation plans in bulk cargo ports.
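    The following toy sketch (simplified discrete berths, invented vessel data, no load-shifting or berth-shifting logic, and crossover omitted for brevity) illustrates the permutation encoding and greedy decoding that such genetic algorithms commonly use, with total time in port as the fitness:

```python
import random

# (arrival, handling_time, draft) per vessel, and berth water depths -- all invented
vessels = [(0, 5, 8), (1, 3, 12), (2, 4, 8), (3, 6, 12), (4, 2, 8)]
berth_depth = [9, 13]

def total_time_in_port(order):
    """Greedy decoder: serve vessels in chromosome order at the earliest compatible berth."""
    free_at = [0.0] * len(berth_depth)
    total = 0.0
    for v in order:
        arr, dur, draft = vessels[v]
        feasible = [b for b in range(len(berth_depth)) if berth_depth[b] >= draft]
        b = min(feasible, key=lambda i: max(free_at[i], arr))
        start = max(free_at[b], arr)
        free_at[b] = start + dur
        total += start + dur - arr        # waiting plus handling time
    return total

pop = [random.sample(range(len(vessels)), len(vessels)) for _ in range(30)]
for _ in range(200):                      # elitist loop with swap mutation only
    pop.sort(key=total_time_in_port)
    child = pop[0][:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    pop[-1] = child                       # replace the worst individual
pop.sort(key=total_time_in_port)
print(pop[0], total_time_in_port(pop[0]))
```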
    Novel Optimization Model for Inbound and Outbound Flight Sequencing under Uncertain Runway Invasion Scenario
    SUN Bo, WEI Ming
    2024, 33(7):  44-50.  DOI: 10.12005/orms.2024.0214
    According to the International Civil Aviation Organization (ICAO), a runway invasion is any incident at an airport involving the mistaken presence of aircraft, vehicles, or pedestrians in a protected area of the surface used for aircraft take-off and landing. All kinds of intrusion events are random and sudden, and they differ in location, time, and duration. When a runway incursion occurs, aircraft are prohibited from taking off or landing during that time, which affects the flight arrivals and departures scheduling problem (FADSP) for some flights. FADSP with runway invasion (FADSPI) is therefore more complicated than the traditional FADSP, mainly in two respects: (1) Considering that incoming flights have higher priority than departing flights, some runway incursions result in excessively long waiting times for certain flights, which must land at alternate airports due to limited reserve fuel. (2) It is urgent to analyze the internal relationship among the randomness of intrusion events, the approach and departure sequencing scheme, and flight delays. Therefore, studying FADSPI helps improve the scientific level of flight arrival and departure management in emergencies, thereby improving runway capacity and avoiding large-scale flight delays.
    This paper proposes an optimization model for flight sequencing under a multi-runway operation mode in uncertain scenarios. It is assumed that the probability of each runway invasion event occurring with different durations can be estimated, and that the priorities for delay and alternate handling of different flight types, such as special aircraft, VIP, and ordinary flights, are known in advance. The model selects some ordinary approach flights to divert to alternate airports, allocates the remaining flights to different runways, and determines their take-off or landing times on the corresponding runways, so as to reduce the delay cost of ordinary flights as far as possible while giving priority to special aircraft and inbound and outbound passenger flights. According to the characteristics of the problem and based on the priority of inbound and outbound flights, a multi-stage parallel distributed heuristic algorithm is designed to solve it. Finally, a real case is used to analyze the differences in flight arrival and departure sequencing results under different invasion events. The relationship among the spatial and temporal distribution of intrusion events, the number of flights of different priority types, and the delays is revealed to verify the correctness of the model.
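    As a rough illustration of the dispatching logic described above, the toy sketch below (single runway, invented flight data and priorities; not the paper's multi-stage algorithm) queues flights by priority class around an invasion window and diverts an ordinary arrival whose waiting time would exceed its limit:

```python
closure = (10.0, 25.0)      # the invasion blocks the runway in this window (assumed)
free = 0.0                  # time at which the single runway becomes free
SEP = 4.0                   # runway occupancy / separation time per movement (assumed)
# (priority, ready_time, max_wait, id): lower priority number = served first
flights = sorted([(0, 5, 60, "VIP1"), (1, 8, 30, "ARR1"), (1, 12, 15, "ARR2"),
                  (2, 9, 90, "DEP1"), (2, 14, 90, "DEP2")])
for prio, ready, max_wait, fid in flights:
    s = max(free, ready)
    if s < closure[1] and s + SEP > closure[0]:
        s = closure[1]      # slot would overlap the invasion window: push past it
    if s - ready > max_wait:
        print(fid, "-> diverted to alternate airport (delay limit exceeded)")
        continue
    free = s + SEP
    print(fid, f"-> cleared on the runway at t={s}")
```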
    The main findings are as follows: (1) When flight priorities are considered, although part of the runway slot resources are wasted and the arrival and departure delay time slightly increases, the losses caused by VIP flight delays are reduced and the actual special needs of flight arrival and departure sequencing are met. (2) When supply (the number of runways) is less than demand (the number of inbound and outbound flights), the queue of inbound and outbound flights lengthens; otherwise, it shrinks. Because flights arrive unevenly, the queue length fluctuates from moment to moment, and when overall demand exceeds supply, it takes more time for all queues to dissipate. (3) Owing to the different runway operation modes and proportions of inbound and outbound flights, when the location, time, and duration of a runway invasion differ, the queue formation and dissipation processes of inbound and outbound flights differ significantly. (4) As the runway invasion time increases, diverted flights may appear, and their number grows with the invasion time.
    Analysis and Cost Optimization of an N-policy Repairable Queue with Single Vacation and Variable Failure Rates
    HE Yaxing, TANG Yinghui
    2024, 33(7):  51-56.  DOI: 10.12005/orms.2024.0215
    This paper develops an M/G/1 repairable queue with single vacation and variable failure rates under N-policy control, in which the server takes an uninterrupted vacation once the system becomes empty. If the server returns from vacation and finds at least N customers in the system, he/she immediately begins serving the waiting customers until the system becomes empty again; otherwise, the server remains idle but on duty until the number of waiting customers reaches N, and then immediately begins service. In addition, the service station has different failure rates during its busy and idle periods. Such a queueing model captures not only the random failures of the service station (service facility) during its working periods but also those that occur in its non-working periods due to environmental changes. Further, failures that occur during non-working periods are discovered only when the service station is activated, so idle failures can occur at most once per busy cycle. The queueing model studied in this paper is therefore more in line with actual situations.
    Firstly, we apply the stochastic decomposition property of the steady-state queue size to derive its probability generating function, and obtain performance measures such as the average queue size, the average length of the busy cycle, and the average waiting time of an arbitrary customer through algebraic operations. Secondly, we use renewal process theory, the total probability decomposition technique, and the Laplace transform to discuss critical reliability measures of the system, including the unavailability and the failure frequency.
    Although setting the threshold N can reduce the system's cost due to frequent startups, it also increases customers' waiting time. Therefore, it is of great theoretical importance and application value to consider the cost optimization of the system under expected waiting time constraints. Motivated by this, we establish a cost model and a cost objective function and separately discuss the cost optimization problems with and without expected waiting time constraints under the widely applied PH distribution. Several numerical examples are presented to determine the one-dimensional optimal threshold N* that minimizes the long-run expected cost of the system, as well as the two-dimensional optimal threshold (N*, T*) when the vacation time is fixed as T, providing ideas and theoretical support for decision makers seeking to maximize economic benefits. Moreover, we compare the unconstrained and constrained scenarios. The results show that the optimal threshold N* without the expected waiting time constraint is larger than that under the constraint; the smaller the waiting-time constraint threshold, the smaller N* becomes and the larger the corresponding minimum expected cost. Thus, if the system manager wants to reduce customers' waiting time and increase customer satisfaction, the system must be started earlier at a higher cost. Determining the optimal threshold from both the manager's and the customers' points of view helps balance their interests, which makes the innovation of this paper clear, and the theoretical analysis results have practical application value.
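    For intuition on the constrained threshold optimization, the toy search below uses the classical M/M/1 N-policy formulas as a simplified stand-in for the paper's M/G/1 model with vacations and breakdowns; all cost parameters and the waiting-time cap are hypothetical:

```python
lam, mu = 4.0, 5.0                 # arrival and service rates (assumed)
rho = lam / mu
h, R = 2.0, 50.0                   # holding cost per customer-hour, startup cost (assumed)
W_max = 1.2                        # expected-waiting-time constraint in hours (assumed)

def mean_number(N):                # E[L] = rho/(1-rho) + (N-1)/2 under the N-policy
    return rho / (1 - rho) + (N - 1) / 2

def cost(N):                       # holding cost plus amortized startup cost per unit time
    cycle = N / (lam * (1 - rho))  # E[idle] + E[busy] = N/lam + N/(mu - lam)
    return h * mean_number(N) + R / cycle

# Little's law: E[W] = E[L]/lam gives the expected-waiting-time constraint.
feasible = [N for N in range(1, 50) if mean_number(N) / lam <= W_max]
N_star = min(feasible, key=cost)
N_unc = min(range(1, 50), key=cost)
print(N_star, N_unc)               # the constrained N* is smaller, at a higher minimum cost
```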
    For future research, the continuous-time queueing system studied in this paper can be further extended to the corresponding discrete-time queueing system, and the non-Markovian arrival processes of customers and uninterrupted multiple vacations can also be considered.
    Adaptive Large Neighborhood Search for the Colored Traveling Salesmen Problem
    LU Yongliang, WU Qinghua, LI Jianbin, ZUO Pingcong
    2024, 33(7):  57-64.  DOI: 10.12005/orms.2024.0216
    The colored traveling salesman problem (CTSP) is an extension of the classical multiple traveling salesman problem. In CTSP, there are multiple salesmen and multiple city nodes to be visited. Each salesman is allocated a particular color, and each city carries one, two, or all of the salesmen's colors. Each city must be visited exactly once, by a salesman carrying a matching color. The goal of CTSP is to determine the best route for each salesman so that, while meeting the above constraints, the total traveling distance of all salesmen is minimized.
    CTSP originates from a class of practical applications in multi-machine scheduling. In such a multi-machine system, the entire workspace is divided into multiple sections, and the processing tasks in each section are accessible to different subsets of robots. Treating the processing tasks as city nodes and the robots executing them as salesmen, and using the color attributes of salesmen and cities to control the robots' differential access to tasks, this setting is modeled as a CTSP. CTSP is an NP-hard combinatorial optimization problem with wide-ranging real-life applications. For example, in modern logistics and distribution, customers may have preferences for delivery personnel; at the same time, different types of goods may require different types of vehicles, and allocating vehicle types to goods types presents a challenge. Therefore, researching and developing effective algorithms for the CTSP is of great importance and contributes significantly to the current TSP literature.
    This article proposes an effective adaptive large neighborhood search (ALNS) algorithm for the NP-hard CTSP. The algorithm consists of four important components: (1) a randomized greedy initial solution construction method, (2) four specialized destroy and repair operators, (3) an efficient local search procedure, and (4) an adaptive mechanism for selecting destroy and repair operators. The ALNS algorithm first generates an initial solution using the randomized greedy method and then performs a series of iterations. In each iteration, the algorithm adaptively selects the most suitable destroy and repair operators, applies them to the current solution, and uses the local search procedure to further improve it; the incumbent best solution and the operator weights are then updated. Computational results on three sets of benchmark instances from the literature demonstrate that the proposed ALNS algorithm solves the CTSP effectively. In particular, it obtains better-quality solutions in less runtime than existing CTSP algorithms in the literature.
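    A generic skeleton of the ALNS loop described above is sketched below; the destroy/repair operators, local search, and cost function are problem-specific stubs, and the reward and smoothing constants are illustrative choices rather than the paper's settings:

```python
import random

def alns(init, destroy_ops, repair_ops, local_search, cost, iters=1000):
    cur = best = init
    w_d = [1.0] * len(destroy_ops)             # adaptive weights for destroy operators
    w_r = [1.0] * len(repair_ops)              # adaptive weights for repair operators
    for _ in range(iters):
        d = random.choices(range(len(destroy_ops)), weights=w_d)[0]
        r = random.choices(range(len(repair_ops)), weights=w_r)[0]
        cand = local_search(repair_ops[r](destroy_ops[d](cur)))
        reward = 0.1                           # small reward just for being tried
        if cost(cand) < cost(cur):
            cur, reward = cand, 1.0            # improved the current solution
        if cost(cand) < cost(best):
            best, reward = cand, 2.0           # biggest reward: new incumbent
        w_d[d] = 0.9 * w_d[d] + 0.1 * reward   # exponential smoothing of weights
        w_r[r] = 0.9 * w_r[r] + 0.1 * reward
    return best

# Toy usage: minimize (x - 3)^2 with a perturbation as the lone "destroy" stub.
print(alns(10.0, [lambda x: x + random.uniform(-2, 2)], [lambda x: x],
           lambda x: x, lambda x: (x - 3) ** 2))
```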
    Future research could consider extensions of the CTSP, such as incorporating additional constraints like customer time windows and vehicle capacities; this would be an interesting direction with significant contributions to the current TSP literature. In addition, exact algorithms for the CTSP are lacking in the literature, so researching them would be worthwhile. Furthermore, exploring combinations and hybrid strategies of various algorithms for the CTSP, such as combining evolutionary algorithms with local search, could enhance the efficiency and quality of problem-solving. Finally, leveraging machine learning techniques, especially deep learning methods, to tackle the CTSP is also a promising research direction.
    Vehicle Routing Problem in Mixed Synchronous/Asynchronous Delivery and Installation of Home Appliances
    DAI Ying, WANG Dan, YANG Fei, MA Zujun
    2024, 33(7):  65-71.  DOI: 10.12005/orms.2024.0217
    Delivery and installation are critical links in the last-mile logistics of home appliances, and their efficiency directly affects customer satisfaction. In the past, most logistics enterprises used an asynchronous delivery and installation model, in which installers offered a separate, secondary service after the product was delivered to the customer. Although this model increases delivery efficiency and installation flexibility, it also results in more frequent truck visits, which raise the overall cost and detract from the customer's satisfaction with after-sales care. Therefore, many home appliance enterprises have in recent years adopted a synchronous delivery and installation model, requiring installers to perform delivery and installation together. However, installation accounts for a large share of the service time: although the synchronous model requires fewer truck visits than the asynchronous one, it leads to lower efficiency and reduced customer satisfaction during peak business hours. Furthermore, uniformly adopting the synchronous strategy reduces the distribution efficiency of orders that need no installation. In particular, when available installers are limited, some orders' delivery and installation may take place later than promised, harming customer satisfaction, corporate reputation, and market competitiveness.
    In this context, this paper addresses the vehicle routing problem in mixed synchronous/asynchronous delivery and installation of home appliances, in which the delivery and installation services for each customer can be either synchronized or separated. By combining both modes, we aim to improve the overall efficiency of delivery and installation and to offer a more flexible schedule for delivery vehicle routes and service plans. A mixed-integer linear programming model is developed for the appliance delivery and installation routing problem with soft time windows and constraints on vehicle capacity, available vehicles, and maximum working time. The objective is to minimize the total delivery and installation costs, including fixed vehicle dispatching costs, vehicle travel costs, and penalty costs for delayed installation. Since this is an NP-hard problem that cannot be solved by exact methods within acceptable computational time, an improved genetic algorithm (GA) is tailored to solve the model. We assess the performance of the proposed GA with numerical experiments. The results show that: (1) GA outperforms Gurobi in computation time in small-scale trials; for large-scale problems, Gurobi struggles to find the best solution within 3,600s, while GA still obtains better solutions. (2) For medium-scale cases, GA outperforms the adaptive large neighborhood search algorithm (ALNS) from existing research in both computation time and solution quality; in a few experiments, ALNS is only marginally better than GA.
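    The objective structure described above can be sketched compactly: the toy evaluator below (invented distances, service times, and cost rates; not the paper's full MILP) charges a fixed vehicle cost, travel cost, and a soft-time-window penalty for lateness, with the sync/async choice reduced to a per-stop service time:

```python
def route_cost(route, travel, service, due, fixed=100.0, c_km=2.0, c_late=5.0):
    """route: depot-to-depot stop sequence (0 = depot); travel[i][j] doubles as minutes and km."""
    t, cost = 0.0, fixed
    for i, j in zip(route, route[1:]):
        t += travel[i][j]
        cost += c_km * travel[i][j]
        if j != 0:                                   # customer stop
            cost += c_late * max(0.0, t - due[j])    # soft-time-window lateness penalty
            t += service[j]                          # longer if installed synchronously
    return cost

travel = [[0, 10, 15], [10, 0, 7], [15, 7, 0]]
service = {1: 30.0, 2: 5.0}   # stop 1: synchronous delivery+installation; stop 2: delivery only
due = {1: 45.0, 2: 20.0}
print(route_cost([0, 1, 2, 0], travel, service, due))
```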
    Finally, we conduct a case analysis on real-life order data from a logistics and distribution center. We explore different delivery and installation strategies and evaluate the proposed model's operational performance against the uniform implementation of synchronous or asynchronous delivery and installation. The results reveal that: (1) The mixed delivery and installation model achieves maximum cost savings by operating synchronous delivery and installation as much as possible while applying the asynchronous strategy to the remaining customer orders. (2) The model realizes a 30% reduction in average customer delivery completion time and requires fewer delivery trucks and installers. (3) Delivery efficiency increases significantly with greater separation of deliveries and installations. In conclusion, the proposed model not only maintains synchronous delivery and installation as much as possible but also saves delivery vehicles and installation personnel, significantly improves delivery efficiency, and reduces the total cost, providing decision support for vehicle routing and scheduling in appliance delivery and installation.
    In further study, the synchronous/asynchronous delivery and installation problem can be extended to multiple periods, dynamic orders, and other characteristics, and can cover more real-life factors.
    Effectively Restricted Neighborhood Structure Based Tabu Search for the Budgeted Maximum Coverage Problem
    LIU Yawen, PAN Dazhi, CHI Ying
    2024, 33(7):  72-78.  DOI: 10.12005/orms.2024.0218
    The Budgeted Maximum Coverage Problem (BMCP), also known as the knapsack maximum coverage problem, is a natural and more practical extension of the standard 0-1 knapsack problem and the set cover problem, and is also highly related to the Set-Union Knapsack Problem (SUKP). Given n elements with non-negative profits and a set of m items, where each item is a subset of the elements and has a non-negative cost, and given a budget, BMCP aims to select items such that their total cost does not exceed the budget and the total profit of the covered elements is maximized. It has a broad range of real-life applications, such as project allocation, financial decisions, worker employment, software installation packages, and service operation providers. As an NP-complete problem highly related to SUKP, BMCP is difficult to solve, and relatively few studies have addressed heuristic or meta-heuristic approaches to the standard BMCP. Therefore, it is of great importance to study this problem.
    In this paper, we propose a more efficient heuristic algorithm for solving the BMCP that improves on the relative stability of solutions in the existing literature, providing more options for solving such problems. We propose an effective restricted neighborhood structure based tabu search algorithm (ERNSBTS) for the BMCP. The algorithm consists of three main parts: dynamic initialization, a neighborhood structure restricted by the relative empty rate and relative gain rate, and dynamic random-perturbation re-initialization. First, since tabu search depends strongly on the initial solution, residual profit and residual value density measures are constructed to generate the initial solution. Then, a counter G is introduced to record how often each element is covered under the current solution, and two strategies are designed to identify the most promising subsets and restrict the neighborhood structure. After that, greedy and heuristic ideas are combined to design a perturbation procedure that takes convex combinations of global and local information into account, increasing the diversity of initial solutions. In ERNSBTS, the relative empty rate, relative gain rate, and dynamic random-perturbation re-initialization together ensure the convergence and diversity of the search.
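    For readers unfamiliar with the underlying mechanics, the bare-bones tabu search below solves a tiny BMCP-like instance with single-item flip moves and a fixed tabu tenure; the restricted neighborhood and perturbation components that distinguish ERNSBTS are deliberately omitted, and all data are invented:

```python
profit = [3, 5, 4, 6]                        # element profits (invented)
items = [{0, 1}, {1, 2}, {2, 3}, {0, 3}]     # elements covered by each item (invented)
cost, budget = [4, 3, 5, 4], 8

def value(sol):
    """Total profit of elements covered by the selected items."""
    covered = set().union(*(items[i] for i in sol)) if sol else set()
    return sum(profit[e] for e in covered)

sol, best, tabu = set(), set(), {}
for it in range(200):
    # Candidate flips: non-tabu items that keep the solution within budget.
    moves = [i for i in range(len(items))
             if tabu.get(i, -1) < it
             and (i in sol or sum(cost[j] for j in sol) + cost[i] <= budget)]
    if not moves:
        continue
    i = max(moves, key=lambda i: value(sol ^ {i}))   # best single flip
    sol ^= {i}
    tabu[i] = it + 5                                  # tabu tenure of 5 iterations
    if value(sol) > value(best):
        best = set(sol)
print(best, value(best))
```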
    In the computational experiments, the optimal combination of parameters in ERNSBTS is first determined using an orthogonal experimental approach. Secondly, to evaluate the effectiveness of the algorithm, we run it on 30 benchmark instances of the BMCP and compare the results with an approximation algorithm, the probability learning based tabu search algorithm (PLTS), and the variable depth local search algorithm (VDLS). Finally, an ablation study of the initialization method is conducted. The results show the high competitiveness of the proposed ERNSBTS algorithm in terms of solution quality, computational efficiency, and robustness.
    In further studies, many related problems that differ little from the budgeted maximum coverage problem can be addressed with the frameworks proposed in this paper, such as the relative empty rate and relative gain rate; the Set-Union Knapsack Problem (SUKP) is one example.
    Collaborative Filtering Hybrid Recommendation Algorithm Based on Optimal Weight and its Application
    YU Qiaochu, ZHAO Mingqing, LUO Yuting
    2024, 33(7):  79-84.  DOI: 10.12005/orms.2024.0219
    Collaborative filtering, a relatively mature information filtering technology among recommendation algorithms, is widely used in commodity recommendation, but it faces problems such as data sparsity and cold start, which reduce recommendation quality. In view of the low prediction accuracy and recommendation quality of traditional collaborative filtering algorithms, many scholars have proposed improvements to single algorithms. Some have improved the similarity measure to strengthen a single collaborative filtering algorithm, but they do not comprehensively use the recommendation information of multiple single algorithms to further improve recommendation quality; hybrid recommendation is an effective strategy for this problem. Although hybrid strategies alleviate data sparsity and improve recommendation accuracy, their weight determination often lacks a theoretical basis and is overly subjective. This paper therefore proposes a collaborative filtering hybrid recommendation algorithm (BEST-CF) based on the idea of optimal combination prediction.
    Collaborative filtering mainly uses the similarity between users or between items to predict a user's rating of an item, which is, in essence, a prediction problem. Different recommendation algorithms produce different score predictions. To make effective use of these different predictions, overcome the subjectivity in determining algorithm weights, and improve the quality of hybrid recommendation more effectively, this paper applies the idea of optimal combination prediction to collaborative filtering hybrid recommendation so as to improve score prediction accuracy. The BEST-CF algorithm obtains the optimal weights by constructing an optimal combination prediction model, and optimally combines the user-based collaborative filtering algorithm (User-CF) and the item-based collaborative filtering algorithm (Item-CF) on the MovieLens 100K data set.
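    The optimal-combination idea has a well-known closed form: choose weights summing to one that minimize the sum of squared fitting errors, giving w = M^{-1}1 / (1'M^{-1}1), where M is the cross-product matrix of the individual algorithms' errors. The sketch below applies it to made-up User-CF and Item-CF predictions:

```python
import numpy as np

actual = np.array([4.0, 3.0, 5.0, 2.0, 4.0])       # true ratings (invented)
pred_user = np.array([3.5, 3.4, 4.6, 2.5, 3.8])    # User-CF predictions (invented)
pred_item = np.array([4.2, 2.7, 4.8, 1.8, 4.3])    # Item-CF predictions (invented)

E = np.column_stack([actual - pred_user, actual - pred_item])  # per-algorithm errors
M = E.T @ E                          # error cross-product matrix
ones = np.ones(2)
w = np.linalg.solve(M, ones)
w /= ones @ w                        # w = M^{-1} 1 / (1' M^{-1} 1), weights sum to 1
combo = w[0] * pred_user + w[1] * pred_item
print(w, np.abs(actual - combo).mean())   # combined MAE vs. the single algorithms
```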
    The experimental results show that the MAE of BEST-CF is 7% lower than that of Item-CF and 11% lower than that of User-CF, and its RMSE is 11% lower than that of Item-CF and 15% lower than that of User-CF. Therefore, BEST-CF significantly improves score prediction accuracy and can improve recommendation quality. Finally, BEST-CF is applied to the recommendation of insurance products. The experimental results show that its recommendation accuracy is significantly higher than that of Item-CF and User-CF: BEST-CF can effectively improve the accuracy of product recommendations, better match insurance products to customer preferences, and alleviate the problem that customers who lack insurance knowledge cannot determine whether products fit their needs.
    The advantage of the BEST-CF algorithm lies in how its weights are determined: it overcomes subjectivity by choosing the combined forecast that minimizes the sum of squared fitting errors. Its performance is superior both to any single recommendation algorithm and to arbitrary weighted combinations of multiple single algorithms. However, it does not account for dynamic changes in user interests or potential overfitting, which is the next issue we will study.
    Performance and Importance Analysis of Network System Considering Loss and Recovery
    DUI Hongyan, XU Huiting, WANG Ning, LIU Yumin
    2024, 33(7):  85-90.  DOI: 10.12005/orms.2024.0220
    A complex network is a logical model that reflects the connectivity between entities: it abstracts the entities in a complex system into nodes and the relationships between them into edges. Complex network systems are closely related to people's lives and are widely used in transportation, communication, power, industrial, and other systems. If nodes and edges fail under external influences such as natural disasters or human attacks, production and daily life suffer losses in human, material, and financial resources. Therefore, to reduce such losses, it is of great significance to study the prioritized recovery order of failed nodes in complex network systems and thereby improve the performance recovery capability of complex network systems after failure.
    Aiming at the problem of node recovery order after multi-node failures in complex network systems, and to study the recovery priority of failed nodes from different perspectives, this paper establishes loss importance, recovery importance, and resilience importance measure models based on Birnbaum importance measure theory, combining the loss and recovery of network performance with importance. These models consider the impact of node state changes on both the loss and the recovery of system performance, so that the critical nodes with the greatest impact on system performance can be found and the recovery priority of failed nodes determined. First, the performance change process of a complex network system after being hit and repaired is analyzed, and the loss performance and recovery performance of the system are defined. Then, network nodes are divided into two states, normal operation and failure, and the loss and recovery performance of the system are analyzed for each state. Since different nodes have different impacts on network performance, the loss importance measure and recovery importance measure of nodes are defined by combining performance with importance measure theory. By comparing the recovery importance values of nodes, the recovery of network performance can be evaluated; by comparing their loss importance values, changes in network vulnerability can be assessed, so that preventive maintenance can be carried out in advance for nodes with higher importance. Finally, the ratio of a node's recovery importance measure to its loss importance measure is defined as the node's resilience importance measure. The resilience importance measure integrates the impact of the node's state on both the loss and the recovery of system performance: the larger a node's resilience importance value, the greater its impact on network performance and the higher its maintenance priority.
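    A stylized computation of the three measures on a toy network is sketched below (all performance values are invented): loss importance is the normalized performance drop when a node fails alone, recovery importance is the normalized performance regained when that node is repaired first after a multi-node failure, and resilience importance is their ratio:

```python
P_full = 100.0                                       # nominal system performance
P_all_failed = 20.0                                  # performance with A, B, C all down
P_without = {"A": 55.0, "B": 70.0, "C": 85.0}        # performance if only that node fails
P_repair_first = {"A": 60.0, "B": 45.0, "C": 30.0}   # performance if repaired first

for n in ("A", "B", "C"):
    i_loss = (P_full - P_without[n]) / P_full        # loss importance measure
    i_rec = (P_repair_first[n] - P_all_failed) / P_full   # recovery importance measure
    print(n, round(i_loss, 2), round(i_rec, 2), round(i_rec / i_loss, 2))
# Repair priority follows the resilience importance ranking (largest value first).
```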
    To verify the validity of the proposed importance measure models, this paper introduces a land transportation network system containing 6 transportation aggregation points and 11 transportation dispersion points. The nodes are divided into transportation hubs and dispersal points, and the edges represent transportation routes between nodes. Network flow is represented by the cargo volume of nodes and edges, and the states of all transportation nodes and routes are assumed to be mutually independent. By substituting the cargo volumes of different nodes in different states into the importance measure models, the loss, recovery, and resilience importance values of the nodes are obtained, so that the impact of node states on system vulnerability, recovery, and resilience can be assessed and the recovery sequence of failed nodes in different states determined. Ultimately, by comparing importance values, the recovery order of different failed nodes is obtained. The land transportation network case illustrates, to some extent, the validity of the proposed importance measure models. However, these models still have limitations: the conditions considered are not comprehensive, since recovery costs may differ across nodes, and nodes with a large impact on system performance recovery may also be more costly to restore. Future research can incorporate cost into the model and study the impact of failed nodes on performance recovery under recovery cost constraints. In addition, this paper only studies the recovery order of failed nodes; the importance and recovery of failed edges can be studied in the future.
    Optimal Location of COVID-19 Testing Stations Considering Staffing and Working Time
    XIANG Yin
    2024, 33(7):  91-97.  DOI: 10.12005/orms.2024.0221
    Once a COVID-19 epidemic breaks out, regular nucleic acid testing becomes a major measure for epidemic prevention and control in various regions. Through regular nucleic acid testing, the government can effectively achieve the goal of “early detection, early reporting and early isolation” of infected persons and reduce the risk of epidemic transmission. Taking Suzhou City as an example, after COVID-19 broke out there in February 2022, the government fully launched a normalized nucleic acid testing policy, requiring residents to undergo nucleic acid testing every 48 hours and present a test certificate before entering or leaving the community. However, given the sudden onset of the epidemic, the large sampling demand, and the shortage of staff, the sampling process can easily fall into disorder without a reasonable site location and personnel allocation plan prepared in advance. In this situation, how should testing stations be located? How should residential areas be allocated to stations? How many “sampling booths” should be set up at each testing station? How should testing personnel be configured and their working hours set? These are all decision-making issues faced by epidemic prevention and control departments.
    Obviously, the optimal layout of testing stations is a facility location problem. As a classical combinatorial optimization problem, facility location has received wide attention from scholars in the field of management science. However, existing location models mainly focus on the p-median, p-center, set covering, and maximum covering models, and they are mainly applied to the location of commercial facilities. At present, there is very little literature on the location of testing stations. Different from the traditional facility location model, the testing station location problem not only has the common characteristics of the traditional location-allocation problem but also needs to further determine service capacity (e.g., the number of testing booths at every station), optimize personnel allocation (e.g., how to allocate residents and doctors to every station), and determine working hours (how many hours each station is open every day). Therefore, in line with the current policy of “dynamic zero clearing”, this paper proposes a new model of the nucleic acid sampling location problem to improve sampling efficiency. This model extends the traditional location-allocation problem by further considering the optimization decisions of service capacity, staffing, and working hours, together with the related constraints. The problem is formulated as a nonlinear mixed integer model, which can be transformed into an equivalent linear model by adding variables and constraints.
    Finally, the linear equivalent model is solved by commercial software such as Cplex. In particular, in order to test how large a problem our solution method can solve in an acceptable time, we perform a grid test. The results show that: 1)As the problem size increases, the computational time keeps growing, because the expansion of the problem size increases the number of model constraints and variables, greatly raising the computational cost of the branch and bound algorithm in Cplex. 2)Within a time limit of 3,600 seconds, the largest problem we can solve with Cplex contains 525 nodes, which indicates that our model has certain applicability in real-world scenarios. Furthermore, our model is applied to the nucleic acid sampling case of Shuangta Street in Suzhou City. The calculation results not only help the government obtain an optimal testing station location and personnel allocation plan, but also verify the feasibility and effectiveness of our model.
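    The structure of such a location-allocation model with capacity decisions can be sketched in a few lines; the sets, costs, and the capacity rule below are illustrative assumptions, not the paper's formulation.

```python
# A minimal sketch of a testing-station location-allocation MILP with booth
# (capacity) decisions, using PuLP; all data and the capacity rule are assumed.
import pulp

sites = ["s1", "s2", "s3"]               # candidate station sites (assumed)
areas = ["a1", "a2", "a3", "a4"]         # residential areas (assumed)
demand = {"a1": 800, "a2": 1200, "a3": 600, "a4": 900}  # samples per day
dist = {("a1","s1"): 1, ("a1","s2"): 3, ("a1","s3"): 4,
        ("a2","s1"): 2, ("a2","s2"): 1, ("a2","s3"): 3,
        ("a3","s1"): 4, ("a3","s2"): 2, ("a3","s3"): 1,
        ("a4","s1"): 3, ("a4","s2"): 4, ("a4","s3"): 2}
open_cost, booth_cost = 500, 120         # fixed costs per station / booth (assumed)
rate, hours, max_booths = 40, 8, 6       # samples per booth-hour, daily hours

m = pulp.LpProblem("testing_station_location", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
b = pulp.LpVariable.dicts("booths", sites, lowBound=0, upBound=max_booths, cat="Integer")
x = pulp.LpVariable.dicts("assign", [(a, s) for a in areas for s in sites], cat="Binary")

m += (pulp.lpSum(open_cost * y[s] + booth_cost * b[s] for s in sites)
      + pulp.lpSum(dist[a, s] * demand[a] * x[a, s] for a in areas for s in sites))
for a in areas:                          # each area is assigned to exactly one station
    m += pulp.lpSum(x[a, s] for s in sites) == 1
for s in sites:
    m += b[s] <= max_booths * y[s]       # booths only at open stations
    m += pulp.lpSum(demand[a] * x[a, s] for a in areas) <= rate * hours * b[s]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: (y[s].value(), b[s].value()) for s in sites})
```

    The last constraint ties assigned demand to booth capacity, which is the kind of coupling that makes the full model (with staffing and hour decisions) nonlinear before its linearization.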
    Multi-stage Comparative Analysis of Emergency Coordination Efficiency in Major Emergencies from the Perspective of Social Networks ——Zhengzhou “7.20” Super Heavy Rainstorm Disaster as an Example
    QIE Zijun, BAI Na
    2024, 33(7):  98-104.  DOI: 10.12005/orms.2024.0222
    The current global climate anomaly is anticipated to lead to a heightened frequency of extreme natural disaster events in the future. With the rapid pace of urbanization, large urban complexes are increasingly developing. This convergence of high-risk and fragile social systems will further underscore the interconnection, complexity, and importance of extreme disaster events. As a result, a single emergency entity can no longer adequately address the emergency needs arising from such complex disasters, necessitating collaboration among multiple entities within emergency organizations. However, cross-organizational emergency coordination does not always operate with optimal efficiency. The limited feasibility of emergency plans, the fragmentation of inter-organizational collaboration, and inadequate crisis learning all impede the maximization of efficiency in inter-organizational collaborative governance. It is crucial to optimize and strengthen cooperation among emergency organizations and establish a coordinated and efficient risk management model.
    This article focuses on a significant major emergency—the “7.20” super heavy rainstorm disaster in Zhengzhou, Henan province, as the subject of research. The primary source of research data consists of case-related news reports and emergency plans at all levels published on government websites. Initially, text analysis is employed to identify the emergency organizations involved in this major event and establish them as nodes within the emergency collaboration network. The interactive and collaborative relationships between organizations are considered as edges within the emergency collaboration network. From a systemic and practical perspective, inter-organizational planning and collaboration networks based on the emergency plan are constructed alongside actual response collaboration networks during two distinct time stages. Subsequently, social network analysis is utilized to compare structural differences and evolutionary characteristics across different types and stages of emergency collaborative networks by examining both overall network characteristics and individual organizational positions. Furthermore, this study delves into assessing the alignment between inter-organizational collaboration outlined in the emergency plan with actual responses during current disaster relief efforts, as well as evaluating local government learning effects during crises.
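    The network comparisons involved can be illustrated with a small sketch, assuming a toy pair of planned and actual collaboration networks (organization names and ties are made up for illustration):

```python
# A minimal sketch of comparing a "planned" and an "actual" emergency
# collaboration network: overall density, degree centrality, and tie overlap.
import networkx as nx

planned = nx.Graph([("EMA", "Fire"), ("EMA", "Health"), ("Fire", "Police")])
actual = nx.Graph([("EMA", "Fire"), ("Fire", "Police"),
                   ("Police", "Volunteers"), ("EMA", "Volunteers")])

for name, g in [("planned", planned), ("actual", actual)]:
    print(name, "density:", round(nx.density(g), 3))
    print(name, "degree centrality:", nx.degree_centrality(g))

# Overlap of collaboration ties is one simple proxy for plan-practice alignment.
shared = set(map(frozenset, planned.edges())) & set(map(frozenset, actual.edges()))
print("shared ties:", len(shared), "of", planned.number_of_edges(), "planned")
```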
    The main findings of this paper are as follows: (1)The alignment between the plan and the actual collaborative situation is suboptimal. Emergency planning poses challenges to facilitating efficient emergency response collaboration, exhibiting characteristics of “limited operability”. This is evidenced by a lack of forward extension, inadequate information communication mechanisms, an insufficiently prominent unified command body, and the oversight of certain critical emergency response nodes. (2)Additionally, the initial actual response collaboration network shows deficiencies in collaborative efficiency and in the role of the core organization. (3)Timely crisis learning can significantly enhance the efficacy of emergency response collaboration in major emergencies. The actual response collaboration network demonstrates characteristics of automatic evolutionary optimization under the influence of government crisis learning, tending to form an emergency organizational network structure with core organizational leadership and a coordinated division of labor.
    Based on the findings of this study, future research will endeavor to investigate the influencing factors that promote the establishment of coordination relationships among diverse emergency responders in large-scale disaster scenarios, and enhance our comprehension of the dynamic evolution mechanism of emergency collaboration networks. Additionally, it would be advantageous to validate the scientific principles identified in this investigation through examination of additional similar cases of extreme disasters.
    Research on Technical Standardization of Standard Alliance Based on Differential Game
    CAO Xia, LI Weijia
    2024, 33(7):  105-111.  DOI: 10.12005/orms.2024.0223
    With the advent of the digital intelligence era and the rapid development of emerging technologies, technical standards have gradually become the focus of competition among enterprises and industries. Although China has successively introduced a number of policies and regulations to boost the implementation of its technical standards strategy, technical standards drafted by China are still rare. A technical standard alliance is an organization formed to achieve the goal of technology standardization, at whose core are enterprises with key technical intellectual property rights and strong research and development strength. The emergence of standard alliance organizations has accelerated the process of enterprise technology standardization, but problems remain, such as poor cooperation among alliance members and a lack of profit distribution criteria. Therefore, it is important to analyze in depth the technical standardization of standard alliances and to discuss the cooperation modes and benefit mechanisms among alliance members.
    Based on differential game theory, this paper incorporates both technical standard R&D and diffusion into the research framework, takes the leading and supporting enterprises in the alliance as the research objects, examines the optimal strategies and optimal returns of the two game parties under different mechanisms, and discusses the benefit distribution mechanism of the standard alliance within a dynamic framework. A leading enterprise is the enterprise at the core of the alliance that holds advanced key technologies and rich technical and market resources and undertakes the main task of technology standardization. Supporting enterprises are enterprises with heterogeneous technology and production resources that support and assist leading enterprises to jointly promote technology standardization activities.
    In the model description part, this paper puts forward five assumptions covering standardization cost, technology level, market share, market demand, and related factors. In the model analysis section, it constructs and explores the technical standardization strategies and benefits of alliance enterprises under three mechanisms: Nash non-cooperation, cost-sharing, and collaborative cooperation. Under the Nash non-cooperation mechanism, both leading and supporting enterprises aim to maximize their own revenue. Under the cost-sharing mechanism, leading enterprises take the initiative to bear part of the standardization costs for supporting enterprises. Under the collaborative cooperation mechanism, leading and supporting enterprises work together to maximize the overall benefits of the alliance and jointly determine the standardization strategies of both sides of the game. In the equilibrium comparative analysis part, it compares the optimal strategies of alliance enterprises, system benefits, and technology standardization level under the different mechanisms, and then proposes the revenue synergy mechanism of the alliance. In the simulation analysis section, it integrates the results of expert consultation and existing studies to assign values to the model parameters, and then simulates the evolution of alliance enterprise strategies, enterprise and overall revenues, and technology standardization level to verify the correctness of the preceding propositions.
    It is found that the optimal strategy of alliance enterprises is negatively correlated with the cost coefficient, the technology decline coefficient, and the market shrinkage coefficient, and positively correlated with the technology and market sensitivity factors; under the collaborative cooperation mechanism, the optimal strategy, optimal revenue, and technology standardization level of the alliance enterprises are the highest, and the effect of standard R&D and diffusion is the best; and there exists an optimal benefit distribution mechanism that achieves system Pareto optimality and the highest individual revenues at the same time.
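    The state dynamics driving these comparisons can be sketched numerically. A minimal sketch, assuming a linear state equation and constant effort profiles standing in for the Nash and cooperative regimes (all values are illustrative, not the paper's calibration):

```python
# Sketch of the standardization-level dynamics dx/dt = a*eL + b*eS - delta*x
# under two assumed effort profiles; parameters are illustrative only.
import numpy as np

a, b, delta = 0.6, 0.4, 0.2          # effort effectiveness / obsolescence (assumed)
dt, T = 0.01, 50.0
t = np.arange(0.0, T, dt)

def trajectory(eL, eS, x0=0.0):
    """Forward-Euler integration of the standardization-level ODE."""
    x = np.empty_like(t)
    x[0] = x0
    for k in range(1, len(t)):
        x[k] = x[k-1] + dt * (a * eL + b * eS - delta * x[k-1])
    return x

x_nash = trajectory(eL=0.5, eS=0.3)  # lower efforts under self-interested play
x_coop = trajectory(eL=0.8, eS=0.6)  # higher efforts under full cooperation
print("steady states:", round(x_nash[-1], 3), "vs", round(x_coop[-1], 3))
# The steady state approaches (a*eL + b*eS)/delta, so the cooperative regime
# sustains a higher standardization level, as in the comparative result above.
```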
    Differential Game Study of the Cross-organizational Cooperative R&D of General Purpose Technologies with Information Technology Support
    ZHENG Yuelong, LIU Siman, BAI Chunguang, ZHANG Yueyue
    2024, 33(7):  112-118.  DOI: 10.12005/orms.2024.0224
    The quasi-public-good characteristics of general purpose technologies mean that their R&D features long cycles, continuous input, knowledge spillovers, and income uncertainty, which makes general purpose technology R&D prone to the failure dilemma of insufficient investment; cross-organizational cooperation has therefore become an important organizational mode for alleviating this dilemma. Previous research has confirmed the effectiveness of cooperative R&D of general purpose technologies using empirical and modeling methods, but has paid little attention to benefit distribution and payment structures. Therefore, with the support of information technology, it is of great theoretical and practical significance to study the dynamic game and the choice of payment structure among participants in cross-organizational cooperative R&D of general purpose technologies.
    This paper introduces a cost payment structure where R&D effort levels can be monitored with information technology support, categorizing general purpose technologies cross-organizational R&D scenarios into output-oriented and process-oriented ones. By establishing a differential game model involving leading enterprises, universities and research institutes, and the government, the paper comparatively analyzes the optimal decisions of leading enterprises and universities and research institutes, the R&D system benefits, and the optimal government support strategies under the two R&D scenarios, and explores the selection of two R&D scenarios and their influencing factors. The research results show that: (1)Under both R&D scenarios, when the incremental elasticity of output is small (large), the R&D effort levels and benefit impact coefficients of leading enterprises and universities/research institutes will be positively (negatively) correlated with each other, and negatively (positively) correlated with their R&D effort cost coefficients and technology obsolescence rates, with a similar impact from monitoring costs; when the benefit impact coefficient is large (small), the R&D effort levels of both parties will be positively (negatively) correlated with the output elasticity and its increment. (2)When the incremental elasticity of output is small, the R&D effort levels and R&D benefits in the output-oriented scenario will be greater than those in the process-oriented scenario, otherwise, the ratio of monitoring costs to the effort cost coefficient of universities/research institutes will need to be considered. (3)In the output-oriented scenario, the cost subsidy coefficient is equal to the profit distribution coefficient between the cooperating parties, while in the process-oriented scenario, the government does not need to provide cost subsidies; when the incremental elasticity of output is small or large, the government's benefits will be higher in the output-oriented scenario, otherwise, the government will benefit more in the process-oriented scenario.
    The main contributions of this paper are: (1)Establishing a differential game model for cross-organizational cooperative R&D of general purpose technologies involving leading enterprises, universities and research institutes, and the government. (2)Designing output-oriented and process-oriented payment structures with the support of information technology, and advancing research on process-oriented cross-organizational cooperative R&D of general purpose technologies. (3)Conducting an analysis of the two scenarios with examples, and analyzing the impact of different payment structures on the optimal decisions and benefits of leading enterprises and universities/research institutes, so as to enrich research in the field of general purpose technology R&D. Future extensions of this work include exploring diversified government support mechanisms to leverage the enabling role of information technology in general purpose technology breakthroughs, and addressing the challenge of achieving horizontal joint efforts in the supply and diffusion stages and vertical integration from supply to diffusion in cross-organizational cooperative R&D.
    A Class of Improved PRP Conjugate Gradient Methods
    YE Jianhao, CHEN Hongsheng, GUO Ziteng
    2024, 33(7):  119-122.  DOI: 10.12005/orms.2024.0225
    Optimization methods have been developed over recent decades, primarily using mathematical approaches to study the optimization paths and solutions of various systems and to provide a scientific basis for decision-makers. The purpose of optimization methods is to find the best plan for the rational use of human, material, and financial resources in the system under study, to enhance and improve the system's efficiency and benefits, and ultimately to achieve the system's optimal goal.
    Optimization methods can be divided into unconstrained and constrained optimization methods. Unconstrained optimization methods include the steepest descent method, Newton's method, the conjugate direction method, the conjugate gradient method, and the variable metric method. Constrained optimization methods include the simplex method, the graphical method for solving linear programming, the penalty function method for equality constraints, and the Rosen gradient projection method, among others. The conjugate gradient method requires only first-order derivative information, but it overcomes the slow convergence of the steepest descent method and avoids the drawback of Newton's method of having to store and compute the Hessian matrix and its inverse. It is characterized by low memory requirements and simple iterations, making it an effective method for solving large-scale unconstrained optimization problems. Different conjugate gradient parameters correspond to different conjugate gradient methods. In recent years, with the development of hot fields such as machine learning, fuzzy theory, and neural networks, and the increasing maturity of computer technology, optimization methods have been increasingly valued, and the conjugate gradient method has naturally attracted more scholars to in-depth study and research.
    Current research on the conjugate gradient method falls mainly into two categories. The first directly improves the conjugate gradient parameters; the second mixes different conjugate gradient methods, for example by convexly combining two existing conjugate gradient methods to construct new algorithms. Different mixing methods differ in their advantages, disadvantages, and convergence characteristics. Although existing conjugate gradient methods have shown excellent performance in practice, some algorithms still have limitations, such as sensitivity to parameters, applicability only to specific functions, and convergence that can be proved only under restrictive conditions. Therefore, issues such as the selection of convex combination parameters, the refinement of new conjugate gradient methods, and the proof of convergence under weaker line search conditions remain to be further studied. In practical applications, the PRP method is considered one of the most effective conjugate gradient methods.
    In this paper, based on the two-term descent PRP method and the three-term descent PRP method, we propose a class of descent PRP methods. When the parameters take specific values, the methods reduce to the two-term descent PRP method and the three-term descent PRP method, respectively. Moreover, the search direction satisfies the sufficient descent property independently of the line search. Under suitable conditions, we show that the algorithm is globally convergent under an Armijo-type line search. Numerical results show that the algorithm is effective.
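    For reference, the following is a minimal sketch of the classic PRP method with an Armijo-type backtracking line search and a descent-restart safeguard — the standard scheme the paper builds on, not its new class; the test function and parameters are illustrative.

```python
# Classic PRP conjugate gradient with Armijo backtracking (illustrative sketch).
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=2000, sigma=1e-4, rho=0.5):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                        # safeguard: restart if not a descent direction
            d = -g
        alpha = 1.0                           # Armijo backtracking line search
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)  # PRP conjugate gradient parameter
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Toy test: the 2D Rosenbrock function, minimized at (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(prp_cg(f, grad, [-1.2, 1.0]))           # should approach (1, 1)
```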
    Grey Barycentric Triangular Grid Relational Analysis Model Based on Panel Data
    WU Honghua, LIU Sifeng, FANG Zhigeng, DU Junliang
    2024, 33(7):  123-129.  DOI: 10.12005/orms.2024.0226
    Grey relational analysis is a key part of grey system theory and the basis of grey system modeling, grey decision-making, and grey control models. The basic idea of grey relational analysis is to judge the degree of relation between different sequences from a geometric point of view by comparing the similarity of curve features. Because the grey relational analysis model has the merits of “small computation and low data-quantity requirements”, it has been successfully applied in economic management, geology and environmental protection, biological science, and other fields. Although grey relational analysis has achieved remarkable results in practical applications, it still has some shortcomings. For example, different construction methods of surface clusters may produce non-unique relational degree results, and changing the order of indicators often changes the degree of relation. The traditional grey relational analysis model can thus be easily affected by the construction of surface clusters or by the order of indicators.
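    For orientation, here is a minimal sketch of the classic Deng grey relational degree — the textbook point-wise model, not this paper's barycentric triangular grid variant; the data are illustrative.

```python
# Classic Deng grey relational degree (illustrative sketch, toy data).
import numpy as np

def deng_relational_degrees(reference, sequences, rho=0.5):
    """Deng grey relational degrees of several sequences to a reference sequence."""
    diffs = {k: np.abs(reference - s) for k, s in sequences.items()}
    d_min = min(d.min() for d in diffs.values())   # global two-level minimum
    d_max = max(d.max() for d in diffs.values())   # global two-level maximum
    return {k: float(np.mean((d_min + rho * d_max) / (d + rho * d_max)))
            for k, d in diffs.items()}

ref = np.array([0.9, 0.8, 1.0, 0.7])               # system characteristic sequence
seqs = {"city_A": np.array([0.85, 0.60, 0.95, 0.75]),
        "city_B": np.array([0.40, 0.90, 0.50, 0.80])}
print(deng_relational_degrees(ref, seqs))          # higher value = closer relation
```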
    Aiming at the problems mentioned above, a new spatial projection method for panel data is proposed, and a grey barycentric triangular grid relational analysis model based on panel data is constructed in this paper. Firstly, based on the permutation and combination principle, the sample matrix is decomposed into binary index sub-matrices, which are projected as points in three-dimensional space, and a spatial tetrahedron is obtained by connecting each set of four adjacent points in pairs. Secondly, the barycenter of the tetrahedron is derived and connected with the four vertices, adjacent vertices are also connected, and the barycentric triangular surfaces are established, yielding the barycentric triangular grid of the binary index sub-matrix. Thirdly, based on the volume of the curved-top cylinder over the barycentric triangular surface, the formula of the relational coefficient is constructed, and a grey barycentric triangular grid relational analysis model for panel data is proposed. The proposed model overcomes the shortcomings of existing grey relational analysis of panel data, is easier to apply in practice, and has good properties, such as normalization, symmetry, and invariance of the relational degree under translation transformations.
    The proposed model is applied to air quality assessment for six cities along the east-west route of Shandong province: Jinan, Weifang, Zibo, Liaocheng, Yantai, and Weihai. According to the Ambient Air Quality Standards, AQI, PM2.5, PM10, SO2, CO, NO2, and O3 are selected as evaluation indicators. The range transform is used to normalize the raw data, and the characteristic behavior matrix of the system is determined from the processed data of the six cities. Using the proposed model to calculate the relational degree between the sample matrix of each city and the characteristic behavior matrix, we find that Weihai and Yantai rank first, followed by Weifang, Jinan, Liaocheng, and Zibo. Through comparison and analysis, the rationality and effectiveness of the model are verified. Furthermore, the results indicate that the proposed model can measure the degree of relation between panel data, confirming its objectivity and practicality.
    For panel data, this paper presents a new spatial projection method and proposes a grey barycentric triangular grid relational analysis model. The proposed model can measure the relationships and influences between panel data, and it complements and improves the theory of grey relational analysis for panel data. It is worth pointing out that the focus of the grey relational analysis model is mainly on the order of the relationships rather than the magnitude of the relational values; the relational degree measures the interrelationships and influences between sample data.
    Branch and Price Algorithm to Solve Home Health Care Scheduling Problem with Medical Resources
    LI Yanfeng, LUO Nan
    2024, 33(7):  130-136.  DOI: 10.12005/orms.2024.0227
    With the accelerating trend of population aging in China, and in order to alleviate the pressure of elderly care on residents and meet the medical security requirements of different groups, the country is actively responding to population aging and vigorously promoting home health care signing services and home-based elderly care services, with home health care as the main service content. In this service, home medical staff visit patients to provide medical services. Decision makers need to develop an optimal scheduling plan for home health care staff while considering various factors. This type of problem is known as the home health care routing and scheduling problem.
    In the process of home health care services, home medical staff need to carry different medical resources to serve patients, such as thermometers, blood pressure monitors, gastric tubes, and drugs. Items such as thermometers can be reused, while drugs and gastric tubes are disposable; medical resources therefore have both reusable and disposable properties. The total amount of medical resources that home medical staff can carry is limited, and the resources carried affect the home-visit routes of the staff. Therefore, studying the home health care routing and scheduling problem with these medical resource characteristics has important practical significance.
    Most previous studies have not considered the scheduling of medical resources. Some have jointly considered the scheduling of medical personnel and medical resource vehicles and established a mathematical model to minimize travel and service costs. Others have considered the vehicle routing problem for scheduling traditional Chinese medicine resources in home health care services. Still others have studied the routing problem of pickup and delivery vehicles with time windows in home health care services, that is, the distribution of drugs and medical devices from the pharmacy of a home care company to patients, and the collection of biological samples and unused drugs and medical devices from patients.
    We consider the need for medical staff to bring medical resources for on-site services, dividing medical resources into reusable resources (such as medical equipment) and disposable resources (such as drugs). An integer programming model is established to minimize operating costs and the penalty costs of flexible service time windows, taking into account constraints such as medical resources, flexible time windows for patients receiving services, doctor-patient matching, and working time limits for medical staff. A branch-and-price algorithm, a combination of column generation and branch and bound, is designed to solve the model. The column generation algorithm finds the optimal solution of the relaxed problem at each node as the node's lower bound, and the branch and bound algorithm then searches for integer solutions. The core of column generation lies in solving the pricing sub-problem; this paper proposes an improved labeling algorithm that handles both reusable and disposable resources in the pricing sub-problem, in order to improve solution speed. We analyze the efficiency of this algorithm through a large number of experimental instances.
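    The column-generation loop at the heart of branch and price can be sketched as follows. This is a minimal illustration in which the pricing step scans a pregenerated route pool, standing in for a labeling algorithm over resource-feasible routes; all patients, routes, and costs are toy assumptions, and the branching step to integrality is omitted.

```python
# Minimal column-generation skeleton for a route-based set-partitioning master.
import pulp
from itertools import combinations

patients = [1, 2, 3, 4]
# Toy route pool: every subset of up to 3 patients is feasible, with a made-up cost.
pool = [(set(c), 10 + 7 * len(c)) for r in range(1, 4) for c in combinations(patients, r)]
columns = [({p}, 17) for p in patients]                  # initial single-patient routes

while True:
    # Restricted master LP: cover every patient exactly once with selected routes.
    rmp = pulp.LpProblem("rmp", pulp.LpMinimize)
    lam = [pulp.LpVariable(f"r{i}", lowBound=0) for i in range(len(columns))]
    rmp += pulp.lpSum(cost * v for (route, cost), v in zip(columns, lam))
    cons = {}
    for p in patients:
        cons[p] = pulp.lpSum(v for (route, _), v in zip(columns, lam) if p in route) == 1
        rmp += cons[p]
    rmp.solve(pulp.PULP_CBC_CMD(msg=False))
    duals = {p: cons[p].pi for p in patients}

    # Pricing: pick the pool route with the most negative reduced cost.
    best, best_rc = None, -1e-6
    for route, cost in pool:
        rc = cost - sum(duals[p] for p in route)
        if rc < best_rc:
            best, best_rc = (route, cost), rc
    if best is None:                                     # no improving column: stop
        break
    columns.append(best)

print("LP optimum:", pulp.value(rmp.objective))
print("routes used:", [(sorted(route), round(v.value(), 2))
                       for (route, _), v in zip(columns, lam) if v.value() > 1e-6])
```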
    Through case analysis, it is found that the carrying capacity limit and the flexible time windows have a significant impact on home health care routing. The upper limit on the medical resources each staff member can carry is crucial: by adjusting the home-visit routes, resources can be allocated so that staff work within their capabilities, and nursing center managers can adjust the maximum amount of resources carried in light of staff workloads and routes. Flexible time windows not only make scheduling routes more flexible but also reduce overall scheduling costs, so decision-makers can encourage customers to provide flexible service time windows. The efficiency of the branch-and-price algorithm is verified by comparing it with CPLEX on instances of different scales. Future research can consider the home health care routing and scheduling problem with medical resource scheduling in multi-period settings, and design more effective algorithms to solve it.
    Application Research
    Day-to-day Traffic Assignment Model of Heterogeneous Travelers in Urban Road Network under MaaS Mode
    CHEN Lingjuan, ZHAO Chenghua, MA Dongfang, LIU Zupeng
    2024, 33(7):  137-143.  DOI: 10.12005/orms.2024.0228
    To study the day-to-day route choice behavior and the evolution of traffic distribution of heterogeneous users in urban road networks under the Mobility as a Service (MaaS) model, travelers are classified into platform users, who accept MaaS scheduling, and free users, who make their own decisions. The travel behaviors of the two types of users are described as follows: (1)During the day-to-day decision-making process, users adjust whether to continue accepting platform scheduling, implying that user attributes change during the day-to-day evolution. (2)Free users seek to maximize their own travel utility and adjust their routes from day to day. (3)The platform plans and publishes system routes based on pre-trip input from users, following the principle of System Optimal (SO). Under the MaaS model, heterogeneous users decide whether to change their travel mode and routes for the day based on the previous day's travel outcomes; changes in user travel decisions shift the final distribution of network traffic, thereby influencing users' travel on the following day.
    Furthermore, this paper calculates the average travel time of platform users and free users across all routes and, based on the principle of bounded rationality, defines a tolerance threshold: only when the travel time saved by an alternative exceeds this tolerance do users reconsider whether to join the MaaS platform. Based on these tolerance levels, transition probabilities are calculated and a day-to-day adjustment model for the total number of each type of user is established. Similarly, a day-to-day transition model for free users' routes is developed based on the time saved and the tolerance values. The day-to-day model distributes the total number of MaaS users to routes following the SO principle, and transforms the path impedance to establish a User Equilibrium (UE) model equivalent to SO. A Frank-Wolfe algorithm for solving the model is also provided. The paper demonstrates flow conservation in the overall model and uses fixed point theory to analyze the existence of solutions.
    Using the FOURSIX network as an example to validate the model, the following conclusions are reached: (1)At equilibrium, the travel times on different routes of the same origin-destination (OD) pair are not equal, and the difference between routes reflects the tolerance of boundedly rational users. (2)The adjustment of total travel time in the network gradually weakens from its initially intense state, and the network does not stabilize at an optimal state, mainly because of differences in individual user utilities and fairness issues during allocation, which leave route flows unstable; heterogeneous users may still re-choose on the next day based on the utility they receive. (3)At equilibrium, the overall share of platform users in the network increases, indicating that MaaS travel services have certain advantages and attractiveness, but corresponding strategies are needed to retain users. (4)The total travel time in a network of heterogeneous users is always less than that of a network of pure free users, indicating that MaaS services optimize the network by fully utilizing network facilities and reducing overall travel time.
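    The UE-type subproblem that the Frank-Wolfe algorithm solves can be illustrated on a toy network. A minimal sketch, assuming a single OD pair, two parallel routes, and BPR-style link costs (all values are illustrative, not the FOURSIX data):

```python
# Frank-Wolfe for user equilibrium on a two-route toy network (illustrative).
import numpy as np

demand = 10.0                      # OD demand
t0 = np.array([1.0, 1.5])          # free-flow travel times of the two routes
cap = np.array([5.0, 8.0])         # route capacities

def cost(f):                       # BPR travel-time function
    return t0 * (1 + 0.15 * (f / cap) ** 4)

f = np.array([demand, 0.0])        # initial all-or-nothing loading
for k in range(1, 200):
    y = np.zeros(2)
    y[np.argmin(cost(f))] = demand # auxiliary all-or-nothing flow
    step = 2.0 / (k + 2)           # predetermined step size
    f = f + step * (y - f)

print("UE flows:", f.round(3), "route times:", cost(f).round(3))
# At user equilibrium, the used routes have (nearly) equal travel times.
```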
    Collaborative Incentive Contract of Live Streaming Commerce Supply Chain Considering Platform Traffic Subsidy
    ZHANG Yanfen, XU Qi, CHEN Haijun
    2024, 33(7):  144-150.  DOI: 10.12005/orms.2024.0229
    Live streaming commerce has become an emerging channel in the supply chain to boost product sales, clear inventory, and enhance brand value. However, as live streaming commerce is a relatively new business mode, several critical issues bear on its healthy development. These include the streamer's effort in promoting products (such as comprehensive product descriptions, professional recommendations, and a positive attitude), the contractual relationships among streamers, brand suppliers, and platforms, and the fair distribution of revenues. Therefore, this paper focuses on the live streaming commerce supply chain consisting of brand suppliers, live streaming platforms, and streamers. Based on principal-agent theory and considering information asymmetry, this study constructs incentive contract models under two scenarios: one where the platform has no contractual relationship with the streamer and only the brand supplier provides incentives, and the other where the streamer contracts with the platform, with the brand supplier offering commission incentives and the platform providing traffic incentives. The research explores the optimal revenue distribution and incentive contracts under these two scenarios, analyzes the incentive effect of platform traffic subsidies, and examines how the decision variables change with the streamer's influence, providing insights and references for the healthy development of the live streaming commerce supply chain. The main conclusions of this study are as follows.
    (1)Revenue sharing ratios among live streaming platforms, brand suppliers, and streamers: when the platform has no contract with the streamer, the optimal revenue-sharing ratio for the platform depends only on the product category (i.e., on the price and cost of that type of product) and is fixed, which aligns with real-world live streaming commerce. In the contractual model, once the streamer's influence exceeds a certain threshold, the revenue-sharing ratios of both the streamer and the brand supplier increase, while the platform's share decreases.
    (2)Traffic subsidy incentives from live streaming platforms to streamers: the traffic subsidy provided by the platform increases with the streamer's influence, meaning that the more influential a streamer is, the more traffic subsidy it receives. In particular, providing traffic subsidies to highly risk-averse streamers can have a greater incentive effect.
    (3)Profits of the members of the live streaming commerce supply chain: there exists a signing threshold such that contracting with the streamer is unfavorable for the platform when the streamer's influence falls below it, and beneficial only when the streamer's influence exceeds it. In practice, therefore, live streaming platforms could prioritize inviting streamers whose influence exceeds the signing threshold and offer attractive traffic subsidy incentives, so as to optimize the expected profits of all supply chain members and achieve a win-win situation.
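    The principal-agent machinery behind such contracts can be illustrated with the textbook linear-contract (LEN) benchmark, in which the optimal commission rate is β* = 1/(1 + rcσ²); this is a standard result, not this paper's model, and all parameter values below are illustrative.

```python
# Holmstrom-Milgrom LEN benchmark: CARA agent with risk aversion r, effort cost
# c*e^2/2, and output e + noise of variance sigma2; illustrative parameters only.
def optimal_commission(r, c, sigma2):
    """Optimal incentive intensity beta* = 1 / (1 + r*c*sigma2)."""
    return 1.0 / (1.0 + r * c * sigma2)

for r in [0.5, 1.0, 2.0]:                  # streamer risk aversion (assumed)
    beta = optimal_commission(r, c=1.0, sigma2=0.8)
    effort = beta / 1.0                    # agent's best response: e* = beta / c
    print(f"risk aversion {r}: commission {beta:.3f}, effort {effort:.3f}")
# More risk-averse streamers receive lower-powered commissions, which is one
# intuition for why additional levers such as traffic subsidies matter for them.
```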
    The limitations of this study lie in its focus on the incentive issues when the streamer's promotional efforts are unobservable. Future research could also consider the moral hazard when brand suppliers provide counterfeit or inferior products. Additionally, this study only considers the streamer's influence, and future research could incorporate the streamer's bargaining power when negotiating with suppliers for more price discounts, to explore pricing issues in live streaming e-commerce.
    Government Behavior and Coordination in Cross Regional Integration and Sharing of S&T Service Resources under the Background of Cloud Platform
    HUANG Xiaoqiong, XU Fei
    2024, 33(7):  151-157.  DOI: 10.12005/orms.2024.0230
    In the era of the knowledge economy, the regional industrial innovation and development have placed increasingly higher demands on the level of scientific and technological services. Against this background, a batch of technological service cloud platforms serving the innovative development of regional industrial clusters have emerged. These regional technological service cloud platforms need to integrate technological service resources from multiple administrative regions to enhance their spillover effects and unleash the potential value of technological resources. As the investors and owners of most technological resources, the government serves as the top-level designer, supervisor, and coordinator of the sharing mechanism of technological resources, possessing both administrative power and superior management resource advantages. Giving full play to the role of the government is conducive to promoting the sharing of regional technological resources. Therefore, studying the behavior of local governments in sharing technological resources from a micro perspective is of great significance for building a regional technological service system based on cloud platforms and advancing regional innovation and development.
    This article focuses on the issue of the integration and sharing of technological service resources among local governments under the background of cloud platforms. It employs a differential game model to investigate the cross-regional integration and sharing of technological service resources promoted by local governments within a dynamic framework. Taking the allocation level of technological service resources on technological service cloud platforms as the state variable, the article proposes a coordination mechanism for government resource sharing behavior based on a two-way cost-sharing contract, considering the equilibrium results under both Nash non-cooperative game and collaborative game scenarios. Furthermore, it analyzes the impact of factors such as the scale of innovation demand and the cost of innovative enterprises joining the platform on the equilibrium results.
    The research results show that: (1)The Nash non-cooperative game where local governments aim to maximize their own interests is not advisable, and a two-way cost-sharing contract can effectively coordinate the government's resource sharing behavior. (2)Compared with the Nash non-cooperative game scenario, the optimal resource sharing effort and optimal revenue of local governments under the two-way cost-sharing contract have both improved. (3)The optimal cost-sharing rate borne by the government is positively correlated with the size of local market demand, and negatively correlated with the size of competitors' market demand, the cost of enterprises joining the cloud platform, and the sensitivity coefficient of enterprises to the joining cost. (4)The cloud platform can intervene in the strategic choices of local governments by adjusting the cost of enterprises joining the platform.
    The research conclusions provide theoretical support for promoting cross-regional integration and sharing of technological service resources. The study emphasizes the importance of regional technological service cloud platforms in this process. On the one hand, it is necessary to continuously improve the basic functions of the cloud platform, such as supply-demand matching and interaction, and to actively develop advanced functions such as intelligent push and service recommendation. On the other hand, measures such as a reasonable fee system and subsidies for participating enterprises can reduce the entry cost for innovative enterprises. The study also points out that it is crucial to coordinate the resource sharing behavior of local governments and to help them reach a reasonable two-way cost-sharing contract while ensuring its implementation. Additionally, local governments need a thorough understanding of the market environment and should set reasonable cost-sharing rates, as unreasonable rates may prove counterproductive.
    In the future, the resource integration and sharing among multiple local governments can be further studied. Besides, the quantity and quality of technology service resources and the complementarity of resources among local governments are also important factors that affect the decision-making of local government resource sharing. The influence of these factors on optimal decision-making needs further study.
    Deep Learning Method Integrating Network Structure and Node Attribute for Link Prediction
    LIU Peng, GUI Liang, WANG Huirong, XIA Haoxiang
    2024, 33(7):  158-165.  DOI: 10.12005/orms.2024.0231
    In reality, social systems from various domains can be effectively characterized through network models, often exhibiting structural properties distinct from random networks, such as small-world and scale-free characteristics. The formation of these non-trivial structural properties is closely associated with the establishment of relationships (i.e., links) among individuals (i.e., nodes) in the network. Consequently, accurately predicting potential relationships in the network not only helps deepen our understanding of the underlying mechanisms driving network formation but also further elucidates the relationship between network topology and system function. Thus, the prediction of links between nodes has become an important research problem in the field of network science. For link prediction, a commonly used method is heuristic algorithms based on similarity. However, in more complex network scenarios, such methods struggle to effectively address high-dimensional non-linear problems resulting from network scale expansion or node feature growth. In recent years, the emergence of deep learning-based approaches has provided new opportunities by transforming complex network information into low-dimensional representation vectors. However, most existing deep learning-based approaches primarily achieve link prediction through the similarity of embedding representation vectors of network structures. Many empirical studies indicate that the formation of links in the network is influenced by node attributes, and similarity alone is not the sole criterion for link formation. Therefore, the link prediction approach based on deep learning is worth further exploration.
    In this paper, we propose a deep walk-deep neural network for link prediction (DDLP) model, which integrates network structure and node attribute information for link prediction. This model consists of two stages, i.e., the stage of node feature embedding and the stage of link prediction. In the first stage, network structure information is embedded using deep walks. Then, to obtain node feature vectors, the embedded structure feature vectors are merged with standardized node attribute feature vectors through early fusion. In the second stage, a deep learning model is constructed to capture the link patterns between node feature vectors through supervised learning, thereby achieving relationship prediction. We select real network data from three different domains, including open-source software development, patent research and development, and scientific collaboration, to examine the effectiveness of the model. Additionally, in the experimental sample networks, we compare the predictive performance of the proposed model with traditional methods such as common neighbors (CN) and resource allocation (RA), deep learning methods that only consider node structural information like deep walk and node2vec, as well as models that can incorporate node attributes like variational graph auto encoders (VGAE) and graph convolutional networks (GCN).
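    The two-stage pipeline can be sketched compactly. The following minimal illustration uses a toy graph, random node attributes, and assumed hyperparameters rather than the paper's settings: random-walk embeddings are fused with attributes, and an MLP is trained on labeled node pairs.

```python
# Sketch of the DDLP idea: DeepWalk-style embeddings + attribute fusion + MLP.
import random
import numpy as np
import networkx as nx
from gensim.models import Word2Vec
from sklearn.neural_network import MLPClassifier

g = nx.karate_club_graph()
attrs = {n: np.random.rand(4) for n in g.nodes}        # toy node attributes

# Stage 1: structure embedding via truncated random walks and skip-gram.
def random_walk(graph, start, length=10):
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(list(graph.neighbors(walk[-1]))))
    return [str(n) for n in walk]

walks = [random_walk(g, n) for n in g.nodes for _ in range(10)]
w2v = Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1, epochs=5)

def node_feature(n):                                   # early fusion of structure + attributes
    return np.concatenate([w2v.wv[str(n)], attrs[n]])

# Stage 2: supervised link prediction on concatenated pair features.
pos = list(g.edges)
neg = random.sample(list(nx.non_edges(g)), len(pos))
X = np.array([np.concatenate([node_feature(u), node_feature(v)]) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```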
    The results show that the DDLP model, based on node feature embedding, effectively captures the distribution patterns of links in the network. Its performance (precision, recall, and F1 score) significantly surpasses that of traditional models based on vector similarity (such as CN and RA) and of deep learning models such as node2vec and VGAE. Furthermore, compared with predictive methods that use only network structural information, incorporating node attributes significantly enhances the predictive ability of the DDLP model and of comparative models such as VGAE and GCN. In particular, the DDLP model attains the highest performance metrics, indicating that incorporating node attributes allows it to learn a richer set of rules for link formation and thus offer superior performance. This further shows that node-vector similarity alone is not enough to predict link formation; more refined processing is needed for a model to learn the rules of link formation within networks.
    This study not only proposes a deep learning framework that integrates network structure and node attribute information for link prediction but also lays the methodological foundation for related applications, such as system recommendations. In future work, we will explore the portability of the framework to other network analysis tasks such as mechanism analysis of the network formation, link prediction in heterogeneous networks, etc. through more extensive experiments. In addition, we intend to optimize the DDLP model to reduce computational complexity, making it more suitable for link prediction in ultra-large-scale networks.
    Research on the Deposit Strategy of Modular Reusable Containers with a Carbon Tax Policy
    XU Xianhao, CHEN Xuemei, YUE Ruiting
    2024, 33(7):  166-172.  DOI: 10.12005/orms.2024.0232
    In recent years, the conflict between industrial development and environmental contamination has become more prominent, and environmental issues have become a global concern. While pursuing economic development, various industries have gradually put environmental protection on the agenda. Pollution from disposable plastic packaging is one of the major causes of environmental pollution, and the emergence of the reusable container (RC) provides a viable way to alleviate this pollution.
    As an important category of RC, the modular reusable container (MRC) has been widely adopted in a variety of industries because it is leak-proof and easy to transport. An MRC is composed of two or more sub-parts that function only when combined, such as workbins in the automobile industry, medical waste recycling bins in the medical industry, and ceramic tableware in the takeaway catering industry. However, while the MRC better protects products against leakage, the complexity of its composition also increases the difficulty of MRC operations management for the owner and brings a series of management problems.
    Against the background of the takeaway catering industry and under a carbon tax policy, this paper considers suppliers who deliver their products in MRC and recycle the containers after customers remove or use the contents, where different parts of the MRC have different loss rates, so the whole MRC is not necessarily lost at once. Two optimization models, with and without an MRC deposit, are developed to obtain the optimal decisions (i.e., the number of MRC per cycle, the amount of the deposit under a deposit policy, and the purchase volume of MRC per cycle). The paper explores the effect of different deposit strategies on the supplier's MRC operations and carbon emissions, and analyzes the impacts of the carbon emission coefficients and the carbon tax on the supplier's optimal decisions and carbon emissions. A numerical analysis is then performed with the example of a takeaway catering company.
    The results show that, firstly, to obtain the best benefits, the supplier should either keep the MRC loss rate at a low level and charge no MRC deposit, or keep the maximum market demand per unit time at a high level and charge a certain MRC deposit: when the MRC loss rate is higher, the profit from charging a deposit is larger, and when the maximum market demand per unit time is lower, the profit without a deposit is larger. Besides, the supplier's carbon emissions can be reduced by a deposit strategy, so charging a deposit is a good choice for the supplier from the perspective of emissions reduction. Meanwhile, the supplier can reduce its number of MRC per cycle by charging a certain deposit, or reduce its MRC purchase volume per cycle by charging no deposit. Finally, the carbon emission coefficients and the carbon tax have opposite effects on the supplier's carbon emissions but the same effect on the MRC deposit: as the carbon emission coefficients increase, the supplier's carbon emissions per unit time gradually increase, while as the carbon tax increases, they gradually decrease; and the value of the MRC deposit gradually increases both as the carbon emission coefficients for procurement or for inspection, cleaning, and maintenance increase and as the carbon tax increases.
    This paper considers the situation in which the whole MRC is not necessarily lost at once, which is closer to the real situation, and at the same time accounts for carbon emissions, in line with the concept of green development. It can therefore provide valuable references for the efficient operation and management of MRC.
    Study on Non-probabilistic Entropy for Hesitant Fuzzy Set and its Application
    TAN Jiyu, LIU Gaochang
    2024, 33(7):  173-179.  DOI: 10.12005/orms.2024.0233
    In multi-attribute decision making, because of the complexity of human thinking and personal preferences, there often arise highly uncertain situations in which a decision organization consisting of several experts is not sure about a single value and hesitates among several possible values when providing the membership degree of an element to a set. To describe this decision scenario, hesitant fuzzy sets were introduced by Torra and Narukawa in 2009. Hesitant fuzzy sets allow an element to belong to a set with multiple different membership values, effectively handling inconsistent preferences among multiple experts. However, the number of membership degrees in different hesitant fuzzy elements (HFE) may differ, and the uncertainty of hesitant fuzzy sets comprises both fuzzy uncertainty and hesitant uncertainty, which makes the calculation of hesitant fuzzy entropy complex. The existing literature has contributed significantly to the study of hesitant fuzzy entropy, but two shortcomings remain: some measures ignore hesitant uncertainty and consider only fuzzy uncertainty, while others require artificially adding new membership degrees based on risk preference when comparing the entropy values of two hesitant fuzzy elements. Some studies have addressed one of these issues, but none has addressed both simultaneously. The concept of non-probabilistic entropy was proposed by Deluca and Termini in 1972 to measure the uncertainty of fuzzy sets. Kosko then proposed a concise non-probabilistic fuzzy entropy formula from the perspective of distance: the ratio of the distance from the fuzzy information to its nearest non-fuzzy neighbor over the distance to its farthest non-fuzzy neighbor. Motivated by the principle of non-probabilistic entropy for fuzzy sets, this paper studies a hesitant fuzzy non-probabilistic entropy measure.
    Firstly, we critically review the existing entropy measures for HFE and demonstrate their shortcomings. Because the multiple membership degrees of a hesitant fuzzy element coincide with the multiple dimensions of Euclidean space, this paper treats HFE as points in Euclidean space, so that the membership degrees of an HFE can be regarded as coordinates. We then analyze in depth how fuzzy uncertainty and hesitant uncertainty evolve in Euclidean space. Based on the hesitant fuzzy Euclidean distance, the concept of hesitant fuzzy non-probabilistic entropy is proposed: it is a ratio of the distances between a hesitant fuzzy element and two reference points (the full HFE and the empty HFE). To show the superiority of the hesitant fuzzy non-probabilistic entropy, a comparative analysis is carried out against existing entropy measures. The comparison shows that the proposed entropy has a higher distinguishing ability. When comparing the entropy values of two hesitant fuzzy elements, it makes full use of the raw decision information without artificially adding new membership degrees, avoiding the distortion of decision information. In addition, the proposed entropy effectively combines hesitant uncertainty with fuzzy uncertainty without using a bivariate aggregation function to aggregate the two.
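    A minimal sketch of this distance-ratio construction follows, under our own illustrative reading (an assumption, not the paper's exact formula) that the nearer of the two reference distances is divided by the farther, so the value lies in [0, 1]:

```python
# Kosko-style distance-ratio entropy of an HFE, with the empty HFE (all 0s)
# and the full HFE (all 1s) as reference points; an illustrative reading only.
import numpy as np

def hfe_non_probabilistic_entropy(h):
    """Distance-ratio entropy of an HFE given as an array of membership degrees."""
    h = np.asarray(h, dtype=float)
    d_empty = np.linalg.norm(h - 0.0)          # distance to the empty HFE
    d_full = np.linalg.norm(h - 1.0)           # distance to the full HFE
    near, far = min(d_empty, d_full), max(d_empty, d_full)
    return near / far                           # 0 for crisp, 1 at maximal fuzziness

print(hfe_non_probabilistic_entropy([0.5, 0.5]))      # -> 1.0 (most uncertain)
print(hfe_non_probabilistic_entropy([0.0, 0.0]))      # -> 0.0 (crisp)
print(hfe_non_probabilistic_entropy([0.3, 0.6, 0.8])) # intermediate value
```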
    On the basis of the above theoretical analysis, a group decision method is developed by applying the proposed hesitant fuzzy non-probabilistic entropy. An investment company wants to invest in tourism projects. Tourism project investment carries great risks, so in-depth investigation and careful decision-making are necessary. After a preliminary investigation, four tourism projects are selected as alternatives. Renowned experts are invited to conduct a risk assessment of the alternatives and to evaluate each option comprehensively in order to select the best investment. Four representative risk assessment indicators are considered: market, policy, facility, and management risk. According to the decision information and the proposed decision method, the ranking of the tourism projects is obtained.
    The follow-up study will attempt to expand the proposed method to other fuzzy environments. In addition, as the complexity of decision-making problems and number of experts increase, we will study large-scale group decision-making and combine it with complex network theory.
    Traders' Limited Rationality, Information-Noise Correlation, and Information Efficiency of Financial Market
    WANG Mingtao, SUN Ximing
    2024, 33(7):  180-186.  DOI: 10.12005/orms.2024.0234
    The efficiency of market information has always been a hot topic in academic research. High market information efficiency plays an important role in improving the efficiency of resource allocation and promoting technological progress and economic growth. Studying the relationship between information and market efficiency, together with its influencing factors, is of great significance for improving market information efficiency and promoting the healthy development of the capital market.
    This paper studies information efficiency and the mechanism by which financial markets respond to information from the perspectives of traders' limited rationality and the correlation between information and noise, using a two-period pricing model with a single risky asset and three types of traders. To overcome the shortcomings of traditional indices for measuring market information efficiency, the information contribution to the market (ICM) is put forward. The research shows that there is an inverted U-shaped relationship between the degree to which information is public, measured by the proportion of informed traders among all traders, and ICM, which evaluates information efficiency. This conclusion reconciles the findings of LEE and LIU (2011) and GOLDSTEIN et al. (2014). LEE and LIU (2011) found that when there is little information in the market, as the number of informed traders increases, the capitalization of private information into stock prices leads to increased stock price volatility and improved information efficiency; GOLDSTEIN et al. (2014) found that increasing the number of informed traders may reduce market information efficiency.
    Under the condition of traders' limited rationality, the ICM of underreaction is higher than that of appropriate reaction and overreaction when there is little information in the financial market, while the ICM of appropriate reaction is higher than that of underreaction and overreaction when there is more information. YOU Jiaxing (2008) found that investors who underreact are mainly institutional investors, while investors who overreact are mainly individuals. Therefore, the information efficiency of underreacting to market information is higher than that of overreacting. On the other hand, when most traders in the market have already obtained a piece of information, a conservative response to it is not conducive to price discovery. When information and noise are negatively correlated, ICM decreases. When traders' limited rationality and information-noise correlation are both present, their effects in decreasing ICM reinforce each other.
    Finally, the conclusions are verified using data from the Chinese stock market. To analyze the impact of bounded rationality, noise, and other factors on the relationship between information and its market contribution, the entire sample is divided into bull-market and bear-market sub-samples. It is generally believed that investors are less rational and noisier in bull markets, but relatively rational and less noisy in bear markets. Due to the low degree of information disclosure, the market contribution of underreaction is greater than that of overreaction. In addition, in most cases it can be assumed that the difference in ICM between bull and bear markets results from the joint action of investors' bounded rationality and noise, which supports the above conclusions.
    This paper theoretically explains why investor irrationality and noise reduce market information efficiency, as well as the mechanisms behind certain market phenomena (such as information efficiency being higher in markets with underreaction than in those with overreaction), providing reference suggestions for policymakers seeking to improve market information efficiency. When the degree of information disclosure is low, the goal should be to disclose more information; otherwise, the goal should be to improve information quality. In addition, investors should be guided to invest rationally and excessive speculation should be prevented.
    Efficiency Evaluation of Chinese Commercial Banks Considering Technological Heterogeneity and Dynamic Evolution
    ZHAO Xin, CAI Qingfang, DING Lili
    2024, 33(7):  187-192.  DOI: 10.12005/orms.2024.0235
    At present, China's financial system still relies mainly on indirect financing. In a financial system dominated by indirect financing, the operational efficiency of commercial banks is directly related to the optimal allocation of financial resources across society, and is also an important path for banks to achieve high-quality development. To enhance competitiveness and improve their own efficiency, banks have adopted different development positionings and business models to cope with the impact of the external environment. Differences in banks' scale, development positioning, business model, ownership structure, and so on can lead to heterogeneity in production technology. Bank efficiency is a relative concept: the performance of decision-making units evaluated under one set of production technology constraints cannot be compared with that of units evaluated under different constraints. Therefore, how to evaluate the efficiency of banks with heterogeneous production technology, and its impact on their multi-period operations, is a question worthy of in-depth research.
    Considering the phased characteristics of bank operations, a two-stage dynamic network DEA evaluation model consisting of four heterogeneous systems is constructed to address the technological heterogeneity and the time-lag effects of non-performing loans and carried-over assets in the operation of Chinese commercial banks. The model is applied to evaluate the overall efficiency and stage efficiencies of four types of commercial banks in China from 2012 to 2019. From the perspectives of the entire cycle and each period, the impact of the time-lag effect on the continuous operational efficiency of technologically heterogeneous commercial banks is analyzed in depth, and the changes in operational efficiency of heterogeneous banks over the continuous period and the sources of their inefficiency are explored. Finally, the relative importance of the sources of inefficiency affecting the overall efficiency of China's banking industry is examined using constructed endogenous weights, and corresponding insights are obtained.
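    The paper's two-stage dynamic network DEA is considerably richer than anything reproducible from an abstract, but the underlying relative-efficiency idea can be illustrated with a basic single-stage, input-oriented CCR model in multiplier form, solved as a linear program with scipy; the bank data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).

    X: (n_dmu, m) inputs, Y: (n_dmu, s) outputs.
    max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0 for all j,  u, v >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u.y_o
    A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Hypothetical bank data: inputs = (staff, fixed assets),
# outputs = (loans, fee income); one row per bank.
X = np.array([[20., 300.], [15., 200.], [30., 450.], [25., 350.]])
Y = np.array([[500., 40.], [450., 35.], [600., 30.], [520., 50.]])
for o in range(len(X)):
    print(f"Bank {o + 1}: CCR efficiency = {ccr_efficiency(X, Y, o):.4f}")
```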
    The research finds that: (1)There are significant differences in bank efficiency due to technological heterogeneity. In terms of both stage efficiency and overall efficiency, urban commercial banks perform better overall than the other three types of banks, while joint-stock banks and rural commercial banks perform below the overall average. (2)The efficiency of rural commercial banks in the fund-raising stage is more negatively affected by the time lag of non-performing loans than that of the other three types of banks, while the efficiency of state-owned banks, urban commercial banks, and joint-stock banks in the fund-utilization stage is more negatively affected by the time lag of asset carry-over than that of rural commercial banks. (3)The overall efficiency of Chinese commercial banks and their fund-utilization efficiency have improved to a certain extent. The efficiency of urban and rural commercial banks has improved significantly, but fund-raising efficiency has decreased, with joint-stock banks experiencing the largest decline. (4)The operational efficiency of Chinese commercial banks is dominated by the efficiency of the fund-utilization stage. Fund-raising-stage efficiency matters more for the overall efficiency of urban commercial banks than for the other three types of banks, while fund-utilization-stage efficiency has the greatest impact on the overall efficiency improvement of joint-stock banks.
    Research on Risk Spillover Effect of Chinese and American Capital Markets Based on TVP-VAR Model
    CHEN Weiguo, LI Zhan, YAO Yanzhen, LI Yongwu
    2024, 33(7):  193-199.  DOI: 10.12005/orms.2024.0236
    China's financial market is undergoing a transformation from partial opening to comprehensive opening. The continuous promotion of financial openness is not only necessary for the healthy development of the financial industry itself and the continuous deepening of financial supply-side structural reform, but also an inherent requirement for achieving high-quality economic development. It helps to enhance the breadth and depth of the financial market, promote the efficient allocation of global capital, and strengthen China's voice in global finance. However, the impact of the external risks brought by financial openness on a country's economic and financial stability and security has become increasingly prominent. How to effectively resolve external financial risks, clarify risk information transmission mechanisms, and prevent financial risk contagion is of great significance for ensuring the stable development of the capital market, facing global competition in resource allocation, continuously promoting high-level financial openness, and supporting China's economic rise.
    The research objective is to analyze risk spillovers between capital markets in depth and to resolve external financial risks in a timely manner. Fully considering the intermediary role of the Hong Kong stock market in risk spillover between the Chinese and American stock markets, daily data on major stock indices in the two markets from January 1, 2007 to December 30, 2020 are used. First, a risk spillover index is constructed based on the TVP-VAR model, and the static risk spillover coefficient is used to analyze the spillover relationships between markets. Secondly, since financial markets and their interrelationships change constantly in reality, the dynamic risk spillover coefficient is used to study the interrelationships between markets in different periods. Thirdly, to analyze each market's risk spillover and absorption in different periods, the total net risk spillover coefficient of each market and the net bidirectional risk spillover coefficients between markets are constructed and analyzed. Finally, to examine the causal relationships between the markets' dynamic spillover effects, Granger causality tests are applied to the dynamic spillover series.
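    The paper's index rests on a TVP-VAR with time-varying parameters. As a simplified stand-in, the sketch below computes a Diebold-Yilmaz-style total spillover index from the orthogonalized forecast-error variance decomposition of a rolling fixed-parameter VAR (statsmodels), applied to simulated returns standing in for the A-share, Hong Kong, and U.S. indices; the rolling window is a crude approximation of smoothly varying coefficients, but it conveys how a dynamic spillover series is read.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def total_spillover(returns, horizon=10):
    """Diebold-Yilmaz total spillover (%) from a fitted VAR's
    orthogonalized forecast-error variance decomposition."""
    res = VAR(returns).fit(maxlags=2, ic="aic")
    # decomp[:, -1, :] is the (k, k) matrix of FEV shares at the horizon:
    # entry (i, j) = share of variable j in the forecast-error variance of i.
    fevd = res.fevd(horizon).decomp[:, -1, :]
    k = fevd.shape[0]
    cross = fevd.sum() - np.trace(fevd)   # off-diagonal = cross-market shares
    return 100.0 * cross / k

# Simulated stand-in for daily index returns with a common factor.
rng = np.random.default_rng(0)
common = rng.normal(size=1500)
data = pd.DataFrame({
    "A_share": 0.4 * common + rng.normal(size=1500),
    "HK":      0.6 * common + rng.normal(size=1500),
    "SP500":   0.7 * common + rng.normal(size=1500),
})

# Rolling-window version gives a (piecewise) dynamic spillover series.
window = 250
dyn = [total_spillover(data.iloc[t - window:t])
       for t in range(window, len(data), 50)]
print([round(v, 2) for v in dyn])
```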
    The research conclusions show that: (1)There is a strong correlation between markets. Risk spillover within a market is generally higher than that between markets, and the risk spillover from the U.S. stock market to the A-share market is usually larger in crisis periods. (2)There were large risk spillovers between the stock markets in 2011 and 2018. The A-share market spilled risk over to the three major U.S. stock indexes in 2015 and 2018, while in other periods the three major U.S. indexes were mainly risk transmitters to the others. (3)The Granger causality test shows that, at the 5% significance level, the dynamic net spillover effects of the markets Granger-cause each other, except for those of the SPX and IXIC. (4)With the implementation of policies such as the Shenzhen-HK Stock Connect, the Shanghai-HK Stock Connect, and the Mutual Fund Connect, the internationalization of the A-share market is gradually improving, and it is now able to partially influence stock markets such as Hong Kong's, although this influence remains largely regional.
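    Pairwise tests like those in conclusion (3) can be reproduced in spirit with statsmodels' grangercausalitytests, which tests whether the second column of a two-column array Granger-causes the first; the net-spillover series below are simulated placeholders, not the paper's data.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical dynamic net-spillover series for two markets; y is built
# to lag x by one period, so x should Granger-cause y.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 0.5 * np.roll(x, 1) + rng.normal(size=300)
data = np.column_stack([y[1:], x[1:]])   # drop the wrap-around first value

res = grangercausalitytests(data, maxlag=2, verbose=False)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: ssr F-test p-value = {tests['ssr_ftest'][1]:.4f}")
```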
    In response to these conclusions, this paper suggests that: (1)In the future, financial market reform and opening up should be further advanced, the status and role of China's stock market in the international stock market should be clarified, and the internationalization of the financial market and the independence of China's stock market should be enhanced. (2)As economic globalization deepens, local financial risks can evolve into global financial crises through economic and trade activities, which requires governments to adopt a global, forward-looking, and coordinated approach when formulating economic policies and to strengthen cooperation among governments and regulatory authorities.
    Research on Fuzzy Multi-objective Portfolio Model with Investors' Dynamic Loss Aversion
    LI He, JIN Xiu, HOU Yuting
    2024, 33(7):  200-207.  DOI: 10.12005/orms.2024.0237
    Given the non-statistical uncertainty and insufficient historical data in security return forecasts, fuzzy set theory has been applied over the past decades to build portfolio selection models. This paper deals with a multi-objective portfolio selection problem in a fuzzy environment, in which the effects of investors' dynamic asymmetric attitudes toward losses and gains on portfolio selection are considered. Owing to their different dynamic loss aversion characteristics, conservative and aggressive investors achieve different portfolio performance, so the fuzzy multi-objective portfolio model must account for investors' dynamic loss aversion. In multi-period asset allocation, loss-averse investors adjust their investment strategies dynamically. Highly liquid assets help investors adjust their holdings in time and improve investment returns. Because downside risk describes the volatility risk that investors bear when they suffer losses, loss-averse investors pay more attention to a portfolio's downside risk. To meet the needs of dynamic loss-averse investors who pursue high portfolio liquidity and avoid downside risk, a credibilistic portfolio selection model is constructed with dynamic loss-averse utility, liquidity, and downside risk as its objectives, and the multi-stage investment decision problem of different types of investors affected by relative wealth changes is explored.
    Assuming that asset returns and turnover rates are trapezoidal fuzzy numbers, the expected return, lower semi-deviation, and expected liquidity of portfolios are derived, and a fuzzy multi-objective portfolio model is constructed under the credibilistic framework. The extended weighted Chebyshev programming method assigns a different weight to each goal and transforms the fuzzy multi-objective model into a single-objective one, allowing conservative and aggressive investors to assign different levels of importance to each investment objective and thus meeting the needs of various investors in trading off the objectives. A time-varying self-adaptive particle swarm optimization (TVSAPSO) algorithm is proposed to solve the model; it introduces a time-varying inertia weight and acceleration coefficients to balance the particles' dynamic cognitive and social learning abilities within the stochastic ranking approach.
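    Two pieces of the construction are easy to make concrete: under the credibility measure, a trapezoidal fuzzy variable (a, b, c, d) has expected value (a + b + c + d)/4, and a weighted Chebyshev rule scalarizes the objectives by minimizing the largest weighted gap to each objective's ideal value. The sketch below applies both to hypothetical data; the asset numbers, ideal points, and the restriction to two objectives (return and liquidity, omitting the lower semi-deviation) are illustrative assumptions.

```python
import numpy as np

def trap_ev(t):
    """Credibilistic expected value of a trapezoidal fuzzy number
    (a, b, c, d): E = (a + b + c + d) / 4."""
    a, b, c, d = t
    return (a + b + c + d) / 4.0

# Hypothetical trapezoidal fuzzy returns and turnover rates of 3 assets.
returns  = [(-0.02, 0.01, 0.03, 0.06), (0.00, 0.02, 0.04, 0.05),
            (-0.05, 0.00, 0.05, 0.10)]
turnover = [(0.01, 0.02, 0.03, 0.04), (0.02, 0.03, 0.05, 0.06),
            (0.005, 0.01, 0.02, 0.03)]
x = np.array([0.4, 0.4, 0.2])   # candidate portfolio weights

ret = x @ np.array([trap_ev(t) for t in returns])    # expected return
liq = x @ np.array([trap_ev(t) for t in turnover])   # expected liquidity

# Weighted Chebyshev scalarization: minimize the largest weighted gap
# to each objective's ideal value; w encodes investor-type priorities
# (e.g. a conservative investor puts more weight on liquidity).
ideals = {"return": 0.05, "liquidity": 0.04}   # illustrative ideal points
w = {"return": 0.4, "liquidity": 0.6}
gaps = {"return": w["return"] * (ideals["return"] - ret),
        "liquidity": w["liquidity"] * (ideals["liquidity"] - liq)}
print(f"E[return] = {ret:.4f}, E[liquidity] = {liq:.4f}, "
      f"Chebyshev value = {max(gaps.values()):.4f}")
```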
    The results show that the fuzzy multi-objective portfolio model considering dynamic loss aversion outperforms both the model considering static loss aversion and the mean-variance model. In the multi-period portfolio selection model, because of their different dynamic loss aversion characteristics, conservative and aggressive investors differ in the importance they attach to the objectives, in asset structures, and in the performance of their optimal portfolios. Affected by changes in relative wealth, conservative investors are more sensitive to losses and prefer portfolios with higher liquidity to reduce the cost of adjusting their portfolios. Conservative investors select the portfolio that attaches the highest importance to liquidity, preferring the defensive major consumer industry and risk-free assets; the risk-adjusted return of their optimal portfolio is better than that of aggressive investors. Owing to the break-even effect, aggressive investors pursue high returns to make up for early losses: they select the portfolio that attaches the highest importance to loss aversion utility and prefer the risky information and telecommunications industries as well as the defensive major consumer industry. Overall, the fuzzy multi-objective model can meet the dynamic investment demands of different investors and provides a valuable reference for multi-period asset allocation and risk management.
    This paper assumes that the loss aversion utility function is piecewise linear. In future work, we will use nonlinear functions to describe investors' differing risk attitudes toward gains and losses, and will further study the investment decision-making of dynamically loss-averse investors.
    Risk Constraint and Optimal Insurance: An Insurance Contract That Better Meets Expectations of the Insured
    MA Benjiang, JIANG Xuehai, ZHAN Jingang
    2024, 33(7):  208-214.  DOI: 10.12005/orms.2024.0238
    Optimal insurance design has long been a hot and difficult issue in insurance theory, attracting widespread attention from both academia and industry. The pioneering research of Arrow, a Nobel laureate in economics, provides the model basis and research approach for optimal insurance design. He assumes that a risk-neutral insurance company charges an excess premium consistent with the development level of the insurance market under the expected-premium principle, while the insured is risk-averse with a von Neumann-Morgenstern utility function, and he designs insurance products to maximize the insured's expected utility. However, Arrow's research and subsequent related studies ignore the insured's risk constraint needs. In reality, the insured usually hopes to obtain sufficient compensation from the insurance company after an accident and to keep their own losses within an acceptable expected range. Therefore, if Arrow's contract cannot meet the insured's risk constraint needs, how should an insurance contract that does meet them be designed? This issue needs further study.
    On the basis of the Arrow model, a net loss constraint on the insured is introduced for losses no greater than a certain non-negative value, and the insured's optimal insurance problem is studied. This is because: (1)Imposing the net loss constraint over a partial range rather than the entire loss range achieves a utility improvement. (2)Imposing the net loss constraint over the low-loss range rather than the high-loss range yields an optimal contract that motivates the insured to avoid risks reasonably. In addition, the model requires continuity of the compensation function, which prevents insurance companies from refusing to insure out of concern about the insured's moral hazard.
    Following the research approach of RAVIV (1979) and GOLLIER (1987), the model is solved in two steps. First, the optimal insurance contract is studied under the assumption of a fixed premium; this assumption is then relaxed to study the optimal contract with a general premium. The study shows that if the solution of the Arrow model satisfies the insured's net loss constraint, then it is also the solution of this model, and the optimal policy is a partial insurance contract with a single deductible. Otherwise, this model has a special solution, and the optimal policy is a partial insurance contract with two deductibles. Drawing on the methods of MA Benjiang and JIANG Xuehai (2024), this paper also uses an intermediate value to prove a sufficient condition under which the excess premium is strictly positive for the deductible, and from this intermediate value obtains the key quantitative characteristics of the model's special solution. For example, when the insured's utility is optimal, the first deductible is strictly less than the second; the sum of the optimal premium and the first deductible equals the upper limit of the insured's net loss, while the sum of the optimal premium and the second deductible is strictly greater than this upper limit, and so on. In addition, the insured's utility is related to the upper limit of net loss and the cut-off point of small losses: the insured's expected utility increases as the upper limit of net loss increases and as the cut-off point of small losses decreases. However, once the upper limit of net loss increases, or the cut-off point of small losses decreases, to the point where the Arrow solution satisfies the net loss constraint over the insured's low-loss interval, the insured's utility reaches its maximum and improves no further.
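    To make the net loss constraint concrete: under Arrow's single-deductible contract I(x) = max(x - d, 0), the insured's net loss at loss x is the premium plus the retained loss min(x, d). The sketch below checks whether this stays within a cap over a low-loss interval, the feasibility question that decides which of the two contract forms above is optimal; the deductible, premium, cap, and interval are illustrative numbers.

```python
import numpy as np

def arrow_indemnity(x, d):
    """Arrow's optimal contract: a straight deductible d."""
    return np.maximum(x - d, 0.0)

def net_loss(x, d, premium):
    """Insured's net loss: premium plus retained loss min(x, d)."""
    return premium + np.minimum(x, d)

# Illustrative numbers: deductible, premium, net-loss cap L imposed on
# the low-loss interval [0, m].
d, premium, L, m = 5.0, 1.2, 5.5, 8.0

x = np.linspace(0.0, m, 1001)           # losses in the constrained interval
worst = net_loss(x, d, premium).max()   # net loss increases, peaks at x = min(m, d)

if worst <= L:
    print(f"Arrow solution feasible: max net loss {worst:.2f} <= cap {L}")
else:
    # Here the paper's result applies: the constrained optimum becomes a
    # partial contract with two deductibles d1 < d2, where premium + d1
    # equals the net-loss cap.
    print(f"Constraint binds (max net loss {worst:.2f} > {L}): "
          f"two-deductible contract, first deductible d1 = {L - premium:.2f}")
```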
    Future research can be expanded in two directions: (1)Since Arrow's optimal insurance is a deductible contract, the insured's maximum loss is the deductible. In a proportional insurance contract, however, the insured's net loss increases with the loss, so introducing the insured's net loss constraint into proportional contracts would have greater research value. (2)Since the risk-neutrality assumption for insurance companies, the expected utility framework for the insured, and the expected-premium principle all have limitations, building models under insurer risk aversion with other, more realistic premium principles and utility functions for the insured would greatly enrich and deepen this line of research.
    Convergence Measurement and Convergence Mechanism Test of China's Financial Development
    FU Yiting, XUE Weiwen, ZHOU Xin
    2024, 33(7):  215-221.  DOI: 10.12005/orms.2024.0239
    It is a consensus in the academic community that financial development plays an important leading role in economic growth, and the imbalance of regional financial development is one of the factors restricting the coordinated development of China's regions. The financial convergence hypothesis refers to the process by which the financial development level of low-level economies catches up with that of high-level economies in the long run, that is, the process by which the financial development gap between economies gradually narrows to zero. It provides a complete analytical paradigm for studying development gaps: it can directly show the degree to which gaps between economies are narrowing and can reasonably explain the causes of inter-regional gaps. Narrowing the gap in regional financial development is of great significance for coordinating balanced regional economic development. Based on the convergence hypothesis, this paper identifies the convergence of financial development at the national, regional, and provincial levels in China, and further explores the factors influencing China's financial convergence process, so as to provide ideas for explaining the imbalance of regional financial development in China.
    Firstly, to comprehensively measure the level of financial development in China, this paper selects multiple indicators covering banking, securities, and insurance and uses time-series principal component analysis (GPCA) to construct a comprehensive evaluation index. On this basis, the nonlinear time-varying factor model is used to test the convergence of financial development at the national, regional, and provincial levels. The model breaks through the homogeneity assumption that limits traditional identification methods: it can identify short-term divergence coexisting with long-term convergence and can divide economies into convergence groups endogenously. Further, this paper identifies China's existing financial convergence groups through the endogenous convergence-group recognition algorithm and analyzes the distribution of the groups and the changes in their convergence paths.
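    The nonlinear time-varying factor model referred to here is the Phillips-Sul framework, whose convergence check is the log t regression: regress log(H_1/H_t) - 2*log(log t) on log t over the last fraction of the sample, and reject convergence at the 5% level when the one-sided t-statistic on log t falls below -1.65. A sketch on simulated panel data, with illustrative dimensions and a simple HAC correction, follows.

```python
import numpy as np
import statsmodels.api as sm

def log_t_test(X, r=0.3):
    """Phillips-Sul log t convergence test (sketch).

    X: (T, N) panel of positive development levels. Regress
    log(H_1/H_t) - 2*log(log t) on log t over the last (1 - r)
    fraction of the sample; convergence is rejected at the 5% level
    if the t-statistic on log t is below -1.65.
    """
    T, N = X.shape
    h = X / X.mean(axis=1, keepdims=True)   # relative transition paths h_it
    H = ((h - 1.0) ** 2).mean(axis=1)       # cross-sectional variance H_t
    ts = np.arange(int(r * T), T)           # discard the first r*T periods
    y = np.log(H[0] / H[ts]) - 2.0 * np.log(np.log(ts + 1.0))
    reg = sm.add_constant(np.log(ts + 1.0))
    fit = sm.OLS(y, reg).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
    return fit.params[1], fit.tvalues[1]

# Simulated panel: 31 provinces whose idiosyncratic spread shrinks over
# time around a common trend (i.e., convergence holds by construction).
rng = np.random.default_rng(42)
T, N = 60, 31
decay = 1.0 / np.sqrt(np.arange(1, T + 1))
X = np.exp(0.05 * np.arange(T)[:, None]
           + 0.3 * decay[:, None] * rng.normal(size=(T, N)))

b, t_stat = log_t_test(X)
verdict = "convergence not rejected" if t_stat >= -1.65 else "divergence"
print(f"log t coefficient = {b:.3f}, t-stat = {t_stat:.2f} ({verdict})")
```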
    The results show that, first, even allowing for heterogeneous convergence transition paths, China's financial development does not converge nationally, and the four major economic regions (east, central, west, and northeast) also display divergence. Secondly, four financial convergence groups are identified endogenously, indicating that China's financial development converges to four different steady-state levels. Thirdly, the distribution of the convergence groups presents a pyramid pattern of "fewer members in high-level groups and more members in low-level groups", with a polarization of finance that is "developed in the east, underdeveloped in the central, western, and northeastern regions"; over time, however, the gaps between groups have gradually narrowed. Based on this, the paper argues that each convergence group should adopt differentiated development strategies suited to local conditions and build a development mechanism in which competition and cooperation coexist, so as to accelerate financial convergence and promote the coordinated development of financial levels across regions.
    Optimal Investment Portfolio, Cheap Reinsurance, and Barrier Dividend Strategies for Compound Poisson-Geometric Risk
    SUN Zongqi, YANG Peng, FAN Xueshuang
    2024, 33(7):  222-227.  DOI: 10.12005/orms.2024.0240
    In the insurance industry, both the no-claims premium discount system and the deductible system cause the occurrence of claims and the settlement of claims to be unequal. MAO Zechun and LIU Jin'e (2004, 2005) introduced the compound Poisson-Geometric distribution and process, known internationally as the Polya-Aeppli process, to characterize this phenomenon. As an extension of compound Poisson processes, compound Poisson-Geometric processes have attracted the attention of scholars in financial mathematics, with research focusing on mean-variance models, utility models, and ruin probability models.
    The dividend strategy is a crucial risk control method that not only stimulates policyholder engagement but also boosts insurance companies' premium income and enhances their solvency, and it has been widely adopted by insurance companies as part of their management strategy. While some scholars have examined optimal dividends for insurance companies under compound Poisson-Geometric risk processes, they have, for mathematical convenience, not taken investment in risk-free assets into account. Risk-free investments are commonly used by insurance companies to earn returns with high security, stable yields, low volatility, and good liquidity. It is therefore necessary to consider investment in risk-free assets.
    This paper examines the compound Poisson-Geometric risk model with risk-free investment under an optimal portfolio, cheap reinsurance, and a barrier dividend strategy. Using the dynamic programming principle, we derive and solve the HJB equation, obtaining analytic expressions for the optimal investment and cheap reinsurance strategies and the optimal dividend function.
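    For intuition about the barrier strategy itself, the Monte Carlo sketch below simulates a discretized compound Poisson-Geometric (Polya-Aeppli) surplus process, in which claim batches arrive as a Poisson process and each batch contains a geometric number of exponentially distributed claims, and pays out any surplus above the barrier as discounted dividends. It omits the paper's investment and reinsurance controls, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def discounted_dividends(u=10.0, c=4.5, b=15.0, lam=1.0, p=0.4,
                         claim_mean=1.5, T=20.0, dt=0.01, delta=0.05):
    """One path of discounted barrier dividends for a discretized
    compound Poisson-Geometric (Polya-Aeppli) surplus process.

    Batches of claims arrive as a Poisson(lam) process; each batch holds
    a Geometric(p) number of Exponential(claim_mean) claims. Surplus
    above the barrier b is paid out; the path stops at ruin or time T.
    """
    surplus, t, paid = u, 0.0, 0.0
    while t < T and surplus >= 0.0:
        surplus += c * dt                           # premium income
        if rng.random() < lam * dt:                 # a claim batch arrives
            n = rng.geometric(p)                    # geometric batch size
            surplus -= rng.exponential(claim_mean, n).sum()
        excess = surplus - b
        if excess > 0:                              # barrier dividend payout
            paid += np.exp(-delta * t) * excess
            surplus = b
        t += dt
    return paid

values = [discounted_dividends() for _ in range(1000)]
print(f"Monte Carlo expected discounted dividends: {np.mean(values):.3f}")
```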
    Finally, we analyze how changes in key parameters such as the risk-free interest rate influence the optimal investment strategy and the dividend function, verify the rationality of our results, and propose management suggestions. From the perspective of stimulating insurance enthusiasm and increasing dividends, raising initial reserves and investing in risky assets with high yield, low volatility, and low risk correlation with claims, as well as in risk-free assets with high yield, are effective ways to increase dividends; when the risk-free interest rate is high, switching from risky to risk-free assets is also a wise strategy for insurance companies. From the perspective of risk transfer, when risky assets offer high yield and low volatility, the pursuit of dividends calls for more reinsurance, accepting more investment risk, transferring more insurance risk, and maintaining the stability of overall risk. Moreover, the larger the correlation coefficient and the higher the volatility of the risky asset, the more conducive a reduction in reinsurance is to dividends.
    Management Science
    Dynamic Control and Optimization of Enterprise Production
    ZHENG Kuankuan, TAN Jiyang
    2024, 33(7):  228-233.  DOI: 10.12005/orms.2024.0241
    It is essential for a company to have a reasonable purchasing and production plan. Most of the literature relevant to this problem concerns the newsboy (newsvendor) model, a classic problem in operations research and management, and the optimization objective in most of this literature is to maximize expected profit or minimize cost. In addition, how to pay dividends to shareholders is one of the problems producers consider in the course of production and operation. Common dividend schemes include the barrier dividend strategy and the threshold dividend strategy, and many studies have discussed dividend optimization under them.
    This study extends the newsboy problem from a new perspective. We assume that the firm produces and sells a single class of product, as in the multi-period newsboy model, and that the producer pays dividends to shareholders under a barrier dividend strategy. The main model is a discrete Markov decision model, and the objective is to find the production strategy that maximizes the expected discounted dividends. Specifically, we consider two production scenarios. In the first, a firm produces, sells, and stores a class of product with a given amount of principal, and market demand in different periods is a sequence of non-negative, independent and identically distributed discrete random variables. In any production cycle, the decision maker may face an inventory surplus or a supply shortage. We assume that the inventory charge on the surplus product is paid at the end of each period and that the barrier dividend strategy is used to pay shareholders. Under these assumptions, an iterative formula for maximizing the expected discounted dividends is constructed using the law of total expectation. When the review horizon is finite, Python is used to iterate the formula and obtain the optimal production strategy and the optimal value function. We further consider the case where the review horizon is unbounded, with production terminating when the producer's current holdings are insufficient to cover the production and inventory costs of the next period. The Bellman equation maximizing the expected discounted dividends is then developed, and the existence and uniqueness of the optimal value function, i.e., that the optimal value function is the fixed point of the Bellman equation, is proved by the contraction mapping principle. In the second scenario, the unit cost in each period is correlated, as a random variable, with the unit selling price and the market demand of the previous period; the three follow a joint distribution, and the resulting random vectors are independent and identically distributed across periods. We establish the iterative formulation in finite time and the Bellman equation in the unbounded case, and obtain the optimal value function and the optimal production strategy in this multivariate situation.
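    Since the abstract notes that Python is used to iterate the finite-horizon recursion, a minimal sketch of the first scenario is given below. The state is reduced to the producer's capital on an integer grid; unsold product pays a holding charge and, as a simplification not in the paper, is not carried over; and the recursion applies the law of total expectation over discrete demand, with a barrier dividend paid whenever end-of-period capital exceeds the barrier. All parameters are illustrative.

```python
from functools import lru_cache

p, c, h = 6, 3, 1                 # unit price, unit cost, unit holding charge
beta, barrier = 0.95, 30          # discount factor, dividend barrier
demand = [(0, 0.1), (1, 0.3), (2, 0.4), (3, 0.2)]  # discrete demand law
T, CAP = 8, 60                    # review horizon, capital grid cap

def step(x, q, d):
    """End-of-period capital and barrier dividend given capital x,
    production q, and realized demand d."""
    sold = min(q, d)
    y = x - c * q + p * sold - h * (q - sold)
    div = max(y - barrier, 0)
    return min(y - div, CAP), div

def q_value(t, x, q):
    """Expected discounted dividends of producing q, computed with the
    law of total expectation over demand."""
    total = 0.0
    for d, pr in demand:
        y, div = step(x, q, d)
        total += pr * (div + beta * V(t + 1, y))
    return total

@lru_cache(maxsize=None)
def V(t, x):
    """Maximal expected discounted dividends from period t, capital x."""
    if t == T or x < 0:           # horizon reached or ruin
        return 0.0
    return max(q_value(t, x, q) for q in range(x // c + 1))

x0 = 20
print(f"optimal value V(0, {x0}) = {V(0, x0):.3f}")
print("optimal first-period production:",
      max(range(x0 // c + 1), key=lambda q: q_value(0, x0, q)))
```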
    In conclusion, for a dividend optimization problem similar to a multi-period newsboy model, this study provides a method for finding the maximal expected discounted dividend and its corresponding optimal production schedule. The study has some limitations: only a simple form of the Markov decision problem is discussed, and more meaningful conclusions could be obtained by incorporating more complex features such as capital injection, investment, and a Markovian environment process into the model.
    Visualising Demand Uncertainty Supply Chain Management: A Systematic Scientometrics Review
    YANG Zhenjie, ZHANG Wei
    2024, 33(7):  234-239.  DOI: 10.12005/orms.2024.0242
    In the current context of globalization and digitization, supply chain management has become one of the key strategic tools for enterprises to gain competitive advantage. Demand uncertainty poses a common challenge to supply chain management, and enterprises are facing increasingly complex operational environments and risks. Operational risks arising from demand uncertainty and disruption risks caused by external factors are the two main challenges facing supply chain management. Operational risks involve inherent uncertainties such as demand, supply, lead times, prices, and product return levels, with demand uncertainty being the most common. Furthermore, exploring the randomness behind demand is a key challenge to demand quantification management. Disruption risks include exogenous events such as geopolitical issues, natural disasters, and trade protectionism, all of which can have serious impacts on the supply chain management. The advancement of digital technology enables supply chains to provide the right quantity of products at the right time and place more accurately. However, this also poses a significant challenge to balancing supply chain management capabilities with demand uncertainty to cope with fluctuating demand. This article systematically presents the research panorama in this field, helping scholars to better understand the current research status in demand uncertainty supply chain management and providing references for further exploration of research frontiers and hotspots in academia and industry.
    This article uses Citespace software to visually and systematically review 961 papers and literature reviews published from 2009 to June 4, 2022. Through techniques such as core author analysis, keyword co-occurrence, and co-citation analysis, it reveals the “social-concept-knowledge” structure of the demand uncertainty supply chain management field, identifies key concepts and research hotspots, and clarifies key disciplines and emerging trends. Statistical analysis results identify the most influential scholars, key research institutions, core concepts, and the most influential journals in the field of demand uncertainty supply chain management. The current research hotspots in demand uncertainty supply chain management are topics related to product scheduling and the design of supply chain resilience networks, with the main research methods being modeling using operations research and management science methods. Based on the conceptual structure and knowledge structure, conclusions are drawn regarding future research directions and key research questions in demand uncertainty supply chain management.
    Based on the findings of this article, two future research directions and research questions are proposed: from the perspective of supply chain resilience, supply chain resilience involves pre-disaster absorptive capacity, post-disaster adaptive capacity, and recovery capacity, enabling the supply chain to better respond to emergencies. RQ1: how to effectively evaluate supply chain resilience and construct an evaluation index system? RQ2: how to design complex supply chain networks to enhance supply chain resilience? RQ3: how to balance the relationship between supply chain resilience and economic and social benefits? From the perspective of product scheduling, efficient scheduling patterns have become an important source of competitive advantage. RQ4: how does the widespread application of digital technology in product scheduling challenge existing supply chain management theories? RQ5: how to optimize dynamic scheduling problems in the supply chain through digital technology?