1 A Markov Decision Model for Determining Optimal Outpatient Scheduling
Jonathan Patrick, Telfer School of Management, University of Ottawa

2 Motivation
- The unwarranted skeptic and the uncritical enthusiast
- Outpatient clinics in Canada are receiving strong encouragement to switch to open access
- Basic operations research would claim that there is a cost to providing same-day access
- Does the benefit outweigh the costs?

3 Trade-off
- Any schedule needs to balance system-related benefits/costs (revenue, overtime, idle time, ...) against patient-related benefits (access, continuity of care, ...)
- Available levers include how many new requests to serve today and how many requests to book in advance into each future day

4 Scheduling Decisions
[Diagram: new demand being allocated across a booking horizon of Day 1 through Day 5]

5 Literature
- Plenty of evidence that overbooking is advantageous in the presence of no-shows (work by Lawley et al. and by Lawrence et al.)
- Also evidence that a two-day booking window outperforms open access (work by Liu et al. and by Lawrence and Chen)
- The old trade-off between model tractability and complexity

6 Model Aims
- To create a model that:
  - Incorporates a show rate that depends on the appointment lead time
  - Gives managers the ability to determine the number of new requests to serve today and the number of requests to book into each future day (the Advanced Booking Policy, or ABP)
  - Allows the policy to depend on the current booking slate and demand
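As a rough illustration of a lead-time-dependent show rate, the sketch below uses a simple linear decay; the function name and every number are placeholders, not the rates from Gallucci's research or the paper.

    def show_probability(lead_time_days: int,
                         same_day_rate: float = 0.90,
                         decay_per_day: float = 0.03,
                         floor: float = 0.50) -> float:
        # Hypothetical decreasing show rate: highest for same-day appointments,
        # lower for each extra day of lead time, never below a floor.
        return max(floor, same_day_rate - decay_per_day * lead_time_days)

With these placeholder numbers, show_probability(0) is 0.90 and show_probability(5) is 0.75.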

7 Markov Decision Process Model
- Decision epochs: once a day, after today's demand has arrived but before any appointments are seen
- State: the current ABP (w), the queue size (x) and today's demand (y)
- Actions: how many of today's demand to serve today (b), and whether to change the current ABP (a)
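A minimal sketch of one way to encode this state and action space in Python; the class names are illustrative, and reading the ABP as a single per-day advance-booking limit is an assumption.

    from typing import NamedTuple

    class State(NamedTuple):
        w: int  # current ABP, read here as an advance-booking limit per future day (assumption)
        x: int  # queue size: requests already booked in advance
        y: int  # today's demand: new requests that have arrived today

    class Action(NamedTuple):
        a: int  # ABP to use going forward (a == w means keep the current ABP)
        b: int  # number of today's requests to serve today (0 <= b <= y)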

8 Markov Decision Process Model
- Transitions:
  - The only stochastic element is new demand, represented by the random variable D
  - The new queue size equals the current queue size (x), minus today's slate (x ∧ w, i.e. min(x, w)), plus any new demand not serviced today (y − b)
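A sketch of this transition, assuming today's slate is x ∧ w = min(x, w); the Poisson demand model and its mean are placeholders, since the slide does not specify the distribution of D.

    import numpy as np

    def sample_demand(mean_demand: float) -> int:
        # Placeholder for the random demand D; the Poisson choice is an assumption,
        # not taken from the slides.
        return int(np.random.poisson(mean_demand))

    def transition(w: int, x: int, y: int, a: int, b: int, mean_demand: float = 10.0):
        """Advance one decision epoch from state (w, x, y) under action (a, b)."""
        slate_today = min(x, w)              # today's slate: x ∧ w
        next_x = x - slate_today + (y - b)   # queue keeps any new demand not served today
        next_y = sample_demand(mean_demand)  # new demand D for the next epoch
        return a, next_x, next_y             # next state: (ABP, queue size, demand)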

9 Markov Decision Process Model
- Costs/Rewards:
  - System-related: revenue, overtime, idle time
  - Patient-related: lead time
  - A cost for switching the ABP
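A sketch of a per-day reward built from the three groups of terms above; the linear form and all weights are illustrative assumptions rather than the paper's calibrated costs.

    def daily_reward(served: int, capacity: int, lead_times: list,
                     abp_changed: bool,
                     revenue: float = 1.0, overtime: float = 1.5, idle: float = 0.75,
                     lead: float = 0.1, switch: float = 2.0) -> float:
        # System-related terms: revenue per patient seen, overtime beyond capacity,
        # idle time when the day is under-filled.
        system = (revenue * served
                  - overtime * max(0, served - capacity)
                  - idle * max(0, capacity - served))
        # Patient-related term: a penalty per day of appointment lead time.
        patient = -lead * sum(lead_times)
        # One-off penalty for switching the ABP.
        switching = -switch if abp_changed else 0.0
        return system + patient + switching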

10 Bellman Equation
- Used a discounted, infinite-horizon model (with a discount factor of 0.99) to avoid arbitrary terminal rewards
- Can be solved to optimality
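In the notation of the earlier slides (state (w, x, y), action (a, b), per-period reward r, random demand D), a Bellman equation of this kind has roughly the following form; this is a sketch of the shape of the recursion, not the paper's exact formulation:

    v(w, x, y) = \max_{a,\ 0 \le b \le y} \left\{ r(w, x, y, a, b)
        + \gamma \, \mathbb{E}_{D}\!\left[ v\left(a,\ x - (x \wedge w) + (y - b),\ D\right) \right] \right\},
        \qquad \gamma = 0.99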

11 Assumptions/Limitations
- Advance bookings are done on an FCFS basis
- Today's demand arrives before any booking decisions need to be made
- Service times are deterministic
- The show rate depends on the size of the queue at the time of service instead of at the time of booking
- Immediate changes to the ABP may mean that previous bookings need to be shifted
- Does not account for the fact that some bookings have to be made in advance

12 Clinic Types Considered

13 Six Scenarios for each Clinic Type
1. Base scenario
   - Demand equal to capacity
   - Show rate based on research by Gallucci
   - All requests can be serviced the same day
2. Demand > capacity
3. Demand < capacity
4. Some requests must be booked in advance
5. Same-day bookings given a show probability of 1
6. Show probability with a steeper decline

14 Performance Results
- Clinics #1, 2, 3: open access (OA) and the MDP policy result in almost identical profits; same-day access ranges from 89% to 100% (max lead time 1 day)
- Clinics #4, 5, 6: MDP slightly outperforms OA (by less than 2%); same-day access ranges from 84% to 100% (max lead time 2 days)
- Clinics #7, 8, 9: MDP vastly outperforms OA in all scenarios (by as much as 70%); same-day access ranges from 28% to 98% (max lead time 4 days)
- For all clinics, MDP provides a significant reduction in throughput variation and peak workload

17 Optimal Policy (base scenario, w=11, x=0)
[Diagram: Day 1 with requests numbered 1–20]

18 Optimal Policy (base scenario, w=11, x=0)
[Diagram: Day 1 and Day 2 with requests numbered 1–20]

19 Performance Trends
- MDP performed best when demand was high (e.g. when demand > capacity and when the same-day show rate was guaranteed)
- MDP approaches OA as the lead-time cost increases
- The presence of revenue makes OA much more attractive
- The maximum booking window in any scenario tested was 4 days
- MDP manages to perform as well even when revenue is present by sacrificing some throughput in order to reduce overtime and idle-time costs

20 Conclusion
- The model provides a booking policy that takes no-shows into account and reacts to the congestion in the system
- Simulation results suggest that it achieves better results than open access (the same or a higher objective, more predictable throughput) with minimal cost to the patient in terms of lead times
- Enhancements to the model are certainly possible, including stochastic service times, the transition to a continuous-time setting, and the possibility of a multi-doctor clinic
- Currently in discussion with a local clinic to build an enhanced model and test it

21 Thank You!

22 Optimal Policy (base scenario, w=11, x=0)
Number of new requests given same-day service, by today's demand y:

  Demand y:       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
  Served today:   0  1  2  3  4  5  6  7  8  9 10 11 12 12 12 13 13 14 14 15 16

23 Optimal Policy (base scenario, w=11, x=0)
Number of new requests given same-day service (columns '0'–'11'), by today's demand y:

  Demand y:       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20
  Served today:   0  1  2  3  4  5  6  7  8  9 10 10 10 10 10 10 10 10 10 11 11

24
Columns: Policy (lead-time cost) | TH | OT | IT | Average daily cost/profit | % diff from OA | appointment lead times (% at 0 / 1 / 2 / 3 / 4 days). TH = throughput, OT = overtime, IT = idle time.

Show Rate with Same Day = 100%
  OA      | 100.0% | 12.5% |   –   | -18.75 |   –   | –
  MDP (0) |  91.6% |  1.0% |  9.4% |  -5.67 | 69.8% | 63.89 / 34.76 / 1.34 / 0.01 / 0.00
  MDP (1) |  93.4% |  1.8% |  8.4% |  -8.92 | 52.4% | 71.51 / 28.19 / 0.30 / 0.00 / 0.00
  MDP (5) |  97.1% |  6.1% |  9.0% | -16.93 |  9.7% | 87.37 / 12.63 / 0.00 / 0.00 / 0.00

Increased Demand (Demand = 12)
  OA      |  88.0% | 15.7% | 10.1% | -20.81 |   –   | –
  MDP (0) |  78.0% |  2.9% |  9.2% |  -7.50 | 64.0% | 27.76 / 44.56 / 23.06 / 4.37 / 0.25
  MDP (1) |  83.9% |  6.9% |  6.2% | -14.29 | 31.3% | 64.82 / 34.43 / 0.76 / 0.00 / 0.00
  MDP (5) |  87.8% | 14.7% |  9.4% | -20.67 |  0.7% | 97.91 / 2.09 / 0.00 / 0.00 / 0.00

Base Case
  OA      |  88.0% |  6.9% | 18.9% | -16.34 |   –   | –
  MDP (0) |  84.1% |  0.7% | 16.5% |  -8.92 | 45.4% | 66.44 / 32.29 / 1.26 / 0.01 / 0.00
  MDP (1) |  85.9% |  1.7% | 15.8% | -11.45 | 26.8% | 81.52 / 18.36 / 0.12 / 0.00 / 0.00
  MDP (5) |  87.4% |  4.5% | 17.1% | -15.64 | -59.5% | 94.89 / 5.11 / 0.00 / 0.00 / 0.00

Steep Decline
  OA      |  88.0% |  6.9% | 18.9% | -16.34 |   –   | –
  MDP (0) |  82.7% |  0.8% | 18.1% |  -9.81 | 40.0% | 78.01 / 21.83 / 0.16 / 0.00 / 0.00
  MDP (1) |  84.3% |  1.5% | 17.2% | -11.60 | 29.0% | 85.06 / 14.92 / 0.03 / 0.00 / 0.00
  MDP (5) |  86.6% |  4.0% | 17.4% | -15.51 |  5.1% | 94.35 / 5.65 / 0.00 / 0.00 / 0.00

Advanced Bookings
  OA      |  84.6% |  5.6% | 21.0% | -16.16 |   –   | –
  MDP (0) |  81.5% |  0.6% | 19.1% | -10.11 | 37.4% | 43.64 / 52.99 / 3.32 / 0.05 / 0.00
  MDP (1) |  82.9% |  1.4% | 18.5% | -12.03 | 25.5% | 55.14 / 44.36 / 0.50 / 0.00 / 0.00
  MDP (5) |  84.2% |  3.7% | 19.6% | -13.89 | 14.0% | 65.98 / 34.01 / 0.00 / 0.00 / 0.00

Demand = 8
  OA      |  88.0% |  2.1% | 31.7% | -17.89 |   –   | –
  MDP (0) |  86.9% |  0.0% | 30.5% | -15.28 | 14.6% | 90.39 / 9.60 / 0.01 / 0.00 / 0.00
  MDP (1) |  87.2% |  0.2% | 30.4% | -15.93 | 11.0% | 92.63 / 7.37 / 0.00 / 0.00 / 0.00
  MDP (5) |  87.6% |  0.8% | 30.7% | -17.32 |  3.2% | 97.09 / 2.91 / 0.00 / 0.00 / 0.00

25
Columns: Policy (lead-time cost) | TH | OT | IT | Average daily cost/profit | % diff from OA | appointment lead times (% at 0 / 1 / 2 / 3 / 4 days). TH = throughput, OT = overtime, IT = idle time.

Increased Demand (Demand = 12)
  OA      |  88.0% | 15.7% | 10.2% | 190.33 |   –   | –
  MDP (0) |  86.2% | 10.3% |  6.8% | 193.21 |  1.5% | 83.88 / 16.12 / 0.00 / 0.00 / 0.00
  MDP (1) |  87.1% | 12.3% |  7.8% | 191.70 |  0.7% | 91.33 / 8.67 / 0.00 / 0.00 / 0.00
  MDP (5) |  88.0% | 15.7% | 10.2% | 190.33 |  0.0% | 100.00 / 0.00 / 0.00 / 0.00 / 0.00

Show Rate with Same Day = 100%
  OA      | 100.0% | 12.5% |   –   | 181.25 |   –   | –
  MDP (0) |  97.0% |  6.0% |  9.0% | 183.44 |  1.2% | 87.10 / 12.90 / 0.00 / 0.00 / 0.00
  MDP (1) |  98.1% |  8.1% | 10.0% | 182.38 |  0.6% | 91.96 / 8.04 / 0.00 / 0.00 / 0.00
  MDP (5) | 100.0% | 12.5% |   –   | 181.25 |  0.0% | 100.00 / 0.00 / 0.00 / 0.00 / 0.00

Base Case
  OA      |  88.0% |  6.9% | 18.9% | 159.63 |   –   | –
  MDP (0) |  86.6% |  2.6% | 16.0% | 162.49 |  1.8% | 87.43 / 12.56 / 0.02 / 0.00 / 0.00
  MDP (1) |  86.9% |  3.5% | 16.5% | 161.29 |  1.0% | 91.54 / 8.46 / 0.00 / 0.00 / 0.00
  MDP (5) |  87.8% |  6.2% | 18.3% | 159.61 |  0.0% | 98.64 / 1.36 / 0.00 / 0.00 / 0.00

Show Rate with Steep Decline
  OA      |  88.0% |  6.9% | 18.9% | 159.63 |   –   | –
  MDP (0) |  86.6% |  4.0% | 17.4% | 160.46 |  0.5% | 94.35 / 5.65 / 0.00 / 0.00 / 0.00
  MDP (1) |  87.0% |  4.7% | 17.7% | 160.11 |  0.3% | 95.98 / 4.02 / 0.00 / 0.00 / 0.00
  MDP (5) |  88.0% |  6.9% | 18.9% | 159.63 |  0.0% | 100.00 / 0.00 / 0.00 / 0.00 / 0.00

Advanced Bookings
  OA      |  84.6% |  5.6% | 21.0% | 153.03 |   –   | –
  MDP (0) |  83.3% |  1.9% | 18.6% | 155.46 |  1.6% | 61.19 / 34.48 / 4.19 / 0.14 / 0.00
  MDP (1) |  83.8% |  2.8% | 19.0% | 154.69 |  1.1% | 63.14 / 36.82 / 0.05 / 0.00 / 0.00
  MDP (5) |  84.4% |  4.8% | 20.4% | 153.05 |  0.0% | 68.61 / 31.39 / 0.00 / 0.00 / 0.00

Decreased Demand (Demand = 8)
  OA      |  88.0% |  2.1% | 31.7% | 122.90 |   –   | –
  MDP (0) |  87.5% |  0.5% | 30.5% | 124.21 |  1.1% | 95.87 / 4.13 / 0.00 / 0.00 / 0.00
  MDP (1) |  87.6% |  0.6% | 30.5% | 123.95 |  0.9% | 96.27 / 3.73 / 0.00 / 0.00 / 0.00
  MDP (5) |  87.8% |  1.3% | 31.0% | 123.16 |  0.2% | 98.52 / 1.48 / 0.00 / 0.00 / 0.00

26
Columns: Policy (lead-time cost) | TH | OT | IT | Average daily cost/profit | % diff from OA | appointment lead times (% at 0 / 1 / 2 days). TH = throughput, OT = overtime, IT = idle time.

Increased Demand (Demand = 12)
  OA      |  88.0% | 15.7% | 10.2% | 195.40 |   –   | –
  MDP (0) |  86.7% | 11.4% |  7.4% | 196.69 |  0.7% | 88.59 / 11.41 / 0.00
  MDP (1) |  87.5% | 13.7% |  8.7% | 195.69 |  0.1% | 95.46 / 4.54 / 0.00
  MDP (5) |  88.0% | 15.7% | 10.2% | 195.40 |  0.0% | 100.00 / 0.00 / 0.00

Show Rate with Same Day = 100%
  OA      | 100.0% | 12.5% |   –   | 187.50 |   –   | –
  MDP (0) |  98.2% |  8.3% | 10.1% | 188.06 |  0.3% | 92.43 / 7.57 / 0.00
  MDP (1) |  99.1% | 10.1% | 11.1% | 187.58 |  0.0% | 95.82 / 4.18 / 0.00
  MDP (5) | 100.0% | 12.5% |   –   | 187.50 |  0.0% | 100.00 / 0.00 / 0.00

Base Case
  OA      |  88.0% |  6.9% | 18.9% | 169.08 |   –   | –
  MDP (0) |  86.8% |  3.1% | 16.3% | 170.51 |  0.8% | 89.82 / 10.18 / 0.00
  MDP (1) |  88.0% |  6.9% | 18.9% | 169.08 |  0.0% | 100.00 / 0.00 / 0.00
  MDP (5) |  88.0% |  6.9% | 18.9% | 169.08 |  0.0% | 100.00 / 0.00 / 0.00

Show Rate with Steep Decline
  OA      |  88.0% |  6.9% | 18.9% | 169.08 |   –   | –
  MDP (0) |  87.2% |  4.9% | 17.8% | 169.39 |  0.2% | 96.48 / 3.52 / 0.00
  MDP (1) |  87.8% |  6.4% | 18.6% | 169.11 |  0.0% | 99.11 / 0.89 / 0.00
  MDP (5) |  88.0% |  6.9% | 18.9% | 169.08 |  0.0% | 100.00 / 0.00 / 0.00

Advanced Bookings
  OA      |  84.6% |  5.6% | 21.0% | 160.60 |   –   | 70.00 / 30.00 / 0.00
  MDP (0) |  83.7% |  2.6% | 18.9% | 164.80 |  2.6% | 62.27 / 37.64 / 0.09
  MDP (1) |  84.0% |  3.4% | 19.4% | 161.16 |  0.3% | 65.12 / 34.87 / 0.01
  MDP (5) |  84.6% |  5.6% | 21.0% | 160.60 |  0.0% | 70.00 / 30.00 / 0.00

Decreased Demand (Demand = 8)
  OA      |  88.0% |  2.1% | 31.7% | 138.73 |   –   | 100.00 / 0.00 / 0.00
  MDP (0) |  87.5% |  0.5% | 30.5% | 139.45 |  0.5% | 96.04 / 3.96 / 0.00
  MDP (1) |  87.6% |  0.7% | 30.7% | 139.12 |  0.3% | 96.74 / 3.26 / 0.00
  MDP (5) |  88.0% |  1.9% | 31.5% | 138.79 |  0.0% | 99.78 / 0.22 / 0.00

