Supporting an omnichannel strategy Measuring performance

Presentation on theme: "Supporting an omnichannel strategy Measuring performance"— Presentation transcript:

1 Supporting an omnichannel strategy Measuring performance
Module 8 Video 4 Supporting an omnichannel strategy Measuring performance

2 Intro Retailers expanding their omnichannel capabilities are often entering uncharted territory. We discuss how retailers can use experiments to evaluate the impact of their omnichannel actions. We will also discuss how they can monitor the performance of their conversion funnel using control charts.

3 Running experiments Decisions in an omnichannel context often involve subtle tradeoffs, and it is hard to anticipate the outcomes those decisions will generate. For example, consider a retailer that recently implemented a ship to store program and is now reevaluating its assortment policy. The retailer is considering removing some of the slowest-selling products from the store assortment and offering them through the ship to store option. The retailer anticipates that removing those products from the store assortment will save costs. However, what will happen to revenues is not clear. Because the products will be shipped from the distribution center to the stores, some customers seeking immediate delivery may be lost to the competition. Whether the cost savings compensate for this loss is unclear. How could the retailer evaluate if that change in assortment is a good idea?

4 Running experiments (cont.)
The retailer could run a pilot test: implement the new assortment policy at a few stores and compare the evolution of sales at those stores with the evolution of sales at comparable stores that did not implement the change. The analysis should consider sales through all possible channels, to make sure that the effects we measure include ship to store sales and any other unexpected channel shifts. If the pilot test suggests that the cost savings compensate for the reduction in sales, then the retailer could roll out the change to the rest of the stores. Proceeding this way has the advantage that if the test indicates a negative impact, the costs are much lower than if the change had been rolled out to all the stores from the beginning. In order to evaluate the impact of the action, it is important to choose an appropriate comparison group, what we call the "control group". We need a control group because if we simply focused on the evolution of sales at the stores where the intervention has been implemented, the evolution we observe could be affected by trends, seasonality, or other factors. We want to pick a control group that is as similar as possible to the group where we are implementing the pilot.
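The pilot-versus-control comparison described above is a difference-in-differences calculation: subtract the control group's sales change from the test group's change, so that common trends and seasonality cancel out. A minimal sketch, assuming we have per-store, all-channel sales totals for both groups before and after the change (all figures below are illustrative, not from the source):

```python
# Hypothetical sketch: difference-in-differences estimate of a pilot's
# impact. Each argument is a list of per-store sales totals (summed
# across all channels) for one group and period. Numbers are invented
# for illustration only.

def diff_in_diff(test_before, test_after, control_before, control_after):
    """Estimate the pilot's effect net of common trends."""
    mean = lambda xs: sum(xs) / len(xs)
    test_change = mean(test_after) - mean(test_before)
    control_change = mean(control_after) - mean(control_before)
    # The control stores' change captures trends and seasonality that
    # affect both groups; subtracting it isolates the pilot's effect.
    return test_change - control_change

# Illustrative data: sales dip slightly at the test stores while control
# stores grow, so the estimated net effect on sales is negative and
# must be weighed against the anticipated cost savings.
effect = diff_in_diff(
    test_before=[100.0, 98.0, 102.0],
    test_after=[99.0, 97.0, 101.0],
    control_before=[95.0, 97.0, 96.0],
    control_after=[99.0, 101.0, 100.0],
)
```

The key design choice is in the last step: using the control group's change as the counterfactual is exactly why the control stores must be as similar as possible to the pilot stores.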

5 Reluctance to experiments
While the approach we described is quite straightforward, many retailers, particularly those coming from a brick and mortar tradition, do not have a culture of experimentation. Sometimes the cost of running the experiment can be a concern. However, in most cases, the costs of not experimenting, and making poor decisions as a consequence, can be even higher, so it is important to change the mindset. Retailers that come from an online tradition are usually more open to running experiments. Online retailers often have A/B testing capabilities at their sites, which means that they can randomly assign customers to one of two versions of the site and then compare the performance of those two versions. This type of experiment can help retailers make better choices. While many of the A/B tests run by online retailers evaluate simple changes in the customer interface, it is also possible to test more strategic questions that can guide the omnichannel expansion. For example, we partnered with an online retailer to evaluate the impact of providing product information through a virtual fitting tool. The experiment was implemented as an A/B test where some customers had access to the tool and some customers didn't. Since the availability of the tool was assigned randomly, we simply had to compare the performance of the customers who had access to the tool with the performance of those who didn't. We did that and concluded that offering the virtual fitting tool increased sales and reduced returns.
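Because an A/B test randomizes customers across the two versions, the analysis can be a direct comparison of the two groups' conversion rates, plus a check that the observed gap is larger than random noise. A minimal sketch using a standard two-proportion z-test (the counts below are invented for illustration; the source reports no figures for the virtual fitting tool experiment):

```python
# Hypothetical sketch: comparing conversion rates from an A/B test with
# a two-proportion z-test. All counts are illustrative.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both versions convert
    # at the same rate.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Group A: no fitting tool; Group B: tool available (invented counts).
z = two_proportion_z(conv_a=400, n_a=10_000, conv_b=480, n_b=10_000)
# |z| > 1.96 corresponds to statistical significance at the 5% level.
tool_helps = z > 1.96
```

The same comparison can be repeated for return rates, which is how an experiment like this can show effects on both sales and returns.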

6 When experimenting is not possible
Sometimes it is not possible to run a controlled experiment because the retailer wants to roll out the intervention to all existing sites. This does not mean we cannot estimate the effects of the intervention. For example, if the implementation is staggered, we can try to link the temporal variation in implementation dates with the evolution of a performance measure. Another alternative is to look for natural experiments. These are empirical studies where exposure to the treatment is determined by factors the retailer cannot control, but that resemble random assignment. For example, we partnered with a retailer to assess the impact of a "buy online, pick up in store" implementation. That retailer had committed to a full rollout of the program, so it was not possible to run a randomized trial. However, we were able to study the impact of the program on online sales using a natural experiment. Customers who live far from a store are a good control group, because those customers would not get any use from the "buy online, pick up in store" program; they are not going to drive 200 miles to pick up an order. So sales in those areas give us a sense of how online sales could have evolved if the program had not been implemented.
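The distance-based natural experiment above can be sketched as another before/after comparison, with regions far from any store serving as the control group. A minimal illustration, assuming we observe online sales by region before and after the rollout; the 50-mile cutoff and all figures are invented for illustration:

```python
# Hypothetical sketch of the natural-experiment idea: customers far from
# any store cannot benefit from "buy online, pick up in store", so their
# online sales proxy for what would have happened without the program.
# The cutoff and all sales figures are illustrative assumptions.

NEAR_MILES = 50  # assumed cutoff separating treated vs. control regions

def program_effect(regions):
    """Difference-in-differences using far-away regions as control.

    regions: list of dicts with miles_to_store, sales_before, sales_after.
    """
    near = [r for r in regions if r["miles_to_store"] <= NEAR_MILES]
    far = [r for r in regions if r["miles_to_store"] > NEAR_MILES]
    def growth(group):
        return sum(r["sales_after"] - r["sales_before"] for r in group) / len(group)
    # Far-away regions' growth stands in for the no-program counterfactual.
    return growth(near) - growth(far)

effect = program_effect([
    {"miles_to_store": 10,  "sales_before": 50.0, "sales_after": 58.0},
    {"miles_to_store": 30,  "sales_before": 40.0, "sales_after": 47.0},
    {"miles_to_store": 200, "sales_before": 45.0, "sales_after": 47.0},
    {"miles_to_store": 300, "sales_before": 55.0, "sales_after": 58.0},
])
```

The credibility of this design rests entirely on the claim that far-away customers are unaffected by the program but exposed to the same trends, which is what makes the assignment "resemble random assignment".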

7 Monitoring Another way of measuring performance in an omnichannel context is by monitoring metrics of the omnichannel conversion funnel. For example, based on our historical data, we may expect a certain level of cart abandonment. We can plot the average daily cart abandonment rate over time and define control limits for that performance metric. The daily rates will have some variation, but as long as they stay within the control limits we define, we consider this variation natural. If we start observing cart abandonment rates outside the control limits, that suggests the performance measure is experiencing a systematic change. When that happens, it is a good idea to investigate the issue in more depth and try to find the reason for the observed change.
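A common way to set such control limits is the three-sigma rule from statistical process control: compute the mean and standard deviation of the metric over a stable historical window, and flag any day that falls more than three standard deviations from the mean. A minimal sketch (all abandonment rates below are invented for illustration):

```python
# Hypothetical sketch of a control chart for daily cart abandonment
# rates: limits are set from historical data at the mean plus or minus
# three standard deviations, and days outside the limits are flagged
# for investigation. All rates are illustrative.
import statistics

def control_limits(history):
    """Return (lower, upper) three-sigma control limits."""
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(history, new_rates):
    lower, upper = control_limits(history)
    # Days within the limits show only natural variation; days outside
    # them suggest a systematic change worth investigating.
    return [r for r in new_rates if r < lower or r > upper]

history = [0.68, 0.70, 0.69, 0.71, 0.70, 0.69, 0.72, 0.70]
flagged = out_of_control(history, [0.70, 0.71, 0.83, 0.69])
```

A flagged day does not say what went wrong, only that the deviation is unlikely to be noise; the investigation step in the text is still needed to find the cause.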

