Supporting an omnichannel strategy: Measuring performance


Supporting an omnichannel strategy: Measuring performance (Module 8, Video 4)

Intro Retailers expanding their omnichannel capabilities are often entering uncharted territory. We discuss how retailers can use experiments to evaluate the impact of their omnichannel actions, and how they can monitor the performance of their conversion funnel using control charts.

Running experiments Decisions in an omnichannel context often involve subtle tradeoffs, and it is hard to anticipate the outcomes those decisions will generate. For example, consider a retailer that recently implemented a ship-to-store program and is now reevaluating its assortment policy. The retailer is considering removing some of the slowest-selling products from the store assortment and offering them through the ship-to-store option instead. The retailer anticipates that removing those products from the store assortment will save costs; what will happen to revenues, however, is less clear. Because the products will be shipped from the distribution center to the stores, some customers seeking immediate delivery may be lost to the competition, and whether the cost savings compensate for this loss is uncertain. How could the retailer evaluate whether the change in assortment is a good idea?
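The tradeoff above can be framed as a back-of-envelope calculation. A minimal sketch, with entirely hypothetical figures (the cost savings, lost sales, and margin are assumptions, not data from the transcript):

```python
# Hedged back-of-envelope check: does moving slow sellers to ship-to-store
# pay off? We weigh assumed cost savings against the margin on sales lost
# to customers who defect because they wanted the product immediately.

def net_impact(cost_savings, lost_sales, margin):
    """Net impact of the assortment change: savings minus lost margin."""
    return cost_savings - lost_sales * margin

# Hypothetical: $50k saved in store handling, $120k in lost sales, 25% margin
print(net_impact(50_000, 120_000, 0.25))  # 20000.0 -> positive, change pays off
```

Of course, the lost-sales figure is exactly what the retailer does not know in advance, which is why the transcript turns to experiments next.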

Running experiments (cont.) The retailer could run a pilot test: implement the new assortment policy at a few stores and compare the evolution of sales at those stores with the evolution of sales at comparable stores that did not implement the change. The analysis should consider sales through all possible channels, so that the effects we measure include ship-to-store sales and any other unexpected channel shifts. If the pilot suggests that the cost savings compensate for the reduction in sales, the retailer can roll out the change to the rest of the stores. Proceeding this way has the advantage that if the test indicates a negative impact, the costs are much lower than if the change had been rolled out to all stores from the beginning. To evaluate the impact of the action, it is important to choose an appropriate comparison group, what we call the "control group". We need a control group because if we simply focused on the evolution of sales at the stores where the intervention was implemented, the evolution we observe could be affected by trends, seasonality, or other factors. We want to pick a control group that is as similar as possible to the group where we are implementing the pilot.
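Comparing the evolution of sales in the pilot stores against the control stores is, in effect, a difference-in-differences calculation. A minimal sketch with hypothetical sales figures (the transcript does not report numbers):

```python
# Hedged sketch: difference-in-differences estimate of the pilot's impact.
# Sales figures are hypothetical average weekly sales per store, across
# ALL channels (including ship-to-store), as the transcript recommends.

def diff_in_diff(test_before, test_after, control_before, control_after):
    """Change in the pilot stores minus change in the control stores.
    Subtracting the control change nets out trends and seasonality
    that affect both groups."""
    return (test_after - test_before) - (control_after - control_before)

effect = diff_in_diff(test_before=100.0, test_after=97.0,
                      control_before=100.0, control_after=99.0)
print(effect)  # -2.0: pilot stores lost 2 units/week relative to control
```

Note that a naive before/after comparison at the pilot stores alone would report a drop of 3.0, overstating the effect because control stores also declined by 1.0 over the same period.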

Reluctance to experiment While the approach we described is quite straightforward, many retailers, particularly those coming from a brick-and-mortar tradition, do not have a culture of experimentation. Sometimes the cost of running the experiment is a concern. In most cases, however, the cost of not experimenting, and making poor decisions as a consequence, can be even higher, so it is important to change the mindset. Retailers that come from an online tradition are usually more open to running experiments. Online retailers often have A/B testing capabilities at their sites, which means they can randomly assign customers to one of two versions of the site and then compare the performance of those two versions. This type of experiment can help retailers make better choices. While many of the A/B tests run by online retailers evaluate simple changes to the customer interface, it is also possible to test more strategic questions that can guide the omnichannel expansion. For example, we partnered with an online retailer to evaluate the impact of providing product information via a virtual fitting tool. The experiment was implemented as an A/B test in which some customers had access to the tool and some did not. Since availability of the tool was assigned randomly, we simply had to compare the performance of the customers who had access to the tool with the performance of those who did not. We did that and concluded that offering the virtual fitting tool increased sales and reduced returns.
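Because assignment is random, analyzing an A/B test reduces to comparing the two groups directly. A minimal sketch using a standard two-proportion z-test on conversion rates; the counts below are hypothetical, not the results of the virtual-fitting-tool study:

```python
# Hedged sketch of analyzing a randomized A/B test: did customers with
# access to the tool (group A) convert more than those without (group B)?
# All counts are hypothetical.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates,
    using the pooled proportion for the standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: 550 of 10,000 customers with the tool bought,
# vs. 500 of 10,000 without it.
z = two_proportion_z(550, 10_000, 500, 10_000)
print(round(z, 2))
```

A z-statistic above roughly 1.96 would indicate a difference unlikely to arise from random assignment alone at the 5% level; with these hypothetical numbers the lift is positive but not yet conclusive, which is itself useful guidance on sample size.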

When experimenting is not possible Sometimes it is not possible to run a controlled experiment because the retailer wants to roll out the intervention to all existing sites. This does not mean we cannot estimate the effects of the intervention. For example, if the implementation is staggered, we can try to link the temporal variation in implementation dates to the evolution of a performance measure. Another alternative is to look for natural experiments: empirical studies where exposure to the treatment is determined by factors the retailer cannot control, but that resemble random assignment. For example, we partnered with a retailer to assess the impact of a "buy online, pick up in store" implementation. That retailer had committed to a full rollout of the program, so it was not possible to run a randomized trial. However, we were able to study the impact of the program on online sales using a natural experiment. Customers who live far from a store are a good control group, because those customers get no use from the "buy online, pick up in store" program; they are not going to drive 200 miles to pick up an order. Sales in those areas therefore give us a sense of how online sales would have evolved if the program had not been implemented.
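The staggered-rollout idea can be sketched in a few lines: align each store's sales on its own go-live date and compare before versus after. This is a deliberately simplified sketch with hypothetical data; a serious analysis would also use not-yet-treated stores as controls to net out common trends, as in the difference-in-differences logic described earlier.

```python
# Hedged sketch: exploiting a staggered rollout. Each store's weekly
# sales series is aligned on its own go-live week, and we average the
# before-vs-after change across stores. All data are hypothetical.

stores = {
    # store: (go-live week index, weekly sales)
    "A": (2, [10, 11, 12, 14, 15, 15]),
    "B": (4, [20, 19, 21, 20, 24, 25]),
}

def staggered_effect(stores):
    """Average per-store change in mean weekly sales after go-live.
    Caveat: a simple before/after mean still confounds trends; the
    staggering is what lets a fuller model separate them."""
    changes = []
    for go_live, sales in stores.values():
        before = sum(sales[:go_live]) / go_live
        after = sum(sales[go_live:]) / (len(sales) - go_live)
        changes.append(after - before)
    return sum(changes) / len(changes)

print(staggered_effect(stores))  # 4.0: average lift after go-live
```

Because the two stores went live in different weeks, a shock common to all stores in any one week would hit "before" periods for some stores and "after" periods for others, which is exactly the variation a staggered design exploits.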

Monitoring Another way of measuring performance in an omnichannel context is to monitor metrics of the omnichannel conversion funnel. For example, based on our historical data, we may expect a certain level of cart abandonment. We can plot the average daily cart abandonment rate over time and define control limits for that performance metric. The daily rates will show some variation, but as long as they stay within the control limits we define, we consider this variation natural. If we start observing cart abandonment rates outside the control limits, the performance measure is experiencing some systematic change. When that happens, it is a good idea to investigate the issue in more depth and try to find the reason for the observed change.
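The control-chart logic above can be sketched directly. A minimal version, assuming the common convention of setting control limits at the historical mean plus or minus three standard deviations (the abandonment rates below are hypothetical):

```python
# Hedged sketch of a control chart for the daily cart-abandonment rate.
# Limits are set at the historical mean +/- 3 standard deviations, a
# common convention; all rates are hypothetical.
import statistics

history = [0.68, 0.70, 0.69, 0.71, 0.67, 0.70, 0.69, 0.72, 0.68, 0.70]
mean = statistics.mean(history)
sd = statistics.pstdev(history)
ucl, lcl = mean + 3 * sd, mean - 3 * sd  # upper / lower control limits

def out_of_control(rate):
    """Flag a day whose abandonment rate falls outside the control limits,
    i.e. variation unlikely to be natural noise."""
    return rate > ucl or rate < lcl

print(out_of_control(0.70))  # within limits -> False, natural variation
print(out_of_control(0.80))  # above the upper limit -> True, investigate
```

Days inside the limits are treated as natural variation; a day outside them is the signal, described in the transcript, that something systematic has changed and is worth investigating, for example a broken checkout step or a new shipping fee.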