The right metrics at the right time
An EMC & VMware collaboration
Introductions
Nancy Anderson, Senior Director of Globalization, EMC
Mimi Hills, Director, Product Globalization, VMware
Eduardo D’Antonio, Director, Globalization Operations, VMware
Clove Lynch, Senior Manager, Globalization Tools, VMware
Business Metrics
Volume of Words per Language
One of the key metrics we began tracking several years ago is the number of words per language. We do a quarter-on-quarter comparison across 4-5 quarters and investigate any significant changes to determine which programs or projects are driving the shift (and to be sure there are no reporting errors). It also helps us understand demand and language priorities. For example, when the “other” bucket grows, we look at which languages are driving the increase and consider whether we need to expand our activity in those languages, and then assess our ability to support them: How are our reference materials? Do we have enough qualified resources? How do we handle review?
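As a concrete illustration of this quarter-on-quarter check, here is a minimal sketch in Python; the languages, word counts, and the 20% flagging threshold are all hypothetical assumptions, not EMC's actual data or tooling.

```python
# Hypothetical sketch: flag significant quarter-on-quarter shifts in
# per-language word volume. All figures and the 20% threshold are
# illustrative, not actual EMC data.
words_by_quarter = {
    "Q1": {"fr": 120_000, "de": 115_000, "ja": 90_000, "other": 20_000},
    "Q2": {"fr": 125_000, "de": 110_000, "ja": 95_000, "other": 34_000},
}

THRESHOLD = 0.20  # flag changes larger than +/- 20%

def flag_shifts(prev: dict, curr: dict, threshold: float = THRESHOLD):
    """Yield (language, pct_change) for languages whose volume shifted
    more than the threshold between two quarters."""
    for lang in sorted(set(prev) | set(curr)):
        before, after = prev.get(lang, 0), curr.get(lang, 0)
        if before == 0:
            continue  # brand-new language: always worth a manual look
        change = (after - before) / before
        if abs(change) > threshold:
            yield lang, change

for lang, change in flag_shifts(words_by_quarter["Q1"], words_by_quarter["Q2"]):
    print(f"{lang}: {change:+.0%} quarter-on-quarter")  # e.g. other: +70%
```

In this toy data only the “other” bucket trips the threshold, which is exactly the case described above: the flag prompts a look at which languages inside that bucket are driving the growth.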
Localization Spend per Vendor
Another example is the total spend per vendor. We analyze this quarterly as well, looking for any significant shifts to make sure they align with our expectations and to keep an eye on workload distribution. Some of our projects and larger programs are forecasted and assigned in advance, but ad-hoc projects are assigned by the PM to a vendor at project kick-off. So the volume per vendor does deviate from the forecast, and it often tends to track performance (the PM will assign projects to whichever vendor is performing best).
Spend per Task/Language
This view allows us to understand our spend distribution at a more granular level. The language analysis is just a different presentation (by percentage instead of word count) but makes it easy to compare quarter to quarter. The spend per task lets us keep an eye on where we are spending the most and dig into any areas of concern. For example, several years ago our review costs were over 20% of spend, so we focused on process and collaboration to improve quality and drive down review costs and requirements.
ROI for Adding a Language
Purpose: With limited funding, in what order should we add a given language to products?

Product code name | Bookings 2014 (K) | Bookings after l10n* (K) | Incremental bookings (K) | Product cost (K) | Revenue multiplier** | Priority | Notes
Sparrow | 3385 | 4062 | 677 | 350 | 1.93 | 4 |
Robin | 2125 | 2550 | 425 | 58 | 7.33 | 2 |
Vulture | 1600 | 1920 | 320 | 16 | 20.00 | 1 |
Egret | 880 | 1056 | 176 | 200 | 0.88 | 3 | Strategic investment

* Regional sales team predicts a 20% bookings increase.
** Revenue multiplier = incremental bookings / product cost.

Four existing products; one (Egret) is relatively new, with small current revenue, but strategically important. The sales team predicts 20% more bookings due to localization. This is a conservative estimate, because you can’t get a government contract without localization, and these products can be incorporated into other products and services. Vulture is the cash cow.
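Since the priority ordering falls straight out of the revenue multiplier, here is a worked version of the calculation using the figures from the table; the function and variable names are ours, and the uniform 20% uplift is the sales team's estimate from the slide.

```python
# Worked version of the slide's revenue-multiplier calculation.
# Figures come from the table above; a 20% bookings uplift is assumed
# for every product, per the regional sales team's estimate.
UPLIFT = 0.20

products = {  # name: (bookings_2014_k, product_cost_k)
    "Sparrow": (3385, 350),
    "Robin":   (2125, 58),
    "Vulture": (1600, 16),
    "Egret":   (880, 200),
}

def revenue_multiplier(bookings_k: float, cost_k: float) -> float:
    """Incremental bookings from localization divided by product cost."""
    incremental = bookings_k * UPLIFT
    return incremental / cost_k

# Rank products by multiplier: Vulture comes out first at 20.00,
# matching the slide's priority column. Egret is bumped to priority 3
# as a strategic investment despite its 0.88 multiplier.
for name, (bookings, cost) in sorted(
        products.items(),
        key=lambda kv: revenue_multiplier(*kv[1]),
        reverse=True):
    print(f"{name}: {revenue_multiplier(bookings, cost):.2f}")
```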
A slippery slope: Average cost per word (considering TM leverage)
Tracking cost per word, including TM leverage, over a 5-year period. To a knowledgeable localization professional, this shows how your TM system has paid off; we see the curve as leveling off. It also makes a strong argument for centralization: more TM, more corpus, more leverage. But to an exec looking to slash budgets, this looks like a place to get cost savings (they see the curve as continuing downward).
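To make the TM effect concrete, here is a hypothetical leveraged cost-per-word calculation; the match bands, discount factors, base rate, and word volumes are illustrative assumptions, not actual EMC, VMware, or vendor pricing.

```python
# Hypothetical sketch of how TM leverage pulls the average cost per
# word down. Match bands, per-band rate factors, and volumes are
# illustrative assumptions, not actual vendor pricing.
BASE_RATE = 0.20  # assumed full price per new word, in dollars

# band: (fraction of the base rate charged, words falling in the band)
bands = {
    "100%/context match": (0.10, 40_000),
    "fuzzy 75-99% match": (0.50, 25_000),
    "new words":          (1.00, 35_000),
}

total_cost = sum(BASE_RATE * factor * words for factor, words in bands.values())
total_words = sum(words for _, words in bands.values())

print(f"average cost per word: ${total_cost / total_words:.3f}")
# -> $0.103 here vs. the $0.20 base rate. As the TM corpus grows, the
#    100%-match band grows and the average keeps falling, until it
#    levels off: that is the curve the slide describes.
```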
Change in Review % Over Time
Review Impact on Overall Project Costs
This shows how various levels of review impact project costs over a two-year period, letting us see that overall review costs are trending down. (Chart: review averages of 21.28%, 24.04%, 24.36%, and 36.64% of project cost across the period.)
Quality Metrics
Aggregate Quality per Language
Each quarter we analyze our quality results per language across all vendors against our threshold of 90%. (The quality standard is based on EMC’s own quality ratings system.) Each project gets a rating, which feeds into this analysis. This helps us identify languages that have seen significant improvement or decline, and make sure we understand what is driving the change (and whether any process or tools changes are needed).
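A minimal sketch of this aggregation, assuming each project rating counts equally toward the language score; the ratings and languages below are made up, while the 90% threshold comes from the slide.

```python
# Sketch of rolling per-project quality ratings up into a per-language
# score and checking it against the 90% threshold from the slide.
# The project ratings are illustrative; equal weighting per project
# is an assumption.
from statistics import mean

THRESHOLD = 90.0

project_ratings = {  # language: quality rating per project, in percent
    "fr": [95.0, 92.5, 88.0],
    "ja": [91.0, 86.5, 89.0],
}

for lang, ratings in project_ratings.items():
    score = mean(ratings)
    status = "OK" if score >= THRESHOLD else "investigate"
    print(f"{lang}: {score:.1f}% ({status})")
# fr: 91.8% (OK); ja: 88.8% (investigate)
```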
Linguistic Quality per Vendor
This is one of the metrics that we use extensively and that has a lot more detail behind it. On a quarterly basis we review the overall linguistic quality per vendor against our threshold of 90%, along with the trend over 4-5 quarters for that vendor. Is the quality improving, declining, or inconsistent? What improvement programs has the vendor put in place to meet our quality level, and are they working? Behind this chart there are more detailed charts per vendor and per language, even detailing the error types or content types driving results below expectations.
Localization Product Testing
Goal: Increase automated test coverage from 80% to 100%.
Goal: Increase automated UI functional test coverage from 30% to 60%.
In Q3, new products were added to each product group.
Goal: Reduce the deferral rate (increase fix rate) of i18n product bugs
The goal is to get i18n product bugs fixed; we influence product teams to fix those bugs (and sometimes help fix them). So the goal is to reduce the deferral rate, and within our bug tracking system we can easily track the fix rate. We set an 80% goal, which is an average across product teams for valid bugs. One problem we create for product teams is invalid bugs; we need to reduce those too. More on the next slide.
Goal: Reduce the deferral rate (increase fix rate) of i18n product bugs
Quarter | Incoming | Fixed | Invalid | Outstanding | Fix rate
Q1 | 374 | 133 | 85 | 156 | 46%
Q2 | 540 | 320 | 115 | 261 | 55%
Q3 | 325 | 241 | 62 | 283 | 46%
Q4 | 330 | 402 | 65 | 146 | 73%

Things to think about: the invalid rate should be reduced, the fix rate should include bugs not fixed in previous quarters, and the product cycle matters. The fix rate is fixed / (incoming – invalid + previous quarter’s outstanding), and outstanding = incoming – fixed – invalid + previous quarter’s outstanding. There is a quality debt that continues from quarter to quarter, and we need to follow the product teams’ calculations. If you add up the totals per quarter, you’ll see over 80% fixed, but that is a one-time calculation that does not take the continuing bug debt into account.
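The carry-over arithmetic is easy to get wrong, so here is a short sketch that reproduces the table’s fix rates with the bug debt carried forward; the quarterly figures come from the table, and the formula is the reconstruction given above.

```python
# Reproduces the fix-rate table above, carrying outstanding bug debt
# forward from quarter to quarter. Data is from the slide; the formula
# matches its published rates.
quarters = [  # (name, incoming, fixed, invalid)
    ("Q1", 374, 133, 85),
    ("Q2", 540, 320, 115),
    ("Q3", 325, 241, 62),
    ("Q4", 330, 402, 65),
]

outstanding = 0  # bug debt carried in from before Q1
for name, incoming, fixed, invalid in quarters:
    valid_open = incoming - invalid + outstanding  # valid bugs in play
    fix_rate = fixed / valid_open
    outstanding = valid_open - fixed  # debt carried to the next quarter
    print(f"{name}: fix rate {fix_rate:.0%}, outstanding {outstanding}")
# Q1: 46%, 156 | Q2: 55%, 261 | Q3: 46%, 283 | Q4: 73%, 146
```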
Quality Metrics – VMware (Q2-Q4)
Full Word Count vs. Sampling Word Count: This shows the percentage of sampling-based review vs. full review for a given period of time across all internal customer programs. The metrics show good progress in moving from full to sample-based review.
Average Review Time
Average time spent to review 1k words in 2014: 77.9 min. 14Q4 target: 74 min; 14Q4 current: 76 min; 15Q1 target: 70 min. A one-year snapshot of review time per 1,000 words of content, showing how we are doing relative to the target rate.
Vendor & QBR Metrics
Internal Vendor Scorecard
Language Quality Details: (chart of Average, Accuracy, Readability, GUI, and Fluency scores for French, German, Italian, Russian, Portuguese, Spanish, Chinese, Japanese, and Korean)

Vendor Survey Feedback (scale: 1 (worst) – 5 (best)):
- Ave. Rating: 3.7
- Overall Satisfaction Level: 3.8
- Responsiveness & Communication
- Operational Efficiency: 3.5
- Problem Solving/Innovation: 3.7
- Financial Management: 3.6
- Review Process Management: 4.0

Performance against SLA (issues):
Type | Apr | May | Jun | Total
Linguistic | 3 | 10 | 11 | 24
PM/Delivery | 9 | 6 | | 15
Technical | | | | 20

Key Vendor Details: 62 projects; total volume 1.9M/587K (total TEP volume / review volume).
- Ability to handle complex projects
- Generally performing well and has capacity for more work
- Accuracy and timeliness of financials should be improved
- Flexibility to adapt to changing scope quickly

This scorecard gives us a quick quarterly snapshot of how each vendor is performing, with both quality data points (Language Quality, SLA results) and team feedback (Survey).
Vendor Satisfaction Survey Results
(Chart: average team ratings per vendor, scale 1 (worst) – 5 (best). Overall Satisfaction Level: 3.9, 3.8, 4.2, 3.5. Responsiveness & Communication: 4.0, 4.1, 3.4. Operational Efficiency: 2.8, 3.6, 3.3, 3.1. Problem Solving/Innovation: 3.7. Financial Management: 4.3. Review Process Management.)

This is a view comparing the internal team survey results across vendors. We use it to get a sense of how vendors are performing from the team’s perspective. We can also drill down to understand whether there are any large gaps in ratings, which may indicate inconsistent performance or issues with specific programs.
Vendor Balanced Scorecards
(Charts: Primary Suppliers Performance; Vendor 2.)
Vendor Survey Data
Vendor Service Levels
LQE Review Environment
LQE Vendor Performance Tracking
Usage Metrics
Website User Activity
We recently set up an internal Translation Services portal where EMC users can access a number of translation-related services. We used an analytics tool to pull the number of users (new and returning) as one way to track activity.
Website Activity Summary
We also tracked general analytics on usage of our new portal, including total users, the number of sessions, and the number of pageviews.
Web Traffic – Search Engine Referrals
Web Traffic – Referrer Type
Q&A