Netezza’s Deep Dive: Getting Your Data Warehouse Up and Running in 24 Hours
Chi-Chung Hui, Consulting I/T Specialist, Information Management Software, IBM HK
IBM Data Warehouse & Analytics Solutions: Simplicity, Flexibility, Choice
Netezza: true appliance
IBM Smart Analytics System: flexible integrated system
IBM InfoSphere Warehouse: custom solution
Warehouse accelerators; Information Management portfolio (Information Server, MDM, Streams, etc.)
IBM offers customers a range of choices for their data warehousing and analytics needs. For our appliances, IBM invests in the solution design, integration and upgrades. This investment assures speed and ease of deployment and administration. The appliances provide optimized performance for a specific workload range. IBM also invests in the solution design, integration and upgrades of the Smart Analytics System. This system offers customers greater flexibility with multiple options of platform, capacity, and integrated software. The Smart Analytics System is customizable to optimize for a range and mix of workloads. IBM InfoSphere Warehouse offers customers complete flexibility to mix and optimize software, servers and storage for the entire range and mix of workloads. The client invests in solution design, integration and upgrades. Now let’s hear from one of Netezza’s customers.
Netezza Value Proposition
Speed: price/performance leader using hardware-based data streaming
Simplicity: black-box appliance with no tuning or storage administration provides low TCO and fast time to value
Scalability: true MPP enables customers to run rapid queries and analytics on petabyte-sized data warehouses
Smart: built-in advanced analytics pushed deep into the database delivers analytics to the masses
Netezza and ISAS
Choose Netezza:
If the best price/performance is required
If the customer cannot afford extensive tuning and administration
If the customer needs the fastest time to value
If the customer does not want to pay for a separate database software license
Choose ISAS:
If AIX is preferred
If SAN and remote mirroring are required
If the customer requires the warehouse to conform to data center infrastructure standards
If the customer prefers a more customized warehouse
If the customer needs very specific, deep tuning techniques
Agenda
Netezza Solution Highlight: what is it? Why is it good?
Netezza Tour: specialized hardware and architecture; how Netezza runs database queries
Netezza Simplicity
Netezza Performance
Netezza – Solution Highlight Summary
True appliance: hardware, software and storage pre-built for data warehousing
Specially designed hardware for high-performance advanced analytics operations
Hardware compression based on table columns
Very fast: usually 10x to 100x faster than a traditional database
Minimal administration and tuning
Low TCO
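The column-based hardware compression highlighted above works because values within one column repeat far more often than values across a row. A toy run-length encoder over a single column illustrates the general idea (an illustration only; Netezza's actual compression is a different scheme implemented in the FPGA path, and the column values here are invented):

```python
# Toy run-length encoding over a single column. Values within a column
# repeat far more than values within a row, which is why compressing
# by column works so well. Not Netezza's actual algorithm.
def rle(values):
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)   # extend the current run
        else:
            out.append((v, 1))              # start a new run
    return out

market_col = [509123] * 1000 + [509124] * 1000   # low-cardinality column
encoded = rle(market_col)
ratio = len(market_col) / len(encoded)           # 2000 values -> 2 runs
```

Even this naive scheme collapses a low-cardinality column by orders of magnitude, which is why decompressing in the FPGA effectively multiplies disk bandwidth.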
Why Is Netezza Good?
Legacy DWH Architectures: Moving Large Amounts of Data Becomes the Bottleneck
Large amounts of data are moved from disk into server memory before any SQL is processed, making the path between storage and the RDBMS software on the server the bottleneck. (Diagram: queries and results flow between the server and storage, with the data moved to memory before SQL processing.)
Netezza Performance Server™
We are better at EDW operations with complex BI queries:
CPU: 2% of existing systems
Network traffic: 1% of existing systems
Data is processed as it streams from disk, before being moved to memory. (Diagram: queries and results flow through an SMP host with 2-4 CPUs.)
Agenda
Netezza Solution Highlight: what is it? Why is it good?
Netezza Tour: specialized hardware and architecture; how Netezza runs database queries
Netezza Simplicity
Netezza Performance
The IBM Netezza TwinFin™ Appliance
(Diagram: SMP hosts running the SQL compiler, query planner, optimizer and admin; Snippet Blades™ (S-Blades™); disk enclosures holding slices of user data with swap and mirror partitions; high-speed data streaming between them.)
The Netezza data warehouse and analytics appliance has a revolutionary design based on principles that have allowed Netezza to provide the best price-performance in the market. The four key components that make up TwinFin are: SMP hosts; snippet blades (called S-Blades); disk enclosures; and a network fabric (not shown in the diagram). The disk enclosures contain high-density, high-performance disks that are RAID protected. Each disk contains a slice of the data in the database table, along with a mirror of the data on another disk. The storage arrays are connected to the S-Blades via high-speed interconnects that allow all the disks to simultaneously stream data to the S-Blades at the fastest rate possible. The SMP hosts are high-performance Linux servers that are set up in an active-passive configuration for high availability. The active host presents a standardized interface to external tools and applications, such as BI and ETL tools and load utilities. It compiles SQL queries into executable code segments called snippets, creates optimized query plans and distributes the snippets to the S-Blades for execution. S-Blades are intelligent processing nodes that make up the turbocharged MPP engine of the appliance. Each S-Blade is an independent server that contains powerful multi-core CPUs, Netezza's unique multi-engine FPGAs and gigabytes of RAM, all balanced and working concurrently to deliver peak performance. FPGAs are commodity chips that are designed to process data streams at extremely fast rates. Netezza employs these chips to filter out extraneous data based on the SELECT and WHERE clauses in the SQL statement, as quickly as data can be streamed off the disk.
The process of data filtering reduces the amount of data by 95-98%, freeing up downstream components from processing unnecessary amounts of data. The S-Blades also execute an array of different database primitives such as sorts, joins and aggregations in the CPU cores. The CPU cores are designed with ample headroom to run embedded algorithms of arbitrary complexity against large data streams for advanced analytics applications. Network fabric: all system components are connected via a high-speed network fabric. Netezza runs a customized IP-based protocol that fully utilizes the total cross-sectional bandwidth of the fabric and eliminates congestion even under sustained, bursty network traffic. The network is optimized to scale to more than a thousand nodes, while allowing each node to initiate large data transfers to every other node simultaneously. All system components are redundant. While the hosts are active-passive, all other components in the appliance are hot-swappable. User data is fully mirrored, enabling better than 99.99% availability.
(Diagram label: processor & streaming DB logic – a high-performance database engine streaming joins, aggregations, sorts, etc.)
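The snippet execution model described above, where the host distributes work across S-Blades and merges their partial results, can be sketched as a toy in plain Python (the field names and functions here are invented for illustration; real snippets are compiled code running on S-Blade hardware):

```python
# Toy model of MPP snippet execution: the host partitions the table
# across "S-Blades", each blade computes a partial aggregate over its
# slice, and the host merges the partials. Illustration only.

def run_snippet(slice_rows, predicate):
    """One blade's work: filter its slice, return a partial sum."""
    return sum(r["amount"] for r in slice_rows if predicate(r))

def mpp_query(table, predicate, n_blades=4):
    """Host side: distribute slices, then merge partial results."""
    slices = [table[i::n_blades] for i in range(n_blades)]  # round-robin distribution
    partials = [run_snippet(s, predicate) for s in slices]  # concurrent on real hardware
    return sum(partials)                                    # final merge on the host

# Example: total amount for one market; the non-matching row is filtered out.
table = [{"market": 509123, "amount": i} for i in range(100)] + \
        [{"market": 1, "amount": 999}]
total = mpp_query(table, lambda r: r["market"] == 509123)
```

The point of the sketch is the shape of the work: filtering and partial aggregation happen where the data slice lives, and only small partial results travel to the host.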
The S-Blade™: CPU Blade + FPGA sidecar
The S-Blade is where the Netezza magic happens. Each S-Blade is a combination of a standard blade server and a database accelerator card provided by Netezza. We use IBM’s “sidecar” technology to easily combine the two blades to make them act as a single logical and physical entity. Note: the sidecar technology is commonly used by IBM to expand their blade servers to add more memory or IO blades to each server. Netezza has licensed this technology to augment the IBM blade servers with our FPGA secret sauce.
S-Blade™ Components
(Diagram: an IBM BladeCenter server with an Intel quad-core CPU and DRAM, paired with the Netezza DB Accelerator carrying dual-core FPGAs and SAS expander modules.)
The IBM Netezza AMPP™ Architecture
(Diagram: applications such as BI tools, ETL tools and the loader connect via ODBC/JDBC to the hosts; S-Blades™, each with FPGA, CPU and memory, sit between the network fabric and the disk enclosures, with advanced analytics running on the blades.)
This is a logical view of how all the components are configured in Netezza’s unique AMPP architecture, which combines an SMP front-end with a shared-nothing MPP back-end for query processing. As a purpose-built appliance for high-speed analytics, its power comes not from the most powerful and expensive components but from how the right components are assembled and work together to maximize performance. Each S-Blade operates on multiple data streams. The architecture offers linear scalability by adding more S-Blades and disk enclosures to the appliance in a balanced fashion. More than a thousand of these customized MPP streams work together to “divide and conquer” the workload in the largest TwinFin. The architecture also offers tremendous flexibility in creating different types of appliances by independently varying the number of disks, S-Blades or the RAM available on the S-Blades.
How Netezza Runs Database Queries
Our Secret Sauce
Example query:
select DISTRICT, PRODUCTGRP, sum(NRX)
from MTHLY_RX_TERR_DATA
where MONTH = '20091201'
and MARKET = 509123
and SPECIALTY = 'GASTRO'
(Diagram: FPGA cores and CPU cores, with restrict/visibility filtering in the FPGA and complex aggregation in the CPU.)
A key component of Netezza’s performance is the way in which its streaming architecture processes data. The Netezza architecture uniquely uses the FPGA as a turbocharger … a huge performance accelerator that not only allows the system to keep up with the data stream, but actually accelerates the data stream through compression before processing it at line rates, ensuring no bottlenecks in the IO path. You can think of the way data streaming works in Netezza as similar to an assembly line. The Netezza assembly line has various stages in the FPGA and CPU cores. Each of these stages, along with the disk and network, operates concurrently, processing different chunks of the data stream at any given point in time. The concurrency within each data stream further increases performance relative to other architectures. Compressed data gets streamed from disk onto the assembly line at the fastest rate that the physics of the disk allows. The data could also be cached, in which case it gets served right from memory instead of disk. The first stage in the assembly line, the Compress Engine within the FPGA core, picks up the data block and uncompresses it at wire speed, instantly transforming each block on disk into 4-8 blocks in memory. The result is a significant speedup of the slowest component in any data warehouse: the disk. The disk block is then passed on to the Project engine, which filters out columns based on parameters specified in the SELECT clause of the SQL query being processed. The assembly line then moves the data block to the Restrict engine, which strips off rows that are not necessary to process the query, based on restrictions specified in the WHERE clause.
The Visibility engine also feeds additional parameters to the Restrict engine, to filter out rows that should not be “seen” by a query, e.g. rows belonging to a transaction that is not yet committed. The Visibility engine is critical in maintaining ACID (Atomicity, Consistency, Isolation and Durability) compliance at streaming speeds in Netezza. The processor core picks up the uncompressed, filtered data block and performs fundamental database operations such as sorts, joins and aggregations on it. It also applies complex algorithms that are embedded in the snippet code for advanced analytics processing. It finally assembles all the intermediate results from the entire data stream and produces a result for the snippet. The result is then sent over the network fabric to other S-Blades or the host, as directed by the snippet code.
(Diagram: a compressed slice of table MTHLY_RX_TERR_DATA streams through the Uncompress, Project and Restrict/Visibility engines, then into the CPU core for joins, aggregations and the sum(NRX) of the example query.)
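A chain of Python generators can mimic the assembly line just described, one stage per engine (a toy model only, using the example query's columns with invented sample rows; note that the Project stage here also keeps the WHERE column that the Restrict stage still needs, as the real engines coordinate on required columns):

```python
# Toy model of the FPGA/CPU assembly line for the example query:
#   select DISTRICT, PRODUCTGRP, sum(NRX) ... where MONTH = '20091201'
# Each stage consumes the previous stage's stream, as the engines do.
from collections import defaultdict

def uncompress(blocks):
    for block in blocks:            # Compress Engine: expand each disk block
        for row in block:           # (simulated; the real engine works at wire speed)
            yield row

def project(rows, cols):
    for r in rows:                  # Project engine: keep only the needed columns
        yield {c: r[c] for c in cols}

def restrict(rows, pred):
    for r in rows:                  # Restrict engine: drop rows failing WHERE
        if pred(r):
            yield r

def aggregate(rows, keys, val):
    sums = defaultdict(int)         # CPU core: grouped aggregation
    for r in rows:
        sums[tuple(r[k] for k in keys)] += r[val]
    return dict(sums)

blocks = [[{"DISTRICT": "EAST", "PRODUCTGRP": "A", "NRX": 5, "MONTH": "20091201"},
           {"DISTRICT": "EAST", "PRODUCTGRP": "A", "NRX": 3, "MONTH": "20091101"}],
          [{"DISTRICT": "WEST", "PRODUCTGRP": "B", "NRX": 7, "MONTH": "20091201"}]]

stream = uncompress(blocks)
stream = project(stream, ["DISTRICT", "PRODUCTGRP", "NRX", "MONTH"])  # MONTH kept for WHERE
stream = restrict(stream, lambda r: r["MONTH"] == "20091201")
result = aggregate(stream, ["DISTRICT", "PRODUCTGRP"], "NRX")
```

Because each stage is a generator, every row flows through all stages without the whole table ever being materialized, which is the essence of the streaming design.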
Netezza Eliminates the I/O Bottleneck
Move the SQL to the hardware… to where the data lives
If you can move the query to where the data lives rather than moving the data to where the query lives, you can solve the IO bottleneck and fundamentally alter the price/performance characteristics for large scale warehouses. “Just send the Answer, not Raw Data”
Agenda
Netezza Solution Highlight: what is it? Why is it good?
Netezza Tour: specialized hardware and architecture; how Netezza runs database queries
Netezza Simplicity
Netezza Performance
Why traditional database systems are not enough: Endless tuning
“Query performance is slow.” – business person
Vendors who ignore fundamental differences between transaction processing and analytical workloads damn their customers to constant tuning. Transaction processing databases are simply not designed for analytic processing; their query performance is never fast enough.
Why traditional database systems are not enough: Endless tuning
“I’ll add an index.” – technical person
Database administrators are consumed in a constant quest to find more performance.
Why traditional database systems are not enough: Endless tuning
“Load performance is slow. When can I access my data?” – business person
However, their tuning efforts create other problems. Each index added to the database consumes more disk space, adding to the warehouse’s cost of ownership. More problematic is the impact on data loads, as indices must be recreated every time new data is added to the warehouse.
Why traditional database systems are not enough: Endless tuning
“I’ll investigate and get back to you …” – technical person
Technical teams become hostage to the database technology, forced to shoulder the burden of after-market engineering work. The vendor’s failure manifests itself as poorly performing queries, poorly performing data loads, and constant database administration and tuning. The business counts the costs as: data not available when needed, or stale; multiple training courses to learn every complexity of the database management product; high cost to hire and retain experienced database professionals; data left unexploited by the business.
Training costs
Performance problems
Constant data partitioning
Queries and loading stall as indices are rebuilt
Constant tuning
Cost escalation
Business distanced from their data
Business units build their own data infrastructures – cost and risk to governance
Is this really the level of dialog your organization aspires to for its technical and business staff? Wouldn’t the business be better served by technical staff with time to team with business units to create real value from data?
Why traditional database systems are not enough: Endless tuning
“Okay… I will add an aggregate table to pre-calculate so that the report will run faster.” – technical person
Why traditional database systems are not enough: Endless tuning
“I want my report to be refreshed every 1 hour.” – business person
Why traditional database systems are not enough: Endless tuning
“Oh… that is impossible… The report is updated only once every day, after the night batch…” – technical person
Why traditional database systems are not enough: Wasted effort
(Slide: a time value map classifying each load task as a transform, an inspection, a non-value process, or a value-adding process. Minutes per task, where recorded: move data from sources – 120; reconcile data – 20; sort and prep – 30; drop indices – 5; drop constraints – 1; drop aggregates – 2; drop materialized views; load data; create constraints – 180; create indices – 90; create materialized views – 60; create aggregates; gather statistics – 300.)
This is a time value map. A time value map tracks a single work item through its process, accounting for where time is spent from the outset of the work through delivery to the end user. Its aim is to eliminate waste. The problem is not just the time taken to load the warehouse, but the human effort required to administer the loading process. This waste analysis was undertaken by an organization using Oracle as their data warehouse. On the left are the tasks undertaken by database administrators every time they loaded data from source systems to their warehouse. The tasks are classified as transforms, inspections, value-adding or non-value-adding processes. Non-value processes are deemed waste: their database technology makes them necessary, but this work returns no value to the business. The total administration activity came to 920 minutes, of which more than half, 490 minutes, was spent in non-value processing, or wasted time. And this is just moving data from source systems to the warehouse. These figures take no account of all the tuning work required to make queries perform. When the company included that effort in their analysis, they concluded that their data warehousing activities on a traditional database product were, on average, more than 90% waste or non-value-added processing.
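The headline figure above reduces to one division: of the 920 minutes of load administration per cycle, 490 minutes return no value.

```python
# Waste share from the time value map: 490 of 920 minutes of load
# administration returned no value to the business.
total_minutes = 920
non_value_minutes = 490
waste_share = non_value_minutes / total_minutes  # just over half
```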
Solving the data load and query performance problem
Data load times, Oracle vs. Netezza:
Job 1: 5+ hours vs. 2 min 53 s
Job 2: 1 h 12 min 7 s vs. 3 min 29 s
Job 3: 1 h 25 min 56 s vs. 4 min 20 s
Job 4: 1 h 30 min 0 s vs. 5 min 42 s
“We act out the market every day to capitalize on opportunities. Complex merchandize reports that had taken days to process on the old platform now take five minutes on the new one. Simpler queries are even faster.” – Chief Information Officer at a large US retailer
While query speed improvements make headlines, fast data load speeds are important for successful data warehousing. Earlier in the presentation I spoke about the problems facing a retailer using a highly-indexed Oracle warehouse. This retailer moved from Oracle to Netezza and now doesn’t have to tune their warehouse. They have no indices to slow load performance. They have moved the service levels agreed with their business partners for data availability from 10.00am to 08.00am, although the data is always available by 07.00am. Many Netezza customers continuously load their warehouses in micro-batches, so the business makes decisions in near real-time as events occur.
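The speed-up factors implied by the load-time comparison above can be checked with a few lines of arithmetic (derived from the quoted times, not vendor-published ratios; job 1's Oracle time is taken as exactly 5 hours):

```python
# Speed-up factors implied by the Oracle-vs-Netezza load-time table.
def to_seconds(h=0, m=0, s=0):
    return h * 3600 + m * 60 + s

jobs = [  # (oracle_seconds, netezza_seconds)
    (to_seconds(h=5),             to_seconds(m=2, s=53)),
    (to_seconds(h=1, m=12, s=7),  to_seconds(m=3, s=29)),
    (to_seconds(h=1, m=25, s=56), to_seconds(m=4, s=20)),
    (to_seconds(h=1, m=30),       to_seconds(m=5, s=42)),
]
speedups = [round(o / n, 1) for o, n in jobs]  # roughly 104x down to 16x
```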
Netezza Loads Data at 2.5TB per Hour
Netezza is Simple to Deploy Since it is so Fast
Operations: simply load and go … it’s an appliance; minimal DBA tuning; no configuration or physical modeling; no indexes – out-of-the-box performance
ETL developers: no aggregate tables needed, so less ETL logic; faster load and transformation times
Business analysts: train-of-thought analysis, 10 to 100x faster; true ad hoc queries – no tuning, no indexes; complex queries against large datasets
Traditional Complexity … Netezza Simplicity
The traditional stack:
0. CREATE DATABASE TEST LOGFILE 'E:\OraData\TEST\LOG1TEST.ORA' SIZE 2M, 'E:\OraData\TEST\LOG2TEST.ORA' SIZE 2M, 'E:\OraData\TEST\LOG3TEST.ORA' SIZE 2M, 'E:\OraData\TEST\LOG4TEST.ORA' SIZE 2M, 'E:\OraData\TEST\LOG5TEST.ORA' SIZE 2M EXTENT MANAGEMENT LOCAL MAXDATAFILES 100 DATAFILE 'E:\OraData\TEST\SYS1TEST.ORA' SIZE 50M DEFAULT TEMPORARY TABLESPACE temp TEMPFILE 'E:\OraData\TEST\TEMP.ORA' SIZE 50M UNDO TABLESPACE undo DATAFILE 'E:\OraData\TEST\UNDO.ORA' SIZE 50M NOARCHIVELOG CHARACTER SET WE8ISO8859P1;
1. Oracle table and indexes
2. Oracle tablespace
3. Oracle datafile
4. Veritas file
5. Veritas file system
6. Veritas striped logical volume
7. Veritas mirror/plex
8. Veritas sub-disk
9. SunOS raw device
10. Brocade SAN switch
11. EMC Symmetrix volume
12. EMC Symmetrix striped meta-volume
13. EMC Symmetrix hyper-volume
14. EMC Symmetrix remote volume (replication)
15. Days/weeks of planning meetings
Netezza, low (zero) touch:
CREATE DATABASE my_db;
Netezza Delivers Simplicity
Up and running 6 months before being trained
200x faster than the Oracle system
ROI in less than 3 months
“Allowing the business users access to the Netezza box was what sold it.” – Steve Taff, Executive Dir. of IT Services
We will hear directly from XO Communications in a video to follow my presentation.
Agenda
Netezza Solution Highlight: what is it? Why is it good?
Netezza Tour: specialized hardware and architecture; how Netezza runs database queries
Netezza Simplicity
Netezza Performance
POC – A Telco Company
Environment: Netezza TwinFin 12, full rack
Raw data volume:
Call Level Detail: 3TB (9 billion rows)
Financial Bill: 600GB (5.4 billion rows)
Customer Info: 60GB (91.1 million rows)
POC: Data Maintenance
Testing scenario | Raw data involved | Elapsed time
Insert into an empty table from a table with 750 million records | 270GB | 3m47s
Concurrently insert into 5 empty tables from 5 tables with 550 million records each | 990GB (198GB x 5) | 11m35s
Update all rows of a table with 1.5 billion records | – | 9m40s
Concurrently update all rows of 5 tables with 1.1 billion records each | 1,350GB (270GB x 5) | 44m44s
Delete 183 million records from a table with 550 million records | 198GB | 1m32s
Concurrently delete 183 million records from each of 5 tables with 550 million records each | – | 9m43s
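Two of the single-table figures above imply the effective throughput the appliance sustained (simple arithmetic on the quoted numbers, not rates reported by the POC itself):

```python
# Effective throughput implied by the POC's single-table figures:
# 270GB inserted in 3m47s and 198GB deleted in 1m32s.
def gb_per_sec(gb, minutes, seconds):
    return gb / (minutes * 60 + seconds)

insert_rate = gb_per_sec(270, 3, 47)  # roughly 1.19 GB/s
delete_rate = gb_per_sec(198, 1, 32)  # roughly 2.15 GB/s
```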
POC: Enquiries (With No Indexes)
Testing scenario | Tables involved | Single task | Concurrent tasks
Query on a single fact table | Call Level Detail (3TB, 9 billion records) | 1m16s | 2m49s (10 tasks)
Query joining 2 fact tables | Call Level Detail (3TB, 9 billion records) + Customer Info (60GB, 91.1 million records) | 22s | 58s (5 tasks)
Query for the top 100 call durations in different groups for a particular month | – | 1m53s | 4m10s (5 tasks)
Query joining a fact table to multiple dimension tables | 5 dimension tables | 5s | 14s (5 tasks)
Query joining a fact table with a subquery that joins another two fact tables | Financial Bill (600GB, 5.4 billion records) | 21s | 1m55s (5 tasks)
Catalina Marketing: Building loyalty one customer at a time
Coupon redemption rates: no targeting – 1%; basic targeting (e.g., offer a dog-food coupon to a customer buying dog food) – 6-10%; predictive models that find latent correlations – 25%
Marketing to a segment of one – 195 million US loyalty program members
Every coupon printed is unique to the individual customer
Customized based on three years' worth of purchase history (assuming use of a loyalty card)
Increased staff productivity – from 50 to 600 new models per year
Increased efficiency – from 4 hours to score a model to 60 seconds
In-database analytics at Catalina Marketing: at the point of sale of Catalina's retail customers, real-time analysis of the contents of shopping baskets triggers printouts of coupons handed to shoppers with their receipt at checkout. Each coupon is unique: two shoppers checking out one after the other, with identical items in their carts, will get different coupons based on their buying histories combined with third-party demographic data. By using predictive models based on analysis of three years' worth of purchase history for 195 million U.S. customer loyalty program members, Catalina's customers achieve redemption rates of 25%. Advanced analytics succeeds at building customer loyalty by treating each customer as an individual, or a market segment of one. By adopting Netezza, Catalina has increased the productivity of their analytics staff and dramatically reduced the time required to score their models.