What's New in Infernalis

What's New in Infernalis The Rise of the Vampire Squid By Bryan Stillwell & Ian Colle - February 9, 2016

Agenda
- Welcome (food/drinks)
- Introductions
- Presentations
  - What's New in Infernalis (Bryan Stillwell)
  - What's Coming in Jewel (Ian Colle)
- Q&A
- Discussions

Introductions
- Who are you?
- What brings you here today?

Future Meetup Ideas
- How you're using Ceph
- CephFS
- RBD (RADOS Block Device)
- RGW (RADOS Gateway)
- RADOS
- Monitoring (Calamari/VSP/Grafana)
- Automation (Puppet/Ansible/Salt)
- Cache Tiering
- CRUSH
- Performance Tuning
- CBT (The Ceph Benchmarking Tool)

What's New in Infernalis (v9.2.x)

Old Version Numbering

Codename    Release Type   Versions
Argonaut    Stable         0.48 -> 0.48.3
Bobtail     Stable         0.56 -> 0.56.7
Cuttlefish  Stable         0.61 -> 0.61.9
Dumpling    LTS            0.67 -> 0.67.11
Emperor     Stable         0.72 -> 0.72.2
Firefly     LTS            0.80 -> 0.80.11
Giant       Stable         0.87 -> 0.87.2
Hammer      LTS            0.94 -> 0.94.5

New Version Numbering: x.y.z
- x = stable release number (I == 9, J == 10, K == 11)
- y = release type:
  - 0 = development (early testers)
  - 1 = release candidate (test clusters)
  - 2 = stable (production ready)
- z = revision/bugfix release

New Version Numbering

Codename    Type    Versions
Infernalis  Stable  9.2.x
Jewel       Stable  10.2.x
Kraken      Stable  11.2.x

Note: all releases are Stable now; the concept of LTS has been removed.

General Improvements
- systemd (except for Ubuntu Trusty)
  - systemctl start ceph-osd*
  - systemctl stop ceph-osd@42
- Red Hat distros now have an SELinux policy
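The systemd integration can be sketched as follows. These commands assume the unit and target names shipped with the stock Infernalis packages (`ceph-osd@<id>` instance units and a `ceph.target` umbrella); verify against your distro's packaging.

```shell
# Start every OSD instance unit on this host (glob must be quoted from the shell)
sudo systemctl start 'ceph-osd@*'

# Stop and inspect a single OSD by id
sudo systemctl stop ceph-osd@42
sudo systemctl status ceph-osd@42

# Enable one OSD instance to start at boot
sudo systemctl enable ceph-osd@42

# Restart all Ceph daemons on this host via the umbrella target
sudo systemctl restart ceph.target
```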

General Improvements
- Ceph daemons now run as the 'ceph' user
  - chown -R ceph:ceph /var/lib/ceph
  - Time consuming; must be done on mon nodes as well
  - Known problem: journals not getting chown'd; a reboot fixes this, but only if you're using GPT
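A minimal sketch of the ownership migration during an upgrade. The paths are the standard Ceph locations, but the journal device path is hypothetical; daemons must be stopped first.

```shell
# Stop all Ceph daemons on this host before re-owning files
sudo systemctl stop ceph.target

# Re-own the data and log directories (slow on large OSDs);
# repeat this on mon nodes as well
sudo chown -R ceph:ceph /var/lib/ceph /var/log/ceph

# Journals on raw partitions may need re-owning separately, e.g.:
# sudo chown ceph:ceph /dev/sdb2    # hypothetical journal partition
```

On GPT-partitioned journals, the udev rules set the correct ownership automatically at the next reboot, which is why a reboot "fixes" the journal problem only in that case.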

RADOS Improvements
- Cache tier improvements
- SHEC erasure coding no longer experimental (SHEC = Shingled Erasure Code, contributed by Fujitsu; improves recovery efficiency)
- Unified queue for handling client I/O, recovery, scrubbing, and snapshot trimming
- Many ceph-objectstore-tool improvements
- Cleanup of the ObjectStore API to facilitate new backends like BlueStore
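With SHEC out of experimental status, a pool can be created on a SHEC profile along these lines. The profile name, pool name, and k/m/c values are illustrative only.

```shell
# Create a SHEC erasure-code profile:
#   k = data chunks, m = parity chunks,
#   c = number of OSD failures recoverable with reduced reads
ceph osd erasure-code-profile set shec-profile \
    plugin=shec k=4 m=3 c=2

# Create an erasure-coded pool (128 placement groups) using that profile
ceph osd pool create ecpool 128 128 erasure shec-profile
```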

RGW Improvements
- Added support for Swift object expiration
- Many Swift API compatibility improvements

Swift expirations work well for storing database backups, for example.
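Expiration uses the standard Swift headers, so the stock Swift client works against RGW. The container and object names below are hypothetical.

```shell
# Upload a backup that expires 24 hours (86400 s) after upload
swift upload backups db-dump.sql.gz -H "X-Delete-After: 86400"

# Or set an absolute expiry time (Unix epoch) on an existing object
swift post backups db-dump.sql.gz -H "X-Delete-At: 1456531200"
```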

RBD Improvements
- The --size argument now supports suffixes (Hammer 0.94.5 took a raw number of megabytes; Infernalis 9.2.0 accepts suffixes like 10G)
- The default image format also changed
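The difference can be sketched as follows; the pool and image names are hypothetical.

```shell
# Hammer (0.94.5): --size is a raw number, interpreted as megabytes
rbd create rbd/test-hammer --size 10240

# Infernalis (9.2.0): size suffixes are accepted
rbd create rbd/test-infernalis --size 10G
```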

RBD Improvements
- The rbd du command now shows actual usage (quickly, if fast-diff is enabled)
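For example (image and pool names hypothetical); without fast-diff the command still works, but it has to scan the image's objects, which is much slower.

```shell
# Report provisioned vs. actual usage for one image and its snapshots;
# returns quickly when the fast-diff feature is enabled on the image
rbd du rbd/test1

# Usage for every image in a pool
rbd du -p rbd
```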

RBD Improvements
- Many stability improvements to object-map
- The object-map and exclusive-lock features can now be enabled/disabled dynamically

Object-map was added during the Hammer release cycle. It tracks which blocks of the image are actually allocated and where.
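A sketch of toggling the features on an existing image (names hypothetical). Object-map depends on exclusive-lock, so the ordering matters.

```shell
# Turn on exclusive-lock, then object-map, without recreating the image
rbd feature enable rbd/test1 exclusive-lock
rbd feature enable rbd/test1 object-map

# ...and turn them off again dynamically
rbd feature disable rbd/test1 object-map
rbd feature disable rbd/test1 exclusive-lock

# Rebuild a stale or invalid object map if needed
rbd object-map rebuild rbd/test1
```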

RBD Improvements New rbd status command
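Usage is a one-liner (image name hypothetical); it reports the watchers, i.e. the clients that currently have the image open.

```shell
# Show which clients are watching (using) the image
rbd status rbd/test1
```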

RBD Improvements
- You can now store user metadata and set persistent librbd options on individual images
- Deep-flatten now supports snapshots
- The export-diff tool is now faster (it now uses aio)
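Per-image metadata is managed with the rbd image-meta subcommands; keys with a conf_ prefix are interpreted as persistent librbd options for that image. Image name and key/value pairs below are hypothetical.

```shell
# Attach arbitrary user metadata to an image and list it back
rbd image-meta set rbd/test1 owner "db-team"
rbd image-meta list rbd/test1

# Persistent librbd option: disable client-side caching for this image only
rbd image-meta set rbd/test1 conf_rbd_cache false
```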

CephFS Improvements
- Snapshots can now be renamed
- Ongoing improvements to administration, diagnostics, and check/repair tools
- Dramatic improvements to caching and revocation of client cache state due to unused inodes
- The ceph-fuse client behaves better on 32-bit hosts
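Snapshot rename works from any CephFS mount via the hidden .snap directory; the mount point and snapshot names here are hypothetical.

```shell
# Create a snapshot of a directory by making a subdirectory under .snap
mkdir /mnt/cephfs/mydir/.snap/before-upgrade

# New in Infernalis: rename the snapshot with a plain mv
mv /mnt/cephfs/mydir/.snap/before-upgrade /mnt/cephfs/mydir/.snap/2016-02-09
```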

What's Coming in Jewel (v10.2.x)

RADOS Improvements
- Continuing to make scrubbing smarter (don't scrub when the OSD is busy; once a scrub starts, finish it as quickly as possible)
- Further ceph-objectstore-tool improvements
- Still evaluating tcmalloc 2.4 vs. jemalloc performance
- BlueStore! Keeps metadata in a key-value database (RocksDB) and writes data directly to the block device, greatly improving RADOS performance
- Further cache-tiering improvements (but still caveat emptor)
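Some of the scrub-throttling behavior is already tunable from ceph.conf. The option names below are believed current for Infernalis/Jewel-era OSDs and the values are illustrative; check your release's documentation before relying on them.

```ini
[osd]
# Skip starting new scrubs while recent load average is above this threshold
osd scrub load threshold = 0.5

# Confine routine scrubs to off-peak hours (local time on the OSD host)
osd scrub begin hour = 1
osd scrub end hour = 6
```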

RGW Improvements
- NFS for RGW
- Multisite v2
  - Originally master-slave with a sync agent, which only allowed writes in the master location; now master-master
- Keystone v3 support
- Swift per-tenant namespace
- Static web hosting in S3 buckets
- Passes Tempest

RBD Improvements
- Async RBD mirroring
- iSCSI integration

CephFS Improvements
- Continued improvements to administration, diagnostics, and check/repair tools
- Active-passive MDS now considered a tech preview
- Stabilization work on active-active

General
- Moved from a Ceph Developer Summit (CDS) every 3-4 months to a Ceph Design Monthly (CDM) on the first Wednesday of each month, alternating between European- and Asian-friendly times
- Next CDM: 02 MAR 2016, 2100 EST
- Videos available here: https://www.youtube.com/channel/UCno-Fry25FJ7B4RycCxOtfw

Questions?