Valgrind, the anti-Alzheimer pill for your memory problems


Valgrind, the anti-Alzheimer pill for your memory problems Philippe Waroquiers FOSDEM 2017 valgrind devroom

Talk content
Discuss/demo a new functionality that provides an easier way to visualise memory usage and other types of data.
If time permits: discuss/demo memory pools and leak heuristics.

How much memory, allocated where?
Massif records the evolution of memory usage:
- by taking snapshots regularly
- some snapshots are detailed, showing the allocation stack traces
- the peak snapshot is also detailed
In 3.13 (SVN), a new 'tree view' memory feature:
- --xtree-memory=none|allocs|full produces a memory report at the end of execution
- the report can also be produced on demand, using vgdb

Demo: who allocated how much memory?

valgrind --tool=massif --xtree-memory=full \
    soffice.bin --invisible \
    --convert-to pdf fosdem2017_optim_xt_hg.odp

# Specifying a .ms extension produces Massif format:
valgrind --tool=massif --xtree-memory=full \
    --xtree-memory-file=xtmemory.ms.%p \
    mfg

# The default, xtmemory.kcg.%p, produces callgrind format:
valgrind --tool=massif --xtree-memory=full \
    mfg

Demo: ms_print output

Demo: massif-visualizer

Demo: kcachegrind, xtree memory full

Demo: massif-visualizer, xtree memory full

Massif format and --xtree-memory=full

valgrind --tool=massif --xtree-memory=full \
    --xtree-memory-file=xtmemory.ms.%p \
    soffice.bin --invisible \
    --convert-to pdf fosdem2017_optim_xt_hg.odp

The Massif format is:
- not designed for huge sets of data
- not designed for more than one 'data kind'
=> the command above generates a 6.3 GB file
=> very (too?) heavy, e.g. for massif-visualizer
=> for xtree, the kcachegrind format is better

Xtree and memcheck/helgrind
As with Massif, --xtree-memory=allocs|full produces a report at the end of execution.
To produce a report during execution: vgdb xtmemory <filename>
Supported by massif/memcheck/helgrind.
Can be done even when --xtree-memory=none => it then produces an 'allocs' report.

How to best visualise memory?
- massif + ms_print: not that easy for big apps
- massif + massif-visualizer: nice evolution graph, but digging into stack traces is not that easy
- massif-visualizer + --xtree-memory=full: not really appropriate
- kcachegrind + --xtree-memory=full: easy to use, but the file can become huge
Maybe kcachegrind and the callgrind format should be extended to present the evolution of data over time?

Xtree and memcheck leak

valgrind --tool=memcheck --xtree-leak=yes \
    soffice.bin --invisible \
    --convert-to pdf fosdem2017_optim_xt_hg.odp

- --xtree-leak=yes: the final leak report as an xtree is automatically a 'full' leak report
- the report file name is controlled by --xtree-leak-file=xtleak.kcg.%p
- the vgdb monitor command 'leak_check' can also produce xtree leak reports
Note: only the callgrind (kcg) format is supported for xtree leak reports.

Xtree for syscalls (experimental, not (yet?) in SVN)

valgrind --tool=massif --xtree-syscall=yes \
    soffice.bin --invisible \
    --convert-to pdf fosdem2017_optim_xt_hg.odp

--xtree-syscall=yes currently keeps track of:
- number of syscalls
- number of failed syscalls
- syscall time (microseconds)
- bytes read
- bytes written

kcachegrind xtree syscalls

kcachegrind cycles
Cycles in a call graph are created by:
- recursive calls, e.g. fnA calls fnB, which calls fnA
- 'superposition' of two non-recursive stack traces:
  - main calls fnA, which calls fnB
  - main calls fnB, which calls fnA
Cycles and kcachegrind visualisation:
- inclusive costs cannot be added
- kcachegrind represents all the functions of a cycle by a single 'Cycle x' function
- complex cycles are not easy to analyse/grasp

Xtree and kcachegrind cycles
Xtree stack traces:
- code addresses are translated to function names
- function names are displayed by kcachegrind
What if there is no symbol for a code address?
- currently, xtree uses an 'UnknownFn???' function name
- consequence: many (artificial) cycles
=> some more ideas and work needed here:
- maybe a heuristic to guess a function's start address?
- maybe use callgrind-like call-stack follow-up logic?

Xtree is work in progress ...
In 3.13 (SVN), xtree already provides an easy way to visualise memory usage and leaks.
Some further work:
- xtree for system calls
- better handling of unknown function names
- optionally keeping the leaf function (malloc/free/...)
- allowing visualisation of the evolution of data
- kcachegrind improvements: virtual top function, showing the exact call stack, ...
- ... your ideas/suggestions here ...

Questions about xtree?

Valgrind and address info
Address information is based on:
- knowledge of malloc/free calls
- debug info of variables, when using --read-var-info=yes
- mempool information
- ...
Valgrind can give information about an address, complementary to (and sometimes better than) gdb.

mempool
(Most) valgrind tools need to understand memory allocation => such tools replace malloc/free.
Some applications have specific allocators, e.g. for fast allocation of small blocks plus releasing all blocks in one operation.
How to "valgrind" these specific allocators?
- either a compile-time option to use malloc/free instead
- or describe these specific allocators using 'client requests' (documented in the user manual and valgrind.h)

Memcheck leak search heuristics
"Possibly lost": a block found only via an "inner pointer" (a pointer to the inside of the block).
Such inner pointers are a common pattern, e.g. in C++ library implementations.
Memcheck has heuristics to detect some of these patterns: C++ std::string, C++ multiple inheritance, C++ new[] arrays of objects, length64 (sqlite).
Note: these heuristics depend on the ABI.

Questions?