OpenMosix, OpenSSI, and LinuxPMI
Clusters
[Figure: 64 Raspberry Pis in a Lego enclosure]
- Constructed from standard computers (nodes) without any shared physical memory
- Single System Image (SSI): a distributed-computing method
- An SSI cluster is a collection of machines that together appear as a single machine
- Linux cluster technologies: high availability (HA), high performance (HP), load leveling, web service, storage, database
OpenSSI Project
- Brings open-source cluster technologies together
- Addresses all cluster environments with three key goals in mind: availability, scalability, and manageability
- Takes the best of each cluster solution
- Designed with modularity in mind
OpenSSI Components
- NSC (NonStop Clusters for UnixWare)
- CI (Cluster Infrastructure)
- GFS (Global File System)
- DLM (Distributed Lock Manager)
- LVS (Linux Virtual Server)
- Lustre
- Mosix
- Scyld / Beowulf
- HA (High Availability)
- UML (User-Mode Linux)
OpenMosix
- A kernel extension for single-system-image (SSI) clustering on a Unix-like kernel, such as Linux
- Goals:
  - Improve cluster-wide performance
  - Create a convenient multiuser, time-sharing environment
- The run-time environment is a computing cluster
- Capable of utilizing both UP (uniprocessor) and SMP nodes
OpenMosix Implementation
- A Preemptive Process Migration (PPM) mechanism
- A set of algorithms for adaptive resource sharing
- Implemented at kernel level via a loadable module, so the kernel itself remains unmodified
- Transparent to the application level
- No master-slave relationship between nodes: each node makes decisions independently
- Dynamic configuration
Preemptive Process Migration (PPM)
- Any process can migrate, at any time
- Migration is driven by algorithms; the user can manually override them
- Every process has a Unique Home-Node (UHN)
- A migrating process is divided into two contexts: the user context and the system context
PPM Implementation
- User context (the "remote")
  - Can migrate many times
  - Contains the code, stack, data, memory maps, and registers of the process
- System context (the "deputy")
  - UHN-dependent; does not migrate
  - Contains a description of the process's resources and a kernel stack for executing system code on behalf of the process
- The interface between the two contexts is well defined, so information about their interactions is easily forwarded
- Implemented at the link layer
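The deputy/remote split above can be sketched as follows. This is a conceptual model only, not OpenMosix code: the class names and the simulated `read` call are hypothetical, and the real mechanism runs inside the kernel at the link layer.

```python
class Deputy:
    """Stays on the Unique Home-Node (UHN); executes system code on
    behalf of the migrated process."""
    def __init__(self, open_files):
        self.open_files = open_files  # site-dependent resources stay here

    def execute_syscall(self, name, *args):
        # In the real system this runs kernel code on the UHN; here we
        # simulate a file read against home-node state.
        if name == "read":
            fd, size = args
            return self.open_files[fd][:size]
        raise NotImplementedError(name)


class RemoteContext:
    """The migratable user context: code, stack, data, registers."""
    def __init__(self, deputy):
        self.deputy = deputy

    def syscall(self, name, *args):
        # Intercepted at the link layer and forwarded to the deputy,
        # then executed synchronously on the home node.
        return self.deputy.execute_syscall(name, *args)


deputy = Deputy(open_files={3: b"hello from the home node"})
remote = RemoteContext(deputy)
print(remote.syscall("read", 3, 5))  # the extra round trip is the deputy overhead
```

The forwarding hop in `syscall` is exactly where the overhead for system calls and file/network access, mentioned on the next slide, comes from.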
Digging Deeper…
- Characteristics of a migrated process:
  - Location transparency
  - System calls are executed synchronously, intercepted by the remote site's link layer
  - Resources are either site-dependent or site-independent
- Drawback of the deputy approach: extra overhead in the execution of system calls and in file and network access operations
Resource Sharing Algorithms
- Dynamic load balancing
- Memory ushering
- Scheduling
Dynamic Load Balancing
- Continuously attempts to reduce the load difference between nodes
- The algorithm is decentralized
- Key algorithmic data: the number and speed of each node's processors
- Responds dynamically to changes in node loads and in the runtime characteristics of processes
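A minimal sketch of one decentralized balancing step, under illustrative assumptions (the node names, threshold, and load formula are hypothetical, not the actual OpenMosix algorithm): each node normalizes its load by its processing capacity, compares it with a peer, and migrates a process when the difference is large enough.

```python
import random

def normalized_load(raw_load, cpu_count, cpu_speed):
    # Faster nodes with more processors absorb more work, so raw load
    # is scaled by the node's processing capacity.
    return raw_load / (cpu_count * cpu_speed)

def should_migrate(my_load, peer_load, threshold=0.25):
    # Migrate only when it meaningfully reduces the load difference.
    return my_load - peer_load > threshold

nodes = {
    "node-a": normalized_load(raw_load=8.0, cpu_count=2, cpu_speed=1.0),
    "node-b": normalized_load(raw_load=1.0, cpu_count=4, cpu_speed=1.5),
}
# Decentralized: node-a picks one random peer rather than consulting a master.
peer = random.choice([n for n in nodes if n != "node-a"])
if should_migrate(nodes["node-a"], nodes[peer]):
    print(f"migrate one process from node-a to {peer}")
```

Because every node runs this loop independently against randomly chosen peers, no master-slave relationship is needed.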
Memory Ushering
- Goal: place the maximum number of processes in the cluster-wide RAM, avoiding thrashing and the swapping out of processes
- Triggered by excessive paging caused by a shortage of free memory
- Overrides the load-balancing algorithm: migrates a process to another node even if this creates an uneven load distribution
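The override behavior can be sketched as below; the threshold value and node statistics are illustrative assumptions, not figures from OpenMosix. When local free memory runs short, the target is chosen by free RAM alone, ignoring CPU load.

```python
def pick_target(nodes, free_mem_threshold_mb=64):
    """nodes: mapping of node name -> dict with 'free_mb' and 'load'.
    Returns a migration target, or None if memory is not scarce."""
    local = nodes["local"]
    if local["free_mb"] >= free_mem_threshold_mb:
        return None  # enough memory; normal load balancing applies
    # Memory ushering overrides load balancing: choose by free RAM only,
    # even if that produces an uneven load distribution.
    candidates = {n: v for n, v in nodes.items() if n != "local"}
    return max(candidates, key=lambda n: candidates[n]["free_mb"])

cluster = {
    "local":  {"free_mb": 12,  "load": 0.3},
    "node-b": {"free_mb": 512, "load": 0.9},  # busy but memory-rich
    "node-c": {"free_mb": 128, "load": 0.1},
}
print(pick_target(cluster))  # node-b is chosen despite its higher load
```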
Scheduling Algorithm
- Selects the node on which a given program should run
- Complication: the resources available in a cluster are heterogeneous, and their units of measure (memory, CPU, process communication) are incomparable
- Solution: a unified algorithmic framework
  - Converts the total usage of heterogeneous resources into a universal "cost"
  - Jobs are assigned to the machine where the cost is lowest
  - Modeled on a market-oriented economy
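A toy version of the unified cost framework, with entirely illustrative weights and cost function (the real framework's pricing is more elaborate): usage of each heterogeneous resource is folded into a single cost figure per node, and the job goes to the cheapest node.

```python
def node_cost(cpu_util, mem_util, comm_util, weights=(1.0, 1.0, 0.5)):
    # Market-oriented intuition: the "price" of a resource rises steeply
    # as it nears saturation, which a convex 1/(1 - utilization) term models.
    w_cpu, w_mem, w_comm = weights
    return (w_cpu / (1.0 - cpu_util)
            + w_mem / (1.0 - mem_util)
            + w_comm * comm_util)

nodes = {
    "node-a": node_cost(cpu_util=0.90, mem_util=0.50, comm_util=0.1),
    "node-b": node_cost(cpu_util=0.40, mem_util=0.60, comm_util=0.3),
}
best = min(nodes, key=nodes.get)
print(best)  # node-b: node-a's near-saturated CPU makes it expensive
```

The key point is that once everything is expressed in one currency, otherwise incomparable quantities such as CPU cycles and RAM can be traded off with a single `min`.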
LinuxPMI
- Extended from the now-discontinued OpenMosix 2.6 branch
- Multi-system-image software
- Works in clusters running diverse Linux distributions, although CPU types and certain kernel features must remain the same across nodes
- A set of Linux kernel patches implementing process migration: a program can move from one machine to another and return
The End
Questions?

Resources
- PDFs provided by Professor Fukuda:
  - IntroductionToOpenMosix.pdf
  - LoadBalancingOpenMosix.pdf
  - ssi-intro.pdf
- http://linuxpmi.org/trac/
- http://openssi.org/cgi-bin/view?page=openssi.html
- https://en.wikipedia.org/wiki/OpenMosix
- http://makezine.com/2012/09/13/raspberry-pi-and-lego-supercomputing-cluster/