
Lecture 10: Kernel Modules and Device Drivers
ECE 412: Microcomputer Laboratory

Objectives
– Review the Linux environment
– Device classification
– Review kernel modules
– PCMCIA example
– Skeleton example of implementing a device driver for a BlockRAM-based device

Review Questions
What are some of the services/features that an IPIF-generated interface to the PLB/OPB bus can provide?
– Byte steering for devices with narrow data widths
– Address range checking to detect transactions your device should handle
– User-defined registers
– Interface to the interrupt hardware
– Fixed-length burst transfers
– DMA engine
– Read/write FIFOs

Linux Execution Environment
[Figure: application programs call libraries, which invoke kernel subsystems]

Device Classification
Most device drivers can be classified into one of three categories.
Character devices:
– Console and parallel ports are examples.
– Implement a stream abstraction with operations such as the open, close, read, and write system calls.
– Filesystem nodes such as /dev/tty1 and /dev/lp1 are used to access character devices.
– Differ from regular files in that you usually cannot step backward in a stream.

Device Classification (cont.)
Block devices:
– A block device is something that can host a filesystem (e.g., a disk) and can be accessed only in multiples of a block.
– Linux allows users to treat block devices as character devices (/dev/hda1) with transfers of any number of bytes.
– Block and character devices differ primarily in the way data is managed internally by the kernel at the kernel/driver interface.
– The difference between block and char is transparent to the user.
Network interfaces:
– In charge of sending and receiving data packets.
– Network interfaces are not stream-oriented and therefore are not easily mapped to a node in the filesystem, such as /dev/tty1.
– Communication between the kernel and a network driver is not through read/write, but rather through packet-transfer functions.

Linux Execution Environment (review)
[Figure: execution paths from user programs through libraries into the kernel]

Process and System Calls
– Process: a program in execution. Each process has a unique pid; processes form a hierarchy.
– User address space vs. kernel address space.
– An application requests OS services through the TRAP mechanism:
  x86: syscall number in the eax register, exception (int $0x80)
  result = read(file descriptor, user buffer, amount in bytes)
  read returns the actual number of bytes transferred, or an error code (< 0)
– The kernel has access to kernel address space (code, data, and device ports and memory) and to user address space, but only that of the currently running process.
– Current process descriptor: current points to it (current->pid is the pid of the running process).
– Two stacks per process: a user stack and a kernel stack.
– Special instructions copy parameters and results between user and kernel space.
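As a user-space illustration of this path, here is a minimal sketch (the file path is only an example; note that libc reports a kernel error code to the program as -1 with errno set):

  #include <fcntl.h>    /* open */
  #include <stdio.h>
  #include <unistd.h>   /* read, close */

  int main(void)
  {
      char buf[64];
      int fd = open("/etc/hostname", O_RDONLY);  /* any readable file or device node */
      if (fd < 0)
          return 1;

      /* read() traps into the kernel (syscall number in eax, int $0x80 on x86);
         the kernel copies data from the file or device into the user buffer. */
      ssize_t n = read(fd, buf, sizeof(buf));
      if (n < 0)
          perror("read");                        /* the kernel returned an error code */
      else
          printf("read %zd bytes\n", n);

      close(fd);
      return 0;
  }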

Kernel Modules
Kernel modules can be inserted and unloaded dynamically:
– Kernel code extensibility at run time.
– insmod / rmmod / lsmod commands; look at /proc/modules.
– The kernel and servers can detect and install them automatically, for example cardmgr (the PC Card services manager).
Example line from /proc/modules (middle columns elided):
  nfs … Live 0x129b0000
– The first column contains the name of the module.
– The second column gives the memory size of the module, in bytes.
– The third column lists how many instances of the module are currently loaded; a value of zero represents an unloaded module.
– The fourth column states whether the module depends on other modules in order to function, and lists those modules.
– The fifth column lists the load state of the module: Live, Loading, or Unloading are the only possible values.
– The sixth column lists the current kernel memory offset for the loaded module, which can be useful for debugging or for profiling tools such as oprofile.

Module Execution
Modules execute in kernel space:
– Access to kernel resources (memory, I/O ports) and global variables (look at /proc/ksyms).
– Export their own visible variables: register_symtab().
– Can implement new kernel services (new system calls, policies) or low-level drivers (new devices, mechanisms).
– Use the internal kernel interface and can interact with other modules.
– Need to implement the init_module and cleanup_module entry points, plus subsystem-specific functions (open, read, write, close, ioctl, …).

Hello World
hello_world_module.c:

  #define MODULE
  #include <linux/module.h>

  int init_module(void)
  {
      printk("<1>Hello, world\n");   /* <1> is the message priority */
      return 0;
  }

  void cleanup_module(void)
  {
      printk("<1>Goodbye cruel world\n");
  }

printk (a basic kernel service) outputs messages to the console and/or to /var/log/messages.
To compile and run this code:
– root# gcc -c hello_world_module.c
– root# insmod hello_world_module.o
– root# rmmod hello_world_module

Linking a module to the kernel (figure from Rubini's Linux Device Drivers book)

Register Capability
You can register a new device driver with the kernel:
  int register_chrdev(unsigned int major, const char *name, struct file_operations *fops);
– A negative return value indicates an error; 0 or a positive value indicates success.
– major: the major number being requested (a number < 128 or 256).
– name: the name of the device (which appears in /proc/devices).
– fops: a pointer to a global jump table used to invoke driver functions.
Then give programs a name by which they can request the driver, through a device node in /dev:
– To create a char device node with major 254 and minor 0, use: mknod /dev/memory_common c 254 0
– Minor numbers should be in the range 0 to 255. (Generally, the major number identifies the device driver and the minor number identifies a particular device, possibly one of many, that the driver controls.)
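A minimal sketch of how a module might use this call, in the 2.4-era style of these slides (the memory_common name and major 254 come from the mknod example above; the read fop is a stub):

  #include <linux/module.h>
  #include <linux/kernel.h>
  #include <linux/fs.h>            /* register_chrdev, struct file_operations */

  #define MEMORY_MAJOR 254         /* major number from the mknod example */

  static ssize_t memory_read(struct file *filp, char *buf,
                             size_t count, loff_t *f_pos)
  {
      return 0;                    /* stub: a real driver copies device data to buf */
  }

  static struct file_operations memory_fops = {
      .read = memory_read,         /* jump-table entry the kernel invokes */
  };

  int init_module(void)
  {
      int result = register_chrdev(MEMORY_MAJOR, "memory_common", &memory_fops);
      if (result < 0)
          printk("<1>memory_common: cannot get major %d\n", MEMORY_MAJOR);
      return result < 0 ? result : 0;
  }

  void cleanup_module(void)
  {
      unregister_chrdev(MEMORY_MAJOR, "memory_common");
  }

After insmod, the mknod command above creates the /dev node that routes open()/read() calls on /dev/memory_common to these fops.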

PCMCIA Read/Write: Common/Attribute Memory
[Diagram: the read/write path from user space to the card]
USER SPACE:
– Application: data = mem_read(address, type); mem_write(address, data, type)
– libc file I/O: open(/dev/memory_[common|attribute]); lseek(fd, address); read(fd, buf, 1); write(fd, data, 1)
KERNEL SPACE:
– /dev/… nodes with PCMCIA-registered memory fops
– memory_read(), memory_write(): map kernel memory to the I/O window; copy from PCMCIA to user (&buf); copy from user to PCMCIA (&data)
– On card insertion, card_memory_config: read the CIS, configure the I/O window, configure the IRQ, register the R/W fops
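A sketch of what the memory_read() box might contain, in 2.4-era style (window_base and the byte-at-a-time loop are illustrative assumptions, not the actual driver):

  #include <linux/errno.h>
  #include <linux/fs.h>
  #include <asm/uaccess.h>         /* copy_to_user */

  /* Illustrative: base of the card's memory window after card_memory_config
     has mapped it into kernel memory. */
  static volatile unsigned char *window_base;

  static ssize_t memory_read(struct file *filp, char *buf,
                             size_t count, loff_t *f_pos)
  {
      size_t i;
      for (i = 0; i < count; i++) {
          unsigned char byte = window_base[*f_pos + i]; /* read from the I/O window */
          if (copy_to_user(buf + i, &byte, 1))          /* copy to the user's &buf */
              return -EFAULT;
      }
      *f_pos += count;
      return count;                /* actual number of bytes transferred */
  }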

PCMCIA Button Read: Interrupt Handling
[Diagram: the blocking-read path for the card's button]
USER SPACE: the same open(/dev/memory_common), lseek(fd, address), read(fd, buf, 1) path as before.
KERNEL SPACE:
– memory_button_read(): interruptible_sleep_on(PC->queue), then memory_read(): map kernel memory to the I/O window, copy from PCMCIA to the user's &buf
– On card insertion, card_memory_config additionally configures the IRQ handler
– On a button interrupt, int_handler: wake_up(PC->queue)
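The sleep/wake pairing from the diagram as a 2.4-era sketch (button_queue stands in for the diagram's PC->queue; memory_read() is the routine sketched above):

  #include <linux/fs.h>
  #include <linux/sched.h>         /* interruptible_sleep_on, wake_up */
  #include <linux/wait.h>          /* DECLARE_WAIT_QUEUE_HEAD */

  static DECLARE_WAIT_QUEUE_HEAD(button_queue);  /* the PC->queue of the diagram */

  extern ssize_t memory_read(struct file *, char *, size_t, loff_t *);

  /* read() path: block until the button interrupt fires, then do a normal read */
  static ssize_t memory_button_read(struct file *filp, char *buf,
                                    size_t count, loff_t *f_pos)
  {
      interruptible_sleep_on(&button_queue);     /* sleep until woken */
      return memory_read(filp, buf, count, f_pos);
  }

  /* interrupt handler installed by card_memory_config */
  static void int_handler(int irq, void *dev_id, struct pt_regs *regs)
  {
      wake_up(&button_queue);                    /* wake the sleeping reader */
  }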

Skeleton Example: OCM-Based BlockRAM
– PowerPC has an OCM (on-chip memory) bus that lets you attach fast memory to the cache.
– Xilinx provides a core (dso_if_ocm) that handles the interface to the OCM and outputs BRAM control signals.
  Found under Project -> Add/Edit Cores.
  Creates an interface that detects accesses to a specified physical address range and outputs control signals for a BlockRAM.

Software-Side Issues
The Xilinx core handles the BlockRAM interface on the hardware side, but we still need to make the BlockRAM visible/accessible to software.
Two issues:
– Programs operate on virtual addresses, even when running as root.
– Ideally, we want to make the BlockRAM visible to user-mode programs, but user-mode programs can't set virtual-to-physical address mappings.

Direct Approach: Use mmap()
Only works for code running as root:

  fd = open("/dev/mem", O_RDWR);
  bram = mmap((void *)BRAM_ADDR, 2048, PROT_READ | PROT_WRITE, MAP_SHARED, fd, BRAM_PHYS);
  assert(bram == (void *)BRAM_ADDR);

(BRAM_ADDR stands for the desired virtual address and BRAM_PHYS for the BlockRAM's physical address; both are board-specific constants.)
– open() creates a file descriptor for the /dev entry that describes physical memory.
– mmap() maps 2048 bytes of /dev/mem, starting at the BlockRAM's physical address (the offset argument), onto the program's address space.
– The first argument requests that those bytes be mapped at a particular virtual address.
– The assert checks that mmap() returned the requested address, since mmap() isn't required to honor that request.

Better Approach: Device Driver
– Create a device driver module and install it into Linux.
– The device driver module maps the BRAM onto the address space of the currently running program.

Device Driver
Device drivers provide mechanisms, not policy.
– Mechanism: defines what capabilities are provided.
– Policy: defines how those capabilities can be used.
This strategy allows flexibility. The driver controls the hardware and provides an abstract interface to its capabilities; ideally, it imposes no restrictions (no policy) on how applications use the hardware.
For example, X manages the graphics hardware and provides an interface to user programs, while window managers implement a particular policy and know nothing about the hardware.
Kernel applications build policies on top of the driver. For the floppy disk, for example, policy determines who has access and the type of access (direct or as a filesystem); the driver itself just makes the disk look like an array of blocks.
Courtesy of UMBC

Device Driver Outline
1. Obtain the memory-map semaphore of the currently running program (to prevent overlapping changes).
2. Insert a new virtual memory area (VMA) for the BRAM:
3. Call get_unmapped_area() with the physical address range of the BRAM.
4. Allocate and initialize the VMA for the BRAM.
5. Call remap_page_range() to build the page tables.
6. Use insert_vm_struct() and make_pages_present() to enable access to the new pages.
See "Running Linux on a Xilinx XUP Board" (on the web, written by John Kelm) for more information; a sketch of the page-table step follows below.
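For comparison, here is a minimal 2.4-era sketch of the page-table step (step 5) written as an mmap fop, the more conventional route in which the kernel itself performs steps 1-4 when a program calls mmap() on the device node. BRAM_PHYS is an illustrative placeholder for the board-specific physical address:

  #include <linux/errno.h>
  #include <linux/fs.h>
  #include <linux/mm.h>            /* remap_page_range, struct vm_area_struct */

  #define BRAM_PHYS 0x80000000UL   /* illustrative: board-specific BRAM physical address */

  /* By the time this runs, the kernel has already taken the memory-map
     semaphore, found an unmapped area, and allocated/initialized the VMA
     (steps 1-4 above); the driver only builds the page tables. */
  static int bram_mmap(struct file *filp, struct vm_area_struct *vma)
  {
      unsigned long size = vma->vm_end - vma->vm_start;

      /* 2.4-era call; later kernels use remap_pfn_range() instead */
      if (remap_page_range(vma->vm_start, BRAM_PHYS, size, vma->vm_page_prot))
          return -EAGAIN;
      return 0;
  }

  static struct file_operations bram_fops = {
      .mmap = bram_mmap,
  };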

Next Time
Quiz 1