1 Solaris Doors ● Weizhen Sun – sunweizhen@263.net
2 Agenda
> Doors Overview
> Doors Implementation
3 Doors Overview
> Concept: doors provide a facility for processes to issue procedure calls to functions in other processes running on the same system.
> A process can become a door server, exporting a function through a door.
> Other processes can then invoke that procedure through the door.
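To make the call flow concrete, here is a minimal sketch of a door server and client in C. The rendezvous file /tmp/echo_door and the echo-style server procedure are hypothetical choices for illustration, and error handling is omitted.

#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <stropts.h>
#include <sys/types.h>
#include <unistd.h>

#define DOOR_PATH "/tmp/echo_door"      /* hypothetical rendezvous file */

/* Door server procedure: runs in the server process for each door_call(). */
static void
serv_proc(void *cookie, char *argp, size_t arg_size, door_desc_t *dp, uint_t n_desc)
{
    char reply[128];

    (void) snprintf(reply, sizeof (reply), "echo: %.*s", (int)arg_size, argp);
    (void) door_return(reply, strlen(reply) + 1, NULL, 0);   /* does not return */
}

int
main(void)
{
    int did, fd;
    char rbuf[128];
    door_arg_t arg;

    if (fork() == 0) {
        /* Client side: give the server a moment to attach, then call the door. */
        (void) sleep(1);
        fd = open(DOOR_PATH, O_RDONLY);
        arg.data_ptr = "hello";
        arg.data_size = sizeof ("hello");
        arg.desc_ptr = NULL;
        arg.desc_num = 0;
        arg.rbuf = rbuf;
        arg.rsize = sizeof (rbuf);
        if (door_call(fd, &arg) == 0)
            (void) printf("client got: %s\n", arg.rbuf);
        return (0);
    }

    /* Server side: create the door and attach it to a file system name. */
    did = door_create(serv_proc, NULL, 0);
    (void) close(open(DOOR_PATH, O_CREAT | O_RDWR, 0644));
    (void) fattach(did, DOOR_PATH);
    (void) pause();                     /* serve door calls until killed */
    return (0);
}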
4 Solaris Doors Interfaces
> The door APIs were first available in Solaris 2.5.1; Solaris 2.6 was the first Solaris release to document and support them as a public interface.
> Solaris ships with a shared object library, libdoor.so, that provides the door APIs.
5 Solaris Doors Interfaces: the door APIs in Solaris 2.6 and Solaris 7
6 Solaris Doors Interfaces: the door APIs in Solaris 2.6 and Solaris 7
7 Doors Overview: how doors provide an interprocess communication mechanism
8 Doors Overview: local process call
9 Doors Overview: RPC when client and server are on the same machine
10 Doors Overview: RPC when client and server are not on the same machine
11 Doors Overview – IPC functions
> Sharing of data: shared memory
> Exchanging information and data: pipes, FIFOs, message queues
> Synchronizing access to shared resources: semaphores, mutexes, locks
> Remote procedure call: Sun RPC, Solaris Doors
12 How Unix processes share data
> Share data in files
> Share data in the kernel: pipes, System V message queues, System V semaphores
> Share data in shared memory
13 IPC object persistence
> Process-persistent: pipe, FIFO
> Kernel-persistent: System V shared memory, System V message queues, System V semaphores
> Filesystem-persistent
14 Pipe
> A special type of file that does not hold data on disk but can be opened by two different processes so that data can be passed between them.
> API: int pipe(int fd[2]); fd[0] is the read end, fd[1] is the write end.
> Features: half duplex; atomic writes (up to PIPE_BUF); usable only between related processes (those sharing a common ancestor).
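A minimal sketch of pipe(2) in use, with the parent writing and a forked child reading; error handling is mostly omitted.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int
main(void)
{
    int fd[2];
    char buf[64];
    ssize_t n;

    if (pipe(fd) == -1) {
        perror("pipe");
        return (1);
    }
    if (fork() == 0) {                    /* child: reads from fd[0] */
        (void) close(fd[1]);
        n = read(fd[0], buf, sizeof (buf));
        if (n > 0)
            (void) printf("child read %ld bytes: %.*s\n", (long)n, (int)n, buf);
        return (0);
    }
    (void) close(fd[0]);                  /* parent: writes to fd[1] */
    (void) write(fd[1], "hello via pipe", strlen("hello via pipe"));
    (void) close(fd[1]);
    (void) wait(NULL);
    return (0);
}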
15 Named pipe – FIFO
> Provides a communication path between processes on the same system, identified by a pathname in the filesystem.
> API: int mkfifo(const char *pathname, mode_t mode); then open/close (or fopen/fclose).
> Features: half duplex; opens block automatically until both ends are present; SIGPIPE is generated if a process writes to a FIFO whose reader has terminated; writes up to PIPE_BUF are atomic.
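A small sketch of FIFO use with mkfifo(3C); the path /tmp/demo_fifo is arbitrary, and the program is meant to be run twice, once with the argument "w" as the writer and once without arguments as the reader.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    const char *path = "/tmp/demo_fifo";   /* arbitrary FIFO name */
    char buf[64];
    ssize_t n;
    int fd;

    (void) mkfifo(path, 0644);             /* EEXIST on the second run is fine */

    if (argc > 1 && argv[1][0] == 'w') {   /* writer: ./a.out w */
        fd = open(path, O_WRONLY);         /* blocks until a reader opens */
        (void) write(fd, "hello via fifo", strlen("hello via fifo"));
    } else {                               /* reader: ./a.out */
        fd = open(path, O_RDONLY);         /* blocks until a writer opens */
        n = read(fd, buf, sizeof (buf));
        if (n > 0)
            (void) printf("read %ld bytes: %.*s\n", (long)n, (int)n, buf);
    }
    (void) close(fd);
    return (0);
}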
16 System limits
> OPEN_MAX – the maximum number of open files per process.
> PIPE_BUF – the maximum number of bytes that can be written to a pipe or FIFO atomically.
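Both limits can also be queried programmatically; a small sketch using sysconf(3C) and fpathconf(3C), the library equivalents of getconf(1):

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    int fd[2];

    (void) pipe(fd);
    (void) printf("OPEN_MAX = %ld\n", sysconf(_SC_OPEN_MAX));
    (void) printf("PIPE_BUF = %ld\n", fpathconf(fd[0], _PC_PIPE_BUF));
    return (0);
}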
17 Other commands to get system variables
> limit (csh) / ulimit (sh/bash/ksh)
> sysdef(1M) – lists all hardware devices, as well as pseudo devices, system devices, loadable modules, and the values of selected kernel tunable parameters.
> getconf(1) – gets configuration values, e.g. getconf OPEN_MAX
18 Doors Overview
> Doors provide a facility for processes to issue procedure calls to functions in other processes running on the same system.
19 Agenda
> IPC Overview
> System V IPC
> Posix IPC
> Tools for performance analysis
> System configuration
> Hands On Lab
20 System V IPC Overview
> Types: System V shared memory, System V message queues, System V semaphores
> API
21 System V IPC overview – key_t and ftok
> #include <sys/ipc.h>
> key_t ftok(const char *pathname, int id);
> Used when creating and opening IPC objects: key = id (8 bits) + st_dev (12 bits) + st_ino (12 bits)
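A minimal sketch of generating a key with ftok(3C); the path /tmp and the project id 'A' are arbitrary choices.

#include <stdio.h>
#include <sys/ipc.h>

int
main(void)
{
    key_t key;

    key = ftok("/tmp", 'A');        /* existing path plus arbitrary project id */
    if (key == (key_t)-1) {
        perror("ftok");
        return (1);
    }
    (void) printf("key = 0x%lx\n", (long)key);
    return (0);
}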
22 Kernel data structure for IPC
> The ipc_perm data structure, defined in /usr/include/sys/ipc.h
23 System V shared memory
> Concept: multiple processes share the same physical (RAM) memory pages.
> Features: extremely efficient (*); the module is dynamically loaded when required (e.g. via modload) and unloaded at system reboot or with modunload.
> Kernel resource consumption: shmid identifiers, the actual shared RAM pages, and data structures describing each shared segment.
24 System V shared memory – API
> Header file and system calls:
> #include <sys/shm.h>
> int shmget(key_t key, size_t size, int shmflg);
> void *shmat(int shmid, const void *shmaddr, int shmflg);
> int shmctl(int shmid, int cmd, struct shmid_ds *buf);
> int shmdt(char *shmaddr);
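A minimal sketch that exercises the calls listed above: create a segment, attach it, write to it, then detach and remove it; error handling is omitted, and the 4 KB size is arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int
main(void)
{
    key_t key;
    int shmid;
    char *addr;

    key = ftok("/tmp", 'S');                       /* arbitrary project id */
    shmid = shmget(key, 4096, IPC_CREAT | 0600);   /* 4 KB segment */
    addr = (char *)shmat(shmid, NULL, 0);          /* attach at any address */

    (void) strcpy(addr, "shared data");            /* visible to other attachers */
    (void) printf("segment %d contains: %s\n", shmid, addr);

    (void) shmdt(addr);                            /* detach */
    (void) shmctl(shmid, IPC_RMID, NULL);          /* remove the segment */
    return (0);
}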
25 Data structure
struct shmid_ds {
    struct ipc_perm shm_perm;  /* operation permission structure */
    size_t shm_segsz;          /* size of segment in bytes */
    pid_t shm_lpid;            /* process ID of last shared memory operation */
    pid_t shm_cpid;            /* process ID of creator */
    shmatt_t shm_nattch;       /* number of current attaches */
    time_t shm_atime;          /* time of last shmat() */
    time_t shm_dtime;          /* time of last shmdt() */
    time_t shm_ctime;          /* time of last change by shmctl() */
};
26 Example: processes with shared memory
27 System overheads with shared memory
> User level: the number of shared memory segments created and their size.
> Kernel level: system memory; the number of identifiers; the maximum size of a shared memory segment; translation tables for shared memory; swap space.
28 System tunable parameters
> System profile (before Solaris 10): /etc/system
29 Caveat
> Static mechanism: the values specified are read and applied at boot; any change does not take effect until the system is rebooted.
> Values specified in /etc/system are global and affect all processes on the system.
> Obsolete tunable settings are ignored starting with Solaris 10.
30 Resource controls available
> Dynamic resource controls:
> prctl(1) – get or set the resource controls of running processes, tasks, and projects
> rctladm(1M) – display or modify the global state of system resource controls; see resource_controls(5)
> setrctl(2) – API to set or get resource control values; see rctlblk_get_local_action(3C)
31 Dynamic resource control settings in Solaris 10
32 Obsolete tunable parameters
33 Example: resource control of shared memory
> prctl(1)
> rctladm(1M)
34 Optimization of shared memory – ISM
> Concept: ISM (Intimate Shared Memory) – the translation tables involved in virtual-to-physical address translation are shared among the attaching processes, so each process does not need its own copy of the mappings for the shared pages.
> Contrast: with non-ISM shared memory, each process maintains per-process mappings for the shared memory pages.
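In practice an application typically requests ISM by passing the SHM_SHARE_MMU flag to shmat(2); a minimal sketch, with an arbitrary 64 MB segment size:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int
main(void)
{
    size_t size = 64UL * 1024 * 1024;              /* arbitrary 64 MB segment */
    int shmid;
    void *addr;

    shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
    addr = shmat(shmid, NULL, SHM_SHARE_MMU);      /* request an ISM attach */

    if (addr == (void *)-1)
        perror("shmat(SHM_SHARE_MMU)");
    else
        (void) printf("ISM segment attached at %p\n", addr);

    (void) shmctl(shmid, IPC_RMID, NULL);          /* clean up */
    return (0);
}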
35 Background – memory basics
> MMU – memory management unit: manages and translates the virtual view of memory (the address space) to physical memory.
> HAT – hardware address translation layer: manages the mapping of virtual to physical memory.
> TLB – translation lookaside buffer: a hardware cache of recent address translation information.
36 Hardware address translation
37 ISM: sharing the memory translation tables
38 Non-ISM
40 ISM features
> Avoids generating redundant per-process mappings to physical pages.
> Intimate shared memory is an important optimization: it makes more efficient use of the kernel and hardware resources involved in implementing virtual memory, and it provides a means of keeping heavily used shared pages locked in memory.
> Large page sizes are automatically enabled with ISM.
41 Example – ISM used for a database
> Without ISM: 400 database processes attach a 2 GB shared segment, i.e. ~262144 8 KB pages; at 8 bytes per page mapping that is ~2 MB of mappings per process, and 2 MB * 400 processes ≈ 800 MB of memory just for mappings.
> With ISM: ~2 MB of mappings in total.
42 System V message queues
> Concept: lets processes send and receive messages of various sizes in an asynchronous fashion.
> Features: dynamically loadable module; depends on /kernel/sys/msgsys and /kernel/misc/ipc.
> Kernel resource consumption: kernel memory; resource map.
43 System V IPC – message queue API
> Header file and system calls:
> #include <sys/msg.h>
> int msgget(key_t key, int msgflg);
> int msgctl(int msqid, int cmd, struct msqid_ds *buf);
> int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
> ssize_t msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);
> More is available from the man pages, e.g. man msg
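A minimal sketch of the message queue calls listed above: create a private queue, send one message, receive it back, and remove the queue; error handling is omitted.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct mymsg {
    long mtype;                 /* message type, must be > 0 */
    char mtext[64];             /* message body */
};

int
main(void)
{
    int msqid;
    struct mymsg m, r;

    msqid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);

    m.mtype = 1;
    (void) strcpy(m.mtext, "hello via msgq");
    (void) msgsnd(msqid, &m, strlen(m.mtext) + 1, 0);

    (void) msgrcv(msqid, &r, sizeof (r.mtext), 1, 0);   /* only type 1 */
    (void) printf("received: %s\n", r.mtext);

    (void) msgctl(msqid, IPC_RMID, NULL);
    return (0);
}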
44 Data structure
struct msqid_ds {
    struct ipc_perm msg_perm;  /* read-write permissions */
    struct msg *msg_first;     /* ptr to first message on queue */
    struct msg *msg_last;      /* ptr to last message on queue */
    msglen_t msg_cbytes;       /* current # bytes on queue */
    msgqnum_t msg_qnum;        /* current # of messages on queue */
    msglen_t msg_qbytes;       /* max # of bytes allowed on queue */
    pid_t msg_lspid;           /* pid of last msgsnd() */
    pid_t msg_lrpid;           /* pid of last msgrcv() */
    time_t msg_stime;          /* time of last msgsnd() */
    time_t msg_rtime;          /* time of last msgrcv() */
    time_t msg_ctime;          /* time of last msgctl() */
};
45 System V message queue structures
46 System overhead with message queues
> Kernel memory: checked first; must be no greater than 25% of available kernel memory.
> Resource map.
> Identifiers: msqid_ds and struct msg (sys/msg.h).
47 System tunable parameters: /etc/system
48 Available resource controls
> process.max-msg-messages (replaces the obsolete msginfo_msgtql) – maximum number of messages on a message queue
> process.max-msg-qbytes (replaces the obsolete msginfo_msgmnb) – maximum number of bytes of messages on a message queue
> project.max-msg-ids (replaces the obsolete msginfo_msgmni) – maximum number of message queue IDs allowed for a project
49 System V semaphores
> Concept: originally a mechanical signaling device, or a means of visual signaling; in software, a method of synchronizing access to a shareable resource among multiple processes.
> Features: P – wait/decrement (try to acquire); V – increment (release); semaphore sets are available.
> Kernel resource consumption: kernel memory; resource map.
50 System V IPC semaphore – API
> Header file and system calls:
> #include <sys/sem.h>
> int semget(key_t key, int nsems, int semflg);
> int semctl(int semid, int semnum, int cmd, ...);
> int semop(int semid, struct sembuf *sops, size_t nsops);
> More is available from the man pages, e.g. man sem, semget(2)
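A minimal sketch of the semaphore calls listed above: create a single semaphore, initialize it to 1, then perform a P (decrement) and a V (increment) around a critical section; error handling is omitted.

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>

/* The application typically declares union semun itself; see semctl(2). */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

int
main(void)
{
    int semid;
    struct sembuf op;
    union semun arg;

    semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);

    arg.val = 1;                                /* binary semaphore, initially free */
    (void) semctl(semid, 0, SETVAL, arg);

    op.sem_num = 0;
    op.sem_op = -1;                             /* P: wait / acquire */
    op.sem_flg = 0;
    (void) semop(semid, &op, 1);
    (void) printf("in critical section\n");

    op.sem_op = 1;                              /* V: signal / release */
    (void) semop(semid, &op, 1);

    (void) semctl(semid, 0, IPC_RMID);
    return (0);
}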
51 System V semaphores
> Binary semaphore: value 0 or 1, like a mutex
> Counting semaphore: value from 0 up to a system limit, manipulated with P/V operations
> Set of counting semaphores: one or more per set, with a per-set limit
52 Data structures for semaphores
> struct semid_ds
> struct sembuf
53 System overhead with semaphores
> Kernel memory: must be less than 25% of kernel memory
> Resource map allocation
> sem structures, sized according to semmns
> Identifiers: semid_ds
> Undo structure pointers and the undo structures themselves
> Kernel mutex locks
54 System tunable parameters: /etc/system (part)
55 System tunable parameters: /etc/system
56 Available resource controls
> process.max-sem-ops (replaces the obsolete seminfo_semopm) – maximum operations per semop(2) call
> process.max-sem-nsems (replaces the obsolete seminfo_semmsl) – maximum semaphores per identifier
> project.max-sem-ids (replaces the obsolete seminfo_semmni) – number of semaphore identifiers
57 Agenda
> IPC Overview
> System V IPC
> Posix IPC
> Tools for performance analysis
> System configuration
> Hands On Lab
58 Posix IPC Overview
> Types: Posix message queues, Posix semaphores, Posix shared memory
> API
59 Posix IPC and System V IPC
> Common points: types and functions
> Difference – implementation:
> Library – libposix4.so vs. libc.so
> Built on the Solaris file memory mapping interface, mmap(2)
> The desired resource is acquired using a file-name convention
60 Differences from System V IPC (continued)
> API calls do not need to enter the kernel
> No tunable parameters are required
> Limiting factors are per-process limits: open files and memory space
> Common internal routines: _pos4obj_open / _pos4obj_name, _open_nc / _close_nc, _pos4obj_lock / _pos4obj_unlock
61 mmap
> Function: maps a file or some other named object into a process's address space.
> Uses: regular files for memory-mapped file I/O; special files for anonymous memory mappings; shm_open for shared memory between unrelated processes.
> API:
> #include <sys/mman.h>
> void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);
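A minimal sketch of Posix shared memory via shm_open plus mmap(2); the object name /demo_shm is arbitrary, error handling is omitted, and older Solaris releases may need -lrt (libposix4) at link time.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main(void)
{
    int fd;
    char *p;

    fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    (void) ftruncate(fd, 4096);                       /* size the object */

    p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    (void) strcpy(p, "hello via posix shm");          /* visible to other mappers */
    (void) printf("%s\n", p);

    (void) munmap(p, 4096);
    (void) close(fd);
    (void) shm_unlink("/demo_shm");
    return (0);
}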
62 Process address space with mmap(2)
63 Posix Semaphores
> Named semaphores: sem_open() / sem_close() / sem_unlink()
> Unnamed (memory-based) semaphores: sem_init() / sem_destroy()
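A minimal sketch of a named Posix semaphore; the name /demo_sem is arbitrary, error handling is omitted, and older Solaris releases may need -lrt (libposix4) at link time.

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

int
main(void)
{
    sem_t *sem;

    sem = sem_open("/demo_sem", O_CREAT, 0600, 1);   /* initial value 1 */

    (void) sem_wait(sem);                            /* P: acquire */
    (void) printf("in critical section\n");
    (void) sem_post(sem);                            /* V: release */

    (void) sem_close(sem);
    (void) sem_unlink("/demo_sem");
    return (0);
}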
64 System-imposed limits
> SEM_NSEMS_MAX – maximum number of semaphores per process
> SEM_VALUE_MAX – maximum value of a semaphore
> e.g. getconf -a | grep SEM
65 Posix message queues
66 Posix message queues
> System limits in /usr/include/limits.h: MQ_OPEN_MAX, MQ_PRIO_MAX; e.g. getconf -a | grep MQ
> Tunable parameters: mq_open(3R), mq_setattr(3R)
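A minimal sketch of the Posix message queue API; the queue name /demo_mq and the attributes are arbitrary, error handling is omitted, and older Solaris releases may need -lrt (libposix4) at link time.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    struct mq_attr attr;
    mqd_t mq;
    char buf[128];
    unsigned int prio;

    attr.mq_flags = 0;
    attr.mq_maxmsg = 8;                  /* queue depth */
    attr.mq_msgsize = sizeof (buf);      /* maximum message size */
    attr.mq_curmsgs = 0;

    mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    (void) mq_send(mq, "hello via mq", strlen("hello via mq") + 1, 1);
    (void) mq_receive(mq, buf, sizeof (buf), &prio);
    (void) printf("received (priority %u): %s\n", prio, buf);

    (void) mq_close(mq);
    (void) mq_unlink("/demo_mq");
    return (0);
}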
67 Agenda
> Solaris multi-threaded processes
> Inter-process communication (IPC)
> System V IPC
> Posix IPC
> Tools for performance analysis
> System configuration
> Hands On Lab
68 Performance analysis process
> Understand the problem.
> Collect data with performance analysis tools: iostat, kstat, mdb, pmap, vmstat, ps, etc.; the proc tools; DTrace; Sun Studio 11.
> Separate the data you collect by layer: CPU, memory, system I/O, file system, ...
69 Performance tuning process
> Find the area and the period of the bottleneck.
> Set a performance goal.
> Tune, e.g. apply resource controls to IPC.
> Review and repeat until the tuning goals are met.
> Report, with comparisons where available.
> The following tools are useful for performance analysis related to IPC.
70 kstat(1M)
> Examines the available kernel statistics, or kstats, on the system and reports those statistics which match the criteria specified on the command line.
> Format: module:instance:name and value
> Example: kstat -p vmem::shmids:
71 vmstat(1M)
> Reports virtual memory statistics regarding kernel threads, virtual memory, disk, traps, and CPU activity.
> memory – report on usage of virtual and real memory
> mf – minor faults
> pi – kilobytes paged in
> po – kilobytes paged out
72 pmap -x
> Process memory usage: physical memory usage and virtual memory; see pmap and ps.
> pmap reports usage information about the address space of a process.
> -x displays additional information per mapping: size, amount of RSS, anonymous memory, and locked memory.
73 pmap -x example
74 ipcs(1)
> Reports the status of inter-process communication facilities:
> -m active shared memory segments
> -q active message queues
> -s active semaphores
> -i number of ISM attaches to shared memory segments
> -p process numbers
> -A all print options (-b, -c, -i, -J, -o, -p, and -t)
> -z information about facilities associated with the specified zone
> -Z information about all zones
75 ipcrm(1)
> Removes a message queue, semaphore set, or shared memory ID:
> -m shmid, -q msqid, -s semid
> -M shmkey, -Q msgkey, -S semkey
> -z zone
76 dtrace(1M)
> A comprehensive dynamic tracing framework for the Solaris Operating System.
> Powerful infrastructure that permits administrators and developers to understand the behavior of the operating system and of user processes.
> Observing the system calls related to IPC:
> the fbt provider provides probes for every function in the kernel
> the shmsys provider provides probes for the IPC system call API
> the ipc kernel module contains the IPC-related probes in the kernel
77 Sun Studio – Performance Analyzer
> A combination of compilers, libraries of optimized functions, and tools for performance analysis.
> Commands and sub-commands:
> er_print – generates a plain-text report of performance data
> collect – simplest interface for collecting data
> dbx collector – performance analysis of a running process
> analyzer – the GUI tool
> # collect -L unlimited -A copy -F on -d /tmp/a.out -> /tmp/test.1.er
> # analyzer test.1.er
78 Agenda
> Solaris multi-threaded processes
> Inter-process communication (IPC)
> System V IPC
> Posix IPC
> Tools for performance analysis
> System configuration
> Hands On Lab
79 System configuration
> How to modify kernel parameters:
> Modify /etc/system
> Use the kernel debugger (kmdb)
> Use the modular debugger (mdb)
> Use the ndd command to set TCP/IP parameters
> Modify the /etc/default files
> Viewing Solaris system configuration information:
> the sysdef(1M) command
> kstat(1M) or kstat(3KSTAT)
80 Example
> Setting a parameter in /etc/system:
> set nfs:nfs_nra=4 (the number of read-ahead blocks read for a file system mounted using NFS version 2 software)
> Using mdb to change a value
81 The sysdef command
> Provides the values for System V IPC settings, STREAMS tunables, process resource limits, etc.
82 References
> Solaris Internals
> Unix Network Programming, Volume 2
> Solaris Dynamic Tracing Guide
> Solaris Modular Debugger Guide
> Solaris Tunable Parameters Reference Manual
83 ● Iris Zhu – iris.zhu@sun.com System Config IPC