Tuesday, November 27, 2007

Chapter 6 - Lecture

First-Come, First-Served (FCFS) Scheduling
- Ready queue is FIFO: processes served in order of arrival at the queue

Shortest-Job-First (SJF) Scheduling
- CPU is assigned to the process with the smallest next CPU burst; use FCFS to break ties
- SJF is optimal: gives the minimum average waiting time for a given set of processes (see the worked example below).
- Two schemes:
  - Nonpreemptive: once the CPU is given to a process, the process cannot be preempted until it finishes its CPU burst.
  - Preemptive: if a new process arrives with a next CPU burst shorter than the remaining time of the currently executing process, preempt the currently executing process.
    - Also called Shortest-Remaining-Time-First (SRTF).
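
A quick worked example (burst times invented for illustration): four processes all arrive at time 0 with burst times P1 = 6, P2 = 8, P3 = 7, P4 = 3. SJF runs them in the order P4, P1, P3, P2, giving waiting times 3, 16, 9, 0 for P1 through P4, so the average waiting time is (3 + 16 + 9 + 0) / 4 = 7. FCFS in arrival order gives waiting times 0, 6, 14, 21, for an average of 41 / 4 = 10.25, so SJF does better.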

Priority Scheduling
- A priority number (integer) is associated with each process
- The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority). FCFS used to break ties
  - Preemptive: preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process
  - Nonpreemptive: allow the currently running process to complete its CPU burst
- SJF is priority scheduling where the priority number is the predicted next CPU burst time.
- Problem: starvation – low-priority processes may never execute.
  - Solution: aging – gradually increase the priority of processes as time progresses (e.g., raise the priority of a waiting process by one level every 15 minutes).

Round Robin (RR) Scheduling
- Designed for time-sharing systems
- The algorithm:
  - Each process gets a small unit of CPU time (time quantum or time slice), usually 10-100 milliseconds.
  - After the time quantum has elapsed, the process is preempted and added to the end of the ready queue (a FIFO queue).
  - Essentially FCFS scheduling with preemption
- If there are n processes in the ready queue and the time quantum is q, then no process waits more than (n-1)q time units until its next time quantum
  - Smaller q → shorter waits, but more context switches
- Performance of RR depends on the size of the time quantum (see the worked example below)
  - q large: same as FCFS
  - q small: each of the n processes has the impression that it has its own processor running at 1/n the speed of the real processor
    - q must be large with respect to context-switch time, otherwise overhead is too high.
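
A quick worked example (burst times invented for illustration): three processes arrive at time 0 with bursts P1 = 24, P2 = 3, P3 = 3, and q = 4. The schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1 runs its remaining 20 units. Waiting times are 6 for P1 (it sits out from time 4 to 10), 4 for P2, and 7 for P3, so the average waiting time is (6 + 4 + 7) / 3 ≈ 5.67.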

Multilevel Queue Scheduling
- Ready queue is partitioned into separate queues; each process is permanently assigned to one queue based on some property of the process (e.g., process type)
  - E.g., separate queues for foreground (interactive) processes and background (batch) processes
- Each queue has its own scheduling algorithm
  - E.g., foreground – RR, background – FCFS
- Scheduling must also be done between the queues.
  - Fixed-priority preemptive scheduling
    - E.g., foreground queue has absolute priority over background queue
    - Possibility of starvation.
  - Time slice between the queues: each queue gets a certain amount of CPU time
    - E.g., 80% of CPU time to foreground in RR, 20% to background in FCFS

Chapter 5 - Lecture

Chapter 5: Threads
- Overview
- Multithreading Models
- Threading Issues
- Linux Threads
- Pthreads

Threads Overview
- A process can contain multiple threads of control
- A thread (or lightweight process) contains:
  - A thread ID
  - A program counter
  - A register set
  - A stack
- Threads belonging to the same process share:
  - Code section
  - Data section
  - Open files
- A process with multiple threads of control can do multiple tasks at a time

[Figure: Single and Multithreaded Processes]

Benefits of Threads
- Increased responsiveness to the user
- Thread creation is faster than process creation
- Thread context switch is faster than process context switch
- Utilization of multiprocessor architectures: threads belonging to the same process may run in parallel on different processors

User Threads
- Supported by a user-level thread library
- Library handles thread creation, scheduling, and management with no support from the kernel
- Advantage: user-level threads are fast to create and manage
- Drawback: a user thread performing a blocking system call will cause the entire process to block
- Example user thread libraries: POSIX Pthreads, Mach C-threads, Solaris 2 UI-threads

Kernel Threads
- Supported by the operating system
- Kernel handles thread creation, scheduling, and management
- Advantages:
  - If a thread performs a blocking system call, the kernel can schedule another thread in the application for execution
  - Kernel can schedule threads on different processors in a multiprocessor system
- Drawback: slower to create and manage than user threads
- Most modern operating systems support kernel threads

Multithreading Models
- Many systems support both user-level and kernel-level threads
- How to map user-level threads to kernel-level threads?
  - Many-to-One
  - One-to-One
  - Many-to-Many

Many-to-One
- Many user-level threads mapped to one kernel thread.
- Allows the developer to create as many user threads as she wishes
- Entire process will block if a thread makes a blocking system call
- Multiple threads can't run in parallel on multiprocessors
- Example: green threads (a user-level thread library) in Solaris 2

[Figure: Many-to-One Model]

One-to-One
- Each user-level thread mapped to a kernel thread
- Allows another thread to run when a thread blocks
- Allows multiple threads to run in parallel on multiprocessors
- Most systems limit the number of threads supported due to the overhead of creating kernel-level threads
- Examples: Windows NT/2000, OS/2

[Figure: One-to-One Model]

Many-to-Many Model
- Many user-level threads mapped to a smaller or equal number of kernel threads.
- Avoids limitations of both the many-to-one and one-to-one models
  - Developers can create as many user threads as necessary
  - Kernel threads corresponding to the user threads can run in parallel on multiprocessors
  - Kernel can schedule another thread for execution if a thread blocks
- Examples: Solaris 2, Windows NT/2000 with the fiber library

[Figure: Many-to-Many Model]


Threading Issues: fork and exec
- What if one thread in a program calls fork?
  Some UNIX systems have two versions of fork:
  - The new process duplicates all threads
  - The new process duplicates only the thread that invoked fork
- What if one thread in a program calls exec?
  - The new program replaces the entire process (including all threads)
  - If exec is called immediately after fork, duplicating all threads is unnecessary

Threading Issues: Cancellation
- Thread cancellation: killing a thread before it has completed
- Asynchronous cancellation: the target thread is immediately cancelled
  - Can cause problems if the target thread is in the middle of allocating resources, performing I/O, or updating shared data
- Deferred cancellation: the target thread periodically checks whether it should terminate
  - Allows cancellation at safe points
  - Pthreads refers to safe points as cancellation points (see the sketch below)
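
A minimal sketch of deferred cancellation with Pthreads (illustrative code, not from the lecture; the worker function and the timing are invented). pthread_cancel requests cancellation, and pthread_testcancel is one explicit cancellation point where a pending request takes effect:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)
{
    (void) arg;
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();  /* safe point: a pending cancel takes effect here */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(t);         /* request deferred cancellation (the default type) */
    pthread_join(t, NULL);     /* wait for the target thread to terminate */
    printf("worker cancelled\n");
    return 0;
}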

Threading Issues: Signal Handling
- A signal is used to notify a process about the occurrence of an event
- Synchronous signal: sent from a process to itself
  - E.g., division by 0, illegal memory access
- Asynchronous signal: sent from an external source
  - E.g., timer expiry, ctrl-c
- A signal may be handled by
  - Default signal handler:
    - Every signal has one
    - Run by the kernel
  - User-defined signal handler: overrides the default signal handler

Threading Issues: Signal Handling
- If a process has several threads, where should a signal be delivered?
  - To the thread to which the signal applies (e.g., division by 0)
  - To all the threads in the process (e.g., ctrl-c)
  - To certain threads in the process, i.e., those that are not blocking the signal
    - Typically a signal is delivered only to the first thread that is not blocking the signal
  - To a thread assigned to receive and handle all signals for the process

Threading Issues: Thread Pools
- Motivating example: a web server creates a new thread to service each request
  Two concerns:
  - Overhead to create the thread; the thread is discarded after it has completed its work
  - No limit on the number of threads created; may exhaust system resources
- One solution: thread pools (a minimal sketch follows below)
  - Create a pool of threads at process startup
  - Request comes in: wake up a thread from the pool and assign the request to it. If no thread is available, the server waits until one is free
  - Service completed: thread returns to the pool
- Benefits:
  - Faster to wake up an existing thread than to create a new thread
  - Pool limits the number of threads in the system
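
A minimal thread-pool sketch with Pthreads (illustrative; the queue layout, sizes, and all names are invented, and for simplicity incoming requests are queued rather than making the server wait). A fixed set of worker threads is created at startup and sleeps on a condition variable until a request arrives:

#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define QUEUE_MAX 16

static int queue[QUEUE_MAX], head = 0, tail = 0, count = 0, done = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg)
{
    (void) arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0 && !done)
            pthread_cond_wait(&nonempty, &lock);  /* sleep until work arrives */
        if (count == 0 && done) {                 /* no further requests ever */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int req = queue[head];                    /* take one request */
        head = (head + 1) % QUEUE_MAX;
        count--;
        pthread_mutex_unlock(&lock);
        printf("servicing request %d\n", req);    /* "service the request" */
    }
}

int main(void)
{
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)           /* create the pool at startup */
        pthread_create(&pool[i], NULL, worker, NULL);

    for (int r = 0; r < 10; r++) {                /* requests come in */
        pthread_mutex_lock(&lock);
        queue[tail] = r;
        tail = (tail + 1) % QUEUE_MAX;
        count++;
        pthread_cond_signal(&nonempty);           /* wake one pooled thread */
        pthread_mutex_unlock(&lock);
    }

    pthread_mutex_lock(&lock);
    done = 1;                                     /* no more requests */
    pthread_cond_broadcast(&nonempty);
    pthread_mutex_unlock(&lock);
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}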

Linux Threads
- The clone system call is used to create kernel-level threads
  - Allows the child to share resources with the parent (e.g., memory space, open files, signal handlers)
    - The parameter 'flags' indicates the extent of sharing (see the sketch below)
  - How is sharing implemented?
    - The kernel data structure for a process created by clone contains pointers to the data structures of the parent process
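
A minimal sketch using glibc's clone() wrapper (illustrative, not the lecture's code; the stack size and the particular flag set are choices made for this example):

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

#define STACK_SIZE (64 * 1024)

static int child_fn(void *arg)
{
    printf("child sees shared value: %d\n", *(int *) arg);
    return 0;
}

int main(void)
{
    int shared = 42;
    char *stack = (char *) malloc(STACK_SIZE);  /* the child needs its own stack */
    if (stack == NULL)
        return 1;

    /* The flags control the extent of sharing: here the child shares the
       address space, filesystem info, open files, and signal handlers. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, &shared);
    if (pid == -1)
        return 1;

    waitpid(pid, NULL, 0);                      /* reap the clone child */
    free(stack);
    return 0;
}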

Pthreads
- The POSIX standard (IEEE 1003.1c) API for thread creation and synchronization.
- The standard specifies the interface only; implementation is up to the developer.
- Common in UNIX operating systems.
- On Linux machines
  - #include <pthread.h>
  - Link with the pthread library:
    g++ threads.cc -lpthread

Pthreads: Creating
int pthread_create(pthread_t *thread, pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);
- On success
  - A new thread is created with attributes "attr" (NULL = default); the ID of the new thread is placed in "thread".
  - New thread starts executing function "start_routine" with parameter "arg"
  - New thread executes concurrently with the calling thread
  - New thread runs until it returns from "start_routine" or it calls "pthread_exit"
  - Returns 0
- On failure
  - Returns a non-zero error code

pthread_create Example
#include <pthread.h>
#include <iostream>
using namespace std;

void *my_thread(void *arg)
{
    char *msg = (char *) arg;
    cout << "Thread says " << msg << "\n";
    return NULL;               // thread terminates when the start routine returns
}

int main()
{
    pthread_t t;
    char *msg = (char *) "Hello World";
    pthread_create(&t, NULL, my_thread, msg);
    pthread_join(t, NULL);     // wait for the thread; otherwise main may return
                               // and kill the process before the thread prints
    return 0;
}

Pthreads: Waiting
int pthread_join(pthread_t th, void **thread_return); // wait for a thread to terminate
- th: the thread to wait for
- thread_return: NULL, or the address of a buffer to store the return value of th (see the sketch below)
- When a thread terminates, its memory resources (thread ID and stack) are not deallocated until another thread performs pthread_join on it.
- At most one thread can wait for the termination of a given thread.
- Returns 0 on success, non-zero on failure
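
A short sketch showing thread_return in use (hypothetical example, not from the lecture); the worker's return value is collected by pthread_join:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

void *worker(void *arg)
{
    intptr_t n = (intptr_t) arg;
    return (void *) (n * n);       /* return value collected by pthread_join */
}

int main(void)
{
    pthread_t t;
    void *result;

    pthread_create(&t, NULL, worker, (void *) (intptr_t) 7);
    pthread_join(t, &result);      /* blocks until worker returns */
    printf("worker returned %ld\n", (long) (intptr_t) result);
    return 0;
}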

Chapter 4 - Lecture

Algorithm Evaluation
- How do we select a CPU-scheduling algorithm?
  1. Define the desired criteria
     - E.g., maximize CPU utilization with the constraint that max response time < 1 second
  2. Evaluate the algorithms under consideration
- Four algorithm-evaluation methods:
  1. Deterministic modeling
  2. Queuing models
  3. Simulations
  4. Implementation

Deterministic Modeling
- Take a predetermined workload and determine the performance of each algorithm for that workload
- Simple and fast
- Results valid only for that specific workload → of limited use in practice

Queuing Models
- The computer system contains a number of servers, each with a queue of waiting processes.
  - E.g., CPU with the ready queue, I/O device with its device queue
- Queuing-network analysis: given the distribution of process arrivals and the distribution of service times, we can compute utilization, average queue length, average waiting time, etc.
- Requires unrealistic assumptions to make numerical solutions tractable → results may not be accurate
  - E.g., assume process arrivals follow a Poisson distribution, service times follow an exponential distribution, and processes are independent

Simulations
- Write a simulator to simulate the activities in the system
- Gather statistics of the algorithm's performance as the simulation executes
- Two ways to generate the data to drive the simulation:
  1. Generate a workload according to a certain probability distribution
  2. Use the sequence of actual events in the real system (recorded on a trace tape)
- Advantage: results are accurate
- Drawbacks:
  - Non-trivial to design, code, and debug a simulator
  - Simulations are time consuming

Implementation
- Implement the scheduling algorithm in a real system
- Cost is tremendous: coding the algorithm, modifying kernel data structures, system must be taken down
- Ideally, we want flexible scheduling algorithms
  - At OS build time, boot time, or run time, variables used by the scheduler can be changed by the system manager or by the user

Scheduling in Practice
- Solaris 2
- Windows 2000
- Linux

Solaris 2 Scheduling
- Priority-based, preemptive scheduling
Four scheduling classes, each of which includes a set of priorities:
- Time sharing: default class for a process
  - Uses a multilevel feedback queue with time slicing in each queue
    - The higher the priority, the smaller the time slice
  - Interactive processes have higher priority → good response time
  - CPU-bound processes have lower priority → good throughput
- Interactive:
  - Same scheduling policy as the time-sharing class
  - Gives a priority boost to the task in the active window
- System: for system threads
  - Threads have fixed priority
  - No time slicing: a thread runs until it blocks or is preempted by a higher-priority thread
- Real-time: for processes that require immediate system access
  - Threads have fixed priority
  - Time slicing among threads with the same priority.

Windows 2000 Scheduling
- Priority-based, preemptive scheduling
  - 32 priority levels; larger integer = higher priority
  - One queue for each priority, RR scheduling within each queue
- Six priority classes: real-time, high, above normal, normal, below normal, idle
  - Seven levels of relative priority per class
  - An integer priority is assigned to a thread based on the (class, relative priority) pair
- For processes not in the real-time priority class, relative priority may be raised or lowered
  - Lowered when a thread's time quantum runs out → limits CPU consumption of CPU-bound threads
  - Raised when a thread is released from a wait operation → gives good response time to interactive threads
- When a process moves into the foreground (i.e., currently selected on screen), its time quantum is increased by some factor (typically 3)

[Figure: Windows 2000 Priorities]

Linux Scheduling
- Two scheduling classes: time-sharing and real-time
  - The real-time class has priority over the time-sharing class
  - Only processes running in user mode may be preempted
- Time-sharing scheduling
  - Each process has a priority and a certain number of credits
  - Scheduler chooses the process with the most credits
  - The running process loses 1 credit when a timer interrupt occurs
  - The running process is suspended when its credits reach 0
  - If no process in the ready queue has any credits, perform a recrediting operation
    - For every process: credits = credits/2 + priority (larger integer = higher priority)
  - The crediting system automatically gives high priority to interactive or I/O-bound processes (see the worked example below)
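
A quick worked example of recrediting (numbers invented for illustration): suppose two processes both have priority 20. An I/O-bound process that blocked while still holding 10 credits is recredited to 10/2 + 20 = 25; a CPU-bound process that ran its credits down to 0 gets 0/2 + 20 = 20. The process that spends its time blocked on I/O thus ends up with more credits and is chosen first when it becomes runnable.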

Linux Scheduling
- Real-time scheduling
  - Two scheduling classes
    - FCFS (no preemption)
    - RR (preemption when the time slice expires)
  - Each process has a priority and belongs to one scheduling class
  - Scheduler always runs the process with the highest priority.
  - Among processes of equal priority, use FCFS or RR


Process Creation
- A process (parent) may create new processes (children) during execution
- Resource sharing
  - Child obtains resources from the OS
  - Parent and child share resources (such as memory or files)
- Execution
  - Parent and children execute concurrently.
  - Parent waits until some or all children have terminated
- Address space (two possibilities)
  - Child is a duplicate of the parent.
  - Child has a new program loaded into it.

Process Creation in UNIX
pid_t fork(void); // create a child process identical to the parent
- Child process has a copy of the address space of the parent process
- On success:
  - Both parent and child continue execution at the instruction after fork
  - For the parent: the return value is the process ID (PID) of the child
  - For the child: the return value is 0 (see the sketch below)
- On failure:
  - Child not created
  - Return value is -1 in the parent
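
A minimal sketch (illustrative) showing how parent and child distinguish themselves by the return value of fork:

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();        /* one process calls, two processes return */

    if (pid < 0) {
        perror("fork");        /* failure: no child was created */
        return 1;
    }
    if (pid == 0)
        printf("child: my pid is %d\n", getpid());
    else
        printf("parent: created child %d\n", pid);
    return 0;
}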

Waiting for Process Termination in UNIX
pid_t wait(int *status); // wait for any child to terminate
pid_t waitpid(pid_t pid, int *status, int options); // wait for the specified child to terminate
- If status is not NULL, status information of the child is stored in the location pointed to by status.
- Options: 0 – wait until a child has exited
  WNOHANG – return immediately if no child has exited
- Return value: the process ID of the child that exited, or zero if WNOHANG was used and no child was available, or -1 on error
- If a child terminates before the parent calls wait:
  - When the parent calls wait, it returns immediately and the child's PCB is deallocated.

Loading Programs in UNIX
int execl(const char *path, const char *arg, ...);
int execlp(const char *file, const char *arg, ...);
int execv(const char *path, char *const argv[]);
int execvp(const char *file, char *const argv[]);
- Used after a fork to replace a process's memory space with a new program (see the sketch below).
- Does not return on success; returns -1 on error
- How arguments are passed (l vs. v in the name):
  - l – list of arguments
  - v – array of arguments
  - Both list and array must be terminated by a NULL pointer
- How the program is named (first parameter):
  - without p – full path of the executable
  - with p – name of the executable; the search path is searched for it
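
A minimal sketch (illustrative) of the classic fork-then-exec pattern, running ls -l in a child and waiting for it:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* child: replace this process image with the ls program */
        execlp("ls", "ls", "-l", (char *) NULL);
        perror("execlp");      /* reached only if exec failed */
        return 1;
    }
    waitpid(pid, NULL, 0);     /* parent: wait for the child to finish */
    return 0;
}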

Process Termination in UNIX
- A process terminates when it makes the exit system call.
  - May return data to its parent; the parent can collect the data using wait.
  - Resources deallocated by the operating system.
- If a parent terminates without waiting for its children:
  - The init process becomes the children's parent
  - The init process will wait for the children to terminate

Cooperating Processes
- Independent process – cannot affect or be affected by the execution of another process
  - Does not share data with any other process
- Cooperating process – can affect or be affected by the execution of another process
  - Shares data with other processes
- Concurrent execution of cooperating processes requires mechanisms for process communication and synchronization

Producer-Consumer Problem
- A common paradigm for cooperating processes
- A producer process produces information that is consumed by a consumer process.
- Have a buffer of items filled by the producer and emptied by the consumer
  - Unbounded buffer – no limit on the size of the buffer
  - Bounded buffer – assume a fixed buffer size
- Producer and consumer run concurrently and must be synchronized

Bounded-Buffer Shared-Memory Solution
- Shared data
  #define BUFFER_SIZE 10
  typedef struct {
      . . .
  } item;
  item buffer[BUFFER_SIZE];
  int in = 0;   // next free position in buffer
  int out = 0;  // first full position in buffer
- Shared buffer implemented as a circular array
  - Buffer empty when in == out
  - Buffer full when (in + 1) % BUFFER_SIZE == out
  - Allows at most BUFFER_SIZE - 1 items in the buffer

Producer Process
item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* buffer full: do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer Process
item nextConsumed;
while (1) {
    while (in == out)
        ; /* buffer empty: do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}

Interprocess Communication (IPC)
- Allows processes to communicate and to synchronize their actions without sharing the same address space
Message-Passing System:
- IPC facility provides two operations:
  - send(message)
  - receive(message)
- If P and Q wish to communicate, they need to:
  - Establish a communication link between them
  - Exchange messages via send/receive
- Implementations of the communication link
  - UNIX pipes, sockets (see the pipe sketch below)
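
A minimal sketch (illustrative) of message passing between a parent and a child over a UNIX pipe; the lecture names pipes only as one possible implementation of the link:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1)               /* fd[0] = read end, fd[1] = write end */
        return 1;

    if (fork() == 0) {                /* child plays the receiver */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* receive(message) */
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
        return 0;
    }
    close(fd[0]);                     /* parent plays the sender */
    write(fd[1], "hello", 5);         /* send(message) */
    close(fd[1]);
    wait(NULL);
    return 0;
}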

Direct Communication
- Processes must name each other explicitly:
  - send(P, message) – send a message to process P
  - receive(Q, message) – receive a message from process Q
- Properties of the communication link
  - A link is established automatically between a pair of processes that want to communicate.
  - A link is associated with exactly two processes.
  - Between each pair there exists exactly one link.
  - The link may be unidirectional, but is usually bidirectional.

Indirect Communication
- Messages are sent to and received from mailboxes (or ports).
  - Processes can communicate only if they share a mailbox.
- Primitives:
  - send(A, message) – send a message to mailbox A
  - receive(A, message) – receive a message from mailbox A
- Properties of the communication link
  - A link is established between a pair of processes only if they have a shared mailbox
  - A link may be associated with more than two processes.
  - Multiple links may exist between a pair of processes; each link corresponds to one mailbox.
  - A link may be unidirectional or bidirectional.

Synchronization
- Message passing may be either blocking or nonblocking (also known as synchronous and asynchronous)
- Blocking send: the sending process is blocked until the message is received by the receiving process or by the mailbox
- Nonblocking send: the sending process continues execution after sending a message
- Blocking receive: the receiver blocks until a message is available
- Nonblocking receive: the receiver gets (immediately) either a valid message or a null.

Buffering
- Messages exchanged by communicating processes reside in a queue implemented in one of three ways:
  1. Zero capacity: queue has maximum length 0
     - Link cannot have a message waiting in it
     - Sender must block until the recipient receives the message
  2. Bounded capacity: queue has finite length n
     - A link can have at most n messages waiting.
     - Sender blocks if the queue is full.
  3. Unbounded capacity: queue has infinite length
     - No bound on the number of waiting messages
     - Sender never blocks.

COS 5 SUMMARY

http://courses.cs.vt.edu/~csonline/OS/Lessons/index.html

Friday, November 16, 2007

Assignment # 2 (11-17-07)

1. What are the three major activities of an operating system with regard to memory management?
2. What are the five major activities of an operating system with regard to file management?
3. What is the purpose of the command interpreter?
4. What is the purpose of a system call?
5. What is the main advantage of the layered approach to system design?

Reference: Operating System Concepts, 5th edition, by Silberschatz/Galvin
Deadline: Wednesday Nov 23, 2007
Yellow Paper

Chapter 3

Chapter 3: Operating-System Structures
- System Components
- System Calls
- System Programs
- System Structure

Common System Components
- Process Management
- Main-Memory Management
- File Management
- I/O-System Management
- Secondary-Storage Management
- Networking
- Protection System
- Command-Interpreter System

Process Management
- A process is a program in execution.
- A process needs resources – CPU time, memory, files, and I/O devices
- The operating system is responsible for
  - Process creation and deletion.
  - Process suspension and resumption.
  - Provision of mechanisms for:
    - process synchronization
    - process communication
    - deadlock handling
- Ch4 - Ch8

Main-Memory Management
- A program must be loaded into memory to be executed
- Keep multiple processes in memory
  - Improves CPU utilization
  - Improves response speed to users
- The operating system is responsible for:
  - Keeping track of who is using which parts of memory
  - Deciding which processes to load when memory space becomes available.
  - Allocating and de-allocating memory space as needed.
- Ch9, Ch10

File Management
- A file is a collection of related information defined by its creator.
- OS maps files onto physical media and accesses files via the devices that control the physical media
- OS is responsible for:
  - File creation and deletion.
  - Directory creation and deletion.
  - Supporting primitives for manipulating files and directories.
  - Mapping files onto secondary storage.
  - Backing up files on nonvolatile storage media
  - File protection: who can access a file and in what ways
- Ch11, Ch12

I/O-System Management
- Manage and control I/O operations and I/O devices
- The I/O subsystem consists of:
  - A general device-driver interface
  - Drivers for specific hardware devices
  - Buffering, caching, and spooling
- Ch13 (skip)

Secondary-Storage Management
- The operating system is responsible for:
  - Free-space management
  - Storage allocation
  - Disk scheduling
- Ch14 (skip)

Networking
- Processors in a distributed system are connected through a communication network.
- Communication among processors takes place using a set of protocols implemented by the OS.
- Ch15 - Ch17

Protection System
- Protection refers to a mechanism for controlling access by processes or users to system resources.
- Ch18

Command-Interpreter System
- Command interpreter (or shell): gets the next command statement and executes it
  - Graphical shell: Microsoft Windows and Mac OS
  - Command-line shell: UNIX
- Can be part of the kernel or just an application program

System Calls
- System calls provide the interface between a process and the operating system.
  - Usually available as assembly-language instructions.
  - Some languages allow system calls to be made directly (e.g., C, C++)
- Three methods to pass parameters to the OS:
  - Pass parameters in registers.
  - Store the parameters in a table in memory; the table's address is passed as a parameter in a register.
  - Parameters are pushed onto the stack by the program and popped off the stack by the operating system

Categories of System Calls
- Process control: load & execute, create, end, abort, get/set process attributes, wait, etc.
- File management: open, close, create, delete, read, write, reposition, get/set file attributes, etc.
- Device management: request, release, read, write, reposition, get/set device attributes, etc.
  - UNIX: devices handled as virtual files, identified by special file names
- Information maintenance: get/set time or date, get/set system data, get/set process/file/device attributes, etc.
- Communications: open/close connection, send/receive messages, etc.

Communication Models
- Message-passing model: processes exchange information through an interprocess-communication facility provided by the OS
  - Open a connection before communication
  - Communication is done by exchanging messages
  - Close the connection after communication
  - Easier to implement for intercomputer communication
- Shared-memory model: processes exchange information by reading/writing data in shared areas of memory
  - Fast and convenient within a computer
  - Must ensure processes do not write to the same location simultaneously

System Programs
- System programs provide a convenient environment for program development and execution.
- Categories of system programs:
  - File management: cp, ls, rm
  - Status information: df, date
  - File modification: vi, pico, emacs
  - Programming-language support: compilers, assemblers, interpreters
  - Program loading and execution: linkers, loaders, debuggers
  - Communications: web browsers, email, remote login
- System programs define the user interface

Shell
- A system program that gets and executes the next user-specified command
- Two ways to implement commands
  - Shell contains the code to execute the command
    - Size of the shell determined by the number of commands
    - Difficult to add a new command
  - Commands are system programs
    + Shell can be small
    + Can easily add commands
    - Slower
    - Need to pass parameters from the shell to the system program
    - Interpretation of parameters may be inconsistent
- Unix: some commands are built in; the rest are system programs

System Structure
- Simple structure
  - MS-DOS, Unix
- Layered approach
  - Windows NT, OS/2
- Microkernels
  - Mach, MacOS X server

UNIX System Structure
- OS consists of two parts:
  - System programs
  - The kernel
    - Consists of everything below the system-call interface and above the physical hardware
    - Provides the file system, CPU scheduling, memory management, I/O, etc.
- A large number of functions in one level → difficult to enhance


Layered Approach
- The operating system is divided into a number of layers
  - The bottom layer (layer 0) is the hardware
  - The highest (layer N) is the user interface
- Each layer uses functions and services of only lower-level layers.
- Benefits:
  - Simplifies debugging and system verification: can debug one layer at a time
  - Can change the implementation of a layer without affecting other layers, provided that the interface and functions of the layer do not change
- Problems:
  - Design of the layers can be tricky
  - Less efficient: each layer may add overhead to a system call

Microkernel System Structure
- Remove all non-essential components from the kernel, implementing them as services running in user space
- Usually, the microkernel provides:
  - Minimal process and memory management
  - A communication facility
- Communication between the client program and the various services is provided by message passing
  - System calls become messages sent via the kernel to the appropriate service
- Benefits:
  - Adding services doesn't require kernel modification
  - Kernel modifications are simpler when required
  - More secure: less code is running in kernel mode
  - More reliable: kernel stays OK when a service fails
- Examples: Mach, MacOS X server

Assignment # 1 (11-17-07)

1. Define the essential properties of the following types of operating systems:
a. Batch
b. Interactive
c. Time Sharing
d. Real Time
e. Distributed
2. What are the three main purposes of an operating system?
3. Describe three advantages of symmetric and asymmetric multiprocessing.
4. Why are distributed systems desirable?

Reference: Operating System Concepts, 5th edition, by Silberschatz/Galvin
Deadline: Wednesday Nov 21, 2007
Yellow Paper