Commit a9f2269d authored by Sadman Kazi

Initial commit

SE 350 Notes:
Chapter 1:
What is an operating system?
- an intermediary between the user of a computer and the hardware
- its purpose is to provide an environment for the user to execute programs in a convenient and efficient manner
- **Software that manages the proper operation of computer hardware**
- enables proper use of hardware resources in the operation of the computer system
What is a kernel?
- A program that runs at all times on a computer
- The kernel and *system programs* make up an Operating System
Computer System Operation:
- The **bootstrap program** is a simple routine that initializes all aspects of the system: registers, device controllers, and memory contents. It also loads the kernel into memory.
- An interrupt signals the occurrence of an event from either hardware or software
- CPU stops what it's doing and transfers execution to the interrupt service routine
- On completion of this routine, the CPU resumes the interrupted task.
- During the transfer, the program status word and program counter are saved on the control stack
- Execution of programs:
- Instruction execution cycle on Von Neumann architecture:
- Fetch instruction from memory
- Store instruction in the instruction register
- Instruction executed, may result in more fetching of operands from memory/registers
- The result of the operation might be stored back in memory
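The fetch-decode-execute cycle above can be sketched as a toy interpreter. The two-field instruction format, the opcode names, and the single accumulator are all invented for illustration, not any real ISA:

```python
# Toy sketch of the Von Neumann instruction cycle (hypothetical ISA).
def run(program, memory):
    """Execute (opcode, operand) pairs against a memory list."""
    pc = 0   # program counter
    acc = 0  # accumulator register
    while pc < len(program):
        instr = program[pc]   # fetch into the "instruction register"
        pc += 1
        op, arg = instr       # decode
        if op == "LOAD":      # execution may fetch an operand from memory
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":   # the result may be stored back in memory
            memory[arg] = acc
    return memory
```

Running `[("LOAD", 0), ("ADD", 1), ("STORE", 2)]` over `[5, 7, 0]` leaves `12` in memory cell 2 — one full fetch, decode, execute, store round per instruction.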
- Storage hierarchy:
- Registers - CPU Cache - Primary memory (RAM) - Secondary memory (electronic disk - magnetic disk - optical disk - magnetic tapes)
- **Volatile storage** loses content when power is removed
- **Nonvolatile storage** retains its contents when power is removed
I/O Structure:
- The OS has a **device driver** for each device controller; the driver presents a uniform interface to the device for the rest of the OS
- I/O Operation (interrupt-driven I/O):
- Device driver loads the appropriate registers within the device controller
- DC examines the content of the registers to determine the action to take
- DC starts transfer of data from the device to its local buffer
- After transfer, DC sends an interrupt to the driver
- Driver returns control to the OS, possibly returning the data or its pointer if it was a read operation
- Interrupt-driven I/O is fine for small data, otherwise bad (e.g. disk I/O)
- Direct memory access (DMA) is another method, where DC transfers an entire block of data to and from its buffer storage to memory, with no CPU intervention
- One interrupt per block, to indicate that operation is complete (otherwise interrupt per byte of data)
Computer System Architecture:
- Multiprocessor systems have two or more processors sharing the same bus, and sometimes the clock, memory, and peripheral devices
- Three main advantages:
- **Increased throughput**: not linear in the number of processors, due to the overhead of communication between them
- **Cheaper on average**: since the processors share storage and peripherals, a four-processor system is cheaper than four single-processor systems
- **Increased reliability**: the failure of one processor does not crash the whole system
- **Asymmetric multiprocessing**: each processor is assigned a specific task, while a master processor controls the system
- **Symmetric multiprocessing**: each processor performs all tasks; no master-slave relationship
- Multicore processors are similar to multiple processors except that the communication between the cores is faster, and it's also significantly less power hungry.
- **Clustered Systems** are two or more individual systems joined together.
- Used for high-availability service. They may/may not share the same storage area network.
SE 350
Chapter 2
Operating system definition:
A program that controls the execution of application programs and acts as a standardized interface between applications and hardware
- responsible for managing resources
- relinquishes control of the processor
Kernel (also called nucleus):
Portion of the OS that is in main memory
- Contains frequently used functions
OS evolution:
- Serial processing (40-50s)
- No OS
- Machines run from a console with display lights, toggle switches, input device and printer
- Forced interruptions
- Long setup times for programs
- Simple batch system (goal: improve utilization)
- Monitor: software that controls the sequence of events and batches jobs together
- Memory protection: protect area containing the monitor
- Timer to prevent any single job from monopolizing the system
- Certain machine level instructions only executed by the monitor, such as I/O operations and interrupts
Modes of operation:
- Reason: protect users and kernel from users
- User mode: certain instructions may not be executed
- Kernel mode (monitor): privileged instructions and access to protected memory areas
Process:
- A running instance of a program
- Consists of three components:
1. An executable program
2. Associated data
3. Execution context (information needed by the OS for management)
Common problems (multiple programs, threads, interrupts, I/O, shared resources):
- Improper synchronization
- Failed mutual exclusion (state corruption on shared memory)
- Nondeterminate program operation (interference among programs due to memory allocation, I/O access)
- Deadlocks (resource access)
- Race condition (output dependent on the timing of other events)
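The race-condition entry above can be made concrete with a deterministic sketch of a lost update: two "threads" each do a read-modify-write on a shared counter, but the interleaving is forced by hand so the bug always shows (with real threads the timing, and hence the result, would vary):

```python
# Deterministic illustration of a lost update on a shared counter.
def lost_update():
    shared = 0
    a = shared      # thread A reads 0
    b = shared      # thread B reads 0 before A writes back
    shared = a + 1  # A writes 1
    shared = b + 1  # B also writes 1 -- A's increment is lost
    return shared
```

Two increments happened, yet the counter ends at 1 instead of 2 — exactly the "output dependent on timing" failure the notes describe.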
Memory management:
- Isolate processes
- Allocation and management abstracted from the user
- Modular programming support
- Protection and access control for shared memory
- Virtual memory allows programmers to address memory from a logical point of view independent of how much memory is available:
- Virtual address is page number plus offset in the page
- Paging allows process to be comprised of a number of fixed size blocks called pages, which can be anywhere in memory
- Processor --> Virtual address --> Memory management unit --> Real address --> Main memory
                                                          +--> Disk address --> Secondary memory
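Assuming 4 KiB pages, the "page number plus offset" split and the MMU's translation step can be sketched as follows (the dict-based page table and the absence of page-fault handling are simplifications for illustration):

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

def split_virtual_address(vaddr):
    """A virtual address is a page number plus an offset within the page."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

def translate(vaddr, page_table):
    """MMU step: map virtual page -> physical frame, keep the offset."""
    page, offset = split_virtual_address(vaddr)
    frame = page_table[page]  # page-fault handling omitted
    return frame * PAGE_SIZE + offset
```

For example, virtual address 4101 is page 1, offset 5; if page 1 maps to frame 7, the real address is 7 * 4096 + 5.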
Information security:
- Availability: system protection against interruptions
- Confidentiality: no unauthorized access
- Integrity: no unauthorized modification
- Authenticity: verification of the identity of users
- Scheduling and resource management: goal is to maximize throughput, minimize response time, and accommodate as many users as possible
- View the system structure as a series of levels
- Each level performs a related subset of functions
- Each relies on the next lower level to perform more primitive functions
- Decomposes a problem into more manageable subproblems
- Electronic circuits -> Instruction set -> procedures -> interrupts -> primitive processes -> ... etc.
Modern OS:
- Microkernel Arch:
- Kernel functions: address spaces, interprocess communication, basic scheduling
- Everything else in user space
- Thread: a dispatchable unit of work executing sequentially and is interruptable
- Process is a collection of one or more threads
- Symmetric Multiprocessing (SMP): processors share same main memory and I/O
SE 350
Chapter 3
- Recall what a process is
- Elements of a process
- Identifier (PID)
- State
- Priority
- Memory Pointers (pointers + shared memory blocks)
- Context data (registers, PSW, program counter)
- I/O status (I/O requests and devices in use)
- Accounting information (processor time, time limits and threads)
- Process Control Block (PCB):
- Data structure that contains the process elements
- Created and managed by the OS
- Allows support for multiple processes
- Users may manipulate PCB partially by setting priority
- Varies between OSs
- Trace of the Process: sequence of instructions that execute for a process
- Dispatcher: switches the processor from one process to another
- Five state process model:
- Running: process currently being executed
- Ready: process that can be executed
- Blocked/Waiting: process that cannot execute because it's waiting for something
- New: a new process to enter the system
- Exit: a halted or aborted process
- Use multiple queues for multiple event type waiting processes
- Suspended process:
- Swap blocked processes to disk to free up memory
- Blocked state becomes suspend state when swapped to disk
- Add two new states: Suspend/ready, Suspend/blocked
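The five-state model's legal transitions can be sketched as a lookup table (the event names are assumed, not from the notes; the two suspend states would add further rows):

```python
# Sketch of legal transitions in the five-state process model.
TRANSITIONS = {
    ("New", "admit"): "Ready",
    ("Ready", "dispatch"): "Running",
    ("Running", "timeout"): "Ready",
    ("Running", "event wait"): "Blocked",
    ("Blocked", "event occurs"): "Ready",
    ("Running", "release"): "Exit",
}

def step(state, event):
    """Return the next state; raises KeyError on an illegal transition."""
    return TRANSITIONS[(state, event)]
```

Encoding the model this way makes illegal moves (e.g. Blocked straight to Running) fail loudly instead of silently corrupting the process state.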
SE 350
Chapter 4: Threads, SMP and Microkernels
- **Resource Ownership**: process includes a virtual address space to hold the
process image
- **Scheduling/Execution**: process follows an execution path that may be interleaved
with other processes
### Threads
- Dispatching -> thread or lightweight process
- Resource ownership -> process or task
- OS supports multiple threads within a single process
Process vs Threads:
- Processes
- Have a virtual address space which holds the process image
- Protected access to processors and other processes, files, and I/O
- Threads: (within a process, each has...)
- an execution state (e.g. running, ready, etc.)
- saved thread context when not running
- an execution stack
- some per-thread static storage (for local variables)
- Access to the memory and resources of its process (shared by all threads)
- user stack, kernel stack, and a thread control block
Benefits of threads (compared to a process):
- Faster to create than a process:
- skips resource allocation through the kernel
- roughly a factor of 10 faster
- faster to terminate:
- don't have to release resource through the kernel
- less time to switch between two threads within the same process
- more efficient communication between them
- share memory when within the same process
Uses of threads:
- Foreground to background work
- t1 handles sampling, t2 background checks, t3 data processing
- Asynchronous processing
- t1 computes everything, t2 passes the results through RS-232
- Speed of execution
- multiple threads on multiple CPUs/cores, I/O does not block the app, only
a single thread
- Modular program structure
- Suspending a process means suspending all threads of the process since they share
the same address space
- Same for termination
Thread states:
- Associated states with a change in the thread state:
- Spawn
- Block
- Unblock
- Finish (deallocate register context and stacks)
- User-level threads
- All thread management is done by the application
- Kernel not aware of their existence
- Application schedules execution
- Kernel schedules processes
- One thread making a blocking kernel I/O call blocks the entire process, since the kernel schedules only the process
- Kernel-level threads
- Kernel maintains the context info for the process and its threads
- Scheduling done on a thread basis
- User:
- Less switching overhead (save 2 mode switches)
- Scheduling is app specific
- Can run on any OS
- Kernel:
- OS calls are blocking only the thread
- Can schedule threads simultaneously on multiple processors
Combined approaches (e.g. I/O processes):
- Threads can be grouped to kernel threads
- Kernel knows and schedules processes and threads
Relationship between T and P:
- 1:1 -> each thread is a unique process with its own address space and resources
-> e.g. Traditional Unix implementations
- M:1 -> multiple threads created and executed within a single process, which owns
the address space and dynamic resources
-> e.g. Windows NT, Solaris, Linux
- 1:M -> thread can migrate from one process **environment** to another, allowing
easy movement among distinct systems
-> e.g. Ra (Clouds), Emerald
- M:N -> combines M:1 and 1:M
-> e.g. TRIX
### Microkernels
- Small OS core
- only essential OS functionalities
- external subsystems (not included in the kernel):
- Device drivers
- File systems
- Virtual memory manager
- Windowing system
- Security services
- Uniform interface on request made by a process
- No distinction between user- and kernel- level services
- All services provided by message passing
- Extensibility
- Allows addition of new services
- Flexibility
- Features can be added or subtracted
- Portability
- When porting to a new processor, only the microkernel needs to change, no
change in services
- Reliability
- Modular
- Can be tested better
- Module crashing does not crash kernel
- Distributed system support
- Message sent without knowledge of target machine
- components are defined by their interfaces and can be interconnected to form software
(*and how is it different from a regular kernel??*)
- Low-level memory management
- Virtual page -> physical page frame mapping
- Interprocess communication
- Concepts: messages, ports, capabilities
- Requires copying of messages; remapping pages may be faster
- I/O and interrupt management
- Interrupts are messages sent to processes
- Microkernel doesn't need to know anything about the IRQ handling function
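The "all services provided by message passing" idea can be sketched with a minimal port abstraction. The `Port` class, the request format, and the file-system server are all hypothetical stand-ins for microkernel IPC, not any real kernel's API:

```python
from collections import deque

class Port:
    """FIFO message queue standing in for a microkernel port."""
    def __init__(self):
        self._queue = deque()

    def send(self, msg):
        self._queue.append(msg)

    def receive(self):
        return self._queue.popleft()

# A user-space "file system server": even file reads arrive as messages.
def fs_server(port, files):
    op, name = port.receive()   # e.g. ("read", "config")
    if op == "read":
        return files.get(name)
```

The point of the sketch: the client never calls the file system directly — it only sends a message to a port, which is why services can live outside the kernel (or on another machine) without the client changing.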
### Multicore
*If you have N processors, is program now N times faster?*
Amdahl's law:
speedup = 1 / ((1 - f) + f/n)
where f is the fraction of the program that can run in parallel, n is the number of
processors, and (1 - f) is the fraction that is inherently serial
Sources of serialization: synchronization, data setup, reading input, etc.
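Amdahl's law drops straight into code; a minimal sketch:

```python
def amdahl_speedup(f, n):
    """Speedup on n processors when a fraction f of the work is parallelizable."""
    return 1.0 / ((1.0 - f) + f / n)
```

A fully parallel program (f = 1) on 8 processors gives a speedup of 8, but with f = 0.5 even two processors only reach 1/(0.5 + 0.25) = 4/3, and no number of processors can push past 1/(1 - f) = 2 — which is the answer to the section's opening question: no, N processors do not make a program N times faster.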
- Multithreaded native applications: e.g. pthreads
- Multiprocess applications: e.g. chromium
- Virtualized languages: e.g. JVM
- Multi-instance applications: e.g. multiple browser windows
SE 350
Chapter 5 - Concurrency, Mutual Exclusion and Synchronization
### Concurrency:
- Multiple applications (multiprogramming)
- Structured application (app can be a set of concurrent processes)
- OS structure (OS is a set of processes or threads)
- Key terms:
- critical section: A section of code within process that requires access to
shared resources and which may not be executed while another process is in a
corresponding section of code.
- deadlock: a situation in which two or more processes are unable to proceed
because each is waiting for one of the others to do something
- livelock: two or more processes continuously change their state in response to
changes in the other process(es) without doing any useful work
- mutual exclusion: the requirement that when one process is in a critical section
that accesses shared resources, no other process may be in a critical section
that accesses any of those shared resources
- race condition: multiple threads or processes read and write a shared data
item and the final result depends on the relative timing of their execution
- starvation: a runnable process is overlooked indefinitely by the scheduler;
although it is able to proceed, it is never chosen
- Difficulties:
- Sharing of global resources
- OS managing the allocation of resources optimally
- Difficult to locate programming errors
- OS Concerns:
- Keep track of various processes
- Allocate and deallocate resources
- Processor time
- Memory
- Files
- I/O devices
- Protect data and resources
- Functions of process must be independent of the speed of execution of other
concurrent processes
- Interaction among process:
1. Processes unaware of each other
2. Processes indirectly aware of each other
3. Processes directly aware of each other
| Degree of Awareness | Relationship | Influence that One Process Has on the Other | Potential Control Problems |
| --- | --- | --- | --- |
| Processes unaware of each other | Competition | Results of one independent of the others; timing might be affected | Mutual exclusion, deadlock, starvation |
| Processes indirectly aware of each other | Cooperation by sharing | Results of one may depend on info obtained from others; timing may be affected | Mutual exclusion, deadlock, starvation, data coherence |
| Processes directly aware of each other | Cooperation by communication | Results of one may depend on info obtained from others; timing may be affected | Deadlock, starvation |
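Mutual exclusion as defined above can be sketched with a lock guarding the critical section; Python's `threading` module stands in here for the OS primitives the chapter discusses:

```python
import threading

def increment_many(n_threads=4, n_iters=10000):
    """n_threads threads each increment a shared counter n_iters times."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n_iters):
            with lock:          # critical section: read-modify-write
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

With the lock, the result is always exactly `n_threads * n_iters`; without it, the read-modify-write on `counter` could interleave and lose updates, exactly the race condition defined in the key terms.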