FIT9137 (Week 1 - Week 4)

2021-03-28 · raito4

Week1

Three major functions of an operating system (OS):

1940s ENIAC: ENIAC was the first general-purpose (programmable) electronic digital computer.

Bill Gates - Microsoft founder

Steve Jobs - Apple computer co-founder

Ken Thompson and Dennis Ritchie - received the 1999 US National Medal of Technology for the invention of Unix

Larry Ellison - Oracle founder

Richard Stallman - Founder of the Free Software Foundation and creator of the GNU Project

Linus Torvalds - Creator of Linux

Basic components of a PC:

The von Neumann Architecture:

Three concepts underlie the architecture:

Binary number system (“base 2”)
Binary digit (or bit): 0 or 1
8 bits = 1 ‘byte’, e.g. 0100 1001
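The example byte above can be checked in a shell: bash (not plain sh) accepts base-prefixed literals of the form `base#digits`, so `2#01001001` is the binary value directly:

```shell
# Interpret the example byte 0100 1001 as a base-2 number (bash-only syntax)
byte=01001001
echo "$((2#$byte))"   # prints: 73
```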

Machine language is the lowest-level programming language (closest to the hardware)

The Fetch/Execute Cycle

Interrupts

Interrupt and Multiprocessing

Week2

Infrastructure Architecture

Application Architecture

History of Operating Systems

Operating Systems


History of Unix - GNU

Richard Stallman (often referred to by his username rms) is the father of the Free Software Foundation which included the GNU (GNU’s not UNIX) project.

History of Unix - MINIX

Andrew Tanenbaum (PhD UC Berkeley) wrote a UNIX clone from scratch called MINIX in order to support an operating systems course he was teaching.

Unix Philosophy

File Management

everything in Unix is treated as a file

Typical tasks of the File Management System:

Memory Management

The operating system is also key to the control of data in primary storage (main memory).

In modern operating systems, memory allocation may be non-contiguous (a single logical object may be spread over several disjointed memory areas).

A major function of modern memory management is the implementation of virtual memory.

Process Management

A process is normally defined as “a program in execution”.
The operating system typically needs to provide the following functionality with regard to processes:

File Management

File management systems allow users to store information in fundamental units called 'files'. What a file actually represents is defined by the system and/or the user.
The file system provides the connection between the logical file structure and its physical implementation, creating a logical view for the user and hiding the physical view.

The O/S maintains a directory structure for each device to facilitate the location and organization of user files, and keeps track of free space, allocating and reclaiming it as required.
It provides naming, access/manipulation/storage, and security/protection functions for files and directories.

Ordinary Files

Directories

Special Files

Files - naming conventions

Week3

A special file : /etc/passwd
This file holds important information about all the users in the system. For instance, you may find in this file entries such as:
muni:x:1000:1000:muni:/home/muni:/bin/bash
/etc/group holds information about all the groups in the system.
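The colon-separated fields of the sample entry can be pulled apart with `cut`; field 7, for example, is the login shell:

```shell
# Split the sample /etc/passwd entry on ':' to inspect individual fields
entry='muni:x:1000:1000:muni:/home/muni:/bin/bash'
echo "$entry" | cut -d: -f1   # login name: muni
echo "$entry" | cut -d: -f6   # home directory: /home/muni
echo "$entry" | cut -d: -f7   # login shell: /bin/bash
```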
• a device is represented as a file.
• a program is a file.
• a directory is really also a file (!)
The kernel does not identify files by names; it uses a unique number to identify a file, called the inode number:

$ stat /root/test.txt
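A quick way to see inode numbers is `ls -i` or `stat`. This sketch assumes GNU coreutils (`stat -c`; BSD/macOS use `stat -f` instead) and a throwaway file name:

```shell
# Show a file's inode number two ways (GNU coreutils assumed)
touch demo.txt
ls -i demo.txt             # inode number, then the file name
stat -c '%i %n' demo.txt   # the same inode via stat's format string
```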

Character File type

File permissions

user        group       others
r/- w/- x/- r/- w/- x/- r/- w/- x/-
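The file type character and the nine permission characters together form the first column of `ls -l` output; a small sketch (file names here are arbitrary):

```shell
# The first character of `ls -l` output is the file type:
# '-' ordinary file, 'd' directory, 'l' symbolic link, 'c'/'b' device file
mkdir -p demo_dir
touch demo_file
ls -ld demo_dir  | cut -c1      # prints: d
ls -l  demo_file | cut -c1      # prints: -
ls -l  demo_file | cut -c2-10   # the nine permission characters, e.g. rw-r--r--
```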

$ ls

The -l option requests the output in long-listing format.
The -a option lists all files, including hidden files.
There is another option, -h, which makes ls display sizes in “human-readable” format (e.g. 8K, 555M, 4G).

Change file permission (chmod)

Syntax: chmod [-R] who op permission file-list
who is one of:
• u - user, the owner of the file
• g - group, the group to which the owner belongs
• o - other, all other users
• a - all; can be used in place of u, g, o
op is one of: + (add permission), - (remove permission), = (set permission exactly)
permission is one or more of: r, w, x
Note: this is typical of a Unix command - the command is given with some option(s) and the actual operands.
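A symbolic-mode sketch using the who/op/permission pieces above (the file name is arbitrary; the starting permissions are pinned with an octal chmod so the result is predictable):

```shell
touch script.sh
chmod 644 script.sh    # start from rw-r--r--
chmod u+x script.sh    # who=u, op=+, permission=x
chmod go-w script.sh   # who=g and o, op=-, permission=w (a no-op here: no w to remove)
ls -l script.sh        # the permissions now read -rwxr--r--
```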

File access for processes

When a process executes, it has four IDs: the real UID, effective UID, real GID, and effective GID.

These IDs determine a process's access permissions to files/directories.
The real UID is the UID of the user that created this process - i.e. the user who executes/runs the program.
The effective UID is used to evaluate the privileges of the process to perform a particular action.
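For a normal login shell the two UIDs coincide; `id` can report either one:

```shell
id -u    # effective UID of the current process
id -ru   # real UID; differs from the above only while a setuid program is running
```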

setuid and setgid

  1. set-user-id and
  2. set-group-id

chmod 764 file (octal digits: r=4, w=2, x=1, so 7 = rwx for user, 6 = rw- for group, 4 = r-- for others)
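The octal digits are just sums of r=4, w=2, x=1; a sketch with an arbitrary file name (GNU `stat -c` assumed):

```shell
touch file
chmod 764 file         # 7 = 4+2+1 (rwx), 6 = 4+2 (rw-), 4 (r--)
stat -c '%A %a' file   # prints: -rwxrw-r-- 764
```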

Every time a shell is started, 3 files are opened automatically: standard input (fd 0), standard output (fd 1), and standard error (fd 2).
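Those three streams can be redirected independently by file-descriptor number:

```shell
echo "normal output" 1> out.txt   # fd 1 (stdout) into a file; 1> is the same as >
ls /no/such/path 2> err.txt       # the command fails, but fd 2 (stderr) lands in err.txt
wc -l < out.txt                   # fd 0 (stdin) read from a file; counts 1 line
```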

Links - Hard Links

Hard link (default) : ln ~/week3/newfile.txt hardlink1
Soft link (using -s option) : ln -s ~/week3/newfile.txt softlink1
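The difference shows up in the inode numbers: a hard link shares the original's inode, while a soft link gets its own (file names here are arbitrary):

```shell
echo "hello" > newfile.txt
ln newfile.txt hardlink1       # hard link: a second name for the same inode
ln -s newfile.txt softlink1    # soft link: a separate file that stores the path
ls -li newfile.txt hardlink1 softlink1   # the first two lines share one inode number
[ newfile.txt -ef hardlink1 ] && echo "same file"   # prints: same file
```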

Memory management

Physical main memory is finite (and expensive)
✔ Single-processing: 1 process in memory at any one time. Easy to implement – either it fits or it doesn't.
✔ Multi-processing: multiple processes in memory at the same time.

Solutions : Swapping, Virtual Memory.

Swapping is a technique used to run more than one process at once. It allows the computer to rapidly "swap" the CPU between processes by loading and unloading them into/from memory. The switching occurs quickly enough to give the user the illusion that the system is multi-tasking.
Virtual memory is a more sophisticated technique used to solve memory management problems. It allows the computer to separate logical program addresses from actual physical addresses, using dynamic relocation of program addresses in memory.
It allows programs to be divided up into small sections stored in different parts of memory, and allows execution of programs larger than physical memory, i.e. only the currently executing portion of a program needs to be in memory.
Virtual memory may be implemented using paging and/or segmentation.

External fragmentation - free memory outside any allocation, left in pieces too small to be used.
Internal fragmentation - memory which is allocated (to a process) but unused.

Memory Management – Paging

Paging:

Page size is usually hardware-dependent, and is typically a power of 2 (e.g. 512 bytes, 1024 bytes, etc.) to aid address translation. A size of 2-4 KB is common.
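Power-of-2 page sizes make the translation a shift and a mask; a sketch for a 4 KB (2^12) page, with an arbitrary example address:

```shell
# Split a logical address into (page number, offset) for a 4 KB page size
addr=20000
page=$(( addr >> 12 ))      # high-order bits: the page number
offset=$(( addr & 4095 ))   # low 12 bits: the offset (4095 = 2^12 - 1)
echo "$page $offset"        # prints: 4 3616
```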
Large page sizes will :

The O/S also maintains a Process Table for all the processes. A Process Table contains entries called Process Control Blocks (PCB).
Each PCB represents one process
A typical PCB entry contains data such as :

Improving Paging performance by Caching

Page Replacement/Swapping

Paging Algorithms

Least Recently Used (LRU) - the least recently used page is swapped out; each access is time-stamped, which adds extra overhead.
Least Frequently Used (LFU) - the least frequently used page is swapped out; each access adds to a count, but this causes problems with new pages that have just been brought in (which would naturally have a low frequency count!)
First In First Out (FIFO) - the oldest page is swapped out first; simplest, but has the disadvantage that the most heavily used page may be replaced.
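FIFO is simple enough to simulate in a few lines of bash. This sketch counts page faults for an arbitrary reference string with 3 frames (it reproduces the classic example string, on which FIFO takes 9 faults):

```shell
#!/bin/bash
# FIFO page replacement: evict the oldest resident page on a miss
refs="1 2 3 4 1 2 5 1 2 3 4 5"
frames=() ; capacity=3 ; faults=0
for p in $refs; do
  case " ${frames[*]} " in
    *" $p "*) ;;                    # hit: page already resident
    *) faults=$(( faults + 1 ))     # miss: a page fault
       frames+=("$p")               # load the page at the back of the queue
       if [ ${#frames[@]} -gt $capacity ]; then
         frames=("${frames[@]:1}")  # evict the oldest (front) page
       fi ;;
  esac
done
echo "$faults faults"   # prints: 9 faults
```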

Paging Algorithm Variations

Not Used Recently (NUR) - a modification of LRU that also looks at whether a page has been modified (in addition to being accessed).
Second Chance algorithms are a modification of FIFO that allow a second chance if the reference bit is set; the page is time-stamped again as though new.

The Working Set Model

Page Replacement Policy

Segmentation

Segmentation is another approach to memory management, similar to paging.
• Reflects the logical division in programs and data, e.g. there may be a segment for global variables, the code portions of functions and procedures, etc.
• The division is decided by the programmer, and the O/S will then build the appropriate Segment Table (as opposed to a Page Table) to mirror the division.
• Program addresses are now of the form:
segment_number : offset

The address mapping from logical to physical addresses is maintained in a Segment Table (similar to a Page Table).
• Each segment may vary in size depending on its function (i.e. the logical division).
• Each segment is a logical entity of the program it represents. Compare this approach to paging, where each page/frame is of the same size.
• Segmentation is more complicated to program than paging, so it is less commonly used to implement virtual memory.

Segmentation vs Paging

[Figure: Segmentation vs Paging comparison]

Virtual Memory Technique

Advantages:
  1. Provides a large logical memory space to physical memory space ratio.
  2. Allows more processes to run concurrently.
  3. Process isolation - protects processes from each other; each process has its own virtual memory space.
  4. Less I/O resource usage, as we load in only the required sections of a user process.

Disadvantages:
  1. Adds complexity to memory management.
  2. Can cause performance degradation if not implemented properly - e.g. thrashing due to excessive page faults.

Memory Management

Order of increasing complexity:

  1. Single-processing – process loaded into memory and stays there until it has finished.
  2. Multi-processing - all processes staying in memory.
  3. Swapping - system can handle more processes than it has room for in memory, by swapping processes to and from disk.
  4. Virtual memory - can handle processes bigger than physical memory using paging or segmentation.
  5. Paging/Segmentation - memory subdivided into pages or segments.

Process Management

Process States

✔ New: the process is being created.
✔ Ready: waiting for a processor to become available.
✔ Running: instructions are being executed.
✔ Blocked: waiting for some event, e.g. I/O completion.


[Figure: process state transition diagram]

Processes

Process Control Block

Usually there are more processes than processors. Concurrency is achieved by interleaving processes, i.e. allocating each process a fraction of the CPU time.
When a process is interrupted, its current state must be saved so that it can be resumed later. This info is stored in a "Process Control Block", which forms one entry in the Process Table.
Process Control Block (PCB)

Scheduling

Long-term (or high-level) scheduler decides which processes will be admitted into the system's Ready Queue. The decision is based on memory availability and system load. Important in batch systems, but less so in most modern interactive systems.
Short-term (or low-level) scheduler works with processes already in memory and ready to run. A Dispatcher then decides which one to run next.

High-Level Scheduler

Low-Level Scheduler

Pre-emptive Vs Non-Pre-emptive Scheduling

• Non-pre-emptive scheduling means the system relies on a (well-written) process itself to relinquish the CPU when it finishes. E.g. Windows 3.1, Windows 95 (16-bit), "Classic" MacOS, etc.
• Pre-emptive scheduling means the O/S controls how the CPU is shared by the processes. E.g. Unix, Linux, Windows NT/2000/XP/Vista/7/8/10, Mac OS X, etc.

Non-pre-emptive scheduling

Non-pre-emptive algorithms are more applicable to batch systems. They differ from pre-emptive algorithms in that processes only stop executing when they decide to stop.
• Examples of non-pre-emptive algorithms:

Pre-emptive Scheduling

With pre-emptive scheduling, computer uses an inbuilt clock to ensure no process runs for too long.
• Pre-emptive scheduling is more common in interactive systems, but involves much more overhead.
• Most modern O/S’s use pre-emptive scheduling. e.g: an internal clock creates interrupts 50-100 times/sec.
• The O/S dispatcher runs at each clock interrupt to decide on next process to execute.
• Different algorithms can be used to achieve maximum efficiency and response.

Pre-emptive scheduling algorithms

Round Robin:

• All processes assigned equal time quantum to run.
• All ready-to-run processes are maintained in circular linked-list, and take turn to use CPU.
How long should a reasonable time quantum be? E.g.:
• If a process switch takes 5 msec, a 20 msec quantum means 5 msec of every 25 msec cycle - 20% of CPU time - is spent just switching processes.
• If a 500 msec quantum is used - very slow response time for interactive users.
• Usually, a quantum of ~100 msec is used.
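The trade-off can be checked with shell arithmetic: the overhead is the switch time divided by the full cycle (switch + quantum); note that integer division rounds down:

```shell
switch=5   # msec per process switch
for quantum in 20 100 500; do
  cycle=$(( switch + quantum ))
  echo "quantum=${quantum}ms overhead=$(( 100 * switch / cycle ))%"
done
# prints: quantum=20ms overhead=20%
#         quantum=100ms overhead=4%
#         quantum=500ms overhead=0%   (really ~1%, rounded down)
```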


[Figure: Round Robin scheduling queue]

Round Robin problems

• Round Robin does not allow definition of "more important processes", i.e. priority.
• Round robin also indirectly penalizes processes that frequently use I/O resources, by always returning them to back of queue even if used only small % of quantum. This is because I/O always takes longer to complete, hence such processes have higher chance of waiting/blocking.

Other Scheduling Algorithm

• Priority Scheduling:
• Processes given initial priority level.
• Usually multiple priority classes exist.
• Runnable processes maintained in priority queues, with round robin used within queue.
• To prevent CPU starvation for low priority jobs, may need to temporarily boost priority (e.g. if they have been waiting for long periods). Once process has had its share of CPU time, its priority drops back to normal.

Dynamic Priority Scheduling

• Another variation of priority scheduling is to assign priorities dynamically, using some formula.
• For instance, based on the fraction of the last time quantum used (f), the priority formula could be 1/f (i.e. more time used now, lower priority later).
• This would favor interactive users and I/O-bound jobs rather than CPU-bound jobs (the former tend to spend more time in blocked states and use less of their CPU time quantum, so the 1/f formula gives them higher priorities).

Three classic process-coordination problems:

  1. Mutual Exclusion - ensuring that non-shareable resources (e.g. printers) are accessed by only one process at a time.
  2. Synchronization - for processes which must cooperate (e.g. chat programs) with one another.
  3. Deadlock - when two or more processes want to use a non-shareable resource held by each other.

Process Management

In order to implement mutual exclusion and synchronization, and to deal with deadlocks, some form of process coordination is required.
• In addition, processes often need to exchange information.
• Both of these goals are met by Inter-Process Communication (IPC) mechanisms.

Week4

Local Area Network (LAN) (room, building): a group of clients and servers that share a circuit
Backbone Network (BN) (< a few km): high-speed connection between LANs
Metropolitan Area Network (MAN) (> a few km): connect LANs and BNs across locations
Wide Area Network (WAN): same as MAN except longer distances
• 1 Mbps (million bits per second) from your home to your ISP (Internet Service Provider), 10-20 Mbps in the other direction
• 50-500 Mbps within your WLAN (wireless network)
• 1 Gbps in LANs (local area networks, e.g. a Monash lab)
• 10 Gbps in backbone networks
• Tbps (terabits per second, 10^12) in optical fibre networks
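These rates can be put in perspective with a little shell arithmetic: the time to move a 1 GB (≈8000 Mbit, decimal units) file at a few of the speeds above (integer division):

```shell
size_mbit=8000   # 1 GB = 8 * 1000 Mbit in decimal units
for mbps in 1 50 1000; do
  echo "${mbps} Mbps: $(( size_mbit / mbps )) seconds"
done
# prints: 1 Mbps: 8000 seconds
#         50 Mbps: 160 seconds
#         1000 Mbps: 8 seconds
```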
