What is the difference between Logical Address and Physical Address Space?
- Logical and physical addresses are the same under compile-time and load-time binding schemes, but differ under the execution-time binding scheme.
| Logical Address | Physical Address |
| --- | --- |
| Generated by the CPU. | An actual location in the memory unit. |
| The set of all logical (virtual) addresses generated by a program. | The set of all physical addresses corresponding to those logical addresses. |
| The user can access logical addresses. | The user cannot directly access physical addresses. |
What is the MMU?
MMU stands for Memory Management Unit, the hardware that maps virtual (logical) addresses to physical addresses at run time. A simple MMU scheme is a generalization of the base-register scheme.
The base register here is called the relocation register; its value is added to every incoming virtual address.
How does the OS maintain isolation and protection?
Using Virtual address space concept.
The MMU maps the logical address dynamically by adding the relocation register value, so each process gets a legal range of address space: a base value plus a limit that bounds the range.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Since every address generated by the CPU (a logical address) is checked against these registers, both the OS and other users' programs and data are protected from modification by the running process.
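The relocation/limit register check above can be sketched as follows. This is a minimal illustration, not real hardware; the register values are made up.

```python
# Sketch of the MMU relocation + limit check. The register values below
# are illustrative, not from any real system.
RELOCATION = 14000   # relocation (base) register: added to every logical address
LIMIT = 3000         # limit register: size of the process's legal address range

def translate(logical_addr):
    """Map a logical address to a physical address, trapping on a violation."""
    if logical_addr >= LIMIT:          # every CPU-generated address is checked
        raise MemoryError("trap: addressing error (protection violation)")
    return RELOCATION + logical_addr   # relocation register adds the base

print(translate(100))   # 14100: relocated into this process's region
```

An address at or beyond the limit never reaches physical memory; the check traps to the OS instead, which is exactly how one process is kept out of another's space.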
What are different methods of Memory Allocation in Physical Memory?
A. Contiguous Memory Allocation
Fixed Partitioning:
Advantages:
Easy Implementation.
Little OS overhead in design and management.
Disadvantages:
Internal fragmentation: when a program needs less space than the size of its partition, the unused space inside the partition is wasted; this is internal fragmentation.
External fragmentation: the unused spaces scattered across partitions add up to a significant amount of memory that no process can use; this is external fragmentation.
Limit on process size: a process larger than the largest partition cannot be loaded into memory at all.
Low degree of multiprogramming, since there is no flexibility between partition sizes and process sizes.
Dynamic Partitioning:
Advantages:
No internal fragmentation.
No fixed limit on process size.
Disadvantages:
External fragmentation.
Complex Memory Allocation.
What is the solution for managing free space, i.e., for external fragmentation?
Defragmentation, or compaction: all the in-between vacant spaces are brought together into one contiguous block that processes can use. This reduces external fragmentation to an extent.
Free space is tracked with a free list (a linked-list data structure).
Compaction has an overhead and decreases the efficiency of the OS.
How to satisfy a request of size n from a list of free holes?
First-fit: allocate the first hole that is big enough. Fast (low time complexity).
Next-fit: an enhancement of first-fit that always starts searching from the last allocated hole.
Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size. Produces the smallest leftover hole, so less space is wasted, but it is slow because the whole free-hole list may be iterated.
Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole, which is more likely to be big enough for another process.
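The three strategies can be sketched over a list of free-hole sizes. The hole sizes and request size below are made up for illustration.

```python
# Sketch of first-fit, best-fit, and worst-fit over a free-hole list.
# Each function returns the index of the chosen hole, or None if none fits.
def first_fit(holes, n):
    for i, h in enumerate(holes):
        if h >= n:          # first hole big enough wins
            return i
    return None

def best_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, n):
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None   # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 (the 500 KB hole)
print(best_fit(holes, 212))   # 3 (300 KB: smallest leftover hole)
print(worst_fit(holes, 212))  # 4 (600 KB: largest leftover hole)
```

Note that first-fit stops at the first candidate while best-fit and worst-fit scan the whole list, which matches the speed trade-off described above.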
What is Paging?
Paging divides a process's logical address space into fixed-size pieces so the process can be placed non-contiguously in physical memory, avoiding external fragmentation.
The logical address space is divided into pages, while physical memory is divided into frames of the same size as the pages.
What is Page Table and its hardware architecture?
It is a data structure that maps pages to frames in physical memory.
The page table contains the base address of each page's frame in physical memory.
Every address generated by the CPU (a logical address) is divided into two parts: a page number (p) and a page offset (d). The page number p is used as an index into the page table to get the base address of the corresponding frame in physical memory.
The page table is stored in main memory at process-creation time, and its base address is stored in the process control block (PCB).
A page-table base register (PTBR) points to the current page table; switching page tables at a context switch requires changing only this one register.
Paging on its own is slow because each data access needs an extra memory reference, to the page table, before the desired location in physical memory can be reached.
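The p/d split described above can be sketched like this. The page size and page-table contents are illustrative assumptions, not from any real machine.

```python
# Sketch of paging address translation with an assumed 4 KB page size
# and a made-up page table mapping page numbers to frame numbers.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}   # page -> frame (illustrative)

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)   # page number p, page offset d
    frame = page_table[p]                    # the extra memory reference
    return frame * PAGE_SIZE + d             # frame base + unchanged offset

print(translate(4100))   # page 1, offset 4 -> frame 2 -> 8196
```

With a power-of-two page size the split is just taking the high bits as p and the low bits as d, which is why real MMUs can do it with simple wiring.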
How to speed up paging?
Translation Lookaside Buffer (TLB): a small hardware cache that stores recent page-to-frame translations (key: page number, value: frame number) to speed up paging.
Because the page table is stored in main memory, every reference through it is slow.
When we compute a physical address via the page table, after finding the frame corresponding to the page number, we put that (page, frame) entry into the TLB. Next time, the translation comes straight from the TLB without referencing the actual page table, making paging faster.
Some TLBs store address-space identifiers (ASIDs) in each TLB entry; an ASID uniquely identifies a process and provides address-space protection for it. Without ASIDs, the TLB must be flushed at every context switch, which is inefficient.
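The TLB-in-front-of-the-page-table idea, including ASID tagging, can be sketched with a dictionary cache. Structures and sizes are illustrative, not a real MMU.

```python
# Sketch of a TLB as a cache in front of the page table.
# ASID tagging keeps one process's entries from serving another's.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}   # current process's page table (illustrative)
tlb = {}                          # (asid, page) -> frame

def translate(asid, logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)
    key = (asid, p)
    if key in tlb:                # TLB hit: skip the page-table reference
        frame = tlb[key]
    else:                         # TLB miss: walk the page table, then cache it
        frame = page_table[p]
        tlb[key] = frame
    return frame * PAGE_SIZE + d

translate(1, 4100)          # miss: fills the TLB for (asid=1, page=1)
print(translate(1, 4200))   # hit: 8296, served from the TLB
```

Because entries are keyed by (ASID, page), a context switch to a process with a different ASID simply misses rather than reading stale translations, which is the protection the note above describes.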
What is Segmentation?
It is a memory-management scheme that divides a user program into segments matching the user's view of memory within the logical address space.
The process is divided into variable-size segments based on the user's view.
Modern system architectures provide both segmentation and paging, implemented in a hybrid approach.
What makes segmentation better than paging?
Paging is closer to the OS's view than the user's, so it may split a single function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.
Segmentation divides the process into segments where each segment groups related contents: for example, the main function can go in one segment and the library functions in another.
Advantages:
a. No internal fragmentation.
b. One segment has a contiguous allocation, hence efficient working within segment.
c. The size of segment table is generally less than the size of page table.
d. It results in a more efficient system because the compiler keeps the same type of functions in one segment.
Disadvantages:
a. External fragmentation.
b. Segments of different sizes are inconvenient at swap time.
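Segmentation translation works like the relocation/limit scheme, but with one (base, limit) pair per segment. The segment table below is an illustrative assumption.

```python
# Sketch of segmentation address translation with a per-segment
# (base, limit) table. Values are made up for illustration.
segment_table = [
    (1400, 1000),   # segment 0: base 1400, limit 1000
    (6300, 400),    # segment 1: base 6300, limit 400
]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset checked against the segment limit
        raise MemoryError("trap: segment offset out of range")
    return base + offset                # segments are contiguous in memory

print(translate(1, 53))   # 6353
```

A logical address here is the pair (segment number, offset), which is why segments can be variable-size: the limit is stored per segment instead of being fixed by the hardware page size.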
What is Virtual Memory?
Virtual memory is the separation of the user's logical memory from physical memory.
Only part of the program needs to be in memory for execution.
The logical address space can therefore be much larger than the physical address space.
It allows address spaces to be shared by several processes.
It allows for more efficient process creation.
More programs can run concurrently.
Less I/O is needed to load or swap processes.
Advantages:
The degree of multi-programming will be increased.
User can run large apps with less real physical memory.
Disadvantages:
The system can become slower as swapping takes time.
Thrashing may occur (explained below).
What is Demand Paging?
Demand Paging is a popular method of virtual memory management.
In demand paging, a page is brought into main memory only when it is demanded (referenced); pages that are not currently needed remain in secondary memory.
- Less I/O needed, no unnecessary I/O, less memory needed, faster response, more users.
Lazy swapper: never swaps a page into memory unless that page will be needed. A swapper that deals with individual pages is called a pager.
When does a page fault occur?
A page is copied into main memory when it is demanded, but if the page is not found in primary memory, a page fault occurs.
The valid-invalid bit scheme in the page table is used to distinguish between pages that are in memory and that are on the disk.
Valid-invalid bit 1 means, the associated page is both legal and in memory.
Valid-invalid bit 0 means, the page either is not valid or is valid but is currently on the disk.
How is a Page Fault handled?
If a process references a page, the first reference to that page traps to the operating system => page fault.
The operating system looks at another table to decide:
Invalid reference => abort the process.
Just not in memory => service the fault:
Find a free frame.
Swap the page into the frame via a scheduled disk operation.
Reset tables to indicate the page is now in memory; set the valid bit = v.
Restart the instruction that caused the page fault.
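The fault-handling steps above can be sketched as a tiny simulation. The backing store, free-frame list, and page numbers are all illustrative assumptions.

```python
# Sketch of page-fault handling: check the valid bit, trap on an invalid
# reference, otherwise load the page from a simulated backing store.
backing_store = {0: "page0-data", 1: "page1-data", 2: "page2-data"}
free_frames = [3, 4]     # pool of free frames (illustrative)
memory = {}              # frame -> page contents
page_table = {}          # page -> (frame, valid_bit)

def access(page):
    if page in page_table and page_table[page][1]:   # valid bit set: in memory
        return page_table[page][0]
    if page not in backing_store:                    # invalid reference => abort
        raise MemoryError("trap: invalid reference, abort process")
    frame = free_frames.pop(0)                       # find a free frame
    memory[frame] = backing_store[page]              # scheduled "disk" read
    page_table[page] = (frame, True)                 # set the valid bit = v
    return frame                                     # then restart the instruction

print(access(1))   # page fault: page 1 loaded into frame 3
print(access(1))   # valid bit already set: no fault, same frame
```

The second access shows the point of the valid bit: once the page is resident, the reference proceeds without trapping.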
What is Pure Demand Paging?
Extreme case: start the process with no pages in memory.
The OS sets the instruction pointer to the first instruction of the process, which is non-memory-resident => page fault.
What saves performance is locality of reference: e.g., the fetch and decode of an instruction that adds two numbers from memory and stores the result back touches several pages, but they tend to fall within the same locality.
Hardware support needed for demand paging
Page table with a valid/invalid bit.
Secondary memory (a swap device with swap space).
Instruction restart.
What is Free Frame List?
Most operating systems maintain a free-frame list -- a pool of free frames for satisfying requests such as swapping pages in from secondary memory to primary memory.
Operating systems typically allocate free frames using a technique known as zero-fill-on-demand -- the contents of a frame are zeroed out before it is allocated.
When a system starts up, all available memory is placed on the free-frame list.
Effective Access Time (EAT): EAT = (1 - p) x memory access time + p x (page fault overhead + swap page out + swap page in), where p = page fault rate.
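A worked example of the EAT formula, using assumed timings (200 ns memory access, 8 ms total page-fault service time; these numbers are illustrative, not from the text):

```python
# EAT = (1 - p) * memory access time + p * page-fault service time.
# Timings are assumptions for illustration: 200 ns access, 8 ms fault service.
def eat(p, mem_access_ns=200, fault_service_ns=8_000_000):
    return (1 - p) * mem_access_ns + p * fault_service_ns

print(eat(0.0))     # 200.0 ns: no page faults
print(eat(0.001))   # 8199.8 ns: one fault per 1000 accesses slows EAT ~40x
```

The example shows why the page-fault rate must be kept tiny: because the fault-service time dwarfs a memory access, even p = 0.001 dominates the effective access time.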
What happens if there is no free frame?
There is a need for page replacement.
Page replacement completes the separation between logical memory and physical memory: a large virtual memory can be provided on top of a smaller physical memory.
The page-replacement algorithm decides which memory page is to be replaced: some allocated page is swapped out of its frame, and the new page is swapped into the freed frame.
A modify (dirty) bit reduces the overhead of page transfers: only modified pages are written back to disk.
What is Thrashing?
If a process does not have "enough" pages, the page-fault rate is very high and thrashing occurs.
A page fault occurs to bring in a page => an existing frame is replaced.
But the replaced page is quickly needed back.
This leads to:
Low CPU utilization.
Operating system thinking that it needs to increase the degree of multiprogramming.
Another process added to the system.
How to handle thrashing?
The working set model, based on the concept of the locality model.
- The basic principle states that if we allocate enough frames to a process to accommodate its current locality, it will only fault when it moves to a new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
Page-fault frequency: we establish upper and lower bounds on the desired page-fault rate.
- If the page-fault rate exceeds the upper limit, allocate the process another frame; if it falls below the lower limit, remove a frame from the process.
How is basic Page Replacement handled?
Find the location of the desired page on disk.
Find a free frame: if there is a free frame, use it; if there is no free frame, use a page-replacement algorithm to select a victim frame, and write the victim frame to disk if it is dirty.
Bring the desired page into the (newly) freed frame; update the page and frame tables.
Continue the process by restarting the instruction that caused the trap.
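The victim-selection step can be sketched with the simplest replacement policy, FIFO (evict the page that has been resident longest). The frame count and reference string below are made up for illustration.

```python
# Sketch of FIFO page replacement: count page faults over a reference string.
from collections import deque

def fifo_faults(refs, num_frames):
    frames = deque()   # resident pages, oldest on the left
    faults = 0
    for page in refs:
        if page not in frames:            # page fault
            faults += 1
            if len(frames) == num_frames: # no free frame: pick a victim
                frames.popleft()          # FIFO evicts the oldest page
            frames.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 7 faults with 3 frames
```

Real systems use smarter policies (LRU approximations, clock) because FIFO can evict a heavily used page, but the structure -- detect fault, choose victim, load page -- is the same as the steps above.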