Operating System Memory Management Solved MCQs
This section contains Operating System Main Memory – Memory Management MCQs (Multiple Choice Questions with Answers). The MCQs require a detailed reading of the Operating System subject, as their difficulty has been kept at an advanced level.
- Disk, RAM, Caches
- RAM, Disk, Caches
- Caches, RAM, Disk
- RAM, Caches, Disk
- Caches
- RAM
- Disk
- All of the above
- Caches
- RAM
- Disk
- All of the above
- Caches
- RAM
- Disk
- All of the above
- Caches
- RAM
- Disk
- All of the above
- Caches
- RAM
- Disk
- All of the above
- Caches
- RAM
- Disk
- All of the above
- Miss Ratio
- Hit Ratio
- Bit Ratio
- Byte Ratio
- Run time library
- dynamic library
- static library
- load time library
- Caches
- Disk
- RAM
- Virtual memory
- Caches and RAM
- Caches and Disk
- RAM and Disk
- All of the above
This MCQ section mainly focuses on the following topics:
- Managing the memory hierarchy
- Static and dynamic memory allocation
- Static binding
- Dynamic binding
- Execution of programs
- Compilation or assembly
- Linking
- Relocation
- Static and dynamic relocation of programs
- Linking
- Static and dynamic linking
- Self-relocating programs
- Reentrant programs
- CPU
- MMU
- GPR
- None of the above
- Efficient use of memory
- Speedy allocation of memory
- Speedy deallocation of memory
- All of the above
- Static memory allocation can be performed by a compiler, linker, or loader.
- Static memory allocation to a process is possible only if the sizes of its data structures are known before its execution begins.
- Static memory allocation is performed in a lazy manner during execution of the program.
- Static memory is allocated to a function or a variable just before it is used for the first time.
- 1,2
- 2,3
- 3,4
- 1,4
- Dynamic memory allocation can be performed by a compiler, linker, or loader.
- Dynamic memory allocation to a process is possible only if the sizes of its data structures are known before its execution begins.
- Dynamic memory allocation is performed in a lazy manner during execution of the program.
- Dynamic memory is allocated to a function or a variable just before it is used for the first time.
- 1,2
- 2,3
- 3,4
- 1,4
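The contrast drawn in the statements above can be seen in a small C sketch (illustrative only; the names and sizes are invented): memory for the static array is reserved before execution begins, while the malloc'd block is allocated lazily at run time, once its size is known.

```c
#include <stdio.h>
#include <stdlib.h>

static int fixed_table[100];        /* static allocation: size fixed before execution begins */

int main(void) {
    int n = 25;                     /* size becomes known only at run time */
    int *dyn_table = malloc(n * sizeof *dyn_table);  /* dynamic (lazy) allocation */
    if (dyn_table == NULL)
        return 1;

    fixed_table[0] = 10;
    dyn_table[0] = 20;
    printf("static: %d, dynamic: %d\n", fixed_table[0], dyn_table[0]);

    free(dyn_table);                /* dynamically allocated memory is released explicitly */
    return 0;
}
```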
- Dynamic memory allocation
- Static memory allocation
- Contiguous memory allocation
- Non-contiguous memory allocation
- Dynamic memory allocation
- Static memory allocation
- Contiguous memory allocation
- Non-contiguous memory allocation
- Dynamic
- Static
- Contiguous
- Non-contiguous
- loader
- Linker
- Compiler or Assembler
- re-locator
- loader
- Linker
- relocator
- Compiler or Assembler
- loading
- Linking
- Compiling or Assembling
- relocation
- linking of object module
- loading of object module
- Relocation of object module
- compiling of object module
- compiler
- linker
- loader
- relocator
- compiler
- linker
- loader
- relocator
- Static relocation is performed before execution of the program begins.
- Static relocation is performed during execution of the program.
- Static relocation can be performed by suspending a program's execution, carrying out the relocation procedure, and then resuming its execution.
- It would require information concerning the translated origin and address-sensitive instructions to be available during the program's execution.
- 1
- 1,2
- 1,2,3
- 1,2,3,4
- Dynamic relocation is performed before execution of the program begins.
- Dynamic relocation is performed during execution of the program.
- Dynamic relocation can be performed by suspending a program's execution, carrying out the relocation procedure, and then resuming its execution.
- It would require information concerning the translated origin and address-sensitive instructions to be available during the program's execution.
- 1
- 1,2
- 2,3
- 2,3,4
- Base Register
- special register
- Relocation register
- PSW-Program Status word
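As a rough illustration of how a relocation register supports dynamic relocation, the sketch below adds the register's contents to every logical address to obtain the physical address. The register value and the logical address are assumed purely for illustration, not taken from any particular machine.

```c
#include <stdio.h>

/* Hypothetical value chosen for illustration only: the program's load origin. */
#define RELOCATION_REGISTER 14000u

/* Physical address = contents of relocation register + logical address. */
static unsigned int translate(unsigned int logical_address) {
    return RELOCATION_REGISTER + logical_address;
}

int main(void) {
    unsigned int logical = 346;
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```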
- The linker links all modules of a program before its execution begins.
- The linker is invoked when an unresolved external reference is encountered during execution; the linker resolves the external reference and resumes execution of the program.
- If several programs use the same module from a library, each program gets a private copy of the module; several copies of the module might be present in memory at the same time if several programs using the same module are in execution.
- All of the above.
- 1 only
- 1,2 only
- 1,3 only
- 4
- Linking is performed during execution of a binary program.
- If the module referenced by the program has already been linked to another program that is in execution, the same copy of the module could be linked to this program as well, thus saving memory.
- It produces a binary program that does not contain any unresolved external references.
- All of the above
- 1 only
- 1,2 only
- 1,3 only
- 4
- reentrant program
- object module
- Binary Program
- self-relocating program
- reentrant program
- object module
- Binary Program
- self-relocating program
- Dynamically linked program
- Object module
- Binary Program
- self-relocating program
- Dynamically linked program
- Reentrant program
- Binary Program
- self-relocating program
- Dynamically linked program
- Reentrant program
- Binary Program
- self-relocating program
This MCQ section mainly focuses on the following topics:
- Use of the stack
- The memory allocation model for a process
- Heap
- stack
- CPU register
- Program Counter
- FIFO
- LIFO
- FILO
- LILO
- Top of Stack (TOS), Frame Base (FB)
- Top of Stack (TOS), Stack Base (SB)
- Stack Base (SB), Top of Stack (TOS)
- Frame Base (FB), Top of Stack (TOS)
- stack
- Frame Base
- base Frame
- Stack frame
- Address of the function's parameters
- Values of the function's parameters
- The return address
- All of the above
- The next value of the frame base.
- The previous value of the frame base.
- the return address of the function
- All of the above.
- The next value of the frame base
- The previous value of the frame base.
- the return address of the function
- All of the above.
- Last entry in the stack frame
- the return address of the function
- the last local data in the stack frame
- First entry in the stack frame
- the last local data in the stack frame
- Last entry in the stack frame
- the return address of the function
- First entry in the stack frame
- Directory entry
- Size of the stack
- PCD (program-controlled dynamic) data, which vary during execution of the program
- All of the above
- The code and static data components in the program are allocated memory area that exactly matches their size.
- The PCD data and stack share a single large area of memory but grow in opposite directions when memory is allocated to new data.
- The PCD data is allocated by starting at the low end of this area, while the stack is allocated by starting at the high end of the area. The memory between these two components is free.
- In this model the stack and PCD data components do not have individual size restrictions.
- 1,4
- 1,2
- 1,3
- 1,2,3,4
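The layout described in the statements above, with PCD (heap) data allocated from the low end of a shared free area and the stack from the high end, can be observed loosely with a sketch like the one below. The exact addresses printed are entirely system dependent, so treat the output only as an indication of the two components growing from opposite ends.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack;                                /* allocated in the stack component */
    int *on_heap = malloc(sizeof *on_heap);      /* allocated in the PCD (heap) component */
    if (on_heap == NULL)
        return 1;

    printf("stack variable at %p\n", (void *)&on_stack);
    printf("heap block at     %p\n", (void *)on_heap);

    free(on_heap);
    return 0;
}
```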
- MMU
- Run-time libraries of the programming language, i.e., library routines
- Kernel
- All of the above
This MCQ section mainly focuses on the following topics:
1. Reuse of memory
- Maintaining a free list
- Performing fresh allocation by using a free list
- First-Fit Technique
- Best-Fit Technique
- Next-Fit Technique
- Random and Sequential order
- Sequential order
- Random Order
- None of the above
- Stack
- heap
- stack and heap
- None of the above
- stack-based
- heap-based
- stack and heap based
- None of the above
- stack-based
- heap-based
- stack and heap based
- None of the above
- stack-based
- heap-based
- stack and heap based
- None of the above
- Maintaining a free list.
- Select a memory area for allocation.
- Merge free memory areas.
- All of the above
- The size of the memory areas
- the pointers used for forming the lists.
- The size of the memory areas and the pointers used for forming the lists.
- Stack pointer
- Singly linked free list.
- Doubly linked free list
- Stack
- singly linked free list and stack
- First Fit technique
- Best-Fit technique
- Next-Fit technique
- All of the above.
- First Fit technique
- Best-Fit technique
- Next-Fit technique
- Good-Fit technique.
- First Fit technique
- Best-Fit technique
- Next-Fit technique
- Good-Fit technique.
- First Fit technique
- Best-Fit technique
- Next-Fit technique
- Good-Fit technique.
- First Fit
- Best-Fit
- Next-Fit
- worst-Fit
- First Fit
- Best-Fit
- Next-Fit
- worst-Fit
- First Fit
- Best-Fit
- Next-Fit
- worst-Fit
- First Fit
- Best-Fit
- Next-Fit
- worst-Fit
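To make the difference between these selection techniques concrete, here is a minimal sketch; the free-list sizes and the request size are invented for illustration. It selects a free area by the first-fit and best-fit rules; next-fit would behave like first-fit but resume the scan from where the previous search stopped, and worst-fit would pick the largest free area.

```c
#include <stdio.h>

/* Sizes of free areas on a hypothetical free list, in list order. */
static const int free_area[] = { 200, 170, 500, 130, 300 };
static const int n_areas = sizeof free_area / sizeof free_area[0];

/* First-fit: use the first free area large enough for the request. */
static int first_fit(int request) {
    for (int i = 0; i < n_areas; i++)
        if (free_area[i] >= request)
            return i;
    return -1;                      /* no area is large enough */
}

/* Best-fit: use the smallest free area that is still large enough. */
static int best_fit(int request) {
    int best = -1;
    for (int i = 0; i < n_areas; i++)
        if (free_area[i] >= request &&
            (best == -1 || free_area[i] < free_area[best]))
            best = i;
    return best;
}

int main(void) {
    int request = 150;
    printf("request %d: first-fit -> area %d, best-fit -> area %d\n",
           request, first_fit(request), best_fit(request));
    /* Prints: first-fit -> area 0 (size 200), best-fit -> area 1 (size 170). */
    return 0;
}
```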
This MCQ section mainly focuses on the following topics:
1. Memory fragmentation
- Merging of free memory areas
- Boundary tags
- Memory compaction
2. Buddy system and power-of-2 allocators
- Buddy system allocator
- Power-of-2 allocator
- The existence of usable areas in the memory of a computer system
- The existence of unusable areas in the memory of a computer system
- The existence of unreachable areas in the memory of a computer system
- None of the above
- Memory area remains unused because it is too large to be allocated
- Memory area remains unused because it is too small to be allocated
- More memory is allocated than requested by the process
- Less memory is allocated than requested by the process
- Memory area remains unused because it is too large to be allocated
- Memory area remains unused because it is too small to be allocated
- More memory is allocated than requested by the process
- Less memory is allocated than requested by the process
- stack overflow
- page faults
- Better Utilization of memory
- poor utilization of memory
- A boundary tag is a status descriptor for a memory area.
- It consists of an ordered pair giving the allocation status of the area; whether it is free or allocated.
- Boundary tags are identical tags stored at the start and end of a memory area.
- When an area of memory becomes free, the kernel checks the boundary tags of its neighboring areas.
- 1,2 only
- 1,3 only
- 1,4 only
- 1,2,3,4
- When an area of memory is freed, the total number of free areas in the system decreases by 1
- When an area of memory is freed, the total number of free areas in the system increases by 1
- When an area of memory is freed, the total number of free areas in the system remains the same, depending on whether the area being freed has zero, two, or one free areas as neighbors.
- All of the above
- The number of free areas is half the number of allocated areas, i.e., m = n/2
- It helps in estimating the size of the free list
- It also gives us a method of estimating the free area in memory at any time
- If sf is the average size of a free area of memory, the total free memory is sf × n/2
- 1 only
- 1,2 only
- 1,2,3 only
- 1,2,3,4
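A small numeric sketch of the estimate in the statements above; the count of allocated areas and the average free-area size are assumed purely for illustration. With n allocated areas there are roughly n/2 free areas, so the total free memory is about sf × n/2.

```c
#include <stdio.h>

int main(void) {
    int n = 100;                     /* assumed number of allocated areas */
    double sf = 200.0;               /* assumed average size of a free area, in bytes */

    int free_areas = n / 2;          /* m = n/2 free areas (approximation) */
    double total_free = sf * n / 2;  /* estimated total free memory */

    printf("free areas ~ %d, total free memory ~ %.0f bytes\n",
           free_areas, total_free);
    return 0;
}
```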
- Memory Paging
- Memory Swapping
- Memory Compaction
- Memory segmentation
- It involves movement of code and data in memory.
- It is feasible only if the computer system provides a relocation register; the relocation can be achieved by simply changing the address in the relocation register.
- It does not involve movement of code and data in memory.
- It does not involve use of a relocation register.
- 1,2 only
- 3,4 only
- 2,3 only
- 1,4 only
- External fragmentation
- Internal Fragmentation
- No fragmentation
- None of the above
- power of 2, same
- square of 2, same
- power of 2, different
- square of 2, different
- The buddy system splits and recombines memory blocks in a predetermined manner during allocation and deallocation.
- No splitting of blocks takes place, and no effort is made to coalesce adjoining blocks to form larger blocks; when released, a block is simply returned to its free list.
- When a request is made for m bytes, the allocator first checks the free list containing blocks whose size is 2^i for the smallest value of i such that 2^i ≥ m. If this free list is empty, it checks the list containing blocks that are the next higher power of 2 in size, and so on. An entire block is allocated to a request.
- When a request is made for m bytes, the system finds the smallest power of 2 that is ≥ m. Let this be 2^i. If the free list of blocks of size 2^i is empty, it checks the list for blocks of size 2^(i+1); it takes one block off this list and splits it into two halves of size 2^i. It puts one of these blocks into the free list of size 2^i and uses the other block to satisfy the request.
- 1 only
- 1,2 only
- 2,3 only
- 1,4 only
- The buddy system splits and recombines memory blocks in a predetermined manner during allocation and deallocation.
- No splitting of blocks takes place, and no effort is made to coalesce adjoining blocks to form larger blocks; when released, a block is simply returned to its free list.
- When a request is made for m bytes, the allocator first checks the free list containing blocks whose size is 2^i for the smallest value of i such that 2^i ≥ m. If this free list is empty, it checks the list containing blocks that are the next higher power of 2 in size, and so on. An entire block is allocated to a request.
- When a request is made for m bytes, the system finds the smallest power of 2 that is ≥ m. Let this be 2^i. If the free list of blocks of size 2^i is empty, it checks the list for blocks of size 2^(i+1); it takes one block off this list and splits it into two halves of size 2^i. It puts one of these blocks into the free list of size 2^i and uses the other block to satisfy the request.
- 1 only
- 1,2 only
- 2,3 only
- 1,4 only
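The common step in both allocators described above is rounding a request of m bytes up to the nearest block size 2^i ≥ m. The sketch below shows only that rounding step and the internal fragmentation it causes (the request size is arbitrary); the buddy system would additionally split a larger block into two halves and later coalesce freed buddies, which is not shown here.

```c
#include <stdio.h>

/* Smallest power of 2 that is >= m (assumes m >= 1). */
static unsigned int round_to_power_of_2(unsigned int m) {
    unsigned int block = 1;
    while (block < m)
        block *= 2;
    return block;
}

int main(void) {
    unsigned int request = 1100;                 /* arbitrary request size in bytes */
    unsigned int block = round_to_power_of_2(request);

    printf("request %u bytes -> block of %u bytes, internal fragmentation %u bytes\n",
           request, block, block - request);     /* 1100 -> 2048, 948 bytes wasted */
    return 0;
}
```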
- TRUE
- FALSE
- TRUE
- FALSE
This MCQ section mainly focuses on the following topics:
- Contiguous memory allocation
- Noncontiguous memory allocation
- Logical addresses, physical addresses, and address translation
- Approaches to noncontiguous memory allocation
- Paging
- Segmentation
- The same process is allocated in a different area in the memory
- All the processes are allocated a single contiguous area in the memory
- Each process is allocated a single contiguous area in the memory
- All of the above
- Memory fragmentation
- Page Faults
- less throughput
- Less hit ratio
- Internal fragmentation
- External fragmentation
- inline fragmentation
- outline fragmentation
- Internal fragmentation
- External fragmentation
- Page Faults
- Swapping
- removing of memory
- No Relocation
- Static relocation
- Dynamic relocation
- Program Counter
- Special Purpose Register
- Relocation Register
- Program status Word
- Portions of its address space are distributed among many areas of memory
- All the processes are allocated a single contiguous area in the memory
- Each process is allocated a single contiguous area in the memory
- All of the above
- It increases external fragmentation
- It reduces external fragmentation
- It increases internal fragmentation
- It reduces internal fragmentation
- constituents the physical address space
- of an instruction or data byte as used in a process.
- in a memory where an instruction or data byte exists
- All of the above
- constituents the logical address space
- of an instruction or data byte as used in a process.
- in a memory where an instruction or data byte exists
- All of the above
- Physical address
- logical address
- effective address
- None of the above
- Physical address
- logical address
- effective address
- None of the above
- physical address translation
- logical address translation
- address translation
- All of the above
- TRUE
- FALSE
- TRUE
- FALSE
- TRUE
- FALSE
- TRUE
- FALSE
- Paging
- Segmentation
- Memory compaction
- power of 2 allocator
- 1,2
- 2,3
- 3,4
- 1,4
1. In approaches to noncontiguous memory allocation, the memory can accommodate an integral number of pages; it is partitioned into memory areas that have the same size as a page. This approach is known as ______
- Segmentation
- Paging
- Thrashing
- none of the above
- Segment
- Page table
- Pages
- All of the above
- Segmentation
- Paging
- Segmentation with paging
- All of the above
- Segmentation
- Paging
- Segmentation with paging
- All of the above
- Paging, Pages
- Segmentation, Pages
- Segmentation, Segments
- Segment table, Pages
- Segments
- Page table
- Pages
- Segment table
- Segmentation
- Paging
- Segmentation with paging
- All of the above.
- Segmentation
- Paging
- Segmentation with paging
- All of the above
- -1
- 1
- 2
- 0
- -1
- 1
- 2
- 0
1. A memory hierarchy, consisting of a computer system's memory and a disk, that enables a process to operate with only some portion of its address space in memory is known as _______
- RAM
- ROM
- Virtual memory
- Disk
- Memory management unit
- Virtual memory manager
- Memory manager
- All of the above.
- Memory management Unit
- Virtual memory manager
- Software Management Unit
- A and B both
- RAM
- ROM
- Disk
- Virtual Memory
- Memory management Unit
- Virtual memory manager
- Memory manager
- All of the above.
- Thrashing
- Swapping
- Demand paging
- Segmentation
- Law of locality of objects
- Law of locality of pointers
- Thrashing
- Law of locality of reference
- True
- False
- True
- False
- True
- False
- True
- False
- True
- False
- True
- False
- True
- False
1. The memory of the computer system is considered to consist of page frames, where a page frame is a memory area that has the same size as a ____
- Page Table
- Page
- Segment
- Page frame table
- 1 to #frames
- 1 to #frames-1
- 0 to #frames -1
- Any of the above
- Effective memory address of a logical address (pi, bi) = start address of the segment containing page pi + bi
- Effective memory address of a logical address (pi, bi) = start address of the page frame containing page pi + bi
- Effective memory address of a logical address (pi, bi) = start address of the page frame containing page pi - bi
- None of the above.
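A worked sketch of the address calculation above, with an assumed page size and an assumed page table: the page number selects a page frame, and the byte number is added to that frame's start address.

```c
#include <stdio.h>

#define PAGE_SIZE 1024u

/* Hypothetical page table: page_table[pi] holds the page frame number of page pi. */
static const unsigned int page_table[] = { 5, 9, 2, 7 };

/* Effective memory address of logical address (pi, bi)
   = start address of the page frame containing page pi + bi. */
static unsigned int effective_address(unsigned int pi, unsigned int bi) {
    unsigned int frame_start = page_table[pi] * PAGE_SIZE;
    return frame_start + bi;
}

int main(void) {
    /* Page 2 lives in frame 2, so (2, 100) -> 2 * 1024 + 100 = 2148. */
    printf("(2, 100) -> %u\n", effective_address(2, 100));
    return 0;
}
```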
- Page table
- Frame table
- Pages
- Frame list
- Valid bit
- Page frame#
- Modified
- Prot info
- All of the above.
- Valid bit
- Page frame#
- Modified
- Prot info
- Valid bit
- Page frame#
- Modified
- Prot info
- Ref info
- Modified
- Prot info
- Other info
- Ref info
- Modified
- Other info
- Prot info
- Ref info
- Modified
- Other info
- Prot info
- Ref info
- Modified
- Other info
- Prot info
- Valid bit
- Page frame#
- Modified
- Prot info
- Valid bit
- Page frame#
- Modified
- Prot info
- Valid bit
- Page frame#
- Modified
- Prot info
- Look up page table
- Obtain page number and byte number in a page
- Form effective memory address.
- All of the above.
- Page hit
- Page miss
- Page Fault
- All of the above.
- Page in
- Page out
- Page replacement operations
- All of the above.
- Page hit
- Page out
- Page Miss
- Page in
- Page hit
- Page out
- Page Miss
- Page in
- Page I/O
- Process I/O
- Program I/O
- Disk I/O.
- Only Time consumed by the MMU in performing address translation
- Only the average time consumed by the virtual memory manager in handling a page fault
- Time consumed by the MMU in performing address translation and the average time consumed by the virtual memory manager in handling a page fault
- None of the above.
- Effective memory access time = pr1 × 2 × tmem + (1 - pr1) × (tmem + tpfh + 2 × tmem)
- Effective memory access time = pr1 × 1 × tmem + (1 - pr1) × (tmem + tpfh + 2 × tmem)
- Effective memory access time = pr1 × 1 × tmem1 ) × (tmem + tpfh + 2 × tmem)
- Effective memory access time = pr1 × 2 × tmem - (1 + pr1 ) × (tmem + tpfh + 2 × tmem )
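A numeric sketch of the first expression above, where pr1 is the probability that the referenced page is in memory, tmem is one memory access time, and tpfh is the page-fault handling time. The values below are invented only to show the arithmetic.

```c
#include <stdio.h>

int main(void) {
    double pr1  = 0.98;        /* assumed probability of no page fault        */
    double tmem = 100e-9;      /* assumed memory access time: 100 ns          */
    double tpfh = 8e-3;        /* assumed page fault handling time: 8 ms      */

    /* Effective memory access time
       = pr1 * 2 * tmem + (1 - pr1) * (tmem + tpfh + 2 * tmem)                */
    double emat = pr1 * 2 * tmem + (1 - pr1) * (tmem + tpfh + 2 * tmem);

    printf("effective memory access time = %.3f microseconds\n", emat * 1e6);
    return 0;
}
```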
- Page faults occur and there are no free page frames in the memory.
- Page faults occur and there are free page frames in the memory.
- Page faults would arise if the replaced page is referenced again.
- It is important to replace a page that is not likely to be referenced again in the immediate future.
- 1 only
- 1 and 3 only
- 1 , 2 and 4 only
- 1,3 and 4 only
- It states that the physical addresses used by a process in any short interval of time during its operation tend to be bunched together in certain portions of its logical address space.
- It states that the logical addresses used by a process in any short interval of time during its operation tend to be bunched together in certain portions of its logical address space.
- It states that the physical addresses used by a process in any short interval of time during its operation tend to be bunched together in certain portions of its physical address space.
- It states that the logical addresses used by a process in any long interval of time during its operation tend to be bunched differently in certain portions of its logical address space.
- More page faults and high hit ratio in cache
- Fewer page faults and high hit ratio in the disk
- High hit ratio in the cache and fewer page faults
- None of the above.
- An overcommitment of memory to a process implies a low page fault rate for the process; hence it ensures good process performance.
- An undercommitment of memory to a process causes a high page fault rate, which would lead to poor performance of the process.
- With an overcommitment of memory to a process, however, a smaller number of processes would fit in memory, which would cause CPU idling and poor system performance.
- All of the above.
- Active state
- Pending State
- Blocked State
- Ready State
- Swapping
- Switching
- Paging
- Thrashing
- Swapping
- Context Switching
- Paging
- Thrashing
- the number of bits required to represent the byte number in a page
- Memory wastage due to internal fragmentation
- Size of page table for a process
- Page Fault rates when a fixed amount of memory is allocated to a process
- 1 Only
- 1 and 2 only
- 1,2 and 3 only
- All of the above.
- n = ⌈z / s⌉
- n = ⌈s / z⌉
- n = ⌈z × s⌉
- n = ⌈z - s⌉
- s bytes
- s/2 bytes
- 1/s bytes
- 2s bytes
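As a worked sketch of the two results above, with an assumed process size z and page size s: the number of pages is the ceiling of z/s, and on average about half of the last page, s/2 bytes, is wasted as internal fragmentation.

```c
#include <stdio.h>

int main(void) {
    unsigned int z = 10500;               /* assumed process size in bytes */
    unsigned int s = 1024;                /* assumed page size in bytes    */

    unsigned int n = (z + s - 1) / s;     /* n = ceil(z / s) pages                      */
    unsigned int waste = n * s - z;       /* internal fragmentation in the last page    */

    printf("pages = %u, internal fragmentation = %u bytes (average expected ~ s/2 = %u)\n",
           n, waste, s / 2);
    return 0;
}
```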
- Page address register (PAR)
- Page frame address register (PFAR)
- Page table address register (PTAR)
- Page table size register (PTSR)
- Relocation Register
- Process Control block
- Stack
- Heap
- relocation register
- page table address register
- page table size register
- Process control block
- Memory protection
- Efficient address translation
- Page replacement support
- All of the above.
- PCB
- VM Manager
- Free frame list
- TLB
- Valid bit, page frame #, Prot info
- Page # , Page frame # , Prot info
- Valid bit, page frame #, Page #
- Page # , Page frame # , Valid bit
- TLB hit ratio
- TLB miss ratio
- Memory hit ratio
- Memory miss ratio
- TLB hit ratio
- TLB miss ratio
- Memory hit ratio
- Memory miss ratio
- Pr2 × (tTLB + tmem ) + (Pr1 - Pr2) × (tTLB + 2 × tmem) - (1 - Pr1) × (tTLB + tmem + tpfh + tTLB + 2 × tmem)
- Pr2 × (tTLB + tmem ) - (Pr1 - Pr2) × (tTLB + 2 × tmem) - (1 - Pr1) × (tTLB + tmem + tpfh + tTLB + 2 × tmem)
- Pr2 × (tTLB + tmem ) + (Pr1 - Pr2) × (tTLB + 2 × tmem) + (1 - Pr1) × (tTLB + tmem + tpfh + tTLB + 2 × tmem)
- Pr2 × (tTLB - tmem ) + (Pr1 + Pr2) × (tTLB - 2 × tmem) + (1 - Pr1) × (tTLB - tmem + tpfh + tTLB + 2 × tmem)
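A numeric sketch of the access-time expression with a TLB, where Pr2 is the probability of a TLB hit, Pr1 the probability that the page is in memory, and tTLB the TLB lookup time. All values are assumed purely for illustration.

```c
#include <stdio.h>

int main(void) {
    double pr1  = 0.98;     /* assumed probability that the page is in memory   */
    double pr2  = 0.90;     /* assumed probability of a TLB hit (pr2 <= pr1)    */
    double ttlb = 10e-9;    /* assumed TLB lookup time: 10 ns                   */
    double tmem = 100e-9;   /* assumed memory access time: 100 ns               */
    double tpfh = 8e-3;     /* assumed page fault handling time: 8 ms           */

    /* Effective memory access time
       = Pr2 × (tTLB + tmem)
       + (Pr1 - Pr2) × (tTLB + 2 × tmem)
       + (1 - Pr1) × (tTLB + tmem + tpfh + tTLB + 2 × tmem)                     */
    double emat = pr2 * (ttlb + tmem)
                + (pr1 - pr2) * (ttlb + 2 * tmem)
                + (1 - pr1) * (ttlb + tmem + tpfh + ttlb + 2 * tmem);

    printf("effective memory access time = %.3f microseconds\n", emat * 1e6);
    return 0;
}
```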
- Inverted page table
- Single level page table
- Multilevel page table
- Both a and c.
- Inverted page table
- Single level page table
- Multilevel page table
- Multiprogramming page table
- Inverted page table
- Single level page table
- Multilevel page table
- Multiprogramming page table
- Likely to be referenced in the immediate future
- Not Likely to be referenced in the immediate future
- Currently in use by the process
- None of the above.
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
- Prot info
- Valid bit
- Ref info
- None of the above.
- Prot info
- Valid bit
- Ref info
- None of the above.
- Prot info
- Valid bit
- Ref info
- None of the above.
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
- Heap property
- Array Property
- Stack Property
- All of the above.
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
- The number of page fault decreases when memory allocation for the process is increased
- The number of page fault increases when memory allocation for the process is decreased
- The number of page fault increases when memory allocation for the process is increased
- None of the above.
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
- LRU Page replacement Algorithm
- FIFO page replacement Algorithm
- Optimal page replacement algorithm
- NRU Page replacement algorithm
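To tie the page replacement questions together, here is a small simulation sketch; the reference string and the number of page frames are arbitrary. It counts page faults under FIFO and LRU replacement. The optimal algorithm, which replaces the page whose next reference lies farthest in the future, is not implemented because it needs knowledge of future references.

```c
#include <stdio.h>

#define FRAMES 3
#define REFS   12

/* Arbitrary page reference string used for illustration. */
static const int refs[REFS] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };

/* Count page faults with FIFO replacement: the victim is the oldest loaded page. */
static int fifo_faults(void) {
    int frame[FRAMES], next = 0, faults = 0;
    for (int i = 0; i < FRAMES; i++) frame[i] = -1;
    for (int r = 0; r < REFS; r++) {
        int hit = 0;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[r]) hit = 1;
        if (!hit) {
            frame[next] = refs[r];          /* replace the oldest page */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    return faults;
}

/* Count page faults with LRU replacement: the victim is the least recently used page. */
static int lru_faults(void) {
    int frame[FRAMES], last_use[FRAMES], faults = 0;
    for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last_use[i] = -1; }
    for (int r = 0; r < REFS; r++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[r]) hit = i;
        if (hit >= 0) {
            last_use[hit] = r;              /* page hit: record the time of use */
        } else {
            int victim = 0;                 /* choose the least recently used frame */
            for (int i = 1; i < FRAMES; i++)
                if (last_use[i] < last_use[victim]) victim = i;
            frame[victim] = refs[r];
            last_use[victim] = r;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    printf("FIFO faults: %d, LRU faults: %d\n", fifo_faults(), lru_faults());
    return 0;
}
```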