
OS2: Memory Management - Unit 2: Memory Allocation Techniques

1.0 Introduction

As you have learnt in the previous unit, memory management is essential to process execution, which is the primary job of the CPU. The main memory must accommodate both the operating system and the various user processes, so you need to allocate the different parts of main memory in the most efficient way possible. In this unit, therefore, you will learn about some memory management algorithms, in particular contiguous memory allocation and its different flavours. The problems that may arise from contiguous memory allocation (fragmentation) will also be discussed in this unit.

2.0 Objectives

At the end of this unit, you should be able to:

  • Describe contiguous memory allocation
  • Describe the variants of contiguous memory allocation: first-fit, best-fit, and worst-fit
  • Distinguish between internal and external fragmentation
  • Describe methods of solving external fragmentation

 

3.1 Contiguous Memory Allocation

As you may have noticed on your system, memory is usually divided into two partitions: one for the resident operating system and one for the user processes. You may place the operating system in either low memory or high memory. The major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is often in low memory, programmers usually place the operating system in low memory as well. Therefore, in this course, we shall only discuss the situation where the operating system resides in low memory.

We usually want several user processes to reside in memory at the same time. We, therefore, need to consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous section of memory.

3.2 Memory Allocation

Memory allocation can be done in two ways:

  • fixed partition
  • variable partition

These two methods are discussed in the following sections.

3.2.1 Fixed Partition Methods

One of the simplest methods for memory allocation is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process. In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition. When the process terminates, the partition becomes available for another process. This method was originally used by the IBM OS/360 operating system (where it was called MFT). It is no longer in use. The method we are going to describe next is a generalization of the fixed-partition scheme (called MVT). It is used primarily in a batch environment. The ideas presented are also applicable to a time-sharing environment in which pure segmentation is used for memory management.
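
The idea can be illustrated with a short sketch. The partition sizes, process names, and helper functions below are assumptions made purely for illustration; note that a small process placed in a large partition leaves the remainder of that partition unused.

```python
# A minimal sketch of fixed-partition (MFT-style) allocation.
# Partition sizes and process names are made up for illustration.

PARTITION_SIZES = [100, 200, 400]            # fixed at system set-up time
partitions = [None] * len(PARTITION_SIZES)   # None means the partition is free

def load_process(name, size):
    """Place a process into the first free partition large enough for it."""
    for i, occupant in enumerate(partitions):
        if occupant is None and PARTITION_SIZES[i] >= size:
            partitions[i] = name
            return i
    return None  # no free partition can hold the process; it must wait

def terminate_process(name):
    """Free the partition held by a terminating process."""
    for i, occupant in enumerate(partitions):
        if occupant == name:
            partitions[i] = None

load_process("P1", 150)   # goes into the 200-unit partition
load_process("P2", 80)    # goes into the 100-unit partition
print(partitions)         # ['P2', 'P1', None]
```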

3.2.2 Variable Partition Methods

The operating system keeps a table indicating which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered one large block of available memory, a hole. When a process arrives and needs memory, we search for a hole large enough for this process. If we find one, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests.

As processes enter the system, they are put into an input queue. The operating system takes into account the memory requirements of each process and the amount of available memory space in determining which processes are allocated memory. When a process is allocated space, it is loaded into memory and it can then compete for the CPU. When a process terminates, it releases its memory, which the operating system may then fill with another process from the input queue.

At any given time, we have a list of available block sizes and the input queue. The operating system can order the input queue according to a scheduling algorithm. Memory is allocated to processes until, finally, the memory requirements of the next process cannot be satisfied, that is, no available block of memory (or hole) is large enough to hold that process. The operating system can then wait until a large enough block is available, or it can skip down the input queue to see whether the smaller memory requirements of some other process can be met.

In general, a set of holes of different sizes is scattered throughout memory at any given time.

When a process arrives and needs memory, the system searches this set for a hole that is large enough for this process. If the hole is too large, it is split into two: one part is allocated to the arriving process, and the other is returned to the set of holes. When a process terminates, it releases its block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole. At this point, the system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.
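
A minimal sketch of this hole list, with splitting on allocation and merging on release, is shown below. Representing each hole as a (start, size) pair and the function names used are assumptions made for illustration.

```python
# A minimal sketch of a variable-partition hole list, assuming each hole
# is a (start, size) pair kept sorted by start address.

holes = [(0, 1000)]  # initially one large hole covering all of user memory

def allocate(size):
    """Carve `size` units out of the first hole big enough (first-fit)."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            if hole_size > size:                     # split the hole in two
                holes[i] = (start + size, hole_size - size)
            else:                                    # exact fit: hole disappears
                del holes[i]
            return start                             # base address of the allocation
    return None                                      # request cannot be satisfied

def free(start, size):
    """Return a block to the hole list and merge it with adjacent holes."""
    holes.append((start, size))
    holes.sort()
    merged = [holes[0]]
    for s, sz in holes[1:]:
        last_s, last_sz = merged[-1]
        if last_s + last_sz == s:                    # adjacent: coalesce
            merged[-1] = (last_s, last_sz + sz)
        else:
            merged.append((s, sz))
    holes[:] = merged

a = allocate(300)   # hole list becomes [(300, 700)]
b = allocate(200)   # hole list becomes [(500, 500)]
free(a, 300)        # hole list becomes [(0, 300), (500, 500)]
free(b, 200)        # adjacent holes merge back into [(0, 1000)]
```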

This procedure is a particular instance of the general dynamic storage-allocation problem: how to satisfy a request of size n from a list of free holes. There are many solutions to this problem. The set of holes is searched to determine which hole is best to allocate.

3.3 Memory Allocation Strategies

The first-fit, best-fit and worst-fit strategies are the most common ones used to select a free hole from the set of available holes.

3.3.1 First-Fit

In the first-fit algorithm, you allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. You can stop searching as soon as you find a free hole that is large enough.

3.3.2 Best-Fit

In the best-fit algorithm, you allocate the smallest hole that is big enough. You must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.

3.3.3 Worst-Fit

In the worst-fit algorithm, you allocate the largest available hole. As in best-fit, you must search the entire list, unless the list is kept ordered by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
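
The three strategies differ only in which hole they select from the free list. The sketch below assumes the free list is simply a list of hole sizes; the example values and function names are illustrative assumptions, not taken from the text.

```python
# A minimal sketch of the three hole-selection strategies, assuming the
# free list is just a list of hole sizes.

def first_fit(holes, size):
    """Return the index of the first hole big enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole big enough, or None."""
    candidates = [i for i, hole in enumerate(holes) if hole >= size]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, provided it is big enough."""
    if not holes:
        return None
    i = max(range(len(holes)), key=lambda j: holes[j])
    return i if holes[i] >= size else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1 -> the 500-unit hole, leftover 288
print(best_fit(holes, 212))    # 3 -> the 300-unit hole, leftover 88
print(worst_fit(holes, 212))   # 4 -> the 600-unit hole, leftover 388
```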

It can be shown, using techniques such as simulations, that both first-fit and best-fit are better than worst-fit in terms of decreasing both time and storage utilization. Neither first-fit nor best-fit is clearly better in terms of storage utilization, but first-fit is generally faster.

However, these algorithms suffer from external fragmentation. As processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when enough total memory space exists to satisfy a request, but the available space is not contiguous: storage is fragmented into a large number of small holes. This fragmentation problem can be severe. In the worst case, we could have a block of free (or wasted) memory between every two processes. If all this memory were in one big free block, we might be able to run several more processes.
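
As a small numeric illustration (the hole sizes are assumed for the example): with free holes of 80 KB, 90 KB, and 70 KB, a 200 KB request fails even though 240 KB of memory is free in total.

```python
# A small illustration of external fragmentation: enough free memory in
# total, but no single hole large enough for the request.
holes = [80, 90, 70]          # assumed free hole sizes, in KB
request = 200                 # KB

print(sum(holes) >= request)              # True  -> total free space suffices
print(any(h >= request for h in holes))   # False -> but no contiguous hole does
```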

The selection of first-fit versus best-fit strategies can affect the amount of fragmentation: first-fit is better for some systems, whereas best-fit is better for others. Another factor is which end of a free block is allocated first. However, you should note that no matter which algorithm you use, external fragmentation will be a problem.

3.4 Fragmentation

In the previous section you learnt about external fragmentation. You should, however, note that memory fragmentation can be internal or external.

3.4.1 Internal Fragmentation

To illustrate this, consider a multiple-partition allocation scheme with a hole of 18,464 bytes. Suppose that the next process requests 18,462 bytes. If you allocate exactly the requested block, you are left with a hole of 2 bytes. The overhead to keep track of this hole will be substantially larger than the hole itself. The general approach is to break physical memory into fixed-sized blocks and allocate memory in units of block size. With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation, that is, memory that is internal to a partition but is not being used.
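
The rounding that causes internal fragmentation can be sketched as follows. The 4 KB (4,096-byte) block size is an assumption chosen for the example, not a value from the text.

```python
# A minimal sketch of internal fragmentation under fixed-size blocks,
# assuming an illustrative block size of 4,096 bytes.
BLOCK_SIZE = 4096

def allocated_bytes(request):
    """Round a request up to a whole number of blocks."""
    blocks = -(-request // BLOCK_SIZE)     # ceiling division
    return blocks * BLOCK_SIZE

request = 18_462
allocated = allocated_bytes(request)
print(allocated)               # 20480 bytes (5 blocks)
print(allocated - request)     # 2018 bytes of internal fragmentation
```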

3.4.2 Solutions to External Fragmentation

  1. Compaction: This is one solution to the problem of external fragmentation. The goal is to shuffle the memory contents so as to place all free memory together in one large block. Compaction is not always possible, however: if relocation is static and is done at assembly or load time, compaction cannot be done. Compaction is possible only if relocation is dynamic and is done at run time. A minimal sketch of compaction appears after this list.
  2. Noncontiguous Logical-Address Space: Another solution to the external fragmentation problem is to permit the logical-address space of a process to be noncontiguous, thereby allowing a process to be allocated physical memory wherever such memory is available. Two ways of achieving this are paging and segmentation, and the two techniques can also be combined. You will be exposed to these two techniques in the next unit.
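
As referenced in item 1, the following is a minimal sketch of compaction. Describing each allocated region as a (name, start, size) triple kept in address order, and the memory size used, are assumptions made for illustration.

```python
# A minimal sketch of compaction: slide all allocated regions down to low
# memory so that the free space becomes one large hole at the top.
# This is possible only when relocation is dynamic (done at run time),
# so that each region's base address is allowed to change.

MEMORY_SIZE = 1000  # assumed size of the user area, in KB

def compact(regions):
    """regions: list of (name, start, size) in address order.
    Returns the relocated regions and the single remaining hole."""
    relocated = []
    next_free = 0
    for name, _start, size in regions:
        relocated.append((name, next_free, size))   # new base address
        next_free += size
    hole = (next_free, MEMORY_SIZE - next_free)     # one large hole at the end
    return relocated, hole

regions = [("P1", 0, 200), ("P2", 350, 100), ("P3", 700, 150)]
print(compact(regions))
# ([('P1', 0, 200), ('P2', 200, 100), ('P3', 300, 150)], (450, 550))
```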

4.0 Conclusion

In this unit, you have learnt about some of the algorithms for memory management, especially contiguous memory allocation. You have also learnt about the problem of fragmentation, especially external fragmentation. In the next unit, we will go further and discuss paging and segmentation, which are ways of implementing the noncontiguous logical-address space solution to external fragmentation.

 

 
