Multiprocessor Scheduling

Surbhigoel
12 min read · Dec 9, 2022


Hello everyone! In this blog we will discuss multiprocessor scheduling. In a multiprocessor system, several processors are connected to a single computer system, and load sharing plays the major role in scheduling for such an environment. Load sharing means that, with many processors available, each processor should execute its fair share of processes. Suppose one processor is executing a large number of processes, meaning it is heavily loaded, while another processor sits idle with no process to execute at all; that is an example of bad load balancing. In a multiprocessing environment we should avoid this and share the load evenly across every processor.

Types of multiprocessors

There are two forms of multiprocessor. One is the homogeneous multiprocessor, in which all the processors are identical and execute the same functionality. The other is the heterogeneous multiprocessor, in which the processors are non-identical and may execute different functionality. Scheduling on a homogeneous multiprocessor is somewhat easier than on a heterogeneous one.

Different approaches in multiprocessor systems

There are two different approaches in a multiprocessor system. The first is asymmetric multiprocessing and the other is symmetric multiprocessing. In asymmetric multiprocessing, a single processor acts as the server, which is nothing but the master processor, and all the remaining processors are slave processors. The master takes care of all the scheduling work: which scheduling techniques are used and how each scheduling task will be executed. This one processor does the scheduling for every other processor in the system. Scheduling is somewhat easier in the asymmetric case because only one processor has access to the shared data, meaning the common data structures needed to schedule processes to each processor, so there is no contention over them. Now, what do we mean by symmetric multiprocessing? Here we do not assign any one processor to take care of the scheduling task; every processor connected to the system performs its own scheduling, which is nothing but self-scheduling. Each processor may apply its own set of scheduling criteria to the processes it runs; that is symmetric multiprocessing.
In symmetric multiprocessing we can have either a common ready queue or private ready queues. With a common ready queue, one queue is maintained for all the available processors; all processes are stored there, and processes are handed out from that queue to each processor. Scheduling is still done by each processor separately; the only shared thing is the queue. Alternatively, we can maintain private ready queues, meaning each processor keeps its own queue of processes. Most modern operating systems, for example Linux, Windows, and macOS, use the symmetric multiprocessing concept. With symmetric multiprocessing, scheduling is somewhat more complicated because every processor performs its own style of scheduling, which can lead to problematic situations: for example, at the same moment, more than one processor may try to schedule the same process. So symmetric multiprocessing scheduling is somewhat more complicated than asymmetric multiprocessing. The first concept we should know about in symmetric multiprocessing is processor affinity. To understand it, recall the purpose of cache memory: the cache stores frequently accessed data, detecting which data is accessed again and again. If you keep accessing the same set of data, that data will be kept in the cache memory so it can be accessed within a short period of time.
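The two symmetric-multiprocessing queueing schemes, one common ready queue versus private per-CPU queues, can be sketched as follows. This is an illustrative Python sketch, not operating-system code; the process IDs P1–P6 and the two-CPU setup are hypothetical.

```python
# Illustrative sketch of the two SMP ready-queue schemes
# (hypothetical process IDs P1..P6, two hypothetical CPUs).
from collections import deque

# Scheme 1: one common ready queue shared by every processor.
# Each CPU, when free, simply dequeues the next waiting process.
common_queue = deque(["P1", "P2", "P3", "P4", "P5", "P6"])
cpus = ["CPU0", "CPU1"]
assignment = {cpu: [] for cpu in cpus}
while common_queue:
    for cpu in cpus:
        if common_queue:
            assignment[cpu].append(common_queue.popleft())

# Scheme 2: private per-CPU ready queues; each processor
# schedules only from its own queue (self-scheduling).
private_queues = {
    "CPU0": deque(["P1", "P3", "P5"]),
    "CPU1": deque(["P2", "P4", "P6"]),
}

print(assignment)  # how the shared queue was split between the CPUs
print({cpu: list(q) for cpu, q in private_queues.items()})
```

In a real kernel the shared queue would need a lock, which is exactly why per-CPU queues scale better; the sketch just shows the data-structure difference.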
Suppose there are two processors, A and B, and a process is executing on processor A. Here the cache memory plays a major role: the process's frequently accessed data sits in A's private cache (the caches are not shared between processors). In some cases this process may migrate to another processor, say B, in order to continue executing there. When that happens, the contents of A's cache become invalidated, and B's cache must be repopulated: whatever data the process already had cached must be brought into the new processor's cache before it can execute efficiently. All of this is expensive. Cache memory is itself a costly resource, and on top of that we are invalidating the old cache contents and repopulating the new ones, which takes time. So, to avoid all these issues, some operating systems do not allow this migration: once a process enters one processor, it must stay on that same processor until its execution completes. That is what is called processor affinity.

Processor Affinity

The meaning of processor affinity is this: once a process has been allocated to a processor, it should keep running there and should not be migrated to any other processor before completing its execution. Processor affinity comes in two forms: soft affinity and hard affinity. Soft affinity means the operating system attempts to keep the process on the same processor and avoids migrating it, but this does not guarantee that the process will never be migrated; in some cases migration may still happen. So with soft affinity a process will usually execute on the same processor, but we cannot guarantee it. Coming to hard affinity, here a process can specify the subset of processors on which it is allowed to run, and it will execute only on those processors until its execution completes. Which processors are eligible depends on the memory architecture of the system. That is hard affinity: we know exactly which processors a particular process may execute on.
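On Linux, hard affinity is exposed through the `sched_setaffinity` system call, which Python wraps as `os.sched_setaffinity`. The sketch below pins the current process to a single CPU and then restores its original mask; it is Linux-only (the `hasattr` guard skips it elsewhere) and just illustrates the idea.

```python
# Hard-affinity sketch using os.sched_setaffinity (Linux-only).
import os

if hasattr(os, "sched_getaffinity"):
    # The set of CPUs this process is currently allowed to run on.
    allowed = os.sched_getaffinity(0)
    print("current affinity mask:", allowed)

    # Pin the process to one CPU from its allowed set: the kernel
    # scheduler will no longer migrate it to any other processor.
    one_cpu = {min(allowed)}
    os.sched_setaffinity(0, one_cpu)
    assert os.sched_getaffinity(0) == one_cpu

    # Restore the original mask so the rest of the program is unaffected.
    os.sched_setaffinity(0, allowed)
```

Tools like `taskset` on the command line do the same thing from outside the process.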

Non-uniform memory access

Coming to non-uniform memory access (NUMA) and CPU scheduling: why are we going into this concept? Because NUMA can eliminate the benefit of processor affinity. What is that benefit? By keeping a process on one processor, we avoid the cost of moving cached content from one processor's cache to another's. But some architectures do not provide uniform memory access. On a NUMA system, if a CPU accesses memory attached to its own node, the access is fast; if the content it needs lives in another node's memory, the access goes across to that other node and is slower, so the CPU gets the content only after a longer time. That is non-uniform memory access. So even when a process stays on one processor, its data may have to be fetched from another node, and if the process migrates the cached content must be moved yet again, which defeats the benefit of processor affinity. In short, how much processor affinity actually helps depends on the memory architecture of the particular system and how the operating system places processes on it.
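A toy cost model makes the NUMA effect concrete. The latencies below are hypothetical round numbers, not measurements; the point is only that the same number of memory accesses costs more when the CPU and the memory sit on different nodes.

```python
# Toy NUMA cost model (illustrative numbers, not measured values):
# a local-node access is cheap; a remote-node access crosses the
# interconnect and is several times slower.
LOCAL_NS = 100    # hypothetical latency of a local-node access (ns)
REMOTE_NS = 300   # hypothetical latency of a remote-node access (ns)

def access_cost(n_accesses, cpu_node, memory_node):
    """Total latency for n_accesses given where the CPU and memory live."""
    per_access = LOCAL_NS if cpu_node == memory_node else REMOTE_NS
    return n_accesses * per_access

# A process whose pages live on node 0:
on_home_node = access_cost(1_000, cpu_node=0, memory_node=0)
migrated_away = access_cost(1_000, cpu_node=1, memory_node=0)
print(on_home_node, migrated_away)  # migration multiplied the memory cost
```

This is why NUMA-aware schedulers try to keep a process on a CPU in the same node as its memory, a stronger form of the affinity idea above.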

Load Balancing

Next is load balancing. Load sharing plays a major role in a multiprocessor system, so we need to balance the load evenly across all the processors. Suppose one processor is heavily loaded while another has no process at all and simply sits idle. The load is then imbalanced, we cannot produce effective performance, and efficiency becomes low. To balance the load among the different processors, there are two approaches: push migration and pull migration. Push migration means we allow processes to be migrated from a heavily loaded processor to an idle one. We perform this migration because a heavily loaded processor cannot cope with all its incoming processes: switching between so many processes and executing them all in the right order is highly difficult. So we push some of the processes from the heavily loaded processor to the idle processor; that is push migration.
In the same way, another approach is used, which is pull migration. Again we have two processors, one heavily loaded and one idle with no process to execute, so the load is not evenly balanced. What happens here is that the idle processor pulls processes from the heavily loaded processor by itself, taking at least one process so that it has work to execute; that is pull migration. These two approaches are both used to share the load among the many processors, and they are not mutually exclusive: push migration and pull migration can be executed in parallel on a multiprocessor at the same time. Note, however, that supporting load balancing again works against processor affinity, which says we should not migrate any process from one processor to another, precisely to avoid the difficulties of re-fetching the cached data accessed by the migrated process.
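The two balancing moves described above can be sketched on hypothetical per-CPU queues. In push migration a periodic balancer hands work from the busiest CPU to the least loaded one; in pull migration an idle CPU takes work for itself. The queue contents and CPU names below are made up for illustration.

```python
# Sketch of push vs. pull migration over hypothetical per-CPU queues.

def push_migration(queues):
    """Periodic balancer: move one task from the busiest CPU to the
    least loaded CPU, if the imbalance is worth fixing."""
    busiest = max(queues, key=lambda cpu: len(queues[cpu]))
    lightest = min(queues, key=lambda cpu: len(queues[cpu]))
    if len(queues[busiest]) - len(queues[lightest]) > 1:
        queues[lightest].append(queues[busiest].pop())

def pull_migration(queues, idle_cpu):
    """An idle CPU steals one task from the busiest CPU for itself."""
    busiest = max(queues, key=lambda cpu: len(queues[cpu]))
    if queues[busiest] and not queues[idle_cpu]:
        queues[idle_cpu].append(queues[busiest].pop())

queues = {"CPU0": ["P1", "P2", "P3", "P4"], "CPU1": []}
pull_migration(queues, "CPU1")  # idle CPU1 pulls a task from CPU0
push_migration(queues)          # balancer pushes one more task across
print(queues)
```

Both functions migrate processes, which is exactly the tension with processor affinity discussed above: each moved task pays the cache-repopulation cost on its new CPU.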
So we go with processor affinity to avoid all those cache costs, but in order to balance the load we sometimes give up that benefit and deliberately migrate processes from one processor to another; that trade-off is what load balancing is about. And that is all about multiprocessor scheduling.
