The number of comments discussing the past-year questions hit a record of 100 today. I thought it would be useful to summarize the confusing points from that discussion in a post here.
Q1
- There is some confusion over Choice A. A microkernel fits the description in A, but we may still implement things in user space without resorting to a microkernel architecture. One example is scheduling threads in user space instead of in the kernel (which we did not cover this semester).
Q2
- Running exec() will not change the PID of the current process. Only the core image will be replaced.
- Since exec() replaces the core image of the process, any line of code after the exec() call will not be executed (and hence not printed) unless exec() fails.
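To make this concrete, here is a minimal sketch (my own example, not from the exam or lecture) that you can compile and run:

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* The PID printed here is unchanged after a successful exec();
     * only the core image (code, data, stack) is replaced. */
    printf("before exec: pid = %d\n", (int) getpid());

    execlp("ls", "ls", "-l", (char *) NULL);   /* replaces the core image */

    /* We reach this line only if execlp() failed. */
    perror("execlp failed");
    return 1;
}

If exec() succeeds, the perror line never runs; the new program (ls here) runs under the same PID.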
Q3
- A context switch can happen when you make a system call (Lecture 2) and when a page fault occurs (Lecture 9).
Q4
- The confusion is over Choice D. Disk I/O takes longer if multiple processes are contending for the same disk: the disk scheduler has more requests to serve, so a process’s disk I/O request has to wait longer.
Q8
- The confusion is basically over the meaning of “when making a function call.” The most direct interpretation (at least to me, anyway) is when we are already inside the function f. If you interpret it any other way (e.g., somewhere between the caller and the callee), then it’s ambiguous.
Q10
- Kernel code and structures are normally pinned in physical memory, since the kernel is accessed very often and to avoid irresolvable situations (imagine if the page fault handler is paged out: who will handle the page fault then?)
Q11
- Two things you need to consider are: (i) how to access the end of a huge file quickly, and (ii) how to support reading a huge file backward (from end to beginning).
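To make the access pattern concrete, here is a small user-level sketch (my own example, not part of the question) that jumps to the end of a file and reads it backward block by block; the question is really about which allocation scheme lets the file system serve these operations cheaply for a huge file.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK 4096

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t end = lseek(fd, 0, SEEK_END);        /* (i) jump to the end quickly */
    char buf[BLOCK];

    for (off_t pos = end; pos > 0; ) {
        off_t chunk = pos < BLOCK ? pos : BLOCK;
        pos -= chunk;
        pread(fd, buf, (size_t) chunk, pos);   /* (ii) read backward, block by block */
        /* ... process buf here ... */
    }
    close(fd);
    return 0;
}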
Q12
- There is some confusion over Part (c). As discussed in the tutorial, the size of the swap space does not affect the number of page faults.
Q15
- In RR, reducing the time quantum improves the response time. This question extends the principle to MLFQ: if we reduce the time quantum of the low-priority jobs, the high-priority jobs get their turn faster as well.
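To see the effect on a toy example (my own numbers, not from the lecture): suppose n CPU-bound jobs are already in the ready queue, and response time is the time until a job first gets the CPU. Job i must wait for i earlier slices of length q, so a smaller quantum lets everyone start sooner.

#include <stdio.h>

int main(void) {
    int n = 5;                    /* jobs waiting in the ready queue */
    int quanta[] = {20, 5};       /* compare a large vs a small quantum (ms) */
    for (int k = 0; k < 2; k++) {
        double total = 0;
        for (int i = 0; i < n; i++)
            total += i * quanta[k];          /* job i first runs at i * q */
        printf("quantum %2d ms -> average response time %.1f ms\n",
               quanta[k], total / n);
    }
    return 0;
}

With these numbers it prints 40.0 ms for a 20 ms quantum and 10.0 ms for a 5 ms quantum.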
Hi Prof Ooi
For Qn 15, is there a chance that the time quantum gets reduced so much that interactive jobs will not have a chance to block for I/O within the short time quantum? This would mean that interactive jobs get lower priority, as the algorithm recognizes them as CPU-intensive jobs.
We only reduce it to the same time quantum as the highest priority level.
So if the original time quantum for the highest priority level is already too small, then you are right: we may misclassify them as CPU-bound jobs.
Prof, I would like to clarify something. For Qn 4, do processes wait for another process to ENTER the critical region or to EXIT the critical region?
Prof, need your guidance:
13.
(a) disabling and enabling interrupts – nothing will happen.
(b) Peterson’s algo – during the process’s second attempt to enter, if another process wants to enter too, that other process may get to enter, and thus mutual exclusion is violated. If not, the original process simply re-enters the CR, which is not a big deal.
(c) test & set – deadlock will occur. The second time the process tries to enter, it fails to acquire the lock and goes back to spinning, waiting to set the lock again, which means it has locked itself out of the CR. And no other process can unlock it. (See the sketch below.)
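A minimal sketch of a test-and-set style enter()/leave(), written with C11 atomics rather than the lecture’s exact code, to show why (c) deadlocks:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void enter(void) {
    /* atomically set the flag and get its old value;
     * keep spinning while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy-wait */
}

void leave(void) {
    atomic_flag_clear(&lock);
}

/* A process that calls enter() twice without leave() in between finds the
 * flag already set (by itself) on the second call and spins forever; no
 * other process will ever clear it, so it deadlocks with itself. */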
15.
(c) assigning the time quantum of the highest priority level as the time quantum for all jobs – the RT of interactive jobs will improve. This is because the time quantum for all jobs is shorter, so more jobs get preempted sooner and are classified as CPU-bound; hence only the “highly” I/O-bound jobs get to stay at the top priority level, and RT improves.
(d) depends on the nature of the CPU-bound and I/O-bound jobs. But I would guess it improves? Not very sure.
13(b) I don’t think mutual exclusion will be violated for a correct implementation of Peterson’s algorithm.
Below is my explanation. Assume that one process managed to enter the critical region the first time and then needs to call enter() a second time.
a) If no other process is waiting to access the critical region, then since no other process’s interest flag is set (it is false), the calling process will enter the critical region.
b) If another process is waiting to access the critical region, one of the processes will be selected to access it based on the turn variable that was set. The calling process will have to wait for its turn to access the critical region. Although enter() may be called twice in each process, it will still not result in deadlock.
E.g.
Processes A and B wish to enter the critical region.
A calls enter() the first time, attempts to enter the critical region.
B calls enter() the first time, B waits for the critical region.
A calls enter() the second time, sets turn to B.
B attempts to enter the critical region.
B calls enter() the second time, sets turn to A.
A enters the critical region.
B waits for its turn.
A leaves.
B enters the critical region.
Hmm.. but for Peterson’s algorithm, won’t mutual exclusion be violated? The Peterson’s algorithm I have in mind is similar to the one in the lecture notes:
“while (turn == other process && interested[other process] == 1)”
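For reference, a minimal sketch of that two-process version (my own transcription of the quoted condition; the variable and function names are assumptions, not necessarily the exact lecture code):

int turn;              /* which process has to wait */
int interested[2];     /* interested[i] == 1 means process i wants to enter */

void enter(int self) {
    int other = 1 - self;
    interested[self] = 1;
    turn = other;      /* give way: let the other process go first */
    while (turn == other && interested[other] == 1)
        ;              /* busy-wait */
}

void leave(int self) {
    interested[self] = 0;
}

/* Calling enter() a second time from inside the critical region sets turn
 * back to the other process; if that process is waiting, it can then get in
 * while we are still inside, which is the scenario traced below. */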
For process A to enter() the first time, it needs process B to also try enter(), because of the following sequence:
Process A sets its interest to 1
Process A sets turn to B
while loop
Process B sets its interest to 1
Process B sets turn to A
Process A enters critical region
When Process A tries entering again, the second time:
Process A sets its interest to 1, which is already 1
Process A sets turn to B
while loop
Process B will enter the critical region, won’t it?
Correct me if I’m wrong.
hmmm…..you are right
but that is based on the assumption that all the other processes are also using the same buggy code…
but Hendy is also right… Hendy’s assumption is that one process is buggy but the other is not…
hmmm… I wonder whether the prof will accept both answers?
Even if the other process is not using the buggy code, mutual exclusion will still be ensured: only one process can enter the critical region, regardless of which process enters first. Raymond’s example has shown us the sequence when process B is not using the buggy code. Clearly, when process B entered the critical region, the other process was waiting for access to it.
I agree with 13(a), (c) and 15(c). Need help with Question 15(d). Any expert willing to share your answer?
Hi,
I thought 13(a) would cause the system to hang.
– the process enters the critical region and disables interrupts
– the process tries to enter the critical region again but cannot, because it is locked
– no other process can preempt it, so this process runs forever
what do you mean by this line?
it has locked the critical region the first time, so the second time it cannot access the critical region
A process that disables interrupts basically prevents other processes from taking the CPU away from it. Calling it twice shouldn’t block the process.
It should not be a lock, right? It is just disabling interrupts from other processes. Then when it calls enter() again, it disables interrupts again, but that doesn’t matter to the process itself. It will then continue to run its code. Correct me if I’m wrong.
hmm… entering the critical region means the process has locked it and nobody else can enter it anymore, right.. including itself..
But disabling interrupts doesn’t use a lock. It only stops other processes from taking over the CPU and running the same critical region.
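A minimal sketch of what enter()/leave() look like in that scheme (my own illustration; the interrupt functions are stand-ins for the real privileged instructions, e.g. cli/sti on x86):

/* Stubs standing in for the hardware operations; shown only for structure. */
static void disable_interrupts(void) { /* privileged instruction in a real kernel */ }
static void enable_interrupts(void)  { /* privileged instruction in a real kernel */ }

void enter(void) {
    disable_interrupts();   /* no timer interrupt, so nobody can preempt us */
}

void leave(void) {
    enable_interrupts();
}

/* There is no lock variable at all: calling enter() twice just disables
 * interrupts that are already disabled, so the process keeps running. */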
ok I see.. that means enter() has nothing to do with locking lol..
ahh ok thanks!
Np! Jia you for tomorrow!!! Have an early night =D
yes I think you are right on this. I made the mistake of thinking that only one process would call the buggy Peterson’s algorithm. I forgot that the buggy code is shared by all processes trying to enter.
I’m no expert, and I wanted to ask too: what if process A edits a variable’s value in the critical region before calling enter() again?
E.g.
A does count++ then calls enter() again.
B enters and does count++, then some operation involving count.
B leaves.
A enters and continues with a wrong count value.
There is a race condition, but does this mean it did not ensure mutual exclusion?
I suppose the question asks the following:
Pseudo code for process A/B:
enter();
enter();
count++;
leave();
So A does not do count++ until the buggy code has called enter() twice.
But it only states that the process calls enter() again inside the critical region, without stating any other condition.
If the following is the pseudo code
enter();
// Section 1
count++;
enter();
// Section 2
leave();
A calls enter()
A runs Section 1 code
B calls enter(), waits for its turn
A calls enter() a second time, passes turn to B
B runs Section 1 code
B calls enter() a second time, passes turn to A
A runs Section 2 code, A calls leave()
B runs Section 2 code, B calls leave()
When one process runs either section of the code, the other process does not run the same section at the same time. So it ensures mutual exclusion.
So can I safely say it still ensures mutual exclusion, just that a race condition occurred for process A?
I don’t know =) Sorry =)
Sorry… it does not ensure mutual exclusion in this case, since two processes can be in the critical region.
Yes, if B enters the critical region while A is inside, there is no mutual exclusion.
hi,
for 15(d)
I think the average RT will get worse..
New interactive processes that come in have to wait for the earlier interactive processes to run whenever the earlier ones are ready, hence contributing to a higher RT.
I’m assuming RT only considers the period from job submission until the job starts to run.
Refer to page 156 of the textbook. It states that the response time will be reduced if we separate an interactive process into its waiting and executing commands, treating each command as a separate “job”; then we could minimize the overall response time by running the shortest one first.
Yeah, if we regard each command of an interactive job as a separate job, then the response time will go down.
I was assuming we can’t do that lol..
I suppose if we write down our assumption, the prof will accept our answer?
Yes. You can write down any additional assumptions you make, and I will accept them as long as they are reasonable.
Q10
Kernel code and structures are normally pinned in physical memory, since the kernel is accessed very often and to avoid irresolvable situations (imagine if the page fault handler is paged out: who will handle the page fault then?)
Hilarious when I think about it..