136 thoughts on “Final Exam Paper from Sem 1, 2010/11”

  1. Let's say I fork a process.

    The parent process and child process will have different process IDs but the same group ID.

    After I use execvp in the child process, will the new image of my child have the previous process ID or a new one?

    The group ID will remain the same, I think, but will the process ID change after the underlying core image is replaced?
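    To check this concretely, here is a small sketch (my own, for Linux, not from the paper): the child execs `sh -c 'echo $$'`; the PID the shell prints equals the PID that fork() returned, so exec keeps the process ID while replacing the image.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fds[2];
        pipe(fds);                           /* child's stdout -> parent */
        pid_t pid = fork();
        if (pid == 0) {
            dup2(fds[1], STDOUT_FILENO);
            close(fds[0]);
            close(fds[1]);
            /* execvp replaces the core image; $$ is the PID of the shell */
            char *argv[] = { "sh", "-c", "echo $$", NULL };
            execvp("sh", argv);
            _exit(127);                      /* only reached if exec fails */
        }
        close(fds[1]);
        char buf[32] = { 0 };
        read(fds[0], buf, sizeof buf - 1);
        waitpid(pid, NULL, 0);
        /* the exec'ed program reports the same PID that fork() returned */
        assert(atoi(buf) == (int)pid);
        printf("PID unchanged across exec: %d\n", (int)pid);
        return 0;
    }
    ```

    So the child keeps its process ID (and group ID) across execvp; only the contents of the address space change.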

    • As I like to say in tutorial — please elaborate :) How did you derive the answer B? How big is the address space? How big is one page? How many pages are there?

  2. My answers for the MCQ:

    1. A, since keeping the kernel small means more switches between user mode and kernel mode (i.e. more system calls)

    2. B or D, since the parent will only print the child's process ID, while in the child, if `ls` is successfully executed, the printf after execlp will not be executed.

    3. X, all of them will cause a context switch.

    4. X, all of the options could occur and cause P to take a longer time to run

    5. C, straightforward calculation

    6. D, the pages that need to be used keep getting paged out

    7. D, as explained by Yiliang above

    8. X, all of them are stored on the stack

    9. C, an i-node does not map file names to disk blocks.

    10. (not sure) C, the kernel’s code is pinned, and every process has that region of memory set aside for it?

    Please help me correct any mistakes =)

    • I think the answer to Q4 should be D. What factor in scenario B would make P take longer to read data from disk?

      • I wrote D as my initial answer too. After reading Jon's answer, I was thinking along this line: can two processes read data from the hard disk at the same time? I think it is not possible, hence I agree with his answer.

      • To add to Yan Hao’s comment, could it be possible that the sequence of disk accesses are scheduled such that some of Q’s requests are satisfied before P’s? In that case, P would take a longer time to read data from the disk, yes?

      • Disk is a shared resource — so multiple processes can read from the disk. Disk I/O requests from different processes are queued. The disk scheduling algorithm decides which request to service next.
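        As a toy illustration of the last point, here is a shortest-seek-time-first (SSTF) sketch, one possible disk scheduling policy; the cylinder numbers are made up. Q's nearby requests get serviced before P's far-away one, so P's read is delayed:

        ```c
        #include <assert.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* SSTF: service the pending request closest to the current head
         * position. Request/cylinder numbers below are illustrative. */
        int sstf_next(int head, const int *pending, int n) {
            int best = 0;
            for (int i = 1; i < n; i++)
                if (abs(pending[i] - head) < abs(pending[best] - head))
                    best = i;
            return pending[best];
        }

        int main(void) {
            int head = 50;
            int pending[] = { 90, 52, 58 };  /* P's request: 90; Q's: 52, 58 */
            /* Q's nearby requests win, so P's read is delayed by the scheduler */
            assert(sstf_next(head, pending, 3) == 52);
            printf("next request serviced: cylinder %d\n", sstf_next(head, pending, 3));
            return 0;
        }
        ```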


    • Jon Ong:

      My answers for the MCQ:

      1. A, since keeping the kernel small means more switches between user mode and kernel mode (i.e. more system calls)

      But, if we move OS components out of the kernel, software that interacts with those components need not make system calls to interact with them (a normal function call will do).

      • Microkernel architecture also keeps the kernel as small as possible. However, in the lecture, we said that its performance is not good due to many system calls.

        • You are right. But Microkernel architecture is one way of moving OS components out of the kernel. The issue with microkernel architecture is that every communication between the OS components in user space needs to go through the kernel.

      • hmm i go with the answer C because “LEAST” appropriate: for option A, it depends on the scenario how effective it is, in fact it causes security problems (other user programs might be able to execute funny actions with access to OS components? wasn’t that the very reason why we needed kernel/user mode in the first place?)

        for option C: it looks apparent to me that it is not going to improve anything, but rather cause more system calls.

      • hmm. too many X in this paper for comfort…

        my take on this question is: this may be the first time this sys call is invoked, but other sys calls might already have been used, so the libc library should already be loaded into physical memory. I chose C, "LEAST" likely to trigger.

        • When I set question, I do not consciously say, there are too many questions with answer A or too few questions with answer B, and try to spread out the MCQ answers among A, B, C, D, X.

          (That’s how midterm this year ends up with no X)

          Just think carefully about each question and pick the best answer independently. Don’t guess the MCQ answer based on other answers. Strategy like “there are already 5 As and no Bs, so the probability that B is correct is higher!” does not work.

  3. My MCQ:
    1. No idea how to do

    2. D because once the image is replaced, it shouldn’t give the pid of ‘ls’, but yet again, since it is still the child, wouldn’t child pid = ‘ls’ pid?

    3. X cos prof say page fault causes context switch, I put D at first

    4. X because whatever listed down affects B more

    5. C
    Totals: 4, 3, 2+y, 4
    Start:  0, 0, 1, 1
    C releases (1, 1, y, 1) -> 1, 1, y+1, 2
    A releases (2, 1, 1, 1) -> 3, 2, y+2, 3
    For B to run, x must be at most y+2 (x <= y+2).

    6. D
    LRU – Least recently used
    1,2,3,4 = 4 page faults
    5,2,3,4
    5,1,3,4
    5,1,2,4
    5,1,2,3
    4,1,2,3
    4,5,2,3
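    The trace above can be checked with a small LRU simulator (assuming, as the trace suggests, the reference string 1,2,3,4,5,1,2,3,4,5 with 4 frames): every single access faults, which is why D makes sense.

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define FRAMES 4

    /* Tiny LRU simulator: frames[] holds resident pages, stamp[] the time
     * of last use; on a fault we evict the least recently used frame. */
    int lru_faults(const int *refs, int n) {
        int frames[FRAMES], stamp[FRAMES], used = 0, faults = 0;
        for (int t = 0; t < n; t++) {
            int hit = -1;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[t]) hit = j;
            if (hit >= 0) { stamp[hit] = t; continue; }
            faults++;
            if (used < FRAMES) {              /* free frame available */
                frames[used] = refs[t]; stamp[used] = t; used++;
            } else {                          /* evict least recently used */
                int lru = 0;
                for (int j = 1; j < FRAMES; j++)
                    if (stamp[j] < stamp[lru]) lru = j;
                frames[lru] = refs[t]; stamp[lru] = t;
            }
        }
        return faults;
    }

    int main(void) {
        /* The classic LRU worst case: each fault evicts the page needed next */
        int refs[] = { 1, 2, 3, 4, 5, 1, 2, 3, 4, 5 };
        assert(lru_faults(refs, 10) == 10);
        printf("faults = %d\n", lru_faults(refs, 10));
        return 0;
    }
    ```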

    7. D
    Page size 512 = 2^9 bytes, so the page offset takes 9 bits. A 16-bit address leaves 16 - 9 = 7 bits for the page number, i.e. 2^7 = 128 pages.

    8. B. I think I am wrong, but one of the pictures shows the program counter pointing to code and the kernel. But then again, the return address of the instruction is stored on the stack, hmmm…

    9. No idea, C or D?

    10. D, since the library is highly likely being used, it is least likely to trigger page fault?



  4. Jon Ong:

    My answers for the MCQ:
    1. A, since keeping the kernel small means more switches between user mode and kernel mode (i.e. more system calls)
    My answer is C. Since C actually makes the overhead worse.
    2. B or D, since the parent will only print the child’s process ID, while in the child, if `ls’ is successfully executed, the printf after execlp will not be executed.
    3. X, all of them will cause a context switch.
    My answer is C, brk will not always cause a context switch
    4. X, all of the options could occur and cause P to take a longer time to run
    My answer is D, The I/O of P should not be affected.
    5. C, straightforward calculation
    6. D, the pages that need to be used keep getting paged out
    7. D, as explained by Yiliang above
    8. X, all of them are stored on the stack
    9. C, an i-node does not map file names to disk blocks.
    10. (not sure) C, the kernel’s code is pinned, and every process has that region of memory set aside for it?
    Please help me correct any mistakes =)

  5. Can anyone explain to me why C is not the answer for MCQ qn 1?

    C. run shared libraries commonly used by user applications in kernel mode.

    Thanks.

    • Come to think of it, C is correct, I think. After all, if your commonly used functions are in kernel mode, then you’d have a lot of overhead due to making system calls. Confused =S

      • option A keeps the kernel small, thus certain programs will not have to make sys calls since some OS components are now in user mode.
        For C: if shared libraries are run in kernel mode, you do not have to switch back and forth when encountering a syscall in a shared lib; it's a similar idea to D, wrapping multiple syscalls together and doing them once and for all.
        option B is weird to me: caching results returned by syscalls? I hardly think it makes sense to cache a sys call's result since I would not expect it to return the same thing each time :< stuff like the grow-space call in the lecture demo (forgot the command name) sometimes returns 0 and sometimes a valid address, so why should you cache it?

          • yes… do you mean that it is good to cache sys call results such as getpid(), since it will always return the same thing?
            It's like once user mode calls getpid(), the system does not even need to switch to the kernel and straight away gets the answer.
            plz enlighten me :D

        • In fact malloc caches system calls…

          try
          char *my_char = (char*)malloc(1);
          char *my_char2 = (char*)malloc(1);
          and do a strace; you will see that the OS does only one brk() instead of two for the two malloc calls. malloc actually bites off a big chunk of memory from the system and caches it, to reduce the brk() system call overhead.
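          A related sketch (this assumes a typical glibc/Linux system, where small allocations come from a heap grown with brk): after a warm-up allocation, the program break read via sbrk(0) does not move across two further tiny mallocs, because they are carved out of the chunk malloc already obtained.

          ```c
          #include <assert.h>
          #include <stdio.h>
          #include <stdlib.h>
          #include <unistd.h>

          int main(void) {
              /* Warm up the allocator so its first big brk() is done. */
              free(malloc(1));
              void *before = sbrk(0);         /* current program break */
              char *p1 = malloc(1);
              char *p2 = malloc(1);
              void *after = sbrk(0);
              /* Both tiny allocations came out of the chunk malloc already
               * grabbed, so the program break did not move again. */
              assert(before == after);
              printf("program break unchanged across two mallocs\n");
              free(p1);
              free(p2);
              return 0;
          }
          ```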

  6. MCQ question 3 about context switches… people mentioned that all of them (exit, read, brk, page fault) will cause a context switch. May I ask: the definition of a context switch is that the system stores the current state of execution of a process so that it can be continued later. In the case of program exit, will the system still save its current state of execution?

    • A context switch happens when the CPU switches from running one process to another. So it happens when a program exits. You are right that we no longer need to store most of the execution context of the exiting process (except the exit status) back into the process control block.

      • Hello I got this off wikipedia : “A context switch is the computing process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time. ”

        I’m kinda confused. Is there a need for a process switch to occur for the event to be considered as a context switch?

        thanks

  7. can we discuss qn 15?

    here are my answers
    a) Reduce time quantum of each job half => improve response time of interactive jobs because RR will go one round faster and each job wait for a shorter time before its turn.

    b) Move the job to highest priority queue => no improvement, could even cause response time to be even longer because the jobs that block for I/O may or may not be interactive since CPU-bound jobs will block for I/O as well. If CPU-bound jobs join the highest priority queue, ‘genuine’ interactive jobs will have to wait longer before it is their turn to run.

    c) Use time quantum of highest priority job for all jobs => same effect as (a) since everyone will now wait for shorter time before its turn.

    d) Use SRTF instead of RR => it depends. If most jobs' remaining time < the initial RR time quantum, then it will improve response time for the interactive jobs in the queue, but this is not guaranteed: interactive jobs may have to wait longer than before for their turn to run.

    thanks in advance for comments!

    • b) Assume that response time refers to the time taken for a process to run on the CPU after blocking for I/O. Since a process blocking for I/O is given higher priority to run on the CPU, it should contribute to an increase in the average response time.

      d) I don't know the actual meaning of "shortest remaining time". Does it mean the time it takes to run on the CPU? I/O? Or both? I wrote No, because a process may be starved if new short processes are enqueued to the list. (not confident with my answer)

      • (b) not clear what Yan Hao is saying here. Priscilla: you are right if there are many CPU-bound jobs, so much so that the total number of times the CPU-bound jobs blocks for I/O exceeds those of interactive jobs.

        (d) shortest remaining time includes CPU/IO time: it's the estimated time for a job to complete.

        • (b) I read the question wrongly. I thought the question was asking about the average response time of every process. If it is only about interactive jobs, then I think the answer is that it does not decrease the response time. I agree with Priscilla's answer.

  8. this is out of topic, but i think the way wordpress displays comment threads is really terrible. i can’t follow who is replying to who. i prefer the ivle’s forum.

    this is just a suggestion….

      • hmm seems like the wordpress system mutually excludes reply and quotes and handles them differently. The quotes function doesn’t thread the replies.

        They should allow a Reply-Quote function. If I wanna quote someone, it usually means that I wanna reply right?

      • I agree. I never have such a lively discussion on a single blog post before so this problem did not arise before :)

        If only we have such lively discussions throughout the semester..

        • I agree that IVLE’s forum is better, there was once I spent close to an hour typing/formulating a reply, and in my eagerness to post, I clicked the submit button without typing the anti-spam word, and my post just disappeared.

          of course, i learnt my lesson to copy the text i typed in the future. But that incident really put me off. I never really found the time nor energy to reformulate that reply again.

          ADDED: lol, lucky I copied this post before I clicked submit. Apparently I had typed the wrong anti-spam word, and the post disappeared again. If not, this post would never see "daylight".



        • Ooi Wei Tsang:

          I agree. I never have such a lively discussion on a single blog post before so this problem did not arise before :)
          If only we have such lively discussions throughout the semester..

          maybe you can give people participation points for posting in the blog to make it livelier? =P

          in other modules we get points (1-2 points) when we raise our hands and answer qns in lecture. maybe you can ask questions here and the first to answer with the correct answer gets points.

          anyway, just a feedback, i find that we lack practice in using semaphores and concurrent programming. I’m not sure if this is out of the scope of this module but maybe you can put up practice questions to let us try them? they can be non graded. I say this because I find that I can’t do the past year semaphore question. The level of difficulty is very far apart from what we learn in lectures or tutorials.

          just to end off, thank you for teaching cs2106. i find myself having a greater interest in OS and concurrency. =)


  9. Priscilla:

    hmm. too many X in this paper for comfort…
    my take on this question is: this sys call maybe the first time called, but previously other sys call might already been used.

    hmmm, that really depends on the system call. If it is a really rare system call that no other process calls, and it is not within the commonly used libc pages, then there will be a page fault…

  10. true true..

    so when the qn asks for "least likely", are we given the option of X? To me it sounds like a comparison/elimination question among the available A, B, C, D.


  11. Priscilla:

    can we discuss qn 15?
    here are my answers
    c) Use time quantum of highest priority job for all jobs => same effect as (a) since everyone will now wait for shorter time before its turn.

    hmmm, i beg to differ… I am not sure what kind of effect this will have on the scheduler. The time quantum is now the same for everyone: hi-priority procs and lo-priority procs all get the same time quantum. In a sense, this is like a RR scheduler, don't you think?

    • hmm. what i’m thinking of is like… I/O bound jobs still float to highest priority queue, MLFQ will still pick from highest priority queue first, then subsequent priority queues. so should not impact interactive jobs right..?

      ah. so is the answer then: no change to response time?
      (.___.)

      • I guess (c) does improve the average response time of interactive jobs, since less cpu time is allocated to cpu-bound jobs in a fixed time frame in this situation.

  12. i think there is no change to the response time.

    response time is defined as the time from when the task is submitted to the time it first runs.

    since all tasks start at the highest priority class, changing the time quantum of the rest of the classes doesn't affect the response time at all.

    • Jie Shun, you are right if the number of processes is fixed. If new jobs keep coming, their response time will be affected by these changes, right?


  13. Yiliang:

    I guess (c) does improve the average response time of interactive jobs, since less cpu time is allocated to cpu-bound jobs in a fixed time frame in this situation.

    agreed. consolidating what Yiliang and Jie Shun said, what if while a CPU-bound is running now, and a new I/O bound job comes along. if CPU bound job is hogging a large time quantum, I/O bound job has to wait a longer time before it gets a chance to run (despite being on highest priority queue).

    yet does such an effect impact the -average- response time? *confused*

    • I guess the average response time of the current set of processes is not affected, as mentioned by Jie Shun. But newcomers will affect the average response time. Consider the weighted moving average formulation: AR_i = (1-a)*AR_(i-1) + a*R_i, where 0 < a < 1, AR stands for average response time, and R_i is the response time for process i.

    • Consider what contributes to response time:

      A job is ready to run after blocking for I/O, and is waiting for its turn on the CPU. This waiting time is the response time. So if there are less hogging of CPU, would this lower the average response time?

      Also, you have the MLFQ simulator from Lab 6 — a perfect platform to test this out!

      • Does response time measure only for Interactive process? How about those CPU-bound process waiting for I/O? Do we measure them as well?

        Thanks

      • I tested using the simulator. I slightly modified the job_generator.c and job.c in order to record the response time only for io jobs, and here is the result:
        (For each group, the first number is the number of cpu bound jobs, the second number is the number of io bound jobs, the third is the seed; the first “AVG” line is keeping different quantum for different priority levels, and the second “AVG” line is keeping the same quantum for all priority levels)

        If I am not wrong, (c) does decrease the response time.

        50 50 1234
        AVG IO RESPONSE TIME: 33.87
        AVG IO RESPONSE TIME: 31.84

        50 100 4321
        AVG IO RESPONSE TIME: 42.78
        AVG IO RESPONSE TIME: 41.64

        200 800 5555
        AVG IO RESPONSE TIME: 320.47
        AVG IO RESPONSE TIME: 296.64

        800 200 6666 with 50000 steps
        AVG IO RESPONSE TIME: 256.69
        AVG IO RESPONSE TIME: 254.21


  14. Ooi Wei Tsang:

    Consider what contributes to response time:
    A job is ready to run after blocking for I/O, and is waiting for its turn on the CPU. This waiting time is the response time. So if there are less hogging of CPU, would this lower the average response time?

    I’m kinda confused…
    i thought you mentioned that response time = time from first submitting job to time it starts running?
    Which definition of response time is this question referring to?

    • I also explained there are two different definitions of response time — for interactive jobs, the response time between finishing an I/O operation (user click, type) and getting chosen to run again is more important than the time it takes for a job to start.

  15. Hi, does anyone have any idea about Q11? I can only suggest one change for the FAT file system: write the latest email at the front of the file, i.e., every time, make the new email the new head of the file so that it can be accessed quickly. Any other suggestions?

    • I wrote:

      Changes to i-node
      1) store last 12 addresses instead of storing first 12 addresses for faster access to the latest email.

      2) store the i-node in the first byte of the file to remove the limit on the number of i-nodes. (not sure, can't find any other better idea)

      • two things you need to support: can quickly append, and can read emails in reverse chronological order (most recent first).


  16. Song Qiyue:

    Hi,does anyone have any idea about Q11? I can only suggest one change for the FAT file system that just appending the latest email in front of the file

    I'm not sure if that's a change to the FS. That is more like a change to the way the program stores the file.

    FYI, writing to the start of a file is called prepending; appending is writing to the end of a file. =)

    • OK, so I just mean that the new email will be treated as the new head of the linked list of the file. Indeed it seems not to be a change to the FS. But does it mean that we need to change the logic of the FAT or i-node?

  17. For Q12 (a), I think the peak point will be to the upper right of the original, because there will be fewer page faults at a given degree of multiprogramming.
    (b) The peak point will be just lower than the original. A faster CPU makes the computing time shorter, but swapping is bound by I/O, so the time for a page fault won't change. So utilization will decrease.
    (c) I'm not sure. Will the peak point be lower, since a larger swap space causes more swapping given that the memory size does not change?


  18. Zhou Jun:

    (c) I'm not sure. Will the peak point be lower, since a larger swap space causes more swapping given that the memory size does not change?

    yea, I got the same answer for the first two. But as for the part (c), I think there is no change in the graph. The swap space does not affect the function at all. (refer to tutorial 8 qns 5)

    I’m not sure, that’s just my guess.

    • hmm, what if the swap partition/file is very small? cos swap is used when the system requires more memory than the amount of RAM it has. So more swap => more processes can run at the same time?

      • I think the number of programmes running should depend on the size of memory; swap space is just a place to store temporarily unneeded pages.

    • There is a possibility that when the size of the swap space increases, the time for disk seek is longer. Hence, CPU utilization decreases.

  19. Hi, for Q14, are we given the class size, so that we can count the maximum number of processes from one class that can enter the critical region?

        • Imagine you have a shared toilet, with two doors. One door is used by users of Class 0. The other used by users of Class 1.

          When a user of Class 0 enters the empty toilet, he or she must lock the door used by Class 1 to prevent them from coming in. When the last user of Class 0 leaves the toilet, he or she must unlock the door used by Class 1, to let them in.

          Same behavior for users of Class 1.
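          The analogy maps to the classic "lightswitch" pattern: the first process of a class to enter locks the room against the other class, and the last one to leave unlocks it. Here is a sketch with POSIX semaphores (my own naming, not the model answer; note that it can starve the class waiting outside):

          ```c
          #include <assert.h>
          #include <pthread.h>
          #include <semaphore.h>
          #include <stdio.h>

          /* Lightswitch: first of a class in locks the room, last out unlocks. */
          typedef struct { int count; sem_t mutex; } lightswitch_t;

          sem_t room;                    /* free for one class to claim */
          lightswitch_t cls[2];          /* one switch per class */
          int inside[2];                 /* members of each class in the room */
          pthread_mutex_t check = PTHREAD_MUTEX_INITIALIZER;

          void ls_enter(lightswitch_t *ls) {
              sem_wait(&ls->mutex);
              if (++ls->count == 1) sem_wait(&room);  /* first in locks */
              sem_post(&ls->mutex);
          }

          void ls_exit(lightswitch_t *ls) {
              sem_wait(&ls->mutex);
              if (--ls->count == 0) sem_post(&room);  /* last out unlocks */
              sem_post(&ls->mutex);
          }

          void *worker(void *arg) {
              int c = *(int *)arg;
              for (int i = 0; i < 1000; i++) {
                  ls_enter(&cls[c]);
                  pthread_mutex_lock(&check);
                  inside[c]++;
                  assert(inside[1 - c] == 0);  /* other class never inside */
                  pthread_mutex_unlock(&check);
                  pthread_mutex_lock(&check);
                  inside[c]--;
                  pthread_mutex_unlock(&check);
                  ls_exit(&cls[c]);
              }
              return NULL;
          }

          int main(void) {
              sem_init(&room, 0, 1);
              for (int i = 0; i < 2; i++) {
                  cls[i].count = 0;
                  sem_init(&cls[i].mutex, 0, 1);
              }
              int ids[4] = { 0, 0, 1, 1 };
              pthread_t t[4];
              for (int i = 0; i < 4; i++)
                  pthread_create(&t[i], NULL, worker, &ids[i]);
              for (int i = 0; i < 4; i++)
                  pthread_join(t[i], NULL);
              printf("mutual exclusion between classes held\n");
              return 0;
          }
          ```

          The test threads check that members of the two classes are never inside at the same time, while members of the same class can share the room.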


    • Song Qiyue:

      Hi, for Q14, is the class size available for us to count the maximum number of processes from one class can enter the critical region?

      Here is my attempt to the question:

      int current_class = -1, no_wait_process = 0, no_active_process = 0;
      Semaphore m = 1, w = 0;
      
      enter_class(class) {
          down(m);
          if (current_class == -1) {
              current_class = class;
              no_active_process++;
              up(m);
          } else if (current_class == class) {
              no_active_process++;
              up(m);
          } else {
              no_wait_process++;
              up(m);
              down(w);
              down(m);
              no_active_process++;
              up(m);
          }
      }
      
      leave_class(class) {
          down(m);
          no_active_process--;
          if (no_active_process == 0) {
              int i;
              for (i = 1; i <= no_wait_process; i++)
                  up(w);
              no_wait_process = 0;
              current_class = 1 - class;
          }
          up(m);
      }
      

      Please comment on the correctness and suggest improvements on the code :)

      (Wei Tsang edited this for clarity by adding “pre” around the code)

      • Before the line
        for (i = 1; i <= no_wait_process; i++)
        you might need to add a reset back to -1:
        if (no_wait_process == 0)
            current_class = -1;

        Otherwise, if class 0 just used the toilet and left, it sets the toilet to class 1, but if another class 0 process enters, it will be kept waiting.


        • Chua Wei Kuan:

          Before
          for(i=1;i<=no_wait_process;i++)
          might need to add to reset back to -1:
          if (no_wait_process == 0)
          current_class = -1;
          else if class0 just used the toilet and left, it sets toilet to class1 but if another class0 enters, he will be waiting

          Good point. I guess my solution is limited to allowing only strict alternation. The enter_class also needs to be changed if current_class is reset to -1 in leave_class.

  20. Hmm, let's say there's 1 GB of RAM, and 1024 processes each using 1 MB of memory. When a new 1 MB process joins in, there is no way to fit it in. Instead, if you have a swap area, you can swap some processes out to fit the new program in, and the degree of multiprogramming increases?


  21. Chua Wei Kuan:

    you can swap some processes out to fit the new program in, and the degree of multiprogramming increases?

    er… the fact that you are swapping means that the system is thrashing. Processes are fighting with each other for memory. No matter how big or small the swap space is, the system will thrash at the point when the physical memory cannot hold the combined working set of all processes. The size of the swap space shouldn't matter at all.

    correct me if i’m wrong. thanks. =)

    • to add on: I recognize that you understand the swap space as an extension of the physical memory, but the system thrashes when it is busy doing I/O due to swapping. So, no matter how much you dedicate to the swap space, the fact that there is swapping indicates that it is already thrashing.


  22. Yeo Jie Shun:


    Chua Wei Kuan:

    you can swap some processes out to fit the new program in, and the degree of multiprogramming increases?

    er… the fact that you are swapping means that the system is thrashing. Processes are fighting with each other for memory. No matter how big or small the swap space is, the system will thrash at the point when the physical memory cannot hold the combined working set of all processes. The size of the swap space shouldn't matter at all.
    correct me if i'm wrong. thanks. =)

    Hmm, i believe your understanding of thrashing is not entirely correct. Only when the CPU spends more time swapping in and out than doing actual work (because it is swapping out useful pages) is it called thrashing. Normal swapping to the swap area occurs when there is not enough physical memory to hold all the pages.


    • Tan Swee Khoon:

      Hmm i believe your understanding of thrashing is not entirely correct.

      er…ya we are kinda on the same frequency, let me explain myself…you said


      Tan Swee Khoon:

      Only when the CPU spends more time swapping in and out than doing actual work (because it is swapping out useful pages), Thats when it is called thrashing.

      Yup, that's the effect. So what's the cause of the swapping? The cause of swapping in and out instead of doing actual work is that the physical memory cannot hold the entire working set of all processes.

      actually i got this from the book:

      Andrew Tanenbaum:

      If the available memory is too small to hold the entire working set, the process will cause many page faults and run slowly, since executing an instruction takes a few nanoseconds and reading in a page from the disk typically takes 10 ms. … A program causing page faults every few instructions is said to be thrashing.

  23. Ahh yes, both Jie Shun's and Swee Khoon's explanations are right. Thrashing occurs when the physical memory cannot hold the working set of all the processes. I misinterpreted the question as saying that swapping increases the degree of multiprogramming. This is true when there is no swap area or when the swap area is very small. But in the question, thrashing has already occurred, which means the swap area must be pretty big already, so my argument about the possibility of a small swap area is invalid. Thanks all for the clarifications!

  24. for question 12(c), increase the size of swap space

    can we argue that an increase in size means more time needs to be spent seeking a place to swap the pages to, hence CPU utilization decreases instead?

    • In practice, disk size has little effect on CPU utilization if OS is doing scheduling properly (and disk I/O is done using DMA, so CPU can be kept busy)

      But yes, I would accept this in exam because it makes sense given what you know from CS2106.

  25. for question 8.

    i think the memory stack doesn't store the program counter of the instruction to execute after returning from f, so the answer is B.
    any other ideas?
    any other ideas?

    • i think the question says "what does the memory stack contain while MAKING a function call to function f".
      It should be local variables of f, because the local variables are only pushed onto the stack when the function begins execution.
      While making the function call, the local variables do not exist yet. What goes on the stack is the return address (the PC of the instruction to be executed after return), often together with the saved frame pointer. Also, the arguments to f are passed through the stack itself.

      My answer is C.

  26. For Qn 2, I think C and D both are acceptable

    For Qn 4, I chose B because it should affect P only if P needs to wait for Q to EXIT a critical region; you don't wait for a process to ENTER a critical region, right?

    For Qn 11, I only have an idea for the first change to FAT, and I'm not sure if I'm right. I said to let the FAT point backwards instead of forwards, and for the directory entry, have it point to the last block instead of the first block. Any opinions on this?

    • For Qn 2 – the pid of the child process is printed by line 2. Also, the process ls IS the child process. So C and D are both incorrect. The pid of the parent process is never printed; to get that printed there would have to be a getpid() somewhere. So my answer is B.

      For Qn 4 – I think the option says that P has to wait because Q is already in the critical region, so that is a valid scenario in which P might have to wait. My answer for this is D. As the disk is a shared resource, multiple processes can read from the disk.

      • oh yar, you are right, exec doesn't change the process ID of the child.. so the process ID of ls is actually the same as the process ID of the child..

      • Qn 4. My answer is X because multiple reads from disk cannot all be serviced at once (remember the lecture about the different disk scheduling algorithms), therefore each request takes a different amount of time to be returned -> P may take a longer time for its read request to be fulfilled due to a prior read request from Q.
        That's what i think, correct me if i'm wrong!!

    • Hi,

      I think for Q2 you mean both B and D are acceptable instead. The process ID of the child WILL be printed by the parent.

      q4 i think its D. option B is probably trying to say that P wants to enter critical region but need to wait for Q to exit first, and this would be possible

      q8 i think option B is just trying to say “return address”, which is stored on the stack. so i agree with the others that the ans is X

  27. Thanks Pooja and Wei chee! =D

    For Qn 2, I was dreaming and kept thinking the parent prints its own PID. I agree with Pooja that only B is correct.

    For Qn 4, I'm not sure if it is my English problem or what. But for D, I agree that the disk is a shared resource, but Prof Ooi mentioned that

    “Disk I/O requests from different processes are queued. The disk scheduling algorithm decides which request to service next.”.

    Therefore, it involves scheduling, which MAY make P wait? Do correct me if I'm wrong.

  28. To summarize on the two most confusing MCQ:

    Q4: A B and C will make P take longer time to run as discussed. D will make P take longer time if process Q is copying a 10TB file from disk1 to disk2, right? so the answer is X

    Q8: I think the question refers to the point of time when function f is just called. Thus local variables of f are not yet put on the stack. So the answer is C

    Hope Prof. Ooi can comment on these..

