CS2106 in the news: John McCarthy (1927 – 2011)

We lost another pioneer in computing in October.  John McCarthy is well known for his work in AI and LISP, but he also has a close connection to the development of operating systems: he was one of the earliest developers of time-sharing mechanisms.

http://www.wired.com/wiredenterprise/2011/10/john-mccarthy-father-of-ai-and-lisp-dies-at-84/

Lester Earnest first encountered McCarthy at MIT while working on the government’s Semi-Automatic Ground Environment (SAGE) defense system — an early computer network that allowed multiple users to access the system at the same time — and according to Earnest, SAGE inspired McCarthy’s work on time-sharing. “He was first to come up with an idea of how to do time-sharing in a general purpose way, as opposed to special purpose,” Earnest says.

You can read this article by John McCarthy, which describes his early involvement in the development of time-sharing systems:
http://www-formal.stanford.edu/jmc/history/timesharing/timesharing.html

Lab 5: More comments

  1. When grading your Lab 5 submissions, we noticed that quite a few of you made the following errors: (a) you did not call pthread_mutex_init or pthread_cond_init to initialize the mutex or condition variables; (b) you locked a mutex in one thread and unlocked it in another; (c) you passed to pthread_cond_wait in one thread a mutex that was locked by another thread.

    The pthread implementation on Linux is surprisingly forgiving about these errors, so many of you were not aware that you had made them. We graded the lab using a stricter pthread implementation and were able to catch many of these errors, leading to low marks for many of you (for instance, if you do not initialize a mutex, locking it has no effect). We have taken a second, more careful look at these submissions and awarded marks appropriately if your logic was right but your code did not work because of the errors above. However, we also discovered further errors in a few submissions, which led to a lower score than before for a few students.

  2. Quite a number of you also did not rename your directory appropriately. We had to take off 3 points for these students. This is quite painful, as you have worked hard on the lab, and losing 3 points for not following instructions is a real pity. So, please follow the instructions exactly and carefully for Labs 7 and 8.

Lab 5: Solution with one mutex?

In the solution we presented in class, we use two mutex variables. Many of the submitted solutions use only one mutex. A question raised during a discussion with a student is whether every solution with only one mutex is wrong.

I will post some of the solutions you submitted and let you take a look and discuss. Debugging threads is not easy and is a skill that comes with experience, so hopefully this little exercise will serve as practice for you.


Pre-class Activity 9

Try running some memory-intensive programs (e.g., Adobe software, Firefox) and observe some information related to page faults on your computer:

1. For Windows users, open Task Manager, go to “View”, “Select Columns”, and show the “Page Fault” and “Page Fault Delta” columns. Observe the number of page faults for each process and how many new page faults are generated between updates (the delta).

2. For Linux users, open up a terminal and run “vmstat k”, where k is an integer that indicates the update interval (man vmstat for details). Observe the “si” and “so” columns, which report the amount of memory swapped in and swapped out. Alternatively, run “ps -ea v” and observe the “MAJFL” column, which reports the number of major page faults.

3. For Mac users, open up a terminal and run “vm_stat k”, where k is an integer that indicates the update interval (man vm_stat for details). Observe the numbers under the “faults”, “pageins”, and “pageouts” columns.

For comparison with your friends, also post the amount of RAM on the system you are making the observations on.

Lecture 9

21 October, 2011

This lecture is the second of a three-part series looking into memory management issues in OS.

Having a hardcopy of the slides during lecture would be helpful. Be prepared to take notes during the lecture.

Download Slides: PDF

Reading:

  • Section 3.4
  • Section 3.5.1, 3.5.2
  • Section 3.6.1, 3.6.2, 3.6.4, 3.6.5

Related Wikipedia entries are listed below. These are for students who are keen to go beyond what is covered in CS2106. Read the articles with a critical mind, since Wikipedia is editable by anyone.

Lab 5: Marks and Comments

Lab 5 has been graded by Mr. Xu Yecheng and your mark is now available through the IVLE Gradebook.  Many students did not do as well as expected and did not receive full marks.  In particular, many solutions either led to deadlock or led to overflow/underflow of the pot.  If you did not get a 10 and were not aware of bugs in your code, you should look at your code carefully to find out what is wrong (it’s quite illuminating).  If you are convinced that your code works but you still did not receive full marks, please feel free to email Mr. Xu or me.


Midterm: Solution and Stats

We are in the midst of returning the midterm scripts to all.  Please check and make sure everything is marked properly and that you understand your mistakes.  Please also check that your midterm mark in the IVLE Gradebook is consistent with what you got on your hardcopy.

Here is the solution to the midterm.

The figure below shows the histogram of your marks.  The average is 49.6, the standard deviation is 15.9, and the median is 51.  I am quite happy that the overall results are better than last year’s midterm (even though many of you commented that this year’s midterm was harder).  For those who did well, keep it up!  For those who did not do as well as expected, I hope you learn from this test and do better in the final.

Histogram for Midterm

CS2106 in the news: Is Memory the Secret to a Super-Fast Web?

Since we are covering memory, the following news article is timely:

http://mashable.com/2011/10/17/ramcloud/

John Ousterhout, research professor of computer science at Stanford and head of a new project called RAMCloud, proposes exactly that: Create a data center storage solution “where information is kept entirely in DRAM and large-scale systems are created by aggregating the main memories of thousands of commodity servers.”

Here is a more technical Wired article and the original white paper.