Sensor-Rich Mobile Video Indexing and Search on 21 September

This week Zhijie will give a talk on how the information gathered by mobile device sensors can be used to improve video indexing and search.

Title: Sensor-Rich Mobile Video Indexing and Search

Speaker: Zhijie Shen

Description: Smartphones equipped with video recorders are becoming ubiquitous, and the volume of video material captured with them is growing rapidly. Therefore, tools for searching video databases are indispensable. Current techniques that extract features purely from the visual signal of a video struggle to achieve good results. However, with the help of the various sensors in smartphones, the location and orientation of the camera can be continuously acquired in conjunction with the captured video stream. By considering this meta-information, more relevant and precisely delimited search results can be obtained. We also propose a novel approach for querying videos based on the notion that the geographical location of the captured scene, in addition to the location of the camera, can provide valuable information and may be used as a search criterion in many applications. We then provide an estimation model of the viewable area of a scene for indexing and searching, and report on a prototype implementation. As video tag annotations have become a useful and powerful feature facilitating video search in many social media and web applications, we further develop an automatic video tagging technique that utilizes the viewable scene model together with geo-information databases. Additionally, we define six criteria to score tag relevance and rank the obtained tags based on these scores. Furthermore, we associate the tags not only with the video but also with accurately delimited segments of the video. Based on these techniques, we have built a prototype of a georeferenced video search engine (GRVS) that utilizes an estimation model of a camera’s viewable scene for efficient video search. For video acquisition, our system provides iOS and Android applications that capture videos together with their respective fields of view (FOV). The acquisition software allows community-driven data contributions to the search engine.
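As a rough illustration of the viewable-scene idea (a sketch, not the model from the talk), the FOV can be treated as a pie slice defined by the camera’s GPS position, compass heading, viewing angle, and visible distance; a scene point matches a query if it falls inside that slice. All coordinates and parameter values below are made up for illustration:

```python
import math

def in_viewable_scene(cam_lat, cam_lon, heading_deg, angle_deg, max_dist_m,
                      pt_lat, pt_lon):
    """Return True if a point falls inside the camera's pie-slice FOV.

    Uses a flat-earth approximation, which is fine for short viewing
    distances (a few hundred metres).
    """
    # Convert lat/lon deltas to metres (equirectangular approximation).
    dlat = (pt_lat - cam_lat) * 111_320
    dlon = (pt_lon - cam_lon) * 111_320 * math.cos(math.radians(cam_lat))
    dist = math.hypot(dlat, dlon)
    if dist > max_dist_m:
        return False
    # Bearing from camera to point, measured clockwise from north.
    bearing = math.degrees(math.atan2(dlon, dlat)) % 360
    # Smallest angular difference between that bearing and the heading.
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= angle_deg / 2

# A point ~100 m due north of the camera, camera facing north, 60° FOV:
print(in_viewable_scene(1.2966, 103.7764, 0, 60, 250, 1.2975, 103.7764))
```

Indexing each video frame by such an FOV slice is what lets a search engine answer "which videos show this location" rather than "which videos were shot from this location".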

See you all on Wed, 4pm, COM1 SR9.

Visualizing Software Behavior on 14 September

This Wednesday Wu Yongzheng will present his work on visualizing software behavior.

Title: Visualizing Software Behavior

Speaker: Wu Yongzheng

Description: Software systems are becoming more complex in terms of the size of the codebase, the number of components from different vendors, and the complex interactions among components. This complexity makes software comprehension difficult, which ultimately causes software bugs, vulnerabilities, and performance problems. Software traces contain information on these interactions; however, the traces are usually very large and thus hard to examine. In this talk, we will show how visualization of software traces can be used to study (i) the dependencies of software modules, and (ii) patterns and anomalies in software behavior.
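As a minimal sketch of the module-dependency side, one can aggregate raw trace events into weighted module-to-module edges before rendering them; the `caller -> callee` trace format here is an assumption for illustration, and real traces would need their own parser:

```python
from collections import Counter

# Toy trace: each event records one cross-module call.
trace = [
    "app -> libparser",
    "libparser -> libc",
    "app -> libnet",
    "libnet -> libc",
    "app -> libparser",
]

# Collapse thousands of events into a small weighted graph.
edges = Counter()
for line in trace:
    caller, callee = (s.strip() for s in line.split("->"))
    edges[(caller, callee)] += 1

# The weighted edge list can then be fed to a renderer such as Graphviz
# for visual inspection of the dependency structure.
for (caller, callee), n in sorted(edges.items()):
    print(f"{caller} -> {callee} [weight={n}]")
```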

See you all in SR9, 4pm.

Workload Assignment in Video Surveillance on 7 September

Hello all,

Today we resume CSTalks with a talk on research problems in video surveillance. The time and venue are as usual: 4pm, SR9 (COM1-02-09).

Title: Dynamic Workload Assignment in Video Surveillance Systems

Speaker: Mukesh Saini

Description: Current surveillance systems consist of large numbers of cameras. The video feeds from the cameras are automatically processed for threat detection, which is a computationally intensive task. In order to meet the real-time requirements of surveillance, we need to distribute the video processing over multiple computers. Generally, the cameras are statically assigned to the processors; we show that this is not a desirable solution, as the workload for a particular camera may vary over time depending on the number of targets in its view. In the future, this uneven distribution of workload will become more critical as sensing infrastructures are deployed on the cloud. In this work, we model the camera workload as a function of the number of targets, and use that model to dynamically assign video feeds to the processors. Experimental results show that the proposed model successfully captures the variability of the workload, and that dynamic workload assignment provides better results than static assignment.
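The talk's workload model is more refined, but a minimal sketch of dynamic assignment might treat each camera's load as proportional to its current target count and greedily pack cameras onto the least-loaded processor (the classic longest-processing-time heuristic). The camera names and counts below are illustrative:

```python
import heapq

def assign_cameras(target_counts, num_processors):
    """Greedily assign cameras to processors so that the busiest
    processor carries as little load as possible (LPT heuristic).

    target_counts: {camera_id: current number of targets in view},
    used here as a stand-in for the per-camera workload.
    """
    # Min-heap of (current load, processor id).
    heap = [(0, p) for p in range(num_processors)]
    heapq.heapify(heap)
    assignment = {}
    # Placing the heaviest cameras first gives a better packing.
    for cam, load in sorted(target_counts.items(), key=lambda kv: -kv[1]):
        cur, proc = heapq.heappop(heap)
        assignment[cam] = proc
        heapq.heappush(heap, (cur + load, proc))
    return assignment

# Re-run whenever the target counts change to rebalance the load:
print(assign_cameras({"a": 5, "b": 4, "c": 3, "d": 2}, 2))
```

A dynamic system would simply re-run this assignment whenever the observed target counts drift, which is exactly what a static camera-to-processor mapping cannot do.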

See you all in SR9 at 4pm!
The CSTalks Team


Performance of Shared-Memory Programs on Multicore Systems on 31 August

On Wednesday, 31 August, Bogdan Tudor will discuss some of the research problems in performance analysis of shared-memory programs on multicore systems. Come see his talk at 4pm in SR9 (COM1-02-09).

Title: Performance of Shared-Memory Programs on Multicore Systems

Speaker: Bogdan Tudor

Description: As the third generation of mainstream multicore systems is set to reach the market, systems with core counts in the hundreds will become the backbone of parallel processing. To address the hardware changes, a growing number of programming models, languages and methodologies have been proposed as effective ways of writing parallel programs. However, this wide range of programming models and languages, as well as the size and heterogeneity of multicore systems, makes performance analysis of parallel programs an increasingly difficult task. In this talk, I will present my work on a practical model for performance analysis of shared-memory programs. The model derives the speedup, and the speedup loss due to data dependency and memory overhead, for various configurations of threads, cores and memory access policies in both UMA and NUMA systems. The approach is highly practical, as it is based on generally available and non-intrusive inputs derived from the trace of the operating system run-queue and hardware event counters. Furthermore, I discuss the relationship between memory access patterns and the degree of contention, as observed in extensive measurements on state-of-the-art UMA and NUMA multicore systems with 8, 24 and 48 cores. Validation of the model against measurements of HPC as well as real-world programs shows that the model has good accuracy. As applications of the model, I show how memory contention changes for different problem sizes and how to derive the number of cores that maximizes speedup for a program running on a given machine configuration.
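The model in the talk is more detailed, but as a back-of-the-envelope illustration of why speedup flattens as core counts grow, classic Amdahl's law already shows diminishing returns on 8-, 24- and 48-core machines (the 5% serial fraction below is an arbitrary assumption, not a measured value):

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup for a program in which a fixed fraction
    of the work cannot be parallelised (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1 - serial_fraction) / cores)

# With just 5% serial work, 6x the cores buys less than 2.5x the speedup:
for n in (8, 24, 48):
    print(n, round(amdahl_speedup(0.05, n), 2))
```

Memory contention adds a further, workload-dependent loss on top of this bound, which is why a model driven by run-queue traces and hardware event counters is needed to predict real behaviour.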

As usual, there will be refreshments to cater for our sweet-tooth 🙂

See you all on Wednesday!

Unified Recommendations Framework for Social Networks on 24 August

On Wednesday, 24 August, Chen Wei will discuss how to overcome some of the challenges of social-network-based recommendation systems. Come see his talk at 4pm in SR9 (COM1-02-09).

Title: A Unified Framework for Recommendations in the Social Network.

Speaker: Chen Wei

Description: Social network systems such as Facebook and YouTube have played a significant role in capturing both explicit and implicit user preferences for different items, in the form of ratings and tags. This forms a quaternary relationship among users, items, tags and ratings. Existing systems have utilized only ternary relationships, such as users-items-ratings or users-items-tags, to derive their recommendations. In this talk, we show that ternary relationships are insufficient to provide accurate recommendations. Instead, we model the quaternary relationship among users, items, tags and ratings as a 4-order tensor and cast the recommendation problem as a multi-way latent semantic analysis problem. A unified framework for user recommendation, item recommendation, tag recommendation and item rating prediction is proposed.
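To make the 4-order tensor concrete, here is a minimal sketch (with made-up toy data) of encoding user-item-tag-rating observations as a tensor and taking its mode-1 unfolding, the first step of a higher-order SVD as used in multi-way latent semantic analysis:

```python
import numpy as np

# Toy quaternary data: (user, item, tag, rating) observations.
# All indices and sizes here are illustrative, not real data.
obs = [(0, 0, 1, 4), (0, 1, 0, 5), (1, 0, 1, 3), (1, 1, 2, 2)]

n_users, n_items, n_tags, n_ratings = 2, 2, 3, 6  # ratings 0..5
T = np.zeros((n_users, n_items, n_tags, n_ratings))
for u, i, t, r in obs:
    T[u, i, t, r] = 1.0   # one entry per observed quadruple

# Mode-1 unfolding (users x everything else): its left singular
# vectors give latent user factors; repeating this per mode and
# truncating yields the core of a higher-order SVD.
T1 = T.reshape(n_users, -1)
U, s, Vt = np.linalg.svd(T1, full_matrices=False)
print(U.shape)
```

The same unfolding applied along the item, tag, and rating modes is what allows one framework to serve user, item, tag, and rating predictions from a single decomposition.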

As usual, there will be refreshments to sweeten our afternoon 🙂

See you all on Wednesday!

Polymorphic Heterogeneous Multi-Core Systems on 17th August

Hello! We kick-start a new semester of CSTalks with a presentation about a hot topic in computer architecture. Mihai Pricopi will share with us some of the latest developments in polymorphic multi-core systems. Join us at SR9 (COM1-02-09) at 4:00pm.

Title: Polymorphic Heterogeneous Multi-Core Systems

Speaker: Mihai Pricopi

Description: The current commercial trend is to build multiprocessors that are simply collections of identical (possibly simple) cores. These homogeneous multi-cores are simple to design, offer easy silicon implementation, and provide regular software environments. Unfortunately, emerging general-purpose workloads from diverse application domains have very different resource requirements that are hard to satisfy with a set of identical cores. In contrast, there is much evidence that heterogeneous multi-core solutions customized for a particular application domain can offer significant advantages in terms of performance, power, area, and delay.

In this talk, Mihai will present a newly proposed architecture, called “polymorphic heterogeneous multi-core architecture”, that can be tailored to the workload by software. He will briefly introduce his research group and how the different important aspects of the new architecture are handled separately within the group. He will then take a closer look at his part of the project: what has been done and what the next steps are.

As usual, in pure CSTalks tradition, we will be having drinks and refreshments during and after the talk 🙂

CSTalks are coming back!

CSTalks are coming back with the beginning of the new semester! The venue, day and time will be the usual: every Wednesday at 4pm in SR9 (COM1-02-09).

We hope to have many interesting talks during the coming semester. We would like to encourage all students to attend the talks and participate in the discussions on different topics in Computer Science. Students can also present their own research work and get new insights and fresh opinions on their problems.


MOGCLASS, Music by Collaboration for Kids on 1st June

This Wednesday, June 1st, Zhou Yinsheng will be presenting some work related to virtual music instruments. Join us at SR9 from 4:00pm.

Title: MOGCLASS – a Collaborative System of Mobile Devices for Classroom Music Education of Young Children

Speaker: Yinsheng Zhou

Description: Composition, listening and performance are essential activities in classroom music education, yet conventional music classes impose unnecessary limitations on students’ ability to develop these skills. Based on in-depth fieldwork and a user-centered design approach, we created MOGCLASS, a multimodal collaborative music environment that enhances students’ musical experience and improves teachers’ management of the classroom. We conducted a two-round evaluation of the system. First, improvements were made based on the results of an iterative design evaluation, in which a trial system was implemented. The system then underwent a second round of evaluation through a three-week between-subjects controlled experiment in a local primary school. Results showed that MOGCLASS is effective in motivating students to learn music, improving the way they collaborate with other students, and helping teachers manage the classroom.

Object Detection and Tracking on May 25th

We’re having Mukesh discuss object detection and tracking this Wednesday, May 25th.

Title: Object Detection and Tracking

Speaker: Mukesh Kumar Saini

Description: Object detection is a fundamental step in most video analysis applications. There are many research challenges involved in automatic object detection, depending on the scenario. The most prevalent application of object detection is in the field of multimedia surveillance. In this talk we will discuss the common problems of object detection in surveillance video. Further, we will discuss the Gaussian Mixture Model (GMM) based object detection method. While object detection is the basic step of video analysis, higher-level semantic interpretation of the scene requires trajectory information. Most suspicious-event detection methods use tracking as a basic building block. In the second part of the talk, we will discuss a particle-filter-based method of object tracking. To summarize, the aim of the talk is two-fold: (1) to discuss common problems in object detection and tracking, and (2) to give hands-on experience of how to use the classical methods of GMM and particle filtering in problem solving.
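As a hands-on taste of the GMM idea, here is a minimal single-Gaussian-per-pixel background model in plain NumPy: each pixel keeps a running mean and variance, and pixels far from their mean are flagged as foreground. The full GMM method keeps a mixture of several Gaussians per pixel to handle multi-modal backgrounds; the parameters here are arbitrary:

```python
import numpy as np

class GaussianBackground:
    """Toy per-pixel Gaussian background model (one component only)."""

    def __init__(self, alpha=0.05, k=2.5):
        self.alpha, self.k = alpha, k   # learning rate, match threshold
        self.mean = None

    def apply(self, frame):
        frame = frame.astype(float)
        if self.mean is None:           # bootstrap from the first frame
            self.mean = frame.copy()
            self.var = np.full(frame.shape, 15.0)
            return np.zeros(frame.shape, dtype=bool)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k ** 2) * self.var
        # Update the model only where the pixel matched the background.
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        self.var = np.maximum(self.var, 4.0)   # variance floor
        return foreground

model = GaussianBackground()
background = np.full((120, 160), 127, dtype=np.uint8)
for _ in range(100):
    model.apply(background)            # learn the static background

frame = background.copy()
frame[40:80, 60:100] = 255             # a bright "object" enters the scene
mask = model.apply(frame)
print(mask.sum())                      # 1600 foreground pixels (40 x 40)
```

Connected regions of the foreground mask become candidate detections, which a particle filter can then track across frames.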

See you all at the same place as usual (SR9).