Conference talks at CSTalks

Today we have two talks:

1. Title:

Vignette: Interactive Texture Design and Manipulation with Freeform Gestures for Pen-and-Ink Illustration
Presenter: Mr Rubaiat Habib


Vignette is an interactive system that facilitates texture creation in pen-and-ink illustrations. Unlike existing systems, Vignette preserves illustrators’ workflow and style: users draw a fraction of a texture and use gestures to automatically fill regions with the texture. We currently support both 1D and 2D synthesis with stitching. Our system also has interactive refinement and editing capabilities to provide higher-level texture control, which helps artists achieve their desired vision. A user study with professional artists shows that Vignette makes the process of illustration more enjoyable and that first-time users can create rich textures from scratch within minutes.
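As a rough illustration of the idea (not Vignette's actual algorithm, which handles freeform gestures, 2D synthesis, and stitching), 1D texture synthesis by example can be reduced to tiling a small user-drawn sample along a path:

```python
def synthesize_1d(example, length):
    """Fill a path of `length` samples by tiling a short example
    stroke (a list of offsets) -- the simplest possible form of
    1-D texture synthesis by example."""
    out = []
    while len(out) < length:
        out.extend(example)
    return out[:length]

# A 3-sample zigzag repeated to fill an 8-sample path.
print(synthesize_1d([0, 1, 0], 8))  # [0, 1, 0, 0, 1, 0, 0, 1]
```

A real system like Vignette additionally warps the example along the gesture path and stitches tile boundaries so the repetition is not visible.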



2. Title:

Exploring User Motivations for Eyes-free Interaction on Mobile Devices

Presenter: Yi Bo


While there is increasing interest in creating eyes-free interaction technologies, a solid analysis of why users need or desire eyes-free interaction has yet to be presented. To gain a better understanding of such user motivations, we conducted an exploratory study with four focus groups, and suggest a classification of motivations for eyes-free interaction under four categories (environmental, social, device features, and personal). Exploring and analyzing these categories, we present early insights pointing to design implications for future eyes-free interactions.

See you all in SR9 at 4pm!

Computing 2D Constrained Delaunay Triangulation Using the GPU by Qi Meng on 29 Feb

This week we host a conference talk by Qi Meng. The talk will be presented at the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D 2012), March 9–11, 2012.

Title: Computing 2D Constrained Delaunay Triangulation Using the GPU

Speaker: Qi Meng

Abstract: We propose the first GPU solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. Many CPU algorithms have been developed to solve the CDT problem in computational geometry, yet there has been no known prior approach using the parallel computing power of the GPU to solve this problem efficiently. For the special case of the CDT problem with a PSLG consisting of just points, which is the normal Delaunay triangulation problem, a hybrid approach has already been presented that uses the GPU together with the CPU to partially speed up the computation. Our work, on the other hand, accelerates the whole computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, with a good speedup of up to an order of magnitude compared to the best sequential implementations on the CPU. This result is reflected in our experiments with both randomly generated PSLGs and real-world GIS data with millions of points and edges.
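The abstract does not spell out the geometric core, but every Delaunay algorithm, CPU or GPU, rests on the empty-circumcircle property: no input point may lie strictly inside the circumcircle of any triangle. A minimal sketch of that predicate (the standard determinant test, not the paper's GPU code):

```python
def in_circumcircle(a, b, c, d):
    """Return True if point d lies strictly inside the circumcircle
    of triangle (a, b, c), given in counter-clockwise order.
    Standard 3x3 incircle determinant: positive means 'inside'."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = (
        (ax * ax + ay * ay) * (bx * cy - cx * by)
        - (bx * bx + by * by) * (ax * cy - cx * ay)
        + (cx * cx + cy * cy) * (ax * by - bx * ay)
    )
    return det > 0

# Circumcircle of (0,0),(1,0),(0,1) has center (0.5, 0.5):
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.9, 0.9)))  # True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2, 2)))      # False
```

Making this test numerically robust for degenerate inputs is one of the hard parts the paper's "numerically robust" claim refers to; naive floating-point evaluation as above can misclassify near-cocircular points.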

See you all on Wed, 4 pm.

Explicit Semantic Analysis in Federated Search by guest speaker Matthias Wauer on 8 Dec


This week we have a special CSTalks on explicit semantic analysis in federated search, with a guest speaker from TU Dresden, Germany: Matthias Wauer. The talk is Thursday 8 Dec in SR9, 4pm.

Title: Integration and Relevance Assessment of Product Information Sources: Towards enhancing Federated Search with Semantics

Speaker: Dipl.-Medien-Inf. Matthias Wauer

Abstract: Product information is usually stored in a large number of different information systems, and can commonly be found in unstructured documents. Providing a comprehensive view and instant search functionality on that information is challenging, and even more so when the infrastructure changes dynamically.
The typical centralized search index struggles to provide the required flexibility. Distributed information retrieval methods can be applied, but existing approaches typically use limited models for executing federated search subtasks.
Based on a service-oriented architecture, this talk presents current research on a resource description and resource selection method, which uses only the domain ontology and the external information sources to build conceptual resource descriptors. The approach is based on Explicit Semantic Analysis. The talk will show how existing methods perform, and what the main issues of this approach are.
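To make the idea concrete: Explicit Semantic Analysis represents a text as a weighted vector over a concept corpus (Wikipedia articles in the original ESA formulation). A toy sketch with a made-up three-concept corpus, not the speaker's implementation:

```python
import math
from collections import Counter

# Toy "concept corpus"; real ESA uses Wikipedia articles as concepts.
concepts = {
    "music":  "guitar piano melody song band concert album",
    "sports": "football goal team match player league score",
    "tech":   "computer software code program network data",
}

def tf_idf_index(concepts):
    """Build a term -> {concept: weight} inverted index over the corpus."""
    docs = {c: Counter(text.split()) for c, text in concepts.items()}
    n = len(docs)
    index = {}
    for c, tf in docs.items():
        for term, count in tf.items():
            df = sum(1 for d in docs.values() if term in d)
            index.setdefault(term, {})[c] = count * math.log(n / df)
    return index

def esa_vector(text, index, concepts):
    """Represent a text as a weighted vector over the concept space."""
    vec = {c: 0.0 for c in concepts}
    for term in text.split():
        for c, w in index.get(term, {}).items():
            vec[c] += w
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

index = tf_idf_index(concepts)
# "guitar song" maps strongly onto the "music" concept.
print(esa_vector("guitar song", index, concepts))
```

In federated search, such concept vectors can serve as the "conceptual resource descriptors" the abstract mentions: each external source is described by the concepts its documents activate, and resource selection compares the query's concept vector against them.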

See you all on Thursday, at 4pm!

Visual Interpretation with Three-Dimensional Annotations on 9 Nov


For the last CSTalk of this semester, Sharmili will present her work on visual interpretation of medical images. This work will be presented next month at the 2011 Conference of the Radiological Society of North America (RSNA 2011), in Chicago, USA. The time and date are the usual: Wednesday 4pm, SR9 (COM1-01-09).

Title: Visual Interpretation with Three-Dimensional Annotations (VITA): Open source automated 3D visual summary application using AIM (Annotation and Image Markup) enabled PACS based on radiologist annotations

Speaker: Sharmili Roy

Abstract: Medical doctors in virtually all fields of medicine now rely on imaging technology to make diagnoses and clinical decisions for treatment. The workflow generally involves an ordering physician who requests an imaging study to be performed on a patient. A radiologist interprets the exam using a dedicated image workstation, which allows the radiologist to make visual annotations on the images to denote regions of interest, to make quantitative measurements, or simply to select key images in the study that are of clinical importance. The radiologist also prepares a textual report that refers back to the visual information prepared during the exam. Surprisingly, however, the ordering physician often relies solely on the text-based report. This lack of access to the image-based annotations is due to a variety of reasons, ranging from software incompatibilities to cumbersome workflows.

The aim of this work is to address the limitations of the current medical imaging and reporting workflow, in particular the outdated reliance of ordering physicians on text-only reports. Our focus is to develop a software framework that allows radiologists to produce visual summaries to augment or integrate into their text reports, in a format that is not only easily accessible but also concise and visually informative to the ordering physician.

See you all on Wednesday!

Natural Language Processing on 2 Nov

This week we have a new topic from an area not covered previously. Daniel will share with us the challenges and research problems of Natural Language Processing. The time and place are the usual: Wednesday, 4pm in SR9 (COM1-01-09).

Title: Natural Language Processing

Speaker: Daniel Dahlmeier

Description: In this talk, I will give an introduction to natural language processing: the branch of computer science that tries to make computers ‘talk’ like humans. I will start from the historical motivation for language processing, show why language is not as easy for computers as we might think it is, and close with an overview of current research topics.
No prior knowledge of linguistics or language processing is required to attend this talk, although speaking at least one (natural) language is a plus.
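A one-line taste of why language is harder for computers than it looks (a hypothetical example, not from the talk): even sentence splitting, seemingly trivial, breaks on abbreviations.

```python
import re

def naive_sentences(text):
    """Split on a period followed by whitespace -- the 'obvious' rule."""
    return [s for s in re.split(r"\.\s+", text) if s]

# Two sentences in, three fragments out:
# the rule wrongly splits "Dr. Smith" but keeps "p.m." intact by luck.
print(naive_sentences("Dr. Smith arrived at 4 p.m. He gave a talk."))
```

Deciding which periods end sentences already requires knowledge about abbreviations, and this is one of the simplest tasks in the field.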

See you all in SR9 at 4pm!

Towards a Model Checker for NesC and Wireless Sensor Networks on 19 Oct 2011

This week we have a new topic from Software Engineering. Manchun will present her work on verifying the correctness of wireless sensor network applications.

Title: Towards a Model Checker for NesC and Wireless Sensor Networks

Speaker: Manchun Zheng

Description: Wireless sensor networks (WSNs) are expected to run unattended for critical tasks. Guaranteeing the correctness of WSNs is important, but highly nontrivial due to their distributed nature. Traditional tools for debugging and simulating TinyOS are incapable of detecting all possible errors under all circumstances. Bugs that occur in rarely encountered scenarios are difficult (or expensive) to detect by traditional debugging, testing, or simulation. Therefore, there is a need for a technique that can automatically search all possible scenarios for potential bugs or errors, in order to ensure the reliability of WSN implementations. Model checking is a technique that verifies desirable properties by systematically exploring all possible scenarios of a given system. It has been successfully applied to find intricate errors in both software and hardware systems. However, little has been done to model check WSN implementations. This talk introduces the domain-specific model checker NesC@PAT, which automatically builds models from NesC programs and verifies them against various properties. NesC@PAT is the first systematic model checker that tackles errors in WSN implementations at different levels.
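To give a flavor of what "systematically exploring all possible scenarios" means (a generic explicit-state sketch, not NesC@PAT itself): a model checker exhaustively searches the reachable states of a transition system for violations of a safety property and reports a counterexample trace if one exists.

```python
from collections import deque

def model_check(init, transitions, safe):
    """Breadth-first exploration of every reachable state; returns a
    shortest counterexample trace to an unsafe state, or None if the
    safety property holds in all reachable states."""
    frontier = deque([(init, (init,))])
    visited = {init}
    while frontier:
        state, trace = frontier.popleft()
        if not safe(state):
            return trace
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + (nxt,)))
    return None

# Toy model: two processes that freely enter (1) and leave (0) a
# critical section with no lock -- mutual exclusion is violated.
def transitions(state):
    p1, p2 = state
    return [(1 - p1, p2), (p1, 1 - p2)]

trace = model_check((0, 0), transitions, safe=lambda s: s != (1, 1))
print(trace)  # a shortest path from (0, 0) into the bad state (1, 1)
```

The hard part for real systems such as WSN code is state-space explosion: interleavings of sensing, radio, and timer events multiply the reachable states, which is why dedicated tools and reduction techniques are needed.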

See you in SR9, at 4pm!

BehaviorBoxes: A new Parallel Paradigm Beyond OO on 12 Oct 11

This week we have a talk in the domain of software engineering and programming languages, in particular parallel programming languages.

Title: BehaviorBoxes: A new Parallel Paradigm Beyond OO

Speaker: Marcel Böhme

Abstract: Nowadays, we start with a sequential program and explicitly add parallelism. Nondeterministic behavior introduces problems such as race conditions. Locks, mutexes, and monitors are complex attempts to tackle this nondeterminism, and they introduce yet other problems, such as deadlocks or starvation. This is no minor nuisance! We will discuss BehaviorBoxes, a new programming paradigm that renders parallelism implicit. However, this research is in its infancy and still at an abstract level. I hope we can discuss your comments and ideas, as well as your critiques and concerns.

See you all in SR9, 4pm.

Program Verification with Separation Logic on 28 Sept


This week’s CSTalks starts at 4:30pm. Cristian will discuss some of the research problems of program verification in the context of separation logic. In addition, there’s a practical demo at the end of the talk.

Title: Program verification in the context of separation logic: problems and approaches.

Speaker: Cristian Gherghina

Description: Program verification is still very much an open research topic. The goal of this field is to develop systems that statically and automatically prove user-provided assertions about a given program. Current investigations focus both on the expressiveness of the system (to prove richer assertions about programs) and on the efficiency of the method (to scale to larger programs).
Several theoretical frameworks have been proposed to address these issues, among them separation logic. This logic is very well suited for concise reasoning about heap-manipulating programs. The challenge now is to develop separation logic solvers that allow fast reasoning, and to extend the underlying logic to support richer programming-language features (e.g. exception handling and practically used concurrency primitives).

As a showcase, in this talk I will discuss our work on the HIP/SLEEK separation logic entailment checker and the different theoretical and practical techniques we developed (e.g. a predicate specialization calculus that allows impressive speedups, and elegant logics that allow for simple reasoning about complex language features).
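For readers unfamiliar with entailment checking: in separation logic a symbolic heap is a *separating* conjunction of points-to facts (each heap cell is owned by exactly one fact), and an entailment checker decides whether one heap implies another while inferring the leftover "frame". A drastically simplified sketch (a toy illustration only, nothing like SLEEK's actual procedure, which handles inductive predicates and pure constraints):

```python
def well_formed(heap):
    """A symbolic heap is a separating conjunction: each address may
    be owned by at most one (address, value) points-to fact."""
    addrs = [a for a, _ in heap]
    return len(addrs) == len(set(addrs))

def entails(ante, cons):
    """Toy check of the entailment  ante |- cons * frame  over symbolic
    heaps given as sets of (address, value) points-to facts.
    Returns the inferred frame, or None if the entailment fails."""
    if not (well_formed(ante) and well_formed(cons)):
        return None
    if not cons <= ante:           # every cell demanded must be owned
        return None
    return ante - cons             # the frame: cells cons leaves untouched

# x|->1 * y|->2  entails  x|->1, with frame y|->2:
print(entails({("x", 1), ("y", 2)}, {("x", 1)}))  # {('y', 2)}
```

Frame inference is what makes separation logic verification modular: a procedure only needs to mention the heap cells it touches, and the checker carries the rest of the heap (the frame) across the call unchanged.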

See you all in SR9 at 4:30pm!