Detecting Mastication (OEI)

Automatic dietary monitoring has become an important area of research for diagnosing allergies, monitoring food intake for people with type 1 diabetes, and improving self-awareness for chronic over-eating. Continued use of wearables for monitoring food intake depends on the comfort and ease of use of the system. In this research, we explore the use of the Outer Ear Interface (OEI) to recognize eating activities. OEI is a wearable multimodal system that uses a set of proximity sensors encapsulated in an off-the-shelf earpiece to monitor jaw movement, measuring the deformation the jaw causes in the ear canal without contact. The system also contains a 3D gyroscope to prevent errors due to body motion.
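As a rough illustration of the sensing idea, jaw-motion detection can be gated on gyroscope stillness so that whole-body movement does not trigger false chews. The thresholds and units below are illustrative assumptions, not values from the study:

```python
def chewing_frames(proximity, gyro_mag, prox_delta=0.5, gyro_max=30.0):
    """Flag frames that look like chewing.

    proximity: per-frame ear-canal proximity readings (arbitrary units)
    gyro_mag:  per-frame angular-rate magnitudes (deg/s), same length
    A frame counts as chewing when the canal deformation changed enough
    (jaw moving) AND the gyroscope is quiet (body still).
    """
    flags = []
    prev = proximity[0]
    for p, g in zip(proximity, gyro_mag):
        jaw_moving = abs(p - prev) > prox_delta  # canal shape changed
        body_still = g < gyro_max                # reject head/body motion
        flags.append(jaw_moving and body_still)
        prev = p
    return flags
```

In this sketch the fourth frame below is rejected despite jaw-like proximity change, because the gyroscope reports large motion.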

Silent Speech

Users of wearable computers often desire subtle and silent control of their devices during meetings, on public transportation, and in other social situations. In addition, environments such as those found in aviation, military, and emergency response are often too noisy for speech recognition. We address these problems through silent speech recognition and gesture control, capturing movements associated with speech and intentional gestures of the tongue and jaw. The system has two components: the Tongue Magnet Interface (TMI), which uses the 3-axis magnetometer aboard Google Glass to measure the movement of a small magnet glued to the user's tongue, and the Outer Ear Interface (OEI), which measures the deformation in the ear canal caused by jaw movements using proximity sensors embedded in an off-the-shelf earpiece.
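The TMI idea can be sketched minimally: the tongue magnet perturbs the ambient field measured by the magnetometer, so deviation from a resting baseline signals tongue movement. The 10 µT threshold and sample values are illustrative assumptions:

```python
import math

def magnet_deviation(sample, baseline):
    # Euclidean distance of a 3-axis magnetometer reading from the
    # resting baseline (ambient field with the tongue at rest).
    return math.sqrt(sum((s - b) ** 2 for s, b in zip(sample, baseline)))

def tongue_active(samples, baseline, thresh=10.0):
    # Flag samples where the field deviates enough from baseline to
    # suggest the tongue-mounted magnet has moved. The threshold is an
    # illustrative assumption, not a calibrated value.
    return [magnet_deviation(s, baseline) > thresh for s in samples]
```

A real system would also re-estimate the baseline as the head rotates in the Earth's field; this sketch assumes a fixed orientation.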

Captioning on Glass (CoG)

Captioning on Glass (CoG) provides real-time captioning, allowing the deaf and hard of hearing to converse with others. For more information, visit the project website at http://cog.gatech.edu

Mobile Music Touch (MMT)

We present Mobile Music Touch, a wearable, wireless haptic piano instruction system, composed of (1) five small vibration motors, one for each finger, fitted inside a glove, (2) a Bluetooth module mounted on the glove, and (3) piano music output from a laptop. Users hear the piano music and feel the vibrations indicating which finger is used to play each note. We investigate the system's potential for passive learning, i.e., learning piano playing automatically while engaged in everyday activities, as well as the opportunities to use this system for rehabilitation.
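The core note-to-motor mapping can be sketched as follows. The five-key mapping (C4 through G4, one key per finger, thumb = motor 0) is a hypothetical example exercise, not the system's fixed repertoire:

```python
# Hypothetical mapping for a five-note exercise (MIDI C4..G4), one key
# per finger: motor 0 = thumb ... motor 4 = pinky.
NOTE_TO_FINGER = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4}

def vibration_sequence(midi_notes):
    # Return the motor index to pulse for each note in the melody;
    # None means the note falls outside the mapped five-key range.
    return [NOTE_TO_FINGER.get(n) for n in midi_notes]
```

In the full system these motor indices would be sent over Bluetooth to the glove in time with the audio playback.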


MAGIC Summoning

Gestures for interfaces should be short, pleasing, intuitive, and easily recognized by a computer. However, it is a challenge for interface designers to create gestures easily distinguishable from users' normal movements. Our tool MAGIC Summoning addresses this problem. Given a specific platform and task, we gather a large database of unlabeled sensor data captured in the environments in which the system will be used (an "Everyday Gesture Library," or EGL). The EGL is quantized and indexed via multi-dimensional Symbolic Aggregate approXimation (SAX) to enable quick searching. MAGIC exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering. Suggested gestures are ordered according to brevity and simplicity.
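SAX itself works by z-normalizing a time series, averaging it into segments (piecewise aggregate approximation), and mapping each segment mean to a letter via Gaussian breakpoints. A minimal single-dimension sketch, with an alphabet of size 4 (the multi-dimensional indexing MAGIC uses builds on this per-axis step):

```python
import statistics

# Breakpoints dividing N(0,1) into 4 equiprobable regions (alphabet size 4).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def sax_word(series, segments=4, alphabet="abcd"):
    # 1. z-normalize the series
    mu = statistics.mean(series)
    sd = statistics.pstdev(series) or 1.0
    z = [(x - mu) / sd for x in series]
    # 2. piecewise aggregate approximation (assumes len divisible by segments)
    n = len(z) // segments
    paa = [statistics.mean(z[i * n:(i + 1) * n]) for i in range(segments)]
    # 3. map each segment mean to a symbol via the breakpoints
    def symbol(v):
        for i, b in enumerate(BREAKPOINTS):
            if v < b:
                return alphabet[i]
        return alphabet[-1]
    return "".join(symbol(v) for v in paa)
```

Because equal time series map to equal words, the resulting strings can be hashed or indexed, which is what makes searching a large EGL for near-matches fast.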


Research in dolphin cognition and communication in the wild remains a challenging task for marine biologists. Most problems arise from the uncontrolled nature of field studies and the difficulty of building suitable underwater research equipment. We developed a novel underwater wearable computer enabling researchers to engage in audio-based interaction between humans and dolphins. The design requirements are based on a research protocol developed by a team of marine biologists associated with the Wild Dolphin Project. Furthermore, we work on discovering and indexing dolphin whistles recorded in the wild.
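As a toy illustration of whistle discovery, one can flag audio frames whose dominant frequency falls in the tonal band commonly associated with dolphin whistles. The 5-20 kHz band and the crude zero-crossing pitch estimate are illustrative assumptions, not the project's actual detector:

```python
def dominant_freq(frame, sample_rate):
    # Crude pitch estimate from zero crossings: each full cycle of a
    # tonal signal contributes two sign changes.
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    duration = len(frame) / sample_rate
    return crossings / (2 * duration)

def whistle_frames(frames, sample_rate, lo=5000.0, hi=20000.0):
    # Flag frames whose estimated dominant frequency lies in the
    # assumed whistle band; real detectors track spectrogram ridges.
    return [lo <= dominant_freq(f, sample_rate) <= hi for f in frames]
```

A production system would use spectrogram contour tracing instead, since whistles sweep in frequency and overlap with clicks and noise.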


DMITRI

As a member of the iDASH project (integrating Data for Analysis, Anonymization, and SHaring), Dr. Heintzman is the lead for DMITRI 1.0 (Diabetes Management Integrated Technology Research Initiative). The DMITRI project currently holds a daily-life diabetes-management dataset covering 16 subjects with diabetes over 72-96 hours. It tracks data from wearable medical equipment, personal logs, nutritional logs, clinical history data, and questionnaires. The DMITRI datasets are shared through the iDASH portal at the National Center for Biomedical Computing at UCSD. This dataset is unique in that it combines an extensive amount of on-body monitoring (insulin pump dosage logs; Dexcom continuous glucose monitor; SenseWear activity monitor with accelerometer, GSR, and skin temperature sensing; Polar heart monitor; Philips Actiwatch; and Zeo).
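Fusing streams sampled at such different rates (e.g., a glucose reading every few minutes vs. heart rate at 1 Hz) typically starts with timestamp alignment. A minimal nearest-timestamp join, with made-up timestamps for illustration:

```python
import bisect

def align_nearest(times_a, values_a, times_b):
    """For each timestamp in times_b, pick the value from stream A whose
    timestamp is nearest. times_a must be sorted ascending."""
    out = []
    for t in times_b:
        i = bisect.bisect_left(times_a, t)
        # The nearest neighbor is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times_a)]
        j = min(candidates, key=lambda j: abs(times_a[j] - t))
        out.append(values_a[j])
    return out
```

Real analyses would also bound the allowed time gap and handle sensor dropouts, which this sketch omits.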


CopyCat

CopyCat is designed both as a platform to collect gesture data for our ASL recognition system and as a practical application that helps deaf children develop working memory and language skills while they play the game. The system uses a video camera and wrist-mounted accelerometers as the primary sensors. In CopyCat, the children use ASL to communicate with the heroine of the game, Iris the cat. For example, the child will sign to Iris, "ALLIGATOR ON CHAIR" (glossed from ASL). If the child signs poorly, Iris looks puzzled, and the child is encouraged to attempt the phrase again. If the child signs clearly, Iris "poofs" the villain and continues on her way.
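The game's feedback loop can be sketched as a thresholded decision on the recognizer's score. The log-likelihood threshold below is a hypothetical tuning value, not one from the deployed system:

```python
def iris_response(log_likelihood, threshold=-50.0):
    # Hypothetical decision rule: the ASL recognizer scores the child's
    # signed phrase; a score above the (illustrative) threshold counts
    # as clear signing and advances the game.
    if log_likelihood >= threshold:
        return "poof villain"  # clear signing: Iris defeats the villain
    return "look puzzled"      # poor signing: prompt the child to retry
```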


Gesture and Activity Recognition Toolkit (GART)

The Gesture and Activity Recognition Toolkit (GART), formerly the Georgia Tech Gesture Toolkit, is a toolkit for rapid prototyping of gesture-based applications. There are two versions of the toolkit: a Linux shell-scripting-based version and the more refined Java version. Resources on the project pages include:

- About GART and GART releases
- A short video demo of Gesture Watch, which used GART
- Roadmap and team pages
- GART support and installation instructions
- The GART Manual, with information on the comm and vision modules as well as the WritingPad, PinkCup, and AccelGestures examples (note that the information involving Maven is outdated)
- Tutorials, the API, and source code with bug tracking (trac)
- Additional resources: the Hidden Markov Model Toolkit homepage and a better UI for Weka
- The older Linux-shell GT2K home page


Telesign

Telesign is a system designed for Deaf adults attempting to carry out service transactions with a hearing person, such as visiting the veterinarian or getting an oil change. American Sign Language (ASL) is the native language of the Deaf in the United States. ASL has a completely different grammar than English, which makes it difficult for a Deaf person to communicate in English with a hearing person. Traditional methods of communication between the Deaf and hearing are writing on paper and passing it back and forth, or typing onto a Sidekick-like device and showing the screen. Both of these methods require the Deaf individual to create grammatically correct, or at least semantically understandable, English phrases, which may be difficult.

