Videos uploaded by user “HCT UBC”
Spheree: A 3D Perspective-Corrected Interactive Spherical Scalable Display
 
03:01
We have created Spheree, a personal spherical display that arranges multiple blended and calibrated mini-projectors to transform a translucent globe into a high-resolution, perspective-corrected 3D interactive display. We track both the user and Spheree to render user-targeted views onto the surface of the sphere, providing motion parallax, occlusion, shading and perspective depth cues. One of the emerging technologies that makes Spheree unique is our use of multiple mini-projectors, calibrated and blended automatically to create a uniform pixel space on the surface of the sphere. The calibration algorithm we developed allows as many projectors as needed for virtually any size of sphere, so the cost of higher-resolution spherical displays scales linearly. Spheree has no seams or blind spots, so rendered scenes are not occluded and the display can support stereo 3D experiences. Using touch and gesture, we support tangible interactions such as moving, rotating, sculpting and painting objects. Computer-generated 3D models or 3D models of real objects can be imported into Spheree, and once objects are modified within Spheree, our workflow supports exporting the modified model for easy use in other applications. At SIGGRAPH Emerging Technologies we will exhibit two Spherees of different sizes to illustrate that calibrated multiple-projector spherical displays represent a future of interactive, scalable, high-resolution, non-planar displays. Participants at SIGGRAPH will have a magical experience with Spheree, demonstrating its use in a 3D design workflow environment. To be presented at the Emerging Technologies and Poster sessions of SIGGRAPH 2014: Fernando Ferreira, Marcio Cabral, Olavo Belloc, Gregor Miller, Celso Kurashima, Rosali de Deus Lopes, Ian Stavness, Junia Anacleto, Marcelo Zuffo and Sidney Fels, ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2014), Vancouver, British Columbia, Canada, August 2014
Views: 30595 HCT UBC
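The uniform pixel space described in the Spheree entry above depends on blending the overlapping projector images. As a rough illustration only, not Spheree's actual calibration or blending algorithm (the edge-distance feathering, the (u, v) coverage representation and the example values are all assumptions), here is a sketch of normalized blend weights for a surface point covered by several projectors:

```python
# Illustration only: normalized edge-distance feathering for multi-projector blending.
def edge_distance(uv):
    """Distance of a projector-image coordinate (u, v in [0, 1]) to the nearest image edge."""
    u, v = uv
    return max(0.0, min(u, 1.0 - u, v, 1.0 - v))

def blend_weights(uvs_per_projector):
    """For one sphere-surface point, given its (u, v) in each projector that covers it
    (None if not covered), return per-projector intensity weights summing to 1,
    so overlapping regions appear seamless."""
    raw = [edge_distance(uv) if uv is not None else 0.0 for uv in uvs_per_projector]
    total = sum(raw)
    return [w / total if total > 0 else 0.0 for w in raw]

# A point deep inside projector 0's image, near the edge of projector 1's, outside projector 2's:
print(blend_weights([(0.5, 0.5), (0.05, 0.4), None]))  # most intensity comes from projector 0
```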
Cubee: Thinking Inside The Box
 
01:46
Cubee is the original prototype of our "reactive" cubic head-coupled 3D display. Cubee is suspended to enable viewing from all sides and manipulation in space. It encloses a small virtual space inside the physical boundaries of a cubic display. The display motion is mapped into the virtual scene to create a compelling interaction metaphor of objects inside a box. The system presents a tangible way to evaluate interactive realism of dynamic simulations.
Views: 199 HCT UBC
A 3D Cubic Puzzle in pCubee - Public Video
 
03:20
A public presentation video showing our interaction design of a cubic puzzle implemented in pCubee, as part of the submission to the 3DUI 2011 contest.
Views: 2024 HCT UBC
Glove-TalkI-Part1.mov
 
04:04
Explanation of the Glove-Talk system (part 1 of 3). Refer to: Glove-Talk: A Neural Network Interface Between a Data-Glove and a Speech Synthesizer, Fels, S. and Hinton, G.; IEEE Transactions on Neural Networks, Vol. 4, No. 1, pp. 2-8, 1993.
Views: 164 HCT UBC
pCubee ACM Multimedia 2009 Demo
 
02:14
Awarded best demonstration in the Technical Demonstrations session at ACM Multimedia 2009.
Views: 129 HCT UBC
Iamascope: Short demonstration performance
 
01:13
The Iamascope is an interactive multimedia artwork that combines computer video, graphics, vision, and audio technology, enabling performers to create striking imagery and sound. The result is an aesthetically uplifting interactive experience. At an installation, the user takes the place of a colourful piece of floating glass inside a computer-generated kaleidoscope, and simultaneously views the kaleidoscopic image of themselves on a huge screen in real time. By applying image processing to the kaleidoscopic image, the performer's body movements directly control music in a beautiful dance of symmetry with the image. The image processing uses simple intensity differences over time, calculated in real time. The responsive nature of the whole system allows users to have an intimate, engaging, satisfying multimedia experience.
Views: 61 HCT UBC
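The Iamascope entries in this list mention that the music is driven by simple intensity differences between frames. A minimal sketch of that idea, with assumptions flagged (OpenCV camera index 0, an arbitrary activity threshold, and a placeholder play_note() standing in for the real sound engine):

```python
# Sketch: frame-differencing as a motion-to-music trigger (not the Iamascope's exact code).
import cv2
import numpy as np

def play_note(velocity):
    print(f"note, velocity {velocity}")  # placeholder for the actual music output

cap = cv2.VideoCapture(0)                # assumed camera index
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
for _ in range(300):                     # run for a few hundred frames
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    activity = float(np.mean(cv2.absdiff(gray, prev)))  # mean intensity change over time
    if activity > 5.0:                   # assumed threshold: enough body movement
        play_note(min(127, int(activity)))  # more motion -> louder note
    prev = gray
cap.release()
```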
KEYed User Interface: A music sequencing interface
 
01:51
Our primary goal is to reduce the need for keyboard and mouse during a music composing task. We accomplish this by moving the most commonly used computer keyboard macros and mouse functions to the MIDI controller keyboard. This allows the composer to work more efficiently. An example of a macro that can be relocated to the controller is the copy function, or [Control]-[C], which copies a highlighted sequence to the clipboard. To distinguish between keystrokes that represent a note and keystrokes that represent a macro, a momentary foot pedal is used as a mode switch.
Views: 13 HCT UBC
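As a hedged sketch of the macro-relocation idea in the KEYed entry above (the original key assignments and host software are not given here; the mido and pyautogui libraries, the sustain pedal as the mode switch, and the note-to-macro table are all assumptions for illustration):

```python
# Sketch: MIDI keys act as editor macros while a foot pedal (CC 64) is held down.
import mido
import pyautogui

MACROS = {
    60: ('ctrl', 'c'),   # middle C -> copy (hypothetical assignment)
    62: ('ctrl', 'v'),   # D -> paste (hypothetical assignment)
}
pedal_down = False

with mido.open_input() as port:          # default MIDI input port
    for msg in port:
        if msg.type == 'control_change' and msg.control == 64:
            pedal_down = msg.value >= 64         # pedal held = macro mode
        elif msg.type == 'note_on' and msg.velocity > 0 and pedal_down:
            keys = MACROS.get(msg.note)
            if keys:
                pyautogui.hotkey(*keys)          # keystroke is a macro, not a note
```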
Tooka: A Two Person Flute - Video of NIME2002 version of Tooka being played
 
03:40
We describe three new music controllers, each designed to be played by two players. As the intimacy between two people increases, so does their ability to anticipate and predict the other's actions. We hypothesize that this intimacy between two people can be used as a basis for new controllers for musical expression. Looking at ways people communicate non-verbally, we are developing three new instruments based on different communication channels. The Tooka is a hollow tube with a pressure sensor and buttons for each player. Players place opposite ends in their mouths and modulate the pressure in the tube with their tongues and lungs, controlling sound. Coordinated button presses control the music as well. The Pushka, yet to be built, is a semi-rigid rod with strain gauges and position sensors to track the rod's position. Each player holds opposite ends of the rod and manipulates it together. Bend, endpoint position, velocity, acceleration and torque are mapped to musical parameters. The Pullka, yet to be built, is simply a string attached at both ends with two bridges. Tension is measured with strain gauges. Players manipulate the string tension at each end together to modulate sound. We are looking at different musical mappings appropriate for two players. This project is open for 496 projects to continue to improve Tooka and/or make Pushka and Pullka. Musical knowledge is an asset.
Views: 593 HCT UBC
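The Tooka's actual mapping is not spelled out above, so purely as an illustration of pressure-plus-buttons control (the pressure range, three buttons per player and the scale-degree encoding are all invented here):

```python
# Illustrative only: tube pressure drives amplitude, coordinated buttons pick the pitch.
def tooka_voice(pressure_kpa, buttons_a, buttons_b,
                base_midi=60, p_min=100.0, p_max=110.0):
    """Return (midi_note, amplitude in 0..1) from sensed pressure and the two players' buttons."""
    amp = max(0.0, min(1.0, (pressure_kpa - p_min) / (p_max - p_min)))
    # Treat the combined button state as a binary-coded scale offset (assumption).
    degree = sum(2 ** i for i, pressed in enumerate(buttons_a + buttons_b) if pressed)
    return base_midi + degree, amp

print(tooka_voice(104.0, [1, 0, 0], [0, 1, 0]))  # -> (77, 0.4)
```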
Analysis and Practical Minimization of Registration Error in a Spherical Fish Tank VR System
 
01:52
We describe the design, implementation and detailed visual error analysis of a 3D perspective-corrected spherical display that uses multiple calibrated, rear-projected pico-projectors. The display system is calibrated via 3D reconstruction using a single inexpensive camera, which enables both view-independent and view-dependent applications, the latter also known as Fish Tank Virtual Reality (FTVR). We perform error analysis of the system in terms of display calibration error and head-tracking error using a mathematical model. We found that head-tracking error causes significantly more eye angular error than display calibration error; that angular error becomes more sensitive to tracking error when the viewer moves closer to the sphere; and that angular error is sensitive to the distance between the virtual object and its corresponding pixel on the surface. Taken together, these results provide practical guidelines for building a spherical FTVR display and can be applied to other configurations of geometric displays. Authors: Qian Zhou, Gregor Miller, Kai Wu, Ian Stavness, Sidney Fels
Views: 99 HCT UBC
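The following is not the authors' model, just a small numpy illustration (geometry, object position and error magnitude invented) of how a head-tracking offset turns into angular error at the eye; it reproduces the reported trend that the error grows as the viewer moves closer to the sphere:

```python
import numpy as np

def angular_error(eye_true, eye_tracked, virtual_point, sphere_center, radius):
    """The pixel is placed where the tracked-eye ray to the virtual point crosses the sphere;
    return the angle (radians) between that pixel and the virtual point seen from the TRUE eye."""
    d = virtual_point - eye_tracked
    d = d / np.linalg.norm(d)
    oc = eye_tracked - sphere_center                     # ray-sphere intersection
    b = np.dot(oc, d)
    t = -b - np.sqrt(b * b - (np.dot(oc, oc) - radius ** 2))
    pixel = eye_tracked + t * d                          # point rendered on the surface
    u = pixel - eye_true
    v = virtual_point - eye_true
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

center = np.array([0.0, 0.0, 0.0])
obj = np.array([0.0, 0.0, 0.05])            # virtual object inside the sphere, near its centre
track_err = np.array([0.005, 0.0, 0.0])     # 5 mm head-tracking error
for dist in (0.3, 0.6, 1.2):                # viewer distance from the sphere centre (m)
    eye = np.array([0.0, 0.0, dist])
    deg = np.degrees(angular_error(eye, eye + track_err, obj, center, radius=0.15))
    print(f"viewer at {dist} m: {deg:.2f} deg of angular error")
```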
Tooka: Video of 2004 version of Tooka being played in improv with other instruments
 
04:10
We describe three new music controllers, each designed to be played by two players. As the intimacy between two people increases, so does their ability to anticipate and predict the other's actions. We hypothesize that this intimacy between two people can be used as a basis for new controllers for musical expression. Looking at ways people communicate non-verbally, we are developing three new instruments based on different communication channels. The Tooka is a hollow tube with a pressure sensor and buttons for each player. Players place opposite ends in their mouths and modulate the pressure in the tube with their tongues and lungs, controlling sound. Coordinated button presses control the music as well. The Pushka, yet to be built, is a semi-rigid rod with strain gauges and position sensors to track the rod's position. Each player holds opposite ends of the rod and manipulates it together. Bend, endpoint position, velocity, acceleration and torque are mapped to musical parameters. The Pullka, yet to be built, is simply a string attached at both ends with two bridges. Tension is measured with strain gauges. Players manipulate the string tension at each end together to modulate sound. We are looking at different musical mappings appropriate for two players. This project is open for 496 projects to continue to improve Tooka and/or make Pushka and Pullka. Musical knowledge is an asset.
Views: 80 HCT UBC
Waking Dream: Reality of Awake and Dream (CBC)
 
04:01
We live in two illusory states: awake and dream. The two only co-exist at a special time during a "waking dream". At this point, we only exist; dream and awake co-exist. This can happen when we are waking up in the morning and is accompanied by a strong sense of situatedness and paralysis. It can be an unsettling, frightening, and enlightening moment. In one experience, we feel pressure on our chest holding us down in our bed but we can see the room around us. Something is happening around us, trying to get us out of bed, but we can't get up. We are aware but immobile. Tension mounts and we try harder and harder to rise. We panic and struggle. Then we realize we are dreaming and fall back asleep, hoping to really wake up. This pattern cycles around as if layers of consciousness are being peeled back. In "Waking Dream", we explore this moment of coexistence. What does it mean? Is this "reality" free of illusion?
Views: 34 HCT UBC
Iamascope: Introduction to how the Iamascope works
 
01:49
The Iamascope is an interactive multimedia artwork that combines computer video, graphics, vision, and audio technology, enabling performers to create striking imagery and sound. The result is an aesthetically uplifting interactive experience. At an installation, the user takes the place of a colourful piece of floating glass inside a computer-generated kaleidoscope, and simultaneously views the kaleidoscopic image of themselves on a huge screen in real time. By applying image processing to the kaleidoscopic image, the performer's body movements directly control music in a beautiful dance of symmetry with the image. The image processing uses simple intensity differences over time, calculated in real time. The responsive nature of the whole system allows users to have an intimate, engaging, satisfying multimedia experience.
Views: 147 HCT UBC
Visualization of Personal History for Video Navigation - CHI2014
 
00:31
We compared two different visualizations of video history in a prototype viewer: Video Tiles and Video Timeline. Video Timeline extends the commonly employed list-based visualization for browsing history by sizing items to reflect viewing heuristics and by filling the full screen with more history items. Video Tiles lays history items out in a grid, following predefined templates based on item heuristics and occurrence; it uses the screen space effectively by presenting more history items at a time. These visualizations were evaluated against the state-of-the-art method by asking participants to share previously seen affective clips. Our studies show that our visualizations outperform the current method and are perceived as intuitive and strongly preferred. Based on these results, Video Tiles and Video Timeline provide an effective addition to video viewers to help manage the growing quantity of video. They give users insight into their navigation patterns, allowing them to quickly find previously seen intervals; this leads to applications such as efficient clip sharing, simpler authoring and video summarization. Visualization of Personal History for Video Navigation. Abir Al Hajri, Gregor Miller, Matthew Fong and Sidney Fels. Human Communication Technologies, University of British Columbia. Presented at CHI 2014 in Toronto, Canada.
Views: 222 HCT UBC
Iamascope: Performance by Naomi Takano at Opera Totale 4 (1999)
 
06:40
The Iamascope is an interactive multimedia artwork that combines computer video, graphics, vision, and audio technology, enabling performers to create striking imagery and sound. The result is an aesthetically uplifting interactive experience. At an installation, the user takes the place of a colourful piece of floating glass inside a computer-generated kaleidoscope, and simultaneously views the kaleidoscopic image of themselves on a huge screen in real time. By applying image processing to the kaleidoscopic image, the performer's body movements directly control music in a beautiful dance of symmetry with the image. The image processing uses simple intensity differences over time, calculated in real time. The responsive nature of the whole system allows users to have an intimate, engaging, satisfying multimedia experience.
Views: 41 HCT UBC
Hockey Player Tracking from Multiple Views
 
00:18
An example of our computer vision method for tracking hockey players using multiple synchronised views.
Views: 894 HCT UBC
MediaDiver
 
02:47
Multi-view video viewing and annotation tool developed at the UBC HCT lab
Views: 249 HCT UBC
MediaDiver @ CHI 2011 Interactivity
 
03:12
MediaDiver: A demonstration of our new interface for browsing multiple view video containing dynamic objects. In our interface users can enter comments to represent any time or view of the video, or even any object within the video. These comments can be used later to help users navigate the video, such as automatically switching views to follow an object.
Views: 315 HCT UBC
Tooka: a Two Person Flute - Video of 2004 version of Tooka being played
 
02:01
We describe three new music controllers, each designed to be played by two players. As the intimacy between two people increases, so does their ability to anticipate and predict the other's actions. We hypothesize that this intimacy between two people can be used as a basis for new controllers for musical expression. Looking at ways people communicate non-verbally, we are developing three new instruments based on different communication channels. The Tooka is a hollow tube with a pressure sensor and buttons for each player. Players place opposite ends in their mouths and modulate the pressure in the tube with their tongues and lungs, controlling sound. Coordinated button presses control the music as well. The Pushka, yet to be built, is a semi-rigid rod with strain gauges and position sensors to track the rod's position. Each player holds opposite ends of the rod and manipulates it together. Bend, endpoint position, velocity, acceleration and torque are mapped to musical parameters. The Pullka, yet to be built, is simply a string attached at both ends with two bridges. Tension is measured with strain gauges. Players manipulate the string tension at each end together to modulate sound. We are looking at different musical mappings appropriate for two players. This project is open for 496 projects to continue to improve Tooka and/or make Pushka and Pullka. Musical knowledge is an asset.
Views: 69 HCT UBC
Siggraph2018
 
02:23
SIGGRAPH Emerging Technologies 2018 submission
Views: 227 HCT UBC
An Investigation of Textbook-Style Highlighting for Video
 
00:53
Video is used extensively as an instructional aid within educational contexts such as blended (flipped) courses, self-learning with MOOCs and informal learning through online tutorials. One challenge is providing mechanisms for students to manage their video collection and quickly review or search for content. We provided students with a number of video interface features to establish which they would find most useful for video courses. From this, we designed an interface which uses textbook-style highlighting on a video filmstrip and transcript, both presented adjacent to a video player. This interface was qualitatively evaluated to determine if highlighting works well for saving intervals, and what strategies students use when given both direct video highlighting and the text-based transcript interface. Our participants reported that highlighting is a useful addition to instructional video. The familiar interaction of highlighting text was preferred, with the filmstrip used for intervals with more visual stimuli. Paper and details: http://graphicsinterface.org/proceedings/gi2016/gi2016-26/
Views: 78 HCT UBC
A 3D Cubic Puzzle in pCubee - Sample User Video
 
02:38
A demonstration video showing a sample user interacting with a cubic puzzle inside pCubee as part of the submission for the 3DUI 2011 contest.
Views: 604 HCT UBC
Casual Authoring Using a Video Navigation History
 
02:46
We propose the use of a personal video navigation history, which records a user's viewing behaviour, as a basis for casual video editing and sharing. Our novel interaction supports users' navigation of previously-viewed intervals to construct new videos via simple, well-known methods such as playlists. The intervals in the history can be individually previewed and searched, filtered to identify frequently-viewed sections, and added to a playlist from which they can be refined and re-ordered to create new videos. Interval selection and playlist creation using a history-based interaction is compared to a more conventional filmstrip-based technique. Using our novel interaction, participants took at most two-thirds the time taken by the conventional method, and we found users strongly prefer using a history-based mechanism to find previously-viewed intervals compared to a state-of-the-art method. Our study concludes that users are comfortable using a video history, and are happy to re-watch interesting parts of video to utilize the history's advantages. Matthew Fong, Abir Al Hajri, Gregor Miller and Sidney Fels. Human Communication Technologies, University of British Columbia. Presented at GI 2014 in Montreal, Quebec, Canada.
Views: 155 HCT UBC
Visualizing Single-Video Viewing Statistics for Navigation and Sharing
 
03:09
Online video viewing has seen explosive growth, yet simple tools to facilitate navigation and sharing of the large video space have not kept pace. We propose the use of single-video viewing statistics as the basis for a visualization of a video called the View Count Record (VCR). Our novel visualization utilizes variable-sized thumbnails to represent the popularity (or affectiveness) of video intervals, and provides simple mechanisms for fast navigation, informed search, video previews, simple sharing of favourite clips and summarization. The viewing statistics are generated from an individual's video consumption, or crowd-sourced from many people watching the same video; the two provide different scenarios for application (e.g. implicit tagging of interesting events for an individual, and quickly navigating to others' most-viewed scenes for crowd-sourced statistics). A comparative user study evaluates the effectiveness of the VCR by asking participants to share previously seen affective parts within videos. Experimental results demonstrate that the VCR outperforms the state-of-the-art in a search task, and it has been welcomed as a recommendation tool for clips within videos (using crowd-sourced statistics). It is perceived by participants as effective, intuitive and strongly preferred to current methods. Abir Al Hajri, Matthew Fong, Gregor Miller and Sidney Fels. Human Communication Technologies, University of British Columbia. Presented at GI 2014 in Montreal, Quebec, Canada.
Views: 87 HCT UBC
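As a sketch only of the viewing-statistics idea behind the VCR (the interval granularity, thumbnail sizes and sample history below are invented for illustration):

```python
# Turn a navigation history into per-interval view counts, then size thumbnails by popularity.
def view_counts(watched_intervals, video_len_s, bin_s=5):
    """watched_intervals: list of (start_s, end_s) spans the viewer actually played."""
    bins = [0] * ((video_len_s + bin_s - 1) // bin_s)
    for start, end in watched_intervals:
        for b in range(int(start) // bin_s, int(end) // bin_s + 1):
            if 0 <= b < len(bins):
                bins[b] += 1
    return bins

def thumbnail_widths(counts, min_px=40, max_px=160):
    top = max(counts) or 1
    return [min_px + (max_px - min_px) * c / top for c in counts]

history = [(0, 12), (5, 30), (5, 30), (110, 118)]        # one user's viewing record
counts = view_counts(history, video_len_s=120)
print(counts)                    # most-replayed bins stand out...
print(thumbnail_widths(counts))  # ...and get the largest thumbnails
```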
Spheree at SIGGRAPH 2014 eTech
 
00:22
The Emerging Technologies Spheree exhibit at SIGGRAPH 2014, created as a collaboration between the University of British Columbia and the University of São Paulo.
Views: 313 HCT UBC
Automatic Camera Selection based on Faces
 
00:47
This is a demonstration of our Hive Framework using modules for camera access, image streaming, background subtraction, face detection and quality-of-view measures. Of the five views streaming live, the one which contains the most detected faces is shown to the user.
Views: 85 HCT UBC
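The Hive modules themselves are not shown here; as a generic stand-in for the same idea, a hedged OpenCV sketch (camera indices, cascade choice and detection parameters are assumptions) that picks the live view containing the most detected faces:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
captures = [cv2.VideoCapture(i) for i in range(5)]   # five live camera streams (assumed indices)

def best_view():
    """Return (index, frame) of the view with the most detected faces right now."""
    best_idx, best_count, best_frame = -1, -1, None
    for i, cap in enumerate(captures):
        ok, frame = cap.read()
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > best_count:
            best_idx, best_count, best_frame = i, len(faces), frame
    return best_idx, best_frame
```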
Glove-TalkI-Part3.mov
 
02:16
Explanation of the Glove-Talk system (part 3 of 3). Refer to: Glove-Talk: A Neural Network Interface Between a Data-Glove and a Speech Synthesizer, Fels, S. and Hinton, G.; IEEE Transactions on Neural Networks, Vol. 4, No. 1, pp. 2-8, 1993.
Views: 69 HCT UBC
GloveTalkII - Green Eggs and Ham
 
01:32
Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
Views: 25 HCT UBC
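Purely as an illustration of the continuous gesture-to-parameter mapping described above (a toy feed-forward network with random weights, not the trained Glove-TalkII networks; the 26-feature input size is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(26, 16)) * 0.1, np.zeros(16)   # glove/tracker features -> hidden layer
W2, b2 = rng.normal(size=(16, 10)) * 0.1, np.zeros(10)   # hidden layer -> 10 formant controls

def hand_to_speech_params(glove_features):
    """Map one frame of hand data to ten synthesizer control values in [0, 1]."""
    h = np.tanh(glove_features @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

params = hand_to_speech_params(rng.normal(size=26))
print(params.round(3))   # these ten values would be sent to the formant synthesizer each frame
```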
GloveTalkII - Unrehearsed conversation
 
05:54
Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
Views: 34 HCT UBC
GloveTalkII - Numbers
 
00:34
Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
Views: 5 HCT UBC
1,001,001 Faces: A gradient-based face navigation interface
 
02:01
Conventional face navigation systems focus on finding new faces via facial features. This method, though intuitive, has limitations: notably, it is geared towards distinctive features and hence does not work effectively for finding typical faces. We investigate an alternative approach to search and navigate through an overall face configuration space. For this, we implemented an interface which shows gradients of faces arranged spatially using an n-dimensional norm-based face generation method. It is like a colour wheel, but with faces.
Views: 21 HCT UBC
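A minimal sketch of the norm-based layout idea in the entry above, under assumed data (random mean face and basis vectors, a 64x64 image size, and a 5x5 grid; the real system's face model is not reproduced here):

```python
import numpy as np

DIM = 64 * 64                          # hypothetical face image, flattened
rng = np.random.default_rng(1)
mean_face = rng.random(DIM)            # stand-in for the average (most "typical") face
basis = rng.normal(size=(2, DIM))      # two norm-based axes spanning the grid

def face_at(cx, cy):
    """Generate the face at grid coordinates (cx, cy) in [-1, 1]: mean plus scaled axes."""
    return mean_face + cx * basis[0] + cy * basis[1]

# A 5x5 gradient of faces, the "colour wheel but with faces" layout.
grid = [[face_at(x, y) for x in np.linspace(-1, 1, 5)] for y in np.linspace(-1, 1, 5)]
print(len(grid), len(grid[0]), grid[0][0].shape)
```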
Iamascope: Fun video mix of Iamascope image and sound
 
00:17
The Iamascope is an interactive multimedia artwork that combines computer video, graphics, vision, and audio technology, enabling performers to create striking imagery and sound. The result is an aesthetically uplifting interactive experience. At an installation, the user takes the place of a colourful piece of floating glass inside a computer-generated kaleidoscope, and simultaneously views the kaleidoscopic image of themselves on a huge screen in real time. By applying image processing to the kaleidoscopic image, the performer's body movements directly control music in a beautiful dance of symmetry with the image. The image processing uses simple intensity differences over time, calculated in real time. The responsive nature of the whole system allows users to have an intimate, engaging, satisfying multimedia experience.
Views: 9 HCT UBC
Hand Modeling for Adaptive Interfaces: Techniques for geometry and prediction of hand models
 
00:28
For existing human-computer interaction (HCI) techniques, keyboards and mice are probably the best-known input devices. However, these two devices constrain dexterity and naturalness while we interact with computer-controlled applications. This limitation becomes more apparent when we employ these devices in virtual reality applications. Thus, in recent years there has been a tremendous push in research toward novel input devices and techniques to find a more natural and friendly way to interact with computers. Within this research, the analysis of hand motion has attracted much attention among computer animation and virtual reality researchers, since hands can easily and naturally perform many complex tasks. Further, hands can express our feelings and allow us to non-verbally communicate with others through gesture. The human hand is a very complex and delicate mechanical structure with about 30 degrees of freedom, which varies among individuals. Consequently, successfully modeling a human hand for interpretation by computer becomes a necessary and significant task.
Views: 23 HCT UBC
GloveTalkII - Row, Row, Row your boat
 
00:24
Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
Views: 43 HCT UBC
FlowField: System in action
 
04:43
The FlowField system is an interactive art piece in which a dynamic particle simulation immerses participants in a CAVE Automatic Virtual Environment. Simulated physical particles flow in a closed cylindrical path around the participant, who feels he or she is standing inside this virtual cylinder of particles thanks to the stereoscopic particles projected in the CAVE. Input from the MTC Express is used to introduce obstructions into the particle flow. The system evokes a metaphor of fingers interrupting a continuous flow of water, allowing participants to use the MTC Express as a multi-point input device. This metaphor was chosen to emphasize the effectiveness of multi-point input over single-point input.
Views: 20 HCT UBC
GloveTalkII - Alphabet
 
00:40
Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
Views: 16 HCT UBC
Glove-TalkI-Part2.mov
 
02:18
Explanation of the Glove-Talk system (part 2 of 3). Refer to: Glove-Talk: A Neural Network Interface Between a Data-Glove and a Speech Synthesizer, Fels, S. and Hinton, G.; IEEE Transactions on Neural Networks, Vol. 4, No. 1, pp. 2-8, 1993.
Views: 48 HCT UBC
Malleable Surface Interface: Capturing touch interaction through a deformable surface
 
02:37
The malleable surface touch interface combines a deformable input surface and video processing to provide a whole-hand interface that exhibits many attributes of conventional touch interfaces, such as multi-point and pressure sensitivity. This interface also offers passive haptic feedback, which can be effective for applications such as sculpting or massage.
Views: 37 HCT UBC
Iamascope: People at Siggraph'97 playing with the Iamascope
 
03:40
The Iamascope is an interactive multimedia artwork that combines computer video, graphics, vision, and audio technology, enabling performers to create striking imagery and sound. The result is an aesthetically uplifting interactive experience. At an installation, the user takes the place of a colourful piece of floating glass inside a computer-generated kaleidoscope, and simultaneously views the kaleidoscopic image of themselves on a huge screen in real time. By applying image processing to the kaleidoscopic image, the performer's body movements directly control music in a beautiful dance of symmetry with the image. The image processing uses simple intensity differences over time, calculated in real time. The responsive nature of the whole system allows users to have an intimate, engaging, satisfying multimedia experience.
Views: 30 HCT UBC
Echology: An Interactive Spatial Sound and Video Artwork
 
02:03
We present a novel way of manipulating a spatial soundscape, one that encourages collaboration and exploration. Through a tabletop display surrounded by speakers and lights, participants are invited to engage in peaceful play with Beluga whales shown through a live web camera feed from the Vancouver Aquarium in Canada. Eight softly glowing buttons and a simple interface encourage collaboration with others who are also enjoying the swirling Beluga sounds overhead.
Views: 7 HCT UBC
FlowField: the architecture and operation of the FlowField interactive application
 
01:54
In FlowField, participants touch and caress a multi-point touchpad, the MTC Express, in a CAVE (Cave Automatic Virtual Environment), directly controlling a flowing particle field. Collisions in the particle field emit musical sounds providing a new type of musical interface that uses a dynamic flow process for its underlying musical structure. The particle flow field circles around the participant in a cylindrical path. Obstructions formed by whole hand input disturb the flow field like a hand in water. The interaction has very low latency and a fast frame rate, providing a visceral, dynamic experience. In FlowField, participants explore interaction through caress, suggesting reconnection with a sense of play, and experience a world through touch.
Views: 23 HCT UBC
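A tiny 2D stand-in for the flow described in the two FlowField entries above (the real system is a 3D CAVE application driven by the MTC Express; the circulating field, the touch repulsion and all constants here are invented):

```python
import numpy as np

N = 2000
pos = np.random.rand(N, 2) * 2 - 1          # particles scattered in [-1, 1]^2

def step(pos, touches, dt=0.02):
    """Advance particles one step: circulate around the origin, pushed away from touch points."""
    vel = np.stack([-pos[:, 1], pos[:, 0]], axis=1)       # tangential flow, like the cylindrical path
    for t in touches:
        d = pos - t
        r2 = (d * d).sum(axis=1, keepdims=True) + 1e-4
        vel = vel + 0.05 * d / r2                         # obstruction: a hand in the water
    return pos + dt * vel

for _ in range(100):
    pos = step(pos, touches=[np.array([0.5, 0.0])])       # one finger resting on the pad
print(pos.mean(axis=0))
```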
Forklift Ballet: A ballet exploring the relationship of people and machines.
 
22:06
The forklift ballet is an homage to the special relationship between humanity and machinity. The machines should become part of us and enhance our humanity. When they do, they disappear and we see only an extended person. The experience of embodying and using the machine is its own pleasure.
Views: 43 HCT UBC
GloveTalkII - Initial GTII Vocabulary
 
00:59
Glove-TalkII is a system that translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural-sounding pitch variations than a text-to-speech synthesizer.
Views: 9 HCT UBC
PlesioPhone: A Cellphone and Telephone based Interactive Artwork
 
06:02
The Plesiophone series is a set of four interactive artworks that comment on the evolution of human communication, using the medium of the telephone and cell phone. Plesio is from the Greek word for near, and is the opposite of the Tele in telephone. Each piece attempts to provide an interactive experience that allows participants to step into the future essence of desired communication. Through the pieces we look at the question, "What is our communication future?" Do we want a "puff of air" to open our closed ears, as in the lands of Gulliver's Travels?
Views: 10 HCT UBC
Swimming Across the Pacific: Swimming as a new paradigm for virtual environment navigation
 
03:28
This project comes 19 years after the "Swim Across the Atlantic Ocean" and is the continuation and development of the motivations and significance of the earlier project. The methods and means have changed considerably: in 1982 the swimming pool of the ocean liner Queen Elizabeth 2 was used during a five-day voyage from Southampton to New York; this time the setting is a commercial aircraft flying from San Francisco to Tokyo. No water is involved; Alzek Misheff's virtual strokes take place inside a lightweight, easily dismantled cage built by Sidney Fels, hanging amidst the passengers and the other participants of the project.
Views: 165 HCT UBC
Spherical Fish Tank VR research demo for IEEE VR 2017
 
01:07
Spherical Fish Tank VR research demo for IEEE VR 2017, March 20-22, Los Angeles, CA, USA.
Views: 143 HCT UBC
The 2Hearts Project: Enhancing non-verbal communication through music and graphics
 
03:55
Two people enter a Virtual Reality projection room. They are instrumented with heartbeat sensors. The participants face each other, speaking and touching as they interact. They hear music that is linked to their heartbeats, changing in harmony, rhythm and tone as heart rates rise and fall. They are surrounded by colorful virtual auras that move and change in response to the changing heart rates. They levitate above the virtual landscape and move to different regions as their interaction progresses, and the music shifts to different instrumentations and moods... This is the 2Hearts Reactive Environment (2HRE). The primary motivation for developing the 2HRE is to explore a new paradigm for computer-assisted interpersonal interaction. By using heartbeat signals as control inputs to music and graphics, we create a system in which the participants must work together, and work through each other, to achieve a desired result. Each action towards the other participant will be filtered through that person's unique personality and cognitive responses before affecting his heartbeat; essentially, each player becomes an instrument. Gaining skill at this instrument requires and promotes a high degree of intimacy between the participants.
Views: 67 HCT UBC
Video Cubism: An interactive video visualization technique
 
02:45
Viewing video data along the X-T and Y-T axes has appeared in several forms in the literature. The main distinction of this work is that the cut plane (or cut sphere) used to view the video data can be moved to any angle and position in real time. This provides an opportunity to interactively explore the video cube from many different angles, producing both aesthetically interesting static images and motion effects. Currently, a single cut plane or a cut sphere is supported. With the cut plane, investigation is like being able to move a window around the video cube to see all sides as well as inside the video data; hence the name video cubism. With the cut sphere, unusual images are seen as the curved surface cuts through time and space.
Views: 113 HCT UBC
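A sketch of the core cut-plane operation described above, on a synthetic video cube with nearest-neighbour sampling (the volume, plane parameters and output size are all invented; the real piece does this interactively on captured footage):

```python
import numpy as np

T, H, W = 120, 90, 160
video = np.random.randint(0, 256, (T, H, W), dtype=np.uint8)   # stand-in x-y-t volume

def cut_plane(video, origin, u_axis, v_axis, out_h=90, out_w=160):
    """Sample the volume on the plane origin + a*u_axis + b*v_axis (coordinates are (t, y, x))."""
    a = np.linspace(0, 1, out_w)[None, :, None]
    b = np.linspace(0, 1, out_h)[:, None, None]
    pts = origin + a * u_axis + b * v_axis                     # (out_h, out_w, 3) sample points
    idx = np.clip(np.rint(pts), 0, np.array(video.shape) - 1).astype(int)
    return video[idx[..., 0], idx[..., 1], idx[..., 2]]        # nearest-neighbour lookup

# A plane tilted through time: its left edge sits at frame 0, its right edge at frame 119.
img = cut_plane(video, origin=np.array([0.0, 0.0, 0.0]),
                u_axis=np.array([119.0, 0.0, 159.0]),
                v_axis=np.array([0.0, 89.0, 0.0]))
print(img.shape)   # (90, 160) image mixing space and time
```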
Waking Dream: Reality of Awake and Dream (UBC)
 
37:47
We live in two illusory states: awake and dream. The two only co-exist at a special time during a "waking dream". At this point, we only exist; dream and awake co-exist. This can happen when we are waking up in the morning and is accompanied by a strong sense of situatedness and paralysis. It can be an unsettling, frightening, and enlightening moment. In one experience, we feel pressure on our chest holding us down in our bed but we can see the room around us. Something is happening around us, trying to get us out of bed, but we can't get up. We are aware but immobile. Tension mounts and we try harder and harder to rise. We panic and struggle. Then we realize we are dreaming and fall back asleep, hoping to really wake up. This pattern cycles around as if layers of consciousness are being peeled back. In "Waking Dream", we explore this moment of coexistence. What does it mean? Is this "reality" free of illusion?
Views: 49 HCT UBC