Osaka is famous for its food, nightlife, comedy and people. During the second week of November, one of Japan’s liveliest cities also played host to ACE 2016 (http://ace2016.net/). ACE, short for Advances in Computer Entertainment, is one of the leading international academic conferences covering the latest developments in technology and computing at the crossroads of academia, industry and entertainment.
Since its beginnings in Singapore in 2004, ACE has been held at diverse venues all over the world and is now in its 13th year. Having Osaka as this year’s host city was advantageous for participants living, working and studying in Japan, giving them easy access to cutting-edge research. For international visitors, many of whom were in Japan for the first time, Osaka offered the chance to experience the best of Kansai sightseeing alongside the latest developments and breakthroughs in art and technology.
While the scope of ACE broadly encompasses advances in computers and entertainment, the focus of each conference is shaped by the commercial, cultural and technological zeitgeist of the moment. It is impossible to cover all the wonderful research presented at ACE 2016 within the limited confines of this online article, so I’d like to highlight some innovative works that advance the state of technology and entertainment and that stood out (for me, at least).
Alice and Her Friend
As the name of the conference makes explicit, the work presented and demonstrated at ACE revolves around entertainment. Entertainment, however, has many manifestations and means different things to different people. “Alice and Her Friend” describes a picture book without any actual pictures, one that helps visually impaired children enjoy the sensation and experience of picture book reading through multi-sensory interaction. The book itself is rendered in different shades of black. When folded, it resembles an accordion; when expanded, it stretches out so the narrative can be experienced. Through different sensors and a microcontroller, the reader feels their way through the story while receiving feedback, gaining a more immersive experience than reading alone could offer. Video
Reporting Solo
This past year has seen a major increase in the proliferation and adoption of live self-broadcasting. Periscope, Facebook Live and YouTube Live made headlines, and continue to do so, with the power and global audience those platforms give the user/presenter. Usually a laptop/PC, tablet or smartphone is all one needs to begin a broadcast. Since capturing and sharing the moment is what matters most to broadcasters, shaky camera work, mediocre sound and a lack of effects or graphics are usually understood to be the norm and forgiven. However, as the number of platforms and the available bandwidth have grown, some broadcasters and viewers want content presented in a more mature and sophisticated manner. The research team behind “Reporting Solo” aims to make the solitary broadcaster/reporter’s experience smoother and more professional. By facilitating a high-quality broadcast with proper lighting, graphics and on-screen effects, the researchers hope to design a system that allows the presenter to focus on presenting rather than fumbling with equipment, struggling through a broadcast or, possibly, not being able to broadcast at all.
“Reporting Solo” fills an important gap that often appears during technological leapfrogging: it provides middleware, in the form of a software/hardware support system, that enables users to fully exploit the current state of technology.
“Reporting Solo: A Design of Supporting System for Solo Live Reporting”
Kohei Matsumura (Ritsumeikan University) and Yoshinari Takegawa (Hirata/Takegawa Laboratory, School of Systems Information Science, Future University Hakodate)
Taifurin
While a small portion of the human (can they even be called that?) population still believes that climate change is not real, the natural disasters that have struck around the world in the last few years demonstrate otherwise. Japan, as is well known, is highly susceptible to a host of natural disasters and major weather phenomena. Typhoons are a common occurrence in Japan and have only kept increasing in strength and number from season to season. “Taifurin” aims to pair the aesthetics and beauty found in the Japanese idea of wabi-sabi with a low-cost yet effective typhoon alert system. The catchy name itself, a combination of typhoon and furin (a traditional Japanese wind chime), reflects its simplicity and functionality. Even the technical details of the system adhere to the spirit of wabi-sabi: it consists of a Raspberry Pi single-board computer with a multi-colored LED and a servo motor attached. Beyond all the practical implications, “Taifurin” also serves as a refreshing example that there is a place for an internet of NICE things.
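To make the idea concrete, here is a minimal sketch of how an alert level might be mapped onto the LED and the chime-striking servo. This is purely illustrative: the alert names, colors and intervals are invented, and the actual hardware calls (which the real system would make on the Raspberry Pi) are left out.

```python
# Hypothetical mapping from a typhoon alert level to a display style.
# None of these names or values come from the published Taifurin system;
# they only illustrate the general "ambient alert" pattern.

ALERT_STYLES = {
    "none":      {"led": (0, 255, 0),   "strike_interval_s": None},  # calm: green, chime silent
    "advisory":  {"led": (255, 255, 0), "strike_interval_s": 8.0},   # yellow, occasional chime
    "warning":   {"led": (255, 128, 0), "strike_interval_s": 3.0},   # orange, frequent chime
    "emergency": {"led": (255, 0, 0),   "strike_interval_s": 1.0},   # red, urgent chime
}

def style_for_alert(level: str) -> dict:
    """Return LED color and chime interval for an alert level.

    Unknown levels fall back to the calm state, so a bad feed
    never leaves the chime ringing.
    """
    return ALERT_STYLES.get(level, ALERT_STYLES["none"])
```

In a running installation, a loop would poll a weather feed, look up the style, set the LED color, and pulse the servo at the given interval; keeping that mapping in one small table is what lets the hardware stay as simple as the wabi-sabi aesthetic demands.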
“Taifurin: Wind-Chime Installation As A Novel Typhoon Early Warning System”
Paul Haimes, Tetsuaki Baba and Kumiko Kushiyama
IDEEA Lab (Graduate School of System Design, Tokyo Metropolitan University)
Passive Midair Display
Of course, entertainment is the primary motivation behind the research found at ACE. With so much emphasis recently placed on wearables, peripherals and specialized tools/hardware, being able to use everyday, affordable items to enhance entertainment is very welcome. The researchers behind “Passive Midair Display” present a novel method for interactively engaging with dark spaces using just a flashlight. The work is inspired by the very popular Japanese media franchise Yo-kai Watch[1], in which a boy is able to see ghosts with the help of a watch. In much the same vein, “Passive Midair Display” makes it possible to see what isn’t there by shedding some light on dark spaces with a flashlight. The natural movement of users holding flashlights is expected and factored into the interaction design. The user experience is dynamic: the angle and position of the light source cause different objects (characters) to appear in different places. The ease of use and low barrier to entry let users interact easily with and within their surroundings, and cast a light on the possibilities of enhanced and augmented ways of interacting with the environment.
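The idea that the beam’s angle and position determine where a character appears can be sketched with some basic ray geometry. This is not the paper’s method, just a toy model: the dark wall is the plane z = 0, and the flashlight is reduced to a position and a direction vector.

```python
# Toy model (not from the paper): find where a flashlight beam hits a
# flat wall, i.e. where a projected character could be placed.
# The wall is the plane z = 0; the flashlight is a point plus a direction.

def beam_hit_point(pos, direction):
    """Intersect the flashlight ray with the wall plane z = 0.

    pos, direction: (x, y, z) tuples.
    Returns the (x, y) wall point, or None if the beam is parallel
    to the wall or pointing away from it.
    """
    px, py, pz = pos
    dx, dy, dz = direction
    if dz == 0 or (pz / dz) > 0:   # parallel, or aimed away from the wall
        return None
    t = -pz / dz                   # ray parameter at which z reaches 0
    return (px + t * dx, py + t * dy)
```

Tilting the direction vector slides the hit point across the wall, which is exactly the dynamic the researchers exploit: small, natural hand movements move the apparition around the room.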
BathDrum2
Further reducing the head-mounted displays, wearables and smart devices a user needs in order to interact and play, “BathDrum2” proposes a way to make bathtime (more) fun, all in the comfort of one’s birthday suit. Using sensors, a projector and machine learning techniques, “BathDrum2” gives the bather a selection of percussion instruments at the edge of the bathtub that can be played by tapping on the projected image. Not only does the system use the location of the projected percussion image to generate sounds, it can also identify different tap tones based on finger gestures. Singing in the bath is a pastime that transcends borders, and with this system bathtime crooners will no longer have to sing a cappella.
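The article says only that tap tones are identified with machine learning; the specific method is not described. As a flavor of what tap-tone identification can look like, here is a minimal nearest-neighbour classifier over invented acoustic feature vectors (say, energy in three frequency bands), with made-up class labels.

```python
import math

# Illustrative sketch only: labelled feature vectors for three tap styles.
# Both the features and the labels are invented for this example and are
# not taken from the BathDrum2 paper.
TRAINING = [
    ((0.9, 0.2, 0.1), "fingertip"),
    ((0.4, 0.8, 0.3), "knuckle"),
    ((0.2, 0.3, 0.9), "flat_hand"),
]

def classify_tap(features):
    """Label a tap by its closest training example (Euclidean distance)."""
    return min(TRAINING, key=lambda ex: math.dist(ex[0], features))[1]
```

Whatever model the authors actually trained, the pipeline shape is the same: extract features from the tap sound, map them to a tone class, and then trigger the percussion sample for the instrument projected at that spot, fast enough that the latency stays playable.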
“BathDrum2: Percussion Instruments on a Bathtub Edge with Low Latency Tap Tone Identification”
Tomoyuki Sumida and Shigeyuki Hirai (Hirai Laboratory, Faculty of Computer Science and Engineering, Kyoto Sangyo University)