Webinar on Discovery in Distributed Multimodal Interaction

Speech, touch, typing and other forms of multimodal interaction are becoming
commonplace. At the same time, environments like cars, homes, classrooms and
offices are getting smarter and more connected. These two trends are opening
up the possibility of innovative applications where users' devices
automatically integrate themselves into their environments and provide
multimodal capabilities. For example, as a user brings a device into a car,
the car could automatically configure itself to match the user's preferences
for seat position, radio, and temperature. The car could then allow users to
control these functions by voice from their connected devices. As another example, in a hotel room,
users' devices could integrate themselves with the TV to make it possible to
immediately tune the TV by voice to their favorite channels. These
distributed, dynamic applications depend on the ability of devices and
environments to find each other and learn what modalities they support.

The W3C Multimodal Interaction Working Group is working on standards for
discovery and registration of modalities for user interaction in multimodal
systems. You are invited to attend a webinar on use cases and requirements
for these capabilities. The webinar will describe some of the use cases the
group has discussed, talk about new use cases, and present demos of modality
component discovery.

The webinar will take place on September 24, 2013 at 11:00 Eastern, and last for 90 minutes.

This webinar will be of interest to mobile application developer communities and platform providers working in industries such as health care, financial services, broadcasting, automotive, gaming and consumer devices.


  • Introduction (5 minutes):
    • Kazuyuki Ashimura, W3C
  • Overview of use cases for discovery and registration of modality components (10 minutes):
    • Deborah Dahl, Conversational Technologies
  • Industry presentations of use cases from the entertainment and consumer electronics industries (30 minutes):
    • Myra Einstein and Peter Rosenberg, NBCUniversal (20 minutes)
    • Jens Bachmann, Panasonic (20 minutes)
  • Use case discussion Q&A (10 minutes)
  • Some proposals for addressing the use cases (15 minutes):
    • Helena Rodriguez, Soixante-dix
  • Q&A (10 minutes)

Speaker Information

Kaz Ashimura is the W3C Activity Lead for the Web and TV Interest Group, the Multimodal Interaction Working Group, and the Voice Browser Working Group. He joined the W3C Team at Keio University SFC in April 2005. Before joining W3C, Kaz worked on research and development in speech and natural language processing. Kaz holds a BS in Mathematics from Kyoto University and is pursuing his Ph.D. in engineering at the Nara Institute of Science and Technology.

Dr. Deborah Dahl is the Principal at Conversational Technologies, which applies speech, language, and multimodal technologies to create innovative solutions. She serves as the Chair of the World Wide Web Consortium’s (W3C) Multimodal Interaction Working Group and is also a member of the W3C Voice Browser Working Group and the HTML 5 Working Group. Her primary technical interest is in multimodal spoken dialog systems. Dr. Dahl received a Speech Luminary award from Speech Technology Magazine in 2012.

Myra Einstein has been a member of the NBCUniversal team for over six years, applying her Master's in Interactive Telecommunications from NYU to build interactive TV systems and applications across the company. In that time, Myra has filed for several patents and was nominated for an Emmy for Outstanding New Approaches - Sports Event Coverage. In her current role as Manager, Technology Policy, Myra is advancing NBCUniversal's interests in developing standards for a synchronized TV ecosystem with standards organizations such as ATSC, DLNA, W3C, and others.

Peter Rosenberg, Enterprise Architect, Digital Media & Entertainment, worked in various aspects of television, film, and news production before joining NBC Sports as a systems analyst in 1995. Peter was NBCU's lead systems architect for the streaming of the 2008 Beijing Olympic Games, for which he received an Emmy award for Outstanding New Approaches - Sports Event Coverage. Currently, as a member of NBCU's Advanced Engineering Team, he is engaged in various strategic challenges and standards efforts in the area of online media delivery.

Jens Bachmann is a project leader at the Panasonic R&D Center in Langen, Germany. He studied computer science at the Goethe University of Frankfurt and joined Panasonic in 2002. Jens has a strong background in mobile communication and 3GPP standardization. He is currently following technology trends in W3C and M2M standardization and developing concepts for future smart CE solutions.

B. Helena Rodriguez is an engineer, artist, and cognitive scientist with over 15 years of experience in interaction technologies, multidisciplinary research, contemporary digital arts, and pervasive UX systems. She attended the Universidad de los Andes, earning a BA in Philosophy and Linguistics and a BA in Electronic Arts; the Sorbonne University, earning an MFA in Arts & Digital Media and an M.A. in Political Science for IT; and Telecom ParisTech, earning a summa cum laude Ph.D. in Computer Science. She is active in the semantic user interaction and multimodal standards activities at the W3C.

Raj Tumuluri, Principal at Openstream, is a co-author of the W3C MMI Architecture and a member of the W3C HTML-Speech and Web Speech efforts.


Registration is required to attend this event. Please register now.