Project Details
National Support for Project Together Anywhere, Together Anytime
Project Period: 1. 1. 2011 – 31. 12. 2011
Project Type: grant
Code: 7E11024
Agency: Ministry of Education, Youth and Sports of the Czech Republic (Ministerstvo školství, mládeže a tělovýchovy ČR)
Program: Support for projects of the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007–2013) under Act No. 171/2007 Coll.
Keywords: social interaction, multimedia processing
TA2 (Together Anywhere, Together Anytime), pronounced "tattoo", aims at defining end-to-end systems for the development and delivery of new, creative forms of interactive, immersive, high-quality media experiences for groups of users such as households and families. The overall vision of TA2 can be summarised as "making communications and engagement easier among groups of people separated in space and time".
One of the key components of TA2 is a set of generic and reliable tools for audio, video, and multimodal integration and recognition. This includes automatic extraction of cues from raw data streams. The running TA2 project stresses low-level "instantaneous" cues; it does not deal with semantic-aware integration of contextual information, which could significantly improve the quality of the cues.
The proposed TA2 project extension focuses on medium-level (context-aware) cues, taking into account not only low-level analysis outputs but also contextual information, e.g. about the activated scenario. The created semantic cues will be used by the TA2 system to orchestrate (i.e. frame, crop, and represent) the audio-visual elements of the interaction between people.
The addition of BUT to the consortium will allow the semantic relevance of the metadata extracted from the analysis to be interpreted within the particular contexts described in the project. This will make the subsequent orchestration of the video more effective and more efficient, and hence improve the end-user experience. The extension will enable building better applications that help families interact easily and openly through games, through improved semi-automatic production and publication of user-generated content, and through enhanced ambient connectedness between families.
2012
- BEDNAŘÍK, R.; VRZÁKOVÁ, H.; HRADIŠ, M. What you want to do next: A novel approach for intent prediction in gaze-based interaction. In ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, p. 83-90. ISBN: 978-1-4503-1221-9.
- HRADIŠ, M.; EIVAZI, S.; BEDNAŘÍK, R. Voice activity detection in video mediated communication from gaze. In ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications. Santa Barbara: Association for Computing Machinery, 2012, p. 329-332. ISBN: 978-1-4503-1221-9.
- HRADIŠ, M.; ŘEZNÍČEK, I.; BEHÚŇ, K. Semantic Class Detectors in Video Genre Recognition. In Proceedings of VISAPP 2012. Rome: SciTePress - Science and Technology Publications, 2012, p. 640-646. ISBN: 978-989-8565-03-7.
- KRÁL, J.; HRADIŠ, M. Restricted Boltzman Machines for Image Tag Suggestion. In Proceedings of the 19th Conference STUDENT EEICT 2012. Brno: Brno University of Technology, 2012, p. 1-5.
- MOTLÍČEK, P.; VALENTE, F.; SZŐKE, I. Improving Acoustic Based Keyword Spotting Using LVCSR Lattices. In Proc. International Conference on Acoustics, Speech, and Signal Processing 2012. Kyoto: IEEE Signal Processing Society, 2012, p. 4413-4416. ISBN: 978-1-4673-0044-5.
- POLÁČEK, O.; KLÍMA, M.; SPORKA, A.; ŽÁK, P.; HRADIŠ, M.; ZEMČÍK, P.; PROCHÁZKA, V. A Comparative Study on Distant Free-Hand Pointing. In EuroiTV '12: Proceedings of the 10th European Conference on Interactive TV and Video. Berlin, Germany: Association for Computing Machinery, 2012, p. 139-142. ISBN: 978-1-4503-1107-6.
2011
- HRADIŠ, M.; ŘEZNÍČEK, I.; BEHÚŇ, K. Brno University of Technology at MediaEval 2011 Genre Tagging Task. In Working Notes Proceedings of the MediaEval 2011 Workshop. CEUR Workshop Proceedings. Pisa, Italy: CEUR-WS.org, 2011, p. 1-2. ISSN: 1613-0073.
- ŘEZNÍČEK, I.; ZEMČÍK, P. On-line human action detection using space-time interest points. In Zborník príspevkov prezentovaných na konferencii ITAT, september 2011. Praha: Faculty of Mathematics and Physics, Charles University, 2011, p. 39-45. ISBN: 978-80-89557-01-1.
2010
- HRADIŠ, M.; BERAN, V.; ŘEZNÍČEK, I.; HEROUT, A.; BAŘINA, D.; VLČEK, A.; ZEMČÍK, P. Brno University of Technology at TRECVid 2010 SIN, CCD. In 2010 TREC Video Retrieval Evaluation Notebook Papers. Gaithersburg, MD: National Institute of Standards and Technology, 2010, p. 1-10.
- ŘEZNÍČEK, I.; BAŘINA, D. Classifier creation framework for diverse classification tasks. In Proceedings of the DT Workshop. Žilina: Brno University of Technology, 2010, p. 50-52. ISBN: 978-80-554-0304-5.
- ŽÁK, P.; BARTOŇ, R.; ZEMČÍK, P. Vision based user interface framework. In Proceedings of the DT Workshop. Žilina: 2010, p. 100-102. ISBN: 978-80-554-0304-5.