KOBAYASHI Yoshinori
Mathematics, Electronics and Informatics Division, Professor
Department of Information and Computer Sciences

Researcher information

■ Degree
  • -, The University of Tokyo
■ Research Keyword
  • Human Robot Interaction
  • Human Computer Interaction
  • Computer Vision
■ Field Of Study
  • Informatics, Intelligent robotics
  • Informatics, Perceptual information processing
■ Career
  • Apr. 2020 - Present, Saitama University
  • Apr. 2014 - Mar. 2020, Saitama University
  • Oct. 2007 - Mar. 2014, Saitama University
  • Apr. 2000 - Jun. 2004
■ Educational Background
  • Sep. 2007, The University of Tokyo, Graduate School of Information Science and Technology
■ Member History
  • 01 Oct. 2010
    Institute of Electronics, Information and Communication Engineers (IEICE), Standing Reviewer (常任査読委員)
  • Business Planning Committee (事業計画委員会), Member
■ Award
  • 16 Mar. 2024, Encouragement Award (奨励賞)
  • 05 Nov. 2023, Research Presentation Grand Prize (研究発表大賞)
  • 16 Dec. 2021, Human Communication Award (ヒューマンコミュニケーション賞)
  • 28 Jun. 2018, Best Paper Award
  • 14 Jun. 2018, Best Academic Award (最優秀学術賞)
  • 29 Jan. 2015, Best Poster Award
  • 05 Aug. 2014, Best Paper Award
  • 06 Mar. 2013, Best Demonstration Award
  • 15 Mar. 2011, Academic Encouragement Award (学術奨励賞)
  • 31 May 2010, IPSJ Outstanding Paper Award (情報処理学会論文賞)

Performance information

■ Paper
  • Pedestrian Tracking Using Ankle-Level 2D-LiDAR Based on ByteTrack
    Md. Mohibullah; Yuhei Hironaka; Yusuke Suda; Ryota Suzuki; Mahmudul Hasan; Yoshinori Kobayashi
    Lecture Notes in Computer Science, Volume:15046, First page:211, Last page:222, Jan. 2025, [Reviewed], [Last]
    Springer Nature Switzerland, English, Scientific journal
    DOI:https://doi.org/10.1007/978-3-031-77392-1_16
    DOI ID:10.1007/978-3-031-77392-1_16, ISSN:0302-9743, eISSN:1611-3349
  • Impression Evaluation of Chat Robot with Bodily Emotional Expression Incorporating Large-Scale Language Models
    Zhong Qiang; Yukiharu Nagai; Hisato Fukuda; Ryota Suzuki; Yoshinori Kobayashi
    Lecture Notes in Computer Science, Volume:14871, First page:439, Last page:449, Jul. 2024, [Reviewed], [Last]
    Springer Nature Singapore, English, Scientific journal
    DOI:https://doi.org/10.1007/978-981-97-5609-4_34
    DOI ID:10.1007/978-981-97-5609-4_34, ISSN:0302-9743, eISSN:1611-3349
  • Multimodal Emotion Recognition through Deep Fusion of Audio-Visual Data               
    T. Sultana, M. Jahan, M.K. Uddin, Y. Kobayashi, M. Hasan
    International Conference on Computer and Information Technology (ICCIT), First page:1, Last page:5, 2023
  • Safety Helmet Detection of Workers in Construction Site using YOLOv8               
    S.S. Mahmud, M.A. Islam, K.J. Ritu, M. Hasan, Y. Kobayashi, M. Mohibullah
    International Conference on Computer and Information Technology (ICCIT), First page:1, Last page:6, 2023
  • SelfBOT: An Automated Wheel-Chair Control Using Facial Gestures Only               
    K.J. Ritu, K. Ahammad, M. Mohibullah, M. Khatun, M.Z. Uddin, M.K. Uddin, Y. Kobayashi, M. Hasan
    International Conference on Computer and Information Technology (ICCIT), First page:1, Last page:6, 2023
  • LiDAR-based Detection, Tracking, and Property Estimation: A Contemporary Review               
    M. Hasan, J. Hanawa, R. Goto, R. Suzuki, H. Fukuda, Y. Kuno, Y. Kobayashi
    Neurocomputing, Volume:506, Number:C, First page:393, Last page:405, 2022
  • Enhancing Multimodal Interaction between Performers and Audience Members during Live Music Performances               
    Kouyou Otsu; Jinglong Yuan; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki
    Conference on Human Factors in Computing Systems - Proceedings, First page:1, Last page:6, May 2021
    Live performance provides a good example of enthusiastic interaction between people gathered together in a large group and one or more performers. In this research, we focused on elucidating the mechanism of such enthusiastic group interaction (collective effervescence) and how technology can contribute to its enhancement. We propose a support system for co-experience and physical co-actions among participants to enhance enthusiastic interaction between performers and audiences during live performances. This system focuses on a physical synchronization between the performer and the audience as the key that generates enthusiastic interaction during a live performance. Also, it supports enhanced bidirectional communication of the performer's actions and the audience's cheering behaviors. An experiment in an actual live performance environment in which collective effervescence was already occurring demonstrated that the bidirectional communication and visualization of physical movements in the proposed system contributed to the further unification of the group.
    International conference proceedings
    DOI:https://doi.org/10.1145/3411763.3451584
    Scopus:https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85105826264&origin=inward
    Scopus Citedby:https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85105826264&origin=inward
    DOI ID:10.1145/3411763.3451584, SCOPUS ID:85105826264
  • Person Tracking Using Ankle-Level LiDAR Based on Enhanced DBSCAN and OPTICS               
    Mahmudul Hasan; Junichi Hanawa; Riku Goto; Hisato Fukuda; Yoshinori Kuno; Yoshinori Kobayashi
    IEEJ Transactions on Electrical and Electronic Engineering, Volume:16, Number:5, First page:778, Last page:786, May 2021
    With the progress of deep learning techniques, people tracking using video cameras has become easy and accurate. However, vision-based monitoring raises privacy and security concerns: people may not tolerate surveillance cameras installed everywhere in their daily lives. A camera-based system may also fail to work robustly in unusual conditions such as smoke, fog, or darkness. To cope with these problems, we propose a two-dimensional (2D) LiDAR-based people tracking technique built on clustering algorithms. A LiDAR sensor is a prominent means of tracking people without disclosing their identity, even under challenging conditions. For tracking, we propose modified density-based spatial clustering of applications with noise (DBSCAN) and ordering points to identify cluster structure (OPTICS) algorithms for clustering 2D LiDAR data. Experiments confirm that our approach significantly improves the accuracy and robustness of people tracking. © 2021 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC. (An illustrative clustering sketch follows this entry.)
    John Wiley and Sons Inc, English, Scientific journal
    DOI:https://doi.org/10.1002/tee.23358
    DOI ID:10.1002/tee.23358, ISSN:1931-4981, SCOPUS ID:85103964428
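    For intuition about the clustering step this entry describes, here is a minimal Python sketch (not the authors' implementation) that groups one ankle-level 2D LiDAR scan into person candidates using stock DBSCAN from scikit-learn; the scan geometry and the eps/min_samples values are illustrative assumptions.

      # Sketch: cluster a single ankle-level 2D LiDAR scan into person candidates.
      # Stock DBSCAN stands in for the paper's modified DBSCAN/OPTICS; parameters
      # and sensor geometry are assumptions, not the authors' values.
      import numpy as np
      from sklearn.cluster import DBSCAN

      def scan_to_points(ranges, angle_min=-np.pi, angle_max=np.pi, max_range=10.0):
          """Convert polar range readings to sensor-centered 2D Cartesian points."""
          ranges = np.asarray(ranges)
          angles = np.linspace(angle_min, angle_max, len(ranges))
          keep = ranges < max_range                  # drop out-of-range returns
          return np.column_stack((ranges[keep] * np.cos(angles[keep]),
                                  ranges[keep] * np.sin(angles[keep])))

      def cluster_ankles(points, eps=0.1, min_samples=5):
          """Group nearby returns; each cluster centroid is a candidate ankle."""
          labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
          return [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]

      # Example: two synthetic "ankle" blobs about 1 m in front of the sensor.
      rng = np.random.default_rng(0)
      points = np.vstack((rng.normal((1.0, 0.2), 0.03, (20, 2)),
                          rng.normal((1.0, -0.2), 0.03, (20, 2))))
      print(cluster_ankles(points))                  # ~two centroids near (1.0, ±0.2)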
  • Interest Estimation Based on Biological Information and Body Sway               
    Y. Wang, K. Otsu, H. Fukuda, Y. Kobayashi, Y. Kuno
    International Workshop on Frontiers of Computer Vision (IW-FCV), 2021
  • Fusion in Dissimilarity Space for RGB-D Person Re-identification               
    M.K. Uddin, A. Lam, H. Fukuda, Y. Kobayashi, Y. Kuno
    Array, Volume:12, First page:1, Last page:13, 2021
  • エスノメソドロジー的視点に基づく購買支援システムの開発               
    山崎敬一,中西英之,小林貴訓
    サービス学会誌, Volume:7, Number:4, First page:130, Last page:137, 2021
  • Person Property Estimation Based on 2D LiDAR Data Using Deep Neural Network               
    Mahmudul Hasan; Riku Goto; Junichi Hanawa; Hisato Fukuda; Yoshinori Kuno; Yoshinori Kobayashi
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:12836, First page:763, Last page:773, 2021
    Video-based estimation plays a significant role in person identification and tracking, and new technology and increased computational capability make such systems more robust and accurate every day. Various RGB and depth cameras have been used in these applications over time. However, video-based analysis is intrusive, and individual identity can be leaked. As an alternative to visual capture, LiDAR now shows its credentials with high accuracy. Beyond privacy, critical environmental conditions can also be addressed with LiDAR sensing: scenarios such as heavy fog or smoke degrade typical visual estimation. In this study, we developed a way of estimating a person's properties, such as height and age, from LiDAR data. We placed several 2D LiDARs at ankle level and captured people's movements. The distance data are processed as motion history images, and a deep neural architecture estimates the person's properties with significant accuracy. This 2D LiDAR-based estimation can offer a new pathway for such critical circumstances, with computational cost and accuracy that compare favorably with traditional approaches. (An illustrative motion-history sketch follows this entry.)
    Springer Science and Business Media Deutschland GmbH, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-030-84522-3_62
    DOI ID:10.1007/978-3-030-84522-3_62, ISSN:1611-3349, SCOPUS ID:85113631215
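    The entry above feeds motion history images built from 2D LiDAR frames into a deep network. Below is a minimal sketch of how successive sensor-centered point sets can be rasterized into such an image; the grid size, cell resolution, and decay constant are assumed values, not the paper's settings.

      # Sketch: build a motion-history image (MHI) from successive 2D LiDAR frames.
      # GRID/CELL/DECAY are illustrative assumptions.
      import numpy as np

      GRID, CELL, DECAY = 128, 0.05, 32.0   # cells per side, m per cell, fade per frame

      def occupancy(points, grid=GRID, cell=CELL):
          """Rasterize sensor-centered 2D points (meters) into a binary grid."""
          img = np.zeros((grid, grid), dtype=np.float32)
          ij = np.floor(np.asarray(points) / cell).astype(int) + grid // 2
          ok = ((ij >= 0) & (ij < grid)).all(axis=1)
          img[ij[ok, 1], ij[ok, 0]] = 1.0
          return img

      def update_mhi(mhi, points):
          """Newest returns at full intensity; older motion fades by DECAY per frame."""
          return np.maximum(np.clip(mhi - DECAY, 0.0, None),
                            255.0 * occupancy(points))

      mhi = np.zeros((GRID, GRID), dtype=np.float32)
      for t in range(10):                    # a return sliding 5 cm/frame leaves a fading trail
          mhi = update_mhi(mhi, [[0.05 * t, 0.0]])
      print(mhi.max(), int((mhi > 0).sum()))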
  • Tracking People Using Ankle-Level 2D LiDAR for Gait Analysis               
    Mahmudul Hasan; Junichi Hanawa; Riku Goto; Hisato Fukuda; Yoshinori Kuno; Yoshinori Kobayashi
    Advances in Intelligent Systems and Computing, Volume:1213, First page:40, Last page:46, 2021
    People tracking is one of the fundamental goals of human behavior recognition, and advances in cameras, tracking algorithms, and computation have made it practical. When privacy and secrecy are at stake, however, cameras become a liability. The goal of this research is to replace the video camera with a device (a 2D LiDAR) that preserves the privacy of the user, solves the issue of a narrow field of view, and keeps the system functional at the same time. We consider the individual movements of every moving object on the plane and identify objects as persons based on ankle orientation and movement. Our approach collects the frames of every moving object and finally creates a video from those frames.
    Springer, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-030-51328-3_7
    DOI ID:10.1007/978-3-030-51328-3_7, ISSN:2194-5365, SCOPUS ID:85088514721
  • Video Analysis of Wheel Pushing Actions for Wheelchair Basketball Players               
    Keita Fukue; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno; Nami Shida; Mari Sugiyama; Takashi Handa; Tomoyuki Morita
    Communications in Computer and Information Science, Volume:1405, First page:233, Last page:241, 2021
    In wheelchair basketball, the performance of a player is determined not only by his/her physical ability but also by the settings of the wheelchair, such as the height and angle of its seat and the size and position of its wheels. However, these settings are based on the rules of thumb of players and instructors; there are no specific guidelines or rules for configuring a wheelchair according to the physical characteristics or types of disabilities of the athletes. This study therefore aims to provide suggestions for improving athlete performance by analyzing wheelchair starting actions in comparison with top players who have the highest wheelchair-operation ability. We propose a method to measure the detailed behaviors of players during starting actions from video footage. Through experimental verification, we successfully retrieved the detailed behaviors of players and wheelchairs, and confirmed that it is possible to measure differences in behavior among multiple participants and across different wheelchair settings.
    International conference proceedings
    DOI:https://doi.org/10.1007/978-3-030-81638-4_19
    Scopus:https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85112731078&origin=inward
    Scopus Citedby:https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85112731078&origin=inward
    DOI ID:10.1007/978-3-030-81638-4_19, ISSN:1865-0929, eISSN:1865-0937, SCOPUS ID:85112731078
  • Robust and Fast Heart Rate Monitoring Based on Video Analysis and Its Application               
    Kouyou Otsu; Qiang Zhong; Das Keya; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Advances in Intelligent Systems and Computing, Volume:1213 AISC, First page:250, Last page:257, 2021
    Techniques for remotely measuring heartbeat information are useful for many applications such as daily health management and emotion estimation. In recent years, several methods for measuring heartbeat information with a consumer RGB camera have been proposed. However, accurately and quickly measuring heart rate from videos with significant body movement remains a difficult challenge. In this study, we propose a video-based heart rate measurement method that enables robust measurement in real time by improving on previous, slower methods that used local regions of the facial skin. Experiments using public datasets and self-collected videos confirmed that the proposed method enables fast measurement while maintaining the accuracy of conventional methods. (An illustrative pulse-extraction sketch follows this entry.)
    International conference proceedings
    DOI:https://doi.org/10.1007/978-3-030-51328-3_35
    Scopus:https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85088531749&origin=inward
    Scopus Citedby:https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85088531749&origin=inward
    DOI ID:10.1007/978-3-030-51328-3_35, ISSN:2194-5357, eISSN:2194-5365, SCOPUS ID:85088531749
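    As background for this entry, a common baseline for camera-based pulse measurement is to average the green channel over a facial skin region per frame and pick the dominant frequency in the heart-rate band. The sketch below implements that baseline on a synthetic signal at an assumed 30 fps; it is not the paper's faster method.

      # Sketch: baseline remote photoplethysmography from a mean-green signal.
      import numpy as np

      def estimate_bpm(green_means, fps=30.0, lo=0.7, hi=4.0):
          """Dominant frequency (beats/min) of a mean-green-channel signal."""
          x = np.asarray(green_means, dtype=float)
          x = x - x.mean()                          # remove the DC component
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          power = np.abs(np.fft.rfft(x)) ** 2
          band = (freqs >= lo) & (freqs <= hi)      # plausible pulse: 42-240 bpm
          return 60.0 * freqs[band][np.argmax(power[band])]

      # Synthetic stand-in for 10 s of face-ROI green means: 1.2 Hz pulse + noise.
      t = np.arange(0, 10, 1 / 30.0)
      rng = np.random.default_rng(0)
      signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0.0, 0.2, t.size)
      print(round(estimate_bpm(signal)))            # ~72 bpm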
  • Person-Following Shopping Support Robot Using Kinect Depth Camera Based on 3D Skeleton Tracking               
    Md Matiqul Islam; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:12465, First page:28, Last page:37, 2020
    The lack of caregivers in an aging society is a major social problem. Without assistance, many of the elderly and disabled are unable to perform daily tasks. One important daily activity is shopping in supermarkets. Pushing a shopping cart and moving it from shelf to shelf is tiring and laborious, especially for elderly customers or those with certain disabilities. To alleviate this problem, we develop a person-following shopping support robot using a Kinect camera that can recognize customer shopping actions and activities. Our robot can follow within a certain distance behind the customer. Whenever the robot detects the customer performing a "hand in shelf" action in front of a shelf, it positions itself beside the customer with a shopping basket so that the customer can easily put his or her product in the basket. Afterwards, the robot again follows the customer from shelf to shelf until he or she is done shopping. We conduct our experiments in a real supermarket to evaluate its effectiveness.
    Springer Science and Business Media Deutschland GmbH, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-030-60796-8_3
    DOI ID:10.1007/978-3-030-60796-8_3, ISSN:1611-3349, SCOPUS ID:85093985863
  • Depth Guided Attention for Person Re-identification               
    Md Kamal Uddin; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:12465, First page:110, Last page:120, 2020
    Person re-identification is an important video-surveillance task for recognizing people across different non-overlapping camera views. It has recently gained significant attention with the introduction of different sensors (e.g., depth cameras) that provide additional information beyond visual features. Despite recent advances with deep learning models, state-of-the-art re-identification approaches fail to leverage this sensor-based additional information for robust feature representations. Most of these approaches rely on complex dedicated attention-based architectures for feature fusion and thus become unsuitable for real-time deployment. In this paper, a new deep learning method is proposed for depth-guided re-identification. The proposed method takes the depth-based additional information into account in the form of an attention mechanism, unlike state-of-the-art methods with complex architectures. Experimental evaluations on a depth-based benchmark dataset suggest the superiority of our proposed approach over the considered baseline as well as over the state-of-the-art.
    Springer Science and Business Media Deutschland GmbH, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-030-60796-8_10
    DOI ID:10.1007/978-3-030-60796-8_10, ISSN:1611-3349, SCOPUS ID:85093972508
  • Indoor Visual Re-localization Based on Confidence Score Using Omni-Directional Camera               
    Toshihiro Takahashi; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Communications in Computer and Information Science, Volume:1212, First page:192, Last page:205, 2020
    In this paper, we propose a novel deep-learning re-localization method using monocular images, together with a data augmentation method based on semi-omni-directional images. Our method aims to make re-localization robust to changes in the surrounding environment, which is achieved by applying an uncertainty measure obtained from a Bayesian Neural Network. We confirm the effectiveness of our proposed method through experiments. (An illustrative uncertainty sketch follows this entry.)
    Springer, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-981-15-4818-5_15
    DOI ID:10.1007/978-981-15-4818-5_15, ISSN:1865-0937, SCOPUS ID:85090045042
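    The confidence score in this entry comes from a Bayesian Neural Network's uncertainty. One common approximation of that idea is Monte Carlo dropout; the PyTorch sketch below, with an assumed toy regressor, feature dimension, and acceptance threshold, shows how the spread of repeated stochastic predictions can serve as a re-localization confidence.

      # Sketch: Monte Carlo dropout as a stand-in for Bayesian-NN uncertainty.
      import torch
      import torch.nn as nn

      pose_net = nn.Sequential(            # toy pose regressor: features -> (x, y, yaw)
          nn.Linear(512, 128), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(128, 3))

      def predict_with_uncertainty(net, features, samples=30):
          """Mean pose and per-dimension std over stochastic forward passes."""
          net.train()                      # keeps Dropout active at inference time
          with torch.no_grad():
              preds = torch.stack([net(features) for _ in range(samples)])
          return preds.mean(dim=0), preds.std(dim=0)

      features = torch.randn(1, 512)       # stand-in for CNN features of one image
      pose, sigma = predict_with_uncertainty(pose_net, features)
      print(pose, sigma, bool((sigma < 0.5).all()))  # accept the pose only if confident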
  • An intelligent shopping support robot: understanding shopping behavior from 2D skeleton data using GRU network               
    Md Matiqul Islam; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    ROBOMECH Journal, Volume:6, Number:1, Dec. 2019
    In supermarkets and grocery stores, a shopping cart is a necessary tool for shopping. In this paper, we develop an intelligent shopping support robot that can carry a shopping cart while following its owner and provide shopping support by observing the customer's head orientation and body orientation and recognizing different shopping behaviors. Recognizing shopping behavior, and the intensity of such actions, is important for understanding the best way to support the customer without disturbing him or her. The system also liberates elderly and disabled people from the burden of pushing shopping carts, because the proposed shopping cart is essentially an autonomous mobile robot that recognizes and follows its owner. The proposed system discretizes the customer's head and body orientation into 8 directions to estimate whether the customer is looking or turning towards a merchandise shelf. From the robot's video stream, a DNN-based human pose estimator called OpenPose is used to extract a skeleton of 18 joints for each detected body. Using this extracted body-joint information, we built a dataset and developed a novel Gated Recurrent Unit (GRU) network topology to classify actions typically performed in front of shelves: reach to shelf, retract from shelf, hand in shelf, inspect product, inspect shelf. Our GRU network takes a series of 32 frames of skeleton data and then outputs a prediction; under cross-validation, it achieves an overall accuracy of 82%. Finally, from the customer's head orientation, body orientation, and shopping behavior recognition, we develop a complete system for our shopping support robot. (An illustrative GRU sketch follows this entry.)
    Springer Science and Business Media Deutschland GmbH, English, Scientific journal
    DOI:https://doi.org/10.1186/s40648-019-0150-1
    DOI ID:10.1186/s40648-019-0150-1, ISSN:2197-4225, SCOPUS ID:85076560888
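    To make the classifier described in this entry concrete, here is a minimal PyTorch sketch of a GRU over 32 frames of 18-joint OpenPose skeletons (36 features per frame) predicting the five shelf actions; the layer sizes are assumptions, and the paper's exact topology is not reproduced.

      # Sketch: GRU action classifier over 32-frame skeleton clips (18 joints x 2).
      import torch
      import torch.nn as nn

      ACTIONS = ["reach to shelf", "retract from shelf", "hand in shelf",
                 "inspect product", "inspect shelf"]

      class SkeletonGRU(nn.Module):
          def __init__(self, joints=18, hidden=64, classes=len(ACTIONS)):
              super().__init__()
              self.gru = nn.GRU(input_size=joints * 2, hidden_size=hidden,
                                batch_first=True)
              self.head = nn.Linear(hidden, classes)

          def forward(self, x):              # x: (batch, 32 frames, 36 features)
              _, h_last = self.gru(x)        # h_last: (num_layers=1, batch, hidden)
              return self.head(h_last[-1])   # logits: (batch, classes)

      model = SkeletonGRU()
      clip = torch.randn(4, 32, 36)          # a batch of four skeleton sequences
      print([ACTIONS[i] for i in model(clip).argmax(dim=1).tolist()])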
  • Social interaction with visitors: mobile guide robots capable of offering a museum tour               
    Mohammad Abu Yousuf; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    IEEJ Transactions on Electrical and Electronic Engineering, Volume:14, Number:12, First page:1823, Last page:1835, Dec. 2019
    The purpose of this study is to develop a mobile museum guide (MG) robot capable of creating and controlling spatial formations with visitors in different situations. Although much research has already been conducted on nonverbal communication between guide robots and humans, creating and controlling spatial formations with multiple visitors is a fundamental function for MG robots that remains unexplored. Drawing upon psychological and sociological studies of the spatial relationships between humans, it is evident that effective MG robots should also possess the capability to create and control spatial formations in various situations. An MG robot needs to establish a spatial formation to initiate interaction with the visitors; a spatial formation is a prerequisite before the robot can begin explaining an exhibit. Moreover, the guide robot must be able to identify interested bystanders and invite them into an ongoing explanation session, necessitating a reconfiguration of the spatial formation. Finally, the robot must be able to do this while continuing to explain multiple exhibits in a cohesive fashion. To devise a system capable of meeting these needs, we began by observing and videotaping scenes in actual museum galleries. Analyzing these data, we found that museum guides create spatial formations with visitors in a systematic way. We then developed a mobile robot system able to create and control spatial formations while guiding multiple visitors. A particle filter framework is employed to track the visitors' positions, body orientations, and head orientations. We then evaluated the guide robot system in a series of experiments focused on different situations in which a guide robot creates a spatial formation with visitors. © 2019 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
    John Wiley and Sons Inc., English, Scientific journal
    DOI:https://doi.org/10.1002/tee.23009
    DOI ID:10.1002/tee.23009, ISSN:1931-4981, SCOPUS ID:85071768505
  • Companion following robotic wheelchair with bus boarding capabilities               
    Shamim Al Mamun; Sarwar Ali; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    2018 Joint 7th International Conference on Informatics, Electronics and Vision and 2nd International Conference on Imaging, Vision and Pattern Recognition, ICIEV-IVPR 2018, First page:174, Last page:179, Feb. 2019
    In the last decade, several robotic wheelchairs possessing user-friendly interfaces and/or autonomous functions for reaching a goal have been proposed to meet the needs of aging societies. It is also vital for researchers to consider how to reduce the companion's load and support activities such as bus boarding and disembarkation. In this paper, we propose an autonomous bus-boarding wheelchair system that gives freedom of movement to wheelchair users by following a companion side by side or in front-behind positions while simultaneously scanning the environment to move smoothly over outdoor terrain. Additionally, our bus-boarding wheelchair can detect buses and bus doors, with precise measurement of the doorstep's height and width, in order to board and disembark. Our experiments show the effectiveness and applicability of our system for moving around urban areas using community bus services.
    Institute of Electrical and Electronics Engineers Inc., English, International conference proceedings
    DOI:https://doi.org/10.1109/ICIEV.2018.8641059
    DOI ID:10.1109/ICIEV.2018.8641059, SCOPUS ID:85063236160
  • Multiple Viewpoints and Voice Position in Remote Shopping Support (遠隔買い物支援における複数視点と音声の位置)
    小松由和, 山崎晶子, 山崎敬一, 池田佳子, 歌田夢香, 久野義徳, 小林貴訓, 福田悠人
    IPSJ Journal (情報処理学会論文誌), Volume:60, Number:1, First page:157, Last page:165, 2019
  • Robotic shopping trolley for supporting the elderly               
    Yoshinori Kobayashi; Seiji Yamazaki; Hidekazu Takahashi; Hisato Fukuda; Yoshinori Kuno
    Advances in Intelligent Systems and Computing, Volume:779, First page:344, Last page:353, 2019
    With the advance of an aging society in Japan and the accompanying lack of caregivers, elderly care has become a crucial social problem. To cope with this problem, we focus on shopping. Shopping is an important daily activity and is expected to be effective for elderly rehabilitation, because it feels easier than walking rehabilitation and can benefit cognitive functions through memorizing and checking off items to buy. Current shopping rehabilitation is carried out with a caregiver accompanying each elderly person one by one to guide them inside the store, carry the shopping basket, monitor them, and so on; consequently, the caregivers' load is very high. In this paper, we propose a robotic shopping trolley that can reduce the caregivers' load in shopping rehabilitation. We evaluate its effectiveness through experiments at an actual supermarket.
    Springer Verlag, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-94373-2_38
    DOI ID:10.1007/978-3-319-94373-2_38, ISSN:2194-5357, SCOPUS ID:85049158408
  • Interacting with Wheelchair Mounted Navigator Robot.               
    Akiko Yamazaki; Keiichi Yamazaki; Yusuke Arano; Yosuke Saito; Emi Iiyama; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Mensch und Computer 2019 - Workshopband, Hamburg, Germany, September 8-11, 2019, 2019, [Reviewed]
    Gesellschaft für Informatik e.V., International conference proceedings
    DOI:https://doi.org/10.18420/muc2019-ws-651
    DOI ID:10.18420/muc2019-ws-651, DBLP ID:conf/mc/YamazakiYASIFKK19
  • Exploiting Local Shape Information for Cross-Modal Person Re-identification.               
    Md. Kamal Uddin; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Methodologies - 15th International Conference, ICIC 2019, Nanchang, China, August 3-6, 2019, Proceedings, Part III, Volume:LNCS11645, First page:74, Last page:85, 2019, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-030-26766-7_8
    DOI ID:10.1007/978-3-030-26766-7_8, DBLP ID:conf/icic/UddinLFKK19
  • A Human-Robot Interaction System Based on Calling Hand Gestures.               
    Aye Su Phyo; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Methodologies - 15th International Conference, ICIC 2019, Nanchang, China, August 3-6, 2019, Proceedings, Part III, Volume:LNCS11645, First page:43, Last page:52, 2019, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-030-26766-7_5
    DOI ID:10.1007/978-3-030-26766-7_5, DBLP ID:conf/icic/PhyoFLKK19
  • Smart Wheelchair Maneuvering Among People.               
    Sarwar Ali; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Methodologies - 15th International Conference, ICIC 2019, Nanchang, China, August 3-6, 2019, Proceedings, Part III, Volume:LNCS11645, First page:32, Last page:42, 2019, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-030-26766-7_4
    DOI ID:10.1007/978-3-030-26766-7_4, DBLP ID:conf/icic/AliLFKK19
  • A Person-Following Shopping Support Robot Based on Human Pose Skeleton Data and LiDAR Sensor.               
    Md. Matiqul Islam; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Methodologies - 15th International Conference, ICIC 2019, Nanchang, China, August 3-6, 2019, Proceedings, Part III, Volume:LNCS11645, First page:9, Last page:19, 2019, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-030-26766-7_2
    DOI ID:10.1007/978-3-030-26766-7_2, DBLP ID:conf/icic/IslamLFKK19
  • Teleoperation of a Robot through Audio-Visual Signal via Video Chat               
    Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    ACM/IEEE International Conference on Human-Robot Interaction, First page:111, Last page:112, Mar. 2018, [Reviewed]
    Telepresence robots have the potential for improving human-to-human communication when a person cannot be physically present at a given location. One way to achieve this is to construct a system that consists of a robot and video conferencing setup. However, a conventional implementation would involve building a separate server or control path for teleoperation of the robot in addition to the video conferencing system. In this paper, we propose an approach to robot teleoperation via a video call that does not require the use of an additional server or control path. Instead, we propose directly teleoperating the robot via the audio and video signals of the video call itself. We experiment on which signals are most suitable for this task and present our findings.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1145/3173386.3177037
    DOI ID:10.1145/3173386.3177037, ISSN:2167-2148, DBLP ID:conf/hri/FukudaKK18, SCOPUS ID:85045279327
  • Pedestrian Tracking and Identification by Integrating Multiple Sensor Information               
    F. Endo, H. Fukuda, Y. Kobayashi, and Y. Kuno
    International Workshop on Frontiers of Computer Vision (IW-FCV2019), 2018
  • A Study on Proactive Methods for Initiating Interaction with Human by Social Robots               
    M. G. Rashed, D. Das, Y. Kobayashi, and Y. Kuno
    Asian Journal of Convergence in Technology, Volume:4, Number:2, First page:1, Last page:10, 2018
  • Affinity Live: A System for Enhancing Interaction between Performers and the Audience               
    大津耕陽; 福島史康; 高橋秀和; 平原実留; 福田悠人; 小林貴訓; 久野義徳; 山崎敬一
    IPSJ Journal (情報処理学会論文誌ジャーナル) (Web), Volume:59, Number:11, First page:2019, Last page:2029, 2018, [Reviewed]
    Japanese, Scientific journal
    ISSN:1882-7764, J-Global ID:201902250029525198
  • Autonomous Bus Boarding Robotic Wheelchair Using Bidirectional Sensing Systems.               
    Shamim Al Mamun; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Advances in Visual Computing - 13th International Symposium, ISVC 2018, Las Vegas, NV, USA, November 19-21, 2018, Proceedings, First page:737, Last page:747, 2018, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-030-03801-4_64
    DOI ID:10.1007/978-3-030-03801-4_64, DBLP ID:conf/isvc/MamunFLKK18
  • Enhancing Multiparty Cooperative Movements: A Robotic Wheelchair that Assists in Predicting Next Actions.               
    Hisato Fukuda; Keiichi Yamazaki; Akiko Yamazaki; Yosuke Saito; Emi Iiyama; Seiji Yamazaki; Yoshinori Kobayashi; Yoshinori Kuno; Keiko Ikeda
    Proceedings of the 2018 on International Conference on Multimodal Interaction, ICMI 2018, Boulder, CO, USA, October 16-20, 2018, First page:409, Last page:417, 2018, [Reviewed]
    ACM, International conference proceedings
    DOI:https://doi.org/10.1145/3242969.3242983
    DOI ID:10.1145/3242969.3242983, DBLP ID:conf/icmi/FukudaYYSIYKKI18
  • Classification of Emotions from Video Based Cardiac Pulse Estimation.               
    Keya Das; Antony Lam; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Methodologies - 14th International Conference, ICIC 2018, Wuhan, China, August 15-18, 2018, Proceedings, Part III, Volume:LNAI10956, First page:296, Last page:305, 2018, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-319-95957-3_33
    DOI ID:10.1007/978-3-319-95957-3_33, DBLP ID:conf/icic/DasLFKK18
  • Smart Robotic Wheelchair for Bus Boarding Using CNN Combined with Hough Transforms.               
    Sarwar Ali; Shamim Al Mamun; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Methodologies - 14th International Conference, ICIC 2018, Wuhan, China, August 15-18, 2018, Proceedings, Part III, Volume:LNAI10956, First page:163, Last page:172, 2018, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-319-95957-3_18
    DOI ID:10.1007/978-3-319-95957-3_18, DBLP ID:conf/icic/AliMFLKK18
  • Natural Calling Gesture Recognition in Crowded Environments.               
    Aye Su Phyo; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Intelligent Computing Theories and Application - 14th International Conference, ICIC 2018, Wuhan, China, August 15-18, 2018, Proceedings, Part I, Volume:LNCS10954, First page:8, Last page:14, 2018, [Reviewed]
    Springer
    DOI:https://doi.org/10.1007/978-3-319-95930-6_2
    DOI ID:10.1007/978-3-319-95930-6_2, DBLP ID:conf/icic/PhyoFLKK18
  • Robustly tracking people with LIDARs in a crowded museum for behavioral analysis               
    Md. Golam Rashed; Ryota Suzuki; Takuya Yonezawa; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Volume:E100A, Number:11, First page:2458, Last page:2469, Nov. 2017, [Reviewed]
    This paper introduces a method that uses LIDAR to identify humans and track their positions, body orientations, and movement trajectories in any public space, in order to read their various behavioral responses to the surroundings. We use a network of LIDAR poles, installed at the shoulder level of typical adults, to reduce potential occlusion between persons and/or objects even in large-scale social environments. With this arrangement, a simple but effective human tracking method is proposed that combines data from multiple sensors so that large-scale areas can be covered. The effectiveness of this method is evaluated in an art gallery of a real museum. The results revealed good tracking performance and provided valuable behavioral information related to the art gallery.
    Institute of Electronics, Information and Communication Engineers (IEICE), English, International conference proceedings
    DOI:https://doi.org/10.1587/transfun.E100.A.2458
    DOI ID:10.1587/transfun.E100.A.2458, ISSN:1745-1337, DBLP ID:journals/ieicet/RashedSYLKK17, SCOPUS ID:85033446271
  • Enhanced concert experience using multimodal feedback from live performers               
    Kouyou Otsu; Hidekazu Takahashi; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings - 2017 10th International Conference on Human System Interactions, HSI 2017, First page:290, Last page:294, Aug. 2017, [Reviewed]
    In this paper, we aim to enhance the interaction between performers and the audience in live idol performances. We propose a system that converts the movements of individual members of an idol group into vibrations, and their voices into light, on handheld devices for the audience. Specifically, for each performer, the system acquires movement and voice magnitudes via an acceleration sensor attached to the right wrist and a microphone. The obtained data are then converted into motor vibrations and light from an LED; the receiving devices for audience members take the form of a pen light or a doll. A prototype system was built to collect acceleration and voice magnitude measurements in experiments with an idol group in Japan, to verify whether the performers' movements and singing voices could be measured correctly under real live-performance conditions. We developed a program to present the strength of the movements and singing voice of one of the members as vibrations and light based on the recorded data. An experiment was then conducted with eight subjects who observed the performance; seven of the eight could identify the idol performer corresponding to the vibrations and lighting from the device.
    Institute of Electrical and Electronics Engineers Inc., English, International conference proceedings
    DOI:https://doi.org/10.1109/HSI.2017.8005047
    Scopus:https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85030224011&origin=inward
    Scopus Citedby:https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85030224011&origin=inward
    DOI ID:10.1109/HSI.2017.8005047, DBLP ID:conf/hsi/OtsuTFKK17, SCOPUS ID:85030224011
  • Analysis and Prediction of Real Museum Visitors' Interests and Preferences Based on Their Behaviors               
    Md. Golam Rashed; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    2017 INTERNATIONAL CONFERENCE ON ELECTRICAL, COMPUTER AND COMMUNICATION ENGINEERING (ECCE), First page:451, Last page:456, 2017, [Reviewed]
    Human behaviors and experiences in social spaces are believed to result from processes of the mind that are influenced by the different features of these spaces. By observing human behaviors and experiences, it is feasible to read people's levels of interest and their preferences in any social environment. However, large-scale manual observation of human behavior with paper-and-pencil methods is a very difficult and complicated task. In this study, an attractive solution to this complicated task is discussed. Here, we use our network-enabled human tracking system, based on multiple LIDAR poles, to support our solution. The system can robustly track humans in any social environment, and our solution provides an easy way to read people's interests and preferences inside any social space from the tracking data. Finally, we tested our solution using a large set of human tracking data from an art gallery of a real museum to validate its effectiveness.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000403395700080
  • Detecting inner emotions from video based heart rate sensing               
    Keya Das; Sarwar Ali; Koyo Otsu; Hisato Fukuda; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:10363, First page:48, Last page:57, 2017, [Reviewed]
    Recognizing human emotion by computer vision is an interesting and challenging problem. In particular, the reading of inner emotions has received limited attention. In this paper, we use a remote video-based heart rate sensing technique to obtain physiological data that provide an indication of a person's inner emotions. This method allows contactless estimates of heart rate while the subject is watching emotionally stimulating video clips. We also compare against a wearable heart rate sensor to validate the usefulness of the proposed remote heart rate reading framework. We found that reading subjects' heart rates effectively detects their inner emotional reactions while they watch funny and horror videos, despite little to no facial expression at times. These findings are validated by comparing heart rate readings for 40 subjects obtained with our vision-based method against conventional wearable sensors. We also find that the change in heart rate with emotionally stimulating content is statistically significant and that our remote sensor correlates well with the wearable contact sensor.
    Springer Verlag, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-63315-2_5
    Scopus:https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85027891783&origin=inward
    Scopus Citedby:https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85027891783&origin=inward
    DOI ID:10.1007/978-3-319-63315-2_5, ISSN:1611-3349, eISSN:1611-3349, DBLP ID:conf/icic/DasAOFLKK17, SCOPUS ID:85027891783
  • Single laser bidirectional sensing for robotic wheelchair step detection and measurement               
    Shamim Al Mamun; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:10363, First page:37, Last page:47, 2017, [Reviewed]
    Research interest in robotic wheelchairs is driven in part by their potential for improving the independence and quality-of-life of persons with disabilities and the elderly. Moreover, smart wheelchair systems aim to reduce the workload of the caregiver. In this paper, we propose a novel technique for 3D sensing of the terrain using a conventional Laser Range Finder (LRF). We mounted this sensing system onto our new six-wheeled robotic step-climbing wheelchair and propose a new step measurement technique using the histogram distribution of the laser data. We successfully measure the height of stair steps in a railway station. Our step measurement technique for the wheelchair also enables the wheelchair to autonomously board a bus. Our experiments show the effectiveness and its applicability to real world robotic wheelchair navigation.
    Springer Verlag, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-63315-2_4
    DOI ID:10.1007/978-3-319-63315-2_4, ISSN:1611-3349, DBLP ID:conf/icic/MamunLKK17, SCOPUS ID:85027877310
  • Maintaining Formation of Multiple Robotic Wheelchairs for Smooth Communication               
    Ryota Suzuki; Yoshinori Kobayashi; Yoshinori Kuno; Taichi Yamada; Keiichi Yamazaki; Akiko Yamazaki
    INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, Volume:25, Number:5, First page:1, Last page:19, Oct. 2016, [Reviewed]
    To meet the demands of an aging society, research on intelligent/robotic wheelchairs has been receiving a lot of attention. In elderly care facilities, care workers are required to communicate with the elderly in order to maintain both their mental and physical health. While this is regarded as important, having a conversation with someone in a wheelchair while pushing it from behind, as in the traditional setting, interferes with smooth and natural conversation. We are therefore developing a robotic wheelchair system that allows companions and wheelchair users to move in a natural formation. This paper reports on an investigation of the patterns of human behavior when wheelchair users and their companions communicate while moving together. The ethnographic observation reveals a natural positioning formation for both companions and wheelchair users. Based on this investigation, we propose a multiple robotic wheelchair system that can maintain desirable formations for communication between wheelchairs.
    WORLD SCIENTIFIC PUBL CO PTE LTD, English, Scientific journal
    DOI:https://doi.org/10.1142/S0218213016400054
    DOI ID:10.1142/S0218213016400054, ISSN:0218-2130, eISSN:1793-6349, DBLP ID:journals/ijait/SuzukiKKYYY16, Web of Science ID:WOS:000384410200006
  • Object Identification to Service Robots using a Mobile Interface from User’s Perspective               
    Q. Zhong, H. Fukuda, Y. Kobayashi, Y. Kuno
    23th International Workshop on Frontiers of Computer Vision (IW-FCV2017), 2016
  • Tracking Visitors in a Real Museum for Behavioral Analysis               
    Md Golam Rashed; Ryota Suzuki; Takuya Yonezawa; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    2016 JOINT 8TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS (SCIS) AND 17TH INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (ISIS), First page:80, Last page:85, 2016, [Reviewed]
    This paper introduces a system to track visitors' positions and movement patterns through museum art galleries with the goal of assisting the curators and other Museum Professionals (MPs) in the tedious task of analyzing visitors' activities, behaviors and experiences. We use multiple LIDARs as a primary sensing technology in supporting our proposed system. We also describe how valuable information related to visitor behaviors can be autonomously collected and analyzed using our system. Additionally, we present a solution to visualize the visitors' movement patterns and preferences with respect to the exhibits. Finally, we tested our system in an art gallery of a real museum to validate its effectiveness.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/SCIS&ISIS.2016.209
    DOI ID:10.1109/SCIS&ISIS.2016.209, DBLP ID:conf/scisisis/RashedSYLKK16, Web of Science ID:WOS:000392122900015
  • Analysis of Multi-party Human Interaction towards a Robot Mediator               
    Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno; Akiko Yamazaki; Keiko Ikeda; Keiichi Yamazaki
    2016 25TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), First page:17, Last page:21, 2016, [Reviewed]
    In this research, we aim to reveal the difference between robot-mediated and human-mediated multi-party human interaction. We do this by examining and comparing the participants' interactions towards robot and human mediators in regard to the mediator's questions. From this experiment, we found that the participants engaged in interaction with each other more when the questioner was the robot. This paper suggests a concept and a design for robots that enhance social interaction among humans through their intervention.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2016.7745085
    DOI ID:10.1109/ROMAN.2016.7745085, ISSN:1944-9445, DBLP ID:conf/ro-man/FukudaKKYIY16, Web of Science ID:WOS:000390682500001
  • A Hippocampal Model for Episodic Memory using Neurogenesis and Asymmetric STDP               
    Motonobu Hattori; Yoshinori Kobayashi
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), First page:5189, Last page:5193, 2016, [Reviewed]
    Our memory is first acquired as memory of a personal event (episodic memory). Then, its temporal context is removed and the resultant memory is used in higher-order information processing, such as thinking and reasoning. Episodic memory is therefore the basis of our memories. In this paper, we propose an episodic memory model that adopts physiological findings about the hippocampus. Episodic memory can be regarded as a temporal sequence of patterns. In general, such a sequence may contain patterns that appear several times, or multiple temporal sequences may share the same patterns; that is, it is indispensable for an episodic memory model to handle one-to-many association and context. In the proposed model, neurogenesis in the dentate gyrus (DG) contributes to building different representations in CA3 for the same input patterns and facilitates one-to-many association. Moreover, the asymmetric spike-timing-dependent synaptic plasticity introduced into the learning of the recurrent collaterals in CA3 enables memory of context. Computer simulation results show that the proposed model can deal with complex temporal sequences. (An illustrative STDP sketch follows this entry.)
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IJCNN.2016.7727885
    DOI ID:10.1109/IJCNN.2016.7727885, ISSN:2161-4393, DBLP ID:conf/ijcnn/HattoriK16, Web of Science ID:WOS:000399925505055
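    For reference, the asymmetric spike-timing-dependent plasticity that the entry above applies can be written as an exponential window with unequal time constants on the two sides; the sketch below uses illustrative constants, not the paper's values.

      # Sketch of an asymmetric STDP window: pre-before-post potentiates,
      # post-before-pre depresses, with unequal time constants per side.
      import numpy as np

      A_PLUS, A_MINUS = 0.05, 0.025        # potentiation / depression amplitudes
      TAU_PLUS, TAU_MINUS = 10.0, 20.0     # ms; unequal taus -> asymmetric window

      def stdp_dw(t_pre, t_post):
          """Weight change for one pre/post spike pair (times in ms)."""
          dt = t_post - t_pre
          if dt >= 0:                      # pre fired first -> potentiate
              return A_PLUS * np.exp(-dt / TAU_PLUS)
          return -A_MINUS * np.exp(dt / TAU_MINUS)   # post fired first -> depress

      for dt in (2.0, 10.0, -2.0, -10.0):
          print(f"dt={dt:+5.1f} ms  dw={stdp_dw(0.0, dt):+.4f}")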
  • Remote Monitoring and Communication System with a Doll-like Robot for the Elderly               
    Kouyou Otsu; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    PROCEEDINGS OF THE IECON 2016 - 42ND ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, First page:5941, Last page:5946, 2016, [Reviewed]
    In this paper, we propose a remote monitoring and communication system for elderly care. We aim to detect emergencies concerning elderly people living alone and send alerts to their families from a remote location. The system therefore has two modules: a monitoring module and a video chat module. When the monitoring module detects an unusual event concerning the elderly person, the system sends an alert message to the family's smartphone, and the family can then check the current situation via the video chat module. The elderly person and his/her family members can also easily use the video chat function for day-to-day communication without any complex actions. For the video chat display on the elderly person's side, we use a television set, which every elderly person can be expected to have in their room; this gives our system a familiar appearance, so the elderly can use it without feeling aversion towards it. The monitoring device is a doll-like robot that, besides monitoring, can autonomously initiate simple conversations in normal situations, improving the elderly person's sense of familiarity with the system.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IECON.2016.7793636
    DOI ID:10.1109/IECON.2016.7793636, ISSN:1553-572X, DBLP ID:conf/iecon/OtsuFKK16, Web of Science ID:WOS:000399031206036
  • Terrain Recognition for Smart Wheelchair               
    Shamim Al Mamun; Ryota Suzuki; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    INTELLIGENT COMPUTING METHODOLOGIES, ICIC 2016, PT III, Volume:9773, First page:461, Last page:470, 2016, [Reviewed]
    Research interest in robotic wheelchairs is driven in part by their potential for improving the independence and quality of life of persons with disabilities and the elderly. However, the large majority of research to date has focused on indoor operation. In this paper, we aim to develop a smart wheelchair robot system that moves independently and smoothly over outdoor terrain. To achieve this, we propose a robotic wheelchair system that classifies the type of outdoor terrain according to its roughness, for the comfort of the user, and also controls the wheelchair's movements to avoid drop-offs and watery areas on the road. An artificial neural network classifier is constructed to classify patterns and features extracted from Laser Range Sensor (LRS) intensity and distance data; the overall classification accuracy is 97.24% using these extracted features. The classification results can in turn be used to control the motor of the smart wheelchair. (An illustrative classifier sketch follows this entry.)
    SPRINGER INT PUBLISHING AG, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-42297-8_43
    DOI ID:10.1007/978-3-319-42297-8_43, ISSN:0302-9743, DBLP ID:conf/icic/MamunSLKK16, Web of Science ID:WOS:000387430500043
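    As a toy version of the classifier this entry describes, the sketch below trains a small scikit-learn neural network on made-up intensity/distance features for three terrain classes; the features, class set, and network size are all illustrative assumptions (the 97.24% figure above comes from the paper's own features, not this sketch).

      # Sketch: terrain classification from laser intensity/distance features.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(1)
      TERRAINS = ["smooth", "rough", "watery"]

      def fake_features(label, n):
          """Per-scan features: mean reflectance intensity + distance-roughness stats."""
          mean_int, rough = {"smooth": (0.8, 0.02), "rough": (0.6, 0.10),
                             "watery": (0.2, 0.05)}[label]
          return np.hstack((rng.normal(mean_int, 0.05, (n, 1)),
                            rng.normal(rough, 0.02, (n, 3))))

      X = np.vstack([fake_features(t, 200) for t in TERRAINS])
      y = np.repeat(np.arange(len(TERRAINS)), 200)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                          random_state=0).fit(X, y)
      print(TERRAINS[int(clf.predict(fake_features("rough", 1))[0])])  # -> "rough"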
  • "i'll Be There Next": A Multiplex Care Robot System that Conveys Service Order Using Gaze Gestures               
    Keiichi Yamazaki; Akiko Yamazaki; Keiko Ikeda; Chen Liu; Mihoko Fukushima; Yoshinori Kobayashi; Yoshinori Kuno
    ACM Transactions on Interactive Intelligent Systems, Volume:5, Number:4, First page:21:1, Last page:21:20, Jan. 2016, [Reviewed]
    In this article, we discuss our findings from an ethnographic study at an elderly care center where we observed the utilization of two different functions of human gaze to convey service order (i.e., "who is served first and who is served next"). In one case, when an elderly person requested assistance, the gaze of the care worker communicated that he/she would serve that client next in turn. In the other case, the gaze conveyed a request to the service seeker to wait until the care worker finished attending the current client. Each gaze function depended on the care worker's current engagement and other behaviors. We sought to integrate these findings into the development of a robot that might function more effectively in multiple human-robot party settings. We focused on the multiple functions of gaze and bodily actions, implementing those functions into our robot. We conducted three experiments to gauge a combination of gestures and gazes performed by our robot. This article demonstrates that the employment of gaze is an important consideration when developing robots that can interact effectively in multiple human-robot party settings.
    Association for Computing Machinery, English, Scientific journal
    DOI:https://doi.org/10.1145/2844542
    DOI ID:10.1145/2844542, ISSN:2160-6463, DBLP ID:journals/tiis/YamazakiYILFKK16, SCOPUS ID:84999635119
  • Supporting Human-Robot Interaction Based on the Level of Visual Focus of Attention               
    Dipankar Das; Md. Golam Rashed; Yoshinori Kobayashi; Yoshinori Kuno
    IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, Volume:45, Number:6, First page:664, Last page:675, Dec. 2015, [Reviewed]
    We propose a human-robot interaction approach for social robots that attracts and controls the attention of a target person depending on her/his current visual focus of attention. The system detects the person's current task (attention) and estimates the level by using the "task-related contextual cues" and "gaze pattern." The attention level is used to determine the suitable time to attract the target person's attention toward the robot. The robot detects the interest or willingness of the target person to interact with it. Then, depending on the level of interest of the target person, the robot generates awareness and establishes a communication channel with her/him. To evaluate the performance, we conducted an experiment using our static robot to attract the target human's attention when she/he is involved in four different tasks: reading, writing, browsing, and viewing paintings. The proposed robot determines the level of attention of the current task and considers the situation of the target person. Questionnaire measures confirmed that the proposed robot outperforms a simple attention control robot in attracting participants' attention in an acceptable way. It also causes less disturbance and establishes effective eye contact. We implemented the system into a commercial robotic platform (Robovie-R3) to initiate interaction between visitors and the robot in a museum scenario. The robot determined the visitors' gaze points and established a successful interaction with a success rate of 91.7%.
    IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, English, Scientific journal
    DOI:https://doi.org/10.1109/THMS.2015.2445856
    DOI ID:10.1109/THMS.2015.2445856, ISSN:2168-2291, eISSN:2168-2305, DBLP ID:journals/thms/DasRKK15, Web of Science ID:WOS:000366893700002
  • Museum Guide Robot by Considering Static and Dynamic Gaze Expressions to Communicate with Visitors               
    Kaname Sano; Keisuke Murata; Ryota Suzuki; Yoshinori Kuno; Daijiro Itagaki; Yoshinori Kobayashi
    ACM/IEEE International Conference on Human-Robot Interaction, Volume:02-05-, First page:125, Last page:126, Mar. 2015, [Reviewed]
    Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication. We have investigated the static design and dynamic behaviors of robot heads for suitable gaze communication with humans while giving a friendly impression. In this paper, we focus on how the robot's impression is affected by its eye blinks and eyeball movement synchronized with head turning. Through experiments with human participants, we found that robot head turning with eye blinks gives a friendly impression, while robot head turning without eye blinks is suitable for making people shift their attention towards the robot's gaze direction. These findings are very important for communication robots such as museum guide robots. To demonstrate our approach, we therefore developed a museum guide robot system employing suitable facial design and gaze behavior based on all of our findings.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1145/2701973.2702011
    DOI ID:10.1145/2701973.2702011, ISSN:2167-2148, DBLP ID:conf/hri/SanoMSKIK15, SCOPUS ID:84969263617
  • Toward Museum Guide Robots Proactively Initiating Interaction with Humans               
    M. Golam Rashed; R. Suzuki; A. Lam; Y. Kobayashi; Y. Kuno
    ACM/IEEE International Conference on Human-Robot Interaction, Volume:02-05-, First page:1, Last page:2, Mar. 2015, [Reviewed]
    This paper describes current work toward the design of a guide robot system. We present a method to recognize people's interest and intention from their walking trajectories in indoor environments, which enables a service robot to proactively approach people and offer them services. We conducted observational experiments in a museum as a target test environment, in which participants were asked to visit the museum. From these experiments, we identified three main kinds of walking trajectory patterns among the participants inside the museum, depending on their interest in the exhibits. Based on these findings, we developed a method to identify participants who may need guidance, and we confirmed the effectiveness of our method through experiments.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1145/2701973.2701974
    DOI ID:10.1145/2701973.2701974, ISSN:2167-2148, DBLP ID:conf/hri/RashedSLKK15, SCOPUS ID:84969174987
  • An Empirical Robotic Framework for Initiating Interaction with the Target Human in Multiparty Settings               
    Mohammed Moshiul Hoque; Quazi Delwar Hossian; Kaushik Deb; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, Volume:24, Number:2, Feb. 2015, [Reviewed]
    Currently, work in robotics is expanding from industrial robots to robots employed in the living environment. For robots to be accepted into the real world, they must be capable of behaving the way humans do with other humans, and of initiating interaction in the same way. This paper focuses on designing a robotic framework able to perform an important social function: initiating interaction with a target human in a multiparty setting in a natural and social way. However, it is not an easy task for the robot to initiate an interaction process with a particular human, especially when the robot and the human are not facing each other or the intended target human is intensely engaged in his/her work. In order to initiate an interaction process, we propose a framework that consists of two parts: capturing attention and ensuring that attention has been captured. Evaluation experiments reveal the effectiveness of the proposed system in four viewing situations: central field of view (CFOV), near peripheral field of view (NPFOV), far peripheral field of view (FPFOV), and out of field of view (OFOV).
    WORLD SCIENTIFIC PUBL CO PTE LTD, English, Scientific journal
    DOI:https://doi.org/10.1142/S0218126615400034
    DOI ID:10.1142/S0218126615400034, ISSN:0218-1266, eISSN:1793-6454, DBLP ID:journals/jcsc/HoqueHDDKK15, Web of Science ID:WOS:000350770800004
  • A Vision Based Guide Robot System: Initiating Proactive Social Human Robot Interaction in Museum Scenarios               
    G. Rashed, R. Suzuki, A. Lam, Y. Kobayashi, Y. Kuno
    International Conference on Computer & Information Engineering (ICCIE2015), 2015
  • Toward a Robot System Supporting Communication between People with Dementia and Their Relatives               
    Y. Kuno, S. Goto, Y. Matsuda, T. Kikugawa, A. Lam, Y. Kobayashi
    International Conference on Intelligent Robots and Systems (IROS2015), 2015
  • Understanding Spatial Knowledge: An Ontology-Based Representation for Object Identification
    L. Cao, A. Lam, Y. Kobayashi, Y. Kuno, D. Kaji
    Transactions on Image Electronics and Visual Computing, Volume:3, Number:2, First page:150, Last page:163, 2015
  • Formations for Facilitating Communication Among Robotic Wheelchair Users and Companions               
    Yoshinori Kobayashi; Ryota Suzuki; Taichi Yamada; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    SOCIAL ROBOTICS (ICSR 2015), Volume:9388, First page:370, Last page:379, 2015, [Reviewed]
    To meet the demands of an aging society, research on intelligent/robotic wheelchairs has been receiving a lot of attention. In elderly care facilities, care workers are required to communicate with the elderly in order to maintain both their mental and physical health. While this is regarded as important, having a conversation with a wheelchair user while pushing the wheelchair from behind, as in the traditional setting, interferes with smooth and natural conversation. Based on these concerns, we are developing a robotic wheelchair that allows companions and wheelchair users to move together in a natural formation. This paper reports on an investigation of how humans behave when wheelchair users and their companions communicate while moving together.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-25554-5_37
    DOI ID:10.1007/978-3-319-25554-5_37, ISSN:0302-9743, DBLP ID:conf/socrob/KobayashiSYKYY15, Web of Science ID:WOS:000367711000037
  • Remote Communication Support System as Communication Catalyst for Dementia Care               
    Satoru Goto; Yoshimi Matsuda; Toshiki Kikugawa; Yoshinori Kobayashi; Yoshinori Kuno
    IECON 2015 - 41ST ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, First page:4465, Last page:4470, 2015, [Reviewed]
    In this paper we describe a prototype remote communication support system to be used as a communication catalyst for dementia care. The system aims to delay the progression of dementia and to help caregivers by facilitating the social interaction of dementia patients. In the system framework, a video telecommunication system was installed in order to enable communication between patients and caregivers. Next, a system control board was designed to maintain robust operation so that users could connect with each other at any time. In the system, we also implemented a reminiscence-based therapy application to prevent the progression of dementia. The application can show old photos to dementia patients during a typical video chat. From the caregivers' point of view, our system is also an important form of assistance for repeating announcements and instructions to dementia patients with poor memory. Accordingly, we also made a video replay application which can be set to replay video at a fixed time every day. Finally, we made aesthetically pleasing robots that serve to make interaction with the system more natural for dementia patients.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IECON.2015.7392795
    DOI ID:10.1109/IECON.2015.7392795, ISSN:1553-572X, DBLP ID:conf/iecon/GotoMKKK15, Web of Science ID:WOS:000382950704074
  • Facial Expression Recognition Based on Hybrid Approach               
    Md Abdul Mannan; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCED INTELLIGENT COMPUTING THEORIES AND APPLICATIONS, ICIC 2015, PT III, Volume:9227, First page:304, Last page:310, 2015, [Reviewed]
    This paper proposes an automatic system for facial expression recognition using a hybrid approach in the feature extraction phase (appearance and geometric). Appearance features are extracted as Local Directional Number (LDN) descriptors, while facial landmark points and their displacements are considered as geometric features. Expression recognition is performed using multiple SVMs and decision-level fusion. The proposed method was tested on the Extended Cohn-Kanade (CK+) database and obtained an overall 96.36% recognition rate, which outperformed other state-of-the-art methods for facial expression recognition.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-22053-6_33
    DOI ID:10.1007/978-3-319-22053-6_33, ISSN:0302-9743, DBLP ID:conf/icic/MannanLKK15, Web of Science ID:WOS:000364716700033
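    As an illustration of the decision-level fusion named in the abstract above, the sketch below trains an appearance-branch SVM and a geometry-branch SVM and averages their class posteriors. The synthetic features, class count, and equal fusion weights are assumptions; the paper's LDN descriptors and landmark displacements are not reproduced.
```python
# Minimal sketch of decision-level fusion of two SVM branches.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_classes = 300, 6                       # e.g., six basic expressions
y = rng.integers(0, n_classes, n)
X_app = rng.normal(size=(n, 56)) + y[:, None] * 0.3  # stand-in appearance features
X_geo = rng.normal(size=(n, 34)) + y[:, None] * 0.3  # stand-in geometric features

svm_app = SVC(probability=True).fit(X_app[:200], y[:200])
svm_geo = SVC(probability=True).fit(X_geo[:200], y[:200])

# Decision-level fusion: average the per-class posteriors of both classifiers.
proba = 0.5 * svm_app.predict_proba(X_app[200:]) + \
        0.5 * svm_geo.predict_proba(X_geo[200:])
y_pred = proba.argmax(axis=1)
print("fused accuracy:", (y_pred == y[200:]).mean())
```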
  • Network guide robot system proactively initiating interaction with humans based on their local and global behaviors               
    Md. Golam Rashed; Royta Suzuki; Toshiki Kikugawa; Antony Lam; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:9226, First page:283, Last page:294, 2015, [Reviewed]
    In this paper, we present a Human Robot Interaction (HRI) system which can determine people's interests and intentions concerning exhibits in a museum, then proactively approach people that may want guidance or commentary about the exhibits. To do this, we first conducted observational experiments in a museum with participants. From these experiments, we identified three main kinds of walking trajectory patterns that characterize people's global behavior, along with visual attentional information that indicates their local behavior. These behaviors ultimately indicate whether certain people are interested in the exhibits and could benefit from the robot system providing additional details about them. Based on our findings, we then designed and implemented a network-enabled guide robot system for the museum. Finally, we demonstrated the viability of our proposed system by experimenting with a set of Desktop Robots as guide robots. Our experiments revealed that the proposed HRI system is effective for the network-enabled Desktop Robots to proactively provide guidance.
    Springer Verlag, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-22186-1_28
    DOI ID:10.1007/978-3-319-22186-1_28, ISSN:1611-3349, DBLP ID:conf/icic/RashedSKLKK15, SCOPUS ID:84944754957
  • Object Pose Estimation Using Category Information from a Single Image               
    Shunsuke Shimizu; Hiroshi Koyasu; Yoshinori Kobayashi; Yoshinori Kuno
    2015 21ST KOREA-JAPAN JOINT WORKSHOP ON FRONTIERS OF COMPUTER VISION, First page:1, Last page:4, 2015, [Reviewed]
    3D object pose estimation is one of the most important challenges in the field of computer vision. A huge amount of image resources, such as images on the web or previously taken photos, could be utilized if a system could estimate the 3D pose from a single image. On the other hand, an object's category and position in the image can be estimated by using state-of-the-art techniques in general object recognition. We propose a method for 3D pose estimation from a single image on the basis of the known object category and position. We employ Regression Forests as the machine learning algorithm and HOG features as the input vectors. The regression function is created based on HOG features, which express the differences in shapes depending on the viewing directions and corresponding poses. We evaluate the accuracy of pose estimation by using multiple objects with different categories.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/FCV.2015.7103728
    DOI ID:10.1109/FCV.2015.7103728, ISSN:2165-1051, DBLP ID:conf/fcv/ShimizuKKK15, Web of Science ID:WOS:000380375500031
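    A minimal sketch of the pipeline described above, under simplifying assumptions: HOG features feed a Regression Forest that maps an image to a viewing angle. Synthetic oriented-bar images stand in for the real per-category training views used in the paper.
```python
# HOG + Regression Forest pose-angle estimation on synthetic views.
import numpy as np
from skimage.draw import line
from skimage.feature import hog
from sklearn.ensemble import RandomForestRegressor

def bar_image(angle_deg: float, size: int = 64) -> np.ndarray:
    """Render a bar rotated by angle_deg as a crude stand-in for an object view."""
    img = np.zeros((size, size))
    r, c = size // 2, size // 2
    a = np.deg2rad(angle_deg)
    dr, dc = int(25 * np.sin(a)), int(25 * np.cos(a))
    rr, cc = line(r - dr, c - dc, r + dr, c + dc)
    img[rr, cc] = 1.0
    return img

def hog_vec(img: np.ndarray) -> np.ndarray:
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

angles = np.linspace(0, 180, 180, endpoint=False)
X = np.array([hog_vec(bar_image(a)) for a in angles])
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, angles)

print("estimated pose angle:", forest.predict([hog_vec(bar_image(42.0))])[0])
```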
  • A framework for controlling wheelchair motion by using gaze information               
    R. Tomari, Y. Kobayashi and Y. Kuno
    International Journal of Integrated Engineering, Volume:5, Number:3, First page:40, Last page:45, 2014
  • A Proactive Approach of Robotic Framework for Making Eye Contact with Humans               
    Mohammed Moshiul Hoque; Yoshinori Kobayashi; Yoshinori Kuno
    Advances in Human-Computer Interaction, Volume:2014, Number:694046, First page:1, Last page:19, 2014, [Reviewed]
    Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases. Therefore, the robot should perform stronger actions in some situations so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring that attention has been captured. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view.
    Hindawi Limited, English, Scientific journal
    DOI:https://doi.org/10.1155/2014/694046
    DOI ID:10.1155/2014/694046, ISSN:1687-5907, SCOPUS ID:84934995536
  • Enhancing Wheelchair's Control Operation of a Severe Impairment User               
    Mohd Razali Md Tomari; Yoshinori Kobayashi; Yoshinori Kuno
    8TH INTERNATIONAL CONFERENCE ON ROBOTIC, VISION, SIGNAL PROCESSING & POWER APPLICATIONS: INNOVATION EXCELLENCE TOWARDS HUMANISTIC TECHNOLOGY, Volume:291, First page:65, Last page:72, 2014, [Reviewed]
    Users with severe motor impairments are unable to control their wheelchair using a standard joystick, and hence an alternative control input is preferred. However, using such an input undoubtedly increases the navigation burden on the user significantly. In this paper, a method for reducing this burden with the help of a smart navigation platform is proposed. Initially, user information is inferred using an IMU sensor and a bite-like switch. Then, information from the environment is obtained using a combination of laser and Kinect sensors. Eventually, information from both the environment and the user is analyzed to decide the final control operation, so that it accords with the user's intention and is safe and comfortable for the people in the surroundings. Experimental results demonstrate the feasibility of the proposed approach.
    SPRINGER, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-981-4585-42-2_8
    DOI ID:10.1007/978-981-4585-42-2_8, ISSN:1876-1100, Web of Science ID:WOS:000337307100008
  • A Robotic Framework for Shifting the Target Human's Attention in Multi-Party Setting               
    Mohammed Moshiul Hoque; Yoshinori Kobayashi; Yoshinori Kuno
    2014 INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV), 2014, [Reviewed]
    It is a major challenge in HRI to design a robotic agent that is able to direct its partner's attention from his/her existing attentional focus towards an intended direction. For this purpose, the agent may first turn its gaze to him/her in order to set up eye contact. However, such a turning action of the agent may not in itself be sufficient to establish eye contact with its partner in all cases, especially when the agent and its partner are not facing each other or the partner is intensely engaged in a task. This paper focuses on designing a robotic framework to shift the target human's attentional focus, from among multiple humans, toward the robot's intended direction. For this purpose, we propose a conceptual framework with three phases: capturing attention, making eye contact, and shifting attention. We conducted an experiment to validate our model in HRI scenarios in which two participants interacted in each session, one as a target and the other as a non-target. Experimental results with twenty participants show the effectiveness of the proposed framework.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000346137900107
  • Analysis of Socially Acceptable Smart Wheelchair Navigation Based on Head Cue Information               
    Razali Tomari; Yoshinori Kobayashi; Yoshinori Kuno
    MEDICAL AND REHABILITATION ROBOTICS AND INSTRUMENTATION (MRRI2013), Volume:42, First page:198, Last page:205, 2014, [Reviewed]
    A smart wheelchair can be defined as a standard powered electric wheelchair equipped with mobile robotic technology to assist the user in a number of situations. Most smart wheelchair work focuses on safety issues, and less work considers social acceptability. Since wheelchairs are normally used in human-shared environments, it is important to ensure that the assistive motion generated by the wheelchair is safe and comfortable for the humans in the surroundings. Here, a framework addressing this issue is proposed. The system initially infers the human's state from head cue information. Next, this information is interpreted to model the human's comfort zone (CZ) based on rules rooted in the concept of proxemics. Finally, the wheelchair's motion is generated by avoiding both the CZ and in-place obstacles. Experimental results demonstrate the feasibility of the proposed framework.
    ELSEVIER SCIENCE BV, English, International conference proceedings
    DOI:https://doi.org/10.1016/j.procs.2014.11.052
    DOI ID:10.1016/j.procs.2014.11.052, ISSN:1877-0509, Web of Science ID:WOS:000373732400027
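    The comfort-zone modelling below is a loose, assumption-laden illustration of the proxemics idea in the abstract, not the paper's method: a person's comfort zone is approximated by an anisotropic Gaussian oriented along the head direction, and candidate wheelchair motions entering it are rejected. All parameters (sigmas, threshold) are invented for the example.
```python
# Head-oriented comfort-zone check for candidate motion targets.
import numpy as np

def comfort_cost(px, py, person_xy, head_angle,
                 sigma_front=1.2, sigma_side=0.6) -> float:
    """Cost of point (px, py) w.r.t. a person; larger in front of the face."""
    dx, dy = px - person_xy[0], py - person_xy[1]
    c, s = np.cos(-head_angle), np.sin(-head_angle)
    fx, fy = c * dx - s * dy, s * dx + c * dy   # rotate into the head frame
    return float(np.exp(-(fx**2 / (2 * sigma_front**2) +
                          fy**2 / (2 * sigma_side**2))))

def motion_allowed(target_xy, person_xy, head_angle, threshold=0.3) -> bool:
    return comfort_cost(*target_xy, person_xy, head_angle) < threshold

if __name__ == "__main__":
    person, gaze = (2.0, 0.0), 0.0                     # person looking along +x
    print(motion_allowed((3.0, 0.0), person, gaze))    # in front: blocked
    print(motion_allowed((2.0, 1.5), person, gaze))    # to the side: allowed
```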
  • Automatic Face Parts Extraction and Facial Expression Recognition               
    Dipankar Das; M. Moshiul Hoque; Jannatul Ferdousi Ara; Mirza A. F. M. Rashidul Hasan; Yoshinori Kobayashi; Yoshinori Kuno
    2014 9TH INTERNATIONAL FORUM ON STRATEGIC TECHNOLOGY (IFOST), First page:128, Last page:131, 2014, [Reviewed]
    Real-time facial expression analysis is an important yet challenging task in human computer interaction. This paper proposes a real-time, person-independent facial expression recognition system using a geometrical feature-based approach. The face geometry is extracted using the modified active shape model. Each part of the face geometry is effectively represented by a Census Transform (CT) based feature histogram. The facial expression is classified by an SVM classifier with an exponential χ²-weighted merging kernel. The proposed method was evaluated on the JAFFE database and in a real-world environment. The experimental results show that the approach yields a high recognition rate and is applicable to real-time facial expression analysis.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000392872100030
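    For concreteness, the sketch below computes the classic 8-bit, 3x3 Census Transform and the per-region feature histogram it yields; the abstract above builds such histograms per facial part. The patch size and bin count here are assumptions.
```python
# 8-bit Census Transform: compare each pixel with its 3x3 neighbours.
import numpy as np

def census_transform(img: np.ndarray) -> np.ndarray:
    """Return an 8-bit census code per interior pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        out |= (neighbour >= center).astype(np.uint8) << bit
    return out

if __name__ == "__main__":
    patch = np.random.default_rng(0).integers(0, 256, (32, 32))
    codes = census_transform(patch)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    print("feature histogram length:", hist.size)   # one 256-bin descriptor
```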
  • Robotic Wheelchair Moving with Multiple Companions               
    Masaya Arai; Yoshihisa Sato; Ryota Suzuki; Yoshinori Kobayashi; Yoshinori Kuno; Satoshi Miyazawa; Mihoko Fukushima; Keiichi Yamazaki; Akiko Yamazaki
    2014 23RD IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (IEEE RO-MAN), First page:513, Last page:518, 2014, [Reviewed]
    We have been conducting research on ways to contribute to reducing the workload of caregivers. We developed a wheelchair capable of following a single accompanying caregiver in a parallel position. However, observations in actual care facilities revealed that people often move as a group: for instance, two caregivers each pushing a wheelchair together. Therefore, we now aim to develop a robotic wheelchair system that allows multiple wheelchairs and accompanying caregivers to coordinate their movement together. However, such a situation necessitates a complex system, and making such a system practical represents an enormous challenge. Therefore, as a first step, we decided to break down this complex "multiple-multiple" situation by isolating particular instances with multiple wheelchairs and/or multiple caregivers. In this paper, we focus on the "single-multiple" situation, i.e., a situation where a single wheelchair user is accompanied by multiple companions. We propose a robotic wheelchair system that facilitates coordinated movement between the wheelchair and the caregivers. In other words, the wheelchair system tracks multiple people in the vicinity using GPGPU, and distinguishes the accompanying caregivers from other passersby based on the trajectories of the individuals' movement.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2014.6926304
    DOI ID:10.1109/ROMAN.2014.6926304, ISSN:1944-9445, DBLP ID:conf/ro-man/AraiSSKKMFYY14, Web of Science ID:WOS:000366603200084
  • Multiple Robotic Wheelchair System Considering Group Communication               
    Ryota Suzuki; Taichi Yamada; Masaya Arai; Yoshihisa Sato; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING (ISVC 2014), PT 1, Volume:8887, First page:805, Last page:814, 2014, [Reviewed]
    In recent years, there has been an increasing demand for elderly care in Japan due to the problems posed by a declining birthrate and an aging population. To deal with this problem, we aim to develop a multiple robotic wheelchair system that moves collaboratively with multiple companions. In actual care settings, we noticed that a group of four people consisting of wheelchair users and their companions tended to break up into two pairs (one wheelchair user and one caregiver) to move around or communicate with each other. Based on this observation, we propose a robotic wheelchair system that facilitates coordinated movement between the wheelchairs and the companions while maintaining formations suitable for communication within the group.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-14249-4_77
    DOI ID:10.1007/978-3-319-14249-4_77, ISSN:0302-9743, DBLP ID:conf/isvc/SuzukiYASKK14, Web of Science ID:WOS:000354694000077
  • Object Recognition Based on Human Description Ontology for Service Robots               
    Hisato Fukuda; Satoshi Mori; Yoshinori Kobayashi; Yoshinori Kuno; Daisuke Kachi
    IECON 2014 - 40TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, First page:4051, Last page:4056, 2014, [Reviewed]
    We are developing a helper robot able to fetch objects requested by users. This robot tries to recognize objects through verbal interaction with the user concerning the objects that it cannot detect autonomously. We have shown that the system can recognize objects based on an ontology for interaction. In this paper, we extend a human description ontology to link a "human description" to "attributes of objects" for our interactive object recognition framework. We develop an interactive object recognition system based on this ontology. Experimental results confirmed that the system could efficiently recognize objects by utilizing this ontology.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IECON.2014.7049109
    DOI ID:10.1109/IECON.2014.7049109, ISSN:1553-572X, DBLP ID:conf/iecon/FukudaMKKK14, Web of Science ID:WOS:000389471603132
  • Designing Robot Eyes and Head and Their Motions for Gaze Communication               
    Tomomi Onuki; Kento Ida; Tomoka Ezure; Takafumi Ishinoda; Kaname Sano; Yoshinori Kobayashi; Yoshinori Kuno
    INTELLIGENT COMPUTING THEORY, Volume:8588, First page:607, Last page:618, 2014, [Reviewed]
    Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication. The gaze of service robots should therefore also perform this function of "showing" in order to facilitate communication with humans. We have already examined which shape of robot eyes is most suitable for gaze reading while giving the friendliest impression, through experiments in which we altered the shape and iris size of robot eyes. However, we need to consider more factors for effective gaze communication. Eyes are facial parts on the head and move with it. Thus, we examine how the robot should move its head when it turns to look at something. Then, we investigate which shape of robot head is suitable for gaze communication. In addition, we consider how the robot should move its eyes and head while not attending to any particular object. We also consider the coordination of head and eye motions and the effect of blinking while the robot turns its head. We propose appropriate head and eye designs and motions, and confirm their effectiveness through experiments with human participants.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-319-09333-8_66
    DOI ID:10.1007/978-3-319-09333-8_66, ISSN:0302-9743, DBLP ID:conf/icic/OnukiIEISKK14, Web of Science ID:WOS:000345518700066
  • Recognizing Groups of Visitors for a Robot Museum Guide Tour               
    Atsushi Kanda; Masaya Arai; Ryota Suzuki; Yoshinori Kobayashi; Yoshinori Kuno
    2014 7TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTIONS (HSI), First page:123, Last page:128, 2014, [Reviewed]
    In this paper, we propose a robot system able to take visitors on guided tours in a museum. When developing a robot capable of giving a tour of an actual museum, it is necessary to implement a robot system able to measure the location and orientation of the visitors using bearing sensors installed in the environment. Furthermore, the robot needs information pertaining to both tour attendees and other visitors in the vicinity in order to effectively lead a tour for the attendees. Therefore, we propose a new robot system consisting of simple elements such as a laser range finder attached to a pole. By merely placing the sensor poles, we can track the location and orientation of visitors. We employ a group detection method to distinguish tour attendees from other people around the robot. In contrast to previous group detection methods, our method can recognize a group of people even when they are not walking. In experiments, our system successfully tracked and grouped visitors in the vicinity even when they were just standing.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/HSI.2014.6860460
    DOI ID:10.1109/HSI.2014.6860460, ISSN:2158-2246, DBLP ID:conf/hsi/KandaA0KK14, Web of Science ID:WOS:000345791900019
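    The grouping step can be illustrated with a simple mutual-proximity clustering, shown below, which detects a group even when its members are standing still, echoing the motivation above. The distance threshold and the union-find formulation are assumptions for the sketch; the actual system also uses orientation and tracking history.
```python
# Group tracked visitors by mutual proximity using union-find.
import numpy as np

def group_visitors(positions: np.ndarray, radius: float = 1.2) -> list[set[int]]:
    """Merge any two visitors closer than `radius` metres into one group."""
    n = len(positions)
    parent = list(range(n))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                parent[find(i)] = find(j)

    groups: dict[int, set[int]] = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

if __name__ == "__main__":
    pts = np.array([[0.0, 0.0], [0.8, 0.2], [0.4, 0.9],   # a standing trio
                    [5.0, 5.0]])                          # a passer-by
    print(group_visitors(pts))   # -> [{0, 1, 2}, {3}]
```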
  • Multiple robotic wheelchair system able to move with a companion using map information               
    Yoshihisa Sato; Ryota Suzuki; Masaya Arai; Yoshinori Kobayashi; Yoshinori Kuno; Mihoko Fukushima; Keiichi Yamazaki; Akiko Yamazaki
    ACM/IEEE International Conference on Human-Robot Interaction, First page:286, Last page:287, 2014, [Reviewed]
    In order to reduce the burden of caregivers facing an increased demand for care, particularly for the elderly, we developed a system whereby multiple robotic wheelchairs can automatically move alongside a companion. This enables a small number of people to assist a substantially larger number of wheelchair users effectively. This system utilizes an environmental map and an estimation of position to accurately identify the positional relations among the caregiver (or a companion) and each wheelchair. The wheelchairs are consequently able to follow along even if the caregiver cannot be directly recognized. Moreover, the system is able to establish and maintain appropriate positional relations.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1145/2559636.2563694
    DOI ID:10.1145/2559636.2563694, ISSN:2167-2148, DBLP ID:conf/hri/SatoSAKKFYY14, SCOPUS ID:84896991756
  • Recognizing gaze pattern for human robot interaction               
    Dipankar Das; Md. Golam Rashed; Yoshinori Kobayashi; Yoshinori Kuno
    ACM/IEEE International Conference on Human-Robot Interaction, First page:142, Last page:143, 2014, [Reviewed]
    In this paper, we propose a human-robot interaction system in which the robot detects and classifies the target human's gaze pattern as either spontaneous looking or scene-relevant looking. If the gaze pattern is detected as spontaneous looking, the robot waits for the target human without disturbing his/her attention. However, if the gaze pattern is detected as scene-relevant looking, the robot establishes a communication channel with him/her in order to explain the scene. We have implemented the proposed system on a robot, Robovie-R3, as a museum guide robot and tested the system to confirm its effectiveness.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1145/2559636.2559818
    DOI ID:10.1145/2559636.2559818, ISSN:2167-2148, DBLP ID:conf/hri/DasRKK14, SCOPUS ID:84896969426
  • Effect of robot's gaze behaviors for attracting and controlling human attention               
    Mohammed Moshiul Hoque; Tomomi Onuki; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCED ROBOTICS, Volume:27, Number:11, First page:813, Last page:829, Aug. 2013, [Reviewed]
    Controlling someone's attention can be defined as shifting his/her attention from the existing direction to another. To shift someone's attention, gaining attention and meeting gaze are the two most important prerequisites. If a robot would like to communicate with a particular person, it should turn its gaze to him/her for eye contact. However, it is not an easy task for the robot to make eye contact because such a turning action alone may not be effective in all situations, especially when the robot and the human are not facing each other or the human is intensely attending to his/her task. Therefore, the robot should perform some actions so that it can attract the target person and make him/her respond to the robot to meet its gaze. In this paper, we present a robot that can attract a target person's attention by moving its head, make eye contact by showing gaze awareness through blinking its eyes, and direct his/her attention by repeatedly turning its eyes and head from the person to the target object. Experiments using 20 human participants confirm the effectiveness of the robot's actions in controlling human attention.
    TAYLOR & FRANCIS LTD, English, Scientific journal
    DOI:https://doi.org/10.1080/01691864.2013.791654
    DOI ID:10.1080/01691864.2013.791654, ISSN:0169-1864, eISSN:1568-5535, DBLP ID:journals/ar/HoqueOKK13, Web of Science ID:WOS:000320579100001
  • Robotic Wheelchair Easy to Move and Communicate with Companions               
    Yoshinori Kobayashi; Yoshinori Kuno; Akiko Yamazaki; Ryota Suzuki; Yoshihisa Sato; Masaya Arai; Keiichi Yamazaki
    Conference on Human Factors in Computing Systems - Proceedings, Volume:2013-, First page:3079, Last page:3082, Apr. 2013, [Reviewed]
    Although it is desirable for wheelchair users to go out alone by operating wheelchairs on their own, they are often accompanied by caregivers or companions. In designing robotic wheelchairs, therefore, it is important to consider not only how to assist the wheelchair user but also how to reduce companions' load and support their activities. We especially focus on communication among wheelchair users and companions, because face-to-face communication is known to be effective in ameliorating elderly mental health. Hence, we proposed a robotic wheelchair able to move alongside a companion. We demonstrate our robotic wheelchair; all attendees can try riding and controlling it.
    Association for Computing Machinery, English, International conference proceedings
    DOI:https://doi.org/10.1145/2468356.2479615
    DOI ID:10.1145/2468356.2479615, DBLP ID:conf/chi/KobayashiSSAKYY13, SCOPUS ID:84955311117
  • Enhancing Wheelchair Manoeuvrability for Severe Impairment Users               
    Razali Tomari; Yoshinori Kobayashi; Yoshinori Kuno
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, Volume:10, First page:1, Last page:13, Feb. 2013, [Reviewed]
    A significant number of individuals with severe motor impairments are unable to control their wheelchair using a standard joystick. Even when they can operate the control input, navigation in confined spaces or crowded environments remains a great challenge. Here we propose a wheelchair framework that enables the user to issue commands via a multi-input hands-free interface (HFI), which then assists him/her in overcoming difficult circumstances using a multimodal control strategy. Initially, the HFI inputs are analysed to infer the desired control mode and the user command. Then, environmental information is perceived using a combination of laser and Kinect sensors to determine all possible obstacle locations and output a safety map of the wheelchair's vicinity. Eventually, the user's command is validated against the safety map to moderate the final motion, so that it is collision-free and best matches the user's preference. The proposed method can reduce the burden on severely impaired users when controlling wheelchairs by continuously monitoring the surroundings, and can let them move easily according to their intentions.
    INTECH -OPEN ACCESS PUBLISHER, English, Scientific journal
    DOI:https://doi.org/10.5772/55477
    DOI ID:10.5772/55477, ISSN:1729-8806, Web of Science ID:WOS:000318846200002
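    A hedged sketch of the final arbitration step described above: a user command from the hands-free interface is validated against a local safety (occupancy) map before being sent to the motors. Grid resolution, the prediction horizon, and the stop-but-allow-rotation policy are assumptions for illustration.
```python
# Validate a (v, w) command against a boolean occupancy grid.
import numpy as np

def moderate_command(v: float, w: float, safety_map: np.ndarray,
                     cell_size: float = 0.1,
                     horizon: float = 1.0) -> tuple[float, float]:
    """Suppress forward speed if occupied cells lie on the short-term path."""
    x, y, th = 0.0, 0.0, 0.0                 # wheelchair frame at grid centre
    cx, cy = safety_map.shape[1] // 2, safety_map.shape[0] // 2
    for _ in range(int(horizon / 0.1)):      # simulate 0.1 s steps ahead
        x += v * 0.1 * np.cos(th)
        y += v * 0.1 * np.sin(th)
        th += w * 0.1
        col, row = cx + int(x / cell_size), cy + int(y / cell_size)
        if not (0 <= row < safety_map.shape[0] and 0 <= col < safety_map.shape[1]):
            break
        if safety_map[row, col]:             # obstacle on the predicted path
            return 0.0, w                    # stop translation, allow rotation
    return v, w

if __name__ == "__main__":
    grid = np.zeros((40, 40), dtype=bool)
    grid[20, 25] = True                      # obstacle 0.5 m straight ahead
    print(moderate_command(0.6, 0.0, grid))  # -> (0.0, 0.0): blocked
    print(moderate_command(0.6, 1.0, grid))  # turning away passes the check
```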
  • Recognizing Objects with Indicated Shapes Based on Object Shape Ontology               
    Mori Satoshi; Fukuda Hisato; Kobayashi Yoshinori; Kuno Yoshinori; Kachi Daisuke
    The Journal of the Institute of Image Electronics Engineers of Japan, Volume:42, Number:4, First page:477, Last page:485, 2013
    Service robots need to be able to recognize objects located in complex environments. Although there has been recent progress in this area, it remains difficult for autonomous vision systems to recognize objects in natural conditions. Thus, we propose an interactive object recognition system, which asks the user to verbally provide information about the objects that it cannot recognize. However, humans may use various expressions to describe objects. Meanwhile, the same verbal expression may indicate different meanings depending on the situation. In this paper, we propose a framework of ontology representing such knowledge for interactive object recognition. Then, as the first step, we construct an ontology on object shape and develop an interactive object recognition system based on this ontology. It can recognize objects that have shapes indicated by natural language. We confirm the usefulness of the system through experiments using various daily objects.
    The Institute of Image Electronics Engineers of Japan, Japanese
    DOI:https://doi.org/10.11371/iieej.42.477
    DOI ID:10.11371/iieej.42.477, ISSN:0285-9831, CiNii Articles ID:130005108141, CiNii Books ID:AA12563298
  • Designing robot eyes for communicating gaze               
    Tomomi Onuki; Takafumi Ishinoda; Emi Tsuburaya; Yuki Miyata; Yoshinori Kobayashi; Yoshinori Kuno
    Interaction Studies, Volume:14, Number:3, First page:451, Last page:479, 2013, [Reviewed]
    Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication, such as through establishing eye contact and joint attention. The eyes of service robots should therefore also perform both of these functions. Moreover, they should be friendly in appearance so that humans may feel comfortable with the robots. Therefore we maintain that it is important to consider gaze communication capability and friendliness in designing the appearance of robot eyes. In this paper, we propose a new robot face with rear-projected eyes for changing their appearance while simultaneously realizing the showing of gaze by incorporating stereo cameras. Additionally, we examine which shape of robot eyes is most suitable for gaze reading and gives the friendliest impression, through experiments where we altered the shape and iris size of robot eyes.
    John Benjamins Publishing Company, English, Scientific journal
    DOI:https://doi.org/10.1075/is.14.3.07onu
    DOI ID:10.1075/is.14.3.07onu, ISSN:1572-0381, SCOPUS ID:84902243637
  • An intelligent human-robot interaction framework to control the human attention               
    Mohammed Moshiul Hoque; Kaushik Deb; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    2013 International Conference on Informatics, Electronics and Vision, ICIEV 2013, 2013, [Reviewed]
    Attention control can be defined as shifting someone's attention from his/her existing attentional focus to another. However, it is not an easy task for a robot to control a human's attention toward its intended direction, especially when the robot and the human are not facing each other, or the human is intensely attending to his/her task. The robot should convey some communicative intention through appropriate actions according to the human's situation. In this paper, we propose a robotic framework to control human attention in terms of three phases: attracting attention, making eye contact, and shifting attention. Results show that the robot can attract a person's attention by three actions: head turning, head shaking, and uttering reference terms, corresponding to three viewing situations in which the human vision senses the robot (near peripheral field of view, far peripheral field of view, and out of field of view). After gaining attention, the robot makes eye contact by showing gaze awareness through blinking its eyes, and directs the human's attention by a combination of eye and head turning behaviors to share an object. Experiments using sixteen participants confirm the effectiveness of the proposed framework in controlling human attention.
    English, International conference proceedings
    DOI:https://doi.org/10.1109/ICIEV.2013.6572539
    DOI ID:10.1109/ICIEV.2013.6572539, SCOPUS ID:84883358061
  • Question strategy and interculturality in human-robot interaction               
    Mihoko Fukushima; Rio Fujita; Miyuki Kurihara; Tomoyuki Suzuki; Keiichi Yamazaki; Akiko Yamazaki; Keiko Ikeda; Yoshinori Kuno; Yoshinori Kobayashi; Takaya Ohyama; Eri Yoshida
    ACM/IEEE International Conference on Human-Robot Interaction, First page:125, Last page:126, 2013, [Reviewed]
    This paper demonstrates the ways in which multiparty human participants in two language groups, Japanese and English, engage with a quiz robot when they are asked a question. We focus on both speech and bodily conduct, where we discovered both universalities and differences.
    English, International conference proceedings
    DOI:https://doi.org/10.1109/HRI.2013.6483533
    DOI ID:10.1109/HRI.2013.6483533, ISSN:2167-2148, SCOPUS ID:84875746420
  • Design of robot eyes suitable for gaze communication               
    Tomomi Onuki; Takafumi Ishinoda; Yoshinori Kobayashi; Yoshinori Kuno
    ACM/IEEE International Conference on Human-Robot Interaction, First page:203, Last page:204, 2013, [Reviewed]
    Human eyes not only serve the function of enabling us 'to see' something, but also perform the vital role of allowing us 'to show' our gaze for non-verbal communication. The eyes of service robots should therefore also perform both of these functions. Moreover, they should be friendly in appearance so that humans may feel comfortable with the robots. Therefore we maintain that it is important to consider gaze communication capability and friendliness in designing the appearance of robot eyes. In this paper, we propose a new robot face with rear-projected eyes for changing their appearance while simultaneously realizing the sight function by incorporating stereo cameras. Additionally, we examine which shape of robot eyes is most suitable for gaze reading and gives the friendliest impression, through experiments where we altered the shape and iris size of robot eyes.
    English, International conference proceedings
    DOI:https://doi.org/10.1109/HRI.2013.6483572
    DOI ID:10.1109/HRI.2013.6483572, ISSN:2167-2148, SCOPUS ID:84875716625
  • Object recognition for service robots based on human description of object attributes               
    Hisato Fukuda; Satoshi Mori; Katsutoshi Sakata; Yoshinori Kobayashi; Yoshinori Kuno
    IEEJ Transactions on Electronics, Information and Systems, Volume:133, Number:1, First page:18, Last page:27, 2013, [Reviewed]
    In order to be effective, it is essential for service robots to be able to recognize objects in complex environments. However, it is difficult for them to recognize objects autonomously without any mistakes in a real-world environment. Thus, in response to this challenge we conceived of an object recognition system that would utilize information about target objects acquired from the user through simple interaction. In this paper, we propose an interactive object recognition system using multiple attribute information (color, shape, and material), and introduce a robot using this system. Experimental results confirmed that the robot could indeed recognize objects by utilizing multiple attribute information obtained through interaction with the user.
    Institute of Electrical Engineers of Japan, English, International conference proceedings
    DOI:https://doi.org/10.1541/ieejeiss.133.18
    DOI ID:10.1541/ieejeiss.133.18, ISSN:1348-8155, SCOPUS ID:84873823489
  • A mobile guide robot capable of establishing appropriate spatial formations               
    Mohammad Abu Yousuf; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    IEEJ Transactions on Electronics, Information and Systems, Volume:133, Number:1, First page:28, Last page:39, 2013, [Reviewed]
    This paper presents a model for a mobile museum guide robot that can dynamically establish an appropriate spatial relationship with visitors during explanation of an exhibit. We began by observing and videotaping scenes of actual museum galleries where human guides explained exhibits to visitors. Based on the analysis of the video, we developed a mobile robot system able to guide multiple visitors inside the gallery from one exhibit to another. The robot has the capability to establish a type of spatial formation known as the "F-formation" at the beginning of its explanation after arriving near any exhibit, a feature aided by its ability to employ the "pause and restart" strategy at certain moments in its talk to draw the visitors' attention towards itself. The robot is also able to identify and invite any bystanders around itself into its ongoing explanation, thereby reconfiguring the F-formation. The system uses spatial information from a laser range sensor and the heads of visitors are tracked using three USB cameras. A particle filter framework is employed to track the visitors' positions and body orientation, and the orientations of their heads, based on position data and panorama images captured by the laser range sensor and the USB cameras, respectively. The effectiveness of our method was confirmed through experiments.
    Institute of Electrical Engineers of Japan, English, International conference proceedings
    DOI:https://doi.org/10.1541/ieejeiss.133.28
    DOI ID:10.1541/ieejeiss.133.28, ISSN:1348-8155, SCOPUS ID:84873801282
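    As a worked illustration of the particle-filter tracking named in the abstract, the compact sketch below propagates (x, y, heading) particles with a simple motion model and reweights them against noisy position measurements. The motion model, noise levels, and multinomial resampling are assumptions, not the paper's implementation.
```python
# Minimal particle filter tracking a walking target's position and heading.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = np.zeros((N, 3))                    # columns: x, y, heading

def predict(p, dt=0.1, v=0.5, sigma=(0.05, 0.05, 0.1)):
    """Propagate particles with a constant-speed motion model plus noise."""
    p[:, 0] += v * dt * np.cos(p[:, 2]) + rng.normal(0, sigma[0], N)
    p[:, 1] += v * dt * np.sin(p[:, 2]) + rng.normal(0, sigma[1], N)
    p[:, 2] += rng.normal(0, sigma[2], N)

def update(p, z, meas_sigma=0.2):
    """Weight by a Gaussian position likelihood, then resample (multinomial)."""
    d2 = (p[:, 0] - z[0])**2 + (p[:, 1] - z[1])**2
    w = np.exp(-d2 / (2 * meas_sigma**2))
    w /= w.sum()
    return p[rng.choice(N, N, p=w)].copy()

for t in range(20):                             # target walks along +x
    predict(particles)
    z = np.array([0.05 * (t + 1), 0.0]) + rng.normal(0, 0.1, 2)
    particles = update(particles, z)

est = particles.mean(axis=0)
print(f"estimated position: ({est[0]:.2f}, {est[1]:.2f})")
```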
  • An Empirical Robotic Framework for Interacting with Multiple Humans               
    Mohammed Moshiul Hoque; Quazi Delwar Hossian; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno; Kaushik Deb
    2013 INTERNATIONAL CONFERENCE ON ELECTRICAL INFORMATION AND COMMUNICATION TECHNOLOGY (EICT), 2013, [Reviewed]
    Currently, work in robotics is expanding from industrial robots to robots employed in the living environment. For robots to be accepted into the real world, they must be capable of behaving the way humans do with other humans. This paper focuses on designing a robotic framework to interact with multiple humans in a natural and social way. To evaluate the robotic framework, we conducted an experiment in which it performed an important social function in conversation: initiating interaction with the target human in a multiparty setting. Results show that the proposed robotic system can initiate an interaction process in four viewing situations.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000342218600043
  • Interactions between a quiz robot and multiple participants Focusing on speech, gaze and bodily conduct in Japanese and English speakers               
    Akiko Yamazaki; Keiichi Yamazaki; Keiko Ikeda; Matthew Burdelski; Mihoko Fukushima; Tomoyuki Suzuki; Miyuki Kurihara; Yoshinori Kuno; Yoshinori Kobayashi
    INTERACTION STUDIES, Volume:14, Number:3, First page:366, Last page:389, 2013, [Reviewed]
    This paper reports on a quiz robot experiment in which we explore similarities and differences in human participant speech, gaze, and bodily conduct in responding to a robot's speech, gaze, and bodily conduct across two languages. Our experiment involved three-person groups of Japanese and English-speaking participants who stood facing the robot and a projection screen that displayed pictures related to the robot's questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, we found that the frequency of English speakers' head nodding was higher than that of Japanese speakers in human-robot interaction (HRI). Our findings suggest that the coordination of the robot's verbal and non-verbal actions surrounding TRPs, key words, and deictic words and expressions is important for facilitating HRI irrespective of participants' native language.
    JOHN BENJAMINS PUBLISHING COMPANY, English, Scientific journal
    DOI:https://doi.org/10.1075/is.14.3.04yam
    DOI ID:10.1075/is.14.3.04yam, ISSN:1572-0373, eISSN:1572-0381, Web of Science ID:WOS:000338351400005
  • How to move towards visitors: A model for museum guide robots to initiate conversation               
    Mohammad A. Yousuf; Yoshinori Kobayashi; Yoshinori Kuno; Akiko Yamazaki; Keiichi Yamazaki
    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, First page:587, Last page:592, 2013, [Reviewed]
    Museum guide robots should observe visitors in order to identify those who may desire a guide, and then initiate conversation with them. This paper presents a model for such robot behavior. Initiation of conversation is an important concern for social service robots such as museum guide robots. When people enter into a social interaction, they tend to situate themselves in a spatial-orientational arrangement such that each is facing inward around a space to which each has immediate access. When this kind of particular spatial formation occurs, they can feel that they are participating in the conversation; once they perceive their participation, they will subsequently try to maintain this spatial formation. We developed a model that describes the constraints and expected behaviors in initiation of conversation. Experimental results demonstrate that our model significantly improves a robot's performance in initiating conversation.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2013.6628543
    DOI ID:10.1109/ROMAN.2013.6628543, DBLP ID:conf/ro-man/YousufKKYY13, SCOPUS ID:84889560323
  • A maneuverable robotic wheelchair able to move adaptively with a caregiver by considering the situation               
    Yoshihisa Sato; Masaya Arai; Ryota Suzuki; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, First page:282, Last page:287, 2013, [Reviewed]
    Although it is desirable for wheelchair users to move alone by operating wheelchairs themselves, they are often pushed or accompanied by caregivers, especially in elderly care facilities. In designing robotic wheelchairs, therefore, it is important to consider how to reduce the caregiver's load and support their activities. We particularly focus on communication among wheelchair users and caregivers because face-to-face communication is known to be effective in ameliorating elderly mental health issues. Hence, we propose a maneuverable robotic wheelchair that can move autonomously towards a goal while considering the situation of the accompanying caregiver. Our robotic wheelchair starts to move autonomously towards the goal as soon as the caregiver sets the goal position, such as a bathroom, by utilizing its touch panel interface. While moving towards the goal, the robotic wheelchair autonomously controls its speed and direction of movement to facilitate natural communication between the wheelchair user and the caregiver, by observing the situation of the accompanying caregiver.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2013.6628460
    DOI ID:10.1109/ROMAN.2013.6628460, DBLP ID:conf/ro-man/SatoASKKYY13, SCOPUS ID:84889597248
  • Object Recognition for Service Robots through Verbal Interaction Based on Ontology               
    Hisato Fukuda; Satoshi Mori; Yoshinori Kobayashi; Yoshinori Kuno; Daisuke Kachi
    ADVANCES IN VISUAL COMPUTING, ISVC 2013, PT I, Volume:8033, First page:395, Last page:406, 2013, [Reviewed]
    We are developing a helper robot able to fetch objects requested by users. This robot tries to recognize objects through verbal interaction with the user concerning objects that it cannot detect autonomously. Since the robot recognizes objects based on verbal interaction with the user, such a robot must by necessity understand human descriptions of said objects. However, humans describe objects in various ways: they may describe attributes of whole objects, those of parts, or those viewable from a certain direction. Moreover, they may use the same descriptions to describe a range of different objects. In this paper, we propose an ontological framework for interactive object recognition to deal with such varied human descriptions. In particular, we consider human descriptions about object attributes, and develop an interactive object recognition system based on this ontology.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-41914-0_39
    DOI ID:10.1007/978-3-642-41914-0_39, ISSN:0302-9743, DBLP ID:conf/isvc/FukudaMKKK13, Web of Science ID:WOS:000335391300039
  • Attracting Attention and Establishing a Communication Channel Based on the Level of Visual Focus of Attention               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    2013 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), First page:2194, Last page:2201, 2013, [Reviewed]
    Recent research in HRI has emphasized the need to design affective interaction systems equipped with social intelligence. A robot's awareness of its social role encompasses the ability to behave in a socially acceptable manner, the ability to communicate appropriately according to the situation, and the ability to detect the feelings of interactive partners, as humans do with one another. In this paper, we propose an intelligent robotic method of attracting a target person's attention in a way congruent to satisfying these social requirements. If the robot needs to initiate communication urgently, such as in the case of reporting an emergency, it does not need to consider the current situation of the person it is addressing. Otherwise, the robot should observe the person to ascertain who or what s/he is looking at (VFOA), and how attentively s/he is doing so (VFOA level). Moreover, the robot must identify an appropriate time at which to attract the target person's attention so as to not interfere with his/her work. We have realized just such a robotic system by developing computer vision methods to detect a target person's VFOA and its level, and testing the system's effectiveness in a series of experiments.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IROS.2013.6696663
    DOI ID:10.1109/IROS.2013.6696663, ISSN:2153-0858, DBLP ID:conf/iros/DasKK13, Web of Science ID:WOS:000331367402053
  • Recognition of Request through Hand Gesture for Mobile Care Robots               
    Tomoya Tabata; Yoshinori Kobayashi; Yoshinori Kuno
    39TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY (IECON 2013), First page:8312, Last page:8316, 2013, [Reviewed]
    We are developing a service robot that provides assisted-care, such as serving tea to the elderly in care facilities. From the sociological analysis of human-human interaction in care facilities, we have found that people often use nonverbal behaviors such as raising their hand to make a request to caregivers. In this paper, we propose a request gesture recognition method for our service robot. In day care facilities, many people are moving around. Hence it is a difficult problem to correctly recognize request gestures among various movements. We have solved this problem by exploiting the sociological interaction analysis result. We have found that a person who would like to make a request to a caregiver waits for the moment when the caregiver looks in the direction towards the person, and then shows a request signal such as raising his/her hand. This means that the robot needs to analyze human motions only when it finds a frontal face in the direction of its face. We have developed a robot system to perform such request recognition and confirmed its effectiveness through experiments.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IECON.2013.6700525
    DOI ID:10.1109/IECON.2013.6700525, ISSN:1553-572X, DBLP ID:conf/iecon/TabataKK13, Web of Science ID:WOS:000331149508019
  • Tracking Visitors with Sensor Poles for Robot's Museum Guide Tour               
    Takaya Oyama; Eri Yoshida; Yoshinori Kobayashi; Yoshinori Kuno
    2013 6TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTIONS (HSI), First page:645, Last page:650, 2013, [Reviewed]
    In this paper, we propose a robot system which can take visitors on guided tours in a museum. When we consider developing a robot capable of giving a tour of an actual museum, we must implement a robot system able to measure the location and orientation of the robot and visitors using bearing sensors installed in the environment. Although many previous methods employed markers or tags attached to the robot to obtain positional information through cameras, it is not easy to situate sensors in the environment itself. SLAM is also used to localize the position of the robot. However, its robustness may not be sufficient when the system is deployed in an actual real-life situation, since many people will be moving freely around the robot. On the other hand, the robot needs information pertaining to both tour attendees and other visitors in the vicinity in order to give a tour for the attendees. Therefore, we propose a new robot system which consists of simple devices such as multiple laser range finders attached to a pole. By just placing the sensor poles, we can track the location and orientation of the robot and visitors at the same time. We then conducted experiments to confirm the effectiveness and accuracy of our system. In addition, we conducted demonstration experiments in which our robot took three visitors on a guided tour.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/HSI.2013.6577893
    DOI ID:10.1109/HSI.2013.6577893, ISSN:2158-2246, DBLP ID:conf/hsi/OhyamaYKK13, Web of Science ID:WOS:000333257400099
  • Design of Robot Eyes Suitable for Gaze Communication               
    Tomomi Onuki; Takafumi Ishinoda; Yoshinori Kobayashi; Yoshinori Kuno
    PROCEEDINGS OF THE 8TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2013), First page:203, Last page:204, 2013, [Reviewed]
    Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication. The eyes of service robots should therefore also perform both of these functions. Moreover, they should be friendly in appearance so that humans may feel comfortable with the robots. Therefore we maintain that it is important to consider gaze communication capability and friendliness in designing the appearance of robot eyes. In this paper, we propose a new robot face with rear-projected eyes for changing their appearance while simultaneously realizing the sight function by incorporating stereo cameras. Additionally, we examine which shape of robot eyes is most suitable for gaze reading and gives the friendliest impression, through experiments where we altered the shape and iris size of robot eyes.
    IEEE, English, International conference proceedings
    ISSN:2167-2121, DBLP ID:conf/hri/OnukiIKK13, Web of Science ID:WOS:000320655500078
  • Question Strategy and Interculturality in Human-Robot Interaction               
    Mihoko Fukushima; Rio Fujita; Miyuki Kurihara; Tomoyuki Suzuki; Keiichi Yamazaki; Akiko Yamazaki; Keiko Ikeda; Yoshinori Kuno; Yoshinori Kobayashi; Takaya Ohyama; Eri Yoshida
    PROCEEDINGS OF THE 8TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2013), First page:125, Last page:+, 2013, [Reviewed]
    This paper demonstrates the ways in which multi-party human participants from two language groups, Japanese and English, engage with a quiz robot when they are asked a question. We focus on both speech and bodily conduct, in which we discovered both universalities and differences.
    IEEE, English, International conference proceedings
    ISSN:2167-2121, DBLP ID:conf/hri/FukushimaFKSYYIKKOY13, Web of Science ID:WOS:000320655500039
  • Attention control system considering the target person's attention level               
    Dipankar Das; Mohammed Moshiul Hoque; Yoshinori Kobayashi; Yoshinori Kuno
    ACM/IEEE International Conference on Human-Robot Interaction, First page:111, Last page:112, 2013, [Reviewed]
    In this paper, we propose an attention control system for social robots that attracts and controls the attention of a target person depending on his/her current attentional focus. The system recognizes the current task of the target person and estimates its level of focus by using the 'task related behavior pattern' of the target human. The attention level is used to determine the suitable cues to attract the target person's attention toward the robot. The robot detects the interest or willingness of the target person to interact with it. Then, depending on the level of interest, the robot displays an awareness signal and shifts his/her attention to an intended goal direction. © 2013 IEEE.
    IEEE/ACM, English, International conference proceedings
    DOI:https://doi.org/10.1109/HRI.2013.6483526
    DOI ID:10.1109/HRI.2013.6483526, ISSN:2167-2148, DBLP ID:conf/hri/DasHKK13, SCOPUS ID:84875694200
  • Designing Robot Eyes for Gaze Communication               
    Tomomi Onuki; Takafumi Ishinoda; Yoshinori Kobayashi; Yoshinori Kuno
    PROCEEDINGS OF THE 19TH KOREA-JAPAN JOINT WORKSHOP ON FRONTIERS OF COMPUTER VISION (FCV 2013), First page:97, Last page:102, 2013, [Reviewed]
    Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication, such as through establishing eye contact and joint attention. The eyes of service robots should therefore also perform both of these functions. Moreover, they should be friendly in appearance so that humans may feel comfortable with the robots. Therefore we maintain that it is important to consider gaze communication capability and friendliness in designing the appearance of robot eyes. In this paper, we propose a new robot face with rear-projected eyes for changing their appearance while simultaneously realizing the sight function by incorporating stereo cameras. Additionally, we examine which shape of robot eyes is most suitable for gaze reading and gives the friendliest impression, through experiments where we altered the shape and iris size of robot eyes.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/FCV.2013.6485468
    DOI ID:10.1109/FCV.2013.6485468, ISSN:2165-1051, DBLP ID:conf/fcv/OnukiIKK13, Web of Science ID:WOS:000318404900020
  • Tracking a Robot and Visitors in a Museum Using Sensor Poles               
    Takaya Ohyama; Eri Yoshida; Yoshinori Kobayashi; Yoshinori Kuno
    PROCEEDINGS OF THE 19TH KOREA-JAPAN JOINT WORKSHOP ON FRONTIERS OF COMPUTER VISION (FCV 2013), First page:36, Last page:41, 2013, [Reviewed]
    In this paper, we propose a robot system which can take visitors on guided tours in a museum. When we consider developing a robot capable of giving a tour of an actual museum, we must implement a robot system able to measure the location and orientation of the robot and visitors using bearing sensors installed in a specific environment. Although many previous methods employed markers or tags attached to the robot to obtain positional information through cameras, it is not easy to situate sensors in the environment itself. SLAM is also used to localize the position of the robot. However, its robustness may not be sufficient when the system is deployed in an actual real-life situation since there will be many people moving freely around the robot. On the other hand, the robot needs information pertaining to both tour attendees and other visitors in the vicinity, in order to give a tour for the attendees. Therefore, we propose a new robot system which consists of simple devices such as multiple laser range finders attached to a pole. By just placing the sensor poles, we could track the location and orientation of the robot and visitors at the same time. We then conducted experiments to confirm the effectiveness and accuracy of our system.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/FCV.2013.6485456
    DOI ID:10.1109/FCV.2013.6485456, ISSN:2165-1051, DBLP ID:conf/fcv/OhyamaYKK13, Web of Science ID:WOS:000318404900008
  • Development of Smart Wheelchair System for a user with Severe Motor Impairment               
    R. Tomari; Y. Kobayashi; Y. Kuno
    Procedia Engineering, Volume:41, First page:538, Last page:546, 2012, [Reviewed]
    Scientific journal
  • Empirical framework for autonomous wheelchair systems in human-shared environments               
    Razali Tomari; Yoshinori Kobayashi; Yoshinori Kuno
    2012 IEEE International Conference on Mechatronics and Automation, ICMA 2012, First page:493, Last page:498, 2012, [Reviewed]
    Autonomous robotic wheelchairs are normally used in human-shared environments and hence need to move in a safe and comfortable way. Here, we propose a framework that enables such motion in real indoor environments. First, human regions are detected and tracked from head cues. Next, the tracking attributes are interpreted to model each human's comfort zone (CZ), based on interaction rules rooted in the concept of proxemics (see the sketch following this entry). Finally, the wheelchair's trajectory is generated by respecting the CZ while simultaneously avoiding obstacles in its path. Experimental results demonstrate the feasibility of the proposed framework. © 2012 IEEE.
    English, International conference proceedings
    DOI:https://doi.org/10.1109/ICMA.2012.6283096
    DOI ID:10.1109/ICMA.2012.6283096, SCOPUS ID:84867607499
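    A minimal sketch of a proxemics-style comfort-zone test, modeling the CZ as an ellipse elongated along the person's facing direction; the ellipse dimensions are illustrative assumptions, not the paper's parameters:

      # Sketch: reject candidate waypoints that violate a human's comfort zone.
      import math

      def in_comfort_zone(px, py, hx, hy, h_theta, front=1.2, side=0.6):
          """True if (px, py) lies inside the CZ of a human at (hx, hy)
          facing h_theta (radians). front/side are ellipse semi-axes (m)."""
          dx, dy = px - hx, py - hy
          # Rotate into the human's frame so the ellipse axes align.
          u = math.cos(h_theta) * dx + math.sin(h_theta) * dy
          v = -math.sin(h_theta) * dx + math.cos(h_theta) * dy
          return (u / front) ** 2 + (v / side) ** 2 < 1.0

      waypoints = [(0.5, 0.0), (2.5, 0.0)]
      # Human standing at (1, 0), facing along +x.
      safe = [w for w in waypoints if not in_comfort_zone(*w, 1.0, 0.0, 0.0)]
      print(safe)   # -> [(2.5, 0.0)]; the near waypoint is discarded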
  • Design An Intelligent Robotic Head to Interacting with Humans               
    Mohammed Moshiul Hoque; Kaushik Deb; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    2012 15TH INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY (ICCIT), First page:539, Last page:545, 2012, [Reviewed]
    Attracting people's attention to initiate an interaction is one of the most fundamental social capabilities for both humans and robots. If a robot would like to communicate with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases, especially when the robot and the target person are at a greater distance and are not facing each other. Therefore, the robot should perform some actions so that it can attract the target person's attention before meeting his/her gaze. In this paper, we propose three actions for the robot (head turning, head shaking, and uttering reference terms) as attention-attraction capabilities corresponding to three viewing situations of the human (near peripheral field of view, far peripheral field of view, and out of field of view). A preliminary experiment with twelve participants confirms the effectiveness of the robot's actions in attracting human attention.
    IEEE, English, International conference proceedings
    ISSN:2474-9648, Web of Science ID:WOS:000392934600094
  • Robotic Wheelchair with Omni-directional Vision for Moving alongside a Caregiver               
    Yoshinori Kobayashi; Ryota Suzuki; Yoshinori Kuno
    38TH ANNUAL CONFERENCE ON IEEE INDUSTRIAL ELECTRONICS SOCIETY (IECON 2012), First page:4177, Last page:4182, 2012, [Reviewed]
    Recently, several robotic/intelligent wheelchairs equipped with user-friendly interfaces and/or autonomous functions to fulfill set goals have been proposed. Although wheelchair users may wish to go out alone, caregivers often accompany them. Therefore, it is important to consider how to reduce the caregivers' load while supporting their activities and facilitating communication between the caregiver and wheelchair user. Hence, we have proposed a robotic wheelchair that can move alongside a caregiver by observing his/her behavior, such as body position and orientation. This form of motion enables easy communication between the wheelchair user and the caregiver. However, in our previous system, the caregiver had to set their own initial position by tapping the touchscreen displaying the distance-data image captured by a laser range sensor. In this paper, therefore, we introduce a new interface that incorporates an omnidirectional camera so that the accompanying caregiver can easily set the initial position by tapping his/her facial image instead. Moreover, when the system loses the accompanying caregiver because of an unexpected motion, it promptly re-identifies the caregiver using color appearance cues (see the sketch following this entry). These functions improve the usefulness of our robotic wheelchair and heighten the possibility of our wheelchair being utilized in daily life.
    IEEE, English, International conference proceedings
    ISSN:1553-572X, Web of Science ID:WOS:000316962904023
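    A minimal sketch of re-identification by color appearance, assuming HSV histograms compared by correlation; the actual cues used by the wheelchair system may differ:

      # Sketch: pick the candidate whose clothing colors best match the lost
      # caregiver's stored appearance.
      import cv2
      import numpy as np

      def hsv_hist(bgr_patch):
          """Normalized 2D hue-saturation histogram of an image patch."""
          hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
          return cv2.normalize(hist, hist).flatten()

      def best_match(reference_hist, candidate_patches):
          """Index of the candidate most similar to the stored reference."""
          scores = [cv2.compareHist(reference_hist, hsv_hist(p),
                                    cv2.HISTCMP_CORREL)
                    for p in candidate_patches]
          return int(np.argmax(scores))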
  • Robotic Wheelchair Moving with Caregiver Collaboratively               
    Yoshinori Kobayashi; Yuki Kinpara; Erii Takano; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    ADVANCED INTELLIGENT COMPUTING THEORIES AND APPLICATIONS: WITH ASPECTS OF ARTIFICIAL INTELLIGENCE, Volume:6839, First page:523, Last page:+, 2012, [Reviewed]
    This paper introduces a robotic wheelchair that can automatically move alongside a caregiver. Because wheelchair users are often accompanied by caregivers, it is vital to consider how to reduce a caregiver's load and support their activities, while simultaneously facilitating communication between the caregiver and the wheelchair user. Moreover, a sociologist pointed out that when a wheelchair user is accompanied by a companion, the latter is inevitably seen by others as a caregiver rather than a friend. In other words, the equality of the relationship is publicly undermined when the wheelchair is pushed by a companion. Hence, we propose a robotic wheelchair able to move alongside a companion, and facilitate easy communication between the companion and the wheelchair user. Laser range sensors are used for tracking the caregiver and observing the environment around the wheelchair. To confirm the effectiveness of the wheelchair in real-world situations, we conducted experiments at an elderly care center in Japan. Results and analyses are also reported in this paper.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-25944-9_68
    DOI ID:10.1007/978-3-642-25944-9_68, ISSN:0302-9743, DBLP ID:conf/icic/KobayashiKTKYY11, Web of Science ID:WOS:000306498200068
  • Vision-based attention control system for socially interactive robots               
    Dipankar Das; Mohammed Moshiul Hoque; Tomomi Onuki; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, First page:496, Last page:502, 2012, [Reviewed]
    A social robot needs to attract the attention of a target human and shift it from his/her current focus to what is sought by the robot. The robot should recognize the target's current attention level to perform this attention control smoothly. In this paper, we propose a vision-based system to detect the level of attention, or willingness, of the target person towards the robot and to control his/her attention. The system estimates the attention level from rich visual cues of the human's face and head. Then, depending on the target's attention level, it generates awareness signals at an appropriate time and makes eye contact with the target. Finally, the robot shifts the target's attention to an intended direction. The experimental results reveal that the proposed system is effective in controlling the target's attention. © 2012 IEEE.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2012.6343800
    DOI ID:10.1109/ROMAN.2012.6343800, DBLP ID:conf/ro-man/DasHOKK12, SCOPUS ID:84870837100
  • Model for controlling a target human's attention in multi-party settings               
    Mohammed Moshiul Hoque; Dipankar Das; Tomomi Onuki; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, First page:476, Last page:483, 2012, [Reviewed]
    It is a major challenge in HRI to design a social robot that is able to direct a target human's attention towards an intended direction. For this purpose, the robot may first turn its gaze to him/her in order to establish eye contact. However, such a turning action of the robot may not in itself be sufficient to make eye contact with the target person in all situations, especially when the robot and the person are not facing each other or the human is intensely engaged in a task. In this paper, we propose a conceptual model of attention control with five phases: attention attraction, eye contact, attention avoidance, gaze back, and attention shift. We conducted two experiments to validate our model in human-robot interaction scenarios. © 2012 IEEE.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2012.6343797
    DOI ID:10.1109/ROMAN.2012.6343797, DBLP ID:conf/ro-man/HoqueDOKK12, SCOPUS ID:84870817840
  • Object Recognition for Service Robots through Verbal Interaction about Multiple Attribute Information               
    Hisato Fukuda; Satoshi Mori; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, ISVC 2012, PT I, Volume:7431, First page:620, Last page:631, 2012, [Reviewed]
    In order to be effective, it is essential for service robots to be able to recognize objects in complex environments. However, it is a difficult problem for them to recognize objects autonomously without any mistakes in a real-world environment. Thus, in response to this challenge we conceived of an object recognition system that would utilize information about target objects acquired from the user through simple interaction. In this paper, we propose image processing techniques to consider the shape composition and the material composition of objects in an interactive object recognition framework, and introduce a robot using this interactive object recognition system. Experimental results confirmed that the robot could indeed recognize objects by utilizing multiple attribute information obtained through interaction with the user.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-33179-4_59
    DOI ID:10.1007/978-3-642-33179-4_59, ISSN:0302-9743, DBLP ID:conf/isvc/FukudaMKK12, Web of Science ID:WOS:000363266600059
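    A minimal sketch of attribute-based narrowing in an interactive recognition loop; the candidate objects and attribute values are illustrative assumptions:

      # Sketch: filter recognition candidates by attributes obtained through
      # dialogue ("What color is it?" -> "red"; "What is it made of?" -> "plastic").
      candidates = [
          {"name": "mug",    "color": "red",   "material": "ceramic"},
          {"name": "bottle", "color": "green", "material": "plastic"},
          {"name": "cup",    "color": "red",   "material": "plastic"},
      ]

      def ask_and_filter(candidates, attribute, value):
          """Keep only candidates whose attribute matches the user's answer."""
          return [c for c in candidates if c.get(attribute) == value]

      remaining = ask_and_filter(candidates, "color", "red")
      remaining = ask_and_filter(remaining, "material", "plastic")
      print(remaining)   # -> only the red plastic cup survives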
  • A Spatial-Based Approach for Groups of Objects               
    Lu Cao; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, ISVC 2012, PT II, Volume:7432, First page:597, Last page:608, 2012, [Reviewed]
    We introduce a spatial-based feature approach to locating and recognizing objects in cases where several identical or similar objects are grouped together. With respect to cognition, humans specify such cases with a group-based reference system, which can be considered an extension of conventional reference systems. A spatial-based feature, on the other hand, is straightforward and distinctive, making it more suitable for object recognition tasks. We evaluate this approach on eight diverse object categories and thereby provide comprehensive results. The performance exceeds the state of the art with higher accuracy, fewer attempts, and faster running time.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-33191-6_59
    DOI ID:10.1007/978-3-642-33191-6_59, ISSN:0302-9743, DBLP ID:conf/isvc/CaoKK12, Web of Science ID:WOS:000363265800059
  • Wide Field of View Kinect Undistortion for Social Navigation Implementation               
    Razali Tomari; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, ISVC 2012, PT II, Volume:7432, First page:526, Last page:535, 2012, [Reviewed]
    In planning navigation schemes for social robots, distinguishing between humans and other obstacles is crucial for obtaining a safe and comfortable motion. A Kinect camera is capable of fulfilling such a task but unfortunately can only deliver a limited field of view (FOV). Recently a lens that is capable of improving the Kinect's FOV has become commercially available from Nyko. However, this lens causes a distortion in the RGB-D data, including the depth values. To address this issue, we propose a two-staged undistortion strategy. Initially, pixel locations in both RGB and depth images are corrected using an inverse radial distortion model. Next, the depth data is post-filtered using 3D point cloud analysis to diminish the noise as a result of the undistorting process and remove the ground/ceiling information. Finally, the depth values are rectified using a neural network filter based on laser-assisted training. Experimental results demonstrate the feasibility of the proposed approach for fixing distorted RGB-D data.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-33191-6_52
    DOI ID:10.1007/978-3-642-33191-6_52, ISSN:0302-9743, DBLP ID:conf/isvc/TomariKK12, Web of Science ID:WOS:000363265800052
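    A minimal sketch of the first stage only, inverting a one-coefficient radial distortion model on pixel coordinates by fixed-point iteration; the coefficient k1 and image center are placeholders, not calibrated values:

      # Sketch: map distorted pixel coordinates back to undistorted ones.
      import numpy as np

      def undistort_points(pts, center, k1):
          """pts: (N, 2) distorted pixel coordinates. Solves
          r_d = r_u * (1 + k1 * r_u**2) for r_u by fixed-point iteration."""
          p = pts - center
          r_d = np.linalg.norm(p, axis=1, keepdims=True)
          r_u = r_d.copy()
          for _ in range(10):
              r_u = r_d / (1.0 + k1 * r_u ** 2)
          scale = np.ones_like(r_d)
          nz = r_d[:, 0] > 0                 # leave the center pixel untouched
          scale[nz] = r_u[nz] / r_d[nz]
          return center + p * scale

      pts = np.array([[700.0, 500.0], [320.0, 240.0]])
      print(undistort_points(pts, center=np.array([320.0, 240.0]), k1=1e-7))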
  • An Integrated Approach of Attention Control of Target Human by Nonverbal Behaviors of Robots in Different Viewing Situations               
    Mohammed Moshiul Hoque; Dipankar Das; Tomomi Onuki; Yoshinori Kobayashi; Yoshinori Kuno
    2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), First page:1399, Last page:1406, 2012, [Reviewed]
    A major challenge in HRI is to design a social robot that can attract a target human's attention and control it toward a particular direction in various social situations. If a robot would like to initiate an interaction with a person, it may turn its gaze to him/her for eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be enough to initiate an interaction in all situations, especially when the robot and the human are not facing each other or the human is intensely attending to his/her task. In this paper, we propose a conceptual model of attention control with four phases: attention attraction, eye contact, attention avoidance, and attention shift. To initiate the attention control process, the robot first tries to gain the target participant's attention through a head-turning or head-shaking action, depending on the three viewing situations in which the robot may be captured in his/her field of view (central field of view, near peripheral field of view, and far peripheral field of view) (see the sketch following this entry). After gaining his/her attention, the robot makes eye contact only with the target person by showing gaze awareness through blinking its eyes, and directs his/her attention toward an object using eye- and head-turning cues. Moreover, the robot shows attention-aversion behaviors if non-target persons look at it. We designed a robot based on the proposed approach, and experimental evaluation confirmed it to be effective in controlling the target participant's attention.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IROS.2012.6385480
    DOI ID:10.1109/IROS.2012.6385480, ISSN:2153-0858, DBLP ID:conf/iros/HoqueDOKK12, Web of Science ID:WOS:000317042701144
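    A minimal sketch of choosing an attention-attracting action from the target's viewing situation, following the phase model above; the angular thresholds delimiting the field-of-view regions are illustrative, not the paper's values:

      # Sketch: classify where the robot falls in the person's field of view
      # and pick the corresponding nonverbal action.
      def viewing_situation(person_heading_deg, bearing_to_robot_deg):
          offset = abs((bearing_to_robot_deg - person_heading_deg + 180) % 360 - 180)
          if offset <= 30:
              return "central"
          if offset <= 60:
              return "near_peripheral"
          if offset <= 110:
              return "far_peripheral"
          return "out_of_view"

      ACTION = {
          "central": "make eye contact (blink)",
          "near_peripheral": "head turning",
          "far_peripheral": "head shaking",
          "out_of_view": "utter reference term",
      }

      print(ACTION[viewing_situation(0, 75)])   # -> head shaking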
  • Robotic System Controlling Target Human's Attention               
    Mohammed Moshiul Hoque; Dipankar Das; Tomomi Onuki; Yoshinori Kobayashi; Yoshinori Kuno
    INTELLIGENT COMPUTING THEORIES AND APPLICATIONS, ICIC 2012, Volume:7390, First page:534, Last page:544, 2012, [Reviewed]
    Attention control can be defined as shifting people's attention from their existing direction toward a goal direction. If a human would like to shift another's attention, s/he may first turn his/her gaze to that human to make eye contact. However, this is not an easy task for a robot when the human is not looking at it initially. In this paper, we propose a model of attention control with four parts: attention attraction, eye contact, attention avoidance, and attention shift. To initiate the attention control process, the robot first tries to gain the target person's attention through a head-turning or head-shaking action, depending on the three viewing situations in which the robot may be captured in his/her field of view (central field of view, near peripheral field of view, and far peripheral field of view). After gaining his/her attention, the robot makes eye contact by showing gaze awareness through blinking its eyes, and directs his/her attention to an object by turning both its eyes and head. If a person other than the target seems to be attracted by the robot, the robot turns its head away from that person to avoid his/her attention. Evaluation experiments confirmed that the proposed approach is effective in controlling the target person's attention.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-31576-3_68
    DOI ID:10.1007/978-3-642-31576-3_68, ISSN:0302-9743, DBLP ID:conf/icic/HoqueDOKK12, Web of Science ID:WOS:000314766200068
  • Development of a mobile museum guide robot that can configure spatial formation with visitors               
    Mohammad Abu Yousuf; Yoshinori Kobayashi; Yoshinori Kuno; Akiko Yamazaki; Keiichi Yamazaki
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:7389, First page:423, Last page:432, 2012, [Reviewed]
    A museum guide robot is expected to establish a proper spatial formation, known as the "F-formation", with visitors before starting its explanation of any exhibit. This paper presents a model for a mobile museum guide robot that can establish an F-formation appropriately and can employ "pause and restart" depending on the situation. We began by observing and videotaping scenes of actual museum galleries where human guides explain exhibits to visitors. Based on the analysis of the video, we developed a mobile robot system that can guide multiple visitors inside the gallery from one exhibit to another. The robot has the capability to establish the F-formation at the beginning of an explanation after arriving near an exhibit. The robot can also implement "pause and restart" at certain moments in its talk, depending on the situation, to first elicit the visitors' attention towards the robot. Experimental results suggest the efficacy of our proposed model. © 2012 Springer-Verlag.
    Springer, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-31588-6_55
    DOI ID:10.1007/978-3-642-31588-6_55, ISSN:0302-9743, DBLP ID:conf/icic/YousufKKYY12, SCOPUS ID:84865289839
  • Spatial-Based Feature for Locating Objects               
    Lu Cao; Yoshinori Kobayashi; Yoshinori Kuno
    INTELLIGENT COMPUTING THEORIES AND APPLICATIONS, ICIC 2012, Volume:7390, First page:128, Last page:137, 2012, [Reviewed]
    In this paper, we discuss how humans locate and detect objects using spatial expressions. Then we propose a spatial-based feature for object localization and recognition tasks. We develop a system that can recognize an object whose positional relation with another object is indicated verbally by a human. Experimental results using two image datasets prepared by the authors confirm the usefulness of the proposed feature.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-31576-3_17
    DOI ID:10.1007/978-3-642-31576-3_17, ISSN:0302-9743, DBLP ID:conf/icic/CaoKK12, Web of Science ID:WOS:000314766200017
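    A minimal sketch of resolving a verbal spatial relation against detected bounding boxes; the scene layout is an illustrative assumption:

      # Sketch: answer "the cup to the left of the bottle" from box centers.
      def center(box):
          x, y, w, h = box
          return (x + w / 2.0, y + h / 2.0)

      def left_of(target, reference):
          """True if target's center lies left of the reference in the image."""
          return center(target)[0] < center(reference)[0]

      detections = {"cup_a": (50, 100, 40, 40),
                    "cup_b": (200, 100, 40, 40),
                    "bottle": (120, 80, 30, 80)}

      answer = [name for name in ("cup_a", "cup_b")
                if left_of(detections[name], detections["bottle"])]
      print(answer)   # -> ['cup_a']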
  • Implementing Human Questioning Strategies into Quizzing-Robot               
    Takaya Ohyama; Yasutomo Maeda; Chiaki Mori; Yoshinori Kobayashi; Yoshinori Kuno; Rio Fujita; Keiichi Yamazaki; Shun Miyazawa; Akiko Yamazaki; Keiko Ikeda
    HRI'12: PROCEEDINGS OF THE SEVENTH ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, First page:423, Last page:423, 2012, [Reviewed]
    ASSOC COMPUTING MACHINERY, English, International conference proceedings
    DOI:https://doi.org/10.1145/2157689.2157829
    DOI ID:10.1145/2157689.2157829, ISSN:2167-2121, DBLP ID:conf/hri/OhyamaMMKKFYMYI12, Web of Science ID:WOS:000393315300130
  • A Techno-Sociological Solution for Designing a Museum Guide Robot: Regarding Choosing an Appropriate Visitor               
    Akiko Yamazaki; Keiichi Yamazaki; Takaya Ohyama; Yoshinori Kobayashi; Yoshinori Kuno
    HRI'12: PROCEEDINGS OF THE SEVENTH ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, First page:309, Last page:316, 2012, [Reviewed]
    In this paper, we present our work designing a robot that explains an exhibit to multiple visitors in a museum setting, based on ethnographic analysis of interactions between expert human guides and visitors. During the ethnographic analysis, we discovered that expert human guides employ identifiable strategies and practices in their explanations. In particular, one of these is to involve all visitors by posing a question to an appropriate visitor among them, which we call the "creating a puzzle" sequence. This is done in order to draw visitors' attention towards not only the exhibit but also the guide's explanation. While creating a puzzle, the human guide can monitor visitors' responses and choose an "appropriate" visitor (i.e., one who is likely to provide an answer). Based on these findings, sociologists and engineers together developed a guide robot that coordinates verbal and non-verbal actions in posing a question, or "a puzzle", that draws visitors' attention, and then explains the exhibit to multiple visitors. During the explanation, the robot chooses an "appropriate" visitor. We tested the robot at an actual museum. The results show that our robot increases visitors' engagement and interaction with the guide, as well as interaction and engagement among visitors.
    ASSOC COMPUTING MACHINERY, English, International conference proceedings
    DOI:https://doi.org/10.1145/2157689.2157800
    DOI ID:10.1145/2157689.2157800, ISSN:2167-2121, DBLP ID:conf/hri/YamazakiYOKK12, Web of Science ID:WOS:000393315300106
  • Establishment of Spatial Formation by a Mobile Guide Robot               
    Mohammad A. Yousuf; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    HRI'12: PROCEEDINGS OF THE SEVENTH ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, First page:281, Last page:282, 2012, [Reviewed]
    A mobile museum guide robot is expected to establish a proper spatial formation with the visitors. After observing the videotaped scenes of human guide-visitors interaction at actual museum galleries, we have developed a mobile robot that can guide multiple visitors inside the gallery from one exhibit to another. The mobile guide robot is capable of establishing spatial formation known as "F-formation" at the beginning of explanation. It can also use a systematic procedure known as "pause and restart" depending on the situation through which a framework of mutual orientation between the speaker (robot) and visitors is achieved. The effectiveness of our method has been confirmed through experiments.
    ASSOC COMPUTING MACHINERY, English, International conference proceedings
    DOI:https://doi.org/10.1145/2157689.2157794
    DOI ID:10.1145/2157689.2157794, ISSN:2167-2121, DBLP ID:conf/hri/YousufKKYY12, Web of Science ID:WOS:000393315300101
  • Attracting and Controlling Human Attention through Robot's Behaviors Suited to the Situation               
    Mohammed M. Hoque; Tomomi Onuki; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    HRI'12: PROCEEDINGS OF THE SEVENTH ANNUAL ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, First page:149, Last page:150, 2012, [Reviewed]
    A major challenge is to design a robot that can attract and control human attention in various social situations. If a robot would like to communicate with a person, it may turn its gaze to him/her for eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be enough in all situations, especially when the robot and the human are not facing each other. In this paper, we present an attention control approach in which the robot attracts a person's attention by three actions (head turning, head shaking, and uttering reference terms) corresponding to three viewing situations in which the human senses the robot (near peripheral field of view, far peripheral field of view, and out of field of view). After gaining attention, the robot makes eye contact by showing gaze awareness through blinking its eyes, and directs the human's attention by eye- and head-turning behaviors to share an object.
    ASSOC COMPUTING MACHINERY, English, International conference proceedings
    DOI:https://doi.org/10.1145/2157689.2157729
    DOI ID:10.1145/2157689.2157729, ISSN:2167-2121, DBLP ID:conf/hri/HoqueODKK12, Web of Science ID:WOS:000393315300036
  • Care robot able to show the order of service provision through bodily actions in multi-party settings.               
    Yoshinori Kobayashi; Keiichi Yamazaki; Akiko Yamazaki; Masahiko Gyoda; Tomoya Tabata; Yoshinori Kuno; Yukiko Seki
    CHI Conference on Human Factors in Computing Systems, CHI '12, Extended Abstracts Volume, Austin, TX, USA, May 5-10, 2012, First page:1889, Last page:1894, 2012, [Reviewed]
    ACM, International conference proceedings
    DOI:https://doi.org/10.1145/2212776.2223724
    DOI ID:10.1145/2212776.2223724, DBLP ID:conf/chi/KobayashiYYGTKS12
  • Assisted-care robot based on sociological interaction analysis               
    Wenxing Quan; Hitoshi Niwa; Naoto Ishikawa; Yoshinori Kobayashi; Yoshinori Kuno
    COMPUTERS IN HUMAN BEHAVIOR, Volume:27, Number:5, First page:1527, Last page:1534, Sep. 2011, [Reviewed]
    This paper presents our on-going work in developing service robots that provide assisted-care to the elderly in multi-party settings. In typical Japanese day-care facilities, multiple caregivers and visitors are co-present in the same room and any caregiver may provide assistance to any visitor. In order to effectively work in such settings, a robot should behave in a way that a person who needs assistance can easily initiate help from the robot. Based on findings from observations at several day-care facilities, we have developed a robot system that displays availability to multiple persons and then displays recipiency to an individual who initiates interaction with the robot. In this paper we detail this robot system and its experimental evaluation. (C) 2010 Elsevier Ltd. All rights reserved.
    PERGAMON-ELSEVIER SCIENCE LTD, English, Scientific journal
    DOI:https://doi.org/10.1016/j.chb.2010.10.022
    DOI ID:10.1016/j.chb.2010.10.022, ISSN:0747-5632, eISSN:1873-7692, DBLP ID:journals/chb/QuanNIKK11, Web of Science ID:WOS:000293319500010
  • Sub-Category Optimization through Cluster Performance Analysis for Multi-View Multi-Pose Object Detection               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, Volume:E94D, Number:7, First page:1467, Last page:1478, Jul. 2011, [Reviewed]
    The detection of object categories with large variations in appearance is a fundamental problem in computer vision. The appearance of object categories can change due to intra-class variations, background clutter, and changes in viewpoint and illumination. For object categories with large appearance changes, some kind of sub-categorization based approach is necessary. This paper proposes a sub-category optimization approach that automatically divides an object category into an appropriate number of sub-categories based on appearance variations. Instead of using predefined intra-category sub-categorization based on domain knowledge or validation datasets, we divide the sample space by unsupervised clustering using discriminative image features. We then use a cluster performance analysis (CPA) algorithm to verify the performance of the unsupervised approach (see the sketch following this entry). The CPA algorithm uses two performance metrics to determine the optimal number of sub-categories per object category. Furthermore, we employ the optimal sub-category representation as the basis of a supervised multi-category detection system with a χ² merging kernel function to efficiently detect and localize object categories within an image. Extensive experimental results are shown using standard databases and the authors' own database. The comparison results reveal that our approach outperforms the state-of-the-art methods.
    IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG, English, Scientific journal
    DOI:https://doi.org/10.1587/transinf.E94.D.1467
    DOI ID:10.1587/transinf.E94.D.1467, ISSN:0916-8532, DBLP ID:journals/ieicet/DasKK11, Web of Science ID:WOS:000292619500011
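    A minimal sketch of selecting the number of sub-categories by unsupervised clustering plus a cluster-quality criterion; the silhouette score here merely stands in for the paper's CPA metrics:

      # Sketch: pick the sub-category count that maximizes cluster quality.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      def optimal_subcategories(features, k_range=range(2, 8)):
          """features: (N, D) image descriptors for one object category."""
          best_k, best_score = None, -1.0
          for k in k_range:
              labels = KMeans(n_clusters=k, n_init=10,
                              random_state=0).fit_predict(features)
              score = silhouette_score(features, labels)
              if score > best_score:
                  best_k, best_score = k, score
          return best_k

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(m, 0.3, (50, 16)) for m in (0.0, 2.0, 4.0)])
      print(optimal_subcategories(X))   # -> 3 for this synthetic category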
  • Interactive Object Recognition for Service Robots Using Attribute Information Including Material (Young Researchers' Meeting 1, Summer Seminar 2011)               
    Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Volume:35, First page:11, Last page:12, 2011
    Service robots are required to recognize target objects accurately in complex environments. However, in real environments cluttered with miscellaneous items, it is difficult to identify the target object without error. We have therefore proposed an object recognition method in which the robot acquires cues about the target object through dialogue with a person. In this paper, we built a robot system in which the robot acquires the color, material, and positional relations of the target object through dialogue and recognizes the object.
    Japanese
    DOI:https://doi.org/10.11485/itetr.35.33.0_11
    DOI ID:10.11485/itetr.35.33.0_11, CiNii Articles ID:130006088030
  • Multiple Robotic Wheelchair System Considering Communication (Young Researchers' Meeting 3, Summer Seminar 2011)               
    Erii Takano; Yoshinori Kobayashi; Yoshinori Kuno
    Volume:35, First page:41, Last page:42, 2011
    With the rapid aging of society and the declining birthrate in recent years, we constructed a system in which multiple robotic wheelchairs cooperatively follow a single caregiver, so that efficient care can be provided by fewer personnel. The following behavior takes into account communication between the caregiver and the wheelchair users as well as obstacle avoidance, realizing comfortable movement in which the caregiver and wheelchair users can converse easily.
    Japanese
    DOI:https://doi.org/10.11485/itetr.35.33.0_41
    DOI ID:10.11485/itetr.35.33.0_41, CiNii Articles ID:130006088061
  • Mobile care robot accepting requests through nonverbal interaction               
    Masahiko Gyoda; Tomoya Tabata; Yoshinori Kobayashi; Yoshinori Kuno
    2011 17th Korea-Japan Joint Workshop on Frontiers of Computer Vision, FCV 2011, 2011, [Reviewed]
    This paper presents our on-going work in developing service robots that provide assisted-care to the elderly in multi-party settings. In typical Japanese day-care facilities, multiple caregivers and visitors are co-present in the same room and any caregiver may provide assistance to any visitor. In order to effectively work in such settings, a robot should behave in a way that a person who needs assistance can easily initiate help from the robot. Based on findings from observations at several day-care facilities, we have developed a mobile robot system that displays availability to multiple persons and then displays recipiency to an individual who initiates interaction with the robot. In this paper we detail this robot system and its experimental result. © 2011 IEEE.
    English, International conference proceedings
    DOI:https://doi.org/10.1109/FCV.2011.5739723
    DOI ID:10.1109/FCV.2011.5739723, SCOPUS ID:84881128986
  • Object spatial recognition for service robots: Where is the front?               
    Lu Cao; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    2011 IEEE International Conference on Mechatronics and Automation, ICMA 2011, First page:875, Last page:880, 2011, [Reviewed]
    Exactly the same objects produce dramatically different images depending on their pose relative to the camera, resulting in great ambiguity for spatial recognition. Different poses of the same object also lead to different orientations when an intrinsic reference system is used. Our current study focuses on this issue and can be divided into three phases. First, we propose an object pose-estimation model capable of recognizing unseen views. We achieve this goal by building a discrete key-pose structure parameterized by azimuth and using the PHOG [20] descriptor to measure the shape correspondence between two images. A large number of instances are learned at the training stage through semi-supervised learning. Then, we show experimental results on our own dataset. Second, according to the analyzed criteria for the use of an intrinsic reference system, we recognize the frontal orientation of an object with intrinsic geometry (e.g., an LCD screen) by combining these criteria with the pose-estimation results. Finally, we summarize our integrated model, which is able to classify object category, estimate object pose, distinguish intrinsic spatial relations between reference and target objects, and locate the target under users' instructions. © 2011 IEEE.
    English, International conference proceedings
    DOI:https://doi.org/10.1109/ICMA.2011.5985705
    DOI ID:10.1109/ICMA.2011.5985705, SCOPUS ID:81055124268
  • Controlling Human Attention through Robot's Gaze Behaviors               
    Mohammed Moshiul Hoque; Tomomi Onuki; Yoshinori Kobayashi; Yoshinori Kuno
    4TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION (HSI 2011), First page:195, Last page:202, 2011, [Reviewed]
    Controlling someone's attention can be defined as shifting his/her attention from the existing direction to another. However, it is not an easy task for a robot to shift a particular human's attention if they are not in a face-to-face situation. If the robot would like to communicate with a particular person, it should turn its gaze to that person and make eye contact to establish mutual gaze. However, such a turning action alone is not enough to set up eye contact when the robot and the target person are not facing each other. Therefore, the robot should perform some actions so that it can attract the target person and meet his/her gaze. In this paper, we present a robot that can attract a target person's attention by moving its head, make eye contact through showing gaze awareness by blinking its eyes, and establish joint attention by repeating its head turns between the person and the target object. Experiments using twenty human participants confirm the effectiveness of the robot's actions in controlling human attention.
    IEEE, English, International conference proceedings
    ISSN:2158-2246, Web of Science ID:WOS:000392284600025
  • An Empirical Framework to Control Human Attention by Robot               
    Mohammed Moshiul Hoque; Tomami Onuki; Emi Tsuburaya; Yoshinori Kobayashi; Yoshinori Kuno; Takayuki Sato; Sachiko Kodama
    COMPUTER VISION - ACCV 2010 WORKSHOPS, PT I, Volume:6468, First page:430, Last page:439, 2011, [Reviewed]
    Human attention control simply means shifting one's attention from one direction to another. To shift someone's attention, gaining attention and meeting gaze are the two most important prerequisites. If a person would like to communicate with another, the person's gaze should meet the receiver's gaze, and they should make eye contact. However, it is difficult to establish eye contact non-linguistically when the two people are not facing each other. Therefore, the sender should perform some actions to capture the receiver's attention so that they can meet face-to-face and establish eye contact. In this paper, we focus on what the best action is for a robot to attract human attention, and on how human and robot display gazing behavior to each other for eye contact. In our system, the robot may direct its gaze toward a particular direction after making eye contact, and the human will read the robot's gaze. As a result, s/he will shift his/her attention to the direction indicated by the robot's gaze. Experimental results show that the robot's head motions can attract human attention, and that the robot's blinking when their gazes meet can make the human feel that s/he has made eye contact with the robot.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-22822-3_43
    DOI ID:10.1007/978-3-642-22822-3_43, ISSN:0302-9743, DBLP ID:conf/accv/HoqueOTKKSK10, Web of Science ID:WOS:000392224200043
  • A considerate care robot able to serve in multi-party settings               
    Yoshinori Kobayashi; Masahiko Gyoda; Tomoya Tabata; Yoshinori Kuno; Keiichi Yamazaki; Momoyo Shibuya; Yukiko Seki; Akiko Yamazaki
    Proceedings - IEEE International Workshop on Robot and Human Interactive Communication, First page:27, Last page:32, 2011, [Reviewed]
    This paper introduces a service robot that provides assisted-care, such as serving tea to the elderly in care facilities. In multi-party settings, a robot is required to be able to deal with requests from multiple individuals simultaneously. In particular, when the service robot is concentrating on taking care of a specific person, other people who want to initiate interaction may feel frustrated with the robot. To a considerable extent this may be caused by the robot's behavior, which does not indicate any response to subsequent requests while preoccupied with the first. Therefore, we developed a robot that can project the order of service in a socially acceptable manner to each person who wishes to initiate interaction. In this paper we focus on the task of tea-serving, and introduce a robot able to bring tea to multiple users while accepting multiple requests. The robot can detect a person raising their hand to make a request, and move around people using its mobile functions while avoiding obstacles. When the robot detects a person's request while already serving tea to another person, it projects that it has received the order by indicating "you are next" through a nonverbal action, such as turning its gaze to the person. Because it can project the order of service and indicate its acknowledgement of their requests socially, people will likely feel more satisfied with the robot even when it cannot immediately address their needs. We confirmed the effectiveness of this capability through an experiment in which the robot distributed snacks to participants. © 2011 IEEE.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2011.6005286
    DOI ID:10.1109/ROMAN.2011.6005286, DBLP ID:conf/ro-man/KobayashiGTKYSSY11, SCOPUS ID:80052984224
  • 3D Free-Form Object Material Identification by Surface Reflection Analysis with a Time-of-Flight Range Sensor.               
    Md. Abdul Mannan; Lu Cao; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings of the IAPR Conference on Machine Vision Applications (IAPR MVA 2011), Nara Centennial Hall, Nara, Japan, June 13-15, 2011, First page:227, Last page:234, 2011, [Reviewed]
    DBLP ID:conf/mva/MannanCKK11, CiNii Articles ID:20000574026
  • Multi-view head detection and tracking with long range capability for social navigation planning               
    Razali Tomari; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:6939, Number:2, First page:418, Last page:427, 2011, [Reviewed]
    Head pose is one of the important human cues in social navigation planning for robots coexisting with humans. Inferring such information from distant targets using a mobile platform is a challenging task. This paper tackles this issue by proposing a method for detecting and tracking head pose under these constraints using an RGBD camera (Kinect, Microsoft). Initially, possible human regions are segmented out and then validated using depth and Hu moment features. Next, plausible head regions within the segmented areas are estimated by employing Haar-like features with the Adaboost classifier. Finally, the obtained head regions are post-validated by means of their dimensions and their probability of containing skin before refining the pose estimation and tracking by a boosted-based particle filter (see the sketch following this entry). Experimental results demonstrate the feasibility of the proposed approach for detecting and tracking head pose from far-range targets under spotlight and natural illumination conditions. © 2011 Springer-Verlag.
    Springer, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-24031-7_42
    DOI ID:10.1007/978-3-642-24031-7_42, ISSN:0302-9743, DBLP ID:conf/isvc/TomariKK11, SCOPUS ID:80053379405
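    A minimal sketch of the depth-based post-validation step, keeping only detections whose physical width is plausible for a head under the pinhole model; the focal length and size range are assumed values:

      # Sketch: validate Haar face/head detections against depth.
      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      FX = 525.0                      # assumed depth-camera focal length (px)
      HEAD_WIDTH_M = (0.12, 0.30)     # plausible head widths (m)

      def detect_heads(gray, depth_m):
          """gray: grayscale image; depth_m: per-pixel depth in meters."""
          heads = []
          for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 4):
              z = float(depth_m[y + h // 2, x + w // 2])
              width_m = w * z / FX    # pinhole model: size = pixels * depth / fx
              if HEAD_WIDTH_M[0] <= width_m <= HEAD_WIDTH_M[1]:
                  heads.append((x, y, w, h, z))
          return heads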
  • Material information acquisition using a ToF range sensor for interactive object recognition               
    Md. Abdul Mannan; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume:6939, Number:2, First page:116, Last page:125, 2011, [Reviewed]
    This paper proposes a noncontact active vision technique that analyzes the reflection pattern of infrared light to estimate the object material according to the degree of surface smoothness (or roughness). To obtain the surface micro structural details and the surface orientation information of a free-form 3D object, the system employs only a time-of-flight range camera. It measures reflection intensity patterns with respect to surface orientation for various material objects. Then it classifies these patterns by Random Forest (RF) classifier to identify the candidate of material of reflected surface. We demonstrate the efficiency of the method through experiments by using several household objects under normal illuminating condition. Our main objective is to introduce material information in addition to color, shape and other attributes to recognize target objects more robustly in the interactive object recognition framework. © 2011 Springer-Verlag.
    Springer, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-24031-7_12
    DOI ID:10.1007/978-3-642-24031-7_12, ISSN:0302-9743, DBLP ID:conf/isvc/MannanFKK11, SCOPUS ID:80053358415
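    A minimal sketch of material classification from (incidence angle, reflection intensity) samples with a Random Forest, as named above; the training data is synthetic and purely illustrative:

      # Sketch: shiny surfaces lose IR intensity quickly as the incidence
      # angle grows; matte surfaces do not. Learn that boundary with a forest.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      angles = rng.uniform(0, 60, 200)                      # degrees
      shiny = np.cos(np.radians(angles)) ** 8 + rng.normal(0, 0.02, 200)
      matte = np.cos(np.radians(angles)) + rng.normal(0, 0.02, 200)

      X = np.column_stack([np.tile(angles, 2), np.concatenate([shiny, matte])])
      y = np.array([0] * 200 + [1] * 200)                   # 0=shiny, 1=matte

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
      print(clf.predict([[30.0, 0.9]]))   # strong return at 30 deg -> matte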
  • Japanese Abbreviation Expansion with Query and Clickthrough Logs.
    Kei Uchiumi; Mamoru Komachi; Keigo Machinaga; Toshiyuki Maezawa; Toshinori Satou; Yoshinori Kobayashi
    Fifth International Joint Conference on Natural Language Processing, IJCNLP 2011, Chiang Mai, Thailand, November 8-13, 2011, First page:410, Last page:419, 2011, [Reviewed]
    The Association for Computational Linguistics
    DBLP ID:conf/ijcnlp/UchiumiKMMSK11
  • Understanding the Meaning of Shape Description for Interactive Object Recognition               
    Satoshi Mori; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCED INTELLIGENT COMPUTING, Volume:6838, First page:350, Last page:356, 2011, [Reviewed]
    Service robots need to be able to recognize objects located in complex environments. Although there has been recent progress in this area, it remains difficult for autonomous vision systems to recognize objects in natural conditions. Thus we propose an interactive object recognition system, which asks the user to verbally provide information about the objects that it cannot recognize. However, humans may use various expressions to describe objects. Meanwhile, the same verbal expression may indicate different meanings depending on the situation. Thus we have examined human descriptions of object shapes through experiments using human participants. This paper presents the findings from the experiments useful in designing interactive object recognition systems.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-24728-6_47
    DOI ID:10.1007/978-3-642-24728-6_47, ISSN:0302-9743, DBLP ID:conf/icic/MoriKK11, Web of Science ID:WOS:000307317300047
  • A Wheelchair Which Can Automatically Move Alongside a Caregiver               
    Yoshinori Kobayashi; Yuki Kinpara; Erii Takano; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTIONS (HRI 2011), First page:407, Last page:407, 2011, [Reviewed]
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1145/1957656.1957805
    DOI ID:10.1145/1957656.1957805, ISSN:2167-2121, DBLP ID:conf/hri/KobayashiKTKYY11, Web of Science ID:WOS:000393313200137
  • Assisted-Care Robot Dealing with Multiple Requests in Multi-party Settings               
    Yoshinori Kobayashi; Masahiko Gyoda; Tomoya Tabata; Yoshinori Kuno; Keiichi Yamazaki; Momoyo Shibuya; Yukiko Seki
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTIONS (HRI 2011), First page:167, Last page:168, 2011, [Reviewed]
    This paper presents our ongoing work developing service robots that provide assisted-care, such as serving tea to the elderly in care facilities. In multi-party settings, a robot is required to be able to deal with requests from multiple individuals simultaneously. In particular, when the service robot is concentrating on taking care of a specific person, other people who want to initiate interaction may feel frustrated with the robot. To a considerable extent this may be caused by the robot's behavior, which does not indicate any response to subsequent requests while preoccupied with the first. Therefore, we developed a robot that can display acknowledgement, in a socially acceptable manner, to each person who wants to initiate interaction. In this paper we focus on the task of tea-serving, and introduce a robot able to bring tea to multiple users while accepting multiple requests. The robot can detect a person's request (raising their hand) and move around people using its localization system. When the robot detects a person's request while serving tea to another person, it displays its acknowledgement by indicating "Please wait" through a nonverbal action. Because it can indicate its acknowledgement of their requests socially, people will likely feel more satisfied with the robot even when it cannot immediately address their needs.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1145/1957656.1957714
    DOI ID:10.1145/1957656.1957714, ISSN:2167-2121, DBLP ID:conf/hri/KobayashiGTKYSS11, Web of Science ID:WOS:000393313200051
  • Robotic wheelchair moving with caregiver collaboratively depending on circumstances               
    Yoshinori Kobayashi; Yoshinori Kuno; Yuki Kinpara; Keiichi Yamazaki; Erii Takano; Akiko Yamazaki
    Conference on Human Factors in Computing Systems - Proceedings, First page:2239, Last page:2244, 2011, [Reviewed]
    This paper introduces a robotic wheelchair that can automatically move alongside a caregiver. Because wheelchair users are often accompanied by caregivers, it is vital to consider how to reduce a caregiver's load and support their activities, while simultaneously facilitating communication between the caregiver and the wheelchair user. Moreover, it has been pointed out that when a wheelchair user is accompanied by a companion, the latter is inevitably seen by others as a caregiver rather than a friend. To address this situation, we devised a robotic wheelchair able to move alongside a caregiver or companion, and facilitate easy communication between them and the wheelchair user. To confirm the effectiveness of the wheelchair in real-world situations, we conducted experiments at an elderly care center in Japan.
    ACM, English, International conference proceedings
    DOI:https://doi.org/10.1145/1979742.1979894
    DOI ID:10.1145/1979742.1979894, DBLP ID:conf/chi/KobayashiKTKYY11, SCOPUS ID:79957964463
  • Computer Vision Techniques for Tracking Human Faces and Heads               
    KOBAYASHI Yoshinori
    The Journal of the Institute of Television Engineers of Japan, Volume:64, Number:4, First page:463, Last page:467, Apr. 2010
    The Institute of Image Information and Television Engineers, Japanese
    DOI:https://doi.org/10.3169/itej.64.463
    DOI ID:10.3169/itej.64.463, ISSN:1342-6907, CiNii Articles ID:110009669365, CiNii Books ID:AN10588970
  • Decomposition and detection of multiple object categories through automatic topic optimization               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    16th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2010), First page:481, Last page:485, Feb. 2010, [Reviewed]
    FCV2010 Committee
  • Object detection in cluttered range images using edgel geometry               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    IEEJ Transactions on Electronics, Information and Systems, Volume:130, Number:9, First page:1572, Last page:1580, 2010, [Reviewed]
    In this paper, we present an object detection technique that uses scale invariant local edgel structures and their properties to locate multiple object categories within a range image in the presence of partial occlusion, cluttered background, and significant scale changes. The fragmented local edgels (key-edgel, eκ) are efficiently extracted from a 3D edge map by separating them at their corner points. The 3D edge maps are reliably constructed by combining both boundary and fold edges of 3D range images. Each key-edgel is described using our scale invariant descriptors that encode local geometric configuration by joining the edgel to adjacent edgels at its start and end points. Using key-edgels and their descriptors, our model generates promising hypothetical locations in the image. These hypotheses are then verified using more discriminative features. The discriminative feature consists of a bag-of-words histogram constructed by key-edgels and their descriptors, and a pyramid histogram of orientation gradients. To find the similarities between different feature types in a discriminative stage, we use an exponential χ2 merging kernel function. Our merging kernel outperforms the conventional rbf kernel of the SVM classifier. The approach is evaluated based on ten diverse object categories in a real-world environment. © 2010 The Institute of Electrical Engineers of Japan.
    Institute of Electrical Engineers of Japan, English, Scientific journal
    DOI:https://doi.org/10.1541/ieejeiss.130.1572
    DOI ID:10.1541/ieejeiss.130.1572, ISSN:1348-8155, SCOPUS ID:78049415111
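    A minimal sketch of merging two histogram feature types through an exponential χ² kernel fed to a precomputed-kernel SVM; averaging the per-feature kernels is one simple merging choice, and the histograms are synthetic stand-ins for the key-edgel bag-of-words and orientation-gradient features:

      # Sketch: exponential chi-squared merging kernel + SVM.
      import numpy as np
      from sklearn.metrics.pairwise import chi2_kernel   # exp(-gamma * chi2)
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      bow = rng.dirichlet(np.ones(50), 40)    # bag-of-words histograms
      phog = rng.dirichlet(np.ones(80), 40)   # orientation-gradient histograms
      y = np.array([0] * 20 + [1] * 20)

      K = 0.5 * chi2_kernel(bow, gamma=1.0) + 0.5 * chi2_kernel(phog, gamma=1.0)
      clf = SVC(kernel="precomputed").fit(K, y)
      print(clf.predict(K[:3]))               # predictions for 3 training rows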
  • People tracking using integrated sensors for human robot interaction               
    Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings of the IEEE International Conference on Industrial Technology, First page:1617, Last page:1622, 2010, [Reviewed]
    In human-human interaction, the position and orientation of participants' bodies and faces play an important role. Thus, robots need to be able to detect and track human bodies and faces, and obtain human positions and orientations, to achieve effective human-robot interaction. It is difficult, however, to robustly obtain such information from video cameras alone in complex environments. Hence, we propose to use integrated sensors composed of a laser range sensor and an omni-directional camera. A Rao-Blackwellized particle filter framework is employed to track the position and orientation of both bodies and heads of people, based on the distance data and panoramic images captured by the laser range sensor and the omni-directional camera. In addition to the tracking techniques, we present two applications of our integrated sensor system. One is a robotic wheelchair moving with a caregiver; the sensor system detects and tracks the caregiver, and the wheelchair moves with the caregiver based on the tracking results. The other is a museum guide robot that explains exhibits to multiple visitors; the position and orientation data of visitors' bodies and faces enable the robot to distribute its gaze to each of multiple visitors to keep their attention while talking (see the sketch following this entry). ©2010 IEEE.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ICIT.2010.5472444
    DOI ID:10.1109/ICIT.2010.5472444, SCOPUS ID:77954397137
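The tracking pipeline above combines a laser range sensor and an omni-directional camera in a Rao-Blackwellized particle filter over body and head pose. The sketch below is a much-reduced bootstrap filter over 2-D position only, with a hypothetical Gaussian likelihood on the distance to the nearest laser return; the motion and measurement parameters are invented for illustration.

```python
import numpy as np

def particle_filter_step(particles, weights, scan_xy,
                         motion_std=0.05, meas_std=0.15, rng=None):
    """One bootstrap-filter update for 2-D person tracking from laser range data.

    particles: (N, 2) candidate positions of the tracked person; scan_xy: (M, 2)
    laser returns projected onto the floor plane. The Gaussian likelihood on the
    distance to the nearest return is a stand-in for the paper's joint
    laser/omni-camera observation model.
    """
    rng = rng or np.random.default_rng()
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight particles by proximity to the closest laser return.
    d = np.linalg.norm(particles[:, None, :] - scan_xy[None, :, :], axis=-1).min(axis=1)
    weights = np.maximum(weights * np.exp(-0.5 * (d / meas_std) ** 2), 1e-300)
    weights = weights / weights.sum()
    # Systematic resampling when the effective sample size degenerates.
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        u = (np.arange(n) + rng.random()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

In the paper's formulation the state also carries body and head orientation, and the camera-image likelihood enters through the Rao-Blackwellized factorization; only the sampled-position part survives in this sketch.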
  • "I will ask you" Choosing Answerers by Observing Gaze Responses using Integrated Sensors for Museum Guide Robots               
    Yoshinori Kobayashi; Takashi Shibata; Yosuke Hoshi; Yoshinori Kuno; Mai Okada; Keiichi Yamazaki
    2010 IEEE RO-MAN, First page:652, Last page:657, 2010, [Reviewed]
    This paper presents a method for a museum guide robot to choose an appropriate answerer among multiple visitors. First, we observed and videotaped scenes of gallery talk in which human guides ask visitors questions. Based on an analysis of this video, we found that the guide selects an answerer by distributing his or her gaze towards multiple visitors and observing the visitors' gaze responses during the question. Then, we performed experiments in which a robot distributed its gaze towards multiple visitors, and analyzed the visitors' responses. From these experiments, we found that visitors who are asked questions by the robot feel embarrassed when they have no prior knowledge of the question, and that visitor gaze during the question plays an important role in avoiding being asked. Based on these findings, we developed a function in a guide robot that observes visitors' gaze responses and selects an appropriate answerer based on these responses. Gaze responses are tracked and recognized using an omnidirectional camera and a laser range sensor. The effectiveness of our method was confirmed through experiments.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2010.5598721
    DOI ID:10.1109/ROMAN.2010.5598721, DBLP ID:conf/ro-man/KobayashiSHKOY10, Web of Science ID:WOS:000300610200112
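Recognizing a visitor's gaze response, as in the entry above, amounts to classifying short head-angle time series. A toy classifier under assumed thresholds (the values and the nod/shake heuristic are illustrative, not taken from the paper) might look like:

```python
import numpy as np

def classify_head_response(yaw, pitch, yaw_thresh=0.15, pitch_thresh=0.10):
    """Toy response classifier from head-angle time series (radians).

    Large oscillation in pitch is read as a nod ("positive" response); large
    oscillation in yaw as a head shake or gaze aversion ("negative").
    Thresholds are illustrative, not from the paper.
    """
    yaw_swing = np.ptp(yaw)      # peak-to-peak yaw excursion
    pitch_swing = np.ptp(pitch)  # peak-to-peak pitch excursion
    if pitch_swing > pitch_thresh and pitch_swing >= yaw_swing:
        return "positive"
    if yaw_swing > yaw_thresh:
        return "negative"
    return "neutral"
```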
  • Object Material Classification by Surface Reflection Analysis with a Time-of-Flight Range Sensor               
    Md. Abdul Mannan; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, PT II, Volume:6454, First page:439, Last page:448, 2010, [Reviewed]
    The main objective of this work is to analyze the reflectance properties of real object surfaces and investigate their degree of roughness. Our non-contact active vision technique utilizes the local surface geometry of objects and the longer-wavelength scattered light reflected from their surfaces. After investigating the properties of the microstructure of the material surface, the system classifies various household objects into several material categories according to the characteristics of the microparticles on the surface of each object.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-17274-8_43
    DOI ID:10.1007/978-3-642-17274-8_43, ISSN:0302-9743, DBLP ID:conf/isvc/MannanDKK10, Web of Science ID:WOS:000290547200043
  • Spatial Resolution for Robot to Detect Objects               
    Lu Cao; Yoshinori Kobayashi; Yoshinori Kuno
    IEEE/RSJ 2010 INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS 2010), First page:4548, Last page:4553, 2010, [Reviewed]
    In this paper, we report on our development of a robotic system that assists people in accomplishing simple tasks in daily life (e.g., retrieving objects for handicapped and elderly people). These tasks inevitably involve detecting various kinds of objects. In particular, we present an interactive method to detect objects using spatial information. Our experimental results confirm the usefulness and efficiency of our system. We also show how the approach can be improved and highlight necessary directions for future research.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IROS.2010.5651340
    DOI ID:10.1109/IROS.2010.5651340, ISSN:2153-0858, DBLP ID:conf/iros/CaoKK10, Web of Science ID:WOS:000287672005099
  • Sub-category optimization for multi-view multi-pose object detection               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings - International Conference on Pattern Recognition, First page:1405, Last page:1408, 2010, [Reviewed]
    Object category detection with large appearance variation is a fundamental problem in computer vision. The appearance of object categories can change due to intra-class variability, viewpoint, and illumination. For object categories with large appearance change a sub-categorization based approach is necessary. This paper proposes a sub-category optimization approach that automatically divides an object category into an appropriate number of sub-categories based on appearance variation. Instead of using a predefined intra-category sub-categorization based on domain knowledge or validation datasets, we divide the sample space by unsupervised clustering based on discriminative image features. Then the clustering performance is verified using a sub-category discriminant analysis. Based on the clustering performance of the unsupervised approach and sub-category discriminant analysis results we determine an optimal number of sub-categories per object category. Extensive experimental results are shown using two standard and the authors' own databases. The comparison results show that our approach outperforms the state-of-the-art methods. © 2010 IEEE.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1109/ICPR.2010.347
    DOI ID:10.1109/ICPR.2010.347, ISSN:1051-4651, DBLP ID:conf/icpr/DasKK10, SCOPUS ID:78149474616
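The sub-category optimization described above clusters each category's samples and then scores how well the clusters separate. As a rough sketch, one can pick the number of sub-categories with unsupervised k-means plus a separability score; here scikit-learn's silhouette score stands in for the paper's sub-category discriminant analysis, and k_max is an assumed cap.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def optimal_subcategories(features, k_max=6, random_state=0):
    """Pick the number of sub-categories for one object category.

    features: (n_samples, n_dims) discriminative image features. The silhouette
    score is a stand-in for the paper's sub-category discriminant analysis.
    """
    best_k, best_score = 1, -np.inf
    best_labels = np.zeros(len(features), dtype=int)
    for k in range(2, min(k_max, len(features) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```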
  • Smart Wheelchair Navigation Based on User's Gaze on Destination               
    Tomari Razali; Rong Zhu; Kobayashi Yoshinori; Kuno Yoshinori
    ADVANCED INTELLIGENT COMPUTING THEORIES AND APPLICATIONS, Volume:93, Number:21, First page:387, Last page:394, 2010, [Reviewed]
    Designing intelligent navigation systems is a challenging task because environmental uncertainty may prevent mission accomplishment. This paper presents a smart wheelchair navigation system that uses the goal position indicated by the user's gaze. The system tracks the user's head with a web camera to determine the direction in which the gaze is fixed. It then detects the area the user gazed at in the panoramic image obtained from an omni-directional camera and sets that area as the destination. Once the destination is fixed, navigation starts by rotating the wheelchair toward the destination via visual servoing with SURF descriptors around the destination area. During the maneuver, a laser range sensor scans the wheelchair's critical area for obstacles. If any are detected, their distribution is analyzed to generate steering commands that avoid them while eventually restoring the wheelchair's pose towards the goal. Experimental operation in indoor environments has shown the feasibility of the proposed system.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-14831-6_52
    DOI ID:10.1007/978-3-642-14831-6_52, ISSN:1865-0929, DBLP ID:conf/icic/RazaliZKK10, Web of Science ID:WOS:000289495000052
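The visual-servoing step above re-centers the gazed destination in the camera view. A minimal sketch follows, with two caveats: ORB stands in for the paper's SURF descriptors (SURF is non-free in recent OpenCV builds), and the gain and match-count threshold are invented.

```python
import cv2
import numpy as np

def steering_from_destination(frame, dest_patch, k_turn=0.005):
    """Toy visual-servoing step: match the gazed destination patch in the
    current frame (8-bit grayscale) and turn toward its horizontal offset
    from the image center. Returns an angular-velocity command, or None
    when the match is unreliable.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(dest_patch, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 10:
        return None
    # Median x-coordinate of matched keypoints in the current frame.
    xs = np.array([kp2[m.trainIdx].pt[0] for m in matches])
    offset = np.median(xs) - frame.shape[1] / 2.0   # pixels right of center
    return -k_turn * offset                          # turn to re-center the goal
```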
  • Choosing Answerers by Observing Gaze Responses for Museum Guide Robots               
    Yoshinori Kobayashi; Takashi Shibata; Yosuke Hoshi; Yoshinori Kuno; Mai Okada; Keiichi Yamazaki
    PROCEEDINGS OF THE 5TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI 2010), First page:109, Last page:110, 2010, [Reviewed]
    This paper presents a method of selecting an answerer from the audience for a museum guide robot. We performed preliminary experiments in which a robot distributed its gaze towards visitors to select an answerer, and analyzed the visitors' responses. From these experiments, we found that visitors who are asked questions by the robot feel embarrassed when they have no prior knowledge of the question, and that a visitor's gaze during the question plays an important role in avoiding being asked. Based on these findings, we developed functions for a guide robot to select an answerer by observing the behaviors of multiple visitors. Multiple visitors' head motions are tracked and recognized using an omni-directional camera and a laser range sensor. The robot detects visitors' positive and negative responses by observing their head motions while asking questions. We confirmed the effectiveness of our method by experiments.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1145/1734454.1734496
    DOI ID:10.1145/1734454.1734496, ISSN:2167-2121, DBLP ID:conf/hri/KobayashiSHKOY10, Web of Science ID:WOS:000394675400031
  • Selective function of speaker gaze before and during questions: Towards developing museum guide robots               
    Yoshinori Kobayashi; Takashi Shibata; Yosuke Hoshi; Yoshinori Kuno; Mai Okada; Keiichi Yamazaki
    Conference on Human Factors in Computing Systems - Proceedings, First page:4201, Last page:4206, 2010, [Reviewed]
    This paper presents a method of selecting an answerer from the audience for a museum guide robot. First, we observed and videotaped scenes in which a human guide asks visitors questions during a gallery talk to engage them. Based on the interaction analysis, we found that the human guide selects an appropriate answerer by distributing his/her gaze towards visitors and observing the visitors' gaze responses during the pre-question phase. Then, we performed experiments in which a robot distributed its gaze towards visitors to select an answerer, and analyzed the visitors' responses. From the experiments, we found that visitors who are asked questions by the robot feel embarrassed when they have no prior knowledge of the question, and that the visitor's gaze before and during the question plays an important role in avoiding being asked. Based on these findings, we have developed a function for a guide robot to select an answerer by observing visitors' gaze responses. © 2010 Copyright is held by the author/owner(s).
    ACM, English, International conference proceedings
    DOI:https://doi.org/10.1145/1753846.1754126
    DOI ID:10.1145/1753846.1754126, DBLP ID:conf/chi/KobayashiSHKOY10, SCOPUS ID:77953088364
  • Multiple Object Category Detection and Localization Using Generative and Discriminative Models               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, Volume:E92D, Number:10, First page:2112, Last page:2121, Oct. 2009, [Reviewed]
    This paper proposes an integrated approach to simultaneous detection and localization of multiple object categories using both generative and discriminative models. Our approach consists of first generating a set of hypotheses for each object category using a generative model (pLSA) with a bag of visual words representing each object. Based on the variation of objects within a category, the pLSA model automatically fits to an optimal number of topics. Then, the discriminative part verifies each hypothesis using a multi-class SVM classifier with merging features that combine the spatial shape and appearance of an object. In the post-processing stage, environmental context information along with the probabilistic output of the SVM classifier is used to improve the overall performance of the system. Our integrated approach with merging features and context information allows reliable detection and localization of various object categories in the same image. The performance of the proposed framework is evaluated on various standard datasets (MIT-CSAIL, UIUC, TUD, etc.) and the authors' own datasets. In experiments we achieved results superior to some state-of-the-art methods over a number of standard datasets. An extensive experimental evaluation on up to ten diverse object categories over thousands of images demonstrates that our system works for detecting and localizing multiple objects within an image in the presence of cluttered background, substantial occlusion, and significant scale change.
    IEICE-INST ELECTRONICS INFORMATION COMMUNICATIONS ENG, English, Scientific journal
    DOI:https://doi.org/10.1587/transinf.E92.D.2112
    DOI ID:10.1587/transinf.E92.D.2112, ISSN:1745-1361, DBLP ID:journals/ieicet/DasKK09, Web of Science ID:WOS:000272394700034
  • Robot Vision for Human-Robot Communication               
    KUNO Yoshinori; KOBAYASHI Yoshinori
    JRSJ, Volume:27, Number:6, First page:630, Last page:633, Jul. 2009
    The Robotics Society of Japan, Japanese
    DOI:https://doi.org/10.7210/jrsj.27.630
    DOI ID:10.7210/jrsj.27.630, ISSN:0289-1824, CiNii Articles ID:10025113938, CiNii Books ID:AN00141189
  • Autonomous Robotic Wheelchair Based on Measuring People's Behavior Using Integrated Sensors               
    Yoshinori Kobayashi; Yuki Kinpara; Yoshinori Kuno
    The 15th Symposium on Sensing via Image Information (SSII09), Jun. 2009, [Reviewed]
    画像センシング技術研究会
  • Multiple object detection and localization using range and color images for service robots               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    ICCAS-SICE 2009 - ICROS-SICE International Joint Conference 2009, Proceedings, First page:3485, Last page:3489, 2009
    In real-world applications, service robots need to locate and identify objects in a scene. A range sensor provides a robust estimate of depth information, which is useful to accurately locate objects in a scene. On the other hand, color information is an important property for the object recognition task. The objective of this paper is to detect and localize multiple objects within an image using both range and color features. The proposed method uses 3D shape features to generate promising hypotheses within range images and verifies these hypotheses by using features obtained from both range and color images. © 2009 SICE.
    English, International conference proceedings
    SCOPUS ID:77951114006
  • A proposal of shoplifting countermeasures by the suspicious activity detection               
    大野宏; 中嶋信生; 佐藤洋一; 小林貴訓; 杉村大輔; 加納梢
    Security management, Volume:23, Number:1, First page:26, Last page:38, 2009, [Reviewed]
    Japanese
    ISSN:1343-6619, CiNii Articles ID:40016671211, CiNii Books ID:AA11592621
  • A Hybrid Model for Multiple Object Category Detection and Localization.
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings of the IAPR Conference on Machine Vision Applications (IAPR MVA 2009), Keio University, Yokohama, Japan, May 20-22, 2009, First page:431, Last page:434, 2009, [Reviewed]
    DBLP ID:conf/mva/DasKK09
  • Object Detection and Localization in Clutter Range Images Using Edge Features               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, PT 2, PROCEEDINGS, Volume:5876, First page:172, Last page:183, 2009, [Reviewed]
    We present an object detection technique that uses local edgels and their geometry to locate multiple objects in a range image in the presence of partial occlusion, background clutter, and depth changes. The fragmented local edgels (key-edgels) are efficiently extracted from a 3D edge map by separating them at their corner points. Each key-edgel is described using our scale-invariant descriptor that encodes local geometric configuration by joining the edgel to adjacent edgels at its start and end points. Using key-edgels and their descriptors, our model generates promising hypothetical locations in the image. These hypotheses are then verified using more discriminative features. The approach is evaluated on ten diverse object categories in a real-world environment.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-10520-3_16
    DOI ID:10.1007/978-3-642-10520-3_16, ISSN:0302-9743, DBLP ID:conf/isvc/DasKK09a, Web of Science ID:WOS:000279247100016
  • Efficient Hypothesis Generation through Sub-categorization for Multiple Object Detection               
    Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, PT 2, PROCEEDINGS, Volume:5876, First page:160, Last page:171, 2009, [Reviewed]
    The hypothesis generation and verification technique has recently attracted much attention in research on multiple object category detection and localization in images. However, the performance of this strategy greatly depends on the accuracy of the generated hypotheses. This paper proposes a method of multiple category object detection adopting the hypothesis generation and verification strategy that solves the accurate hypothesis generation problem by sub-categorization. Our generative learning algorithm automatically sub-categorizes images of each category into one or more different groups depending on the object's appearance changes. Based on these sub-categories, efficient hypotheses are generated for each object category within an image in the recognition stage. These hypotheses are then verified to determine the appropriate object categories with their locations using the discriminative classifier. We compare our approach with previous related methods on various standard datasets and the authors' own datasets. The results show that our approach outperforms the state-of-the-art methods.
    SPRINGER-VERLAG BERLIN, English, International conference proceedings
    DOI:https://doi.org/10.1007/978-3-642-10520-3_15
    DOI ID:10.1007/978-3-642-10520-3_15, ISSN:0302-9743, DBLP ID:conf/isvc/DasKK09, Web of Science ID:WOS:000279247100015
  • The Surface Walker: a Hemispherical Mobile Robot with Rolling Contact Constraints               
    Masato Ishikawa; Yoshinori Kobayashi; Ryohei Kitayoshi; Toshiharu Sugie
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, First page:2446, Last page:2451, 2009, [Reviewed]
    In this paper, we propose a new example of a non-holonomic mobile robot, which we call the surface walker. This robot is composed of a hemisphere-shaped shell and a 2-d.o.f. mass-control device (pendulum) inside it, and is subject to the rolling contact constraint between the hemispherical surface of the robot and the ground. Unlike many non-holonomic robots studied so far, the system of the hemisphere robot has a drift term. First, we present the basic concepts behind the hemisphere robot and construct its kinetic model. Then we realize locomotion control of the robot by periodically oscillating the internal pendulum, and show its effectiveness through control experiments.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IROS.2009.5354113
    DOI ID:10.1109/IROS.2009.5354113, DBLP ID:conf/iros/IshikawaKKS09, Web of Science ID:WOS:000285372901091
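For reference, the rolling contact constraint named above takes the standard rolling-without-slipping form for a sphere of radius r on a plane. This is the textbook kinematic constraint only, not the paper's full dynamic model, which adds the internal pendulum and the resulting drift term.

```latex
% Contact-point velocity must vanish for rolling without slipping.
% (x, y): contact position on the plane; (\omega_x, \omega_y, \omega_z): shell angular velocity.
\[
  \dot{x} = r\,\omega_y, \qquad \dot{y} = -r\,\omega_x .
\]
% These constraints are non-integrable, which is what makes the system non-holonomic.
```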
  • Robotic Wheelchair Based on Observations of People Using Integrated Sensors               
    Yoshinori Kobayashi; Yuki Kinpara; Tomoo Shibusawa; Yoshinori Kuno
    2009 IEEE-RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, First page:2013, Last page:2018, 2009, [Reviewed]
    Recently, several robotic/intelligent wheelchairs have been proposed that employ user-friendly interfaces or autonomous functions. Although it is often desirable for users to operate wheelchairs on their own, they are often accompanied by a caregiver or companion. In designing wheelchairs, it is important to reduce the caregiver load. In this paper we propose a robotic wheelchair that can move with a caregiver side by side. In contrast to a front-behind position, in a side-by-side position it is more difficult for wheelchairs to adjust when the caregiver makes a turn. To cope with this problem we present a visual-laser tracking technique, in which a laser range sensor and an omni-directional camera are integrated to observe the caregiver. A Rao-Blackwellized particle filter framework is employed to track the caregiver's position and the orientation of both body and head based on the distance data and panorama images captured from the laser range sensor and the omni-directional camera. After presenting this technique, we introduce an application of the wheelchair for museum visits.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IROS.2009.5353933
    DOI ID:10.1109/IROS.2009.5353933, DBLP ID:conf/iros/KobayashiKSK09, Web of Science ID:WOS:000285372901024
  • Object recognition in service robots: Conducting verbal interaction on color and spatial relationship               
    Yoshinori Kuno; Katsutoshi Sakata; Yoshinori Kobayashi
    2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops 2009, First page:2025, Last page:2031, 2009, [Reviewed]
    Service robots need to be able to recognize objects located in complex environments. Although there has been recent progress in this area, it remains difficult for autonomous vision systems to recognize objects in natural conditions. In this paper, we propose an interactive object recognition system. In this system, the robot asks the user to verbally provide information about an object that it cannot detect. In particular, it asks the user questions regarding color and spatial relationship between objects depending on the situation. Experimental results confirm the usefulness and efficiency of our interaction system. ©2009 IEEE.
    IEEE Computer Society, English, International conference proceedings
    DOI:https://doi.org/10.1109/ICCVW.2009.5457530
    DOI ID:10.1109/ICCVW.2009.5457530, DBLP ID:conf/iccvw/KunoSK09, SCOPUS ID:77953195109
  • Assisted-care robot initiation of communication in multiparty settings               
    Yoshinori Kobayashi; Mai Okada; Yoshinori Kuno; Keiichi Yamazaki; Hitoshi Niwa; Akiko Yamazaki; Naonori Akiya
    Conference on Human Factors in Computing Systems - Proceedings, First page:3583, Last page:3588, 2009, [Reviewed]
    This paper presents on-going work in developing service robots that provide assisted-care to the elderly in multi-party settings. In typical Japanese day-care facilities, multiple caregivers and visitors are co-present in the same room and any caregiver may provide assistance to any visitor. In order to effectively work in such settings, a robot should behave in a way that a person who has a request can easily initiate communication with the robot. Based on findings from observations at several day-care facilities, we have developed a robot system that displays availability to multiple persons and then displays recipiency to an individual person who wants to initiate interaction. Our robot system and its experimental evaluation are detailed in this paper.
    ACM, English, International conference proceedings
    DOI:https://doi.org/10.1145/1520340.1520538
    DOI ID:10.1145/1520340.1520538, DBLP ID:conf/chi/KobayashiKNAOYY09, SCOPUS ID:70349189183
  • Revealing Gauguin: Engaging Visitors in Robot Guide's Explanation in an Art Museum               
    Keiichi Yamazaki; Akiko Yamazaki; Mai Okada; Yoshinori Kuno; Yoshinori Kobayashi; Yosuke Hoshi; Karola Pitsch; Paul Luff; Dirk Vom Lehn; Christian Heath
    CHI2009: PROCEEDINGS OF THE 27TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4, First page:1437, Last page:1446, 2009, [Reviewed]
    Designing technologies that support the explanation of museum exhibits is a challenging domain. In this paper we develop an innovative approach - providing a robot guide with resources to engage visitors in an interaction about an art exhibit. We draw upon ethnographical fieldwork in an art museum, focusing on how tour guides interrelate talk and visual conduct, specifically how they ask questions of different kinds to engage and involve visitors in lengthy explanations of an exhibit. From this analysis we have developed a robot guide that can coordinate its utterances and body movement to monitor the responses of visitors to these. Detailed analysis of the interaction between the robot and visitors in an art museum suggests that such simple devices derived from the study of human interaction might be useful in engaging visitors in explanations of complex artifacts.
    ASSOC COMPUTING MACHINERY, English, International conference proceedings
    DOI:https://doi.org/10.1145/1518701.1518919
    DOI ID:10.1145/1518701.1518919, DBLP ID:conf/chi/YamazakiYOKKHPLLH09, Web of Science ID:WOS:000265679301030
  • Robotic Wheelchair for Museum Visit               
    Tomoo Shibusawa; Yoshinori Kobayashi; Yoshinori Kuno
    2008 PROCEEDINGS OF SICE ANNUAL CONFERENCE, VOLS 1-7, First page:2711, Last page:2714, 2008, [Reviewed]
    Recently, several robotic/intelligent wheelchairs have been proposed. Their main research topics are autonomous functions, such as moving toward a goal while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Thus we must consider reducing the caregiver's load in addition to autonomous functions and user interfaces. This paper presents a wheelchair that takes appropriate actions depending on the situation. The wheelchair user and the caregiver move together side by side while chatting. When the caregiver stops close to some exhibit, the wheelchair detects the exhibit with its vision system and moves to a position where the user can view it well. The wheelchair may turn toward the caregiver if the caregiver turns toward the user to talk about the exhibit.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000263966702059
  • Interactively Instructing a Guide Robot through a Network               
    Yosuke Hoshi; Yoshinori Kobayashi; Tomoki Kasuya; Masato Fueki; Yoshinori Kuno
    2008 INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS, VOLS 1-4, First page:1565, Last page:1569, 2008, [Reviewed]
    In this paper, we propose a remote-interactive mode for a museum guide robot. In this mode, a remote operator can interact with the robot by using voice and gestures through a network. The operator can instruct the robot what to do using nonverbal behaviors, such as touching an object on the display screen while saying an instruction. For example, the operator can ask the robot, "Bring this brochure to him," while first touching the brochure and then the person on the display. The brochure is detected and tracked using SIFT feature matching between video camera images. After the robot picks up the brochure, it detects and tracks the person and then hands the brochure to him.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000266771501062
  • Incorporating long-term observations of human actions for stable 3D people tracking               
    Daisuke Sugimura; Yoshinori Kobayashi; Yoichi Sato; Akihiro Sugimoto
    2008 IEEE WORKSHOP ON MOTION AND VIDEO COMPUTING, First page:30, Last page:+, 2008, [Reviewed]
    We propose a method for enhancing the stability of tracking people by incorporating long-term observations of human actions in a scene. Basic human actions, such as walking or standing still, are frequently observed at particular locations in an observation scene. By observing human actions for a long period of time, we can identify regions that are more likely to be occupied by a person. These regions have a high probability of a person existing compared with others. The key idea of our approach is to incorporate this probability as a bias in generating samples under the framework of a particle filter for tracking people. We call this bias the environmental existence map (EEM). The EEM is iteratively updated at every frame by using the tracking results from our tracker which leads to more stable tracking of people. Our experimental results demonstrate the effectiveness of our method.
    IEEE, English, International conference proceedings
    Web of Science ID:WOS:000259022200005
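The environmental existence map (EEM) above biases particle generation toward floor regions where people were frequently observed. A minimal sketch, assuming a grid of long-term occupancy counts and an invented cell size:

```python
import numpy as np

def sample_from_eem(eem_counts, n_particles, cell_size=0.25, rng=None):
    """Draw particle positions biased by an environmental existence map (EEM).

    eem_counts: (H, W) long-term counts of tracked people per floor-grid cell,
    accumulated from past tracking results. Cells that contained people more
    often are proportionally more likely to seed new particles.
    """
    rng = rng or np.random.default_rng()
    probs = (eem_counts + 1.0).ravel()   # add-one smoothing keeps empty cells reachable
    probs /= probs.sum()
    cells = rng.choice(probs.size, size=n_particles, p=probs)
    rows, cols = np.unravel_index(cells, eem_counts.shape)
    # Jitter uniformly inside each chosen cell to get continuous positions.
    xy = np.stack([cols, rows], axis=1) * cell_size
    return xy + rng.uniform(0.0, cell_size, size=(n_particles, 2))
```

In the paper the map is updated every frame from the tracker's own output; in this sketch that would correspond to incrementing eem_counts at each confirmed person position.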
  • Human Robot Interaction through Simple Expressions for Object Recognition               
    Al Mansur; Katsutoshi Sakata; Tajin Rukhsana; Yoshinori Kobayashi; Yoshinori Kuno
    2008 17TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, VOLS 1 AND 2, First page:647, Last page:652, 2008, [Reviewed]
    Service robots need to be able to recognize and identify objects located within complex backgrounds. Since no single method may work in every situation, several methods need to be combined. However, there are several cases in which autonomous recognition methods fail. We propose an interactive recognition method for these cases. To develop natural Human Robot Interaction (HRI), the robot must unambiguously perceive the description of an object given by a human. This paper reports on our experiment in which we examined the expressions humans use in describing ordinary objects. The results show that humans typically describe objects using one of multiple colors. The color is usually either that of the object background or that of the largest object portion. Based on these results, we describe our development of a robot vision system that can recognize objects when a user adopts simple expressions to describe them. This research suggests the importance of connecting 'symbolic expressions' with the 'real world' in human-robot interaction.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/ROMAN.2008.4600740
    DOI ID:10.1109/ROMAN.2008.4600740, DBLP ID:conf/ro-man/MansurSRKK08, Web of Science ID:WOS:000261700900108
  • Museum Guide Robot with Three Communication Modes               
    Yoshinori Kobayashi; Yosuke Hoshi; Goh Hoshino; Tomoki Kasuya; Masato Fueki; Yoshinori Kuno
    2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON ROBOTS AND INTELLIGENT SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, First page:3224, Last page:3229, 2008, [Reviewed]
    Nonverbal behavior plays an important role in human communication. In this paper, we propose a novel museum guide robot that has three different types of communication modes (autonomous, remote-control, and remote-interactive), which are integrated to interact with visitors through nonverbal behavior. First, the autonomous mode is an autonomously controlled mode in which the robot can directly interact with visitors through nonverbal behaviors such as head movements or arm gestures. Second, when a visitor begins to ask the robot questions, the communication is changed into a remote-control mode. In this mode, a remote operator controls the robot with gestures to interact with visitors. Third, in the remote-interactive mode, a remote operator can also interact with the robot with voice and gestures through a network. In particular, the remote operator can ask the robot to do something with nonverbal behaviors such as pointing while saying, "Bring this brochure to him." These three communication modes are switched seamlessly depending on the situation.
    IEEE, English, International conference proceedings
    DOI:https://doi.org/10.1109/IROS.2008.4651131
    DOI ID:10.1109/IROS.2008.4651131, DBLP ID:conf/iros/KobayashiHHKFK08, Web of Science ID:WOS:000259998202070
  • Learning motion patterns and anomaly detection by Human trajectory analysis.               
    Naohiko Suzuki; Kosuke Hirasawa; Kenichi Tanaka; Yoshinori Kobayashi; Yoichi Sato; Yozo Fujino
    Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Montréal, Canada, 7-10 October 2007, First page:498, Last page:503, 2007, [Reviewed]
    IEEE
    DOI ID:10.1109/ICSMC.2007.4413596, DBLP ID:conf/smc/SuzukiHTKSF07, CiNii Articles ID:10024790294
  • 3D Head Tracking using the Particle Filter with Cascaded Classifiers.               
    Yoshinori Kobayashi; Daisuke Sugimura; Yoichi Sato; Kousuke Hirasawa; Naohiko Suzuki; Hiroshi Kage; Akihiro Sugimoto
    Proceedings of the British Machine Vision Conference 2006, Edinburgh, UK, September 4-7, 2006, First page:37, Last page:46, 2006, [Reviewed]
    British Machine Vision Association, International conference proceedings
    DOI:https://doi.org/10.5244/C.20.5
    DOI ID:10.5244/C.20.5, DBLP ID:conf/bmvc/KobayashiSSHSKS06
  • Interactive textbook and interactive venn diagram: Natural and intuitive interfaces on augmented desk system               
    Hideki Koike; Yoichi Sato; Yoshinori Kobayashi; Hiroaki Tobita; Motoki Kobayashi
    Conference on Human Factors in Computing Systems - Proceedings, First page:121, Last page:128, 2000, [Reviewed]
    This paper describes two interface prototypes which we have developed on our augmented desk interface system, EnhancedDesk. The first application is Interactive Textbook, which is aimed at providing an effective learning environment. When a student opens a page which describes experiments or simulations, Interactive Textbook automatically retrieves digital contents from its database and projects them onto the desk. Interactive Textbook also allows the student to interact with the digital contents hands-on. The second application is the Interactive Venn Diagram, which is aimed at supporting effective information retrieval. Instead of keywords, the system uses real objects such as books or CDs as keys for retrieval. The system projects a circle around each book; data corresponding to the book are then retrieved and projected inside the circle. By moving two or more circles so that the circles intersect each other, the user can compose a Venn diagram interactively on the desk. We also describe the new technologies introduced in EnhancedDesk which enable us to implement these applications. Copyright ACM 2000.
    International conference proceedings
    DOI:https://doi.org/10.1145/332040.332415
    DOI ID:10.1145/332040.332415, SCOPUS ID:0033701502
  • Fast Tracking of Hands and Fingertips in Infrared Images for Augmented Desk Interface.               
    Yoichi Sato; Yoshinori Kobayashi; Hideki Koike
    4th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2000), 26-30 March 2000, Grenoble, France, First page:462, Last page:467, 2000, [Reviewed]
    IEEE Computer Society
    DOI ID:10.1109/AFGR.2000.840675, DBLP ID:conf/fgr/SatoKK00, CiNii Articles ID:10010143175
■ MISC
  • Pedestrian Tracking Using ByteTrack Based on Ankle-Level Measurement with 2D-LiDAR               
    Yuhei Hironaka; Ryota Suzuki; Yoshinori Kobayashi
    Volume:30th, 2024
    J-Global ID:202402245966445819
  • Development of autonomous tracking and carrying functions for harvest support robots               
    青木一航; 鈴木亮太; 小林貴訓
    日本機械学会ロボティクス・メカトロニクス講演会講演論文集(CD-ROM), Volume:2024, 2024
    ISSN:2424-3124, J-Global ID:202502287615624313
  • Autonomous Mobile Robot for Interactive Factory Guide Tour               
    篠昂征; 鈴木亮太; 小林貴訓
    Proceedings of the IEICE General Conference (CD-ROM), Volume:2023, 2023
    ISSN:1349-144X, J-Global ID:202302221595663447
  • Whole Body Skeleton Estimation Based on Legs Motion Measurement Using 2D-LiDAR               
    Yusuke Suda; Ryota Suzuki; Yoshinori Kobayashi
    Proceedings of the IEICE General Conference (CD-ROM), Volume:2023, 2023
    ISSN:1349-144X, J-Global ID:202302224059627328
  • Image Generation Based on User’s Utterance for Human-Robot Interaction               
    Hisato Fukuda; Ryota Suzuki; Yoshinori Kobayashi
    Proceedings of the IEICE General Conference (CD-ROM), Volume:2023, 2023
    ISSN:1349-144X, J-Global ID:202302267372270806
  • Performance Visualization Based on Video Analysis for Wheelchair Basketball Players               
    土屋直紀; 鈴木亮太; 小林貴訓; 久野義徳; 福田悠人; 信太奈美; 杉山真理; 半田隆志; 森田智之
    Proceedings of the IEICE General Conference (CD-ROM), Volume:2023, 2023
    ISSN:1349-144X, J-Global ID:202302251028724064
  • When You Wave a Penlight, the Idol's Costume Lights Up and the Motion Is Conveyed to the Hand Holding It               
    Aug. 2021
    Magazine article
  • Motivating Distance Learning using Interactive Remote Devices               
    柿本涼太; 大津耕陽; 福田悠人; 小林貴訓
    Proceedings of the IEICE General Conference (CD-ROM), Volume:2021, 2021
    ISSN:1349-144X, J-Global ID:202102244133930886
  • Robotic Shopping Cart That Supports Shopping for the Elderly               
    Jul. 2020
    Magazine article
  • User’s Interest Estimation Based on Multimodal Information               
    Wang Yanjing; Kouyou Otsu; Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Proceedings of the IEICE General Conference (CD-ROM), Volume:2020, 2020
    ISSN:1349-144X, J-Global ID:202002247430572601
  • Robust and Fast Heart Rate Measurement based on RGB Video Analysis               
    Kouyou Otsu, Hisato Fukuda, Antony Lam, Yoshinori Kobayashi, Yoshinori Kuno
    画像ラボ, Volume:30, Number:6, First page:20, Last page:26, Jun. 2019
    Japanese
    ISSN:0915-6755, CiNii Articles ID:40021936398, CiNii Books ID:AN10164169
  • Robust and Fast Heart Rate Measurement Based on Video Analysis               
    Kouyou Otsu; Hisato Fukuda; Yoshinori Kobayashi; Antony Lam; Yoshinori Kuno
    Volume:30, Number:6, 2019
    ISSN:0915-6755, J-Global ID:201902260327151463
  • Guiding Personal Mobility Vehicles Using Visual Effects               
    泉田駿; 鈴木亮太; 福田悠人; 小林貴訓; 久野義徳
    Volume:2018, 2018
    ISSN:1349-144X, J-Global ID:201802231511171152
  • Sociological and Technological analysis of robots view in elderly care robots system               
    小松 由和; 山崎 晶子; 山崎 敬一; 小林 貴訓; 福田 悠人; 森田 有希野; 図子 智紀; 清水 美和
    Volume:116, Number:524, First page:85, Last page:88, 15 Mar. 2017
    Japanese
    ISSN:0913-5685, CiNii Articles ID:40021162766, CiNii Books ID:AN10487226
  • Pedestrian Identification Using Smartphone-Mounted Sensors               
    遠藤文人; 鈴木亮太; 福田悠人; 小林貴訓; 久野義徳
    Volume:2017, 2017
    ISSN:1349-144X, J-Global ID:201702222834204786
  • Autonomous Mobile Shopping Cart Providing Location-Based Services               
    山崎誠治; 高橋秀和; 鈴木亮太; 山田大地; 福田悠人; 小林貴訓; 久野義徳
    Volume:2017, 2017
    ISSN:1349-144X, J-Global ID:201702291183886299
  • Poster Presentation : Robotic Wheelchair Moving Alongside a Companion with BLE Smart Phone               
    関根 凌太; 高橋 秀和; 鈴木 亮太; 福田 悠人; 小林 貴訓; 久野 義徳; 山崎 敬一; 山崎 晶子
    Volume:116, Number:217, First page:11, Last page:15, 14 Sep. 2016
    Japanese
    ISSN:0913-5685, CiNii Articles ID:40020961001, CiNii Books ID:AN10487226
  • Poster Presentation : Analysis of Robot Inducing Multi-Party Interactions               
    楊 澤坤; 福田 悠人; 山崎 敬一; 山崎 晶子; 小林 貴訓; 久野 義徳
    Volume:116, Number:217, First page:1, Last page:4, 14 Sep. 2016
    Japanese
    ISSN:0913-5685, CiNii Articles ID:40020960954, CiNii Books ID:AN10487226
  • D-12-79 Towards Estimating Museum Visitors' Degree of Interest Based on Movement Analysis               
    Yonezawa Takuya; Suzuki Ryota; Rashed Md. Golam; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2016, Number:2, First page:148, Last page:148, 01 Mar. 2016
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110010036453, CiNii Books ID:AN10471452
  • D-12-86 Detection of Exciting and Standing Behaviors of Robotic Wheelchair Users using IMU               
    Yokokura Takurou; Suzuki Ryota; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2016, Number:2, First page:155, Last page:155, 01 Mar. 2016
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110010036460, CiNii Books ID:AN10471452
  • D-22-11 Remote Control of Communication Robot using Audio-Visual Information through Video Chat               
    Mizumura Ikumi; Suzuki Ryota; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2016, Number:2, First page:235, Last page:235, 01 Mar. 2016
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110010036540, CiNii Books ID:AN10471452
  • D-22-10 Remote Monitoring System for Elderly Care Based on Observations of Interaction Behaviors               
    Otsu Kouyou; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2016, Number:2, First page:234, Last page:234, 01 Mar. 2016
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110010036539, CiNii Books ID:AN10471452
  • The Report of Japan Expo 2015 and an Experiment of Remote Supporting System for Idol Fan               
    Keiichi Yamazaki, Yoshinori Kobayashi
    情報処理学会論文誌デジタルコンテンツ, Volume:4, Number:1, First page:iv, Last page:vi, 18 Feb. 2016
    Japanese
    ISSN:2187-8897, CiNii Articles ID:170000147841, CiNii Books ID:AA12628054
  • Material Information Acquisition for Interactive Object Recognition by Service Robots               
    M. A. Mannan, A. Lam, Y. Kobayashi, Y. Kuno
    IIEEJ transactions on image electronics and visual computing, Volume:4, Number:1, First page:20, Last page:31, 2016
    The Institute of Image Electronics Engineers of Japan, English
    ISSN:2188-191X, CiNii Articles ID:40020920587, CiNii Books ID:AA12661628
  • Remote Monitoring System for the Elderly Based on Observing Interaction Situations               
    Kouyou Otsu; Yoshinori Kobayashi; Yoshinori Kuno
    Volume:2016, 2016
    ISSN:1349-144X, J-Global ID:201602219587580712
  • D-22-4 Remote Communication System using a Remote Control Robot               
    Kikugawa Toshiki; Matsuda Yoshimi; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2015, Number:2, First page:194, Last page:194, 24 Feb. 2015
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009946772, CiNii Books ID:AN10471452
  • D-22-3 People Tracking using Laser Range Sensors by Considering Environmental Information               
    Itagaki Daijiro; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2015, Number:2, First page:193, Last page:193, 24 Feb. 2015
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009946771, CiNii Books ID:AN10471452
  • D-22-2 Robotic Wheelchair Recognizing Companions for Moving with Them               
    Yokokura Takurou; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2015, Number:2, First page:192, Last page:192, 24 Feb. 2015
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009946770, CiNii Books ID:AN10471452
  • Multiple Robotic Wheelchair System Considering Group Communication with Multiple Companions               
    鈴木亮太; 新井雅也; 佐藤慶尚; 山田大地; 小林貴訓; 小林貴訓; 久野義徳; 宮澤怜; 福島三穂子; 山崎敬一; 山崎晶子
    Volume:J98-A, Number:1, 2015
    ISSN:1881-0195, J-Global ID:201502280860970300
  • Robotic Wheelchair System Allowing Group Communication among Multiple Wheelchair Users and Companions
    鈴木亮太, 新井雅也, 佐藤慶尚, 山田大地, 小林貴訓, 久野義徳, 宮澤怜, 福島三穂子, 山崎敬一, 山崎晶子
    電子情報通信学会論文誌, Volume:98, Number:1, First page:51, Last page:62, Jan. 2015
    With the increase in the potential demand for wheelchairs, many robotic wheelchairs have been studied. Elderly care facilities place importance on communication with their residents, since being talked to is important for maintaining mental and physical health. At the same time, staff shortages mean that a single caregiver may have to move several wheelchairs at once. This paper therefore focuses on the case where multiple robotic wheelchairs and multiple companions move as a group, and proposes a multiple robotic wheelchair system that moves while maintaining a formation suited to group communication. Formations appropriate for communication are derived from observation and analysis of in-group communication using the ethnomethodological approach of sociology. Laser range sensors mounted on the wheelchairs track the positions and orientations of multiple companions in real time, and the position and orientation of each wheelchair are estimated using a pre-built environmental map. By integrating and sharing this information on the map, the wheelchairs move cooperatively while maintaining an arbitrary formation. The effectiveness of the proposal was confirmed through use by caregivers and residents at an actual elderly care facility. © 2015 IEICE.
    Japanese
    ISSN:0913-5707, CiNii Articles ID:120005575549
  • Remote Monitoring System Based on Sensing the State of Elderly People               
    Kouyou Otsu; Yoshinori Kobayashi; Yoshinori Kuno
    Volume:20th, 2015
    J-Global ID:201602210691014233
  • Comparisons of Reactions between Japanese and English Speakers towards Questions Made by a Robot (NAO)               
    HASEGAWA Shiho; FUKUSHIMA Mihoko; YAMAZAKI Akiko; IKEDA Keiko; HU Siyang; YAMAZAKI Keiichi; FUKUDA Hisato; KOBAYASHI Yoshinori; KUNO Yoshinori
    Technical report of IEICE. HCS, Volume:114, Number:67, First page:259, Last page:264, 29 May 2014
    This paper reports a part of our project in which we investigate communication among multiple parties of human participants and a guide robot. Analyses are provided of a dataset comprising interactions in the UK and Japan. Reactions and responses of the human participants in communication with a guide robot were analyzed for a comparison between two settings. We found some differences between UK and Japan in terms of (i) scale of participants' reaction and (ii) occurrence of interactions among the human participants in the same experimental group. Analyses indicate that differences in linguistic contexts may result in important implications with regard to better understandings of human robot interaction design.
    The Institute of Electronics, Information and Communication Engineers, Japanese
    ISSN:0913-5685, CiNii Articles ID:110009903767, CiNii Books ID:AN10487226
  • Comparison of the Human Reactions towards Questions Made by Human, NAO, and Robovie-R3               
    HU Siyang; YAMAZAKI Keiichi; HASEGAWA Shiho; FUKUSHIMA Mihoko; YAMAZAKI Akiko; IKEDA Keiko; FUKUDA Hisato; KOBAYASHI Yoshinori; KUNO Yoshinori
    Technical report of IEICE. HCS, Volume:114, Number:67, First page:253, Last page:258, 29 May 2014
    In this paper, by employing interaction analysis as a methodology, we investigate and compare the ways in which human participants interact with 1) a human, 2) Robovie (a human-sized robot), and 3) Nao (a small robot) asking quiz questions. In particular, we focus on how the multiple human participants interact with each other during the quiz session in the three different situations. Studying situations where explanations and questions are directed at multiple human participants by a human or by different types of robots will help us understand how such explanations and questions influence the group communication of the humans who receive these quizzes.
    The Institute of Electronics, Information and Communication Engineers, Japanese
    ISSN:0913-5685, CiNii Articles ID:110009903766, CiNii Books ID:AN10487226
  • D-12-81 Investigating Robot Head Turning Coordinated with Eye Motion for Navigating User's Gaze               
    Sano Kaname; Onuki Tomomi; Ida Kento; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2014, Number:2, First page:156, Last page:156, 04 Mar. 2014
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009830136, CiNii Books ID:AN10471452
  • D-12-80 Group Detection Based on Visitors' Trajectories for Museum Guide Robot               
    Kanda Atsushi; Arai Masaya; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2014, Number:2, First page:155, Last page:155, 04 Mar. 2014
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009830135, CiNii Books ID:AN10471452
  • Multiple Robotic Wheelchair System Moving Cooperatively with Companions               
    佐藤慶尚; 鈴木亮太; 山田大地; 小林貴訓; 小林貴訓; 久野義徳
    Volume:19th, 2014
    J-Global ID:201502256487121804
  • D-12-58 A Study on Object Recognition through Verbal Interaction Based on Ontology               
    Fukuda Hisato; Kobayashi Yoshinori; Kuno Yoshinori; Kachi Daisuke
    Proceedings of the IEICE General Conference, Volume:2013, Number:2, First page:151, Last page:151, 05 Mar. 2013
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009711900, CiNii Books ID:AN10471452
  • D-12-47Recognizing Relative Positions of Multiple Robotic Wheelchairs using Invisible Markers               
    Arai Masaya; Yamazaki Akiko; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2013, Number:2, First page:140, Last page:140, 05 Mar. 2013
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009711889, CiNii Books ID:AN10471452
  • A Quiz Robot which Uses Epistemic Status Change : Cultural Diversity and Universality               
    FUJITA Rio; FUKUSHIMA Mihoko; YAMAZAKI Keiichi; YAMAZAKI Akiko; IKEDA Keiko; KOBAYASHI Yoshinori; KUNO Yoshinori; OHYAMA Takaya; YOSHIDA Eri; MORIMOTO Ikuyo; BURDELSKI Matthew
    IEICE technical report. Artificial intelligence and knowledge-based processing, Volume:112, Number:435, First page:23, Last page:28, 18 Feb. 2013
    This study compared human (re)actions that indicate a change of state of knowledge between Japanese and English speakers. We conducted an experiment using a quiz robot that poses questions to a group of three participants. We set up the robot so that it can perform appropriate bodily actions, such as pointing to an object and shifting gaze. We also generated questions that invoke participants' change of state of knowledge, from knowing to unknowing and vice versa. From an analysis of the participants' reactions, we found differences resulting from grammatical differences between Japanese and English, as well as differences caused by whether participants had prior knowledge of the answers. We need to analyze these issues in detail when studying intercultural communication.
    The Institute of Electronics, Information and Communication Engineers, Japanese
    ISSN:0913-5685, CiNii Articles ID:110009728494, CiNii Books ID:AN10013061
  • Robotic Wheelchair Moving Alongside a Companion               
    R. Suzuki, Y. Sato, Y. Kobayashi, Y. Kuno, K. Yamazaki, M. Arai and A. Yamazaki
    International Conference on Human-Robot Interaction (HRI2013) Demonstration, 2013
    System demonstration
  • Robotic Wheelchair Moving Along Companions Based on Observations of Bodily Behaviors               
    小林貴訓, 高野恵利衣, 金原悠貴, 久野義徳, 小池智哉, 山崎晶子
    情報処理学会論文誌, Volume:53, Number:7, First page:1687, Last page:1697, 15 Jul. 2012, [Reviewed]
    Recently, several robotic wheelchairs have been proposed that employ user-friendly interfaces or autonomous functions. Although it is desirable for users to operate wheelchairs on their own, they are often accompanied by a companion. In designing wheelchairs, therefore, it is also important to reduce the companion's load. In this paper we propose a new robotic wheelchair that can move with a companion side by side to support their communication. In contrast to a front-behind position, in a side-by-side position it is more difficult for the wheelchair to adjust its position when the companion makes a turn. To cope with this problem we present a new people tracking technique using a laser range sensor to observe the companion's bodily behaviors. A particle filter is employed to track the companion's position and body orientation based on the range data. We confirm that our robotic wheelchair can move smoothly with a companion side by side by using the companion's body orientation. We conducted experiments at an actual elderly care facility to confirm the effectiveness of our robotic wheelchair.
    Japanese
    ISSN:1882-7764, CiNii Articles ID:110009423537, CiNii Books ID:AN00116647
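The control idea in the entry above is that the companion's measured body orientation lets the wheelchair anticipate turns. Below is a toy side-by-side controller under assumed geometry and gains; the lateral offset, gain values, and goal-point construction are illustrative, not the paper's controller.

```python
import numpy as np

def side_by_side_command(wc_pose, companion_pose, lateral=0.9,
                         k_lin=0.8, k_ang=1.5, v_max=1.0):
    """Toy velocity command keeping a wheelchair beside its companion.

    wc_pose, companion_pose: (x, y, theta) in a shared frame. The companion's
    theta is the body orientation estimated by the tracker, so the goal point
    swings early when the companion starts to turn. Gains are illustrative.
    """
    cx, cy, cth = companion_pose
    # Goal: a point `lateral` meters to the companion's right, aligned with cth.
    gx = cx + lateral * np.sin(cth)
    gy = cy - lateral * np.cos(cth)
    x, y, th = wc_pose
    heading_to_goal = np.arctan2(gy - y, gx - x)
    ang_err = (heading_to_goal - th + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    dist = np.hypot(gx - x, gy - y)
    v = min(v_max, k_lin * dist) * max(0.0, np.cos(ang_err))  # slow when misaligned
    w = k_ang * ang_err
    return v, w  # linear and angular velocity commands
```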
  • D-12-97 Touch Panel Interface for a Robotic Wheelchair Moving alongside a Caregiver               
    Suzuki Ryota; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2012, Number:2, First page:191, Last page:191, 06 Mar. 2012
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110009462237, CiNii Books ID:AN10471452
  • Care Robot Showing the Order of Service through Bodily Actions in Multiple Party Setting               
    Masahiko Gyoda; Tomoya Tabata; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki; Shingo Sato; Akiko Yamazaki
    Volume:2012, Number:12, First page:1, Last page:8, 12 Jan. 2012
    Service robots should be designed to show the order of service in multiple party settings. We found from our ethnography at an elderly care center that a care worker's gaze and bodily actions can serve this function: in some cases the care worker's gaze indicates that he or she will attend to the next person immediately, while in other cases it indicates continued attention to the current person and asks the next person to wait. We developed a robot system that can display gaze and other bodily actions to examine their effects. Experimental results confirm that the robot can show the order of service by its gaze and bodily actions.
    Japanese
    CiNii Articles ID:170000069004, CiNii Books ID:AA1221543X
  • Interactive Object Recognition for Service Robot using an RGB-D Camera               
    Hisato Fukuda; Yoshinori Kobayashi; Yoshinori Kuno
    Volume:2012, Number:34, First page:1, Last page:7, 12 Jan. 2012
    Service robots need to be able to recognize objects located in complex environments. However, it is difficult to recognize objects autonomously without any mistakes in natural conditions. Thus, we have proposed an object recognition system using information about target objects acquired from the user through simple interaction. In this paper, we propose an interactive object recognition system using multiple attribute information such as color, shape, and material, and positional information among objects, by using an RGB-D camera. Experimental results show that the robot can recognize objects by using multiple information obtained through interaction with the user.
    Japanese
    CiNii Articles ID:170000069080, CiNii Books ID:AA11131797
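The attribute-based interaction above can be illustrated with a toy candidate-filtering loop. The class, attribute vocabulary, and question-selection heuristic here are all invented for the sketch; in the paper the attributes are estimated from RGB-D data rather than typed in.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectCandidate:
    """A detected object with attributes estimated from RGB-D data (hypothetical)."""
    name: str
    attributes: dict = field(default_factory=dict)  # e.g. {"color": "red", ...}

def filter_by_user_answer(candidates, attribute, value):
    """Narrow down candidates after asking the user about one attribute."""
    return [c for c in candidates if c.attributes.get(attribute) == value]

def next_question(candidates):
    """Pick the attribute whose values best split the remaining candidates."""
    best_attr, best_spread = None, 1
    for attr in {a for c in candidates for a in c.attributes}:
        values = {c.attributes.get(attr) for c in candidates}
        if len(values) > best_spread:
            best_attr, best_spread = attr, len(values)
    return best_attr  # None when no attribute discriminates further

# Hypothetical usage: the robot keeps asking until one candidate remains.
objs = [ObjectCandidate("cup", {"color": "red", "material": "ceramic"}),
        ObjectCandidate("bottle", {"color": "red", "material": "plastic"}),
        ObjectCandidate("box", {"color": "blue", "material": "paper"})]
objs = filter_by_user_answer(objs, "color", "red")  # user said "the red one"
print(next_question(objs))                          # -> "material"
```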
  • Interactive Object Recognition for Service Robot using an RGB-D Camera               
    FUKUDA HISATO; KOBAYASHI YOSHINORI; KUNO YOSHINORI
    Technical report of IEICE. PRMU, Volume:111, Number:379, First page:191, Last page:197, 12 Jan. 2012
    Service robots need to be able to recognize objects located in complex environments. However, it is difficult to recognize objects autonomously without any mistakes in natural conditions. Thus, we have proposed an object recognition system using information about target objects acquired from the user through simple interaction. In this paper, we propose an interactive object recognition system using multiple attribute information such as color, shape, and material, and positional information among objects, by using an RGB-D camera. Experimental results show that the robot can recognize objects by using multiple information obtained through interaction with the user.
    The Institute of Electronics, Information and Communication Engineers, Japanese
    ISSN:0913-5685, CiNii Articles ID:110009482293, CiNii Books ID:AN10541106
  • Interactive Object Recognition for Service Robot using an RGB-D Camera               
    FUKUDA HISATO; KOBAYASHI YOSHINORI; KUNO YOSHINORI
    Technical report of IEICE. Multimedia and virtual environment, Volume:111, Number:380, First page:191, Last page:197, 12 Jan. 2012
    Service robots need to be able to recognize objects located in complex environments. However, it is difficult to recognize objects autonomously without any mistakes in natural conditions. Thus, we have proposed an object recognition system using information about target objects acquired from the user through simple interaction. In this paper, we propose an interactive object recognition system using multiple attribute information such as color, shape, and material, and positional information among objects, by using an RGB-D camera. Experimental results show that the robot can recognize objects by using multiple information obtained through interaction with the user.
    The Institute of Electronics, Information and Communication Engineers, Japanese
    ISSN:0913-5685, CiNii Articles ID:110009482210, CiNii Books ID:AN10476092
  • Robotic Wheelchair Moving Along Companions Based on Observations of Bodily Behaviors               
    小林貴訓; 小林貴訓; 高野恵利衣; 金原悠貴; 鈴木亮太; 久野義徳; 小池智哉; 山崎晶子; 山崎敬一
    情報処理学会論文誌ジャーナル(CD-ROM), Volume:53, Number:7, 2012
    ISSN:1882-7837, J-Global ID:201202247341964167
  • Development of a Robotic Wheelchair Using Surrounding Information from an Omni-directional Camera               
    鈴木亮太; 高野恵利衣; 宗像信聡; 小林貴訓; 小林貴訓; 久野義徳
    Volume:18th, 2012
    J-Global ID:201402298047673888
  • Considerate Care Robot which Supports Projectability of Users in Multi-party Settings               
    小林貴訓 , 行田将彦 , 田畠知弥 , 久野義徳 , 山崎敬一 , 渋谷百代 , 関由起子 , 山崎晶子
    情報処理学会論文誌, Volume:52, Number:12, First page:3316, Last page:3327, 15 Dec. 2011
    This paper presents a service robot that provides assisted care, such as serving tea to the elderly in care facilities. In multi-party settings, a robot is required to be able to deal with requests from multiple individuals simultaneously. In particular, when the service robot is concentrating on taking care of a specific person, other people who want to initiate interaction may feel frustrated with the robot. To a considerable extent this may be caused by the robot's behavior, which does not indicate any response to subsequent requests while preoccupied with the first. Therefore, we developed a robot that can project the order of service to each person who wants to initiate interaction in a socially acceptable manner. In this paper we focus on the task of tea serving, and introduce a robot able to bring tea to multiple users while accepting multiple requests. The robot can detect persons' requests indicated by raised hands and move around people using its mobile functions while avoiding obstacles. When the robot detects a person's request while serving tea to another person, it projects the order of service by indicating "you are the next" through a nonverbal action such as gazing. Because it can project the order of service and indicate its acknowledgement of requests socially, people will likely feel more satisfied with the robot even when it cannot immediately address their needs. We confirm the effectiveness of our robot through the experiment.
    Japanese
    ISSN:1882-7764, CiNii Articles ID:110008719908, CiNii Books ID:AN00116647
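    The turn-projection behavior described above can be illustrated with a toy request queue; the sketch below reduces the robot's gaze action to a printed message and is an assumption-laden simplification, not the paper's system.

```python
# Toy rendition of "projecting the service order": while the robot serves
# the head of the queue, each newly detected request is acknowledged at
# once (the gaze action is reduced to a printed message here).
from collections import deque

queue = deque()

def on_request(user):
    queue.append(user)
    if len(queue) > 1:     # busy with queue[0]: project the order nonverbally
        print(f"(robot turns its face toward {user}: 'you are next in line')")

def serve_next():
    if queue:
        print(f"serving tea to {queue.popleft()}")

for user in ("A", "B", "C"):
    on_request(user)
for _ in range(3):
    serve_next()
```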
  • D-12-47 Effective Robot Head Gestures for Attracting Human Gaze               
    Onuki Tomomi; Tsuburaya Emi; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2011, Number:2, First page:150, Last page:150, 28 Feb. 2011
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110008574803, CiNii Books ID:AN10471452
  • D-12-48 Assisted-Care Robot Accepting Requests Nonverbally from Multiple People               
    Tabata Tomoya; Gyoda Masahiko; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2011, Number:2, First page:151, Last page:151, 28 Feb. 2011
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110008574804, CiNii Books ID:AN10471452
  • D-12-79 Development of a Detailed Head-Gesture Analysis System for Museum Guide Robots               
    Ohyama Takaya; Shibata Takashi; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2011, Number:2, First page:182, Last page:182, 28 Feb. 2011
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110008574835, CiNii Books ID:AN10471452
  • Multiple Robotic Wheelchair System Which Can Support Communications in Group
    小林 貴訓; 久野 義徳
    Volume:3, First page:37, Last page:37, 2011
    Japanese
    CiNii Articles ID:120003088328
  • 人物頭部の追跡技術               
    小林 貴訓
    Volume:21, Number:12, First page:9, Last page:14, Dec. 2010
    Japanese
    ISSN:0915-6755, CiNii Articles ID:40018732706, CiNii Books ID:AN10164169
  • Obstacle Warning System by Combining Vision and Ultra Sonic Sensors
    久野義徳、 小林貴訓
    埼玉大学地域オープンイノベーションセンター紀要, Volume:2, First page:74, Last page:74, Jul. 2010
    埼玉大学地域オープンイノベーションセンター, Japanese
    ISSN:1883-8278, CiNii Articles ID:120002354129
  • Development of Techniques for Visual Attention Control
    久野義徳、小林貴訓
    埼玉大学地域オープンイノベーションセンター紀要, Volume:2, First page:73, Last page:73, Jul. 2010
    埼玉大学地域オープンイノベーションセンター, Japanese
    ISSN:1883-8278, CiNii Articles ID:120002354127
  • Robotic Wheelchair Control Based on the Caregiver's Intention and Environments               
    KINPARA Yuki; TAKANO Elly; KOBAYASHI Yoshinori; KUNO Yoshinori
    Volume:72, First page:17, Last page:18, 08 Mar. 2010
    Japanese
    CiNii Articles ID:110008138993, CiNii Books ID:AN00349328
  • Care Robot Going Round to Look for People in Need               
    ISHIKAWA Naoto; GYODA Masahiko; ASABA Kentaro; KOBAYASHI Yoshinori; KUNO Yoshinori
    Volume:72, First page:15, Last page:16, 08 Mar. 2010
    Japanese
    CiNii Articles ID:110008138992, CiNii Books ID:AN00349328
  • Museum Guide Robot Choosing Answers Based on Observations of Head Gesture               
    SHIBATA Takashi; HOSHI Yosuke; TOKITA Ken; KOBAYASHI Yoshinori; KUNO Yoshinori
    Volume:72, First page:21, Last page:22, 08 Mar. 2010
    Japanese
    CiNii Articles ID:110008138995, CiNii Books ID:AN00349328
  • D-12-86 Robotic Wheelchair Moving with Caregiver Based on Local Observations               
    Takano Eilly; Kinpara Yuki; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2010, Number:2, First page:197, Last page:197, 02 Mar. 2010
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110007882575, CiNii Books ID:AN10471452
  • 周辺状況を考慮して介護者と協調移動するロボット車椅子               
    小林貴訓
    2010
    CiNii Articles ID:10029484555
  • 人間とのコミュニケーションに関するビジョン技術               
    日本ロボット学会誌, Volume:27, Number:6, First page:40, Last page:43, Jul. 2009
    日本ロボット学会
  • D-12-56 Toward Ontology-Based Robot Vision               
    Kobayashi Yoshinori; Kuno Yoshinori; Kachi Daisuke
    Proceedings of the IEICE General Conference, Volume:2009, Number:2, First page:165, Last page:165, 04 Mar. 2009
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110007095934, CiNii Books ID:AN10471452
  • D-12-74 Interactive Object Recognition using Ontology-Based Rule Description               
    Mori Satoshi; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2009, Number:2, First page:183, Last page:183, 04 Mar. 2009
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110007095918, CiNii Books ID:AN10471452
  • D-12-129 Robotic Wheelchair Control Based on the Caregiver's Observation               
    Kinpara Yuki; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2009, Number:2, First page:238, Last page:238, 04 Mar. 2009
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110007095874, CiNii Books ID:AN10471452
  • D-12-113 Head Gesture Recognition for Museum Guide Robots in Multiparty Settings               
    Shibata Takashi; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2009, Number:2, First page:222, Last page:222, 04 Mar. 2009
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110007095888, CiNii Books ID:AN10471452
  • D-12-111 Client Recognition for Mobile Helper Robots by Observing Head Gestures               
    Ishikawa Naoto; Fujiwara Naoki; Quan Wenxing; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2009, Number:2, First page:220, Last page:220, 04 Mar. 2009
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110007095890, CiNii Books ID:AN10471452
  • 観客を話に引き込むミュージアムガイドロボット:言葉と身体的行動の連携               
    星洋輔、小林貴訓、久野義徳、岡田真依、山崎敬一、山崎晶子
    電子情報通信学会論文誌A, Volume:92-A, Number:11, First page:764, Last page:772, 2009
    電子情報通信学会
  • 高齢者介護施設におけるコミュニケーションチャンネル確立過程の分析と支援システムの提案               
    秋谷直矩,丹羽仁史,岡田真依,山崎敬一,小林貴訓,久野義徳,山崎晶子
    情報処理学会論文誌, Volume:50, Number:1, First page:302, Last page:313, 2009
    情報処理学会
  • Spatial Relation Model for Object Recognition in Human-Robot Interaction               
    Lu Cao; Yoshinori Kobayashi; Yoshinori Kuno
    EMERGING INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PROCEEDINGS, Volume:5754, First page:574, Last page:584, 2009
    Carrying out user commands entails target object detection for service robots. When the robot system suffers from a limited object detection capability, effective communication between the user and the robot facilitates reference resolution. We aim to develop a service robot, assisting handicapped and elderly people, where most of the user requests are directly or indirectly linked to some objects in the scene. Objects can be described using features such as color, shape, and size. For simple objects on simple backgrounds, these attributes can be determined with satisfactory results. For complex scenes, the position of an object and its spatial relations with other objects in the scene facilitate target object detection. This paper proposes a spatial relation model for the robot to interpret the user's spatial relation descriptions. The robot can detect a target object by asking the user about the spatial relationship between the object and some known objects that are recognized automatically. (A minimal code sketch follows this entry.)
    SPRINGER-VERLAG BERLIN, English
    DOI:https://doi.org/10.1007/978-3-642-04070-2_63
    DOI ID:10.1007/978-3-642-04070-2_63, ISSN:0302-9743, Web of Science ID:WOS:000271604900063
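    One way to read the proposed spatial relation model is as a scoring function over candidates given a relation to a known reference object; the sketch below uses an invented Gaussian-style score and made-up coordinates, and may differ from the paper's actual model.

```python
# Illustrative scoring of a "left of" relation between 2D object centers:
# the candidate best matching the described relation to a known reference
# object is selected. The Gaussian-style score and offsets are invented.
import math

def relation_score(target, reference, relation):
    dx = target[0] - reference[0]
    dy = target[1] - reference[1]
    if relation == "left_of":
        # ideal: displaced ~0.3 m in -x, roughly aligned in y
        return math.exp(-((dx + 0.3) ** 2) / 0.02) * math.exp(-(dy ** 2) / 0.02)
    raise ValueError(f"unknown relation: {relation}")

book = (0.5, 0.5)                                   # known, recognized object
cups = {"cup1": (0.2, 0.5), "cup2": (0.8, 0.6)}     # ambiguous candidates
best = max(cups, key=lambda c: relation_score(cups[c], book, "left_of"))
print(best)  # -> cup1
```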
  • Assisted-Care Robot Based on Sociological Interaction Analysis               
    W. Quan; N. Ishikawa; Y. Kobayashi; Y. Kuno
    ICIC2009, 2009
  • 予期的行為の相互参照を通じた介護場面におけるロボットの依頼理解               
    久野義徳,小林貴訓,山崎晶子,山崎敬一
    情報爆発時代に向けた新しいIT基盤技術の研究 平成20年度研究概要, First page:72, Jan. 2009
  • People Tracking and Trajectory Estimation by Integrating Observations from Distributed Sensors for Local Area Surveillance               
    KOBAYASHI Yoshinori; SATO Yoichi
    IPSJ SIG Notes. CVIM, Volume:2008, Number:36, First page:231, Last page:246, 01 May 2008
    This paper describes a method of tracking people in an indoor environment by using a sparse network of multiple sensors. The 3D position and orientation of people's heads are tracked by using multiple cameras. To deal with the appearance and disappearance of people, including occlusions caused by people's interactions, laser range scanners are seamlessly integrated into the vision-based tracking framework. The information from the sparse network of multiple sensors is used to estimate people's trajectories over the whole area, including unobserved areas. This paper also describes a method for establishing correspondences between trajectories captured by different cameras and for estimating trajectories in unobserved areas. (A minimal code sketch follows this entry.)
    Information Processing Society of Japan (IPSJ), Japanese
    ISSN:0919-6072, CiNii Articles ID:110006791943, CiNii Books ID:AA11131797
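    The camera/laser integration described above can be caricatured as precision-weighted fusion of per-sensor position estimates; the covariances in the sketch below are invented, and the paper's full tracking framework is far richer.

```python
# Highly simplified stand-in for camera/laser integration: per-sensor
# position estimates are fused by precision weighting (information form).
# The covariances below are invented for the example.
import numpy as np

def fuse(observations):
    """observations: list of (position, covariance) pairs."""
    info = sum(np.linalg.inv(c) for _, c in observations)   # information adds
    mean = np.linalg.inv(info) @ sum(np.linalg.inv(c) @ p for p, c in observations)
    return mean, np.linalg.inv(info)

camera = (np.array([1.00, 2.10]), np.diag([0.04, 0.04]))   # noisier estimate
laser  = (np.array([1.05, 2.00]), np.diag([0.01, 0.01]))   # precise range scan
pos, cov = fuse([camera, laser])
print(pos.round(3))   # pulled toward the more precise laser measurement
```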
  • 視覚情報に基づく人間とロボットの対面およびネットワークコミュニケーション               
    久野義徳,山崎敬一,小林貴訓,葛岡英明,山崎晶子,山本敏雄,中村明生,川島理恵,MatthewBurdelski,鶴田幸恵,三橋浩次
    総務省戦略的情報通信研究開発推進制度(SCOPE)特定領域重点型研究開発次世代ヒューマンインタフェース・コンテンツ技術 研究成果報告書, Volume:平成17-19年度, Mar. 2008
  • Robotic Wheelchair for Supporting Art Appreciation               
    Shibusawa Tomoo; Kobayashi Yoshinori; Kuno Yoshinori
    Proceedings of the IEICE General Conference, Volume:2, First page:169, Last page:169, Mar. 2008
    総務省戦略的情報通信研究開発推進制度(SCOPE)特定領域重点型研究開発 次世代ヒューマンインタフェース・コンテンツ技術視覚情報に基づく人間とロボットの対面およびネットワークコミュニケーション(051303007)平成17年度〜平成19年度 総務省戦略的情報通信研究開発推進制度(SCOPE)研究成果報告書(平成20年3月)研究代表者 久野 義徳(埼玉大学大学院理工学研究科 教授)より抜粋
    The Institute of Electronics, Information and Communication Engineers, Japanese
    CiNii Articles ID:110006868942, CiNii Books ID:AN10471452
  • 行動履歴に基づく人物存在確率の利用による人物三次元追跡の安定化               
    杉村大輔,小林貴訓,佐藤洋一,杉本晃宏
    情報処理学会論文誌, Volume:1, Number:2, First page:100, Last page:110, 2008
    情報処理学会
  • 人物動線データ群における逸脱行動人物検出および行動パターン分類               
    鈴木直彦,平澤宏祐,田中健一,小林貴訓,佐藤洋一,藤野陽三
    電子情報通信学会論文誌, Volume:91, Number:6, First page:1550, Last page:1560, 2008
    電子情報通信学会
  • An Integrated Method for Multiple Object Detection and Localization               
    Dipankar Das; Al Mansur; Yoshinori Kobayashi; Yoshinori Kuno
    ADVANCES IN VISUAL COMPUTING, PT II, PROCEEDINGS, Volume:5359, First page:133, Last page:144, 2008
    The objective of this paper is to use computer vision to detect and localize multiple objects within an image in the presence of a cluttered background, substantial occlusion, and significant scale changes. Our approach consists of first generating a set of hypotheses for each object using a generative model (pLSA) with a bag of visual words representing each image. Then, the discriminative part verifies each hypothesis using a multi-class SVM classifier with merging features that combine both the spatial shape and the color appearance of an object. In the post-processing stage, environmental context information is used to improve the performance of the system. A combination of features and context information is used to investigate the performance on our local database. The best performance is obtained using object-specific weighted merging features together with the context information. Our approach overcomes the limitations of some state-of-the-art methods. (A minimal code sketch follows this entry.)
    SPRINGER-VERLAG BERLIN, English
    DOI:https://doi.org/10.1007/978-3-540-89646-3_14
    DOI ID:10.1007/978-3-540-89646-3_14, ISSN:0302-9743, Web of Science ID:WOS:000262709700014
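    A toy rendition of the generate-then-verify pipeline from the abstract: a pLSA-like topic score proposes region hypotheses and a stub stands in for the multi-class SVM verification; all data and the decision rule are synthetic placeholders.

```python
# Toy generate-then-verify pipeline: a pLSA-like topic score ranks region
# hypotheses; a stub standing in for the multi-class SVM verifies them.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50                                    # visual-word vocabulary size

def generative_score(hist, topic):
    """Log-likelihood-style score of a region's word histogram under a topic."""
    p = topic / topic.sum()
    return hist @ np.log(p + 1e-9)

def svm_verify(hist):
    """Placeholder for the discriminative verification step."""
    return hist.mean() > 0.5                  # illustrative decision rule

topic   = rng.random(VOCAB)                   # "object" topic distribution
regions = [rng.random(VOCAB) for _ in range(10)]   # candidate region histograms
hypotheses = sorted(regions, key=lambda h: generative_score(h, topic))[-3:]
detections = [h for h in hypotheses if svm_verify(h)]
print(len(detections), "verified detection(s)")
```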
  • Pattern Classification and Detection of an Abnormal Behavior Person in Human Trajectories               
    SUZUKI Naohiko; HIRASAWA Kosuke; TANAKA Kenichi; KOBAYASHI Yoshinori; SATO Yoichi; FUJINO Yozo
    IPSJ SIG Notes. CVIM, Volume:2007, Number:31, First page:109, Last page:115, 19 Mar. 2007
    Recently, the development of vision sensors, GPS, and laser radar has enabled continuous detection of human positions in various situations. At the same time, understanding mobility-trajectory information is important for developing services based on position data. In this research, we therefore propose a new method that classifies human movement patterns and detects a person behaving abnormally, so that human movement can be understood. The method consists of two phases: (i) classification of past human trajectories and detection of an abnormally behaving person, and (ii) real-time detection of an abnormally behaving person based on the learned trajectory patterns. We show that the method can classify human trajectory patterns and detect an abnormal person using data observed in real space. The method is intended to be applied to marketing analysis, video-based security systems, and so on. (A minimal code sketch follows this entry.)
    Information Processing Society of Japan (IPSJ), Japanese
    ISSN:0919-6072, CiNii Articles ID:110006250825, CiNii Books ID:AA11131797
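    The two-phase scheme above can be sketched by learning patterns from resampled trajectories and thresholding the distance to the nearest learned pattern; the data, the single pattern, and the threshold below are illustrative assumptions.

```python
# Toy two-phase scheme: learn a movement pattern from resampled normal
# trajectories, then flag trajectories far from every learned pattern.
import numpy as np

rng = np.random.default_rng(1)

def resample(traj, n=8):
    """Normalize a trajectory to n points by linear interpolation."""
    t, ti = np.linspace(0, 1, len(traj)), np.linspace(0, 1, n)
    return np.stack([np.interp(ti, t, traj[:, d]) for d in range(2)], axis=1)

# phase (i): learn patterns from past trajectories (people walking in +x)
normal   = [resample(np.cumsum(rng.normal([1, 0], 0.1, (20, 2)), axis=0))
            for _ in range(30)]
patterns = [np.mean(normal, axis=0)]          # one cluster mean, for brevity

# phase (ii): flag a new trajectory whose distance to all patterns is large
def is_deviant(traj, patterns, thresh=3.0):
    return min(np.linalg.norm(resample(traj) - p) for p in patterns) > thresh

odd = np.cumsum(rng.normal([0, 1], 0.1, (20, 2)), axis=0)   # walks in +y
print(is_deviant(odd, patterns))   # -> True
```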
  • Incorporating environment models for improving vision-based tracking of people               
    Tatsuya Suzuki; Shinsuke Iwasaki; Yoshinori Kobayashi; Yoichi Sato; Akihiro Sugimoto
    Systems and Computers in Japan, Volume:38, Number:2, First page:71, Last page:80, Feb. 2007
    This paper presents a method for real-time 3D human tracking based on the particle filter by incorporating environment models. We track a human head represented with its 3D position and orientation by integrating the multiple cues from a set of distributed sensors. In particular, the multi-viewpoint color and depth images obtained from distributed stereo camera systems and the 3D shape of an indoor environment measured with a range sensor are used as the cues for 3D human head tracking. The 3D shape of an indoor environment allows us to assume the existing probability of a human head (we call this probability the environment model). While tracking the human head, we consider the environment model to improve the robustness of tracking in addition to the multi-camera's color and depth images. These cues including the environment model are used in the hypothesis evaluation and integrated naturally into the particle filter framework. The effectiveness of our proposed method is verified through experiments in a real environment. © 2007 Wiley Periodicals, Inc.
    English
    DOI:https://doi.org/10.1002/scj.20612
    DOI ID:10.1002/scj.20612, ISSN:0882-1666, SCOPUS ID:33847187068
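    The environment model enters hypothesis evaluation as an extra weighting factor, as the sketch below illustrates with an invented occupancy grid; it is a minimal stand-in for the paper's 3D head-tracking framework.

```python
# Sketch of environment-model weighting in a particle filter: hypotheses
# falling where the measured scene rules out a head get zero existence
# probability. The occupancy grid and noise scales are invented.
import numpy as np

rng = np.random.default_rng(2)

env = np.array([[0.0, 1.0, 1.0],     # existence probability per 1 m cell
                [0.0, 1.0, 1.0],     # (0.0 inside walls / furniture)
                [0.0, 0.0, 1.0]])

def env_prob(p):
    i, j = int(p[1]), int(p[0])
    return env[i, j] if (0 <= i < 3 and 0 <= j < 3) else 0.0

particles = rng.uniform(0, 3, (200, 2))       # hypothesized head positions
obs = np.array([2.2, 1.1])                    # fused camera measurement

likelihood = np.exp(-np.sum((particles - obs) ** 2, axis=1) / 0.5)
weights = likelihood * np.array([env_prob(p) for p in particles])
weights /= weights.sum()
print((weights[:, None] * particles).sum(axis=0).round(2))  # posterior mean
```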
  • パーティクルフィルタとカスケード型識別器の統合による人物三次元追跡               
    小林貴訓,杉村大輔,平澤宏祐,鈴木直彦,鹿毛裕史,佐藤洋一,杉本晃宏
    電子情報通信学会論文誌, Volume:90, First page:2049, Last page:2059, 2007
  • パーティクルフィルタとカスケード型識別器の統合による人物三次元追跡 人物追跡の頑健化・高精度化に向けて               
    小林貴訓; 佐藤洋一; 杉村大輔; 関真規人; 平澤宏祐; 鈴木直彦; 鹿毛裕史; 杉本晃宏
    画像ラボ, Volume:18, Number:12, First page:28, Last page:33, 2007
    Japanese
    ISSN:0915-6755, CiNii Articles ID:40015742660, CiNii Books ID:AN10164169
  • People Tracking with Adaptive Environmental Attributes Using the History of Human Activity               
    SUGIMURA Daisuke; KOBAYASHI Yoshinori; SATO Yoichi; SUGIMOTO Akihiro
    IPSJ SIG Notes. CVIM, Volume:2006, Number:115, First page:171, Last page:178, 10 Nov. 2006
    Various tracking techniques based on particle filters have been proposed. To enhance the robustness of tracking, it is important to consider environmental attributes, which represent the probability of people existing at each location in a scene. They can be used for effective hypothesis generation by concentrating hypotheses on the areas where people are likely to exist. The environmental attributes can be considered in two aspects: one is based on the physical configuration of objects in a scene, and the other is based on the history of people's activities. In this paper, we build history-based environmental attributes that are updated with the people-tracking result at every frame using the online EM algorithm. Furthermore, we incorporate them into our tracking algorithm by using the ICONDENSATION framework. Our experimental results demonstrate the effectiveness of our method. (A minimal code sketch follows this entry.)
    Information Processing Society of Japan (IPSJ), Japanese
    ISSN:0919-6072, CiNii Articles ID:110005716410, CiNii Books ID:AA11131797
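    As a rough stand-in for the online EM update of the history-based attributes, the sketch below raises the existence probability of cells that tracked people visit while slowly decaying the rest; the grid, rates, and track are invented.

```python
# Rough stand-in for the online update of history-based attributes: cells
# visited by tracked people gain existence probability, the rest decays.
import numpy as np

grid  = np.full((10, 10), 0.1)   # prior existence probability per cell
ALPHA = 0.05                     # learning rate

def update(grid, pos):
    g = grid * (1 - ALPHA * 0.1)             # mild global forgetting
    i, j = int(pos[1]), int(pos[0])
    g[i, j] = (1 - ALPHA) * g[i, j] + ALPHA  # visited cell moves toward 1
    return g

track = [(2.5, 3.5), (2.7, 3.6), (3.0, 3.8)] # tracked (x, y), frame by frame
for p in track:
    grid = update(grid, p)
print(grid[3, 2:4].round(3))                 # walked cells stand out
```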
  • Detection of a Deviant Behavior Person Based on Hidden Markov Model               
    SUZUKI Naohiko; HIRASAWA Kosuke; TANAKA Kenichi; KOBAYASHI Yoshinori; SATO Yoichi; FUJINO Yozo
    IEICE technical report, Volume:106, Number:99, First page:43, Last page:48, 08 Jun. 2006
    Recently, the development of vision sensors and GPS has enabled continuous detection of human positions in various situations. At the same time, the need to detect abnormally behaving persons and to analyze customers is increasing in surveillance systems. In this research, we therefore propose two methods: one classifies human movement patterns, and the other detects a person exhibiting deviant behavior. To show the validity of the proposed methods, we evaluate them using human trajectories observed in real space. (A minimal code sketch follows this entry.)
    The Institute of Electronics, Information and Communication Engineers, Japanese
    ISSN:0913-5685, CiNii Articles ID:110004748923, CiNii Books ID:AN10541106
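    The HMM-based idea can be sketched with a two-state discrete HMM scored by the scaled forward algorithm, flagging sequences with low log-likelihood under the model of normal behavior; the parameters and threshold are illustrative only.

```python
# Sketch of HMM-based deviance scoring with the scaled forward algorithm.
import numpy as np

A  = np.array([[0.9, 0.1],    # state-transition probabilities
               [0.1, 0.9]])
B  = np.array([[0.8, 0.2],    # P(observed symbol | state)
               [0.2, 0.8]])
pi = np.array([0.5, 0.5])     # initial state distribution

def log_likelihood(obs):
    """Scaled forward algorithm: log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll

normal  = [0, 0, 0, 1, 1, 1]  # smooth behavior matches the sticky transitions
deviant = [0, 1, 0, 1, 0, 1]  # rapid switching is unlikely under A
THRESH = -4.6                 # illustrative, tuned for this toy model
for name, seq in [("normal", normal), ("deviant", deviant)]:
    ll = log_likelihood(seq)
    print(name, round(ll, 2), "flagged" if ll < THRESH else "ok")
```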
  • カスケード型識別器を用いたパーティクルフィルタによる人物三次元追跡               
    小林貴訓
    First page:222, Last page:228, 2006
    CiNii Articles ID:10024775135
  • 疎分散カメラ群を用いた人物行動軌跡の推定               
    小林貴訓
    2006
    CiNii Articles ID:10021375625
  • Tracking people by using distributed cameras with non-overlapping views               
    Kobayashi Yoshinori; Sato Yoichi; Sugimoto Akihiro
    IPSJ SIG Notes. CVIM, Volume:2005, Number:88, First page:169, Last page:176, 06 Sep. 2005
    A sparse network of multiple cameras can cover a large environment for monitoring objects' activity. To track objects successfully by using distributed cameras, we need to estimate objects' trajectories even in unobserved areas, and we also need to establish correspondences between objects captured by different cameras. This paper provides a method for estimating trajectories of people by using distributed cameras with non-overlapping views. Trajectories of people are estimated by considering an evaluation function derived from a motion model, an observation model, and an environment model. (A minimal code sketch follows this entry.)
    Information Processing Society of Japan (IPSJ), Japanese
    ISSN:0919-6072, CiNii Articles ID:110002702282, CiNii Books ID:AA11131797
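    The evaluation-function idea can be illustrated by extrapolating a track leaving one view with a constant-velocity motion model and scoring candidate entering tracks for spatio-temporal consistency; all geometry below is invented.

```python
# Sketch of linking tracks across non-overlapping views: extrapolate the
# exit state with a constant-velocity motion model and score candidate
# entering tracks for spatio-temporal consistency.
import numpy as np

def predict(exit_pos, exit_vel, dt):
    return exit_pos + exit_vel * dt           # constant-velocity extrapolation

def match_score(pred, entry_pos, sigma=1.0):
    return np.exp(-np.sum((pred - entry_pos) ** 2) / (2 * sigma ** 2))

exit_pos, exit_vel = np.array([5.0, 0.0]), np.array([1.0, 0.0])
candidates = {                                # entry position, time gap [s]
    "track_b1": (np.array([8.1, 0.2]), 3.0),
    "track_b2": (np.array([5.2, 4.0]), 3.0),
}
best = max(candidates,
           key=lambda k: match_score(predict(exit_pos, exit_vel,
                                             candidates[k][1]), candidates[k][0]))
print(best)   # -> track_b1, consistent with walking straight through the gap
```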
  • 環境モデルの導入による人物追跡の安定化               
    鈴木達也,岩崎慎介,小林貴訓,佐藤洋一,杉本晃宏
    電子情報通信学会論文誌, Volume:88, First page:1592, Last page:1600, 2005
  • 机型実世界指向システムにおける紙と電子情報の統合および手指による実時間インタラクションの実現               
    小池英樹; 小林貴訓; 佐藤洋一
    情報処理学会論文誌, Volume:3, Number:3, First page:577, Last page:585, 2001
    CiNii Articles ID:10011218548
  • Integrating Paper and Digital Information on EnhancedDesk: A Method for Realtime Finger Tracking on an Augmented Desk System               
    H. Koike; Y. Sato; Y. Kobayashi
    ACM Trans. Computer-Human Interaction, Volume:8, First page:307, Last page:322, 2001
  • Real-Time Tracking of Multiple Fingertips and Its Application for HCI               
    OKA Kenji; KOBAYASHI Yoshinori; SATO Yoichi; KOIKE Hideki
    IPSJ SIG Notes. CVIM, Volume:123, Number:82, First page:51, Last page:58, 14 Sep. 2000
    In this work, we introduce a fast and robust method for tracking positions of the center and the fingertips of a user's hand. In particular, our method makes use of infrared camera images for reliable detection of a user's hand even in complex backgrounds, and uses a template matching technique for detecting fingertips in each image frame. Then we consider correspondences of detected fingertips between successive image frames. This contributes to significantly better performance for detecting fingertips even when a hand is moving fast. By using the proposed method, we can measure loci of multiple fingertips successfully in real-time. In this paper, we describe the details of our proposed method, and report the result of the experiment that we conducted for evaluating the method's tracking performance.
    Information Processing Society of Japan (IPSJ), Japanese
    ISSN:0919-6072, CiNii Articles ID:110002674558, CiNii Books ID:AA11131797
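    The inter-frame correspondence step described above can be sketched as greedy nearest-neighbour matching of per-frame fingertip detections; the detections are given directly here in place of the infrared template matcher.

```python
# Sketch of the inter-frame correspondence step: per-frame fingertip
# detections (coordinate lists standing in for the template matcher)
# are linked by greedy nearest-neighbour assignment.
import numpy as np

def correspond(prev, curr, max_dist=20.0):
    """Greedily match fingertips between successive frames."""
    pairs, used = [], set()
    for i, p in enumerate(prev):
        d = [np.linalg.norm(p - c) if j not in used else np.inf
             for j, c in enumerate(curr)]
        j = int(np.argmin(d))
        if d[j] < max_dist:
            pairs.append((i, j))
            used.add(j)
    return pairs

prev = [np.array([10.0, 10.0]), np.array([40.0, 12.0])]
curr = [np.array([42.0, 15.0]), np.array([12.0, 11.0])]
print(correspond(prev, curr))   # -> [(0, 1), (1, 0)]
```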
  • 赤外線画像を用いた指先実時間追跡による Enhanced Desk の実現               
    小林貴訓
    1999
    CiNii Articles ID:80011389301
  • EnhancedDeskのための赤外線画像を用いた実時間指先認識インターフェース               
    小林貴訓
    First page:49, Last page:54, 1999
    CiNii Articles ID:20001460191
■ Books and other publications
  • 観客と協創する芸術II               
    大津耕陽, 福田悠人, 小林貴訓
    埼玉大学リベラルアーツ叢書, Mar. 2022
    Total pages:306
    ISBN:9784991013942
  • オブジェクト指向言語Java               
    小林貴訓, Htoo Htoo, 大沢裕
    コロナ社, Nov. 2016
    Total pages:232
    ISBN:9784339028652
  • 人と協働するロボット革命最前線               
    小林貴訓, 久野義徳, 山崎敬一, 山崎晶子
    NTS, May 2016
    Total pages:342
    ISBN:9784860434519
■ Lectures, oral presentations, etc.
  • 対話的ナビゲーションの高度化に向けたロボット車いす搭乗者の注目領域推定               
    伊藤大登; 鈴木亮太; 小林貴訓
    Mar. 2025
    Japanese, Oral presentation
  • 発話を促す対話ロボットのためのマルチモーダル感情推定               
    池田裕貴; 鈴木亮太; 小林貴訓
    Mar. 2025
    Japanese, Oral presentation
  • コンサート演出への応用に向けた群鑑賞行動認識               
    野呂広人; 鈴木亮太; 小林貴訓
    Mar. 2025
    Japanese, Oral presentation
  • 食事中のロボットの話しかけタイミングの調整のための非接触型咀嚼認識               
    木村俊樹; 鈴木亮太; 小林貴訓
    Mar. 2025
    Japanese, Oral presentation
  • Analysis of Interaction in a Remote Instruction System Based on an Analysis of Elderly Care Facilities               
    A. Yamazaki; K. Yamazaki; Y. Kobayashi
    International Symposium on Ethnomethodological Studies of the Practices of Law and Medical and Health Care, Mar. 2025, [Invited]
    English, Nominated symposium
  • 没入型VR体験と現実世界を繋ぐエージェントロボット               
    小野寺浩気; 鈴木亮太; 小林貴訓
    Mar. 2025
    Japanese, Poster presentation
  • 人と協働するロボットー自動走行車椅子を中心としてー               
    小林 貴訓
    Jan. 2025, [Invited]
    Japanese, Public discourse
  • 人物センシングと介護支援システムへの応用               
    小林 貴訓
    Oct. 2024, [Invited]
    Japanese, Invited oral presentation
  • 2D-LiDARによる足元計測に基づくByteTrackを用いた歩行者追跡               
    廣中優平; 鈴木亮太; 小林貴訓
    Jun. 2024
    Japanese, Poster presentation
  • 人と協働するロボットー自動走行車椅子を中心としてー               
    小林 貴訓
    Jun. 2024, [Invited]
    Japanese, Public discourse
  • Autonomous Wheelchair Following Traffic Guard's Instruction Based on Action Recognition               
    F. Yang; R. Suzuki; Y. Kobayashi
    Robomech2024, May 2024
    English, Poster presentation
  • 収穫支援ロボットのための自動追従・運搬機能の開発               
    青木一航; 鈴木亮太; 小林貴訓
    May 2024
    Japanese, Poster presentation
  • 同伴者追跡技術を援用したRTK-GNSSによる車椅子ナビゲーション               
    韓佳孝, 鈴木亮太, 小林貴訓
    Mar. 2024, [Domestic conference]
  • 2D-LiDARを用いた没入型VRにおける座位姿勢での疑似歩行インタフェース               
    高橋留以, 鈴木亮太, 小林貴訓
    Mar. 2024, [Domestic conference]
  • 会話を促進するロボットの身体的感情表現の評価               
    永井之晴, 鈴木亮太, 小林貴訓
    Mar. 2024, [Domestic conference]
  • 高齢者の外出意欲を増進する対話ロボット付き自律移動車椅子の提案               
    平山清貴, 鈴木亮太, 小林貴訓
    Mar. 2024, [Domestic conference]
  • 対話における視覚障碍者の空間認知を支援するロボットインターフェース               
    稲田晴文, 鈴木亮太, 小林貴訓
    Mar. 2024, [Domestic conference]
  • ARとロボットを用いた美術鑑賞体験の時空間的増強               
    長坂有美, 鈴木亮太, 小林貴訓
    Mar. 2024, [Domestic conference]
  • モニターテストにおける製品使用時のポジティブ・ネガティブ感情推定               
    瀧建人, 鈴木亮太, 小林貴訓
    Jun. 2023, [Domestic conference]
  • 対話的に工場案内する自律移動ロボット               
    篠昂征, 鈴木亮太, 小林貴訓
    Mar. 2023, [Domestic conference]
  • 対話性の付与に基づく過去のコンサート映像のライブ感増強               
    中山雅方, 鈴木亮太, 大津耕陽, 福田悠人, 小林貴訓
    Mar. 2023, [Domestic conference]
  • インタラクション応用に向けたマスクを用いた呼吸計測               
    塩澤大地, 鈴木亮太, 小林貴訓
    Mar. 2023, [Domestic conference]
  • 2D-LiDARによる足元計測に基づく全身骨格推定               
    須田悠介, 鈴木亮太, 小林貴訓
    Mar. 2023, [Domestic conference]
  • 車椅子バスケットボールの競技力向上に向けた情報提示               
    土屋直紀, 鈴木亮太, 小林貴訓, 久野義徳, 福田悠人, 信太奈美, 杉山真理, 半田隆志, 森田智之
    Mar. 2023, [Domestic conference]
  • 美術館における比較鑑賞へのOmniFlickViewの応用               
    高尾美菜, 鈴木亮太, 小林貴訓, 佐藤智実, 岩田健司
    Mar. 2023, [Domestic conference]
  • 対話ロボットのためのユーザの発話内容に基づく画像提示               
    福田悠人, 鈴木亮太, 小林貴訓
    Mar. 2023, [Domestic conference]
  • 買い物支援ロボットの開発に向けたユーザの購買行動の分析               
    中野渡駿, 鈴木亮太, 小林貴訓
    Nov. 2022, [Domestic conference]
  • ユーザの身体性を考慮した遠隔買い物支援ロボットカート               
    山口洋平, 萩庭大地, 福田悠人, 小林貴訓
    Jun. 2022, [Domestic conference]
  • 先行するユーザの歩行軌跡を模倣する移動ロボット               
    加藤淳志,小林貴訓
    Mar. 2022, [Domestic conference]
  • 身体配置を考慮した遠隔買い物支援ロボットカート               
    山口洋平,小林貴訓
    Mar. 2022, [Domestic conference]
  • 郷土の魅力を発信する高校生向けVRコンテンツの作成               
    石山悠斗,小林貴訓
    Mar. 2022, [Domestic conference]
  • 遠隔購買システムにおける複数視点共有の問題               
    神田捷来, 瀧本昇太, 山崎晶子, 山崎敬一, 小林貴訓
    Mar. 2022, [Domestic conference]
  • 遠隔購買行為における音声情報処理の問題               
    内田尚紀, 山崎晶子, 山崎敬一, 小林貴訓
    Mar. 2022, [Domestic conference]
  • 人型ロボットを用いた発話を促進する会議支援システム               
    須合優,山形良介,小林貴訓
    Mar. 2022, [Domestic conference]
  • 内装デザインのための配色検討システム               
    籏町実咲,福田悠人,小林貴訓
    Dec. 2021, [Domestic conference]
  • 2D LiDARとIMUセンサを用いた歩容に基づくユーザ同定               
    斉藤亮,福田悠人,小林貴訓
    Dec. 2021, [Domestic conference]
  • ユーザとの位置関係を援用した対話型ロボットショッピングカート               
    佐々木知紀,吉原拓海,中根旺浩,福田悠人,久野義徳,小林貴訓
    Jun. 2021, [Domestic conference]
  • 車椅子バスケットボール用車椅子における旋回時フレーム挙動の分析と最適化に向けた予備的検討               
    半田隆志, 香西良彦, 都知木邦裕, 信太奈美, 杉山真理, 森田智之, 福江啓太, 小林貴訓, 福田悠人, 久野義徳
    信学技報, Jun. 2021, [Domestic conference]
  • 遠隔学習の動機づけを支援するインタラクティブデバイス               
    柿本涼太,大津耕陽,福田悠人,小林貴訓
    Mar. 2021, [Domestic conference]
  • 遠隔対話時の発話を支援するCGエージェント               
    小林弥生,福田悠人,小林貴訓
    Mar. 2021, [Domestic conference]
  • 誘導と追従を切り替えながら移動するロボットショッピングカート               
    佐々木知紀,福田悠人,小林貴訓
    Mar. 2021, [Domestic conference]
  • 全方位画像を用いた確信度に基づく大域的自己位置推定               
    高橋俊裕,福田悠人,小林貴訓,久野義徳
    Jun. 2020, [Domestic conference]
  • 歩容情報計測に向けたLiDARによる歩行者追跡               
    塙潤一,後藤陸,福田悠人,小林貴訓,久野義徳
    Jun. 2020, [Domestic conference]
  • 車椅子バスケットボールにおける漕ぎ出し動作の画像解析               
    福江啓太, 福田悠人, 小林貴訓, 久野義徳, 信太奈美, 杉山真理, 半田隆志, 森田智之
    Mar. 2020, [Domestic conference]
  • 自律移動台車のための2D-LiDARを用いた歩行者追跡               
    塙潤一, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • マルチモーダル情報に基づくユーザの興味度推定               
    王燕京, 大津耕陽, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • ユーザの注視情報を伝達する遠隔買い物支援システム               
    萩庭大地, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • 遠隔地の気配を共有する繋がり感提示デバイス               
    菊池拓哉, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • 集団的創発を促進する会議支援システム               
    山形良介, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • VR対話環境におけるアバターのふるまいが与える印象の調査               
    並川優衣, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • 移動ロボットの認識状態提示に基づく歩行者との協調移動               
    金井浩亮, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2020, [Domestic conference]
  • 人物行動計測とインタラクティブシステムへの応用               
    小林貴訓
    信学技報, Mar. 2020, [Domestic conference]
  • パーソナルモビリティの誘導に向けた視覚刺激の検討               
    泉田駿, 鈴木亮太, 福田悠人, 小林貴訓, 久野義徳
    Jun. 2019, [Domestic conference]
  • LiDARで計測した歩容情報を用いたユーザ属性の分類               
    後藤陸, 福田悠人, 小林貴訓, 久野義徳
    Jun. 2019, [Domestic conference]
  • メディエータロボットを用いた非同期遠隔共食支援システム               
    板垣立稀, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • 歩行者追跡に基づいて周辺状況を可視化するインタラクティブイルミネーション               
    小澤稔浩, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • 搬送ロボットのための音声入力インタフェースの開発               
    吉原拓海, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • ロボット車椅子のための電磁場変動センサを用いた同伴者位置計測               
    尾形 恵, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • LiDARを用いた歩容情報計測に基づくユーザ属性の推定               
    後藤 陸, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • エンゲージメント推定に基づくビデオ通話支援ロボット               
    傳 思成, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • 搬送ロボットのための遠隔操作システムの開発               
    中根旺浩, 福田悠人, 小林貴訓, 久野義徳
    Mar. 2019, [Domestic conference]
  • 一体感を増強する遠隔ライブ参加システム               
    寺内涼太, 福島史康, 大津耕陽, 福田悠人, 小林貴訓, 久野義徳, 山崎敬一
    Mar. 2019, [Domestic conference]
  • アイドルとファンを繋ぐ研究               
    小林貴訓
    Dec. 2018, [Domestic conference]
  • 全天球カメラを用いた遠隔対話のための視点映像生成               
    歌田夢香, 福田悠人, 小林貴訓, 久野義徳, 山崎敬一
    Jun. 2018, [Domestic conference]
  • 映像解析に基づく頑健・高速な心拍数計測手法               
    大津耕陽, Tilottoma Das, 福田悠人, Lam Antony, 小林貴訓, 久野義徳
    Jun. 2018, [Domestic conference]
  • 瞬きの引き込み現象を援用した対話エージェント               
    李明輝,福田悠人,小林貴訓,久野義徳
    May 2018, [Domestic conference]
  • 視覚効果によるパーソナルモビリティの誘導               
    泉田駿,鈴木亮太,福田悠人,小林貴訓,久野義徳
    Mar. 2018, [Domestic conference]
  • 人間の瞬きを再現した会話エージェント               
    李明輝,福田悠人,小林貴訓,久野義徳
    Mar. 2018, [Domestic conference]
  • 演者と聴衆の一体感を増強させるインタラクティブペンライト               
    福島史康,大津耕陽,福田悠人,久野義徳,平原実留,山崎敬一,小林貴訓
    Mar. 2018, [Domestic conference]
  • 清掃ロボット運用支援のための遠隔画像監視システム               
    川久保公補,福田悠人,小林貴訓,久野義徳,瀧澤秀和
    Mar. 2018, [Domestic conference]
  • 全天球カメラを用いた遠隔買い物支援システム               
    歌田夢香,福田悠人,小林貴訓,久野義徳,山崎敬一
    Mar. 2018, [Domestic conference]
  • レーザ測域センサの反射強度を用いた物体姿勢追跡               
    後藤宏輔,福田悠人,小林貴訓,久野義徳
    Mar. 2018, [Domestic conference]
  • 自律移動車椅子の動作を伝えるエージェントロボットの開発               
    飯山恵美,福田悠人,小林貴訓,久野義徳,山崎敬一
    Mar. 2018, [Domestic conference]
  • 多人数動作解析に基づく相互関係の理解               
    李春軒,福田悠人,久野義徳,小林貴訓
    Mar. 2018, [Domestic conference]
  • スマートフォンを援用した移動ロボットのための人物同定               
    下舘尚規,福田悠人,小林貴訓,久野義徳
    Mar. 2018, [Domestic conference]
  • 発話状況認識に基づく遠隔対話支援エージェント               
    及川開斗,福田悠人,久野義徳,小林貴訓
    Mar. 2018, [Domestic conference]
  • Teleoperation of a Robot through Audio-Visual Signal via Video Chat               
    H. Fukuda, Y. Kobayashi, and Y. Kuno
    Proc. 13th ACM/IEEE International Conference on Human-Robot Interaction, Mar. 2018, [International conference]
  • Precise Bus Door Detection for Robotic Wheelchair Boarding               
    Jiang Li, Hisato Fukuda, Yoshinori Kobayashi, Yoshinori Kuno
    Nov. 2017, [Domestic conference]
  • DNNに基づく感情推定手法の対話ロボットへの応用               
    山本 祐介, 福田 悠人, 小林 貴訓, 久野 義徳
    Nov. 2017, [Domestic conference]
  • 環境変化に頑健なビデオ映像による心拍数計測手法               
    大津耕陽, 倉橋知己, Tilottoma Das, 福田悠人, Lam Antony, 小林貴訓, 久野義徳
    第23回画像センシングシンポジウム(SSII2017)予稿集, Jun. 2017, [Domestic conference]
  • 高齢者の買い物を支援するロボットショッピングカート               
    山崎誠治, 高橋秀和, 鈴木亮太, 山田大地, 福田悠人, 小林貴訓, 久野義徳
    第23回画像センシングシンポジウム(SSII2017)予稿集, Jun. 2017, [Domestic conference]
  • スマートフォン搭載センサを用いた歩行者同定               
    遠藤文人,鈴木亮太,福田悠人,小林貴訓,久野義徳
    Mar. 2017, [Domestic conference]
  • 位置情報に基づくサービスを提供する自律移動ショッピングカート               
    山崎 誠治,高橋 秀和,鈴木 亮太,山田 大地,福田 悠人,小林 貴訓,久野 義徳
    Mar. 2017, [Domestic conference]
  • ロボットの視線フィードバックを援用した指示物体の同定               
    高田 靖人,福田 悠人,小林 貴訓,久野 義徳
    Mar. 2017, [Domestic conference]
  • Blind Area Detection for Safe and Comfortable Navigation of Robotic Wheelchairs               
    J.T. Husna, H. Fukuda, Y. Kobayashi, Y. Kuno
    Mar. 2017, [Domestic conference]
  • Robotic Agent to Support Navigation and Communication for Autonomous Wheelchair               
    R. Kumari, H. Fukuda, Y. Kobayashi, Y. Kuno
    Mar. 2017, [Domestic conference]
  • 玄関での日常会話を援用した徘徊抑止システムの提案               
    長嶺 洋佑,大津 耕陽,福田 悠人,小林 貴訓,久野 義徳
    Mar. 2017, [Domestic conference]
  • ロボットのための動的観察に基づく物体認識手法の提案               
    三浦 且之,福田 悠人,小林 貴訓,久野 義徳
    Mar. 2017, [Domestic conference]
  • ロボットシステムを高齢者支援において使用する際の視点に関する社会学的工学的分析               
    小松由和, 山崎晶子, 山崎敬一, 小林貴訓, 福田悠人, 森田有希野, 図子智紀, 清水美和
    Mar. 2017, [Domestic conference]
  • ベクションを用いたパーソナルモビリティの誘導               
    鈴木 亮太,中村 優介,福田 悠人,小林 貴訓,久野 義徳
    インタラクション2017予稿集, Mar. 2017, [Domestic conference]
  • 遠隔ビデオ通話機能を備えた自律追従型見守りロボット               
    Sidra Tariq,福田悠人,小林貴訓,久野義徳
    HAIシンポジウム予稿集, Dec. 2016, [Domestic conference]
  • BLE対応スマートフォンを持った同伴者と協調移動するロボット車椅子               
    関根凌太, 高橋秀和, 鈴木亮太, 福田悠人, 小林貴訓, 久野義徳, 山崎敬一, 山崎晶子
    Sep. 2016, [Domestic conference]
  • ロボットが誘発する多人数相互行為の分析               
    楊澤坤, 福田悠人, 山崎敬一, 山崎晶子, 小林貴訓, 久野義徳
    Sep. 2016, [Domestic conference]
  • 電磁場変動センサを応用したロボット車椅子の試作               
    小玉亮,鈴木亮太,小林貴訓,梶本裕之
    日本機械学会ロボティクス・メカトロニクス講演会予稿集, Jun. 2016, [Domestic conference]
  • テレビ電話と連携した人形型デバイスを用いた高齢者の遠隔見守りシステム               
    大津耕陽,松田 成,福田悠人,小林貴訓
    画像センシングシンポジウム予稿集, Jun. 2016, [Domestic conference]
  • 付加情報を用いた学習済みCNNに基づく物体認識               
    細原大輔, 福田悠人, 小林貴訓, 久野義徳
    画像センシングシンポジウム予稿集, Jun. 2016, [Domestic conference]
  • 人と協調するロボットの画像処理               
    小林貴訓
    May 2016, [Domestic conference]
  • 慣性計測センサを用いたロボット車椅子ユーザの暴れと立ち上がり検知               
    横倉拓朗,鈴木亮太,小林貴訓,久野義徳
    電子情報通信学会総合大会, Mar. 2016, [Domestic conference]
  • 美術館来訪者の興味度の推定に向けた移動行動の分析               
    米澤拓也,鈴木亮太,Md. Golam Rashed,小林貴訓,久野義徳
    電子情報通信学会総合大会, Mar. 2016, [Domestic conference]
  • 対話状況の観察に基づく高齢者遠隔見守りシステム               
    大津耕陽,小林貴訓,久野義徳
    電子情報通信学会総合大会, Mar. 2016, [Domestic conference]
  • テレビ電話を介した視聴覚情報によるロボットの遠隔操作               
    水村育美,鈴木亮太,小林貴訓,久野義徳
    電子情報通信学会総合大会, Mar. 2016, [Domestic conference]
  • 陳列状況の変化に頑健な自律移動ショッピングカート               
    高橋秀和,鈴木亮太,小林貴訓,久野義徳
    電子情報通信学会総合大会学生ポスタセッション, Mar. 2016, [Domestic conference]
  • 複数視点間の認識結果の統合に基づく物体認識               
    渡邉真悠,福田悠人,小林貴訓,久野義徳
    電子情報通信学会総合大会学生ポスタセッション, Mar. 2016, [Domestic conference]
  • Object Recognition Using Size Information Based on Pre-trained Convolutional Neural Network               
    D. Hosohara, H. Fukuda, Y. Kobayashi, Y. Kuno
    Korea-Japan joint Workshop on Frontiers of Computer Vision(FCV2016), Feb. 2016, [International conference]
  • 全方位カメラ画像からの継続的な人物追跡手法の提案               
    松藤彰宏, 鈴木亮太, 本田秀明, 山本昇志, 小林貴訓
    映像情報メディア学会冬季大会予稿集, Dec. 2015, [Domestic conference]
  • 認知症介護支援のためのキーワード検出に基づく対話システム               
    堀江直人,小林貴訓,久野義徳
    パターン計測シンポジウム, Nov. 2015, [Domestic conference]
  • 高齢者の状態センシングに基づく遠隔見守りシステム               
    大津耕陽,小林貴訓,久野義徳
    パターン計測シンポジウム, Nov. 2015, [Domestic conference]
  • サービスロボットのためのRGB-Dデータの領域分割に基づく時間経過を考慮したシーン理解               
    石川雅貴,小林貴訓,久野義徳
    パターン計測シンポジウム, Nov. 2015, [Domestic conference]
  • 認知症介護を支援する遠隔コミュニケーションシステム               
    松田成,小林貴訓,久野義徳
    画像センシングシンポジウム(SSII2015), Jun. 2015, [Domestic conference]
  • 一緒に移動する同伴者を自動認識するロボット車椅子               
    横倉拓朗, 小林貴訓, 久野義徳
    電子情報通信学会総合大会, Mar. 2015, [Domestic conference]
  • レーザ測域センサを用いた環境情報を考慮した人物追跡               
    板垣大二朗, 小林貴訓, 久野義徳
    電子情報通信学会総合大会, Mar. 2015, [Domestic conference]
  • ロボットを用いた遠隔コミュニケーションシステム               
    菊川俊樹, 小林貴訓, 久野義徳
    電子情報通信学会総合大会, Mar. 2015, [Domestic conference]
  • 相手の集中度を考慮したロボットによる注意獲得               
    山我直史, 小林貴訓, 久野義徳
    電子情報通信学会総合大会学生ポスターセッション, Mar. 2015, [Domestic conference]
  • ロボット車椅子の進路提示手法に関する検討               
    澤田拳, 小林貴訓, 久野義徳
    電子情報通信学会総合大会学生ポスターセッション, Mar. 2015, [Domestic conference]
  • エージェント対話システムのための高速カメラを用いた表情認識               
    秋山俊貴, 小林貴訓, 久野義徳
    電子情報通信学会総合大会学生ポスターセッショ, Mar. 2015, [Domestic conference]
  • Museum Guide Robot by Considering Static and Dynamic Gaze Expressions to Communicate with Visitors               
    K. Sano, K. Murata, R. Suzuki, Y. Kuno, D. Itagaki, Y. Kobayashi
    International Conference on Human-Robot Interaction (HRI2015) Late Breaking Report, Mar. 2015, [International conference]
  • Toward Museum Guide Robots Proactively Initiating Interaction with Humans               
    M.G. Rashed, R. Suzuki, A. Lam, Y. Kobayashi, Y. Kuno
    International Conference on Human-Robot Interaction (HRI2015) Late Breaking Report, Mar. 2015, [International conference]
  • Object Pose Estimation Using Category Information from a Single Image               
    S. Shimizu, H. Koyasu, Y. Kobayashi, Y. Kuno
    Korea-Japan joint Workshop on Frontiers of Computer Vision (FCV2015), Jan. 2015, [International conference]
  • Design of Robot Eyes Suitable for Gaze Communication               
    T. Onuki, T. Ishinoda, Y. Kobayashi and Y. Kuno
    International Conference on Human-Robot Interaction (HRI2013) Late breaking Report, 2013, [International conference]
  • Attention Control System Considering the Target Person's Attention Level               
    D. Das, M. Hoque, T. Onuki, Y. Kobayashi and Y. Kuno
    International Conference on Human-Robot Interaction (HRI2013) Late breaking Report, 2013, [International conference]
  • Question Strategy and Interculturality in Human-Robot Interaction               
    M. Fukushima, R. Fujita, M. Kurihara, T. Suzuki, K. Yamazaki, A. Yamazaki, K. Ikeda, Y. Kuno, Y. Kobayashi, T. Ohyama and E. Yoshida
    International Conference on Human-Robot Interaction (HRI2013) Late breaking Report, 2013, [International conference]
  • Tracking a Robot and Visitors in a Museum Using Sensor Poles               
    T. Ohyama, E. Yoshida, Y. Kobayashi and Y. Kuno
    19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2013), 2013, [International conference]
  • Designing Robot Eyes for Gaze Communication               
    T. Onuki, T. Ishinoda, Y. Kobayashi and Y. Kuno
    19th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2013), 2013, [International conference]
  • 同伴者の行動に配慮したロボット車椅子の協調移動に関する検討               
    佐藤慶尚, 小林貴訓, 久野義徳
    電子情報通信学会総合大会学生ポスターセッション, 2013, [Domestic conference]
  • 介護支援ロボットのためのユーザリクエスト認識手法の検討               
    田畠知弥, 小林貴訓, 久野義徳
    電子情報通信学会総合大会学生ポスターセッション, 2013, [Domestic conference]
  • 不可視マーカを用いた複数ロボット車椅子の位置認識               
    新井雅也, 山崎晶子, 小林貴訓, 久野義徳
    電子情報通信学会総合大会, 2013, [Domestic conference]
  • オントロジーに基づく対話を援用した物体認識の検討               
    福田悠人, 小林貴訓, 久野義徳, 加地大介
    電子情報通信学会総合大会, 2013, [Domestic conference]
  • Robotic Wheelchair Moving with the Caregiver at the Selected Position According to the Situation               
    X. Xin, Y. Kobayashi, Y. Kuno
    Mar. 2012
  • 鑑賞者を適切な位置に誘導するガイドロボット               
    望月博康, 小林貴訓, 久野義徳
    Mar. 2012
  • Attracting and Controlling Human Attention through Robot’s Behaviors Suited to the Situation               
    M.M. Hoque, T. Onuki, D. Das, Y.Kobayashi, Y. Kuno
    Mar. 2012
  • 介護者に併走するロボット車椅子のためのタッチパネルインタフェース               
    鈴木亮太, 小林貴訓, 久野義徳
    Mar. 2012
  • Establishment of Spatial Formation by a Mobile Guide Robot               
    M. A. Yousuf, Y. Kobayashi, Y. Kuno, K. Yamazaki, A. Yamazaki
    Mar. 2012
  • Multiple Robotic Wheelchair System Based on the Observation of Circumstance               
    E.Takano, Y.Kobayashi, Y.Kuno
    Feb. 2012
  • A Mobile Guide Robot Capable of Formulating Spatial Formations               
    M.A.Yousuf, Y.Kobayashi, A.Yamazaki, K.Yamazaki, Y.Kuno
    Feb. 2012
  • Interactive Object Recognition Using Attribute Information               
    H.Fukuda, Y.Kobayashi, Y.Kuno
    Feb. 2012
  • Robotic Wheelchair with Omni-directional Vision for Moving Alongside a Caregiver               
    Y. Kobayashi, R. Suzuki and Y. Kuno
    Annual Conference of the IEEE Industrial Electronics Society (IECON2012), 2012, [International conference]
  • Robotic Wheelchair Easy to Move and Communicate with Companions               
    R. Suzuki, E. Takano, Y. Kobayashi, Y. Kuno, K. Yamazaki and A. Yamazaki
    IROS2012 Workshop on Progress, Challenges and Future Perspectives in Navigation and Manipulation Assistance for Robotic Wheelchairs, 2012, [International conference]
  • A Strategy to Enhance Visitors' Audience Participation towards a Museum Guide Robot               
    K. Ikeda, A. Yamazaki, K. Yamazaki, T. Ohyama, Y. Kobayashi and Y. Kuno
    IROS2012 Workshop on Human-Agent Interaction, 2012, [International conference]
  • 視線コミュニケーションを考慮したロボット頭部の開発               
    小貫朋実, 宮田雄規, 小林貴訓, 久野義徳
    画像センシングシンポジウム(SSII2012), 2012, [Domestic conference]
  • 全方位カメラによる周辺情報を用いたロボット車椅子の開発               
    鈴木亮太, 高野恵利衣, 宗像信聡, 小林貴訓, 久野義徳
    画像センシングシンポジウム(SSII2012), 2012, [Domestic conference]
  • RGB-Dカメラを用いたサービスロボットのための対話物体認識               
    福田悠人, 小林貴訓, 久野義徳
    Jan. 2012
  • 多重的身体行動を用いて複数人からの依頼に対応するケアロボット               
    行田将彦, 田畠知弥, 小林貴訓, 久野義徳, 山崎敬一, 佐藤信吾, 山崎晶子
    Jan. 2012
  • Spatial-Based Feature for Locating Objects               
    L. Cao, Y. Kobayashi and Y.Kuno
    Lecture Notes in Computer Science, 2012, [International conference]
  • An Integrated Approach of Attention Control of Target Human by Nonverbal Behaviors of Robots in Different Viewing Situations               
    M. Hoque, D. Das, T. Onuki, Y. Kobayashi and Y. Kuno
    International Conference on Intelligent Robots and Systems (IROS2012), 2012, [International conference]
  • Development of a Mobile Museum Guide Robot That Can Configure Spatial Formation with Visitors               
    M. Yousuf, Y. Kobayashi, Y. Kuno, A. Yamazaki and K. Yamazaki
    Lecture Notes in Computer Science, 2012, [International conference]
  • Model for Controlling a Target Human's Attention in Multi-Party Settings               
    M. Hoque, D. Das, T. Onuki, Y. Kobayashi and Y. Kuno
    International Symposium on Robot and Human Interactive Communication(Ro-Man2012), 2012, [International conference]
  • Empirical Framework for Autonomous Wheelchair System in Human-Shared Environments               
    R. Tomari, Y. Kobayashi and Y. Kuno
    International Conference on Mechatronics and Automation(ICMA2012), 2012, [International conference]
  • Vision-Based Attention Control System for Socially Interactive Robots               
    International Symposium on Robot and Human Interactive Communication(Ro-Man2012), 2012, [International conference]
  • Wide Field of View Kinect Undistortion for Social Navigation Implementation               
    R. Tomari, Y. Kobayashi and Y. Kuno
    Lecture Notes in Computer Science, 2012, [International conference]
  • Robotic System Controlling Target Human's Attention               
    M. Hoque, D. Das, T. Onuki, Y. Kobayashi and Y. Kuno
    Lecture Notes in Computer Science, 2012, [International conference]
  • A Spatial-Based Approach for Groups of Objects               
    L. Cao, Y. Kobayashi and Y.Kuno
    Lecture Notes in Computer Science, 2012, [International conference]
  • Object Recognition for Service Robots through Verbal Interaction about Multiple Attribute Information               
    H. Fukuda, S. Mori, Y. Kobayashi and Y. Kuno
    Lecture Notes in Computer Science, 2012, [International conference]
  • Model of Guide Robot Behavior to Explain Multiple Exhibits to Multiple Visitors               
    M. Yousuf, Y. Kobayashi, Y. Kuno, K. Yamazaki and A. Yamazaki
    International Session of 30th Annual Conference of the Robotics Society of Japan (RSJ2012), 2012, [Domestic conference]
  • 親しみやすさと視線コミュニケーション機能を考慮したロボットの目のデザイン               
    小貫朋実, 宮田雄規, 小林貴訓, 久野義徳
    Dec. 2011
  • Implementation of F-Formation and "Pause and Restart" for a Mobile Museum Guide Robot               
    Mohammad Abu Yousuf, Yoshinori Kobayashi, Akiko Yamazaki, Yoshinori Kuno
    Dec. 2011
  • Controlling Human Attention by Robot's Behavior Depending on his/her Viewing Situations               
    M.M. Hoque, T. Onuki, E. Tsuburaya, Y. Kobayashi, Y. Kuno
    Nov. 2011
  • コミュニケーションを考慮した複数ロボット車椅子システム               
    高野恵利衣, 小林貴訓, 久野義徳
    Aug. 2011
  • 材質を含む属性情報を利用したサービスロボットのための対話物体認識               
    福田悠人, 小林貴訓, 久野義徳
    Aug. 2011
  • 非言語行動で依頼を受け付ける移動介護ロボット               
    行田将彦, 田畠知弥, 小林貴訓, 久野義徳
    画像センシングシンポジウム予稿集, Jun. 2011
  • 対話物体認識のための材質情報の獲得               
    福田悠人, 小林貴訓, 久野義徳
    画像センシングシンポジウム予稿集, Jun. 2011
  • 複数の人に非言語で対応する介護ロボット               
    田畠知弥、行田将彦、小林貴訓、久野義徳
    電子情報通信学会総合大会予稿集, 2011, [Domestic conference]
  • 搭乗者の不安を和らげるロボット車椅子の動作提示方法の検討               
    胡少丹、小林貴訓、久野義徳
    電子情報通信学会総合大会予稿集, 2011, [Domestic conference]
  • 親しみやすさと視線の読みとりやすさを兼ね備えたロボットの目のデザイン               
    圓谷恵美、小貫朋実、小林貴訓、久野義徳
    電子情報通信学会総合大会予稿集, 2011, [Domestic conference]
  • 自然な動作で人間の注視を獲得するロボット頭部動作の検討               
    小貫朋実、圓谷恵美、小林貴訓、久野義徳
    電子情報通信学会総合大会予稿集, 2011, [Domestic conference]
  • 視聴覚情報の融合による依頼者認識システムの開発               
    鈴木亮太、小林貴訓、久野義徳
    電子情報通信学会総合大会予稿集, 2011, [Domestic conference]
  • ミュージアムガイドロボットのための詳細な頭部ジェスチャ計測システムの開発               
    大山貴也、柴田高志、小林貴訓、久野義徳
    電子情報通信学会総合大会予稿集, 2011, [Domestic conference]
  • Situation-driven control of a robotic wheelchair to follow a caregiver               
    Yuki Kinpara, Elly Takano, Yoshinori Kobayashi, Yoshinori Kuno
    FCV2011, 2011, [International conference]
  • Mobile care robot accepting requests through nonverbal interaction               
    M. Gyoda, T. Tabata, Y. Kobayashi, Y. Kuno
    FCV2011 Proc., 2011
  • Assisted-care robot dealing with multiple requests in multi-party settings               
    Y. Kobayashi, M. Gyoda, T. Tabata, Y. Kuno, K. Yamazaki, M. Shibuya, Y. Seki
    HRI2011 Late Breaking Report, 2011, [International conference]
  • A Wheelchair Which Can Automatically Move Alongside a Caregiver               
    Yoshinori Kobayashi, Yuki Kinpara, Erii Takano, Yoshinori Kuno, Keiichi Yamazaki, Akiko Yamazaki
    HRI2011 Video Session, 2011, [International conference]
  • 高齢者を見守る介護ロボットのための自律移動システムの提案               
    行田将彦、小林貴訓、久野義徳
    電子情報通信学会総合大会(学生ポスターセッション), Mar. 2010
  • 利用者の目的地推定に基づく自律移動車椅子の提案               
    朱エイ、小林貴訓、久野義徳
    電子情報通信学会総合大会, Mar. 2010
    学生ポスターセッション
  • 人間行動の社会学的分析に基づく複数人環境での人間とロボットのインタラクション               
    久野義徳、小林貴訓、山崎晶子、島村徹也
    情報爆発時代に向けた新しいIT基盤技術の研究 平成21年度研究概要, Mar. 2010
  • Choosing answerers by observing gaze responses for museum guide robots               
    Yoshinori Kobayashi, Takashi Shibata, Yosuke Hoshi, Yoshinori Kuno, Mai Okada, Keiichi Yamazaki
    HRI2010(5th ACM/IEEE International Conference on Human-Robot Interaction), Mar. 2010
  • 聞き手の様子を見ながら作品の説明をするミュージアムガイドロボット               
    小林貴訓、柴田高志、星洋輔、鴇田憲、久野義徳
    画像センシングシンポジウム予稿集, 2010, [Domestic conference]
  • 状況に応じて形状表現の意味を理解する対話物体認識システム               
    森智史、小林貴訓、久野義徳
    HAIシンポジウム予稿集, 2010, [Domestic conference]
  • 周辺状況を考慮して介護者と協調移動するロボット車椅子               
    金原悠貴、高野恵利衣、小林貴訓、久野義徳
    画像センシングシンポジウム予稿集, 2010, [Domestic conference]
  • 介護者の意図と周辺状況の観察に基づくロボット車椅子               
    金原悠貴、高野恵利衣、小林貴訓、久野義徳
    情報処理学会第72回全国大会, 2010
  • 頭部動作の計測に基づき質問相手を選択するガイドロボット               
    柴田高志、星洋輔、鴇田憲、小林貴訓、久野義徳
    情報処理学会第72回全国大会, 2010
  • 移動しながら見回りする介護ロボット               
    石川直人、行田将彦、浅羽健太郎、小林貴訓、久野義徳
    情報処理学会第72回全国大会, 2010
  • 周辺状況を考慮して介護者に追随するロボット車椅子               
    高野恵利衣、金原悠貴、小林貴訓、久野義徳
    電子情報通信学会総合大会, 2010
  • People tracking using integrated sensors for human robot interaction               
    ICIT, 2010
  • Object detection for service robots using a hybrid autonomous/interactive approach               
    Dipankar Das, Yoshinori Kobayashi, Yoshinori Kuno
    First IEEE Workshop on Computer Vision for Humanoid Robots in Real Environments, Sep. 2009
  • Head tracking and gesture recognition in museum guide robots for multiparty settings               
    Yoshinori Kobayashi,Takashi Shibata,Yosuke Hoshi,Yoshinori Kuno,Mai Okada,Keiichi Yamazaki,Akiko Yamazaki
    ECSCW2009 (European Conference on Computer Supported Cooperative Work), Sep. 2009
  • Multiple object detection and localization using range and color images for service robots               
    Dipankar Das, Yoshinori Kobayashi, Yoshinori Kuno
    ICROS-SICE International Joint Conference 2009, Aug. 2009
  • 介護ロボットのための距離画像を用いた複数人からの依頼理解               
    全文星,小林貴訓,久野義徳
    電子情報通信学会情報・システムソサイエティ総合大会特別号, Mar. 2009
  • Spatial Relation Descriptions for Interactive Robot Vision               
    L. Cao, Y. Kobayashi, Y. Kuno
    電子情報通信学会情報・システムソサイエティ総合大会特別号, Mar. 2009
  • 複数鑑賞者に適応的な身体的行動を用いて解説をするミュージアムガイドロボット               
    柴田高志、鴇田憲、星洋輔、小林貴訓、久野義徳
    計測自動制御学会システムインテグレーション部門講演会論文集(SI2009), 2009
  • 移動介護ロボットのための頭部動作に基づく対象者の認識               
    石川直人,藤原直樹,全文星,小林貴訓,久野義徳
    電子情報通信学会2009年総合大会 情報・システム講演論文集, 2009
  • オントロジーに基づく対話物体認識               
    森智史, 小林貴訓, 久野義徳
    電子情報通信学会2009年総合大会 情報・システム講演論文集, 2009
  • オントロジーに基づくロボットビジョンの提案               
    小林貴訓,久野義徳,加地大介
    電子情報通信学会2009年総合大会 情報・システム講演論文集, 2009
  • 複数鑑賞者をガイドするロボットのための頭部ジェスチャ認識               
    柴田高志,小林貴訓,久野義徳
    電子情報通信学会2009年総合大会 情報・システム講演論文集, 2009
  • 介護者の状態の観察に基づいたロボット車椅子の制御               
    金原悠貴,小林貴訓,久野義徳
    電子情報通信学会2009年総合大会 情報・システム講演論文集, 2009
  • Object Detection and Localization in Clutter Range Images Using Edge Features               
    International Symposium on Visual Computing (ISVC2009), 2009
  • Efficient Hypothesis Generation through Sub-categorization for Multiple Object Detection               
    International Symposium on Visual Computing (ISVC2009), 2009
  • Multiple object category detection and localization using generative and discriminative models               
    IEICE Trans. Information and Systems, 2009
  • Robotic Wheelchair Based on Observations of People Using Integrated Sensors               
    International Conference on Intelligent RObots and Systems, 2009
  • Object recognition in service robots: Conducting verbal interaction on color and spatial relationship               
    ICCV Workshops (Human-Computer Interaction), 2009
  • Head Tracking and Gesture Recognition in Museum Guide Robots for Multiparty Settings               
    ECSCW2009 Poster, 2009
  • 人間とのコミュニケーションに関するビジョン技術               
    日本ロボット学会誌, 2009
  • Multiple Object Detection and Localization using Range and Color Images for Service Robots               
    Proc. ICCAS-SICE International Joint Conference, 2009
  • 複合センサを用いた人物の行動計測に基づく自律移動車椅子               
    画像センシングシンポジウム, 2009
  • Assisted-Care Robot Initiation Communication in Multiparty Settings               
    Computer-Human Interaction Extended Abstracts, 2009
  • Revealing Gauguin: Engaging Visitors in Robot Guide's Explanation in an Art Museum               
    Computer-Human Interaction, 2009
  • Collaborative Robotic Wheelchair Based on Visual and Laser Sensing               
    Y. Kobayashi, Y. Kinpara, T. Shibusawa, Y. Kuno
    Workshop on Frontiers of Computer Vision, 2009
  • 鑑賞行動を支援するロボット車椅子システム               
    渋澤朋央,小林貴訓,久野義徳
    総合大会講演論文集, Mar. 2008
  • ミュージアムガイドロボットへの遠隔対話指示               
    糟谷智樹,小林貴訓,久野義徳
    電子情報通信学会2008年総合大会 情報・システムソサイエティ総合大会特別号, Mar. 2008
  • 美術館における観客を引き込む解説ロボット               
    岡田真依,星洋輔,山崎敬一,山崎晶子,久野義徳,小林貴訓
    HAIシンポジウム, 2008
  • 3つのコミュニケーションモードを持つネットワークロボット               
    笛木雅人,糟谷智樹,星洋輔,星野豪,小林貴訓,久野義徳
    画像センシングシンポジウム, 2008
  • Interactively instructing a guide robot through a network               
    Y. Hoshi, Y. Kobayashi, T. Kasuya, M. Fueki, Y. Kuno
    International Conference on Control, Automation and Systems, 2008
  • Museum Guide Robot with Three Communication Modes               
    Y. Kobayashi, Y. Hoshi, G. Hoshino, T. Kasuya, M. Fueki and Y. Kuno
    International Conference on Intelligent RObots and Systems, 2008
  • Robotic Wheelchair for Museum Visit               
    Tomoo Shibusawa, Yoshinori Kobayashi , and Yoshinori Kuno
    Proc. SICE2008, 2008
  • Human robot interaction through simple expressions for object recognition               
    A. Mansur, K. Sakata, Y. Kobayashi, and Y. Kuno
    Proc. 17th IEEE RO-MAN, 2008
  • Incorporating Long-Term Observations of Human Actions for Stable 3D People Tracking               
    D. Sugimura, Y. Kobayashi, Y. Sato and A. Sugimoto
    Workshop on Motion and Video Computing, 2008
  • 行動履歴に基づいた環境属性の自動構築を伴う三次元人物追跡               
    杉村大輔,小林貴訓,佐藤洋一,杉本晃宏
    画像の認識・理解シンポジウム, 2007
  • 分散カメラとレーザ測域センサの統合によるエリア内人物追跡               
    小林貴訓,杉村大輔,関真規人,平澤宏祐,鈴木直彦,鹿毛裕史,佐藤洋一,杉本晃宏
    画像の認識・理解シンポジウム, 2007
    Poster presentation
  • 人物動線データ分析による逸脱行動人物の検出               
    鈴木直彦,平澤宏祐,田中健一,小林貴訓,佐藤洋一,藤野陽三
    コンピュータビジョンとイメージメディア研究会, 2007
  • Learning Motion Patterns and Anomaly Detection by Human Trajectory Analysis               
    N. Suzuki, K. Hirasawa, K. Tanaka, Y. Kobayashi, Y. Sato and Y. Fujino
    International Conference on Systems, Man and Cybernetics, 2007
  • 3D People Tracking with Adaptive Environmental Attributes Reflecting Behavior History
    杉村大輔,小林貴訓,佐藤洋一,杉本晃宏
    IPSJ SIG Computer Vision and Image Media (CVIM), 2006
  • 3D People Tracking Using a Particle Filter with Cascaded Classifiers
    小林貴訓,杉村大輔,平澤宏祐,鈴木直彦,鹿毛裕史,佐藤洋一,杉本晃宏
    Meeting on Image Recognition and Understanding (MIRU), 2006
  • Estimating Human Trajectories Using Sparsely Distributed Cameras
    小林貴訓,佐藤洋一,杉本晃宏
    Meeting on Image Recognition and Understanding (MIRU), 2006
    Poster presentation
  • Detecting People with Anomalous Behavior Using Hidden Markov Models
    鈴木直彦,平澤宏祐,田中健一,小林貴訓,佐藤洋一,藤野陽三
    IEICE Technical Committee on Pattern Recognition and Media Understanding (PRMU), 2006
  • 3D Head Tracking using the Particle Filter with Cascaded Classifiers               
    Y. Kobayashi, D. Sugimura, Y. Sato, K. Hirasawa, N. Suzuki, H. Kage and A. Sugimoto
    British Machine Vision Conference, 2006
  • Estimating Human Trajectories Using Cameras with Non-Overlapping Fields of View
    小林貴訓,佐藤洋一,杉本晃宏
    IPSJ SIG Computer Vision and Image Media (CVIM), 2005
  • Tool Integration Features in the EvoMan Software Component Repository
    田村直樹,川名康仁,山本康夫,小林貴訓
    IPSJ SIG Software Engineering, 2002
  • Real-Time Measurement of Multiple Fingertip Trajectories and Its Application to HCI
    岡兼司,小林貴訓,佐藤洋一,小池英樹
    IPSJ SIG Computer Vision and Image Media (CVIM), 2000
  • Fast Tracking of Hands and Fingertips in Infrared Images for Augmented Desk Interface               
    Y. Sato, Y. Kobayashi and H. Koike
    Automatic Face and Gesture Recognition, 2000
  • Interactive Textbook and Interactive Venn Diagram: Natural and Intuitive Interface on Augmented Desk System               
    H. Koike, Y. Sato, Y. Kobayashi, H. Tobita and M. Kobayashi
    Human Factors in Computing Systems, 2000
  • Realizing EnhancedDesk through Real-Time Fingertip Tracking Using Infrared Images
    小林貴訓,小池英樹,佐藤洋一
    Human Interface Symposium, Human Interface Society, Oct. 1999
  • A Real-Time Fingertip Recognition Interface Using Infrared Images for EnhancedDesk
    小林貴訓,佐藤洋一,小池英樹
    Workshop on Interactive Systems and Software (WISS), 1999
  • Towards Detecting the Inner Emotions of Multiple People
    K. Das, K. Otsu, A. Lam, Y. Kobayashi, and Y. Kuno
    [Domestic conference]
  • Object Region Extraction Based on Graph Cuts Assisted by Dialogue
    光野泰弘,小林貴訓,久野義徳
    2014 IEICE General Conference, [Domestic conference]
  • A Remote Communication System Supporting Care for People with Dementia
    松田成,久野義徳,小林貴訓
    2014 IEICE General Conference, [Domestic conference]
  • Proactive approach of making eye contact with the target human in multi-party settings               
    M. Hoque, D. Das, Y. Kobayashi, Y. Kuno and K. Deb
    Proc. Computer and Information Technology (ICCIT2013), [International conference]
  • Shape Recognition Based on an Ontology
    森智史, 福田悠人, 小林貴訓, 久野義徳, 加地大介
    Meeting on Image Recognition and Understanding (MIRU2013), [Domestic conference]
  • A Robotic Wheelchair Supporting Communication by Considering the Surrounding Situation
    新井雅也, 佐藤慶尚, 鈴木亮太, 小林貴訓, 久野義徳
    Meeting on Image Recognition and Understanding (MIRU2013), [Domestic conference]
  • A Communication Robot That Produces Expressive Gaze
    小貫朋実, 江連智香, 石野田貴文, 小林貴訓, 久野義徳
    Symposium on Sensing via Image Information (SSII2013), [Domestic conference]
  • Multiple robotic wheelchair system able to move with a companion using map information               
    Y. Sato, R. Suzuki, M. Arai, Y. Kobayashi, Y. Kuno, M. Fukushima, K. Yamazaki and A. Yamazaki
    Proc. International Conference on Human-Robot Interaction (HRI2014) Late-Breaking Report, [International conference]
  • Recognizing gaze pattern for human robot interaction               
    D. Das, M. G. Rashed, Y. Kobayashi and Y. Kuno
    Proc. International Conference on Human-Robot Interaction (HRI2014) Late-Breaking Report, [International conference]
  • Static and dynamic robot gaze expressions to communicating with humans               
    T. Onuki, T. Ezure, T. Ishinoda, Y. Kobayashi and Y. Kuno
    Proc. Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV2014), [International conference]
  • Observing Human's Face for Robot's Controlling his/her Attention               
    D. Das, Y. Kobayashi and Y. Kuno
    Proc. International Conference on Quality Control by Artificial Vision (QCAV2013), [International conference]
  • Discriminating Visitor Groups from Movement Trajectories for a Museum Guide Robot
    神田敦,小林貴訓,久野義徳
    2014 IEICE General Conference, [Domestic conference]
  • A Study of Robot Turning Motions Coordinated with Its Eyes for Guiding Human Gaze
    佐野要,小貫朋実,井田賢人,小林貴訓,久野義徳
    2014 IEICE General Conference, [Domestic conference]
  • An intelligent human-robot interaction framework to control the human attention
    M. M. Hoque, K. Deb, D. Das, Y. Kobayashi and Y. Kuno
    Proc. International Conference on Informatics, Electronics & Vision (ICIEV2013), [International conference]
  • Attracting Attention and Establishing a Communication Channel Based on the Level of Visual Focus of Attention               
    D. Das, Y. Kobayashi and Y. Kuno
    Proc. International Conference on Intelligent Robots and Systems (IROS2013), [International conference]
  • A Maneuverable Robotic Wheelchair Able to Move Adaptively with a Caregiver by Considering the Situation               
    Y. Sato, M. Arai, R. Suzuki, Y. Kobayashi, Y. Kuno, K. Yamazaki and A. Yamazaki
    Proc. International Symposium on Robot and Human Interactive Communication (Ro-Man2013), [International conference]
  • Robotic Wheelchair Easy to Move and Communicate with Companions               
    Y. Kobayashi, R. Suzuki, Y. Sato, M. Arai, Y. Kuno, A. Yamazaki and K. Yamazaki
    Proc. CHI2013 Extended Abstracts, [International conference]
  • An empirical robotic framework for interacting with multiple humans               
    M. Hoque, Q. Hossain, D. Das, Y. Kobayashi, Y. Kuno and K. Deb
    Electrical Information and Communication Technology (EICT2014), [International conference]
  • How to Move Towards Visitors: A Model for Museum Guide Robots to Initiate Conversation               
    M. Yousuf, Y. Kobayashi, Y. Kuno, A. Yamazaki and K. Yamazaki
    Proc. International Symposium on Robot and Human Interactive Communication (Ro-Man2013), [International conference]
  • Tracking Visitors with Sensor Poles for Robot's Museum Guide Tour               
    T. Oyama, E. Yoshida, Y. Kobayashi and Y. Kuno
    Proc. International Conference on Human System Interaction (HSI2013), [International conference]
  • Recognition of Request through Hand Gesture for Mobile Care Robots               
    T. Tabata, Y. Kobayashi and Y. Kuno
    Proc. Annual Conference of the IEEE Industrial Electronics Society (IECON2012), [International conference]
■ Teaching experience
  • Information and Computer Sciences III, Lecture
  • Synthetic Exercises on Information Engineering, Exercise
  • Introductory Seminar on Engineering, Exercise
  • Advanced Lectures on Visual Application Systems, Lecture
  • Information Technology Basics, Practices I, Exercise
  • Information Technology Basics I, Lecture
  • Synthetic Exercises on Information Engineering, Exercise
  • Introductory Seminar on Engineering, Exercise
  • Information Technology Basics, Practices I, Exercise
  • Information Technology Basics I, Lecture
  • Introductory Seminar on Engineering, Exercise
  • Synthetic Exercises on Information Engineering, Exercise
  • Applied Linear Algebra Exercises, Exercise
  • Information Technology Basics, Practices II, Exercise
  • Information Technology Basics, Practices I, Exercise
  • Synthetic Exercises on Information Engineering, Exercise
  • Applied Linear Algebra Exercises, Exercise
  • Information Technology Basics, Practices II, Exercise
  • Experiments on Engineering Basics, Laboratory work
  • Information Technology Basics, Practices I, Exercise
■ Affiliated academic society
  • IEEE
  • ACM
  • The Institute of Image Information and Television Engineers
  • The Institute of Electronics, Information and Communication Engineers
  • Information Processing Society of Japan
  • The Robotics Society of Japan
■ Works
  • テレビ埼玉「ウィークエンド930」
    Mar. 2012
    Date: 16 Mar. 2012. The robotic wheelchair we developed was introduced on the program.
  • TBS「朝ズバ」
    Jan. 2012
    Date: 23 Jan. 2012. The robotic wheelchair we developed was introduced on the program.
  • Collabo Saitama 2012
    Nov. 2011
    Date: 11 Nov. 2011. The robotic wheelchair we developed was presented at the exhibition.
  • 日本テレビ「世界一受けたい授業」
    Aug. 2011
    Date: 6 Aug. 2011. The robotic wheelchair we developed was introduced on the program.
  • Saitama University Regional Open Innovation Center
    久野義徳、小林貴訓
    Nov. 2010
  • A Robotic Wheelchair That Automatically Moves Alongside a Companion
    久野義徳、小林貴訓
    Sep. 2010
  • Service Robots Coexisting with Humans
    久野義徳、小林貴訓
    Jul. 2010
  • Remote shopping support system
    小林貴訓
    Presented the remote shopping support system at the workshop "Technology & Social Interaction".
  • Case Studies of Mobile Robots Collaborating with Humans
    小林貴訓
    Gave a lecture on case studies of mobile robots at the Study Group on Robot Technology Collaborating with Humans.
  • A Study of Methods for Indicating the Intended Path of an Autonomous Wheelchair
    金田理史
    A student supervised under the "Budding Scientists" development program presented this work at Science Conference 2023 as a representative of Saitama University.
  • How Robots That Collaborate with Humans Work and How to Use Them
    小林貴訓
    Gave a lecture and demonstration on mobile robots at the Study Group on Robot Technology Collaborating with Humans.
  • How Robots That Collaborate with Humans Work and How to Use Them
    小林貴訓
    Gave an invited lecture and demonstration on mobile robots at the Saitama Management Rationalization Association.
  • What Makes Robots Approachable?
    中江嘉奈
    A student supervised under the "Budding Scientists" development program presented this work at Science Conference 2022 as a representative of Saitama University.
  • Image Processing and Robot Programming
    小林貴訓, 鈴木亮太
    Held a hands-on course on image processing and robot programming at the WISE-P Science Experience Summer School.
  • How Robots That Collaborate with Humans Work and How to Use Them
    小林貴訓
    Gave an invited lecture and demonstration on mobile robots at a study group of the Saitama Federation of Small Business Associations.
  • Interactive Illumination
    小林貴訓
    Exhibited an interactive illumination display in collaboration with the Saitama University illumination circle "埼大イルミ".
  • The Future of Shopping and Entertainment
    小林貴訓
    Held a research exhibition and demonstration at the AEON Kita-Urawa store.
  • Interactive Illumination
    小林貴訓
    Exhibited an interactive illumination display in collaboration with the Saitama University illumination circle "埼大イルミ".
  • A Future Shopping Support System
    小林貴訓
    Held a research exhibition and demonstration at the AEON Kita-Urawa store.
  • The Future of Shopping and Entertainment
    小林貴訓
    Held a research exhibition and demonstration at the AEON Kita-Urawa store.
  • What Is a Robot?
    小林貴訓
    Gave a guest lecture at Kumagaya Girls' High School in Saitama Prefecture.
  • A Newly Created Shopping Cart
    小林貴訓
    The voice-operated shopping cart we developed was introduced on a TV program.
  • A Robotic Shopping Cart Supporting the Elderly
    Kobayashi Laboratory
    Exhibited the robotic shopping cart at the Sai-no-Kuni Business Arena.
  • Interactive Illumination
    Kobayashi Laboratory
    Exhibited an interactive illumination display in collaboration with the Saitama University illumination circle "埼大イルミ".
  • A Penlight That Connects Fans with Idols
    Kobayashi Laboratory
    Exhibited the penlight system at the Sai-no-Kuni Business Arena.
  • Thinking About the Future of Robots
    小林貴訓
    Gave a lecture at a programming-instructor training course for Saitama citizens run by the NPO 地域人ネットワーク.
  • Let's Make Friends with Robots
    小林貴訓
    Served as a lecturer at the "Workshop for Creating the Future" co-hosted by 田地域ケアプラザ and NTT TechnoCross.
  • How Do Digital Cameras Find Faces?
    小林貴訓
    Gave a lecture at a programming-instructor training course for Saitama citizens run by the NPO 地域人ネットワーク.
  • How Do Digital Cameras Find Faces?
    小林貴訓
    Gave a guest lecture at Kumagaya High School in Saitama Prefecture.
  • Robotic Shopping Cart
    Kobayashi Laboratory
    Exhibited and demonstrated our robotic shopping cart for supporting elderly shoppers at the Yokosuka × Smart Mobility Challenge.
  • Interactive Illumination
    Kobayashi Laboratory
    Exhibited an interactive illumination display in collaboration with the Saitama University illumination circle "埼大イルミ".
  • Jewel☆Neige Special Mini Live
    Held an experimental mini live concert to test a system under development.
  • テレビ東京「ワールドビジネスサテライト」
    The system we developed was introduced on the program.
  • テレビ神奈川「NEWS930α」
    A class on robots was featured on the program.
  • NHK WORLD「great gear」
    The system we developed was introduced on the program.
  • テレビ東京「ワールドビジネスサテライト」
    The system we developed was introduced on the program.
  • Practical Image Processing for Robots
    小林貴訓
    Gave a lecture on practical image processing techniques usable in robots.
  • Development of a Robotization Unit for Powered Wheelchairs
    小林貴訓
    Exhibited the robotic shopping cart.
  • Mobile Robot Technology Cooperating with Humans
    小林貴訓
  • Development of a Robotization Unit for Powered Wheelchairs
    小林貴訓
    Presented and exhibited our research results at Japan's largest industry-academia matching event, hosted by JST.
  • Sai-no-Kuni Business Arena 2016
    Exhibited the robotic shopping cart.
  • Sai-no-Kuni Business Arena 2016
    Exhibited the communication support robot and the robotic wheelchair.
  • International Robot Exhibition
    Exhibited the communication support robot and the robotic wheelchair.
  • Care Prevention and Dementia Prevention Experience Fair 2015 in Urayasu
    System demonstration
  • Designing the Future Society through Informatics
    System demonstration
  • Sai-no-Kuni Business Arena 2015
    System demonstration
  • Sai-no-Kuni Business Arena 2014
    System demonstration
  • International Technical Exhibition on Image Technology and Equipment
    小林貴訓
    The robotic wheelchair we developed was presented at the exhibition.
  • Collabo Saitama 2013
    小林貴訓
    The robotic wheelchair we developed was presented at the exhibition.
  • Designing the Future Society through Informatics
    小林貴訓
    The robotic wheelchair we developed was presented at the exhibition.
■ Research projects
  • Practical Foundation of Law and Medical Care: Ethnomethodology of Embodiment and Social Norm
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A), 01 Apr. 2024 - 31 Mar. 2028
    Saitama University
    Grant amount(Total):47970000, Direct funding:36900000, Indirect funding:11070000
    Grant number:24H00151
  • International comparative studies of medical care and daily activities using remote technology among super-aging societies.               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Fund for the Promotion of Joint International Research (International Collaborative Research), 08 Sep. 2023 - 31 Mar. 2026
    Saitama University
    Grant amount(Total):21060000, Direct funding:16200000, Indirect funding:4860000
    Grant number:23KK0032
  • Sociological and Engineering Research on Meeting Spaces That Achieve Both Openness and Confidentiality in Distributed Environments
    30 Jun. 2022 - 31 Mar. 2024
    Grant amount(Total):6500000, Direct funding:5000000, Indirect funding:1500000
    Grant number:22K18548
  • Sociological Robotics for Human-Robot Symbiosis
    01 Apr. 2020 - 31 Mar. 2023
    Grant amount(Total):17810000, Direct funding:13700000, Indirect funding:4110000
    Grant number:20H01585
  • Constructing ethno-medialogy: Interdisciplinary investigations of correlations and transformations of narrative, embodiment and imagery               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Challenging Research (Exploratory), 28 Jun. 2019 - 31 Mar. 2023
    Saitama University
    Grant amount(Total):6370000, Direct funding:4900000, Indirect funding:1470000
    Grant number:19K21718
  • Techno-Sociological Research on Systems for Supporting the Daily Lives and Co-presence of the Elderly and Migrants with Their Hometowns               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A), 01 Apr. 2019 - 31 Mar. 2023
    Saitama University
    Grant amount(Total):45370000, Direct funding:34900000, Indirect funding:10470000
    Grant number:19H00605
  • Developing a multiculturally adoptable embodied technological system based on sociological analysis of human behaviors in the multicultural social context               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Fund for the Promotion of Joint International Research (Fostering Joint International Research (B)), 09 Oct. 2018 - 31 Mar. 2023
    Saitama University
    Grant amount(Total):17940000, Direct funding:13800000, Indirect funding:4140000
    Grant number:18KK0053
  • Expression and Recognition of Emotion through Blinking between Humans and Robots               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Challenging Research (Exploratory), 30 Jun. 2017 - 31 Mar. 2020
    KUNO Yoshinori, Saitama University
    Grant amount(Total):6500000, Direct funding:5000000, Indirect funding:1500000
    Eyeblinks happen almost unconsciously and have recently been found to be related to the communication process. In particular, eyeblinks are synchronized between listeners and speakers in face-to-face conversation. Thus, we studied the use of eyeblinks in robots to help them smoothly communicate with humans. Indeed, we have observed the same synchronous eyeblinks between human participants and robots when the robots talk to them. We have developed a robot that can recognize human eyeblinks from a camera and blink synchronously with human participants that talk to the robot. We have performed experiments using human participants to examine how such blinking may affect communication. However, we have not yet obtained any concrete results. This is left for future work.
    Grant number:17K18850
  • Service Robots Based on an Integrated Ontology of Verbal and Nonverbal Behaviors               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A), 01 Apr. 2014 - 31 Mar. 2019
    KUNO Yoshinori, Saitama University
    Grant amount(Total):41730000, Direct funding:32100000, Indirect funding:9630000
    In this research, we have combined and extended our two previous interdisciplinary studies with philosophy and sociology. The former is to recognize objects through natural language interaction with users based on an ontology, and the latter is to investigate interaction through nonverbal behaviors. We have developed a robot system that moves around a care facility and finds elderly people calling it by gestures. We do not need to specify any gesture patterns in advance but the robot can still recognize natural gestures intended for calling. It can also recognize objects indicated by users. Even in cases where the robot cannot automatically recognize objects at first, it can ask the users to verbally provide information about the objects to complete its tasks. Using our ontology, the robot can understand complex verbal expressions made by humans, where they could indicate the same things by different words or vice versa. We demonstrated the robot in an actual care facility.
    Grant number:26240038
  • Robotic System for Encouraging Elderly Dementia Patients to Communicate with Emotional Estimation and Monitoring Functions               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Challenging Exploratory Research, 01 Apr. 2014 - 31 Mar. 2017
    KUNO Yoshinori, Saitama University
    Grant amount(Total):3510000, Direct funding:2700000, Indirect funding:810000
    We have worked toward realizing a system that encourages elderly dementia patients living alone to communicate actively. We developed a video communication system that enables the elderly to talk with their family members just by pushing a simple button. The system is also equipped with monitoring sensors that can send alarm email messages to family members if it senses a lack of activity in the elderly person's room. In addition, a small robot placed at the elderly person's side can talk with him/her whenever family members are unavailable. We also developed a method for recognizing emotions from facial expressions and estimating heart rate from facial video images. Experiments gave promising results indicating that the elderly person's emotional status can be estimated from the information obtained by our method.
    Grant number:26540131
  • Tracking and recognition of people in groups based on observations from multiple sensors               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (C), 01 Apr. 2014 - 31 Mar. 2017
    KOBAYASHI Yoshinori; KUNO Yoshinori, Saitama University
    Grant amount(Total):4810000, Direct funding:3700000, Indirect funding:1110000
    We established a multiple-people tracking technique that enables real-time tracking of the body positions and orientations of multiple users with LiDAR sensors. Behaviors of people in the same group are learned in advance, and groups can then be recognized by the system from the trajectories of tracked people. We conducted an experiment in an actual art museum to evaluate the effectiveness of our method. As a result, the distributions of residence time for each area in front of the paintings could be visualized, and people's trajectory patterns could be classified into four types. By using the sensors in a smartphone, our LiDAR tracking system can also identify the person holding that smartphone.
    Grant number:26330186
  • Integrated Object Recognition for Service Robots Based on Ontology with the Interactive Support from Humans               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B), 01 Apr. 2011 - 31 Mar. 2015
    KUNO Yoshinori; KOBAYASHI Yoshinori; KACHI Daisuke, Saitama University
    Grant amount(Total):15990000, Direct funding:12300000, Indirect funding:3690000
    We have been developing a helper robot that is able to fetch objects requested by users. Such a robot must recognize the desired object(s) in order to carry out its tasks. However, it is difficult for a system to recognize objects autonomously without fail under various real-world conditions. To address this problem, we have proposed an integrated object recognition system. The system first attempts autonomous object recognition. If it fails, it switches to an interactive object recognition mode where the system asks the user to verbally provide information about the object that it is unable to detect autonomously. However, natural language descriptions from humans can be complex. Such descriptions may not even have a one-to-one correspondence to the physical attributes of objects. We have constructed an ontology for representing such complex relationships between human descriptions and physical attributes and developed a method to recognize objects based on this ontology.
    Grant number:23300065
  • Designing of Embodied Technologies based on multicultural ethnography of verbal and visual interaction               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A), 01 Apr. 2011 - 31 Mar. 2015
    YAMAZAKI Keiichi; YAMAZAKI Akiko; KUNO Yoshinori; IKEDA Keiko; IMAI Michita; ONO Tetsuo; IGARASHI Motoko; KASHIMURA Shiro; KOBAYASHI Ako; SEKI Yukiko; MORIMOTO Ikuyo; BURDELSKI Matthew; KAWASHIMA Michie; NAKANISHI Hideyuki, Saitama University
    Grant amount(Total):34060000, Direct funding:26200000, Indirect funding:7860000
    Our research project was conducted collaboratively by sociologists and robot engineers. We have investigated how humans interact with each other through verbal and non-verbal actions. Based on these findings, we have designed embodied technologies in a robot that facilitate human interaction. In order to develop such technologies in cross-cultural settings, we conducted video ethnographies of interactions among visitors at various museums in different countries. We then analyzed these video recorded data by applying ethnomethodology and conversation analysis. The project team has also conducted several experiments with robots in cross-cultural settings. We have done comparative analysis between Japanese speaker groups and English speaker groups. During the research period, we also conducted another experiment using a mobile avatar robot, TEROOS. We conducted experiments in order to establish remote collaborative communication between people in Hawaii and in Japan.
    Grant number:23252001
  • People tracking and its applications by integrating multimodal observations               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B), 01 Apr. 2012 - 31 Mar. 2014
    KOBAYASHI Yoshinori, Saitama University
    Grant amount(Total):4420000, Direct funding:3400000, Indirect funding:1020000
    We developed a new people tracking system consisting of simple devices, namely a laser range finder and an omni-directional camera attached to a pole. By simply placing several such sensor poles in the environment, we can track the locations and orientations of multiple people accurately and robustly. In our experiments, the system successfully tracked people in the sensing area continuously, even when they were occasionally occluded. We applied this system to our museum guide robot and robotic wheelchair. Because the system tracks not only people's positions but also their body orientations and gaze directions over a large environment, our robots can provide appropriate services by considering users' behavior in detail.
    Grant number:24700157
  • Robot Eyes Suitable for Communication: Integrated Design of Appearance and Function               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Challenging Exploratory Research, 2011 - 2013
    KUNO Yoshinori; KOBAYASHI Yoshinori; KODAMA Sachiko; YAMAZAKI Keiichi; YAMAZAKI Akiko, Saitama University
    Grant amount(Total):3640000, Direct funding:2800000, Indirect funding:840000
    Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication, such as through establishing eye contact and joint attention. The eyes of service robots should therefore also perform both of these functions. Moreover, they should be friendly in appearance so that humans may feel comfortable with the robots. In this research, we first developed a robot face with rear-projected eyes for changing their appearance while simultaneously realizing the showing of gaze by incorporating stereo cameras. Then, we examined which shape of robot eyes is most suitable for gaze reading while giving the friendliest impression, through carrying out experiments where we altered the shape and iris size of robot eyes. Finally, we investigated how robots should move their eyes and head to give natural and friendly impressions.
    Grant number:23650094
  • Techno-Sociological Human Interface Study on Interaction of Museum Experience and Supporting Museum Experience
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B), 2009 - 2012
    YAMAZAKI Akiko; KUNO Yoshinori; KOBAYASHI Yoshinori; IKEDA Keiko; ONO Tetsuo; YAMAZAKI Keiichi, Tokyo University of Technology
    Grant amount(Total):18850000, Direct funding:14500000, Indirect funding:4350000
    In this study, we videotaped interactions between guides and multiple visitors at various 'Nikkei' museums around the world and analyzed them using conversation analysis and interaction analysis based on ethnomethodology. Drawing on these findings, we support visitors' museum experiences by developing museum guide robots.
    Grant number:21300316
  • Human-Robot Interaction in Multi-Party Settings Based on Sociological Analysis of Human Behavior
    2009 - 2010
    Grant amount(Total):6100000, Direct funding:6100000
    Grant number:21013009
  • Object Recognition Integrating Autonomous and Interactive Methods Based on the Analysis of Human Expressions of Object Attributes and Spatial Relationships               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (B), 2007 - 2010
    KUNO Yoshinori; KOBAYASHI Yoshinori; YAMAZAKI Keiichi, Saitama University
    Grant amount(Total):17550000, Direct funding:13500000, Indirect funding:4050000
    Service robots need to be able to recognize objects located in complex environments. Although there has been recent progress in this area, it remains difficult for autonomous vision systems to recognize objects in natural conditions. In this research, we have proposed an interactive object recognition system. In this system, the robot asks the user to verbally provide information about an object that it cannot detect. In particular, it asks the user questions regarding color and spatial relationship between objects depending on the situation. Experimental results confirm the usefulness and efficiency of our interaction system.
    Grant number:19300055
  • Gaze tracking based on spatiotemporal integration of classifiers and recognition of relationship between people               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Young Scientists (B), 2008 - 2009
    KOBAYASHI Yoshinori, Saitama University
    Grant amount(Total):4290000, Direct funding:3300000, Indirect funding:990000
    This research aims to establish people tracking and behavior sensing techniques by integrating omni-directional cameras and laser range sensors. An omni-directional camera is set on top of a laser range sensor and placed at shoulder level so that the sensors can observe people's upper bodies. Using this integrated sensor, our proposed technique can track people's positions and the orientations of their bodies and heads. Although an omni-directional camera with a mirror has a wide field of view, the resolution of the captured image is very low; by incorporating the information captured by the laser range sensor, our method can still track people's gaze direction even when the head is observed at low resolution. We applied the developed method to a museum guide robot that explains exhibits to visitors and confirmed the method's effectiveness.
    Grant number:20700152
  • Study of human support system based on sociological analysis of interactions in human care               
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Grant-in-Aid for Scientific Research (A), 2007 - 2009
    YAMAZAKI Keiichi; KUNO Yoshinori; KUZUOKA Hideaki; YAMADA Youko; IGARASHI Motoko; KOBAYASHI Ako; YAMAZAKI Akiko; IDA Yasuko; ITOU Hiroaki; WATANUKI Keiichi; YUKIOKA Tetsuo; BURDELSKI Matthew, Saitama University
    Grant amount(Total):29380000, Direct funding:22600000, Indirect funding:6780000
    This study aims to develop various human support systems for human care facilities such as senior homes, nursery schools, and appreciation support areas in museums through utilizing video data recorded at such facilities and analyzed through the methods of ethnomethodology, an approach derived from sociology. The results enabled us to develop a wheelchair robot based on research conducted at a senior home, and a museum guide robot based on research conducted at an appreciation support area in a museum.
    Grant number:19203025
  • Robots' Understanding of Requests in Care Settings through Mutual Reference of Anticipatory Actions
    2007 - 2008
    Grant amount(Total):6800000, Direct funding:6800000
    Grant number:19024013
  • A Robotic Wheelchair That Lets Its User Appear Independent
    2006 - 2008
    Grant amount(Total):3200000, Direct funding:3200000
    Grant number:18650043
  • Development of a Wheelchair-Type Mobile Robot System Based on the Analysis of Group Communication
  • Life Support Robot Project
  • A Multiple Robotic Wheelchair System Supporting Group Communication
■ Social Contribution Activities
  • WISE-P Summer Seminar
    lecturer, demonstrator
    24 Aug. 2025
  • WISE-P Lab Visit
    lecturer, demonstrator
    20 Mar. 2025
  • Study Group on Mobile Robots Collaborating with Humans
    presenter, planner, organizing member, report writing
    28 Feb. 2025
  • JST Sakura High School Program
    presenter, organizing member, demonstrator
    12 Nov. 2024