Interactive Vision Page


Interactive Vision Group
[poster]
(in Japanese)

INTRODUCTION

Interactive_pict1.gif
We aim to realize a service in which a nursing-care robot takes food out of a refrigerator for elderly and disabled people. To this end, we automatically recognize the names and positions of the foods in the refrigerator.



RESEARCH CONTENT

[1] Recognition of Hidden Objects for Service Robot
fig1.png
We detect the position of the food a user needs in the refrigerator. We register features such as the color and shape of each object in advance, and then detect the target in the refrigerator based on those features. We aim at detection that remains robust even when the lighting conditions change or part of the food is hidden. We also recommend suitable foods to the user by recognizing the state of fruits, vegetables, and so on. Moreover, we recognize foods more precisely by using range information from stereo cameras.

Object Recognition
fig2.png
In the refrigerator, one food is often hidden behind others. We are developing a system that can recognize a food from only a part of it.
inpei.png
TREE RELATION
tree.png relation.png
Our system can recognize foods from any angle by describing the various appearances of foods that share similar features, together with a description of the positional relations among the features of each object.
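A toy version of such a relational check is sketched below. It assumes a registered model that stores the expected offset between pairs of named feature points; the feature names, offsets, and tolerance are all hypothetical, and the lab's actual tree/relation representation is certainly richer.

```python
# Toy sketch of verifying spatial relations among detected features.
# The model maps a pair of feature names to the expected (dx, dy)
# offset from the first to the second; all values are illustrative.
import math

MODEL = {("cap", "label"): (0, 40), ("label", "base"): (0, 50)}

def relations_consistent(detections, model=MODEL, tol=10.0):
    """Check that pairwise offsets between detected features match the model.

    Pairs with a missing (occluded) feature are simply skipped, so a
    partially hidden object can still be accepted from its visible part.
    """
    for (a, b), (dx, dy) in model.items():
        if a in detections and b in detections:
            ax, ay = detections[a]
            bx, by = detections[b]
            if math.hypot((bx - ax) - dx, (by - ay) - dy) > tol:
                return False
    return True

# "base" is hidden by another food, but the visible pair still fits.
seen = {"cap": (100, 20), "label": (100, 62)}
ok = relations_consistent(seen)
```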


[2] Refrigerator Monitor System
onnsei.png
Our refrigerator monitoring system for a service robot manages the foods and interacts with the user. The system memorizes the position and the name of each food and manages the stock in the refrigerator. When a user puts a food into the refrigerator, our system registers the information of the food by background difference. Conventional background difference often fails to extract the new item because of reflections on the bottom face of the refrigerator. We identify the reflection areas so that the difference can be extracted successfully.
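The registration step can be sketched as a plain background difference: subtract a stored image of the empty shelf from the current frame and take the bounding box of the changed pixels. The threshold and image sizes below are illustrative, and this minimal version deliberately omits the reflection handling discussed next.

```python
# Minimal sketch of registering a newly inserted item by background
# difference (grayscale images; the threshold value is illustrative).
import numpy as np

def register_new_item(background, current, threshold=30):
    """Return the bounding box (y0, y1, x0, x1) of the changed region."""
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold
    if not mask.any():
        return None  # nothing was added
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

background = np.full((120, 160), 50, dtype=np.uint8)
current = background.copy()
current[40:80, 60:100] = 200   # a food placed on the shelf
box = register_new_item(background, current)
# box → (40, 80, 60, 100)
```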

uturikomi.gif
Identification of the reflection areas
Our system identifies the reflection areas based on cues such as specular reflections and transmitted light. In addition, we correct the information needed to recognize an object through spoken interaction with the user, using voice recognition technology.
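One simple cue for specular reflections is that they appear as bright, washed-out (low-saturation) pixels. The sketch below flags such pixels in an HSV image; the thresholds are assumptions, and the transmitted-light cue mentioned above is not modelled here. Pixels in this mask would then be excluded from the background difference.

```python
# Illustrative sketch of flagging likely specular reflections as
# bright, low-saturation pixels; v_min and s_max are assumed values.
import numpy as np

def reflection_mask(hsv, v_min=230, s_max=40):
    """Mark pixels that look like specular highlights on a glossy shelf."""
    s, v = hsv[..., 1], hsv[..., 2]
    return (v >= v_min) & (s <= s_max)

hsv = np.zeros((4, 4, 3), dtype=np.uint8)
hsv[..., 2] = 100                 # moderate brightness everywhere
hsv[1, 1] = (0, 10, 250)          # one bright, washed-out highlight
mask = reflection_mask(hsv)
```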





MEMBERS

    B5 Keisuke Uema ( uema [at] i.ci.ritsumei.ac.jp )
    B4 Subaru Nishida ( nishida [at] i.ci.ritsumei.ac.jp )
    B4 Yuma Hirai ( yhirai [at] i.ci.ritsumei.ac.jp )
    B4 Hiroya Fukuhara ( fukuhara [at] i.ci.ritsumei.ac.jp )
    B4 Tatsuya Yatsuduka ( yatsuduka [at] i.ci.ritsumei.ac.jp )