Shogo Matsuno

Department of Informatics, Associate Professor
Cluster I (Informatics and Computer Engineering), Associate Professor

Degree

  • Ph.D. (Engineering), The University of Electro-Communications

Research Keyword

  • Web intelligence
  • Biomedical Engineering
  • Eye gaze input interface

Field Of Study

  • Informatics, Web and service informatics
  • Informatics, Sensitivity (kansei) informatics
  • Informatics, Human interfaces and interactions

Career

  • Mar. 2024 - Present
    The University of Electro-Communications, Graduate School of Informatics and Engineering, School of Informatics and Engineering, Associate Professor
  • Apr. 2018 - Present
    Tokyo Denki University, School of System Design and Technology, Researcher
  • Apr. 2021 - Feb. 2024
    Gunma University, Faculty of Informatics, Assistant Professor
  • Apr. 2018 - Mar. 2021
    Hottolink Inc., R&D, Chief Researcher

Educational Background

  • Apr. 2014 - Sep. 2017
    The University of Electro-Communications, Graduate School of Informatics and Engineering, Department of Informatics

Member History

  • Apr. 2023 - Present
    Committee Member (Category 1), Technical Committee on Measurement, The Institute of Electrical Engineers of Japan (IEEJ), Society
  • Apr. 2022 - Present
    Secretary, Information Processing Society of Japan, Society

Award

  • Feb. 2021
    IEICE Technical Committee on Natural Language Understanding and Models of Communication (NLC)
    Best Paper Award, 榊剛史;松野省吾;檜野安弘
  • Sep. 2014
    IPSJ
    FIT Funai Best Paper Award, Shogo Matsuno;Naoaki Itakura;Minoru Ohyama;Shoichi Ohi;Kiyohiko Abe

Paper

  • Performance Improvement of 3D-CNN for Blink Types Classification by Data Augmentation
    Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
    IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan (IEE Japan), 144, 4, 328-329, 01 Apr. 2024
    Scientific journal
  • Classification of involuntary eye movements for eye gaze input interface
    MATSUNO Shogo
    Proceedings of the Annual Conference of JSAI, The Japanese Society for Artificial Intelligence, JSAI2024, 4Xin226-4Xin226, 2024, Gaze input interfaces operate computers by capturing voluntary eye movements and gazing as input signals. We are investigating methods to measure eye movement and blinking to develop a gaze input interface that enables various expressions by combining involuntary physiological responses with conventional voluntary operations. This paper proposes a method to automatically discriminate between voluntary and involuntary eye blinks measured in parallel with specific voluntary eye movements, and to provide different feedback for each. An experimental evaluation shows that the proposed method discriminates voluntary eye movements with an accuracy of approximately 85%.
    Japanese
  • Detailed Analysis of Blink Types Classification Using a 3D Convolutional Neural Network
    Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
    IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan (IEE Japan), 143, 9, 971-978, 01 Sep. 2023
    Scientific journal
  • Discovery of Contrast Itemset with Statistical Background Between Two Continuous Variables
    Kaoru Shimada; Shogo Matsuno; Shota Saito
    Big Data Analytics and Knowledge Discovery, Springer Nature Switzerland, 14148 LNCS, 114-119, 10 Aug. 2023, We previously defined ItemSB as an extension of the concept of frequent itemsets and a new interpretation of the association rule expression, which has statistical properties in the background. We also proposed a method for its discovery by applying an evolutionary computation called GNMiner. ItemSB has the potential to become a new baseline method for data analysis that bridges the gap between conventional data analysis using frequent itemsets and statistical analyses. In this study, we examine the statistical properties of ItemSB, focusing on the setting between two continuous variables, including their correlation coefficients, and how to apply ItemSB to data analysis. As an extension of the discovery method, we define ItemSB that focuses on the existence of differences between two datasets as contrast ItemSB. We further report the results of evaluation experiments conducted on the properties of ItemSB from the perspective of reproducibility and reliability using contrast ItemSB.
    In book
  • Construction of Evaluation Datasets for Trend Forecasting Studies
    Shogo Matsuno; Sakae Mizuki; Takeshi Sakaki
    Proceedings of the International AAAI Conference on Web and Social Media, Association for the Advancement of Artificial Intelligence (AAAI), 17, 1041-1051, 02 Jun. 2023, In this study, we discuss issues in the traditional evaluation norms of trend forecasts, outline a suitable evaluation method, propose an evaluation dataset construction procedure, and publish Trend Dataset: the dataset we have created. As trend predictions often yield economic benefits, trend forecasting studies have been widely conducted. However, a consistent and systematic evaluation protocol has yet to be adopted. We consider that the desired evaluation method would address the performance of predicting which entity will trend, when a trend occurs, and how much it will trend based on a reliable indicator of the general public's recognition as a gold standard. Accordingly, we propose a dataset construction method that includes annotations for trending status (trending or non-trending), degree of trending (how well it is recognized), and the trend period corresponding to a surge in recognition rate. The proposed method uses questionnaire-based recognition rates interpolated using Internet search volume, enabling trend period annotation on a weekly timescale. The main novelty is that we survey when the respondents recognize the entities that are highly likely to have trended and those that haven't. This procedure enables a balanced collection of both trending and non-trending entities. We constructed the dataset and verified its quality. We confirmed that the interests of entities estimated using Wikipedia information enables the efficient collection of trending entities a priori. We also confirmed that the Internet search volume agrees with public recognition rate among trending entities.
    Scientific journal
  • A Gaze Position Estimation System Based on Multiple Cameras for a Horizontal Workbench Display
    長野 真大; 石田 和貴; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 周歩; 山田 哲男; 杉 正夫
    Proceedings of the JSPE Semestrial Meeting, Japan Society for Precision Engineering, 2023S, 34-35, 01 Mar. 2023, As one form of support for assembly work, the authors have proposed a horizontal workbench display. A previous study proposed a gaze position estimation system for the horizontal workbench display, but differences in accuracy depending on the gazed location and the lack of support for head movement remained as issues. This paper proposes a gaze position estimation system using multiple cameras. To address these issues, gaze position is estimated from face images captured from multiple angles and from binary images that indicate head position.
    Japanese
  • Work Movement Visualization and Worker Estimation Methods Using Motion Capture and Machine Learning (Special Issue on the 2022 Spring Research Presentation Conference)
    川根 龍人; 伊集院 大将; 杉 正夫; 中嶋 良介; 岡本 一志; 仲田 知弘; 松野 省吾; 山田 哲男
    Journal of the Society of Plant Engineers Japan, The Society of Plant Engineers Japan, 34, 4, 111-121, Feb. 2023
    Japanese
  • Gaze direction classification using vision transformer.
    Shogo Matsuno; Daiki Niikura; Kiyohiko Abe
    IIAI-AAI-Winter, 193-198, 2023
    International conference proceedings
  • Detection of voluntary eye movement for analysis about eye gaze behaviour in virtual communication.
    Shogo Matsuno
    HCI (43), 1832 CCIS, 273-279, 2023, In this study, we aim to realize smoother communication between avatars in virtual space and discuss an eye-gaze interaction method for avatar communication. For this purpose, a detection algorithm for specific gaze movements is necessary in order to measure characteristic eye movements, blinks, and pupil movements. We developed methods for counting these characteristic movements using an HMD built-in eye tracking system. Most input modalities used in current virtual reality and augmented reality are hand gestures, head tracking, and voice input, despite the head-mounted form factor. Therefore, in order to use eye expressions as a hands-free input modality, we consider an eye gaze input interface that does not depend on the measurement accuracy of the measurement device. Previous eye gaze interfaces suffer from the so-called "Midas touch" problem, a trade-off between input speed and input errors. Using the method developed so far, which employs characteristic eye movements and voluntary blinks as an input channel, we aim to realize an input method that does not hinder the acquisition of other meta information such as eye gestures. Moreover, based on involuntary characteristic eye movements unconsciously expressed by the user, such as gaze shifts, we discuss a system that enables "expression tactics" in virtual space by providing natural feedback of the avatar's emotional expressions and movement patterns. As a first step, we report the results of eyeball movements measured face-to-face through experiments, in order to extract features of gaze and blinking.
    International conference proceedings
  • ItemSB: Itemsets with Statistically Distinctive Backgrounds Discovered by Evolutionary Method
    Kaoru Shimada; Takaaki Arahira; Shogo Matsuno
    International Journal of Semantic Computing, 16, 3, 357-378, 01 Sep. 2022, In this paper, we propose a method for discovering combinations of attributes (i.e. itemsets) against a background of statistical characteristics without obtaining frequent itemsets. The method considers a database with numerous attributes and can directly find a combination of highly correlated attributes from small populations in two consecutive variables of interest even from an incomplete database. As the proposed method determines local patterns in large-scale data, it may be used as a basis for large-scale data analysis. Evolutionary computations characterized by a network structure and a strategy to pool solutions are used throughout generations. Moreover, association rules are used to generalize the analysis method as itemsets with statistically distinctive backgrounds (ItemSBs). The class-association rules used for classification constitute a discovery method of attribute combinations, which are characteristic when the ratio of class attributes is obtained. The proposed method is an extension to statistical bivariate analysis. In addition, we determine contrast ItemSBs that are statistically different between two subgroups of data while satisfying the same conditions. Experimental results show the characteristics and effectiveness of the proposed method.
    Scientific journal
  • Evolutionary operation setting for outcome accumulation type evolutionary rule discovery method
    Shogo Matsuno; Kaoru Shimada
    GECCO 2022 Companion - Proceedings of the 2022 Genetic and Evolutionary Computation Conference, 451-454, 09 Jul. 2022, Association rule analysis has been widely employed as a basic technique for data mining. Extensive research has also been conducted on applying evolutionary computing techniques to the field of data mining. This study presents a method to evaluate the settings of evolutionary operations in an evolutionary rule discovery method, which is characterized by solving the overall problem through the acquisition and accumulation of small results. Since the purpose of population evolution differs from that of general evolutionary computation methods that aim to discover elite individuals, we examined the difference in how settings during evolution are conceived, and evaluated the evolutionary computation by visualizing the progress and efficiency of problem solving. The rule discovery method (GNMiner) is characterized by reflecting acquired information in evolutionary operations; this study determines the relationship between the settings of evolutionary operations, the progress of each task execution stage, and the achievement of the final result. This study obtains knowledge on how to set up evolutionary operations for efficient rule-set discovery by introducing an index to visualize the efficiency of outcome accumulation. This suggests the possibility of dynamic evolutionary operation settings in outcome-accumulation-type evolutionary computation in future studies.
    International conference proceedings
  • Estimation System of Worker's Eye Gaze for Display-Mounted Workbench (Special Issue on the 2021 Spring Research Presentation Conference)
    山田 孟; 杉 正夫; 長野 真大; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 哲男
    Journal of the Society of Plant Engineers Japan, The Society of Plant Engineers Japan, 34, 1, 8-14, Apr. 2022, Peer-reviewed
    Scientific journal, Japanese
  • An Evaluation of Large-scale Information Network Embedding based on Latent Space Model Generating Links
    Shotaro Kawasaki; Ryosuke Motegi; Shogo Matsuno; Yoichi Seki
    ACM International Conference Proceeding Series, 164-170, 21 Jan. 2022, Graph representation learning encodes vertices as low-dimensional vectors that summarize their graph position and the structure of their local neighborhood. These methods provide beneficial representations in continuous space from large relational data. However, the algorithms are usually evaluated indirectly through the accuracy of classification tasks that use the learned representations, because no ground-truth representation is available. Therefore, this study proposes a method to evaluate graph representation learning algorithms by preparing correct learning results in advance: objects are distributed in a latent space, and relational graph data are generated probabilistically from the distributions in that latent space. Using this method, we evaluated LINE (Large-scale Information Network Embedding), one of the most popular graph representation learning algorithms. LINE consists of two algorithms optimizing two objective functions defined by first-order and second-order proximity. We prepared two link-generating models suited to these two objective functions and clarified that the corresponding LINE algorithm performed well on the link data generated by each model.
    International conference proceedings
  • Effective Evolutionary Manipulation Setting in GNP-based Rule Mining Method
    Shogo Matsuno; Kaoru Shimada; Takaaki Arahira
    Lead, 27th International Symposium on Artificial Life and Robotics, Jan. 2022, Peer-reviewed
    International conference proceedings
  • Blink State Classification Using 3D Convolutional Neural Network
    Hironobu Sato; Kiyohiko Abe; Minoru Ohyama; Shogo Matsuno
    27th International Symposium on Artificial Life and Robotics, Jan. 2022, Peer-reviewed
    International conference proceedings
  • Blink input interface enabling multiple candidate selection through sound feedback
    Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
    ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 26, 3, 312-317, Aug. 2021, Several input interfaces that employ eye-blink detection have been proposed. These input interfaces can detect operating intentions indicated by eye-blink actions. In this study, we propose a new blink input interface with the aim of increasing the number of classifiable blink types beyond the regular voluntary blink. The proposed interface employs sound feedback that facilitates multiple candidate selection. The feedback sound represents each input candidate and is generated while switching over time. This interface enables users to input their operating intentions by controlling the timing of eye opening. We developed a prototype with the proposed interface and evaluated the system. The results indicated a total classification rate of 96.5% for four types of voluntary blinks with 10 subjects. This classification rate is consistent with that of our previous report for two types of voluntary blinks. Thus, we concluded that our proposed method is reliable.
    Scientific journal, English
  • Product and Corporate Culture Diffusion via Twitter Analytics: A Case of Japanese Automobile Manufactures
    Yuta Kitano; Shogo Matsuno; Tetsuo Yamada; Kim Hua Tan
    Corresponding, 26th International Conference on Production Research (ICPR) 2021, Jul. 2021, Peer-reviewed
    International conference proceedings, English
  • Evolutionary Method for Two-dimensional Associative Local Distribution Rule Mining
    Kaoru Shimada; Takaaki Arahira; Shogo Matsuno
    Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, 2021-November, 1018-1025, 2021, In this paper, we propose a rule discovery method that can directly and rapidly reveal combinations of attributes that produce a characteristic distribution of two continuous variables of interest in a database with many attributes. In numerical association rule mining (NARM), when using association rules that handle continuous numerical values, it is difficult to heuristically extract rules that focus on the statistical distributions of the numerical data. The proposed method enables quick discovery of the number of rules necessary for prediction purposes using evolutionary computation characterized by a network structure and a strategy of pooling solutions across generations. It effectively finds attribute combinations in which the values taken by the two continuous variables of interest both fall within narrow ranges, and can address instance-based two-dimensional regression problems in a short time. As an evaluation experiment, a prediction task using musical data linked with map data was carried out, and flexible rule discovery conditions were set. This realized a high coverage rate on the instance-based regression problem, and the proposed method was effective for rule discovery based on statistical distributions in NARM.
    International conference proceedings
  • Evolutionary Method to Discover Itemsets with Statistically Distinctive Backgrounds
    Kaoru Shimada; Takaaki Arahira; Shogo Matsuno
    Proceedings - 2021 IEEE 4th International Conference on Artificial Intelligence and Knowledge Engineering, AIKE 2021, 113-120, 2021, In this paper, we propose a method for discovering combinations of attributes (itemsets) against a background of statistical characteristics without obtaining frequent itemsets. The method considers a database with numerous attributes and can directly find a combination of attributes that are highly correlated with small populations in two continuous variables of interest, even in an incomplete database. It determines locally found patterns in large-scale data and is expected to serve as a basis for large-scale data analysis. It uses evolutionary computation characterized by a network structure and a strategy of pooling solutions across generations. Moreover, it uses association rules to generalize the analysis method. The class-association rules used for classification constitute a discovery method for attribute combinations that are characteristic when the ratio of class attributes is noted. The proposed method can be positioned as an extension of statistical bivariate analysis. The results of the evaluation experiment show the characteristics and effectiveness of the method.
    International conference proceedings
  • Classification of Intentional Eye-blinks using Integration Values of Eye-blink Waveform
    Shogo Matsuno; Minoru Ohyama; Hironobu Sato; Kiyohiko Abe
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), IEEE, 1255-1261, 2020, We propose a method to automatically classify eye-blink types using the integral value of the eye-blink waveform. The method is intended for an eye-based input interface and performs automatic detection of intentional blinks. With the recent spread of gaze tracking and gaze input interfaces, attempts to treat eye gestures and blinks as input channels in addition to conventional gaze input have been studied. However, classifying an eye-blink as intentional or spontaneous using existing classification methods is difficult, because eye-blinks are highly individual motions that are significantly influenced by various conditions. Therefore, in this research, we construct a more robust measurement environment that does not require a strict setup, such as fixing the relative distance between the face and the camera, even for non-contact measurement. To realize this, we defined new feature parameters that correct for individual differences in moving images captured by a Web camera, assuming application to mobile interfaces. The proposed method automatically detects intentional blinks by automatically determining the threshold between blink types based on the waveform integration value as a new feature parameter. We also constructed a blink measurement system and evaluated the proposed method experimentally. The system splits each interlaced image into separate fields to measure blinks with sufficient temporal resolution, then extracts the waveform feature parameters and automatically classifies the eye-blink types. Experimental results show successful classification of intentional eye-blinks with 86% average accuracy, demonstrating higher accuracy than conventional methods based on eye-blink duration.
    International conference proceedings, English
  • Examination of Stammering Symptomatic Improvement Training Using Heartbeat-Linked Vibration Stimulation.
    Shogo Matsuno; Yuya Yamada; Naoaki Itakura; Tota Mizuno
    Augmented Cognition. Human Cognition and Behavior - 14th International Conference, Springer, 12197 LNAI, 223-232, 2020, In this paper, we propose a training method to improve the stammering symptom, which automatically adjusts the rhythm of speech using vibrational stimulation linked to heart rate through a smart watch. We focus on the rhythm control effect by vibration, confirmed by the tactile stimulation training, and propose a training method to improve the symptoms of stammering while automatically adjusting the rhythm of the utterance based on the heartbeat-linked vibration stimuli. In addition, a system using the proposed method is constructed, and its effects on the heart rate by providing vibration stimulation to stutterers and on stammering symptoms are investigated. We present the effectiveness of stammering improvement training through vibration stimuli by experimenting with eight subjects.
    International conference proceedings
  • Improved Advertisement Targeting via Fine-grained Location Prediction using Twitter.
    Shogo Matsuno; Sakae Mizuki; Takeshi Sakaki
    Companion of The 2020 Web Conference 2020, ACM / IW3C2, 527-532, 2020, With the growing demand for social network service advertisements, more accurate targeting methods are required. Therefore, the authors of this study try to predict the location of Twitter users in a more fine-grained manner with regard to area marketing. Specifically, the visit probability for each segment (e.g., prefecture and city) is predicted by a label propagation algorithm, and the user's location information is obtained from geo-tagged tweets and a user profile as a label. The proposed method predicts which Twitter users may visit the corresponding area with improved granularity, to the level of an oaza (a district) or a block. As a verification experiment, we construct a system based on the proposed method that outputs a list of users who are likely to visit a location when an oaza name is entered, and we also try to predict the probability of users visiting each segment. The results show that the average accuracy is 73% at the prefecture level, 42% at the city level, and 25% at the oaza level. In addition, the prediction accuracy in Tokyo alone is 31% at the oaza level. These results indicate the effectiveness of the proposed method.
    International conference proceedings
  • Advanced eye-gaze input system with two types of voluntary blinks
    Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
    ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 24, 3, 324-331, Sep. 2019, Peer-reviewed, Recently, several eye-gaze input systems have been developed. Some of these systems consider the eye-blinking action to be additional input information. A main purpose of eye-gaze input systems is to serve as a communication aid for the severely disabled. An input system that employs eye blinks as command inputs needs to identify voluntary (conscious) blinks. In the past, we developed an eye-gaze input system for the creation of Japanese text. Our previous system employed an indicator selection method for command inputs. This system was able to identify two types of voluntary blinks, which work as functions governing indicator selection and error correction, respectively. In the evaluation experiment of the previous system, errors were occasionally observed in the estimation of the number of indicators at which the user was gazing. In this study, we propose a new input system that employs a selection method based on a novel indicator estimation algorithm. We conducted an experiment to evaluate the performance of Japanese text creation using our new input system. This study reports that the new input system improves the speed of text input. In addition, we present a comparison of various related eye-gaze input systems.
    Scientific journal, English
  • A method of character input for the user interface with a low degree of freedom
    Shogo Matsuno; Susumu Chida; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 24, 2, 250-256, Jun. 2019, Peer-reviewed, In recent times, smart devices equipped with small touch panels have become very popular. Many such smart devices use a software keyboard for character input. Unfortunately, software keyboards have a limitation: the number of buttons and the input degrees of freedom remain the same, because a button and an input value correspond one-to-one. Thus, if we reduce the screen size while the button size remains the same, the number of buttons must decrease. Alternatively, if we maintain the number of buttons and reduce the screen size, the size of each button decreases, making input increasingly difficult. In this study, we investigate a new character input method that is specifically adapted for use on small screens. The proposed input interface has 4x2 operational degrees of freedom and consists of four buttons and two actions. By handling two operations as one input, 64 input options are secured. Additionally, we experimentally evaluate the proposed character input user interface deployed on a smart device. The proposed method enables input of approximately 25 characters per minute, and it shows robust input performance on a small screen compared to previous software methods. Thus, the proposed method is better suited to small screens than previous methods.
    Scientific journal, English
  • An analysis method for eye motion and eye blink detection from colour images around ocular region
    Shogo Matsuno; Naoaki Itakura; Tota Mizuno
    INTERNATIONAL JOURNAL OF SPACE-BASED AND SITUATED COMPUTING, INDERSCIENCE ENTERPRISES LTD, 9, 1, 22-30, 2019, Peer-reviewed, This paper examines an analysis and measurement method that aims to incorporate eye motion and eye blink as functions of an eye-based human-computer interface. The proposed method analyses visible-light images captured without special equipment, in order to make the eye-controlled input interface independent of gaze-position measurement. Specifically, we propose two algorithms, one for detecting eye motion using optical flow and the other for classifying voluntary eye blinks using changes in eye aperture area. In addition, we experimentally evaluated both detection algorithms simultaneously, assuming application to an input interface. The two algorithms were examined for applicability in an eye-based input interface. As a result of the experiments, real-time detection of each eye motion and eye blink could be performed with an average accuracy of about 80%.
    Scientific journal, English
  • Discrimination of Eye Blinks and Eye Movements as Features for Image Analysis of the Around Ocular Region for Use as an Input Interface
    Shogo Matsuno; Masatoshi Tanaka; Keisuke Yoshida; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    INNOVATIVE MOBILE AND INTERNET SERVICES IN UBIQUITOUS COMPUTING, IMIS-2018, SPRINGER INTERNATIONAL PUBLISHING AG, 773, 171-182, 2019, Peer-reviewed, This paper examines an input method for ocular analysis that incorporates eye-motion and eye-blink features to enable an eye-controlled input interface that functions independently of gaze-position measurement. This was achieved by analyzing the visible light in images captured without special equipment. We propose applying two methods: one detects eye motions using optical flow, and the other classifies voluntary eye blinks. The experimental evaluations assessed both identification algorithms simultaneously. Both algorithms were also examined for applicability in an input interface. The results have been consolidated and evaluated. This paper concludes by considering the future of this topic.
    International conference proceedings, English
  • Investigation of Context-aware System Using Activity Recognition
    Yuki Watanabe; Reiji Suzumura; Shogo Matsuno; Minoru Ohyama
    2019 1ST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION (ICAIIC 2019), IEEE, 287-291, 2019, Peer-reviewed, Physical activity is important context information for defining and understanding the user's situation in real time and in detail. Therefore, we developed a context-aware function using activity recognition and showed that it is possible to provide more appropriate support according to the user's situation. In this study, we first constructed a model by applying machine learning to data sensed by a smartphone in order to predict the physical activity of the user. In the experiment, a high accuracy of 97.6% was obtained using the model. Next, we developed three functions using activity recognition. These functions predict the physical activity of the user in real time, and user support is performed according to the predicted physical activity. In an experiment using the developed functions, we confirmed that they worked correctly in real-world conditions.
    International conference proceedings, English
  • Expanding the Freedom of Eye-gaze Input Interface using Round-Trip Eye Movement under HMD Environment.
    Shogo Matsuno; Hironobu Sato 0001; Kiyohiko Abe; Minoru Ohyama
    International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019, Posters and Demos, Eurographics Association, 21-22, 2019, In this paper, we propose a specific gaze-movement detection algorithm, which is necessary for implementing a gaze-movement input interface using an HMD's built-in eye-tracking system. Most input modalities in current virtual reality and augmented reality are hand-held devices, hand gestures, head tracking, and voice input, even though the HMD is worn on the head. Therefore, in order to use eye expressions as a hands-free input modality, we consider a gaze input interface that does not depend on the measurement accuracy of the measurement device. The proposed method assumes eye-movement input, as distinct from the gaze-position input usually implemented with an eye-tracking system. Specifically, by using reciprocating eye movement in an oblique direction as an input channel, it aims to realize an input method that neither blocks the view with a screen display nor hinders the acquisition of other gaze meta-information. Moreover, the proposed algorithm was implemented in an HMD, and the detection accuracy of the round-trip eye movement was evaluated experimentally. Averaging the detection results of 5 subjects yielded 90% detection accuracy, showing that the method is accurate enough to develop an input interface using eye movement.
    International conference proceedings
  • Examination of multi-optioning for cVEP-based BCI by fluctuation of indicator lighting intervals and luminance
    Shogo Matsuno; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), IEEE, 2019-October, 2743-2747, 2019, Brain-computer interfaces (BCIs) using visual evoked potentials (VEPs) induced by blinking lights have been studied. We examined changing the phase and frequency of the blinking light, using variable onset intervals and changing luminance, to realize multiple choices in a BCI based on code-modulated visual evoked potentials (cVEP) using transient VEPs. We used three blinking lights with fluctuating intervals in addition to one blinking light with a fixed interval and attempted to discriminate which of the four blinking lights a subject observed. The amplitude of the averaged VEPs with fluctuating intervals decreased when the subject did not observe the blinking light. The discrimination rate for the blinking lights was approximately 84%. In addition, we examined changing the luminance of the blinking stimulus to increase the number of choices. We obtained VEPs based on a synchronous addition method and verified the effects of changing the luminance and lighting intervals of the indicators. Our study thus aims to increase the number of simultaneous choices of a cVEP-based BCI using blinking lights with variable onset intervals and changing luminance. In this paper, we report the feasibility of multiple choices through experiments evaluating the proposed methods.
    International conference proceedings, English
  • Estimating autonomic nerve activity using variance of thermal face images
    Shogo Matsuno; Tota Mizuno; Hirotoshi Asano; Kazuyuki Mito; Naoaki Itakura
    ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 23, 3, 367-372, Sep. 2018, Peer-reviewed, In this paper, we propose a novel method for evaluating mental workload (MWL) using variances in facial temperature. Moreover, our method aims to evaluate autonomic nerve activity using a single facial thermal image. The autonomic nervous system is active under MWL. In previous studies, temperature differences between the nasal and forehead portions of the face were used in MWL evaluation and estimation. Hence, nasal skin temperature (NST) is said to be a reliable indicator of autonomic nerve activity. In addition, autonomic nerve activity has little effect on forehead temperature; thus, temperature differences between the nasal and forehead portions of the face have also been demonstrated to be a good indicator of autonomic nerve activity (along with other physiological indicators such as EEG and heart rate). However, these approaches have not considered temperature changes in other parts of the face. Thus, we propose a novel method using variances in temperature over the entire face. Our proposed method enables other parts of the face to be captured for temperature monitoring, thereby increasing evaluation and estimation accuracy at higher sensitivity levels than conventional methods. Finally, we also examined whether further high-precision evaluation and estimation was feasible. Our results show that the proposed method is a highly accurate evaluation method compared with results obtained in previous studies using NST.
    Scientific journal, English
  • Investigation on Estimation of Autonomic Nerve Activity of VDT Workers Using Characteristics of Facial Skin Temperature Distribution
    Tomoyuki Murata; Kota Akehi; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura; Tota Mizuno
    23rd International Symposium on Artificial Life and Robotics, OS18-1, 845-848, Jan. 2018, Peer-reviewed
    English
  • A Method of Character Input under the Operating Environment with Low Degree of Freedom
    Shogo Matsuno; Susumu Chida; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    23rd International Symposium on Artificial Life and Robotics, OS18-2, 849-853, Jan. 2018, Peer-reviewed
    English
  • Improvement of Text Input System Using Two Types of Voluntary Blinks and Eye-Gaze Information
    Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
    23rd International Symposium on Artificial Life and Robotics, OS18-3, 854-858, Jan. 2018, Peer-reviewed
    English
  • Activity Estimation Using Device Positions of Smartphone Users
    Yuki Oguri; Shogo Matsuno; Minoru Ohyama
    ADVANCES IN NETWORK-BASED INFORMATION SYSTEMS, NBIS-2017, SPRINGER INTERNATIONAL PUBLISHING AG, 7, 1126-1135, 2018, Peer-reviewed, Various activity-based services have been created for use by smartphone users. In the field of activity recognition, researchers frequently use smartphones or devices equipped with built-in sensors to estimate activities. However, in contrast to wristwatch devices that are worn on the arm, users may change the position of the smartphone depending on their situation; this may include placing the device in a bag or pocket. Therefore, a change in the device position should be considered when estimating activities using a smartphone. Considerable research has been conducted under conditions in which a smartphone is placed in a trouser pocket; however, few studies have focused on the changing context and location of the smartphone. Using a Support Vector Machine (SVM) on an Android smartphone, this paper classifies seven types of activity with three types of smartphone position. The results of an experiment conducted with seven smartphone users indicate that seven possible states were classified with an average accuracy greater than 95.75%, regardless of the device position.
    International conference proceedings, English
  • Tourist Support System Using User Context Obtained from a Personal Information Device.
    Shogo Matsuno; Reiji Suzumura; Minoru Ohyama
    Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, ACM, 91-95, 2018, Peer-reviewed, Tour planning is a difficult task for those who visit unfamiliar city destinations. Furthermore, building an itinerary becomes more difficult as the number of options that can be incorporated into travel increases. The authors aim to propose places of interest (POIs) according to the narrative strategy of a tour guide, to realize a better personalized mobile tour guide system, and to establish a method to support efficient route scheduling. As a basic stage, we herein consider a method of naturally collecting users' context information through interaction between users and information terminals. In addition, we introduce a POI recommendation application, currently under development, that uses this context information.
    International conference proceedings
  • Discrimination of Eye Blinks and Eye Movements as Features for Image Analysis of the Around Ocular Region for Use as an Input Interface.
    Shogo Matsuno; Masatoshi Tanaka; Keisuke Yoshida; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    Innovative Mobile and Internet Services in Ubiquitous Computing - Proceedings of the 12th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing(IMIS), Springer, 773, 171-182, 2018, Peer-reviewed, This paper examines an input method for ocular analysis that incorporates eye-motion and eye-blink features to enable an eye-controlled input interface that functions independently of gaze-position measurement. This was achieved by analyzing the visible light in images captured without using special equipment. We propose applying two methods. One method detects eye motions using optical flow. The other method classifies voluntary eye blinks. The experimental evaluations assessed both identification algorithms simultaneously. Both algorithms were also examined for applicability in an input interface. The results have been consolidated and evaluated. This paper concludes by considering the future of this topic.
    International conference proceedings
  • Where can we accomplish our To-Do?: estimating the target location by analyzing the task
    Reiji Suzumura; Shogo Matsuno; Minoru Ohyama
    PROCEEDINGS 2018 IEEE 32ND INTERNATIONAL CONFERENCE ON ADVANCED INFORMATION NETWORKING AND APPLICATIONS (AINA), IEEE, 2018-May, 457-463, 2018, Peer-reviewed, Reminders are used in various situations. The users of a location-based reminder system must specify the location to receive the notification, and spend extra time doing so. However, if the location can be estimated from the task that the user enters, this step can be eliminated. The authors focused on To-Do lists, i.e., tasks the user wants to accomplish. It was assumed that the location where a To-Do can be accomplished could be estimated by parsing the To-Do. In this paper, the authors propose a method of estimating the place where a given To-Do can be accomplished. A list of daily To-Dos was collected from students through a survey questionnaire, and the To-Dos were used to evaluate the proposed method. In addition, the authors introduce a prototype of a location-based reminder system that applies the proposed method.
    International conference proceedings, English
  • Recognition of a variety of activities considering smartphone positions
    Yuki Oguri; Shogo Matsuno; Minoru Ohyama
    INTERNATIONAL JOURNAL OF SPACE-BASED AND SITUATED COMPUTING, INDERSCIENCE ENTERPRISES LTD, 8, 2, 88-95, 2018, Peer-reviewed, We present a high-accuracy recognition method for various activities using smartphone sensors based on device positions. Many researchers have attempted to estimate various activities, particularly using sensors such as the built-in accelerometer of a smartphone. Considerable research has been conducted under conditions such as placing a smartphone in a trouser pocket; however, few have focused on the changing context and influence of the smartphone position. Herein, we present a method for recognising seven types of activities considering three smartphone positions, and conducted two experiments to estimate each activity and identify the actual state under continuous movement at a university campus. The results indicate that the seven states can be classified with an average accuracy of 98.53% for three different smartphone positions. We also correctly identified these activities with 91.66% accuracy. Using our method, we can create practical services such as healthcare applications with a high degree of accuracy.
    Scientific journal, English
  • Study of detection algorithm of pedestrians by image analysis with a crossing request when gazing at a pedestrian crossing signal
    Akira Tsuji; Naoaki Itakura; Tota Mizuno; Shogo Matsuno
    Journal of Information and Communication Engineering, 3, 5, 167-173, Dec. 2017, Peer-reviewed
    English
  • Non-contact Eye-Glance Input Interface Using Video Camera
    Kota Akehi; Shogo Matuno; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    Journal of Signal Processing, Research Institute of Signal Processing, Japan, 21, 4, 207-210, Jul. 2017, Peer-reviewed, In the past, many studies have been carried out on eye-gaze input; however, in this study, we developed an eye-glance input interface that tracks a combination of short eye movements. Unlike eye-gaze input that requires high-accuracy measurements, eye-glance input can be detected with only a rough indication of the direction of the eye movements, making it possible to operate even terminals with small screens, such as smartphones. In this study, we used an inexpensive camera to measure eye movements and analyzed its output using OpenCV, an open-source computer vision and machine learning software library, to construct an inexpensive and non-contact interface. In a previous study, we developed an algorithm that detected eye-glance input through image analysis using OpenCV, and fed the result of the algorithm back to our subjects. In that study, the average detection rate for eye-glance input was 76%. However, we also observed several problems with the algorithm, particularly false detections due to blinking, and implemented solutions for improvement. In this study, we improved on the unsatisfactory detection rate recorded in our previous study and addressed problems related to user convenience.
    Scientific journal, English
  • Evaluation of Autonomic Nervous Activity with Variance of Facial Skin Thermal Image
    Tota Mizuno; Shunsuke Kawazura; Hirotoshi Asano; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura
    22nd International Symposium on Artificial Life and Robotics, 512-515, Jan. 2017, Peer-reviewed
    English
  • A Multiple Choices Method for Transient Type VEP Brain Computer Interface by Changing Luminance and Lighting Interval of Indicators
    Matsuno Shogo; Oh Mumu; Aizawa Shogo; Itakura Naoaki; Mizuno Tota; Mito Kazuyuki
    IEEJ Transactions on Electronics, Information and Systems, The Institute of Electrical Engineers of Japan, 137, 4, 616-620, 2017, Peer-reviewed, Brain-computer interfaces (BCIs) have been studied using transient visual evoked potentials (VEPs) with various blinking stimuli. We examined changing the phase and frequency of the blinking light, using variable onset intervals and changing luminance, in order to realize multiple choices in a BCI using transient VEPs. However, the number of choices was limited to four. Therefore, we examined changing the luminance of the blinking stimulus to increase the number of choices. In this study, we obtained VEPs based on a synchronous addition method and verified the effects of changing the luminance and lighting intervals of the indicators. In addition, we developed a multiple-choice interface using blinking lights with variable onset intervals and changing luminance. In this paper, we report the feasibility of multiple choices through experiments evaluating the proposed methods.
    Scientific journal, Japanese
  • Feature analysis focused on temporal alteration of the eyeblink waveform using image analysis
    Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Shoichi Ohi; Naoaki Itakura
    IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 137, 4, 645-651, 2017, Peer-reviewed, In this paper, we propose an eyeblink feature parameter for automatically classifying conscious and unconscious eyeblinks. For the feature parameter, we focus on eyeblink waveform integral values, which are defined as a measurement record of the progression of eyeblinks. Previous studies have used duration time and waveform amplitude as the feature parameters. The integral values, on the other hand, have characteristics of both of those feature parameters. We obtain these parameters using a National Television System Committee format video camera by splitting a single interlaced image into two fields. We use frame-splitting methods to obtain and analyze the integral value of the eyeblink waveform. We experimentally compared the feature parameters to automatically classify conscious and unconscious eyeblinks. Duration time and amplitude did not significantly differ for some subjects; however, we confirmed a significant difference when using the integral value. Our results suggest that eyeblink waveform integral values are effective for discriminating conscious eyeblinks. We believe that the integral value of the eyeblink waveform is applicable to an eyeblink input interface.
    Scientific journal, English
  • A Multiple-choice Input Interface using Slanting Eye Glance
    Matsuno Shogo; Ito Yuta; Akehi Kota; Itakura Naoaki; Mizuno Tota; Mito Kazuyuki
    IEEJ Transactions on Electronics, Information and Systems, The Institute of Electrical Engineers of Japan, 137, 4, 621-627, 2017, Peer-reviewed, Eye-gaze input is attracting attention as a hands-free method for operating information devices. However, it is difficult to use a gaze input system on a small screen, such as that of a smart device, because the eye-gaze input method must measure gaze positions accurately. To solve this problem, we have proposed the eye-glance input method for operating small information devices such as smartphones. The eye-glance input method enables multiple-choice input using reciprocating eye movements in oblique directions, thereby enabling input operation that is independent of screen size. In this paper, we report the results of a number-input evaluation experiment using our multiple-choice eye-glance input system, which utilizes electrooculography amplified via AC coupling. The experiments showed that, for 10 subjects using the experimental display design, the average input success rate of real-time eye-glance input was 91.5% and the average input speed was approximately 15.2 characters per minute.
    Scientific journal, Japanese
  • Study of Non-contact Eye Glance Input Interface with Video Camera
    Akehi Kota; Matsuno Shogo; Itakura Naoaki; Mizuno Tota; Mito Kazuyuki
    IEEJ Transactions on Electronics, Information and Systems, The Institute of Electrical Engineers of Japan, 137, 4, 628-633, 2017, Peer-reviewed, Our previous studies proposed the Eye Glance input system, which uses only combinations of opposite-direction eye movements instead of eye-gaze input. It was measured by electrooculography (EOG), but the physical constraints and dedicated apparatus were problematic. We therefore proposed a non-contact measurement method that uses a camera to measure eye movement, enabling eye-movement measurement with a built-in camera or a USB camera by using optical flow in OpenCV. In this study, we improved the algorithm to measure eye movement and give visual feedback to subjects in real time. We then investigated the effectiveness of the algorithm and the influence of the feedback.
    Scientific journal, Japanese
  • Study of detection algorithm of pedestrians by image analysis with a crossing request when gazing at a pedestrian crossing signal
    Akira Tsuji; Naoaki Itakura; Tota Mizuno; Shogo Matsuno; Hiroshi Kazama; Takuma Toba
    2017 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATICS AND BIOMEDICAL SCIENCES (ICIIBMS), IEEE, 2018-January, 189-194, 2017, Peer-reviewed, For pedestrian traffic, pushbutton traffic signals are used. However, pushbutton signal machines are often installed in inconvenient locations. Therefore, a new method is required to allow pedestrians to switch the traffic signals using a crossing request other than a pushbutton. In this research, we paid attention to the motions of pedestrians with a crossing request, who tend to "stay and watch the traffic signal in front of the crosswalk." Pedestrian images were captured using a camera built into the traffic signals and analyzed using image processing. A new traffic signal system is proposed that uses the motions of a pedestrian with a crossing request as an input signal.
    International conference proceedings, English
  • Investigation of Facial Region Extraction Algorithm Focusing on Temperature Distribution Characteristics of Facial Thermal Images.
    Tomoyuki Murata; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura; Tota Mizuno
    Communications in Computer and Information Science, Springer Verlag, 713, 347-352, 2017, Peer-reviewed, In our previous research, we expanded the range to be analyzed to the entire face. This was because there were regions in the mouth, in addition to the nose, where the temperature fluctuated according to the mental workload (MWL). We evaluated the MWL with high accuracy by this method. However, it has been clarified in previous studies that the edge portion of the face, where there is no angle between the thermography and the object to be photographed, exhibits decreased emissivity measured by reflection or the like, and, as a result, the accuracy of the temperature data decreases. In this study, we aim to automatically extract the target facial region from the thermal image taken by thermography by focusing on the temperature distribution of the facial thermal image, as well as examine the automation of the evaluation. As a result of evaluating whether the analysis range can be automatically extracted from 80 facial images, we succeeded in an automatic extraction that can be analyzed from about 90% of the images.
    International conference proceedings, English
  • Development of Device for Measurement of Skin Potential by Grasping of the Device.
    Tota Mizuno; Shogo Matsuno; Kota Akehi; Kazuyuki Mito; Naoaki Itakura; Hirotoshi Asano
    Communications in Computer and Information Science, Springer Verlag, 713, 237-242, 2017, Peer-reviewed, In this study, we developed a device for measuring skin potential activity requiring the subject to only grasp the interface. There is an extant method for measuring skin potential activity, which is an indicator for evaluating Mental Work-Load (MWL). It exploits the fact that when a human being experiences mental stress, such as tension or excitement, emotional sweating appears at skin sites such as the palm and sole; concomitantly, the skin potential at these sites varies. At present, skin potential activity of the hand is measured by electrodes attached to the whole arm. Alternatively, if a method can be developed to measure skin potential activity (and in turn emotional sweating) by an electrode placed on the palm only, it would be feasible to develop a novel portable burden-evaluation interface that can measure the MWL with the subject holding the interface. In this study, a prototype portable load-evaluation interface was investigated for its capacity to measure skin potential activity while the interface is held in the subject’s hand. This interface, wherein an electrode is attached to the device, rather than directly to the hand, can measure the parameters with the subject gripping the device. Moreover, by attaching the electrode laterally rather than longitudinally to the device, a touch by the subject, at any point on the sides of the device, enables measurement. The electrodes used in this study were tin foil tapes. In the experiment, subjects held the interface while it measured their MWL. However, the amplitude of skin potential activity (which reflects the strength of the stimulus administered on the subjects) obtained by the proposed method was lower than that obtained by the conventional method. Nonetheless, because sweat response due to stimulation could be quantified with the proposed method, the study demonstrated the possibility of load measurements considering only the palm.
    International conference proceedings, English
  • Automatic Classification of Eye Blinks and Eye Movements for an Input Interface Using Eye Motion.
    Shogo Matsuno; Masatoshi Tanaka; Keisuke Yoshida; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    Communications in Computer and Information Science, Springer Verlag, 713, 164-169, 2017, Peer-reviewed, The objective of this study is to develop a multi-gesture input interface using several eye motions simultaneously. In this study, we proposed a new automatic classification method for eye blinks and eye movements from moving images captured using a web camera installed on an information device. Eye motions were classified using two methods of image analysis. One method classifies the moving direction based on optical flow. The other method detects voluntary blinks based on the integral value of the eye-blink waveform, recorded as changes in the eye-opening area. We developed an algorithm to run the two methods simultaneously. We also developed a classification system based on the proposed method and conducted an experimental evaluation in which the average classification rate was 79.33%. This indicates that it is possible to distinguish multiple eye movements using a general video camera.
    International conference proceedings, English
  • Development of a measuring device skin potential with grasping only
    Tota Mizuno; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura; Hirotoshi Asano
    Proceedings of the 19th International Conference on Human-Computer Interaction, 2017, Peer-reviewed
  • Basic study of evaluation that uses the center of gravity of a facial thermal image for the estimation of autonomic nervous activity
    Shogo Matsuno; Shunsuke Kosuge; Shunsuke Kawazura; Hirotoshi Asano; Naoaki Itakura; Tota Mizuno
    The Ninth International Conference on Advances in Computer-Human Interactions, 258-261, May 2016, Peer-reviewed
    English
  • Autonomic Nervous Activity Estimation Algorithm with Facial Skin Thermal Image
    Tota Mizuno; Shunsuke Kawazura; Shogo Matsuno; Kota Akehi; Hirotoshi Asano; Naoaki Itakura; Kazuyuki Mito
    The Ninth International Conference on Advances in Computer-Human Interactions, 262-266, May 2016, Peer-reviewed
    English
  • Method for Measuring Intentional Eye Blinks by Focusing on Momentary Movement around the Eyes
    Shogo Matsuno; Tota Mizuno; Naoaki Itakura
    RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing, 137-140, Mar. 2016, Peer-reviewed
    English
  • Development of Small Device for the Brain Computer Interface with Transient VEP Analysis
    Osano Ryohei; Ikai Masato; Matsuno Shogo; Itakura Naoaki; Mizuno Tota
    PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), IEEE, 3782-3785, 2016, Peer-reviewed, Recently, input interfaces for PCs using EEG have been studied. The visual evoked potential (VEP) caused by a blinking light is one of the waveforms observed in the EEG signal. Brain-computer interfaces based on transient VEP (TRVEP) analysis that realize four choices have also been studied. However, they have required a large amplifier and AD board, both of which are expensive. In this study, to solve this problem, we developed a small and inexpensive device. In addition, we performed experiments comparing its performance with that of the conventional device.
    International conference proceedings, English
  • Differentiating conscious and unconscious eyeblinks for development of eyeblink computer input system
    Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Shoichi Ohi; Naoaki Itakura
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 9312 LNCS, 160-174, 2016, Peer-reviewed, Invited, In this paper, we propose and evaluate a new conscious eyeblink differentiation method, comprising an algorithm that takes into account differences between individuals, for use in a prospective eyeblink user interface. The proposed method uses a frame-splitting technique that improves the time resolution by splitting a single interlaced image into two fields (even and odd). Measuring eyeblinks with sufficient accuracy using a conventional NTSC video camera (30 fps) is difficult. However, the proposed method uses eyeblink amplitude as well as eyeblink duration as distinction thresholds. Further, the algorithm automatically differentiates eyeblinks by considering individual differences and selecting a large parameter of significance for each user. The results of evaluation experiments conducted using 30 subjects indicate that the proposed method automatically differentiates conscious eyeblinks with an accuracy rate of 83.6% on average. These results indicate that automatic differentiation of conscious eyeblinks using a conventional video camera incorporated with our proposed method is feasible.
    International conference proceedings, English
  • Communication-aid system using eye-gaze and blink information
    Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
    Advances in Face Detection and Facial Image Analysis, Springer International Publishing, 333-358, 01 Jan. 2016, Peer-reviewed, Recently, a novel human-machine interface, the eye-gaze input system, has been reported. This system is operated solely through the user’s eye movements. Using this system, many communication-aid systems have been developed for people suffering from severe physical disabilities, such as amyotrophic lateral sclerosis (ALS). We observed that many such people can perform only very limited head movements. Therefore, we designed an eye-gaze input system that requires no special tracing devices to track the user’s head movement. The proposed system involves the use of a personal computer (PC) and home video camera to detect the users’ eye gaze through image analysis under natural light. Eye-gaze detection methods that use natural light require only daily-life devices, such as home video cameras and PCs. However, the accuracy of these systems is frequently low, and therefore, they are capable of classifying only a few indicators. In contrast, our proposed system can detect eye gaze with high-level accuracy and confidence; that is, users can easily move the mouse cursor to their gazing point. In addition, we developed a classification method for eye blink types using the system’s feature parameters. This method allows the detection of voluntary (conscious) blinks. Thus, users can determine their input by performing voluntary blinks that represent mouse clicking. In this chapter, we present our eye-gaze and blink detection methods. We also discuss the communication-aid systems in which our proposed methods are applied.
    In book, English
  • Input Interface Using Eye-Gaze and Voluntary Blink
    Abe Kiyohiko; Sato Hironobu; Matsuno Shogo; Ohi Shoichi; Ohyama Minoru
    IEEJ Transactions on Electronics, Information and Systems, The Institute of Electrical Engineers of Japan, 136, 8, 1185-1193, 2016, Peer-reviewed, Recently, a novel human-machine interface known as the eye-gaze input system has been reported. This system is operated solely through the user's eye movements. Therefore, it can be used by people suffering from severe physical disabilities. We propose an eye-gaze input system that uses a personal computer and home video camera. This system detects the user's eye gaze through image analysis under natural light, including fluorescent or LED light. Our proposed system also has high accuracy and confidence; that is, users can easily move the mouse cursor to their gazing point. We confirmed a large difference in the duration of voluntary (conscious) and involuntary (unconscious) blinks through a precursor experiment. In addition, we confirmed that these durations vary significantly depending on the subject. By using eye blink duration, voluntary blinks can be detected automatically. Through this method, we developed an eye-gaze input interface that uses voluntary blink information. That is, users can decide their input by performing voluntary blinks that represent mouse clicking.
    Scientific journal, Japanese
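As a minimal sketch of the duration-based idea above (voluntary blinks last measurably longer than involuntary ones, with a threshold calibrated per subject), assuming hypothetical calibration data in milliseconds:

```python
def calibrate_threshold(voluntary_durations, spontaneous_durations):
    """Place the per-subject threshold midway between the two class means
    (a simplifying assumption, not necessarily the paper's rule)."""
    mean_vol = sum(voluntary_durations) / len(voluntary_durations)
    mean_spon = sum(spontaneous_durations) / len(spontaneous_durations)
    return (mean_vol + mean_spon) / 2.0

def is_voluntary(duration_ms, threshold_ms):
    """A blink longer than the calibrated threshold is treated as voluntary."""
    return duration_ms > threshold_ms

# Fabricated, illustrative calibration data (milliseconds).
threshold = calibrate_threshold([420, 390, 450], [150, 180, 160])
assert is_voluntary(400, threshold)
assert not is_voluntary(170, threshold)
```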
  • Measuring facial skin temperature changes caused by mental work-load with infrared thermography
    Tota Mizuno; Takeru Sakai; Shunsuke Kawazura; Hirotoshi Asano; Kota Akehi; Shogo Matsuno; Kazuyuki Mito; Yuichiro Kume; Naoaki Itakura
    IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 136, 11, 1581-1585, 2016, Peer-reviewed, We evaluated the temperature change of facial parts as affected by mental work-load (MWL) using infrared thermography. Under MWL, autonomic nerves are active, and the skin surface temperature changes with muscular contraction. In particular, the nasal part of the face experiences the most intense change. Based on this, in previous studies MWL was evaluated by using nasal skin temperature. However, we considered whether other parts of the face experience temperature change under MWL. Therefore, in this study, to identify which other parts of the face experience temperature change, we performed an experiment to acquire facial thermal images while subjects performed a mental arithmetic task. Our results indicate that, in addition to the nasal part, the temperatures around the lips and cheek might also increase under MWL.
    Scientific journal, English
  • Eye-movement measurement for operating a smart device: a small-screen line-of-sight input system
    Shogo Matsuno; Saitoh Sorao; Chida Susumu; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), IEEE, 3798-3800, 2016, Peer-reviewed, A real-time eye-glance input interface is developed for a camera-enabled smartphone. Eye-glance input is one of various line-of-sight input methods that capture eye movements over a relatively small screen. In previous studies, a quasi-eye-control input interface was developed using the eye-gaze method, which uses gaze position as an input trigger. This method has allowed intuitive and accurate inputting to information devices. However, there are certain problems with it: (1) measurement accuracy requires accurate calibration and a fixed positional relationship between user and system; (2) deciding the input position by eye-gaze time slows down the inputting process; (3) it is necessary to present orientation information when performing input. Put differently, problem (3) requires the accuracy of any eye-gaze measuring device to increase as the screen becomes smaller. The eye-gaze method has traditionally needed a relatively wide screen, which has made eye-control input difficult with a smartphone. Our proposed method can solve this problem because the required input accuracy is independent of screen size. We report a prototype input interface based on an eye-glance input method for a smartphone. This system has an experimentally measured line-of-sight accuracy of approximately 70%.
    International conference proceedings, English
  • Input interface suitable for touch panel operation on a small screen
    Susumu Chida; Shogo Matsuno; Naoaki Itakura; Tota Mizuno
    PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), IEEE, 3679-3683, 2016, Peer-reviewed, Recent years have seen rapid widespread adoption of smart devices equipped with a small touch panel. Generally, a software keyboard on a small touch panel faces a trade-off between the number of input choices and operational flexibility. This is because an increase in the number of buttons reduces the size of each button, whereas retaining the button size would necessarily reduce the number of buttons. In this study, we propose a new character input interface suitable for a small screen. In addition, we configured the input interface based on the proposed method, and report the experiments that were carried out to evaluate it. As a result, even in conditions where input is difficult with the conventional method, the proposed method shows little degradation in input performance.
    International conference proceedings, English
  • A Study of an Intention Communication Assisting System Using Eye Movement
    Shogo Matsuno; Yuta Ito; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS, PT II (ICCHP 2016), SPRINGER INT PUBLISHING AG, 9759, 495-502, 2016, Peer-reviewed, In this paper, we propose a new intention communication assisting system that uses eye movement. The proposed method solves the problems associated with the conventional eye gaze input method. Hands-free input methods that use the behavior of the eye, including blinking and line of sight, have been used to assist the intention communication of people with severe physical disabilities. In particular, line-of-sight input devices that use eye gazes have been used extensively because of their intuitive operation, and they can be used by most patients, except those with weak eye movements. However, the eye gaze method has disadvantages: a certain input time is required to determine the eye gaze input, and fixation targets must be presented when performing input. In order to solve these problems, we propose a new line-of-sight input method, the eye glance input method. Eye glance input can be performed in four directions by detecting reciprocating movement (eye glance) in the oblique directions. Using the proposed method, it is possible to perform rapid environmental control with simple measurements. In addition, we developed an evaluation system using electrooculogram based on the proposed method, and experimentally evaluated the input accuracy with 10 subjects. As a result, an average accuracy of approximately 84.82% was obtained, which confirms the effectiveness of the proposed method. In addition, we examined the application of the proposed method to actual intention communication assisting systems.
    International conference proceedings, English
  • Advancement of a To-Do Reminder System Focusing on Context of the User
    Masatoshi Tanaka; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
    HCI INTERNATIONAL 2016 - POSTERS' EXTENDED ABSTRACTS, PT II, SPRINGER INTERNATIONAL PUBLISHING AG, 618, 385-391, 2016, Peer-reviewed, In recent years, smartphones have rapidly become popular and their performance has improved remarkably. Therefore, it is possible to estimate user context by using the sensors and functions equipped in smartphones. We propose a To-Do reminder system using the user's indoor position information and moving state. In conventional reminder systems, users have to input the place information (resolution place). The resolution place is where the To-Do item can be solved and the user receives a reminder. These conventional reminder systems are constructed based on outdoor position information using GPS. In this paper, we propose a new reminder system that makes it unnecessary to input the resolution place. In this newly developed system, we introduce a rule-based system for estimating the resolution place of a To-Do item. The estimation is based on an object word and a verb, which are included in most tasks in a To-Do list. In addition, we propose an automatic judgment method to determine whether a To-Do task has been completed.
    International conference proceedings, English
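The rule-based estimation described above (mapping an object word and a verb in a To-Do item to a resolution place) could look roughly like this; the word lists and places are invented for illustration and are not the paper's actual rules:

```python
# Hypothetical object-word/verb rules mapping a To-Do item to the
# place where it can be resolved (the "resolution place").
RULES = [
    ({"milk", "bread"}, {"buy"}, "grocery store"),
    ({"parcel", "letter"}, {"send", "mail"}, "post office"),
    ({"book"}, {"return", "borrow"}, "library"),
]

def estimate_resolution_place(todo_words):
    """Return the first place whose object-word and verb rules both match."""
    words = set(todo_words)
    for objects, verbs, place in RULES:
        if words & objects and words & verbs:
            return place
    return None  # no rule matched; leave the place unresolved

assert estimate_resolution_place(["buy", "milk"]) == "grocery store"
assert estimate_resolution_place(["send", "parcel"]) == "post office"
assert estimate_resolution_place(["walk", "dog"]) is None
```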
  • Physiological and Psychological Evaluation by Skin Potential Activity Measurement Using Steering Wheel While Driving
    Shogo Matsuno; Takahiro Terasaki; Shogo Aizawa; Tota Mizuno; Kazuyuki Mito; Naoaki Itakura
    HCI INTERNATIONAL 2016 - POSTERS' EXTENDED ABSTRACTS, PT II, SPRINGER INT PUBLISHING AG, 618, 177-181, 2016, Peer-reviewed, This paper proposes a new method for practical skin potential activity (SPA) measurement while driving a car, by installing electrodes on the outer periphery of the steering wheel. Evaluating the psychophysiological state of the driver while driving is important for accident prevention. We investigated whether the physiological and psychological state of the driver can be evaluated by measuring SPA while driving. To this end, we devised a way to measure SPA by installing electrodes in the steering wheel. The electrodes are made of tin foil and are placed along the outer periphery of the wheel, considering that the hands' position while driving is not fixed. The potential difference is increased by changing the impedance through changing the width of the electrodes. An experiment to investigate the possibility of measuring SPA using the conventional and the proposed methods was conducted with five healthy adult males, with a physical stimulus applied to the forearm of the subjects. It was found that the proposed method could measure SPA, even though the result was slightly smaller than that of the conventional method of affixing electrodes directly on the hands.
    International conference proceedings, English
  • O-047 A study on moving state estimation using smartphone built-in sensors
    Yoshida Keisuke; Oguri Yuki; Matsuno Shogo; Ohyama Minoru
    RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing, Forum on Information Technology, 14, 4, 547-548, 24 Aug. 2015, Peer-reviewed
    Japanese
  • Feature Analysis of Eyeblink Waveform for Automatically Classifying Conscious Blinks
    Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe
    Proceedings of the International Conference on Electronics and Software Science, 216-222, Jul. 2015, Peer-reviewed
  • Improvement in Eye Glance Input Interface Using OpenCV
    Kota Akehi; Shogo Matsuno; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    Proceedings of the International Conference on Electronics and Software Science, 207-211, Jul. 2015, Peer-reviewed
    English
  • Facial Skin Temperature Fluctuation by Mental Work-Load with Thermography
    Tota Mizuno; Takeru Sakai; Shunsuke Kawazura; Hirotoshi Asano; Kota Akehi; Shogo Matsuno; Kazuyuki Mito; Yuichiro Kume; Naoaki Itakura
    Proceedings of the International Conference on Electronics and Software Science, 212-215, Jul. 2015, Peer-reviewed
    English
  • Determining a Mobile Device's Indoor and Outdoor Location Considering Actual Use
    Katsuyoshi Ozaki; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
    Journal of Advanced Control, Automation and Robotics, 1, 1, 54-58, Jan. 2015, Peer-reviewed
    English
  • Determining Mobile Device Indoor and Outdoor Location in Various Environments Estimation of User Context
    Katsuyoshi Ozaki; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
    2015 INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATICS AND BIOMEDICAL SCIENCES (ICIIBMS), IEEE, 161-164, 2015, Peer-reviewed, To obtain a person's location information with high accuracy on a mobile device, it is necessary for the device to switch its localization method depending on whether the user is indoors or outdoors. We propose a method to determine indoor and outdoor location using only the sensors on a mobile device. To obtain a decision with high accuracy for many devices, the method must consider individual differences between devices. We confirmed that using a majority decision method reduces the influence of individual device differences. Moreover, for highly accurate decisions in various environments, it is necessary to consider the differences in environments, such as large cities surrounded by high-rise buildings versus suburban areas. We measured classification features in different environments, and the accuracy of a classifier constructed using these features was 99.6%.
    International conference proceedings, English
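The majority-decision step reported above can be sketched as follows; the sensor sources and labels are illustrative assumptions, not the paper's actual feature set:

```python
from collections import Counter

def majority_vote(decisions):
    """Combine per-sensor indoor/outdoor decisions by simple majority,
    which the paper reports reduces the influence of device differences."""
    counts = Counter(decisions)
    return counts.most_common(1)[0][0]

# Hypothetical per-sensor decisions (e.g. GPS signal, light level, cell RSSI).
assert majority_vote(["outdoor", "outdoor", "indoor"]) == "outdoor"
assert majority_vote(["indoor", "indoor", "outdoor"]) == "indoor"
```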
  • Input Interface Using Eye-Gaze and Blink Information
    Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
    HCI INTERNATIONAL 2015 - POSTERS' EXTENDED ABSTRACTS, PT I, SPRINGER-VERLAG BERLIN, 528, 463-467, 2015, Peer-reviewed, We have developed an eye-gaze input system for people with severe physical disabilities. The system utilizes a personal computer and a home video camera to detect eye-gaze under natural light, and users can easily move the mouse cursor to any point on the screen to which they direct their gaze. We constructed this system by first confirming a large difference in the duration of voluntary (conscious) and involuntary (unconscious) blinks through a precursor experiment. Consequently, on the basis of the results obtained, we developed our eye-gaze input interface, which uses the information received from voluntary blinks. More specifically, users can decide on their input by performing voluntary blinks as substitutes for mouse clicks. In this paper, we discuss the eye-gaze and blink information input interface developed and the results of evaluations conducted.
    International conference proceedings, English
  • Computer input system using eye glances
    Shogo Matsuno; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 9172, 425-432, 2015, Peer-reviewed, We have developed a real-time Eye Glance input interface using a Web camera to capture eye gaze inputs. In previous studies, an eye control input interface was developed using an electro-oculograph (EOG) amplified by AC coupling. Our proposed Eye Gesture input interface used a combination of eye movements and did not require the restriction of head movement, unlike conventional eye gaze input methods. However, this method required an input start operation before capturing could commence. This led us to propose the Eye Glance input method, which uses a combination of contradirectional eye movements as inputs and avoids the need for start operations. This method required the use of electrodes, which were uncomfortable to attach. The interface was therefore changed to a camera that uses facial images to record eye movements, realizing an improved noncontact and low-restraint interface. The Eye Glance input method measures the directional movement and the time required by the eye to move a specified distance using optical flow with OpenCV from Intel. In this study, we analyzed the waveform obtained from eye movements using a purpose-built detection algorithm. In addition, we examined the reasons why waveform detection failed for some eye movements.
    International conference proceedings, English
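A minimal sketch of detecting a reciprocating "Eye Glance" from per-frame horizontal displacements (which, in the paper, come from OpenCV optical flow; the threshold value and function below are illustrative assumptions):

```python
def detect_glance(dx_series, threshold=1.0):
    """Detect a reciprocating ('glance') motion: a displacement burst in one
    direction followed by a burst in the opposite direction.
    dx_series: per-frame horizontal displacement (e.g. mean optical flow)."""
    direction = 0
    for dx in dx_series:
        if abs(dx) < threshold:
            continue                              # ignore small jitter
        if direction == 0:
            direction = 1 if dx > 0 else -1       # outward movement
        elif (dx > 0) != (direction > 0):
            return "right-return" if direction > 0 else "left-return"
    return None

# Outward move right, pause, return left -> detected as a glance.
assert detect_glance([0.1, 2.5, 0.2, -2.4, 0.1]) == "right-return"
assert detect_glance([0.1, 0.2, 0.1]) is None
```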
  • RJ-004 Shape Feature Parameters to Classify Blink Types
    Matsuno Shogo; Ohyama Minoru; Abe Kiyohiko; Ohi Shoichi; Itakura Naoaki
    Proceedings of the Forum on Information Technology (FIT), FIT Steering Committee (IEICE and IPSJ), 13, 3, 29-32, Aug. 2014, Peer-reviewed
    Japanese
  • Measurement of feature parameters for blink type classification using a high speed camera
    Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
    IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 134, 10, 1584-1585, 2014, Peer-reviewed, Human eye blinks include voluntary and involuntary blinks. If voluntary blinks can be classified automatically, an input decision can be made when the user's voluntary blinks occur. We have developed a method for eye blink detection using a video camera. By using this method, the feature parameters for eye blink type classification can be estimated from the measured wave patterns. We conducted experiments to measure the feature parameters using a high speed camera. In this paper, we present our method for eye blink detection and its feature parameters.
    Scientific journal, Japanese
  • Analysis of Trends in the Occurrence of Eyeblinks for an Eyeblink Input Interface
    Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe
    2014 IEEE 2ND INTERNATIONAL WORKSHOP ON USABILITY AND ACCESSIBILITY FOCUSED REQUIREMENTS ENGINEERING (USARE), IEEE, 25-31, 2014, Peer-reviewed, This paper presents the results of an analysis of trends in the occurrence of eyeblinks for devising new input channels in handheld and wearable information devices. Engineering a system that can distinguish between voluntary and spontaneous blinks is difficult, and noticeable differences between voluntary and spontaneous blinks exist for each subject. The study analyzes trends in the occurrence of eyeblinks of 50 subjects to classify blink types via experiments. Three types of trends based on shape feature parameters (duration and amplitude) of eyeblinks were discovered. This study shows that such a system can automatically and effectively classify voluntary and spontaneous eyeblinks.
    International conference proceedings, English
  • RJ-005 The Effectiveness of Method to Measure Eye Blinks Using Split-Interlaced Images
    Matsuno Shogo; Ohyama Minoru; Abe Kiyohiko; Sato Hironobu; Ohi Shoichi
    Proceedings of the Forum on Information Technology (FIT), FIT Steering Committee (IEICE and IPSJ), 12, 3, 39-42, Aug. 2013, Peer-reviewed
    Japanese
  • Automatic discrimination of voluntary and spontaneous Eyeblinks
    Shogo Matsuno; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe; Hironobu Sato
    ACHI 2013 - 6th International Conference on Advances in Computer-Human Interactions, 433-439, Jan. 2013, Peer-reviewed, This paper proposes a method for the automatic detection and discrimination of eyeblinks for use with a human-computer interface. When eyeblinks are detected, the eyeblink waveform is also acquired from the change in the eye aperture area of the subject, with a sound signal given as a cue. We compared voluntary and spontaneous blink parameters obtained by experiments, and we found that the subjects' trends for important feature parameters could be sorted into three types. As a result, the realization of automatic discrimination of voluntary and spontaneous eye blinking can be expected.
    International conference proceedings
  • Classification of blink type by a frame splitting method using hi-vision image
    Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
    IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 133, 7, 3-1300, 2013, Peer-reviewed, Recently, a human-machine interface using information from the user's eye blinks was reported. This system operates a personal computer in real-time. Eye blinks can be classified into voluntary (conscious) blinks and involuntary (unconscious) blinks. If voluntary blinks can be distinguished automatically, an input decision can be made when the user's voluntary blinks occur, increasing the usability of input. We have proposed a new eye blink detection method that uses a Hi-Vision video camera. This method utilizes split interlaced images of the eye. These split images are odd- and even-field images in the 1080i Hi-Vision format and are generated from interlaced images. The proposed method yields a time resolution that is double that of the 1080i Hi-Vision format. We refer to this approach as a "frame-splitting method". We also proposed a method for automatic eye blink extraction using this approach. The extraction method is capable of identifying the start and end points of eye blinks. In other words, the feature parameters of voluntary and involuntary blinks can be measured by this extraction method. In this paper, we propose a new classification method for eye blink types using these feature parameters.
    Scientific journal, English
  • Automatic classification of eye blink types using a frame-splitting method
    Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, 8019 LNAI, PART 1, 117-124, 2013, Peer-reviewed, Human eye blinks include voluntary (conscious) blinks and involuntary (unconscious) blinks. If voluntary blinks can be detected automatically, then input decisions can be made when voluntary blinks occur. Previously, we proposed a novel eye blink detection method using a Hi-Vision video camera. This method utilizes split interlaced images of the eye, which are generated from 1080i Hi-Vision format images. The proposed method yields a time resolution that is twice as high as that of the 1080i Hi-Vision format. We refer to this approach as the frame-splitting method. In this paper, we propose a new method for automatically classifying eye blink types on the basis of specific characteristics using the frame-splitting method.
    International conference proceedings, English
  • RJ-003 A study on discrimination of voluntary eyeblink from spontaneous eyeblink
    Matsuno Shogo; Abe Kiyohiko; Sato Hironobu; Ohi Shoichi; Ohyama Minoru
    Proceedings of FIT2012 (the 11th Forum on Information Technology), 9, FIT Steering Committee (IEICE and IPSJ), 3, 23-26, 2012, Peer-reviewed
    Japanese

MISC

  • Detailed Analysis of Blink Types Classification Using a 3D Convolutional Neural Network
    Kiyohiko Abe; Minoru Ohyama; Hironobu Sato; Shogo Matsuno
    Japan Industrial Publishing (Tokyo), May 2024, Image Laboratory (画像ラボ), 35, 5, 27-31, Japanese, 0915-6755, AN10164169
  • Special Feature, Editorial Committee New Year's Resolutions 2024: "Location-Independent Communication and AI"
    Shogo Matsuno
    The Japanese Society for Artificial Intelligence, 01 Jan. 2024, Journal of the Japanese Society for Artificial Intelligence (人工知能), 39, 1, 30-30, Japanese, 2188-2266, 2435-8614
  • Report on the 2022 Spring Symposium (the 85th)
    Shogo Matsuno
    The Operations Research Society of Japan, Aug. 2022, Communications of the Operations Research Society of Japan, 67, 8, 451-453, Japanese, 0030-3674, AN00364999
  • Analysis and Utilization of Social Data: Toward Practical Business Use of Data Science (Introducing Utilization Patterns and Practical Challenges)
    Takeshi Sakaki; Shogo Matsuno
    2021, Estrela, 330, 1343-5647, 202102219574572341

Lectures, oral presentations, etc.

  • Classification of involuntary eye movements for eye gaze input interface
    MATSUNO Shogo
    Japanese, Proceedings of the annual conference of JSAI, The Japanese Society for Artificial Intelligence, Gaze input interfaces operate computers by capturing voluntary eye movements and gaze as input signals. We are investigating methods to measure eye movement and blinking in order to develop a gaze input interface that enables varied expressions by combining involuntary physiological responses with conventional voluntary operations. This paper proposes a method to automatically discriminate between voluntary and involuntary eye blinks measured in parallel with specific voluntary eye movements, and to provide different feedback for each. Experimental evaluation shows that the proposed method discriminates voluntary eye movements with an accuracy of approximately 85%.
    2024
  • A Gaze Position Estimation System Based on Multiple Cameras for a Horizontal Workbench Display
    長野 真大; 石田 和貴; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 周歩; 山田 哲男; 杉 正夫
    Japanese, Proceedings of the JSPE Annual Meeting, The Japan Society for Precision Engineering, As one form of support for assembly work, the authors have proposed a horizontal workbench display. Prior work proposed a gaze position estimation system for the horizontal workbench display, but accuracy differences depending on where the user looks and the lack of support for head movement remained as issues. This paper proposes a gaze position estimation system using multiple cameras. To address these issues, gaze position is estimated from face images captured from multiple angles and binary images indicating head position.
    01 Mar. 2023
  • A Study on Accuracy of Worker Classification by Deep Learning using 3D Motion Data
    Kawane Ryuto; Ijuin Hiromasa; Sugi Masao; Nakajima Ryosuke; Nakada Tomohiro; Okamoto Kazushi; Matsuno Shogo; Yamada Tetsuo
    Japanese, Proceedings of JSPE Semestrial Meeting, The Japan Society for Precision Engineering, In this study, to investigate which body parts' motion data a deep learning model uses to classify workers from 3D motion data of assembly work, we perform worker classification by training on combinations of body-part data acquired with optical motion capture. We then compare the classification results for each body part to identify the parts that affect classification accuracy.
    01 Mar. 2023
  • Evolutionary computation method to discover statistically characteristic itemsets
    SHIMADA Kaoru; MATSUNO Shogo; ARAHIRA Takaaki
    Japanese, Proceedings of the Annual Conference of JSAI, The Japanese Society for Artificial Intelligence, We propose a method for discovering combinations of attributes (itemsets) against a background of statistical characteristics without obtaining frequent itemsets. The method uses evolutionary computations characterized by a network structure and a strategy to pool solutions over generations. The method directly discovers combinations of attributes such that a high correlation is observed between two continuous value variables from a database consisting of a large number of attributes as explanatory variables and two continuous value variables as objects of interest for their statistical properties. The proposed method, which seeks to achieve the discovery of small groups with statistical backgrounds from large data sets, extends the concept of frequent itemsets and provides a basis for generalizing the association rule representation.
    2022
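The statistical criterion described in the entry above (a high correlation between two continuous variables within the records matching an itemset) can be sketched as follows; the evolutionary search itself is omitted, and the data and names are illustrative assumptions:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def itemset_correlation(records, itemset):
    """Correlation of the two continuous variables over the records
    that contain every attribute in the itemset."""
    sub = [r for r in records if itemset <= r["attrs"]]
    return pearson([r["y1"] for r in sub], [r["y2"] for r in sub])

# Toy data: records with attribute sets and two continuous variables.
records = [
    {"attrs": {"a", "b"}, "y1": 1.0, "y2": 2.0},
    {"attrs": {"a", "b"}, "y1": 2.0, "y2": 4.1},
    {"attrs": {"a", "b"}, "y1": 3.0, "y2": 6.0},
    {"attrs": {"c"}, "y1": 3.0, "y2": 1.0},
]
r = itemset_correlation(records, {"a", "b"})
assert r > 0.99  # y1 and y2 are nearly proportional within {a, b}
```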
  • Motion Analysis of Worker by Motion Capture and Deep Learning Software
    川根龍人; 伊集院大将; 杉正夫; 中嶋良介; 仲田知弘; 岡本一志; 松野省吾; 山田哲男
    Proceedings of the JSPE Annual Meeting
    2022
  • Evaluation of an Evolutionary Computation Method for Continuously Discovering Association Rules
    Shogo Matsuno; Kaoru Shimada
    Technical Meeting on Web Intelligence and Interaction (WI2)
    17 Dec. 2021
  • Comparison by Gaze Measurement of the Usability of Information Presentation Positions at a Digital Yatai: Vertically Mounted Display vs. Flat-Mounted Display
    長野 真大; 山田 孟; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 哲男; 杉 正夫
    Japanese, Proceedings of the JSPE Annual Meeting, The Japan Society for Precision Engineering, To provide informational support to workers during assembly at a digital yatai (digital workbench), the authors have proposed a horizontal workbench display in which a display is embedded in the workbench. Prior work compared the horizontal workbench display with a vertically mounted display, but found no differences beyond subjective evaluation. In this study, as in the prior work, we compare the two systems experimentally while also measuring gaze during work to evaluate quantitative differences.
    03 Mar. 2021
  • Analysis and Utilization of Social Data: Toward Practical Business Use of Data Science (Introducing Utilization Patterns and Practical Challenges)
    Takeshi Sakaki; Shogo Matsuno
    Estrela
    2021
  • Verifying the impact of user follower composition on the spreadability of SNS posts
    MATSUNO Shogo; SANTI Saeyor; SAKAKI Takeshi; HINO Yasuhiro
    Japanese, Proceedings of the Annual Conference of JSAI, The Japanese Society for Artificial Intelligence, The impact of social media on the diffusion of information is becoming increasingly difficult to ignore in marketing communications and news dissemination. In particular, diffusion through social media is said to play a major role in the spread of echo chambers and fake news. In this research, we aim to clarify what factors affect the scale of information diffusion on social media in corporate PR and news dissemination. In this paper, we define the characteristics of influencers as: 1) users who have many followers who spread their posts, and 2) users who post many tweets (roughly, users who spread posts without hesitation), and we examined their influence on post spread (retweets) using a social graph constructed from Twitter records. As a result, we found that the probability of a post being spread is higher for users with either of these characteristics than for randomly selected users. In particular, the probability of a post being spread is highest for users with many followers who have a small number of followers/followers in their private graphs.
    2021
  • A Gaze Point Estimation System That Accommodates Worker Head Movement on a Horizontal Workbench Display
    山田孟; 長野真大; 中嶋良介; 仲田知弘; 松野省吾; 岡本一志; 山田哲男; 杉正夫
    Proceedings of the JSPE Annual Meeting
    2021
  • 顔画像の分析による非接触なヒューマンモニタリング技術
    松野省吾
    Invited oral presentation, 精密工学会大会学術講演会講演論文集
    2021
  • Concept of Human and Environmentally-Friendly Sustainable Production Support System by Integrating Smart Devices and Machine Learning
    伊集院大将; 中嶋良介; 杉正夫; 仲田知弘; 山田周歩; 松野省吾; 岡本一志; 滝聖子; 山田哲男
    精密工学会大会学術講演会講演論文集
    2021
  • Research and Challenges of Current AI for Generation Z by Questionnaire and Text Analysis
    山田哲男; 舛井海斗; 松野省吾; 長沢敬祐; 伊集院大将; 石垣綾; 稲葉通将; 井上全人; YU Y.; 岡本一志; 北田皓嗣; ZHOU L.; 杉正夫; 滝聖子; 中嶋良介; 仲田知弘; 大戸(藤田)恵理; 山田周歩
    横幹連合コンファレンス予稿集(Web)
    2021
  • テキストマイニングによる現在のAIのアンケート調査分析
    舛井海斗; 松野省吾; 長沢敬祐; 山田哲男
    日本経営工学会秋季大会予稿集(Web)
    2021
  • A remote experiment system for eye-gaze input interface
    阿部清彦; 佐藤寛修; 松野省吾; 大山実
    電気学会電子・情報・システム部門大会(Web)
    2021
  • Blink Types Classification by 3D Convolutional Neural Network
    佐藤寛修; 阿部清彦; 松野省吾; 大山実
    電気学会電子・情報・システム部門大会(Web)
    2021
  • Analysis on Geographic Bias in Private Graphs on Twitter towards SNS Marketing Applications
    榊剛史; 松野省吾; 檜野安弘
    電子情報通信学会技術研究報告(Web)
    2021
  • Construction of spam filter for Twitter social listening and advertising operation
    松野省吾; 水木栄; 榊剛史
    Japanese, 人工知能学会全国大会(Web), The Japanese Society for Artificial Intelligence. This paper proposes a method for constructing a filter that removes the various types of noise and spam posts that are obstacles to implementing Web marketing measures using Twitter. Advertising media costs on SNSs are increasing year by year, and more efficient targeting methods are needed. As a prerequisite, securing a low-noise information source is indispensable for analysis. Twitter, also known as microblogging, is widely used for word-of-mouth analysis and content marketing. On the other hand, unclear sentences, unique terms, and non-sentences appear more often than in ordinary blogs, Wikipedia, or Web sites, so ordinary filtering cannot handle some of this content. With social listening and promotion on Twitter in mind, the authors constructed a filter that classifies tweets that would be noise for analysis based on their characteristics and detects them by type. We experimentally evaluated the accuracy of the constructed spam filter and confirmed that tweets unnecessary for analysis were removed with an overall accuracy of about 90%. This has reduced the work involved in social listening and has enabled higher-quality analysis.
    2020
  • 音声フィードバックを用いた多肢選択瞬目入力の検討
    佐藤寛修; 阿部清彦; 松野省吾; 大山実
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201902282582452203
    28 Aug. 2019
  • 新しい形状特徴パラメータによる瞬目種類識別の性能比較—Performance Comparison of Blink Types Classification Using a Novel Feature Parameter in Blinking Waveform
    佐藤, 寛修; 阿部, 清彦; 松野, 省吾; 大山, 実
    Japanese, 研究報告, 関東学院大学理工/建築・環境学会, https://kguopac.kanto-gakuin.ac.jp/webopac/NI30003415
    We have developed an input system equipped with an eye-blink-based interface that can classify two types of voluntary (conscious) blink actions as the user's input intentions. One purpose of the input system is communication aid for the severely disabled. Classifying two types of input intentions enables us to assign an individual command to each type; for example, an input-decision function and an error-correction function can be assigned to the two types, respectively. Applying this blink-type classification to a human-computer interface can therefore improve input efficiency. Our blink measurement method analyzes variation in the pixel area of the open eye using image analysis under natural light. We previously investigated features of three waveform parameters to develop a classification method for two types of voluntary blinks. In a previous evaluation that applied this blink-type classification to input decisions in an eye-gaze input system, we observed several classification errors. This paper presents a new result of blink-type classification using a novel feature parameter of the blinking waveform, and compares the classification performance of the novel feature parameter with that of the parameter previously employed.
    pp. 119-124
    Mar. 2019
  • Constructing of the word embedding model by Japanese large scale SNS + Web corpus
    MATSUNO Shogo; MIZUKI Sakae; SAKAKI Takeshi
    Japanese, Proceedings of the Annual Conference of JSAI, http://ci.nii.ac.jp/naid/130007658900
    In this paper, we present a word embedding model constructed from Japanese text on SNSs, including Twitter. The model is created from a large-scale Japanese corpus spanning multiple media categories such as SNS data, Wikipedia, and Web pages. Evaluating the model on a word-similarity task, with Spearman's rank correlation coefficient as the evaluation index, yielded performance about 7 points better than that of a model trained on Wikipedia alone. The word embedding model presented in this paper is planned to be released through our website, and we hope its availability will make natural language processing research on SNS data more active.
    2019
  • 東京近郊を対象としたTwitterユーザの細粒度ロケーション予測に関する検討
    松野省吾; 水木栄; 榊剛史
    Webインテリジェンスとインタラクション研究会
    Dec. 2018
  • 瞬目種類識別のための複数の特徴パラメータに関する検討
    佐藤寛修; 阿部清彦; 松野省吾; 大山実
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201802270347531176
    05 Sep. 2018
  • 畳み込みニューラルネットワークによる開眼閉眼状態の識別
    阿部清彦; 佐藤寛修; 松野省吾; 大山実
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201802245313928991
    05 Sep. 2018
  • マウス型視線入力インタフェースのユーザビリティに関する検討
    阿部清彦; 佐藤寛修; 松野省吾; 大山実
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702264989887328
    06 Sep. 2017
  • 2種類の随意性瞬目と視線を用いた文字入力システムの検討
    佐藤寛修; 阿部清彦; 松野省吾; 大山実
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702278188112611
    06 Sep. 2017
  • 手首動作によるスマートデバイス向け入力インタフェースの検討
    松浦隼人; 明比宏太; 松野省吾; 水野統太; 水戸和幸; 板倉直明
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702288850499976
    06 Sep. 2017
  • 赤外線サーモグラフィを用いた顔面全体の皮膚温度による自律神経活動推定の検討
    村田禎侑; 明比宏太; 松野省吾; 水戸和幸; 板倉直明; 水野統太
    Japanese, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702221518089680
    06 Sep. 2017
  • 屋外におけるTo‐Do解決支援システムの提案
    鈴村礼治; 松野省吾; 大山実
    Japanese, 情報科学技術フォーラム講演論文集, http://jglobal.jst.go.jp/public/201702269189034606
    05 Sep. 2017
  • スマートフォンを用いた様々な移動状態を含む連続動作の識別
    小栗悠生; 松野省吾; 大山実
    Japanese, 電子情報通信学会大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702280207126324
    29 Aug. 2017
  • 屋外におけるTo-Do解決支援システムの提案
    鈴村礼治; 松野省吾; 大山実
    情報科学技術フォーラム講演論文集
    2017
  • A proposal of the character input method using flick selecting a lot of choices by a few operation degrees of freedom for smart devices
    千田進; 松野省吾; 板倉直明; 水野統太
    Japanese, 電気学会計測研究会資料, http://jglobal.jst.go.jp/public/201702270283887479
    21 Dec. 2016
  • スマートデバイスにおける少操作自由度・多選択肢型文字入力方式の提案と検討—A proposal and investigation of the character input method selecting a lot of choices by a few operation degrees of freedom for smart devices
    千田 進; 松野 省吾; 明比 宏太; 板倉 直明; 水野 統太; 水戸 和幸
    Japanese, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan / 光・量子デバイス研究会 [編], 電気学会, http://id.ndl.go.jp/bib/028585399
    22 Apr. 2016
  • transient型VEP解析を用いた脳波インタフェースにおける点灯間隔変動点滅光の輝度変化を利用した多選択肢化の検討—A study of the multiple choices for the brain computer interface based on the transient VEP analysis using blinking light with variable onset interval and changing luminance
    小佐野 涼平; 王 夢夢; 松野 省吾; 板倉 直明; 水戸 和幸; 水野 統太
    Japanese, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan / 光・量子デバイス研究会 [編], 電気学会, http://id.ndl.go.jp/bib/028585405
    22 Apr. 2016
  • H-2-7 An Automatic Classification Method of Eye blinks for a Computer Inputting Interface
    Matsuno Shogo; Ohyama Minoru; Abe Kiyohiko; Ohi Shoichi; Itakura Naoaki
    Japanese, Proceedings of the IEICE Engineering Sciences Society/NOLTA Society Conference, http://ci.nii.ac.jp/naid/110010023308
    01 Mar. 2016
  • D-9-21 A Proposal of To-Do Lists Support System using Indoor Position
    Tanaka Masatoshi; Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110010036767
    01 Mar. 2016
  • D-9-20 Study of Calorie Consumption Estimation Method using a Smartphone
    Yoshida Keisuke; Tanaka Masatoshi; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110010036766
    01 Mar. 2016
  • H-2-8 Preliminary Study on Usability Measurement of Eye-gaze Input Interface
    Abe Kiyohiko; Sato Hironobu; Matsuno Shogo; Ohi Shoichi; Ohyama Minoru
    Japanese, Proceedings of the IEICE Engineering Sciences Society/NOLTA Society Conference, http://ci.nii.ac.jp/naid/110010023309
    01 Mar. 2016
  • 顔面熱画像を利用した瞬間型感性評価技術
    水野統太; 河連俊介; 松野省吾; 明比宏太; 水戸和幸; 板倉直明
    日本感性工学会大会予稿集(CD-ROM)
    2016
  • Investigation of an input interface without operating the touch panel for smart devices
    徳田倫太郎; 千田進; 明比宏太; 松野省吾; 水戸和幸; 水野統太; 板倉直明
    電気学会計測研究会資料
    2016
  • 視線入力インタフェースのユーザビリティ計測
    阿部清彦; 佐藤寛修; 松野省吾; 大井尚一; 大山実
    電気学会電子・情報・システム部門大会講演論文集(CD-ROM)
    2016
  • O-048 A Study on Indoor/Outdoor Distinguishing Method using a Mobile Device
    Ozaki Katsuyoshi; Tanaka Masatoshi; Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988348
    24 Aug. 2015
  • O-047 A study on moving state estimation using smartphone built-in sensors
    Yoshida Keisuke; Oguri Yuki; Matsuno Shogo; Ohyama Minoru
    Japanese, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988347
    24 Aug. 2015
  • O-057 A Proposal of Context-Aware System for To-Do Lists
    Tanaka Masatoshi; Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988094
    24 Aug. 2015
  • J-039 Improvement of Eye glance Input interface with OpenCV
    Akehi Kota; Matsuno Shogo; Itakura Naoaki; Mizuno Tota; Mito Kazuyuki
    Japanese, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988182
    24 Aug. 2015
  • Study of Eye Glance Input interface with OpenCV
    明比 宏太; 松野 省吾; 板倉 直明; 水戸 和幸; 水野 統太
    Japanese, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan, http://ci.nii.ac.jp/naid/40021348348
    24 Apr. 2015
  • D-9-19 A study on Moving State Estimation in the User Device
    Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110009944921
    24 Feb. 2015
  • A-15-16 Measuring of Eye Blink using Optical-Flow
    Matsuno Shogo; Akehi Kota; Itakura Naoaki; Mizuno Tota; Mito Kazuyuki
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110009944389
    24 Feb. 2015
  • B-18-63 A Location Estimation Method considering Radio Wave Reception Sensitivity of Mobile Devices
    Tanaka Masatoshi; Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110009928689
    24 Feb. 2015
  • モバイル端末を用いた屋内外判定法の検討
    尾崎勝義; 田中幹衡; 吉田慶介; 松野省吾; 大山実
    情報科学技術フォーラム講演論文集
    2015
  • B-15-8 The Network Configuration for Switching Seamlessly Indoor and Outdoor Location-based Services
    Yoshida Keisuke; Furukawa Masahiro; Ohyama Minoru; Matsuno Shogo
    Japanese, Proceedings of the Society Conference of IEICE, http://ci.nii.ac.jp/naid/110009883097
    09 Sep. 2014
  • B-18-25 A Study on Indoor Location Estimation
    Tanaka Masatoshi; Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the Society Conference of IEICE, http://ci.nii.ac.jp/naid/110009883715
    09 Sep. 2014
  • B-15-10 A study on moving state estimation
    Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the Society Conference of IEICE, http://ci.nii.ac.jp/naid/110009883099
    09 Sep. 2014
  • 意図的な瞬目に現れる個人的特徴に関する一検討
    松野省吾; 大山実; 阿部清彦; 佐藤寛修; 大井尚一
    Japanese, 第76回全国大会講演論文集, http://ci.nii.ac.jp/naid/170000086905. Automatic discrimination between voluntary and involuntary eye blinks is desirable for using blinks as input operations to computers. Conventional blink detection based on image processing has difficulty capturing detailed blink features because of insufficient temporal resolution, so dedicated equipment has been required to classify blink types. The authors succeeded in capturing blink features even with a common NTSC video camera by using a frame-division method that splits the interlaced images of a video into fields. We further conducted an experiment using this method and report individual differences in how intentional blinks prompted by auditory cues occur.
    11 Mar. 2014
  • D-9-28 Walking States Classification Using a Smartphone
    Yoshida Keisuke; Matsuno Shogo; Ohyama Minoru
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110009827770
    04 Mar. 2014
  • 屋内外位置情報サービスをシームレスに切り替え可能なネットワーク構成の検討
    吉田慶介; 古川雅大; 大山実; 松野省吾
    電子情報通信学会大会講演論文集(CD-ROM)
    2014
  • A-15-11 Measurement of Blink Feature Parameters using a High Speed Camera
    Abe Kiyohiko; Sato Hironobu; Matsuno Shogo; Ohi Shoichi; Ohyama Minoru
    Japanese, Proceedings of the IEICE General Conference, http://ci.nii.ac.jp/naid/110009699288
    05 Mar. 2013

Affiliated academic society

  • Apr. 2022 - Present
    The Japanese Society for Artificial Intelligence
  • 2018 - Present
    Association for Computing Machinery
  • 2015 - Present
    THE INSTITUTE OF ELECTRICAL ENGINEERS OF JAPAN
  • 2014 - Present
    The Institute of Electrical and Electronics Engineers
  • 2012 - Present
    INFORMATION PROCESSING SOCIETY OF JAPAN
  • 2011 - Present
    THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS

Research Themes

  • 仮想空間におけるアバターを介した感性情報共有の高解像度化
    松野 省吾
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Gunma University, Grant-in-Aid for Early-Career Scientists, 24K20878
    01 Apr. 2024 - 31 Mar. 2028
  • 視線と瞬目を用いたキャリブレーションフリー入力インタフェース
    阿部 清彦; 佐藤 寛修; 松野 省吾
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Tokyo Denki University, Grant-in-Aid for Scientific Research (C), Coinvestigator. Eye-gaze input, which operates a computer using information on the user's gaze and eye blinks, can be used even by people who have difficulty using common input interfaces, such as the severely physically disabled. However, most conventional eye-gaze input interfaces require calibration for each user before use, which makes them cumbersome. In this research, we use a convolutional neural network to develop a new input interface that captures gaze and blink information in real time from images of the user's eye region taken with a laptop's built-in camera and uses this information to operate the PC. A major feature of this eye-gaze input interface is that it requires no calibration. In FY2021, we created a new learning model for gaze-direction classification: whereas our previous method discriminated five gaze directions (up, down, left, right, and center), the new model adds upper left and upper right for a total of seven directions. This improves the operability of cursor movement on the PC screen and enables an input interface in which not only cursor movement but also operations such as switching input screens can be performed easily by gaze alone. We also developed a new method for detecting the user's eye blinks by applying a 3D-CNN, which extends the convolutional neural network along the time axis. Training the 3D-CNN on video of the eye region capturing conscious and unconscious blinks, we experimentally confirmed that conscious blinks can be detected with high accuracy and without calibration. Of the methods developed in FY2021, gaze-direction classification can be processed in real time on a common PC, but blink detection is still performed offline. We will realize real-time blink detection and combine it with the gaze-direction classification method to build a practical eye-gaze input interface., 21K12801
    01 Apr. 2021 - 31 Mar. 2024
  • Remote communication support interface of naturally conveys the work of the mind
    松野 省吾
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, Gunma University, Grant-in-Aid for Early-Career Scientists. In this research, aiming to qualitatively improve communication in remote and virtual spaces, we develop a communication support interface that activates nonverbal interaction through kansei (affective) information. Specifically, by capturing the user's gaze and eye blinks and measuring the expression of the speaker's affective information during communication, we attempt to restore the nonverbal interaction that is lost in remote communication and to build a system that promotes the mutual sharing of information. The component technologies of such a system are gaze measurement and the detection and classification of blink types; of these, the principal investigator has already developed a practical blink-type classification method, and this research investigates the relationship between the expression of the speaker's affective information and the occurrence of gaze shifts and blinks. Gaze shifts and blinks are relatively fast physiological phenomena that complete within several hundred milliseconds. We therefore use a dedicated measurement device (an eye tracker) to record the speaker's gaze shifts during communication with high temporal resolution and are investigating parameters that can serve as a basis for estimating affective information. By combining our blink classification algorithm with the API provided by the eye tracker, we developed software that automatically quantifies specific movements, and we confirmed through preliminary experiments that automatic gaze measurement and blink classification are possible even in the measurement environment built in this research., 21K17841
    01 Apr. 2021 - 31 Mar. 2024
  • 表情による駆け引きを実現するアバター間コミュニケーション技術の構築
    公益財団法人 中山隼雄科学技術文化財団, Research Grant Program, FY2021 Funded Research (A-2), Principal investigator
    Mar. 2022 - Feb. 2024
  • Eye glance and electroencephalogram input interface system adapted to portable devices
    Itakura Naoaki; Mito Kazuyuki; Mizuno Tota; Matsuno Shogo; Akehi Kota
    Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research, The University of Electro-Communications, Grant-in-Aid for Scientific Research (C). We conducted research to realize an input interface that enables input to a portable device, without using the fingers, by means of eye glances or electroencephalograms (EEGs). For eye-glance input, momentary glances toward another point (eye glances) were used; for EEG input, a blinking-light stimulus with a variable lighting interval was used. For eye-glance input, eye glances could be detected with a discrimination rate of 90% or more by image analysis using OpenCV. For EEG input, the blinking stimulus that the subject gazed at could be detected with a discrimination rate of 90% or more by transient-type EEG analysis using the variable-lighting-interval stimulus. Furthermore, we proposed a general-purpose design that allows many options to be input with a small number of degrees of freedom as an interface for portable devices, and examined a gesture input method using wrist movement., 16K01538
    01 Apr. 2016 - 31 Mar. 2019