松野 省吾
Department of Informatics | Associate Professor |
Cluster I (Informatics) | Associate Professor |
Career
Awards
Papers
- Performance Improvement of 3D-CNN for Blink Types Classification by Data Augmentation
Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan (IEE Japan), 144巻, 4号, 掲載ページ 328-329, 出版日 2024年04月01日
研究論文(学術雑誌) - 非随意動作を用いる視線入力インタフェースの検討
松野 省吾
人工知能学会全国大会論文集, 一般社団法人 人工知能学会, JSAI2024巻, 掲載ページ 4Xin226-4Xin226, 出版日 2024年, An eye-gaze input interface is generally a device that captures voluntary gaze movements and fixations as input signals for operating a computer or other equipment. This study aims to develop an eye-gaze input interface that combines conventional voluntary operations with involuntary physiological responses to enable richer expression, and examines methods for measuring gaze movements and eye blinks. This paper proposes a classification method that automatically discriminates between voluntary and involuntary blinks measured in parallel with specific voluntary gaze movements, so that different feedback can be provided for each. An evaluation experiment showed that the proposed method discriminated voluntary eye movements with approximately 85% accuracy.
日本語 - Detailed Analysis of Blink Types Classification Using a 3D Convolutional Neural Network
Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan (IEE Japan), 143巻, 9号, 掲載ページ 971-978, 出版日 2023年09月01日
研究論文(学術雑誌) - Discovery of Contrast Itemset with Statistical Background Between Two Continuous Variables
Kaoru Shimada; Shogo Matsuno; Shota Saito
Big Data Analytics and Knowledge Discovery, Springer Nature Switzerland, 14148 LNCS巻, 掲載ページ 114-119, 出版日 2023年08月10日, We previously defined ItemSB as an extension of the concept of frequent itemsets and a new interpretation of the association rule expression, which has statistical properties in the background. We also proposed a method for its discovery by applying an evolutionary computation called GNMiner. ItemSB has the potential to become a new baseline method for data analysis that bridges the gap between conventional data analysis using frequent itemsets and statistical analyses. In this study, we examine the statistical properties of ItemSB, focusing on the setting between two continuous variables, including their correlation coefficients, and how to apply ItemSB to data analysis. As an extension of the discovery method, we define ItemSB that focuses on the existence of differences between two datasets as contrast ItemSB. We further report the results of evaluation experiments conducted on the properties of ItemSB from the perspective of reproducibility and reliability using contrast ItemSB.
論文集(書籍)内論文 - Construction of Evaluation Datasets for Trend Forecasting Studies
Shogo Matsuno; Sakae Mizuki; Takeshi Sakaki
Proceedings of the International AAAI Conference on Web and Social Media, Association for the Advancement of Artificial Intelligence (AAAI), 17巻, 掲載ページ 1041-1051, 出版日 2023年06月02日, In this study, we discuss issues in the traditional evaluation norms of trend forecasts, outline a suitable evaluation method, propose an evaluation dataset construction procedure, and publish Trend Dataset: the dataset we have created. As trend predictions often yield economic benefits, trend forecasting studies have been widely conducted. However, a consistent and systematic evaluation protocol has yet to be adopted. We consider that the desired evaluation method would address the performance of predicting which entity will trend, when a trend occurs, and how much it will trend based on a reliable indicator of the general public's recognition as a gold standard. Accordingly, we propose a dataset construction method that includes annotations for trending status (trending or non-trending), degree of trending (how well it is recognized), and the trend period corresponding to a surge in recognition rate. The proposed method uses questionnaire-based recognition rates interpolated using Internet search volume, enabling trend period annotation on a weekly timescale. The main novelty is that we survey when the respondents recognize the entities that are highly likely to have trended and those that haven't. This procedure enables a balanced collection of both trending and non-trending entities. We constructed the dataset and verified its quality. We confirmed that the interests of entities estimated using Wikipedia information enables the efficient collection of trending entities a priori. We also confirmed that the Internet search volume agrees with public recognition rate among trending entities.
研究論文(学術雑誌) - 水平作業台ディスプレイのための複数カメラに基づく視線位置推定システム
長野 真大; 石田 和貴; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 周歩; 山田 哲男; 杉 正夫
精密工学会学術講演会講演論文集, 公益社団法人 精密工学会, 2023S巻, 掲載ページ 34-35, 出版日 2023年03月01日, As a form of work support for assembly tasks, the authors have proposed a horizontal workbench display. A previous study proposed a gaze-position estimation system for the horizontal workbench display, but its accuracy varied depending on where the worker was looking, and it could not handle head movement. This paper proposes a gaze-position estimation system using multiple cameras. To address these issues, gaze position is estimated from face images captured from multiple angles and from binary images that indicate head position.
日本語 - モーションキャプチャーと機械学習による作業の身体動作の可視化と作業者の推定方法—Work Movement Visualization and Worker Estimation Methods Using Motion Capture and Machine Learning—2022年度春季研究発表大会特集
川根 龍人; 伊集院 大将; 杉 正夫; 中嶋 良介; 岡本 一志; 仲田 知弘; 松野 省吾; 山田 哲男
日本設備管理学会誌 = Journal of the Society of Plant Engineers Japan / 日本設備管理学会編集事務局 編, 日本設備管理学会, 34巻, 4号, 掲載ページ 111-121, 出版日 2023年02月
日本語 - Detection of voluntary eye movement for analysis about eye gaze behaviour in virtual communication.
Shogo Matsuno
Hci (43), 1832 CCIS巻, 掲載ページ 273-279, 出版日 2023年, In this study, we aim to realize smoother communication between avatars in virtual space and discuss an eye-gaze interaction method for avatar communication. This requires a detection algorithm for specific gaze movements, which is needed to measure characteristic eye movements, blinks, and pupil movements. We developed methods for counting these characteristic movements using an HMD built-in eye tracking system. Although HMDs are head-mounted, most input devices used in current virtual reality and augmented reality rely on hand gestures, head tracking, and voice input. Therefore, in order to use eye expressions as a hands-free input modality, we consider an eye-gaze input interface that does not depend on the measurement accuracy of the measurement device. Previous eye-gaze interfaces suffer from the so-called “Midas touch” problem, a trade-off between input speed and input errors. Using the methods we have developed so far, which treat characteristic eye movements and voluntary blinks as an input channel, we aim to realize an input method that does not hinder the acquisition of other meta-information such as eye gestures. Moreover, based on involuntary characteristic eye movements expressed unconsciously by the user, such as gaze shifts, we discuss a system that enables “expression tactics” in virtual space by providing natural feedback of the avatar's emotional expression and movement patterns. As a first step, we report the results of measuring eye movements in face-to-face experiments in order to extract features of gaze and blinking.
研究論文(国際会議プロシーディングス) - ItemSB: Itemsets with Statistically Distinctive Backgrounds Discovered by Evolutionary Method
Kaoru Shimada; Takaaki Arahira; Shogo Matsuno
International Journal of Semantic Computing, 16巻, 3号, 掲載ページ 357-378, 出版日 2022年09月01日, In this paper, we propose a method for discovering combinations of attributes (i.e. itemsets) against a background of statistical characteristics without obtaining frequent itemsets. The method considers a database with numerous attributes and can directly find a combination of highly correlated attributes from small populations in two consecutive variables of interest even from an incomplete database. As the proposed method determines local patterns in large-scale data, it may be used as a basis for large-scale data analysis. Evolutionary computations characterized by a network structure and a strategy to pool solutions are used throughout generations. Moreover, association rules are used to generalize the analysis method as itemsets with statistically distinctive backgrounds (ItemSBs). The class-association rules used for classification constitute a discovery method of attribute combinations, which are characteristic when the ratio of class attributes is obtained. The proposed method is an extension to statistical bivariate analysis. In addition, we determine contrast ItemSBs that are statistically different between two subgroups of data while satisfying the same conditions. Experimental results show the characteristics and effectiveness of the proposed method.
研究論文(国際会議プロシーディングス) - Evolutionary operation setting for outcome accumulation type evolutionary rule discovery method
Shogo Matsuno; Kaoru Shimada
GECCO 2022 Companion - Proceedings of the 2022 Genetic and Evolutionary Computation Conference, 掲載ページ 451-454, 出版日 2022年07月09日, Association rule analysis has been widely employed as a basic technique for data mining. Extensive research has also been conducted to apply evolutionary computing techniques to the field of data mining. This study presents a method to evaluate the settings of evolutionary operations in evolutionary rule discovery method, which is characterized by the execution of overall problem solving through the acquisition and accumulation of small results. Since the purpose of population evolution is different from that of general evolutionary computation methods that aim at discovering elite individuals, we examined the difference in the concept of settings during evolution and the evaluation of evolutionary computation by visualizing the progress and efficiency of problem solving. The rule discovery method (GNMiner) is characterized by reflecting acquired information in evolutionary operations; this study determines the relationship between the settings of evolutionary operations and the progress of each task execution stage and the achievement of the final result. This study obtains knowledge on the means of setting up evolutionary operations for efficient rule-set discovery by introducing an index to visualize the efficiency of outcome accumulation. This implies the possibility of setting up dynamic evolutionary operations in the outcome accumulation-type evolutionary computation in future studies.
研究論文(国際会議プロシーディングス) - 水平作業台ディスプレイにおける作業者の注視点推定システム—Estimation System of Worker's Eye Gaze for Display-Mounted Workbench—2021年度春季研究発表大会特集
山田 孟; 杉 正夫; 長野 真大; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 哲男
日本設備管理学会誌 = Journal of the Society of Plant Engineers Japan / 日本設備管理学会編集事務局 編, 日本設備管理学会, 34巻, 1号, 掲載ページ 8-14, 出版日 2022年04月, 査読付
研究論文(学術雑誌), 日本語 - An Evaluation of Large-scale Information Network Embedding based on Latent Space Model Generating Links
Shotaro Kawasaki; Ryosuke Motegi; Shogo Matsuno; Yoichi Seki
ACM International Conference Proceeding Series, 掲載ページ 164-170, 出版日 2022年01月21日, Graph representation learning encodes vertices as low-dimensional vectors that summarize their graph position and the structure of their local graph neighborhood. These methods give us beneficial representations in continuous space from big relational data. However, the algorithms are usually evaluated indirectly through the accuracy of applying the learning results to classification tasks, because no correct answer is available when graph representation learning is applied. Therefore, this study proposes a method to evaluate graph representation learning algorithms by preparing correct learning results for the data: objects are distributed in a latent space in advance, and relational graph data are generated probabilistically from the distributions in the latent space. Using this method, we evaluated LINE: Large-scale information network embedding, one of the most popular algorithms for learning graph representations. LINE consists of two algorithms optimizing two objective functions defined by first-order proximity and second-order proximity. We prepared two link-generating models suitable for these two objective functions and clarified that the corresponding LINE algorithm performed well for the link data generated by each model.
研究論文(国際会議プロシーディングス) - Effective Evolutionary Manipulation Setting in GNP-based rule mining method
Shogo Matsuno; Kaoru Shimada; Takaaki Arahira
筆頭著者, 27th International Symposium on Artificial Life and Robotics, 出版日 2022年01月, 査読付
研究論文(国際会議プロシーディングス) - Blink State Classification Using 3D Convolutional Neural Network
Hironobu Sato; Kiyohiko Abe; Minoru Ohyama; Shogo Matsuno
27th International Symposium on Artificial Life and Robotics, 出版日 2022年01月, 査読付
研究論文(国際会議プロシーディングス) - Blink input interface enabling multiple candidate selection through sound feedback
Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 26巻, 3号, 掲載ページ 312-317, 出版日 2021年08月, Several input interfaces that employ eye-blinking detection have been proposed. These input interfaces can detect operating intentions indicated by eye-blink actions. In this study, we propose a new blink input interface with the aim of increasing the classifiable blink types in addition to the regular voluntary blink. The proposed interface employs sound feedback that facilitates multiple candidate selection. The feedback sound represents each input candidate and is generated while switching over time. This interface enables the user to input their operating intentions by controlling the timing of eye opening. We developed a prototype with the proposed interface and evaluated the system. The results indicated that a total classification rate of 96.5% is achieved for four types of voluntary blinks with 10 subjects. This classification rate is in accordance with that of our previous report for two types of voluntary blinks. Thus, we concluded that our proposed method is reliable.
研究論文(学術雑誌), 英語 - Product and Corporate Culture Diffusion via Twitter Analytics: A Case of Japanese Automobile Manufactures
Yuta Kitano; Shogo Matsuno; Tetsuo Yamada; Kim Hua Tan
責任著者, 26th International Conference on Production Research (ICPR)2021, 出版日 2021年07月, 査読付
研究論文(国際会議プロシーディングス), 英語 - Evolutionary Method for Two-dimensional Associative Local Distribution Rule Mining
Kaoru Shimada; Takaaki Arahira; Shogo Matsuno
Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI, 2021-November巻, 掲載ページ 1018-1025, 出版日 2021年, In this paper, we propose a rule discovery method that can reveal a combination of attributes that provide characteristic distribution of two consecutive variables of interest directly at high speed in a database having many attributes. In numerical association rule mining (NARM), when using association rules that handle consecutive numerical data values, it is difficult to heuristically extract rules that focus on statistical distributions of numerical data. The proposed method enables quick discovery of the number of rules necessary for prediction purposes using evolutionary calculations characterized by a network structure and a strategy to pool solutions throughout generations. This effectively finds attribute combinations in which the values taken by two consecutive variables of interest are both narrow ranges and can address instance-based two-dimensional regression problems in a short time. As an evaluation experiment, a prediction task using musical data linked with map data was carried out, and the discovery condition of the flexible rule was set. This resulted in realizing a high coverage rate in the instance-based regression problem, and the proposed method was effective in rule discovery based on the statistical distribution in NARM.
研究論文(国際会議プロシーディングス) - Evolutionary Method to Discover Itemsets with Statistically Distinctive Backgrounds
Kaoru Shimada; Takaaki Arahira; Shogo Matsuno
Proceedings - 2021 IEEE 4th International Conference on Artificial Intelligence and Knowledge Engineering, AIKE 2021, 掲載ページ 113-120, 出版日 2021年, In this paper, we propose a method for discovering combinations of attributes (itemsets) against a background of statistical characteristics without obtaining frequent itemsets. The method consists of a database with numerous attributes and can directly find a combination of attributes that are highly correlated with small populations in two consecutive variables of interest in an incomplete database. It determines locally found patterns in large-scale data, and is expected to be the basis of large-scale data analysis. It uses evolutionary computations characterized by a network structure and a strategy to pool solutions throughout the generations. Moreover, it uses association rules to generalize the analysis method. The class-association rules used for classification are a discovery method of attribute combinations, which are characteristic when the ratio of class attributes is noted. The proposed method can be positioned as an extension to the statistical analysis method of the bivariate. The results of the evaluation experiment show the characteristics and effectiveness of the method.
研究論文(国際会議プロシーディングス) - Classification of Intentional Eye-blinks using Integration Values of Eye-blink Waveform
Shogo Matsuno; Minoru Ohyama; Hironobu Sato; Kiyohiko Abe
2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), IEEE, 掲載ページ 1255-1261, 出版日 2020年, We propose a method to automatically classify eye-blink types using the integral value of the eye-blink waveform. The method is intended for an eye-based input interface and performs automatic detection of intentional blinks. With the recent spread of gaze tracking and gaze input interfaces, attempts to treat eye gestures and blinks as input channels in addition to conventional gaze input have been studied. However, classifying an eye-blink as intentional or spontaneous using existing eye-blink classification methods is difficult, because eye-blinks are highly individual motions that are significantly influenced by various conditions. Therefore, in this research, we construct a more robust measurement environment that does not require strict settings, such as fixing the relative distance between the face and the camera, even for non-contact measurement. To realize this, we defined new feature parameters that correct for individual differences in video captured by a web camera, assuming application to a mobile interface. The proposed method automatically detects intentional blinks by automatically determining the threshold between blink types based on the waveform integral value as a new feature parameter. We also constructed a blink measurement system and evaluated the proposed method by experiment. The system splits each interlaced image into separate fields to measure blinks with sufficient temporal resolution. It then extracts the waveform feature parameters and automatically classifies the eye-blink types. Experimental results show successful classification of intentional eye-blinks with 86% average accuracy, demonstrating higher accuracy than conventional methods based on eye-blink duration.
研究論文(国際会議プロシーディングス), 英語 - Examination of Stammering Symptomatic Improvement Training Using Heartbeat-Linked Vibration Stimulation.
Shogo Matsuno; Yuya Yamada; Naoaki Itakura; Tota Mizuno
Augmented Cognition. Human Cognition and Behavior - 14th International Conference, Springer, 12197 LNAI巻, 掲載ページ 223-232, 出版日 2020年, In this paper, we propose a training method to improve the stammering symptom, which automatically adjusts the rhythm of speech using vibrational stimulation linked to heart rate through a smart watch. We focus on the rhythm control effect by vibration, confirmed by the tactile stimulation training, and propose a training method to improve the symptoms of stammering while automatically adjusting the rhythm of the utterance based on the heartbeat-linked vibration stimuli. In addition, a system using the proposed method is constructed, and its effects on the heart rate by providing vibration stimulation to stutterers and on stammering symptoms are investigated. We present the effectiveness of stammering improvement training through vibration stimuli by experimenting with eight subjects.
研究論文(国際会議プロシーディングス) - Improved Advertisement Targeting via Fine-grained Location Prediction using Twitter.
Shogo Matsuno; Sakae Mizuki; Takeshi Sakaki
Companion of The 2020 Web Conference 2020, ACM / IW3C2, 掲載ページ 527-532, 出版日 2020年, With the growing demand for social network service advertisements, more accurate targeting methods are required. Therefore, the authors of this study try to predict the location of Twitter users in a more fine-grained manner with regard to area marketing. Specifically, the visit probability to each segment (e.g., prefecture and city) is predicted by a label propagation algorithm, and the user's location information is obtained from a geo-tagged tweet and a user profile as a label. The proposed method predicts which Twitter users may visit the corresponding area with improved granularity, to the level of an oaza or a block. As a verification experiment, we construct a system that outputs a list of users who are likely to visit the location when an oaza name is entered based on the proposed method, and we also try to predict the possibility of users visiting each segment. The results show that the average accuracy is 73% at the prefecture level, 42% at the city level, and 25% at the oaza level. In addition, the prediction accuracy in Tokyo alone is 31% at the oaza level. These results indicate the effectiveness of the proposed method.
研究論文(国際会議プロシーディングス) - Advanced eye-gaze input system with two types of voluntary blinks
Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 24巻, 3号, 掲載ページ 324-331, 出版日 2019年09月, 査読付, Recently, several eye-gaze input systems have been developed. Some of these systems consider the eye-blinking action to be additional input information. A main purpose of eye-gaze input systems is to serve as a communication aid for the severely disabled. The input system, which employs eye blinks as command inputs, needs to identify voluntary (conscious) blinks. In the past, we developed an eye-gaze input system for the creation of Japanese text. Our previous system employed an indicator selection method for command inputs. This system was able to identify two types of voluntary blinks. These two types of voluntary blinks work as functions governing indicator selection and error correction, respectively. In the evaluation experiment of the previous system, errors were occasionally observed in the estimation of the number of indicators at which the user was gazing. In this study, we propose a new input system that employs a selection method based on a novel indicator estimation algorithm. We conducted an experiment to evaluate the performance of Japanese text creation using our new input system. This study reports that using our new input system improves the speed of text input. In addition, we demonstrate a comparison of the various related eye-gaze input systems.
研究論文(学術雑誌), 英語 - A method of character input for the user interface with a low degree of freedom
Shogo Matsuno; Susumu Chida; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 24巻, 2号, 掲載ページ 250-256, 出版日 2019年06月, 査読付, In recent times, smart devices equipped with small touch panels have become very popular. Many such smart devices use a software keyboard for character input. Unfortunately, software keyboards have a limitation: the number of buttons and the input degrees of freedom remain the same, because a button and an input value correspond one-to-one. Thus, if we reduce the screen size while the button size remains the same, the number of buttons must decrease. Alternatively, if we maintain the number of buttons and reduce the screen size, the size of the button decreases, making input increasingly difficult. In this study, we investigate a new character input method that is specifically adapted for use on small screens. The proposed input interface has 4x2 operational degrees of freedom and consists of four buttons and two actions. By handling two operations as one input, 64 input options are secured. Additionally, we experimentally evaluate the proposed character input user interface deployed on a smart device. The proposed method enables an input of approximately 25 characters per minute, and it shows robust input performance on a small screen compared to previous software methods. Thus, the proposed method is better suited to small screens than were previous methods.
研究論文(学術雑誌), 英語 - An analysis method for eye motion and eye blink detection from colour images around ocular region
Shogo Matsuno; Naoaki Itakura; Tota Mizuno
INTERNATIONAL JOURNAL OF SPACE-BASED AND SITUATED COMPUTING, INDERSCIENCE ENTERPRISES LTD, 9巻, 1号, 掲載ページ 22-30, 出版日 2019年, 査読付, This paper examines an analysis and measurement method that aims to incorporate eye motion and eye blinks as functions of an eye-based human-computer interface. The proposed method analyses visible-light images captured without special equipment, in order to make the eye-controlled input interface independent of gaze-position measurement. Specifically, we propose two algorithms, one for detecting eye motion using optical flow and the other for classifying voluntary eye blinks using changes in eye aperture area. In addition, we conducted an evaluation experiment that ran both detection algorithms simultaneously, assuming application to an input interface. The two algorithms were examined for applicability in an eye-based input interface. As a result of the experiments, real-time detection of eye motion and eye blinks was performed with an average accuracy of about 80%.
研究論文(学術雑誌), 英語 - Investigation of Context-aware System Using Activity Recognition
Yuki Watanabe; Reiji Suzumura; Shogo Matsuno; Minoru Ohyama
2019 1ST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION (ICAIIC 2019), IEEE, 掲載ページ 287-291, 出版日 2019年, 査読付, The physical activity is important context information to define and understand the user's situation in real time and in detail. Therefore, we developed a context-aware function using the activity recognition and showed that it is possible to provide more appropriate support according to the user's situation. In this study, we first constructed a model by applying machine learning to data sensed by a smartphone in order to predict the physical activity of the user. In the experiment, high accuracy of 97.6% was obtained by using the model. Next, we developed three functions using the activity recognition. These functions predict the physical activity of user in real time. In addition, user support is performed according to the predicted physical activity. In the experiment using developed functions, it is confirmed that these functions worked correctly in real-world conditions.
研究論文(国際会議プロシーディングス), 英語 - Expanding the Freedom of Eye-gaze Input Interface using Round-Trip Eye Movement under HMD Environment.
Shogo Matsuno; Hironobu Sato 0001; Kiyohiko Abe; Minoru Ohyama
International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments, ICAT-EGVE 2019, Posters and Demos, Eurographics Association, 掲載ページ 21-22, 出版日 2019年, In this paper, we propose a specific gaze movement detection algorithm, which is necessary for implementing a gaze movement input interface using an HMD built-in eye tracking system. Although HMDs are head-mounted, most input devices used in current virtual reality and augmented reality are hand-held devices, hand gestures, head tracking, and voice input. Therefore, in order to use eye expressions as a hands-free input modality, we consider a gaze input interface that does not depend on the measurement accuracy of the measurement device. The proposed method assumes eye movement input, which differs from the eye-gaze position input typically implemented with an eye tracking system. Specifically, by using round-trip eye movements in an oblique direction as an input channel, it aims to realize an input method that does not block the view with a screen display and does not hinder the acquisition of other gaze-related meta information. Moreover, the proposed algorithm is implemented on an HMD, and the detection accuracy of the round-trip eye movement is evaluated by experiments. Averaging the detection results of 5 subjects yielded 90% detection accuracy. The results show that the method is accurate enough to develop an input interface using eye movement.
研究論文(国際会議プロシーディングス) - Examination of multi-optioning for cVEP-based BCI by fluctuation of indicator lighting intervals and luminance
Shogo Matsuno; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), IEEE, 2019-October巻, 掲載ページ 2743-2747, 出版日 2019年, Brain-computer interfaces (BCIs) using visual evoked potentials (VEPs) induced by blinking light have been widely studied. We examined changing the phase and frequency of the blinking light with variable onset intervals and changing luminance to realize multiple choices in a BCI based on code-modulated visual evoked potentials (cVEP) using transient VEPs. We used three types of blinking lights with fluctuating intervals in addition to one blinking light with a fixed interval and attempted to discriminate which of the four blinking lights a subject observed. The amplitude of the averaged VEPs with fluctuating intervals decreased when a subject did not observe the blinking light. The discrimination rate for the blinking lights was approximately 84%. In addition, we examined changing the luminance of the blinking stimulus to increase the number of choices. In this study, we obtained VEPs based on the synchronous addition method and verified the effects of changing the luminance and lighting interval of the indicators. Our study thus aims to increase the number of simultaneous choices of a cVEP-based BCI using blinking light with variable onset intervals and changing luminance. In this paper, we report the possibility of multiple choices through experiments evaluating the proposed methods.
研究論文(国際会議プロシーディングス), 英語 - Estimating autonomic nerve activity using variance of thermal face images
Shogo Matsuno; Tota Mizuno; Hirotoshi Asano; Kazuyuki Mito; Naoaki Itakura
ARTIFICIAL LIFE AND ROBOTICS, SPRINGER, 23巻, 3号, 掲載ページ 367-372, 出版日 2018年09月, 査読付, In this paper, we propose a novel method for evaluating mental workload (MWL) using variances in facial temperature. Moreover, our method aims to evaluate autonomic nerve activity using single facial thermal imaging. The autonomic nervous system is active under MWL. In previous studies, temperature differences between the nasal and forehead portions of the face were used in MWL evaluation and estimation. Hence, nasal skin temperature (NST) is said to be a reliable indicator of autonomic nerve activity. In addition, autonomic nerve activity has little effect on forehead temperature; thus, temperature differences between the nasal and forehead portions of the face have also been demonstrated to be a good indicator of autonomic nerve activity (along with other physiological indicators such as EEG and heart rate). However, these approaches have not considered temperature changes in other parts of the face. Thus, we propose novel method using variances in temperature for the entire face. Our proposed method enables capture of other parts of the face for temperature monitoring, thereby increasing evaluation and estimation accuracy at higher sensitivity levels than conventional methods. Finally, we also examined whether further high-precision evaluation and estimation was feasible. Our results proved that our proposed method is a highly accurate evaluation method compared with results obtained in previous studies using NST.
研究論文(学術雑誌), 英語 - Investigation on Estimation of Autonomic Nerve Activity of VDT Workers Using Characteristics of Facial Skin Temperature Distribution
Tomoyuki Murata; Kota Akehi; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura; Tota Mizuno
23rd International Symposium on Artificial Life and Robotics, OS18-1巻, 掲載ページ 845-848, 出版日 2018年01月, 査読付
英語 - A Method of Character Input under the Operating Environment with Low Degree of Freedom
Shogo Matsuno; Susumu Chida; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
23rd International Symposium on Artificial Life and Robotics, OS18-2巻, 掲載ページ 849-853, 出版日 2018年01月, 査読付
英語 - Improvement of Text Input System Using Two Types of Voluntary Blinks and Eye-Gaze Information
Hironobu Sato; Kiyohiko Abe; Shogo Matsuno; Minoru Ohyama
23rd International Symposium on Artificial Life and Robotics, OS18-3巻, 掲載ページ 854-858, 出版日 2018年01月, 査読付
英語 - Activity Estimation Using Device Positions of Smartphone Users
Yuki Oguri; Shogo Matsuno; Minoru Ohyama
ADVANCES IN NETWORK-BASED INFORMATION SYSTEMS, NBIS-2017, SPRINGER INTERNATIONAL PUBLISHING AG, 7巻, 掲載ページ 1126-1135, 出版日 2018年, 査読付, Various activity-based services have been created for use by smartphone users. In the field of activity recognition, researchers frequently use smartphones or devices equipped with built-in sensors to estimate activities. However, in contrast to wristwatch devices that are worn on the arm, users may change the position of the smartphone depending on their situation; this may include placing the device in a bag or pocket. Therefore, a change in the device position should be considered when estimating activities using a smartphone. Considerable research has been conducted under conditions in which a smart phone is placed in a trouser pocket, however, few studies have focused on the changing context and location of the smartphone. Using the Support Vector Machine (SVM) on an Android smartphone, this paper classifies seven types of activity with three types of smartphone position. The results of an experiment conducted with seven smartphone users, indicate that seven possible states were classified with an average accuracy of greater than 95.75%, regardless of the device position.
研究論文(国際会議プロシーディングス), 英語 - Tourist Support System Using User Context Obtained from a Personal Information Device.
Shogo Matsuno; Reiji Suzumura; Minoru Ohyama
Adjunct Publication of the 26th Conference on User Modeling, Adaptation and Personalization, ACM, 掲載ページ 91-95, 出版日 2018年, 査読付, Tour planning is a difficult task for those who visit unfamiliar city destinations. Furthermore, building an itinerary becomes more difficult as the number of options, which can be incorporated into travel, increases. The authors aim to propose place of interest (POI) according to the narrative strategy of a tour guide to realize a better personalized mobile tour guide system and establish a method to support efficient route scheduling. As a basic stage, we will herein consider a method of naturally collecting context information of users through an interaction between users and information terminals. In addition, we will introduce a POI recommendation application using the context information being developed.
研究論文(国際会議プロシーディングス) - Discrimination of Eye Blinks and Eye Movements as Features for Image Analysis of the Around Ocular Region for Use as an Input Interface.
Shogo Matsuno; Masatoshi Tanaka; Keisuke Yoshida; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
Innovative Mobile and Internet Services in Ubiquitous Computing - Proceedings of the 12th International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing(IMIS), Springer, 773巻, 掲載ページ 171-182, 出版日 2018年, 査読付, This paper examines an input method for ocular analysis that incorporates eye-motion and eye-blink features to enable an eye-controlled input interface that functions independent of gaze-position measurement. This was achieved by analyzing the visible light in images captured without using special equipment. We propose applying two methods. One method detects eye motions using optical flow. The other method classifies voluntary eye blinks. The experimental evaluations assessed both identification algorithms simultaneously. Both algorithms were also examined for applicability in an input interface. The results have been consolidated and evaluated. This paper concludes by considering of the future of this topic.
研究論文(国際会議プロシーディングス) - Where can we accomplish our To-Do?: estimating the target location by analyzing the task
Reiji Suzumura; Shogo Matsuno; Minoru Ohyama
PROCEEDINGS 2018 IEEE 32ND INTERNATIONAL CONFERENCE ON ADVANCED INFORMATION NETWORKING AND APPLICATIONS (AINA), IEEE, 2018-May巻, 掲載ページ 457-463, 出版日 2018年, 査読付, Reminders are used in various situations. The users of a location-based reminder system must specify the location to receive the notification, and spend extra time doing so. However, if the location can be estimated from the task that the user enters, this step can be eliminated. The authors focused on To-Do lists, i.e., tasks the user wants to accomplish. It was assumed that the location where the To-Do can be accomplished could be estimated by parsing the To-Do. In this paper, the authors propose a method of estimating the place where the given To-Do can be accomplished. A list of daily To-Dos was collected from students through a survey questionnaire, and the To-Dos were used to evaluate the proposed method. In addition, the authors introduced a prototype of a location-based reminder system that applied the proposed method.
研究論文(国際会議プロシーディングス), 英語 - Recognition of a variety of activities considering smartphone positions
Yuki Oguri; Shogo Matsuno; Minoru Ohyama
INTERNATIONAL JOURNAL OF SPACE-BASED AND SITUATED COMPUTING, INDERSCIENCE ENTERPRISES LTD, 8巻, 2号, 掲載ページ 88-95, 出版日 2018年, 査読付, We present a high-accuracy recognition method for various activities using smartphone sensors based on device positions. Many researchers have attempted to estimate various activities, particularly using sensors such as the built-in accelerometer of a smartphone. Considerable research has been conducted under conditions such as placing a smartphone in a trouser pocket; however, few have focused on the changing context and influence of the smartphone position. Herein, we present a method for recognising seven types of activities considering three smartphone positions, and conducted two experiments to estimate each activity and identify the actual state under continuous movement at a university campus. The results indicate that the seven states can be classified with an average accuracy of 98.53% for three different smartphone positions. We also correctly identified these activities with 91.66% accuracy. Using our method, we can create practical services such as healthcare applications with a high degree of accuracy.
研究論文(学術雑誌), 英語 - Study of detection algorithm of pedestrians by image analysis with a crossing request when gazing at a pedestrian crossing signal
Akira Tsuji; Naoaki Itakura; Tota Mizuno; Shogo Matsuno
Journal of Information and Communication Engineering, 3巻, 5号, 掲載ページ 167-173, 出版日 2017年12月, 査読付
英語 - Non-contact Eye-Glance Input Interface Using Video Camera
Kota Akehi; Shogo Matsuno; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
Journal of Signal Processing, 信号処理学会, 21巻, 4号, 掲載ページ 207-210, 出版日 2017年07月, 査読付, In the past, many studies have been carried out on eye-gaze input; however, in this study, we developed an eye-glance input interface that tracks a combination of short eye movements. Unlike eye-gaze input that requires high accuracy measurements, eye-glance input can be detected with only a rough indication of the direction of the eye movements, making it possible to operate even terminals with small screens, such as smartphones. In this study, we used an inexpensive camera to measure eye movements and analyzed its output using the OpenCV, an open source computer vision and machine learning software library, to construct an inexpensive and non-contact interface. In a previous study, we developed an algorithm that detected eye-glance input through image analysis using OpenCV, and fed the result of the algorithm back to our subjects. In that study, the average detection rate for the eye-glance input was 76 %. However, we also observed several problems with the algorithm, particularly the problem of false detections due to blinking of the eyes, and implemented solutions for improvement. In this study, we have made improvement with respect to the unsatisfactory detection rate recorded in our previous study, and addressed problems related to user convenience.
研究論文(学術雑誌), 英語 - Evaluation of Autonomic Nervous Activity with Variance of Facial Skin Thermal Image
Tota Mizuno; Shunsuke Kawazura; Hirotoshi Asano; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura
22nd International Symposium on Artificial Life and Robotics, 掲載ページ 512-515, 出版日 2017年01月, 査読付
英語 - Transient型VEP解析手法を用いた脳波インタフェースにおける指標点灯間隔および輝度の変化を利用した多選択肢化の検討
松野省吾; OH Mumu; 相沢彰吾; 板倉直明; 水野統太; 水戸和幸
電気学会論文誌 C, 一般社団法人 電気学会, 137巻, 4号, 掲載ページ 616-620, 出版日 2017年, 査読付, Brain-computer interfaces (BCIs) have been studied using transient visual evoked potentials (VEPs) with various blinking stimuli. We examined changing the phase and frequency of the blinking light with variable onset intervals and changing luminance in order to realize multiple choices in a BCI using transient VEPs. However, the number of choices was limited to four. Therefore, we examined changing the luminance of the blinking stimulus in order to increase the number of choices. In this study, we obtained VEPs based on the synchronous addition method and verified the effects of changing the luminance and lighting interval of the indicators. In addition, we developed a multiple-choice interface using blinking light with variable onset intervals and changing luminance. In this paper, we report the possibility of multiple choices through experiments evaluating the proposed methods.
研究論文(学術雑誌), 日本語 - Feature analysis focused on temporal alteration of the eyeblink waveform using image analysis
Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Shoichi Ohi; Naoaki Itakura
IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 137巻, 4号, 掲載ページ 645-651, 出版日 2017年, 査読付, In this paper, we propose an eyeblink feature parameter for automatically classifying conscious and unconscious eyeblinks. For the feature parameter, we focus on eyeblink waveform integral values, which are defined as a measurement record of the progression of eyeblinks. Previous studies have used duration time and waveform amplitude as the feature parameters. The integral values, on the other hand, have characteristics of both of those feature parameters. We obtain these parameters using a National Television System Committee format video camera by splitting a single interlaced image into two fields. We use frame-splitting methods to obtain and analyze the integral value of the eyeblink waveform. We experimentally compared the feature parameters to automatically classify conscious and unconscious eyeblinks. Duration time and amplitude did not significantly differ in some subject cases; however, we confirmed a significant difference when using the integral value. Our results suggest that eyeblink waveform integral values are effective for discriminating conscious eyeblinks. We believe that the integral value of the eyeblink waveform is applicable to an eyeblink input interface.
研究論文(学術雑誌), 英語 - 斜め視線移動を用いた多選択肢入力インタフェースの開発
松野 省吾; 伊藤 雄太; 明比 宏太; 板倉 直明; 水野 統太; 水戸 和幸
電気学会論文誌. C, 一般社団法人 電気学会, 137巻, 4号, 掲載ページ 621-627, 出版日 2017年, 査読付, Eye-gaze input is attracting attention as a hands-free method for operating information devices. However, it is difficult to use a gaze input system on a small screen such as that of a smart device, because eye-gaze input methods must measure gaze positions accurately. To solve this problem, we have proposed the eye-glance input method for operating small information devices such as smartphones. The eye-glance input method enables multiple-choice input using reciprocating eye movements in oblique directions, allowing input operation that is independent of screen size. In this paper, we report the results of a number-input evaluation experiment using our multiple-choice eye-glance input system based on electrooculography amplified via AC coupling. The experiments showed that, for 10 subjects, the average input success rate and average input speed of real-time eye-glance input with the experimental display design were 91.5% and approximately 15.2 characters per minute.
研究論文(学術雑誌), 日本語 - ビデオカメラを用いた非接触な視線入力インタフェースの検討
明比 宏太; 松野 省吾; 板倉 直明; 水野 統太; 水戸 和幸
電気学会論文誌. C, 一般社団法人 電気学会, 137巻, 4号, 掲載ページ 628-633, 出版日 2017年, 査読付, Our previous studies proposed the eye-glance input system, which uses only combinations of eye movements in opposite directions instead of eye-gaze input. It was measured by EOG, but the physical constraints and dedicated apparatus were problematic. We therefore proposed a non-contact measurement method that uses a camera to measure eye movements. We enabled eye-movement measurement with a built-in camera or a USB camera by using optical flow in OpenCV. In this study, we improve the algorithm so that it measures eye movements and gives visual feedback to subjects in real time, and we investigate the effectiveness of the algorithm and the influence of the feedback.
研究論文(学術雑誌), 日本語 - Study of detection algorithm of pedestrians by image analysis with a crossing request when gazing at a pedestrian crossing signal
Akira Tsuji; Naoaki Itakura; Tota Mizuno; Shogo Matsuno; Hiroshi Kazama; Takuma Toba
2017 2ND INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATICS AND BIOMEDICAL SCIENCES (ICIIBMS), IEEE, 2018-January巻, 掲載ページ 189-194, 出版日 2017年, 査読付, For pedestrian traffic, pushbutton traffic signals are used. However, pushbutton signal machines are often installed in inconvenient locations. Therefore, a new method is required to allow pedestrians to switch the traffic signals using a crossing request other than a pushbutton. In this research, we paid attention to the motions of the pedestrians with a crossing request, who tend to "stay and watch the traffic signal in front of the crosswalk." Pedestrian images were captured using a camera built into the traffic signals and analyzed using image processing. A new traffic signal system is proposed that uses the motions of a pedestrian with a crossing request as an input signal.
研究論文(国際会議プロシーディングス), 英語 - Investigation of Facial Region Extraction Algorithm Focusing on Temperature Distribution Characteristics of Facial Thermal Images.
Tomoyuki Murata; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura; Tota Mizuno
Communications in Computer and Information Science, Springer Verlag, 713巻, 掲載ページ 347-352, 出版日 2017年, 査読付, In our previous research, we expanded the range to be analyzed to the entire face. This was because there were regions in the mouth, in addition to the nose, where the temperature fluctuated according to the mental workload (MWL). We evaluated the MWL with high accuracy by this method. However, it has been clarified in previous studies that the edge portion of the face, where there is no angle between the thermography and the object to be photographed, exhibits decreased emissivity measured by reflection or the like, and, as a result, the accuracy of the temperature data decreases. In this study, we aim to automatically extract the target facial region from the thermal image taken by thermography by focusing on the temperature distribution of the facial thermal image, as well as examine the automation of the evaluation. As a result of evaluating whether the analysis range can be automatically extracted from 80 facial images, we succeeded in an automatic extraction that can be analyzed from about 90% of the images.
研究論文(国際会議プロシーディングス), 英語 - Development of Device for Measurement of Skin Potential by Grasping of the Device.
Tota Mizuno; Shogo Matsuno; Kota Akehi; Kazuyuki Mito; Naoaki Itakura; Hirotoshi Asano
Communications in Computer and Information Science, Springer Verlag, 713巻, 掲載ページ 237-242, 出版日 2017年, 査読付, In this study, we developed a device for measuring skin potential activity requiring the subject to only grasp the interface. There is an extant method for measuring skin potential activity, which is an indicator for evaluating Mental Work-Load (MWL). It exploits the fact that when a human being experiences mental stress, such as tension or excitement, emotional sweating appears at skin sites such as the palm and sole; concomitantly, the skin potential at these sites varies. At present, skin potential activity of the hand is measured by electrodes attached to the whole arm. Alternatively, if a method can be developed to measure skin potential activity (and in turn emotional sweating) by an electrode placed on the palm only, it would be feasible to develop a novel portable burden-evaluation interface that can measure the MWL with the subject holding the interface. In this study, a prototype portable load-evaluation interface was investigated for its capacity to measure skin potential activity while the interface is held in the subject’s hand. This interface, wherein an electrode is attached to the device, rather than directly to the hand, can measure the parameters with the subject gripping the device. Moreover, by attaching the electrode laterally rather than longitudinally to the device, a touch by the subject, at any point on the sides of the device, enables measurement. The electrodes used in this study were tin foil tapes. In the experiment, subjects held the interface while it measured their MWL. However, the amplitude of skin potential activity (which reflects the strength of the stimulus administered on the subjects) obtained by the proposed method was lower than that obtained by the conventional method. Nonetheless, because sweat response due to stimulation could be quantified with the proposed method, the study demonstrated the possibility of load measurements considering only the palm.
研究論文(国際会議プロシーディングス), 英語 - Automatic Classification of Eye Blinks and Eye Movements for an Input Interface Using Eye Motion.
Shogo Matsuno; Masatoshi Tanaka; Keisuke Yoshida; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
Communications in Computer and Information Science, Springer Verlag, 713巻, 掲載ページ 164-169, 出版日 2017年, 査読付, The objective of this study is to develop a multi gesture input interface using several eye motions simultaneously. In this study, we proposed a new automatic classification method for eye blinks and eye movements from moving images captured using a web camera installed on an information device. Eye motions were classified using two methods of image analysis. One method is the classification of the moving direction based on optical flow. The other method is the detection of voluntary blinks based on integral value of eye blink waveform recorded by changing the eye opening area. We developed an algorithm to run the two methods simultaneously. We also developed a classification system based on the proposed method and conducted experimental evaluation in which the average classification rate was 79.33%. This indicates that it is possible to distinguish multiple eye movements using a general video camera.
研究論文(国際会議プロシーディングス), 英語 - Development of a measuring device for skin potential with grasping only
Tota Mizuno; Shogo Matsuno; Kazuyuki Mito; Naoaki Itakura; Hirotoshi Asano
Proceedings of the 19th International Conference on Human-Computer Interaction, 出版日 2017年, 査読付 - Basic study of evaluation that uses the center of gravity of a facial thermal image for the estimation of autonomic nervous activity
Shogo Matsuno; Shunsuke Kosuge; Shunsuke Kawazura; Hirotoshi Asano; Naoaki Itakura; Tota Mizuno
The Ninth International Conference on Advances in Computer-Human Interactions, 掲載ページ 258-261, 出版日 2016年05月, 査読付
英語 - Autonomic Nervous Activity Estimation Algorithm with Facial Skin Thermal Image
Tota Mizuno; Shunsuke Kawazura; Shogo Matsuno; Kota Akehi; Hirotoshi Asano; Naoaki Itakura; Kazuyuki Mito
The Ninth International Conference on Advances in Computer-Human Interactions, 掲載ページ 262-266, 出版日 2016年05月, 査読付
英語 - Method for Measuring Intentional Eye Blinks by Focusing on Momentary Movement around the Eyes
Shogo Matsuno; Tota Mizuno; Naoaki Itakura
RISP International Workshop on Nonlinear Circuits, Communications and Signal Processing, 掲載ページ 137-140, 出版日 2016年03月, 査読付
英語 - Development of Small Device for the Brain Computer Interface with Transient VEP Analysis
Osano Ryohei; Ikai Masato; Matsuno Shogo; Itakura Naoaki; Mizuno Tota
PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), IEEE, 掲載ページ 3782-3785, 出版日 2016年, 査読付, Recently, input interfaces for PCs using EEG have been studied. The visual evoked potential (VEP) caused by a blinking light is one of the waveforms observed in the EEG signal. Brain-computer interfaces based on transient VEP (TRVEP) analysis that realize four choices have also been studied. However, such systems have relied on a large amplifier and an AD board, both of which are expensive. In this study, we developed a small and inexpensive device to solve this problem. In addition, we conducted experiments comparing its performance with that of the conventional device.
研究論文(国際会議プロシーディングス), 英語 - Differentiating conscious and unconscious eyeblinks for development of eyeblink computer input system
Shogo Matsuno; Minoru Ohyama; Kiyohiko Abe; Shoichi Ohi; Naoaki Itakura
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 9312 LNCS巻, 掲載ページ 160-174, 出版日 2016年, 査読付, 招待, In this paper, we propose and evaluate a new conscious eyeblink differentiation method, comprising an algorithm that takes into account differences in individuals, for use in a prospective eyeblink user interface. The proposed method uses a frame-splitting technique that improves the time resolution by splitting a single interlaced image into two fields—even and odd. Measuring eyeblinks with sufficient accuracy using a conventional NTSC video camera (30 fps) is difficult. However, the proposed method uses eyeblink amplitude as well as eyeblink duration as distinction thresholds. Further, the algorithm automatically differentiates eyeblinks by considering individual differences and selecting a large parameter of significance in each user. The results of evaluation experiments conducted using 30 subjects indicate that the proposed method automatically differentiates conscious eyeblinks with an accuracy rate of 83.6% on average. These results indicate that automatic differentiation of conscious eyeblinks using a conventional video camera incorporated with our proposed method is feasible.
研究論文(国際会議プロシーディングス), 英語 - Communication-aid system using eye-gaze and blink information
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
Advances in Face Detection and Facial Image Analysis, Springer International Publishing, 掲載ページ 333-358, 出版日 2016年01月01日, 査読付, Recently, a novel human-machine interface, the eye-gaze input system, has been reported. This system is operated solely through the user’s eye movements. Using this system, many communication-aid systems have been developed for people suffering from severe physical disabilities, such as amyotrophic lateral sclerosis (ALS). We observed that many such people can perform only very limited head movements. Therefore, we designed an eye-gaze input system that requires no special tracing devices to track the user’s head movement. The proposed system involves the use of a personal computer (PC) and home video camera to detect the users’ eye gaze through image analysis under natural light. Eye-gaze detection methods that use natural light require only daily-life devices, such as home video cameras and PCs. However, the accuracy of these systems is frequently low, and therefore, they are capable of classifying only a few indicators. In contrast, our proposed system can detect eye gaze with high-level accuracy and confidence; that is, users can easily move the mouse cursor to their gazing point. In addition, we developed a classification method for eye blink types using the system’s feature parameters. This method allows the detection of voluntary (conscious) blinks. Thus, users can determine their input by performing voluntary blinks that represent mouse clicking. In this chapter, we present our eye-gaze and blink detection methods. We also discuss the communication-aid systems in which our proposed methods are applied.
論文集(書籍)内論文, 英語 - 視線と随意性瞬目を用いる入力インタフェース
阿部 清彦; 佐藤 寛修; 松野 省吾; 大井 尚一; 大山 実
電気学会論文誌. C, 一般社団法人 電気学会, 136巻, 8号, 掲載ページ 1185-1193, 出版日 2016年, 査読付, Recently, a novel human-machine interface known as the eye-gaze input system has been reported. This system is operated solely through the user's eye movements. Therefore, it can be used by people suffering from severe physical disabilities. We propose an eye-gaze input system that uses a personal computer and home video camera. This system detects the users' eye-gaze through image analysis under natural light including fluorescent or LED light. Our proposed system also has a high-level accuracy and confidence; that is, users can easily move the mouse cursor to their gazing point. We confirmed a large difference in the duration of voluntary (conscious) and involuntary (unconscious) blinks through a precursor experiment. In addition, we confirmed that these durations vary significantly depending on the subject. By using the duration of eye blink, voluntary blink can be detected automatically. Through this method, we developed an eye-gaze input interface that uses information of voluntary blinks. That is, users can decide their input by performing voluntary blinks that represent mouse clicking.
研究論文(学術雑誌), 日本語 - Measuring facial skin temperature changes caused by mental work-load with infrared thermography
Tota Mizuno; Takeru Sakai; Shunsuke Kawazura; Hirotoshi Asano; Kota Akehi; Shogo Matsuno; Kazuyuki Mito; Yuichiro Kume; Naoaki Itakura
IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 136巻, 11号, 掲載ページ 1581-1585, 出版日 2016年, 査読付, We evaluated the temperature change of facial parts as affected by mental work-load (MWL) using infrared thermography. Under MWL, autonomic nerves are active, and the skin surface temperature changes with muscular contraction. In particular, the nasal part of the face experiences the most intense change. Based on this, in previous studies MWL was evaluated by using nasal skin temperature. However, we considered whether other parts of the face experience temperature change under MWL. Therefore, in this study, to identify which other parts of the face experience temperature change, we performed an experiment to acquire facial thermal images when subjects perform a mental arithmetic calculation task. Our results indicate that, in addition to the nasal part, the temperatures around the lips and cheek might also increase under MWL.
研究論文(学術雑誌), 英語 - Eye-movement measurement for operating a smart device: a small-screen line-of-sight input system
Shogo Matsuno; Saitoh Sorao; Chida Susumu; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), IEEE, 掲載ページ 3798-3800, 出版日 2016年, 査読付, A real-time eye-glance input interface is developed for a camera-enabled smartphone. Eye-glance input is one of various line-of-sight input methods that capture eye movements over a relatively small screen. In previous studies, a quasi-eye-control input interface was developed using the eye-gaze method, which uses gaze position as an input trigger. This method has allowed intuitive and accurate inputting to information devices. However, there are certain problems with it: (1) measurement accuracy requires accurate calibration and a fixed positional relationship between user and system; (2) deciding input position by eye-gaze time slows down the inputting process; (3) it is necessary to present orientation information when performing input. Put differently, problem (3) requires the accuracy of any eye-gaze measuring device to increase as the screen becomes smaller. The eye-gaze method has traditionally needed a relatively wide screen, which has made eye-control input difficult with a smartphone. Our proposed method can solve this problem because the required input accuracy is independent of screen size. We report a prototype input interface based on an eye-glance input method for a smartphone. This system has an experimentally measured line-of-sight accuracy of approximately 70%.
研究論文(国際会議プロシーディングス), 英語 - Input interface suitable for touch panel operation on a small screen
Susumu Chida; Shogo Matsuno; Naoaki Itakura; Tota Mizuno
PROCEEDINGS OF THE 2016 IEEE REGION 10 CONFERENCE (TENCON), IEEE, 掲載ページ 3679-3683, 出版日 2016年, 査読付, Recent years have seen rapid widespread adoption of smart devices equipped with a small touch panel. Generally, a software keyboard on a small touch panel is problematic because the number of choices and the operational flexibility of the input characters are in a trade-off relationship. This is because an increase in the number of buttons reduces the size of each button, whereas retaining the button size would necessarily reduce the number of buttons. In this study, we propose a new character input interface suitable for a small screen. In addition, we configured the input interface based on the proposed method, and report the experiments that were carried out to evaluate the interface. As a result, even though input may be difficult when using the conventional method, the proposed method was found to hardly affect the input performance.
研究論文(国際会議プロシーディングス), 英語 - A Study of an Intention Communication Assisting System Using Eye Movement
Shogo Matsuno; Yuta Ito; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
COMPUTERS HELPING PEOPLE WITH SPECIAL NEEDS, PT II (ICCHP 2016), SPRINGER INT PUBLISHING AG, 9759巻, 掲載ページ 495-502, 出版日 2016年, 査読付, In this paper, we propose a new intention communication assisting system that uses eye movement. The proposed method solves the problems associated with a conventional eye gaze input method. A hands-free input method that uses the behavior of the eye, including blinking and line of sight, has been used for assisting the intention communication of people with severe physical disabilities. In particular, a line-of-sight input device that uses eye gazes has been used extensively because of its intuitive operation. In addition, this device can be used by any patient, except those with weak eyes. However, the eye gaze method has disadvantages: a certain input time is required to determine the eye gaze input, and fixation information must be presented when performing input. In order to solve these problems, we propose a new line-of-sight input method, the eye glance input method. Eye glance input can be performed in four directions by detecting reciprocating movement (eye glance) in the oblique direction. Using the proposed method, it is possible to perform rapid environmental control with simple measurements. In addition, we developed an evaluation system using electrooculogram based on the proposed method. The evaluation system experimentally evaluated the input accuracy of 10 subjects. As a result, an average accuracy of approximately 84.82% was obtained, which confirms the effectiveness of the proposed method. In addition, we examined the application of the proposed method to actual intention communication assisting systems.
研究論文(国際会議プロシーディングス), 英語 - Advancement of a To-Do Reminder System Focusing on Context of the User
Masatoshi Tanaka; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
HCI INTERNATIONAL 2016 - POSTERS' EXTENDED ABSTRACTS, PT II, SPRINGER INTERNATIONAL PUBLISHING AG, 618巻, 掲載ページ 385-391, 出版日 2016年, 査読付, In recent years, smartphones have become rapidly popular and their performance has improved remarkably. Therefore, it is possible to estimate user context by using sensors and functions equipped in smartphones. We propose a To-Do reminder system using user indoor position information and moving state. In conventional reminder systems, users have to input the information of place (resolution place). The resolution place is where the To-Do item can be solved and the user receives a reminder. These conventional reminder systems are constructed based on outdoor position information using GPS. In this paper, we propose a new reminder system that makes it unnecessary to input the resolution place. In this newly developed system, we introduce a rule-based system for estimating the resolution place in a To-Do item. The estimation is done based on an object word and a verb, which are included in most tasks in a To-Do list. In addition, we propose an automatic judgment method to determine if a To-Do task has been completed.
研究論文(国際会議プロシーディングス), 英語 - Physiological and Psychological Evaluation by Skin Potential Activity Measurement Using Steering Wheel While Driving
Shogo Matsuno; Takahiro Terasaki; Shogo Aizawa; Tota Mizuno; Kazuyuki Mito; Naoaki Itakura
HCI INTERNATIONAL 2016 - POSTERS' EXTENDED ABSTRACTS, PT II, SPRINGER INT PUBLISHING AG, 618巻, 掲載ページ 177-181, 出版日 2016年, 査読付, This paper proposes a new method for practical skin potential activity (SPA) measurement while driving a car by installing electrodes on the outer periphery of the steering wheel. Evaluating the psychophysiological state of the driver while driving is important for accident prevention. We investigated whether the physiological and psychological state of the driver can be evaluated by measuring SPA while driving. Therefore, we have devised a way to measure SPA by installing electrodes in the steering wheel. The electrodes are made of tin foil and are placed along the outer periphery of the wheel, considering that the position of the hands while driving is not fixed. The potential difference is increased by changing the impedance through changing the width of the electrodes. Moreover, we conducted experiments in this environment. An experiment to investigate the possibility of measuring SPA using the conventional and the proposed methods was conducted with five healthy adult males. A physical stimulus was applied to the forearm of the subjects. It was found that the proposed method could measure SPA, even though the result was slightly smaller than that of the conventional method of affixing electrodes directly on the hands.
研究論文(国際会議プロシーディングス), 英語 - O-047 複数センサを利用した移動状態の推定手法に関する検討(O分野:情報システム,一般論文)
吉田 慶介; 小栗 悠生; 松野 省吾; 大山 実
情報科学技術フォーラム講演論文集, FIT(電子情報通信学会・情報処理学会)運営委員会, 14巻, 4号, 掲載ページ 547-548, 出版日 2015年08月24日, 査読付
日本語 - Feature Analysis of Eyeblink Waveform for Automatically Classifying Conscious Blinks
Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe
Proceedings of the International Conference on Electronics and Software Science, 掲載ページ 216-222, 出版日 2015年07月, 査読付 - Improvement in Eye Glance Input Interface Using OpenCV
Kota Akehi; Shogo Matsuno; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
Proceedings of the International Conference on Electronics and Software Science, 掲載ページ 207-211, 出版日 2015年07月, 査読付
英語 - Facial Skin Temperature Fluctuation by Mental Work-Load with Thermography
Tota Mizuno; Takeru Sakai; Shunsuke Kawazura; Hirotoshi Asano; Kota Akehi; Shogo Matsuno; Kazuyuki Mito; Yuichiro Kume; Naoaki Itakura
Proceedings of the International Conference on Electronics and Software Science, 掲載ページ 212-215, 出版日 2015年07月, 査読付
英語 - Determining a Mobile Device's Indoor and Outdoor Location Considering Actual Use
Katsuyoshi Ozaki; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
Journal of Advanced Control, Automation and Robotics, 1巻, 1号, 掲載ページ 54-58, 出版日 2015年01月, 査読付
英語 - Determining Mobile Device Indoor and Outdoor Location in Various Environments Estimation of User Context
Katsuyoshi Ozaki; Keisuke Yoshida; Shogo Matsuno; Minoru Ohyama
2015 INTERNATIONAL CONFERENCE ON INTELLIGENT INFORMATICS AND BIOMEDICAL SCIENCES (ICIIBMS), IEEE, 掲載ページ 161-164, 出版日 2015年, 査読付, To obtain a person's location information with high accuracy on a mobile device, it is necessary for the device to switch its localization method depending on whether the user is indoors or outdoors. We propose a method to determine indoor and outdoor location using only the sensors on a mobile device. To obtain a decision with high accuracy for many devices, the method must consider individual differences between devices. We confirmed that using a majority decision method reduces the influence of individual device differences. Moreover, for highly accurate decisions in various environments, it is necessary to consider the differences in environments, such as large cities surrounded by high-rise buildings versus suburban areas. We measured classification features in different environments, and the accuracy of a classifier constructed using these features was 99.6%.
研究論文(国際会議プロシーディングス), 英語 - Input Interface Using Eye-Gaze and Blink Information
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
HCI INTERNATIONAL 2015 - POSTERS' EXTENDED ABSTRACTS, PT I, SPRINGER-VERLAG BERLIN, 528巻, 掲載ページ 463-467, 出版日 2015年, 査読付, We have developed an eye-gaze input system for people with severe physical disabilities. The system utilizes a personal computer and a home video camera to detect eye-gaze under natural light, and users can easily move the mouse cursor to any point on the screen to which they direct their gaze. We constructed this system by first confirming a large difference in the duration of voluntary (conscious) and involuntary (unconscious) blinks through a precursor experiment. Consequently, on the basis of the results obtained, we developed our eye-gaze input interface, which uses the information received from voluntary blinks. More specifically, users can decide on their input by performing voluntary blinks as substitutes for mouse clicks. In this paper, we discuss the eye-gaze and blink information input interface developed and the results of evaluations conducted.
研究論文(国際会議プロシーディングス), 英語 - Computer input system using eye glances
Shogo Matsuno; Kota Akehi; Naoaki Itakura; Tota Mizuno; Kazuyuki Mito
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 9172巻, 掲載ページ 425-432, 出版日 2015年, 査読付, We have developed a real-time Eye Glance input interface using a Web camera to capture eye gaze inputs. In previous studies, an eye control input interface was developed using an electro-oculograph (EOG) amplified by AC coupling. Our proposed Eye Gesture input interface used a combination of eye movements and did not require the restriction of head movement, unlike conventional eye gaze input methods. However, this method required an input start operation before capturing could commence. This led us to propose the Eye Glance input method that uses a combination of contradirectional eye movements as inputs and avoids the need for start operations. This method required the use of electrodes, which were uncomfortable to attach. The interface was therefore changed to a camera that used facial pictures to record eye movements to realize an improved noncontact and low-restraint interface. The Eye Glance input method measures the directional movement and time required by the eye to move a specified distance using optical flow with OpenCV from Intel. In this study, we analyzed the waveform obtained from eye movements using a purpose-built detection algorithm. In addition, we examined the reasons for detecting a waveform when eye movements failed.
研究論文(国際会議プロシーディングス), 英語 - RJ-004 瞬目種類識別のための形状特徴量に関する一検討(船井ベストペーパー賞受賞,J分野:ヒューマンコミュニケーション&インタラクション,査読付き論文)
松野 省吾; 大山 実; 阿部 清彦; 大井 尚一; 板倉 直明
情報科学技術フォーラム講演論文集, FIT(電子情報通信学会・情報処理学会)運営委員会, 13巻, 3号, 掲載ページ 29-32, 出版日 2014年08月, 査読付
日本語 - 高速度カメラによる瞬目種類識別のための特徴パラメータの計測
阿部 清彦; 佐藤 寛修; 松野 省吾; 大井 尚一; 大山 実
電気学会論文誌E(センサ・マイクロマシン部門誌), 一般社団法人 電気学会, 134巻, 10号, 掲載ページ 1584-1585, 出版日 2014年, 査読付, Human eye blinks include voluntary and involuntary blinks. If voluntary blinks can be classified automatically, an input decision can be made when the user's voluntary blinks occur. We have developed a method for eye blink detection using a video camera. By using this method, the feature parameters for eye blink type classification can be estimated from the blink wave patterns. We conducted experiments to measure the feature parameters by using a high-speed camera. In this paper, we present our method for eye blink detection and its feature parameters.
研究論文(学術雑誌), 日本語 - Analysis of Trends in the Occurrence of Eyeblinks for an Eyeblink Input Interface
Shogo Matsuno; Naoaki Itakura; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe
2014 IEEE 2ND INTERNATIONAL WORKSHOP ON USABILITY AND ACCESSIBILITY FOCUSED REQUIREMENTS ENGINEERING (USARE), IEEE, 掲載ページ 25-31, 出版日 2014年, 査読付, This paper presents the results of the analysis of trends in the occurrence of eyeblinks for devising new input channels in handheld and wearable information devices. However, engineering a system that can distinguish between voluntary and spontaneous blinks is difficult. The study analyzes trends in the occurrence of eyeblinks of 50 subjects to classify blink types via experiments. However, noticeable differences between voluntary and spontaneous blinks exist for each subject. Three types of trends based on shape feature parameters (duration and amplitude) of eyeblinks were discovered. This study determines that the system can automatically and effectively classify voluntary and spontaneous eyeblinks.
研究論文(国際会議プロシーディングス), 英語 - RJ-005 フレーム分割法を用いた瞬目計測の有効性に関する一検討(J分野:ヒューマンコミュニケーション&インタラクション,査読付き論文)
松野 省吾; 大山 実; 阿部 清彦; 佐藤 寛修; 大井 尚一
情報科学技術フォーラム講演論文集, FIT(電子情報通信学会・情報処理学会)運営委員会, 12巻, 3号, 掲載ページ 39-42, 出版日 2013年08月, 査読付
日本語 - Automatic discrimination of voluntary and spontaneous Eyeblinks
Shogo Matsuno; Minoru Ohyama; Shoichi Ohi; Kiyohiko Abe; Hironobu Sato
ACHI 2013 - 6th International Conference on Advances in Computer-Human Interactions, 掲載ページ 433-439, 出版日 2013年01月, 査読付, © Copyright 2013 IARIA. This paper proposes a method to analyze the automatic detection and discrimination of eyeblinks for use with a human-computer interface. When eyeblinks are detected, the eyeblink waveform is also acquired from a change in the eye aperture area of the subject by giving a sound signal. We compared voluntary and spontaneous blink parameters obtained by experiments, and we found that the trends of the subjects for important feature parameters could be sorted into three types. As a result, the realization of automatic discrimination of voluntary and spontaneous eye blinking can be expected.
研究論文(国際会議プロシーディングス) - Classification of blink type by a frame splitting method using hi-vision image
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
IEEJ Transactions on Electronics, Information and Systems, Institute of Electrical Engineers of Japan, 133巻, 7号, 掲載ページ 3-1300, 出版日 2013年, 査読付, Recently, the human-machine interface using information of user's eye blink was reported. This system operates a personal computer in real-time. Eye blinks can be classified into voluntary (conscious) blinks and involuntary (unconscious) blinks. If the voluntary blinks can be distinguished in automatic, an input decision can be made when user's voluntary blinks occur. By using this system, the usability of input is increased. We have proposed a new eye blink detection method that uses a Hi-Vision video camera. This method utilizes split interlace images of the eye. These split images are odd- and even- field images in the 1080i Hi-Vision format and are generated from interlaced images. The proposed method yields a time resolution that is double that in the 1080i Hi-Vision format. We refer to this approach as a "frame-splitting method". We also proposed a method for automatic eye blink extraction using this method. The extraction method is capable of classifying the start and end points of eye blinks. In other words, the feature parameters of voluntary and involuntary blinks can be measured by this extraction method. In this paper, we propose a new classification method for eye blink types using these feature parameters. © 2013 The Institute of Electrical Engineers of Japan.
研究論文(学術雑誌), 英語 - ハイビジョン画像を用いたフレーム分割法による瞬目種類の識別
阿部 清彦; 佐藤 寛修; 松野 省吾; 大井 尚一; 大山 実
電気学会論文誌. C, 電子・情報・システム部門誌 = The transactions of the Institute of Electrical Engineers of Japan. C, A publication of Electronics, Information and Systems Society, 一般社団法人 電気学会, 133巻, 7号, 掲載ページ 3-1300, 出版日 2013年, 査読付, Recently, the human-machine interface using information of user's eye blink was reported. This system operates a personal computer in real-time. Eye blinks can be classified into voluntary (conscious) blinks and involuntary (unconscious) blinks. If the voluntary blinks can be distinguished in automatic, an input decision can be made when user's voluntary blinks occur. By using this system, the usability of input is increased. We have proposed a new eye blink detection method that uses a Hi-Vision video camera. This method utilizes split interlace images of the eye. These split images are odd- and even- field images in the 1080i Hi-Vision format and are generated from interlaced images. The proposed method yields a time resolution that is double that in the 1080i Hi-Vision format. We refer to this approach as a “frame-splitting method”. We also proposed a method for automatic eye blink extraction using this method. The extraction method is capable of classifying the start and end points of eye blinks. In other words, the feature parameters of voluntary and involuntary blinks can be measured by this extraction method. In this paper, we propose a new classification method for eye blink types using these feature parameters.
研究論文(学術雑誌), 英語 - Automatic classification of eye blink types using a frame-splitting method
Kiyohiko Abe; Hironobu Sato; Shogo Matsuno; Shoichi Ohi; Minoru Ohyama
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer, 8019 LNAI巻, PART 1号, 掲載ページ 117-124, 出版日 2013年, 査読付, Human eye blinks include voluntary (conscious) blinks and involuntary (unconscious) blinks. If voluntary blinks can be detected automatically, then input decisions can be made when voluntary blinks occur. Previously, we proposed a novel eye blink detection method using a Hi-Vision video camera. This method utilizes split interlaced images of the eye, which are generated from 1080i Hi-Vision format images. The proposed method yields a time resolution that is twice as high as that of the 1080i Hi-Vision format. We refer to this approach as the frame-splitting method. In this paper, we propose a new method for automatically classifying eye blink types on the basis of specific characteristics using the frame-splitting method. © 2013 Springer-Verlag Berlin Heidelberg.
研究論文(国際会議プロシーディングス), 英語 - 随意性瞬目と自発性瞬目の識別に関する検討
松野省吾; 阿部 清彦; 佐藤 寛修; 大井 尚一; 大山 実
FIT2012(第11回情報技術フォーラム)講演論文集, 9, FIT(電子情報通信学会・情報処理学会)運営委員会, 3巻, 掲載ページ 23-26, 出版日 2012年, 査読付
日本語
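上記の一連の瞬目研究では,インタレース画像を偶数・奇数フィールドに分割して時間分解能を2倍にするフレーム分割法と,瞬目波形の持続時間・振幅をしきい値として随意性瞬目と自発性瞬目を識別する考え方が繰り返し述べられている.参考として,この2つの処理の骨子だけを示すPythonによる最小限のスケッチを以下に示す.関数名・しきい値はいずれも説明用の仮のものであり,各論文で実際に用いられた実装とは異なる.
```python
import numpy as np

def split_fields(interlaced_frame: np.ndarray):
    """インタレース画像を偶数・奇数フィールドに分割する(フレーム分割法の考え方の説明用).
    偶数行と奇数行を別フィールドとして取り出すと,1フレームから時刻の異なる2枚の画像が得られ,
    実効的な時間分解能が2倍になる.
    """
    even_field = interlaced_frame[0::2, :]  # 偶数行フィールド
    odd_field = interlaced_frame[1::2, :]   # 奇数行フィールド
    return even_field, odd_field

def classify_blink(duration_ms: float, amplitude: float,
                   duration_th: float = 150.0, amplitude_th: float = 0.8) -> str:
    """瞬目波形の持続時間と振幅のしきい値による随意性/自発性の識別の説明用スケッチ.
    しきい値は仮の値であり,実際には被験者ごとに差の大きいパラメータを選んで調整することになる.
    """
    if duration_ms >= duration_th or amplitude >= amplitude_th:
        return "voluntary"    # 随意性瞬目(意図的)
    return "spontaneous"      # 自発性瞬目(無意識)

if __name__ == "__main__":
    frame = np.zeros((1080, 1920), dtype=np.uint8)  # 1080i を想定したダミー画像
    even, odd = split_fields(frame)
    print(even.shape, odd.shape)        # (540, 1920) (540, 1920)
    print(classify_blink(210.0, 0.9))   # voluntary
```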
MISC
- 3次元畳み込みニューラルネットワークを用いた瞬目種類識別の詳細分析—Detailed Analysis of Blink Types Classification Using a 3D Convolutional Neural Network
阿部 清彦; 大山 実; 佐藤 寛修; 松野 省吾
東京 : 日本工業出版, 出版日 2024年05月, 画像ラボ = Image laboratory / 画像ラボ編集委員会 編, 35巻, 5号, 掲載ページ 27-31, 日本語, 0915-6755, AN10164169 - 特集:編集委員 今年の抱負2024「場に囚われないコミュニケーションとAI」
松野 省吾
一般社団法人 人工知能学会, 出版日 2024年01月01日, 人工知能, 39巻, 1号, 掲載ページ 30-30, 日本語, 2188-2266, 2435-8614 - 2022年春季シンポジウムルポ(第85回)
松野 省吾
日本オペレーションズ・リサーチ学会, 出版日 2022年08月, オペレーションズ・リサーチ = Communications of the Operations Research Society of Japan : 経営の科学, 67巻, 8号, 掲載ページ 451-453, 日本語, 0030-3674, AN00364999 - ソーシャルデータの分析と活用 データサイエンスの実ビジネス活用に向けて~活用パターンと実践における課題の紹介~
榊剛史; 榊剛史; 松野省吾
出版日 2021年, Estrela, 330号, 1343-5647, 202102219574572341
講演・口頭発表等
- 非随意動作を用いる視線入力インタフェースの検討
松野 省吾
日本語, 人工知能学会全国大会論文集, 一般社団法人 人工知能学会, 視線入力インタフェースは一般に随意的な視線の移動や注視を入力信号として捉え,コンピュータなどの操作を行う装置である.本研究では,従来の随意的な操作に加えて,非随意な生理反応を併用して多様な表現を可能とする視線入力インタフェースの開発を目的とし,視線移動や瞬目を計測する手法を検討している.本稿では,特殊な随意性の視線移動と並行して計測した瞬目の随意性と非随意性を自動的に識別し,異なるフィードバックを行うための識別手法を提案する.さらに,実験により提案手法の評価を行ったところ,約85%の精度で随意的な視線運動を識別できた.
発表日 2024年
開催期間 2024年- 2024年 - 水平作業台ディスプレイのための複数カメラに基づく視線位置推定システム
長野 真大; 石田 和貴; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 周歩; 山田 哲男; 杉 正夫
日本語, 精密工学会学術講演会講演論文集, 公益社団法人 精密工学会, 組立作業における作業支援の一つとして,筆者らは水平作業台ディスプレイを提案してきた.先行研究では,水平作業台ディスプレイ上での視線位置推定システムが提案されたが,見ている場所による精度の違いや頭部移動に対応できない点が課題として残った.本論文では,複数カメラを用いた視線位置推定システムを提案する.従来の課題に対応するために,複数角度からの顔画像や頭部位置がわかる二値画像を入力として視線位置推定を行う.
発表日 2023年03月01日
開催期間 2023年03月01日- 2023年03月01日 - 3次元モーションデータの深層学習による作業者分類の精度に関する一考察
川根 龍人; 伊集院 大将; 杉 正夫; 中嶋 良介; 仲田 知弘; 岡本 一志; 松野 省吾; 山田 哲男
日本語, 精密工学会学術講演会講演論文集, 公益社団法人 精密工学会, 本研究では、組立作業の3次元モーションデータの深層学習がどの身体部位の動作データで作業者を分類しているかを調べるために、光学式モーションキャプチャーで取得した各身体部位のデータを組み合わせた学習によって、作業者分類を行う。さらに、各身体部位の分類結果を比較し、分類精度に影響を与える身体部位を特定する。
発表日 2023年03月01日
開催期間 2023年03月01日- 2023年03月01日 - 統計的に特徴的な背景を持つアイテムセットを発見するための進化計算方法
嶋田 香; 松野 省吾; 荒平 高章
日本語, 人工知能学会全国大会論文集, 一般社団法人 人工知能学会, 頻出アイテムセットを取得することなく、統計的特性を背景にもつ属性の組み合わせを発見する方法を世代継続的に成果を蓄積していくことを特徴とする進化計算を用いて提案する。この方法は、説明変数となる多数の属性と統計的特性の注目対象となる2つの連続変数からなるデータベースから、2つの連続変数の間に高い相関がみられるような属性の組合せを直接に発見する。提案手法は、大規模データからの統計的背景を有する小集団の発見を実現しようとするものであるが、頻出アイテム集合の概念の拡張、アソシエーションルール表現の一般化の基礎となるものである。
発表日 2022年
開催期間 2022年- 2022年 - モーションキャプチャーと深層学習ソフトウェアによる作業者の動作分析
川根龍人; 伊集院大将; 杉正夫; 中嶋良介; 仲田知弘; 岡本一志; 松野省吾; 山田哲男
精密工学会大会学術講演会講演論文集
発表日 2022年
開催期間 2022年- 2022年 - アソシエーションルールを継続的に発見する進化計算手法の評価
松野省吾; 嶋田香
Webインテリジェンスとインタラクション研究会
発表日 2021年12月17日 - デジタル屋台における情報提示位置のユーザビリティの視線計測による比較—縦置き型ディスプレイと平置き型ディスプレイ
長野 真大; 山田 孟; 中嶋 良介; 仲田 知弘; 松野 省吾; 岡本 一志; 山田 哲男; 杉 正夫
日本語, 精密工学会学術講演会講演論文集, 公益社団法人 精密工学会, 筆者らはデジタル屋台における組立作業において作業者へ情報的な支援を行うことを目的として,作業台にディスプレイを埋め込んだ水平作業台ディスプレイを提案してきた.先行研究では,水平作業台ディスプレイと縦置き型ディスプレイの比較が行われたが,主観評価以外で違いを見出すことはできなかった.そこで本研究では,先行研究と同様,2つのシステムで比較実験を行いつつ,作業中の視線を計測し,定量的な違いを評価する.
発表日 2021年03月03日
開催期間 2021年03月03日- 2021年03月03日 - ソーシャルデータの分析と活用 データサイエンスの実ビジネス活用に向けて~活用パターンと実践における課題の紹介~
榊剛史; 榊剛史; 松野省吾
Estrela
発表日 2021年
開催期間 2021年- 2021年 - ユーザのフォロワー構成による投稿の拡散されやすさに関する検証
松野 省吾; セーヨー サンティ; 榊 剛史; 檜野 安弘
日本語, 人工知能学会全国大会論文集, 一般社団法人 人工知能学会,企業におけるマーケティングコミュニケーションやニュースの発信など,様々な情報を幅広く伝達する上で,ソーシャルメディアによる情報拡散の影響は無視できなくなっている.特にエコーチャンバーやフェイクニュースの拡散などでは,ソーシャルメディアによる情報拡散が主要な役割を果たしていると言われている.本研究では,企業のPRやニュースの発信において,ソーシャルメディア上の情報拡散の規模がどのような要素が影響しているかを明らかにしていきたい.SNSにおいて社会に対して大きな影響力を持つ人物はインフルエンサーと呼ばれる.そこで,筆者らはインフルエンサーの性質を,1)投稿を拡散するユーザを多くもつユーザ.かつ,2)投稿数の多いユーザ(≅拡散を躊躇なくするユーザ)であると定義し,Twitterの記録から構築したソーシャルグラフを用いて投稿拡散への影響を検証した.その結果,いずれかの性質を持つユーザはランダムに選択したユーザよりも投稿の拡散される確率が高く,特に,プライベートグラフの中でフォロー/フォロワー数が少ないフォロワーを多く抱えるユーザの投稿は最も拡散される確率が高くなることが判った.
発表日 2021年
開催期間 2021年- 2021年 - 水平作業台ディスプレイにおける作業者の頭部位置移動に対応した注視点推定システムの提案
山田孟; 長野真大; 中嶋良介; 仲田知弘; 松野省吾; 岡本一志; 山田哲男; 杉正夫
精密工学会大会学術講演会講演論文集
発表日 2021年
開催期間 2021年- 2021年 - デジタル屋台における情報提示位置のユーザビリティの視線計測による比較-縦置き型ディスプレイと平置き型ディスプレイ-
長野真大; 山田孟; 中嶋良介; 仲田知弘; 松野省吾; 岡本一志; 山田哲男; 杉正夫
精密工学会大会学術講演会講演論文集
発表日 2021年
開催期間 2021年- 2021年 - 顔画像の分析による非接触なヒューマンモニタリング技術
松野省吾
口頭発表(招待・特別), 精密工学会大会学術講演会講演論文集, 招待
発表日 2021年
開催期間 2021年- 2021年 - スマートデバイスと機械学習を融合した人と環境に優しいサステナブル生産支援システムの構想について
伊集院大将; 中嶋良介; 杉正夫; 仲田知弘; 山田周歩; 松野省吾; 松野省吾; 岡本一志; 滝聖子; 山田哲男
精密工学会大会学術講演会講演論文集
発表日 2021年
開催期間 2021年- 2021年 - Z世代が抱く現在のAIに関するアンケートテキスト分析の研究と課題
山田哲男; 舛井海斗; 松野省吾; 長沢敬祐; 伊集院大将; 石垣綾; 稲葉通将; 井上全人; YU Y.; 岡本一志; 北田皓嗣; ZHOU L.; 杉正夫; 滝聖子; 中嶋良介; 仲田知弘; 大戸(藤田)恵理; 山田周歩
横幹連合コンファレンス予稿集(Web)
発表日 2021年
開催期間 2021年- 2021年 - テキストマイニングによる現在のAIのアンケート調査分析
舛井海斗; 松野省吾; 長沢敬祐; 山田哲男
日本経営工学会秋季大会予稿集(Web)
発表日 2021年
開催期間 2021年- 2021年 - 視線入力インタフェース向け遠隔実験システム
阿部清彦; 佐藤寛修; 松野省吾; 大山実
電気学会電子・情報・システム部門大会(Web)
発表日 2021年
開催期間 2021年- 2021年 - 3次元畳み込みニューラルネットワークによる瞬目種類識別の検討
佐藤寛修; 阿部清彦; 松野省吾; 大山実
電気学会電子・情報・システム部門大会(Web)
発表日 2021年
開催期間 2021年- 2021年 - SNSマーケティング応用に向けたTwitter上のプライベートグラフにおける地理的な偏りの検証
榊剛史; 松野省吾; 檜野安弘
電子情報通信学会技術研究報告(Web)
発表日 2021年
開催期間 2021年- 2021年 - Twitterソーシャルリスニングと広告運用のためのスパムフィルタ構築
松野省吾; 水木栄; 榊剛史
日本語, 人工知能学会全国大会(Web), 一般社団法人 人工知能学会,本稿はTwitterを用いたWebマーケティング施策を実施する上での課題である各種のノイズ・スパム投稿を排除するフィルタの構築方法を提案する。SNSを用いた広告媒体費用は年々拡大しており,より効率的なターゲティング手法の実現が求められている。その前提として,分析を行うにあたってノイズの少ない情報源の確保は必須である。中でも,マイクロブログとも呼ばれるTwitterは口コミの分析やコンテンツマーケティングに広く用いられている。その一方で,通常のブログやWikipediaと比べ,くだけた文章や独自用語,非文等が頻繁に出現するため,通常のフィルタリングでは対応できない部分が存在する。そこで,筆者らはTwitterを用いたソーシャルリスニングとプロモーションの実施を念頭に,分析のノイズとなるツイートを分類し,タイプ別に検出するためのフィルタを構築した。また,実験により構築したスパムフィルタの精度を評価したところ,全体として約90%の精度で分析に不要なツイートを取り除くことを確認した。これにより,ソーシャルリスニングを実施する際の作業を低減させ,より質の高い分析を実施できるようになった。
発表日 2020年
開催期間 2020年- 2020年 - 音声フィードバックを用いた多肢選択瞬目入力の検討
佐藤寛修; 阿部清彦; 松野省吾; 大山実
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201902282582452203
発表日 2019年08月28日 - 新しい形状特徴パラメータによる瞬目種類識別の性能比較—Performance Comparison of Blink Types Classification Using a Novel Feature Parameter in Blinking Waveform
佐藤, 寛修; 阿部, 清彦; 松野, 省吾; 大山, 実
日本語, 研究報告, 関東学院大学理工/建築・環境学会, https://kguopac.kanto-gakuin.ac.jp/webopac/NI30003415, type:03685373
筆者らは,肢体不自由者のコミュニケーション支援などを目的として,2種類の意図的な瞬き(随意性瞬目)の動作をユーザの入力意図として識別可能な,瞬目入力インタフェースを開発している.このインタフェースでは,2種類の入力意図にことなる機能をそれぞれ割り当てることができる.たとえば,通常の入力決定のほかに,誤り訂正の機能を割り当てることなどができる.そのため,2種類の瞬目識別を入力インタフェースとして適用することで,入力効率の改善が期待できる.本研究では,自然光下の画像処理によって眼球開口部の面積変化(瞬目波形)を解析する瞬目計測法を採用している.これまで,3種類の瞬目から得た瞬目波形の形状特徴パラメータを対象に,2種類の随意性瞬目の識別法を開発した.この瞬目種類識別法を視線入力システムの入力決定に採用し評価を行ったところ,瞬目種類の識別の誤りが散見された.本稿では,瞬目の新たな形状特徴パラメータをもちいた瞬目種類識別の結果を示す.また,その結果を用いて,新しい特徴パラメータとこれまでに採用していた特徴パラメータとの識別性能を比較する.
We have developed an input system equipped with an eye-blink-based interface that is able to classify two types of voluntary (conscious) blink actions as the user's input intentions. One purpose of the input system is communication aid for the severely disabled. The classification of two types of input intentions enables us to assign an individual command to each type of intention. In one example, functions for input decision and error correction are assigned to these two types of input intentions, respectively. Applying this blink type classification to a human-computer interface will improve the efficiency of inputting commands. Our blink measurement method analyzes the variation in pixels of the open-eye area using image analysis under natural light. We previously investigated features of three waveform parameters to develop a classification method for two types of voluntary blinks. In our previous examination using an eye-gaze input system to which this blink type classification was applied, we observed several classification errors. This paper shows a new result of blink type classification using a novel feature parameter of the blink waveform. Using the result, we compare the classification performance of the novel feature parameter with that of the previously employed parameters.
identifier:119-124
発表日 2019年03月
開催期間 2019年03月- 2019年03月 - 新しい形状特徴パラメータによる瞬目種類識別の性能比較
佐藤寛修; 阿部清彦; 松野省吾; 大山実
日本語, 関東学院大学理工/建築・環境学会研究報告, https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201902239743909908
発表日 2019年03月01日 - 日本語大規模SNS+Webコーパスによる単語分散表現のモデル構築
松野 省吾; 水木 栄; 榊 剛史
日本語, 人工知能学会全国大会論文集, http://ci.nii.ac.jp/naid/130007658900, 本稿では,筆者らの構築したTwitterをはじめとしたSNS上に存在する日本語の文章に対応する単語分散表現モデルを紹介する.本モデルはSNSデータ,Wikipedia,Webページといった複数カテゴリを媒体とした日本語大規模コーパスから作成される.作成した単語分散表現モデルに対し,Spearmanの順位相関係数を評価指標とした単語類似度算出タスクによる評価を実施したところ,Wikipediaのみを学習コーパスとして用いたモデルと比較して7ポイント程度良い性能が得られた.本稿で紹介した単語分散表現モデルはWebサイトを通じて公開する予定であり,本モデルが活用されることで,SNSデータを対象とした自然言語処理研究が一層盛んになることを期待したい.
発表日 2019年 - 東京近郊を対象としたTwitterユーザの細粒度ロケーション予測に関する検討
松野省吾; 水木栄; 榊剛史
Webインテリジェンスとインタラクション研究会
発表日 2018年12月 - 瞬目種類識別のための複数の特徴パラメータに関する検討
佐藤寛修; 阿部清彦; 松野省吾; 大山実
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201802270347531176
発表日 2018年09月05日 - 畳み込みニューラルネットワークによる開眼閉眼状態の識別
阿部清彦; 佐藤寛修; 松野省吾; 大山実
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), https://jglobal.jst.go.jp/detail?JGLOBAL_ID=201802245313928991
発表日 2018年09月05日 - マウス型視線入力インタフェースのユーザビリティに関する検討
阿部清彦; 佐藤寛修; 松野省吾; 大山実
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702264989887328
発表日 2017年09月06日 - 2種類の随意性瞬目と視線を用いた文字入力システムの検討
佐藤寛修; 阿部清彦; 松野省吾; 大山実
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702278188112611
発表日 2017年09月06日 - 手首動作によるスマートデバイス向け入力インタフェースの検討
松浦隼人; 明比宏太; 松野省吾; 水野統太; 水戸和幸; 板倉直明
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702288850499976
発表日 2017年09月06日 - 赤外線サーモグラフィを用いた顔面全体の皮膚温度による自律神経活動推定の検討
村田禎侑; 明比宏太; 松野省吾; 水戸和幸; 板倉直明; 水野統太
日本語, 電気学会電子・情報・システム部門大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702221518089680
発表日 2017年09月06日 - 屋外におけるTo‐Do解決支援システムの提案
鈴村礼治; 松野省吾; 大山実
日本語, 情報科学技術フォーラム講演論文集, http://jglobal.jst.go.jp/public/201702269189034606
発表日 2017年09月05日 - スマートフォンを用いた様々な移動状態を含む連続動作の識別
小栗悠生; 松野省吾; 大山実
日本語, 電子情報通信学会大会講演論文集(CD-ROM), http://jglobal.jst.go.jp/public/201702280207126324
発表日 2017年08月29日 - 屋外におけるTo-Do解決支援システムの提案
鈴村礼治; 松野省吾; 大山実
情報科学技術フォーラム講演論文集
発表日 2017年
開催期間 2017年- 2017年 - スマートデバイスにおけるフリックを用いた少操作自由度・多選択肢型文字入力方式の提案
千田進; 松野省吾; 板倉直明; 水野統太
日本語, 電気学会計測研究会資料, http://jglobal.jst.go.jp/public/201702270283887479
発表日 2016年12月21日 - スマートデバイスにおける少操作自由度・多選択肢型文字入力方式の提案と検討—A proposal and investigation of the character input method selecting a lot of choices by a few operation degrees of freedom for smart devices—光・量子デバイス研究会・医療工学応用一般(QIE-2)
千田 進; 松野 省吾; 明比 宏太; 板倉 直明; 水野 統太; 水戸 和幸
日本語, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan / 光・量子デバイス研究会 [編], 電気学会, http://id.ndl.go.jp/bib/028585399
発表日 2016年04月22日
開催期間 2016年04月22日- 2016年04月22日 - transient型VEP解析を用いた脳波インタフェースにおける点灯間隔変動点滅光の輝度変化を利用した多選択肢化の検討—A study of the multiple choices for the brain computer interface based on the transient VEP analysis using blinking light with variable onset interval and changing luminance—光・量子デバイス研究会・医療工学応用一般(QIE-2)
小佐野 涼平; 王 夢夢; 松野 省吾; 板倉 直明; 水戸 和幸; 水野 統太
日本語, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan / 光・量子デバイス研究会 [編], 電気学会, http://id.ndl.go.jp/bib/028585405
発表日 2016年04月22日
開催期間 2016年04月22日- 2016年04月22日 - transient型VEP解析を用いた脳波インタフェースにおける点灯間隔変動点滅光の輝度変化を利用した多選択肢化の検討 (光・量子デバイス研究会・医療工学応用一般(QIE-2))
小佐野 涼平; 王 夢夢; 松野 省吾; 板倉 直明; 水戸 和幸; 水野 統太
日本語, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan, http://ci.nii.ac.jp/naid/40021348513
発表日 2016年04月22日 - スマートデバイスにおける少操作自由度・多選択肢型文字入力方式の提案と検討 (光・量子デバイス研究会・医療工学応用一般(QIE-2))
千田 進; 松野 省吾; 明比 宏太; 板倉 直明; 水野 統太; 水戸 和幸
日本語, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan, http://ci.nii.ac.jp/naid/40021348507
発表日 2016年04月22日 - H-2-7 入力インタフェースのための瞬目自動識別法(H-2.ヒューマン情報処理,一般セッション)
松野 省吾; 大山 実; 阿部 清彦; 大井 尚一; 板倉 直明
日本語, 電子情報通信学会基礎・境界ソサイエティ/NOLTAソサイエティ大会講演論文集, http://ci.nii.ac.jp/naid/110010023308
発表日 2016年03月01日 - D-9-21 屋内位置情報を用いたTo-Doリストサポートシステムの提案(D-9.ライフインテリジェンスとオフィス情報システム,一般セッション)
田中 幹衡; 吉田 慶介; 松野 省吾; 大山 実
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110010036767
発表日 2016年03月01日 - D-9-20 スマートフォンを用いた消費カロリー推定手法の検討(D-9.ライフインテリジェンスとオフィス情報システム,一般セッション)
吉田 慶介; 田中 幹衡; 松野 省吾; 大山 実
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110010036766
発表日 2016年03月01日 - H-2-8 視線入力インタフェースのユーザビリティ計測に関する検討(H-2.ヒューマン情報処理,一般セッション)
阿部 清彦; 佐藤 寛修; 松野 省吾; 大井 尚一; 大山 実
日本語, 電子情報通信学会基礎・境界ソサイエティ/NOLTAソサイエティ大会講演論文集, http://ci.nii.ac.jp/naid/110010023309
発表日 2016年03月01日 - 顔面熱画像を利用した瞬間型感性評価技術
水野統太; 河連俊介; 松野省吾; 明比宏太; 水戸和幸; 板倉直明
日本感性工学会大会予稿集(CD-ROM)
発表日 2016年
開催期間 2016年- 2016年 - タッチパネル操作によらないスマートデバイス向け入力インタフェースの検討
徳田倫太郎; 千田進; 明比宏太; 松野省吾; 水戸和幸; 水野統太; 板倉直明
電気学会計測研究会資料
発表日 2016年
開催期間 2016年- 2016年 - 視線入力インタフェースのユーザビリティ計測
阿部清彦; 佐藤寛修; 松野省吾; 大井尚一; 大山実
電気学会電子・情報・システム部門大会講演論文集(CD-ROM)
発表日 2016年
開催期間 2016年- 2016年 - O-048 モバイル端末を用いた屋内外判定法の検討(O分野:情報システム,一般論文)
尾崎 勝義; 田中 幹衡; 吉田 慶介; 松野 省吾; 大山 実
日本語, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988348
発表日 2015年08月24日 - O-047 複数センサを利用した移動状態の推定手法に関する検討(O分野:情報システム,一般論文)
吉田 慶介; 小栗 悠生; 松野 省吾; 大山 実
日本語, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988347
発表日 2015年08月24日 - O-057 状況に応じたTo-Doリストサポートシステムの提案(O分野:情報システム,一般論文)
田中 幹衡; 吉田 慶介; 松野 省吾; 大山 実
日本語, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988094
発表日 2015年08月24日 - J-039 OpenCVを用いたEye Glance入力インタフェースの改良(J分野:ヒューマンコミュニケーション&インタラクション,一般論文)
明比 宏太; 松野 省吾; 板倉 直明; 水野 統太; 水戸 和幸
日本語, 情報科学技術フォーラム講演論文集, http://ci.nii.ac.jp/naid/110009988182
発表日 2015年08月24日 - OpenCVを用いた Eye Glance入力インタフェースの検討 (光・量子デバイス研究会・医療工学応用一般(QIE-1))
明比 宏太; 松野 省吾; 板倉 直明; 水戸 和幸; 水野 統太
日本語, 電気学会研究会資料. OQD = The papers of technical meeting on optical and quantum devices, IEE Japan, http://ci.nii.ac.jp/naid/40021348348
発表日 2015年04月24日 - D-9-19 ユーザ端末単体での移動状態の識別に関する検討(D-9.ライフインテリジェンスとオフィス情報システム,一般セッション)
吉田 慶介; 松野 省吾; 大山 実
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110009944921
発表日 2015年02月24日 - A-15-16 オプティカルフローを用いたまばたきの計測(A-15.ヒューマン情報処理,一般セッション)
松野 省吾; 明比 宏太; 板倉 直明; 水野 統太; 水戸 和幸
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110009944389
発表日 2015年02月24日 - B-18-63 端末の電波受信感度を考慮した位置推定手法の検討(B-18.知的環境とセンサネットワーク,一般セッション)
田中 幹衡; 吉田 慶介; 松野 省吾; 大山 実
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110009928689
発表日 2015年02月24日 - モバイル端末を用いた屋内外判定法の検討
尾崎勝義; 田中幹衡; 吉田慶介; 松野省吾; 大山実
情報科学技術フォーラム講演論文集
発表日 2015年
開催期間 2015年- 2015年 - B-15-8 屋内外位置情報サービスをシームレスに切り替え可能なネットワーク構成の検討(B-15.モバイルネットワークとアプリケーション,一般セッション)
吉田 慶介; 古川 雅大; 大山 実; 松野 省吾
日本語, 電子情報通信学会ソサイエティ大会講演論文集, http://ci.nii.ac.jp/naid/110009883097
発表日 2014年09月09日 - B-18-25 屋内位置情報に関する検討(B-18.知的環境とセンサネットワーク,一般セッション)
田中 幹衡; 吉田 慶介; 松野 省吾; 大山 実
日本語, 電子情報通信学会ソサイエティ大会講演論文集, http://ci.nii.ac.jp/naid/110009883715
発表日 2014年09月09日 - B-15-10 歩行状態識別に関する検討 : オンライン学習の適用(B-15.モバイルネットワークとアプリケーション,一般セッション)
吉田 慶介; 松野 省吾; 大山 実
日本語, 電子情報通信学会ソサイエティ大会講演論文集, http://ci.nii.ac.jp/naid/110009883099
発表日 2014年09月09日 - 意図的な瞬目に現れる個人的特徴に関する一検討
松野省吾; 大山実; 阿部清彦; 佐藤寛修; 大井尚一
日本語, 第76回全国大会講演論文集, http://ci.nii.ac.jp/naid/170000086905, 瞬目を用いてコンピュータ等への入力操作を行うために随意性瞬目と不随意性瞬目の自動的な識別が望まれている。従来の画像処理を用いた手法による瞬目検出は、時間分解能の不足等により瞬目の特徴を詳細に取得することが困難であり、瞬目種類を識別するためには専用の機器が必要である。そこで、筆者らは動画像を構成するインタレース画像をフィールドに分割するフレーム分割法を用いることで一般的なNTSCビデオカメラにおいても瞬目の特徴を取得することに成功した。更に、この手法を用いた実験を行い、音教示による意図的な瞬目の個人による生起傾向の差異を調査したので報告する。
発表日 2014年03月11日 - D-9-28 スマートフォンを用いた移動状態識別手法(D-9.ライフインテリジェンスとオフィス情報システム,一般セッション)
吉田 慶介; 松野 省吾; 大山 実
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110009827770
発表日 2014年03月04日 - 屋内外位置情報サービスをシームレスに切り替え可能なネットワーク構成の検討
吉田慶介; 古川雅大; 大山実; 松野省吾
電子情報通信学会大会講演論文集(CD-ROM)
発表日 2014年
開催期間 2014年- 2014年 - A-15-11 ハイスピードカメラによる瞬目特徴パラメータの計測(A-15.ヒューマン情報処理)
阿部 清彦; 佐藤 寛修; 松野 省吾; 大井 尚一; 大山 実
日本語, 電子情報通信学会総合大会講演論文集, http://ci.nii.ac.jp/naid/110009699288
発表日 2013年03月05日
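上記の講演のうち「日本語大規模SNS+Webコーパスによる単語分散表現のモデル構築」で述べられている,単語類似度算出タスクとSpearmanの順位相関係数による分散表現モデルの評価は,おおよそ次のような手順になる.以下はgensimとSciPyを用いた説明用の最小スケッチであり,モデルのファイル名や評価用の単語対・人手類似度の値はいずれも仮のものである.
```python
from gensim.models import KeyedVectors
from scipy.stats import spearmanr

# モデルファイル名は説明用の仮のものであり,公開モデルの実際のファイル名ではない
model = KeyedVectors.load_word2vec_format("sns_web_ja.vec", binary=False)

# 単語類似度評価セット: (単語1, 単語2, 人手による類似度) の組(値は説明用のダミー)
word_pairs = [
    ("猫", "犬", 7.5),
    ("映画", "俳優", 6.0),
    ("経済", "りんご", 1.2),
]

human_scores, model_scores = [], []
for w1, w2, gold in word_pairs:
    # 語彙に含まれる単語対のみ評価対象にする(gensim 4.x の語彙参照)
    if w1 in model.key_to_index and w2 in model.key_to_index:
        human_scores.append(gold)
        model_scores.append(model.similarity(w1, w2))  # 分散表現のコサイン類似度

# 人手評価とモデル出力の順位の一致度を Spearman の順位相関係数で評価する
rho, p_value = spearmanr(human_scores, model_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```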
所属学協会
共同研究・競争的資金等の研究課題
- 仮想空間におけるアバターを介した感性情報共有の高解像度化
松野 省吾
日本学術振興会, 科学研究費助成事業, 群馬大学, 若手研究, 24K20878
研究期間 2024年04月01日 - 2028年03月31日 - 視線と瞬目を用いたキャリブレーションフリー入力インタフェース
阿部 清彦; 佐藤 寛修; 松野 省吾
日本学術振興会, 科学研究費助成事業, 東京電機大学, 基盤研究(C), 研究分担者, ユーザの視線や瞬目(瞬き)の情報によりパソコンなどを操作する視線入力は、重度肢体不自由者など一般的な入力インタフェースの使用が困難な人たちでも利用が可能である。しかしながら、従来の視線入力インタフェースの多くは、使用前にユーザごとにキャリブレーション(較正)を行なう必要があり、使用に煩雑さがあった。本研究では、畳み込みニューラルネットワークを利用することにより、ノートパソコンのインカメラで撮影されたユーザの眼球近傍画像から視線と瞬目の情報をリアルタイムで捉え、パソコンを操作する新しい入力インタフェースを開発する。この視線入力インタフェースは、キャリブレーションを必要としないという大きな特長がある。 令和3年度の研究では、視線方向識別のための学習モデルを新規に作成し、研究代表者らの従来の手法では上下左右正面の5方向の視線を識別していたものを、左上右上を追加し7方向の識別を可能とした。これにより、パソコン画面上でのカーソル移動の操作性が向上し、カーソル移動だけでなく入力画面の切り替えなどを視線のみで簡単に行える入力インタフェースを構築することができる。 また、畳み込みニューラルネットワークを時間軸方向に拡張した3D-CNNを応用し、ユーザの瞬目を検出する新しい手法を開発した。新たに構築した3D-CNNを用い、意識的な瞬目と無意識に生じる瞬目を撮影した眼球近傍の動画像ファイルから学習モデルを構築したところ、意識的な瞬目の検出を高精度にキャリブレーションフリーで行なえることを実験により確認した。 令和3年度に開発したこれらの手法のうち、視線方向識別については一般的なパソコンでリアルタイム処理が可能であるものの、瞬目検出についてはまだオフライン処理を行っている。今後、瞬目検出のリアルタイム処理を実現し、視線方向識別手法と組み合わせて実用的な視線入力インタフェースを構築する。, 21K12801
研究期間 2021年04月01日 - 2024年03月31日 - 心の働きを自然に伝える遠隔コミュニケーション支援インタフェースの開発
松野 省吾
日本学術振興会, 科学研究費助成事業, 群馬大学, 若手研究, 本研究では,遠隔・仮想空間におけるコミュニケーションの質的な向上を目指し,非言語情報によるインタラクションを感性情報によって活性化するコミュニケーション支援インタフェースの開発を目的としている.具体的には,ユーザの視線と瞬目の情報を捉え,コミュニケーションにおける話者の感性情報の表出を計測することで,遠隔コミュニケーション時に欠落してしまう非言語インタラクションの復元を試み,情報の相互共有を促すシステムの構築を目指す.このようなシステムの要素技術として,視線計測と瞬目種類の検出と識別が必要となるが,これらのうち瞬目種類の識別について,研究代表者は実用的な方法を既に開発しており,本研究では話者の感性情報の表出と,視線移動や瞬目の生起の関係性について研究を進めている. 視線移動や瞬目は数百ミリ秒で一連の動作を完了する比較的高速な生理現象である.そのため,専用の計測機器(アイトラッカー)を使用して,コミュニケーション時における話者の視線移動を高時間分解能で記録することで,感性情報を推測するための基準となるパラメータの調査を進めている.このとき,研究代表者らの開発した瞬目識別アルゴリズムとアイトラッカーの提供するAPIを組み合わせることで,特定の動作を自動的に計量するソフトウェアの開発を行い,本研究で構築する計測環境であっても視線計測と瞬目識別の自動計測が可能であることを予備実験により確認している., 21K17841
研究期間 2021年04月01日 - 2024年03月31日 - 表情による駆け引きを実現するアバター間コミュニケーション技術の構築
公益財団法人 中山隼雄科学技術文化財団, 研究助成事業, 2021年度助成研究(A-2), 研究代表者
研究期間 2022年03月 - 2024年02月 - 携帯端末に適応した視線・脳波入力インタフェースシステム
板倉 直明; 水戸 和幸; 水野 統太; 松野 省吾; 明比 宏太
日本学術振興会, 科学研究費助成事業, 電気通信大学, 基盤研究(C), 手指を使用せずに機器への入力を可能とする視線・脳波入力インタフェースを携帯機器で実現するための研究を行った。視線入力では、Eye Glance(瞬間他所視)入力を使用し、脳波入力では、点灯間隔変動刺激を使用した。視線入力では、Open CVを利用した画像解析で90%以上の判別率でEye Glance入力を検出できた。脳波入力では、点灯間隔変動刺激を利用したtransient型脳波解析で90%以上の判別率で注視点滅刺激を検出できた。さらに、携帯機器のインタフェースとして、少数自由度で多くの選択肢を入力できる汎用的なデザインを提案し、手首動作を用いたジェスチャ入力方式の検討も行った。, 16K01538
研究期間 2016年04月01日 - 2019年03月31日
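上記の研究課題(21K12801)では,眼球近傍の動画像を3D-CNN(時間方向にも畳み込むCNN)で解析して随意性瞬目を検出する手法が述べられている.入出力テンソルの形だけを確認できる,PyTorchによる説明用の最小スケッチを以下に示す.層構成やクラス数は仮定であり,実際の研究で用いられたネットワークとは異なる.
```python
import torch
import torch.nn as nn

class Blink3DCNN(nn.Module):
    """眼球近傍の動画像(フレーム列)から瞬目種類を識別する 3D-CNN の説明用最小構成.
    チャネル数・層数などは仮の値であり,実際の研究のネットワーク構成ではない."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # 空間方向に加えて時間方向にも畳み込む
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # (16, 1, 1, 1) に集約
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (バッチ, チャネル=1, フレーム数, 高さ, 幅)
        h = self.features(x).flatten(1)
        return self.classifier(h)

if __name__ == "__main__":
    model = Blink3DCNN(num_classes=2)        # 随意性/自発性の2クラスを仮定
    clip = torch.randn(4, 1, 16, 32, 64)     # ダミーの眼球近傍動画像クリップ
    print(model(clip).shape)                 # torch.Size([4, 2])
```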