International: March 2017 Archive
Title: Conducting Online Behavioral Research Using Crowdsourcing Services in Japan
Journal (bibliographic information): Frontiers in Psychology, 8:378, 2017.
Recent research on human behavior has often collected empirical data from the
online labor market through a process known as crowdsourcing. In addition to the
United States and the major European countries, there are several crowdsourcing
services in Japan. For research purposes, Amazon's Mechanical Turk (MTurk) is the
most widely used platform among those services. Previous validation studies have shown
many commonalities between MTurk workers and participants from traditional samples,
not only in personality but also in performance on reasoning tasks. The present
study aims to extend these findings to non-MTurk (i.e., Japanese) crowdsourcing
samples, in which workers have different ethnic backgrounds from those of MTurk workers.
We conducted three surveys (N = 426, 453, 167, respectively) designed to compare
Japanese crowdsourcing workers and university students in terms of their
demographics, personality traits, reasoning skills, and attention to instructions.
The results generally align with previous studies and suggest that non-MTurk
participants are also eligible for behavioral research. Furthermore, small-screen
devices were found to impair participants' attention to instructions. Several
recommendations concerning this sample are presented.
Title: The Feasibility of a Japanese Crowdsourcing Service for Experimental Research in Psychology
Journal (bibliographic information): SAGE Open, 7(1), 2017.
Recent studies have empirically validated the data obtained from Amazon's Mechanical Turk.
Amazon's Mechanical Turk workers behaved similarly not only in simple surveys but also in tasks
used in cognitive behavioral experiments that employ multiple trials and require
continuous attention to the task. The present study aimed to extend these findings
to data from a Japanese crowdsourcing pool in which participants have different
ethnic backgrounds from Amazon's Mechanical Turk workers. In five cognitive
experiments, such as the Stroop and Flanker experiments, the reaction times and
error rates of Japanese crowdsourcing workers and those of university students
were compared and contrasted. The results were consistent with those of previous
studies, although the students responded more quickly but less accurately than the workers.
These findings suggested that the Japanese crowdsourcing sample is another eligible participant
pool in behavioral research; however, further investigations are needed to
address issues of qualitative differences between student and worker samples.
Title: The difference in foresight using the scanning method between experts and non-experts
Journal (bibliographic information): Technological Forecasting and Social Change
We examined the factors that produce differences in generating scenarios about the near future using the scanning method. Participants were asked to briefly read (scan) 151 articles about new technology, the latest customs, fashion, social change, value-system transitions, or emerging social problems, and then to generate three scenarios about the near future based on the articles. We compared the generated scenarios between scanning-method experts and non-experts with no prior experience with the method. We found that experts generated more unique scenarios than non-experts did, and that experts and non-experts differed in the diversity of articles they referenced when generating scenarios. We discuss the relationship between the present findings and previous findings on divergent thinking.
Hidehito Honda, hitohonda.02[at]gmail.com
Kazuhiro Ueda, ueda[at]gregorio.c.u-tokyo.ac.jp