References

Archak, N. (2010). Money, glory and cheap talk: analyzing strategic behavior of contestants in simultaneous crowdsourcing contests on TopCoder.com. In Proceedings of the 19th international conference on World Wide Web (pp. 21–30). New York: ACM. doi: 10.1145/1772690.1772694.

Burke, J.A., Estrin, D., Hansen, M., Parker, A., Ramanathan, N., Reddy, S., et al. (2006). Participatory sensing. U.C.L.A. Center for Embedded Network Sensing, Retrieved March 2, 2015, from: https://escholarship.org/uc/item/19h777qd.

Chesbrough, H. (2003). Open innovation: the new imperative for creating and profiting from technology. Boston: Harvard Business School Press.

Cuel, R., Morozova, O., Rohde, M., Simperl, E., Siorpaes, K., Tokarchuk, O., et al. (2011). Motivation mechanisms for participation in human-driven semantic content creation. International Journal of Knowledge Engineering and Data Mining, 1(4), 331–349. Retrieved March 3, 2015, from http://www.wiwi.uni-siegen.de/wirtschaftsinformatik/paper/2011/cuel-motivation_participation_2011.pdf.

Dawson, R., & Bynghall, S. (2012). Getting results from crowds. San Francisco, CA: Advanced Human Technologies.

Dow, S., Kulkarni, A., Bunge, B., Nguyen, T., Klemmer, S., & Hartmann, B. (2011). Shepherding the crowd: managing and providing feedback to crowd workers. In CHI’11 Extended Abstracts on Human Factors in Computing Systems (pp. 1669–1674). New York: ACM. doi: 10.1145/1979742.1979826.

Drăgan, L., Luczak-Rösch, M., Simperl, E., Packer, H., Moreau, L., & Berendt, B. (2015). A-posteriori provenance-enabled linking of publications and datasets via crowdsourcing. D-Lib Magazine, 21(1/2). Retrieved April 7, 2015, from http://www.dlib.org/dlib/january15/dragan/01dragan.html.

Feyisetan, O., Simperl, E., van Kleek, M., & Shadbolt, N. (2015). Improving paid microtasks through gamification and adaptive furtherance incentives. In Proceedings of the 24th International Conference on World Wide Web (pp. 333–343). New York: ACM. doi: 10.1145/2736277.2741639.

Grudin, J. (1994). Computer-supported cooperative work: history and focus. Computer, 27(5), 19–26. doi: 10.1109/2.291294.

Haythornthwaite, C. (2009). Crowds and communities: light and heavyweight models of peer production. In Proceedings of the 42nd Hawaii International Conference on System Sciences (HICSS) (pp. 1–10). Washington, DC: IEEE. doi: 10.1109/HICSS.2009.137.

Hendler, J., & Berners-Lee, T. (2010). From the semantic web to social machines: a research challenge for AI on the World Wide Web. Artificial Intelligence, 174(2), 156–161. doi: 10.1016/j.artint.2009.11.010.

Heymann, P., & Garcia-Molina, H. (2011). Turkalytics: analytics for human computation. In Proceedings of the 20th international conference on World Wide Web (pp. 477–486). New York: ACM. doi: 10.1145/1963405.1963473.

Hitchcock, S., Brody, T., Gutteridge, C., Carr, L., Hall, W., Harnad, S., et al. (2002). Open citation linking: the way forward. D-Lib Magazine, 8(10). Retrieved April 7, 2015, from http://www.dlib.org/dlib/october02/hitchcock/10hitchcock.html.

Howe, J. (2006). The rise of crowdsourcing. Wired Magazine, 14(6), 1–4. Retrieved April 7, 2015, from http://archive.wired.com/wired/archive/14.06/crowds.html?pg=1&topic=crowds&topic_set=.

Ipeirotis, P.G., Provost, F., & Wang, J. (2010). Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD workshop on human computation (pp. 64–67). New York: ACM. Retrieved April 7, 2015, from http://misrc.csom.umn.edu/workshops/2012/fall/Ipeirotis.pdf.

Kaufmann, N., Schulze, T., & Veit, D. (2011). More than fun and money. Worker motivation in crowdsourcing – a study on Mechanical Turk. In Proceedings of the Seventeenth Americas Conference on Information Systems (AMCIS) (pp. 1–11). AIS Electronic Library.

Le, J., Edmonds, A., Hester, V., & Biewald, L. (2010). Ensuring quality in crowdsourced search relevance evaluation: the effects of training question distribution. In Proceedings of the SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation (pp. 21–26). New York: ACM. Retrieved April 7, 2015, from http://ir.ischool.utexas.edu/cse2010/materials/leetal.pdf.

Lévy, P., & Bonomo, R. (1999). Collective intelligence: Mankind’s emerging world in cyberspace. Cambridge, MA: Perseus Publishing.

Malone, T., Laubacher, R., & Dellarocas, C. (2009). Harnessing crowds: mapping the genome of collective intelligence. MIT Sloan Research Paper No. 4732-09. Retrieved April 7, 2015, from http://cci.mit.edu/publications/CCIwp2009-01.pdf.

Malone, T.W., Laubacher, R., & Dellarocas, C. (2010). The collective intelligence genome. MIT Sloan Management Review, 51(3), 20–31. Retrieved March 2, 2015, from http://gaius.cbpp.uaa.alaska.edu/afef/CollectiveIntel.pdf.

Oleson, D., Sorokin, A., Laughlin, G., Hester, V., Le, J., & Biewald, L. (2011). Programmatic gold: targeted and scalable quality assurance in crowdsourcing. In Human Computation: Papers from the 2011 AAAI Workshop (WS-11-11) (pp. 43–48). Retrieved March 3, 2015, from http://www.aaai.org/ocs/index.php/WS/AAAIW11/paper/download/3995/4267.

Parameswaran, M., & Whinston, A.B. (2007). Research issues in social computing. Journal of the Association for Information Systems, 8(6), 336–350.

Quinn, A.J., & Bederson, B.B. (2011). Human computation: a survey and taxonomy of a growing field. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1403–1412). New York: ACM. doi: 10.1145/1978942.1979148.

Raykar, V., & Yu, S. (2011). Ranking annotators for crowdsourced labeling tasks. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems (pp. 1809–1817). Red Hook, NY: Curran Associates. Retrieved April 7, 2015, from http://papers.nips.cc/paper/4469-ranking-annotators-for-crowdsourced-labeling-tasks.pdf.

Simperl, E., Cuel, R., & Stein, M. (2013). Incentive-centric semantic web application engineering. Synthesis Lectures on the Semantic Web: Theory and Technology, 3(1), 1–117. doi: 10.2200/S00460ED1V01Y201212WBE004.

Singer, Y., & Mittal, M. (2013). Pricing mechanisms for crowdsourcing markets. In Proceedings of the 22nd international conference on World Wide Web (pp. 1157–1166). New York: ACM. Retrieved April 7, 2015, from http://www.eecs.harvard.edu/econcs/pubs/Singer_www13.pdf.

Smart, P., Simperl, E., & Shadbolt, N. (2014). A taxonomic framework for social machines. In D. Miorandi, V. Maltese, M. Rovatsos, A. Nijholt, & J. Stewart (Eds.), Social Collective Intelligence (pp. 51–85). Cham, Switzerland: Springer.

Surowiecki, J. (2005). The wisdom of crowds. New York: Anchor.

Tapscott, D., & Williams, A.D. (2008). Wikinomics: how mass collaboration changes everything. New York: Penguin.

Thaler, S., Simperl, E., & Wölger, S. (2012). An experiment in comparing human computation techniques. Internet Computing, IEEE, 16(5), 52–58. doi: 10.1109/MIC.2012.67.

Von Ahn, L. (2009). Human computation. In Proceedings of the 46th Annual Design Automation Conference (DAC’09) (pp. 418–419). New York: ACM. doi: 10.1145/1629911.1630023.

Von Ahn, L., & Dabbish, L. (2008). Designing games with a purpose. Communications of the ACM, 51(8), 58–67. doi: 10.1145/1378704.1378719.

Wang, F.-Y., Carley, K.M., Zeng, D., & Mao, W. (2007). Social computing: from social informatics to social intelligence. Intelligent Systems, IEEE, 22(2), 79–83. doi: 10.1109/MIS.2007.41.

Wiggins, A., & Crowston, K. (2011). From conservation to crowdsourcing: a typology of citizen science. In Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS) (pp. 1–10). Washington, DC: IEEE Computer Society Press. doi: 10.1109/HICSS.2011.207.