Please use this identifier to cite or link to this item: http://hdl.handle.net/2307/5036
Title: Crowdsourcing large scale data extraction from the web: bridging automatic and supervised approaches
Authors: Qiu, Disheng
Advisor: Merialdo, Paolo
Issue Date: 8-Jun-2015
Publisher: Università degli studi Roma Tre
Abstract: The Web is a rich source of data that represents a valuable resource for many organizations. Data on the Web is usually encoded in HTML pages and is therefore not directly processable; a data extraction step, performed by software modules called wrappers, is required to use these data. Several attempts have been made to reduce the effort of generating wrappers. Supervised approaches, based on annotated pages, achieve high accuracy, but the cost of the training data, i.e. the annotations, limits their scalability. Unsupervised approaches have been developed to achieve high scalability, but the diversity of the data sources can drastically limit the accuracy of the results. Overall, obtaining both high accuracy and high scalability is challenging because of the scale of the Web and the heterogeneity of the published information. In this dissertation we describe a solution to address these challenges: to scale to the Web we define an unsupervised approach built on several wrapper inference techniques; to control the quality we define a quality model that determines at runtime whether human feedback is required; feedback is provided by workers enrolled from a crowdsourcing platform. Crowdsourcing is an effective way to reduce the cost of the annotation process, but previous proposals were designed for experts and are not suitable for the crowd, since workers on crowdsourcing platforms are typically non-expert. An open issue in scaling wrapper generation is the collection of the pages to wrap; we describe an end-to-end pipeline that discovers and crawls relevant websites in a case study on product specifications. An extensive evaluation with real data confirms that: (i) we can generate accurate wrappers with few simple interactions from the crowd; (ii) we can accurately estimate workers' error rates and select at runtime the number of workers to enroll for a task; (iii) we can effectively start with unsupervised approaches and switch to the crowd to increase quality; and (iv) we can discover thousands of websites from a small initial seed.
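
A minimal, self-contained Python sketch of one way to read point (ii) of the abstract: if each crowd worker's error rate on a yes/no annotation question is estimated as eps, the probability that a majority vote of n independent workers is correct is a binomial tail, and the smallest n meeting a target confidence can be chosen at runtime. The function names and the simple majority-voting scheme are illustrative assumptions, not the quality model defined in the dissertation.

    from math import comb

    # Illustrative sketch (not the thesis' actual quality model): each worker
    # answers a yes/no annotation question independently with error rate `eps`.

    def majority_correct_prob(n: int, eps: float) -> float:
        """Probability that more than half of n independent workers answer correctly."""
        p = 1.0 - eps
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

    def workers_to_enroll(eps: float, target: float = 0.95, max_workers: int = 15) -> int:
        """Smallest odd number of workers whose majority vote meets the target confidence."""
        for n in range(1, max_workers + 1, 2):  # odd n avoids ties
            if majority_correct_prob(n, eps) >= target:
                return n
        return max_workers

    if __name__ == "__main__":
        # Example: with an estimated 20% per-worker error rate, 7 workers
        # are enough for the majority vote to exceed 95% confidence.
        print(workers_to_enroll(eps=0.20, target=0.95))  # -> 7

In this reading, the runtime switch from the unsupervised wrapper to the crowd simply compares the quality model's confidence estimate against the target threshold and enrolls workers only when it falls short.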
URI: http://hdl.handle.net/2307/5036
Access Rights: info:eu-repo/semantics/openAccess
Appears in Collections:X_Dipartimento di Ingegneria
T - Tesi di dottorato


