Developer(s) | University of Leipzig, Freie Universität Berlin, OpenLink Software
---|---
Initial release | 23 January 2007
Stable release | DBpedia 3.7 / 11 September 2011[1]
Written in | Scala, Java, VSP
Operating system | Virtuoso Universal Server
Type | Semantic Web, Linked Data
License | GNU General Public License
Website | dbpedia.org
DBpedia is a project aiming to extract structured content from the information created as part of the Wikipedia project. This structured information is then made available on the World Wide Web.[2] DBpedia allows users to query relationships and properties associated with Wikipedia resources, including links to other related datasets.[3] DBpedia has been described by Tim Berners-Lee as one of the more famous parts of the Linked Data project.[4]
The project was started by people at the Free University of Berlin and the University of Leipzig, in collaboration with OpenLink Software,[5] and the first publicly available dataset was published in 2007. It is made available under free licences, allowing others to reuse the dataset.
Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables, categorisation information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried.
As of September 2011[update], the DBpedia dataset describes more than 3.64 million things, of which 1.83 million are classified in a consistent ontology, including 416,000 persons, 526,000 places, 106,000 music albums, 60,000 films, 17,500 video games, 169,000 organizations, 183,000 species and 5,400 diseases. The DBpedia dataset features labels and abstracts for these 3.64 million things in up to 97 different languages; 2,724,000 links to images and 6,300,000 links to external web pages; 6,200,000 external links into other RDF datasets; 740,000 Wikipedia categories; and 18,100,000 YAGO2 categories. From this dataset, information spread across multiple pages can be extracted; for example, book authorship can be assembled from pages about the work or about the author.
The DBpedia project uses the Resource Description Framework (RDF) to represent the extracted information. As of September 2011[update], the DBpedia dataset consists of over 1 billion pieces of information (RDF triples) out of which 385 million were extracted from the English edition of Wikipedia and 665 million were extracted from other language editions.[6]
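The triple model described above can be sketched in plain Python. This is a toy in-memory illustration, not DBpedia's actual storage: the prefixed names mimic DBpedia's notation, and the facts shown are abbreviated examples.

```python
# A toy model of RDF: each fact is a (subject, predicate, object) triple.
# URIs are abbreviated with prefixes, as in DBpedia's own notation.
triples = [
    ("db:Tokyo_Mew_Mew", "rdf:type", "dbo:Manga"),
    ("db:Tokyo_Mew_Mew", "dbprop:illustrator", "db:Mia_Ikumi"),
    ("db:Mia_Ikumi", "rdf:type", "dbo:Person"),
]

def objects(subject, predicate):
    """Return every object o such that (subject, predicate, o) is asserted."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("db:Tokyo_Mew_Mew", "dbprop:illustrator"))  # ['db:Mia_Ikumi']
```

Everything in the dataset, from a person's birthplace to a film's director, reduces to such triples, which is what makes a single uniform query language over all of it possible.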
One of the challenges in extracting information from Wikipedia is that the same concepts can be expressed using different properties in templates, such as birthplace and placeofbirth. Because of this, queries about where people were born would have to search for both of these properties in order to get more complete results. As a result, the DBpedia Mapping Language has been developed to help in mapping these properties to an ontology while reducing the number of synonyms. Due to the large diversity of infoboxes and properties in use on Wikipedia, the process of developing and improving these mappings has been opened to public contributions.[7]
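The idea behind such mappings can be sketched as a simple normalisation table. The ontology property name dbo:birthPlace is a real DBpedia property, but the mapping table and fallback rule here are an illustrative fragment, not the actual DBpedia Mapping Language.

```python
# Illustrative fragment: map raw infobox property names, which vary between
# Wikipedia templates, onto one canonical ontology property.
SYNONYMS = {
    "birthplace": "dbo:birthPlace",
    "placeofbirth": "dbo:birthPlace",
    "place_of_birth": "dbo:birthPlace",
}

def normalise(raw_property):
    """Map a raw property to the ontology; unmapped names keep a raw namespace."""
    key = raw_property.lower().replace(" ", "_")
    return SYNONYMS.get(key, "dbprop:" + key)

print(normalise("placeofbirth"))  # dbo:birthPlace
print(normalise("occupation"))    # dbprop:occupation
```

With such a mapping in place, a query for dbo:birthPlace finds people regardless of which template spelling their source article used.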
DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across many different Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL. For example, imagine you were interested in the Japanese shōjo manga series Tokyo Mew Mew, and wanted to find the genres of other works written by its illustrator. DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew, Mia Ikumi and on works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, the following query can be asked without needing to know exactly which entry carries each fragment of information, and will list related genres:
PREFIX dbprop: <http://dbpedia.org/property/>
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?who ?work ?genre
WHERE {
  db:Tokyo_Mew_Mew dbprop:illustrator ?who .
  ?work dbprop:author ?who .
  OPTIONAL { ?work dbprop:genre ?genre } .
}
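What this query does can be sketched in Python over a toy triple store. The work and author names below come from the article; the genre value is an illustrative placeholder, not asserted DBpedia data.

```python
# Toy triple store. Work/author facts follow the article's example; the
# genre value is an illustrative placeholder.
triples = [
    ("db:Tokyo_Mew_Mew", "dbprop:illustrator", "db:Mia_Ikumi"),
    ("db:Super_Doll_Licca-chan", "dbprop:author", "db:Mia_Ikumi"),
    ("db:Koi_Cupid", "dbprop:author", "db:Mia_Ikumi"),
    ("db:Super_Doll_Licca-chan", "dbprop:genre", "genre:Example_A"),
    # db:Koi_Cupid has no genre triple: OPTIONAL keeps it in the results.
]

def match(pattern):
    """Yield triples matching a pattern; None acts as a variable (wildcard)."""
    for t in triples:
        if all(p is None or p == v for p, v in zip(pattern, t)):
            yield t

# Join the patterns exactly as the SPARQL query does.
results = []
for _, _, who in match(("db:Tokyo_Mew_Mew", "dbprop:illustrator", None)):
    for work, _, _ in match((None, "dbprop:author", who)):
        genres = [g for _, _, g in match((work, "dbprop:genre", None))]
        for genre in genres or [None]:  # OPTIONAL: keep rows with no genre
            results.append((who, work, genre))

for row in results:
    print(row)
```

The join variable ?who ties the two required patterns together, while the OPTIONAL clause behaves like a left outer join: works with no recorded genre still appear, with the genre column unbound.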
DBpedia has a broad scope of entities covering many areas of human knowledge. This makes it a natural hub for connecting datasets, as external datasets can link to its concepts.[8] The DBpedia dataset is interlinked at the RDF level with various other Open Data datasets on the Web, enabling applications to enrich DBpedia data with data from those datasets. As of January 2011[update], there are more than 6.5 million interlinks between DBpedia and external datasets including: Freebase, OpenCyc, UMBEL, GeoNames, MusicBrainz, CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, UniProt, Bio2RDF, and US Census data.[9][10] The Thomson Reuters initiative OpenCalais, the Linked Open Data project of the New York Times, the Zemanta API and DBpedia Spotlight also include links to DBpedia.[11][12][13] The BBC uses DBpedia to help organize its content.[14][15] Faviki uses DBpedia for semantic tagging.[16]
Amazon provides a DBpedia Public Data Set that can be integrated into Amazon Web Services applications.[17]