We are computer science students at the University of Kassel who have joined together under the umbrella of the Gesellschaft für Informatik e.V. as only its second student group nationwide.
As a GI student group we will have many opportunities to realize ideas that lie somewhat outside our everyday lectures. This starts with organizing company excursions, continues with running competitions (e.g., a Capture-the-Flag event coming soon), and does not even stop at ideas such as giving talks (in schools or to fellow students).
Social bookmarking systems and their emergent information structures, known as folksonomies, are increasingly important data sources for Semantic Web applications. A key question for harvesting semantics from these systems is how to extend and adapt traditional notions of similarity to folksonomies, and which measures are best suited for applications such as navigation support, semantic search, and ontology learning. Here we build an evaluation framework to compare various general folksonomy-based similarity measures derived from established information-theoretic, statistical, and practical measures. Our framework deals generally and symmetrically with users, tags, and resources. For evaluation purposes we focus on similarity among tags and resources, considering different ways to aggregate annotations across users. After comparing how tag similarity measures predict user-created tag relations, we provide an external grounding by user-validated semantic proxies based on WordNet and the Open Directory. We also investigate the issue of scalability. We find that mutual information with distributional micro-aggregation across users yields the highest accuracy, but is not scalable; per-user projection with collaborative aggregation provides the best scalable approach via incremental computations. The results are consistent across resource and tag similarity.
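As a minimal sketch of the kind of information-theoretic tag similarity discussed above, the following computes a pointwise-mutual-information score between two tags over their resource co-occurrences, with annotations pooled across users. The triples, function names, and the PMI variant are illustrative assumptions; the paper's actual measures and aggregation schemes differ in detail.

```python
from collections import defaultdict
from math import log

# Toy folksonomy: (user, tag, resource) triples.  Pooling each tag's
# resource set across users corresponds to a simple distributional
# aggregation of the annotations.
posts = [
    ("u1", "python", "r1"), ("u1", "coding", "r1"),
    ("u2", "python", "r1"), ("u2", "python", "r2"),
    ("u3", "coding", "r2"), ("u3", "music", "r3"),
]

tag_resources = defaultdict(set)
resources = set()
for _user, tag, res in posts:
    tag_resources[tag].add(res)
    resources.add(res)

def pmi_similarity(t1, t2):
    """Pointwise mutual information of two tags over shared resources."""
    n = len(resources)
    r1, r2 = tag_resources[t1], tag_resources[t2]
    joint = len(r1 & r2) / n
    if joint == 0:
        return float("-inf")  # tags never co-occur on a resource
    return log(joint / ((len(r1) / n) * (len(r2) / n)))

print(pmi_similarity("python", "coding"))  # co-occurring tags score > 0
```

The same pattern extends symmetrically to resource similarity by swapping the roles of tags and resources in the co-occurrence counts.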
mendation service which can be called via HTTP by BibSonomy's recommender when a user posts a bookmark or publication. All participating recommenders are called on each posting process; one of them is chosen to actually deliver its results to the user. We can then measure
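The dispatching step described above (query every registered recommender, but show only one recommender's answer to the user) can be sketched as follows. All names and signatures here are hypothetical illustrations, not BibSonomy's real API; the selection is done uniformly at random for simplicity.

```python
import random

# Two stand-in recommenders; in the real system these would be
# remote services called via HTTP.
def recommend_tags_a(post):
    return ["web", "bookmark"]

def recommend_tags_b(post):
    return ["python"]

RECOMMENDERS = {"A": recommend_tags_a, "B": recommend_tags_b}

def dispatch(post, rng=random):
    """Query all recommenders, pick one to answer, keep all for evaluation."""
    results = {name: rec(post) for name, rec in RECOMMENDERS.items()}
    chosen = rng.choice(sorted(results))   # one recommender answers the user
    return chosen, results[chosen], results

chosen, shown, all_results = dispatch({"url": "http://example.org"})
```

Keeping every recommender's output while showing only one makes it possible to compare the recommenders offline against the tags the user eventually assigned.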
This year's discovery challenge presents two tasks in the new area of social bookmarking: one covers spam detection, the other tag recommendation. As we host the social bookmark and publication sharing system BibSonomy, we are able to provide a BibSonomy dataset for the challenge. A training dataset for both tasks is provided at the beginning of the competition. The test dataset will be released 48 hours before the final deadline; due to the very tight schedule we cannot grant any deadline extension.
The results will be presented at the ECML/PKDD workshop, where the top teams are invited to present their approaches and results.
M. Toepfer, G. Fette, P. Beck, P. Klügl, and F. Puppe. Proceedings of the Workshop on Open Infrastructures and Analysis Frameworks for HLT, pages 83--92, Dublin, Ireland, 2014.