Reading time: 9 - 11 minutes
In this blog post, we dive into how knowledge graphs play an important role in IKEA's recommendation systems, based on our experience attending two presentations by IKEA at the 2023 Knowledge Graph Conference.
The future of libraries and linked data: How the National Library Board of Singapore modernized its data management
Reading time: 9 minutes
In this blog post, we'll discuss the powerful knowledge graph-based solution that transformed NLB's library and resource management, and how you, too, can leverage these tools to support your organization's data-driven use case!
Semantic Knowledge Graphs from Human Remains - Modeling osteological research data from biological anthropology
Reading time: 8 - 15 minutes
This blog post is co-authored by Felix Engel and Stefan Schlager. Felix and Stefan work in the department of Biological Anthropology at the University of Freiburg, where they lead the development of AnthroGraph – an application that researchers can use to model anthropological data as knowledge graphs and to intuitively explore, visualize and find information. In this guest post for the metaphacts blog, they explain how metaphactory served as the development framework for AnthroGraph and how the resulting application supports the standardization of research data and the creation of reliable, curated and reusable collections of osteological research, ultimately allowing researchers to collaborate across disciplines and perform large-scale analyses.
Reading time: 6 - 11 minutes
In our previous post, we covered the basics of how the Ontotext and metaphacts joint solution based on GraphDB and metaphactory helps customers accelerate their knowledge graph journey and generate value from it in a matter of days.
This post looks at a specific clinical trial scoping example, powered by a knowledge graph that we have built for the EU-funded project FROCKG, in which both Ontotext and metaphacts are partners. It demonstrates how GraphDB and metaphactory work together, and how you can employ the platform's intuitive, out-of-the-box search, visualization and authoring components to empower end users to consume data from your knowledge graph.
You can also listen to our on-demand webinar on the same topic or check out our use case brief.
Reading time: 8 - 15 minutes
Publishing FAIR data in the humanities sector
Reference data is a crucial element of data curation in the cultural heritage and humanities sector. Using reference data brings multiple benefits, such as consistent cataloguing, easier lookup and interaction with the data, and compatibility with other data collections that use the same reference data. Overall, the use of reference data supports the publication of FAIR data – data that is findable, accessible, interoperable and reusable.
In museum collection management, for example, various thesauri can be used as reference data to ensure the accurate and consistent cataloguing of items in a controlled manner and according to specific terminologies. Thesauri exist for various areas of expertise. One example is the Getty Art and Architecture Thesaurus® (AAT), which describes the different types of items of art, architecture and material culture, such as "cathedral" as a type of religious building. Authority data has also been published to support the unique identification of specific entities such as persons, organizations or places, for example, "Cologne cathedral" as a specific instance of the type "cathedral". Such authority data sources include the Integrated Authority File (GND) and the Union List of Artist Names® Online (ULAN), and they are especially important for disambiguating between entities with the same name, e.g., Boston, the town in the UK, and Boston, the city in the USA.
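The disambiguation point above can be sketched in a few lines of Python: two records share the label "Boston", but each carries a unique authority URI that identifies exactly one entity. The URIs below are illustrative placeholders in the style of the Getty Thesaurus of Geographic Names (TGN), not real identifiers:

```python
# Minimal sketch of URI-based disambiguation with authority data.
# The authority URIs are hypothetical placeholders, not real TGN records.
records = [
    {"label": "Boston",
     "authority": "http://vocab.getty.edu/tgn/placeholder-boston-uk",
     "country": "United Kingdom"},
    {"label": "Boston",
     "authority": "http://vocab.getty.edu/tgn/placeholder-boston-usa",
     "country": "United States"},
]

def lookup(authority_uri):
    """Return the single record identified by a unique authority URI."""
    return next(r for r in records if r["authority"] == authority_uri)

# A lookup by label alone is ambiguous: both records match "Boston"...
ambiguous = [r for r in records if r["label"] == "Boston"]

# ...while the authority URI pins down exactly one entity.
uk = lookup("http://vocab.getty.edu/tgn/placeholder-boston-uk")
```

The same principle is what makes collections that share reference data interoperable: two catalogues that both point at the same authority URI are describing the same entity, regardless of how they spell its label.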
Digital humanities projects often combine several research directions and use materials that cover multiple disciplinary areas. This complicates the use of reference data, as several reference data sources need to be combined to cover all aspects and facets of a project. Moreover, technical access to reference data is inconsistent, with systems exposing different interfaces and APIs, which makes integration challenging.