An Interconnected System for Reference Data
Publishing FAIR data in the humanities sector
Reference data is a crucial element of data curation in the cultural heritage and humanities sector. Using reference data brings multiple benefits, such as consistent cataloguing, easier lookup and interaction with the data, and compatibility with other data collections that use the same reference data. Overall, the use of reference data supports the publication of FAIR data - data that is findable, accessible, interoperable and reusable.
In museum collection management, for example, various thesauri can be used as reference data to ensure that items are catalogued accurately, consistently and according to specific terminologies. Thesauri exist for various areas of expertise. One example is the Getty Art and Architecture Thesaurus® (AAT), which describes the different types of items of art, architecture and material culture, such as "cathedral" as a type of religious building. Authority data has also been published to support the unique identification of specific entities such as persons, organizations or places, for example, "Cologne Cathedral" as a specific instance of the type "cathedral". Such authority data sources include the Integrated Authority File (GND) and the Union List of Artist Names® Online (ULAN), and they are especially important for disambiguating entities with the same name, e.g., Boston, the town in the UK, and Boston, the city in the USA.
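To make the link between a thesaurus, an authority file, and a local catalogue record concrete, here is a minimal sketch in plain Python of the RDF-style statements such cataloguing produces. The namespaces are the real AAT and GND base URIs, but the concept and record identifiers, as well as the local item URI, are illustrative placeholders and would need to be looked up before real use:

```python
# A minimal sketch of how reference data links a local record to shared
# identifiers. The AAT concept ID and GND record ID below are illustrative
# placeholders, not looked-up values.
AAT = "http://vocab.getty.edu/aat/"            # Getty AAT concept namespace
GND = "https://d-nb.info/gnd/"                 # GND authority namespace
ITEM = "http://example.org/collection/item42"  # hypothetical local record

# RDF-style triples: the item is typed with a thesaurus concept
# ("cathedral") and aligned with an authority record (Cologne Cathedral).
triples = [
    (ITEM, "rdf:type", AAT + "300007501"),      # illustrative AAT concept ID
    (ITEM, "rdfs:label", '"Cologne Cathedral"@en'),
    (ITEM, "owl:sameAs", GND + "4031812-9"),    # illustrative GND record ID
]

for s, p, o in triples:
    print(f"<{s}> {p} {o}")
```

Because both the type and the authority link point to identifiers shared across institutions, any other collection using the same reference data becomes directly comparable with this record.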
Digital humanities projects often combine several research directions and use materials that cover multiple disciplinary areas. This makes the implementation of reference data difficult, as several reference data sources need to be used to cover all aspects and facets of a project. Moreover, technical access to reference data is inconsistent, with systems using different interfaces and APIs, which makes integration challenging.
GraphDB & metaphactory Part I: Generating Value from Your Knowledge Graph in Days
Large enterprises have identified knowledge graphs as a solid foundation for making data FAIR and unlocking the value of their data assets. Data fabrics built on FAIR data drive digital transformation initiatives that put companies ahead of the competition.
But while the benefits of knowledge graphs have become clear, the road to their implementation has often been long and complex, and success has relied on the involvement of seasoned knowledge graph experts.
This blog post goes through the basics of the joint solution delivered by Ontotext and metaphacts to speed up this journey.
A Knowledge Graph for the Agri-Food Sector
Raul Palma leads the data analytics and semantics department at the Poznan Supercomputing and Networking Center (PSNC), where he coordinates the R&D activities and the center’s participation in various EU projects around these topics. In this guest post for the metaphacts blog, Raul explains how knowledge graph technology can address data integration challenges in the agri-food sector, showcasing it through a few use cases. He describes how he leveraged metaphactory to build a domain-specific application - FOODIE - that delivers intuitive access to distributed, heterogeneous data sources and allows end users to extract meaningful insights.
FOODIE is an agriculture knowledge hub delivered as a web application built on top of the metaphactory knowledge graph platform. The application enables integrated access to and a unified view over multiple datasets that have been collected from heterogeneous sources relevant to the agriculture sector, transformed, and published as Linked Data in a knowledge graph.
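As a rough illustration of what an "integrated view over heterogeneous sources" means in practice, the sketch below merges two invented agri-food records with different schemas onto one shared key. The source names, field names, and values are all hypothetical; a knowledge graph generalizes this step by using shared URIs instead of an ad hoc key mapping:

```python
# Hypothetical records from two heterogeneous agri-food sources that
# describe the same real-world entity (a land parcel) under different schemas.
sensor_source = [{"parcel_id": "P-001", "soil_moisture": 0.31}]
registry_source = [{"parcel": "P-001", "crop": "wheat", "area_ha": 4.2}]

def integrate(sensors, registry):
    """Map both schemas onto one shared key and merge into a unified view."""
    view = {}
    for rec in registry:
        view[rec["parcel"]] = {"crop": rec["crop"], "area_ha": rec["area_ha"]}
    for rec in sensors:
        view.setdefault(rec["parcel_id"], {})["soil_moisture"] = rec["soil_moisture"]
    return view

print(integrate(sensor_source, registry_source))
```

Publishing the sources as Linked Data achieves this merge declaratively: once both datasets identify the parcel with the same URI, queries see the integrated record without any custom merging code.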
Smart Solutions for Identifying Compatible Components - Powered by metaphactory and RDFox
Determining compatibility between individual entities is an essential process for many businesses across industries and business models, from industrial configuration, supply chains and bills of materials to evaluating terms in contracts, or even matchmaking apps. The process may require checking hundreds of thousands or even millions of possible combinations to assess whether components fit together or meet specified requirements. Additional factors may also need to be taken into account, for example, regulations or customer budgets. Traditional approaches are inefficient for modern-day applications due to the large volumes of data, the heterogeneity of data formats, the complexity of customer specifications, and concerns over scalability.
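The scale problem can be seen even in a toy model. The sketch below uses a hypothetical component catalogue (the interface and budget fields are invented for illustration) and checks every pair by brute force, which is essentially what the traditional approaches described above amount to:

```python
from itertools import combinations

# Hypothetical component model: each component exposes an interface type
# and a price; two components are compatible when their interfaces match
# and their combined price stays within a customer budget.
components = [
    {"id": "motor-A", "interface": "24V", "price": 120},
    {"id": "motor-B", "interface": "12V", "price": 90},
    {"id": "controller-X", "interface": "24V", "price": 60},
    {"id": "controller-Y", "interface": "12V", "price": 40},
]

def compatible(a, b, budget):
    """Naive pairwise check against a technical and a budget constraint."""
    return a["interface"] == b["interface"] and a["price"] + b["price"] <= budget

# Brute force inspects every pair: O(n^2) checks, which is why this
# approach breaks down at hundreds of thousands of components.
pairs = [(a["id"], b["id"])
         for a, b in combinations(components, 2)
         if compatible(a, b, budget=200)]
print(pairs)  # [('motor-A', 'controller-X'), ('motor-B', 'controller-Y')]
```

With four components this is trivial, but the number of pairs grows quadratically with catalogue size, and each added constraint (regulations, budgets, multi-part assemblies) multiplies the work, which is where rule-based reasoning over a knowledge graph pays off.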