Ontotext announces the release of GraphDB™ 6.1, its flagship native RDF triplestore. This new release comes just four months after Ontotext released GraphDB 6.0, which focused on improvements to the high availability cluster, 3-5 times faster load speeds than OWLIM 5.4, and pre-built connectors to Lucene, Solr and Elasticsearch.
The major highlights of GraphDB 6.1 include:
Dramatic Improvements in Write Transactions – For small insert/update/delete operations on large repositories, 6.1 is now 2 to 5 times faster than 6.0. Results on the LDBC Semantic Publishing Benchmark (SPB) with 50 million statements show write updates twice as fast as 6.0. The improvement is even more dramatic at 1 billion statements, where writes are 5 times faster than 6.0. Overall, this represents a 1400% improvement over OWLIM 5.4. Read performance was also improved in both benchmarks.
Stability Improvements to the Enterprise Replication Cluster – Many may recall that GraphDB 6.0 introduced higher fault tolerance, faster recovery mechanisms and reliable synchronization across geographically remote data centers. When it was released in August of 2014, this represented a leadership position among native RDF triplestores in the market. Version 6.1 strengthens that position with proper error handling in the master shutdown sequence, error-resilient synchronization threads, and safe persistence of cluster configuration properties.
Live Database Load Improvements – GraphDB 6.1 now allows users to load large new datasets into live database instances. For example, in a production cluster with 4 reading threads (queries) and 1 writing thread (updates), a large RDF dataset like DBpedia, with hundreds of millions of statements, can now be loaded within a few hours without increasing write latency by more than 1 to 2 seconds. Overall cluster speed is not disrupted.
Bulk Loading Tools and GraphDB Workbench – LoadRDF, the bulk loading tool, now allows different files to be loaded into different contexts, and statements can also be written programmatically. In performance benchmarks, 566 million statements from DBpedia 2014 were loaded in less than an hour, at a rate of 180,000 statements per second. GraphDB Workbench now includes user- and role-based security as well as usability enhancements.
“Despite releasing GraphDB™ 6.0 just four months ago, our GraphDB engineering team has been committed to continuous improvement of our graph database engine and we look forward to an exciting 2015”, states Atanas Kiryakov, CEO of Ontotext. “Constant improvements to our native RDF triplestore mean that all of our solutions in media & publishing, life sciences, healthcare, financial services and government benefit from improved performance, stability and usability. The improved efficiency of handling small updates allows GraphDB to support Dynamic Semantic Publishing workloads for repositories loaded with billions of statements.”
For more details about the enhancements and upgrades to GraphDB™, visit the GraphDB 6.1 Release Notes.
Ontotext provides a complete semantic platform transforming how organizations identify meaning across massive amounts of unstructured data. Ontotext blends text mining, powerful SPARQL queries, semantic annotation and semantic search with an RDF graph database (GraphDB™) that infers new meaning at scale. Ontotext S4, the Self-Service Semantic Suite, allows developers to build text mining and semantic applications in the cloud.
GraphDB™ is the only native RDF triplestore with the ability to perform semantic inferencing at scale. Ontotext launched GraphDB™ as OWLIM in 2004, and the product has been successfully deployed by organizations around the world, including the BBC and AstraZeneca. GraphDB™ is available in three versions: GraphDB™ Lite is a free semantic repository that can manage up to 100 million RDF statements in-memory; GraphDB™ Standard can manage tens of billions of RDF statements on a single server; GraphDB™ Enterprise adds clustering capabilities and full-text search connectors.