Taking Semantic Web to its next level with Cognitive Computing

Analysts reporting on the tech industry are at times ‘too clever by half’ when promoting some trends and shooting down others in their yearly predictions about the ever-evolving field of data analytics. An apt example is the considerable campaign against the semantic web and its benefits to enterprises and intelligent agents all over the world. Many have condemned its so-called vagueness and push the narrative that the uncertainties which arise during the development of large ontologies make the concept obsolete, and that it therefore should, or can, be replaced by cognitive computing. But the fact remains that most people condemn the idea of the semantic web either out of ignorance or out of a need to push their own agenda.
Understanding the Semantic Web’s Concept
A thorough understanding of the semantic web’s concept (accurately extracting meaning from unstructured content through the juxtaposition of interoperability standards such as HTTP, XML, HTML, RDF and OWL) should be proof enough that integrating cognitive computing into the concept will drastically reduce the uncertainties it currently faces, thereby providing the leverage required to take the semantic web to its next level. But for some individuals, simple explanations aren’t enough. Therefore, there is a need for a more in-depth analysis of how cognitive computing and semantics can work hand in hand to advance the admirable concept of the semantic web.
Extracting Information from Unstructured Content  
Cognitive computing has played, and continues to play, a huge role in extracting implicit semantics from unstructured data. It employs cognitive tools such as pattern recognition, machine learning and natural-language processing (also staples of semantic computing) to achieve its victories. The extracted entities, relationships, sentiments and other parameters play a stellar role in developing semantic web constructs, including the RDF ontologies needed to build annotations, tags and metadata. These parameters aid the creation of a consistent semantic structure from the unstructured characters in a data store. Here, cognitive computing pushes the semantic web a step further by facilitating and automating the semantic process of translating and connecting data to create accessible frameworks.
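The pipeline described above can be sketched in a few lines. This is a minimal illustration, not a real NLP toolkit: the extracted entities and relations below are hypothetical stand-ins for the output of an extraction step, serialized as RDF triples in Turtle syntax.

```python
# Sketch: turning hypothetical entity/relation extraction output into RDF
# triples (Turtle syntax). Names and the ex: namespace are illustrative.

def to_turtle(entities, relations, base="http://example.org/"):
    """Serialize extracted (name, type) entities and (subject, predicate,
    object) relations as Turtle triples under a shared namespace."""
    lines = [
        f"@prefix ex: <{base}> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
    ]
    for name, etype in entities:
        # Each entity becomes a typed, labeled resource.
        lines.append(f'ex:{name} a ex:{etype} ; rdfs:label "{name}" .')
    for subj, pred, obj in relations:
        # Each extracted relation becomes one subject-predicate-object triple.
        lines.append(f"ex:{subj} ex:{pred} ex:{obj} .")
    return "\n".join(lines)

# Hypothetical output of an upstream extraction step:
entities = [("Welltok", "Company"), ("CafeWell", "Product")]
relations = [("Welltok", "develops", "CafeWell")]
print(to_turtle(entities, relations))
```

Once the unstructured text has been reduced to triples like these, any RDF-aware store or reasoner can index, query and link them — which is exactly the automation step the paragraph describes.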

Cognition as a Service and the Semantic Web

Cognition as a Service (CaaS), which can be viewed as a subset of cognitive computing, has been hailed as a viable replacement should we ever give up on developing a common framework for data exchange on the Web. Proponents of this idea argue that making every mobile app, web app and operating system intelligent enough to interact with its users will be more helpful to enterprises and individuals than the semantic web.

These proponents fail to understand that the very concept of CaaS ties into the semantic web process, only from a more individualistic perspective: each intelligent app is cognitive within its own niche while shunning information it deems irrelevant. Examples are Google’s AlphaGo, Apple’s Siri and Next IT’s Alme platform, which employ cognitive computing powered by CaaS platforms developed exclusively for their parent companies.

In this light, the best-case scenario for CaaS is each tech powerhouse developing its own CaaS platform and selling the corresponding APIs to developers for profit. The semantic web, on the other hand, aims at implementing semantic standards across heterogeneous environments, including CaaS ecosystems, thereby making content everywhere available, readable, searchable and comprehensible to both human and automated consumers.
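The interoperability point can be made concrete with a small sketch. Assuming a page annotated with the shared schema.org vocabulary in JSON-LD (the page content below is invented for illustration), any consumer that knows the vocabulary can interpret it, regardless of which vendor’s cognitive platform produced it:

```python
import json

# Sketch: content annotated with the shared schema.org vocabulary (JSON-LD).
# The page itself is hypothetical; the point is that the vocabulary, not a
# vendor-specific API, is what makes it machine-comprehensible.
jsonld = """
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "name": "Healthy sleep habits",
  "about": {"@type": "MedicalCondition", "name": "Insomnia"}
}
"""

doc = json.loads(jsonld)
# Any agent that understands schema.org can answer: what is this page about?
print(doc["@type"], "about", doc["about"]["name"])
```

A proprietary CaaS API would require each consumer to integrate against that one vendor; a shared vocabulary like this lets every consumer, human or automated, read the same annotation.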

Integrating Cognitive Computing in Semantics: Its Practical Business Value

The practical benefits of a collaboration between semantic technology and cognitive computing can be better understood using Welltok, a healthcare platform which uses semantic technology, as a case study.

In its aim to provide better results and programs tailored to its users’ health status, Welltok took advantage of cognitive computing by integrating IBM Watson’s cognitive abilities into CafeWell, its platform that analyzes a user’s health information and provides real-time healthcare recommendations, thereby building a semantic framework that offers more specialized help to patients.

This brilliant move added an extra layer of intelligence that allows the CafeWell platform to understand its users better. Now, users on the platform receive refined recommendations as the platform learns more about their health status and needs from new data.

Cognition (the machinery of rational thought) is definitely empty on its own without semantics, and while the race to enhance cognitive computing and Cognition as a Service is admirable, leaving the semantic web behind would be counterproductive for the big data analytics industry in the long run. So while we discuss the new age of cognitive computing, integrating its concepts into roughly two decades of semantic web growth must be part of these discussions.

Milena Yankova

Director Global Marketing at Ontotext
A bright lady with a PhD in Computer Science, Milena's path started in the role of a developer, passed through project management and quickly led her to product management. For her, a constant source of miracles is how technology supports and alters our behaviour, engagement and social connections.