Speaker: Andrew Flinders, Principal Data Scientist, Northrop Grumman
Session type: Full Length Session
Abstract: Free-form text often contains critical information needed to understand a situation. However, because users can enter text with few constraints, programmatically aggregating individual responses into a cohesive whole can be extremely difficult. Similarities between individual responses can illuminate constellations within the data that outline a bigger picture, and graph architectures are an ideal mechanism for revealing and exploring these connections. With the recent advent of transformer deep learning models, natural language can now be embedded into vectors that more completely capture the semantic meaning of the words. Graph analysis of similarity scores calculated between transformer embeddings provides the big-picture view that is often so elusive. Thus, through a combination of deep learning, shallow learning, and graph algorithms, we can extract greater insight from free-form text. In this talk, we will explore an example of this method using Neo4j and Google’s BERT transformer model.
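The pipeline the abstract describes can be sketched in a few lines: embed each free-form response as a vector, connect responses whose pairwise similarity exceeds a threshold, and then read the resulting connected components as the "constellations" in the data. This is only an illustrative sketch: the placeholder 2-D vectors stand in for BERT sentence embeddings, and a plain in-memory adjacency map stands in for a Neo4j graph; the threshold value is likewise an arbitrary assumption.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_similarity_graph(embeddings, threshold=0.9):
    # Add an edge between any two responses whose embeddings are
    # more similar than the (assumed) threshold.
    n = len(embeddings)
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_similarity(embeddings[i], embeddings[j]) >= threshold:
                edges[i].add(j)
                edges[j].add(i)
    return edges

def connected_components(edges):
    # Each component is one "constellation" of related responses;
    # in practice this step would run as a graph algorithm in Neo4j.
    seen, components = set(), []
    for start in edges:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(edges[node] - comp)
        seen |= comp
        components.append(comp)
    return components

# Placeholder vectors standing in for BERT sentence embeddings:
# the first two are near-duplicates, the third is unrelated.
vectors = [
    np.array([1.0, 0.0]),
    np.array([0.9, 0.1]),
    np.array([0.0, 1.0]),
]
graph = build_similarity_graph(vectors, threshold=0.9)
clusters = connected_components(graph)
```

With these toy vectors, the first two responses cluster together and the third stands alone, mirroring how similar free-text answers would group in the real graph.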