Big Data – Bigger Challenge
The volume of net new data created each year is growing exponentially, a trend set to continue for the foreseeable future. As data volumes grow, the data itself becomes more complex, and generating insight and value from it becomes more challenging. But increased volume isn't the only force we are facing today: on top of this staggering growth, we are also seeing an increase in both the amount of semi-structure and the degree of connectedness present in that data.
Google, Facebook, Twitter, Adobe and American Express are among the companies that have turned to graph technologies to tackle this complexity at the heart of Big Data. A recent article by Dr. Roy Martsen outlines how Google started the modern graph analysis trend by using the links between documents on the Web to understand their semantic context. Google has since continued to write history: its graph-centric approach has seen the company deliver innovation at scale and dominate not only its core search market, but the wider information management space as well.
Graph Technology – Unlocking the Meaning of Big Data
Graphs offer a new way of thinking that explicitly models the two factors that make today's big data so complex: semi-structure and connectedness. In a nutshell, a graph database is an online transactional system that lets you store, manage and query your data in the form of a graph. That is, a graph database allows you to represent any kind of data in a highly accessible, elegant way using nodes and relationships, both of which may carry properties. The key point of this model is that it makes relationships first-class citizens of the data rather than treating them as metadata. As real data points, relationships can be queried and understood in all their variety, weight and quality.
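To make the property graph model concrete, here is a minimal, illustrative sketch in Python of an in-memory graph. It is not the API of any particular graph database; the Node and Relationship classes are hypothetical. The point is simply that nodes and relationships are both real objects carrying properties, so relationships can be queried directly as data.

```python
from dataclasses import dataclass, field

# Nodes and relationships both carry properties in a property graph.
@dataclass
class Node:
    label: str
    properties: dict = field(default_factory=dict)

@dataclass
class Relationship:
    start: Node
    end: Node
    rel_type: str
    properties: dict = field(default_factory=dict)

# Build a tiny example graph.
alice = Node("Person", {"name": "Alice"})
bob = Node("Person", {"name": "Bob"})
product = Node("Product", {"name": "Graph Database"})

relationships = [
    Relationship(alice, bob, "KNOWS", {"since": 2012, "weight": 0.9}),
    Relationship(alice, product, "USES", {"rating": 5}),
    Relationship(bob, product, "USES", {"rating": 4}),
]

# Because relationships are first-class data rather than metadata,
# we can filter them by type and by properties such as weight.
strong_ties = [
    r for r in relationships
    if r.rel_type == "KNOWS" and r.properties.get("weight", 0) > 0.5
]
for r in strong_ties:
    print(r.start.properties["name"], "->", r.end.properties["name"], r.properties)
```

In a real graph database the same idea is expressed declaratively, for example by matching a relationship type and filtering on one of its properties in a query language, rather than iterating over lists as this sketch does.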
Keywords: Big Data, Dataconomy, Emil Eifrem