8 Data Trends Worth Paying Attention to in 2017
There’s no doubt that last year was a great one for tech. Smart devices (the Internet of Things, or IoT) are going mainstream. Artificial intelligence and machine learning are already present in healthcare, the automotive industry, retail, and many other industries. Apps keep growing, but they are also being transformed by virtual reality and chatbots. All of these technologies create mountains of data that need to be sorted, analyzed, and acted upon.
If you want to keep up with the competition, you’ll need to use as many of these new technologies as possible, which also means staying up to date with the latest data trends. As technology advances, and our understanding and use of data widens, the trends surrounding data change with it. Below are the data trends we believe are worth paying attention to this year.
Complying with the GDPR
Without a doubt, the GDPR is the single most important thing happening in data, not just in 2017 but for years to come. The GDPR, or General Data Protection Regulation, is a European Union regulation that governs how businesses with customers in the EU use and share the data they gather. Considering the size and importance of the European market, it is safe to say the GDPR will affect pretty much every company, regardless of where in the world it is based.
Created to replace the Data Protection Directive, it enters into application on May 25, 2018. Until then, companies have to align their data management practices with the GDPR, or risk severe fines.
Under the GDPR, companies that fail to comply with its requirements risk fines of up to €20 million, or 4 percent of their annual global turnover, whichever is greater.
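The "whichever is greater" rule means the exposure scales with company size. A minimal sketch of that arithmetic (the helper name and the turnover figures are hypothetical, for illustration only):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper bound of a GDPR administrative fine:
    EUR 20 million or 4% of annual global turnover, whichever is greater."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company with EUR 100M turnover: 4% is only EUR 4M, so the EUR 20M floor applies.
# A company with EUR 1B turnover: 4% is EUR 40M, which exceeds the floor.
```

For any business with a turnover above €500 million, the 4 percent clause is the one that bites.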
Recent market research in the UK, for example, has shown that more than half of companies aren’t even aware the GDPR exists, let alone what it regulates, and how severe the fines are for not complying. As the deadline moves closer, more companies will become aware of the new digital business landscape and will hastily start preparing. That will result in significant changes in how businesses handle data, including which data they collect, which data they keep, and for how long. It will also mean changes to the methods used to gather the data, and methods of keeping it safe.
The rise of qualitative analytics
User experience has become one of the main focuses for both desktop and mobile businesses, and qualitative analytics allows organizations to measure and improve that experience. Understanding how your app behaves in the wild, how users interact with it, and where its pain points and biggest advantages lie: these are just some of the things traditional, quantitative analytics cannot provide.
With user session recordings, app professionals can learn how users interact with their apps: for example, whether users are confused by a settings menu or impatient with a ‘slow’ screen. This qualitative data can help app professionals pinpoint exactly which elements are causing friction within their UX and optimize with confidence. It can also reveal what makes apps crash or freeze, giving them a powerful shortcut to bug troubleshooting and patching.
With touch heatmaps, on the other hand, app professionals are able to see on an aggregate level exactly which parts of their app’s UI are being used more, and which are being neglected. They can also understand if their UI design and gestures are intuitive for their users or generating unresponsive gestures. Ultimately, this qualitative data can help them confidently redesign their app’s interface, add new features, bolster usability, and validate design A/B tests.
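The aggregation behind a touch heatmap is conceptually simple: bucket raw tap coordinates into a coarse grid and count hits per cell. A minimal sketch, assuming taps arrive as (x, y) pixel positions (the function name and data format are illustrative, not any particular analytics SDK):

```python
from collections import Counter

def touch_heatmap(taps, screen_w, screen_h, grid=10):
    """Aggregate raw (x, y) tap positions into a grid x grid heatmap.
    Returns a Counter mapping (row, col) cells to tap counts."""
    counts = Counter()
    for x, y in taps:
        # Scale pixel coordinates to grid cells, clamping the screen edge.
        col = min(int(x / screen_w * grid), grid - 1)
        row = min(int(y / screen_h * grid), grid - 1)
        counts[(row, col)] += 1
    return counts
```

Cells with high counts mark heavily used UI regions; cells that stay empty across many sessions flag neglected features or unreachable controls.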
User experience is at the very core of desktop and mobile app development in 2017, and qualitative analytics will play a major part in perfecting that experience. That’s why we believe qualitative analytics will have a huge impact on businesses this year.
Putting unstructured data to use
We know what unstructured data is and how it’s created. We also know, thanks to IBM, that four fifths of all data created today is unstructured. The catch is that unstructured data carries a lot of weight. It’s not irrelevant junk that merely occupies space and gives nothing in return; it is essential to knowing a company inside out. Unstructured data is often considered a company’s ‘secret sauce’, because it hides the thought processes behind great success stories. It is only logical, then, that one of the bigger trends going forward will be making good use of unstructured data. According to Sinequa’s Director of Consulting for North America, Jeff Evernham, this will represent a ‘fundamental shift’.
Unstructured data has been around for a long time, so why the sudden burst of interest in 2017? Mostly because we now have technologies that help us tackle these mountains of data with far more ease. Technologies such as natural language processing and cognitive search are necessary to pull relevant information out of a pile of, well, everything.
With the help of machine learning and data visualization, we will be able to make sense of unstructured data, which is bound to yield valuable insights into, for example, company workflows, where new revenue streams could be uncovered, and where significant savings can be made.
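A tiny taste of what "making sense of unstructured data" means in practice: even a bag-of-words cosine similarity can group free-text documents (emails, tickets, notes) by topic. This is a toy stand-in for the NLP pipelines mentioned above, not a production technique:

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Bag-of-words cosine similarity between two free-text documents.
    1.0 means identical word distributions, 0.0 means no words in common."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Real systems replace whitespace tokens with stemmed or embedded terms, but the underlying idea, turning messy text into comparable vectors, is the same.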
Drawing insights out of dark data
Organizations are becoming increasingly aware of dark data: huge swathes of information gathered on customers, employees, partners and clients, but never used. With data analytics growing in importance, and the Chief Data Officer (CDO) role entering the boardroom, a trend of drawing insights out of dark data is picking up. There are two reasons organizations will pay more attention to dark data. The first is that the GDPR will force them to dissect it; the regulation is very clear about which data organizations can keep, where, and for how long, and it doesn’t care whether the data is dark or not. The second is that organizations now have the awareness, and the ability, to put that information to good use. For example, server log files can shed light on user behavior, which can lead to improvements in services and/or user experience.
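Server logs are a classic piece of dark data: collected everywhere, rarely read. A minimal sketch of mining them, assuming a common Apache/Nginx-style access-log line (the regex and function name are illustrative):

```python
import re
from collections import Counter

# Matches the request and status portion of a typical access-log line,
# e.g. ... "GET /home HTTP/1.1" 200 2326
LOG_LINE = re.compile(r'"(?P<method>\w+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def status_breakdown(lines):
    """Count HTTP status codes across raw log lines, surfacing
    error rates that would otherwise sit unread on a disk."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            counts[m.group("status")] += 1
    return counts
```

A sudden rise in 404s against one path, for instance, can reveal a broken link or a removed feature users still want, insight that was sitting in the dark all along.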
The transformation of the data scientist role
A few years ago, ‘data scientist’ was one of the hottest jobs you could land. Back in the day when data-driven decisions were just picking up as a trend, everyone wanted a data scientist to be able to analyze large amounts of data.
Trends do change fast, that’s a fact, but data science is still a hot role, even as it evolves. The number of organizations, across industries, employing data analysts and Chief Data Officers grows by the day. Their job description, however, is changing, thanks to the insatiable human desire for improvement.
With every passing day, data is getting bigger and bigger, to the point where humans are no longer able to analyze it on their own. We’re not talking just about data volume. The number of different insights that can be drawn out of this data is also growing, and we have come to a point where humans hardly suffice.
Luckily, machine learning and AI technology are improving, and they are changing what data scientists and Chief Data Officers (CDOs) will be doing in 2017 and beyond. Instead of doing the non-creative work of analyzing data and presenting the results to the boardroom or other executives, data scientists will be tasked with coming up with creative, actionable solutions based on that data.
The death of data silos
Data, in all its forms, shapes and sizes, is essential for making data-driven decisions. But to get there, all of the data needs to be gathered in a single place and analyzed as a single entity, so that valid conclusions can be drawn. At this point, that is easier said than done. Large enterprises use vast numbers of apps in their everyday activities, and the data those apps create is usually clustered in data silos spread around the organization.
With data scattered around the organization in different silos, you cannot make data-driven decisions. In the words of Cyfe’s Ben Carpel, “silos are the bane of productive and profitable businesses.” Organizations around the world have noticed the trend of data siloing and identified it as one of the biggest threats to their business. That’s why it is safe to assume that one of the bigger trends we’ll see in 2017 will be the killing off of data silos.
Once again – easier said than done. Large enterprises currently use between 300 and 400 cloud apps, resulting in a sea of data. As we move into 2017, organizations will slowly start to break data silos apart and bring all data under one roof, one app at a time. We might not see the final demise of data silos in 2017, but we will most definitely see its first steps.
IoT attacks and security measures
Data comes with a price, and big data comes with an even bigger price. Hackers have always been out there, looking for vulnerable places to get a hold of valuable information. This year, the Internet of Things (IoT) will grow substantially, and with it – the threat surface.
The IoT threat surface (all of the different devices hackers can attack) currently comprises 6.4 billion devices and, according to Gartner, will grow to 20.8 billion by 2020. The more devices out there, the harder it is to stay safe, and the easier it becomes for cyber-criminals to get what they’re after.
We can expect more digitally connected cars, home appliances, meters and wearables, not to mention traffic lights, city lights or parking meters. All of these will create an enormous amount of data which cyber-criminals will definitely be after.
Luckily enough, we humans are very good at keeping our digital assets safe. (That was sarcasm.) In 2017, we can expect to see a lot more IoT devices, a lot more data generated by those devices, a few high-profile data breaches, a few responsible people getting fired, and a lot of tough security measures being implemented.
IoT, and the data these devices create, is a significant driver of business growth and revenue, and a booster of user experience. Make sure to stay vigilant: keep up to date with the latest security trends, industry standards and recommendations.
Managed services in the cloud
For the past couple of years, many companies have been experimenting and toying with the idea of big data in the cloud. A recent report by O’Reilly says that the vast majority of companies testing big data in the cloud are very likely to expand their use of similar services. In other words, whoever has tried it, loved it, and will be looking for more.
It makes sense. Managed services are now used for a number of things, such as storage, AI, data processing and analytics. With managed solutions, companies can focus on the task at hand rather than on the tools, freeing up much-needed time.
However, we don’t expect everything to be moving into the cloud. Various reasons, like complying with the GDPR, or being limited by legacy systems, will force companies to go for a combination of cloud and on-premise solutions.
Such a mixed bag of technology will also transform the required IT skillset. In 2017 and beyond, data and IT experts will be expected to manage a mix of solutions spanning hybrid clouds, on-premise systems and cloud services.
Data will continue to have an enormous impact on businesses all over the world. In the past, we have learned that mountains of data can be collected, analyzed and acted upon to improve businesses and products. Now, we are learning how to govern data, how to make sure nothing gets left behind, and how to make sure we’re interpreting the data correctly. We’re also slowly understanding how to use machine learning and artificial intelligence to aid us in analyzing data, and how all of that can lead to not only great business experiences, but to great end user experiences, as well.
This post was originally featured on Buildfire’s blog.