DataIQ Leaders briefing – Best practice for keeping your tech stack on point

At a DataIQ Leaders roundtable in November 2021, members discussed how they are cultivating their tech stacks in the digital world. The conversation focused on the role of governance in scaling up, issues around the democratisation of technology and the relationship between technology and recruitment.

As data increasingly moves into the cloud and vendors offer ever more vertically integrated tech stacks, organisations have an unprecedented opportunity to move to new platforms, and to scale up existing ones, at speed and at relatively low cost. But with that speed comes risk, particularly around processes and governance. Tech vendors continue to innovate and offer new services, meaning the question is less whether the data tech stack can keep pace, and more whether the organisation can.


Scaling up under control

Thanks to the cloud, organisations can take an iterative approach to technology. Flexibility allows upgrades to be implemented at rates and scales that suit specific business models and strategic objectives. The days of being locked into expensive-to-buy, expensive-to-upgrade software are numbered. This has given data teams the ability to experiment with new technology: it is common these days to hear analysts and data scientists talk of spinning up new instances in the cloud to run a short-term project, for example.

However, with this newfound freedom comes a heightened risk of running afoul of governance standards. This is a particular issue for one member, the head of data science at a global pharmaceutical organisation, whose team is working on an ambitious project to create a unified, global tech platform for the business. As such, data governance has to be considered on a global scale.

“We’re at the mercy of global governance,” they said. “This prevents us from realising the benefits of the cloud in terms of scaling-up with speed and at low cost.” Despite GDPR becoming the international gold standard, rules and attitudes towards data privacy and security still vary between jurisdictions. Thresholds set in parts of the US could fall below the standards required within the UK and the EU, and vice versa. The member added: “It’s not about the tech stack keeping up, it’s about whether our internal governance and processes can keep up with the tech.”

The member added: “We’re dealing with a lot of personally sensitive information, and doing anything on a large scale tends to have to go through governing bodies that aren’t necessarily based in the UK. There’s more people to convince, which limits our potential to try new things out.”

Robust internal processes can provide the security needed to facilitate improvement and experimentation. One member, the head of product for an audience platform at a large broadcaster, explained that two internal bodies – information security, and privacy and legal – govern the use of data within their organisation. Set processes, such as data minimisation and anonymisation, need to be adopted and followed before any data usage or experimentation is signed off.

“Via these bodies we’ve made a real concerted effort to split any personally identifiable or sensitive data away from the anonymised and pseudonymised data,” they said. “We only have to go through a light data protection impact assessment process to use this data, allowing for a lot more experimentation.”
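The split the member describes can be sketched in code. The following is a minimal, hypothetical illustration (the broadcaster's actual pipeline is not described in detail): records are divided into a PII store and a pseudonymised analytics set, linked only by a keyed hash, so that day-to-day analysis never touches the identifiable fields. The field names and the hard-coded key are assumptions for the example; in practice the key would live in a managed secret store.

```python
import hashlib
import hmac

# Assumption for illustration only: in a real system this key would be
# held in a secrets manager and rotated, never hard-coded.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymise(record: dict, pii_fields=("name", "email")) -> tuple[dict, dict]:
    """Split a record into (pii_record, analytics_record) linked by a pseudonymous ID."""
    # Keyed hash of a stable identifier gives a consistent pseudonym
    # without exposing the identifier itself.
    pid = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()[:16]
    pii = {"pid": pid, **{f: record[f] for f in pii_fields}}
    analytics = {"pid": pid, **{k: v for k, v in record.items() if k not in pii_fields}}
    return pii, analytics

pii, analytics = pseudonymise(
    {"name": "A. User", "email": "a@example.com", "minutes_watched": 42}
)
# The analytics record carries no direct identifiers, so experiments on it
# can go through the lighter assessment process the member describes.
```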

Democratising experimentation

The scope for exploring new data projects grows when the right to experiment is democratised. Key to facilitating this kind of experimental culture are processes and education. One part of this is using cost management to identify when experiments are being run as shadow IT operations – unless a more rigorous set of controls is in place, those instances spun up in the cloud are often funded from corporate credit cards.

“There’s an education piece with regards to reining in some of our more experimental engineers,” said one member. “We also have an internal tool, which funnels anything that goes through AWS through a process that provides a cost code to my team, meaning we can catch anything before it becomes a problem.”
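The member's internal tool is not described beyond attaching a cost code to cloud spend, but the core check is simple to sketch. The following is a hypothetical, vendor-neutral version: aggregate billing line items by cost-code tag and surface any untagged resources for review. The field names (`resource`, `cost`, `cost_code`) are assumptions for the example.

```python
from collections import defaultdict

def spend_by_cost_code(line_items):
    """Aggregate spend per cost code and flag untagged (potential shadow IT) resources.

    line_items: iterable of dicts with 'resource', 'cost' and an optional 'cost_code'.
    """
    totals = defaultdict(float)
    untagged = []
    for item in line_items:
        code = item.get("cost_code")
        if code:
            totals[code] += item["cost"]
        else:
            # No cost code attached: surface it so the data team can
            # catch it before it becomes a problem.
            untagged.append(item["resource"])
    return dict(totals), untagged

totals, untagged = spend_by_cost_code([
    {"resource": "i-123", "cost": 10.0, "cost_code": "DATA-01"},
    {"resource": "i-456", "cost": 5.5},  # experimental instance, never tagged
])
```

In a real deployment the line items would come from the cloud provider's billing export, with cost codes applied as resource tags at provision time.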

Beyond the data team, democratised data platforms have given the wider business the ability to make iterative improvements to models. On the one hand, this can lead to higher levels of data literacy across the enterprise. On the other, non-experts can generate unnecessary proliferation and heterogeneity in data tech, models and reports.

One member’s organisation has developed over 800 Tableau dashboards, some of which are used efficiently throughout the business, while others serve only a niche purpose for a handful of staff. “We want to democratise the use of data,” they said. “But we haven’t got mature enough processes in place for non-data experts to explore and experiment with platforms for this kind of thing to be productive.”

Technology and data talent

Any data leader will tell you that data engineers are hard to come by and even harder to retain. The key tasks of an engineer – integrating multiple data sources, managing data migrations, creating the right data models – require technical expertise across multiple data types, coding languages and tech platforms. As the technical requirements of the role have grown, so too has the stock of the average data engineer, especially with regards to their salary demands.

Yet the proliferation of cloud-based data platforms could help data leaders overcome this headache. One member noted: “I’m envisaging at some stage that our cloud vendors will inform us that a lot of the building and scaling of platforms that we formerly relied on data engineers to conduct will be automated within the cloud, particularly when we give those providers feedback about our recruitment and retainment issues.”

With competition between cloud providers heating up, automation of data engineering could become a unique selling point. Widely-available skills such as the forty-year-old querying language SQL could then come back into demand, which one member thinks could lead to the rise of the “analyst engineer” – individuals whose primary focus is on running queries on data without having to worry about complex coding.

They said: “SQL was becoming uncool 10 years ago, but there’s now an acknowledgement that it’s great to use during transformations, particularly within the cloud. This gives us an opportunity to take some of the high-skill requirements out of our engineering roles via those analyst engineers.”
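The kind of work the member has in mind – running plain-SQL transformations without complex application code – can be illustrated with a small example. This is not any member's actual pipeline; it simply shows a typical aggregation an "analyst engineer" might run in a cloud warehouse, here executed against an in-memory SQLite database so it is self-contained. The table and column names are invented for the example.

```python
import sqlite3

# In-memory database standing in for a cloud warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (user_id TEXT, minutes REAL);
    INSERT INTO events VALUES ('u1', 10), ('u1', 20), ('u2', 5);
""")

# The transformation itself is pure SQL: no data structures or
# application logic to maintain, just a declarative query.
rows = conn.execute("""
    SELECT user_id, SUM(minutes) AS total_minutes
    FROM events
    GROUP BY user_id
    ORDER BY total_minutes DESC
""").fetchall()
# rows -> [('u1', 30.0), ('u2', 5.0)]
```

The same query would run largely unchanged on a cloud warehouse, which is the appeal: the skill required is SQL, not the engineering around it.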

Due to demand, DataIQ is running two Leaders roundtables in December. The sessions, titled “Did we make the organisation more data literate?”, will consider what data literacy and data culture should look like, and how to tell when improvements have been made. Only a handful of spaces remain, book yours here.
