What does generative AI mean for different areas?
The first step for each data professional in the DataIQ community was to assess how generative AI could affect their specific needs and niche. The DataIQ membership is incredibly diverse – from governmental agencies to B2B SaaS providers – so the use of generative AI within the community varies wildly. As one member from the first discussion put it, “what is it we are trying to achieve with generative AI?” Data leaders and business leaders need to examine their business strategies, goals and objectives and agree on how generative AI can help progress them.
Although DataIQ’s community is made up of diverse organisations with different objectives, this is an opportunity for all data leaders to highlight the importance and power of data. These tools are fuelled by data, and the outcomes they produce can frequently be demonstrated in monetary terms, which is easy to digest. Data professionals have long struggled to promote the value of data because it is often intangible to non-data professionals, but the new AI tools could well be an opportunity for data to be given the spotlight. Multiple roundtable participants noted that these tools – particularly ChatGPT – have become a new fascination for many executives. That gives data leaders a great opportunity to highlight the potential benefits of generative AI and to win support from decision makers for improving data literacy across the organisation.
A couple of DataIQ members mentioned they had joined the roundtable discussions to “see how other people are dealing with this new tool and how they are using it”, as they themselves were not yet at the implementation stage – usually because of low data maturity or a lack of funds. They were seeking information and ideas on consolidating underlying data, and on the architectures being used, so they could deliver quick results and value when the time was right to unleash generative AI tools.
Aspirations for AI
A key aspect of generative AI that most businesses were interested in exploring was how it can improve efficiency. A DataIQ member from the publishing world described how their primary focus with the new technology is to examine how it can make delivering value to customers more efficient. A second member, from the insurance world, agreed that efficiency was at the heart of their AI aspirations and had been for a while. Another member explained how they follow the motto “it is much easier to save money than it is to make money” when addressing efficiencies in business, and AI capabilities are now so powerful that this can be harnessed even further than before. As a starting point, AI tools have the potential to identify exactly when replacement equipment is needed, the most direct way to implement it and the most cost-effective way to switch systems without impacting daily operations. The savings available to businesses of any niche could easily be worth millions every year, and it is hoped that at least part of those savings will be reinvested into the data capabilities of the business.
Each roundtable contributor then had individual wants and needs from generative AI. One member from an educational organisation described how they were hoping AI tools could reduce the bias introduced by human examiners. “We are in a race to develop auto-scoring solutions using generative AI,” the member said. “Our entire business model is arguably dependent on us doing that successfully. There are many organisations like ours that have to be agile to not lose out to competitors.”
One major problem that businesses face, particularly legacy businesses, is that data is often not tagged. Tagging data and categorising it consistently usually requires a strong data culture across the organisation and a solid level of data literacy. Unfortunately, this is seldom the case, and businesses with a young data maturity or extensive legacy data often face an immense backlog of data categorisation before they can operate efficiently. Many roundtable participants hoped that AI tools will make this task easier – if not fully automated – which would help them reach a much higher data maturity. The caveat is that some data sets are highly sensitive, and the regulations surrounding generative AI tools need to catch up to ensure high standards are maintained.
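To make the idea concrete, the sketch below shows one way an LLM could draft tags for untagged records. It is a minimal illustration rather than an approach any member described: it assumes the OpenAI Python client, and the model name and category list are placeholders.

```python
# Minimal sketch: asking a hosted LLM to suggest a category for untagged records.
# Assumes the OpenAI Python client (openai>=1.0); the model name and category
# list are illustrative placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["invoice", "contract", "customer correspondence", "marketing", "other"]

def suggest_tag(text: str) -> str:
    """Return the single best-matching category for a piece of legacy text."""
    prompt = (
        "Classify the following document into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ".\nRespond with the category name only.\n\nDocument:\n"
        + text[:4000]  # truncate to keep the request small and cheap
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever your licence covers
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Suggested tags are drafts for human review, not values to write straight
# back into systems of record.
print(suggest_tag("Please find attached the renewal terms for policy 8841..."))
```

Any tags produced this way would still need human review before being written back into systems of record, particularly where the underlying data is sensitive.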
In an age of constant social media and connection, problems arise when people voice their opinions, disobey website comment rules or go as far as trolling or doxing someone, which can cause untold misery. Many at the roundtable hoped that AI tools will make it possible to moderate comment sections and deal with serial rule-breakers faster and more accurately. One member from a media organisation explained how they frequently have to ban users for breaking their site rules and, often, it is the same person using an alternative account to flout them. By training AI built on large language models (LLMs), it could be possible to locate and ban these accounts before they can cause any long-term damage.
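As a rough illustration of how such screening might work, the sketch below swaps the LLM approach for a much simpler stand-in – comparing the writing style of a new account against comments from previously banned users with character n-gram similarity. It assumes scikit-learn, and the usernames, comments and threshold are entirely made up.

```python
# Minimal sketch: flagging new accounts whose comments read like those of
# previously banned users. Uses character n-gram TF-IDF similarity rather than
# an LLM - a deliberately simpler stand-in for the idea discussed above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

banned_comments = {
    "banned_user_1": "this article is rubbish and so are all of you lot honestly",
    "banned_user_2": "typical biased reporting, deleted my last comment did you",
}
new_comment = "typical biased rubbish reporting, go on then delete this one too"

vectoriser = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
matrix = vectoriser.fit_transform(list(banned_comments.values()) + [new_comment])

# Compare the new comment against each banned user's history.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for (user, _), score in zip(banned_comments.items(), scores):
    if score > 0.3:  # illustrative threshold; a real system would need far more evidence
        print(f"Flag for moderator review: resembles {user} (similarity {score:.2f})")
```

In practice a flag like this would only ever prompt human review; banning on stylistic similarity alone would carry obvious risks of false positives.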
For media-centric organisations, multiple roundtable contributors mentioned how they are examining the potential for advertising to be designed and created by generative AI. In theory, this would come at a fraction of the cost of traditional advertising campaigns and could be completed in a fraction of the time. The scale of what can be created is yet to be realised, but there is no reason not to believe that full television adverts could be produced by these new AI tools and be indistinguishable from those made by traditional methods.
Elsewhere, it is hoped that vulnerable customers or users can be quickly identified in times of crisis, as one member discussed. This would not be relevant for all businesses and is heavily geared towards utility providers, healthcare and government organisations, but in times of need generative AI tools could theoretically help identify those most in need and determine the most effective way for them to receive assistance. It is a prime example of utilising data for good and making the most of AI tools to enhance wellbeing.
The limitations of AI
At the other end of the spectrum are generative AI’s limitations. Businesses need to examine the areas in which generative AI cannot benefit them and understand why – whether the reason is ethical, technical or simply a poor fit. It is easy to get swept up in the excitement of new tools and possibilities, but data professionals and business decision makers must hold back and take a measured approach to ensure they are making the right choices.
When it comes to assisting vulnerable users in times of need, the information required to fuel such a programme would be highly sensitive. This raises questions about whether people would opt in to their data being used, and it heightens the requirements for security and resilience against cyberattacks. There is already a high level of public concern about how personal data is used, so it stands to reason that collecting more sensitive data is going to be a tough sell. Furthermore, as many tech professionals state, it is no longer a case of “if” there will be a cyberattack, but “when”. The importance of security cannot be overstated, and it comes with a hefty price tag.
One member from a media group explained that although generative AI can help their operation and cost-saving initiatives, “AI will never take out human curation: the timing, the pitch, the positioning, the relevancy of what we are putting in front of our readers, listeners, viewers, and that is what I think separates us from AI.” This raised the question of whether consumers would be satisfied paying a subscription to a service built on human interest stories and relevant news pieces if they found out the stories themselves had been generated by AI. Most agreed they would not be happy with that. Furthermore, television shows scripted by AI simply do not have the emotional intelligence of those written by a human. Perhaps this will change with ongoing tech developments, but the same question remains – would consumers be happy paying for a subscription to something written by AI? It is one thing to use AI to help create targeted advertising campaigns for specific demographics, but another entirely to use AI to create the stories and entertainment being consumed.
As has been well documented by DataIQ members and the data and analytics community as a whole, there is a severe talent shortage within the industry. One participant said there is “a struggle to keep data staff longer than two years”, which exacerbates the brain drain and risks losing company secrets to the competition. Adding a new dynamic such as generative AI creates a further need to train and source the right talent to ensure efficiency, ethics, compliance and focus on business objectives. These tools are incredibly powerful and require a solid understanding of data processes and culture to be used effectively – meaning specialised team members will need to be recruited. As one roundtable member commented, “we can all wheel in a piano, but only a few people can use it to play music.” Recruiting such talent would be difficult at the best of times, as very few professionals have specialist knowledge of generative AI, but in a skills shortage it becomes even harder. It is not difficult to foresee a bidding war for AI talent becoming the norm in the near future.
There is a cost to implementing generative AI. Organisations can either build their own, which is a costly endeavour, or purchase from a third party, which comes with its own drawbacks. There is a third option, using free tools, but the risks around data ownership, plus the strict limits on how much they can be used, make them almost pointless for large-scale operations. As one member explained, “ChatGPT offers up to 8,000 tokens and a token is approximately 35 words. You could spend a weekend playing with it and then be told that you have used your allotted tokens for a year – game over. It would be foolish to plot your entire marketing strategy or data strategy on the assumption that the free tool is going to be there for you to use constantly.”
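For context, a token in practice corresponds to a fraction of an English word (roughly four characters, or about three-quarters of a word), so an 8,000-token allowance is closer to 6,000 words than the figure in the quote suggests. The sketch below shows one way usage could be estimated before committing to any tier; it assumes the tiktoken library, and the price per 1,000 tokens is purely illustrative.

```python
# Minimal sketch: estimating how many tokens a prompt will consume and what it
# might cost on a paid tier. Assumes the tiktoken library; the price per 1,000
# tokens below is purely illustrative, not a quoted rate.
import tiktoken

ASSUMED_PRICE_PER_1K_TOKENS = 0.002  # illustrative figure in USD

def estimate(text: str) -> None:
    encoding = tiktoken.get_encoding("cl100k_base")  # a common OpenAI tokeniser
    tokens = encoding.encode(text)
    words = len(text.split())
    print(f"{words} words -> {len(tokens)} tokens "
          f"(~{words / len(tokens):.2f} words per token), "
          f"estimated cost ${len(tokens) / 1000 * ASSUMED_PRICE_PER_1K_TOKENS:.5f}")

estimate("Draft a 200-word product description for our new home insurance policy.")
```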
When building your own generative AI tool, usually based on LLMs, the time, money and effort required are hard to quantify. It is not a quick process, it requires multiple staff members and, even then, there is no guarantee it will work as required. This is why many organisations are opting to purchase third-party solutions, so that maintenance and system upgrades do not have to be handled in-house. The downside is that the organisation is then beholden to the creator’s costs and user agreement terms. Furthermore, as the value of data becomes widely known, there may well be instances where any data put into the AI tool becomes the property of the third party. This is a serious concern for heavily regulated industries such as finance, but may be less of an issue for providers holding very little sensitive information. “I have been speaking with a friend who works for a business and their model is taking data, creating data products and models and then selling that back to a client base,” said one participant. “If they use a public, generative AI tool to achieve this, they do not own the intellectual property (IP).” This is a huge concern as it can easily lead to lost revenue, massive legal battles over ownership and a high degree of uncertainty.
Alongside the worry about whether customers would pay for generated content, the recent writers’ strike in the USA was, in part, about ensuring that generative AI tools do not infringe on the work of writers. This is not just a concern for media businesses, but for any business that requires copy or marketing campaigns. The age-old balancing act between new technologies and protecting careers is far from over, and the era of generative AI looks set to continue the conversation. DataIQ members in the media explained to the roundtable groups that this is a serious concern: at the heart of their businesses is content creation and entertainment, and that all starts with a script. It could be the case that regulations and contract negotiations see generative AI tools having their capabilities capped for specific industries as ethics and compliance take centre stage.
One often overlooked impact of generative AI and the development of LLMs is their carbon footprint. These tools are very energy intensive, and if there is a sudden rise in the number of businesses using them, energy needs will increase substantially. There is already an energy crisis, with utility costs rising sharply, and improving environmental impact is a major concern and target for businesses across the globe. It is difficult to see how the carbon impact of this new technology can be justified or offset in an efficient and timely manner. It should also be noted that there is no clear or standardised cross-industry agreement on what the benchmark for data sustainability should be, which compounds the issue.
How can generative AI be implemented?
Once the uses of generative AI have been assessed for an organisation and the risks involved have been weighed up, the next challenge is integrating these tools into the business.
One media member explained to the group how their executive board was keen to embrace AI but did not know how it would work within their current framework. The data team devised a way that would allow generative AI tools to work over the top of the existing operations without interfering with the established routine. The member explained how “because everything is getting commoditised, we must use tools that can run over the top and that do not touch any of your first party data or internal data” to reduce any risks and maintain focus on the wider data strategy.
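One way to read “run over the top without touching first-party data” is to redact identifying detail before any prompt leaves the organisation. The sketch below is a minimal illustration of that idea; the regular expressions are deliberately crude, and call_external_llm is a hypothetical stub rather than a real vendor API.

```python
# Minimal sketch of the "run over the top" idea: redact obvious personal
# identifiers before a prompt is sent to any external generative AI service.
# call_external_llm is a hypothetical placeholder - substitute whichever
# vendor API your organisation has actually approved.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    """Replace obvious identifiers so first-party detail never leaves the organisation."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

def call_external_llm(prompt: str) -> str:
    # Hypothetical stand-in for the approved third-party service.
    return f"(response to: {prompt})"

raw = "Summarise the complaint from jo.bloggs@example.com, phone +44 7700 900123."
print(call_external_llm(redact(raw)))
```

A production approach would go much further than two regular expressions, but the design choice is the same: keep first-party data out of the prompt rather than trusting the external tool to handle it safely.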
Many organisations have implemented a blanket ban on the technology until more is known about its capabilities and risks. As multiple roundtable participants mentioned, this is completely reasonable for businesses that handle highly sensitive data, such as financial institutions, but the problem is that it is difficult to stop people from using personal computers to interact with generative AI. There are some small-scale experiments taking place to learn more about the potential benefits, but this secretive use of AI – akin to shadow IT – leaves many data leaders feeling vulnerable and uncertain. There are undoubtedly risks with any form of new technology, but generative AI is incredibly powerful, and these risks must be given the respect they deserve. It would be foolish for businesses to rush into adopting generative AI tools without performing due diligence; the caveat is that the technology is evolving so rapidly that data compliance, regulation and ethics are struggling to keep pace.
The big discussion, as noted by one healthcare member, is the debate between “buy versus build”, or perhaps a combination of the two. As previously mentioned, cost is very high on the agenda, but beyond that the issue becomes evaluating which approach is most appropriate for the organisation. There is no one-stop solution for generative AI, and it is down to individual decision makers to assess which method would best help them achieve their objectives safely while remaining compliant.
Final thoughts
Ultimately, there are undoubtedly colossal benefits to be found in generative AI, but there are serious concerns about its power and regulation. Every DataIQ member had a unique story to tell, whether it was taking their first steps into the world of AI or working out how these new tools can supplement ongoing operations. It will take time to gain a better understanding of how AI can benefit each business, but the potential is there. The diversity of solutions that AI can bring about is simply staggering, yet safety and security remain the most prominent concern for all members, and regulations and guidelines simply cannot keep pace with the rapid evolution. These tools become more powerful each day, and there is no telling what generative AI will look like in 12 months’ time. It could well be the case that different regional and international regulations will govern the use of AI, varying further by sector. Data professionals need to remain vigilant in their use of AI but must embrace a sense of adventure and exploration, as the benefits could genuinely revolutionise the way data is valued.
By being a part of the DataIQ community, organisations of all shapes, sizes and niches have access to discussions and insights surrounding new technologies which can drastically improve success rates, data literacy and return on investment.
Click here to register for involvement at an upcoming roundtable discussion.
Make sure you take the DataIQ Generative AI assessment to see if your organisation is prepared for AI.