Transparency with AI tools
There is a lot of work to be done to quell the fears of the public. A recent study found that:
- 74% want companies to ask for permission before personal or financial data is used by AI.
- 79% want to know who else their data is shared with.
- 70% are very concerned about unrestricted and unpoliced use of their data in AI.
- 73% in the UK think companies collect too much data (compared to 81% in the US and 82% in Australia).
- 87% worry that AI will make protecting their data much more challenging.
- 73% want greater data security to ensure their personal data is not hacked.
These are all legitimate concerns, and it is not uncommon for organisations to be fined for data breaches, misuse of data, and regulatory noncompliance, which further exacerbates the public’s fear of AI.
The first port of call is to ensure a high degree of transparency around the AI tools a company uses. For the most part, these tools will be fairly mundane, and many have been in use for a while. Transparency around the use of data has received renewed focus thanks to the sudden rise of generative AI (genAI) tools; organisations must be upfront about how these tools are used and trained in order to quell fears.
Even within a business, different departments use different forms of AI for different day-to-day tasks. Before publicly showcasing your AI tools, make sure everyone within the organisation understands:
- What the tools are
- Why they are used
- Their limitations
Education is the best weapon to dispel fears and concerns. Once decision makers and stakeholders across an organisation understand how and why AI is being used, it becomes easier to relay that message to the wider public.
Meeting your own AI expectations
Ask yourself: “Does my organisation meet and exceed my own expectations of how data and AI should be treated and utilised?”
If the answer is anything other than yes, there is work to be done. And even if it is yes, there is still work to be done to ensure it stays that way.
It is not a stretch of the imagination to think that knee-jerk AI regulations could be put in place when one business is found to have broken the rules, and public trust, in training AI. Much like cyberattacks, this needs to be treated with an attitude of “when it happens”, not “if it happens”. There is a lot of truth in the old saying that one bad apple spoils the barrel, particularly when it comes to topics that people are not fully informed about.
Data leaders need to make sure that:
- Their teams comprehensively understand the AI and genAI tools being used and trained within the organisation
- Non-data leaders and stakeholders understand how and why AI tools are being used
- Customers are informed of the use of AI when it relates to their data and are given ample opportunity to opt out or change any settings (if applicable)
- All controls around the use of AI and genAI go above and beyond current regulations, as there is a high likelihood of new regulations being put in place over the next 12 months
- There is flexibility in the use and implementation of AI and genAI tools to accommodate any regulatory changes that are likely to follow
This is not easy, and it is not an overnight fix, but it is possible. By starting your AI journey slow and steady, you can avoid most of these pitfalls. Speak to DataIQ members about the tools they are using and for what purposes, how their architecture and processes accommodate those tools, and what their AI aspirations are for the future – this will help you find what is most suitable and secure for your business needs.
Appreciate what AI means to non-data people
Ultimately, there is a lot of concern around AI and what it is truly capable of. Rightly or wrongly, AI and genAI tools can be accessed by anyone around the globe, and users need to be told explicitly how and why their data will be used. Great things can be achieved with AI, such as detecting cancer earlier, but these stories need to be highlighted to the public to change the perception of data collection.
Using tools like AI daily means we get used to them and forget how powerful they can be. Vigilance must be maintained: non-data professionals (the majority of the public) can hold preconceptions and misled views about AI because of major news stories or Hollywood fiction.
Senior data leader members can access DataIQ’s six self-complete assessment tools to better understand their company’s readiness for tools such as genAI.