{"id":6838,"date":"2023-12-13T12:27:15","date_gmt":"2023-12-13T12:27:15","guid":{"rendered":"https:\/\/members.dataiq.global\/?post_type=mec-events&#038;p=6838"},"modified":"2024-03-26T08:29:46","modified_gmt":"2024-03-26T08:29:46","slug":"roundtable-ethical-ai-17-01-24","status":"publish","type":"mec-events","link":"https:\/\/www.dataiq.global\/devstage\/iqevents\/roundtable-ethical-ai-17-01-24\/","title":{"rendered":"Roundtable &#8211; Ethical AI &#8211; 17 Jan 24"},"content":{"rendered":"<p><span style=\"font-size: 18pt; font-family: arial, helvetica, sans-serif;\"><strong>Ethical AI: First do no harm (then do some good)<\/strong><\/span><\/p>\n<p><span style=\"font-family: arial, helvetica, sans-serif;\">The US Presidential Order and the Bletchley Declaration in 2023 put a firm focus on the issue of safety with regard to frontier AI and the goal of ensuring it is developed to the benefit of everybody.<\/span><\/p>\n<p><span style=\"font-family: arial, helvetica, sans-serif;\">Further down the scale, however, organisations are currently left to make their own choices about the impact and effects of their use of artificial intelligence, whether core approaches such as machine learning or newer solutions such as generative AI.<\/span><\/p>\n<p><span style=\"font-family: arial, helvetica, sans-serif;\">For data-driven brands that have built out an ethical data framework as part of their data governance, many of the central principles already look AI-ready. 
Yet key differences and issues do need to be addressed, including the ethical values of upstream business partners and the mitigation of risks and bias in models downstream.<\/span><\/p>\n<p><span style=\"font-family: arial, helvetica, sans-serif;\">This roundtable will explore the boundaries within which ethical AI needs to operate and the challenges of ensuring policies and frameworks become behaviours, not tick boxes.<\/span><\/p>\n<p><span style=\"font-family: arial, helvetica, sans-serif;\"><strong>Why should you attend?<\/strong><\/span><\/p>\n<ul>\n<li><span style=\"font-family: arial, helvetica, sans-serif;\">Learn from other practitioners and share your experiences within your peer group<\/span><\/li>\n<li><span style=\"font-family: arial, helvetica, sans-serif;\">Create new contacts with other DataIQ members and extend your professional network into the wider community<\/span><\/li>\n<\/ul>\n<p><span style=\"font-family: arial, helvetica, sans-serif;\"><strong>Format<\/strong><\/span><\/p>\n<ul>\n<li><span style=\"font-family: arial, helvetica, sans-serif;\">One-hour digital roundtable delivered via Zoom<\/span><\/li>\n<li><span style=\"font-family: arial, helvetica, sans-serif;\">Closed forum, open discussion, recruiter- and vendor-free, held under the Chatham House Rule<\/span><\/li>\n<li><span style=\"font-family: arial, helvetica, sans-serif;\">Small and focused group of senior data leaders<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong><span style=\"font-family: arial, helvetica, sans-serif;\">Please be aware that places are limited, so make sure you register your interest early.<\/span><\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ethical AI: First do no harm (then do some good)\u00a0 The US Presidential Order and the Bletchley Declaration in 2023 put a firm focus on the issue of safety with regard to frontier AI and the goal of ensuring it is developed to the benefit of everybody. 
Further down the scale, however, organisations are currently left [&hellip;]<\/p>\n","protected":false},"author":204,"featured_media":4961,"comment_status":"closed","ping_status":"closed","template":"elementor_header_footer","tags":[218,241,231,85],"pillar":[198,193,194],"false":[83],"mec_speaker":[289],"mec_sponsor":[],"class_list":["post-6838","mec-events","type-mec-events","status-publish","has-post-thumbnail","hentry","tag-artificial-intelligence","tag-data-ethics","tag-generative-ai","tag-strategy","pillar-governance","pillar-strategy","pillar-leadership","mec_category-roundtable"],"acf":[],"publishpress_future_workflow_manual_trigger":{"enabledWorkflows":[]},"_links":{"self":[{"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/mec-events\/6838","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/mec-events"}],"about":[{"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/types\/mec-events"}],"author":[{"embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/users\/204"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/comments?post=6838"}],"version-history":[{"count":0,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/mec-events\/6838\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/media\/4961"}],"wp:attachment":[{"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/media?parent=6838"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/tags?post=6838"},{"taxonomy":"pillar","embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/pillar?post=6838"},{"taxonomy":"mec_category","embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/false?post=6838"},{"taxonomy":"mec_speaker","embeddable":true,"href":"h
ttps:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/mec_speaker?post=6838"},{"taxonomy":"mec_sponsor","embeddable":true,"href":"https:\/\/www.dataiq.global\/devstage\/wp-json\/wp\/v2\/mec_sponsor?post=6838"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}