Monthly Brandtech Blend - September 2024

Margaux Montagner
Published on 2/9/2024
What's been happening in the brand tech world this month? The UK reacts to Google’s decision to keep third-party cookies; the AI Act comes into force in the European Union; and several GDPR complaints are issued against X for its unconsented use of user data to train AI.

Google’s third-party cookies decision disappoints in the UK 

The Information Commissioner’s Office (ICO), the UK’s data protection authority, has expressed disappointment over Google's recent decision to abandon its plans to phase out third-party cookies in Chrome. The regulator also raised concerns about Google's Privacy Sandbox initiative, which aims to balance advertising performance with user privacy, highlighting potential vulnerabilities in the Sandbox's tools that could compromise user privacy and allow the identification of users who have opted out of tracking. Stephen Bonner, the ICO's deputy commissioner, emphasized that eliminating third-party cookies would have been a positive step for consumers and indicated that the ICO would continue to advocate for more privacy-friendly alternatives in digital advertising.

In contrast to the ICO's strong stance, the Competition and Markets Authority (CMA) has taken a more measured approach regarding Google's cookie retention plans. The CMA has previously flagged concerns that the Privacy Sandbox could distort competition by concentrating advertising spend within Google's ecosystem. As Google plans to introduce a one-time prompt for users to set their privacy preferences while retaining third-party cookies, both the ICO and CMA will need to assess the implications of this new strategy. Google has stated that it will engage with regulators and the industry as it rolls out these changes, but the specifics of how the ICO and CMA will collaborate with Google remain uncertain. 

Note that, despite Google’s reversal, many observers maintain that third-party cookies are still set to disappear.

Read more at The Drum.

The AI Act finally enters into force

Aiming to promote responsible AI development and deployment within the EU, the European Artificial Intelligence Act (AI Act) came into force on August 1st. Proposed by the European Commission in April 2021 and finalized by the European Parliament and Council in late 2023, the Act addresses potential risks to citizens' health, safety, and fundamental rights. It establishes clear requirements for developers and deployers of AI, while also aiming to reduce administrative and financial burdens for businesses. The Act categorizes AI systems based on risk level, from minimal-risk systems like spam filters to high-risk applications such as AI in medical software, which must adhere to stringent regulations.

In addition to the AI Act, the European Commission has initiated a consultation on a Code of Practice for providers of general-purpose AI (GPAI) models, which will address key concerns such as transparency and risk management. This Code is expected to be finalized by April 2025, with the Act's provisions on GPAI taking effect 12 months after its entry into force. The feedback gathered from various stakeholders, including businesses and civil society, will inform the Commission's draft and feed into the work of the AI Office, which is set to oversee the implementation and enforcement of the AI Act's rules.

Through these frameworks, the EU intends to position itself as a global leader in “safe AI,” fostering an ecosystem that enhances public services, healthcare, and overall productivity while upholding human rights and fundamental values.

More info from the European Commission.

X hit with 9 GDPR complaints for feeding user data to AI chatbot

Elon Musk's social media platform X is facing nine complaints for training its AI chatbot, Grok, on European users' data without their consent. The privacy-focused nonprofit Noyb (“None of Your Business”) claims that this practice violates the EU's General Data Protection Regulation (GDPR), which mandates explicit user consent for data collection. Noyb argues that X's approach of harvesting data without informing users or obtaining permission is problematic, especially given that the platform's large user base could have provided sufficient consented data had it been asked.

X’s situation has broader implications for the AI industry, as it raises critical questions about how publicly available personal data can be collected and used in compliance with EU data protection laws. Experts emphasize the need for AI companies to ensure lawful data processing, transparency, and user opt-out options while also integrating privacy safeguards from the design stage. This scrutiny is not unique to X; other major tech firms, like Meta, have also faced challenges related to GDPR compliance in their AI initiatives. The stringent requirements of GDPR are creating hurdles for American companies looking to launch AI products in Europe, complicating their ability to navigate the regulatory landscape effectively.

More details at The Drum.
