Google has introduced Meridian, an open-source marketing mix model (MMM) designed to help marketers optimize their advertising budgets. Using Bayesian causal inference methods, Meridian aims to provide deeper insights into both online and offline media channels, addressing the limitations of traditional MMMs, which focused primarily on offline media and branding. Meridian also promises to let advertisers gauge the true impact of their marketing efforts, moving beyond standard conversion metrics to illustrate how brand-building activities, such as TV commercials and YouTube ads, can influence long-term business outcomes and customer acquisition.
The tool's data platform offers access to essential Google media metrics, including impressions, clicks, and costs, while also tracking reach and frequency for video campaigns on platforms like YouTube, allowing marketers to predict how brand interactions can lead to future purchases. And as an open-source solution, Meridian can be customized to meet specific business needs, incorporating external factors like economic conditions. To assist marketers in implementing and optimizing the platform, Google has established a partner program with over 20 certified agencies (including fifty-five France).
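To give a flavor of the modeling idea behind MMMs like Meridian (this is a generic illustration, not Meridian's actual API), a geometric "adstock" transformation is a common way such models capture how an ad's effect lingers after the spend stops. The channel spend figures and decay rate below are hypothetical:

```python
import numpy as np

def geometric_adstock(spend, decay=0.5):
    """Carry a fraction of each period's ad effect into later periods.

    adstock[t] = spend[t] + decay * adstock[t-1]
    """
    adstock = np.zeros(len(spend), dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        adstock[t] = carry
    return adstock

# Hypothetical weekly TV spend (in $k): the effect decays rather than
# vanishing the week spend stops.
weekly_spend = np.array([100, 0, 0, 0])
print(geometric_adstock(weekly_spend, decay=0.5))  # [100.  50.  25.  12.5]
```

A Bayesian MMM would place priors on parameters like `decay` and infer them from sales data, which is what lets the model attribute long-term outcomes to brand-building spend.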
More details at Search Engine Journal.
DeepSeek, a Chinese AI lab backed by High-Flyer Capital Management, has quickly gained international attention after its chatbot app topped the Apple App Store and Google Play Store charts. Originally founded as part of a quantitative hedge fund leveraging AI for trading, DeepSeek spun off in 2023 to focus on AI research. Despite facing U.S. export bans on advanced AI chips, the company has developed powerful AI models, such as DeepSeek-V2 and DeepSeek-V3, which outperform many competitors, including models from OpenAI and Anthropic, while being more cost-efficient.
DeepSeek's rapid rise has unsettled the AI landscape, causing Nvidia's stock to drop and prompting responses from industry leaders like OpenAI's Sam Altman and Meta's Mark Zuckerberg. Meanwhile, developers have already embraced DeepSeek, with over 500 derivative models created on Hugging Face, even as some companies and governments ban its use over concerns about foreign influence and content regulation. As the U.S. government grows increasingly wary, DeepSeek's future remains uncertain, though further advances in its AI models seem all but inevitable.
Visit TechCrunch for more information.
OpenAI has launched o3-mini, the latest model in its family of AI reasoning systems, designed to offer a balance between power and affordability. Unlike traditional language models, o3-mini focuses on thorough fact-checking before delivering answers, making it particularly well-suited for STEM fields like programming, math, and science. OpenAI claims that o3-mini is more reliable than its predecessor, o1-mini, with external testers preferring its responses over 50% of the time and noting a 39% reduction in major mistakes. The model is also 24% faster and 63% cheaper than o1-mini, making it a cost-effective alternative for users and developers.
The model is available to all ChatGPT users, with premium subscribers receiving higher usage limits. Developers can access it via OpenAI's API, where they can adjust its reasoning effort to trade speed for accuracy. Against competitors such as DeepSeek's R1, o3-mini excels on certain benchmarks but lags on others, particularly tests of advanced scientific knowledge. However, OpenAI emphasizes the model's strong safety measures, claiming it surpasses even GPT-4o in preventing harmful outputs. The company is also integrating search functionality into its reasoning models, though this remains a prototype feature.
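The reasoning-effort knob mentioned above is exposed in OpenAI's Chat Completions API as a `reasoning_effort` parameter. The sketch below only builds the request payload (nothing is sent, so no API key is needed); the prompt and the choice of "high" effort are illustrative:

```python
# Sketch of a Chat Completions request for o3-mini with adjustable
# reasoning effort. Building the payload only; an actual call would
# require the `openai` package and an API key.
request = {
    "model": "o3-mini",
    "reasoning_effort": "high",  # "low" | "medium" | "high": speed vs. accuracy
    "messages": [
        {"role": "user",
         "content": "Prove that the sum of two even numbers is even."},
    ],
}

# With the official client, this payload would be passed as:
#   client.chat.completions.create(**request)
print(request["model"], request["reasoning_effort"])
```

Lower effort settings return answers faster and cheaper; higher settings let the model spend more tokens on its internal reasoning before answering, which is where the fact-checking behavior described above comes from.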
While o3-mini is not OpenAI’s most powerful model, it represents an important step in making AI reasoning more accessible and cost-effective – perhaps partially as a response to post-DeepSeek criticism.
Read more at TechCrunch.
The European Union’s AI Act has reached its first compliance deadline as of February 2, empowering regulators to ban AI systems deemed to pose “unacceptable risk.” The Act categorizes AI into four risk levels, with minimal and limited risk applications facing little to no regulation, while high-risk systems, such as AI in healthcare, will be subject to stringent oversight. The current focus is on prohibiting AI use cases like social scoring, biometric profiling, predictive policing based on appearance, and real-time biometric surveillance in public spaces. Companies found violating these regulations could face fines of up to €35 million or 7% of their annual revenue. Although enforcement measures won’t take full effect until August, over 100 companies—including Amazon, Google, and OpenAI—have voluntarily committed to early compliance through the EU AI Pact, though notable absentees include Apple, Meta, and Mistral.
While the AI Act enforces strict bans, some exceptions exist, particularly for law enforcement, which can use biometric AI systems in public spaces under specific circumstances, like locating an abducted person or preventing imminent threats. Additionally, emotion-detecting AI is permitted in workplaces and schools if justified by medical or safety needs. However, regulatory clarity remains a concern, as the European Commission has yet to release promised compliance guidelines. Experts warn that the AI Act will not operate in isolation, as existing laws like GDPR and cybersecurity regulations may overlap, creating compliance complexities for companies. With full enforcement approaching in August, businesses must prepare for potential legal and operational challenges in adhering to the evolving AI regulatory landscape.
More info at TechCrunch.
Discover all the latest news, articles, webinar replays and fifty-five events in our monthly newsletter, Tea O'Clock.