AIR#121 - July 03, 2024

Good morning, AI aficionados! As you sip your morning coffee, get ready to dive into today's edition of AIR: The AI Recon. Leading the headlines is a fascinating exploration of whether Turing truly proved the undecidability of the Halting Problem. This new paper delves into the nuances of Turing's work, sparking debates among scholars and AI enthusiasts alike. If you're intrigued by the historical and theoretical underpinnings of AI, this is a must-read.

But that's not all! Figma has made waves by disabling its AI tool after it was found to be copying Apple's Weather app. This incident has raised significant questions about quality assurance and the ethical use of AI in design. Meanwhile, in a landmark move, Apple has secured an observer role on OpenAI's board, further strengthening its ties with the AI giant. This strategic partnership is set to reshape the landscape of AI collaboration and governance.

And for those concerned about the environmental impact of AI, Google's carbon emissions have surged nearly 50% due to the energy demands of its AI operations. This development poses a major challenge to Google's net-zero goals and highlights the urgent need for sustainable AI practices. Whether you're here for groundbreaking tech updates, ethical debates, or the latest industry buzz, today's edition is packed with stories that will both intrigue and challenge you. So settle in, and let's delve into the dynamic world of artificial intelligence together!

Business

Figma Disables AI Tool After Copying Apple's Weather App
Figma disables its AI tool after it was found to be copying Apple's Weather app. The CEO admits QA issues and promises a fix.

🔥 Apple to Get OpenAI Board Observer Role in Landmark AI Pact
Apple gains an observer role on OpenAI's board under the new AI pact, with Phil Schiller appointed to the seat, further strengthening its ties to OpenAI.

🔥 Google's Carbon Emissions Surge 50% Due to AI Energy Demand
Google's carbon emissions have surged nearly 50% since 2019 due to AI energy demands, challenging its 2030 net-zero goal.

🔥 Brazil Bans Meta from Mining Data for AI Training
Brazil bans Meta from using local data to train AI, citing privacy concerns. Meta calls it a setback for innovation.

Gen AI: Massive Spend, Minimal Benefit?
Tech giants invest $1tn in Gen AI, but returns are minimal so far. Will the massive spending ever pay off?

[GitHub] Tegon: AI-First Open Source Jira & Linear Alternative
Tegon is an AI-first, open-source alternative to Jira and Linear that automates task management for engineering teams. Try it on GitHub!

A.I. Ushering in Age of Killer Robots in Ukraine War
A.I. is transforming the Ukraine war with autonomous killer drones and weapons, raising ethical and legal concerns globally.

Google Emissions Soar Nearly 50% in Five Years Due to AI Surge
Google's emissions surged nearly 50% in 5 years due to AI expansion, challenging its 2030 net-zero goal.

Gen AI Takes Over Finance: Top Applications and Challenges
Generative AI is transforming finance, enhancing efficiency, decision-making, and personalization, but faces data privacy, regulatory, and skill challenges.

What Happened to the AI Revolution? (2024)
Despite hype and investment, AI has yet to show significant economic impact beyond tech giants' projections.

Engineering

🔥 [GitHub] Microsoft Research's GraphRAG: Advanced Tool for Complex Data Discovery
Microsoft's GraphRAG, a new tool for complex data discovery, is now on GitHub. It offers advanced, structured info retrieval and response generation.
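
The repository ships its own indexing pipeline and query engine; as a rough illustration of the underlying idea (build an entity graph from your documents, then answer questions from a node's neighborhood rather than from raw text chunks), here is a toy sketch in plain Python. The keyword-based entity extraction and the networkx graph are stand-ins for illustration, not Microsoft's API.

```python
# Toy sketch of the graph-RAG idea: index entities into a graph, then pull a
# node's neighborhood as context for an LLM prompt. This is NOT the GraphRAG
# API; entity extraction is faked with a fixed keyword list.
import networkx as nx

documents = {
    "doc1": "Apple gains an observer seat on OpenAI's board.",
    "doc2": "OpenAI partners with Microsoft on Azure infrastructure.",
}
entities = ["Apple", "OpenAI", "Microsoft", "Azure"]  # stand-in for LLM-based extraction

graph = nx.Graph()
for doc_id, text in documents.items():
    found = [e for e in entities if e in text]
    for i, a in enumerate(found):
        for b in found[i + 1:]:
            # Co-occurring entities get an edge annotated with the source document.
            graph.add_edge(a, b, source=doc_id)

def local_context(entity: str) -> str:
    """Collect the sentences behind an entity's graph neighborhood."""
    doc_ids = {graph.edges[entity, nbr]["source"] for nbr in graph.neighbors(entity)}
    return " ".join(documents[d] for d in sorted(doc_ids))

print(local_context("OpenAI"))  # the context an LLM would then be prompted with
```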

🔥 Testing Kolmogorov-Arnold Networks: A Hands-On Experience
Kolmogorov-Arnold Networks (KANs) show promise but require extensive tuning and add complexity, making traditional neural networks the simpler default choice.
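
For context on what makes KANs structurally different: instead of fixed activations on nodes with learned linear weights, every edge carries its own learnable one-dimensional function. Below is a minimal sketch of that structure, using a small polynomial basis in place of the B-splines from the KAN paper; the class and parameter names are invented and there is no training loop.

```python
# Minimal sketch of the Kolmogorov-Arnold idea: each (input, output) edge has
# its own learnable 1-D function (here a low-degree polynomial), and each
# output is the sum of its incoming edge functions.
import numpy as np

rng = np.random.default_rng(0)

class ToyKANLayer:
    def __init__(self, d_in: int, d_out: int, degree: int = 3):
        # One coefficient vector per edge: shape (d_out, d_in, degree + 1).
        self.coef = rng.normal(scale=0.1, size=(d_out, d_in, degree + 1))

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # x: (batch, d_in) -> powers: (batch, d_in, degree + 1)
        powers = np.stack([x**k for k in range(self.coef.shape[-1])], axis=-1)
        # Evaluate every edge function phi_{j,i}(x_i) and sum over the inputs i.
        return np.einsum("bid,oid->bo", powers, self.coef)

layer = ToyKANLayer(d_in=2, d_out=3)
print(layer(rng.normal(size=(4, 2))).shape)  # (4, 3)
```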

🔥 Meta 3D Gen: Fast Text-to-3D Asset Creation by Meta
Meta 3D Gen: Meta's new AI converts text prompts into high-quality 3D assets in under a minute, revolutionizing 3D creation and retexturing.

The Illustrated Transformer: Visualizing Machine Learning Concepts
Jay Alammar's "The Illustrated Transformer" breaks down the transformer architecture with clear visuals, making it accessible; the post is widely referenced in top university courses.
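
The centerpiece of the post is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. For reference, a single-head NumPy version of that formula:

```python
# Scaled dot-product attention for one head, as illustrated in the post.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)       # (batch, q_len, k_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over the keys
    return weights @ V                                       # (batch, q_len, d_v)

rng = np.random.default_rng(1)
out = scaled_dot_product_attention(
    rng.normal(size=(2, 4, 8)),   # queries
    rng.normal(size=(2, 6, 8)),   # keys
    rng.normal(size=(2, 6, 8)),   # values
)
print(out.shape)  # (2, 4, 8)
```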

🔥 [GitHub] Integrate Mistral Codestral & GPT-4o into Jupyter with Pretzel.AI
Pretzel.AI brings Mistral Codestral and GPT-4o into Jupyter notebooks for AI code generation, inline completions, and error fixing. 🚀

Microsoft Sneakily Updates Phi-3 Mini with Major Enhancements
Microsoft quietly upgrades Phi-3 Mini: better code understanding, enhanced output quality, improved multi-turn instruction following, and more! 🚀

[Paper] DETRs Surpass YOLOs in Real-Time Object Detection
The new RT-DETR model shows DETRs outperforming YOLOs in real-time object detection on both speed and accuracy.
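
If you want to try a query-based (DETR-style) detector yourself, note that RT-DETR support depends on your transformers version; the sketch below uses the original facebook/detr-resnet-50 checkpoint, which follows the same set-prediction workflow, and assumes a local test image.

```python
# DETR-style object detection via Hugging Face transformers.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("street.jpg")               # any local test image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert the set predictions into boxes above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.7
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 2), box.tolist())
```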

Figma Pulls AI Tool Amid Accusations of Copying Apple's Design
Figma pulls AI tool after accusations of copying Apple's design, despite claims it wasn't trained on Apple content.

[Paper] Mooncake: Kimi's KVCache-centric Architecture for LLM Serving by Moonshot AI
Mooncake, Moonshot AI's KVCache-centric serving architecture for Kimi, delivers up to 525% higher throughput and handles 75% more requests.
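
A big part of the KVCache-centric idea is treating cached attention state as a shared, reusable resource across requests. As a loose illustration only (this is not Mooncake's design, and the class below is invented), here is a toy prefix-cache lookup showing how requests that share a prompt prefix can skip re-running prefill over those tokens.

```python
# Toy prefix KV-cache: requests sharing a prompt prefix reuse the cached
# attention state instead of recomputing prefill for those tokens.
import hashlib

class PrefixKVCache:
    def __init__(self):
        self._store = {}  # prefix hash -> opaque KV blob

    @staticmethod
    def _key(tokens: tuple) -> str:
        return hashlib.sha256(repr(tokens).encode()).hexdigest()

    def put(self, tokens: tuple, kv_blob):
        self._store[self._key(tokens)] = kv_blob

    def get(self, tokens: tuple):
        """Return (prefix_length, blob) for the longest cached prefix of `tokens`."""
        for cut in range(len(tokens), 0, -1):
            blob = self._store.get(self._key(tokens[:cut]))
            if blob is not None:
                return cut, blob          # prefill can resume at position `cut`
        return 0, None

cache = PrefixKVCache()
cache.put((1, 2, 3), kv_blob="kv-for-1-2-3")
print(cache.get((1, 2, 3, 4, 5)))  # (3, 'kv-for-1-2-3'): only tokens 4 and 5 need prefill
```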

GPT4All 3.0: Open-Source Local LLM Desktop App Released
GPT4All 3.0 released! Open-source, private LLM desktop app with major UI/UX overhaul, supporting 1000+ models. Install at nomic.ai/gpt4all.
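
Besides the desktop app, the project also ships a Python binding for local inference. A minimal sketch, assuming the gpt4all package is installed; the model file name is just an example, substitute any model listed in the app (it is downloaded locally on first use).

```python
# Run a local model through the gpt4all Python binding.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # example model name; runs fully locally
with model.chat_session():
    reply = model.generate("Summarize why local LLMs matter.", max_tokens=128)
print(reply)
```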

[GitHub] KusionStack's Karpor: The Ultimate Kubernetes Visualization Tool
Karpor by KusionStack: a Kubernetes visualization tool that boosts developer efficiency with AI, multi-cloud support, and intuitive insights.

Figma Disables AI After Accusations of Copying Apple's Weather App
Figma halts AI design tool after accusations of copying Apple’s Weather app. CEO denies claims but pauses feature for QA.

Academic

[Paper] Did Turing Prove the Undecidability of the Halting Problem?
Did Turing truly prove the halting problem's undecidability? New paper explores this with a nuanced conclusion.
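
For readers who want the argument the paper is re-examining, here is the textbook diagonal construction sketched in Python. This is the standard reductio usually taught today, not the paper's historical analysis of what Turing's 1936 text actually establishes.

```python
# If a total decider halts(prog, arg) existed, the program below would be a
# contradiction: it halts on itself exactly when halts() says it doesn't.
def halts(prog, arg) -> bool:
    """Hypothetical halting decider; no correct total implementation can exist."""
    raise NotImplementedError

def diagonal(prog):
    if halts(prog, prog):    # if the decider claims prog halts on itself...
        while True:          # ...loop forever,
            pass
    return "halted"          # ...otherwise halt immediately.

# halts(diagonal, diagonal) can be neither True nor False without
# contradicting diagonal's own behaviour.
```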

[Paper] LLMs Achieve Adult Human Performance on Higher-Order Theory of Mind Tasks
LLMs like GPT-4 now match or surpass adult humans on higher-order theory-of-mind tasks, with implications for human-AI interaction.

Explainability is Not a Game
SHAP scores can be misleading: they may assign importance to irrelevant features while ignoring critical ones, undermining trust in AI explanations.
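
For reference, the attributions under discussion are Shapley values of a coalitional game whose value function is the model's expected output with a subset of features fixed. The brute-force toy below shows that definition on a made-up three-feature Boolean classifier; it is purely illustrative and not one of the paper's counterexamples.

```python
# Exact Shapley values for a toy Boolean classifier under uniform inputs.
from itertools import combinations, product
from math import factorial

def model(x):                      # toy classifier over 3 binary features
    return int(x[0] and (x[1] or x[2]))

N = 3
POINT = (1, 1, 0)                  # the instance being explained

def v(subset):
    """Expected model output with the features in `subset` fixed to POINT's values."""
    free = [i for i in range(N) if i not in subset]
    total = 0
    for fill in product([0, 1], repeat=len(free)):
        x = list(POINT)
        for i, val in zip(free, fill):
            x[i] = val
        total += model(x)
    return total / 2 ** len(free)

def shapley(i):
    others = [j for j in range(N) if j != i]
    value = 0.0
    for size in range(N):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(N - size - 1) / factorial(N)
            value += weight * (v(set(S) | {i}) - v(set(S)))
    return value

# The three attributions sum to model(POINT) minus the baseline v(empty set).
print([round(shapley(i), 3) for i in range(N)])
```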
