AIR #108 - June 20, 2024

Good morning, AI aficionados! Grab your coffee and get ready to dive into today's edition of AIR: The AI Recon. Leading the headlines is a jaw-dropping story about Google Gemini almost causing a botulism outbreak in one family's kitchen. This cautionary tale underscores the importance of double-checking AI advice, especially when it comes to health and safety. It's a must-read for anyone who relies on AI for daily tasks and wants to avoid potential pitfalls.

But that's not all! Safe Superintelligence Inc. is making waves with its ambitious mission to build the world's first safe superintelligence. Prioritizing safety alongside rapid capability advances, this initiative could redefine the future of AI, making it an exciting development to follow. If you're passionate about the ethical and safe deployment of AI technologies, this is one story you won't want to miss. Meanwhile, Nvidia has released free LLMs that rival GPT-4 in benchmarks, built to generate high-quality synthetic training data and licensed for commercial use. This move is set to democratize access to top-tier AI capabilities, making it a game-changer for developers everywhere.

And for those who love a good scholarly debate, a new paper exposes the practice of "open-washing" in AI, particularly under the EU AI Act. The paper tracks the true openness of ChatGPT alternatives, highlighting the importance of genuine transparency in the AI industry. Whether you're here for the groundbreaking tech updates, ethical debates, or the latest industry buzz, today's edition is packed with stories that will both intrigue and challenge you. So, sit back, sip your coffee, and let's delve into the dynamic world of artificial intelligence together!

Business

🔥 Google Gemini Tried to Kill My Family
Google Gemini almost caused a botulism outbreak in my family. Always double-check AI advice!

KamiAI – The Easiest Way to Extract Data from Documents
Extract data from documents effortlessly with KamiAI's OCR & LLM tech. Secure, affordable, and easy to use. Try it now!

OpenAI-Backed Nonprofits Reneged on Transparency Pledges
OpenAI-backed nonprofits backtrack on transparency promises, withholding financial and governance info despite initial pledges.

Warner Music CEO: Metadata Issues Make Industry Vulnerable to AI
Warner Music CEO warns that metadata issues make the music industry vulnerable to AI, risking artists' revenue and creativity.

Engineering

🔥 Safe Superintelligence Inc: Pioneering the Future of AI Safety
Safe Superintelligence Inc. aims to build the world's first safe superintelligence, prioritizing safety and rapid advancements. Join the mission.

[GitHub] AiuniAI's Unique3D: High-Quality 3D Mesh Generation from a Single Image
AiuniAI's Unique3D generates high-quality 3D meshes from a single image in 30 seconds. Check out their GitHub for more info!

🔥 AI-Powered Enzyme to React Testing Library Conversion at Slack
Slack uses AI to convert 15,000 Enzyme tests to React Testing Library, saving 22% of developer time and boosting productivity. 🚀
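As a rough illustration of what "AI-converting" a test means in practice, here is a minimal sketch of the general pattern only, not Slack's actual pipeline; the OpenAI client, model name, and prompt are illustrative assumptions.

```python
# Illustrative LLM-assisted test conversion -- not Slack's actual pipeline.
# Model name and prompt are assumptions; assumes OPENAI_API_KEY is set.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Rewrite this Enzyme test as a React Testing Library test. "
    "Prefer user-facing queries such as getByRole and getByText, and keep "
    "the original test names and assertions.\n\n"
)

def convert_test(source: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable code model works here
        messages=[{"role": "user", "content": PROMPT + source}],
    )
    return response.choices[0].message.content

for test_file in Path("src").rglob("*.test.jsx"):
    converted = convert_test(test_file.read_text())
    test_file.write_text(converted)  # review the diff before committing
```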

China's DeepSeek Coder Is the First Open-Source Coding Model to Beat GPT-4 Turbo
China's DeepSeek Coder V2, an open-source model, outperforms GPT-4 Turbo on coding tasks, supporting 338 programming languages and a 128K-token context window.

Nvidia Releases Free LLMs That Match GPT-4 in Benchmarks
Nvidia's free Nemotron-4 340B LLMs rival GPT-4 in benchmarks and are built to generate high-quality synthetic training data, under a license that permits commercial use.
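The headline use case is synthetic data generation. Below is a rough sketch of that loop against an OpenAI-compatible endpoint serving the instruct model; the base URL and model ID are assumptions to swap for your own deployment.

```python
# Rough synthetic-data generation loop with Nemotron-4 340B Instruct.
# The endpoint URL and model ID are placeholders for whatever serves the model
# (NVIDIA's API catalog or a self-hosted deployment); adjust both as needed.
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumption: OpenAI-compatible endpoint
    api_key="YOUR_KEY",
)

topics = ["unit testing in Python", "SQL window functions", "Rust ownership"]

with open("synthetic_train.jsonl", "w") as out:
    for topic in topics:
        response = client.chat.completions.create(
            model="nvidia/nemotron-4-340b-instruct",  # assumption: catalog model ID
            messages=[{
                "role": "user",
                "content": f"Write one challenging question and a detailed answer about {topic}.",
            }],
        )
        # Store each generated example as one JSONL line of training data.
        out.write(json.dumps({"topic": topic,
                              "text": response.choices[0].message.content}) + "\n")
```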

[Paper] Microsoft Releases Weights for Florence-2 Vision Model
Microsoft releases open weights for its Florence-2 vision model, which handles captioning, object detection, and segmentation through simple text prompts.
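For anyone who grabs the weights, the usual Hugging Face transformers pattern looks roughly like this; the checkpoint ID, the `<CAPTION>` task token, and the post-processing call follow the model card and should be treated as assumptions if the release differs.

```python
# Minimal Florence-2 sketch via Hugging Face transformers (model card pattern).
from transformers import AutoModelForCausalLM, AutoProcessor
from PIL import Image

model_id = "microsoft/Florence-2-large"  # assumption: released checkpoint ID
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("photo.jpg")
task = "<CAPTION>"  # other task tokens cover detection, segmentation, OCR

inputs = processor(text=task, images=image, return_tensors="pt")
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=128,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# The Florence-2 processor ships a task-aware parser for the raw output.
print(processor.post_process_generation(raw, task=task, image_size=image.size))
```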

Microsoft Goes All Out on Generative AI
Microsoft launches an official OpenAI library for .NET and the AI Toolkit for VS Code, giving developers new tools and integrations for building generative AI apps.

[GitHub] Comprehensive Math Learning Path for ML and DS by Sithu-Khant
Comprehensive math path on GitHub by Sithu-Khant for mastering Machine Learning and Data Science. Dive in and contribute! 📚✨

[Paper] LLAMAFUZZ: Large Language Model Enhanced Greybox Fuzzing
LLAMAFUZZ uses large language models to enhance greybox fuzzing, outperforming competitors by finding more bugs and improving code coverage.

[GitHub] Addepar's RedFlag: AI for High-Risk Code Detection
Addepar's RedFlag uses AI to detect high-risk code changes, ideal for CI pipelines and release testing, enhancing security reviews.

ControlNet Animates Game of Life
ControlNet animates Conway's Game of Life, using the cell grid to condition Stable Diffusion so the pattern is preserved. Try it on Hugging Face or Colab if you're short on GPU!
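For context, the cell grid the ControlNet conditioning has to preserve is just Conway's update rule applied frame by frame; a plain NumPy version (no diffusion involved) looks like this:

```python
# Plain NumPy Game of Life step -- the grid the ControlNet conditioning
# is meant to preserve from frame to frame (no diffusion code here).
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """Apply one Game of Life update to a 2D array of 0s and 1s."""
    # Count the eight neighbours of every cell with wrap-around edges.
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2-3 neighbours; a dead cell is born with exactly 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))
frames = []
for _ in range(16):          # grids that could condition the animation frames
    grid = life_step(grid)
    frames.append(grid)
```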

[Paper] Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation by Fudan University, Baidu, ETH Zurich, and Nanjing University
New AI model "Hallo" syncs speech audio with portrait animations, enhancing lip sync, expressions, and motion for realistic visuals.

Exploring Tokenizers: Moses vs. spaCy
Dive into the world of tokenizers: Moses for rule-based splitting and spaCy for modern NLP pipelines. Each has unique strengths and limitations.
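To see the difference on a concrete sentence, the snippet below runs both side by side; it assumes the sacremoses package for the Moses tokenizer and spaCy's small English model, both installed separately.

```python
# Side-by-side tokenization: rule-based Moses (via sacremoses) vs. spaCy's
# statistical English pipeline. Assumes `pip install sacremoses spacy` and
# `python -m spacy download en_core_web_sm` have been run.
from sacremoses import MosesTokenizer
import spacy

text = "Dr. Smith doesn't pay $1,000 for U.K.-based e-mail servers."

moses = MosesTokenizer(lang="en")
# escape=False keeps raw characters instead of XML-escaping quotes and ampersands.
print("Moses:", moses.tokenize(text, escape=False))

nlp = spacy.load("en_core_web_sm")
print("spaCy:", [token.text for token in nlp(text)])
```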

Academic

[Paper] Rethinking Open Source Generative AI: Open-Washing and the EU AI Act
New paper exposes "open-washing" in AI, tracking the true openness of ChatGPT alternatives under the EU AI Act. Openness is crucial!

[Paper] Adversarial Perturbations Fail to Protect Artists from Generative AI
Adversarial perturbations fail to protect artists from generative AI, leaving them vulnerable to style mimicry. New solutions needed.

Ray Kurzweil: AI's Impact on Energy, Manufacturing, and Medicine
Ray Kurzweil predicts AI will soon transform energy, manufacturing, and medicine, revolutionizing the physical world by 2029.

Human Brains Can Detect Deepfake Voices
Human brains can detect deepfake voices, showing different neural responses compared to real voices, according to University of Zurich research.

Another Company Caught Using AI to Create Fake Journalists and Journalism
Another company, Hoodline, caught using AI to create fake journalists and low-quality news, further eroding trust in journalism.

Ukraine Uses AI to Remove Russian Landmines
Ukraine uses AI to prioritize landmine removal, tackling a clearance job that would take an estimated 700 years by hand.

[Paper] Latest LLMs for Leaderboard Extraction
New study evaluates GPT-4, Mistral 7B, and Llama-2 for extracting leaderboard data from AI research papers, revealing model strengths and limits.
