AIR#13 - March 17th, 2024
Good morning, digital pioneers! Today's edition of AIR: The AI Recon is a rollercoaster ride through the ever-evolving world of artificial intelligence, brewed to perfection for your morning indulgence. We're kicking things off with a sobering look at Apple's AI dilemma, likening its growth without AI to the stable but sluggish pace of Coca-Cola, rather than the explosive momentum of a tech titan.
The atmosphere gets electric as we recount the booing of San Francisco techies at SXSW, a clear signal of growing skepticism towards the "AI is a culture" narrative. For our developer dynamos, we've got a treasure trove of innovations from Flash Attention in CUDA to Microsoft's AutoDev, and Apple's breakthrough in multimodal AI, promising a future where our gadgets understand us better than ever. To keep the AI behemoths powered, we explore sustainable solutions for data center energy demands, while DARPA's new initiative aims to shield us from the dark arts of deepfakes.
The saga continues with EagleX 1.7T outperforming LLaMA 7B, a novel ASCII art hack exposing AI vulnerabilities, and the FTC setting its sights on Reddit's AI data deals. As OpenAI courts Abu Dhabi backing for its own AI chips, and Google embraces open-source RISC-V for its AI silicon, we're witnessing a seismic shift in the AI landscape. Amidst these tales of innovation and intrigue, we also delve into AI's darker corners with the coprophagic crisis of models feeding on their own output, alongside fresh lessons from multimodal LLM pre-training. So, grab your cup of ambition, and let's dive into the digital deep end – it's an exploration of AI's brightest innovations and shadowiest corners you won't want to miss.
Business
Apple Without AI Looks More Like Coca-Cola Than High-Growth Tech
Without an AI story, Apple's growth profile now looks more like steady Coca-Cola's than a booming tech giant's.
Festival crowd boos San Francisco techies over 'AI is a culture' video
SXSW crowd boos SF techies' "AI is a culture" video, highlighting resistance to overhyped AI narratives.
How to feed all the power-hungry AI data centers
AI's hunger for power meets its match: solar-plus-battery setups that can run data centers around the clock, slashing costs and paving the way to a more sustainable build-out.
DARPA to launch efforts that will bolster defenses against manipulated media
DARPA launches new efforts to combat deepfakes, including an analytic catalog and an AI forensics research challenge, aiming to fortify defenses against manipulated media.
🔥 Reddit's Sale of User Data for AI Training Draws FTC Investigation
Reddit's user data sale for AI training sparks an FTC probe amid privacy and fairness concerns, highlighting growing scrutiny of AI data deals.
Engineering
🔥 Flash Attention in ~100 lines of CUDA
A new GitHub repo distills Flash Attention into ~100 lines of CUDA, making the memory-efficient attention kernel behind faster AI models approachable for beginners.
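For readers who want the gist without opening kernel code, here is a minimal NumPy sketch of the idea the repo implements in CUDA: attention computed over key/value tiles with a running ("online") softmax, so the full n×n score matrix is never materialized. The function and variable names are our own illustration, not the repo's code.

```python
import numpy as np

def flash_attention_sketch(Q, K, V, block_size=64):
    """Tiled attention with an online softmax -- the core trick behind Flash
    Attention, written in NumPy for readability (the linked repo does the same
    math in CUDA, keeping each tile in fast shared memory)."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(Q)
    row_max = np.full(n, -np.inf)   # running max of scores per query row
    row_sum = np.zeros(n)           # running softmax normalizer per query row

    for start in range(0, K.shape[0], block_size):
        Kb = K[start:start + block_size]           # (b, d) key tile
        Vb = V[start:start + block_size]           # (b, d) value tile
        scores = (Q @ Kb.T) * scale                # (n, b) partial scores

        new_max = np.maximum(row_max, scores.max(axis=1))
        correction = np.exp(row_max - new_max)     # rescale old accumulators
        p = np.exp(scores - new_max[:, None])      # numerically stable exps
        row_sum = row_sum * correction + p.sum(axis=1)
        out = out * correction[:, None] + p @ Vb
        row_max = new_max

    return out / row_sum[:, None]

# Sanity check against naive attention that materializes the full score matrix.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(32)
naive = np.exp(S - S.max(axis=1, keepdims=True))
naive = (naive / naive.sum(axis=1, keepdims=True)) @ V
assert np.allclose(flash_attention_sketch(Q, K, V), naive)
```

The CUDA version earns its speed by keeping those tiles in on-chip shared memory; the arithmetic is the same.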
🔥 AutoDev: Automated AI-driven development by Microsoft
Microsoft's AutoDev hands complex software-engineering tasks to autonomous AI agents working inside secure, sandboxed environments, aiming for both efficiency and safety.
Apple researchers achieve breakthroughs in multimodal AI
Apple researchers report multimodal AI advances that pair image and text understanding, hinting at a major AI push in future products.
LLM Inference Speed of Light
Exploring the "speed of light" for LLM inference: A deep dive into minimizing token generation time through optimal hardware and software synergy.
EagleX 1.7T: Soaring past LLaMA 7B 2T in both English and Multi-lang evals
EagleX 1.7T, an RWKV-based model, outshines LLaMA 7B in English and multilingual evals despite training on fewer tokens (1.7T vs. 2T), making the case for more efficient, lower-energy architectures.
ASCII art elicits harmful responses from 5 major AI chatbots
Researchers show that rendering trigger words as ASCII art slips harmful requests past the safety training of five major AI chatbots.
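To make the mechanism concrete without reproducing anything harmful, here is a benign sketch of the masking trick: a word is redrawn as a picture built from characters, which a capable model can still read back, while a plain-text safety filter never sees the trigger word itself. The toy font and helper below are our own illustration, not the researchers' code.

```python
# Render a word as 5-row block letters -- the benign core of the ASCII-art
# masking idea (only a few glyphs defined here, just enough for a demo word).
FONT = {
    "H": ["#   #", "#   #", "#####", "#   #", "#   #"],
    "E": ["#####", "#    ", "#####", "#    ", "#####"],
    "L": ["#    ", "#    ", "#    ", "#    ", "#####"],
    "O": ["#####", "#   #", "#   #", "#   #", "#####"],
}

def ascii_mask(word: str) -> str:
    """Return the word drawn as block letters, one glyph per character."""
    rows = ["   ".join(FONT[ch][row] for ch in word.upper()) for row in range(5)]
    return "\n".join(rows)

print(ascii_mask("hello"))  # a text filter sees '#' art; a capable model reads HELLO
```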
Google to use open source RISC-V for its custom AI silicon, TPU
Google's next-gen TPU may use SiFive's RISC-V cores, hinting at a major shift towards open source AI silicon.
Academic
OpenAI aims to make its own AI processors – in talks with Abu Dhabi investors
OpenAI is reportedly in talks with Abu Dhabi's MGX to bankroll its own AI chip ambitions, a bid to loosen its dependence on Nvidia and expand global AI capacity.
🔥 MM1: Methods, Analysis and Insights from Multimodal LLM Pre-training
MM1: multimodal LLMs of up to 30B parameters set new few-shot benchmarks, with the gains traced to careful choices of pre-training data mix and image-encoder design.
The Coprophagic AI Crisis
The promise that ever more data will keep improving AI is breaking down: the web is filling with AI-generated "botshit" that future models will train on, a vicious cycle of diminishing returns.