AIR#38 - April 11, 2024
Good morning, AI enthusiasts! As the sun peeks over the horizon and you take that first sip of your morning coffee, today's edition of AIR: The AI Recon is set to ignite your imagination with stories that herald a new era in artificial intelligence. Leading the pack, Mistral AI's unveiling of its groundbreaking 8x22B MoE language model is making waves across the tech world, promising to set new benchmarks in language AI with its fast, secure, open-source release. This leap forward is not just a technical milestone; it's a beacon for developers seeking the next frontier in AI capabilities.
But the innovation doesn't stop at language models. The music enthusiasts among us will be thrilled with Udio, a platform that transforms text prompts into music in your favorite styles. Imagine typing a few words and having a symphony or a pop hit crafted in response; it's like having a personal composer at your fingertips, ready to bring your musical visions to life. This blend of creativity and technology is reshaping how we think about music production, making it more accessible and personalized than ever before.
As we delve into these stories and more, including the surprising underperformance of GPT-4 Turbo with Vision in coding benchmarks, today's edition is a testament to the ever-evolving landscape of AI. From groundbreaking language models and musical innovations to the challenges of coding AI, each story is a piece of the puzzle in understanding the vast potential and current limitations of artificial intelligence. So, as you enjoy your brew and gear up for the day, let's embark on this journey together, exploring the wonders and challenges of AI, one breakthrough at a time.
Business
Big Tech's Desperate Data Grab for AI Development Exposed
Tech giants skirt rules to collect vast troves of data for AI, sparking ethical and legal debates. OpenAI, Google, and Meta push boundaries to feed their models' hunger for data.
AI Influencers Outshine Humans on Instagram, Threatening Gen Z's Dreams
AI influencers on Instagram are edging out human creators, challenging Gen Z's influencer aspirations with cost-effective, engaging content.
Meta Unveils Next-Gen AI Chip "Artemis" to Cut Costs and Reduce Nvidia Dependence
Meta introduces "Artemis" AI chip, aiming to slash costs and reduce reliance on Nvidia, marking a significant shift in AI hardware strategy.
Adam Schiff Proposes Bill Mandating AI Firms Disclose Copyrighted Content Use
Adam Schiff's bill aims to make AI companies disclose copyrighted content use in their models, balancing innovation with ethical standards.
Engineering
🔥 Mistral AI Unveils Groundbreaking 8x22B MoE Language Model
Mistral AI launches revolutionary 8x22B MoE model, setting new standards in language AI. Fast, secure, open-source.
🔥 Udio: Create Music with Text Prompts in Your Favorite Styles
Meet Udio: Transform words into music in styles you love with just a text prompt. Create, share, and inspire. 🎶✨
🔥 GPT-4 Turbo with Vision Underperforms in Coding Benchmarks
GPT-4 Turbo with Vision, the latest from OpenAI, scores below earlier GPT-4 models on coding benchmarks, earning a reputation as the laziest coder yet.
🔥 [GitHub] Google DeepMind's RecurrentGemma: Griffin-Based Open Weights Language Model
Google DeepMind's RecurrentGemma, a Griffin-based open language model, now on GitHub. Fast, efficient, and ready for fine-tuning.
[GitHub] Build Fast LLMs in JavaScript with Next-Token Prediction
Build fast, JavaScript-based LLMs with next-token prediction for auto-completion, spell check, and more. Start creating with bennyschmidt's toolkit.
🔥 [GitHub] Aider: AI-Driven Pair Programming Tool by paul-gauthier
Aider, a CLI tool by paul-gauthier, pairs you with AI for coding directly in your terminal, enhancing productivity with GPT-3.5/4's help.
Mistral AI Releases Mixtral 8x22B Model Torrent on X
Mistral AI just dropped the Mixtral 8x22B model for free on X, sparking a torrent of excitement and downloads.
🔥 Meta Unveils MTIA v2: Doubling Down on AI Chip Performance
Meta's new MTIA v2 chip doubles AI performance, enhancing ads and recommendations, marking a leap in AI infrastructure and efficiency.
🔥 24-Hour LLM Blitz: Google, OpenAI, and Mistral Unleash New Models
Google, OpenAI, and Mistral drop new LLMs in a single day! Gemini Pro 1.5, GPT-4 Turbo with Vision, and Mixtral 8x22B lead the AI charge.
[GitHub] HuggingFace Releases Mixtral-8x22B: A Sparse Mixture of Experts LLM
HuggingFace introduces Mixtral-8x22B, a cutting-edge Sparse Mixture of Experts LLM, now available in transformers format for advanced AI applications.
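For readers who want to kick the tires, here is a minimal sketch of loading the checkpoint through transformers, assuming the mistralai/Mixtral-8x22B-v0.1 repo id and hardware that can actually hold the weights (in practice you would shard across several large GPUs or quantize):

```python
# Minimal sketch: load Mixtral-8x22B from the Hugging Face Hub in transformers format.
# The mistralai/Mixtral-8x22B-v0.1 repo id is assumed; the ~141B parameters need to be
# sharded across several large GPUs (or quantized) to fit in memory.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "mistralai/Mixtral-8x22B-v0.1"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # spread layers across available GPUs
)

inputs = tokenizer("Mixture-of-experts models route each token to", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```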
Meta to Launch Open Source Llama 3 LLM Next Month
Meta gears up to release its Llama 3 LLM next month, aiming to leapfrog rivals with open-source, versatile models built for broad developer adoption.
Prince Canuma Announces Mixtral 8x22B on MLX: A Leap in Local AI Inference for Macs
Prince Canuma brings Mixtral 8x22B to Macs via MLX: 170B params and a 65K context window, a big step up for local, on-device inference.
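For Apple-silicon owners curious to try it, a minimal sketch using the mlx-lm package; the mlx-community/Mixtral-8x22B-4bit repo name is an assumption, and even a 4-bit conversion needs a very large amount of unified memory:

```python
# Minimal sketch: local Mixtral inference on an Apple-silicon Mac with mlx-lm.
# The mlx-community/Mixtral-8x22B-4bit repo name is an assumption; even a 4-bit
# conversion needs a very large amount of unified memory for a model this size.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mixtral-8x22B-4bit")  # assumed repo name
text = generate(
    model,
    tokenizer,
    prompt="Explain mixture-of-experts routing in one paragraph.",
    max_tokens=200,
)
print(text)
```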
Universal AI Drone Controller Demo on YouTube
A new demo of a Universal AI Drone Controller just dropped on YouTube, showing smarter, easier control for any drone!
[Paper] Griffin: A Hybrid RNN for Efficient Language Models with Gated Recurrences and Local Attention
The Griffin model mixes gated linear recurrences with local attention for efficient, scalable language modeling, matching strong Transformer baselines while training on far fewer tokens.
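To give a feel for the recurrence side of the design, here is a toy NumPy sketch of a gated linear recurrence; it is a simplified illustration of the idea, not the paper's exact RG-LRU block:

```python
# Toy NumPy sketch of a gated linear recurrence, the general idea behind
# Griffin-style temporal mixing (simplified; NOT the paper's exact RG-LRU).
import numpy as np

def gated_recurrence(x, w_gate, b_gate):
    """x: (seq_len, dim) token features -> (seq_len, dim) hidden states.

    Each step blends the previous state with the current input through an
    input-dependent sigmoid gate, so the running state is updated in O(1)
    per token instead of attending over the whole history.
    """
    h = np.zeros(x.shape[1])
    states = []
    for x_t in x:
        a_t = 1.0 / (1.0 + np.exp(-(x_t @ w_gate + b_gate)))  # gate in (0, 1)
        h = a_t * h + (1.0 - a_t) * x_t                       # convex mix of memory and input
        states.append(h)
    return np.stack(states)

# Tiny demo: 6 tokens with 4-dimensional features and small random weights.
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
out = gated_recurrence(x, w_gate=0.1 * rng.normal(size=(4, 4)), b_gate=np.zeros(4))
print(out.shape)  # (6, 4)
```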
[Tool] LLMWhisperer: Optimize Complex Documents for Large Language Models
LLMWhisperer optimizes complex docs for AI, ensuring cleaner data input for better outputs. Free tier available, integrates with Unstract for seamless automation.
[Google Cloud] JetStream: Tripling LLM Inference Efficiency on TPUs
Google Cloud's JetStream delivers up to 3x more cost-efficient LLM inference on TPUs, with support for both PyTorch and JAX.
OpenAI and Meta Unveil AI Models With Reasoning Capabilities
OpenAI and Meta signal that their next AI models will bring advanced reasoning and planning skills, a step toward more capable analysis from everyday assistants.
Academic
[Paper] Optimizing Inference in MoE Large Language Models
A new study finds that smaller MoE models with fewer experts can deliver better inference efficiency without sacrificing performance, albeit at higher training cost.
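A quick back-of-the-envelope helps make the trade-off concrete: with top-k routing, a token only runs through k of the experts, so per-token compute tracks active rather than total parameters. The numbers below are illustrative (roughly Mixtral-8x22B-shaped), not figures from the paper:

```python
# Back-of-the-envelope: total vs. active parameters in a top-k routed MoE.
# Numbers are illustrative (roughly Mixtral-8x22B-shaped), not from the paper.

def moe_param_counts(shared_b: float, expert_b: float, num_experts: int, top_k: int):
    """All sizes in billions of parameters."""
    total = shared_b + num_experts * expert_b   # what you must store
    active = shared_b + top_k * expert_b        # what each token actually runs
    return total, active

total, active = moe_param_counts(shared_b=5.0, expert_b=17.0, num_experts=8, top_k=2)
print(f"total ≈ {total:.0f}B params, active per token ≈ {active:.0f}B")
# Shrinking the expert count cuts memory and serving cost; the study weighs
# that saving against the extra training compute needed to match quality.
```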
[Paper] Against The Achilles' Heel: Comprehensive Survey on Red Teaming in Generative Models
New survey reveals red teaming's role in strengthening generative AI models, covering attack strategies, multimodal defenses, and future research paths.
[Paper] Tsinghua and Shanghai AI Labs Breakthrough: Polynomial Time Quantum Algorithms for Lattice Problems
Tsinghua & Shanghai AI Labs unveil a quantum algorithm claimed to solve hard lattice problems in polynomial time, with major implications for lattice-based cryptography.
Struggling to Generate AI Images of Asian Men and White Women: A Racial Bias Exposed
AI image generators struggle to depict Asian men and white women together, revealing racial biases in their training data.