AIR #83 - May 26, 2024
Good morning, AI aficionados! Grab your coffee and get ready to dive into today's edition of AIR: The AI Recon, where the world of artificial intelligence is buzzing with stories that will both intrigue and challenge you. Leading the headlines is the EU's approval of a historic AI Act, setting global standards with strict regulations and penalties. This landmark legislation aims to ensure AI is developed and used responsibly, making waves across industries worldwide. Meanwhile, a scandal has erupted as a nonconsensual AI porn maker accidentally leaked customer emails, raising serious privacy concerns and sparking debates about the ethical use of AI.
But it's not all about the legal and ethical drama. Dive into the academic trenches with a new study that offers best practices for reproducible evaluation of language models, a crucial step in advancing NLP research. For those looking to sharpen their AI tools, VisionGPT on GitHub promises to analyze images in seconds, offering instant AI-driven insights with just a few clicks. And if you're a developer eager to own your data, Unleashed Chat lets you deploy your own private, uncensored AI chatbot, complete with live data querying and Bitcoin payments.
As we explore these stories, let's not forget the broader implications they bring to the table. From Google's admission that AI errors are "inherent" and unsolved, causing user frustration, to Scott Galloway's provocative argument that we're in an AI bubble driven by hype, today's edition is packed with insights that challenge our understanding of AI's role in society. So, sit back, sip your coffee, and let's delve into the dynamic world of artificial intelligence together.
Business
EU Approves Historic AI Act
EU approves historic AI Act, setting global standards for AI regulation with strict rules, penalties, and a risk-based approach.
Nonconsensual AI Porn Maker Leaks Customers' Emails
AI porn maker leaks customer emails, exposing clients and sparking privacy concerns.
Google Admits It Can't Fix AI's Wild Errors
Google admits AI errors are "inherent" and unsolved, causing misinformation and user frustration, despite ongoing efforts to improve.
Meta Uses Instagram and Facebook Photos to Train AI Models
Meta uses public Instagram and Facebook photos to train its AI models, saying private content is excluded.
Bubble.ai: Are We in an AI Bubble? Scott Galloway Weighs In
Scott Galloway argues we're in an AI bubble driven by hype and speculation, similar to past tech bubbles. Will it pop or endure?
Microsoft Unveils Copilot for Telegram
Microsoft launches Copilot for Telegram, an AI assistant powered by GPT for smarter chats, game tips, travel plans, and more. Try it now!
Engineering
[GitHub] VisionGPT: Analyze Images in Seconds with AI
Analyze images in seconds with VisionGPT on GitHub. Upload photos for instant AI-driven insights. Try it now!
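The repo's internals aren't excerpted here, but the pattern behind tools like VisionGPT is typically a single call to a vision-capable model. A minimal sketch of that pattern, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment rather than VisionGPT's own code:

```python
# Hedged sketch of the generic "upload a photo, get AI insights" pattern;
# not VisionGPT's actual implementation.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode a local image so it can travel inside a JSON request.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what's in this image."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```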
Unleashed Chat: Deploy Your Own Private, Uncensored AI Chatbot
Deploy Unleashed Chat: private, uncensored AI chatbot with open-source models, live data querying, and Bitcoin payments. Own your data.
I made a choose your own adventure game with AI NPCs
A developer built a choose-your-own-adventure game with AI NPCs, letting you become the main character and shape the story as you go. 🌟🎮
Pocket-Sized AI Models: A New Era of Computing
Pocket-sized AI models like Microsoft's Phi-3-mini enable powerful AI on local devices, enhancing privacy and responsiveness without cloud reliance.
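For a concrete feel of "pocket-sized," here is a minimal sketch of running Phi-3-mini entirely on a local machine with Hugging Face Transformers; the prompt and generation settings are illustrative, and older transformers versions may need trust_remote_code=True:

```python
# Local, cloud-free inference with a small model; assumes the `transformers`
# and `torch` packages and the public Phi-3-mini checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # runs on CPU by default

# Build a chat prompt with the model's own chat template.
messages = [{"role": "user", "content": "Why does on-device AI help privacy?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```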
Llamafile 0.8.5: Tiny Models 2x Faster on Threadripper
Llamafile 0.8.5 doubles tiny model performance on AMD Threadripper, enhancing CPU and GPU efficiency for LLMs.
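Llamafile bundles llama.cpp's server, which speaks an OpenAI-compatible HTTP API when run locally, so trying one of these tiny models yourself takes a few lines. A minimal sketch; the default host and port are assumptions about your local setup, not part of the 0.8.5 release notes:

```python
# Query a llamafile already running locally (e.g. `./model.llamafile --server`).
# Uses only the standard library; endpoint and port are the llama.cpp defaults.
import json
import urllib.request

payload = {
    "messages": [{"role": "user", "content": "Reply with one short sentence."}],
    "max_tokens": 32,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```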
[GitHub] Unsafe OpenAI Code: GPT-4o Uses eval() on Untrusted Text
⚠️ Warning: OpenAI example code for GPT-4o runs eval() on untrusted text, posing a significant security risk. Developers must handle function execution safely.
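The safer alternative to eval() is an explicit dispatch table: the model can only name functions you registered, and arguments are parsed as data, never executed. A minimal sketch; the tool name and handler below are hypothetical, not from the linked repo:

```python
import json

# Hypothetical tool the model may request; illustrative only.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Allow-list: only functions registered here can ever run.
ALLOWED_TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Run a model-requested function without eval()."""
    func = ALLOWED_TOOLS.get(name)
    if func is None:
        raise ValueError(f"Model requested unknown tool: {name!r}")
    # json.loads parses data only; unlike eval(), it cannot execute code.
    args = json.loads(arguments_json)
    return func(**args)

# Example: a model response asking for get_weather with JSON arguments.
print(dispatch_tool_call("get_weather", '{"city": "Lisbon"}'))
```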
What Does GPT Stand For? Understanding GPT 3.5, GPT 4, GPT-4o, and More | ZDNET
GPT stands for Generative Pre-trained Transformer, powering AI chatbots like ChatGPT with models from GPT-3.5 to the latest GPT-4o.
[Paper] StopThePop: Sorted Gaussian Splatting for View-Consistent Real-Time Rendering (SIGGRAPH 2024)
StopThePop's new hierarchical rasterization cuts popping artifacts and boosts rendering speed by 1.6x, halving memory use.
What is RHEL AI? Your Guide to Open Source AI with Red Hat
Red Hat unveils RHEL AI, an open-source platform for developing and running AI models, empowering domain experts without data science skills.
Academic
[Paper] Lessons from the Trenches on Reproducible Evaluation of Language Models
"New study reveals best practices and tools for reproducible evaluation of language models, addressing key challenges in NLP research."
[Paper] ConvNeXt: A ConvNet for the 2020s
ConvNeXt reimagines ConvNets for the 2020s, rivaling Vision Transformers in accuracy and scalability while retaining simplicity and efficiency.
[Preprint] Automated Scoring of Math Self-Explanations Using LLM-Generated Datasets
LLMs improve automated math self-explanation scoring by enriching datasets, boosting accuracy with a semi-supervised approach.
GPT-4o Passes Math Test GPT-4 and Claude Failed
GPT-4o aced a math test that GPT-4 and Claude failed, suggesting improvements in AI math capabilities.
[Paper] MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning
MoRA introduces high-rank updating for efficient fine-tuning, outperforming LoRA on memory tasks while maintaining parameter efficiency.
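The counting argument behind "high-rank at the same cost" is easy to verify: LoRA's two thin matrices and MoRA's one square matrix can hold exactly the same number of parameters while allowing very different maximum ranks. A sketch of the arithmetic in the paper's terms (illustrative shapes, not the authors' code):

```python
import torch

d, r = 4096, 8                        # hidden size and LoRA rank (illustrative)

# LoRA: delta_W = B @ A, so rank(delta_W) <= r
A = torch.randn(r, d)
B = torch.zeros(d, r)
print("LoRA params:", A.numel() + B.numel(), "| max rank:", r)   # 65,536 | 8

# MoRA: one square matrix M with the same budget, side sqrt(2*d*r),
# applied via fixed (non-learned) compress/decompress operators.
r_hat = int((2 * d * r) ** 0.5)       # 256
M = torch.zeros(r_hat, r_hat)
print("MoRA params:", M.numel(), "| max rank:", r_hat)           # 65,536 | 256
```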
[Paper] Programming Skills of Large Language Models: ChatGPT vs. Gemini AI
A new study compares the programming skills of ChatGPT and Gemini, assessing their code quality and the implications for software development.
[Paper] Prefrontal Cortex-Inspired Architecture for Planning in LLMs
New architecture inspired by the prefrontal cortex enhances planning in LLMs, showing significant improvements on complex tasks.
Superhuman AI: What Does It Mean and How Can We Tell?
AI excels in specific tasks like debates and medical diagnoses but struggles elsewhere. Full AGI is still a distant goal.
[Paper] Beware of Botshit: Managing Epistemic Risks of Generative Chatbots
New study warns of "botshit" from chatbots: coherent but inaccurate content. Learn to manage epistemic risks for safer AI use.
[Paper] Synaptic Information Storage Capacity Measured With Information Theory (MIT)
MIT study quantifies synaptic information storage using Shannon information theory, revealing 24 distinguishable synaptic strengths in CA1.
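As a quick sanity check on that headline number, 24 distinguishable states map straight to bits, assuming the states are treated as equally likely:

```latex
H = \log_2 24 \approx 4.6 \ \text{bits per synapse}
```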