AIR#245 - Catch Critical Vulnerabilities with AI šŸš€

Hey there!

Here's the latest AI news for today. Enjoy!

Today's top stories

šŸ”„ Using Large Language Models to Catch Vulnerabilities
Google's Big Sleep project used a large language model to discover a critical SQLite vulnerability, which was patched before it shipped in an official release.

AMD Open-Sources 1B OLMo Language Models
AMD launches open-source OLMo language models with 1 billion parameters, enhancing AI customization and performance for developers.

I just tested ChatGPT Search vs. Google ā€“ here are the results
In the author's head-to-head tests, ChatGPT Search beat Google on speed, accuracy, and user experience, delivering clear, ad-free, real-time answers.

The overlooked GenAI use case: cleaning, processing, and analyzing data
GenAI's potential in data cleaning and analysis is overlooked, despite significant enterprise interest and job postings in this area.

AI-Generated Game: Oasis
Oasis is a playable, Minecraft-style world generated frame by frame by an AI model in real time, rather than by a traditional game engine.

Chinese researchers develop AI model for military use on back of Meta's Llama
Chinese researchers linked to the PLA have adapted Meta's Llama model into a military AI tool, ChatBIT, for intelligence tasks.

DAWN: Designing Distributed Agents in a Worldwide Network
DAWN introduces a framework for global collaboration among LLM-based agents, enhancing safety and operational versatility.

Anthropic's Claude AI Chatbot Now Has a Mac App, but It's an Electron Turd
Anthropic's Claude AI chatbot launches a Mac app, but it's criticized as a clunky Electron version lacking native features.

Claude can now view images within a PDF
Claude can now analyze images within PDFs, enhancing its functionality and user experience.

Anthropic has hired an 'AI welfare' researcher
Anthropic hires Kyle Fish as its first 'AI welfare' researcher to explore moral obligations towards AI systems.

What if A.I. Is Good for Hollywood?
A.I. is transforming Hollywood by enhancing visual effects, enabling realistic aging, and streamlining production costs.

AI's "Human in the Loop" Isn't
AI's "human in the loop" fails to ensure accountability, often exacerbating biases and errors instead of preventing them.

ML Foundations: Understanding the Math Behind Backpropagation
Explore the math behind backpropagation in neural networks, implementing it from scratch to classify MNIST digits with Python.
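The chain-rule mechanics the article walks through can be sketched in a few lines of NumPy. This is a minimal illustration, not the article's code: a one-hidden-layer network with sigmoid activations and mean-squared-error loss, trained on XOR instead of MNIST to keep it self-contained.

```python
import numpy as np

# Minimal from-scratch backpropagation: one hidden layer, sigmoid
# activations, MSE loss. XOR stands in for MNIST so the example
# runs without downloading data.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: apply the chain rule layer by layer.
    # sigmoid'(z) = s * (1 - s), where s is the activation.
    d_out = (out - y) * out * (1 - out)    # dL/d(output pre-activation)
    d_h = (d_out @ W2.T) * h * (1 - h)     # dL/d(hidden pre-activation)

    # Gradient descent update
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("final MSE:", float(np.mean((out - y) ** 2)))
```

Scaling this sketch to MNIST mainly means swapping in the 784-dimensional pixel inputs, a softmax output with cross-entropy loss, and mini-batches; the backward pass follows the same layer-by-layer chain-rule pattern.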

Claude 3.5 Sonnet is now available to all Copilot users
Claude 3.5 Sonnet is now in public preview for all GitHub Copilot users, enhancing coding capabilities in Visual Studio Code.

Show HN: Aux Machina ā€“ AI photo generator without complex prompting
Aux Machina simplifies AI image generation, allowing users to create unique visuals effortlessly without complex prompts.

ChatGPT Dreams Up Fake Studies, Alaska Cites Them to Support School Phone Ban
Alaska's education officials cited nonexistent studies invented by ChatGPT to justify a proposed school phone ban, raising concerns about the policy's evidentiary basis.

Read more