AI Buzz

AI News and Industry Impact

373 articles · Page 13 of 32

Illustration contrasting a dynamic Liquid Neural Network (LNN) with a static, token-based Transformer architecture.

LFM2-VL Release: Liquid AI's New Architecture for Mobile AI

By Nick Allyn · 3 min read

Liquid AI has announced the release of LFM2-VL, a new family of open-weight vision-language models that challenges the industry’s reliance on the Transformer architecture. This release introduces a model built on Liquid Neural Networks (LNNs), a fundamentally different design inspired by biological nervous systems. By prioritizing continuous-time data processing and computational efficiency, the LFM2-VL models…

Diagram of Kioxia's memory-semantic flash module on a PCIe 5.0 x16 bus for direct CPU access, bypassing the NVMe protocol.

Kioxia vs. CXL: A New Direct-Attached Flash for AI Bottlenecks

By Nick Allyn · 5 min read

Kioxia has unveiled a 5 TB high-bandwidth flash module, a novel class of device delivering 64 GB/s to AI and high-performance computing (HPC) workloads to directly address critical data bottlenecks. Unlike traditional SSDs, this prototype connects flash memory directly to the CPU over a full PCIe 5.0 x16 interface, the same used…

Stylized brain graphic representing CodeSignal's proprietary LLM powering the Cosmo AI tutor interface for job skills.

CodeSignal Cosmo: AI Tutor Built on a Specialized Hiring LLM

By Nick Allyn · 4 min read

In May 2024, CodeSignal announced the launch of Cosmo, an AI-powered interactive tutor designed to help professionals master in-demand job skills. Positioned as the “Duolingo for job skills,” the application enters a competitive market by leveraging a key technical differentiator: a proprietary Large Language Model (LLM). Instead of building on a generalist foundation model like…

A pigeon in a Skinner box linked to a modern AI neural network, illustrating the evolution of reinforcement learning principles.

ChatGPT's RLHF: AI Alignment via Skinner's Psychology

By Nick Allyn · 5 min read

The sophisticated alignment of large language models like ChatGPT, a process central to their safety and utility, operates on a principle first systematically demonstrated nearly a century ago with pigeons. The technique, Reinforcement Learning from Human Feedback (RLHF), reveals a direct lineage from the psychological “shaping” experiments of B. F. Skinner to the core of…

Diagram of Hunyuan-ViT's Vision-Expert MoE, routing visual data to specialized OCR and high-resolution analysis experts.

Tencent Hunyuan-ViT: Vision-Expert MoE Beats GPT-4V Score

By Nick Allyn · 9 min read

Tencent has released technical details for its new large vision model, Hunyuan-ViT, which has demonstrated state-of-the-art performance across a suite of nine major multimodal benchmarks. The model surpasses established rivals like Google’s Gemini Pro Vision and OpenAI’s GPT-4V in specific evaluations, including the complex MathVista benchmark for visual mathematical reasoning. This achievement stems from the…

Diagram of the dots.ocr 1.7B VLM processing a multilingual document into structured JSON data, showing its compact architecture.

dots.ocr 1.7B: SOTA Document AI with Small-Model Efficiency

By Nick Allyn · 4 min read

A new 1.7B-parameter vision-language model named dots.ocr has achieved state-of-the-art (SOTA) performance on complex multilingual document parsing benchmarks, representing a significant development in Intelligent Document Processing (IDP). The model’s architecture and performance signal a strategic shift in the industry, prioritizing specialization and computational efficiency over the massive scale of general-purpose multimodal models like GPT-4V. By…

Diagram of Progressive Curriculum Reinforcement Learning, showing a structured path from simple visual tasks to complex reasoning.

VL-Cogito: Alibaba's Breakthrough in Multimodal AI Reasoning

By Nick Allyn · 4 min read

Alibaba DAMO Academy has announced a significant development in multimodal AI with VL-Cogito, a vision-language model trained using a novel technique called Progressive Curriculum Reinforcement Learning (PCRL). This approach is engineered to directly address a critical, well-documented weakness in even the most advanced AI systems: the gap between pattern recognition and genuine, multi-step reasoning. The…

A conceptual image of a fractured OpenAI logo, symbolizing the internal governance crisis and public distrust ahead of GPT-5's release.

OpenAI's Governance Crisis Overshadows GPT-5's Launch

By Nick Allyn · 6 min read

As anticipation builds for OpenAI’s next-generation model, a fictional Reddit post asking “AMA about GPT-5” only to receive a sharp reply about corporate governance perfectly captures the company’s current reality. The enthusiasm for new technology is now directly challenged by a growing crisis of confidence. Recent high-profile departures from its safety team, coupled with the unresolved…

Conceptual graphic of OpenAI's rumored three-tiered GPT-5 model structure, showing Base, Advanced, and a top Pro tier.

OpenAI's GPT-5 Strategy: A Tiered Model to Fund AGI

By Nick Allyn · 4 min read

Recent analysis of OpenAI’s product code suggests the company is preparing a multi-tiered rollout for its next-generation model, GPT-5. According to a report from Alexey Shabanov of TestingCatalog, the plan points to a three-level system: a base model for free users, an advanced version for ChatGPT Plus, and a new top-tier “Pro” model with “research-level”…

AI security assistant interface highlighting a critical threat, representing Vectra AI's Attack Signal Intelligence technology.

Vectra AI Challenges Microsoft with 'Signal-First' Gen AI

By Nick Allyn · 5 min read

Vectra AI has officially entered the generative AI security arms race, announcing a new AI security assistant for its Vectra AI Platform. The development places the company in direct competition with cybersecurity giants like Microsoft and CrowdStrike, who have already launched their own AI co-pilots. Vectra’s move is a significant strategic play, aiming to…

Conceptual art of a chess match showing OpenAI's strategic pivot against open-source rivals like DeepSeek and Meta.

OpenAI Pivots to Open-Weight in Response to DeepSeek

By Nick Allyn · 4 min read

In a landmark strategic shift, OpenAI has announced the release of two open-weight models, directly entering a competitive arena it once observed from its proprietary fortress. This move is a clear acknowledgment of the mounting pressure from a new generation of powerful and efficient open-source alternatives, most notably DeepSeek-V2, which have demonstrated performance competitive with…

Abstract visualization of a geometric shield deflecting a malicious data point, representing Topological Data Analysis in AI security.

Geometric Defense for AI: TDA Achieves 98% Attack Detection

By Nick Allyn · 5 min read

A recent multimodal AI security breakthrough demonstrates a powerful new defense against sophisticated threats, using a mathematical approach to analyze the fundamental ‘shape’ of data. Researchers have shown that Topological Data Analysis (TDA) can identify malicious inputs designed to fool multimodal AI systems with over 98% accuracy. This development introduces a geometrically grounded security layer that…

© 2026 AI-Buzz. Early access — data updated daily.