
AI & Tech Digest — April 2, 2026 | AlgoCents

Perplexity: AI-Native Browser Enters Public Beta

Perplexity launched a public beta of its AI-native browser, replacing the traditional address bar with a natural language query field and integrating real-time web synthesis directly into every page view.

Why it matters: This is the most direct attack on Google’s Chrome dominance since Firefox’s peak. Perplexity’s approach embeds answer synthesis at the browser layer rather than the search layer — meaning publishers lose referral traffic even if a user “visits” their site in a summary pane. The SEO industry is having a very bad morning.

Source: The Verge


Meta: Llama 4 Released Under Commercial Open Licence

Meta released Llama 4 in three parameter sizes — 8B, 70B, and 405B — under an updated commercial licence that permits enterprise deployment without royalty payments up to 700 million monthly active users.

Why it matters: The 70B model reportedly matches GPT-4o on standard benchmarks at a fraction of the inference cost when self-hosted. This is a direct attack on Anthropic and OpenAI’s API businesses — any enterprise currently paying $15–$20 per million output tokens has a strong economic incentive to evaluate self-hosting within the next quarter.
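The economics here can be sketched with a quick break-even comparison. The $15–$20 per million output tokens figure comes from the item above; everything else (the GPU hourly rate, node size, and throughput numbers) is an illustrative assumption, not a measured benchmark — real self-hosting math depends heavily on utilisation and hardware.

```python
# Rough break-even sketch: API token spend vs. self-hosted inference.
# Only the $15/M-token API price is from the digest; all self-hosting
# parameters below are assumed placeholders for illustration.

def api_cost(output_tokens_millions: float, price_per_million: float = 15.0) -> float:
    """Monthly API spend in dollars at a given per-million-token price."""
    return output_tokens_millions * price_per_million

def self_host_cost(output_tokens_millions: float,
                   gpu_hour_rate: float = 2.5,       # assumed $/hr per rented GPU
                   gpus: int = 8,                     # assumed 8-GPU node for a 70B model
                   tokens_per_second: float = 600.0   # assumed aggregate throughput
                   ) -> float:
    """Monthly cost to generate the same output tokens on rented GPUs."""
    total_tokens = output_tokens_millions * 1_000_000
    gpu_hours_needed = total_tokens / tokens_per_second / 3600
    return gpu_hours_needed * gpu_hour_rate * gpus

volume = 500  # assumed workload: millions of output tokens per month
print(f"API:       ${api_cost(volume):,.0f}/mo")
print(f"Self-host: ${self_host_cost(volume):,.0f}/mo")
```

Under these assumptions the crossover arrives quickly at enterprise volumes, which is the incentive the item describes — though idle GPU time, ops staffing, and peak-load provisioning all push the real self-hosting figure upward.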

Source: Meta AI Blog


Microsoft: Copilot Autonomous Agent Mode Now Generally Available

Microsoft rolled out autonomous agent mode for Microsoft 365 Copilot to all enterprise customers, allowing the AI to take multi-step actions inside Excel, Outlook, Teams, and SharePoint without human confirmation at each step.

Why it matters: The productivity story just shifted from “Copilot assists” to “Copilot executes.” Enterprise security teams are scrambling to understand the blast radius of an autonomous agent with access to full Microsoft 365 tenancy permissions. Expect a wave of governance policy updates over the next 30 days.

Source: Microsoft Blog


Security: GPT-4o System Prompt Extraction Exploit Published

A researcher published a proof-of-concept exploit that reliably extracts a significant portion of GPT-4o’s hidden system prompts via a structured multi-turn adversarial conversation technique.

Why it matters: System prompt confidentiality is the primary IP protection mechanism for thousands of commercial AI products built on GPT-4o. The exploit — which works even with instruction-following safeguards active — exposes competitive secrets in deployed applications. OpenAI confirmed it is investigating and has pushed a partial mitigation, but has not claimed the issue is fully resolved.

Source: Hacker News


Australia: AI Safety Institute Opens in Canberra

The Australian AI Safety Institute officially opened its Canberra headquarters today, staffed initially by 45 researchers drawn from CSIRO, ANU, and UNSW, with a mandate to evaluate frontier models for national security risk.

Why it matters: Australia becomes the fifth country with a dedicated government AI safety evaluation body, following the UK, USA, Japan, and Singapore. The institute’s first formal evaluation report — which will cover Llama 4, Claude 4 Opus, and Gemini Ultra 2 — is due to be published by 30 June. Findings will inform procurement policy for all federal government AI tool deployments.

Source: Department of Industry, Science and Resources