Lattica Emerges from Stealth to Solve AI’s Biggest Privacy Challenge with FHE
Lattica’s cloud-based solution uses Fully Homomorphic Encryption (FHE) to run AI-model queries on encrypted data without ever decrypting it, preserving privacy and bolstering security.
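The core idea behind FHE is computing directly on ciphertexts, so the server never sees plaintext. Lattica’s actual scheme is not described here; as a minimal illustration of the principle, the toy sketch below implements the Paillier cryptosystem, which is only *additively* homomorphic (FHE schemes support arbitrary computation), with deliberately tiny, insecure parameters:

```python
# Toy demo of homomorphic encryption: the server adds two numbers it cannot
# read by multiplying their ciphertexts. Paillier is additively homomorphic
# only; full FHE generalizes this to arbitrary computation.
# 16-bit primes -- NOT secure, illustration only.
import math
import secrets

p, q = 65521, 65537            # small known primes (insecure, demo only)
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael's lambda for n = p*q
g = n + 1                      # standard generator choice
mu = pow(lam, -1, n)           # with g = n+1, mu = lam^-1 mod n

def encrypt(m: int) -> int:
    """Paillier encryption: c = g^m * r^n mod n^2 for random r coprime to n."""
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

c1, c2 = encrypt(20), encrypt(22)
# The "server" multiplies ciphertexts; plaintexts are never exposed to it.
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 42    # homomorphic addition: 20 + 22
```

Multiplying Paillier ciphertexts adds the underlying plaintexts, which is the same pattern an FHE-backed service uses at scale: the client keeps the secret key, and the cloud only ever touches encrypted values.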
In this Help Net Security interview, Yinglian Xie, CEO at DataVisor, explains how evolving fraud tactics require adaptive, AI-driven prevention strategies. With fraudsters using generative AI to launch sophisticated attacks, financial institutions must…
This is a truly fascinating paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The basic idea is that AIs can act as trusted third parties:
Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them…
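The abstract’s claim that "even some simple classic cryptographic problems can already be solved with TCME" can be made concrete with Yao’s millionaires’ problem: two parties learn who is richer without revealing their wealth. The stateless function below stands in for the trusted intermediary; in the paper, a capable ML model under input/output constraints plays that role. The names here are illustrative, not from the paper:

```python
# Sketch of the trusted-third-party pattern the TCME paper generalizes:
# a stateless intermediary sees both private inputs but reveals only the
# comparison result, retaining nothing between invocations.

def trusted_compare(alice_wealth: int, bob_wealth: int) -> str:
    """Return only who is richer ('alice', 'bob', or 'tie');
    neither party learns the other's actual wealth."""
    if alice_wealth > bob_wealth:
        return "alice"
    if bob_wealth > alice_wealth:
        return "bob"
    return "tie"

# Each party learns a single output, not the other's input.
print(trusted_compare(1_000_000, 750_000))  # -> alice
```

The point of TCME is that explicit statelessness and information-flow control let a sufficiently capable model provide this guarantee for computations far too large for classical MPC or zero-knowledge protocols.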
NIST just released a comprehensive taxonomy of adversarial machine learning attacks and countermeasures.
Machine learning certification opens many exciting doors. Amazon Web Services (AWS) surveys of IT leaders found that 94% of them agreed that certified team members provide added value above and …
In AI applications, machine learning (ML) models are the core decision-making engines that drive predictions, recommendations, and autonomous actions. Unlike traditional IT applications, which rely on predefined rules and static algorithms, ML models a…
Today, we are discussing Computer Vision, one of the most impactful AI-powered technologies reshaping our…
Discover how artificial intelligence is shaping the future of workplace management, from optimizing efficiency to enhancing employee experience.…
Artificial intelligence (AI) is entering a new era with the rise of agentic AI, a groundbreaking innovation redefining how machines interact with the world and perform tasks. Unlike traditional AI systems that operate within the bounds of human-defined algorithms and instructions, agentic AI stands apart because it can act autonomously, adapt to changing environments, and […]
Navigating the New Frontier: Agentic AI’s Promise and Challenges was originally published on Global Security Review.
By Artem Dinaburg Today we’re going to provision some cloud infrastructure the Max Power way: by combining automation with unchecked AI output. Unfortunately, this method produces cloud infrastructure code that 1) works and 2) has terrible security properties. In a nutshell, AI-based tools like Claude and ChatGPT readily provide extremely bad cloud infrastructure provisioning code, […]
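One common class of "terrible security properties" in generated provisioning code is network rules left open to the entire internet. The post itself shows real examples; as an illustrative stand-in (the function and rule format below are assumptions, not from the post), a tiny audit helper can flag such rules before they are applied:

```python
# Illustrative sketch: flag ingress rules that allow inbound traffic from
# anywhere (0.0.0.0/0 or ::/0) -- a frequent flaw in AI-generated infra code.
# The rule dict format here is hypothetical, for demonstration only.

def flag_open_ingress(rules: list[dict]) -> list[dict]:
    """Return the subset of rules open to the whole internet."""
    open_cidrs = {"0.0.0.0/0", "::/0"}
    return [r for r in rules if r.get("cidr") in open_cidrs]

rules = [
    {"port": 22, "cidr": "0.0.0.0/0"},    # SSH open to the world: flagged
    {"port": 443, "cidr": "10.0.0.0/8"},  # internal-only: fine
]
assert flag_open_ingress(rules) == [{"port": 22, "cidr": "0.0.0.0/0"}]
```

Checks like this are exactly what gets skipped when unchecked AI output is piped straight into automation.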