Biometrics Privacy in the Cloud Era

By 2022, 81% of smartphones were equipped with biometric scanning, highlighting the convenience such technology offers users. Native biometric systems provide a high level of security by leveraging algorithms and hardware to authenticate users, keeping sensitive data on the device and secure from hacks and leaks. Beyond simply unlocking a phone, biometric authentication also enables quick and seamless access to mobile applications, and this enhanced user experience continues to drive the growing adoption of biometric technology.

However, the next generation of biometric applications is expanding beyond smartphones into the cloud, raising new concerns about biometrics privacy and security. Cloud-based data is vulnerable no matter how reputable the company or government organization handling it, because the data must be decrypted before biometric information can be processed and matched.

Next-Gen Biometric Apps in Action

Amazon’s One app takes images of a person’s palm, converts them into a digital signature, and stores it in Amazon’s cloud. The app user can then pay for groceries at Whole Foods without a credit card or phone, simply by hovering a hand over a sensor at the checkout. A similar option using face scanning is being implemented by JPMorgan Chase for payments at the Whataburger restaurant chain.

Biometrics are not only being used for retail. The TSA has started scanning people’s faces instead of their passports. Travelers who have registered for the Touchless Identity Solution and added their ID photo to the TSA’s cloud-based Travel Verification Service will benefit from a streamlined security process that results in shorter lines and less friction. In another instance of biometric identification, MasterCard has begun offering facial recognition as an option for accessing its user accounts in place of passwords.

The Threat to Privacy

These applications expand biometric adoption to hundreds of millions of users, keeping all of that sensitive information in the cloud. While considerable precautions are taken to secure it, the data must still be decrypted before it can be processed, making it vulnerable. There have already been biometrics privacy breaches, which are especially threatening because biometric identifiers cannot be replaced or changed, and they are stored alongside vital personal information such as full name, date of birth, and height and weight.

The Answer is FHE

There is, however, a viable solution to the biometrics privacy issue on the horizon. Fully Homomorphic Encryption (FHE) is a technology that will change the paradigm by enabling processing directly on encrypted data, thereby ensuring that biometric information is kept private, even in the cloud.
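To make the idea concrete, here is a minimal sketch of computing on encrypted data using the textbook Paillier scheme, a simpler, additively homomorphic cousin of FHE. The parameters are toy-sized and insecure; the point is only to show that arithmetic can be performed on ciphertexts without ever decrypting them:

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Parameters are illustrative and far too small for real security.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen(p=1789, q=2003):
    # p and q are small hardcoded primes, for demonstration only
    n = p * q
    g = n + 1                      # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)           # valid shortcut because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)     # fresh randomness per ciphertext
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    L = (x - 1) // n               # Paillier's L function
    return (L * mu) % n

pub, priv = keygen()
c1 = encrypt(pub, 42)
c2 = encrypt(pub, 58)
# Multiplying ciphertexts adds the underlying plaintexts:
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(pub, priv, c_sum))   # 100
```

The party holding only the ciphertexts never sees 42 or 58, yet produces a valid encryption of their sum. Full FHE extends this idea to both addition and multiplication, which is what enables arbitrary computation, such as biometric matching, on encrypted data.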

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of FHE at cloud scale. This cutting-edge technology will usher Amazon, the TSA, JPMorgan, and many others into a new era of privacy-preserving applications, data collaboration, and security, while enabling all the benefits of biometric scanning.

Another of the Tech Giants Joins the FHE Bandwagon

At the beginning of August, Apple announced that it was releasing its new “Swift-Homomorphic-Encryption”, an open-source package to empower developers and researchers to create privacy-preserving applications within the Apple ecosystem. This announcement makes Apple the latest of the tech giants to signal its commitment to privacy by making Fully Homomorphic Encryption (FHE) tools available to the public.

Even more significantly, the release of such tools indicates recognition among the tech leaders that FHE is a promising privacy-enhancing technology (PET), enabling processing on data while it remains encrypted. Let’s explore what tools are now available from the tech giants, as they hop aboard the FHE bandwagon. 

Software Libraries and Toolkits 

A software library contains programming code in a variety of languages that developers can use to implement FHE within their applications. Some libraries also include basic programs or toolkits which help developers streamline their coding process and avoid rebuilding modules from scratch.  

Tech giants have an incentive to release their own libraries of code optimized for their specific programming or application environments. This code can run on different hardware such as CPUs, GPUs, and TPUs (for AI), allowing software engineers to test new optimizations, compare benchmarks, and assess how well their code enhances privacy across different platforms, all while remaining within the tech giant’s ecosystem.

FHE Libraries: What the Tech Giants Offer 

Google has released its Jaxite cryptographic software library, which was originally developed to accelerate neural network computations and was found to be as effective in accelerating FHE computation. Google is also developing HEIR (Homomorphic Encryption Intermediate Representation) to enable interoperability of FHE programs within its programming environment. 

Microsoft has offered SEAL (Simple Encrypted Arithmetic Library) since 2015 to apply FHE to basic arithmetic operations, but with ongoing research leading to more complex development, the company is hoping to apply SEAL to more advanced applications. 

Intel offers a toolkit that uses various libraries (including Microsoft SEAL) for implementing Homomorphic Encryption in the Intel architecture. 

And, as mentioned, Apple has now joined this list with its Swift-Homomorphic-Encryption. Apple’s offering is not as powerful or scalable as full FHE, in that it does not perform bootstrapping; it is therefore less compute-intensive, but easier to implement.

Compilers 

Several tech giants also offer translators and compilers that convert high-level FHE code into optimized applications capable of operating on encrypted (ciphertext) data. These tools automate much of the process, helping developers build efficient and accurate FHE applications for various tasks. 

FHE Compilers: What’s Available from the Tech Giants 

Companies that offer FHE compilers include Google, whose FHE Transpiler converts C++ programs and TensorFlow machine learning models; Microsoft, whose CHET (Compiler and Runtime for Homomorphic Evaluation of Tensor Programs) implements FHE in Tensor neural network inference tasks; and Amazon, which offers an FHE compiler within SageMaker to enable inference endpoints to operate on encrypted data and generate encrypted results. 

Real World Applications 

The tech giants have not only made FHE accessible to programmers and developers but have also released basic applications for public use that incorporate FHE or similar forms of homomorphic encryption.

Tech Giant FHE Applications 

IBM, which made the initial breakthrough in developing FHE, offers an online demo of performing secure AI and machine learning analytics on encrypted data, but the demo must be scheduled through IBM’s cybersecurity consulting services.

Apple uses its Swift HE in the Live Caller ID Lookup feature in iOS, keeping both the query and the result private, even from the system.

Microsoft has used the code in its SEAL library to process basic Fitbit and Apple Health data, converting raw values such as total runs, distance, and time into metrics such as average speed, while all data (including private user info) remains encrypted. Microsoft Edge also has a Password Monitor feature that privately compares a user’s passwords against a database of known compromised passwords; by using FHE to conduct the check, it keeps the user’s passwords private from Microsoft or any other party while they are being monitored.

Finally, while they are not yet publicly available, Intel has been working with NASDAQ to implement AI-based fraud detection and anti-money laundering applications using FHE calculations. 

Software and Hardware Development 

It is not only the tech giants who are driving the development of FHE. Companies like Zama, Duality, and Fhenix are offering FHE solutions to protect private data on a per-company basis, typically securing, through software, data on the scale of a single server rack. While this approach may not yet scale to the size of cloud or AI data centers, it represents a significant step forward in enhancing data privacy.

One reason why FHE has not been adopted more widely already is the massive increase in computational complexity for some operations. The breakthrough for FHE will be the development of a dedicated FHE hardware processor that can accelerate FHE computation enough to overcome this complexity while scaling for cloud and AI deployments. Efforts toward this have been ongoing for many years, with the most well-known project sponsored by DARPA (Defense Advanced Research Projects Agency) of the US Department of Defense: the DPRIVE (Data Protection in Virtual Environments) initiative.

Microsoft, Intel, and notable others have been working on an FHE hardware accelerator for the past three years as part of the DPRIVE initiative. DPRIVE seeks to bring FHE computation within a factor of ten of unencrypted computation, such that data will be secure in all states across DoD and commercial applications. DPRIVE’s goal was to accelerate FHE computation to 10,000 times the speed of a standard CPU, but to truly achieve cloud and AI scale, the acceleration will need to approach 100,000 times a CPU’s capability.
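These targets can be sanity-checked with a back-of-the-envelope calculation. Assuming, as the factor-of-ten target implies, that FHE on a general-purpose CPU runs roughly 100,000 times slower than the equivalent plaintext computation:

```python
# Back-of-the-envelope check of the DPRIVE numbers cited above.
# The 100,000x baseline is an assumption implied by the stated targets,
# not a measured figure.
FHE_SLOWDOWN = 100_000               # assumed plaintext-vs-FHE gap on a CPU

dprive_speedup = 10_000              # DPRIVE's target acceleration
remaining_gap = FHE_SLOWDOWN / dprive_speedup
print(remaining_gap)                 # 10.0 -> "within a factor of ten"

cloud_speedup = 100_000              # acceleration needed for cloud/AI scale
print(FHE_SLOWDOWN / cloud_speedup)  # 1.0 -> effectively plaintext speed
```

In other words, a 10,000x accelerator meets the DPRIVE goal, while a 100,000x accelerator would close the gap with unencrypted processing almost entirely.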

There are also private semiconductor companies that are working on such a processor, including Cornami, Optalysys, Niobium, and Chain Reaction, which is developing its 3PU™ Privacy Processor. Chain Reaction’s core competency in chip design enables it to accelerate FHE computation with the goal of achieving cloud and AI scale.  

FHE: The Holy Grail of Privacy 

The fact that so much development effort is being placed into FHE, especially by the biggest companies in the tech world, gives credence to the idea that FHE is the “holy grail” of privacy. It is the only post-quantum secure technology for protecting our privacy, and as AI and cloud computing gain a foothold and expand to ever-more industries, only FHE can be counted on to protect our personal data. 

Apple’s latest announcement is further evidence that FHE is no longer merely a future technology, as there are libraries, compilers, basic applications, and limited software solutions that can be used right now for specific tasks and corporate uses. 

However, only once a dedicated processor, such as Chain Reaction’s 3PU™, is developed will FHE processing be accelerated to make it ubiquitous throughout the cloud and AI architectures. At that point, FHE will reach its full potential and become the game-changing technology that the tech giants are betting on it to be. 

Effective Performance: Maximizing Bitcoin Mining Profitability

The Bitcoin mining industry has traditionally been defined by performance, a term that consists primarily of hashrate and power efficiency. Whether a mining company is successful is almost exclusively a function of whether it can extract maximum hashing from its rigs while consuming as little power as possible.

While this is the key metric, the rated performance of a specific machine is not necessarily reflected in the miner’s overall profitability, because many additional factors come into play. For example, two of the most important factors in a mining data center’s bottom line are uptime and operational flexibility.

Especially since the halving event this past April, the added difficulty of mining for Bitcoin has made profitability that much more complex an equation for miners, who must pay close attention to the performance of their entire facility. Hashrate and power efficiency become two variables among many when deciding which rigs should make up their data centers.

Therefore, it is necessary to define a metric that measures the hashrate and efficiency after taking into account all the external factors that can adversely affect performance. Such an evaluation would focus on a data center’s effective performance.
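As a simple illustration, a toy model of effective performance might discount a rig's rated hashrate by its real-world uptime. All figures below are illustrative assumptions, not vendor data:

```python
# Toy model of "effective performance": rated specs discounted by
# real-world uptime. All numbers are illustrative assumptions.

def effective_hashrate(rated_th: float, uptime: float) -> float:
    """Hashrate actually delivered over time, in TH/s."""
    return rated_th * uptime

# Rig A: higher rated hashrate, but poorer uptime (e.g. shutdowns
# at high ambient temperatures, slow maintenance turnaround).
# Rig B: slightly lower rated hashrate, better reliability.
rig_a = effective_hashrate(rated_th=200.0, uptime=0.90)   # 180.0 TH/s
rig_b = effective_hashrate(rated_th=190.0, uptime=0.97)   # 184.3 TH/s
print(rig_a, rig_b)
```

In this sketch the rig with the lower spec-sheet hashrate actually delivers more hashing over time, which is exactly the gap between rated and effective performance.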

Factoring Uptime into Effective Performance

Uptime is easy to understand as a critical factor in the profitability of a data center. Every second that a system is down is an opportunity lost. This has always been a factor for mining companies, but now, mining difficulty has made it such that the poor uptime of the incumbent system providers cannot be ignored.

Among the difficulties miners face that directly affect uptime are rigs that shut down or malfunction at high ambient temperatures, high dead-on-arrival rates, rigs that break down within the first few months of operation, slow maintenance response times, and time lost on reboots. An excellent hashrate is highly attractive, but mining companies know that hashrate is meaningless while the mining rig is down.

Thus, by focusing on maintaining uptime, a miner’s effective performance is much greater, and the mining data center is much more profitable.

Adding Efficiency Through Operational Flexibility

Another major factor in today’s mining industry is operational flexibility. A miner’s ability to design a data center with built-in agility provides the opportunity to improve hashing density, upgrade existing systems (instead of replacing them), and incorporate custom infrastructure using de facto form factors, without needing to overhaul existing deployments. This introduces efficiency, and therefore new profitability, into the data center, and it leverages the unique knowledge and experience that mining companies possess, allowing out-of-the-box planning to overcome the deficiencies of older generations of miners.

Another aspect of operational flexibility can be applied to curtailment, the highly lucrative practice of selling energy back to the grid at times of peak need. Curtailment has become a prominent strategy for adding revenue or offsetting power costs, but it has its downsides. For example, turning miners on and off can reduce their reliability, create imbalance in the data center, and limit the opportunity to participate in more aggressive response-time curtailment programs. But curtailment does not need to be an all-or-nothing proposition. With built-in flexibility in the mining systems, it is possible to take advantage of curtailment opportunities while continuing to hash at a lower rate.
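The trade-off can be illustrated with a toy revenue model for a peak-price window, in which part of the facility's load is sold back to the grid while the rest keeps hashing. All prices and figures are illustrative assumptions, not market data:

```python
# Toy comparison of curtailment strategies during a 4-hour peak-price
# window. All numbers below are illustrative assumptions.

HOURS = 4.0
MINING_REV_PER_MWH = 120.0   # assumed revenue from hashing, $/MWh consumed
GRID_PRICE_PER_MWH = 300.0   # assumed peak grid buyback price, $/MWh
FACILITY_MW = 10.0

def window_revenue(curtailed_fraction: float) -> float:
    """Revenue when `curtailed_fraction` of the load is sold to the
    grid and the remainder keeps hashing at reduced power."""
    sold = FACILITY_MW * curtailed_fraction
    mining = FACILITY_MW - sold
    return HOURS * (sold * GRID_PRICE_PER_MWH + mining * MINING_REV_PER_MWH)

print(window_revenue(0.0))   # keep hashing through the peak: 4800.0
print(window_revenue(1.0))   # full shutdown: 12000.0
print(window_revenue(0.6))   # partial curtailment: 9120.0
```

Full shutdown earns the most in this window, but partial curtailment captures most of the grid revenue while keeping rigs online, avoiding the reliability hits and data center imbalance that come with hard on/off cycling.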

When miners incorporate operational flexibility into their data centers, they maximize their effective performance and significantly enhance their chances at profitability, even in the post-halving era.

Maximize Your Mining

As the rated performance of a mining system in terms of hashrate and power efficiency does not reflect the data center’s ability to turn a profit, large-scale bitcoin miners seek mining solutions that emphasize effective performance.

For example, Chain Reaction’s EL3CTRUM Bitcoin miner is designed to address the factors that maximize effective mining performance. EL3CTRUM was conceived based on input and guidance from miners, with a focus on optimizing reliability and resilience toward enhancing uptime, and introducing operational and data center design flexibility by offering ASICs, hashboards, and systems. The result is improved profitability and reduced total cost of ownership.

There is a lot more to surviving in today’s market than just buying the rigs with the highest hashrate and best power efficiency. In today’s fast-paced, competitive mining industry, the most successful miners are those who use their expertise and experience to take a holistic view of their mining operations, and who ensure that maximum uptime and flexibility are factors toward optimizing the effective performance of their data centers.

Microsoft CoPilot Feature Highlights Data Privacy Concerns

In July 2024, CoPilot, Microsoft’s generative AI chatbot, released a new feature, allowing it to access and process a company’s proprietary data stored on OneDrive. This gives Microsoft’s corporate clients a powerful tool for summarizing and analyzing internal data stored within the company’s servers, and it alters the equation that such customers use when considering the advantages of AI in the cloud versus data privacy.

GenAI tools have been able to analyze corporate data until now as well, but the process was too slow to be viable. Even accessing a single large document could take up to a full day, which meant that more extensive projects, such as learning from a company’s data center or from data in the cloud, were off-limits to such tools.

However, now that there is a tool capable of analyzing such large quantities of enterprise data, companies will need to weigh the benefits of such a platform against the potential risks of exposing the data beyond their proprietary servers.

CoPilot OneDrive vs. Data Privacy

When a company shares proprietary data from OneDrive with CoPilot, the information is automatically incorporated into CoPilot’s machine learning module, potentially exposing sensitive data to the AI. Even though Microsoft’s CoPilot offers commercial data protection, it is not clear what would happen upon ending the contract with Microsoft: will the proprietary data be deleted? If so, what part of it, and would any remain in the public domain?

“Of course, once Microsoft’s AI ‘knows’ the contents of your company’s internal documents, you’ll be less likely to ever sever your ongoing subscription,” states Mark Hachman, senior editor at PCWorld.  Companies will be less likely to terminate their subscription only to further expose that data to a competing AI module.

Microsoft maintains that the CoPilot OneDrive feature uses an exclusive learning module for that company’s data, never sharing it with its external AI apps. This means that CoPilot should be immune from leaks to the world outside the company’s domain. Nonetheless, Microsoft admits that CoPilot is not encrypted end-to-end, meaning it is potentially vulnerable to hacks, and even internally, data privacy could be compromised by exposing data to employees without proper access rights.

Fully Homomorphic Encryption for Private LLM

The most promising innovation on the horizon to secure a company’s privacy from Artificial Intelligence is Fully Homomorphic Encryption. FHE allows applications to perform computation on encrypted data without ever needing to decrypt it. FHE would allow CoPilot to perform analysis exclusively on encrypted data, so that proprietary data is never actually exposed to the learning module.
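The workflow this enables, where a server computes on data it cannot read, can be sketched with textbook RSA, which is homomorphic for multiplication only; real FHE schemes support both addition and multiplication, which is what makes general encrypted inference possible. The parameters are toy-sized and the construction is unpadded, so this is illustration only:

```python
# Client/server sketch of computing on data the server cannot read.
# Textbook (unpadded) RSA is multiplicatively homomorphic; toy
# parameters, never use this construction in practice.

p, q, e = 61, 53, 17                  # toy primes and public exponent
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

def encrypt(m): return pow(m, e, n)   # client side (public key)
def decrypt(c): return pow(c, d, n)   # client side (private key)

# --- client: encrypt inputs before uploading to the server ----------
c1, c2 = encrypt(7), encrypt(11)

# --- server: multiplies ciphertexts, never sees 7 or 11 -------------
c_prod = (c1 * c2) % n

# --- client: decrypt the result -------------------------------------
print(decrypt(c_prod))                # 77
```

In an FHE-backed Private LLM, the model's computation would play the server's role: it operates only on ciphertexts, so the proprietary data is never exposed to the learning module.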

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of Fully Homomorphic Encryption. The processor will enable AI to process data without compromising it, creating a Private LLM (Large Language Model). Corporations will finally be able to benefit from using Artificial Intelligence like CoPilot without the fear that proprietary code and sensitive corporate information will be compromised.

Before J.A.R.V.I.S. Goes Haywire: The Need for FHE in AI Agents

Anyone who has seen the Iron Man movies has probably thought how great it would be to have your own J.A.R.V.I.S., Tony Stark’s personal AI assistant. According to recent reports, many of today’s tech giants are working on very similar AI agents: personal assistants that organize your busy work schedule and handle the tedious activities that reduce productivity.

OpenAI, Microsoft, Google, and others are investing heavily in AI agents as the next generation of AI after chatbots. They are actively developing agent software designed to automate intricate tasks by assuming control over a user’s devices. Imagine never needing to manage payroll, write memos, return messages, or even book your own travel reservations. The AI agent would automatically manage your basic work assignments, leaving you time to focus on more important matters.

AI Agents and Your Data

While this sounds great, companies should tread carefully before allowing such AI agents into their workplaces. By granting an AI agent access to corporate devices, companies introduce significant security vulnerabilities to their proprietary data and that of their clients.

For example, employees could unwittingly expose sensitive information to the AI agent, or they could inadvertently open avenues for unauthorized access to data stored on the shared devices.

In addition, using AI agents for certain tasks, such as gathering public data or booking flight tickets, would carry significant data privacy and security risks. Automated AI agents would have authorization to access and transmit personal and proprietary information, potentially resulting in unwanted data disclosures that cause reputational and financial damage.

In fact, AI agent software has an inherent security flaw at its core: it revolves around a Large Language Model (LLM), the machine learning module of the AI. Every piece of information the agent accesses and every interaction it conducts is necessarily absorbed into its LLM and could be served back by the AI agent to other users.

Fully Homomorphic Encryption Secures AI Agents

To address these security threats, a robust, proactive encryption protocol is needed to safeguard the sensitive data processed by AI agents. The most promising innovation in development to secure privacy from AI agents is Fully Homomorphic Encryption. FHE allows applications to perform computation on encrypted data without ever needing to decrypt it. The AI agent would be unable to store confidential information in its LLM because that private information would always remain encrypted thanks to FHE.

Chain Reaction is at the forefront of the race to design and produce a processor that will enable real-time processing of Fully Homomorphic Encryption. This cutting-edge technology will enable AI agents to serve as loyal aides and personal assistants, while preventing them from exposing proprietary or personal data. Corporate enterprises could then confidently take advantage of artificial intelligence to increase productivity and profits without fear that their code and employees’ sensitive information is being compromised.